Saturday, April 05, 2008

Impaled on the Horns of an OPEX Dilemma

The finance industry is clearly having a tough time at present. As losses mount, CEOs and CIOs are increasingly scrutinizing the costs of doing business. One interesting metric is the cost of running a single production application: $1,000,000 per annum! Multiply that by the many thousands of applications typically used in a large finance house, and operational costs rapidly exceed the billion-dollar-per-annum mark.

Why is this?

After all, over the last few years the finance industry has steadily driven down the price of enterprise software, to the point that an application server may now be charged at a few hundred dollars per node. Likewise, basic networking, servers and storage are cheaper than at any time in the past.

The problem isn't the cost of the raw materials; rather, it is that these organizations have built increasingly complex environments which must be maintained by an army of IT staff.

I'm probably not far off the mark in suggesting that 80% of the annual cost of each application relates to the support and development staff required to keep it running.

And the choices available to the CxO?

  • Use Cheaper Resource: Ship operations out to China, India or Mexico! While attractive on paper as a quick fix, there is a catch: wages tend to normalize as time progresses, with the cost of an initially cost-effective workforce rising to the point the market will bear. Indeed, this has a name: free-market dynamics. Hence, within a reasonable timeframe (~5 years) the cost advantage will have evaporated, while the company is still left with a complex, manually intensive operational environment. Traditional third-party outsourcing, of which several failed examples exist from the late 1990s / early 2000s, falls into this category. This approach does nothing to address the root cause of spiraling operational costs: complexity! In short, a strategy guaranteed to fail in the medium to long term.
  • Reduce the Number of Applications: If cost scales with the number of applications, simply forcing down the number in use will initially reduce OPEX. While a reasonable strategy for some, the financial services industry is highly adaptive and constantly needs to evolve its applications and services. Hence, a "no new applications" policy merely results in additional functionality being bolted onto existing systems, increasing the complexity, and the associated costs, of the applications that remain.
  • Use Technology to Reduce Headcount: The IT industry has collectively failed to provide real solutions here! Despite a flood of automated run-book, monitoring, configuration management, package / OS deployment and virtualization management products, humans are still very much "in the loop", directly concerned with every aspect of every software service in the runtime environment. Environments are more complex than ever!

So what is stopping the IT industry from developing the right products? Simply put, the industry continues to fail to realize that automating what already exists is not sufficient. A fundamental, radical change in perspective on how distributed systems are built and maintained is needed to address the complexity crisis organizations now face. Funnily enough, this is exactly what Infiniflow has been developed to address.

And what of the users of the technology? Several things hold them back:
  • The fear of change!
  • The linear relationship between status and managed headcount.
  • And most importantly, a severe shortage of sufficiently talented engineers and architects who have the vision and determination to drive such changes through their organizations (Paremus refers to these rather special individuals as Samurai).
So if you are a frustrated Samurai, contact us at Paremus; we can introduce you to many like-minded individuals :)

Meanwhile, if you are a CEO or CIO with the desire to tackle the root causes of your organization's IT complexity, why not drop me an e-mail and we'll explain how we might be able to help; in particular, you may find the dramatic impact Infiniflow has on operational cost of great interest.

2 comments:

Dimitar said...

There is one more factor: determinism. As inefficient as they are, most current SCM practices revolve around controlling change.

Dynamic self-provisioning platforms like Infiniflow make it more difficult to be sure what exactly is running where.

Yes, I know it shouldn't matter in principle, but right now it does. For example, these days compiler optimizations and SQL query plans are assumed to just work, but looking 15 years back, they were neither as accepted nor as robust (at least from what I hear).

I am sure that runtimes like Infiniflow and the Spring Application Platform are the future, but I also think it will take us some time to get there. The software is still evolving, the best practices are not here yet, and right now, even though it is easy to write SCA-wired, Jini-distributed, OSGi-loaded POJOs, actually understanding the 'magic' that makes them work still requires qualified developers.
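
For concreteness, here is roughly what the 'easy' part looks like: a minimal sketch of an OSGi-loaded POJO, using the standard org.osgi.framework API (GreetingService is just a made-up example interface):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // A plain Java interface and implementation - nothing framework-specific.
    interface GreetingService {
        String greet(String name);
    }

    class GreetingServiceImpl implements GreetingService {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // The activator is the only OSGi-aware piece: it hands the POJO to the runtime.
    public class Activator implements BundleActivator {
        private ServiceRegistration registration;

        public void start(BundleContext context) {
            registration = context.registerService(
                    GreetingService.class.getName(), new GreetingServiceImpl(), null);
        }

        public void stop(BundleContext context) {
            registration.unregister();
        }
    }

Writing that is the easy bit; understanding why the service appeared, moved or vanished at runtime is the 'magic' part.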

Richard Nicholson said...

Dimitar,

First, apologies. It's ages since I've logged into my blog :(

If the environment is truly dynamic, and all inter-relationships and dependencies are communicated to the runtime, then absolute locality really is irrelevant.
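
To make that concrete, here is a minimal sketch of a consumer that declares its dependency to the runtime rather than to a location, using the standard OSGi ServiceTracker API (PricingService is a hypothetical interface; note that no host, address or node name appears anywhere):

    import org.osgi.framework.BundleContext;
    import org.osgi.util.tracker.ServiceTracker;

    // A hypothetical service interface this consumer depends upon.
    interface PricingService {
        double price(String instrument);
    }

    public class Consumer {
        private final ServiceTracker tracker;

        public Consumer(BundleContext context) {
            // Declare the dependency by type; the runtime decides what
            // satisfies it - and where that implementation actually runs.
            tracker = new ServiceTracker(context, PricingService.class.getName(), null);
            tracker.open();
        }

        public double priceOf(String instrument) {
            PricingService service = (PricingService) tracker.getService();
            if (service == null) {
                throw new IllegalStateException("No PricingService currently available");
            }
            return service.price(instrument);
        }
    }

The consumer never names a machine; if the runtime re-provisions the implementation elsewhere, the tracker simply hands back the new instance.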

I'd also suggest that SCA-wired, Jini-distributed, OSGi-loaded POJOs are not that easy. Indeed, Infiniflow is the only platform with this capability, and we laid out the concepts in 2005!

The whole point of distributed runtimes like Infiniflow (I don't classify the Spring Application Platform in the same category) is that the average developer doesn't need to be exposed to, or concerned with, the underlying (and hard) distributed engineering problems.

Of course, there will always be a few who want to build their own. Diversity is good!


Cheers

Richard