Wednesday, May 30, 2007

Venture Capitalists embrace Command Economy in preference to Free Market!

A recent article, "Interesting Times for Distributed DataCentres", by Paul Strong (eBay Distinguished Research Scientist) makes a number of interesting points:
  • For Web 2.0 services to scale, you MUST back them onto massively horizontally scaled processing environments.
  • Most enterprise datacentre environments are moving towards, or could already be considered, primordial Grid-type architectures.
  • What is really missing is the Data Centre MetaOperating System, to provide the resource scheduling and management functions required.
Whilst these arguments are correct, and highlight a real need, the industry and VC response seems entirely inappropriate.

Whilst VCs and the major systems vendors are happily throwing money into expounding the virtues of loosely coupled business models enabled by Web2.0 and all things WS-SOA, they also, somewhat perplexingly, continue to invest in management / virtualization / infrastructure solutions that drive tight couplings through the infrastructure stack. Examples include data centre "virtualization" or, as per my previous blog entry on the Complexity Crisis, configuration / deployment management tools.

Hence, industry investment seems to continue to favour the technology equivalent of the "command economy", in which the next generation of distributed Grid data centre is really just one more iteration on today's: a central IT organisation controls, manages and allocates IT resource through a rigid, hierarchical command structure. The whole environment is viewed as a rigid system to be centrally controlled at each layer of the ISO stack; an approach that continues the futile attempt to make distributed environments behave like mainframes!

What is actually needed is a good dose of Free Market Economics!
  • Business services dynamically compete for available resources at each point in time,
  • Resources may come and go - as they see fit!
  • Infrastructure and systems look after their own interests, and optimise their behaviours to ensure overall efficiency within the business ecosystem.
Successful next-generation MetaOperating Systems will heavily leverage such principles at the core of their architectures!
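To make the market analogy concrete, here is a minimal sketch of market-based resource allocation: each scheduling round, business services bid for compute resources and each resource goes to the highest bidder. The class and method names (`MarketScheduler`, `Bid`, `allocate`) are purely illustrative assumptions, not part of Infiniflow or any real product API.

```java
import java.util.*;

// Toy "free market" scheduler: services bid for resources each round,
// and each resource is allocated to the highest bidder.
public class MarketScheduler {

    // A service's offer price for one unit of a resource.
    public record Bid(String service, double price) {}

    // Allocate each resource to its highest-bidding service.
    // Returns resource name -> winning service.
    public static Map<String, String> allocate(List<String> resources,
                                               Map<String, List<Bid>> bidsPerResource) {
        Map<String, String> allocation = new LinkedHashMap<>();
        for (String resource : resources) {
            bidsPerResource.getOrDefault(resource, List.of()).stream()
                .max(Comparator.comparingDouble(Bid::price))
                .ifPresent(winner -> allocation.put(resource, winner.service()));
        }
        return allocation;
    }

    public static void main(String[] args) {
        List<String> resources = List.of("node-1", "node-2");
        Map<String, List<Bid>> bids = Map.of(
            "node-1", List.of(new Bid("trading-app", 5.0), new Bid("batch-report", 1.0)),
            "node-2", List.of(new Bid("batch-report", 2.0)));
        // trading-app outbids batch-report on node-1; node-2 is uncontested.
        System.out.println(allocate(resources, bids));
    }
}
```

A real market would of course add price discovery and re-auctioning as demand shifts; the point is simply that allocation emerges from competing bids rather than from a central plan.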

You simply cannot beat an efficient Market!
A new survey posted on GRIDtoday highlights the risks associated with infrastructure complexity. Interesting highlights include:
  • Each hour of downtime costs Fortune 1000 companies in excess of $300,000, according to a third of survey respondents.
Of course, depending on the specific industry, these figures could be much larger! Everyone tends to focus on availability and scaling issues for the new Internet-based companies (Google, Yahoo, Amazon, eBay). However, if you want to see real risk, consider the impact on some of the core systems that support the global banking and financial systems.
  • Troubleshooting a problem can take more than a day, according to a third of survey respondents.
So if these are the same organisations losing $300,000 an hour, the figures soon mount up: a single 24-hour outage at that rate is over $7 million.
  • Change management for Fortune/FT 1000 companies occupies 11 full-time people!
  • Installation and configuration of core applications is a major resource sink, taking 4 days to configure a complete application infrastructure stack.
The report then goes on to justify change management / configuration management products; the implication being that, to address their complexity issues, these Fortune/FT 1000 companies need to purchase and configure yet more enterprise software?

So layering complexity upon complexity!

I wonder, just what is the production impact if, after all this automation, you lose the systems that are doing the automation and configuration? I suspect recovery would take significantly longer than one working day!

The truth of the matter is that enterprise systems, including those based upon the latest ESB, Grid and WS-SOA marketectures, are the root cause of the explosive increase in complexity.

Each of these approaches implicitly assumes that:
  • The compute resource landscape is static,
  • Software functionality is static
  • Provisioning is thought of as a one time event, and
  • Failure is treated as an exception.
Whereas in reality:
  • The compute resource landscape is dynamic,
  • Software functionality needs to evolve and adapt,
  • Provisioning is an ongoing process, driven by continual re-optimisation against the shifting compute landscape and by recovery from failure.
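The dynamic view above amounts to a reconciliation loop: every cycle, observed state is compared against desired state, and any shortfall (for example, instances lost to a node failure) is simply re-provisioned. The sketch below illustrates the idea under that assumption; the `Reconciler` class and its action strings are hypothetical, not any real product's API.

```java
import java.util.*;

// Provisioning as an ongoing process, not a one-time event: each cycle,
// compare desired state with observed state and emit corrective actions.
// Failure recovery falls out naturally - a lost instance is just a gap
// between desired and running counts.
public class Reconciler {

    // desired: service -> instance count wanted
    // running: service -> instance count currently observed
    // Returns the provisioning actions needed this cycle.
    public static List<String> reconcile(Map<String, Integer> desired,
                                         Map<String, Integer> running) {
        List<String> actions = new ArrayList<>();
        for (Map.Entry<String, Integer> e : desired.entrySet()) {
            int want = e.getValue();
            int have = running.getOrDefault(e.getKey(), 0);
            for (int i = have; i < want; i++) {
                actions.add("start " + e.getKey()); // under-provisioned: start more
            }
            for (int i = want; i < have; i++) {
                actions.add("stop " + e.getKey());  // over-provisioned: scale down
            }
        }
        return actions;
    }

    public static void main(String[] args) {
        // A node failure has taken "pricing-service" from 3 instances to 1:
        System.out.println(reconcile(Map.of("pricing-service", 3),
                                     Map.of("pricing-service", 1)));
    }
}
```

Run continually, such a loop treats failure as routine input rather than as an exception to be handled by hand.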
So how do these Fortune/FT 1000 companies dig themselves out of their current Complexity Crisis?

By building the correct IT foundations for their businesses! Fortune 1000 companies need to implement enterprise-wide solutions in which configuration, adaptation and recovery are core design features: systems configure, deploy and maintain themselves as part of what they do (by way of an example, see Infiniflow)! Such solutions will also heavily leverage industry trends towards modularization via OSGi & SCA.

Whether you are the CIO of a global bank, a gaming company or a telecoms company, once the correct technology foundations have been put in place - no easy task - significant OPEX savings WILL follow. However, take the easy route - fail with the foundations, avoid necessary change - and no amount of management, configuration or deployment software band-aid will save you!

Saturday, May 12, 2007

It's been a hectic month, with Paremus working on a number of projects, including the SSOA demonstrator for DODIIS, our product release, and getting ready for JavaOne!

[Photo: the Paremus stand at JavaOne, with the rack of Mac Minis visible to the far right]
As can be seen to the far right of the above photo, Paremus shipped some 16 Mac Minis to JavaOne to provide real-time demonstrations of multiple distributed SCA / OSGi based systems running across an Infiniflow Enterprise Service Fabric! Each SCA system was dynamically instantiated, and we demonstrated the isolation and recovery capabilities of the Service Fabric by failing compute resources (well, actually, visitors to the stand insisted on pulling the cables) - without impact to the running systems!

I was on stand duty for much of the time, so didn't get first-hand experience of the presentations. However, feedback indicated that the usual keynote presentations were, well, as expected; but that both the OSGi and SCA standards are at a "tipping point", with a significant increase in delegate interest and vendor activity relative to last year.

In addition to the usual conversations about OSGi, SCA, WS & Jini, those passing the Paremus stand may have overheard conversations concerning genetic algorithms, the importance of complex adaptive systems (CAS), and even the apparent failure of string theory - the latter, I hasten to add, having nothing to do with Infiniflow's architecture :)


I'm almost looking forward to JavaOne 2008!