Thursday, May 15, 2008

"Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats"

Howard Aiken (1900-1973)

Thursday, May 08, 2008

"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident."

Arthur Schopenhauer

Saturday, April 05, 2008

If only Newton were Apache....

Whilst I'm on a roll...

I continue to get asked by all sorts of parties, "Why is Newton using a GPL (actually AGPL) open source license? Why isn't it Apache?" Less frequently, the same question in another guise - "Why did Paremus set up codeCauldron rather than join Apache or Eclipse?"

This question usually emanates from one of three sources:

  • Individual Developer: Usually very excited about Newton and its capabilities; typically a Samurai (Paremus likes Samurai!). The conversation goes - "If only Newton were Apache, I could deploy it in production without sign-off". The Samurai sees the value in the product, sees the vision - and believes (but cannot ensure) that the organization will do the right thing; the right thing being to pay for support and consultancy services from the company that developed the product. Unfortunately I've seen exactly the opposite behavior from a number of organisations! Many VCs believe, without question, in the diffusion model; i.e. "Give it away and the revenue will roll in" - yes, those very words have been used. My response: I continue to watch SpringSource and MuleSource with much interest, but I predict that the VCs in question are in for a shock!
  • The Small SI: Usually wants Newton capabilities upon which to build business-specific services, but does not want to pay for the privilege. Newton is unique in its capabilities at present - so either the SI must make its own derivative code GPL, develop the equivalent of Newton's capabilities itself, or enter a commercial relationship with Paremus. If only Newton were Apache!
  • Tier 1 Technology Vendors: Complain - "We (who shall remain nameless) cannot officially look at Newton code because it's GPL." The implication being: we are not interested in a commercial relationship with you; rather, we want to see what you Paremus folks have, try to guess where you are going, and then do it ourselves. If only Newton were Apache!
So whilst capable of generating a large footprint, the Apache license model is, I believe, a significant barrier for small innovative companies wanting to build a financially successful business, as:
  • It's easy to give something away. Trying to charge for usage a priori - much more difficult! Again, I continue to watch SpringSource and MuleSource with much interest!
  • The giants of the software industry, after the Linux/JBoss experience, have become quite effective at controlling open source communities and neutralizing potential threats to the status quo; just my paranoid observation.
Perhaps Microsoft were correct all along?

That said, companies with closed source / proprietary software products seem to make the same mistake. The market is tough, developers opt for "free open source" solutions, and the ROI isn't obvious. So give the product away based on some criteria - to customers with revenue below a certain level, or with limited functionality / scale in the free edition. Later - when the customer exceeds this boundary - we have them by the balls! (cue evil laugh). A viable long-term business strategy?

Again, I have my doubts.
Impaled on the Horns of an OPEX Dilemma

The finance industry is clearly having a tough time at present. As losses mount, CEOs and CIOs are increasingly scrutinizing the costs of doing business. One interesting metric is the cost of running a single production application: $1,000,000 per annum! Take the many thousands of applications typically used in a large finance house, and operational costs rapidly exceed the billion-dollar-per-annum mark.

Why is this?

Surely, over the last few years the finance industry has increasingly driven down the price of enterprise software, to the point that an application server may now cost only a few hundred dollars per node. Likewise, basic networking, servers and storage are cheaper than at any time in the past.

The problem isn't the cost of the raw materials, but rather the fact that these organizations have built increasingly complex environments which must be maintained by an army of IT staff.

I'm probably not far off the mark in suggesting that 80% of the annual cost of each application relates to the support and development staff required to keep the application running.

And the choices available to the CxO?

  • Use Cheaper Resource: Ship operations out to China, India or Mexico! While attractive on paper as a quick fix, there is a catch. Wages tend to normalize as time progresses, with the cost of an initially cost-effective workforce rising to the point that the market will bear. Indeed, it has a name: "free market dynamics". Hence, within a reasonable timeframe (~5 years) the cost advantage will have evaporated; meanwhile the company is still left with a complex, manually intensive operational environment. Traditional third-party outsourcing - of which several failed examples exist from the late 1999 / early 2000 period - falls into this category. This approach does nothing to address the root cause of the spiraling operational costs – complexity! In short - a strategy guaranteed to fail in the medium / long term.
  • Reduce the Number of Applications: If the cost relates to the number of applications, simply forcing down the number of applications in use will initially reduce OPEX costs. Whilst a reasonable strategy for some, the Financial Services industry is highly adaptive and constantly needs to evolve its applications and services. Hence, a "no new applications" policy merely results in additional functionality being bolted onto existing systems - increasing the complexity and associated costs of the remaining applications.
  • Use Technology to Reduce Headcount: The IT industry has collectively failed to provide real solutions to this! Despite a flood of automated run-book, monitoring, configuration management, package / OS deployment and virtualization management products, humans are still very much "in the loop", directly concerned with all aspects of every software service in the runtime environment. Environments are more complex than ever!

So what is stopping the IT industry from developing the right products? Simply that the industry continues to fail to realize that automating the existing is not sufficient. A fundamental, radical change in perspective with respect to how distributed systems are built and maintained is needed to address the Complexity Crisis organizations now face. Funnily enough, this is exactly what Infiniflow has been developed to address.

And the users of the technology?
  • The fear of change!
  • The linear relationship between status and managed headcount.
  • And most importantly, a severe shortage of sufficiently talented engineers and architects that have the vision and determination to drive such changes through their organizations (Paremus refers to these rather special individuals as Samurai).
So if you are a frustrated Samurai, contact us at Paremus - we can introduce you to many like-minded individuals :)

Meanwhile, if you are a CEO / CIO with the desire to tackle the root causes of your organization's IT complexity - why not drop me an e-mail, and we'll explain how we might be able to help; specifically, you may find the dramatic impact that Infiniflow has on operational cost of great interest.

Monday, February 04, 2008

Henry Ford and Software Assembly

Having used the Henry Ford analogy on numerous occasions, it was interesting to read a recent JDJ article by Eric Newcomer.

The Henry Ford analogy to software goes something like this (quoting Eric)...

"The application of the Ford analogy to software is that if you can standardize application programming APIs and communications protocols, you can meet requirements for application portability and interoperability. If all applications could work on any operating system, and easily share data with all other applications, IT costs (which are still mainly labor) would be significantly reduced and mass production achievable."

Eric suggests that despite the software industry having pursued software re-usability for years, these activities have failed. Whilst the Web Services initiative has, to some degree, increased interoperability, it has failed to deliver code re-use. Eric concludes that the whole analogy is wrong, and that rather than trying to achieve code re-use, the industry needs to focus on sophisticated tools to import code, check it for conformance and ready it for deployment within the context of a particular production environment.

This article triggered a number of thoughts:
  • Did the industry seriously expect WS-* to usher in a new era of code re-use? Surely Web Services are a way to achieve loose coupling between existing - and so, by definition, stove-piped and monolithic - applications? I guess the answer here partly depends on the granularity of re-use intended.
  • Perhaps JEE should have fared better? Generic or re-usable business logic that could be deployed to a general-purpose application server seems like just the thing! However, expensive bloated JEE runtimes, and the associated complexity and restrictions, prompted the developer migration to Spring.
Do these experiences really point to a fundamental issue with the idea of code re-use, or are they an indication that the standards developed by the IT industry were simply not up to the job?

If the latter, then what is actually needed? Clearly:
  • It must be significantly simpler for developers to re-use existing code than to cut new code for the task in hand - thus implying (a minimal sketch follows this list):
  1. The ability to rapidly search for existing components with the desired characteristics.
  2. The ability to rapidly access and include the desired components into new composite applications.
  3. Component dependency management must be robust and intuitive, both during the development cycle and over the lifetime of the application in production.
  • The runtime environment must be sufficiently flexible and simple that it offers little or no resistance to developers and their use of composite applications.
  • In addition to insulating applications from resource failure and providing horizontal scale, the runtime must also track all components that are in use, and the context (the composite system) in which they are used.
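To make the "search and include" points concrete, here is a minimal sketch - plain OSGi rather than anything Newton or Infiniflow specific, with the service interface name and the property names invented purely for illustration - of locating an existing component by its published characteristics:

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.InvalidSyntaxException;
    import org.osgi.framework.ServiceReference;

    public class ComponentFinder {
        // Ask the service registry for components matching the desired
        // characteristics; interface name and properties are hypothetical.
        public Object findPricingComponent(BundleContext context) throws InvalidSyntaxException {
            ServiceReference[] refs = context.getServiceReferences(
                    "com.example.pricing.PricingService",
                    "(&(transport=jms)(region=emea))");
            if (refs == null || refs.length == 0) {
                return null; // nothing suitable is currently available
            }
            // Bind the best match into the new composite application.
            return context.getService(refs[0]);
        }
    }

The interesting part is not the API itself, but that the registry - rather than the developer - answers the "what already exists?" question.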

I'd argue that, unlike previous IT attempts, current industry initiatives are clearly moving in the right direction:
  • The OSGi service platform gives us a vendor-neutral industry standard for fine-grained component deployment and life-cycle management. Several excellent OSGi open source projects are available, namely Knopflerfish, Apache Felix and Eclipse Equinox (a minimal registration sketch follows this list).
  • Service Component Architecture (SCA) provides a vendor neutral industry standard for service composition.
  • Next-generation runtime environments like Infiniflow (itself built from the ground up using OSGi and SCA) replace static, stove-piped Grids, Application Servers and ESBs with cohesive, distributed, adaptive and dynamic runtime environments.
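And the other side of the coin, publishing a re-usable component into the OSGi service registry with its characteristics attached as service properties - again a minimal sketch, with the property names invented and java.lang.Runnable used as a stand-in interface simply to keep the example self-contained:

    import java.util.Hashtable;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class ReusableComponentActivator implements BundleActivator {
        public void start(BundleContext context) {
            // Describe the component so that others can find it by its characteristics.
            Hashtable<String, Object> props = new Hashtable<String, Object>();
            props.put("transport", "jms");
            props.put("region", "emea");
            context.registerService(Runnable.class.getName(), new Runnable() {
                public void run() {
                    // the re-usable behaviour lives here
                }
            }, props);
        }

        public void stop(BundleContext context) {
            // Services registered by this bundle are unregistered automatically
            // when it stops - life-cycle management comes from the framework.
        }
    }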
But are these trends sufficient to usher in the new era of code re-use?

Possibly - possibly not.

Rather than viewing code re-use simply in terms of "find - compose - deploy" activities, we perhaps need one more trigger: the development framework itself should implicitly support the concept of code re-use! This message was convincingly delivered by Rickard Oberg in his presentation on the qi4j project at this year's JFokus conference.

But what would be the impact if these trends succeed? Will the majority of organizations build their applications from a common set of tried-and-tested, shrink-wrapped components? To what extent will third-party components be common across organizations, or in-house developed components be common across systems within an organization?

The result will almost certainly be adaptive radiation; an explosion in re-usable software components from external software companies and internal development groups. As with any such population, a power law can be expected in terms of use, and so re-use; a few components will be used by the vast majority of systems, whilst many components will occupy unique niches, perhaps adapted or built to address the specific needs of a single specialist application in a single organization.

Going back to the Henry Ford analogy: whilst standardization of car components enabled the move to mass production, this was not, at least ultimately, at the expense of diversity. Indeed, the process of component standardization, whilst initially concerned with the production of Ford Model Ts (black only), resulted in cars available for every lifestyle, for every budget and in any colour!

Thursday, January 17, 2008

LiquidFusion - Any Takers?

Just after I found out about Sun's purchase of MySQL, the news about Oracle's acquisition of BEA filtered through.

Can this be anything other than consolidation within an aging market sector? An indication that the "one size fits all" monolithic messaging middleware / application server era is in its twilight years?

Perhaps OSGi and SCA will, in due course, be seen as key technology enablers allowing the shift away from costly monolithic middleware?

Wednesday, January 16, 2008

Sun No Longer Afraid?

I've just been contacted by an old friend asking for my thoughts w.r.t. Sun's MySQL announcement. Certainly news to me! Yet a quick check of Sun's front page and Jonathan's blog, just to be sure, confirms the story.

So my initial response was surprise. Sun had previously purchased an excellent database technology and then proceeded to silently kill it by burying it behind mediocre middleware. Anyone remember Clustra? True, Clustra was a new market entrant, whereas MySQL has massive market adoption.

My interpretation was always that Sun were too concerned about the Oracle relationship - and specifically the Oracle-on-SPARC business line - to risk having any in-house product that remotely looked like a relational database.

If true, that would imply that revenue stream is no longer as important as it used to be?

Whatever the reason, it seems to me like a bold and interesting move. Far more so than the StorageTek acquisition (I still don't understand that one). This also follows on from Lustre; to my mind an interesting, technology-motivated acquisition.

Monday, January 07, 2008

Complexity - Part II: It all depends on the Question you ask!

I previously argued that the apparent complexity of a system varies dramatically with respect to the type of question you ask. The answer to one question may make a given system seem inordinately complex, yet ask another similar question, from a slightly different perspective, and the same system appears very simple.

Hence, it is the question that dictates where the line is drawn separating hidden and exposed system complexity.

Assume I want to deploy a set of services to an Enterprise. These services have specific runtime requirements and interdependencies. The usual question asked is...
  • "What compute resources do I have, what are their individual configurations and capabilities?"
The response to which is an extensive list of resources and associated configurations/capabilities that now needs analyzing. Like the positions of nodes in a lattice, the initial question, and the subsequent answer, expose too much unnecessary information!

In contrast, if I ask,
  • "Out of the cloud of potential resource which may or may not exist, what sub-set resources currently satisfies the following conditions?"
The response requires no further thought. Whilst I may never know the configuration of everything, I'll always know whether there are resources capable of servicing my stated requirements. As the response to the question is simple, and requires no effort on my part, I have no issue in re-asking the question as many times as required; this is essential, as the one thing I do know is that the environment WILL change!!
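A back-of-the-envelope illustration of the difference, borrowing standard OSGi filter syntax rather than any Infiniflow API - the capability names below are invented for illustration. The requirement is stated once, declaratively, and the cheap question can be re-evaluated as often as the environment changes:

    import java.util.Hashtable;
    import org.osgi.framework.Filter;
    import org.osgi.framework.FrameworkUtil;
    import org.osgi.framework.InvalidSyntaxException;

    public class ResourceQuery {
        public static void main(String[] args) throws InvalidSyntaxException {
            // "What subset of resources currently satisfies these conditions?"
            Filter requirement = FrameworkUtil.createFilter(
                    "(&(arch=x86_64)(memory.gb>=8)(zone=production))");

            // A resource advertises a simple capability description...
            Hashtable<String, Object> capabilities = new Hashtable<String, Object>();
            capabilities.put("arch", "x86_64");
            capabilities.put("memory.gb", Integer.valueOf(16));
            capabilities.put("zone", "production");

            // ...and the same question can be re-asked whenever the world changes.
            System.out.println("candidate resource? " + requirement.match(capabilities));
        }
    }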

Re-visiting the lattice analogy.

Because it is simple to measure emergent macroscopic properties such as pressure, temperature and volume, it is easy to re-measure these and so deduce the relationships between them over time - e.g. Boyle's Law. This would have been a significant challenge if the microscopic quantities of position, mass and velocity of each particle had been used instead!

Abstraction versus Virtualization?

Resource abstraction is different from resource virtualization. Whilst the latter attempts to represent a physical resource with a "virtual" equivalent, this equivalent emulating the attributes of the underlying entity, resource abstraction masks the complexity of the entity (physical or virtual), representing the resource via a simplified description. Resource abstraction and resource virtualization are orthogonal / complementary and interdependent.
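A trivial sketch of what is meant by abstraction here - an invented interface, not any particular product's API: the consumer sees a simplified description and a small set of operations, and neither knows nor cares whether a physical machine or a virtual one sits underneath.

    import java.util.Map;

    // The consumer of the abstraction only ever sees this.
    public interface ComputeResource {
        // A simplified, declarative description of what the resource offers
        // (e.g. arch, memory, zone) - not a full emulation of the machine.
        Map<String, Object> capabilities();

        // Ask the resource to host a unit of work; how it does so is hidden,
        // whether it is a physical host, a virtual machine, or something else.
        void deploy(String artifactUri);
    }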

To Conclude
  • As systems become increasingly distributed and composed of an ever-increasing number of moving parts, we need to step back from attempting a microscopic description of the environment and instead describe it in terms of its emergent macroscopic characteristics.
  • We need to intelligently define the boundaries - the points at which microscopic behavior gives way to a more appropriate macroscopic view. Also, don't be surprised if several boundaries exist.
  • Dynamic service discovery / dynamic service provisioning / re-provisioning are fundamental - they are MUST HAVE core behaviors.
  • So avoid all architectures and solutions that assume a static world comprising fixed, immutable resources at known addresses; NB this includes wiring systems together via static, immutable middleware services! Unfortunately, this describes the vast majority of current software solutions, and the mindsets of the engineers that built them.
Build dynamic systems and manage them with respect to their macroscopic properties, and the management / complexity issue vanishes. Conversely, if runtime complexity is a serious issue, it's about time you redesigned / rebuilt your systems, as no amount of traditional management software will save you.
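As a closing sketch of what "assume a dynamic world" looks like in code - plain OSGi ServiceTracker, with the service name invented for illustration - note that the consumer never hard-codes where its collaborator lives; it simply uses whatever matching service happens to exist at the moment it asks:

    import org.osgi.framework.BundleContext;
    import org.osgi.util.tracker.ServiceTracker;

    public class DynamicConsumer {
        private final ServiceTracker tracker;

        public DynamicConsumer(BundleContext context) {
            // Track a collaborator by type, not by address; arrivals and
            // departures are handled for us as the environment changes.
            tracker = new ServiceTracker(context, "com.example.MarketDataFeed", null);
            tracker.open();
        }

        public Object currentFeed() {
            // Whatever instance is available right now, if any - never a
            // fixed, immutable endpoint.
            return tracker.getService();
        }

        public void close() {
            tracker.close();
        }
    }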