Thursday, January 17, 2008

LiquidFusion - Any Takers?

Just after I found out about Sun's purchase of MySQL, the news of Oracle's acquisition of BEA filtered through.

Can this be anything other than consolidation within an aging market sector? An indication that the "one size fits all" monolithic messaging middleware / application server era is in its twilight years?

Perhaps OSGi and SCA will, in due course, be seen as key technology enablers allowing the shift away from costly monolithic middleware?

Wednesday, January 16, 2008

Sun no Longer Afraid?

I've just been contacted by an old friend asking for my thoughts w.r.t. Sun's MySQL announcement. Certainly news to me! Yet a quick check of Sun's front page and Jonathan's blog, just to be sure, confirms the story.

So my initial response was surprise. Sun had previously purchased an excellent database technology and then proceeded to silently kill it by burying it behind mediocre middleware. Anyone remember Clustra? True, Clustra was a new market entrant, whereas MySQL has massive market adoption.

My interpretation was always that Sun were too concerned about the Oracle relationship - and specifically the Oracle-on-SPARC business line - to risk having any in-house product that remotely looked like a relational database.

If true, that would imply this revenue stream is no longer as important as it used to be?

Whatever the reason, it seems to me like a bold and interesting move. Far more so than the StorageTek acquisition (I still don't understand that one). This also follows on from Lustre, to my mind another interesting, technology-motivated acquisition.

Monday, January 07, 2008

Complexity - Part II: It all depends on the Question you ask!

I previously argued that the apparent complexity of a system varies dramatically with respect to the type of question you ask. The answer to one question may make a given system seem inordinately complex, yet ask another similar question, from a slightly different perspective, and the same system appears very simple.

Hence, it is the question that dictates where the line is drawn separating hidden and exposed system complexity.

Assume I want to deploy a set of services into an enterprise. These services have specific runtime requirements and interdependencies. The usual question asked is...
  • "What compute resources do I have, what are their individual configurations and capabilities?"
In response, an extensive list of resources and associated configurations/capabilities is presented, which now needs analyzing. Like the positions of individual nodes in a lattice, the initial question, and its subsequent answer, expose too much unnecessary information!

In contrast, if I ask,
  • "Out of the cloud of potential resource which may or may not exist, what sub-set resources currently satisfies the following conditions?"
The response requires no further thought. Whilst I may never know the configuration of everything, I'll always know whether there are resources capable of servicing my stated requirements. As the response to the question is simple, and requires no effort on my part, I have no issue in re-asking the question as many times as required; this is essential, as the one thing I do know is that the environment WILL change!!
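To make the second question concrete, here's a minimal sketch in Java. The names (Resource, ResourceCloud, satisfying) are purely illustrative assumptions, not a real API; the point is that the caller states requirements and receives a ready-to-use set, never an inventory to analyze.

    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    // Illustrative resource description - only the attributes a requester cares about.
    final class Resource {
        final String id;
        final int cpuCores;
        final long memoryMb;

        Resource(String id, int cpuCores, long memoryMb) {
            this.id = id;
            this.cpuCores = cpuCores;
            this.memoryMb = memoryMb;
        }
    }

    // The "cloud of potential resources"; its membership changes underneath us.
    final class ResourceCloud {
        private final List<Resource> current;

        ResourceCloud(List<Resource> current) {
            this.current = current;
        }

        // The macroscopic question: which resources satisfy these conditions right now?
        List<Resource> satisfying(Predicate<Resource> requirements) {
            return current.stream()
                          .filter(requirements)
                          .collect(Collectors.toList());
        }
    }

Re-asking is then cheap - e.g. cloud.satisfying(r -> r.cpuCores >= 4 && r.memoryMb >= 8192) can be invoked as often as the environment changes, and the caller never sees the full inventory.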

Re-visiting the lattice analogy.

Because it is simple to measure emergent macroscopic properties such as pressure, temperature and volume, it is easy to re-measure these and so deduce the relationships between them over time - e.g. Boyle's Law. This would have been a significant challenge if the microscopic quantities of position, mass and velocity for each particle had been used instead!
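For reference, the textbook statement of Boyle's Law relates just two macroscopic quantities:

    P_1 V_1 = P_2 V_2 \qquad \text{(fixed temperature, fixed amount of gas)}

Two measurable numbers stand in for the positions and velocities of something on the order of 10^23 particles.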

Abstraction versus Virtualization?

Resource abstraction is different from resource virtualization. Whilst the latter attempts to represent a physical resource with a "virtual" equivalent, this equivalent emulating the attributes of the underlying entity, resource abstraction masks the complexity of the entity (physical or virtual), representing the resource via a simplified description. Resource abstraction and resource virtualization are orthogonal, yet complementary and often interdependent.
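A small sketch of the distinction (illustrative Java interfaces, assumed names): virtualization preserves the full attribute surface of the thing being represented, whereas abstraction deliberately exposes only a simplified description.

    // Virtualization: the virtual disk must answer every question a physical disk
    // would, emulating attributes even when meaningless for the backing store.
    interface VirtualDisk {
        long capacityBytes();
        int rotationalSpeedRpm();   // emulated detail
        String firmwareVersion();   // emulated detail
    }

    // Abstraction: whatever sits underneath - physical disk, virtual disk, remote
    // service - is represented only by the capability the consumer actually needs.
    interface BlockStorage {
        long capacityBytes();
    }

The two compose: a BlockStorage abstraction can happily sit in front of a VirtualDisk, which is why the concerns are complementary rather than competing.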

To Conclude
  • As systems become increasingly distributed, and composed of an ever increasing number of moving parts, we need to step back from attempting a microscopic description of the environment and instead describe it in terms of its emergent macroscopic characteristics.
  • We need to intelligently define the boundaries - the points at which microscopic behavior gives way to a more appropriate macroscopic view. And don't be surprised if several boundaries exist.
  • Dynamic service discovery and dynamic service provisioning / re-provisioning are fundamental - they are MUST HAVE core behaviors.
  • So avoid all architectures and solutions that assume a static world comprising fixed, immutable resources at known addresses - NB including wiring systems together via static, immutable middleware services! Unfortunately, this describes the vast majority of current software solutions, and the mindsets of the engineers that built them.
Build dynamic systems, manage them with respect to their macroscopic properties, and the management / complexity issue vanishes. Conversely, if runtime complexity is a serious issue, it's about time you redesigned / rebuilt your systems, as no amount of traditional management software will save you.
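As a closing illustration, building on the earlier ResourceCloud sketch (again, assumed names, not a real framework): a manager that reconciles against a macroscopic target rather than book-keeping every node.

    import java.util.function.IntConsumer;
    import java.util.function.Predicate;

    final class MacroscopicManager {
        private final ResourceCloud cloud;   // from the earlier sketch
        private final int desiredMinimum;    // the macroscopic target

        MacroscopicManager(ResourceCloud cloud, int desiredMinimum) {
            this.cloud = cloud;
            this.desiredMinimum = desiredMinimum;
        }

        // Re-ask the simple question and react to the aggregate answer;
        // no per-node state is tracked between invocations.
        void reconcileOnce(Predicate<Resource> requirements, IntConsumer provision) {
            int available = cloud.satisfying(requirements).size();
            if (available < desiredMinimum) {
                provision.accept(desiredMinimum - available);
            }
        }
    }

Run reconcileOnce on a timer and the system continually converges on the stated requirement, however the underlying resources churn.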