Tuesday, December 01, 2009

“Making Modularity Manageable™”.


Or should that be "OSGi Mmm...." ;)


I am very pleased to announce the release of Paremus Nimble today. Nimble combines a feature-rich OSGi shell with the industry's most powerful dependency resolver. The Nimble release is not only a software release, but a landmark in our ongoing mission to make adaptive OSGi-based systems simpler to develop and manage than the legacy systems they replace.


We believe Nimble is the most productive way currently available to manage and interact with OSGi frameworks; you may download and use it for free on a 30-day renewable license.


So whether you're new to OSGi, an experienced OSGi developer, or an administrator charged with deploying and managing sophisticated OSGi-based composite applications, we suspect you'll find Paremus Nimble invaluable.


Given that you've read this far, you may also be interested in a couple of posts from the Paremus team, including:

'55k download and 85 chars to run spring simple web app' & 'osgi file install nimble style'


It's also been well received in the twittersphere, including comments such as:


“very impressive http://bit.ly/7lzv3w Nimble by paremus #osgi #in” by @yanpujante.


Thanks Yan!


Please feel free to feed back experiences and comments via the Nimble Blog, or tweet us @paremus.



Merry Xmas (almost)


Richard.

Thursday, October 22, 2009

OSGi: The Value Proposition?

In a recent blog post, Hal Hildebrand argues OSGi's value proposition in terms of its ability to reduce long-term 'complexity'. Hal argues that whilst it may be harder to start with OSGi, as it initially appears more complex, for large applications and large teams it is ultimately simpler because the architecture is 'modular'. A diagram along the lines of the following is used to emphasise the point.


Complexity over time?


As an ex-physicist, I'm naturally interested in concepts such as 'Complexity', 'Information' and 'Entropy'; and while I agree with Hal's sentiments, I feel uneasy when the 'complexity' word is used within such broad-brush general arguments. Indeed, I find myself asking: in what way is a modular system 'simpler'? Surely a modular system exposes previously hidden internal structure, and while this is 'necessary complexity' (i.e. information describing the dependencies in the composite system), the system is nevertheless visibly more complex!

For those interested, the following discussion between physicists at a Perimeter Institute seminar concerning 'information' is amusing and illuminating, and demonstrates just how difficult such concepts can be.

Before attempting to phrase my response, I visited the blog of Kirk Knoernschild - IMO one of the industry's leading experts in modularisation - to see what he had to say on the subject.

Sure enough, Kirk states the following:

"As we refactor a coarse-grained and heavyweight module to something finer-grained and lighter weight, we’re faced with a set of tradeoffs. In addition to increased reusability, our understanding of the system architecture increases! We have the ability to visualize subsytems and identify the impact of change at a higher level of abstraction beyond just classes. In the example, grouping all classes into a single module may isolate change to only a single module, but understanding the impact of change is more difficult. With modules, we not only can assess the impact of change among classes, but modules, as well."

Hence, Kirk would seem to agree. As one modularises an application, complexity increases in the form of exposed structural dependencies. Note that one must be careful not to confuse this 'necessary' complexity with accidental complexity; a subject of previous blog entries of mine - see Complexity Part I & Part II.
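
To make this concrete: an OSGi bundle declares these structural dependencies explicitly in its manifest. A minimal, illustrative example (the bundle and package names are invented for illustration):

    Bundle-SymbolicName: com.acme.pricing
    Bundle-Version: 1.0.0
    Export-Package: com.acme.pricing.api;version="1.0.0"
    Import-Package: com.acme.marketdata.api;version="[1.0,2.0)",
     org.osgi.framework;version="1.4"

None of this information is new - the same dependencies existed, hidden, inside the monolith. Modularisation simply makes them visible, and hence manageable.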


OSGi - Preventing 'System Rot'?

Those who have worked in a large enterprise environment will know that systems tend to 'rot' over time. Contributing factors are many and varied but usually include:
  • Structural knowledge is lost as key developers and architects leave the organisation.
  • Documentation that is missing and/or inadequate.
  • The inability to effectively re-factor the system in response to changing business requirements.
The third issue is really a 'derivative' of the others: as application structure is poorly understood, accidental complexity is introduced over time as non-optimal changes are made.

Hence, rather than framing OSGi's value proposition in terms of 'complexity', OSGi's value is perhaps more apparent when framed in terms of the 'necessary information' required to manage and change systems over time?



Structural information loss over time for modular and non-modular systems.


Unlike a traditional system, the structure of a modular System is always defined: the structural information exposed by a correctly modularised system is precisely the necessary information (the necessary complexity) required for the long-term maintenance of that System.

In principle, at each point in time:
  • The components used within the System are known
  • The dependencies between these components are known
  • The impact of changing a component is understood
However, the value of this additional information is a function of the tooling available to the developer and the sophistication of the target runtime environment.
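
As a sketch of what this information looks like programmatically, the following activator walks a 2009-era OSGi framework using the standard PackageAdmin service and prints each bundle's exported packages together with the bundles importing them. This is plain OSGi API, not Paremus-specific code; it assumes the framework provides PackageAdmin, as was standard at the time:

    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.service.packageadmin.ExportedPackage;
    import org.osgi.service.packageadmin.PackageAdmin;

    // Prints the wiring between bundles - the 'necessary information'
    // discussed above - for every bundle in the running framework.
    public class StructureReport implements BundleActivator {

        public void start(BundleContext context) throws Exception {
            ServiceReference ref = context.getServiceReference(PackageAdmin.class.getName());
            PackageAdmin admin = (PackageAdmin) context.getService(ref);
            for (Bundle bundle : context.getBundles()) {
                ExportedPackage[] exports = admin.getExportedPackages(bundle);
                if (exports == null) continue;  // bundle exports nothing
                for (ExportedPackage export : exports) {
                    System.out.println(bundle.getSymbolicName() + " exports " + export.getName());
                    Bundle[] importers = export.getImportingBundles();
                    if (importers == null) continue;
                    for (Bundle importer : importers) {
                        System.out.println("  imported by " + importer.getSymbolicName());
                    }
                }
            }
            context.ungetService(ref);
        }

        public void stop(BundleContext context) {}
    }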


The Challenge: Simplifying while preserving Flexibility

Effectively dealing with both module and context dependencies is key to realizing OSGi's true value in the enterprise.

To quote Kirk yet again:

"Unfortunately, if modules become too lightweight and fine-grained we’re faced with the dilemma of an explosion in module and context dependencies. Modules depend on other modules and require extensive configuration to deal with context dependencies! Overall, as the number of dependencies increase, modules become more complex and difficult to use, leading us to the corollary we presented in Reuse: Is the Dream Dead:"

The issue of module dependency management is well understood. Development tooling initiatives are underway to ease module dependency management during the development process; one example being the SIGIL project recently donated by Paremus to Apache Felix.

However, Kirk's comment with respect to 'context dependencies' remains mostly unheard.

From a runtime perspective, vendors and early adopters currently pursue one of the following two strategies:

  • Explicit management of all components: Dependency resolution is 'frozen in' at development time. All required bundles, or a list of required bundles, are deployed to each runtime node in the target environment; i.e. operations are fully exposed to the structural dependencies / complexities of the application.
  • Use of an opaque deployment artifact: Dependency resolution is again 'frozen in' at development time. Here the application is 'assembled' at development time and released as a static opaque blob into the production environment. Operations interact with this release artifact, much like today's legacy applications. While the dependencies are masked, the unit of deployment is the whole application; this decreases flexibility and, if one considers the 'Re-use Release Equivalence Principle', partly negates OSGi's value proposition with respect to code re-use.
Both of these approaches fail with respect to Kirk's 'context dependencies'. As dependencies are 'frozen in' at development time, there is no ability to manage 'context' dependencies at runtime. Should conditions in the runtime environment for whatever reason require a structural change, a complete manual re-release process must be triggered. With these approaches, operational day-to-day management will at best remain painful.

In contrast, leveraging our Nimble resolver technology, Paremus pursue a different approach:

  • The runtime environment - a 'Service Fabric' - is model driven. Operations release and interact with a running Service via its model representation, this being an SCA description of the System. Amongst other advantages, this shields the operations staff from unnecessary structural information.
  • The Service Fabric dynamically assembles each System, resolving all module AND context dependencies.
  • Resolution policies may be used to control various aspects of the dynamic resolution process for each System, providing a higher-level, policy-based hook into runtime dependency management.
  • The release artifacts are OSGi bundles and SCA System descriptions - conforming with the 're-use / release equivalence principle'.
  • The inter-relationship between all OSGi bundles and all Systems within the Service Fabric may be easily deduced.
The result is a runtime which is extremely flexible and promotes code re-use, whilst being significantly easier to manage than traditional environments. OSGi is an important element, but the use of a high-level structural description in conjunction with the model-driven runtime is also an essential element of this story.
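
For illustration only, an SCA-style System description has roughly the following shape; the composite, component and wire names below are invented, and the actual Service Fabric model will differ in detail:

    <composite xmlns="http://www.osoa.org/xmlns/sca/1.0" name="TradeCaptureSystem">
      <!-- each component is dynamically resolved to OSGi bundles at runtime -->
      <component name="TradeCapture">
        <implementation.composite name="TradeCaptureCore"/>
      </component>
      <component name="Pricing">
        <implementation.composite name="PricingCore"/>
      </component>
      <!-- the structural relationship that operations actually care about -->
      <wire source="TradeCapture" target="Pricing"/>
    </composite>

Operations release and manage the System at this level of description; the bundle-level detail is resolved, and re-resolved, beneath it.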


OSGi: The Value Proposition?

The short answer really is - "it depends on how you use it"!

Without a doubt, many will naively engage with OSGi and will unwittingly increase operational management complexity beyond any benefits achieved by application modularization; see 'OSGi here, there and everywhere'. However, for those that implement solutions that maximize flexibility and code re-use while minimizing management overhead, OSGi's value proposition is substantial; and the runtime used is a critical factor in realising these benefits.

How Substantial?

To date, my only benchmark is provided by an informal analysis made by a group of architects at a tier 1 investment bank in 2008. They estimated the potential OPEX cost saving per production application, assuming it were replaced with a Service Fabric equivalent; for the purpose of this blog, one may equate Service Fabric to an adaptive, distributed OSGi runtime.

Cost savings were estimated in terms of:
  • application release efficiency,
  • ongoing change management,
  • fault diagnostics and resolution,
  • efficiency savings through code re-use.
The final figure suggested a year-on-year OPEX saving of 60% per application. Somewhat surprised at the size of the estimate, I've challenged the group on several occasions; each time the response was that the estimates were conservative.

To turn this into some real numbers, consider the following. A tier 1 investment bank may have as many as ~1,000 applications, each typically costing $1m per annum. Let's assume that only 30% of the applications are suitable for migrating to the new world: 1,000 applications × 30% × $1m × 60% still gives a year-on-year saving of some $180m. Migration costs are not included in this, but these are short-term expenses. Likewise, neither are the cost savings realised by replacing legacy JEE application servers and middleware with the Service Fabric solution.

As always, 'mileage may vary' - but nevertheless, quite a value proposition for OSGi!

Monday, October 12, 2009

How do you scale your Spring DM or POJO applications without development framework lock-in?


"Cloud centric composite applications promise to be more disruptive and more rewarding than either the move to client-server architectures in the early 1990’s, or web-services in the late 1990’s. A successful Private Cloud / Platform as a Service (PaaS) solution will provide the robust and agile foundations for an organization’s next generation of IT services.


These Cloud / PaaS runtimes will be in use for many years to come. During their lifetime they must therefore be able to host a changing ecosystem of software services, frameworks and languages.


Hence they must:

  • be able to seamlessly, and incrementally, evolve in response to changing business demands,
  • at all costs, avoid locking an organization into any one specific development framework, programming language or middleware messaging product."


Want to know more? Read the new Paremus Service Fabric architecture paper which may be found here.


Wednesday, September 30, 2009

Cloud Computing - finally, FINALLY, someone gets it!

I've been really busy these last few months, so I've not had the time or inclination to post. Yet after reading Simon Crosby's recent article, Whither the Venerable OS?, I felt compelled to put pen to paper - or rather, should that be fingers to keyboard?

Whilst the article is a good read throughout, the magic paragraph for me appears towards the end.

"If IaaS clouds are the new server vendors, then the OS meets the server when the user runs an app in the cloud. That radically changes the business model for the OS vendor. But is the OS then simply a runtime for an application? The OS vendors would rightly quibble with that. The OS is today the locus of innovation in applications, and its rich primitives for the development and support of multi-tiered apps that span multiple servers on virtualized infrastructure is an indication of the future of the OS itself: Just as the abstraction of hardware has extended over multiple servers, so will the abstraction of the application support and runtime layers. Unlike my friends at VMware who view virtualization as the "New OS" I view the New OS as the trend toward an app isolation abstraction that is independent of hardware: the emergence of Platform as a Service."

Yes! Finally someone understands!

This is, IMO, exactly right, and it is the motivation behind the Paremus Service Fabric; a path we started down in 2004!

OK, so we were a bit ahead of the industry innovation curve.

Anyway, related commentary on the internet suggests that Simon's article validates VMware's acquisition of SpringSource. Well, I'd actually argue quite the opposite. Normal operating systems have been designed to run upon a fixed, unchanging resource landscape; in contrast, a "Cloud" operating system must be able to adapt, and must allow hosted applications to adjust, to a continuously churning set of IaaS resources. Quite simply, SpringSource do not have these capabilities in any shape or form.

However, I would disagree with the last point in Simon's article. Having reviewed Microsoft's Azure architecture, it seems to me no different from the plethora of Cloud/distributed ISV solutions. Microsoft's Azure platform has a management/provisioning framework that fundamentally appears to be based on a Paxos-like consensus algorithm; this is no different from a variety of ISVs that are using Apache ZooKeeper as a registry / repository: all connection-oriented architectures, all suffering from the same old problems!

Whilst such solutions are robust in a static environment, they fail to account for the realities of complex system failures. Specifically, rather than being isolated, uncorrelated events, failures in complex systems tend to be correlated and cascade! Cloud operating systems must address this fundamental reality, and Microsoft are no further ahead than VMware or Google; indeed, the race hasn't even started yet!

And the best defence against cascading failure in complex systems? Well, that would be dynamic re-assembly driven by 'eventual' structural and data consistency.

Saturday, July 11, 2009

How Nimble is your OSGi runtime?

Hands up, all of you managing OSGi dependencies via an editable list of bundles. Easy, isn't it! It just works, right!?

Well actually, it 'just works' for a single application running in a small number of containers. From an enterprise perspective, you are unintentionally contributing to an impending complexity meltdown: an explosion of dependency and configuration management issues. And if you are unfortunate enough to end up supporting your own composite creations, you may well end up envying the fate of Prometheus and rueing the day you learnt to code.

Possibly harsh? But I'm not alone in voicing this concern!

In his recent article "Reuse: Is the Dream Dead?", Kirk Knoernschild continues his efforts to educate the industry on the tension between code 're-use' and 'simplicity of use'. Kirk argues that as you increase potential re-use via lightweight, fine-grained components, the complexity of dependencies and necessary environmental configurations correspondingly increases, making these same components harder to use.

A simple concept, yet, if unaddressed, an issue that will make your life as an enterprise developer increasingly uncomfortable and help edge OSGi closer to that seemingly inevitable 'trough of disillusionment'.

Yet, from a development perspective, the issue of dependency management is well understood.


Whilst the available tooling was initially found wanting, a number of projects now exist to address this, including the SIGIL Eclipse plug-in which Paremus recently contributed to the Apache Felix project (SIGIL leverages Peter Kriens' BND tool).
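
For the curious, a SIGIL/BND-style build removes the need to maintain manifests by hand: you state your intent and the tooling computes the Import-Package statements by inspecting the compiled classes. An illustrative bnd descriptor (the bundle and package names are invented):

    # BND computes Import-Package by analysing the bytecode
    Bundle-SymbolicName: com.acme.pricing
    Bundle-Version: 1.0.0
    Export-Package: com.acme.pricing.api;version=1.0.0
    Private-Package: com.acme.pricing.impl
    Import-Package: *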


In contrast, the issue of dependency management in Production is less immediately obvious, its impact more profound, and it is generally ignored:

* Will aspects of the runtime environment affect the runtime dependencies within the application?

* Will applications be isolated from each other, or might they run within the same JVM?

* How are the released artifacts subsequently managed in the production environment with respect to ongoing bundle dependency and version management?

Echoing Kirk's concerns, Robert Dunne started his presentation at OSGi DevCon Europe with the observation that 'whilst modularity was good, its benefits are often undermined by dependency and configuration complexity'. The subject of Robert's presentation? The Paremus Nimble Resolver, which is our response to the concerns posed by Kirk.

Nimble is a high-performance runtime dependency resolver. To deploy a composite application to a Nimble-enabled runtime (i.e. the Paremus Service Fabric), one specifies:

* The root component of the artifact.

* And a set of associated policies and constraints.

Nimble then does the rest.

Presented with the 'root', Nimble dynamically constructs the target composite, ensuring that the structural dependencies are resolved in a manner consistent with both organizational policies and the runtime environment within which it finds itself.
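
To make the idea concrete, here is a self-contained toy - emphatically not the real Nimble implementation, just an illustration of 'root plus policy' resolution: given a root artifact, a dependency graph and a policy, it computes the transitive set of artifacts the runtime must provision. All names are invented.

    import java.util.*;

    // Toy illustration only - NOT the Nimble resolver. Resolves the
    // transitive dependencies of a root artifact, subject to a policy.
    public class ToyResolver {

        interface Policy {
            boolean accept(String artifact);
        }

        private final Map<String, List<String>> dependencies;

        ToyResolver(Map<String, List<String>> dependencies) {
            this.dependencies = dependencies;
        }

        Set<String> resolve(String root, Policy policy) {
            Set<String> resolved = new LinkedHashSet<String>();
            Deque<String> work = new ArrayDeque<String>();
            work.push(root);
            while (!work.isEmpty()) {
                String artifact = work.pop();
                if (!resolved.add(artifact)) continue; // already resolved
                for (String dep : dependencies.getOrDefault(artifact, Collections.<String>emptyList())) {
                    if (!policy.accept(dep))
                        throw new IllegalStateException("Policy rejects " + dep);
                    work.push(dep);
                }
            }
            return resolved; // everything the runtime must provision
        }

        public static void main(String[] args) {
            Map<String, List<String>> graph = new HashMap<String, List<String>>();
            graph.put("trade-app", Arrays.asList("pricing-api", "spring-dm"));
            graph.put("pricing-api", Arrays.asList("marketdata-api"));

            Policy noSnapshots = new Policy() {
                public boolean accept(String artifact) {
                    return !artifact.endsWith("-SNAPSHOT");
                }
            };
            // Prints: [trade-app, spring-dm, pricing-api, marketdata-api]
            System.out.println(new ToyResolver(graph).resolve("trade-app", noSnapshots));
        }
    }

The real resolver additionally weighs versions, capabilities and runtime context against policy; the point here is simply that only the root and the policies are specified by hand.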

Nimble's OSGi capabilities include:

* Fragment attachment policies.

* Optional import policies.

* Import version range narrowing.

* The ability to resolve dependencies on extender bundles (DS, 'classic' Spring, Spring DM, iPOJO).

Nimble policies allow:

* The configuration of selected extensions.

* Flexible constraint requirement -> capability matching.

* The ability to configure optional dependency resolution behaviors.

Not just OSGi: Nimble is a generic artifact resolver with a pluggable architecture. Any artifact type may be supported, with support currently available for:

* OSGi Bundles

* POJOs, 'classic' Spring & Spring DM

* WAR

* Configurations.

A Nimble-enabled runtime quite literally assembles, dynamically, all required runtime application and infrastructure service dependencies around the deployed business components! Specify a WAR artifact, and Nimble will instantiate the appropriate Servlet engine dictated by the runtime policy attached to the WAR; i.e. Tomcat or Jetty, Sir? Specify a 'Configuration', and Nimble responds by installing the target of the configuration, and of course its dependencies.

Nimble not only directly addresses Kirk's concerns, but goes on to radically transform our understanding of the responsibilities and capabilities of next-generation, composite-aware Service Platforms. Most importantly, Nimble was created to enable effective re-use whilst making life simpler for you and the organizations you work for.

Thursday, April 09, 2009

Whilst recently writing up a white paper, I idly spent some time looking through my usual archive - the Internet (anything to avoid writing). :-/

When did we (Paremus) first announce distributed OSGi again? Answer: not in 2009, as one might believe if you listened to all the IT vendor noise about RFC 119, but on December 16th, 2005.

OK - we were a little early :)

This press release even had a quote from Jon Bostrom. Jon, six years earlier in 1998, had actually visited Salomon Brothers UK to provide a Jini training course to what turned out to be a proto-Paremus team.

This morning I was alerted to a blog post concerning Jini and OSGi, which I duly half-read, then responded to. I then realized that the blogger had actually referenced a short five-minute talk I gave at the Brussels JCM 10 Jini event in September 2006. As the message from this presentation had been ignored by the community since that point, I was somewhat surprised / pleased to see it referenced.

My message at the time was simple and quite unpopular...

To survive and flourish, Jini must embrace OSGi

The other thing that sprang to mind was Jim Waldo's presentation at the same conference. Unlike mine, this was widely reported with great enthusiasm; I really don't mind, Jim :)

The interesting thing was - at least to my mind - that one of Jim's most profound comments seemed to be missed by most:

Program vs. Deploy - we'll put the management in later

This struck a particular chord with the Paremus engineering team, as our dynamic target-state provisioning sub-system for Infiniflow had been released earlier that very year, leveraging those very ideas!

It's now 2009, and the industry has defined the relevant standards for distributed OSGi-based frameworks. Now the industry is wondering how to develop, deploy and manage runtimes that consist of 1000's of dynamically deployed bundles running on a Cloud of compute resource. No problem! Paremus have been doing that for half a decade ;)

Conclusions? Nothing profound. Perhaps the slow pace of the IT industry? But isn't the Internet a great communal memory!

Wednesday, March 11, 2009

Teleport or Telegraph?

If this blog entry were chiseled in stone, no currently existing technology would be capable of near-instantaneous transportation of that stone. Perhaps quantum entanglement might one day provide the basis for teleportation - yet much serious physics and engineering would be required to make this more than science fiction.

Yet the same information, in a binary format (Morse), could have been transmitted across a continent at near the speed of light over a hundred years ago.

Both approaches achieve the same result - transmission of information.

Sometimes identifying the correct approach, the correct perspective, is far more important than the amount of engineering effort you throw at a problem.

Which brings me to the following article.

So VMware need 2,000 people to build a resource orchestration layer? Certainly, trying to manage a resource landscape so that it appears unchanging to a population of legacy applications is extremely difficult!

The alternative?

Take a different perspective.

Build dynamic / agile applications that adapt to the changing characteristics of their operational environments.

Friday, February 27, 2009

Global Financial Meltdown and Google Mail Service Outage

Whilst the current global economic meltdown and the recent Google e-mail service outage may seem entirely different types of event, there is some degree of commonality. Both represent catastrophic cascading failure within large complex distributed systems.

The analogy unfortunately finishes there. 

Google were up and running again in a couple of hours, whilst the world's economies may take a decade to recover. However, the central theme - how to avoid systemic catastrophic failure within complex systems - remains of deep concern to system architects and economists alike.

Where does that leave "Cloud Computing"? Quite simply, don't believe the hype. Public Cloud infrastructures will continue to fail; hopefully infrequently, but almost certainly in a spectacular manner. The next generation of Public Cloud will need to be built upon a more modular resource landscape (swarms of geographically dispersed, meshed data centre nodes), with a suitably advanced, distributed & partitionable Cloud Operating System.

Unfortunately, the same is true of the current generation of Grid Provisioning and Virtualization Management software solutions increasingly used by large corporations. Use of this technology will end in tears for a number of large IT departments. Too much visible complexity, too little automation. Like the economic meltdown, these solutions fail to account for the outlier risks which cause systemic failure within complex systems.

The answer? Well, it's not a programming language (sorry, Erlang!), nor a specific piece of middleware, nor a specific replication technology, nor classic clustering.

To start the journey, one must first realize that...

Agility and Robustness are simply two faces of the same coin.

Thursday, February 12, 2009

Forget Cloud - OSGi is the new Cool Thing!

Or so an Industry Analyst recently informed me. 

Yet the flurry of Twittering & blogging concerning the distributed OSGi section of the new OSGi 4.2 specification is certainly interesting. Is OSGi approaching some sort of enterprise adoption tipping point? These, along with other commercial indications, imply that it is likely.

This is good news. OSGi deserves to be wildly successful; it is one of the key enablers for the next generation of the enterprise.

Yet a danger lurks in the shadows. 

The use of OSGi does not in itself guarantee any sort of coherent architecture, nor is it capable of addressing the current complexity crisis within the enterprise. OSGi is simply a tool, and in the wrong hands OSGi runtime systems will seem orders of magnitude more complex than the systems they replaced. Meanwhile, the distributed OSGi section of the 4.2 specification is simply an acknowledgment that "things" exist outside the local JVM - no more, no less.

Distributed OSGi has little to say about how to address Deutsch's 8 Fallacies (actually, if you follow the link you'll notice that Wikipedia now has a 9th :)). How these distributed entities discover each other, how they interact with each other, and which protocols are used is left as an exercise for the software vendor. This is not a criticism of the standard; this is a good thing. OSGi doesn't constrain distributed architectures.

Yet this allows business as usual for the Software Vendors. And so we see the same old tired SOA rhetoric.

"ESB's & WS-*, would you like OSGi with that sir?"

But joking aside, the real danger is that OSGi's fate may become hopelessly entangled with the current disillusionment surrounding the web of vendor SOA market-ectures.

Paremus have always argued that OSGi deserves to be complemented by a network SOA framework that is as adaptable and dynamic as OSGi is locally within the JVM. A Self-Similar Architecture!

It was for this reason that Paremus fused OSGi (the new Cool technology) with Jini (was Jini ever Cool?) within the Newton project in 2006. A solution which, in its commercial Infiniflow guise, has been in customer production for over 2 years.

As for Cloud Computing - that story has only just started ;-)