Wednesday, June 17, 2009

Repeatable, Reusable, Rapid?

I am intrigued by the amount of attention and support being given to “In Praise of Slow”. Originally published in 2004, this book by Carl Honoré is an entertaining and thought-provoking commentary on our “culture of speed”. Honoré grabs our attention in the opening pages by describing how his lifestyle has led him to optimize his time with his son, searching out the shortest books for the bedtime story and pondering why Snow White couldn’t have made do with three dwarves! His thesis is that we should pay attention to detail and do important things right the first time. Like the slow food movement, set up originally to combat fast food, it is about preserving culture, heritage, localization and small scale.

I am instinctively supportive of this idea of “doing things right”. In our own industry I worry about unfettered offshoring, agile development adopted purely for speed, and the compromise of architectural principles that trades short-term gain for lifetime cost.

I note Ron Tolido’s blog develops this theme and extends the idea to Slow IT. Ron suggests “It is about using the principles of Enterprise Architecture to create a platform for continuous business change. This is not a paradox: only on top of a simplified, secure and flexible foundation of building blocks we can orchestrate and change solutions on a daily basis.” He calls it “Slow IT, the art of careful technology”.

I am completely with Ron in rejecting the superficial – Web 2.0, panic package acquisitions and the like – for use in serious enterprise business processes. Yes, we need to transition enterprise systems to a modern, componentized architecture that permits continuous upgrade of smaller moving parts.

However, for all that, I believe Slow IT is not going to go anywhere fast! We already have Slow IT today, and the opportunities for misunderstanding are legion.

Last week I attended a presentation from Nick Cheetham at the Department for Work and Pensions in the UK. He commented that his organization is one of the few experiencing growth right now – because the unemployment rate is set to treble. But for most of us the imperative is to do more with less, and it’s interesting to observe the different responses to this pressure.

I see tangible evidence that companies are increasing the rate of offshoring in order to cut costs. I see others slashing the number of projects and programs and focusing on the core business. But the primary observable effect is redundancy – a reduction in headcount driven simply by the numbers. Then it’s up to the retained staff to figure out how to do more with less.

Maybe “slow” is an unintended but inevitable consequence of the situation the archetypal enterprise now finds itself in, but I suggest a more appropriate focus is along the lines of “repeatable, reusable, rapid”. And this applies to everything – processes, services, components, infrastructure, skills and so on.

In this month’s CBDI Journal we publish a report on Implementation Architecture and Automation Unit Specification. This is an area in which we have been teaching and advising for some time, but we realized we hadn’t documented the guidance. It’s a critically important area – you have the business designs and the service specifications, so how do you deliver an effective software design and implementation, and demonstrate to governance reviewers that you are complying with SOA and EA principles and policy? And it’s a classic opportunity area for practicing repeatable, reusable, rapid techniques.

Like many folk, CBDI has been talking and advising on matters relating to repeatable, reusable, rapid for years. I seem to recall the phrase “reuse before you buy before you build” being coined by a colleague at TI around 1994, when we were developing the ideas around Component Based Development. It seems what goes around comes around, which perhaps says “good things come to those who wait”. But that’s different from advocating “slow”, which seems a bit like turkeys voting for Christmas.

Wednesday, June 3, 2009

Modernizing Legacy – Thoughts on Analyzing and Classifying the Existing Application Landscape

For the last few weeks I have been working on the strategic direction for a government’s applications in a particular area of citizen support. It has prompted me to challenge some of our widely held assumptions about legacy and how we manage it.

First, what do we really mean by legacy? In general usage, legacy means something handed down from the past: an inheritance, gift or donation. In computing, however, the term has become synonymous with obsolete – still functional to some extent, but not working optimally with modern systems.

But this really isn’t adequate. Old doesn’t automatically mean obsolete, to be superseded; old-fashioned shouldn’t automatically mean redundant. What’s needed is a better taxonomy that allows us to communicate more about the nature of the application portfolio.

I recall a presentation last year by Colin Smart of HBOS on their work in classifying their application landscape. Colin suggested expanding legacy into the following classes:

· Strategic – everyone should use this

· Retiring – no one should use this (there is an alternative that should be used instead)

· Contained – no one should use this (but we don’t yet have an alternative)

I like this classification a lot. It immediately provides an answer to my basic concern regarding supersession and/or redundancy. Incidentally, the Contained class relates to our Chernobyl pattern, which we describe as “wrap in concrete, don’t expect to replace any time soon”.

But I feel we could go further with the classification, particularly to integrate it with the emerging SOA.

A significant part of the application area we were analyzing had been developed using the Information Engineering methodology, presumably with the IEF (Information Engineering Facility) of that era, now renamed CA Gen. A particular feature of this delivery approach was to establish an integrated, business-driven data model that transformed directly into an integrated database. Now, regardless of the issues surrounding a tightly integrated database, it was interesting to see how easily the aging application portfolio could map to the Core Business Service layer, because the integrated, business-context database effectively embodies an implied Business Type Model (BTM). Rather than publishing Underlying Services, the CA Gen application would easily be able to publish Core Business Services that could be called directly by the Business Process layer.
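Here is a minimal sketch of what I mean, assuming a hypothetical claims-assessment capability; all the names are illustrative and the legacy procedure is simply stubbed rather than wired to a real CA Gen adapter. The point is that because the legacy data model already reflects business types, the published Core Business Service can be a very thin facade that the Business Process layer calls directly.

```java
// Sketch only: a Core Business Service facade over a legacy, database-integrated
// procedure. All names are hypothetical; the legacy call is stubbed.

import java.math.BigDecimal;

public class CoreBusinessServiceSketch {

    // Business-type-aligned result, mirroring an entity from the implied BTM.
    record ClaimAssessment(String claimId, String status, BigDecimal awardAmount) {}

    // The Core Business Service contract published to the Business Process layer.
    interface ClaimAssessmentService {
        ClaimAssessment assessClaim(String claimId);
    }

    // Stand-in for the legacy procedure that already works against the
    // integrated, business-context database (e.g. reached via an adapter).
    static class LegacyClaimProcedure {
        ClaimAssessment execute(String claimId) {
            return new ClaimAssessment(claimId, "ASSESSED", new BigDecimal("120.50"));
        }
    }

    // Because the legacy data model already reflects business types, the
    // service implementation needs very little translation.
    static class ClaimAssessmentServiceImpl implements ClaimAssessmentService {
        private final LegacyClaimProcedure legacy = new LegacyClaimProcedure();

        @Override
        public ClaimAssessment assessClaim(String claimId) {
            return legacy.execute(claimId);
        }
    }

    public static void main(String[] args) {
        ClaimAssessmentService service = new ClaimAssessmentServiceImpl();
        System.out.println(service.assessClaim("CLM-001"));
    }
}
```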

I just happen to have been looking at a CA Gen application, but of course back in the 80s and 90s Information Engineering was widely used, and I wouldn’t be surprised if many legacy applications exhibited similar characteristics. However, it must be said that many older delivery technologies didn’t have the same internal integrity as the IEF, and any original quality may have deteriorated over time.

But if you do discover this, the alignment of the (very large) application area with the BTM also provides more confidence that the application area could be componentized – breaking up the peer-to-peer calls between modules and turning them into service calls, thereby reducing the unit of change, deployment and release. And believe me, the example I am advising on is a BIG application area, so there is real value in doing this. The same action opens up the possibility of replacing or reengineering selected modules where appropriate – simply giving the customer more options for managing the application area.
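By way of illustration, here is a minimal before-and-after sketch with entirely hypothetical module and service names: a direct peer-to-peer call between modules is replaced by a call through a published service contract, so that either side can be changed, deployed and released on its own.

```java
// Sketch only: converting a hard-wired peer-to-peer module call into a call
// through a service contract. Module and service names are illustrative.

public class ComponentizationSketch {

    // Before: Module A calls Module B's internal routine directly, so both
    // must be built and released as one unit.
    static class PaymentsModule {
        String calculatePayment(String claimId) { return "PAY:" + claimId; }
    }

    static class AwardsModuleBefore {
        private final PaymentsModule payments = new PaymentsModule(); // hard-wired peer call

        String award(String claimId) { return payments.calculatePayment(claimId); }
    }

    // After: the dependency is expressed as a service contract; the payments
    // implementation can be re-engineered or replaced behind it.
    interface PaymentService {
        String calculatePayment(String claimId);
    }

    static class PaymentServiceAdapter implements PaymentService {
        private final PaymentsModule legacy = new PaymentsModule();

        public String calculatePayment(String claimId) {
            return legacy.calculatePayment(claimId);
        }
    }

    static class AwardsModuleAfter {
        private final PaymentService payments;

        AwardsModuleAfter(PaymentService payments) { this.payments = payments; }

        String award(String claimId) { return payments.calculatePayment(claimId); }
    }

    public static void main(String[] args) {
        System.out.println(new AwardsModuleBefore().award("CLM-001"));
        System.out.println(new AwardsModuleAfter(new PaymentServiceAdapter()).award("CLM-001"));
    }
}
```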

So, to return to my taxonomy question, I suggest there are further classifications that would really help to communicate what’s actually going on in the application estate – Usage (as per Colin’s suggestion), Service Layer and Implementation.

| Usage     | Service Layer | Implementation        |
|-----------|---------------|-----------------------|
| Strategic | Core Business | Components            |
| Retiring  | Underlying    | Componentized modules |
| Contained | Underlying    | Monolithic structure  |

In most cases I would imagine Strategic applications would be supporting Core Business Services directly. There will sometimes be situations where the API architecture is inadequate, and where a Strategic application is shown as supporting the Underlying Service layer the classification serves to highlight the potential conflict. Similarly, if a Retiring application is supporting the Core Business layer, then some urgent action is needed.

It might be expected that Strategic applications would have some level of componentized implementation. Conversely, if a Strategic application has a monolithic implementation architecture, the classification highlights the risk, the likely cost overhead and the reduced options.
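To illustrate, the classification lends itself to simple mechanical checks across an application inventory. The sketch below reflects my own reading of the conflicts described above, with illustrative names throughout; it is not a prescribed rule set.

```java
// Sketch only: representing the three classification dimensions and flagging
// the combinations discussed above. Names and rules are illustrative.

import java.util.ArrayList;
import java.util.List;

public class PortfolioClassificationSketch {

    enum Usage { STRATEGIC, RETIRING, CONTAINED }
    enum ServiceLayer { CORE_BUSINESS, UNDERLYING }
    enum Implementation { COMPONENTS, COMPONENTIZED_MODULES, MONOLITHIC }

    record Application(String name, Usage usage, ServiceLayer layer, Implementation impl) {}

    static List<String> flagConcerns(Application app) {
        List<String> concerns = new ArrayList<>();
        if (app.usage() == Usage.STRATEGIC && app.layer() == ServiceLayer.UNDERLYING)
            concerns.add("Strategic application supporting the Underlying layer: potential API conflict");
        if (app.usage() == Usage.RETIRING && app.layer() == ServiceLayer.CORE_BUSINESS)
            concerns.add("Retiring application supporting the Core Business layer: urgent action needed");
        if (app.usage() == Usage.STRATEGIC && app.impl() == Implementation.MONOLITHIC)
            concerns.add("Strategic application with monolithic implementation: risk, cost overhead, reduced options");
        return concerns;
    }

    public static void main(String[] args) {
        Application app = new Application("Citizen Claims", Usage.STRATEGIC,
                ServiceLayer.UNDERLYING, Implementation.MONOLITHIC);
        flagConcerns(app).forEach(System.out::println);
    }
}
```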

I would be very interested in hearing about other legacy classification systems that members have used.