Saturday, 23 October 2010

The difference between "Real" Architecture & I.T. Architecture

One of the architecture forums I'm active in has many "real architects" (those who create the built environment - buildings, parks, towns etc.) as well as I.T. Architects, and a common discussion is why "real" Architecture has a much higher success rate than I.T. Architecture.

By "success" we generally mean that whatever was originally designed gets built with few modifications from the original plan and once built lasts for quite a long time - at least for its initially projected life and usually much longer.

I.T. projects, on the other hand, have a success rate considerably below 50% (some studies put it below 20%), and even the successful projects are subject to considerable change before the first iteration goes into production.

Why is this?

Part of it is that when a Building Architect designs a building he has access to almost everything he needs to know about the materials in his proposed building (stress and shear factors, density, composition, life expectancy, degradation rates etc.) and a set of scientific formulae for calculating everything else.

At the high end of the industry, Dynamic Simulation Models are used extensively in building design - one of the companies I've dealt with, the Building Research Establishment Ltd, specialises in developing and publishing these simulation models for use by Building Architects - so before anything is physically built, many different building configurations can be simulated before settling on a final design.

As a consequence a Building Architect is pretty certain (within whatever probability boundary they require) that the proposed building will be fit for purpose, meet all the legislative requirements and can be built from the desired materials.

IT Architects, on the other hand, generally have to use our experience to guess at the best way of solving a problem and, because we have no scientific way of proving what will or will not work, must rely on powers of persuasion to convince people that the proposed solution will meet the requirements.

So, when we do try to research a particular subject we generally settle on a number of subjective "design patterns" that sort of provide a solution but offer no means of choosing between them. Consequently we end up having to commit money and physically build significant parts of a target architecture before we can test that it works. That's a huge leap of faith for most organisations, which is why the "power of persuasion" is so important - in many ways much more important than technical prowess (as I've found to my cost more than once in the past).

If we assume that accurate and provable prediction is a major part of why "real architecture" is so much more successful than "IT architecture" then the question becomes how to introduce simulation models into the IT world?

I worked on a resource simulation model when I was at British Airways in the 1990s, which they used for modelling resource requirements across their flight network but which could have modelled pretty much any activity-based environment - its only limitation was the availability of the necessary base data to populate it with.

Shortly after that I worked on another simulation application for the same company, this one focused on simulating, predicting and optimising bookings demand across the flight network. (I mention this because it's exactly the same problem as modelling data-flows through an organisation's data processing systems.) Again the only limitation was the availability of base data, and the more we had - we used 5 years of data in the end - the more accurate the simulations became.

In both cases the business had a fully worked-through model on which it could base decisions with a reasonable chance of success before spending further time & money.

The major part of the solution, then, is to build appropriate, industry-accepted simulation models that can use the metrics to forward-project onto a proposed IT architecture and allow dynamic "what if?" scenarios to be explored across different potential topologies before anything is actually changed.
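To make that concrete, here's a minimal sketch in Python of what such a "what if?" comparison might look like. Everything in it - the component names, the latency distributions, the numbers - is a hypothetical placeholder rather than agreed base data, but it shows the shape of the idea: forward-project component metrics onto two candidate topologies and compare the results before either is built.

```python
import random
import statistics

# Hypothetical latency models (in seconds) for each component type.
# These figures are illustrative placeholders, not industry-agreed base data.
COMPONENT_LATENCY = {
    "firewall":    lambda: random.gauss(0.002, 0.0005),
    "app_server":  lambda: random.expovariate(1 / 0.050),
    "database":    lambda: random.expovariate(1 / 0.020),
    "message_bus": lambda: random.gauss(0.010, 0.003),
}

def simulate(topology, requests=10_000):
    """Forward-project end-to-end response times for a proposed topology."""
    samples = sorted(
        sum(max(COMPONENT_LATENCY[c](), 0.0) for c in topology)
        for _ in range(requests)
    )
    return {"mean": statistics.mean(samples),
            "p95": samples[int(len(samples) * 0.95)]}

# "What if?" scenarios: compare two candidate topologies before building either.
candidates = {
    "direct": ["firewall", "app_server", "database"],
    "queued": ["firewall", "app_server", "message_bus", "database"],
}
for name, topology in candidates.items():
    r = simulate(topology)
    print(f"{name:>7}: mean={r['mean']*1000:.1f}ms  p95={r['p95']*1000:.1f}ms")
```

A real model would obviously be far richer than this, but the point stands: the comparison happens in software, not in production.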

Even this isn't the difficult part - it's just software engineering and a bunch of algorithms, most of which already exist in other disciplines.
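Queueing theory from operations research is an obvious example. The standard M/M/1 results predict a server's utilisation and response time straight from its arrival and service rates - the sketch below applies them with made-up rates purely for illustration.

```python
def mm1(arrival_rate, service_rate):
    """Classic M/M/1 queue: Poisson arrivals, exponential service times."""
    rho = arrival_rate / service_rate                     # utilisation
    if rho >= 1:
        raise ValueError("server is overloaded (utilisation >= 1)")
    time_in_system = 1 / (service_rate - arrival_rate)    # mean response time
    queue_length = rho ** 2 / (1 - rho)                   # mean number queueing
    return rho, time_in_system, queue_length

# Made-up example: 80 requests/sec arriving at a server that can handle 100/sec.
rho, w, lq = mm1(80, 100)
print(f"utilisation={rho:.0%}  mean response={w*1000:.0f}ms  mean queue={lq:.1f}")
```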

The difficult part is creating an industry-wide set of commonly agreed base data that can be used to underpin the simulation model when applied to a particular organisation or business domain.

This isn't just gathering raw numbers, as the "Business Intelligence" people would have us do; it needs a detailed analysis of the variables that affect each of the metrics - e.g. time delay per firewall in a network, record density in a database storage segment, asynchronous disk I/O response time and so on.
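To illustrate the distinction, here's a sketch of what a single entry in such a base-data catalogue might look like: a metric expressed as a function of its driving variables rather than as a single raw number. The variable names and coefficients are invented for illustration only.

```python
from dataclasses import dataclass
from typing import Callable
import random

@dataclass
class MetricModel:
    """A metric plus the variables that drive it, not just a raw number."""
    name: str
    unit: str
    variables: tuple
    sample: Callable[..., float]

# Illustrative only: the coefficients below are invented, not measured base data.
firewall_delay = MetricModel(
    name="time delay per firewall",
    unit="ms",
    variables=("rule_count", "packet_size_bytes"),
    sample=lambda rule_count, packet_size_bytes: random.gauss(
        0.5 + 0.001 * rule_count + packet_size_bytes / 100_000, 0.1),
)

async_disk_io = MetricModel(
    name="asynchronous disk I/O response time",
    unit="ms",
    variables=("queue_depth",),
    sample=lambda queue_depth: random.expovariate(1 / (2.0 + 0.5 * queue_depth)),
)

print(firewall_delay.sample(rule_count=400, packet_size_bytes=1500))
print(async_disk_io.sample(queue_depth=8))
```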

Unfortunately I can't see this bit happening any time soon because most companies wouldn't see where the payback would be in doing this.
