Wednesday, January 13, 2010
Test Drive your Architectures for effective evaluations
Are you one of those sitting in the ivory towers of an Enterprise Architecture group, tasked with the responsibility of evaluating project architectures and ensuring IT governance? If you have done a few reviews, you know that it is a highly subjective process that depends largely on the evaluators' technical skills, functional and environmental knowledge, authority structure, and other political forces. Evaluators who are parachuted into a project group for reviews are especially handicapped by their lack of knowledge of the functional requirements, and often concentrate only on reviewing the 'technology' implementation. Hence, most reviews remain superficial and only partially beneficial.
So how exactly should you evaluate the architectures?
I would say do what you do when you buy a car: take a test drive! You only know a car's performance when you actually drive it and feel it, not just by reading the specifications or asking questions. Similarly, to evaluate an architecture, apply it to specific functional scenarios and measure the quality attributes.
The Software Engineering Institute (SEI) at Carnegie Mellon University has developed architecture evaluation methodologies based on the same principle. The best-known methods are:

1. Architecture Tradeoff Analysis Method (ATAM)

This methodology analyzes how well the software architecture satisfies the quality attributes (e.g., scalability, modifiability, performance) by applying the architecture to short-listed functional scenarios. It prescribes developing a quality attribute utility tree and analyzing it for each scenario and each alternative architectural approach. For each scenario, the method prescribes identifying the following:
a) Sensitivity points – a collection of components that are critical for achieving a quality attribute,
b) Trade-off points – a sensitivity point that affects multiple quality attributes, typically trading one off against another,
c) Risks – something that inhibits the system from achieving its quality goal (this also includes the decisions that are not taken),
d) Non-risks – something that is done right (and should not be changed)
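To make the utility tree and the per-scenario outputs concrete, here is a minimal sketch in Python. All names, scenarios, and ratings are hypothetical illustrations, not part of the ATAM specification; the structure simply mirrors the artifacts described above.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    description: str
    importance: str   # business importance, e.g. "H", "M", "L"
    difficulty: str   # architectural difficulty, e.g. "H", "M", "L"

@dataclass
class ScenarioAnalysis:
    scenario: Scenario
    sensitivity_points: list = field(default_factory=list)  # components critical to a quality attribute
    tradeoff_points: list = field(default_factory=list)     # sensitivity points affecting multiple attributes
    risks: list = field(default_factory=list)               # decisions (or missing decisions) threatening quality goals
    non_risks: list = field(default_factory=list)           # decisions judged sound as-is

# Utility tree: quality attribute -> refinement -> prioritized scenarios
utility_tree = {
    "performance": {
        "latency": [Scenario("Search returns results under 2s at peak load", "H", "M")],
    },
    "modifiability": {
        "new features": [Scenario("Add a new payment provider in under 2 weeks", "H", "H")],
    },
}

# Evaluators would walk the high-importance scenarios first:
high_priority = [
    s
    for refinements in utility_tree.values()
    for scenarios in refinements.values()
    for s in scenarios
    if s.importance == "H"
]
```

Filtering by importance (and difficulty) is what keeps the evaluation focused: only the scenarios that matter most to the business get the full sensitivity/trade-off/risk analysis.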
2. Cost Benefit Analysis Method (CBAM)

CBAM begins where ATAM leaves off. It prescribes analyzing the cost, benefit, and schedule implications of the architectural approaches as well, before making the final decisions.
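The core CBAM idea can be sketched in a few lines: score each candidate architectural strategy by the benefit it brings to the quality attributes, divide by its estimated cost, and rank by the resulting ratio. The strategies and numbers below are invented for illustration; in practice the benefit scores come from stakeholder elicitation.

```python
strategies = [
    # (name, elicited benefit score, estimated cost) -- all values hypothetical
    ("Add a caching tier",        90, 30),
    ("Rewrite persistence layer", 70, 80),
    ("Introduce message queue",   60, 40),
]

# Rank strategies by benefit-to-cost ratio, highest first
ranked = sorted(strategies, key=lambda s: s[1] / s[2], reverse=True)
for name, benefit, cost in ranked:
    print(f"{name}: benefit/cost = {benefit / cost:.2f}")
```

The ranking makes the trade-off explicit: a cheap strategy with a moderate benefit can beat an expensive one with a larger benefit, which is exactly the conversation CBAM is meant to force before a final decision.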
3. Active Reviews for Intermediate Designs (ARID)

This is more a design review than an architecture review, but it uses the same principle of applying the design to scenarios, and even writing pseudo-code to evaluate different parts of the design.
The SEI has documented a very formal, step-by-step process for each of these methods. That can be counterproductive, as people tend to focus on the activities rather than the principles behind them. There is great scope to tailor these methods to suit your organization, and to conduct such evaluations in an agile way.
More on this topic later…
- Amit Unde