Architects: Execute and evaluate your methods!

Since architecture is a relatively new field, much debate goes on about the methods and techniques to be used within it. As one of the key competencies of an architect is the ability to think conceptually, it is only natural for architects to engage in lengthy discussions about their tools, techniques, approaches and methods. A recent example of such a discussion can be found on the Via Nova Architectura website, where a rather opinionated posting on TOGAF 9 resulted in an involved discussion with 38 elaborate responses.

In principle, there is nothing wrong with a critical attitude towards methods. Especially in a developing discipline, we need to be critical about the efficiency and effectiveness of the methods we apply. Regrettably, however, current discussions on architecture methods tend to be based on opinions, views and impressions rather than on facts derived from thorough evaluations in practice. In the aforementioned discussion on TOGAF 9, several participants referred to their own (undocumented and unvalidated) private experiences. These experiences may well be true and hard-earned. Regrettably, however, they do not constitute a solid base for the further (mature) development of the field. I would therefore like to make the case for a more scientific approach to evaluating and refining our architecture methods. This is not an easy thing to do. But is the road to maturity ever an easy road to travel?

In my opinion, we should stop discussing architecture methods until we have conducted rigorous evaluations of these methods in practice, or at least discuss any claimed shortcomings in terms of well-documented case studies observed in real-life applications. For example, TOGAF, as a "purposely designed object", has an intended set of situations in which its creators claim it to be efficient and effective in achieving certain goals. The questions to be evaluated then are: Is the method efficient and effective in achieving these goals? Are the goals relevant? Can the method be improved further?

So rather than debating the quality of TOGAF 9, say, I think we should focus on using our architecture methods in practice, and debate rigorous methods to evaluate their efficiency and effectiveness in achieving results. To some architects, this may come as an unwelcome call for more rigour in our field, as it may leave less room for heroism and folklore. An earlier call by Ron Tolido and myself for architects to stop discussing architecture methods and get on with applying them resulted in an even more heated debate.

Mind you, I am arguing the case for the evaluation of methods based on experiences in practice, not for a desk-research-based comparison of methods based on the features found in their descriptions. In the past, ample desk research has been conducted using elaborate frameworks that classify methods in terms of their scope, viewpoints, intent, description, etc. Rather than doing more classification work, I propose we focus on evaluating their practical usage. Our focus should really be on applying methods in real-life situations, and on evaluating these engagements in order to further improve our methods. Practical application and rigorous evaluation are key here.

Methods typically contain a description of how to "do the work". A method "to achieve X" usually contains a description of the activities, and their ordering, needed to indeed achieve X. This set of activities and its ordering is quite often referred to as the method's way of working. This does not necessarily mean, however, that the way of working should be a fully elaborated recipe that applies to all situations. A method is likely to start its life aimed at achieving goals in specific contexts. A method as a "purposely designed object", however, typically aims to be applicable in a wider range of situations. This requires the way of working suggested by the method to evolve into a kind of body of actionable knowledge; in other words, a "theory for getting things done to achieve X" rather than a recipe applicable to one situation only.

Quite often, the notion of method is equated with a "recipe based" way of working only. I refer to this as a "method in the narrow sense", whereas a method taking a wider perspective, involving a "theory on getting things done", would be a "method in the broader sense". The latter requires a stronger focus on core operating principles as well as heuristics on how to make things work in specific situations. Methods can quite well start their life as a method in the narrow sense, while evolving to become a method in the broader sense. This, obviously, requires several re-design and re-factoring steps, combined with application in practice and rigorous evaluation.

Comments

  1. When you mention "desk-research based comparison of methods", I guess you are referring to the kind of comparative review carried out by IFIP WG 8.1 in the 1990s.

    While I agree that this approach had its problems, we still need some kind of evaluation framework for assembling and comparing experience.

    The problem with evaluating a methodology based on project success is that you never know whether the project succeeded because of the methodology or despite it.

    Veryard, R. (1985), "What Are Methodologies Good For?", Data Processing, Vol. 27 No. 6, pp. 9-12.

    Veryard, R. (1987), "Future of Information Systems Design Methodologies", Information and Software Technology, Vol. 29 No. 1, pp. 33-37.

  2. Richard, for any kind of empirical research, a "codification framework" is needed in which to represent one's findings. So I agree, we need a framework like the one you suggest. But this is different from the frameworks needed to classify methods based on the properties of their descriptions.

    In addition to selecting/evolving/defining a framework in which to position the evaluation results, it is also important to actually get into the act of evaluating. Doing evaluations will be difficult enough, and requires experimentation in its own right. I'll definitely look up your papers! Thanx!!

  3. I welcome any more practical approach to architecting, so let's hope we won't discuss this item too much (but enough...).

  4. One of the big problems in systematically evaluating EA experience is that of assurance latency or evaluation latency: How Long It Takes Before We Can Know If It Worked.

    For example, if you are going to test claims about lifetime cost of ownership or through-life capability management or future-proofing, you need a fairly long-term longitudinal study.

