Sixteen enthusiastic participants contributed to the informal “Birds of a Feather” session on Architecture Competence. There was a diverse set of perspectives, ranging from embedded system architects to IT services organizations to enterprise architects.
As each participant introduced themselves, we collected questions to pose to the entire group. There were more questions than we could cover in the hour allotted, but we made good progress.
We started with a discussion about what the output of a competence assessment should look like – that is, what a sponsor commissioning an assessment would expect to receive. This also took us into discussions about how an architecture competence assessment relates to a CMMI assessment, and into the various scenarios in which a sponsor would commission an assessment.
There was a range of opinions regarding the assessment results. Some people were looking for a qualitative assessment – strengths/weaknesses/opportunities/threats, or an identification of activities to “start/stop/continue”. Others wanted the results to include benchmarking against the state of the art. Everyone seemed to agree that the results should include recommendations for corrective action.
Regarding CMMI, those already using it felt that a similar representation of the assessment results (that is, a 1-to-5 scale) would make the results easier for managers to understand.
There was a discussion about whether the practice of Enterprise Architecture is defined well enough to allow an assessment of an organization’s competence in that architecture genre. This led to a discussion about assessing competence in a practice that some managers and organizations do not recognize as adding value – if managers think that enterprise architecture is a “necessary evil” required to satisfy funding or other requirements, then there is no motivation to achieve more than minimal competence.
The group also discussed assessment scenarios and triggers. One unambiguous trigger would be external requirements, such as a customer requiring a satisfactory assessment as part of a vendor qualification. At least one organization was already undertaking a competence assessment at the request of a newly appointed CTO, who saw a gap between requirements and implementation, but this proactive approach seems to be the exception.
The group closed with a discussion of the SEI’s current work in architecture competence. Following a 2008 workshop that brought in external experts to review our work, we have developed an assessment method and instrument. We consider competence at the individual, project team, and organization levels, viewed through four models of competence: the duties, skills, and knowledge of architects; human performance theory, which looks at value produced versus cost, without consideration of process and methods; organizational coordination, which compares the coordination requirements resulting from architecture decisions against the organization’s coordination capacity; and organizational learning, or how the organization acquires, internalizes, and communicates knowledge and experience. From this framework, we have identified distinct competence areas and developed questions to probe in each area. We are currently looking for organizations interested in piloting this assessment and improvement method with us.
-John Klein, SEI