Testing plays a critical role in the development of software-reliant systems. Even with the most diligent efforts of requirements engineers, designers, and programmers, faults inevitably occur. These faults are most commonly discovered and removed by testing the system and comparing what it does to what it is supposed to do. This post on the SEI Blog by Paul Clements summarizes a method that improves testing outcomes (both efficacy and cost) in software-reliant systems by using an architectural design approach: a coherent set of architectural decisions, made by architects, that helps a system meet its behavioral and quality attribute requirements.
See also these additional posts by Paul here on the SATURN blog about architecture support for testing.
Recently in this space I described the SEI’s research in architecture support for testing (AST), through which we are striving to understand how architecture can be used to achieve better testing outcomes. I also described a testing practitioners’ workshop we held at the SEI in February, which resulted in a set of 29 model problems in AST. These are problems that, if solved, would make a substantial difference in the testing of software. I invited readers of this blog to add their votes to those of our workshop participants to produce a ranking of the model problems by importance, using the following scale:
- VH (Very High) = 5 (meaning that the respondent places a very high value on this capability)
- H (High) = 4
- M (Medium) = 3
- L (Low) = 2
- VL (Very Low) = 1 (meaning that this is a capability that is not at all valuable to the respondent)
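The post doesn’t spell out how the votes were tallied, but a straightforward approach is to map each vote to its numeric weight and average per problem. Here is a minimal sketch of that aggregation; the problem names and sample votes are invented for illustration:

```python
# Hypothetical sketch of aggregating VH..VL votes into a ranking.
# The weights mirror the scale above; the sample votes are made up.
SCALE = {"VH": 5, "H": 4, "M": 3, "L": 2, "VL": 1}

votes = {
    "Problem A": ["VH", "H", "VH", "M"],
    "Problem B": ["M", "L", "H", "VL"],
    "Problem C": ["H", "H", "VH", "VH"],
}

def rank(votes):
    """Return (problem, mean score) pairs, highest-scoring first."""
    scores = {p: sum(SCALE[v] for v in vs) / len(vs) for p, vs in votes.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for problem, score in rank(votes):
    print(f"{problem}: {score:.2f}")
```

Averaging (rather than summing) keeps problems with different vote counts comparable.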
As promised, here are the results.
In a recent post, I mentioned a workshop in Architecture Support for Testing that was held at the SEI in February. The output of that workshop was a set of 30 model problems. These are problems that, if solved, would result in a significant decrease in project resources devoted to testing and/or a significant increase in system quality for a given level of expenditure. Since we are investigating the relationship between architecture and testing, each of the model problems has a flavor of architecture to it as well as a focus on testing.
Our workshop participants are, at this writing, casting their votes for the most important of these problems, but while they are doing that, I wanted to give this readership the same opportunity. The most important of the model problems (as determined by voting) will be taken to the Researchers’ Workshop on Architecture-Based Testing in Pisa, Italy, in late March. There, they will be put before some of the leading researchers to solve, or try to solve, or begin to solve, or begin to think about solving.
This is another update on our project in architecture support for testing. I would like to tell you about a series of workshops we are running (with the help of others) or participating in, and to invite you to join us.
Practitioners’ Workshop in Architecture-Based Testing, Pittsburgh, February 1-2
On February 1-2, 2011, 13 dedicated individuals braved the Pittsburgh winter to attend an invitation-only workshop on Architecture Support for Testing, run by the SEI’s project team of the same name. Our guests came to us from the U.S. Army, industry, and academia and were chosen because of their practical knowledge of software testing, architecture, or both. Their job was to speak to the needs of software system developers with respect to testing and how architecture can be used to improve testing practices.
The goal of the workshop was to help answer this question:
A few weeks ago I posted a blog entry about our new effort in Architecture-Based Testing. The project’s goal is to help find answers to the two questions shown below:
A new project at the SEI
The SEI’s Architecture-Centric Engineering (ACE) Initiative has launched a new research project called “Architecture-Based Testing.” I thought I would use this opportunity to tell everyone about it, and to ask for your feedback.
By architecture-based testing, we mean using a system’s architecture to inform and guide the system’s testing activities. While there has been substantial work devoted to this topic in the research community, little of that research seems to have filtered into practitioner communities. Hence, the promise of architecture-based testing (using architecture to reduce the time and expense of testing and to increase its effectiveness) remains unfulfilled.
Charting the territory
Work in architecture-based testing can be broadly categorized as follows: