Notes by Brendan Foote
All Architecture Evaluation Is Not the Same: Lessons Learned from More Than 50 Architecture Evaluations in Industry
Matthias Naab, Jens Knodel, and Thorsten Keuler, Fraunhofer IESE
Matthias has evaluated the architecture of many systems, ranging from tens of thousands of lines of code to tens of millions, primarily in Java, C++, and C#. From these evaluations he distills commonalities across their various stages. To start with, the initiator of an evaluation was either the development company itself or an outside party, such as a current or potential customer. The questions being asked also varied: whether the architecture is adequate for a given solution, what the impact of changing the system's paradigm would be, or how far a system diverges from its reference architecture.
Posted in Architecture-Centric Engineering, Architecture-Centric Practices, SATURN Conference
Tagged architecture evaluation, architecture review, Architecture Tradeoff Analysis Method, ATAM, SATURN 2013, SATURN Conference, SEI, software architecture, software architecture evaluation, software architecture review, software design, software development, software engineering, Software Engineering Institute, testing
QoSA is the premier forum for the presentation of new results in the area of software architecture quality. It brings together researchers, practitioners and students who are concerned with software architecture quality in a holistic way. As a working conference QoSA has a strong practical bias, encompassing research papers, industrial reports and invited talks from renowned speakers.
The best contribution of the conference will receive the ACM SIGSOFT Distinguished Paper Award. To learn more, see the full press release about QoSA and the award.
Posted in Architecture-Centric Engineering, Architecture-Centric Practices, Conferences and Events
Tagged architecture certification, architecture evaluation, architecture review, software architecture, software architecture evaluation, software architecture requirements, software architecture review, software design, software development, software engineering, testing
In 2013, the Software Engineering Institute (SEI) Architecture Technology User Network (SATURN) software architecture conference will celebrate its 9th year. Each year, SATURN attracts an international audience of practicing software architects, industry thought leaders, developers, technical managers, and researchers to share ideas, insights, and experience about effective architecture-centric practices for developing and maintaining software-intensive systems.
Posted in Architecture and Agile, Architecture-Centric Engineering, Architecture-Centric Practices, Conferences and Events, Quality Attribute Analysis, SATURN Conference, Service-Oriented Architecture
Tagged architecture certification, architecture evaluation, architecture review, Architecture Tradeoff Analysis Method, ATAM, attribute-driven design, Carnegie Mellon, cloud computing, documentation, non-functional requirements, SATURN 2013, SATURN Conference, SATURN Network, SEI, SOA, software architecture, software architecture evaluation, software architecture requirements, software architecture review, software design, software development, software engineering, Software Engineering Institute, system architecture, system of systems, systems architecture, technical debt, testing, ULS systems, ultra-large-scale systems
Win-Win with Agile Architecture
Michael Stal, Siemens Corporate Research
This keynote covered software architecture and how it can be combined with Agile in a systematic way, offering perspectives on both agility and architecture.
“Experts solve problems, geniuses avoid them” (Einstein). Architects should be geniuses.
Architecture and design are two sides of the same coin. If you knew everything in advance, you could design the best architecture, and waterfall would be a perfect fit. But the real world is not perfect.
The other side of the coin is represented by the Agile Manifesto. In software architecture, embracing change is important. However, change should be planned.
Posted in Architecture and Agile, Architecture-Centric Engineering, Architecture-Centric Practices, SATURN Conference
Tagged agile release planning, architecture evaluation, architecture review, non-functional requirements, SATURN 2012, SATURN Conference, software architecture, software architecture requirements, software architecture review, software design, software development, software engineering, technical debt, testing
Testing plays a critical role in the development of software-reliant systems. Even with the most diligent efforts of requirements engineers, designers, and programmers, faults inevitably occur. These faults are most commonly discovered and removed by testing the system and comparing what it does to what it is supposed to do. This blog post at the SEI Blog by Paul Clements summarizes a method that improves testing outcomes (including efficacy and cost) in a software-reliant system by using an architectural design approach that describes a coherent set of architectural decisions taken by architects to help meet the behavioral and quality attribute requirements of systems being developed.
See also these additional posts by Paul here on the SATURN blog about architecture support for testing.
Recently in this space I’ve described the SEI’s recent research in architecture support for testing (AST), with which we are striving to understand how architecture can be used to lead to better testing outcomes. I also described a testing practitioner’s workshop we held at the SEI in February, which resulted in a set of 29 model problems in AST. These are problems that, if solved, would make a substantial difference in the testing of software. I invited readers of this blog to add their votes to those of our workshop participants to produce a ranking of the model problems in terms of importance, using the following scale:
- VH (Very High) = 5 (meaning that the respondent places a very high value on this capability)
- H (High) = 4
- M (Medium) = 3
- L (Low) = 2
- VL (Very Low) = 1 (meaning that this is a capability that is not at all valuable to the respondent)
As promised, here are the results.
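As an aside, a ranking from votes on this kind of 5-point scale could be tallied with a simple weighted average. The sketch below is purely illustrative: the problem names and vote data are invented, and this is not necessarily how the SEI computed its ranking.

```python
# Hypothetical sketch of tallying 5-point (VL..VH) votes into a ranking.
# Problem names and vote data below are invented for illustration.

SCALE = {"VH": 5, "H": 4, "M": 3, "L": 2, "VL": 1}

# Each model problem maps to the list of votes it received.
votes = {
    "test-case generation from architecture": ["VH", "H", "VH", "M"],
    "architecture-based test coverage metrics": ["H", "M", "L", "H"],
    "fault localization via architecture views": ["M", "VL", "L", "M"],
}

def rank_problems(votes):
    """Return (problem, mean score) pairs, highest mean score first."""
    scored = {
        name: sum(SCALE[v] for v in vs) / len(vs)
        for name, vs in votes.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank_problems(votes):
    print(f"{score:.2f}  {name}")
```

A mean score keeps the tally insensitive to how many people voted on each problem; a raw sum would instead favor problems that simply drew more votes.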
On Thursday, April 28 from 1:30 to 2:30 Eastern time, Chuck Weinstock of the SEI will present a free SEI webinar, titled “Assurance Cases for Medical Devices.”
About the Webinar
Recently the U.S. Food and Drug Administration (FDA) issued guidance to infusion-pump manufacturers recommending the use of an assurance case to justify claims of safety. An assurance case is somewhat similar in form and content to a legal case. It specifies a claim regarding a property of interest, evidence that supports that claim, and a detailed argument explaining how the evidence supports the claim. Assurance cases have been used in Europe for more than 15 years to argue safety cases for military, avionics, railway, and nuclear systems. The FDA is the first U.S. organization to officially encourage their use in assessing safety-critical systems.
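The claim/evidence/argument structure described above can be pictured as a small tree of claims. The following is a hypothetical sketch only: the field names, the support-checking rule, and the infusion-pump example content are invented for illustration and are not the FDA's or SEI's notation.

```python
# Hypothetical sketch of an assurance case's claim/evidence/argument
# structure; field names and example content are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    description: str  # e.g. a test report or hazard analysis

@dataclass
class Claim:
    statement: str                  # the property being claimed
    argument: str = ""              # how the evidence supports the claim
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

    def is_supported(self):
        """A claim counts as supported if it has direct evidence, or if
        all of its subclaims are supported (a deliberately simplistic
        rule for illustration)."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(
            c.is_supported() for c in self.subclaims
        )

pump_safe = Claim(
    statement="The infusion pump delivers doses within safe limits",
    argument="Dose limits hold if both the software check and the "
             "hardware interlock are shown to work",
    subclaims=[
        Claim("Software rejects out-of-range dose requests",
              evidence=[Evidence("Unit-test report for dose validation")]),
        Claim("Hardware interlock halts delivery on overflow",
              evidence=[Evidence("Bench-test log for the interlock")]),
    ],
)
print(pump_safe.is_supported())  # → True
```

The top-level claim has no direct evidence of its own; it is supported because each subclaim is, which mirrors how an assurance case decomposes a safety claim into an argument over evidenced subclaims.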
In a recent post, I mentioned a workshop in Architecture Support for Testing that was held at the SEI in February. The output of that workshop was a set of 30 model problems. These are problems that, if solved, would result in a significant decrease in project resources devoted to testing and/or a significant increase in system quality given an expenditure level. Since we are investigating the relationship between architecture and testing, each of the model problems has a flavor of architecture to it as well as a focus on testing.
Our workshop participants are, at this writing, casting their votes for the most important of these problems, but while they are doing that, I wanted to give this readership the same opportunity. The most important of the model problems (as determined by voting) will be taken to the Researchers’ Workshop on Architecture-Based Testing in Pisa, Italy, in late March. There, they will be put before some of the leading researchers to solve, or try to solve, or begin to solve, or begin to think about solving.
This is another update about our project in architecture support for testing. I would like to tell you about a series of workshops that we are running (with the help of others) or participating in, and to invite you to join us.
Practitioners’ Workshop in Architecture-Based Testing, Pittsburgh, February 1-2
On February 1-2, 2011, 13 dedicated individuals braved the Pittsburgh winter to attend an invitation-only workshop on Architecture Support for Testing, run by the SEI’s project team of the same name. Our guests came to us from the U.S. Army, industry, and academia and were chosen because of their practical knowledge of software testing, architecture, or both. Their job was to speak to the needs of software system developers with respect to testing and how architecture can be used to improve testing practices.
The goal of the workshop was to help answer this question: