James Downey, PhD, a solution architect for Dell Services, blogs about cloud computing and contributes to @DellinTheClouds on Twitter. An active member of SDForum, James often writes for the SDForum newsletter on issues of interest to the software engineering community.
Will cloud computing make software architects obsolete? If cloud vendors take responsibility for quality attributes through SLAs, what work is left for architects? What decisions remain after the one big decision of moving to the cloud? Throughout the SOA and Cloud Computing track at the Software Engineering Institute (SEI) SATURN conference held this past week near San Francisco, SEI researchers and industry practitioners made clear that by increasing design options, the cloud dramatically expands the role of the architect. In reality, the decision to go cloud is anything but binary.
Indeed, the architectural decisions around cloud computing are of such criticality that Olaf Zimmermann, executive IT architect at IBM Research, built a guidance model and decision-making tool to assist organizations through the process of making those decisions. According to Zimmermann, organizations face hundreds of decisions when implementing cloud computing, and the literature neither makes these choices explicit nor does it clarify the order in which they should be addressed. His model includes a set of alternatives for each issue, decision drivers, recommendations, and an outcome. Essentially, the guidance model works as a pre-built issue list. Zimmermann continues to build up the issue list over time as he reviews the experience of IBM teams on projects and discovers common patterns.
Addressing many of these same issues, SEI researcher Grace A. Lewis highlighted the key decisions that organizations must make when adopting any of the three most popular forms of cloud computing–IaaS, PaaS, and SaaS. If an organization chooses infrastructure as a service (IaaS), it must decide which parts of an application to move to the cloud, what communication mechanism will connect the parts, where to store data, how to secure the application, and how to detect and communicate resource failures. Cloud computing, according to Lewis, makes the job of architect all the more challenging because it is the architect who must compensate for the loss of control over quality attributes.
Platform-as-a-service, Lewis explained, opens up its own set of questions: Should data be stored locally and computed in the cloud, or stored in the cloud as well? Where should authentication occur? Should existing systems be redesigned to better exploit the cloud? Even software-as-a-service, apparently the most out-of-the-box cloud service, forces many design choices on an organization: What type of clients shall the system support? How shall the organization monitor performance? Shall users authenticate through the SaaS application or through single sign-on? How shall the SaaS application integrate with legacy, behind-the-firewall systems? (In my own experience as an IT consultant, integration between SaaS and legacy systems often becomes more important and far more complex than anticipated.)
One of the most appealing cloud use cases, that of cloud bursting, Lewis explained, poses quite a few challenges for architects. Cloud bursting occurs when a system within a data center expands to resources in the cloud to handle peak demand. To make cloud bursting work, the application must be monitored so that triggers launch the expansion. Likewise, triggers must cause contraction when demand decreases. And architects must build in mechanisms for moving state back and forth during expansions and contractions.
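To make the trigger mechanism concrete, here is a minimal sketch of the expansion/contraction decision at the heart of cloud bursting. Everything in it is an assumption for illustration–the thresholds, the utilization metric, and the function name are hypothetical, not part of any product Lewis discussed.

```python
# Hypothetical cloud-bursting controller sketch. The thresholds and the
# notion of a single "utilization" number are illustrative assumptions;
# a real system would monitor many signals and call a provider API to
# provision or release nodes (and migrate state along with them).

BURST_THRESHOLD = 0.80     # above 80% utilization, expand into the cloud
CONTRACT_THRESHOLD = 0.40  # below 40%, drain state back and release nodes

def burst_decision(utilization, cloud_nodes):
    """Return +1 to expand into the cloud, -1 to contract, 0 to hold."""
    if utilization > BURST_THRESHOLD:
        return 1   # trigger fires: launch an additional cloud node
    if utilization < CONTRACT_THRESHOLD and cloud_nodes > 0:
        return -1  # demand fell: move state home, release a cloud node
    return 0       # steady state: no action

# Example: a loaded data center bursts; a quiet one with cloud nodes contracts.
print(burst_decision(0.95, 0))  # expand
print(burst_decision(0.30, 2))  # contract
```

The gap between the two thresholds is deliberate: without it, utilization hovering near a single cutoff would cause the system to thrash between expanding and contracting.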
Through the example of cloud bursting, Lewis raised an important point regarding the cloud. Although clouds are by definition massively scalable, deploying an application to the cloud does not magically make it massively scalable. If massive scalability is a requirement, then an application must be architected for massive scalability.
In architecting applications for massive scalability, architects have long understood the importance of designing presentation and business logic tiers to enable spreading these layers out over many servers and balancing the load between servers in a farm, perhaps across multiple data centers. In the past and still today for most applications, these horizontally scaled-out application servers utilize a single database, which can scale up through additional processors, memory, and disk space but not scale out. Relational databases, the de facto standard for decades, do not scale well across machines; their strengths–normalization, multi-table joins, and transactions–have become their weaknesses. To get around this bottleneck, organizations that require truly massive scale have pioneered whole new classes of databases–key-value stores, graph databases, and document databases–collectively referred to as NoSQL. These new classes of databases, each with its own strengths and weaknesses, scale across many servers but at the expense of the transactional capabilities and consistency so long taken for granted. By presenting new tradeoffs, NoSQL databases create ever more design choices that demand the skills of an architect.
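The scale-out tradeoff above can be sketched in a few lines. This is a toy illustration of key-based partitioning–not the design of any particular NoSQL product–showing why a key-value store spreads naturally across servers while a multi-key transaction or join becomes a cross-machine problem.

```python
import hashlib

# Toy key partitioning sketch (an illustrative assumption, not a real
# NoSQL implementation): hash each key to one of N servers. Adding
# servers scales storage and throughput out, but any operation touching
# several keys--a join or a transaction--may now span machines, which is
# the tradeoff relational databases avoided on a single box.

def server_for(key, servers):
    """Deterministically map a key to one server in the cluster."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["node-a", "node-b", "node-c"]
keys = ["user:1", "user:2", "order:7"]
placement = {k: server_for(k, servers) for k in keys}
```

Note that naive modulo hashing reshuffles most keys when the server list changes; production systems use consistent hashing for exactly that reason, which is one more design decision for the architect.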
Regardless of how the cloud evolves, it was clear to me by the end of the presentations that the years ahead will be exciting ones for software architects–new challenges, new learning, new ways to apply the principles of the discipline.
- James Downey