Many types of software systems, including big data applications, lend themselves to highly incremental and iterative development approaches. In essence, system requirements are addressed in small batches, enabling the delivery of functional releases of the system at the end of every increment, typically once a month. The advantages of this approach are many and varied. Perhaps foremost is that it constantly forces the validation of requirements and designs before too much progress is made in inappropriate directions. Ambiguity and change in requirements, as well as uncertainty in design approaches, can be rapidly explored through working software systems, not simply models and documents. Necessary modifications can be carried out efficiently and cost-effectively through refactoring before code becomes too “baked” and complex to change easily. This post at the SEI Blog by Ian Gorton of the SEI, the second in a series addressing the software engineering challenges of big data, explores how the nature of building highly scalable, long-lived big data applications influences iterative and incremental design approaches.