
Relating System Quality and Software Architecture (eBook)

eBook Download: EPUB
2014 | 1st edition
420 pages
Elsevier Science (publisher)
978-0-12-417168-8 (ISBN)
€ 89.22 incl. VAT
(CHF 87.15)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately
Relating System Quality and Software Architecture collects state-of-the-art knowledge on how to intertwine software quality requirements with software architecture and how quality attributes are exhibited by the architecture of the system. Contributions from leading researchers and industry evangelists detail the techniques required to achieve quality management in software architecting and the best way to apply those techniques effectively in various application domains (especially in cloud, mobile, and ultra-large-scale/Internet-scale architectures). Taken together, these approaches show how to assess the value of total quality management in a software development process, with an emphasis on architecture. The book explains how to improve system quality with a focus on attributes such as usability, maintainability, flexibility, reliability, reusability, agility, interoperability, and performance. It discusses the importance of clear requirements, describes patterns and tradeoffs that can influence quality, and presents metrics for quality assessment and overall system analysis. The last section of the book draws on practical experience and evidence to look ahead at the challenges organizations face in capturing and realizing quality requirements, and it explores the basis of future work in this area.

  • Explains how design decisions and method selection influence overall system quality, with lessons learned from theories and frameworks on architectural quality
  • Shows how to align enterprise, system, and software architecture for total quality
  • Includes case studies, experiments, empirical validation, and systematic comparisons with other approaches already in practice

Front Cover 1
Relating System Quality and Software Architecture 4
Copyright 5
Contents 6
Acknowledgements 16
About the Editors 18
List of Contributors 22
Foreword by Bill Curtis: Managing Systems Qualities through Architecture 26
About the Author 27
Foreword by Richard Mark Soley: Software Quality Is Still a Problem 28
Quality Testing in Software 28
Enter Automated Quality Testing 29
Whither Automatic Software Quality Evaluation? 29
Architecture Intertwined with Quality 30
About the Author 30
Preface 32
Part 1: Human-centric Evaluation for System Qualities and Software Architecture 34
Part 2: Analysis, Monitoring, and Control of Software Architecture for System Qualities 36
Part 3: Domain-specific Software Architecture and Software Qualities 38
Chapter 1: Relating System Quality and Software Architecture: Foundations and Approaches 42
Introduction 42
Quality 42
Architecture 43
System 43
Architectural scope 43
System quality and software quality 43
1.1. Quality Attributes 44
1.2. State of the Practice 46
1.2.1. Lifecycle approaches 46
1.2.1.1. Waterfall 46
1.2.1.2. Incremental 47
1.2.1.3. Iterative 47
1.2.1.4. Agile 49
1.2.2. Defining requirements 50
1.2.3. Defining the architecture 50
1.2.3.1. Documenting an architecture 51
1.2.4. Assessing an architecture 52
1.2.4.1. Quantitative versus qualitative approaches 52
1.2.4.2. Scenario-based evaluation 52
1.2.4.3. Experience-based evaluation 53
1.3. State of the Art 53
1.3.1. Loose coupling 53
1.3.2. Designing for reuse 54
1.3.3. Quality-centric design 54
1.3.4. Lifecycle approaches 54
1.3.5. Architecture representation 56
1.3.6. Qualities at runtime through self-adaptation 57
1.3.7. A value-driven perspective to architecting quality 58
References 60
Part I: Human-Centric Evaluation for Systems Qualities and Software Architecture 62
Chapter 2: Exploring How the Attribute Driven Design Method Is Perceived 64
Introduction 64
2.1. Background 65
2.1.1. ADD method 65
2.1.2. Technology acceptance model 67
2.2. The Empirical Study 68
2.2.1. Research questions 68
2.2.2. Experiment design and study variables 68
2.2.3. Participants and training 69
2.2.4. The architecting project 70
2.2.5. Data collection 70
2.3. Results 71
2.3.1. Questionnaire reliability 71
2.3.2. Descriptive statistics 71
2.3.2.1. Usefulness of ADD method 71
2.3.2.2. Ease of use of ADD method 72
2.3.2.3. Willingness to use 72
2.3.3. Hypotheses tests 72
2.4. Discussion 73
2.4.1. ADD issues faced by subjects 73
2.4.1.1. Team workload division and assignment 74
2.4.1.2. No consensus in terminology 74
2.4.1.3. ADD first iteration 74
2.4.1.4. Mapping quality attributes to tactics, and tactics to patterns 75
2.4.2. Analysis of the results 76
2.4.3. Lessons learned 77
2.4.4. Threats to validity 78
2.5. Conclusions and Further Work 78
References 79
Chapter 3: Harmonizing the Quality View of Stakeholders 82
Introduction 82
3.1. Adopted Concepts of the UFO 83
3.1.1. Selection of the Foundational Ontology 86
3.2. Assessment and Related Concepts 86
3.2.1. Specification-level concepts 87
3.2.2. Execution-level concepts 90
3.2.3. State of the Art: Addressing basic quality-related concepts 91
3.3. The Harmonization Process 92
3.3.1. Quality Subjects' positions in the harmonization process 92
3.3.2. Process definition and harmonization levels 93
3.3.3. Running example 94
3.3.4. View harmonization process 94
3.3.4.1. Stage 1: Harmonizing artifacts 94
3.3.4.1.1. Artifact harmonization example 95
3.3.4.2. Stage 2: Harmonizing property types 95
3.3.4.2.1. Example of harmonizing property types 95
3.3.4.3. Stage 3: Aligning quality views 96
3.3.4.3.1. Quality view alignment example 96
3.3.5. Quality harmonization process 97
3.3.5.1. Substitution artifacts 97
3.3.5.2. Rank-oriented and property-oriented harmonization: Expected property state 98
3.3.5.3. Stage 1: Producing an initial example property state 98
3.3.5.3.1. Example for producing the initial example property state 99
3.3.5.4. Stage 2: Executing initial assessment and deciding on a negotiation 99
3.3.5.4.1. Example of a negotiation decision 100
3.3.5.5. Stage 3: Performing negotiations 100
3.3.5.5.1. Example of negotiations 100
3.3.6. State of the art: Addressing harmonization process activities 101
3.3.6.1. Addressing organization sides 101
3.3.6.2. Addressing quality subjects 101
3.3.6.3. Generic process-based techniques 102
3.3.6.4. Addressing harmonization activities 102
3.4. Practical Relevance 103
3.4.1. Empirical studies 103
3.4.1.1. Conducting interviews 104
3.4.1.2. Postmortem analysis 104
3.4.1.3. Data processing 104
3.4.1.4. Soundness factors 105
3.4.2. Practical application 105
3.4.2.1. The QuASE process 105
3.4.2.2. QuOntology and QuIRepository 106
3.4.2.3. Elicitation stage 106
3.5. Conclusions and Future Research Directions 107
3.5.1. Basic conclusions 107
3.5.2. Future research and implementation directions 108
Acknowledgment 108
References 109
Chapter 4: Optimizing Functional and Quality Requirements According to Stakeholders' Goals 116
Introduction 116
4.1. Smart Grid 118
4.1.1. Description of smart grids 118
4.1.2. Functional requirements 120
4.1.3. Security and privacy requirements 121
4.1.4. Performance requirements 121
4.2. Background, Concepts, and Notations 123
4.2.1. The i* framework 123
4.2.2. Problem-oriented requirements engineering 123
4.2.3. Valuation of requirements 125
4.2.4. Optimization 127
4.3. Preparatory Phases for QuaRO 128
4.3.1. Understanding the purpose of the system 128
4.3.2. Understanding the problem 130
4.4. Method for Detecting Candidates for Requirements Interactions 130
4.4.1. Initialization phase: Initial setup 133
4.4.1.1. Set up initial tables 133
4.4.1.2. Set up life cycle 135
4.4.2. Phase 1: Treating case 1 135
4.4.3. Phase 2: Treating case 2 136
4.4.4. Phase 3: Treating case 3 137
4.4.5. Phase 4: Treating case 4 138
4.5. Method for Generation of Alternatives 139
4.5.1. Relaxation template for security 139
4.5.2. Relaxation template for performance 143
4.6. Valuation of Requirements 145
4.7. Optimization of Requirements 152
4.8. Related Work 156
4.9. Conclusions and Perspectives 157
Acknowledgment 158
References 159
Part II: Analysis, Monitoring, and Control of Software Architecture for System Qualities 162
Chapter 5: HASARD: A Model-Based Method for Quality Analysis of Software Architecture 164
Introduction 164
Motivation 164
Related works and open problems 165
Software quality models 165
Quality analysis of software architecture 166
Hazard analysis methods and techniques 167
Overview of the proposed approach 168
Organization of the chapter 169
5.1. Hazard Analysis of Software Architectural Designs 170
5.1.1. Identification of design hazards 170
5.1.2. Cause-consequence analysis 171
5.2. Graphical Modeling of Software Quality 174
5.2.1. Graphic notation of quality models 174
5.2.2. Construction of a quality model 176
5.3. Reasoning About Software Quality 177
5.3.1. Contribution factors of a quality attribute 177
5.3.2. Impacts of design decisions 179
5.3.3. Quality risks 180
5.3.4. Relationships between quality issues 181
5.3.5. Trade-off points 183
5.4. Support Tool SQUARE 184
5.5. Case Study 186
5.5.1. Research questions 186
5.5.2. The object system 186
5.5.3. Process of the case study 187
5.5.4. Main results of quality analysis 188
5.5.5. Conclusions of the case study 190
5.6. Conclusion 192
5.6.1. Comparison with related work 192
5.6.1.1. Software quality models 192
5.6.1.2. Hazard analysis 193
5.6.1.3. Evaluation and assessment of software architecture 194
5.6.2. Limitations and future work 194
Acknowledgments 195
References 195
Chapter 6: Lightweight Evaluation of Software Architecture Decisions 198
Introduction 198
6.1. Architecture Evaluation Methods 199
6.2. Architecture Decisions 201
6.2.1. Decision forces 204
6.3. Decision-Centric Architecture Review 206
6.3.1. Participants 206
6.3.2. Preparation 207
6.3.3. DCAR method presentation 208
6.3.4. Business drivers and domain overview presentation 208
6.3.5. Architecture presentation 208
6.3.6. Decisions and forces completion 209
6.3.7. Decision prioritization 209
6.3.8. Decision documentation 210
6.3.9. Decision evaluation 211
6.3.10. Retrospective 211
6.3.11. Reporting the results 212
6.3.12. Schedule 212
6.4. Industrial Experiences 213
6.4.1. Industrial case studies 213
6.4.2. Additional observations made in our own projects 215
6.5. Integrating DCAR with Scrum 216
6.5.1. Up-front architecture approach 216
6.5.2. In sprints approach 217
6.6. Conclusions 218
Acknowledgments 218
References 219
Chapter 7: A Rule-Based Approach to Architecture Conformance Checking as a Quality Management Measure 222
Introduction 222
7.1. Challenges in Architectural Conformance Checking 223
7.2. Related Work 224
7.3. A Formal Framework for Architectural Conformance Checking 226
7.3.1. Formal representation of component-based systems 227
7.3.2. Formal representation of models 230
7.3.2.1. Classification of models 230
7.3.2.2. Transformation of models 231
7.3.3. Conformance of models 231
7.3.4. Prototypical implementation 232
7.4. Application of the Proposed Approach 235
7.4.1. The common component modeling example 235
7.4.1.1. Architectural aspects of CoCoME 235
7.4.1.2. The architectural rules of CoCoME 239
7.4.1.3. Results of checking the architectural rules of CoCoME 240
7.4.2. Further case studies 241
7.4.2.1. Checking layers 241
7.4.2.2. Checking domain-specific reference architectures 242
7.4.3. Results 242
7.5. Conclusion 243
7.5.1. Contribution and limitations 243
7.5.2. Future work 245
7.5.3. Summary 246
References 246
Chapter 8: Dashboards for Continuous Monitoring of Quality for Software Product under Development 250
Introduction 250
8.1. Developing Large Software Products Using Agile and Lean Principles 252
8.2. Elements of Successful Dashboards 253
8.2.1. Standardization 253
8.2.2. Focus on early warning 255
8.2.3. Focus on triggering decisions and monitoring their implementation 256
8.2.4. Succinct visualization 256
8.2.5. Assuring information quality 257
8.3. Industrial Dashboards 258
8.3.1. Companies 258
8.3.1.1. Ericsson 259
8.3.1.2. Volvo Car Corporation 259
8.3.1.3. Saab electronic defense systems 259
8.3.2. Dashboard at Ericsson 260
8.3.3. Dashboard at VCC 262
8.3.4. Dashboard at Saab electronic defense systems 264
8.4. Recommendations for Other Companies 266
8.4.1. Recommendations for constructing the dashboards 266
8.4.2. Recommendations for choosing indicators and measures 267
8.5. Further Reading 267
8.6. Conclusions 268
References 269
Part III: Domain-Specific Software Architecture and Software Qualities 272
Chapter 9: Achieving Quality in Customer-Configurable Products 274
Introduction 274
Outline of the chapter 275
9.1. The Flight Management System Example 276
9.2. Theoretical Framework 277
9.2.1. Configurable models 277
9.2.1.1. System views 277
9.2.1.2. Variability within the views 278
9.2.1.3. Variability view 278
9.2.1.4. Feature configuration, products, and resolution of model variance points 279
9.2.2. Quality assurance of configurable systems 280
9.2.2.1. Product-centered approaches 281
9.2.2.2. Product-line-centered approaches 281
9.3. Model-Based Product Line Testing 282
9.3.1. What is MBT? 282
9.3.2. Test model for the flight management system 283
9.3.3. Applying MBT 283
9.3.4. Product-centered MBT 285
9.3.5. Product-line-centered MBT 286
9.3.6. Comparison 287
9.4. Model-Based deployment 288
9.4.1. What is deployment? 288
9.4.2. Spatial and temporal deployment 289
9.4.3. Application and resource models for the flight management system 290
9.4.4. Product-centered software deployment 292
9.4.4.1. Step 1: Deriving the product models 292
9.4.4.2. Step 2: Evaluating the deployment candidates 292
9.4.4.3. Step 3: Aggregation of results 294
9.4.5. Product-line-centered software deployment 294
9.4.5.1. Reuse of previously computed allocations 295
9.4.5.2. Maximum approach 296
9.5. Related Work 296
9.5.1. General product line approaches 296
9.5.2. Product line testing 297
9.5.3. Deployment 298
9.6. Conclusion 298
9.6.1. Model-based product line testing 298
9.6.2. Model-based deployment for product lines 299
References 299
Chapter 10: Archample - Architectural Analysis Approach for Multiple Product Line Engineering 304
Introduction 304
10.1. Background 305
10.1.1. Multiple product line engineering 305
10.1.2. Software architecture analysis methods 306
10.2. Case Description 307
10.3. MPL Architecture Viewpoints 308
10.4. Archample Method 311
10.4.1. Preparation phase 313
10.4.2. Selection of feasible MPL decomposition 313
10.4.3. Evaluation of selected MPL design alternative 314
10.4.4. Reporting and workshop 315
10.5. Applying Archample Within an Industrial Context 316
10.5.1. Preparation phase 316
10.5.2. Selection of feasible MPL decomposition 316
10.5.3. Evaluation of the selected MPL design alternative 322
10.5.4. Reporting and workshop 323
10.6. Related Work 323
10.7. Conclusion 325
Acknowledgments 325
References 325
Chapter 11: Quality Attributes in Medical Planning and Simulation Systems 328
Introduction 328
11.1. Chapter Contributions 330
11.2. Background and Related Work 331
11.2.1. MPS systems 331
11.2.2. Software development for MPS systems 331
11.2.3. Quality attributes of MPS 332
11.3. Challenges Related to Achieving Quality Attributes in MPS Systems 332
11.4. Quality Attributes in MPS Systems 334
11.4.1. Performance 334
11.4.2. Usability 335
11.4.3. Model correctness 335
11.5. Handling Quality Attributes at the Architecture Stage of MPS Systems 336
11.5.1. Architectural stakeholders 336
11.5.2. MPS architecture documentation 337
11.5.3. Architecture process 338
11.6. Conclusions 340
References 340
Chapter 12: Addressing Usability Requirements in Mobile Software Development 344
Introduction 344
12.1. Related Work 345
12.2. Usability Mechanisms for Mobile Applications 346
12.3. System Status Feedback 348
12.3.1. SSF generic component responsibilities 348
12.3.2. SSF architectural component responsibilities 351
12.4. User Preferences 351
12.4.1. User Preferences generic component responsibilities 353
12.4.2. User Preferences architectural component responsibilities 355
12.5. A Mobile Traffic Complaint System 356
12.5.1. Usability requirements 358
12.5.2. Impact on the software architecture 358
12.5.3. Usability and interactions between entities 361
12.6. Discussion 363
12.7. Conclusions 363
References 364
Chapter 13: Understanding Quality Requirements Engineering in Contract-Based Projects from the Perspective of Software Architects 366
Introduction 366
13.1. Motivation 367
13.2. Background on the Context of Contract-Based Systems Delivery 368
13.3. Empirical Studies on the Software Architecture Perspective on QRs 370
13.4. Research Process 370
13.4.1. Research objective and research plan 370
13.4.2. The case study participants 371
13.4.3. The research instrument for data collection 373
13.4.4. Data analysis strategy 374
13.5. Results 375
13.5.1. RQ1: How do the software architects understand their role with respect to engineering QRs? 375
13.5.2. RQ2: Do SAs and RE staff use different terminology for QRs? 378
13.5.3. RQ3: How do QRs get elicited? 379
13.5.4. RQ4: How do QRs get documented? 381
13.5.5. RQ5: How do QRs get prioritized? 381
13.5.6. RQ6: How do QRs get quantified, if at all? 383
13.5.7. RQ7: How do QRs get validated? 385
13.5.8. RQ8: How do QRs get negotiated? 386
13.5.9. RQ9: What role does the contract play in the way SAs cope with QRs? 387
13.6. Discussion 389
13.6.1. Comparing and contrasting the results with prior research 389
13.6.1.1. Role of SAs in engineering of QR 389
13.6.1.2. QRs vocabulary of SAs and RE staff 390
13.6.1.3. QR elicitation 390
13.6.1.4. QRs documentation 390
13.6.1.5. QRs prioritization 390
13.6.1.6. QRs quantification 390
13.6.1.7. QRs validation 391
13.6.1.8. QRs negotiation 391
13.6.1.9. Contract's role in SAs' coping strategies 392
13.6.2. Implications for practice 392
13.6.3. Implications for research 393
13.7. Limitations of the Study 394
13.8. Conclusions 395
Acknowledgments 396
References 396
Glossary 400
Author Index 402
Subject Index 414

Foreword by Richard Mark Soley: Software Quality Is Still a Problem


Richard Mark Soley, Ph.D., Chairman and Chief Executive Officer, Object Management Group, Lexington, Massachusetts, U.S.A.

Since the dawn of the computing age, software quality has been an issue for developers and end users alike. I have never met a software user—whether mainframe, minicomputer, personal computer, or personal device—who is happy with the level of quality of that device. From requirements definition, to user interface, to likely use case, to errors and failures, software infuriates people every minute of every day.

Worse, software failures have had life-changing effects on people. The well-documented Therac-25 user interface failure literally caused deaths. The initial Ariane-5 rocket launch failure was in software. The Mars Climate Orbiter crash landing was caused by a disagreement between two development teams on measurement units. Failures in banking, trading, and other financial services caused by software surround us; no one is surprised when systems fail, and the (unfortunately generally correct) assumption is that software was the culprit.

From the point of view of the standardizer and the methodologist, the most difficult thing to accept is the fact that methodologies for software quality improvement are well known. From academic perches as disparate as Carnegie Mellon University and Queen's University (Prof. David Parnas) to the Eidgenössische Technische Hochschule Zürich (Prof. Bertrand Meyer), detailed research and well-written papers have appeared for decades, detailing how to write better-quality software. The Software Engineering Institute, founded some 30 years ago by the United States Department of Defense, has focused precisely on the problem of developing, delivering, and maintaining better software, through the development, implementation, and assessment of software development methodologies (most importantly the Capability Maturity Model and its later updates).

Still, trades go awry, distribution networks falter, companies fail, and energy goes undelivered because of software quality issues. Worse, correctable problems such as common security weaknesses (most infamously the buffer overflow weakness) are written every day into security-sensitive software.

Perhaps methodology isn't the only answer. It's interesting to note that, in manufacturing fields outside of the software realm, there is the concept of acceptance of parts. When Boeing and Airbus build aircraft, they do it with parts built not only by their own internal supply chains, but in great (and increasing) part, by including parts built by others, gathered across international boundaries and composed into large, complex systems. That explains the old saw that aircraft are a million parts, flying in close formation! The reality is that close formation is what keeps us warm and dry, miles above ground; and that close formation comes from parts that fit together well, that work together well, that can be maintained and overhauled together well. And that requires aircraft manufacturers to test the parts when they arrive in the factory and before they are integrated into the airframe. Sure, there's a methodology for building better parts—those methodologies even have well-accepted names, like “total quality management,” “lean manufacturing,” and “Six Sigma.” But those methodologies do not obviate the need to test parts (at least statistically) when they enter the factory.

Quality Testing in Software


Unfortunately, that ethos never made it into the software development field. Although you will find regression testing and unit testing, and specialized unit-testing tools like JUnit in the Java world, there has never been a widely accepted practice of software part testing based solely on the (automated) examination of the software itself. My own background in the software business included a (non-automated) examination phase: the Multics Review Board quality-testing requirement for the inclusion of new code into the Honeywell Multics operating system 35 years ago measurably and visibly increased the overall quality of the Multics code base, and it showed that examination, even human examination, was of value to both development organizations and systems users. The cost, however, was rather high and has only been considered acceptable for systems with very high failure impacts (for example, in the medical and defense fields).
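
To make the contrast concrete, here is a minimal sketch of the kind of automated part test alluded to above, written against JUnit 5; the Calculator class and its add method are hypothetical stand-ins invented for this illustration, not anything from the book or from Multics:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical software "part" under test, invented for this sketch.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    class CalculatorTest {
        // An automated, repeatable acceptance check: unlike a human review
        // board, it costs almost nothing to rerun and gives the same
        // verdict every time.
        @Test
        void addReturnsSumOfOperands() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

Such a test exercises one part in isolation, much like a factory-gate acceptance check on an incoming component; what it does not do is examine the software artifact itself, which is the gap the following sections turn to.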

When Boeing and Airbus test parts, they certainly do some hand inspection, but there is far more automated inspection. After all, one can't see inside the parts without machines like X-ray and NMR scanners, and one can't test metal parts to destruction (to determine tensile strength, for example) without automation. That same automation should and must be applied in testing software—increasing the objectivity of acceptance tests, increasing the likelihood that those tests will be applied (due to lower cost), and eventually increasing the quality of the software itself.

Enter Automated Quality Testing


In late 2009, the Object Management Group (OMG) and the Software Engineering Institute (SEI) came together to create the Consortium for IT Software Quality (CISQ). The two groups realized the need to find another approach to increase software quality, since

  • Methodologies to increase software process quality (such as CMMI) had had insufficient impact on their own in increasing software quality.

  • Software inspection methodologies based on human examination of code tend to be error-prone, subjective, inconsistent, and too expensive to be widely deployed.

  • Existing automated code evaluation systems had no consistent (standardized) set of metrics, resulting in inconsistent results and very limited adoption in the marketplace.

The need for the software development industry to develop and widely adopt automated quality tests was absolutely obvious, and the Consortium immediately embarked on a process (based on OMG's broad and deep experience in standardization and SEI's broad and deep experience in assessment) to define automatable software quality standard metrics.

Whither Automatic Software Quality Evaluation?


The first standard that CISQ brought through the OMG process, arriving at the end of 2012, featured a standard, consistent, reliable, and accurate complexity metric for code, in essence an update to the Function Point concept. Function points were first defined in 1979, and by 2012 there were five ISO standards for counting them, none of which was fully reliable and repeatable; that is, individual (human) function-point counters could come up with different results when counting the same piece of software twice! CISQ's Automated Function Points (AFP) standard, by contrast, is fully automatable and produces absolutely consistent results from one run to the next.

That doesn't sound like much of an accomplishment, until one realizes that one can't compute a defect, error, or other size-dependent metric without an agreed sizing strategy. AFP provides that strategy, and in a consistent, standardized fashion that can be fully automated, making it inexpensive and repeatable.
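
As a worked illustration of that point (all figures here are invented, not taken from CISQ or the book), a defect count becomes a comparable quality metric only once it is divided by a size that is measured the same way every time:

    // Illustration only: the numbers are invented for this sketch.
    public class DefectDensity {
        public static void main(String[] args) {
            double sizeInAfp = 1200.0;   // size from an automated, repeatable AFP count
            double knownDefects = 30.0;  // defects recorded against the same system
            // The density is comparable across systems and over time only
            // because the denominator is computed identically on every run.
            double defectsPerAfp = knownDefects / sizeInAfp;
            System.out.printf("Defect density: %.4f defects per AFP%n", defectsPerAfp);
        }
    }

With manual function-point counting, two counters could produce two different denominators for the same code base, so the resulting densities could not be meaningfully compared; with AFP, the 0.025 defects per AFP above is reproducible.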

In particular, how can one measure the quality of a software architecture without a baseline, without a complexity metric? AFP provides that baseline, and further quality metrics under development by CISQ and expected to be standardized this year, provide the yardstick against which to measure software, again in a fully automatable fashion.

Is it simply lines of code that are being measured, or in fact entire software designs? Quality is inextricably connected to architecture in several places: not only can poor coding or modeling quality lead to poor usability and fitness for purpose, but poor software architecture can lead to a deep mismatch with the requirements that led to the development of the system in the first place.

Architecture Intertwined with Quality


Clearly software quality—in fact, system quality in general—is a fractal concept. Requirements can poorly quantify the needs of a software system; architectures and other artifacts can poorly outline the analysis and design against those requirements; implementation via coding or modeling can poorly execute the design artifacts; testing can poorly exercise an implementation; and even quotidian use can incorrectly take advantage of a well-implemented, well-tested design. Clearly, quality testing must take into account design artifacts as well as those of implementation.

Fortunately, architectural quality methodologies (and indeed quality metrics across the landscape of software development) are active areas of research, with promising approaches. Given my own predilections and the technical focus of OMG over the past 16 years, modeling (of requirements, of design, of analysis, of implementation, and certainly of architecture) must clearly be at the fore, and model- and rule-based approaches to measuring architectures are featured here. But the tome you are holding also includes a wealth of current research and understanding, from measuring requirements design against customer needs to usability testing of completed systems. If the software industry—and that's every industry these days—is going to increase not only the underlying but also the perceived level of...

Publication date (per publisher): July 30, 2014
Language: English
Subject areas: Mathematics / Computer Science › Computer Science › Programming Languages / Tools
Mathematics / Computer Science › Computer Science › Software Development
ISBN-10: 0-12-417168-0 / 0124171680
ISBN-13: 978-0-12-417168-8 / 9780124171688
