STUDY MATERIAL
This document contains the study material for Software Project Management as per the JNTU
syllabus.
CHAPTER #   CHAPTER   PAGE NO.   NO. OF QUESTIONS
UNIT - I
1 Conventional Software Management 4 3
2 Evolution of Software Economics 13 4
UNIT – II
3 Improving Software Economics 18 10
UNIT – III
4 The Old Way and the New 32 4
5 Life-Cycle Phases 40 3
UNIT – IV
6 Artifacts of the Process 46 10
7 Model-Based Software Architecture 63 6
UNIT – V
8 Workflows of the Process 67 3
9 Checkpoints of the Process 72 4
10 Iterative Process Planning 79 4
UNIT – VI
11 Project Organizations and Responsibilities 89 4
12 Process Automation 97 7
UNIT – VII
13 Project Control and Process Instrumentation 111 6
14 Tailoring the Process 125 9
UNIT – VIII
15 Modern Project Profiles 133 7
16 Next-Generation Software Economics 140 4
17 Modern Process Transitions 146 2
Appendix D CCPDS-R Case Study 150 18
TEACHING PLAN
UNIT V
8 Workflows of the Process
8.1 Software Process Workflows 118 – 121 1
8.2 Iteration Workflows 121 – 124 1
9 Checkpoints of the Process
9.1 Major Milestones 126 – 132 2
9.2 Minor Milestones 132 – 133 1
9.3 Periodic Status Assessments 133 – 134 1
10 Iterative Process Planning
10.1 Work Breakdown Structures 139 – 146 2
10.2 Planning Guidelines 146 – 149 1
10.3 The Cost and Schedule Estimating Process 149 – 150 1
10.4 The Iteration Planning Process 150 – 153 1
10.5 Pragmatic Planning 153 – 154 1
UNIT VI
11 Project Organizations and Responsibilities
11.1 Line-of-Business Organizations 156 – 158 1
11.2 Project Organizations 158 – 165 2
11.3 Evolution of Organizations 165 – 166 1
12 Process Automation
12.1 Tools: Automation Building Blocks 168 – 172 1
12.2 The Project Environment 172 - 186 2
UNIT VII
13 Project Control and Process Instrumentation
13.1 The Seven Core Metrics 188 – 190 1
13.2 Management Indicators 190 – 196 2
13.3 Quality Indicators 196 – 199 1
13.4 Life-Cycle Expectations 199 – 201 1
13.5 Pragmatic Software Metrics 201 – 202 1
13.6 Metrics Automation 202 – 208 1
14 Tailoring the Process
14.1 Process Discriminants 209 – 218 2
14.2 Example: Small-Scale Project versus Large-Scale 218 – 220 1
Project
UNIT VIII
15 Modern Project Profiles
15.1 Continuous Integration 226 – 227 1
15.2 Early Risk Resolution 227 – 228 1
15.3 Evolutionary Requirements 228 – 229 1
15.4 Teamwork among Stakeholders 229 – 231 1
15.5 Top 10 Software Management Principles 231 – 232 1
15.6 Software Management Best Practices 232 – 236 1
16 Next-Generation Software Economics
16.1 Next-Generation Cost Models 237 – 242 2
16.2 Modern Software Economics 242 – 247 1
17 Modern Process Transitions
17.1 Culture Shifts 248 – 251 1
17.2 Denouement 251 - 254 1
Total 75
Software Crisis:
Flexibility of the software is both a boon and a bane.
Boon: it can be programmed to do anything.
Bane: because of the “anything” factor, it becomes difficult to plan, monitor, and control
software development.
This unpredictability is the basis of what is known as “software crisis”.
A number of analyses were done on the state of the software engineering industry over the last
decades.
Their findings concluded that the success rate of software projects is very low.
Their other findings can be summarized as:
1. Software development is highly unpredictable.
Only about 10% of projects are delivered successfully within initial budget and schedule
estimates.
2. It is the management discipline, more than technology advances, that is responsible
for the success or failure of projects.
3. The level of software scrap and rework is indicative of an immature process.
The above three conclusions, while showing the magnitude of the problem and the state
of current software management, prove that there is much room for improvement.
As a retrospective, we shall examine the waterfall model theory to critically analyze how the
industry ignored much of the theory, but still managed to evolve good and not-so-good practices,
particularly while using the modern technologies.
1.1.1 IN THEORY
Winston Royce’s paper – Managing the Development of Large Scale Software Systems – based
on lessons learned while managing large software projects, provides a summary of conventional
software management philosophy.
[Figure: Waterfall Model Part I – the sequential steps: system requirements → software
requirements → analysis → program design → coding → testing → operations]
Waterfall Model Part II: The large-scale system approach
Waterfall Model Part III: 5 necessary improvements for this approach to work
1.1.2 IN PRACTICE
Projects using the conventional process exhibited the following symptoms
characterizing their failure:
Protracted integration and late design breakage
Late risk resolution
Requirements-driven functional decomposition
Adversarial stakeholder relationships
Focus on documents and review meetings
Figure 1-2 illustrates development progress versus time for a typical development project using
the waterfall model management process.
Progress is defined as percent coded – that is, demonstrable in its target form.
Software that is compilable and executable is not necessarily complete, compliant, or up to
specifications.
From the figure we can notice, regarding the development activities, that:
Early success via paper designs and thorough briefings
Commitment to code late in the life cycle
Integration difficulties due to unforeseen implementation issues and interface ambiguities
Heavy budget and schedule pressure to get the system working
Late and last-minute efforts of non-optimal fixes, with no time for redesign
A very fragile, unmaintainable product delivered late
Given the immature languages and technologies used in the conventional approach,
there was substantial emphasis on perfecting the design before committing it to code;
consequently, the design was difficult to understand or change later.
This practice resulted in the use of multiple formats –
requirements in English
preliminary design in flowcharts
detailed design in program design languages
implementations in the target languages like FORTRAN, COBOL, or C
and error-prone, labor-intensive translations between formats.
Conventional techniques imposed a waterfall model on the design process.
This resulted in late integration and lower performance levels.
In this scenario, the entire system was designed on paper, then implemented all at once, then
integrated.
Only at the end of this process there was scope for system testing to verify the soundness of the
fundamental architecture – interfaces and structure.
Generally, in conventional processes 40% or more of life-cycle resources are consumed by
testing.
Activity:  Requirements analysis → Program design → Coding and unit testing → Protracted integration and testing
Format:    Ad hoc text → Flowcharts → Source code → Configuration baseline
Product:   Documents → Documents → Coded units → Fragile baselines

[Chart: development progress (% coded) versus project schedule – slow early progress,
integration beginning late, and late design breakage before reaching 100%]
Figure 1-2. Progress profile of a conventional software project
ACTIVITY COST
Management 5%
Requirements 5%
Design 10%
Code and unit testing 30%
Integration and test 40%
Deployment 5%
Environment 5%
Total 100%
Lack of early risk resolution is another serious issue associated with the waterfall process.
This was due to the focus on early paper artifacts in which the real design, implementation, and
integration risks were relatively intangible.
The risk profile of waterfall model projects includes four distinct periods of risk exposure, where
risk is defined as the probability of missing a cost, schedule, feature, or quality goal.
Early in the life cycle – as the requirements are being specified – the actual risk exposure is
highly unpredictable.
After a design concept is available – even on paper – the risk exposure stabilizes.
It usually stabilizes at a relatively high level, because there are too few tangible facts to
support an objective assessment.
As the system is coded, some of the individual component risks get resolved.
As the integration begins, the real system-level quantities and risks become tangible.
During this period many real design issues are resolved and engineering trade-offs are made.
Resolving these issues late in the life cycle, when there is great inertia inhibiting changes, is very
expensive.
Figure 1-3. Risk Profile of a conventional software project across its life cycle
This process tends to resolve the important risks, but at the expense of quality and
maintainability.
Redesigning may also include tying loose-ends at the last minute and patching up bits and pieces
into a coherent single piece.
These sorts of changes do not conserve the overall design integrity and its maintainability.
The conventional process focused more on producing documents in an attempt to describe the
software product, than focusing on producing tangible increments of the products themselves.
Difficult words:
Scrap: waste/refuse
amass: to collect over time
embryonic: early in the life cycle
deploy: to install
Concise: brief and clear
shoe-horn: to force into a space that is barely adequate
Fragile: delicate
adversarial: characterized by opposition
Façade: a deceptive outward appearance
Technologies for environment automation, size reduction, and process improvement are
not independent of one another.
In each era, the key is complementary growth in all technologies.
For example, the process advances couldn’t be used successfully without new component
technologies and increased tool automation.
The use of modern practices and the promise of improved economics are not always
guaranteed.
[Chart: software ROI versus software size across three technology eras]

1960s–1970s: waterfall model, functional design, diseconomy of scale
1980s–1990s: process improvement, encapsulation-based design, diseconomy of scale
2000 and on: iterative development, component-based design, return on investment
This is because of the multiplicity of similar projects being very large in size (systems of
systems), and most of them being long-lived products.
Figure 2-2 provides an overview of how a ROI profile can be achieved in subsequent
efforts across life cycles of different domains.
Among all the commercial cost estimation models available, COCOMO, Ada COCOMO,
and COCOMO II are the most open and well-documented models.
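As an illustration only – the constants below are the widely published Basic COCOMO "organic mode" coefficients from Boehm's original model, not values given in this material – the core effort and schedule equations can be sketched in Python:

```python
# Basic COCOMO: effort = a * KLOC^b (person-months), schedule = c * effort^d (months).
# Coefficients are the standard "organic mode" values (a=2.4, b=1.05, c=2.5, d=0.38).

def cocomo_basic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Return (effort in person-months, schedule in calendar months)."""
    effort = a * kloc ** b       # b > 1 models the diseconomy of scale
    schedule = c * effort ** d   # elapsed calendar time
    return effort, schedule

effort, months = cocomo_basic(32)  # a hypothetical 32 KLOC project
print(f"effort = {effort:.0f} person-months, schedule = {months:.0f} months")
```

COCOMO II refines this shape with scale factors and effort multipliers, but the basic form – effort growing faster than size, since b > 1 – is the same.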
With reference to the measurement of software size, there are basically two points of
view: SLOC and FP. [A third – the ad hoc point of view of immature developers – uses
no systematic measurement of size.]
Today, language advance and the use of components, automatic source code generation,
and OOP have made SLOC a much more ambiguous measure.
Most real-world use of cost models is bottom-up (substantiating a target cost) rather than
top-down (estimating the “should” cost).
[Figure: in bottom-up estimation, the cost modeler weighs risks, options, trade-offs, and
alternatives and produces a cost estimate together with its justification – "Here is how to
justify that cost."]
It provides scope to the project manager to examine the risks associated with achieving
the target costs, and to discuss this information with other stakeholders.
It results in various combinations of plans, designs, process, or scope being proposed.
This process provides a platform for a basis of estimate and an overall cost analysis.
Independent cost estimates – done by people independent of the development team – are
generally inaccurate.
A credible estimate can be produced by a competent team – consisting of the software
project manager, and the software architecture, development, and test managers –
iteratively preparing several estimates and sensitivity analyses.
Such a team, ultimately, takes the ownership of that cost estimate for the project to
succeed.
These parameters are given in priority order for most software domains:
TABLE 3-1. Important trends in improving software economics

Size (abstraction and component-based development technologies) – Trends: higher order
languages (C++, Ada 95, Java, VB, etc.); object-oriented analysis, design, and
programming; reuse; commercial components

Process (methods and techniques) – Trends: iterative development; process maturity
models; architecture-first development; acquisition reform

Personnel (people factors) – Trends: training and personnel skill development; teamwork;
win-win cultures

Environment (automation technologies and tools) – Trends: integrated tools (visual
modeling, compiler, editor, debugger, change management, etc.); open systems; hardware
platform performance; automation of coding, documents, testing, analyses

Quality (performance, reliability, accuracy) – Trends: hardware platform performance;
demonstration-based assessment; statistical quality control
The above table lists some of the technology developments, process improvement efforts,
and management approaches targeted at improving the economics of software
development and integration.
There are significant dependencies among these trends.
Tools enable size reduction and process improvements.
Size reduction approaches lead to process changes.
Process improvements drive tool requirements.
In the domain of user interface software, a decade earlier, development teams had to
spend extensive time analyzing operations, human factors, screen layout, and screen
dynamics – all on paper, because committing designs to construction was very expensive.
The initial stages thus carried a heavy workload of paper artifacts, which had to be frozen
after obtaining user concurrence so that the high construction costs could be minimized.
Today, graphical user interface (GUI) technology tools enable a new and different
process.
Mature GUI technology has made the conventional process obsolete.
GUI tools enable developers to construct an executable user interface faster and at
less cost.
Paper descriptions are no longer necessary, resulting in better efficiency.
Operations analysis and human factors analysis – still relevant – are carried out in a
realistic target environment using existing primitives and building blocks.
Engineering and feedback cycles now take only a few days or weeks – a great reduction
from the months required earlier.
Further, the old process could not afford re-runs: designs were completed – after
thorough analysis and design – in a single construction cycle.
The new GUI process is geared to take the user interface through a few realistic versions,
incorporating user feedback all along the way.
It also achieves a stable understanding of the requirements and the design issues in
balance with one another.
The ever-increasing advances in hardware technology have also been influencing
software technology improvements.
The availability of higher CPU speeds, more memory, and more network bandwidth has
eliminated many complexities.
Simpler, brute-force solutions are now possible – all this because of advances in
hardware technology.
3.1 REDUCING SOFTWARE PRODUCT SIZE
Producing a product that achieves design goals with minimum amount of human-
generated source material is the most significant way to improve return on investment
(ROI) and affordability.
Component-based development is the way for reducing the “source” language size.
Reuse, OO technology, automatic code generation, and higher-order programming
languages are all focused on achieving a system with fewer lines of human-specified
source directive/statements.
This size reduction is the primary motivation behind improvements in
higher order languages – like C++, Ada 95, Java, Visual Basic, and 4GLs
automatic code generators – CASE tools, visual modeling tools, GUI builders
reuse of commercial components – OSs, windowing environments, DBMSs,
middleware, networks
object-oriented technologies – UML, visual modeling tools, architecture
frameworks.
There is one limitation to this type of size reduction.
The recommendation rests on a simple observation: code that isn't there need not be
developed and can't break.
This is not entirely true.
Size-reducing technologies do reduce the number of human-generated source lines, but
they all tend to increase the amount of computer-executable code, which negates the
second half of the observation.
Mature and reliable size reduction technologies are powerful at producing economic
benefits.
Immature technologies may reduce the development size but require more investment in
achieving required levels of quality and performance.
This may have a negative impact on overall project performance.
3.1.1 LANGUAGES
Universal function points (UFPs) are useful metrics for language-independent early life-
cycle estimates.
UFPs are used to indicate the relative program sizes required to implement a given
functionality.
The basic units of FPs are
external user inputs
external outputs,
internal logical data groups,
external data interfaces, and
external inquiries.
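As a sketch of how the five units combine into a count, here is a hedged Python example; the weights are the standard IFPUG "average complexity" values (4, 5, 10, 7, and 4 respectively) and the example counts are illustrative assumptions, not figures from this material:

```python
# Unadjusted function points: a weighted sum of the five basic units listed above.
# Weights are the standard IFPUG average-complexity values (an assumption here).
AVERAGE_WEIGHTS = {
    "external user inputs": 4,
    "external outputs": 5,
    "internal logical data groups": 10,
    "external data interfaces": 7,
    "external inquiries": 4,
}

def unadjusted_fp(counts):
    """Weighted sum of the five function-point units."""
    return sum(AVERAGE_WEIGHTS[unit] * n for unit, n in counts.items())

example = {
    "external user inputs": 20,
    "external outputs": 15,
    "internal logical data groups": 8,
    "external data interfaces": 4,
    "external inquiries": 12,
}
print(unadjusted_fp(example))  # 20*4 + 15*5 + 8*10 + 4*7 + 12*4 = 311
```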
SLOC metrics are useful as estimators after a candidate solution is formulated and an
implementation language is known.
Substantial data relating SLOC to FPs has been documented, as shown below:

LANGUAGE        SLOC PER UFP
Assembly            320
C                   128
FORTRAN 77          105
COBOL 85             91
Ada 83               71
C++                  56
Ada 95               55
Java                 55
Visual Basic         35

The data in the table illustrate why modern languages like C++, Ada 95, Java, and Visual
Basic are preferred: their level of expressiveness is attractive. There is a risk of misuse in
applying these data – each value is a precise average of several imprecise numbers – and
each language has its own domain of usage.
Visual Basic: useful for building simple interactive applications, but not for real-time,
embedded programs.
Ada 95: useful for mission-critical real-time applications, but not for parallel, scientific,
number-crunching applications on high-end configurations.
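The SLOC-per-UFP ratios above can be turned into a rough, language-by-language size estimate. A minimal sketch (the 3,125-UFP figure is chosen only because it reproduces the million-line assembly example used later in this chapter):

```python
# Estimated source size for the same functionality, per language,
# using the SLOC-per-UFP ratios from the table above.
SLOC_PER_UFP = {
    "Assembly": 320, "C": 128, "FORTRAN 77": 105, "COBOL 85": 91,
    "Ada 83": 71, "C++": 56, "Ada 95": 55, "Java": 55, "Visual Basic": 35,
}

def estimated_sloc(ufp):
    """Map each language to its estimated SLOC for `ufp` function points."""
    return {lang: ufp * ratio for lang, ratio in SLOC_PER_UFP.items()}

for lang, sloc in estimated_sloc(3125).items():
    print(f"{lang:13s} {sloc:>9,d}")  # e.g. Assembly 1,000,000 ; C 400,000
```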
Two observations within the data concern the differences and relationships between Ada
83 and Ada 95, and C and C++.
The difference in expressiveness between the two versions of Ada is mainly due to the
features added to support OOP.
The difference between C and C++ is more profound.
C++ incorporated several of the advanced features of Ada with more support for OOP.
C++ was developed as a superset of C.
This has its pros and cons.
The C compatibility made it easy for C programmers to migrate to C++.
On the downside, a number of C++ compiler users were programming in C, so the
expressiveness of the OOP based C++ was not being exploited.
The evolution of Java eliminated many of the problems in the C++ language.
It conserves the OO features and adds further support for portability and distribution.
UFPs can be used to indicate the relative program sizes required to implement a given
functionality.
For example, to achieve a given application with a fixed number of function points, one
of the following program sizes would be required:
1,000,000 lines of assembly language
400,000 lines of C
220,000 lines of Ada 83
175,000 lines of Ada 95 or C++

The values indicate the relative expressive power of the various languages. Commercial
components and code generators can further reduce the size of human-generated code.
Reduction in the size of human-generated code, in turn, reduces the size of the team and
the time needed for development.
Adding a commercial DBMS, a commercial GUI builder, and a commercial middleware
can reduce the effective size of development to the following final size:
75,000 lines of Ada 95 or C++, with integration of several commercial components
The use of the highest level language and appropriate commercial components has a
sizable impact on cost – particularly when it comes to large projects which have higher
life-cycle cost.
Generally, simpler is better: reducing size increases understandability, changeability, and
reliability.
OO methods, notations, and visual modeling provide strong technology support for the
process framework.
3.1.3 REUSE
Reusing existing components and building reusable components have been natural
software engineering activities along with the improvements in programming languages.
Software design methods implicitly dealt with reuse in order to minimize development
costs while achieving all the other required attributes of performance, feature set, and
quality.
Reuse should be treated as a routine part of achieving a return on investment.
Common architectures, common processes, precedent experience, and common
environments are all instances of reuse.
Reuse is an important discipline that has an impact on the efficiency of all workflows and
the quality of most artifacts.
TABLE 3-3. Advantages and disadvantages of commercial components versus custom software

Commercial components
Advantages: predictable license costs; broadly used, mature technology; available now;
dedicated support organization; hardware/software independence; rich in functionality
Disadvantages: frequent upgrades; no control over upgrades and maintenance; up-front
license fees; recurring maintenance fees; dependency on vendor; run-time efficiency
sacrifices; functionality constraints; integration not always trivial; unnecessary features
that consume extra resources; often inadequate reliability and stability; multiple-vendor
incompatibilities

Custom development
Advantages: complete change freedom; smaller, and often simpler, implementations;
often better performance; control of development and enhancement
Disadvantages: expensive, unpredictable development; unpredictable availability date;
undefined maintenance model; drain on expert resources; immature and fragile;
single-platform dependency
The paramount message here is: these decisions must be made early in the life cycle as
part of the architectural design.
3.2 IMPROVING SOFTWARE PROCESSES
Process is an overloaded term.
For software-oriented organizations there are many processes and sub-processes.
The main and distinct process perspectives are:
Metaprocess: an organization’s policies, procedures, and practices for pursuing a
software-intensive line of business.
The focus of this process is on organizational economics, long-term
strategies, and a software ROI.
Macroprocess: a project’s policies, procedures, and practices for producing a complete
software product within certain cost, schedule, and quality constraints.
The focus of the macroprocess is on creating an adequate instance of the
metaprocess for a specific set of constraints.
Microprocess: a project team’s policies, procedures, and practices for achieving an
artifact of the software process.
The focus of the microprocess is on achieving an intermediate product
baseline with adequate quality and adequate functionality as economically
and rapidly as possible.
Although these three levels of process overlap somewhat, they have different objectives,
audiences, metrics, concerns, and time scales.
These are shown in Table 3-4.
TABLE 3-4. Three levels of process and their attributes

Subject – Metaprocess: line of business; Macroprocess: project; Microprocess: iteration
Objectives – Metaprocess: line-of-business profitability, competitiveness; Macroprocess:
project profitability, risk management, project budget/schedule/quality; Microprocess:
resource management, risk resolution, milestone budget/schedule/quality
Audience – Metaprocess: acquisition authorities, customers, organizational management;
Macroprocess: software project managers, software engineers; Microprocess: sub-project
managers, software engineers
Metrics – Metaprocess: project predictability, revenue, market share; Macroprocess: on
budget, on schedule, major milestone success, project scrap and rework; Microprocess:
on budget, on schedule, major milestone progress, release/iteration scrap and rework
Concerns – Metaprocess: bureaucracy vs. standardization; Macroprocess: quality vs.
financial performance; Microprocess: content vs. schedule
Time scales – Metaprocess: 6 to 12 months; Macroprocess: 1 to many years;
Microprocess: 1 to 6 months

Prepared by S. S. RAMANUJEM, PROF., DEPT OF MCA, SWARNANDHRA
COLLEGE OF ENGINEERING & TECHNOLOGY, NARSAPUR
PART – I SOFTWARE MANAGEMENT RENAISSANCE
Chapter – 3 IMPROVING SOFTWARE ECONOMICS
The macroprocess is the project-level process that affects the cost estimation
model.
All project processes consist of productive activities and overhead activities.
For a project to be successful, a complex web of sequential and parallel steps is required.
As the scale of the project increases, the complexity of this web also increases.
To manage that complexity, overhead steps must be included.
Productive activities result in tangible progress toward the end-product.
The quality of the software process strongly affects the required effort and thereby the
schedule for producing the software product.
The difference between a good process and a bad one will affect overall cost estimates by
50% to 100%.
Reduction in effort will improve the overall schedule.
So a better process can have a greater effect in reducing the time it will take for the team
to achieve the product vision with the required quality.
There are three general ways to improve a software process:
1. We could take an N-step process and improve the efficiency of each step.
2. We could take an N-step process and eliminate some steps so that it is now only an
M-step process.
3. We could take an N-step process and use more concurrency in the activities being
performed or the resources being applied.
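The three options above can be contrasted with a toy model; the step counts and times below are purely illustrative assumptions:

```python
# Toy model: a process as a list of step durations (arbitrary time units).
def total_time(step_times, concurrency=1):
    """Elapsed time, crudely assuming `concurrency` steps can run in parallel."""
    return sum(step_times) / concurrency

baseline = [1.0] * 10                       # an N = 10 step process
print(total_time(baseline))                 # 10.0
print(total_time([0.75] * 10))              # option 1: each step 25% faster -> 7.5
print(total_time([1.0] * 7))                # option 2: N -> M = 7 steps -> 7.0
print(total_time(baseline, concurrency=2))  # option 3: two-way concurrency -> 5.0
```

In practice the three levers interact – eliminating steps changes which steps can run concurrently – so real gains rarely compose this cleanly.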
But we work in an imperfect world and we need to manage engineering activities so that
scrap and rework profiles do not impact the win conditions of any stakeholders.
The original COCOMO model suggests that the combined effects of personnel skill and
experience can have an impact on productivity of as much as a factor of four.
With a large team, it is almost always possible to end up with only nominal talent and
experience.
Yet a team composed entirely of highly experienced, high-IQ geniuses may also turn out
to be dysfunctional.
So instead of "just hire good people," the principle should be "just formulate a good team."
Two most important aspects of an excellent team are: balance and coverage.
When a team is out of balance, it is vulnerable.
For example, a football team needs diverse skills; so does a software development team.
When a team is unbalanced in any one of the dimensions, a project becomes risky.
Balancing a team is a paramount factor in good team work.
5. The principle of phase-out: Keeping a misfit on the team doesn’t benefit anyone.
A misfit demotivates other team members, will not self-actualize, and disrupts the team
balance in some dimension.
Misfits are obvious, and it is never right to procrastinate weeding them out.
1. Hiring skills. Placing the right person in the right job is obvious, but hard to achieve.
2. Customer-interface skill. A prerequisite for success is the avoidance of adversarial
relationships among stakeholders.
3. Decision-making skill. Only a decisive person can maintain a clear sense of direction
and direct others; indecisiveness is not a characteristic of a successful manager.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate
progress, exploit eccentric skilled persons, transition average people into top performers,
eliminate misfits, and consolidate diverse opinions into a team direction.
5. Selling skill. Successful managers must sell all stakeholders on decisions and priorities,
sell candidates on job positions, sell changes to the status quo in the face of resistance,
and sell achievements against objectives.
In practice selling requires continuous negotiation, compromise, and empathy.
The tools and environment used in the software process have a linear effect on the
productivity of the process.
Planning tools, requirements management tools, visual modeling tools, compilers, editors,
debuggers, quality assurance analysis tools, test tools, and user interfaces provide support
for automating the evolution of software engineering artifacts.
Configuration management environments provide the foundation for executing and
instrumenting the process.
The isolated impact of tools and automation allows improvements of 20 to 40% in effort.
When tools and environments are viewed as the primary delivery vehicle for process
automation and improvement, their impact can be higher.
Process improvements reduce scrap and rework by eliminating steps and minimizing the
number of iterations in the process.
Process improvement also increases the efficiency of certain steps in the process.
This is done – primarily by the environment – by automating manual tasks that are
inefficient or error-prone.
A common thread in successful software projects is that they hire good people and provide
them with good tools to accomplish their jobs.
Automation of the design process provides payback in quality, the ability to estimate
costs and schedules, and overall productivity using a smaller team.
Integrated toolsets play an increasingly important role in incremental/iterative
development by allowing the designers to traverse quickly among development artifacts
and keep them up-to-date.
Round-trip engineering is a term used to describe the key capability of environments that
support iterative development.
Different information repositories are maintained for the engineering artifacts.
Automation support is needed to ensure efficient and error-free transition of data from
one artifact to another.
Present environments do not yet support automation to the expected extent.
Example: automated construction of test cases from use case and scenario descriptions
has not evolved beyond trivial cases such as unit test scenarios.
While describing the economic improvements associated with tools and environments,
tool vendors make relatively accurate individual assessments of life-cycle activities to
support their claims of economic benefits.
For example:
Requirements analysis and evolution activities consume 40% of life-cycle costs.
Software design activities have an impact on more than 50% of software
development effort and schedule.
Coding and unit testing consume about 50% of software development effort and
schedule.
Test activities can consume as much as 50% of a project’s resources.
Configuration control and change management are critical activities that can
consume as much as 25% of resources of large-scale projects.
Documentation activities can consume more than 30% of project engineering
resources.
Project management, business administration, and progress assessment can
consume as much as 30% of project budgets.
Such simple assertions are not reasonable, given the complex interrelationships among
the software development activities and the tools.
Indeed, the percentages quoted above sum to roughly 275% of total cost, because the
activity categories overlap.
The combined effect of all tools tends to be less than about 40%, and most of this benefit
cannot be gained without some change in the process.
So an individual tool can improve a project's productivity by only about 5%.
In general, it is better to normalize such claims to the virtual 275% total than to the 100%
total we deal with in the real world.
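The arithmetic behind that virtual total can be made explicit. A small sketch, using the percentages quoted above (the 30% design-tool claim is a hypothetical example):

```python
# The vendor-claimed life-cycle shares overlap, so they sum to a virtual 275%.
claimed_share = {
    "requirements": 40, "design": 50, "code and unit test": 50,
    "test": 50, "change management": 25, "documentation": 30,
    "management and assessment": 30,
}
virtual_total = sum(claimed_share.values())
print(virtual_total)  # 275

def normalized_saving(activity, claimed_improvement):
    """Scale a claimed improvement by the activity's share of the virtual total."""
    return claimed_improvement * claimed_share[activity] / virtual_total

# A design tool claiming a 30% improvement touches only ~5% of real cost:
print(round(normalized_saving("design", 0.30), 3))  # 0.055
```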
The best practices – derived from development processes and technologies – improve cost
efficiency and, in addition, yield improvements in quality for the same cost.
Some dimensions of quality improvement are:
Key practices that improve overall software quality include the following:
1. Focusing on driving requirements and critical use cases early in the life cycle, on
requirements completeness and traceability late in the life cycle, and throughout the life
cycle on a balance among requirements evolution, design evolution, and plan evolution.
2. Using metrics and indicators to measure the progress and quality of an architecture as
it evolves from a high-level prototype into a fully compliant product.
3. Providing integrated life-cycle environments that support early and continuous
configuration control, change management, rigorous design methods, document
automation, and regression test automation.
4. Using visual modeling and higher level languages that support architectural control,
abstraction, reliable programming, reuse, and self-documentation.
5. Early and continuous insight into performance issues through demonstration-based
evaluations.
Many organizations overemphasize meetings and formal inspections, and require coverage
across all engineering products.
This approach can be counterproductive.
Only about 20% of technical artifacts – use cases, design models, source code, and test
cases – deserve the detailed scrutiny of an inspection; the rest are better served by other,
more useful quality assurance activities.
A process whose quality assurance emphasis is on inspections will not be cost-effective.
Architectural issues are exposed only through more rigorous engineering activities such as
analysis, prototyping, and experimentation.
Quality assurance is everyone’s responsibility and should be integral to almost all process
activities instead of a separate discipline performed by quality assurance specialists.
Questions on this chapter:
1. The key to substantial improvement of software economics is a balanced attack
across several interrelated dimensions. Comment in detail.
2. Explain how reducing software product size contributes to the improvement of
software economics.
3. Explain Booch’s reasons for the success of object-oriented projects. Clearly bring
out the interrelationships among the dimensions of improving software economics.
4. Explain the relative advantages and disadvantages of using commercial
components versus custom software.
5. Explain how software economics is improved by improving software processes.
6. Explain how improvement of team effectiveness contributes to software
economics.
7. Explain Boehm’s staffing principles.
8. Explain how software environments help in improving automation as a way of
improving software economics.
9. Explain the key practices that improve overall software quality, in view of the
general quality improvements with a modern process in comparison with that of
conventional processes.
10. Comment on the relative merits and demerits of peer inspections for quality
assurance.
Difficult words:
caveat – caution; trivial – small/inconsequential
Many systems required a new management paradigm to respond to budget pressures, the
dynamic and diverse threat environment, the long operational lifetime of systems, and the
predominance of large-scale, complex applications.
1. Make quality #1. Quality must be quantified and mechanisms put into place to
motivate its achievement.
It is not easy to define quality at the outset of the project.
A modern process framework strives to understand the trade-offs among features,
quality, cost, and schedule as early in the life cycle as possible.
This understanding must be achieved to specify or manage the achievement of quality.
2. High-quality software is possible. Techniques that have been demonstrated to
increase quality include involving the customer, prototyping, simplifying design,
conducting inspections, and hiring the best people.
3. Give products to customers early. No matter how hard you try to learn users’ needs
during the requirements phase, the most effective way to determine real needs is to
give users a product and let them play with it.
This is a key tenet of a modern process framework. There must be several
mechanisms to involve the customer throughout the life cycle.
4. Determine the problem before writing the requirements. When faced with what
they believe is a problem, most engineers rush to offer a solution. Before trying to
solve a problem, explore all the alternatives and do not be blinded by the obvious
solution.
The parameters of a problem become more tangible as a solution evolves.
A modern process framework evolves the problem and the solution together until the
problem is well enough understood to commit to full production.
5. Evaluate design alternatives. After the requirements are agreed upon, a variety of
architectures and algorithms must be examined. An architecture should not be chosen
simply because it was used in the requirements specification.
This principle reflects waterfall thinking in two ways:
a) The requirements precede the architecture rather than evolving together with it.
b) The architecture is incorporated in the requirements specification.
A modern process promotes the analysis of design alternatives concurrently with
requirements specification.
The notations and artifacts for requirements and architecture are explicitly decoupled.
(Figure: life-cycle activities – design, subsystem integration, implementation, assessment,
system test)
The following table, Table 4-1, maps the top 10 risks of the conventional process to the
key attributes and principles of a modern process.
For a project to be successful, there must be a well-defined separation between “research
and development activities” and “production activities”.
A failure to define and execute these two stages with proper balance and appropriate
emphasis leads to the failure of the project.
Most unsuccessful projects exhibit one of the following characteristics:
An overemphasis on research and development.
Too many analyses or paper studies are performed.
The construction of engineering baselines is delayed.
An overemphasis on production.
Rush-to-judgment designs, premature work by overeager coders, and continuous
hacking are typical.
Successful projects have a well-defined project milestone when there is a transition from
a research attitude to a product attitude.
Earlier phases focus on achieving functionality.
Later phases revolve around achieving a product that can be shipped to a customer, with
explicit attention to robustness, performance, fit, and finish.
This life-cycle balance, subtle and intangible, is the foundation for successful software
management.
A modern software development process must be defined to support the following:
Evolution of the plans, requirements, and architecture, together with well-defined
synchronization points.
Risk management and objective measures of progress and quality.
Evolution of system capabilities through demonstrations of increasing
functionality.
5.1 ENGINEERING AND PRODUCTION STAGES
To achieve economies of scale and higher ROI, a software manufacturing process should
be driven by technological improvements in process automation and component-based
development.
Table 5-1 summarizes the differences in emphasis between the two stages – engineering
and production.
The transition between engineering and production is very crucial for the stakeholders.
Depending on the specifics of a project the time and resources dedicated to the two stages
can be highly variable.
Having only two stages to a life cycle sounds a little coarse, too simplistic, for most
applications.
So, the engineering stage is decomposed into two distinct phases, inception and
elaboration, and the production stage into construction and transition.
These four phases of the life-cycle process are loosely mapped to the conceptual
framework of the spiral model.
Inception Phase
PRIMARY OBJECTIVES
Establishing the project’s software scope and boundary conditions, including an
operational concept, acceptance criteria, and a clear understanding of what is and
is not intended to be in the product.
Discriminating the critical use cases of the system and the primary scenarios of
operation that will drive the major design trade-offs.
Demonstrating at least one candidate architecture against some of the primary
scenarios.
Estimating the cost and schedule for the entire project, including detailed
estimates for the elaboration phase.
Estimating potential risks – sources of unpredictability.
ESSENTIAL ACTIVITIES
Formulating the scope of the project.
This activity involves capturing the requirements and operational concept in an
information repository that describes the user’s view of the requirements.
The repository should be sufficient to define the problem space and derive the
acceptance criteria for the end product.
Elaboration Phase
PRIMARY OBJECTIVES
Preparing a baseline architecture as rapidly as practical while establishing a
configuration-managed snapshot with all changes rationalized, tracked, and
maintained.
Preparing a baseline of the vision.
Preparing a baseline of a high-fidelity plan for the construction phase.
Demonstrating that the baseline architecture will support the vision at a
reasonable cost in a reasonable time.
ESSENTIAL ACTIVITIES
Elaborating the vision.
This activity involves establishing a high-fidelity understanding of the critical
use cases that drive architectural or planning decisions.
Elaborating the process and infrastructure.
The construction process, the tools and process automation support, and the
intermediate milestones and their respective evaluation criteria are established.
Construction Phase
PRIMARY OBJECTIVES
Minimizing development costs by optimizing resources and avoiding unnecessary
scrap and rework.
Achieving adequate quality as rapidly as practical.
Achieving useful versions as rapidly as practical.
ESSENTIAL ACTIVITIES
Resource management, control, and process optimization
Complete component development and testing against evaluation criteria
Assessment of product releases against acceptance criteria of the vision
Transition Phase
PRIMARY OBJECTIVES
Achieving user self-supportability
Achieving stakeholder concurrence that deployment baselines are complete and
consistent with the evaluation criteria of the vision.
Achieving final product baselines as rapidly and cost-effectively as practical.
ESSENTIAL ACTIVITIES
Synchronization and integration of concurrent construction increments into
consistent deployment baselines.
Deployment-specific engineering – cutover, commercial packaging and
production, sales rollout kit development, field personnel training.
Assessment of deployment baselines against the complete vision and acceptance
criteria in the requirements set.
The transition from one phase to the next maps more to a significant business decision
than to the completion of a specific development activity.
These intermediate phase transitions are the primary anchor points of the software
process, when technical and management perspectives are brought into synchronization
and agreement among the stakeholders is achieved with respect to the current
understanding of the requirements, design, and plan to complete.
The UML is a suitable representation format in the form of visual models with a well-
defined syntax and semantics for requirements and design artifacts. Visual modeling
using UML is a primitive notation for early life-cycle artifacts.
The primary mechanism for evaluating the evolving quality of each artifact set is the
transitioning of information from set to set, and
Thereby maintaining a balance of understanding among the requirements, design,
implementation, and deployment artifacts
Each of these components of the system description evolves over time.
Requirements Set
Structured text is used for the vision statement to document the project scope that
supports the contract between the funding authority and the project team.
Ad hoc formats may also be used for
Supplementary specifications – such as regulatory requirements
User mockups or other prototypes that capture requirements
UML notation is used for engineering representations of requirements model – use case
models, domain models.
The requirements set is the primary engineering context for evaluating the other three
engineering artifact sets and is the basis for test cases.
Requirements artifacts are evaluated, assessed, and measured through a combination of
the following:
Analysis of consistency with the release specifications of the management set.
Analysis of consistency between the vision and the requirements models
Mapping against design, implementation, and deployment sets to evaluate the
consistency and completeness and the semantic balance between information in the
different sets.
Analysis of changes between the current version of requirements artifacts and
previous versions – scrap, rework, and defect elimination trends
Subjective review of other dimensions of quality
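The change-trend analysis listed above (comparing the current version of the requirements artifacts with previous versions) can be mechanized along these lines; the use-case identifiers and descriptions are invented for illustration:

```python
# Compare two versions of a requirements artifact set and bucket the changes
# into the added/scrapped/reworked categories used for trend analysis.
v1 = {"UC-01": "operator logs in",
      "UC-02": "load seismic file",
      "UC-03": "print report"}
v2 = {"UC-01": "operator logs in",
      "UC-02": "load and validate seismic file",
      "UC-04": "export to CSV"}

added    = sorted(set(v2) - set(v1))   # new requirements
removed  = sorted(set(v1) - set(v2))   # scrapped requirements
reworked = sorted(k for k in set(v1) & set(v2) if v1[k] != v2[k])

print(added)     # ['UC-04']
print(removed)   # ['UC-03']
print(reworked)  # ['UC-02']
```

Tracking these counts release over release yields the scrap, rework, and defect elimination trends the text refers to.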
Design Set
UML notation is used to engineer the design models for the solution.
The design set contains varying levels of abstraction that represent the components of the
solution space – their identities, attributes, static relationships, dynamic interactions.
The design models include structural and behavioral information to ascertain the
following costs:
Bill of materials – quantity and specification of primitive parts and materials,
labor and other costs
Design model information can be straightforwardly and automatically translated into a
subset of the implementation and deployment set artifacts.
Specific design set artifacts include
The design model
The test model
The software architecture description – an extract of information from the design
model that is pertinent to describing an architecture.
The design set is evaluated, assessed and measured through a combination of the
following:
Analysis of the internal consistency and quality of the design model
Analysis of consistency with requirements models
Translation into implementation and deployment sets and notations – traceability,
source code generation, compilation, linking – to evaluate the consistency and
completeness and the semantic balance between information in the sets.
Analysis of changes between the current version of the design model and previous
versions – scrap, rework, and defect elimination trends
Subjective review of other dimensions of quality
Human analysis is required as the level of automated analysis of design models is limited.
Automated analysis will improve with the maturity of design model analysis tools that
support metrics collection, complexity analysis, style analysis, heuristic analysis, and
consistency analysis.
Implementation Set
The implementation set is captured in human-readable formats.
The implementation set includes
(1) Source code – programming language notations – that represents the tangible
implementations of components – their form, interface, and dependency
relationships
(2) Executables necessary for stand-alone testing of components.
These executables are the primitive parts needed to construct the end product, including
(a) custom components, (b) application programming interfaces (APIs) of commercial
components, and (c) APIs of reusable or legacy components in a programming source
language such as Ada 95, C++, Visual Basic, Java, or assembly.
Implementation set artifacts can also be translated – compiled and linked – into a subset
of deployment set – end-target executables.
Specific artifacts include
Self-documenting product source code baselines and associated files –
compilation scripts, configuration management infrastructure, data files
Self-documenting test source code baselines and associated files – input test
data files, test result files
Standalone component executables
Component test driver executables
The implementation sets are evaluated, assessed and measured through a combination of
the following:
Analysis of consistency with the design models
Translation into deployment set notations – compilation and linking – to evaluate the
consistency and completeness among the artifact sets
Assessment of component source or executable files against relevant evaluation
criteria through inspection, analysis, demonstration, or testing
Execution of standalone component test cases that automatically compare expected
results with the actual results
Analysis of changes between the current version of the implementation set and
previous versions – scrap, rework, and defect elimination trends
Subjective review of other dimensions of quality
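The “execution of standalone component test cases that automatically compare expected results with the actual results” listed above can be sketched as a minimal driver; the component function and test cases are hypothetical:

```python
# A standalone component test driver: run the component over a table of test
# cases and automatically compare actual output with expected output.
def scale_sample(raw, gain=2.0, offset=1.0):
    """Hypothetical component under test: linear scaling of a sensor sample."""
    return raw * gain + offset

cases = [          # (input, expected) pairs, kept with the test source files
    (0.0, 1.0),
    (1.5, 4.0),
    (-2.0, -3.0),
]

# Collect any case whose actual result differs from the expected result.
failures = [(raw, exp, scale_sample(raw)) for raw, exp in cases
            if abs(scale_sample(raw) - exp) > 1e-9]

print("PASS" if not failures else f"FAIL: {failures}")
```

Output files from such drivers provide the equivalent of test reports, as noted below for the implementation set.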
Deployment Set
The deployment set includes
User deliverables and machine language notations
Executable software
The build scripts
Installation scripts
Executable target-specific data necessary to use the product in its environment
The machine language notations represent the product components in the target form
intended for distribution to the user.
Deployment set information can be
(1) installed,
(2) executed against (test) scenarios of use, and
(3) dynamically configured to support the features required in the end product.
Specific artifacts include
∙ Executable baselines and associated run-time files
∙ The user manual
The deployment sets are evaluated, assessed and measured through a combination of the
following:
Testing against the usage scenarios and quality attributes defined in the requirements
set to evaluate the consistency and completeness and the semantic balance between
information in the two sets
Testing the partitioning, replication, and allocation strategies in mapping components
of the implementation set to physical resources of deployment system – platform type,
number, network topology
Testing against the defined usage scenarios in the user manual such as installation,
user-oriented dynamic reconfiguration, mainstream usage, and anomaly management.
Analysis of changes between the current version of the deployment set and previous
versions –defect elimination trends, performance changes
Subjective review of other dimensions of quality
Each artifact set is the predominant development focus of one phase of the life cycle; the
other sets take on check and balance roles.
(Figure: the predominant artifact-set focus shifts from requirements, to design, to
implementation, to deployment across the life-cycle phases)
Allocation of responsibilities among project teams is straightforward and aligns with the
process workflows.
Because of the difference in concerns with respect to the source code in the
implementation set and the executable code in deployment set, the separation between
them is important.
The structure of information delivered to the user or test organization is different from
that of the source code.
Several engineering decisions have an impact on the quality of the deployment set even
though they are relatively incomprehensible in the design and implementation sets.
Deployment of commercial products to customers can also span a broad range of test and
deployment configurations.
For example: middleware products provide high-performance, reliable object request
brokers that are delivered on several platform implementations, including workstation
operating systems, bare embedded processors, large mainframe operating systems, and
several real-time operating systems.
The product configurations support various compilers and languages as well as various
implementations of network software.
The heterogeneity of all the various target configurations results in the need for a highly
sophisticated source code structure and a huge suite of different deployment artifacts.
The inception phase focuses mainly on critical requirements, usually with a secondary
focus on an initial deployment view, little focus on implementation except perhaps choice
of language and commercial components, and possibly some high-level focus on the
design architecture but not on design detail.
During the elaboration phase, there is more depth in requirements and more breadth in
the design set, and further work on implementation and deployment issues such as
performance trade-offs under primary scenarios and make/buy analyses.
(Figure: life-cycle evolution of the artifact sets – the requirements, design,
implementation, deployment, and management sets grow in depth and breadth across the
inception, elaboration, construction, and transition phases)
Later on, the emphasis is on realizing the design in source code and individually tested
components.
This phase should drive the requirements, design, and implementation sets almost to
completion.
Substantial work is also done on the deployment set, at least to test one or a few instances
of the programmed system through a mechanism such as an alpha or beta release.
The main focus of the transition phase is on achieving consistency and completeness of
the deployment set in the context of other sets.
Residual defects are resolved, and feedback from alpha, beta, and system testing is
incorporated.
In contrast to the conventional approach, in which the requirements are specified first,
then the design is developed, and so forth, here the entire system evolves together: a
decision about deployment may affect the requirements, not only the other way around.
The key emphasis here is to break the conventional mold, in which the default
interpretation is that one set precedes another.
Instead, one state of the entire system evolves into a more elaborate state of the system,
involving evolution in each of the parts.
During the transition phase, traceability between the requirements set and the deployment
set is extremely important.
The evolving requirements set captures a mature and precise representation of the
stakeholders’ acceptance criteria, and the deployment set represents the actual end-user
product.
So, during the transition phase, completeness and consistency between the two sets is
important.
Traceability among the other sets is necessary only to the extent that it aids the
engineering or management activities.
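A minimal sketch of this requirements-to-deployment traceability check, assuming invented acceptance-criterion and test identifiers:

```python
# Every acceptance criterion in the requirements set must be covered by at
# least one deployment-level test before transition can complete.
acceptance_criteria = ["AC-01", "AC-02", "AC-03", "AC-04"]

deployment_tests = {            # test -> acceptance criteria it exercises
    "install_test":   ["AC-01"],
    "mainstream_use": ["AC-02", "AC-03"],
    "anomaly_mgmt":   ["AC-02"],
}

covered = {ac for acs in deployment_tests.values() for ac in acs}
uncovered = [ac for ac in acceptance_criteria if ac not in covered]

print(uncovered)   # ['AC-04'] -> a completeness gap to resolve in transition
```

Running such a check at each release makes completeness and consistency between the two sets an objective, automatable measure rather than a review judgment.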
In the modern process the same sets, notations, and artifacts for the products of test
activities are used as for the product development.
The necessary test infrastructure is identified as a required subset of the end product.
This approach forces several engineering disciplines into the process:
The test artifacts must be developed concurrently with the product from inception
through deployment.
So, testing is a full-life-cycle activity, not a late life-cycle activity.
The test artifacts are communicated, engineered, and developed within the same
artifact set as the developed product.
The test artifacts are implemented in programmable and repeatable format like the
software.
The test artifacts are documented in the same way as the product is documented.
Developers of the test artifacts use the same tools, techniques, and training as the
software engineers developing the product.
These disciplines allow for significant levels of homogenization across project workflows.
All the activities are carried out within the notations and techniques of the four sets used
for engineering artifacts. They do not use separate sequences of design and test
documents.
Interpersonal communications, stakeholder reviews, and engineering analyses can be
performed with fewer distinct formats, fewer ad hoc notations, less ambiguity, and higher
efficiency.
For assessment workflow, in addition to testing, inspection, analysis, and demonstration
are also used.
Testing refers to the explicit evaluation through execution of deployment set components
under controlled scenario with an expected and objective outcome.
Tests can be automated.
The test artifacts are highly project-specific. But, there is a relationship between test
artifacts and the other artifact sets.
For example: consider a project to perform seismic data processing for the purpose of oil
exploration.
This system has three fundamental subsystems:
(1) a sensor subsystem that captures raw seismic data in real time and delivers these
data to:
(2) a technical operations subsystem that converts raw data into an organized database
and manages queries to this database from
(3) a display subsystem that allows workstation operators to examine seismic data in
human-readable form.
Such a system would result in the following test artifacts:
∎ Management set.
The release specifications and release descriptions capture the objectives,
evaluation criteria, and results of an intermediate milestone.
These artifacts are the test plans and test results negotiated among internal
project teams.
The software change orders capture test results – defects, testability changes,
requirements ambiguities, and enhancements – and the closure criteria associated
with making a discrete change to a baseline.
∎ Requirements set.
The system-level use cases capture the operational concept for the system and the
acceptance test case descriptions, including the expected behavior of the system
and its quality attributes.
The entire system is a test artifact as it is the basis of all assessment activities
across the life-cycle.
∎ Design set.
A test model for non-deliverable components needed to test the product baselines
is captured in the design set.
These components include such design set artifacts as a seismic event simulation
for creating realistic sensor data; a “virtual operator” that can support unattended,
after-hours test cases; specific instrumentation suites for early demonstration of
resource usage; transaction rates or response times; and use case test drivers and
component stand-alone test drivers.
∎ Implementation set.
Self-documenting source code representations for test components and test drivers
provide the equivalent test procedures and test scripts.
These source files include human-readable data files representing certain
statically defined data sets that are explicit test source files.
Output files from test drivers provide the equivalent of test reports.
∎ Deployment set.
Executable versions of test components, test drivers, and data files are provided.
For any release, all the test artifacts and product artifacts are maintained using the same
baseline version identifier.
They are created, changed, and obsolesced as a consistent unit.
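This shared-baseline discipline can be modeled minimally as follows; the artifact file names are illustrative:

```python
# Product and test artifacts carry the same baseline version identifier and
# are created, changed, and obsolesced as one consistent unit.
from dataclasses import dataclass, field

@dataclass
class Baseline:
    version: str
    product_artifacts: list = field(default_factory=list)
    test_artifacts: list = field(default_factory=list)

    def tag(self):
        """Return every artifact stamped with the shared baseline version."""
        return {name: self.version
                for name in self.product_artifacts + self.test_artifacts}

b = Baseline("2.1",
             product_artifacts=["sensor.py", "display.py"],
             test_artifacts=["test_sensor.py", "sim_driver.py"])

print(b.tag()["test_sensor.py"])   # '2.1' -- same identifier as the product
```

Because a single identifier covers both sets, rolling a release forward or back automatically keeps the regression tests in step with the product.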
As test artifacts are captured using the same notations, methods, and tools, the approach
to testing is consistent with design and development.
This approach forces the evolving test artifacts to be maintained so that regression testing
can be automated easily.
The word “document” can mean a paper document or electronically transmitted
information in the form of processed data, reviews, and so on.
Business Case
The business case artifact provides all the information necessary to determine whether the
project is worth investing in.
It details the expected revenue, expected cost, technical and management plans, and
backup data necessary to demonstrate risks and realism of the plans.
The main purpose is to transform the vision into economic terms so that an organization
can make an accurate ROI assessment.
The financial forecasts are evolutionary, updated with more accurate forecasts as the life
cycle progresses.
Software Development Plan
The software development plan (SDP) elaborates the process framework into a fully
detailed plan.
It is the defining document for the project’s success.
It must comply with contract, comply with organization standards, evolve along with the
design and requirements, and be used consistently across all subordinate organizations
doing the software development.
Two indications of a useful SDP are periodic updating – it is not stagnant shelfware – and
understanding and acceptance by managers and practitioners.
FIGURE 6-5. A default/typical outline for a software development plan
Once software is placed in a controlled baseline, all change must be formally tracked and
managed.
By automating data entry and maintaining change records on-line, most of the change
management bureaucracy and metrics collection and reporting activities can be
automated.
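With change records maintained on-line, the metrics collection mentioned above reduces to a simple rollup. A sketch, with invented record fields and change categories:

```python
# Roll up change-order records into the counts and rework-effort figures
# that would otherwise be compiled by hand for status reporting.
from collections import defaultdict

change_orders = [
    {"id": 1, "type": "defect",      "rework_hours": 6},
    {"id": 2, "type": "enhancement", "rework_hours": 12},
    {"id": 3, "type": "defect",      "rework_hours": 3},
    {"id": 4, "type": "testability", "rework_hours": 2},
]

counts = defaultdict(int)
hours = defaultdict(int)
for sco in change_orders:
    counts[sco["type"]] += 1
    hours[sco["type"]] += sco["rework_hours"]

print(counts["defect"], hours["defect"])   # 2 9
```

The same records feed the scrap, rework, and defect elimination trends used elsewhere in the process.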
Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are derived
from the vision statement and other sources like make/buy analyses, risk management
concerns, architectural considerations, shots in the dark, implementation constraints,
quality thresholds.
These artifacts are to evolve along with the process, achieving greater fidelity as the life
cycle progresses and requirements understanding matures.
FIGURE 6-6. Default/Typical release specification outline
I. Iteration content
II. Measurable objectives
   A. Evaluation criteria
   B. Follow-through approach
III. Demonstration plan
   A. Schedule of activities
   B. Team responsibilities
IV. Operational scenarios (use cases demonstrated)
   A. Demonstration procedures
   B. Traceability to vision and business case
Status Assessments
Status assessments provide periodic snapshots of project health and status, including the
software project manager’s risk assessment, quality indicators, and management
indicators.
The periodicity of these assessments may vary, but the forcing function must persist.
A good management process must ensure that the expectations of all stakeholders –
contractor, subcontractor, customer, and user – are synchronized and consistent.
The periodic assessment documents provide the critical mechanism
a) for managing everyone’s expectations throughout the life cycle;
b) for addressing, communicating, and resolving management issues, technical
issues, and project risks
c) for capturing project history.
They are the periodic heartbeat for management attention.
Typical status assessment should include a review of resources, personnel staffing,
financial – cost and revenue – data, top 10 risks, technical progress – metrics snapshots,
major milestone plans and results, total project/product scope, action items, and follow-
through.
Continuous open communications with objective data derived directly from on-going
activities and evolving product configurations are mandatory in any project.
Environment
An important emphasis is to define the development and maintenance environment as a
first-class artifact of the process.
A robust, integrated development environment must support automation of the
development process.
This should include requirements management, visual modeling, document automation,
host and target programming tools, automated regression testing, and continuous and
integrated change management, and feature and defect tracking.
Hiring good people and equipping them with good tools is a must for success.
Automation of the software development process provides payback in quality, the ability
to estimate costs and schedules, and overall productivity using a smaller team.
By allowing the designers to traverse quickly among development artifacts and easily
keep the artifacts up-to-date, integrated toolsets play an increasingly important role in
incremental and iterative development.
Deployment
A deployment document can take many forms.
Depending on the project, it could involve several document subsets for transitioning the
product into operational status.
In big contractual efforts in which the system is delivered to a separate maintenance
organization, deployment artifacts may include computer system operations manuals,
software installation manuals, plans and procedures for cutover, site surveys, and so on.
For commercial software products, deployment artifacts may include marketing plans,
sales rollout kits, and training courses.
Management Artifact Sequences
In each phase of the life cycle, new artifacts are produced and previously developed ones
are updated to incorporate lessons learned and to capture further depth and breadth of the
solution.
Some artifacts are updated at each major milestone, others at each minor milestone.
FIGURE 6-8. Artifact sequences across a typical life cycle
Vision Document
The vision document provides a complete vision for the software system under
development and supports the contract between the funding authority and the
development organization.
Every project – irrespective of its size – needs a source for capturing the expectations
among its stakeholders.
A project vision is meant to be changeable as understanding of the requirements,
architecture, plans, and technology evolves.
Nevertheless, a good vision document should change slowly.
The vision document is written from the user’s perspective, focusing on the essential
features of the system and acceptable levels of quality.
The vision document should have at least two appendixes:
The first appendix should describe the operational concept using use cases – a visual
model and a separate artifact.
The second appendix should describe the change risks inherent in the vision statement, to
guide defensive design efforts.
The vision statement should include a description of what will be included as well as
those features considered but not included.
It should also specify operational capacities – volumes, response times, accuracies, user
profiles, and inter-operational interfaces with entities outside the system boundary.
The vision should not be defined only for the initial operating level; its likely evolution
path should be addressed so that there is a context for assessing design adaptability.
The operational concept involves specifying the use cases and scenarios for nominal and
off-nominal usage.
The use case representation provides a dynamic context for understanding and refining
the scope, for assessing the integrity of a design model, and for developing acceptance
test procedures.
The vision document provides the contractual basis for the requirements visible to the
stakeholders.
Architecture Description
The architecture description provides an organized view of the software architecture
under development.
It is extracted largely from the design model and includes views of the design,
implementation, and deployment sets sufficient to understand how the operational
concept of the requirements set will be achieved.
The breadth of the architecture description depends on the project being developed.
The architecture can be described using a subset of the design model, as an abstraction of
the design model with supplementary material, or a combination of both.
As an example, consider the following outline for an architecture description document:
I. Architecture overview
A. Objectives
B. Constraints
C. Freedoms
II. Architecture views
A. Design view
B. Process view
C. Component view
D. Deployment view
III. Architectural interactions
A. Operational concept under primary scenarios
B. Operational concept under secondary scenarios
C. Operational concept under anomalous conditions
IV. Architecture performance
V. Rationale, trade-offs, and other substantiation
Document production cycles, review cycles, and update cycles also injected visible and
formal snapshots of progress into the schedule – introducing more schedule dependencies
and synchronization points.
A more effective approach is to redirect this documentation effort to improving the rigor
and understandability of the information source and allowing on-line review.
Such an approach can eliminate a huge, unproductive source of scrap and rework in the
process and allow for continuous review by all the stakeholders.
People want to review the information but don’t understand the language of the
artifact.
Reviewers may resist learning the engineering language in which the artifact is
written.
Producing documents merely to patronize such audiences should be stopped, as those
documents typically add cost and time to the process without adding value.
People want to review the information but don’t have access to the tools.
When the development tools are not available to the reviewers, it results in
exchange of paper documents.
Standardized formats – UML, spreadsheets, Visual Basic, C++, and Ada 95,
visualization tools, and the Web are making it economically feasible for all
stakeholders to exchange information electronically.
Reviewers who resist electronic exchange add inefficiency to the software development
process.
It is important that information inherent in the artifact be emphasized, not the paper on
which it is written.
Short documents are more useful than long ones.
Software is the primary product, documentation is merely support material.
The design model includes the full breadth and depth of information.
An architecture view is an abstraction of the design model. It contains only the
architecturally significant information.
Systems require four views: design, process, component, and deployment.
The purposes of these four views are:
Q Design: describes architecturally significant structures and functions of the design
model
Q Process: describes concurrency and control thread relationships among the design,
component, and deployment views
Q Component: describes the structure of the implementation set
Q Deployment: describes the structure of the deployment set
The design view is necessary for every system, and the others can be included depending
on the complexity of the system.
Figure 7-1 summarizes the artifacts of the design set, including architecture views and
architecture description.
The architecture description is captured electronically as a single printable document.
The engineering models and architectural views are collections of UML diagrams.
(A) The requirements model addresses the behavior of the system as seen by the end-
users, analysts, and testers.
This view is modeled statically using use-case and class diagrams, and dynamically using
sequence, collaboration, state chart, and activity diagrams.
The use case view describes how the system’s critical – architecturally significant –
use cases are realized by elements of the design model.
It is modeled statically using use case diagrams, and dynamically using UML
behavioral diagrams.
(B) The design model addresses the architecture of the system and the design of the
components within the architecture, including functional structure, concurrency
structure, implementation structure and execution structure of the solution space, as
seen by its developers.
Static descriptions are provided with structural diagrams – class, object, component,
and deployment diagrams.
Dynamic descriptions are provided with UML behavioral – collaboration, sequence,
state chart, activity – diagrams.
The design view describes the architecturally significant elements of the design model.
This view, an abstraction of the design model, addresses the basic structure and
functionality of the solution.
It is modeled statically using class and object diagrams, and dynamically using the
UML behavioral diagrams.
The process view addresses the run-time collaboration issues involved in executing the
architecture on a distributed deployment model, including the logical software
network topology, interprocess communication, and state management.
This view is modeled statically using deployment diagrams, and dynamically using the
UML behavioral diagrams.
The component view describes the architecturally significant elements of the
implementation set.
This view, an abstraction of the design model, addresses the software source code
realization of the system from the perspective of the project’s integrators and
developers, especially with regard to releases and configuration management.
It is modeled statically using component diagrams, and dynamically using the UML
behavioral diagrams.
The deployment view addresses the executable realization of the system, including the
allocation of logical processes in the distribution view to the physical resources of the
deployment network.
It is modeled statically using deployment diagrams, and dynamically using the UML
behavioral diagrams.
FIGURE 7-1. Architecture, an organized and abstracted view into the design models
ARCHITECTURAL DESCRIPTIONS take on different forms and styles in different
organizations and domains.
An architecture requires a subset of artifacts in each engineering set.
The actual level of content in each set is situation-dependent, and there are few good
heuristics for describing objectively what is architecturally significant.
The architecture description takes a wide range of forms: from a simple direct subset of
UML diagrams – applicable for a small, highly skilled team building a development tool,
to a complex set of models with a variety of distinct views that capture and
compartmentalize the concerns of a sophisticated system – suitable for a highly
distributed, large-scale, catastrophic-cost-of-failure command and control system.
The artifact sets evolve through a project life cycle from the engineering stage – where
the focus is on the requirements and design artifacts – to the production stage, where the
focus shifts to the implementation and deployment artifacts.
The transition from the engineering stage to the production stage constitutes a state of
the project in which the relevant stakeholders agree that the vision can be achieved with
a highly predictable cost and schedule.
Support for this state requires not only briefings and documents, but also executable
prototypes that demonstrate evolving capabilities.
These demonstrations provide tangible feedback on the maturity of the solution.
The more standard components are used, the simpler this state is to achieve.
The more custom components are used, the harder it is to achieve and the harder it is to
estimate construction costs.
The main problem with the sequential, conventional process model is that the details of
each phase must be completed and frozen before the next phase starts; as a result,
important engineering decisions are delayed or blocked altogether.
The intent here is to recognize explicitly the continuum of activities in all phases. The
process is organized into seven top-level workflows:
Management
Environment
Requirements
Design
Implementation
Assessment
Deployment
The following table shows the allocation of artifacts and the emphasis of each workflow in
each of the life-cycle phases of inception, elaboration, construction, and transition.
TABLE 8-1. The artifacts and life-cycle emphases associated with each workflow

Management
Artifacts: business case, software development plan, status assessments, vision, work breakdown structure
Inception: prepare business case and vision
Elaboration: plan development
Construction: monitor and control development
Transition: monitor and control deployment

Environment
Artifacts: environment, software change order database
Inception: define development environment and change management infrastructure
Elaboration: install development environment and establish change management database
Construction: maintain development environment and software change order database
Transition: transition maintenance environment and software change order database

Requirements
Artifacts: requirements set, release specifications, vision
Inception: define operational concept
Elaboration: define architecture objectives
Construction: define iteration objectives
Transition: refine release objectives

Design
Artifacts: design set, architecture description
Inception: formulate architecture concept
Elaboration: achieve architecture baseline
Construction: design components
Transition: refine architecture and components

Implementation
Artifacts: implementation set, deployment set
Inception: support architecture prototypes
Elaboration: produce architecture baseline
Construction: produce complete componentry
Transition: maintain components

Assessment
Artifacts: release specifications, release descriptions, user manual, deployment set
Inception: assess plans, vision, prototypes
Elaboration: assess architecture
Construction: assess interim releases
Transition: assess product releases

Deployment
Artifacts: deployment set
Inception: analyze user community
Elaboration: define user manual
Construction: prepare transition materials
Transition: transition product to user
Implementation:
o developing or acquiring any new components, and enhancing/modifying
any existing components, to demonstrate the evaluation criteria allocated
to this iteration
o integrating and testing all new/modified components with
existing baselines of the previous versions
Assessment:
o evaluating the results of the iteration, including compliance with the
allocated evaluation criteria and the quality of the current baselines
o identifying any rework required and determining if it should be performed
before deployment of this release or allocated to the next release
o assessing results to improve the basis of the subsequent iteration’s plan
Deployment:
o transitioning the release to an external organization or to internal closure
by conducting a post-mortem so that lessons learned can be captured and
reflected in the next iteration
Many of the activities in this sequence also occur concurrently.
For example, requirements analysis is not done all in one continuous lump; it intermingles
with management, design, implementation, and so on.
Iterations in the inception and elaboration phases focus on management, requirements, and
design activities.
Iterations in the construction phase focus on design, implementation, and assessment.
Iterations in the transition phase focus on assessment and deployment.
In practice, the various sequences and overlaps among iterations become more complex.
The terms iteration and increment deal with some of the pragmatic considerations.
An iteration represents the state of the overall architecture and the complete deliverable
system.
An increment represents the current work in progress that will be combined with the
preceding iteration to form the next iteration.
Figure 8-4, an example of a simple development life cycle, illustrates the difference between
iterations and increments.
A typical build sequence from the perspective of an abstract layered architecture is also
illustrated therein.
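The relationship between iterations and increments described above can be sketched in code. This is only an illustrative model of the concept (the names, such as apply_increment and the layer names, are hypothetical), not part of any prescribed tool:

```python
from dataclasses import dataclass


@dataclass
class Iteration:
    """State of the overall architecture and the complete deliverable system."""
    number: int
    components: frozenset  # components integrated so far

def apply_increment(previous, increment):
    """An increment is the current work in progress; combined with the
    preceding iteration, it forms the next iteration."""
    return Iteration(previous.number + 1,
                     previous.components | frozenset(increment))

# Hypothetical build sequence for a layered architecture:
# lower layers are integrated first, as in Figure 8-4.
iteration0 = Iteration(0, frozenset({"os_services"}))
iteration1 = apply_increment(iteration0, {"middleware"})
iteration2 = apply_increment(iteration1, {"applications"})
```

Each call produces a new whole-system state rather than mutating the old one, mirroring the idea that an iteration is a complete baseline, not a delta.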
1. List the seven top-level workflows and map them to product artifacts.
(P 118, Fig 8-1/P199)
2. Using a neat diagram, explain the workflow of an iteration. (Fig 8-2/P 121)
3. Explain the build sequence associated with a layered architecture with iteration
emphasis across the life cycle. (Fig. 8-3/P 123, Fig. 8-4/P124)
FIGURE 9-1. A typical sequence of life-cycle checkpoints
Major milestones – the life-cycle objectives, life-cycle architecture, initial operational
capability, and product release milestones: strategic focus on global concerns of the
entire software project.
Minor milestones: tactical focus on local concerns of the current iteration.
Status assessments: periodic synchronization of stakeholder expectations.
Table 9-1 summarizes the balance of information across the major milestones.
TABLE 9-1. The general status of plans, requirements, and products across the major milestones

Life-cycle objectives milestone
Plans: definition of stakeholder responsibilities; low-fidelity life-cycle plan; high-fidelity elaboration phase plan
Understanding of problem space (requirements): baseline vision, including growth vectors, quality attributes, and priorities; use case model
Solution space progress (software product): demonstration of at least one feasible architecture; make/buy/reuse trade-offs; initial design model

Life-cycle architecture milestone
Plans: high-fidelity construction phase plan; low-fidelity transition phase plan
Understanding of problem space: stable vision and use case model; evaluation criteria for construction releases and initial operational capability; draft user manual
Solution space progress: stable design set; make/buy/reuse decisions; critical component prototypes

Initial operational capability milestone
Plans: high-fidelity transition phase plan
Understanding of problem space: acceptance criteria for product release; releasable user manual
Solution space progress: stable implementation set; critical features and core capabilities; objective insight into product qualities

Product release milestone
Plans: next-generation product plan
Understanding of problem space: final user manual
Solution space progress: stable deployment set; full features; compliant quality
I. Requirements
A. Use case model
B. Vision document – text, use cases
C. Evaluation criteria for elaboration – text, scenarios
II. Architecture
A. Design view – object models
B. Process view – run-time layout, executable code structure
C. Component view – subsystem layout, make/buy/reuse component identification
D. Deployment view – target run-time layout, target executable code structure
E. Use case view – test case structure, test result expectation
1. Draft user manual
III. Source and executable libraries
A. Product components
B. Test components
C. Environment and tool components
FIGURE 9-2. Engineering artifacts available at the life-cycle architecture milestone
Default agenda(s) for the life-cycle architecture milestone:
Presentation Agenda
I. Scope and Objectives
A. Demonstration overview
II. Requirements assessment
A. Project vision and use cases
B. Primary scenarios and evaluation criteria
Demonstration Agenda
I. Evaluation criteria
II. Architecture subset summary
III. Demonstration environment summary
IV. Scripted demonstration scenarios
V. Evaluation criteria results and follow-up items
For longer iterations, more intermediate review points may be necessary: test readiness
reviews, intermediate design walkthroughs, and so on.
Iterations take different forms and priorities in different phases of the life cycle.
Early iterations focus on analysis and design; later iterations focus more on
completeness, consistency, usability, and change management.
The milestones of an iteration and its associated evaluation criteria must focus the
engineering activities as defined in the software development plan, business case, and
vision.
Iteration Readiness Review.
o This informal milestone is conducted at the start of each iteration.
o To review the detailed iteration plan and the evaluation criteria allocated to
the iteration.
Iteration Assessment Review.
o This informal milestone is conducted at the end of each iteration.
o To assess whether the iteration achieved its objectives and satisfied its
evaluation criteria.
o To review iteration results
o To review qualification test results
o To determine the amount of rework to be done
o To review the impact of the iteration results on the plan for subsequent
iterations.
The project and the organizational culture determine the format and the content of these
informal milestones.
Each of the workflows – management, requirements, design, implementation, assessment,
and deployment – recurs across successive iterations N - 1, N, and N + 1.
The topics and content of a periodic status assessment:
Personnel: staffing plan vs. actuals; attritions, additions
Financial trends: expenditure plan vs. actuals for the previous, current, and next major
milestones; revenue forecasts
Top 10 risks: issues and criticality resolution plans; quantification – cost, time,
quality – of exposure
Technical progress: configuration baseline schedules for major milestones; software
management metrics and indicators; current change trends; test and quality assessments
Major milestone plans and results: plan, schedule, and risks for the next major
milestone; pass/fail results for all acceptance criteria
Total product scope: total size, growth, and acceptance criteria perturbations
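The topics above can be captured as a simple status record. The field names, figures, and variance helpers below are illustrative assumptions, not part of any prescribed reporting format:

```python
from dataclasses import dataclass


@dataclass
class StatusAssessment:
    """One periodic status assessment (field names are illustrative)."""
    staffing_plan: int          # planned headcount
    staffing_actual: int        # actual headcount
    expenditure_plan: float     # planned spend to date
    expenditure_actual: float   # actual spend to date
    top_risks: list             # top risks with resolution plans

    def staffing_variance(self):
        # Positive means staffed over plan; negative means under plan.
        return self.staffing_actual - self.staffing_plan

    def cost_variance(self):
        # Actual expenditure vs. plan for the period being assessed.
        return self.expenditure_actual - self.expenditure_plan

# Hypothetical assessment: 2 people under plan, $40,000 over budget.
report = StatusAssessment(20, 18, 500_000.0, 540_000.0,
                          ["COTS vendor schedule slip"])
```

Comparing plan-versus-actual fields like these is what turns a status assessment into the "periodic synchronization of stakeholder expectations" described above.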
FIGURE 10-1. Conventional work breakdown structure, following the product hierarchy
Management
System requirements and design
Subsystem 1
  Component 11
    Requirements
    Design
    Code
    Test
    Documentation
  … (similar structures for other components)
  Component 1N
    Requirements
    Design
    Code
    Test
    Documentation
… (similar structures for other subsystems)
Subsystem M
  Component M1
    Requirements
    Design
    Code
    Test
    Documentation
  … (similar structures for other components)
  Component MN
    Requirements
    Design
    Code
    Test
    Documentation
Integration and test
  Test planning
  Test procedure preparation
  Testing
  Test reports
Other support areas
  Configuration control
  Quality assurance
  System administration
Second-level WBS elements are defined for each phase of the life cycle.
These elements allow the fidelity of the plan to evolve more naturally
with the level of understanding of the requirements and architecture,
and the risks therein.
Third-level WBS elements are defined for the focus of activities that
produce the artifacts of each phase.
These elements may be the lowest level in the hierarchy that collects the
cost of a discrete artifact for a given phase, or they may be decomposed
further into several lower level activities that, taken together, produce
a single artifact.
A WBS consistent with the process framework – phases, workflows, and artifacts – should show
how the elements of the process framework can be integrated into a plan.
It should provide a framework for estimating the costs and schedules of each element, allocating
them across a project organization, and tracking expenditures.
The structure should be tailored to the specifics of a project in the following ways:
1. Scale. Larger projects will have more levels and substructures.
2. Organizational structure. Projects that span multiple organizational entities may introduce
constraints that necessitate different WBS allocations.
3. Degree of custom development. Depending on the character of the project, there can be
different emphases in the requirements, design, and implementation workflows.
A business process re-engineering project based primarily on existing components
would have much more depth in the requirements element and a fairly shallow design
and implementation element.
A fully custom development of a one-of-a-kind technical application requires fairly
deep design and implementation elements to manage the risks associated with the
custom, first-generation components.
4. Business context. Contractual projects require more elaborate management and
assessment elements. Projects delivering commercial products to a broad customer base
require more elaborate substructures for the deployment element, whereas an application
deployed to a single site may need only a trivial one.
5. Precedent experience. Most projects are developed as new generations of a legacy
system or in the context of existing organizational standards, rather than from scratch.
It is important to accommodate these constraints to ensure that new projects exploit the
existing experience base and benchmarks of project performance.
The WBS decomposes the character of the project and maps it to the life cycle, the budget, and
the personnel.
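A WBS is simply a cost-collecting hierarchy, so its key mechanic – rolling lower-level estimates up to the project total – can be sketched briefly. The element names and cost figures below are illustrative only:

```python
from dataclasses import dataclass, field


@dataclass
class WBSElement:
    """A node in the work breakdown structure; any node may carry cost."""
    name: str
    cost: float = 0.0
    children: list = field(default_factory=list)

    def total_cost(self):
        # Roll lower-level cost estimates up through the hierarchy.
        return self.cost + sum(child.total_cost() for child in self.children)

# Hypothetical fragment of a first-level WBS with illustrative costs.
project = WBSElement("Project", children=[
    WBSElement("Management", cost=100.0),
    WBSElement("Design", children=[
        WBSElement("Inception phase architecture prototyping", cost=40.0),
        WBSElement("Elaboration phase architecture baselining", cost=60.0),
    ]),
])
```

Because each element aggregates its children, expenditures can be tracked and budgets allocated at whatever level of the hierarchy matches the current planning fidelity.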
A. Management
AA Inception phase management
AAA Business case development
AAB Elaboration phase release specifications
AAC Elaboration phase WBS baselining
AAD Software development plan
AAE Inception phase project control and status assessments
B. Environment
BA Inception phase environment specification
BB Elaboration phase environment baselining
BBA Development environment installation and administration
BBB Dev. environment integration and custom tool-smithing
BBC SCO database formulation
D. Design
DA Inception phase architecture prototyping
E. Implementation
EA Inception phase component prototyping
EB Elaboration phase component implementation
EBA Critical component coding demonstration integration
EC Construction phase component implementation
ECA Initial release(s) component coding and stand-alone testing
ECB Alpha release component coding and stand-alone testing
ECC Beta release component coding and stand-alone testing
ECD Component maintenance
ED Transition phase component maintenance
F. Assessment
FA Inception phase assessment planning
FB Elaboration phase assessment
FBA Test modeling
FBB Architecture test scenario implementation
FBC Demonstration assessment and release descriptions
G. Deployment
GA Inception phase deployment planning
GB Elaboration phase deployment planning
GC Construction phase deployment
GCA User manual baselining
Inception phase fidelity: Management – High; Environment – Moderate; Requirements –
High; Design – Moderate; Implementation – Low; Assessment – Low; Deployment – Low
Elaboration phase fidelity: Management – High; Environment – High; Requirements – High;
Design – High; Implementation – Moderate; Assessment – Moderate; Deployment – Low
FIGURE 10-3. Evolution of planning fidelity in the WBS over the life cycle
WBS is the most valuable source of objective information about the project plan in performing
project assessments and software management audits.
Another important attribute of a good WBS is that the planning fidelity inherent in each element
is commensurate with the current life-cycle phase and project state.
The risk is that the guidelines may be adopted blindly, without being adapted to the
specific circumstances of the project; blind adoption is usually the mark of an
incompetent management team.
Two simple planning guidelines should be considered when a project plan is being initiated or
assessed.
The first guideline prescribes a default allocation of costs across the first-level WBS
elements (Table 10-1); the second prescribes default allocations of effort and schedule
across the life-cycle phases (Table 10-2).
The data in Table 10-1 and Table 10-2 come mostly from software cost estimation efforts.
In the later phases, the top-down approach should be well tuned to the project-specific
parameters, so it should be used more as a global assessment technique.
Planning the content and schedule of the major milestones and their intermediate iterations is the
most tangible form of the overall risk management plan.
An evolutionary build plan is important as there are always adjustments in build content and
schedule as early conjecture evolves into well-understood project circumstances.
A description of a generic build progression and general guidelines on the number of iterations in
each phase:
Iteration is used to mean a complete synchronization across the project, with a well-orchestrated
global assessment of the entire project baseline.
Other micro-operations – monthly, weekly or daily builds – are performed en route to these
project-level synchronization points.
Inception iterations.
The early prototyping activities integrate the foundation components of a candidate
architecture and provide an executable framework for elaborating critical use cases of
the system.
This framework includes existing components, commercial components, and custom
prototypes sufficient to demonstrate a candidate architecture and sufficient
requirements understanding to establish a credible business case, vision, and software
development plan.
To achieve an acceptable prototype, two iterations may be necessary based on the size
of the project.
Elaboration iterations.
These iterations result in an architecture, including a complete framework and
infrastructure for execution.
Upon completion of the architecture iteration, a few critical use cases should be
demonstrable:
(1) initializing the architecture,
(2) injecting a scenario to drive the worst-case data processing flow through the
system – for example, the peak load scenario, and
(3) injecting a scenario to drive the worst-case control flow through the system – for
example, orchestrating the fault-tolerance use cases.
Two iterations should be planned for, to achieve an acceptable architectural baseline.
More iterations may be required in exceptional cases.
Construction iterations.
Most projects require at least two major construction iterations:
(1) An alpha release includes executable capability for all the critical use cases.
It represents about 70% of the total product breadth and performs at quality –
performance and reliability – levels below the final expectations.
(2) A beta release provides 95% of the total product capability breadth and achieves
some of the important attributes.
A few more features need to be completed, and improvements in robustness and
performance are necessary for the final product release to be acceptable.
To manage risks or optimize resource consumption, a few more iterations may be
necessary, in some cases.
Transition iterations.
Most projects use a single iteration to transition a beta release into the final product.
A number of small-scale iterations may be necessary to resolve defects, incorporate
beta feedback, and incorporate performance improvements.
Because of the overhead associated with a full-scale transition to the user community,
most projects are better off with a single iteration between a beta release and the
final product release.
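The iteration guidelines above can be summarized as default counts per phase. These are only defaults distilled from the text (the variable names are illustrative); real projects adjust them to manage risk or optimize resource consumption:

```python
# Default project-level iteration counts per phase, summarizing the
# guidelines above; project-specific risk may add iterations.
typical_iterations = {
    "inception": 1,     # one (sometimes two) prototyping iterations
    "elaboration": 2,   # two iterations to reach an architecture baseline
    "construction": 2,  # an alpha release and a beta release
    "transition": 1,    # beta release to final product release
}

total_iterations = sum(typical_iterations.values())
```

Remember that these are full project-level synchronization points; monthly, weekly, or daily builds happen en route to each of them.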
1. Define a WBS. Compare the issues related to the conventional and evolutionary WBSs.
2. Explain the two planning guidelines.
3. Explain how planning balance is achieved throughout the life cycle in cost and schedule
estimating process.
4. Explain the iteration planning process in the four phases of the life cycle.
Project teams are motivated by the cost, schedule, and quality of specific deliverables.
Projects rarely invest in any technology or service that does not directly improve the
cost, schedule, or quality of their deliverables.
This chapter focuses on organizations at the project level – the level at which software
is developed and delivered.
Generic roles, relationships, and responsibilities are described in this chapter for both line
of business and project types of software organizations.
The ideas or recommendations presented here have to be tailored to the domain, scale,
cultures, and personalities of a specific situation.
SEPA facilitates the exchange of information and process guidance both to and
from project practitioners.
This role is accountable to the general manager for maintaining current assessments
of the process maturity and the plan for future process improvements.
The SEPA must help initiate and periodically assess project processes.
Only when the SEPA understands both the desired improvement and the project
context will it be possible to catalyze the capture and dissemination of best
software practices.
The SEPA is a necessary role in any organization to take on the responsibility and
accountability for the process definition and its modification, improvement, or
technology insertion.
The SEPA could be a single individual – the general manager, or a team of
representatives.
The SEPA must be an authority, competent and powerful, not a staff position
rendered impotent by ineffective bureaucracy.
The PRA is responsible for ensuring that a software project complies with all
organizational and business-unit software policies, practices, and standards.
A software project manager is responsible for meeting the requirements of a
contract or any other compliance standard, and is accountable to the PRA.
The PRA reviews both the project’s conformance to contractual obligations and the
project’s organizational policy obligations.
The customer monitors contract requirements, contract milestones, contract
deliverables, monthly management reviews, progress, quality, cost, schedule, and
risk.
The PRA reviews customer commitments and adherence to organizational policies,
organizational deliverables, financial performance, and other risks and
accomplishments.
Infrastructure
An organization’s infrastructure provides human resources support, project-
independent research and development, and other capital software engineering
assets.
Artifacts: vision statement; requirements set; work breakdown structure
Activities: requirements elicitation; requirements specification; use case modeling;
financial forecasting and reporting; WBS definition and administration
Schedules, costs, functionality, and quality expectations are highly interrelated. They
require continuous negotiation among the stakeholders who have different goals.
The software management team is expected to deliver win conditions to all stakeholders.
So, the software manager has the burdensome task of balance.
The focus of software management team activities over the project life cycle:
Software Management (systems engineering, financial administration, quality assurance)
Artifacts: business case; vision; software development plan; WBS; status assessments;
requirements set
Responsibilities: resource commitments; personnel assignments; plans and priorities;
stakeholder satisfaction; scope definition; risk management; project control
Life-Cycle Focus
Inception: elaboration phase planning; team formulation; contract baselining;
architecture costs
Elaboration: construction phase planning; full staff recruitment; risk resolution;
construction costs
Construction: transition phase planning; construction plan optimization; product
acceptance criteria; risk management
Transition: customer satisfaction; contract closure; sales support; next-generation
planning
Software Architecture
For any project, the skill of the software architecture team is crucial.
It provides the framework for facilitating communications, for achieving system-wide
quality, and for implementing the applications.
The success of the project depends heavily on the quality of the architecture team: with
a good architecture team, success is likely; with a bad one, even an expert development
team will usually fail.
The inception and elaboration phases are dominated by two distinct teams: the software
management team and software architecture team.
The software development and software assessment teams engage in support roles in the
production stage.
By the construction phase, the architecture transitions into a maintenance mode and must
be supported by a minimal level of effort to ensure continuity of the engineering legacy.
The architecture team must include the following level of expertise:
Domain experience to produce an acceptable design view and use case view
Software technology experience to produce an acceptable process view, component
view, and deployment view
The architecture team is responsible for system-level quality, which includes attributes
such as reliability, performance, and maintainability.
These attributes span multiple components and represent how well the components
integrate to provide an effective solution.
So, the architecture team decides how multiple-component design issues are resolved.
Software Development Team
The figure on the next page shows the software development team activities over the life-
cycle:
The software development team is the most application-specific group.
It comprises several sub-teams dedicated to groups of components that require a common
skill set.
Software Development
Database:
Specialists with experience in the organization, storage, and retrieval of data
GUI:
Specialists with experience in the display organization, data presentation, and user
interaction to support human input, output, and control needs
Domain applications:
Specialists with experience in algorithms, application processing, or business rules
specific to the system
The software development team is responsible for the quality of individual components,
including all component development, testing, and maintenance.
The development team decides how any design or implementation issue local to a single
component is resolved.
There are two reasons for using an independent team for software assessment:
(1) to ensure an independent quality perspective.
(2) to exploit the concurrency of activities
Schedules can be accelerated by preparing for testing in parallel with development
activities.
Change management, test planning, and test scenario development can be performed in
parallel with design and development.
Software Assessment
Evaluation criteria will document the customer’s expectations at each major milestone,
and release descriptions will substantiate the test results.
The final iterations will be equivalent to acceptance testing and include levels of detail
similar to those of software test plans, procedures, and reports.
The artifacts evolve from brief, abstract versions in early iterations into more detailed and
more rigorous documents, with detailed completeness and traceability discussions in later
releases.
These scenarios should be subjected to change management like other software and are
always maintained up-to-date for automated regression testing.
The assessment team is responsible for the quality of baseline releases with respect to the
requirements and customer expectations.
The assessment team is therefore responsible for exposing any quality issues that affect
the customer’s expectations, whether or not the expectations are captured in the
requirements.
The project organization represents the architecture of the team and needs to evolve
consistent with the project plan captured in the WBS.
The following figure illustrates how the team’s center of gravity shifts over the life cycle,
with about 50% of the staff assigned to one set of activities in each phase:
Phase            Management   Architecture   Development   Assessment
Inception            50%          20%            20%           10%
Elaboration          10%          50%            20%           20%
Construction         10%          10%            50%           30%
Transition           10%           5%            35%           50%
♥ Inception team
Focus on planning, with enough support from other teams to ensure that the plans
represent a consensus of all perspectives
♥ Elaboration team
Focus on architecture in which the driving forces of the project reside in the software
architecture team and are supported by the software development and software
assessment teams to achieve a stable architecture baseline
♥ Construction team
Most of the activity resides, in a balanced way, in the software development and
software assessment teams
♥ Transition team
Customer-focused with usage feedback driving the deployment activities
The task of integrating the environment and infrastructure for software development
often results in the selection of incompatible tools that have different information
repositories, are supplied by different vendors, work on different platforms, use different
jargon, and are based on different process assumptions.
Integrating such an infrastructure turns out to be far more complex than expected.
The infrastructure requirements include tool selection, custom toolsmithing, and the
process automation necessary to perform against the development plan with acceptable
efficiency.
Each of the three levels of the process requires a certain degree of process automation for
the corresponding process to be carried out efficiently:
1. Metaprocess.
An organization’s policies, procedures, and practices for managing a software-
intensive line of business.
The automation support for this level is called an infrastructure.
An infrastructure is an inventory of preferred tools, artifact templates,
microprocess guidelines, macroprocess guidelines, project performance
repository, database of organizational skill sets, and library of precedent
examples of past project plans and results.
2. Macroprocess.
A project’s policies, procedures, and practices for producing a complete
software product within its cost, schedule, and quality constraints
The automation support for a project’s process is called an environment.
An environment is a specific collection of tools to produce a specific set of
artifacts as governed by a specific project plan.
3. Microprocess.
A project team’s policies, procedures, and practices for achieving an artifact of
the software process.
The automation support for generating an artifact is called a tool.
Tools include requirements management, visual modeling, compilers, editors,
debuggers, change management, metrics automation, document automation, test
automation, cost estimation, and workflow automation.
While the main focus of process automation is the workflow of a project-level
environment, the infrastructure context of the project’s parent organization and the tool
building blocks are the prerequisites.
There are many tools available to automate the software development process.
Most of the core software development tools map closely to one of the process workflows,
as illustrated in the following figure:
FIGURE 12-1. Automation and tool components that support the process workflows
Each of the process workflows has a distinct need for automation support.
In some cases, it is necessary to generate an artifact; in others, it is needed for
bookkeeping.
Management
There are many opportunities for automating the project planning and control activities of
the management workflow.
Software cost estimation tools and WBS tools are useful for generating the planning
artifacts.
For managing against a plan, workflow management tools and a software project
control panel that can maintain an on-line version of the status assessments are
advantageous.
This automation support can improve the insight of the metrics collection and reporting
concepts.
Environment
Requirements
The equal treatment of all requirements wasted time on non-driving requirements, and
also on the corresponding paperwork that was ultimately discarded.
In a modern process,
♥ The requirements are captured in the vision statement.
[Figure: round-trip engineering. Automated production connects the requirement set, UML models, the deployment set, and executable code, with forward engineering (source generation from models), reverse engineering (model generation from source), and traceability links maintained among the artifacts.]
In the above figure, the automated translation between design models and source code
(both forward and reverse engineering) is well established.
Compilers and linkers automate the translation of source code into executable code.
The primary reason for round-trip engineering is to allow freedom in changing software
engineering data sources.
This configuration control of all the technical artifacts is crucial to maintaining a
consistent and error-free representation of the evolving product.
Bidirectional transitions are not necessary in all cases.
Translation from one data source to another may not provide 100% completeness. For
example, translating design models into C++ source code may provide only the structural
and declarative aspects of the source code representation.
The code components may still need to be fleshed out with the specifics of certain object
attributes and methods.
Tracking changes in the technical artifacts is crucial to understanding the true technical
progress trends and quality trends toward delivering an acceptable end product or interim
release.
In a modern process, change management has become fundamental to all phases and
almost all activities.
The atomic unit of software work that is authorized to create, modify, or obsolesce
components within a configuration baseline is called a software change order (SCO).
SCOs are a key mechanism for partitioning, allocating, and scheduling software work
against an established software baseline and for assessing progress and quality.
An example SCO, as a good starting point for describing a set of change primitives, is a
form capturing the title, the originator’s name and date, the project, and the fields
described below.
It shows the level of detail required to achieve the metrics and change management rigor.
By automating data entry and maintaining change records on-line, the change
management activity associated with metrics reporting can also be automated.
If resolution requires two people on two different teams, two discrete SCOs should be
filed.
The basic fields of the SCO are title, description, metrics, resolution, assessment, and
disposition.
☺ Title.
The title is suggested by the originator and is finalized upon acceptance by the
configuration control board – CCB.
This field should include a reference to an external software problem report if
the change was initiated by an external person – such as a user.
☺ Description.
The problem decomposition includes the name of the originator, date of
origination, CCB-assigned SCO identifier, and relevant version identifiers of
related support software.
The textual problem description should provide as much detail as possible,
along with attached code excerpts, display snapshots, error messages, and any
other data that may help to isolate the problem or describe the change needed.
☺ Metrics.
The metrics collected for each SCO are important for planning, for scheduling
and for assessing quality improvement.
Change categories are type 0 (critical bug), type 1 (bug), type 2 (enhancement),
type 3 (new feature), and type 4 (other).
Upon acceptance of the SCO, initial estimates are made of the amount of
breakage and effort required to resolve the problem.
The breakage item quantifies the volume of change, and the rework quantifies
the complexity of change.
After resolution, the actual breakage is noted, and the actual rework effort is
further elaborated.
The analysis item identifies the number of staff hours spent in understanding the
required change – re-creating, isolating, and debugging the problem if the
change is type 0 or 1; analysis and prototyping alternative solutions if it is type
2 or 3.
The implement item identifies the staff hours necessary to design and implement
the resolution.
The test item identifies the hours expended in testing the resolution.
The document item identifies all effort expended in updating other artifacts such
as user manual or release description.
Breakage quantifies the extent of change and can be defined in units of SLOC,
function points, files, components, or classes.
In the case of SLOC, a source file comparison program to quantify differences
may provide a simple estimate of the breakage.
☺ Resolution.
This field includes the name of the person responsible for implementing the
change, the components changed, the actual metrics, and a description of the
change.
The lowest level of component references should be kept at approximately the
level of allocation to an individual.
A “component” allocated to a team is not a sufficiently detailed reference.
☺ Assessment.
This field describes the assessment technique as inspection, analysis,
demonstration, or test.
It should also reference all existing test cases and new test cases executed, and it
should identify all different test configurations, such as platforms, topologies,
and compilers.
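As a sketch of how these fields and the type 0-4 change categories might be represented in a project's change-tracking tooling, consider the following. The class and field names are illustrative assumptions, not taken from the text:

```python
from dataclasses import dataclass, field
from enum import IntEnum

class ChangeType(IntEnum):
    """The five change categories described in the text."""
    CRITICAL_BUG = 0   # type 0
    BUG = 1            # type 1
    ENHANCEMENT = 2    # type 2
    NEW_FEATURE = 3    # type 3
    OTHER = 4          # type 4

@dataclass
class SCOMetrics:
    """Breakage (volume of change) and rework effort, split into the
    analysis, implement, test, and document items described above."""
    breakage: int = 0            # e.g. SLOC, files, or classes changed
    analysis_hours: float = 0.0  # understanding/isolating the change
    implement_hours: float = 0.0 # designing and implementing the fix
    test_hours: float = 0.0      # testing the resolution
    document_hours: float = 0.0  # updating manuals, release notes, etc.

    @property
    def rework_hours(self) -> float:
        """Total rework effort across all activities."""
        return (self.analysis_hours + self.implement_hours
                + self.test_hours + self.document_hours)

@dataclass
class SoftwareChangeOrder:
    """The basic SCO fields: title, description, metrics,
    resolution, assessment, and disposition."""
    title: str
    description: str
    change_type: ChangeType
    metrics: SCOMetrics = field(default_factory=SCOMetrics)
    resolution: str = ""   # responsible person, components changed
    assessment: str = ""   # inspection, analysis, demonstration, or test
    disposition: str = ""  # current CCB state of the SCO
```

Keeping SCOs in a structured form like this is what makes the metrics reporting described later automatable as a byproduct of the change records themselves.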
Configuration Baseline
There are generally two classes of baselines: external product releases and internal testing
releases.
A configuration baseline is controlled formally as it is a packaged exchange between
groups.
For example, the development organization may release a configuration baseline to the
test organization.
A project may release a configuration baseline to the user for beta testing.
Generally, three levels of baseline releases are required for most systems: major, minor,
and interim.
Each level corresponds to a numbered identifier such as N.M.X, where N is the major
release number, M the minor release number, and X the interim release identifier.
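The N.M.X numbering scheme can be parsed and ordered mechanically. A minimal sketch, assuming purely numeric identifiers:

```python
def parse_release_id(release_id: str) -> tuple[int, int, int]:
    """Split an N.M.X identifier into (major, minor, interim)."""
    major, minor, interim = (int(part) for part in release_id.split("."))
    return (major, minor, interim)

# Tuples compare element-wise, so parsed identifiers order correctly
# even when a numeric field exceeds one digit (1.10.0 > 1.9.3):
releases = ["2.0.1", "1.10.0", "1.9.3", "2.1.0"]
ordered = sorted(releases, key=parse_release_id)
# ordered == ["1.9.3", "1.10.0", "2.0.1", "2.1.0"]
```

Sorting on the parsed tuple rather than the raw string avoids the classic mistake of lexicographic ordering, under which "1.10.0" would incorrectly sort before "1.9.3".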
A major release represents a new generation of the product or project.
A minor release represents the same basic product with enhanced features, performance,
or quality.
Major and minor releases are intended to be external product releases that are persistent
and supported for a period of time.
The figure on the following page shows examples of some release name histories for two
different situations:
Once software is placed in a controlled baseline, all changes are tracked.
A distinction must be made for the cause of a change.
Change categories are:
Type 0:
Critical failures, which are defects that are nearly always fixed before any external
release
These changes represent show-stoppers with an impact on the usability of the
software in its critical use cases.
Type 1:
A bug or defect that either does not impair the usefulness of the system or can be
worked around
Such errors tend to correspond to nuisances in critical use cases or to serious defects in
secondary use cases that have a low probability of occurrence.
FIGURE 12-4. Example release histories for a typical project and a typical product
(both release histories begin with Prototype 0.1)
Type 2:
A change that is an enhancement rather than a response to a defect
Its purpose is to improve performance, testability, usability, or some aspect of
quality that represents good value engineering.
Type 3:
A change that is necessitated by an update to the requirements
Such an update could be new features or capabilities that are outside the scope of the
current vision and business case.
Type 4:
Changes not accommodated by the other categories
Examples: document only or a version upgrade to commercial components
The following table provides examples of these changes in the context of two different
project domains: a large-scale, reliable air traffic control system and a packaged software
development tool.
Type 0 –
Air traffic control project: control deadlock and loss of flight data
Packaged visual modeling tool: loss of user data
Type 1 –
Air traffic control project: display response time that exceeds the requirement by 0.5 second
Packaged visual modeling tool: browser expands but does not collapse displayed entries
Type 2 –
Air traffic control project: add internal message field for response time instrumentation
Packaged visual modeling tool: use of color to differentiate updates from previous version of visual model
Type 3 –
Air traffic control project: increase air traffic management capacity from 1,200 to 2,400 simultaneous flights
Packaged visual modeling tool: port to new platform such as Win-NT
Type 4 –
Air traffic control project: upgrade from Oracle 7 to Oracle 8 to improve query performance
Packaged visual modeling tool: exception raised when interfacing to MS Excel 5.0 due to Windows resource management bug
TABLE 12-1. Representative examples of changes at opposite ends of the project spectrum
Highest organization level: standards that promote (1) strategic and long-term
process improvements, (2) general technology insertion and
education, (3) comparability of project and business unit performance, and (4)
mandatory quality control.
The organization policy is the defining document for the organization’s software policies.
In any process assessment, this is the tangible artifact that says what to do.
From this document, reviewers should be able to question project personnel and
determine whether the organization does what it says.
I Process-primitive definitions
A. Life-cycle phases (inception, elaboration, construction, transition)
B. Checkpoints (major milestones, minor milestones, status assessments)
C. Artifacts (requirements, design, implementation, deployment, management sets)
D. Roles and responsibilities (PRA, SEPA, SEEA, project teams)
II Organizational software policies
A. Work breakdown structure
B. Software development plan
C. Baseline change management
D. Software metrics
E. Development environment
F. Evaluation criteria and acceptance criteria
G. Risk management
H. Testing and assessment
III Waiver policy
IV Appendixes
A. Current process assessment
B. Software process improvement plan
FIGURE 12-5. Organization policy outline
Organization Environment
The organization environment for automating the default process provides answers to
how things get done and the tools and techniques to automate the process.
External stakeholders might include procurement agency contract monitors, end-user
engineering support personnel, third-party maintenance contractors, independent
verification and validation contractors, representatives of regulatory agencies, and others.
The following figure illustrates the new opportunities for value-added activities by
external stakeholders in large contractual efforts:
[FIGURE 12-6. Extending environments into stakeholder domains. Artifact releases and artifact baselines are exchanged electronically between the development organization's management and stakeholder management. Stakeholders receive a subset of the environment tools and process automation (visual modeling, editor-compiler-debugger, defect tracking), enabling value-added stakeholder activities: configuration control board participation, test scenario development, risk management analysis, metrics trend analysis, artifact reviews, analyses, and audits, and independent alpha and beta testing.]
There are several important reasons for extending development environment resources
into stakeholder domains. They are:
Technical artifacts are not just paper.
Electronic artifacts in rigorous notations such as visual models and source code are
viewed more efficiently by using tools with browsers.
Continuous and expedient feedback is more efficient, tangible, and useful when the
environment resources are electronically accessible by stakeholders.
For such a shared environment to be possible, the development teams should create an
open environment and provide adequate resources to accommodate customer access.
The stakeholders, for their part, should avoid abusing the access and interrupting
development work.
They should participate by adding value.
1. Explain the mapping between process workflows and the software development
tools, using a neat diagram. (Page 168-172, Fig. 12-1/P 169)
2. The project environment artifacts evolve through three discrete states – the
prototyping environment, the development environment, and the maintenance
environment. Explain the four important environment disciplines for the above
evolution. (Page 172-185)
3. Explain how round-trip engineering is a key requirement for environments that
support iterative development. (Page 173-174)
4. Explain about software change orders (SCOs). (Page 175-178, Fig. 12-3/P176)
5. a) Explain, using an example, how release histories are captured in configuration
baselines. (Page 178-179, Fig. 12-4/P 179)
b) Explain the SCO transitioning process including the role of the configuration
control board (CCB). (P 179-181)
The goals of software metrics are to provide the development team with the following:
An accurate assessment of progress to date
Insight into the quality of the evolving software product
A basis for estimating the cost and schedule for completing the product with
increasing accuracy over time
Of the many different metrics applicable in managing a modern process, there are seven
core metrics to be used on all software projects.
Metrics values provide one dimension of insight; metrics trends provide a perspective for
managing the process.
Metrics trends with respect to time provide insight into how the process and the product
are evolving.
Iterative development is about managing change, and measuring change is the most
important aspect of the metrics program.
The fundamental goal of management is: predictable cost and schedule performance for a
given level of quality.
Till this goal is achieved, absolute values of productivity and quality improvement will be
secondary issues.
The seven core metrics can be used in different ways to help manage projects and
organizations:
In this way, a project or organization can improve its ability to predict the cost, schedule, or
quality performance of future activities.
They are simple, objective, easy to collect, easy to interpret, and hard to misinterpret.
Collection can be automated and non-intrusive.
They provide for consistent assessments throughout the life cycle and are derived
from the evolving product baselines rather than from a subjective assessment.
They are useful to both management and engineering personnel for communicating
progress and quality in a consistent format.
Their fidelity improves across the life cycle.
Metrics applied to the engineering stage will be less accurate than those applied to the
production stage.
So, the prescribed metrics are tailored to the production stage, when the cost risk is
high and management value is leveraged.
Metrics activity during the engineering stage is geared toward establishing initial
baselines and expectations in the production stage plan.
FIGURE 13-1. Expected progress for a project with three major releases
(The figure plots work as percent complete against the project schedule, with Release 1, Release 2, and Release 3 each building toward 100%.)
Each major organizational team must have at least one primary progress perspective.
Measurements are made against such a perspective.
For the standard teams the default perspectives of this metric are:
Software architecture team: use cases demonstrated
Software development team: SLOC under baseline change management, SCOs closed
Software assessment team: SCOs opened, test hours executed, evaluation criteria met
Software management team: milestones completed
FIGURE 13-2. The basic parameters of an earned value system
(The figure plots three curves against time up to the current time: planned progress, currently 35%; actual progress, currently 25%; and actual cost expenditures, currently 25%.)
For software projects the culture of the team, the experience of the team, and the style of
the development [the process, its rigor, and its maturity] should drive the criteria used to
assess the progress objectively.
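The three basic parameters in Figure 13-2 (planned progress, actual progress, and actual cost) can be compared with a few lines of code. This is a simplified sketch that treats all three as percentages of the total project, not a full earned-value implementation:

```python
def earned_value_status(planned_pct: float, actual_pct: float,
                        spent_pct: float) -> dict:
    """Compare the three basic earned-value parameters, each expressed
    as a percentage of the total project."""
    return {
        # negative => behind schedule
        "schedule_variance": actual_pct - planned_pct,
        # negative => over budget for the progress achieved
        "cost_variance": actual_pct - spent_pct,
    }

# Values from the figure: planned 35%, actual progress 25%, spent 25%
status = earned_value_status(planned_pct=35, actual_pct=25, spent_pct=25)
# schedule_variance = -10 (behind plan); cost_variance = 0
```

The point of the comparison is that progress and expenditures are assessed against the same plan baseline, so a single snapshot yields both a schedule signal and a cost signal.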
[Figure: staffing levels plotted over the project schedule]
In such development projects, the maintenance team is typically much smaller than the
development team.
For a commercial product development, the sizes of the maintenance and development
teams may be the same.
[Figure: stability expectation over a healthy project's life cycle. Cumulative open and closed SCOs are plotted against released baselines over the project schedule; the gap between them is expected to converge.]
The change traffic metric can be collected by change type, by release, across all releases,
by team, by components, by subsystem, and so forth.
Coupled with the work and progress metrics, it provides insight into the stability of the
software and its convergence toward stability – or divergence and instability.
Stability is defined as the relationship between opened versus closed SCOs.
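Stability, as defined above, can be computed directly from cumulative counts of opened and closed SCOs. A small sketch, using hypothetical monthly data:

```python
def stability(opened_cumulative: list[int],
              closed_cumulative: list[int]) -> list[int]:
    """Stability trend: the number of SCOs still open at each sample
    point. A converging project shows this gap shrinking toward zero."""
    return [o - c for o, c in zip(opened_cumulative, closed_cumulative)]

# Cumulative opened vs. closed SCOs sampled per month (hypothetical data)
opened = [10, 25, 45, 60, 70, 74]
closed = [2, 12, 30, 50, 65, 72]
print(stability(opened, closed))  # [8, 13, 15, 10, 5, 2]
```

The trend here peaks mid-project and then converges, which is the healthy pattern; a gap that keeps widening late in the schedule signals divergence and instability.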
The change traffic relative to the release schedule provides insight into schedule
predictability – the primary value of this metric and an indicator of the performance of
the process. The other three quality metrics focus more on the quality of the product.
Breakage is defined as the average extent of change, which is the amount of software
baseline that needs rework.
The extent of rework is measured in terms of SLOC, function points, components, subsystems, files, and so on.
Modularity is the average breakage trend over time.
For a healthy project, the trend expectation is decreasing or stable, as in figure 13-6.
FIGURE 13-6. Modularity expectation over a healthy project's life cycle
(Breakage from implementation changes is plotted against the project schedule, with a decreasing or stable trend.)
Rework is defined as the average cost of change, which is the effort to analyze, resolve,
and retest all changes to software baselines.
Rework trends that are increasing with time clearly indicate that product maintainability
is suspect.
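The breakage, modularity, and rework definitions above reduce to simple computations. This sketch uses a diff-based SLOC count for breakage, as the text suggests, and averages over hypothetical per-SCO data:

```python
import difflib

def estimate_breakage(baseline_lines: list[str],
                      changed_lines: list[str]) -> int:
    """Simple SLOC-based breakage estimate for one file: the number of
    lines added or removed relative to the baseline, obtained with a
    source file comparison (diff)."""
    diff = difflib.ndiff(baseline_lines, changed_lines)
    return sum(1 for line in diff if line.startswith(("+ ", "- ")))

def average(values: list[float]) -> float:
    return sum(values) / len(values)

# Hypothetical per-SCO measurements:
breakage_per_sco = [120, 80, 40, 30]   # SLOC reworked per change
rework_hours_per_sco = [14, 10, 6, 4]  # staff hours per change

modularity = average(breakage_per_sco)      # average extent of change
rework = average(rework_hours_per_sco)      # average cost of change
```

Plotting these averages against time, rather than inspecting single values, is what yields the modularity and rework trends the text describes.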
MTBF is computed by dividing the test hours by the number of type 0 and type 1 SCOs.
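The MTBF definition translates directly into code; a minimal sketch:

```python
def mtbf(test_hours: float, type0_count: int, type1_count: int) -> float:
    """Mean time between failures: cumulative test hours divided by
    the number of type 0 and type 1 (defect) SCOs."""
    defects = type0_count + type1_count
    if defects == 0:
        return float("inf")  # no failures observed yet
    return test_hours / defects

# e.g. 500 test hours with 2 critical and 8 ordinary defect SCOs
# yields an MTBF of 50 hours between failures
```

Note that only defect categories (types 0 and 1) enter the denominator; enhancements and new features (types 2-4) do not count as failures.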
FIGURE 13-8. Maturity expectation over a healthy project’s life cycle
(MTBF increases across released baselines over the project schedule.)
Effective test infrastructure must be established for early insight into maturity.
For monolithic software, conventional approaches focused on complete test coverage:
every line of code, every branch, and so forth.
Software errors are categorized into two types: (1) deterministic and (2) nondeterministic.
(1) Bohr-bugs are a class of errors that always result when the software is stimulated in a
certain way.
These errors are caused by coding errors, and changes are isolated to single
components.
Conventional software executing a single program on a single processor typically
contained only Bohr-bugs.
(2) Heisen-bugs are software faults that surface only with a certain probabilistic
occurrence of a given situation.
These errors are caused by design errors, and are not repeatable even when the
software is stimulated in the same apparent way.
To provide test coverage and resolve the statistically significant Heisen-bugs,
extensive statistical testing under realistic and randomized usage scenarios is
necessary.
Modern, distributed systems with many interoperating components executing across a
network of processors are vulnerable to Heisen-bugs.
This testing approach provides a powerful mechanism for encouraging automation in the
test activities early in the life cycle.
The quality indicators are derived from the evolving product rather than from the artifacts.
They provide insight into the waste generated by the process.
Scrap and rework metrics are a standard measurement perspective for manufacturing
processes.
They recognize the inherently dynamic nature of an iterative process.
They explicitly concentrate on the trends or changes with respect to time rather than
focusing on absolute values.
The combination of insight from the current value and the current trend provides
tangible indicators for management action.
The actual values of these metrics vary across projects, organizations, and domains.
The relative trends should follow the following general pattern:
A mature development organization should describe metrics targets that are more definite
and precise for its line of business and specific processes.
Measurement helps the decision makers ask the right questions, understand the context, and
make objective decisions.
Because of the dynamic nature of software projects, these measures must always be available,
tailorable to different subsets of the evolving product (release, version, component,
class), and maintained so that trends can be assessed with reference to time.
This can be achieved when the metrics are maintained on-line as an automated byproduct.
Metrics display effects of problems; the causes require synthesis of multiple perspectives
and reasoning.
A software change order kept open for a long time may mean:
The problem was simple to diagnose but the solution required substantial rework,
or
The diagnosis was time-consuming but the solution required only a simple
change to a single line of code.
For managing against a plan, a software project control panel (SPCP) that maintains an on-
line version of the status of evolving artifacts provides a key advantage.
This concept was recommended by the Airlie Software Council, using the
metaphor of a project “dashboard”.
The idea is to provide a display panel that integrates data from multiple sources to
show the current status of some aspect of the project.
For example: the software manager can see a display with overall project values, a
test manager can see a display focused on metrics specific to a beta release, and
development managers can see the data concerning the subsystems and
components they are responsible for.
The panel can support standard features like warning lights, thresholds, variable
scales, digital formats, and analog formats to present an overview of the current
situation.
This automation can improve management insight into progress and quality trends
and improve the acceptance of metrics by the engineering team.
Specific monitors – called roles – include project managers, software development team
leads, software architects, and customers.
For every role, there is a specific panel configuration and scope of data presented.
Each role performs the same general use cases, with a different focus.
Monitor: defines panel layouts from existing mechanisms, graphical objects, and
linkages to project data; queries data to be displayed at different levels of
abstraction.
Indicators may display data in formats that are binary (such as black and white),
tertiary (such as red, yellow, and green), digital (integer or float), or some enumerated
type (a sequence of discrete values such as Sun through Sat, ready-aim-fire, or Jan through Dec).
Indicators also provide a mechanism for summarizing a condition or circumstance
associated with another metric, or the relationship between a metric and its associated
control values.
FIGURE 13-9. Examples of the fundamental metric indicators
(a) A comparison indicator plots N metric values with the same units over time against upper and lower thresholds; example: open action items.
(b) A progress indicator plots expected versus actual percent complete over time.
[Figure: an example top-level SPCP display defined for the project manager role.]
In this example, the project manager role has defined a top-level display with four
graphical objects.
1. Project activity status.
The graphical object in the upper left provides an overview of the status of the
top-level WBS elements.
The seven elements are coded red, yellow, and green to reflect current earned
value status.
Green represents ahead of plan, yellow within 10% of plan, and red identifies
elements with a greater than 10% cost or schedule variance.
This graphical object provides several examples of indicators: tertiary colors, the
actual percentage, and the current first derivative – up arrow means getting better,
down arrow means getting worse.
3. Milestone progress.
The graphical object in the lower left provides a progress assessment of the
achievement of milestones against plan and provides indicators of the current
values.
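The red/yellow/green coding rule described for the project activity status object can be expressed as a small function. This sketch assumes variance is a signed percentage, with negative values meaning behind plan or over budget:

```python
def earned_value_color(variance_pct: float) -> str:
    """Tertiary status indicator per the rule described above:
    green = ahead of plan, yellow = within 10% of plan,
    red = greater than 10% cost or schedule variance."""
    if variance_pct >= 0:
        return "green"
    if variance_pct >= -10:
        return "yellow"
    return "red"
```

Encoding the thresholds once, in one place, is what lets the panel render every WBS element consistently and lets a drill-down view reuse the same rule at finer granularity.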
An SPCP should support tailoring and provide the capability to drill down into the details
for any given metric.
The following top-level use case corresponds to a monitor interacting with the control
panel. It describes the basic operational concept for an SPCP.
♥ Start the SPCP.
The SPCP starts and shows the most current information saved on the last use of the
SPCP.
The SPCP is an example of a metrics automation approach that collects, organizes, and
reports values and trends extracted directly from the evolving engineering artifacts.
1. Describe the seven core metrics for project control and process instrumentation.
2. Describe the four quality indicators.
3. Discuss the reasons behind the choice of the seven core metrics.
4. List out the basic characteristics of a good metric.
5. Describe the “dashboard” method of metrics automation.
6. List out the contents of a top-level use case of the basic operational concept for an SPCP.
In tailoring the management process to a specific domain or project, there are two dimensions of
discriminating factors:
technical complexity, and
management complexity.
The following figure illustrates these two dimensions of process variability and shows an
example project application:
The formality of reviews, the quality control of artifacts, the priorities of concerns, and other
process instantiation parameters are governed by the point a project occupies in these two
dimensions.
The priorities along the two dimensions are summarized in the figure on the following page.
There are six process parameters that cause major differences among project processes.
These are critical dimensions that a software project manager must consider when tailoring a
process framework to create a practical process implementation.
Prepared by S. S. RAMANUJEM, PROF., DEPT OF MCA, SWARNANDHRA COLLEGE OF
ENGINEERING & TECHNOLOGY, NARSAPUR
PART – III SOFTWARE MANAGEMENT DISCIPLINES
CHAPTER-14 TAILORING THE PROCESS
LOWER MANAGEMENT COMPLEXITY               HIGHER MANAGEMENT COMPLEXITY
Less emphasis on risk management          More emphasis on risk management
Less process formality                    More process formality
More emphasis on individual skills        More emphasis on teamwork
Longer production and transition phases   Longer inception and elaboration phases
14.1.1 SCALE
The single most important factor in tailoring a software process framework is the total scale of
the application.
From a process tailoring perspective, the primary measure of scale is the size of the team.
As the headcount increases, the importance of consistent interpersonal communications becomes
paramount.
Generally, five people is an optimal size for an engineering team; most people can manage
four to seven things at a time.
There are fundamentally different management approaches to manage a team of 1 (trivial), a
team of 5 (small), a team of 25 (moderate), a team of 125 (large), a team of 625 (huge), and so
on.
As the team size grows, a new level of personnel management is introduced at each factor of 5.
This model can be used to describe some of the differences among projects of different sizes.
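The factor-of-5 model can be stated as a small calculation. The formula below is an interpretation of the text, not a formula from the book:

```python
def management_levels(team_size, span=5):
    """Number of personnel-management levels implied by a model in which
    a new level of management appears at each factor-of-5 growth in headcount."""
    levels = 0
    capacity = 1
    while capacity < team_size:      # add a level until the span covers the team
        capacity *= span
        levels += 1
    return levels

for size in (1, 5, 25, 125, 625):
    print(size, management_levels(size))   # 0, 1, 2, 3, 4 levels respectively
```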
Small projects – 5 people – require very little management overhead, but team leadership toward
a common objective is crucial.
There is some need to communicate the intermediate artifacts among team members.
Project milestones are easily planned, informally conducted, and easily changed.
There is a small number of individual workflows.
Performance depends primarily on personnel skills.
Process maturity is relatively unimportant.
Individual tools can have a considerable impact on performance.
Large projects – 125 people – require substantial management overhead, including a dedicated
software project manager and several subproject managers to synchronize project-level and
subproject-level workflows and to balance resources.
There is significant expenditure in overhead workflows across all team leads for dissemination,
review, coordination, and assessment.
Intermediate artifacts are explicitly emphasized to communicate engineering results across many
diverse teams.
Project milestones are formally planned and conducted, and changes to milestone plans are
expensive.
Large numbers of concurrent team workflows are necessary, each with multiple individual
workflows.
Performance is highly dependent on the skills of key personnel – subproject managers and team
leads.
Project performance is dependent on average people, for two reasons:
1. There are numerous mundane tasks in any large project, especially in the overhead
workflows.
2. The probability of recruiting, maintaining, and retaining a large number of exceptional
people is small.
Process maturity is necessary, particularly the planning and control aspects of managing project
commitments, progress, and stakeholder expectations.
An integrated environment is required to manage change, automate artifact production, and
maintain consistency among the evolving artifacts.
Huge projects – 625 people – require substantial management overhead, including multiple
software project managers and many subproject managers to synchronize project-level and
subproject-level workflows and balance resources.
There is significant expenditure in overhead workflows across all team leads for dissemination,
review, coordination, and assessment.
Intermediate artifacts are explicitly emphasized to communicate engineering results across many
diverse teams.
Project milestones are very formally planned and conducted, and changes to milestone plans
cause malignant re-planning.
There are very large numbers of concurrent team workflows, each with multiple individual
workflows.
Performance is highly dependent on the skills of key personnel – subproject managers and team
leads.
Software process maturity and domain experience are mandatory to avoid risk and ensure
synchronization of expectations of all the numerous stakeholders.
A mature, highly integrated, common environment across the development teams is necessary to
manage change, automate artifact production, maintain consistency among the evolving artifacts,
and improve the return on investment of common processes, common tools, common notations,
and common metrics.
The following table summarizes some key differences in the process primitives for small and
large (team-sized) projects:
TABLE 14-1. Process discriminators that result from differences in project size

Life-cycle phases
    Smaller team: weak boundaries between phases
    Larger team: well-defined phase transitions to synchronize progress among concurrent activities
Artifacts
    Smaller team: focus on technical artifacts; few discrete baselines; very few management artifacts required
    Larger team: change management of technical artifacts resulting in numerous baselines; management artifacts important
Workflow effort allocations
    Smaller team: more need for generalists, people who perform roles in multiple workflows
    Larger team: higher percentage of specialists; more people and teams focused on a specific workflow
Checkpoints
    Smaller team: many informal events for maintaining technical consistency
    Larger team: a few formal events; synchronization among teams, which can take days
Management discipline
    Smaller team: informal planning, project control, and organization
    Larger team: formal planning, project control, and organization
Automation discipline
    Smaller team: more ad hoc environments, managed by individuals
    Larger team: infrastructure to ensure a consistent, up-to-date environment available across all teams; additional tool integration to support project control and change control
Cohesive teams have common goals, complementary skills, and close communications.
Adversarial teams have conflicting goals, competing or incomplete skills, and less-than-open
communications.
A product funded, developed, marketed, and sold by the same organization can be set up with a
common goal, for example profitability.
A small, collocated organization can be established with a cohesive skill base and excellent
day-to-day communications among its team members.
In contrast, funding authorities and users want to minimize cost, maximize the feature set, and
accelerate time to market, while development contractors want to maximize profitability.
It is impossible to collocate large teams and synchronize stakeholder expectations.
All these factors tend to degrade team cohesion and must be managed continuously.
The following table summarizes key differences in the process primitives for varying levels of
stakeholder cohesion:
For loose contracts like building a commercial product within a business unit, management
complexity will be minimal.
In these processes, feature set, time to market, budget, and quality can be freely traded off and
change with very little overhead.
The entire coordination effort might involve only the development manager, marketing manager,
and business unit manager coordinating some key commitments.
For very rigorous contracts, it could take months to authorize a change in a release schedule.
To avoid a large custom development effort, it might be desirable to incorporate a new
commercial product into the overall design of a next-generation system.
This sort of change requires coordination among the development contractor, funding agency,
users, certification agencies, associate contractors for interfacing systems, among others.
Large-scale, catastrophic cost-of-failure systems have extensive contractual rigor and require
significantly different management approaches.
The following table summarizes key differences in the process primitives for varying levels of
process flexibility/rigor.
14.2 EXAMPLE: SMALL-SCALE PROJECT VERSUS LARGE-SCALE PROJECT
An analysis of the differences between the phases, workflows, and artifacts of two projects on
opposite ends of the management complexity spectrum shows how different two software project
processes can be.
The exercise is to point out the dimensions of flexibility, priority, and fidelity that can change
when a process framework is applied to different applications, projects, and domains.
The following table illustrates the differences in schedule distribution between a small project –
for example, a 50,000 SLOC Visual Basic application built by a team of five – and a large
project – a 300,000 SLOC embedded program built by a team of 40.
TABLE 14-7. Schedule distribution across phases for small and large projects

                            ----- ENGINEERING -----    ------ PRODUCTION ------
DOMAIN                      INCEPTION   ELABORATION    CONSTRUCTION  TRANSITION
Small commercial project    10%         20%            50%           20%
Large, complex project      15%         30%            40%           15%
The biggest difference is the relative time at which the life-cycle architecture milestone occurs.
This corresponds to the amount of time spent in the engineering stage compared to the
production stage.
For a small project the split is 30/70 and for a large project, it is 45/55.
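These splits follow directly from Table 14-7, since the engineering stage is inception plus elaboration and the production stage is construction plus transition. A quick arithmetic check:

```python
# Phase percentages from Table 14-7:
# (inception, elaboration, construction, transition)
schedule = {
    "small commercial project": (10, 20, 50, 20),
    "large, complex project":   (15, 30, 40, 15),
}

for project, (inc, elab, cons, tran) in schedule.items():
    engineering = inc + elab         # engineering stage
    production = cons + tran         # production stage
    print(f"{project}: {engineering}/{production}")
# small commercial project: 30/70
# large, complex project: 45/55
```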
Another key difference between any two projects is the relative leverage of the various process
components in the success of the project.
This reflects priorities for staffing and the level of associated risk management.
The following table lists the workflows in order of their importance:
TABLE 14-8. Differences in workflow priorities between small and large projects
RANK SMALL COMMERCIAL PROJECT LARGE, COMPLEX PROJECT
1 Design Management
2 Implementation Design
3 Deployment Requirements
4 Requirements Assessment
5 Assessment Environment
6 Management Implementation
7 Environment Deployment
CHAPTER-15 MODERN PROJECT PROFILES

[Figure 15-1. Progress profile of a modern software project: development progress (% coded)
versus project schedule, contrasting the iterative activities of a modern project with the
conventional project profile.]
Iterative development produces the architecture first, allowing integration to occur as the
verification activity of the design phase and enabling design flaws to be detected and resolved
earlier in the life-cycle.
This approach avoids the big-bang integration at the end of a project by stressing continuous
integration throughout the project.
The figure above illustrates the differences between the progress profile of a modern project and
that of a conventional project:
The architecture-first approach forces integration into the design phase through the construction
of demonstrations.
The demonstrations don’t eliminate the design breakage; instead, they make it happen in the
engineering stage, when it can be resolved efficiently in the context of life-cycle goals.
The downstream integration nightmare, late patches, and shoe-horned software fixes are avoided.
The result is a more robust and maintainable design.
The engineering stage – inception and elaboration phases – of the life cycle focuses on
confronting the risks and resolving them before the production stage.
Conventional projects do the easy stuff first, thereby demonstrating early progress.
A modern process instead tackles the important 20% of the requirements, use cases, components,
and risks first.
This is the essence of the architecture-first principle: defining the architecture seldom
consists of simple steps for which visible progress can be achieved easily.
The 80/20 lessons learned from past software management experience provide a useful risk
management perspective:
80% of the software scrap and rework is caused by 20% of the errors.
Elaborate the change-critical components first so that broad-impact changes occur when
the project is nimble or agile.
The following figure compares the risk management profile of a modern project with the profile
for a conventional project.
[Figure 15-2. Risk profile of a modern software project across its life cycle: project risk
exposure (high to low) versus the project life cycle, showing the modern project's risk
exploration period, risk elaboration period, and controlled risk management period against the
conventional project risk profile.]
[Figure: organization of software components, showing the use case model, common mechanisms,
and elements Ra, Rb, …, Ri.]
Major milestones provide tangible results and feedback from a usage point of view.
The following table shows the results of major milestones in a modern process.
One of the most visible efforts in capturing the best software management practices has been the
Software Acquisition Best Practices Initiative by the U. S. Department of Defense to “improve
and restructure our software acquisition management process.”
As per Brown’s summarization, the initiative has three components: the Airlie Software Council
(of software industry gurus), seven different issue panels (of industry and government
practitioners), and a program manager’s panel (of experienced industry project managers).
Each component produced recommendations and results, and reviewed the work of the other
components.
One of the Airlie Software Council’s products was a set of nine best practices.
The Council focused on the practices that would have the greatest effect in improving the
software management discipline for large-scale software projects and controlling the
complexities therein.
The nine best practices are:
1. Formal risk management.
2. Agreement on interfaces.
The intent here is the architecture-first principle: getting the architecture baselined forces
the project to gain agreement on the external interfaces inherent in the architecture.
3. Formal inspections.
The assessment workflow, along with the other engineering workflows, throughout the
life cycle must balance different defect removal strategies.
The least important strategy, in terms of breadth, should be formal inspection, as it is
high in cost in human resources and low in defect discovery rate.
4. Metric-based scheduling and management.
This is related to model-based notation and objective quality control principles.
Without rigorous notations for artifacts, the measurement of progress and quality
degenerates into subjective estimates.
5. Binary quality gates at the inch-pebble level.
Many projects have highly detailed plans laid out at a great expense, early in the life
cycle.
Within a few months, some of the requirements change or the architecture changes, and a
large percentage of the detailed planning must be re-baselined.
A better approach would be to maintain fidelity of the plan commensurate with an
understanding of the requirements and the architecture.
Rather than inch pebbles throughout, establishing milestones in the engineering stage followed
by inch pebbles in the production stage is recommended.
This follows the evolving levels of detail principle.
6. Program-wide visibility of progress versus plan.
Open communications among project team members is necessary.
7. Defect tracking against quality targets.
This is related to architecture-first and objective quality control principles.
The make-or-break defects and quality targets are architectural.
Getting control on these qualities early and tracking their trends are requirements for
success.
8. Configuration management.
The Council emphasized configuration management as key to controlling complexity and
tracking changes to all artifacts.
It also recognized that automation is important because of the volume and dynamics of
modern, large-scale projects.
This reasoning follows the change management principle.
9. People-aware management accountability.
This is another very obvious management principle.
1) List the approaches of how a modern process framework resolves the drawbacks of
conventional approach. (P 225-226)
2) Compare and contrast how the project profiles differ between a conventional approach
and a modern process. (P 226-227)
3) Explain how risk resolution is carried out in the iterative processes early in the life cycle,
and its advantages. (P 228-229)
4) Explain the organization of software components in a modern process, and thereby how
teamwork among stakeholders helps in achieving better results. (P 229-231)
5) List out and explain the top 10 software management principles. (P 231-232)
6) Explain how balancing the top 10 software management principles achieves economic
results. (P 231-233)
7) Discuss the Airlie Software Council’s nine best practices of software management; also
explain how they are mapped onto the top 10 management principles. (P 232-235)
CHAPTER-16 NEXT-GENERATION SOFTWARE ECONOMICS

A continuing problem today is the inability to predict with precision the resources required for a
given software endeavor.
Accurate estimates are possible today, although they are imprecise.
It will be difficult to improve empirical estimation models when the data is noisy and highly
uncorrelated, and is based on differing process and technology foundations.
There are no exactly matching cost models for an iterative software process focused on an
architecture-first approach.
Many cost estimators still use a conventional process experience base to estimate a modern
project profile.
The following discussion presents a perspective on how a software cost model should be
structured to support the estimation of a modern software process.
A next-generation software cost model should explicitly separate architectural engineering from
application production.
The cost of designing, producing, testing, and maintaining the architecture baseline is a function
of scale, quality, technology, process, and team skill.
For an organization having achieved a stable architecture, the production costs should be an
exponential function of size, quality, and complexity, with a much more stable range of process
and personnel influence.
The production stage cost model should reflect an economy of scale (exponent less than 1.0)
similar to that of conventional economic models.
Next-generation software cost models should estimate large-scale architectures with economy of
scale.
This implies that the process exponent during the production stage will be less than 1.0.
The reason is that the larger the system, the more opportunity there is to exploit automation
and to reuse common processes, components, and architectures.
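The economy-of-scale argument can be illustrated with the usual power-law form of a cost model, effort = coefficient × size^exponent. The coefficient and exponents below are arbitrary illustrations, not calibrated values from any published model:

```python
def effort(size_ksloc, coefficient=3.0, exponent=0.9):
    """Power-law effort model. An exponent below 1.0 yields an economy of
    scale (unit cost falls as size grows); an exponent above 1.0 yields the
    diseconomy of scale typical of conventional software cost models."""
    return coefficient * size_ksloc ** exponent

for e in (1.1, 0.9):
    unit_small = effort(100, exponent=e) / 100      # effort per KSLOC at 100 KSLOC
    unit_large = effort(1000, exponent=e) / 1000    # effort per KSLOC at 1,000 KSLOC
    print(f"exponent {e}: unit effort goes from {unit_small:.2f} to {unit_large:.2f}")
```

With an exponent of 1.1 the unit effort rises as the system grows; with 0.9 it falls, which is the economy of scale that next-generation models should capture.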
In the conventional process, the minimal level of automation for the overhead activities of
planning, project control, and change management led to labor-intensive workflows and a
diseconomy of scale.
This lack of automation was as true for multiple-project, line-of-business organizations as it was
for individual projects.
[Figure: a next-generation cost model separates architecture engineering from application
production.]

Architecture engineering (EffortArch):
    Size/complexity
    Team size: as small and expert as possible (architecture: small team of software engineers;
    applications: small team of domain engineers)
    Product: executable architecture, production plans, requirements
    Focus: design and integration; host development environment
    Phases: inception and elaboration

Application production (EffortApp):
    Size/complexity
    Team size: as large and diverse as needed (architecture: small team of software engineers;
    applications: as many as needed)
    Product: deliverable, useful function; tested baselines; warranted quality
    Focus: implement, test, and maintain; target technology
    Phases: construction and transition
Next-generation environments and infrastructures are moving to automate and standardize many
of these management activities, requiring a lower percentage of effort for overhead activities as
scale increases.
Reusing common processes across multiple iterations of a single project, multiple releases of a
single product, or multiple projects in an organization relieves many of the sources of
diseconomy of scale.
Critical sources of scrap and rework are eliminated by applying precedent experience and mature
processes.
Establishing trustworthy plans based on credible project performance norms and using reliable
components reduce other sources of scrap and rework.
Most reuse of components reduces the size of the production effort. The reuse of processes, tools,
and experience has a direct impact on the economies of scale.
Another important difference is that architectures and applications have different units of mass –
scale versus size – and both are representations of the solution space.
Scale might be measured in terms of architecturally significant elements – classes, components,
processes, nodes – and size might be measured in SLOC or MB of executable code.
These measures differ from measures of the problem spaces such as discrete requirements or use
cases.
The problem space description drives the definition of the solution space.
There are many solutions to any given problem, as illustrated in the following figure, each with a
different value proposition.
So, the cost estimation model must be governed by the basic parameters of a candidate solution.
If the value propositions are not acceptable solutions to the problem, more candidate solutions
need to be pursued or the problem statement needs to change.
The debate between function point users and source line users is an indication of the need for
measures of both scale and size.
Function points are more accurate at quantifying the scale of the architecture required, and
SLOC is more accurate in depicting the size of components that make up the total implementation.
The advantage of using SLOC is that collection can easily be automated and precision can be
easily achieved.
The accuracy of SLOC as a measure of size is ambiguous and can lead to misinterpretation when
SLOC is used in absolute comparisons among different projects and organizations.
SLOC is a successful measure of size in the later phases of the life cycle, when the most
important measures are the relative changes from month to month as the project converges on
releasable versions.
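Because SLOC collection is easily automated, the month-to-month relative change described above can be computed directly from successive baselines. A minimal sketch, with invented counts for illustration:

```python
# Monthly SLOC counts for a component converging on a releasable version
# (invented data for illustration)
monthly_sloc = [40_000, 52_000, 58_000, 59_500, 59_800]

# Relative change from month to month; convergence shows up as shrinking deltas
changes = [
    (new - old) / old * 100
    for old, new in zip(monthly_sloc, monthly_sloc[1:])
]
print([f"{c:+.1f}%" for c in changes])   # ['+30.0%', '+11.5%', '+2.6%', '+0.5%']
```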
Function points are not easily extracted from any rigorous representation format, so automation
and change tracking are difficult or ambiguous.
A rigorous notation for design artifacts is a necessary prerequisite to improvements in the fidelity
with which the scale of a design can be estimated.
In the future, there will be an opportunity to automate a new measure of scale derived from
design representations in UML.
Two major improvements in next-generation software cost estimation models can be expected:
1. Separation of the engineering stage from the production stage will force estimators to
differentiate between architectural scale and implementation size.
This will permit greater accuracy and more precision in life-cycle estimates.
2. Rigorous design notations such as UML will offer an opportunity to define units of
measure for scale that are more standardized and therefore can be automated and tracked.
These measures can also be traced more straightforwardly into the costs of production.
Technology advances are going to make two breakthroughs possible in the software process:
1. The availability of integrated tools that automate the transition of information between
requirements, design, implementation, and deployment elements.
These tools allow comprehensive round-trip engineering among the engineering artifacts.
2. The current four sets of fundamental technical artifacts will collapse into three sets, as
automation eliminates the need for a separate implementation set.
This technology advance is illustrated in the following figure:
[Figure: round-trip engineering among the fundamental artifact sets – conventional experience
and current software engineering experience work with four sets (requirements, design,
implementation, deployment), while the next-generation environment expectation collapses these
to three sets (requirements, design, deployment).]
The technology advance would allow executable programs to be produced directly from
UML representations without any hand-coding.
The framework arising out of Boehm’s top 10 software metrics can be used to summarize
some of the important themes in an economic context and speculate on how a modern
software management framework should perform.
The following are the expected changes. The organizational manager should strive for these
changes in making the transition to a modern process.
1. Finding and fixing a software problem after delivery costs 100 times more than finding
and fixing the problem in early design phases.
Modern processes, component-based development technologies, and architecture
frameworks are explicitly targeted at improving this relationship.
An architecture-first approach will yield tenfold to hundredfold improvement in the
resolution of architectural errors.
So, the iterative process places a huge premium on early architecture insight and risk-
confronting activities.
2. You can compress software development schedules by at most 25% of nominal.
This metric will be valid for the engineering stage of the life cycle because of the
intellectual content of the system.
In the engineering stage, if a consistent baseline of architecture, construction plans, and
requirements is achieved, then schedule compression in the production stage can be more
flexible.
Whether the engineering stage is spread over multiple projects [in line-of-business
organizations], or multiple increments [in project organization], there should be more
opportunity for concurrent development.
3. For every $1 you spend on development, you will spend $2 on maintenance.
Generalizing this metric is difficult as there are different maintenance models.
To measure this ratio, the productivity rates between development and maintenance
would be a better alternative.
In iterative development, interestingly, the line between development and maintenance
is fuzzy.
A mature process and a good architecture reduce scrap and rework considerably.
All things considered, the $1 to $2 ratio may turn out to be $1 to $1.
4. Software development and maintenance costs are primarily a function of the number
of SLOC.
In today’s component-based technology, SLOC as a size metric is becoming irrelevant.
Commercial component reuse and automatic code generators are giving a new meaning
to a SLOC.
The use of more components, more types of components, and more sources of
components necessitate more integration work and drive up costs.
With the component industry still immature, standard bills of materials are not available
for improving the fidelity of cost estimation.
So, the next-generation cost models should be less sensitive to the number of SLOC
and more sensitive to the discrete number of components and their ease of integration.
5. Variations among people account for the biggest differences in software productivity.
For an engineering activity with intellectual property as the product, the dominant
productivity factors will be personnel skills, teamwork, and motivations.
A modern process encapsulates the requirements for high-leverage people in the
engineering stage as the team size is relatively small.
The production stage with larger team sizes, should operate with less dependency on
scarce expertise.
6. The overall ratio of software to hardware costs is still growing. In 1955, it was 15:85;
in 1985, 85:15.
The main impact of this metric on software economics is that hardware continues to get
cheaper.
Processing cycles, storage, and network bandwidth continue to offer new opportunities
for automation.
So, software environments are playing a more important role.
1. Propose and explain a general structure for a cost estimation model for the modern
software process. [P 237-238, Fig. 16-1/P 239]
2. Compare and contrast a cost estimation model for the modern process with that of the
conventional process. [P 239-241, Fig. 16-2/P 240]
3. Explain the possible improvements in the next-generation cost estimation models over
the conventional ones. [P 241-242, Fig. 16-3/P 242]
4. Discuss the modern software economics keeping Boehm’s top 10 software metrics as a
framework. [P 242-245]
CHAPTER-17 MODERN PROCESS TRANSITIONS

Many of the techniques and disciplines required for a modern process necessitate a significant
paradigm shift.
As usual, such changes will be resisted by some stakeholders or certain intra-organizational
factors.
It is equally important to separate cultural resistance from objective resistance.
In this chapter we consider the important culture shifts to be prepared for in order to avoid the
sources of friction in transitioning to and practicing a modern process.
17.2 DENOUEMENT
The conventional software process was characterized by:
Sequentially transitioning from requirements to design to code to test
Achieving 100% completeness of each artifact at each life-cycle stage
Achieving high-fidelity traceability among all artifacts at each stage in the life cycle
A modern iterative development process framework is characterized by:
Continuous round-trip engineering from requirements to test at evolving levels of
abstraction
Achieving high-fidelity understanding of the drivers as early as practical
Evolving the artifacts in breadth and depth based on risk management priorities
Postponing completeness and consistency analyses until later in the life cycle
A modern process framework attacks the primary sources of the diseconomy of scale of the
conventional software process.
The following figure illustrates the next generation of software project performance depicting the
development progress versus time:
[FIGURE 17-1. Next-generation project performance: development progress (% coded) versus
project schedule, showing the modern project profile, the conventional project profile, and the
range of domain-reusable assets.]
In the above figure, progress is defined as percent coded – demonstrable in its target form.
Organizations that succeed should be capable of deploying software products that are constructed
largely from existing components in 50% less time, with 50% fewer development resources, and
maintained by teams 50% the size of those required by today's systems.
Because transitioning to new techniques and technologies brings apprehension of failure, the
seemingly safe path is to maintain the status quo and rely on existing methods.
For higher success, however, maintaining the status quo is not always safe.
To make a transition, two points of wisdom from champions and change agents are:
1. Pioneer any new techniques on a small pilot program, and
2. Be prepared to spend more resources – money and time – on the first project that makes
the transition.
But both of these recommendations are counterproductive.
A better way to transition to a more mature iterative development process that supports
automation technologies and modern architectures is to:
1. List out and explain the culture shifts to be overcome while transitioning to modern
processes. [P 248-251]
2. Explain the issues in transitioning from conventional practices to modern iterative
methods, with reference to performance and the strategies to be adopted for transitioning.
[P 251-253, Fig 17-1/P 252]
APPENDIX D CCPDS-R CASE STUDY

The project included three components: systems engineering, hardware procurement, and software
development, each consuming roughly one-third of the budget.
The schedule spanned 1987 to 1994.
The software effort included the development of three distinct software systems of more than one
million SLOC.
This case study focuses on the initial software development, called the Common Subsystem, of
about 355,000 SLOC.
The Common Subsystem effort also produced a reusable architecture, a mature process, and an
integrated environment for efficient development of the two software subsystems of similar size
that followed.
So, this case study represents about 1/6 of the overall CCPDS-R project effort.
The data, given here, are derived from published papers, internal TRW guidebooks, and contract
deliverable documents.
The CCPDS-R project produced a large-scale, highly reliable command and control system that
provides missile warning information used by the National Command Authority.
The procurement agency was Air Force Systems Command Headquarters, Electronic Systems
Division, at Hanscom Air Force Base, Massachusetts.
The primary user was US Space Command, and the full-scale development contract was awarded
to TRW's Systems Integration Group.
1. The Common Subsystem was the primary missile warning system within the upgrade
program.
355,000 SLOC
48-month software development schedule
provided reusable components, tools, environment, process, and procedures for
the following subsystems
included a primary installation with a backup system
2. The Processing and Display Subsystem (PDS) was a scaled-down missile warning
display system for all nuclear-capable commanders-in-chief.
250,000 SLOC
fielded on remote, read-only workstations distributed worldwide
3. The STRATCOM Subsystem provided both missile warning and force management
capability for the backup missile warning center.
450,000 SLOC
The following figure summarizes the overall acquisition process and the products of each phase:
These events and products enabled the FSD source selection to be based on the demonstrated
performance of the contractor-proposed team and the FSD proposal.
From the software perspective, the FSD proposal activities included a distinctive source
selection criterion: a software engineering exercise.
This was a unique and effective approach for assessing the abilities of the competing contractors
in software development.
The customer was concerned with the overall software risk of the project.
CCPDS-R was a large software development activity and one of the first to use the Ada
programming language.
There was apprehension about the Ada development environments, contractor processes, and
contractor training programs being mature enough to use on a full-scale development effort.
The software engineering exercise occurred immediately after the FSD proposals were submitted.
The customer provided the bidders with a simple two-page specification of a “missile warning
simulator” with some of the same fundamental requirements as the CCPDS-R system.
It included a distributed architecture, a flexible user interface, and the basic processing scenarios
of a simple CCPDS-R missile warning thread.
The exercise was to provide objective evidence of the credibility of each contractor’s proposed
software development approach.
The TRW’s CCPDS-R team demonstrated that the team was prepared, credible, and competent
at conducting the proposed software approach.
This took 12 staff-months – 12 people full-time for 23 days.
In preparing for the CCPDS-R project, TRW placed a strong emphasis on evolving the right
team.
The CD-phase team represented the architecture team, responsible for an efficient engineering
stage.
This team had the following primary responsibilities:
Analyze and specify the project requirements
Define and develop the top-level architecture
Plan the FSD phase software development activities
Configure the process and development environment
Establish trust and win-win relationships among the stakeholders
The CD-phase team was small and expert, with little organizational hierarchy.
The team covered all the necessary skills, and there was no competition among personnel.
Prepared by S. S. RAMANUJEM, PROF., DEPT OF MCA, SWARNANDHRA COLLEGE OF
ENGINEERING & TECHNOLOGY, NARSAPUR
PART – V CASE STUDIES AND BACKUP MATERIAL Page 153 of 187
APPENDIX D CCPDS-R CASE STUDY
The Common Subsystem software comprised six computer software configuration items (CSCIs).
[CSCI is government jargon for a set of components that is managed, configured, and
documented as a unit and allocated to a single team for development.]
FIGURE D-3. Common Subsystem SAS evolution
[Plots of the numbers of tasks, sockets, and message types against months, around the IPDR and PDR milestones]
The graphs in the figure show that there was significant architectural change over the first 20
months of the project, after which the architecture remained stable.
Requirements analysis
Architecture analysis
Architecture synthesis
Critical-thread demonstration
Each build provided a set of complete use cases, enabling the user to perform a subset of the mission.
Planning the content and schedule of the Common Subsystem builds resulted in a useful and
accurate representation of the overall risk management plan.
The build plan was thought out early in the inception phase by the management team.
The management team set the expectation for reallocating build content as the life cycle
progressed and more-accurate assessments of complexity, risk, personnel, and engineering trade-
offs were achieved.
There were several adjustments in the build-content and schedule as early conjecture evolved
into objective fact.
The following figure illustrates the detailed schedule and CSCI content of the Common
Subsystem:
FIGURE D-5(a). Common Subsystem builds (Part – 1)
CSCI     Build sizes (Builds 0-5)     Total
NAS       8    8    2    2              20
SSV      33   25  102                  106
DCO      23   27   20                   70
TAS       2    3    5                   10
CMP       3    6    6                   15
CCO       5   31   37    7              80
♦ Build 0. This build comprised the foundation components necessary to build a software
architecture skeleton.
The inter-task/interprocess communications, generic task and process executives, and
common error reporting components were included.
[Continuation of the builds figure: Builds 3 (173 KSLOC), 4 (83 KSLOC), and 5 (7 KSLOC), with their PDW, CDW, and TOR milestones plotted against the project-level SRR, IPDR, PDR, and CDR milestones over months 0-34 after contract award]
The individual milestones within a build included a preliminary design walkthrough (PDW), a
critical design walkthrough (CDW), and a turnover review (TOR).
The schedules for these milestones were flowed down from, and integrated with, the higher level
project milestones – SRR, IPDR, PDR, and CDR.
The following figure provides an overview of a build’s life cycle and the focus of activities:
The design walkthroughs were informal and highly interactive with open critique.
Initial prototyping and design work was concluded with a PDW and a basic capability
demonstration.
The walkthrough focused on the structural attributes of the components within the build.
The basic agenda was tailored for each build.
For each CSCI it generally included the following topics:
Overview: CSCI overview, interfaces, components, and metrics
Components: walkthrough for each major component, showing its source code interface,
allocated system requirement specification (SRS) requirements, current
metrics, operational concept for key usage scenarios, standalone test plan,
and error conditions and responses
Demonstration: focused on exercising the control interfaces across the components within
the integrated architecture
A build’s design work was concluded with a CDW and a capability demonstration that exposed
the key performance parameters of components within the build.
The CDW focused on the completeness of the components and the behavioral perspective of
operating within the allocated performance requirements.
Code Walkthroughs
These were used to disseminate project-wide expertise and ensure the development of self-
documenting source code.
The CSCI managers and software chief engineer coordinated the need for code walkthroughs and
their allocation among various authors to meet the following objectives:
Better dissemination of a self-documenting source code style
Identification of coding issues not caught by compilers and source code analysis tools:
o Readability issues: object naming, coding style, and commenting style
o Complexity of objects or methods: levels of necessity
o Reuse: whether custom code is being developed where components are already available
o Performance issues: implementation efficiency levels
Reduction of the amount of source code needed for review in the larger design
walkthroughs
Exposure of personnel to the products of experts
A code review typically involved a single reviewer performing a detailed analysis using on-line
source code browsing tools.
A one-page result of the review was given to the author, the CSCI manager, and the software
chief engineer.
The software chief engineer was responsible for noting global trends, identifying improvements
needed in code analysis tools, and raising lessons learned to the appropriate walkthrough or other
technical exchange forum.
A metrics tool was developed to scan Ada source files and compile statistics on the amounts of
completed Ada and of TBD_Statements (placeholders for unfinished units).
These statistics and the CCPDS-R coding standards provided metrics for monitoring progress
from several perspectives.
Some key measures of progress could be extracted directly from the evolving source baselines.
The software engineers followed the software standards in coding the source files and
maintaining them in compilable formats.
Once a month, all source code was processed by the tools and integrated into various
perspectives for communicating progress.
The resulting metrics were useful for communicating the need for resources and for
reprioritizing certain activities.
Acceptance by both manager and practitioner, and extraction directly from evolving artifacts,
were crucial to the success of this metrics approach.
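The core of such a source-scanning tool is simple to sketch. The following hypothetical Python version counts completed statement lines versus TBD_Statement placeholders; the placeholder convention comes from the text, while the counting rules and file layout are assumptions, not TRW's actual tool.

```python
import re
from pathlib import Path

# The TBD_Statement marker for unfinished program units is described in
# the text; everything else here is an illustrative assumption.
TBD = re.compile(r"\bTBD_Statement\b")

def count_text(text):
    """Count non-comment statement lines and TBD placeholders in one file."""
    total = tbd = 0
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("--"):  # skip blanks and Ada comments
            continue
        total += 1
        if TBD.search(stripped):
            tbd += 1
    return total, tbd

def scan(source_dir):
    """Aggregate completion statistics over all Ada specs (*.ads) and bodies (*.adb)."""
    total = tbd = 0
    for path in Path(source_dir).glob("**/*.ad[sb]"):
        t, b = count_text(path.read_text())
        total += t
        tbd += b
    percent_done = 100.0 * (total - tbd) / total if total else 0.0
    return total, tbd, percent_done
```

Run monthly over the evolving baselines, the aggregated percentages would give exactly the kind of directly extracted progress measure the text describes.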
The CCPDS-R build structure accommodated a manageable and straightforward test program.
Because compilable Ada was used as the primary format in the life cycle, most integration issues
– data type consistency, program unit obsolescence, and program unit dependencies – were
caught and resolved in compilation.
Substantial informal testing took place as a natural by-product of the early architecture
demonstrations and the requirements that all components be maintained in a compatible format.
This informal testing was not sufficient to verify that requirements were satisfied and reliability
expectations were met.
A highly rigorous test sequence was devised with five different test activities:
1. Stand-alone test (SAT)
2. Build integration test
3. Reliability test
4. Engineering string test (EST)
5. Final qualification test (FQT)
The sequence of baselines allowed maximum time for the early-build, critical-thread components
to mature.
To increase confidence in their readiness for operational use, further extensive testing was
done on these components.
Testing continued until an empirical software mean time between failures (MTBF) was
demonstrated and accepted by the customer.
The CCPDS-R build sequence and test program are good examples of confronting the most
important risks first.
A stable architecture was also achieved early in the life cycle so that substantial reliability testing
could be performed.
This strategy allowed useful maturity metrics to be established to demonstrate a realistic
software MTBF to the customer.
Substantial tailoring of the contractual standards (DOD-STD-2167A) was allowed to match the
development approach and to accommodate the use of Ada as both a design language and an
implementation language.
The three critical objectives of the IPDR major milestone demonstration of the Common
Subsystem were:
1. Tangible assessment of the software architecture design integrity through construction of
a prototype SAS.
2. Tangible assessment of the critical requirements understanding through construction of
the worst-case missile warning scenario.
3. Exposure of the architectural risks associated with the peak missile warning scenario and
the fault detection and recovery scenario.
The CCPDS-R software culture is evident in these objectives.
These demonstrations were engineering activities with ambitious goals, open discussion of trade-
offs, and a show-me approach to substantiating assertions about progress and quality.
The results of a demonstration were apt to change requirements, plans, and designs equally; all
three of these dimensions evolved during the life cycle.
Demonstration activities spanned a six-month period, with the first three months focused on
planning.
A few people from the stakeholder teams participated in specifying the formal evaluation criteria.
The following figure summarizes the schedule for the IPDR demonstration:
FIGURE D-8. CCPDS-R first demonstration activities and schedule
The above figure includes details of the intense integration period in the two months before the
demonstration.
The essential IPDR evaluation criteria were derived from the requirements, the risk
assessments, and the evolving design trade-offs:
All phases:
No critical errors shall occur.
Phase 1:
The system shall initialize itself in less than 10 minutes.
The system shall be initialized from a single terminal.
After initialization is complete, the number of processes, tasks, and sockets shall
match exactly the expected numbers in the then-current SAS baseline.
Phase 2:
The total processor utilization for each node shall be less than 30% when
averaged over the worst-case peak scenario.
There shall be no error reports of duplicate or lost messages.
All displayed data shall be received within 1 second from its injection time.
The message injection process shall maintain an injection rate matching the
intended scenario rate.
The data logs shall show no unexpected state transitions or error reports and
shall log all injected messages.
Phase 3:
The operator shall be capable of injecting a fault into any object.
An error report shall be received within 2 seconds of the injection of a fault.
The switchover from the primary to backup thread shall be completed within 2
seconds of the fault injection with no loss of data.
The shutdown of the failed primary thread and re-initialization as a new backup
thread shall be completed in less than 5 minutes from failure.
The data logs shall match the expected state transitions with no fatal errors
reported other than the injected fault.
There were additional evaluation criteria that provided visibility into less important detailed
capabilities and intermediate results.
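Criteria such as "no duplicate or lost messages" and "displayed within 1 second of injection" are exactly the kind of checks that can be automated against the data logs. The following is a hypothetical sketch of such a Phase 2 log checker; the log record format is an assumption, since CCPDS-R's actual verification tooling is not described in the text.

```python
# Hypothetical automation of the Phase 2 evaluation criteria above:
# no duplicates, no lost messages, and end-to-end latency under 1 second.

def check_phase2(injected, displayed, max_latency=1.0):
    """injected/displayed: lists of (message_id, timestamp) pairs from the logs."""
    failures = []
    inj = dict(injected)          # message_id -> injection time
    seen = set()
    for msg_id, t_disp in displayed:
        if msg_id in seen:
            failures.append(f"duplicate message {msg_id}")
        seen.add(msg_id)
        if msg_id not in inj:
            failures.append(f"unexpected message {msg_id}")
        elif t_disp - inj[msg_id] > max_latency:
            failures.append(f"message {msg_id} late by {t_disp - inj[msg_id]:.2f}s")
    for msg_id in inj:            # anything injected but never displayed was lost
        if msg_id not in seen:
            failures.append(f"lost message {msg_id}")
    return failures
```

An empty result list corresponds to passing the criteria; each failure string corresponds to an error report of the kind the demonstration phases were required to avoid.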
[Figure: top-level build progress summary at month 17. For each build (Build 0: 8 KSLOC, Build 1: 43, Build 2: 61, Build 3: 173, Build 4: 83, Build 5: 7; 355 KSLOC in total), development and stand-alone test progress is shown against the original PDW (preliminary design walkthrough), CDW (critical design walkthrough), and TOR (turnover review, marking turnover for configuration control) milestones, and against the project-level SRR, IPDR, PDR, and CDR milestones, over months 0-34 after contract award. Common Subsystem design and SAT spans are also shown.]
The above figure depicts the top-level progress summary for each build and for the Common
Subsystem as a whole.
The length of the shading within each build relative to the dashed line – current month –
identifies whether progress was ahead of or behind schedule.
The shading was a judgment by the software chief engineer, who combined the monthly progress
metrics and the monthly financial metrics into a consolidated assessment.
Monthly collection of metrics provided detailed management insight into build progress, code
growth, and other indicators.
To provide multiple perspectives, the metrics were collected by build and by CSCI.
Individual CSCI managers collected and assessed their metrics before the metrics were
incorporated into a project-level summary.
The following figure illustrates the monthly progress assessments for the Common Subsystem
and each build:
FIGURE D-10. Common Subsystem development progress
[Plots of Common Subsystem progress (percentage coded), actuals versus plan, and individual build progress for Builds 0-5, against contract months 3-30]
The following figure plots the progress against the plan for requirements verification tests.
Test cases were drawn from SATs, ESTs, and FQTs.
SATs were the responsibility of the development teams, executed in the formal configuration
management environment and peer-reviewed by the test personnel.
ESTs consisted of functionally related groups of scenarios that demonstrated requirements
spanning multiple components.
FQTs were tests for requirements compliance that were not demonstrated until a complete
system existed.
Quantitative performance requirements (QPRs) spanned all CSCIs.
[Plot of requirements verification test progress, actuals versus plan, against contract months 32-48]
[Figure: cumulative SLOC broken and repaired (0-80,000 SLOC) against contract months 5-45]
The above figure shows the cumulative number of SLOC broken – checked out of the baseline
for rework because of an identified defect, enhancement, or other change, and the number of
SLOC repaired – checked back into the baseline.
When breakage rates diverged from repair rates, resources were reprioritized and corrective
actions were taken to ensure that the test organization (driving the breakage) and the
development organization (driving the repair) remained in relative equilibrium.
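This equilibrium check reduces to a simple monthly metric: the open backlog of broken-but-not-yet-repaired SLOC. The following sketch is illustrative; the monthly figures and the management threshold are made-up numbers, not CCPDS-R data.

```python
# Illustrative breakage-versus-repair equilibrium check, as described above.

def open_rework(broken, repaired):
    """Cumulative broken minus repaired SLOC per month: the open backlog."""
    return [b - r for b, r in zip(broken, repaired)]

def flag_divergence(broken, repaired, threshold=5_000):
    """Months (1-based) where the open backlog exceeds a management threshold."""
    return [month for month, gap in enumerate(open_rework(broken, repaired), start=1)
            if gap > threshold]

broken   = [2_000, 9_000, 25_000, 40_000, 52_000]   # SLOC checked out for rework
repaired = [1_500, 8_000, 18_000, 38_000, 51_000]   # SLOC checked back in
print(flag_divergence(broken, repaired))  # [3]: only month 3 needs reprioritization
```

In this sample, the backlog balloons to 7,000 SLOC in month 3 and then returns toward equilibrium, which is the pattern the corrective actions were meant to enforce.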
D.7.4 MODULARITY
The following figure identifies the total breakage as a ratio of the entire software subsystem:
FIGURE D-13. Common Subsystem modularity
[Plot of closed rework and currently open rework as a percentage of the total product (0-25%) against contract months 5-45]
This metric identifies the total scrap generated by the Common Subsystem software development
process as about 25% of the whole product. Industry average software scrap runs in the 40% to
60% range. The initial configuration management baseline was established around the time of
PDR (month 14). Thereafter, about 1,600 discrete changes were processed against the
configuration baseline.
D.7.5 ADAPTABILITY
Overall, about 5% of the total effort was expended in rework activities against the software
baselines. The average cost of change was about 24 hours per SCO (software change order).
These values demonstrate the ease with which the software baselines could be changed; the
level of adaptability achieved by CCPDS-R was four times better than that of a typical project.
FIGURE D-14. Common Subsystem adaptability
[Plot of average rework hours per SCO over time, distinguishing design changes, implementation changes, and maintenance changes]
Most of the early SCO trends were changes that affected multiple people and multiple
components – design changes in the above figure.
The later SCO trends were usually localized to a single person and a single component –
implementation changes in the above figure.
The final phase of SCOs reflected an uncharacteristic increase in breakage, the result of a large
engineering change proposal that completely changed the input message set to the Common
Subsystem.
This area turned out to be more difficult than anticipated.
Although the design was robust and adaptable for a number of premeditated change scenarios, an
overhaul of the message set was never foreseen nor accommodated in the design.
D.7.6 MATURITY
CCPDS-R had a specific reliability requirement with a specific allocation in the software.
The independent test team constructed an automated test suite to exercise the evolving software
baseline with randomized message scenarios.
Extensive testing was done under realistic conditions to substantiate software MTBF in a
credible way.
The reliability-critical components were subjected to the most reliability stress testing.
This plan ensured early insight into maturity and software reliability issues.
The following figure illustrates the results:
[Plot of demonstrated software MTBF against cumulative test hours, from 10,000 to 100,000 hours]
With modern distributed architectures, statistical testing serves two purposes: (1) it ensures
maximum coverage, and (2) it uncovers significant issues such as races, deadlocks, resource
overruns, memory leakage, and other Heisen-bugs (nondeterministic failure conditions).
Overall system integrity can be tested by executing randomized and accelerated scenarios for
long periods of time.
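An empirical MTBF of this kind is typically derived from cumulative test hours and observed critical failures. The sketch below illustrates the arithmetic only; the failure counts are made up, since CCPDS-R's actual failure data is not reproduced here.

```python
# Simple illustration of demonstrating an empirical software MTBF from
# cumulative test data, in the spirit of the maturity testing above.

def mtbf(test_hours, critical_failures):
    """Empirical mean time between (critical) failures, in hours."""
    if critical_failures == 0:
        return float("inf")   # no failures observed yet: MTBF unbounded so far
    return test_hours / critical_failures

# Hypothetical reliability growth: as cumulative test hours accumulate
# and the failure rate drops, the demonstrated MTBF rises.
for hours, failures in [(10_000, 25), (50_000, 40), (100_000, 50)]:
    print(f"{hours:>7} test hours: MTBF = {mtbf(hours, failures):.0f} hours")
```

Plotting this quantity against cumulative test hours is exactly what the maturity figure referenced above shows: a credible, measured MTBF rather than an asserted one.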
The project's SLOC definition treated declarative (specification) design more sensitively than
executable (body) design.
Though not a perfect definition, it provided a consistent and adequate measure.
To allocate budgets properly and to compare productivities of different categories, a method for
normalizing them was devised.
The result was an extension of the COCOMO technique for incorporating reuse, called
equivalent SLOC (ESLOC).
ESLOC converts the standard COCOMO measure of SLOC into a normalized measure that is
comparable on an effort-per-line basis.
The need for this new measure arises in budget allocation and productivity analysis for mixtures
of newly developed, reused, and tool-produced source code.
For example, a 10,000-SLOC display component that is automatically produced from a tool by
specifying 1,000 lines of display formatting script should not be allocated the same budget as a
newly developed 10,000-SLOC component.
The following table defines the conversion of SLOC to ESLOC on CCPDS-R:
TABLE D-11. SLOC-to-ESLOC conversion factors

SLOC FORMAT    DESIGN        IMPLEMENT     TEST          ESLOC
               (NEW = 40%)   (NEW = 20%)   (NEW = 40%)
Commercial        0%            0%            0%            0%
New              40%           20%           40%          100%
Reused           20%            5%           30%           55%
Automated         0%            0%           40%           40%
Tool input       30%           10%           10%           50%
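The table reduces to a weighted sum. The following is a hypothetical helper implementing the Table D-11 factors, not TRW's actual tooling; the category names are paraphrased from the table.

```python
# ESLOC normalization per Table D-11: each SLOC category is weighted by
# the fraction of design, implementation, and test effort it incurs
# relative to newly developed code.

ESLOC_FACTORS = {             # design + implement + test weights
    "commercial": 0.00,       #  0% +  0% +  0%
    "new":        1.00,       # 40% + 20% + 40%
    "reused":     0.55,       # 20% +  5% + 30%
    "automated":  0.40,       #  0% +  0% + 40%
    "tool_input": 0.50,       # 30% + 10% + 10%
}

def esloc(sloc_by_category):
    """Convert {category: SLOC} into total equivalent SLOC for budgeting."""
    return sum(ESLOC_FACTORS[cat] * sloc for cat, sloc in sloc_by_category.items())

# The text's example: a 10,000-SLOC display component generated by a tool
# from 1,000 lines of display formatting script.
print(esloc({"automated": 10_000, "tool_input": 1_000}))  # 4500.0 ESLOC
```

The generated component thus earns a budget of 4,500 ESLOC rather than the 10,000 a hand-built component of the same size would get, which is precisely the budget-allocation fairness the measure was devised for.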
ESLOC was analyzed solely to ensure that the overall staffing and budget allocations, negotiated
with each CSCI lead, were relatively fair.
These ESLOC estimates were input to cost modeling analyses that incorporated the relative
complexity of each CSCI and other COCOMO effort adjustment factors.
This code counting process provided a useful perspective for discussing several of the
engineering trade-offs being evaluated.
After the first year, the SLOC counts were stable and correlated well with the schedule
estimating analyses performed throughout the project life cycle.
CCPDS-R illustrates why SLOC is a problematic metric for measuring software size, and at the
same time is an example of a complex system in which SLOC metrics worked very effectively.
This section on software size is a good example of the issues associated with transitioning to
component-based development.
Projects can and must deal with heterogeneous measurements of size, but there is no industry-
accepted approach.
So, project managers need to analyze carefully such important metrics definitions.
The CCPDS-R subsystems had consistent measures of human generated SLOC and
homogeneous processes, teams, and techniques – making comparison of productivities possible.
Cost per SLOC is taken as the normalized unit of measure to compare productivities.
Relative costs among subsystems are taken to be most relevant.
The PDS subsystem was delivered at 40% of the cost per SLOC of the Common Subsystem, and
the STRATCOM Subsystem at 33%.
This is one of the real indicators of a level 3 or level 4 process.
The following table summarizes the SCO traffic across all CSCIs at month 58:
By the 58th month the Common Subsystem was beyond its FQT and had processed a few SCOs
in a maintenance mode to accommodate engineering change proposals.
The PDS and STRATCOM subsystems were into their test phases.
For completeness, the table provides entries for support, test, and operating system/vendor.
Support included code generation tools, configuration management tools, metrics tools, and
standalone test drivers; test included software drivers used for requirements verification.
Table D-13 shows that the values of the modularity metric – average scrap per change – and the
adaptability metric – average rework per change – were generally better in the subsequent
subsystems than they were in the Common Subsystem.
The only exception was the SCG CSCI, a special communications capability needed in the
STRATCOM subsystem that did not have a counterpart in the other subsystems and was
uniquely complex.
FIGURE D-16. Common Subsystem CSCI summary
[Distribution of rework hours per SCO: under 4 hours: 43%; 4 to 8: 18%; 8 to 16: 16%; 16 to 40: 12%; 40 to 80: 9%; 80 to 160: 2%; over 160 hours: under 1%]
Overall, this level of productivity and quality was approximately double TRW’s standard for
previous command center software projects.
TRW management instituted an award fee flow-down program as an incentive for people to
remain on the project.
As a result, there was very little attrition across the Common Subsystem, and also across the
PDS and STRATCOM subsystems, which overlapped substantially with the Common
Subsystem.
2. Setting standards and procedures for design walkthroughs and software artifacts.
The core team were the front-line pioneers for most of the software activities, conducting each
project workflow first or building the first version of most artifacts.
The core team was thus intimately involved in setting the precedents for the standards of
activities and for the formats and contents of artifacts.
The basic premise of the CCPDS-R award fee flow-down plan was that employees would share
in the profitability of the project.
[Award fees are contract payments above the cost basis. They are tied to project performance
against predefined criteria.]
It was agreed to allocate a portion of the award fee pool at each major milestone to be directly
given to the employees.
The relative contribution and longevity on the project of the individuals were the criteria for the
distribution of the pool.
The flow-down plan was to achieve the following objectives:
Reward the entire team for excellent performance
Reward different peer groups relative to their overall contribution
Minimize attrition of good people
The plan was complex, but its implementation was simple.
This plan, in the end, achieved its goals in minimizing attrition.
One flaw in the plan was that the early award fees (at PDR and CDR) were far less substantial
than the later award fees.
So, the teams responsible for the construction and transition phases got more than did those
working on the inception and elaboration phases.
Management defined the various peer groups – systems engineering, software engineering,
business administration, and administration.
Every 6 months, the people within each peer group ranked one another with respect to their
contribution to the project.
The manager of each peer group also ranked the team members and compiled the results into
a global performance ranking of the peer group.
Each award fee was determined by the customer at certain major milestones.
Half of each award fee pool was distributed to project employees.
The algorithm for the distribution was as follows:
o The general range of additional compensation relative to each employee’s salary was
about 2% to 10% every year.
o The distribution to each peer group was relative to the average salary of the group.
o The differences in employees’ salaries within each group defined the relative
differences in the expected contributions of employees toward overall project success.
o The distribution within a peer group had two parts:
Half of the total peer group pool was distributed equally among all.
The other half was distributed to the top performers within the group as
defined by the group’s self-ranking.
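The within-group split above is easy to express in code. The sketch below is a hypothetical illustration: the pool size and names are invented, and treating "top performers" as the top half of the peer ranking is an assumption, since the text does not give the exact cutoff.

```python
# Sketch of the within-peer-group award fee distribution described above:
# half the pool split equally, the other half to the top performers.

def distribute(pool, ranking):
    """ranking: employee names ordered best-first by the peer ranking."""
    equal_share = (pool / 2) / len(ranking)
    # Assumption: 'top performers' means the top half of the ranking.
    top = ranking[: max(1, len(ranking) // 2)]
    merit_share = (pool / 2) / len(top)
    return {name: equal_share + (merit_share if name in top else 0.0)
            for name in ranking}

shares = distribute(120_000, ["ana", "raj", "lee", "kim"])
print(shares)  # top two receive 45,000 each; the other two 15,000 each
```

The equal half rewards the entire team while the merit half rewards relative contribution, matching the plan's stated objectives.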
Management had some discretion in the amounts and ranges.
The true impact of this award fee flow-down plan is hard to determine.
But, it made a difference in the overall teamwork and in retaining the critical people.
The peer ranking worked well in discriminating the top performers.
With a few exceptions, the peer rankings matched management's perceptions closely.
TRW shared a little less than 10% of its overall profit with its employees.
The return on this investment would be considered high by all stakeholders.
D.10 CONCLUSIONS
TRW and the Air Force have documented the successes of architecture-first development on
CCPDS-R.
This project achieved two-fold increases in productivity and quality along with on-budget, on-
schedule deliveries of large mission-critical systems.
The success of the CCPDS-R is due to the balanced use of modern technologies, modern tools,
and an iterative development process.
The following table summarizes the dimensions of improvement incorporated into the CCPDS-R
project:
TABLE D-15. CCPDS-R technology improvements

PARAMETER     MODERN SOFTWARE PROCESS        CCPDS-R APPROACH
Environment   Integrated tools               DEC/Rational/custom tools
              Open systems                   VAX/DEC-dependent
              Hardware performance           Several VAX family upgrades
              Automation                     Custom-developed management system,
                                             metrics tools, code auditors
Size          Reuse, commercial components   Common architecture primitives, tools,
                                             and processes across all subsystems
              Object oriented                Message-based, object-oriented
                                             architecture
              Higher level languages         100% Ada
              CASE tools                     Custom automatic code generators for
                                             architecture, message input/output, and
                                             display format source code
              Distributed middleware         Early investment in NAS development
                                             for reuse across multiple subsystems
Process       Iterative development          Demonstrations, multiple builds, early
                                             delivery
              Process maturity models        Level 3 process before the SEI CMM was
                                             defined
              Architecture first             Executable architecture baseline at PDR
              Acquisition reform             Excellent customer-contractor-user
                                             teamwork; highly tailored 2167A for
                                             iterative development
              Training                       Mostly on-the-job training and internal
                                             mentoring
The resulting efficiencies were largely attributable to a major reduction in the software scrap and
rework (less than 25%) enabled by an architecture-first focus, an iterative development process,
an enlightened and open-minded customer, and the use of modern environments, languages, and
tools.
The Common Subsystem subsidized much of the groundwork for the PDS and STRATCOM
subsystems.
This investment paid significant returns on the subsequent subsystems, in which productivity and
quality improved.
This is the economic expectation of a mature software process such as the one developed and
evolved on CCPDS-R.