
DEPARTMENT OF INFORMATION TECHNOLOGY

SEMESTER – VI

U6CSA17 SOFTWARE ENGINEERING & PROJECT MANAGEMENT LTPC


(common for CSE, IT) 3104

OBJECTIVE
To understand the following:
Different life cycle models
Requirement elicitation process
Analysis modeling and specification
Architectural and detailed design methods
Implementation and testing strategies
Verification and validation techniques
Project planning and management
Use of CASE tools

UNIT I INTRODUCTION 8
The evolving role of Software – Software characteristics, Software Process: Software
Lifecycle models – The linear sequential model - The prototyping model - The RAD
model - Evolutionary software process models - The incremental model - The spiral
model - Various Phases in Software Development
UNIT II RISK MANAGEMENT & CODING STANDARDS 9
Risk Analysis & Management: Assessment-Identification–Projection-Refinement-
Principles, Introduction to Coding Standards.
UNIT III TESTING TECHNIQUE & TESTING TOOLS 9
Software testing fundamentals - Test case design - White box testing - Basis path
testing - Control structure testing - Black box testing - Testing for specialized
environments, Testing strategies - Verification and validation - Unit testing -
Integration testing - Validation testing - System testing - The art of debugging,
Testing tools - WinRunner, LoadRunner.
UNIT IV SOFTWARE QUALITY ASSURANCE 10
Quality concepts - Cost of quality - Software Quality Assurance (SQA) group - Roles
and responsibilities of the SQA group - Formal technical reviews - Quality standards.
UNIT V SOFTWARE PROJECT MANAGEMENT 9
Introduction to MS Project – Creating a Project Plan File - Creating a Work Breakdown
Structure - Creating and Assigning Resources - Finalizing the Project Plan -
Case Study.
TOTAL: 45+15(Tutorial) = 60 periods

TEXT BOOKS
1. Roger S. Pressman, “Software Engineering – A Practitioner’s Approach”, Sixth
Edition, McGraw Hill International Edition, Singapore, 2006.
2. Ian Sommerville, “Software Engineering”, Sixth Edition, Pearson Education, New
Delhi, 2001.
3. Microsoft Project 2007 for Dummies.
REFERENCE BOOKS
1. Ali Behforooz, Frederick J. Hudson, “Software Engineering Fundamentals”,
Second Edition, Oxford University Press, Noida, 2003.
2. Fairley R., “Software Engineering Concepts”, Second Edition, Tata McGraw Hill,
New Delhi, 2003.
3. Jalote P., “An Integrated Approach to Software Engineering”, Third Edition,
Narosa Publishers, New Delhi, 2005.

UNIT – I

INTRODUCTION

Waterfall model

The first published model of the software development process was derived
from other engineering processes. This is illustrated in the following figure. Because
of the cascade from one phase to another, this model is known as the "waterfall
model" of the software life cycle. The principal stages of the model map onto
fundamental development activities.

[Figure: Waterfall model stages – Requirements definition → System and software
design → Implementation and unit testing → Integration and system testing →
Operation and maintenance]

1. Requirements analysis and specification: the system's services, constraints and
goals are established by consultation with system users. They are then defined in
detail and serve as the system specification.
2. System and software design: the system design process partitions the
requirements to either hardware or software systems. It establishes the overall
system architecture. Software design involves identifying and describing the
fundamental software system abstractions and their relationships.

3. Implementation and unit testing: during this stage the software design is
realized as a set of programs or program units. Unit testing involves verifying that
each unit meets its specification.
4. Integration and system testing: the individual program units or programs are
integrated and tested as a complete system to ensure that the software
requirements have been met. After testing, the software system is delivered to the
customer.
5. Operation and maintenance: the system is installed and put into practical use.
Maintenance involves correcting errors which were not discovered in earlier
stages of the life cycle.

 No fabrication step
 Program code is another design level
 Hence, no "commit" step – software can always be changed!

 No body of experience for design analysis (yet)
 Most analysis (testing) is done on program code
 Hence, problems are not detected until late in the process

 The waterfall model takes a static view of requirements
 It ignores changing needs
 Lack of user involvement once the specification is written

 Unrealistic separation of specification from design

 Doesn't accommodate prototyping, reuse, etc.

Concept of software process

The following are the basic principles for developing a software process:

A structured set of activities is required, e.g. specification, design, validation,
evolution.

Activities vary depending on the organization and the type of system being
developed.

A process must be explicitly modeled if it is to be managed.

In order to analyze a software process we must clearly state the process
characteristics:

 Understandability
 Visibility
 Supportability
 Acceptability
 Reliability
 Maintainability
 Rapidity

Generic product building is a sequence of activities common to building any
product; the points to remember are:

 Specification - Set out requirements and constraints


 Design – Produce a paper model of the system
 Manufacture- Builds the system
 Test- Check the system meets the required specifications
 Install- Deliver system to customer and ensure it is operational
 Maintain- Repair faults in the system as they are discovered
 Normally, specifications are incomplete / anomalous
 Very blurred distinction between specification, design and manufacture
 No physical realization of the system for testing
 Software does not wear out- maintenance does not mean component
replacement
 The main purpose of a software process model is "to give a risk-reducing
structure"; there is growing recognition that in many cases software should be
developed in an evolutionary fashion,
e.g. Microsoft Visual C++ is shipped every 4 months

Software validation

Software validation is intended to show that a system conforms to its
specification and that the system meets the expectations of the customer buying the
system. It involves checking processes, such as inspections and reviews, at each
stage of the process.

The majority of validation costs are incurred after the operational system is
tested. Except for small programs, a system should not be tested as a single unit.
Large systems are built out of subsystems, which are built out of modules, which are
composed of procedures and functions. The testing process should therefore proceed
in stages where testing is carried out incrementally, in conjunction with system
implementation.

The stages in the testing process are:

Unit testing: individual components are tested to ensure that they operate correctly.

Module testing: A module is a collection of dependent components, such as an
object class or an abstract data type. A module encapsulates related components, so
it can be tested without other system modules.

Sub-system testing: this phase involves testing collections of modules which have
been integrated into subsystems. The most common problems which arise in large
software systems are interface mismatches.

System testing: the subsystems are integrated to make up the system. This process
is concerned with finding errors that result from unanticipated interactions between
subsystems and from subsystem interface problems.

Acceptance testing: this is the final stage in the testing process before the system is
accepted for operational use. The system is tested with data supplied by the
system's customer rather than simulated data.

Unit testing and module testing are the responsibility of the programmers, who
make up their own test data and test the code incrementally as it is developed.

Later stages of testing involve integrating work from a number of programmers and
must be planned in advance. An independent team of testers should work from
pre-formulated test plans, which are developed from the system specification and design.

THE COMPUTER BASED SYSTEM IN SOFTWARE APPROACH

A computer-based system is a set or arrangement of elements that are organized to
accomplish some method, procedure or control by processing information. Elements
include: hardware, software, people (users and operators), databases, documentation,
and procedures (the steps that define the specific use of each system element, or the
procedural context in which the system resides). Systems transform information, and
a system can contain other systems as elements. In computer system engineering
(systems analysis), system functions are discovered, analyzed and allocated to
individual system elements. Customer-defined goals and constraints are used to
derive a representation of function, performance, interfaces, design constraints and
information structures for each system element. Tasks include: identifying the
desired functions, "bounding" (identifying the scope of) the various
elements/functions, and allocating functions to system elements. Alternatives are
proposed and evaluated; selection criteria include:

 Project considerations – scheduling, cost, risk
 Business considerations – marketing, profit, risk
 Technical analysis – function, performance
 Manufacturing evaluation – availability, quality assurance
 Human issues – worker, customer's environmental interface
 System's external environment
 Legal considerations – liability, infringement

System analysis is conducted with the following objectives: identify the
customer's needs; evaluate the system concept for feasibility; perform economic
and technical analysis; allocate functions to the various system elements; establish
cost and schedule constraints; and create a system definition that forms the
foundation for all subsequent engineering work.

Pervasive system requirements

There are many other categories of requirements that also deserve attention.
Pervasive system requirements include:

 Accessibility
 Reusability
 Adaptability
 Robustness
 Availability
 Safety
 Compatibility
 Security
 Correctness
 Testability
 Efficiency
 Usability
 Fault tolerance
 Integrity
 Maintainability
 Reliability

Prototyping model

Prototyping is used in two types of models:

1. Evolutionary prototyping: the objective is to work with the customer and to
evolve a final system from an initial outline specification. It should start with
well-understood requirements.
2. Throw-away prototyping: the objective is to understand the system
requirements. It starts with poorly understood requirements.

Requirements are difficult to capture accurately. A prototype is a working model of
part or all of the final system, built using a high-level language to create working
programs quickly. Nowadays, modern fourth-generation languages are very
suitable. A prototype may be inefficient, not as robust as the final system, and may
have less functionality.

Problems with prototyping

 Documentation may get neglected
 Effort in building a prototype may be wasted
 Difficult to plan and manage

Advantages

 Faster than the waterfall model


 High level of user involvement from start
 Technical or other problems are discovered early, so risk is reduced

Problems with evolutionary prototyping

 Difficult to plan as amount of effort is uncertain


 Documentation may be neglected
 Could degenerate into "build and fix" with poorly structured code
 Languages which are good for prototyping are not always best for the final product

Advantages

 Effort spent on the prototype is not wasted
 Faster than the waterfall model
 High level of user involvement from the start
 Technical or other problems are discovered early

Incremental model

The model combines elements of the linear sequential model with the iterative
philosophy of prototyping. This model applies linear sequences in a staggered
fashion as calendar time progresses. Each sequence produces an increment of the
software.

For example, word-processing software developed using the incremental paradigm
might deliver file management, editing, and document production in the 1st
increment; more sophisticated editing and document production in the 2nd
increment; spelling and grammar checking in the 3rd increment; and advanced page
layout in the 4th increment.

When the incremental model is used, the 1st increment is the core product, i.e. basic
requirements are addressed but supplementary features remain undelivered. The
core product is used by the customer. From the evaluation of the 1st increment, a
plan is developed for the next increment; following the plan, the core product is
modified to better meet the needs of the customer. This process is repeated until the
complete product is produced.

The incremental model mainly focuses on the delivery of an operational product
with each increment, unlike prototyping.

Incremental development is useful when staffing is unavailable for a complete
implementation by the business deadline that has been established for the project.
Early increments can be implemented with fewer people. If the core product is well
received, then additional staff can be added to implement the next increment.
Increments can also be planned to manage technical risks. For example:

A major system may require the availability of new hardware that is under
development and whose delivery date is uncertain. It is possible to plan early
increments to avoid the use of the new hardware, thereby enabling partial
functionality to be delivered to users without inordinate delay.

Incremental development advantages

 Customer value can be delivered with each increment so system functionality


is available earlier
 Early increments act as a prototype to help elicit requirements for later
increments
 Lower risk of overall project failure
 The highest priority system services tend to receive the most testing

Spiral model

The spiral model was proposed by Boehm.

It is an evolutionary software process model that couples the iterative nature of
prototyping with the controlled and systematic aspects of the linear sequential model.

The model is divided into a number of framework activities, called task regions.

There are 6 task regions:

[Figure: Boehm's spiral model. The quadrants are: determine objectives,
alternatives and constraints; evaluate alternatives and identify and resolve risks
(risk analysis, Prototypes 1–3, operational prototype, simulations, models,
benchmarks); develop and verify the next-level product (concept of operation,
software requirements, requirement validation, product design, detailed design,
code, unit test, integration and test, acceptance test, service); and plan the next
phase (requirements plan, life-cycle plan, development plan, integration and test
plan, review)]

Customer communication: Effective communication between developer and customer.

Planning: Defining resources, timelines and other project related information.


Risk analysis: Assessing the technical and management risk.

Engineering: Building one or more representation of the application.

Construction and release: Used to construct, test, install, and provide user support
(e.g. Documentation and training)

Customer evaluation: Obtaining the customer feedback based on evaluation of the
software representations.

Each of the regions is populated by a set of work tasks, called a task set, that are
adapted to the characteristics of the project to be undertaken.

For small projects, the number of work tasks and their formality is low. For larger,
more critical projects, each task region contains more work tasks that are defined to
achieve a higher level of formality.

The software engineering team moves around the spiral in a clockwise
direction, beginning at the center. The first circuit of the spiral might result in the
development of a product specification. Subsequent passes around the spiral might
result in the development of a prototype and then progressively more sophisticated
versions of the software. Each pass through the planning region results in
adjustments to the project plan. Cost and schedule are adjusted based on the
feedback derived from customer evaluation. In addition, the project manager
adjusts the planned number of iterations required to complete the software.

WINWIN spiral model

In the spiral model, the developer asks the customer what is required and the
customer provides sufficient detail to proceed. Unfortunately, this rarely happens;
in reality, the customer and developer enter into a process of negotiation, where the
customer may be asked to balance functionality, performance and other product or
system characteristics against cost and time to market.

Boehm's WINWIN model defines a set of negotiation activities at the beginning of
each pass around the spiral:

 Identification of the system or subsystem's key stakeholders.
 Determination of the stakeholders' "win" conditions.
 Negotiation of the stakeholders' win conditions to reconcile them into a set of
win-win conditions for all concerned.

Successful completion of these initial steps achieves a win-win result, which becomes
the key criterion for proceeding to software and system definition.

The model has three process milestones, called anchor points, that establish
the completion of one cycle around the spiral and provide decision milestones before
the software project proceeds.

The anchor points represent three different views of progress as the project
traverses the spiral. The first anchor point, life cycle objectives (LCO), defines a set
of objectives for each major software engineering activity. For example, as part of
LCO, a set of objectives establishes the definition of top-level system/product
requirements. The second anchor point, life cycle architecture (LCA), establishes
objectives that must be met as the system and software architectures are defined. For
example, as part of LCA, the software project team must demonstrate that it has
evaluated the applicability of off-the-shelf and reusable software components and
considered their impact on architectural decisions. Initial operational capability
(IOC) is the third anchor point and represents a set of objectives associated with the
preparation of the software for installation/distribution, site preparation prior to
installation, and assistance by all parties that will support the software.

UNIT – II

RISK MANAGEMENT & CODING STANDARDS

Risk management deals with the identification, quantification, response and control
of risks.

Identification of risks

A risk is a possible future event which, if it happens, will hurt the project. A
risk is a problem or disaster looking for an opportunity to happen. The
characteristics of a risk are:

 if the risk happens, you lose something: time, money, etc.
 there is some non-zero likelihood that the risk will occur (i.e. don't worry
about meteors crashing into your project office)
 to some degree, the risk can be minimized

Generic risks:

1. Unrealistic, unstable, immature or excessive requirements


2. Misunderstanding the requirements (even if they are reasonable)
3. Personnel turnover, loss of key people
4. Inadequate time for testing
5. Misunderstanding of target environment
6. Disputes among project teams
7. Misestimation of project difficulty or complexity

Project specific risks:

1. Failure of research components to converge to successful solution


2. Failure of sub-contractor to deliver promised hardware/software on schedule
3. Inadequate budget
4. Unreasonable schedule
5. Inadequate personnel or personnel organization
6. Inadequate, inconsistent development platform, tools
7. Inadequate resources allocated to project
8. Uncooperative customers, vendors, subcontractors

Business risks:

1. No one wants the product


2. The organization doesn’t want the product
3. The organization doesn’t know how to sell the product
4. Senior management loses interest in the project
5. Loss of budget or personnel allocated to the project
6. Reorganization that makes the project a non-core activity

Boehm’s top 10 risk items

1. Personnel shortfalls: skill and knowledge levels, staff turnover, team dynamics.
2. Unrealistic schedules and budgets: requirements demand more time or money.
3. Developing the wrong software functions: complexity, imperfect understanding.
4. Developing the wrong user interface: not user-friendly, misleading.
5. Gold plating: adding unnecessary "nice" features
6. Continuing stream of requirements changes: requirement volatility forces rework
7. Shortfalls in externally performed tasks: subcontractors or users don’t do what’s
needed
8. Shortfalls in externally furnished components: hardware or supporting software
is inadequate
9. Real time performance shortfalls: some or all of the system causes bottlenecks
10. Straining computer science capabilities: unstable or unfamiliar technology

RISK QUANTIFICATION

Risk exposure is the sum of ((probability of risk) * (cost of risk)) over all risks.
The cost of a risk can be measured in dollars or some other impact measure.
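As a minimal sketch in Python of this calculation (the risk names, probabilities and
costs below are hypothetical examples, not from any standard list):

    # Risk exposure = sum of (probability of risk) * (cost of risk) over all risks.
    # Each entry is a hypothetical (description, probability, cost in dollars).
    risks = [
        ("Key developer leaves",  0.10, 40000),
        ("Subcontractor is late", 0.30, 25000),
        ("Requirements change",   0.50, 12000),
    ]

    exposure = sum(probability * cost for _, probability, cost in risks)
    print(f"Total risk exposure: ${exposure:,.0f}")  # $17,500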

Risk estimation (projection)

 identify the likelihood of each perceived risk
 determine the consequence/impact of the risk on the project and/or product,
i.e. the nature of the problem associated with the risk
o scope – how serious is the risk and how much of the project will be affected
o timing – will the risk arise early or late in the project
 determine uncertainty bounds for risk estimates

Risk avoidance

 change requirements (or the methods, or …) so the risk no longer applies
 take precautions to reduce the probability of the risk
 e.g. delete functionality that looks hard to implement

Risk transfer

 make your risk someone else’s risk


 look for another way (tool, technique, resources, …) to tackle the situation
 e.g. project completion insurance, move risk to client

Risk assumption

 accept a potential risk and its consequences as part of the project, but take
preventative actions to reduce its probability and impact
 prepare contingency plans
 e.g. arrange for "stand-by" expertise

Risk control

All this risk management takes resources and may add delays to the project.
The amount of risk management actually done depends on the magnitude of the
project and the consequences of the expected risks and the cost of mitigation.

Risk management tracks risk exposure during a project and applies risk control to
reduce exposure.

Risk control attempts to minimize the probability of risks occurring and/or to
minimize their effect on the project.

The recommended steps are:

1. Identify the risks: use existing lists (like Boehm's) and make your own lists
specific to the politics, culture, technology, etc. that constitute the project
environment; review the project plan with skepticism.

2. Determine the risk exposure: calculate the probability and cost of occurrence,
and identify the highest risks that really matter (you probably shouldn't deal
with every risk).
3. Develop strategies to mitigate the risks: decide whether you want to take
precautions (risk avoidance), look for other ways to proceed (risk transfer) or
create contingency plans (risk assumption).
4. Handle the risks: monitor critical tasks and deliverables to detect whether a risk
has occurred; look for new risks; update risk estimates and contingency plans as
the project moves along, taking timely action.

A risk management example

The risk is high staff turnover on the project.

Pre-project prevention

 t ry to address the causes of the high turnover


 f ix any problems that can be fixed (for example, working conditions)
 assume high turnover will occur and structure the project to compensate for
the effects of high turnover

Intra-project compensation

 organize project teams so information about the project is widely


disseminated
 demand detailed documentation in all phases of the project
 monitor to make sure that documentation is kept current
 use peer reviews (inspections, walkthroughs) of all work as a mechanism to
disseminate project information
 identify critical personnel and assign a backup person to each; keep the
backup person current with the critical person's work

Risk documentation

Risk description:

 risk statement (condition-consequence format: "if this happens, then that will
occur")
 context (circumstances, environment, resources and other issues affecting the
risk)
 impact (affected products, schedules, etc.)
 time frame (period during which risk is real, period during which action
can/should be taken)
 probability (likelihood of risk occurring)
 mitigation strategy (proposed action to eliminate, reduce or prevent the risk)

Risk management and control information:

 status
 priority (high, medium, low)
 risk origin (who identified the risk)
 date opened/identified
 assigned to (person examining risk and recommending mitigation)
 status/date (status changes such as change in probability of key events or a
change in the potential impact)
 date closed
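As a sketch of how these documentation and control fields might be captured in
practice (the class and field names below simply mirror the lists above and are
illustrative, not taken from any standard tool):

    # Minimal sketch of a risk record carrying the documentation and
    # control fields listed above. All names are illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RiskRecord:
        statement: str             # condition-consequence format
        context: str               # circumstances affecting the risk
        impact: str                # affected products, schedules, etc.
        time_frame: str            # period during which the risk is real
        probability: float         # likelihood of the risk occurring (0.0-1.0)
        mitigation_strategy: str   # action to eliminate, reduce or prevent it
        status: str = "open"
        priority: str = "medium"   # high, medium, low
        origin: str = ""           # who identified the risk
        assigned_to: str = ""      # person examining the risk
        date_opened: str = ""
        date_closed: Optional[str] = None

    turnover = RiskRecord(
        statement="If key staff leave, then the schedule slips by two months",
        context="Tight labor market; two critical developers",
        impact="Delivery date, integration testing",
        time_frame="Entire project",
        probability=0.2,
        mitigation_strategy="Assign a backup person to each critical role",
        priority="high",
        origin="Project manager",
        date_opened="2024-01-15",
    )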

RISK ANALYSIS

A systematic approach for describing and/or calculating risk. Risk analysis involves
the identification of undesired events, and the causes and consequences of these
events.

A risk analysis can be quantitative. However, this requires the existence of suitable
(relevant and reliable) data.

A risk analysis can also be qualitative.

In either case, the following elements should be included:

 A description of problems and objectives


 Selection of procedures, methods and data sources
 Identification of undesired events
 An analysis of causal factors and consequences
 A description of risk
 Mitigating measures
 Presentation of results

Based on the last item, a comparison with the tolerability matrix can be made;
the results of the risk analysis should also be useful in identifying risk-mitigating
measures.

Undesirable outcome      Likelihood of occurrence
                         Very Likely     Possible        Unlikely
Loss of life             Catastrophic    Catastrophic    Severe
Loss of spacecraft       Catastrophic    Severe          Severe
Loss of mission          Severe          Severe          High
Degraded mission         High            Moderate        Low
Inconvenience            Moderate        Low             Low
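A sketch of how a tolerability matrix like this one can be encoded as a simple
lookup table (the labels mirror the matrix above; the function name is illustrative):

    # Minimal sketch: encode the tolerability matrix above as a lookup table.
    # Keys are (undesirable outcome, likelihood); values are risk levels.
    RISK_MATRIX = {
        ("loss of life",       "very likely"): "catastrophic",
        ("loss of life",       "possible"):    "catastrophic",
        ("loss of life",       "unlikely"):    "severe",
        ("loss of spacecraft", "very likely"): "catastrophic",
        ("loss of spacecraft", "possible"):    "severe",
        ("loss of spacecraft", "unlikely"):    "severe",
        ("loss of mission",    "very likely"): "severe",
        ("loss of mission",    "possible"):    "severe",
        ("loss of mission",    "unlikely"):    "high",
        ("degraded mission",   "very likely"): "high",
        ("degraded mission",   "possible"):    "moderate",
        ("degraded mission",   "unlikely"):    "low",
        ("inconvenience",      "very likely"): "moderate",
        ("inconvenience",      "possible"):    "low",
        ("inconvenience",      "unlikely"):    "low",
    }

    def risk_level(outcome: str, likelihood: str) -> str:
        """Return the qualitative risk level for an outcome/likelihood pair."""
        return RISK_MATRIX[(outcome.lower(), likelihood.lower())]

    print(risk_level("Loss of mission", "Possible"))  # severe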

Risk is the potential harm that may arise from some current process or from
some future event.

Risk is present in every aspect of our lives and many different disciplines
focus on risk as it applies to them. From the IT security perspective, risk
management is the process of understanding and responding to factors that may
lead to a failure in the confidentiality, integrity or availability of an information
system. IT security risk is the harm to a process or the related information resulting
from some purposeful or accidental event that negatively impacts the process or the
related information.

Risk is a function of the likelihood of a given threat-source's exercising a particular
potential vulnerability, and the resulting impact of that adverse event on the
organization.

Threats

One of the most widely used definitions of threat and threat-source can be
found in the National Institute of Standards and Technology (NIST) guidance.

Threat: The potential for a threat source to exercise (accidentally trigger or


intentionally exploit) a specific vulnerability.

Threat-Source: Either (1) intent and method targeted at the intentional exploitation
of a Vulnerability or (2) a situation and method that may accidentally trigger a
vulnerability. The threat is merely the potential for the exercise of a particular
vulnerability. Threats in themselves are not actions. Threats must be coupled with
threat-sources to become dangerous. This is an important distinction when assessing
and managing risks, since each threat-source may be associated with a different
likelihood, which, as will be demonstrated, affects risk assessment and risk
management. It is often expedient to incorporate threat sources into threats. The list
below shows some (but not all) of the possible threats to information systems.

Threat (Including Threat Source) – Description

Accidental Disclosure – The unauthorized or accidental release of classified,
personal, or sensitive information.

Acts of Nature – All types of natural occurrences (e.g., earthquakes, hurricanes,
tornadoes) that may damage or affect the system/application. Any of these
potential threats could lead to a partial or total outage, thus affecting availability.

Alteration of Software – An intentional modification, insertion, or deletion of
operating system or application system programs, whether by an authorized user
or not, which compromises the confidentiality, availability, or integrity of data,
programs, system, or resources controlled by the system. This includes malicious
code, such as logic bombs, Trojan horses, trapdoors, and viruses.

Bandwidth Usage – The accidental or intentional use of communications
bandwidth for other than intended purposes.

Electrical Interference/Disruption – An interference or fluctuation may occur as
the result of a commercial power failure. This may cause denial of service to
authorized users (failure) or a modification of data (fluctuation).

Intentional Alteration of Data – An intentional modification, insertion, or deletion
of data, whether by an authorized user or not, which compromises confidentiality,
availability, or integrity of the data produced, processed, controlled, or stored by
data processing systems.

System Configuration Error (Accidental) – An accidental configuration error
during the initial installation or upgrade of hardware, software, communication
equipment or operational environment.

Telecommunication Malfunction/Interruption – Any communications link, unit or
component failure sufficient to cause interruptions in the data transfer via
telecommunications between computer terminals, remote or distributed processors,
and the host computing facility.

Vulnerability: A flaw or weakness in system security procedures, design,


implementation, or internal controls that could be exercised (accidentally triggered
or intentionally exploited) and result in a security breach or a violation of the
system’s security policy.

Vulnerabilities are not merely flaws in the technical protections provided by


the system.

Significant vulnerabilities are often contained in the standard operating
procedures that systems administrators perform, in the process that the help desk
uses to reset passwords, or in inadequate log review. Another area where
vulnerabilities may be identified is at the policy level. For instance, a lack of a
clearly defined security testing policy may be directly responsible for the lack of
vulnerability scanning.

Here are a few examples of vulnerabilities related to contingency planning/
disaster recovery:

• Not having clearly defined contingency directives and procedures

• Lack of a clearly defined, tested contingency plan


• The absence of adequate formal contingency training
• Lack of information (data and operating system) backups
• Inadequate information system recovery procedures, for all processing areas
(including networks)
• Not having alternate processing or storage sites
• Not having alternate communication services

Qualitative Risk Assessment

Qualitative risk assessments assume that there is already a great degree of


uncertainty in the likelihood and impact values and define them, and thus risk, in
somewhat subjective or qualitative terms. Similar to the issues in quantitative risk
assessment, the great difficulty in qualitative risk assessment is defining the
likelihood and impact values. Moreover, these values need to be defined in a manner
that allows the same scales to be consistently used across multiple risk assessments.

The results of qualitative risk assessments are inherently more difficult to


concisely communicate to management. Qualitative risk assessments typically give
risk results of "High", "Moderate" and "Low". However, by providing the impact
and likelihood definition tables and the description of the impact, it is possible to
adequately communicate the assessment to the organization’s management.

Identifying Threats

As was alluded to in the section on threats, both threat-sources and threats


must be identified.

Threats should include the threat-source to ensure accurate assessment.

Some common threat-sources include:

• Natural Threats—floods, earthquakes, hurricanes


• Human Threats—threats caused by human beings, including both unintentional
acts (inadvertent data entry) and deliberate actions (network-based attacks, virus
infection, unauthorized access)

• Environmental Threats—power failure, pollution, chemicals, water damage


Individuals who understand the organization, industry or type of system (or better
yet all three) are key in identifying threats. Once the general list of threats has been
compiled, review it with those most knowledgeable about the system, organization
or industry to gain a list of threats that applies to the system.

It is valuable to compile a list of threats that are present across the


organization and use this list as the basis for all risk management activities. As a
major consideration of risk management is to ensure consistency and repeatability,
an organizational threat list is invaluable.

Identifying Vulnerabilities

Vulnerabilities can be identified by numerous means. Different risk


management schemes offer different methodologies for identifying vulnerabilities.
In general, start with commonly available vulnerability lists or control areas. Then,
working with the system owners or other individuals with knowledge of the system
or organization, start to identify the vulnerabilities that apply to the system. Specific
vulnerabilities can be found by reviewing vendor web sites and public vulnerability
archives, such as Common Vulnerabilities and Exposures (CVE). If they exist, previous risk
assessments and audit reports are the best place to start. Additionally, while the
following tools and techniques are typically used to evaluate the effectiveness of
controls, they can also be used to identify vulnerabilities:

• Vulnerability Scanners – Software that can examine an operating system,


network application or code for known flaws by comparing the system (or
system responses to known stimuli) to a database of flaw signatures.

• Penetration Testing – An attempt by human security analysts to exercise threats
against the system. This includes operational vulnerabilities, such as social
engineering.
• Audit of Operational and Management Controls – A thorough review of
operational and management controls by comparing the current documentation
to best practices (such as ISO 17799) and by comparing actual practices against
current documented processes. It is invaluable to have a base list of
vulnerabilities that are always considered during every risk assessment in the
organization. This practice ensures at least a minimum level of consistency
between risk assessments. Moreover, vulnerabilities discovered during past
assessments of the system should be included in all future assessments. Doing
this allows management to understand that past risk management activities
have been effective.

Relating Threats to Vulnerabilities

One of the more difficult activities in the risk management process is to relate
a threat to a vulnerability. Nonetheless, establishing these relationships is a
mandatory activity, since risk is defined as the exercise of a threat against a
vulnerability. This is often called threat-vulnerability (T-V) pairing. Once again,
there are many techniques to perform this task. Not every threat-action/threat can
be exercised against every vulnerability. For instance, a threat of "flood" obviously
applies to a vulnerability of "lack of contingency planning", but not to a
vulnerability of "failure to change default authenticators." While logically it seems
that a standard set of T-V pairs would be widely available and used, there currently
is not one readily available. This may be due to the fact that threats and especially
vulnerabilities are constantly being discovered and that the T-V pairs would change
fairly often. Nonetheless, an organizational standard list of T-V pairs should be
established and used as a baseline.
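As an illustrative sketch of such a baseline (the pairs below are hypothetical
examples in the spirit of the flood example above; a real list would be
organization-specific and revised as new threats are discovered):

    # Minimal sketch of a threat-vulnerability (T-V) pairing baseline.
    TV_PAIRS = [
        ("flood",                 "lack of contingency planning"),
        ("network-based attack",  "failure to change default authenticators"),
        ("accidental disclosure", "inadequate log review"),
    ]

    def threats_for(vulnerability: str) -> list[str]:
        """Return every baseline threat that can exercise a given vulnerability."""
        return [t for t, v in TV_PAIRS if v == vulnerability]

    print(threats_for("lack of contingency planning"))  # ['flood']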

AN INTRODUCTION TO INFORMATION SYSTEM RISK MANAGEMENT

Risk Managed

The purpose of assessing risk is to assist management in determining where


to direct resources. There are four basic strategies for managing risk: mitigation,
transference, acceptance and avoidance. Each will be discussed below.

For each risk in the risk assessment report, a risk management strategy must
be devised that reduces the risk to an acceptable level for an acceptable cost. For each
risk management strategy, the cost associated with the strategy and the basic steps
for achieving the strategy (known as the Plan Of Action & Milestones or POAM)
must also be determined.

Mitigation

Mitigation is the most commonly considered risk management strategy.


Mitigation involves fixing the flaw or providing some type of compensatory control
to reduce the likelihood or impact associated with the flaw. A common mitigation
for a technical security flaw is to install a patch provided by the vendor. Sometimes
the process of determining mitigation strategies is called control analysis.

Transference

Transference is the process of allowing another party to accept the risk on


your behalf. This is not widely done for IT systems, but everyone does it all the time
in their personal lives. Car, health and life insurance are all ways to transfer risk. In
these cases, risk is transferred from the individual to a pool of insurance holders,
including the insurance company. Note that this does not decrease the likelihood or
fix any flaws, but it does reduce the overall impact (primarily financial) on the
organization.

Acceptance

Acceptance is the practice of simply allowing the system to operate with a


known risk. Many low risks are simply accepted. Risks that have an extremely high

cost to mitigate are also often accepted. Beware of high risks being accepted by
management. Ensure that this strategy is in writing and accepted by the manager(s)
making the decision. Often risks are accepted that should not have been accepted,
and then when the penetration occurs, the IT security personnel are held
responsible. Typically, business managers, not IT security personnel, are the ones
authorized to accept risk on behalf of an organization.

Avoidance

Avoidance is the practice of removing the vulnerable aspect of the system or


even the system itself. For instance, during a risk assessment, a website was
uncovered that let vendors view their invoices, using a vendor ID embedded in the
HTML file name as the identification and no authentication or authorization per
vendor. When notified about the web pages and the risk to the organization,
management decided to remove the web pages and provide vendor invoices via
another mechanism. In this case, the risk was avoided by removing the vulnerable
web pages.

Communicating Risks and Risk Management Strategies

Risk must also be communicated. Once risk is understood, risks and risk
management strategies must be clearly communicated to organizational
management in terms it can easily understand. Managers are used to managing
risk; they do it every day. So presenting risk in a way that they will understand
is key.

With a quantitative risk assessment methodology, risk management decisions


are typically based on comparing the costs of the risk against the costs of risk
management strategy. A return on investment (ROI) analysis is a powerful tool to
include in the risk assessment report. This is a tool commonly used in business to
justify taking or not taking a certain action. Managers are very familiar with using
ROI to make decisions.

With a qualitative risk assessment methodology, the task is somewhat more


difficult. While the cost of the strategies is usually well known, the cost of not
implementing the strategies is not, which is why a qualitative and not a quantitative

risk assessment was performed. Including a management-friendly description of the
impact and likelihood with each risk and risk management strategy is extremely
effective. Another effective approach is showing the residual risk that would
remain after the risk management strategy was enacted.

Risk Assessment/Management Tools

• National Institute of Standards & Technology (NIST) Methodology


• OCTAVE®
• FRAP
• COBRA
• Risk Watch

National Institute of Standards & Technology (NIST) Methodology

NIST Special Publication (SP) 800-30, Risk Management Guide for Information
Technology Systems is the US Federal Government’s standard. This methodology is
primarily designed to be qualitative and is based upon skilled security analysts
working with system owners and technical experts to thoroughly identify, evaluate
and manage risk in IT systems. The process is extremely comprehensive, covering
everything from threat-source identification to ongoing evaluation and assessment.

The NIST methodology consists of 9 steps:

• Step 1: System Characterization


• Step 2: Threat Identification
• Step 3: Vulnerability Identification
• Step 4: Control Analysis
• Step 5: Likelihood Determination
• Step 6: Impact Analysis
• Step 7: Risk Determination
• Step 8: Control Recommendations
• Step 9: Results Documentation

OCTAVE®

The Software Engineering Institute (SEI) at Carnegie Mellon University


developed the Operationally Critical Threat, Asset and Vulnerability Evaluation
(OCTAVE) process. The main goal in developing OCTAVE is to help organizations
improve their ability to manage and protect themselves from information security
risks. OCTAVE is workshop-based rather than tool based. This means that rather
than including extensive security expertise in a tool, the participants in the risk
assessment need to understand the risk and its components. The workshop-based
approach espouses the principle that the organization will understand the risk better
than a tool and that the decisions will be made by the organization rather than by a
tool. There are three phases of workshops. Phase 1 gathers knowledge about
important assets, threats, and protection strategies from senior managers.

Phase 1 consists of the following processes:

• Process 1: Identify Senior Management Knowledge


• Process 2: (multiple) Identify Operational Area Management Knowledge
• Process 3: (multiple) Identify Staff Knowledge
• Process 4: Create Threat Profiles

Phase 2 gathers knowledge from operational area managers. Phase 2 consists of


the following processes:

• Process 5: Identify Key Components


• Process 6: Evaluate Selected Components

Phase 3 gathers knowledge from staff. Phase 3 consists of the following processes:

• Process 7: Conduct Risk Analysis


• Process 8: Develop Protection Strategy (workshop A: strategy development;
workshop B: strategy review, revision, approval)

These activities produce a view of risk that takes the entire organization’s
viewpoints into account, while minimizing the time of the individual participants.
The outputs of the OCTAVE process are:

• Protection Strategy
• Mitigation Plan
• Action List

FRAP

The Facilitated Risk Assessment Process (FRAP) is the creation of Thomas


Peltier. It is based upon implementing risk management techniques in a highly cost-
effective way. FRAP uses formal qualitative risk analysis methodologies using
Vulnerability Analysis, Hazard Impact Analysis, Threat Analysis and
Questionnaires. Moreover, FRAP stresses pre-screening systems and only
performing formal risk assessments on systems when warranted. Lastly, FRAP ties
risk to impact using the Business Impact Analysis as a basis for determining impact.
Thomas Peltier has written a book on FRAP and several consulting companies,
including RSA and Peltier Associates, teach FRAP.

COBRA

The Consultative, Objective and Bi-functional Risk Analysis (COBRA) process


was originally created by C & A Systems Security Ltd. in 1991. It takes the approach
that risk assessment is a business issue rather than a technical issue. It consists of
tools that can be purchased and then utilized to perform self-assessments of risk,
while drawing on the expert knowledge embedded in the tools. The primary
knowledge bases are:

• IT Security (or default)


• Operational Risk
• 'Quick Risk' or 'high level risk'
• e-Security

There are two primary products, Risk Consultant and ISO Compliance. Risk
Consultant is a tool with knowledge bases and built in templates that allow the user
to create questionnaires to gather the information about the types of assets,
vulnerabilities, threats, and controls. From this information, Risk Consultant can
create reports and make recommendations, which can then be customized. ISO
Compliance is similar, only this product is focused on ISO 17799 compliance.

Risk Watch

Risk Watch is another tool that uses an expert knowledge database to walk
the user through a risk assessment and provide reports on compliance as well as
advice on managing the risks. Risk Watch includes statistical information to support
quantitative risk assessment, allowing the user to show ROI for various strategies.
Risk Watch has several products, each focused along different compliance needs.
There are products based on NIST Standards (U.S. government), ISO 17799, HIPAA
and Financial Institution standards (the Gramm-Leach-Bliley Act, California SB 1386
(Identity Theft standards), Facilities Access Standards and the FFIEC Standards for
Information Systems).

UNIT - III

TESTING TECHNIQUE & TESTING TOOLS

Software Testing is the process of executing a program or system with the intent of
finding errors. Or, it involves any activity aimed at evaluating an attribute or
capability of a program or system and determining that it meets its required results.
Software is not unlike other physical processes where inputs are received and
outputs are produced. Where software differs is in the manner in which it fails. Most
physical systems fail in a fixed (and reasonably small) set of ways. By contrast,
software can fail in many bizarre ways. Detecting all of the different failure modes
for software is generally infeasible.

Unlike most physical systems, most of the defects in software are design
errors, not manufacturing defects. Software does not suffer from corrosion or wear-
and-tear; generally it will not change until upgrades, or until obsolescence. So once
the software is shipped, the design defects or bugs will be buried in and remain
latent until activation.

Software bugs will almost always exist in any software module with
moderate size: not because programmers are careless or irresponsible, but because
the complexity of software is generally intractable and humans have only limited
ability to manage complexity. It is also true that for any complex systems, design
defects can never be completely ruled out.

Discovering the design defects in software is equally difficult, for the same
reason of complexity. Because software and digital systems are not continuous,
testing boundary values alone is not sufficient to guarantee correctness. All the
possible values would need to be tested and verified, but complete testing is
infeasible. Exhaustively testing a simple program to add only two integer inputs of
32 bits each (yielding 2^64 distinct test cases) would take hundreds of millions of
years, even if tests were performed at a rate of thousands per second. Obviously, for
a realistic software module, the complexity can be far beyond this example. If inputs
from the real world are involved, the problem gets worse, because timing and
unpredictable environmental effects and human interactions are all possible input
parameters.
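A quick back-of-the-envelope check of this figure:

    # Back-of-the-envelope check of the exhaustive-testing claim above.
    cases = 2 ** 64                        # distinct pairs of two 32-bit inputs
    rate = 1000                            # tests per second ("thousands per second")
    seconds_per_year = 60 * 60 * 24 * 365
    years = cases / rate / seconds_per_year
    print(f"about {years:,.0f} years")     # about 584,942,417 years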

To improve quality:

As computers and software are used in critical applications, the outcome of


a bug can be severe. Bugs can cause huge losses. Bugs in critical systems have caused
airplane crashes, allowed space shuttle missions to go awry, halted trading on the
stock market, and worse. Bugs can kill. Bugs can cause disasters. The so-called year
2000 (Y2K) bug has given birth to a cottage industry of consultants and
programming tools dedicated to making sure the modern world doesn't come to a
screeching halt on the first day of the next century. [Bugs] In a computerized
embedded world, the quality and reliability of software is a matter of life and death.

Quality means conformance to the specified design requirements. Being
correct, the minimum requirement of quality, means performing as required under
specified circumstances. Debugging, a narrow view of software testing, is performed
heavily by the programmer to find design defects. The imperfection of human
nature makes it almost impossible to make a moderately complex program correct
the first time. Finding the problems and getting them fixed is the purpose of
debugging in the programming phase.

2. Explain Unit Testing.

Unit Testing focuses verification effort on the smallest unit of software
design: the software component or module. Using the component-level design
description as a guide, important control paths are tested to uncover errors within
the boundary of the module. The relative complexity of tests and uncovered errors
is limited by the constrained scope established for unit testing. The unit test is
white-box oriented, and the step can be conducted in parallel for multiple components.

Unit Test Considerations

The tests that occur as part of unit tests are illustrated schematically in
Figure 4.1. The module interface is tested to ensure that information properly flows
into and out of the program unit under test. The local data structure is examined to
ensure that data stored temporarily maintains its integrity during all steps in an
algorithm's execution. Boundary conditions are tested to ensure that the module

operates properly at boundaries established to limit or restrict processing. All
independent paths (basis paths) through the control structure are exercised to ensure
that all statements in a module have been executed at least once. And finally, all
error handling paths are tested.

[Fig. 4.1: Unit test. Test cases exercise the module's interface, local data structures,
boundary conditions, independent paths, and error handling paths]

Tests of data flow across a module interface are required before any other test
is initiated. If data do not enter and exit properly, all other tests are moot. In
addition, local data structures should be exercised and the local impact on global
data should be ascertained (if possible) during unit testing.

Selective testing of execution paths is an essential task during the unit test.
Test cases should be designed to uncover errors due to erroneous computations,
incorrect comparisons, or improper control flow. Basis path and loop testing are
effective techniques for uncovering a broad array of path errors.

Among the more common errors in computation are

(1) Misunderstood or incorrect arithmetic precedence,
(2) Mixed mode operations,
(3) Incorrect initialization,
(4) Precision inaccuracy,
(5) Incorrect symbolic representation of an expression.

Comparison and control flow are closely coupled to one another (i.e., a change of
flow frequently occurs after a comparison).

Test cases should uncover errors such as

(1) Comparison of different data types,


(2) Incorrect logical operators or precedence,
(3) Expectation of equality when precision error makes equality unlikely,
(4) Incorrect comparison of variables,
(5) Improper or nonexistent loop termination,
(6) Failure to exit when divergent iteration is encountered.
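As a brief sketch of error class (3) in this list, a unit test can compare within a
tolerance instead of expecting exact equality (average is a made-up function under
test, not from the text):

    # Sketch of a test case targeting error class (3): expectation of
    # equality when precision error makes equality unlikely.
    import math
    import unittest

    def average(values):
        """Hypothetical function under test."""
        return sum(values) / len(values)

    class AverageTests(unittest.TestCase):
        def test_average_uses_tolerance_not_equality(self):
            result = average([0.1, 0.2, 0.3])
            # 0.2 may not be represented exactly in floating point; compare
            # within a tolerance instead of asserting result == 0.2.
            self.assertTrue(math.isclose(result, 0.2, rel_tol=1e-9))

    if __name__ == "__main__":
        unittest.main()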

[Fig 4.2: Unit test environment. A driver invokes the module to be tested, stubs
stand in for its subordinate modules, and test cases cover the interface, local data
structures, boundary conditions, independent paths, and error handling paths]
Good design dictates that error conditions be anticipated and error handling paths
set up to reroute or cleanly terminate processing when an error does occur. Yourdon
calls this approach antibugging. Unfortunately, there is a tendency to incorporate
error handling into software and then never test it. Boundary testing is also
essential: software often fails when the ith repetition of a loop with i passes is
invoked, or when the maximum or minimum allowable value is encountered. Test
cases that exercise data structure, control flow, and data values just below, at, and
just above maxima and minima are very likely to uncover errors.

Unit Test Procedures

Unit testing is normally considered as an adjunct to the coding step. After


source level code has been developed, reviewed, and verified for correspondence to
component level design, unit test case design begins. A review of design information

provides guidance for establishing test cases that are likely to uncover errors in each
of the categories discussed earlier. Each test case should be coupled with a set of
expected results.

Because a component is not a stand-alone program, driver and/or stub


software must be developed for each unit test. The unit test environment is
illustrated in Figure. In most applications a driver is nothing more than a "main
program" that accepts test case data, passes such data to the component (to be tested)
, and prints relevant results. Stubs serve to replace modules that are subordinate
to (called by) the component to be tested. A stub or "dummy subprogram" uses the
subordinate module's interface, may do minimal data manipulation, prints
verification of entry, and returns control to the module undergoing testing.

Drivers and stubs represent overhead. That is, both are software that must be
written (formal design is not commonly applied) but that is not delivered with the
final software product. If drivers and stubs are kept simple, actual overhead is
relatively low. Unfortunately, many components cannot be adequately unit tested
with "simple" overhead software. In such cases, complete testing can be postponed
until the integration test step (where drivers or stubs are also used).
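A minimal sketch of this driver/stub arrangement (compute_tax, lookup_rate and
the test values are invented for illustration; the test class plays the role of the
driver):

    # Sketch of a unit test environment: a driver exercises the component
    # under test, and a stub replaces its subordinate module.
    import unittest

    def lookup_rate(region):
        """Subordinate module (would normally query a database)."""
        raise NotImplementedError("real module not available during unit test")

    def compute_tax(amount, region, rate_lookup=lookup_rate):
        """Component under test; its subordinate is injected so a stub can replace it."""
        return amount * rate_lookup(region)

    class ComputeTaxTest(unittest.TestCase):
        """The driver: accepts test case data, invokes the component, reports results."""

        def test_compute_tax_with_stub(self):
            # Stub: minimal data manipulation, returns a fixed rate.
            stub_rate_lookup = lambda region: 0.10
            self.assertAlmostEqual(compute_tax(200.0, "north", stub_rate_lookup), 20.0)

    if __name__ == "__main__":
        unittest.main()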

Integration Testing

Integration testing is a systematic technique for constructing the program structure


while at the same time conducting tests to uncover errors associated with
interfacing. The objective is to take unit tested components and build a program
structure that has been dictated by design.

There is often a tendency to attempt non-incremental integration; that is, to
construct the program using a "big bang" approach. All components are combined in
advance. The entire program is tested as a whole. And chaos usually results! A set
of errors is encountered. Correction is difficult because isolation of causes is
complicated by the vast expanse of the entire program. Once these errors are
corrected, new ones appear and the process continues in a seemingly endless loop.

Incremental integration is the antithesis of the big bang approach. The program is
constructed and tested in small increments, where errors are easier to isolate and
correct; interfaces are more likely to be tested completely; and a systematic test
approach may be applied.

Top-down Integration

Top-down integration testing is an incremental approach to construction of the
program structure. Modules are integrated by moving downward through the
control hierarchy, beginning with the main control module (main program).
Modules subordinate (and ultimately subordinate) to the main control module are
incorporated into the structure in either a depth-first or breadth-first manner.

Referring to the figure, depth-first integration would integrate all components on a
major control path of the structure. Selection of a major path is somewhat arbitrary
and depends on application-specific characteristics. For example, selecting the left-
hand path, components M1, M2, M5 would be integrated first. Next, M8 or (if
necessary for proper functioning of M2) M6 would be integrated. Then, the central
and right-hand control paths are built. Breadth-first integration incorporates all
components directly subordinate at each level, moving across the structure
horizontally. From the figure, components M2, M3, and M4 (a replacement for stub
S4) would be integrated first. The next control level, M5, M6, and so on, follows.

The integration process is performed in a series of five steps:

1. The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing may be conducted to ensure that new errors have not been
introduced.

The process continues from step 2 until the entire program structure is built.

The top-down integration strategy verifies major control or decision points early in
the test process. In a well-factored program structure, decision making occurs at
upper levels in the hierarchy and is therefore encountered first. If major control
problems do exist, early recognition is essential. If depth-first integration is selected,
a complete function of the software may be implemented and demonstrated. For
example, consider a classic transaction structure in which a complex series of
interactive inputs is requested, acquired, and validated via an incoming path. The
incoming path may be integrated in a top-down manner. All input processing (for
subsequent transaction dispatching) may be demonstrated before other elements of
the structure have been integrated. Early demonstration of functional capability is a
confidence builder for both the developer and the customer.

Top-down strategy sounds relatively uncomplicated, but in practice, logistical
problems can arise. The most common of these problems occurs when processing at
low levels in the hierarchy is required to adequately test upper levels. Stubs replace
low- level modules at the beginning of top-down testing; therefore, no significant
data can flow upward in the program structure.

The tester is left with three choices:

1. Delay many tests until stubs are replaced with actual modules,
2. Develop stubs that perform limited functions that simulate the actual module, or
3. Integrate the software from the bottom of the hierarchy upward.

The first approach (delay tests until stubs are replaced by actual modules)
causes us to lose some control over correspondence between specific tests and
incorporation of specific modules. This can lead to difficulty in determining the
cause of errors and tends to violate the highly constrained nature of the top-down
approach. The second approach is workable but can lead to significant overhead, as
stubs become more and more complex. The third approach, bottom-up integration,
is discussed in the next section.

Bottom-up Integration

Bottom-up integration testing, as its name implies, begins construction and
testing with atomic modules (i.e., components at the lowest levels in the program
structure). Because components are integrated from the bottom up, processing
required for components subordinate to a given level is always available and the
need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

1. Low-level components are combined into clusters (sometimes called builds) that
perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input
and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.
Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested
using a driver (shown as a dashed block). Components in clusters 1 and 2 are
subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced
directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with
module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and
so forth.

Figure: Bottom-up integration. Clusters 1, 2 and 3 are each tested with drivers D1,
D2 and D3, then combined under modules Ma and Mb, which are in turn integrated
with component Mc.

As integration moves upward, the need for separate test drivers lessens. In fact, if
the top two levels of program structure are integrated top down, the number of
drivers can be reduced substantially and integration of clusters is greatly simplified.

Regression Testing

Regression testing is the reexecution of some subset of tests that have already been
conducted to ensure that changes have not propagated unintended side effects. In a
broader context, successful tests (of any kind) result in the discovery of errors, and
errors must be corrected. Whenever software is corrected, some aspect of the
software configuration (the program, its documentation, or the data that support it)
is changed. Regression testing is the activity that helps to ensure that changes (due to
testing or for other reasons) do not introduce unintended behavior or additional
errors.

Regression testing may be conducted manually, by re-executing a subset of
all test cases, or by using automated capture/playback tools. Capture/playback tools
enable the software engineer to capture test cases and results for subsequent
playback and comparison.

The regression test suite (the subset of tests to be executed) contains three
different classes of test cases:

1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by
the change.
3. Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite
large. Therefore, the regression test suite should be designed to include only those
tests that address one or more classes of errors in each of the major program
functions. It is impractical and inefficient to re-execute every test for every program
function once a change has occurred.
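
A simple Python sketch of assembling such a suite from the three classes of test
cases (the test names are purely illustrative):

representative_sample = ["test_login", "test_checkout", "test_report"]
affected_by_change = ["test_checkout_tax"]       # functions likely affected
changed_components = ["test_tax_calculator"]     # components actually changed

def build_regression_suite(*test_groups):
    # Merge the three classes, dropping duplicates while keeping order.
    seen, suite = set(), []
    for group in test_groups:
        for test in group:
            if test not in seen:
                seen.add(test)
                suite.append(test)
    return suite

suite = build_regression_suite(representative_sample,
                               affected_by_change, changed_components)
print(suite)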

Types of Loop Testing

Loop testing is a white-box testing technique that focuses exclusively on the validity
of loop constructs. Four different classes of loops can be defined:

1. Simple loops
2. Concatenated loops
3. Nested loops
4. Unstructured loops

Simple loops. The following set of tests can be applied to simple loops, where n is
the maximum number of allowable passes through the loop.

1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n-1, n, n+1 passes through the loop.
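
A minimal Python sketch of these tests, assuming a hypothetical loop that processes
at most n items:

def process(items, n):
    # Loop under test: makes at most n passes over the input.
    count = 0
    for item in items[:n]:
        count += 1
    return count

n = 10
# Skip the loop, one pass, two passes, m < n passes, and n-1, n, n+1 passes.
for passes in (0, 1, 2, 5, n - 1, n, n + 1):
    result = process(list(range(passes)), n)
    assert result == min(passes, n), "loop failure at %d passes" % passes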

Nested loops. If we were to extend the test approach for simple loops to nested
loops, the number of possible tests would grow geometrically as the level of nesting
increases. This would result in an impractical number of tests.

An approach that will help to reduce the number of tests:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops
at their minimum iteration parameter (e.g., loop counter) values. Add other tests
for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer
loops at minimum values and other nested loops to "typical" values.
4. Continue until all loops have been tested.

Concatenated loops. Concatenated loops can be tested using the approach defined
for simple loops, if each of the loops is independent of the other. However, if two
loops are concatenated and the loop counter for loop 1 is used as the initial value for
loop 2, then the loops are not independent. When the loops are not independent, the
approach applied to nested loops is recommended.

Unstructured loops. Whenever possible, this class of loops should be redesigned to
reflect the use of the structured programming constructs.

Validation Testing

At the culmination of integration testing, software is completely assembled as a
package, interfacing errors have been uncovered and corrected, and a final series of
software tests, validation testing, may begin. Validation can be defined in many ways,
but a simple (albeit harsh) definition is that validation succeeds when software
functions in a manner that can be reasonably expected by the customer. At this point
a battle-hardened software developer might protest: "Who or what is the arbiter of
reasonable expectations?"

Reasonable expectations are defined in the Software Requirements Specification, a
document that describes all user-visible attributes of the software. The specification
contains a section called Validation Criteria. Information contained in that section
forms the basis for a validation testing approach.

Validation Test Criteria

Software validation is achieved through a series of black-box tests that
demonstrate conformity with requirements. A test plan outlines the classes of tests to
be conducted and a test procedure defines specific test cases that will be used to
demonstrate conformity with requirements. Both the plan and procedure are
designed to ensure that all functional requirements are satisfied, all behavioral
characteristics are achieved, all performance requirements are attained,
documentation is correct, and human-engineered and other requirements are met
(e.g., transportability, compatibility, error recovery, maintainability).

After each validation test case has been conducted, one of two possible conditions
exists:

(1) The function or performance characteristics conform to specification and are
accepted, or
(2) a deviation from specification is uncovered and a deficiency list is created.

A deviation or error discovered at this stage in a project can rarely be corrected
prior to scheduled delivery. It is often necessary to negotiate with the customer
to establish a method for resolving deficiencies.

Configuration Review

An important element of the validation process is a configuration review. The
intent of the review is to ensure that all elements of the software configuration have
been properly developed, are cataloged, and have the necessary detail to bolster the
support phase of the software life cycle. The configuration review is sometimes
called an audit.

Alpha and Beta Testing

It is virtually impossible for a software developer to foresee how the customer will
really use a program. Instructions for use may be misinterpreted; strange
combinations of data may be regularly used; output that seemed clear to the tester
may be unintelligible to a user in the field.

When custom software is built for one customer, a series of acceptance tests
are conducted to enable the customer to validate all requirements. Conducted by the
end user rather than software engineers, an acceptance test can range from an
informal "test drive" to a planned and systematically executed series of tests. In fact,
acceptance testing can be conducted over a period of weeks or months, thereby
uncovering cumulative errors that might degrade the system over time.

If software is developed as a product to be used by many customers, it is impractical
to perform formal acceptance tests with each one. Most software product builders
use a process called alpha and beta testing to uncover errors that only the end-user
seems able to find.

The alpha test is conducted at the developer's site by a customer. The software is
used in a natural setting with the developer "looking over the shoulder" of the user
and recording errors and usage problems. Alpha tests are conducted in a controlled
environment.

The beta test is conducted at one or more customer sites by the end-user of the
software. Unlike alpha testing, the developer is generally not present. Therefore, the
beta test is a "live" application of the software in an environment that cannot be
controlled by the developer. The customer records all problems (real or imagined)
that are encountered during beta testing and reports these to the developer at
regular intervals. As a result of problems reported during beta tests, software
engineers make modifications and then prepare for release of the software product
to the entire customer base.

Software Testing Strategy Issues

The following issues must be addressed if a successful software testing strategy is
to be implemented:

Specify product requirements in a quantifiable manner long before testing
commences. Although the overriding objective of testing is to find errors, a good
testing strategy also assesses other quality characteristics such as portability,
maintainability, and usability. These should be specified in a way that is measurable
so that testing results are unambiguous.

State testing objectives explicitly. The specific objectives of testing should be stated
in measurable terms. For example, test effectiveness, test coverage, mean time to
failure, the cost to find and fix defects, remaining defect density or frequency of
occurrence, and test work-hours per regression test all should be stated within the
test plan.

Understand the users of the software and develop a profile for each user category.
Use cases that describe the interaction scenario for each class of user can reduce
overall testing effort by focusing testing on actual use of the product.

Develop a testing plan that emphasizes "rapid cycle testing." Gilb recommends that
a software engineering team "learn to test in rapid cycles (2 percent of project effort)
of customer-useful, at least field 'trialable,' increments of functionality and/or quality
improvement." The feedback generated from these rapid cycle tests can be used to
control quality levels and the corresponding test strategies.

Build "robust" software that is designed to test itself. Software should be designed
in a manner that uses antibugging techniques. That is, software should be capable of
diagnosing certain classes of errors. In addition, the design should accommodate
automated testing and regression testing.

Use effective formal technical reviews as a filter prior to testing. Formal technical
reviews can be as effective as testing in uncovering errors. For this reason, reviews
can reduce the amount of testing effort that is required to produce high-quality
software.

Develop a continuous improvement approach for the testing process. The test
strategy should be measured. The metrics collected during testing should be used as
part of a statistical process control approach for software testing.

Process of Deriving Test Cases

The basis path testing method can be applied to a procedural design or to source
code. In this section, we present basis path testing as a series of steps. The procedure
average, depicted in PDL in the figure, will be used as an example to illustrate each
step in the test case design method. Note that average, although an extremely simple
algorithm, contains compound conditions and loops. The following steps can be
applied to derive the basis set:

1. Using the design or code as a foundation, draw a corresponding flow graph. A
flow graph is created using the symbols and construction rules. The flow graph is
created by numbering those PDL statements that will be mapped into corresponding
flow graph nodes.
2. Determine the cyclomatic complexity of the resultant flow graph. The cyclomatic
complexity, V(G), is determined by applying the algorithms. It should be noted that
V(G) can be determined without developing a flow graph by counting all conditional
statements in the PDL (for the procedure average, compound conditions count as
two) and adding 1. Referring to the figure:
V(G) = 6 regions
V(G) = 17 edges - 13 nodes + 2 = 6
V(G) = 5 predicate nodes + 1 = 6
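
The three computations can be checked mechanically; a trivial Python sketch using
the edge, node and predicate counts above:

edges, nodes, predicate_nodes, regions = 17, 13, 5, 6

assert regions == 6                    # V(G) = number of regions
assert edges - nodes + 2 == 6          # V(G) = E - N + 2
assert predicate_nodes + 1 == 6        # V(G) = P + 1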

PROCEDURE average;

This procedure computes the average of 100 or fewer numbers that lie between
bounding values; it also computes the sum and the total number valid.

INTERFACE RETURNS average, total.input, total.valid;
INTERFACE ACCEPTS value, minimum, maximum;
TYPE value[1:100] IS SCALAR ARRAY;
TYPE average, total.input, total.valid,
     minimum, maximum, sum IS SCALAR;
TYPE i IS INTEGER;
i = 1;
total.input = total.valid = 0;
sum = 0;
DO WHILE value[i] <> -999 AND total.input < 100
   increment total.input by 1;
   IF value[i] >= minimum AND value[i] <= maximum
      THEN increment total.valid by 1;
           sum = sum + value[i]
      ELSE skip
   ENDIF
   increment i by 1;
ENDDO
IF total.valid > 0
   THEN average = sum / total.valid;
   ELSE average = -999;
ENDIF
END average
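
A Python rendering of the PDL above may help when experimenting with the
basis-path test cases (this is a sketch, not part of the original design; -999 is the
input sentinel):

def average(value, minimum, maximum):
    # Mirrors the PDL: reads values until -999 or 100 inputs are seen.
    i = 0
    total_input = total_valid = 0
    total = 0.0
    while i < len(value) and value[i] != -999 and total_input < 100:
        total_input += 1
        if minimum <= value[i] <= maximum:
            total_valid += 1
            total += value[i]
        i += 1
    avg = total / total_valid if total_valid > 0 else -999
    return avg, total_input, total_valid

# Path 2 test case: value(1) = -999, so totals stay at initial values.
assert average([-999], 0, 100) == (-999, 0, 0)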

3. Determine a basis set of linearly independent paths. The value of V(G) provides
the number of linearly independent paths through the program control structure. In
the case of procedure average, we expect to specify six paths:

path 1: 1-2-10-11-13
path 2: 1-2-10-12-13
path 3: 1-2-3-10-11-13
path 4: 1-2-3-4-5-8-9-2-...
path 5: 1-2-3-4-5-6-8-9-2-...
path 6: 1-2-3-4-5-6-7-8-9-2-...

The ellipsis following paths 4, 5, and 6 indicates that any path through the
remainder of the control structure is acceptable. It is often worthwhile to identify
predicate nodes as an aid in the derivation of test cases. In this case, nodes 2, 3, 5, 6,
and 10 are predicate nodes.

4. Prepare test cases that will force execution of each path in the basis set. Data
should be chosen so that conditions at the predicate nodes are appropriately set as
each path is tested. Test cases that satisfy the basis set just described are:

Path 1 test case:
value(k) = valid input, where k < i for 2 <= i <= 100
value(i) = -999, where 2 <= i <= 100
Expected results: correct average based on k values and proper totals.
Note: Path 1 cannot be tested stand-alone but must be tested as part of path 4, 5,
and 6 tests.

Figure: Flow graph for the procedure average (nodes 1 to 13; predicate nodes are
2, 3, 5, 6 and 10).

Path 2 test case:
value(1) = -999
Expected results: average = -999; other totals at initial values.

Path 3 test case:
Attempt to process 101 or more values.
First 100 values should be valid.
Expected results: same as test case 1.

Path 4 test case:
value(i) = valid input where i < 100
value(k) < minimum where k < i
Expected results: correct average based on k values and proper totals.

Path 5 test case:
value(i) = valid input where i < 100
value(k) > maximum where k <= i
Expected results: correct average based on n values and proper totals.

Path 6 test case:
value(i) = valid input where i < 100
Expected results: correct average based on n values and proper totals.

Each test case is executed and compared to expected results. Once all test
cases have been completed, the tester can be sure that all statements in the program
have been executed at least once.

It is important to note that some independent paths (e.g., path 1 in our


example) cannot be tested in stand-alone fashion. That is, the combination of data
required to traverse the path cannot be achieved in the normal flow of the program.
In such cases, these paths are tested as part of another path test.

Concepts of Black Box Testing

The black-box approach is a testing method in which test data are derived
from the specified functional requirements without regard to the final program
structure. It is also termed data-driven, input/output-driven or requirements-based
testing. Because only the functionality of the software module is of concern, black-box
testing mainly refers to functional testing, a testing method that emphasizes
executing the functions and examining their input and output data. The tester
treats the software under test as a black box: only the inputs, outputs and
specification are visible, and the functionality is determined by observing the
outputs for corresponding inputs. In testing, various inputs are exercised and the
outputs are compared against specification to validate the correctness. All test cases
are derived from the specification. No implementation details of the code are
considered.

It is obvious that the more we have covered in the input space, the more
problems we will find and therefore we will be more confident about the quality of
the software. Ideally we would be tempted to exhaustively test the input space. But
as stated above, exhaustively testing the combinations of valid inputs will be
impossible for most of the programs, let alone considering invalid inputs, timing,
sequence, and resource variables. Combinatorial explosion is the major roadblock in
functional testing. To make things worse, we can never be sure whether the
specification is either correct or complete. Due to limitations of the language used in
the specifications (usually natural language), ambiguity is often inevitable. Even if
we use some type of formal or restricted language, we may still fail to write down all
the possible cases in the specification. Sometimes, the specification itself becomes an
intractable problem: it is not possible to specify precisely every situation that can be
encountered using limited words. And people can seldom specify clearly what they
want; they usually can tell whether a prototype is, or is not, what they want only
after it has been finished. Specification problems contribute approximately 30
percent of all bugs in software.

The research in black-box testing mainly focuses on how to maximize the
effectiveness of testing with minimum cost, usually measured by the number of test
cases. It is not possible to exhaust the input space, but it is possible to exhaustively
test a subset of the input space. Partitioning is one of the common techniques. If we
have partitioned the input space and assume all the input values in a partition are
equivalent, then we only need to test one representative value in each partition to
sufficiently cover the whole input space. Domain testing partitions the input domain
into regions, and considers the input values in each domain an equivalence class.
Domains can be exhaustively tested and covered by selecting representative value(s)
in each domain. Boundary values are of special interest. Experience shows that test cases
that explore boundary conditions have a higher payoff than test cases that do not.
Boundary value analysis requires one or more boundary values selected as
representative test cases. The difficulty with domain testing is that incorrect
domain definitions in the specification cannot be efficiently discovered.
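
A small Python sketch of partitioning with boundary values, for a hypothetical field
that must hold a month number in 1..12 (the example is an assumption for
illustration):

def is_valid_month(m):
    # Unit under test: accepts integers in the domain 1..12.
    return isinstance(m, int) and 1 <= m <= 12

# One representative per partition, plus the boundaries of the valid region.
invalid_low = [0]          # partition below the valid domain
valid = [1, 6, 12]         # valid domain: both boundaries and an interior value
invalid_high = [13]        # partition above the valid domain

for m in invalid_low + invalid_high:
    assert not is_valid_month(m)
for m in valid:
    assert is_valid_month(m)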

Good partitioning requires knowledge of the software structure. A good
testing plan will not only contain black-box testing, but also white-box approaches,
and combinations of the two.

Black box testing is a testing technique in which the tester has no knowledge of the
internal functionality or structure of the system. This technique treats the system as
a black box or closed box. The tester knows only the formal inputs and projected
results; the tester does not know how the program actually arrives at those results.
Hence the tester tests the system based on the functional specifications given to him.
That is the reason black box testing is also considered functional testing. This
technique is also called behavioral testing, opaque box testing, or simply closed box
testing. Although black box testing is a behavioral testing technique, behavioral test
design is slightly different from black-box test design because the use of internal
knowledge is not illegal in behavioral testing.

Advantages of Black Box Testing

1. Efficient when used on larger systems.
2. As the tester and developer are independent of each other, the test is balanced
   and unprejudiced.
3. The tester can be non-technical.
4. The tester needs no detailed functional knowledge of the system.
5. Tests are done from an end user's point of view, because the end user must
   accept the system. (This is the reason this technique is sometimes also called
   acceptance testing.)
6. Testing helps to identify vagueness and contradictions in the functional
   specifications.
7. Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing

1. Test cases are tough and challenging to design without clear functional
   specifications.
2. It is difficult to identify tricky inputs if the test cases are not developed from the
   specifications.
3. It is difficult to identify all possible inputs in limited testing time, so writing test
   cases is slow and difficult.
4. There is a chance of unidentified paths remaining during this testing.
5. There is a chance of repeating tests already performed by the programmer.

Principles of Modularity

Certain principles must be followed to ensure proper modularity:

1. Linguistic modular units
2. Few interfaces
3. Small interfaces (weak coupling)
4. Explicit interfaces
5. Information hiding

Obtaining good modular architectures requires that communications occur in a
controlled and disciplined way.

Linguistic Modular Units

Modules must correspond to syntactic units in the language used. The
principle of linguistic modular units expresses that the formalism used to express
designs, programs, etc. must support the view of modularity retained. The language
may be a programming language, a program design language, a specification
language, etc. In the case of programming languages, modules should be separately
compilable. This principle follows from several modularity criteria:

Decomposability: if a system is divided into separate tasks, then each one must
result in a clearly delimited syntactic unit which is separately compilable.
Composability: only closed units can be combined.
Protection: the scope of errors can only be controlled if modules are syntactically
delimited.

Few Interfaces

Every module should communicate with as few others as possible. The few
interfaces principle restricts the overall number of communication channels between
modules in a software architecture. Communication may occur between modules in
a variety of ways - the few interfaces principle limits the number of such
connections. If a system is composed of n modules, then the number of intermodule
connections should remain much closer to the minimum, n-1, than to the maximum,
(n(n-1))/2. This principle follows from the criteria of continuity and protection: if
there are too many relations between modules, then the effect of a change or of an
error may propagate to a large number of modules. It is also connected to
composability, understandability and decomposability - to be reusable in another
environment, a module should not depend on too many others.

Small Interfaces (Weak Coupling)

If any two modules communicate at all, they should exchange as little information
as possible. The weak coupling principle relates to the size of intermodule
connections rather than to their number. It stems from the criteria of continuity
(propagation of changes) and protection (propagation of errors). Counter-example:
global variables or a "COMMON block", where every module may directly use
every piece of data. Such modules are considered to be tightly coupled; the problem
is that every module may also misuse the common data.

Explicit Interfaces

Whenever two modules A and B communicate, this must be obvious from the
text of A or B or both. Criteria: Decomposability and composability: if a module is to
be decomposed into or composed with others, any outside connection should be
clearly marked. Continuity: what other element might be impacted by a change
should be obvious. Understandability: how can one understand A if its behaviour is
influenced by B in some tricky way? One of the problems in applying this principle
is that there is more to intermodule coupling than procedure calls; data sharing is an
important source of indirect coupling.

Information Hiding

All information about a module should be private to the module unless it is
specifically declared public. The assumption is made that every module is known to
the rest of the world through some official description or interface. The whole text of
the module itself could play the role of the interface; however, the principle states
that this should not in general be the case.

The interface should include only some of the module's properties - the rest
should remain private.

The fundamental reason behind this principle is the continuity criterion. If a
module changes, but only in a way that affects its private elements, not the interface,
then other modules that use it (client modules) will not be affected. The interface
should be the description of the module's function(s); anything that relates to the
implementation of these functions should be kept private, so as to preserve other
modules from later reversals of implementation decisions. Information hiding does
not imply protection in the sense of security restrictions: client designers may be
permitted to see all the details, but they should be unable to write client modules
whose correct functioning depends on private information. Information hiding
emphasizes the need to separate function from implementation.
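
A short Python sketch of the idea (Python hides information by convention rather
than enforcement; the class is illustrative):

class Counter:
    # Public interface: increment() and value. The underlying storage is
    # an implementation detail and may change without affecting clients.

    def __init__(self):
        self._count = 0        # private by convention (leading underscore)

    def increment(self, by=1):
        self._count += by

    @property
    def value(self):
        return self._count

c = Counter()
c.increment()
assert c.value == 1            # clients depend only on the interface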

White Box Testing

Contrary to black-box testing, software is viewed as a white box, or glass box,
in white-box testing, as the structure and flow of the software under test are visible
to the tester. Testing plans are made according to the details of the software
implementation, such as programming language, logic, and styles. Test cases are
derived from the program structure. White-box testing is also called glass-box
testing, logic-driven testing, or design-based testing.

There are many techniques available in white-box testing, because the problem
of intractability is eased by specific knowledge and attention on the structure of the
software under test. The intention of exhausting some aspect of the software is still
strong in white-box testing, and some degree of exhaustion can be achieved, such as
executing each line of code at least once (statement coverage), traversing every
branch statement (branch coverage), or covering all the possible combinations of
true and false condition predicates (multiple condition coverage).
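
A tiny Python sketch contrasting statement and branch coverage for a hypothetical
two-branch function:

def classify(x):
    if x >= 0:
        label = "non-negative"
    else:
        label = "negative"
    return label

# A single test (x = 1) executes the if-statements but misses the else
# branch; full branch coverage needs at least one case per outcome.
assert classify(1) == "non-negative"
assert classify(-1) == "negative"   # required for full branch coverage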

Control-flow testing, loop testing, and data-flow testing all map the
corresponding flow structure of the software into a directed graph. Test cases are
carefully selected based on the criterion that all the nodes or paths are covered or
traversed at least once. By doing so we may discover unnecessary "dead" code:
code that is of no use or never gets executed at all, and which cannot be discovered
by functional testing.

In mutation testing, the original program code is perturbed and many
mutated programs are created, each containing one fault. Each faulty version of the
program is called a mutant. Test data are selected based on their effectiveness in
failing the mutants. The more mutants a test case can kill, the better the test case is
considered. The problem with mutation testing is that it is too computationally
expensive to use. The boundary between the black-box approach and the white-box
approach is not clear-cut. Many testing strategies mentioned above may not be
safely classified into black-box testing or white-box testing. This is also true for
transaction-flow testing, syntax testing, finite-state testing, and many other testing
strategies not discussed in this text. One reason is that all the above techniques need
some knowledge of the specification of the software under test. Another reason is
that the idea of specification itself is broad: it may contain any requirement,
including the structure, programming language, and programming style, as part of
the specification content.
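
A minimal Python sketch of the mutation idea (the program and mutant are
invented for illustration):

def original(a, b):
    return a + b

def mutant(a, b):
    return a - b            # mutant: '+' perturbed to '-'

def weak_test(fn):
    return fn(0, 0) == 0    # cannot distinguish the two versions

def strong_test(fn):
    return fn(2, 2) == 4    # fails for the mutant

assert weak_test(original) and weak_test(mutant)          # mutant survives
assert strong_test(original) and not strong_test(mutant)  # mutant is killed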

We may be reluctant to consider random testing as a testing technique. The test
case selection is simple and straightforward: test cases are randomly chosen. Studies
indicate that random testing is more cost-effective for many programs. Some very
subtle errors can be discovered at low cost. It is also not inferior in coverage to
other, carefully designed testing techniques. One can also obtain a reliability
estimate using random testing results based on operational profiles. Effectively
combining random testing with other testing techniques may yield more powerful
and cost-effective testing strategies. White-box testing is testing from the inside:
tests that go in and exercise the actual program structure.
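
A minimal Python sketch of random testing against a trusted oracle (here the
built-in sorted() plays both the unit under test and the oracle, purely for
illustration):

import random

def my_sort(xs):
    # Unit under test (assumed implementation for the sketch).
    return sorted(xs)

for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert my_sort(xs) == sorted(xs), "failure on input %r" % xs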

UNIT – IV

SOFTWARE QUALITY ASSURANCE

Documentation Standards

Documentation standards in a software project are particularly important, as
documents are the only tangible way of representing the software and the software
process. Standardized documents have a consistent appearance, structure and
quality, and should therefore be easier to read and understand.

There are three types of documentation standards:

1. Documentation process standards These standards define the process which
   should be followed for document production.
2. Document standards These are standards that govern the structure and
   presentation of documents.
3. Document interchange standards These are standards that ensure that all
   electronic copies of documents are compatible.

Figure: A document production process including quality checks

Process standards define the process used to produce documents. This means
defining the procedures involved in document development and the software tools
used for document production. Checking and refinement procedures which ensure
that high-quality documents are produced should also be defined.

Document process quality standards must be flexible and must be able to
cope with all types of document. For working papers or memos, there is no need for
explicit quality checking. However, where documents are formal documents used
for further development or are released to customers, a formal quality process
should be adopted. The figure is a model of one possible process.

Drafting, checking, revising and redrafting is an iterative process. It should
continue until a document of acceptable quality is produced. The acceptable quality
level depends on the document type and the potential readers of the document.

Document standards should apply to all documents produced during the course
of the software development. Documents should have a consistent style and
appearance, and documents of the same type should have a consistent structure.
Although document standards should be adapted to the needs of a specific project, it
is good practice for the same 'house style' to be used in all of the documents
produced by an organization.

Examples of document standards which may be developed are:

1. Document identification standards As large systems projects may produce
   thousands of documents, each document must be uniquely identified. For formal
   documents, this identifier may be the formal identifier defined by the
   configuration manager. For informal documents, the style of the document
   identifier should be defined by the project manager.
2. Document structure standards. Each class of document produced during a software
project should follow some standard structure. Structure standards should
define the sections to be included and should specify the conventions used for
page numbering, page header and footer information, and section and sub-
section numbering.
3. Document presentation standards Document presentation standards define a house
style for documents and they contribute significantly to document consistency.
They include the definition of fonts and styles used in the document, the use of
logos and company names, the use of colour to highlight document structure, etc.
4. Document update standards As a document evolves to reflect changes in the
system, a consistent way of indicating document changes should be used. You
can use a different cover colour to indicate a new document version, and change
bars in the margin to indicate modified or added paragraphs.

Document interchange standards are important, as electronic copies of documents
are interchanged. The use of interchange standards allows documents to be
transferred electronically and re-created in their original form.

Assuming that the use of standard tools is mandated in the process standards,
interchange standards define the conventions for using these tools. Examples of
interchange standards include the use of an agreed standard macro set if a text
formatting system is used for document production, or the use of a standard style
sheet if a word processor is used. Interchange standards may also limit the fonts and
text styles used because of different printer and display capabilities.

Process and Quality Standards

Process and Product Quality

An underlying assumption of quality management is that the quality of the
development process directly affects the quality of delivered products. This
assumption is derived from manufacturing systems where product quality is
intimately related to the production process. Indeed, in automated mass production
systems, once an acceptable level of process quality has been attained, product
quality naturally follows. The figure illustrates this approach to quality assurance.

Figure: Process based quality

Process quality is particularly important in software development. The
reason for this is that it is difficult to measure software attributes, such as
maintainability, without using the software for a long period. Quality improvement
focuses on identifying good-quality products, examining the processes used to
develop these products, and then generalizing these processes so that they may
be applied across a range of projects. However, the relationship between software
process and software product quality is complex. Changing the process does not
always lead to improved product quality.

There is a clear link between process and product quality in manufacturing
because the process is relatively easy to standardize and monitor. Once
manufacturing systems are calibrated, they can be run again and again to output
high-quality products. Software is not manufactured but is designed. As software
development is a creative rather than a mechanical process, the influence of
individual skills and experience is significant. External factors, such as the novelty of
an application or commercial pressure for an early product release, also affect
product quality irrespective of the process used.

Nevertheless, process quality has a significant influence on the quality of the
software. Process quality management involves:

1. Defining process standards, such as how reviews should be conducted, when
   reviews should be held, etc.
2. Monitoring the development process to ensure that the standards are being
followed.
3. Reporting the software process to project management and to the buyer of the
software.

A danger of process-based quality assurance is that the prescribed process
may be inappropriate for the type of software which is being developed. For
example, process quality standards may specify that a specification must be
complete and approved before implementation can begin. However, some systems
may require prototyping which involves program implementation. The quality team
may suggest that this prototyping should not be carried out because its quality
cannot be monitored. In such situations, senior management must intervene to
ensure that the quality process supports rather than hinders product development.

Quality Planning

Quality planning should begin at an early stage in the software process. A quality
plan should set out the desired product qualities. It should define how these are to
be assessed. It therefore defines what ‘high quality’ software actually means.
Without such a definition, different engineers may work in opposing ways, so that
different product attributes are optimized. The result of the quality planning process
is a project quality plan.

The quality plan should select those organizational standards that are
appropriate to a particular product and development process. New standards may
have to be defined if the project uses new methods and tools. Humphrey (1989), in
his classic book on software management, suggests an outline structure for a quality
plan. This includes:

1. Product introduction A description of the product, its intended market, and the
   quality expectations for the product.
2. Product plans The critical release dates and responsibilities for the product,
   along with plans for distribution and product servicing.
3. Process descriptions The development and service processes which should be
   used for product development and management.
4. Quality goals The quality goals and plans for the product, including an
   identification and justification of critical product quality attributes.
5. Risks and risk management The key risks which might affect product quality
   and the actions to address these risks.

When writing quality plans, you should try to keep them as short as possible.
If the document is too long, engineers will not read it and this will defeat the
purpose of producing a quality plan.

There is a wide range of potential software quality attributes (see figure) that
should be considered during the quality planning process. In general, it is not
possible for any system to be optimized for all of these attributes so a critical part of
quality planning is selecting critical quality attributes and planning how these can be
achieved.

The quality plan should define the most significant quality attributes for the
product being developed. It may be that efficiency is paramount and other factors
have to be sacrificed to achieve this. If this is set out in the plan, the engineers
working on the product can cooperate to achieve it. The plan should also define the
quality assessment process: a standard way of assessing whether some quality, such
as maintainability, is present in the product.

Software Measurement and Metrics

Software measurement is concerned with deriving a numeric value for some
attribute of a software product or a software process. By comparing these values to
each other and to standards which apply across an organization, it is possible to
draw conclusions about the quality of software processes.

The use of systematic software measurement and metrics is still relatively
uncommon. There is a reluctance to introduce measurement because the benefits are
unclear. One reason for this is that, in many companies, the software processes used
are still poorly organized and are not sufficiently mature to make use of
measurements. Another reason is that there are no standards for metrics and hence
limited support for data collection and analysis. Most companies will not be
prepared to introduce measurement until such standards and tools are available.

A software metric is any type of measurement which relates to a software
system, process or related documentation. Examples are measures of the size of a
product in lines of code; the Fog index (Gunning, 1962), which is a measure of the
readability of a passage of written text; the number of reported faults in a delivered
software product; and the number of person-days required to develop a system
component.
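
Two of these metrics are easy to sketch in Python (the Fog index formula follows
Gunning's definition; the line-counting rule is a simplifying assumption):

def lines_of_code(source):
    # Crude size metric: count non-blank, non-comment lines.
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

def fog_index(words, sentences, complex_words):
    # Gunning (1962): 0.4 * (average sentence length + % complex words).
    return 0.4 * (words / sentences + 100.0 * complex_words / words)

print(lines_of_code("x = 1\n\n# comment\ny = 2\n"))                    # -> 2
print(round(fog_index(words=200, sentences=10, complex_words=20), 1))  # -> 12.0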

Metrics may be either control metrics or predictor metrics. Control metrics
are usually associated with software processes; predictor metrics are associated with
software products. Examples of control or process metrics are the average effort and
time required to repair reported defects. Examples of predictor metrics are the
cyclomatic complexity of a module, the average length of identifiers in a program
and the number of attributes and operations associated with objects in a design.
Both control and predictor metrics may influence management decision making, as
shown in the figure.

The figure shows some external quality attributes which might be of interest and
internal attributes which can be measured and which may be related to the external
attributes. The diagram suggests that there may be a relationship between external
and internal attributes, but it does not say what that relationship is. If the measure of
the internal attribute is to be a useful predictor of the external software characteristic,
three conditions must hold (Kitchenham, 1990):

Figure Relationships between internal and external software attributes

1. The internal attribute must be measured accurately.
2. A relationship must exist between what we can measure and the external
   behavioural attribute.
3. This relationship is understood, has been validated, and can be expressed in
   terms of a formula or model.

Model formulation involves identifying the functional form of the model (linear,
exponential, etc.) by analysis of collected data, identifying the parameters which are
to be included in the model, and calibrating these using existing data. Such model
development, if it is to be trusted, requires experience in statistical techniques; a
professional statistician should be involved in the process.

The measurement process

A software measurement process that may be part of a quality control process
is shown in the figure. Each of the components of the system is analysed separately and
the different values of the metric compared both with each other and, perhaps, with
historical measurement data collected on previous projects. Anomalous
measurements should be used to focus the quality assurance effort on components
that may have quality problems.

The key stages in this process are:

1. Choose measurements to be made The questions that the measurement is intended
   to answer should be formulated, and the measurements required to answer
   these questions defined. Measurements which are not directly relevant to
   these questions need not be collected. Basili's GQM (Goal-Question-Metric)
   paradigm (Basili and Rombach, 1988) is a good approach to use when deciding
   what data is to be collected.
2. Select components to be assessed It may not be necessary or desirable to assess
   metric values for all of the components in a software system. In some cases, a
   representative selection of components may be chosen for measurement. In
   others, components which are particularly critical, such as core components
   that are in almost constant use, may be assessed.
3. Measure component characteristics The selected components are measured and
   metric values are computed. This normally involves processing the
   component representation (design, code, etc.) using an automated data
   collection tool. This may be specially written or may already be incorporated
   in CASE tools that are used in an organization.

Figure: The process of product measurement

4. Identify anomalous measurements Once the component measurements have been
   made, they should be compared to each other and to previous measurements
   recorded in a measurement database. You should look for unusually high or
   low values for each metric, as these suggest that there could be problems with
   the component exhibiting those values.
5. Analyse anomalous components Once components which have anomalous
   values for particular metrics have been identified, you should examine these
   components to decide whether or not the anomalous metric values mean that
   the quality of the component is compromised. An anomalous metric value for
   complexity (say) does not necessarily mean a poor-quality component. There
   may be some other reason for the high value, and it may not mean that there
   are component quality problems.

Collected data should be maintained as an organizational resource, and historical
records of all projects should be maintained even when data has not been used
during a particular project. Once a sufficiently large measurement database has
been established, comparisons across projects may be made and specific metrics
can be refined according to organizational needs.
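
Step 4 of the process above can be sketched in Python: flag components whose
metric values lie unusually far from the rest (the component names, values and the
one-standard-deviation threshold are illustrative):

from statistics import mean, stdev

complexity = {"parser": 12, "lexer": 9, "emitter": 11, "optimizer": 48}

values = list(complexity.values())
mu, sigma = mean(values), stdev(values)

# Flag values more than one standard deviation from the mean
# (a deliberately low bar for this tiny sample).
anomalous = [name for name, v in complexity.items()
             if abs(v - mu) > sigma]
print(anomalous)   # ['optimizer'] stands out for manual analysis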

Product metrics

Product metrics are concerned with characteristics of the software itself.
Unfortunately, software characteristics that can be easily measured, such as size and
cyclomatic complexity, do not have a clear and universal relationship with quality
attributes such as understandability and maintainability. The relationships vary
depending on the development process and technology and the type of system being
developed. Organisations interested in measurement have to construct a historical
database, which can then be used to discover how software product attributes are
related to the qualities of interest to the organization.

Product metrics fall into two classes:

1. Dynamic metrics, which are collected by measurements made of a program in
   execution.
2. Static metrics, which are collected by measurements made of the system
   representations such as the design, program or documentation.

These different types of metric are related to different quality attributes.
Dynamic metrics help to assess the efficiency and the reliability of a program,
whereas static metrics help to assess the complexity, understandability and
maintainability of a software system.

Dynamic metrics are usually fairly closely related to software quality
attributes. It is relatively easy to measure the execution time required for particular
functions and to assess the time required to start up a system. These relate directly
to the system’s efficiency.

Process Improvement Process

Process improvement means understanding existing processes and changing
these processes to improve product quality and/or reduce costs and development
time. Most of the literature on process improvement has focused on improving
processes to improve product quality and, in particular, to reduce the number of
defects in delivered software. Once this has been achieved, cost or schedule
reduction might become the principal improvement goals.

There is a strong relationship between the quality of the developed software
product and the quality of the software process used to create that product. By
improving the software process, it is hoped that the related product quality is
correspondingly enhanced. Software processes are inherently complex and involve a
very large number of activities. Like products, processes also have attributes or
characteristics, as shown in the figure.

It is not possible to make process improvements that optimize all process
attributes simultaneously. For example, if a rapid development process is required
then it may be necessary to reduce the process visibility. Making a process visible
means producing documents at regular intervals. This will slow down the process.

Process improvement does not simply mean adopting particular methods or
tools, or using some model of a process that has been used elsewhere. Although
organizations which develop the same type of software clearly have much in
common, there are always local organizational factors, procedures and standards
which influence the process. The simple introduction of published process
improvements is unlikely to be successful. Process improvement should always be
seen as an activity that is specific to an organization or a part of a larger
organization.

The process characteristics

Understandability: To what extent is the process explicitly defined, and how easy is
it to understand the process definition?
Visibility: Do the process activities culminate in clear results, so that the progress of
the process is externally visible?
Supportability: To what extent can the process activities be supported by CASE
tools?
Acceptability: Is the defined process acceptable to and usable by the engineers
responsible for producing the software product?
Reliability: Is the process designed in such a way that process errors are avoided or
trapped before they result in product errors?
Robustness: Can the process continue in spite of unexpected problems?
Maintainability: Can the process evolve to reflect changing organizational
requirements or identified process improvements?
Rapidity: How fast can the process of delivering a system from a given specification
be completed?

A generic model of the process improvement process is illustrated in the figure.
Process improvement is a long-term, iterative process. Each of the activities shown
in the figure might last several months. Successful process improvement requires
organizational commitment and resources. It must be sanctioned by senior
management and must have an associated budget to support improvements.

There are a number of key stages in the process improvement process:

1. Process analysis Process analysis involves examining existing processes and
   producing a process model to document and understand the process. In some
   cases, it may be possible to analyse the process quantitatively; measurements
   made during the analysis add extra information to the process model.
   Quantitative analysis before and after changes have been introduced allows an
   objective assessment of the benefits (or the problems) of the process change.
2. Improvement identification This stage is concerned with using the results of the
   process analysis to identify quality, schedule or cost bottlenecks where process
   factors might adversely influence the product quality. Process improvement
   should focus on loosening these bottlenecks by proposing new procedures,
   methods and tools to address the problems.
3. Process change introduction Process change introduction means putting new
procedures, methods and tools into place and integrating them with other
process activities. It is important to allow enough time to introduce changes and
to ensure that these changes are compatible with other process activities and with
organizational procedures and standards.
4. Process change training Without training, it is not possible to gain the full benefits
   from process changes. They may be rejected by the managers and engineers
   responsible for development projects. All too commonly, process changes have
   been imposed without adequate training, and the effect of these changes has
   been to degrade rather than improve product quality.
5. Change tuning Proposed process changes will never be completely effective as
   soon as they are introduced. There needs to be a tuning phase where minor
   problems are discovered, and modifications to the process are proposed and
   then introduced. This tuning phase should last for several months, until the
   development engineers are happy with the new process.

The SEI Process Capability Maturity Model

The Software Engineering Institute (SEI) at Carnegie Mellon University is a
DoD-funded institute whose mission is software technology transfer. It was
established to improve the capabilities of the US software industry and, specifically,
the capabilities of those organizations who receive DoD funding for large defence
projects. In the mid-1980s, the SEI initiated a study of ways of assessing the
capabilities of contractors. They were particularly interested in contractors who were
bidding for software projects funded by the US Department of Defense.

The outcome of this capability assessment work was the SEI Software
Capability Maturity Model. This has been tremendously influential in convincing
the software engineering community, in general, to take process improvement
seriously. The SEI model classifies software processes into five different levels, as
shown in the figure.

These five levels are defined as follows:

1. Initial level At this level, an organization does not have effective management, quality assurance and configuration control procedures in place. If formal procedures for project control exist, there are no organizational mechanisms to ensure that they are used consistently.
2. Repeatable level At this level, an organization has formal management, quality assurance and configuration control procedures in place. It is called the repeatable level because the organization can successfully repeat projects of the same type. However, there is a lack of a formal process model. Project success is dependent on individual managers motivating a team and on organizational folklore acting as an intuitive process description.
3. Defined level At this level, an organization has defined its process and thus has a basis for qualitative process improvement. Formal procedures are in place to ensure that the defined process is followed in all software projects.
4. Managed level A Level 4 organization has a defined process and a formal programme of quantitative data collection. Process and product metrics are collected and fed into the process improvement activity.
5. Optimizing level At this level, an organization is committed to continuous process improvement. Process improvement is budgeted and planned, and is an integral part of the organization's process.

Figure: The SEI capability maturity model

The maturity levels in the initial version of the model were criticized as being too imprecise. After experience with using the model for capability evaluation, as discussed in the following section, a revised version was adopted (Paulk et al., 1993). The five levels were retained but were defined more specifically in terms of key process areas (Figure). Process improvement should be concerned with establishing these key processes, and not with simply reaching some arbitrary level in the model. A similar approach, based on key practices, has been used to derive a requirements engineering process maturity model.

The SEI work on this model has been influenced by methods of statistical quality control in manufacturing. Humphrey (1988), in the first widely published description of the model, states:

W.E. Deming, in his work with the Japanese industry, after World War II, applied
the concepts of statistical process control to industry. While there are important
differences, these concepts are just as applicable to software as they are to
automobiles, cameras, wristwatches and steel.

The SEI maturity model is an important contribution, but it should not be taken as a definitive capability model for all software processes. The model was developed to assess the capabilities of companies developing defense software. These are large, long-lifetime software systems which have complex interfaces with hardware and other software systems. They are developed by large teams of engineers and must follow the development standards and procedures laid down by the US Department of Defense.

Figure: Key process Areas (© 1993 IEEE)

The first three levels of the SEI model are relatively simple to understand. The key process areas include practices which are currently used in industry. Some organizations have reached the higher levels of the model (Diaz and Sligo, 1997), but the standards and practices that are applicable at those levels are not widely understood. In some cases, best practice might diverge from the SEI model because of local organizational circumstances.

Problems at the higher levels do not negate the usefulness of the SEI model. Most organizations are at lower levels of process maturity. There are, however, three more serious problems with the SEI model. These may mean that it is not a good predictor of an organization's capability to produce high-quality software.

The major problems with the capability maturity model are:

1. The model focuses exclusively on project management rather than product development. It does not take into account an organization's use of technologies such as prototyping, formal or structured methods, tools for static analysis, etc.

2. It excludes risk analysis and resolution as key process areas (Bollinger and McGowan, 1991).
3. The domain of applicability of the model is not defined. The authors of the model clearly recognize that the model is not appropriate for all organizations. However, they do not describe the types of organizations where they think the model should and should not be used. The consequence of this is that the model has been oversold as a way to tackle software process problems. For smaller organizations in particular, the model is too bureaucratic. Humphrey has recognized this and has now developed smaller-scale process improvement strategies (Humphrey, 1995).

Process classifications

The process maturity classification proposed in the SEI model is appropriate for
large, long-lifetime software projects undertaken by large organizations. There are
many other types of software project and organization where this view of process
maturity should not be applied directly.

Different types of process can be identified.

1. Informal processes These are processes where there is no strictly defined process model. The process used is chosen by the development team. Informal processes may use formal procedures, such as configuration management, but the procedures to be used and the relationships between procedures are not predefined.
2. Managed processes These are processes where there is a defined process model in place. This is used to drive the development process. The process model defines the procedures used, their scheduling and the relationships between the procedures.
3. Methodical processes These are processes where some defined development method or methods (such as systematic methods for object-oriented design) are used. These processes benefit from CASE tool support for design and analysis processes.
4. Improving processes These are processes which have inherent improvement
objectives. There is a specific budget for process improvements and
procedures in place for introducing such improvements. As part of these
improvements, quantitative process measurement may be introduced.

These classifications obviously overlap and a process may fall into several
classes. For example, the process may be informal in that it is chosen by the
development team. The team may choose to use a particular design method. They
may also have a process-improvement capability. In this case, the process would be
classified as informal, methodical and improving.

Figure shows different types of product and the type of process that might be
used for their development.

Figure: Process applicability

Figure: Process tool support

The classes of system shown in Figure may overlap. Therefore, small systems
which are re-engineered can be developed using a methodical process. Large
systems always need a managed process. However, if the domain is not well

understood, it may be difficult to choose an appropriate design method. Large
systems may therefore be developed using a managed process that is not based on
any particular design method.

Process classification provides a basis for choosing the right process to be


used when a particular type of product is to be developed. For example, say a
program is needed to support a transition from one type of computer system to
another. This has a relatively short lifetime. Its development does not require the
standards and management procedures which are appropriate for software which
will be used for many years.

Process classification recognizes that the process affects product quality. It


does not assume, however, that the process is always the dominant factor. It
provides a basis for improving different types of process. Different types of process
improvements may be applied to the different types of process. For example, the
improvements to methodical processes might be based on better method training,
better integration of requirements and design, improved CASE tools etc.

Most software processes now have some CASE tool support, so they are supported processes. Methodical processes are usually supported by analysis and design workbenches. However, processes may have other kinds of tool support (for example, prototyping tools and testing tools) irrespective of whether or not a structured design method is used.

The tool support that can be effective in supporting processes depends on the
process classification. For example, informal processes can use generic tools such as
prototyping languages, compilers, debuggers, word processors, etc. They will rarely
use more specialized tools in a consistent way. Figure shows that a spectrum of
different tools can be used in software development. The effectiveness of particular
tools depends on the type of process that is used.

Analysis and design workbenches are only likely to be cost-effective where a


methodical process is being followed. Specialized tools are developed as part of the
process improvement activity to provide specific support for improving certain
process activities.

Functions of the ISO 9001 Clauses

Clause 4.1: Management responsibility

The model recognizes the importance of management responsibility for quality


throughout the organization. Whilst it is impossible for senior management to
oversee everything personally, the standard explicitly provides for a management
representative who is directly responsible for quality and is accountable to senior
management.

This clause also sets out the basic principles for establishing the quality system
within the organization and sets out many of its functions, which are then described
in greater detail in later sections.

Clause 4.2: Quality system

The model requires the organization to set up a quality system. The system
should be documented and a quality plan and manual prepared. The scope of the
plan is determined by the activities undertaken and consequently the standard
(ISO9001/2/3) employed. The focus of the plan should be to ensure that activities are
carried out in a systematic way and documented.

Clause 4.3: Contract review

Contract review specifies that each customer order should be regarded as a contract.
Order entry procedures should be developed and documented. The aim of these
procedures is to:

Ensure that customer requirements are clearly defined in writing.


Highlight differences between the order and the original quotation, so that
they may be agreed.
Ensure that the requirements can be met.

The aim of this clause is to ensure that both the supplier and customer
understand the specified requirements of each order and to document this agreed
specification to prevent misunderstandings and conflict at a later date.

Clause 4.4: Design control

Design control procedures are required to control and verify design activities, taking the results from market research through to practical designs. The key activities covered are:

Planning for research and development.
Assignment of activities to qualified staff.
Identification of interfaces between relevant groups.
Preparation of a design brief.
Production of technical data.
Verification that the output from the design phases meets the input requirements.
Identification and documentation of all changes and modifications.

The aim of this section is to ensure that the design phase is carried out
effectively and to ensure that the output from the design phase accurately reflects
the input requirements.

Clause 4.5: Document control

Three levels of documentation are recognized by the standard:

Level 1: planning and policy documents.


Level 2: procedures.
Level 3: detailed instructions.

The top level documents the quality plan and sets out policy on key quality
issues. Each level adds more detail to the documentation. Where possible, existing
documentation should be incorporated. The aim should be to provide systematic
documentation, rather than simply to provide more documents. It is important that
each level of documentation is consistent with the one above it, providing greater
detail as each level is descended.

It is a common complaint that the standard requires a prohibitive amount


of documentation to be produced. Supporters of the standard argue that

systematizing of documentation can actually lead to a reduction in volume due to
the removal of obsolete and surplus documents. It is more likely that some
reduction will be achieved, which will offset greater volumes in other areas.

Good existing documentation should be incorporated into any new system, and this is facilitated by the standard not specifying a particular format, but merely specifying that documents be fit for their intended purpose.

Clause 4.6: Purchasing

The purchasing system is designed to ensure that all purchased products and
services conform to the requirements and standards of the organization. The
emphasis should be placed on verifying the supplier's own quality management
procedures. Where a supplier has also obtained external accreditation for their
quality management systems, checks may be considerably simplified. As with all
procedures, they should be documented.

Clause 4.7: Purchaser-supplied product

All services and products supplied by the customer must be checked for suitability, in the same way as supplies purchased from any other supplier. In order to ensure this, procedures should be put in place and documented, so that these services and products may be traced through all processes and storage.

Clause 4.8: Product identification and traceability

To ensure effective process control and to correct any non-conformance, it is


necessary to establish procedures to identify and trace materials from input to
output. This also enables quality problems to be traced to root causes. It may be that
the problem can be traced back to supplied materials, in which case the problem
may lie outside the quality system altogether.

Clause 4.9: Process control

Process control requires a detailed knowledge of the process itself. This must be
documented, often in graphical form, as a process flow chart or similar. Procedures

for setting up or calibration must also be recorded. Documented instructions should
be available to staff to ensure that they have the capability to carry out the task as
specified.

It is staggering how often organizations do not understand their own processes


properly. The discipline of documenting the actual process precisely and
unambiguously for accreditation purposes can be very educational.

Clause 4.10: Inspection and testing

Inspection and testing are required to ensure conformance in three stages:

Incoming materials or services.


In process.
Finished product and/or service.

All incoming supplies must be checked in some way. The method will vary
according to the status of the supplier's quality management procedures, from full
examination to checking evidence supplied with the goods.

Monitoring 'in process' is required to ensure that all is going according to plan. At the end of the process, any final inspection tests documented in the quality plan must be carried out. Evidence of conformity to quality standards, together with details of any supporting 'in-process' monitoring, may be included. In an effective system, however, the final inspection and test should not need to be as thorough as it otherwise would be. In addition, it should not reveal many problems, since they should have been eliminated by this stage.

Clause 4.11: Inspection, Measuring and testing equipment

Any equipment used for measuring and testing must be calibrated and
maintained. Checking and calibration activities should become part of regular
maintenance. Management should ensure that checks are carried out at the
prescribed intervals and efficient records are kept.

Clause 4.12: Inspection and testing status

All material and services may be classified in one of three categories:

1 Awaiting inspection or test.


2 Passed inspection.
3 Failed inspection.

This status should be clearly identifiable at any stage. It is important that


material awaiting inspection is not mistakenly allowed to miss inspection at any
stage, as non-conformance may go undetected.

Clause 4.13: Control of non-conforming product

The standard defines non-conforming product as all products or services


falling outside tolerance limits agreed in advance with the customer. Once again it
is not prescriptive about performance levels. All non-conforming products or
services should be clearly identified, documented and if possible physically
separated from the conforming product. Procedures should be established to
handle non-conforming products by reworking, disposal, re-grading or other
acceptable documented courses of action.

There are circumstances where the standard permits the sale of non-conforming product, provided that the customer is clearly aware of the circumstances and is generally offered a concession. Representatives of accreditation bodies suggest that this is an area where organizations often become lax after a while, relaxing procedures and allowing non-conforming product through.

Clause 4.14: Corrective action

Corrective action is the key to continual improvement. Such action should be implemented via a systematic programme which defines the duties of all parties. Records should be kept of any action taken so that future audits can investigate its effectiveness.

Clause 4.15: Handling, storage, packaging and delivery

Handling and associated activities must be designed to protect the quality


built into the product. Subcontractors employed for transportation should be
subject to the same documented procedures as internal employees. The scope of
this clause is determined by the contract with the customer. The clause covers all
activities which are the contractual obligation of the supplier.

Clause 4.16: Quality records

Quality records are vital to ensure that quality activities have actually been carried out. They form the basis for quality audits, both internal and external. They do not have to conform to a prescribed format, but must be fit for their intended purpose. As many will exist before the accredited system is implemented, the aim is to systematize and assimilate existing practice wherever possible, to reduce wasted effort in reproducing previous work in this area.

Clause 4.17: Internal quality audits

The quality system should be 'policed' from within the organization and not
dependent upon external inspection. Procedures should be established to set up
regular internal audits as part of normal management procedure. The role of internal
audits should be to identify problems early in order to minimize their impact and
cost.

Clause 4.18: Training

Training activities should be implemented and documented. In particular, written


procedures are required:

To establish training needs


To carry out training activity
To record the training requirements and completed activities for each member
of staff.

It is a requirement of the standard that, at all stages, the staff required to carry out a particular function have the skills, knowledge and tools necessary to do a proper job. Training refers not just to formal courses but to informal knowledge sharing as well.

Clause 4.19: Service

Where servicing procedures are required, they should be documented and verified. The procedures should ensure that servicing is actually carried out and that sufficient resources are available. It is necessary to set up good interfaces with the customer if this function is to be carried out effectively. The same monitoring procedures as are applied to internal processes should be carried out within the servicing function.

Clause 4.20: Statistical techniques

Statistical techniques are required to be used where appropriate. The standard


does not specify particular techniques or methods but does specify that once again
they should be appropriate for the intended purpose. Their use may be necessary in
order to satisfy other requirements, notably process control, detailed in Clause 4.9.
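
As an illustrative aside (not part of the standard's text), the short Java sketch below shows one simple statistical technique of the kind this clause alludes to: computing three-sigma control limits for a process measurement, as used in statistical process control. The sample data and class name are hypothetical.

// Sketch only: three-sigma control limits for a process measurement.
public class ControlLimits {
    public static void main(String[] args) {
        // hypothetical measurements of some process characteristic
        double[] samples = {9.8, 10.1, 10.0, 9.9, 10.3, 9.7, 10.2, 10.0};

        double mean = 0;
        for (double x : samples) mean += x;
        mean /= samples.length;

        double variance = 0;
        for (double x : samples) variance += (x - mean) * (x - mean);
        double sigma = Math.sqrt(variance / (samples.length - 1)); // sample standard deviation

        // a measurement outside mean +/- 3 sigma signals an out-of-control process
        System.out.printf("UCL = %.3f, centre = %.3f, LCL = %.3f%n",
                mean + 3 * sigma, mean, mean - 3 * sigma);
    }
}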

Comparison of the requirements of the three principal standards

Capability Maturity Model (CMM).

1. CMM has been developed at Carnegie Mellon since around 1987; it is still undergoing refinement.
2. The five CMM levels (in order of increasing maturity) are:

1. Initial -- ad hoc
2. Repeatable -- basic project management techniques are used
3. Defined -- a software engineering process is used
4. Managed -- a quantitative Q/A process is used
5. Optimizing -- the process itself can be refined to improve efficiency

3. Improvements claimed for more mature processes:

1. More reliable software
2. Better visibility into the process, particularly from a top-level management perspective
3. Less risk in contracting with firms that have a mature process in place

Requirements management

Goal 1: System requirements allocated to software are controlled to establish a


baseline for software engineering and management use.
Goal 2: Software plans, products, and activities are kept consistent with the system
requirements allocated to the software.

Project planning

Goal 1: Software estimates are documented for use in planning and tracking the
software project. Software project activities and commitments are planned and
documented.
Goal 2: Affected groups and individuals agree to their commitments related to the
software project.

Project tracking and oversight

Goal 1: Actual results and performances are tracked against the software plans.
Goal 2: Corrective actions are taken and managed to closure when actual results and
performance deviate significantly from the software plans.
Goal 3: Changes to software commitments are agreed to by the affected groups and
individuals.

Subcontract management

Goal 1: The prime contractor selects qualified software subcontractors.


Goal 2: The prime contractor and the software subcontractor agree to their
commitments to each other.
Goal 3: The prime contractor and the software subcontractor maintain ongoing
communications.

Goal 4: The prime contractor tracks the software subcontractor's actual results and
performance against its commitments.

Software quality assurance

Goal 1: Software quality assurance activities are planned.


Goal 2: Adherence of software products and activities to the applicable standards,
procedures, and requirements is verified objectively.
Goal 3: Affected groups and individuals are informed of software quality assurance
activities and results.
Goal 4: Noncompliance issues that cannot be resolved within the software project are addressed by senior management.

Configuration management

Goal 1: Software configuration management activities are planned.


Goal 2: Selected software work products are identified, controlled, and available.
Goal 3: Changes to identified software work products are controlled.
Goal 4: Affected groups and individuals are informed of the status and content of software baselines.

Level 3

Organization process focus

Goal 1: Software process development and improvement activities are coordinated


across the organization.
Goal 2: The strengths and weaknesses of the software processes used are identified
relative to a process standard.
Goal 3: Organization-level process development and improvement activities are
planned.

Organization process definition

Goal 1: A standard software process for the organization is developed and


maintained.

Goal 2: Information related to the use of the organization's standard software
process by the software projects is collected, reviewed, and made available.

Training program

Goal 1: Training activities are planned.


Goal 2: Training for developing the skills and knowledge needed to perform
software management and technical roles is provided.
Goal 3: Individuals in the software engineering group and software-related groups
receive the training necessary to perform their roles.

Integrated software management

Goal 1: The project's defined software process is a tailored version of the


organization's standard software process.
Goal 2: The project is planned and managed according to the project's defined
software process.

Software product engineering

Goal 1: The software engineering tasks are defined, integrated, and consistently
performed to produce the software.
Goal 2: Software work products are kept consistent with each other.

Intergroup coordination

Goal 1: The customer's requirements are agreed to by all affected groups.


Goal 2: The commitments between the engineering groups are agreed to by the
affected groups.
Goal 3: The engineering groups identify, track, and resolve intergroup issues.

Peer reviews

Goal 1: Peer review activities are planned.
Goal 2: Defects in the software work products are identified and removed.

Level 4

Quantitative process management

Goal 1: The quantitative process management activities are planned.


Goal 2: The process performance of the project's defined software process is
controlled quantitatively.
Goal 3: The process capability of the organization's standard software process is
known in quantitative terms.

Software quality management

Goal 1: The project's software quality management activities are planned.


Goal 2: Measurable goals for software product quality and their priorities are
defined.
Goal 3: Actual progress toward achieving the quality goals for the software products
is quantified and managed.

Level 5

Defect prevention

Goal 1: Defect prevention activities are planned.


Goal 2: Common causes of defects are sought out and identified.
Goal 3: Common causes of defects are prioritized and systematically eliminated.

Technology change management

Goal 1: Incorporation of technology changes is planned.


Goal 2: New technologies are evaluated to determine their effect on quality and
productivity.
Goal 3: Appropriate new technologies are transferred into normal practice across
the organization.

Process change management

Goal 1: Continuous process improvement is planned.


Goal 2: Participation in the organization's software process improvement activities is organization wide.
Goal 3: The organization's standard software process and the projects' defined
software processes are improved continuously.

1. A technologist's view of how to build quality software.
2. The developers of CMM focus almost exclusively on the management aspects of the software process, not the technical aspects.
3. From their perspective, proper management is far more important to achieving quality software than is proper software engineering technology.
4. Technologists tend to believe the opposite (but this belief is probably mistaken).
5. For example, when technologists read that "CMM does not currently address expertise in particular application domains, advocate specific software technologies, or suggest how to select, hire, motivate, and retain competent people," they view CMM as of limited utility when it comes to assessing what it really takes to build quality software. So, if we assume that a mature software process is in place, here is one technologist's view of the top ten things that really count for achieving quality software:

1. A good requirements specification, produced with copious end-user feedback.
2. Thorough and formal test procedures at all levels of development.
3. The 7+/-2 hierarchy rule, 25-public-operation rule, and 25-line rule.
4. A document-as-you-go policy, where document commentary is written with the following question in mind: "What will I need to know when I come back here in six months and want to understand what's going on?"
5. Thoughtfully written version control log messages and/or explicit log files.
6. One set of conventions per project, and one extremely nasty manager to enforce them.
7. Thorough and formal revision control procedures, including ready access to at least the last three working versions of the system.
8. Teamwork, including frequent critical reviews.
9. Good specification and design roadmaps, in whatever diagramming style(s) you like.
10. Real deadlines.

Capability Maturity Model Integration (CMMI):

The CMMI project is a collaborative effort to provide models for achieving


product and process improvement. The primary focus of the project is to build tools
to support improvement of processes used to develop and sustain systems and
products.

The output of the CMMI project is a suite of products, which provides an integrated approach across the enterprise for improving processes, while reducing the redundancy, complexity and cost resulting from the use of separate and multiple CMMs.

CONOPS:

The concept of operations (CONOPS) for the CMMI product suite includes:

Background and description of CMMI
Process for using the CMMI
The scenarios for use
The process for maintenance and support
The approach for adding new disciplines

It is intended that the CONOPS not only describe the use of the proposed product suite, but also be used to obtain consensus from the developers, users and discipline owners on the infrastructure required to develop, implement, transition and sustain the CMMI product suite.

Why CMMI:

CMMs have been in use for various disciplines with the intent of providing a model of best practices for each intended discipline. But in a complex environment, such as development where several of these disciplines are employed, the collective use of individual models has resulted in redundancies, additional complexity, increased costs and, at times, discrepancies.

To improve the efficiency of model use and increase the return on investment,
the CMMI project was created to provide a single integrated set of models.

Since not all organizations employ every discipline, the project also provides
CMMI models for individual disciplines.

Since not all processes apply equally to all organizations, the CMMI models
are tailorable to an organization’s mission and business objectives and criteria for
tailoring are provided.

The Framework of CMMI:

Initially, the CMMI project includes the disciplines of systems engineering, software engineering and Integrated Product and Process Development (IPPD).

A framework is provided that generates products for each of these disciplines, as well as allowing for new disciplines to be added in the future. A common set of process areas is provided that forms the core of an integrated capability model and applies to all disciplines.

The effort to define and develop the CMMI is being sponsored by the Office of the Secretary of Defense / Acquisition and Technology (OSD (A&T)). The industry sponsor is the Systems Engineering Committee of the National Defense Industrial Association (NDIA). The effort includes the design, implementation, transition and sustainment activities. The CMMI project is a collaborative effort with participation by OSD, the Services, government agencies, and industry through the NDIA Systems Engineering Committee and the Software Engineering Institute (SEI) of Carnegie Mellon University. The management structure for the project includes a steering group made up of government, industry and the SEI, reporting to OSD (A&T). This steering group is responsible for the overall direction, guidance and requirements provided to the project manager and product development team.

The responsibility for project management has been assigned to the SEI. A Product Development Team, consisting of the SEI, is developing the CMMI product suite. The initial review of the product suite is accomplished by stakeholders, the CMMI reviewers, consisting of industry and government representatives. As new disciplines are added or the project moves through the sustainment phase, the makeup of the management structure may change as necessary.

The initial CMMI product suite includes a framework for generating CMMI products, and a set of CMMI products produced by the framework. The framework includes common elements and best features of the existing models, as well as rules and methods for generating CMMI products. Users may select discipline-specific elements of the CMMI product suite based on their business objectives and mission needs.

The CMMI product suite will consist of a framework, Capability Maturity Model Integration models, training products, assessment products and a glossary.

Operational Use of CMMI:

The CMMI product suite was developed specifically for those users who are system and product developers and want to improve their processes and products. Tools and models are provided that enable users to:

Assess where they are.
Identify goals for future improvements.
Follow models of best practices to achieve those goals.
Use CMMI products to:
 Conduct training
 Perform assessments
 Do tailoring

Users of CMMI Product Suite:

Recognizing that development usually can be complex and requires different players, the users are:

Enterprise Executives
Product Decision makers
Product Developers

Product Evaluators
Product Owners
Process Champions
Process Improvement Sponsors
Process Improvement Groups
Process Developers
Process Implementers
Process Improvement Consultants
Trainers
Assessors
Discipline Specific Professional Organizations

Use of Models:

CMMI models are used for several purposes:

1. Guide process improvement efforts and help organizations establish and achieve improvement goals.
2. Provide a common language for cross-organizational communication and benchmarking.
3. Provide an integrating, organizing framework for organizational endeavors.
4. Help an organization understand what specific practices to perform, how to improve its capability in performing those practices, and what process areas to focus on next.

Use of Assessment Methods:

Assessments are an integral part of an organization's process improvement program. An assessment measures status against a reference model, motivates process improvements and provides a basis for action planning. The initial CMMI product suite contains the methodology for a full assessment. The full assessment is formal and robust, and is based on analysis of extensive data gathered from several sources, including questionnaires, interviews and documents.

For organizations for which the next maturity level represents a logical evolution of their practices, the staged representation might be preferable; for organizations that need to improve specific practices or process areas to meet their business needs, the continuous representation may be more adaptable.

Concept for Maintenance and Support:

Provision exists to maintain and evolve the CMMI product suite after development and initial release. The intent is to ensure that it is widely adopted and institutionalized, accepted nationally and internationally, kept up to date by continuous monitoring of the user community, and that new disciplines are added as necessary to further enhance enterprise-wide process improvement.

Further claims made for CMMI are that it:

Fully complies with ISO.
Ensures customer satisfaction.
Implements robust high-maturity practices.
Explicitly links management and engineering activities to business objectives.
Incorporates lessons learned from additional areas of best practice.

SIX SIGMA

What is Six Sigma?

Six sigma is a statistical measure of a process or product characteristic compared to its specification limits. A six sigma level process would exhibit no more than 3.4 defects per million opportunities.

Very few processes achieve this level of performance, and consequently most organizations endure very high costs due to poor quality. Most company processes produce upwards of 6000 defects per million opportunities, which for many is simply not good enough for today's competitive environment, where customer demands increase exponentially.
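
As an illustrative aside (not from the original text), the short Java sketch below shows the arithmetic behind these figures: defects per million opportunities (DPMO) is simply the observed defect rate scaled up to one million opportunities. The sample figures and class name are hypothetical.

// Sketch only: computing defects per million opportunities (DPMO),
// the metric behind the six sigma target of 3.4 defects per million.
public class DpmoExample {

    // DPMO = (observed defects / total defect opportunities) * 1,000,000
    static double dpmo(long defects, long units, int opportunitiesPerUnit) {
        return (double) defects / ((double) units * opportunitiesPerUnit) * 1_000_000;
    }

    public static void main(String[] args) {
        // hypothetical figures: 300 defects in 10,000 units,
        // each unit offering 5 opportunities for a defect
        double result = dpmo(300, 10_000, 5);
        System.out.printf("DPMO = %.1f%n", result); // 6000.0, the typical figure cited above
        System.out.println("Meets the six sigma level (<= 3.4 DPMO)? " + (result <= 3.4));
    }
}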

To achieve a six sigma level of performance, a systematic process-based methodology and project framework must be employed. The methods must operate within a defined deployment structure, which involves the development of personnel at various levels within the organization to operate and lead the six sigma implementation programme.

Elements of a Typical Six Sigma Programme

The elements of a typical six sigma programme include:

Developing the strategic context, rationale and drivers.
Executive development.
Project champion development.
Specialist six sigma practitioners, sometimes called Black Belts and Master Black Belts, who will lead six sigma projects.
Development of local six sigma facilitators and operators, sometimes called Green Belts and Yellow Belts.
Systematic project selection, management and review, usually focused on cost-measurable projects.

Benefits of Six Sigma

Six sigma benefits stem from a significant improvement in process performance, which in turn results in:

Dramatic reduction in defects, cycle time and cost.
Reduced reliance on inspection for quality.
Greatly improved customer satisfaction.
Reduced costs from rework and elimination of non-value-adding work.

Six Sigma Approach:

A six sigma programme will include:

Development of a top-down strategic support strategy.
Creation of a sound change framework with good 'enabling' processes.
A structured management sponsorship method.
A comprehensive training suite from the executive level down through the organizational structure.
Development of well-rounded 'Black Belts' and 'Green Belts' able to influence the change and teach others, as well as the statistical techniques.
A systematic approach to the design of processes, as well as a rigorous and consistent approach to improving existing ones.
Continued coaching, support and review.

Output of Six Sigma Programme:

The benefits of a six sigma programme when applied to an organization are:

Meet and exceed customer expectations.


Dramatically improve quality and reduce costs
Drive out waste.
Improve cost, cycle time and time to market.
Develop a cadre of people whose skills and knowledge will continue to bring
value to the business.
Achieve the strategic goals by mobilizing the organizations own people
towards meeting them in a short time frame.

Reviews and Audits

The principles behind the review process are:

1. Establishing what reviews are needed by the project.
2. What are the contents of the various reviews?
3. What should be the results of the review?

THE MANAGEMENT REVIEW PROCESS

1. Objectives:-

1. Ensuring that activities progress according to plan.
2. Changing project direction.
3. Identifying the need for alternative planning.
4. Maintaining global control of the project.

2. Inputs:-

1. Statement of objectives.
2. List of issues to be addressed.
3. Current project schedule and cost data
4. Report from other reviews or audits.
5. Reports of resources assigned to the project.
6. Data on the software elements completed.

3. Entry Criteria:-

Authorization is established in the project planning documents. The initiating event occurs when the review leader confirms a statement of objectives for the review meeting.

4. Management Review procedures

It includes

a) Planning: The review leader identifies the review team, schedules the meetings, and is responsible for the distribution of input materials.
b) Overview: A qualified person from the project conducts an overview session for the review team. This gives the team a better understanding of the project and helps it achieve maximal productivity.

c) Preparation: Each person studies the material and prepares for the review meeting.
d) Examination: Examine the project status to determine whether it complies with the expected status, and record all deviations from the expected status.
e) Rework: ----------------

5. Exit Criteria:-

The management review is complete when the statement of objectives has been addressed and the management review report has been issued.

6. Output:-

The review report identifies the project, the review team, inputs to the review, review objectives, and a list of issues and recommendations.

7. Auditability:-

The management review report is an auditable item.

THE TECHNICAL REVIEW PROCESS

1. Objectives:-

1. Evaluation of specific software elements.


2. Identification of any discrepancies from specifications and standards.
3. Recommendations after the examination of alternatives.

2. Responsibilities:-

1. Leader: responsible for conducting the review and issuing the review report.
2. Recorder: responsible for documenting findings, decisions and recommendations made by the review team.
3. Team members: responsible for their own preparation.

3. Input:-

1. Statement of objectives.
2. Software elements being examined.
3. Specification for the software elements.
4. Plans, standards or guidelines against which the software elements are to be
examined.

4. Entry Criteria:-

Authorization is defined by the project planning documents. The initiating event occurs when a statement of objectives has been established.

5. Procedure:-

1. Planning: The review leader identifies the review team, schedules meetings and distributes input materials.
2. Overview: A qualified person from the project conducts the overview session for the review team.
3. Preparation: Each person studies the material and prepares for the review meeting.
4. Examination: Examine the software element relative to guidelines, specifications and standards.

6. Exit Criteria:-

The technical review is complete when the statement of objectives has been addressed and the technical review report has been issued.

7. Output:-

The technical review report identifies the review team, software elements
reviewed, inputs to the review etc.

8. Auditability:-

The technical review report is an auditable item.

AUDIT

1. Audit Objective:-

1. Provides objective confirmation that products and processes adhere to standards, guidelines, specifications and procedures.
2. Audits are performed in accordance with documented plans and procedures.
3. Results are documented and submitted to the management of the audited organization.

2. Input:-

1. Purpose and scope of the audit


2. Objective audit criteria
3. Software elements and processes to be audited.
4. Background information regarding the organization.

3. Entry Criteria:-

1. A special project milestone has been reached.


2. External parties demand an audit.
3. A local organizational element requested the audit.
4. Respond by initiating an audit.

4. Procedure:-

1. Planning:-

It includes

a) Project processes to be examined


b) Software required to be examined
c) Reports shall be identified
d) Reports distribution
e) Required follow-up activities
f) Requirements

g) Objective audit criteria
h) Audit procedures and checklists.
i) Audit personnel.
j) Organization involved in the audit.
k) Date, time, place, agenda of session.

2. Overview: An overview meeting with the audited organization is recommended.

3. Preparation:-

1. Understand the organization


2. Understand the product and processes.
3. Understand the objective audit criteria.
4. Prepare for the audit report.
5. Detail the audit plan.

4. Examination:-

1. Review procedures and instructions


2. Examine the work breakdown structures.
3. Examine evidence of implementation
4. Interview personnel to know the status and functioning of the process and the
product.
5. Examine element documents.
6. Test the elements.

5. Reporting:-

The audit team will issue a draft report to the audited organization for review and comments.

6. Audit Exit Criteria :-

An audit is complete when:

1. Each element within the scope has been examined.


2. Findings have been presented to the auditing organization.
3. Response to the draft audit have been received.
4. Final findings have been formally presented.
5. Audit report has been prepared and submitted.
6. Recommendation report has been prepared.
7. All follow-up actions by the auditing organization have been performed.

7. Output:-

The draft and final audit report contain

1. Audit identification
2. Scope
3. Conclusions
4. Synopsis
5. Follow-up

8. Auditability:-

The materials must be maintained by the audit organization for a stipulated


period of time subsequent to the audit.

Software Quality Assurance Plan (SQA Plan)

Quality means the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs.

Quality assurance means a planned and systematic pattern of all actions necessary to provide adequate confidence that material, data, supplies and services conform to established technical requirements and achieve satisfactory performance.

SQAP provides the necessary framework to plan the systematic actions
necessary to provide adequate confidence that the item or product conforms to
established technical requirements.

Quality goals:-

The five most basic considerations for quality goal establishment are

1. Functionality
2. Performance
3. Constraints
4. Technological innovativeness
5. Technological and managerial risk.

Organization Structure:-

Figure: Organization structure (Project Management and Software Configuration Management at the top; Design and Analysis, System, Applications Support and Testing teams below).

Factors affecting the SQA:-

1. Size of the system


2. Criticality of the system
3. Cost of correcting errors
4. Type of release
5. Relationship with the user.

Seven keys to leadership:-

1. Trust your subordinates


2. Develop a vision
3. Keep your cool
4. Encourage risk
5. Be an expert
6. Invite dissent
7. Simplify

Ways to kill quality assurance:-

The five methods to ensure the failure of SQA are:

1. Too many technical niceties.


2. Too much time spent stopping rather than preventing defects.
3. What happens when effort is wasted?
4. Management has a problem with the mathematical kid.
5. Always complaining about the Government, but no one does anything.

Review guidelines for formal technical review

1. Review the product, not the producer.
2. Set an agenda and maintain it.
3. Limit debate and rebuttal.
4. Enunciate problem areas, but don't attempt to solve every problem noted.
5. Take written notes.
6. Limit the number of participants and insist upon advance preparation.
7. Develop a checklist for each product that is likely to be reviewed.
8. Allocate resources and schedule time for FTRs.
9. Conduct meaningful training for all reviewers.
10. Review your early reviews.

UNIT – V

SOFTWARE PROJECT MANAGEMENT

Laws of Project Management

Projects progress quickly until they are 90% complete; then they remain at 90% complete forever. When things are going well, something will go wrong. When things just can't get worse, they will. When things appear to be going better, you have overlooked something. If project content is allowed to change freely, the rate of change will exceed the rate of progress. Project teams detest progress reporting because it manifests their lack of progress.

Software Project Management Plan


Software Project

All technical and managerial activities required to deliver the deliverables to the
client. A software project has a specific duration, consumes resources and produces
work products.

Management categories to complete a software project:

Tasks, Activities, Functions

Software Project Management Plan:

The controlling document for a software project. It specifies the technical and managerial approaches to develop the software product, and it is a companion document to the requirements analysis document: changes in either may imply changes in the other. The SPMP may be part of the project agreement.

Project Agreement

The document written for a client that defines the scope, duration, cost and deliverables for the project: the exact items, quantities, delivery dates and delivery location. The client is the individual or organization that specifies the requirements and accepts the project deliverables. The agreement can be a contract, a statement of work, a business plan, or a project charter. Deliverables (= work products that will be delivered to the client):

Documents
Demonstrations of function
Demonstration of nonfunctional requirements
Demonstrations of subsystems

Functions
Examples:

Project management
Configuration Management
Documentation
Quality Control (Verification and validation)
Training

Tasks

Smallest unit of management accountability
Atomic unit of planning and tracking
Finite duration; needs resources; produces a tangible result (documents, code)
Specification of a task: the work package
Name, description of work to be done
Preconditions for starting, duration, required resources
Work product to be produced, acceptance criteria for it
Risk involved
Completion criteria: includes the acceptance criteria for the work products (deliverables) produced by the task
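
As an illustrative aside (not part of the original notes), the work-package attributes listed above map naturally onto a simple data structure. The Java sketch below is one possible shape; all field names are hypothetical.

import java.time.Duration;
import java.util.List;

// Sketch only: a work package as a record of the attributes listed above.
// Requires Java 16+ for records.
public record WorkPackage(
        String name,                     // name of the task
        String description,              // description of work to be done
        List<String> preconditions,      // preconditions for starting
        Duration duration,               // finite duration
        List<String> requiredResources,  // people, tools, machines
        String workProduct,              // tangible result (document, code)
        List<String> acceptanceCriteria, // completion criteria for the work product
        String risk) {                   // risk involved

    public static void main(String[] args) {
        WorkPackage wp = new WorkPackage(
                "Unit test class Foo",
                "Write and run unit tests for class Foo",
                List.of("Class Foo implemented"),
                Duration.ofDays(5),
                List.of("One developer", "Test framework"),
                "Test report",
                List.of("All tests pass"),
                "Low");
        System.out.println(wp);
    }
}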

Task Sizes

Finding the appropriate task size is problematic


To do lists from previous projects
During initial planning a task is necessarily large

You may not know how to decompose the problem into tasks at first
Each software development activity identifies more tasks and modifies
existing ones
Tasks must be decomposed into sizes that allow monitoring
A work package usually corresponds to a well-defined work assignment for one worker for a week or a month.
Depends on nature of work and how well task is understood.

Examples of Tasks

Unit test class "Foo"
Test subsystem "Bla"
Write user manual
Write meeting minutes and post them
Write a memo on NT vs Unix
Schedule the code review
Develop the project plan

Related tasks are grouped into hierarchical sets of functions and activities.

Action items

Appear on the agenda in the Status Section (See lecture on communication)


Cover: What?, Who?, When?
Example of action items:

Florian unit tests class "Foo" by next week
Marcus develops a project plan before the next meeting
Bob posts the next agenda for the Simulation team meeting before Sep 10, 12 noon.

Activities

Major unit of work that culminates in a major project milestone:

Internal checkpoint, should not be externally visible
Scheduled event used to measure progress

Milestone often produces baseline: formally reviewed work product under
change control (change requires formal procedures)
Activities may be grouped into larger activities:
Establishes hierarchical structure for project (phase, step, ...)
Allows separation of concerns
Precedence relations often exist among activities (PERT Chart)

Examples of Activities

Major Activities:

Planning
Requirements
Analysis
System Design
Object Design
Implementation
System Testing
Delivery

Activities during requirements analysis:

Refine scenarios
Define Use Case model
Define object model
Define dynamic model
Design User Interface

Structure of a Software Project Management Plan

0. Front Matter
1. Introduction
2. Project Organization
3. Managerial Process
4. Technical Process
5. Work Elements, Schedule, Budget (plus optional inclusions)

SPMP Part 0: Front Matter

Title Page
Revision sheet (update history)
Preface: scope and purpose
Tables of contents, figures, tables

SPMP Part 1: Introduction

1.1 Project Overview


Executive summary: description of project, product summary
1.2 Project Deliverables
All items to be delivered, including delivery dates and location
1.3 Evolution of the SPMP
Plans for anticipated and unanticipated change
1.4 Reference Materials
Complete list of materials referenced in SPMP
1.5 Definitions and Acronyms

SPMP Part 2: Project Organization

2.1 Process Model
Relationships among project elements
2.2 Organizational Structure
Internal management, organization chart
2.3 Organizational Interfaces
Relations with other entities
2.4 Project Responsibilities
Major functions and activities; nature of each; who's in charge

Process Model

Shows the relationships among the functions, activities, tasks, milestones, baselines, reviews and the work breakdown structure.

Project Responsibilities

Planner
Analyst
Designer
Programmer
Tester
Maintainer
Trainer
Document Editor
Web Master
Configuration Manager
Group leader
Liaison
Minute Taker
Project Manager

Observations on Management Structures

Egoless structures don't work well; "ownership" is important.
Hierarchical information flow does not work well with an iterative and incremental software development process.
The manager is not necessarily always right.
Project-based structures:
Cut down on bureaucracy and reduce development time
Decisions are expected to be made at each level
Hard to manage

Hierarchical Structure

Projects with high degree of certainty, stability, uniformity and repetition.


Requires little communication
Role definitions are clear
The more people on the project, the more need for a formal structure
Customer might insist that the test team be independent from the design team
Project manager insists on a previously successful structure

Project-Based Structure

Projects with a degree of uncertainty
Open communication needed among members
Roles are defined on a project basis
Requirements change during development
New technology develops during the project

Project Role: Group leader

Responsible for intra-group communication (meeting management: primary facilitator)
Run the weekly project meeting
Post the agenda before the meeting
Define and keep track of action items (who, what, when)
Measure progress (enforce milestones)
Deliver work packages for the tasks to the project management
Present problems and status of the team to the project manager
The group leader role is rotated among members of the team.

Group Leader: Create an Agenda
o Purpose of meeting
o Desired outcome
o Information sharing
o Information processing
o Meeting critique

Project Role: Group Liaison (Architecture, HCI)
o Responsible for inter-group communication
o Make public definitions of the subsystem developed by the team available to the architecture teams (ensure consistency, etc.)
o Coordinate tasks spanning more than one group with other teams
o Responsible for team negotiations

Project Role: Planner

Plans the activities of an individual team, with the following responsibilities:

Define the project plan for the team: PERT chart, resource table and GANTT chart showing work packages
Enter the project plan into MS Project
Make the project plan available to management
Report group project status to the group leader
(No explicit planner in JAMES; responsibilities are assumed by the group leaders.)

Project Role: Document Editor

Collect, proofread and distribute team documentation


Submit team documentation to Architecture team
Collect agenda and take Minutes at meetings

Web Master

Maintain team home page


Keep track of meeting history
Keep track of design rationale

SPMP Part 3: Managerial Processes

3.1 Management Objectives and Priorities


o Philosophy, goals and priorities
3.2 Assumptions, Dependencies, Constraints
o External factors
3.3 Risk Management
o Identifying, assessing, tracking, contingencies for risks
3.4 Monitoring and Controlling Mechanisms
o Reporting mechanisms and formats, information flows, reviews

Examples of Assumptions

o There are enough cycles on the development machines
o Security will not be addressed
o There are no bugs in the CASE Tool recommended for the project

Examples of Dependencies

o The VIP team depends on the vehicle subsystem provided by the vehicle team
o The automatic code generation facility in the CASE tool Rose/Java depends on
JDK. The current release of Rose/Java supports only JDK 1.0.2

Examples of Constraints

o The length of the project is 3 months: there is a limited amount of time to build
the system
o The project consists of beginners. It will take time to learn how to use the tools
o Not every project member is always up-to-date with respect to the project
status
o The use of UML and a CASE tool is required
o Any new code must be written in Java
o The system must use Java JDK 1.1

SPMP Part 4: Technical Process

o 4.1 Methods, Tools and Techniques


Computing system, development method, team structure, etc.
Standards, guidelines, policies.
o 4.2 Software Documentation
Documentation plan, including milestones, reviews and baselines.
o 4.3 Project Support Functions
Plans for functions (quality assurance, configuration management).

SPMP Part 5: Work Elements

o 5.1 Work Packages (Work breakdown structure)


o Project decomposed into tasks; definitions of tasks
o 5.2 Dependencies
o Precedence relations among functions, activities and tasks
o 5.3 Resource Requirements
o Estimates for resources such as personnel, computer time, special
hardware, support software.
o 5.4 Budget and Resource Allocation
o Connect costs to functions, activities and tasks.
o 5.5 Schedule
o Deadlines, accounting for dependencies, required milestones

Creating Work Packages

Work Breakdown Structure (WBS)


Break the project up into activities (phases, steps) and tasks.
The work breakdown structure does not show the interdependence of the
tasks

WBS Trade-offs

The work breakdown structure influences cost and schedule. Thresholds for
establishing the WBS, in terms of percentage of total effort:

Small project (7 person-months): at least 7%, or 0.5 PM
Medium project (300 person-months): at least 1%, or 3 PMs
Large project (7000 person-months): at least 0.2%, or 15 PMs

Determination of the work breakdown structure is incremental and iterative

Dependencies and Schedule

An important temporal relation: "must be preceded by". Dependency graphs
show the dependencies of the tasks (hierarchical and temporal).

o Activity Graph:

 Nodes of the graph are the project milestones
 Lines linking the nodes represent the tasks involved
Schedule Chart (MS-Project):
o Nodes are tasks and milestones
o Lines represent temporal dependencies
o Estimate the duration of each task
o Label dependency graph with the estimates
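
To make this concrete, here is a minimal sketch in Python (the course does not
prescribe a language; the task names and durations are hypothetical, borrowed
from the house-building example later in this unit) of a dependency graph in
which each node carries a duration estimate and a "must be preceded by" list:

# A small dependency graph: each task has an estimated duration (in days)
# and a list of tasks that must finish before it can start.
tasks = {
    "surveying":         {"duration": 3, "preceded_by": []},
    "obtaining_permits": {"duration": 5, "preceded_by": ["surveying"]},
    "excavating":        {"duration": 4, "preceded_by": ["obtaining_permits"]},
    "foundation":        {"duration": 6, "preceded_by": ["excavating"]},
}

def ready_tasks(done):
    """Return the tasks whose predecessors have all been completed."""
    return [name for name, t in tasks.items()
            if name not in done and all(p in done for p in t["preceded_by"])]

print(ready_tasks(set()))           # ['surveying']
print(ready_tasks({"surveying"}))   # ['obtaining_permits']

Labeling each node with its duration estimate, as the last bullet above says, is
what turns a plain dependency graph into a schedule chart.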

Project Management Tools for Work Packages

Visualization Aids for Project Presentation


o Graphs (Schedule), Trees (WBS)
o Tables (Resources)
o Task Timeline (Gantt Charts): shows project activities and tasks in parallel;
enables the project manager to understand which tasks can be performed
concurrently.
o Schedule Chart (PERT Chart): a cornerstone of many project management tools;
graphically shows dependencies of tasks and milestones.

PERT: Program Evaluation and Review Technique

– A PERT chart assumes a probability distribution of task durations (classically the beta distribution; see the PERT network model below)


– Useful for Critical Path Analysis

CPM: Critical Path Method

Project: Building a House


Activity 1: Landscaping the lot
Task 1.1: Clearing and grubbing
Task 1.2: Seeding the Turf
Task 1.3: Planting shrubs and trees
Activity 2: Building the House
Activity 2.1: Site preparation
Activity 2.2: Building the exterior

Activity 2.3: Finishing the interior
Activity 2.1: Site preparation
Task 2.1.1: Surveying
Task 2.1.2: Obtaining permits
Task 2.1.3: Excavating
Task 2.1.4: Obtaining materials

Activity 2: Building the House (continued)


Activity 2.2: Building the exterior
Task 2.2.1: Foundation
Task 2.2.2: Outside Walls
Task 2.2.3: Exterior plumbing
Task 2.2.4: Exterior electrical work
Task 2.2.5: Exterior siding
Task 2.2.6: Exterior painting
Task 2.2.7: Doors and Fixtures
Task 2.2.8: Roof

Activity 2.3: Finishing the Interior


Task 2.3.1: Interior plumbing
Task 2.3.2: Interior electrical work
Task 2.3.3: Wallboard
Task 2.3.4: Interior painting
Task 2.3.5: Floor covering
Task 2.3.6: Doors and Fixtures
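
As a minimal sketch (Python; the representation is an assumption for
illustration, not part of the original material), the house WBS above can be
captured as a nested structure. Note that, as stated earlier, the WBS records
decomposition only, not the interdependence of the tasks:

# Part of the "Building a House" WBS as a nested structure: inner nodes are
# activities, leaves are tasks. No dependency information is recorded here.
wbs = {
    "1 Landscaping the lot": [
        "1.1 Clearing and grubbing",
        "1.2 Seeding the turf",
        "1.3 Planting shrubs and trees",
    ],
    "2 Building the house": {
        "2.1 Site preparation": [
            "2.1.1 Surveying",
            "2.1.2 Obtaining permits",
            "2.1.3 Excavating",
            "2.1.4 Obtaining materials",
        ],
        # Remaining tasks omitted for brevity.
        "2.2 Building the exterior": ["2.2.1 Foundation", "2.2.2 Outside walls"],
        "2.3 Finishing the interior": ["2.3.1 Interior plumbing"],
    },
}

def print_wbs(node, indent=0):
    """Depth-first walk printing one outline line per activity or task."""
    if isinstance(node, dict):
        for name, children in node.items():
            print("  " * indent + name)
            print_wbs(children, indent + 1)
    else:
        for leaf in node:
            print("  " * indent + leaf)

print_wbs(wbs)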

Slack Time and Critical Path


Slack Time

Slack Time = Available Time - Estimated ("Real") Time for a task or activity
Or: Slack Time = Latest Start Time - Earliest Start Time

Critical Path

The path in a project plan for which the slack time at each task is zero. The critical
path has no margin for error when performing the tasks (activities) along its route.
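
The following sketch (Python; the graph and durations are hypothetical, not
taken from any example in this unit) computes earliest and latest start times
with a CPM-style forward and backward pass; the tasks with zero slack form the
critical path:

# Forward pass: earliest start of a task is the latest finish of its
# predecessors. Backward pass: latest finish is constrained by successors.
tasks = {
    "site_prep":   {"duration": 4,  "preceded_by": []},
    "foundation":  {"duration": 6,  "preceded_by": ["site_prep"]},
    "exterior":    {"duration": 10, "preceded_by": ["foundation"]},
    "landscaping": {"duration": 3,  "preceded_by": ["site_prep"]},
    "interior":    {"duration": 8,  "preceded_by": ["exterior"]},
}

earliest = {}
for name, t in tasks.items():  # assumes the dict is in topological order
    earliest[name] = max((earliest[p] + tasks[p]["duration"]
                          for p in t["preceded_by"]), default=0)

project_end = max(earliest[n] + t["duration"] for n, t in tasks.items())

latest = {}
for name in reversed(list(tasks)):
    succs = [s for s, t in tasks.items() if name in t["preceded_by"]]
    latest[name] = (min((latest[s] for s in succs), default=project_end)
                    - tasks[name]["duration"])

for name in tasks:
    slack = latest[name] - earliest[name]
    mark = "  <-- on the critical path" if slack == 0 else ""
    print(f"{name:12} ES={earliest[name]:2} LS={latest[name]:2} slack={slack:2}{mark}")

Here the critical path is site_prep, foundation, exterior, interior (28 days);
landscaping carries 21 days of slack.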

How do you become a good project planner?

Establish a project plan


Start with the plan based on your experience with the last project(s)
Keep track of activities and their duration
Determine difference between planned and actual performance
Make sure to do a post-mortem ("lessons learned")
Ask developers for feedback
Write a document about what could have been improved

Project Management Heuristics

Make sure you are able to revise or dump a project plan: complex system
development is a nonlinear activity. If project goals are unclear and complex,
use team-based project management; in that case, avoid striving for perfect
GANTT charts and PERT charts
Don’t look too far into the future
Avoid micro management of details
Don’t be surprised if current project management tools don’t work:
They were designed for projects with clear goals and fixed organizational
structures

Project Management Summary

Get agreement among customers, managers and teams


o Problem statement
o Software project management plan
o Project agreement
o Make sure the agreement allows for iteration
Team Structures
Project planning
o Start with the work breakdown structure (WBS)
o Identify dependencies and structure: tasks, activities, functions
Tools and Techniques
o GANTT charts, dependency graphs, schedules, critical path analysis

Communication Management

Communication in distributed Teams


Communication Modes
Communication Mechanisms
Scheduled Communications
Communication workflow

Example: Distributed Document Review with Lotus Notes

Asynchronous Communication Mechanisms

E-Mail
Newsgroups
Web
Lotus Notes

Example of Asynchronous Documentation:


Document Review

Fill out a review form


Attach document to be reviewed
Distribute the review form to reviewers
Wait for comments from reviewers
Review comments
Create action items from selected comments
Revise document and post the revised version
Iterate the review cycle
Example:
 "Review of Documents" Database in JAMES Project

Fill out the Review Form


Select reviewers
Select the document to be reviewed
Add comments to reviewers
Determine deadline

Document Editor & Web Master Tasks

Editor reviews comments


Editor selects reviewed comments
Web Master posts reviewed document and action items
Team members complete their action items
Editor integrates changes
Editor posts changed document on the review database for the next review cycle

PERT Network model

PERT’s time estimates involve the beta probability distribution to capture three
estimates (optimistic, most likely, and pessimistic) of the duration of an activity.

The beta distribution was chosen for PERT instead of the normal distribution
because it more closely resembles people’s behavior when estimating. We are
naturally optimistic, which skews the results to the left. If the actual duration is
shorter than the most likely, it will not be much shorter, but if it is longer, then it
could be a lot longer. Actually, since we only use three estimates in PERT, you
would get a triangular distribution if plotted.

The formula for the weighted average of PERT is actually the mean of the
triangular distribution used as an approximation of the beta, by experimentation.

The formula for computing the weighted average is:

PERT weighted average = (optimistic + 4 × most likely + pessimistic) / 6

Since scheduling a lot of activities with three estimates is computationally
messy, and many people argue that three "estimates" are not that much more
accurate than one "estimate", most project management scheduling software uses
the CPM method for scheduling.
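
As a quick worked example of the weighted average (the three estimates below
are hypothetical):

def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT weighted average of the three time estimates."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# (4 + 4*6 + 14) / 6 = 42 / 6 = 7.0 days: the long pessimistic tail pulls
# the average above the most likely value of 6, matching the skew described
# above.
print(pert_estimate(4, 6, 14))   # 7.0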

Critical chain scheduling

In critical chain scheduling, Goldratt says that a project schedule is a lot like a
factory, except that work is moving through a number of activities instead
of a product being made in a sequence of machines. He describes some common
problems with the way project scheduling has been handled in recent years, such as
yielding to the "student syndrome" and doing too much multitasking in pursuit of
optimum organizational throughput.

Goldratt's approach is to build a project schedule from the final end
deliverable backwards toward the beginning of the project, focusing on utilizing the
"herbies" (the bottlenecks) as efficiently as possible, even if it means others in the
organization might be idle for a time.

In the example project, the critical chain is indicated as a heavy line through several
activities and stops at the early finish date. Some of these activities have been
identified as critical through a CPM analysis, and some are critical because the
resource needed for them demands it.

It is important to realize that the critical path and the critical chain are not the
same thing. The critical path is the longest path through the network when only
activities are considered. A critical chain is the longest path through the network
considering both activities and resources.
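
The distinction can be illustrated with a minimal greedy scheduling sketch
(Python; the tasks are hypothetical, and this is not Goldratt's full method,
which also adds buffers): two tasks with no logical dependency still cannot
overlap when they need the same resource, so the critical chain comes out
longer than the critical path:

# Greedy resource-feasible schedule. "walls" -> "roof" is a logical
# dependency; "cabinets" is logically independent but needs the same carpenter.
tasks = {
    "walls":    {"duration": 5, "preceded_by": [],        "resource": "carpenter"},
    "roof":     {"duration": 4, "preceded_by": ["walls"], "resource": "carpenter"},
    "cabinets": {"duration": 6, "preceded_by": [],        "resource": "carpenter"},
}

resource_free = {}             # when each resource next becomes available
start, finish = {}, {}
for name, t in tasks.items():  # assumes the dict is in topological order
    logical = max((finish[p] for p in t["preceded_by"]), default=0)
    s = max(logical, resource_free.get(t["resource"], 0))
    start[name], finish[name] = s, s + t["duration"]
    resource_free[t["resource"]] = finish[name]

# Critical path (activities only): walls -> roof = 9 days.
# Critical chain (activities plus the carpenter): 15 days, because cabinets
# must wait for the shared resource.
print(max(finish.values()))    # 15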

Estimate scheduling using Brainstorming.

Brainstorming is a method used to invent the activities with your
project team, group them, arrange them into a WBS, and then find all the
dependencies using that information.

Nominal Group Technique

It is simple to use and extremely valuable for collecting balanced input and
ideas from those on the project team.

It is good for:

Problem solving – looking for the root cause of problems, or solution approaches

Creative decision making – for estimates and tough choices

Idea-generating situations – where completely new input is required

Process

1. Each participant privately generates a list of inputs


2. Round-robin: each participant provides one new item for the common list
3. After the list is complete, the group discusses items as needed to eliminate duplicates
4. Each person privately ranks the n most important items
5. Using these anonymous votes, the leader calculates a team ranking for each item
6. The group discusses the resulting ranking and any anomalies
7. Repeat steps 4, 5 and 6 until a consensus top set is reached.
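
A minimal sketch of step 5 (Python; the scoring rule is an assumption for
illustration: a ballot of n items awards n points to first place down to 1
point to the nth, and the topics themselves are hypothetical):

from collections import defaultdict

# Anonymous ballots: each is one participant's private top-3 ranking.
ballots = [
    ["testing", "requirements", "tooling"],
    ["requirements", "testing", "schedule"],
    ["requirements", "tooling", "testing"],
]

scores = defaultdict(int)
for ballot in ballots:
    n = len(ballot)
    for rank, item in enumerate(ballot):
        scores[item] += n - rank        # 1st place earns n points, nth earns 1

team_ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(team_ranking)
# [('requirements', 8), ('testing', 6), ('tooling', 3), ('schedule', 1)]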

Process for identifying new dependencies

1. Brainstorm an activity list – use the nominal group technique to build a list of
possible activities for the project on stickies.

2. Find affinity collections – logically "group" activities by identifying the work
that must or will be performed together. The level of detail and number of high-
level items will be governed by the nature of the project.

3. Find highest WBS levels – summarize the groupings by work product output to
derive the higher levels of the WBS.

4. Capture WBS representation – copy all the WBS information to paper. It
becomes part of the project plan.

5. Find dependencies – working from the end to the beginning, logically arrange
the stickies on the whiteboard. Using markers, draw the dependency relationships
between activities, being careful to look for all interdependencies.

2. Explain the Organizational Structures.

Types of Organizations

(i) Functional
(ii) Matrix
(iii) Projectized

Functional Organization

The functional organization is what most of us think of as a "standard" old-
fashioned organization. It is the style in which people are divided into their
functional specialties and report to a functional area manager.

The advantages of a purely functional organizational form are that it:

1. Clearly defines authority – each specialist reports to only one manager
2. Eliminates duplication of functions – all engineers are in one group, marketing
personnel are in another, and so on
3. Encourages technical competence and specialization – engineers sit near other
engineers
4. Provides career paths for specialized skills – people see a career path within the
department
5. Focuses attention on key functions – concentration on core competencies is
encouraged

A pure functional organization has no cross-functional projects

Two derivatives of this form are

1. the project expediter organization


2. the project coordinator organization

Matrix Organization

In matrix organizations, there is a balance of power established between the
functional and project managers. The project worker in a matrix organization has a
multiple command system of accountability and responsibility.

Three types of matrix organizations are

1. weak
2. balanced
3. strong

Advantages of matrix organizations

1. Enables project objectives to be clearly communicated
2. Permits project integration to be done across functional lines
3. Makes efficient use of resources
4. Enhances information flow within an organization
5. Retains functional disciplinary teams
6. Encourages higher morale
7. Develops project managers
8. Makes conflicts minimal and more easily resolved

Projectized Organizations

In projectized organizations, the project manager has total authority and acts
like a mini-CEO. All personnel assigned to the project report to the project manager,
usually in a vertical organization, so the company becomes like a layered matrix.

The clear advantages for a project in this form of organization are that it
establishes a unity of command and promotes more effective communication.

The disadvantages are that it fosters duplication of facilities and inefficient
use of resources, and project team members work themselves out of a job.

*******************
