SEMESTER – VI
OBJECTIVE
To understand the following :
Different life cycle models
Requirement elicitation process
Analysis modeling and specification
Architectural and detailed design methods
Implementation and testing strategies
Verification and validation techniques
Project planning and management
Use of CASE tools
UNIT I INTRODUCTION 8
The evolving role of Software – Software characteristics, Software Process: Software
Lifecycle models – The linear sequential model - The prototyping model - The RAD
model - Evolutionary software process models - The incremental model - The spiral
model - Various Phases in Software Development
UNIT II RISK MANAGEMENT & CODING STANDARDS 9
Risk Analysis & Management: Assessment-Identification–Projection-Refinement-
Principles, Introduction to Coding Standards.
UNIT III TESTING TECHNIQUE & TESTING TOOLS 9
Software testing fundamentals - Test case design - White box testing - Basis path
testing - Control structure testing - Black box testing - Testing for specialized
environments, Testing strategies - Verification and validation - Unit testing -
Integration testing - Validation testing - System testing - The art of debugging,
Testing tools - Win runner, Load Runner.
UNIT IV SOFTWARE QUALITY ASSURANCE 10
Quality concepts - Cost of quality - Software Quality Assurance (SQA) group - Roles and
responsibilities of the SQA group - Formal technical reviews - Quality standards.
UNIT V SOFTWARE PROJECT MANAGEMENT 9
Introduction to MS Project – Creating a Project Plan File - Creating a Work Breakdown
Structure - Creating and Assigning Resources - Finalizing the Project Plan
Case Study.
TOTAL: 45+15(Tutorial) = 60 periods
INTRODUCTION
The first published model of the software development process was derived
from other engineering processes. This is illustrated in the following figure. Because of
the cascade from one phase to another, this model is known as the 'waterfall model' of
the software life cycle. The principal stages of the model map onto fundamental
development activities.
Requirements definition
System and software design
Implementation and unit testing
Integration and system testing
Operation and maintenance
No fabrication step
Program code is another design level
Hence, no 'commit' step – software can always be changed
Understandability
Visibility
Supportability
Acceptability
Reliability
Maintainability
Rapidity
Software validation
Unit testing: individual components are tested to ensure that they operate correctly.
Subsystem testing: this phase involves testing collections of modules which have
been integrated into subsystems. The most common problems which arise in large
software systems are interface mismatches.
System testing: the subsystems are integrated to make up the system. This process
is concerned with finding errors that result from unanticipated interactions between
subsystems and from subsystem interface problems.
Acceptance testing: this is the final stage in the testing process before the system is
accepted for operational use. The system is tested with data supplied by the system's
customers rather than simulated data.
Unit and module testing are the responsibility of the programmers. They make up
their own test data and incrementally test the code as it is developed.
Later stages of testing involve integrating work from a number of programmers and
must be planned in advance. An independent team of testers should work from pre-
formulated test plans which are developed from the system specification and design.
Project considerations
Business considerations: scheduling, cost, risk, marketing, profit
Technical analysis: function, performance
Manufacturing evaluation: availability, quality assurance
Human issues: worker, customer's environmental interface
Systems analysis: the system's external environment
Legal considerations: liability, infringement
There are many other categories of requirements that also deserve attention;
pervasive system requirements include:
Accessibility
Reusability
Adaptability
Robustness
Availability
Safety
Compatibility
Security
Correctness
Testability
Efficiency
Usability
Fault tolerance
Integrity
Maintainability
Reliability
Prototyping model
Advantages
Advantage
Incremental model
The model combines elements of the linear sequential model with the iterative
philosophy of prototyping. This model applies linear sequences in a staggered
fashion as calendar time progresses. Each sequence produces an increment of the
software.
When the incremental model is used, the first increment is the core product: basic
requirements are addressed, but supplementary features remain undelivered. The
core product is used by the customer. Based on evaluation of the first increment, a
plan is developed for the next increment; the plan addresses modification of the
core product to better meet the needs of the customer. The process is repeated until
the complete product is produced.
The spiral model is an evolutionary software process model that couples the iterative
nature of prototyping with the controlled and systematic aspects of the linear
sequential model.
The model is divided into a number of framework activities called task regions.
Figure: Boehm's spiral model. Its four quadrants are: determine objectives,
alternatives and constraints; evaluate alternatives and identify and resolve risks
(risk analysis, prototypes 1 to 3, operational prototype); develop and verify the
next-level product (simulations, models, benchmarks; concept of operation;
software requirements and requirement validation; product design; detailed design;
code; unit test; integration test; acceptance test; service); and plan the next phase
(requirements plan, life-cycle plan, development plan, integration and test plan),
with a review at the end of each cycle.
Construction and release: Used to construct, test, install, and provide user support
(e.g. Documentation and training)
Each of the regions is populated by a set of work tasks, called task set, that are
adapted to the characteristics of the project to be undertaken.
For small projects, the number of work tasks and their formality is low. For larger,
more critical projects, each task region contains more work tasks that are defined to
achieve a higher level of formality.
In the spiral model, the developer asks the customer what is required and the
customer provides sufficient detail to proceed. Unfortunately this rarely happens; in
reality the customer and developer enter into a process of negotiation, where the
customer may be asked to balance functionality, performance and other product or
system characteristics against cost and time to market.
The model has three process milestones, called anchor points, that establish
the completion of one cycle around the spiral and provide decision milestones before
the software project proceeds.
The anchor points represent three different views of progress as the project
traverses the spiral. The first anchor point, life cycle objectives (LCO),
defines a set of objectives for each major software engineering activity. For example,
as part of LCO, a set of objectives establishes the definition of top-level
system/product requirements. The second anchor point, life cycle architecture (LCA),
establishes objectives that must be met as the system and software architectures are
defined. For example, as part of LCA, the software project team must demonstrate
that it has evaluated the applicability of off-the-shelf and reusable software
components and considered their impact on architectural decisions. Initial
operational capability (IOC) is the third anchor point and represents a set of
objectives associated with the preparation of the software for
installation/distribution, site preparation prior to installation, and assistance
required by all parties that will support the software.
Risk management deals with the identification, quantification, response to, and control of risks.
Identification of risks
A risk is a possible future event, which, if it happens, will hurt the project. A
risk is a problem or disaster looking for an opportunity to happen. The
characteristics of a risk are:
If the risk happens, you lose something: time, money, etc. There is some non-zero
likelihood that the risk will occur (i.e. don't worry about meteors crashing into
your project office). To some degree, we can minimize the risk.
Generic risks:
1. Personnel shortfalls: skill and knowledge levels, staff turnover, team dynamics.
2. Unrealistic schedules and budgets: requirements demand more time or money.
3. Developing the wrong software functions: complexity, imperfect understanding.
4. Developing the wrong user interface: not user-friendly, misleading.
5. Gold plating: adding unnecessary "nice" features
6. Continuing stream of requirements changes: requirement volatility forces rework
7. Shortfalls in externally performed tasks: subcontractors or users don’t do what’s
needed
8. Shortfalls in externally furnished components: hardware or supporting software
is inadequate
9. Real time performance shortfalls: some or all of the system causes bottlenecks
10. Straining computer science capabilities: unstable or unfamiliar technology
RISK QUANTIFICATION
Risk exposure is the sum of ((probability of risk) * (cost of risk)) for all risks.
The cost of the risk can be measured in dollars or some other impact measurement.
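The exposure formula can be sketched in a few lines of Python. The risk register below is hypothetical; the names, probabilities and costs are invented purely for illustration.

```python
# Hypothetical risk register; names, probabilities and costs are illustrative.
risks = [
    {"name": "key developer leaves", "probability": 0.10, "cost": 20000},
    {"name": "requirements change late", "probability": 0.30, "cost": 15000},
    {"name": "subcontractor delivers late", "probability": 0.25, "cost": 8000},
]

def risk_exposure(register):
    """Sum of (probability of risk * cost of risk) over all risks."""
    return sum(r["probability"] * r["cost"] for r in register)

# 2000 + 4500 + 2000: total exposure is 8500, in the same units as cost.
print(risk_exposure(risks))
```

Exposure lets risks with very different probability/cost profiles be compared on one scale.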
Risk avoidance
change requirements (or the methods, or …) so the risk no longer applies
take precautions to reduce the probability of the risk
e.g. delete functionality that looks hard to implement
Risk transfer
Risk assumption
accept a potential risk and its consequences as part of the project, but take
preventative actions to reduce probability and impact, and prepare contingency
plans
e.g. arrange for "stand-by" expertise
Risk control
All this risk management takes resources and may add delays to the project.
The amount of risk management actually done depends on the magnitude of the
project, the consequences of the expected risks, and the cost of mitigation.
Risk management tracks risk exposure during a project and applies risk control to
reduce exposure.
1. Identify the risks: use existing lists (like Boehm's) and make your own lists
specific to the politics, culture, technology, etc. that constitute the project
environment; review the project plan with skepticism.
Pre-project prevention
Intra-project compensation
Risk description:
risk statement (condition-consequences format: "if this happens, then that will
occur")
context (circumstances, environment, resources and other issues affecting the
risk)
impact (affected products, schedules, etc.)
time frame (period during which risk is real, period during which action
can/should be taken)
probability (likelihood of risk occurring)
mitigation strategy (proposed action to eliminate, reduce, or prevent the risk)
status
priority (high, medium, low)
risk origin (who identified the risk)
date opened/identified
assigned to (person examining risk and recommending mitigation)
status/date (status changes such as change in probability of key events or a
change in the potential impact)
date closed
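The risk description fields above map naturally onto a record type. The sketch below is one possible schema, assuming Python dataclasses; the field names mirror the list but are illustrative, not a standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRecord:
    """One entry in a risk register (illustrative schema, not a standard)."""
    statement: str             # condition-consequences format
    context: str = ""          # circumstances affecting the risk
    impact: str = ""           # affected products, schedules, etc.
    time_frame: str = ""       # period during which the risk is real
    probability: float = 0.0   # likelihood of the risk occurring
    mitigation: str = ""       # proposed action to reduce or prevent it
    status: str = "open"
    priority: str = "medium"   # high / medium / low
    origin: str = ""           # who identified the risk
    assigned_to: str = ""      # person recommending mitigation
    opened: date = field(default_factory=date.today)

r = RiskRecord(statement="If the subcontractor is late, integration slips",
               probability=0.25, priority="high")
print(r.priority)
```

Keeping risks in a structured form makes the status/date tracking described above straightforward to automate.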
RISK ANALYSIS
A systematic approach for describing and/or calculating risk. Risk analysis involves
the identification of undesired events, and the causes and consequences of these
events.
A risk analysis can be quantitative. However, this requires the existence of suitable
(relevant and reliable) data.
Based on the last item, a comparison with the tolerability matrix can be made,
and also the results of the risk analysis should be useful in identifying risk
mitigating measures.
Risk is the potential harm that may arise from some current process or from
some future event.
Risk is present in every aspect of our lives and many different disciplines
focus on risk as it applies to them. From the IT security perspective, risk
management is the process of understanding and responding to factors that may
lead to a failure in the confidentiality, integrity or availability of an information
system. IT security risk is the harm to a process or the related information resulting
from some purposeful or accidental event that negatively impacts the process or the
related information.
Threats
One of the most widely used definitions of threat and threat-source can be
found in NIST SP 800-30.
Threat-Source: Either (1) intent and method targeted at the intentional exploitation
of a vulnerability, or (2) a situation and method that may accidentally trigger a
vulnerability. The threat is merely the potential for the exercise of a particular
vulnerability. Threats in themselves are not actions. Threats must be coupled with
threat-sources to become dangerous. This is an important distinction when assessing
and managing risks, since each threat-source may be associated with a different
likelihood, which, as will be demonstrated, affects risk assessment and risk
management. It is often expedient to incorporate threat sources into threats. The list
below shows some (but not all) of the possible threats to information systems.
Threat (including threat source) – Description:

Accidental Disclosure: The unauthorized or accidental release of classified,
personal, or sensitive information.

Acts of Nature: All types of natural occurrences (e.g., earthquakes, hurricanes,
tornadoes) that may damage or affect the system/application. Any of these
potential threats could lead to a partial or total outage, thus affecting availability.

Alteration of Software: An intentional modification, insertion, or deletion of
operating system or application system programs, whether by an authorized user
or not, which compromises the confidentiality, availability, or integrity of data,
programs, system, or resources controlled by the system. This includes malicious
code, such as logic bombs, Trojan horses, trapdoors, and viruses.
Identifying Threats
Threats include accidental actions (inadvertent data entry) and deliberate actions
(network-based attacks, virus infection, unauthorized access).
Identifying Vulnerabilities
One of the more difficult activities in the risk management process is to relate
a threat to a vulnerability. Nonetheless, establishing these relationships is a
mandatory activity, since risk is defined as the exercise of a threat against a
vulnerability. This is often called threat-vulnerability (T-V) pairing. Once again,
there are many techniques to perform this task. Not every threat-action/threat can
be exercised against every vulnerability. For instance, a threat of "flood" obviously
applies to a vulnerability of "lack of contingency planning", but not to a
vulnerability of "failure to change default authenticators." While logically it seems
that a standard set of T-V pairs would be widely available and used, there currently
is not one readily available. This may be due to the fact that threats and especially
vulnerabilities are constantly being discovered and that the T-V pairs would change
fairly often. Nonetheless, an organizational standard list of T-V pairs should be
established and used as a baseline.
Risk Management
For each risk in the risk assessment report, a risk management strategy must
be devised that reduces the risk to an acceptable level for an acceptable cost. For each
risk management strategy, the cost associated with the strategy and the basic steps
for achieving the strategy (known as the Plan Of Action & Milestones or POAM)
must also be determined.
Mitigation
Transference
Acceptance
Avoidance
Risk must also be communicated. Once risk is understood, risks and risk
management strategies must be clearly communicated to organizational
management in terms easily understandable to organizational management.
Managers are used to managing risk; they do it every day. So presenting risk in a
way that they will understand is key.
NIST Special Publication (SP) 800-30, Risk Management Guide for Information
Technology Systems is the US Federal Government’s standard. This methodology is
primarily designed to be qualitative and is based upon skilled security analysts
working with system owners and technical experts to thoroughly identify, evaluate
and manage risk in IT systems. The process is extremely comprehensive, covering
everything from threat-source identification to ongoing evaluation and assessment.
Phase 3 gathers knowledge from staff. Phase 3 consists of the following processes:
These activities produce a view of risk that takes the entire organization’s
viewpoints into account, while minimizing the time of the individual participants.
The outputs of the OCTAVE process are:
• Protection Strategy
• Mitigation Plan
• Action List
FRAP
COBRA
Risk Watch
Risk Watch is another tool that uses an expert knowledge database to walk
the user through a risk assessment and provide reports on compliance as well as
advice on managing the risks. Risk Watch includes statistical information to support
quantitative risk assessment, allowing the user to show ROI for various strategies.
Risk Watch has several products, each focused along different compliance needs.
There are products based on NIST Standards (U.S. government), ISO 17799, HIPAA
and Financial Institution standards (Gramm Leach Bliley Act, California SB 1386
(Identity Theft standards), Facilities Access Standards and the FFIEC Standards for
Information Systems).
Software Testing is the process of executing a program or system with the intent of
finding errors. Or, it involves any activity aimed at evaluating an attribute or
capability of a program or system and determining that it meets its required results.
Software is not unlike other physical processes where inputs are received and
outputs are produced. Where software differs is in the manner in which it fails. Most
physical systems fail in a fixed (and reasonably small) set of ways. By contrast,
software can fail in many bizarre ways. Detecting all of the different failure modes
for software is generally infeasible.
Unlike most physical systems, most of the defects in software are design
errors, not manufacturing defects. Software does not suffer from corrosion or wear
and tear; generally it will not change until upgrades, or until obsolescence. So once
the software is shipped, the design defects or bugs will be buried in and remain
latent until activation.
Software bugs will almost always exist in any software module with
moderate size: not because programmers are careless or irresponsible, but because
the complexity of software is generally intractable and humans have only limited
ability to manage complexity. It is also true that for any complex systems, design
defects can never be completely ruled out.
Discovering the design defects in software, is equally difficult, for the same
reason of complexity. Because software and any digital systems are not continuous,
testing boundary values are not sufficient to guarantee correctness. All the possible
values need to be tested and verified, but complete testing is infeasible. Exhaustively
testing a simple program to add only two integer inputs of 32 bits (yielding 2^64
distinct test cases) would take hundreds of millions of years, even if tests were
performed at a rate of thousands per second. Obviously, for a realistic software module, the
complexity can be far beyond the example mentioned here. If inputs from the real
world are involved, the problem will get worse, because timing and unpredictable
environmental effects and human interactions are all possible input parameters.
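The exhaustive-testing arithmetic is easy to check: two 32-bit inputs give 2^32 × 2^32 = 2^64 distinct test cases, and at a generous 1,000 tests per second the run time works out to hundreds of millions of years.

```python
# Arithmetic behind the exhaustive-testing claim for a two-input adder.
cases = 2 ** 64                       # 2**32 * 2**32 distinct input pairs
rate = 1000                           # tests per second (generous)
seconds_per_year = 60 * 60 * 24 * 365
years = cases / (rate * seconds_per_year)
print(f"{years:.2e} years")           # on the order of 10**8 years
```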
The tests that occur as part of unit tests are illustrated schematically in
Figure 4.1. The module interface is tested to ensure that information properly flows
into and out of the program unit under test. The local data structure is examined to
ensure that data stored temporarily maintains its integrity during all steps in an
algorithm's execution. Boundary conditions are tested to ensure that the module
operates properly at boundaries established to limit or restrict processing.
Fig. 4.1: Unit test – test cases are applied to the module under test.
Tests of data flow across a module interface are required before any other test
is initiated. If data do not enter and exit properly, all other tests are moot. In
addition, local data structures should be exercised and the local impact on global
data should be ascertained (if possible) during unit testing.
Selective testing of execution paths is an essential task during the unit test.
Test cases should be designed to uncover errors due to erroneous computations,
incorrect comparisons, or improper control flow. Basis path and loop testing are
effective techniques for uncovering a broad array of path errors.
Fig 4.2: Unit test environment – a driver applies test cases to the module to be
tested; stubs stand in for subordinate modules.
Good design dictates that error conditions be anticipated and error handling paths
set up to reroute or cleanly terminate processing when an error does occur. Yourdon
calls this approach antibugging. Unfortunately, there is a tendency to incorporate
error handling into software and then never test it.
Boundary testing is an essential task of the unit test step. Software often fails at its
boundaries: for example, when the ith repetition of a loop with i passes is invoked,
or when the maximum or minimum allowable value is encountered. Test cases that
exercise data structure, control flow, and data values just below, at, and just above
maxima and minima are very likely to uncover errors.
Drivers and stubs represent overhead. That is, both are software that must be
written (formal design is not commonly applied) but that is not delivered with the
final software product. If drivers and stubs are kept simple, actual overhead is
relatively low. Unfortunately, many components cannot be adequately unit tested
with "simple" overhead software. In such cases, complete testing can be postponed
until the integration test step (where drivers or stubs are also used).
Integration Testing
Top-down Integration
1. The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing may be conducted to ensure that new errors have not been
introduced.
The top-down integration strategy verifies major control or decision points early in
the test process. In a well-factored program structure, decision making occurs at
upper levels in the hierarchy and is therefore encountered first. If major control
problems do exist, early recognition is essential. If depth-first integration is selected,
a complete function of the software may be implemented and demonstrated. For
example, consider a classic transaction structure in which a complex series of
interactive inputs is requested, acquired, and validated via an incoming path. The
incoming path may be integrated in a top-down manner. All input processing (for
subsequent transaction dispatching) may be demonstrated before other elements of
the structure have been integrated. Early demonstration of functional capability is a
confidence builder for both the developer and the customer.
1. Delay many tests until stubs are replaced with actual modules,
2. Develop stubs that perform limited functions that simulate the actual module, or
3. Integrate the software from the bottom of the hierarchy upward.
The first approach (delay tests until stubs are replaced by actual modules)
causes us to lose some control over correspondence between specific tests and
incorporation of specific modules. This can lead to difficulty in determining the
cause of errors and tends to violate the highly constrained nature of the top-down
approach. The second approach is workable but can lead to significant overhead, as
stubs become more and more complex.
1. Low-level components are combined into clusters (sometimes called builds) that
perform a specific software sub function.
2. A driver (a control program for testing) is written to coordinate test case input
and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.
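The four bottom-up steps can be sketched as follows; the component and cluster names are hypothetical. Two low-level components form a cluster, and a throwaway driver coordinates test input and output.

```python
# Sketch of bottom-up integration (names hypothetical): low-level
# components form a cluster, and a driver feeds it test cases.

def parse_record(line):                 # low-level component 1
    return line.strip().split(",")

def validate_record(fields):            # low-level component 2
    return len(fields) == 2

def cluster_driver():
    """Driver: feeds test cases to the cluster and checks the results."""
    cases = [("a,b\n", True), ("a,b,c\n", False)]
    for line, expected in cases:
        assert validate_record(parse_record(line)) == expected
    return "cluster ok"

print(cluster_driver())
```

Once the cluster passes, the driver is discarded and the cluster is combined with the modules above it.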
Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested
using a driver (shown as a dashed block). Components in clusters 1 and 2 are
subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced
directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with
module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and
so forth.
Figure: Bottom-up integration – drivers D1, D2 and D3 test clusters 1, 2 and 3;
clusters 1 and 2 are subordinate to Ma, cluster 3 to Mb; Ma and Mb are in turn
subordinate to Mc.
Regression Testing
Regression testing is the reexecution of some subset of tests that have already been
conducted to ensure that changes have not propagated unintended side effects. In a
broader context, successful tests (of any kind) result in the discovery of errors, and
errors must be corrected. Whenever software is corrected, some aspect of the
software configuration (the program, its documentation, or the data that support it)
is changed. Regression testing is the activity that helps to ensure that changes (due to
testing or for other reasons) do not introduce unintended behavior or additional
errors.
The regression test suite (the subset of tests to be executed) contains three
different classes of test cases: a representative sample of tests that exercise all
software functions; additional tests that focus on functions likely to be affected by
the change; and tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite
large. Therefore, the regression test suite should be designed to include only those
tests that address one or more classes of errors in each of the major program
functions. It is impractical and inefficient to re-execute every test for every program
function once a change has occurred.
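Selecting such a subset can be sketched by mapping each test to the functions it covers; the test and function names below are hypothetical.

```python
# Sketch of regression-suite selection (test/function names hypothetical):
# keep only tests that touch a function affected by the change.

all_tests = {
    "test_login":    {"covers": {"auth"}},
    "test_checkout": {"covers": {"cart", "payment"}},
    "test_search":   {"covers": {"catalog"}},
}

def regression_suite(changed, tests):
    """Return names of tests whose coverage intersects the changed set."""
    return sorted(name for name, t in tests.items()
                  if t["covers"] & changed)

print(regression_suite({"payment"}, all_tests))   # ['test_checkout']
```

In practice the coverage map would come from tooling, but the selection principle is the same.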
Loop testing is a white-box testing technique that focuses exclusively on the validity
of loop constructs. Four different classes of loops can be defined:
1. Simple loops
2. Concatenated loops
3. Nested loops
4. Unstructured loops
Simple loops. The following set of tests can be applied to simple loops, where n is
the maximum number of allowable passes through the loop:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n-1, n, and n+1 passes through the loop.
Nested loops. If we were to extend the test approach for simple loops to nested
loops, the number of possible tests would grow geometrically as the level of nesting
increases. This would result in an impractical number of tests.
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops
at their minimum iteration parameter (e.g., loop counter) values. Add other tests
for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer
loops at minimum values and other nested loops to "typical" values.
4. Continue until all loops have been tested.
Concatenated loops. Concatenated loops can be tested using the approach defined
for simple loops, if each of the loops is independent of the other. However, if two
loops are concatenated and the loop counter for loop 1 is used as the initial value for
loop 2, then the loops are not independent. When the loops are not independent, the
approach applied to nested loops is recommended.
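The simple-loop test values can be sketched against a small loop. The function below is a hypothetical loop under test with at most n = 5 useful passes; the chosen limits exercise 0, 1, 2, a typical m < n, and n-1, n, n+1 passes.

```python
# Simple-loop testing sketch: exercise 0, 1, 2, m < n, n-1, n, n+1 passes.

def sum_first(values, limit):
    """Loop under test: sums at most `limit` leading values."""
    total = 0
    for i, v in enumerate(values):
        if i >= limit:          # loop exit condition under test
            break
        total += v
    return total

n = 5
data = [1, 2, 3, 4, 5]          # hypothetical data allowing n passes
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    print(passes, sum_first(data, passes))   # n+1 must not overrun
```

The n+1 case is the interesting one: a correct loop simply stops at n passes rather than reading past the data.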
Validation Testing.
After each validation test case has been conducted, one of two possible conditions
exists: (1) the function or performance characteristic conforms to specification and
is accepted, or (2) a deviation from specification is uncovered and a deficiency list
is created.
When custom software is built for one customer, a series of acceptance tests
are conducted to enable the customer to validate all requirements. Conducted by the
end user rather than software engineers, an acceptance test can range from an
informal "test drive" to a planned and systematically executed series of tests. In fact,
acceptance testing can be conducted over a period of weeks or months, thereby
uncovering cumulative errors that might degrade the system over time.
The alpha test is conducted at the developer's site by a customer. The software is
used in a natural setting with the developer "looking over the shoulder" of the user
and recording errors and usage problems. Alpha tests are conducted in a controlled
environment.
The beta test is conducted at one or more customer sites by the end-user of the
software. Unlike alpha testing, the developer is generally not present. Therefore, the
beta test is a "live" application of the software in an environment that cannot be
controlled by the developer. The customer records all problems (real or imagined)
and reports these to the developer at regular intervals.
State testing objectives explicitly. The specific objectives of testing should be stated
in measurable terms. For example, test effectiveness, test coverage, mean time to
failure, the cost to find and fix defects, remaining defect density or frequency of
occurrence, and test work-hours per regression test all should be stated within the
test plan.
Understand the users of the software and develop a profile for each user category.
Use cases that describe the interaction scenario for each class of user can reduce
overall testing effort by focusing testing on actual use of the product.
Develop a testing plan that emphasizes "rapid cycle testing." Gilb recommends that
a software engineering team "learn to test in rapid cycles (2 percent of project effort)
of customer-useful, at least field 'trialable,' increments of functionality and/or quality
improvement." The feedback generated from these rapid cycle tests can be used to
control quality levels and the corresponding test strategies.
Build "robust" software that is designed to test itself. Software should be designed
in a manner that uses antibugging techniques. That is, software should be capable of
diagnosing certain classes of errors. In addition, the design should accommodate
automated testing and regression testing.
Develop a continuous improvement approach for the testing process: the test
strategy should be measured. The metrics collected during testing should be used as
part of a statistical process control approach for software testing.
The basis path testing method can be applied to a procedural design or to source
code. In this section, we present basis path testing as a series of steps. The procedure
average, depicted in PDL in the figure, will be used as an example to illustrate each
step in the test case design method. Note that average, although an extremely simple
algorithm, contains compound conditions and loops. The following steps can be
applied to derive the basis set:
PROCEDURE average:
3. Determine a basis set of linearly independent paths. The value of V(G) provides
the number of linearly independent paths through the program control structure. In
the case of procedure average, we expect to specify six paths:
path 1: 1-2-10-11-13
path 2: 1-2-10-12-13
path 3: 1-2-3-10-11-13
path 4: 1-2-3-4-5-8-9-2-...
path 5: 1-2-3-4-5-6-8-9-2-...
path 6: 1-2-3-4-5-6-7-8-9-2-...
4. Prepare test cases that will force execution of each path in the basis set. Data
should be chosen so that conditions at the predicate nodes are appropriately set as
each path is tested. Test cases that satisfy the basis set just described are
Figure: Flow graph for procedure average (nodes 1 through 13). For example, a test
case for path 1 supplies the sentinel value(i) = -999 as the first input.
Each test case is executed and compared to expected results. Once all test
cases have been completed, the tester can be sure that all statements in the program
have been executed at least once.
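A Python approximation of the average procedure (not the book's exact PDL) shows how test cases are chosen to force distinct paths: read values up to a sentinel, and average only those within bounds.

```python
# Approximation of procedure average: process up to 100 values, stop at
# the -999 sentinel, average only values within [minimum, maximum].

def average(values, minimum=0, maximum=100):
    total = valid = i = 0
    while i < len(values) and i < 100 and values[i] != -999:
        v = values[i]
        if minimum <= v <= maximum:    # compound condition, as in the text
            valid += 1
            total += v
        i += 1
    return total / valid if valid > 0 else -999

# Each test case drives a different path through the code:
assert average([-999]) == -999           # sentinel first: no valid input
assert average([10, 20, -999]) == 15.0   # all values valid
assert average([10, 500, -999]) == 10.0  # one value out of bounds
```

Running a case per basis path, rather than every input combination, is what keeps the test count equal to V(G).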
The black-box approach is a testing method in which test data are derived
from the specified functional requirements without regard to the final program
structure. It is also termed data-driven, input/output driven, or requirements-based
testing. Because only the functionality of the software module is of concern, black-box
testing is also referred to as functional testing.
It is obvious that the more we have covered in the input space, the more
problems we will find and therefore we will be more confident about the quality of
the software. Ideally we would be tempted to exhaustively test the input space. But
as stated above, exhaustively testing the combinations of valid inputs will be
impossible for most of the programs, let alone considering invalid inputs, timing,
sequence, and resource variables. Combinatorial explosion is the major roadblock in
functional testing. To make things worse, we can never be sure whether the
specification is either correct or complete. Due to limitations of the language used in
the specifications (usually natural language), ambiguity is often inevitable. Even if
we use some type of formal or restricted language, we may still fail to write down all
the possible cases in the specification. Sometimes, the specification itself becomes an
intractable problem: it is not possible to specify precisely every situation that can be
encountered using limited words. And people can seldom specify clearly what they
want; they usually can tell whether a prototype is, or is not, what they want only
after it has been built. Specification problems contribute approximately 30
percent of all bugs in software.
1. Test cases are tough and challenging to design without clear functional
specifications.
2. It is difficult to identify tricky inputs if the test cases are not developed from the
specifications.
3. It is difficult to identify all possible inputs in limited testing time, so writing test
cases is slow and difficult.
4. There is a chance of leaving some paths unexercised during this testing.
5. There is a chance of repeating tests already performed by the programmer.
Principles of Modularity
Few Interfaces
Every module should communicate with as few others as possible. The few
interfaces principle restricts the overall number of communication channels between
modules in a software architecture. Communication may occur between modules in
a variety of ways - the few interfaces principle limits the number of such
connections. If a system is composed of n modules, then the number of intermodule
connections should remain much closer to the minimum, n-1, than to the maximum,
(n(n-1))/2. This principle follows from the criteria of continuity and protection: if
there are too many relations between modules, then the effect of a change or of an
error may propagate to a large number of modules. It is also connected to
composability, understandability and decomposability - to be reusable in another
environment, a module should not depend on too many others.
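The bounds quoted in this principle are easy to check for a given system size; a minimal sketch:

```python
def interface_bounds(n):
    """Minimum and maximum number of intermodule connections
    for a system of n modules: n-1 keeps the system connected,
    n(n-1)/2 is full pairwise coupling."""
    return n - 1, n * (n - 1) // 2

# For 10 modules: 9 connections suffice to keep the system
# connected, while a fully coupled design would need 45.
print(interface_bounds(10))  # (9, 45)
```

The gap between the two figures grows quadratically, which is why the principle insists on staying near the minimum.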
Small Interfaces
Whenever two modules communicate, they should exchange as little information
as possible.
Explicit Interfaces
Whenever two modules A and B communicate, this must be obvious from the
text of A or B or both. Criteria: Decomposability and composability: if a module is to
be decomposed into or composed with others, any outside connection should be
clearly marked. Continuity: what other element might be impacted by a change
should be obvious. Understandability: how can one understand A if its behaviour is
influenced by B in some tricky way? One of the problems in applying this principle
arises when modules communicate indirectly through shared data: the coupling is
real but not visible in the text of either module.
Information Hiding
The interface should include only some of the module's properties - the rest
should remain private.
There are many techniques available in white-box testing, because the problem
of intractability is eased by specific knowledge and attention on the structure of the
software under test. The intention of exhausting some aspect of the software is still
strong in white-box testing, and some degree of exhaustion can be achieved, such as
executing each statement at least once.
Control-flow testing, loop testing and data-flow testing all map the
corresponding flow structure of the software into a directed graph. Test cases are
carefully selected based on the criterion that all the nodes or paths are covered or
traversed at least once. By doing so we may discover unnecessary "dead" code --
code that is of no use or never gets executed at all, and which cannot be discovered
by functional testing.
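As an illustrative sketch (not from the source) of how structural coverage can reveal dead code, the helper below uses Python's sys.settrace to record which lines of a function actually execute under a set of test inputs; body lines that never appear in the trace are dead-code candidates:

```python
import sys

def traced_lines(func, *args):
    """Run func(*args) and return the set of line numbers executed
    inside func's body."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return lines

def classify(x):
    if x >= 0:
        return "non-negative"
    if x >= 0:               # dead branch: can never be true here
        return "unreachable"
    return "negative"

# Exercise both outcomes of the first predicate.
covered = traced_lines(classify, 5) | traced_lines(classify, -5)
# The line returning "unreachable" never appears in `covered`.
```

No black-box test suite could flag that branch, because the function's input/output behavior is identical with or without it.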
Documentation Standards
Document standards should apply to all documents produced during the course
of the software development. Documents should have a consistent style and
appearance, and documents of the same type should have a consistent structure.
Although document standards should be adapted to the needs of a specific project, it
is good practice for the same ‘house style’ to be used in all of the documents
produced by an organization.
Assuming that the use of standard tools is mandated in the process standards,
interchange standards define the conventions for using these tools. Examples of
interchange standards include the use of an agreed standard macro set if a text
formatting system is used for document production, or the use of a standard style
sheet if a word processor is used. Interchange standards may also limit the fonts and
text styles used because of different printer and display capabilities.
Quality planning should begin at an early stage in the software process. A quality
plan should set out the desired product qualities. It should define how these are to
be assessed. It therefore defines what ‘high quality’ software actually means.
Without such a definition, different engineers may work in an opposing way so that
different product attributes are optimized. The result of the quality planning process
is a project quality plan.
The quality plan should select those organizational standards that are
appropriate to a particular product and development process. New standards may
have to be defined if the project uses new methods and tools. Humphrey (1989), in
his classic book on software management, suggests an outline structure for a quality
plan. This includes:
1. Product introduction: A description of the product, its intended market and the
quality expectations for the product.
2. Product plans: The critical release dates and responsibilities for the product,
along with plans for distribution and product servicing.
3. Process descriptions: The development and service processes which should be
used for product development and management.
4. Quality goals: The quality goals and plans for the product, including an
identification and justification of critical product quality attributes.
5. Risks and risk management: The key risks which might affect product quality
and the actions to address these risks.
When writing quality plans, you should try to keep them as short as possible.
If the document is too long, engineers will not read it and this will defeat the
purpose of producing a quality plan.
Model formulation involves identifying the functional forms of the model (linear,
exponential, etc.) by analysis of collected data, identifying the parameters which are
to be included in the model and calibrating these using existing data. Such model
development, if it is to be trusted, requires experience in statistical techniques. A
professional statistician should be involved in the process.
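As a minimal sketch of the calibration step, assume a simple linear model effort = a + b * size and a hypothetical data set (both are illustrations, not from the source); ordinary least squares then gives the parameters in closed form:

```python
def fit_linear(xs, ys):
    """Calibrate y = a + b*x by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical data: size (KLOC) vs effort (person-months)
sizes   = [1, 2, 3, 4, 5]
efforts = [3, 5, 7, 9, 11]
print(fit_linear(sizes, efforts))  # (1.0, 2.0): effort = 1 + 2*size
```

Real calibration data are noisy, of course, which is why the text insists on statistical expertise for judging goodness of fit.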
The outcome of this capability assessment work was the SEI Software
Capability Maturity Model. This has been tremendously influential in convincing
the software engineering community, in general, to take process improvement
seriously. The SEI model classifies software processes into five different levels as
shown in figure.
1. Initial level At this level, an organization does not have effective management,
quality assurance and configuration control procedures in place. Project
success depends on the efforts of individuals rather than on a defined
process.
2. Repeatable level At this level, an organization has formal management quality
assurance and configuration control procedures in place. It is called the
repeatable level because the organization can successfully repeat projects of
the same type. However, there is a lack of a formal process model. Project
success is dependent on individual managers motivating a team and on
organizational folklore acting as an intuitive process description.
3. Defined level At this level, an organization has defined its process and thus has
a basis for qualitative process improvement. Formal procedures are in place to
ensure that the defined process is followed in all software projects.
4. Managed Level A Level 4 organization has a defined process and a formal
programme of quantitative data collection. Process and product metrics are
collected and fed into the process improvement activity.
5. Optimizing level At this level, an organization is committed to continuous
process improvement. Process improvement is budgeted and planned and is
an integral part of the organization’s process.
The SEI work on this model has been influenced by methods of statistical quality
control in manufacturing. Humphrey (1988), in the first widely published
description of the model, states:
W.E. Deming, in his work with the Japanese industry, after World War II, applied
the concepts of statistical process control to industry. While there are important
differences, these concepts are just as applicable to software as they are to
automobiles, cameras, wristwatches and steel.
The SEI maturity model is an important contribution but it should not be taken as
a definitive capability model for all software processes. The model was developed to
assess the capabilities of companies developing defense software. These are large
long-lifetime software systems which have complex interfaces with hardware and
other software systems. They are developed by large teams of engineers and must
follow the development standards and procedures laid down by the US Department
of Defense.
The first three levels of the SEI model are relatively simple to understand. The
key process areas include practices which are currently used in industry. Some
organizations have reached the higher levels of the model (Diaz and Sligo, 1997), but
the standards and practices that are applicable at that level are not widely
understood. In some cases, the best practice might diverge from the SEI model
because of local organizational circumstances.
Problems at the higher levels do not negate the usefulness of the SEI model.
Most organizations are at lower levels of process maturity. There are, however,
three more serious problems with the SEI model. These may mean that it is not a
good predictor of an organization’s capability to produce high-quality software.
Process classifications
The process maturity classification proposed in the SEI model is appropriate for
large, long-lifetime software projects undertaken by large organizations. There are
many other types of software project and organization where this view of process
maturity should not be applied directly.
1. Informal processes These are processes where there is no strictly defined
process model. The process used is chosen by the development team.
Informal processes may use formal procedures such as configuration
management but the procedure to be used and the relationships between
procedures are not predefined.
2. Managed Processes These are processes where there is a defined process model in
place. This is used to drive the development process. The process model
defines the procedures used, their scheduling and the relationships between
the procedures.
3. Methodical Processes These are processes where some defined development
method or methods (such as systematic methods for object-oriented design)
are used. These processes benefit from CASE tool-support for design and
analysis processes.
4. Improving processes These are processes which have inherent improvement
objectives. There is a specific budget for process improvements and
procedures in place for introducing such improvements. As part of these
improvements, quantitative process measurement may be introduced.
Figure shows different types of product and the type of process that might be
used for their development.
The classes of system shown in Figure may overlap. Therefore, small systems
which are re-engineered can be developed using a methodical process. Large
systems always need a managed process. However, if the application domain is not
well understood, a methodical process may not be appropriate.
Most software processes now have some CASE tool support, so they are
supported processes. Methodical processes are now usually supported by analysis
and design workbenches. However, processes may have other kinds of tool support
(for example, prototyping tools and testing tools) irrespective of whether or not a
structured design method is used.
The tool support that can be effective in supporting processes depends on the
process classification. For example, informal processes can use generic tools such as
prototyping languages, compilers, debuggers, word processors, etc. They will rarely
use more specialized tools in a consistent way. Figure shows that a spectrum of
different tools can be used in software development. The effectiveness of particular
tools depends on the type of process that is used.
This clause also sets out the basic principles for establishing the quality system
within the organization and sets out many of its functions, which are then described
in greater detail in later sections.
The model requires the organization to set up a quality system. The system
should be documented and a quality plan and manual prepared. The scope of the
plan is determined by the activities undertaken and consequently the standard
(ISO9001/2/3) employed. The focus of the plan should be to ensure that activities are
carried out in a systematic way and documented.
Contract review specifies that each customer order should be regarded as a contract.
Order entry procedures should be developed and documented. The aim of these
procedures is to:
The aim of this clause is to ensure that both the supplier and customer
understand the specified requirements of each order and to document this agreed
specification to prevent misunderstandings and conflict at a later date.
Design control procedures are required to control and verify design
activities, taking the results from market research through to practical
designs. The key activities covered are:
The aim of this section is to ensure that the design phase is carried out
effectively and to ensure that the output from the design phase accurately reflects
the input requirements.
The top-level document is the quality plan, which sets out policy on key quality
issues. Each level adds more detail to the documentation. Where possible, existing
documentation should be incorporated. The aim should be to provide systematic
documentation, rather than simply to provide more documents. It is important that
each level of documentation is consistent with the one above it, providing greater
detail as each level is descended.
The purchasing system is designed to ensure that all purchased products and
services conform to the requirements and standards of the organization. The
emphasis should be placed on verifying the supplier's own quality management
procedures. Where a supplier has also obtained external accreditation for their
quality management systems, checks may be considerably simplified. As with all
procedures, they should be documented.
All services and products supplied by the customer must be checked for
suitability, in the same way as supplies purchased from any other supplier. In order to
ensure this, procedures should be put in place and documented, so that these
services and products may be traced through all processes and storage.
Process control requires a detailed knowledge of the process itself. This must be
documented, often in graphical form, as a process flow chart or similar. Procedures
for process control should themselves be documented and followed.
All incoming supplies must be checked in some way. The method will vary
according to the status of the supplier's quality management procedures, from full
examination to checking evidence supplied with the goods.
Any equipment used for measuring and testing must be calibrated and
maintained. Checking and calibration activities should become part of regular
maintenance. Management should ensure that checks are carried out at the
prescribed intervals and efficient records are kept.
There are circumstances where the standard permits the sale of non-conforming
product, provided that the customer is clearly aware of the circumstances and is
generally offered a concession. Representatives of accreditation bodies suggest that
this is an area where organizations often become lax after a while, relaxing
procedures and allowing non-conforming product through.
Quality records are vital to ensure that quality activities have actually been
carried out. They form the basis for quality audits, both internal and external. They
do not have to conform to a prescribed format, but must be fit for their intended
purpose. As many will exist before the accredited system is implemented, the aim is
to systematize and simplify existing practice wherever possible, to reduce wasted
effort in reproducing previous work in this area.
The quality system should be 'policed' from within the organization and not
dependent upon external inspection. Procedures should be established to set up
regular internal audits as part of normal management procedure. The role of internal
audits should be to identify problems early in order to minimize their impact and
cost.
1. CMM has been developed at Carnegie Mellon since around 1987; it is still
undergoing refinement.
2. The five CMM levels (in order of increasing maturity) are:
1. Initial -- ad hoc
2. Repeatable -- basic project management techniques are used
3. Defined -- a software engineering process is used
4. Managed -- quantitative Q/A process is used
5. Optimizing -- the process itself can be refined to improve efficiency
Requirements management
Project planning
Goal 1: Software estimates are documented for use in planning and tracking the
software project. Software project activities and commitments are planned and
documented.
Goal 2: Affected groups and individuals agree to their commitments related to the
software project.
Goal 1: Actual results and performances are tracked against the software plans.
Goal 2: Corrective actions are taken and managed to closure when actual results and
performance deviate significantly from the software plans.
Goal 3: Changes to software commitments are agreed to by the affected groups and
individuals.
Subcontract management
Configuration management
Level 3
Training program
Goal 1: The software engineering tasks are defined, integrated, and consistently
performed to produce the software.
Goal 2: Software work products are kept consistent with each other.
Intergroup coordination
Level 5
Defect prevention
CONOPS:
The concept of operations (CONOPS) for the CMMI product suite is intended
not only to describe the use of the proposed product suite, but also to be used to
obtain consensus from the developers, users and discipline owners on the
infrastructure required to develop, implement, transition and sustain the CMMI
product suite.
Why CMMI:
CMMs have been in use for various disciplines with the intent of providing a
model of best practices for each intended discipline. But in complex
environments, such as development efforts where several of these disciplines are
employed, the collective use of individual models has resulted in redundancies,
additional complexity, increased costs and, at times, discrepancies.
Since not all organizations employ every discipline, the project also provides
CMMI models for individual disciplines.
Since not all processes apply equally to all organizations, the CMMI models
are tailorable to an organization’s mission and business objectives and criteria for
tailoring are provided.
The effort to define and develop the CMMI is being sponsored by the Office
of the Secretary of Defense / Acquisition and Technology (OSD (A&T)). The
industry sponsor is the System Engineering Committee of the National Defense
Industrial Association (NDIA). The effort includes the design, implementation,
transition and sustainment efforts. The CMMI project is a collaborative effort with
participation by OSD, the services, government agencies, and industry through the
NDIA System Engineering Committee, together with the Software Engineering
Institute (SEI) of Carnegie Mellon University. The management structure for the
project includes a steering group made up of government, industry and the SEI,
reporting to OSD (A&T). This steering group is responsible for the overall
direction, guidance and requirements provided to the project manager and product
development team.
The responsibility for project management has been assigned to the SEI. A
Product Development Team, led by the SEI, is developing the CMMI product
suite. The initial review of the product suite is accomplished by stakeholders.
The initial CMMI product suite includes a framework for generating CMMI
products, and a set of CMMI products produced by the framework. The framework
includes common elements and best features of the existing models as well as rules
and methods for generating CMMI products. Users may select discipline-specific
elements of the CMMI product suite based on their business objectives and mission
needs.
The CMMI product suite was developed specifically for those users who are
system and product developers and want to improve their processes and products.
Tools and models are provided that enable users to assess where they are.
Conduct training
Perform Assessments
Do tailoring
Enterprise Executives
Product Decision makers
Product Developers
Use of Models:
SIX SIGMA
Very few processes achieve this level of performance and consequently most
organizations endure very high costs due to poor quality. Most company processes
produce upwards of 6000 defects per million opportunities, which for many is simply
not good enough for today’s competitive environment, where customer demands
increase exponentially.
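The defect rates quoted above can be reproduced from the normal distribution. The sketch below assumes the conventional 1.5-sigma long-term shift used in six sigma calculations:

```python
from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a process at a given
    sigma level, applying the conventional 1.5-sigma long-term shift."""
    return NormalDist().cdf(-(sigma_level - shift)) * 1_000_000

print(round(dpmo(6), 1))  # 3.4 defects per million at six sigma
print(round(dpmo(4)))     # ~6210 -- close to the ~6000 cited above
```

A process running at roughly four sigma therefore matches the "upwards of 6000 defects per million opportunities" figure, while six sigma brings it down to 3.4.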
The benefits of a six sigma programme, when applied to an organization, are:
1. Objective:-
2. Inputs:-
1. Statement of objectives.
2. List of issues to be addressed.
3. Current project schedule and cost data
4. Report from other reviews or audits.
5. Reports of resources assigned to the project.
6. Data on the software elements completed.
3. Entry Criteria:-
It includes
a) Planning The review leader identifies the review team, schedules the
meetings and is responsible for the distribution of input
materials.
b) Overview A qualified person from the project conducts an overview
session for the review team, so that they understand the
material better and can achieve maximal productivity.
5. Exit Criteria:-
6. Output:-
The review report identifies the project, the review team, inputs to the review,
review objectives and a list of issues and recommendation.
7. Auditability:-
1. Objectives:-
2. Responsibilities:-
1. Leader responsible for conducting review and issuing the review report.
2. Recorder responsible for documenting findings, decisions and
recommendations made by the review team.
3. Team member responsible for their own preparation.
1. Statement of objectives.
2. Software elements being examined.
3. Specification for the software elements.
4. Plans, standards or guidelines against which the software elements are to be
examined.
4. Entry Criteria:-
5. Procedure:-
1. Planning The review leader identifies the review team, schedule meetings
and distributes input materials.
2. Overview A qualified person from the project will conduct the overview
session for the review team.
3. Preparation Each person studies the material and prepares for the review
meeting.
4. Examination Examine the software element relative to guidelines,
specifications and standards.
6. Exit Criteria:-
The technical review is complete when the statement of objectives has been
addressed and when the technical review report has been issued.
7. Output:-
The technical review report identifies the review team, software elements
reviewed, inputs to the review etc.
8. Auditability:-
1. Audit Objective:-
2. Input:-
3. Entry Criteria:-
4. Procedure:-
1. Planning:-
It includes
3. Preparation:-
4. Examination:-
5. Reporting:-
The audit team will issue a draft report to the audit organization for review
and comments.
7. Output:-
1. Audit identification
2. Scope
3. Conclusions
4. Synopsis
5. Follow-up
8. Auditability:-
Quality goals:-
The five most basic considerations for quality goal establishment are
1. Functionality
2. Performance
3. Constraints
4. Technological innovativeness
5. Technological and managerial risk.
Organization Structure:-
(Organization chart: Project Management and Software Configuration
Management.)
Projects progress quickly until they are 90% complete. Then they remain at 90%
complete forever. When things are going well, something will go wrong. When
things just can’t get worse, they will. When things appear to be going better, you
have overlooked something. If project content is allowed to change freely, the rate
of change will exceed the rate of progress. Project teams detest progress reporting
because it manifests their lack of progress.
All technical and managerial activities required to deliver the deliverables to the
client. A software project has a specific duration, consumes resources and produces
work products.
Software Project Management Plan (SPMP)
The controlling document for a software project. It specifies the technical and
managerial approaches to develop the software product, and is a companion
document to the requirements analysis document: changes in either may imply
changes in the other. The SPMP may be part of the project agreement.
Project Agreement
The document written for a client that defines the scope, duration, cost and
deliverables for the project: the exact items, quantities, delivery dates and delivery
locations. Deliverables may include:
Documents
Demonstrations of function
Demonstration of nonfunctional requirements
Demonstrations of subsystems
Functions
Examples:
Project management
Configuration Management
Documentation
Quality Control (Verification and validation)
Training
Tasks
Task Sizes
Examples of Tasks
Action items
Activities
Examples of Activities
Major Activities:
Planning
Requirements
Analysis
System Design
Object Design
Implementation
System Testing
Delivery
Refine scenarios
Define Use Case model
Define object model
Define dynamic model
Design User Interface
0. Front Matter
1. Introduction
2. Project Organization
3. Managerial Process
4. Technical Process
5. Work Elements, Schedule, Budget
Optional Inclusions
Title Page
Revision sheet (update history)
Preface: scope and purpose
Tables of contents, figures, tables
Planner
Analyst
Designer
Programmer
Tester
Maintainer
Trainer
Document Editor
Web Master
Configuration Manager
Group leader
Liaison
Minute Taker
Project Manager
Hierarchical Structure
Plans the activities of an individual team and has the following responsibilities.
Web Master
o There are enough cycles on the development machines and security will not be
addressed
o There are no bugs in the CASE Tool recommended for the project
Examples of Dependencies
o The VIP team depends on the vehicle subsystem provided by the vehicle team
o The automatic code generation facility in the CASE tool Rose/Java depends on
JDK. The current release of Rose/Java supports only JDK 1.0.2
Examples of Constraints
o The length of the project is 3 months. limited amount of time to build the
system
o The project consists of beginners. It will take time to learn how to use the tools
o Not every project member is always up-to-date with respect to the project
status
o The use of UML and a CASE tool is required
o Any new code must be written in Java
o The system must use Java JDK 1.1
WBS Trade-offs
Work breakdown structure influences cost and schedule. Thresholds for
establishing the WBS, in terms of percentage of total effort:
Slack Time
Available Time - Estimated ("Real") Time for a task or activity; or: Latest Start
Time - Earliest Start Time.
Critical Path
The path in a project plan for which the slack time at each task is zero.The critical
path has no margin for error when performing the tasks (activities) along its route.
Make sure to be able to revise or dump a project plan: complex system
development is a nonlinear activity. If project goals are unclear and complex,
use team-based project management. In this case avoid perfect GANTT charts
and PERT charts.
Don’t look too far into the future
Avoid micro management of details
Don’t be surprised if current project management tools don’t work:
They were designed for projects with clear goals and fixed organizational
structures
E-Mail
Newsgroups
Web
Lotus Notes
PERT’s time estimates involve the beta probability distribution to capture three
estimates (optimistic, most likely, and pessimistic) of the duration of an activity.
The beta distribution was chosen for PERT instead of the normal distribution
because it more closely resembles people’s behavior when estimating. We are
naturally optimistic, which skews the results to the left. If the actual duration is
shorter than the most likely, it will not be much shorter, but if it is longer, then it
could be a lot longer. Actually, since we only use three estimates in PERT, you
would get a triangular distribution if plotted.
The formula for the weighted average of PERT is actually the mean of the
triangular distribution used as an approximation of the beta, by experimentation.
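The text refers to the weighted average without stating it; the classic PERT formula is (O + 4M + P) / 6, with standard deviation (P - O) / 6. The sketch below assumes that formula and uses hypothetical duration estimates:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT weighted average and standard deviation
    for a single activity's duration."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

# An activity estimated at 4 days (most likely), 2 days (optimistic)
# and 12 days (pessimistic):
print(pert_estimate(2, 4, 12))  # mean 5.0 -- pulled above the
                                # most likely value of 4 by the long
                                # pessimistic tail
```

This illustrates the skew described above: a long pessimistic tail drags the expected duration past the most likely estimate.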
In Critical Chain, Goldratt says that a project schedule is a lot like a factory,
except that work is moving through a number of activities instead
of a product being made in a sequence of machines. He describes some common
problems with the way project scheduling has been handled in recent years, such as
yielding to the student syndrome, and doing too much multitasking to get optimum
organizational throughput.
In the project, the critical chain is indicated as a heavy line through several
activities and stops at the Early Finish Date. Some of these activities have been
identified as critical through a CPM analysis, and some are critical because the
resource needed for them demands it.
It is important to realize that the critical path and the critical chain are not the
same thing. The critical path is the longest path through the network when only
activities are considered. A critical chain is the longest path through the network
considering both activities and resources.
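The critical path half of this distinction (activities only, no resources) can be computed with a forward and backward pass over the network. The sketch below uses a small hypothetical four-activity network; activities with zero slack form the critical path:

```python
def critical_path(durations, predecessors):
    """Forward/backward pass over an activity network.
    Returns (project_duration, activities with zero slack).
    Assumes dict keys are listed with predecessors first."""
    # Forward pass: earliest start/finish
    es, ef = {}, {}
    for a in durations:
        es[a] = max((ef[p] for p in predecessors[a]), default=0)
        ef[a] = es[a] + durations[a]
    duration = max(ef.values())
    # Backward pass: latest finish/start
    lf, ls = {}, {}
    for a in reversed(list(durations)):
        succs = [b for b in durations if a in predecessors[b]]
        lf[a] = min((ls[b] for b in succs), default=duration)
        ls[a] = lf[a] - durations[a]
    critical = [a for a in durations if ls[a] == es[a]]
    return duration, critical

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(critical_path(durations, predecessors))  # (8, ['A', 'C', 'D'])
```

Here B has two days of slack (its latest start is day 5, its earliest start day 3), so it lies off the critical path; a critical-chain analysis would additionally check whether B and C compete for the same resource.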
Creative decision making - for estimates and tough choices
Idea generating situations - completely new input is required
Process
1. Brainstorm an activity list – use the nominal group technique to build a list of
possible activities for the project on stickies.
3. Find highest WBS levels – summarize the groupings by work product output to
derive the higher levels of the WBS.
Types of Organizations
(i) Functional
(ii) Matrix
(iii) Projectized
Functional Organization
Matrix Organization
A matrix organization may be:
1. weak
2. balanced
3. strong
In projectized organizations, the project manager has total authority and acts
like a mini-CEO. All personnel assigned to the project report to the project manager,
usually in a vertical organization, so the company becomes like a layered matrix.
The clear advantages for a project in this form of organization are that it
establishes a unity of command and promotes more effective communication.
*******************