SDLC Activities:
Feasibility: determining whether the proposed development is worthwhile.
Software Quality Assurance: determining the activities that will help ensure the quality of the product.
Work Breakdown Structure: determining the subtasks necessary to develop the product.
Testing: executing the software with data to help ensure that the software works properly.
Regression Testing: saving tests from the previous version to ensure that the new version retains the previous capabilities.
Software Myths:
1. Software is easy to change.
2. Testing software can remove all the errors.
3. Reusing software increases safety.
4. Software can work right the first time.
5. Software with more features is better software.
6. Adding more software engineers will make up the delay.
7. Software can be designed thoroughly enough to avoid most integration problems.
Software: A software product has multiple users, a good user interface, a proper user manual and good documentation; because it serves a large number of users, it must be properly designed, carefully implemented and thoroughly tested.
Boehm: The practical application of scientific knowledge in the design and construction of computer programs, and of the associated documentation required to develop, operate and maintain them.
[Figure: Software comprises Programs, Documentation and Operating Procedures.]

Documents produced in each phase:
- Analysis/Specification: format specification, context diagram, DFD
- Design: flow charts, E-R diagram
- Implementation: source code listing, cross-reference listing
- Testing: test data, test results
- Documentation: manuals
List of documentation manuals: system overview, beginner's guide/tutorial, reference guide, installation guide, system administration guide.
[Figure: Classical waterfall model — Feasibility Study → Requirement Analysis and Specification → Design → Coding and Unit Testing → Integration and System Testing → Maintenance]
3. Design: The goal of the design phase is to transform the SRS into a structure that is suitable for implementation in some programming language. In technical terms, in the design phase we derive the software architecture from the SRS document. Two types of design approach are in use:
Note:- The important components of the SRS document are the functional requirements, the non-functional requirements and the goals of implementation. Functional requirements describe the functions supported by the system; non-functional requirements identify the performance requirements, the required standards to be followed, etc.
Traditional design approach: currently used by many industries. It requires two different activities to be performed.
i. Structured analysis: preparing a detailed analysis of the different functions to be carried out by the system and identifying the data flow among those functions. The whole software is divided into sub-parts, and how data flows between the different processes is identified using DFDs. Once the structured analysis activity is complete, structured design is undertaken. Design consists of two main activities:
Architectural design (or high-level design) and detailed design (or low-level design). High-level design involves decomposing the system into modules and representing the interfaces and relationships among them. During detailed design the different modules are designed in greater detail, e.g. data structures and algorithms are selected for each module. Several well-known methodologies are available for carrying out high-level and low-level design.
ii. Object-oriented design: a newer technique for software design. In this technique the various objects that occur in the problem domain and the solution domain are identified, and the different kinds of relationships that exist among them are established. The object structure is further refined to obtain the detailed design. Its advantages are lower development effort and time, and better maintainability.
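Since the object-oriented design description above is abstract, here is a minimal illustrative sketch of identifying problem-domain objects and the relationships among them. The domain (a library), the class names and the methods are invented for illustration and are not from the notes.

```python
# Hypothetical problem domain: a library. Objects (Book, Member,
# Library) and their relationships are identified first, then refined
# during detailed design (data structures, method bodies).

class Book:
    def __init__(self, title):
        self.title = title
        self.issued_to = None     # relationship: a Book may be held by one Member

class Member:
    def __init__(self, name):
        self.name = name
        self.books = []           # relationship: a Member holds many Books

class Library:
    def __init__(self):
        self.catalogue = []       # aggregation: a Library contains Books

    def add(self, book):
        self.catalogue.append(book)

    def issue(self, book, member):
        # behaviour refined during detailed design
        book.issued_to = member
        member.books.append(book)

lib = Library()
b = Book("SE Notes")
m = Member("Asha")
lib.add(b)
lib.issue(b, m)
```

The point of the sketch is that the classes mirror entities in the problem domain, which is what makes object-oriented designs easier to maintain.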
[Figure: Iterative waterfall model — Feasibility Study → Requirement Analysis and Specification → Design → Coding and Unit Testing → Integration and System Testing → Maintenance, with feedback paths from each phase to the preceding phases]
The classical waterfall model is an idealized one, since it assumes that no defect is introduced during any phase of the life cycle. In practical environments, however, defects do get introduced in almost every phase, and they are the reason software crashes and companies incur losses. These defects are usually detected much later in the life cycle; for example, a design defect might go unnoticed until the coding or testing phase. Once a defect is detected, we need to go back to the phase where it was introduced and redo some of the work done during that phase and the subsequent phases. Therefore, in practical software development work it is not possible to strictly follow the classical waterfall model.
Errors get introduced in every phase of the life cycle. It is preferable to detect these errors in the same phase or, failing that, as early as possible. For example, if a design error is detected during the design phase, it takes less cost and effort to fix than if it were detected in a later phase. This principle of detecting errors as close as possible to their point of introduction is called phase containment of errors, and it is an important software engineering principle.
* Feedback paths are therefore needed in the classical waterfall model from every phase to its preceding phases, as shown in the figure.
Prototype model: This model suggests that before developing the actual software, a working prototype of the system should be built first. A prototype is a toy implementation of the system, with limited functional capability, low reliability and inefficient performance compared to the actual software. There are several reasons to develop a prototype. First, the initial requirements may not be clearly understood; the prototype is refined based on user feedback, and the process continues until it is accepted by the user. Second, it is impossible to "get it right" the first time, and one must plan to throw away the first product in order to develop a good-quality product, as advocated in the literature. Third, it helps to critically examine the technical issues associated with product development.
The prototyping model of software development is shown in the figure.
[Figure: Prototype model — Requirement Gathering → Quick Design → Customer Evaluation of the prototype → Customer Suggestions (refine the quick design) → Acceptance by Customer → Design → Implement → Testing → Maintenance]
Advantages:
• A partial product is built in the initial stages, so the customer gets a chance to see the product early in the life cycle and give the necessary feedback.
• Requirements become clearer, resulting in a more accurate product.
• New requirements are easily accommodated, as there is scope for refinement.
• As the user is involved from the start of the project, he feels more secure, comfortable and satisfied.
Disadvantages:
• After seeing an early prototype, end users demand that the actual system be delivered soon.
• End users may not appreciate the difference between a prototype and a well-developed system.
• If not managed properly, the iterative process of prototype demonstration and refinement can continue for a long time.
• If the end user is too satisfied with the initial prototype, he may lose interest in the project.
• Poor documentation.
Evolutionary Model: This model is also known as the successive versions model. In this model the system is first broken down into several functional units that can be incrementally implemented and delivered. The developers first design the core modules of the system, and these core modules are tested thoroughly, thereby reducing the chance of errors in the final product; new functionality is added in successive versions. Each evolutionary version may be developed using the iterative waterfall model. As users get a chance to experiment with a partially developed system well before the fully developed version is released, this helps in finding the users' exact requirements. Also, since the core modules get tested thoroughly, the chance of errors is reduced.
[Figure: Evolutionary model — the system grows by successive versions: A; then A + B; then A + B + C + D]
Advantages:
• As the product is delivered in parts, the total cost of the project is distributed.
• A limited number of people can be put on the project, because the work is delivered in parts.
• Customers get to see useful functionality early in the software development life cycle.
• As a result of end-user feedback, the requirements for successive releases become clearer.
• As functionality is added in steps, testing also becomes easier.
• The risk of product failure is reduced, as users start using the product early.
Disadvantages:
• For most practical problems it is difficult to subdivide the problem into several functional units that can be incrementally implemented and delivered. As the product is delivered in parts, the total development cost is higher.
• Well-defined interfaces are required to connect the modules developed in each phase.
• Well-defined project planning is needed to distribute the work properly.
• Selecting the core modules in the design phase can be difficult.
The Spiral model: This is another popular process model used by industry. The model was proposed by Boehm in 1988 for large products. It focuses on minimizing risk through the use of prototypes. One can view the spiral model as a waterfall model in which each stage is preceded by a risk-analysis stage. The model is divided into four quadrants, each with a specific purpose. Each loop of the spiral represents the progress made in the project; the exact number of loops is not fixed, and each loop represents a phase of the software process. In the first quadrant, objectives, alternative means of developing the software, and the constraints imposed on the product are identified. The next quadrant deals with identifying risks and strategies for resolving them. The third quadrant resembles the waterfall model, with activities such as design, detailed design, coding and testing. The fourth quadrant evaluates the product; requirements are further refined, and so is the product. The number of loops through the quadrants varies from project to project.
* The alternative solutions are evaluated, and potential project risks are identified and dealt with by developing an appropriate prototype.
The fourth quadrant (stage) consists of reviewing the results of the stages traversed so far with the customer and planning the next iteration around the spiral.
If a risk is resolved successfully, planning for the next cycle is done. If at some stage a risk cannot be resolved, the project is terminated.
This model can also be used when the requirements of a project are very complex or when a company plans to introduce new technologies.
Examples: decision support systems, defence, aerospace and large business projects.
[Figure: Spiral model quadrants — 1. Determine objectives and identify alternatives; 2. Evaluate alternatives, identify and resolve risks; 3. Develop and test; 4. Customer evaluation, review, and planning for the next cycle]
Advantages:
• The model tries to resolve all possible risks involved in the project.
• The end user gets a chance to see the product early in the life cycle.
• As the product is refined after customer feedback in each phase, the model ensures good-quality software.
• The model makes use of techniques such as reuse, prototyping and component-based design.
Disadvantages:
• The model requires expertise in risk management and excellent management skills.
• The model is not suitable for small projects, as the cost of risk analysis may exceed the actual cost of the project.
NOTE: This model is called a meta-model, since it encompasses all the models discussed and uses prototyping as a risk-reduction mechanism.
* This model is much more flexible than the other models, since the exact number of phases in the development process is not fixed. For some projects the design may be accomplished over 3 to 4 consecutive loops, while for others it is accomplished in just one loop.
Software Characteristics:
1. Correctness: the extent to which a program satisfies its specification.
2. Reliability: the property that defines how well the software meets its requirements.
3. Efficiency: a factor related to the execution of the software; it includes response time, memory requirement and throughput. It is most important in critical applications, e.g. radar systems.
4. Usability: the effort required to learn and operate the software properly.
5. Maintainability: the effort required to locate and fix errors in operational programs.
6. Testability: the effort required to test the system or module to ensure that it performs its intended function.
7. Flexibility: the effort required to modify an operational program or enhance its functionality.
8. Portability: the effort required to transfer the software from one hardware configuration to another.
9. Reusability: the extent to which parts of the software can be reused in other, related applications.
10. Interoperability: the effort required to couple the system with other systems.
[Figure: Quality factors (McCall) — Product Revision: maintainability, flexibility, testability; Product Transition: portability, reusability, interoperability; Product Operations: correctness, reliability, efficiency, integrity, usability]
[Bar chart: outcomes of software projects — paid for but not received; delivered but not used; abandoned or reworked; used after changes; used as delivered]
Changes in user requirements have been a major problem: 50% of systems required modification due to changes in user requirements.
[Pie chart: distribution of maintenance effort — changes in user requirements 41.9%; changes in data formats 17.5%; emergency fixes 12.4%; routine debugging 9.0%; H/W changes 6.2%; documentation 5.5%; efficiency improvement 4.0%; other 3.4%]
Quality Issues:
1. Correctness
2. Maintainability
3. Reusability
4. Openness and interoperability
5. Portability
6. Security
7. Integrity
8. User-friendliness
Project Size Estimation Metrics: The size of a program is neither the number of bytes that the source code occupies nor the byte size of the executable code; it is an indicator of the effort and time required to develop the program. It indicates the development complexity.
Lines of Code (LOC): This is the simplest measure of problem size. The metric counts the number of source instructions required to solve the problem; lines used for comments and header lines are ignored. Although counting the LOC at the end of a project is very simple, estimating it at the beginning of a project is tricky. The project manager divides the problem into modules, and each module into sub-modules, and so on, until the size of each lowest-level module is predictable. By summing the estimates of the lowest-level modules, the project manager arrives at the total size estimate.
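The bottom-up estimation procedure just described can be sketched in a few lines; the module breakdown and leaf-level figures below are invented purely for illustration.

```python
# Bottom-up LOC estimation: split the problem into modules and
# sub-modules until each leaf is small enough to estimate, then
# sum the leaf estimates upward. A node is either an int (a leaf
# estimate in LOC) or a dict of named sub-modules.

def estimate_loc(node):
    if isinstance(node, int):
        return node
    return sum(estimate_loc(child) for child in node.values())

# Hypothetical decomposition of a project:
project = {
    "ui":      {"forms": 800, "reports": 600},
    "logic":   {"billing": 1200, "inventory": 900},
    "storage": 500,
}

total = estimate_loc(project)   # 800+600+1200+900+500 = 4000 LOC
```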
Disadvantages:
• The numerical value LOC gives for problem size varies with coding style, because different programmers use different coding styles and programming languages.
• A good problem-size measure should consider the overall complexity of the problem and the effort needed to solve it. In some problems the design might be very complex while the coding is straightforward. In general, the effort required for coding is not proportional to the overall development effort.
• LOC correlates poorly with the quality and efficiency of the code. For example, some programmers produce lengthy and complicated code because they do not make effective use of the available instruction set, and therefore end up with a higher LOC.
• If a programmer uses several library routines, the LOC will be lower. If a manager uses LOC to measure the effort of different engineers, those who reuse code would be discouraged.
• LOC measures textual complexity only. A program with complex logic requires much more effort to develop than a program of the same length with simple logic.
• It is very difficult to arrive at an accurate LOC estimate early on; the LOC metric can only be computed accurately after the code has been fully developed.
Function point metric: The idea behind the function point metric is that the size of a software product directly depends on the number and types of different functions it performs. It computes the size of the software product using five different characteristics of the software:
- External inputs
- External outputs
- External inquiries
- Internal files
- External interface files
1. External inputs are events taking place in the system that result in a change of data in the system.
2. External outputs are user and control data coming out of the system, e.g. reports, displays of data, error messages.
3. Inquiries do not change system data; they are inputs from the user that cause an immediate response.
4. Internal files are files maintained by the system and understood by the customer.
5. External interface files are files shared by the system and other programs.
FP = UFP * CAF
   = 618 * 1.07
   = 661.26
   ≈ 661
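As a hedged sketch of how a figure like the one above might be computed: the weights below are the standard "average" complexity weights for the five characteristics, and the counts are hypothetical; the worked example supplies UFP = 618 and CAF = 1.07 directly.

```python
# Function point sketch. UFP = weighted sum of the five counts;
# CAF = 0.65 + 0.01 * (sum of 14 degree-of-influence ratings, each 0..5).
# Weights are the usual "average" complexity weights (an assumption
# here, since the text does not list them).

WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def ufp(counts):
    return sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)

def caf(influence_ratings):
    return 0.65 + 0.01 * sum(influence_ratings)

def fp(counts, influence_ratings):
    return ufp(counts) * caf(influence_ratings)

# Reproducing the text's final step with its given values:
print(round(618 * 1.07, 2))   # 661.26, rounded to 661
```

Note that rating all 14 influence factors as 3 ("average") gives CAF = 0.65 + 0.42 = 1.07, the value used in the example.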
Disadvantage: It does not take into account the algorithmic complexity of the software. To overcome this, the feature point metric is used.
Heuristic Techniques: These assume that the project parameters can be modelled using mathematical expressions. The various heuristic models are divided into three classes:
• Static single-variable models
• Static multi-variable models
• Dynamic multi-variable models
1. A static single-variable model provides a means to estimate a characteristic of the project from a previously estimated characteristic of the software product, such as size:
Resource = c1 * e^d1
• where e is the characteristic of the software that has already been estimated, and the resource could be effort, project duration, staff size, etc. The constants c1 and d1 are determined using data collected from past projects.
The basic COCOMO model is an example of a static single-variable model. A static multi-variable cost estimation model is of the form
Resource = c1 * e1^d1 + c2 * e2^d2 + … + cn * en^dn
where e1, e2, … are characteristics of the software that have already been estimated, and c1, c2, d1, d2, … are constants. It provides a more accurate estimate than a single-variable model. Dynamic multi-variable models project resource requirements as a function of time.
• Expert Judgment: Experts analyze the problem thoroughly and then, based on an educated guess, estimate the problem size. Experts estimate the size/cost of the different components of the system and then combine them to arrive at the overall estimate. An individual expert may not have experience of that particular kind of project; estimation by a group can minimize factors such as individual oversight, lack of familiarity with a particular subject, and personal bias.
Analytical Estimation Techniques: These derive results based on certain basic assumptions regarding the project, so analytical techniques have a scientific basis. Halstead's software science is an example.
Halstead's software science: Halstead's software science is an analytical technique to compute the size, development effort and development cost of software.
For a given program, let
- n1 be the number of unique operators used in the program
- n2 be the number of unique operands used in the program
- N1 be the total number of operators used in the program
- N2 be the total number of operands used in the program
There is no general agreement among researchers about what counts as an operator or operand for a given programming language; only a few guidelines have been provided. For example, assignment, arithmetic and logical operators are operators. A pair of parentheses, as well as a block, is considered a single operator; the label of a goto statement is a single operator. The if…then…else…end if and while…do constructs are each considered a single operator, and a statement terminator is considered a single operator.
Operators in the C language:
(, [, ., *, +, -, ~, !, ++, --, /, %, <=, >=, !=, ==, &, ^, |, &&, ||, =, *=, +=, /=, %=, -=, &=, ^=, ?, {, ;, case, default, if, else, switch, while, do, for, goto, continue, break, return, and a function name.
Operands: operands are the variables and constants used with operators.
Examples: in a = &b;
a and b are treated as operands;
= and & are treated as operators.
In the function definition
int func(int a, int b)
{
------
}
{ } and ( ) are operators; func, a and b are not counted as operands.
In the call func(a, b);
a and b are operands, and func and ; are operators.
Program length and vocabulary: The length of a program is the total usage of all operators and operands:
Length N = N1 + N2
The program vocabulary is the total number of unique operators and operands:
n = n1 + n2
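To make the counting concrete, here is a small sketch that hand-tallies the statement a = &b; discussed above and computes the length and vocabulary from the tallies.

```python
# Halstead counts for the single statement:  a = &b;
# Per the counting rules above: =, &, ; are operators; a, b are operands.

operators = {"=": 1, "&": 1, ";": 1}   # token -> number of occurrences
operands  = {"a": 1, "b": 1}

n1, n2 = len(operators), len(operands)           # unique counts
N1, N2 = sum(operators.values()), sum(operands.values())

N = N1 + N2   # program length:     3 + 2 = 5
n = n1 + n2   # program vocabulary: 3 + 2 = 5
```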
Intermediate COCOMO cost drivers:
- Product attributes: required software reliability, size of database, product complexity
- Personnel attributes: analyst capability, software engineering capability, applications experience, virtual machine experience, programming language expertise
- Computer attributes: performance (execution time) requirements, memory constraints, virtual machine environment, turnaround time
For every project, each cost driver is given a rating: very low, low, nominal, high or very high.
Equation for intermediate COCOMO:
E = a(KLOC)^b * EAF
EAF: effort adjustment factor, the product of the cost-driver multipliers.
Example:
Size = 200 KLOC
Cost drivers:
Required software reliability = 1.15
Use of software tools = 0.91
Product complexity = 0.85
Execution time constraints = 1.00
Calculate the effort and Tdev for the three types of product.
Solution:
EAF = 1.15 * 0.91 * 0.85 * 1.00 = 0.8895
Organic:
E = 3.2 * (200)^1.05 * 0.8895 = 742 PM
Semi-detached:
E = 3.0 * (200)^1.12 * 0.8895 = 1012 PM
Embedded:
E = 2.8 * (200)^1.20 * 0.8895 = 1437 PM
Organic:
Tdev = 2.5 * (742)^0.38 = ……
Semi-detached:
Tdev = 2.5 * (1012)^0.38 = ……
Embedded:
Tdev = 2.5 * (1437)^0.38 = ……
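The worked example can be reproduced in a few lines. This sketch assumes the intermediate-COCOMO coefficient pairs (a, b) of (3.2, 1.05), (3.0, 1.12) and (2.8, 1.20), which give the effort figures above to within rounding, and it follows the text in applying Tdev = 2.5 * E^0.38 to all three classes.

```python
# Intermediate COCOMO sketch. Coefficients per product class are an
# assumption here (the standard intermediate-COCOMO values); they
# reproduce the example's 742 / ~1012 / 1437 PM figures.

COEFF = {
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def effort(kind, kloc, eaf):
    a, b = COEFF[kind]
    return a * kloc ** b * eaf          # person-months

def tdev(e):
    return 2.5 * e ** 0.38              # months, as in the text

eaf = 1.15 * 0.91 * 0.85 * 1.00        # = 0.8895
for kind in COEFF:
    e = effort(kind, 200, eaf)
    print(kind, round(e), "PM,", round(tdev(e), 1), "months")
```

Note that standard intermediate COCOMO actually varies the Tdev exponent by class (0.38, 0.35, 0.32); the constant 0.38 is kept here only to match the text.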
The Complete COCOMO model: A software product is not a single homogeneous entity. Large systems are made up of several subsystems; some of these subsystems may be organic, some embedded, and some may require high reliability. The cost of each subsystem is estimated separately, and the subsystem costs are combined to obtain the total cost.
Team Structure: Problems of different complexities and sizes require different team structures. For an effective solution, usually every organization has a standard formal team structure.
• Democratic Team: This structure does not enforce any formal hierarchy in the team. A manager provides administrative leadership, and at different times different members of the group provide technical leadership.
The democratic organization leads to higher morale and job satisfaction, and this structure is more appropriate for less-understood problems. The practice of programmers sharing and reviewing one another's work is called egoless programming.
A disadvantage is that team members may waste a lot of time arguing about trivial points, owing to the absence of any authority in the team.
[Figure: Democratic team — software engineers with communication paths between every pair of members]
• Chief Programmer Team: A senior engineer provides the technical leadership and partitions the task among the team members. It works well when the task is well understood. A disadvantage is that too much responsibility and authority is given to the chief programmer.
[Figure: Chief programmer team — the project manager at the top, with communication and control flowing from the chief programmer to each team member]
• Mixed Control Team Structure: This takes ideas from both the democratic and the chief programmer team structures. Communication is limited, and it is very suitable for very large projects.
Staffing: Since software project managers take responsibility for choosing their teams, they need to identify good software engineers for the project to succeed. A common misconception held by managers is the assumption that one software engineer is as productive as another; in fact, productivity between the worst and the best software engineers varies on a scale of 1 to 30, and the worst engineers can even reduce the overall productivity of the team.
[Figure: Work breakdown structure (WBS) — the software project is broken into activities such as Elicit Requirements, Specify Requirements and Validate Requirements]
b) Activity Graph: This shows the interdependence of the different activities of the project. It is also called a network model. Nodes represent milestones, and activities are represented by links.
[Figure: Activity graph — Start leads via Act 1 to milestone M1 and via Act 2 to M2; Act 3 and Act 4 converge on M3; Act 5 then leads to M4 and Act 6 to M5 (Finish)]
c) Gantt Chart: This is used to represent project plans graphically. Each task is represented by a horizontal bar whose length is proportional to the completion time of the activity. Different types of activities can be represented by different colours, shapes or shades.
[Figure: Gantt chart — tasks: Specification; Design Database Part; Design GUI Part; Code Database Part; Code GUI Part; Write Manual]
The white part of a bar represents the length of time taken by each task, and the shaded part represents the slack time, i.e. the latest time by which the task must be finished.
d) PERT Charts (Project Evaluation and Review Technique): A PERT chart consists of a network of boxes and arrows; the boxes represent activities and the arrows represent task dependencies. It is a sophisticated form of the activity chart. Since there can be more than one critical path, depending on the permutations of the estimates for each task, analysis of the critical path makes PERT charts complex. A Gantt chart can be derived automatically from a PERT chart, but a PERT chart cannot be derived automatically from a Gantt chart, since the PERT chart carries additional information. PERT charts are used to monitor the timely progress of activities.
[Figure: PERT chart — Specification (Jan 1–Jan 15) → Design Database Part (Jan 15–Apr 1) → Code Database Part (Apr 1–Jul 15) → Integrate and Test (Jul 15–Nov 15) → Finish (Nov 15)]
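The critical path of such an activity network can be found with a longest-path computation. This is an illustrative sketch only; the tasks and durations below are invented, loosely following the chart above.

```python
# Hypothetical activity network: task -> (duration_days, predecessors).
# The dict is listed in dependency order, so one forward pass over it
# computes every task's earliest finish time.

tasks = {
    "specification": (15, []),
    "design_db":     (45, ["specification"]),
    "code_db":       (60, ["design_db"]),
    "integrate":     (40, ["code_db"]),
    "write_manual":  (30, ["specification"]),
}

finish = {}                                   # earliest finish per task
for name, (duration, preds) in tasks.items():
    start = max((finish[p] for p in preds), default=0)
    finish[name] = start + duration

project_length = max(finish.values())         # critical-path length: 160 days
```

Here the critical path runs specification → design_db → code_db → integrate (15 + 45 + 60 + 40 = 160 days); the manual-writing branch has slack.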
RISK Management: Risk management is defined as identifying and understanding the risks that may cause project delay or, in some cases, even failure, and planning to minimize their effect on project performance.
1. Risk is uncertainty or lack of complete knowledge of the set of all possible future events. This definition is given by Robert.
2. Risks are factors or aspects that are likely to have a negative impact on project performance. This definition is given by Kelkar.
3. Risk is the probability that some adverse circumstance will actually occur. This definition is given by Sommerville.
4. Risks are those unknown events which, if they occur, can even result in project failure. This definition is given by Boehm.
Risk exposure example: the probability of rolling a multiple of 5 with two dice is 7/36; with a potential loss of 10800, the risk exposure is 10800 * 7/36 = 2100. In game 2: 900 + 800 − 500 = 1200.
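The dice figures above appear to compute risk exposure as probability × loss. A sketch of that calculation, reconstructing the 7/36 by enumeration:

```python
# Risk exposure RE = probability * loss.
# Two dice sum to a multiple of 5 when the sum is 5 (four ways) or
# 10 (three ways), giving 7 of the 36 equally likely outcomes.

from itertools import product

outcomes = list(product(range(1, 7), repeat=2))
hits = sum(1 for a, b in outcomes if (a + b) % 5 == 0)

loss = 10800
exposure = loss * hits / len(outcomes)   # 10800 * 7/36 = 2100.0
```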
Unit 2
Requirement Engineering Process
Requirement Elicitation:-
This activity is concerned with understanding the problem domain at the beginning of the project, because at that stage the requirements are not clearly understood; this requires expertise. It is the process of acquiring knowledge about the specific problem domain, through various techniques, in order to build the requirements model. This process helps the analyst produce a formal specification of the software to be developed that meets the customer's needs. Sources of domain knowledge can be users, business manuals, existing software of the same type, standards, etc.
Requirement Analysis:-
Its purpose is to produce formal software requirement models. This activity specifies the functional and non-functional requirements of the system, along with the constraints imposed on the system. The model is used in various stages of the SDLC and serves as an agreement between the end users and the developers. A number of structured and object-oriented models are available for building the requirements model.
Requirement Validation:-
This is the process to ensure the consistency of requirement models with respect to
customer needs. If requirements are not validated, the error will propagate to successive
stages of SDLC and require a lot of modifications and rework
a) Ensure that requirements are consistent. They do not conflict with other
requirements.
b) Ensure that requirements are complete in all respects.
c) Ensure that requirements are realistic and realizable.
Reviews, prototyping, test case generation are effective ways to validate requirements.
Analysis Principles
Investigators have identified analysis problems and their causes, and have developed a variety of modeling notations to overcome those problems.
Operational principles
• the information domain of the problem must be represented and understood
• the functions that the software performs must be defined
• the behavior of the software must be represented
• the models of information, function and behavior must be partitioned in a manner that uncovers detail in a hierarchical fashion
• the analysis process should move from essential information toward implementation detail
Analysis Guidelines
According to Davis
• understand the problem before you begin to create the analysis model
• develop prototypes that enable the user to understand the system
• record the origin of, and the reason for, every requirement
• use multiple views of the requirements
• rank requirements
• work to eliminate ambiguity
Characteristics are stated, and one or more diagrams are included to graphically represent the overall structure of the software.
Specification Reviews:- A review of the SRS is conducted by both the software developer and the customer. Because the specification forms the foundation for the development phase, extreme care should be taken in conducting the review.
Reviews are first conducted at a macroscopic level, to ensure that the specification is complete and accurate when the overall information, functional and behavioral domains are considered. However, to fully explore each of these domains, the review must become more detailed: if the specification contains "vague terms" (some, sometimes, often, usually, most or mostly), the reviewer flags the statements for further clarification.
Once the review is complete, the SRS is signed off by the customer and the developer, and the specification becomes a contract for software development. If the customer requests changes after this point, they will increase the cost and/or time.
CASE tools are used to solve the problems that occur during review.
PROBLEM PARTITIONING:-
Problems are often too large and complex to be understood as a whole. For this reason, we partition the problem into parts so that each part can be clearly understood, and establish interfaces between the parts so that the overall function can be accomplished. A problem can be partitioned either horizontally or vertically.
CHARACTERISTICS OF SRS:-
1. Correctness.
2. Completeness
3. Consistency
4. Unambiguousness
5. Ranking for importance
6. Modifiability
7. Verifiability
8. Traceability
9. Design independent
10. Understandable by customer.
Representation:
1) The representation format and content should be relevant to the problem.
2) Information contained within the specification should be nested.
3) Diagrams and other notational forms should be restricted in number and consistent in use. Confusing or inconsistent notation, whether graphical or symbolic, degrades understanding and fosters errors.
4) Representations should be revisable, since the content of the specification will change. CASE tools are used to update all representations that are affected by each change.
Specification principles:
1) Separate functionality from implementation.
2) Develop a model of the desired behavior of the system that encompasses the data and the functional responses of the system.
3) Establish the context in which the software operates by specifying the manner in which other system components interact with it.
4) Define the environment in which the software operates.
5) Create a cognitive model, rather than a design or implementation model, that describes the system as perceived by its user community.
6) Recognize that the specification must be tolerant of incompleteness and augmentable.
7) Establish the content and structure of the specification in a way that enables it to be amenable to change.
Fourth-Generation Techniques: These consist of a broad array of database query and reporting languages, program and application generators, and high-level non-procedural languages. They enable the software engineer to generate executable code quickly and are ideal for rapid prototyping.
At the first level, we focus on which modules are needed for the system, the specifications of these modules, and how the modules are interconnected. The outcome of high-level design is called the program structure or software architecture. A tree-like diagram called a structure chart is used to represent the control hierarchy of the high-level design.
At the second level, the data structures and algorithms used by the different modules are designed. The outcome of this level is known as the module specification document. Detailed design is an extension of system design. Much of the design effort is spent creating the top-level design, which has a major impact on the testability, efficiency and modifiability of the system.
Design Principles:-
1) It should be understandable: a design that is easily understandable is also easy to maintain and change, whereas maintaining a design that is not understandable requires tremendous effort.
2) It should be correct: the system built must satisfy the requirements of that system.
3) It should be verifiable, complete (implementing all the specifications) and traceable (every design element can be traced to some requirement).
4) It should be efficient. The idea is that if some resources are expensive and precious, then it is desirable that those resources be used efficiently. In the case of a computer system, an efficient design is one that consumes less processor time and memory.
5) It should be modular: if the modules are independent of each other, then each module can be understood separately, which reduces the complexity.
6) It should have high cohesion, low coupling, low fan-out and abstraction.
7) A design should contain distinct representations of data, architecture, interfaces and modules.
Design Concepts:-
Abstraction:-
It is a tool that permits a designer to consider a component at an abstract level without worrying about the details of the implementation of the component. An abstraction of a component defines the external behavior of the component without bothering with the internal details that produce the behavior. It is an important part of the design process and also plays an important role in the maintenance phase. To modify a system, the first step is to know what the system does, and using the concept of abstraction the behavior of the entire system can be understood. It also helps in determining how a modification affects the system.
Functional Abstraction:-
A module is specified by the function it performs. When a problem is partitioned, the overall transformation function for the system is decomposed into sub-functions, giving a decomposition of the system in terms of functional modules. An example is driving a car: we are not aware of how the internal operations are performed.
Data Abstraction:-
It is the collection of data that describes a data object. For example, a car has different parts, e.g. engine, fuel tank, etc.
Refinement:-
Refinement is a process of elaboration: starting from a high-level statement of function, the design is developed by successively refining levels of procedural detail.
Modularity:-
To solve a complex problem, we divide the large problem into manageable modules / sub-problems. A system is considered modular if it consists of discrete components, so that each component can be implemented separately, and a change in a particular module has minimal side effects on all other components.
Criteria for modularity
Software Architecture:- Software architecture includes the overall structure of the software and the ways in which that structure provides conceptual integrity for the system.
Extra-functional properties:- This is the description of how the design architecture achieves requirements for performance, reliability, security, adaptability and other characteristics.
Families of related systems:- This is the representation of the components of the system and the manner in which those components are packaged and interact with one another.
Architectural models:-
Structural models represent architecture as an organized collection of program components.
Framework models increase the level of design abstraction by reusing existing design frameworks for similar types of applications.
Process models focus on the design of business or technical processes.
Functional models are used to represent the functional hierarchy of a system.
Dynamic models focus on the behavioral aspects of the program architecture.
Control hierarchy:-
It represents the organization of program components. It is also known as the program structure.
Depth and Width:- Provide an indication of the number of levels of control and the overall span of control.
Fan-in:- How many modules directly control a given module. A good design should have high fan-in.
Fan-out:- The number of modules that are directly controlled by another module.
[Figure: structure chart with module A at the top controlling B, C, D, E and F, with G below; depth, width, fan-in and fan-out are annotated.]
Superordinate module:- A module that controls other modules. In the diagram, 'A' is an example of a superordinate module.
Horizontal partitioning:- Defines separate branches of the module hierarchy for each major function. Control modules are used to coordinate the communication and execution of the functions.
Advantages:-
• Software is easy to maintain.
• Software is easy to test.
• Propagates fewer side effects.
• Software is easier to extend.
Disadvantages:- Causes more data to be passed across module interfaces and can complicate the overall control of program flow.
Vertical partitioning:- Suggests that decision-making and processing modules should be distributed top-down in the program structure. Top-level modules should perform control functions and do little processing. Modules at lower levels perform all input, processing and output tasks.
[Figure: horizontal partitioning, with decision-making modules on top controlling Function 1, Function 2 and Function 3.]
DFD: A data flow diagram is a modeling tool used to model the functional view of the system in terms of processes and the flow of data between these processes. The technique of modeling the flow of data between processes is also called process modeling.
Data Flow: A data flow shows data in motion between different processes, between a process and a store, or between an external agent and a process.
A data flow represents:
• data input to a process
• output from a process
• insertion of new data into a store
• retrieval of data from a store
• updating of existing data in a store
• deletion of existing data from a store
Convergent data flow: A convergent data flow is formed by the merging of multiple data flows into a single data flow.
External Agent:
Also called terminators; they represent people, organizations or other systems external to the system being developed. These provide input to the system and also receive output from the system.
Context Diagram:
A context diagram shows the working of the whole organization represented as a single process, and the interaction with external agents is shown through the exchange of data. Examples:
1. Customer placing order
2. Company places order with vendor
Arrow:- An arrow connects two states, indicating that state S1 changes to state S2 when some condition is satisfied.
Action:- When the system changes state in response to a condition, it performs one or more actions.
Condition:- A condition is some event which causes the system to change from state S1 to state S2.
Data Dictionary:-
It is an important part of structured analysis. It is the organized listing of all data elements of the system, with their precise and unambiguous definitions. A data dictionary contains information about:
=> Definition of data stores.
=> Definition of data flows.
=> Definition of control flows.
=> Definition of entities, relationships, attributes and external agents.
=> Meaning of aggregate items, with comments.
Example :-
Constructs are:
A) if condition then
{Statement 1;
Statement 2;
--------
Statement n;
}
else
{Statement 1;
Statement 2;
-----------
-----------
Statement n;
}
b)
Initialization part;
do
{
Statement 1;
Statement 2;
--------
Statement n;
} while (condition);
c)
for (initialization part; condition part; increment/decrement part)
d)
switch (value)
{
case 1:
do something;
break;
case 2:
do something;
break;
------------
default:
do something;
break;
}
e)
repeat
do something;
until(condition);
Conditions/Actions                 Rules
                               1    2    3    4    5
C1 passenger from any class    Yes  Yes  -    Yes  -
C2 flights taken > 3 per yr    -    Yes  -    -    -
C3 business class passenger    -    -    Yes  -    -
C4 executive class passenger   -    -    -    -    Yes
C5 flights taken per yr <= 3   Yes  -    -    -    -
C6 pts earned >= 400           -    -    Yes  -    -
C7 pts earned >= 1000          -    -    -    Yes  -
C8 pts earned >= 1500          -    -    -    -    Yes
A1 charge standard fares       X    -    -    -    -
A2 offer 10% discount          -    X    -    -    -
A3 offer 30% discount          -    -    X    -    -
A4 offer free ticket           -    -    -    X    -
A5 offer free holiday package  -    -    -    -    X
Example:- An airline offers attractive discounts to its customers based on the flights taken per year. Passengers are classified into economy class, business class and executive class passengers. For each of these classes, normal air fares are charged for that class. In each class, passengers earn some points. If a passenger takes flights more than three times a year, the airline offers a 10% discount on the air fare for the rest of the year. If a business class passenger earns 400 points, a 30% discount is given. If a passenger of any class earns 1000 points, he is offered a free ticket to any destination in the world. If an executive class passenger earns 1500 points, the airline offers a free holiday package for two. Draw the decision table.
The conditions are:-
C1 passenger from any class.
C2 flights taken > 3 per year.
C3 business class passenger.
C4 executive class passenger.
C5 flights taken per year <= 3.
C6 pts earned >= 400.
C7 pts earned >= 1000.
C8 pts earned >= 1500.
The actions are:-
A1 charge standard fares.
A2 offer 10% discount.
A3 offer 30% discount.
A4 offer a free ticket.
A5 offer a free holiday package for two.
Decision tree:- It serves the same purpose as a decision table, but is much easier to understand.
Risk planning:- This is concerned with identifying strategies for managing each risk.
*Risk avoidance:- This technique focuses on restructuring the project so as to avoid the risk.
*Risk transfer:- Solves the problem of risk impact by buying insurance.
Risk monitoring:- A continuous process which identifies the probability of occurrence of risks and their impact on the project. Techniques are top-ten risk tracking, milestone tracking and corrective actions.
Risk decision tree:- In a casino, there are two options to play a game. In option A, you roll two dice; if you get a multiple of 5, you win Rs 10800, and if you get a multiple of 3, you pay Rs 7200 to the casino. In the second option, if you get a multiple of 4 you win Rs 3600, and if you get 2 or 12 you win Rs 14400; in other cases you have to pay Rs 720. Which game should you play?
Database: A database is a collection of related data. Data means known facts that can be recorded and that have implicit meaning, e.g. names, telephone numbers.
A database has the following implicit properties:-
• A database represents some aspect of the real world, called the miniworld or universe of discourse.
• A database is a logically coherent collection of data with some inherent meaning.
• A database is designed, built and populated with data for a specific purpose.
EXAMPLE
The goal of the 3-schema architecture is to separate the user applications and the physical database. In this architecture, schemas can be defined at 3 levels.
1. The internal level has an internal schema, which describes the physical storage structure of the database. The internal schema uses a physical data model and describes the complete details of data storage and access paths for the database.
2. The conceptual level has a conceptual schema, which describes the structure of the whole database for a community of users. The conceptual schema hides the details of physical storage structures and concentrates on describing entities, data types, relationships, user operations and constraints. A high-level data model can be used at this level.
3. The external or view level includes a number of external schemas or user views. Each external schema describes the part of the database that a particular user group is interested in and hides the rest of the database from that user group. A high-level data model or an implementation model can be used at this level.
The 3 schemas are only descriptions of data; the only data that actually exists is at the physical level. Each user group refers only to its own external schema. Hence the DBMS must transform a request specified on an external schema into a request against the conceptual schema, and then into a request on the internal schema for processing over the stored database. The processes of transforming requests and results between levels are called mappings.
Data mining: It is used for knowledge discovery, the process of searching data for unanticipated new knowledge.
BUILDING A DATA WAREHOUSE: An appropriate schema should be chosen that reflects anticipated usage. Acquisition of data for the warehouse involves the following steps:
• Data must be extracted from multiple, heterogeneous sources.
• Data must be formatted for consistency within the warehouse. Names, meanings and domains of data from unrelated sources must be reconciled.
• Data must be cleaned to ensure validity. As data managers in the organization discover that their data are being cleaned for input into the warehouse, they will likely want to upgrade their own data with the cleaned data. The process of returning cleaned data to the source is called backflushing.
• Data must be fitted into the data model of the warehouse. Data from the various sources must be installed in the data model of the warehouse.
• Data must be loaded into the warehouse. Monitoring tools for loads, as well as methods to recover from incomplete or incorrect loads, are required.
Design considerations include:
• How up-to-date must the data be?
• Can the warehouse go offline, and for how long?
• What are the data interdependencies?
• What is the storage availability?
• What are the distribution requirements?
• What is the loading time?
Testing Principles:
• Tests should be planned long before testing begins. Test planning can begin as soon as the requirements model is complete.
• To be most effective, testing should be conducted by an independent third party.
• Testing should begin in the small and progress toward testing in the large.
• While testing the product, the tester must have a destructive attitude in order to do effective testing.
• Exhaustive testing is not possible (it is impossible to execute every combination of paths during testing).
• All tests should be traceable to customer requirements.
• Testing should span the full lifecycle, i.e. it should start from the requirements phase and end at acceptance testing.
Testability: Testability is a measure of how easily a computer program can be tested.
Characteristics of testable Software:
1) Operability: The better it works, the more efficiently it can be tested.
2) Observability: What you see is what you test.
3) Controllability: The better we can control the software, the more the
testing can be automated.
4) Decomposability: Software system is built from independent modules.
Software modules can be tested independently.
5) Simplicity: The less there is to test, the more quickly we can test it.
6) Stability: The fewer the changes, the fewer the disruptions to testing.
7) Understandability: The more information we have, the smarter we will
test.
8) Debugging: The Process of finding and correcting errors in a program.
Testing Terminology:
1) Error: An error is a deviation from the correct result.
2) Tester: A person whose aim is to find faults in a product.
3) Test case: A test case is a set of inputs and expected outputs for a program under test. A test case is a triplet [I, S, O], where I is the data input to the system, S is the state of the system at which the data is input, and O is the expected output of the system.
4) Mistake: An action performed by a person that leads to an incorrect result.
5) Fault: The outcome of a mistake. It can be a wrong step, process or data definition in a program. A fault is an incorrect intermediate state that may be entered during program execution. A fault may or may not lead to failure.
6) Failure: The outcome of a fault. A failure is a manifestation of an error, but the mere presence of an error may not necessarily lead to failure.
7) Test suite: The set of all test cases with which a given software product is to be tested.
Structural testing / White box / Glass box: The internal structure of the code is considered, so it requires internal details of the program.
1) Using white box testing methods, the software engineer can derive test cases that test all logical decisions on a true/false basis.
2) Guarantee that all independent paths within a module have been exercised at least once.
3) Execute all loops at their boundaries and within their operational bounds.
4) Exercise internal data structures to ensure their validity.
5) Achieve statement coverage, branch coverage, etc.
Basis Path Testing: Basis path testing is a white box testing technique proposed by Tom McCabe. It is used to derive the complexity of a procedure and use this measure as a guideline for defining a basis set of execution paths.
Flow Graph: A directed graph in which nodes are either entire statements or fragments of statements, and edges represent the flow of control.
Basic constructs of a flow graph:
[Figures: flow-graph notation for the sequence, if-then-else, while and until constructs. A region is an area bounded by edges and nodes.]
[Figure: flow graph with nodes 1 to 11 and four regions R1 to R4.]
[Figure: flow graph for a program that reads a number and prints whether it is even (num % 2 == 0) or odd.]
Predicate node: A node that contains a condition and is characterized by two or more edges emanating from it.
Independent Paths: Any path through the program that introduces at least one new set of processing statements or a new condition. In terms of the flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined.
Path 1: 1-11
Path 2: 1-2-3-6-7-9-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-4-5-10-1-11
The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not an independent path, since it introduces no new edge.
Cyclomatic / Structural Complexity: It provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. It tells us how many paths to execute, i.e. it is used to find the number of independent paths through a program. It can be computed as V(G) = E − N + 2 (where E is the number of edges and N the number of nodes), as the number of predicate nodes plus one, or as the number of regions of the flow graph.
[Figures: a flow graph and its corresponding connection-matrix graph, used to illustrate the computation.]
The cyclomatic complexity can also be computed from the connection matrix: for each node, take its number of connections minus one, sum these values over all nodes, and add 1. For the graph shown:
Node 1: 1 connection, 1 − 1 = 0
Node 3: 2 connections, 2 − 1 = 1
Node 4: 2 connections, 2 − 1 = 1
Node 5: 2 connections, 2 − 1 = 1
Sum = 3, so cyclomatic complexity = 3 + 1 = 4.
Data flow testing: Based upon the use of data structures and the flow of data in a program. The data structures are an important part of any program and hence must be taken into consideration while designing test cases.
Decision Coverage / Branch Coverage: This focuses on executing each branch of each decision (if statement, while, do-while and for loop) at least once.
e.g. if ((x < 20) && (y > 50))
sum = sum + x;
else
sum = sum + y;
This must be tested using test cases such that the decision is evaluated as true at least once and as false at least once.
Multiple Condition Coverage: In this type of testing, test cases are designed for all possible combinations of the individual conditions in a decision. For a decision with three conditions a, b and c, the test cases will be
a=f b=f c=f
a=f b=f c=t
a=f b=t c=f
a=f b=t c=t
a=t b=f c=f
a=t b=f c=t
a=t b=t c=f
a=t b=t c=t
In total, 2^3 = 8 test cases will be required.
Condition testing: This type of testing is done to test all logical conditions in a program module. It must check
a) Boolean expressions b) Compound conditions
c) Simple conditions d) Relational expressions.
if ((a) && (b) && (!c))
printf("welcome");
else
printf("invalid user");
The program module must be tested with each condition (a, b, c) true at least once and false at least once.
[Figures: concatenated loops and unstructured loops.]
P-use: A path can be identified starting from the definition of a variable and ending at a statement where the variable appears in a predicate; this is called a dp-path.
All-uses: Paths can be identified starting from the definition of a variable to its every possible use.
dc-path: A path identified starting from the definition of a variable and ending at a point where it is used but its value is not changed.
main()
{
int a, b, c;
float desc, root1, root2;
printf("\n Enter value of a, b, c");
scanf("%d %d %d", &a, &b, &c);   /* dc-path for a, b, c */
desc = b * b - 4 * a * c;
if (desc == 0) {                 /* dp-path for desc */
...
}
if (desc < 0) {                  /* dp-path for desc */
printf("Root1 = %f", root1);
printf("Root2 = %f", root2);
}
}
Functional testing / Black Box / Behavioral testing:
Attempts to find errors in the following categories:
1) Incorrect or missing functions.
2) Behavior or performance errors.
3) Interface errors.
4) Initialization or termination errors.
Boundary Value Analysis: Leads to a selection of test cases that exercise boundary values, because a greater number of errors tends to occur at the boundaries of the input domain rather than at the centre.
• The basic idea is to use input variable values at their minimum, just above the minimum, at a nominal value, just below the maximum and at their maximum.
• In boundary value analysis, test cases are obtained by holding the values of all but one variable at their nominal values and letting that variable assume its extreme values.
• Yields (4n + 1) test cases for n input variables.
E.g. roots of the quadratic equation ax² + bx + c = 0, with a, b, c in [0, 100]:
Real if (b² − 4ac) > 0
Imaginary if (b² − 4ac) < 0
Equal if (b² − 4ac) = 0
Not quadratic if a = 0
Equivalence Class Partitioning: In this method, the input domain is divided into a finite number of equivalence classes. If one test case in a class detects an error, all other test cases in the class would be expected to find the same error; if a test case does not detect an error, we would expect that no other test case in the class would find an error.
[Figure: input domain partitioned into valid and invalid equivalence classes, mapped to the output domain.]
Procedure:
1) The equivalence classes are identified by taking each input condition and dividing it into valid and invalid classes. For example, if an input condition specifies a range of values from 1 to 99, we identify one valid equivalence class [1 <= item <= 99] and two invalid classes [item < 1] and [item > 99].
2) Using the equivalence classes, generate the test cases. This is done by writing test cases covering all the valid equivalence classes. Then a test case is written for each invalid equivalence class, so that no test contains more than one invalid class. This is to check that no two invalid classes mask each other.
Example: Output-domain equivalence class tests for the triangle problem:
Test Case   a    b    c    Expected O/P
1           10   10   10   Equilateral
2           20   20   25   Isosceles
3           25   20   15   Scalene
4           15   10   30   Not a Triangle
Cause-Effect Graphing: This technique establishes a relationship between logical input combinations, called causes, and the corresponding actions, called effects. The causes and effects are represented by a graph.
[Figure: causes (logical input combinations) connected to effects (actions) through a cause-effect graph.]
Procedure:
a) For a module, identify the input conditions (causes) and actions (effects).
b) Develop a cause-effect graph.
c) Convert the cause-effect graph into a decision table.
d) Each column of the decision table represents a test case. Derive test cases from the decision table.
Example:
In an income tax processing system, if the annual taxable salary of a person is less than or equal to 60000 and expenses do not exceed Rs 30000, 10% income tax is charged. If the salary is greater than 60000 and less than or equal to 200000 and expenses do not exceed 40000, 20% tax is charged. For a salary greater than 200000, a 5% surcharge is also charged. If expenses are greater than 40000, the surcharge is 8%. Design test cases using the cause-effect graphing technique.
Step 1: Identification of causes and effects.
Causes:
C1 – Salary <= 60000
C2 – Salary > 60000 and <= 200000
C3 – Salary > 200000
C4 – Expenses <= 30000
C5 – Expenses <= 40000
C6 – Expenses > 40000
Effects:
E1 – Compute tax at 10% rate
E2 – Compute tax at 20% rate
E3 – Compute tax at 20% rate + 5% surcharge
E4 – Compute tax at 20% rate + 8% surcharge
Step 2: Cause-effect graph.
[Figure: cause-effect graph in which C1 AND C4 produce E1, C2 AND C5 produce E2, C3 AND C5 produce E3, and C3 AND C6 produce E4.]
Step 3: Draw the decision table corresponding to the cause-effect graph.
                  1    2    3    4
Causes    C1      1    0    0    0
          C2      0    1    0    0
          C3      0    0    1    1
          C4      1    0    0    0
          C5      0    1    1    0
          C6      0    0    0    1
Effects   E1      X    -    -    -
          E2      -    X    -    -
          E3      -    -    X    -
          E4      -    -    -    X
Mutation Testing: In mutation testing, the software is first tested using the initial testing techniques. After initial testing, mutation testing takes place. The basic idea is to make a small change to the program, such as changing a conditional operator or changing the type of a variable. Each changed program is called a mutated program, and the change effected is called a mutant.
The mutated program is tested against the full set of test cases. If there exists at least one test case in the test suite for which the mutant gives an incorrect result, the mutant is said to be dead. If a mutant remains alive even after applying all test cases, the test data is enhanced to kill the mutant.
Example:
main()
{
int a, b, i, total=0;
clrscr();
printf("\n Enter values of a and b = ");
scanf("%d %d", &a, &b);
for(i=1; i<a; i++)
{
if(b>0)
total=total+b;
else
total=total-b;
b--;
}
printf("Total = %d", total);
getch();
}
Mutants can be:
total=total*b;
or
total=total/b;
or
total=total-b; (replacing total=total+b in the if branch)
Stress Testing: Stress testing is also called endurance testing. Stress tests are black box tests. They check the capabilities of the software by applying abnormal or even illegal input conditions. Input data volume, input data rate, processing time and utilization of memory are tested beyond the designed capacity. For example, suppose an operating system is designed to support 15 multiprogrammed jobs; the system is stressed by attempting to run more than 15 jobs simultaneously.
Error Seeding: Error seeding introduces known errors; in other words, some artificial errors are introduced into the program. This is used to estimate:
• The number of errors remaining in the product.
• The effectiveness of the testing strategy.
N – total number of defects in the system
n – defects found by testing
S – total number of seeded defects
s – seeded defects found during testing
n/N = s/S
N = S*n/s
Remaining defects = N − n = n*(S − s)/s
Error seeding is effective only if the kinds of seeded errors correspond closely to the kinds of defects that actually exist.
Levels of Testing:
a) Unit Testing
b) Integration Testing
c) System Testing
d) Acceptance Testing
Unit Testing: Unit testing is concerned with testing the smallest components. Test cases are designed to check:
program logic, functionality, interfaces, boundary conditions, data structures, and all paths in the program.
Driver and Stub modules: In order to test a single module, we need a complete environment that provides all that is necessary for execution of the module. Besides the module under test, we need the following in order to test the module:
• the non-local data structures that the module accesses;
• the procedures belonging to other modules that the module under test calls.
Since the required modules are usually not available until they too have been tested, stubs and drivers provide the complete environment for execution. A stub is a dummy procedure that has the same I/O parameters as the given procedure but simplified behavior. A driver module contains the non-local data structures, and also has code to call the different functions of the module with appropriate parameter values.
[Figure: a driver module above the unit under test, with stub modules below it.]
[Figure: module hierarchy with A0 at the top, A1, A2 and A3 below it, and X1, X2, Y1, Y2, Y3 at the bottom.]
S.No.  Module under test   Stubs required                  Module interaction to be tested
1)     A0                  Stub(A1), Stub(A2), Stub(A3)    Unit testing A0
Big-Bang Testing: In this technique, all the modules of the system are integrated in a single step. It is the least effective and least used technique.
Disadvantage: Problems in debugging errors associated with any module.
Sandwich / Mixed Integration Testing: It follows both top-down and bottom-up approaches. In the bottom-up approach, testing can start only after the bottom-level modules have been coded and tested. In the top-down approach, testing can start only after the top-level modules have been coded and tested. In the mixed approach, testing can start as and when modules become available. This is a commonly used technique for integration testing.
Non-Incremental (Phased) vs Incremental Integration Testing (big-bang testing is a degenerate case of the phased integration testing approach):
1) In incremental integration testing, only one new module is added to the system under construction each time. In non-incremental integration testing, a group of related modules is added to the system each time.
2) Non-incremental integration requires fewer integration steps than the incremental approach.
3) Debugging is easy in incremental integration testing, because it is known that an error is caused by the addition of the newly added module. In phased integration testing, the error might be due to any of the newly added modules.
System Testing: System testing is done to validate the fully developed system, to assure that it meets its requirements.
a) Performance testing: This type of testing deals with quality-related issues like security, accuracy and efficiency, using stress tests, volume tests, reliability tests and security tests. It is done at the developer's end.
b) Function testing: Black box testing techniques are used to check the functionality of the system.
c) Acceptance testing: This is concerned with usability testing of the product.
Alpha testing: It is conducted at the developer's site by customers. The software is used in a natural setting with the developer "looking over the shoulder" of the user. The developer records errors and usage problems. Alpha testing is done in a controlled environment.
Beta testing: Beta testing is conducted at one or more customer sites by the end users. Unlike alpha testing, the developer is generally not present, so beta testing is a "live" application of the software that is not controlled by the developers. The customer records all problems encountered during beta testing and reports to the developers at regular intervals. As a result of the problems reported during beta testing, the software developer makes modifications and then prepares for release of the software product to the entire customer base.
Debugging Process:
1) Identify bugs in the product and generate an error report.
2) Assign the problem to a software engineer to ensure that the defect is genuine.
3) Analyze the problem by understanding its root cause. Debugging tools are used for this purpose.
4) Resolve the problem by making changes to the product.
5) Validate the final product.
Debugging Approaches:
1) Backward analysis: It involves tracking the problem backwards from the location of the failure message in order to find the region of faulty code. A detailed study of the region is conducted to find the cause of the defect.
2) Forward analysis: Tracking the program forward using breakpoints or print statements at different points in the program, and analyzing the outcome at these points. Proper attention is given to finding the cause of the defect in the areas where wrong results are displayed.
3) Brute force technique: It is the least efficient technique. In this technique, the program is loaded with print statements to print intermediate values, with the hope that some of the printed values will help identify the statement containing the error.
4) Cause elimination method: In this technique, a list of causes due to which the error could occur is developed, and then tests are conducted to eliminate them.
5) Program slicing: A slice of a program for a particular variable at a particular statement is the set of source lines preceding this statement that can influence the value of that variable.
The first approach has high rebuilding and redesign cost, but produces a good-quality product, and hence the maintenance cost will be minimal. The second approach allows changes in the program, but is time-consuming. The third approach generates the code quickly, but the code is of poor quality, difficult to understand and incomplete, and hence increases the maintenance cost. Here reverse engineering comes into play: it captures the functionality of the system and generates a re-constructed design to implement the system in the new language.
Advantages of Reverse Engg. Process:-
Unit 4
• So a program may contain known faults but may still be seen as reliable by its users.
• Each user of a system uses it in different ways. Faults which affect the reliability of the system for one user may never interfere with the working of another user.
Software Reliability Importance: Software reliability is more important than efficiency due to the following reasons:-
1. Computers are now cheap and fast.
2. Unreliable software is liable to be discarded by user.
3. System failure cost may be large.
4. Unreliable systems are difficult to improve.
5. Inefficiency is predictable.
6. Unreliable system may cause information loss.
Reliability metrics: The measures according to which software reliability is decided are called reliability metrics.
There are 2 types of metrics:
1. Hardware reliability metrics
2. Software reliability metrics
Software reliability metrics: These metrics are defined depending upon the nature of software failures.
• Software component failures are transient, i.e. they occur only for some inputs.
• The system can often remain in operation after a failure has occurred.
• For hardware reliability the common metric is mean time to failure, which cannot be used when we are interested in whether a software system will be available to meet a demand.
• The following metrics are used for software reliability specification:
AVAIL (availability): A measure of how likely the system is to be available to the user. For example, an availability of 0.998 means that in every 1000 time units, the system is likely to be available for 998 of those. Used for continuously running systems such as telephone systems.
• The choice of metric depends upon the type of system and the requirements of the application domain. Examples:
• Suppose system users are concerned with how often the system will fail, because there is a significant cost in restarting the system. Then a metric based on MTTF or ROCOF should be used.
• Suppose the system should always meet a request for service, because there is some cost in failing to deliver the service. Then the metric used is POFOD.
• Suppose users are concerned with the system being available when a request is made, and some cost is incurred when the system becomes unavailable. Then the metric used is AVAIL.
Generally 3 types of measurement can be made while deciding the reliability of a system:
I. The number of system failures given a number of system inputs. This is closest to POFOD.
II. The time between system failures. This is used to measure ROCOF and MTTF.
III. The elapsed repair and restart time when a system failure occurs. Given that the system must be continuously available, this is used to measure AVAIL.
• So time is an important factor in all reliability metrics.
• Various time units such as calendar time, processor time or number of transactions (discrete units) can be used.
• Calendar time units are used in monitoring systems such as alarm systems.
Processor time units are used in telephone switching systems.
Number-of-transactions units are used in bank ATM systems.
• Reliability metrics are based around the probability of system failure and cannot take account of the consequences of such a failure. For example, a failure that corrupts data across the whole ATM network is far less acceptable than a failure local to a single ATM.
• The following table gives the possible failure classes and possible reliability specifications for different types of system failures:
Failure class              Example                                        Reliability metric
1. Permanent,              The system fails to operate with any card      ROCOF:
   non-corrupting          that is input; the software must be            1 occurrence/1000 days
                           restarted to correct the failure.
2. Transient,              The magnetic stripe data cannot be read        POFOD:
   non-corrupting          on an undamaged card that is input.            1 in 1000 transactions
3. Transient,              A pattern of transactions across the n/w       Unquantifiable; should never
   corrupting              causes database corruption.                    happen in the lifetime of the system
• The cost for developing and validating a reliability specification for a software
system is very high.
Techniques used for achieving reliability:
Generally 3 techniques are used to achieve reliability in a software system:
Fault avoidance:
The design and implementation of the system should be organized so as to produce fault-free systems.
Fault tolerance:
This strategy assumes that residual faults remain in the system. Using this method, facilities are provided in the software to allow operation to continue when these faults cause system failures.
Fault detection:
Faults are detected before the software is put into operation. The software validation process uses static and dynamic methods to discover any faults which remain in the system after implementation.
Fault avoidance:
• A good software process should be oriented towards fault avoidance rather than fault detection and removal.
• Its main objective is to produce fault-free software, i.e. software which conforms to its specification.
• But there may be errors in the specification itself, so fault-free software does not always mean that the software will satisfy the user's requirements.
• Fault avoidance and the development of fault-free software depend on the following factors:-
1. The availability of precise system specification i.e. an unambiguous
description of what must be implemented.
2. The adoption of an organizational quality philosophy in which quality is the
driver of software process.
3. The adoption of an approach to software design and implementation which
use information hiding and encapsulation.
4. The use of a strongly typed programming language, so that possible errors are detected by the language compiler.
5. Restrictions on the use of programming constructs, such as pointers, which are
error prone.
• Fault-free software is impossible to achieve if low-level programming languages with limited type checking are used in program development.
• So a strongly typed language such as C++ should be used for software development.
• Faults may remain in the software after development, so the development process must also include fault detection and removal (validation).
Software reliability specification:-
• Software reliability must be specified quantitatively in the software requirements specification.
• Depending upon the type of system, one or more metrics may be used for the reliability specification.
• While writing the reliability specification, the specifier should identify different types of failures and decide whether these should be treated differently in the specification.
• Different types of failure are shown below:-
2. Data typing
• The principle of "need to know" must be adopted to control access to system data, i.e. program components should be allowed access only to the data which they need to implement their function.
• Access to other data should not be allowed.
The advantage of "information hiding" is that hidden information cannot be corrupted by external components.
• To implement this concept, we can use an object-oriented programming language such as C++, in which classes and objects provide encapsulation and hiding of data.
• The concept of generic (template) classes and functions can be used to support a variety of parameter types in the language.
Example:
template <class T>
class queue
{
public:
    queue(int size = 100);
    ~queue();
    void put(T x);
    T remove();
    int size();
private:
    int front, rear;
    T* qvec;
};
Fault tolerance
• A fault-tolerant system can continue in operation after some system failure has occurred.
• Fault tolerance is needed in situations where a system failure would cause a catastrophic accident, or where a loss of system operation would cause large losses (for example, flight-control software must keep operating until the aircraft has landed).
• There are 4 aspects to fault tolerance:-
1. Failure detection: The system must detect that a particular state combination has resulted, or will result, in a system failure.
2. Damage assessment: The parts of the system state which have been affected by the failure must be detected.
3. Fault recovery: The system must restore its state to a known safe state. This can be achieved by correcting the damaged state (forward error recovery) or by rolling back to a previous known safe state (backward error recovery).
4. Fault repair: This involves modifying the system so that the fault does not recur. In many cases software failures are transient and due to a particular combination of system inputs, so no repair is necessary and normal functioning can resume immediately after fault recovery.
• When a fault is not transient, a new version of the faulty software component must be installed dynamically, i.e. without stopping the system.
Fault tolerance
1. Fault-tolerant h/w: The most commonly used fault-tolerant h/w techniques are based upon Triple Modular Redundancy (TMR).
• TMR: The h/w unit is replicated three (or sometimes more) times. The outputs from the units are compared. If one of the units fails and does not produce the same output as the others, its output is ignored.
[Figure: three replicated units A1, A2 and A3 feed a comparator, which produces the output O/P.]
2. Fault-tolerant software: There are mainly 2 fault-tolerant software approaches, both derived from the h/w model in which a component is replicated.
1. N-version programming:
• Using a common specification, the software system is implemented in a number of different versions by different teams.
• These versions are executed in parallel, their outputs are compared using a voting system, and any inconsistent output is rejected. At least 3 versions of the software system should be available.
• The assumption is that it is unlikely that different teams will make the same design or programming errors. Avizienis described this approach to fault tolerance.
LIMITATIONS:
1. A number of experiments have suggested that this assumption is not always valid.
2. Different teams may make the same mistakes due to a common misinterpretation of the specification, or because they independently arrive at the same algorithm to solve the problem.
2. Recovery blocks:
• Each program component includes a test to check whether the component has executed successfully.
• It also includes alternatives which allow the system to back up and repeat the computation if the test detects a failure.
• The alternatives are executed in sequence, and each is a different implementation of the same specification.
• The probability of a common error is reduced because different algorithms are used for each recovery block.
• A weakness of both these methods is that they are based on the assumption that the specification is correct; they do not tolerate specification errors.
• Software fault tolerance requires the software to be executed under the control of a fault-tolerance controller which controls this process.
Exception handling:
Exceptions: Some peculiar problems other than logic or syntax errors are known as exceptions. Exceptions are run-time anomalies or unusual conditions that a program may encounter while executing, e.g. division by zero, or accessing an element outside the bounds of an array.
Basics of exception handling: Exceptions are of two types, "synchronous exceptions" and "asynchronous exceptions". Errors such as "overflow" belong to the synchronous type of exception. The errors that are caused by events beyond the control of the program (such as keyboard interrupts) are called asynchronous exceptions.
The purpose of the exception handling mechanism is to provide means to detect and report an "exceptional circumstance" so that appropriate action can be taken. The following tasks need to be performed when an exception occurs:
1. Find the problem (hit the exception)
2. Inform that an error has occurred. (throw the exception)
3. Receive the error information. (catch the exception)
4. Take corrective action. (handle the exception)
In C++, exception handling is built upon three keywords: try, throw and catch. The keyword try is used to preface a block of statements which may generate an exception, known as the try block. When an exception is detected, it is thrown using a throw statement in the try block. A catch block, defined by the keyword catch, 'catches' the exception 'thrown' by the throw statement in the try block and handles it appropriately. The catch block that catches an exception must immediately follow the try block that throws the exception. The general form is:
try
{
    ……
    throw exception;
}
catch (type argument)
{
    ……  // handle the exception
}
When the try block throws an exception, program control leaves the try block and enters the catch statement of the catch block. Exceptions are objects used to transmit information about a problem. If the type of the object thrown matches the argument type in the catch block, then the catch block is executed to handle the exception. If they do not match, the program is aborted with the help of the abort() function, which is invoked by default. When no exception is detected and thrown, control goes to the statement immediately after the catch block, i.e. the catch block is skipped.
Discuss SEI capability maturity model.
Answer: SEI CAPABILITY MATURITY MODEL- It was first proposed by the Software Engineering Institute of Carnegie Mellon University, USA. The SEI model was originally developed to assist the US Department of Defense (DoD) in software acquisition. In simple words, CMM is a reference model for classifying the maturity of an organization's software process into different levels. It can be used to predict the most likely outcome to be expected from the next project that the organization undertakes. SEI CMM can be used in two ways:
1. Capability evaluation
2. Software process assessment.
Capability evaluation and software process assessment differ in motivation, objective and the final use of the result. Capability evaluation provides a way to assess the software process capability of an organization. The results of a capability evaluation indicate the likely performance of the contractor if the contractor is awarded the work. Therefore, the results of a software process capability assessment can be used to select a contractor. On the other hand, software process assessment is used by an organization with the objective of improving its own process capability. Thus, this type of assessment is for purely internal use.
SEI CMM classifies software development organizations into the following five maturity levels:-
Level 1: Initial
Level 2: Repeatable
Level 3: Defined
Level 4: Managed
Level 5: Optimizing
Level 1: Initial- A software development organization at this level is characterized by ad hoc activities; very few or no processes are defined, different engineers follow their own processes, and as a result development efforts become chaotic. Therefore, it is also called the chaotic level. The success of projects depends on individual efforts. When engineers leave, their successors have great difficulty in understanding the process followed and getting the work completed.
Level 2: Repeatable- At this level, the basic project management practices such as tracking of cost and schedule are established, and size and cost estimation techniques like function point analysis, COCOMO etc. are used. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
Level 3: Defined- At this level, the processes for both management and development activities are defined and documented. There is a common organization-wide understanding of activities, roles and responsibilities. Though the process is defined, the process and product qualities are not measured.
Level 4: Managed- At this level, the focus is on software metrics. Two types of metrics are collected. Product metrics measure the characteristics of the product being developed, such as its size, reliability, time complexity, understandability etc. Process metrics reflect the effectiveness of the process being used, such as the average number of defects found per hour of inspection etc. The software process and product quality are measured, and quantitative quality goals for the product are set. The process metrics are used to check whether a project performed satisfactorily, and the results are used to evaluate performance and improve the process.
Level 5: Optimizing- At this stage, process and product metrics are collected. Process and product measurement data are analyzed for continuous process improvement. For example, if analysis of the process measurement results shows that code reviews are not very effective and a large number of errors are detected only during testing, then the process may be fine-tuned to make the reviews more effective. Also, the lessons learned from specific projects are incorporated into the process. Continuous process improvement is achieved both by carefully analyzing the quantitative feedback from process measurements and by applying innovative ideas and technologies. Such an organization identifies the best software engineering practices and innovations, which may be tools, methods or processes.
Substantial evidence has now been gathered which indicates that software process maturity as defined by CMM has several business benefits. The problem with CMM-based process improvement initiatives is that organizations understand what needs to be improved, but they need guidance about how to improve it.
A highly systematic and measured approach to software development suits large organizations dealing with negotiated software, safety-critical software etc. Small organizations typically handle applications such as Internet and e-commerce products and are without an established product range, revenue base, and experience from past projects. For such organizations, a CMM-based appraisal is probably excessive. These organizations need to operate more effectively at the lower levels of maturity, i.e. they need to practice effective project management, reviews, configuration management etc.
Answer: A CASE tool is a generic term used to denote any form of automated support for software engineering. A CASE tool can mean any tool used to automate some activity associated with software development. CASE tools assist in phase-related tasks such as specification, structured analysis, design, coding, testing etc., and in non-phase activities such as project management and configuration management.
PRIMARY OBJECTIVES-
1. To increase productivity.
2. To produce better quality software at lower cost.
CASE Environment- The true power of CASE tools is realized only when these sets of tools are integrated into a common framework or environment. If the different tools are not integrated, then the data generated by one tool would have to be manually input to other tools. This may involve format conversions and hence the additional effort of exporting data from one tool and importing it to another.
CASE tools are characterized by the stage or stages of the software development life cycle on which they focus. Since different tools covering different stages share common information, it is required that they integrate through some central repository to have a consistent view of the information associated with the software.
The central repository is usually a data dictionary containing the definitions of all composite and elementary data items. Through the central repository, all the CASE tools in a CASE environment share common information among themselves.
1. The user should be able to define the sequence of states through which a
created prototype can run.
2. STRUCTURED ANALYSIS AND DESIGN- A CASE tool should support one or more of the structured analysis and design techniques. It should support the effortless drawing of fairly complex diagrams, preferably through a hierarchy of levels. The tool must support completeness and consistency checking across the design and analysis and through all levels of the analysis hierarchy. Whenever there is a heavy computational load during consistency checking, it should be possible to temporarily disable such checking.
3. CODE GENERATION- As far as code generation is concerned, the general expectation from a CASE tool is quite low. A reasonable requirement is traceability from source file to design data. More pragmatic support expected from a CASE tool during the code generation phase comprises the following:
1. The CASE tool should generate records, structures and class definitions automatically from the contents of the data dictionary in one or more popular programming languages.
2. The CASE tool should support the generation of module skeletons or templates in one or more popular programming languages.
3. It should support the generation of database tables for relational database management systems.
4. The tool should generate code for the user interface from prototype definitions for X-Windows and MS-Windows based applications.
5. TEST CASE GENERATOR- The CASE tool should have the feature of supporting both design and requirements testing.
Thus a CASE environment facilitates the automation of step-by-step methodologies for software development.
BENEFITS OF CASE:
• Cost saving through all development phases: different studies carried out to measure the impact of CASE put the effort reduction at between 40% and 50%.
• Use of CASE tools leads to considerable improvement in quality. This is mainly due to the fact that one can effortlessly iterate through the different phases of software development, so the chances of human error are considerably reduced.
• CASE tools help produce high-quality and consistent documents.
• CASE tools reduce the drudgery in a software engineer's work.
• CASE tools have led to revolutionary cost savings in software maintenance efforts.
• Use of a CASE environment has an impact on the style of working of a company and makes it conscious of a structured and orderly approach.
CASE SUPPORT IN SOFTWARE LIFE CYCLE:
Prototyping Support- We know that prototyping is useful to understand the requirements of complex software products, to demonstrate a concept, to market new ideas, etc. The prototyping CASE tool requirements are as follows:
• Design user interaction.
• Define the system control flow.
• Store and retrieve data required by the system.
• Incorporate some processing logic.
A good prototyping tool should support the following features:
• The prototyping CASE tool should allow users to create a GUI using a graphics editor.
• It should integrate with the data dictionary of CASE environment.
• If possible, it should be able to integrate with the external user defined modules
written in c or some other programming language.
• It should generate test set reports in ASCII format which can be directly
imported into the test plan document.
ARCHITECTURE OF A CASE ENVIRONMENT- The important components of a modern CASE environment are the user interface, the toolset, the object management system (OMS) and a repository.
USER INTERFACE: The user interface provides a consistent framework for accessing the different tools, thus making it easier for users to interact with different tools and reducing the time needed to learn how the different tools are used.