
INDEX

Sr.No.  Topic                                          Page No.
1.      Introduction To SAD                            1-7
2.      Approaches To System Development               8-23
3.      Analysis: Investigating System Requirements    24-33
4.      Feasibility Analysis                           34-44
5.      Modeling System Requirements                   45-65
6.      Design                                         66-86
7.      Designing Input, Output And User Interface     87-90
8.      Testing                                        91-103
9.      Implementation And Maintenance                 104-112
10.     Documentation                                  113-118
11.     Software Documents                             119-122
12.     Question Answer                                123-127
13.     University Question Papers                     128-129

1. Introduction To System Analysis And Design


System :

A system is an organized relationship among functioning components/units.


It is an orderly grouping of interrelated components linked together to achieve a specific
objective.
It exists because it is designed to achieve one or more objectives.
E.g.: Payroll system, Computer system, Business system (Organization).

Super System :

A system that is made up of sub-systems or smaller systems is called a super system.

Characteristics Of A System:
1. Organization
a. It implies the structure and order of a system.
b. It is the arrangement of components that helps to achieve a given objective.
E.g. (I) In a business system, the hierarchical relationship starting with the management
(super system) at the top and leading downwards to the departments (subsystems)
represents the organization structure.
E.g. (II) In a computer system, there are input devices, output devices, a processing unit,
and storage devices linked together to work as a whole, producing the required output from
the given input.
2. Interaction
Interaction refers to the manner in which each component of a system functions with other
components of a system.
E.g.: There must be interaction between (i) the purchase dept and the production dept (ii)
the payroll dept and the personnel dept (iii) CPU with the I/O devices.
3. Interdependence
a. Interdependence defines how different parts of an organization are dependent on one
another.
b. They are coordinated and linked together according to a plan (i.e., the output of one
subsystem may be the input of another subsystem).
E.g.: User -> Analyst -> Programmer -> User/Operator. Here, a system is designed for
the user, but it requires an analyst to analyze the requirements, a programmer to code it,
and the user to test it.
4. Integration
a. Integration refers to the completeness of a system.
b. It means that subsystems of a system work together within a system even though
each subsystem performs its own unique function.
5. Central Objective
The objective of the system as a whole is more important than the objective of any of its
individual subsystems.
http://way2mca.com

-1-

Elements Of A System:
1. Input/Output
a. One of the major objectives of a system is to produce output that has some value to the
user using given input.
b. Input: It is the data or information which is entered into the system.
c. Output: It is the outcome after processing the input.
2. Processor
a. It is an element of the system that performs the actual transformation of input into
output.
b. It may modify the input partially or completely.
3. Control
a. The control element guides the system.
b. It is a decision-making subsystem that controls the pattern of activities related to the
input, processing and output.
E.g.: The management is the decision-making body that controls the activities of an
organization, just as the CPU controls the activities of the Computer System.
4. Environment
It is the surroundings in which a system performs.
E.g.: The users and vendors of a system form the environment of that system.
5. Boundaries
a. A system should be defined by its boundaries i.e. limits that identify components and
processes related to the system.
E.g.: The boundary of the payroll system limits it to calculating salaries.
b. An Automation Boundary is a boundary that separates manual processes from
automated processes.
E.g.: Entering the basic salary data is a manual process, while the actual calculation
of the salary is an automated process.
6. Feedback
a. It implies the users' response to a system.
b. It provides valuable information about what improvements and updates can be applied
to the system.

Difference Between S/W System Development & Other System Development :


Software systems development is different from other types of systems development.
The major factors which widen these differences are as follows:
a. Software is an intangible product. It can be conceived only conceptually until a
very late stage of system development, viz. the coding stage.
b. Almost every software system being developed in the world is different from its
predecessors in many aspects. Some of the aspects that change quite often are the
business domain, the technology domain, and the process of software development.

c. As a result of the above, it is very rare to find a software systems development
professional working on very similar software development projects consecutively for a
long time. In fact, if a developer is not required to work on a variety of software
systems development projects, (s)he should introspect to assess his/her progress in
the career and, if needed, plan to improve future career opportunities at the earliest.
d. Technological evolution has been much faster in the software systems development
area over the last three decades than in any other area. The rate of obsolescence is
also very high. This has a major impact not only on the number of new software
architectures evolving each year; the underlying process of developing software
systems is also evolving at an equally fast rate.
Organization :
An organization consists of interrelated departments.
E.g.: Marketing Dept, Production Dept, and Personnel Dept.
Org -> Management -> Dept
Component :
A component may be a physical component or a managerial step.
The wheels, engine, etc. of a car are examples of Physical components.
Planning, organizing, controlling activities are examples of Managerial steps.
A component may also be classified as simple or complex.
A simple computer with one Input device and one Output device is an example of a simple
component.
A network of terminals linked to a mainframe is an example of a complex component.
Information System:
An information system is a collection of interrelated components which input data, process
it, and produce the output required to complete the given task.

Types Of Information Systems :


1. Transaction Processing System (TPS):
The information system that captures or records the transactions affecting a
system/organization.
E.g.: Organizations have financial packages tracking their financial activities.

2. Management Information System (MIS):
This information system uses the information captured by the Transaction Processing System
for purposes such as planning and controlling.
E.g. (I) An entire day's transactions compiled by the Transaction Processing
System can be documented into MIS reports.
E.g. (II) Cell phone companies have a billing department which keeps track of every call
received or made, using the MIS.
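The relationship between the TPS and the MIS can be sketched as follows (Python is used purely for illustration; the transaction records and function name are invented):

```python
# Transactions captured by a TPS during one day (invented records).
transactions = [
    {"dept": "Sales", "amount": 1200.0},
    {"dept": "Sales", "amount": 800.0},
    {"dept": "Purchase", "amount": 500.0},
]

def mis_report(records):
    """Compile raw TPS records into a per-department MIS summary."""
    summary = {}
    for r in records:
        summary[r["dept"]] = summary.get(r["dept"], 0.0) + r["amount"]
    return summary

print(mis_report(transactions))  # → {'Sales': 2000.0, 'Purchase': 500.0}
```

The TPS merely records each transaction; the MIS layer aggregates those records into information that management can use for planning and control.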
3. Executive Information System (EIS):
This information system provides information to executives for planning and economic forecasting.
E.g.: Constant stock updates given by news channels.

4. Communication Support System :


This support system allows employees, customers, clients to communicate with one
another.
E.g.: Employees of an organization can e-mail each other. Some organizations have
specially allotted e-mail addresses for their employees of various departments as also their
clients and customers.
5. Decision Support System (DSS) :
'Decision' emphasizes decision making in problem situations, not information processing,
retrieval, or reporting.
'Support' requires computer-aided decision situations with enough structure to permit
computer support.
'System' accentuates the integrated nature of problem solving, suggesting combined
man, machine, and decision environments.

SYSTEM ANALYSIS AND DESIGN


System is an orderly grouping of interdependent components linked together according to a
plan to achieve a specific objective.
System analysis and design refers to the process of examining a business situation with the
intent of improving it through better procedures and methods.
System development can generally be thought of as having two major components:
(1) systems analysis and (2) systems design.
Systems design is the process of planning a new business system, or one to replace or
complement an existing system. Before this planning can be done, however, the old system
must be thoroughly understood, and it must be determined how computers can best be used
(if at all) to make its operation more effective.
System Analysis is the process of gathering and interpreting facts, diagnosing problems and
using the information to recommend improvements to the system.
This is the job of the system analyst.

THE MULTIFACETED ROLE OF THE ANALYST


1. Change Agent/Agent of change :
Persuader (the mildest form of intervention)
Imposer (the most severe intervention)
Catalyst (in between the above two types)
The goal is to achieve acceptance of the candidate system with a minimum of
resistance.
2. Investigator and monitor
Investigator: Information is gathered, put together, and studied to determine why
the present system does not work well and what changes will correct the problem.
Monitor: The analyst must monitor programs in relation to time, cost and quality.
Time is the most important; if time gets away, the project is delayed and eventually
cost is increased.


3. Architect:
An architect functions as the liaison between the client's abstract design requirements and
the contractor's detailed building plan.
Analyst: the liaison between the user's logical design requirements and the detailed
physical system design.
As architect, the analyst also creates a detailed physical design of candidate systems.
He/she aids users in formalizing abstract ideas and provides details to build the end
product: the candidate system.
4. Psychologist
The analyst plays the role of a psychologist in the way he/she reaches people, interprets
their thoughts, assesses their behavior, and draws conclusions from these interactions.
Understanding interpersonal relationships is important.
5. Salesperson:
Selling change
Selling ideas
Selling the system takes place at each step in the system life cycle.
Sales skills and persuasiveness are crucial to the success of the system.
6. Motivator
Candidate system must be well designed and acceptable to the user.
System acceptance is achieved through user participation in its development, effective
user training and proper motivation to use the system.
Motivation is most obvious during the first few weeks after implementation.
If the user's staff continues to resist the system, it becomes frustrating.
7. Politician
Diplomacy and finesse in dealing with people can improve acceptance of the system.
Just as a politician must have the support of his/her constituency, the analyst's goal is
to have the support of the user's staff.
He/she represents the users' thinking and tries to achieve their goals through
computerization.

REQUIREMENTS OF A GOOD SYSTEMS ANALYST:


System Analyst :
A person who conducts a methodical study and evaluation of an activity, such as a business,
to identify its desired objectives and to determine the procedures by which these objectives
can be gained.
The various skills that a system analyst must possess can be divided into two categories:
1) Interpersonal skills
2) Technical skills
Interpersonal skills deal with relationships and the interface of the analyst with people in
business.
Technical skills focus on procedures and techniques for operations analysis, systems
analysis, and computer science.

Interpersonal skills include:


1. Communication :
He/she must have the ability to articulate and speak the language of the user, a flair
for mediation, and a knack for working with virtually all managerial levels in the
organization.
2. Understanding:
Identifying problems and assessing their ramifications. Having a grasp of company
goals and objectives and showing sensitivity to the impact of the system on people at
work.
3. Teaching:
Educating people in use of computer systems, selling the system to the user and giving
support when needed.
4. Selling:
Selling ideas and promoting innovations in problem solving using computers.

Technical skills include:


1. Creativity:
Helping users model ideas into concrete plans and developing candidate systems to
match user requirements.
2. Problem solving:
Reducing problems to their elemental levels for analysis, developing alternative
solutions to a given problem, and delineating the pros and cons of candidate systems.
3. Project management:
Scheduling, performing well under time constraints, coordinating team efforts, and
managing costs and expenditures.
4. Dynamic interface:
Blending technical and non-technical considerations in functional specifications and
general design.
5. Questioning attitude and inquiring mind:
Knowing the what, when, why, where, who and how of a system's working.
6. Knowledge of the basics of the computer and the business function.
The above skills are acquired by a system analyst through his/her education, experience
and personality.

Educational Background Of System Analyst :


1. He/she must have knowledge of systems theory and organization behavior.

2. Should be familiar with the makeup and inner workings of major application areas such
as financial accounting, personnel administration, marketing and sales, operations
management, and model building and production control.
3. Competence in system tools and methodologies, and a practical knowledge of one or
more programming and database languages.
4. Experience in hardware and software specification, which is important for selection.

Personal Attributes Of System Analyst :


1. Authority:
The confidence to tell people what to do. Project management and getting the team to
meet deadlines are the results of this quality.
2. Communication Skills:
Ability to articulate and focus on a problem area for logical solution.
3. Creativity :
Trying one's own ideas. Developing candidate systems using unique tools or methods.
4. Responsibility:
Making decisions on one's own and accepting the consequences of these decisions.
5. Varied skills:
Doing different projects and handling change.
****************


2. Approaches To Software System Development


1. Structured Approach
2. Object-Oriented Approach
3. Information Engineering Approach

Structured Approach :
Structured Approach is made up of three techniques :
(1) Structured Programming :

A structured program is a program that has one beginning, one end, and each step in
program execution consists of one of the three programming constructs: (a) A sequence
of program statements (b) A decision where one set of statements executes, or another
set of statements executes (c) A repetition of a set of statements.
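The three constructs can be illustrated with a short sketch (Python is used here purely for illustration; the payroll example and function name are invented):

```python
def total_gross_pay(hours_list, rate):
    """Compute gross pay using only the three structured constructs."""
    total = 0.0
    for hours in hours_list:          # (c) repetition: a loop over a set of statements
        if hours > 40:                # (b) decision: one branch or the other executes
            pay = 40 * rate + (hours - 40) * rate * 1.5
        else:
            pay = hours * rate
        total = total + pay           # (a) sequence: statements run one after another
    return total                      # one beginning, one end (single exit point)

print(total_gross_pay([38, 45], 10.0))  # → 855.0
```

Every structured program, however large, is built by nesting and sequencing only these three constructs.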

Top-Down Programming divides a complex program into a hierarchy of program
modules, where each module is written using the rules of structured programming and
may be called by its top-level "boss" module as required.

Modular Programming : If the program modules are separate programs working
together as a system (and not parts of the same program), these programs are
organized into a top-to-bottom hierarchy; when multiple programs are involved in
such a hierarchy, the arrangement is called modular programming.

(2) Structured Design (together with Structured Analysis, called SADT, i.e.
Structured Analysis & Design Technique)

Definition: Structured Design is a technique that provides guidelines for deciding what
the set of programs should be, what each program should accomplish, and how the
programs should be organized into a hierarchy.

Principles: Program modules should be (a) loosely coupled, i.e. each module is
as independent of the other modules as possible and thus easily changeable, and (b)
highly cohesive, i.e. each module accomplishes one clear task.
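These two principles can be sketched with a hypothetical payroll example (the module names and the flat 20% tax rule are invented for illustration):

```python
# Each module (function) is highly cohesive: it does one clear task.
def compute_tax(gross):
    """Tax calculation only; knows nothing about I/O or payroll policy."""
    return gross * 0.20

def compute_net_pay(gross):
    """Net-pay calculation; coupled to compute_tax only through its
    argument and return value (loose coupling), not shared globals."""
    return gross - compute_tax(gross)

def print_payslip(name, gross):
    """Presentation only; changing the tax rule requires no change here."""
    print(f"{name}: net pay = {compute_net_pay(gross):.2f}")

print_payslip("A. Clerk", 1000.0)   # prints "A. Clerk: net pay = 800.00"
```

Because each module touches the others only through explicit arguments and return values, any one of them can be rewritten without disturbing the rest of the hierarchy.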

User Interface Design is done in conjunction with Structured Design.

Structure Chart: It is a graphical model, produced in structured design, showing the
hierarchy of program modules.

(3) Structured Analysis :

Definition: Structured Analysis is a technique that helps define what the system needs
to do (processing requirements), what data the system needs to store and use (data
requirements), what inputs and outputs are needed, and how the functions work
together overall to accomplish the required tasks.

Data Flow Diagram (DFD): It is a graphical model, produced in structured analysis,
showing the inputs, processes, storage, and outputs of a system.

Entity Relationship Diagram (ERD): It is a graphical model of the data needed by the
system, including entities about which information is stored and the relationships among
them, produced in structured analysis.

Weaknesses of Structured Approach :


Structured Approach makes processes the focus rather than the data.

Object-Oriented Approach :

Definition: The Object-Oriented Approach to system development is an approach that
views an information system as a collection of interacting objects that work together to
accomplish a task.

Object: It is a programmatic representation of a physical entity that can
respond to messages.

Object-Oriented Analysis: It involves defining all the types of objects that do
work in the system, and showing how the objects interact to complete the required tasks.

Object-Oriented Design: It involves defining all the additional types of objects necessary
to communicate with people and devices in the system, and redefining each type of object
so it can be implemented with a specific language or environment.

Object-Oriented Programming: It involves writing statements in a programming
language to define what each type of object does, including the messages the objects
send to each other.

Class: It is a collection of similar objects; each class may have specialized
subclasses and/or a generalized superclass.

Class Diagram: It is a graphical model that shows all the classes of objects in the
system, produced in the object-oriented approach.
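A minimal sketch of these ideas in Python (the Employee/Manager classes and the pay figures are invented for illustration):

```python
class Employee:                        # generalized superclass
    def __init__(self, name, base_salary):
        self.name = name
        self.base_salary = base_salary

    def monthly_pay(self):             # responds to the "monthly_pay" message
        return self.base_salary

class Manager(Employee):               # specialized subclass of Employee
    def monthly_pay(self):             # overrides with its own behaviour
        return self.base_salary + 500

# Objects of both classes respond to the same message in their own way.
staff = [Employee("Asha", 3000), Manager("Ravi", 4000)]
print([e.monthly_pay() for e in staff])  # → [3000, 4500]
```

The calling code sends the same message to every object and need not know which class handles it; this is the "reuse" advantage noted below.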

Advantages Of Object-Oriented Approach :


(a) Naturalness (looks at the world in terms of tangible objects and not complex
procedures) &
(b) Reuse (classes can be used again and again whenever they are needed).

Drawbacks Of Object-Oriented Approach :


Since it is drastically different from the traditional approach, it is sometimes difficult to
understand.

Information Engineering Approach

Definition: Information Engineering Approach is a system development approach that


focuses on strategic planning, data modeling, and automated tools.
Advantage over Structured Approach: More rigorous and complete than Structured
Approach and the focus is more on data than on processes.
Strategic Planning: It defines all information systems the organization needs to conduct
its business, using the Architecture Plan.
Architecture Plan: This plan defines business functions and activities the system needs
to support, the data entities about which the system needs to store information, and
the technological infrastructure the organization plans to use to support the
information system.
Process Dependency Diagram: It focuses on which processes are dependent on
other processes.
CASE Tool: It helps automate work by forcing the analyst to follow the I.E.
Approach (sometimes at the expense of flexibility).

System Development Life Cycle :


The System Development Life Cycle (SDLC) is a method of System Development
that consists of 5 phases: Planning, Analysis, Design, Implementation, and Support. The
first four phases of Planning, Analysis, Design and Implementation are undertaken during
development of the project, while the last phase of Support is undertaken post-completion
of the project. Each phase has some activities associated with it, and each activity may
have some tasks associated with it.
1. Planning Phase
Following are the activities of the Planning Phase:
i] Define the Problem
- Meeting the Users
- Determine scope of Problem
- Define System capabilities
ii] Confirm Project Feasibility :
- Identify intangible costs & benefits
- Estimate tangible, developmental, & operational costs
- Calculate NPV, ROI, Payback
- Consider technical, cultural, schedule feasibility of the Project
iii] Plan Project Schedule (Chart out a complete project schedule, including the activities
and tasks of each phase.)
iv] Staff the Project (Provide required staff, such as the Analysts, the Programmers, the
End-Users, etc.)
v] Launch the Project (Begin actual work on the Project)
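The feasibility arithmetic mentioned above (NPV, ROI, payback) can be sketched as follows; the cost and benefit figures are invented:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) initial cost."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """Number of years until cumulative cash flow turns non-negative."""
    cumulative = 0.0
    for year, cf in enumerate(cashflows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None  # the project never pays back

# Invented figures: Rs. 10,000 development cost, Rs. 4,000 benefit per year.
flows = [-10000.0, 4000.0, 4000.0, 4000.0, 4000.0]
roi = (sum(flows[1:]) + flows[0]) / -flows[0]   # simple (undiscounted) ROI

print(round(npv(0.10, flows), 2))   # → 2679.46 (positive, so worth doing at 10%)
print(roi)                          # → 0.6
print(payback_period(flows))        # → 3
```

A positive NPV at the organization's discount rate, an acceptable ROI, and a payback period within the project's horizon together support the "Confirm Project Feasibility" decision.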
2. Analysis Phase :

Following are the activities of the Analysis Phase:


i] Gather information
- Meet the User to understand all aspects of the Problem
- Obtain information by observing business procedures, asking questions to user,
studying existing documents, reviewing existing systems, etc.
ii] Define System Requirements (Review & analyze obtained information and structure it
to understand requirements of new system, using graphical tools.)
iii] Build Prototype for Discovery of Requirements (Build pieces of System for Users to
review)
iv] Prioritize Requirements (Arrange requirements in order of importance)
v] Generate & Evaluate alternatives (Research alternative solutions while building
system requirements.)
vi] Review recommendations with Management (Discuss all possible alternatives with
Management and finalize best alternative)
3. Design Phase :
Following are the activities of the Design Phase:
i] Design & Integrate Network (Understand Network Specifications of Organization, such as
Computer equipment, Operating Systems, Platforms, etc.)
ii] Design Application Architecture
- Design model diagrams according to the problem
- Create the required computer program modules
iii] Design User Interfaces (Design the required forms, reports, user screens, and decide on
the sequence of interaction.)
iv] Design System Interface (Understand how the new system will interact with the existing
systems of the organization)
v] Design & Integrate Database (Prepare a database scheme and implement it into
the system) .
vi] Build Prototype for Design Details (Check workability of the proposed design
using a prototype.)
vii] Design & Integrate System Controls (Incorporate facilities such as login and
password protection to protect the integrity of the database and the application
program.)
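A minimal sketch of such a login control (Python used for illustration; the fixed salt and user names are invented, and a real system would use per-user random salts):

```python
import hashlib

# Stored credentials keep only a salted hash, never the plain password.
_users = {}

def register(username, password, salt="demo-salt"):
    _users[username] = hashlib.sha256((salt + password).encode()).hexdigest()

def login(username, password, salt="demo-salt"):
    """A system control: grants access only when the hash matches."""
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return _users.get(username) == digest

register("clerk", "s3cret")
print(login("clerk", "s3cret"))   # → True
print(login("clerk", "wrong"))    # → False
```

The control sits between the user and the application program, so the database and program logic are never reached without a successful check.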
4. Implementation Phase :
Following are the activities of the Implementation Phase:
i] Construct Software Components (Write code for the design, using programming
languages such as Java, VB, etc.)
ii] Verify & Test Software (Check the functionality of the software components.)
iii] Build Prototype for Tuning (Make the software components more efficient using a
prototype, to make the system capable of handling large volumes of transactions.)
iv] Convert Data (Incorporate data from existing system into new system and make sure it
is updated and compatible with the new system.)
v] Train & Document (Train users to use the new system, and prepare the documentation.)
vi] Install Software (Install the software and make sure all components are running
properly and check for database access.)

5. Support Phase :
Following are the activities of the Support Phase:
i] Provide support to End-Users (Provide a helpdesk facility and training programs, to
provide support to end users.)
ii] Maintain & Enhance new System (Keep the system running error-free, and
provide upgrades to keep the system contemporary.)

Classic Lifecycle Model :


This model is also known as the waterfall or linear sequential model. This model demands a
systematic and sequential approach to software development that begins at the system
level and progresses through analysis, design, coding, testing and maintenance. Figure 1.1
shows a diagrammatic representation of this model.

The life-cycle paradigm incorporates the following activities:


System engineering and analysis : Work on software development begins by
establishing the requirements for all elements of the system. System engineering and
analysis involves gathering of requirements at the system level, as well as basic top-level
design and analysis. The requirement gathering focuses especially on the software. The
analyst must understand the information domain of the software as well as the required
function, performance and interfacing. Requirements are documented and reviewed with
the client.
Design: Software design is a multi-step process that focuses on data structures, software
architecture, procedural detail, and interface characterization. The design process translates
requirements into a representation of the software that can be assessed for quality before

coding begins. The design phase is also documented and becomes a part of the software
configuration.
Coding: The design must be translated into a machine-readable form. Coding performs
this task. If the design phase is dealt with in detail, the coding can be done mechanically.
Testing : Once code is generated, it has to be tested. Testing focuses on the logic as well
as the function of the program to ensure that the code is error free and that o/p matches
the requirement specifications.
Maintenance : Software undergoes change with time. Changes may occur on account of
errors encountered, to adapt to changes in the external environment or to enhance the
functionality and/or performance. Software maintenance reapplies each of the preceding
life-cycle phases to the existing program.
The classic life cycle is one of the oldest models in use. However, there are a few
associated problems.
Some of the disadvantages are given below.
1. Real projects rarely follow the sequential flow that the model proposes. Iteration always
occurs and creates problems in the application of the model.
2. It is difficult for the client to state all requirements explicitly. The classic life cycle
requires this, and it is thus difficult to accommodate the natural uncertainty that occurs
at the beginning of any new project.

3. A working version of the program is not available until late in the project time span. A
major blunder may remain undetected until the working program is reviewed which is
potentially disastrous.
In spite of these problems the life-cycle method has an important place in software
engineering work. Some of the reasons are given below.
1. The model provides a template into which methods for analysis, design, coding, testing
and maintenance can be placed.
2. The steps of this model are very similar to the generic steps that are applicable to all
software engineering models.
3. It is significantly preferable to a haphazard approach to software development.

Prototype Model :
Often a customer has defined a set of objectives for software, but not identified the
detailed input, processing or output requirements. In other cases, the developer may be
unsure of the efficiency of an algorithm, the adaptability of the operating system or the
form that the human-machine interaction should take. In these situations, a prototyping
approach may be the best approach. Prototyping is a process that enables the developer to
create a model of the software that must be built. The sequence of events for the
prototyping model is illustrated in figure 1.2. Prototyping begins with requirements

gathering. The developer and the client meet and define the overall objectives for the
software, identify the requirements, and outline areas where further definition is required.
In the next phase a quick design is created. This focuses on those aspects of the software
that are visible to the user (e.g. i/p approaches and o/p formats). The quick design leads to
the construction of the prototype. This prototype is evaluated by the client / user and is
used to refine requirements for the software to be developed. A process of iteration occurs
as the prototype is tuned to satisfy the needs of the client, while at the same time
enabling the developer to more clearly understand what needs to be done.

The prototyping model has a few associated problems.


Disadvantages:
1. The client sees what is apparently a working version of the software, unaware that in the
rush to develop a working model, software quality and long-term maintainability were not
considered. When informed that the system must be rebuilt, most clients demand that
the existing application be fixed and made a working product. Often software
developers are forced to relent.
2. The developer often makes implementation compromises to develop a working model
quickly. An inappropriate operating system or language may be selected simply because
of availability. An inefficient algorithm may be used to demonstrate capability.
Eventually the developer may become familiar with these choices and incorporate them
as an integral part of the system.

Although problems may occur, prototyping may be an effective model for software
engineering. Some of the advantages of this model are enumerated below.
Advantages:
1. It is especially useful in situations where requirements are not clearly defined at the
beginning and are not clearly understood by either the client or the developer.
2. Prototyping is also helpful in situations where an application is built for the first time with
no precedents to be followed. In such circumstances, unforeseen eventualities may
occur which cannot be predicted and can only be dealt with when encountered.

Spiral Model :
The spiral model in software engineering has been designed to incorporate the best
features of both the classic life cycle and the prototype models, while at the same time
adding an element of risk analysis that is missing in these models. The model,
represented in figure 1.3, defines four major activities corresponding to the four quadrants
of the figure :
Planning : Determination of objectives, alternatives and constraints.

Risk analysis : Analysis of alternatives and identification and resolution of risks.

Engineering : Development of the next level product.

Customer evaluation : Assessment of the results of engineering.


An interesting aspect of the spiral model is the radial dimension as depicted in the figure.
With each successive iteration around the spiral, progressively more complete versions of
the software are built. During the first circuit around the spiral, objectives, alternatives and
constraints are defined and risks are identified and analyzed. If risk analysis indicates that
there is an uncertainty in the requirements, prototyping may be used in the engineering
quadrant to assist both the developer and the client. The client now evaluates the
engineering work and makes suggestions for improvement.
At each loop around the spiral, the risk analysis results in a go / no-go decision. If risks
are too great the project can be terminated.
In most cases however, the spiral flow continues outward toward a more complete model
of the system, and ultimately to the operational system itself. Every circuit around the spiral
requires engineering that can be accomplished using the life cycle or the prototype models.
It should be noted that the number of development activities increases as activities move
away from the center of the spiral.
Like all other models, the spiral model too has a few associated problems, which are
discussed below.
Disadvantages :
It may be difficult to convince clients that the evolutionary approach is controllable.

It demands considerable risk assessment expertise and relies on this for success.

If major risk is not uncovered, problems will undoubtedly occur.


The model is relatively new and has not been as widely used as the life cycle or the
prototype models. It will take a few more years to determine the efficiency of this process
with certainty.
This model however is one of the most realistic approaches available for software
engineering. It also has a few advantages, which are discussed below.

Advantages :
The evolutionary approach enables developers and clients to understand and react to
risks at each evolutionary level.

It uses prototyping as a risk reduction mechanism and allows the developer to use this
approach at any stage of the development.

It uses the systematic approach suggested by the classic life cycle method but
incorporates it into an iterative framework that is more realistic.

This model demands an evaluation of risks at all stages and should reduce risks before
they become problematic, if properly applied.

Component Assembly Model :


Object oriented technologies provide the technical framework for a component based
process model for software engineering. This model emphasizes the creation of classes that
encapsulate both data and the algorithms used to manipulate the data. The component-based development (CBD) model incorporates many characteristics of the spiral model. It is
evolutionary in nature, thus demanding an iterative approach to software creation.
However, the model composes applications from pre-packaged software components called
classes. The engineering begins with the identification of candidate classes. This is done by
examining the data to be manipulated, and the algorithms that will be used to accomplish
this manipulation. Corresponding data and algorithms are packaged into a class. Classes
created in past applications are stored in a class library. Once candidate classes are
identified the class library is searched to see if a match exists. If it does, these classes are
extracted from the library and reused. If it does not exist, it is engineered using object-oriented techniques. The first iteration of the application is then composed. Process flow
moves to the spiral and will ultimately re-enter the CBD during subsequent passes through
the engineering activity.
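The lookup-and-reuse step at the heart of CBD can be sketched as follows. The library contents and class names are hypothetical, invented only to show the flow: search the class library, reuse on a match, otherwise engineer a new class and store it for later reuse.

```python
# Sketch of the CBD step: search the class library, reuse or engineer.
# The library contents below are hypothetical.

class_library = {
    "Invoice": "class Invoice: ...",
    "Customer": "class Customer: ...",
}

def obtain_class(name, library):
    """Return a class from the library if a match exists; otherwise engineer it."""
    if name in library:
        return library[name], "reused"
    # No match: engineer a new class using object-oriented techniques,
    # then store it so future applications can reuse it.
    library[name] = f"class {name}: ..."
    return library[name], "engineered"

_, how1 = obtain_class("Invoice", class_library)   # found in the library
_, how2 = obtain_class("Shipment", class_library)  # newly engineered
print(how1, how2)
```

Note that the newly engineered `Shipment` class ends up in the library, which is exactly how the library grows across projects.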
Advantages :
The CBD model leads to software reuse, and reusability provides software engineers
with a number of measurable benefits.

This model leads to a 70% reduction in development cycle time and an 84% reduction
in project cost.

Disadvantages :
The results mentioned above are inherently dependent on the robustness of the
component library.


There is little question in general that the CBD model provides a significant advantage for
software engineers.

Rapid Application Development(RAD) Model :


Rapid Application Development is an incremental software development process model that
emphasizes an extremely short development cycle. The RAD model is a high-speed
adaptation of the linear sequential model in which rapid development is achieved by using
component-based construction.
If requirements are well understood and project scope is constrained, the RAD model
enables a development team to create a fully functional system within 60-90 days. Used
primarily for information system applications, the RAD approach encompasses the following
phases :

Business modeling : The information flow among business functions is modeled so as
to understand the following:
i) The information that drives the business process.
ii) The information generated.
iii) The source and destination of the information generated.
iv) The processes that affect this information.


Data modeling : The information flow defined as a part of the business-modeling
phase is refined into a set of data objects that are needed to support the business. The
attributes of each object are identified and the relationships between these objects are
defined.

Process modeling: The data objects defined in the previous phase are transformed to
achieve the information flow necessary to implement a business function. Processing
descriptions are created for data manipulation.

Application generation : RAD assumes the use of fourth generation techniques.
Rather than using third generation languages, the RAD process works to reuse existing
programming components whenever possible or create reusable components. In all
cases, automated tools are used to facilitate construction.
Testing and turnover: Since RAD emphasizes reuse, most of the components have
already been tested. This reduces overall testing time. However, new components must
be tested and all interfaces must be fully exercised.
In general, if a business function can be modularized in a way that enables each function to
be completed in less than three months, it is a candidate for RAD. Each major function can
be addressed by a separate RAD team and then integrated to form a whole.
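To make the data-modeling phase concrete, the fragment below sketches two hypothetical data objects for an order-processing application. The entity names, attributes and the Customer/Order relationship are invented for illustration only.

```python
# Hypothetical data objects produced by a RAD data-modeling phase.
# Names and attributes are assumptions for an order-processing example.
from dataclasses import dataclass, field

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer: Customer                    # relationship: an Order belongs to one Customer
    items: list = field(default_factory=list)  # attribute: the line items on the order

c = Customer(1, "Acme Ltd")
o = Order(101, c, items=["widget"])
print(o.customer.name)
```

The process-modeling phase would then add the transformations (e.g. order entry, invoicing) that operate on these objects.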

Advantages :

Modularized approach to development

Creation and use of reusable components

Drastic reduction in development time


Disadvantages :
For large projects, sufficient human resources are needed to create the right number
of RAD teams.

Not all types of applications are appropriate for RAD. If a system cannot be
modularized, building the necessary components for RAD will be difficult.

Not appropriate when the technical risks are high. For example, when an application
makes heavy use of new technology or when the software requires a high degree of
interoperability with existing programs.

Incremental Model :
This model combines elements of the linear sequential model with the iterative philosophy
of prototyping. The incremental model applies linear sequences in a staggered fashion as
time progresses. Each linear sequence produces a deliverable increment of the software.
For example, word processing software may deliver basic file management, editing and
document production functions in the first increment; more sophisticated editing and
document production in the second increment; spelling and grammar checking in the third
increment; advanced page layout in the fourth increment; and so on. The process flow for
any increment can incorporate the prototyping model. When an incremental model is used,
the first increment is often a core product. Hence, basic requirements are met, but
supplementary features remain undelivered. The client uses the core product. As a result of
his evaluation, a plan is developed for the next increment. The plan addresses
improvement of the core features and addition of supplementary features. This process is
repeated following delivery of each increment, until the complete product is produced. As
opposed to prototyping, incremental models focus on the delivery of an operational product
after every iteration.
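The word-processor example above can be sketched as a list of increments, where the client always has an operational product made up of everything delivered so far. The feature names follow the example in the text; the helper function is an illustrative assumption.

```python
# Sketch of incremental delivery for the word-processor example.

increments = [
    ["file management", "basic editing", "document production"],  # the core product
    ["sophisticated editing"],
    ["spelling and grammar checking"],
    ["advanced page layout"],
]

def deliver(increments, upto):
    """Features available to the client after `upto` increments are delivered."""
    delivered = []
    for inc in increments[:upto]:
        delivered.extend(inc)  # each increment adds to an operational product
    return delivered

print(deliver(increments, 1))  # the client evaluates the core product first
```

After evaluating `deliver(increments, 1)` (the core product), the plan for the second increment is drawn up, and so on until all increments are delivered.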


Figure 1.6 The Incremental Model.


Advantages Of Incremental Model :
1. Particularly useful when staffing is inadequate for a complete implementation by the
business deadline.
2. Early increments can be implemented with fewer people. If the core product is well
received, additional staff can be added to implement the next increment.
3. Increments can be planned to manage technical risks. For example, the system may
require availability of some hardware that is under development. It may be possible to
plan early increments without the use of this hardware, thus enabling partial
functionality and avoiding unnecessary delay.

Extreme Programming(XP) :

The most widely used agile process, originally proposed by Kent Beck.
XP Planning :
Begins with the creation of user stories.
Agile team assesses each story and assigns a cost.
Stories are grouped to form a deliverable increment.
A commitment is made on the delivery date.
After the first increment, project velocity is used to help define subsequent delivery
dates for other increments.


XP Design :
Follows the KIS (keep it simple) principle.
For difficult design problems, suggests the creation of spike solutions (a design
prototype).
Encourages refactoring (an iterative refinement of the internal program design).

XP Coding :
Recommends the construction of a unit test for a story before coding commences.
Encourages pair programming.

XP Testing :
All unit tests are executed daily.
Acceptance tests are defined by the customer and executed to assess customer-visible functionality.
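The test-first practice noted above (write the unit test for a story before coding) can be sketched with Python's standard `unittest` module. The story, the function name `gross_to_net` and the figures are invented for the example.

```python
# XP-style test-first sketch: the test for a (hypothetical) pay-calculation
# story is written before the implementation exists.
import unittest

class TestPayStory(unittest.TestCase):
    # This test is written first; it fails until gross_to_net is implemented.
    def test_net_salary(self):
        self.assertEqual(gross_to_net(1000, 0.2), 800)

def gross_to_net(gross, tax_rate):
    """Written only after the failing test above pinned down the behaviour."""
    return gross - gross * tax_rate

# Run the unit test (in XP, all unit tests are executed daily).
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestPayStory))
print(result.wasSuccessful())
```

Pair programming, also recommended above, would have one developer writing this code while the other reviews it as it is typed.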

Formal Methods Model :

1. The formal methods model encompasses a set of activities that leads to formal
mathematical specification of computer software.
2. Formal methods enable a software engineer to specify, develop, and verify a computer-based system by applying a rigorous mathematical notation.
3. When formal methods are used during development, they provide a mechanism for
eliminating many of the problems that are difficult to overcome using other software
engineering paradigms. Ambiguity, incompleteness, and inconsistency can be
discovered and corrected more easily, not through ad hoc review but through the
application of mathematical analysis.
4. When formal methods are used during design, they serve as a basis for program
verification and therefore enable the software engineer to discover and correct errors
that might go undetected.
5. The formal methods model offers the promise of defect-free software.
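Formal specification languages such as Z or VDM are beyond the scope of these notes, but their flavour can be suggested with executable pre- and postconditions. This is only a loose analogy, not a formal method proper, and the `withdraw` operation is a made-up example.

```python
# Loose analogy to a formal specification: an operation with explicit,
# machine-checked pre- and postconditions (example operation is assumed).

def withdraw(balance, amount):
    # Precondition: the specification forbids overdrawing the account.
    assert 0 <= amount <= balance, "precondition violated"
    new_balance = balance - amount
    # Postcondition: the result is exactly the old balance minus the amount.
    assert new_balance == balance - amount, "postcondition violated"
    return new_balance

print(withdraw(100, 30))
```

A genuine formal method would state these conditions mathematically and prove them for all inputs, rather than checking them at run time.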
Drawbacks Of Formal Methods Model :
1. The development of formal models is quite time consuming and expensive.
2. Because few software developers have the necessary background to apply formal
methods, extensive training is required.
3. It is difficult to use the models as a communication mechanism for technically
unsophisticated customers.

@@@@@@@@@@@@


3.Analysis : Investigating System Requirements


Introduction :
The requirement analysis task is a process of discovery, refinement, modeling and
specification. The software scope is refined in detail. Models of the required
information, control flow, operational behavior and data content are created. Alternative
solutions are analyzed and allocated to various software elements.
Both the developer and the customer take an active role in requirements analysis and
specification. The customer attempts to reformulate a sometimes unclear concept of
software function and performance into concrete detail. The developer acts as interrogator,
consultant and problem-solver.

Requirement analysis is a software engineering task that bridges the gap between
system level software allocation and software design.

It enables the system engineer to specify software function and performance, indicate
software's interface with other system elements and establish design constraints that
the software must meet.

It allows the software engineer to refine the software allocation and build models of the
process, data and behavioral domains that will be treated by software.

It provides the software designer with a representation of information and function that
can be translated into data, architectural and procedural design.

It also provides the developer and the client with the means to assess quality once the
software is built.

The principles of requirement analysis call upon the analyst to systematically
approach the specification of the system to be developed. This means that the analysis has
to be done using the available information. Generally, all computer systems are looked
upon as information processing systems, since they process data input and produce a
useful output.
The logical view of a system gives the overall feel of how the system operates. Any system
performs three generic functions: input, output and processing. The logical view focuses on
the problem-specific functions. This helps the analyst to identify the functional model of the
system. The functional model begins with a single context level model. Over a series of
iterations, more and more functional detail is provided, until all system functionality is
represented.
The physical view of the system focuses on the operations being performed on the data
that is either taken as input or generated as output. This view determines the actions to be
performed on the data under specific conditions. This helps the analyst to identify the
behavioral model of the system. The analyst can determine an observable mode of
behavior that changes only when some event occurs.
Examples of such events are:

i) An internal clock indicating some specified time has passed.
ii) A mouse movement.
iii) An external time signal.

What are System Requirements?

System Requirements are the functions that our system must perform.
During planning, the Analyst defines system capabilities, during analysis, the
Analyst expands these into a set of system requirements.
There are two types of System Requirements:

Functional : activities that a system must perform with respect to the organisation.
Technical : operational objectives related to the environment, hardware, and
software of the organization.

In functional requirements, for example, if a Payroll System is being developed, then it
is required to calculate salary, print paychecks, calculate taxes, net salary etc.

In technical requirements, for example, the system may be required to support multiple
terminals with the same response time, or may be required to run on a specific
operating system.

Sources of System Requirements


The Stakeholders :

The Stakeholders of the System are considered as the primary source of information for
functional system requirements.

Stakeholders are people who have an interest in the successful implementation of your
system.

There are three groups of stakeholders: (a) Users who use the system on a daily basis
(b) Clients who pay for and own the system (c) Technical staff i.e. the people who must
ensure that the system operates in the computing environment of the organization.

The analyst's first task during analysis is to (a) identify every type of stakeholder and
(b) identify the critical person from each type (group) of stakeholders.

User Stakeholders :

User Stakeholders are classified into 2 types: (a) Vertical and (b) Horizontal.

Horizontal implies that an analyst needs to look at information flow across departments
or functions.

For example, a new inventory system may affect multiple departments, such as
sales, manufacturing, etc, so these departments need to be identified, so as to collect
information relevant to them.

Vertical implies that an analyst needs to look at information flow across job levels, such
as clerical staff, middle management, executives, etc.

Each of these users may need the system to perform different functions with respect to
themselves.

A Transaction is the single occurrence of a piece of work or an activity done in an
organization.

A Query is a request for information from a system or from a database.

Analysis tasks :
All analysis methods are related by a set of fundamental principles:
The information domain of the problem must be represented and understood.

Models that depict system information function and behavior should be developed.

The models and the problem must be partitioned in a manner that uncovers detail in a
layered or hierarchical fashion.

The analysis process should move from essential information to implementation detail.
Software requirement analysis may be divided into five areas of effort:
i) Problem recognition :
Initially, the analyst studies the system specification and the software project plan. Next,
communication for analysis must be established so that problem recognition is ensured.
The analyst must establish contact with management and the technical staff of the
user/customer organization and the software development organization. The project
manager can serve as a coordinator to facilitate establishment of communication paths.
The objective of the analyst is to recognize the basic problem elements as perceived by the
client.
ii) Evaluation and synthesis :
Problem evaluation and synthesis is the next major area of effort for analysis. The analyst
must evaluate the flow and content of information, define and elaborate all
software functions, understand software behavior in the context of events that affect the
system, establish interface characteristics and uncover design constraints. Each of these
tasks serves to define the problem so that an overall approach may be synthesized.
iii) Modeling :
We create models to gain a better understanding of the actual entity to be built. The
software model must be capable of modeling the information that software transforms, the
functions that enable the transformation to occur and the behavior of the system during
transformation. Models created serve a number of important roles:
The model aids the analyst in understanding the information, function and behavior
of the system, thus making the requirement analysis easier and more systematic.

The model becomes the focal point for review and the key to determining the
completeness, consistency and accuracy of the specification.

The model becomes the foundation for design, providing the designer with an
essential representation of software that can be mapped into an implementation
context.

iv) Specification :
There is no doubt that the mode of specification has much to do with the quality of the
solution. The quality, timeliness and completeness of the software may be
adversely affected by incomplete or inconsistent specifications.
Software requirements may be analyzed in a number of ways. These analysis techniques
lead to a paper or computer-based specification that contains graphical and natural
language descriptions of the software requirements.
v) Review :

Both the software developer and the client conduct a review of the software requirements
specification. Because the specification forms the foundation of the development phase,
extreme care is taken in conducting the review.
The review is first conducted at a macroscopic level. The reviewers attempt to ensure that
the specification is complete, consistent and accurate. In the next phase, the review is
conducted at a detailed level. Here, the concern is on the wording of the specification. The
developer attempts to uncover problems that may be hidden within the
specification content.

Fact-Finding Methods:
Fact-finding techniques are used to identify system requirements, through comprehensive
interaction with the users using various ways of gathering information.
There are six methods of Information Gathering which are as follows :
1. Distribute & Collect Questionnaires :
Questionnaires enable the project team to collect information from a large
number of stakeholders conveniently, and to obtain preliminary insight on their
information needs.

This information is then used to identify areas that need further research using
document reviews, interviews, and observation.

Questionnaires can be used to answer quantitative questions, such as "How many
orders do you enter in a day?"


Such questions are called closed-ended questions i.e. questions that have
simple, definitive answers and do not invite discussion or elaboration.
They can be used to determine the users' opinion about various aspects of a
system (say, asking the user to rate a particular activity on a scale of 1-5).

Questionnaires, however, do not provide information about processes, workflow, or techniques used.

Questions that elicit a descriptive response are best answered using
interviews, or observation.

Such questions that encourage discussion and elaboration are called
open-ended questions.

2. Review Existing Reports, Forms, and Procedure Descriptions :

Two advantages of reviewing existing documents and documentation:


To get a better understanding of processes

To gain knowledge about the industry or the application that needs to be
studied.

An analyst requests for and reviews procedural manuals, and work descriptions, in
order to understand business functions.

Documents and reports can also be used in interviews, where forms and reports
are used as visual aid, and working documents are used for discussion.

Discussion can center on use of each form, its objective, distribution, and
information content.

Forms already filled-out with real information ensure a correct understanding of the
fields and data content.

Reviewing existing documentation of existing procedures helps identify business
rules, while written procedures also help in discovering discrepancies and
redundancies in the business processes.

It is essential to ensure that the assumptions and business rules derived from
existing documentation are accurate.

3. Conduct Interviews & Discussions with Users :


Interviewing stakeholders is considered the most effective way to
understand business functions and rules, though it is also the most time-consuming
and resource-expensive.

In this method, members of the project team (system analysts) meet with
individual groups of users, in one or multiple sessions in order to understand all
processing requirements through discussion.


An effective interview consists of three parts: (a) Preparing for the interview (b)
Conducting the interview and (c) Following up the interview.

Before an Interview:
Establish objective of interview (what do you want to accomplish
through this interview?)

Determine correct user(s) to be involved (no. of users depends on the
objective)

Determine project team members to participate (at least 2)

Build a list of questions and issues to be discussed

Review related documents and materials (list of specific questions, open and
closed ended)

Set the time and location (quiet location, uninterrupted)

Inform all participants of objective, time, and locations (each participant
should be aware of objective of the interview)

During an Interview:
Dress appropriately (show good manners)
Arrive on time (arriving early is a good practice, if long interview, prepare for
breaks)
Look for exceptions and error conditions (ask "what if" questions, ask
about exceptional situations)
Probe for details (ensure complete understanding of all procedures and rules)
Take thorough notes (handwritten note-taking makes user feel that what he
has to say is important to you)
Identify and document unanswered items or open questions (useful for
next interview session)

After an Interview:
Review notes for accuracy, completeness, and understanding (absorb,
understand, document obtained information)
Transfer information to appropriate models and documents (create models
for better understanding after complete review)
Identify areas that need further clarification (keep a log of unanswered
questions, such as those based on policy questions raised by new system,
include them in next interview)
Send thank-you notes if appropriate


4. Observe Business Processes & Work-flow :


Observing the business procedures that the new system will support is an excellent
way to understand exactly how the users use a system, and what information they
need.
A quick walkthrough of the work area gives a general understanding of the layout of
the office, the need and use of computer equipment, and the general workflow.

Actually observing a user at his job provides details about the actual usage of the
computer system, and how the business processes are carried out in reality.

Being trained by a user and actually performing the job allows one to
discover the difficulties of learning new procedures, the importance of an
easy-to-use system, and drawbacks of the current system that the new system
needs to address.

It must be remembered that the level of commitment required by different processes
varies from one process to another.

Also, the analyst must not be a hindrance to the user.

5. Build Prototypes :
Building a prototype implies creating an initial working model of a larger, more
complex entity.

Types of prototypes: throwaway, discovery, design, evolving prototypes.

Different phases of the SDLC require different prototypes.

The Discovery Prototype is used in the Planning & Analysis phases to test feasibility
and help identify processing requirements.

The Development Prototype is used in the design, coding and implementation
phases, to test designs, effectiveness of code and workability of software.

Discovery prototypes are usually discarded after the concept has been tested, while
an Evolving prototype is one that grows and evolves and may eventually be used as
the final, live system.

Characteristics of Prototypes:
A prototype should be operative i.e. a working model, that may provide look-and-feel but may lack some functionality.

It should be focused on a single objective, even if simple prototypes are being
merged into a single large prototype.

It should be built and modified easily and quickly, so as to enable
immediate modification if the approach is wrong.
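A discovery prototype can be as small as a few lines that exercise one processing requirement. The order-entry check below is a hypothetical example that shows the single-objective character described above: it is operative, but deliberately omits look-and-feel, persistence and everything else.

```python
# Hypothetical discovery prototype: tests one processing requirement only
# (can an order quantity be validated at entry?), not the full system.

def accept_order(quantity, in_stock):
    """Single-objective check; UI and data storage are deliberately absent."""
    if quantity <= 0:
        return "rejected: quantity must be positive"
    if quantity > in_stock:
        return "rejected: insufficient stock"
    return "accepted"

print(accept_order(5, 10), "|", accept_order(20, 10))
```

Being a throwaway/discovery prototype, this would be discarded once the validation rule is confirmed with the users; only the confirmed requirement survives into the design.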


6. Conduct Joint Application Design (JAD) Sessions :


JAD is a technique used to expedite the investigation of system requirements.
Usually, the analysts first meet with the users and document the discussion through
notes & models (which are later reviewed).

Unresolved issues are placed on an open-items list, and are eventually
discussed in additional meetings.

The objective of this technique is to compress all these activities into a shorter series
of JAD sessions with users and project team members.

During a session, all of the fact-finding, model-building, policy decisions, and
verification activities are completed for a particular aspect of the system.
The success of a JAD session depends on the presence of all key stakeholders and
their contribution and decisions.

Validate The Requirements / Requirements Validation :

Requirements validation is a critical step in the development process, usually performed
during requirements engineering or requirements analysis, and also at delivery (client
acceptance test).
Requirements validation criteria:
Complete : All possible scenarios, in which the system can be used, are described,
including exceptional behavior by the user or the system.

Consistent : There are no two functional or nonfunctional requirements that
contradict each other.

Unambiguous : Requirements can not be interpreted in mutually exclusive ways.

Correct : The requirements represent the client's view.

More Requirements validation criteria :


Realistic : Requirements can be implemented and delivered.
Verifiable : Requirements can be checked.
Needs an exact description of the requirements
Problem with requirements validation :
Requirements change very fast during requirements elicitation.
Tool support for managing requirements :
Store requirements in a shared repository

Provide multi-user access


Automatically create a system specification document from the repository.

Allow for change management.

Provide traceability throughout the project lifecycle.
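The tool support listed above can be illustrated with a minimal in-memory sketch of a shared requirements repository. The requirement IDs, texts and the shape of the records are assumptions for the example; a real tool would add multi-user access and change management on top.

```python
# Minimal sketch of a requirements repository with traceability links
# and specification-document generation. IDs and texts are hypothetical.

repository = {}  # requirement id -> record

def add_requirement(req_id, text, traces_to=None):
    """Store a requirement with optional traceability links to other requirements."""
    repository[req_id] = {"text": text, "traces_to": traces_to or []}

def specification_document():
    """Automatically create a system specification document from the repository."""
    return "\n".join(f"{rid}: {rec['text']}" for rid, rec in sorted(repository.items()))

add_requirement("R1", "Calculate net salary")
add_requirement("R2", "Print paychecks", traces_to=["R1"])  # traceability link
print(specification_document())
```

The `traces_to` links are what allow a requirement to be followed through design, code and test artifacts later in the project lifecycle.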

Structured Walkthroughs :

1. A structured walkthrough is a planned review of a system or its software by persons
involved in the development effort.
2. The participants are generally at the same level in the organization: that is, they are
analysts or programmer-analysts.
Typically department managers for marketing or manufacturing are not involved in the
review even though they may be the eventual recipients of the system.
3. Sometimes structured walkthroughs are called Peer Reviews because the participants
are colleagues at the same level in the organization.
Characteristics :
1. The purpose of walkthroughs is to find areas where improvement can be made in the
system or the development process.
2. A walkthrough should be viewed by the programmers and analysts as an opportunity to
receive assistance, not as an obstacle to be avoided or tolerated.
3. The review session does not result in the correction of errors or changes in
specifications. Those activities remain the responsibility of the developers. Hence the
emphasis is constantly on review, not repair.
4. The individuals who formulated the design specifications or created the program code
are as might be expected, part of the review team.
5. A moderator is sometimes chosen to lead the review, although many organizations
prefer to have the analyst or designer who formulated the specifications or program
lead the session, since they have greater familiarity with the item being reviewed. In
either case, someone must be responsible for keeping the review focused on the subject
of the meeting.
6. A scribe or recorder is also needed to capture the details of the discussion and the ideas
that are raised.
Since the walkthrough leader or the sponsoring programmers or analysts may not be
able to jot down all the points aired by the participants, appointing another individual to
take down all the relevant details usually ensures a more complete and objective record.


7. The benefits of establishing standards for data names, module determination, and data
item size and type are recognized by systems managers. The time to start enforcing
these standards is at the design stage.
Therefore, they should be emphasized during walkthrough sessions.
8. Maintenance should also be addressed during walkthroughs. Enforcing coding
standards, modularity, and documentation will ease later maintenance needs.
9. It is becoming increasingly common to find organizations that will not accept new
software for installation until it has been approved by software maintenance teams. In
such an organization, a participant from the quality control or maintenance team should
be an active participant in each structured walkthrough.
10. (i) The walkthrough team must be large enough to deal with the subject of the review
in a meaningful way, but not so large that it cannot accomplish anything.
(ii) Generally no more than 7 to 9 persons should be involved, including the individuals
who actually developed the product under review, the recorder, and the review
leader.
11. a. As a general rule, management is not directly involved in structured walkthrough
sessions. Its participation could actually deter members of the review team
from speaking out about problems they see in the project.
b. This is because management participation is often interpreted to mean evaluation.
c. Managers may feel that raising many questions, identifying mistakes or suggesting
changes indicates that the individual whose work is under review is incompetent.
d. It is best to provide managers with reports summarizing the review session rather
than to have them participate.
e. The most appropriate type of report will communicate that a review of the specific
project or product was conducted, who attended, and what action the team took. It
need not summarize errors that were found, modifications suggested, or revisions
needed.

12. Structured reviews rarely exceed 90 minutes in length.


13. The structured walkthrough can be used throughout the systems development process
as a constructive and cost-effective management tool, after the detailed investigation
(requirements review), following design (design review), and during program
development (code review and testing review).

@@@@@@@@@@@@


4.Feasibility Analysis
A feasibility study is a preliminary study undertaken to determine and document a
project's viability. The results of this study are used to make a decision whether to proceed
with the project, or table it. If it indeed leads to a project being approved, it will - before the
real work of the proposed project starts - be used to ascertain the likelihood of the project's
success. It is an analysis of possible alternative solutions to a problem and a
recommendation on the best alternative. It, for example, can decide whether an order
processing be carried out by a new system more efficiently than the previous one.
A feasibility study could be used to test a new working system, which may be needed because:

The current system may no longer suit its purpose,

Technological advancement may have rendered the current system obsolete,

The business is expanding, allowing it to cope with extra work load,

Customers are complaining about the speed and quality of work the business provides,

Competitors are now winning a big enough market share due to an effective integration
of a computerized system.
Within a feasibility study, seven areas must be reviewed, including those of a Needs Analysis,
Economics, Technical, Schedule, Organizational, Cultural, and Legal.
1. Operational Feasibility :
It involves the following two tests:

Understanding whether the problem is worth solving and whether the solution to the
problem will work out, by analyzing the following criteria (PIECES):
(a) Performance (b) Information (c) Economy (d) Control (e) Effectiveness (f) Service.

Getting the management's and end-users' views on the solution by analyzing
the following:
(a) Will the current working environment change?
(b) How do the end users feel about their role in the new system?
(c) Would the end-users resist the new system?

2. Organizational & Cultural Feasibility :

The new system must fit into the work-environment of the organization.
It must also fit with the culture of the organization.
It should not depart dramatically from existing norms.

It has to deal with issues such as:

Low computer literacy
Perceived loss of control by staff or management
Fear of change of job responsibility
Reversal of longstanding work procedures
Fear of loss of job due to increased automation


It essentially involves identifying factors that might prevent the effective use of the
new system, thus resulting in loss of business benefits.
Such factors can be tackled with high user involvement during the system's
development and well-planned training procedures and proper orientation after the
system's completion.
3. Technical Feasibility :

This involves testing the proposed technological requirements and the available
expertise.
A company may implement new technology in the new system, or upgrade the
technology of an existing system.
In some cases, the scope and approach of the project may need to be
changed to restructure and reduce the technological risk.
When the risks are identified, the solutions may include conducting additional
training, hiring consultants, hiring more experienced employees.
A realistic assessment will help identify technological risks early and permit
corrective measures to be taken.
4. Schedule Feasibility :

It involves assessing if the project can be completed according to the proposed project
schedule.
Every schedule requires many assumptions and estimates about the project, as the
needs and scope of the system may not be known at this stage.
Sometimes, a project may need to be completed within a deadline given by the upper
management.
Milestones should be developed within the project schedule to assess the ongoing risk
of the schedule slipping.
Deadlines should not be considered during project schedule construction, unless they
are absolute.
5. Resource Feasibility :

The availability of resources is a crucial assessment in terms of project feasibility.


The primary resource consists of the members of the team.
Development projects require the involvement of system analysts, system
technicians, and users.

Three risks are involved here:

(a) Required people may not be available to the team when needed.
(b) People who are assigned may not have the necessary skills.
(c) People already working on the project may leave midway.

Also, adequate computer resources, physical facilities, and support staff are
valuable resources.
Delays in making these resources available can affect the project schedule.


6. Economic Feasibility :

Economic feasibility consists of two tests:


(a) Do the anticipated benefits exceed the projected costs of development?
(b) Does the organization have adequate cash flow to fund the project?

The new system must increase income, either through cost savings or through
increased revenues.
The economic feasibility of a system is usually assessed using one of the following
methods:
(a) Cost/Benefit Analysis.
(b) Calculation of the Net Present Value (NPV)
(c) Payback Period, or Breakeven Point
(d) Return on Investment

Cost estimation :
Software cost estimation is a continuing activity, which starts at the proposal stage and
continues through the lifetime of the project. There are several different techniques
of software cost estimation. They are:
i) Expert judgment :
One or more experts on the software development techniques to be used, and on the
application domain, are consulted. They each estimate a project cost and the final cost is
arrived at by consensus.
ii) Estimation by analogy :
This technique is applicable when other projects in the same application domain have been
completed. The cost of a new project is estimated by analogy with these
completed projects.
iii) Parkinson's Law :
It states that work expands to fill the time available. In software costing, this means that
the cost is determined by available resources rather than by objective assessment.
iv) Pricing to win :
The software cost is estimated to be whatever the customer has available to spend on the
project. The estimated effort depends on the customer's budget and not on the software
functionality.
v) Top-down estimation:
A cost estimate is established by considering the overall functionality of the project and
how that functionality is provided by interacting functions. Cost estimates are made on the
basis of logical function rather than component implementation of the function.


vi) Bottom-up estimation :


The cost of each component is estimated. All these costs are added to produce a final cost
estimate.

Cost/Benefit Analysis

It is the analysis used to compare costs and benefits to see whether the investment in the
development of a new system will be more beneficial than costly.
Cost And Benefits Categories :

In developing cost estimates for a system, we need to consider several cost elements.
Following are the types of costs that are analyzed :

Hardware Costs : Costs related to actual purchases or leasing of computers and


peripheral devices.

Personnel Costs : Costs including staff salaries and benefits (staff includes
system analysts, programmers, end-users, etc.).

Facility Costs: Costs involved in the preparation of the physical site where the
computer system will be operating (wiring, flooring, air conditioning, etc.).

Developmental Costs: Costs involved in the development of the system (hardware


costs, personnel costs, facility costs).

Operating Costs : Costs incurred after the system is put into production i.e. the
day-to-day operations of the system (salaries of people using the application, etc.).

A system is also expected to provide benefits. The first task is to identify each benefit
and then assign a monetary value to it for cost/benefit analysis.

Benefits may be tangible or intangible, direct or indirect. The two major benefits
are as follows :

Improving performance : The performance category emphasizes improvement in the
accuracy of, or access to, information and easier access to the system by the
authorised users.

Minimizing the cost of processing : Minimizing costs through an efficient system
(error control or reduction of staff) is a benefit that should be measured and
included in cost/benefit analysis.

Procedure For Cost/Benefit Determination :


There is a difference between expenditure and investment. We spend to get what we need,
but we invest to realize a return on the investment. Building a computer-based system is
an investment. Costs are incurred throughout its life cycle. Benefits are realized in the form
of reduced operating costs, improved corporate image, staff efficiency, or revenue.

Cost/Benefit Analysis is a procedure that gives a picture of the various costs, benefits
and rules associated with a system.
The determination of the costs and benefits entails the following steps :
1. Identify the costs and benefits pertaining to a given project.
2. Categorize the various costs and benefits for analysis.
3. Select a method of evaluation.
4. Interpret the result of the analysis.
5. Take action.
1. Costs And Benefits Identification :
Certain costs and benefits are more easily identifiable than others. For example, direct
costs, such as the price of a hard disk, are easily identified from company invoice
payments or canceled checks.
Direct benefits often relate one-to-one to direct costs, especially savings from
reducing costs in the activity in question.
Other direct costs and benefits, however, may not be well defined, since they
represent estimated costs or benefits that have some uncertainty. An example of
such a cost is the reserve for bad debt. It is a discerned real cost, although its exact
amount is not so immediately known.
A category of costs and benefits that is not easily discernible is opportunity costs and
opportunity benefits.
These are the costs or benefits forgone by selecting one alternative over another.
They do not show up in the organization's accounts and therefore are not easy to
identify.
2. Classification Of Costs And Benefits :
The next step in cost and benefit determination is to categorize costs and benefits. They
may be tangible or intangible, direct or indirect, fixed or variable.
Let us review each category.


3. Select Evaluation Method :


When all financial data have been identified and broken down into cost categories, the
analyst must select a method of evaluation. Several methods are available.

The common methods are as follows :

i.] Net Benefit Analysis :
Net benefit analysis simply involves subtracting total costs from total benefits.
It is easy to calculate, easy to interpret, and easy to present.
The main drawback is that it does not account for the time value of money and
does not discount future cash flows.
Cost/Benefit      Year 0     Year 1     Year 2     Total
Costs             $-1,000    $-2,000    $-2,000    $-5,000
Benefits          0          650        4,900      5,550
Net benefits      $-1,000    $-1,350    $2,900     $550

The above table illustrates the use of net benefit analysis. Cash flow amounts are shown for
three time periods : period 0 is the present period, followed by two succeeding periods. The
negative numbers represent cash outlays. A cursory look at the numbers shows that the net
benefit is $550.
The time value of money is extremely important in evaluation processes. Let us explain
what it means. If you were faced with an opportunity that generates $3000 per year, how
much would you be willing to invest? Obviously, you'd like to invest less than $3000.
To earn the same money five years from now, the amount of investment would be even
less. What is suggested here is that money has a time value. Today's dollar and
tomorrow's dollar are not the same. The time lag accounts for the time value of money.
The time value of money is usually expressed in the form of interest on the funds
invested to realize the future value. Assuming compounded interest, the formula is :
F=P(1+i)^n
Where
F= Future value of an investment.
P= Present value of the investment.
i = Interest rate per compounding period.
n = Number of years.
For example, $3000 invested in Treasury notes for three years at 10% interest would have
a value at maturity of :
F=$3000(1+0.10)^3
=3000(1.331)
=$3993
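The compound-interest arithmetic above can be checked with a few lines of Python; this is a minimal sketch, and the function name `future_value` is illustrative, not from the text:

```python
def future_value(p, i, n):
    # F = P(1 + i)^n : value of principal p after n years at rate i, compounded yearly
    return p * (1 + i) ** n

# The Treasury-note example from above: $3000 at 10% for 3 years
print(round(future_value(3000, 0.10, 3), 2))  # 3993.0
```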
ii.] Present Value Analysis :

In developing long-term projects, it is often difficult to compare today's costs with the
full value of tomorrow's benefits. Certain investments offer benefit periods that vary
with different projects.
Present value analysis controls for these problems by calculating the costs and benefits
of the system in terms of today's value of the investment, and then comparing across
alternatives.
A critical factor to consider in computing present value is a discount rate equivalent
to the forgone amount that the money could earn if it were invested in a different
project. It is similar to the opportunity cost of the funds being considered for the
project.

Example : Suppose that $3000 is to be invested in a microcomputer for our safe
deposit tracking system, and the average annual benefit is $1500 for the four-year
life of the system. The investment has to be made today, whereas the benefits are in
the future. We compare present values to future values by considering the time
value of the money to be invested. The amount that we are willing to invest today is
determined by the value of the benefits at the end of a given period (year). This
amount is called the present value of the benefit.
To compute the present value, we take the formula for future value (F=P(1+i)^n)
and solve for the present value (P) as follows :
P=F/(1+i)^n
So the present value of $1500 received at 10% interest at the end of the fourth year
is :
P=1500/(1+0.10)^4
=1500/1.4641
=$1024.52
That is, if we invest $1024.52 today at 10% interest, we can expect to have $1500 in
4 years. This calculation can be repeated for each year in which a benefit is
expected.

Year   Estimated      Discount Factor     Present     Cumulative Present
       Future Value   1/(1+i)^n           Value       Value of Benefits
1      $1500          0.909               $1363.64    $1363.64
2      $1500          0.826               $1239.67    $2603.31
3      $1500          0.751               $1126.97    $3730.28
4      $1500          0.683               $1024.52    $4754.80

Here the discount factor is 1/(1+i)^n, so that P = F x 1/(1+i)^n = F/(1+i)^n.
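The year-by-year present values above can be reproduced with a short Python sketch (names are illustrative):

```python
def present_value(f, i, n):
    # P = F / (1 + i)^n : today's worth of amount f received n years from now
    return f / (1 + i) ** n

# Four yearly benefits of $1500 at a 10% discount rate
total = 0.0
for year in range(1, 5):
    pv = present_value(1500, 0.10, year)
    total += pv
    print(year, round(pv, 2), round(total, 2))
```

The cumulative present value after four years comes to roughly $4754.80.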
iii.] Net Present Value (NPV) Calculation :
NPV expresses the present value of rupee/dollar (currency) benefits and costs for an
investment such as a new system.
Two concepts are involved:
All benefits and costs are calculated in terms of today's rupee/dollar (currency)
values, i.e. present values.
Benefits and costs are combined to give a net value.
It essentially tells you how much should be invested today, in order to
achieve a predetermined amount of benefit at a predetermined later point in time.
The following two terms hold great importance in this calculation:
Discount rate : The annual percentage rate by which an amount of money is
discounted to bring it to a present value.
Discount factor : The accumulation of yearly discounts based on the discount
rate.


Formula : If the present value is PV, the amount received in the future is FV, the
discount interest rate is i, the discount factor is F, and the number of years is n :
PV = FV x F, where F = 1/(1+i)^n

For example, if the future amount is Rs. 1500, and the number of years is 4, at say a
10% discount rate, then the present value can be calculated as :
PV = 1500/(1+0.10)^4 = 1500/1.4641 = 1024.5 (approx.)
i.e. today, the investment should be Rs. 1024.5, to get Rs. 1500 after 4 years.
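A minimal NPV sketch in Python, applied here to the net cash flows of the earlier net benefit example (a 1,000 outlay now, then -1,350 and +2,900); the function name is illustrative:

```python
def npv(rate, cash_flows):
    # cash_flows[0] occurs now (year 0, undiscounted); later flows are discounted
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Undiscounted, the net benefit was $550; discounting at 10% shrinks it:
print(round(npv(0.10, [-1000, -1350, 2900]), 2))  # 169.42
```

This illustrates why NPV matters: the same project that shows a $550 undiscounted net benefit is worth only about $169 in today's terms.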
iv.] Payback Period/Breakeven Period Calculation :
The payback period is the period at which the rupee/dollar (currency) benefits
offset the rupee/dollar (currency) costs.
It is the point in time when the increased cash flow exactly pays off the
costs of development and operation.
The payback period is found by cumulating the yearly net values; the year in which
the cumulative net value first becomes positive is the year in which payback occurs.
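The cumulating procedure can be sketched in Python (assuming yearly net cash flows as input; names are illustrative):

```python
def payback_year(net_flows):
    # Returns the first year in which cumulative net cash flow is positive,
    # or None if payback never occurs within the given horizon.
    cumulative = 0.0
    for year, flow in enumerate(net_flows):
        cumulative += flow
        if cumulative > 0:
            return year
    return None

# Net flows from the net benefit example: cumulative -1000, -2350, +550
print(payback_year([-1000, -1350, 2900]))  # 2
```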


v.] Return on Investment (ROI) Calculation :

Return on investment is a measure of the percentage gain received from an
investment such as a new system.
Similar to an interest rate, this calculation relates the benefits received to the
costs incurred over a specified time period.
This time period can be the expected life of the investment, or it could be an
arbitrary period.
Formula : If the estimated time-period benefits are EB and the estimated time-period
costs are EC, then ROI = (EB - EC)/EC
Here, EC is the sum of the developmental costs (DC) and the
total present value of the costs (PC).
If EB = 60,00,000; DC = 12,00,000; PC = 9,00,000 :
ROI = [60,00,000 - (12,00,000 + 9,00,000)] / (12,00,000 + 9,00,000)
= 39,00,000 / 21,00,000
= 185.7% (approx.)
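The ROI arithmetic can be verified in Python (the underscored digit grouping mirrors the Indian-style figures in the text):

```python
def roi_percent(eb, ec):
    # ROI = (EB - EC) / EC, expressed as a percentage
    return (eb - ec) / ec * 100

ec = 12_00_000 + 9_00_000  # developmental costs + present value of costs
print(round(roi_percent(60_00_000, ec), 1))  # 185.7
```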
4. Interpret Results Of The Analysis And Final Action :
When the evaluation of the project is complete, the results have to be interpreted.
This entails comparing actual results against a standard or against the result of an
alternative investment.
The interpretation phase, as well as the subsequent decision phase, is subjective,
requiring judgement and intuition.
Depending on the level of uncertainty, the analyst may be confronted with a single
known value or a range of values.
In either case, simpler measures such as net benefit analysis are easier to calculate
and present than other measures, although they do not discount future cash flows.
The decision to adopt an alternative candidate system can be highly subjective,
depending on the analyst's or end user's confidence in the estimated costs and
benefits and on the magnitude of the system.
In summary, cost/benefit analysis is a tool for evaluating projects rather than a
replacement for the decision maker. In real-life business situations, whenever a choice
among alternatives is considered, cost/benefit analysis is an important tool.
Like any tool, however, it has problems :

Valuation problems : Intangible costs and benefits are difficult to quantify, and tangible
costs are generally more pronounced than tangible benefits. In most cases, then, a
project must have substantial intangible benefits to be accepted.

Distortion problems : There are two ways of distorting the results of cost/benefit
analysis. One is the intentional favoritism of an alternative for political reasons. The
second is when data are incomplete or missing from the analysis.


Completeness problems : Occasionally an alternative is overlooked, which compromises
the quality of the final choice. Furthermore, the costs related to cost/benefit analysis may
be on the high side, or not enough costs may be considered to do a complete analysis. In
either case, the reliability of the final choice is in doubt.

List of Deliverables :

When the design of an information system is complete, the specifications are documented in
a form that outlines the features of the application. These specifications are termed the
deliverables, or the design book, by system analysts.
No design is complete without the design book, since it contains all the details that must be
included in the computer software, datasets & procedures that comprise the working
information system.
The deliverables include the following:
1. Layout charts :
Input & output descriptions showing the location of all details shown on reports,
documents, & display screens.
2. Record layouts :
Descriptions of the data items in transaction & master files, as well as related database
schematics.
3. Coding systems :
Descriptions of the codes that explain or identify types of transactions, classification, &
categories of events or entities.
4. Procedure Specification :
Planned procedures for installing & operating the system when it is constructed.
5. Program specifications :
Charts, tables & graphic descriptions of the modules & components of computer
software & the interaction between each as well as the functions performed & data
used or produced by each.
6. Development plan :
Timetables describing elapsed calendar time for development activities; personnel
staffing plans for systems analysts, programmers, & other personnel; preliminary testing
& implementation plans.
7. Cost Package :
Anticipated expenses for development, implementation and operation of the new
system, focusing on such major cost categories as personnel, equipment,
communications, facilities and supplies.

@@@@@@@@@@@@


5.Modeling : System Requirements


Overview Of Model :

What is a model ? A model is a representation of some aspect of the system to be built.

What is its purpose ?


A model helps an analyst clarify and refine the design of the system.
It also helps in describing the complexity of the information system.
It provides a convenient way of storing information about the system in a
readily understandable form.
It helps communication between the analyst and the programmer (i.e. members of
the project team).
It also assists in communication between the project team and the system users
and stakeholders.
It can be used as documentation by a future development team, while maintaining
or enhancing the system.

Types of Models :

The type of the model is based on the nature of the information being represented.
It includes :
1. Mathematical Model :
A Mathematical Model is a series of formulae that describe the technical aspects
of a system.
Such models are used to represent precise aspects of the system, that can be
best represented through formulae or mathematical notations, such as equations and
functions.
They are useful in expressing the functional requirements of scientific and
engineering applications, that tend to compute results using elaborate mathematical
algorithms.
They are also useful for expressing simpler mathematical requirements in
business systems, such as net salary calculation in a payroll system.
2. Descriptive Model :
Descriptive models are required for narrative memos, reports, or lists that describe
some aspects of the system.
This model is required especially because there is a limitation to what information can
be defined using a mathematical model.
Effective models of information systems involve simple lists of features, inputs, outputs,
events, users.
Lists are a form of descriptive models that are concise, specific, and useful.
Algorithms written using structured English or pseudocode are also considered
precise descriptive models.
3. Graphical Models :
Graphical Models include diagrams and schematic representations of some aspect
of a system.

They simplify complex relationships that cannot be understood with a verbal description.
Analysts usually use symbols to denote various parts of the model, such as external
agents, processes, data, objects, messages, connections.
Each type of graphical model uses unique and standardized symbols to represent pieces
of information.

Data Flow Diagram :

A Data Flow Diagram (DFD) is also known as a Process Model. Process Modeling is an
analysis technique used to capture the flow of inputs through a system (or group of
processes) to their resulting output.

A Data Flow Diagram is a graphical system model that shows all the main requirements
for an information system.

It involves representation of the following:


Source (External agent)/ Destination (External agent) : A person or organisation
that lies outside the boundary of a system and provides inputs to the system or
accepts the system's output.

Process/Activity : A process is an algorithm or procedure that transforms data input
into data output.

Data store : A place where data is held, for future access by one or more
processes.

Data Flow : An arrow in a DFD that represents the flow of data among the processes
of the system, the data stores, and the external agents.
All processes are numbered to show the proper sequence of events.

DFD Syntax :


Reading Data Flow Diagram (DFD) :

Level of Abstraction :

Many different types of DFDs are produced to show system requirements.


Some may show the processing at a higher level (a general view), while others may
show the processing at a lower level (a detailed view).
These differing views are referred to as levels of abstraction.
Thus Levels of Abstraction can be defined as a modelling technique that breaks the
system into a hierarchical set of increasingly more detailed models.
The higher level DFDs can be decomposed into separate lower level detailed DFDs.

Context Diagram :
A Context Diagram is a Data Flow Diagram that describes the highest view of a system.
It summarizes all processing activities within the system into a single process
representation.

All the external agents and data flow (into and out of the system) are shown in
one diagram, with the whole system represented as a single process.
It is useful for defining the scope and boundaries of a system.

The boundaries of a system in turn help identify the external agents, as they lie outside
the boundary of the system.

The inputs and outputs of a system are also clearly defined.


NOTE : Data stores are not included.

The Context-level DFD process takes 0 as the process number, while the numbering in
the 0-Level DFD starts from 1.

DFD Fragments :
A DFD fragment is a DFD that represents system response to one event within a single
process.
Each DFD fragment is a self-contained model showing how the system responds to a
single event.

The main purpose of DFD fragments is to allow the analyst to focus attention on just
one part of the system at a time.
Usually, a DFD fragment is created for each event in the event list (later made into an
event table).

Physical & Logical DFDs :


If the DFD is a physical model, then one or more assumptions about the implementation
technology are embedded in the DFD.
If the DFD is a logical model, then it assumes that the system will be implemented using
perfect technology.
Elements that indicate assumptions about implementation technology are:
Technology-specific processes (E.g.: Making copies of a document)
Actor-specific process names (E.g.: Engineer checks parts)
Technology- specific or actor-specific process orders.
Redundant processes, data flows, and files
Physical DFDs are sometimes developed and used during the last stages of analysis or
early stages of design.
Physical DFDs serve one purpose, and that is to represent one possible implementation
of the logical system requirements.
Creating Data Flow Diagram :
1. Integrating Scenario Descriptions
DFDs start with the use cases and the requirements definition.
Generally, the DFDs integrate the use cases.
Names of use cases become processes.
Inputs and outputs become data flows.
Small data inputs and outputs are combined into a single flow.
2. Steps In Building DFDs
1. Build the context diagram.
2. Create DFD fragments for each use case.
3. Organize DFD fragments into a level 0 diagram.
4. Decompose level 0 processes into level 1 diagrams as needed; decompose level 1
processes into level 2 diagrams as needed; etc.
5. Validate DFDs with the user to ensure completeness and correctness.
3. Creating The Context Diagram
Draw one process representing the entire system (process 0).
Find all inputs and outputs listed at the top of the use cases that come from or go to
external entities; draw them as data flows.
Draw in external entities as the source or destination of the data flows.
Example


4. Creating DFD Fragments
Each use case is converted into one DFD fragment.
Number the process the same as the use case number.
Change the process name into a verb phrase.
Design the processes from the viewpoint of the organization running the system.
Add data flows to show the use of data stores as sources and destinations of data.
Layouts typically place:
processes in the center,
inputs on the left,
outputs on the right,
stores beneath the processes.


Example

5. Creating The Level 0 Diagram
Combine the set of DFD fragments into one diagram.
Generally move from top to bottom, left to right.
Minimize crossed lines.
Iterate as needed.
Example


6. Creating Level 1 Diagrams (And Below)
Each use case is turned into its own DFD.
Take the steps listed on the use case and depict each as a process on the level 1 DFD.
Inputs and outputs listed on the use case become data flows on the DFD.
Include sources and destinations of data flows to processes and stores within the DFD.
External entities may also be included for clarity.
When to stop decomposing DFDs?
Ideally, a DFD has at least three processes and no more than seven to nine.

Basic rules for Process Modeling/DFD :

1. A series of data flows always starts or ends at an external agent or at a data store.
Conversely, this means that a series of data flows cannot start or end at a process.
2. A process must have both data inflows and outflows.
3. All data flows must be labeled with the precise data that is being exchanged.
4. Process names should start with a verb and end with a noun.
5. Data flows are named as descriptive nouns.


6. A data store must have at least one data inflow.
7. A data flow cannot go between an external agent and a data store; a process must
be in between.
8. A data flow cannot go between two external agents; a process must be in between.
9. A data flow cannot go between two data stores; a process must be in between.
10. External agents and data flows can be repeated on a process model in order to avoid
lines crossing, but do not repeat processes.

Evaluating DFD Quality :

A high-quality set of DFDs is identified by its readability, internal consistency, and
accurate representation of system requirements.
Some important terms :
Information overload : An undesirable condition that occurs when too
much information is presented to a reader at one time.
Rule of 7 +/- 2 : A rule of model design that limits the number of model
components or connections among components to not more than nine, and not
less than five.
Minimization of interfaces : A principle of model design that seeks
simplicity by minimizing the number of interfaces or connections among model
components.

Structured English :

Structured English describes procedures. The procedure may be a process in a DFD.

Structured English is the marriage of the English language with the syntax of
structured programming.
Thus, structured English aims at getting the benefits of both programming logic and
natural language.
Programming logic helps to attain precision, while natural language provides the
convenience of spoken languages.
Structured English can be specified as a process specification tool.
The two building blocks of Structured English are :
(1) Structured logic, or instructions organized into nested or grouped procedures, and
(2) Simple English statements such as add, multiply, move, etc. (strong, active, specific
verbs).

Five conventions to follow when using Structured English :


1. Express all logic in terms of sequential structures, decision structures, or iterations.
2. Use and capitalize accepted keywords such as: IF, THEN, ELSE, DO, DO WHILE, DO
UNTIL, PERFORM
3. Indent blocks of statements to show their hierarchy (nesting) clearly.
4. When words or phrases have been defined in the Data Dictionary, underline those
words or phrases to indicate that they have a specialized, reserved meaning.
5. Be careful when using "and" and "or" as well as "greater than" and "greater than or
equal to" and other logical comparisons.

Example Of Structured English :

A bank will grant a loan under the following conditions: 1. If a customer has an account
with the bank and has no loan outstanding, the loan will be granted. 2. If a customer has
an account with the bank but some amount is outstanding from previous loans, the loan
will be granted only if special approval is obtained. 3. Reject all loan applications in all
other cases.
The Structured English for the above example would be as follows :
IF customer has a Bank Account
THEN
IF Customer has no dues from previous account
THEN Allow loan facility
ELSE
IF Management Approval is obtained
THEN Allow loan facility
ELSE Reject
ELSE Reject
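The same nested logic renders directly into code; here is a Python sketch (function and parameter names are illustrative):

```python
def loan_decision(has_account, has_outstanding_dues, management_approval):
    # Mirrors the nested IF/THEN/ELSE logic of the Structured English above
    if has_account:
        if not has_outstanding_dues:
            return "Allow loan facility"
        if management_approval:
            return "Allow loan facility"
    return "Reject"

print(loan_decision(True, False, False))  # Allow loan facility
print(loan_decision(True, True, False))   # Reject
print(loan_decision(False, False, True))  # Reject
```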

Decision Table :

Decision tables are a precise yet compact way to model complicated logic.
Decision tables, like if-then-else and switch-case statements, associate conditions with
actions to perform. But, unlike the control structures found in traditional programming
languages, decision tables can associate many independent conditions with several
actions in an elegant way.
Decision Tables are useful when complex combinations of conditions, actions, and rules
are found or you require a method that effectively avoids impossible situations,
redundancies, and contradictions.

Structure Of A Decision Table :

Decision tables are typically divided into four quadrants, as shown below:

The Four Quadrants
Conditions      Condition alternatives
Actions         Action entries

Each decision corresponds to a variable, relation or predicate whose possible values are
listed among the condition alternatives.
Each action is a procedure or operation to perform, and the entries specify whether (or
in what order) the action is to be performed for the set of condition alternatives the
entry corresponds to.
Many decision tables include in their condition alternatives the don't care symbol, a
hyphen. Using don't cares can simplify decision tables, especially when a given condition
has little influence on the actions to be performed.
In some cases, entire conditions thought to be important initially are found to be
irrelevant when none of their alternatives influence which actions are performed.

Aside from the basic four quadrant structure, decision tables vary widely in the way the
condition alternatives and action entries are represented.
Some decision tables use simple true/false values to represent the alternatives to a
condition (akin to if-then-else), other tables may use numbered alternatives (akin to
switch-case), and some tables even use fuzzy logic or probabilistic representations for
condition alternatives.
In a similar way, action entries can simply represent whether an action is to be
performed (check the actions to perform), or in more advanced decision tables, the
sequencing of actions to perform (number the actions to perform).

Example Of Decision Table :


The limited-entry decision table is the simplest to describe. The condition alternatives are
simple boolean values, and the action entries are check-marks, representing which of the
actions in a given column are to be performed.
A technical support company writes a decision table to diagnose printer problems based
upon symptoms described to them over the phone by their clients.
Printer Troubleshooter
                                          Rules
                                          1  2  3  4  5  6  7  8
Conditions
  Printer does not print                  Y  Y  Y  Y  N  N  N  N
  A red light is flashing                 Y  Y  N  N  Y  Y  N  N
  Printer is unrecognized                 Y  N  Y  N  Y  N  Y  N
Actions
  Check the power cable                         X
  Check the printer-computer cable        X     X
  Ensure printer software is installed    X     X     X     X
  Check/replace ink                       X  X        X  X
  Check for paper jam                        X     X

Of course, this is just a simple example (and it does not necessarily correspond to the
reality of printer troubleshooting), but even so, it demonstrates how decision tables can
scale to several conditions with many possibilities.
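A limited-entry decision table is naturally implemented as a lookup from a tuple of condition values to a list of actions. The Python sketch below follows the rule layout of the table above; the ordering of conditions and the exact action lists are taken from that (reconstructed) table and should be treated as illustrative:

```python
# Each key is a (does_not_print, red_light_flashing, unrecognized) tuple;
# each value is the list of actions checked in that rule's column.
RULES = {
    (True,  True,  True):  ["Check the printer-computer cable",
                            "Ensure printer software is installed",
                            "Check/replace ink"],
    (True,  True,  False): ["Check/replace ink", "Check for paper jam"],
    (True,  False, True):  ["Check the power cable",
                            "Check the printer-computer cable",
                            "Ensure printer software is installed"],
    (True,  False, False): ["Check for paper jam"],
    (False, True,  True):  ["Ensure printer software is installed",
                            "Check/replace ink"],
    (False, True,  False): ["Check/replace ink"],
    (False, False, True):  ["Ensure printer software is installed"],
    (False, False, False): [],   # no symptoms, nothing to do
}

def diagnose(does_not_print, red_light_flashing, unrecognized):
    """Look up the suggested actions for one combination of symptoms."""
    return RULES[(does_not_print, red_light_flashing, unrecognized)]
```

Because every one of the 2^3 = 8 combinations appears as a key, a missing rule would be immediately visible, which is precisely the benefit discussed below.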
Benefits Of Decision Table :
Decision tables make it easy to observe that all possible conditions are accounted for. In
the example above, every possible combination of the three conditions is given.
In decision tables, when conditions are omitted, it is obvious even at a glance that logic
is missing. Compare this to traditional control structures, where it is not easy to notice

gaps in program logic with a mere glance; sometimes it is difficult to follow which
conditions correspond to which actions!
Just as decision tables make it easy to audit control logic, decision tables demand that a
programmer think of all possible conditions.
With traditional control structures, it is easy to forget about corner cases, especially
when the else statement is optional. Since logic is so important to programming,
decision tables are an excellent tool for designing control logic.
In one incredible anecdote, after a failed 6 man-year attempt to describe program logic
for a file maintenance system using flow charts, four people solved the problem using
decision tables in just four weeks.

Decision Tree :

In operations research, specifically in decision analysis, a decision tree is a decision
support tool that uses a graph or model of decisions and their possible consequences,
including chance event outcomes, resource costs, and utility.
A decision tree is a predictive model; that is, a mapping from observations about an
item to conclusions about its target value.
A decision tree is used to identify the strategy most likely to reach a goal. Another use
of trees is as a descriptive means for calculating conditional probabilities.
Decision Trees are useful when the sequence of conditions and actions is critical or not
every condition is relevant to every action.
More descriptive names for such tree models are classification tree (discrete outcome)
or regression tree (continuous outcome). In these tree structures, leaves represent
classifications and branches represent conjunctions of features that lead to those
classifications.

Example Of Decision Tree :


Consider the Book Seller example, where the conditions are as follows :
If the order is from a book store
And if the order is for 6 copies or more
Then discount is 25%
Else (if the order is for less than 6 copies)
No discount is allowed
Else (if the order is from libraries)
If the order is for 50 copies or more
Then discount is 15%
Else if the order is for 20 to 49 copies
Then discount is 10%
Else if the order is for 6 to 19 copies
Then discount is 5%
Else (the order is for less than 6 copies)
No discount is allowed.
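The same tree of conditions can be sketched as nested conditionals in code. The function and parameter names below are illustrative, not part of the original example:

```python
def discount(customer_type, copies):
    """Return the discount percentage per the book-seller decision tree."""
    if customer_type == "bookstore":
        return 25 if copies >= 6 else 0     # book store branch
    elif customer_type == "library":
        if copies >= 50:                    # library branch, by quantity
            return 15
        elif copies >= 20:
            return 10
        elif copies >= 6:
            return 5
        else:
            return 0
    return 0                                # all other cases: no discount
```

Reading the nested `if` chain from top to bottom corresponds to walking the tree from its root to a leaf.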


The Decision Tree for the above example would be as follows :

Book Order
|-- Book Store
|     |-- Order >= 6 copies ............ Discount 25%
|     |-- Order < 6 copies ............. No Discount
|-- Libraries
      |-- Order >= 50 copies ........... Discount 15%
      |-- Order 20 to 49 copies ........ Discount 10%
      |-- Order 6 to 19 copies ......... Discount 5%
      |-- Order < 6 copies ............. No Discount

Entity-Relationship Diagram (E-R Diagram) :

ERD complements DFD. While DFD focuses on processes and data flow between them,
ERD focuses on data and the relationships between them.
It helps to organise data used by a system in a disciplined way.
It helps to ensure completeness, adaptability and stability of data.
It is an effective tool to communicate with senior management (what data is needed
to run the business), data administrators (how to manage and control data), and
database designers (how to organise data efficiently and remove redundancies).

Components Of E-R Diagram :


1. Entity :
It represents a collection of objects or things in the real world whose individual
members or instances have the following characteristics:
Each can be identified uniquely in some fashion.
Each plays a necessary role in the system we are building.
Each can be described by one or more data elements (attributes).

Entities generally correspond to persons, objects, locations, events, etc. Examples are
employee, vendor, supplier, materials, warehouse, delivery, etc.

There are five types of entities.


Fundamental entity : It does not depend on any other entity for its existence,
e.g. materials.
Subordinate entity : It depends on another entity for its existence. For example,
in an inventory management system, purchase order can be an entity and it will


depend on materials being procured. Similarly invoices will depend on purchase


orders.
Associative entity : It depends on two or more entities for its existence. For
example, student grades will depend on the student and the course.
Generalisation entity : It encapsulates common characteristics of many
subordinate entities. For example, a four wheeler is a type of vehicle. A truck is a
type of four wheeler .
Aggregation entity : It consists of, or is an aggregation of, other entities. For
example, a car consists of an engine, chassis, gear box, etc. A vehicle can also be
regarded as an aggregation entity, because it can be seen as an aggregation of
many parts.

2. Attributes :
They express the properties of the entities.
Every entity will have many attributes, but only a subset, which are relevant for the
system under study, will be chosen.
For example, an employee entity will have professional attributes like name,
designation, salary, etc. and also physical attributes like height, weight, etc. But only
one set will be chosen depending on the context.
Attributes are classified as entity keys and entity descriptors.
Entity keys are used to uniquely identify instances of entities.
Attributes having unique values are called candidate keys and one of them is
designated as primary key. The domains of the attributes should be pre-defined. If
'name' is an attribute of an entity, then its domain is the set of strings of alphabets
of predefined length.
3. Relationships :
They describe the association between entities.
They are characterised by optionality and cardinality.
Optionality is of two types, namely, mandatory and optional.

Mandatory relationship means that, associated with every instance of the first
entity, there will be at least one instance of the second entity.
Optional relationship means that there may be instances of the first entity which
are not associated with any instance of the second entity. For example, the
employee-spouse relationship has to be optional because there could be unmarried
employees. It is not correct to make the relationship mandatory.

Cardinality is of three types: one-to-one, one-to-many, many-to-many.

One-to-one relationship means an instance of the first entity is associated with only
one instance of the second entity. Similarly, each instance of the second entity is
related to one instance of the first entity.
One-to-many relationship means that one instance of the first entity is related to
many instances of the second entity, while an instance of the second entity is
associated with only one instance of the first entity.
In many-to-many relationship an instance of the first entity is related to many
instances of the second entity and the same is true in the reverse direction also.
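Cardinality can be illustrated with a small sketch. A one-to-many relationship is commonly implemented by storing the key of the "one" side with each instance of the "many" side; the customer and order data below are purely illustrative:

```python
# One-to-many: each order carries the key of exactly one customer,
# while a customer may be referenced by many orders.
customers = {1: "Asha", 2: "Ravi"}      # customer_id -> name
orders = {101: 1, 102: 1, 103: 2}       # order_id -> customer_id (the "one" side)

def orders_of(customer_id):
    """Collect all orders that reference the given customer."""
    return [oid for oid, cid in orders.items() if cid == customer_id]
```

A many-to-many relationship, by contrast, needs a separate structure (a set of pairs) because neither side can hold a single key of the other.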


ERD notation : There are two types of notation used :

1. Peter Chen notation
2. Bachman notation
Not surprisingly, Peter Chen and Charles Bachman are the inventors of these notations.
The following table gives the notation.
(Table: each COMPONENT is shown with its PETER CHEN symbol and its BACHMAN symbol;
the graphical symbols are not reproduced here. The components are: ENTITY (SET) OR
OBJECT TYPE (e.g. PURCHASE ORDER), RELATIONSHIP, ATTRIBUTE, PRIMARY KEY ATTRIBUTE,
CARDINALITY, OPTIONALITY, WEAK ENTITY, STRONG-WEAK RELATIONSHIP, and MULTIVALUED
ATTRIBUTE.)

Example for Bachman notation :

Example for Peter Chen notation :


Primary Key :

Keys are used to distinguish occurrences of entities and relationships; the
distinction is made using the values of some attributes.
Superkey : a set of one or more attributes which, taken collectively, uniquely
identify an entity in an entity set. A superkey may contain extraneous attributes.
E.g., rollno is sufficient to identify students, so it is a primary key; the
combination (rollno, name) is a superkey; name itself may not be sufficient as a key.
Candidate key : a minimal superkey; no subset of it is a superkey. An entity may
have multiple candidate keys.
Primary key : a candidate key chosen by the designer as the principal means of
identification.

Primary Key For Relationship :

It is made up of the primary keys of all participating entities, e.g., the primary
key of STUDY is (rollno, courseno).

Weak Entity :

Weak entity does not have a primary key on its own.


They are related to one/more strong entities
They often can be visualized as multivalued attribute or group of attributes
They either have a partial key or we add one to distinguish between those which are
related to same strong entity
Examples:
Branches of a bank
Interviews between candidates and companies viewed as entities (not
relationships) so that they can participate further in relationships
E-R diagrams follow


The partial key (BrName in the example) is also called the discriminating attribute.

A weak entity can participate further in relationships with other entities.
A weak entity can also have weak entities dependent on it.
Primary key of a weak entity = primary key of its strong entity + discriminating
attribute of the weak entity within the context of the strong entity.
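That composite key can be sketched as a tuple of (strong-entity key, discriminating attribute). The bank and branch names below are invented for illustration only:

```python
# Weak entity: a branch is identified only within its bank, so its full
# primary key is (bank primary key, discriminating attribute BrName).
branches = {
    ("SBI",  "Andheri"): {"manager": "Mehta"},
    ("SBI",  "Dadar"):   {"manager": "Rao"},
    ("HDFC", "Andheri"): {"manager": "Shah"},   # same BrName, different bank
}
```

Note that "Andheri" alone is ambiguous; only the full tuple identifies one branch, which is exactly why the weak entity needs its strong entity's key.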

Generalization :

To generalize from two or more entity sets and factor out commonality

Entity E is a generalization of entities E1, E2, E3 if each instance of E is also an
instance of one and only one of E1, E2, etc.; E is called the superclass of E1, E2, etc.
It is represented by the IS-A relationship.

Example : given two entities Faculty and Non-faculty, we can define a general entity
called Employee

Common attributes are factored out to define the Employee entity; specific (non-common) attributes are incorporated in the Faculty and Non-faculty entities.


Another Example :

Specialization :

It is also called subset hierarchy


Entity E1 is subset of E if every instance of E1 is also an instance of E; this is also IS-A
relationship
E called superset and E1 as subset (or sub-class); E may have multiple and possibly
over-lapping subsets
Every instance in E need not be present in subsets of E.
Specialization allows classification of an entity in subsets based on some distinguishing
attribute/property
We may have several specializations of the same entity
The subsets may have additional attributes


Given below are a few examples of ER diagrams using Bachman notation. First the textual
statement is given followed by the diagram :
1. In a company, each division is managed by only one manager and each manager
manages only one division

2. Among the automobile manufacturing companies, a company manufactures many
cars, but a given car is manufactured in only one company

3. In a college, every student takes many courses and every course is taken by many
students

4. In a library, a member may borrow many books and there may be books which are not
borrowed by any member


5. A teacher teaches many students and a student is taught by many teachers. A
teacher conducts examinations for many students and a student is examined by many
teachers.

6. An extension of example-3 above is that student-grades depend upon both student and
the course. Hence it is an associative entity

7. An employee can play the role of a manager. In that sense, an employee reports to
another employee.


8. A tender is floated either for materials or services but not both.

9. A car consists of an engine and a chassis


6.Design
System design is aimed at ensuring, before construction or coding, that if a system is
constructed in a specific way, the user's information requirements will be met completely
and accurately. Several development activities are carried out during structured design:
database design, implementation planning, system test preparation, system interface
specification, and user documentation (see the figure below).
(Figure: the structured design activities are 1. Database Design, 2. Program Design,
3. System test requirements definition, 4. Program test requirements definition, and
5. System Interface Specification, together with Allocation of Functions, leading to
6. Design Specification and 7. Design Phase Walkthrough, and from there to
Implementation.)

1. Database Design :
This activity deals with the design of the physical database. A key decision is how
the access paths are to be implemented. A physical path is derived from a logical path;
it may be implemented by pointers, chains or other mechanisms.
2. Program Design :
In conjunction with database design comes a decision on the programming language to be
used and the flowcharting, coding and debugging procedures prior to conversion. The
operating system limits the programming languages that will run on the system.
When the system design is under way and programming begins, the plans and
test cases for implementation are soon required. This means there must be detailed
schedules for system testing and training of the user staff. Planned training allows time
for selling the candidate system to those who will deal with it on a regular basis.
Consequently, user resistance should be minimized.
3. System And Program Test Preparation :
Each aspect of the system has separate test requirements. System testing is done
after all programming and testing is completed. The test cases cover every aspect of
the candidate system: actual operations, user interface and so on. System and
program test requirements become a part of the design specifications, a prerequisite
to implementation.
In contrast to system testing is acceptance testing, which puts the system through a
procedure designed to convince the user that the candidate system will meet the stated
requirements. Acceptance testing is technically similar to system testing, but politically it
is different. In system testing, bugs are found and corrected with no one watching.

Acceptance testing is conducted in the presence of the user, audit representatives, or


the entire staff.
4. System Interface Specification :
This phase specifies for the user how information should enter and leave the system.
The designer offers the user various options. By the end of the design, formats have to
be agreed upon so that machine-machine and human-machine protocols are well
defined prior to implementation.
Before the system is ready for implementation, user documentation in the form
of a user's or operator's manual must be prepared. The manual provides instructions on
how to install and operate the system, how to display or print output, in what format,
and so on. Much of this documentation cannot be written until the operation
documentation is finalized, a task that usually follows design.

System Flowchart :

A system flowchart is a diagram that describes the overall flow of control between
computer programs in a system.

It is observed that programs and subsystems have complex interdependencies including


flow of data, flow of control, and interaction with data stores.

It is a diagrammatic representation that illustrates the sequence of operations to be


performed to get the solution of a problem.

It effectively indicates where input enters the system, how it is processed and
controlled, and how it leaves the system in the form of the desired output. Here,
emphasis is placed on input documents and output reports.

Only limited details are displayed, about the process that transforms the input to output.
For convenience of design, it is a good idea to segregate the inputs, processes, outputs,
and files involved in the system into a tabular form before proceeding with the
flowchart.

System flowcharts are generally drawn in the early stages of formulating computer
solutions. They facilitate communication between programmers and business people.

The system flowchart plays a vital role in the programming of a problem and is quite
helpful in understanding the logic of complicated and lengthy problems. Once the
flowchart is drawn, it becomes easy to write the program in any high-level language.

Often we see how flowcharts are helpful in explaining the program to others. Hence, it
is correct to say that a flowchart is a must for the better documentation of a complex
program.


Example Of System Flowchart :

Advantages Of System Flowchart :


1. Communication : Flowcharts are a better way of communicating the logic of a system
to all concerned.
2. Effective analysis : With the help of a flowchart, a problem can be analysed in a more
effective way.
3. Proper documentation : Program flowcharts serve as good program documentation,
which is needed for various purposes.
4. Efficient coding : The flowcharts act as a guide or blueprint during the systems analysis
and program development phase.
5. Proper debugging : The flowchart helps in the debugging process.
6. Efficient program maintenance : The maintenance of an operating program becomes
easy with the help of its flowchart. It helps the programmer to concentrate efforts on
the relevant part.


Limitations Of System Flowchart :


1. Complex logic : Sometimes the program logic is quite complicated. In that case, the
flowchart becomes complex and clumsy.
2. Alterations and modifications : If alterations are required, the flowchart may have to
be completely redrawn.
3. Reproduction : As the flowchart symbols cannot be typed, reproduction of a flowchart
becomes a problem.
4. Loss of details : The essentials of what is done can easily be lost in the technical details
of how it is done.
Structure Chart :
A structure chart is a hierarchical diagram showing the relationships between the
modules of a computer program.

It shows which modules within a system interact and also graphically depicts the data
that are communicated between various modules.

Structure charts are developed prior to the writing of program code.

They identify the data passed between individual modules that interact with one
another.

Structure Chart Notation :

(The graphical symbols are not reproduced here; the notation provides symbols for:)
- a module (the one being designed),
- a pre-defined module (e.g. a library),
- a relationship between modules,
- control-driven data (a control couple), and
- information-driven data (a data couple).


Fan-in : a relationship to more than one parent.

Fan-out : a relationship to more than one child.

Cross-over (possibly due to fan-in) : two valid drawing solutions exist (the figures are
not reproduced here).


Continuity (possibly due to structure size) : when a chart is too large for one page, a
module is repeated across pages with a page reference, e.g. "Module X (see p. n)" on
page 1 and "Module X (from p. 1)" on page n.

Iterative invocation (when one action is made up of more than one repeated smaller
ones) : a loop symbol on the arrow from Module X to Module Y indicates repetition,
annotated as follows :
- X invokes Y an undefined number of times (no annotation),
- X invokes Y a maximum of n times (annotated "n"),
- X invokes Y exactly n times (annotated "exactly n"),
- X invokes Y any number of times from n to m (annotated with the range n to m).

Code inclusion (considered as physical integration with logical separation) : Module Y
drawn within Module X indicates that Y is actually code inside module X.
Transaction centre (considered as a way of selecting one from many possible functions
at any one specific moment) : Module A connected through a transaction centre to
Modules B, C and D means that module A's function can be the function of any one of
modules B, C or D at a given moment.
Structure Chart Elements :
Module : Denotes a logical piece of the program.
Library : Denotes a logical piece of the program that is repeated in the structure chart.
Loop : Indicates that a module is repeated.
Conditional line : Subordinate modules are invoked by control modules based on some
condition.
Control couple : Communicates that a message or system flag is being passed from one
module to another.
Data couple : Communicates that data is being passed from one module to another.
Off page : Identifies that part of the diagram is continued on another page of the
structure chart.
On page : Identifies that part of the diagram is continued somewhere else on the
same page of the structure chart.
Example Of Structure Chart :


Payroll Processing

Two approaches exist for developing a structure chart :


(a) Transaction Analysis :

It makes use of a system flowchart to develop a structure chart.


Here, the system flowchart is examined to identify each major program.
These are usually the transactions supported by the system.
Thus, transaction analysis can be looked at as a process of identifying each
separate transaction that must be supported by the system, and then constructing a
branch for each one in the structure chart.
While every transaction will be a direct sub-module for the boss module, each
transaction will be the boss module for its sub-tree of processes for that transaction.
Each sub-tree may be developed using transform analysis.

(b) Transform Analysis :


It makes use of DFD fragments to develop the sub-module tree structures in a
system flowchart.

It is based on the idea that input is "transformed" into output by the system.
Three important concepts are involved:

Afferent data flow : It is the incoming data flow in a sequential set of processes.
Efferent data flow : It is the outgoing data flow from a sequential set of
processes.
Central transform : It transforms afferent data flow into efferent data flow.

The following steps are followed to develop a structure chart from a DFD fragment.
1. Identify input, processes, output from the DFD fragment.
2. Reorganize DFD fragment to arrange input (afferent data flow) to the
left, process (central transform) in the center, and output (efferent data flow) to
the right.


3. From the first two steps, identify the boss module (calling module) and branch
out the sub-modules out of the boss module (this is the boss module of each
transaction and not necessarily of the entire system).
4. Provide appropriate data flow lines, and show input and output data using data
couples.
5. Display condition clauses using control couples.

Module Coupling & Module Cohesion :

Concepts of module coupling and module cohesion are used to evaluate the quality of a
structure chart.
Module Coupling :
Module coupling is a measure of how a module is connected to other modules
in the program.
It is desirable to make modules as independent as possible, so as to allow them to be
executed in any environment.
Every module should have its own well-defined interface to accept inputs, and it should
be able to output data in the required form.
The module then need not know who invoked it.
Module coupling is achieved by passing data couples between modules.
Module Cohesion :
Module cohesion refers to the degree to which all the code within the module
contributes to implementing one well-defined task.
Modules with high cohesion tend to perform a single (or similar) task.
Modules with poor cohesion tend to perform loosely related tasks.
Modules with high cohesion tend to have low coupling, as they mostly act on the same
internal data.
Modules with poor cohesion tend to pass unrelated data between themselves to request
for, or provide services.
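The distinction can be sketched in code. In the illustrative payroll-style fragment below (the function names and the 10% tax rate are assumptions, not from the original text), each function performs one well-defined task (high cohesion) and the functions communicate only through plain data couples (low coupling):

```python
def gross_pay(hours, rate):
    """Cohesive: computes gross pay and nothing else."""
    return hours * rate

def tax(amount, tax_rate=0.1):
    """Cohesive: computes tax on an amount and nothing else."""
    return amount * tax_rate

def net_pay(hours, rate):
    """Boss module: couples subordinates via plain data, not shared state."""
    g = gross_pay(hours, rate)
    return g - tax(g)
```

Neither subordinate module knows who invoked it or relies on global flags, so each could be reused unchanged in another program, which is exactly the independence the text describes.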

HIPO( Hierarchy Input Process Output ) Chart :

1. It is a commonly used method for developing systems software


2. It is an acronym for Hierarchical Input Process Output, developed by IBM for its
large, complex operating systems.
3. Its greatest strength is the documentation of a system.
Purpose Of HIPO Chart :
1. Assumption on which HIPO is based : It is easy to lose track of the intended function
of a system or component in a large system.
2. User's view : Single functions can often extend across several modules.
3. Analyst's concern : Understanding, describing, and documenting the modules and their
interaction in a way that provides sufficient detail but does not lose sight of the
larger picture.
4. HIPO diagrams are graphic, rather than narrative, descriptions of the system. They
assist the analyst in answering three guiding questions:

i. What does the system or module do? (Asked when designing the system).
ii. How does it do it? (Asked when reviewing the code for testing or maintenance)
iii. What are the inputs and outputs? (Asked when reviewing the code for testing or
maintenance)
5. A HIPO description for a system consists of the visual table of contents & the functional
diagrams.
Visual Table Of Content :
1. The visual table of contents (VTOC) shows the relation between each of the documents
making up a HIPO package.
2. It consists of a Hierarchy chart that identifies the modules in a system by number and in
relation to each other and gives a brief description of each module.
3. The numbers in the contents section correspond to those in the organization section.
4. The modules are in increasing detail. Depending on the complexity of the system, three
to five levels of modules are typical.
Functional Diagrams :
1. For each box defined in the VTOC a diagram is drawn.
2. Each diagram shows input and output (right to left or top to bottom), major processes,
movement of data, and control points.
3. Traditional flowchart symbols represent media, such as magnetic tape, magnetic disk
and printed output.
4. A solid arrow shows control paths, and an open arrow identifies data flow.
5. Some functional diagrams contain other intermediate diagrams, but they also show
external data, as well as internally developed data and the step in the procedure where
the data are used.
6. A data dictionary description can be attached to further explain the data elements used
in a process.
7. HIPO diagrams are effective for documenting a system.
8. They aid designers and force them to think about how specifications will be met and
where activities and components must be linked together.
Disadvantages :
1. They rely on a set of specialized symbols that require explanation, an extra concern
when compared to the simplicity of, for example, a data flow diagram.
2. HIPO diagrams are not as easy to use for communication purpose as many people
would like.
3. They do not guarantee error-free systems.


Example Of HIPO Chart :

Warnier Orr Diagrams :


1. Warnier/Orr diagrams are also known as the logical construction of programs /
logical construction of systems.
2. Warnier/Orr diagram is a style of diagram which is extremely useful for describing
complex processes (e.g. computer programs, business processes, instructions) and
objects (e.g. data structures, documents, parts explosions).
3. They were initially developed in France by Jean-Dominique Warnier and in the United
States by Kenneth Orr.
4. This method aids the design of program structures by identifying the output and
processing results and then working backwards to determine the steps and
combinations of input needed to produce them.
5. The simple graphic methods used in Warnier/ Orr diagrams make the levels in the
system evident and the movement of the data between them vivid.


Basic Elements :

Bracket : A bracket encloses a level of decomposition in a diagram. It reveals what


something "consists of" at the next level of detail.

Sequence : The sequence of events is defined by the top-to-bottom order in a diagram.


That is, an event occurs after everything above it in a diagram, but before anything
below it.

OR : You represent choice in a diagram by placing an "OR" operator between the items
of a choice (the operator's graphical symbol is not reproduced here).

AND : You represent concurrency in a diagram by placing an "AND" operator between
the concurrent actions (again, the graphical symbol is not reproduced here).

Repetition : To show that an action repeats (loops), you simply put the number of
repetitions of the action in parentheses below the action.

Using Warnier/Orr Diagrams :


1. The ability to show the relation between processes and steps in a process is not unique
to Warnier/Orr diagrams, nor is the use of iteration, alternation, or the treatment of
individual cases. However, the approach used to develop system definitions with
Warnier/Orr diagrams is different and fits well with those used in logical system design.
2. To develop a Warnier/Orr diagram, the analyst works backwards, starting with systems
output and using an output oriented analysis.
3. On paper, the development moves from left to right. First the intended output or results
of the processing are defined. At the next level, shown by inclusion with a bracket, the
steps needed to produce the output are defined.
4. Each step in turn is further defined. Additional brackets group the Processes required to
produce the result on the next level.
5. A completed Warnier/Orr diagram includes both process groupings & data requirements.
Data elements are listed for each process or process component.
These data elements are the ones needed to determine which alternative or case
should be handled by the system & to carry out the process.
The analyst must determine where each data element originates, how it is used, and
how individual elements are combined.
6. When the definition is completed , a data structure for each process is documented. It,
in turn, is used by the programmers, who work from the diagrams to code the software.
Example Of Warnier/Orr Diagram :
The diagram below illustrates the use of these constructs to describe a simple process.


You could read the above diagram like this :


"Welcoming a guest to your home (from 1 to many times) consists of greeting the guest
and taking the guest's coat at the same time, then showing the guest in. Greeting a guest
consists of saying "Good morning" if it's morning, or saying "Good afternoon" if it's
afternoon, or saying "Good evening" if it's evening. Taking the guest's coat consists of
helping the guest remove their coat, then hanging the coat up."
Advantages of Warnier/ Orr diagrams :
1. They are simple in appearance and easy to understand. Yet they are powerful design
tools.
2. They have the advantage of showing groupings of processes and the data that must be
passed from level to level.
3. The sequence of working backwards ensures that the system will be result-oriented.
4. This method is useful for both data and process definition. It can be used for each
independently , or both can be combined on the same diagram.
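To make the constructs concrete, the "welcoming a guest" example above can be sketched as code. This is an illustration only, not part of the Warnier/Orr notation: each bracket becomes a function body (a sequence of steps), alternation becomes an if/elif chain, and iteration becomes a loop.

```python
# Illustrative sketch: the "welcoming a guest" Warnier/Orr diagram
# expressed as nested functions. Brackets become function bodies,
# alternation becomes if/elif, and iteration becomes a loop.

def greet_guest(time_of_day):
    # Alternation: exactly one of the mutually exclusive cases applies.
    if time_of_day == "morning":
        return "Good morning"
    elif time_of_day == "afternoon":
        return "Good afternoon"
    else:
        return "Good evening"

def take_coat():
    # Sequence: steps listed top-to-bottom inside one bracket.
    return ["help guest remove coat", "hang coat up"]

def welcome_guests(guests, time_of_day):
    # Iteration: the whole process repeats once per guest (1 to many times).
    actions = []
    for guest in guests:
        actions.append(greet_guest(time_of_day))
        actions.extend(take_coat())
        actions.append("show guest in")
    return actions
```

Reading the code top-down mirrors reading the diagram left-to-right: the outer function groups the inner processes exactly as the outer bracket groups the inner brackets.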

Designing Databases :
Database :
It is an integrated collection of stored data that is centrally managed and controlled.
It consists of two related stores of information :
(1) Physical data store and (2) The schema.
Physical data store is the storage area used by a DBMS to store the raw bits and bytes
of a database.

Schema is the description of the structure, content, and access controls of a physical
data store or database.
It contains additional information about the data stored in the physical data store:
(a) Access and content controls (authorization, allowable values)
(b) Relationships among data elements (pointer indicating customer of a particular
order)
(c) Details of physical data store organization (types, lengths, indexing, sorting)

DBMS (Database Management System) :


It is system software that manages and controls access to a database.
It has four key components
(a) API (b) query interface (c) administrative interface (d) data access
programs/subroutines.
Working of a DBMS
(a) Application programs, users, administrators tell the DBMS what data they need (for
reading/writing) using names defined in the schema.
(b) DBMS accesses the schema to verify that the requested data exists, and
that the requesting user has appropriate access privileges.
(c) If request is valid, then the DBMS extracts information about the physical
organization of the requested data from the schema and uses that info to access the
physical data store on behalf of the requesting program or user.

A DBMS provides the following data access and management capabilities:


(a) Allow simultaneous access by many users/application programs.
(b) Allow access to data without writing application programs, i.e. through queries.
(c) Managing all data of an information system as an integrated whole, through
uniform, consistent access, content controls.
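The request flow described above can be sketched in miniature. All names here are invented for illustration, and a plain dict stands in for the physical data store; the point is only the order of checks: schema first, physical store last.

```python
# Illustrative-only sketch of the DBMS request flow described above:
# the schema is consulted first (existence + access privileges), and
# only then is the physical data store touched. Names are invented.

class TinyDBMS:
    def __init__(self, schema, data_store):
        self.schema = schema          # {table: {"columns": [...], "readers": {...}}}
        self.data_store = data_store  # {table: [row, ...]} stands in for raw storage

    def read(self, user, table):
        # Step 1: verify the requested data exists in the schema.
        if table not in self.schema:
            raise KeyError("no such table: " + table)
        # Step 2: verify the requesting user has access privileges.
        if user not in self.schema[table]["readers"]:
            raise PermissionError("access denied for " + user)
        # Step 3: use schema info to access the physical data store
        # on behalf of the requesting program or user.
        return self.data_store[table]

dbms = TinyDBMS(
    schema={"BOOK": {"columns": ["title", "isbn"], "readers": {"librarian"}}},
    data_store={"BOOK": [("DB Systems", "0-13-x")]},
)
```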

Entity :
An Entity is a real-world object distinguishable from other objects. An entity is described
(in DB) using a set of attributes.
Examples : a book, an item, a student, a purchase order.
Entity Set
An Entity Set is a collection of similar entities.
E.g., All employees, Set of all books in a library.
Attribute :
An entity has a set of attributes
Attribute defines property of an entity
It is given a name
Attribute has value for each entity
Value may change over time
Same set of attributes are defined for entities in an entity set

Example :
Entity set BOOK has the following attributes
TITLE
ISBN


ACC-NO
AUTHOR
PUBLISHER
YEAR
PRICE

A particular book has value for each of the above attributes.


An attribute may be multi-valued, i.e., it has more than one value for a given entity;
e.g., a book may have many authors
An attribute which uniquely identifies entities of a set is called primary key attribute of
that entity set
Composite attribute : date, address, etc.
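A minimal sketch of one BOOK entity with the attributes listed above (the values and the helper function are invented for illustration): AUTHOR is shown as a multi-valued attribute, and ISBN plays the role of the primary-key attribute.

```python
# One entity from the BOOK entity set, represented as a dict whose
# keys are the attribute names listed above. Values are invented.

book = {
    "TITLE": "An Introduction to Database Systems",
    "ISBN": "0-321-19784-4",   # primary-key attribute: uniquely identifies the entity
    "ACC_NO": "A-1001",
    "AUTHOR": ["C.J. Date"],   # multi-valued attribute: a book may have many authors
    "PUBLISHER": "Pearson",
    "YEAR": 2003,
    "PRICE": 450.00,
}

def primary_key(entity):
    # Hypothetical helper: return the value of the identifying attribute.
    return entity["ISBN"]
```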

Relationships :
It represents association among entities
E.g. : (1) A particular book is a text for a particular course.
The book Database Systems by C.J. Date is the text for the course identified by code
CS644.
(2) Student GANESH has enrolled for course CS644.
Relationship set :
It is a set of relationships of the same type.
The words relationship and relationship set are often used interchangeably.
Relationships may involve two or more entity sets :
Binary relationship : between two entities sets.
E.g. : Binary relationship set STUDY between STUDENT and COURSE.
Ternary relationship : among three entity sets.
E.g. : relationship STUDY could be ternary among STUDENT, COURSE and
TEACHER.

A relationship may have attributes.


E.g. : Attribute GRADE and SEMESTER for STUDY.


Normalization :
Normalization is the process of taking data from a problem and reducing it to a set of
relations while ensuring data integrity and eliminating data redundancy.
Data integrity - all of the data in the database are consistent, and satisfy all integrity
constraints.
Data redundancy - if data in the database can be found in two different locations (direct
redundancy) or if data can be calculated from other data items (indirect redundancy),
then the data is said to contain redundancy.
Example Of Normalization :
The following data is used to illustrate the process of normalization. A company obtains parts from
a number of suppliers. Each supplier is located in one city. A city can have more than one
supplier located there and each city has a status code associated with it. Each supplier may
provide many parts. The company creates a simple relational table to store this information
that can be expressed in relational notation as :
FIRST (s#, status, city, p#, qty) where

s#     - supplier identification number (this is the primary key)
status - status code assigned to city
city   - name of city where supplier is located
p#     - part number of part supplied
qty    - quantity of parts supplied to date

In order to uniquely associate quantity supplied (qty) with part (p#) and supplier (s#), a
composite primary key composed of s# and p# is used.
First Normal Form :
A relational table, by definition, is in first normal form. All values of the columns are atomic;
that is, they contain no repeating values. Figure 1 shows the table FIRST in 1NF.
Figure 1: Table in 1NF


Although the table FIRST is in 1NF it contains redundant data. For example, information
about the supplier's location and the location's status have to be repeated for every part
supplied. Redundancy causes what are called update anomalies. Update anomalies are
problems that arise when information is inserted, deleted, or updated. For example, the
following anomalies could occur in FIRST:

INSERT. The fact that a certain supplier (s5) is located in a particular city (Athens)
cannot be added until that supplier supplies a part.
DELETE. If a row is deleted, then not only is the information about quantity and part
lost but also information about the supplier.
UPDATE. If supplier s1 moved from London to New York, then six rows would have
to be updated with this new information.

Second Normal Form


The definition of second normal form states that only tables with composite primary
keys can be in 1NF but not in 2NF.
A relational table is in second normal form (2NF) if it is in 1NF and every non-key column is
fully dependent upon the primary key.
That is, every non-key column must be dependent upon the entire primary key. FIRST is in
1NF but not in 2NF because status and city are functionally dependent only upon the
column s# of the composite key (s#, p#). This can be illustrated by listing the functional
dependencies in the table:
s# -> city, status
city -> status
(s#, p#) -> qty
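These functional dependencies can be checked mechanically against sample data: X -> Y holds if no two rows agree on the X columns but differ on some Y column. A minimal sketch in Python (the sample rows are invented to match the FIRST table's shape):

```python
# Check whether the functional dependency X -> Y holds in a table
# (a list of dicts). X -> Y holds iff no two rows agree on all X
# columns while differing on some Y column.

def fd_holds(rows, x_cols, y_cols):
    seen = {}
    for row in rows:
        x = tuple(row[c] for c in x_cols)
        y = tuple(row[c] for c in y_cols)
        if x in seen and seen[x] != y:
            return False  # two rows agree on X but differ on Y
        seen[x] = y
    return True

# Invented sample rows in the shape of the FIRST relation.
FIRST = [
    {"s#": "s1", "status": 20, "city": "London", "p#": "p1", "qty": 300},
    {"s#": "s1", "status": 20, "city": "London", "p#": "p2", "qty": 200},
    {"s#": "s2", "status": 10, "city": "Paris",  "p#": "p1", "qty": 100},
    {"s#": "s3", "status": 10, "city": "Paris",  "p#": "p2", "qty": 400},
]
```

Running `fd_holds(FIRST, ["s#"], ["city", "status"])` confirms the partial dependency that blocks 2NF, while `["city"] -> ["s#"]` fails because two suppliers share Paris.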
The process for transforming a 1NF table to 2NF is:
1. Identify any determinants other than the composite key, and the columns they
determine.
2. Create and name a new table for each determinant and the unique columns it
determines.
3. Move the determined columns from the original table to the new table. The
determinant becomes the primary key of the new table.
4. Delete the columns you just moved from the original table except for the
determinant, which will serve as a foreign key.
5. The original table may be renamed to maintain semantic meaning.


To transform FIRST into 2NF we move the columns s#, status, and city to a new table
called SECOND. The column s# becomes the primary key of this new table. The results
are shown below in Figure 2.
Figure 2: Tables in 2NF
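The split into 2NF can be sketched as a small transformation over rows (column and table names follow the text; the sample data is invented):

```python
# Sketch of the 1NF -> 2NF split described above: columns that depend
# only on s# (status, city) move to a new SECOND table keyed by s#,
# while qty stays keyed by the full composite key (s#, p#).

def split_to_2nf(first_rows):
    second = {}   # keyed by the determinant s#, so duplicates collapse
    parts = []
    for row in first_rows:
        second[row["s#"]] = {"s#": row["s#"], "status": row["status"],
                             "city": row["city"]}
        parts.append({"s#": row["s#"], "p#": row["p#"], "qty": row["qty"]})
    return list(second.values()), parts

FIRST_SAMPLE = [
    {"s#": "s1", "status": 20, "city": "London", "p#": "p1", "qty": 300},
    {"s#": "s1", "status": 20, "city": "London", "p#": "p2", "qty": 200},
]
```

Note how the supplier's city and status, repeated in every FIRST row, appear only once in SECOND after the split.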

Tables in 2NF but not in 3NF still contain modification anomalies. In the example of
SECOND, they are
INSERT. The fact that a particular city has a certain status (Rome has a status of 50)
cannot be inserted until there is a supplier in the city.
DELETE. Deleting any row in SUPPLIER destroys the status information about the city as
well as the association between supplier and city.
Third Normal Form
The third normal form requires that all columns in a relational table are dependent only
upon the primary key. A more formal definition is: A relational table is in third normal
form (3NF) if it is already in 2NF and every non-key column is non transitively
dependent upon its primary key. In other words, all nonkey attributes are functionally
dependent only upon the primary key.
Table PARTS is already in 3NF. The non-key column, qty, is fully dependent upon the
primary key (s#, p#). SUPPLIER is in 2NF but not in 3NF because it contains a transitive
dependency. A transitive dependency occurs when a non-key column that is determined
by the primary key is the determinant of other columns. The concept of a
transitive dependency can be illustrated by showing the functional dependencies in
SUPPLIER:


SUPPLIER.s# -> SUPPLIER.status
SUPPLIER.s# -> SUPPLIER.city
SUPPLIER.city -> SUPPLIER.status
Note that SUPPLIER.status is determined both by the primary key s# and the non-key
column city. The process of transforming a table into 3NF is:
1. Identify any determinants, other than the primary key, and the columns they determine.
2. Create and name a new table for each determinant and the unique columns it
determines.
3. Move the determined columns from the original table to the new table. The
determinant becomes the primary key of the new table.
4. Delete the columns you just moved from the original table except for the
determinant, which will serve as a foreign key.
5. The original table may be renamed to maintain semantic meaning.
To transform SUPPLIER into 3NF, we create a new table called CITY_STATUS and move the
columns city and status into it. Status is deleted from the original table, city is left behind to
serve as a foreign key to CITY_STATUS, and the original table is renamed to
SUPPLIER_CITY to reflect its semantic meaning. The results are shown in Figure 3 below.
Figure 3: Tables in 3NF

The result of putting the original table into 3NF is three tables. These can be
represented in "pseudo-SQL" as:
PARTS (s#, p#, qty)
Primary Key (s#, p#)
Foreign Key (s#) references SUPPLIER_CITY.s#


SUPPLIER_CITY(s#, city)
Primary Key (s#)
Foreign Key (city) references CITY_STATUS.city
CITY_STATUS (city, status)
Primary Key (city)
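The pseudo-SQL above can be realized directly; for example, with Python's built-in sqlite3 module. The column types are assumptions (the text does not state them); the keys follow the pseudo-SQL, and the non-standard column names s# and p# are double-quoted so SQLite accepts them.

```python
import sqlite3

# The three 3NF tables from Figure 3, realized in an in-memory SQLite
# database. Column types are assumed; keys match the pseudo-SQL above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CITY_STATUS (
    city   TEXT PRIMARY KEY,
    status INTEGER
);
CREATE TABLE SUPPLIER_CITY (
    "s#" TEXT PRIMARY KEY,
    city TEXT REFERENCES CITY_STATUS(city)
);
CREATE TABLE PARTS (
    "s#" TEXT REFERENCES SUPPLIER_CITY("s#"),
    "p#" TEXT,
    qty  INTEGER,
    PRIMARY KEY ("s#", "p#")
);
""")
# The INSERT anomaly is gone: a city's status can be stored even
# though no supplier is located there yet.
conn.execute("INSERT INTO CITY_STATUS VALUES ('Rome', 50)")
```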
Advantages of Third Normal Form :
The advantage of having relational tables in 3NF is that it eliminates redundant data which
in turn saves space and reduces manipulation anomalies.
For example, the improvements to our sample database are :
INSERT. A fact about the status of a city (Rome has a status of 50) can be added even
though there is no supplier in that city. Likewise, facts about new suppliers can be added
even though they have not yet supplied parts.
DELETE. Information about parts supplied can be deleted without destroying information
about a supplier or a city.
UPDATE. Changing the location of a supplier or the status of a city requires modifying
only one row.

Database Design :
1. The first database design step in structured systems analysis converts the ER analysis
model to logical record types and specifies how these records are to be accessed.
2. These access requirements are later used to choose keys that facilitate data access.
3. Quantitative data such as item sizes, numbers of records and access frequency are
often also added at this step.
Quantitative data is needed to compute the storage requirements and transaction
volumes to be supported by the computer system.
4. The combination of logical record structure access specifications and quantitative data is
sometimes known as the system level database specification. This specification is used
at the implementation level to choose a record structure supported by a DBMS.
5. The simplest conversion is to make each entity set of the ER diagram into a record type.
6. Object models must be converted to logical record structures when a logical analysis
model is implemented by a conventional DBMS. The simplest conversion here is for each
object class to become a logical record, with each class attribute converted to a field.
Where attributes are structured, they themselves become a separate record.
7. Different methodologies give their logical records different names like blocks, schema,
and modules.
@@@@@@@@@@@@

7.Designing Input, Output And User Interface


Input Design :
1. Systems analysts decide the following input design details:
i. What data to input
ii. What medium to use
iii. How the data should be arranged or coded
iv. The dialogue to guide users in providing input
v. Data items and transactions needing validation to detect errors.
vi. Methods for performing input validation and steps to follow when errors occur.
2. The design decisions for handling input specify how data are accepted for computer
processing.
3. Analysts decide whether the data are entered directly, perhaps through a workstation,
or by using source documents, such as sales slips, bank checks, or invoices where the
data in turn are transferred into the computer for processing.
4. The design of input also includes specifying the means by which end- users and system
operators direct the system in which actions to take.
e.g. a system user interacting through a workstation must be able to tell the system
whether to accept input, produce a report, or end processing.
5. Online systems include a dialogue or conversation between the user and the system.
Through the dialogue users request system services and tell the system when to
perform a certain function.
6. The nature of online conversations often makes the difference between a successful and
an unacceptable design.
7. An improper design that leaves the display screen blank will confuse a user about what
action to take next.
8. The arrangement of messages and comments in online conversations, as well as the
placement of data, headings and titles on display screens or source documents, is also
part of input design.
9. Sketches of each are generally prepared to communicate the arrangement to users for
their review, and to programmers and other members of the systems design team.
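The input-validation decisions listed in point 1 can be sketched as code. The field names and rules below are invented for illustration; the point is the pattern of checking each item and collecting errors so the dialogue can guide the user.

```python
# Hypothetical input-validation sketch for one data-entry transaction.
# Each field gets a check; errors are collected rather than aborting,
# so the dialogue can report all problems to the user at once.

def validate_order(record):
    errors = []
    # Required field must be present and non-blank.
    if not record.get("customer_id", "").strip():
        errors.append("customer_id is required")
    # Numeric field must be a positive integer.
    qty = record.get("qty")
    if not isinstance(qty, int) or qty <= 0:
        errors.append("qty must be a positive integer")
    # Coded field must come from the allowed set.
    if record.get("medium") not in {"workstation", "source document"}:
        errors.append("unknown input medium")
    return errors
```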
Output Design :
1. Output generally refers to the results and information that are generated by the system.
2. For many end-users, output is the main reason for developing the system and the basis
on which they will evaluate the usefulness of the application.

3. Most end-users will not actually operate the information system or enter data through
workstations, but they will use the output from the system.
4. When designing output, systems analysts must accomplish the following:
i. Determine what information to present.
ii. Decide whether to display, print, or speak the information and select the output
medium.
iii. Arrange the presentation of information in an acceptable format.
iv. Decide how to distribute the output to intended recipients.
5. The arrangement of information on a display or printed document is termed a layout.
6. Accomplishing the general activities listed above will require specific decisions, such as
whether to use preprinted forms when preparing reports and documents, how many
lines to plan on a printed page, or whether to use graphics & colour.
7. The output design is specified on layout forms, sheets that describe the location,
characteristics (such as length & type), and format of the column headings & pagination.
These elements are analogous to an architect's blueprint that shows the location of
each component.
Design Review :
1. Design reviews focus on design specifications for meeting previously identified systems
requirements.
2. The information supplied about the design prior to the session can be communicated
using HIPO charts, structured flowcharts, Warnier/ Orr diagrams, screen designs, or
document layouts.
3. Thus, the logical design of the system is communicated to the participants so they can
prepare for the review.
4. The purpose of this type of walkthrough is to determine whether the proposed design
will meet the requirements effectively and efficiently.
5. If the participants find discrepancies between the design and requirements, they will
point them out and discuss them.
6. It is not the purpose of the walkthrough to redesign portions of the system. That
responsibility remains with the analyst assigned to the project.
User Interface Design :
1. There are two aspects to interface design.
i. To choose the transactions in the business process to be supported by interfaces.
This defines the broad interface requirements in terms of what information is input
and output through the interface during the transaction.


ii. The design of the actual screen presentation, including its layout and the
sequence of screens that may be needed to process the transaction.
2. Choosing the Transaction Modules
1. Defining the transactions that must be supported through interfaces is part of the
system specification.
2. Each interface object defines one interface module which will interact with the user
in some way. Each such interaction results in one transaction with the system.
3. Defining the presentation
1. Each interaction includes both the presentation and dialog.
2. Presentation describes the layout of information.
Dialog describes the sequence of interactions between the user and the computer.
4. Evaluation of Interfaces.
1. User-friendly - the interface should be helpful, tolerant and adaptable, and the user
should be happy and confident to use it.
2. Friendly interactions result in better interfaces, which not only make users more
productive but also make their work easier and more pleasant. The terms
effectiveness and efficiency are also often used to describe interfaces.
3. An interface is effective when it results in a user finding the best solution to a
problem, and it is efficient when it results in this solution being found in the shortest
time with the least error.
5. Workspace
1. The computer interface is part of a user's workspace.
2. A workspace defines all the information that is needed for the user's work as well as
the layout of this information.
6. Robustness
1. Robustness is an important feature of interface.
2. This means that the interface should not fail because of some action taken by the
user, nor should a user error lead to a system breakdown.
3. This in turn requires checks that prevent users from making incorrect entries.
7. Usability
1. Usability is a term that defines how easy it is to use an interface.
2. The things that can be measured to describe usability are usability metrics.
3. Metrics cover objective factors as well as subjective factors. These are:
Analytical metrics, which can be directly described - for example, whether all the
information needed by a user appears on the screen.
Performance metrics, which include things like the time used to perform a task,
system robustness, or how easy it is to make the system fail.
Cognitive workload metrics, or the mental effort required of the user to use the
system. This covers aspects such as how closely the interface approximates the
user's mental model or reactions to the system.
User satisfaction metrics, which include such things as how helpful the system is
or how easy it is to learn.

Interactive Interfaces :
1. The ideal interactive interface is the one where the user can interact with the computer
using natural language.
2. The user types in a sentence on the input device (or perhaps speaks into a speech
recognition device) and the computer analyzes this sentence and responds to it.
3. The form of dialog and presentation depends on the kind of system supported. There
are different kinds of interaction. E.g.:
Dialogs in transactions processing that allow the input of one transaction that
describes an event or action, such as a new appointment, or a deposit in an account.
Designing an artifact such as a document or a report or the screen layout itself.
Making a decision about a course of action such as what route to take to make a set
of deliveries; and
Communication and coordination with other group members.
Note :

i. Interactive transaction dialog is usually an interchange of messages between the


user and the computer in a relatively short space of time.
ii. The dialog concerns one fact and centers around the attributes related to that
fact.
iii. Different presentation methods are used in on-line user dialog for entering
transaction data. The most common methods are menus, commands or templates.

Menus :
1. A menu system presents the user with a set of actions and requires the user to select
one of those actions.
2. It can be defined as a set of alternative selections presented to a user in a window.
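A menu interaction of this kind can be sketched as follows. The option labels are invented; the input list stands in for successive user keystrokes so the sketch is testable without a terminal.

```python
# Minimal menu-dialog sketch: present a set of alternative selections
# and require the user to pick exactly one, re-prompting on invalid input.

def run_menu(options, inputs):
    """Return the action for the first valid selection.

    'inputs' stands in for successive user keystrokes; a real dialog
    would display each "key) label" line in a window and read from
    the keyboard instead.
    """
    for choice in inputs:
        if choice in options:
            return options[choice]
        # Invalid selection: the menu would simply be redisplayed here.
    return None  # user gave up without a valid selection

# Invented menu mirroring the actions mentioned in the input-design section.
MENU = {"1": "accept input", "2": "produce report", "3": "end processing"}
```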
Commands and prompts :
1. In this case the computer asks the user for specific inputs.
2. On getting the input, the computer may respond with some information or ask the user
for more information.
3. This process continues until all the data has been entered into the computer or
retrieved by the user.
Templates :
1. Templates are equivalent to forms on a computer.
2. A form is presented on the screen and the user is requested to fill in the form.
3. Usually several labeled fields are provided and the user enters data into the blank
spaces.
4. Fields in the template can be highlighted or blink to attract the user's attention.
5. The advantage templates have over menus or commands is that the data is entered
with fewer screens.
@@@@@@@@@@@@

8.Testing
Software Testing :
Testing is the process of exercising a program with the specific intent of finding errors prior
to delivery to the end user.
Software testing is a process used to identify the correctness, completeness and quality of
developed computer software.
Actually, testing can never establish the correctness of computer software, as this can only
be done by formal verification (and only when there is no mistake in the formal verification
process). It can only find defects, not prove that there are none.
Testing Objectives :

Testing is the process of executing a program with the intent of finding errors
A good test case is one that has a high probability of finding an as-yet discovered
errors.
Find as many defects as possible.
Find important problems fast.
A successful test is one that uncovers an as-yet undiscovered error.
Our objective is to design tests that systematically uncover different classes of errors
and do so with a minimum amount of time and effort.

Testing cannot show the absence of defects, it can only show that SW errors are
present.
It is not unusual for a SW development organization to expend between 30 and 40
percent of total project effort on testing.
Testing is a destructive process rather than constructive.


Testing Principles :

All tests should be traceable to customer requirements.


Tests should be planned long before testing begins. (with requirement model)
The Pareto principle applies to SW testing: 80 % of all errors uncovered during testing
will likely be traceable to 20 % of all program modules.
Testing should begin in the small and progress toward testing in the large : modules
-> clusters -> entire system.
Exhaustive testing is not possible (there are a huge number of combinations).
Testing should be conducted by an independent third party.

S/W Testing Strategy / A Strategic approach to SW Testing :

A strategy for SW testing integrates SW test case design methods into a well-planned
series of steps that result in the successful construction of SW.
These approaches and philosophies are what we shall call strategy.
A SW team should conduct effective formal technical reviews. By doing this many errors
will be eliminated before testing commences.
Client Needs -> Acceptance Testing
Requirements -> System Testing
Design -> Integration Testing
Coding -> Unit Testing

We begin by testing-in-the-small and move toward testing-in-the-large.


Different testing techniques are appropriate at different point in time.
Testing is conducted by the developer and an independent test group(ITG).
Note that testing occurs at a time near the project deadline. Testing progress must be
measurable and problems must surface as early as possible.
Testing and debugging are different activities, but debugging must be accommodated in
any testing strategy.

Who Test The S/w ?


Developer : The one who understands the system but will test gently and is driven
by delivery.
Independent Tester : The one who must learn about the system, but will attempt to
break it and is driven by quality.
A good test :
Has a high probability of finding an error

Is not redundant - testing time and resources are limited
Should be "best of breed" - the test that has the highest likelihood of uncovering a
whole class of errors should be used.
Should be neither too simple nor too complex

Verification and Validation :

Software testing is one element of a broader topic that is often referred to as


verification and validation.
Verification refers to the set of activities that ensure that software correctly
implements a specific function.
Validation refers to a different set of activities that ensure that the software that has
been built is traceable to customer requirements.
The definition of verification and validation encompasses many of the activities that we
refer to as software quality assurance:

Software engineering activities provide the foundation from which quality is built.
Analysis, design and coding methods act to enhance the quality by providing uniform
techniques and predictable results.
Formal technical reviews help to ensure the quality of the products.
Throughout the process measures and controls are applied to every element of a
software configuration. These help to maintain uniformity.
Testing is the last phase in which quality can be assessed and errors can be
uncovered.

Unit Testing :

Unit Testing is a dynamic method for verification, where the program is actually
compiled and executed.
It is one of the most widely used methods, and the coding phase is sometimes called
the Coding and unit testing phase.
As in other forms of testing, unit testing involves executing the code with some test
cases and then evaluating the results.
The goal of unit testing is to test individual modules or single units, not the entire
software system. The two types of unit testing used to test single units are as follows :
Black-box Testing
White-box Testing
Unit testing is most often done by the programmer himself.
The programmer, after finishing the coding of a module, tests it with test data. The
tested module is then delivered for integration testing and further testing.
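A unit test in this style can be sketched with Python's unittest module. The module under test here (a payroll calculation) is invented for illustration; the pattern is what matters: the programmer exercises one unit with normal, boundary, and invalid test data.

```python
import unittest

# Module under test (invented for illustration): a single unit that
# computes gross pay, with overtime paid at 1.5x beyond 40 hours.
def gross_pay(hours, rate):
    if hours < 0 or rate < 0:
        raise ValueError("hours and rate must be non-negative")
    overtime = max(0, hours - 40)
    return (hours - overtime) * rate + overtime * rate * 1.5

class TestGrossPay(unittest.TestCase):
    def test_no_overtime(self):
        # Normal case: exactly the boundary of 40 hours.
        self.assertEqual(gross_pay(40, 10), 400)

    def test_overtime(self):
        # 40 * 10 + 5 * 15 = 475
        self.assertEqual(gross_pay(45, 10), 475)

    def test_invalid_input(self):
        # Invalid test data must be rejected, not silently computed.
        with self.assertRaises(ValueError):
            gross_pay(-1, 10)
```

Once these cases pass, the tested module would be delivered for integration testing, as described above.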


Unit Test Environment :
In the unit test environment, the module under test is surrounded by a driver and stubs :
the driver supplies test-case input and reports results, while stubs stand in for the
modules that the unit under test calls.

Black-Box Testing :

1. Black box testing, also called behavioral testing, focuses on the functional requirements
of the software.
2. Black-box testing enables the software engineer to derive sets of input conditions that
will fully exercise all functional requirements for a program.
3. Black-box testing attempts to find errors in the following categories:
i. Incorrect or missing functions.
ii. Interface errors
iii. Errors in data structures or external database access.
iv. Behavior or performance errors, and
v. Initialization and termination errors.

4. Unlike white-box testing, which is performed early in the testing process, black-box
testing tends to be applied during later stages of testing. Because black-box testing
purposely disregards control structure, attention is focused on the information domain.
5. Tests are designed to answer the following questions:
1. How is functional validity tested?
2. How is system behavior & performance tested?
3. What classes of input will make good test cases?
4. Is the system particularly sensitive to certain input values?
5. How are the boundaries of a data class isolated?
6. What data rates and data volume can the system tolerate?
7. What effect will specific combinations of data have on system operation?
Advantages Of Black-Box Testing :
By applying black-box techniques, a set of test cases can be derived that satisfy the
following criteria:
1. Test cases that reduce, by a count that is greater than one, the number of additional
test cases that must be designed to achieve reasonable testing.
2. Test cases that tell us something about the presence or absence of classes of errors,
rather than an error associated only with the specific test at hand.
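Such classes of test cases are usually derived from equivalence classes of input and their boundaries. A sketch against a hypothetical specification ("accept an integer percentage from 0 to 100"); both the function and the spec are invented for illustration:

```python
# Black-box case selection for a hypothetical spec: "accept an integer
# percentage in the range 0..100". Cases are chosen per equivalence
# class and boundary, with no knowledge of the implementation.

def accept_percentage(value):
    return isinstance(value, int) and 0 <= value <= 100

# One representative per input class plus the boundary values:
# each entry maps an input to the behaviour the spec demands.
test_cases = {
    -1: False,     # below-range class (boundary)
    0: True,       # lower boundary of the valid class
    50: True,      # interior of the valid class
    100: True,     # upper boundary of the valid class
    101: False,    # above-range class (boundary)
    "50": False,   # wrong-type class
}
```

Six cases stand in for the whole input domain: each tells us about the presence or absence of a class of errors, not just about one specific value.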

WHITE-BOX TESTING :
1. White-box testing, sometimes called glass-box testing is a test case design method that
uses the control structure of the procedural design to derive test cases.
2. Using white-box testing methods the software engineer can derive test cases that

i. Guarantee that all independent paths within a module have been exercised at least
once.
ii. Exercise all logical decisions on their true and false sides.
iii. Execute all loops at their boundaries and within their operational bounds and
iv. Exercise internal data structures to ensure their validity.
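These criteria can be illustrated on a small function with two decisions and a loop. The function is invented; the test inputs are chosen from its control structure so that every decision is driven both true and false and the loop runs zero times and more than once.

```python
# White-box sketch: test cases derived from the control structure,
# not from the specification.

def classify(values):
    total = 0
    for v in values:            # loop: exercised with 0 and with many items
        if v < 0:               # decision 1: both true and false sides below
            total -= v          # negatives contribute their magnitude
        else:
            total += v
    if total > 100:             # decision 2: both true and false sides below
        return "large"
    return "small"

# Each case is chosen to exercise a distinct path through the code.
path_tests = [
    ([], "small"),              # loop body never entered; decision 2 false
    ([-5, 5], "small"),         # decision 1 true then false; decision 2 false
    ([60, 60], "large"),        # loop iterated twice; decision 2 true
]
```

Together the three cases exercise every branch on both sides; a black-box view of the same function would have no reason to include the empty-list case.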

Reasons/Advantages of conducting white-box tests:


1. Logic errors and incorrect assumptions are inversely proportional to the probability that
a program path will be executed.
- Errors tend to creep into one's work when one designs and implements functions,
conditions, or controls that are out of the mainstream.
- Everyday processing tends to be well understood (and well scrutinized), while
special-case processing tends to fall into the cracks.
2. We often believe that a logical path is not likely to be executed when, in fact, it may be
executed on a regular basis. The logical flow of a program is sometimes counterintuitive,
meaning that one's unconscious assumptions about flow of control and data may lead
us to make design errors that are uncovered only once path testing commences.
3. Typographical errors are random. When a program is translated into programming
language source code, it is likely that some typing errors will occur. Many will be
uncovered by syntax and type checking mechanisms, but others may go undetected
until testing begins.

Integration Testing

The focus of the integration testing is on examining the connections or links between the
program and their modules.
In particular, the integration test examines the following :
1. A proper program/module is being called by the proper program/module as desired.
2. The call to a program/module has compatible input and output parameters as desired.
Compatibility here refers to the following examinations of the calling and called
parameters :
The data type matching.
The number of parameters.
The order of parameters.
Other specific validations, such as specific value or range of value matching.
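This compatibility examination can itself be sketched as a check of an actual call against the called module's declared parameter list. The declaration and module name below are invented for illustration:

```python
# Sketch of the call-compatibility examination: the called module's
# declared parameters (name, type, order) are compared with the
# actual arguments a calling module passes.

def check_call(declared, actual_args):
    """Return a list of mismatches between a declaration and a call.

    declared: list of (name, type) pairs in the expected order.
    actual_args: the values the calling module passes, in order.
    """
    problems = []
    if len(actual_args) != len(declared):           # number of parameters
        problems.append("parameter count mismatch")
    for (name, typ), value in zip(declared, actual_args):
        if not isinstance(value, typ):              # data type matching
            problems.append("type mismatch for " + name)
    return problems

# Invented declaration for a hypothetical called module.
POST_PAYMENT = [("account_no", str), ("amount", float)]
```

A compatible call returns an empty list; a call with the wrong count or order surfaces exactly the mismatches the integration test is looking for.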

3. Run-time exceptions, such as no data at input, no data to output, resources not
available, resource conflicts, etc.
4. The input data received from other software systems - called an external input interface
- is processed correctly by this s/w system, and all the outputs using that data match
exactly with what is desired. The same examination is carried out for every data input
from an external input interface.
5. The output data generated by this s/w system that is used as input by another s/w
system - called an external output interface - is processed correctly by the next-in-line
s/w system, and all the outputs using that data match exactly with what is desired. The
same examination is carried out for every data output to an external output interface.
This integration testing not only examines the internal links, but also links with external s/w
systems.
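As an illustration, the parameter-compatibility checks in points 1 and 2 can be automated. The sketch below is hypothetical (the `post_payment` module and its signature are invented for the example) and uses Python's `inspect` module to verify the number, order, and types of the calling parameters:

```python
import inspect

def check_call_compatibility(func, args):
    """Check that a call to `func` with positional `args` is compatible:
    the right number of parameters, and types matching any annotations."""
    params = list(inspect.signature(func).parameters.values())
    if len(args) != len(params):
        return False, f"expected {len(params)} parameters, got {len(args)}"
    for arg, param in zip(args, params):
        # Order/type matching: compare each argument against the annotation
        # of the parameter in that position, if one is present.
        if param.annotation is not inspect.Parameter.empty:
            if not isinstance(arg, param.annotation):
                return False, f"parameter '{param.name}' expects {param.annotation.__name__}"
    return True, "ok"

# A called module's entry point with annotated parameters (hypothetical).
def post_payment(account_no: str, amount: float) -> None:
    pass

ok, msg = check_call_compatibility(post_payment, ["AC-1001", 250.0])
bad, why = check_call_compatibility(post_payment, [250.0, "AC-1001"])  # wrong order
```

Swapping the two arguments is caught because the first parameter position no longer receives the expected type.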
For s/w systems of large size, following suggestions can be given to carry out the
integration testing :
1. It is suggested to break the integration testing effort into smaller manageable units of
the system.
2. Identify and prioritize the important or critical (external) input data interface in the
system.
3. Arrange these data flows in the order of priority.
4. Using a data flow as a focus point, identify the set of processes that can be grouped in
a convenient manner to make each group manageable.
5. Carry out the entire integration testing in these units.
No matter how and how much we simplify the integration testing, the challenges in the
Integration testing are many.
Some of them are listed as follows :
1. The mismatch analysis can lead to conclusions that are very difficult to anticipate.
2. Sometimes a good understanding of the complete system flow chart, sketched at a
gross level, is useful to identify the approximate group of likely sources of defects. The
program developer(s) need to be consulted to identify exactly which programs need to
change.
3. Sometimes, identifying the cause of an error may lead to surprisingly different or
remotely connected sources.
4. To remove an error effectively, an integration test may require correcting program lines
in several programs. Different program developers are involved, and the contribution of
their sources to the resulting error may be seen by any one or some of them. But still
the root cause has to be removed.
5. The possible errors listed above are only the tip of the iceberg of the total possible
errors. Therefore, some errors may be entirely unanticipated.


6. Since the total code size is large and the number of inputs and expected outputs is large,
the time taken to prepare for every single integration test, its execution, the analysis of
test results, fixing, etc. forms a much longer cycle. Also, how many such cycles are
required for complete testing is difficult to predict ahead of time. Therefore, some team
members may lose focus as the integration test prolongs.
7. For testing of external interfaces, the project manager is dependent on the co-operation
of the users of the neighboring application systems. This involves co-ordination effort,
which may be time consuming.
Unit Testing in the OO Context
The concept of the unit broadens due to encapsulation.
Testing a single operation in isolation, as in the conventional view of unit testing, does NOT
work.
The context of the class should be considered.

Comparison
Unit testing of conventional s/w focuses on the algorithmic detail of a module and the data
that flow across the module interface.
Unit testing of OO s/w is driven by the operations encapsulated by the class and the
state behavior of the class.

Integration Testing in the OO Context

Begins by evaluating the correctness and consistency of the OOA and OOD models.
The testing strategy changes :
Integration focuses on classes and their execution across a thread or in the context
of a usage scenario.
Validation uses conventional black box methods.
Test case design draws on conventional methods, but also encompasses special
features.

OOT Strategy :
Class testing is the equivalent of unit testing.
Operations within the class are tested.
The state behavior of the class is examined.
Integration applies three different strategies :

Thread-based Testing : integrates the set of classes required to respond to one input or event.
Use-based Testing : integrates the set of classes required to respond to one use case.
Cluster Testing : integrates the set of classes required to demonstrate one collaboration.
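A minimal sketch of class testing (the OO equivalent of unit testing), using a hypothetical `Account` class: the operations are exercised within the context of the class as a whole, and its state behavior (open to closed) is examined:

```python
import unittest

class Account:
    """Hypothetical class under test: state moves between 'open' and 'closed'."""
    def __init__(self):
        self.balance = 0
        self.state = "open"

    def deposit(self, amount):
        if self.state != "open":
            raise ValueError("account closed")
        self.balance += amount

    def close(self):
        self.state = "closed"

class AccountClassTest(unittest.TestCase):
    """Class testing: operations are tested in the context of the whole
    class, and the state behavior (open -> closed) is examined."""
    def test_deposit_updates_balance(self):
        acc = Account()
        acc.deposit(100)
        self.assertEqual(acc.balance, 100)

    def test_closed_account_rejects_deposit(self):
        acc = Account()
        acc.close()
        with self.assertRaises(ValueError):
            acc.deposit(50)

suite = unittest.TestLoader().loadTestsFromTestCase(AccountClassTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note that the second test only makes sense with the class's state in view; the `deposit` operation alone cannot be judged correct in isolation.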


Validation testing/ Acceptance Testing

Software validation is achieved through a series of black box tests that demonstrate
conformity with requirements. A test plan outlines the classes of tests to be conducted and
a test procedure defines specific test cases that will be used to demonstrate conformity
with requirements.
After each validation test case has been conducted, one of two possible conditions exists:
i) The function or performance characteristics conform to the specifications and are
accepted.
ii) A deviation from specifications is discovered and a deficiency list is created.
An important element of validation testing is configuration review. The intent of the review
is to ensure that all elements of the software configuration have been properly developed
and are well documented. The configuration review is sometimes called an audit.
If software is developed for the use of many customers, it is impractical to perform formal
acceptance tests with each one. Many software product builders use a process called alpha
and beta testing to uncover errors that only the end-user is able to find.
The alpha test is conducted at the developer's site by the customer. The software is used in
a natural setting with the developer recording errors and usage problems. Alpha tests are
performed in a controlled environment.
Beta tests are conducted at one or more customer sites by the end-users of the software.
Unlike alpha testing, the developer is generally not present. The beta test is thus a live
application of the software in an environment that cannot be controlled by the developer.
The customer records all the errors and reports these to the developer at regular intervals.
As a result of problems recorded or reported during these tests, the developers make
modifications and prepare for the release of the software product to the entire customer base.

Alpha Testing & Beta Testing :

1. System validation checks the quality of the software in both simulated and live
environments.
2.
i. First the software goes through a phase, often referred to as alpha testing, in which
errors and failures based on simulated user requirements are verified and studied.
ii. The alpha test is conducted at the developer's site by a customer.
iii. The software is used in a natural setting with the developers recording errors &
usage problems.
iv. Alpha tests are conducted in a controlled environment.


3.

i. The modified software is then subjected to phase two, called beta testing, at the
actual user's site, i.e. a live environment.
ii. The system is used regularly with live transactions. After a scheduled time, failures
and errors are documented, and final corrections and enhancements are made before
the package is released for use.
iii. Unlike alpha testing, the developer is generally not present.
Therefore, the beta test is a live application of the software in an environment that
cannot be controlled by the developer.
iv. The customer records all problems (real or imagined) that are encountered during
beta testing and reports these to the developer at regular intervals.
v. As a result of problems reported during beta tests, software engineers make
modifications and then prepare for release of the software product to the entire
customer base.

System testing
System testing is actually a series of tests whose purpose is to fully exercise the computer-based
system. Although each test has a different purpose, all work to verify that system
elements have been properly integrated and perform allocated functions.
Some types of common system tests are :
i) Recovery testing :
Many computer-based systems must recover from faults and resume processing within a
pre-specified time. In some cases, a system must be fault-tolerant, i.e. processing faults
must not cause overall system function to cease. In other cases, a system failure must be
corrected within a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and
verifies that recovery is properly performed. If recovery is automatic, re-initialization,
check-pointing mechanisms, data recovery and restart are evaluated for correctness. If
recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to
determine whether it is within acceptable limits.
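Recovery testing can be sketched as follows; the `CheckpointedJob` class is invented for illustration. The test forces a failure mid-run and then verifies that a restart from the checkpoint completes processing with no loss or duplication:

```python
class CheckpointedJob:
    """Hypothetical batch job that checkpoints after each item so a
    restart can resume instead of reprocessing everything."""
    def __init__(self, items):
        self.items = items
        self.checkpoint = 0          # index of the next item to process
        self.processed = []

    def run(self, fail_at=None):
        while self.checkpoint < len(self.items):
            i = self.checkpoint
            if i == fail_at:
                raise RuntimeError("simulated fault")  # forced failure
            self.processed.append(self.items[i])
            self.checkpoint = i + 1  # checkpoint only after success

# Recovery test: force the software to fail, then verify that restart
# resumes from the checkpoint and completes correctly.
job = CheckpointedJob(["a", "b", "c", "d"])
try:
    job.run(fail_at=2)
except RuntimeError:
    pass                             # fault occurred as intended
job.run()                            # restart after the fault
```

Evaluating the check-pointing mechanism for correctness, as the text describes, amounts to asserting that nothing was lost or processed twice across the restart.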

ii) Security testing :


Any computer-based system that manages sensitive information or causes actions that can
harm individuals is a target for improper or illegal penetration. Security testing attempts to
verify that protection mechanisms built into a system will, in fact, protect it from improper
penetration. During security testing, the tester plays the role of the hacker who desires to
penetrate the system.
Given enough time and resources, good security testing will ultimately penetrate a system.
The role of the system designer is to make penetration cost more than the value of the
information that will be obtained.
iii) Stress testing :
During earlier testing steps, white box and black box techniques result in a thorough
evaluation of normal program functions and performance. Stress tests are designed to
confront programs with abnormal situations. Stress testing executes a system in a manner
that demands resources in abnormal quantity, frequency or volume. Essentially, the tester
attempts to break the program.
A variation of stress testing is a technique called sensitivity testing. In some situations, a
very small range of data contained within the bounds of valid data for a program may
cause extreme and even erroneous processing or performance degradation. Sensitivity
testing attempts to uncover data combinations within valid input classes that may cause
instability or improper processing.
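A hedged sketch of both ideas, using an invented `moving_average` function as the program under test: the stress test demands input in abnormal volume, while the sensitivity test probes data near the bounds of the valid input class:

```python
def moving_average(values, window):
    """Hypothetical function under test."""
    if window <= 0 or window > len(values):
        raise ValueError("bad window")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Stress test: demand resources in abnormal volume and check the
# program still behaves (a large input instead of a typical one).
big = list(range(100_000))
out = moving_average(big, 50)

# Sensitivity test: probe values near the bounds of the valid input
# class, where a small range of data may cause improper processing.
edge_ok = moving_average([1, 2, 3], 3)       # window == len(values)
try:
    moving_average([1, 2, 3], 4)             # just outside the bound
    edge_rejected = False
except ValueError:
    edge_rejected = True
```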
iv) Performance testing :
Software that performs the required functions but does not conform to performance
requirements is unacceptable. Performance testing is designed to test the run-time
performance of software within the context of an integrated system. Performance testing
occurs through all the steps in the testing process.

Debugging :
Debugging occurs as a consequence of successful testing. That is, when a test case
uncovers an error, debugging is the process that results in the removal of the error.
Debugging is a skillful process. A software engineer, evaluating the results of a test,
is often confronted with the indication of a software problem. The external manifestation
of the error and the internal cause of the error may have no obvious relation to one
another. The process of connecting the symptom to the cause is a part of debugging.
The debugging process will always have one of two outcomes :
i) The cause will be found and corrected.
ii) The cause will not be found.
In the latter case, the person performing debugging may suspect a cause, design a test
case to help validate that suspicion, and work toward error correction in an iterative
fashion.
In general, three categories of debugging approaches may be suggested:


i) Brute force :
This is the most common and least efficient method of debugging. Memory dumps are
taken, run-time traces are invoked, and write statements are loaded in the hope that the
mass of information produced will provide the required solution. Although the
information yielded may result in success, it involves a lot of wasted time and effort.
ii) Backtracking :
This method can be used successfully in small programs. Beginning at the site where the
symptom has been uncovered, the source code is traced backward, manually, till the site of
the cause is found. Unfortunately, as the number of lines of code increases, the number
of potential backward paths may become unmanageably large.
iii) Cause elimination :
This approach introduces the concept of binary partitioning. Data related to the error
occurrence are organized to isolate potential causes. A cause hypothesis is devised, and the
data is used to prove or disprove the hypothesis. Alternatively, a list of all possible causes is
developed and tests are conducted to eliminate each.
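The binary-partitioning idea behind cause elimination can be sketched as follows; the failing-record scenario is hypothetical. Each iteration halves the suspect input set, keeping the half that still reproduces the failure, until a single culprit remains:

```python
def find_failing_input(inputs, passes):
    """Binary partitioning: repeatedly halve the input set, keeping the
    half whose run fails, until a single culprit remains.
    Assumes exactly one input triggers the failure."""
    while len(inputs) > 1:
        mid = len(inputs) // 2
        left, right = inputs[:mid], inputs[mid:]
        inputs = left if not passes(left) else right
    return inputs[0]

# Hypothetical system under test: fails whenever record 37 is in the batch.
def batch_passes(batch):
    return 37 not in batch

culprit = find_failing_input(list(range(100)), batch_passes)
```

The same halving discipline underlies each hypothesis/test cycle: every run of the partitioned data either proves or disproves the hypothesis that the cause lies in that half.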


@@@@@@@@@@@@


9.Implementation And Maintenance


What is Systems Implementation?
Systems implementation is the construction of the new system and the delivery of that
system into production (that is, the day-to-day business or organization operation).
The systems implementation process consists of the construction and delivery phases of the
life cycle.
The steps involved in implementing and evaluating the system are as follows :
Plan conversion from the old system to the new one
Train users
Purchase and install new equipment
Convert files
Install system
Review and evaluate system: whether the intended users are indeed using the system
The personnel involved in implementing and evaluating the system are as follows :
Analyst
System designer
Programmers
User managers
Operations workers
Systems managers
1. The Construction Phase of Systems Implementation :
The construction phase does two things : builds and tests a functional system that fulfills
business or organizational design requirements, and implements the interface between the
new system and the existing production system.
The project team must construct the database, application programs, user and system
interfaces, and networks. Some of these elements may already exist in your project or be
subject to enhancement.
The activities involved in the construction phase are as follows :
i.] Build and Test the Networks.
ii.] Build and Test Databases.
iii.] Install and Test New Software Packages (if necessary).
iv.] Write and Test New Programs.
i.] Build and Test the Networks (if necessary)

Purpose : To build and test new networks and modify existing networks for use by the
new system.
Roles : This activity will normally be completed by the same system specialists who
designed the network(s).

System owners and users :- not usually involved.

System analyst :- the system analyst's role is more in terms of a facilitator, ensuring
that business requirements are not compromised by the network solution.

Network designer (project/site specific role) :- the network designer is a specialist in
the design of local and wide area networks and in addressing connectivity issues.

System builders :- the network administrator is the person who has the expertise
for building and testing network technology for the new system. S/he will also be
familiar with the network architecture standards that must be adhered to for any
possible new networking technology.

Prerequisites (Inputs) : This activity is triggered by the approval from the system
owners to continue the project into systems design. The key input is the network design
requirements defined during systems design.
Deliverables (Outputs) : The principal deliverable of this activity is an installed network
that is placed into operation. Network details will be recorded in the project repository for
future reference.
[Having introduced the roles, inputs and outputs, now focus on the implementation steps.]
Applicable Techniques : Developing networks is an important skill for systems
analysts.
Steps:

Review the network design requirements outlined in the technical design statement
developed during systems design.
Make any appropriate modifications to existing networks and/or develop new networks.
Review network specifications for future reference.

ii.] Build and Test Databases

This task must immediately precede the other programming activities because databases are
the resources shared by the computer programs to be written.

Purpose : The purpose of this activity is to build and test new databases and modify
existing databases for use by the new system.
Roles : This activity will typically be completed by the same system specialist who
designed the database.

System owner and system users :- not usually involved.


System analyst (optional) :- depends on the organization
System designer :- usually also becomes the builder for this activity.
System builder : - yup, this person does the work.
Database administrator :- when the database is part of a corporate database,
there's usually a database administrator who will be involved.


Prerequisites (Inputs) : The primary input to this activity is the database design
requirements specified in the technical design statement during systems design. Sample
data from production databases is often loaded into tables for testing the database.
Deliverables (Outputs) : The end product of this activity is an unpopulated (empty)
database structure for the new database.
[This is the function most people associate with systems analysis - they don't see all the
other work involved.]
There are several application techniques used in building and testing databases.
1) Sampling : sampling methods are used to obtain representative data for testing
database tables.
2) Data Modeling : requires a good understanding of data modeling - we focus on this in
part 2 (after the midterm).
3) Database design.
To complete this phase there are 6 steps:
1) Review the technical design statement for database design requirements (know what
you're up to).
2) Locate production databases that may contain representative data for testing
database tables. Otherwise, generate test data for the database tables. [Get data that
will really test the robustness of your design. Don't just pick easy cases.]
3) Build/modify the database according to the design specifications.
4) Load tables with sample data.
5) Test database tables and relationships by adding, modifying, deleting, and retrieving
records. All possible relationship paths and data integrity checks should be tested.
6) Review the database schema and record it for future reference.
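Steps 3 to 5 can be sketched with SQLite (the `dept`/`emp` schema is invented for the example): build the tables, load sample data, then test a relationship path and a data integrity check:

```python
import sqlite3

# Step 3: build the database structure per the design specifications.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE emp (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    dept_id INTEGER NOT NULL REFERENCES dept(dept_id))""")

# Step 4: load tables with representative sample data.
conn.execute("INSERT INTO dept VALUES (1, 'Payroll')")
conn.execute("INSERT INTO emp VALUES (10, 'A. Rao', 1)")

# Step 5: test retrieval through the relationship path...
row = conn.execute("""SELECT e.name, d.name FROM emp e
                      JOIN dept d ON e.dept_id = d.dept_id""").fetchone()

# ...and test that the integrity check rejects an orphan record.
try:
    conn.execute("INSERT INTO emp VALUES (11, 'B. Sen', 99)")  # no such dept
    integrity_enforced = False
except sqlite3.IntegrityError:
    integrity_enforced = True
```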

iii.] Install and Test New Software Packages (if necessary)

Purpose :- To install and test any new software packages and make them available in the
organization's software library.
Roles : This is the first activity in the life cycle that is specific to the applications
programmer.

System owner and system users :- not usually involved


Systems analyst (optional) :- may participate in the testing of the software and
clarifying requirements.
Systems designers :- may be involved in integration requirements and program
documentation
System builder :- the applications programmer. The programmer (or team thereof) is
responsible for the installation and testing of new software packages.
Network administrator :- the network administrator may be involved in installing and testing
on the network server (actually, it is a sure bet that the network administrator will
be involved).
Prerequisites (Inputs) : The main inputs to this activity are the new software packages and
documentation received from the system vendors. The applications programmer will complete
the installation and testing of the package according to the integration requirements and
program documentation that were developed during system design.
Deliverables (Outputs) : The principal deliverable of this activity is the installed and
tested software package(s) made available in the software library. Any modified
software specifications and new integration requirements that were necessary are
documented and made available in the project repository to provide a history and serve as
future reference.
Applicable Techniques : Well, there really isn't much to this - it depends on the
programming experience and knowledge of the tester. Essentially it is just good
housekeeping : install, test, and maintain good documentation for others to follow.
iv.] Write and Test New Programs
Purpose : The purpose of this activity is to write and test all programs to be developed
in-house.
Roles : This activity is specific to the applications programmer.
System owner and system users :- not involved

System analyst :- optional

System designer :- optional; may be involved in clarifying the programming plan,
integration requirements, and program documentation (developed during systems
design) that are used in writing and testing the programs.

System builder :- the person responsible for this activity: the applications
programmer or programming team - they write and test the in-house software.

Note that there is often an objective, or specially trained, person to test the application
(hence the name, the application tester).
Prerequisites (Inputs): The primary input to this activity is the technical design
statement, plan for programming, and test data that was developed during the systems
design phase. Since new programs or program components may have already been written
and in use by other existing systems, the experienced applications programmer will know to
first check for possible reusable software components available in the software library.
Some information systems shops have a quality assurance group staffed by specialists who
review the final program documentation for conformity to standards. This group will
provide appropriate feedback and quality recommendations.
Deliverables (Outputs) : The outputs are, of course, the new programs and reusable
software components that are placed in a software library. You should also have created
program documentation, which may need to be approved by quality assurance people and
serves as a record of the project.
Applicable Techniques : If the modules are coded top-down, they should be tested and
debugged top-down as they're written. There are three levels of testing : stub, unit (or
program) and systems testing.
Stub testing is the test performed on individual modules, whether they are in main
programs or are subroutines.
Unit or program testing is a test whereby all the modules that have been coded and
stub tested are tested as an integrated unit.
Systems testing ensures that the application programs, written in
isolation, work properly when integrated into a whole system.
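Stub testing can be sketched as follows; the payroll module and its stubs are hypothetical. The top-level module is tested while its not-yet-written subordinate modules are replaced by stubs returning fixed values:

```python
def compute_net_pay(emp_id, fetch_hours, fetch_rate):
    """Top-level module under test; its subordinate modules are passed
    in so they can be replaced by stubs during stub testing."""
    hours = fetch_hours(emp_id)
    rate = fetch_rate(emp_id)
    gross = hours * rate
    return gross - 0.1 * gross     # flat 10% deduction (illustrative)

# Stubs stand in for subordinate modules that are not yet written.
def stub_hours(emp_id):
    return 40

def stub_rate(emp_id):
    return 20.0

net = compute_net_pay("E1", stub_hours, stub_rate)
```

Once the real subordinate modules are coded and stub tested, the same driver is re-run against them, which is the unit (program) testing step above.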
2. The Delivery Phase of Systems Implementation :
This final part of the implementation phase of the SDLC delivers the new system
into operation.
To achieve this, you must complete the following :
Conduct a system test to make sure that the new system works
Prepare a conversion plan to smooth the transition to the new system
Install databases used by the new system
Provide training and documentation for individuals using the new system
Convert from the old system to the new system and evaluate the project and final
system.
The activities involved in the delivery phase are as follows :
i.] Conduct System Test.
ii.] Prepare Conversion Plan.
iii.] Install Database.
iv.] Train System Users.
i.] Conduct System Test

Purpose : The purpose is to test all software packages, custom-built ones, and other
existing programs to make sure they work together and work correctly.
Roles : The systems analyst usually manages this.

System owners and users :- not involved


System analyst :- facilitates by working with project team members in solving
problems
System designer :- test integration requirements and resolve design problems
System builders :- all sorts may be involved - applications programmers, database
programmers, network specialists, etc.


Prerequisites (Inputs) : You need the software packages, the in-house (custom-built)
programs and any existing programs in the new system.
Deliverables (Outputs) : Any modifications discovered during testing.
Continue until the test is successful. You, or others, will have tested the system with some
form of data (the system test data). [I like to use a readily identifiable record that I can
track through all phases of the system; that's why you see Lars Ingersol in databases used
in this and other courses.]
ii.] Prepare Conversion Plan
This activity is not usually performed by the systems analyst - it is usually planned by
upper managers, a steering committee, or some other person. Although in
your work as the analyst/designer you will have to include the conversion plan in your
planning (time and resource projections, Gantt charts, etc.), the specifics are defined by
others. So we will skip the details of this activity.
However, as part of your planning, you'll need to consider the following:
Getting training materials ready
Establish a schedule for installing databases
Identify a training program (or in-house trainers) and schedule for the system users
Develop a detailed installation strategy to follow
Develop a systems acceptance test plan.
There are several common conversion strategies :

Abrupt cutover : on a specific date (usually coinciding with some business date,
like the start of a new financial year or a school year), the old system goes off-line
and the new one is placed into operation.

Parallel conversion : both old and new systems are used for a period of time;
this is done to ensure that all major problems in the new system have been solved
before abandoning the old system.

Location conversion : when the same system will be used in multiple locations,
usually one location is selected to start with (to see where the problems are in
conversion) and then the conversion is performed at all the other sites.

Staged conversion : each successive version of the new system is converted as it
is developed.

Anticipate some problems with each strategy. For example, an abrupt cutover will be
successful only if the computer program is absolutely perfect - which will require lots of
testing beforehand and will likely require training before users actually go live. The
parallel conversion is a lot of work for everyone - the workers must use both systems,
essentially doing their job twice. There's lots of opportunity for problems.
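The parallel conversion strategy implies comparing the two systems' outputs on the same live transactions. A minimal sketch, with invented old/new tax calculations standing in for the two systems:

```python
def old_tax(amount):
    """Old system's calculation (hypothetical)."""
    return round(amount * 0.10, 2)

def new_tax(amount):
    """New system's calculation, expected to match the old one."""
    return round(amount * 0.1, 2)

def parallel_run(transactions):
    """Feed the same live transactions to both systems and report any
    mismatches; an empty report builds confidence before cutover."""
    mismatches = []
    for txn in transactions:
        old_out, new_out = old_tax(txn), new_tax(txn)
        if old_out != new_out:
            mismatches.append((txn, old_out, new_out))
    return mismatches

report = parallel_run([100.0, 250.5, 999.99])
```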


iii.] Install Databases

Purpose : To populate the new system databases with existing data from the old system.
Roles : Usually only the system builders - application programmers and data entry
personnel.
Prerequisites (Inputs) : Existing data from the old system, coupled with database
schemas and database structures for the new database.
Deliverables (Outputs) : the restructured database, populated with data from the old system.
Applicable Techniques : You may need to "massage" the data, such as writing programs
to convert the old data into the new data formats.
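Such a data "massage" program can be sketched as follows; the old and new record layouts are invented for the example (a combined name field split apart, and a two-digit-year date expanded):

```python
def convert_record(old):
    """Massage one old-system record into the new format: split the
    combined name field and reformat the date (hypothetical layouts)."""
    last, first = [p.strip() for p in old["name"].split(",")]
    d, m, y = old["joined"].split("/")          # old format: DD/MM/YY
    return {
        "first_name": first,
        "last_name": last,
        # Expand the two-digit year (simple pivot rule for illustration).
        "joined": f"19{y}-{m}-{d}" if int(y) > 50 else f"20{y}-{m}-{d}",
    }

old_rows = [{"name": "Ingersol, Lars", "joined": "07/03/98"}]
new_rows = [convert_record(r) for r in old_rows]
```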
iv.] Train System Users
Purpose : provide training and documentation to system users to prepare them for a
smooth transition to the new system.
Roles :

System owners :- must support this activity: be willing to approve release time for
training.
System users :- the system is designed for them, so train 'em.
System analyst :- from the system documentation, the system analyst may write the
end-user documentation (manuals).
System designers and builders :- not usually involved.

Inputs : You'll need the system documentation (remember that repository?).

Outputs : You'll write the user training material and documentation. This includes the
technical manual, too. Remember : write the manual as if you had to use it, too! The users
are likely the business experts - you're not; you're likely the technical expert - the users are not.
The training manual should address every possible situation. Here's a sample that is only a
demo - clearly it does not cover all the issues that should be in a manual:

Finally, the users must be trained. This may be done in-house (by the analyst or others) or
by hiring an outside training company.

Maintenance :
Not all jobs run successfully. Sometimes an unexpected boundary condition or an overload
causes an error. Sometimes the output fails to pass controls. Sometimes program bugs may
appear. No matter what the problem, a previously working system that ceases to
function requires emergency maintenance. Isolating operational problems is not
always an easy task, particularly when combinations of circumstances are responsible.
The ease with which a problem can be corrected is directly related to how well a system
has been designed and documented. Changes in the environment may lead to maintenance
requirements. For example, new reports may need to be generated, competitors may alter
market conditions, a new manager may have a different style of decision-making,
organization policies may change, etc. The information system should be able to accommodate
changing needs. The design should be flexible to allow new features to be added with ease.
Although software does not wear out like hardware, the integrity of the program, test
data and documentation degenerates as a result of modifications. Hence, the system will
need maintenance. Maintenance covers a wide range of activities such as correcting code
and design errors, updating documentation and upgrading user support.

Maintenance is necessary to eliminate errors in the system during its working life and to
tune the system to any variations in its working environment.

It has been seen that there are always some errors in the system that must be
noted and corrected. Maintenance also means reviewing the system from time to time.
The review of the system is done for :

knowing the full capabilities of the system


knowing the required changes or the additional requirements
studying the performance.

System maintenance is one of the largest parts of the system development effort.

If a major change to a system is needed, a new project may have to be set up to carry
out the change. The new project will then proceed through all the above life cycle
phases.

The Maintenance process include the following steps :


Obtain maintenance request
Transform requests into changes
Design changes
Implement changes

Types Of Maintenance :
Software maintenance can be classified into four types:
1] Corrective Maintenance :
It means repairing processing or performance failures, or making changes
because of previously uncorrected problems or false assumptions.
It involves changing the software to correct defects.
For example:
Debugging and correcting errors or failures, and emergency fixes that are
required when newly developed software is installed for the first time.
Fixing errors due to incomplete specifications, which may result in
erroneous assumptions, such as assuming an employee code is 5 numeric
digits instead of 5 characters.

2] Adaptive Maintenance :
Over time the environment for which the software was developed is likely to
change.
Adaptive maintenance results in modifications to the software to accommodate
changes in the external environment.
For example:

Report formats may have been changed.
New hardware may have been installed (changing from a 16-bit to a 32-bit environment).

3] Perfective Maintenance (Enhancement) :
This implies changing the performance or modifying the program to improve or
enhance the system.
It extends the software beyond its original functional requirements.
This type of maintenance involves more time and money than both
corrective and adaptive maintenance.
For example:

Automatic generation of dates and invoice numbers.

Reports with graphical analysis such as pie charts, bar charts, etc.

Providing on-line help system.


4] Preventive Maintenance :
Preventive maintenance is conducted to enable the software to serve the needs
of the end-user.
It is done to prevent any more maintenance to the application keeping future
results in focus.
Changes are made to the software so that it can be corrected, adapted and
enhanced more easily.
For example:
Application rebuilding from one platform to another.
Changing from a single-user to a multi-user environment.
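The corrective-maintenance example above (an employee code wrongly assumed to be 5 numeric digits) can be sketched as a before/after fix; the validation functions are hypothetical:

```python
import re

# Before the fix (incomplete specification): the code was assumed
# to be exactly 5 numeric digits.
def is_valid_code_old(code):
    return bool(re.fullmatch(r"\d{5}", code))

# Corrective maintenance: the specification actually allows any 5
# characters (letters or digits), so the check is repaired.
def is_valid_code(code):
    return bool(re.fullmatch(r"[A-Za-z0-9]{5}", code))

rejected_before = is_valid_code_old("EM001")   # wrongly rejected
accepted_after = is_valid_code("EM001")        # accepted after the fix
```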

General Rules To Reduce S/W maintenance :


In general, software maintenance can be reduced by keeping the following points in mind :
A system should be planned keeping the future in mind.
User specifications should be accurate.
The system design should be modular.
Documentation should be complete.
Proper steps must be followed during the development cycle.
Testing should be thorough.
@@@@@@@@@@@@


10. Documentation
Documentation :
Documentation is not a step in the SDLC. It is an activity on-going in every phase of the SDLC.
It involves developing a document initially as a draft, later as a reviewed document, and
then as a signed-off document.
The document is born either after it is signed off by an authority or after its review. It carries
an initial version number. However, the document also undergoes changes, and the only
way to keep a document up to date is to incorporate these changes. A document
generally explains the system and helps people to interact with it.
Importance Of Software Documentation :
The software documentation is important because of the following reasons :
1. The development of s/w starts with abstract ideas in the minds of the top management
of the user organization, and these ideas take different forms as s/w development
takes place. Documentation is the only link across the entire complex process
of s/w development.
2. Documentation is written communication, therefore it can be used for future
reference as s/w development advances or even after the s/w is developed, and it is
useful for keeping the s/w up to date.
3. The documentation carried out during an SDLC stage, say system analysis, is useful for
the respective system developer to draft his/her ideas in a form which is shareable
with the other team members or users. Thus it acts as a very important medium for
communication.
4. The document reviewer(s) can point out deficiencies in the documents only because
the abstract ideas or models are documented. Thus, documentation makes abstract
ideas tangible.
5. When the draft document is reviewed and recommendations incorporated, the same is
useful for the next stage developers, to base their work on. Thus documentation of a
stage is important for the next stage.
6. Documentation is very important because it records important decisions about
freezing the system requirements, the system design and implementation decisions,
agreed between the users and developers or amongst the developers themselves.
7. Documentation provides a lot of information about the software system. This makes it
a very useful tool for knowing about the software system even without using it.


8. Since the team members in a s/w development team, keep adding, as the s/w
development projects goes on, the documentation acts as important source of detailed
and complete information for the newly joined members.
9. Also, the user organization may spread the implementation of a successful s/w system to
a few other locations in the organization. The documentation will help the new users to
learn the operations of the s/w system. The same advantage can be drawn when a new
user joins the existing team of users. Thus, documentation makes the users productive
on the job very fast and at low cost.
10. Documentation remains live and important as long as the s/w is in use by the user
organization.
11. When the user organization starts developing a new software system to replace this
one, the documentation is still useful. E.g. the system analysts can refer to it
as a starting point for discussions on the new system's requirements.

Designing Documentation Structure :


Table of Contents.
Navigation controls that lead the user to documentation topics.
How to perform certain tasks.
Definitions of important terms.
Documentation Writing :
Use Active Voice
E-prime Style
Consistent terms
Simple language
Friendly language
Parallel grammatical structures
Correct use of steps
Short paragraphs
Types of documentation :
1) Program documentation
2) System documentation
3) Operations documentation
4) User documentation
1] Program documentation :
The program documentation begins in the systems analysis phase and continues during
systems implementation.
It includes process descriptions and report layouts.
Programmers provide documentation with comments that make it easier to understand
and maintain the program in the future.

An analyst must verify that program documentation is accurate and complete.

2] System documentation :
System documentation describes the system's functions and how they are implemented.
System documentation consists of detailed information about a system's design
specifications, its internal workings and its functionality.
Most system documentation is prepared during the systems analysis and systems design
phases.
Internal documentation :
It is the System documentation that is part of the program source code or is generated
at compile time.
External documentation :
It is also system documentation; it includes the outcome of structured diagramming
techniques such as data flow diagrams and entity-relationship diagrams.
System documentation consists of the following :
Data dictionary entries.
Data flow diagrams.
Screen layouts.
Source documents.
Initial systems request.
3] Operation documentation :
Typically used in a minicomputer or mainframe environment with centralized processing
and batch job scheduling.
Documentation tells the IS operations group how and when to run programs.
A common example is a program run sheet, which contains the information needed for
processing and distributing output.
4] User documentation :
Typically includes the following items

System overview

Source document description, with samples

Menu and data entry screens

Reports that are available, with samples

Security and audit trail information

Responsibility for input, output, processing

Procedures for handling changes/problems

Examples of exceptions and error situations

Frequently asked questions (FAQ)

Explanation of Help & updating the manual

Written documentation material often is provided in a user manual.


User documentation Component includes :

Table of Contents
Index
Find or search
Links to definitions


Analysts prepare the material and users review it and participate in developing the
manual.

Online documentation can empower users and reduce the need for direct IS support:

Context-sensitive Help

Interactive tutorials

Hints and tips

Hypertext

On-screen demos

Computer Aided Software Engineering (CASE) tools


Today software engineers use tools that are analogous to the computer-aided
design and engineering tools used by hardware engineers.
CASE tools support contemporary systems development.
It automates step-by-step development methods.
It reduces the amount of repetitive work.
It allows developers to free their mind cycles for more creative problem-solving tasks.
Categories Of CASE tools
Upper CASE
Lower CASE
Integrated CASE tools (contain the functionality of both).
1. Upper CASE tools :
Allows the analysts to create and modify the system design.
Upper CASE tools primarily help the analysts and designers.
All info about the project is stored in the CASE repository encyclopedia (collection
of records/ elements) that contains diagrams, screens, and other info.
Analysis reports are produced using the repository information, for validation.
Also helps in modeling functional requirements of organizations.
Helps in drawing the boundaries for a given project/system.
Helps analysts to visualize how the present project (subsystem) meshes with the
other (sub)systems of the organization.
2. Lower CASE tools :
Lower CASE tools are used more by the programmers and others who implement the
systems designed via upper CASE tools.
They are used to generate source code, which greatly reduces programming effort and cost.
Code generation may be slow initially, until familiarity is gained with the methodology used
by a tool.
Testing and maintenance costs are reduced because test, debug and modify efforts are greatly
reduced; with a change in design, the code is simply regenerated.
There are several different tools with different functionalities. They are:


i) Business systems planning tools


By modeling the strategic information requirements of an organization, these tools
provide a meta-model from which specific information systems are derived. Business
systems planning tools help software developers to create information systems that route
data to those who need the information. The transfer of data is improved and decision-making is expedited.
ii) Project management tools
Using these management tools, a project manager can generate useful estimates of cost,
effort and duration of a software project, plan a work schedule and track projects on a
continuous basis. In addition, the manager can use these tools to collect metrics that will
establish a baseline for software development quality and productivity.
iii) Support tools
This category encompasses document production tools, network system software,
databases, electronic mail, bulletin boards and configuration management tools that are
used to control and manage the information that is created as the software is developed.
iv) Analysis and design tools
These enable the software engineer to model the system that is being built. These tools
assist in the creation of the model and an assessment of the model's quality. By performing
consistency and validity tests on each model, these tools give the engineer insight and
help to eliminate errors before they propagate into the program.
v) Programming tools
System software utilities, editors, compilers and debuggers are a legitimate part of CASE
tools. In addition to these, new and powerful tools can be added. Object-oriented
programming tools, 4G languages and advanced database query systems all fall into this
tool category.
vi) Integration and testing tools
Testing tools provide a variety of different levels of support for the software testing steps
that are applied as part of the software engineering process. Some tools provide direct
support for the design of test cases and are used in the early stages of testing. Other tools,
such as automatic regression testing and test data generation tools are used during
integration and validation testing and can help reduce the amount of effort required for
testing.
vii) Prototyping and simulation tools
These tools span a wide range of tools that include simple screen painters to simulation
products for timing and sizing analysis of real-time embedded systems. At their most
fundamental, prototyping tools focus on the creation of screens and reports that will enable
a user to understand the input and output domain of an information system.
viii) Maintenance tools
These tools can help to decompose an existing program and provide the engineer with
some insight. However, the engineer must use intuition, design sense and intelligence to
complete the reverse engineering process and / or to re-engineer the application.
ix) Framework tools
These tools provide a framework from which an integrated project support environment
(IPSE) can be created. In most cases, framework tools actually provide database

management and configuration management capabilities along with utilities that enable
tools from different vendors to be integrated into the IPSE.
Advantages Of CASE tools
Automate many manual tasks
Generate system documentation
Promote standardization
Promote greater consistency & coordination
Disadvantage Of CASE tools
CASE tools cannot automatically provide a functional, relevant system
It cannot automatically force analysts to use a prescribed methodology or create a
methodology when one does not exist
Cannot radically transform the system analysis and design process
@@@@@@@@@@@@


Software Project Plan


Software Requirement Specification


Design Documentation


Test Documentation


QUESTION ANSWER
What are CASE tools? Explain some CASE tools used for prototyping.
(May-06-15Marks, Nov-03, M-05, Dec-04).
Answer :
Computer-aided software engineering (CASE)
Computer-aided software engineering (CASE) is the use of automated software
tools that support the drawing and analysis of system models and associated
specifications. Some tools also provide prototyping and code generation facilities.
At the center of any true CASE tool's architecture is a developers' database called a
CASE repository, where developers can store system models, detailed descriptions
and specifications, and other products of systems development.
A CASE tool enables people working on a software project to store data about the
project, its plan and schedules, to track its progress and make changes
easily, to analyze and store data about users, and to store the design of a system through
automation.
A CASE environment makes system development economical and practical. The
automated tools and environment provide a mechanism for system personnel to
capture, document and model an information system.
A CASE environment is a number of CASE tools which use an integrated approach to
support the interaction between the environment's components and the users of the
environment.
CASE Components
CASE tools generally include five components: diagrammatic tools, an information
repository, interface generators, code generators, and management tools.
Diagrammatic Tools
Diagrammatic tools support analysis and documentation of application
requirements.
Typically, they include the capabilities to produce data flow diagrams, data
structure diagrams, and program structure charts.
These high-level tools are essential for support of structured analysis methodology,
and CASE tools incorporate structured analysis extensively.
They support the capability to draw diagrams and charts and to store the details
internally. When changes must be made, the nature of the change is described to
the system, which can then redraw the entire diagram automatically.
The ability to change and redraw eliminates an activity that analysts find both
tedious and undesirable.
Centralized Information Repository
A centralized information repository or data dictionary aids the capture, analysis,
processing and distribution of all system information.
The dictionary contains the details of system components such as data items, data
flows and processes, and also includes information describing the volumes and
frequency of each activity.
While dictionaries are designed so that the information is easily accessible, they
also include built-in controls and safeguards to preserve the accuracy and
consistency of the system details.

The use of authorization levels, process validation and procedures for testing the
consistency of descriptions ensures that access to definitions, and the revisions
made to them in the information repository, occur properly according to the
prescribed procedures.
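A minimal sketch of such a repository, assuming a simple in-memory design (the entry fields, the `EMP_CODE` item and the duplicate-entry safeguard are illustrative, not taken from any particular CASE product):

```python
# Hypothetical data dictionary: each entry records a data item's details
# plus the volume/frequency information described above.
repository = {}

def add_entry(name, item_type, description, volume_per_day):
    # Safeguard: reject duplicate definitions to preserve consistency.
    if name in repository:
        raise ValueError("duplicate entry: " + name)
    repository[name] = {
        "type": item_type,
        "description": description,
        "volume_per_day": volume_per_day,
    }

add_entry("EMP_CODE", "char(5)", "Employee code", 500)
print(repository["EMP_CODE"]["type"])  # char(5)
```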

Interface Generators:
System interfaces are the means by which users interact with an application,
both to enter information and data and to receive information.
Interface generators provide the capability to prepare mockups and prototypes of
user interfaces.
Typically, they support the rapid creation of demonstration systems, menus,
presentation screens and report layouts.
Interface generators are an important element for application prototyping,
although they are useful with all development methods.
Code Generators:
Code generators automate the preparation of computer software.
They incorporate methods that allow the conversion of system specifications into
executable source code.
The best generators will produce approximately 75 percent of the source code
for an application. The rest must be written by hand. This hand coding, as the
process is termed, is still necessary.
Because CASE tools are general-purpose tools, not limited to any specific area such
as manufacturing control, investment portfolio analysis, or accounts
management, the challenge of fully automating software generation is
substantial.
The greatest benefits accrue when the code generator is integrated with the
central information repository; such a combination achieves the objective of creating
reusable computer code.
When specifications change, code can be regenerated by feeding details from the data
dictionary through the code generators. The dictionary contents can be reused to
prepare the executable code.
Management Tools:
CASE systems also assist project managers in maintaining efficiency and
effectiveness throughout the application development process.
The CASE components assist development managers in the scheduling of the
analysis and design activities and the allocation of resources to different project
activities.
Some CASE systems support the monitoring of project development schedules
against actual progress, as well as the assignment of specific tasks to individuals.
Some CASE management tools allow project managers to specify custom
elements. For example, they can select the graphic symbols used to describe processes,
people, departments, etc.


What is cost benefit analysis? Describe any two methods of performing same.
(May- 06, May-04).
Answer :
Cost and benefit analysis:
Cost-benefit analysis is a procedure that gives a picture of the various costs, benefits,
and rules associated with each alternative system.
Cost and benefit categories :
In developing cost estimates for a system, we need to consider several cost
elements. Among them are the following:
Hardware costs:
Hardware costs relate to the actual purchase or lease of the computer and
peripherals (e.g. printer, disk drive, tape unit). Determining the actual cost of
hardware is generally more difficult when the system is shared by many users than
for a dedicated stand-alone system.
Personnel costs:
Personnel costs include EDP staff salaries and benefits (health insurance, vacation
time, sick pay, etc.) as well as payment to those involved in developing the system.
Costs incurred during the development of a system are one-time costs and are
labeled development costs.
Facility costs:
Facility costs are expenses incurred in the preparation of the physical site where the
application or computer will be in operation. This includes wiring, flooring, lighting
and air conditioning. These costs are treated as one-time costs.
Operating costs:
Operating costs include all costs associated with the day-to-day operation of the
system. The amount depends on the number of shifts, the nature of the applications and
the caliber of the operating staff. The amount charged is based on computer time,
staff time and the volume of output produced.
Supply cost
Supply costs are variable costs that increase with increased use of paper, ribbons,
disks, and the like.
Procedure for cost benefit determination :

The determination of costs and benefits entails the following steps:


Identify the costs and benefits pertaining to a given project.
Categorize the various costs and benefits for analysis.
Select a method of evaluation.
Interpret the results of the analysis.
Take action.

Classification of costs and benefits :



Tangible or intangible cost and benefits


Tangibility refers to the ease with which costs or benefits can be measured. An outlay of
cash for a specific item or activity is referred to as a tangible cost. The purchase of
hardware or software, personnel training and employee salaries are examples of tangible
costs.
Costs that are known to exist but whose financial value cannot be accurately measured
are referred to as intangible costs. For example, employee morale problems caused by a
new system or a lowered company image is an intangible cost.
Benefits can also be classified as tangible or intangible. Tangible benefits, such as
completing jobs in fewer hours or producing reports with no errors, are quantifiable.
Intangible benefits, such as more satisfied customers or an improved corporate image,
are not easily quantified.
Direct or indirect cost and benefits
Direct costs are those with which a money figure can be directly associated in a project.
They are applied directly to a particular operation. For example, the purchase of a box of
diskettes for $35 is a direct cost.
Indirect costs are the result of operations that are not directly associated with a given
system or activity. They are often referred to as overhead.
Direct benefits can also be specifically attributed to a given project. For example, a new
system that can handle 25 percent more transactions per day is a direct benefit.
Indirect benefits are realized as a by-product of another activity or system.
Fixed variable cost and benefits
Fixed costs are sunk costs. They are constant and do not change; once encountered, they
will not recur. Examples are the straight-line depreciation of hardware, exempt employee
salaries and insurance.
Variable costs are incurred on a regular basis. They are usually proportional to work
volume and continue as long as the system is in operation. For example, the cost of
computer forms varies in proportion to the amount of processing or the length of the reports
required.
Fixed benefits are also constant and do not change. An example is a decrease in the
number of personnel by 20 percent resulting from the use of a new computer.
Variable benefits are realized on a regular basis. For example, consider a safe deposit
tracking system that saves 20 minutes in preparing customer notices compared with the
manual system.
Evaluation method
Net benefit analysis
Net benefit analysis simply involves subtracting total costs from total benefits. It is easy
to calculate, easy to interpret and easy to present. The main drawback is that it does not
account for the time value of money and does not discount future cash flows.

The time value of money is usually expressed in the form of interest on the funds
invested to realize the future value. Assuming compound interest, the formula is:

F = P(1 + i)^n

where:
F = future value of an investment
P = present value of the investment
i = interest rate per compounding year
n = number of years
Present Value analysis


In developing long-term projects, it is often difficult to compare today's costs with the
full value of tomorrow's benefits. The time value of money allows for interest rates,
inflation and other factors that alter the value of the investment. Present value analysis
controls for these problems by calculating the costs and benefits of the system in terms
of today's value of the investment and then comparing across alternatives.

Present value = Future value / (1 + i)^n

Net present value is equal to discounted benefits minus discounted costs.
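Both formulas can be sketched in a few lines (the yearly figures are illustrative):

```python
def present_value(future, i, n):
    """Present value = future value / (1 + i)**n."""
    return future / (1 + i) ** n

def net_present_value(benefits, costs, i):
    """Discounted benefits minus discounted costs.
    benefits and costs are lists of yearly amounts, year 1 first."""
    disc_b = sum(present_value(b, i, n + 1) for n, b in enumerate(benefits))
    disc_c = sum(present_value(c, i, n + 1) for n, c in enumerate(costs))
    return disc_b - disc_c

# $1,100 received one year from now, discounted at 10%, is worth $1,000 today.
print(round(present_value(1100, 0.10, 1), 2))  # 1000.0
print(round(net_present_value([1100], [550], 0.10), 2))  # 500.0
```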
Payback analysis
The payback method is a common measure of the relative time value of a project. It
determines the time it takes for the accumulated benefits to equal the initial investment.
It is easy to calculate and allows two or more activities to be ranked. The payback
period may be computed by the following formula:
Payback period = Overall cash outlay / Annual cash return

The result is the number of years to recover the investment; the time to install the
system is added to arrive at the total time to recover. Elements of the formula:
a = capital investment
b = investment credit
c = cost of investment
d = company's income tax
e = state and local taxes
f = life of capital
g = time to install the system
h = benefits and savings
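In its simplest form, the payback computation reduces to dividing the outlay by the annual return (the figures below are illustrative):

```python
def payback_years(overall_cash_outlay, annual_cash_return):
    """Years for the accumulated benefits to equal the initial investment."""
    return overall_cash_outlay / annual_cash_return

# A $50,000 outlay recovered at $20,000 per year is paid back in 2.5 years.
print(payback_years(50000, 20000))  # 2.5
```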
