
Distributed by Praful aka Search Engine








Structured System Analysis and
Design

(For Private Circulation Only)

1/99 7:18 PM 9/16/2007
1.1
SYSTEM

System:
A system is an organized relationship among functioning components/units.
It is an orderly grouping of interrelated components linked together to achieve a
specific objective.
It exists because it is designed to achieve one or more objectives.
E.g.: Payroll system, Computer system, Business system (Organization).

Organization:
An organization consists of interrelated departments.
E.g.: Marketing Dept, Production Dept, and Personnel Dept.

Component:
A component may be a physical component or a managerial step.
The wheels, engine, etc. of a car are examples of Physical components.
Planning, organizing, controlling activities are examples of Managerial steps.
A component may also be classified as simple or complex.
A simple computer with one Input device and one Output device is an example of a
simple component.
A network of terminals linked to a mainframe is an example of a complex component.

Information System:
An information system is a collection of interrelated components which input data,
process it and produce the output required to complete the given task.

Super System:
A system that is made up of sub-systems or smaller systems is called a super
system.

Characteristics Of A System:

1. Organization
a. It implies the structure and order of a system.
b. It is the arrangement of components that helps to achieve a given objective.
E.g. (I) In a business system, the hierarchical relationship starting with the
management (super system) at the top, leading downwards to the departments (sub
systems) represents the organization structure.
E.g. (II) In a computer system, there are input devices, output devices, a processing
unit, storage devices linked together to work as a whole to produce the required
output from the given input.

2. Interaction

Interaction refers to the manner in which each component of a system functions with
other components of a system.
E.g.: There must be interaction between (i) the purchase dept and the production
dept, (ii) the payroll dept and the personnel dept, and (iii) the CPU and the I/O devices.

3. Interdependence
a. Interdependence defines how different parts of an organization are dependent on
one another.
b. They are coordinated and linked together according to a plan (i.e., the output of
one subsystem may be the input of another subsystem).
E.g.: User → Analyst → Programmer → User/Operator. Here, a system is designed
for the user, but it requires an analyst to analyze the requirements, a programmer to
code it, and the user to test it.

4. Integration
a. Integration refers to the completeness of a system.
b. It means that subsystems of a system work together within a system even though
each subsystem performs its own unique function.

5. Central Objective
The objective of a system as a whole is of more importance than the objectives of
any of its individual subsystems.

Elements Of A System:

1. Input/Output
a. One of the major objectives of a system is to produce output that has some value
to the user using given input.
b. Input: It is the data or information which is entered into the system.
c. Output: It is the outcome after processing the input.

2. Processor
a. It is an element of the system that performs the actual transformation of input into
output.
b. It may modify the input partially or completely.

3. Control
a. The control element guides the system.
b. It is a decision-making subsystem that controls the pattern of activities related to
the input, processing and output.
E.g.: The management is the decision-making body that controls the activities of an
organization, just as the CPU controls the activities of the Computer System.

4. Environment
It is the surroundings in which a system performs.
E.g.: The users and vendors of a system form the environment of that system.

5. Boundaries
a. A system should be defined by its boundaries i.e. limits that identify components
and processes related to the system.

E.g.: A payroll system's boundary limits it to salary calculation only.
b. An Automation Boundary is a boundary that separates manual processes from
automated processes.
E.g.: Entering the basic salary data is a manual process, while the actual calculation
of the salary is an automated process.

6. Feedback
a. It implies the users' response to a system.
b. It provides valuable information about what improvements and updates can be
applied to the system.
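The five elements above can be sketched in code. The following is a minimal, hypothetical illustration (the function names and the 80-hour rule are invented, not from the text) showing input being transformed into output by a processor, guarded by a control element:

```python
# Hypothetical sketch of the system elements: input, processor, output,
# control, and (as a comment) feedback. All names and rules are invented.

def processor(hours, rate):
    """Processor: transforms the input (hours, rate) into output (gross pay)."""
    return hours * rate

def control(hours):
    """Control: a decision-making step that guards the processing."""
    if hours < 0 or hours > 80:
        raise ValueError("hours outside the system boundary")

def run_system(hours, rate):
    control(hours)                  # control guides the system
    gross = processor(hours, rate)  # input is transformed into output
    return gross                    # output: the outcome after processing

# Feedback: the user's response (e.g. a reported error or suggestion)
# feeds back into future improvements of the system.
print(run_system(40, 15.0))  # 600.0
```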

Types Of Information Systems:

1. Transaction Processing System:
The information system that captures or records the transactions affecting a
system/organization.
E.g.: Organizations have financial packages tracking their financial activities.

2. Management Information System (MIS):
This information system uses information captured by the Transaction Processing
System for purposes like planning, controlling.
E.g. (I) An entire day's transactions compiled by the Transaction Processing
System can be documented into MIS reports.
E.g. (II) Cell Phone companies have a billing department which keeps track of every
call received or made using the MIS.

3. Executive Information System (EIS):
This information system provides executives with information for planning and
economic forecasting.
E.g.: Constant stock updates given by news channels.

4. Communication Support System:
This support system allows employees, customers, clients to communicate with one
another.
E.g.: Employees of an organization can e-mail each other. Some organisations
allot special e-mail addresses to the employees of various departments, as well as
to their clients and customers.

System Analysis:
System Analysis is the process of specifying and understanding, in detail, what a
system or information system should do.

System Design:
System Design is the process of specifying in detail, how the components of a
system can be physically implemented.

System Analyst:
A System Analyst is a business professional who uses analysis and design
techniques to solve a business problem by using Information Technology/
Information Systems.

He is more like a developer-cum-programmer-cum-problem-solver.

System Analyst as a Problem-Solver:
Information Systems are developed to solve problems for the organization.
The focus is the business and to make it run effectively and efficiently.
There are different problems which an analyst comes across:
E.g. (I) A customer wants to place an order for products of a company at any time of
the day. The problem is to be able to process the order round the clock.
E.g. (II) The Management needs to know the current financial position of the
organization. The problem is to collect data from various departments and present
the information as one.
System Analysis and Design focuses on understanding the problem and outlining the
approach to solve the problem.
a. To understand the problem, the analyst must learn every possible aspect of the
problem (E.g. business processes, data to be entered, data to be stored, will
existing system [if any] be affected, will other business systems be affected)
b. The Analyst needs to confirm with the management the benefits of solving the
problem, especially the cost estimate.
c. If solving the problem is possible, then the analyst has to develop a set of
feasible solutions.
d. For each possible solution, the Analyst has to take into consideration, the
following aspects: (a) Components of the systems (b) Technology to be used for
building the System (c) Workforce required to build the system.
e. He then needs to decide with the management, which possible solution is the
best alternative.
The Analyst has to select the solution with the fewest risks and maximum benefits.
The chosen system must also be cost-effective and consistent with current
industry practice.
Once the system has been finalized, the Analyst then has to work on the design
specification (E.g. program modules, databases, networks, user interfaces).
Once the design specification is complete, actual construction of the system begins.
Once that step is completed, the system is implemented.
The final step is providing support in the form of customer support, training the user
to use the system, providing help information.


Skills required by a System Analyst:

1. Technical Knowledge & Skills:
A System Analyst should understand the fundamentals about how a computer works,
the devices that interact with the computer, communication networks, database
management system, programming languages, operating systems.
A System Analyst should also know tools (software products used to develop
analysis & design specification) and techniques (used to complete system
development activities) for developing a system.
a. Project Planning Techniques (How to plan and manage a System Development
Project)
b. System analysis techniques
c. System design techniques

d. System construction & implementation techniques
e. System support techniques

2. Business Knowledge & Skills:
A System Analyst needs to understand the business organization in general.
a. What activities and processes does the organisation perform?
b. How is the organisation constructed?
c. How is the organisation managed?
d. What type of work goes on in the organisation?

3. People Knowledge & Skills:
A System Analyst needs to understand a lot about the people working for the
organization.
A System Analyst should also understand their perspectives on the problem they are
trying to solve.
a. How people think
b. How people learn
c. How people react to change
d. How people communicate
e. How people work in a variety of jobs and levels
1.1.2
APPROACHES TO SOFTWARE SYSTEM DEVELOPMENT
1. Structured Approach
2. Object-Oriented Approach
3. Information Engineering Approach

Structured Approach
- Structured Approach is made up of three techniques: (1) Structured
Programming (2) Structured Analysis (3) Structured Design, together called
SADT i.e. Structured Analysis & Design Techniques.
- Structured Programming:
A structured program is a program that has one beginning, one end, and
each step in program execution consists of one of the three programming
constructs: (a) A sequence of program statements (b) A decision where
one set of statements executes, or another set of statements executes (c)
A repetition of a set of statements.
Top Down Programming divides complex programs into a hierarchy of
program modules, where each module is written using the rules of
structured programming, and may be called by its top-level boss module
as required.
Modular Programming: If the program modules are separate programs
working together as a system (and not part of the same program), then
these programs are organized into a top-to-bottom hierarchy; in that case,

if multiple programs are involved in the hierarchy, then the arrangement is
called modular programming.
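The three structured-programming constructs named above can be sketched in a few lines. This is an illustrative example only (the function and its order-classification rule are invented) showing sequence, decision, and repetition in one structured routine with a single beginning and a single end:

```python
# Hedged sketch of the three structured-programming constructs.
# The function name and the 100-unit threshold are invented for illustration.

def classify_total(amounts):
    total = 0.0            # sequence: statements execute one after another
    for a in amounts:      # repetition: a set of statements repeats
        total += a
    if total >= 100:       # decision: one of two sets of statements executes
        return "large order"
    else:
        return "small order"

print(classify_total([40, 35, 30]))  # large order
```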
- Structured Design:
Definition: Structured Design is a technique that provides guidelines for
deciding what the set of programs should be, what each program should
accomplish, and how the programs should be organized into a hierarchy.
Principles: Program Modules should be (a) loosely coupled i.e. each
module is as independent of the other modules as possible and thus
easily changeable and (b) highly cohesive i.e. each module accomplishes
a clear task.
User Interface Design is done in conjunction with Structured Design.
Structure Chart: It is a graphical model showing hierarchy of program
modules produced in structured design.
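The structured-design principles above (a boss module at the top of a hierarchy, calling loosely coupled, highly cohesive modules) might look like this in code. The module names and the comma-separated input format are assumptions for illustration:

```python
# Illustrative sketch of structured design: a top-level "boss" module
# coordinates cohesive worker modules, each with one clear task, coupled
# only through parameters (loose coupling). All names are invented.

def read_input(raw):          # cohesive: only parses the input
    return [float(x) for x in raw.split(",")]

def compute_total(values):    # cohesive: only computes
    return sum(values)

def format_output(total):     # cohesive: only formats
    return f"Total: {total:.2f}"

def boss(raw):                # boss module at the top of the hierarchy
    values = read_input(raw)
    total = compute_total(values)
    return format_output(total)

print(boss("10,20,12.5"))  # Total: 42.50
```

Because each module depends only on its parameters, any one of them can be changed (say, a new input format) without touching the others.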
- Structured Analysis:
Definition: Structured Analysis is a technique that helps define what the
system needs to do (processing requirements), what data the system
needs to store and use (data requirements), what inputs and outputs are
needed, and how the functions work together overall to accomplish
required tasks.
Data Flow Diagram (DFD): It is a graphical model showing inputs,
processes, storage, outputs of a system produced in structured analysis.
Entity Relationship Diagram (ERD): It is a graphical model of the data
needed by the system, including entities about which information is stored
and the relationships among them, produced in structured analysis.
- Weaknesses of Structured Approach:
Structured Approach makes processes the focus rather than the data.

Information Engineering Approach
- Definition: Information Engineering Approach is a system development
approach that focuses on strategic planning, data modeling, and automated
tools.
- Advantage over Structured Approach: More rigorous and complete than
Structured Approach, and the focus is more on data than on processes.
- Strategic Planning: It defines all information systems the organisation needs
to conduct its business, using the Architecture Plan.
- Architecture Plan: This plan defines business functions and activities the
system needs to support, the data entities about which the system needs to
store information, and the technological infrastructure the organisation plans
to use to support the information system.
- Process Dependency Diagram: It focuses on which processes are dependent
on other processes.
- CASE Tool: It helps automate work by forcing the analyst to follow the I.E.
Approach (sometimes at the expense of flexibility).

Object-Oriented Approach

- Definition: Object-Oriented Approach to system development is an approach
that views an information system as a collection of interacting objects that
work together to accomplish a task.
- Object: It is a programmatic representation of a physical entity, that can
respond to messages.
- Object-Oriented Analysis: It involves defining all types of objects that do work
in the system, and showing how the objects interact to complete required
tasks.
- Object-Oriented Design: It involves defining all additional types of objects
necessary to communicate with people and devices in the system, redefining
each type of object so it can be implemented with a specific language or
environment.
- Object-Oriented Programming: It involves writing statements in a
programming language to define what each type of object does, including the
messages the objects send to each other.
- Class: It is a collection of similar objects, and each class may have
specialized subclasses, and/or a generalized superclass.
- Class Diagram: It is a graphical model that shows all the classes of objects in
the system, produced in the object-oriented approach.
- Advantages: (a) Naturalness (looks at the world in terms of tangible objects
and not complex procedures) & (b) reuse (classes can be used again and
again whenever they are needed).
- Drawbacks: Since it is drastically different from the traditional approach, it is
sometimes difficult to understand.
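The object-oriented ideas above (classes, specialized subclasses, and objects responding to messages) can be sketched briefly. The class names and the interest rule are invented for illustration, not taken from the text:

```python
# Hypothetical sketch of object-oriented concepts: a class, a specialized
# subclass, and objects interacting by sending messages (method calls).

class Account:                              # class: a collection of similar objects
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):              # objects respond to messages
        self.balance += amount

class SavingsAccount(Account):              # specialized subclass of Account
    def add_interest(self, rate):
        self.deposit(self.balance * rate)   # sends the deposit message to itself

acct = SavingsAccount("Praful", 1000.0)
acct.add_interest(0.05)
print(acct.balance)  # 1050.0
```

Note the reuse advantage: `SavingsAccount` inherits `deposit` from `Account` instead of re-implementing it.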







1.2
SOFTWARE DEVELOPMENT CYCLE

System Development Life Cycle
- The System Development Life Cycle (SDLC) is a method of System
Development that consists of 5 phases: Planning, Analysis, Design,
Implementation, and Support.
- The first four phases of Planning, Analysis, Design and Implementation are
undertaken during development of the project, while the last phase of Support
is undertaken post-completion of the project.
- Each phase has some activities associated with it, and each activity may
have some tasks associated with it.
- (Phase: a division of the SDLC in which related activities are performed.)


Planning Phase
Following are the activities of the Planning Phase:
- Define the Problem
i. Meeting the Users
ii. Determine scope of Problem
iii. Define System capabilities
- Confirm Project Feasibility
i. Identify intangible costs & benefits
ii. Estimate tangible, developmental, & operational costs
iii. Calculate NPV, ROI, Payback
iv. Consider technical, cultural, schedule feasibility of the Project
- Plan Project Schedule (Chart out a complete project schedule, including the
activities and tasks of each phase.)
- Staff the Project (Provide required staff, such as the Analysts, the
Programmers, the End-Users, etc.)
- Launch the Project (Begin actual work on the Project)
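The feasibility arithmetic named above (NPV, ROI, Payback) can be sketched as follows. The cash-flow figures are invented example numbers, and the formulas are the standard textbook definitions, not specific guidance from this text:

```python
# Hedged sketch of project-feasibility calculations with invented figures.

def npv(rate, cash_flows):
    """Net present value: discount each year's net cash flow to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def roi(total_benefits, total_costs):
    """Return on investment as a fraction of total cost."""
    return (total_benefits - total_costs) / total_costs

def payback_years(investment, annual_benefit):
    """Years until cumulative benefits repay the initial investment."""
    return investment / annual_benefit

# Invented example: 50,000 up front, then 20,000/year benefit for 4 years.
flows = [-50000, 20000, 20000, 20000, 20000]
print(round(npv(0.10, flows), 2))   # about 13397.31
print(roi(80000, 50000))            # 0.6
print(payback_years(50000, 20000))  # 2.5
```

A positive NPV at the chosen discount rate is the usual signal that the project clears the economic-feasibility hurdle.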

Analysis Phase
Following are the activities of the Analysis Phase:
- Gather information
i. Meet the User to understand all aspects of the Problem
ii. Obtain information by observing business procedures, asking questions to
user, studying existing documents, reviewing existing systems, etc.
- Define System Requirements (Review & analyze obtained information and
structure it to understand requirements of new system, using graphical tools.)
- Build Prototype for Discovery of Requirements (Build pieces of System for
Users to review)
- Prioritize Requirements (Arrange requirements in order of importance)
- Generate & Evaluate alternatives (Research alternative solutions while
building system requirements.)
- Review recommendations with Management (Discuss all possible alternatives
with Management and finalize best alternative)

Design Phase
Following are the activities of the Design Phase:
- Design & Integrate Network (Understand Network Specifications of
Organisation, such as Computer equipment, Operating Systems, Platforms,
etc.)
- Design Application Architecture
i. Design model diagrams according to the problem
ii. Create the required computer program modules
- Design User Interfaces (Design the required forms, reports, user screens, and
decide on the sequence of interaction.)

- Design System Interface (Understand how the new system will interact with
the existing systems of the organisation)
- Design & Integrate Database (Prepare a database scheme and implement it
into the system)
- Build Prototype for Design Details (Check workability of the proposed design
using a prototype.)
- Design & Integrate System Controls (Incorporate facilities such as login and
password protection to protect the integrity of the database and the
application program.)

Implementation Phase
Following are the activities of the Implementation Phase:
- Construct Software Components (Write code for the design, using
programming languages such as java, VB, etc.)
- Verify & Test Software (Check the functionality of the software components.)
- Build Prototype for Tuning (Make the software components more efficient
using a prototype, to make the system capable of handling large volumes of
transaction.)
- Convert Data (Incorporate data from existing system into new system and
make sure it is updated and compatible with the new system.)
- Train & Document (Train users to use the new system, and prepare the
documentation.)
- Install Software (Install the software and make sure all components are
running properly and check for database access.)

Support Phase
Following are the activities of the Support Phase:
- Provide support to End-Users (Provide a helpdesk facility and training
programs, to provide support to end users.)
- Maintain & Enhance new System (Keep the system running error-free, and
provide upgrades to keep the system contemporary.)
Software Development Cycle Variations:

Waterfall Model
- In the Waterfall Model Life Cycle, each phase is completed in a sequence,
and the results of one phase flow into the next phase.
- It takes a top-down approach, where once a phase is completed, it cannot be
modified.
- That is to say, the activities and decisions of each phase are frozen.
- This Model is considered to be rigid, as there is no chance for a change of
direction in the flow of the phases.
- Hence it insists on careful planning and control of the project.
- The advantage of the same is that it is possible to define at any point, exactly
what has been decided, and also exactly how far along the project is.

- The drawback of this Model is its inflexibility, and hence it is preferred for
tangible rather than intangible projects.

Spiral Model
- The Spiral Model Life Cycle involves heavy iteration that breaks the project
into smaller pieces, each with a different risk associated with it.
- The risk could be anything from an undefined system requirement, to complex
technology, to an uncertain competitive environment.
- Initially, the project handles a few risks, and with each iteration, it expands
and addresses more risks, until eventually the system is completed.
- A system is considered to be completed when all risks have been addressed.
- The advantage of this Model is that risks are identified and addressed early,
reducing the chance of failure late in the project.
- The drawback of this Model is the overhead of repeated risk analysis and
planning in every iteration.

Prototyping Model
- The Prototyping Model is considered to be a basic model.
- It has two parts: (a) a Discovery Prototype and (b) a Development Prototype.
- The Discovery Prototype is used in the Planning & Analysis phases to test
feasibility and help identify processing requirements.
- The Development Prototype is used in the design, coding and implementation
phases, to test designs, effectiveness of code and workability of software.
- This model allows us to move through the five phases at will, where after
coding, the implementation phase is undertaken only if the project is
satisfactory at that point.
- The drawback of this model is that a lot of reworking is required.

Rapid Application Development Model
- The Rapid Application Development (RAD) Model is used for rapid
development, which is necessary to overcome the continuing backlog of needed
systems, and to keep systems contemporary.
- The key concept used in this model is speeding up the activities of each
phase.
- Another approach taken to speed up activities is iterative development.
- It thus involves collection of development approaches, tools and techniques
that have proven to shorten the project development schedule.
- Another approach to this model is creating working prototypes for the user to
review, and then expanding the prototype into a finished system, with the user's
approval.
- The drawback of this model is that the risk involved is a lot higher, as the
focus on taking less time may compromise quality.

1.3
FACT FINDING TECHNIQUES

Fact-finding techniques are used to identify system requirements, through
comprehensive interaction with the users using various ways of gathering
information.

Six Methods of Information Gathering:
1. Distribute & Collect Questionnaires to Stakeholders
2. Review existing reports, forms, and procedure descriptions
3. Conduct interviews & discussions with users
4. Observe business processes and workflows
5. Build prototypes
6. Conduct Joint Application Design (JAD) sessions

Distribute & Collect Questionnaires
- Questionnaires enable the project team to collect information from a large
number of stakeholders conveniently, and to obtain preliminary insight on
their information needs.
- This information is then used to identify areas that need further research
using document reviews, interviews, and observation.
- Questionnaires can be used to answer quantitative questions, such as "How
many orders do you enter in a day?"
- Such questions are called closed-ended questions i.e. questions that have
simple, definitive answers and do not invite discussion or elaboration.
- They can be used to determine the users' opinion about various aspects of a
system (say, asking the user to rate a particular activity on a scale of 1-5).
- Questionnaires, however, do not provide information about processes, work-
flow, or techniques used.
- Questions that elicit a descriptive response are best answered using
interviews or observation.
- Such questions that encourage discussion and elaboration are called open-
ended questions.

Review Existing Reports, Forms, and Procedure Descriptions
- Two advantages of reviewing existing documents and documentation:
i. To get a better understanding of processes
ii. To gain knowledge about the industry or the application that needs to be
studied.
- An analyst requests for and reviews procedural manuals, and work
descriptions, in order to understand business functions.
- Documents and reports can also be used in interviews, where forms and
reports are used as visual aid, and working documents are used for
discussion.
- Discussion can center on use of each form, its objective, distribution, and
information content.

- Forms already filled-out with real information ensure a correct understanding
of the fields and data content.
- Reviewing existing documentation of existing procedures helps identify
business rules, while written procedures also help in discovering
discrepancies and redundancies in the business processes.
- It is essential to ensure that the assumptions and business rules derived from
existing documentation are accurate.
Conduct Interviews & Discussions with Users
- Interviewing stakeholders is considered the most effective way to understand
business functions and rules, though it is also the most time-consuming and
resource-expensive.
- In this method, members of the project team (system analysts) meet with
individual groups of users, in one or multiple sessions in order to understand
all processing requirements through discussion.
- An effective interview consists of three parts: (a) Preparing for the interview
(b) Conducting the interview and (c) Following up the interview.
- Before an Interview:
i. Establish objective of interview (what do you want to accomplish through
this interview?)
ii. Determine correct user(s) to be involved (no. of users depends on the
objective)
iii. Determine project team members to participate (at least 2)
iv. Build a list of questions and issues to be discussed
v. Review related documents and materials (list of specific questions, open
and closed ended)
vi. Set the time and location (quiet location, uninterrupted)
vii. Inform all participants of objective, time, and locations (each participant
should be aware of objective of the interview)
- During an Interview:
i. Dress appropriately (show good manners)
ii. Arrive on time (arriving early is a good practice, if long interview, prepare
for breaks)
iii. Look for exceptions and error conditions (ask "what if" questions, ask
about exceptional situations)
iv. Probe for details (ensure complete understanding of all procedures and
rules)
v. Take thorough notes (handwritten note-taking makes user feel that what
he has to say is important to you)
vi. Identify and document unanswered items or open questions (useful for
next interview session)
- After an Interview:
i. Review notes for accuracy, completeness, and understanding (absorb,
understand, document obtained information)
ii. Transfer information to appropriate models and documents (create
models for better understanding after complete review)

iii. Identify areas that need further clarification (keep a log of unanswered
questions, such as those based on policy questions raised by new system,
include them in next interview)
iv. Send thank-you notes if appropriate

Observe Business Processes & Work-flow
- Observing business procedures that the new system will support is an
excellent way to understand exactly how the users use a system, and what
information they need.
- A quick walkthrough of the work area gives a general understanding of the
layout of the office, the need and use of computer equipment, and the general
workflow.
- Actually observing a user at his job provides details about the actual usage of
the computer system, and how the business processes are carried out in
reality.
- Being trained by a user and actually performing the job allows one to discover
the difficulties of learning new procedures, the importance of an easy-to-use
system, and drawbacks of the current system that the new system needs to
address.
- It must be remembered that the level of commitment required by different
processes varies from one process to another.
- Also, the analyst must not be a hindrance to the user.

Build Prototypes
- Building a prototype implies creating an initial working model of a larger, more
complex entity.
- Types of prototypes: throwaway, discovery, design, evolving prototypes.
- Different phases of the SDLC require different prototypes.
- The Discovery Prototype is used in the Planning & Analysis phases to test
feasibility and help identify processing requirements.
- The Development Prototype is used in the design, coding and implementation
phases, to test designs, effectiveness of code and workability of software.
- Discovery prototypes are usually discarded after the concept has been tested,
while an Evolving prototype is one that grows and evolves and may
eventually be used as the final, live system.
- Characteristics of Prototypes:
i. A prototype should be operative i.e. a working model, that may provide
look-and-feel but may lack some functionality.
ii. It should be focused on a single objective, even if simple prototypes are
being merged into a single large prototype.
iii. It should be built and modified easily and quickly, so as to enable
immediate modification if approach is wrong.

Conduct Joint Application Design (JAD) Sessions
- JAD is a technique used to expedite the investigation of system requirements.

- Usually, the analysts first meet with the users and document the discussion
through notes & models (which are later reviewed).
- Unresolved issues are placed on an open-items list, and are eventually
discussed in additional meetings.
- The objective of this technique is to compress all these activities into a shorter
series of JAD sessions with users and project team members.
- During a session, all of the fact-finding, model-building, policy decisions, and
verification activities are completed for a particular aspect of the system.
- The success of a JAD session depends on the presence of all key
stakeholders and their contribution and decisions.
2.1
INVESTIGATING SYSTEM REQUIREMENTS

What are System Requirements?
- System Requirements are the functions that our system must perform.
- During planning, the Analyst defines system capabilities; during analysis, the
Analyst expands these into a set of system requirements.
- There are two types of System Requirements:
i. Functional requirements: the activities that a system must perform with
respect to the organization.
ii. Technical requirements: operational objectives related to the environment,
hardware, and software of the organization.
- In functional requirements, for example, if a Payroll System is being
developed, then it is required to calculate salary, print paychecks, calculate
taxes, net salary etc.
- In technical requirements, for example, the system may be required to
support multiple terminals with the same response time, or may be required to
run on a specific operating system.
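The payroll example of a functional requirement above ("calculate taxes, net salary") can be sketched as a tiny function. The flat 20% tax rate is an invented assumption, not a rule from the text:

```python
# Hypothetical sketch of one functional requirement from the payroll
# example: compute tax and net salary. The flat tax rate is invented.

def net_salary(gross, tax_rate=0.2):
    """Compute the tax on a gross salary and return the net salary."""
    tax = gross * tax_rate
    return gross - tax

print(net_salary(30000))  # 24000.0
```

A technical requirement, by contrast, would not appear in code like this at all; it would constrain where and how such code runs (response time, terminals, operating system).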

Sources of System Requirements The Stakeholders
- The Stakeholders of the System are considered as the primary source of
information for functional system requirements.
- Stakeholders are people who have an interest in the successful
implementation of your system.
- There are three groups of stakeholders: (a) Users who use the system on a
daily basis (b) Clients who pay for and own the system (c) Technical staff i.e.
the people who must ensure that the system operates in the computing
environment of the organisation.
- The analyst's first task during analysis is to (a) identify every type of
stakeholder and (b) identify the critical person from each type (group) of
stakeholders.


User Stakeholders:
- User Stakeholders are divided into two groups: (a) Vertical and (b) Horizontal.
- Horizontal implies that an analyst needs to look at information flow across
departments or functions.
- For example, a new inventory system may affect multiple departments, such
as sales, manufacturing, etc, so these departments need to be identified, so
as to collect information relevant to them.
- Vertical implies that an analyst needs to look at information flow across job
levels, such as clerical staff, middle management, executives, etc.
- Each of these users may need the system to perform different functions with
respect to themselves.
- A Transaction is a single occurrence of a piece of work or an activity
done in an organisation.
- A Query is a request for information from a system or from a database.



Characteristics and Information Needs of Different Users:

Business Operations Users: (1)
- They are people who use the system to perform day-to-day operations of an
organization.
- Such users will be able to provide information about the daily operations of
the business and how the new system must support them.

Query Users: (2)
- They are people who need current information from the system.
- They can also be Business Operations Users.
- A customer who needs to view specific information may also be considered a
Query user.
- Such users provide information about what kind of data should be available
on a periodic basis (daily, weekly, monthly, etc.) and also what format is
most convenient.

Management Users: (3)
- They are people who oversee the organisation, and make sure that its daily
procedures are performed effectively and efficiently.
- They thus expect statistical and summary information from the system.
- Such users provide information about (a) what reports should be generated,
(b) what performance statistics the system should maintain, (c) what volume
of transactions the system needs to support and log, (d) whether the
controls of the system are adequate to prevent errors and fraud, and (e) how
many requests for information will be made, and how often.

Executive Users: (4)

- They are the top level executives of the organisation, who are interested in
strategic issues as well as daily issues.
- They need information from the system that will help them compare overall
improvements in resource utilization.
- Also, they may want the system to interface with other systems to provide
strategic information regarding trends and directions of the industry and the
business.

Client Stakeholders: (5)
- A Client Stakeholder is a person or a group who provides funds for the
project.
- A Client Stakeholder may be the same group as the executive users, or may
be a separate group of people, such as a board of trustees, or executives of a
parent company.
- Since the client funds the project, he expects some benefits from the system.
- He must receive project status reviews from time to time so as to maintain
ongoing approval and release of funds.

Technical Stakeholders: (6)
- Technical Stakeholder is a person or a group who establishes and maintains
the computing environment of the organisation.
- Though they do not form a true user group, they provide valuable information
about the technical requirements of the system.
- They provide guidance in areas such as programming languages, computing
platforms, and hardware equipment.

Identifying System Requirements:
- Previously, identification of system requirements was a four-step process that
wasted time:
i. Develop the current system's physical model (physical processes & activities)
ii. Develop the current system's logical model (logical business functions)
iii. Develop the new system's logical model
iv. Develop the new system's physical model
- The current method of identifying system requirements is a two-step process:
i. Identify the current system's procedures (only understand the business
needs, not the processes)
ii. Develop requirements and models for the new system

Structured Walkthroughs:
- A Structured Walkthrough is a review of the findings from the analyst's
investigation and of the models built on the basis of these findings.
- The objective of a SW is to find errors and problems through Verification &
Validation.
- This is done by documenting requirements, and then reviewing them for
errors, omissions, inconsistencies, or problems with the description.
- Though a review of the findings can be informal, a SW is always formal.

- One thing to be remembered about a SW is that it is NOT a performance
review.
- What is reviewed? The analyst's documentation prepared during the
analysis phase, such as flowcharts, model diagrams, etc., is reviewed.
- When is it reviewed? The walkthrough should be conducted as soon as
possible after the documents have been created.
- Who is involved?
i. Two parties are involved: the person whose work is being reviewed, and
those who are reviewing it.
ii. For verification, i.e. for checking correctness and consistency,
experienced analysts may be involved in the walkthrough.
iii. For validation, i.e. ensuring that system satisfies all the needs of the
various stakeholders, the appropriate stakeholders should also be
involved.
iv. In general, the nature of the work to be reviewed dictates who the
reviewers should be.
- How is it undertaken? Similar to an interview, preparation, execution and
follow-up constitute a walkthrough.
i. Preparation: The analyst readies the material to be reviewed, provides
copies to the participants of the SW, schedules a time and place for the
same, and notifies all participants.
ii. Execution:
1. The analyst presents his work point by point, perhaps by using a
sample test case, and processing it through the defined flow.
2. The reviewers check for inconsistencies or problems and point them
out accordingly; these comments are documented by a helper for
the analyst's future reference.
3. Though suggestions for correction are provided during the SW, no
corrections are actually made during the same.
iii. Follow-Up: This consists of the making of changes and corrections, based
on the documented comments and suggestions (if major errors are
located, then an additional SW may be scheduled).

2.2 MODELLING SYSTEM REQUIREMENTS

Overview of Models
- What is a model? A model is a representation of some aspect of the system
to be built.
- What is its purpose?
i. A model helps an analyst clarify and refine the design of the system.
ii. It also helps in describing the complexity of the information system.

iii. It provides a convenient way of storing information about the system in a
readily understandable form.
iv. It helps communication between the analyst and the programmer (i.e.
members of the project team).
v. It also assists in communication between the project team and the system
users and stakeholders.
vi. It can be used as documentation by a future development team, while
maintaining or enhancing the system.

Types of Models
- The type of the model is based on the nature of the information being
represented.
- Types of models include:
i. Mathematical Model
ii. Descriptive Model
iii. Graphical Model

Mathematical Model
- A Mathematical Model is a series of formulae that describe the technical
aspects of a system.
- Such models are used to represent precise aspects of the system, that can
be best represented through formulae or mathematical notations, such as
equations and functions.
- They are useful in expressing the functional requirements of scientific and
engineering applications, that tend to compute results using elaborate
mathematical algorithms.
- They are also useful for expressing simpler mathematical requirements in
business systems, such as net salary calculation in a payroll system.

Descriptive Model
- Descriptive models are required for narrative memos, reports, or lists that
describe some aspects of the system.
- This model is required especially because there is a limitation to what
information can be defined using a mathematical model.
- Effective models of information systems involve simple lists of features,
inputs, outputs, events, users.
- Lists are a form of descriptive models that are concise, specific, and useful.
- Algorithms written using structured English or pseudocode are also
considered precise descriptive models.
Graphical Models
- Graphical Models include diagrams and schematic representations of some
aspect of a system.
- They simplify complex relationships that cannot be understood with a verbal
description.
- Analysts usually use symbols to denote various parts of the model, such as
external agents, processes, data, objects, messages, connections.

- Each type of graphical model uses unique and standardized symbols to
represent pieces of information.

Models used in Analysis Phase
- The models involved in the "Define system requirements" activity of the
Analysis Phase are logical models.
- They define in great detail what is required, without committing to any specific
technology.
- The following logical models are used in the Analysis Phase:
i. Event Table (S)
ii. Data Flow Diagram (S)
iii. Entity Relationship Diagram (S)
iv. Class Diagram (O)
v. Use Case Diagram (O)
vi. Sequence Diagram (O)
vii. Collaboration Diagram (O)
viii. State Chart Diagram (O)
(S) Structured, (O) Object-Oriented

Models used in Design Phase
- The models involved in the Design Phase are physical models as they show
how some aspect of the system will be implemented with a specific
technology.
- Some models are used in both Analysis and Design Phases.
- The following physical models are used in the Design Phase:
i. Screen Layouts (S) (O)
ii. Reports Layouts (S) (O)
iii. System Flowchart (S)
iv. Structure Chart (S)
v. Class Diagram (O)
vi. Database Schema (S)
vii. Network Diagram (S)
(S) Structured, (O) Object-Oriented


Overview of Events:
- An event is an occurrence at a specific time and place that can be described
and that affects the system.
- An event may occur at the end of a sequence of activities that do not
themselves affect the system.
- For example, consider the following sequence of activities and a Bill
Generation System:
1. A person goes to a shop.
2. He browses through the range of items available.
3. He selects an item of his choice.
4. He pays for the item and leaves.

- The first three activities do not affect the system in any way; it is only the
fourth activity where the system is involved, and hence affected.

Types of Events:
- Three types of events must be considered:
i. External Events
ii. Temporal Events
iii. State Events

External Events:
- External events are events that occur outside the system, usually initiated by
an external agent.
- An External Agent is an entity that supplies or receives data from the system.
- The Analyst, while identifying external events, usually identifies the external
agents first.
- External events usually begin to define what a system needs to be able to do.
- They are also the events that lead to important transactions that the system
must process.
- Checklist to identify external events:
i. External agent wants something that may result in a transaction
ii. External agent wants some information
iii. Some data has changed and needs to be updated
iv. Management wants some information


Temporal Events:
- Temporal event is one that occurs as a result of reaching a point in time.
- These events are different from external events in that the system should
automatically produce the required output (weekly reports, monthly paycheck)
without being told to do so.
- Checklist to identify temporal events:
i. Internal outputs needed
1. Management reports (summary or exception)
2. Operational reports (detailed transaction)
3. Statements, status reports (payroll etc.)
ii. External outputs needed
1. Statements, status reports, bills, reminders
- They are periodic events that occur after predefined intervals of time.


State Events:
- A State Event is an event that occurs when something happens inside the
system that triggers the need for processing.
- Sometimes, State Events occur as a result of external events.

- They are similar to temporal events, with one major difference the trigger
involved in state events is another event, while the trigger involved with a
temporal event is a predefined time.

Some Important Terms:
- System Controls: They are checks and procedures put in place to protect the
integrity of the system.
- Event Table: It is a table that lists events in rows and key pieces of
information related to the event in columns.
- Trigger: It is an occurrence that tells the system that an event has occurred,
either through the input given or, in the case of a temporal event, through
the arrival of a predefined point in time.
- Source: It is an external agent or actor that supplies data to the system.
- Activity: It is the behaviour (action) that the system performs when an event
occurs.
- Response: It is the output produced by the system that goes to the
destination.
- Destination: It is an external agent or actor that receives data from the
system.
2.3 DATA MODELLING

Data Entity
- A data entity is an object that the system needs to store information about.
- They can be likened to external actors or agents that interact with or use the
system, such as a customer or an employee.
- They may also be objects such as products, orders, invoices, etc.
- Types of entities:
i. They could be tangible, i.e. easily identified, such as a book, or intangible.
ii. It could be a role played by a person, such as a doctor, customer,
employee, etc.
iii. It could be an organizational unit, such as a department, or work group.
iv. It could be a location, such as a store, a branch, or a warehouse.
v. It could be information about something, such as info about a product, or
an order.

Attributes of a Data Entity
- An attribute of a data entity is a piece of specific information about that entity.
- For example, a customer has an ID, a name, a contact address, a contact
phone number, etc.
- The attribute that uniquely identifies an entity is called a key or an identifier.
- An attribute that is a collection of similar or related attributes is known as a
composite attribute (for example, first name, middle name, and last name can
be stored as full name)
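These ideas can be sketched in Java (the class and field names here are illustrative, not from the text): custId plays the role of the key/identifier, and the three name fields together form the composite attribute "full name".

```java
// Hypothetical Customer entity: custId is the key/identifier; first, middle,
// and last name are the component parts of the composite attribute "full name".
class Customer {
    final String custId;        // key / identifier: uniquely identifies the entity
    String firstName;
    String middleName;
    String lastName;
    String contactPhone;        // another specific piece of information (attribute)

    Customer(String custId, String first, String middle, String last, String phone) {
        this.custId = custId;
        this.firstName = first;
        this.middleName = middle;
        this.lastName = last;
        this.contactPhone = phone;
    }

    // the composite attribute, assembled from its component attributes
    String fullName() {
        return firstName + " " + middleName + " " + lastName;
    }
}
```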


Relationships among Entities
- A relationship is a naturally occurring association between specific entities.
- Relationships between entities apply in both directions; for example, a customer
places an order, and an order is placed by a customer.
- The nature of each relationship is considered in terms of the number of
associations.
- Cardinality: It is the number of associations that occur between specific
entities.
- Multiplicity: It is a synonym for cardinality (often used in the object-oriented
approach).
- Binary Relationship: It is a relationship between two different types of entities
(such as a Customer and an Order).
- Unary Relationship: It is a recursive relationship between two entities of the
same type (such as a person being married to another person)
- Ternary Relationship: It is a relationship between three different types of
entities.
- N-ary Relationship: It is a relationship between n different types of entities.

Entity-Relationship Diagram
- An Entity-Relationship Diagram is a diagram that represents the relationship
between two entities in a system.
- The relationships within an ER Diagram can be of the following types:
i. One-to-one: One instance of one entity will have a one-to-one
correspondence with one instance of the other (one person ABC
marries another person XYZ).
ii. One-to-many: One instance of one entity will have a one-to-many
correspondence with many instances of the other (one customer ABC
places orders O100, O200, O300).
iii. Many-to-many: Many instances of one entity will have a many-to-many
correspondence with many instances of the other (employees ABC and
XYZ work on projects P300 and P530).
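As a rough Java sketch (class names are hypothetical), the one-to-many "customer places orders" relationship above can be represented by holding a collection on the "one" side and a back-reference on the "many" side:

```java
import java.util.ArrayList;
import java.util.List;

// "An order is placed by a customer": each Order references one Customer.
class Order {
    final String orderId;
    Customer placedBy;

    Order(String orderId) { this.orderId = orderId; }
}

// "A customer places orders": one Customer holds many Orders.
class Customer {
    final String customerId;
    final List<Order> orders = new ArrayList<>();

    Customer(String customerId) { this.customerId = customerId; }

    // maintain both directions so the association can be navigated either way
    void placeOrder(Order o) {
        orders.add(o);
        o.placedBy = this;
    }
}
```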
2.4 PROCESS MODELLING

Data Flow Diagram
- A Data Flow Diagram is a graphical system model that shows all the main
requirements for an information system.
- It involves representation of the following:
i. Source (External agent)
ii. Process/Activity
iii. Data store
iv. Trigger
v. Response
vi. Destination (External agent)

- All processes are numbered to show proper sequence of events.

Important Terms
- External Agent: A person or organisation that lies outside the boundary of a
system and provides inputs to the system or accepts the system's output.
- Process: A process is an algorithm or procedure that transforms data input
into data output.
- Data Flow: An arrow in a DFD that represents flow of data among the
processes of the system, the data stores, and the external agents.
- Data Store: A place where data is held, for future access by one or more
processes.

Level of Abstraction
- Many different types of DFDs are produced to show system requirements.
- Some may show the processing at a higher level (a general view), while
others may show the processing at a lower level (a detailed view).
- These differing views are referred to as levels of abstraction.
- Thus Levels of Abstraction can be defined as a modelling technique that
breaks the system into a hierarchical set of increasingly more detailed
models.
- The higher level DFDs can be decomposed into separate lower level detailed
DFDs.

Context Diagram
- A Context Diagram is a Data Flow Diagram that describes the highest view of
a system.
- It summarizes all processing activities within the system into a single process
representation.
- All the external agents and data flow (into and out of the system) are shown in
one diagram, with the whole system represented as a single process.
- It is useful for defining the scope and boundaries of a system.
- The boundaries of a system in turn help identify the external agents, as they
lie outside the boundary of the system.
- The inputs and outputs of a system are also clearly defined.
- NOTE: Data stores are not included.
- The Context level DFD process takes 0 as its process number, while the
numbering in the 0 Level DFD starts from 1.

DFD Fragments
- A DFD fragment is a DFD that represents system response to one event
within a single process.
- Each DFD fragment is a self-contained model showing how the system
responds to a single event.
- The main purpose of DFD fragments is to allow the analyst to focus attention
on just one part of the system at a time.

- Usually, a DFD fragment is created for each event in the event list (later made
into an event table).

The Event-Partitioned System Model
- The Event-Partitioned System Model is a DFD that models system
requirements using a single process for each event in a system or subsystem.
- An entire set of DFD fragments can be combined into an event-partitioned
system model or diagram 0.
- Diagram 0 is a more detailed version of the context diagram and a synonym
for the EPSM.

Decomposing Processes
- The main reason for decomposing processes is to observe the details of one
activity.
- A DFD fragment can be decomposed into sub-processes, just like the Context
diagram can be decomposed into diagram 0.
- Decomposition of a process gives the analyst a clearer idea about the system
requirements for that process.

Physical & Logical DFDs
- If the DFD is a physical model, then one or more assumptions about the
implementation technology are embedded in the DFD.
- If the DFD is a logical model, then it assumes that the system will be
implemented using perfect technology.
- Elements that indicate assumptions about implementation technology are:
i. Technology-specific processes (E.g.: Making copies of a document)
ii. Actor-specific process names (E.g.: Engineer checks parts)
iii. Technology-specific or actor-specific process orders
iv. Redundant processes, data flows, and files
- Physical DFDs are sometimes developed and used during the last stages of
analysis or early stages of design.
- Physical DFDs serve one purpose, and that is to represent one possible
implementation of the logical system requirements.

Evaluating DFD Quality
- A high-quality set of DFDs is identified by its readability, internal consistency,
and accurate representation of system requirements.
- Some important terms:
i. Information overload: It is an undesirable condition that occurs when too
much information is presented to a reader at one time.
ii. Rule of 7 ± 2: It is a rule of model design that limits the number of model
components or connections among components to not more than nine,
and not less than five.
iii. Minimization of interfaces: It is a principle of model design that seeks
simplicity by minimizing the number of interfaces or connections among
model components.

4.1 OBJECT-ORIENTED REQUIREMENTS, SPECIFICATIONS & ANALYSIS

How are SSAD & OOAD different?
- SSAD involves algorithmic decomposition of a large program into smaller
steps, in a top-down manner.
- OOAD emphasizes building an application in a bottom-up manner,
whereby the software system becomes a collection of discrete, similar objects
grouped into classes, each of which incorporates a data structure and has some
behaviour.
- While the emphasis of SSAD is on the tools and techniques of the Structured
Approach used in a top-down manner, OOAD is made up of objects and
classes that are used in a bottom-up approach.
- It is thus said that OOAD is very close to the actual programming of the
system.

Designing Object-Oriented Systems
- When designing an object-oriented software system, it is essential to
decompose the system into smaller parts.
- For example, in a bank, we can look at the Transaction Details Table, the
Accounts Table, and the Account Balance as three unique objects that are
part of the larger system of the Bank that communicate with each other, not
through data flow, but through message-passing.
- Another example is that of a car, which is a well-defined object with a unique
behaviour, that is viewed as a single unit, even though it is actually composed
of many separate smaller parts.

Concepts & Terminology
- Object: An object is a programmatic representation of a physical entity, and
has its own behaviour. (E.g.: An employee, a transaction, an order)
- Instance of an Object:
i. The instance of an object refers to actual physical data defined by the data
structure of an object.
ii. For example, the object Employee has a data structure of EmpID and
EName, and we may say that the Employee whose EName is ABC and
whose EmpID is 1000 is an instance of the object Employee.
- Behaviour:
i. Behaviour of an object is defined as the activities the object performs in a
state.

ii. Each object exhibits particular behaviour in which it performs certain tasks
and may go from one state to another.
iii. For example, a loan object in a Banking application may be paid off, or
processed, or cancelled, etc.
iv. The Window object in a GUI environment, for example, may get focus, or
lose focus, etc.
- Class:
i. A class is defined as a set of objects that have some common
characteristics and/or behaviour.
ii. Objects that do not share any common characteristics & behaviour cannot
be grouped into a single class.
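The Employee example above can be written as a minimal Java sketch (the constructor shape is an assumption): the class defines the data structure, and each object created from it is an instance.

```java
// The class Employee defines the data structure (EmpID, EName) shared by
// all Employee objects; constructor and field names are illustrative.
class Employee {
    final int empId;
    final String eName;

    Employee(int empId, String eName) {
        this.empId = empId;
        this.eName = eName;
    }
}
```

Creating `new Employee(1000, "ABC")` then yields the instance described in the text, with its own copy of the data structure.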

Object-Oriented Themes
- The fundamental mechanisms of object-oriented approach are:
i. Inheritance
ii. Encapsulation
iii. Polymorphism
iv. Data Abstraction

Inheritance Generalization & Specialization
- Generalization is a technique wherein the attributes and functions that are
common to several types of classes are grouped into one main class, called
the superclass.
- The attributes and methods of a superclass are then inherited by the
subclasses.
- Specialization is a technique wherein a subclass inherits attributes from its
superclass, but also has some unique attributes that make it more specific
and specialized.
- Inheritance is thus a relationship among classes wherein one class shares
the structure or behaviour defined by one or more other classes, leading to
reusability of code.
- For example, consider a Hospital_Employee class, and its subclasses Doctor
and Nurse:
i. Since both the doctor and the nurse are hospital employees, they inherit
some common attributes such as ID, Name, and Salary, as well as some
common methods such as calculateSalary().
ii. But each performs specific functions and hence will have some
specialized attributes: the doctor will have Area of Specialization (his
field) as a unique attribute, while the nurse may have Area of Supervision
(the area in the hospital that she supervises).
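The hospital example above might be sketched in Java roughly as follows (the constructor shapes and the simple pass-through salary rule are assumptions for illustration):

```java
// Generalization: attributes and methods common to every hospital employee
// are grouped into the superclass.
class HospitalEmployee {
    final String id;
    final String name;
    double salary;

    HospitalEmployee(String id, String name, double salary) {
        this.id = id;
        this.name = name;
        this.salary = salary;
    }

    // common method inherited unchanged by all subclasses
    double calculateSalary() { return salary; }
}

// Specialization: a Doctor is a HospitalEmployee plus one unique attribute.
class Doctor extends HospitalEmployee {
    final String areaOfSpecialization;

    Doctor(String id, String name, double salary, String area) {
        super(id, name, salary);
        this.areaOfSpecialization = area;
    }
}

// Specialization: a Nurse adds her own unique attribute instead.
class Nurse extends HospitalEmployee {
    final String areaOfSupervision;

    Nurse(String id, String name, double salary, String area) {
        super(id, name, salary);
        this.areaOfSupervision = area;
    }
}
```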

Encapsulation
- Consider the following example (show graphical representation):
i. Consider two objects: a Car and a Mechanic.
ii. The Car has an attribute called Status, which can take two values: "To be
repaired" and "Repaired".
iii. The attribute Status can only be changed through its method
changeStatus().
iv. At some time t, the value of Status is "To be repaired".
v. At this time t, the Mechanic sends a message to the Car object's
changeStatus() method, asking it to change the value of Status to
"Repaired".
vi. Thus, at time t+1, the value of Status is "Repaired".
- Encapsulation is thus, a protective wrapper around the data and code that is
being manipulated within a class.
- It allows other objects, only indirect, limited and controlled access to the data
and code.
- It defines a particular behaviour that can be used by other objects to access
the code and data of that object (like the changeStatus() method).
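A small Java sketch of the Car example (changeStatus() follows the text; the validity check and the getStatus() accessor are assumptions added for completeness):

```java
// The data (status) is hidden behind the protective wrapper of the class;
// the only way in is the changeStatus() method, i.e. the message the
// Mechanic sends.
class Car {
    private String status = "To be repaired";   // encapsulated data

    // controlled, limited access: only valid status values are accepted
    void changeStatus(String newStatus) {
        if (newStatus.equals("To be repaired") || newStatus.equals("Repaired")) {
            status = newStatus;
        }
    }

    String getStatus() { return status; }       // read-only view of the data
}
```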

Polymorphism
- Polymorphism is the ability of an object to respond differently to an identical
message.
- The reaction of the object depends on the variations in the information
supplied along with the message.
- The object accordingly understands the context of the input information and
acts on it.
- For example, a class that contains a method to calculate the salary of
employees accepts the Employee Type Code and the Employee ID, and
accordingly decides what pay structure to apply and what taxes to deduct
from the salary.
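One common way to realize this in Java is through method overriding: the same calculateSalary() message yields a different response depending on the actual type of the object. The class names and pay figures below are invented for illustration:

```java
// The same message, calculateSalary(), produces different responses
// depending on the object's actual type.
abstract class Staff {
    final double basicPay;

    Staff(double basicPay) { this.basicPay = basicPay; }

    abstract double calculateSalary();   // one message, many forms
}

class PermanentStaff extends Staff {
    PermanentStaff(double basicPay) { super(basicPay); }

    // illustrative pay structure: 20% allowance added, 10% tax deducted
    @Override
    double calculateSalary() { return basicPay * 1.20 - basicPay * 0.10; }
}

class ContractStaff extends Staff {
    ContractStaff(double basicPay) { super(basicPay); }

    // illustrative pay structure: flat pay, no allowance or tax
    @Override
    double calculateSalary() { return basicPay; }
}
```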

Data Abstraction
- Each object exhibits some behaviour (that is defined by its methods) and has
some specific attributes.
- The process of identifying the necessary information about an object is known
as Data Abstraction.
- Advantages:
i. Abstraction focuses on the problem.
ii. It identifies essential characteristics and methods, and helps eliminate
unnecessary details associated with the object.
- For example: every Account object may have the attributes Name, ID,
Balance, and the methods withdraw(), deposit(), etc.
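The Account example above could be sketched in Java as follows (the insufficient-funds check and the getBalance() accessor are assumptions added for completeness):

```java
// Abstraction keeps only the essentials named in the text: an ID, a Name,
// a Balance, and the withdraw()/deposit() behaviour; every other detail of
// a real account is omitted.
class Account {
    final String id;
    final String name;
    private double balance;

    Account(String id, String name, double openingBalance) {
        this.id = id;
        this.name = name;
        this.balance = openingBalance;
    }

    void deposit(double amount) { balance += amount; }

    // assumption: a withdrawal is refused when funds are insufficient
    boolean withdraw(double amount) {
        if (amount > balance) return false;
        balance -= amount;
        return true;
    }

    double getBalance() { return balance; }
}
```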

Advantages of OOAD
- The object-oriented approach considers the process of analysis, design and
implementation in terms of the development of the application.
- It promotes reuse of objects when they are applied to a new business problem
(reuse is important because modifications are carried out faster and there are
fewer errors and maintenance problems).
- It includes powerful concepts of Inheritance, Encapsulation, Polymorphism
and Data Abstraction.


How to identify Objects & Classes?
- The objects in real-life projects are identified as follows:
i. Interfaces & Presentation
ii. Control
iii. Data Management
- Interfaces & Presentation include objects that manage the system's
interaction with its users through menu screens, query screens, etc.
- Control includes objects that manage the control of the application flow and
decision making, objects such as the Account Class or the Customer Class.
- Data management includes objects that manage the data structure of the
application, i.e. the actual data content of the system, such as Account data,
Customer data.

Guidelines to identify/describe Classes
- When identifying classes, first look for tangible things (E.g.: A person, an
invoice, an order, etc.)
- Identify the roles played (E.g.: Patient, Doctor, Employee, etc.)
- Identify the events that take place (E.g.: Opening an Account, Printing a
Report, etc.)

Steps in building a Class Diagram
- Identify and name the classes
- Identify, name and assign the associations between these classes (one-one,
one-many, etc).
- Identify inheritance, generalization.

Class Diagram: Java Implementation

(NOTE: Please draw the diagram yourself)

public class Figure
{
    private double size;
    private double position;
    private static int figureCount = 0;   // shared count of all Figure objects

    public Figure(double size, double position)
    {
        this.size = size;
        this.position = position;
        figureCount++;
    }

    public static int getFigureCount()
    {
        return figureCount;
    }
}


Unified Modelling Language (UML)
- Introduction:
i. The UML was developed by Grady Booch & Jim Rumbaugh of the
Rational Software Corporation in late 1994.
ii. It is a standard language for specification, visualization, and construction
of software components.
- Description:
i. The UML represents a collection of diagrams (i.e. tools) that have proven
successful in the modelling of large and complex software systems.
ii. It is a very important part of developing object-oriented software and is
constantly required in the development process.
- Goals:
i. The primary goal of the UML is to provide users with a ready-to-use,
expressive modelling language, which can develop and exchange
meaningful models of the system under development.
ii. The UML provides specialization mechanisms to explain core concepts.
iii. It encourages the growth of object-oriented tools and software.
iv. It supports higher-level development concepts.
v. It helps to understand the object-oriented modelling language and
integrate its best practices into the object-oriented system.
- Diagrams under UML:
i. Class Diagram
ii. Use Case Diagram
iii. Sequence Diagram
iv. Collaboration Diagram
v. State Chart Diagram
vi. Activity Diagram
vii. Component Diagram
viii. Deployment Diagram

Use Case Diagram (UCD)
- A Use Case Diagram represents interaction between the user and the
system.
- It represents the relationship between the ACTOR (an entity) and a USE
CASE (an event).
- Thus we can identify the two main components of a UCD as the actor and the
use case.
- How to draw a UCD?
i. Identify the events involved, using an Event Table, and accordingly
identify all the events and actors.
ii. Then, list a sequence of steps that a user might take in order to complete
an event; these steps are the use cases.
iii. Now establish relationships between the actors, and use cases involved.
iv. The <<include>> statement indicates flow of data from one use case to
another.


Sequence Diagram (SD)
- A Sequence Diagram is a model that shows the sequence of messages
passed between objects during a use case.
- It focuses on the details of the messages as well as the sequence of interaction
between objects that occurs during the flow of events for a single use case.
- Message:
i. It can be considered as a method.
ii. It consists of two parts (a) directional arrow and (b) description.
iii. Syntax: [true/false condition] return_value := message_name (
parameter_list )
- Lifeline:
i. The Lifeline is a vertical line under an object or actor in a sequence
diagram used to show the passage of time for that object or actor
(remember: time flows from top to bottom).
- Activation Lifeline:
i. The Activation Lifeline is a narrow vertical rectangle used to emphasize
that the concerned object is active only during a part of the scenario for
that sequence diagram.
- Difference between a Customer Actor & a Customer Object:
i. A Customer Actor is an external physical person who plays the role of a
customer.
ii. A Customer Object is a computer object (a record) that maintains
information about the Customer Actor.
iii. We can say that Customer Object is a virtual Customer.
- Steps to develop a Sequence Diagram:
i. Identify all objects and actors (that have occurred in the use case
diagram).
ii. Based on the flow of activities, identify each message and its source &
destination.
iii. Identify whether the message is sent only under a certain condition.
iv. Also identify the parameters required to be sent with that message.
v. Give the message a proper name.
vi. Place the message correctly in the sequence of messages.
Collaboration Diagram
- A Collaboration Diagram shows how actors and objects in a use case
collaborate with each other.
- Though the information contained in the Collaboration Diagram is the same
as that in the Sequence Diagram, the focus of each of the two diagrams is
different.
- While in a Sequence Diagram, the focus is on the sequence and details of the
messages, the emphasis of a Collaboration Diagram is on how the objects
and actors interact to carry out a use case.
- Also, there is no lifeline symbol in a Collaboration Diagram.
- Each message instead is numbered sequentially to indicate the order of the
message.

- Link-Line: It is a connecting line between two actors/objects on which all
messages are placed.

Object States
- State:
i. A state of an object is a condition during which the object satisfies
some criterion, performs some action, or waits for an event.
ii. Each object has to complete a lifecycle from creation to destruction.
iii. An object comes into existence in a particular state and performs some
activity or task while it is in that state.
iv. For example, a machine is initially in a "not working" (off) state. The
moment it is switched on, it goes into the "working" (on) state. In the
on state, the machine can perform its manufacturing processes.
- Representation of a State:
i. A state is represented by a Rounded Rectangle.
- Concurrency or Concurrent State:
i. The condition of an object being in more than one state at a particular
time is called concurrency or a concurrent state.
- Composite State:
i. A high-level state that has other states nested within it is called a
composite state.
- Action/Behaviour:
i. The activity performed by an object in a particular state is known as
the action or behaviour of that object.
- Entry Point & Exit Point:
i. The entry point (i.e. point of creation of an object) is represented by a
darkened circle, while the exit point (i.e. point of destruction of an object)
is represented by two concentric circles (where the inner circle is
darkened).

Object Transitions
- A transition is a mechanism that causes an object to leave its original state
and change to a new state.
- Object states are semi-permanent, since a transition can interrupt and end
a particular state.
- Transitions themselves, however, cannot be interrupted.
- Once a transition begins, it runs to completion, taking the object to a
new (destination) state.
- A transition is represented by an arrow.
- Syntax: transition_name ( parameters ) [guard condition]/action_expression
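Using the machine example from the previous section (the names are invented
for illustration), a transition following this syntax might read:

```
switchOn( powerSource ) [power available] / start motor
```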

Internal Transitions
- Internal Transition is a special kind of transition that does not cause the object
to leave a particular state.
- There are three versions of Internal Transition:
i. Entry/action expression

ii. Do/action expression
iii. Exit/action expression
- For example, a machine in the ON state may move from an Idle substate to a
Working substate without leaving the ON state.

State Chart Diagram (SCD)
- A State Chart Diagram shows the various states an object passes through
during its lifetime, along with the transitions between those states.

Activity Diagram
- An Activity Diagram shows the flow of activities in a process or use case,
including decision points and parallel behaviour.

Component Diagram
- A Component Diagram shows the software components of the system and the
dependencies among them.

Deployment Diagram
- A Deployment Diagram shows how software components are deployed on the
physical hardware nodes of the system.
5.1 Object-Oriented Databases

Object Database Management System (ODBMS)
- ODBMS is a database management system that stores data as objects or
class instances.
- ODBMSs are designed specifically to store objects and to interface with
object-oriented programming languages.
- The advantage of using a DBMS specifically designed for objects is direct
support for method storage, inheritance, nested objects, object linking, and
programmer-defined data types.
- ODBMSs have extensive usage in scientific and engineering applications
based on OO tools.
- ODBMSs are also expected to gradually supplant RDBMSs in more traditional
business applications as OO technology gains wider recognition and
acceptance.
- One of the standards proposed by the Object Database Management Group
is the Object Definition Language (ODL).
- ODL can be defined as a standard object database description language
promulgated by the Object Database Management Group for the purpose of
describing the structure and content of an object database.

Designing Object Databases
- To create an object database schema from a class diagram, follow these
steps:

1. Determine which classes require persistent storage.
2. Define persistent classes.
3. Represent relationships among persistent classes.
4. Choose appropriate data types and value restrictions (if necessary) for
each field.

Representing Classes
- Objects can be classified into two broad types for purposes of data
management: a) transient objects & b) persistent objects.
- Transient objects
i. A transient object is an object that doesn't need to store any attribute
values between instantiations or method invocations.
ii. It exists only during the lifetime of a program or process, i.e. it is
created each time a program or process is executed and then destroyed
when the program or process terminates.
iii. Example: objects created to implement user-interface components (such as
a view window or pop-up menu).
- Persistent object
i. A persistent object is not destroyed when the program or process that
creates it ceases execution.
ii. Instead, the object continues to exist independently of any program or
process.
iii. Storing the object state in persistent memory (such as a magnetic or
optical disk) ensures that the object exists between process executions.
iv. Objects can be persistently stored with a file or database management
system.
- An object database schema includes a definition for each class that requires
persistent storage.
- ODL class definitions can derive from the corresponding UML class diagram.
- Thus, classes already defined in UML are reused for the database schema
definition.
- For example, an ODL description of the RMO Customer class is
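The figure containing the declaration is missing from this copy. Based on the
ODL conventions described in this section, it would look roughly like the
sketch below; the attribute list is an assumption, except AccountNumber, which
these notes later identify as the Customer key:

```
class Customer
{
    // attribute names are illustrative assumptions,
    // except AccountNumber, which the notes mention later
    attribute string AccountNumber;
    attribute string Name;
    attribute string BillingAddress;
    attribute string DayTelephoneNumber;
};
```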

- Once each class has been defined, then relationships among classes must
be defined.

Representing Relationships

Object Identifiers:
- An object identifier is a physical storage address, or a reference that can
be converted to a physical storage address at run time. It can be stored
within another object to represent a relationship, and is used to relate
objects of one class to objects of another class.

- Each object stored within an ODBMS is automatically assigned a unique
object identifier.
- An ODBMS represents relationships by storing the identifier of one object
within related objects.
- Object identifiers bind related objects together.
- Example: Consider a one-to-one relationship between the classes Employee
and Workstation. Each Employee object has an attribute called Computer that
contains the object identifier of the Workstation object assigned to that
employee. Each Workstation object has a matching attribute called User that
contains the object identifier of the Employee that uses that workstation.

Navigation:
- The ODBMS uses attributes containing object identifiers to find objects that
are related to other objects.
- The process of extracting an object identifier from one object and using it to
access another object is called navigation.
- Example: Consider the user query: "List the manufacturer of the workstation
assigned to employee Joe Smith." A query processor can find the requested
Employee object via the Name attribute value "Joe Smith", and then Joe
Smith's Workstation object by using the object identifier stored in
Computer. Note that a query processor can answer the opposite query (list
the name of the employee assigned to a specific workstation) by using the
object identifier stored in User.
- Thus, a matched pair of attributes is required to allow a relationship to be
navigated in both directions.

How to define relationships:
- Attributes that represent relationships are specified indirectly by an object
database schema designer by declaring relationships between objects.
- Example: Consider the following class declarations for the ODL schema
language:
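The declarations themselves are missing from this copy. Based on the
discussion that follows (the relationship names Uses and AssignedTo and the
keyword inverse are taken from the text; the attributes are assumptions), they
would look roughly like this:

```
class Employee
{
    attribute string Name;
    relationship Workstation Uses
        inverse Workstation::AssignedTo;
};

class Workstation
{
    attribute string Manufacturer;
    relationship Employee AssignedTo
        inverse Employee::Uses;
};
```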

- Here, the keyword relationship is used to declare a relationship between one
class and another, i.e. the class Employee has a relationship called Uses
with the class Workstation. The class Workstation has a matching
relationship called AssignedTo with the class Employee. Each relationship
includes a declaration of the matching relationship in the other class using the
keyword inverse.
- The ODBMS is thus informed that the two relationships are actually mirror
images of one another.

- There are two advantages to declaring a relationship as shown above instead
of creating an attribute containing an object identifier:
i. The ODBMS assumes responsibility for determining how to implement the
connection among objects, i.e. the schema designer has declared an
attribute of type relationship and left it up to the ODBMS to determine an
appropriate type for that attribute.
ii. The ODBMS assumes responsibility for maintaining referential integrity;
so, for example, deleting a workstation will cause the Uses link of the
related Employee object to be set to NULL or undefined.
- ODBMSs will automatically create attributes containing object identifiers to
implement declared relationships, where the user and programmer will be
shielded from all details of how those identifiers are actually implemented and
manipulated.

One-to-Many Relationships:
- Consider the one-to-many relationship between the classes Customer and
Order, where Customer can make many different Orders, but a single Order
can be made by only one Customer.
- A single object identifier is required to represent the relationship of an Order
to a Customer, whereas multiple object identifiers are required to represent
the relationship between one Customer and many different Orders.
- Partial ODL class declarations for the classes Customer and Order are as
follows:
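The declarations are missing from this copy; a sketch of what they might look
like follows. The relationship name Makes and the use of set<> are taken from
the text, while the inverse relationship name MadeBy and the attributes are
assumptions:

```
class Customer
{
    attribute string AccountNumber;
    relationship set<Order> Makes      // a set: one Customer, many Orders
        inverse Order::MadeBy;
};

class Order
{
    attribute string OrderNumber;
    relationship Customer MadeBy       // a single object: one Customer per Order
        inverse Customer::Makes;
};
```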

- The relationship Makes is declared between a single Customer object and a
set of Order objects.
- By declaring the relationship as a set, you instruct the ODBMS to allocate
multiple Order object identifier attributes dynamically to each Customer
object.

Many-to-Many Relationships:
- A many-to-many relationship is represented differently depending on whether
the relationship has any attributes.
- Many-to-many relationships without attributes are represented similarly to
one-to-many relationships.
- For example, assume that the many-to-many relationship between the RMO
classes Catalog and ProductItem has no attributes.

- In this case, the relationship can be represented as follows:
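The figure is missing from this copy. The representation described (a set
declared on both sides) might be sketched as follows, where the relationship
names Contains and AppearsIn are assumptions:

```
class Catalog
{
    relationship set<ProductItem> Contains
        inverse ProductItem::AppearsIn;
};

class ProductItem
{
    relationship set<Catalog> AppearsIn
        inverse Catalog::Contains;
};
```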

- Note that the relationship is declared as a set within both class
declarations, allowing an object of either class to be related to multiple
objects of the other class.
- A many-to-many relationship with attributes poses a problem: in ODL, an
attribute can be declared only as part of a class, not as part of a
relationship.
- But relationships among objects can (and often do) have attributes.
- Example: The relationship "marriage" between two Person objects has
attributes such as the date, time, and place of the wedding; these are
attributes of the relationship itself, not of the objects participating in
the relationship.
- Refer to Figure 10-18 for an example of the above.

Association Class:
- An association class is a class that represents a relationship and stores the
attributes of that relationship.
- Example: the association class CatalogProduct can be created to represent
the many-to-many relationship between Catalog and ProductItem; the
many-to-many relationship is thereby decomposed into a pair of one-to-many
relationships between the original classes and the new association class.
- The schema class declarations are as follows:
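The declarations are missing from this copy. A sketch of the decomposition
into a pair of one-to-many relationships through the association class might
look as follows; all relationship names and the SpecialPrice attribute are
assumptions:

```
class Catalog
{
    relationship set<CatalogProduct> ContainsEntry
        inverse CatalogProduct::AppearsIn;
};

class ProductItem
{
    relationship set<CatalogProduct> ListedAs
        inverse CatalogProduct::Describes;
};

class CatalogProduct
{
    attribute float SpecialPrice;        // an attribute of the relationship itself
    relationship Catalog AppearsIn       // many CatalogProduct entries per Catalog
        inverse Catalog::ContainsEntry;
    relationship ProductItem Describes   // many CatalogProduct entries per ProductItem
        inverse ProductItem::ListedAs;
};
```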




Generalization Relationships:
- Consider the generalization hierarchy in Figure 10.19.
- Here, Web Order, Telephone Order, and Mail Order are each more specific
versions of the class Order.

- The ODL class definitions that represent these classes and their
interrelationships are as follows:
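The definitions are missing from this copy. Using the extends keyword
described below, they would look roughly like this sketch; the attributes
EmailAddress, ReplyMethod, DateReceived, and ProcessorClerk are taken from the
later discussion of the hybrid design, while the rest are assumptions:

```
class Order
{
    attribute string OrderNumber;
    attribute date   OrderDate;        // assumption
};

class WebOrder extends Order
{
    attribute string EmailAddress;
    attribute string ReplyMethod;
};

class TelephoneOrder extends Order
{
    attribute string ClerkName;        // assumption
};

class MailOrder extends Order
{
    attribute date   DateReceived;
    attribute string ProcessorClerk;
};
```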

- The keyword extends indicates that WebOrder, TelephoneOrder, and
MailOrder derive from Order.
- When stored in an object database, objects of the three derived classes will
inherit all of the attributes, methods, and relationships defined for the Order
class.
- Key attributes are not required in an object database, since referential
integrity is implemented with object identifiers.
- But key attributes are useful in object databases to guarantee unique
object content and to provide a means of querying database contents.
- The ODBMS automatically enforces uniqueness of key attributes in an object
database.
- Thus, declaring an attribute to be a key guarantees that no more than one
object in a class can have the same key value.
5.2 Hybrid Object-Relational Databases

Hybrid Object-Relational Database Design
- A relational database management system used to store object attributes and
relationships is called a hybrid object-relational DBMS or hybrid DBMS.
- The hybrid DBMS approach is currently the most widely employed approach
to persistent object storage.
- Designing a hybrid database is essentially two design problems in one, since
a complete relational database schema must be designed, and at the same
time, the designer must also develop an equivalent set of classes to represent
the relational database contents within the OO programs.
- This is a complex task because the database designer must bridge the
differences between the object-oriented and relational views of stored data.

- The following are the most important mismatches between the relational and
OO views of stored data:
i. Data stored in an RDBMS is static; programmer-defined methods cannot be
stored or automatically executed.
ii. ODBMSs can represent a wider range of relationship types than RDBMSs,
including classification hierarchies and whole-part aggregations, whereas
relationships in an RDBMS can be represented only through referential
integrity.
iii. ERDs have no features that can represent methods; thus, programs that
access the database must implement methods internally.
iv. Inheritance cannot be directly represented in an RDBMS because a
classification hierarchy cannot be directly represented.
- Although the relational and OO views of stored data have significant
differences, they also have significant overlaps.
- There is a considerable degree of overlap between the two representations,
including the following:
i. Grouping of data items into entities or classes
ii. Defining one-to-one, one-to-many, and many-to-many relationships among
entities or classes
- This overlap provides a basis for representing classes and objects within a
relational database.

Classes and Attributes

- Designers can store classes and object attributes in an RDBMS by defining
appropriate tables in which to store them.
- For a completely new system, a relational schema can be designed based on
a class diagram in essentially the same fashion as for an ERD.
- A table is created to represent each class, the fields of each table represent
the attributes of the corresponding class, while each row holds the attribute
values of a single object.
- A key field (or group of fields) must be chosen for each table.
- As described earlier, a designer can choose a natural key field from the
existing attributes or add an invented key field.
- Primary key fields are needed to guarantee uniqueness within tables and to
represent relationships using foreign keys.

Relationships

- ODBMSs use object identifiers to represent relationships among objects.
- But object identifiers are not created by RDBMSs, so relationships among
objects stored in a relational database must be represented using foreign
keys.
- Foreign key values serve the same purpose as object identifiers in an
ODBMS, i.e. they provide a means for one object to refer to another.

- To represent one-to-many relationships, designers add the primary key field
of the class on the one side of the relationship to the table representing the
class on the many side of the relationship.
- To represent many-to-many relationships, designers create a new table that
contains the primary key fields of the related class tables and any attributes of
the relationship itself.
- Example: The one-to-many relationship between the Customer and Order
classes is represented by the foreign key AccountNumber stored in the Order
table. The many-to-many relationship between the Catalog and ProductItem
classes is represented by the table CatalogProduct, which contains the
foreign keys CatalogNumber and ProductNumber.
- Classification relationships such as the relationship among Order, MailOrder,
TelephoneOrder, and WebOrder are a special case in relational database
design.
- Just as a child class inherits the data and methods of a parent class, a table
representing a child class inherits some or all of its data from the table
representing its parent class.
- This inheritance can be represented in two ways:
i. Combine all the tables into a single table containing the superset of all
class attributes but excluding any invented key fields of the child
classes.
ii. Use separate tables to represent the child classes and substitute the
primary key of the parent class table for the invented keys of the child
class tables.
- Either method is an acceptable approach to representing a classification
relationship.
- Using the first method, all of the non-key fields from MailOrder,
TelephoneOrder, and WebOrder have been added to the Order table, so for
any particular order, some of the field values in each row will be null. For
example, a row representing a telephone order would have no values for the
fields EmailAddress, ReplyMethod, DateReceived, and ProcessorClerk.
- Using the second method for representing inheritance, the relationship among
the three child order types and the parent Order table is represented by the
foreign key OrderNumber in all three child class tables. The invented key of
each table has now been removed. Thus, in each case, the foreign key
representing the inheritance relationship also serves as the primary key of the
table representing the child class.

Data Types
- A data type is a storage format that determines the allowable content of a
field, class attribute, or program variable.
- A primitive data type is a storage format directly implemented by computer
hardware or a programming language [Examples: memory address (a
pointer), boolean, integer, unsigned integer, short integer (one byte), long
integer (multiple bytes), single characters, real numbers (floating point
numbers), and double precision (double-length) integers and real numbers].

- Some procedural programming languages (such as C) and most OO
languages enable programmers to define additional data types using the
primitive data types as building blocks.
- As information systems have become more complex, the number of data
types used to implement them has increased.
- A complex data type is a data type not directly supported by computer
hardware or a programming language.
- Some modern data types: dates, times, currency (money), audio streams,
video images, motion video streams, and uniform resource locators (URLs, or
web links).
- They may also be called user-defined data types because they may be
defined by users during analysis and design or by programmers during design
and implementation.

Relational DBMS Data Types

- For each field in a relational database schema, the designer must choose an
appropriate data type.
- For many fields, the choice of a data type is relatively straightforward.
- Example: Customer names and addresses: fixed- or variable-length character
arrays. Inventory quantities and item prices: integers and real numbers,
respectively. Color: a character array containing the name of the color, or
a set of three integers representing the intensities of the primary colors
red, blue, and green.
- Modern RDBMSs have added an increasing number of new data types to
represent the data required by modern information systems, such as DATE,
LONG, and LONGRAW.
- LONG is typically used to store large quantities of formatted or unformatted
text (such as a word processing document), whereas LONGRAW can be
used to store large binary data values, including encoded pictures, sound,
and motion video.
- Modern RDBMSs can also perform many validity and format checks on data as
it is stored in the database, such as a check that a quantity on hand
cannot be negative, or that a string containing a URL must begin with
"http://".
- The validity and format constraints are then automatically shared by all
application programs that use the database.
- Each program is simpler, and the possibility for errors due to mismatches
among data validation logic is eliminated.
- Application programs still have to provide program logic to recover from
attempts to add bad data, but they are freed from actually performing validity
checks.

Object DBMS Data Types

- ODBMSs typically provide a set of primitive and complex data types
comparable to those of an RDBMS.

- ODBMSs also allow a schema designer to define format and value
constraints.
- But ODBMSs provide an even more powerful way to define useful data types
and constraints, where a schema designer can define a new data type and its
associated constraints as a new class.
- A class is a complex user-defined data type that combines the traditional
concept of data with processes (methods) that manipulate that data.
- In most OO programming languages, programmers are free to design new
data types (classes) that extend those already defined by the programming
language.
- Incompatibility between system requirements and available data types is not
an issue, since the designer can design classes to specifically meet the
requirements.
- To the ODBMS, instances of the new data type are simply objects to be
stored in the database.
- Class methods can perform many of the type and error checking functions
previously performed by application program code and/or by the DBMS itself.
- Thus, the programmer constructs a custom-designed data type and all of the
programming logic required to use it correctly, and indirectly performs
validity checking and format conversion by extracting and executing
programmer-defined methods stored in the database.
- The DBMS is thus freed from direct responsibility for managing complex data
types and the values stored therein.
- The flexibility to define new data types is a chief reason that OO tools are so
widely employed in non-business information systems.
- In fields such as engineering, biology and physics, stored data is considerably
more complex than simple strings, numbers, and dates.
- OO tools enable database designers and programmers to design custom data
types that are specific to a problem domain.
5.3 Distributed Databases

Distributed Databases
- An organization's data is typically stored in many different databases
rather than one, often under the control of many different DBMSs, for the
following reasons:
i. Information systems may have been developed at different times using
different DBMSs.
ii. Parts of an organization's data may be owned and managed by different
organizational units.
iii. System performance is improved when data is physically close to the
applications that use it.
- Information systems of varying size and function are developed for different
purposes, using different tools and support environments, and under the
direction and control of different parts of an organization.

- As a result, the data of a single large organization is typically fragmented
across a number of hardware, software, organizational, and geographic
boundaries.
- Database access is a significant performance factor in most information
systems, since much of the activity of a typical information system is querying
and updating database contents.
- Thus, delays in processing or answering an application program's database
requests largely determine the application's throughput and response time.

Distributed Database Architectures

- There are several possible architectures for distributing database services,
including the following:
i. Single database server
ii. Replicated database servers
iii. Partitioned database servers
iv. Federated database servers
- Combinations of these architectures are also possible.

Single Database Server:
- In this architecture, clients on one or more LANs share a single database
located on a single computer system.
- This database server may be connected to one of the LANs or directly to the
WAN backbone (Connection directly to the WAN ensures that no one LAN is
overloaded by all of the network traffic to and from the database server).
- Advantage: Simplicity, since there is only one server to manage, and all
clients are programmed to direct requests to that server.
- Disadvantage 1: Server Failure & Performance Bottleneck:
i. Susceptibility to server failure and possible overload of the network or
server, with no backup capabilities.
ii. Poorly suited to applications that must be available 24 hours a day,
seven days a week.
iii. Performance bottlenecks can arise within the single database server or
in the network segment to which the server is attached.
iv. The server may be unable to respond quickly to all of the service
requests it receives.
- Solution:
i. To improve performance, a more powerful computer system may be
designated as the database server; but even then, employing the largest
mainframes may be impractical due to cost, system management, or network
performance considerations.
- Disadvantage 2: Network Traffic:
i. Requests to and responses from a database server may traverse large
distances across local and wide area networks.
ii. Database transactions must also compete with other types of network
traffic (such as voice, video, and web site accesses) for available
transmission capacity.

iii. Thus, delays in accessing a remote database server may result from
network congestion or propagation delay from client to server.
- Solution:
i. One way to reduce network congestion is to increase capacity within the
entire network, but this is an expensive and impractical solution.
ii. Another approach, specifically geared to improving database access
speed, is to locate database servers physically close to their clients
(for example, on the same LAN segment), minimizing distance-related
delay for requests and responses and removing a large amount of WAN
traffic.
iii. Moving a database server closer to its clients is a relatively simple
matter when all of the clients are located close to one another; but
when clients are widely dispersed, no single location for the database
server can improve database access performance for all clients at the
same time.
iv. Thus, distant clients must pay a greater performance penalty for
database access.

Replicated Database Servers
- Using a replicated database server architecture can eliminate delay in
accessing distant database servers.
- Each server is located near one group of clients and maintains a separate
copy of the needed data, and the clients are configured to interact with the
database server on their own LAN.
- Database accesses are eliminated from the WAN, and propagation delay is
minimized.
- Also, local network and database server capacity can be independently
optimized to local needs.
- Replicated database servers also make an information system more fault
tolerant.
- Applications can be programmed to direct access requests to any available
server, with preference to the nearest server.
- When a server is unavailable, client access can be automatically redirected to
another available server, i.e. accesses are redirected across a WAN only
when a local database server is unavailable.
- A transaction server interposed between clients and replicated database
servers monitors loads on all database servers and automatically directs
client requests to the server with the lowest load.
- Drawbacks & their Solutions:
i. To prevent data inconsistency, updates to each database copy must
periodically be propagated to all other copies of the database.
ii. This requires database synchronization: the process of ensuring
consistency among two or more database copies.
iii. Designers can implement synchronization by developing customized
synchronization programs or by using synchronization utilities built into
the DBMS.

iv. Custom application programs are seldom employed because they are
difficult to develop and because they would need to be modified each time
the database schema or the number and location of database copies change.
v. Many DBMSs provide utilities for automatic or manual synchronization of
database copies.
vi. Such utilities are generally powerful and flexible, but also expensive.
vii. Incompatibilities among the methods DBMSs use to perform synchronization
make mixing DBMSs from different vendors impractical.
viii. The time delay between an update to a database copy and the propagation
of that update to other database copies is an important database design
decision.
ix. During the time between the original update and the update of the
database copies, application programs that access outdated copies aren't
receiving responses that reflect current reality.
x. Designers can address this problem by reducing the synchronization
delay, but shorter delays imply more frequent (or possibly continuous)
database synchronization.
xi. Synchronization then consumes a substantial amount of database server
capacity, and a large amount of network capacity among the related
database servers must be provided.
xii. The proper synchronization strategy is a complex trade-off among cost,
hardware and network capacity, and the need of application programs and
users for current data.

Partitioned Database Servers:
- The database schema can be divided into partitions, where each partition is
accessed by a different group of clients.
- Traffic among clients and the database server in each group is restricted to a
local area network.
- Partitioned database server architecture is feasible only when a schema can
be cleanly partitioned among client access groups.
- Client groups must require access to well-defined subsets of a database (for
example, marketing data rather than production data).
- Members of a client access group must be located in small geographic
regions, but if a single access group is spread among multiple geographic
sites (for example, order processing at three regional centers), then a
combination of replicated and partitioned database server architecture is
usually required.
- It is seldom possible to partition a database schema into mutually
exclusive subsets; some portions of a database are typically needed by most
or all users and must exist in each partition.
- Common database contents would exist on each server, and those contents
would need to be synchronized periodically.
- Thus, partitioning can reduce the problems associated with database
synchronization, but seldom eliminates them entirely.


Federated Database Servers:
- Some information systems are best served by a federated database
architecture, which is commonly used to access data stored in databases with
incompatible storage models (for example, network and relational models) or
incompatible DBMSs.
- A single unified database schema is created on a combined database server,
which acts as an intermediary between application programs and the
databases residing on other servers.
- Database requests are first sent to the combined database server, which in
turn makes appropriate requests of the underlying database servers.
- Results from multiple servers are combined and reformatted to fit the unified
schema before the system returns a response to the client.
- Federated database server architecture can be extremely complex.
- A number of DBMS products are available to implement such systems, but
they are typically expensive and difficult to implement and maintain.
- Federated database architectures also tend to have high requirements for
computer system and network capacity, but this expense and management
complexity is generally less than would be required to implement and
maintain application programs that interact directly with all of the underlying
databases.
- A common use of federated database server architecture is to implement a
data warehouse, which is a collection of data used to support structured and
unstructured managerial decisions.
- Data warehouses typically draw their content from operational databases
within an organization and multiple external databases (for example,
economic and trade data from databases maintained by governments, trade
industry associations, and private research organizations).
- Because data originates from a large number of incompatible databases, a
federated architecture is typically the only feasible approach for implementing
a data warehouse.
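The mediating role of the combined database server can be sketched with a toy example. The two `query_*` functions below are illustrative stand-ins for real DBMS drivers with incompatible result formats; the federated layer fans the request out and reformats everything into one unified schema.

```python
# Toy sketch of a federated (combined) database server. The back-end
# query functions and their result shapes are illustrative only.
def query_relational(region):
    return [("widget", 120)]          # tuple rows, relational style

def query_legacy(region):
    return {"gadget": 75}             # keyed records, legacy network-model style

def federated_query(region):
    # Combine and reformat results from both back ends into a unified schema
    # before returning a single response to the client.
    unified = [{"product": p, "units": u} for p, u in query_relational(region)]
    unified += [{"product": p, "units": u} for p, u in query_legacy(region).items()]
    return unified

print(federated_query("east"))
```

Application programs see only `federated_query` and its unified schema; the complexity of talking to each underlying database is hidden inside the combined server, which is why this architecture, though expensive, is cheaper than teaching every application to speak to every back end.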
5.4
Client-Server Software Engineering

Component Based Software Engineering. Engineering of component based systems. The
CBSE process. Domain engineering. The component based development. Classifying and
retrieving components. Economics of CBSE.

The Structure of Client/Server System:
- Hardware, Software, Databases, and Network Technologies together
contribute to computer architecture.
- A root system, i.e. the server, is a repository of data, and has many
clients connected to it.
- The logic is that of one computer residing above another: the computer
above is the server, and the computers below are the clients.

- The client issues requests to the server, and the server processes these
requests if they are valid.

Implementations:
- File Server: Client requests for specific information from a file, and the server
accordingly provides it.
- Database Server: Client issues an SQL request and the database server
processes the request and returns the result to the client.
- Transaction Server: Client request invokes a procedure at the server-side and
the result is transmitted back to the client.
- Groupware Server: Server provides applications which enable clients to
communicate with each other.
- Based on the requests passed within the client-server system, the required
software components must be distributed between the server and the
clients.
- Three subsystems of the client-server architecture:
i.User Interaction
ii.Application
iii.Database
- User Interaction Subsystem: This subsystem consists of the user interface.
- Application Subsystem: This subsystem implements the requirements defined
by the application, for example, processing of requests, calculations.
- Database Management Subsystem: This subsystem performs manipulation
and management of data required by application, such as processing of SQL
transactions, addition of records, etc.

Guidelines for Distribution of the Subsystems:
- The User Interaction subsystem is usually placed on the client, as it depends
on the available computer and the window-based environment used.
- The Database Management subsystem and the database access capability
are located on the server.
- The data commonly required by the user should be placed on the client, to
minimize the loading of the server.

Linking of Client-Server Software Subsystems:
- A number of different mechanisms are used to link the various subsystems of
the client server architecture.
- The most common types of linking mechanisms are:
i.Pipe: Widely used in UNIX-based systems; they permit transfer of information
between different machines running on different operating systems.
ii.Remote Procedure Call: They permit a process to invoke the execution of
another process or module based on different machines.
iii.Client-Server SQL Interaction: This is used to pass SQL requests and
associated data from one component to another; this mechanism is
usually used with Relational Database Management System applications.
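The remote procedure call mechanism above can be illustrated with Python's standard-library `xmlrpc` package, used here purely as one concrete RPC implementation; the `lookup_price` procedure and its price table are illustrative. The key point is that the client's call reads like a local function call, but the body executes in the server process.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure that remote clients may invoke.
def lookup_price(item):
    prices = {"widget": 2.5, "gadget": 4.75}   # stand-in for server-side data
    return prices.get(item, 0.0)

server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)  # port 0 = any free port
server.register_function(lookup_price)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: invoke the remote procedure as if it were local; the
# arguments and result travel over the network transparently.
client = ServerProxy(f"http://localhost:{port}")
result = client.lookup_price("widget")
print(result)   # 2.5
```

In a production client-server system the same pattern appears with heavier-weight RPC frameworks, but the division of labour (client issues the call, server executes it) is identical.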


Designing Client-Server Systems:
- The Software Development Life Cycle applies to the designing of Client-
Server Systems as well.

Testing Strategies for Client-Server Software Components:
- Testing of client-server software components occurs at 3 levels.
i.First, individual client-server applications are tested in isolation;
the operation of the server and the network is not taken into
consideration.
ii.Then, the client software and the associated server applications are tested,
but network operations are not explicitly exercised.
iii.Finally, the complete client-server architecture, including its network
operations and performance, is tested.
- Many different types of tests are conducted at each of these levels:
i.Application Function Test tests the functionality of the client application, and
attempts to uncover errors in the operations.
ii.Server Tests are conducted to test the coordination and data management
functions of the server.
iii.Database Test is conducted to judge the accuracy and integrity of the data
stored at the server.
iv.Transaction Test is carried out to ensure that each class of transactions is
processed according to its requirements.
v.Network Communications Tests are conducted to verify if the communication
and message-passing across the network occurs correctly.
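A transaction test from the list above can be sketched as a simple invariant check. The `accounts` dict and `transfer` function are illustrative: the requirement for this transaction class is that a transfer conserves the total balance, and the test verifies exactly that.

```python
# Toy transaction test: verify that one class of transaction (a funds
# transfer) is processed according to its requirement -- money is conserved.
accounts = {"A": 100, "B": 50}

def transfer(src, dst, amount):
    # Reject the transaction rather than leave the data in a bad state.
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    accounts[src] -= amount
    accounts[dst] += amount

total_before = sum(accounts.values())
transfer("A", "B", 30)
total_after = sum(accounts.values())
print(accounts, total_before == total_after)   # {'A': 70, 'B': 80} True
```

Database and server tests at the other levels follow the same shape: state a requirement (accuracy, integrity, coordination), exercise the component, and assert that the requirement still holds.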

SYSTEM ANALYSIS AND DESIGN


System is an orderly grouping of interdependent components linked together according to
a plan to achieve a specific objective.

System analysis and design refers to the process of examining a business situation with
the intent of improving it through better procedures and methods.

System development can generally be thought of as having two major components:
Systems analysis and systems design.

Systems design is the process of planning a new business system, or one to replace or
complement an existing system. Before this planning can be done, the old system must
be thoroughly understood, and the analyst must determine how computers can best be
used (if at all) to make its operation more effective.

System Analysis is the process of gathering and interpreting facts, diagnosing problems
and using the information to recommend improvements to the system.
This is the job of the system analyst.

REQUIREMENTS OF A GOOD SYSTEMS ANALYST:

System Analyst:


A person who conducts a methodical study and evaluation of an activity such as a
business to identify its desired objectives and to determine the procedures by which
these objectives can be achieved.

The various skills that a system analyst must possess can be divided into two
categories:

1) Interpersonal skills
2) Technical skills

Interpersonal skills deal with relationships and the interface of the analyst with people in
business.
Technical skills focus on procedures and techniques for operations analysis, systems
analysis, and computer science.

Interpersonal skills include:

1. Communication:
He/she must have the ability to articulate and speak the language of the user, a
flair for mediation, and a knack for working with virtually all managerial levels
in the organization.

2. Understanding:
Identifying problems and assessing their ramifications, having a grasp of
company goals and objectives, and showing sensitivity to the impact of the system
on people at work.

3. Teaching:
Educating people in use of computer systems, selling the system to the user and
giving support when needed.

4. Selling:
Selling ideas and promoting innovations in problem solving using computers.

Technical skills include:

1. Creativity:
Helping users model ideas into concrete plans and developing candidate systems to
match user requirements.


2. Problem solving:
Reducing problems to their elemental levels for analysis, developing alternative
solutions to a given problem, and delineating the pros and cons of candidate systems.



3. Project management:
Scheduling, performing well under time constraints, coordinating team efforts, and
managing costs and expenditures.

4. Dynamic interface:
Blending technical and non-technical considerations in functional specifications and
general design.


5. Questioning attitude and inquiring mind:
Knowing the what, when, why, where, who and how a system works.

6. Knowledge of the basics of the computer and the business function.
The above skills are acquired by a system analyst through his/her education,
experience and personality.

Educational Background:

He/she must have knowledge of systems theory and organization behavior.
He/she should be familiar with the makeup and inner workings of major application
areas such as financial accounting, personnel administration, marketing and sales,
operations management, model building, and production control.
Competence in system tools and methodologies, and a practical knowledge of one
or more programming and database languages, is required.
Experience in hardware and software specification, which is important for
selection, is also valuable.
Personal Attributes:

1. Authority:
The confidence to tell people what to do. Effective project management and
getting the team to meet deadlines result from this quality.

2. Communication Skills:
Ability to articulate and focus on a problem area for logical solution.

3. Creativity:
Trying one's own ideas; developing candidate systems using unique tools or
methods.

4. Responsibility:
Making decisions on one's own and accepting the consequences of these decisions.

5. Varied skills:
Doing different projects and handling change.

THE MULTIFACETED ROLE OF THE ANALYST


1. Change Agent:
Agent of change.
Persuader (the mildest form of intervention)
Imposer (the most severe intervention)
Catalyst (in between the above two types)
The goal is to achieve acceptance of the candidate system with a minimum of
resistance.

2. Investigator and monitor
Investigator: information is gathered, put together, and studied to determine
why the present system does not work well and what changes will correct the
problem.
Monitor: the analyst must monitor programs in relation to time, cost, and quality.
Time is the most important; if time gets away, the project is delayed and
eventually cost is increased.

3. Architect:
An architect functions as a liaison between the client's abstract design
requirements and the contractor's detailed building plan.
Similarly, the analyst mediates between the user's logical design requirements
and the detailed physical system design.
As architect, the analyst also creates a detailed physical design of candidate
systems.
He/she aids users in formalizing abstract ideas and provides details to build the
end product - the candidate system.

4. Psychologist
Analyst plays the role of a psychologist in the way he/she reaches people,
interprets their thoughts, assesses their behavior, and draws conclusions from
these interactions.
Understanding interfunctional relationships is important.

5. Salesperson:
Selling change
Selling ideas
Selling the system takes place at each step in the system life cycle.
Sales skills and persuasiveness are crucial to the success of the system.

6. Motivator
Candidate system must be well designed and acceptable to the user.
System acceptance is achieved through user participation in its development,
effective user training and proper motivation to use the system.
Motivation is most evident during the first few weeks after implementation.
If the user's staff continues to resist the system, it becomes frustrating.


7. Politician
Diplomacy & finesse in dealing with people can improve acceptance of the
system.
Just as a politician must have the support of his/her constituency, the analyst's
goal is to have the support of the user's staff.
He/she represents their thinking and tries to achieve their goals through
computerization.

ELEMENTS OF A SYSTEM:

1. Inputs & Outputs
Inputs: The elements (material, human resources, information) that enter the
system for processing
Output: is the outcome of processing.

2. Processor(s)
Element of a system that involves the actual transformation of input into output.
Depending upon the output specifications, the processor may modify the input
totally or partially.
If the output specifications change, so will the processing.

3. Control
Control element guides the system
Decision making subsystem that controls the pattern of activities governing input,
processing and output.
Top management.

4. Feedback
Positive feedback
Negative feedback
Changes


5. Environment:
It is the suprasystem.
It is the source of external elements that affect the system.
It consists of vendors, competitors, and others.
It influences the actual performance of the business.

6. Boundaries & Interface:
Boundaries are the limits that identify a system's components, processes, and
interrelationships when it interfaces with another system.


Types of System:


Classification

1. Physical or Abstract
2. Open or Closed
3. Man-made Information systems

1. Physical or Abstract
Physical: tangible entities that may be static or dynamic in operation.
STATIC in operation:
E.g. the physical parts of a computer centre that can be seen and counted -
chairs, tables, etc.

DYNAMIC in operation:
E.g. a programmed computer is a dynamic system; data, programs, output, and
applications change as the users demand.

Abstract:
Abstract systems are conceptual or non-physical entities.
E.g. formulas of relationships among sets of variables.

MODEL:
A model is a representation of a real or a planned system.
The use of models makes it easier for the analyst to visualize relationships in the
system under study.
The objective is to point out the significant elements and the key
interrelationships of a complex system.
1. Schematic Model:
A schematic model is a two-dimensional chart depicting system elements and their
linkages.
2. Flow System Models:
A flow system model shows the flow of the material, energy, and information that
hold the system together.
There is an orderly flow of logic.
Example: PERT (Program Evaluation and Review Technique).

3. Static System Models:
This type of model exhibits one pair of relationships, such as activity-time or
cost-quantity.
Example: Gantt chart.
4. Dynamic System Models:
Business organizations are dynamic systems.
A dynamic system model depicts an ongoing, constantly changing system.
It consists of:
1. inputs that enter the system
2. a processor through which transformation takes place
3. the program(s) required for processing
4. the output(s) that result from processing.


Five important characteristics of open systems

1. Input from outside
Open systems are self-adjusting and self-regulating. When functioning properly,
an open system reaches a steady state or equilibrium. In a retail firm, for example,
a steady state exists when goods are purchased and sold without being either out
of stock or overstocked. An increase in the cost of goods forces a comparable
increase
2.





FOURTH GENERATION TECHNIQUES

1. The term fourth generation techniques (4GT) encompasses a broad array of
software tools that have one thing in common: each enables the software engineer
to specify some characteristic of software at a high level. The tool then
automatically generates source code based on the developer's specification.
2. A software development environment that supports the 4GT paradigm includes
some or all of the following tools: nonprocedural languages for database query,
report generation, data manipulation, screen interaction and definition, and code
generation; high-level graphics capability; spreadsheet capability; and automated
generation of HTML and similar languages used for web-site creation using
advanced software tools. Initially, many of the tools noted previously were
available only for very specific application domains, but today 4GT
environments have been extended to address most software application categories.
3. 4 GT begins with a requirements gathering step. Ideally, the customer would
describe requirements and these would be directly translated into an operational
prototype. But this is unworkable. The customer may be unsure of what is
required, may be ambiguous in specifying facts that are known, and may be
unable or unwilling to specify information in a manner that a 4 GT tool can
consume. For this reason, the customer/ developer dialog described for other
process models remains an essential part of the 4 GT approach.
4. For small applications, it may be possible to move directly from the requirements
gathering step to implementation using a nonprocedural fourth generation
language (4 GL) or a model composed of a network of graphical icons. However,
for larger efforts, it is necessary to develop a design strategy for the system, even
if a 4 GL is to be used. The use of 4 GT without design (for large projects) will
cause the same difficulties (poor quality, poor maintainability, poor customer
acceptance) that have been encountered when developing software using
conventional approaches.

5. Implementation using a 4GL enables the software developer to represent desired
results in a manner that leads to automatic generation of code to create those
results. Obviously a data structure with relevant information must exist and be
readily accessible by the 4 GL.
6. To transform a 4GL implementation into a product, the developer must conduct
thorough testing, develop meaningful documentation, and perform all other
solution integration activities that are required in other software engineering
paradigms. In addition, the 4GL developed software must be built in a manner
that enables maintenance to be performed expeditiously.
7. The 4GT model has advantages and disadvantages:
i) Proponents claim a dramatic reduction in software development time and
greatly improved productivity for the people who build software.
ii) Opponents claim that 4GT tools are not all that much easier to use than
programming languages, that the resultant source code produced by such
tools is inefficient, and that the maintainability of large software
systems developed using 4GT is open to question.

It is possible to summarize the current state of 4GT approaches:
1. The use of 4GT is a viable approach for many different application areas. Coupled
with computer aided software engineering tools and code generators, 4GT offers a
credible solution to many software problems.
2. Data collected from companies that use 4GT indicate that the time required to
produce software is greatly reduced for small and intermediate applications and
that the amount of design and analysis for small applications is also reduced.
3. However, the use of 4GT for large software development efforts demands as
much or more analysis, design, and testing (software engineering activities) to
achieve substantial time savings that result from the elimination of coding.
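The defining feature of a nonprocedural 4GL is that the developer states what result is wanted and the tool decides how to compute it. This can be illustrated with a declarative SQL query run through Python's standard `sqlite3` module; the `orders` table and its contents are illustrative.

```python
import sqlite3

# A nonprocedural (declarative) request in the 4GL spirit: the query states
# WHAT result is wanted, and the database engine decides HOW to compute it
# (no explicit loops, accumulators, or sort code in the developer's source).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("Acme", 120.0), ("Acme", 80.0), ("Beta", 50.0)])
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)   # [('Acme', 200.0), ('Beta', 50.0)]
```

Written in a conventional third-generation style, the same result would require explicit iteration, grouping, and sorting code; the elimination of that code is the source of the time savings 4GT claims.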

WATERFALL (ITERATIVE) MODEL
The waterfall model derives its name from the cascading effect from one phase to the
next, as illustrated in Figure 1.1. In this model each phase has a well-defined starting
and ending point, with identifiable deliverables to the next phase.
Note that this model is sometimes referred to as the linear sequential model or the
software life cycle.


The model consists of six distinct stages, namely:
1. In the requirements analysis phase
(a) The problem is specified along with the desired service objectives (goals)
(b) The constraints are identified
2. In the specification phase the system specification is produced from the detailed
definitions of (a) and (b) above. This document should clearly define the product
function.

Note that in some texts, the requirements analysis and specification phases are
combined and represented as a single phase.
3. In the system and software design phase, the system specifications are translated
into a software representation. The software engineer at this stage is concerned
with:
Data structure
Software architecture
Algorithmic detail and
Interface representations
The hardware requirements are also determined at this stage along with a picture
of the overall system architecture. By the end of this stage the software engineer
should be able to identify the relationship between the hardware, software and the
associated interfaces. Any faults in the specification should ideally not be passed
downstream.
4. In the implementation and testing phase, the designs are translated into the
software domain.
Detailed documentation from the design phase can significantly reduce
the coding effort.
Testing at this stage focuses on making sure that any errors are identified
and that the software meets its required specification.
5. In the integration and system testing phase all the program units are integrated and
tested to ensure that the complete system meets the software requirements. After this
stage the software is delivered to the customer. [Deliverable: The software product is
delivered to the client for acceptance testing.]
6. The maintenance phase is usually the longest stage of the software life cycle. In this
phase the software is updated to:
meet changing customer needs
adapt to changes in the external environment
correct errors and oversights previously undetected in the testing phases
enhance the efficiency of the software

Observe that feedback loops allow corrections to be incorporated into the model. For
example, a problem or update in the design phase requires a revisit to the specification
phase. When changes are made at any phase, the relevant documentation should be
updated to reflect that change.
WATERFALL MODEL ADVANTAGES & DISADVANTAGES
Advantages
Testing is inherent to every phase of the waterfall model
It is an enforced disciplined approach
It is documentation driven, that is, documentation is produced at every stage
Disadvantages
The waterfall model is the oldest and the most widely used paradigm.
However, many projects rarely follow its sequential flow. This is due to the inherent
problems associated with its rigid format. Namely:
It only incorporates iteration indirectly, thus changes may cause considerable
confusion as the project progresses.
As the client usually has only a vague idea of exactly what is required from
the software product, the waterfall model has difficulty accommodating the
natural uncertainty that exists at the beginning of a project.
The customer only sees a working version of the product after it has been coded. This
may result in disaster if any undetected problems are precipitated to this stage.

RAD

Rapid Application Development (RAD) is an incremental software development process
model that emphasizes an extremely short development cycle. If requirements are well
understood and project scope is constrained, the RAD process enables a development
team to create a fully functional system within a very short time period (e.g. 60 to 90 days).

RAD approach encompasses the following phases.

1. Business modeling:
The information flow among business functions is modeled in a way that answers
the following questions:
What information drives the business process?
What information is generated?

Who generates it?
Where does the information go?
Who processes it?

2. Data Modeling:
The information flow defined as part of the business modeling phase is refined
into a set of data objects that are needed to support the business.
The characteristics (called attributes) of each object are identified and the
relationships between these objects defined.

3. Process Modeling:
The data objects defined in the data modeling phase are transformed to achieve
the information flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a
data object.

4. Application generation:
RAD assumes the use of fourth generation techniques. Rather than creating
software using conventional third generation programming languages, the RAD
process works to reuse existing program components (when possible) or create
reusable components (when necessary).
In all cases, automated tools are used to facilitate construction of the software.

5. Testing and Turnover:
Since the RAD process emphasizes reuse, many of the program components have
already been tested. This reduces overall testing time. However new components
must be tested and all interfaces must be fully exercised.

The time constraints imposed on a RAD project demand scalable scope. If a
business application can be modularized in a way that enables each major
function to be completed in less than three months, it is a candidate for RAD.
Each major function can be addressed by a separate RAD team and then
integrated to form a whole.

The RAD approach has the following Drawbacks:

For large but scalable projects RAD requires sufficient human resources to
create the right number of RAD teams.
RAD requires developers and customers who are committed to the rapid-
fire activities necessary to get a system complete in a much abbreviated
time frame. If commitment is lacking from either constituency, RAD
projects will fail.
Not all types of applications are appropriate for RAD. If a system cannot
be properly modularized, building the components necessary for RAD will
be problematic.

If high performance is an issue, and performance is to be achieved through
tuning the interfaces to system components, the RAD approach may not
work.
RAD is not appropriate when technical risks are high. This occurs when a
new application makes heavy use of new technology or when the new
software requires a high degree of interoperability with existing computer
programs.


RAD MODEL

[Figure: RAD model - within a 60-90 day cycle, each team (Team #1, Team #2,
Team #3) proceeds through business modeling, data modeling, process modeling,
application generation, and testing and turnover.]

SYSTEM DEVELOPMENT LIFE CYCLE



Recognition of Need - what is the problem?

One must know what the problem is before it can be solved. The basis for a candidate
system is recognition of a need for improving an information system or a procedure.

Feasibility Study

Depending on the results of the initial investigation, the survey is expanded to a more
detailed feasibility study.
A feasibility study is a test of a system proposal according to its workability, impact on
the organization, ability to meet user needs, and effective use of resources.

It focuses on three major questions:

1. What are the user's demonstrable needs and how does a candidate system meet
them?
2. What resources are available for given candidate systems? Is the problem worth
solving?
3. What is the likely impact of the candidate system on the organization? How well
does it fit within the organization's master MIS plan?

Each of these questions must be answered carefully. They revolve around investigation
and evaluation of the problem, identification and description of candidate systems,
specification of performance and the cost of each system, and final selection of the best
system.
The objective of a feasibility study is not to solve the problem but to acquire a sense of its
scope. During the study, the problem definition is crystallized and aspects of the problem
to be included in the system are determined.
Consequently, costs and benefits are estimated with greater accuracy at this stage.

The result of the feasibility study is a formal proposal.
This is simply a report: a formal document detailing the nature and scope of the
proposed solution. The proposal summarizes what is known and what is going to be done.
It consists of the following:
1. Statement of the problem: a carefully worded statement of the problem that led
to the analysis.
2. Summary of findings and recommendations: a list of the major findings and
recommendations of the study. It is ideal for the user who requires quick access to
the results of the analysis of the system under study. Conclusions are stated,
followed by a list of the recommendations and a justification for them.
3. Details of findings: an outline of the methods and procedures undertaken by the
existing system, followed by coverage of the objectives and procedures of the
candidate system. Also included are discussions of output reports, file structures,
and costs and benefits of the candidate system.
4. Recommendations and conclusions: specific recommendations regarding the
candidate system, including personnel assignments, costs, project schedules, and
target dates.
After the proposal is reviewed by management, it becomes a formal agreement that
paves the way for actual design and implementation.
This is a crucial decision point in the life cycle.
3) Analysis
Analysis is a detailed study of the various operations performed by a system and
their relationships within and outside of the system.
A key question is: What must be done to solve the problem?
One aspect of analysis is defining the boundaries of the system and determining
whether or not a candidate system should consider other related systems.
During analysis, data are collected on the available files, decision points, and
transactions handled by the present system.
Data flow diagrams, interviews, on site observations, and questionnaires are
examples.
The interview is a commonly used tool in analysis. It requires special skills and
sensitivity to the subjects being interviewed.
Once analysis is completed, the analyst has a firm understanding of what is to be
done. The next step is to decide how the problem might be solved. Thus in
systems design we move from the logical to the physical aspects of the life cycle.
4) Design
The most creative and challenging phase of the system life cycle is system design.
The term design describes a final system and the process by which it is developed.
It refers to the technical specification (analogous to the engineer's blueprints) that
will be applied in implementing the candidate system.
It also includes the construction of program and program testing.
The key question here is: How should the problem be solved?
The first step is to determine how the output is to be produced and in what format.
Samples of the output (and input) are also presented.
Second, input data and master files (Data base) have to be designed to meet the
requirements of the proposed output.
The operational (processing) phases are handled through program construction and
testing, including a list of the programs needed to meet the system's objectives and
complete documentation.
Finally, details related to justification of the system and an estimate of the impact
of the candidate system on the user and the organization are documented and
evaluated by management as a step toward implementation.
The final report prior to the implementation phase includes procedure flowcharts,
record layouts, report layouts, and a workable plan for implementing the
candidate system.
Information on personnel, money, hardware, facilities, and their estimated costs must
also be available. At this point, projected costs must be close to the actual costs of
implementation.
5) Implementation
The implementation phase is less creative than system design.
It is primarily concerned with user training, site preparation, and file conversion.
When the candidate system is linked to terminals or remote sites, the
telecommunication network and tests of the network along with the system are also
included under implementation.

During the final testing, user acceptance is tested, followed by user training.
Depending on the nature of the system, extensive user training may be required.
Conversion usually takes place at about the same time the user is being trained or
later.


System testing checks the readiness and accuracy of the system to access, update and
retrieve data from new files.
Once the programs become available, test data are read into the computer and
processed against the file(s) provided for testing. If successful, the program(s) is then
run with live data. Otherwise, a diagnostic procedure is used to locate and correct
errors in the program.
In most conversions, a parallel run is conducted, where the new system runs
simultaneously with the old system.
This method, though costly, provides added assurance against errors in the candidate
system and also gives the user staff an opportunity to gain experience through
operation.
In some cases, however, parallel processing is not practical.
6) Post Implementation & maintenance
After the installation phase is completed and the user staff is adjusted to the
changes created by the candidate system, evaluation and maintenance begin.
If the new information system is inconsistent with the design specifications, then
changes have to be made.
Hardware also requires periodic maintenance to keep it in tune with design
specifications.
The purpose of maintenance is to keep the new system up to its standards.

The Formal Methods Model:

1. The formal methods model encompasses a set of activities that leads to formal
mathematical specification of computer software.
2. Formal methods enable a software engineer to specify, develop, and verify a
computer- based system by applying a rigorous mathematical notation.
3. When formal methods are used during development, they provide a mechanism
for eliminating many of the problems that are difficult to overcome using other
software engineering paradigms. Ambiguity, incompleteness, and inconsistency
can be discovered and corrected more easily, not through ad hoc review but
through the application of mathematical analysis.
4. When formal methods are used during design, they serve as a basis for program
verification and therefore enable the software engineer to discover and correct
errors that might otherwise go undetected.

5. The formal methods model offers the promise of defect-free software.


DRAWBACKS

1. The development of formal models is quite time consuming and expensive.
2. Because few software developers have the necessary background to apply formal
methods, extensive training is required.
3. It is difficult to use the models as a communication mechanism for technically
unsophisticated customers.

The Spiral Model

The spiral model is an evolutionary software process model that couples the iterative
nature of prototyping with the controlled and systematic aspects of the linear sequential
model.
It provides the potential for rapid development of incremental versions of the software.
Using the spiral model, software is developed in a series of incremental releases. During
early iterations, the incremental release might be a prototype.
During later iterations, increasingly more complete versions of the engineered system are
produced.

A spiral model is divided into a number of framework activities, also called task regions.
























The above figure depicts a spiral model with six task regions.

1. Customer communication:
Tasks required to establish effective communication between developer and
customer.

2. Planning:
Tasks required to define resources, timelines, and other project-related
information.

3. Risk Analysis:
Tasks required to assess both technical and management risks.

4. Engineering :
Tasks required to build one or more representations of the application.

5. Construction and release:
Tasks required to construct, test, install, and provide user support (e.g.
documentation and training)

6. Customer evaluation:
Tasks required to obtain customer feedback based on evaluation of the software
representations created during the engineering stage and implemented during the
installation stage.

Each of the regions is populated by a set of work tasks, called a task set,
that are adapted to the characteristics of the project to be undertaken.
For small projects, the number of work tasks and their formality is low. For
larger, more critical projects, each task region contains more work tasks that are
defined to achieve a higher level of formality.

As this evolutionary process begins, the software engineering team moves around
the spiral in a clockwise direction, beginning at the center.

The first circuit around the spiral might result in the development of a product
specification.

Subsequent passes around the spiral might be used to develop a prototype and
then progressively more sophisticated versions of the software.

Each pass through the planning region results in adjustments to the project plan.
Cost and schedule are adjusted based on feedback derived from customer
evaluation. In addition, the project manager adjusts the planned number of
iterations required to complete the software.
Unlike classical process models that end when software is delivered, the spiral
model can be adapted to apply throughout the life of the computer software.

An alternative view of the spiral model:

With reference to the figure, the spiral model can be considered by examining the
project entry point axis. Each cube placed along the axis can be used to represent
the starting point for different types of projects.

A concept development project starts at the core of the spiral and will continue
(multiple iterations occur along the spiral path that bounds the central shaded
region) until concept development is complete.
If the concept is to be developed into an actual product, the process proceeds
through the next cube (new product development project entry point) and a new
development project is initiated.
The new product will evolve through a number of iterations around the spiral,
following the path that bounds the region that has somewhat lighter shading than
the core. In essence, the spiral, when characterized in this way, remains operative
until the software is retired. There are times when the process is dormant, but
whenever a change is initiated, the process starts at the appropriate entry point
(e.g. product enhancement).

The spiral model is a realistic approach to the development of large-scale systems
and software. Because software evolves as the process progresses, the developer
and customer better understand and react to risks at each evolutionary level.



















Data dictionary

A data dictionary is a catalog, a repository of the elements in a system.
The major elements are data flows, data stores, and processes.
The data dictionary stores details & descriptions of the above elements.
A properly developed data dictionary should answer the following questions for the
systems analyst:
1. How many characters are in a data item?
2. By what other names is it referenced in the system?
3. Where is it used in the system?
Why is a Data Dictionary important?
Analysts use data dictionaries for five important reasons.

1. To manage the detail in large systems.
2. To communicate a common meaning for all system elements.
3. To document the features of the system
4. To facilitate analysis of the details in order to evaluate characteristics & determine
where system changes should be made.
5. To locate errors & omissions in the system.

The data dictionary contains two types of descriptions for the data flowing through
the system:
1. data elements
2. data structures
Each data element entry is identified by:
1. data name
2. description
3. alias
4. length
5. specific values that are permissible for it in the system.
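As a hypothetical sketch (the field names and sample values below are invented, not drawn from any particular CASE tool), a data element entry with these five identifiers can be modeled and checked in Python:

```python
# Hypothetical data-element entry; the five keys mirror the identifiers
# listed above (data name, description, alias, length, permissible values).
invoice_status = {
    "data_name": "INVOICE-STATUS",
    "description": "Processing status of a customer invoice",
    "alias": ["INV-STAT"],
    "length": 1,
    "permissible_values": {"O", "P", "C"},  # Open, Paid, Cancelled
}

def is_valid(entry, value):
    """Check a candidate value against the entry's length and value list."""
    return len(str(value)) <= entry["length"] and value in entry["permissible_values"]
```

With this entry, is_valid(invoice_status, "O") accepts an open invoice, while a value such as "X" is rejected.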

Describing data structures:
Data structures are built on four relationships of components

1. Sequence Relationship:
Defines the components (data items or other data structures) that are always
included in a particular data structure; a concatenation of two or more data items.

2. Selection (Either / or) Relationship
Defines alternative data items or data structures included in a data structure.

3. Iteration (Repetitive) Relationship
Defines the repetition of a component zero or more times.

4. Optional Relationship

A special case of iteration; data items may or may not be included, that is, zero or
one iteration.
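These four relationships map directly onto ordinary type annotations. The Order record below is a hypothetical illustration (the record and field names are invented, not from the text):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class CashPayment:        # one alternative of a selection
    amount: float

@dataclass
class CreditPayment:      # the other alternative
    card_number: str
    amount: float

@dataclass
class LineItem:
    item_code: str
    quantity: int

@dataclass
class Order:
    # Sequence: components always included, in this order.
    order_no: str
    customer_name: str
    # Selection (either/or): exactly one of the alternatives.
    payment: Union[CashPayment, CreditPayment]
    # Iteration: a component repeated zero or more times.
    items: List[LineItem] = field(default_factory=list)
    # Optional: zero or one occurrence.
    delivery_note: Optional[str] = None
```

Constructing an Order without a delivery note exercises the optional relationship; supplying several LineItems exercises the iteration.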

A logical data flow diagram is derived from the physical version by
doing the following:

Show actual data needed in a process, not the documents that contain them.
Remove routing information; that is, show the flow between procedures, not
between people, offices, or locations.
Remove tools and devices (for example, accordion folders, file cabinets, or
in-boxes).
Remove control information.
Consolidate redundant data stores.
Remove unnecessary processes, such as those that do not change the data or data
flows (for example, routing, storing, or copying), that stand alone from the
devices on which they occur (such as device-dependent data preparation or data
entry activities), or that do not represent a unique process within the system
(if they duplicate other processes, they should be consolidated into a single
process).

General Rules for Drawing Logical Data Flow Diagrams

Several basic rules that underlie all the guidelines we have discussed so far are also
helpful in drawing useful logical data flow diagrams:
1. Any data flow leaving a process must be based on data that are input to the
process.
2. All data flows are named; the name reflects the data flowing between processes,
data stores, sources, or sinks.
3. Only data needed to perform the process should be an input to the process.
4. A process should know nothing about, that is, be independent of, any other
process in the system; it should depend only on its own input and output.
5. Processes are always running; they do not start or stop. Analysts should assume
that a process is always ready to function or perform necessary work.
6. Output from processes can take one of the following forms:
a) An input data flow with information added by the process (for example, an
annotated invoice)
b) A response or change of data form (such as a change of profit dollars into
profit percentages)
c) Change of status (from unapproved to approved status)
d) Change of content (assembly or separation of information contained in one or
more incoming data flows )
e) Change in organization (for example, the physical separation or rearrangement
of data)




Logical DFD

Describes the flow of logical data components between logical processes in a system.

Physical DFD

Describes the flow of physical data components between physical operations in a system.


REQUIREMENT DETERMINATION TECHNIQUES

Requirements determination involves studying the current business system to find out
how it works and where improvements should be made. Systems studies result in an
evaluation of how current methods are working and whether adjustments are necessary or
possible.
A requirement is a feature that must be included in a new system.
It may include a way of capturing or processing data, producing information, controlling
a business activity or supporting management.
The determination of requirements thus entails studying the existing system and
collecting details about it to find out what these requirements are.

Requirements Anticipation:
Having had experience in a particular business area, or having encountered systems
in an environment similar to the one currently under investigation, will influence a
systems analyst's study.

In the case of requirements anticipation, on the one hand, experience from previous
studies can lead to investigation of areas that would otherwise go unnoticed by an
inexperienced analyst.
analyst.
Having the background to know what to ask or which aspects to investigate can be a
substantial benefit to the organization.
On the other hand, if a bias is introduced or shortcuts are taken in conducting the
investigation, requirements anticipation is a problem.

Requirement Investigation:
This activity is at the heart of systems analysis. Using a variety of tools & skills, analysts
study the current system and document its features for further analysis. Requirements
investigation relies on the fact-finding techniques and includes methods for documenting
and describing system features.

Requirement Specifications:
The data produced during the fact-finding investigations are analyzed to determine
requirements specifications: the description of features for a new system.
This activity has three interrelated parts:

Analysis of Factual Data:

The data collected during the fact-finding study and included in data flow and
decision analysis documentation are examined to determine how well the system is
performing and whether it will meet the organization's demands.

Identification of Essential Requirements:
Features that must be included in a new system, ranging from operational details to
performance criteria are specified.


Selection of Requirements Fulfillment Strategies:
The methods that will be used to achieve the stated requirements are selected. These
form the basis for systems design, which follows requirements specification. All three
activities are important and must be performed correctly.

Fact-Finding Techniques
The specific methods analysts use for collecting data about requirements are called
fact-finding techniques. These include the interview, questionnaire, record
inspection (on-site review), and observation. Analysts usually employ more than one
of these techniques to help ensure an accurate and comprehensive investigation.

Interview:
Analysts use interviews to collect information from individuals or from groups. The
respondents are generally current users of the existing system or potential users of the
proposed system. In some instances, the respondents may be managers or employees
who provide data for the proposed system or who will be affected by it. It is not always
the best source of application data.
It is important to know that respondents and analysts converse during an interview. The
respondents are not being interrogated.
Interviews provide analysts with opportunities for gathering information from
respondents who have been chosen for their knowledge of the system under study.
This method of fact-finding can be especially helpful for gathering information
from individuals who do not communicate effectively in writing or who may not
have the time to complete questionnaires.
Interviews allow analysts to discover area of misunderstanding, unrealistic
expectations and even indications of resistance to the proposed system.
Interviews can be either structured or unstructured. Unstructured interviews,
using a question-and-answer format, are appropriate when analysts want to
acquire general information about a system.
This format encourages respondents to share their feelings, ideas, and beliefs.
Structured interviews use standardized questions in either an open-response or
closed-response format.
The success of an interview depends on the skill of the interviewer and on his or
her preparation for the interview.

Analysts also need to be sensitive to the kinds of difficulties that some
respondents create during interviews and know how to deal with potential
problems.

Procedure for Cost/Benefit Determination

There is a difference between expenditure and investment. We spend to get what we
need, but we invest to realize a return on the investment.
Building a computer-based system is an investment. Costs are incurred throughout its life
cycle.
Benefits are realized in the form of reduced operating costs, improved corporate image,
staff efficiency, or revenues.
To what extent benefits outweigh costs is the function of cost/benefit analysis.

Cost/benefit analysis is a procedure that gives a picture of the various costs,
benefits, and risks associated with a system.

The determination of costs and benefits entails the following steps:

1. Identify the costs and benefits pertaining to a given project.
2. Categorize the various costs and benefits for analysis.
3. Select a method of evaluation.
4. Interpret the results of the analysis.
5. Take action.

Costs and Benefits Identification
Certain costs and benefits are more easily identifiable than others.
Direct costs, such as the price of a hard disk, are easily identified from company invoice
payments or canceled checks.
Direct benefits often relate one-to-one to direct costs, especially savings from reducing
costs in the activity in question.
Other direct costs and benefits, however, may not be well defined, since they
represent estimated costs or benefits that have some uncertainty. An example of
such a cost is the reserve for bad debt. It is a discernible real cost, although
its exact amount is not so immediately known.
A category of costs or benefits that is not easily discernible is opportunity costs
and opportunity benefits. These are the costs or benefits forgone by selecting one
alternative over another. They do not show up in the organization's accounts and
therefore are not easy to identify.

Classification of Costs and Benefits
Costs and benefits can be categorized as tangible or intangible, direct or indirect, fixed or
variable.
Tangible or Intangible Costs and Benefits


Tangible refers to the ease with which costs or benefits can be measured. An outlay of
cash for a specific item or activity is referred to as a tangible cost.
They are usually shown as disbursements on the books. The purchase of hardware or
software, personnel training, and employee salaries are examples of tangible costs. They
are readily identified and measured.
Costs that are known to exist but whose financial value cannot be accurately measured
are referred to as intangible costs. For example, an employee morale problem caused
by a new system or a lowered company image is an intangible cost.
In some cases, intangible costs may be easy to identify but difficult to measure. For
example, the cost of the breakdown of an online system during banking hours will cause
the bank to lose deposits and waste human resources. The problem is by how much?
In other cases, intangible costs may be difficult even to identify, such as an improvement
in customer satisfaction stemming from a real-time order entry system.

Benefits are also classified as tangible or intangible. Like costs, they are often difficult to
specify accurately.
Tangible benefits, such as completing jobs in fewer hours or producing reports with no
errors, are quantifiable.
Intangible benefits, such as more satisfied customers or an improved corporate image, are
not easily quantified. Both tangible and intangible costs and benefits, however, should be
considered in the evaluation process.

Direct or indirect costs and Benefits
From a cost accounting point of view, costs are handled differently depending on
whether they are direct or indirect.
Direct costs are those with which a dollar figure can be directly associated in a
project.
They are applied directly to the operation.
For example, the purchase of a box of diskettes for $35 is a direct cost because
we can associate the diskettes with the dollars expended.
Direct benefits also can be specifically attributable to a given project. For
example, a new system that can handle 25 percent more transactions per day is a
direct benefit.

Indirect Costs
Indirect costs are the result of operations that are not directly associated with a given
system or activity. They are often referred to as overhead.
Examples: insurance, maintenance, protection of the computer center, heat, light, and air
conditioning are all overhead.

Indirect benefits
Indirect benefits are realized as a by-product of another activity or system.

Fixed or Variable Costs and Benefits
Some costs and benefits are constant, regardless of how well a system is used.


Fixed costs
Fixed costs (after the fact) are sunk costs. They are constant and do not change. Once
encountered, they will not recur.

Variable costs
Variable costs are incurred on a regular (weekly, monthly) basis. They are
usually proportional to work volume and continue as long as the system is in
operation.
For example, the costs of computer forms vary in proportion to the amount of
processing or the length of the reports required.

Fixed benefits
Fixed benefits are also constant and do not change. An example is a decrease in
the number of personnel by 20 percent resulting from the use of a new computer.
The benefit of personnel savings may recur every month.

Variable benefits
Variable benefits, on the other hand, are realized on a regular basis. For example,
consider a safe deposit tracking system that saves 20 minutes preparing customer notices
compared with the manual system. The amount of time saved varies with the number of
notices produced.

Saving versus Cost Advantages
Savings are realized when there is some kind of cost advantage. A cost advantage
reduces or eliminates expenditures. So we can say that a true saving reduces or
eliminates various costs being incurred.

Select Evaluation Method
When all financial data have been identified and broken down into cost categories, the
analyst must select a method of evaluation. Several evaluation methods are available,
each with pros and cons. The common methods are:
1. Net benefit analysis.
2. Present value analysis
3. Net present value.
4. Payback analysis
5. Cash-flow analysis

Interpret Results of the Analysis and final action
When the evaluation of the project is complete, the results have to be interpreted. This
entails comparing actual results against a standard or the result of an alternative
investment.

Cost/benefit analysis is a tool for evaluating projects rather than a replacement
for the decision maker. In real-life business situations, whenever a choice among
alternatives is considered, cost/benefit analysis is an important tool.


THE SYSTEM PROPOSAL
The final decision following cost/benefit analysis is to select the most cost effective and
beneficial system for the user.
At this time, the analyst prepares a feasibility report on the major findings and
recommendations.

STEPS IN FEASIBILITY ANALYSIS:
Feasibility study involves eight steps:

1. Form a project team and appoint a project leader
The concept behind a project team is that future system users should be
involved in its design and implementation.
Their knowledge & experience in the operations area are essential to the
success of the system.
For small projects, the analyst and an assistant usually suffice.
The team consists of analysts and user staff with enough collective expertise
to devise a solution to the problem.

2. Prepare system flowcharts
Prepare generalized system flowcharts for the system.
Information-oriented charts and data flow diagrams prepared in the initial
investigation are also reviewed at this time.
The charts bring up the importance of inputs, outputs and data flow among
key points in the existing system.
All other flowcharts needed for detailed evaluation are completed at this
point.

3. Enumerate potential candidate systems.
This step identifies the candidate systems that are capable of producing the
output included in the generalized flowcharts.
The above requires a transformation from logical to physical system
models.
Another aspect of this step is consideration of the hardware that can
handle the total system requirement.
An important aspect of hardware is processing and main memory.
There are a large number of computers with differing processing sizes,
main memory capabilities and software support.
The project team may contact vendors for information on the processing
capabilities of the system available.

4. Describe and identify characteristics of candidate systems.
From the candidate systems considered the team begins a preliminary
evaluation in an attempt to reduce them to a manageable number.
Technical knowledge and expertise in the hardware/ software area are
critical for determining what each candidate system can and cannot do.


5. Determine and Evaluate performance and cost effectiveness of each candidate
system.
Each candidate system's performance is evaluated against the system
performance requirements set prior to the feasibility study.
Whatever the criteria, there has to be as close a match as practicable,
although trade-offs are often necessary to select the best system.
The cost encompasses both designing and installing the system.
It includes user training, updating the physical facilities, and documentation.
System performance criteria are evaluated against the cost of each system
to determine which system is likely to be the most cost effective and also
meets the performance requirements.
Costs are most easily determined when the benefits of the system are
tangible and measurable.
An additional factor to consider is the cost of the study design and
development.
In many respects the cost of the study phase is a sunk cost (fixed cost).
Including it in the project cost estimate is optional.

6. Weight system performance and cost data
In some cases, the performance and cost data for each candidate system
show which system is the best choice.
This outcome terminates the feasibility study.
Many times, however the situation is not so clear cut.
The performance/cost evaluation matrix does not clearly identify the best
system.
The next step is to weight the importance of each criterion by applying a
rating figure.
Then the candidate system with the highest total score is selected.
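The weighting step can be sketched numerically. The criteria, weights, and ratings below are invented for illustration; they are not from the text:

```python
# Hypothetical evaluation criteria with weights summing to 1.0,
# and 1-10 ratings assigned to each candidate system.
weights = {"accuracy": 0.4, "growth_potential": 0.3, "response_time": 0.3}

candidates = {
    "Candidate A": {"accuracy": 8, "growth_potential": 6, "response_time": 9},
    "Candidate B": {"accuracy": 9, "growth_potential": 7, "response_time": 6},
}

def weighted_score(ratings):
    # Multiply each rating by its criterion weight and sum the products.
    return sum(weights[c] * ratings[c] for c in weights)

# The candidate with the highest total score is selected.
best = max(candidates, key=lambda name: weighted_score(candidates[name]))
```

With these figures, Candidate A scores 7.7 against 7.5 for Candidate B, so A is selected.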

7. Select the best candidate system
The system with the highest total score is judged the best system.
This assumes the weighting factors are fair and the rating of each
evaluation criterion is accurate.
In any case, management should not make the selection without having
the experience to do so.
Management co-operation and comments however are encouraged.

8. Feasibility Report
The feasibility report is a formal document for management use.
Brief enough and sufficiently nontechnical to be understandable, yet
detailed enough to provide the basis for system design.
The report contains the following sections:
1. Cover letter: presents general findings and recommendations to be
considered.

2. Table of contents: specifies the location of the various parts of the
report.
3. Overview: a narrative explanation of the purpose & scope of
the project, the reasons for undertaking the feasibility study, and
the department(s) involved in or affected by the candidate system.
4. Detailed findings: outline the methods used in the present system;
the system's effectiveness & efficiency as well as operating costs are
emphasized.
This section also provides a description of the objectives and
general procedures of the candidate system.
5. Economic justification: details point-by-point cost comparisons and
preliminary cost estimates for the development and operation of
the candidate system.
A return on investment (ROI) analysis of the project is also
included.
6. Recommendations & conclusions.
7. Appendix: documents all memos and data compiled during the
investigation; they are placed at the end of the report for reference.























LIST OF DELIVERABLES
When the design of an information system is complete, the specifications are
documented in a form that outlines the features of the application.
These specifications are termed the deliverables, or the design book, by systems
analysts.

No design is complete without the design book, since it contains all the details that must
be included in the computer software, datasets & procedures that the working information
system comprises.

The deliverables include the following:

1. Layout charts
Input & output descriptions showing the location of all details shown on reports,
documents, & display screens.

2. Record layouts:
Descriptions of the data items in transaction & master files, as well as related
database schematics.

3. Coding systems:
Descriptions of the codes that explain or identify types of transactions,
classification, & categories of events or entities.

4. Procedure Specification:
Planned procedures for installing & operating the system when it is constructed.

5. Program specifications:
Charts, tables & graphic descriptions of the modules & components of computer
software & the interaction between each as well as the functions performed &
data used or produced by each.

6. Development plan:
Timetables describing elapsed calendar time for development activities; personnel
staffing plans for systems analysts, programmers, & other personnel; and
preliminary testing & implementation plans.

7. Cost Package:
Anticipated expenses for development, implementation and operation of the new
system, focusing on such major cost categories as personnel, equipment,
communications, facilities and supplies.





Evaluation Methods

Net Benefit Analysis
Involves subtracting total costs from total benefits.
Easy to calculate, interpret and present.
Drawback: it does not account for the time value of money & does not discount
future cash flows.

Net Benefit Analysis - example

Cost/Benefit      Year 0    Year 1    Year 2    Total
Costs              -1000     -2000     -2000    -5000
Benefits               0       650      4900     5550
Net benefits       -1000     -1350      2900      550
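The arithmetic of the example can be sketched directly (net benefit per year is that year's benefit plus its negative cost):

```python
# Yearly figures from the example above; costs are negative outflows.
costs    = [-1000, -2000, -2000]   # years 0, 1, 2
benefits = [    0,   650,  4900]

# Net benefit per year, and the overall net benefit across the horizon.
net_benefits = [b + c for b, c in zip(benefits, costs)]
total = sum(net_benefits)
```

The year-2 net benefit is 4900 - 2000 = 2900, giving an overall net benefit of 550.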

Time value of money:
Usually expressed in the form of interest on the funds invested to realize the
future value.
F = P (1 + i)^n
Where,
F = future value of an investment.
P = present value of the investment.
i = Interest rate per compounding period.
n = Number of years.
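The formula translates directly into code; the numbers in the comment below are a made-up check, not from the text:

```python
def future_value(p, i, n):
    """F = P * (1 + i)**n : value of P invested at rate i for n periods."""
    return p * (1 + i) ** n

# e.g. 1000 invested at 10% for two years grows to 1210.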

Present value analysis

Calculates the costs & benefits of the system in terms of the present-day
(today's) value of the investment & then compares them across alternatives.
A critical factor to consider is a discount rate equivalent to the forgone
amount that the money could earn if it were invested in a different project.
The amount to be invested today is determined by the value of the benefits at
the end of a given period (year).
This amount is called the present value of the benefit.
P = F / (1 + i)^n
The present value of 1500 received at 10% interest at the end of the 4th year is:
P = 1500 / (1 + 0.10)^4 = 1500 / 1.46 = 1027.39

If 1027.39 is invested today at 10% interest we can expect to have 1500 in 4
years.

Present Value Analysis using 10% Interest Rate (Discounted)

Year   Estimated      Discount rate   Present value   Cumulative present
       future value   1/(1+i)^n       P = F/(1+i)^n   value of benefits
1      1500           0.909           1363.63         1363.63
2      1500           0.826           1239.67         2603.30
3      1500           0.751           1127.82         3731.12
4      1500           0.683           1027.39         4758.51
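The same schedule can be reproduced with exact (unrounded) discount factors; the cents then differ slightly from the figures obtained with the rounded factors in the table:

```python
def present_value(f, i, n):
    """P = F / (1 + i)**n : today's value of F received after n periods."""
    return f / (1 + i) ** n

# Rebuild the four-year schedule for 1500 per year at 10%.
cumulative = 0.0
rows = []
for year in range(1, 5):
    pv = present_value(1500, 0.10, year)
    cumulative += pv
    rows.append((year, round(pv, 2), round(cumulative, 2)))
```

With exact factors the cumulative present value comes to about 4754.80 rather than 4758.51; the difference is purely rounding.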

Net Present Value
Net present value is the discounted benefits minus the discounted costs.
Example: an investment of 3000 for a microcomputer yields a 4758.51 cumulative
benefit, i.e. a net gain of 1758.51.
This value is relatively easy to calculate & accounts for the time value of
money.
Net present value is often expressed as a percentage of the investment:
1758.51 / 3000 = 0.59, i.e. about 59%.
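Continuing the microcomputer example in code (using the cumulative present value of benefits from the table above):

```python
investment = 3000.0
discounted_benefits = 4758.51   # cumulative present value of benefits

# NPV = discounted benefits minus discounted costs,
# also expressed as a fraction of the outlay.
net_present_value = discounted_benefits - investment
ratio = net_present_value / investment
```

This gives a net present value of 1758.51, about 59% of the investment.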
Payback analysis
It determines the time it takes for the accumulated benefits to equal the
initial investment.
The shorter the payback period, the sooner a profit is realized.
Payback period = Overall cost outlay / Annual cash return,
i.e. the number of years (plus installation time) needed to recover the
investment.
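One common interpretation of payback, sketched with hypothetical cash flows (the 3000 outlay and 1500-per-year returns are invented), accumulates returns until the initial outlay is covered:

```python
def payback_period(initial_cost, annual_returns):
    """Years until accumulated returns cover the initial outlay."""
    accumulated = 0.0
    for year, cash in enumerate(annual_returns, start=1):
        if accumulated + cash >= initial_cost:
            # Interpolate within the year for a fractional answer.
            return year - 1 + (initial_cost - accumulated) / cash
        accumulated += cash
    return None  # not recovered within the horizon
```

A 3000 outlay returning 1500 a year pays back in 2.0 years; the shorter this period, the sooner a profit is realized.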

Break- even analysis
Break-even is the point where the cost of the candidate system & that of the
current one are equal.
Breakeven compares the costs of the current & candidate systems.
When a candidate system is developed, initial costs usually exceed those of
the current system; this is the investment period.
When both costs are equal, the break-even point has been reached.
Beyond the break-even point, the candidate system provides greater benefit
(profit) than the old one; this is the return period.
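The comparison of cumulative costs can be sketched as follows. The figures are hypothetical: the candidate system carries a high initial (development) cost but lower running costs than the current system.

```python
def break_even_year(current_costs, candidate_costs):
    """First year in which the cumulative candidate cost drops to or below
    the cumulative current-system cost (None if never within the horizon)."""
    cum_current = cum_candidate = 0.0
    for year, (cur, cand) in enumerate(zip(current_costs, candidate_costs), 1):
        cum_current += cur
        cum_candidate += cand
        if cum_candidate <= cum_current:
            return year
    return None

# Hypothetical yearly costs for the current and candidate systems:
print(break_even_year([500, 500, 500, 500, 500],
                      [1100, 300, 300, 300, 300]))  # → 4
```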
Example


Cash-flow analysis
Projects, such as those carried out by computer & word processing services,
produce revenues from an investment in computer systems.
Cash-flow analysis keeps track of accumulated costs & revenues on a regular
basis.
The spread sheet format also provides break-even and payback
information.



APPLICATION PROTOTYPING

1. Application prototyping provides a way to acquire information describing
application requirements and evaluation based on use of a working system.
2. This development methodology also provides the experience of using the system
before a completed application is constructed and implemented.
3. The term prototype refers to a working model of an information system
application.
4. The prototype does not contain all the features or perform all the necessary
functions of the final system.
5. It includes sufficient elements to enable individuals to use the proposed system to
determine what they like and don't like and to identify features to be added or
changed.
6. Application prototyping, the process of developing and using the prototype, has
the following five characteristics:
i. The prototype is a live, working application.

ii. The purpose of prototyping is to test out assumptions made by analysts
and users about required system features.
iii. Prototypes are created quickly.
iv. Prototypes evolve through an iterative process.
v. Prototypes are relatively inexpensive to build.


USES OF APPLICATION PROTOTYPING:

1. It is an effective device for clarifying user requirements.
Written specifications are typically created as a vehicle for describing application
features and the requirements that must be met.
2. Developing and actually using a prototype can be a very effective way of
identifying and clarifying the requirements an application must meet.
3. It verifies the feasibility of a system design. Analysts can experiment with
different application characteristics, evaluating user reaction and response.
4. Creating a prototype and evaluating its design through use will prove design
feasibility or suggest the need to find other alternatives.

USE OF PROTOTYPES:

1. When the prototyping process is complete, a decision is made about how to
proceed. There are four ways to proceed after the information gained from
developing and using the prototype has been evaluated: discard the prototype
and abandon the application project, implement the prototype, redevelop the
application, or begin another prototype.
i. Abandon Application:
a. In some instances, the decision will be to discard the prototype and to
abandon development of the application.
b. The above conclusion does not mean that the prototype and the process were
a mistake or a waste of resources. Rather, the information and experience
gained by developing and using the prototype has led to a development
decision.
c. Perhaps user and analyst have learned that the system is unnecessary
because an alternative solution was discovered during the prototyping process.
d. Maybe the experience suggested that the approach was inappropriate.

ii. Implement Prototype:
a. Sometimes the prototype becomes the actual system needed. In this case,
it is implemented as is; no further development occurs.

iii. Redevelop Application
a. A successful prototype can provide ample information about application
requirements and lead to the development of a full application.

b. Completion of the prototyping process is thus not the end of the
development process. Rather it signals the beginning of the next activity-
full application development.
c. The information gathered during application prototyping suggests features
that must be added to the application.


iv. Begin New Prototype
a. The information gained by developing and using the prototype will
sometimes suggest the use of an entirely different approach to meet the
organization's needs.
b. It may reveal that the features of the application must be dramatically
different and that the existing prototype is inappropriate to demonstrate and
evaluate those features.
c. Consequently rather than jumping into a full-scale development effort
with the newly acquired information, management may support the
creation of another prototype that will add to the information at hand.

CODE REVIEW:

1. A code review is a structured walkthrough conducted to examine the program
code developed in a system, along with its documentation. It is used for new
systems and for systems under maintenance.
2. As a general rule, a code review does not deal with an entire software system, but
rather with individual modules or major components in a program.
3. The code itself is compared with the original specifications or requirements to
determine whether they are being satisfied.
4. Discovering that a portion of a code does not agree with the original
specifications or finding problems or mistakes that originated with the earlier
requirements analysis is not uncommon.
5. Although finding that a program or design must be modified can be frustrating, it
is better to realize it during the review than after the system is implemented.
6. As a result, costs will be lower, changes easier, and, most important, users will
receive the proper system.
7. When programs are reviewed, the participants also assess execution efficiency,
the use of standard data names and modules, and program errors.
8. Obvious errors, such as syntax errors and logic errors, can even be jotted down
ahead of time by team members and submitted to the recorder, thus saving
meeting time.
9. Other errors may merit discussion and examination during the review.
10. A checklist that can be used for noting problems and their severity can be
maintained.
11. Missing details, unnecessary components, and major and minor errors are easily
pointed out using such a checklist.
12. When applied to maintenance projects the process is the same, except that some
of the program code already existed prior to undertaking the maintenance work.

DATABASE DESIGN

1. The first database design step in structured systems analysis converts the ER
analysis model to logical record types and specifies how these records are to be
accessed.
2. These access requirements are later used to choose keys that facilitate data
access.
3. Quantitative data such as item sizes, numbers of records and access frequency are
often also added at this step.
Quantitative data is needed to compute the storage requirements and transaction
volumes to be supported by the computer system.
4. The combination of logical record structure, access specifications, and
quantitative data is sometimes known as the system-level database specification.
This specification is used at the implementation level to choose a record structure
supported by a DBMS.
5. The simplest conversion is to make each set of the ER diagram into a record type.
6. Object models must be converted to logical record structures when a logical
analysis model is implemented by a conventional DBMS.
The simplest conversion here is for each object class to become a logical record,
with each class attribute converted to a field.
Where attributes are structured, they themselves become a separate record.
7. Different methodologies give their logical records different names, such as blocks,
schemas, and modules.


Completing the system model:
1. The logical record structure is just one part of the system model.
2. It serves to define the database structure.
3. For database design, additional information is needed, in particular:
a. Quantitative data, which tells us the volume of information to be stored.
b. Access requirements, which tell us how the database is to be used. Access
requirements are used to choose file keys during database design.
Information about quantitative data and access requirements is gathered
during systems analysis.
Quantitative data consists of :
a. The size of data items.
b. The number of occurrences of record types.
The size of data items is usually given in a data dictionary.

Specifying Access Requirements:
1. Access requirements are initially picked up from user procedure specifications,
which include statements about how users will access data.
2. These statements become the database access requirements.
3. During database design, access paths are plotted against logical record types for
each access requirement.
4. The access paths show how data is to be used and describe:

i. The record types accessed by each access request.
ii. The sequence in which the record types are accessed
iii. The access keys used to select record types.
iv. The items retrieved from each record; and
v. The number of records accessed.

5. Defining the access requirements completes the database specification, which is
used to create a database design.
6. The logical record structure is converted to a database logical design.
7. Access paths are used to select appropriate physical structures.
8. A number of alternative physical structures can be chosen to satisfy the access
requirements.
Part of the design is to select from these structures.
9. An iterative process is used to make such choices and performance estimates are
made at each iteration to compare the alternatives.
10. The kinds of design techniques used will depend on the software to be used to
implement the database.
11. The simplest conversion is to a set of files. Alternatively, the database may be
implemented on a DBMS.

DESIGN OF INPUT

1. Systems analysts decide the following input design details:
i. What data to input
ii. What medium to use
iii. How the data should be arranged or coded
iv. The dialogue to guide users in providing input
v. Data items and transactions needing validation to detect
errors.
vi. Methods for performing input validation and steps to follow
when errors occur.
2. The design decisions for handling input specify how data are accepted for
computer processing.
3. Analysts decide, whether the data are entered directly, perhaps through a
workstation, or by using source documents, such as sales slips, bank checks, or
invoices where the data in turn are transferred into the computer for processing.
4. The design of input also includes specifying the means by which end- users and
system operators direct the system in which actions to take.
e.g. a system user interacting through a workstation must be able to tell the system
whether to accept input, produce a report, or end processing.
5. Online systems include a dialogue or conversation between the user and the
system. Through the dialogue users request system services and tell the system
when to perform a certain function.
6. The nature of online conversations often makes the difference between a
successful and an unacceptable design.

7. An improper design that leaves the display screen blank will confuse a user about
what action to take next.
8. The arrangement of messages and comments in online conversations, as well as
the placement of data, headings and titles on display screens or source documents,
is also part of input design.
9. Sketches of each are generally prepared to communicate the arrangement to
users for their review, and to programmers and other members of the systems
design team.
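Items (v) and (vi) of the input design list above, validating data items and handling errors, can be illustrated with a minimal field-level check. The field name and the validation rules below are hypothetical; this is a sketch of the idea, not a prescribed design.

```python
def validate_quantity(raw):
    """Validate one input field: must be a whole number between 1 and 999.
    Returns (value, "") on success or (None, error_message) on failure,
    so the dialog can redisplay the message and re-prompt the user."""
    try:
        value = int(raw)
    except ValueError:
        return None, "Quantity must be numeric."
    if not 1 <= value <= 999:
        return None, "Quantity must be between 1 and 999."
    return value, ""

print(validate_quantity("42"))       # → (42, '')
print(validate_quantity("abc")[1])   # → Quantity must be numeric.
print(validate_quantity("1000")[1])  # → Quantity must be between 1 and 999.
```

Returning an error message rather than raising lets the input dialog decide what steps to follow when errors occur, which is the design decision the text describes.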

DESIGN OF OUTPUT

1. Output generally refers to the results and information that are generated by the
system.
2. For many end-users, output is the main reason for developing the system and the
basis on which they will evaluate the usefulness of the application.
3. Most end-users will not actually operate the information system or enter data
through workstations, but they will use the output from the system.
4. When designing output, systems analysts must accomplish the following:
i. Determine what information to present.
ii. Decide whether to display, print, or speak the information and select
the output medium.
iii. Arrange the presentation of information in an acceptable format.
iv. Decide how to distribute the output to intended recipients.

5. The arrangement of information on a display or printed document is termed a
layout.
6. Accomplishing the general activities listed above will require specific decisions,
such as whether to use preprinted forms when preparing reports and documents,
how many lines to plan on a printed page, or whether to use graphics & colour.
7. The output design is specified on layout forms, sheets that describe the location
of data items, their characteristics (such as length & type), and the format of
column headings & pagination.
These elements are analogous to an architect's blueprint that shows the location of
each component.
DESIGN REVIEW

1. Design reviews focus on design specifications for meeting previously identified
system requirements.
2. The information supplied about the design prior to the session can be
communicated using HIPO charts, structured flowcharts, Warnier/ Orr diagrams,
screen designs, or document layouts.
3. Thus, the logical design of the system is communicated to the participants so they
can prepare for the review.
4. The purpose of this type of walkthrough is to determine whether the proposed
design will meet the requirements effectively and efficiently.

5. If the participants find discrepancies between the design and requirements, they
will point them out and discuss them.
6. It is not the purpose of the walkthrough to redesign portions of the system. That
responsibility remains with the analyst assigned to the project.


Hardware Selection

Determining size and capacity requirements.
1. Systems capacity is frequently the determining factor. Relevant features to
consider include the following:
a. Internal memory size.
b. Cycle speed of system for processing.
c. Number of channels for input, output and communication.
d. Characteristics of display and communication components.
e. Types and number of auxiliary storage units that can be attached.
f. Systems support and utility software provided or available.

2. Computer Evaluation and Measurement
a. A synthetic job is a program written to exercise a computer's resources in a
way that allows the analyst to imitate the expected job stream and
determine the results.
b. A benchmark is the application of synthetic programs to emulate the
actual processing work handled by a computer system.
c. Benchmark programs permit the submission of a mix of jobs that are
representative of the user's projected workload.

3. Financial Factors
a. The acquisition of and payment for a computer system are usually handled
through one of three common methods: rental, lease or purchase.
b. Determining which option is appropriate depends on the characteristics
and plans of the organization at the time the acquisition is made.
c. Maintenance and support
An additional factor in hardware decisions concerns the maintenance and
support of the system after it is installed.
Primary concerns are the source of maintenance, terms and response
times.

SOFTWARE SELECTION

1. One of the most difficult tasks in selecting software, once systems requirements
are known, is determining whether a particular software package fits the
requirements.

2. When analysts evaluate possible software for adoption, they do so by comparing
software features with previously developed application requirements.
3. The flexibility of a software system should include the ability to meet changing
requirements and varying user needs.
4. Software that is flexible is generally more valuable than a program that is totally
inflexible.
5. Areas where flexibility is wanted are data storage, reporting options, definition
of parameters, and data input.
6. Ensuring that adequate controls are included in the system is an essential step in
the selection of software.
7. Systems capacity refers to the number of files that can be stored and the amount
each file will hold. Capacity also depends on the language in which the software
is written.
8. Capacity can be determined by the following:
i. The maximum size of each record measured in number of bytes.
ii. The maximum size of each file measured in number of bytes.
iii. The maximum number of fields per record.
iv. The number of files that can be active at one time.
v. The number of files that can be registered in a file directory.
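The capacity figures in the list above can be combined into a rough storage estimate. The record layout and counts below are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical record layout: field name -> size in bytes.
customer_record = {"cust_no": 6, "name": 30, "address": 60, "balance": 8}

record_size = sum(customer_record.values())   # bytes per record
file_size = record_size * 10_000              # assume 10,000 records in the file

print(record_size)  # → 104
print(file_size)    # → 1040000
```

Multiplying the estimate across all active files gives the total storage the selected software must support.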

HIPO CHARTS

1. It is a commonly used method for developing systems software.
2. HIPO is an acronym for Hierarchical Input Process Output; it was developed by
IBM for its large, complex operating systems.
3. The greatest strength of HIPO is its documentation of a system.

PURPOSE:

1. Assumption on which HIPO is based: it is easy to lose track of the intended
function of a system or component in a large system.
2. User's view: single functions can often extend across several modules.
Analyst's concern: understanding, describing, and documenting the modules and
their interaction in a way that provides sufficient detail but does not lose sight
of the larger picture.
3. HIPO diagrams are graphic, rather than narrative, descriptions of the system.
They assist the analyst in answering three guiding questions:
i. What does the system or module do? (Asked when designing the
system).
ii. How does it do it? (Asked when reviewing the code for testing or
maintenance)
iii. What are the inputs and outputs? (Asked when reviewing the code for
testing or maintenance)
4. A HIPO description for a system consists of the visual table of contents & the
functional diagrams.


VISUAL TABLE OF CONTENTS:

1. The visual table of contents (VTOC) shows the relation between each of the
documents making up a HIPO package.
2. It consists of a Hierarchy chart that identifies the modules in a system by number
and in relation to each other and gives a brief description of each module.
3. The numbers in the contents section correspond to those in the organization
section.
4. The modules are described in increasing detail. Depending on the complexity of
the system, three to five levels of modules are typical.


FUNCTIONAL DIAGRAMS:

1. For each box defined in the VTOC a diagram is drawn.
2. Each diagram shows input and output (left to right or top to bottom), major
processes, movement of data, and control points.
3. Traditional flowchart symbols represent media, such as magnetic tape, magnetic
disk and printed output.
4. A solid arrow shows control paths, and an open arrow identifies data flow.
5. Some functional diagrams contain other intermediate diagrams, but they also
show external data, as well as internally developed data and the step in the
procedure where the data are used.
6. A data dictionary description can be attached to further explain the data elements
used in a process.
7. HIPO diagrams are effective for documenting a system.
8. They aid designers and force them to think about how specifications will be met
and where activities and components must be linked together.

Disadvantages:
1. They rely on a set of specialized symbols that require explanation, an extra
concern when compared to the simplicity of, for example, data flow diagrams.
2. HIPO diagrams are not as easy to use for communication purposes as many people
would like.
3. They do not guarantee error-free systems.

STRUCTURED WALKTHROUGHS:

1. A structured walkthrough is a planned review of a system or its software by
persons involved in the development effort.
2. The participants are generally at the same level in the organization; that is, they
are analysts or programmer-analysts.
Typically department managers for marketing or manufacturing are not involved
in the review even though they may be the eventual recipients of the system.

3. Sometimes structured walkthroughs are called Peer Reviews because the
participants are colleagues at the same level in the organization.


CHARACTERISTICS:

1. The purpose of walkthroughs is to find areas where improvement can be made in
the system or the development process.
2. A walkthrough should be viewed by the programmers and analysts as an
opportunity to receive assistance, not as an obstacle to be avoided or tolerated.
3. The review session does not result in the correction of errors or changes in
specifications. Those activities remain the responsibility of the developers. Hence
the emphasis is constantly on review, not repair.
4. The individuals who formulated the design specifications or created the program
code are, as might be expected, part of the review team.
5. A moderator is sometimes chosen to lead the review, although many
organizations prefer to have the analyst or designer who formulated the
specifications or program lead the session, since they have greater familiarity with
the item being reviewed. In either case, someone must be responsible for keeping
the review focused on the subject of the meeting.
6. A scribe or recorder is also needed to capture the details of the discussion and the
ideas that are raised.
Since the walkthrough leader or the sponsoring programmers or analysts may not
be able to jot down all the points aired by the participants, appointing another
individual to take down all the relevant details usually ensures a more complete
and objective record.
7. The benefits of establishing standards for data names, module determination, and
data item size and type are recognized by systems managers. The time to start
enforcing these standards is at the design stage.
Therefore, they should be emphasized during walkthrough sessions.
8. Maintenance should also be addressed during walkthroughs. Enforcing coding
standards, modularity, and documentation will ease later maintenance needs.
9. It is becoming increasingly common to find organizations that will not accept new
software for installation until it has been approved by software maintenance
teams. In such an organization, a participant from the quality control or
maintenance team should be an active participant in each structured walkthrough.


10.
(i) The walkthrough team must be large enough to deal with the
subject of the review in a meaningful way, but not so large
that it cannot accomplish anything.
(ii) Generally no more than 7 to 9 persons should be involved,
including the individuals who actually developed the product
under review, the recorder, and the review leader.



11.
(i) As a general rule, management is not directly involved in
structured walkthrough sessions. Its participation could
actually inhibit members of the review team from speaking
out about problems they see in the project.
(ii) This is because management presence is often interpreted to mean evaluation.
(iii) Managers may feel that raising many questions, identifying
mistakes, or suggesting changes indicates that the individual
whose work is under review is incompetent.
(iv) It is best to provide managers with reports summarizing the
review session rather than to have them participate.
(v) The most appropriate type of report will communicate that a
review of the specific project or product was conducted, who
attended, and what action the team took. It need not
summarize errors that were found, modifications suggested,
or revisions needed.
12. Structured reviews rarely exceed 90 minutes in length.
13. The structured walkthrough can be used throughout the systems development
process as a constructive and cost-effective management tool, after the detailed
investigation (requirements review), following design (design review), and during
program development (code review and testing review).

TEST PLANS

Black-Box Testing:

1. Black box testing, also called behavioral testing, focuses on the functional
requirements of the software.
2. Black-box testing enables the software engineer to derive sets of input conditions
that will fully exercise all functional requirements for a program.
3. Black-box testing attempts to find errors in the following categories:
i. Incorrect or missing functions.
ii. Interface errors
iii. Errors in data structures or external database access.
iv. Behavior or performance errors, and
v. Initialization and termination errors.
4. Unlike white-box testing, which is performed early in the testing process, black-
box testing tends to be applied during later stages of testing. Because black-box
testing purposely disregards control structure, attention is focused on the
information domain.
5. Tests are designed to answer the following questions:
1. How is functional validity tested?

2. How is system behavior & performance tested?
3. What classes of input will make good test cases?
4. Is the system particularly sensitive to certain input values?
5. How are the boundaries of a data class isolated?
6. What data rates and data volume can the system tolerate?
7. What effect will specific combinations of data have on system operation?
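As an illustration of answering these questions, black-box cases can be derived purely from a specification: valid input classes plus their boundaries. The shipping-charge function below and its spec are hypothetical, invented only to show the technique.

```python
def shipping_charge(weight_kg):
    """Hypothetical spec: 0 < weight <= 5 kg costs 50; 5 < weight <= 20 costs 100;
    anything else is rejected."""
    if weight_kg <= 0 or weight_kg > 20:
        raise ValueError("weight out of range")
    return 50 if weight_kg <= 5 else 100

# Black-box cases chosen from the spec alone, without looking at the code:
assert shipping_charge(1) == 50        # inside the first valid class
assert shipping_charge(5) == 50        # upper boundary of the first class
assert shipping_charge(5.1) == 100     # just inside the second class
assert shipping_charge(20) == 100      # upper boundary of the second class
for bad in (0, -1, 20.5):              # invalid classes, including boundaries
    try:
        shipping_charge(bad)
        assert False, "should have been rejected"
    except ValueError:
        pass
```

Each case probes a data class or a boundary named in the specification, which is exactly where black-box testing expects errors to cluster.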

Advantages:

By applying black-box techniques, a set of test cases can be derived that satisfy the
following criteria:

1. Test cases that reduce, by a count that is greater than one, the number of
additional test cases that must be designed to achieve reasonable testing.
2. Test cases that tell us something about the presence or absence of classes of
errors, rather than an error associated only with the specific test at hand.

WHITE-BOX TESTING:

1. White-box testing, sometimes called glass-box testing, is a test case design method
that uses the control structure of the procedural design to derive test cases.
2. Using white-box testing methods the software engineer can derive test cases that
i. Guarantee that all independent paths within a module have been
exercised at least once.
ii. Exercise all logical decisions on their true and false sides.
iii. Execute all loops at their boundaries and within their operational
bounds and
iv. Exercise internal data structures to ensure their validity.
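The four goals above can be illustrated on a small, hypothetical function: the cases below exercise every decision on its true and false sides and run the loop zero, one, and many times.

```python
def largest(values):
    """Return the largest element of a list; raises on an empty list."""
    if not values:                      # decision 1
        raise ValueError("empty list")
    best = values[0]
    for v in values[1:]:                # loop: exercised at 0, 1, many iterations
        if v > best:                    # decision 2: true and false sides
            best = v
    return best

# White-box cases derived from the control structure:
assert largest([7]) == 7                # loop executes zero times
assert largest([3, 9]) == 9             # loop once, decision 2 true
assert largest([9, 3]) == 9             # loop once, decision 2 false
assert largest([1, 5, 2, 8, 4]) == 8    # many iterations, mixed outcomes
try:
    largest([])                         # decision 1 true path
    assert False
except ValueError:
    pass
```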


Reasons/advantages of conducting white-box tests:

1. Logic errors and incorrect assumptions are inversely proportional to the
probability that a program path will be executed.
- Errors tend to creep into one's work when one designs and
implements functions, conditions, or control flow that are out of the
mainstream.
- Everyday processing tends to be well understood (and well
scrutinized), while special-case processing tends to fall through the
cracks.
2. We often believe that a logical path is not likely to be executed when, in
fact, it may be executed on a regular basis. The logical flow of a program
is sometimes counterintuitive, meaning that one's unconscious
assumptions about flow of control and data may lead us to make design
errors that are uncovered only once path testing commences.

3. Typographical errors are random. When a program is translated into
programming language source code, it is likely that some typing errors
will occur. Many will be uncovered by syntax and type checking
mechanisms, but others may go undetected until testing begins.
ALPHA TESTING & BETA TESTING

1. System validation checks the quality of the software in both simulated and live
environments.
2.
i. First the software goes through a phase, often referred to as alpha
testing, in which errors and failures based on simulated user
requirements are verified and studied.
ii. The alpha test is conducted at the developer's site by a customer.
iii. The software is used in a natural setting with the developer recording
errors & usage problems.
iv. Alpha tests are conducted in a controlled environment.

3.
i. The modified software is then subjected to phase two, called beta testing,
at the actual user's site or in a live environment.
ii. The system is used regularly with live transactions; after a scheduled time,
failures and errors are documented and final corrections and enhancements
are made before the package is released for use.
iii. Unlike alpha testing, the developer is generally not present.
Therefore, the beta test is a live application of the software in an
environment that cannot be controlled by the developer.

iv. The customer records all problems (real or imagined) that are encountered
during beta testing and reports these to the developer at regular intervals.
v. As a result of problems reported during beta tests, software engineers
make modifications and then prepare for release of the software product to
the entire customer base.











USER INTERFACE DESIGN
1. There are two aspects to interface design.
i. To choose the transactions in the business process to be supported by
interfaces. This defines the broad interface requirements in terms of
what information is input and output through the interface during the
transaction.
ii. The design of the actual screen presentation, including its layout and
the sequence of screens that may be needed to process the
transaction.
2. Choosing the Transaction Modules
1. Defining the transactions that must be supported through interfaces is part of
the system specification.
2. Each interface object defines one interface module which will interact with
the user in some way. Each such interaction results in one transaction with the
system.

3. Defining the presentation
1. Each interaction includes both the presentation and dialog.
2. Presentation describes the layout of information.
Dialog describes the sequence of interactions between the user and the
computer.

4. Evaluation of Interfaces.
1. User-friendliness: the interface should be helpful, tolerant, and adaptable, and the
user should be happy and confident to use it.
2. Friendly interactions result in better interfaces, which not only make users
more productive but also make their work easier and more pleasant. The terms
effectiveness and efficiency are also often used to describe interfaces.
3. An interface is effective when it results in a user finding the best solution to a
problem, and it is efficient when it results in this solution being found in the
shortest time with least error.

5. Workspace
1. The computer interface is part of a user's workspace.
2. A workspace defines all the information that is needed for a user's work as well
as the layout of this information.

6. Robustness
1. Robustness is an important feature of an interface.
2. This means that the interface should not fail because of some action taken by
the user, nor should a user error lead to a system breakdown.
3. This in turn requires checks that prevent users from making incorrect entries.

7. Usability
1. Usability is a term that defines how easy it is to use an interface.
2. The things that can be measured to describe usability are usability metrics.

3. Metrics cover objective factors as well as subjective factors. These are:
Analytical metrics, which can be directly described: for example, whether
all the information needed by a user appears on the screen.
Performance metrics, which include things like the time used to perform a
task, system robustness, or how easy it is to make the system fail.
Cognitive workload metrics, or the mental effort required of the user to use
the system. This covers aspects such as how closely the interface
approximates the user's mental model or reactions to the system.
User satisfaction metrics, which include such things as how helpful the
system is or how easy it is to learn.


Interactive Interfaces:

1. The ideal interactive interface is the one where the user can interact with the
computer using natural language.
2. The user types in a sentence on the input device (or perhaps speaks into a speech
recognition device) and the computer analyzes this sentence and responds to it.
3. The form of dialog and presentation depends on the kind of system supported.
There are different kinds of interaction:

E.g.
- Dialogs in transaction processing that allow the input of one transaction that
describes an event or action, such as a new appointment, or a deposit in an
account;
- Designing an artifact such as a document, a report, or the screen layout itself;
- Making a decision about a course of action, such as what route to take to make a
set of deliveries; and
- Communication and coordination with other group members.


Note:
i. An interactive transaction dialog is usually an interchange of messages
between the user and the computer in a relatively short space of time.
ii. The dialog concerns one fact and centers around the attributes related
to that fact.
iii. Different presentation methods are used in on-line user dialogs for
entering transaction data; the most common methods are menus,
commands, or templates.
Menus:
1. A menu system presents the user with a set of actions and requires the user to
select one of those actions.
2. It can be defined as a set of alternative selections presented to a user in a window.
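A minimal Python sketch of this idea (the window title and the options shown are hypothetical):

```python
def render_menu(title, options):
    """Lay out a menu window as a set of numbered alternative selections."""
    lines = [title] + ["%d. %s" % (i, opt) for i, opt in enumerate(options, 1)]
    return "\n".join(lines)

def select(options, choice):
    """Map the user's numeric choice to an action; None if out of range."""
    return options[choice - 1] if 1 <= choice <= len(options) else None
```

The user is thus constrained to the actions the system offers, which is one reason menus are forgiving of inexperienced users.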



Commands and prompts:

1. In this case the computer asks the user for specific inputs.
2. On getting the input, the computer may respond with some information or ask the
user for more information.
3. This process continues until all the data has been entered into the computer or
retrieved by the user.
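The back-and-forth of such a dialog can be sketched as follows. The field names and replies are invented for illustration; a real system would read each reply from the input device rather than from a list.

```python
def prompt_dialog(fields, replies):
    """Ask for each field in turn; an empty reply makes the computer
    re-prompt (consume the next reply) until something is entered."""
    answers = iter(replies)
    collected = {}
    for field in fields:
        value = ""
        while not value.strip():
            value = next(answers)
        collected[field] = value.strip()
    return collected
```

This mirrors the description above: the computer keeps asking for more information until all the required data has been supplied.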

Templates:
1. Templates are the equivalent of forms on a computer.
2. A form is presented on the screen and the user is requested to fill it in.
3. Usually several labeled fields are provided, and the user enters data into the blank
spaces.
4. Fields in the template can be highlighted or made to blink to attract the user's
attention.
5. The advantage templates have over menus or commands is that the data is entered
with fewer screens.
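The idea of a template can be sketched like this (the field names are hypothetical): all fields are presented on one "screen", and any left blank are reported so they can be highlighted.

```python
def fill_template(field_names, entries):
    """Fill a form template in one screen.

    Returns the filled form plus the list of blank fields, which an
    interface could highlight or make blink for the user's attention.
    """
    filled = {name: entries.get(name, "").strip() for name in field_names}
    blank = [name for name in field_names if not filled[name]]
    return filled, blank
```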
Warnier/Orr Diagrams

1. Warnier/Orr diagrams are also known as the logical construction of programs /
logical construction of systems technique.
2. They were initially developed in France by Jean-Dominique Warnier and in the
United States by Kenneth Orr.
3. This method aids the design of program structures by identifying the output &
processing results & then working backwards to determine the steps &
combinations of input needed to produce them.
4. The simple graphic methods used in Warnier/ Orr diagrams make the levels in the
system evident and the movement of the data between them vivid.



BASIC ELEMENTS:

1. Warnier/Orr diagrams show the processes & the sequences in which they are
performed.
2. Each process is defined in a hierarchical manner; that is, it consists of sets of
subprocesses that define it.
3. At each level, the process is shown in a bracket that groups its components.

























[Figure: example Warnier/Orr diagram showing bracketed process levels - omitted]

4. Since a process can have many different subprocesses, a Warnier/Orr diagram
uses a set of brackets to show each level of the system.
5. Critical factors in software definition & development are iteration (repetition) &
alternation. Warnier/Orr diagrams show these very well.
6. In some situations, the only concern is whether a certain characteristic is present.
The alternatives are that it is or it is not. The notation used to indicate that a
condition does not exist is a single line over the condition name, and the symbol +
represents alternatives.

Using Warnier/Orr Diagrams

1. The ability to show the relation between processes and steps in a process is not
unique to Warnier/Orr diagrams, nor is the use of iteration, alternation, or the
treatment of individual cases. However, the approach used to develop system
definitions with Warnier/Orr diagrams is different and fits well with those used in
logical system design.
2. To develop a Warnier/Orr diagram, the analyst works backwards, starting with the
system's output and using an output-oriented analysis.
3. On paper, the development moves from left to right. First the intended output or
results of the processing are defined. At the next level, shown by inclusion with a
bracket, the steps needed to produce the output are defined.
4. Each step in turn is further defined. Additional brackets group the processes
required to produce the result on the next level.
5. A completed Warnier/Orr diagram includes both process groupings & data
requirements.
o Data elements are listed for each process or process component.
o These data elements are the ones needed to determine which alternative or
case should be handled by the system & to carry out the process.

o The analyst must determine where each data element originates, how it is
used, and how individual elements are combined.
6. When the definition is completed, a data structure for each process is
documented. It, in turn, is used by the programmers, who work from the diagrams
to code the software.
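The bracketed, hierarchical structure described above can be sketched as nested data. The payslip processes below are invented for illustration; the point is that each bracket groups the subprocesses that define it, read from left to right (top level to detail).

```python
# A bracket is modelled as (process_name, [subprocesses]);
# a leaf process has an empty subprocess list.

def levels(process, depth=0):
    """Walk the bracketed structure left to right, returning (depth, name)
    pairs so each level of the system is visible."""
    name, subs = process
    rows = [(depth, name)]
    for sub in subs:
        rows.extend(levels(sub, depth + 1))
    return rows

payslip = ("Produce payslip", [
    ("Read employee record", []),
    ("Compute pay", [("Gross pay", []), ("Deductions", [])]),
    ("Print payslip", []),
])
```

Walking the structure top-down reproduces the analyst's left-to-right reading of the diagram: output first, then the steps that produce it.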


Advantages of Warnier/ Orr diagrams
1. They are simple in appearance and easy to understand. Yet they are powerful
design tools.
2. They have the advantage of showing groupings of processes and the data that
must be passed from level to level.
3. The sequence of working backwards ensures that the system will be result-oriented.
4. This method is useful for both data and process definition. It can be used for each
independently, or both can be combined on the same diagram.
