Sr.No   Topic                               Page No.
1.      Introduction To SAD                 1-7
2.      Approaches To System Development    8-23
3.                                          24-33
4.      Feasibility Analysis                34-44
5.                                          45-65
6.      Design                              66-86
7.                                          87-90
8.      Testing                             91-103
9.                                          104-112
10.     Documentation                       113-118
11.     Software Documents                  119-122
12.     Question Answer                     123-127
13.                                         128-129
Super System :
Characteristics Of A System:
1. Organization
a. It implies the structure and order of a system.
b. It is the arrangement of components that helps to achieve a given objective.
E.g. (I) In a business system, the hierarchical relationship starting with the management
(super system) at the top and leading downwards to the departments (subsystems) represents
the organization structure.
E.g. (II) In a computer system, there are input devices, output devices, a processing unit,
and storage devices linked together to work as a whole to produce the required output from
the given input.
2. Interaction
Interaction refers to the manner in which each component of a system functions with other
components of a system.
E.g.: There must be interaction between (i) the purchase dept. and the production dept., (ii)
the payroll dept. and the personnel dept., and (iii) the CPU and the I/O devices.
3. Interdependence
a. Interdependence defines how different parts of an organization are dependent on one
another.
b. They are coordinated and linked together according to a plan (i.e., the output of one
subsystem may be the input of another subsystem).
E.g.: User -> Analyst -> Programmer -> User/Operator. Here, a system is designed for
the user, but it first requires an analyst to analyze the requirements, then coding by the
programmer, and finally testing by the user.
4. Integration
a. Integration refers to the completeness of a system.
b. It means that subsystems of a system work together within a system even though
each subsystem performs its own unique function.
5. Central Objective
The objective of a system as a whole is more important than the objectives of any of its
individual subsystems.
http://way2mca.com
Elements Of A System:
1. Input/Output
a. One of the major objectives of a system is to produce output that has some value to the
user, using the given input.
b. Input: It is the data or information which is entered into the system.
c. Output: It is the outcome after processing the input.
2. Processor
a. It is an element of the system that performs the actual transformation of input into
output.
b. It may modify the input partially or completely.
3. Control
a. The control element guides the system.
b. It is a decision-making subsystem that controls the pattern of activities related to the
input, processing and output.
E.g.: The management is the decision-making body that controls the activities of an
organization, just as the CPU controls the activities of the Computer System.
4. Environment
It is the surroundings in which a system performs.
E.g.: The users and vendors of a system form the environment of that system.
5. Boundaries
a. A system should be defined by its boundaries, i.e., the limits that identify its components
and the processes related to the system.
E.g.: A payroll system's boundary is that it can only calculate salaries.
b. An Automation Boundary is a boundary that separates manual processes from
automated processes.
E.g.: Entering basic code for salary is a manual process while the actual calculation
of the salary is an automated process.
6. Feedback
a. It implies the user's response to a system.
b. It provides valuable information about what improvements and updates can be applied
to the system.
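As a rough illustration (the payroll names and numbers below are hypothetical, not from the text), the elements above can be sketched as a tiny program: input enters the system, a control element validates it, a processor transforms it into output.

```python
# Hypothetical sketch of the system elements described above:
# input -> control (validation) -> processor -> output.

def control(raw_hours):
    """Control: the decision-making element that guards the processing."""
    if raw_hours < 0 or raw_hours > 80:
        raise ValueError("hours outside the system boundary")
    return raw_hours

def processor(hours, hourly_rate):
    """Processor: transforms the input into output."""
    return hours * hourly_rate

def payroll_system(hours, hourly_rate=10):
    """The system as a whole: input -> control -> processor -> output."""
    checked = control(hours)                   # control element
    salary = processor(checked, hourly_rate)   # processing
    return salary                              # output

print(payroll_system(40))  # -> 400
```

Feedback would correspond to the user reporting, say, a wrong rate, which leads to a change in the system.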
3. Architect:
Architects function as a liaison between the client's abstract design requirements and the
contractor's detailed building plan.
Similarly, the analyst functions as a liaison between the user's logical design requirements
and the detailed physical system design.
As architect, the analyst also creates a detailed physical design of candidate systems.
He/she aids users in formalizing abstract ideas and provides details to build the end
product, i.e., the candidate system.
4. Psychologist
The analyst plays the role of a psychologist in the way he/she reaches people, interprets
their thoughts, assesses their behavior, and draws conclusions from these interactions.
Understanding interpersonal relationships is important.
5. Salesperson:
Selling change
Selling ideas
Selling the system takes place at each step in the system life cycle.
Sales skills and persuasiveness are crucial to the success of the system.
6. Motivator
The candidate system must be well designed and acceptable to the user.
System acceptance is achieved through user participation in its development, effective
user training, and proper motivation to use the system.
Motivation is most evident during the first few weeks after implementation.
If the user's staff continues to resist the system, the effort becomes frustrating.
7. Politician
Diplomacy & finesse in dealing with people can improve acceptance of the system.
Just as a politician must have the support of his/her constituency, the analyst's goal is to
have the support of the user's staff.
He/she represents their thinking and tries to achieve their goals through
computerization.
2. Should be familiar with the makeup and inner workings of major application areas such
as financial accounting, personnel administration, marketing and sales, operations
management, model building, and production control.
3. Competence in system tools and methodologies, and a practical knowledge of one or
more programming and database languages.
4. Experience in hardware and software specification, which is important for selection.
Structured Approach :
Structured Approach is made up of three techniques :
(1) Structured Programming :
A structured program is a program that has one beginning, one end, and each step in
program execution consists of one of the three programming constructs: (a) A sequence
of program statements (b) A decision where one set of statements executes, or another
set of statements executes (c) A repetition of a set of statements.
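As a minimal sketch, all three constructs can be seen in one short routine (the task of summing even numbers is chosen purely for illustration):

```python
# The three structured-programming constructs in a single routine.

def sum_of_evens(numbers):
    total = 0                 # (a) sequence: statements execute one after another
    for n in numbers:         # (c) repetition: a set of statements repeats
        if n % 2 == 0:        # (b) decision: one branch or the other executes
            total += n
        else:
            pass              # odd numbers are skipped
    return total

print(sum_of_evens([1, 2, 3, 4]))  # -> 6
```

Note that the routine has one beginning and one end, with no jumps into or out of the middle.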
(2) Structured Analysis and (3) Structured Design, together called SADT, i.e., Structured
Analysis & Design Technique :
Definition: Structured Design is a technique that provides guidelines for deciding what
the set of programs should be, what each program should accomplish, and how the
programs should be organized into a hierarchy.
Principles: Program Modules should be (a) loosely coupled i.e. each module is
as independent of the other modules as possible and thus easily changeable and (b)
highly cohesive i.e. each module accomplishes a clear task.
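A minimal sketch of these principles (the payroll functions are hypothetical examples): each module below accomplishes one clear task (high cohesion) and depends on the others only through parameters and return values (loose coupling), with a top-level function organizing them into a hierarchy.

```python
# Two loosely coupled, highly cohesive modules, organized by a third.

def gross_pay(hours, rate):
    """Cohesive: computes gross pay and nothing else."""
    return hours * rate

def tax_deduction(gross, tax_rate=0.1):
    """Cohesive: computes the deduction and nothing else."""
    return gross * tax_rate

def net_pay(hours, rate):
    """Organizes the modules into a hierarchy, as structured design suggests."""
    gross = gross_pay(hours, rate)
    return gross - tax_deduction(gross)

print(net_pay(40, 10))  # -> 360.0
```

Because `tax_deduction` knows nothing about hours or rates, the tax rule can change without touching the other modules.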
Definition: Structured Analysis is a technique that helps define what the system needs
to do (processing requirements), what data the system needs to store and use (data
requirements), what inputs and outputs are needed, and how the functions work together
overall to accomplish required tasks.
Data Flow Diagram (DFD): It is a graphical model, produced in structured analysis,
showing the inputs, processes, storage, and outputs of a system.
Entity Relationship Diagram (ERD): It is a graphical model of the data needed by the
system, including entities about which information is stored and the relationships among
them, produced in structured analysis.
Object-Oriented Approach :
Class: It is a collection of similar objects, and each class may have specialized
subclasses, and/or a generalized superclass.
Class Diagram: It is a graphical model, produced in the object-oriented approach, that
shows all the classes of objects in the system.
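A small sketch of the class concept (the Employee hierarchy is a hypothetical example, not from the text): a generalized superclass with specialized subclasses.

```python
# A generalized superclass with two specialized subclasses.

class Employee:                      # generalized superclass
    def __init__(self, name):
        self.name = name

class SalariedEmployee(Employee):    # specialized subclass
    def __init__(self, name, monthly_salary):
        super().__init__(name)
        self.monthly_salary = monthly_salary

class HourlyEmployee(Employee):      # another specialized subclass
    def __init__(self, name, rate):
        super().__init__(name)
        self.rate = rate

# Every object of a subclass is also an object of the superclass.
staff = [SalariedEmployee("A", 3000), HourlyEmployee("B", 15)]
print(all(isinstance(e, Employee) for e in staff))  # -> True
```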
-9-
- 10 -
- 11 -
4. Support Phase :
Following are the activities of the Support Phase:
i] Provide support to End-Users (Provide a helpdesk facility and training programs, to
provide support to end users.)
ii] Maintain & Enhance new System (Keep the system running error-free, and
provide upgrades to keep the system contemporary.)
coding begins. The design phase is also documented and becomes a part of the software
configuration.
Coding: The design must be translated into a machine-readable form. Coding performs
this task. If the design phase is dealt with in detail, the coding can be done mechanically.
Testing : Once code is generated, it has to be tested. Testing focuses on the logic as well
as the function of the program to ensure that the code is error-free and that the output
matches the requirement specifications.
Maintenance : Software undergoes change with time. Changes may occur on account of
errors encountered, the need to adapt to changes in the external environment, or the need
to enhance functionality and/or performance. Software maintenance reapplies each of the
preceding life-cycle phases to the existing program.
The classic life cycle is one of the oldest models in use. However, there are a few
associated problems.
Some of the disadvantages are given below.
1. Real projects rarely follow the sequential flow that the model proposes. Iteration always
occurs and creates problems in the application of the model.
2. It is difficult for the client to state all requirements explicitly. The classic life cycle
requires this, and it is thus difficult to accommodate the natural uncertainty that occurs
at the beginning of any new project.
3. A working version of the program is not available until late in the project time span. A
major blunder may remain undetected until the working program is reviewed, which is
potentially disastrous.
In spite of these problems the life-cycle method has an important place in software
engineering work. Some of the reasons are given below.
1. The model provides a template into which methods for analysis, design, coding, testing
and maintenance can be placed.
2. The steps of this model are very similar to the generic steps that are applicable to all
software engineering models.
3. It is significantly preferable to a haphazard approach to software development.
Prototype Model :
Often a customer has defined a set of objectives for software, but not identified the
detailed input, processing or output requirements. In other cases, the developer may be
unsure of the efficiency of an algorithm, the adaptability of the operating system or the
form that the human-machine interaction should take. In these situations, a prototyping
approach may be the best approach. Prototyping is a process that enables the developer to
create a model of the software that must be built. The sequence of events for the
prototyping model is illustrated in figure 1.2. Prototyping begins with requirements
gathering. The developer and the client meet and define the overall objectives for the
software, identify the requirements, and outline areas where further definition is required.
In the next phase a quick design is created. This focuses on those aspects of the software
that are visible to the user (e.g. i/p approaches and o/p formats). The quick design leads to
the construction of the prototype. This prototype is evaluated by the client / user and is
used to refine requirements for the software to be developed. A process of iteration occurs
as the prototype is tuned to satisfy the needs of the client, while at the same time
enabling the developer to more clearly understand what needs to be done.
Although problems may occur, prototyping may be an effective model for software
engineering. Some of the advantages of this model are enumerated below.
Advantages:
1. It is especially useful in situations where requirements are not clearly defined at the
beginning and are not fully understood by either the client or the developer.
2. Prototyping is also helpful in situations where an application is built for the first time with
no precedents to be followed. In such circumstances, unforeseen eventualities may
occur which cannot be predicted and can only be dealt with when encountered.
Spiral Model :
The spiral model in software engineering has been designed to incorporate the best
features of both the classic life cycle and the prototype models, while at the same time
adding an element of risk-taking analysis that is missing in these models. The model,
represented in figure 1.3, defines four major activities defined by the four quadrants of the
figure :
Planning : Determination of objectives, alternatives and constraints.
Risk Analysis : Analysis of alternatives and identification/resolution of risks.
Engineering : Development of the next-level product.
Customer Evaluation : Assessment of the results of the engineering work.
The model demands considerable risk-assessment expertise and relies on this expertise
for success.
The model is relatively new and has not been as widely used as the life cycle or the
prototype models. It will take a few more years to determine the efficiency of this process
with certainty.
This model however is one of the most realistic approaches available for software
engineering. It also has a few advantages, which are discussed below.
Advantages :
The evolutionary approach enables developers and clients to understand and react to
risks at each evolutionary level.
It uses prototyping as a risk reduction mechanism and allows the developer to use this
approach at any stage of the development.
It uses the systematic approach suggested by the classic life cycle method but
incorporates it into an iterative framework that is more realistic.
This model demands an evaluation of risks at all stages and should reduce risks before
they become problematic, if properly applied.
The component-based development (CBD) model incorporates many characteristics of the
spiral model. It is evolutionary in nature, thus demanding an iterative approach to software
creation.
However, the model composes applications from pre-packaged software components called
classes. The engineering begins with the identification of candidate classes. This is done by
examining the data to be manipulated, and the algorithms that will be used to accomplish
this manipulation. Corresponding data and algorithms are packaged into a class. Classes
created in past applications are stored in a class library. Once candidate classes are
identified the class library is searched to see if a match exists. If it does, these classes are
extracted from the library and reused. If it does not exist, it is engineered using
object-oriented techniques. The first iteration of the application is then composed. Process flow
moves to the spiral and will ultimately re-enter the CBD during subsequent passes through
the engineering activity.
Advantages :
The CBD model leads to software reuse, and reusability provides software engineers
with a number of measurable benefits.
This model has been reported to lead to a 70% reduction in development cycle time and an
84% reduction in project cost.
Disadvantages :
The results mentioned above are inherently dependent on the robustness of the
component library.
There is little question in general that the CBD model provides a significant advantage for
software engineers.
Process modeling: The data objects defined in the previous phase are transformed to
achieve the information flow necessary to implement a business function. Processing
descriptions are created for data manipulation.
Disadvantages :
Not all types of applications are appropriate for RAD. If a system cannot be
modularized, building the necessary components for RAD will be difficult.
Not appropriate when the technical risks are high. For example, when an application
makes heavy use of new technology or when the software requires a high degree of
interoperability with existing programs.
Incremental Model :
This model combines elements of the linear sequential model with the iterative philosophy
of prototyping. The incremental model applies linear sequences in a staggered fashion as
time progresses. Each linear sequence produces a deliverable increment of the software.
For example, word processing software may deliver basic file management, editing, and
document production functions in the first increment; more sophisticated editing and
document production in the second increment; spelling and grammar checking in the third
increment; advanced page layout in the fourth increment; and so on. The process flow for
any increment can incorporate the prototyping model. When an incremental model is used,
the first increment is often a core product. Hence, basic requirements are met, but
supplementary features remain undelivered. The client uses the core product. As a result of
his evaluation, a plan is developed for the next increment. The plan addresses
improvement of the core features and addition of supplementary features. This process is
repeated following delivery of each increment, until the complete product is produced. As
opposed to prototyping, incremental models focus on the delivery of an operational product
after every iteration.
Extreme Programming(XP) :
The most widely used agile process, originally proposed by Kent Beck.
XP Planning :
Begins with the creation of user stories.
Agile team assesses each story and assigns a cost.
Stories are grouped to form a deliverable increment.
A commitment is made on the delivery date.
After the first increment, project velocity is used to help define subsequent delivery
dates for other increments.
XP Design :
Follows the KIS (keep it simple) principle.
For difficult design problems, suggests the creation of spike solutions, i.e., design
prototypes.
Encourages refactoring, an iterative refinement of the internal program design.
XP Coding :
Recommends the construction of a unit test for a story before coding commences.
Encourages pair programming.
XP Testing :
All unit tests are executed daily.
Acceptance tests are defined by the customer and executed to assess customer
visible functionality.
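A minimal sketch of the test-first practice recommended above (the discount rule is a hypothetical user story, not from the text): the unit test is written before the code it exercises.

```python
import unittest

# 1. Written first: the unit test fixes the expected behavior of the story.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount_over_100(self):
        self.assertEqual(discounted_price(200), 180.0)

    def test_no_discount_at_or_under_100(self):
        self.assertEqual(discounted_price(50), 50)

# 2. Written second: just enough code to make the test pass.
def discounted_price(amount):
    return amount * 0.9 if amount > 100 else amount

# Run the tests programmatically (they are also what XP executes daily).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```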
1. The formal methods model encompasses a set of activities that leads to formal
mathematical specification of computer software.
2. Formal methods enable a software engineer to specify, develop, and verify a
computer-based system by applying a rigorous mathematical notation.
3. When formal methods are used during development, they provide a mechanism for
eliminating many of the problems that are difficult to overcome using other software
engineering paradigms. Ambiguity, incompleteness, and inconsistency can be
discovered and corrected more easily, not through ad hoc review but through the
application of mathematical analysis.
4. When formal methods are used during design, they serve as a basis for program
verification and therefore enable the software engineer to discover and correct errors
that might go undetected.
5. The formal methods model offers the promise of defect-free software.
Drawbacks Of Formal Methods Model :
1. The development of formal models is quite time consuming and expensive.
2. Because few software developers have the necessary background to apply formal
methods, extensive training is required.
3. It is difficult to use the models as a communication mechanism for technically
unsophisticated customers.
@@@@@@@@@@@@
Requirement analysis is a software engineering task that bridges the gap between
system level software allocation and software design.
It enables the system engineer to specify software function and performance, indicate
software's interface with other system elements, and establish design constraints that
the software must meet.
It allows the software engineer to refine the software allocation and build models of the
process, data and behavioral domains that will be treated by software.
It provides the software designer with a representation of information and function that
can be translated into data, architectural and procedural design.
It also provides the developer and the client with the means to assess quality once the
software is built.
System Requirements are the functions that our system must perform.
During planning, the Analyst defines system capabilities; during analysis, the
Analyst expands these into a set of system requirements.
There are two types of System Requirements:
Functional : activities that a system must perform with respect to the organisation.
Technical : operational objectives related to the environment, hardware, and
software of the organization.
In technical requirements, for example, the system may be required to support multiple
terminals with the same response time, or may be required to run on a specific
operating system.
The Stakeholders of the System are considered as the primary source of information for
functional system requirements.
Stakeholders are people who have an interest in the successful implementation of your
system.
There are three groups of stakeholders: (a) Users who use the system on a daily basis
(b) Clients who pay for and own the system (c) Technical staff i.e. the people who must
ensure that the system operates in the computing environment of the organization.
The analyst's first task during analysis is to (a) identify every type of stakeholder and
(b) identify the critical person from each type (group) of stakeholders.
User Stakeholders :
User Stakeholders are divided into two types: (a) Vertical and (b) Horizontal.
Horizontal implies that an analyst needs to look at information flow across departments
or functions.
For example, a new inventory system may affect multiple departments, such as
sales, manufacturing, etc, so these departments need to be identified, so as to collect
information relevant to them.
Vertical implies that an analyst needs to look at information flow across job levels, such
as clerical staff, middle management, executives, etc.
Each of these users may need the system to perform different functions with respect to
themselves.
Analysis tasks :
All analysis methods are related by a set of fundamental principles:
The information domain of the problem must be represented and understood.
Models that depict system information, function, and behavior should be developed.
The models and the problem must be partitioned in a manner that uncovers detail in a
layered or hierarchical fashion.
The analysis process should move from essential information to implementation detail.
Software requirement analysis may be divided into five areas of effort:
i) Problem recognition :
Initially, the analyst studies the system specification and the software project plan. Next,
communication for analysis must be established so that problem recognition is ensured.
The analyst must establish contact with management and the technical staff of the
user/customer organization and the software development organization. The project
manager can serve as a coordinator to facilitate establishment of communication paths.
The objective of the analyst is to recognize the basic problem elements as perceived by the
client.
ii) Evaluation and synthesis :
Problem evaluation and synthesis is the next major area of effort for analysis. The analyst
must evaluate the flow and content of information, define and elaborate all
software functions, understand software behavior in the context of events that affect the
system, establish interface characteristics and uncover design constraints. Each of these
tasks serves to define the problem so that an overall approach may be synthesized.
iii) Modeling :
We create models to gain a better understanding of the actual entity to be built. The
software model must be capable of modeling the information that the software transforms,
the functions that enable the transformation to occur, and the behavior of the system
during transformation. Models created serve a number of important roles:
The model aids the analyst in understanding the information, function and behavior
of the system, thus making the requirement analysis easier and more systematic.
The model becomes the focal point for review and the key to determining the
completeness, consistency and accuracy of the specification.
The model becomes the foundation for design, providing the designer with an
essential representation of software that can be mapped into an implementation
context.
iv) Specification :
There is no doubt that the mode of specification has much to do with the quality of the
solution. The quality, timeliness and completeness of the software may be
adversely affected by incomplete or inconsistent specifications.
Software requirements may be analyzed in a number of ways. These analysis techniques
lead to a paper or computer-based specification that contains graphical and natural
language descriptions of the software requirements.
v) Review :
Both the software developer and the client conduct a review of the software requirements
specification. Because the specification forms the foundation of the development phase,
extreme care is taken in conducting the review.
The review is first conducted at a macroscopic level. The reviewers attempt to ensure that
the specification is complete, consistent and accurate. In the next phase, the review is
conducted at a detailed level. Here, the concern is on the wording of the specification. The
developer attempts to uncover problems that may be hidden within the
specification content.
Fact-Finding Methods:
Fact-finding techniques are used to identify system requirements, through comprehensive
interaction with the users using various ways of gathering information.
There are six methods of Information Gathering which are as follows :
1. Distribute & Collect Questionnaires :
Questionnaires enable the project team to collect information from a large
number of stakeholders conveniently, and to obtain preliminary insight on their
information needs.
This information is then used to identify areas that need further research using
document reviews, interviews, and observation.
Such questions are called closed-ended questions i.e. questions that have
simple, definitive answers and do not invite discussion or elaboration.
They can be used to determine the users opinion about various aspects of a
system (say, asking the user to rate a particular activity on a scale of 1-5).
Questionnaires, however, do not provide information about processes, workflow, or techniques used.
Such questions that encourage discussion and elaboration are called open-ended
questions.
An analyst requests and reviews procedural manuals and work descriptions in
order to understand business functions.
Documents and reports can also be used in interviews, where forms and reports
are used as visual aid, and working documents are used for discussion.
Discussion can center on use of each form, its objective, distribution, and
information content.
Forms already filled-out with real information ensure a correct understanding of the
fields and data content.
It is essential to ensure that the assumptions and business rules derived from
existing documentation are accurate.
In this method, members of the project team (system analysts) meet with
individuals or groups of users, in one or multiple sessions, in order to understand all
processing requirements through discussion.
An effective interview consists of three parts: (a) Preparing for the interview (b)
Conducting the interview and (c) Following up the interview.
Before an Interview:
Establish objective of interview (what do you want to accomplish
through this interview?)
Review related documents and materials (list of specific questions, open and
closed ended)
During an Interview:
Dress appropriately (show good manners)
Arrive on time (arriving early is a good practice, if long interview, prepare for
breaks)
Look for exceptions and error conditions (ask what if questions, ask
about exceptional situations)
Probe for details (ensure complete understanding of all procedures and rules)
Take thorough notes (handwritten note-taking makes user feel that what he
has to say is important to you)
Identify and document unanswered items or open questions (useful for
next interview session)
After an Interview:
Review notes for accuracy, completeness, and understanding (absorb,
understand, document obtained information)
Transfer information to appropriate models and documents (create models
for better understanding after complete review)
Identify areas that need further clarification (keep a log of unanswered
questions, such as those based on policy questions raised by new system,
include them in next interview)
Send thank-you notes if appropriate
Actually observing a user at his job provides details about the actual usage of the
computer system, and how the business processes are carried out in reality.
Being trained by a user and actually performing the job allows one to
discover the difficulties of learning new procedures, the importance of an
easy-to-use system, and drawbacks of the current system that the new system
needs to address.
5. Build Prototypes :
Building a prototype implies creating an initial working model of a larger, more
complex entity.
The Discovery Prototype is used in the Planning & Analysis phases to test feasibility
and help identify processing requirements.
Discovery prototypes are usually discarded after the concept has been tested, while
an Evolving prototype is one that grows and evolves and may eventually be used as
the final, live system.
Characteristics of Prototypes:
A prototype should be operative, i.e., a working model that may provide the
look-and-feel of the system but may lack some functionality.
The objective of this technique is to compress all these activities into a shorter series
of JAD sessions with users and project team members.
Structured Walkthroughs :
7. The benefits of establishing standards for data names, module determination, and data
item size and type are recognized by systems managers. The time to start enforcing
these standards is at the design stage.
Therefore, they should be emphasized during walkthrough sessions.
8. Maintenance should also be addressed during walkthroughs. Enforcing coding
standards, modularity, and documentation will ease later maintenance needs.
9. It is becoming increasingly common to find organizations that will not accept new
software for installation until it has been approved by software maintenance teams. In
such an organization, a participant from the quality control or maintenance team should
be an active participant in each structured walkthrough.
10. (i) The walkthrough team must be large enough to deal with the subject of the review
in a meaningful way, but not so large that it cannot accomplish anything.
(ii) Generally, no more than 7 to 9 persons should be involved, including the individuals
who actually developed the product under review, the recorder, and the review
leader.
11. a. As a general rule, management is not directly involved in structured walkthrough
sessions. Its participation could actually deter the review team from speaking out
about problems they see in the project.
b. This is because management participation is often interpreted to mean evaluation.
c. Managers may feel that raising many questions, identifying mistakes, or suggesting
changes indicates that the individual whose work is under review is incompetent.
d. It is best to provide managers with reports summarizing the review session rather
than to have them participate.
e. The most appropriate type of report will communicate that a review of the specific
project or product was conducted, who attended, and what action the team took. It
need not summarize errors that were found, modifications suggested, or revisions
needed.
@@@@@@@@@@@@
4. Feasibility Analysis
A feasibility study is a preliminary study undertaken to determine and document a
project's viability. The results of this study are used to make a decision whether to proceed
with the project, or table it. If it indeed leads to a project being approved, it will - before the
real work of the proposed project starts - be used to ascertain the likelihood of the project's
success. It is an analysis of possible alternative solutions to a problem and a
recommendation on the best alternative. For example, it can determine whether order
processing can be carried out more efficiently by a new system than by the previous one.
A feasibility study could be used to test a new working system, which could be used because
:
Customers are complaining about the speed and quality of work the business provides,
Competitors are now winning a big enough market share due to an effective integration
of a computerized system.
Within a feasibility study, seven areas must be reviewed: needs analysis, economic,
technical, schedule, organizational, cultural, and legal.
1. Operational Feasibility :
It involves the following two tests:
Understanding whether the problem is worth solving and whether the solution to the
problem will work out, by analyzing the following criteria: (PIECES)
(a) Performance (b) Information (c) Economy (d) Control (e) Effectiveness (f)
Service.
2. Cultural (Organizational) Feasibility :
The new system must fit into the work environment of the organization.
It must also fit with the culture of the organization.
It should not depart dramatically from existing norms.
It essentially involves identifying factors that might prevent the effective use of the
new system, thus resulting in loss of business benefits.
Such factors can be tackled with high user involvement during the system's
development and well-planned training procedures and proper orientation after the
system's completion.
3. Technical Feasibility :
This involves testing the proposed technological requirements and the available
expertise.
A company may implement new technology in the new system, or upgrade the
technology of an existing system.
In some cases, the scope and approach of the project may need to be
changed to restructure and reduce the technological risk.
When the risks are identified, the solutions may include conducting additional
training, hiring consultants, or hiring more experienced employees.
A realistic assessment helps identify technological risks early and permits
corrective measures to be taken.
4. Schedule Feasibility :
It involves assessing if the project can be completed according to the proposed project
schedule.
Every schedule requires many assumptions and estimates about the project, as the
needs and scope of the system may not be known at this stage.
Sometimes, a project may need to be completed within a deadline given by the upper
management.
Milestones should be developed within the project schedule to assess the ongoing risk
of the schedule slipping.
Deadlines should not be considered during project schedule construction, unless they
are absolute.
5. Resource Feasibility :
It involves assessing whether the required resources are available. Adequate
computer resources, physical facilities, and support staff are all valuable resources.
Delays in making these resources available can affect the project schedule.
6. Economic Feasibility :
The new system must increase income, either through cost savings or through
increased revenues.
The economic feasibility of a system is usually assessed using one of the following
methods:
(a) Cost/Benefit Analysis.
(b) Calculation of the Net Present Value (NPV)
(c) Payback Period, or Breakeven Point
(d) Return on Investment
Cost estimation :
Software cost estimation is a continuing activity, which starts at the proposal stage and
continues through the lifetime of the project. There are several different techniques
of software cost estimation. They are:
i) Expert judgment :
One or more experts on the software development techniques to be used, and on the
application domain, are consulted. They each estimate a project cost and the final cost is
arrived at by consensus.
ii) Estimation by analogy :
This technique is applicable when other projects in the same application domain have been
completed. The cost of a new project is estimated by analogy with these
completed projects.
iii) Parkinson's Law :
It states that work expands to fill the time available. In software costing, it means that the
cost is determined by available resources rather than by objective assessment.
iv) Pricing to win :
The software cost is estimated to be whatever the customer has available to spend on the
project. The estimated effort depends on the customer's budget and not on the software
functionality.
v) Top-down estimation:
A cost estimate is established by considering the overall functionality of the project and
how that functionality is provided by interacting functions. Cost estimates are made on the
basis of logical function rather than component implementation of the function.
Cost/Benefit Analysis
It is the analysis used to compare costs and benefits to see whether the investment in the
development of a new system will be more beneficial than costly.
Cost And Benefits Categories :
In developing cost estimates for a system, we need to consider several cost elements.
Following are the types of costs that are analyzed :
Personnel Costs : Costs including staff salaries and benefits (staff includes
system analysts, programmers, end-users, etc.).
Facility Costs: Costs involved in the preparation of the physical site where the
computer system will be operating (wiring, flooring, air conditioning, etc.).
Operating Costs : Costs incurred after the system is put into production i.e. the
day-to-day operations of the system (salaries of people using the application, etc.).
A system is also expected to provide benefits. The first task is to identify each benefit
and then assign a monetary value to it for cost/benefit analysis.
Benefits may be tangible or intangible, direct or indirect. These categories are
reviewed in the classification step below.
Cost/Benefit Analysis is a procedure that gives a picture of the various costs, benefits,
and rules associated with a system.
The determination of the costs and benefits entails the following steps :
1. Identify the costs and benefits pertaining to a given project.
2. Categorize the various costs and benefits for analysis.
3. Select a method of evaluation.
4. Interpret the result of the analysis.
5. Take action.
1. Costs And Benefits Identification :
Certain costs and benefits are more easily identifiable than others. For example, direct
costs, such as the price of a hard disk, are easily identified from company invoice
payments or canceled checks.
Direct benefits often relate one-to-one to direct costs, especially savings from
reducing costs in the activity in question.
Other direct costs and benefits, however, may not be well defined, since they
represent estimated costs or benefits that carry some uncertainty. An example of
such a cost is the reserve for bad debt. It is a discernible real cost, although its exact
amount is not so immediately known.
A category of costs and benefits that is not easily discernible is opportunity costs and
opportunity benefits.
These are the costs or benefits forgone by selecting one alternative over another.
They do not show up in the organization's accounts and therefore are not easy to
identify.
2. Classification Of Costs And Benefits :
The next step in cost and benefit determination is to categorize costs and benefits. They
may be tangible or intangible, direct or indirect, fixed or variable.
Let us review each category.
Year      Costs        Benefits     Net Benefit
0         $-1,000      0            $-1,000
1         $-2,000      650          $-1,350
2         $-2,000      4,900        $2,900
Total     $-5,000      5,550        $550
The above table illustrates the use of net benefit analysis. Cash flow amounts are shown for
three time periods: period 0 is the present period, followed by two succeeding periods. The
negative numbers represent cash outlays. A cursory look at the numbers shows that the net
benefit is $550.
The time value of money is extremely important in evaluation processes. Let us explain
what it means. If you were faced with an opportunity that generates $3000 per year, how
much would you be willing to invest? Obviously, you'd like to invest less than the $3000.
To earn the same money five years from now, the amount of investment would be even
less. What is suggested here is that money has a time value. Today's dollar and
tomorrow's dollar are not the same. The time lag accounts for the time value of money.
The time value of money is usually expressed in the form of interest on the funds
invested to realize the future value. Assuming compounded interest, the formula is :
F=P(1+i)^n
Where
F= Future value of an investment.
P= Present value of the investment.
i = Interest rate per compounding period.
n = Number of years.
For example, $3000 invested in Treasury notes for three years at 10% interest would have
a value at maturity of :
F = $3000(1 + 0.10)^3
  = 3000(1.331)
  = $3993
ii.] Present Value Analysis :
It is similar to the opportunity cost of the funds being considered for the
project.
The table below shows the present value of $1,500 received at the end of each of the
next four years, discounted at 10% :

Year    Estimated Future Value    Discount Factor    Present Value    Cumulative Present Value of Benefits
1       $1,500                    0.909              $1,363.64        $1,363.64
2       $1,500                    0.826              $1,239.67        $2,603.31
3       $1,500                    0.751              $1,126.97        $3,730.28
4       $1,500                    0.683              $1,024.50        $4,754.78

The discount factor for year n is 1/(1+i)^n, so the present value is :
P = F x 1/(1+i)^n = F/(1+i)^n
iii.] Net Present Value (NPV) Calculation :
The present value of rupee/dollar (currency) benefits and costs is calculated for
investments such as a new system.
Two concepts are involved:
All benefits and costs are calculated in terms of today's rupee/dollar (currency)
values, i.e. present values.
Benefits and costs are combined to give a net value.
It essentially tells you how much should be invested today, in order to
achieve a predetermined amount of benefit at a predetermined later point in time.
The following two terms hold great importance in this calculation:
Discount rate: It is the annual percentage rate that an amount of money is
discounted to bring it to a present value.
Discount factor: It is the accumulation of yearly discounts based on the discount
rate.
Formula: If the present value is PV, the amount received in the future is FV, the
discount rate is i, and the number of years is n :
PV = FV / (1 + i)^n
For example, if the future amount is Rs. 1500, the number of years is 4, and the
discount rate is 10%, then the present value can be calculated as :
PV = 1500 / (1 + 0.10)^4 = 1500 / 1.4641 = Rs. 1024.5
i.e. today, the investment should be Rs. 1024.5 to get Rs. 1500 after 4 years.
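The same discounting can be checked in code; `present_value` below is an illustrative helper, and the cumulative figure corresponds to four yearly receipts of Rs. 1500:

```python
def present_value(fv, i, n):
    """Present value of an amount fv received n years from now, discounted at rate i."""
    return fv / (1 + i) ** n

# Rs. 1500 received 4 years from now, discounted at 10%:
print(round(present_value(1500, 0.10, 4), 1))  # 1024.5

# Cumulative present value of Rs. 1500 received at the end of each of years 1-4:
cumulative = sum(present_value(1500, 0.10, year) for year in range(1, 5))
print(round(cumulative, 2))  # 4754.8
```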
iv.] Payback Period/Breakeven Period Calculation :
The payback period is the period at which rupee/dollar (currency) benefits
offset the rupee/dollar (currency) costs.
It is the point in time when the increased cash flow exactly pays off the
costs of development and operation.
When the net value becomes positive, that is the year in which payback occurs.
Consider the following table:
Valuation problems : Intangible costs and benefits are difficult to quantify, and tangible
costs are generally more pronounced than tangible benefits. In most cases, then, a
project must have substantial intangible benefits to be accepted.
Distortion problems : There are two ways of distorting the results of cost/benefit
analysis. One is the intentional favoritism of an alternative for political reasons. The
second is when data are incomplete or missing from the analysis.
List of Deliverables :
When the design of an information system is complete, the specifications are documented in
a form that outlines the features of the application. These specifications are termed the
deliverables, or the design book, by the system analysts.
No design is complete without the design book, since it contains all the details that must be
included in the computer software, datasets & procedures that the working information
system comprises.
The deliverables include the following:
1. Layout charts :
Input & output descriptions showing the location of all details shown on reports,
documents, & display screens.
2. Record layouts :
Descriptions of the data items in transaction & master files, as well as related database
schematics.
3. Coding systems :
Descriptions of the codes that explain or identify types of transactions, classification, &
categories of events or entities.
4. Procedure Specification :
Planned procedures for installing & operating the system when it is constructed.
5. Program specifications :
Charts, tables & graphic descriptions of the modules & components of computer
software & the interaction between each as well as the functions performed & data
used or produced by each.
6. Development plan :
Timetables describing elapsed calendar time for development activities; personnel
staffing plans for systems analysts, programmers, & other personnel; preliminary testing
& implementation plans.
7. Cost Package :
Anticipated expenses for development, implementation and operation of the new
system, focusing on such major cost categories as personnel, equipment,
communications, facilities and supplies.
@@@@@@@@@@@@
Types of Models :
The type of the model is based on the nature of the information being represented.
It includes :
1. Mathematical Model :
A Mathematical Model is a series of formulae that describe the technical aspects
of a system.
Such models are used to represent precise aspects of the system that are best
represented through formulae or mathematical notations, such as equations and
functions.
They are useful in expressing the functional requirements of scientific and
engineering applications, which tend to compute results using elaborate mathematical
algorithms.
They are also useful for expressing simpler mathematical requirements in
business systems, such as net salary calculation in a payroll system.
2. Descriptive Model :
Descriptive models are required for narrative memos, reports, or lists that describe
some aspects of the system.
This model is required especially because there is a limitation to what information can
be defined using a mathematical model.
Effective models of information systems involve simple lists of features, inputs, outputs,
events, users.
Lists are a form of descriptive models that are concise, specific, and useful.
Algorithms written using structured English or pseudocode are also considered
precise descriptive models.
3. Graphical Models :
Graphical Models include diagrams and schematic representations of some aspect
of a system.
They simplify complex relationships that cannot be understood with a verbal description.
Analysts usually use symbols to denote various parts of the model, such as external
agents, processes, data, objects, messages, connections.
Each type of graphical model uses unique and standardized symbols to represent pieces
of information.
A Data Flow Diagram (DFD) is also known as a Process Model. Process Modeling is an
analysis technique used to capture the flow of inputs through a system (or group of
processes) to their resulting output.
A Data Flow Diagram is a graphical system model that shows all the main requirements
for an information system.
Data store : A place where data is held, for future access by one or more
processes.
Data Flow : An arrow in a DFD that represents flow of data among the processes
of the system, the data stores, and the external agents.
All processes are numbered to show proper sequence of events.
DFD Syntax :
Level of Abstraction :
Context Diagram :
A Context Diagram is a Data Flow Diagram that describes the highest view of a system.
It summarizes all processing activities within the system into a single process
representation.
All the external agents and data flow (into and out of the system) are shown in
one diagram, with the whole system represented as a single process.
It is useful for defining the scope and boundaries of a system.
The boundaries of a system in turn help identify the external agents, as they lie outside
the boundary of the system.
The context-level DFD process takes 0 as the process number, while the numbering in
the Level 0 DFD starts from 1.
DFD Fragments :
A DFD fragment is a DFD that represents system response to one event within a single
process.
Each DFD fragment is a self-contained model showing how the system responds to a
single event.
The main purpose of DFD fragments is to allow the analyst to focus attention on just
one part of the system at a time.
Usually, a DFD fragment is created for each event in the event list (later made into an
event table).
Example
Creating Level 1 Diagrams (And Below) :
Each use case is turned into its own DFD.
Take the steps listed on the use case and depict each as a process on the Level 1 DFD.
Inputs and outputs listed on the use case become data flows on the DFD.
Include sources and destinations of data flows to processes and stores within the DFD.
May also include external entities for clarity.
When to stop decomposing DFDs?
Ideally, a DFD has at least three processes and no more than seven to nine.
1. A series of data flows always starts or ends at an external agent or at a
data store. Conversely, this means that a series of data flows cannot start or end at a
process.
2.
3.
4.
5.
Structured English :
Decision Table :
Decision tables are a precise yet compact way to model complicated logic.
Decision tables, like if-then-else and switch-case statements, associate conditions with
actions to perform. But, unlike the control structures found in traditional programming
languages, decision tables can associate many independent conditions with several
actions in an elegant way.
Decision Tables are useful when complex combinations of conditions, actions, and rules
are found or you require a method that effectively avoids impossible situations,
redundancies, and contradictions.
A decision table is divided into four quadrants: conditions, condition alternatives,
actions, and action entries.
Each decision corresponds to a variable, relation or predicate whose possible values are
listed among the condition alternatives.
Each action is a procedure or operation to perform, and the entries specify whether (or
in what order) the action is to be performed for the set of condition alternatives the
entry corresponds to.
Many decision tables include in their condition alternatives the don't care symbol, a
hyphen. Using don't cares can simplify decision tables, especially when a given condition
has little influence on the actions to be performed.
In some cases, entire conditions thought to be important initially are found to be
irrelevant when none of the conditions influence which actions are performed.
Aside from the basic four quadrant structure, decision tables vary widely in the way the
condition alternatives and action entries are represented.
Some decision tables use simple true/false values to represent the alternatives to a
condition (akin to if-then-else), other tables may use numbered alternatives (akin to
switch-case), and some tables even use fuzzy logic or probabilistic representations for
condition alternatives.
In a similar way, action entries can simply represent whether an action is to be
performed (check the actions to perform), or in more advanced decision tables, the
sequencing of actions to perform (number the actions to perform).
The classic printer-troubleshooting decision table has three conditions and eight rules
(one per combination of condition alternatives) :

Conditions                              1  2  3  4  5  6  7  8
Printer does not print                  Y  Y  Y  Y  N  N  N  N
A red light is flashing                 Y  Y  N  N  Y  Y  N  N
Printer is unrecognized                 Y  N  Y  N  Y  N  Y  N
Actions
Check the power cable                         X
Check the printer-computer cable        X     X
Ensure printer software is installed    X     X     X     X
Check/replace ink                       X  X        X  X
Check for paper jam                        X     X
Of course, this is just a simple example (and it does not necessarily correspond to the
reality of printer troubleshooting), but even so, it demonstrates how decision tables can
scale to several conditions with many possibilities.
Benefits Of Decision Table :
Decision tables make it easy to observe that all possible conditions are accounted for. In
the example above, every possible combination of the three conditions is given.
In decision tables, when conditions are omitted, it is obvious even at a glance that logic
is missing. Compare this to traditional control structures, where it is not easy to notice
gaps in program logic with a mere glance --- sometimes it is difficult to follow which
conditions correspond to which actions!
Just as decision tables make it easy to audit control logic, decision tables demand that a
programmer think of all possible conditions.
With traditional control structures, it is easy to forget about corner cases, especially
when the else statement is optional. Since logic is so important to programming,
decision tables are an excellent tool for designing control logic.
In one incredible anecdote, after a failed six man-year attempt to describe the program
logic for a file maintenance system using flowcharts, four people solved the problem using
decision tables in just four weeks.
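The printer-troubleshooting table above can be represented directly as data, including don't-care entries; the rule set below is an illustrative encoding, not an authoritative one:

```python
# Decision table as data. Condition order in each pattern:
# (printer does not print, a red light is flashing, printer is unrecognized).
# None is the "don't care" entry: it matches either condition value.
RULES = [
    ((True,  None,  True),  ["Check the printer-computer cable"]),
    ((True,  False, True),  ["Check the power cable"]),
    ((None,  None,  True),  ["Ensure printer software is installed"]),
    ((None,  True,  None),  ["Check/replace ink"]),
    ((True,  None,  False), ["Check for paper jam"]),
]

def actions_for(conditions):
    """Collect the actions of every rule whose pattern matches the given
    condition values (a None in the pattern matches either value)."""
    result = []
    for pattern, actions in RULES:
        if all(p is None or p == c for p, c in zip(pattern, conditions)):
            result.extend(actions)
    return result

# Printer does not print, red light flashing, printer unrecognized:
print(actions_for((True, True, True)))
```

Because each rule row covers several condition combinations at once, the five rows above stand in for the eight explicit columns of the full table.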
Decision Tree :
[Figure: a decision tree for book-order discounts. The branches distinguish book stores
from libraries and test the order size (e.g. "order < 6 copies"); the leaves assign
discounts of 25%, 15%, 10%, 5%, or no discount.]
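The logic of a decision tree is a chain of nested conditionals. The sketch below encodes a book-order discount tree; since only a few labels (book store, libraries, "order < 6 copies", and the discount rates) survive here, the branch structure and thresholds are assumed for illustration:

```python
def book_discount(customer_type, copies):
    """Decision tree for book-order discounts. The thresholds and rate
    assignments below are assumed for illustration, not taken from the text."""
    if customer_type == "bookstore":
        if copies < 6:
            return 0.0    # "order < 6 copies": no discount
        elif copies < 50:
            return 0.15
        else:
            return 0.25   # largest orders earn the top rate
    else:                 # libraries
        if copies < 6:
            return 0.05
        else:
            return 0.10

print(book_discount("bookstore", 3))    # 0.0
print(book_discount("bookstore", 100))  # 0.25
```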
An ERD (Entity Relationship Diagram) complements the DFD. While the DFD focuses on
processes and the data flow between them, the ERD focuses on data and the
relationships between data.
It helps to organise data used by a system in a disciplined way.
It helps to ensure completeness, adaptability and stability of data.
It is an effective tool to communicate with senior management (what is the data needed
to run the business), data administrators (how to manage and control data), database
designers (how to organise data efficiently and remove redundancies).
1. Entities :
Entities generally correspond to persons, objects, locations, events, etc. Examples are
employee, vendor, supplier, materials, warehouse, delivery, etc.
2. Attributes :
They express the properties of the entities.
Every entity will have many attributes, but only a subset, which are relevant for the
system under study, will be chosen.
For example, an employee entity will have professional attributes like name,
designation, salary, etc. and also physical attributes like height, weight, etc. But only
one set will be chosen depending on the context.
Attributes are classified as entity keys and entity descriptors.
Entity keys are used to uniquely identify instances of entities.
Attributes having unique values are called candidate keys and one of them is
designated as primary key. The domains of the attributes should be pre-defined. If
'name' is an attribute of an entity, then its domain is the set of strings of alphabets
of predefined length.
3. Relationships :
They describe the association between entities.
They are characterised by optionality and cardinality.
Optionality is of two types, namely, mandatory and optional.
Mandatory relationship means that with every instance of the first entity
there will be associated at least one instance of the second entity.
Optional relationship means that there may be instances of the first entity which
are not associated with any instance of the second entity. For example, the
employee-spouse relationship has to be optional because there could be unmarried
employees. It is not correct to make the relationship mandatory.
Cardinality is of three types: one-to-one, one-to-many, and many-to-many.
One-to-one relationship means an instance of the first entity is associated with only
one instance of the second entity. Similarly, each instance of the second entity is
related to one instance of the first entity.
One-to-many relationship means that one instance of the first entity is related to
many instances of the second entity, while an instance of the second entity is
associated with only one instance of the first entity.
In many-to-many relationship an instance of the first entity is related to many
instances of the second entity and the same is true in the reverse direction also.
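A one-to-many relationship can be sketched in code; the entity and attribute names below are illustrative, not taken from the text:

```python
from dataclasses import dataclass, field

@dataclass
class Employee:
    emp_id: int      # entity key (serves as the primary key)
    name: str        # descriptor attribute

@dataclass
class Department:
    dept_id: int
    name: str
    employees: list = field(default_factory=list)  # the "many" side

# One Department instance relates to many Employee instances, while each
# Employee in this model belongs to exactly one Department.
dept = Department(1, "Accounts")
dept.employees.append(Employee(101, "A. Rao"))
dept.employees.append(Employee(102, "B. Sen"))
print(len(dept.employees))  # 2
```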
[Figure: Peter Chen ERD notation - symbols for an entity (set) or object type (e.g.
PURCHASE ORDER), relationship, attribute, primary key attribute, cardinality,
optionality, weak entity, strong-weak relationship, and multivalued attribute.]
[Figure: the same constructs in Bachman notation.]
Primary Key :
An attribute whose value uniquely identifies an instance of an entity is a primary key.
For a student entity, rollno can serve as the primary key; the combination
(rollno, name) is a superkey; name by itself may not be sufficient as a key.
Weak Entity :
Generalization :
To generalize from two or more entity sets and factor out commonality.
Example : given two entities, Faculty and Non-faculty, we can define a general entity
called Employee.
Common attributes are factored out to define the Employee entity; specific (non-common)
attributes are incorporated in the Faculty and Non-faculty entities.
Another Example :
Specialization :
Given below are a few examples of ER diagrams using Bachman notation. First the textual
statement is given followed by the diagram :
1. In a company, each division is managed by only one manager and each manager
manages only one division
3. In a college, every student takes many courses and every course is taken by many
students
4. In a library, a member may borrow many books and there may be books which are not
borrowed by any member
6. An extension of example-3 above is that student-grades depend upon both student and
the course. Hence it is an associative entity
7. An employee can play the role of a manager. In that sense, an employee reports to
another employee.
@@@@@@@@@@@@
6. Design
The system design is aimed at ensuring, before construction or coding begins, that if the
system is constructed in a specific way, the user's information requirements will be met
completely and accurately. Several development activities are carried out during structured
design: database design, implementation planning, system test preparation, system
interface specification, and user documentation (see the figure below).
[Figure: structured design activities - allocation of functions feeding (1) database
design, (2) program design, (3) system test requirements definition, (4) program test
requirements definition, and (5) system interface specification, leading to (6) the
design specification and (7) the design-phase walkthrough, and then to implementation.]
1. Database Design :
This activity deals with the design of the physical database. A key decision is how
the access paths are to be implemented. A physical path is derived from a logical path.
It may be implemented by pointers, chains, or other mechanisms.
2. Program Design :
In conjunction with database design comes a decision on the programming language to be
used and on the flowcharting, coding, and debugging procedures prior to conversion. The
operating system limits the programming languages that will run on the system.
When the system design is under way and programming begins, the plans and
test cases for implementation are soon required. This means there must be detailed
schedules for system testing and for training the user staff. Planned training allows time
for selling the candidate system to those who will deal with it on a regular basis.
Consequently, user resistance should be minimized.
3. System And Program Test Preparation :
Each aspect of the system has separate test requirements. System testing is done
after all programming and unit testing is completed. The test cases cover every aspect of
the candidate system: actual operations, user interface, and so on. System and
program test requirements become a part of the design specifications, a prerequisite to
implementation.
In contrast to system testing is acceptance testing, which puts the system through a
procedure designed to convince the user that the candidate system will meet the stated
requirements. Acceptance testing is technically similar to system testing, but politically it
is different. In system testing, bugs are found and corrected with no one watching.
System Flowchart :
A system flowchart is a diagram that describes the overall flow of control between
the computer programs in a system.
It effectively indicates where input enters the system, how it is processed and
controlled, and how it leaves the system in the form of the desired output. Here,
emphasis is placed on input documents and output reports.
Only limited detail is displayed about the process that transforms the input to output.
For convenience of design, it is a good idea to segregate the inputs, processes, outputs,
and files involved in the system into a tabular form before proceeding with the
flowchart.
System flowcharts are generally drawn in the early stages of formulating computer
solutions. They facilitate communication between programmers and business people.
The system flowchart plays a vital role in the programming of a problem and is quite
helpful in understanding the logic of complicated and lengthy problems. Once the
flowchart is drawn, it becomes easy to write the program in any high-level language.
Often we see how flowcharts are helpful in explaining the program to others. Hence, it
is correct to say that a flowchart is a must for the better documentation of a complex
program.
Structure Chart :
It shows which modules within a system interact, and graphically depicts the data
that are communicated between the various modules.
It identifies the data passed between the individual modules that interact with one
another.
2.
Off-page connectors : a module that continues on another page is labelled
"Module X (From p. 1)" on the continuation and "Module X (See p. n)" at the point of call.
Iterative invocation (when one action is made up of more than one repeated smaller
ones) : the chart can show that X invokes Y an undefined number of times, a maximum
of n times, exactly n times, or any number of times from n to m.
Inline modules : Module Y drawn inside Module X means that Y is actually code in
module X.
Transaction centre (considered as a way of selecting one from many possible functions
at any one specific moment) : Module A invokes exactly one of Modules B, C, or D.
Transform Analysis (illustrated by a payroll-processing example) :
It is based on the idea that input is "transformed" into output by the system.
Three important concepts are involved:
Afferent data flow : It is the incoming data flow in a sequential set of processes.
Efferent data flow : It is the outgoing data flow from a sequential set of
processes.
Central transform : It transforms afferent data flow into efferent data flow.
The following steps are followed to develop a structure chart from a DFD fragment.
1. Identify input, processes, output from the DFD fragment.
2. Reorganize DFD fragment to arrange input (afferent data flow) to the
left, process (central transform) in the center, and output (efferent data flow) to
the right.
3. From the first two steps, identify the boss module (calling module) and branch
out the sub-modules out of the boss module (this is the boss module of each
transaction and not necessarily of the entire system).
4. Provide appropriate data flow lines, and show input and output data using data
couples.
5. Display condition clauses using control couples.
Concepts of module coupling and module cohesion are used to evaluate the quality of a
structure chart.
Module Coupling :
Module coupling is a measure of how a module is connected to other modules
in the program.
It is desirable to make modules as independent as possible, so as to allow them to be
executed in any environment.
Every module should have its own well-defined interface to accept inputs, and it should
be able to output data in the required form.
The module then need not know who invoked it.
Module coupling is achieved by passing data couples between modules.
Module Cohesion :
Module cohesion refers to the degree to which all the code within the module
contributes to implementing one well-defined task.
Modules with high cohesion tend to perform a single (or similar) task.
Modules with poor cohesion tend to perform loosely related tasks.
Modules with high cohesion tend to have low coupling, as they mostly act on the same
internal data.
Modules with poor cohesion tend to pass unrelated data between themselves to request
or provide services.
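A minimal sketch of these ideas (the payroll example and function names are assumed): each module performs one well-defined task, and modules communicate only through explicit data couples, i.e. parameters and return values, giving high cohesion and low coupling.

```python
def gross_pay(hours, rate):
    """Single task: compute gross pay from the data couple (hours, rate)."""
    return hours * rate

def tax_deduction(gross, tax_rate):
    """Single task: compute the tax on a gross amount."""
    return gross * tax_rate

def net_pay(hours, rate, tax_rate):
    """Boss module: invokes the sub-modules and combines their outputs.
    No module reads or writes shared global state."""
    gross = gross_pay(hours, rate)
    return gross - tax_deduction(gross, tax_rate)

print(net_pay(40, 10.0, 0.2))  # 320.0
```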
i. What does the system or module do? (Asked when designing the system).
ii. How does it do it? (Asked when reviewing the code for testing or maintenance)
iii. What are the inputs and outputs? (Asked when reviewing the code for testing or
maintenance)
5. A HIPO description for a system consists of the visual table of contents & the functional
diagrams.
Visual Table Of Content :
1. The visual table of contents (VTOC) shows the relation between each of the documents
making up a HIPO package.
2. It consists of a Hierarchy chart that identifies the modules in a system by number and in
relation to each other and gives a brief description of each module.
3. The numbers in the contents section correspond to those in the organization section.
4. The modules are shown in increasing detail. Depending on the complexity of the system, three
to five levels of modules are typical.
Functional Diagrams :
1. For each box defined in the VTOC a diagram is drawn.
2. Each diagram shows input and output (right to left or top to bottom), major processes,
movement of data, and control points.
3. Traditional flowchart symbols represent media, such as magnetic tape, magnetic disk
and printed output.
4. A solid arrow shows control paths, and an open arrow identifies data flow.
5. Some functional diagrams contain other intermediate diagrams, but they also show
external data, as well as internally developed data and the step in the procedure where
the data are used.
6. A data dictionary description can be attached to further explain the data elements used
in a process.
7. HIPO diagrams are effective for documenting a system.
8. They aid designers and force them to think about how specifications will be met and
where activities and components must be linked together.
Disadvantages :
1. They rely on a set of specialized symbols that require explanation, an extra concern
when compared to the simplicity of, for example, a data flow diagram.
2. HIPO diagrams are not as easy to use for communication purposes as many people
would like.
3. They do not guarantee error-free systems.
Basic Elements :
OR : You represent choice in a diagram by placing an "OR" operator between the items
of a choice. The operator is written either as a plus sign (+) or, for an exclusive
choice, as a plus sign enclosed in a circle.
Repetition : To show that an action repeats (loops), you simply put the number of
repetitions of the action in parentheses below the action.
Designing Databases :
Database :
It is an integrated collection of stored data that is centrally managed and controlled.
It consists of two related stores of information :
(1) Physical data store and (2) The schema.
Physical data store is the storage area used by a DBMS to store the raw bits and bytes
of a database.
Schema is the description of the structure, content, and access controls of a physical
data store or database.
It contains additional information about the data stored in the physical data store:
(a) Access and content controls (authorization, allowable values)
(b) Relationships among data elements (pointer indicating customer of a particular
order)
(c) Details of physical data store organization (types, lengths, indexing, sorting)
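The schema idea can be sketched with SQLite; the table and column names below are invented for the example. The DDL records types, an allowable-value rule, and a relationship among data elements:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Structure and content controls: column types and NOT NULL rules.
conn.execute("""CREATE TABLE customer (
    cust_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL)""")

# An allowable-values check, plus a relationship among data elements:
# each order points to the customer who placed it.
conn.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    qty      INTEGER CHECK (qty > 0),
    cust_id  INTEGER REFERENCES customer(cust_id))""")
```

The DBMS enforces the schema: a row violating the CHECK or foreign-key rules is rejected before it reaches the physical data store.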
Entity :
An Entity is a real-world object distinguishable from other objects. An entity is described
(in DB) using a set of attributes.
Examples : a book, an item, a student, a purchase order.
Entity Set
An Entity Set is a collection of similar entities.
E.g., All employees, Set of all books in a library.
Attribute :
An entity has a set of attributes
Attribute defines property of an entity
It is given a name
Attribute has value for each entity
Value may change over time
Same set of attributes are defined for entities in an entity set
Example :
Entity set BOOK has the following attributes
TITLE
ISBN
ACC-NO
AUTHOR
PUBLISHER
YEAR
PRICE
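In code, an entity set maps naturally to a class and each entity to an instance; the sample values below are made up for illustration:

```python
from dataclasses import dataclass

# Entity set BOOK: each field is an attribute, each instance one entity.
@dataclass
class Book:
    title: str
    isbn: str
    acc_no: int
    author: str
    publisher: str
    year: int
    price: float

# One entity in the set (values are illustrative).
b = Book("Database Systems", "0-000-00000-0", 101,
         "C.J. Date", "Addison-Wesley", 2003, 450.0)
b.price = 400.0  # an attribute's value may change over time
```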
Relationships :
It represents an association among entities.
E.g. : (1) A particular book is a text for a particular course.
The book Database Systems by C.J. Date is the text for the course identified by code
CS644.
(2) Student GANESH has enrolled for course CS644.
Relationship set :
It is a set of relationships of the same type.
The words relationship and relationship set are often used interchangeably.
A relationship may associate a certain number of entity sets :
Binary relationship : between two entity sets.
E.g. : Binary relationship set STUDY between STUDENT and COURSE.
Ternary relationship : among three entity sets.
E.g. : relationship STUDY could be ternary among STUDENT, COURSE and
TEACHER.
Normalization :
Normalization is the process of taking data from a problem and reducing it to a set of
relations while ensuring data integrity and eliminating data redundancy.
Data integrity - all of the data in the database are consistent and satisfy all integrity
constraints.
Data redundancy - if data in the database can be found in two different locations (direct
redundancy) or can be calculated from other data items (indirect redundancy),
then the data is said to contain redundancy.
Example Of Normalization :
The following data is used to illustrate the process of normalization. A company obtains parts from
a number of suppliers. Each supplier is located in one city. A city can have more than one
supplier located there and each city has a status code associated with it. Each supplier may
provide many parts. The company creates a simple relational table to store this information
that can be expressed in relational notation as :
FIRST (s#, status, city, p#, qty) where
s# is the supplier number, p# is the part number, and qty is the quantity of the part
supplied.
In order to uniquely associate quantity supplied (qty) with part (p#) and supplier (s#), a
composite primary key composed of s# and p# is used.
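The composite key can be sketched in Python by keying each row of FIRST on the (s#, p#) pair; the sample rows follow the classic supplier-parts example and the exact values are illustrative:

```python
# FIRST keyed on the composite primary key (s#, p#):
# each (supplier, part) pair identifies exactly one row.
first = {
    ("s1", "p1"): {"status": 20, "city": "London", "qty": 300},
    ("s1", "p2"): {"status": 20, "city": "London", "qty": 200},
    ("s2", "p1"): {"status": 10, "city": "Paris",  "qty": 300},
}
```

Note how status and city repeat for every part a supplier provides; this is the redundancy that the later normal forms remove.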
First Normal Form :
A relational table, by definition, is in first normal form. All values of the columns are atomic.
That is, they contain no repeating values. Figure 1 shows the table FIRST in 1NF.
Figure 1: Table in 1NF
Although the table FIRST is in 1NF it contains redundant data. For example, information
about the supplier's location and the location's status have to be repeated for every part
supplied. Redundancy causes what are called update anomalies. Update anomalies are
problems that arise when information is inserted, deleted, or updated. For example, the
following anomalies could occur in FIRST:
INSERT. The fact that a certain supplier (s5) is located in a particular city (Athens)
cannot be added until they supply a part.
DELETE. If a row is deleted, then not only is the information about quantity and part
lost but also information about the supplier.
UPDATE. If supplier s1 moved from London to New York, then six rows would have
to be updated with this new information.
Second Normal Form :
To transform FIRST into 2NF we move the columns s#, status, and city to a new table
called SECOND. The column s# becomes the primary key of this new table. The results
are shown below in Figure 2.
Figure 2: Tables in 2NF
Tables in 2NF but not in 3NF still contain modification anomalies. In the example of
SECOND, they are
INSERT. The fact that a particular city has a certain status (Rome has a status of 50)
cannot be inserted until there is a supplier in the city.
DELETE. Deleting any row in SUPPLIER destroys the status information about the city as
well as the association between supplier and city.
Third Normal Form
The third normal form requires that all columns in a relational table are dependent only
upon the primary key. A more formal definition is: A relational table is in third normal
form (3NF) if it is already in 2NF and every non-key column is non transitively
dependent upon its primary key. In other words, all nonkey attributes are functionally
dependent only upon the primary key.
Table PARTS is already in 3NF. The non-key column, qty, is fully dependent upon the
primary key (s#, p#). SUPPLIER is in 2NF but not in 3NF because it contains a transitive
dependency. A transitive dependency occurs when a non-key column that is dependent on
the primary key is itself a determinant of other columns. The concept of a
transitive dependency can be illustrated by showing the functional dependencies in
SUPPLIER:
The result of putting the original table into 3NF is three tables. These can be
represented in "pseudo-SQL" as:
PARTS (s#, p#, qty)
Primary Key (s#, p#)
Foreign Key (s#) references SUPPLIER_CITY.s#
SUPPLIER_CITY(s#, city)
Primary Key (s#)
Foreign Key (city) references CITY_STATUS.city
CITY_STATUS (city, status)
Primary Key (city)
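The three 3NF relations can be sketched as Python dicts (sample values are illustrative) to show how the old transitive dependency s# -> city -> status is now followed through foreign keys:

```python
parts = {("s1", "p1"): 300, ("s1", "p2"): 200}         # PARTS(s#, p#, qty)
supplier_city = {"s1": "London", "s2": "Paris"}        # SUPPLIER_CITY(s#, city)
city_status = {"London": 20, "Paris": 10, "Rome": 50}  # CITY_STATUS(city, status)

def status_of_supplier(s):
    # follow the foreign keys: s# -> city -> status
    return city_status[supplier_city[s]]
```

Rome can now carry a status even though no supplier is located there, which was exactly the INSERT anomaly in 2NF.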
Advantages of Third Normal Form :
The advantage of having relational tables in 3NF is that it eliminates redundant data, which
in turn saves space and reduces manipulation anomalies.
For example, the improvements to our sample database are :
INSERT. A fact about the status of a city (Rome has a status of 50) can be added even
though there is no supplier in that city. Likewise, facts about new suppliers can be added
even though they have not yet supplied parts.
DELETE. Information about parts supplied can be deleted without destroying information
about a supplier or a city.
UPDATE. Changing the location of a supplier or the status of a city requires modifying
only one row.
Database Design :
1. The first database design step in structured systems analysis converts the ER analysis
model to logical record types and specifies how these records are to be accessed.
2. These access requirements are later used to choose keys that facilitate data access.
3. Quantitative data such as item sizes, numbers of records and access frequency are
often also added at this step.
Quantitative data is needed to compute the storage requirements and transaction
volumes to be supported by the computer system.
4. The combination of logical record structure access specifications and quantitative data is
sometimes known as the system level database specification. This specification is used
at the implementation level to choose a record structure supported by a DBMS.
5. The simplest conversion is to make each entity set of the ER diagram into a record type.
6. Object models must be converted to logical record structures when a logical analysis
model is implemented by a conventional DBMS. The simplest conversion here is for each
object class to become a logical record, with each class attribute converted to a field.
Where attributes are structured, they themselves become a separate record.
7. Different methodologies give their logical records different names like blocks, schema,
and modules.
3. Most end-users will not actually operate the information system or enter data through
workstations, but they will use the output from the system.
4. When designing output, systems analysts must accomplish the following:
i. Determine what information to present.
ii. Decide whether to display, print, or speak the information and select the output
medium.
iii. Arrange the presentation of information in an acceptable format.
iv. Decide how to distribute the output to intended recipients.
5. The arrangement of information on a display or printed document is termed a layout.
6. Accomplishing the general activities listed above will require specific decisions, such as
whether to use preprinted forms when preparing reports and documents, how many
lines to plan on a printed page, or whether to use graphics & colour.
7. The output design is specified on layout forms, sheets that describe the location,
characteristics (such as length & type), and format of the column headings & pagination.
These elements are analogous to an architect's blueprint that shows the location of
each component.
Design Review :
1. Design reviews focus on design specifications for meeting previously identified system
requirements.
2. The information supplied about the design prior to the session can be communicated
using HIPO charts, structured flowcharts, Warnier/ Orr diagrams, screen designs, or
document layouts.
3. Thus, the logical design of the system is communicated to the participants so they can
prepare for the review.
4. The purpose of this type of walkthrough is to determine whether the proposed design
will meet the requirements effectively and efficiently.
5. If the participants find discrepancies between the design and requirements, they will
point them out and discuss them.
6. It is not the purpose of the walkthrough to redesign portions of the system. That
responsibility remains with the analyst assigned to the project.
User Interface Design :
1. There are two aspects to interface design.
i. To choose the transactions in the business process to be supported by interfaces.
This defines the broad interface requirements in terms of what information is input
and output through the interface during the transaction.
ii. The design of the actual screen presentation, including its layout and the
sequence of screens that may be needed to process the transaction.
2. Choosing the Transaction Modules
1. Defining the transactions that must be supported through interfaces is part of the
system specification.
2. Each interface object defines one interface module which will interact with the user
in some way. Each such interaction results in one transaction with the system.
3. Defining the presentation
1. Each interaction includes both the presentation and dialog.
2. Presentation describes the layout of information.
Dialog Describes the sequence of interactions between the user and the computer.
4. Evaluation of Interfaces.
1. User-friendliness - the interface should be helpful, tolerant and adaptable, and the user
should be happy and confident using it.
2. Friendly interactions result in better interfaces, which not only make users more
productive but also make their work easier and more pleasant. The terms
effectiveness and efficiency are also often used to describe interfaces.
3. An interface is effective when it results in a user finding the best solution to a
problem, and it is efficient when it results in this solution being found in the shortest
time with the least error.
5. Workspace
1. The computer interface is part of a user's workspace.
2. A workspace defines all the information that is needed for the user's work, as well as the
layout of this information.
6. Robustness
1. Robustness is an important feature of an interface.
2. It means that the interface should not fail because of some action taken by the
user, nor should a user error lead to a system breakdown.
3. This in turn requires checks that prevent users from making incorrect entries.
7. Usability
1. Usability is a term that defines how easy it is to use an interface.
2. The things that can be measured to describe usability are usability metrics.
3. Metrics cover objective factors as well as subjective factors; these are:
Analytical metrics which can be directly described - for example whether all the
information needed by a user appears on the screen.
Performance metrics, which include things like the time used to perform a task,
system robustness, or how easy it is to make the system fail.
Cognitive workload metrics, or the mental effort required of the user to use the
system. These cover aspects such as how closely the interface approximates the
user's mental model, or the user's reactions to the system.
User satisfaction metrics, which include such things as how helpful the system is
or how easy it is to learn.
Interactive Interfaces :
1. The ideal interactive interface is the one where the user can interact with the computer
using natural language.
2. The user types in a sentence on the input device (or perhaps speaks into a speech
recognition device) and the computer analyzes this sentence and responds to it.
3. The form of dialog and presentation depends on the kind of system supported. There
are different kinds of interaction, e.g.:
Dialogs in transactions processing that allow the input of one transaction that
describes an event or action, such as a new appointment, or a deposit in an account.
Designing an artifact such as a document or a report or the screen layout itself.
Making a decision about a course of action such as what route to take to make a set
of deliveries; and
Communication and coordination with other group members.
Menus :
1. A menu system presents the user with a set of actions and requires the user to select
one of those actions.
2. It can be defined as a set of alternative selections presented to a user in a window.
Commands and prompts :
1. In this case the computer asks the user for specific inputs.
2. On getting the input, the computer may respond with some information or ask the user
for more information.
3. This process continues until all the data has been entered into the computer or
retrieved by the user.
Templates :
1. Templates are equivalent to forms on a computer.
2. A form is presented on the screen and the user is requested to fill in the form.
3. Usually several labeled fields are provided and the user enters data into the blank
spaces.
4. Fields in the template can be highlighted or blink to attract the user's attention.
5. The advantage templates have over menus or commands is that the data is entered
with fewer screens.
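These dialog styles can be sketched minimally in Python; the menu entries and field names below are invented for the example:

```python
# Menu: a set of alternative selections; the user picks one of them.
MENU = {"1": "Add record", "2": "Delete record", "3": "Quit"}

def select(choice):
    return MENU.get(choice)  # None signals an invalid selection

# Template: labeled fields filled in on one screen, so several data
# items are captured with fewer screens than a prompt-by-prompt dialog.
FIELDS = ["name", "city", "phone"]

def fill_form(values):
    return dict(zip(FIELDS, values))
```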
8. Testing
Software Testing :
Testing is the process of exercising a program with the specific intent of finding errors prior
to delivery to the end user.
Software testing is a process used to identify the correctness, completeness and quality of
developed computer software.
Actually, testing can never establish the correctness of computer software, as this can only
be done by formal verification (and only when there is no mistake in the formal verification
process). It can only find defects, not prove that there are none.
Testing Objectives :
Testing is the process of executing a program with the intent of finding errors
A good test case is one that has a high probability of finding an as-yet undiscovered
error.
Find as many defects as possible.
Find important problems fast.
A successful test is one that uncovers an as-yet undiscovered error.
Our objective is to design tests that systematically uncover different classes of errors
and do so with a minimum amount of time and effort.
Testing cannot show the absence of defects; it can only show that SW errors are
present.
It is not unusual for a SW development organization to expend between 30 and 40
percent of total project effort on testing.
Testing is a destructive process rather than constructive.
Testing Principles :
A strategy for SW testing integrates SW test case design methods into a well-planned
series of steps that result in the successful construction of SW
These approaches and philosophies are what we shall call strategy.
A SW team should conduct effective formal technical reviews. By doing this many errors
will be eliminated before testing commences.
Each level of development work has a corresponding level of testing:
Client Needs - Acceptance Testing
Requirements - System Testing
Design - Integration Testing
Coding - Unit Testing
Software engineering activities provide the foundation from which quality is built.
Analysis, design and coding methods act to enhance the quality by providing uniform
techniques and predictable results.
Formal technical reviews help to ensure the quality of the products.
Throughout the process measures and controls are applied to every element of a
software configuration. These help to maintain uniformity.
Testing is the last phase in which quality can be assessed and errors can be
uncovered.
Unit Testing :
Unit Testing is a dynamic method for verification, where the program is actually
compiled and executed.
It is one of the most widely used methods, and the coding phase is sometimes called
the Coding and unit testing phase.
As in other forms of testing, unit testing involves executing the code with some test
cases and then evaluating the results.
The goal of unit testing is to test modules or single units, not the entire software
system. The two types of unit testing used to test single units are as follows :
Black-box Testing
White-box Testing
Unit testing is most often done by the programmer himself.
The programmer, after finishing the coding of a module, tests it with test data. The
tested module is then delivered for integration testing and further testing.
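A minimal sketch of unit testing in Python using the standard unittest module; the unit under test (`discount`) is invented for the example:

```python
import unittest

# The single unit under test.
def discount(price, rate):
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

# Test cases execute the unit in isolation and evaluate the results.
class TestDiscount(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(discount(100, 0.25), 75.0)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            discount(100, 1.5)
```

Running `python -m unittest` in the module's directory executes both cases.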
Black-Box Testing :
1. Black box testing, also called behavioral testing, focuses on the functional requirements
of the software.
2. Black-box testing enables the software engineer to derive sets of input conditions that
will fully exercise all functional requirements for a program.
3. Black-box testing attempts to find errors in the following categories:
i. Incorrect or missing functions.
ii. Interface errors
iii. Errors in data structures or external database access.
iv. Behavior or performance errors, and
v. Initialization and termination errors.
4. Unlike white-box testing, which is performed early in the testing process, black-box
testing tends to be applied during later stages of testing. Because black-box testing
purposely disregards control structure, attention is focused on the information domain.
5. Tests are designed to answer the following questions:
1. How is functional validity tested?
2. How is system behavior & performance tested?
3. What classes of input will make good test cases?
4. Is the system particularly sensitive to certain input values?
5. How are the boundaries of a data class isolated?
6. What data rates and data volume can the system tolerate?
7. What effect will specific combinations of data have on system operation?
Advantages Of Black-Box Testing :
By applying black-box techniques, a set of test cases can be derived that satisfy the
following criteria:
1. Test cases that reduce, by a count that is greater than one, the number of additional
test cases that must be designed to achieve reasonable testing.
2. Test cases that tell us something about the presence or absence of classes of errors,
rather than an error associated only with the specific test at hand.
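As a sketch of the second criterion, each test case below stands for a whole equivalence class of inputs, plus the boundary values of each class; the `grade` specification is invented for the example:

```python
# The unit is treated as a black box: only its specification is known.
# Spec: 0-39 -> "fail", 40-100 -> "pass", anything else is invalid.
def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# One representative per class, plus the boundaries of each class;
# each case tells us about a whole class of possible errors.
cases = [(0, "fail"), (39, "fail"), (40, "pass"), (100, "pass")]
```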
WHITE-BOX TESTING :
1. White-box testing, sometimes called glass-box testing is a test case design method that
uses the control structure of the procedural design to derive test cases.
2. Using white-box testing methods the software engineer can derive test cases that
i. Guarantee that all independent paths within a module have been exercised at least
once.
ii. Exercise all logical decisions on their true and false sides.
iii. Execute all loops at their boundaries and within their operational bounds and
iv. Exercise internal data structures to ensure their validity.
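A sketch of criterion (ii): the two inputs below, derived from the control structure of a small invented function, together exercise the true and false sides of every decision:

```python
def classify(n):
    if n < 0:            # decision 1
        sign = "negative"
    else:
        sign = "non-negative"
    if n % 2 == 0:       # decision 2
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# These two cases cover all four branch outcomes
# (decision 1 true/false, decision 2 true/false).
cases = [(-3, ("negative", "odd")), (4, ("non-negative", "even"))]
```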
Integration Testing
The focus of integration testing is on examining the connections or links between
programs and their modules.
In particular, the integration test examines the following :
1. The proper program/module is being called by the proper program/module, as desired.
2. The call to a program/module has compatible input and output parameters, as desired.
Compatibility here refers to the following examinations of the calling and called
parameters :
The data type matching.
The number of parameters.
The order of parameters.
Other specific validations, such as specific value or range of value matching.
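One common way to examine such a link is to replace the called module with a stub that records the call, here using Python's unittest.mock; the module names are invented for the example:

```python
from unittest.mock import Mock

# Stub standing in for the called module; returns a canned value.
tax_module = Mock(return_value=18.0)

# Calling module under test.
def invoice_total(amount):
    return amount + tax_module(amount, rate=0.18)

total = invoice_total(100.0)

# Integration checks: the right module was called, with compatible
# parameters (number, order, and specific values).
tax_module.assert_called_once_with(100.0, rate=0.18)
```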
6. Since the total code size is large and the number of inputs and expected outputs is
large, the cycle of preparing each integration test, executing it, analyzing the test
results, fixing defects, etc. is much longer. Also, how many such cycles are required
for complete testing is difficult to predict ahead of time. Therefore, some team
members may lose focus as the integration test prolongs.
7. For testing of external interfaces, the project manager is dependent on the co-operation
of the users of the neighboring application systems. This involves co-ordination effort,
which may be time consuming.
Unit Testing in the OO Context
The concept of the Unit broadens due to encapsulation.
Testing a single operation in isolation, as in the conventional view of unit testing,
does NOT work.
Context of a class should be considered.
Comparison
Unit testing of conventional SW focuses on the algorithmic detail and the data that
flow across the module interface.
Unit testing of OO SW is driven by the operations encapsulated by the class and the
state behavior of the class.
Integration focuses on classes and their execution across a thread or in the context
of a usage scenario.
Validation uses conventional black box methods.
Test case design draws on conventional methods, but also encompasses special
features.
OOT Strategy :
Class testing is the equivalent of unit testing.
Operations within the class are tested.
The state of behavior of class is examined.
Integration applies three different strategies :
Thread-based testing - integrates the set of classes required to respond to one input or event.
Use-based testing - integrates the set of classes required to respond to one use case.
Cluster testing - integrates the set of classes required to demonstrate one collaboration.
Software validation is achieved through a series of black box tests that demonstrate
conformity with requirements. A test plan outlines the classes of tests to be conducted and
a test procedure defines specific test cases that will be used to demonstrate conformity
with requirements.
After each validation test case has been conducted, one of two possible conditions exists:
i) The function or performance characteristics conform to the specifications and are
accepted.
ii) A deviation from specifications is discovered and a deficiency list is created.
An important element of validation testing is configuration review. The intent of the review
is to ensure that all elements of the software configuration have been properly developed
and are well documented. The configuration review is sometimes called an audit.
If software is developed for the use of many customers, it is impractical to perform formal
acceptance tests with each one. Many software product builders use a process called alpha
and beta testing to uncover errors that only the end-user is able to find.
The alpha test is conducted at the developer's site by the customer. The software is used in
a natural setting with the developer recording errors and usage problems. Alpha tests are
performed in a controlled environment.
Beta tests are conducted at one or more customer sites by the end-users of the software.
Unlike alpha testing, the developer is generally not present. The beta test is thus a live
application of the software in an environment that cannot be controlled by the developer.
The customer records all the errors and reports these to the developer at regular intervals.
As a result of problems recorded or reported during these tests, the developers make
modifications and prepare for the release of the software product to the entire customer base.
1. System validation checks the quality of the software in both simulated and live
environments.
2.
i. First the software goes through a phase, often referred to as alpha testing, in which
errors and failures based on simulated user requirements are verified and studied.
ii. The alpha test is conducted at the developer's site by a customer.
iii. The software is used in a natural setting with the developers recording errors &
usage problems.
iv. Alpha tests are conducted in a controlled environment.
3.
i. The modified software is then subjected to phase two, called beta testing, at the
actual user's site or in a live environment.
ii. The system is used regularly with live transactions. After a scheduled time, failures
and errors are documented, and final corrections and enhancements are made before
the package is released for use.
iii. Unlike alpha testing, the developer is generally not present.
Therefore, the beta test is a live application of the software in an environment that
cannot be controlled by the developer.
iv. The customer records all problems (real or imagined) that are encountered during
beta testing and reports these to the developer at regular intervals.
v. As a result of problems reported during beta tests, software engineers make
modifications and then prepare for release of the software product to the entire
customer base.
System testing
System testing is actually a series of tests whose purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all work to verify that system
elements have been properly integrated and perform allocated functions.
Some types of common system tests are :
i) Recovery testing :
Many computer-based systems must recover from faults and resume processing within a
pre-specified time. In some cases, a system must be fault-tolerant, i.e. processing faults
must not cause overall system function to cease. In other cases, a system failure must be
corrected within a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and
verifies that recovery is properly performed. If recovery is automatic, re-initialization,
check-pointing mechanisms, data recovery and restart are evaluated for correctness. If
recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to
determine whether it is within acceptable limits.
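A recovery test can be sketched by injecting a fault and verifying that a restart resumes from the last checkpoint; the file name, data, and fault below are invented for the example:

```python
import json
import os
import tempfile

ckpt = os.path.join(tempfile.mkdtemp(), "state.json")
processed = []

def process(items, fail_at=None):
    # data recovery + restart: resume from the last checkpoint, if any
    start = 0
    if os.path.exists(ckpt):
        with open(ckpt) as f:
            start = json.load(f)["done"]
    for i in range(start, len(items)):
        if i == fail_at:
            raise RuntimeError("injected fault")  # forced failure
        processed.append(items[i])
        with open(ckpt, "w") as f:                # checkpointing
            json.dump({"done": i + 1}, f)
```

The test forces a failure midway, restarts, and checks that every item was handled exactly once, which verifies that recovery is properly performed.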
Debugging :
Debugging occurs as a consequence of successful testing. That is, when a test case
uncovers an error, debugging is the process that results in the removal of the error.
Debugging is a skillful process. A software engineer, evaluating the results of a test,
is often confronted with the indication of a software problem. The external manifestation
of the error and the internal cause of the error may have no obvious relation to one
another. The process of connecting the symptom to the cause is a part of debugging.
The debugging process will always have one of two outcomes :
i) The cause will be found and corrected.
ii) The cause will not be found.
In the latter case, the person performing debugging may suspect a cause, design a test
case to help validate that suspicion, and work toward error correction in an iterative
fashion.
In general three categories for debugging approaches may be suggested:
i) Brute force :
This is the most common and least efficient method of debugging. Memory dumps are
taken, run-time traces are invoked and write statements are loaded in the hope that the
mass of information produced will provide the required solution. Although the
information yielded may result in success, it involves a lot of wasted time and effort.
ii) Backtracking :
This method can be used successfully in small programs. Beginning at the site where the
symptom has been uncovered, the source code is traced backward, manually, till the site of
the cause is found. Unfortunately, as the number of lines of code increases, the number
of potential backward paths may become unmanageably large.
iii) Cause elimination :
This approach introduces the concept of binary partitioning. Data related to the error
occurrence are organized to isolate potential causes. A cause hypothesis is devised, and the
data is used to prove or disprove the hypothesis. Alternatively, a list of all possible causes is
developed and tests are conducted to eliminate each.
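Binary partitioning can be sketched as repeatedly halving the error-related data and keeping the half that still triggers the failure; the fault predicate below is a hypothetical stand-in for re-running the failing program:

```python
# Hypothetical fault: the program fails whenever a negative value
# appears somewhere in the input batch.
def fails(batch):
    return any(x < 0 for x in batch)

def isolate(batch):
    # assumes the whole batch is already known to trigger the failure
    while len(batch) > 1:
        mid = len(batch) // 2
        left = batch[:mid]
        # keep whichever half still reproduces the failure
        batch = left if fails(left) else batch[mid:]
    return batch[0]
```

Each iteration eliminates half of the potential causes, so the failing item is isolated in a logarithmic number of test runs.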
Purpose : To build and test new networks and modify existing networks for use by the
new system.
Roles : This activity will normally be completed by the same system specialists who
designed the network(s).
System analyst : The system analyst's role is more that of a facilitator, ensuring
that business requirements are not compromised by the network solution.
System builders : The network administrator is the person who has the expertise
for building and testing network technology for the new system. S/he will also be
familiar with network architecture standards that must be adhered to for any
possible new networking technology.
Prerequisites (Inputs) : This activity is triggered by the approval from the system
owners to continue the project into systems design. The key input is the network design
requirements defined during systems design.
Deliverables (Outputs) : The principal deliverable of this activity is an installed network
that is placed into operation. Network details will be recorded in the project repository for
future reference.
[Having introduced the roles, inputs and outputs, now focus on the implementation steps.]
Applicable Techniques : Network development skills are important for systems
analysts.
Steps:
Review the network design requirements outlined in the technical design statement
developed during systems design.
Make any appropriate modifications to existing networks and/or develop new networks.
Review network specifications for future reference.
This task must immediately precede other programming activities because databases are
the resources shared by the computer programs to be written.
ii.] Build and Test Databases
Purpose : The purpose of this activity is to build and test new databases and modify
existing databases for use by the new system.
Roles : This activity will typically be completed by the same system specialist who
designed the database.
Prerequisites (Inputs) : The primary input to this activity is the database design
requirements specified in the technical design statement during systems design. Sample
data from production databases is often loaded into tables for testing the database.
Deliverables (Outputs) : The end product of this activity is an unpopulated (empty)
database structure for the new database.
[This is the function most people associate with systems analysis; they don't see all the
other work involved.]
There are several applicable techniques used in building and testing databases.
1) Sampling : Sampling methods are used to obtain representative data for testing
database tables.
2) Data modeling : This requires a good understanding of data modeling; we focus on this
in Part 2 (after the midterm).
3) Database design.
To complete this activity, there are six steps:
1) Review the technical design statement for database design requirements (know what
you're up to).
2) Locate production databases that may contain representative data for testing
database tables; otherwise, generate test data for the database tables. [Get data that will
really test the robustness of your design. Don't just pick easy cases.]
3) Build/modify the database according to the design specifications.
4) Load tables with sample data.
5) Test database tables and relationships by adding, modifying, deleting, and retrieving
records. All possible relationship paths and data integrity checks should be tested.
6) Review the database schema and record it for future reference.
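The six steps above can be sketched with SQLite, chosen here only because it ships with Python; the table names and sample data are invented for illustration:

```python
import sqlite3

# Step 3: build the database according to the design specification.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # enforce relationships
conn.execute("""CREATE TABLE customer (
                    cust_id INTEGER PRIMARY KEY,
                    name    TEXT NOT NULL)""")
conn.execute("""CREATE TABLE orders (
                    order_id INTEGER PRIMARY KEY,
                    cust_id  INTEGER NOT NULL
                             REFERENCES customer(cust_id))""")

# Step 4: load the tables with sample data.
conn.execute("INSERT INTO customer VALUES (1, 'Lars Ingersol')")
conn.execute("INSERT INTO orders VALUES (100, 1)")

# Step 5: test retrieval across the relationship path...
row = conn.execute("""SELECT c.name FROM customer c
                      JOIN orders o ON o.cust_id = c.cust_id
                      WHERE o.order_id = 100""").fetchone()

# ...and check that referential integrity rejects an orphan order.
try:
    conn.execute("INSERT INTO orders VALUES (101, 999)")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True

print(row[0], orphan_rejected)   # -> Lars Ingersol True
conn.close()
```

Note that the integrity test deliberately uses a record that should fail; per step 2, easy cases alone will not exercise the robustness of the design.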
iii.] Install and Test New Software Packages
Purpose : - To install and test any new software packages and make them available to the
organizations software library.
Roles : This is the first activity in the life cycle that is specific to the applications
programmer, who will install and test the package on the network server (actually, it is a
sure bet that the network administrator will be involved).
Prerequisites (Inputs) : The main inputs are the new software packages and
documentation received from system vendors. The applications programmer will complete
the installation and testing of the package according to the integration requirements and
program documentation that was developed during system design.
Deliverables (Outputs) : The principal deliverable of this activity is the installed and
tested software package(s) that are made available in the software library. Any modified
software specifications and new integration requirements that were necessary are
documented and made available in the project repository to provide a history and serve as
future reference.
Applicable Techniques : Well, there really isn't much to this; it depends on the
programming experience and knowledge of the tester. Essentially it is just good
housekeeping: install, test, and maintain good documentation for others to follow.
iv.] Write and Test New Programs
Purpose : The purpose of this activity is to write and test all programs to be developed in-house.
Roles : This activity is specific to the applications programmer.
System owner and system users :- not involved
Note that there is often an objective, or specially trained, person to test the application
(hence the name, the application tester).
Prerequisites (Inputs): The primary input to this activity is the technical design
statement, plan for programming, and test data that was developed during the systems
design phase. Since new programs or program components may have already been written
and in use by other existing systems, the experienced applications programmer will know to
first check for possible reusable software components available in the software library.
Some information systems shops have a quality assurance group staffed by specialists who
review the final program documentation for conformity to standards. This group will
provide appropriate feedback and quality recommendations.
Deliverables (Outputs): The output is of course the new programs and reusable
software components that are placed in a software library. You should also have created
program documentation that may need to be approved by the quality assurance people and
kept as a record of the project.
Applicable Techniques : If the modules are coded top-down, they should be tested and
debugged top-down as they're written. There are three levels of testing: stub, unit (or
program), and systems testing.
Stub testing is the test performed on individual modules, whether they are in main
programs or are subroutines.
Unit or program testing is a test whereby all the modules that have been coded and
stub tested are tested as an integrated unit.
Systems testing is the tests that ensure that the application programs written in
isolation work properly when integrated into a whole system.
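A minimal sketch of stub and unit testing, using Python's unittest module; the payroll functions are hypothetical, invented purely for the example:

```python
import unittest

# Hypothetical payroll routines. The real tax table is not written
# yet, so tax_rate() is a stub returning a fixed value.
def tax_rate(state):
    return 0.10                    # stub: enough to exercise its callers

def net_pay(hours, rate, state):
    gross = hours * rate
    return gross * (1 - tax_rate(state))

class PayrollTests(unittest.TestCase):
    # Stub testing: exercise the individual module on its own.
    def test_stub_returns_fixed_rate(self):
        self.assertEqual(tax_rate("anywhere"), 0.10)

    # Unit (program) testing: the coded and stub-tested modules are
    # tested together as an integrated unit.
    def test_net_pay_uses_stub(self):
        self.assertAlmostEqual(net_pay(40, 10.0, "anywhere"), 360.0)

suite = unittest.TestLoader().loadTestsFromTestCase(PayrollTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())      # -> True
```

Systems testing would then exercise the payroll program together with the other programs it feeds (e.g., the general ledger), which is beyond a unit-level sketch like this one.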
2. The Delivery Phase of Systems Implementation :
This final part of the implementation phase of the SDLC delivers the new system
into operation.
To achieve this, you must complete the following :
Conduct a system test to make sure that the new system works
Prepare a conversion plan to smooth the transition to the new system
Install databases used by the new system
Provide training and documentation for individuals using the new system
Convert from the old system to the new system and evaluate the project and final
system.
The activities involved in the delivery phase are as follows :
i.] Conduct System Test.
ii.] Prepare Conversion Plan.
iii.] Install Database.
iv.] Train System Users.
i.] Conduct System Test
Purpose : The purpose is to test all software packages, custom-built programs, and other
existing programs to make sure they work together and work correctly.
Roles : The systems analyst usually manages this.
Prerequisites (Inputs) : You need the software packages, in-house (custom-built)
programs, and any existing programs in the new system.
Deliverables (Outputs) : Any modifications discovered to be necessary during testing;
continue until the test is successful. You, or others, will have tested the system with some
form of data (the system test data). [I like to use a readily identifiable record that I can
track through all phases of the system; that's why you see Lars Ingersol in databases used
in this and other courses.]
ii.] Prepare Conversion Plan
This activity is not usually performed by the systems analyst; it is usually planned by
upper managers, a steering committee of some kind, or some other person. Although in
your work as the analyst/designer, you will have to include the conversion plan in your
planning (time and resource projections, Gantt charts, etc.), the specifics are defined by
others. So we will skip the details of this activity.
However, as part of your planning, you'll need to consider the following:
Getting training materials ready
Establish a schedule for installing databases
Identify a training program (or in-house trainers) and schedule for the system users
Develop a detailed installation strategy to follow
Develop a systems acceptance test plan.
There are several common conversion strategies:
Abrupt cutover : on a specific date (usually coinciding with some business date,
like the start of a new financial year or a school year) the old system goes off-line and the new one is placed into operation.
Parallel conversion : both old and new systems are used for a period of time;
this is done to ensure that all major problems in the new system have been solved
before abandoning the old system.
Location conversion : when the same system will be used in multiple locations,
usually one location is selected to start with (to see where the problems are in
conversion) and then the conversion is performed on all the other sites.
Anticipate some problems with each strategy. For example, an abrupt cutover will be
successful only if the computer program is absolutely perfect, which will require lots of
testing beforehand and will likely require training before users actually go live. The
parallel conversion is a lot of work for everyone: the workers must use both systems,
essentially doing their job twice. There's lots of opportunity for problems.
iii.] Install Databases
Purpose : To populate the new system databases with existing data from the old system.
Roles : Usually only the system builders: application programmers and data-entry
personnel.
Prerequisites: (Inputs): Existing data from the old system, coupled with database
schemas and database structures for the new database.
Deliverables (Outputs) : The restructured database populated with data from the old system.
Applicable Techniques : You may need to "massage" the data, such as by writing programs
to convert the old data into the new data formats.
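A small hypothetical sketch of such a "massage" program in Python; the field names and formats are invented for illustration, since real conversions are driven by the old and new schemas:

```python
import csv
import io

# Invented scenario: the old system stored dates as DD/MM/YY text and
# full names in one column; the new schema wants ISO dates and split
# name fields.
def convert_record(old):
    day, month, year = old["joined"].split("/")
    first, _, last = old["name"].partition(" ")
    return {
        "first_name": first,
        "last_name": last,
        "joined": f"20{year}-{month}-{day}",   # assumes all years are 20xx
    }

old_rows = csv.DictReader(io.StringIO("name,joined\nLars Ingersol,03/11/09\n"))
new_rows = [convert_record(r) for r in old_rows]
print(new_rows[0])
# -> {'first_name': 'Lars', 'last_name': 'Ingersol', 'joined': '2009-11-03'}
```

In practice such a program would also log records that fail conversion, so that dirty data from the old system can be corrected rather than silently loaded.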
iv.] Train System Users
Purpose : To provide training and documentation to system users to prepare them for a
smooth transition to the new system.
Roles :
System owners :- must support this activity: be willing to approve release time for
training.
System users :- the system is designed for them, so train 'em.
System analyst :- from the system documentation, the systems analyst may write the
end-user documentation (manuals).
System designers and builders :- not usually involved.
Finally, the users must be trained. This may be done in-house (by the analyst or others) or
by hiring an outside training company.
Maintenance :
Not all jobs run successfully. Sometimes an unexpected boundary condition or an overload
causes an error. Sometimes the output fails to pass controls. Sometimes program bugs may
appear. No matter what the problem, a previously working system that ceases to
function requires emergency maintenance. Isolating operational problems is not
always an easy task, particularly when combinations of circumstances are responsible.
The ease with which a problem can be corrected is directly related to how well a system
has been designed and documented. Changes in environment may lead to maintenance
requirement. For example, new reports may need to be generated, competitors may alter
market conditions, a new manager may have a different style of decision-making,
organization policies may change, etc. The information system should be able to accommodate
changing needs. The design should be flexible enough to allow new features to be added with ease.
Although software does not wear out like hardware, integrity of the program, test
data and documentation degenerate as a result of modifications. Hence, the system will
need maintenance. Maintenance covers a wide range of activities such as correcting code,
design errors, updating documentation and upgrading user support.
Maintenance is necessary to eliminate errors in the system during its working life and to
tune the system to any variations in its working environment.
It has been seen that there are always some errors found in the system that must be
noted and corrected. This also means reviewing the system from time to time.
If a major change to a system is needed, a new project may have to be set up to carry
out the change. The new project will then proceed through all the above life-cycle
phases.
Types Of Maintenance :
Software maintenance can be classified into four types:
1] Corrective Maintenance :
It means repairing processing or performance failures, or making changes
because of previously uncorrected problems or false assumptions.
It involves changing the software to correct defects.
For example:
Debugging and correcting errors or failures and emergency fixes that are
required when newly developed software is installed for the first time.
Fixing errors due to incomplete specifications, which may result in
erroneous assumptions, such as assuming an employee code is 5 numeric
digits instead of 5 characters.
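The employee-code example can be made concrete. Here is a hedged Python sketch (function names invented for illustration) showing the defective validation and its corrective fix:

```python
# The original code shipped with the false assumption that an employee
# code is 5 numeric digits:
def is_valid_code_buggy(code):
    return code.isdigit() and len(code) == 5    # wrongly rejects "EMP01"

# Corrective fix once the real specification (5 characters, letters or
# digits allowed) is known:
def is_valid_code(code):
    return code.isalnum() and len(code) == 5

print(is_valid_code_buggy("EMP01"), is_valid_code("EMP01"))   # -> False True
```

The fix changes only the faulty assumption; all-digit codes that were valid before remain valid after the repair.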
2] Adaptive Maintenance :
Over time the environment for which the software was developed is likely to
change.
Adaptive maintenance results in modifications to the software to accommodate
changes in the external environment.
For example:
New hardware may have been installed (e.g., changing from a 16-bit to a 32-bit environment).
3] Perfective Maintenance (Enhancement) :
This implies changing the performance or modifying the program to improve or
enhance the system.
It extends the software beyond its original functional requirements.
This type of maintenance involves more time and money than both
corrective and adaptive maintenance.
For example:
Reports with graphical analysis such as pie charts, bar charts, etc.
4] Preventive Maintenance :
It involves making changes to the software to prevent future problems, such as
restructuring poorly organized code or updating documentation before defects appear.
10. Documentation
Documentation :
Documentation is not a step in the SDLC; it is an on-going activity in every phase of the SDLC. It
involves developing documents: initially as a draft, later as a reviewed document, and finally as a
signed-off document.
A document is born either after it is signed off by an authority or after its review, and it
carries an initial version number. However, the document also undergoes changes, and the only
way to keep your document up to date is to incorporate these changes. A document
generally explains the system and helps people to interact with it.
Importance Of Software Documentation :
The software documentation is important because of the following reasons :
1. The development of s/w starts with abstract ideas in the minds of the top management
of the user organization, and these ideas take different forms as the s/w development
takes place. The documentation is the only link across the entire complex process
of s/w development.
2. The documentation is written communication; therefore it can be used for future
reference as the s/w development advances, or even after the s/w is developed, and it is
useful for keeping the s/w up to date.
3. The documentation carried out during an SDLC stage, say system analysis, is useful for
the respective system developer to draft his/her ideas in a form which is shareable
with the other team members or users. Thus it acts as a very important medium for
communication.
4. The document reviewer(s) can point out deficiencies only because the abstract ideas or
models are documented. Thus, documentation makes abstract ideas tangible.
5. When the draft document is reviewed and the recommendations are incorporated, it is
useful for the next stage's developers to base their work on. Thus the documentation of one
stage is important for the next stage.
6. Documentation is very important because it records very important decisions about
freezing the system requirements, the system design and implementation decisions,
agreed between the users and developers or amongst the developers themselves.
7. Documentation provides a lot of information about the software system. This makes it a
very useful tool for learning about the software system even without using it.
8. Since team members keep getting added to a s/w development team as the project
goes on, the documentation acts as an important source of detailed and complete
information for newly joined members.
9. Also, the user organization may spread the implementation of a successful s/w system to a
few other locations in the organization. The documentation will help the new users
learn the operations of the s/w system. The same advantage applies when a new
user joins the existing team of users. Thus, documentation makes users productive
on the job very quickly and at low cost.
10. Documentation is live and important as long as the s/w is in use by the user
organization.
11. When the user organization starts developing a new software system to replace this
one, the documentation is still useful; e.g., the system analysts can refer to it as a
starting point for discussions on the new system requirements.
2] System documentation :
System documentation describes the system's functions and how they are implemented.
It consists of detailed information about a system's design specifications, its internal
workings, and its functionality.
Most system documentation is prepared during the systems analysis and systems design
phases.
Internal documentation :
It is the system documentation that is part of the program source code or is generated
at compile time.
External documentation :
It is system documentation that includes the outcomes of structured diagramming
techniques, such as data flow diagrams and entity-relationship diagrams.
System documentation consists of the following :
Data dictionary entries.
Data flow diagrams.
Screen layouts.
Source documents.
Initial systems request.
3] Operation documentation :
Typically used in a minicomputer or mainframe environment with centralized processing
and batch job scheduling.
Documentation tells the IS operations group how and when to run programs.
Common example is a program run sheet, which contains information needed for
processing and distributing output.
4] User documentation :
Typically includes the following items
System overview
Table of Contents
Index
Find or search
Links to definitions
Analysts prepare the material, and users review it and participate in developing the
manual.
Online documentation can empower users and reduce the need for direct IS support:
Context-sensitive Help
Interactive tutorials
Hypertext
On-screen demos
management and configuration management capabilities along with utilities that enable
tools from different vendors to be integrated into the IPSE.
Advantages Of CASE tools
Automate many manual tasks
Generate system documentation
Promote standardization
Promote greater consistency & coordination
Disadvantages Of CASE tools
CASE tools cannot automatically provide a functional, relevant system
It cannot automatically force analysts to use a prescribed methodology or create a
methodology when one does not exist
Cannot radically transform the system analysis and design process
Design Documentation
Test Documentation
QUESTION ANSWER
What are the CASE tools? Explain some CASE tools used for prototyping.
(May-06-15Marks, Nov-03, M-05, Dec-04).
Answer :
Computer assisted software engineering (CASE)
Computer assisted software engineering (CASE) is the use of automated software
tools that support the drawing and analysis of system models and associated
specifications. Some tools also provide prototyping and code generation facilities.
At the center of any true CASE tool's architecture is a developer's database called a
CASE repository, where developers can store system models, detailed descriptions
and specifications, and other products of systems development.
A CASE tool enables people working on a software project to store data about the
project, its plans and schedules; to track its progress and make changes easily; to
analyze and store data about users; and to store the design of a system through
automation.
A CASE environment makes system development economical and practical. The
automated tools and environment provide a mechanism for systems personnel to
capture, document, and model an information system.
A CASE environment is a number of CASE tools which use an integrated approach to
support the interaction between the environment's components and the users of the
environment.
CASE Components
CASE tools generally include five components: diagrammatic tools, an information
repository, interface generators, code generators, and management tools.
Diagrammatic Tools
Diagrammatic tools support analysis and documentation of application
requirements.
Typically, they include the capabilities to produce data flow diagrams, data
structure diagrams, and program structure charts.
These high-level tools are essential for the support of structured analysis methodology,
and CASE tools incorporate structured analysis extensively.
They support the capability to draw diagrams and charts and to store the details
internally. When changes must be made, the nature of the change is described to
the system, which can then redraw the entire diagram automatically.
The ability to change and redraw eliminates an activity that analysts find both
tedious and undesirable.
Centralized Information Repository
A centralized information repository or data dictionary aids the capture, analysis,
processing, and distribution of all system information.
The dictionary contains the details of system components such as data items, data
flows, and processes, and also includes information describing the volumes and
frequency of each activity.
Dictionaries are designed so that the information is easily accessible. They
also include built-in controls and safeguards to preserve the accuracy and
consistency of the system details.
The use of authorization levels, process validation, and procedures for testing the
consistency of descriptions ensures that access to definitions, and the revisions
made to them in the information repository, occur properly according to the
prescribed procedures.
Interface Generators:
System interfaces are the means by which users interact with an application,
both to enter information and data and to receive information.
Interface generators provide the capability to prepare mockups and prototypes of
user interfaces.
Typically they support the rapid creation of demonstration systems, menus,
presentation screens, and report layouts.
Interface generators are an important element of application prototyping,
although they are useful with all development methods.
Code Generators:
Code generators automate the preparation of computer software.
They incorporate methods that allow the conversion of system specifications into
executable source code.
The best generators will produce approximately 75 percent of the source code
for an application; the rest must be written by hand. This "hand coding", as the
process is termed, is still necessary.
Because CASE tools are general-purpose tools not limited to any specific area, such
as manufacturing control, investment portfolio analysis, or accounts
management, the challenge of fully automating software generation is
substantial.
The greatest benefits accrue when the code generator is integrated with the
central information repository; such a combination achieves the objective of creating
reusable computer code.
When specifications change, code can be regenerated by feeding details from the data
dictionary through the code generators. The dictionary contents can be reused to
prepare the executable code.
Management Tools:
CASE systems also assist project managers in maintaining efficiency and
effectiveness throughout the application development process.
The CASE components assist development managers in scheduling analysis and
design activities and in allocating resources to different project activities.
Some CASE systems support the monitoring of project development schedules
against actual progress, as well as the assignment of specific tasks to individuals.
What is cost benefit analysis? Describe any two methods of performing same.
(May- 06, May-04).
Answer :
Cost and benefit analysis:
Cost-benefit analysis is a procedure that gives a picture of the various costs, benefits,
and rules associated with each alternative system.
Cost and benefit categories :
In developing cost estimates for a system, we need to consider several cost
elements. Among them are the following:
Hardware costs:
Hardware costs relate to the actual purchase or lease of the computer and
peripherals (e.g., printer, disk drive, tape unit). Determining the actual cost of
hardware is generally more difficult when the system is shared by many users than
for a dedicated stand-alone system.
Personnel costs:
Personnel costs include EDP staff salaries and benefits (health insurance, vacation
time, sick pay, etc.) as well as payment to those involved in developing the system.
Costs incurred during the development of a system are one-time costs and are
labeled development costs.
Facility costs:
Facility costs are expenses incurred in the preparation of the physical site where the
application or computer will be in operation. This includes wiring, flooring, lighting,
and air conditioning. These costs are treated as one-time costs.
Operating costs:
Operating costs include all costs associated with the day-to-day operation of the
system; the amount depends on the number of shifts, the nature of the applications,
and the caliber of the operating staff. The amount charged is based on computer time,
staff time, and the volume of output produced.
Supply costs:
Supply costs are variable costs that increase with increased use of paper, ribbons,
disks, and the like.
Procedure for cost benefit determination :
The time value of money is usually expressed in the form of interest on the funds
invested to realize the future value. Assuming compound interest, the formula is:
F = P(1 + i)^n
where F is the future value, P is the present value (the amount invested), i is the
interest rate per compounding period, and n is the number of periods.
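As a worked example, here is the formula and its inverse (used to discount a future benefit back to a present value) in Python:

```python
# F = P * (1 + i) ** n : future value of P invested at rate i per
# period for n periods; the inverse discounts a future amount F back
# to its present value.
def future_value(p, i, n):
    return p * (1 + i) ** n

def present_value(f, i, n):
    return f / (1 + i) ** n

# Rs. 1000 invested at 10% per year for 2 years:
print(round(future_value(1000, 0.10, 2), 2))   # -> 1210.0
print(round(present_value(1210, 0.10, 2), 2))  # -> 1000.0
```

In cost-benefit work, the present-value form is the one applied to each year's estimated benefit so that benefits arriving in different years can be compared fairly.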