In the early 1980s, the more flexible Incremental Model was introduced. It is
also called the Staged Delivery Model. This model performs the waterfall in
overlapping sections, attempting to compensate for the length of projects by
producing usable functionality earlier, in increments. All the requirements are
collected up front, and the technical architecture is finalized before development
begins. The objectives are divided into several increments or builds. The first
build is developed and delivered; this is followed by the next portion, until all
the objectives have been met. Each increment is easier to design and build than
the whole project at once. However, the model has its own drawbacks.
Spiral model:
For a typical application, the spiral model means that you have a rough-cut of user
elements as an operable application, add features in phases, and, at some point,
add the final graphics. Each phase starts with a design goal and ends with the
client reviewing the progress thus far. Analysis and engineering efforts are
applied at each phase of the project, with an eye toward the business goal of the
project. The spiral model is used most often in large projects and needs constant
review to stay on target. However, if the spiral is used in reverse, that is,
adding the peripheral features first and then developing the core, the project
more often than not ends up going nowhere.
Other models:
There are several other Life Cycle models: Rational's Iterative Development
Model, Adaptive Model, Rapid Application Development, Evolutionary
Prototyping, and V Model. You are not required to memorize the models now.
The intention is to give you an idea of what the software life cycle phases are and
how they are combined into different structures to form life cycle models.
Maintenance model:
For projects that require us to maintain software, Cognizant has its own
Application Value Management process model. This involves initial knowledge
transition, followed by steady state, where enhancements, production support, and
bug fixes take place.
Model selection:
Summary:
The software stages are Requirement, Design, Development, Testing, and Delivery.
Stages such as Analysis, Design, Testing, and Delivery can be carried out at single
or multiple points of time during the software project, according to different life
cycle models.
The various models for project development are: Waterfall, Incremental, Iterative,
Spiral, and Maintenance.
What is quality?
Quality can be defined in various ways depending upon the customer's point of
view or from the point of view of the product or from a value angle or even
according to the manufacturing specifications.
Whatever work product you produce during day-to-day activities contributes to the
quality of the product.
The entire process for ensuring quality makes up the quality management of
an organization.
Responsibility of quality:
On this page, you will learn the responsibilities involved in ensuring the quality
of a product.
The responsibility of ensuring proper quality of every deliverable lies with the
organization. This implies that it is the responsibility of each and every employee of
the organization.
The quality of the process and product of the organization needs to conform to
certain set standards.
Now you will learn about the various Quality Models and Standards.
ISO
CMMi
PCMM
BS 7799
Six Sigma
Quality and associated certifications are the differentiating factors that allow
companies to beat their competitors and win clients.
The quality control activities verify whether the deliverables are of acceptable
quality and whether they are complete and correct. Examples of quality control
activities in software include testing and reviews.
Quality assurance verifies the process used to create the deliverables. This
can be performed by a manager, client, or a third-party reviewer. Examples of
quality assurance include software quality assurance audits, software
configuration management, and defect prevention.
The quality management system includes all the procedures, templates, and
guidelines associated with software engineering in the organization. In
Cognizant, the quality management system is the QView.
Even though there are policies, frameworks, review, testing, audits and
conformance checks in place, the ownership and responsibility of producing a
quality product rests with the individual.
Quality is a reflection of your ability.
Quality is a relative term, which aims at customer satisfaction. There are several
models and standards in the industry to ensure quality
The various activities involved in ensuring the quality of software are Quality
Control, Quality Assurance, and Quality Management
Quality is not a goal but a journey. There are multiple paths leading to proper
quality.
ISO 9001 consists of five main sections, which cover the standard requirements.
You learned that CMMi stands for Capability Maturity Model Integration.
CMMi consists of guidelines for managing the:
Process Management Processes
Project Management Processes
Engineering Processes
Support Processes
Linking the four, CMMi forms guidelines to manage and execute software
projects.
The different benefits from CMMi are in the areas of improved process, budget
control, delivery capability, increased productivity, and reduction of defects.
One has to progress maturity level by maturity level and reach the pinnacle of
excellence.
C1, or Highly Critical - This involves firewall and finance-related data.
Storage of Information
Transmission of Information
Access to Information
Destruction of Information
Just like the security of information, reduced variability is also very critical for
customer satisfaction. Six Sigma is a statistical methodology that significantly improves
customer satisfaction and shareholder value by reducing variability in every aspect
of our business.
"Six Sigma is not something else that you do...it is what you do."
Take a look at how Cognizant's quality journey has evolved over time and what it
implies. All the certifications of Cognizant have been achieved enterprise-wide
across all centers. The certifications have been achieved in an incremental
approach with Cognizant's growth as a company.
http://cognizantonline/qview/
The responsibility for monitoring all the process and quality related activities lies
with the individual employees of Cognizant. The group responsible for ensuring quality
processes and continuous improvement is the process and quality group. The
diagram shows the structure of the process and quality group. SEPG is a virtual
group consisting of quality champions and representative practitioners from all
locations and vertical, horizontal, or support groups. It is responsible for process
definition, maintenance, and improvement of the software processes used by the
organization. The SQAG is responsible for guiding projects through facilitations and
monitoring the process and quality of the Projects through auditing and reporting,
periodic status reporting, and timely escalations. The SQAG is also responsible for
conducting the internal quality audits that serve as periodic assessments of project
and process health and compliance in the organization.
This illustration shows the types of audits that take place in a project and their
distribution throughout the project life cycle. All the audit results, which include
the non-conformances and the observations, are logged in the Q Smart online tool. Like
any other process, the quality process also strives for continuous improvement in
Cognizant. This improvement is driven by the feedback about the quality
process received from the associates. Q Smart can also be used to provide feedback
on the quality process. Any associate can provide feedback in this manner.
Apart from the certifications to be adhered to and the quality champions and quality
reviewers, the management also encourages quality and innovation by awarding
the project of the year, associate of the year, and the best practice awards. The
best ideas and innovations are recognized, rewarded, and re-used to ensure
continuous improvement and high quality in process, product, and practice.
Six Sigma uses a standard methodology that applies statistical solutions to reduce
the variability of a product and thereby reduce defects
All the certifications of Cognizant have been achieved enterprise-wide across all
centers. The certifications have been achieved in an incremental approach with
Cognizant's growth as a company
QView is a web-based application that outlines Cognizant's quality policy. Here,
the process documents are arranged according to their relevance to the different
types of projects and processes.
QView also contains different references that exist for the benefit of the associates.
In Cognizant, the group responsible for ensuring quality processes and continuous
improvement is the process and quality group.
All the audit results, which include the non-conformances and the observations,
are logged in the Q Smart online tool.
The types of projects that are dealt with by Cognizant can be classified into
Development Projects and Application Value Management or Maintenance Projects.
In the following pages, you will learn in detail about application development
projects.
New development - A project developed from scratch for a customer, based on given
requirements.
Re-engineering - Converting an existing software system into another system having
the same functionality, but with a different environment, platform, programming
language, operating system, etc.
Product development - A software product developed not for a single customer, but
based on market demand from a customer base for that type of product.
Classification based on size like Large, Medium, Small, and Very Small.
The reason why the projects are categorized into large, medium, small, and very
small is that they need to follow different processes for development. A large
project and a very small project cannot follow the same steps while executing
deliveries. To keep a large project on track, one needs to have substantial
processes in place; having too few processes creates the risk of unmanaged work,
which may lead to failure. For a small project, on the other hand, a heavyweight
process becomes a tedious and unnecessary overhead.
The table shows the recommended life-cycle models for the different types of
development projects in Cognizant. A large project can follow the waterfall,
incremental, or iterative model. A medium project follows the waterfall or iterative
model. Small and very small projects follow the waterfall model. These
recommendations have understandable reasons: a project of small duration could run
into problems if there were too many cycles of delivery, as in the iterative model.
Also, unless a project is of large duration, breaking it up into increments does
not make sense and might lead to complications from multiple deliveries. The
following pages will give you details on these three types of life-cycle models.
The initial planning broadly scopes the iterations and arrives at a roadmap.
Merging with the customer's process and project-specific requirements forms the
project's software process.
Process model configuration is the single most important activity for finalizing the
process model before the start of the project. The process model is the basis for
processes such as project planning and tracking; application build, integration,
testing, and release; branching or merging strategies that may be derived from it;
distribution of work within and across teams; planning for rollouts; and so on.
All the development life-cycle models follow the parallel processes of development
and delivery management. While the development process deals with the project
development stages and associated activities, delivery management deals with the
project management activities relevant throughout the project.
The types of application value management projects that are handled by Cognizant
are:
Maintenance: This means taking ownership of the software system and ensuring
that it meets current and future needs of all users at prescribed service levels.
Mass Change: These are projects involving changing the attributes of a system from
one to another.
Any AVM project consists of taking ownership of the software system and ensuring
that it meets current and future needs of all users. A maintenance project typically
involves providing the applicable activities from the following:
Production support
Bug fix
Minor enhancements
Major enhancements
Testing
Documentation
Re-engineering, etc.
The picture shows the process for one typical maintenance project.
The schematic description of the process model definition for an AVM project is
shown here. The steps include:
Selecting an adequate project life cycle, based on the type of AVM project (such as
maintenance, mass change, or testing) and the project scope, to arrive at an
operational model, and
Merging with the customer's process to form the project's software process.
The AVM life-cycle also follows the parallel processes of maintenance and delivery
management. Take a look at the process models for maintenance, mass change,
testing.
The process model is selected according to the life-cycle model, incorporating the
customer's required processes and is tailored to suit the specific requirements of
the project
Imagine organizing a college fest. Behind the face of a day or two of enjoyment and
celebrations, what are the challenges involved for the one who is in charge of
organizing it? One runs up against budget issues, time and schedule issues,
resource problems, risks, constraints, and managing expectations. Arranging and
executing a college fest is a project. And all the challenges faced are a part of any
project in the industry.
Learn the structure of the project team and the role of team members in the project
In the previous example, organizing the college fest was a temporary endeavor and
it was unique; that is to say, every college fest is different. Similarly, any project
is a temporary endeavor undertaken to create a unique product or service and it
progresses through a number of life cycle phases. Temporary implies that every
project has a definite beginning and a definite end. Unique implies that the product
produced by the project is different in some way from all similar entities.
Whatever be the type, a project always has certain characteristics and challenges.
Just like the college fest that you saw previously, any project in the industry
involves challenges with respect to cost, risks, scope, quality, resources,
expectations, time, etc.
What are stakeholders and who are the stakeholders in a project? A stakeholder is
anyone who has an interest in the project. They may be at the client's end, internal
to the project, or external. The external are Sub-contractors, Suppliers, External
consultants, and Service providers. The internal stakeholders are Senior
management, Project manager, Human resources, Business development, Finance,
Administration, Training, Internal systems, Quality assurance, and SEPG. The client
stakeholders are Sponsor, Contact person, End-user, and Outsourced party.
The most important project management tasks during the execution of the project
involve:
Planning for the project's cost, effort, and schedule at the beginning
To track the progress of the project, one needs to monitor whether the effort, cost,
and schedule are following the plan or are deviating. In case of deviations, one
needs to take corrective action, re-modify the plan, etc. To track the effort and the
cost of the project, it is of extreme importance to log the time spent by each
individual in the project against the activities performed by them. For this purpose,
each associate is expected to log his or her activities in the PeopleSoft tool. A
PeopleSoft demo follows this.
A stakeholder is anyone who has an interest in the project. They may be at the
client's end, internal to the project, or external
(b) The internal stakeholders are senior management, project manager, human
resources, business development, finance, administration, training, internal
systems, quality assurance, and SEPG
(c) The client stakeholders are sponsor, contact person, end user, and
outsourced party
The project organization structure in Cognizant is a typical offshore-onsite delivery
model, where every individual has a role to play
(a) The officials at the top end of the hierarchy provide guidance and ensure
communication with the end customer and proper execution of the project. The
team members are essential for performing the different activities that go on to
satisfy the requirements of the project
(b) The support groups including NSS, Admin, HR, and quality assurance provide
all the necessary support for the project to meet the various infrastructural,
financial, resource, and quality requirements
The details of the time spent by individual team members need to be logged in
PeopleSoft.
Obtaining a degree is more than studying and taking the test. It additionally
involves arranging for hostel accommodation, arranging for commuting services,
setting up a time table, selecting a study environment, identifying resources and
study material, having periodic meetings with the teachers, receiving the grades,
and finally celebrating the graduation.
Similarly, a software project does not begin and end with the software development
life cycle consisting of requirements, analysis and design, coding, testing, and
delivery stages. Additionally, it involves activities throughout the project life cycle
such as business understanding, configuration set up, infrastructure set up,
execution, progress analysis, change management, and finally closure. All these
activities fall under the delivery management framework. Delivery management is a
new framework to address all project management practices across the entire life
cycle of a project. This framework describes the various stages in project
management, such as the entry and exit criteria, tasks, input and output, tools and
techniques to perform the tasks, responsibilities, verification and validation criteria.
The execution phase involves delivery of product, tracking activities against the
plan, performing defect prevention, communicating with the client, sending status
reports, managing change, managing risks, having peer reviews, etc.
The closure phase consists of obtaining the project sign off. Finally, project
retrospection is carried out covering best practices, tools or re-usable components
developed, lessons learned, project performance against the quality goals, and
associated learning.
The delivery management phases run parallel to the SDLC phases in a project. The
two diagrams on the page show how the different delivery management phases are
aligned to the SDLC phases of a standard development project and a maintenance
project.
Delivery management involves all the project activities other than standard SDLC
phases that are required throughout the project life cycle for project execution
b) Formalization, which includes Formal contract, Raising the Work Order in Prolite
d) Execution, which involves the tracking activities against the plan, performing
defect prevention, communicating with the client, Sending status reports, Managing
change and risks, Having peer reviews, etc.,
Metrics:
Measurements act as indicators of progress and are understood by everyone
irrespective of experience level and technology background.
One cannot chase a target without knowing it. Nor can one know about success and
achievements without defining success in measurable terms. On the other hand, one
cannot accurately gauge the degree of danger one is in without appropriate figures.
The performance of the project has to be measured so that it can be kept under
control.
A project works toward a goal; the measurements along the way compare the actual
results of the project against the projections to determine whether the project is
on track or deviating from the goal.
Some examples of base measures that are considered for measuring the software
process and product are schedule, effort, size, and defect. These measures are
combined in different ways to form the different software metrics. The following
page will give you some examples on process metrics for application development
projects.
The metric schedule variance is derived from the measure schedule, by comparing the
actual with the planned. It is a measure of whether or not the project is meeting the
planned dates for the start and finish of modules. Similarly, the metric effort
variance is derived from the measure effort: it is a measure of whether or not the
planned effort matches the actual effort. We can also compute the load factor,
which is a measure of whether the people in the project are adequately, lightly, or
heavily loaded. All these are examples of process metrics. They indicate whether
the process of producing software products is adequate or not. Some of the other
process metrics are review efficiency and requirement stability. On the next page,
you will see some examples of product metrics in application development projects.
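As a minimal sketch of how such process metrics might be computed (the exact formulas in an organization's metrics guide may differ), schedule variance and effort variance can both be expressed as the percentage deviation of actual from planned:

```python
def variance_pct(planned, actual):
    """Percentage deviation of actual from planned (positive = overrun)."""
    return (actual - planned) / planned * 100

# Schedule variance: planned 20 working days, actually took 23
print(variance_pct(20, 23))    # 15.0 (behind schedule)

# Effort variance: planned 160 person-hours, actually spent 152
print(variance_pct(160, 152))  # -5.0 (under the planned effort)
```

A load factor could be sketched the same way, for example as actual hours worked divided by the standard available hours per person.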
The metric defect density is obtained by combining defects and size. This gives an
indication of how robust the product is. This is an example of a product metric.
Product metrics indicate whether the quality of the produced software product is
adequate or not.
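A minimal sketch of the calculation, assuming size is measured in thousands of lines of code (KLOC); the helper name and figures are illustrative:

```python
def defect_density(defects, size_kloc):
    """Defects found per thousand lines of code (KLOC)."""
    return defects / size_kloc

# 18 defects found in a component of 12 KLOC
print(defect_density(18, 12))  # 1.5 defects per KLOC
```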
Similarly, in AVM projects, there are different measurements that evolve into
metrics.
For Example:
You have seen measurements and metrics. However, metrics and measurements
are meaningful only if we have a goal against which we compare the metrics to find
out whether or not the project is on the right path. How are goals set? The business
objectives of the organization are laid down. One example of a business objective
may be to meet delivery commitments on time. These business objectives are then
translated to one or many process objectives. Following the previous example, the
process objective aligned to meeting delivery commitments on time may be
translated to having less than 5.86 percent variation from the schedule during the
process of coding and unit testing. These process objectives are mapped to sub
processes, which make up the process. In our case, the process of coding and unit
testing is made up of coding, code review, and unit testing. After the sub processes
are identified, metrics are attached to the sub processes, which can be used to
measure the performance in the sub processes. In this example, coding is mapped
to the coding schedule variance, code review to the code review schedule variance,
and unit testing to the unit testing schedule variance. Finally, a goal is set for each
of these metrics to ensure that the process objective is met. Here, for example, the
goal for the coding schedule variance is 1.65 percent, the code review schedule
variance is 0.33 percent, and the unit testing schedule variance is 1.32 percent.
Business objectives are at the organization level and will span across projects and
support groups.
Along with the business goals, a set of metrics derived from the data collected from
past projects as well as data from industry figures are set as organizational
baselines. These baselines are used as a basis to arrive at project-specific quality
goals. They give an indication of the capability of the organization. The
organizational baseline values for the metrics are documented in the OLBM or
organizational-level benchmarks, which contain the mandatory and optional metrics
for each type of project, the goals associated with them, and the upper and lower
control limits for each metric. Each project sets its goals by either following the
goals laid down in the OLBM or setting their own goals as appropriate for the
specific project characteristics.
Along with goal setting, the projects have to formalize the data-collection tools to
collect the actual measures for the project for the metrics to be computed and
compared to the goals. The tools that are generally used for data collection are
Prolite, e-Tracker, Time sheet, Defect logs, and Project plan.
How are the metrics used? Periodically, metrics are collected and analyzed. As the
metrics are analyzed, one can get a quantitative idea of whether the project is
proceeding along the right track or not. If a metric deviates significantly from the
goal so as to come close to or cross the control limits, it indicates something is
wrong in the process. This necessitates corrective action. For example, too much
schedule or effort variance may mean that our initial estimates were wrong and
may force us to revise our estimates for the project.
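The goal-versus-control-limit check described above might be sketched as follows. The function, the "approaching a limit" heuristic, and all figures are hypothetical; in practice, the goals and limits come from the OLBM:

```python
def check_metric(name, actual, goal, lcl, ucl):
    """Classify a metric reading against its goal and control limits.

    lcl/ucl are the lower and upper control limits; readings outside them
    call for corrective action, and readings drifting toward a limit call
    for investigation (a deliberately simplified heuristic).
    """
    if actual < lcl or actual > ucl:
        return f"{name}: outside control limits - corrective action required"
    if actual > goal + 0.8 * (ucl - goal) or actual < goal - 0.8 * (goal - lcl):
        return f"{name}: approaching a control limit - investigate"
    return f"{name}: on track"

# Illustrative: coding schedule variance with a 1.65 percent goal
print(check_metric("coding schedule variance", 7.10, goal=1.65, lcl=-5.0, ucl=6.0))
print(check_metric("coding schedule variance", 1.90, goal=1.65, lcl=-5.0, ucl=6.0))
```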
Apart from the projects, the metrics collected across the organization are analyzed
by a group called the metrics council and the organization's baselines are modified
periodically based on the analysis. This revision of baselines indicates a change in
the performance capability of the organization.
The software processes and products are measured and controlled with metrics
The organizational baseline contains the goals and control limits for each
compulsory and optional metric for all types of projects. The OLBM depicts the
organization's capability
Projects set their own project goals based on organization-level goals or they can
set appropriate goals based on the project characteristics
Metrics are compared against the baseline figures to check the progress of the
project. If the figures cross the control limits or are deviating too much from the
goal, it becomes necessary to take corrective action
Metrics are generated from the project data entered in Prolite and e-Tracker
BQSEA1:
You will now move on to the development of software product. In most engineering
disciplines, specifications are the first step in the development of a product.
Consider the case of house construction. One starts with specifications, goes on to
design, and finally building and finishing the product. Similarly in software
development, one starts with product requirements, followed by architectural
details, and then proceeds to building, that is, developing the code. It is then
followed by reviewing and installing the product.
Coding and development is one of the major activities in Software Engineering. But
software engineering goes much beyond coding. It consists of various activities to
encompass all aspects of software production, such as requirements, specifications,
design, coding, testing, integration, documentation, deployment, and maintenance.
Coding would occupy as little as 5 percent of the total work involved in a Software
Engineering Project. Although artistic and scientific in its scope, it has to adhere to
several time-tested processes pertaining to the different aspects.
Now that you know the processes involved in software development, take a look at
the number of people involved. They are spread across the managerial, technical,
and end user cadre. And like any other industry, software is linked to peripheral
issues, such as business, contractual, legal, and environmental ones. Hence,
remember: Software is "Not Just Some Pretty Code".
Software development can be compared to art. Imagine building the Sistine Chapel
alone and without a blueprint. The best works of art require discipline, teamwork
and planning.
Software engineering is the art, craft, and science of building large, important
software systems. It is an amalgam of artistry, craftsmanship, and scientific thought
While coding is a major aspect, software engineering goes much beyond it
Software engineering is akin to art, which cannot succeed without a blueprint and
teamwork
Once Code has been generated, program testing begins during the Testing stage.
Testing process focuses on the logical internals of the software, ensuring all
statements have been checked for correctness. It also focuses on the functional
externals, that is, conducting tests to uncover errors and ensure that defined input
will produce actual results that agree with the required results. At the implementation
stage, after all tests have shown that the completed software works as intended, it
is deployed in its production environment. Implementation is a planned activity and
steps pertaining to it are documented as part of the Roll-Out Plan. A series of checks
and reviews are conducted in this stage to ensure that all components of the
completed software have been installed correctly. Software undergoes change after
it has been deployed and delivered to the customer. Change will occur because
errors may be encountered, or because the software needs to be adapted to meet
changes in its external or operating environment, or because the customer requires
functional or performance enhancements. These are issues that are resolved during
the Maintenance or Post Implementation stage.
You will now learn about the basic building block of any stage. Basic building blocks
of a Stage are Tasks. Activities explain how a task needs to be performed. See what
each of them signifies. In the next few pages, you will learn in detail about elements
of any software development stage.
You will now learn about the various elements of a stage. The Entry Criteria
provide inputs, which can be documents or tasks. This is followed by the Task, a
list of activities that are implemented to complete the process. The Verification
consists of reviews and approvals that confirm the adequacy of the activities done
during the Task period. The stage ends with the Exit Criteria, which consist of work
products or documents that may serve as the Entry Criteria for the next Stage.
Here is an example that describes the Elements of the Requirements Stage. The
Entry Criteria for Requirements Stage is the Business Need. For example, the client
requires a system that will automate the process of banking according to his needs.
Tasks of the Stage would include activities like requirement capture, requirement
analysis, and requirements documentation. Work Products created in this stage
could be completed requirements gathering checklists and Software Requirements
Specifications. As part of Verification, the completed SRS document will be reviewed
by the Project Manager and approved by the Client Representative. The signed-off SRS
will be the Exit Criteria for this stage, which becomes the Entry Criteria for the
next Stage, Analysis and Design.
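The entry-task-verification-exit structure lends itself to a simple data model. The sketch below, with hypothetical field names, captures the Requirements stage example above; note how one stage's exit criteria become the next stage's entry criteria:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One SDLC stage: inputs in, tasks performed, work verified, outputs out."""
    name: str
    entry_criteria: list
    tasks: list
    verification: list
    exit_criteria: list

requirements = Stage(
    name="Requirements",
    entry_criteria=["Business need"],
    tasks=["Requirement capture", "Requirement analysis",
           "Requirements documentation"],
    verification=["SRS reviewed by Project Manager",
                  "SRS approved by Client Representative"],
    exit_criteria=["Signed-off SRS"],
)

# The exit criteria of one stage feed the entry criteria of the next
design = Stage(name="Analysis and Design",
               entry_criteria=requirements.exit_criteria,
               tasks=[], verification=[], exit_criteria=[])
print(design.entry_criteria)  # ['Signed-off SRS']
```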
The various stages in the SDLC are Requirement, Analysis, Design, Coding, Testing,
Implementation, and Maintenance or Post implementation
The elements of a stage include Entry criteria, Task, Verification, and Exit criteria
Software Engineering is not just Code Construction. Each Software Application that
is created follows a well defined set of activities, and has a well defined Life Cycle
from initiation to the retirement of the Software Application. Similar to car
manufacturing, a software application development project has well defined stages
that are implemented in a predefined fashion to create the software application.
Once code has been generated, program testing begins during the Testing stage.
The testing process focuses on the logical internals of the software, ensuring that all
statements have been checked for correctness. It also focuses on the functional
externals, that is, conducting tests to uncover errors and to ensure that defined inputs
produce actual results that agree with the required results. At the Implementation
stage, after all tests have shown that the completed software works as intended, it
is deployed in its production environment. Implementation is a planned activity, and
the steps pertaining to it are documented as part of the Roll-Out Plan. A series of checks
and reviews are conducted in this stage to ensure that all components of the
completed software have been installed correctly. Software undergoes change after
it has been deployed and delivered to the customer. Change will occur because
errors are encountered, because the software needs to be adapted to
changes in its external or operating environment, or because the customer requires
functional or performance enhancements. These are issues that are resolved during
the Maintenance or Post-Implementation stage.
In the analysis and design stage of software development, the focus gradually shifts
from "What to build" to "How to build". Over the next few pages, you will learn
about the analysis and design stage in detail.
You must be aware that the Requirement Specifications Document acts as the exit
criteria of the Requirement Stage. This same document is the entry criteria for the
Analysis Stage. The Functional Specifications Document is the exit criteria for the
Analysis Stage and, in turn, the entry criteria for the Design Stage. In the Design
Stage, the Detailed Design Document is the most important document that gets
created; it is used as the basis of Code Construction in the Code Construction
Stage.
Analysis and Design are among the foremost stages in the software development cycle.
Analysis is the software engineering task that bridges the gap between the software
requirements stage and software design. The objective of software analysis is to
state precisely what the system will do to provide a solution to the client's need at a
functional level. This is captured in the Functional Specification Document.
Design creates a detailed Design Document that acts as the "blue-print" for the
developers or the team that will construct the code to create the system.
The typical elements of software design include Program Architectural Design, Data
Design, Interface Design, and Component Design.
This is the overall Architecture Design for the SmartBook System. It defines the
relationship between the structural elements of the Software Application being built.
The architecture for the system needs to be built as part of the Software Analysis and
Design Stage. The Data Design specifies the data structures needed to implement
the solution. It includes the Database or File System space requirements. It also
includes Table or Layout details, such as the Table or Record name; Column or Field
names and descriptions; the type and length of each Column or Field; default values;
edit or validation conditions associated with a Column or Field; and details of all Keys or
Indexes of a Table or Record. The interface designs describe how
the software communicates within itself, with the systems that interoperate with it,
and with the humans who use it. The Interface Design for the system needs to be built as
part of the Software Analysis and Design Stage. The Component-level design transforms
the structural elements of the software into a procedural description of the Software
Component. It includes Program Specifications, that is, the Functions or Algorithms
that define the procedural design.
Here is a case study to understand the concept of software analysis and design
better. Mercury Travels is a premier Travel Agency of the country. Mercury wants to
automate its business processes. Requirement Analysis reveals that the specific
requirement of Mercury is to create an Air or Rail Ticket Booking System for the
Travel Agents. Other business processes will not be included in the current
automation initiative.
Here, you can see an Analysis Model that is used to express the problem. Such a
diagram is called a Data Flow Diagram, in which each bubble indicates an activity
taking place. The box, on the other hand, is used to denote an external source or
sink of information. The parallel bars denote a data store or file, while an arc is used to
denote the flow of data among the other three components. Note that SRS 01.1
has been expressed using processes 2a and 2b.
Bubble 1, which denotes the activity "Determine form of travel", has been
factored further into a lower-level DFD.
The Analysis Models form the basis of the Program Specifications, which are an
essential component of the Design Document.
In addition to Program Specifications the Design Document includes other details
like Data Design and Program Architecture Design.
Data design involves the overall data model design for the application. Program
hierarchy and program-level interfaces are addressed in the program architecture
design.
Look closely at the examples here. There are two ways you can visualize the
building or construction of a house. The builder may appoint a bricklayer to create the
walls and a carpenter to create the windows, fit the windows into the walls, and so on,
slowly creating the house.
Alternatively, the builder may fit standard models of doors, windows, roofs,
walls, and rooms available in the market to create the house. This is how most
buildings get built now.
There are two approaches to creating the Design Specifications in a project. One is
the Structured Analysis and Design technique, which can be traced to the 1970s. The
other is a newer concept called the Object-Oriented approach, which was developed
as a concept from the 1990s.
The object-oriented technique, on the other hand, focuses on the system behavior. In
recent years, the OOAD technique has become very popular with software
engineers. Objects represent a sample of expected system behavior, and they are
called upon to function as a whole. Re-usable and common objects help in achieving
greater modularity and are manageable from the project management perspective.
In analysis and design, the focus is on "HOW to build" a solution and not on "WHAT
to build".
Analysis is the software engineering stage that bridges the gap between the
software requirements stage and software design stage
In the software design stage, detailed design document is the most important
document that is created
The elements of software design are Program architectural design, Data design,
Interface design, Component design
There are two ways in which software engineers visualize "HOW to build" a solution
during analysis and design stage, namely, Structured Analysis and Design (SSAD)
technique and object-oriented technique
CODE CONSTRUCTION:
Your builder has taken your house requirements and has given you the building
plan and the prototype of your house. So in your mind, you have the picture of your
dream house ready. What do you think is the activity that the builder has to engage
in now? Yes. You guessed it right. The builder will need to construct the house now.
You will now learn how code is constructed. Design provides the basis for code
construction. A unit test helps a developer consider what needs to be done, as
requirements are nailed down firmly by tests. There is a rhythm to developing
software unit tests. First, you create one test to define some small aspect of the
problem at hand. Then, you create the simplest code that will make that test pass.
Then, you create a second test. Now, you add to the code you just created to make
this new test pass, but you do not write any more code until you have created a third
test. You continue until there is nothing left to test.
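The rhythm described above can be sketched in a few lines of Python using the standard unittest module. The add() function and its two tests are illustrative assumptions, not part of the course material; the point is that each test was written first and only the simplest code needed to pass it was then added:

```python
import unittest

def add(a, b):
    # Simplest code that makes the tests below pass.
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        # First test: written before add() existed, defining one small aspect.
        self.assertEqual(add(2, 3), 5)

    def test_handles_negative_numbers(self):
        # Second test: drives the next small increment of code.
        self.assertEqual(add(-1, 1), 0)

# Run the tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```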
Now, you know that any unstructured piece of work is difficult to understand and
that code is no exception.
You should remember that even though completed code is an important deliverable
that is given to the customer, software engineering is not just coding. Coding is a
stage within the software development life cycle. The subsequent page explains the
coding process in detail.
Design documents and the unit test cases document are
important inputs for the coding stage. Code construction using language-specific
coding standards, peer review of code, and updating code based on review
comments are the major tasks in this phase. Peer-reviewed code is the output of the
coding stage.
This page delves into details of the tasks and activities of the coding stage. It must
be noted here that peer review of code is a very important task in the code
construction stage.
A coding standards document tells developers how they must write their code.
Instead of each developer coding in their own preferred style, they will write all
code to the standards outlined in the document. This makes sure that a large
project is coded in a consistent style, that is, parts are not written differently by
different programmers. Not only does this solution make the code easier to
understand, it also ensures that any developer who looks at the code will know what
to expect throughout the entire application.
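To make this concrete, here is a small sketch of the kind of rules a coding standards document might mandate, written in Python. The specific rules shown (snake_case names, UPPER_CASE constants, one statement per line) are illustrative assumptions modelled on common conventions, not a quotation of any particular company standard:

```python
# Illustrative standards applied below:
#   - constants in UPPER_CASE, functions and variables in snake_case
#   - descriptive names instead of single-letter identifiers
#   - a docstring on every function

MAX_RETRY_COUNT = 3  # constant: UPPER_CASE per the assumed standard

def calculate_order_total(unit_price, quantity):
    """Return the total price for an order line."""
    order_total = unit_price * quantity
    return order_total

print(calculate_order_total(9.5, 4))  # 38.0
```

Because every developer follows the same conventions, any part of the application reads the same way regardless of who wrote it.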
This section outlines concepts that are generic in nature and applicable to most
software tools and platforms. Platform-specific conventions and guidelines are
covered under the relevant company standard. The relevant language-specific
standard must be referred to when constructing code in a specific language.
This page deals with the good practices on code layout and programming. Code
layout deals with the structure of the code and the way it is laid out. It affects
readability and ease of modification of code.
Here are the guidelines to be followed for maintaining presentation aspects of code
when fixing a bug.
This page outlines the concepts pertaining to sentence construction that are generic
in nature and applicable to most software tools and platforms. Platform-specific
conventions and guidelines are covered under the relevant company standard.
Code-level readability is not about using comments only. The main contributor to
code-level readability is not comments, but good programming style. Check the first
example of code provided. This reflects bad programming style. Check the second
example. Even though this code does not use any comment, it is much more
readable. It takes us toward the goal of "self-documenting code".
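The contrast can be sketched with one function written twice. The temperature-conversion problem is an illustrative assumption; the point is the style, not the formula:

```python
# Poor style: cryptic names force the reader to decode the intent.
def f(x):
    return x * 9 / 5 + 32

# Self-documenting style: the names state the intent without any comment.
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(100))  # 212.0
```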
When commenting source code, always use comments judiciously, so that the code
remains readable and clear.
The use of headers in a program does not add to its functionality, but it is of
immense help during maintenance of the program.
On this page, you will learn about the declaration standards in a piece of code:
A program consists of two basic entities, data and instructions. Data elements or
structures should be declared and initialized before (executable) instructions.
All header files and libraries used in the program (whether standard or user defined)
should be declared.
All global variables need to be declared and the number of global declarations used
should be minimized so as to reduce coupling between modules.
Functions and their parameters should be declared taking care to ensure that no
type mismatches occur during runtime between the calling and called module or
function or procedure.
When using arrays, remember it is cumbersome to handle arrays having more than
three dimensions. Such arrays should be avoided.
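Python does not have explicit declarations in the C sense, but the same principles can be sketched: initialize data before any instruction uses it, keep globals to a minimum, and annotate function parameters so that type mismatches between the calling and called code can be caught by tools before runtime. The sales-recording example below is an illustrative assumption:

```python
from typing import List

# Data element declared and initialized before any executable instruction uses it.
monthly_totals: List[float] = [0.0] * 12  # one-dimensional, easy to reason about

# Type hints on the function and its parameters document the expected types,
# so a checker can flag a mismatch between caller and callee before runtime.
def record_sale(totals: List[float], month_index: int, amount: float) -> None:
    totals[month_index] += amount

record_sale(monthly_totals, 0, 250.0)
print(monthly_totals[0])  # 250.0
```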
You will now learn what defensive programming is. As a programmer, you should be
able to envisage areas in your programs that can initiate errors in the behavior of
the software application. Hence, appropriate methods should be used to prevent the
occurrence of errors.
Continuing with defensive programming ensures that your program is secure and
prevents unauthorized access.
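A minimal sketch of defensive programming: validate inputs at the boundary of a function instead of letting bad data propagate into the rest of the program. The withdraw() function and its rules are illustrative assumptions:

```python
# Defensive sketch: every input is checked before the real work is done,
# so errors are reported at their source rather than surfacing later.
def withdraw(balance, amount):
    if not isinstance(amount, (int, float)):
        raise TypeError("amount must be a number")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

print(withdraw(100, 40))  # 60
```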
Expectations change and hence requirements change, and so it is but natural that
programs have to be modified in order to suit the new specifications. This means
that the program should be flexible enough to be modified with little or no effort.
This page identifies some practices that help in creating modifiable or flexible
programs.
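One such practice can be sketched as follows: isolate values that are likely to change into named constants (or a configuration file), so that a new requirement means a one-line edit rather than a hunt through the code. The tax rate and shipping threshold below are illustrative assumptions:

```python
# Values likely to change live in one place, not scattered as "magic numbers".
TAX_RATE = 0.25
FREE_SHIPPING_THRESHOLD = 500.0
SHIPPING_CHARGE = 50.0

def invoice_total(subtotal):
    shipping = 0.0 if subtotal >= FREE_SHIPPING_THRESHOLD else SHIPPING_CHARGE
    return subtotal * (1 + TAX_RATE) + shipping

print(invoice_total(600.0))  # 750.0
```

If the tax rate changes, only the constant is edited; no function body needs to be touched.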
Now, you will learn the importance of on-screen error and help messages. In this
example, the customer inserts his debit card into the ATM, but the machine
does not accept the card and ejects it immediately, without showing any error
message. This frustrates the customer. The next page provides some good
practices that should be followed for text-based error and help messages.
The design of on-screen error and help messages has a strong bearing on user-
friendliness. This page puts forward some of the guidelines that could be followed
by the developer.
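One such guideline can be sketched: an error message should say, in plain language, what went wrong and what the user can do next. The ATM reason codes and message wording below are illustrative assumptions:

```python
# Each message states the problem and a concrete next step for the user.
def card_error_message(reason_code):
    messages = {
        "EXPIRED": ("Your card has expired. "
                    "Please contact your bank for a replacement."),
        "UNREADABLE": ("The card could not be read. "
                       "Please reinsert it with the chip facing up."),
    }
    # Unknown reasons fall back to a generic but still actionable message.
    return messages.get(reason_code,
                        "The card was not accepted. Please try another card "
                        "or contact your bank.")

print(card_error_message("EXPIRED"))
```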
In this session, you have learned that:
The inputs for the code construction include design document and the unit test
cases document
The process of code construction involves using design document and coding
standards to create code, aligning code to the unit test cases, and peer reviewing
code before delivery
TESTING:
Testing is an activity that is used to discover errors and correct them, so that we are
able to create a defect-free product for our customer. Let us take the example of a
house. The client had specified the requirements of the house she wanted. The tester
tested the house after delivery to find out if all the requirements of the client had been
met. The tester created the test execution details document, which detailed the
scenarios or test cases. The tester also recorded the results of the test execution,
which are referred to as the test log.
Testing is an important stage that follows the Coding stage in the software
development life cycle. The objective of testing is to evaluate whether we have created
the system correctly. During the earlier stages, the focus was on checking what was
being built; in testing, when we have the end product ready, the focus shifts to
validating whether the product has been built correctly. Hence, the
focus shifts from building the product right to building the right product.
Testing is the process of executing a program with the specific intent of finding an
error. Success of a test is determined by the number of errors it has uncovered.
Tests can be conducted by the developer or by an independent testing team. What
one should remember is that the role of a good tester is to show the presence of
defects or errors in the software.
This page explains the major activities and tasks of the testing stage. Creation of
the test strategy is the first step. It is based on the Requirements Document, the
Functional Specifications Document, and the Design Document. The test strategy
describes the overall plan and approach to be taken for testing, the deliverables,
and the process for test reporting. The next step is to create the test cases,
containing the individual scenarios, which would be tested with their expected
outcomes. Test cases are executed by the tester, and the results of the tests are
documented in the test log. Defects found during testing are recorded in defect-tracking
tools such as the internal tool Prolite or the external tool Test Director,
depending on the requirements of the project. The owner of the application being
tested then updates the application, closes the defects, and updates the
defect status in the tool. Re-testing may be conducted to verify closure of defects.
On this page, you will learn who is responsible for testing a software
application:
Software testing can also be conducted by the client or the ultimate users of the
system.
The team responsible for the different types of testing needs to be decided upon
during the planning stage.
Here, you will learn about the various stages in testing: Software testing is usually
performed at different levels of abstraction of the application along the software
development process by the builders of the system. There are three testing stages:
Unit testing, integration testing, and system testing. The objective and the
abstraction levels of the application to which these tests are performed are
different. Unit tests are performed on the smallest individual units of the
application, the integration tests on a group of modules and their interfaces while
the system tests are for the entire system and the interfacing external systems. In
this illustration, you can see where the three major types of testing fit into the
software development life cycle.
There is one more stage in testing that is done by the end user of the system. This
is referred to as the user acceptance test. This is to verify the functionality of the
system from the end user's perspective. Here we can see where the user
acceptance testing fits into the software development life cycle.
You will now learn about each of the testing stages in detail. First, you will learn
about unit testing. On this page, we see a small child building a doll house. She
checks each building block, as she places it, to ensure that it is in line with the design
of the doll house. Similarly, unit testing concentrates on each unit or
component of the software as implemented in the code and checks that it is in line
with the program specification and the detailed design. The primary focus of unit
testing is to validate the logic, the structure, and the flows of the concerned
program.
Moving on to integration testing, you see that the builders of the doll house now
begin to put the individually tested blocks of the house together, that is, they
integrate the unit tested units. The primary intent is to uncover errors associated
with interfaces when the unit-tested components are integrated as a module. The
next page talks about system testing.
After the doll house has been completed, it is checked fully by the builders of the house
to ensure that it is complete and ready for habitation. Additionally, the builders check if
the house is secure and can withstand rain, thunder, lightning, and other
environmental factors, so that it can easily be placed in its intended environment.
System testing in software checks the performance and functionality of the
complete system after all unit-tested units have been integrated as per the build
plan. It also evaluates functionality with interfacing systems. Non-functional
requirements like speed and reliability are also verified during system testing.
Finally, looking at the acceptance testing, you see that the doll house has been
placed in a children's park. The acceptance test verifies whether the system created
is in conformance with user requirements when placed in its 'real' environment.
Acceptance tests are often conducted by the client or by the end users.
Now, you will learn about an important concept of testing called regression testing.
Regression testing will be done to ensure that the actions taken to rectify the defect
have not produced any unexpected effects. Regression testing should be done at all
levels of testing, such as unit, integration, system, and acceptance testing. The
following page gives you an example that will help you learn the concept of
regression testing.
As seen in the previous example of a client stating her requirement for her house, it
was observed by the tester that the location of the door was incorrect and a defect
ID was allocated to it. When correcting this defect, the constructors may remove the
door and move it to the rear side. In this process, other sections of the building may
get damaged. So when correcting the defect of incorrect door location, care must
be taken to ensure that unintended defects like cracks in the walls are not
introduced in the building. Regression testing takes care of such unexpected issues
that occur as a result of fixing defects.
This page explains what the focus is for each type of testing:
Unit testing uses code and detailed design as an input to check correctness of
individual units.
Integration testing uses the system design and the functional specification
document as an input.
System testing uses the overall functionality of the system as given in the
functional specifications and software requirements. It also evaluates the non-
functional requirements.
Regression testing, on the other hand, retests the tested sections of the software to
ensure no unintended error has been introduced.
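A regression suite can be sketched as the existing tests re-run after a fix: the suite checks the corrected behaviour and, just as importantly, re-checks the behaviour that was already passing. The discount logic and its earlier bug are illustrative assumptions:

```python
def discounted_price(price, is_member):
    # Fix applied here: members now get 10% off (previously 0% by mistake).
    return round(price * 0.9, 2) if is_member else price

def run_regression_suite():
    # Test for the fixed behaviour...
    assert discounted_price(100, True) == 90.0
    # ...AND re-test the previously passing behaviour, to catch any
    # unintended side effect of the fix.
    assert discounted_price(100, False) == 100
    return "all regression tests passed"

print(run_regression_suite())
```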
Here is another very important concept of software testing, that is, the test case.
Test cases are scenarios that are executed by the testers on the completed
application to determine if the application meets a specific requirement. One or
more test cases may be required to determine if a requirement is satisfied.
A good test case is one that uncovers errors in a complete manner with minimum
time and effort. Considering the earlier example of the completed house, the check
'verify that the color of the chimney is red' is a test case. For the same example,
the test case 'verify that the door does not open with a wrong key' is a negative
test case. Hence, we learn that a test case is a statement specifying an input, an
action, or an event, and the specific response expected after execution.
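The definition above can be sketched as data: each test case pairs an input or action with an expected response, and execution compares actual against expected. The door-lock check and the tuple layout are illustrative assumptions:

```python
# The behaviour under test (illustrative).
def door_opens(inserted_key, correct_key):
    return inserted_key == correct_key

# Each test case: (description, input, expected response).
test_cases = [
    ("positive: correct key opens the door", ("K1", "K1"), True),
    ("negative: wrong key must not open the door", ("K2", "K1"), False),
]

# Execute every case and record whether actual matched expected.
results = []
for description, (inserted, correct), expected in test_cases:
    actual = door_opens(inserted, correct)
    results.append((description, actual == expected))

print(results)
```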
There are two approaches to test case design: the white box approach, which is based
on the internal workings of the system, and the black box approach, which deals with
the external functionality of the application or the part being tested, irrespective of the
internal details. The following pages give more details on these two approaches.
The white box approach is used to create test cases to check if 'all gears mesh',
that is, to check if the internal operations are performed according to specifications.
This tests all the logical paths within the unit being tested and verifies if these are
functioning as required in the design.
The black box test case design approach does not consider the internal workings of the
application. It focuses on the functional requirements alone and is designed to
verify that, when the inputs are given correctly, the output generated is also correct. It
should be noted that black box testing and white box testing are not alternative
approaches to test case design; rather, they complement each other.
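The complementary nature of the two approaches can be sketched on one function. The grading rules below are illustrative assumptions: black box cases come from the stated requirement alone, while white box cases are chosen so that every logical path in the code is executed:

```python
def grade(score):
    if score < 0 or score > 100:   # path 1: invalid input
        return "invalid"
    if score >= 50:                # path 2: pass
        return "pass"
    return "fail"                  # path 3: fail

# Black box cases: derived from the requirement only (inputs vs expected
# outputs, internals ignored).
assert grade(75) == "pass"
assert grade(20) == "fail"

# White box cases: chosen so that every logical path above is exercised,
# including the invalid-input branch and the boundary between branches.
assert grade(-1) == "invalid"
assert grade(101) == "invalid"
assert grade(50) == "pass"   # boundary value
assert grade(49) == "fail"

print("all paths exercised")
```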
Here are the activities performed in the testing stage for the SMARTBOOK System.
The test scenarios or test cases are logged and tracked in a tool with the detailed
information about the test case execution. Individual program units are tested as
part of unit testing and the results are logged in the tool. Subsequently, each
functional module is considered and integration tested for its functioning and logic.
All interface-related tests between program units are covered under integration
testing. The system testing follows where all the functional modules are taken
together and the application is tested as a whole incorporating the interfacing
issues between the functional modules. Finally, User Acceptance Testing is
conducted by the users of the system and the resulting errors are corrected prior to
staging the system into production.
This page explains the salient points you should remember about testing. Test
execution activity starts after code has been constructed with unit testing of
individual modules. This will be followed by integration testing and system testing.
However, it should be noted that test planning activity occurs much earlier in the
software development life cycle. In fact, user acceptance test plans and cases are
prepared along with the requirements document. To improve the quality of the code
being delivered, it is a good practice to 'Test before you code'. The model shown is
also called the V-Model where each stage is associated with the corresponding
review and a specific test case is prepared for testing at a later stage.
The objective of a good test is to show the presence of errors and not the absence
of them
A good tester should attempt to break the system to uncover undiscovered errors
The stages important in testing are unit testing, which targets a single module,
integration testing targeting a group of modules, system testing that targets the
whole system, and acceptance testing targeting the overall system and conducted
by users
There are two important approaches that are used to design test cases, namely, the
black box approach, which focuses on the functional requirements of the software,
and the white box approach, which focuses on the internal workings of the software
First, you will learn about the implementation stage in software development.
In this example, you see a statue of a jockey being built. This statue is created in
the sculpture workshop. It is being built as per the requirements provided by
Company 123.
Company 123 wants the statue to be installed in a park that they own. The
installation team from the sculpture workshop transfers the statue to its intended
environment. They prepare the site for installation of the statue.
After the site is ready, workers install the statue in the park for public viewing. We
can see here that there is a communication activity associated with the unveiling of
the new statue.
After the system is tested completely, it is delivered to the onsite team. The onsite
team implements the tested application in the client environment. Software
implementation or the deployment stage starts after user acceptance testing is
completed. It involves all the activities needed to make the software operational for
its users.
Here, the focus is to verify that the software or the product that has been delivered
is meeting the need, that is, whether the product has been rightly built.
The main activities in the implementation stage are planning and defining the
rollout process, deploying the new application, training users on the new system
after the rollout has been implemented, and communicating the details of the
deployment to the relevant people.
Now you will learn about the post implementation stage in software development.
After the statue has been installed, as you saw in the earlier illustration,
complications can arise.
A part of the statue may get damaged and may need to be mended. In that case a
complaint is lodged with the sculpture company.
Stakeholders of Company123, who own the statue, may want a new feature to be
added, or one of the stakeholders may want to change an existing feature of the
statue they had purchased.
A post-implementation activity may be regular warranty support. This includes
providing the support necessary to sustain, modify, and improve the operational
software of a deployed system to meet user requirements.
The requests given by the users are first classified as bugs or production support
tasks and subsequently logged in a tool like e-tracker for tracking, followed by
analysis for resolution of the request. The resolution is then implemented and
delivered to the client for implementation. Production support issues are similarly
analyzed and fixed in the application prior to their closure.
This stage generally refers to the two to six month warranty contract that may be
signed with the client
The main activities in the implementation stage are, to roll out or deploy the new
application, and train users on the new system
Post-implementation activities are implemented as per the post-implementation
process document
Major repairs, which also require in-depth analysis and designing of the solution
prior to its execution, such as relaying the air-conditioning for the entire house.
You have seen what an application maintenance process involves. Now, you will
learn about the model followed by the application maintenance process. The
application maintenance model consists of planning, knowledge transition, and
service steady state. The planning phase primarily involves understanding the needs
of the customer in terms of what is expected from the maintenance team. This
involves a detailed discussion with the client to identify the requirements and
finalize the contract. The activities in this stage are: business planning at the
organizational level, which includes proposal development, estimating resources
and cost, and defining the escalation mechanism; maintenance planning at the
transition level, which includes scope definition and execution process adaptation;
and knowledge transfer planning, which involves defining the entire
methodology to be adopted during the knowledge transfer phase and a detailed
schedule of the K T phase.
For maintaining the existing application developed by another vendor, the
maintenance team needs to understand the functionalities and the technical details
of the system. Hence, a knowledge transition phase is required prior to the
commencement of the maintenance activities. The knowledge transition phase
primarily consists of: obtaining knowledge about the application considered for
maintenance from the client; guided support under the supervision of the client;
and finally, a plan for transitioning the obtained knowledge to the team for future
support. Initially, the application identified for maintenance has to be thoroughly
studied by the K T team.
This includes a detailed understanding about the business processes that the
application caters to and the functions served by the application. This also includes
understanding of the technical details about the application, the environment in
which the application is operating along with the details of interaction with the
interfacing systems. Finally, the application inventory is collected by the K T team
for providing support in future. After obtaining an understanding of the entire
maintenance scenario, the K T team performs the support activities under the
supervision of the client's team. This helps in getting familiar with the support
activities and also in defining a detailed plan for transitioning the knowledge
obtained and subsequently transferring the knowledge obtained about the system
to the entire maintenance team, primarily at the offshore centre. The infrastructure
required to perform the support is also built during this stage, along with a knowledge
repository containing the details of the maintenance project, to capture all the
information, learnings, and mistakes committed during the execution of
the project. This helps in the easy transitioning of resources down the project timeline.
The steady state support involves resolving the service requests sent by the client
and optimizing the processes continuously over time. This involves measuring and
analyzing the metrics to identify the weaknesses in the process as well as the
application being maintained and defining the corrective measures to eradicate the
weaknesses. Finally, it involves offering the client value additions identified and obtained over
the maintenance period. This includes proactive root-cause analysis of the recurring
problems and the necessary measures for improvement. SLA-based measurement
also helps in strictly tracking performance at defined intervals at every level.
The steady state requests can be classified, based on the type of request or the level
of support and the size of the request, into:
Production support
Bugfix, and
Enhancements
Similarly, the bugfix and enhancement requests are further classified into Minor, Major,
and Super major, based on their size.
Software maintenance sustains the software product throughout its operational life
cycle. Modification requests are logged and tracked, the impact of proposed
changes is determined, code and other software artifacts are modified, testing is
conducted, and a new version of the software product is released. It also includes
training and daily support through helpdesks provided to users and through
documentation regarding the application. The enhancement bugfix request,
popularly called the EBR, primarily consists of the enhancement or bug description,
technical details, the proposed resolution for incorporating the request, and the
results of testing done after the change.
The set of activities performed for software maintenance in the steady state can be
classified into a sequence of phases. Based on the size, type, and complexity of the
request, one or more of these phases are integrated into or eliminated from the
execution cycle.
The workflow shown here illustrates the functioning of the onsite and offshore
teams in a typical maintenance scenario, describing the activities performed for the
various levels and types of support.
Here, you will learn the term service-level agreement and see its importance in
maintenance projects. A service-level agreement is a contractual service
commitment that describes the minimum performance criteria a provider promises
to meet while delivering a service. This is usually in measurable terms. It typically
also sets out the remedial action and any penalties that will take effect if
performance falls below the promised standard. It is an essential component of the
legal contract between a service consumer and the provider.
Finally, the value additions offered to the client include implementing
SLA-based management, which keeps a constant eye on the health of the
project and gives a measure of the performance. This subsequently leads to
improvement in the areas of productivity, schedule, and, finally, the cost involved.
The root-cause analysis done at intervals helps in identifying the pain areas of the
application and hence focuses on correcting them.
You will now learn about some key issues and challenges faced during application
maintenance. The key issues that should be adeptly dealt with for maintaining the
software effectively can be classified as:
The three primary stages of maintenance include Planning for transition, Knowledge
transition, and Steady state
UMBRELLA ACTIVITIES:
Any standard software process model primarily consists of two types of
activities: a set of framework activities, which are always applicable regardless of
the project type, and a set of umbrella activities, which are not tied to any single
SDLC phase but span the entire software development life cycle.
Umbrella activities span all the stages of the SDLC. They are not specific to any
particular life cycle stage.
The umbrella activities in a software development life cycle process include the
following:
Re-usability Management
Risk Management
The following pages will focus on the requirement traceability matrix and formal
technical reviews.
Managing traceability is required to ensure the requirements are carried through
properly to design, development, and delivery. The following animation will show
the pitfalls of poor traceability.
Now, let us try to understand the concept of traceability and its importance in
software development. For example, in an organization, activities are
departmentalized on the basis of the functionality to be served, and employees are
allocated to each department. Requirement traceability can be defined as a
method for tracing each requirement from its point of origin, through each
development phase and work product, to the delivered product. Thus, it helps in
indicating, for each work product, the requirements that the work product satisfies.
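The traceability matrix described above can be sketched as a simple data structure. This is an illustrative sketch only (the class and the requirement/work-product names are hypothetical, not a Cognizant tool): each requirement ID maps to the set of work products that satisfy it, and any requirement with no mapping signals a traceability gap.

```java
import java.util.*;

// Hypothetical sketch of a requirement traceability matrix: each requirement
// ID is mapped to the work products (design, code, test) that satisfy it.
class TraceabilityMatrix {
    private final Map<String, Set<String>> trace = new HashMap<>();

    // Record that a work product satisfies a requirement.
    void link(String requirementId, String workProduct) {
        trace.computeIfAbsent(requirementId, k -> new TreeSet<>()).add(workProduct);
    }

    // Work products currently traced to a requirement.
    Set<String> productsFor(String requirementId) {
        return trace.getOrDefault(requirementId, Collections.emptySet());
    }

    // Requirements with no linked work product indicate a traceability gap —
    // exactly the "missed functionality" risk described above.
    Set<String> untraced(Collection<String> allRequirements) {
        Set<String> missing = new TreeSet<>(allRequirements);
        missing.removeAll(trace.keySet());
        return missing;
    }
}
```

In practice the matrix is usually maintained as a spreadsheet or in a tool, but the underlying check is the same: every requirement must appear in the design, code, and test columns.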
In the absence of traceability, the product gets compromised since
development cannot be prioritized based on the order of criticality of the
components, ultimately leading to missed functionality in the delivered software.
Project management is compromised due to the lack of visibility of the components
of the application being developed and their interconnections causing a major
hindrance to continuous planning. Testing is compromised as the coverage is not
verified at every stage of the life cycle. It becomes difficult to demonstrate that the
product is ready. Finally, maintenance becomes difficult as identification and
analysis of the impacted work products and requirements becomes tedious. This
ultimately increases the complexity during testing.
The roles and responsibilities with respect to the traceability matrix are explained in
this page. The project manager ensures all required information is provided as
needed and reviews the traceability matrix for completeness. The requirement
analyst updates the requirements traceability matrix as needed and supports
analysis as needed. The designer provides mapping of requirements to design
products. The developer provides mapping of requirements to development
products. The tester provides mapping of requirements to test products.
This page details the concept of Peer Review in software projects and identifies the
importance and need of Peer Reviews. In software development, Peer Review refers
to a type of Software Review in which a work product (any software deliverable,
design, or document) is examined by its author and/or one or more colleagues of
the author, in order to evaluate its technical content and quality. Management
representatives should not be involved in conducting a peer review except when
included because of specific technical expertise, or when the work product under
review is a management-level document. Managers participating in a peer review
should not be line managers of any other participants in the review.
Peer Review has to be planned at the start of the project where the PM or PL
identifies the artifacts to be reviewed and the Peer Reviewers. The Review schedule
of the individual items to be reviewed along with associated reviewer needs to be
planned by the PL during the project execution. The Peer Review needs to be
conducted by the assigned Reviewers. The Review comments need to be logged in
a Review tool such as eTracker or Prolite. The developer needs to incorporate the
review comments.
CONFIGURATION MANAGEMENT:
Here, we are presenting the scenario on the day version 3.0 of Far Flung Personnel
Planner is to be released.
We see here that the current problem the Far Flung company was facing could
be resolved if they could roll back to the earlier release. However, they are unable
to identify the changes incorporated in the previous version or to check whether all
the suggested changes have been incorporated in the latest version. There is no
formal communication about the status of the changes either.
Here, we will illustrate a few basic reasons why we encounter change in software
application development. If the business need is not clear to the customers, then
the way it is communicated often doesn't address the actual need, which, at a later
point in time, might result in a change. Secondly, a change might arise when there
is a change in the operating environment in which the system functions. Thirdly, a change might
result from the errors committed due to other reasons during requirement
gathering, design, and the testing phase of the life cycle.
Since you now know the fact that change is inevitable in software application
development, the next basic question that arises is how are we going to manage
these changes? To manage changes in software application development, we use
the discipline of software configuration management, which operates throughout
the life cycle of an application development.
How do I identify a work product uniquely, every time I make a change and how do I
record its effect on other items?
How do I inform everyone else of the changes I have made to an existing document
or code?
A baseline is a specification or product that has been formally reviewed and agreed
upon. Examples of baselines are reviewed design document, approved project
plans, and accepted product. Baselines are well-defined points in the evolution of a
software application. Hence, the baselining criteria and the person responsible for
meeting the criteria need to be defined prior to planning configuration
management.
To change the window pattern of the house, we would need to re-plan the project
and recreate the floor design. The version number of the initial project plan is
incremented by 1.0 and the new plan is named: House_Project_Plan_Version_2.0.
The floor design is also updated and the version number of the initial floor plan is
incremented by 1.0 and the new floor design is named:
House_Floor_Design_Version_2.0. Based on the new baselined configurable items
House_Project_Plan_Version_2.0 and House_Floor_Design_Version_2.0, an updated
version of the house is created. The following page will give you details on Access
control.
Access control is used to maintain integrity of configurable items. Not all associates
working in a software company are allowed to access the documents pertaining to a
particular project. Only core members of a project are allowed to gain access to
documents of a project. Again, within the project, different user groups are defined
and access rights are defined for each user group. Separate work areas are defined
for each team and access is controlled within each work area.
The first task for initiating the discipline of software configuration management,
referred to as SCM, is to create the configuration management plan. The next step
is to form the SCM team as per the roles identified in the SCM plan. The third step is
to set up a library or project repository structure as per the SCM plan. Along with
this task, access rights of each team member to each repository are also defined
and implemented. Changes to all items are then implemented as per
the methodology documented in the SCM plan. The status of items changed is
maintained by the SCM coordinator. All activities of SCM are subject to configuration
audits conducted by the quality reviewer.
The SCM plan identifies the names of the SCM team and the roles of each member,
that is, the names of the reviewers, approvers, SCM coordinator, and other team
members who will be responsible for implementing a change.
Libraries or repositories are areas where a project stores and maintains the
documents and executables. This page illustrates the repository structure for
version-controlled items. The development area contains all items and documents
that are in development, while the review or test area contains items that are ready
for testing. The baseline area contains all the approved items that are ready for project
use and deliverable to the next step or stage. Old items that are no longer in use
are stored in the archival area.
Here you will see how user access rights are defined for each area or
repository and how read or write access is controlled in a project. To maintain
integrity of the work products, access rights to each of the folders is defined.
A software project has both version-controlled and non-version-controlled items. All
the items that will undergo changes throughout the life cycle are version controlled
and called documents; examples are design documents and code. There are many
other items that reflect the status at a given point of time; such an item will not
undergo changes, and only a new instance will be created. These are typically
non-version controlled and called records. Examples are status reports and review
records.
The naming conventions of configurable items are described here. The
qualifier can be a project, a module, a sub-project name, or any combination of
them identified as appropriate to a project.
The naming convention of other non-configurable items, like status reports, is
illustrated here. In this case, the qualifier can be a project, a module, a sub-project
name, or any combination of them identified as appropriate to a project.
Status accounting keeps track of the changes made to configurable items and their
current status by maintaining a history and a continuous status over a period of
time. The status can be WIP, baselined, under review, or changed. This helps in
identifying the list of changes required, the changes incorporated, and the changes
pending.
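The bookkeeping behind status accounting can be sketched as a tiny data structure. This is a hypothetical illustration (the class and status names mirror the narration above, not any specific SCM tool): each configurable item keeps an ordered history of status changes, from which the current status can always be read off.

```java
import java.util.*;

// Hypothetical sketch of status accounting for one configurable item:
// an ordered history of statuses is kept, and the latest entry is the
// item's current status.
class StatusAccount {
    enum Status { WIP, UNDER_REVIEW, BASELINED, CHANGED }

    private final List<Status> history = new ArrayList<>();

    // Record a status transition (e.g. WIP -> UNDER_REVIEW -> BASELINED).
    void record(Status status) {
        history.add(status);
    }

    // The current status is simply the most recent entry.
    Status current() {
        return history.get(history.size() - 1);
    }

    // The full history answers "what changed, and when in the sequence".
    List<Status> history() {
        return Collections.unmodifiableList(history);
    }
}
```

A real SCM tool would also stamp each transition with a date, author, and change request ID; this sketch keeps only the status sequence to show the idea.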
The quality reviewer or SCM coordinator of the project audits all activities pertaining
to configuration management. There are two types of audit that a quality reviewer
performs, functional configuration audit and physical configuration audit. Functional
configuration audit verifies that the system satisfies the specifications and this is
typically verified by auditing the traceability matrix. The traceability matrix traces
each requirement through the design, code, and test cases, whereas the physical
configuration audit answers the typical SCM question of the status accounting of all
SCIs.
In this page, you will learn about the modes of configuration management. SCM can
be either tool-based or manual or a combination of the two. Manual management
essentially involves configuring a folder structure in a file server with controlled
access rights for various areas. Tool-based management covers automatic version
control mechanism for both source code and documents, and access control. Since
the process is automatic, the chances of committing manual errors are eliminated.
Examples of SCM tools are VSS, ClearCase, and CVS. SCM can be performed as a
combination of these two mechanisms.
Currently, VSS from Microsoft is most widely used across projects. We will learn the
various features of configuration management by using VSS as the source control
tool. VSS allows automatic version management, eliminating manual version naming
conventions. Instead, it keeps a history of the previous versions of the same file
through frequent check-ins and check-outs. Apart from that, the details of the check-
ins and check-outs can be stored by labeling each version of an item. The labeling
can also be used for automatic build management of the software at defined
intervals by extending the tool with add-ins to VSS. VSS allows multiple degrees of
folder-level access control at a group or individual level. Parallel development is
also possible through the use of branching and merging with the main line.
Here are the SCM best practices that are followed at Cognizant.
Highlighting and planning for controlling dependencies affecting the critical path or
SLA
Handling scope and responsibility change apart from the requirement change
SCM practices and other details followed at Cognizant can be accessed from
Cognizant's quality management system, the Qview. The demo will help you reach
the necessary documentation regarding the SCM practice at Cognizant.
Software Configurable Item: These are the components of a product that are to be
controlled for managing change in the product. They are identified using naming
conventions and version numbering.
The SCM plan includes Names of the SCM team members, Roles of each SCM team
member, Name and location of project libraries, User access right for project
libraries, Names of configurable items, Names of non version controlled items,
Process for change control, and Process for status accounting
DEFECT MANAGEMENT:
Some maladies at their beginning are easy to cure but difficult to recognize. In
the course of time, when they have not at first been recognized and treated, they
become easy to recognize and difficult to cure. This is as true for software projects
as it is for medicine, and it necessitates that defect prevention and defect detection
be part of the defect management process.
A defect can be defined as a flaw in a system or system component that causes the
system or component to fail in performing its required function.
You will now learn what defect reporting involves. Defects are reported using
Prolite, eTracker, or other defect tracking tools. A defect report must include: Defect
Id, Test case reference, Defect description, Defect priority and severity, Tester
name, and Test date and time. After the defect is assigned to a developer and fixed,
the final report will include the Defect fixer's name, Date and time, and Defect fix
verification. In the next page, you can see how defects are classified.
Defects are classified in terms of severity. Severity is indicative of how severe the
defect is. This can be very high or critical, high, medium, or low. Priority is an
indicator of how soon the defect needs to be fixed and this can be high, medium, or
low.
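The severity/priority classification just described can be captured in a small record type. This is an illustrative sketch only (the class, field names, and the triage rule are hypothetical, not the schema of Prolite or eTracker): severity says how bad the defect is, priority says how soon it must be fixed, and the two are kept as separate dimensions.

```java
// Hypothetical sketch of a defect report carrying the severity and
// priority classification described above.
class DefectReport {
    enum Severity { CRITICAL, HIGH, MEDIUM, LOW } // how severe the defect is
    enum Priority { HIGH, MEDIUM, LOW }           // how soon it must be fixed

    final String id;
    final String description;
    final Severity severity;
    final Priority priority;

    DefectReport(String id, String description, Severity severity, Priority priority) {
        this.id = id;
        this.description = description;
        this.severity = severity;
        this.priority = priority;
    }

    // Illustrative triage rule (an assumption, not a prescribed policy):
    // critical defects and high-priority defects are looked at first.
    boolean needsImmediateAttention() {
        return severity == Severity.CRITICAL || priority == Priority.HIGH;
    }
}
```

Keeping severity and priority separate matters: a cosmetic defect on a demo screen can be low severity but high priority, while a crash in a rarely used report can be high severity but low priority.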
Now, you will learn how defects can be prevented. Every project prepares a defect-
prevention plan. Using the plan, the common causes of defects are identified and
eliminated. Defect-prevention tasks include: analyzing defects encountered in the
past and taking specific actions to prevent the occurrence of the same in future.
This diagram depicts the defect-prevention flow. The causal analysis meetings are
planned in the defect-prevention plan. You will learn more about causal analysis in
the next few pages.
Causal analysis primarily focuses on: fixing problems as they occur, finding what, in
the process, has permitted the defect to occur, and finding what needs to be
corrected to prevent it from occurring again.
Implementation defects occur due to a lack of implementation skills and the lack of
availability of the proper environment.
Here are the roles and responsibilities for defect management in a project:
Identify the action proposals for preventing the special or common defects
Review causal analysis and select action proposals to be addressed
Some of the defect prevention techniques commonly used are Root-cause analysis,
Defect metrics analysis, and Defect prediction.
A defect can be defined as a flaw in a system or system component that causes the
system or component to fail in performing its required function
A defect can be of two types, namely, process defect and product defect
Every project prepares a defect-prevention plan. Using the plan, the common
causes of defects are identified and eliminated
The causal analysis meetings are planned in the defect prevention plan.
Causal analysis focuses on fixing problems as they occur, finding what, in the
process, has permitted the defect to occur, and finding what needs to be corrected
to prevent it from occurring again.
The outcome of the causal analysis meeting determines the root causes and
common causes, proposes action plans, and draws up implementation plans.
BTSTC1:
SOFTWARE TESTING:
Welcome to the session on introduction to software testing. The goal of software
testing is to achieve an error-free program by finding all errors.
It verifies that all the requirements are implemented correctly for both positive and
negative conditions
It helps marketability
Scenario design and test case development can start in parallel with the
development cycle.
The test execution syncs up with the development cycle during the functional
testing phases.
Static and dynamic testing are the two types of testing techniques.
Static testing does not execute the program; it inspects or walks through the code.
Dynamic testing generates test data and executes the program.
Some of the types of dynamic testing are black-box and white-box (or glass-box)
testing.
These approaches group tests by perspective.
Black-box test design treats the system as a black box, so it does not explicitly
use knowledge of the internal structure.
The two techniques are equivalence partitioning and boundary value analysis.
Equivalence partitioning divides the input domain into classes of data from which
test cases can be derived.
Boundary value analysis focuses on the boundaries of the input domain rather than
its center.
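The two techniques can be made concrete with a small example. This is an illustrative sketch (the `AgeValidator` class and the 18..60 range are hypothetical, chosen only for the demonstration): equivalence partitioning splits the input domain into three classes — below the range, inside it, and above it — and one representative per class suffices; boundary value analysis then probes the edge values 17, 18, 60, and 61, where off-by-one defects typically hide.

```java
// Hypothetical unit under test: accepts ages in the inclusive range 18..60.
// Equivalence classes:  (..17]  [18..60]  [61..)
// Boundary values:      17, 18  and  60, 61
class AgeValidator {
    static boolean isValid(int age) {
        return age >= 18 && age <= 60;
    }
}
```

Note how few test values the techniques demand: three representatives (one per partition) plus four boundary probes cover the whole integer domain for this rule.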
Flow graph notation is a notation for representing control flow similar to flow charts
and UML activity diagrams.
The goal of white-box testing is to derive test cases based on the program structure
and to guarantee that every independent path within a program module is tested.
Black-box testing is planned without intimate knowledge of the program's internals.
The two techniques in black-box testing are equivalence partitioning and boundary
value analysis.
White-box testing is planned with intimate knowledge of the program's internals.
The types of techniques in white-box testing are basis path testing, flow graph
notation, and cyclomatic complexity.
Testing Graphical User Interface (GUI) components is essential for all applications so
that the application becomes user-friendly.
The main objective of the GUI testing is that the end user should feel comfortable
while using the components of the operating system.
It checks the functionality of all the components involved with the operating
system.
GUI testing also verifies the integrity of the operating system's interfaces with
other packaged applications.
Data validation
Building and applying tests for each layer of the operating system
Regression testing is used to ensure the quality and improve the functionality of the
system.
Discovering the hidden bugs that are not detected during the previous tests
Forming the baseline for the application to grow with every build
Acceptance testing ensures that the system meets the business requirements. It
verifies the overall quality, correct operation, scalability, completeness, usability,
portability, and robustness of the functional components supplied by the software
system.
The main objective of acceptance testing is that the system meets the mutually
agreed acceptance criteria with the customers.
Configuration testing is carried out to ensure the compatibility of the system.
The term compatible refers to the degree to which one device can work with
another.
Platform and inter-product compatibility are also taken care of by configuration
testing.
The network configuration has to be tested when two or more computers share the
resources on a computer network.
The activities that occur during the installation testing are as follows:
Alpha testing takes place at an early software stage in the execution of a product,
when it has all the core functions to accept inputs and generate outputs. The client
performs alpha testing at the development site.
Beta testing is the last stage of testing. A round of testing called alpha testing often
precedes this testing. It need not have defined test cases. It is the users who
perform this testing according to their ability.
Acceptance testing ensures that the system meets mutually agreed acceptance
criteria with the customers
UNIT TESTING:
It is the lowest level of testing. The individual unit of the software is tested in
isolation from other parts of a program.
Unit testing is the lowest level of testing and is also called program testing. The
individual unit of the software is tested in isolation from other parts of a program.
The unit test is made easy to run automatically, so that it is easily used by all
programmers.
The framework calls the tests, which in turn manipulate the domain code and
perform checks on it.
It also catches every unhandled exception, so the other tests keep running.
1. Initially set the test data, functions, and methods for a unit.
The unit testing activities are field level checks, field level validation, user interface
checks, and functionality checks.
The field level checks consist of factors, such as null or not null, uniqueness, length,
date field, numeric, negative, and default display checks.
Field level validation is used to test all validations for an input field. It checks date
range and date validation with the system date.
Functionality checks cover screens, field dependencies, and referential integrity.
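The field-level checks listed above can be sketched as small helper predicates. This is a hypothetical illustration (the `FieldChecks` class and its method names are assumptions, not a Cognizant utility): each method implements one of the checks — not-null, maximum length, and numeric — that a unit test would exercise for an input field.

```java
// Hypothetical field-level checks for a single input field, mirroring the
// null, length, and numeric checks listed above. Date-range checks are
// omitted to keep the sketch short.
class FieldChecks {
    // Not-null check: the field must contain something other than whitespace.
    static boolean notNull(String value) {
        return value != null && !value.trim().isEmpty();
    }

    // Length check: the field must not exceed the declared maximum.
    static boolean maxLength(String value, int max) {
        return value != null && value.length() <= max;
    }

    // Numeric check: every character must be a digit.
    static boolean numeric(String value) {
        if (!notNull(value)) return false;
        for (char c : value.toCharArray()) {
            if (!Character.isDigit(c)) return false;
        }
        return true;
    }
}
```

A unit test for a field then simply asserts each check against valid and invalid sample values, which is exactly the field-level validation activity described above.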
It streamlines programming.
The best practices for the efficient usage of unit testing by the developers are as
follows:
Testing small units is important because the bugs found must be in that unit.
Unit test should be isolated so that one test does not interfere with the other tests.
Unit tests are run every time programmers add code.
Unit testing provides a safety net for the application when programmers add
functionality.
It reduces the debugging time because bugs found by developers are cheaper to
solve than those found by a separate testing department.
A unit test is linked with the domain code and the framework
The main activities of unit testing are field level checks, field level validations, user
interface checks, and functionality checks
JUnit is a unit test framework for Java programming language. It contains a series of
extensible classes that perform a great deal of testing work. The JUnit test
framework consists of facilities, such as counting of errors and failures, reporting of
errors and failures, and running tests in batches.
The JUnit test framework consists of some salient features, such as automation of
test cases, improved test coverage, consistent testing, highly reusable framework,
assertions for testing expected results, testing fixtures for sharing common test
data, testing suites for organizing and running tests, and using graphical and
textual test runners. In JUnit, an unexpected exception also causes a test to fail.
JUnit has its unique advantages. Unit testing is automated. It determines whether
the unit works as designed. It has features, such as line coverage, logic coverage,
and condition coverage. JUnit benefits the programmer a lot if used correctly. It
has framework-supplied methods, such as assertTrue( ) and assertFalse( ).
JUnit has a few limitations. There is no direct provision for reading test data from a
file, and it cannot directly test UI components, such as JSPs and Servlets.
The JUnit test case method manipulates an instance of the class to be tested. It
issues assertions about the expected state of the object.
The two processes involved in the JUnit test case are manipulation and assertion.
Manipulation is a process that calls a method or series of methods on the instance
of the class. It passes through a variety of input, both valid and invalid. Assertion is
a process where the statement is true if the code works appropriately. If an
assertion is not true, it fails and the test case fails as well.
A JUnit test case is a piece of code that takes a predefined unit of code and
manipulates it. A small program is required to execute the various pieces of code,
divided into methods that test your class. Each behavior of your class will have a
method in the test case, and a behavior can have multiple methods in the test
case.
You implement a subclass of TestCase and use a TestRunner to run the tests. A test
case can run multiple tests. All test classes extend TestCase and are typically
named XXXTest.
The TestCase class generator is able to create skeletons of test methods. You can
add any number of assertions per method.
To write a JUnit test case, first create a class that extends junit.framework.TestCase
then write a public no-argument method whose name starts with test. An example
for writing a public no-argument method is displayed here. Now, manipulate an
instance of the class to be tested and issue assertions.
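The steps just described can be sketched as follows. The `Money` class and `MoneyTest` are hypothetical examples, and, to keep the snippet self-contained here, a minimal stand-in for `junit.framework.TestCase` is included; with the real JUnit 3 library you would simply `import junit.framework.TestCase` instead.

```java
// Minimal stand-in for junit.framework.TestCase so this sketch is
// self-contained; with the real JUnit library, import it instead.
class TestCase {
    protected void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }
}

// The class under test (hypothetical example).
class Money {
    private final int amount;
    Money(int amount) { this.amount = amount; }
    Money add(Money other) { return new Money(amount + other.amount); }
    int amount() { return amount; }
}

// JUnit 3 style: extend TestCase and write public no-argument methods
// whose names start with "test".
class MoneyTest extends TestCase {
    public void testAdd() {
        Money result = new Money(12).add(new Money(14)); // manipulate
        assertEquals(26, result.amount());               // assert
    }
}
```

The body of `testAdd( )` shows the two processes named earlier: manipulation (calling `add( )` on an instance) followed by an assertion about the expected state of the object.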
If you want to write a test similar to one you have previously written, write a fixture.
When you want to run more than one test, create a suite. Fixtures are a set of
objects used in JUnit to share common test data. setUp( ) method is used to
initialize the variables and the tearDown( ) method is used to release any
permanent resources you allocated in the setUp( ) method. A suite refers to objects
provided in JUnit to run any number of test cases simultaneously.
A program snippet using fixtures and suite objects is displayed here.
For running a JUnit test case, you can use JUnit TestRunner. You can also use both
textual and GUI TestRunners. An example for textual and GUI TestRunners is
displayed here. Textual is a faster TestRunner, whereas GUI is a user-friendly
TestRunner.
JUnit lets you test software code units by making assertions that the intended
requirements are met, but these assertions are limited to primitive operations.
Assertion Extensions for JUnit is an extension package for the JUnit framework.
In JUnit, an assertion performs an individual test. The most common assertions are
perhaps equality assertions. These compare two values; the test passes if they are
equal and fails if they are unequal.
Optionally override the tearDown( ) method to release object or objects under test.
Define one or more public testXXX( ) methods that exercise the object(s) under test
and assert expected results.
JUnit is designed around two key design patterns: the Command and Composite
patterns.
A TestCase is a command object. Any class that contains test methods should
subclass the TestCase class. A TestCase can define any number of public testXXX( )
methods. When you want to check the expected and actual test results, you invoke
a variation of the assert( ) method.
TestCase subclasses that contain multiple testXXX( ) methods can use the setUp( )
and tearDown( ) methods to initialize and release any common objects under test,
referred to as the test fixture. Each test runs in the context of its own fixture, calling
setUp( ) before and tearDown( ) after each test method to ensure there can be no
side effects among test runs.
3. Refactor
The dos and don'ts while working with the JUnit test framework are as follows:
You can call the superclass methods, such as setUp( ) and tearDown( ), when
subclassing.
Always utilize the JUnit's assert or fail methods and exception handling for clean
test code.
Do not assume the order in which tests within a test case run.
Do not write test cases with side effects, and do not load data from hard-coded
locations on a file system.
The Cactus framework is a testing framework for server-side Java code. It uses and
extends JUnit. The Apache Jakarta group developed the Cactus framework as an
open-source initiative.
The Cactus framework is used to test Servlets, JSPs, EJBs, Taglibs, filters, and so on.
The cost of writing server-side TestCases is very low.
The architecture of the cactus framework is displayed here.
The steps to be followed while configuring the cactus framework in Web application
are as follows:
3. Write TestCases
HttpUnit parses the HTML results into Document Object Model (DOM). It has easy
link navigation and form population. It is useful for automated acceptance tests.
JUnit testing provides automation of test cases, improved test coverage, consistent
testing, and highly reusable framework
The best practices of JUnit testing are separating production and test code, and
test-driven development.
The integration testing techniques are top-down, bottom-up, and big-bang testing.
The program is merged and tested from the top to the bottom in top-down
integration.
Modules subordinate to the main control module are incorporated into the structure
in either depth-first or breadth-first manner.
Stubs are functionally simpler than drivers and can be written with less labor in less
time.
The illustration here represents the testing process of top-down integration testing.
The modules are integrated by moving downward through the control hierarchy,
beginning with the main control module.
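A stub for top-down integration can be sketched as follows; the module and interface names are hypothetical. The main control module is exercised before its subordinate exists, with the stub returning a canned answer.

```java
// A sketch of top-down integration with a stub: the main control module is
// tested before its subordinate module is written, by wiring in a functionally
// simpler stand-in. All names here are illustrative.
class TopDownStubSketch {
    // Interface the main module expects from its subordinate.
    interface TaxService {
        double taxFor(double amount);
    }

    // Stub: far simpler than the real subordinate module would be.
    static class TaxServiceStub implements TaxService {
        public double taxFor(double amount) { return 10.0; } // canned answer
    }

    // Main control module under test, integrated downward via the interface.
    static class InvoiceModule {
        private final TaxService tax;
        InvoiceModule(TaxService tax) { this.tax = tax; }
        double total(double amount) { return amount + tax.taxFor(amount); }
    }

    public static void main(String[] args) {
        InvoiceModule top = new InvoiceModule(new TaxServiceStub());
        System.out.println(top.total(100.0)); // exercises the top module alone
    }
}
```

When the real TaxService module is ready, it replaces the stub without touching the module above it.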
The program is merged and tested from the bottom to the top in Bottom-Up
integration testing.
It tests the modules at the lowest levels in the program structure and begins
construction and testing with atomic modules.
Many programming and testing operations can be carried out simultaneously, which
yields an apparent improvement in software development effectiveness.
The test drivers have to be generated for modules at all levels except the top
controlling one.
You cannot test the program in the actual environment in which it runs.
The illustration here represents the testing process of bottom-up integration testing.
First, the terminal module is tested, and then the next set of higher-level modules is
tested with the previously tested lower modules.
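A test driver for bottom-up integration can be sketched in the same spirit; the names are again hypothetical. The driver stands in for the higher-level control module and feeds representative input to the atomic module under test.

```java
// A sketch of bottom-up integration with a test driver: the lowest-level
// module is tested first, driven by a small harness standing in for the
// not-yet-integrated higher-level control module. Names are illustrative.
class BottomUpDriverSketch {
    // Atomic, lowest-level module: tested first in bottom-up integration.
    static class DiscountModule {
        double discountedPrice(double price, int percent) {
            return price - price * percent / 100.0;
        }
    }

    // Driver: calls the lower module the way the real caller eventually will.
    static double driveDiscountModule() {
        DiscountModule unit = new DiscountModule();
        double result = unit.discountedPrice(200.0, 25);  // representative input
        if (result != 150.0) throw new AssertionError("unexpected discount");
        return result;
    }

    public static void main(String[] args) {
        System.out.println("driver result: " + driveDiscountModule());
    }
}
```

Once the real control module is integrated, the driver is discarded and the same calls come from the module above.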
The software components of an application are combined all at once into an overall
system in Big-Bang integration testing.
In this approach, every module is first unit tested in isolation from every other
module. After each module is tested, all the modules are integrated together at
once.
System testing is used to test whether the system performs the right business
functions.
Business functionality
Usability
Reliability
Portability
Installation
Disaster recovery
In top-down integration testing, the program is merged and tested from the top to
the bottom
In bottom-up integration testing, the program is merged and tested from the
bottom to the top
The software components of an application are combined all at once into an overall
system in big-bang integration testing
TESTING ARTIFACTS:
Test deliverables
Test procedure
It improves communication.
It streamlines tasks, roles, and responsibilities.
It is not redundant.
A test plan contains a description of testing objectives and goals; the test strategy
and approach based on customer priorities; the test environment; the features to
test, with their priority and criticality; test deliverables; procedures; organization;
scheduling; and measurements and metrics
A test case has a set of test inputs, execution conditions, and expected results
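The three parts of a test case named above can be sketched as a simple data holder; the class and field names are illustrative, not taken from any particular test-management tool.

```java
import java.util.List;
import java.util.Map;

// A sketch of the three parts a test case carries per the text: test inputs,
// execution conditions, and expected results. Names are illustrative.
class TestCaseSketch {
    final String id;
    final Map<String, String> inputs;       // test inputs
    final List<String> executionConditions; // pre-conditions for the run
    final String expectedResult;            // what the system should produce

    TestCaseSketch(String id, Map<String, String> inputs,
                   List<String> conditions, String expected) {
        this.id = id;
        this.inputs = inputs;
        this.executionConditions = conditions;
        this.expectedResult = expected;
    }

    public static void main(String[] args) {
        TestCaseSketch tc = new TestCaseSketch(
            "TC-LOGIN-01",
            Map.of("user", "alice", "password", "secret"),
            List.of("user account exists", "server is running"),
            "user is redirected to the home page");
        System.out.println(tc.id + ": expect \"" + tc.expectedResult + "\"");
    }
}
```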
DEFECT MANAGEMENT:
Missing - It occurs because user requirements are not built into the product.
High
Medium
Low
A bug report is a case against a product. It must supply all the information
necessary to identify and fix the problem. The report must also specify what the
system should do instead.
It should include information about the product, such as the version number and
the data being used.
Effective project management requires defect tracking and reporting as one of the
important modules.
Defect reporting software is designed for recording and reporting the defects in
projects to help project management, construction of project life cycles, project
handovers, and so on.
Defect tracking software is essential for improving project quality and managing
future requests from customers.
Any defect reported against a product is logged into a common repository and
tracked through to closure.
The document generated through defect reporting and tracking consists of the
topics as displayed here.
These topics are found as column headings when the TEST DIRECTOR tool is used.
Effective project management requires defect tracking and reporting as one of the
important modules
TEST AUTOMATION:
Testing software with an automatic test program helps prevent commonly made
errors. Automating testing keeps mistakes from being overlooked and increases
the accuracy of the product.
Automated testing is the process of automating the currently used manual testing
process.
It decreases the redundancy of tests and increases the control when compared to
the manual testing process.
Once the test cases have been created, the test environment can be developed.
The test environment is defined as the complete set of steps necessary to execute
the test as described in the test plan.
It also includes initial set up, description of the environment, and the procedures
needed for installation and restoration of the environment.
A capture tool records test input as it is sent to the software under
consideration.
The stored input cases can then be used to reproduce the test at a later time.
The two methods used for test automation are capture playback and data-driven
approach.
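The capture-playback idea can be sketched as follows: inputs sent to the software under test are recorded as they happen and replayed later to reproduce the run. The recorder below is a toy stand-in, not a real capture tool.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A sketch of capture playback: inputs forwarded to the software under test
// (SUT) are also recorded, so the same session can be replayed later.
class CapturePlaybackSketch {
    private final List<String> recorded = new ArrayList<>();
    private final Consumer<String> softwareUnderTest;

    CapturePlaybackSketch(Consumer<String> sut) { this.softwareUnderTest = sut; }

    // Capture mode: forward the input and keep a copy for later.
    void send(String input) {
        recorded.add(input);
        softwareUnderTest.accept(input);
    }

    // Playback mode: reproduce the captured session against the SUT.
    void replay() {
        for (String input : recorded) softwareUnderTest.accept(input);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        CapturePlaybackSketch session =
            new CapturePlaybackSketch(input -> log.append(input).append(';'));
        session.send("open form");
        session.send("submit");
        session.replay();            // same inputs, in the same order
        System.out.println(log);     // open form;submit;open form;submit;
    }
}
```

The data-driven approach differs only in where the inputs come from: instead of a recorded session, they are read from an external data set.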
The matrix here represents a tool-by-tool comparison. The functionality of each tool
may also be inferred from the matrix.
2 represents good support, or that another tool is provided for the functionalities
that are lacking.
4 represents support only through an API call or third-party add-in that is not
included in the general test tool, or below-average support.
The phases in automation are test planning, test design and development, test
execution, and measurement of the results.
Rational Purify
Rational Quantify
Rational Robot
A Rational project is a logical collection of the databases and data stores that
hold the data you work with in the Rational Suite. A Rational project is
associated with one Rational Test data store, one Requisite Pro database, and one
Clear Quest database. It can also be associated with multiple Rose models and
Requisite Pro projects, optionally placing them under configuration management.
It performs full functional testing. Record, play back the scripts that navigate
through the application, and test the state of objects through verification points.
Rational Robot performs full performance testing. Use Rational Robot and Test
Manager to record and play back scripts that help in determining whether a multi-
client system is performing within user-defined standards under varying loads.
It is used for creating and editing scripts by using the SQABasic, VB, and VU
scripting environments. The Rational Robot editor provides color-coded commands
with keyword Help for powerful integrated programming during script development.
It is also used for testing objects even if they are not visible in the interface of the
application.
It is integrated with Rational Purify, Quantify, and Pure Coverage. It also enables
playing back scripts under a diagnostic tool and viewing the results in a log file.
Rational Test Manager is an open and extensible framework that unites all the tools,
assets, and data, which are related to and produced by the testing effort. Under this
single framework, all participants in the testing effort can define and refine the
quality goals.
It facilitates creating, managing, and running the reports. The reporting tools track
assets, such as scripts, builds, and test documents. It also helps in tracking test
coverage and progress.
It is used for creating and managing builds, log folders, and logs.
Rational Test Manager is also used for creating and managing data pools, and
types.
The operating systems and protocols supported by Rational are displayed here.
Rational supports markup languages, such as HTML and DHTML on IE4.0 or later.
Some of the testing tools offered by Rational are Rational Requisite Pro, Rational
Clear Quest, Rational Purify, Rational Suite Performance Studio, Rational Robot,
Rational Test Manager, and so on
PERFORMANCE TESTING:
Performance testing measures the latency, throughput, and utilization of a Web
site while simulating attempts by virtual users to simultaneously access the site.
One of the main objectives of performance testing is to maintain a Web site with
low latency, high throughput, and low utilization.
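A minimal sketch of how latency and throughput might be measured with concurrent virtual users is shown below; the "service" here is a stand-in method, not a real Web site.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// A sketch of measuring latency and throughput while simulated virtual users
// hit a service concurrently. The service is a toy stand-in.
class LoadSketch {
    static void service() {                 // stand-in for one site request
        long x = 0;
        for (int i = 0; i < 10_000; i++) x += i;
        if (x < 0) System.out.println(x);   // never true; keeps the work live
    }

    // Run `users` virtual users, `requests` requests each; return completed count.
    static long runLoad(int users, int requests) throws InterruptedException {
        AtomicLong completed = new AtomicLong();
        AtomicLong totalNanos = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        long start = System.nanoTime();
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requests; r++) {
                    long t0 = System.nanoTime();
                    service();
                    totalNanos.addAndGet(System.nanoTime() - t0); // per-request latency
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        long elapsed = System.nanoTime() - start;
        System.out.printf("mean latency: %d ns, throughput: %.0f req/s%n",
            totalNanos.get() / completed.get(),
            completed.get() * 1e9 / elapsed);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        runLoad(4, 50);   // 4 virtual users, 50 requests each
    }
}
```

Tools like Load Runner apply the same idea at scale, distributing thousands of virtual users over a network instead of threads in one process.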
Performance testing requirements normally comprise three components. They
are response time requirements, transaction volumes, and database volumes.
Load Runner is a testing tool used for testing the performance of a client/server
system. It enables the user to test the system under restricted and peak load
conditions.
To generate load, Load Runner runs thousands of virtual users distributed over a
network. Using minimal hardware resources, these virtual users provide consistent,
repeatable, and measurable load, exercising the client/server system just as real
users would. Load Runner's concise reports and graphs provide the information
required to evaluate the performance of the client/server system.
Web Load is a testing tool used for testing the scalability, functionality, and
performance of Web-based applications, such as Internet and intranet applications.
Web Load can measure the performance of your application under any load
condition. Web Load is used to test the performance of your Web site under
real-world conditions. It executes this task by combining performance, load, and
functional tests or by running them individually.
The purpose of using volume testing is to find the weakness in the system with
respect to its handling of large amounts of data during short time periods.
The purpose of using stress testing is to test whether the system has the capacity
to handle large numbers of processing transactions during peak periods.
Performance testing can be accomplished in parallel with volume and stress testing
because it is necessary to assess performance under all conditions. System
performance is generally assessed in terms of response time and throughput rate
under different processing and configuration conditions.
Load Runner is a testing tool for testing the performance of client/server systems
Web Load is a testing tool for testing the scalability, functionality, and performance
of Web-based applications
Volume testing is used to find weaknesses in the system with respect to its
handling of large amounts of data during short time periods
Stress testing is used to test whether the system has the capacity to handle large
numbers of transactions during peak periods
Code coverage tools can exert a performance, memory, or other resource cost
that is unacceptable for the normal operation of the software.
Clover provides method, branch, and statement coverage for projects, packages,
files, and classes. Unlike tools that use byte code instrumentation or the Java Virtual
Machine (JVM) Profiling Application Programming Interface, Clover accurately
measures per statement coverage, rather than per-line coverage.
JProbe allows the user to easily test applications without any code change and
identifies performance problems in the code. It integrates easily with the
application server, Web server, IDE, JDK, and operating system.
JProbe is the industry's leading choice for enterprise code profiling and analysis,
since it supports both 32-bit and 64-bit platforms and can analyze applications
running on a remote server.
It instruments or inserts instructions into Java class files at the byte code level.
After instrumenting the code and running the tests, a report is generated allowing
the user to view the information coverage figures from a project level. This process
is called code coverage.
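What instrumentation achieves can be sketched by hand: each basic block increments a hit counter, and a report computes the percentage of blocks covered after the tests run. Real tools insert such counters into class files automatically; this toy does it manually.

```java
// A hand-instrumented sketch of code coverage: each basic block bumps a hit
// counter, and a report computes the fraction of blocks exercised. Real
// coverage tools insert these counters at the byte-code level.
class CoverageSketch {
    static final int[] hits = new int[3];   // one counter per instrumented block

    static String classify(int n) {
        hits[0]++;                          // block 0: method entry
        if (n < 0) { hits[1]++; return "negative"; }  // block 1: the n < 0 branch
        hits[2]++;                          // block 2: fall-through branch
        return "non-negative";
    }

    // Coverage = blocks hit at least once / total instrumented blocks.
    static int coveragePercent() {
        int covered = 0;
        for (int h : hits) if (h > 0) covered++;
        return covered * 100 / hits.length;
    }

    public static void main(String[] args) {
        classify(5);                        // never takes the n < 0 path
        System.out.println(coveragePercent() + "% of blocks covered");
    }
}
```

The uncovered counter for the n < 0 branch is exactly the kind of gap a coverage report is meant to expose.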
EMMA is an open-source toolkit for measuring and reporting Java code coverage.
It supports coverage types, such as class, method, line, and basic block.
EMMA can detect when a single source code line is covered only partially.
Coverage stats aggregated at method, class, and package levels are included with
EMMA.
Output reports can highlight items with coverage levels below user-provided
thresholds. The reports come in three types: plain text, HTML, and XML.
The HTML report supports source code linking, and coverage data obtained in
different instrumentation or test runs can be merged.
EMMA does not require access to the source code, and it degrades gracefully as
the amount of debug information available in the input classes decreases.
It can instrument individual .class files or entire .jar files. Hence, efficient coverage
subset filtering is possible.
The Makefile and ANT build integration are supported by EMMA on equal footing.
EMMA is quite fast since the run-time overhead of added instrumentation is small
and the byte code instrumentation is also fast.
EMMA is 100 percent pure Java and has no external library dependencies. Thus it
can work in any Java 2 Java Virtual Machine (JVM).
EMMA belongs to the class of pure Java coverage tools based on Java bytecode
instrumentation.
Hence, special JVM switches for enabling coverage are not needed; instead, you
run EMMA-processed .class files. It also follows that EMMA does not instrument the
.java sources.
EMMA offers two different options to instrument. They are offline and on-the-fly.
The offline approach works well with contexts, such as J2SE, J2EE, and distributed
client/server applications.
The on-the-fly approach is a handy and lightweight shortcut for simple standalone
programs.
EMMA pays attention to the needs of enterprise software developers and thus
does not require the latest Java version: it runs in any Java 1.2-compatible JVM and
has no library dependencies. EMMA is free and uses a very liberal open-source
license.
A class can be considered covered even though none of its methods other than
the class initializer has been executed.
Class coverage also considers the number of loaded but uninitialized classes.
It is common to see a small number of loaded, but uninitialized classes while using
EMMARUN without the -f option.
EMMA reports class coverage so that the user can spot classes that are ignored
by the test suite. The identified classes could be dead code, or they may need
more test attention.
EMMA considers a method covered as soon as its execution has started. The
problem lies in tracking method completion: a given method can have any number
of normal or abnormal exit points, and it is not clear how many of the exit paths
should be considered normal.
Looking out for uncovered methods is a good technique for detecting either dead
code or code that needs more test attention.
Clover
JProbe
EMMA
InsECT
JCoverage
EMMA supports large-scale enterprise software development while keeping
individual developer's work fast and iterative. It offers two different options to
instrument
Offline
On-the-fly
Welcome to the session on Test Case Point (TCP) analysis. TCP analysis is an
approach for doing an accurate estimation of functional testing projects.
The TCP is also used as an estimation technique to calculate the size and effort of a
testing project. TCP counting ranks the requirements, and the test cases to be
written for those requirements, as simple, average, or complex, and quantifies this
ranking into a measure of complexity. This approach emphasizes the key testing
factors that determine the complexity of the entire testing cycle. In other words,
TCP is a way of representing the effort involved in testing projects.
TCP analysis generates test efforts for separate testing activities. This is essential
because testing projects fall under four different models: test case generation,
automated script generation, manual test execution, and automated test
execution.
Test case generation is an execution model that includes designing well-defined test
cases. To determine the TCP for test case generation, first determine the complexity
of the test cases. Some test cases may be more complex due to the inherent
functionality being tested. The complexity will be an indicator of the number of TCPs
for the test case. Automated script generation is an execution model that
automates the test cases using an automated test tool.
From the list of test cases derived in the test case generation model, identify the
test cases that are good candidates for automation. Some test cases take so little
effort to perform manually that they are not worth automating. On the other hand,
some test cases cannot be automated because the test tool does not support the
feature being tested. Automation is also difficult for cases involving dynamic data.
Manual test execution is an execution model. It executes the test cases already
designed and reports the defects. To determine the TCPs for manual execution, first
calculate the manual test case execution complexity based on factors such as
pre-conditions, the steps in the test case, and verification.
Automated test execution includes the execution of the automated scripts and
reporting the defects. To determine the TCPs for automated execution, you must
calculate the automation test case execution complexity based on the pre-
conditions, such as setting up the test data. It also includes the steps needed before
starting the execution.
TCP analysis uses a seven-step process: identifying use cases; identifying test
cases; determining the TCP for test case generation, automated script generation,
manual execution, and automated execution; and determining the total TCP.
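The counting arithmetic behind TCP can be sketched as below; the complexity weights and the productivity figure are illustrative assumptions, not values prescribed by the TCP approach.

```java
// A sketch of TCP counting arithmetic: test cases ranked simple, average, or
// complex are weighted and summed into a single size measure. The weights
// (2, 4, 8) and the productivity figure are assumed for illustration only.
class TcpEstimateSketch {
    static final int SIMPLE = 2, AVERAGE = 4, COMPLEX = 8;  // assumed weights

    // Total Test Case Points for a set of ranked test cases.
    static int totalPoints(int simple, int average, int complex) {
        return simple * SIMPLE + average * AVERAGE + complex * COMPLEX;
    }

    // Effort follows from an assumed productivity figure (points per person-day).
    static double effortDays(int points, double pointsPerDay) {
        return points / pointsPerDay;
    }

    public static void main(String[] args) {
        int points = totalPoints(10, 5, 2);   // 10 simple, 5 average, 2 complex
        System.out.println(points + " TCP, "
            + effortDays(points, 8.0) + " person-days at 8 TCP/day");
    }
}
```

The same sum is computed per execution model (generation, automation, manual execution, automated execution), and the per-model totals are added to give the total TCP.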
TCP analysis generates test efforts for separate testing activities. This is essential
because testing projects fall under four different models
Test case generation is an execution model that includes designing well-defined test
cases
Manual test execution is an execution model that involves executing the test cases
already designed and reporting the defects
Automated test execution includes the execution of the automated scripts and
reporting the defects