
BQCQB1

1. Software Development Life Cycle:


Incremental model:

In the early 1980s, the more flexible Incremental Model was introduced. It is
also called the Staged Delivery Model. This model performs the waterfall in
overlapping sections, attempting to compensate for the length of projects by
producing usable functionality earlier, in increments. All the requirements are collected up front, and the technical architecture is finalized at the outset. The objectives are divided into several increments or builds. The first build is developed and delivered, followed by the next, until all the objectives have been met. Each increment is easier to design and build than the whole project at once. However, the model has its own drawbacks.

Spiral model:

For a typical application, the spiral model means that you have a rough-cut of user
elements as an operable application, add features in phases, and, at some point,
add the final graphics. Each phase starts with a design goal and ends with the
client reviewing the progress thus far. Analysis and engineering efforts are
applied at each phase of the project, with an eye toward the business goal of the
project. The spiral model is used most often in large projects and needs constant
review to stay on target. However, if the spiral is used in reverse, that is, adding the peripheral features first and developing the core last, the project more often than not ends up going nowhere.

Other models:

There are several other Life Cycle models: Rational's Iterative Development
Model, Adaptive Model, Rapid Application Development, Evolutionary
Prototyping, and V Model. You are not required to memorize the models now.
The intention is to give you an idea of what the software life cycle phases are and
how they are combined into different structures to form life cycle models.

Maintenance model:

For projects that require us to maintain software, Cognizant has its own
Application Value Management process model. This involves initial knowledge
transition, followed by steady state, where enhancements, production support, and
bug fixes take place.

Model selection:

The selection of the appropriate model depends on a number of factors. It depends upon the project type, client requirements and priority, nature of the customer, nature of the requirements, technology to be used, budget, and various other factors.

For example, stable requirements, well-understood technology, and a small software solution for a complex and well-understood problem are criteria that can seem apt for the waterfall model.

Summary:

In this session, you have learned that:

The Software stages are Requirement, Design, Development, Testing, and Delivery

Stages such as Analysis, Design, Testing, and Delivery can be carried out at single
or multiple points of time during the software project according to different life
cycle models

The various models for project development are: Waterfall, Incremental, Iterative,
Spiral, and Maintenance

The choice of a particular model for a project depends on client involvement, priority, requirements, technology, and other factors

2. Software Quality Basics:


What is quality all about?

Here is an illustration that shows what one means by a quality product.

I need a shirt that will fit my specifications.

My shirt needs to be defect free.

It should also be cost effective.

This shirt has two pockets, which is a unique feature.

I will take this shirt because it is also branded as a quality shirt.

What is quality?
Quality can be defined in various ways depending upon the customer's point of
view or from the point of view of the product or from a value angle or even
according to the manufacturing specifications.

So many definitions of quality leave us confused.

In reality, all of them combine to define the quality of a product.

Quality distinguishes a product and becomes the deciding factor in competition.

Whatever work products you produce during day-to-day activities contribute to the quality of the product.

Process of ensuring quality:

The process of ensuring quality in a product comprises the following components:

Quality assurance ensures a proper process for production of the end product.

Quality control ensures that the end product meets standards.

The entire process for ensuring quality makes up the quality management of
an organization.

Responsibility of quality:

In this page, you will learn the responsibilities involved in ensuring the quality
of a product.

The responsibility of ensuring proper quality of every deliverable lies with the
organization. This implies that it is the responsibility of each and every employee of
the organization.

The quality of the process and product of the organization needs to conform to
certain set standards.

These standards need to be adhered to by the members and checked for conformance by certain specialists of the company.

Quality models and standards:

Now you will learn about the various Quality Models and Standards.

Frameworks to ensure proper quality processes can be developed by the company, aided by the standards and the models, which are internationally accepted in the industry.
Adhering to internationally accepted quality standards and models may lead to
certification.

The various standard models that are accepted worldwide are:

ISO

CMMi

PCMM

BS 7799

Six Sigma

These models have been developed by established academic and professional institutions and have been tested over time.

Apart from the obvious benefits of producing quality products by following processes that conform to internationally recognized standards and models, certifications also carry brand value, which helps in winning the confidence of customers.

Quality and associated certifications are the differentiating factors that allow
companies to beat their competitors and win clients.

In Software Engineering, the quality components map to similar activities as in any other industry:

The quality control activities verify whether the deliverables are of acceptable
quality and that they are complete and correct. Examples of quality control
activities in software include testing and reviews.

Quality assurance verifies the process used to create the deliverables. This
can be performed by a manager, client, or a third-party reviewer. Examples of
quality assurance include software quality assurance audits, software configuration management, and defect prevention.

The quality management system includes all the procedures, templates, and
guidelines associated with software engineering in the organization. In
Cognizant, the quality management system is the QView.

Even though there are policies, frameworks, reviews, testing, audits, and conformance checks in place, the ownership and responsibility for producing a quality product rest with the individual.
Quality is a reflection of your ability.

Whatever you produce needs to conform to a specified quality, be it code, documents, manuals, or even a printout.

In this session, you have learned that:

Quality is a relative term, which aims at customer satisfaction. There are several
models and standards in the industry to ensure quality

Each and every employee in an organization is responsible for ensuring the quality of a product

The various activities involved in ensuring the quality of software are Quality
Control, Quality Assurance, and Quality Management

3. Software Quality in Cognizant:

Quality is not a goal but a journey. There are multiple paths leading to proper
quality.

At the end of this session, you will be able to:

Learn the basics of the major quality models

Study Cognizant's quality journey and method of implementation

Describe process management through Q View

Explain Cognizant's quality culture

To ensure the highest possible quality of software development and maintenance, Cognizant lays stress on a Quality Management Process that integrates
Cognizant's quality approach throughout the software development life
cycle, thereby ensuring that quality is built in as development progresses.
Here you will learn about the major quality models that Cognizant employs for
ensuring the highest quality for its product. To facilitate the on-time, on-budget
completion of projects, integrating on-site and offshore teams, Cognizant
utilizes an ISO 9001 certified methodology and SEI CMMi Level 5 processes
to define and implement projects, and to allow frequent deliverables as well as
feedback from customers. Cognizant has also become the first software company to
be assessed at People-CMM Level 5, across all of its development centers in
India. Along with this, Cognizant ensures security of information by obtaining and
adhering to the requirements of BS 7799 certification. Finally, on the journey of
quality toward continuous improvement, Cognizant has focused management
direction toward the Six Sigma method of minimizing defects in the delivered
products.
Every activity leading to production of deliverables needs to be developed using the
process and practices of the Standards and Guidelines. In the following pages, you
will learn in detail about these quality models.

ISO 9001 is a model for a Quality Management System promoted by the International Organization for Standardization. It ensures uniformity across development units through standard procedures. It is heralded as the first step to achieving high quality.

ISO 9001 consists of five main sections, which cover the standard's requirements.

Quality management system encompasses quality manual documentation and control of documents and records.

Management responsibility deals with management's commitment to the quality management systems and explains that management must be dedicated to the organization's products and customers and to the planning and review processes.

Resource management provides the criteria needed to perform a job


competently and in a safe environment. Human resources, infrastructure
planning, and work environment are discussed in this section.

Product realisation defines the steps in product development. These steps include everything from the initial design phase to the final delivery phase.

Measurement, analysis, and improvement focuses on measuring, analyzing, and
improving the quality management system by performing periodical internal audits,
monitoring customer satisfaction, controlling nonconforming products, analyzing
data and taking corrective and preventive actions. It is aimed at achieving customer
satisfaction by preventing nonconformity at all stages from Design to Servicing.

While ISO is a generic model applicable to any industry, Capability Maturity Models deal with processes typical to the software industry. The Capability
Maturity Model was developed by the Software Engineering Institute in Carnegie
Mellon University in 1987. The US Department of Defense funded SEI to define a
model for evaluating the capability and maturity level of the vendors supplying
software. The results of the work include various Capability Maturity Models,
such as The Capability Maturity Model for Software, The Systems
Engineering Capability Model, and The Integrated Product Development
Capability Maturity Model. Although these models have proved useful to many
organizations, the use of multiple models has been problematic. Further, applying multiple models that are not integrated within and across an organization is costly in terms of training, appraisals, and improvement activities. The CMM Integration project was formed to sort out the problem of using multiple CMMs. The result was the Capability Maturity Model Integration, or CMMi.

You have learned that CMMi stands for Capability Maturity Model Integration.
CMMi consists of guidelines for managing the:

Process Management Processes

Project Management Processes

Engineering Processes

Support Processes

Linking the four, CMMi forms guidelines to manage and execute software
projects.

The different benefits from CMMi are in the areas of improved process, budget
control, delivery capability, increased productivity, and reduction of defects.

A CMMi level 1 organization denotes that processes are unpredictable, poorly controlled, and reactive.

When an organization is at level two, the process is characterized for projects and is often reactive.

At level three, the process is characterized for the organization and is proactive.

At level four, the process is measured and controlled.

At the final level, the focus is on process improvement.

One has to progress maturity level by maturity level and reach the pinnacle of
excellence.

The People Capability Maturity Model measures how effectively an organization manages its professional workforce. People are the most important resource in the software industry. People CMM stresses people-centric processes, which ensure retention of the workforce and growth of the organization. Organizations that achieve Level five certification are not only implementing and benchmarking their execution of enterprise-wide workforce
management practices, but they are in continuous improvement mode, always
seeking opportunities for improvement. A high PCMM level is characterized by
effective professional training and mentoring programs and emphasis on continuous
improvement.

Information is one of the most valuable assets owned by Cognizant. Securing information should be one of the most important responsibilities of every associate.
Loss of information could result in a loss of man hours spent creating information as
well as several more man-hours trying to recover lost information. Information lost
outside the corporate environment could allow competitors to gain undue
competitive edge. Information is also entrusted to Cognizant by its customers. Information received from customers needs to be protected against loss and
unauthorized access and use. BS 7799 is a Standard for developing and
maintaining an Information Security Management System. The ten domains
that come under BS 7799 are Security Policy, Security Organization, Asset Classification and Control, Personnel Security, Physical and Environmental
Security, Communications and Operations Management, Systems
Development and Maintenance, Access Control, Business Continuity, and
Compliance. The following page gives you the details on Cognizant's security
arrangements.

The Security classification in Cognizant is divided into:

C1, or Highly Critical - This involves firewall and finance-related data.

C2, or Critical - This involves customer-supplied items and personal folders.

C3, or Protected - This includes any project document.

C4, or General - This consists of general information, like cognizant.com.

The Security classification of the data implies the following:

Storage of Information

Transmission of Information

Access to Information

Destruction of Information

Just as security of information is critical, reduced variability is also very critical for customer
satisfaction. Six Sigma is a statistical methodology that significantly improves
customer satisfaction and shareholder value by reducing variability in every aspect
of our business.

"Six Sigma is not something else that you do...it is what you do."

Six Sigma uses a standard methodology to apply statistical solutions for reducing the variability of a product and thereby reducing defects. The ultimate aim is to produce products with no more than 3.4 defects per million opportunities.
From a statistical point of view, it means to work toward achieving a normal
distribution of the produced products with very low variance around the target.
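
To make the figure concrete, here is a minimal Python sketch of the defects-per-million-opportunities calculation behind that target; the sample numbers are illustrative assumptions, not Cognizant data.

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    # Defects per million opportunities: the figure Six Sigma drives toward.
    return defects / (units * opportunities_per_unit) * 1_000_000

# Illustrative example: 7 defects found across 500 delivered work products,
# each offering 40 opportunities for a defect.
print(dpmo(defects=7, units=500, opportunities_per_unit=40))  # 350.0
# The Six Sigma target is 3.4 DPMO or better.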

Take a look at how Cognizant's quality journey has evolved over time and what it
implies. All the certifications of Cognizant have been achieved enterprise-wide
across all centers. The certifications have been achieved in an incremental
approach with Cognizant's growth as a company.

Cognizant's Quality policy is outlined in a web-based application known as Qview.


The Quality Management System is available through the intranet at:

http://cognizantonline/qview/

You will view a demo on Qview in the next page.

The responsibility for monitoring all the process and quality related activities lies with
the individual employees of Cognizant. The group responsible for ensuring quality
processes and continuous improvement is the process and quality group. The
diagram shows the structure of the process and quality group. SEPG is a virtual
group consisting of quality champions and representative practitioners from all
locations and vertical, horizontal, or support groups. It is responsible for process
definition, maintenance, and improvement of the software processes used by the
organization. The SQAG is responsible for guiding projects through facilitations and
monitoring the process and quality of the Projects through auditing and reporting,
periodic status reporting, and timely escalations. The SQAG is also responsible for
conducting the internal quality audits that serve as periodic assessments of project
and process health and compliance in the organization.

This illustration shows the types of audits that take place in a project and their
distribution throughout the project life cycle. All the audit results, which include the non-conformances and the observations, are logged in the Q Smart online tool. Like
any other process, the quality process also strives for continuous improvement in
Cognizant. This improvement is enhanced by the feedback about the quality
process received from the associates. Q Smart can also be used to provide feedback
on the quality process. Any associate can provide feedback in this manner.

Apart from the certifications to be adhered to and the quality champions and quality
reviewers, the management also encourages quality and innovation by awarding
the project of the year, associate of the year, and the best practice awards. The
best ideas and innovations are recognized, rewarded, and re-used to ensure
continuous improvement and high quality in process, product, and practice.

In this session, you have learned that:

The major quality models include:

ISO, which ensures uniformity across development units through standard procedures

CMMi, consisting of guidelines to manage Process management processes, Project management processes, Engineering processes, and Support processes

PCMM, which is used to measure how effectively an organization manages its professional workforce

BS 7799 is a standard for developing and maintaining an information security management system

Six Sigma uses a standard methodology to use statistical solutions for reducing
variability of a product and thereby reducing defects

All the certifications of Cognizant have been achieved enterprise-wide across all
centers. The certifications have been achieved in an incremental approach with
Cognizant's growth as a company

Q view is a web-based application that outlines Cognizant's quality policy. Here you can find the process documents arranged according to their relevance to the different types of projects and processes

Q view also contains different references that exist for the benefit of the associates

In Cognizant, the group responsible for ensuring quality processes and continuous
improvement is the process and quality group

All the audit results, which include the non-conformances and the observations,
are logged in the Q Smart online tool

Cognizant management encourages quality and innovation by awarding the Project of the Year, Associate of the Year, and Best Practice awards

4. Software Engineering in Cognizant:

At the end of this session, you will be able to:

Learn the types of projects in Cognizant and their adaptation of SDLC

Study the software engineering concept through the QView

Know how to address the customer requirements in designing delivery

The types of projects that are dealt with by Cognizant can be classified into
Development Projects and Application Value Management or Maintenance Projects.
In the following pages, you will learn in detail about application development
projects.

The application development projects handled by Cognizant are of the following types:

New development - A project developed from scratch for a customer, based on given requirements.

Re-engineering - Converting an existing software system into another system having the same functionality, but with a different environment, platform, programming language, operating system, and so on.

Product development - A software product developed not for a single customer, but based on market demand from a customer base for that type of product.

Development projects are classified in two different ways:

Classification based on life-cycle models like Waterfall, Incremental, and Iterative.

Classification based on size like Large, Medium, Small, and Very Small.

The reason why the projects are categorized into large, medium, small, and very
small is that they need to follow different processes for development. A large
project and a very small project cannot follow the same steps while executing
deliveries. To keep a large project on track, one needs to have substantial
processes in place; having too few processes creates the risk of unmanaged work, which may lead to failure. For a small project, on the other hand, a heavyweight process becomes a tedious and unnecessary overhead.

Decisions regarding a project's size, whether it is a large, medium, small, or very small project, are arrived at depending upon the duration and the effort associated
with the project. The total project duration includes requirements through the
system testing phase in calendar months. The total project effort includes on-site as
well as off-shore effort and requirements through the system testing phase. If a
project is of less than one-month duration or the effort required is less than three
person months, it is considered a very small project. If a project is of between one
and three months or the effort required is between three and sixteen person
months, it is considered a small project. If a project is of between three and six
months or the effort required is between sixteen and forty-eight person months, it is
considered a medium project. If a project is greater than 6 months or requires more
than forty-eight person months, it is considered a large project.
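
As an illustration, the thresholds above can be read as a simple classification rule. The Python sketch below assumes that, when duration and effort point to different bands, the smaller band wins; the text does not state how such conflicts are resolved.

def classify_project_size(duration_months: float, effort_person_months: float) -> str:
    # Duration covers requirements through system testing, in calendar months;
    # effort covers on-site plus offshore work, in person-months.
    if duration_months < 1 or effort_person_months < 3:
        return "very small"
    if duration_months <= 3 or effort_person_months <= 16:
        return "small"
    if duration_months <= 6 or effort_person_months <= 48:
        return "medium"
    return "large"

print(classify_project_size(duration_months=8, effort_person_months=60))  # large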

The table shows the recommended life-cycle models for the different types of
development projects in Cognizant. A large project can follow the waterfall,
incremental, or iterative model. A medium project follows the waterfall or iterative
model. Small and very small projects follow the waterfall model. Most of these
recommendations follow for understandable reasons. A project of small
duration could run into problems if there were too many cycles of delivery as in
iterative. Also, unless a project is of a large duration, breaking it up into increments
does not make sense and might lead to complications from multiple deliveries. The
following pages will give you details on these three types of life-cycle models.
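
These recommendations can likewise be captured as a small lookup, shown here as a sketch alongside the size classification above; the lists reflect only what the text states.

# Recommended life-cycle models per project size, as described above.
RECOMMENDED_MODELS = {
    "large": ["waterfall", "incremental", "iterative"],
    "medium": ["waterfall", "iterative"],
    "small": ["waterfall"],
    "very small": ["waterfall"],
}

print(RECOMMENDED_MODELS["large"])  # ['waterfall', 'incremental', 'iterative']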

A waterfall model is recommended when:

The requirements are clear

The project duration is smaller


The design or technology is proven or mature

The proposed application is very similar to an existing system

The customer does not have any intermediate release requirements

An incremental model is recommended when most of the requirements are well-understood, with the exception of some TBDs, and the total project duration is
greater than 3 months. In this model, the initial increment may focus on validating
architecture and key technical risks. Functionality is built over increments, such that
the initial increment may focus on high-priority functionality, and the last increment
may focus on nice-to-have features.

The iterative model is recommended when:

The requirements are not clear.

The initial planning broadly scopes the iterations and arrives at a roadmap

The initial iterations may involve architecture, UI frameworks, etc.

Subsequently, every iteration focuses on the set of requirements that need to be built into that iteration

The iterations may be individually delivered to the customer

A release to production can combine one or more iterations

This model is mainly suited for time-and-material-based engagements.

The selection of the appropriate model depends on a number of factors. It depends upon the project type, client requirements and priority, nature of customer, nature
of requirements, technology to be used, budget and many different factors. For
example, stable requirements, well understood technology, small software solutions
for well understood problems are criteria that can seem apt for the waterfall model.

The processes recommended according to the Cognizant quality system are combined with the processes required by the client to arrive at the project's process model. This process is then tailored to suit the different demands of the project. Tailoring to suit your project requirements is an essential, defined process because the project needs to be aligned with it. Tailoring of the recommended
process may be required due to customer requirements or project-specific
requirements. This procedure is followed for all types of projects.

Here is a schematic description of the process model definition.

The steps include:


Determining the size of the project based on duration and effort, contractual
requirement, and project scope

Selecting an adequate project life-cycle model to arrive at an operational model

Merging with the customer's process and project-specific requirement to form the
project's software process

Tailoring for project needs to arrive at the process model

Process model configuration is the single most important activity in finalizing the process model before the start of the project. The process model is the basis for activities such as project planning and tracking; application build, integration, testing, and release (branching or merging strategies may be derived from it); distribution of work within and across teams; planning for rollouts; and so on.

All the development life-cycle models follow the parallel processes of development
and delivery management. While the development process deals with the project
development stages and associated activities, delivery management deals with the
project management activities relevant throughout the project.

The types of application value management projects that are handled by Cognizant
are:

Maintenance: This means taking ownership of the software system and ensuring
that it meets current and future needs of all users at prescribed service levels.

Mass Change: These are projects involving change of attributes of a system from
one to the other.

Examples are mass conversions, such as field expansion, internationalization, decimalization, Euro conversion, rehosting, upgrade projects, and database conversions.

Testing: These are projects whose scope is limited to the testing of a system.

Any AVM project consists of taking ownership of the software system and ensuring
that it meets current and future needs of all users. A maintenance project typically
involves providing the applicable activities from the following:

Production support

Bug fix

Minor enhancements

Major enhancements

Testing
Documentation

Re-engineering, etc.

The process of any application value management project consists of planning, knowledge transition from the current team to the maintenance team, guided
support during which the software is maintained by the maintenance team with the
guidance of the current team, followed by transition to full-fledged support, and
finally the steady state during which the maintenance team supports the application
fully.

The picture shows the process for one typical maintenance project.

Similar to development projects, in AVM projects also, the processes recommended according to the Cognizant quality system are combined with the processes
required by the client to arrive at the project's process. The process is again tailored
to suit the different demands of the project.

The schematic description of the process model definition for an AVM project is
shown here. The steps include:

Selecting adequate project life-cycle based on the type of AVM project, such as
maintenance, mass change or testing, and project scope to arrive at an operational
model

Merging with the customer's process to form the project's software process, and

Tailoring for project-specific needs to arrive at the process model

The AVM life-cycle also follows the parallel processes of maintenance and delivery
management. Take a look at the process models for maintenance, mass change,
testing.

Some of the critical success factors of an application value management project are:

Proper Knowledge Transition - Ensures that knowledge of the system is properly transitioned to the team that will ultimately maintain the system.

Well-defined service-level agreement - An agreed-upon timeframe within which a service request, such as a bug fix or production support request, is supposed to be responded to and ultimately resolved.

Well-defined tracking mechanism - The performed tasks need to be logged and tracked as part of documented records. Tracking is also required to ensure that the
service-level agreements are met.
Generally, AVM projects use the eTracker tool to log the activities and to track
adherence to SLAs. Maintained Documentation details the process for request
categories, such as production support, major bug fix, minor bug fix, etc.

Proper channels of escalation in case of unresolved problems.

While designing for delivery for development as well as AVM projects, it is imperative that all customer requirements are addressed. Apart from the stated
functional requirements of the system, there are typically other requirements which
have to be derived from the client. They may be explicitly stated by the client or
may have to be extracted from them. In most cases, one has to extrapolate the
complete requirements, inclusive of must-have and nice-to-have features, from the requirements directly available from the client.

The requirements typically include:

Non functional requirements inclusive of speed, response time, and scalability

Adhering to service-level agreements

Prioritization of modules, tasks, etc. due to business needs

Aligning deliveries to meet client constraints

In this session, you have learned that:

The types of projects in Cognizant are Application development and Application value management

The projects can be classified in terms of size and life-cycle model

A large project can follow the waterfall, incremental, or iterative model

Medium project follows waterfall or iterative model

Small and very small projects follow the waterfall model

The process model is selected according to the life-cycle model, incorporating the
customer's required processes and is tailored to suit the specific requirements of
the project

Requirements of the customers may be stated or unstated. Unstated requirements may have to be extrapolated

When addressing a client's requirements, the following considerations should be kept in mind:

Non functional requirements inclusive of speed, response time, and scalability


Adhering to service-level agreements

Prioritization of modules, tasks etc. due to business needs, and

Aligning deliveries to meet client constraints

PROJECT MANAGEMENT CONCEPTS:

Imagine organizing a college fest. Behind the face of a day or two of enjoyment and
celebrations, what are the challenges involved for the one who is in charge of
organizing it? One runs up against budget issues, time and schedule issues,
resource problems, risks, constraints, and managing expectations. Arranging and
executing a college fest is a project. And all the challenges faced are a part of any
project in the industry.

At the end of this session you will be able to:

Study the concepts of a project and project management

Know who the stakeholders in a project are

Learn the structure of the project team and the role of team members in the project

Explain the basics of the important project management tools in Cognizant

In the previous example, organizing the college fest was a temporary endeavor and
it was unique, that is to say, every college fest is different. Similarly, any project
is a temporary endeavor undertaken to create a unique product or service and it
progresses through a number of life cycle phases. Temporary implies that every
project has a definite beginning and a definite end. Unique implies that the product
produced by the project is different in some way from all similar entities.

Some of the types of projects executed in Cognizant are Development of new applications, Maintenance of existing applications, Re-engineering of an existing application, and Mass Change or migration, such as Euro conversion or internationalization.

Whatever be the type, a project always has certain characteristics and challenges.
Just like the college fest that you saw previously, any project in the industry
involves challenges with respect to cost, risks, scope, quality, resources,
expectations, time, etc.

Therein lies the need for project management.

Project management involves balancing competing demands among scope, time, cost, resources, and quality; stakeholders with differing needs and expectations; and identified requirements or needs and unidentified requirements or expectations.
Developing software products involves more than just combining programming
instructions together and getting them to run on a computer.

Effective project management processes guide you toward disciplined, superior software engineering.

Project management involves application of knowledge, skills, techniques, and tools to project activities in order to meet or exceed stakeholder needs and expectations
from the project. You will learn about the stakeholders in a project on the next
page.

What are stakeholders and who are the stakeholders in a project? A stakeholder is
anyone who has an interest in the project. They may be at the client's end, internal
to the project, or external. The external are Sub-contractors, Suppliers, External
consultants, and Service providers. The internal stakeholders are Senior
management, Project manager, Human resources, Business development, Finance,
Administration, Training, Internal systems, Quality assurance, and SEPG. The client
stakeholders are Sponsor, Contact person, End-user, and Outsourced party.

A project manager's job consists of balancing numerous stakeholder expectations.


What are the stakeholder expectations? Examples of stakeholders and their
expectations could be: The customer who asks: When can I get the product? What
are the features that exist? What are the additional features required? How much
would I have to pay? The employee who asks: What is my role and the job or task
specifications in this project? What learning opportunity or new skill will I get from
this project? The shareholders who ask: What is the company doing to minimize
overheads and maximize profits? What are the efforts and direction for new
avenues of business this year? How does this company compare with and stay
ahead of the competition? And finally the project manager himself who asks: When
can I get the product out? What is the quantum of work involved? How many people
and what skill will this project need? What would be their utilization and availability
on this project? How much would the cost or investment for resources be? Managing
the expectations of stakeholders can be demanding and it is the primary job of the
project manager.

The project organization structure of a typical offshore-onsite delivery model in Cognizant is shown in the illustration. As evident from this illustration, every
individual has a role to play. While the people at the top of the picture are there to
provide guidance and to ensure communication with the end customer and proper
execution of the project, the team members are essential for performing the
different activities that go on to satisfy the requirements of the project. All the
while, the support groups including NSS, Admin, HR, and quality assurance provide
all the necessary support for the project to meet the various infrastructural,
financial, resource, and quality requirements. Big or small, everyone has a role to
play in the project.
It is a basic rule of project execution that every individual in the project has a role to
play. Every task assigned to the individual is important and any task, however
insignificant it seems to be, is crucial to the project.

It is very important that individual responsibilities are addressed according to the plan and in coordination with other team members for proper alignment with each
other. Otherwise, there are chances of making individually perfect pieces, which will
not work in unison. This concept of teamwork is especially important since in
Cognizant we follow the onsite-offshore model with the members of the project
team working in geographically different locations. Each task needs to be aligned to
the project objective and goal. Otherwise, we may end up with something like this...

The most important project management tasks during the execution of the project
involve:

Planning for the project's cost, effort, and schedule at the beginning

Tracking the execution of the project according to the plan

To track the progress of the project, one needs to monitor whether the effort, cost,
and schedule are following the plan or are deviating. In case of deviations, one
needs to take corrective action, re-modify the plan, etc. To track the effort and the
cost of the project, it is of extreme importance to log the time spent by each
individual in the project against the activities performed by them. For this purpose,
each associate is expected to log his or her activities in the PeopleSoft tool. A PeopleSoft demo follows.

In this session, you have learned that:

Projects are finite and result in producing a unique product

Project Management consists of ensuring timely delivery of quality products within a budgeted cost while managing all stakeholder expectations

A stakeholder is anyone who has an interest in the project. They may be at the
client's end, internal to the project, or external

(a) The external stakeholders are sub-contractors, suppliers, external consultants, and service providers

(b) The internal stakeholders are senior management, project manager, human
resources, business development, finance, administration, training, internal
systems, quality assurance, and SEPG

(c) The client stakeholders are sponsor, contact person, end user, and
outsourced party

The project organization structure in Cognizant is a typical offshore-onsite delivery
model, where every individual has a role to play

(a) The officials at the top end of the hierarchy provide guidance and ensure
communication with the end customer and proper execution of the project. The
team members are essential for performing the different activities that go on to
satisfy the requirements of the project

(b) The support groups including NSS, Admin, HR, and quality assurance provide
all the necessary support for the project to meet the various infrastructural,
financial, resource, and quality requirements

The details of the time spent by individual team members need to be logged in
Peoplesoft

The project's details regarding allocations and operational models need to be documented in Prolite

DELIVERY MANAGEMENT:

Obtaining a degree is more than studying and taking the test. It additionally
involves arranging for hostel accommodation, arranging for commuting services,
setting up a time table, selecting a study environment, identifying resources and
study material, having periodic meetings with the teachers, receiving the grades,
and finally celebrating the graduation.

Similarly, a software project does not begin and end with the software development
life cycle consisting of requirements, analysis and design, coding, testing, and
delivery stages. Additionally, it involves activities throughout the project life cycle
such as business understanding, configuration set up, infrastructure set up,
execution, progress analysis, change management, and finally closure. All these
activities fall under the delivery management framework. Delivery management is a
new framework to address all project management practices across the entire life
cycle of a project. This framework describes the various stages in project management, covering for each stage the entry and exit criteria, tasks, inputs and outputs, tools and techniques to perform the tasks, responsibilities, and verification and validation criteria.

The different processes of delivery management are Proposal, Formalization, Startup, Execution, and Closure. The following pages will give you details on these
processes.

The proposal phase involves understanding the business, making high-level estimates, and doing the initial risk assessment.
The formalization phase deals with the formalization of the project. A signed letter
of intent or contract or statement of work is obtained during this phase. The work
order is raised in Peoplesoft.

The execution phase involves delivery of product, tracking activities against the
plan, performing defect prevention, communicating with the client, sending status
reports, managing change, managing risks, having peer reviews, etc.

The closure phase consists of obtaining the project sign off. Finally, project
retrospection is carried out covering best practices, tools or re-usable components
developed, lessons learned, project performance against the quality goals, and
associated learning.

The delivery management phases run parallel to the SDLC phases in a project. The
two diagrams on the page show how the different delivery management phases are
aligned to the SDLC phases of a standard development project and a maintenance
project.

In this session, you have learned that:

Delivery management involves all the project activities other than standard SDLC
phases that are required throughout the project life cycle for project execution

Delivery management phases include:

a) Proposal, involving Estimation, Business Understanding, Risk Analysis

b) Formalization, which includes Formal contract, Raising the Work Order in Prolite

c) Startup, consisting of Project Configuration, Revised Estimation, Drawing up the project plans, and Business continuity plan

d) Execution, which involves the tracking activities against the plan, performing
defect prevention, communicating with the client, Sending status reports, Managing
change and risks, Having peer reviews, etc.,

e) Closure, which involves obtaining project sign-off and performing project retrospection

f) The delivery management phases go on in parallel to the SDLC phases in both development and maintenance projects

Metrics:
Measurements act as indicators of progress and are understood by everyone
irrespective of experience level and technology background.

One cannot chase a target without knowing it. Nor can one know about success and achievements without defining success in measurable terms. On the other hand, one cannot accurately gauge the degree of danger one is in without appropriate figures.

Measurement is an essential part of life, and we cannot control what we cannot measure.

The performance of the project has to be measured so that it can be kept under
control.

A project works toward a goal; the measurements along the way compare the actual results of the project against the projections to determine whether the project is on track or deviating from the goal.

In the manufacturing industry, we know we have to measure the length, breadth, height, or other tangible specifications of the product. But when the generated
products are formless as in the case of software, what does one measure? The
process involved in software development is measured along with the final software
product.

The software is measured using software metrics - "Quantitative measures of specific desirable attributes of software". To arrive at metrics, some basic measures
such as schedule and effort are collected and combined to produce meaningful
indicators of the progress of software development. For example, to find out how
productive a team is, the size of the software project and effort required by the
team to do the project are considered as the base measures. Next, we divide size
by effort to find the productivity metric for the team.
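
A minimal Python sketch of that productivity calculation follows; the size unit (function points) and the figures are illustrative assumptions.

size_function_points = 400   # size of the delivered software
effort_person_months = 25    # effort spent by the team

productivity = size_function_points / effort_person_months
print(f"Productivity: {productivity:.1f} function points per person-month")  # 16.0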

Some examples of base measures that are considered for measuring the software
process and product are schedule, effort, size, and defect. These measures are
combined in different ways to form the different software metrics. The following
page will give you some examples on process metrics for application development
projects.

The metric schedule variance is derived from the measure schedule by comparing the actual with the planned dates. It is a measure of whether or not the project is meeting the planned dates for the start and finish of modules. Similarly, the metric effort variance is derived from the measure effort; it is a measure of whether the planned effort matches the actual effort. We can also compute the load factor,
which is a measure of whether the people in the project are adequately, lightly, or
heavily loaded. All these are examples of process metrics. They indicate whether
the process of producing software products is adequate or not. Some of the other
process metrics are review efficiency and requirement stability. In the next page,
you will see some examples of product metrics in application development projects.
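
Before moving on, here is a sketch of how such variance metrics are commonly computed as a percentage of the plan; the exact formulas used in the quality system may differ.

def variance_pct(planned: float, actual: float) -> float:
    # Positive values mean the actual exceeded the plan.
    return (actual - planned) / planned * 100.0

print(variance_pct(planned=20, actual=23))  # schedule variance: 15.0 (three days late on a 20-day plan)
print(variance_pct(planned=50, actual=47))  # effort variance: -6.0 (under the planned person-days)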

The metric defect density is derived by combining defects and size. This gives an
indication of how robust the product is. This is an example of a product metric.
Product metrics indicate whether the quality of the produced software product is
adequate or not.

Other product metrics are maintainability, reliability, and availability.
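
As a sketch, defect density is simply defects divided by size; the KLOC unit and the figures below are assumptions for illustration.

defects_found = 18
size_kloc = 12.0   # thousands of lines of code delivered

defect_density = defects_found / size_kloc
print(f"Defect density: {defect_density:.2f} defects per KLOC")  # 1.50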

Similarly, in AVM projects, there are different measurements that evolve into
metrics.

For Example:

Percentage of requests meeting service-level agreements

Percentage of fixes without escalation

You have seen measurements and metrics. However, metrics and measurements
are meaningful only if we have a goal against which we compare the metrics to find
out whether or not the project is on the right path. How are goals set? The business
objectives of the organization are laid down. One example of a business objective
may be to meet delivery commitments on time. These business objectives are then
translated to one or many process objectives. Following the previous example, the
process objective aligned to meeting delivery commitments on time may be
translated to having less than 5.86 percent variation from the schedule during the
process of coding and unit testing. These process objectives are mapped to sub
processes, which make up the process. In our case, the process of coding and unit
testing is made up of coding, code review, and unit testing. After the sub processes
are identified, metrics are attached to the sub processes, which can be used to
measure the performance in the sub processes. In this example, coding is mapped
to the coding schedule variance, code review to the code review schedule variance,
and unit testing to the unit testing schedule variance. Finally, a goal is set for each
of these metrics to ensure that the process objective is met. Here, for example, the
goal for the coding schedule variance is 1.65 percent, the code review schedule
variance is 0.33 percent, and the unit testing schedule variance is 1.32 percent.
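
The cascade in this example can be written down as a small mapping; the figures are the ones quoted above and are illustrative rather than actual OLBM values.

# Process objective: less than 5.86 percent schedule variation during
# coding and unit testing, broken down into sub-process goals (in percent).
subprocess_goals = {
    "coding schedule variance": 1.65,
    "code review schedule variance": 0.33,
    "unit testing schedule variance": 1.32,
}

for metric, goal in subprocess_goals.items():
    print(f"{metric}: goal {goal} percent")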

Here are some of Cognizant's business objectives:

In terms of finance - Improve profitability or reduce cost overruns

Through process - Improve productivity, Meet on-time delivery commitment, Reduce development cycle time, Improve product quality, Reduce cost of quality, Reduce rework

With respect to customer - Improve customer satisfaction index

With regard to people - Improve training effectiveness and internal satisfaction of the associates

Business objectives are at the organization level and will span across projects and
support groups.

Along with the business goals, a set of metrics derived from the data collected from
past projects as well as data from industry figures are set as organizational
baselines. These baselines are used as a basis to arrive at project-specific quality
goals. They give an indication of the capability of the organization. The
organizational baseline values for the metrics are documented in the OLBM or
organizational-level benchmarks, which contain the mandatory and optional metrics
for each type of project, the goals associated with them, and the upper and lower
control limits for each metric. Each project sets its goals by either following the
goals laid down in the OLBM or setting their own goals as appropriate for the
specific project characteristics.

Along with goal setting, the projects have to formalize the data-collection tools to
collect the actual measures for the project for the metrics to be computed and
compared to the goals. The tools that are generally used for data collection are
Prolite, e-Tracker, Time sheet, Defect logs, and Project plan.

How are the metrics used? Periodically, metrics are collected and analyzed. As the
metrics are analyzed, one can get a quantitative idea of whether the project is
proceeding along the right track or not. If a metric deviates significantly from the
goal so as to come close to or cross the control limits, it indicates something is
wrong in the process. This necessitates corrective action. For example, too much
schedule or effort variance may mean that our initial estimates were wrong and
may force us to revise our estimates for the project.
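
A sketch of that periodic check might look like the following; the "close to a limit" rule and the sample values are assumptions, not the actual Q Smart or OLBM logic.

def check_metric(value: float, goal: float, lcl: float, ucl: float) -> str:
    # Compare a collected metric against its goal and control limits.
    if value < lcl or value > ucl:
        return "outside control limits - corrective action needed"
    if abs(value - goal) > 0.8 * (ucl - goal):   # assumed 'close to limit' rule
        return "close to a control limit - investigate and consider re-planning"
    return "on track"

print(check_metric(value=7.2, goal=1.65, lcl=-5.0, ucl=8.0))  # close to a control limit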

Apart from the projects, the metrics collected across the organization are analyzed
by a group called the metrics council and the organization's baselines are modified
periodically based on the analysis. This revision of baselines indicates a change in
the performance capability of the organization.

In this session, you have learned that:


Metrics are quantitative indicators that indicate whether

(a) The project is on track or not

(b) The quality of the product is good or not

The software processes and products are measured and controlled with metrics

Metrics are derived as a mathematical combination of base measures (examples of base measures being schedule, effort, size, defect, etc.)

The organizational baseline contains the goals and control limits for each
compulsory and optional metric for all types of projects. The OLBM depicts the organization's capability

Projects set their own project goals based on organization-level goals or they can
set appropriate goals based on the project characteristics

Metrics are compared against the baseline figures to check the progress of the
project. If the figures cross the control limits or are deviating too much from the
goal, it becomes necessary to take corrective action

Metrics are generated from the project data entered in Prolite and e-Tracker

BQSEA1:

SOFTWARE ENGINEERING AT COGNIZANT:


Before you delve deeper into software engineering, you must first understand what
software is. A software product is a set of computer programs performing a specific process, together with the documentation associated with those programs. Software products can be developed for a defined market of customers. An
example would be an accounting package, applicable across industries. On the
other hand, software products can also be customized and developed for one
particular customer. An example would be a customer banking system for a specific
bank.

Engineering is a science. Software Engineering involves art, craft, and science. It is an amalgam of artistry, craftsmanship, and scientific thought.

You will now move on to the development of a software product. In most engineering
disciplines, specifications are the first step in the development of a product.
Consider the case of house construction. One starts with specifications, goes on to
design, and finally building and finishing the product. Similarly in software
development, one starts with product requirements, followed by architectural
details, and then proceeds to building, that is, developing the code. It is then
followed by reviewing and installing the product.

Coding and development is one of the major activities in Software Engineering. But
software engineering goes much beyond coding. It consists of various activities to
encompass all aspects of software production, such as requirements, specifications,
design, coding, testing, integration, documentation, deployment, and maintenance.
Coding would occupy as little as 5 percent of the total work involved in a Software
Engineering Project. Although artistic and scientific in its scope, it has to adhere to
several time-tested processes pertaining to the different aspects.

Now that you know the processes involved in software development, take a look at
the number of people involved. They are spread across the managerial, technical,
and end user cadre. And like any other industry, software is linked to peripheral
issues, such as business, contractual, legal, and environmental. Hence, remember,
Software is "Not Just Some Pretty Code".

As mentioned, the process of creating software is more than coding -- it involves people, processes, and time-tested activities. Developing software offers opportunity for individual expression, but it needs to be brilliance encapsulated in a
framework. Small programs can be written in an ad hoc manner by a single bright
individual. However, complex software solutions are seldom developed by armchair
programmers. For complicated systems to be successfully built, one has to be
innovative and good while sticking to rules, methods, processes, and teamwork.

Software development can be compared to art. Imagine building the Sistine Chapel
alone and without a blueprint. The best works of art require discipline, teamwork
and planning.

In this session, you have learned that:

Software engineering is the art, craft, and science of building large, important
software systems. It is an amalgam of artistry, craftsmanship, and scientific thought

While being a major aspect, software engineering goes much beyond coding

Software engineering is akin to art, which cannot succeed without a blueprint and
teamwork

SOFTWARE PROJECT STAGES:

Consider this scenario. Doing things using a development methodology, and in a step-by-step manner, ensures a successful product, regardless of which industry the product belongs to.
Software Engineering is not just Code Construction. Each Software Application that
is created follows a well defined set of activities, and has a well defined Life Cycle
from initiation to the retirement of the Software Application. Similar to car manufacturing, a software application development project has well-defined stages
that are implemented in a predefined fashion to create the software application.

The various stages in a Software Development Life Cycle are Requirements, Analysis, Design, Coding, Testing, Implementation, and Maintenance. In the next few
pages, you will learn about each of these stages in detail.

In the Software Requirements Stage, the required functionality or behavior of the software is identified by the Software Engineer. These Requirements are documented in the Software Requirements Specifications Document. The Analysis and Design stage translates requirements into a representation of the software that
can be assessed for quality before coding begins. In this stage, typical documents
that get created are the functional specifications document, design documents, and
the program specifications. In the coding stage, the executable that can be read by
the computer is created. Individual modules of programs are assembled together to
create the final executable of the software application.

Once code has been generated, program testing begins during the Testing stage.
The testing process focuses on the logical internals of the software, ensuring all
statements have been checked for correctness. It also focuses on the functional
externals, that is, conducting tests to uncover errors and to ensure that defined
inputs produce actual results that agree with the expected results. At the
Implementation stage, after all tests have shown that the completed software works
as intended, it is deployed in its production environment. Implementation is a
planned activity, and the steps pertaining to it are documented as part of the
Roll-Out Plan. A series of checks and reviews are conducted in this stage to ensure
that all components of the completed software have been installed correctly.
Software undergoes change after it has been deployed and delivered to the customer.
Change will occur because errors may be encountered, because the software needs to
be adapted to changes in its external or operating environment, or because the
customer requires functional or performance enhancements. These issues are resolved
during the Maintenance, or Post Implementation, stage.

You will now learn about the basic building blocks of any stage. The basic building
blocks of a stage are tasks, and activities explain how each task needs to be
performed. In the next few pages, you will learn in detail about the elements of
any software development stage.

You will now learn about the various elements of a stage. The Entry Criteria
provides the inputs, which can be documents or tasks. It is followed by the Task, or
list of activities, that is carried out to complete the process. The Verification
consists of reviews and approvals that confirm the adequacy of the activities done
during the Task period. The stage ends with the Exit Criteria, which consist of work
products or documents that may serve as the Entry Criteria for the next stage.

Here is an example that describes the elements of the Requirements Stage. The
Entry Criteria for the Requirements Stage is the business need. For example, the
client requires a system that will automate the process of banking according to his
needs. Tasks of the stage would include activities like requirement capture,
requirement analysis, and requirements documentation. Work products created in this
stage could be completed requirements-gathering checklists and the Software
Requirements Specification. As part of Verification, the completed SRS document is
reviewed by the Project Manager and approved by the Client Representative. The
signed-off SRS is the Exit Criteria for this stage, and it becomes the Entry Criteria
for the next stage, Analysis and Design.

In this session, you have learned that:

The SDLC is a sequence of steps that organizes the development of a software product

The various stages in the SDLC are Requirement, Analysis, Design, Coding, Testing,
Implementation, and Maintenance or Post implementation

The building blocks of any stage are Activities and Tasks

The elements of a stage include Entry criteria, Task, Verification, and Exit criteria

REQUIREMENT DEVELOPMENT AND MANAGEMENT:

In the Software Requirements Stage, the required functionality or behavior of the
software is identified by the Software Engineer and documented in the Software
Requirements Specification (SRS) document. The Entry Criteria for this stage is the
business need, its tasks include requirement capture, requirement analysis, and
requirements documentation, and the signed-off SRS is its Exit Criteria, which feeds
the Analysis and Design stage. The SDLC stages and the elements of a stage are as
described in the preceding session.


SOFTWARE ANALYSIS AND DESIGN:

In the analysis and design stage of software development, the focus gradually shifts
from "What to build" to "How to build". Over the next few pages, you will learn
about the analysis and design stage in detail.

You must be aware that the Requirements Specification Document acts as the exit
criteria of the Requirements Stage. This same document is the entry criteria for the
Analysis Stage. The Functional Specifications Document is the exit criteria for the
Analysis Stage and, in turn, the entry criteria for the Design Stage. In the Design
Stage, the Detailed Design Document is the most important document that gets
created, and it is used as the basis of code construction in the Code Construction
Stage.

Analysis and Design are among the foremost stages in the software development cycle.


Analysis is the software engineering task that bridges the gap between the software
requirements stage and software design. The objective of software analysis is to
state precisely what the system will do to provide a solution to the client's need at
a functional level. This is captured in the Functional Specification Document.

Design creates a detailed Design Document that acts as the "blue-print" for the
developers or the team that will construct the code to create the system.

The typical elements of software design include Program Architectural Design, Data
Design, Interface Design, and Component Design.

This is the overall Architecture Design for the SmartBook System. It defines the
relationship between the structural elements of the software application being built.
The architecture for the system needs to be built as part of the Software Analysis
and Design Stage. The Data Design specifies the data structures needed to implement
the solution. It includes the database or file system space requirements. It also
includes table or layout details, such as the table or record name, the column or
field names and descriptions, the type and length of each column or field, default
values, edit or validation conditions associated with a column or field, and details
of all keys or indexes of a table or record. The interface designs describe how the
software communicates within itself, with the systems that interoperate with it, and
with the humans who use it. The Interface Design for the system needs to be built as
part of the Software Analysis and Design Stage. The Component-level design transforms
the structural elements of the software into a procedural description of the software
components. It includes program specifications, that is, the functions or algorithms
that define the procedural design.
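
To make these data design elements concrete, here is a minimal, illustrative sketch
in Python; the SmartBook table and column names are hypothetical and invented for
this sketch, not taken from the course material.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Column:
        name: str                         # column or field name
        description: str                  # what the field holds
        data_type: str                    # type of the column or field
        length: Optional[int] = None      # maximum length, where applicable
        default: Optional[str] = None     # default value, if any
        validation: Optional[str] = None  # edit or validation condition

    @dataclass
    class TableLayout:
        table_name: str
        columns: List[Column] = field(default_factory=list)
        keys: List[str] = field(default_factory=list)  # keys or indexes of the table

    # Hypothetical example entry, for illustration only.
    book_table = TableLayout(
        table_name="BOOK",
        columns=[
            Column("BOOK_ID", "Unique identifier of a book", "CHAR", 10,
                   validation="Must be non-empty and unique"),
            Column("TITLE", "Title of the book", "VARCHAR", 200),
            Column("STATUS", "Availability status", "CHAR", 1, default="A",
                   validation="One of A (available) or I (issued)"),
        ],
        keys=["PRIMARY KEY (BOOK_ID)"],
    )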

Here is a case study to understand the concept of software analysis and design
better. Mercury Travels is a premier Travel Agency of the country. Mercury wants to
automate its business processes. Requirement Analysis reveals that the specific
requirement of Mercury is to create an Air or Rail Ticket Booking System for the
Travel Agents. Other business processes will not be included in the current
automation initiative.

To begin with, the software requirement specifications document is put in place.


The business need is the basis for creation of the software requirement specification
document. This activity is completed in the Requirements Stage of an Application
Development Project.

Each of the requirements is then decomposed further to create the Software
Functional Specifications. The Functional Specifications express the system to be
built in a language that designers of the system understand. The problem presented
by the requirements is analyzed using analysis models. Creation of the analysis
models and the Functional Specifications often takes place simultaneously.

Here, you can see an analysis model that is used to express the problem. Such a
diagram is called a Data Flow Diagram, in which each bubble indicates an activity
taking place. The box, on the other hand, is used to denote an external source or
sink of information. The parallel bars denote a data store or file, while an arc is
used to denote the flow of data among the other three components. Note that SRS 01.1
has been expressed using processes 2a and 2b.

Bubble 1, which denotes the activity "Determine form of travel", has been factored
further into a lower-level DFD.

The Analysis Models form the basis of the Program Specifications, which are an
essential component of the Design Document. In addition to the Program
Specifications, the Design Document includes other details like the Data Design and
the Program Architecture Design.

Data design involves the overall data model design for the application. Program
hierarchy and program-level interfaces are addressed in the program architecture
design.

Look closely at the examples here. There are two ways you can visualize the building
or construction of a house. The builder may appoint a bricklayer to create the walls
and a carpenter to create the windows, fit the windows into the walls, and so on,
gradually creating the house.

Alternatively, the builder may fit the standard models of doors, windows, roofs,
walls, and rooms available in the market to create the house. This is how most
buildings get built now.

There are two approaches to creating the Design Specifications in a project. One is
the Structured Analysis and Design technique, which can be traced to the 1970s. The
other is a newer concept called the Object Oriented approach, which was developed as
a concept from the 1990s.

SSAD, the Structured Analysis and Design technique, makes heavy use of functional
decomposition. System behavior takes a secondary role here, as in the case of
building a house brick by brick.

The object-oriented technique, on the other hand, focuses on system behavior. In
recent years, the OOAD technique has become very popular with software engineers.
Objects represent a sample of expected system behavior, and they are called upon to
function as a whole. Reusable and common objects help in achieving greater
modularity and are manageable from the project management perspective.

In this session, you have learned that:

In analysis and design, the focus gradually shifts from "WHAT to build" to "HOW to
build" a solution

Analysis is the software engineering stage that bridges the gap between the
software requirements stage and software design stage

The Functional Specification Document is the most important document that is created
in this phase

In the software design stage, detailed design document is the most important
document that is created

This acts as a "blue-print" to be used by the eventual implementers of the system

The elements of software design are Program Architectural Design, Data Design,
Interface Design, and Component Design

There are two ways in which software engineers visualize "HOW to build" a solution
during the analysis and design stage, namely, the Structured Analysis and Design
(SSAD) technique and the object-oriented technique

CODE CONSTRUCTION:

Your builder has taken your house requirements and has given you the building
plan and the prototype of your house. So in your mind, you have the picture of your
dream house ready. What do you think is the activity that the builder has to engage
in now? Yes. You guessed it right. The builder will need to construct the house now.

You will now learn how code is constructed. Design provides the basis for code
construction. A unit test helps a developer to consider what needs to be done, as
requirements are nailed down firmly by tests. There is a rhythm to developing
software unit tests. First, you create one test to define some small aspect of the
problem at hand. Then, you create the simplest code that will make that test pass.
Then, you create a second test. Now, you add to the code you just created to make
this new test pass, but you do not write any more code until you have created a
third test. You continue until there is nothing left to test.
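
As a minimal sketch of this test-first rhythm, assuming Python and its built-in
unittest module (the add function and its test cases are hypothetical and exist
only for this illustration):

    import unittest

    # The simplest code that makes the current tests pass.
    def add(a, b):
        return a + b

    class AddTests(unittest.TestCase):
        # First, one test that defines a small aspect of the problem.
        def test_adds_two_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        # Then a second test; the code is extended only as far as needed to pass it.
        def test_adds_a_negative_number(self):
            self.assertEqual(add(2, -3), -1)

    if __name__ == "__main__":
        unittest.main()

The rhythm continues in the same way: a third test is written before any further
code is added.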

Now, you know that any unstructured piece of work is difficult to understand and
that code is no exception.

You should remember that even though completed code is an important deliverable
that is given to the customer, software engineering is not just coding. Coding is a
stage within the software development life cycle. The subsequent page explains the
coding process in detail.

The design documents and the unit test cases document are important inputs for the
coding stage. Code construction using language-specific coding standards, peer
review of code, and updating code based on review comments are the major tasks in
this phase. Peer-reviewed code is the output of the coding stage.

This page delves into details of the tasks and activities of the coding stage. It must
be noted here that peer review of code is a very important task in the code
construction stage.

A coding standards document tells developers how they must write their code.
Instead of each developer coding in their own preferred style, they will write all
code to the standards outlined in the document. This makes sure that a large
project is coded in a consistent style, that is, parts are not written differently by
different programmers. Not only does this solution make the code easier to
understand, it also ensures that any developer who looks at the code will know what
to expect throughout the entire application.

This section outlines concepts that are generic in nature and applicable to most
software tools and platforms. Platform-specific conventions and guidelines are
covered under the relevant company standard. The relevant language-specific
standard must be referred to when constructing code in a specific language.

It is necessary to have a good coding layout structure. A good coding structure
improves the readability and maintainability of code. A good layout brings out the
logical program structure through appropriate use of blank lines, spaces, and
indentation. Let us take the example of an essay written by two young students of
class five. Student A has used a clean sheet of paper with the necessary punctuation
marks, spaces, and paragraph settings, and has used uniform fonts and cases. This is
much easier to read than the essay written by Student B, which has not been written
neatly and clearly. These were the basic tenets of sentence construction that we
learned in school, and they hold true even when you write your first program.

This page deals with the good practices on code layout and programming. Code
layout deals with the structure of the code and the way it is laid out. It affects
readability and ease of modification of code.
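
As an illustrative sketch (a hypothetical Python function, not an example from the
course), the same logic is shown first with poor layout, kept as comments so the
sketch stays runnable, and then with a layout that uses indentation, spacing, and a
blank line to bring out the program structure:

    # Poor layout: the structure is hidden, making the code hard to read and modify.
    # def total(items):
    #  t=0
    #  for p in items: t=t+p
    #  return t

    # Better layout: consistent indentation, spacing around operators, and a blank
    # line separating the loop from the return make the structure obvious.
    def total(items):
        running_total = 0
        for price in items:
            running_total = running_total + price

        return running_total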

Here are the guidelines to be followed for maintaining presentation aspects of code
when fixing a bug.

This page outlines the concepts pertaining to sentence construction that are generic
in nature and applicable to most software tools and platforms. Platform-specific
conventions and guidelines are covered under the relevant company standard.

Code-level readability is not about using comments only. The main contributor to
code-level readability is not comments, but good programming style. Compare a piece
of code written in a poor style with the same logic written in a good style: even
without any comments, the well-styled version is much more readable. It takes us
toward the goal of "self-documenting code".
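
For instance, here is a hypothetical illustration in Python (the function names and
logic are invented for this sketch): the first version relies on cryptic names,
while the second reads clearly from its names alone.

    # Poor style: the purpose of the function and its parameters is unclear.
    def f(x, y):
        return x * y * 0.01

    # Self-documenting style: the names alone explain what the code does.
    def interest_amount(principal, rate_percent):
        return principal * rate_percent * 0.01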

When commenting source code, use comments judiciously so that the code remains
readable and clear.

The use of headers in a program does not add to its functionality, but it is of
immense help during the maintenance of a program.

Naming conventions for a language should be as per the recommended convention
documented in the coding checklist. The benefit of adhering to naming conventions is
that somebody going through the program can get an idea about the purpose of the
various entities from their names, thus enhancing program readability. Conventions
bring about uniformity in the way program entities like variables, symbolic
constants, procedures, and functions are named. Teams can develop their own specific
naming conventions for identification of programs.
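
As a brief, hypothetical sketch of such a team convention in Python (the specific
rules shown are illustrative assumptions, not a documented company standard):

    # Symbolic constants in upper case with underscores.
    MAX_RETRY_COUNT = 3

    # Variables and functions in lower case with underscores, named for their purpose.
    def fetch_customer_record(customer_id):
        """Return the customer record for the given customer_id."""
        return {"customer_id": customer_id, "retries_allowed": MAX_RETRY_COUNT}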

In this page, you will learn about the declaration standards in a piece of code; a
brief sketch in code follows this list:

A program consists of two basic entities, data and instructions. Data elements or
structures should be declared and initialized before (executable) instructions.

All header files and libraries used in the program (whether standard or user defined)
should be declared.

All global variables need to be declared and the number of global declarations used
should be minimized so as to reduce coupling between modules.

All unique or complex variables or data structures should be described through
appropriate comments, clarifying the reason for such complexity.

Functions and their parameters should be declared taking care to ensure that no
type mismatches occur during runtime between the calling and called module or
function or procedure.

When using arrays, remember it is cumbersome to handle arrays having more than
three dimensions. Such arrays should be avoided.
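
Here is a minimal Python sketch of some of these declaration practices; the module,
names, and values are hypothetical and chosen only for illustration:

    # All libraries used in the program are declared up front.
    import math
    from typing import List

    # Data is declared and initialized before the executable instructions,
    # and global declarations are kept to a minimum to reduce coupling.
    DEFAULT_TAX_RATE = 0.18

    # A complex structure is described with a comment clarifying why it is needed:
    # rates_by_state maps a state code to its tax rate; a dictionary is used because
    # lookups must be fast and new states can be added at runtime.
    rates_by_state = {}

    # Parameters are typed so that no type mismatches occur at runtime between the
    # calling and the called function.
    def total_with_tax(amounts: List[float], tax_rate: float = DEFAULT_TAX_RATE) -> float:
        subtotal = math.fsum(amounts)
        return subtotal * (1 + tax_rate)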

You will now learn what defensive programming is. As a programmer, you should be
able to envisage areas in your programs that can initiate errors in the behavior of
the software application. Hence, appropriate methods should be used to prevent the
occurrence of errors.

Defensive programming also helps ensure that your program is secure and prevents
unauthorized access.
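
A small, hypothetical Python sketch of defensive programming, validating inputs
before they are used so that bad data cannot trigger unexpected behavior:

    def withdraw(balance: float, amount: float) -> float:
        # Anticipate invalid inputs and fail early with a clear error.
        if amount <= 0:
            raise ValueError("Withdrawal amount must be greater than zero.")
        if amount > balance:
            raise ValueError("Withdrawal amount exceeds the available balance.")
        return balance - amount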

Expectations change and hence requirements change, and so it is but natural that
programs have to be modified in order to suit the new specifications. This means
that the program should be flexible enough to be modified with little or no effort.
This page identifies some practices that help in creating modifiable or flexible
programs.
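
One such practice, shown here as an illustrative Python sketch with invented names
and values, is to keep changeable business details in one place so that a new
specification means changing a value rather than rewriting logic:

    # Changeable details are declared once instead of being hard-coded throughout
    # the program, so a revised specification needs only a small, local edit.
    SERVICE_CHARGE_RATE = 0.02
    FREE_TRANSACTIONS_PER_MONTH = 5

    def service_charge(transaction_count: int, amount: float) -> float:
        if transaction_count <= FREE_TRANSACTIONS_PER_MONTH:
            return 0.0
        return amount * SERVICE_CHARGE_RATE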

Now, you will learn the importance of on-screen error and help messages. In this
example, the customer inserts his debit card into the ATM. The machine does not
accept the card and ejects it immediately, without showing any error message, which
frustrates the customer. The next page provides some good practices that should be
followed for text-based error and help messages.

The design of on-screen error and help messages has a strong bearing on
user-friendliness. This page puts forward some of the guidelines that could be
followed by the developer.
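
For example, in a hypothetical Python sketch (the reason codes and wording are
invented for illustration), an unhelpful message is replaced by one that tells the
user what went wrong and what to do next:

    def card_error_message(reason_code: str) -> str:
        # Instead of a bare code such as "Error 37", state what happened
        # and what the user should do next.
        messages = {
            "EXPIRED": "Your card has expired. Please contact your branch for a replacement.",
            "UNREADABLE": "The card could not be read. Please wipe the chip and try again.",
        }
        return messages.get(reason_code, "The card was not accepted. Please try again or contact your bank.")
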
In this session, you have learned that:

Coding is a stage within the software development lifecycle

The inputs for the code construction include design document and the unit test
cases document

The process of code construction involves using design document and coding
standards to create code, aligning code to the unit test cases, and peer reviewing
code before delivery

It is a good programming practice to follow the platform-specific conventions and
guidelines that have been documented. This helps the developer to write legible code

TESTING:

Testing is an activity that is used to discover errors and correct them, so that we
are able to create a defect-free product for our customer. Let us take the example
of a house. The client had specified the requirements of the house she wanted. After
delivery, the tester tested the house to find out if all the client's requirements
had been met. The tester created the test execution details document, which detailed
the scenarios or test cases, and also recorded the results of the test execution,
which are referred to as the test log.

Testing is an important stage that follows the Coding stage in the software
development life cycle. The objective of testing is to evaluate whether we have
created the system correctly. During the earlier stages, the focus was on checking
what is being built; in testing, when the end product is ready, the focus shifts to
validating that the product meets the customer's needs. Hence, the focus shifts from
building the product right to building the right product.

Now, an attempt to define software testing is made. Testing is a systematic activity
where records of test execution need to be maintained.

Testing is the process of executing a program with the specific intent of finding an
error. The success of a test is determined by the number of errors it has uncovered.
Tests can be conducted by the developer or by an independent testing team. What one
should remember is that the role of a good tester is to show the presence of defects
or errors in the software.

This page explains the major activities and tasks of the testing stage. Creation of
the test strategy is the first step. It is based on the Requirements Document, the
Functional Specifications Document, and the Design Document. The test strategy
describes the overall plan and approach to be taken for testing, the deliverables,
and the process for test reporting. The next step is to create the test cases,
containing the individual scenarios that will be tested, along with their expected
outcomes. Test cases are executed by the tester, and the results of the tests are
documented in the test log. Defects found during testing are recorded in a
defect-tracking tool such as the internal tool Prolite or the external tool Test
Director, depending on the requirements of the project. The owner of the application
being tested then updates the application, closes the defects, and updates the
defect status in the tool. Re-testing may be conducted to verify closure of defects.

In this page, you will learn who is responsible for testing a software application:

Software testing can be conducted by the developers of the system or by an
independent testing group that is part of the organization that has built the
system.

Software testing can also be conducted by the client or the ultimate users of the
system.

The team responsible for the different types of testing needs to be decided upon
during the planning stage.

Here, you will learn about the various stages in testing: Software testing is usually
performed at different levels of abstraction of the application along the software
development process by the builders of the system. There are three testing stages:
Unit testing, integration testing, and system testing. The objective and the
abstraction levels of the application to which these tests are performed are
different. Unit tests are performed on the smallest individual units of the
application, the integration tests on a group of modules and their interfaces while
the system tests are for the entire system and the interfacing external systems. In
this illustration, you can see where the three major types of testing fit into the
software development life cycle.

There is one more stage in testing that is done by the end user of the system. This
is referred to as the user acceptance test. This is to verify the functionality of the
system from the end user's perspective. Here we can see where the user
acceptance testing fits into the software development life cycle.

You will now learn about each of the testing stages in detail. First you will know
about unit testing. In this page, we see a small child building a doll house. She
checks each building block to ensure that it is in line with the design of the doll
house as she creates them. Similarly, unit testing concentrates on each unit or
component of the software as implemented in the code and checks that it is in line
with the program specification and the detailed design. The primary focus of unit
testing is to validate the logic, the structure, and the flows of the concerned
program.
Moving on to integration testing, you see that the builders of the doll house now
begin to put the individually tested blocks of the house together, that is, they
integrate the unit tested units. The primary intent is to uncover errors associated
with interfaces when the unit-tested components are integrated as a module. The
next page talks about system testing.

After the doll house has been completed, it is checked fully by the builders of the
house to ensure that it is complete and ready for habitation. Additionally, the
builders check whether the house is secure and can withstand rain, thunder,
lightning, and other environmental factors, so that it can easily be placed in its
intended environment.
System testing in software checks the performance and functionality of the
complete system after all unit tested units have been integrated as per the build
plan. It also evaluates functionality with interfacing systems. Non functional
requirements like speed and reliability are also verified during system testing.

Finally, looking at the acceptance testing, you see that the doll house has been
placed in a children's park. The acceptance test verifies whether the system created
is in conformance with user requirements when placed in its 'real' environment.
Acceptance tests are often conducted by the client or by the end users.

Now, you will learn about an important concept of testing called regression testing.
Regression testing will be done to ensure that the actions taken to rectify the defect
have not produced any unexpected effects. Regression testing should be done at all
levels of testing, such as unit, integration, system, and acceptance testing. The
following page gives you an example that will help you learn the concept of
regression testing.

As seen in the previous example of a client stating her requirement for her house, it
was observed by the tester that the location of the door was incorrect and a defect
ID was allocated to it. When correcting this defect, the constructors may remove the
door and move it to the rear side. In this process, other sections of the building may
get damaged. So when correcting the defect of incorrect door location, care must
be taken to ensure that unintended defects like cracks in the walls are not
introduced in the building. Regression testing takes care of such unexpected issues
that occur as a result of fixing defects.

This page explains what the focus is for each type of testing:

Unit testing uses code and detailed design as an input to check correctness of
individual units.

Integration testing uses the system design and the functional specification
document as an input.
System testing uses the overall functionality of the system as given in the
functional specifications and software requirements. It also evaluates the non-
functional requirements.

Acceptance testing is the test conducted periodically by client representatives to
check if client requirements have been met adequately.

Regression testing, on the other hand, retests the tested sections of the software to
ensure no unintended error has been introduced.

Here is another very important concept of software testing, that is, the test case.
Test cases are scenarios that are executed by the testers on the completed
application to determine if the application meets a specific requirement. One or
more test cases may be required to determine if a requirement is satisfied.

A good test case is one that uncovers errors in a complete manner with minimum time
and effort. Considering the earlier example of the completed house, 'check if the
color of the chimney is red' is a test case. For the same example, the test case
'check that the door does not open with a wrong key' is a negative test case. Hence,
we learn that a test case is a statement specifying an input, an action, or an
event, together with the specific response expected after execution.

This page explains the two approaches to test case design.

The white box approach which is based on the internal workings of the system and
black box approach which deals with the external functionality of the application or
its part being tested irrespective of the internal details. The following pages will give
more details on these two types of approaches.

The white box approach is used to create test cases to check if 'all gears mesh',
that is, to check if the internal operations are performed according to specifications.
This tests all the logical paths within the unit being tested and verifies if these are
functioning as required in the design.

The black box test case design approach does not consider the internal workings of
the application. It focuses on the functional requirements alone, and it is designed
to verify that, when the inputs are given correctly, the output generated is also
correct. It should be noted that black box testing and white box testing are not
alternative approaches to test case design; rather, they complement each other.
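
A minimal, hypothetical Python sketch contrasting the two approaches for a simple
validate_pin function (the function and its rules are invented for this illustration):

    def validate_pin(pin: str) -> bool:
        # Accept only a four-digit numeric PIN.
        return len(pin) == 4 and pin.isdigit()

    # Black box test cases: derived from the functional requirement alone,
    # checking expected outputs for given inputs.
    assert validate_pin("1234") is True    # positive test case
    assert validate_pin("12a4") is False   # negative test case

    # White box test cases: chosen to exercise each logical path in the code,
    # here the length check and the digit check separately.
    assert validate_pin("123") is False    # fails the length condition
    assert validate_pin("abcd") is False   # passes length, fails the digit condition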

Here are the activities performed in the testing stage for the SMARTBOOK System.
The test scenarios or test cases are logged and tracked in a tool with the detailed
information about the test case execution. Individual program units are tested as
part of unit testing and the results are logged in the tool. Subsequently, each
functional module is considered and integration tested for its functioning and logic.
All interface-related tests between program units are covered under integration
testing. The system testing follows where all the functional modules are taken
together and the application is tested as a whole incorporating the interfacing
issues between the functional modules. Finally, User Acceptance Testing is
conducted by the users of the system and the resulting errors are corrected prior to
staging the system into production.

This page explains the salient points you should remember about testing. Test
execution activity starts after code has been constructed with unit testing of
individual modules. This will be followed by integration testing and system testing.
However, it should be noted that test planning activity occurs much earlier in the
software development life cycle. In fact, user acceptance test plans and cases are
prepared along with the requirements document. To improve the quality of the code
being delivered, it is a good practice to 'Test before you code'. The model shown is
also called the V-Model where each stage is associated with the corresponding
review and a specific test case is prepared for testing at a later stage.

In this session, you have learned that:

Testing is an important stage within the software development life cycle

The objective of a good test is to show the presence of errors and not the absence
of them

A good tester should attempt to break the system to uncover undiscovered errors

The stages important in testing are unit testing, which targets a single module,
integration testing targeting a group of modules, system testing that targets the
whole system, and acceptance testing targeting the overall system and conducted
by users

There are two important approaches that are used to design test cases: the black box
approach, which focuses on the functional requirements of the software, and the
white box approach, which focuses on the internal workings of the software

IMPLEMENTATION AND POST IMPLEMENTATION

First, you will learn about the implementation stage in software development.
In this example, you see a statue of a jockey being built. This statue is created in
the sculpture workshop. It is being built as per the requirements provided by
Company 123.

Company 123 wants the statue to be installed in a park that they own. The sculptor's
installation team transfers the statue to its intended environment and prepares the
site for its installation.

After the site is ready, workers install the statue in the park for public viewing. We
can see here that there is a communication activity associated with the unveiling of
the new statue.

Similarly, in the case of software application development, after the software is
constructed and tested, it needs to be installed or implemented in its working
environment for the intended users to make use of it.

After the system is tested completely, it is delivered to the onsite team. The onsite
team implements the tested application in the client environment. Software
implementation or the deployment stage starts after user acceptance testing is
completed. It involves all the activities needed to make the software operational for
its users.

Here, the focus is to verify that the software or product that has been delivered
meets the need, that is, whether the right product has been built.

The main activities in the implementation stage are planning and defining the
rollout process, deploying the new application, training users on the new system
after the rollout, and communicating the details of the deployment to the relevant
people.

Now you will learn about the post-implementation stage in software development.
After the statue has been installed, as you saw in the earlier illustration,
complications can arise.

A part of the statue may get damaged and may need to be mended. In that case a
complaint is lodged with the sculpture company.

Stakeholders of Company 123, who own the statue, may want a new feature to be added,
or one of the stakeholders may want to change an existing feature of the statue they
had purchased.

Thus, the activities involved in the post-implementation stage are the support
necessary to sustain, modify, and improve a deployed system.

Similarly, in software application development, after the software has been
implemented in its intended environment, support may be needed because of the
discovery of a bug or defect, the need to enhance the existing functionality of the
software, or the need to change the existing features or functionality of the
software application.

Post-implementation activity may also take the form of regular warranty support.
This includes providing the support necessary to sustain, modify, and improve the
operational software of a deployed system to meet user requirements.

Post Implementation is the final stage in an application development project. This
page gives you some details on the post-implementation stage.

A process document describing the post-implementation process guides the activities
performed in the post-implementation phase, which generally consists of the warranty
period as per the contract signed with the client. It also includes helpdesk
support, fixing bugs, planning for the release of the reworked application, and all
other activities pertaining to the overall support of the system in action.

Here is an illustration of the activities performed in the post-implementation stage
of an application development project.

The requests given by the users are first classified as bugs or production support
tasks and subsequently logged in a tool like eTracker for tracking, followed by
analysis for resolution of the request. The resolution is then implemented and
delivered to the client for implementation. Production support issues are similarly
analyzed and fixed in the application prior to their closure.

Application maintenance projects have a well-defined life cycle consisting of stages
like Requirements, Knowledge Transition, and Steady State.

If the post-implementation activities are continued over a sustained period, the
project is converted to an application maintenance project and the contract is
revised accordingly.

In this session, you have learned that:

Software implementation or deployment is an activity that makes a software
application available for use

A new application is deployed as per the roll-out plan

Post-implementation is a support activity required to sustain, modify, and improve a
deployed system

This stage generally refers to the two to six month warranty contract that may be
signed with the client

The main activities in the implementation stage are, to roll out or deploy the new
application, and train users on the new system
Post-implementation activities are implemented as per the post-implementation
process document

The process tool used to manage post-implementation activities is eTracker

THE MAINTENANCE MODEL:

Software maintenance is the process of enhancing and optimizing deployed software as
well as remedying defects in the software. Here is an example of a constructed
house. Due to wear and tear, the delivered house would require continuous
maintenance. This could include any of the following:

Quick fixes, such as repairing faulty electrical or plumbing appliances.

Additional requirements to be added with changing needs, such as adding an extra
floor to the house.

Major repairs, which also require in-depth analysis and design of the solution prior
to execution, such as relaying the air-conditioning for the entire house.

At the end of this session, you will be able to:

Learn about the fundamentals of software maintenance

Study the different stages and activities of the maintenance process

Know the service-level agreements and their relevance

Identify the key issues in software maintenance

Learn about the tool, eTracker

When the software is in operation, defects are uncovered, operating environments
change, and new user requirements surface. Software maintenance is defined as the
totality of activities required to provide cost-effective support to software. This
includes modification of a software product after delivery to correct faults or
defects, to adapt the product to a modified environment, and to incorporate
additional features in the application to cater to new requirements.

Software maintenance can be categorized as:

Corrective maintenance: a reactive modification of a software product performed
after delivery to correct discovered problems. It is also termed a bugfix.

Adaptive maintenance: a modification performed on a software product after delivery
to keep the product usable in a changed or changing environment. It is also termed
an enhancement.

Perfective maintenance: a modification of a software product after delivery to
improve performance or maintainability. It is also called performance tuning.

Preventive maintenance: a modification of a software product after delivery to
detect and correct latent faults in the software product before they become
effective faults.

Here is an example of a house built earlier, which requires maintenance at a future
point in time. The maintenance person has to initially plan how the contract of
maintenance would be executed and which areas would be in the scope of the
maintenance. He might exclude electrical maintenance from the contract, and he would
decide on the team that would be responsible for maintenance and other business
issues. After the contract is signed, the architect responsible for the maintenance
studies the building plan in detail, including the plumbing layout. Then, the
architect passes on the details to the maintenance team. This stage is for
developing an understanding of the existing building. Now, based on the requests of
the customer, the house is maintained over a period of time. As discussed earlier,
the maintenance can be of the nature of quick fixes or minor repairs, such as
electrical or plumbing repairs; a major change, such as repainting the house; or the
incorporation of additional requirements, such as designing the plumbing system or
incorporating air-conditioning in the house.

When the software is in operation, defects are uncovered, operating environments
change, and new user requirements surface. The principal stages of an application
maintenance project consist of planning, knowledge transition, and finally the
steady state maintenance activities.

You have seen what an application maintenance process involves. Now, you will
know about the model followed by the application maintenance process. The
application maintenance model consists of planning, knowledge transition, and
service steady state. The planning phase primarily involves understanding the need
of the customer in terms of what is expected from the maintenance team. This
involves a detailed discussion with the client to identify the requirements and
finalizing the contract. The activities in this stage are: Business planning at the
organizational level: This includes proposal development, estimating for resources
and cost, and defining the escalation mechanism. Maintenance planning at
transition level: This includes scope definition and the execution process adaptation.
And finally knowledge transfer planning, which involves defining the entire
methodology to be adopted during the knowledge transfer phase and a detailed
schedule of the K T phase.
For maintaining the existing application developed by another vendor, the
maintenance team needs to understand the functionalities and the technical details
of the system. Hence, a knowledge transition phase is required prior to the
commencement of the maintenance activities. The knowledge transition phase
primarily consists of obtaining knowledge about the application considered for
maintenance from the client, providing guided support under the supervision of the
client, and finally defining a plan for transitioning the obtained knowledge to the
team for future support. Initially, the
application identified for maintenance has to be thoroughly studied by the K T team.
This includes a detailed understanding about the business processes that the
application caters to and the functions served by the application. This also includes
understanding of the technical details about the application, the environment in
which the application is operating along with the details of interaction with the
interfacing systems. Finally, the application inventory is collected by the K T team
for providing support in future. After obtaining an understanding of the entire
maintenance scenario, the K T team performs the support activities under the
supervision of the client's team. This helps in getting familiar with the support
activities and also in defining a detailed plan for transitioning the knowledge
obtained and subsequently transferring the knowledge obtained about the system
to the entire maintenance team primarily at the offshore centre. The infrastructure
required to perform the support is also built during the stage and a knowledge
repository containing the details of the maintenance project is also built to capture
the entire information, learning, and mistakes committed during the execution of
the project. This helps in easy transitioning of resources down the project timelines.

The steady state support involves resolving the service requests sent by the client
and optimizing the processes continuously over time. This involves measuring and
analyzing metrics to identify weaknesses in the process as well as in the
application being maintained, and defining corrective measures to eradicate those
weaknesses. Finally, it involves offering the client the value additions identified
and obtained over the maintenance period. This includes proactive root-cause
analysis of recurring problems and the necessary measures for improvement. SLA-based
measurement also helps in tracking performance strictly at defined intervals at
every level.

The steady state requests can be classified based on the type of request or the level
of support and the size of the request.

The requests can be of the following types:

Production support

Bugfix, and

Enhancements

Similarly, the bugfixes and enhancements are further classified into Minor, Major,
and Super Major, based on their size.

Software maintenance sustains the software product throughout its operational life
cycle. Modification requests are logged and tracked, the impact of proposed changes
is determined, code and other software artifacts are modified, testing is conducted,
and a new version of the software product is released. Maintenance also includes
training and daily support through helpdesks provided to users and through
documentation regarding the application. The enhancement-bugfix request, popularly
called the EBR, primarily consists of the enhancement or bug description, the
technical details, the proposed resolution for incorporating the request, and the
results of the testing done after the change.
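
As an illustrative sketch only (Python; the field names simply mirror the EBR
contents described above, and the example values are invented, not a prescribed
format):

    from dataclasses import dataclass

    @dataclass
    class EnhancementBugfixRequest:
        description: str          # the enhancement or bug description
        technical_details: str    # technical details of the affected area
        proposed_resolution: str  # how the request will be incorporated
        test_results: str         # results of testing done after the change

    ebr = EnhancementBugfixRequest(
        description="Report total is rounded incorrectly",
        technical_details="Rounding is applied before tax in the report module",
        proposed_resolution="Apply rounding after the tax calculation",
        test_results="Regression suite passed; report totals verified",
    )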

The set of activities performed for software maintenance in the steady state can be
sequenced as: modification request, classification and identification, analysis,
design, implementation of the change in the necessary places, system testing,
acceptance testing, and delivery.

Based on the size, type, and complexity of the request, one or more of these phases
are integrated or eliminated from the execution cycle.

The workflow shown here illustrates the functioning of the onsite and offshore teams
in a typical maintenance scenario, describing the activities performed for the
various levels and types of support.

Here is a description of the different measurement areas and the areas where
improvement can be identified over a period of time.

Here, you will know the term service-level agreement and will see its importance in
the maintenance projects. A service-level agreement is a contractual service
commitment that describes the minimum performance criteria a provider promises
to meet while delivering a service. This is usually in measurable terms. It typically
also sets out the remedial action and any penalties that will take effect if
performance falls below the promised standard. It is an essential component of the
legal contract between a service consumer and the provider.

Finally, the value additions offered to the client include implementing SLA-based
management, which keeps a constant eye on the health of the project and gives a
measure of performance. This subsequently leads to improvement in the areas of
productivity, schedule, and finally the cost involved. The root-cause analysis done
at intervals helps in identifying the pain areas of the application and hence
focuses on correcting them.
You will now learn about some key issues and challenges faced during application
maintenance. The key issues that should be adeptly dealt with for maintaining the
software effectively can be classified as:

Technical issues, which include limited understanding, the extent of testing
possible, and the maintainability of code.

Management issues, which include staffing and process-related issues.

Cost estimation, which adopts the right methodology, whether parametric or
judgmental.

Measures with respect to Analysis, Changeability, Stability, and the Testability of
the software.

Here is an overview of eTracker. eTracker is a tool used for managing maintenance
projects in Cognizant. It is a request management tool in which individual requests
can be planned and tracked to closure. It also has features like automated metrics
management, defect management, and risk management. The tool presents the stored
information in the form of various types of reports and charts.

In this session, you have learned that:

The maintenance process fundamentally includes correction of defects, adaptation to
a modified environment, and incorporation of additional requirements. These are
termed production support, enhancements, or bug fixes

Maintenance can be categorized into proactive, reactive, correction, and
enhancement. The combination of these categories results in what are termed
Preventive Maintenance, Perfective Maintenance, Corrective Maintenance, and Adaptive
Maintenance

The three primary stages of maintenance include Planning for transition, Knowledge
transition, and Steady state

The classification of requests is based on a combination of their size and type

A service-level agreement is a contractual service commitment that describes the
minimum performance criteria a provider promises to meet while delivering a service.
This is usually in measurable terms

The key issues that should be adeptly dealt with for maintaining the software
effectively can be classified as:

a) Technical issues, which include limited understanding, the extent of testing
possible, and the maintainability of code

b) Management issues, which include staffing and process-related issues

c) Cost estimation, which adopts the right methodology, whether parametric or
judgmental

d) Measures with respect to Analysis, Changeability, Stability, and the Testability
of the software

eTracker is used for tracking requests for a maintenance project

UMBRELLA ACTIVITIES:

Any standard software process model primarily consists of two types of activities: a
set of framework activities, which are always applicable regardless of the project
type, and a set of umbrella activities, which are not tied to any single SDLC stage
and span the entire software development life cycle.

At the end of this session, you will be able to:

Define umbrella activities in a software life cycle

Explain the usage and importance of the following umbrella activities:

a) Traceability Matrix: its definition, usage, and relevance in the SDLC

b) Peer Reviews, Forms of reviews, Planning and execution of peer reviews

Umbrella activities span all the stages of the SDLC. They are not specific to any
particular life cycle stage.

The umbrella activities in a software development life cycle process include the
following:

Software Project Management

Formal Technical Reviews

Software Quality Assurance

Software Configuration Management

Re-usability Management

Risk Management

Measurement and Metrics

Document Preparation and Production

The following pages will focus on the requirement traceability matrix and formal
technical reviews.

Managing traceability is required to ensure that the requirements are carried
through properly to design, development, and delivery. The pitfalls of poor
traceability are described below.

Now, let us try to understand the concept of traceability and its importance in
software development. For example, in an organization, activities are
departmentalized on the basis of the functionality to be served, and employees are
allocated to each department; in the same way, each piece of project work can be
traced back to the requirement it serves. Requirements traceability can be defined
as a method for tracing each requirement from its point of origin, through each
development phase and work product, to the delivered product. Thus, it helps in
indicating, for each work product, the requirements that the work product satisfies.
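
A minimal, hypothetical sketch in Python of such a matrix, mapping each requirement
to the work products that satisfy it (the requirement IDs echo the earlier DFD
example, while the artifact names are invented for illustration):

    # Each requirement is traced forward to the design, code, and test work
    # products that satisfy it, and backward from any work product to the
    # requirement it serves.
    traceability_matrix = {
        "SRS 01.1": {
            "design": ["Process 2a", "Process 2b"],
            "code": ["booking_module.py"],
            "tests": ["TC-101", "TC-102"],
        },
        "SRS 01.2": {
            "design": ["Process 3"],
            "code": ["payment_module.py"],
            "tests": [],
        },
    }

    # A simple coverage check: flag any requirement with no test mapped to it.
    untested = [req for req, links in traceability_matrix.items() if not links["tests"]]
    print("Requirements without test coverage:", untested)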

In the absence of traceability, the product gets compromised, since development
cannot be prioritized based on the order of criticality of the components,
ultimately leading to missed functionality in the delivered software. Project
management is compromised due to the lack of visibility of the components of the
application being developed and their interconnections, causing a major hindrance to
continuous planning. Testing is compromised, as coverage is not verified at every
stage of the life cycle, and it becomes difficult to demonstrate that the product is
ready. Finally, maintenance becomes difficult, as identification and analysis of the
impacted work products and requirements becomes tedious. This ultimately increases
the complexity during testing.

Traceability offers several benefits. Its availability ensures correct
implementation of requirements, as traceability gives clear visibility of the
interactions among the components within the system. The forward and backward
visibility into the system helps in estimating the tasks and activities of the
project with greater accuracy through a detailed impact analysis of changes. This
leads to effective project management and planning. Since traceability provides a
highly visual view of the interrelationships between the various components, it can
be used to identify the requirements that have not been covered and hence analyze
the gaps between them. Traceability gives a complete idea of the dependencies and
relationships of and between components. For any change in requirement that is
requested by the customer, it facilitates impact analysis and simplifies maintenance
activities. Finally, traceability also helps to ensure that all work is against
current requirements and that the requirements are completely satisfied.

The roles and responsibilities with respect to the traceability matrix are explained
in this page. The project manager ensures that all required information is provided
as needed and reviews the traceability matrix for completeness. The requirement
analyst updates the requirements traceability matrix as needed and supports analysis
as needed. The designer provides the mapping of requirements to design products, the
developer provides the mapping of requirements to development products, and the
tester provides the mapping of requirements to test products.
This page details the concept of Peer Review in software projects and identifies the
importance and need of Peer Reviews. In software development, Peer Review refers
to a type of Software Review in which a work product (any software deliverable,
design, or document) is examined by its author and/or one or more colleagues of
the author, in order to evaluate its technical content and quality. Management
representatives should not be involved in conducting a peer review except when
included because of specific technical expertise, or when the work product under
review is a management-level document. Managers participating in a peer review
should not be line managers of any other participants in the review.

Peer review has to be planned at the start of the project, where the PM or PL identifies the artifacts to be reviewed and the peer reviewers. The review schedule of the individual items to be reviewed, along with the associated reviewers, is planned by the PL during project execution. The peer review is then conducted by the assigned reviewers, and the review comments are logged in a review tool such as eTracker or Prolite. The developer needs to incorporate the review comments.

In this session, you have learned that:

Umbrella activities span all the stages of the SDLC

The concept of umbrella activities focuses on Requirement Traceability Matrix

A requirement traceability matrix needs to be maintained by projects to ensure that the requirements are adequately addressed

Not maintaining a requirement traceability matrix results in problems, including unsatisfied requirements and problems during delivery and maintenance

Software Peer Review needs to be planned, performed, and logged

CONFIGURATION MANAGEMENT:

Here, we present a scenario on the day version 3.0 of the Far Flung Personnel Planner is to be released.

We see that the problem the Far Flung company is currently facing could be resolved if they could roll back to the earlier release. However, they are unable to identify the changes incorporated in the previous version or to check whether all the suggested changes have been incorporated in the latest version. There is no formal communication about the status of the changes either.

At the end of this session, you will be able to:

Learn what software configuration management is

Study the important terminologies used in software configuration management


Explain the tasks involved in software configuration management

Change management is a life cycle activity, not just a maintenance activity.

Here, we will illustrate a few basic reasons why we encounter change in software application development. First, if the business need is not clear to the customer, the way it is communicated often does not address the actual need, which at a later point of time results in a change. Second, a change may arise when the operating environment in which the system functions changes. Third, a change might result from errors committed during requirement gathering, design, or the testing phase of the life cycle.

Since you now know the fact that change is inevitable in software application
development, the next basic question that arises is how are we going to manage
these changes? To manage changes in software application development, we use
the discipline of software configuration management, which operates throughout
the life cycle of an application development.

Here is an illustration of configuration management. Configuration management is a discipline that involves a set of activities that need to be performed to manage and control the changes that arise during the software development life cycle. The discipline of configuration management is applied across the life cycle of the project. SCM is that aspect of project management that focuses exclusively on systematically controlling the changes that occur during the project by using a defined process. SCM is a support activity that makes technical and managerial activities more effective, with the basic objective of delivering a high-quality software product to the client. It involves identifying and documenting the physical and functional characteristics of software artifacts and managing their security and protection.

Now, here is an illustration of the basic questions a configuration management system helps us to answer about any document or code that is created or used in software development or maintenance. Typical questions that any SCM process should address are:

What are the artifacts to be developed within the life cycle?

What is the status of an item?

How do I identify a work product uniquely, every time I make a change and how do I
record its effect on other items?

How do I inform everyone else of the changes I have made to an existing document
or code?

Now it is essential to get familiar with some of the commonly used terminologies in configuration management and understand their significance in detail: Software Configurable Item (SCI), Baseline, Version Control, and Access Control. You will learn about these terminologies in the following pages.

The most important concept of software configuration management is the software configurable item, or SCI. Software configurable items are the basic building blocks of any software product, and changes to them need to be managed in order to control change to the software product. Here is an example of a house that relates to an SCI: the building plan, the floor design, and the doors, windows, and chimney of the completed house can be identified as the configurable items of the house. In the case of a software application, the requirements document, the design document, the code, the test strategy document, and the delivered application itself can be considered examples of SCIs.

A baseline is a specification or product that has been formally reviewed and agreed
upon. Examples of baselines are reviewed design document, approved project
plans, and accepted product. Baselines are well-defined points in the evolution of a
software application. Hence, the baselining criteria and the person responsible for
meeting the criteria need to be defined prior to planning configuration
management.

Using version control of a configurable item, we are able to identify multiple instances of the same configurable item uniquely. Assume that we have completed building a house using the baselined project plan House_Project_Plan_Version_1.0 and the baselined floor design document House_Floor_Design_Version_1.0. Now, there is a need to change the pattern of the windows of the house.

To change the window pattern of the house, we would need to re-plan the project
and recreate the floor design. The version number of the initial project plan is
incremented by 1.0 and the new plan is named: House_Project_Plan_Version_2.0.
The floor design is also updated and the version number of the initial floor plan is
incremented by 1.0 and the new floor design is named:
House_Floor_Design_Version_2.0. Based on the new baselined configurable items
House_Project_Plan_Version_2.0 and House_Floor_Design_Version_2.0, an updated
version of the house is created. The following page will give you details on Access
control.

Access control is used to maintain integrity of configurable items. Not all associates
working in a software company are allowed to access the documents pertaining to a
particular project. Only core members of a project are allowed to gain access to
documents of a project. Again, within the project, different user groups are defined
and access rights are defined for each user group. Separate work areas are defined
for each team and access is controlled within each work area.

The first task for initiating the discipline of software configuration management,
referred to as SCM, is to create the configuration management plan. The next step
is to form the SCM team as per the roles identified in the SCM plan. The third step is
to set up a library or project repository structure as per the SCM plan. Along with
this task, access rights of each team member to each repository are also defined
and implemented. Changes to all items are then implemented as per the methodology documented in the SCM plan. The status of changed items is
maintained by the SCM coordinator. All activities of SCM are subject to configuration
audits conducted by the quality reviewer.

The software configuration management plan documents the processes and methodologies that will be used to manage change in the project. It also identifies the roles of the team members who will be responsible for implementing change control in a project.

The SCM plan identifies the names of the SCM team and the roles of each member,
that is, the names of the reviewers, approvers, SCM coordinator, and other team
members who will be responsible for implementing a change.

Libraries or repositories are areas where a project stores and maintains its documents and executables. This page illustrates the repository structure for version-controlled items. The development area contains all items and documents that are in development, while the review or test area contains items that are ready for testing. The baseline area contains all approved items that are ready for project use and deliverable to the next step or stage. Old items that are no longer in use are stored in the archival area.

Here you will see how user access rights are defined for each area or repository and how read or write access is controlled in a project. To maintain the integrity of the work products, access rights to each of the folders are defined.

A software project has both version-controlled and non-version-controlled items. All items that undergo changes throughout the life cycle, such as the design document and code, are version controlled and are called documents. There are many other items that reflect the status at a given point of time; such an individual item does not undergo change, and only a new instance of it is created. These items are typically not version controlled and are called records. Examples are status reports and review records.

The naming conventions of configurable items are described here. The qualifier can be a project, a module, a sub-project name, or any combination of them, identified as appropriate to a project.

To manage changes in a software configuration item, it is necessary to identify the multiple instances of a configurable item uniquely. We use the concept of a naming convention and a version number to identify a configurable item. After a configurable item is baselined, a version number is given to it, generally starting at 1.0. As described earlier, a baselined configurable item is either reviewed or approved based on the criteria set for baselining the item. Here you can see how non-configurable items such as quality records are named. The example is that of a review sheet for a design document. In this case, the qualifier can be a project, a module, a sub-project name, or any combination of them identified as appropriate to a project.

The naming convention of other non-configurable items, such as status reports, is illustrated here. In this case, the qualifier can be a project, a module, a sub-project name, or any combination of them identified as appropriate to a project.
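
Purely as an illustration (the exact convention varies by project and is not specified in this material), such names might combine the qualifier, the artifact type, and a version number or date:

PayrollModule_DesignDocument_v1.0 (a version-controlled document)
PayrollModule_DesignDocument_ReviewRecord_Jan15 (a quality record)
PayrollModule_WeeklyStatusReport_Week04 (a status report)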

Change control is a procedural activity that ensures quality and consistency as changes are made to configurable items. Change control is the means by which changes are accepted into a project in a controlled manner without causing great instability. It manages the process for initiating and making changes to baselined software configurable items.

Status accounting keeps track of the changes made to configurable items and their current status by maintaining a history and a continuous status over a period of time. The status can be work-in-progress (WIP), under review, baselined, under change, and so on. This helps in identifying the list of changes required, the changes incorporated, and the changes pending.

The quality reviewer or SCM coordinator of the project audits all activities pertaining to configuration management. There are two types of audit that a quality reviewer performs: the functional configuration audit and the physical configuration audit. The functional configuration audit verifies that the system satisfies the specifications; this is typically verified by auditing the traceability matrix, which traces each requirement through the design, code, and test cases. The physical configuration audit verifies the typical SCM question of the status accounting of all SCIs.

In this page, you will learn about the modes of configuration management. SCM can
be either tool-based or manual or a combination of the two. Manual management
essentially involves configuring a folder structure in a file server with controlled
access rights for various areas. Tool-based management covers automatic version
control mechanism for both source code and documents, and access control. Since
the process is automatic, the chances of committing manual errors are eliminated.
Examples of SCM tools are: VSS, Clearcase, CVS, etc. SCM can be performed as a
combination of these two mechanisms.

Currently, VSS from Microsoft is the most widely used tool across projects. We will learn the various features of configuration management by using VSS as the source control tool. VSS allows automatic version management, eliminating manual version naming conventions. Instead, it keeps a history of the previous versions of the same file through frequent check-ins and check-outs. Apart from that, the details of the check-ins and check-outs can be stored by labeling each version of an item. The labels can also be used for automatic build management of the software at defined intervals by extending the tool with add-ins to VSS. VSS allows multiple degrees of folder-level access control at the group or individual level. Parallel development is also possible through branching from and merging into the main line.

Here are the SCM best practices that are followed at Cognizant.

Addressing workflow and the responsibility for change

Conditions for initiating CRs

Highlighting and planning for controlling dependencies affecting the critical path or
SLA

Handling signoff delays

Handling scope and responsibility change apart from the requirement change

A defined branching and merging strategy

Defining and enforcing the frequency of check-ins of development code

Addressing tagging or labeling frequency or nomenclature for the builds

SCM practices and other details followed at Cognizant can be accessed from Cognizant's quality management system, the QView. The demo will help you find the necessary documentation regarding the SCM practice at Cognizant.

In this session, you have learned that:

Software configuration management is a discipline used to manage and control change across the project software development life cycle

The terminologies used in SCM are:

Software Configurable Item: the components of a product that are to be controlled for managing change in the product. They are identified using naming conventions and version numbering

Baseline: a specification or product that has been formally reviewed and agreed upon

Version Control: identifies multiple instances of the same configurable item uniquely

Access Control: used to maintain the integrity of configurable items

The software configuration management plan documents the processes and methodologies that will be used to manage change in the project. It also identifies the roles of the team members who will be responsible for implementing change control in a project

The SCM plan includes Names of the SCM team members, Roles of each SCM team
member, Name and location of project libraries, User access right for project
libraries, Names of configurable items, Names of non version controlled items,
Process for change control, and Process for status accounting

DEFECT MANAGEMENT:

Some maladies at their beginning are easy to cure but difficult to recognize. In
course of time, when they have not been at first recognized and treated, they
become easy to recognize and difficult to cure. This necessitates defect prevention
and defect detection to be a part of the defect management process. This is true for
software projects as much as medical science.

At the end of this session, you will be able to:

Study the defect-tracking mechanism and

Learn about the defect prevention and causal analysis

A defect can be defined as a flaw in a system or system component that causes the
system or component to fail in performing its required function.

A defect can be of two types:

Process defect and Product defect

Prolite or eTracker needs to be used for tracking, reporting, and resolution purposes. Here, you can see the diagram of the workflow for an internal defect.

You will now learn what defect reporting involves. Defects are reported using
Prolite, eTracker, or other defect tracking tools. A defect report must include: Defect
Id, Test case reference, Defect description, Defect priority and severity, Tester
name, and Test date and time. After the defect is assigned to a developer and fixed,
the final report will include the Defect fixer's name, Date and time, and Defect fix
verification. In the next page, you can see how defects are classified.

Defects are classified in terms of severity. Severity is indicative of how severe the
defect is. This can be very high or critical, high, medium, or low. Priority is an
indicator of how soon the defect needs to be fixed and this can be high, medium, or
low.

Now, you will learn how defects can be prevented. Every project prepares a defect-prevention plan. Using the plan, the common causes of defects are identified and eliminated. Defect-prevention tasks include analyzing defects encountered in the past and taking specific actions to prevent their recurrence in the future.

This diagram depicts the defect-prevention flow. The causal analysis meetings are
planned in the defect-prevention plan. You will learn more about causal analysis in
the next few pages.

Causal analysis primarily focuses on: fixing problems as they occur, finding what, in
the process, has permitted the defect to occur, and finding what needs to be
corrected to prevent it from occurring again.

The outcome of the causal analysis meeting:

Determines the root causes and common causes.

Proposes action plans and draws up implementation plans.

Root causes for defects may occur during:

The coding and testing stages, because of lack of programming skills, insufficient adherence to standards, lack of environment knowledge, lack of testing skills, and oversight of design requirements.

Implementation, due to lack of implementation skills and lack of availability of the proper environment.

The responsibilities involved in managing a defect begin with:

Creating a team to coordinate defect-prevention activities for the organization. This team is either part of SQAG or its activities are closely coordinated with that group.

Building a team to coordinate defect-prevention activities for the software project. This team is closely tied to the team responsible for developing and maintaining the project's defined process. SEPG is such a team.

Here are the roles and responsibilities for defect management in a project:

Follow defect-prevention checklist from database

Adopt DP activities suggested in the kick-off meeting

Collect defect data

Conduct causal analysis

Identify the special or common root causes for defects

Identify action proposals for each root cause

Identify the action proposals for preventing the special or common defects
Review causal analysis and select action proposals to be addressed

Implement the action items and track them to closure

Review the success or failure of the past action items

Enter the data in the repository

Conduct phase-wise review of defect-prevention activities

Some of the defect prevention techniques commonly used are Root-cause analysis,
Defect metrics analysis, and Defect prediction.

In this session, you have learned that:

A defect can be defined as a flaw in a system or system component that causes the
system or component to fail in performing its required function

A defect can be of two types, namely, process defect and product defect

Every project prepares a defect-prevention plan. Using the plan, the common
causes of defects are identified and eliminated

The causal analysis meetings are planned in the defect prevention plan.

Causal analysis focuses on fixing problems as they occur, finding what, in the process, has permitted the defect to occur, and finding what needs to be corrected to prevent it from occurring again

The outcome of the causal analysis meeting: Determines the root causes and
common causes, and proposes action plans and draws up implementation plans

BTSTC1:

SOFTWARE TESTING:
Welcome to the session on introduction to software testing. The goal of software testing is to move toward an error-free program by finding as many errors as possible.

At the end of this session, you will be able to:

Explain software testing

List the types of testing


Software testing is a process of identifying the correctness, completeness, and
quality of developed computer software.

Testing is a way of establishing confidence. It enables us to check whether a product fulfills its specified performance.

It is a process of exercising software to detect errors and to verify that it satisfies specified requirements.

The features of software testing are as follows:

It executes a program with the intent of finding errors.

Software testing conducts a set of activities to detect errors.

Testing is a measure of software quality.

The advantages of software testing are as follows:

It verifies that all the requirements are implemented correctly for both positive and
negative conditions

Identifies defects before software deployment

It helps in improving quality and reliability

Makes software predictable in behavior

Reduces the incompatibility and interoperability issues

It helps improve marketability

It also helps in retaining customers

The step-by-step procedure in testing is illustrated here.

The scenario design and test case development can start in parallel with the development cycle.

The test execution syncs up with the development cycle during the functional
testing phases.

Static and dynamic are the two types of testing techniques.

Static testing does not execute the program; it inspects or walks through the code. It includes symbolic execution and verification.

Dynamic testing generates test data and executes the program.
Some types of dynamic testing are black-box and white-box (or glass-box) testing.

The advantages of static testing are as follows:

Identifies defects early so that the cost of rework is saved.

Provides checklist-based approach and also focuses on coverage.

It brings in a group perspective.

The probability of finding defects is high.

The disadvantages of static testing are as follows:

It consumes more time.

It cannot test data dependencies.

The static testing requires high skill levels.

Black-box testing is also called functional or data-driven testing.

Some of its features are as follows:

The black-box test design treats the system as a black box, so it does not explicitly use knowledge of the internal structure.

Its design is usually described as focusing on testing functional requirements, whose internal contents or implementations are unknown or irrelevant.

The two techniques are equivalence partitioning and boundary value analysis.

Equivalence partitioning divides the input domain into classes of data from which
test cases can be derived.

Boundary value analysis focuses on the boundaries of the input domain rather than
its center.
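
As an illustrative sketch (not part of the original material), consider a hypothetical field that accepts ages from 18 to 60 inclusive. Equivalence partitioning supplies one representative value per class, and boundary value analysis adds values at and around the edges:

public class AgeValidator {

    // Hypothetical rule under test: valid ages are 18 to 60 inclusive.
    public static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Equivalence partitioning: one representative per class
        // (below the range, within the range, above the range).
        int[] partitionSamples = {5, 40, 75};

        // Boundary value analysis: values at and around the edges of the valid range.
        int[] boundarySamples = {17, 18, 19, 59, 60, 61};

        for (int age : partitionSamples) {
            System.out.println("partition sample " + age + " -> " + isValidAge(age));
        }
        for (int age : boundarySamples) {
            System.out.println("boundary sample " + age + " -> " + isValidAge(age));
        }
    }
}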

White-box testing is a structural or logic-driven testing, also called glass-box testing.

It performs a structural testing process. White-box testing is program logic-driven and design-based testing.

It examines the internal structure of the program.


The types of techniques are as follows:

Basis path testing selects paths that achieve decision coverage.

Flow graph notation is a notation for representing control flow, similar to flow charts and UML activity diagrams.

Cyclomatic complexity provides an upper bound for the number of tests required to guarantee coverage of all program statements.

The aim of white-box testing is to derive test cases based on the program structure and to guarantee that every independent path within a program module is tested.
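
As a worked sketch (not from the original material), cyclomatic complexity for a simple method can be computed as the number of decision points plus one, which also gives the number of independent paths that basis path testing must cover:

public class GradeClassifier {

    // Two decision points (the two if conditions), so cyclomatic complexity
    // V(G) = decisions + 1 = 3. Basis path testing therefore needs three
    // test cases, one per independent path.
    public static String classify(int score) {
        if (score >= 80) {
            return "distinction";
        }
        if (score >= 50) {
            return "pass";
        }
        return "fail";
    }

    public static void main(String[] args) {
        // One input per independent path.
        System.out.println(classify(90)); // distinction
        System.out.println(classify(65)); // pass
        System.out.println(classify(30)); // fail
    }
}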

The minimum criteria for white-box testing are as follows:

Code

High level and low level design document

Software requirement specification

In this session, you have learned that:

Software testing is a process of identifying the correctness, completeness, and quality of developed computer software

The two types of testing techniques are static and dynamic

The types of dynamic testing are black-box and white-box

Black-box testing is planned without intimate knowledge of the internal structure of the program

The two techniques in black-box testing are equivalence partitioning and boundary
value analysis

White-box testing is planned with intimate knowledge of the internal structure of the program

The types of techniques in white-box testing are basis path testing, flow graph
notation, and cyclomatic complexity

TYPES OF S/W TESTING:

Welcome to the session on types of software testing.

Software testing is an essential process because it enhances the quality of the product.

In practice, the expected result of one test can be altered by the result of another test.

Testing Graphical User Interface (GUI) components is essential for all applications so that the application becomes user-friendly.

The main objective of GUI testing is that the end user should feel comfortable while using the components of the operating system.

The features of GUI testing are as follows:

It provides a standard look for the operating system.

It checks the functionality of all the components involved with the operating system.

It checks the integrity of the GUI's interfaces with the operating system and other packaged applications.

The types of GUI testing errors are as follows:

Data validation

Incorrect field definitions

Mishandling of server process failures

Incorrect search criteria

Multiple database rows are returned when a single row is expected

The GUI testing strategies are as follows:

Focusing on errors in order to eliminate them

Separating the errors logically by applying divide and conquer technique

Applying appropriate test design techniques

Building and applying tests for each layer of the operating system

Automating the test wherever possible

Regression testing is used to ensure the quality and improve the functionality of the
system.

The main objective of regression testing is to re-execute one or more tests in subsequent builds of the application or product to ensure quality.

The features of regression testing are as follows:


Revisiting and testing all prior bug-fixes in response to new fixes or enhancements

Retesting all programs that might be affected by the fixes or enhancements

Discovering hidden bugs that were not detected during the previous tests

Forming the baseline for the application to grow with every build

Acceptance testing ensures that the system meets the business requirements. It
verifies the overall quality, correct operation, scalability, completeness, usability,
portability, and robustness of the functional components supplied by the software
system.

The main objective of acceptance testing is to ensure that the system meets the acceptance criteria mutually agreed with the customer.

The features of acceptance testing are as follows:

Meeting critical requirements

Minimizing performance level of the tests

Maximizing defect detection rate

Configuration testing is carried out to ensure the compatibility of the system. The term compatibility refers to the different degrees to which one device or program can work with another.

The characteristics of configuration testing are as follows:

The ability of a program to work with the hardware it runs on is termed hardware compatibility.

Platform and inter-product compatibility are also taken care of by configuration testing.

The network configuration has to be tested when two or more computers share resources on a computer network.

Database compatibility of the products is also taken care of by configuration testing. For example, many programs are compatible with dBASE, so the files produced can easily be transformed into a dBASE database.

The activities that occur during the installation testing are as follows:

Basic installation of the system is tested.

The functionality of various configurations and platforms is checked.


Regression testing for basic functionality should be performed during the
installation testing.

Alpha testing is an early stage in the execution of a product, at which the product has all the core functions to accept inputs and generate outputs. The client performs alpha testing at the development site.

Beta testing is the last stage of testing. A round of testing called alpha testing often precedes it. Beta testing need not have defined test cases; it is the users who perform this testing according to their ability.

In this session, you have learned that:

GUI testing checks the user friendliness of an application

Regression testing ensures product quality and improves functionality

Acceptance testing ensures that the system meets mutually agreed acceptance
criteria with the customers

Alpha testing involves testing of software by customer at the developer’s site

Beta testing involves testing of software by customer at the client’s site

UNIT TESTING:

Welcome to the session on unit testing.

Unit testing is the lowest level of testing and is also called program testing. The individual unit of the software is tested in isolation from other parts of the program.

A unit is a program or a screen, or the back-end code related to a screen.

The properties of writing tests in unit testing are as follows:

Tests are written by software developers, because bugs found by developers are cheaper to solve than those found by a separate testing department.

Tests are easily used by all programmers when written in a framework.

A unit test is made easy to run automatically, so that it can be used by all programmers.

The working procedures of a unit test are as follows:


It is linked with the domain code (the code in the final application) and the unit
testing framework.

The framework calls the tests, which in turn manipulates the domain code and
performs checks on it.

The framework will report the failed tests.

It also catches every unhandled exception and the other tests keep running.

Integration tests are done with unit testing framework.

The steps involved in unit testing are as follows:

1. Initially set the test data, functions, and methods for a unit.

2. Define expected output.

3. Repeat the coding process until the test passes.

The unit testing activities are field level checks, field level validation, user interface
checks, and functionality checks.

The field level checks consist of factors, such as null or not null, uniqueness, length,
date field, numeric, negative, and default display checks.

Field level validation is used to test all validations for an input field. It checks date
range and date validation with the system date.

The user interface checks for:

Readability of the controls

Tool tip validations

Easy usage of interface across the product

Consistency with the user interface across the product

User interface dialogs

Tab-related checks for screen controls

The functionality checks cover screen behavior, field dependencies, and referential integrity.

The significance of unit testing is exemplified by the following factors:

It determines whether the unit works as designed.


Unit test benefits programmer if used correctly.

It streamlines programming.

Unit test gives clarity about the requirements of the object.

It documents the requirement for an object.

Enhances the quality of the code.

Unit tests allow changes to be made to the system with confidence.

It imposes no restrictions on learning from the system.

It extends the life and maintainability of the code.

The best practices for the efficient usage of unit testing by the developers are as
follows:

Testing small units is important because the bugs found must be in that unit.

Unit test should be isolated so that one test does not interfere with the other tests.

Unit tests are run every time code is added.

Unit tests should be made short.

The advantages of unit testing are as follows:

Immediate testing of the code will reduce the programmers' stress.

Unit testing provides a safety net for application when programmers add
functionalities.

Unit tests are used as documentation.

It increases the programmers’ productivity and code stability.

It reduces the debugging time because bugs found by developers are cheaper to
solve than those found by a separate testing department.

In this session, you have learned that:

Unit testing is the lowest level of testing

A unit test is linked with the domain code and the framework

The main activities of unit testing are field level checks, field level validations, user
interface checks, and functionality checks

Immediate testing of code will reduce the programmers' stress


J UNIT TESTING:

Welcome to the session on JUnit. XUnit is an application for executing arbitrary tests. It provides an extensible framework and protection for executing groups of unit tests. Its members include JUnit, NUnit, CppUnit, RubyUnit, XMLUnit, and dbUnit. This session deals with JUnit, which is a testing framework written in Java.

JUnit is a unit test framework for Java programming language. It contains a series of
extensible classes that perform a great deal of testing work. The JUnit test
framework consists of facilities, such as counting of errors and failures, reporting of
errors and failures, and running tests in batches.

The JUnit test framework has some salient features, such as automation of test cases, improved test coverage, consistent testing, a highly reusable framework, assertions for testing expected results, test fixtures for sharing common test data, test suites for organizing and running tests, and graphical and textual test runners. In JUnit, an unexpected exception is reported as an error and causes the test to fail.

JUnit has its unique advantages. Unit testing is automated, and it determines whether the unit works as designed. It supports measures such as line coverage, logic coverage, and condition coverage. JUnit benefits the programmer a great deal if used correctly. It has framework-supplied methods, such as assertTrue( ) and assertFalse( ).

JUnit has a few limitations. There is no direct provision for reading test data from a
file, and it cannot directly test UI components, such as JSPs and Servlets.

The essential classes in JUnit framework are as follows:

junit.framework.TestCase allows running multiple test methods simultaneously, and it does all the counting and reporting of errors.

junit.framework.Assert is a set of assert methods. An example for junit.framework.Assert is displayed here. The test fails if the assert condition returns false. The String parameter is used to put a label on the test. If a test fails, it is counted and reported by this class.
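
Since the original example is not reproduced in this text, here is a minimal reconstruction (assuming JUnit 3.x on the classpath; the class name and values are hypothetical):

import junit.framework.TestCase;

public class CalculatorTest extends TestCase {

    public void testAddition() {
        int sum = 2 + 3;
        // The String parameter labels the assertion; the test fails if the condition is false.
        assertTrue("2 + 3 should equal 5", sum == 5);
        assertEquals("unexpected sum", 5, sum);
    }
}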

junit.framework.TestSuite is a collection of tests. It uses Java introspection to find all the methods whose names start with test, take no parameters, and return void. The run method of TestSuite executes all the tests.

The JUnit test case method manipulates an instance of the class to be tested. It
issues assertions about the expected state of the object.

The two processes involved in the JUnit test case are manipulation and assertion.
Manipulation is a process that calls a method or series of methods on the instance
of the class. It passes through a variety of input, both valid and invalid. Assertion is
a process where the statement is true if the code works appropriately. If an
assertion is not true, it fails and the test case fails as well.

A JUnit test case is a piece of code that takes a predefined unit of code and manipulates it. A small program is required to execute the various pieces of code, divided into methods, that test your class. Each behavior of your class will have a method in the test case, and a behavior can have multiple methods in the test case.

JUnit provides a framework for writing tests.

You implement a subclass of TestCase and use a TestRunner to run the tests. A test case can run multiple tests; all tests extend TestCase and are typically named XXXTest.

The TestCase class generator is able to create skeletons of test methods. You can add any number of assertions per method.

The generator is also able to generate setUp and tearDown methods, which instantiate the tested class and allow the test methods to access this instance through a field member of the TestCase class.

Use setUp( ) to initialize variables used in more than one test.

Clean-up after a test case is done by overriding the tearDown( ) method.

To write a JUnit test case, first create a class that extends junit.framework.TestCase, and then write a public no-argument method whose name starts with test. An example of writing a public no-argument test method is displayed here. Now, manipulate an instance of the class to be tested and issue assertions.
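
A minimal sketch of such a test case is shown below (reconstructed here, assuming JUnit 3.x; it exercises the standard StringBuilder class simply to keep the example self-contained):

import junit.framework.TestCase;

public class StringBuilderTest extends TestCase {

    // A public, no-argument method whose name starts with "test".
    public void testAppendBuildsExpectedString() {
        // Manipulate an instance of the class to be tested.
        StringBuilder builder = new StringBuilder();
        builder.append("Hello, ").append("JUnit");

        // Issue an assertion about the expected state of the object.
        assertEquals("Hello, JUnit", builder.toString());
    }
}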

If you want to write a test similar to one you have previously written, write a fixture. When you want to run more than one test, create a suite. Fixtures are sets of objects used in JUnit to share common test data. The setUp( ) method is used to initialize the variables, and the tearDown( ) method is used to release any permanent resources you allocated in setUp( ). A suite refers to an object provided in JUnit to run a number of test cases together.

The snippet of a program by using fixtures and suite objects is displayed here.
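
Since the snippet itself is not reproduced in this text, the following sketch (assuming JUnit 3.x) shows a fixture shared by two tests and a suite that groups them:

import java.util.ArrayList;
import java.util.List;

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class ListFixtureTest extends TestCase {

    private List<String> names; // the fixture: test data shared by the tests

    protected void setUp() {
        // Initialize variables used by more than one test.
        names = new ArrayList<String>();
        names.add("alpha");
        names.add("beta");
    }

    protected void tearDown() {
        // Release anything allocated in setUp().
        names = null;
    }

    public void testSize() {
        assertEquals(2, names.size());
    }

    public void testContains() {
        assertTrue(names.contains("alpha"));
    }

    // The suite groups the tests so that they can be run together.
    public static Test suite() {
        return new TestSuite(ListFixtureTest.class);
    }
}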

For running a JUnit test case, you can use JUnit TestRunner. You can also use both
textual and GUI TestRunners. An example for textual and GUI TestRunners is
displayed here. Textual is a faster TestRunner, whereas GUI is a user-friendly
TestRunner.
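
For example (a sketch assuming JUnit 3.8, where the textual runner is junit.textui.TestRunner and the Swing-based GUI runner is junit.swingui.TestRunner; ListFixtureTest is the sketch from the previous page):

public class RunAllTests {

    public static void main(String[] args) {
        // Textual runner: executes the suite and prints the results to the console.
        junit.textui.TestRunner.run(ListFixtureTest.suite());

        // The GUI runner can instead be launched from the command line:
        //   java junit.swingui.TestRunner ListFixtureTest
    }
}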

JUnit lets you test software code units by making assertions that the intended
requirements are met, but these assertions are limited to primitive operations.
Assertion Extensions for JUnit is an extension package for the JUnit framework.
In JUnit, an assertion performs an individual test. The most common assertions are perhaps equality assertions. These compare two values; the test passes if they are equal and fails if they are not.

The JUnit design pattern is displayed here.

In software engineering, a design pattern is a general solution to a common problem in software design. A design pattern is not a completed design that can be transformed directly into code; it is a description or template to solve a problem that can be used in many different situations.

Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Algorithms are not thought of as design patterns since they solve computational problems rather than design problems.

Patterns allow developers to communicate using well-known and well-understood names for software interactions. Common design patterns can be improved over time, making them more robust than ad-hoc designs.

To write a test case follow these steps:

Define a subclass of TestCase.

Override the setUp( ) method to initialize object(s) under test.

Optionally override the tearDown( ) method to release object or objects under test.

Define one or more public testXXX( ) methods that exercise the object(s) under test
and assert expected results.

JUnit is designed around two key design patterns: the Command pattern and the Composite pattern.

A TestCase is a command object. Any class that contains test methods should
subclass the TestCase class. A TestCase can define any number of public testXXX( )
methods. When you want to check the expected and actual test results, you invoke
a variation of the assert( ) method.

TestCase subclasses that contain multiple testXXX( ) methods can use the setUp( )
and tearDown( ) methods to initialize and release any common objects under test,
referred to as the test fixture. Each test runs in the context of its own fixture, calling
setUp( ) before and tearDown( ) after each test method to ensure there can be no
side effects among test runs.

TestCase instances can be composed into TestSuite hierarchies that automatically invoke all the testXXX( ) methods defined in each TestCase instance.
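
A small sketch of this composition (reusing the hypothetical test classes from the earlier sketches, assuming JUnit 3.8):

import junit.framework.Test;
import junit.framework.TestSuite;

public class AllTests {

    public static Test suite() {
        TestSuite suite = new TestSuite("All tests");
        // Composite pattern: a suite can hold individual test cases and other suites.
        suite.addTestSuite(StringBuilderTest.class);
        suite.addTest(ListFixtureTest.suite());
        return suite;
    }
}
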
There are certain best practices for working with JUnit. Keep production and test code separate, typically in the same packages but compiled into separate trees, allowing deployment without the tests. Do not forget to use object-oriented techniques, such as base classing.

The steps to be followed in test-driven development are as follows (a compact sketch follows the list):

1. Write failing test initially

2. Write enough code to pass

3. Refactor

4. Run tests again

5. Repeat until software meets goal

6. Write new code only when test is failing
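
A compact sketch of one such cycle (the Counter class is hypothetical; JUnit 3.x style):

import junit.framework.TestCase;

// Step 1: write the failing test first.
public class CounterTest extends TestCase {

    public void testIncrementRaisesValueByOne() {
        Counter counter = new Counter();
        counter.increment();
        assertEquals(1, counter.getValue());
    }
}

// Step 2: write just enough code to make the test pass, then refactor and re-run.
class Counter {

    private int value;

    public void increment() {
        value++;
    }

    public int getValue() {
        return value;
    }
}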

The do’s and don'ts while working with JUnit test framework are as follows:

Call the superclass methods, such as the setUp( ) and tearDown( ) methods, when subclassing.

You should name the tests properly.

Ensure that tests are time-independent.

Always utilize the JUnit's assert or fail methods and exception handling for clean
test code.

Document the tests in Javadoc.

Keep tests small and fast and avoid visual inspection.

Do not use test-case constructor or suite( ) method to set up a test case.

Do not assume the order in which tests within a test case run.

Do not write test cases with side effects, and do not load data from hard-coded locations on a file system.

The Cactus framework is a testing framework for server-side Java code. It uses and extends JUnit. The Apache Jakarta group developed the Cactus framework as an open-source initiative.

The Cactus framework is used to test Servlets, JSPs, EJBs, tag libraries, filters, and so on. The cost of writing server-side TestCases with it is very low.
The architecture of the cactus framework is displayed here.

In this architecture, YYYTestCase is the ServletTestCase, FilterTestCase, or JspTestCase, and XXX is the name of the test case. Every YYYTestCase class contains several test cases.

The steps to be followed while configuring the cactus framework in Web application
are as follows:

1. Get the cactus distribution

2. Place the required jars in Web library

3. Write TestCases

4. Arrange the tests in a separate package from the application code

5. Run tests through a browser

HttpUnit parses the HTML results into Document Object Model (DOM). It has easy
link navigation and form population. It is useful for automated acceptance tests.
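
A small sketch of typical HttpUnit usage (the URL and link text are hypothetical; the classes shown are from the HttpUnit API as the author understands it):

import com.meterware.httpunit.GetMethodWebRequest;
import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebLink;
import com.meterware.httpunit.WebRequest;
import com.meterware.httpunit.WebResponse;

public class HttpUnitSketch {

    public static void main(String[] args) throws Exception {
        // Start a conversation with the (hypothetical) application under test.
        WebConversation conversation = new WebConversation();
        WebRequest request = new GetMethodWebRequest("http://localhost:8080/myapp/index.html");
        WebResponse response = conversation.getResponse(request);

        // The response is parsed into a DOM, so links and forms can be navigated directly.
        System.out.println("Page title: " + response.getTitle());
        WebLink link = response.getLinkWith("Login");
        if (link != null) {
            WebResponse loginPage = link.click();
            System.out.println("Followed link to: " + loginPage.getTitle());
        }
    }
}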

Canoo WebTest is a free open-source tool for automated testing of Web applications. It calls Web pages and verifies results, giving comprehensive reports on success and failure. Canoo WebTest helps you to reduce the defect rate of your Web application.

JUnitPerf is a collection of JUnit test decorators used to measure the performance and scalability of functionality contained within existing JUnit tests. JUnitPerf tests transparently decorate existing JUnit tests. This decoration-based design allows performance testing to be dynamically added to an existing JUnit test without affecting the use of that test independent of its performance. By decorating existing JUnit tests, you can easily measure the desired performance and scalability tolerances. JUnitPerf can wrap any JUnit test.
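
A sketch of this decoration (assuming the com.clarkware.junitperf classes TimedTest and LoadTest, and reusing the StringBuilderTest sketch from earlier):

import com.clarkware.junitperf.LoadTest;
import com.clarkware.junitperf.TimedTest;

import junit.framework.Test;
import junit.framework.TestSuite;

public class PerformanceSuite {

    public static Test suite() {
        // Wrap an existing JUnit test without modifying it.
        Test functionalTest = new TestSuite(StringBuilderTest.class);

        // Fail if the wrapped test takes longer than two seconds.
        Test timedTest = new TimedTest(functionalTest, 2000);

        // Simulate ten concurrent users running the timed test.
        Test loadTest = new LoadTest(timedTest, 10);

        TestSuite suite = new TestSuite("JUnitPerf sketch");
        suite.addTest(loadTest);
        return suite;
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}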

In this session, you have learned that:

JUnit is a unit test framework for Java programming language

JUnit testing provides automation of test cases, improved test coverage, consistent testing, and a highly reusable framework

The best practices of JUnit testing include separating production and test code and following test-driven development

INTEGRATION AND SYSTEM TESTING:

Welcome to the session on integration and system testing.


Integration testing is an intermediate level of testing. This testing process evaluates
the interaction and consistency of interacting components.

Integration testing is a logical extension of unit testing. It identifies problems that occur when units are combined.

The integration testing techniques are top-down, bottom-up, and big-bang testing.

The program is merged and tested from the top to the bottom in top-down
integration.

Modules subordinate to the main control module are incorporated into the structure
in either depth-first or breadth-first manner.

The advantages of top-down integration testing are as follows:

Integration testing is done in an environment that closely resembles reality, so the tested product is more reliable.

Stubs are functionally simpler than drivers and can be written with less labor in less time.

The disadvantage of top-down integration testing is that the core low-level functionality is tested late in the cycle.

The illustration here represents the testing process of top-down integration testing.

The modules are integrated by moving downward through the control hierarchy,
beginning with the main control module.

The program is merged and tested from the bottom to the top in Bottom-Up
integration testing.

It tests the modules at the lowest levels in the program structure and begins
construction and testing with atomic modules.

The advantages of bottom-up integration testing are as follows:

Many programming and testing operations can be carried out simultaneously, which yields an apparent improvement in software development effectiveness.

Intensive unit testing of each module.

The disadvantages of bottom-up integration testing are as follows:

Key interface defects are trapped later in the cycle.

Test drivers have to be generated for modules at all levels except the top controlling one.

You cannot test the program in the actual environment in which it runs.

The illustration here represents the testing process of bottom-up integration testing.

First, the terminal module is tested, and then the next set of higher-level modules is
tested with the previously tested lower modules.

The software components of an application are combined all at once into an overall system in big-bang integration testing.

In this approach, every module is first unit tested in isolation from every other module. After each module is tested, all the modules are integrated together at once.

System testing is used to test whether the system performs the right business
functions.

It is a black-box testing technique and specifications-based testing.

The features of system testing are as follows:

Typically independent team testing

Simulated environment testing

Live or simulated user data

Tests the whole system

Functional and nonfunctional requirements tested

Business transaction-driven testing

Compatibility and performance limitations uncovered

The system testing covers testing of the integrated system for:

Business functionality

Performance and scalability

Usability

Reliability

Portability

Installation

Disaster recovery

In this session, you have learned that:


Integration testing is a logical extension of unit testing. It identifies problems that
occur when units are combined

The integration testing techniques are top-down, bottom-up, and big-bang

In top-down integration testing, the program is merged and tested from the top to
the bottom

In bottom-up integration testing, the program is merged and tested from the
bottom to the top

The software components of an application are combined all at once into an overall
system in big-bang integration testing

System testing is a black-box testing technique and specifications-based testing

TESTING ARTIFACTS:

A test plan contains a description of the following parameters:

Testing objectives and goals

Test strategy and approach based on customer priorities

Test environment (Hardware, Software, Network, Communication, and so on.)

Features to test with priority and criticality

Test deliverables

Test procedure

Test organization and scheduling

Testing resources and infrastructure

Test measurements and metrics

The benefits of test plan are as follows:

It sets clear and common objectives.

It helps prioritize tests.

It facilitates technical tasks.

It helps to improve coverage.

It provides structure to activities.

It improves communication.
It streamlines tasks, roles, and responsibilities.

It improves test efficiency and test measurability.

The characteristics of a test case are as follows:

It has a set of test inputs, execution conditions, and expected results.

It reflects the tests that need to be performed.

It identifies the data needed for testing.

It specifies preconditions, postconditions, and acceptance or pass criteria.

It provides a means to verify system use cases and other requirements.

It helps to determine test coverage.

A good test case:

Has a reasonable probability of catching an as-yet undiscovered error.

Is sequential with the program or business flow.

Systematically uncovers different classes of errors with minimum (optimum) effort and time.

Is not redundant.

Is neither too simple nor too complex.

Tests invalid and unexpected conditions as well.

In this session, you have learned that:

A test plan contains a description of testing objectives and goals, test strategy and
approach based on customer priorities, test environment, features to test with
priority and criticality, test deliverables, procedure, organization, scheduling,
measurements and metrics

A test case has a set of test inputs, execution conditions, and expected results

The test case reflects the tests that need to be performed

A test case specifies preconditions, postconditions, and acceptance or pass criteria

A good test case is sequential with the program or business flow

DEFECT MANAGEMENT:

Welcome to the session on defect management.


It is the methodology of eliminating defects in a project.

Defects are classified based on factors, such as category and severity.

A defect is a variance from the required product attribute.

There are two kinds of defects.

Defect from specifications

Defect in capturing user requirements

A failure is a defect that causes an error in the operation of a program. It adversely affects the end user or customer. Any mismatch between the application and its specification is a defect. A software error is present when the program does not satisfy its end-user requirements.

The defects are classified based on category and severity.

The defect classifications based on category are as follows:

Wrong - It occurs due to incorrect implementation.

Missing - It occurs because user requirements are not built into the product.

Extra - It occurs as a result of unwanted requirements built into the product.

The defect classifications based on severity are as follows:

Very high or critical

High

Medium

Low

A bug report is a case against a product. It must supply all the information necessary to identify and fix the problem. The report must also specify what the system should have done.

The steps in writing a bug report are as follows:


1. The report should be written in clear concise steps, so that someone who has
never seen the system can follow the steps and reproduce the problem.

2. It should include information about the product, such as version number and the
data being used.

Effective project management requires defect tracking and reporting as one of the
important modules.

Defect reporting software is designed for recording and reporting the defects in
projects to help project management, construction of project life cycles, project
handovers, and so on.

Defect tracking software is essential for improving project quality and managing
future requests from customers.

Defect reporting and management software enables contractors to manage defects of post-project practical completions that are reported by customers.

Any defect reported against a product is logged into a common repository and tracked through to closure.

The document generated through defect reporting and tracking consists of the topics displayed here.

These topics appear as column headings when the TestDirector tool is used.

In this session, you have learned that:

A defect is a variance from the required product attribute

Defects are classified based on category and severity

Effective project management requires defect tracking and reporting as one of the
important modules

TEST AUTOMATION:

Welcome to the session on software test automation.

Software testing with an automated test program helps prevent the errors that are commonly made. Automation of testing prevents mistakes from being overlooked and increases the accuracy of the product.

Automated testing is the process of automating the currently used manual testing process.

The benefits of automation are as follows:

It improves the efficiency of the testing process.

The cost involved in testing is reduced to a large extent.

The effect of automated testing can be replicated considerably across different platforms.

It decreases the redundancy of tests and increases the control when compared to
the manual testing process.

Automated testing involves greater application coverage.

Once the test cases have been created, the test environment can be developed.

The test environment is defined as the complete set of steps necessary to execute
the test as described in the test plan.

It also includes initial set up, description of the environment, and the procedures
needed for installation and restoration of the environment.

Inputs to the test environment preparation process are as follows:

Technical environment descriptions

Approved test plan

Test execution schedule

Resource allocation schedule

Application software to be installed

The various phases in automation are as follows:

Planning the test to meet the objectives

Designing and developing the test in accordance with the plan

Execution of the test


Measurement of the results of the test

One test automation method uses a tool that records test input as it is sent to the software under test.

The stored input cases can then be used to reproduce the test at a later time.

The two methods used for test automation are capture-playback and the data-driven approach; a small sketch of the data-driven approach follows.
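
As a small sketch of the data-driven approach (the data rows and the routine under test are hypothetical; real tools typically read such rows from a spreadsheet or data file):

public class DataDrivenDemo {

    // Each row holds two inputs and the expected result for the unit under test.
    private static final int[][] CASES = {
        {2, 3, 5},
        {10, -4, 6},
        {0, 0, 0}
    };

    private static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        for (int[] row : CASES) {
            int actual = add(row[0], row[1]);
            String verdict = (actual == row[2]) ? "PASS" : "FAIL";
            System.out.println(row[0] + " + " + row[1] + " = " + actual + " [" + verdict + "]");
        }
    }
}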

The matrix here represents a tool-by-tool comparison. The functionality of each tool
may also be inferred from the matrix.

Observe that each category in the matrix is given a rating from 1 to 5.

1 represents excellent support for this functionality.

2 represents good support or provides another tool for the functionalities that are
lacking.

3 represents basic support only.

4 represents support only through an API call or a third-party add-in that is not included in the general test tool, or below-average support.

5 represents nil support.

The matrix score for each tool is displayed here.

In this session, you have learned that:

Automated testing is the process of automating the manual testing process currently in use.

The phases in automation are test planning, test design and development, test
execution, and measurement of the results.

TEST AUTOMATION TOOLS:

Welcome to the session on test automation tools.


Rational offers the most complete life cycle toolset, including testing, for the Windows platform. It is recognized as a renowned leader with respect to object-oriented development. Some of its popular products are Rational Robot, Rational Rose, Clear Case, Requisite Pro, and so on. The Rational Unified Process is a very good development model, which allows mapping of requirements to use cases and test cases, with a whole set of tools to support the process.

Rational offers a number of test automation tools.

Some of the Rational suite of tools are as follows:

Rational Requisite Pro

Rational Clear Quest

Rational Purify

Rational Quantify

Rational Pure Coverage

Rational Suite Performance Studio

Rational Robot

Rational Test Factory

Rational Site Check

Rational Load Test

Rational Test Manager

A Rational project is a logical collection of databases and data stores that are associated with the data used while working with the Rational suite. A Rational project is associated with one Rational Test data store, one Requisite Pro database, and one Clear Quest database. It can also be associated with multiple Rose models and Requisite Pro projects, optionally placing them under configuration management.

The Rational administrator is used to create and manage Rational repositories, users, and groups. It also manages security privileges.
Rational Robot is used to develop three kinds of scripts, namely, Graphical User
Interface (GUI) scripts for functional testing, Virtual User (VU), and Visual Basic (VB)
scripts for performance testing.

The uses of Rational Robot are as follows:

It performs full functional testing. It records and plays back scripts that navigate through the application and test the state of objects through verification points.

Rational Robot performs full performance testing. Rational Robot and Test Manager are used together to record and play back scripts that help determine whether a multi-client system is performing within user-defined standards under varying loads.

It is used for creating and editing scripts by using the SQABasic, VB, and VU
scripting environments. The Rational Robot editor provides color-coded commands
with keyword Help for powerful integrated programming during script development.

It facilitates testing applications developed with Integrated Development Environments (IDEs), such as Microsoft Visual Basic, Oracle Forms, PowerBuilder, Hyper Text Markup Language (HTML), and Java.

It is also used for testing objects even if they are not visible in the interface of the
application.

Rational Robot is used for collecting diagnostic information about an application during script playback.

It is integrated with Rational Purify, Quantify, and Pure Coverage. It also enables
playing back scripts under a diagnostic tool and viewing the results in a log file.

Rational Test Manager is an open and extensible framework that unites all the tools,
assets, and data, which are related to and produced by the testing effort. Under this
single framework, all participants in the testing effort can define and refine the
quality goals.

The features of Rational Test Manager are as follows:

Rational Test Manager enables planning, designing, implementing, and executing tests, and evaluating the results.

It facilitates creating, managing, and running reports. The reporting tools track assets, such as scripts, builds, and test documents. It also helps in tracking test coverage and progress.

It is used for creating and managing builds, log folders, and logs.

Rational Test Manager is also used for creating and managing data pools, and
types.

Rational supports several operating systems, protocols, Web browsers, markup languages, and development environments.

The operating systems and protocols supported by Rational are displayed here.

The Web browsers supported by Rational are displayed here.

Rational supports markup languages, such as HTML and DHTML on IE4.0 or later.

The development environments supported by Rational are displayed here.

In this session, you have learned that:

Some of the testing tools offered by Rational are Rational Requisite Pro, Rational
Clear Quest, Rational Purify, Rational Suite Performance Studio, Rational Robot,
Rational Test Manager, and so on

Rational supports several operating systems, protocols, Web browsers, markup languages, and development environments

PERFORMANCE TESTING:

Welcome to the session on performance testing. This session deals with the requirements, process, and tools of performance testing, as well as load, volume, and stress testing.

Performance testing is a measure of the performance characteristics of an application. It is basically a process of understanding how the Web application and its operating environment respond at various user load levels.

In general, performance testing measures the latency, throughput, and utilization of the Web site while simulating attempts by virtual users to access the site simultaneously. One of the main objectives of performance testing is to maintain a Web site with low latency, high throughput, and low utilization.
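
As a rough, tool-agnostic sketch of these ideas, the following program simulates a handful of virtual users with Java threads and reports average latency and throughput. The sendRequest() method is a stand-in stub, not a call to any real load-testing tool.

// Illustrative load-simulation sketch (not specific to LoadRunner or WebLoad).
// Each thread acts as one virtual user issuing a fixed number of requests.
import java.util.concurrent.atomic.AtomicLong;

public class MiniLoadTest {

    // Stand-in for a real HTTP call; sleeps to simulate server response time.
    static void sendRequest() throws InterruptedException {
        Thread.sleep(50);
    }

    public static void main(String[] args) throws InterruptedException {
        final int virtualUsers = 10;
        final int requestsPerUser = 20;
        final AtomicLong totalLatencyMs = new AtomicLong();

        long start = System.currentTimeMillis();
        Thread[] users = new Thread[virtualUsers];
        for (int i = 0; i < virtualUsers; i++) {
            users[i] = new Thread(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    long t0 = System.currentTimeMillis();
                    try {
                        sendRequest();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                    totalLatencyMs.addAndGet(System.currentTimeMillis() - t0);
                }
            });
            users[i].start();
        }
        for (Thread user : users) {
            user.join();
        }

        long elapsedMs = System.currentTimeMillis() - start;
        int totalRequests = virtualUsers * requestsPerUser;
        System.out.println("Average latency (ms): "
                + (totalLatencyMs.get() / (double) totalRequests));
        System.out.println("Throughput (requests/sec): "
                + (totalRequests * 1000.0 / elapsedMs));
    }
}
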
Performance testing requirements normally comprise three components: response time requirements, transaction volumes, and database volumes.

To measure the performance characteristics of an application, performance testing proceeds through requirements collection, planning, design, scripting, test bed preparation, execution, result analysis, and report generation.

Load Runner is a testing tool used for testing the performance of client/server systems. It enables the user to test the system under restricted and peak load conditions.

To generate load, Load Runner runs thousands of virtual users that are distributed over a network. Using minimal hardware resources, these virtual users provide a consistent, repeatable, and measurable load, exercising the client/server system just as real users would. The concise reports and graphs of Load Runner provide the information required to evaluate the performance of the client/server system.

Web Load is a testing tool used for testing the scalability, functionality, and performance of Web-based applications, such as Internet and intranet applications. Web Load can measure the performance of an application under any load condition and is used to test the performance of a Web site under real-world conditions. It does this by combining performance, load, and functional tests or by running them individually.

The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods.

The purpose of stress testing is to determine whether the system has the capacity to handle a large number of transactions during peak periods.

Performance testing can be accomplished in parallel with volume and stress testing
because it is necessary to assess performance under all conditions. System
performance is generally assessed in terms of response time and throughput rate
under different processing and configuration conditions.

In this session, you have learned that:

Performance testing is a process of understanding how the Web application and its operating environment respond at various user load levels

Load Runner is a testing tool for testing the performance of client or server systems

Web Load is a testing tool for testing the scalability, functionality and performance
of Web-based applications

Volume testing is used to find weaknesses in the system with respect to its handling of large amounts of data during short time periods

Stress testing is used to test whether the system has the capacity to handle large numbers of transactions during peak periods

CODE COVERAGE TOOLS:

Welcome to the session on code coverage tools.

Code coverage is a measure used in software testing. It is used to identify the parts of a program that are not exercised by the test cases.

It helps in creating additional test cases to improve the product and provides a quantitative measure of code coverage, which is an indirect measure of quality.

The code coverage analysis consists of:

Source code instrumentation

Intermediate code instrumentation

Run-time information collection

Code coverage tools can exert performance, memory, or other resource costs that are unacceptable during normal operation of the software.

The tools in code coverage are displayed here.

Clover is a powerful and highly configurable code coverage analysis tool. It discovers the sections of code that are not being adequately exercised by the unit tests.

Clover provides method, branch, and statement coverage for projects, packages, files, and classes. Unlike tools that use byte code instrumentation or the Java Virtual Machine (JVM) Profiling Application Programming Interface, Clover accurately measures per-statement coverage, rather than per-line coverage.
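
As a small illustration of the difference, assuming a hypothetical Checker class, consider a source line that contains two statements:

// Per-line versus per-statement coverage (hypothetical example).
public class Checker {
    static String classify(int x) {
        String label;
        // Two assignments share this one source line:
        if (x > 0) { label = "positive"; } else { label = "non-positive"; }
        return label;
    }

    public static void main(String[] args) {
        // Only the "positive" branch executes here. A per-line tool reports
        // the whole if/else line as covered, while a per-statement tool
        // reports that the assignment in the else branch was never executed.
        System.out.println(classify(5));
    }
}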

JProbe allows the user to easily test applications without any code change and identifies performance problems in the code. It integrates easily with the application server, Web server, IDE, JDK, and operating system.

JProbe is an industry-leading choice for enterprise code profiling and analysis since it supports both 32-bit and 64-bit platforms and can analyze an application running on a remote server.

Instrumentation Execution Coverage Tool (InsECT) is a system that is developed to obtain coverage information for Java programs.

It instruments, that is, inserts instructions into, Java class files at the byte code level.

The goal of InsECT is to provide coverage information about Java programs by considering their object-oriented behavior and language features.

Furthermore, as an open-source project, InsECT is extensively used in dynamic analysis.

Java programmers use the JCoverage tool, which instruments programs at the byte code level.

After instrumenting the code and running the tests, a report is generated that allows the user to view coverage figures from the project level downwards. This process is called code coverage.

Though Java developers have excellent free Integrated Development Environments, compilers, and test frameworks, they still have to rely on code coverage tools. These tools are used to determine how much of the application's functionality the tests actually exercise.

EMMA is an open-source toolkit for measuring and reporting Java code coverage.

EMMA supports large-scale enterprise software development while keeping individual developers' work fast and iterative.

The features of EMMA are as follows:


EMMA can instrument classes for coverage either offline (before they are loaded) or
on-the-fly (using an instrumenting application classloader).

It supports coverage types, such as class, method, line, and basic block.

EMMA can detect when a single source code line is covered only partially.

Coverage statistics are aggregated at the method, class, and package levels in EMMA.

Output reports highlight items with coverage levels below user-provided thresholds. They come in three formats: plain text, HTML, and XML.

The HTML report supports source code linking, and coverage data obtained in different instrumentation or test runs can be merged.

EMMA does not require access to the source code, and it degrades gracefully as the amount of debug information available in the input classes decreases.

It can instrument individual .class files or entire .jar files. Hence, efficient coverage
subset filtering is possible.

Makefile and ANT build integrations are supported by EMMA on an equal footing.

EMMA is quite fast since the run-time overhead of added instrumentation is small
and the byte code instrumentation is also fast.

Memory overhead is a few hundred bytes per Java class.

EMMA is 100 percent pure Java and has no external library dependencies. Thus it
can work in any Java 2 Java Virtual Machine (JVM).

EMMA belongs to the class of pure Java coverage tools based on Java byte code instrumentation.

Hence, no special JVM switches are needed to enable coverage; you simply run the EMMA-processed .class files. This also means that EMMA does not instrument the .java sources.

EMMA offers two different options to instrument. They are offline and on-the-fly.

The offline approach works well with contexts, such as J2SE, J2EE, and distributed
client/server applications.

The on-the-fly approach is a handy and lightweight shortcut for simple standalone
programs.
EMMA pays attention to the needs of enterprise software developers, who cannot always move to the latest Java version. It runs in any JVM from Java 1.2 onwards and has no library dependencies. EMMA is free and uses a very liberal open-source license.

An executable class is considered to be covered if it is loaded and initialized by the JVM.

Class initialization implies that the class static constructor is executed.

A class can be covered even though none of its other methods are executed.

Class coverage also considers the number of loaded but uninitialized classes.

It is common to see a small number of loaded, but uninitialized classes while using
EMMARUN without the -f option.

EMMA reports class coverage so that the user can spot classes that are ignored by the test suite. The identified classes could be dead code or may need more test attention.
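
To make the loaded-versus-initialized distinction concrete, here is a small, self-contained sketch; the Config class and its fields are hypothetical.

// Illustrates a loaded but uninitialized class (hypothetical example).
class Config {
    static { System.out.println("Config initialized"); }
    static final int TIMEOUT = 30; // compile-time constant: using it does not trigger initialization
    static int retries = 3;        // non-constant static field: using it does trigger initialization
}

public class InitDemo {
    public static void main(String[] args) {
        Class<?> loadedOnly = Config.class;  // may load Config, but does not initialize it
        System.out.println(loadedOnly.getName());
        System.out.println(Config.TIMEOUT);  // the constant is inlined by the compiler; still no initialization
        System.out.println(Config.retries);  // first real use: the static initializer runs here
    }
}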

EMMA considers a method covered once it has been entered. The difficulty lies in tracking how the method executes after that point.

A given method can have any number of normal or abnormal exit points, and it is not always clear how many of those exit paths should be considered normal.
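
For instance, a hypothetical method like the following has two normal returns and abnormal exits via exceptions; method-level coverage only records that the method was entered, not which exit path was taken.

// Hypothetical method with multiple exit points (illustrative only).
public class ExitPointsDemo {
    static int parsePercentage(String s) {
        if (s == null) {
            throw new IllegalArgumentException("null input"); // abnormal exit
        }
        int value = Integer.parseInt(s.trim());               // may also exit abnormally
        if (value < 0) {
            return 0;                                         // normal exit 1
        }
        return Math.min(value, 100);                          // normal exit 2
    }

    public static void main(String[] args) {
        System.out.println(parsePercentage("85")); // exercises only normal exit 2
    }
}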

Looking out for uncovered methods is a good technique for detecting either dead
code or code that needs more test attention.

In this session, you have learned that:

The types of code coverage tools are as follows:

Clover

JProbe

EMMA

InsECT

JCoverage

EMMA supports large-scale enterprise software development while keeping individual developers' work fast and iterative. It offers two different options to instrument:

Offline

On-the-fly

TEST CASE POINT ANALYSIS:

Welcome to the session on Test Case Point (TCP) analysis. TCP analysis is an
approach for doing an accurate estimation of functional testing projects.

TCP is a measure of estimating the complexity of an application.

TCP is also used as an estimation technique to calculate the size and effort of a testing project. TCP counts rank the requirements, and the test cases to be written for those requirements, as simple, average, or complex, and quantify this ranking into a measure of complexity. This approach emphasizes the key testing factors that determine the complexity of the entire testing cycle. In other words, TCP is a way of representing the effort involved in testing projects.

TCP analysis generates test efforts for separate testing activities. This is essential because testing projects fall under four different models: test case generation, automated script generation, manual test execution, and automated test execution.

Test case generation is an execution model that includes designing well-defined test
cases. To determine the TCP for test case generation, first determine the complexity
of the test cases. Some test cases may be more complex due to the inherent
functionality being tested. The complexity will be an indicator of the number of TCPs
for the test case. Automated script generation is an execution model that
automates the test cases using an automated test tool.

From the list of test cases derived from the test case generation model, identify the test cases that are good candidates for automation. Some test cases take so little effort to perform manually that they are not worth automating. On the other hand, some test cases cannot be automated because the test tool does not support the feature being tested. Automation is also difficult in cases involving dynamic data.
Manual test execution is an execution model in which the test cases already designed are executed and the defects are reported. To determine the TCPs for manual execution, first calculate the manual test case execution complexity based on factors such as preconditions, the steps in the test case, and verification.

Automated test execution includes executing the automated scripts and reporting the defects. To determine the TCPs for automated execution, calculate the automation test case execution complexity based on preconditions, such as setting up the test data, and the steps needed before starting the execution.

TCP analysis uses a seven-step process consisting of identifying use cases, identifying test cases, determining the TCP for test case generation, automation, manual execution, and automated execution, and computing the total TCP. A rough numerical sketch of the final roll-up is shown below.
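
The weights, counts, and productivity figure below are purely illustrative assumptions, not values prescribed by the TCP technique; they only show how complexity rankings could be rolled up into a total TCP figure and an effort estimate.

// Illustrative TCP roll-up (all weights and counts are assumed values).
public class TcpEstimate {
    public static void main(String[] args) {
        // Assumed number of test cases in each complexity band.
        int simple = 40, average = 25, complex = 10;

        // Assumed test case points per band (hypothetical weights).
        double simpleWeight = 1.0, averageWeight = 2.0, complexWeight = 4.0;

        double totalTcp = simple * simpleWeight
                        + average * averageWeight
                        + complex * complexWeight;

        // Assumed productivity: person-hours of testing effort per TCP.
        double hoursPerTcp = 1.5;
        double effortHours = totalTcp * hoursPerTcp;

        System.out.println("Total TCP: " + totalTcp);
        System.out.println("Estimated effort (person-hours): " + effortHours);
    }
}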

In this session, you have learned that:

TCP is a measure of estimating the complexity of an application. It is also used as an estimation technique to calculate the size and effort of a testing project

TCP analysis generates test efforts for separate testing activities. This is essential
because testing projects fall under four different models

Test case generation is an execution model that includes designing well-defined test
cases

Automated script generation is an execution model that includes automating the test cases using an automated test tool

Manual test execution is an execution model that involves executing the test cases
already designed and reporting the defects

Automated test execution includes the execution of the automated scripts and
reporting the defects
