
Software Testing Life Cycle STLC

Contrary to popular belief, Software Testing is not just a single activity. It consists of a series
of activities carried out methodically to help certify your software product. These
activities (stages) constitute the Software Testing Life Cycle (STLC).
The different stages in the Software Testing Life Cycle:

Each of these stages has definite Entry and Exit criteria, Activities and Deliverables
associated with it.
In an ideal world you would not enter the next stage until the exit criteria for the previous stage
are met, but in practice this is not always possible. So for this tutorial, we will focus on the
activities and deliverables for the different stages in the STLC. Let's look at them in detail.
Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify
the testable requirements. The QA team may interact with various stakeholders (Client,
Business Analyst, Technical Leads, System Architects, etc.) to understand the requirements in
detail. Requirements may be either Functional (defining what the software must do) or Non-
Functional (defining system performance, security, availability). Automation feasibility analysis for
the given testing project is also done in this stage.
Activities
Identify types of tests to be performed.
Gather details about testing priorities and focus.
Prepare Requirement Traceability Matrix (RTM).
Identify test environment details where testing is supposed to be carried out.
Automation feasibility analysis (if required).
Deliverables
RTM
Automation feasibility report. (if applicable)
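The RTM deliverable above can be sketched as a simple mapping from requirement IDs to the test cases that cover them. This is a minimal illustration; the IDs below are hypothetical:

```python
# Minimal sketch of a Requirement Traceability Matrix (RTM).
# Requirement and test-case IDs are hypothetical examples.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # e.g. login functionality
    "REQ-002": ["TC-003"],             # e.g. password reset
    "REQ-003": [],                     # not yet covered by any test case
}

# A requirement with no mapped test cases is a coverage gap the RTM exposes.
uncovered = [req for req, cases in rtm.items() if not cases]
print(uncovered)  # ['REQ-003']
```

In practice the RTM is usually a spreadsheet or test-management-tool artifact, but the underlying structure is exactly this requirement-to-test mapping.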
Test Planning
This phase is also called the Test Strategy phase. Typically, in this stage, a Senior QA Manager
determines the effort and cost estimates for the project and prepares and finalizes the
Test Plan.
Activities
Preparation of test plan/strategy document for various types of testing
Test tool selection
Test effort estimation
Resource planning and determining roles and responsibilities.
Training requirements
Deliverables
Test plan /strategy document.
Effort estimation document.
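A rough test effort estimate can be derived from test-case counts and average per-case effort. The calculation below is purely illustrative; every figure in it is an assumption, not a standard formula:

```python
# Illustrative test effort estimate (all numbers are assumptions).
test_cases = 120
design_hours_per_case = 0.5      # writing and reviewing one test case
execution_hours_per_case = 0.25  # one execution of one test case
cycles = 3                       # planned execution + retest cycles

design_effort = test_cases * design_hours_per_case
execution_effort = test_cases * execution_hours_per_case * cycles
total_hours = design_effort + execution_effort
print(total_hours)  # 150.0
```

Real estimation techniques (test point analysis, wideband Delphi, historical data) are more involved, but they all reduce to parameters like these that the planning phase must pin down.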
Test Case Development
This phase involves the creation, verification and rework of test cases and test scripts. Test data is
identified or created, and is reviewed and then reworked as well.
Activities
Create test cases, automation scripts (if applicable)
Review and baseline test cases and scripts
Create test data (If Test Environment is available)
Deliverables
Test cases/scripts
Test data
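Test cases and their test data can be expressed directly in code. The sketch below assumes a hypothetical `apply_discount` function under test; the test data triples pair inputs with expected outputs, as identified during this phase:

```python
def apply_discount(price, percent):
    """Hypothetical function under test (example only)."""
    return round(price * (1 - percent / 100), 2)

# Test data created during test case development:
# (input price, discount percent, expected result).
test_data = [
    (100.0, 10, 90.0),    # typical discount
    (100.0, 0, 100.0),    # boundary: no discount
    (100.0, 100, 0.0),    # boundary: full discount
]

def run_cases():
    """Execute each case and record pass/fail against the expected output."""
    results = []
    for price, percent, expected in test_data:
        actual = apply_discount(price, percent)
        results.append(("PASS" if actual == expected else "FAIL", price, percent))
    return results

print(run_cases())
```

The review-and-baseline activity then amounts to checking that this data covers the specification, including boundaries, before execution begins.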
Test Environment Setup
The test environment determines the software and hardware conditions under which a work product
is tested. Test environment set-up is one of the critical aspects of the testing process and can be
done in parallel with the Test Case Development stage. The test team may not be involved in this
activity if the customer/development team provides the test environment, in which case the
test team is required to do a readiness check (smoke testing) of the given environment.
Activities
Understand the required architecture, environment set-up and prepare hardware and
software requirement list for the Test Environment.
Setup test Environment and test data
Perform smoke test on the build
Deliverables
Environment ready with test data set up
Smoke Test Results.
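The readiness check (smoke test) mentioned above can be automated. This is a minimal sketch assuming a web application with a hypothetical `/health` endpoint; the URL is an example, not a real service:

```python
# Minimal environment smoke test (URL and endpoint are hypothetical examples).
import urllib.error
import urllib.request

def smoke_test(base_url, timeout=5):
    """Return True if the environment answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # unreachable or erroring environment is not ready

# Accept or reject the build for testing based on the result, e.g.:
# if not smoke_test("https://test-env.example.com"):
#     raise SystemExit("Environment not ready - reject the build")
```

A fuller smoke suite would also touch the database, message queues and any third-party integrations the tests depend on.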
Test Execution
During this phase the test team carries out the testing based on the test plans and the test cases
prepared. Bugs are reported back to the development team for correction, and retesting
is performed.
Activities
Execute tests as per plan
Document test results, and log defects for failed cases
Map defects to test cases in RTM
Retest the defect fixes
Track the defects to closure
Deliverables
Completed RTM with execution status
Test cases updated with results
Defect reports
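The "document results, log defects, map them to the RTM" activities above can be sketched as a simple data update. All IDs are hypothetical:

```python
# Execution status and defect mapping recorded against the RTM (hypothetical IDs).
rtm = {
    "TC-001": {"requirement": "REQ-001", "status": "Not Run", "defects": []},
    "TC-002": {"requirement": "REQ-001", "status": "Not Run", "defects": []},
}

def log_result(test_case_id, passed, defect_id=None):
    """Record an execution result; a failed case carries the defect raised for it."""
    entry = rtm[test_case_id]
    entry["status"] = "Pass" if passed else "Fail"
    if defect_id:
        entry["defects"].append(defect_id)

log_result("TC-001", passed=True)
log_result("TC-002", passed=False, defect_id="DEF-101")

# Failed cases feed the retest cycle once their defects are fixed.
failed = [tc for tc, entry in rtm.items() if entry["status"] == "Fail"]
print(failed)  # ['TC-002']
```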
Test Cycle Closure
The testing team meets, discusses and analyzes testing artifacts to identify strategies that have to
be implemented in future, taking lessons from the current test cycle. The idea is to remove
process bottlenecks for future test cycles and to share best practices for similar projects in
the future.
Activities
Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software Quality and
Critical Business Objectives
Prepare test metrics based on the above parameters.
Document the learning out of the project
Prepare Test closure report
Qualitative and quantitative reporting of quality of the work product to the customer.
Test result analysis to find out the defect distribution by type and severity.
Deliverables
Test Closure report
Test metrics
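The defect distribution by type and severity mentioned in the closure activities can be computed directly from the defect log. The records below are illustrative:

```python
from collections import Counter

# Illustrative defect log from the completed test cycle.
defects = [
    {"id": "DEF-101", "type": "Functional", "severity": "High"},
    {"id": "DEF-102", "type": "UI", "severity": "Low"},
    {"id": "DEF-103", "type": "Functional", "severity": "Medium"},
    {"id": "DEF-104", "type": "Performance", "severity": "High"},
]

# Test metrics: defect distribution by type and by severity.
by_type = Counter(d["type"] for d in defects)
by_severity = Counter(d["severity"] for d in defects)
print(by_type)      # Counter({'Functional': 2, 'UI': 1, 'Performance': 1})
print(by_severity)  # Counter({'High': 2, 'Low': 1, 'Medium': 1})
```

Distributions like these go into the test closure report and help target process improvements: a cluster of high-severity functional defects, for instance, points back at requirement analysis and review practices.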
Finally, here is a summary of the STLC stages along with their Entry and Exit Criteria.
Requirement Analysis
Entry Criteria:
- Requirements document available (both functional and non-functional)
- Acceptance criteria defined
- Application architectural document available
Activities:
- Analyse business functionality to know the business modules and module-specific functionalities
- Identify all transactions in the modules
- Identify all the user profiles
- Gather user interface/authentication and geographic spread requirements
- Identify types of tests to be performed
- Gather details about testing priorities and focus
- Prepare Requirement Traceability Matrix (RTM)
- Identify test environment details where testing is supposed to be carried out
- Automation feasibility analysis (if required)
Exit Criteria:
- Signed-off RTM
- Test automation feasibility report signed off by the client
Deliverables:
- RTM
- Automation feasibility report (if applicable)

Test Planning
Entry Criteria:
- Requirements documents
- Requirement Traceability Matrix
- Test automation feasibility document
Activities:
- Analyze the various testing approaches available
- Finalize the best-suited approach
- Preparation of test plan/strategy document for various types of testing
- Test tool selection
- Test effort estimation
- Resource planning and determining roles and responsibilities
Exit Criteria:
- Approved test plan/strategy document
- Effort estimation document signed off
Deliverables:
- Test plan/strategy document
- Effort estimation document

Test Case Development
Entry Criteria:
- Requirements documents
- RTM and test plan
- Automation analysis report
Activities:
- Create test cases and automation scripts (where applicable)
- Review and baseline test cases and scripts
- Create test data
Exit Criteria:
- Reviewed and signed test cases/scripts
- Reviewed and signed test data
Deliverables:
- Test cases/scripts
- Test data

Test Environment Setup
Entry Criteria:
- System design and architecture documents are available
- Environment set-up plan is available
Activities:
- Understand the required architecture and environment set-up
- Prepare hardware and software requirement list
- Finalize connectivity requirements
- Prepare environment setup checklist
- Set up test environment and test data
- Perform smoke test on the build
- Accept/reject the build depending on the smoke test result
Exit Criteria:
- Environment setup is working as per the plan and checklist
- Test data setup is complete
- Smoke test is successful
Deliverables:
- Environment ready with test data set up
- Smoke test results

Test Execution
Entry Criteria:
- Baselined RTM, test plan and test cases/scripts are available
- Test environment is ready
- Test data set-up is done
- Unit/integration test report for the build to be tested is available
Activities:
- Execute tests as per plan
- Document test results, and log defects for failed cases
- Update test plans/test cases, if necessary
- Map defects to test cases in RTM
- Retest the defect fixes
- Regression testing of the application
- Track the defects to closure
Exit Criteria:
- All planned tests are executed
- Defects are logged and tracked to closure
Deliverables:
- Completed RTM with execution status
- Test cases updated with results
- Defect reports

Test Cycle Closure
Entry Criteria:
- Testing has been completed
- Test results are available
- Defect logs are available
Activities:
- Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software Quality and Critical Business Objectives
- Prepare test metrics based on the above parameters
- Document the learnings from the project
- Prepare test closure report
- Qualitative and quantitative reporting of the quality of the work product to the customer
- Test result analysis to find the defect distribution by type and severity
Exit Criteria:
- Test closure report signed off by the client
Deliverables:
- Test closure report
- Test metrics


Functional Testing vs Non-Functional Testing
Functional Testing is the type of testing done against the business requirements of the
application. It is a black-box type of testing.
It involves the completely integrated system, to evaluate the system's compliance with its
specified requirements. This type of testing is carried out based on the functional
specification document. In actual testing, testers need to verify a specific action or
function of the code. For functional testing either manual testing or automation tools can
be used, though functionality testing is often easier using manual testing alone. Functional
testing is executed before non-functional testing.
Five steps need to be kept in mind in functional testing:
1. Preparation of test data based on the specifications of the functions
2. Business requirements are the inputs to functional testing
3. Based on the functional specifications, determine the expected output of the functions
4. Execution of the test cases
5. Comparison of the actual and expected outputs
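The five steps above can be sketched end-to-end in code. The `transfer` function below is a hypothetical stand-in for the function under test, and its specification and test data are illustrative assumptions:

```python
def transfer(balance, amount):
    """Spec (illustrative): debit `amount` if funds suffice, else leave balance unchanged."""
    return balance - amount if amount <= balance else balance

# Steps 1-3: test data and expected outputs derived from the business specification.
cases = [
    (100, 30, 70),    # normal transfer
    (100, 100, 0),    # boundary: exact balance
    (100, 150, 100),  # insufficient funds: balance unchanged
]

# Steps 4-5: execute the cases and compare actual vs expected output.
for balance, amount, expected in cases:
    actual = transfer(balance, amount)
    assert actual == expected, f"transfer({balance}, {amount}) -> {actual}, expected {expected}"
print("all functional cases passed")
```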
To carry out functional testing we have numerous tools available; here is the list
of functional testing tools.
The following testing types should be covered in functional testing:
Unit Testing
Smoke testing
Sanity testing
Integration Testing
Interface Testing
System Testing
Regression Testing
UAT
What is Non-Functional Testing?
Non-Functional Testing is the type of testing done against the non-functional
requirements. Most of these criteria are not considered in functional testing, so
non-functional testing is used to check the readiness of a system. Non-functional
requirements tend to be those that reflect the quality of the product, particularly
in the context of the suitability perspective of its users. Non-functional testing can
be started after the completion of functional testing, and can be made more
effective by using testing tools.
Non-functional testing covers the software attributes which are not related to any
specific function or user action, such as performance, scalability, security or the
behavior of the application under certain constraints.
Non-functional testing has a great influence on customer and user satisfaction with the
product. Non-functional requirements should be expressed in a testable way, not like
"the system should be fast" or "the system should be easy to operate", which are not
testable.
Basically, non-functional testing is used to measure the non-functional attributes of
software systems. Take non-functional requirement examples: in how much
time does the software complete a task? Or, how fast is the response?
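The response-time example above can be stated in a testable way and checked in code. The operation timed and the 0.5-second threshold below are illustrative assumptions:

```python
import time

def operation():
    # Illustrative stand-in for the operation whose response time is under test.
    return sum(range(100_000))

# Non-functional requirement stated testably (threshold is an assumed figure,
# unlike the untestable "the system should be fast"):
MAX_RESPONSE_SECONDS = 0.5

start = time.perf_counter()
operation()
elapsed = time.perf_counter() - start
assert elapsed < MAX_RESPONSE_SECONDS, f"too slow: {elapsed:.3f}s"
print(f"response time {elapsed:.4f}s within limit")
```

Dedicated load and performance tools measure the same thing under concurrency and over many samples, but the principle is the same: a numeric threshold makes the requirement verifiable.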
The following testing types should be considered in non-functional testing:
Availability Testing
Baseline testing
Compatibility testing
Compliance testing
Configuration Testing
Documentation testing
Endurance testing
Ergonomics Testing
Interoperability Testing
Installation Testing
Load testing
Localization testing and Internationalization testing
Maintainability Testing
Operational Readiness Testing
Performance testing
Recovery testing
Reliability Testing
Resilience testing
Security testing
Scalability testing
Stress testing
Usability testing
Volume testing


What is Structural testing (Testing of
software structure/architecture)?
Structural testing is the testing of the structure of the system or component.
It is often referred to as white-box, glass-box or clear-box
testing, because in structural testing we are interested in what is happening inside
the system/application.
In structural testing the testers are required to have knowledge of the internal
implementation of the code: how the software is implemented and how it works.
During structural testing the tester concentrates on how the software works rather
than what it does. For example, a structural technique may want to know how the
loops in the software are exercised. Different test cases may be derived to exercise
a loop once, twice, and many times. This may be done regardless of the
functionality of the software.
Structural testing can be used at all levels of testing. Developers use structural
testing in component testing and component integration testing, especially where
there is good tool support for code coverage. Structural testing is also used in
system and acceptance testing, but the structures are different. For example, the
coverage of menu options or major business transactions could be the structural
element in system or acceptance testing.
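The loop example above can be made concrete: structural test cases derived to exercise a loop zero times, once, and many times, regardless of what the function means functionally. The function below is an illustrative example:

```python
def total(values):
    """Illustrative function containing the loop under structural test."""
    result = 0
    for v in values:      # the loop whose iteration counts we want to exercise
        result += v
    return result

# Structural test cases: the loop body runs 0, 1, and many times.
assert total([]) == 0          # zero iterations (loop body never entered)
assert total([5]) == 5         # exactly one iteration
assert total([1, 2, 3]) == 6   # many iterations
print("loop coverage cases passed")
```

A coverage tool would confirm that these three cases together exercise both the loop-entered and loop-skipped paths through the code.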

Test Strategy and Test Plan
Test Strategy
A Test Strategy document is a high-level document, normally developed by the project
manager. It defines the software testing approach used to achieve the testing objectives.
The Test Strategy is normally derived from the Business Requirement Specification
document.
The Test Strategy document is a static document, meaning that it is not updated often. It
sets the standards for testing processes and activities, and other documents such as the Test
Plan draw their contents from the standards set in the Test Strategy document.
Some companies include the Test Approach or Strategy inside the Test Plan, which is
fine and is usually the case for small projects. However, for larger projects there is one
Test Strategy document and a number of Test Plans, one for each phase or level of testing.
Components of the Test Strategy document
Scope and Objectives
Business issues
Roles and responsibilities
Communication and status reporting
Test deliverables
Industry standards to follow
Test automation and tools
Testing measurements and metrics
Risks and mitigation
Defect reporting and tracking
Change and configuration management
Training plan
Test Plan
The Test Plan document, on the other hand, is derived from the Product Description, the Software
Requirement Specification (SRS), or Use Case documents.
The Test Plan document is usually prepared by the Test Lead or Test Manager, and the focus
of the document is to describe what to test, how to test, when to test and who will do which
test.
It is not uncommon to have one Master Test Plan as a common document for all the test
phases, with each test phase having its own Test Plan document.
There is much debate as to whether the Test Plan document should also be a static document
like the Test Strategy document mentioned above, or whether it should be updated every so often
to reflect changes in the direction of the project and its activities.
My own personal view is that when a testing phase starts and the Test Manager is
controlling the activities, the test plan should be updated to reflect any deviation from the
original plan. After all, planning and control are continuous activities in the formal test
process.
Test plan identifier
Introduction
Test items
Features to be tested
Features not to be tested
Test techniques
Testing tasks
Suspension criteria
Feature pass/fail criteria
Test environment (Entry criteria, Exit criteria)
Test deliverables
Staff and training needs
Responsibilities
Schedule
This is a standard approach to preparing test plan and test strategy documents, but things can
vary from company to company.
Definition of a Test Plan


A test plan can be defined as a document describing the scope, approach, resources, and
schedule of intended testing activities. It identifies test items, the features to be tested, the testing
tasks, who will do each task, and any risks requiring contingency planning.

In software testing, a test plan gives detailed testing information regarding an upcoming testing
effort, including
Scope of testing
Schedule
Test Deliverables
Release Criteria
Risks and Contingencies
It can also be described as a detailed account of how the testing will proceed, who will do the
testing, what will be tested, how long the testing will take, and to what quality level the test will be
performed.

Few other definitions

The process of defining a test project so that it can be properly measured and controlled. The test
planning process generates a high level test plan document that identifies the software items to be
tested, the degree of tester independence, the test environment, the test case design and test
measurement techniques to be used, and the rationale for their choice.

A testing plan is a methodological and systematic approach to testing a system such as a machine or
software. It can be effective in finding errors and flaws in a system. In order to find relevant results,
the plan typically contains experiments with a range of operations and values, including an
understanding of what the eventual workflow will be.

Test plan is a document which includes, introduction, assumptions, list of test cases, list of features to
be tested, approach, deliverables, resources, risks and scheduling.

A test plan is a systematic approach to testing a system such as a machine or software. The plan
typically contains a detailed understanding of what the eventual workflow will be.

A record of the test planning process detailing the degree of tester independence, the test
environment, the test case design techniques and test measurement techniques to be used, and the
rationale for their choice.
Test planning

Test planning involves scheduling and estimating the system testing process, establishing
process standards and describing the tests that should be carried out.
As well as helping managers allocate resources and estimate testing schedules, test plans are
intended for software engineers involved in designing and carrying out system tests. They help
technical staff get an overall picture of the system tests and place their own work in this context.
Frewin and Hatton (1986), Humphrey (1989) and Kit (1995)
also include discussions of test planning.
Test planning is particularly important in large software system development. As well as setting
out the testing schedule and procedures, the test plan defines the hardware and software
resources that are required. This is useful for system managers who are responsible for ensuring
that these resources are available to the testing team. Test plans should normally include
significant amounts of contingency so that slippages in design and implementation can be
accommodated and staff redeployed to other activities.
Test plans are not static documents but evolve during the development process. Test plans
change because of delays at other stages in the development process. If part of a system is
incomplete, the system as a whole cannot be tested. You then have to revise the test plan to
redeploy the testers to some other activity and bring them back when the software is once again
available.
For small and medium-sized systems, a less formal test plan may be used, but there is still a
need for a formal document to support the planning of the testing process. For some agile
processes, such as extreme programming, testing is inseparable from development. Like other
planning activities, test planning is also incremental. In XP, the customer is ultimately responsible
for deciding how much effort should be devoted to system testing.
The structure of a test plan

Test plans obviously vary, depending on the project and the organization involved in the testing.
Sections that would typically be included in a large system, are:
The testing process
A description of the major phases of the system testing process. This may be broken down into
the testing of individual sub-systems, the testing of external system interfaces, etc.
Requirements traceability
Users are most interested in the system meeting its requirements and testing should be planned
so that all requirements are individually tested.
Tested items
The products of the software process that are to be tested should be specified.
Testing schedule
An overall testing schedule and resource allocation. This schedule should be linked to the more
general project development schedule.
Test recording procedures
It is not enough simply to run tests; the results of the tests must be systematically recorded. It
must be possible to audit the testing process to check that it has been carried out correctly.
Hardware and software requirements
This section should set out the software tools required and estimated hardware utilisation.
Constraints
Constraints affecting the testing process such as staff shortages should be anticipated in this
section.
System tests
This section, which may be completely separate from the test plan, defines the test cases that
should be applied to the system. These tests are derived from the system requirements
specification.
Test Case Specifications
The test plan focuses on how the testing for the project will proceed, which units will be
tested and what approaches (and tools) are to be used during the various stages of testing.
However, it does not deal with the details of testing a unit, nor does it specify which test cases
are to be used.

Test case specification has to be done separately for each unit. Based on the approach
specified in the test plan, the features to be tested for the unit must first be determined. The
overall approach stated in the plan is refined into specific test techniques that should be
followed and into the criteria to be used for evaluation. Based on these, the test cases are
specified for testing the unit.

There are two basic reasons test cases are specified before they are used for testing. It is known
that testing has severe limitations and that the effectiveness of testing depends very heavily on
the exact nature of the test cases. Even for a given criterion, the exact nature of the test cases
affects the effectiveness of testing.

Constructing good test cases that will reveal errors in programs is still a very creative activity
that depends a great deal on the tester. Clearly it is important to ensure that the set of test
cases used is of high quality. As with many other verification methods, evaluation of the quality of
test cases is done through a "test case review", and any review needs a formal document or work
product. This is the primary reason for having the test case specification in the
form of a document.

Software Metrics for Reliability:
Metrics are used to improve the reliability of the system by identifying problem areas in the
requirements (specification), coding (errors) and testing (verification) phases.
The different types of Software Metrics that are used are
a) Requirements Reliability Metrics:-
Requirements indicate what features the software must contain, so a clear understanding of the
requirements document must exist between client and developer; otherwise it is difficult to write
these requirements correctly.
The requirements must have a valid structure to avoid the loss of valuable information.
Next, the requirements should be thorough and detailed, so that the design
phase is easier. The requirements should not contain inadequate information.
They should also communicate easily: there should be no ambiguous data in the requirements. If
there is ambiguous data, it is difficult for the developer to implement the
specification. Requirements reliability metrics evaluate these quality factors of the
requirements document.

b) Design and Code Reliability Metrics
The quality factors that exist in design and coding are complexity, size and modularity.
More complex modules are difficult to understand and carry a high
probability of errors, so the complexity of the modules should be kept low.
Size depends on factors such as total lines, comments, executable
statements, etc. According to the SATC, the most effective evaluation is a combination of size and
complexity.
Reliability decreases if modules have a combination of high complexity and large size, or high
complexity and small size. In the latter combination reliability also decreases because the
smaller size results in short code which is difficult to alter.
These metrics are also applicable to object-oriented code, but additional metrics are
required to evaluate its quality.

c) Testing Reliability Metrics:
Testing reliability metrics use two approaches to evaluate reliability.
First, they ensure that the system is fully equipped with the functions specified in the
requirements; because of this, errors due to a lack of functionality decrease.
The second approach is evaluating the code, finding the errors and fixing them.

The current practice of software reliability measurement can be divided into four categories:
1) Product metrics
2) Project management metrics
3) Process metrics
4) Fault and failure metrics
As discussed earlier, software size and complexity play an important role in the design and coding
phases. One of the product metrics, the function point metric, is used to estimate the size and
complexity of the program.
Project management metrics increase reliability by evaluating the management process, whereas
process metrics can be used to estimate, monitor and improve the reliability and quality of the
software.
Finally, fault and failure metrics determine whether the software performs all the
functions specified by the requirements documents without any errors. They take the faults
and failures that arise in coding and analyze them to achieve this task.
Reliability growth models

There are various reliability growth models that have been derived from reliability experiments in
a number of different application domains. As Kan (Kan, 2003) discusses, most of these models
are exponential, with reliability increasing quickly as defects are discovered and removed. The
increase then tails off and reaches a plateau as fewer and fewer defects are discovered and
removed in the later stages of testing.
The simplest model that illustrates the concept of reliability growth is a step function model
(Jelinski and Moranda, 1972). The reliability increases by a constant increment each time a fault
(or a set of faults) is discovered and repaired (Figure 1) and a new version of the software is
created. This model assumes that software repairs are always correctly implemented so that the
number of software faults and associated failures decreases in each new version of the system.
As repairs are made, the rate of occurrence of software failures (ROCOF) should therefore
decrease, as shown in Figure 1. Note that the time periods on the horizontal axis reflect the time
between releases of the system for testing so they are normally of unequal length.

Figure 1 Equal-step function model of reliability growth
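The equal-step model can be sketched numerically: each repair lowers the rate of occurrence of failures (ROCOF) by a constant increment. The initial rate and step size below are illustrative values, not data from any real system:

```python
# Sketch of the equal-step (Jelinski-Moranda style) reliability growth model.
# Initial failure rate and per-repair improvement are illustrative values.
rocof = 10.0   # initial rate of occurrence of failures (failures per unit time)
step = 1.5     # constant reliability increment per repaired fault
repairs = 4    # number of fault repairs / new software versions

history = [rocof]
for _ in range(repairs):
    rocof = max(rocof - step, 0.0)  # each repair lowers the failure rate by one step
    history.append(rocof)

print(history)  # [10.0, 8.5, 7.0, 5.5, 4.0]
```

The assumption baked into the `rocof - step` line, that every repair is correct and contributes the same fixed improvement, is exactly what the later random-step models relax.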
In practice, however, software faults are not always fixed during debugging and when you
change a program, you sometimes introduce new faults into it. The probability of occurrence of
these faults may be higher than the occurrence probability of the fault that has been repaired.
Therefore, the system reliability may sometimes worsen in a new release rather than improve.
The simple equal-step reliability growth model also assumes that all faults contribute equally to
reliability and that each fault repair contributes the same amount of reliability growth. However,
not all faults are equally probable. Repairing the most common faults contributes more to
reliability growth than does repairing faults that occur only occasionally. You are also likely to find
these probable faults early in the testing process, so reliability may increase more than when
later, less probable, faults are discovered.
Later models, such as that suggested by Littlewood and Verrall (Littlewood and Verrall, 1973)
take these problems into account by introducing a random element into the reliability growth
improvement effected by a software repair. Thus, each repair does not result in an equal amount
of reliability improvement but varies depending on the random perturbation (Figure 2).
Littlewood and Verrall's model allows for negative reliability growth when a software repair
introduces further errors. It also models the fact that, as faults are repaired, the average
improvement in reliability per repair decreases. The reason for this is that the most probable
faults are likely to be discovered early in the testing process; repairing these contributes most to
reliability growth.

Figure 2 Random-step function model of reliability growth
The above models are discrete models that reflect incremental reliability growth. When a new
version of the software with repaired faults is delivered for testing it should have a lower rate of
failure occurrence than the previous version. However, to predict the reliability that will be
achieved after a given amount of testing continuous mathematical models are needed. Many
models, derived from different application domains, have been proposed and compared
(Littlewood, 1990).

(c) Ian Sommerville 2008

Software quality
From Wikipedia, the free encyclopedia
In the context of software engineering, software quality refers to two related but distinct notions
that exist wherever quality is defined in a business context:
Software functional quality reflects how well it complies with or conforms to a given design,
based on functional requirements or specifications. That attribute can also be described as
the fitness for purpose of a piece of software or how it compares to competitors in the
marketplace as a worthwhile product.
Software structural quality refers to how it meets non-functional requirements that support
the delivery of the functional requirements, such as robustness or maintainability, and the degree
to which the software was produced correctly.
Structural quality is evaluated through the analysis of the software's inner structure, its source
code, at the unit level, the technology level and the system level, which is in effect how its
architecture adheres to sound principles of software architecture as outlined in a paper on the topic
by OMG. In contrast, functional quality is typically enforced and measured through software
testing.
Historically, the structure, classification and terminology of attributes and metrics applicable
to software quality management have been derived or extracted from ISO 9126-3 and the
subsequent ISO 25000:2005 quality model, also known as SQuaRE. Based on
these models, the Consortium for IT Software Quality (CISQ) has defined five major desirable
structural characteristics needed for a piece of software to provide business value: Reliability,
Efficiency, Security, Maintainability and (adequate) Size.

ISO 9000

ISO 9000 is a series of standards, developed and published by the International Organization for
Standardization (ISO), that define, establish, and maintain an effective quality assurance system
for manufacturing and service industries. The standards are available through national
standards bodies. ISO 9000 deals with the fundamentals of quality management
systems, including the eight management principles upon which the family of standards is
based. ISO 9001 deals with the requirements that organizations wishing to meet the standard
must fulfill.
Third-party certification bodies provide independent confirmation that organizations meet the
requirements of ISO 9001. Over a million organizations worldwide are independently certified,
making ISO 9001 one of the most widely used management tools in the world today. Despite
widespread use, the ISO certification process has been criticized as being wasteful and not
being useful for all organizations.


Capability Maturity Model (CMM)
The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process. The model describes a five-level evolutionary
path of increasingly organized and systematically more mature processes. CMM was
developed and is promoted by the Software Engineering Institute (SEI), a research and
development center sponsored by the U.S. Department of Defense (DoD). SEI was founded
in 1984 to address software engineering issues and, in a broad sense, to advance software
engineering methodologies. More specifically, SEI was established to optimize the process
of developing, acquiring, and maintaining heavily software-reliant systems for the DoD.
Because the processes involved are equally applicable to the software industry as a whole,
SEI advocates industry-wide adoption of the CMM.
The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the
International Organization for Standardization (ISO). The ISO 9000 standards specify an
effective quality system for manufacturing and service industries; ISO 9001, as applied to
software through the ISO 9000-3 guideline, deals specifically with software development and
maintenance. The main difference between the two systems lies in their respective purposes:
ISO 9001 specifies a minimal acceptable quality level for software processes, while the CMM
establishes a framework for continuous process improvement and is more explicit than the ISO
standard in defining the means to be employed to that end.
CMM's Five Maturity Levels of Software Processes
1. Initial: processes are disorganized, even chaotic. Success is likely to depend on
individual efforts and is not considered repeatable, because processes are not
sufficiently defined and documented to allow them to be replicated.
2. Repeatable: basic project management techniques are established, and successes
can be repeated, because the requisite processes have been established, defined,
and documented.
3. Defined: the organization has developed its own standard software process through
greater attention to documentation, standardization, and integration.
4. Managed: the organization monitors and controls its own processes through data
collection and analysis.
5. Optimizing: processes are constantly being improved by monitoring feedback
from current processes and introducing innovative processes to better serve the
organization's particular needs.
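Because the five levels form a strict evolutionary ordering, they map naturally onto an ordered enumeration. A minimal sketch in Python (the `can_advance` helper is a hypothetical illustration, not part of the CMM itself):

```python
from enum import IntEnum

class CMMLevel(IntEnum):
    """The five CMM maturity levels, ordered from least to most mature."""
    INITIAL = 1      # ad hoc, chaotic processes
    REPEATABLE = 2   # basic project management established
    DEFINED = 3      # documented, standardized organization-wide process
    MANAGED = 4      # processes measured via data collection and analysis
    OPTIMIZING = 5   # continuous process improvement

def can_advance(current: CMMLevel) -> bool:
    """Hypothetical helper: an organization climbs one level at a time."""
    return current < CMMLevel.OPTIMIZING

# IntEnum makes the levels comparable, reflecting the evolutionary path:
assert CMMLevel.DEFINED > CMMLevel.REPEATABLE
```

Using `IntEnum` rather than a plain `Enum` keeps the numeric level values (1 through 5) that CMM literature uses while still allowing ordered comparisons.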
Comparison between ISO 9000 and CMM
Differences between ISO 9000 (International Organization for Standardization) and CMM
(Capability Maturity Model):
1. ISO 9000 applies to any type of industry, whereas CMM is specially developed for the
software industry.
2. ISO 9000 addresses corporate business processes, whereas CMM focuses on software
engineering activities.
3. ISO 9000 specifies minimum requirements, whereas CMM gets into the technical aspects
of software engineering.
4. ISO 9000 restricts itself to what is required, whereas CMM suggests how to fulfill the
requirements.
5. ISO 9000 provides pass-or-fail criteria, whereas CMM provides a grade for process
maturity.
6. ISO 9000 has no levels, whereas CMM has five: Initial, Repeatable, Defined, Managed,
and Optimizing.
7. ISO 9000 does not specify the sequence of steps required to establish the quality system,
whereas CMM recommends a mechanism for step-by-step progress through its successive
maturity levels.

Certain process elements that are in ISO 9000 are not included in CMM, such as:
1. Contract management
2. Purchase and customer-supplied components
3. Personnel issue management
4. Packaging, delivery, and installation management

Similarly, some processes in CMM are not included in ISO 9000:
1. Project tracking
2. Process and technology change management
3. Intergroup coordination to meet customer requirements
4. Organization-level process focus, process development, and integrated management



Computer-aided software engineering
Computer-aided software engineering (CASE) is the application of a set of tools and methods
to a software system with the desired end result of high-quality, defect-free, and maintainable
software products.[1] It also refers to methods for the development of information
systems together with automated tools that can be used in the software development process.[2]



CASE (computer-aided software engineering)
CASE (computer-aided software engineering) is the use of a computer-assisted method to
organize and control the development of software, especially on large, complex projects
involving many software components and people. Using CASE allows designers, code
writers, testers, planners, and managers to share a common view of where a project stands
at each stage of development. CASE helps ensure a disciplined, check-pointed process. A
CASE tool may portray progress (or lack of it) graphically. It may also serve as a repository
for or be linked to document and program libraries containing the project's business plans,
design requirements, design specifications, detailed code specifications, the code units, test
cases and results, and marketing and service plans.
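The repository role described above can be sketched as a small data structure. Everything in this example (class name, artifact names, the `progress` method) is a hypothetical illustration of how a CASE tool might track project artifacts, not a real tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class CaseRepository:
    """Toy sketch of the artifact repository a CASE tool might maintain,
    linking the kinds of project documents listed above. Illustrative only."""
    artifacts: dict = field(default_factory=dict)

    def add(self, kind: str, name: str) -> None:
        # Group artifacts by category (design specs, test cases, ...)
        self.artifacts.setdefault(kind, []).append(name)

    def progress(self) -> dict:
        # A CASE tool "may portray progress graphically"; here we just
        # count artifacts per category as a crude stand-in.
        return {kind: len(items) for kind, items in self.artifacts.items()}

repo = CaseRepository()
repo.add("design specifications", "payment-module.spec")
repo.add("test cases", "payment-happy-path")
repo.add("test cases", "payment-timeout")
print(repo.progress())  # → {'design specifications': 1, 'test cases': 2}
```

The shared-view benefit of CASE comes from every role (designer, tester, manager) reading and writing the same repository rather than keeping private copies of project state.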
CASE originated in the 1970s when computer companies were beginning to borrow ideas
from the hardware manufacturing process and apply them to software development (which
generally has been viewed as an insufficiently disciplined process). Some CASE tools
supported the concepts of structured programming and similar organized development
methods. More recently, CASE tools have had to encompass or accommodate visual
programming tools and object-oriented programming. In corporations, a CASE tool may be
part of a spectrum of processes designed to ensure quality in what is developed. (Many
companies have their processes audited and certified as being in conformance with the ISO
9000 standard.)
Some of the benefits of CASE and similar approaches are that, by making the customer part
of the process (through market analysis and focus groups, for example), a product is more
likely to meet real-world requirements. Because the development process emphasizes
testing and redesign, the cost of servicing a product over its lifetime can be reduced
considerably. An organized approach to development encourages code and design reuse,
reducing costs and improving quality. Finally, quality products tend to improve a
corporation's image, providing a competitive advantage in the marketplace.
Reverse engineering
Reverse engineering is taking apart an object to see how it works in order to duplicate or
enhance it. The practice, taken from older industries, is now frequently applied to
computer hardware and software. Software reverse engineering involves translating a
program's machine code (the string of 0s and 1s sent to the processor) back
into the source-code statements of a programming language.
Software reverse engineering is done to retrieve the source code of a program because the
source code was lost, to study how the program performs certain operations, to improve
the performance of a program, to fix a bug (correct an error in the program when the source
code is not available), to identify malicious content in a program such as a virus or to adapt
a program written for use with one microprocessor for use with another. Reverse
engineering for the purpose of copying or duplicating programs may constitute a copyright
violation. In some cases, the licensed use of software specifically prohibits reverse
engineering.
Someone doing reverse engineering on software may use several tools to disassemble a
program. One tool is a hexadecimal dumper, which prints or displays the binary numbers of
a program in hexadecimal format (which is easier to read than a binary format). By knowing
the bit patterns that represent the processor instructions as well as the instruction lengths,
the reverse engineer can identify certain portions of a program to see how they work.
Another common tool is the disassembler, which reads the binary code and then
displays each executable instruction in text form. Because a disassembler cannot tell the
difference between an executable instruction and the data used by the program, a debugger is
used alongside it to keep the disassembler from disassembling the data portions of a program.
These tools might be used by a cracker to modify code and gain entry to a computer system or
cause other harm.
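The hexadecimal dumper described above is simple enough to sketch in a few lines. This is an illustrative toy in Python, not a real reversing tool; the output format (offset, hex byte pairs, printable ASCII) mirrors common hex-dump utilities:

```python
def hex_dump(data: bytes, width: int = 16) -> str:
    """Render raw bytes as offset, hex pairs, and printable ASCII,
    in the spirit of the 'hexadecimal dumper' described above."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # Show printable ASCII characters; '.' stands in for the rest
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hex_part:<{width * 3}} {ascii_part}")
    return "\n".join(lines)

print(hex_dump(b"Hello, reverse engineering!"))
```

Reading the hex column alongside the ASCII column is exactly how a reverse engineer spots recognizable byte patterns (strings, instruction prefixes) inside an otherwise opaque binary.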
Hardware reverse engineering involves taking apart a device to see how it works. For
example, if a processor manufacturer wants to see how a competitor's processor works,
they can purchase a competitor's processor, disassemble it, and then make a processor
similar to it. However, this process is illegal in many countries. In general, hardware reverse
engineering requires a great deal of expertise and is quite expensive.
Another type of reverse engineering involves producing 3-D images of manufactured parts
when a blueprint is not available in order to remanufacture the part. To reverse engineer a
part, the part is measured by a coordinate measuring machine (CMM). As it is measured, a
3-D wire frame image is generated and displayed on a monitor. After the measuring is
complete, the wire frame image is dimensioned. Any part can be reverse engineered using
these methods.
The term forward engineering is sometimes used in contrast to reverse engineering.
