A Software Defect / Bug is a condition in a software product which does not meet a
software requirement (as stated in the requirement specifications) or end-user
expectations (which may not be specified but are reasonable). In other words, a defect
is an error in coding or logic that causes a program to malfunction or to produce
incorrect/unexpected results.
Severity / Impact
Probability / Visibility
Priority / Urgency
Phase Detected
Phase Injected
Causes Of Defect
If someone makes an error or mistake in using the software, this may lead
directly to a problem.
The software is used incorrectly and so does not behave as we expected.
People also design and build the software, and they can make mistakes during
the design and build.
Flaws in the software.
Not all defects result in failures.
Evaluation: As well as executing the tests, we must check the results and evaluate
the software under test and the completion criteria, which help us decide whether we
have finished testing and whether the software product has passed the tests.
Software products and related work products: We don't just test code. We test
the requirements and design specifications, and we test related documents such as
operation, user and training material.
Model-based: You can build mathematical models for loading and response for
e-commerce servers, and test based on that model. If the behavior of the system
under test conforms to that predicted by the model, the system is deemed to be
working. Model-based test strategies have in common the creation or selection of
some formal or informal model for critical system behaviors, usually during the
requirements and design stages of the project.
Methodical: You might have a checklist that you have put together over the
years that suggests the major areas of testing to run or you might follow an
industry-standard for software quality, such as ISO 9126, for your outline of major
test areas. You then methodically design, implement and execute tests following
this outline. Methodical test strategies have in common the adherence to a
preplanned, systematized approach that has been developed in-house, assembled
from various concepts developed in-house and gathered from outside, or adapted
significantly from outside ideas and may have an early or late point of
involvement for testing.
What is Extreme Programming and what are its characteristics? Or: Explain the AGILE
methodology?
1. Extreme programming is currently one of the most well-known agile development
life cycle models.
2. The methodology claims to be more human-friendly than traditional
development methods.
3. Using the Agile model, developers can build a simple and appealing GUI for the software.
Some of the characteristics of XP are:
It promotes the generation of business stories to define functionality.
It demands an on-site customer for continuous feedback and to define and carry
out functional tests.
It promotes pair programming and shared code ownership among developers.
It states that component test scripts shall be written before code is written and
that those tests should be automated.
It states that integration and testing of code shall happen several times a day.
It states that we should always implement the simplest solution that meets
today's problem.
With XP there are numerous iterations each requiring testing.
XP developers write every test case they can think of and automate them.
Everytime a change is made in the code it is component tested and then
integrated with the existing code.
This type of testing is performed by developers before the setup is handed over
to the testing team to formally execute the test cases.
The developers use test data that is different from the test data of the quality
assurance team.
The goal of unit testing is to isolate each part of the program and show that
individual parts are correct in terms of requirements and functionality.
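As an illustration, here is a minimal sketch of a unit test using Python's built-in unittest framework; the add function and its expected behaviour are invented for this example, not taken from the text above.

import unittest

def add(a, b):
    # Unit under test: a deliberately simple, hypothetical function.
    return a + b

class TestAdd(unittest.TestCase):
    # Each test isolates one behaviour of the unit.
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()

Developers would typically run such tests on every change, before the build is handed over to the testing team.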
Integration Testing
System Testing
System testing tests the system as a whole. Once all the components are
integrated, the application as a whole is tested rigorously to see that it meets the
specified Quality Standards.
System testing is the first level of testing in which the application is tested as a
whole.
The application is tested thoroughly to verify that it meets the functional and
technical specifications.
Acceptance Testing
The QA team will have a set of pre-written scenarios and test cases that will be
used to test the application.
More ideas will be shared about the application and more tests can be
performed on it to gauge its accuracy and the reasons why the project was
initiated.
Acceptance tests are not only intended to point out simple spelling mistakes,
cosmetic errors, or interface gaps, but also to point out any bugs in the
application that will result in system crashes or major errors in the application.
Explain the Triggers for Maintenance Testing?
o As stated maintenance testing is done on an existing operational system.
o It is triggered by modifications, migration, or retirement of the system.
o Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment,
such as planned operating system or database upgrades, or patches to
newly exposed or discovered vulnerabilities of the operating system.
o Maintenance testing for migration (e.g. from one platform to another)
should include operational testing of the new environment, as well as the
changed software.
o Maintenance testing for the retirement of a system may include the testing
of data migration or archiving, if long data-retention periods are required.
o Since modifications are most often the main part of maintenance testing
for most organizations, this will be discussed in more detail.
o From the point of view of testing, there are two types of modifications.
There are modifications in which testing may be planned, and there
are ad-hoc corrective modifications, which cannot be planned at all.
Planned modifications
o The following types of planned modification may be identified:
perfective modifications (adapting software to the user's wishes, for instance by
supplying new functions or enhancing performance);
adaptive modifications (adapting software to environmental changes such as
new hardware, new systems software or new legislation);
corrective planned modifications (deferrable correction of defects).
o Ad-hoc corrective modifications are concerned with defects requiring an
immediate solution, e.g. a production run which dumps late at night, a
network that goes down with a few hundred users on line, a mailing with
incorrect addresses.
o There are different rules and different procedures for solving problems of
this kind.
o It will be impossible to take the steps required for a structured approach to
testing.
o If, however, a number of activities are carried out prior to a possible
malfunction, it may be possible to achieve a situation in which reliable
tests can be executed in spite of 'panic stations' all round.
o Even in the event of ad-hoc modifications, it is therefore possible to bring
about an improvement in quality by adopting a specific test approach.
Explain Maintenance Testing?
Maintenance testing:
1) Once deployed, a system is often in service for years or even decades. During this time
the system and its operational environment are often corrected, changed or extended.
Testing executed during this life cycle period is called maintenance testing.
2) Maintenance testing is different from maintainability testing, which measures how easy
it is to maintain the system.
3) The development and test process applicable to new developments does not change
fundamentally for maintenance purposes; the same test process steps apply, and
depending on the size and risk of the changes made, several levels of testing are carried
out during maintenance testing: a component test, a system test, an acceptance test.
4) A maintenance test process usually begins with the receipt of an application for a
change. The test manager will use this as a basis for producing the test plan.
On receipt of the new or changed specification, corresponding test cases are
specified or adapted.
Once the necessary changes have been made, regression testing is performed.
Usually maintenance testing will consist of two parts:
i) Testing the changes.
ii) Regression testing to show that the rest of the system has not been affected
by the maintenance work.
Maintenance testing is performed on software under the following conditions:
A) If the customer or end user requires support or does not
understand some of the functionality of the software.
B) When the developer wants to enhance / upgrade the software.
C) When any changes are requested by the user.
Alpha Testing
This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing and system testing when
combined together is known as alpha testing. During this phase, the following aspects
will be tested in the application:
Spelling Mistakes
Broken Links
Cloudy Directions
The Application will be tested on machines with the lowest specification to test
loading times and any latency problems.
Beta Testing
This test is performed after alpha testing has been successfully performed. In beta
testing, a sample of the intended audience tests the application. Beta testing is also
known as pre-release testing. Beta test versions of software are ideally distributed to
a wide audience on the Web, partly to give the program a "real-world" test and partly to
provide a preview of the next release. In this phase, the audience will be testing the
following:
Users will install, run the application and send their feedback to the project team.
Getting the feedback, the project team can fix the problems before releasing the
software to the actual users.
The more issues you fix that solve real user problems, the higher the quality of
your application will be.
Having a higher-quality application when you release it to the general public will
increase customer satisfaction.
4. Fixed: When the developer makes the necessary code changes and verifies the changes,
then he/she can set the bug status to Fixed and the bug is passed to the testing team.
5. Pending retest: After fixing the defect, the developer gives that particular code for
retesting to the tester. Here the testing is pending on the tester's end. Hence its status is
pending retest.
6. Retest: At this stage the tester retests the changed code which the developer has
given to him, to check whether the defect is fixed or not.
7. Verified: The tester tests the bug again after it got fixed by the developer. If the bug is
not present in the software, he approves that the bug is fixed and changes the status to
verified.
8. Reopen: If the bug still exists even after the bug is fixed by the developer, the tester
changes the status to reopened. The bug goes through the life cycle once again.
9. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no
longer exists in the software, he changes the status of the bug to closed. This state
means that the bug is fixed, tested and approved.
10. Duplicate: If the bug is reported twice, or two bugs describe the same problem,
then one bug's status is changed to duplicate.
11. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the
state of the bug is changed to rejected.
12. Deferred: The bug, changed to deferred state means the bug is expected to be fixed in
next releases. The reasons for changing the bug to this state have many factors. Some of
them are priority of the bug may be low, lack of time for the release or the bug may not
have major effect on the software.
13. Not a bug: The state is set to Not a bug if there is no change in the functionality of
the application. For example: if the customer asks for some change in the look and feel of
the application, like a change of colour of some text, then it is not a bug but just a
change in the looks of the application.
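As a sketch of how these states might be represented in a defect-tracking tool, the following Python fragment models the statuses listed above; the transition map is an illustrative assumption, not a standard.

from enum import Enum

class BugStatus(Enum):
    NEW = "new"
    ASSIGNED = "assigned"
    FIXED = "fixed"
    PENDING_RETEST = "pending retest"
    RETEST = "retest"
    VERIFIED = "verified"
    REOPENED = "reopened"
    CLOSED = "closed"
    DUPLICATE = "duplicate"
    REJECTED = "rejected"
    DEFERRED = "deferred"
    NOT_A_BUG = "not a bug"

# Illustrative (assumed) transitions between statuses.
ALLOWED = {
    BugStatus.FIXED: {BugStatus.PENDING_RETEST},
    BugStatus.PENDING_RETEST: {BugStatus.RETEST},
    BugStatus.RETEST: {BugStatus.VERIFIED, BugStatus.REOPENED},
    BugStatus.VERIFIED: {BugStatus.CLOSED},
    BugStatus.REOPENED: {BugStatus.FIXED},
}

def can_move(frm, to):
    # True if the tracking tool would allow this status change.
    return to in ALLOWED.get(frm, set())

assert can_move(BugStatus.RETEST, BugStatus.REOPENED)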
1) PLANNING
i) The review process for a particular document begins with a request for
review by the author to the moderator.
ii) On a project level, the project planning needs to allow time for review and
rework activities, providing engineers with time to thoroughly participate in
reviews.
2) KICK-OFF
i) An optional step in the review process. The goal of this meeting is to get
everybody on the same wavelength regarding the document under review and to
commit to the time that will be spent on checking.
ii) Also, the results of the entry check and the exit criteria are discussed in case of a more
formal review.
iii) Role assignment, checking rate, the pages to be checked, process changes and
possible other questions regarding formal reviews are also discussed during this
meeting.
3) PREPARATION
i) The participants work individually on the document under review using the
related documents, procedures, rules and checklists provided.
The individual participants identify defects according to their understanding of the
document and their role.
ii) All issues are recorded, preferably using a logging form. Spelling mistakes are
recorded on the document under review but not mentioned during the review meeting.
4) REVIEW MEETING:
i) The meeting typically consists of a logging phase, a discussion phase and a
decision phase. During the logging phase, the issues (e.g. defects) that have been
identified during the preparation phase are mentioned page by page, reviewer by
reviewer, and are logged either by the author or by a scribe.
5) REWORK
1. Based on the defects detected, the author will improve the document under review
step by step.
2. Not every defect that is found leads to rework. It is the author's responsibility to
judge whether a defect has to be fixed.
3. If the number of defects per page exceeds the exit criteria, then rework must be
conducted by the author.
6) FOLLOW-UP:
The moderator is responsible for ensuring that satisfactory
actions have been taken on all logged defects, process improvement suggestions
and change requests.
Although the moderator checks to make sure that the author has taken action on all
defects, it is not necessary for the moderator to check all corrections in detail. For a
more formal review type, the moderator checks only for compliance with the exit criteria.
Explain the Different Types of Reviews?
1. Walkthrough:
The author guides the participants through the document according to his or her
thought process, to achieve a common understanding and to gather feedback.
Useful for people who are not from the software discipline and who are not
used to, or cannot easily understand, software development documents.
It is especially useful for higher-level documents like the requirement specification, etc.
The goal is to present the document both within and outside the software discipline
in order to gather information regarding the topic under documentation.
2. Technical review:
It is led by a trained moderator but can also be led by a technical expert.
Defects are found by experts (such as architects, designers, key users) who
focus on the content of the document.
The goal is to ensure that at an early stage the technical concepts are used correctly.
3. Inspection:
During inspection the documents are prepared and checked thoroughly by the
reviewers before the meeting
A separate preparation is carried out during which the product is examined and
the defects are found
It helps the author to improve the quality of the document under inspection
It helps participants learn from the defects found and prevent the occurrence of similar defects
Find a 'champion'
A champion is needed, one who will lead the process on a project or organizational level. They need expertise, enthusiasm and a practical mindset in order to guide
moderators and participants.
The authority of this champion should be clear to the entire organization.
Management support is also essential for success.
Pick things that really count
Select the documents for review that are most important in a project. Reviewing
highly critical, upstream documents like requirements and architecture will most
certainly show the benefits of the review process to the project.
These invested review hours will have a clear and high return on investment.
In addition make sure each review has a clear objective and the correct type of
review is selected that matches the defined objective.
Explicitly plan and track review activities
To ensure that reviews become part of the day-to-day activities, the hours to be
spent should be made visible within each project plan.
The engineers involved are prompted to schedule time for preparation and,
very importantly, rework.
Train participants
It is important that training is provided in review techniques, especially the more
formal techniques, such as inspection.
Otherwise the process is likely to be impeded by those who don't understand the
process and the reasoning behind it.
Manage people issues
Reviews are about evaluating someone's document.
Some reviews tend to get too personal when they are not well managed by the
moderator.
Follow the rules but keep it simple
Follow all the formal rules until you know why and how to modify them, but make
the process only as formal as the project culture or maturity level allows.
Do not become too theoretical or too detailed.
Continuously improve process and tools
Continuous improvement of process and supporting tools (e.g. checklists),based
upon the ideas of participants, ensures the motivation of the engineers involved.
Motivation is the key to a successful change process.
Report results
Report quantified results and benefits to all those involved as soon as possible,
and discuss the consequences of defects if they had not been found this early.
Costs should of course be tracked, but benefits, especially when problems don't
occur in the future, should be made visible by quantifying the benefits as well as
the costs.
Just do it!
The process is simple but not easy.
Each step of the process is clear, but experience is needed to execute them
correctly.
In essence, the tester is concentrating on what the software does, not how it
does it.
Functional testing is concerned with what the system does, its features or
functions. Non-functional testing is concerned with examining how well the
system does something, rather than what it does.
It can be applied at any level of testing and is often a good technique to use first.
The idea behind this technique is to divide (i.e. to partition) a set of test conditions into
groups or sets that can be considered the same (i.e. the system should handle them
equivalently).
If one condition in a partition works, we assume all of the conditions in that partition will
work, and so there is little point in testing any of these others.
Similarly, if one of the conditions in a partition does not work, then we assume that none
of the conditions in that partition will work so again there is little point in testing any
more in that partition.
For example, a savings account in a bank has a different rate of interest depending on the
balance in the account. In order to test the software that calculates the interest due, we
can identify the ranges of balance values that earn the different rates of interest.
For example, 3% rate of interest is given if the balance in the account is in the range of
$0 to $100, 5% rate of interest is given if the balance in the account is in the range of
$100 to $1000, and 7% rate of interest is given if the balance in the account is $1000 and
above, we would initially identify three valid equivalence partitions and one invalid
partition as shown below.
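Since the original partition table is not reproduced here, the following Python sketch lists the four partitions with one representative test value each; the interest_rate function is a hypothetical stand-in for the system under test.

def interest_rate(balance):
    # Hypothetical implementation under test.
    if balance < 0:
        raise ValueError("negative balance")
    if balance <= 100:
        return 0.03
    if balance < 1000:
        return 0.05
    return 0.07

# One representative value per partition; None marks the invalid partition.
partitions = [
    ("invalid: balance below $0",       -10.00, None),
    ("valid: $0 to $100 -> 3%",          50.00, 0.03),
    ("valid: over $100 to $1000 -> 5%", 500.00, 0.05),
    ("valid: $1000 and above -> 7%",   2000.00, 0.07),
]

for name, value, expected in partitions:
    if expected is None:
        try:
            interest_rate(value)
            print(name, "-> FAIL (no error raised)")
        except ValueError:
            print(name, "-> PASS")
    else:
        print(name, "->", "PASS" if interest_rate(value) == expected else "FAIL")

Four tests, one per partition, are enough under the equivalence assumption.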
In the above example we have identified four partitions, even though the
specification mentioned only three.
This shows a very important task of the tester: a tester should not only test
what is in the specification, but should also think about things that haven't been
specified.
In this case we have thought of the situation where the balance is less than zero.
An inexperienced tester (let's call him Robbin) might have thought that a good
set of tests would be to test every $50.
That would give the following tests: $50.00, $100.00, $150.00, $200.00, $250.00,
say up to $800.00 (then Robbin would have got tired of it and thought that
enough tests had been carried out).
But look at what Robbin has tested: only two out of four partitions! So if the
system does not correctly handle a negative balance or a balance of $1000 or
more, he would not have found these defects so the naive approach is less
effective than equivalence partitioning.
At the same time, Robbin has four times more tests (16 tests versus our four
tests using equivalence partitions), so he is also much less efficient. This is why
we say that using techniques such as this makes testing both more effective and
more efficient.
Note that when we say a partition is invalid, it doesn't mean that it represents a
value that cannot be entered by a user or a value that the user isn't supposed to
enter. It just means that it is not one of the expected inputs for this particular field.
Boundary value analysis:
Here we have both valid boundaries (in the valid partitions) and invalid
boundaries (in the invalid partitions).
To apply boundary value analysis, we will take the minimum and maximum
(boundary) values from the valid partition (1 and 99 in this case) together with the
first or last value respectively in each of the invalid partitions adjacent to the valid
partition (0 and 100 in this case).
This is called an open boundary, because one of the sides of the partition is left
open, i.e. not defined.
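For the 1 to 99 example above, here is a small Python sketch of the selected boundary values; the is_valid function is a hypothetical unit under test.

def is_valid(value):
    # Hypothetical unit under test: accepts integers 1..99.
    return 1 <= value <= 99

# Min and max of the valid partition, plus the adjacent values
# in the neighbouring invalid partitions.
boundary_cases = [
    (0, False),    # last value of the lower invalid partition
    (1, True),     # minimum of the valid partition
    (99, True),    # maximum of the valid partition
    (100, False),  # first value of the upper invalid partition
]

for value, expected in boundary_cases:
    assert is_valid(value) == expected, value
print("all boundary tests passed")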
Decision Tables: The techniques of equivalence partitioning and boundary value analysis are often
applied to specific situations or inputs; decision tables are better suited to combinations of conditions that produce different actions.
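As the text above only introduces decision tables, here is a small illustrative sketch; the login rules are invented for demonstration, and each rule (column) of the table becomes one test.

def login(valid_username, valid_password):
    # Hypothetical system under test.
    if not valid_username:
        return "show username error"
    if not valid_password:
        return "show password error"
    return "grant access"

# Each rule maps a combination of conditions to an expected action.
rules = [
    ((True,  True),  "grant access"),
    ((True,  False), "show password error"),
    ((False, True),  "show username error"),
    ((False, False), "show username error"),
]

for (u, p), expected in rules:
    assert login(u, p) == expected, (u, p)
print("all decision-table rules verified")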
Experience-Based Testing Techniques: In experience-based techniques, people's knowledge, skills and background
are of prime importance to the test conditions and test cases.
The experience of both technical and business people is required, as they bring
different perspectives to the test analysis and design process. Because of their
previous experience with similar systems, they may have an idea of what could
go wrong, which is very useful for testing.
Experience-based techniques go together with specification-based and
structure-based techniques, and are also used when there is no specification, or if the
specification is inadequate or out of date.
This may be the only type of technique used for low-risk systems, but this
approach may be particularly useful under extreme time pressure; in fact this is
one of the factors leading to exploratory testing.
Error-Guessing
Error guessing is a technique where experienced and good testers are
encouraged to think of situations in which the software may not be able to cope.
Some people seem to be naturally good at testing and others are good testers
because they have a lot of experience either as a tester or working with a
particular system and so are able to find out its weaknesses.
It also saves a lot of time, because the assumptions and guesses made by
experienced testers find defects which would otherwise not be found.
The success of error guessing is very much dependent on the skill of the tester,
as good testers know where the defects are most likely to be.
This is why an error guessing approach, used after more formal techniques have
been applied to some extent, can be very effective. In using more formal
techniques, the tester is likely to gain a better understanding of the system, what
it does and how it works.
State Transition Testing: State transition testing is used where some aspect of the system can be
described in what is called a finite state machine. This simply means that the
system can be in a (finite) number of different states, and the transitions from one
state to another are determined by the rules of the machine. This is the model
on which the system and the tests are based.
Any system where you get a different output for the same input, depending on
what has happened before, is a finite state system.
A finite state system is often shown as a state diagram (see Figure 4.2).
One of the advantages of the state transition technique is that the model can be
as detailed or as abstract as you need it to be.
Where a part of the system is more important (that is, requires more testing) a
greater depth of detail can be modeled.
Where the system is less important (requires less testing), the model can use a
single state to signify what would otherwise be a series of different states.
A state transition model has four basic parts:
The states that the software may occupy (open/closed or funded/insufficient
funds);
The transitions from one state to another (not all transitions are allowed);
The events that cause a transition (closing a file or withdrawing money);
The actions that result from a transition (an error message or being given your
cash).
Hence we can see that in any given state, one event can cause only one action,
but that the same event from a different state may cause a different action
and a different end state.
First test case here would be the normal situation, where the correct PIN is
entered the first time.
A second test (to visit every state) would be to enter an incorrect PIN each time,
so that the system eats the card.
A third test we can do is where the PIN was incorrect the first time but OK the
second time, and another test where the PIN was correct on the third try. These
tests are probably less important than the first two.
Note that a transition does not need to change to a different state. So there could
be a transition from 'access account' which just goes back to 'access account' for
an action such as 'request balance'.
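Here is a minimal Python sketch of the PIN example as a finite state machine; the state names and the three-attempt rule are simplified assumptions based on the description above.

# (state, event) -> (next state, action)
TRANSITIONS = {
    ("wait_pin_1", "correct_pin"): ("access_account", "grant access"),
    ("wait_pin_1", "wrong_pin"):   ("wait_pin_2", "ask again"),
    ("wait_pin_2", "correct_pin"): ("access_account", "grant access"),
    ("wait_pin_2", "wrong_pin"):   ("wait_pin_3", "ask again"),
    ("wait_pin_3", "correct_pin"): ("access_account", "grant access"),
    ("wait_pin_3", "wrong_pin"):   ("card_eaten", "eat card"),
    # A transition back to the same state, as noted above:
    ("access_account", "request_balance"): ("access_account", "show balance"),
}

def run(events, state="wait_pin_1"):
    # Drive the model through a sequence of events.
    for event in events:
        state, action = TRANSITIONS[(state, event)]
    return state

# Test 1: correct PIN first time.
assert run(["correct_pin"]) == "access_account"
# Test 2: wrong PIN three times, so the system eats the card.
assert run(["wrong_pin", "wrong_pin", "wrong_pin"]) == "card_eaten"
# Test 3: wrong first, correct second.
assert run(["wrong_pin", "correct_pin"]) == "access_account"
print("state transition tests passed")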
Traceability: Test conditions should be able to be linked back to their sources in the test basis;
this is known as traceability.
Traceability can be horizontal through all the test documentation for a given test
level (e.g. system testing, from test conditions through test cases to test scripts)
or it can be vertical through the layers of development documentation (e.g. from
requirements to components).
Now the question may arise: why is traceability important? Let's
look at the following examples:
The requirements for a given function or feature have changed. Some of the
fields now have different ranges that can be entered.
A set of tests that has run OK in the past has now started creating serious
problems. What functionality do these tests actually exercise? Traceability
between the tests and the requirement being tested enables the functions or
features affected to be identified more easily.
Before delivering a new release, we want to know whether or not we have tested
all of the specified requirements in the requirements specification. We have the
list of tests that have passed, but was every requirement tested?
Importance of traceability: To identify the apt version of test cases to be used.
To identify which test cases can be reused or need to be updated.
To assist the debugging process, so that defects found when executing tests
can be traced back to the corresponding version of the requirement.
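A sketch of a simple traceability matrix in Python; the requirement and test-case identifiers are hypothetical.

# Requirement -> test cases that cover it (identifiers invented).
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # not yet covered
}

passed = {"TC-101", "TC-103"}

for req, tests in traceability.items():
    if not tests:
        print(req, ": NO TEST COVERAGE")
    elif all(t in passed for t in tests):
        print(req, ": fully tested and passed")
    else:
        print(req, ": has failing or unexecuted tests")

Such a matrix answers both questions above: which tests a changed requirement affects, and whether every requirement was tested before release.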
Independent Testing: The degree of independence avoids author bias and is often more effective at finding defects and failures.
When we think about how independent the test team is, it is really very important
to understand that independence is not an either/or condition, but a range:
At one end of the range lies the absence of independence, where the
programmer performs testing within the programming team.
Moving toward independence, we find an integrated tester or group of testers
working alongside the programmers, but still within and reporting to the
development manager.
Then, moving a little bit more towards independence, we might find a team of testers
who are independent and outside the development team, but reporting to project
management.
Near the other end of the continuum lies complete independence. We might see
a separate test team reporting into the organization at a point equal to the
development or project team.
We might find specialists in the business domain (such as users of the system),
specialists in technology (such as database experts), and specialists in testing
(such as security testers, certification testers, or test automation experts) in a
separate test team,
as part of a larger independent test team, or as part of a contract, outsourced test
team.
An independent tester can repeatedly find more, other, and different defects
than a tester working within a programming team, or a tester who is by
profession a programmer.
While business analysts, marketing staff, designers, and programmers bring their
own assumptions to the specification and implementation of the item under test,
an independent tester brings a different set of assumptions to testing and to
reviews, which often helps in exposing the hidden defects and problems
An independent tester who reports to senior management can report his results
honestly and without any concern for reprisal that might result from pointing out
problems in coworkers' or, worse yet, the manager's work.
An independent test team often has a separate budget, which helps ensure the
proper level of money is spent on tester training, testing tools, test equipment,
etc.
In addition, in some organizations, testers in an independent test team may find it
easier to have a career path that leads up into more senior roles in testing.
There is a possibility that the testers and the test team can get isolated. This can
take the form of interpersonal isolation from the programmers, the designers, and
the project team itself
or it can take the form of isolation from the broader view of quality and the
business objectives (e.g., obsessive focus on defects, often accompanied by a
refusal to accept business prioritization of defects).
This leads to communication problems, feelings of unfriendliness and hostility,
lack of identification with and support for the project goals, spontaneous blame
festivals and political backstabbing.
Even well-integrated test teams can suffer problems. Other project stakeholders
might come to see the independent test team rightly or wrongly as a
bottleneck and a source of delay.
Some programmers give up their responsibility for quality, saying, 'Well, we have
this test team now, so why do I need to unit test my code?'
Product.
Some products, like weapons systems and contract-development
software, tend to have well-specified requirements. This leads to
synergy with a requirements-based analytical strategy.
Business.
Business considerations and business continuity are often important. If
you can use a legacy system as a model for a new system, you can
use a model-based strategy.
What is Incident logging Or How to log an Incident in software testing?
When we talk about incidents we mean to indicate the possibility that a
questionable behavior is not necessarily a true defect.
We log these incidents so that we can keep the record of what we observed
and can follow up the incident and track what is done to correct it.
But it will be a good idea to log, report, track, and manage incidents found
during development and reviews because it gives useful information about the
early and cheaper defect detection and removal activities.
In some projects, a very large number of defects are found. Even on smaller
projects where 100 or fewer defects are found, it is very difficult to keep track of
all of them unless you have a process for reporting, classifying, assigning and
managing the defects from discovery to final resolution.
As with any written communication, it helps to have clear goals in mind when
writing. One common goal for such reports is to provide programmers, managers
and others with detailed information about the behavior observed and the defect.
Another is to support the analysis of trends in aggregate defect data, either for
understanding more about a particular set of problems or tests or for
understanding and reporting the overall level of system quality.
Finally, defect reports, when analyzed over a project and even across projects,
give information that can lead to development and test process improvements.
The programmers need the information in the report to find and fix the defects.
Before that happens, though, managers should review and prioritize the defects
so that scarce testing and developer resources are spent fixing and confirmation
testing the most important defects.
While many of these incidents will be user error or some other behavior not
related to a defect, some percentage of defects does escape from quality
assurance and testing activities.
The defect detection percentage, which compares field defects with test
defects, is an important metric of the effectiveness of the test process.
Here is an example of a DDP formula that would apply for calculating DDP for the
last level of testing prior to release to the field:
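The formula itself did not survive in these notes; its standard form compares the defects found by that test level with the defects that later escape to the field:

DDP = defects found in final test level / (defects found in final test level + defects found after release) x 100%

For example, if the last test level finds 90 defects and customers later report 10, DDP = 90 / (90 + 10) = 90%.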
Test Controls
Test control is about guiding and corrective actions to try to achieve the best
possible outcome for the project. The specific guiding actions depend on what we
are trying to control. Let us take few hypothetical examples:
A portion of the software under test will be delivered late but market conditions
dictate that we cannot change the release date.
At this point of time test control might involve re-prioritizing the tests so that we
start testing against what is available now.
For cost reasons, performance testing is normally run on weekday evenings
during off-hours in the production environment.
Due to unexpected high demand for your products, the company has temporarily
adopted an evening shift that keeps the production environment in use 18 hours
a day, five days a week. In this context test control might involve rescheduling the
performance tests for the weekend.
Configuration Management
Configuration management clearly identifies the items that make up the
software or system. These items include source code, test scripts, third-party
software, hardware, data and both development and test documentation.
Configuration management is also about making sure that these items are
managed carefully, thoroughly and attentively during the entire project and
product life cycle.
Configuration management has a number of important implications for testing.
Like configuration management allows the testers to manage their testware and
test results using the same configuration management mechanisms.
Configuration management also supports the build process, which is important
for delivery of a test release into the test environment.
Simply sending Zip archives by e-mail will not be sufficient, because there are
too many opportunities for such archives to become polluted with undesirable
contents or to harbor left-over previous versions of items.
Especially in later phases of testing, it is critical to have a solid, reliable way of
delivering test items that work and are the proper version.
Last but not least, configuration management allows us to map what is being
tested to the underlying files and components that make it up.
This is very important. Let us take an example: when we report defects, we need
to report them against something, something which is version controlled.
If it is not clear what we found the defect in, the programmers will have a very
tough time finding the defect in order to fix it. For the kind of test reports
discussed earlier to have any meaning, we must be able to trace the test results
back to what exactly we tested.
Ideally, when testers receive an organized, version-controlled test release from a
change-managed source code repository, it comes along with a test item transmittal
report or release notes.
[IEEE 829] provides a useful guideline for what goes into such a report.
Release notes are not always so formal and do not always contain all the
information shown.
Configuration management is a very complex topic, so advance
planning is very important to make it work. During the project planning stage,
and perhaps as part of your own test plan, make sure that configuration
management procedures and tools are selected.
As the project proceeds, the configuration process and mechanisms must be
implemented, and the key interfaces to the rest of the development process
should be documented.
Risk based testing is basically a testing done for the project based on
risks. Risk based testing uses risk to prioritize and emphasize the appropriate
tests during test execution.
Each potential failure has a likelihood of occurring and is also associated with an
impact. Since there might not be sufficient time to test all functionality, risk-based
testing involves testing the functionality which has the highest impact and
probability of failure.
Risk-based testing is the idea that we can organize our testing efforts in a way
that reduces the residual level of product risk when the system is deployed.
Risk-based testing starts early in the project, identifying risks to system quality
and using that knowledge of risk to guide testing planning, specification,
preparation and execution.
Risk-based testing also involves measuring how well we are doing at finding and
removing defects in critical areas.
Risk-based testing can also involve using risk analysis to identify proactive
opportunities to remove or prevent defects through non-testing activities and to
help us select which test activities to perform.
As risks evaporate and new ones emerge, adjust your test effort to stay focused
on the current crop.
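A sketch of the usual likelihood-times-impact prioritization in Python; the risk items and the 1-5 scales are invented for illustration.

# (feature, likelihood of failure 1-5, impact of failure 1-5)
risks = [
    ("payment processing", 4, 5),
    ("report export",      2, 2),
    ("user login",         3, 5),
]

# Risk score = likelihood x impact; test the riskiest areas first.
for feature, likelihood, impact in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    print(feature, "-> risk score", likelihood * impact)

Re-running this ranking as risks evaporate and new ones emerge keeps the test effort focused on the current crop.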
An incident management tool helps:
To prioritize incidents.
To track incident status (e.g. open, rejected, duplicate, deferred, ready for
confirmation test, closed).
A configuration management tool helps:
To store information about versions and builds of the software and testware.
To keep track of which versions belong with which configurations (e.g. operating
systems, libraries, browsers).
Baselining (e.g. all the configuration items that make up a specific release).
Test plan contents include: test deliverables, approach, schedule.
What are the important factors for the software testing tool selection?
While introducing the tool in the organization it must match a need within the
organization, and solve that need in a way that is both effective and efficient.
The tool should help in building the strengths of the organization and should also
address its weaknesses.
The organization needs to be ready for the changes that will come along with the
new tool. If the current testing practices are not good enough and the
organization is not mature, then it is always recommended to improve testing
practices first rather than to try to find tools to support poor practices.
Automating chaos just gives faster chaos!
The following factors are important during tool selection:
Assessment of the organization's maturity (e.g. readiness for change);
Identification of the areas within the organization where tool support will help to
improve testing processes;
Proof-of-concept to see whether the product works as desired and meets the
requirements and objectives defined for it;
For example, different settings for a static analysis tool, different reports from a
test management tool, different scripting and comparison techniques for a test
execution tool or different load profiles for a performance-testing tool.
The objectives for a pilot project for a new tool are:
To see how the tool would fit with existing processes or documentation, how
those would need to change to work well with the tool and how to use the tool to
streamline existing processes;
To decide on standard ways of using the tool that will work for all potential users
(e.g. naming conventions, creation of libraries, defining modularity, where
different elements will be stored, how they and the tool itself will be maintained);
To evaluate the pilot project against its objectives (have the benefits been
achieved at reasonable cost?).
People often make mistakes by underestimating the time, cost and effort
for the initial introduction of a tool: Introducing something new into an
organization is hardly straightforward. Once you purchase a tool, you want to
have a number of people being able to use the tool in a way that will be
beneficial.
There will be some technical issues to overcome, but there will also be
resistance from other people; both need to be handled in such a way that the
tool will be a success.
Just think back to the last time you tried something new for the very first time
(learning to drive, riding a bike, skiing). Your first attempts were unlikely to be
very good but with more experience and practice you became much better.
Mostly people underestimate the effort required to maintain the test assets
generated by the tool: Generally people underestimate the effort required to
maintain the test assets generated by the tool. Because of the insufficient
planning for maintenance of the assets that the tool produces there are chances
that the tool might end up as shelf-ware, along with the previously listed risks.
People depend on the tool a lot (over-reliance on the tool): Since there are
many benefits that can be gained by using tools to support testing like reduction
of repetitive work, greater consistency and repeatability, etc.
people start to depend on the tool a lot. But tools are just software: they
can do only what they have been designed to do (at least a good quality tool
can), but they cannot do everything.
When your computer's response time gets slower and slower, but
improves after rebooting, this may be because of a memory leak, where
programs do not correctly release blocks of memory back to the operating
system.
Sooner or later the system will run out of memory completely and stop. Hence,
rebooting restores all of the memory that was lost, so the performance of the
system is now restored to its normal state.
Another form of dynamic analysis for websites is to check whether each link does
actually link to something else (this type of tool may be called a web spider).
The tool does not know if you have linked to the correct page, but at least it can
find dead links, which may be helpful.
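A minimal sketch of such a link checker using only the Python standard library; the starting URL is a placeholder.

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    # Collects the href targets of all anchor tags on a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    # Fetch a page and report links that do not resolve (dead links).
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    for link in parser.links:
        target = urljoin(page_url, link)
        try:
            urlopen(target)
        except (HTTPError, URLError) as exc:
            print("DEAD LINK:", target, "-", exc)

# check_links("http://example.com/")  # placeholder URL

As the text notes, such a tool cannot tell whether a link points to the correct page, only whether it resolves at all.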
Validation is done at the end of the development process and takes place after
verifications are completed.
Am I accessing the right data (in terms of the data required to satisfy the
requirement).
1. During verification if some defects are missed then during validation process it
can be caught as failures.
2. If during verification some specification is misunderstood and development had
happened then during validation process while executing that functionality the
difference between the actual result and expected result can be understood.
3. Validation is done during testing like feature testing, integration testing, system
testing, load testing, compatibility testing, stress testing, etc.
4. Validation helps in building the right product as per the customer's requirements
and helps in satisfying their needs.
Validation is basically done by the testers during the testing. While validating the
product if some deviation is found in the actual result from the expected result
then a bug is reported or an incident is raised.
Not all incidents are bugs. But all bugs are incidents. Incidents can also be of
type Question where the functionality is not clear to the tester.
Hence, validation helps in unfolding the exact functionality of the features and
helps the testers to understand the product in a much better way. It helps in making
the product more user-friendly.
Suppose you are building a table. Here the verification is about checking all the
parts of the table, whether all the four legs are of correct size or not.
If one leg of the table is not of the right size, it will unbalance the end product. Similar
behavior is also noticed in the case of a software product or application.
Am I accessing the data right (in the right place; in the right way).
As the defects are detected at an early stage, the rework cost is most
often relatively low.
Types of defects that are easier to find during static testing are:
deviation from standards, missing requirements, design defects, non-maintainable code and inconsistent interface specifications.
If a specified function has not been implemented, or a function was omitted from
the specification, then structure-based techniques cannot say anything about it;
they only look at a structure which is already there.
The purpose of the smoke testing is to ensure that the critical functionalities of an
application are working fine.
It is also known as Build verification testing where the build is verified by testing
the important features of the application and then declaring it as good to go for
further detailed testing.
Smoke testing can be done by developers before releasing the build to the
testers and post this it is also tested by the testing team to ensure that the build
is stable enough to perform the detailed testing.
Usually smoke testing is performed with positive scenarios and with valid data.
It is a type of shallow and wide testing because it covers all the basic and
important functionalities of an application.
If all the 4 critical components work fine then the build is stable enough to
proceed with detailed testing.
This is known as Smoke testing.
When to use smoke testing
Smoke testing is used in the following scenarios:
Smoke testing is done to ensure that the basic functionalities of the application
are working fine.
It helps in finding the issues that got introduced by the integration of components.
It helps in verifying the issues fixed in the previous build are NOT impacting the
major functionalities of the application.
It's a non-exhaustive testing with a small number of test cases, because of which
we are not able to find the other critical issues.
Smoke testing is not performed with negative scenarios and with invalid data.
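A sketch of what a smoke suite might look like using pytest; the marker name and the App facade are assumptions made for illustration (a real suite would drive the actual application).

import pytest

class App:
    # Hypothetical application facade used by the smoke checks.
    def start(self): return True
    def login(self, user, password): return True
    def open_main_screen(self): return True
    def logout(self): return True

app = App()

# Wide but shallow checks over the critical functions only.
@pytest.mark.smoke
def test_application_starts():
    assert app.start()

@pytest.mark.smoke
def test_user_can_log_in_and_out():
    assert app.login("demo", "demo")
    assert app.logout()

@pytest.mark.smoke
def test_main_screen_opens():
    assert app.open_main_screen()

# Run only the smoke subset with: pytest -m smoke
# (register the "smoke" marker in pytest.ini to avoid warnings)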
Regression Testing vs. Retesting:
The purpose of regression testing is to check that code changes have not adversely
affected the existing functionality of the application. The purpose of retesting is to
ensure that the original defect has been fixed.
In case of regression testing the testing style is generic and the tests are often
automated. In case of retesting the testing is done in a planned way and is not
automated.
During regression testing even the passed test cases are executed. During retesting
only failed test cases are re-executed.
Regression testing is carried out to check for unexpected side effects. Retesting is
carried out to ensure that the original issue is working as expected.
Regression testing is done only when any new feature is implemented or any
modification or enhancement has been done to the code. Retesting is executed in
the same environment with the same data but in a new build.
Test cases of regression testing can be obtained from the specification documents
and bug reports. Test cases of retesting can be obtained only when the testing starts.
What are Security testing tools in software testing?
Security testing tools can be used to test security of the system by trying to
break it or by hacking it. The attacks may focus on the network, the support
software, the application code or the underlying database.
To identify viruses;
To do the security checks during operation, e.g. for checking integrity of files, and
intrusion detection, e.g. checking results of test attacks.
E.g., if we want to volume test our application with a specific database size, we
need to expand our database to that size and then test the application's
performance on it.
Volume testing is a term given and described in Glenford Myers' The Art
of Software Testing (1979). Here's his definition: 'Subjecting the program to
heavy volumes of data. The purpose of volume testing is to show that the
program cannot handle the volume of data specified in its objectives' (p. 113).
The goals of such tests may be to ensure the software does not crash in
conditions of insufficient computational resources (such as memory or disk
space).
Load testing can be done under controlled lab conditions to compare the
capabilities of different systems or to accurately measure the capabilities of a
single system.
Load testing involves simulating real-life user load for the target application. It
helps you determine how your application behaves when multiple users hit it
simultaneously.
Load testing differs from stress testing, which evaluates the extent to which a
system keeps working when subjected to extreme work loads or when some of
its hardware or software has been compromised.
The primary goal of load testing is to define the maximum amount of work a
system can handle without significant performance degradation.
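A minimal sketch of simulating concurrent users with the Python standard library; the URL and user count are placeholders, and a real load test would use a dedicated tool.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.com/"  # placeholder target
USERS = 20                   # simulated concurrent users

def one_request(_):
    # Time a single simulated user's request.
    start = time.time()
    urlopen(URL).read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    times = list(pool.map(one_request, range(USERS)))

print("max response time: %.3fs" % max(times))
print("avg response time: %.3fs" % (sum(times) / len(times)))

Raising USERS step by step shows the point at which response times degrade, which is the maximum load the text describes.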
It is easy to manage due to the rigidity of the model each phase has specific
deliverables and a review process.
In this model phases are processed and completed one at a time. Phases do not
overlap.
Waterfall model works well for smaller projects where requirements are very well
understood.
Not suitable for the projects where requirements are at a moderate to high risk of
changing.
This model is used only when the requirements are very well known, clear and
fixed.
Technology is understood.
Cycles are divided up into smaller, more easily managed modules. Each module
passes through the requirements, design, implementation and testing phases.
A working version of the software is produced during the first module, so you have
working software early on during the software life cycle. Each subsequent
release of the module adds function to the previous release.
In the diagram above, when we work incrementally we are adding piece by piece,
but expect that each piece is fully finished. Thus we keep on adding the pieces until
it's complete.
As in the image above a person has thought of the application. Then he started
building it and in the first iteration the first module of the application or product is
totally ready and can be demoed to the customers.
Likewise in the second iteration the other module is ready and integrated with the
first module. Similarly, in the third iteration the whole product is ready and
integrated. Hence, the product got ready step by step.
Generates working software quickly and early during the software life cycle.
This model is more flexible: it is less costly to change scope and requirements.
Easier to manage risk because risky pieces are identified and handled during their
iteration.
Needs a clear and complete definition of the whole system before it can be
broken down and built incrementally.
The spiral model is similar to the incremental model, with more emphasis
placed on risk analysis. The spiral model has four phases: Planning, Risk
Analysis, Engineering and Evaluation.
Evaluation phase: This phase allows the customer to evaluate the output of the
project to date before the project continues to the next spiral.
Since in this methodology a working model of the system is provided, the users
get a better understanding of the system being developed.
Prototype model should be used when the desired system needs to have a lot of
interaction with the end users.
Typically, online systems and web interfaces, which have a very high amount of
interaction with end users, are best suited for the prototype model. It might take a
while for a system to be built that allows ease of use and needs minimal training
for the end user.
Prototyping ensures that the end users constantly work with the system and
provide a feedback which is incorporated in the prototype to result in a useable
system. They are excellent for designing good human computer interface
systems.
Generally a baseline is defined as a line that forms the base for any construction
or for measurement, comparisons or calculations.
Baseline testing also helps a great deal in solving most of the problems that are
discovered. A majority of the issues are solved through baseline testing.
As per the IEEE, documentation testing covers 'documentation describing plans for,
or results of, the testing of a system or component'. Types include test case
specification, test incident report, test log, test plan, test procedure and test report.
Hence the testing of all the above mentioned documents is known as documentation testing.
This is one of the most cost-effective approaches to testing. If the documentation
is not right, there will be major and costly problems.
These range from running the documents through a spelling and grammar
checking device, to manually reviewing the documentation to remove any
ambiguity or inconsistency.
Documentation testing can start at the very beginning of the software process
and hence save large amounts of money, since the earlier a defect is found the
less it will cost to be fixed.
In this approach individual modules are not integrated until and unless all the
modules are ready.
In Big Bang integration testing all the modules are integrated without performing
any integration testing and then its executed to know whether all the integrated
modules are working fine or not.
This approach is generally followed by developers who follow the 'run it
and see' approach.
In case any bug arises, the developers have to detach the integrated modules
in order to find the actual cause of the bug.
Suppose a system consists of four modules as displayed in the diagram above. In big
bang integration all the four modules Module A, Module B, Module C and Module D are
integrated simultaneously and then the testing is performed. Hence in this approach no
individual integration testing is performed because of which the chances of critical
failures increases.
Advantage of Big Bang Integration:
Big Bang testing has the advantage that everything is finished before integration
testing starts.
Disadvantages of Big Bang Integration:
It is very difficult to trace the cause of failures because of this late integration.
The chances of having critical failures are more because of integrating all the
components together at same time.
If any bug is found then it is very difficult to detach all the modules in order to find
out the root cause of it.
Environmental conditions
Intentional damage
Without having code we can test the specifications. About 55% of all the bugs
present in the product are because of mistakes present in the specification.
Hence testing the specifications can save a lot of time and cost in the later
stages of the product.
Errors in use of the system or product or application may arise because of the
following reasons:
Environmental conditions:
Because of the wrong setup of the testing environment, testers may report
defects or failures. As per recent surveys it has been observed that about
40% of the testers' time is consumed because of environment issues, and this
has a great impact on quality and productivity. Hence proper test environments
are required for quality and on-time delivery of the product to the customers.
Intentional damage:
The defects and failures reported by the testers while testing the product or the
application may arise because of the intentional damage.
Errors found in the earlier stages of development reduce our cost of
production. Hence it's very important to find errors at an early stage. This
could be done by reviewing the specification documents or by walkthroughs. The
downward flow of defects will increase the cost of production.
Errors: How many errors do users make, how severe are these
errors and how easily can they recover from the errors?
Satisfaction: How much does the user like using the system?
Efficiency testing: Efficiency testing tests the amount of code and testing
resources required by a program to perform a particular function. Software
test efficiency is the number of test cases executed divided by unit of time
(generally per hour).
Portability testing: It refers to the process of testing the ease with which
a computer software component or application can be moved from one
environment to another, e.g. moving of any application from Windows
2000 to Windows XP.