
1) What is Testing?

Testing is a process of executing a program with the intent of finding an error. A good test is one
that has a high probability of finding an as yet undiscovered error. A successful test is one that
uncovers an as yet undiscovered error. The objective is to design tests that systematically
uncover different classes of errors and do so with a minimum amount of time and effort.

Secondary benefits include


• Demonstrate that software functions appear to be working according to specification.
• That performance requirements appear to have been met.
• Data collected during testing provides a good indication of software reliability and some
indication of software quality.
Testing cannot show the absence of defects; it can only show that defects are present.

Testing involves operation of a system or application under controlled conditions and evaluating
the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C,
then D should happen'). The controlled conditions should include both normal and abnormal
conditions. Testing should intentionally attempt to make things go wrong to determine if things
happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes
they're the combined responsibility of one group or individual. Also common are project teams
that include a mix of testers and developers who work closely together, with overall QA processes
monitored by project managers. It will depend on what best fits an organization's size and
business structure.

2) What is Quality Assurance?


Software QA involves the entire software development PROCESS - monitoring and improving the
process, making sure that any agreed-upon standards and procedures are followed, and ensuring
that problems are found and dealt with. It is oriented to 'prevention'.

3) What is Software Quality?


Quality software is reasonably bug-free, delivered on time and within budget, meets requirements
and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will
depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle
view of the 'customers' of a software development project might include end-users, customer
acceptance testers, customer contract officers, customer management, the development
organization's management, developers, testers, salespeople and accountants, future software
maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have
their own slant on 'quality' - the accounting department might define quality in terms of profits
while an end-user might define quality as user-friendly and bug-free.

4) What is Quality Control?


The processes or methods used to monitor work and its results, and to check them for
adherence/conformance to requirements.

5) List the difference between Quality Control and Quality Assurance?


Quality Assurance: designed to prevent defects.
Quality Control: designed to detect and correct defects.

6) What is unit Testing?

The most 'micro' scale of testing is unit testing: testing a particular function or code module. It is
typically done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. It is not always easily done unless the application has a well-
designed architecture with tight code; it may require developing test driver modules or test
harnesses.
Or
Unit Testing may be defined as the verification and validation of an individual module or 'unit' of
software. It is the most "micro" scale of testing for testing particular functions or code modules.
Unit testing may require developing test driver modules or test harnesses. In addition, unit testing
often requires detailed knowledge of the internal program design.
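As a minimal sketch (not from the original text), a unit test for a hypothetical apply_discount
function might look like this using Python's built-in unittest framework; the function and the values
are assumptions made purely for illustration:

import unittest

def apply_discount(price, percent):
    # Hypothetical unit under test: returns the price reduced by the given percent.
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100.0, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()

Each test exercises one behavior of a single unit in isolation, which is exactly the 'micro' scale
described above.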

7) Is unit testing important?


Developers should unit test their own code when possible. However, on many projects these tests
go undocumented and sometimes lack a methodical approach. In other words, without a list or
some other document to work from, it becomes increasingly difficult for a developer to track
testing progress as the size of an application increases. Thus, a test plan guides this effort and
contains a list of units to be tested along with the suggested approach for executing these tests.
In addition to creating a test plan, the QA team can write unit test conditions to effectively
construct a format from which unit test results will be obtained. Quality assurance members often
allocate these test scripts to the development team for execution in the development
environment.

This approach works well for time critical projects and/or projects with geographically dispersed
members. However, test conditions created by a QA team should not be substituted for routine
unit testing. Instead, QA efforts are used to complement routine unit tests.

'Routine unit testing' includes identifying all fields and testing for input, output, upper and lower
boundaries, as well as calculations when appropriate. All standard GUI elements should be
identified and validated. These include scroll bars, push buttons, links, etc.

It is vital to ensure that each unit of the application will be tested and documented before
inclusion in the next build.

Software development projects often contain a mix of developer experience levels. Using a third
party to create unit test conditions and scripts adds team structure, which in turn reduces risks
associated with software development.

Flow of Unit Test Conditions

While the individual development team members are executing unit tests for all code written, the
QA team creates unit test conditions using design, requirements or other documents, including the
application itself, if it exists.

Test conditions are then allocated to the development team using a central network repository
such as Visual SourceSafe. The method in which test conditions are allocated is determined by
the Project Manager and is often based on the modules each person is working on. Thus, test
conditions and scripts designed by the QA team are executed in conjunction with routine unit
tests. Each executed set of test conditions and scripts is then posted in the repository by the
developer who executed them, unless otherwise noted by the PM. This method allows the lead
developer to track all unit test results. This is particularly helpful on projects that require a close
working relationship with the client's development team. The results serve as a checklist for fixes.
Failed conditions and scripts can also be associated with 'larger' defects found later in the
development cycle, if not fixed early on.

8) What is integration Testing?

Testing of combined parts of an application to determine if they function together correctly. The
'parts' can be code modules, individual applications, client and server applications on a network,
etc. This type of testing is especially relevant to client/server and distributed systems.
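As an illustrative sketch only (the names and logic are assumed, not taken from the original), an
integration test exercises two units together after each has passed its own unit tests:

import unittest

def parse_order(line):
    # Hypothetical unit 1: parses "item,quantity" into (name, int quantity).
    name, qty = line.split(",")
    return name.strip(), int(qty)

def total_price(quantity, unit_price):
    # Hypothetical unit 2: computes the total price for a quantity of items.
    return quantity * unit_price

class OrderIntegrationTest(unittest.TestCase):
    def test_parser_output_feeds_price_calculation(self):
        # The integration point: unit 2 consumes what unit 1 produces.
        name, qty = parse_order("widget, 3")
        self.assertEqual(name, "widget")
        self.assertEqual(total_price(qty, 2.50), 7.50)

if __name__ == "__main__":
    unittest.main()

The point is not to re-test each unit, but to verify that the combined parts cooperate correctly.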

9) Why Perform Integration Testing?


Many individuals use the terms System Testing and Integration Testing interchangeably, and for
simple applications that do not have many components, the criteria and test scripts required to
perform testing are similar. But as an application increases in complexity and size, and users
demand new functionality and features, the need to perform Integration Testing becomes more
obvious. Often there is a deadline that drives businesses to develop new applications, and in an
effort to preempt the market, the time for development (and of course testing) is generally
shortened as the project matures.

10) What is black box Testing?


Black box testing - not based on any knowledge of internal design or code. Tests are based on
requirements and functionality.
Or
Black box testing means treating the software as though it were put inside a black box. (i.e. we
don’t know any details of what’s inside.) We do, however, know what data must be put into the
system, and the results that should be output for each data input. We are thus concerned only
with the fact that the system works, and not with the details of how it works. Black box testing is
usually used during the software development phases, and ensures that all sensible data give the
right output, and all nonsensical data is rejected, or an appropriate error message is output.

As an example, consider a 'Surname' input field. We need to define a set of ‘rules’ that
‘Surnames’ will obey. This will probably be different from
system to system. Foreign surnames, for example, will have characters other than those in the
English language. French, for example, might contain characters like é, or German like ü. Also,
some names might be hyphenated, like Morgan-Hitchcock, and therefore contain a ‘-‘ sign.
Let’s concentrate on European surnames. Here it should be obvious that characters like ‘%, *, ^,
$ and £ etc.’ would not be allowed, but characters (like ê, ë, and ü for example.) in languages like
‘French and German’ etc. would be. We must also decide on a sensible length for ‘Surname’, and
25 characters might be acceptable in this context.
Our black box tests could, therefore, go something along the following lines.
(i) Define a set of acceptable characters like ‘lower case a to z’ and ‘upper case A to Z’, plus all
characters allowed in the languages to be included.
(ii) Limit the length of the input string to 25 characters. Perhaps limit the minimum to 2 characters.
(‘Ng’ is a valid surname at the author’s school, for example.)
(iii) Allow special characters like ‘-’ to be used in hyphenated surnames.
(iv) Reject characters outside of this range.
(v) We might want to ensure that the first character is upper case, but this could limit names like
‘deMorgan’, for example. A suitable subset of test data for black box testing can then be derived
from these rules, as the sketch below illustrates.
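A minimal sketch of such black box tests, assuming a hypothetical validate_surname function that
implements the rules above (the function here only stands in for the real system; only the inputs
and the expected outcomes matter):

import re
import unittest

def validate_surname(name):
    # Stand-in for the system under test: 2-25 letters (including common
    # accented European characters), with hyphens allowed.
    return bool(re.fullmatch(r"[A-Za-zÀ-ÖØ-öø-ÿ-]{2,25}", name))

class SurnameBlackBoxTest(unittest.TestCase):
    def test_acceptable_names_are_accepted(self):
        for name in ("Smith", "Ng", "Müller", "Morgan-Hitchcock"):
            self.assertTrue(validate_surname(name), name)

    def test_length_boundaries(self):
        self.assertTrue(validate_surname("A" * 25))   # at the 25-character limit
        self.assertFalse(validate_surname("A"))       # below the minimum of 2
        self.assertFalse(validate_surname("A" * 26))  # above the maximum of 25

    def test_nonsensical_characters_are_rejected(self):
        for name in ("Sm%th", "Jones$", "O*Brien", ""):
            self.assertFalse(validate_surname(name), name)

if __name__ == "__main__":
    unittest.main()

Note that the tests are written purely from the rules (the specification), not from any knowledge of
how the validation is implemented.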

11) What is white box Testing?
Based on knowledge of the internal logic of an application's code. Tests are based on coverage of
code statements, branches, paths, conditions.
Or
Also known as glass box, structural, clear box and open box testing. A software testing technique
whereby explicit knowledge of the internal workings of the item being tested is used to select
the test data. Unlike black box testing, white box testing uses specific knowledge of programming
to examine outputs. The test is accurate only if the tester knows what the program is supposed to
do. He or she can then see if the program diverges from its intended goal. White box testing does
not account for errors caused by omission, and all visible code must also be readable.
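As a contrast with the black box example earlier, here is a minimal, assumed sketch of a white box
test: the test data is chosen by looking at the code's structure so that every branch is executed
(the function and the threshold are purely illustrative):

import unittest

def classify_temperature(celsius):
    # Hypothetical unit under test with two branches.
    if celsius > 30:          # branch 1
        return "high"
    return "normal"           # branch 2

class ClassifyTemperatureWhiteBoxTest(unittest.TestCase):
    def test_true_branch(self):
        self.assertEqual(classify_temperature(31), "high")

    def test_false_branch_at_the_boundary(self):
        # 30 is deliberately chosen from the code to exercise the other branch.
        self.assertEqual(classify_temperature(30), "normal")

if __name__ == "__main__":
    unittest.main()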

For a complete software examination, both white box and black box tests are required.

The advantages of black box testing, by contrast, include:


• The test is unbiased because the designer and the tester are independent of each other.
• The tester does not need knowledge of any specific programming languages.
• The test is done from the point of view of the user, not the designer.
• Test cases can be designed as soon as the specifications are complete.

The disadvantages of black box testing include:


• The test can be redundant if the software designer has already run a test case.
• The test cases are difficult to design.
• Testing every possible input stream is unrealistic because it would take an inordinate amount
of time; therefore, many program paths will go untested.
12) What is Functional Testing?
Simply stated, functional testing verifies that an application does what it is supposed to do (and doesn't do what it
shouldn't do). For example, if you were functionally testing a word processing application, a
partial list of checks you would perform includes creating, saving, editing, spell checking and
printing documents. (Again, this list is quite incomplete!)
Or
Black-box type testing geared to functional requirements of an application; testers should do this
type of testing. This doesn't mean that the programmers shouldn't check that their code works
before releasing it (which of course applies to any stage of testing.)

13) What is a Test Case?

A test case specification documents the actual values used for INPUT along with the anticipated
OUTPUTS. A test case also identifies any constraints on the test procedure resulting from use of
that specific test case.

A test case should explain exactly what values or conditions will be sent to the software and what
result is expected.
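As a rough, illustrative sketch (the field names and values are assumptions, not a prescribed
format), a single test case written out as Python data might look like this:

# One test case: the exact input values, the anticipated output, and any
# constraint/precondition on the test procedure.
test_case = {
    "id": "TC-042",                        # hypothetical identifier
    "objective": "Withdrawal larger than balance is rejected",
    "precondition": "Account 1001 exists with a balance of 50.00",
    "input": {"account": "1001", "action": "withdraw", "amount": 80.00},
    "expected_output": "Error: insufficient funds; balance unchanged at 50.00",
}

print(test_case["id"], "-", test_case["objective"])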

14) What is a Test Data?


Test data is essential for executing test cases: there is hardly any test case without input data,
and the specific data you need is rarely lying around ready-made.

Test data should be built in a way that minimizes the testing effort. Good test data is simple,
expressive and representative, so that the results are easy to verify.

It is of vital importance to know all attributes (characteristics and values) of the test data used,
and the test data should cover the maximum range of test cases.
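A minimal, assumed sketch of keeping test data separate from test logic (sometimes called a
data-driven or parameterized test); the add_tax function and the values are illustrative only:

import unittest

def add_tax(amount, rate):
    # Hypothetical unit under test: adds a percentage tax to an amount.
    return round(amount * (1 + rate / 100.0), 2)

# Test data kept apart from the test code: simple, expressive, representative
# values whose expected results are easy to verify by hand.
TEST_DATA = [
    # (amount, rate, expected)
    (100.00, 0, 100.00),    # zero rate leaves the amount unchanged
    (100.00, 20, 120.00),   # round numbers, easy to check
    (50.00, 10, 55.00),     # a second typical value
]

class AddTaxDataDrivenTest(unittest.TestCase):
    def test_all_data_rows(self):
        for amount, rate, expected in TEST_DATA:
            with self.subTest(amount=amount, rate=rate):
                self.assertEqual(add_tax(amount, rate), expected)

if __name__ == "__main__":
    unittest.main()

Adding a new row to TEST_DATA extends the coverage without touching the test logic.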

15) What is a Use Case?


A use case is a modelling technique used to describe what a new system should do or what an
existing system already does from the user's point of view.

The functionality of the system is represented by a complete set of use cases. Each use case
specifies a "complete functionality": one general usage of the system.
Most importantly, use cases are an essential prerequisite for systematic testing.

Use cases must meet the requirements of the customer. Whenever available, they will be built,
adapted or matched using the test cases stored in our repository.

16) What is V&V Testing?


Validation: Is it the right model? Validation addresses functional, operational and physical
requirements.
Verification: Is the model right? Verification involves traceability analyses and evaluation of
development products.

17) How to test with use cases?


Use cases are the foundation our tests are based on. They are categorized and systematized
requirements that are built in a modular fashion and can be combined in any way, forming the
basis for complete testing.

A combination of use cases (so-called business cases) always corresponds to a customer's
specific workflow. These are the source for test cases and so-called test scenarios.

18) What is System Testing?


Black-box type testing that is based on overall requirements specifications; covers all combined
parts of a system.

19) What is a requirement?


1. A condition or capability needed by a user to solve a problem or achieve an objective.
2. A condition or capability that must be met or possessed by a system component to satisfy a
contract, standard, specification or other formally imposed document.
3. A document representing either point 1 or 2.

20) What is software Testing?

Software testing is the process of testing the functionality and correctness of software by running
it. Software testing is usually performed for one of two reasons: (1) defect detection, and (2)
reliability estimation.

21) What is Acceptance Testing?


Determining whether the software is satisfactory to an end-user or customer.

22) What is exploratory testing?


Often taken to mean a creative, informal software test that is not based on formal test plans or
test cases; testers may be learning the software as they test it.

23) What is ad-hoc testing?


Similar to exploratory testing, but often taken to mean that the testers have significant
understanding of the software before testing it.

24) What is scalability?


Scalability is a Web server's ability to maintain a site's availability, reliability, and performance as
the amount of simultaneous Web traffic, or load, hitting the Web server increases.
25) What is usability testing?
Involves testing a software or Web application as it will be used in the real world to
assess several factors, including:
• Does application functionality match the user's needs?
• Is the application easy to learn?
• How easy is it for the user to accomplish tasks with the application?
• Is the application tolerant of errors?
• Is it easy to remember how to use the application, or must the user "retrain" herself
every time she uses it?
• Does the application do what the user expects?
• Does the user enjoy using the application, or does he/she become easily frustrated
by it?

26) What is Comparison Testing?


Comparing software weaknesses and strengths to competing products.

27) What is Alpha Testing?


Testing of an application when development is nearing completion; minor design changes may
still be made as a result of such testing. Typically done by end-users or others, not by
programmers or testers.

28) What is Beta Testing?


Software developers test software to the best of their ability, but when it is used in the outside
world (in combination with thousands of other programs, different hardware combinations and
different computer components) errors often come to light. The software developers know this
and release pre-production versions of their software so that users can try it out in a wide variety
of different hardware and software environments. This is called Beta testing. The software
developers hope that the users will report any consistent errors that they find, so that the
solutions may be incorporated into the next version.
Or
Testing when development and testing are essentially completed and final bugs and problems
need to be found before final release. Typically done by end-users or others, not by programmers
or testers.

29) What is automation testing?
Test automation alleviates the tedium of manual testing by automatically executing a battery of
tests using an automated testing tool. The tool acts just as a user would, interacting with an
application to input data and verify expected responses. Implemented properly, an automated
regression battery can be run unattended and overnight, freeing up testers to concentrate on
testing new features and functionality.
30) What is impact analysis?
Software change impact analysis estimates what will be affected in software and related
documentation if a proposed change is made.

Examples
Cross-referenced listings to determine parts that reference a variable or a procedure
Program slicing to determine the subset of a program that can affect the value of a variable
Browsing of (hyperlinked) files
Using traceability matrices
Configuration management systems to find and track changes
Consulting design and specification documents to determine the scope of a change

31) What is smoke testing?


"Smoke testing" is an effective compromise in this situation and is gaining increased
acceptance in the software industry. In this approach, units are integrated at a measured
pace. As soon as a small number of units are added, a test version is generated and
"smoke tested," wherein a small number of tests are run to gain confidence that the
integrated product will function as expected. The intent is neither to thoroughly test the
new unit(s) nor to completely regression test the overall system. The objective is simply
to build confidence in the expanded system. Smoke testing should occur at least twice a
week, more frequently if practical
Or
A quick-and-dirty test that the major functions of a piece of software work. Originated in the
hardware testing practice of turning on a new piece of hardware for the first time and considering
it a success if it does not catch on fire.
OR
Smoke test: When I turn it on, does it catch fire?
* Smoke tests are not necessarily exhaustive, but are end to end
* Smoke tests are capable of detecting major problems
* Without a smoke test, a build is not much use (but it is better than nothing)
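As a hedged illustration (the application functions here are stand-ins invented for the sketch), a
smoke test is just a handful of broad, end-to-end checks run against every new build:

import unittest

def start_application():
    # Stand-in for launching the build under test; returns an app handle.
    return {"status": "running", "version": "1.0"}

def open_main_screen(app):
    # Stand-in for the most basic user-visible operation.
    return app["status"] == "running"

class SmokeTest(unittest.TestCase):
    # Not exhaustive: just "when I turn it on, does it catch fire?"
    def test_application_starts(self):
        self.assertEqual(start_application()["status"], "running")

    def test_main_screen_opens(self):
        self.assertTrue(open_main_screen(start_application()))

if __name__ == "__main__":
    unittest.main()

If either check fails, the build is rejected before any deeper testing is attempted.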

32) What is big bang testing?


Testing strategy wherein you design and code the entire program before testing; suitable only for
very small programs.

33) What is regression Testing?


Re-testing after fixes or modifications of the software or its environment. It can be difficult to
determine how much re-testing is needed, especially near the end of the development cycle.
Automated testing tools can be especially useful for this type of testing.
Or

Regression testing is the process of always running the same sequence of tests on a program
unit every time the program unit changes. This verifies (within the bounds of the tests) that the
new code works and that it doesn't break any old code.

34) What is performance Testing?


Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and
any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. Testing
for efficiency of the application.

35) What is Volume Testing?


Testing the data storage capacity of an application is volume testing. It also checks how the
application reacts when the data store is about to overflow.

36) What is Security Testing?


Testing how well the system protects against unauthorized internal or external access, willful
damage, etc; may require sophisticated testing techniques.

37) What is Mutation testing?


A method for determining if a set of test data or test cases is useful, by deliberately introducing
various code changes ('bugs') and retesting with the original test data/cases to determine if the
'bugs' are detected. Proper implementation requires large computational resources.
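A small, assumed sketch of the idea: the 'mutant' below has a deliberately seeded change ('>='
turned into '>'), and the test data is judged by whether it can tell the original and the mutant apart
(all names and values are illustrative):

import unittest

def is_adult(age):
    # Original code: 18 and above counts as adult.
    return age >= 18

def is_adult_mutant(age):
    # Deliberately introduced 'bug' (mutant): '>=' changed to '>'.
    return age > 18

class MutationExampleTest(unittest.TestCase):
    def test_weak_data_misses_the_mutant(self):
        # Ages 25 and 10 give the same answer for both versions, so this
        # data alone would NOT detect the seeded bug.
        for func in (is_adult, is_adult_mutant):
            self.assertTrue(func(25))
            self.assertFalse(func(10))

    def test_boundary_value_kills_the_mutant(self):
        # Age 18 distinguishes the original from the mutant, showing that
        # the boundary case is a worthwhile addition to the test set.
        self.assertTrue(is_adult(18))
        self.assertFalse(is_adult_mutant(18))

if __name__ == "__main__":
    unittest.main()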

38) What is Compatibility Testing?


Also called platform testing or configuration testing, it verifies that an application functions the
same, or in an appropriately similar manner, across all supported platforms or configurations.
Or
Testing how well software performs in a particular hardware/software/operating
system/network/etc. environment.

39) List out Different Life Cycles?


Waterfall model
Prototyping model
Incremental model
Spiral model
Component assembly model
Concurrent development model
Formal methods model

40) What is a Driver?


A skeleton function or program that calls the function(s) under test. Generally it contains just
enough code to set up parameters and globals prior to calling the function.

41) What is a stub?


An empty function that replaces a function that is yet to be written. Generally, a stub provides
absolute minimal functionality; it accomplishes just enough to test the calling code with just a few
special cases. Also see Driver.
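A minimal sketch of a driver and a stub working together (everything here, including the fixed
exchange rates, is a hypothetical example rather than a real API):

def get_exchange_rate_stub(currency):
    # Stub: replaces a function that is not written yet (or that would call an
    # external service) with absolute minimal canned behavior.
    return 1.0 if currency == "USD" else 0.9

def shipping_cost(weight_kg, currency, rate_lookup):
    # Unit under test: a flat 4.00 per kg, converted via the supplied lookup.
    return round(weight_kg * 4.00 * rate_lookup(currency), 2)

def driver():
    # Driver: sets up parameters, calls the unit under test, reports results.
    checks = [
        (2, "USD", 8.00),   # 2 kg at rate 1.0
        (2, "EUR", 7.20),   # 2 kg at rate 0.9
    ]
    for weight, currency, expected in checks:
        actual = shipping_cost(weight, currency, get_exchange_rate_stub)
        print(weight, currency, "->", actual, "PASS" if actual == expected else "FAIL")

if __name__ == "__main__":
    driver()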

42) What is debugging?


Debugging: The process of locating and removing defects in a software system. Not to be
confused with testing (a different activity).

43) Define a SDLC?


The life cycle begins when an application is first conceived and ends when it is no longer in use. It
includes aspects such as initial concept, requirements analysis, functional design, internal design,
documentation planning, test planning, coding, document preparation, integration, testing,
maintenance, updates, retesting, phase-out, and other aspects.

44) What are 5 common solutions to software development problems?
• Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements
that are agreed to by all players. Use prototypes to help nail down requirements.
• Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing,
changes, and documentation; personnel should be able to complete the project without
burning out.
• Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time
for testing and bug fixing.
• Stick to initial requirements as much as possible - be prepared to defend against changes
and additions once development has begun, and be prepared to explain consequences. If
changes are necessary, they should be adequately reflected in related schedule changes. If
possible, use rapid prototyping during the design phase so that customers can see what to
expect. This will provide them a higher comfort level with their requirements decisions and
minimize changes later on.
• Communication - require walkthroughs and inspections when appropriate; make extensive
use of group communication tools - e-mail, groupware, networked bug-tracking tools and
change management tools, intranet capabilities, etc.; ensure that documentation is available
and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use
prototypes early on so that customers' expectations are clarified.
45) What is a Test bed?
The software and hardware configuration of the test environment used for testing an application is the test bed.

46) What is a Test Environment?


A test environment is the configuration of hardware, software, network connections and domains,
together with the supply of human resources, needed to test an application.

47) What is a bug?


An error or defect in software or hardware that causes a program to malfunction.

48) What are the different types of Integration Testing?


Incremental integration testing - continuous testing of an application as new functionality is
added; requires that various aspects of an application's functionality be independent enough to
work separately before all parts of the program are completed, or that test drivers be developed
as needed; done by programmers or by testers.

49) What is Module Testing?


Splitting the whole system up in a modular way helps to develop programs or systems more
easily. Module testing is a series of tests applied to a particular module to ensure that it works
before being joined onto any other module. Modular testing is used because it’s easier to test
individual modules before they are allowed to interact with other modules. If more than one
module is being debugged, then this makes the debugging process much more complex.

50) What is Installability Testing?


Testing of full, partial, or upgrade install/uninstall processes.

51) What is usability testing?


Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user
or customer. User interviews, surveys, video recording of user sessions, and other techniques
can be used. Programmers and testers are usually not appropriate as usability testers.

52) What is Recovery Testing?


Testing how well a system recovers from crashes, hardware failures, or other catastrophic
problems.

53) What is Load testing?
Testing an application under heavy loads, such as testing a web site under a range of loads to
determine at what point the system's response time degrades or fails.
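A very rough, assumed sketch of the idea in Python (the handle_request function is a stand-in for
a real page request; a real load test would use a dedicated tool such as those listed later in this
document):

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    # Stand-in for one request to the system under test.
    time.sleep(0.01)   # simulated fixed processing time
    return "ok"

def average_response_time(concurrent_users, requests_per_user=5):
    # Fires many requests in parallel and returns the average time per request.
    total = concurrent_users * requests_per_user
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(lambda _: handle_request(), range(total)))
    return (time.time() - start) / total

if __name__ == "__main__":
    # Step the load up and watch where the average response time degrades.
    for users in (1, 10, 50):
        print(users, "users -> avg", round(average_response_time(users), 4), "s per request")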

54) What is stress testing?


Stress testing examines application behavior under peak bursts of activity.
Or
Term often used interchangeably with 'load' and 'performance' testing. Also used to describe such
tests as system functional testing while under unusually heavy loads, heavy repetition of certain
actions or inputs, input of large numerical values, large complex queries to a database system,
etc.

55) Who are testers and what is testing process?


To plan and execute tests, software testers must consider the software and the function it
computes, the inputs and how they can be combined, and the environment in which the software
will eventually operate. This difficult, time-consuming process requires technical sophistication
and proper planning. Testers must not only have good development skills-testing often requires a
great deal of coding-but also be knowledgeable in formal languages, graph theory, and
algorithms. Indeed, creative testers have brought many related computing disciplines to bear on
testing problems, often with impressive results.

56) What is Sanity Testing?


Typically an initial testing effort to determine if a new software version is performing well enough
to accept it for a major testing effort. For example, if the new software is crashing systems every
5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in
a 'sane' enough condition to warrant further testing in its current state.

57) What is end-to-end Testing?


Similar to system testing; the 'macro' end of the test scale; involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting with a
database, using network communications, or interacting with other hardware, applications, or
systems if appropriate.

58) List out some of the benefits of Automation Testing?


• For small projects, the time needed to learn and implement them may not be worth it. For
larger projects, or on-going long-term projects, they can be valuable.
• A common type of automated tool is the 'record/playback' type. For example, a tester could
click through all combinations of menu choices, dialog box choices, buttons, etc. in an
application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is
typically in the form of text based on a scripting language that is interpretable by the testing
tool. If new buttons are added, or some underlying code in the application is changed, etc. the
application can then be retested by just 'playing back' the 'recorded' actions, and comparing
the logging results to check effects of the changes. The problem with such tools is that if
there are continual changes to the system being tested, the 'recordings' may have to be
changed so much that it becomes very time-consuming to continuously update the scripts.
Additionally, interpretation of results (screens, data, logs, etc.) can be a difficult task. Note
that there are record/playback tools for text-based interfaces also, and for all types of
platforms.
• Other automated tools can include:
• Code analyzers - monitor code complexity, adherence to standards, etc.
• Coverage analyzers - these tools check which parts of the code have been exercised by a
test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
• Memory analyzers - such as bounds-checkers and leak detectors.

• Load/performance test tools - for testing client/server and web applications under various
load levels.
• Web test tools - to check that links are valid, HTML code usage is correct, client-side and
server-side programs work, and a web site's interactions are secure.
• Other tools - for test case management, documentation management, bug reporting, and
configuration management.

59) What is 'good design'?


'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good
internal design is indicated by software code whose overall structure is clear, understandable,
easily modifiable, and maintainable; is robust with sufficient error-handling and status logging
capability; and works correctly when implemented. Good functional design is indicated by an
application whose functionality can be traced back to customer and end-user requirements. For
programs that have a user interface, it's often a good idea to assume that the end
user will have little computer knowledge and may not read a user manual or even the on-line
help; some common rules-of-thumb include:
• The program should act in a way that least surprises the user
• It should always be evident to the user what can be done next and how to exit
• The program shouldn't let the users do something stupid without warning them.
60) What is a good code?
‘Good code' is code that works, is bug free, and is readable and maintainable. Some
organizations have coding 'standards' that all developers are supposed to adhere to, but
everyone has different ideas about what's best, or what is too many or too few rules. There are
also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind
that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews',
'buddy checks', code analysis tools, etc. can be used to check for problems and enforce
standards.

61) What is code Optimization?

In general, code optimization means modifying code so that it works more efficiently (runs faster
or uses fewer resources) without changing its behavior. For a web site, code optimization also
means designing, writing, and coding the entire site in a way that will give it a good chance to
appear at the top of search engine queries.

62) What is the budest testing?

63) What is Portability Testing?


64) What is a Test Plan?
65) When you will start preparing unit test cases?
66) When you will start preparing integration test cases?
67) When you will start preparing System test cases?
68) When you will start preparing Acceptance test cases?
69) What is a defect?
70) What is a “Show Stopper”?
A show stopper means you are unable to continue doing work, or the defect impacts the
product so severely that it may not be shipped.
71) What is a bug?
A 'bug' means an individual error, fault, problem, issue or difficulty with the software you are
currently developing or maintaining.

72) What is a defect?
A Critical Defect means that the application is down or is at high risk, business functions cannot be
conducted, or the Customer is experiencing continual failures or data corruption as a result of the defect.

73) What is a error?


An error means the system is in such a state that further processing will cause a failure of the
system.

74) What is the difference between an error and a bug?


An error means the system is in such a state that further processing will cause a failure of the
system. A bug (or fault) is the cause of an error.

75) What is a Domain?

76) When to start unit Testing?


77) When to start Integration Testing?
78) When to start System Testing?
79) When to start Acceptance Testing?

80) Define a Testing Life Cycle?

81) What is the difference between Stress and Load Testing?
82) How you will achieve the Stress point during the test?
83) What is the difference between Test Case and Test Data?
84) Define the difference between a Tester and a QA?

85) List some of the tools used for regression Testing?


1. WinRunner (Mercury Interactive)
2. SQA Robot (Rational)
3. SilkTest (Segue)
4. QARun (Compuware)

86) List some of the tools used for Load/Stress Testing?


1) LoadRunner (Mercury Interactive)
2) SilkPerformer (Segue)
3) SQA LoadTest (Rational)
4) QALoad (Compuware)

87) List some of the tools used for monitoring a web site?

1) Topaz ActiveWatch (Mercury Interactive)
2) SilkMonitor (Segue)
3) SQA SiteCheck (Rational)
88) List some of the tools used to keep track of a project under testing?
1) TestDirector (Mercury Interactive)
2) SilkRadar (Segue)
3) SQA Manager (Rational)
4) QADirector (Compuware)
89) How to approach testing for a C/C++ application?

90) What is a test stub?


Test stub is a program that simulates the units that are called by the unit being tested. This is
needed for testing a unit in isolation.

91) What is manual Testing?


92) What is automation Testing?
93) What is UML?
94) How will you test a Protocol?
95) List out the sub heading of a test plan?
96) What is User Requirement Specification?
97) What is Software Requirement Specification?
98) List the difference between URS and SRS?
99) What is Architectural Design Document?
100)What is Detailed Design Document?
101)List the difference between ADD and DDD?
102)What is 100% Code coverage?
103)What is boundary value analysis?
104)Are Module Testing and Integration Testing the same?
105)What is the difference between Integration and System Integration Testing?
For which types of system is bottom-up testing appropriate, and why?
Answer:
Object-oriented systems, because these have a neat decomposition into classes and methods,
which makes testing easy.
Real-time systems, because we can identify slow bits of code more quickly.
Systems with strict performance requirements, because we can measure the performance of
individual methods early in the testing.

General Questions related to CMM


1) What is SEI-CMM?
• SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S.
Defense Department to help improve software development processes.
• CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of
organizational 'maturity' that determine effectiveness in delivering quality software. It is
geared to large organizations such as large U.S. Defense Department contractors. However,
many of the QA processes involved are appropriate to any organization, and if reasonably
applied can be helpful. Organizations can receive CMM ratings by undergoing assessments
by qualified auditors.
2) What is ISO?
ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which
replaces the previous standard of 1994) concerns quality systems that are assessed by outside
auditors, and it applies to many kinds of production and manufacturing organizations, not just
software. It covers documentation, design, development, production, testing, installation,
servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 - Quality
Management Systems: Requirements; (b)Q9000-2000 - Quality Management Systems:
Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for
Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an
organization, and certification is typically good for about 3 years, after which a complete
reassessment is required. Note that ISO certification does not necessarily indicate quality
products - it indicates only that documented processes are followed. Also see http://www.iso.ch/
for the latest information. In the U.S. the standards can be purchased via the ASQ web site.

3) What is IEEE?
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards
such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE
Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software
Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

4) What is ANSI?
ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.;
publishes some software-related standards in conjunction with the IEEE and ASQ (American
Society for Quality).

5) List out some other processing besides CMM and ISO?


Other software development process assessment methods besides CMM and ISO 9000 include
SPICE, Trillium, TickIT and Bootstrap.

6) Define different level of CMM?


Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to
successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout
an organization; a Software Engineering Process Group is in place to oversee software
processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is
predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and
technologies can be predicted and effectively implemented when required.

7) What is a KPA?
A key process area (KPA) identifies a cluster of related activities that, when performed collectively,
achieve a set of goals considered important for establishing process capability at that maturity
level. The key process areas have been defined to reside at a single maturity level. For example,
one of the key process areas for Level 2 is Software Project Planning.

8) What is a Metric?
9) What is PCMM?
10) What is CMMi?
11) What is maturity level?
A maturity level is a well-defined evolutionary plateau toward achieving a mature software
process. The five maturity levels provide the top-level structure of the CMM.
12) What is a Goal?
The goals summarize the key practices of a key process
area and can be used to determine whether an
organization or project has effectively implemented the
key process area. The goals signify the scope, boundaries,
and intent of each key process area.
13) What is a Process Capability?
Software process capability describes the range of expected
results that can be achieved by following a software
process. The software process capability of an organization
provides one means of predicting the most likely
outcomes to be expected from the next software project the
organization undertakes.
14) Define the KPA’s of CMM – Level 1?
At the Initial Level, the organization typically does not provide a stable
environment for developing and maintaining software. When an
organization lacks sound management practices, the benefits of good
software engineering practices are undermined by ineffective planning and
reaction-driven commitment systems.
15) Define the KPA’s of CMM – Level 2?
At the Repeatable Level, policies for managing a software project and
procedures to implement those policies are established. Planning and
managing new projects is based on experience with similar projects.
The KPAs of Level 2 are:
1) Software Project Planning (SPP).
2) Requirements Management (RM)
3) Software Quality Assurance (SQA)
4) Software Project Tracking and Oversight (SPTO)
5) Software Subcontract Management (SSM)
6) Software Configuration Management (SCM)

16) Define the KPA’s of CMM – Level 3?


At the Defined Level, the standard process for developing and maintaining
software across the organization is documented, including both software
engineering and management processes, and these processes are integrated
into a coherent whole. This standard process is referred to throughout the
CMM as the organization's standard software process. Processes established
at Level 3 are used (and changed, as appropriate) to help the software
managers and technical staff perform more effectively.
1) Peer Reviews (PR)
2) Organization Process Focus (OPF)
3) Organization Process Definition (OPD)
4) Intergroup Coordination (IC)
5) Training Program (TP)
6) Software Product Engineering (SPE)
7) Integrated Software Management (ISM)

17) Define the KPA’s of CMM – Level 4?


1) Quantitative Process Management (QPM)
2) Software Quality Management (SQM)
18) Define the KPA’s of CMM – Level 5?
19) List the difference between ISO and CMM?
20) Define the need of SEPG?

General Questions related to Winrunner


1) What is Winrunner?
WinRunner is a testing tool used for regression testing on client/server and web-based
applications.
2) What is a GUI Checkpoint?
A GUI checkpoint verifies the properties of a GUI object.

3) How will you perform a GUI Test?


By adding GUI checkpoints to the script, a GUI test can be performed.

4) What is Bitmap Check point?


A bitmap checkpoint compares two versions of an application being tested by matching captured
bitmaps. This is particularly useful for checking non-GUI areas of your application, such as
drawings or graphs.

5) What is a Text Check point?


A text checkpoint enables you to read and check text in a GUI object or in any area of your
application.

6) What is a data base checkpoint?


Checks the contents and the number of rows and columns of a result set, which is based on a
query defined on the database.

7) What are the different modes of Recording?


Two types of recording modes are
1) Context Sensitive Mode
2) Analog Mode

8) What are the different modes of Playback?


Three different types of modes are
1) Debug mode
2) Verify mode
3) Update mode

9) What is the hot key used to change from one mode of recording to another?
Functional key 2 (F2)

10) What is the hot key used for playback?


Left Ctrl + F5

11) What is Synchronization point?


Synchronization points enable you to solve anticipated timing problems between the test and the
application being tested.

For example, if you create a test that opens a database application, you can add a
synchronization point that causes the test to wait until the database records are loaded on the
screen.

12) What is Data Driven Test?


A data-driven test checks how the application performs the same operations with multiple sets of
data. In other words, it is called parameterizing the test.
Example: You can create a data-driven test with a loop that runs ten times: each time the loop
runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test,
you must link the data to the test script which it drives. This is called parameterizing your test.

13) How will you create a GUI Check point?


The functions used to create GUI checkpoints that check a single property of a GUI object are:
button_check_info
obj_check_info
edit_check_info
win_check_info
list_check_info
static_check_info
scroll_check_info

14) What is the function used to check the default checks of a GUI object?
gui_ver_set_default_checks

15) List out the different types of GUI Checkpoints?


Different types of GUI Checkpoints are
1) For Single property
2) For object/window
3) For multiple objects

16) How will you create a Bitmap checkpoint?


The functions used to create bitmap checkpoints are:
win_check_bitmap
obj_check_bitmap

17) List out the different types of Bitmap Checkpoints?


The different types of bitmap checkpoints are
1) For object window
2) For screen area

18) How will you create a Text checkpoint?


Text checkpoints can be created by inserting the functions mentioned below:
win_get_text
obj_get_text
get_text
web_frame_get_text
web_obj_get_text
win_find_text
obj_find_text
find_text
obj_move_locator_text
win_move_locator_text
win_click_on_text
obj_click_on_text
click_on_text
compare_text
setvar

19) List out the different types of Text checkpoints?


For object/window
From screen area

20) How to create database connectivity with winrunner?


Three ways to create database connectivity are:
1) Microsoft Query, to create a query on a database (secondary package to be installed;
product of Microsoft)
2) Define an ODBC query manually by creating its SQL statement
3) Data Junction, to create a conversion file that converts a database to a target text file
(secondary package to be installed; product of Mercury Interactive)
The functions used for manual connections are:
db_connect
db_execute_query
db_disconnect

21) How will you create a database checkpoint?


db_get_headers
db_get_last_error
db_get_row
db_get_field_value
db_write_records
db_dj_convert (used only if Data Junction is installed)

22) What is GUI Spy and the need of GUI Spy?


Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the
Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the
GUI Spy dialog box. You can choose to view all the properties of an object.

23) What is GUI Map Editor?


In the GUI Map Editor, objects are displayed in a tree under the icon of the window in which they
appear. When you double-click a window name or icon in the tree, you can view all the objects it
contains.

24) List out the keywords to declare data?


The keywords used to declare data are:
1) static
2) public
3) extern
4) auto

25) What does the function tl_step do?


Divides a test script into sections and inserts a status message in the test results for the previous
section.

26) What is the function used to send a message to the results window?
report_msg is the function used to send a message to the test results window.

27) How does Winrunner identify each object?


WinRunner uses a logical name to identify each object: for example “Print” for a Print dialog box,
or “OK” for an OK button. The logical name is actually a nickname for the object’s physical
description. The physical description contains a list of the object’s physical properties: the Print
dialog box, for example, is identified as a window with the label “Print”. The logical name and the
physical description together ensure that each GUI object has its own unique identification.

28) Where are the checklist files stored?


1) Checklist files are stored in the specified temporary map files folder if the checklist is private
to a test.
2) Checklist files are stored in the specified shared map files folder if the checklist is shared by
more than one test.

29) List the difference between context sensitive and Analog mode?

30) List the difference between Debug, Verify and Update Mode?
31) How will you test an ActiveX object?

Where in the software lifecycle are most errors made?
Answer: Most errors (50-65%) are introduced in the design phase.
