
1. Open the database, find the table that stores the user login information, change the last login time, and then try to log in to the system.
2. A better approach is automation. For manual testing, take input data such as 0, 1, 100, and 101, as well as negative values and values with decimal points.
3. See on the Internet (it is a combination of choices).
4. End-to-end test case (see on the net).
5. I don't have a car and don't know the functionality, so I can't explain.
6. [0-3][0-9][/](([0][0-9])|([1][0-2]))[/][0-9]{4}
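The pattern in answer 6 can be sanity-checked quickly; note that `[0-3][0-9]` also admits day values like 39, and `[0][0-9]` admits month 00. A slightly stricter sketch (still format-only; a regex alone cannot reject impossible dates such as 31/02):

```python
import re

# Stricter dd/mm/yyyy pattern: day 01-31, month 01-12, four-digit year.
DATE_RE = re.compile(r"^(0[1-9]|[12][0-9]|3[01])/(0[1-9]|1[0-2])/[0-9]{4}$")

for s in ["01/01/2008", "31/12/1999", "39/01/2008", "10/00/2008", "1/1/2008"]:
    print(s, bool(DATE_RE.match(s)))
```

The anchors `^` and `$` matter: without them the pattern would match a valid date embedded in otherwise invalid input.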

On Tue, Jun 24, 2008 at 4:58 PM, anand bijwe <anand.bijwe@gmail.com> wrote:

1. When I log into the application for the first time, it asks me to change the password. After 30 days the password expires and the application becomes inaccessible. You cannot alter the server time. How will you test it?
2. There is a field in the form. If I enter a number, it outputs the square of that number. It accepts numbers from 1 to 100. How do you test it?

4. What is meant by the Decision Table Testing technique?

5. How will you write end-to-end test cases?
6. What are the test cases for a car locking system?
7. What is the regular expression for dd/mm/yyyy?
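Question 2 is a classic boundary-value exercise, which is what the 0/1/100/101 inputs in the reply above are driving at: test just outside, on, and just inside each boundary. A minimal sketch, with `square_field` as a hypothetical stand-in for the form field:

```python
def square_field(n):
    """Hypothetical stand-in for the form field: accepts integers
    1..100 and returns the square; rejects anything else."""
    if not isinstance(n, int) or isinstance(n, bool) or not 1 <= n <= 100:
        raise ValueError("input must be an integer from 1 to 100")
    return n * n

# Boundary-value cases: on and just inside each boundary...
assert square_field(1) == 1
assert square_field(2) == 4
assert square_field(99) == 9801
assert square_field(100) == 10000

# ...and just outside, plus negative and decimal inputs.
for bad in (0, 101, -5, 2.5):
    try:
        square_field(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass

print("all boundary cases passed")
```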
1. The project had a very high cost of testing. After going into detail, someone found out that the testers were spending their time on software that doesn't have too many defects. How will you make sure that this is correct?
2. Top management feels that whenever there are changes in the technology being used, development schedules, etc., it is a waste of time to update the test plan; instead, they emphasize that you should put your time into testing rather than working on the test plan. Your project manager asked for your opinion. You have argued that the test plan is very important and needs to be updated from time to time, that it is not a waste of time, and that testing activities are more effective when the plan is clear. Using some metrics, how would you support your argument that the test plan should be kept consistently up to date?
3. Is it essential to create a new software requirements document and test planning report if it is a "migration project"?
4. Other than the requirements traceability matrix, what other factors do we need to check in order to exit a testing process?
1. I-soft
What should be done after writing test cases?
2. Covansys
Testing
1. What is bidirectional traceability, and how is it implemented?
2. What is an automation test framework?
3. Define the components present in a test strategy.
4. Define the components present in a test plan.
5. Define database testing.
6. What is the difference between QA and QC?
7. What is the difference between verification and validation (V&V)?
8. What are the different types of test cases that you have written in your project?
9. Have you written a test plan?
SQL
1. What are joins? Define all the joins.
2. What is a foreign key?
3. Write an SQL query if you want to select data from one block which in turn reflects in another block.
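For the join questions above, a self-contained sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration), showing how an inner join differs from a left join, with `orders.customer_id` acting as the foreign key:

```python
import sqlite3

# Tiny in-memory schema: orders.customer_id is a foreign key
# referencing customers.id.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        amount REAL
    );
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 99.0);
""")

# INNER JOIN: only rows with a match on both sides.
inner = con.execute("""
    SELECT c.name, o.amount
    FROM customers c JOIN orders o ON o.customer_id = c.id
    ORDER BY o.id
""").fetchall()

# LEFT JOIN: every customer, with NULL where no order exists.
left = con.execute("""
    SELECT c.name, o.amount
    FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
    ORDER BY c.id, o.id
""").fetchall()

print(inner)  # [('Asha', 250.0), ('Asha', 99.0)]
print(left)   # [('Asha', 250.0), ('Asha', 99.0), ('Ravi', None)]
```

The customer with no orders ('Ravi') appears only in the left join, paired with NULL; this asymmetry is the usual interview answer in one example.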
Unix
1. Which command is used to run an interface?
2. How will you see hidden files?
3. What is the command used to set the date and time?
4. Some basic commands like copy, move, and delete?
5. Which command is used to go back to the home directory?
6. Which command is used to view the current directory?
3. Virtusa
Testing
1. Tell me about yourself.
2. What testing process is followed in your company?
3. Testing methodology?
4. Where do you maintain the repositories?
5. What is CVS?
6. Which bug tool was used?
7. How will you prepare a traceability matrix if there is no business document and no functional document?
8. How will you validate the functionality of the test cases if there is no business requirements document or user requirements document as such?
9. What testing process is followed in your company?
10. Tell me about CMM Level 4. What steps must be followed to achieve the CMM Level 4 standards?
11. What is back-end testing?
12. What is unit testing?
13. How will you write test cases for a given scenario, i.e. main page, login screen, transaction, report verification?
14. How will you write a traceability matrix?
15. What is CVS and why is it used?
16. What will be specified in the defect report?
17. What is a test summary report?
18. What is a test closure report?
19. Explain the defect life cycle.
20. What will be specified in a test case?
21. What testing methodologies have you followed in your project?
22. What kind of testing have you been involved in? Explain it.
23. What is UAT testing?
24. What are joins, and what are the different types of joins in SQL? Explain them.
25. What is a foreign key in SQL?
KLA Tencor
1. Bug life cycle?
2. Explain your project, and draw its architecture.
3. What are the different types of severity?
4. Which defect tracking tools were used?
5. What are the responsibilities of a tester?
6. Give an example of how you would write test cases for a scenario involving a login screen.
Aztec
1. What are the different types of testing followed?
2. What are the different levels of testing used during testing of the application?
4. What type of testing will be done in installation testing or system testing?
5. What is meant by CMMI, and what are the different CMM levels?
6. Explain the components involved in CMM Level 4.
7. Explain performance testing.
8. What is a traceability matrix and how is it done?
9. How can you differentiate severity and priority from technical and business points of view?
10. What is the difference between the test life cycle and the defect life cycle?
11. How will you ensure that you have covered all the functionality while writing test cases if there is no functional spec and no knowledge transfer about the application?
Kinds of Testing
WHAT KINDS OF TESTING SHOULD BE CONSIDERED?
1. Black box testing: not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
2. White box testing: based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing: the most "micro" scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
4. Incremental integration testing: continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
6. Integration testing: testing of combined parts of an application to determine whether they function together correctly; the "parts" can be code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.
7. Functional testing: black-box testing geared to the functional requirements of an application; testers should do this type of testing. This does not mean that programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
8. System testing: black-box testing based on the overall requirements specifications; covers all combined parts of a system.
9. End-to-end testing: similar to system testing; the "macro" end of the test scale; involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
10. Sanity testing: typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, it may not be in a sound enough condition to warrant further testing in its current state.
11. Regression testing: re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
12. Acceptance testing: final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
13. Load testing: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
14. Stress testing: term often used interchangeably with "load" and "performance" testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
15. Performance testing: term often used interchangeably with "stress" and "load" testing. Ideally, performance testing (and any other type of testing) is defined in requirements documentation or QA or test plans.
16. Usability testing: testing for "user-friendliness". Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used; programmers and testers are usually not appropriate as usability testers.
17. Install/uninstall testing: testing of full, partial, or upgrade install/uninstall processes.
18. Recovery testing: testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
19. Security testing: testing how well a system protects against unauthorized internal or external access, damage, etc.; may require sophisticated testing techniques.
20. Compatibility testing: testing how well software performs in a particular hardware/software/operating system/network environment.
21. Exploratory testing: often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
22. Ad-hoc testing: similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
23. User acceptance testing: determining whether software is satisfactory to an end-user or customer.
24. Comparison testing: comparing software weaknesses and strengths to competing products.
25. Alpha testing: testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
26. Beta testing: testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
27. Mutation testing: a method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ("bugs") and retesting with the original test data/cases to determine whether the bugs are detected. Proper implementation requires large computational resources.
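Item 27 can be illustrated with a toy example: introduce one deliberate "bug" (a mutant) and check whether the existing suite catches it. All names here are invented for illustration; this is not a real framework such as mutmut or PIT.

```python
# Toy mutation-testing sketch.

def add(a, b):
    return a + b

def mutant_add(a, b):
    # The deliberately introduced "bug": '+' mutated to '-'.
    return a - b

def suite_passes(fn):
    """Run the existing test cases against an implementation."""
    return fn(2, 3) == 5 and fn(-1, 1) == 0

assert suite_passes(add)              # original implementation passes
killed = not suite_passes(mutant_add) # a good suite fails the mutant
print("mutant killed:", killed)
```

If the suite had contained only `fn(0, 0) == 0`, the mutant would survive, which is exactly the signal mutation testing provides: the test set is too weak.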
Difference between client/server testing and web server testing.
Web systems are one type of client/server. The client is the browser; the server is whatever is on the back end (database, proxy, mirror, etc.). This differs from so-called "traditional" client/server in a few ways, but both systems are a type of client/server: a certain client connects via some protocol to a server (or set of servers).
Also understand that, taking the question strictly as worded, testing a web server specifically is simply testing the functionality and performance of the web server itself. (For example, I might test whether HTTP keep-alives are enabled and whether they work. Or I might test whether the logging feature is working. Or I might test certain filters, like ISAPI. Or I might test some general characteristics, such as the load the server can take.) In the case of "client/server testing", as you have worded it, you might be doing the same general things to some other type of server, such as a database server. Also note that in some cases you can test the server directly, and at other times you can test it via the interaction of a client.
You can also test connectivity in both. (Any time you have a client and a server, there has to be connectivity between them, or the system would be less than useful, so far as I can see.) On the web you are looking at HTTP protocols, and perhaps FTP, depending on your site and whether your server is configured for FTP connections, as well as general TCP/IP concerns. In a traditional client/server setup you may be looking at sockets, Telnet, NNTP, etc.
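The keep-alive check mentioned above can be sketched against a throwaway local server using only Python's standard library: issue two requests over the same connection, which only succeeds when persistent (HTTP/1.1 keep-alive) connections are in effect.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables persistent connections

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Content-Length is required for keep-alive to work here.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway server on an ephemeral port, in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp1 = conn.getresponse()
resp1.read()
# Reuse the SAME socket: only works if the connection persisted.
conn.request("GET", "/")
resp2 = conn.getresponse()
resp2.read()
print(resp1.status, resp2.status)  # 200 200 when keep-alive held
server.shutdown()
```

Against a real web server you would point the connection at its host and port instead of the throwaway handler; the reuse-the-socket pattern is the same.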
Interview Questions for Elearning Testers »
1) What is SCORM?
2) What is Section 508?
3) Have you done any portal testing?
4) Do you have any idea about LMS or LCMS?
5) Have you done any compliance testing?
6) Have you done any compatibility testing?
7) What critical issues were found while testing the projects in your organization?
8) Tell me about the testing procedures you used in your organization.
9) How do you test a Flash file?
10) Have you found any differences when testing a Flash file versus an HTML file?
11) What types of testing are you aware of?
12) While doing compatibility testing, have you found any critical issues?
13) While doing compliance testing, have you noticed any critical or abnormal issues?
14) What procedure do you use when doing regression testing in your projects?
15) Have you done any performance or stress testing? If yes, did you use any automation techniques?
16) Are you aware of any bug tracking tools for defect tracking?
17) Tell me about the testing scenarios used in your project.
18) Have you written any test cases or test plans? If yes, can you describe one or two instances?
19) Are you aware of usability and acceptance testing?
20) Is your testing conventional or non-conventional?
21) Have you tested courses in any other languages? If yes, did you face any critical situations?
22) What should you concentrate on more when testing the same projects in different environments?
23) What are AICC standards?
Software Testing Interview Questions Part 6 »
60. What is impact analysis? How do you do impact analysis in your project?
A: Impact analysis means that when we are doing regression testing, we check that the bug fixes are working properly and that, after fixing these bugs, the other components are still working as per their requirements and have not been disturbed.
61. How do you test a website with manual testing?
A: Web Testing
When testing websites, the following areas should be considered:
Functionality
Performance
Usability
Server side interface
Client side compatibility
Security
Functionality:
When testing the functionality of web sites, the following should be tested:
Links
Internal links
External links
Mail links
Broken links
Forms
Field validation
Functional chart
Error message for wrong input
Optional and mandatory fields
Database
Testing will be done on the database integrity.
Cookies
Testing will be done on the client system side, on the temporary internet files.
Performance:
Performance testing can be applied to understand the web site's scalability, or to benchmark performance in an environment of third-party products, such as servers and middleware, for potential purchase.
Connection speed:
Tested over various networks like dial-up, ISDN, etc.
Load
What is the number of users per unit time?
Check for peak loads and how the system behaves.
Large amounts of data accessed by the user.
Stress
Continuous load
Performance of memory, CPU, file handling, etc.
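The load items above can be prototyped before any tooling is chosen. This sketch fires concurrent calls against a stand-in for the real request (the endpoint and the 1 ms delay are placeholders for actual network latency):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(_):
    # Stand-in for a real HTTP call; replace with an actual request
    # against the system under test.
    time.sleep(0.001)
    return 200

def run_load(n_users=50, total_requests=500):
    """Fire total_requests calls across n_users concurrent workers;
    return wall-clock time and the number of non-200 results."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(fake_request, range(total_requests)))
    elapsed = time.perf_counter() - start
    errors = sum(1 for status in results if status != 200)
    return elapsed, errors

elapsed, errors = run_load()
print(f"500 requests, {errors} errors, {elapsed:.3f}s wall time")
```

Repeating the run at increasing `n_users` values ("users per unit time") and watching where elapsed time or errors climb is the peak-load check described above.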
Usability:
Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction. Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several criteria:
Ease of learning
Navigation
Subjective user satisfaction
General appearance
Server side interface:
In web testing the server-side interface should be tested. This is done by verifying that communication is done properly. Compatibility of the server with software, hardware, network, and database should be tested.
Client-side compatibility is also tested on various platforms, using various browsers, etc.
Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
The following types of testing are described in this section:
Network Scanning
Vulnerability Scanning
Password Cracking
Log Review
Integrity Checkers
Virus Detection
Performance Testing
Performance testing is a rigorous usability evaluation of a working system under realistic conditions, to identify usability problems and to compare measures such as success rate, task time, and user satisfaction with requirements. The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing.
To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough that this process can proceed smoothly. A clearly defined set of expectations is essential for meaningful performance testing.
For example, for a web application, you need to know at least two things:
expected load in terms of concurrent users or HTTP connections
acceptable response time
Load testing:
Load testing is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing.
Examples of volume testing:
testing a word processor by editing a very large document
testing a printer by sending it a very large job
testing a mail server with thousands of users' mailboxes
Examples of longevity/endurance testing:
testing a client-server application by running the client in a loop against the server over an extended period of time
Goals of load testing:
Expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc. Ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their goals are different. Performance testing uses load-testing techniques and tools for measurement and benchmarking purposes, at various load levels; load testing, on the other hand, operates at a predefined load level, the highest load that the system can accept while still functioning properly.
Stress testing:
Stress testing is a form of testing used to determine the stability of a given system or entity. It is designed to test the software with abnormal situations. Stress testing attempts to find the limits at which the system will fail, through an abnormal quantity or frequency of inputs.
Stress testing tries to break the system under test by overwhelming its resources, or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this "madness" is to make sure that the system fails and recovers gracefully; this quality is known as recoverability.
The aim is not simply to break the system, but to observe how the system reacts to failure. Stress testing looks for the following:
Does it save its state, or does it crash suddenly?
Does it just hang and freeze, or does it fail gracefully?
Is it able to recover from the last good state on restart?
Etc.
Compatibility Testing
Testing to ensure compatibility of an application or web site with different browsers, operating systems, and hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments; that is, we test how the system performs in a particular software, hardware, or network environment. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite. The purpose of compatibility testing is to reveal issues related to the product's interaction with other software as well as hardware. Product compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate compatibility between the product and the hardware/software/browser matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies.
Some typical compatibility tests include testing your application:
On various client hardware configurations
Using different memory sizes and hard drive space
On various operating systems
In different network environments
With different printers and peripherals (e.g. zip drives, USB devices, etc.)
62. Which comes first: test strategy or test plan?
A: The test strategy comes first; it is the high-level document. The approach for the testing starts from the test strategy, and then, based on this, the test lead prepares the test plan.
63. What is the difference between a web-based application and a client/server application from a tester's point of view?
A: From a tester's point of view:
1) A web-based application (WBA) is a 3-tier application: browser, back end, and server. A client/server application (CSA) is a 2-tier application: front end and back end.
2) In a WBA, the tester tests for script errors, such as JavaScript or VBScript errors, shown on the page. In a CSA, the tester does not test for any script errors.
3) In a WBA, once a change is made it is reflected on every machine, so the tester has less work to do. In a CSA, the application needs to be installed on every machine, so it is possible that some machines have problems; for those, hardware testing as well as software testing is needed.
63. What is the significance of doing regression testing?
A: To check the bug fixes, and to ensure that a fix does not disturb other functionality.
Regression testing ensures that newly added functionality, modified existing functionality, or a developer-fixed bug does not introduce any new bug or cause any other side effect; it also ensures that already-passed test cases do not start failing.
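The answer above amounts to: after a fix, rerun the previously passing cases and confirm they still hold. A minimal sketch (the discount function and its bug history are invented for illustration):

```python
def discounted_price(price, percent):
    """Price after applying a percentage discount. The (hypothetical)
    bug fix: percent is now treated as a percentage (10), not a
    fraction (0.10)."""
    return round(price * (1 - percent / 100), 2)

# Previously passing cases: these must still pass after the fix,
# otherwise the fix has caused a regression.
regression_cases = [
    ((100.0, 10), 90.0),
    ((50.0, 0), 50.0),
    ((80.0, 25), 60.0),
]

for args, expected in regression_cases:
    assert discounted_price(*args) == expected, (args, expected)
print("regression suite passed")
```

Retesting would check only the fixed case; regression testing is the rerun of the whole previously passing set.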
64. What are the different ways to check a date field in a website?
A: There are different ways, such as:
1) You can check the field width for minimum and maximum.
2) If the field takes only numeric values, check that it accepts only numeric input and no other type.
3) If it takes a date or time, check those cases accordingly.
4) In the same way as numeric input, you can check it for character, alphanumeric, and other input.
5) Most importantly, if you click and hit the Enter key, the page may sometimes give a JavaScript error, which is a serious fault on the page.
6) Check the field for a null value, etc.
The date field can be checked in different ways. Positive testing: first we enter the date in the given format. Negative testing: we enter the date in an invalid format; for example, if we enter a date like 30/02/2006, it should display an error message. We also check the numeric and text handling.
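The negative case above (30/02/2006) cannot be rejected by a format check alone, since the digits fit the dd/mm/yyyy shape; a calendar-aware parse catches it. A small sketch using Python's standard library:

```python
from datetime import datetime

def valid_date(s):
    """True only for real calendar dates in dd/mm/yyyy form."""
    try:
        datetime.strptime(s, "%d/%m/%Y")
        return True
    except ValueError:
        return False

print(valid_date("28/02/2006"))  # True
print(valid_date("30/02/2006"))  # False - February has no 30th
print(valid_date("2006/02/28"))  # False - wrong field order
```

In a real date-field test, the regex-style format check and the calendar check are separate cases: one catches malformed input, the other catches well-formed but impossible dates.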
Software Testing Interview Questions Part 5 »
46. High severity, low priority bug?
A: If a page is rarely accessed, or some activity is performed rarely, but that activity outputs some important data incorrectly or corrupts the data, this will be a bug of high severity and low priority.
47. If the project is to be released in 3 months, what type of risk analysis would you do in the test plan?
A: Use risk analysis to determine where testing should be focused. Since it is rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?
48. Test cases for IE 6.0?
A: Test cases for IE 6.0, i.e. Internet Explorer 6.0:
1) First, the installation side: does it work with all versions of Windows, Netscape, and other software? In other words, IE must be checked against all hardware and software parts.
2) Second, the text part: all text should appear in a consistent and smooth manner.
3) Third, the images part: all images should appear in a consistent and smooth manner.
4) URLs must run in a proper way.
5) If some other language is used, the URL should accept characters other than the normal characters.
6) Does it work with cookies consistently?
7) Does it work with different scripts, like JScript and VBScript?
8) Does HTML code work on it or not?
9) Does troubleshooting work or not?
10) Do all the toolbars work with it or not?
11) If a page has some links, what are the maximum and minimum limits for them?
12) Test installing Internet Explorer 6 with the Norton Protected Recycle Bin enabled.
13) Does it work with the uninstallation process?
14) Last but not least, test the security system of IE 6.0.
49. Where are you involved in the testing life cycle, and what types of tests do you perform?
A: Generally, test engineers are involved in the entire test life cycle, i.e. test planning, test case preparation, execution, and reporting; and generally in system testing, regression testing, ad-hoc testing, etc.
50. What is the testing environment in your company? That is, how does the testing process start?
A: The testing process proceeds as follows:
quality assurance unit
quality assurance manager
test lead
test engineer
51. Who prepares the use cases?
A: In any company except a small one, the business analyst prepares the use cases. In a small company, the business analyst prepares them along with the team lead.
52. What methodologies have you used to develop test cases?
A: Generally, test engineers use four types of methodologies:
1. Boundary value analysis
2. Equivalence partitioning
3. Error guessing
4. Cause-effect graphing
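Of the four, equivalence partitioning is easy to show concretely: pick one representative value per input class instead of exhaustively testing every value in the range. The validator below is a hypothetical stand-in for a field that accepts integers 1 to 100:

```python
def accepts(n):
    """Hypothetical validator for a field accepting integers 1..100."""
    return isinstance(n, int) and not isinstance(n, bool) and 1 <= n <= 100

# One representative per equivalence partition, with the expected result.
partitions = {
    "valid (1..100)":      (50, True),
    "below range (< 1)":   (-7, False),
    "above range (> 100)": (150, False),
    "non-integer":         ("abc", False),
}

for name, (value, expected) in partitions.items():
    assert accepts(value) is expected, name
print("one representative per partition checked")
```

Boundary value analysis then supplements this by probing the partition edges (0, 1, 100, 101) rather than the interior representatives.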
53. Why do we call it a regression test and not a retest?
A: If we test only whether a defect is closed, that is retesting. Here we are also checking the impact of the fix; "regression" implies repeating the tests.
54. Is automated testing better than manual testing? If so, why?
A: Automated testing and manual testing each have advantages as well as disadvantages.
Automation advantages: it increases the efficiency and speed of the testing process; it is reliable and flexible.
Automation disadvantages: the tools must be compatible with our development or deployment tools; it needs a lot of time initially; and if the requirements are changing continuously, automation is not suitable.
Manual: if the requirements are changing continuously, manual testing is suitable. Only once the build is stable with manual testing do we go for automation.
Manual disadvantages: it needs a lot of time, and we cannot do some types of testing manually, e.g. performance.
55. What is the exact difference between a product and a project? Give an example.
A: A project is developed for a particular client, and the requirements are defined by the client. A product is developed for the market, and the requirements are defined by the company itself, by conducting a market survey.
Example:
Project: a shirt which we have stitched by a tailor as per our specifications.
Product: a ready-made shirt, where the company decides on particular measurements and makes the product.
Mainframes is a product.
A product has many more versions, but a project has fewer versions, i.e. depending on change requests and enhancements.
56. Define brainstorming and cause-effect graphing, with examples.
A: Brainstorming:
A learning technique involving open group discussion intended to expand the range of available ideas.
OR
A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly, and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.
OR
Brainstorming is a highly structured process to help generate ideas. It is based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g. six minutes).
Cause-effect graphing:
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.
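A cause-effect graph ultimately reduces to a decision table: enumerate the combinations of causes and assert the expected effect for each. A toy sketch for a two-cause login rule (the rule itself is invented for illustration):

```python
from itertools import product

def login_effect(valid_user, valid_password):
    """Effect: access is granted only when both causes hold."""
    return valid_user and valid_password

# One decision-table column per combination of causes.
for valid_user, valid_password in product([True, False], repeat=2):
    granted = login_effect(valid_user, valid_password)
    print(f"user={valid_user!s:5} password={valid_password!s:5} -> {granted}")
```

Each printed row is one test case; with more causes, the technique's value is pruning impossible or equivalent combinations rather than testing all 2^n of them.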
57. If severity already tells you which bug you need to solve, what is the need for priority?
A: Severity reflects the seriousness of the bug, whereas priority refers to which bug should be rectified first. Of course, if the severity is high, the priority is normally high as well. Severity is decided by the tester, whereas priority is decided by the developers. Which bug needs to be solved first is known through priority, not severity; how serious the bug is, is known through severity.
Severity is the impact of the bug on the application; priority is the importance of resolving the bug. Of course, by looking at severity we can often judge priority, but sometimes a high-severity bug does not have high priority, and at the same time a high-priority bug does not have high severity. So we need both severity and priority.
58. What do you do if a bug that you found is not accepted by the developer, and he says it is not reproducible? (Note: the developer is at the on-site location.)
A: Once again we check the condition, with all the reasons; then we attach screenshots with strong justification; then we explain to the project manager, and also explain to the client when they contact us.
Sometimes a bug is not reproducible because of a different environment: suppose the development team is using one environment and you are using a different one; in this situation there is a chance of the bug not reproducing. In this situation, check the environment in the baseline documents, i.e. the functional documents; if the environment we are using is correct, we raise it as a defect. We take screenshots and send them along with the test procedure.
59. What is the difference between a three-tier and a two-tier application?
A: A client/server application is a 2-tier application. The front end (client) is connected to the database server through a Data Source Name; the front end is the monitoring level.
A web-based architecture is a 3-tier application. The browser is connected to the web server through TCP/IP, and the web server is connected to the database server; the browser is the monitoring level. In general, black-box testers concentrate on the monitoring level of any type of application.
All client/server applications are 2-tier architectures. In these architectures, the business logic is stored on the clients and the data is stored on the server. When the user requests anything, the business logic is performed at the client and the data is retrieved from the database server. The problem is that if any business logic changes, we need to change the logic at each and every client. The best example is a supermarket chain with branches across the city. Each branch has clients where the business logic is stored, while the actual data is stored on the server. If I want to give a discount on some items, I need to change the business logic, which means going to each branch and changing it on every client. This is the disadvantage of client/server architecture.
So 3-tier architecture came into the picture: here the business logic is stored on one server, and all the clients are thin terminals. When a user requests anything, the request is first sent to the server; the server fetches the data from the DB server and sends it to the client. This is the flow of a 3-tier architecture.
For the same example, if I want to give a discount, all my business logic is on the server, so I need to change it in one place only, not at each client. This is the main advantage of 3-tier architecture.
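The discount example above can be sketched in code: in a 3-tier design the business logic lives in one server object, so a rule change happens in exactly one place (all class and item names here are invented for illustration):

```python
# 3-tier sketch: clients are thin; the business logic lives on ONE server object.
class BusinessLogicServer:
    def __init__(self, db):
        self.db = db                 # tier 3: the data store
        self.discount = 0.0          # the business rule, held in one place

    def price(self, item):
        # Tier 2: fetch data from the DB tier, apply business logic, return result.
        return self.db[item] * (1 - self.discount)

class ThinClient:
    def __init__(self, server):
        self.server = server         # tier 1: just forwards requests

    def price(self, item):
        return self.server.price(item)

db = {"milk": 10.0}
server = BusinessLogicServer(db)
branches = [ThinClient(server) for _ in range(3)]   # many clients, one logic server

server.discount = 0.10               # change the rule once, at the server
print([c.price("milk") for c in branches])  # → [9.0, 9.0, 9.0]
```

Every branch sees the new discount immediately, which is precisely the maintenance advantage the answer describes; in the 2-tier version the `discount` field would live in each `ThinClient` and need three separate updates.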
35 QA Testing Interview Questions
1. What is SQA Activities?
2. How can we perform testing without expected results?
3. Which of the following statements about regression testing are true?
a. Regression Testing must consist of a fixed set of tests to create a baseline
b. Regression Testing should be used to detect defects in new features
c. Regression Testing can be run on every build
d. Regression Testing should be targeted to areas of high risk and known code ch
ange
e. Regression Testing, when automated, is highly effective in preventing defects
4. How do you conduct boundary value analysis for an OK pushbutton?
5. What is an exit and entry criteria in a Test Plan ?
6. To whom you send test deliverables?
7. What is configuration Management?
8. Who writes the Business requirements? What you do when you have the BRD?
9. What we normally check for in the Database Testing?
10. What is walk through and inspection?
11. What are the key elements for creating test plan?
12. How do you ensure the quality of the product?
13. What is the job of Quality assurance engineer? Difference between the testin
g & Quality Assurance job.
14. Can anyone send information regarding manual testing? I know how to use the WinRunner and LoadRunner tools with the sample flight reservation application. Can anyone send me information on how to test WebLogic and WebSphere?
15. What are the demerits of WinRunner?
16. How have you used white box and black box techniques in your application?
17. What is the role of QA in a project development?
18. How can you test a white page?
19. How do you scope, organize, and execute a test project?
20. What is the role of QA in a company that produces software?
21. Describe when you would consider employing failure mode and effects analysis.
22. In general, how do you see automation fitting into the overall process of te
sting?
23. How do you decide when you have tested enough?
24. Describe the basic elements you put in a defect report.
25. What is use case? What is the difference between test cases and use cases?
26. What is the importance of a requirements traceability in a product testing?
27. If the actual result doesn't match the expected result, what should we do?
28. Explain metrics and types of metrics, such as schedule variance and effort variance.
29. What is the difference between functional testing & black box testing?
30. What is heuristic checklist used in Unit Testing?
31. What is the difference between System Testing, Integration Testing, and System Integration Testing?
32. How to calculate the estimate for test case design and review?
33. What is Requirements Traceability ? What is the purpose of it ? Explain type
s of traceability matrices ?
34. What are the contents of Risk management Plan? Have you ever prepared a Risk
Management Plan ?
35. What metrics used to measure the size of the software?
Software Testing Interview Questions Part 4
31. If we have no SRS or BRS but we have test cases, do you execute the test cases blindly, or do you follow any other process?
A: Test cases contain detailed steps of what the application is supposed to do, so:
1) The functionality of the application is known.
2) In addition, you can refer to the back end, i.e. look into the database, to gain more knowledge of the application.
32. How do you execute a test case?
A: There are two ways:
1. The Manual Runner tool, for manual execution and updating of test status.
2. Automated test case execution, by specifying the host name and other automation-related details.
33. What is the difference between retesting and regression testing?
A: Retesting: re-execution of test cases on the same application build, with different input values, is retesting.
Regression testing: re-execution of test cases on a modified build is called regression testing.
34. What is the difference between a bug log and defect tracking?
A: A bug log is a document which maintains the information about the bugs, whereas bug tracking is the process.
35. Who will change the bug status to Deferred?
A: A bug is in Open status while the developer is working on it, and Fixed after the developer completes his work; if it is not fixed properly, the tester puts it in Reopen, and after the bug is fixed properly it is moved to Closed. The Deferred status is set by the developer.
36. What is smoke testing and user interface testing?
A: Smoke testing:
Smoke testing is non-exhaustive software testing which checks that the most crucial functions of a program work, without bothering with finer details. The term comes to software testing from a similarly basic type of hardware testing.
UI testing:
Some say it is nothing but usability testing: testing to determine the ease with which a user can learn to operate, provide input to, and interpret the outputs of a system or component.
Smoke testing checks whether the basic functionality of the build is stable or not; i.e. if it possesses, say, 70% of the functionality, we say the build is stable.
User interface testing: we check whether all the fields exist as per the specified format, and we check spelling, graphics, and font sizes; everything should be present in the window.
37. What is a bug, a defect, an issue, and an error?
A: Bug: a bug is identified by the tester.
Defect: a problem that comes with the project itself, e.g. a requirement that was missed or misunderstood during the analysis phase.
Issue: most of the time, an error found at the client site.
Error: when anything goes wrong in the project on the development side; most of the time this is known to the developer.
Alternatively:
Bug: a fault or defect in a system or machine.
Defect: an imperfection in a device or machine.
Issue: a major problem that will impede the progress of the project and cannot be resolved by the project manager and project team without outside help.
Error: the deviation of a measurement, observation, or calculation from the truth.
38. What is the difference between functional testing and integration testing?
A: Functional testing is testing the whole functionality of the system or application to check whether it meets the functional specifications.
Integration testing means testing the functionality of integrated modules when two individual modules are integrated; for this we use the top-down and bottom-up approaches.
39. What types of testing do you perform in your organization while doing system testing? Give them clearly.
A: Functional testing
User interface testing
Usability testing
Compatibility testing
Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web
Consortium (W3C)
40. What is the main use of preparing a traceability matrix? Explain its real-time usage.
A: A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement. A traceability matrix is a report from the requirements database or repository.
41. How can you do 1) usability testing and 2) scalability testing?
A:
Usability testing: testing the ease with which users can learn and use a product.
Scalability testing: a web-testing term; it allows web site capability improvement.
Portability testing: testing to determine whether the system/software meets the specified portability requirements.
42. What do you mean by positive and negative testing, and what is the difference between them? Can anyone explain with an example?
A: Positive testing: testing the application functionality with valid inputs and verifying that the output is correct.
Negative testing: testing the application functionality with invalid inputs and verifying the output.
The difference is in how the application behaves when we enter invalid inputs: if it accepts an invalid input, the application functionality is wrong.
Positive test: testing aimed at showing that the software works with valid inputs; this is also called "test to pass".
Negative testing: testing aimed at showing that the software doesn't work with invalid inputs, also known as "test to fail". Boundary value analysis (BVA) is the best example of negative testing.
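As a sketch of positive, negative, and boundary cases together, take a hypothetical square(n) function that accepts integers 1 to 100 (the function and its range are assumptions made up for this example):

```python
def square(n):
    """Hypothetical function under test: accepts integers 1..100 only."""
    if not isinstance(n, int) or isinstance(n, bool) or not 1 <= n <= 100:
        raise ValueError("input must be an integer between 1 and 100")
    return n * n

# Positive tests (valid inputs, "test to pass"), including the BVA boundaries 1 and 100.
assert square(1) == 1
assert square(100) == 10000
assert square(7) == 49

# Negative tests (invalid inputs, "test to fail"): values just outside the
# boundaries, plus wrong-type inputs.
for bad in (0, 101, -5, 2.5, "10"):
    try:
        square(bad)
        raise AssertionError(f"square({bad!r}) should have been rejected")
    except ValueError:
        pass  # expected: invalid input is rejected
```

Note how BVA drives the choice of values: 1 and 100 are positive boundary cases, while 0 and 101, one step outside each boundary, are the highest-yield negative cases.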
43. What is a change request, and how do you use it?
A: A change request is an attribute, or part, of the defect life cycle. When you as a tester find a defect and report it to your lead, the lead in turn informs the development team. If the development team says it is not a defect but an extra implementation, or not part of the requirement, and the customer has to pay for it, then the status in your defect report would be Change Request.
Change requests are controlled by the change control board (CCB). If the client requires any changes after we start the project, they have to come through the CCB, which has full rights to accept or reject them based on the project schedule and cost.
44. What is risk analysis? What type of risk analysis did you do in your project?
A: Risk analysis is the systematic use of available information to determine how often specified and unspecified events may occur and the magnitude of their likely consequences; or, a procedure to identify threats and vulnerabilities, analyze them to ascertain the exposures, and highlight how the impact can be eliminated or reduced.
Types:
1. Quantitative risk analysis
2. Qualitative risk analysis
45. What is an API?
A: Application Programming Interface.
Software Testing Interview Questions Part 3
16. What is the bug life cycle?
A: New: when the tester reports a defect.
Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status is changed to Rejected.
Fixed: when the developer makes changes to the code to rectify the bug.
Closed/Reopen: when the tester tests it again; if the expected result shows up, the status is changed to Closed, and if the problem persists, it is Reopened.
17. What is the deferred status in the defect life cycle?
A: Deferred status means the developer has accepted the bug, but it is scheduled to be rectified in the next build.
18. What is a smoke test?
A: Testing whether the application performs its basic functionality properly, so that the test team can go ahead with testing the application.
19. Do you use any automation tool for smoke testing?
A: One definitely can.
20. What are verification and validation?
A: Verification is static: no code is executed (say, analysis of requirements). Validation is dynamic: code is executed with the scenarios present in test cases.
21. What is a test plan? Explain its contents.
A: A test plan is a document which contains the scope for testing the application: what is to be tested, when it is to be tested, and who is to test it.
22. What are the advantages of automation over manual testing?
A: Savings in time, resources, and money.
23. What is ad hoc testing?
A: Ad hoc testing means doing something which is not planned.
24. What is meant by release notes?
A: A document released along with the product which describes the product. It also lists the bugs that are in deferred status.
25. Under which category does scalability testing come?
A: Scalability testing comes under performance testing. Load testing and scalability testing are the same.
What is the full form of QTP?
QuickTest Professional.
What is QTP?
QTP is Mercury Interactive's functional testing tool.
Which scripting language is used by QTP?
QTP uses VBScript.
What is the basic concept of QTP?
QTP is based on two concepts:
* Recording
* Playback
How many types of recording facilities are available in QTP?
QTP provides three recording methods:
* Context Recording (Normal)
* Analog Recording
* Low-Level Recording
How many types of parameters are available in QTP?
QTP provides three types of parameters:
* Method Argument
* Data Driven
* Dynamic
What is the QTP testing process?
The QTP testing process consists of seven steps:
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Running
* Analyzing results
* Reporting defects
What is the Active Screen?
It provides snapshots of your application as it appeared when you performed a certain step during the recording session.
What is the Test Pane?
The Test Pane contains the Tree View and Expert View tabs.
What is the Data Table?
It assists you in parameterizing the test.
What is the Test Tree?
It provides a graphical representation of the operations you have performed on your application.
Which environments does QTP support?
ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL
How can you view the Test Tree?
The Test Tree is displayed through the Tree View tab.
What is the Expert View?
The Expert View displays the test script.
Which shortcut key is used for Normal Recording?
F3
Which shortcut key is used to run the test script?
F5
Which shortcut key is used to stop the recording?
F4
Which shortcut key is used for Analog Recording?
Ctrl+Shift+F4
Which shortcut key is used for Low-Level Recording?
Ctrl+Shift+F3
Which shortcut key is used to switch between Tree View and Expert View?
Ctrl+Tab
What is a Transaction?
You can measure how long it takes to run a section of your test by defining transactions.
Where can you view the results of a checkpoint?
You can view the results of checkpoints in the Test Results window.
What is the Standard Checkpoint?
A Standard Checkpoint checks the property values of an object in your application or web page.
Which environments are supported by the Standard Checkpoint?
Standard Checkpoints are supported in all add-in environments.
What is the Image Checkpoint?
An Image Checkpoint checks the value of an image in your application or web page.
Which environments are supported by the Image Checkpoint?
Image Checkpoints are supported only in the Web environment.
What is the Bitmap Checkpoint?
A Bitmap Checkpoint checks the bitmap images in your web page or application.
Which environments are supported by the Bitmap Checkpoint?
Bitmap Checkpoints are supported in all add-in environments.
What is the Table Checkpoint?
A Table Checkpoint checks the information within a table.
Which environments are supported by the Table Checkpoint?
Table Checkpoints are supported only in the ActiveX environment.
What is the Text Checkpoint?
A Text Checkpoint checks that a text string is displayed in the appropriate place in your application or web page.
Which environments are supported by the Text Checkpoint?
Text Checkpoints are supported in all add-in environments.
Note:
* QTP records each step you perform and generates a test tree and test script.
* QTP records in normal recording mode by default.
* If you are creating a test on a web object, you can record your test on one browser and run it on another.
* Analog Recording and Low-Level Recording require more disk space than normal recording mode.
SMOKE TESTING:
* Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without going into too much depth.
* A smoke test is scripted, using either a written set of tests or an automated test.
* A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
* Smoke testing is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details (such as build verification).
* Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.
SANITY TESTING:
* A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
* A sanity test is usually unscripted.
* A sanity test is used to determine that a small section of the application is still working after a minor change.
* Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
* Sanity testing verifies whether requirements are met, checking all features breadth-first.
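A scripted, automated smoke test in the "shallow and wide" sense can be sketched as one cursory check per crucial function; the stand-in application functions below are inventions for the example, not a real API:

```python
# Minimal smoke-test sketch: touch each crucial area once, shallow and wide.
# The two functions below are stand-ins for the real application under test.

def app_login(user, password):        # stand-in for the real login function
    return user == "admin" and password == "secret"

def app_search(term):                 # stand-in for the real search function
    catalog = ["apple", "banana", "cherry"]
    return [item for item in catalog if term in item]

def smoke_test():
    """Run one cursory check per crucial function; stop at the first failure."""
    checks = [
        ("login works",  lambda: app_login("admin", "secret") is True),
        ("search works", lambda: app_search("an") == ["banana"]),
    ]
    for name, check in checks:
        if not check():
            return f"SMOKE FAIL: {name}"
    return "SMOKE PASS"

print(smoke_test())  # → SMOKE PASS
```

A sanity test would instead pick one of these areas after a change and exercise it in depth, which is exactly the narrow-and-deep contrast the bullets above draw.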
What is the difference between client/server testing and web-based testing, and what do we need to test in such applications?
Ans:
Projects are broadly divided into two types:
* 2-tier applications
* 3-tier applications
CLIENT/SERVER TESTING
This type of testing is usually done for 2-tier applications (usually developed for a LAN). Here we have a front end and a back end. The application launched on the front end has forms and reports which monitor and manipulate data.
E.g. applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc.
The back end for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL, or Quadbase.
The tests performed on these types of applications would be:
- User interface testing
- Manual support testing
- Functionality testing
- Compatibility testing and configuration testing
- Intersystem testing
WEB TESTING
This is done for 3-tier applications (developed for the Internet / intranet / extranet). Here we have a browser, a web server, and a DB server.
The applications accessible in the browser would be developed in HTML, DHTML, XML, JavaScript, etc. (we can monitor through these applications).
Applications on the web server would be developed in Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP, etc. (all manipulations are done on the web server with the help of these programs).
The DB server would run Oracle, SQL Server, Sybase, MySQL, etc. (all data is stored in the database on the DB server).
The tests performed on these types of applications would be:
- User interface testing
- Functionality testing
- Security testing
- Browser compatibility testing
- Load/stress testing
- Interoperability testing / intersystem testing
- Storage and data volume testing
A web application is a three-tier application: a browser (monitors data; using HTML, DHTML, XML, JavaScript) -> web server (manipulates data; using programming languages or scripts such as Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP) -> database server (stores data; using databases such as Oracle, SQL Server, Sybase, MySQL).
The types of tests which can be applied to this type of application are:
1. User interface testing for validation and user-friendliness
2. Functionality testing to validate behaviors, inputs, error handling, outputs, manipulations, service levels, order of functionality, links, content of web pages, and back-end coverage
3. Security testing
4. Browser compatibility testing
5. Load/stress testing
6. Interoperability testing
7. Storage and data volume testing
A client/server application is a two-tier application: forms and reporting at the front end (monitoring and manipulation; using VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc.) -> database server at the back end (data storage and retrieval; using MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase, etc.).
The tests performed on these applications would be:
1. User interface testing
2. Manual support testing
3. Functionality testing
4. Compatibility testing
5. Intersystem testing
Some more points to clarify the difference between client/server, web, and desktop applications:
Desktop application:
1. The application runs on a single machine (front end and back end in one place).
2. Single user only.
Client/server application:
1. The application runs on two or more machines.
2. The application is menu-driven.
3. Connected mode (the connection exists until logout).
4. Limited number of users.
5. Fewer network issues compared to a web application.
Web application:
1. The application runs on two or more machines.
2. URL-driven.
3. Disconnected mode (stateless).
4. Unlimited number of users.
5. Many issues such as hardware compatibility, browser compatibility, version compatibility, security, and performance.
Another difference between the applications lies in how resources are accessed. In client/server, once a connection is made it stays in the connected state, whereas in web testing the HTTP protocol is stateless; from this comes the logic of cookies, which does not exist in client/server.
For a client/server application the users are well known, whereas for a web application any user can log in, access the content, and use it as he or she intends. So there are always issues of security and compatibility for web applications.
26. What is the difference between a bug and a defect?
A: Bug: a deviation from the expected result. Defect: a problem in the algorithm that leads to failure.
A mistake in code is called an error. Due to an error in coding, the mismatches test engineers find in the application are called defects. A defect accepted by the development team to be solved is called a bug.
27. What is a hot fix?
A: A hot fix is a single, cumulative package that includes one or more files used to address a problem in a software product. Typically, hot fixes are made to address a specific customer situation and may not be distributed outside the customer organization.
In short: a bug found at the customer site which has high priority.
28. What is the difference between functional test cases and compatibility test cases?
A: There are no separate test cases for compatibility testing; in compatibility testing we test the application on different hardware and software configurations.
29. What is ACID testing?
A: ACID testing is related to testing a transaction:
A - Atomicity
C - Consistency
I - Isolation
D - Durability
Mostly this is done in database testing.
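A minimal sketch of the atomicity property, using SQLite purely as an illustration (the accounts table and amounts are invented): either the whole transaction commits, or none of it does.

```python
import sqlite3

# Atomicity sketch: a transfer between two accounts either fully commits
# or fully rolls back; it never half-applies.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # the 'with' block is one transaction: commit on success, rollback on error
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
        # Simulate a business-rule failure mid-transaction: alice would go negative.
        (bal,) = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
except ValueError:
    pass  # the whole transaction was rolled back automatically

# Atomicity check: BOTH balances are unchanged, not just one of them.
rows = dict(conn.execute("SELECT name, balance FROM accounts"))
print(rows)  # → {'alice': 100, 'bob': 0}
```

A database test for atomicity does exactly this kind of check: after a forced mid-transaction failure, verify that no partial update leaked into the tables.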
30. What is the main use of preparing a traceability matrix?
A: To cross-verify the prepared test cases and test scripts against the user requirements, and to monitor the changes and enhancements that occur during the development of the project.
A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, hence giving an opportunity to verify that all the requirements are covered in testing the application.
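As a small illustration (the requirement and test-case IDs are made up), a traceability matrix can be represented as a mapping from requirements to the test cases covering them; any requirement with an empty list is uncovered:

```python
# Hypothetical traceability matrix: requirement ID -> test cases covering it.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # no test case covers this requirement yet
}

# Cross-check: flag requirements that are not covered by any test case.
uncovered = [req for req, tcs in traceability.items() if not tcs]
print(uncovered)  # → ['REQ-003']
```

This is the cross-check the answer describes in miniature: the matrix makes a coverage gap (REQ-003) visible at a glance instead of leaving it implicit in the test suite.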
1. What are the features and benefits of QuickTest Pro (QTP)?
1. Keyword-driven testing
2. Suitable for both client/server and web-based applications
3. VBScript as the scripting language
4. Better error-handling mechanism
5. Excellent data-driven testing features
2. How do you handle exceptions using the Recovery Scenario Manager in QTP?
You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps:
1. Triggered events
2. Recovery steps
3. Post-recovery test run
3. What is the use of a text output value in QTP?
Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus, by creating output values, we can capture the values that the application takes for each run and output them to the data table.
4. How do you use the Object Spy in QTP 8.0?
There are two ways to spy on objects in QTP:
1) Through the File toolbar: click the last toolbar button (an icon showing a person with a hat).
2) Through the Object Repository dialog: click the "object spy" button, then in the Object Spy dialog click the button showing a hand symbol. The pointer changes into a hand symbol, and we point at the object to spy on its state. If the object is not visible, or the window is minimized, hold the Ctrl key, activate the required window, and then release the Ctrl key.
5. What are the file extensions of the code file and object repository files in QTP?
Per-test object repository: filename.mtr
Shared object repository: filename.tsr
Code file extension: script.mts
6. Explain the concept of the object repository and how QTP recognizes objects.
Object repository: displays a tree of all objects in the current component, in the current action, or in the entire test (depending on the object repository mode you selected). We can view or modify the test object description of any test object in the repository, or add new objects to the repository.
QuickTest learns the default property values and determines which test object class the object fits. If that is not enough, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, it adds a special ordinal identifier, such as the object's location on the page or in the source code.
7. What properties would you use for identifying a browser and a page when using descriptive programming?
"name" would be another property, apart from "title", that we can use. We can also use the property "micClass", e.g.:
Browser("micClass:=Browser").Page("micClass:=Page")
8. What are the different scripting languages you could use when working with QTP?
You can write scripts using the following languages: Visual Basic (VB), XML, JavaScript, Java, HTML.
9. Name some commonly used Excel VBA functions.
Common operations include coloring a cell, auto-fitting a cell, setting navigation from a link in one cell to another, and saving.
10. Explain the keyword CreateObject with an example.
CreateObject creates and returns a reference to an Automation object.
Syntax: CreateObject(servername.typename [, location])
Arguments:
servername: Required. The name of the application providing the object.
typename: Required. The type or class of the object to create.
location: Optional. The name of the network server where the object is to be created.
11. Explain in brief about the QTP Automation Object Model.
Essentially all configuration and run functionality provided via the QuickTest i
nterface is in some way represented in the QuickTest automation object model via
objects, methods, and properties. Although a one-on-one comparison cannot alway
s be made, most dialog boxes in QuickTest have a corresponding automation object
, most options in dialog boxes can be set and/or retrieved using the correspondi
ng object property, and most menu commands and other operations have correspondi
ng automation methods. You can use the objects, methods, and properties exposed
by the QuickTest automation object model, along with standard programming elemen
ts such as loops and conditional statements to design your program.
12. How do you handle dynamic objects in QTP?
QTP has a feature called Smart Identification. QTP generally identifies an object by matching its test object and run-time object properties, so it may fail to recognize dynamic objects whose properties change during run time. Hence it has an option to enable Smart Identification, with which it can identify objects even if their properties change during run time.
If QuickTest is unable to find any object that matches the recorded object description, or if it finds more than one object that fits the description, it ignores the recorded description and uses the Smart Identification mechanism to try to identify the object. While the Smart Identification mechanism is more complex, it is more flexible; thus, if configured logically, a Smart Identification definition can help QuickTest identify an object, if it is present, even when the recorded description fails.
The Smart Identification mechanism uses two types of properties:
Base filter properties: the most fundamental properties of a particular test object class, those whose values cannot be changed without changing the essence of the original object. For example, if a web link's tag was changed to any other value, you could no longer call it the same object.
Optional filter properties: other properties that can help identify objects of a particular class, as they are unlikely to change on a regular basis, but which can be ignored if they are no longer applicable.
13. What is a run-time data table? Where can I find and view this table?
In QTP there is a data table used at run time. Select View -> Data Table to view it. It is basically an Excel file stored in the folder of the test created; its name is Default.xls by default.
14. How do parameterization and data-driving relate to each other in QTP?
To data-drive a test, we have to parameterize it: we make a constant value a parameter, so that in each iteration (cycle) it takes a value supplied in the run-time data table. Only through parameterization can we drive a transaction (action) with different sets of data. Running the script with the same set of data several times is not recommended, and is also of no use.
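The same data-driven idea can be sketched outside QTP (the function under test and the data table rows are invented for the example): the constant input becomes a parameter, and one test routine is driven by each row of a data table.

```python
# Data-driven testing sketch: one test routine, many rows of test data.

def to_upper(s):
    """Stand-in for the action under test."""
    return s.upper()

# The "run-time data table": each row supplies one iteration's input
# and its expected output.
data_table = [
    ("hello", "HELLO"),
    ("Test", "TEST"),
    ("qtp", "QTP"),
]

results = []
for value, expected in data_table:      # one iteration (cycle) per row
    results.append(to_upper(value) == expected)

print(all(results))  # → True
```

Adding a new test case is then just adding a row to the table; the script itself never changes, which is the whole point of parameterizing the constant value.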
15. What is the difference between Call to Action and Copy Action?
Call to Action: changes made in a called action will be reflected in the original action (from where the action is called), whereas with Copy Action, changes made in the script will not affect the original action.
16. Explain how QTP identifies objects.
During recording, QTP looks at the object and stores it as a test object. For each test object, QTP learns a set of default properties, called mandatory properties, and checks whether these properties are enough to uniquely identify the object. During a test run, QTP searches for the run-time object that matches the test object it learned while recording.
17. Differentiate the two Object Repository Types of QTP.
Object repository is used to store all the objects in the application being test
ed.
Types of object repository: Per action and shared repository.
In shared repository only one centralized repository for all the tests. where as
in per action for each test a separate per action repository is created.
18. What are the differences between the Object Repository types, and when is each best used?
Per Action: For Each Action, one Object Repository is created.
Shared: One Object Repository is used by entire application
19. Explain the difference between Shared Repository and Per-Action Repository.
Shared Repository: the entire application uses one object repository, similar to the Global GUI Map file in WinRunner.
Per-Action: for each action, one object repository is created, like the GUI map file per test in WinRunner.
20. Have you ever written a compiled module? If yes tell me about some of the fu
nctions that you wrote.
Sample answer (describe modules you actually worked on; if your answer is yes, expect follow-up questions and be ready to explain those modules): I used functions for capturing dynamic data during runtime, for example functions for capturing the desktop, the browser, and pages.
21. Can you do more than just capture and playback?
Sample answer (say yes only if you have actually done this): I have dynamically captured objects during runtime, with no recording, no playback, and no use of the repository at all.
-This was done through Windows scripting, using the DOM (Document Object Model).
22. How to do the scripting. Are there any inbuilt functions in QTP? What is the
difference between them? How to handle script issues?
Yes, there is a built-in facility called the Step Generator (Insert -> Step -> Step Generator, or F7), which generates the script as you enter the appropriate steps.
23. What is the difference between check point and output value?
A checkpoint compares a value captured during the test run against an expected value and reports pass or fail. An output value is a value captured during the test run and stored in a specified location for later use.
Ex: a location in the Data Table (global sheet / local sheet).
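The contrast can be sketched in a few lines of Python (the values and the dict-based "data table" below are hypothetical stand-ins, not QTP API): a checkpoint yields a verdict, while an output value only stores the captured value for a later step.

```python
# Checkpoint vs. output value, as a conceptual sketch.

data_table = {}  # stands in for the run-time Data Table (global sheet)

def checkpoint(actual, expected):
    """Checkpoint: pass/fail verification against a stored expected value."""
    return actual == expected

def output_value(name, actual):
    """Output value: capture the run-time value into a specified location."""
    data_table[name] = actual

order_id = "ORD-1234"  # hypothetical value captured from the app at run time

print(checkpoint(order_id, "ORD-1234"))  # True -> checkpoint passes
output_value("OrderID", order_id)        # no verdict; value saved for reuse
print(data_table["OrderID"])             # ORD-1234
```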
24. How many types of Actions are there in QTP?
There are three kinds of actions:
Non-reusable action - An action that can be called only in the test with which i
t is stored, and can be called only once.
Reusable action - An action that can be called multiple times by the test with w
hich it is stored (the local test) as well as by other tests.
External action - A reusable action stored with another test. External actions a
re read-only in the calling test, but you can choose to use a local, editable co
py of the Data Table information for the external action.
25. I want to open a Notepad window without recording a test and I do not want t
o use System utility Run command as well. How do I do this?
You can still open Notepad without recording and without the System utility Run command, by entering the path of the application (i.e. where notepad.exe is stored on the system) in the Windows Applications tab of the Record and Run Settings window.
Why Automation testing?
1) You have new releases and bug fixes in a working module. How will you ensure that the new bug fixes have not introduced any bug into previously working functionality? You need to test the previous functionality as well. Will you manually test all module functionality every time there is a bug fix or a new feature? You could, but then you are not testing effectively in terms of company cost, resources, time, etc. This is where automation comes in.
- So automate your testing procedure when you have a lot of regression work.
2) You are testing a web application where thousands of users might interact with the application simultaneously. How will you test such an application? How will you create that many users manually and simultaneously? That is a very difficult task to do manually.
- Automate your load testing work for creating virtual users to check load capac
ity of your application.
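The idea of "virtual users" can be sketched as concurrent workers, each issuing requests against the application. The example below uses a fake request function as a stand-in; real load tools (e.g. LoadRunner, JMeter) add ramp-up schedules, think times, and response-time metrics.

```python
# Minimal "virtual users" sketch: 50 concurrent workers, 10 requests each.

import threading

hits = 0
lock = threading.Lock()

def fake_request():
    # Stand-in for an HTTP request to the application under test.
    global hits
    with lock:  # the lock keeps the shared counter consistent
        hits += 1

def virtual_user(n_requests):
    for _ in range(n_requests):
        fake_request()

threads = [threading.Thread(target=virtual_user, args=(10,))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(hits)  # 50 virtual users x 10 requests = 500
```

Spawning 50 threads by hand is trivial here; creating and coordinating thousands of real browser sessions manually is not, which is the point of automating load tests.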
3) You are testing an application whose code changes frequently. The GUI stays almost the same, but there are many functional changes, so testing rework is high.
- Automate your testing work when your GUI is almost frozen but you have a lot of frequent functional changes.
What are the Risks associated in Automation Testing?
There are some distinct situations where you can consider automating your testing work, but automation also carries risks, and I have covered some of them here. If you have decided to automate, or are about to decide, think through the following scenarios first.
1) Do you have skilled resources?
For automation you need people with some programming knowledge. Think of your resources: do they have sufficient programming knowledge for automation testing? If not, do they have the technical capability or programming background to adapt easily to new technologies? Are you going to invest money in building a good automation team? Only if the answer is yes should you consider automating your work.
2) Initial cost for Automation is very high:
I agree that manual testing also has a high cost, associated with hiring skilled manual testers. But if you think automation will be the solution, think twice: the initial setup cost of automation is very high, i.e. the cost of purchasing the automation tool, training, and maintaining the test scripts.
There are many unsatisfied customers who regret their decision to automate. If you spend heavily and get merely some good-looking testing tools and some basic automation scripts, what is the use of automation?
3) Do not think to automate your UI if it is not fixed:
Be careful before automating the user interface. If the user interface is changing extensively, the cost of script maintenance will be very high. Basic UI automation is sufficient in such cases.
4) Is your application stable enough to automate further testing work?
It is a bad idea to automate testing work early in the development cycle (unless you are in an agile environment). Script maintenance cost will be very high in such cases.
5) Are you thinking of 100% automation?
Please stop dreaming: you cannot automate 100% of your testing work. In areas like performance testing, regression testing, and load/stress testing you have a chance of getting close to 100% automation, but in areas like user interface, documentation, installation, compatibility, and recovery, testing must be done manually.
6) Do not automate tests that run once:
Identify application areas and test cases that will run only once and are not included in regression, and avoid automating those modules or test cases.
7) Will your automation suite have a long lifetime?
Every automation script suite should live long enough that its build cost is definitely less than the cost of manual execution. It is a bit difficult to analyze the effective cost of each automation script suite; approximately, your automation suite should be used or run at least 15 to 20 times across separate builds (a general assumption that depends on the specific application's complexity) to have a good ROI.
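The break-even logic behind the "15 to 20 runs" rule of thumb is simple arithmetic: automation pays off once build cost plus per-run maintenance drops below the cumulative cost of manual execution. The hour figures below are purely illustrative assumptions, not data from the article.

```python
# Break-even sketch for automation ROI. All figures are hypothetical.

build_cost = 40        # hours to build and stabilize the automation suite
maintain_per_run = 0.5 # hours of script maintenance per build/run
manual_per_run = 3     # hours to execute the same suite manually

def breakeven_runs(build, maintain, manual):
    """Smallest run count at which total automation cost is below manual cost."""
    runs = 1
    while build + maintain * runs >= manual * runs:
        runs += 1
    return runs

# With these assumptions, automation becomes cheaper than manual execution
# on the 17th run: 40 + 0.5*17 = 48.5 hours vs. 3*17 = 51 hours.
print(breakeven_runs(build_cost, maintain_per_run, manual_per_run))  # 17
```

Plugging in your own estimates gives a quick sanity check on whether a candidate suite will live long enough to pay for itself.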
Here is the conclusion:
Automation testing is a very effective way to accomplish most testing goals while making efficient use of resources and time, but you should be cautious when choosing an automation tool. Be sure to have skilled staff before deciding to automate your testing work; otherwise the tool will sit on the shelf giving you no ROI, and handing expensive automation tools to unskilled staff will only lead to frustration. Before purchasing an automation tool, make sure it is a good fit for your requirements. No tool will match your requirements 100%, so find out the limitations of the tool that best matches your requirements, and then use manual testing techniques to overcome those limitations. An open source tool is also a good option to start with. To know more about choosing automation tools, read my previous posts here and here.
Instead of relying 100% on either manual or automation use the best combination
of manual and automation testing. This is the best solution (I think) for every
project. Automation suite will not find all the bugs and cannot be a replacement
for real testers. Ad-hoc testing is also necessary in many cases.
Over to you: I would like to hear about your experience with automation testing. Any practical experience will always be helpful for our readers.