
Today, we are starting our Performance Testing with LoadRunner tutorial series.

This is the first tutorial in our three-part HP LoadRunner training series. In the first two tutorials we will introduce you to performance testing, and in the last tutorial we will share Load Testing with LoadRunner video tutorials.
See Also => List of all LoadRunner Video Tutorials.
Why Performance testing?
Performance testing has proved itself to be crucial for the success of a business. Not only does a poorly performing site face financial losses, it could at times also lead to legal repercussions.

No one wants to put up with a slow, unreliable site when purchasing, taking an online test, paying a bill, etc. With the internet so widely available, the alternatives are plentiful. It is easier to lose clientele than to gain them, and performance is a key game changer.
Therefore, performance testing is no longer a token checkpoint before going live. It is a comprehensive and detailed stage that determines whether the performance of a site or an application meets the needs.
Introduction
The purpose of this test is to understand the performance of the application under load, particularly with concurrent users.

Types of Performance Testing

Load Testing
Load testing is a type of performance test in which the application is tested for its performance at normal and peak usage. The application's performance is checked with respect to its response to user requests and its ability to respond consistently, within an accepted tolerance, under different user loads.
The key considerations are:
1. What is the maximum load the application can hold before it starts behaving unexpectedly?
2. How much data can the database handle before system slowness or a crash is observed?
3. Are there any network-related issues to be addressed?
Stress Testing
Stress testing is a test to find ways to break the system. The test also gives an idea of the maximum load the system can hold.
Generally, stress testing takes an incremental approach where the load is increased gradually. The test starts with a load for which the application has already been tested. Then more load is slowly added to stress the system, and the point at which the servers stop responding to requests is considered the break point.
During this test, all the functionality of the application is exercised under heavy load; on the back end, this functionality might be running complex queries, handling data, etc.
The following questions are to be addressed:

What is the maximum load a system can sustain before it breaks down?
How does the system break down?
Is the system able to recover once it has crashed?
In how many ways can the system break, and which are the weak nodes when handling the unexpected load?

Volume Testing
Volume testing verifies that the performance of the application is not affected by the volume of data it handles. Hence, to execute a volume test, a huge volume of data is generally entered into the database. This test can be incremental or steady; in an incremental test, the volume of data is increased gradually.
Generally, with application usage the database size grows, and it is necessary to test the application against a heavy database. A good example is the website of a new school or college: it has little data to store initially, but after 5-10 years the data stored in its database is much larger.
The most common recommendation from this test is the tuning of the DB queries that access the database. In some cases the response time of DB queries becomes high for a big database, so they need to be rewritten in a different way, or indexes, joins, etc. need to be added.
Capacity Testing
=> Is the application capable of meeting business volume under both normal and peak load conditions?
Capacity testing is generally done for future prospects. It addresses the following:
1. Will the application be able to support the future load?
2. Is the environment capable of withstanding the upcoming increased load?
3. What additional resources are required to make the environment capable enough?
Capacity testing is used to determine how many users and/or transactions a given web application will support while still meeting performance goals. During this testing, resources such as processor capacity, network bandwidth, memory usage, disk capacity, etc. are considered and altered to meet the goal.

Online Banking is a perfect example of where capacity testing could play a major part.
Reliability/Recovery Testing
Reliability Testing, or Recovery Testing, verifies whether the application is able to return to its normal state after a failure or abnormal behavior, and how long it takes to do so (in other words, a time estimation).
Suppose an online trading site experiences a failure where users are unable to buy/sell shares at a certain point of the day (peak hours) but can do so after an hour or two. In this case, we can say the application is reliable, having recovered from the abnormal behavior.
In addition to the above sub-forms of performance testing, there are some more
fundamental ones that are prominent:
Smoke Test:

How does the new version of the application perform compared to previous ones?
Is any performance degradation observed in any area of the new version?
Which area should developers focus on next to address performance issues in the new version of the application?

See also => Smoke Testing in functional testing.


Component Test:

Is the component responsible for the performance issue?

Is the component doing what is expected, and has it been optimized?

Endurance Test:

Will the application be able to perform well enough over a long period of time?
Are there any potential reasons that could slow the system down?
Is there any third-party tool and/or vendor integration, and any possibility that the interaction makes the application slower?

How does Functional Testing differ from Performance Testing?

Identification of components for testing


In an ideal scenario, all components should be performance tested. However, due to time
& other business constraints that may not be possible. Hence, the identification of
components for testing happens to be one of the most important tasks in load testing.
The following components must be included in performance testing:
#1. Functional, business critical features
Components that have a customer Service Level Agreement, or those having complex business logic (and are critical for the business's success), should be included.
Example: Checkout and Payment for an E-commerce site like eBay.
#2. Components that process high volumes of data
Components, especially background jobs, are to be included for sure.
Example: Upload and download features on a file-sharing website.
#3. Components which are commonly used
A component that is frequently used by end-users, jobs scheduled multiple times in a
day, etc.
Example: Login and Logout.
#4. Components interfacing with one or more application systems

In a system involving multiple applications that interact with one another, all the interface components must be deemed critical for the performance test.
Example: E-commerce sites interface with online banking sites for payments, an external third-party application. This should definitely be part of performance testing.

Tools for performance testing


Sure, you could have a million computers set up with a million different credentials, all of them logging in at once while you monitor the performance. Obviously that is not practical, and even if we did it, we would still need some sort of monitoring infrastructure.
The best way to handle this situation is through virtual users (VUs). For all our tests, the VUs behave just the way real users would.
Performance testing tools are employed to create as many VUs as required and to simulate real-time conditions. Not only that, performance testing also tests for peak load usage, the breakdown point, long-term usage, etc.
To achieve all of this with limited resources, quickly, and with reliable results, tools are typically used for this process. There are a variety of tools available in the market: licensed, freeware, and open source.
A few such tools are:

HP LoadRunner,
Jmeter,
Silk Performer,
NeoLoad,
Web Load,
Rational Performance Tester (RTP),
VSTS,
Loadstorm,
Web Performance,
LoadUI,
Loadster,
Load Impact,
OpenSTA,
QEngine,
Cloud Test,
Httperf,
App Loader,
Qtest,
RTI,
Apica LoadTest,
Forecast,
WAPT,

Monitis,
Keynote Test Perspective,
Agile Load, etc.

See Also => List of top 15 Performance Testing tools


The tool selection depends on budget, technology used, purpose of testing, nature of the
applications, performance goals being validated, infrastructure, etc.
HP LoadRunner captures the majority of the market due to:
1. Versatility: it can be used for Windows as well as web-based applications, and works with many kinds of technologies.
2. Test results: it provides in-depth insights that can be used for tuning the application.
3. Easy integrations: it works with diagnostics tools like HP SiteScope and HP Diagnostics.
4. Analysis: the Analysis utility provides a variety of features which help in deep analysis.
5. Robust reports: LoadRunner has a good reporting engine and provides a variety of reporting formats.
6. It comes with an Enterprise package too.
The only flip side is its license cost: it is a little on the expensive side, which is why other open source or affordably licensed tools, specific to a technology or protocol and with limited analysis & reporting capabilities, have emerged in the market.
Still, HP LoadRunner is a clear winner.

Future in Performance Testing Career


Performance testing is easy to learn, but it needs a lot of dedication to master. It is like a mathematics subject where you have to build up your concepts. Once the concepts are clear, they can be applied to most of the tools: even though the scripting language may differ, straightforward logic may not apply, and the look and feel of the tool may vary, the approach to performance testing is almost always the same.
I would highly recommend enhancing your skills by learning this hot and booming technology. Mastering performance testing could be just what you are looking for to move ahead in your software testing career.

Conclusion
In this article we have covered most of the information required to build a base to move ahead and understand performance testing. In the next article we will apply these concepts and understand the key activities of performance testing.

Next Tutorial => How to Performance Test an Application


LoadRunner is going to be our vehicle in this journey, but the destination we want to reach is to understand everything about performance testing.
Stay tuned!

How to Performance Test an Application: LoadRunner Training Tutorial Part 2
Posted In | Automation Testing, LoadRunner Tutorials, Software Testing Tools | Last Updated: February 12, 2015

This is the 2nd tutorial in our Performance Testing with LoadRunner training series. With this, we are learning the exact performance test process so that we can easily get hold of the Load Testing with HP LoadRunner tutorials.
Check out the first tutorial in this series here: Performance Testing Introduction.
Performance Testing Goals:
It is conducted to accomplish the following goals:
Verify the application's readiness to go live.
Verify whether the desired performance criteria are met.
Compare performance characteristics/configurations of the application against the standard.
Identify performance bottlenecks.
Facilitate performance tuning.

Key Activities in Performance Testing:

#1. Requirement Analysis/Gathering


The performance team interacts with the client for the identification and gathering of requirements: technical and business. This includes getting information on the application's architecture, technologies and database used, intended users, functionality, application usage, test requirements, hardware & software requirements, etc.

#2. POC/Tool selection

Once the key functionalities are identified, a POC (proof of concept, a sort of demonstration of the real-time activity, but in a limited sense) is done with the available tools. The choice among the available performance test tools depends on the cost of the tool, the protocol the application is using, the technologies used to build the application, the number of users we are simulating for the test, etc.
During the POC, scripts are created for the identified key functionalities and executed with 10-15 virtual users.

#3. Performance Test Plan & Design


Based on the information collected in the preceding stages, test planning and design are conducted.
Test planning involves information on how the performance test is going to take place: the test environment, the application, workload, hardware, etc.
Test design is mainly about the type of test to be conducted, metrics to be measured, metadata, scripts, the number of users, and the execution plan.
During this activity, a Performance Test Plan is created. This serves as an agreement before moving ahead and also as a road map for the entire activity. Once created, this document is shared with the client to establish transparency on the type of the application, test objectives, prerequisites, deliverables, entry and exit criteria, acceptance criteria, etc.
Briefly, a performance test plan includes:
a) Introduction (Objective and Scope)
b) Application Overview
c) Performance (Objectives & Goals)
d) Test Approach (User Distribution, Test data requirements, Workload criteria, Entry & Exit criteria, Deliverables, etc.)
e) In-Scope and Out-of-Scope
f) Test Environment (Configuration, Tool, Hardware,
Server Monitoring, Database, test configuration, etc.)
g) Reporting & Communication
h) Test Metrics
i) Roles & Responsibilities
j) Risk & Mitigation
k) Configuration Management

#4. Performance Test Development


Use cases are created for the functionalities identified in the test plan as the scope of PT.
These use cases are shared with the client for approval, to make sure the script will be recorded with the correct steps.
Once approved, script development starts: the steps in the use cases are recorded with the performance test tool selected during the POC, and the scripts are enhanced by performing correlation (for handling dynamic values), parameterization (value substitution), and custom functions as the situation requires. More on these techniques in our video tutorials.
The scripts are then validated with different users.
In parallel with script creation, the performance team also works on setting up the test environment (software and hardware).
The performance team will also take care of metadata (back end) through scripts if this activity is not taken up by the client.

#5. Performance Test Modeling

A Performance Load Model is created for the test execution. The main aim of this step is to validate whether the given performance metrics (provided by the client) are achieved during the test or not. There are different approaches to creating a load model; Little's Law is used in most cases.

#6. Test Execution


The scenario is designed according to the load model in the Controller or Performance Center, but the initial tests are not executed with the maximum number of users in the load model.
Test execution is done incrementally. For example, if the maximum number of users is 100, the scenario is first run with 10, 25, 50 users and so on, eventually moving up to 100 users.

#7. Test Results Analysis


Test results are the most important deliverable for the
performance tester. This is where we can prove the ROI
(Return on Investment) and productivity that a
performance testing effort can provide.

Some of the best practices that help the result


analysis process:
a) Give a unique and meaningful name to every test result; this helps in understanding the purpose of the test.
b) Include the following information in the test result summary:
Reason for the failure(s)
Change in the performance of the application compared to the previous test run
Changes made in the test in terms of the application build or test environment.
It is good practice to write a result summary after each test run, so that the analysis does not have to be compiled again every time the test results are referred to. PT generally requires many test runs to reach the correct conclusion.
It is good to have the following points in result
summary:

Purpose of the test
Number of virtual users
Scenario summary
Duration of the test
Throughput
Graphs
Graph comparisons
Response time
Errors occurred
Recommendations

There might be recommendations such as configuration changes for the next test. Server logs also help in identifying the root cause of problems (like bottlenecks); deep diagnostic tools are used for this purpose.
In the final report, all the test summaries are consolidated.

#8. Report
Test results should be simplified so that the conclusion is clear and does not need any derivation. The development team needs more information on the analysis, comparison of results, and details of how the results were obtained. A test report is considered good if it is brief, descriptive, and to the point.
The following guidelines will smooth this step out:
Use appropriate headings and summaries.
The report should be presentable so that it can be used in management meetings.
Provide supporting data for the results.
Give meaningful names to the table headers.
Share the status report periodically, even with the clients.
Report issues with as much information and evidence as possible, in order to avoid unnecessary correspondence.
The final report to be shared with the client has the
following information:

Execution Summary
System Under test
Testing Strategy
Summary of Test Results
Problems Identified
Recommendations

Along with the final report, all the deliverables as per the test plan should be shared with the client.

Conclusion
We hope this article has given process-oriented, conceptual, and detailed information on how performance testing is carried out from beginning to end.

In the past tutorials we have seen the basics of performance testing and the LoadRunner video tutorials. This article focuses on the most important, commonly asked LoadRunner interview questions and answers that will help you succeed in a performance tester's interview involving LoadRunner.
LoadRunner is one of the best licensed performance testing tools in the market. It is well suited to most upcoming technologies because of its wide range of supported protocols.
A few basic pointers before we begin:
#1) LoadRunner interview questions can be categorized into 3 main types: Scripting, Execution and Analysis. It is important for beginners to focus more on the scripting part.
#2) HTTP/HTML is the most commonly used protocol; to start, try to perfect this protocol.
#3) Be sure to know the exact version of LoadRunner that you worked on. If your experience is with a previous version, try to keep yourself updated with the features that are part of the newer/current versions.
#4) Performance testing interviews are more practical than they used to be. Scenario-oriented questions are more common than straightforward ones. Some companies even make scripting tests a part of the interview process, so be prepared for that.
#5) Even in scripting, it is preferred that you are able to customize code, instead of just recording and replaying.
#6) Expect questions on think time, transactions, comments, recording options, run-time settings, etc.; these test your knowledge of scripting best practices.

The following are some performance testing interview questions that need some experience to answer. Try to keep these questions in mind while working on your performance test projects, so that interview preparation becomes a continuous process.
1. What are the different scripting issues you have faced so far?
2. What performance bottlenecks did you find in the projects you worked on? What recommendations were made to overcome those issues?
3. Have you applied Little's Law in your project? If so, how?
4. What is your approach to analysis?
5. What do you monitor during execution?
6. How do you extract server data for test execution, and how do you analyze it?
7. How do you identify performance bottlenecks?
Key question areas are:
Challenges that you face during scripting
Correlation function

Error handling
Different recording modes for Web HTTP/HTML protocol.
Scenario creation
Challenges during execution
Analysis

See also => Performance Testing with LoadRunner


Below we have provided a few common LoadRunner interview questions and answers. However, please note that the best results are achieved by giving answers based on your own exposure, expertise, and interpretation of the concepts. Learning just the answers to questions is not always optimal. Practice, learn, and become an expert: this should be your approach to performance testing interview preparation.

LoadRunner Interview Questions and Best Answers


Q #1. What is the difference between Performance testing and Performance
engineering?
Ans => In performance testing, the testing cycle includes requirement gathering, scripting, execution, result sharing, and report generation. Performance engineering is a step beyond performance testing: after execution, the results are analyzed with the aim of finding the performance bottlenecks, and a solution is provided to resolve the identified issues.
Q #2. Explain Performance Testing Life Cycle.
Ans => Step 1: System Analysis (Identification of critical transaction)
Virtual User Generator
Step 2: Creating Virtual User Scripts (Recording)
Step 3: Defining User Behavior (Run-time settings)
LoadRunner Controller
Step 4: Creating Load Test Scenarios
Step 5: Running the Load Test Scenarios and Monitoring the Performance
LoadRunner Analysis
Step 6: Analyzing the Results
Refer Performance Testing Tutorial #2 for more details.
Q #3. What is Performance testing?
Ans => Performance testing is done to evaluate the application's performance under load and stress conditions. It is generally measured in terms of the response time of user actions on the application.
Q #4. What is Load testing?
Ans => Load testing determines whether an application can work well under the heavy usage resulting from a large number of users using it simultaneously. The load is increased to simulate the peak load that the servers will take during maximum usage periods.
Q #5. What are the different components of LoadRunner?
Ans => The major components of LoadRunner are:
VUGen: records Vuser scripts that emulate the actions of real users.
Controller: the administrative center for creating, maintaining, and executing load test scenarios. It assigns scenarios to Vusers and load generators, and starts and stops load tests.
Load Generator: an agent through which we can generate load.
Analysis: provides graphs and reports that summarize the system performance.
Q #6. What is the Rendezvous point?
Ans => A rendezvous point helps in emulating heavy user load (requests) on the server. It instructs Vusers to act simultaneously. When a Vuser reaches the rendezvous point, it waits for all the other Vusers with the same rendezvous point. Once the designated number of Vusers reaches it, the Vusers are released. The function lr_rendezvous is used to create the rendezvous point. It can be inserted by:
1. The Rendezvous button on the floating Recording toolbar while recording.
2. After recording, through Insert > Rendezvous.
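In the script it looks like this (a VuGen script fragment, not standalone C; the rendezvous name and URL are hypothetical):

```c
/* All Vusers pause here until the designated number arrive,
   then fire the next request simultaneously. */
lr_rendezvous("submit_order");
web_url("order", "URL=http://example.com/order", LAST);
```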
Q #7. What are the different sections of the script? In what sequence do these sections run?
Ans => A LoadRunner script has three sections: vuser_init, Action and vuser_end.
vuser_init has the requests/actions to log in to the application/server.
Action has the actual code to test the functionality of the application. It can be played many times in iterations.
vuser_end has the requests/actions to log out of the application/server.
The sequence in which these sections execute: vuser_init runs at the very beginning and vuser_end at the very end; Action is executed in between the two.
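A minimal skeleton of the three sections (VuGen fragments shown together for illustration; in VuGen each lives in its own file, and the URLs are placeholders):

```c
/* vuser_init - runs once per Vuser, e.g. login */
vuser_init() {
    web_url("login", "URL=http://example.com/login", LAST);
    return 0;
}

/* Action - runs once per iteration; the functionality under test */
Action() {
    web_url("search", "URL=http://example.com/search", LAST);
    return 0;
}

/* vuser_end - runs once at the end, e.g. logout */
vuser_end() {
    web_url("logout", "URL=http://example.com/logout", LAST);
    return 0;
}
```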
Q #8. How do you identify which protocol to use for any application?
Ans => Previously, performance testers had to depend heavily on the development team to know which protocol the application uses to interact with the server. Sometimes it was even speculative.
However, LoadRunner provides great help in the form of the Protocol Advisor, from version 9.5 onwards. The Protocol Advisor detects the protocols the application uses and suggests the possible protocols in which the script can be created to simulate a real user.
Q #9. What is correlation? Explain the difference between automatic correlation
and manual correlation?
Ans => Correlation is used to handle the dynamic values in a script. The dynamic value
could change for each user action (value changes when action is replayed by the same
user) or for different users (value changes when action is replayed with different user).
In both the cases correlation takes care of these values and prevents them from failing
during execution.
Manual correlation involves identifying the dynamic value, finding its first occurrence, identifying the unique boundaries around the dynamic value, and writing the correlation function web_reg_save_param before the request whose response contains the first occurrence of the dynamic value.
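A manual correlation sketch (a VuGen fragment; the parameter name and boundaries are hypothetical):

```c
/* Placed before the request whose response contains the dynamic value.
   LB/RB are the unique left/right boundaries around the value. */
web_reg_save_param("sessionId",
    "LB=session_id=",
    "RB=&",
    LAST);
/* The captured value is then used as {sessionId} in later requests. */
```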
Automated correlation works on predefined correlation rules. The script is played back and, on failing, scanned for auto-correlation; VuGen identifies the places where the correlation rules apply and correlates the values upon approval.
Refer this tutorial for more details.
Q #10. How to identify what to correlate and what to parameterize?
Ans => Any value in the script that changes on each iteration, or with different users, while replaying needs correlation. Any user input made while recording should be parameterized.
Q #11. What is parameterization & why is parameterization necessary in the
script?
Ans => Replacing hard-coded values within the script with a parameter is called parameterization. This lets a single virtual user (Vuser) use different data on each run. It simulates real-life usage of the application, as it prevents the server from simply serving cached results.
Refer this tutorial for more details.
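For example, a hard-coded username in a recorded form submission can be replaced with a parameter (a VuGen fragment; the URL, field and parameter names are hypothetical):

```c
/* Recorded: "Name=username", "Value=john".
   After parameterization, {username} is substituted from the
   parameter's data file on each iteration: */
web_submit_data("login",
    "Action=http://example.com/login",
    "Method=POST",
    ITEMDATA,
    "Name=username", "Value={username}", ENDITEM,
    LAST);
```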
Q #12. How do you identify the performance test use cases of any application?
Ans => Test cases/use cases for a performance test are almost the same as manual/functional test cases, where each and every step performed by the user is written down. The only difference is that not all manual test cases can be performance testing use cases, as there are a few selection criteria:
I. The user activity should relate to critical and important functionality of the application.
II. The user activity should involve a good amount of database activity, such as search, delete or insert.
III. The user activity should have good user volume. Functionality with little user activity, e.g. admin account activity, is generally omitted from the performance testing point of view.
Any manual test case that fulfills the above criteria can be used as a performance testing use case/test case. If manual test cases are not written step by step, the performance team should create dedicated documents for them.
Q #13. While scripting you created correlation rules for automatic correlation.
If you want to share the correlation rules with your team member working on
the same application so that he/she can use the same on his workstation, how
will you do that?
Ans => Correlation rules can be exported through a .cor file, and the same file can be imported through VuGen.
Q #14. What are the different types of Vuser logs which can be used while scripting and execution? What is the difference between these logs? When do you disable logging?
Ans => There are two types of Vuser logs available: Standard log and Extended log. Logs are key for debugging the script. Standard log creates a log of the functions and messages sent to the server during script execution, whereas Extended log additionally contains warnings and other messages. Logging is used during debugging and disabled during execution; once a script is up and running, logging can be enabled for errors only.
Q #15. What is Modular approach of scripting?
Ans => In the modular approach, a function is created for each request (e.g. login, logout, save, delete, etc.) and these functions are called wherever required. This approach gives more freedom to reuse requests and saves time. With this approach it is recommended to work with web_custom_request.
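A sketch of the modular approach (a VuGen fragment; the function names and URLs are hypothetical):

```c
/* Each request lives in its own function and is reused as needed. */
void login(void) {
    web_custom_request("login",
        "URL=http://example.com/login", "Method=POST", LAST);
}

void logout(void) {
    web_custom_request("logout",
        "URL=http://example.com/logout", "Method=GET", LAST);
}

Action() {
    login();
    /* ...business steps under test... */
    logout();
    return 0;
}
```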

Q #16. What are the different types of goals in a Goal-Oriented Scenario?
Ans => LoadRunner has five different types of goals in a Goal-Oriented Scenario. These are:
The number of concurrent Vusers
The number of hits per second
The number of transactions per second
The number of pages per minute
The transaction response time
Q #17. How is each step validated in the script?
Ans => Each step in the script is validated against the content of the returned page. A content check verifies whether specific content is present on the web page or not. There are two types of content checks that can be used in LoadRunner:
Text check: checks for a text/string on the web page.
Image check: checks for an image on the web page.
Q #18. How is the VuGen script modified after recording?
Ans => Once the script is recorded, it can be enhanced with the following:
Transaction
Parameterization
Correlation
Variable declarations
Rendezvous Point
Validations/Check point
Q #19. What is Ramp up and Ramp Down?
Ans => Ramp up: the rate at which virtual users are added to the load test.
Ramp down: the rate at which virtual users exit from the load test.
Q #20. What is the advantage of running Vusers as threads?
Ans => Running Vusers as threads helps generate more virtual users from a single machine, due to the small memory footprint of a Vuser running as a thread.
Q #21. What is wasted time in the VuGen Replay log?
Ans => Wasted time is time that would never be spent by a real browser user; it is the time spent on activities that support the test analysis. These activities relate to logging, record keeping, and custom analysis.
Q #22. How do you enable text and image checks in VuGen?
Ans => This can be done by using the functions web_find (for text check) and web_image_check (for image check), and enabling image and text checks from the run-time settings:
Run-Time Settings > Preferences > Enable the image and text check box.
Q #23. What is the difference between web_reg_find and web_find?
Ans => The web_reg_find function is processed before the request is sent and is placed before the request in the VuGen script, whereas the web_find function is processed after the response of the request comes and is placed after the request in the VuGen script.
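A placement sketch (a VuGen fragment; the URL and search text are hypothetical):

```c
/* web_reg_find registers the search BEFORE the request is sent... */
web_reg_find("Text=Welcome", LAST);
web_url("home", "URL=http://example.com/home", LAST);
/* ...whereas web_find runs on the response, AFTER the request. */
web_find("check", "What=Welcome", LAST);
```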
Q #24. What are the challenges you will face in scripting the steps "Select All" and then "Delete" for a mail account?
Ans => In this case, the POST data for "Select All" and "Delete" will change every time, depending on the number of mails available. For this, the recorded requests for the two steps should be replaced with custom requests, and string building is required to construct the POST data.
(Note: this question needs practical knowledge, so please try it practically and formulate your answer.)
Q #25. What is the difference between pacing and think time?
Ans => Pacing is the wait time between action iterations, whereas think time is the wait time between transactions.
Q #26. How many graphs can you monitor using the Controller at a time? What is the maximum?
Ans => One, two, four or eight graphs can be seen at a time. The maximum number of graphs that can be monitored at a time is 8.
Q #27. You have an application which shows the exam results of students. Next to the name of each student it is mentioned whether they passed or failed the exam, with the label "Pass" or "Fail". How will you identify the number of passed and failed students in the VuGen script?
Ans => For this, a text check is used on the web page for the texts Pass and Fail.
Through the function web_reg_find we can capture the number of matches found on the
web page with the help of SaveCount. SaveCount stores the number of matches
found. For example:
web_reg_find("Text=Pass",
    "SaveCount=Pass_Student",
    LAST);
web_reg_find("Text=Fail",
    "SaveCount=Fail_Student",
    LAST);
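A sketch of how those saved counts might then be read back in the script, assuming the LoadRunner runtime (the URL is a hypothetical placeholder; the parameter names follow the example above):

```c
/* web_reg_find must be registered before the request for the page. */
web_reg_find("Text=Pass", "SaveCount=Pass_Student", LAST);
web_reg_find("Text=Fail", "SaveCount=Fail_Student", LAST);

web_url("Results", "URL=http://example.com/results", LAST);  /* hypothetical page */

/* SaveCount values are stored as parameters; convert them to integers. */
lr_output_message("Passed: %d, Failed: %d",
                  atoi(lr_eval_string("{Pass_Student}")),
                  atoi(lr_eval_string("{Fail_Student}")));
```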
Q #28. During the load test what is the optimum setting for Logs?
Ans => For a load test the log level is set to minimal. This can be achieved by setting
the log level to Standard log and selecting the radio button Send messages only
when an error occurs.
Q #29. How will you handle the situation in scripting where for your mailbox
you have to select any one mail randomly to read?
Ans => For this we will record the script for reading the first mail and find out what is
being posted in the request to read it, such as mail ids or row numbers. From the
response where the list of mails appears, we will capture all the email ids / row numbers
with a correlation function, keeping Ordinal as All, i.e. ORD=All. Then we replace the
email id in the read request with one randomly selected from the list of captured
email ids.
Refer to this Scripting Tutorial.
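A minimal sketch of this approach, assuming the LoadRunner runtime (the boundaries, URLs and parameter names are hypothetical):

```c
/* Capture every mail id on the inbox page (Ord=All). */
web_reg_save_param("MailID", "LB=msgid=", "RB=\"", "Ord=All", LAST);
web_url("Inbox", "URL=http://example.com/inbox", LAST);

/* Pick one captured id at random and use it in the read request. */
lr_save_string(lr_paramarr_random("MailID"), "PickedMail");
web_url("ReadMail", "URL=http://example.com/read?msgid={PickedMail}", LAST);
```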
Q #30. What is Think Time? What is the threshold level for think time and
how can it be changed?
Ans => Think time is the wait time inserted intentionally between actions in the
script to emulate a real user's wait time while performing an activity on the application.
The threshold level for think time is the level below which recorded think time will be
ignored. This can be changed from Recording Options -> Script -> Generate think time
greater than threshold.
Q #31. How is Automated Correlation configured?
Ans => Settings related to Automated Correlation are configured under General
Options -> Correlation. Correlation rules are set from Recording Options -> Correlations.

Q #32. How you decide the number of load generator machine required to run a
test?
Ans => The number of load generators required depends entirely on the protocol used to
create the script and on the configuration of the load generator machine. Each protocol
has a different memory footprint, and this decides how many virtual users can be
generated from a given machine (load generator) configuration.
Q #33. What are the capabilities exactly you look for while selecting the
performance testing tool?
Ans => A performance testing tool should be capable of:
Testing an application built using multiple technologies and hardware platforms.
Determining the suitability of a server for testing the application.
Testing an application with a load of hundreds, thousands and even tens of thousands
of virtual users.
Q #34. How do concurrent users differ from simultaneous users?
Ans => All simultaneous users are concurrent users, but the vice versa is not true.
All the vusers in a running scenario are concurrent users, as they are using the same
application at the same time but may or may not be doing the same tasks.
Simultaneous users perform the same task at the same time. Concurrent users are made
simultaneous users through rendezvous points. Rendezvous points instruct the system
to wait till a certain number of vusers arrive so that they all can do a particular task
simultaneously.
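In a VuGen script a rendezvous point is a single call placed before the step that all vusers should perform together; the rendezvous and transaction names below are illustrative, and the release policy is configured in the Controller:

```c
/* All vusers wait here until the Controller's rendezvous policy releases them. */
lr_rendezvous("submit_order");

lr_start_transaction("submit_order");
web_submit_data("order", /* ... recorded request details ... */ LAST);
lr_end_transaction("submit_order", LR_AUTO);
```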
Q #35. How do you identify which values need to be correlated in the script?
Give an example.
Ans => This can be done in two ways:
a) Record two scripts with similar steps and compare them using the WDiff utility (see
the Correlation tutorial).
b) Replay the recorded script and scan for correlations. This gives a list of values that can
be correlated.
A session id is a good example. When two scripts are recorded and compared using the
WDiff utility, the session ids in the two scripts will be different, and WDiff highlights these
values.
Q #36. How does caching affect performance testing results?
Ans => When data is cached in the server's memory, the server need not fetch the result
and no server activity is triggered. The test result then does not reflect the real
performance of users accessing the application with different data.
Q #37. How will you stop the execution of script on error?
Ans => This can be achieved through the lr_abort function. The function instructs the
vuser to stop executing the Action section and to end the run by executing the vuser_end
section. It is helpful in handling a specific error, and can also be used in situations other
than errors where further execution is not possible. The function assigns the Stopped
status to a vuser that stopped due to lr_abort. In the Run-Time Settings, Continue on
error should be unchecked.
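A sketch of stopping on error, assuming the LoadRunner runtime (the page check, URL and messages are illustrative):

```c
/* Verify the login page actually arrived before continuing. */
web_reg_find("Text=Welcome", "SaveCount=Login_OK", LAST);
web_url("Login", "URL=http://example.com/login", LAST);  /* hypothetical */

if (atoi(lr_eval_string("{Login_OK}")) == 0) {
    lr_error_message("Login page check failed - aborting vuser");
    lr_abort();  /* runs vuser_end and sets the vuser's status to Stopped */
}
```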

LoadRunner Interview Questions - 1
1) What is Performance Testing?
The process of testing to determine the performance of a software product.
2) What is Load Testing?
A type of performance testing conducted to evaluate the behavior of a component or
system with increasing load, e.g. numbers of parallel users and/or numbers of
transactions, to determine what load can be handled by the component or system.
3) What is Stress Testing?
A type of performance testing conducted to evaluate a system or component at or
beyond the limits of its anticipated or specified work loads, or with reduced availability of
resources such as access to memory or servers.
4) What is Spike Testing?
Verifying the system's performance under sudden increments and decrements of load.
5) What is Data Volume Testing?
Testing where the system is subjected to large volumes of data.
6) What is Endurance Testing?
Verifying the system's performance under continuous load, in terms of users and
transactions.
7) What is LoadRunner?
It is a Performance Test Tool from HP. It supports all aspects of Performance Testing like
Load, Stress, Endurance, Spike and Data Volume testing.
8) What are the tools available in the industry for Load Testing?

LoadRunner from HP
RPT (Rational Performance Tester) from IBM
Silk Performer from Micro Focus
JMeter (open source tool), etc.

9) What is the latest version of LoadRunner?
LoadRunner 11.5
10) What is the scripting language that is used in LoadRunner?
Vuser script (it is a C-like language)
11) What are the 4 important components in LoadRunner?
Virtual User Generator (VUGEN)
Controller
Load Generator
Analysis
12) How do you identify the performance bottlenecks?
Performance Bottlenecks can be detected by using monitors. These monitors might be
application server monitors, web server monitors, database server monitors and network
monitors. They help in finding out the troubled area in our scenario which causes
increased response time. The measurements made are usually response
time, throughput, hits/sec, network delay graphs, etc.
13) If web server, database and Network are all fine where could be the
problem?
The problem could be in the system itself or in the application server or in the code
written for the application.
14) How did you find web server related issues?
Using Web resource monitors we can find the performance of web servers. Using these
monitors we can analyze throughput on the web server, number of hits per second that
occurred during scenario, the number of http responses per second, the number of
downloaded pages per second.
15) How did you find database related issues?
By running the Database monitor and with the help of the Data Resource Graph we can
find database related issues. E.g. you can specify the resource you want to measure
before running the Controller, and then you can see the database related issues.
16) Explain all the web recording options?

17) What is the difference between Overlay graph and Correlate graph?
Overlay Graph:
It overlays the contents of two graphs that share a common x-axis. The left y-axis on the
merged graph shows the current graph's values and the right y-axis shows the values of
the graph that was merged.
Correlate Graph:
Plots the y-axis values of two graphs against each other. The active graph's y-axis
becomes the x-axis of the merged graph, and the y-axis of the graph that was merged
becomes the merged graph's y-axis.
18) How did you plan the Load? What are the Criteria?
A load test is planned to decide the number of users, what kind of machines we are going
to use and from where they are run. It is based on two important documents, the Task
Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us
information on the number of users for a particular transaction and the time of the load;
the peak usage and off-usage are decided from this diagram. The Transaction Profile
gives us information about the transaction names and their priority levels with regard to
the scenario we are designing.
19) What does vuser_init action contain?
The vuser_init action contains procedures to log in to a server.
20) What does vuser_end action contain?
The vuser_end section contains log-off procedures.
21) What is Performance Test Tool?
A tool to support performance testing that usually has two main
facilities: load generation and test transaction measurement.
Load generation can simulate either multiple users or high volumes of input data. During
execution, response time measurements are taken from selected transactions and these
are logged.
Performance testing tools normally provide reports based on test logs and graphs of load
against response times.
22) What are the phases in LoadRunner Test Process?
I) Planning the Test
II) Creating VUser Scripts
III) Creating the Scenario
IV) Running the Scenario
V) Monitoring the Scenario
VI) Analyzing Test Results
23) How does LoadRunner interact with a software application?
LoadRunner interacts with software applications based on protocols.
24) What is Protocol?
A set of rules that enable Computer devices to connect and transmit data to one another.
Protocols determine how data are transmitted between computing devices and over
networks.
25) What are the important Protocol Bundles that LoadRunner supports?
LoadRunner Supporting Protocol Bundles
.NET Record/Replay
Database
DCOM
Network
Oracle E-Business
Remote Access
Rich Internet Applications
SAP
SOA
Templates
Web and Multimedia
Wireless
GUI
Java Record/Replay
Remote Desktop

Web 2.0

LoadRunner Interview Questions and Answers - 2
1) What is the extension of LoadRunner scenario file?
Extension of LoadRunner scenario file is .lrs
2) How many areas we can do the correlation?
Areas of correlation are:
1) ItemData
2) TimeStamp
3) Links
4) Check Boxes
5) List Buttons
6) Radio Buttons
3) Tell something about LoadRunner?
1) LoadRunner is the industry standard automated performance and load
testing tool.
2) HP acquired LoadRunner as part of its acquisition of Mercury
Interactive.
3) Using LoadRunner one can emulate hundreds and thousands of virtual
users for performance and load testing.
4) LoadRunner supports a wide range of industry standard applications for
load testing.
4) What are the features of HP LoadRunner?
The key features of HP LoadRunner are as follows:
1. TruClient technology that simplifies and accelerates scripting for
complex Rich Internet applications.
2. Enterprise load generation that applies measurable and repeatable
loads while monitoring systems and end-user transactions to identify
issues.
3. Powerful analysis and reporting capabilities that help isolate
performance bottlenecks quickly and easily.
4. Integrated diagnostics that help pinpoint the root causes of
application-level issues down to the code level.
5) What is a virtual user or VUser in LoadRunner?
A virtual user or vuser emulates real user steps. The real user steps are
recorded as a test script.
During recording, user steps (like posting requests or accessing pages)
are captured in the test script. When the test script is played back, it
performs the same user actions. This emulation of a real user by playing
back the script is called a virtual user or vuser.
Vusers are created as processes or threads in LoadRunner to simulate
multiple users.
6) What are the LoadRunner components?
LoadRunner has mainly 4 components:
1. LoadRunner VuGen - Virtual user generator - used for scripting
purpose.
2. LoadRunner Controller - used for load test execution and monitoring
purpose.
3. LoadRunner Load Generator - used for generating the load of multiple
virtual users.
4. LoadRunner Analysis - used for analysis and reporting purpose.
7) What is a transaction in LoadRunner?
A transaction is defined as the response time of one or more user
steps.
Transactions in LoadRunner are used for measuring the response time of
user steps. If one has to measure the response time of one or more
pages, transaction statements are inserted around the appropriate
steps:
lr_start_transaction("trans1");
Step 1
Step 2
lr_end_transaction("trans1", LR_AUTO);
lr_start_transaction("trans2");
Step 3
lr_end_transaction("trans2", LR_AUTO);
Step 4
trans1 and trans2 are transaction names. Their response times are
measured periodically and displayed in graphs in the Controller during
load test execution, and those transaction response times can be
analyzed in LoadRunner Analysis after the load test execution.
8) What is think time in LoadRunner?
Think time is nothing but the user delay between two subsequent
requests.
Assume that a user opens page1 and fills in data on it. If the user
spends 10 seconds filling the page and then submits it, after which
page2 is loaded, the 10-second user wait between page1 and page2 is
called think time.
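In a script this delay is emulated with lr_think_time; the delay actually applied at replay is governed by the Run-Time Settings. The URLs and step names below are hypothetical, and the 10-second figure mirrors the example above:

```c
web_url("page1", "URL=http://example.com/page1", LAST);  /* hypothetical page */

lr_think_time(10);  /* emulate the user spending 10 s filling in page1 */

web_submit_form("page1_submit", LAST);  /* the submit that leads to page2 */
```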
9) What is tuning in LoadRunner? How is this tuning option used in a
project?
There are different kinds of tuning, like DB tuning, network tuning,
server tuning and so on.
10) What is VUGen?
VuGen stands for Virtual User Generator. VuGen is used to generate
vuser scripts (here we record a business operation performed by a single
user).
11) What is the analyzer in LoadRunner?
This gives you the results of the LoadRunner test. These results can be
viewed as graphs and reports.
12) Do we see much difference in load testing for web
applications versus traditional software?
Yes. From our own experience: in traditional applications, the developers
know more about how it all works, and if it's in-house then the
developers are easy to access and know the environment. With Web and
CMS, there is so much the developers don't know about; this is
especially true when they integrate out-of-the-box solutions. Many
software developers now use solutions they buy from someone else, and
those vendors are slow to turn around bugs, functional or
non-functional. Obviously this depends on a lot; traditional applications
can also fall into these traps, but it is more common with Web.
13) What are some of the most common web app bottlenecks that
you find and/or fix?
Some of the most common web app bottlenecks that we find and/or fix
are: a misconfigured server, a poor performing stored procedure, or the
application itself. It is less often down to infrastructure, as most places
we have been spend a lot of money in this area.
14) Have you ever measured application performance re-engineering
impact by operations cost reduction?
Yes. We tested a work-flow application where the speed of the
application had an impact on both the productivity of a large portion of
the workforce and the performance of the helpdesk staff, which had a
direct effect on their reputation.
15) What do you think is the most important aspect of load
testing?
Most important aspect of load testing:
As mentioned, it is getting the scenario right and answering the business
question. There is no point telling them "it breaks at 1000 users" when
they have 100 people working there and they were concerned only with
network latency. (A simplistic example, as we know.)
16) What kind of applications LR tests?
LR tests Client / Server & Web based applications.
17) What is correlation?
Correlation is used to obtain data which is unique for each run of the
script and which is generated by nested queries. Correlation provides
the value to avoid errors arising out of duplicate values and also
optimizes the code (to avoid nested queries). Automatic correlation is
where we set some rules for correlation; it can be application server
specific. Here values are replaced by data which is created by these
rules. In manual correlation, the value we want to correlate is scanned
and Create Correlation is used to correlate it.
19) How do you find out where correlation is required?

In two ways we can find out where correlation is required.


First we can scan for correlations, and see the list of values which can be
correlated. From this we can pick a value to be correlated.
Secondly, we can record two scripts and compare them. We can look up
the difference file to see for the values which needed to be correlated.
20) Why do you create parameters?
Parameters are like script variables. They are used to vary input to the
server and to emulate real users. Different sets of data are sent to the
server each time the script is run. This better simulates the usage
model, allowing more accurate testing from the Controller: one script
can emulate many different users on the system.
21) What is the controller in LoadRunner?
The most important and critical component of LoadRunner is the
Controller. LoadRunner uses the Controller to emulate real-time users.
This is where we configure our scenario settings, like the scripts to be
executed, the number of vusers, load generators, run-time settings, load
test duration, etc.
22) What is the use of Scheduler?
We can use the LoadRunner Scheduler to set up a scenario to run
automatically.
23) What are the reasons why parameterization is necessary
when load testing the Web server and the database server?
Parameterization is useful in performance scripts for various reasons:
1. We can use different data in scripts dynamically.
2. When URLs of the AUT are parameterized, it becomes easy for the
script to point to different application environments, i.e. Dev, QA or
Prod, depending upon the requirements.
3. Parameterizing helps in emulating a real scenario as it avoids caching
effects. If we send the same data again and again while running the
script in iterations, the data could be served from a cache or from a
temporary table in the database. If we send different data in each
iteration, the real performance transaction timings can be measured.
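A sketch of what a parameterized request looks like in VuGen; the URL, form fields and parameter names are illustrative assumptions, with {UserName} and {Password} defined in the script's parameter list:

```c
/* A hard-coded recorded value such as "Value=jdoe" is replaced with a
   parameter so each iteration sends different data to the server. */
web_submit_data("login",
    "Action=http://example.com/login",   /* hypothetical URL */
    "Method=POST",
    ITEMDATA,
    "Name=username", "Value={UserName}", ENDITEM,
    "Name=password", "Value={Password}", ENDITEM,
    LAST);
```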
24) What is the difference between hits/second and
requests/second?

Hits per second means the number of hits the web server receives in one
second from the vusers, and requests per second is the number of
requests the vusers make to the server.
25) What is the advantage of using LoadRunner?
Advantages are:
1. With the help of vusers, it reduces the need for human users.
2. Reduces the requirement for systems.
3. Helps in better usage of time and money.
4. Effective utilization of automation.
5. Everything is done from a single point.

LoadRunner Interview Questions and Answers - 3
1) How do you identify the performance bottlenecks?
Performance Bottlenecks can be detected by using monitors. These monitors might be
application server monitors, web server monitors, database server monitors and network
monitors. They help in finding out the troubled area in our scenario which causes
increased response time. The measurements made are usually response
time, throughput, hits/sec, network delay graphs, etc.
2) What are the tools available in the industry for Load Testing?
HP-LoadRunner
IBM - RPT (Rational Performance Tester)
Micro Focus - Silk Performer
JMeter (Open source Tool)
QA WebLoad (RadView)
Etc...
3) What are the considerable factors in Performance Result Analysis using
LoadRunner?
Performance Bench Marks
Local System Configuration

Network communicators
Server response
4) How to identify the memory leakage using Loadrunner?
In LoadRunner, every application has a process running in the system, and this
process needs to be identified. Using the performance tab we can check the memory
consumption of the process. Continuous tracking needs to be done while load testing.
If the memory keeps increasing even on stopping the test, or if the memory is not
released on stopping the test, a memory leak may have occurred.
5) How do we debug a LoadRunner script?
Debugging a LoadRunner script:
VuGen contains two options to help debug vuser scripts: the Run Step by Step command
and breakpoints. The Debug settings in the Options dialog box allow us to determine the
extent of the trace to be performed during scenario execution. The debug information is
written to the Output window. We can manually set the message class within the script
using the lr_set_debug_message function. This is useful if we want to receive debug
information about a small section of the script only.
6) What are the performance tester roles and responsibilities using loadrunner?
There are no LoadRunner-specific roles and responsibilities. They are the roles of a
performance engineer, and the roles do not differ whatever tool you use for
performance testing.
7) When do you do load and performance Testing?
We perform load testing once we are done with interface (GUI) testing. Modern system
architectures are large and complex. Whereas single-user testing focuses primarily on
the functionality and user interface of a system component, application testing focuses
on the performance and reliability of an entire system. For example, a typical
application-testing scenario might depict 1000 users logging in simultaneously to a
system. This gives rise to issues such as: what is the response time of the system, does
it crash, will it work with different software applications and platforms, can it hold so
many hundreds and thousands of users, etc. This is when we do load and performance
testing.
8) Do you feel like performance testing is an accepted critical part of the
development life cycle?
It is getting that way, yes. With more and more crashes getting exposure in the news,
and with it happening more and more, it has become a critical part of testing.
10) What are the key KPIs you track for performance testing and tuning?
The key KPIs that we track for performance testing are transaction response time,
memory usage, disk space, and CPU time.
Tuning: Network delay and stored procedure times.

11) What is Throughput?


Basically, throughput is the amount of transactions produced over time during a test.
It is also expressed as the amount of capacity that a website or application can handle.
Before starting a performance test it is common to have a throughput goal, e.g. that the
application needs to be able to handle a specific number of requests per hour.
12) What is peak load testing?
Peak load is the maximum number of concurrent users that are on a website within a
certain time period. For example, if you own a retail website, your peak load during any
given week is most likely to be on the weekend. It would also follow that the
Thanksgiving and Christmas holiday season is your busiest time overall.
13) What is the focus of Performance testing?
The focus of performance testing is checking a software program's:
Speed - determines whether the application responds quickly.
Scalability - determines the maximum user load the software application can handle.
Stability - determines if the application is stable under varying loads.
14) What is the goal of performance testing?
The goal of performance testing is not to find bugs but to eliminate performance
bottlenecks.
15) Can anybody give me example for load testing?
Load testing simulates a real-time user load on the application under test.
For example: the search functionality on a website.
If a website needs to accommodate a certain number of simultaneous users, for instance
1000 people using the search engine at the same time, then the site should be tested for
that load before putting the website into production. Each of these 1000 visitors can
access the website with different browsers, different versions of the same browser and
different machines on different platforms. Their connections can range from
high-bandwidth data lines to dial-up. Load testing is designed to verify that the website
can handle the expected load.
16) What is Endurance Testing?
Endurance testing means testing with the expected user load sustained over a longer
period of time, with normal ramp-up and ramp-down times, to identify memory leaks.
17) What is Volume Testing?
Volume testing is typical load testing, except that a large volume of data is populated
in the database to study its impact on the application response time and on the overall
health and behavior of the database at various DB volumes.
E.g. checks for accumulated counts, logs and data files.
18) What is Scalability Testing?

Scalability testing is a process of evaluating the system's behaviour when the number of
simultaneous users is increased while the hardware and software resources are fixed.
This testing is conducted to compare the response times and system resource
utilization of the AUT as the number of users is increased.
19) What are the results reported after capacity Testing?
Results reported after capacity testing are:
The hardware resource related bottlenecks.
Recommendations on new hardware.
Upgrades of existing hardware.
Changes in the existing application deployment architecture to support future
growth.
Introduction of new servers into the application deployment architecture.
20) What are the results reported after Stress Testing?
Results reported after stress testing are:
1. The maximum number of users supported by the AUT.
2. Assume that the system resources were utilized beyond the expected limits; after
the stress test, when the normal number of users is running, the status of the
application in terms of resource utilization will be reported.
3. Assume that many errors appeared while the stress testing was being
conducted; after the stress test, when the normal number of users is running, the
status of the application in terms of errors will be reported.
21) What are the disadvantages of using commercial performance / load
testing tools?
Disadvantages of using commercial performance/load testing tools:
We need to understand the need for any commercial tool with respect to the kind of
technologies we use in our organisation. A tool that does not support those technologies
is a waste of time and money. If it fits our requirements, there is a considerable ROI.
22) How do you test an application if it is going production for the first time?
For testing the application, you need to have the basic scenarios done first.
Second step will be to do the End to End testing with E2E scenarios.
Third test will be to do the rigorous testing.
Final step will be to do the load testing.
23) Why should we automate the performance testing?
It's a discipline that leverages products, people and processes to reduce the risk of
application, upgrade or patch deployment. It is about applying production workloads to
pre-deployment systems while simultaneously measuring system performance and
end-user experience.
24) What are the results reported after the Endurance/Longevity/Soak
Testing?
Results Reported after the Endurance / Longevity / Soak Testing Test:

When the endurance test is conducted on a multi-tier, web-based, enterprise-level
application, the following kinds of results will be reported:
Memory leaks on the application servers
JVM heap size utilization on the application servers
Connection leaks on the database server
Cursor leaks on the database servers
Response time (consistency or degradation) comparison from the start of the load test
to the end of the load test
System resource (memory, CPU, network and disk usage, etc.) comparison from the
beginning of the load test to the end of the load test
Application errors occurring over the period of time
25) What are all the things will be considered while doing performance testing?
Does the application respond quickly enough for the intended users?
Will the application handle the expected user load and beyond?
Will the application handle the number of transactions required by the business?
Is the application stable under expected and unexpected user loads?
Are we sure that users will have a positive experience on go-live day?
26) What are the results reported after the load Testing?
Results reported after the load test:
The system will be validated to ensure that the service level agreements or
performance objectives are met.
Average, max, min and standard deviation of response times for each scenario will be
measured and reported.
Resource utilization of each of the systems which are part of the AUT will be monitored
and reported.
If there is any application break point below the peak load condition, it needs to be
identified and reported.
27) What are the results reported after Spike Testing?
Results reported after spike testing:
System resource utilization comparison with and without spikes.
Response time comparison with and without spikes.
Observations on errors with and without spikes.
28) What is remote command launcher?
The remote command launcher enables the controller to start applications on the host
machine.
29) How to determine the Stress Point?
Determining the stress point. The stress point is reached when:
Transaction response times increase exponentially
The application starts throwing errors for many users
The system stops responding
At least one of the servers in the AUT architecture crashes
System resource utilization goes beyond the acceptable limits

30) What are the results reported after Scalability Testing?


Results reported after scalability testing:
Comparison charts of different numbers of users and their response times.
Comparison charts of system resource utilization for different numbers of users.
Scalability issues as the number of users is incremented.
Identification of the scalable point of the application.