
Performance Analysis

for WebSphere Web Sites

Web Site Performance
Start-to-Finish Approach

• What are the Web Site’s Performance Requirements?

• Performance Validation

• Analyzing Performance Data

Why Invest in Performance?
What’s the Worst That Could Happen?

• Site cannot support traffic


• Bloated applications, insufficient hardware, etc.
• Site does not function within budget
• May require more hardware and software licenses
• Significant additional cost for extremely poor performance
• Impacts
• Customer satisfaction
• Site growth projections
• Legal implications (depending on the business)

• Too much capacity is expensive


• Drains funding from other projects
• Extra capacity often goes undetected in production

Web Site Performance
The Virtual City

• Traditional PC applications support a single user

• Web applications
• Support hundreds or thousands of concurrent users
• Virtual cities of people sharing resources simultaneously
• Yet unaware of each other

• Web applications developed under the thick client model


• Individual developers, single-user function testing

• Performance testing adds the next dimension


• Can the Web site support the potential crush of users?

Web Site Performance
What does the Web site need to do well?

• What are the key measurements for the Web site?


• Page rate
• Logged-in users (users with session data)
• Response time

• What are some typical activities for the users?


• Browsing
• Searching
• Viewing accounts, etc.

Web Site Performance
Mythical “One Size Fits All” Test

• Throughput Test
• Drive the system to 100% CPU utilization
• Measure maximum throughput achieved

• Pros
• Requires few virtual user licenses
• Models maximum throughput well (page rate)

• Cons
• Does not model logged-in user memory pressure well
• HttpSessions, LTPA tokens, etc.
• Difficult to determine potential “real users” supported

• Results
• Over-estimates capacity requirements
• Inappropriate system tuning

Performance Test Planning
What are the Priorities?

• Brokerage Sites
• High throughput
• Competitive response time requirements
• Many logged-in users

• B2B
• Low throughput
• Generous response times
• Many logged-in users

• Prioritize the site’s performance characteristics


• Formulate tests based on these priorities

Performance Test Planning
Portals behave differently too…

• B2E Portal
• Employees logged in all day
• Login burst in the mornings

• Default browser homepage for all employees


• “Pass-through” page

• Traffic may be bursty or geared to certain functions


• News articles in the morning
• Company phonebook application all day

• B2C Portal
• User visits usually last < 30 minutes
• Traffic bursts during morning and lunchtime
• Users are task-oriented

Web Site Performance
Defining “users”

• Potential users: no impact on the system
• Active users: memory only
• Current requestors: CPU, memory, network, etc.

Web Site Performance
Plan for Peak

• Always develop tests around peak load


• If the system doesn’t support peak, it does not function

• Often a big difference between “peak” and “average” loading

[Chart: Web Site Traffic Arrival, requests/second over time, showing peak load well above average load]
Web Site Performance
Request bursts

• What if the traffic does not distribute linearly at peak?


• Can 6K active users => 6K concurrent requests?

• Queuing within the system handles micro-bursts


• Traffic queued briefly until processed

• Queues exist at various levels


• Plug-in
• Connection pools
• etc.

• Absorbs minor variations in loading
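As a rough illustration of how queuing absorbs micro-bursts, the sketch below (generic Java, not WebSphere configuration; the pool and queue sizes are made-up values) shows a small fixed thread pool with a bounded request queue: a burst of arrivals is queued briefly and drained at the pool's service rate instead of demanding one execution resource per concurrent request.

    import java.util.concurrent.*;

    public class MicroBurstQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            // Hypothetical sizing: 10 worker threads, up to 100 queued requests.
            // Real systems queue at the plug-in, web container, connection pools, etc.
            ExecutorService workers = new ThreadPoolExecutor(
                    10, 10, 0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<Runnable>(100));

            // Simulate a micro-burst: 60 requests arrive at essentially the same instant.
            for (int i = 0; i < 60; i++) {
                final int requestId = i;
                workers.execute(() -> {
                    try {
                        Thread.sleep(50);   // pretend each request takes ~50 ms of work
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("Completed request " + requestId);
                });
            }

            // The burst drains in roughly (60 / 10) * 50 ms = 300 ms rather than
            // requiring 60 concurrent threads; only sustained overload backs up the queue.
            workers.shutdown();
            workers.awaitTermination(10, TimeUnit.SECONDS);
        }
    }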

Web Site Key Metrics
Consistent Terminology

• Use terminology consistently in your performance discussions

• Consider the difference between “requests” and “users”

• 100K requests/day (assuming peak 5x greater than avg.)


• Peak requirement: 18 requests/second

• 100K users/day (same 5x peak)


• 10 minute visit with 5 requests per visit
• Peak requirement: 88 requests/second
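The arithmetic behind those two figures can be reproduced with a back-of-envelope calculation. The sketch below assumes the daily traffic arrives over roughly an 8-hour business day, an assumption that matches the slide's numbers rather than something stated explicitly above:

    public class PeakRateEstimate {
        public static void main(String[] args) {
            double businessDaySeconds = 8 * 3600;   // assumed 8-hour arrival window
            double peakToAverage = 5.0;             // peak is 5x the average rate

            // Case 1: 100K requests/day
            double avgRequestRate = 100_000 / businessDaySeconds;           // ~3.5 req/s
            System.out.printf("Peak requests/s (100K requests/day): %.0f%n",
                    avgRequestRate * peakToAverage);                        // ~17-18 req/s

            // Case 2: 100K users/day, each visit generates 5 requests.
            // The 10-minute visit length drives concurrent-user (session) counts,
            // not the request rate itself.
            double requestsPerDay = 100_000 * 5;                            // 500K requests
            double avgUserDrivenRate = requestsPerDay / businessDaySeconds; // ~17 req/s
            System.out.printf("Peak requests/s (100K users/day):    %.0f%n",
                    avgUserDrivenRate * peakToAverage);                     // ~87-88 req/s
        }
    }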

Web Site Key Metrics
Network Capacity

• Network capacity is key for performance test planning

• Consider data transmissions and network overhead


• Network overhead accounts for 20% of capacity

• Page size includes


• HTML content
• Any static elements (gifs, jpegs, javascript, etc.)
• Back-end data transfers (database, external systems, etc.)

• Don’t forget client network

• Use isolated subnet for testing whenever possible
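As a pen-and-paper illustration of how these factors combine, the sketch below estimates the network demand for a hypothetical workload; the page rate, page weight, and back-end transfer sizes are invented for the example, and only the 20% overhead figure is taken from above:

    public class NetworkCapacityEstimate {
        public static void main(String[] args) {
            // Hypothetical workload: 50 pages/second at peak,
            // 60 KB average page weight (HTML plus static elements).
            double pagesPerSecond = 50;
            double pageWeightKB = 60;
            double backendKBPerPage = 10;     // database / external system transfers
            double overheadFactor = 1.20;     // ~20% network overhead

            double kbPerSecond = pagesPerSecond
                    * (pageWeightKB + backendKBPerPage) * overheadFactor;
            double megabitsPerSecond = kbPerSecond * 8 / 1000;   // KB/s -> Mbps (approx.)

            System.out.printf("Estimated traffic: %.0f KB/s (~%.0f Mbps)%n",
                    kbPerSecond, megabitsPerSecond);
            // 50 * 70 * 1.2 = 4200 KB/s, roughly 34 Mbps: already a large share
            // of a 100 Mbps Fast Ethernet segment, so the client network matters too.
        }
    }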

Web Site Performance
Start-to-Finish Approach

• What are the Web Site’s Performance Requirements?

• Performance Validation

• Analyzing Performance Data

Performance Validation
Develop Realistic Test Scenarios

• What are the typical activities for your users?


• Browsing only
• Browsing and adding items (how many?) to a cart
• Purchasing the items in the cart
• Checking order status

• What percentage of their time is involved in each activity?

• Performance test scripts should contain the right “mix”

Performance Validation
Develop a Realistic Test Mix

Scenario              Weight   Sample Size
Browsing              60%      10,000 items
Check Order            5%       3,000 orders
Shop & Purchase        5%      10,000 items
Shop w/o Purchasing   30%      10,000 items

• Test represents normal activity
• Randomly select data elements
• Avoid data caching!
• Test DB reflects production DB
• Size
• Elements
• Registered users
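A minimal sketch of how a load driver could honor this mix while avoiding data caching: scenarios are picked according to the weights in the table, and item IDs are drawn at random from the 10,000-item sample population. The class and method names are illustrative, not taken from any particular test tool.

    import java.util.Random;

    public class TestMixSelector {
        private static final Random RANDOM = new Random();

        // Scenario weights from the mix table (must sum to 100).
        private static final String[] SCENARIOS =
                {"Browsing", "Check Order", "Shop & Purchase", "Shop w/o Purchasing"};
        private static final int[] WEIGHTS = {60, 5, 5, 30};

        static String nextScenario() {
            int roll = RANDOM.nextInt(100);          // 0..99
            int cumulative = 0;
            for (int i = 0; i < SCENARIOS.length; i++) {
                cumulative += WEIGHTS[i];
                if (roll < cumulative) {
                    return SCENARIOS[i];
                }
            }
            return SCENARIOS[SCENARIOS.length - 1];  // unreachable if weights sum to 100
        }

        // Random data selection defeats caching: each virtual user works with
        // a different item out of the 10,000-item sample population.
        static int nextItemId() {
            return RANDOM.nextInt(10_000);
        }

        public static void main(String[] args) {
            for (int i = 0; i < 5; i++) {
                System.out.println(nextScenario() + " using item " + nextItemId());
            }
        }
    }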

Performance Validation
Selecting a test tool

• Test scripts represent an investment


• Market-leading tools are good, but expensive
• Freeware tools are not suitable for large-scale testing

• Test tools vs monitoring tools


• Test tools support performance validation
• High-end tools usually include system monitoring
• Map system behavior to test runs
• Not used for live monitoring

• Monitoring tools watch production systems


• Alert thresholds
• Active warnings and indicators

Performance Validation
Some criteria…

• Does the tool reflect typical browser behavior?


• Static element caching / cache flush
• SSL support
• Cookie management
• Dynamic data selection

• Does the tool produce the necessary measurements?


• Can the writer/user specify different measurement points?
• Does the tool report interesting measurements?
• Can you export data for additional manipulation?

• Can it meet special requirements of your web site?


• Example: WebSphere Portal (WP) request IDs

Performance Validation
Writing the Test Scripts

• Double-check “recorded” test scripts


• Think times
• Are think times hard-coded in the script?
• Can the tester change think times globally?

• Are cookies hard-coded in the script?

• Does the script cover the desired content?

• Add dynamic data elements to recorded scripts

• Scripts should cover a reasonable flow of pages


• Not too many or too few pages (5-7 pages is typical)

• Write multiple scripts to cover various user activities

Performance Validation
Think Time

• Think time
• Adding think time better simulates actual users
• May require more virtual users/client hardware

• Removing think time provides more system pressure


• Higher arrival rate
• May require fewer virtual users to reach the target transaction rate
• May not adequately represent Session loading

• Staggered Start vs. Synchronized “Burst”


• Few site types receive large “bursts per second”

• Staggering the arrival rate best represents actual traffic
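Little's Law ties these quantities together: active virtual users ≈ arrival rate × (response time + think time). The sketch below uses hypothetical numbers to show why removing think time lets far fewer virtual users generate the same request rate, at the cost of under-representing session load:

    public class ThinkTimeSizing {
        public static void main(String[] args) {
            double targetRequestsPerSecond = 50;   // hypothetical target page rate
            double responseTimeSeconds = 2;        // hypothetical average response time
            double thinkTimeSeconds = 15;          // hypothetical average think time

            // Little's Law: users in the system = arrival rate * time per request cycle
            double usersWithThinkTime =
                    targetRequestsPerSecond * (responseTimeSeconds + thinkTimeSeconds);
            double usersWithoutThinkTime =
                    targetRequestsPerSecond * responseTimeSeconds;

            System.out.printf("Virtual users needed with 15s think time: %.0f%n",
                    usersWithThinkTime);      // 50 * 17 = 850 users (and 850 sessions)
            System.out.printf("Virtual users needed with no think time:  %.0f%n",
                    usersWithoutThinkTime);   // 50 * 2  = 100 users (only 100 sessions)
        }
    }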

Performance Validation
Resources

• Test clients require resources


• Client machines should run at < 75% CPU busy
• Dedicate a machine for test management function
• Overburdened machines generate bad data

• Network capacity impacts performance


• 100 Mbps Fast Ethernet ≈ 5 MB/s of sustained data traffic
• Pen and paper exercise to develop good estimates
• Use a network protocol analyzer to determine utilization
• Warning: Watch out for site security restrictions!

• Test environment should duplicate production environment

Performance Validation
Isolating the Test Subnet

• Don't share networks
• No repeatability
• Difficult diagnosis
• Difficult monitoring

• Actual network uses
• System back-ups
• On-line game servers
• After-hours trading

• Heaviest loads at off-hours
• Backups
• Normal outages

[Charts: % network busy over time on a shared network, with activity spikes around 6:30 am and 2 pm]

Performance Validation
Test Topology

[Diagram: test topology. Load generators drive the site through a firewall and a router/load balancer into the DMZ, where HTTP servers are paired with application servers; a second firewall separates them from the database, a legacy system, and other back-end systems.]

Performance Validation
Building the Performance Team

• n-Tiered systems require n-tiered test teams

• Discuss test plans with all members early in the process

• Coordinate testing with the extended team

• Monitor extended resources during testing (ex: DB)

• Coordinate analysis and tuning with resource owners

[Diagram: extended team roles: host admin, network admin, system admin, DBA]

Web Site Performance
Start-to-Finish Approach

• What are the Web Site’s Performance Requirements?

• Performance Validation Factors

• Analyzing Performance Data

Analyzing Performance Data
Capture the Right Data

• Monitor all machines involved with the tests


• CPU on all systems
• I/O on all systems
• Network traffic (if possible)

• Monitor all logs


• HTTP Server, Application Server, Database, Application
• Clean logs before each test run
• Excessive logging impacts performance
• Don’t performance test broken applications!

• Tivoli Resource Monitor


• Detail on key resources inside the Application Server

Analyzing Performance Data
Good Capture Practices

• Measure during steady-state
• Do not include ramp-up/ramp-down times

[Chart: virtual users vs. time, with the measurement interval covering only the steady-state portion of the run]

• Repeat tests after making changes
• Limit variables modified between runs

• Don't rely on data from a single run
• Repeat tests
• Investigate large variances between runs

• Performance tuning is an iterative process


• Runs of 10-15 minutes during tuning phase
• Long (multi-hour) runs later to detect leaks, etc.

• Test on quiet systems


• Avoid testing during DB backups, maintenance cycles
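One way to apply the steady-state rule above is to discard ramp-up and ramp-down samples before averaging. A minimal sketch, with throughput samples and window boundaries invented for illustration:

    import java.util.Arrays;

    public class SteadyStateAverage {
        public static void main(String[] args) {
            // Throughput samples (requests/second), one per 30-second interval.
            // The first and last few intervals are ramp-up and ramp-down.
            double[] samples = {5, 18, 34, 47, 49, 51, 50, 48, 52, 50, 36, 12};

            int steadyStart = 3;   // first steady-state sample (after ramp-up)
            int steadyEnd = 10;    // exclusive; excludes ramp-down samples

            double overallAvg = Arrays.stream(samples).average().orElse(0);
            double steadyAvg = Arrays.stream(samples, steadyStart, steadyEnd)
                                     .average().orElse(0);

            System.out.printf("Average over the whole run:     %.1f req/s%n", overallAvg);
            System.out.printf("Average over steady state only: %.1f req/s%n", steadyAvg);
            // Including ramp periods understates what the system sustains at steady state.
        }
    }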

Data Analysis
Where’s the Bottleneck?

• The system is only as fast as its slowest component

• Use test data to determine bottlenecks

• Eliminate bottlenecks by severity

• After addressing a bottleneck, retest


• Measure improvement
• Find next bottleneck

• Some bottlenecks are in the system, many in the code

Final Notes

• Before deploying the web site


• Take time to understand the Web site's performance requirements
• Design a test to measure these requirements
• Design a test to represent normal user load/interaction on the system

• During the performance test phase


• Capture data from all systems involved
• Address bottlenecks in order of severity

• Schedule enough time for performance validation


• At least a month to set up equipment and develop new tests

• Performance validation is a team effort


• Involve DBAs, network admins, system admins early in the process
• Maintain their involvement for analysis and tuning

