Executive summary
    Key findings
Overview
Components
Configuring the hardware
    Configuring the EVA8000 storage array
    Configuring the HP ProLiant DL580 server
Configuring the software
    Configuring the MPIO driver
    Configuring Windows for ASM
    Using 32-bit Windows
    Modifying ASM hidden parameters
    Working with Benchmark Factory
ASM performance results
    Batch test results
    OLTP test results
    DSS test results
Conclusions
    Summary of results
    Project learnings
Best practices
    Best practices for Storage Administrators
    Best practices for Server Administrators
    Best practices for Database Administrators
Appendix A. Bill of Materials
Appendix B. Examples
For more information
    HP technical references
    HP solutions and training
Executive summary
Improve profitability, manage risk, and lower TCO with HP StorageWorks Application-Based Solutions. In this project, the HP StorageWorks Customer Focused Test team developed best practices and identified optimal configurations for running an Oracle 10g database with Automatic Storage Management (ASM) on the HP StorageWorks 8000 Enterprise Virtual Array (EVA8000). The critical emphasis of this project is the interaction between ASM disk group and EVA disk group configurations. Results of this testing include the optimal EVA8000 configuration with respect to disk groups, database file layout, and mapping to corresponding ASM disk groups. The following questions were specifically addressed in this project:
- What is the best method to map Oracle/ASM data extents to EVA disk groups?
- Is there an advantage to double-striping, in which the hardware and software stripe the data across the same physical disks?
- What is the recommended configuration for an Oracle/ASM database on the EVA? Does this recommendation change based on workload type, that is, batch, Online Transaction Processing (OLTP), or Decision Support Systems (DSS)?
- Does this recommendation change based on single instance database or Real Application Clusters (RAC)?
Key findings
Testing successfully provided the following high-level results:
- ASM striping can improve database performance in conjunction with EVA virtualization.
- The EVA configuration for batch loading of 500 GB was not significant; all configurations performed equally well.
- The EVA configuration for OLTP testing shows an advantage for a configuration that uses both ASM and EVA striping (double-striping). The advantages were more pronounced as the database size increased (2.5-TB RAC database with 1,000 concurrent users).
- The greatest benefit from double-striping was observed with DSS workloads. Obvious differences between the configurations were seen, especially with the 2.5-TB RAC database with nine concurrent users running a portion of the DSS power test.
- Even greater performance on DSS workloads can be achieved by modifying one of the ASM hidden parameters, allocation unit size, to maximize data placement on the EVA.
Important findings uncovered during the tests are documented in the Best practices section.
Overview
The main purpose of the project was to test various EVA disk group configurations and determine the best configuration when running an Oracle 10g database with ASM. The introduction of Oracle ASM capabilities in Oracle 10g provided alternative storage management options to those available in the storage hardware alone. ASM provides some functionality similar to that of the HP StorageWorks EVA. Customers want to understand where the functionality overlaps, whether the two can co-exist, and if so, what is the best way to configure HP StorageWorks EVA storage to optimize application performance under various database workloads.
Several test scenarios were evaluated with regard to their impact on database performance:
- EVA configurations: Three EVA configurations were tested to see how Oracle performed during the benchmark testing.
- Benchmarks: Three industry-standard benchmarks (batch, OLTP, and DSS) were run against the EVA configurations. The intent was to see if the best practices changed with differing workloads, that is, did a particular EVA configuration perform better with a DSS workload than with an OLTP workload?
- Database type: Both single instance and RAC databases were tested to see if clustering had any effect on the overall best practice.
Components
To run these tests, HP configured the systems illustrated in Figure 1 and Figure 2. These environments were based on input from customers and are representative of typical Oracle database environments. The key components include the following:
- Oracle 10g: Quest Software's Benchmark Factory for Databases was used to generate standard Oracle workloads against the Oracle database in a variety of workload types (batch, OLTP, and DSS).
- HP ProLiant DL580 server: This server was used to host the Oracle database. Two configurations were used: a single instance database (Figure 1) and a two-node RAC instance (Figure 2).
- HP ProLiant BL20p server: This server was used to host the necessary storage management software (HP StorageWorks Command View EVA) for monitoring and configuring the storage array. The storage performance metrics from EVAPerf were also managed from this server.
- EVA8000: SAN-based storage array that stored the Oracle database, logs, and so on.
- Microsoft Windows Server 2003 Enterprise Edition operating system: The database servers, benchmark servers, and storage management server all ran Windows Server 2003 EE.
Configuration #3: Double-striping across different physical disks. In this scenario, the virtual disk is still subject to double-striping, performed by both the EVA and ASM. This differs from Configuration #1 because the disks that ASM stripes across do not belong to the same EVA disk group. Because of the smaller spindle count in each EVA disk group, there is a reduced advantage to EVA striping. Therefore, this configuration is not expected to perform well, nor is it recommended in general.
[Figure: Configuration 1. EVA 8000 (logical view) with 2 disk groups and their LUNs]
[Figure: Configuration 2a. EVA 8000 (logical view) with 2 disk groups and their LUNs]
Using the asmtoolg tool, the disk was stamped with the ASM signature for use by the Oracle database. For more information, see Appendix B. Examples. The stamped ASM disk was verified with the asmtool list command:

C:\>asmtool list
ORCLDISKDATA0   \Device\Harddisk0\Partition1   69445M
                \Device\Harddisk1\Partition1   1019M
                \Device\Harddisk2\Partition1   2096121M
                \Device\Harddisk3\Partition1   1048570M
In the RAC environment, the virtual disk was presented to both servers. One of the servers was used to create the ASM volume. The other node then had to be rebooted to recognize the volume before the database could be installed.
If used in conjunction with /PAE to address larger amounts of memory, the /3GB switch limits /PAE to 16 GB of RAM.
- Adjust the registry setting awe_window_memory. This parameter creates a window to more than 4 GB of memory when /PAE is in effect.
- Assign the Oracle user the Windows "Lock Pages in Memory" privilege (Administrative Tools > Local Security Policy > Local Policies > User Rights Assignment).
- Set the initialization parameter USE_INDIRECT_DATA_BUFFERS to TRUE.
The same data file using an AU size of 8 MB will consist of 128 extents of 8 MB each (Figure 6). Data is striped across extents, which are distributed equally across the disks in the ASM disk group. The ASM stripe size (_asm_stripesize) determines the I/O size for fine-grained templates (coarse-grained templates always use 1 MB, the Oracle maximum I/O size). The default _asm_stripesize is 128 KB.
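The extent arithmetic above can be sketched as a quick calculation. The 1-GB file size below is an assumption chosen to match the 128 x 8-MB figure quoted in the text; it is an illustration, not Oracle's internal algorithm:

```python
# Sketch: how many ASM extents a data file occupies for a given
# allocation unit (AU) size. The 1-GB file size is a hypothetical
# value consistent with the 128 x 8-MB example in the text.
MB = 1024 * 1024

def extent_count(file_size_bytes, au_size_bytes):
    """Number of AU-sized extents needed to hold the file (rounded up)."""
    return -(-file_size_bytes // au_size_bytes)  # ceiling division

file_size = 1024 * MB                     # hypothetical 1-GB data file
print(extent_count(file_size, 1 * MB))    # default 1-MB AU -> 1024 extents
print(extent_count(file_size, 8 * MB))    # 8-MB AU -> 128 extents
```

With the larger AU, the same file is described by 8x fewer, larger extents, which favors large sequential reads.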
In our testing, the _asm_ausize showed the most promise for improving performance. To review the entire project, refer to the following knowledge brief at www.hp.com/go/hpcft. The benchmarks used in this testing were random read/write (OLTP) and large sequential reads (DSS). The OLTP benchmark did not show significant performance differences when applying the ASM parameter changes. However, the large sequential reads benchmark did improve when changing the _asm_ausize value. These DSS results are shown in Figure 7.
[Figure 7. Sequential reads: with an 8-MB AU size, performance reached 116% of the 100% baseline]
In this test, the DSS scripts completed much faster when the ASM allocation unit size was set to 8 MB (it outperformed the baseline by completing in less time). The sequential reads appear to function better when the ASM allocation unit is set to 8 MB. In this test, setting the ASM AU size improved overall DSS query performance by 16%. HP recommends that customers look at this parameter in a pre-production environment to see if database performance can be improved when running against their own data and workloads.
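Because this benchmark is measured in time to completion, the 116% figure is the ratio of completion times. The absolute times below are hypothetical placeholders; only their ratio (the 16% improvement quoted above) reflects the reported result:

```python
# Relative DSS performance expressed as baseline_time / test_time.
# The completion times are hypothetical; only the ratio matters.
baseline_minutes = 116.0   # hypothetical baseline completion time
test_minutes = 100.0       # hypothetical completion time with 8-MB AU

relative_performance = baseline_minutes / test_minutes
print(f"{relative_performance:.0%}")  # -> 116%
```

In other words, finishing in roughly 86% of the baseline time is reported as 116% of baseline performance.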
Benchmark Factory scale factors are approximate and should not be used as absolute guides. The following example shows a scale factor of 1000 for the DSS benchmark. While this appears to create a 934-GB database, that value does not include indexes or stored procedures, which increase the overall database size further. As stated in Figure 8, the scale factor shown is only an estimate.
Using advanced features in Benchmark Factory (Figure 9), a user can create the tables in different tablespaces, use the parallel parameter, turn on or off logging, and so on.
Benchmark Factory can also load balance the workload across all nodes in the RAC environment. Figure 10 is a screenshot of 20 virtual users, each connecting to the database concurrently. Notice that Benchmark Factory has assigned the first 10 users to RAC node 1 and the second 10 users to RAC node 2. The same was true when user loads of 500 and 1,000 were used for the OLTP testing.
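The assignment seen in Figure 10 amounts to splitting the virtual-user list into contiguous blocks, one block per node. A minimal sketch of that behavior follows; assign_users is an illustrative helper of ours, not a Benchmark Factory API:

```python
# Sketch of block-style load balancing: the first half of the virtual
# users goes to RAC node 1, the second half to node 2, matching the
# 20-user example in the text. Not a real Benchmark Factory function.
def assign_users(num_users, num_nodes):
    """Map user IDs 1..num_users onto nodes 1..num_nodes in contiguous blocks."""
    block = -(-num_users // num_nodes)  # ceiling division: users per node
    return {user: (user - 1) // block + 1 for user in range(1, num_users + 1)}

nodes = assign_users(20, 2)
print(nodes[1], nodes[10], nodes[11], nodes[20])  # -> 1 1 2 2
```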
While the benchmark test is running, the user can monitor progress from the benchmark agent shown in Figure 11. This screenshot shows the 20 concurrent virtual users running the OLTP benchmark and the exact query each is currently processing. It also shows transactions per second (TPS), minimum and maximum transaction times, and errors. This screenshot was taken at the beginning of a test, so some of these values are not yet populated. In addition, the complete display is not shown due to space constraints (the minimum and maximum transaction times are cut off from the screenshot).
The database was set up with a single bigfile tablespace. The user schema contained the tables that were to be loaded during the benchmark. This is diagrammed in Figure 12.
The test results did not show any major performance difference related to configuration, as shown in Figure 13. This was expected: as long as there is no bottleneck in the system, sequential writes should not be influenced by the disk configuration.
[Figure 13. Batch test results relative to the baseline]
The batch testing on RAC was not completed due to a known issue in Oracle's Data Pump load tool, specifically with using the parallel parameter in a RAC environment. Work-arounds included shutting down all but one RAC node or running without the parallel parameter. In either case, the result would be a single instance configuration, which was already covered in the previous test. This is Oracle bug #5472417 and is currently being tracked by Oracle.
The database was set up with a single bigfile tablespace. The user schema contained the tables and data necessary for the benchmark execution. This is diagrammed in Figure 14.
Five user schemas (each 500 GB) were used to scale up to the 2.5-TB database size used in the RAC testing, consisting of approximately 20 billion rows (Figure 15).
Transaction ratio weights are shown in Figure 16. The transaction types can be weighted to control the percentage that they will occur during the random process of the OLTP benchmark. These weights were left at the industry-standard defaults.
The results for single instance testing on the 500-GB database with 500 concurrent users do not show a significant trend (Figure 17). Keep in mind that an OLTP workload is typically a server-intensive benchmark. Migrating to a 2-node RAC environment, with more users and a larger database size, increases the impact of the various EVA configurations.
[Figure 17. OLTP test results, 500-GB single instance database with 500 concurrent users: configurations at 100.00%, 100.00%, 99.99%, and 99.99% of the baseline]
On the 2.5-TB RAC database with 1,000 concurrent virtual users, the workload on the storage array has increased. Now you start to see the divergences between the three EVA configurations (Figure 18). Configuration #1 is performing the best (highest throughput measured in TPS). The other configurations are performing at 97% and 96% of the baseline (Configuration #1). In other words, the other configurations had slightly lower TPS during the three-hour benchmark.
[Figure 18. OLTP test results, 2.5-TB RAC database with 1,000 concurrent users: configurations at 100% (baseline), 97%, and 96%]
To create the 2.5-TB database, a combination of DSS scales (Table 6) was used across three user schemas (1000, 1000, 300). This is approximately 8.7 billion rows. Again, this benchmark is measured in time (minutes:seconds) to completion. Each test iteration took approximately 6 hours.
Table 6. DSS benchmark, 1000 scale

Table        Rows
Part         200,000,000
Supplier     10,000,000
Partsupp     800,000,000
Customer     150,000,000
Order        1,500,000,000
Lineitem     6,001,215,000
Nation       25
Region       5
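As a quick check, the per-table row counts in Table 6 can be totaled; a single scale-1000 schema comes to roughly 8.7 billion rows:

```python
# Total rows in one scale-1000 DSS schema, summed from Table 6.
rows = {
    "Part": 200_000_000,
    "Supplier": 10_000_000,
    "Partsupp": 800_000_000,
    "Customer": 150_000_000,
    "Order": 1_500_000_000,
    "Lineitem": 6_001_215_000,
    "Nation": 25,
    "Region": 5,
}
total = sum(rows.values())
print(f"{total:,}")  # -> 8,661,215,030 (about 8.7 billion rows)
```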
The database was set up with a single bigfile tablespace. The user schema contained the tables and data necessary to run the benchmark. This is diagrammed in Figure 19.
To create the 2.5-TB database, a combination of DSS scales was used across three user schemas (1000, 1000, 300). Each schema had three concurrent users issuing the Power Test benchmark (Figure 20).
There are 22 queries in the DSS Power Test. A selection of 15 queries was used for our benchmark to keep the testing time within a manageable timeframe (with no effect on the test outcome). These were selected based on their impact on the storage array and average time to completion. The queries marked with an asterisk were dropped from the RAC testing, again only in the interest of time.

Power Test (measured in time to completion) queries:
- Minimum Cost Supplier Query
- Customer Distribution Query *
- Order Priority Checking Query
- Promotion Effect Query
- Local Supplier Volume Query
- Top Supplier Query
- Forecasting Revenue Change Query
- Parts/Supplier Relationship Query
- Volume Shipping Query *
- Small-Quantity-Order Revenue Query
- National Market Share Query *
- Discounted Revenue Query
- Important Stock Identification Query *
- Global Sales Opportunity Query
- Shipping Modes and Order Priority Query *
The results of the 500-GB database with three concurrent users in Figure 21 start to show the difference in database performance based on the configuration of the EVA virtual disks. Configuration #1 performed the best, completing the 15 queries of the Power Test in the fastest time. Configurations #2 and #3 performed at 91% and 97% of the baseline (Configuration #1), respectively. In other words, Configurations #2 and #3 took longer to complete the Power Test.
[Figure 21. DSS Power Test results, 500-GB database with three concurrent users, relative to the baseline]
When the workload is increased to nine concurrent users on the 2.5-TB RAC database, the greatest divergence is seen between the configurations (Figure 22). This particular benchmark stressed the storage array more than any other workload. Configuration #1 has a substantial advantage over the other configurations, which performed at 88% and 60% of the baseline. Note that Configuration #3 actually had fewer overall spindles than Configurations #1 and #2 (80 versus 84). To create a fair test, Configuration #3 was re-run with an increased spindle count. This raised its performance from 60% to 74% of the baseline, but it was still the lowest performing configuration in this benchmark.
[Figure 22. DSS Power Test results, 2.5-TB RAC database with nine concurrent users: 100% (baseline), 88%, 74% (re-run), and 60%]
Conclusions
The following results were observed during this testing.
Summary of results
Single instance testing:
- Batch: No discernible difference
- OLTP: Configuration #1
- DSS: Configuration #1

RAC testing:
- Batch: Not applicable
- OLTP: Configuration #1
- DSS: Configuration #1
Project learnings
- Does it matter how Oracle/Automatic Storage Management (ASM) maps data extents to the Enterprise Virtual Array (EVA) disk groups? Yes, especially in DSS environments (consider modifying the ASM hidden parameter _asm_ausize for optimal performance).
- Is there a penalty for double-striping? No. Configuration #1 (double-striping) was the best performing configuration in testing. In addition, Configuration #1 also complies with both ASM and EVA configuration best practices.
- What is the general recommended configuration for an Oracle/ASM database on the EVA? Configuration #1.
- Does this recommendation change based on workload type? No. Configuration #1 was the best performing in both OLTP and DSS workloads.
- Does this recommendation change based on single instance or Real Application Clusters (RAC)? No. Configuration #1 was the best performing in both single instance and RAC testing.

It was also noted that as configuration size increases (addition of RAC nodes, increase in database size, increase in users), the performance advantages of Configuration #1 increase.
Best practices
During testing, several best practices were developed to improve database performance for each scenario.
Appendix B. Examples
This section shows screenshots from the Oracle asmtoolg utility used to stamp a volume for ASM. The tool is launched from the command line:

C:\>asmtoolg
HP product sites
Enterprise Class Storage Portfolio HP StorageWorks Command View EVA HP ProLiant DL Servers HP BladeSystem B-Series SAN Switches Multi-Path Options for HP Arrays Fibre Channel Host Bus Adapters
Oracle
Oracle Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (x64) Oracle Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (32-Bit) Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Microsoft Windows Oracle Database 10g Release 2 Automatic Storage Management Overview and Technical Best Practices
Quest Software
Benchmark Factory for Databases (Database Performance and Scalability Testing)
© 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. 4AA0-9728ENW, December 2006