
Best practices for Oracle 10g with Automatic Storage Management (ASM) and HP StorageWorks 8000 Enterprise Virtual Array (EVA8000) white paper


Environment: Oracle 10g with ASM on both a single instance database and Real Application Clusters (RAC), using HP ProLiant DL580 servers running Windows 2003 Enterprise Edition with HP StorageWorks EVA8000

Executive summary
Key findings
Overview
Components
Configuring the hardware
    Configuring the EVA8000 storage array
    Configuring the HP ProLiant DL580 server
Configuring the software
    Configuring the MPIO driver
    Configuring Windows for ASM
    Using 32-bit Windows
    Modifying ASM hidden parameters
    Working with Benchmark Factory
ASM performance results
    Batch test results
    OLTP test results
    DSS test results
Conclusions
    Summary of results
    Project learnings
Best practices
    Best practices for Storage Administrators
    Best practices for Server Administrators
    Best practices for Database Administrators
Appendix A. Bill of Materials
Appendix B. Examples
For more information
    HP technical references
    HP solutions and training
    HP product sites
    Oracle
    Quest Software

Executive summary
Improve profitability, manage risk, and lower TCO with HP StorageWorks Application-Based Solutions. In this project, the HP StorageWorks Customer Focused Test team developed best practices for configuring an Oracle 10g database with Automatic Storage Management (ASM) on the HP StorageWorks 8000 Enterprise Virtual Array (EVA8000) for optimal performance. The interaction between ASM disk groups and EVA disk groups is the critical emphasis of this project. Results of this testing include the optimal EVA8000 configuration with respect to disk groups, database file layout, and mapping to corresponding ASM disk groups. The following questions were specifically addressed in this project:

- What is the best method to map Oracle/ASM data extents to EVA disk groups?
- Is there an advantage to double-striping, in which the hardware and software stripe the data across the same physical disks?
- What is the recommended configuration for an Oracle/ASM database on the EVA?
- Does this recommendation change based on workload type, that is, batch, Online Transaction Processing (OLTP), or Decision Support Systems (DSS)?
- Does this recommendation change based on single instance database versus Real Application Clusters (RAC)?

Key findings
Testing successfully provided the following high-level results:

- ASM striping can improve database performance in conjunction with EVA virtualization.
- The EVA configuration was not significant for the batch loading of 500 GB; all configurations performed equally well.
- For OLTP testing, the advantage lay with the configuration that uses both ASM and EVA striping (double-striping). The advantage was more pronounced as the database size increased (2.5-TB RAC database with 1,000 concurrent users).
- The greatest benefit from double-striping was observed with DSS workloads. Obvious differences between the configurations appeared, especially on the 2.5-TB RAC database with nine concurrent users running a portion of the DSS power test.
- Even greater DSS performance can be achieved by modifying one of the ASM hidden parameters, the allocation unit size, to maximize data placement on the EVA.
- Important findings uncovered during the tests are documented in the Best practices section.

Overview
The main purpose of the project was to test various EVA disk group configurations and determine the best configuration when running an Oracle 10g database with ASM. The introduction of ASM in Oracle 10g provided storage management options beyond those available in the storage hardware alone, and ASM provides some functionality similar to that of the HP StorageWorks EVA. Customers want to understand where the functionality overlaps, whether the two can co-exist, and, if so, the best way to configure HP StorageWorks EVA storage to optimize application performance under various database workloads.

Several test scenarios were evaluated with regard to their impact on database performance:

- EVA configurations: Three EVA configurations were tested to see how Oracle performed during the benchmark testing.
- Benchmarks: Three industry-standard benchmarks (batch, OLTP, and DSS) were run against the EVA configurations. The intent was to see if the best practices changed with differing workloads, that is, did a particular EVA configuration perform better with a DSS workload than with an OLTP workload?
- Database type: Both single instance and RAC databases were tested to see if clustering had any effect on the overall best practice.

Components
To run these tests, HP configured the systems illustrated in Figure 1 and Figure 2. These environments were based on input from customers and are representative of typical Oracle database environments. The key components include the following:

- Oracle 10g: Quest Software's Benchmark Factory for Databases was used to generate standard Oracle workloads against the Oracle database in a variety of workload types (batch, OLTP, and DSS).
- HP ProLiant DL580 server: This server was used to host the Oracle database. Two configurations were used, a single instance database (Figure 1) and a two-node RAC instance (Figure 2).
- HP ProLiant BL20p server: This server was used to host the necessary storage management software (HP StorageWorks Command View EVA) for monitoring and configuring the storage array. The storage performance metrics from EVAPerf were also managed from this server.
- EVA8000: The SAN-based storage array that stored the Oracle database, logs, and so on.
- Microsoft Windows 2003 Enterprise Edition: The database servers, benchmark servers, and storage management server all ran Windows 2003 EE.

Figure 1. Environment for the Single Instance database

Figure 2. Environment for the RAC database

Configuring the hardware


For the complete list of all hardware and software, see Appendix A. Bill of Materials.

HP constructed three different disk layouts for the EVA configurations (Figure 3). These configurations meet both the HP and Oracle best practice of having two separate disk groups: one for online/system data and the other for backup/recovery data. This best practice is not for performance purposes, but to protect recovery efforts by separating these files. These are by no means the only possible configurations; they are guidelines that Database Administrators (DBAs) can modify to suit their particular environments.

Configuration #1: Both ASM and EVA striping (double-striping) across the same physical disks. This configuration highlights the concept of double-striping across the same physical disks using ASM software-based striping and EVA hardware-based striping algorithms. This is the expected optimal performing configuration.

Configuration #2: Single-striping (EVA striping only). In this scenario, the virtual disk is subject to single-striping, performed only by the EVA. Because only one LUN is presented to the ASM disk group, ASM cannot stripe. This configuration has obvious scalability issues for large databases, since the EVA has a current maximum LUN size of 2 TB.

Configuration #2a: A variation of Configuration #2, necessary to scale databases beyond 2 TB. This configuration slowly evolves into Configuration #1 as more LUNs are added.

Configuration #3: Double-striping across different physical disks. In this scenario, the virtual disk is still subject to double-striping, performed by both the EVA and ASM. This differs from Configuration #1 in that the disks ASM stripes across do not belong to the same EVA disk group. Because of the smaller spindle count in each EVA disk group, the advantage of EVA striping is reduced. Therefore, this configuration is not expected to perform well, nor is it recommended in general.

Figure 3. EVA configurations

[Diagram: Configuration 1 shows the EVA8000 (logical view) with two EVA disk groups. The first disk group presents six LUNs to an ASM disk group holding the user files, system files, control files, and online redo logs; the second presents three LUNs to an ASM disk group holding the archive logs, Flashback Area, and RMAN backups. Configuration 2a shows the same two ASM disk groups, with each EVA disk group instead presenting a small number of large LUNs.]

Configuring the EVA8000 storage array


The EVA8000 configuration included the following:

- EVA XCS firmware: V5.100
- 1 EVA8000 controller pair
- 12 EVA disk shelves
- 168 146-GB 15k FC disks

The fibre connections were made to Brocade SilkWorm 3800 switches configured in dual 2-Gb/s fabrics. All virtual disks in the EVA configurations used VRAID1. The virtual disks were manually balanced across both EVA controllers for optimal performance. The physical drives were divided across the two disk groups with more spindles in the database disk group, for example, 96 drives in the database disk group and 64 drives in the backup disk group. The disk groups were configured for single disk failure protection.

The virtual disks were presented only to the host ports that were connected to the master controller for the virtual disk. This was done, in part, to eliminate any possible performance impact from traffic crossing the mirror ports of the EVA controllers. It is recognized that such a setup prevents access to the virtual disk in the event of a controller failover and would typically not be used in a customer environment.

Configuring the HP ProLiant DL580 server


An HP ProLiant DL580 server was used for each database server, with four Intel Xeon processors (hyper-threaded to eight logical processors) and 32 GB of RAM. The ProLiant Support Pack 7.5.1 was used to configure the server and install the Windows 2003 EE operating system. Fibre connectivity was provided by four Emulex LP1050 dual-channel host bus adapters (HBAs) with the native Windows MPIO driver used for host connectivity, for a total of eight physical paths to the fabric. Because of strict zoning to control access to the master controller for each virtual disk, a total of 32 paths (instead of the typical 64) was available for host I/O.

Configuring the software


Configuring the MPIO driver
The Windows MPIO/HP DSM driver provides multipathing on Windows servers and supports Active-Active storage arrays. The EVA8000 storage array was configured to load balance across HBAs and controllers with Active-Active enabled. The latest Windows MPIO/HP DSM driver can be obtained from the HP website under support and downloads. The default method, SQST, was used in this testing for load balancing across the HBA ports (Table 1). The MPIO/DSM version was 2.00.01.
Table 1. MPIO multipathing options

Type   Policy                       Description
NLB    No_Load_Balance              All I/O requests are routed through a chosen active path.
RR     Round_Robin                  All I/O requests are distributed across all active paths to the device in a round-robin manner.
SQR    Shortest_Queue_Requests      Each I/O request is routed to the active path with the least number of outstanding requests.
SQB    Shortest_Queue_Bytes         Each I/O request is routed to the active path with the least number of outstanding data bytes.
SQST   Shortest_Queue_ServiceTime   Each I/O request is routed to the active path where the total outstanding time for pending I/O requests is the least.

Configuring Windows for ASM


The diskpart command was run to enable auto-mounting of volumes, which allows asmtoolg to see the volume. In Windows 2003 server editions, this value is off by default.

DISKPART> automount enable
Automatic mounting of new volumes enabled.

Next, the virtual disk was presented to the server, the disk was initialized in Computer Management, a basic disk was created, and the volume was left as a raw device, that is, no file system was installed (Figure 4).
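The same preparation can also be scripted entirely from the command line. The following is a minimal sketch, in which disk 2 is a placeholder for whichever disk number Windows assigned to the presented EVA virtual disk:

C:\>diskpart
DISKPART> automount enable
DISKPART> list disk
DISKPART> select disk 2
DISKPART> create partition primary
DISKPART> exit

No file system is created. If Windows assigns a drive letter automatically, it can be removed with the diskpart remove command so the partition remains raw for ASM.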

Figure 4. Unformatted volume

Using the asmtoolg tool, the disk was stamped with the ASM signature for use by the Oracle database. For more information, see Appendix B. Examples. The stamped ASM disk was verified with the asmtool list command:

C:\>asmtool list
ORCLDISKDATA0   \Device\Harddisk0\Partition1   69445M
                \Device\Harddisk1\Partition1   1019M
                \Device\Harddisk2\Partition1   2096121M
                \Device\Harddisk3\Partition1   1048570M
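A disk can also be stamped without the GUI by way of asmtool itself; a sketch follows, in which the device path and the DATA0 label are illustrative values rather than ones from this test:

C:\>asmtool -add \Device\Harddisk2\Partition1 DATA0

The result can then be checked with asmtool list, as shown above.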

In the RAC environment, the virtual disk was presented to both servers. One of the servers was used to create the ASM volume; the other node must be rebooted before it recognizes the volume and the database can be installed.

Using 32-bit Windows


If you are running 32-bit Windows with more than 4 GB of RAM, several extra steps are necessary to install and configure the Oracle database:

1. Add /3GB and /PAE to the Windows boot.ini file. On a 4-GB RAM server, /3GB allows applications to use 3 GB of RAM (instead of the default 2 GB). On servers with more than 4 GB of RAM, /PAE allows applications to access the higher memory.
Note: The database buffer cache is the only memory area that can utilize memory greater than 3 GB, sized by way of DB_BLOCK_SIZE x DB_BLOCK_BUFFERS. Other SGA components, the PGA, other memory parameters, and Oracle processes must all share the 3 GB of memory. For more information, refer to Metalink Note 225349.1.

If used in conjunction with /PAE for higher memory amounts, /3GB will limit /PAE to 16 GB of RAM.

2. Adjust the registry setting AWE_WINDOW_MEMORY. This parameter creates a window into the memory above 4 GB when /PAE is in effect.

3. Assign the Oracle user the Windows Lock Pages in Memory privilege (Administrative Tools > Local Security Policy > Local Policies > User Rights Assignment).

4. Set the initialization parameter USE_INDIRECT_DATA_BUFFERS to TRUE.
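Taken together, the settings might look like the following sketch. The boot entry, the HOME0 registry key, the 1-GB AWE window, and the 10-GB buffer cache sizing are illustrative assumptions, not values from this test:

boot.ini:
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /3GB /PAE

Registry (AWE window size in bytes, under the Oracle home key):
C:\>reg add HKLM\SOFTWARE\ORACLE\HOME0 /v AWE_WINDOW_MEMORY /d 1073741824

init.ora:
use_indirect_data_buffers=TRUE
db_block_buffers=1310720        # 1,310,720 x 8-KB blocks = 10-GB buffer cache

Note that USE_INDIRECT_DATA_BUFFERS requires the buffer cache to be sized with DB_BLOCK_BUFFERS rather than DB_CACHE_SIZE.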

Modifying ASM hidden parameters


There are a few hidden ASM parameters that can be modified to change the ASM stripe size and allocation unit. These parameters were originally intended for very large databases (multiple-TB to PB range) and are used to reduce the amount of metadata overhead on the ASM instance. Working with recommendations from Oracle engineering, HP evaluated variations of these parameters on smaller databases to see if they would improve overall database performance. Note that the following ASM parameters must start with an _ (underscore):

- _asm_ausize (allocation unit size)
- _asm_stripesize (fine grain stripe size)
- fine grain/coarse grain striping templates

For more information on setting and configuring these parameters, refer to Oracle Metalink Note 368055.1.

ASM allows for the modification of two key parameters that affect the way data is striped across an ASM disk group: allocation unit size and stripe size. The ASM allocation unit size (_asm_ausize) determines the extent size for all database files. For example, a 1-GB data file using the default ausize of 1 MB will consist of 1,024 extents of 1 MB each (Figure 5).

Figure 5. ASM striping with 1 MB


The same data file using an ausize of 8 MB will consist of 128 extents of 8 MB each (Figure 6). Data is striped across extents, which are distributed equally across the disks in the ASM disk group. The ASM stripe size (_asm_stripesize) determines the I/O size for fine grain templates (coarse grain templates are always 1 MB, the Oracle maximum I/O size). The default _asm_stripesize is 128 KB.

Figure 6. ASM striping with 8 MB

In our testing, the _asm_ausize showed the most promise for improving performance. To review the entire project, refer to the following knowledge brief at www.hp.com/go/hpcft. The benchmarks used in this testing were random read/write (OLTP) and large sequential reads (DSS). The OLTP benchmark did not show significant performance differences when applying the ASM parameter changes. However, the large sequential reads benchmark did improve when changing the _asm_ausize value. These DSS results are shown in Figure 7.


Figure 7. DSS results with _asm_ausize = 8 MB

[Bar chart, performance % against baseline: ASM defaults (baseline) 100%; 8-MB AU size 116%.]

In this test, the DSS scripts completed much faster when the ASM allocation unit size was set to 8 MB, outperforming the baseline by completing in less time. Sequential reads appear to function better when the ASM allocation unit is set to 8 MB; here, setting the ASM AU size improved overall DSS query performance by 16%. HP recommends that customers evaluate this parameter in a pre-production environment to see whether database performance improves when running against their own data and workloads.
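As a sketch of how such an evaluation might be set up (per Metalink Note 368055.1, the underscore parameters take effect only for disk groups created after they are set, so the relevant disk group must be recreated):

# init.ora for the ASM instance (for example, init+ASM.ora) — illustrative values
_asm_ausize=8388608        # 8-MB allocation unit (default is 1 MB)
# _asm_stripesize=131072   # fine grain stripe size; 128-KB default left unchanged

Restart the ASM instance, then drop and recreate the target disk group so its files are allocated with the new 8-MB allocation unit.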

Working with Benchmark Factory


Quest Software's Benchmark Factory for Databases 5.0 (Beta 4) was used for workload generation. Benchmark Factory is a generic workload generation utility that can run DSS and OLTP workloads, as well as custom workloads based on any trace generated by a database or any custom SQL script. The workloads used for benchmark testing, in accordance with the industry standards, are defined in Table 2. Each test scenario was executed a minimum of three times, and averages were taken across all test results.
Table 2. Benchmark scale factors

Benchmark               Size     User load
Batch, single instance  500 GB   1
Batch, RAC              N/A      N/A
OLTP, single instance   500 GB   500
OLTP, RAC               2.5 TB   1,000
DSS, single instance    500 GB   3
DSS, RAC                2.5 TB   9


Benchmark Factory scale factors are approximate and should not be used as absolute guides. The following example shows a scale factor of 1000 for the DSS benchmark. While this appears to create a 934-GB database, that value does not include indexes or stored procedures, which will increase the overall database size even more. The scale factor shown is only an estimate as stated in Figure 8.

Figure 8. Benchmark scale factors


Using advanced features in Benchmark Factory (Figure 9), a user can create the tables in different tablespaces, use the parallel parameter, turn on or off logging, and so on.

Figure 9. Benchmark Factory advanced features


Benchmark Factory can also load balance the workload across all nodes in the RAC environment. Figure 10 is a screenshot of 20 virtual users, each connecting to the database concurrently. Notice that Benchmark Factory has assigned the first 10 users to RAC node 1 and the second 10 users to RAC node 2. The same held true for the user loads of 500 and 1,000 used in the OLTP testing.

Figure 10. Benchmark Factory load balancing


While the benchmark test is running, the user can monitor progress from the benchmark agent shown in Figure 11. This screenshot shows the 20 concurrent virtual users running the OLTP benchmark and the exact query each is currently processing. It also shows transactions per second (TPS), minimum and maximum transaction times, and errors. This screenshot was taken at the beginning of a test, so some of these values are not yet populated. In addition, the complete display is not shown due to space constraints (the min and max transaction times are cut off).

Figure 11. Benchmark Factory agent

ASM performance results


The testing effort provided the following results:

- Batch testing: No discernible performance differences between configurations
- OLTP testing, single instance: Configuration #1 offered the best performance
- OLTP testing, RAC: Configuration #1 offered the best performance
- DSS testing, single instance: Configuration #1 offered the best performance
- DSS testing, RAC: Configuration #1 offered the best performance


Batch test results


Batch testing consisted of a single user issuing sequential writes by way of Oracle Data Pump in parallel mode into two related tables (Table 3). These tables provided the basis for the OLTP benchmark. Total size was approximately 500 GB representing 4 billion rows of data. The benchmark is measured in completion time (minutes:seconds). The test iteration was approximately 3 hours.
Table 3. OLTP benchmark tables

Table     Rows
Product   777,000,000
Part      3,080,000,000
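The load itself was driven with Oracle Data Pump import. A sketch of such an invocation follows; the schema credentials, directory object, dump file names, and degree of parallelism are illustrative, not the exact values used in this test:

C:\>impdp bench/bench DIRECTORY=dpump_dir DUMPFILE=oltp%U.dmp TABLES=PRODUCT,PART PARALLEL=8 LOGFILE=oltp_load.log

With PARALLEL=8 and a %U wildcard in DUMPFILE, Data Pump spreads the sequential writes across multiple worker processes and dump files.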

The database was set up with a single bigfile tablespace. The user schema contained the tables that were to be loaded during the benchmark. This is diagrammed in Figure 12.

Figure 12. Batch database configuration
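A minimal sketch of the kind of bigfile tablespace used is shown below; the tablespace name, the +DATA ASM disk group name, and the size are illustrative assumptions:

SQL> CREATE BIGFILE TABLESPACE bench_data
  2    DATAFILE '+DATA' SIZE 600G
  3    EXTENT MANAGEMENT LOCAL
  4    SEGMENT SPACE MANAGEMENT AUTO;

Because a bigfile tablespace consists of a single very large data file, ASM distributes its extents across every disk in the disk group, which is what makes the striping comparison between configurations meaningful.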


The test results did not show any major performance difference related to configuration, as shown in Figure 13. This was expected since as long as there is no bottleneck in the system, sequential writes should not be influenced by the disk configuration.

Figure 13. Batch results on single instance

[Bar chart, performance % against baseline: Config #1 (baseline) 100%; Config #2 100%; Config #3 100%.]

The batch testing on RAC was not completed due to a known issue in Oracle's Data Pump load tool, specifically with the ability to use the parallel parameter in a RAC environment. Work-arounds included shutting down all but one RAC node or running without the parallel parameter. Either way, this would amount to the single instance configuration already covered in the previous test. This is Oracle bug #5472417 and is currently being tracked by Oracle.

OLTP test results


The OLTP test consisted of random read/write load on two related tables consisting of approximately 4 billion rows (Table 4). This test is measured in TPS to completion. The test iteration was run for exactly 3 hours.
Table 4. OLTP benchmark, 500 GB

Table     Rows
Product   777,000,000
Part      3,080,000,000


The database was set up with a single bigfile tablespace. The user schema contained the tables and data necessary for the benchmark execution. This is diagrammed in Figure 14.

Figure 14. OLTP 500-GB database configuration

Five user schemas (each 500 GB) were used to scale up to the 2.5-TB database size used in the RAC testing, consisting of approximately 20 billion rows (Figure 15).

Figure 15. OLTP 2.5-TB database configuration


Transaction ratio weights are shown in Figure 16. The transaction types can be weighted to control the percentage of the OLTP benchmark's random transaction mix that each type occupies. These weights were left at the industry-standard defaults.

Figure 16. OLTP transaction mix

The results for single instance testing on the 500-GB database with 500 concurrent users do not show a significant trend (Figure 17). Keep in mind that an OLTP workload is typically a server-intensive benchmark. Migrating to a 2-node RAC environment, with more users and a larger database size, increases the impact of the various EVA configurations.

Figure 17. OLTP results on single instance

[Bar chart, performance % against baseline: Config #1 (baseline) 100.00%; Config #2 99.99%; Config #3 99.99%.]


On the 2.5-TB RAC database with 1,000 concurrent virtual users, the workload on the storage array has increased, and the divergence between the three EVA configurations starts to show (Figure 18). Configuration #1 performs the best (highest throughput measured in TPS). The other configurations performed at 97% and 96% of the baseline (Configuration #1); in other words, they sustained slightly lower TPS during the three-hour benchmark.

Figure 18. OLTP results on RAC

[Bar chart, performance % against baseline: Config #1 (baseline) 100%; Config #2 97%; Config #3 96%.]

DSS test results


The DSS test consists of large sequential reads from eight related tables with approximately 2.6 billion rows. It is measured in time (minutes:seconds) to completion. Test iteration is approximately 7 hours. The 300 scale for the DSS benchmark is shown in Table 5.
Table 5. DSS benchmark, 300 scale

Table      Rows
Part       60,000,000
Supplier   3,000,000
Partsupp   240,000,000
Customer   45,000,000
Order      450,000,000
Lineitem   1,800,364,500
Nation     25
Region     5


To create the 2.5-TB database, a combination of DSS scales (Table 6) was used from three user schemas (1000, 1000, 300). This is approximately 8.7 billion rows. Again, this benchmark is measured in time (minutes:seconds) to completion. Test iteration is approximately 6 hours.
Table 6. DSS benchmark, 1000 scale

Table      Rows
Part       200,000,000
Supplier   10,000,000
Partsupp   800,000,000
Customer   150,000,000
Order      1,500,000,000
Lineitem   6,001,215,000
Nation     25
Region     5

The database was set up with a single bigfile tablespace. The user schema contained the tables and data necessary to run the benchmark. This is diagrammed in Figure 19.

Figure 19. DSS 500-GB database configuration


As noted above, the 2.5-TB database was created from a combination of DSS scales across three user schemas (1000, 1000, 300). Each schema had three concurrent users issuing the Power Test benchmark (Figure 20).

Figure 20. DSS 2.5-TB database configuration

There are 22 queries in the DSS Power Test. A selection of 15 queries was used for our benchmark to keep the testing time within a manageable timeframe (with no effect on the test outcome); these were selected based on their impact on the storage array and average time to completion. The queries marked with an asterisk were dropped from the RAC testing, again only in the interest of time. The Power Test is measured in time to completion. A representative query follows the list. Queries:

- Minimum Cost Supplier Query
- Customer Distribution Query *
- Order Priority Checking Query
- Promotion Effect Query
- Local Supplier Volume Query
- Top Supplier Query
- Forecasting Revenue Change Query
- Parts/Supplier Relationship Query
- Volume Shipping Query *
- Small-Quantity-Order Revenue Query
- National Market Share Query *
- Discounted Revenue Query
- Important Stock Identification Query *
- Global Sales Opportunity Query
- Shipping Modes and Order Priority Query *
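The actual SQL is generated by Benchmark Factory, but the queries follow the well-known TPC-H forms. As an illustration of the large sequential scans these place on the storage array, the Promotion Effect Query is essentially TPC-H Q14 (the dates shown are the standard reference values, not necessarily those generated during this test):

SELECT 100.00 *
       SUM(CASE WHEN p_type LIKE 'PROMO%'
                THEN l_extendedprice * (1 - l_discount)
                ELSE 0 END) /
       SUM(l_extendedprice * (1 - l_discount)) AS promo_revenue
FROM   lineitem, part
WHERE  l_partkey = p_partkey
AND    l_shipdate >= DATE '1995-09-01'
AND    l_shipdate <  DATE '1995-10-01';

Even a single-month scan of the Lineitem table touches tens of millions of rows at these scale factors, which is why the EVA configuration matters most here.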


The results for the 500-GB database with three concurrent users (Figure 21) start to show the effect of the EVA virtual disk configuration on database performance. Configuration #1 performed the best, completing the 15 queries of the Power Test in the fastest time. Configurations #2 and #3 performed at 91% and 93% of the baseline (Configuration #1); in other words, they took longer to complete the Power Test.

Figure 21. DSS results on single instance

[Bar chart, performance % against baseline: Config #1 (baseline) 100%; Config #2 91%; Config #3 93%.]


When the workload is increased to nine concurrent users on the 2.5-TB RAC database, the greatest divergence is seen between the configurations (Figure 22). This particular benchmark stressed the storage array more than any other workload. Configuration #1 has a substantial advantage over the other configurations, which performed at 88% and 60% of the baseline. Note that Configuration #3 actually had fewer overall spindles than Configurations #1 and #2 (80 versus 84). To create a fair test, Configuration #3 was re-run with an increased spindle count (Configuration #3a). This raised its performance from 60% to 74% of the baseline, but it was still the lowest performing configuration in this benchmark.

Figure 22. DSS results on RAC

[Bar chart, performance % against baseline: Config #1 (baseline) 100%; Config #2 88%; Config #3 60%; Config #3a 74%.]


Conclusions
The following results were observed during this testing.

Summary of results
Single instance testing:
- Batch: No discernible difference
- OLTP: Configuration #1
- DSS: Configuration #1

RAC testing:
- Batch: Not applicable
- OLTP: Configuration #1
- DSS: Configuration #1

Project learnings
Does it matter how Oracle/Automatic Storage Management (ASM) maps data extents to the Enterprise Virtual Array (EVA) disk groups? Yes, especially in DSS environments (consider modifying the ASM hidden parameter _asm_ausize for optimal performance).

Is there a penalty for double-striping? No. Configuration #1 (double-striping) was the best performing configuration in testing, and it also complies with both ASM and EVA configuration best practices.

What is the general recommended configuration for an Oracle/ASM database on the EVA? Configuration #1.

Does this recommendation change based on workload type? No. Configuration #1 was the best performing in both OLTP and DSS workloads.

Does this recommendation change based on single instance or Real Application Clusters (RAC)? No. Configuration #1 was the best performing in both single instance and RAC testing.

It was also noted that as the configuration scales up (addition of RAC nodes, increase in database size, increase in users), the performance advantage of Configuration #1 increases.


Best practices
During testing, several best practices were developed to improve database performance for each scenario.

Best practices for Storage Administrators


- Create at least two EVA disk groups (one for data files, one for archive/backup files).
- Create multiple LUNs from each disk group, as needed, for the database.
- Use VRAID1 for the database disk group (online files).
- Use VRAID1 or VRAID5 for the archive log disk group (backup files).
- Place a greater number of high-performance disks in the first disk group, when possible, to improve database performance. If necessary, use lower performance disks, or fewer disks, in the second disk group.
- Balance the LUNs across both EVA controllers.

Best practices for Server Administrators


- Use multiple HBAs for path availability and optimal performance.
- Manage host access to each virtual disk by way of paths to its master controller where possible.
- Create the OCR and voting disk on logical drives (Microsoft Windows).
- Create and stamp ASM disks on raw partitions.

Best practices for Database Administrators


- Create at least two ASM disk groups (see the sketch after this list).
- Use external redundancy, leaving the RAID functionality to the storage array.
- Place the online files in the first disk group. Place backup files and the Flashback Area in the second disk group.
- If using the Flashback Area, be aware that a mirror copy of the online redo logs is placed in this area by default.
- Present multiple LUNs, as needed, to each of the ASM disk groups.
- Consider changing _asm_ausize to 8 MB for DSS-like workloads.
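A minimal sketch of this two-disk-group layout, assuming the disks have already been stamped with asmtool (the DATA/RECOV group names and ORCLDISK labels are illustrative):

SQL> -- On the ASM instance:
SQL> CREATE DISKGROUP data EXTERNAL REDUNDANCY
  2    DISK '\\.\ORCLDISKDATA0', '\\.\ORCLDISKDATA1';

SQL> CREATE DISKGROUP recov EXTERNAL REDUNDANCY
  2    DISK '\\.\ORCLDISKRECOV0';

SQL> -- On the database instance:
SQL> ALTER SYSTEM SET db_create_file_dest='+DATA' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET db_recovery_file_dest_size=500G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET db_recovery_file_dest='+RECOV' SCOPE=SPFILE;

EXTERNAL REDUNDANCY tells ASM not to mirror, leaving data protection to the EVA's VRAID1, while ASM still stripes extents across every LUN in the disk group.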


Appendix A. Bill of Materials


Storage
- EVA8000 (2C12D), XCS firmware V5.100
- HP StorageWorks 146-GB 15K FC HDD, firmware HP01 (quantity 168)

Servers
- HP ProLiant DL580 G3 server, with Intel Xeon 3.00-GHz processors, HP DL580R03 memory expansion board, HP 4-GB REG PC2-3200 (2x2-GB) DDR memory, HP 36-GB 15K U320 pluggable hard drives, and Emulex LP1050DC HBAs (driver/firmware 5.1.11x1 / 1.91a1)
- HP ProLiant DL580 G2 server, with Intel Xeon MP 2.85-GHz/2-MB processors, HP hot plug memory expansion board, HP 4096-MB PC1600 Reg SDRAM memory, HP 36-GB 15K U320 pluggable hard drives, and Emulex LP1050DC HBAs (driver/firmware 7.1.11x1 / 1.91a1)
- HP ProLiant BL20p G3 server, with Intel Xeon 3.06-GHz processors, HP 72-GB 15K U320 pluggable hard drives, and a QLogic QLA2312 HBA (driver/firmware 9.0.2.16 / 1.45)

SAN infrastructure
- Brocade SilkWorm 3800 SAN switches, firmware V3.2.1 (quantity 2)

Software
- Oracle 10g Enterprise Edition 10.2.0.2
- Microsoft Windows 2003 Enterprise Edition RC2, SP1, SP2 beta
- Quest Software Benchmark Factory for Databases 5.0 beta 4
- HP MPIO Full Featured DSM for EVA Disk Arrays 2.00.01
- HP MPIO DSM Manager 1.02.00
- HP ProLiant Support Pack 7.51
- HP StorageWorks Command View EVA 5.1


Appendix B. Examples
This section shows screenshots from the Oracle asmtoolg utility used to stamp a volume for ASM. The tool is launched from the command line:

C:\>asmtoolg

Figure 23. asmtoolg


For more information


HP technical references
Configuring Oracle ASM hidden parameters for EVA8000
HP StorageWorks Command View EVA installation guide
HP StorageWorks Command View EVA user guide
HP StorageWorks Enterprise Virtual Array configuration best practices white paper
HP StorageWorks SAN design reference guide

HP solutions and training


Customer Focused Testing
HP & Oracle alliance
Network Storage Services
HP Storage and SAN Education Courses

HP product sites
Enterprise Class Storage Portfolio
HP StorageWorks Command View EVA
HP ProLiant DL Servers
HP BladeSystem B-Series SAN Switches
Multi-Path Options for HP Arrays
Fibre Channel Host Bus Adapters

Oracle
Oracle Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (x64)
Oracle Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (32-Bit)
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Microsoft Windows
Oracle Database 10g Release 2 Automatic Storage Management Overview and Technical Best Practices

Quest Software
Benchmark Factory for Databases (Database Performance and Scalability Testing)

© 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.

4AA0-9728ENW, December 2006
