
Backup and recovery best practices for Oracle 10g with

HP Data Protector 6.0

Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Solution configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Configuring the hardware . . . . . . . . . . . . . . . . . . . . . . . . . 6

Backup methodologies . . . . . . . . . . . . . . . . . . . . . . . . 6

Disk-to-disk backups—Backing up Oracle 10g Release 2 to EVA5000 . . 6

Disk-to-virtual-tape backups—Backing up Oracle 10g Release 2 to VLS . . 6

Disk-to-tape backups—Backing up Oracle 10g Release 2 to EML . . . . . 6

Hardware statistics . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Database server configurations 1 and 2 (IA64 RAC and IA64 Single) . . 7

Database server configuration 3 (IA32 Multi) . . . . . . . . . . . . . 7

Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Tape devices . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Partitioning the rx7620 . . . . . . . . . . . . . . . . . . . . . . . . 8

Configuring the management processor . . . . . . . . . . . . . . . 8

Creating nPars . . . . . . . . . . . . . . . . . . . . . . . . . . 8

SAN zone definition . . . . . . . . . . . . . . . . . . . . . . . . . 9

Zoning overview . . . . . . . . . . . . . . . . . . . . . . . . . 9

Configuring the EVA8000 for primary storage . . . . . . . . . . . . . . 10

Configuring the EVA5000 for disk backups . . . . . . . . . . . . . . . . 10

Configuring the HP StorageWorks EML E-Series 103e Tape Library . . . . . . 11

Configuring HP VLS6510 . . . . . . . . . . . . . . . . . . . . . . . . 11

Configuring the software . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Configuring the QLogic driver . . . . . . . . . . . . . . . . . . . . . 13

Setting up QLogic dynamic load balancing . . . . . . . . . . . . . . 13

EVA8000 and EVA5000—active-active/active-passive . . . . . . . . . 13

Oracle Cluster File System 2 . . . . . . . . . . . . . . . . . . . . . . 14

OCFS disk configuration . . . . . . . . . . . . . . . . . . . . . . 14

Setting up the OCFS Cluster File Systems . . . . . . . . . . . . . . . 15

Working with Benchmark Factory . . . . . . . . . . . . . . . . . . . . 16

Oracle parameter changes . . . . . . . . . . . . . . . . . . . . . 18

Configuring HP Data Protector 6.0 for Oracle backups . . . . . . . . . . . . . 19

HP Data Protector setup . . . . . . . . . . . . . . . . . . . . . . . . 19

Backups with Data Protector . . . . . . . . . . . . . . . . . . . . . . 19

Creating a backup specification . . . . . . . . . . . . . . . . . . 19

Data Protector disk-to-disk backup . . . . . . . . . . . . . . . . . . . . 20

Setting up the disk file libraries . . . . . . . . . . . . . . . . . . . 20

Effectively using file libraries and tape devices . . . . . . . . . . . . 21

Setting up Oracle restores . . . . . . . . . . . . . . . . . . . . . . . 21

OLTP workload results . . . . . . . . . . . . . . . . . . . . . . . . . 22

Oracle backup and restore . . . . . . . . . . . . . . . . . . . . . . . . . 23

Backup and restore performance results . . . . . . . . . . . . . . . . . 23

EVA performance results . . . . . . . . . . . . . . . . . . . . . . . . 23

EVA5000 raw performance characterization . . . . . . . . . . . . . 23

EVA8000 raw performance characterization . . . . . . . . . . . . . 24

Disk-to-disk backup and restore results using EVA5000 . . . . . . . . . . . 24

Disk-to-tape backup and restore performance results, EML E-Series . . . . . . 32

Best practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Initial tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Tuning the Data Protector buffer configuration . . . . . . . . . . . . . 42

Tuning block settings . . . . . . . . . . . . . . . . . . . . . . . 42

Best practices for disk-to-disk backups on the EVA5000 disk array . . . . . . 42

Best practices for disk-to-tape backups on the EML tape library . . . . . . . . 43

Best practices for disk-to-virtual-tape backups on the VLS virtual tape library . . 43

Best practices for using Oracle Recovery Manager . . . . . . . . . . . . . 44

Use an RMAN catalog database . . . . . . . . . . . . . . . . . . 44

Protect your backup repositories, RMAN and Data Protector catalog . . . 44

Enable block change tracking to improve the speed of incremental backups 44

Use flash recovery for online point-in-time recovery and transaction rollback tracking . . . . . . . 44

Choose effective backup policy types . . . . . . . . . . . . . . . . 44

Maintain your Oracle backup images effectively . . . . . . . . . . . . 44

Test copies of backups . . . . . . . . . . . . . . . . . . . . . . . 45

Manage your archived online logs and keep them safe . . . . . . . . . 45

Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

Appendix A Bill of materials . . . . . . . . . . . . . . . . . . . . . . . . 47

Appendix B Configuring Oracle Recovery Manager . . . . . . . . . . . . . . 49

Appendix C Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Stinit.def configurations for the EML and VLS . . . . . . . . . . . . . . . 50

Sample RMAN scripts . . . . . . . . . . . . . . . . . . . . . . . . . 50

RMAN full backup script . . . . . . . . . . . . . . . . . . . . . . 50

RMAN duplicate script . . . . . . . . . . . . . . . . . . . . . . 51

HP Data Protector 6 Oracle RMAN template . . . . . . . . . . . . . . . 51

Data Protector 6.0 interface screen shots . . . . . . . . . . . . . . . . . 52

Appendix D Additional information . . . . . . . . . . . . . . . . . . . . . 58

Data Protector patches . . . . . . . . . . . . . . . . . . . . . . . . . 58

Server operating system hangs/crashes . . . . . . . . . . . . . . . 58

Oracle session hangs . . . . . . . . . . . . . . . . . . . . . . . 58

Data Protector interface crash causes Cell Manager crash . . . . . . . 58

RMAN specific syntax changes . . . . . . . . . . . . . . . . . . . 58

RAC issues . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

General Oracle changes . . . . . . . . . . . . . . . . . . . . . . 58

Appendix E: Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . 60

For more information . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

HP Customer Focused Testing . . . . . . . . . . . . . . . . . . . . . . 62

HP Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Enterprise Backup Solution . . . . . . . . . . . . . . . . . . . . . . . 62

Performance and troubleshooting . . . . . . . . . . . . . . . . . . . . 62

HP technical references . . . . . . . . . . . . . . . . . . . . . . . . 63

HP Data Protector . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Oracle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Quest Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Open source tools . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Overview
Over 40% of enterprises use Oracle databases as part of their core applications. The availability
requirements of these core applications vary from less than three hours of unplanned downtime
per month to as little as five minutes per calendar year. Given these requirements, techniques for
backup and recovery of Oracle databases are a vital part of an IT organization’s data protection
and availability strategy. From an operational perspective, proper validation of both recovery times
and backup windows is required to support business-driven recovery time objectives (RTOs) and
recovery point objectives (RPOs). In addition, as the amount of information stored in databases
continues to grow (current compound annual growth rate is approximately 30%), higher-performing
backup and recovery methods are required to maintain these objectives.

The investment in a backup and recovery infrastructure is significant, and the cost of deployment and
management is the largest component of this investment. There are many hardware and software
components and procedural methods available to back up a database environment and plan for its
recovery. Hardware options include disk, tape, and other new hybrid technologies such as virtual
tape systems. There also are many software options for managing the process of backing up and
recovering the data. With so many choices to consider, determining which combination of hardware,
software, and procedures to use can be challenging. The actual implementation and integration can
prove to be difficult, and the overall results can be unpredictable.

This paper describes testing that used a combination of three Oracle 10g server configurations and
three backup infrastructures—an HP StorageWorks Enterprise Modular Library (EML) E-Series Tape
Library, an HP StorageWorks Virtual Tape Library System, and an HP Enterprise Virtual Array disk
storage system. The purpose of the testing was to develop best practices for the backup and restore
of an Oracle 10g environment using HP Data Protector backup software across all combinations of
the three backup infrastructures and the three Oracle 10g environments.

This paper addresses fundamental deployment and operational issues and provides:

• A set of test-proven best practices for the backup and recovery of Oracle 10g databases, including
methodologies for deploying disk, tape, and virtual tape; setting backup and recovery software
options; and determining optimum configurations.

• A comparison of the impact of running backups on Oracle application performance.

• A comparison of the impact of running Oracle applications on backup performance.

• Detailed comparisons of disk-to-disk, disk-to-tape, and disk staging (combination of disk-to-disk and disk-to-tape) backup and restore methodologies.

• The key planning and deployment considerations for Data Protector, Oracle 10g, and the server
and storage area network (SAN) infrastructure.

The test results, and specifically the methods and best practices derived from the testing, are described
in the following sections. This information is intended to facilitate both the productive planning and
the timely deployment of a fully operational HP server, storage, and backup infrastructure to ensure:

• Proper selection of the appropriate backup infrastructure.

• Effective deployment of the servers, storage technology, and software, and proper procedures
for backing up and recovering an Oracle 10g database in an enterprise-class, SAN-based
environment.

• Ease of overall operation.

• Use of proven procedures to perform backups in a timely manner and with a good understanding of the impact of backups on application performance.

• Use of proven procedures to restore data in full support of business-driven RTOs and RPOs.

The user of these best practices can accelerate time to deployment, reduce risks, minimize total costs,
and maintain overall application service-level objectives.

Solution configuration
Based on customer input, an enterprise environment was configured for this testing that was
representative of a typical production Oracle database environment. The key components of the
test environment included:

• An Oracle 10g production database, which was backed up and restored.

• The HP Integrity rx7620 Server, a mid-range server, to host the Oracle database. Two
configurations were used: a 64-bit, single-instance database and a two-node, 64-bit RAC
instance.

Note
Although the introduction of the rx7640 has made the rx7620 server obsolete, the best practices outlined in this
document are still pertinent.

• An HP ProLiant DL580 server to host multiple 32-bit Oracle database instances.

• An HP StorageWorks 8000 Enterprise Virtual Array (EVA8000) as the primary SAN-based disk
array that held the production Oracle database, logs, and so on.

• An HP StorageWorks 5000 Enterprise Virtual Array (EVA5000) disk storage system as a disk-to-disk backup target to show how an older EVA may be re-deployed in an existing infrastructure.

• An HP EML E-Series 103e Tape Library as the primary LTO-3 tape backup and restore device.

Note
HP has recently released LTO-4 tape technology which provides improved performance, encryption, and variable speed
technology. The use of LTO-4 will alter the results of this testing; however, all best practices are still applicable.

• An HP StorageWorks 6510 Virtual Library System (VLS6510) as the primary virtual tape backup
and restore device.

• The Red Hat Enterprise Linux (RHEL) AS 4.0 operating system on both the rx7620 and DL580 servers.

• HP Data Protector 6.0 as the backup application.

• Quest Software Benchmark Factory to create the online transaction processing (OLTP) data in
the databases and simulate 500- and 1,000-user workloads.

Configuring the hardware
HP constructed the configuration using Integrity rx7620 and DL580 servers and EVA8000 and EVA5000
disk arrays to best simulate an enterprise environment supporting different Oracle databases on
Itanium- and Xeon-based servers. See “Bill of materials” on page 47 for a complete list of the
hardware used.

Figure 1 shows the configuration of the servers used in this test environment. The most important
elements of the Oracle database server configurations are as follows:

• IA64 RAC—Oracle 10g Real Application Clusters (RAC) on RHEL4 U4 using both partitions of
the rx7620

• IA64 Single—Oracle 10g Single Instance on RHEL4 U4 using one partition of the rx7620

• IA32 Multi—Five Oracle 10g instances on RHEL4 U4 using a ProLiant DL580 G3

The Benchmark Factory, HP Data Protector 6.0, and HP Command View EVA servers also are shown
near the top of the environment configuration diagram. For EVA SAN connectivity, 2-Gb/s Fibre
Channel (FC) links were used. A description of the backup methodologies used for each of the
configurations is presented in the next section. These backup methodologies also are depicted on the
environment configuration diagram.

Backup methodologies
Disk-to-disk backups—Backing up Oracle 10g Release 2 to EVA5000

For this test, Oracle Recovery Manager (RMAN) performed a tape backup and Data Protector
translated the configured data streams into files and wrote them to the defined File Libraries. The I/O
load was balanced across host bus adapters (HBAs) and controller ports.

Disk-to-virtual-tape backups—Backing up Oracle 10g Release 2 to VLS

For this test, RMAN performed a tape backup using 12 streams to the VLS. The I/O load was
balanced across HBAs and controller ports.

Disk-to-tape backups—Backing up Oracle 10g Release 2 to EML

For this test, RMAN performed a tape backup using four streams to the EML. The I/O load was
balanced across HBAs and controller ports.
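
The stream counts above map directly to RMAN channel allocations. The following is a minimal sketch of the kind of run block used for the EML case; the channel names, backup specification name (EML_Full), and database name are placeholders, and the environment variables shown are the ones the Data Protector Oracle integration normally expects. The complete scripts used in this project appear in “Sample RMAN scripts” on page 50.

run {
  allocate channel 'dev_0' type 'sbt_tape'
    parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORCL,OB2BARLIST=EML_Full)';
  allocate channel 'dev_1' type 'sbt_tape'
    parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORCL,OB2BARLIST=EML_Full)';
  allocate channel 'dev_2' type 'sbt_tape'
    parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORCL,OB2BARLIST=EML_Full)';
  allocate channel 'dev_3' type 'sbt_tape'
    parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORCL,OB2BARLIST=EML_Full)';
  backup full format 'EML_Full<ORCL_%s:%t:%p>.dbf' database;
}

The VLS and disk (file library) backups follow the same structure; only the number of allocated channels (12 for the VLS) and the backup specification name change, because Data Protector presents all of these devices to RMAN through the same tape (SBT) interface.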

In Figure 1, the top-most (orange) arrows indicate the disk-staging data flow, the middle (white)
arrows indicate the disk-to-disk data flow, and the bottom-most (green) arrows indicate the disk-to-tape
data flow.

Figure 1. Environment configuration diagram

Hardware statistics
Database server configurations 1 and 2 (IA64 RAC and IA64 Single)

• HP Integrity rx7620
– Eight IA64 1.6-GHz processors in one partition
– Eight QLogic-based HP A6286A dual-port FC HBAs

– 64-GB RAM

– Two cells and two partitions

Database server configuration 3 (IA32 Multi)

• HP ProLiant DL580 G3
– Four Intel Xeon 3.0-GHz processors
– Two QLogic-based HP FC2214A dual-port FC HBAs

– 16-GB RAM

Storage

• EVA8000 (primary)
– 144 300-GB FC drives
– Dual controllers (active-active)

– One disk group (FC)

• EVA5000 (backup)

– 56 250-GB Fibre attached technology adapted (FATA) drives

– Dual controllers (active-active)

– One disk group (FATA)

Tape devices

• EML 103e

– Four LTO-3 drives

– 24 tapes at 400-GB native capacity

– Two FC paths

• VLS6510

– 24 250-GB serial advanced technology attachment (SATA) drives

– Emulating 12x LTO-3 drives with four FC paths

– 49 tapes at 100 GB each

Partitioning the rx7620


HP Integrity servers operate differently from ProLiant servers. This section describes how the Integrity
rx7620 server was configured for this project.

Configuring the management processor

For this test environment, the Integrity rx7620 was partitioned into two nPars, or server partitions. In
order to create partitioning from a remote system and to enable remote hardware management, the
management processor (MP) was configured for remote access as follows:

1. Connect a server running Windows or Linux to the management processor serial management
port.

2. Start a terminal program, such as Windows HyperTerminal or Linux Minicom, configure the
IP address with the lc command, and follow the on-screen prompts.

Note
You can configure the MP for dynamic host configuration protocol (DHCP) or static Internet protocol (IP) address. You
can also enable or disable telnet, SSH, or HTTPS remote access.

Creating nPars

If no partition exists, a new complex must be created by using the cc command as follows:

• Select cell 0 and save the configuration.

If a single partition exists, reset the partition for reconfiguration as follows:

1. Use the rr command to reset the partition.

2. Use the rs command to restart the partition.

3. Create a new complex with the cc command.

4. Select cell 0 and save the configuration.

On a server installed with nPar utilities, enter the following commands:

• parcreate -P nextPartition -c 1::: -u Admin -h <IPADDRESS>

• parstatus -u Admin -h <IPADDRESS>


The preceding procedure created a partition consisting of a single cell; after the procedure an
operating system was loaded onto the partition. When an operating system is loaded, you can install
the nPar command line utilities and connect to the MP to create the second partition. Alternatively,
you can install the nPar utilities on a server running Linux, HP-UX, or Windows and create the second
partition from a remote host.

One partition was created to provide access to all the on-board SCSI disks and an operating system
was loaded on the first SCSI disk. The first SCSI disk was then duplicated using the dd utility to the
remaining disks. Upon completion of the duplication, the system partitions were reset and the rx7620
was repartitioned into two partitions consisting of one cell each. Each new server partition then had
an identical bootable operating system because of the disk duplication effort. Alternatively, a network
install could have been performed on each partition if a PXE server had been set up.

Since no PXE server was used, disk duplication was the simplest method for preparing the operating
system disks for each server partition.
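
The duplication step described above can be performed with a command similar to the following sketch; the source and target device names are placeholders and will differ per system:

# dd if=/dev/sda of=/dev/sdb bs=1M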

SAN zone definition


Since multiple paths for each device can be mapped, zoning was needed to reduce the total
number of paths.

Important
Only one path should be used for tape devices since they are not supported in multipath configurations and sometimes cause
issues with the backup application installation or configuration.

Because the current release of Red Hat Enterprise Linux AS 4.0 is limited to 256 buses and multiple
buses are generated for each path from a host port to an EVA port, bus exhaustion can occur if the
EVA is not properly zoned. After zoning is introduced, it must be used in all cases for the devices
to be visible to one another.

Each rx7620 partition had four dual-port HP A6826A HBAs. Four ports per partition were explicitly
zoned to the EVA8000 and EVA5000. The last four were zoned to the VLS6510 and two of the last
four ports were also zoned to the EML E-Series 103e Tape Library.

The DL580 had two dual-port HP FCA2214A HBAs. Two ports were assigned to the EVA8000 and
EVA5000 and the other two were assigned to the VLS6510 and the EML E-Series 103e Tape Library.

Zoning overview

• To provide rx7620 SAN connectivity to EVA8000—Zones were established for four of the rx7620
HBA ports and each of the EVA8000 host ports.

• To provide DL580 SAN connectivity to EVA8000—Zones were established for two of the four
DL580 HBA ports and each of the EVA8000 host ports.

• To provide rx7620 SAN connectivity to EVA5000 for disk-based backups—Zones were


established for four of the rx7620 HBA ports and each of the EVA5000 host ports.

• To provide DL580 SAN connectivity to EVA5000 for disk-based backup—Zones were established
for two of the four DL580 HBA ports and each of the EVA5000 host ports.

• To provide rx7620 SAN connectivity to VLS6510—Zones were established for four of the rx7620
HBA ports and each of the VLS6510 host ports.

• To provide DL580 SAN connectivity to VLS6510 Virtual Library—Zones were established for two
of the four DL580 HBA ports and each of the VLS6510 host ports.

• To provide rx7620 SAN connectivity to EML E-Series 103e Tape Library—Zones were established
for two of the rx7620 HBA ports and each of the EML E-Series 103e host ports.

• To provide DL580 SAN connectivity to EML E-Series 103e Tape Library—Zones were established
for two of the four DL580 HBA ports and each of the EML E-Series 103e host ports.
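
On the Brocade-based switches used in this configuration, zones of this kind are typically defined from WWN aliases with Fabric OS commands. The following is a sketch only; the alias names and WWNs are placeholders, and it assumes a zone configuration named cft_fabric_a already exists on the fabric:

alicreate "rx7620_p1_hba1", "50:01:43:80:01:23:45:67"
alicreate "eva8000_ctlA_p1", "50:00:1f:e1:50:0a:bc:d8"
zonecreate "rx7620_p1_eva8000", "rx7620_p1_hba1; eva8000_ctlA_p1"
cfgadd "cft_fabric_a", "rx7620_p1_eva8000"
cfgenable "cft_fabric_a"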

Configuring the EVA8000 for primary storage


The EVA8000 configuration included:

• EVA Virtual Controller Software (VCS) 6.010

• An EVA8000 controller pair

• 12 EVA disk shelves

• 144 300-GB FC disks

• Three-phase 208-VAC redundant power

The FC connections were made to two Brocade-based HP 2/16N SAN switches and two
Brocade SilkWorm 3800s, configured in dual fabrics.

The EVA8000 presented nine RAID1 virtual disks, which were all from a single EVA disk group. This
is in line with a typical optimal flexible architecture (OFA) configuration. The disk group included
all 144 available FC disks and was configured for double-disk failure protection. The virtual disks
were presented to all host ports that were connected to any port of the EVA. The QLogic least
recently used (LRU) load balancing policy was used.

Each used host port was identified on the EVA and the operating system type was set to Linux. Each
virtual disk from the EVA had Preferred Path/Mode set to No Preference which enabled the Linux
QLogic driver to load balance the logical unit numbers (LUNs) equally, dividing the load across the
two controllers according to the need of each host.

Configuring the EVA5000 for disk backups


The EVA5000 configuration included:

• EVA VCS 4.001

• An EVA5000 controller pair

• Eight EVA disk shelves

• 56 250-GB FATA disks

• Three-phase 208-VAC redundant power

The EVA configuration consisted of one HP StorageWorks HSV110 controller pair and eight disk
enclosures, which were populated with 56 250-GB FATA drives. Both VCS 4.001 firmware (the latest
release at the time of printing) and VCS 3.028 (the previous release) were used.

The EVA5000 presented four RAID5 virtual disks, which were all from a single disk group. The disk
group included all 56 available FATA disks and was configured for no disk failure protection. The
virtual disks were presented to all host ports that were connected to the EVA. The HP/QLogic load
balancing driver was configured for the LRU load balancing policy.

Each used host port was identified on the EVA and the operating system type was set to Linux.
Two virtual disks from the EVA had Preferred Path/Mode set to Path A—Failover Only and two
were set to Path B—Failover Only, alternating the settings. This equally divided the load across
the two controllers according to each host and LUN mapping.

Configuring the HP StorageWorks EML E-Series 103e Tape Library


The HP StorageWorks EML E-Series 103e Tape Library was configured with four Fibre Channel HP
StorageWorks Ultrium 960 tape drives (LTO-3).

The e2400-FC interface controllers had six Fibre Channel ports. Four ports were for the back-end
tape devices, and the remaining two were for the SAN. All interfaces were 2 Gb/s, and each SAN
port on the Interface Controller was connected to separate fabrics in order to distribute the load
evenly across fabrics and HBAs.

The tape library was managed from a dedicated SAN management server using HP StorageWorks
Command View TL software with HP Command View EVA.

HP Data Protector 6.0 Cell Manager for Windows was used as the backup application with all the
latest patches applied. For clarification, the Cell Manager server manages the backup images and
the media where the images reside. Because there can be only one host per robotic device, HP Data
Protector asks for one of the hosts to be the robotic control host. The robotic control host moves the
media to the tape drives when backups or restores are activated. Each server in the environment was
configured with the Data Protector agent and was responsible for writing data directly to the tape
devices to avoid network backups.

Configuring HP VLS6510
The HP StorageWorks VLS6510 was configured as an Ultrium 960 tape library with 50 LTO-2 tape
slots and 12 tape drives. VLS uses the LTO-2 tape personality for LTO-2 and LTO-3 compatibility.

The VLS interface controllers had four FC ports and four SCSI ports. The four SCSI ports were for the
back-end MSA20 disk devices, while the four FC ports were for SAN connectivity. All FC interfaces
were 2 Gb/s, and each set of FC ports on the VLS interface controller was connected to separate fabrics
to distribute the load evenly across fabrics and HBAs.

The tape library was managed from a dedicated SAN management server. HP Command View TL
software was installed on the same SAN management server used for HP Command View EVA.

HP Data Protector 6.0 Cell Manager for Windows was used as the backup application with all the
latest patches applied. Each server in the environment was configured with the Data Protector agent
and was responsible for writing data directly to the tape devices to avoid network backups.

Configuring the software
See “Bill of materials” on page 47 for the complete list of software.

Linux kernel tuning was applied to accommodate the Oracle databases running on the hosts. Table 1
lists the Linux kernel 2.6 parameters that were modified for this testing. These settings were used
as best practices based on information provided from a previous project. The default values have
been included in Table 1.

Table 1. Altered kernel parameters

Tunable parameter Default value Value used in testing

net.core.rmem_default 110592 262144

net.core.rmem_max 131071 262144

kernel.sem 250 32000 32 128 250 32000 100 128

kernel.shmall 2097152 209715200

kernel.shmmax 33554432 24064771072

kernel.shmmni 4096 16384

fs.file-max 232233 658576

fs.aio-max-nr 65536 65535

net.ipv4.ip_local_port_range 32768 61000 1024 65000

vm.swappiness 10 30
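
These values are typically made persistent in /etc/sysctl.conf and loaded with sysctl; a brief sketch using two of the values from Table 1:

# echo "kernel.shmmax = 24064771072" >> /etc/sysctl.conf
# echo "vm.swappiness = 30" >> /etc/sysctl.conf
# sysctl -p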

Configuring the QLogic driver


Setting up QLogic dynamic load balancing

The Linux QLogic driver supports both an active-active configuration and an active-passive
configuration. The EVA8000 disk array was configured to balance the load across HBAs and
controllers with active-active enabled. The EVA5000 required an active-passive configuration, but
supports load balancing across different ports of the same controller. The latest QLogic/HP driver
can be obtained from the following HP website:

http://welcome.hp.com/country/us/en/support.html?pageDisplay=drivers

EVA8000 and EVA5000—active-active/active-passive

When using the QLogic driver for Linux with the EVA product line, a set of configuration utilities was
installed with the driver source. An initial ramdisk (initrd) was created as part of the post-installation
tasks of the package. To manually configure the driver options, we modified the hp_qla2300.conf
file in /etc. Example 1 shows changes made to the defaults.

Example 1. Modified /etc/hp_qla2300.conf file

qdepth = 16
port_down_retry_count = 8
login_retry_count = 8
failover = 1
load_balancing = 2
auto_restore = 0x80
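
Because the package builds an initial ramdisk at install time, the ramdisk may need to be rebuilt (and the host rebooted) after editing hp_qla2300.conf so that the new options are picked up at boot; a sketch using the standard RHEL 4 tool, with the running kernel version supplied automatically:

# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)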

Table 2 displays the available load_balancing parameters.

Table 2. QLogic driver load_balancing parameter descriptions


Type Policy Description

(0) Static None Finds the first active path or the first active optimized path for each LUN.

(1) Static Automatic Distributes commands across the active paths and available HBAs, such
that one path is used per LUN. Paths are automatically selected by
drivers for supported storage systems.

(2) Dynamic Least Recently Used (LRU) Sends command to the path with the lowest I/O count. Includes special
commands such as path verification and normal I/O.

(3) Dynamic Least Service Time (LST) Sends command to the path with the shortest execution time. Does not
include special commands.

A value of 2 was used for the LRU policy. In this way the paths selected were automatically balanced
across the HBAs and switches for the EVA8000, and load balanced across HBAs to the same
controller within the same switch for the EVA5000.

Oracle Cluster File System 2


The Oracle Cluster File System (OCFS) is required when using the RAC configuration where a
shared file system must be employed. OCFS provides a distributed lock manager (DLM) to
coordinate writes to the files it contains. OCFS2 was used for this environment and is currently
the only supported version of OCFS.

Note
OCFS, ASM, and raw devices can all be used with RAC, depending on platform and business requirements.

OCFS disk configuration

The virtual disks (Vdisks) are zoned so both hosts have visibility to all LUNs for use with RAC. A
single LUN contains u06 and u07, which hold the flash recovery area and the Voting and Cluster Ready
Services (CRS) files, respectively. Each LUN has a single partition, which is then formatted for OCFS.
Figure 2 shows the hosts and LUN presentation. See Figure 3 for OCFS format information.

Figure 2. Shared OCFS2 LUN presentation to RAC nodes

Setting up the OCFS Cluster File Systems

Each of the six EVA virtual disks was formatted as an OCFS file system using the defaults shown
in Figure 3.

Figure 3. Viewing OCFS formatted devices in the OCFS2 console
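
The same format can be performed from the command line; a sketch using mkfs.ocfs2, in which the device path and node-slot count are placeholders, the label matches the fstab example later in this section, and the block and cluster sizes are the values noted below:

# mkfs.ocfs2 -b 4K -C 64K -N 2 -L CFT106_DATA1 /dev/sda1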

The cluster size was set at 64 K with a block size of 4 K. The command line can also be used
to mount OCFS file systems. If this is a RAC configuration and the file system will be used to store the Voting Disk
file, Oracle Cluster Registry, data files, redo logs, archive logs, or control files, the file system should
be mounted with the datavolume and nointr options. For example:

# mount -o _netdev,datavolume,nointr /dev/cciss/c0d7p1 /data

The default /etc/fstab entries were modified to include the _netdev and datavolume options
as follows:

LABEL=CFT106_DATA1 /u02 ocfs2 _netdev,datavolume,nointr 0 0

The datavolume option is necessary when putting data files on an OCFS2 volume; it was not added
until OCFS version 2. The original OCFS was limited to Oracle database files, whereas OCFS2 is a
general-purpose cluster file system that can also hold files such as a shared Oracle Home.

Note
All of the OCFS2 file systems must have the _netdev option specified. This guarantees that the network is started before the
file systems are mounted, and that they are unmounted before the network is stopped. This is required to prevent OCFS2 from
fencing (panicking) the node during startup and shutdown.
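
In practice, the OCFS2 cluster stack is also configured to start at boot so that this ordering holds; a sketch using the init scripts shipped with the ocfs2-tools packages (the script and service names are those of the standard packages):

# /etc/init.d/o2cb configure
# chkconfig o2cb on
# chkconfig ocfs2 on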

More information is available in the Oracle Cluster File System (OCFS2) User’s Guide, available on
the following website:

http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_users_guide.pdf

Working with Benchmark Factory


Quest Software Benchmark Factory 5.1 was used to conduct the workload generation. Benchmark
Factory is a generic workload generation utility that can generate DSS and OLTP workloads.
Benchmark Factory also can generate a custom workload based on any trace generated by a
database or any custom SQL used.

OLTP workloads were created in accordance with the parameters defined in Table 3.

Table 3. Benchmark scale factors


Host Size User Load

IA64 RAC 3.1 TB 1,000

IA64 Single 1.4 TB 500

IA32 Multi 750 GB 500

Benchmark Factory scale factors are approximate and should not be used as absolute guides. The
example depicted in Figure 4 was expected to generate approximately 930 GB of data; in fact, this scale
factor generated 1.3 TB of data and 300 GB of indexes.

Figure 4. Benchmark Factory scale factors

The scale factor shown is an estimate only. The actual size of the database must also include indexes,
which may amount to as much as 33% of the total data size once the data has been completely
generated. The generated data alone may be as much as 20% larger than the estimated size.

After creating the database, the Oracle spfile parameters were tuned to give the best backup
performance during workloads. Table 4, Table 5, and Table 6 provide the specific
settings that were used during testing. Table 4 shows the options Benchmark Factory
can use to create the tables. Each of the settings can be changed to customize each table. These
are the optimal settings for this environment.

Table 4. Benchmark Factory table creation options

Object Type Creation parameters

C_ORDER_LINE Table tablespace bmf parallel (degree default instances default) nologging cache monitoring

C_STOCK Table tablespace bmf parallel (degree default instances default) nologging cache monitoring

C_DISTRICT Table tablespace bmf parallel (degree default instances default) nologging cache monitoring

C_CUSTOMER Table tablespace bmf parallel (degree default instances default) nologging cache monitoring

C_HISTORY Table tablespace bmf parallel (degree default instances default) nologging cache monitoring

C_ORDER Table tablespace bmf parallel (degree default instances default) nologging cache monitoring

C_ITEM Table tablespace bmf parallel (degree default instances default) nologging cache monitoring

C_WAREHOUSE Table tablespace bmf parallel (degree default instances default) nologging cache monitoring

C_NEW_ORDER Table tablespace bmf parallel (degree default instances default) nologging cache monitoring

Table 5. Benchmark Factory index creation options

Object Type Creation parameters

C_STOCK_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics

C_WAREHOUSE_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics

C_NEW_ORDER_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics

C_CUSTOMER_I2 Index tablespace bmf parallel (degree default instances default) nologging compute statistics

C_ORDER_LINE_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics

C_ORDER_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics

C_ITEM_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics

C_DISTRICT_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics

C_CUSTOMER_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics

Table 5 shows the modifiable options available in Benchmark Factory when creating indexes. These
options enable the user to tune the indexes to those commonly used or to use the default indexing
schemes when trying different database scenarios.

Oracle parameter changes

Table 6 lists the Oracle default parameters and the parameters that were used. These changes
were made to accommodate the high number of simulated users configured for the workload while
backups were occurring.

Table 6. Changed Oracle parameters

Oracle parameter Default Used

sort_area_size 65536 262144

parallel_max_servers 15 2048

parallel_threads_per_cpu 2 8

db_files 200 1024

processes 150 1250

dbwr_io_slaves 0 4

backup_tape_io_slaves FALSE TRUE

db_file_multiblock_read_count 8 128

cursor_sharing EXACT FORCE
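
As a sketch of how such changes are typically applied to the spfile (the values are taken from Table 6; some parameters require an instance restart to take effect):

SQL> ALTER SYSTEM SET parallel_max_servers=2048 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET db_file_multiblock_read_count=128 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET backup_tape_io_slaves=TRUE SCOPE=SPFILE;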

Configuring HP Data Protector 6.0 for Oracle backups
HP Data Protector setup
This section covers some Oracle-specific information for configuring Oracle backups with Data
Protector 6.0. Generic backup and restore information and details on the HP Data Protector setup
are provided in the HP OpenView Storage Data Protector Concepts Guide, available on the following
website:

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00751562/c00751562.pdf

Backups with Data Protector


Creating a backup specification

Data Protector Backup Specifications determine the way a backup is executed. Although the
backup specifications have similar attributes, the script template determines how the backup will
be performed. The EML, VLS, and DISK backups share similar attributes because the backup
specification used is an application backup of type oracle8; however, different devices are used to
back up the database.

A backup specification is broken down into the following sections:

• Source—the source database to back up and which objects will be part of the backup

• Destination—the device where the backup will be sent

• Options—the backup specification specific options, similar to the template options

• Schedule—the schedule set forth in the template, which can be modified on a per specification
basis

In order to define the backup specification in our environment:

1. Right-click the group to which the specification is being added.

2. Select Add Backup.

A dialog window similar to the template interface opens.

3. Select the Backup type from the available templates or select Blank Backup, if no template exists.

The screen shot in Figure 5 depicts the Disk Full Backup option.

4. Click OK, and the application prompts you for Database Server information.

5. Enter the Database Server information.

• If there is no need to modify any of the Destination information, click Finish.

• If there is a need to modify the information, click Next.

6. The application presents a window for providing Destination information or making modifications
to the disk or tape devices listed. Select the appropriate boxes for attributes of the backup. The
Load Balancing configuration on this screen may be modified. Click Next when done.

7. Modify the Backup Specification Options if needed. When done here, click Next.

8. The application presents a scheduling window where times and dates can be added to the
schedule and full or incremental backups can be specified. Click Finish when done.

A screen shot of the Create New Backup specification screen is shown in Figure 5. For Oracle, the
Filesystem check box does not need to be selected. Also, selecting the Schedule check box will start
the Schedule configuration near the end of the backup specification configuration. If a schedule for
the backup specification or template is not required at this time, do not select the Schedule check box.

Figure 5. HP Data Protector 6.0 Create New Backup interface

When the specification is complete, right-click on the backup specification in the group defined and
use the Start Backup command to initiate the backup. Full backups are the default.
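
The same backup can also be started from the Cell Manager command line with the omnib utility; a sketch, assuming a backup specification named Disk_Full (the option names should be verified against the omnib reference page for the installed version):

# omnib -oracle8_list "Disk_Full" -barmode full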

Data Protector disk-to-disk backup


Setting up the disk file libraries

HP Data Protector facilitates disk-to-disk backups using file libraries, or disk backup repositories. In
order to make effective use of the SAN, this repository typically will be local to the host being backed
up. Otherwise, backups can be performed to disk devices across the network.

The final screen reviews choices and allows any changes needed. Further changes are allowed if
required, even after the file library has been created. The file library is now ready for use as a
device. A template or backup specification can be created or modified to use the new device.

Effectively using file libraries and tape devices

After creating the file library, modifying options may improve performance. For example, the
default block size for all Data Protector backups is 64 KB and the maximum block size is 1,024
KB. Performance of the file library is a function of the device used, the number of writers defined,
and the number of streams per writer.

A description of the disk writer and file library settings follows. For more information on these
settings, see “Examples” on page 50 and
“Configuring Oracle Recovery Manager (RMAN)” on page 49.

• Block size—The block size set for each disk writer, which behaves like a tape device.

• Number of writers—The maximum number of disk writers to create as part of the file library.

• Streams—Enables multiplexed I/O to each disk writer, allowing more than one data stream to be written
to the file the disk writer outputs during the backup; good for small files and incremental backups
but typically a problem for larger files and full backups.

After the backup is tuned, using the file library to create a virtual full backup or disk-staging backup
solution is very simple. Table 7 displays the parameters that were used in the disk-to-disk backups.

Table 7. Disk-to-disk backup parameters

File library parameter Default Used

Block size 64 KB 512 KB

Number of writers 1 8

Streams 3 1

File library devices differ from tapes in that they are not shared among hosts.

In Data Protector, the file library defines the relationship between the host and a disk storage
device, but Data Protector enables RMAN to treat the library as if it were a tape device. The hosts
share the tape devices. Backup destinations cannot be modified using global parameters. Instead,
destinations are modified based on the backup setting, and then based on the host. The disk devices are
presented explicitly to each host and are separate in every respect. Although the devices may be
listed for each host, HP recommends that only the host which has direct SAN access to the device
use it during a disk-to-disk backup.

HP recommends that the Device Auto Configuration wizard always be used to create tape libraries
and their device associations.

Setting up Oracle restores


There are two options for performing restores to the Oracle Database servers in this environment.

1. Create a restore job on the cell manager.

2. Manually create a script on the cell manager and execute the restore with RMAN at the
command line.
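
For the second option, the restore script is ordinary RMAN run against a mounted instance. The following is a minimal sketch only; the channel parms and backup specification name are placeholders, and it assumes the control file is already in place or has been restored as noted in the following note:

run {
  allocate channel 'dev_0' type 'sbt_tape'
    parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORCL,OB2BARLIST=EML_Full)';
  restore database;
  recover database;
  sql 'alter database open';
}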

RMAN and Data Protector automate the restore process of demultiplexing files and mounting the
required backup images, regardless of media type. This is especially helpful with a mix of disk
backups and tape backups.

Note
For the restore to be successful, the control file from the last backup must be available or you must have an RMAN catalog
configured.

OLTP workload results


The Benchmark Factory testing demonstrates the impact on the OLTP workload generated against
each database. Table 8 displays the results. This is a standard workload, and the user levels
exercised were 500 and 1,000 users, depending on the system. Table 8 shows the transaction per
second (TPS) rates for the OLTP baseline (no backup running) and the rates with the different
backups running.

Table 8. OLTP workload impact by backup method


Host type  Database size (GB)  Number of users  OLTP baseline (TPS)  Disk backup (TPS)  EML backup (TPS)  VLS backup (TPS)  Average percent impact per user

IA64 RAC  3,120  1,000  52  29 (44%)  26 (50%)  33 (36.54%)  0.436%

IA64 Single  1,450  500  51  46 (9.8%)  35 (31.37%)  30 (41.18%)  0.549%

IA32 Multi  750  500  21  17 (9.8%)  20 (4.76%)  20 (4.76%)  0.019%

The IA32 Multi system showed the least impact to any of the users, but the TPS was considerably less
than either the IA64 RAC or IA64 Single system.

The IA64 RAC system performed well, providing less impact to the users than the IA64 Single system
which was only able to keep up with the 500-user loads.

The IA64 Single system also showed a low impact, but affected the users similarly to the IA64 RAC
system for the given number of users.

Oracle backup and restore
The major goal for the project was to back up the Oracle databases from each server directly to the
tape, virtual tape, and disk devices. The backups were conducted in two scenarios:

1. Perform a backup without a simulated user load and measure the backup rate (GB/hour).

2. Perform a backup with a simulated user load and measure the backup rate (GB/hour) and the
impact on the user workload (TPS).

During the first scenario, the database had no workload applied and the backup was performed
to disk, tape, and virtual tape. For the second scenario, the database was put under a peak OLTP
workload and then backed up to disk, tape, and virtual tape. This was done in order to observe
the interaction between the workload and the backup and also to understand what impact the
backup was having on the user experience. A total of 200 data files were backed up using RMAN
through HP Data Protector.

A combined total of approximately 5 TB was backed up for each server using the methodologies
in this paper. Data was sent to multiple drives over multiple channels, where the media allowed,
to achieve as high a throughput as possible for each configuration. One channel was configured
per tape device and two streams per disk device (also known as multiplexing) during the backup.

Restores were done from each backup device to observe the performance of each methodology.
The most important metric captured was the actual speed of the restore as a factor in the overall
time-to-recover (TTR). This allows us to gauge the possible performance of each backup technology in
an environment.

Backup and restore performance results


This section discusses testing of three backup methodologies:

• Disk-to-disk backups—Backing up Oracle 10g Release 2 to EVA5000

• Disk-to-virtual-tape backups— Backing up Oracle 10g Release 2 to VLS

• Disk-to-tape backups—Backing up Oracle 10g Release 2 to EML

The tests demonstrated:

• Successful backup at the rate of ~1 TB per hour.

• Successful recovery of the database after simulated catastrophic data corruption.

EVA performance results


The performance of the EVA5000 and disk-to-disk backup methodology is discussed in the next
section. To understand the overall backup capabilities of the EVA5000 and FATA disks, the
performance must first be characterized.

EVA5000 raw performance characterization

To facilitate raw write testing of the EVA5000, the low-level UNIX utility dd (disk duplicate) was
used. As a tool, dd is very useful because it performs only sequential I/O but supports a modifiable
block size. This testing provided baseline performance for the tested configuration and showed the
raw performance of the EVA5000 as accessed from a host. An example of using the dd utility
is shown below:

dd if=/dev/zero of=/backup1/test.fil bs=256K

The raw sequential large-block performance yielded approximately 190 MB/s for a single port
on a single HSV110 controller. This performance doubled to 380 MB/s for two simultaneous
dd commands to two different LUNs owned by separate HSV110 controllers. These numbers will
change based on the type of disks used. The overall performance using FATA drives is lower than
performance using FC drives.
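
The two-stream measurement can be reproduced by running two dd commands concurrently against LUNs owned by different controllers; a sketch in which the target file systems are placeholders:

# dd if=/dev/zero of=/backup1/test1.fil bs=256K &
# dd if=/dev/zero of=/backup2/test2.fil bs=256K &
# wait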

EVA8000 raw performance characterization

Raw write testing of the EVA8000 was performed using the dd utility with a command similar to that
used for EVA5000 testing.

The raw sequential large block performance yielded approximately 190 MB/s for a single Vdisk
using multiple HBAs and Controllers. This performance doubled to 380 MB/s for two simultaneous
dd commands to two different LUNs. The I/O load was balanced across HBAs and controller host
ports to avoid bottlenecks.

Disk-to-disk backup and restore results using EVA5000


Figure 6 shows the configuration used for testing disk-to-disk backup.

Figure 6. Disk-to-disk backup configuration

Figure 7 depicts the speed of the disk-to-disk backups using the EVA5000 while under load. These
results were generated by performing a disk backup with a Data Protector block size of 512 KB with
and without a workload applied. Table 9 lists the backup results for disk-to-disk backups using
EVA5000, VCS 4.001, with OLTP load.

Figure 7. Disk backup with OLTP load

Table 9. Backup results for disk-to-disk backups using EVA5000, VCS 4.001, with OLTP load
Host type  Database size (GB)  Channels  Backup LUNs  Backup time (hours:minutes)  Backup rate (GB/hour)

IA64 RAC 3,123 12 2 10:05 310

IA64 Single 1,382 12 1 6:26 215

IA32 Multi 445 8 1 4:21 102

Figure 8 shows an example of the performance of the IA32 Multi backup to the EVA5000 array.

Figure 8. IA32 Multi disk backup performance

Examination of the backup time indicates that the target backup of ~1 TB per hour was not
achieved in the configuration with the OLTP workload. For comparison, Table 10 shows the backup
performance for this EVA5000 configuration, without the workload.

As indicated in Figure 8, the backup/restore rates of the two IA64 systems were similar. While there
is no noticeable contention from the EVA5000 cache, throughput is near the maximum when using
VCS 4.001. The IA64 RAC backup rate was approximately 150% of the speed of the IA64 Single system,
which ran at 215 GB/hour. These results are typical of backups run during peak workload times
when using FATA disks.

Figure 9 and Table 10 show the results of testing backup without the OLTP load. The IA64 systems
outperformed the IA32 system but still were not able to hit the target of ~1 TB/hr. A comparison of
backup times achieved during this and the previous test demonstrates the impact of the workload
on achieving specific backup times.

Figure 9. Backup results for disk-to-disk backups using EVA5000, VCS 4.001, without OLTP load

Table 10. Backup results for disk-to-disk backups using EVA5000, VCS 4.001, without OLTP load
Host type  Database size (GB)  Backup LUNs  Backup time (hours:minutes)  Channels  Backup rate (GB/hour)

IA64 RAC 3,146 2 8:40 12 363

IA64 Single 1,379 1 4:46 12 289

IA32 Multi 483 1 1:46 8 141

In each backup scenario, the EVA firmware cannot optimize backup performance because cache
mirroring cannot be turned off in VCS 4.001. Cache mirroring enables the EVA to provide full
redundancy with active-active failover and multipath I/O (MPIO) capabilities. Figure 10 shows the
same type of backup but uses VCS 3.028 with cache mirroring disabled for the backup LUNs.

Figure 10. Backup results for disk-to-disk backups using EVA5000, VCS 3.028, without OLTP load

Table 11. Backup results for disk-to-disk backups using EVA5000, VCS 3.028, without OLTP load
Host type  Database size (GB)  Backup LUNs  Backup time (hours:minutes)  RMAN channels  Backup rate (GB/hour)

IA64 RAC 3,090 2 4:24 12 702

IA64 Single 1,373 1 2:46 12 508

IA32 Multi 750 1 2:10 8 245

Using the earlier firmware resulted in improved backup times, nearly doubling the throughput for
each system. Most of the improvement is attributable to disabling controller cache mirroring. This
allowed the backup to be streamlined and did not congest one of the controller host ports with
cache mirroring I/O.

Figure 11. Restore results for disk-to-disk backups using EVA5000, VCS 4.001, off-line restore

Table 12. Restore results for disk-to-disk backups using EVA5000, VCS 4.001, off-line restore
Host type  Database size (GB)  Backup LUNs  Restore time (hours:minutes)  RMAN channels  Restore rate (GB/hour)

IA64 RAC 3,123 2 5:54 12 559

IA64 Single 1,383 1 2:05 12 432

IA32 Multi 450 1 7:36 8 399

Figure 12 below shows the EVA performance during the IA32 Multi system restore.

Figure 12. EVA performance during the IA32 Multi system restore

The IA64 Single and the IA64 RAC outperformed the IA32 Multi; however, the IA64 RAC achieved
the best throughput. This was attributable to the fact that the RAC was restoring to two EVA8000
LUNs while the other systems were each using a single LUN.

The IA32 Multi system was configured with known limitations:

• The disk I/O was shared on the same HBA for backups; tape I/O was assigned to a different
HBA.

• Bigfile tablespaces were used instead of more parallel reads from smallfile tablespaces.

• There was no striping across multiple LUNs using disk management software (Oracle ASM).

The limited capability of the hardware and the bigfile tablespace used to store the entire 150-GB
database resulted in long backups and restores using only one channel. Simultaneous restores of the
IA32 Multi system were performed causing simultaneous reads from the single backup LUN.

These results indicated that:

• Using bigfile tablespaces can impede overall restore results if there are not enough tablespaces to
spread across multiple devices, that is, smallfile tablespaces or partitioned bigfile tablespaces.

• Using VCS 4.001 or higher code and RAID1 on the production EVA will yield the best protection
from failures, because cache mirroring is enabled and controllers are not taxed with RAID5
parity overhead.

• Using 3.028 firmware and RAID0 LUNs on the disk-to-disk (D2D) target yields the best
performance, because the EVA5000 controllers operate without cache mirroring. However, this
configuration does not provide recovery from failures.

• Properly balancing datafiles/tablespaces to RMAN channels and tape devices yields the best
results.

• Disk striping across multiple LUNs enables the fastest possible reads.

Disk-to-tape backup and restore performance results, EML E-Series


Figure 13 shows an example of the disk-to-tape SAN backup configuration used to test backup and
restore performance. In this case, a tape backup was performed with Data Protector using a block
size of 512 KB, both with and without the OLTP workload applied.

Figure 13. Disk-to-tape backup diagram

Figure 14 and Table 13 show the results for disk-to-tape backup using EML 103e, with OLTP load.

Figure 14. Backup results for disk-to-tape backup using EML 103e, with OLTP load

Table 13. Backup results for disk-to-tape backup using EML 103e, with OLTP load
Host type  Database size (GB)  RMAN channels  Backup LUNs  Backup time (hours:minutes)  Backup rate (GB/hour)

IA64 RAC 3,128 4 2 7:09 384

IA64 Single 1,383 4 1 6:44 206

IA32 Multi 450 4 1 5:42 79

The IA32 system performed poorly during this backup. The OLTP workload generated heavy
read/write I/O against the IA32 Multi system while its bigfile tablespaces were being backed up
to four tape devices; this slowed down the overall backup. Theoretically, the system can attain
throughput of approximately 80 MB/s, or 216 GB/hour, per LTO-3 drive. However, during peak
workload with one file containing most of the data, the system attained throughput of approximately
25 MB/s, or 80 GB/hour, with simultaneous read/write operations occurring outside of the backup.

The IA64 systems performed well during the backup because multiple data sources (for example, the IA64
RAC has two data LUNs and many FC paths) allowed for more read throughput.

The system backups were then tested without the OLTP load. Table 14 shows data from tests on
the three host systems.

Figure 15. Backup results for disk-to-tape backup using EML 103e, without OLTP load

Table 14. Backup results for disk-to-tape backup using EML 103e, without OLTP load
Host type  Database size (GB)  Backup LUNs  Backup time (hours:minutes)  RMAN channels  Backup rate (GB/hour)

IA64 RAC 3,150 2 5:32 4 568

IA64 Single 1,380 1 3:11 4 432

IA32 Multi 490 1 2:10 4 223

Without the OLTP load, performance of each system improved. The IA32 Multi system is not as fast
as the IA64 Single or the IA64 RAC system. However, it saw the greatest performance improvement
when compared with results from the backup performed while under the workload. It is easy to
understand why backup while under heavy load is not advised.

In testing restore times using EML tape, each restore was done off-line; therefore, each server had
more resources available for the restore than it would for the backup. The IA32 system could have
attained even higher performance if more partitions had been employed for the bigfile tablespace, or
if smallfile tablespaces had been used for the single 150-GB set of data tables in each database.

The results of tests using off-line restore mode are displayed in Table 15 and in Figure 16.

Figure 16. Restore results for disk-to-tape backup using EML 103e, off-line restore

Table 15. Restore results for disk-to-tape backup using EML 103e, off-line restore
Host type  Database size (GB)  Backup LUNs  Restore time (hours:minutes)  RMAN channels  Restore rate (GB/hour)

IA64 RAC 3,130 2 3:25 4 557

IA64 Single 1,388 1 1:42 4 537

IA32 Multi 455 1 2:25 4 371

These results demonstrated:

• Tape backup performance is sensitive to the layout of the data paths and device visibility.

• If the proper size files are used, tape streaming can perform at acceptable levels of I/O.

• Even in cases where poor RMAN channel balancing occurs, thus impacting backup, tape
restores are fast.

Backups to the VLS6510 with HP Data Protector using a block size of 512 KB, both with and without an applied
workload, were also tested. Figure 17 depicts a typical configuration.

Figure 17. Disk-to-tape backup diagram

Figure 18 and Table 16 display results of the test.

Figure 18. Backup results for disk-to-tape backup using VLS6510, with OLTP load

Table 16. Backup results for disk-to-tape backup using VLS6510, with OLTP load
Host type      Database size (GB)   Backup LUNs   Backup time (hours:minutes)   RMAN channels   Backup rate (GB/hour)
IA64 RAC       3,135                2             5:08                          12              610
IA64 Single    1,390                1             3:39                          12              380
IA32 Multi     455                  1             3:20                          8               143

Even under conditions in which the workload degrades backup performance, the IA64 RAC system
achieved more than half of the ~1 TB/hour target. To isolate the effect of the workload, the tests
were then repeated without it.

Figure 19 and Table 17 show the performance of the VLS backup without the OLTP workload.

Figure 19. Backup results for disk-to-tape backup using VLS6510, without OLTP load

Table 17. Backup results for disk-to-tape backup using VLS6510, without OLTP load
Host type      Database size (GB)   Backup LUNs   Backup time (hours:minutes)   RMAN channels   Backup rate (GB/hour)
IA64 RAC       3,150                2             3:20                          12              943
IA64 Single    1,380                1             2:15                          12              612
IA32 Multi     483                  1             1:14                          8               389

These backup results demonstrate the high throughput that VLS can deliver when not impeded by
peak workloads.

• The IA32 Multi system realized the largest percentage increase in throughput over previous tests.

• The IA64 RAC system achieved the targeted ~1 TB/hour throughput, demonstrating that, given the
proper configuration, the VLS can attain the objective.

• Performance of each system improved substantially over the throughput for D2D and disk-to-tape
(D2T) EML backups.

If more than two LUNs had been used for reading, the backup could have achieved higher read
throughput from the EVA8000.

Figure 20 depicts the load on the EVA8000 during the VLS backup of the IA64 Single system. The
EVA8000 has plenty of available throughput during the backup.

Figure 20. EVA8000 performance during the VLS backup of the IA64 Single system

The VLS6510 increases the speed of disk-to-tape configurations. Since the VLS is capable of emulating
multiple types of tape devices and multiple libraries, the VLS enables a very flexible tape solution.
More information on the VLS6000 is available at the following website:

http://h18004.www1.hp.com/storage/disk_storage/disk_to_disk/vls/6000vls/index.html

Figure 21 and Table 18 show the VLS restore results, demonstrating both throughput and restore times.

Figure 21. Restore results for disk-to-tape backup using VLS6510

Table 18. Restore results for disk-to-tape backup using VLS6510

Host type      Database size (GB)   Backup LUNs   Restore time (hours:minutes)   RMAN channels   Restore rate (GB/hour)
IA64 RAC       3,135                2             3:48                           12              823
IA64 Single    1,452                1             2:18                           12              631
IA32 Multi     450                  1             0:54                           8               496

The restore results are typical of the VLS6510. The backups, both with and without load, showed
much higher throughput than those performed using the other backup mechanisms in this environment.
The IA32 Multi system also performed well, with restore performance at nearly 500 GB/hour. This is
especially significant because bigfile tablespaces were used; even so, the database restores were
parallelized. If several smallfile tablespaces or many bigfile tablespaces had been used, the backup
and restore times would have been even better.

Several things were learned from these results:

• Because backups are sensitive to the layout of the data paths, device visibility, and the ratio of
the number of files to RMAN device channels, using a minimum of two dual-channel FC HBAs and
multiple data files will improve overall performance and lower the impact on the database.

• The VLS allows shorter backup windows than traditional tape.

• Configuring many emulated tape devices on the VLS is essential for achieving maximum
performance from the VLS.

• The use of four MSA20 disk shelves, the maximum for the VLS6510, is required to achieve the
highest throughput of the VLS6510. Also, these shelves are essential when emulating multiple
types of libraries, tape devices, and media.

Best practices
During testing, several best practices were developed to improve backup and recovery performance
for each scenario.

Initial tuning
Tuning the Data Protector buffer configuration

Buffer configuration is usually second in overall effort to device configuration in tuning Data Protector.
Several parameters were tuned to achieve the highest levels of throughput in this configuration.

The parameters that provided optimal performance in this configuration are displayed in Table 19.
Table 19. Default versus test settings for device parameters

Device parameter Default Used

Block size 64 KB 512 KB

Number of writers 1 8

Streams 3 1

Load balancing N Y

Load balancing min 1 4

Load balancing max 4 4 or 12

Drive_buffering 0 1

Tuning block settings

In addition to the Data Protector device parameters listed in Table 19, operating-system-level tape
block settings for the LTO tape devices were configured using the /etc/stinit.def file. The following
settings were used in the test environment:

• Device Block Size—Setting this to the largest block size that the device can handle will improve
overall backup times if the block size can be met on every write.

• Number of Writers—This was increased to allow writing to more channels simultaneously.

• Streams—This controls whether or not to multiplex writes to the media. For many small files this
can be helpful, but when backing up files in the GB range this setting will hurt performance.

• Load Balancing Min/Max—The number of channels that can be used at a given time is
limited by the actual number of physical devices available. Enabling load balancing (Y) allows
devices to be used as needed.

• Block size—This is a stinit.def setting. This is used by stinit to set each tape device’s defaults. A
setting of 0 is used so that the block size is automatically determined at write time.

• Drive-buffering—This is a stinit.def setting. This is used by stinit to set defaults for each tape
device. Simply adding this to the stinit device definition will enable hardware buffering for the
LTO-3 tape device. (This parameter can only be used if the drive is buffer capable.)

Best practices for disk-to-disk backups on the EVA5000 disk array


To ensure optimal performance:

• When using VCS 3.028, use RAID0 for backup target LUNs.

• Use RAID1 to provide the lowest write penalty for data protection on VCS 4.001, especially
when using FATA drives.

• Use VCS 3.028 and disable cache mirroring for backup LUNs.

Note
VCS 4.00x includes several critical bug fixes and other improvements; read the VCS 4.001 and 3.028 release notes
carefully before choosing to use an earlier firmware version.

• Use as many disks as possible for a given disk group. Create disk groups of at least 56 FATA
disks to achieve acceptable throughput per disk group.

• Use two or more LUNs for each host to spread data streams across controllers for improved
bandwidth utilization.

Best practices for disk-to-tape backups on the EML tape library


To ensure optimal performance using a disk-to-tape configuration:

• Tune so that drives stream as quickly as possible.

– The EML tape library is capable of approximately 320 MB/s to all four LTO-3 drives. While
the full speed of the drives may not be reached, efficient streaming will allow the tapes to
spin enough to perform the backup quickly without reaching the maximum speed of the
tape devices.

• Note the size of files being backed up.

– Since tapes need to stream, they will perform better when handling more data. Using many
small files will create issues as the tape will stop writing after each file is written, adding
latency and not allowing efficient streaming. By multiplexing many small files to one drive at
the same time, Data Protector can create a larger piece of data to keep the tape streaming.
However, because Data Protector then has to demultiplex the data, this should be used only
when necessary as it will impact restore times.

• Use only one stream per LTO-3 drive when using large files to ensure maximum performance
per channel from RMAN.

– This helps both the reading of the database and the speed at which Data Protector writes
that data to tape.

Best practices for disk-to-virtual-tape backups on the VLS virtual tape library

Several settings were adjusted in order to achieve optimum performance. These settings should be
tuned for the specific environment, and, when possible, validated first in a test environment. The
VLS6510 uses four MSA20 disk arrays to emulate tape devices.

Table 19 depicts the parameters which were tuned to achieve the optimum level of throughput in
this configuration.

• Block Settings—The operating-system-level tape block settings were configured using the
/etc/stinit.def file to specify settings for the LTO tape devices.

• Number of Devices—The number of devices emulated has an impact on overall performance of
the VLS. In general, the best performing configuration is a 3:1 or 4:1 ratio of emulated devices to
MSA20 disk shelves attached to the VLS interface controller.

Best practices for using Oracle Recovery Manager


Use an RMAN catalog database

An RMAN catalog database (a registration sketch follows this list):

• Provides redundancy to your HP Data Protector catalog.

• Provides retention for information otherwise backed up in control files.

• Enables restoration of required image copies when an Oracle Data Guard physical standby
database is created.

• Can add protection against and simplify recovery of lost control files.
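
A minimal sketch of creating and registering a recovery catalog follows. The catalog owner rman, its
password, and the catalog database alias catdb are hypothetical names; the catalog schema also
needs a default tablespace with quota assigned.

-- In the catalog database, create the catalog owner (names are examples)
CREATE USER rman IDENTIFIED BY rman_pwd;
GRANT recovery_catalog_owner TO rman;

$ rman TARGET / CATALOG rman/rman_pwd@catdb
RMAN> CREATE CATALOG;
RMAN> REGISTER DATABASE;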

Protect your backup repositories, RMAN and Data Protector catalog

Back up any repositories that have media information on them on a regular basis. Create a binary
copy of the control file if not using an RMAN catalog.
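
For example, a binary control file copy can be created from SQL*Plus; the destination path is
illustrative:

ALTER DATABASE BACKUP CONTROLFILE TO '/backup/ctl/ORDB1_control.bkp' REUSE;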

Enable block change tracking to improve the speed of incremental backups

Without a block change tracking (BCT) file, an incremental level 1 backup must read the entire
database and can take as long as a full level 0 backup. Also, ensure the BCT file is included in
the backup.
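
Block change tracking can be enabled from SQL*Plus; the tracking file location shown is an example:

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct/change_tracking.f';
-- Verify that change tracking is active
SELECT status, filename FROM v$block_change_tracking;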

Use flash recovery for online point-in-time recovery and transaction rollback tracking

Using flash recovery is recommended for online recovery because it helps avoid the need to restore
from backups. Flash recovery should be part of any storage growth planning, because it can use a
large amount of storage; the actual storage required depends on the number of days of data
collection and the database transaction rate. Configuring even a small flash recovery area ensures
that, at a minimum, you have a recent copy of a binary control file.
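
A hedged example of configuring a flash recovery area from SQL*Plus follows; the size and path are
illustrative and should be driven by the retention and transaction-rate planning described above:

ALTER SYSTEM SET db_recovery_file_dest_size = 200G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u02/flash_recovery_area' SCOPE=BOTH;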

Choose effective backup policy types

Incorporating full and incremental backups should help ease recovery, but one policy does not fit all.
Understanding how and when to use cumulative as compared to differential backups is important
for your recovery strategy.
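
For example, RMAN distinguishes differential and cumulative incremental backups as follows:

# Differential level 1: blocks changed since the most recent level 1 or level 0
BACKUP INCREMENTAL LEVEL 1 DATABASE;
# Cumulative level 1: blocks changed since the most recent level 0 only
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;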

Maintain your Oracle backup images effectively

Use the Oracle 10g incremental merge to create an up-to-date, full image copy from
incremental backup files. Unless all the archive logs since the previous full backup are available,
creating frequent full backups does not guarantee recovery, even with incremental files available.
If using disk backups as the primary recovery option, move old on-disk backups to tape with the
RMAN BACKUP ... BACKUPSET command, or use HP Data Protector Media/Object Copy or
Media/Object Consolidation, to manage space.
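
A hedged sketch of the incrementally updated (incremental merge) backup pattern, followed by
moving on-disk backup sets to tape, is shown below; the tag name is an example:

RUN {
  # On the first run this creates the level 0 image copy; on later runs it
  # rolls the copy forward and takes a new level 1 incremental
  RECOVER COPY OF DATABASE WITH TAG 'img_upd';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'img_upd' DATABASE;
}
# Move older on-disk backup sets to tape to reclaim space
BACKUP DEVICE TYPE sbt BACKUPSET ALL;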

Test copies of backups

Restoring backups to an alternate location and attempting to start the restored database is the best
way to test backups. A second method for testing the backup is to use the RMAN validate
command. Finally, use the RMAN crosscheck command to verify that backup, data, and archive
log files are still located on the target media.
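
For example, the following RMAN commands validate and crosscheck existing backups without
performing an actual restore:

# Read the backups that would be used for a restore and check them for corruption
RESTORE DATABASE VALIDATE;
# Check the live database and archive logs without creating a backup
BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
# Confirm that backup pieces and archive logs still exist on the target media
CROSSCHECK BACKUP;
CROSSCHECK ARCHIVELOG ALL;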

Manage your archived online logs and keep them safe

Ensure your archived logs are part of your full and incremental backup. Also, create duplicate
copies of media containing archive logs. Manage the archive log space by deleting archive logs
as part of the backup.
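
For example, archive logs can be included in every backup and removed from disk once they are
safely backed up; the retention count shown is illustrative:

# Back up the database and all archive logs, then delete the logs from disk
BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
# Alternatively, back up only archive logs that do not yet have two backup copies
BACKUP ARCHIVELOG ALL NOT BACKED UP 2 TIMES;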

See Appendix B, “Configuring Oracle Recovery Manager,” for additional RMAN configuration
information.

Conclusion
This paper demonstrates how to properly architect, successfully deploy, and productively use a
fully operational HP server, storage, and backup infrastructure for an Oracle 10g environment. In
addition, specific detailed examples illustrate how to customize and integrate HP Data Protector with
Oracle 10g and RMAN to ensure seamless deployment and operation.

Key planning considerations include:

• Selecting the appropriate backup methodology. Each one has a unique set of strengths and
drawbacks pertaining to function, performance, management, reliability, and ease of integration.

• Choosing the appropriate backup types, incorporating full and incremental backups as needed,
as well as knowing when to use cumulative or differential backups. In general, the backup type is
determined by both database usage and recovery objectives.

• Simplifying the environment by using zoning to narrow device presentation to hosts, and sharing
tape devices between SAN connected hosts. This also helps avoid backing up across congested
or high-latency local area networks.

Key operational considerations include:

• Enabling Block Change Tracking to improve the speed of incremental backups.

• Using Oracle Flash Recovery to provide online point-in-time recovery to the transaction level and
eliminate the need to access off-line backups.

• Engaging in periodic testing of backups by restoring them to an alternate location and attempting
to start the restored database.

Key maintenance considerations include:

• Backing up the Data Protector internal database on a regular basis.

• Using an RMAN Catalog database to provide redundancy for the Data Protector internal
database, to protect against lost control files, and to restore image copies to create an Oracle
Data Guard physical standby database.

• Protecting both RMAN and Data Protector by backing up the RMAN catalog and keeping
redundant copies on separate media.

Understanding these major considerations and knowing how to respond are keys to the successful
deployment of a backup infrastructure for an Oracle 10g environment using HP servers, storage,
and backup software. The test-proven techniques developed in this paper serve as a complete
guide that can be used with confidence to ensure success.

Appendix A Bill of materials

Backup Server 1

HP Integrity rx7620 server: 1; Operating system: Red Hat Enterprise Linux AS 4.0 U3
Multipath solution: QLogic Driver Dynamic Load Balancing
Backup software solution: HP Data Protector 6.0 Cell Manager
Firmware revisions: MP E.03.13, BMC 03.47, EFI 03.10, System 03.11
1.6-GHz CPUs: 8 (4 per partition)
Memory (GB): 64 (32 per partition)
A6826A (dual-port HBA): 8 (4 per partition); Firmware version 3.03.150; Driver 1.42

Backup Server 2

HP ProLiant DL580 G2 server: 1; Operating system: Red Hat Enterprise Linux AS 4.0 U3
Multipath solution: QLogic Driver Dynamic Load Balancing
Backup software solution: Data Protector 6.0 Agent
3.0-GHz CPUs: 4
Memory (GB): 8
FCA2214 (dual-port HBA): 2; Firmware version 3.21; Driver 1.45

Primary storage − EVA8000

EVA8000 (2C12D): 1; V6.010
300-GB FC Disk (NDSOMEOMER): 144; HP02
HP StorageWorks SAN 2/16N switches: 2; V4.2.0c
Brocade SilkWorm 3800 SAN switches: 2; V3.2.0a
HP OpenView Storage Management Appliance III: 1; V2.1

Disk-to-disk backup target − EVA5000

EVA5000 (2C8D): 1; V4.001 and V3.028
250-GB FATA Disk (ND25058238): 56; HP01

Disk-to-virtual-tape backup target − VLS6510

VLS6510: 1; V3.020
250-GB SATA Disk (ND12341234): 48; HP02b
HP OpenView Command View TL: 1; V3.2

Disk-to-tape backup target − EML 103e

EML E-Series 103e: 1; V3.036
Ultrium 960 LTO-3 drives (ND25058238): 4; HP01

Appendix B Configuring Oracle Recovery Manager
This section shows the default settings for each RMAN instance, as well as suggested options for
configuring the RMAN backup.

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default


CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO ’%F’;
# default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
# default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM ’AES128’; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO
’/u01/app/oracle/product/10.2.0/db_1/dbs/snapcf_ORDB1.f’; # default

It may be useful to modify some of the defaults in the previous example, in particular Backup
Optimization, Default Device Type, Controlfile Autobackup, Parallelism, and the
Archivelog Deletion Policy.

The following are examples of suggested changes to the default settings; a combined CONFIGURE example follows the list:

• Enable Backup Optimization—Use this if planning to merge several incremental backups; files
that are identical to files already backed up are skipped. This is not very useful for full backups.

• Default Device Type—The default device type may need to be a tape library or worm drive, so
setting this may relieve some scripting.

• Controlfile Autobackup—This is highly useful to ensure a control file backup is done often.

• Parallelism—This sets the number of channels RMAN allocates automatically, so that backup sets
are written to multiple devices in parallel when it is set to a value greater than one.

• Archivelog Deletion Policy—Setting this can ease management of scripts since one can set the
archivelogs to be deleted at a predefined interval.
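
Putting these suggestions together, a hedged example of the corresponding CONFIGURE commands
follows; the parallelism value is illustrative, and the deletion policy shown applies only when a
physical standby database applies the logs:

CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO sbt_tape;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE sbt_tape PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;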

Appendix C Examples
This appendix provides the stinit.def configuration, sample RMAN scripts, a sample Data Protector
template, and sample Data Protector screen shots.

Stinit.def configurations for the EML and VLS


# HP Ultrium 960 LTO-3 devices on the EML E-Series 103e
manufacturer="HP" model="Ultrium 3-SCSI" revision="L29S"
{
scsi2logical=1 # Common definitions for all modes

can-bsr drive-buffering can-partitions auto-lock buffer-writes

async-writes read-ahead compression

timeout=800

long-timeout=14400

mode1 blocksize=0 density=0x00

}
# HP Ultrium 960 LTO-3 devices emulated on the VLS 6510
manufacturer="HP" model="Ultrium 3-SCSI" revision="R138"
{
scsi2logical=1 # Common definitions for all modes

can-bsr drive-buffering can-partitions auto-lock buffer-writes

async-writes read-ahead

timeout=800

long-timeout=14400

mode1 blocksize=0 density=0x00 compression=0
}

Sample RMAN scripts


RMAN full backup script

(four channels configured, can be used for duplicate)

RUN {
allocate channel 'dev_0' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=Backup_Specification_Name)';
allocate channel 'dev_1' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=Backup_Specification_Name)';
allocate channel 'dev_2' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=Backup_Specification_Name)';
allocate channel 'dev_3' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=Backup_Specification_Name)';
BACKUP
  INCREMENTAL LEVEL=0
  FORMAT 'Data_Plus_Arch_%d_u%u_s%s_p%p_t%t'
  TAG 'DB1 Full Backup'
  DATABASE PLUS ARCHIVELOG;
RELEASE CHANNEL 'dev_0';
RELEASE CHANNEL 'dev_1';
RELEASE CHANNEL 'dev_2';
RELEASE CHANNEL 'dev_3';
# Reallocate a single channel for the standby control file backup
allocate channel 'dev_0' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=Backup_Specification_Name)';
BACKUP
  FORMAT 'STBYCTLFILE-_%d_u%u_s%s_p%p_t%t'
  CURRENT CONTROLFILE FOR STANDBY;
RELEASE CHANNEL 'dev_0';
}

RMAN duplicate script

run {
# Auxiliary channels are the only way to restore a database as a duplicate
allocate auxiliary channel 'dev_0' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
allocate auxiliary channel 'dev_1' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
allocate auxiliary channel 'dev_2' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
allocate auxiliary channel 'dev_3' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
duplicate target database for standby;
release channel 'dev_0';
release channel 'dev_1';
release channel 'dev_2';
release channel 'dev_3';
}
HP Data Protector 6 Oracle RMAN template


run {
allocate channel 'dev_0' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
allocate channel 'dev_1' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
allocate channel 'dev_2' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
allocate channel 'dev_3' type 'sbt_tape'
  parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
backup incremental level <incr_level>
  filesperset 2
  format 'EML Full Backup<application_%s:%t:%p>.dbf'
  database;
sql 'alter system archive log current';
backup
  format 'EML Full Backup<application_%s:%t:%p>.dbf'
  archivelog all;
}

Data Protector 6.0 interface screen shots


The following screen shots provide an overview of how to set up a Data Protector file library for
disk-to-disk backups.

Step 1. Use this screen to choose the devices and media.

Step 2. Use this screen to select a directory/device and set the number of writers.

Step 3. Use this screen to select the distributed file format.

Step 4. View the Summary pane.

Step 5. View the finished disk library with four writers.

The following screen shots show the advanced settings of a disk writer associated with the file
library created above.

Screen 1. The Advanced Options for disk writer

Screen 2. The file library disk-writer-device buffer settings

Screen 3. The media copy and consolidation view

Note
When trying to automate the copying of a D2D backup to tape, the block sizes must match the formatted block size of
any destination tape media.

The following screen shots show the advanced options of a Backup Specification or Template.

Screen 1. Backup Specification/Template advanced options

Screen 2. Backup Specification/Template Oracle-specific advanced options

Note
If either the Disable recovery catalog auto backup option or the Disable Data Protector managed
control file backup option is selected, there will not be a redundant copy of the recovery catalog backed up to
any media unless manually done through another means.

Screen 3. Job Scheduling from the Template or Backup Specification

Appendix D Additional information
This section presents a set of general issues encountered during testing and provides a description of
suggested resolutions.

Data Protector patches


To check Data Protector for any patches installed, use the following command on the cell manager:

C:\Program Files\OmniBack\bin>omnicheck -patches

The command will produce the following output table:

Patch level    Patch description
===========================================
DPWIN_00265    Core Component
DPWIN_00264    Cell Manager Component
DPWIN_00271    Disk Agent
DPWIN_00260    Media Agent
DPWIN_00266    User Interface
Number of patches found: 5.

Server operating system hangs/crashes

Issue: System hangs under high load.

Resolution: Upgrade from Linux AS4 U1 to U3.

Oracle session hangs

Issue: Oracle instances stop, leaving them in a hung state during high load.

Resolution: Upgrade Oracle to 10.2.0.2 or later.

Data Protector interface crash causes Cell Manager crash

Issue: On Data Protector 6.0 initial release, Windows Cell Manager could crash if the GUI crashed.

Resolution: Install DPWIN_00265 and DPWIN_00266 to resolve the GUI crash and the Cell
Manager crash.

RMAN specific syntax changes

Issue: To enable parity with other testing done in the past, modifications to some parameters were made.

Argument 1: FilesPerSet − MaxSetSize, Rate, MaxPieceSize rewritten to the RMAN script.

Argument 2: MaxOpenFiles − MaxSetSize rewritten to the RMAN script.

Resolution: Set the explicit commands in the template, or create a script on the server.

RAC issues

Issue: OCFS2 times out under load.

Resolution: Set the timeout to a value greater than the default of 7.
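
One way to do this on a Red Hat-style installation is through the o2cb configuration file. This is a
sketch that assumes the timeout in question is the O2CB heartbeat threshold; the value shown is
illustrative and must be identical on all cluster nodes:

# /etc/sysconfig/o2cb
O2CB_ENABLED=true
O2CB_BOOTCLUSTER=ocfs2
# Default is 7; raise it so nodes survive longer I/O stalls under load
O2CB_HEARTBEAT_THRESHOLD=31

Restart the o2cb service (service o2cb restart) on each node after changing the threshold.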

General Oracle changes

• Cursor_Sharing set to force

• Optimizer_Mode set to all_rows

• Dbwr_io_slaves set to 4, used for disk backups

• Tape_io_slaves set to true
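
The changes above correspond to the following initialization parameters; this is a hedged SQL*Plus
sketch (the parameter behind the last item is backup_tape_io_slaves):

ALTER SYSTEM SET cursor_sharing = FORCE SCOPE=BOTH;
ALTER SYSTEM SET optimizer_mode = ALL_ROWS SCOPE=BOTH;
-- dbwr_io_slaves is static; it takes effect after the instance restarts
ALTER SYSTEM SET dbwr_io_slaves = 4 SCOPE=SPFILE;
-- backup_tape_io_slaves is modifiable for new sessions only
ALTER SYSTEM SET backup_tape_io_slaves = TRUE DEFERRED;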

Appendix E: Acronyms
Table 20 lists the acronyms used in this paper.

Table 20. Acronyms

2C12D 2 controller 12 drive shelves (EVA configuration)

ACK Acknowledgement

ALB Automated Load Balancing

ASM Automatic Storage Management

CFT Customer Focused Testing (HP)

CPU Central Processing Unit

CRS Cluster Ready Services

DBA Database Administrator

DP Data Protector

EVA Enterprise Virtual Array

EVAPerf Enterprise Virtual Array Performance monitoring tool

FATA Fibre Attached Technology Adapted drives

Gb Gigabit

GB Gigabyte

G2/G3/G4 Server Generation models (example DL580 G4)

FC Fibre Channel

FC/IP Fibre Channel over Internet Protocol

HBA host bus adapter

HP-UX Hewlett Packard UniX

I/O Input/Output

IOPS I/O operations per second

LUN Logical Unit Number

Mb Megabit

MB Megabyte

MPIO Microsoft Multipath I/O

ms millisecond

OCFS Oracle Cluster File System

OFA Optimal Flexible Architecture

OLTP Online Transaction Processing

PCI Peripheral Component Interconnect

pfile Initialization Parameter File (Oracle)

RAC Real Application Clusters

RAID Redundant Array of Independent Disks

RAM Random Access Memory

RDP Rapid Deployment Pack

RMAN Recovery Manager

RPO Recovery Point Objective

RR Round Robin

SAN Storage Area Network

SGA System Global Area

spfile Server Parameter File (Oracle)

SIM HP Systems Insight Manager

SQST shortest queue service time

TB Terabyte

TPS Transactions per Second

Vdisk Virtual Disk

For more information
This section lists references and their online locations.

HP Customer Focused Testing


• HP StorageWorks Customer Focused Testing

http://www.hp.com/go/hpcft

HP Storage
• HP StorageWorks 4x00/6x00/8x00 Enterprise Virtual Array configuration best practices white
paper

http://h71028.www7.hp.com/ERC/downloads/4AA0-2787ENW.pdf

• The role of HP StorageWorks 6000 Virtual Library Systems in a modern data protection strategy

http://h18004.www1.hp.com/storage/disk_storage/disk_to_disk/vls/6000vls/relatedinfo.html?jumpid=reg_R1002_USEN

• Getting the most performance from your HP StorageWorks Ultrium 960 tape drive

http://h71028.www7.hp.com/ERC/downloads/5982-9971EN.pdf

• HP StorageWorks Enterprise Modular Library E-Series user guide

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00644725/c00644725.pdf

Enterprise Backup Solution


• EBS Overview and features

http://www.hp.com/go/ebs

• HP StorageWorks Enterprise Backup Solution Near Online Backup-Restore Solution Implementation Guide

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00230303/c00230303.pdf?jumpid=reg_R1002_USEN

• HP StorageWorks Enterprise Backup Solution design guide

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00775232/c00775232.pdf?jumpid=reg_R1002_USEN

Performance and troubleshooting


• Performance Troubleshooting and Using Performance Assessment Tools

http://www.hp.com/support/pat

• HP StorageWorks SAN design reference guide

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf

HP technical references

• HP technical on-line seminar program

http://www.hp.com/hps/tos/

HP Data Protector

• HP OpenView Storage Data Protector Installation and Licensing Guide

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00751873/c00751873.pdf

• HP OpenView Storage Data Protector Media Operations User’s Guide

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00752717/c00752717.pdf?jumpid=reg_R1002_USEN

• HP Data Protector 6.0 software Advanced Backup to Disk performance

http://h71028.www7.hp.com/ERC/downloads/4AA1-2978ENW.pdf?jumpid=reg_R1002_USEN

• Technical information about HP Data Protector software

http://h18004.www1.hp.com/products/storage/software/dataprotector/documentation.html?jumpid=reg_R1002_USEN

Oracle

• Best Practices for Oracle Database 10g Backup and Recovery

http://www.oracle.com/technology/deploy/availability/pdf/S942_Chien.doc.pdf

• Oracle backup and recovery

http://www.oracle.com/technology/deploy/availability/htdocs/BR_Overview.htm

Quest Software

• Benchmark Factory for Databases (Database Performance and Scalability Testing)

http://www.quest.com/benchmark-factory/

http://www.quest.com/Quest_Site_Assets/PDF/Benchmark_Factory_5_TPCH.pdf

Open source tools

• Tiobench

http://directory.fsf.org/sysadmin/monitor/tiobench.html

• Bonnie++

http://www.coker.com.au/bonnie++/

©2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to
change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed
as constituting an additional warranty. HP shall not be liable for technical or editorial errors or
omissions contained herein.
Intel and Itanium are registered trademarks of Intel Corporation in the U.S. and
other countries. Oracle is a registered trademark of Oracle Corporation and/or
its affiliates. Microsoft and Windows are U.S. registered trademarks of Microsoft
Corporation.
4AA1-6074ENW, October 2007
