
EMC Unified Storage for Oracle Database 11g
Performance Enabled by EMC Celerra Using DNFS or ASM
Proven Solution Guide

Copyright © 2011 EMC Corporation. All rights reserved.

Published February 2011

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
VMware, ESX, and VMware vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

Part number: H8100


Table of Contents
Chapter 1: About this document ............................................................................... 6
Audience and purpose ...................................................................................................................... 7
Scope ................................................................................................................................................ 8
Business challenge ........................................................................................................................... 8
Technology solution .......................................................................................................................... 9
Objectives........................................................................................................................................ 10
Reference Architecture ................................................................................................................... 11
Validated environment profile ......................................................................................................... 13
Hardware and software resources .................................................................................................. 13
Unified storage platform environment ............................................................................................. 15
Prerequisites and supporting documentation.................................................................................. 17
Terminology .................................................................................................................................... 18

Chapter 2: Storage Design ...................................................................................... 19


Overview ......................................................................................................................................... 19
Concepts ......................................................................................................................................... 20
Storage setup .................................................................................................................................. 20
Best practices .................................................................................................................................. 21
Data Mover parameters setup ........................................................................................................ 23
Data Mover failover ......................................................................................................................... 24
RAID group layout ........................................................................................................................... 25

Chapter 3: File System............................................................................................. 28


Overview ......................................................................................................................................... 28
Limitations ....................................................................................................................................... 29
File system layout ........................................................................................................................... 30

Chapter 4: Oracle Database Design........................................................................ 31


Overview ......................................................................................................................................... 31
Considerations ................................................................................................................................ 32
Database file layout ......................................................................................................................... 33
Oracle ASM ..................................................................................................................................... 34

Oracle 11g DNFS ............................................................................................................................ 35


Memory configuration for Oracle 11g .............................................................................................. 38
HugePages ..................................................................................................................................... 39

Chapter 5: Network Design ..................................................................................... 41


Overview ......................................................................................................................................... 41
Concepts ......................................................................................................................................... 42
Best practices and recommendations ............................................................................................. 42
SAN network layout ......................................................................................................................... 43
IP network layout ............................................................................................................................. 43
Virtual LANs .................................................................................................................................... 44
Jumbo frames ................................................................................................................................. 45
Public and private networks ............................................................................................................ 46
Oracle RAC 11g server network architecture ................................................................................. 47

Chapter 6: Installation and Configuration .............................................................. 48


Overview ......................................................................................................................................... 48
Task 1: Install and configure EMC PowerPath ............................................................................... 49
Task 2A: Set up and configure NAS for Celerra ............................................................................. 53
Task 2B: Set up and configure ASM for CLARiiON ........................................................................ 53
Task 3: Set up and configure database servers ............................................................................. 56
Task 4: Configure NFS client options ............................................................................................. 57
Task 5: Install Oracle grid infrastructure and Oracle RAC .............................................................. 59
Task 6: Configure database server memory options ...................................................................... 59
Task 7: Configure and tune HugePages ......................................................................................... 61
Task 8: Set database initialization parameters ............................................................................... 63
Task 9: Configure the Oracle DNFS client ...................................................................................... 65
Task 10: Verify that DNFS has been enabled................................................................................. 68
Task 11: Configure Oracle Database control files and logfiles ....................................................... 70
Task 12: Enable passwordless authentication using ssh (optional) ............................................... 71

Chapter 7: Testing and Validation .......................................................................... 75


Overview ......................................................................................................................................... 75
Testing tools .................................................................................................................................... 76
Test procedure ................................................................................................................................ 77
Test results ...................................................................................................................................... 78
DNFS configuration test results ...................................................................................................... 79
ASM configuration test results ........................................................................................................ 82

Chapter 8: Conclusion ............................................................................................. 92


Overview ......................................................................................................................................... 92
Findings and conclusion.................................................................................................................. 92

Supporting Information ........................................................................................... 94


Overview ......................................................................................................................................... 94
Managing and monitoring EMC Celerra .......................................................................................... 94


Chapter 1: About this document


Introduction to unified storage

This Proven Solution Guide summarizes a series of best practices that EMC discovered, validated, or otherwise encountered during the validation of a solution using an EMC Celerra NS-960 unified storage platform with a built-in EMC CLARiiON CX4-960 back-end storage array, and Oracle Database 11g on Linux using Oracle Direct NFS (DNFS) or Oracle Automatic Storage Management (ASM).
EMC's commitment to consistently maintain and improve quality is led by the Total
Customer Experience (TCE) program, which is driven by Six Sigma methodologies.
As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases give EMC insight into the challenges currently facing its customers.

Use case definition

A use case reflects a defined set of tests that validates the reference architecture for
a customer environment. This validated architecture can then be used as a reference
point for a Proven Solution.

Contents

The content of this chapter includes the following topics.

Topic                                           See Page
Audience and purpose                                   7
Scope                                                  8
Business challenge                                     8
Technology solution                                    9
Objectives                                            10
Reference Architecture                                11
Validated environment profile                         13
Hardware and software resources                       13
Unified storage platform environment                  15
Prerequisites and supporting documentation            17
Terminology                                           18


Audience and purpose


Audience

The intended audience for this Proven Solution Guide is:

 Internal EMC personnel
 EMC partners
 Customers

Purpose

The purpose of this proven solution is to detail the use of two storage networking technologies with Oracle: Oracle Automatic Storage Management (ASM) over Fibre Channel (FC) and Oracle Direct NFS (DNFS) over Internet Protocol (IP). EMC field personnel and account teams can use this guide when designing the storage layer for Oracle environments. (Oracle DNFS is an Oracle implementation in which the NFS client is embedded in the Oracle kernel. This makes the NFS implementation OS agnostic, and it is specifically tuned for Oracle database workloads.)
The purpose of this Proven Solution Guide is to highlight the functionality,
performance, and scalability of DNFS and ASM in the context of an online
transaction processing (OLTP) workload. When testing both storage technologies,
the EMC Celerra NS-960 was used for storage.

 In the case of ASM, the database servers were connected directly to the host-side FC ports on the CX4-960, which is the back end to the NS-960.

 In the case of DNFS, the database servers were connected to the NS-960's Data Movers using a 10 GbE storage network. Oracle RAC 11g on Linux for x86-64 was used for the database environment.


Scope

The use cases tested in the context of an OLTP workload are:

 Oracle ASM over FC
 Oracle DNFS over IP

The scope of this guide is limited to the performance, functionality, and scalability of
these two storage technologies in the context of Oracle RAC 11g for Linux on
x86-64. The cluster used for testing these use cases consisted of four nodes, each
containing 16 cores, with 128 GB of RAM. A 10 GbE storage network was used for
the IP storage use case, and FC was used for ASM.
A reasonable amount of storage and database tuning was performed in order to
achieve the results documented in this guide. However, undocumented parameters
were not used. The goal of this testing was to establish the real-world performance
that could be expected in a production customer environment. For this reason, the
storage was designed to support robustness and reliability, at the cost of
performance. This is consistent with the use of Oracle in a production, fault-tolerant
context.

Not in scope

The testing performed in these use cases did not include:

 Backup and recovery
 Disaster recovery
 Remote replication
 Test/dev cloning

EMC has previously documented these use cases on Celerra and CLARiiON
platforms. Refer to the documents listed in the Prerequisites and supporting
documentation section.

Business challenge
Overview

Customers face multiple challenges in maintaining and scaling up their storage systems, including balancing cost with performance and manageability.
Traditionally, mission-critical Oracle database applications have been positioned to run on block-based storage over an FC-connected SAN. However, SAN may not be the best choice for every customer. Customers have different skills, infrastructures, and budgetary constraints. These factors affect the choice of protocol used in their Oracle environments. In many cases, customers may choose multiple protocols, and this is supported very well by the EMC unified storage architecture.
The Celerra NS-960 unified storage platform offers total flexibility in terms of protocols, network connection types, and applications. This solution demonstrates a couple of alternatives that give customers the same type of performance with different storage options:

 DNFS on IP networks provides a low-cost, high-performing, and scalable solution for customers.
 ASM on FC provides a high-performing and scalable solution for customers.

Technology solution
Overview

This solution demonstrates how organizations can:

 Use Celerra NS-960 with DNFS to:
  - Simplify network setup and management by taking advantage of DNFS automation of tasks such as IP port trunking and tuning of Linux NFS parameters
  - Increase the capacity and throughput of their existing infrastructure through the use of 10 GbE networking
 Use CLARiiON CX4-960 with ASM over FC to:
  - Use FC as the protocol for storage connectivity with the Oracle ASM file system

This solution will:

 Use a Celerra NS-960 and a high-speed 10 GbE network to chart the limits of performance and user scalability in an Oracle RAC 11g DNFS OLTP environment
 Demonstrate that network-attached storage is competitive on cost and performance compared to a traditional storage infrastructure
 Use a CLARiiON CX4-960 high-speed 4 Gb/s FC fabric to chart the limits of performance and user scalability in an Oracle RAC 11g ASM OLTP environment
 Demonstrate that storage area networking is competitive on performance and may be the best choice for customers who prefer that storage network type


Objectives

EMC's solution includes the objectives outlined in Table 1.

Table 1. Solution objectives

Objective: Performance
Details:
 Demonstrate the baseline performance of the Celerra NS-960 running over NFS with Oracle RAC 11g R2 DNFS on a 10 GbE network.
 Demonstrate the baseline performance of the Celerra NS-960 running over FC with an Oracle RAC 11g ASM environment.
 Scale the workload and show the database performance achievable on the array over NFS and over ASM.


Reference Architecture
Corresponding Reference Architecture

This use case has a corresponding Reference Architecture document that is available on Powerlink and EMC.com. Refer to EMC Unified Storage for Oracle Database 11g - Performance Enabled by EMC Celerra Using DNFS or ASM for details.
If you do not have access to this content, contact your EMC representative.

Reference Architecture diagram for DNFS

Figure 1 depicts the solution's overall physical architecture for the DNFS over IP implementation.

Figure 1. Physical architecture for the DNFS over IP implementation

This implementation uses Celerra NS-960 and a 10 GbE network fabric to build a physical four-node Oracle Real Application Cluster (RAC) 11g R2 DNFS OLTP environment.


Reference architecture diagram for ASM

Figure 2 depicts the solution's overall physical architecture for the ASM over FC implementation.

Figure 2. Physical architecture for the ASM over FC implementation


Validated environment profile

Profile characteristics

EMC used the environment profile defined in Table 2 to validate this solution.

Table 2. Profile characteristics

Database characteristic: OLTP
Benchmark profile: Quest Benchmark Factory TPC-C-like benchmark
Response time: < 2 seconds
Read/write ratio: 70/30
Database scale: The Quest Benchmark scale is 11,500, which keeps the system running within agreed performance limits.
Size of databases: 1 TB
Number of databases: 1
Array drives (size and speed): FC: 300 GB, 15k rpm; SATA: 1 TB, 7,200 rpm

Hardware and software resources

Hardware

Table 3 lists the hardware used to validate this solution.

Table 3. Solution hardware

EMC Celerra NS-960 unified storage platform (includes an EMC CLARiiON CX4-960 back-end storage array), quantity 1:
 2 storage processors
 3 Data Movers
 1 Control Station
 4 x 10 GbE network connections per Data Mover
 7 FC shelves
 2 SATA shelves
 105 x 300 GB 15k FC disks
 30 x 1 TB SATA disks

10 Gigabit Ethernet switches (Brocade 8000):
 24 CEE ports (10 Gb/s)
 8 FC ports (8 Gb/s)

FC switches (QLogic SANbox 5602):
 16 ports (4 Gb/s)

Database servers (Oracle RAC 11g servers, Fujitsu PRIMERGY RX600 S4), quantity 4, each with:
 4 x 3 GHz Intel Nehalem quad-core processors
 128 GB of RAM
 2 x 146 GB 15k internal SCSI disks
 2 onboard GbE Ethernet NICs
 2 additional CNA cards
 2 additional 8 Gb host bus adapters (HBAs)

Software

Table 4 lists the software that EMC used to validate this solution.

Table 4. Solution software

Software                                          Version
Oracle Enterprise Linux                           5.5
VMware vSphere                                    4.0
Microsoft Windows Server 2003 Standard Edition    2003
Oracle RAC 11g Enterprise Edition                 11.2.0.1
Quest Benchmark Factory for Databases             5.8.1
EMC Celerra Manager Advanced Edition              5.6
EMC Navisphere Agent                              6.29.5.0.37
EMC FLARE (OS for CLARiiON)                       04.29.000.5.006
EMC DART (OS for Celerra NAS head)                5.6.49-3
EMC Navisphere Management Suite                   6.29


Unified storage platform environment


Introduction to the unified storage platform environment

This solution tested a unified storage platform with two different environments.

DNFS environment
With DNFS over IP, all database objects are accessed through the Celerra Data Movers over NFS mounts. Datafiles, tempfiles, control files, online redo logfiles, and archived log files are accessed using DNFS over the IP protocol.

ASM environment
With ASM over FC, all database objects, including datafiles, tempfiles, control files, online redo logfiles, and archived log files, are stored on ASM disk groups that reside on SAN storage.

Solution environment

The solution environment consists of:

 A Celerra NS-960 unified storage array, which includes a CLARiiON CX4-960 back-end storage array
  - In the case of DNFS, the storage array is connected to the Oracle RAC 11g servers through an IP production storage network.
  - In the case of ASM, the back-end storage array is directly connected to the Oracle RAC 11g servers through an FC production storage network.
 A four-node Oracle RAC 11g cluster

Storage layout

To test the unified storage platform solution with different protocols and different disk drives, the database was built in two different configurations. The back-end storage layout is the same for both except for the file-system type.

 The DNFS implementation has a RAID-protected NFS file system.
 The ASM implementation has a RAID-protected ASM disk group.

Table 5 shows the storage layouts for both environments.

Table 5. Storage layouts

FC disk: Oracle datafiles and tempfiles; Oracle online redo logfiles; Oracle control files; Voting disk; OCR files
SATA II: Archived logfiles; Fast recovery area (FRA); Backup target
For the DNFS environment, all files are accessed using DNFS.

 RAID-protected NFS file systems are designed to satisfy the I/O demands of particular database objects. For example, RAID 5 is sometimes used for the datafiles and tempfiles, but RAID 1 is always used for the online redo logfiles.
 Oracle datafiles and online redo logfiles reside on their own NFS file systems. Online redo logfiles are mirrored across two different file systems using Oracle software multiplexing. Three NFS file systems are used: one file system for datafiles and tempfiles, and two file systems for online redo logfiles.
 Oracle control files are mirrored across the online redo logfile NFS file systems.

For more information, refer to the EMC Celerra NS-960 Specification Sheet.

Network architecture

The design implements the following physical connections:

 4 Gb/s FC SAN connectivity for the ASM implementation
 10 GbE network connectivity for the DNFS implementation
 10 GbE network VLAN for the RAC interconnect
 1 GbE network connectivity for the client network

In addition:

 DNFS provides file system semantics for Oracle RAC 11g on NFS over IP.
 ASM provides file system semantics for Oracle RAC 11g on SAN over FC.

The RAC interconnect and storage networks are 10 GbE. Jumbo frames are enabled
on these networks.


Prerequisites and supporting documentation


Technology

This paper assumes that you have a general knowledge of the following:

 EMC Celerra unified storage platform
 EMC CLARiiON
 Oracle Database
 Oracle Enterprise Linux

Supporting documents

The following documents, located on Powerlink or EMC.com, provide additional, relevant information. Access to documents on Powerlink depends upon your login credentials. If you do not have access to the following content, contact your EMC representative.

 EMC Celerra NS-960 System Installation Guide
 EMC CLARiiON CX4 Model 960 (CX4-960) Storage System Setup Guide
 EMC Celerra Network Server Command Reference Manual
 EMC CLARiiON Best Practices for Performance and Availability: Release 30 Firmware Update
 EMC PowerPath 5.1 Product Guide
 EMC PowerPath for Linux Installation and Administration Guide
 EMC Backup and Recovery for Oracle Database 11g - SAN Performance Enabled by EMC CLARiiON CX4-120 Using the Fibre Channel Protocol and Oracle Automatic Storage Management (ASM): A Detailed Review
 EMC Backup and Recovery for Oracle Database 11g without Hot Backup Mode using DNFS and Automatic Storage Management on Fibre Channel: A Detailed Review
 EMC Unified Storage for Oracle Database 11g - Enabled by EMC CLARiiON and EMC Celerra Using FCP and NFS Reference Architecture
 EMC Business Continuity for Oracle Database 11g - Enabled by EMC Celerra Using DNFS and NFS Reference Architecture

Third-party documents

The following documents are available on the Oracle website:

 Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux
 Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX
 Large SGA on Linux


Terminology
Terms and definitions

Table 6 defines the terms used in this document.

Table 6. Terminology

Automatic Storage Management (ASM): Oracle ASM is a volume manager and a file system for Oracle Database files. It supports single-instance Oracle Database and Oracle Real Application Clusters (Oracle RAC) configurations. ASM uses block-level storage.

Direct NFS (DNFS): A feature of the Oracle Database 11g software stack in which the Network File System (NFS) client protocol is embedded in the Oracle 11g database kernel. The NFS implementation is OS agnostic and tuned for database I/O patterns.

Fast recovery area (FRA): The fast recovery area (formerly called the flash recovery area) is a specific area of disk storage that is set aside for all recovery-related files and activities in an Oracle database. In this solution, EMC put the database backups, which are used to refresh the database environment after several test runs, into the FRA.

Kernel NFS (KNFS): A standard feature of all Linux and UNIX operating systems in which the Network File System (NFS) client protocol is embedded in the operating system kernel.

Online transaction processing (OLTP): A type of processing in which a computer responds to requests. Each request is called a transaction.

Oracle Real Application Clusters (RAC): Oracle RAC allows multiple concurrent instances to share a single physical database.

Scale-up OLTP: Using an industry-standard OLTP benchmark against a single database instance, comprehensive performance testing is performed to validate the maximum achievable performance using the solution stack of hardware and software.

Serial Advanced Technology Attachment (SATA) drive: SATA is a standard for connecting hard drives to computer systems. SATA is based on serial-signaling technology.


Chapter 2: Storage Design


Overview
Introduction to storage design

The environment consists of a four-node Oracle RAC 11g cluster that accesses a single production database. The four RAC nodes communicate with each other through a dedicated private network that includes a Brocade 8000 FCoE switch. This cluster interconnect synchronizes the caches of the database instances between user requests. An FC SAN is provided by two QLogic SANbox 5602 switches.

For the SAN configuration, EMC PowerPath is used in this solution and works with the storage system to manage I/O paths. For each server, PowerPath manages four active I/O paths and four passive I/O paths to each device.
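As a quick illustration, the path configuration on each node can be verified with the PowerPath CLI (the pseudo-device name below is an example; actual names vary by host):

$ powermt display dev=all        # list all PowerPath pseudo-devices and their paths
$ powermt display dev=emcpowera  # show the active and passive paths for one device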

Contents

This chapter contains the following topics:

Topic                              See Page
Concepts                           20
Storage setup                      20
Best practices                     21
Data Mover parameters setup        23
Data Mover failover                24
RAID group layout                  25


Concepts
High availability and failover

EMC Celerra has built-in high-availability (HA) features. These HA features allow the Celerra to survive various failures without a loss of access to the Oracle database. They protect against the following:

 Data Mover failure
 Network port failure
 Power loss affecting a single circuit connected to the storage array
 Storage processor failure
 FC switch failure
 Disk failure

Storage setup
Setting up CLARiiON (CX) storage

To set up CLARiiON (CX) storage, carry out the steps in the following table:

Step   Action
1      Configure zoning.
2      Configure RAID groups and bind LUNs.
3      Allocate hot spares.
4      Create storage groups.
5      Discover FCP LUNs from the database servers.

Setting up NAS storage

To set up NAS storage, carry out the steps in the following table:

Step   Action
1      Create RAID groups.
2      Allocate hot spares.
3      Create user-defined pools.
4      Create file systems and file system NFS exports.
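For illustration only, the following minimal sketch shows how a few of these steps might look from the command line. The SP address, RAID group number, disk IDs, LUN number, storage group name, and host names are hypothetical; consult the CLARiiON and Celerra documentation for the exact syntax in your release:

# CLARiiON (CX) side: create a RAID group, bind a LUN, and present it to the servers
$ naviseccli -h <SP_IP> createrg 0 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4
$ naviseccli -h <SP_IP> bind r5 10 -rg 0
$ naviseccli -h <SP_IP> storagegroup -addhlu -gname RAC_SG -hlu 0 -alu 10

# Celerra (NAS) side: export a file system to the database servers
$ server_export server_2 -Protocol nfs -option rw=dbnode1:dbnode2 /datafs1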


Best practices
Disk drives

The following are the general recommendations for disk drives:

 Because of significantly better performance, FC drives are always recommended for storing datafiles, tempfiles, control files, and online redo log files.
 Serial Advanced Technology Attachment (SATA II) drives have slower response and rotational speeds, and moderate performance with random I/O. However, they are less expensive than FC drives of the same or similar capacity.

SATA II drives

SATA II drives are frequently the best option for storing archived redo logs and the
fast recovery area. In the event of high-performance requirements for backup and
recovery, FC drives can also be used for this purpose.

RAID types and file types

Table 7 describes the recommendations for RAID types corresponding to Oracle file types.

Table 7. Recommended RAID types

Description            RAID 10/FC                RAID 5/FC                 RAID 5/SATA II
Datafiles/tempfiles    Recommended               Recommended               Avoid
Control files          Recommended               Recommended               Avoid
Online redo logs       Recommended               Avoid                     Avoid
Archived logs          Possible (apply tuning)   Possible (apply tuning)   Recommended
Fast recovery area     OK                        OK                        Recommended
OCR file/voting disk   OK                        OK                        Avoid

Note: The use of FC disks for archived logs is relatively rare. However, if many archived logs are being created, and the I/O requirements for archived logs exceed what a reasonable number of SATA II disks can deliver, FC disks may be a more cost-effective solution.


Tempfiles, undo, and sequential table or index scans

In some cases, if an application creates a large amount of temp activity, placing your tempfiles on RAID 10 devices may be faster due to RAID 10's superior sequential I/O performance. This is also true for undo. Further, an application that performs many full table scans or index scans may benefit from having these datafiles placed on separate RAID 10 devices.

Online redo logfiles

Online redo log files should be put on RAID 1 or RAID 10 devices. You should not use RAID 5 because the sequential write performance of distributed parity (RAID 5) is not as high as that of mirroring (RAID 1).
RAID 1 or RAID 10 provides the best data protection; protection of online redo log files is critical for Oracle recoverability.

OCR files and voting disk files

You should use FC disks for OCR files and voting disk files; unavailability of these files for any significant period of time (due to disk I/O performance issues) may cause one or more of the RAC nodes to reboot and fence itself off from the cluster.
The LUN/RAID group layout images in the RAID group layout section show two different storage configurations that can be used for Oracle RAC 11g databases on a Celerra. That section can help you determine the best configuration to meet your performance needs.

Stripe size for Celerra storage pool

EMC recommends a stripe size of 32 KB for all types of database workloads.

Shelf configuration

The most common error when planning storage is designing for storage capacity rather than for performance. The single most important storage parameter for performance is disk latency. High disk latency is synonymous with slower performance; low disk counts lead to increased disk latency.

The default stripe size for all file systems on FC shelves (redo logs and data) should be 32 KB. Similarly, the recommended stripe size for the file systems on SATA II shelves (archive and flash) should be 256 KB.

The recommendation is a configuration that produces an average database I/O latency (the Oracle measurement db file sequential read) of less than or equal to 20 ms. In today's disk technology, the increase in storage capacity of a disk drive has outpaced the increase in performance. Therefore, performance, not storage capacity, must be the standard when planning an Oracle database's storage configuration.
The number of disks that should be used is determined first by the I/O requirements, and then by capacity. This is especially true for datafiles and tempfiles. Consult with your EMC sales representative for specific sizing recommendations for your workload.


Data Mover parameters setup


Noprefetch

EMC recommends that you turn off file-system read prefetching for an OLTP workload. Leave it on for a Decision Support System (DSS) workload.
Prefetch will waste I/Os in an OLTP environment, since few, if any, sequential I/Os are performed. In a DSS environment, the opposite is true.
To turn off the read prefetch mechanism for a file system, type:
$ server_mount <movername> -option <options>,noprefetch <fs_name> <mount_point>
For example:
$ server_mount server_3 -option rw,noprefetch ufs1 /ufs1

NFS thread count

EMC recommends that you use the default NFS thread count of 256 for optimal performance. Do not set this to a value lower than 32 or higher than 512.
For more information about this parameter, see the Celerra Network Server Parameters Guide on Powerlink. If you do not have access to this content, contact your EMC representative.

file.asyncthreshold

EMC recommends that you use the default value of 32 for the parameter file.asyncthreshold. This provides optimum performance for databases.
For more information about this parameter, see the Celerra Network Server Parameters Guide on Powerlink. If you do not have access to this content, contact your EMC representative.
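As an illustration (not a required step, since the default is recommended), Data Mover parameters of this kind can be inspected and changed with the server_param command; the Data Mover name below is an example:

$ server_param server_2 -facility file -info asyncthreshold               # show the current value
$ server_param server_2 -facility file -modify asyncthreshold -value 32   # reset to the default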


Data Mover failover


High availability

The Data Mover failover capability is a key feature unique to the Celerra. This feature offers redundancy at the file-server level, allowing continuous data access. It also helps to build a fault-resilient RAC architecture.

Configuring failover

EMC recommends that you set up an auto-policy for the Data Mover, so that if a Data Mover fails, due to either hardware or software failure, the Control Station immediately fails the Data Mover over to its partner. The standby Data Mover assumes the faulted Data Mover's identities:

 Network identity: IP and MAC addresses of all its network interface cards (NICs)
 Storage identity: File systems controlled by the faulted Data Mover
 Service identity: Shares and exports controlled by the faulted Data Mover

This ensures continuous file sharing transparently for the database without requiring users to unmount and remount the file system. The NFS applications and NFS clients do not see any significant interruption in I/O.
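As an illustrative sketch, a standby relationship with an automatic failover policy can be configured from the Control Station with the server_standby command; the Data Mover names below are examples:

$ server_standby server_2 -create mover=server_5 -policy auto   # make server_5 the auto-failover standby for server_2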

Pre-conditions for failover

Data Mover failover occurs if any of these conditions exists:

 Failure (operation below the configured threshold) of both internal network interfaces, detected by the lack of a heartbeat (Data Mover timeout)
 Power failure within the Data Mover (unlikely, as the Data Mover is typically wired into the same power supply as the entire array)
 Software panic due to exception or memory error
 Data Mover hang

Events that do not cause failover

Data Mover failover does not occur under these conditions:

 Removing a Data Mover from its slot
 Manually rebooting a Data Mover

Manual failover
Because manually rebooting a Data Mover does not initiate a failover, EMC recommends that you initiate a manual failover before taking down a Data Mover for maintenance.


RAID group layout


RAID group layout design

Two sets of RAID and disk configurations were tested. These are described in the following sections.
For this solution, the FC disks are designed to hold all other database files, for example, datafiles, control files, and online redo log files. As per EMC best practices, online redo log files are put on RAID 10, while datafiles and control files are put on RAID 5.
The SATA disks are designed to hold only the archive logs and backup files. Therefore, there is no impact on the performance and scalability testing performed by EMC.
For a customer's production environment, EMC recommends RAID 6 instead of RAID 5 for archive logs and backup files when using SATA. RAID 6 provides extra redundancy; its performance is almost the same as RAID 5 for reads, but is slower for writes.
For more information about RAID 6, see EMC CLARiiON RAID 6 Technology - A Detailed Review.


RAID group layout for ASM

The RAID group layout for seven FC shelves (RAID 5/RAID 1) and two SATA II shelves (RAID 5) using user-defined storage pools is shown in Figure 3.

Figure 3. RAID group layout for ASM


RAID group layout for NFS

The RAID group layout for seven FC shelves (RAID 5/RAID 1) and two SATA II shelves (RAID 5) using user-defined storage pools is shown in Figure 4.

Figure 4. RAID group layout for NFS


Chapter 3: File System


Overview
Introduction to the file system

For NFS, in addition to the configuration on the CLARiiON, a few configuration steps are necessary on the Celerra, which provides the file system path to the hosts. Seven file systems need to be created to hold database files, online redo logs, archive logs, backup files, and CRS files. For database files, two file systems are created so that two Data Movers can be used for better performance. For online redo log files, two file systems are created to use two Data Movers for multiplexing and better performance.
For ASM, no additional configuration on the Celerra is required.
There are many different ways of organizing software and services in the file system that all make sense. However, standardizing on a workable single layout across all services has more advantages than picking a layout that is well suited to a particular application.

Contents

This chapter contains the following topics:

Topic                  See Page
Limitations            29
File system layout     30


Limitations
Limitations with Automatic Volume Management

An Automatic Volume Management (AVM) system-defined pool can provide a maximum of eight stripes, which is only 40 disks.
For example, suppose there are 40 disk volumes on the storage system. AVM takes eight disk volumes, creates stripe1, slice1, and metavolume1, and then creates the file system ufs1. Therefore, if you have more than eight RAID groups, you should create the volume on those RAID groups manually.
In this solution, the database datafiles are spread over 80 disks, which far exceeds what an AVM system-defined pool can support. To work around this limitation, EMC did not use the system-defined pool but created the storage pool manually, which allows for a wide-striping AVM pool. Two file systems are created on this wide stripe using different Data Movers, and the files are spread across 80 disks instead of 40.
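For illustration only, the following sketch outlines how a manual (user-defined) pool of this kind might be built from the Control Station. The volume, pool, and file system names echo Table 8 in the next section, but the disk volume list and file system size are hypothetical:

$ nas_volume -name datastripe -create -Stripe 32768 d10,d11,d12,d13   # stripe across disk volumes, 32 KB element size
$ nas_pool -create -name datapool -volumes datastripe                 # user-defined storage pool on that stripe
$ nas_fs -name datafs1 -create size=500G pool=datapool                # file system carved from the pool
$ server_export server_2 -Protocol nfs -option rw /datafs1            # export the file system over NFS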


File system layout


File system layouts for NFS

The file systems shown in Table 8 were created on AVM user-defined pools, exported on the Celerra, and mounted on the database servers.
ASM does not utilize a file system.

Table 8. File system layouts for NFS

File system/export: /crsfs (exported as /crs)
AVM user-defined pool: log1pool (user-defined storage pool created using log1stripe volume)
Volumes: log1stripe (metavolume using one RAID 10 group)

File system/export: /datafs1
AVM user-defined pool: datapool (user-defined storage pool created using datastripe volume)
Volumes: data1stripe (metavolume consisting of all available FC 4+1 RAID 5 groups)

File system/export: /datafs2
AVM user-defined pool: datapool (user-defined storage pool created using datastripe volume)
Volumes: data2stripe (metavolume consisting of all available FC 4+1 RAID 5 groups)

File system/export: /log1fs
AVM user-defined pool: log1pool (user-defined storage pool created using log1stripe volume)
Volumes: log1stripe (metavolume using one RAID 10 group)

File system/export: /log2fs
AVM user-defined pool: log2pool (user-defined storage pool created using log2stripe volume)
Volumes: log2stripe (metavolume using one RAID 10 group)

File system/export: /archfs
AVM user-defined pool: archpool (user-defined storage pool created using archstripe volume)
Volumes: archstripe (metavolume using the SATA 6+1 RAID 5 group; see note)

File system/export: /frafs
AVM user-defined pool: frapool (user-defined storage pool created using frastripe volume)
Volumes: frastripe (metavolume using the SATA 6+1 RAID 5 group; see note)

Note: EMC strongly recommends using RAID 6 with high-capacity SATA drives. High capacity is 1 TB or greater in capacity.


Chapter 4: Oracle Database Design


Overview

Introduction to Oracle database design

This chapter provides guidelines on the Oracle 11g RAC database design used for this validated solution. The design and configuration instructions apply to the specific revision levels of the components used during the development of the solution.
Before attempting to implement any real-world solution based on this validated scenario, you must gather the appropriate configuration documentation for the revision levels of the hardware and software components. Version-specific release notes are especially important.

Contents

This section contains the following topics:

Topic                                  See Page
Considerations                         32
Database file layout                   33
Oracle ASM                             34
Oracle 11g DNFS                        35
Memory configuration for Oracle 11g    38
HugePages                              39


Considerations
Heartbeat mechanisms

The Cluster Synchronization Services (CSS) component of Oracle Clusterware maintains two heartbeat mechanisms:

 The disk heartbeat to the voting disk
 The network heartbeat across the RAC interconnects, which establishes and confirms valid node membership in the cluster

Both of these heartbeat mechanisms have an associated time-out value. For more
information on Oracle Clusterware misscount and disktimeout parameters, see
Oracle MetaLink Note 294430.1.
EMC recommends setting the disk heartbeat parameter disktimeout to
160 seconds. You should leave the network heartbeat parameter
misscount at the default of 60 seconds.
Rationale
These settings will ensure that the RAC nodes do not evict when the active Data
Mover fails over to its partner.
The command to configure this option is:
$ORA_CRS_HOME/bin/crsctl set css disktimeout 160
Note

In Oracle RAC 11g R2, the default value of disktimeout has been set to
200. Therefore, there is no need to manually change the value to 160. You
must check the current value of the parameter before you make any
changes by executing the following command:
$GRID_HOME/bin/crsctl get css disktimeout

Oracle Cluster Ready Services

Oracle Cluster Ready Services (CRS) are enabled on each of the Oracle RAC 11g
servers. The servers operate in active/active mode to provide local protection against
a server failure and to provide load balancing.
Provided that the required mount-point parameters are used, CRS-required files
(including the voting disk and the OCR file) can reside on NFS volumes.
For more information on the mount-point parameters required for the Oracle
Clusterware files, see Chapter 6: Installation and Configuration > Task 4: Configure
NFS client options.
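As an illustrative sketch, an /etc/fstab entry of the kind typically required for Oracle Clusterware and database files on NFS is shown below. The server name, export, and mount point are hypothetical; see the My Oracle Support note for your platform for the authoritative mount options:

celerra-dm2:/crs  /u02/crs  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0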

NFS client

In the case of the Oracle RAC 11g database, the embedded Oracle DNFS protocol
is used to connect to the Celerra storage array. DNFS runs over TCP/IP.

Oracle binary
files

The Oracle RAC 11g binary files, including the Oracle CRS, are installed on the
database servers' local disks.


Database file layout


Database file layout for NFS

Datafiles, online redo log files, archive log files, tempfiles, control files, and CRS files reside on Celerra NFS file systems. These file systems are designed (in terms of the RAID level and number of disks used) to be appropriate for each type of file.
Table 9 lists each file or activity type and indicates where it resides.

Table 9. Location of files and activities for NFS

Database binary files: Database server's local disk (or vmdk file for virtualized servers)
Datafiles, tempfiles: Spread across /datafs1 and /datafs2
Online redo log files, control files: Multiplexed across /log1fs and /log2fs
Archived log files: /archfs
Fast recovery area (FRA): /frafs
OCR and voting disk files: /crs

Database file layout for ASM

Datafiles, online redo log files, archive log files, tempfiles, control files, and CRS files reside on CLARiiON storage that is managed by Oracle ASM. The database was built with six distinct ASM disk groups: +DATA, +LOG1, +LOG2, +ARCH, +FRA, and +CRS.
Table 10 lists each file or activity type and indicates where it resides.

Table 10. Location of files and activities for ASM

Database binary files: Database server's local disk (or vmdk file for virtualized servers)
Datafiles, tempfiles: +DATA
Online redo log files, control files: Multiplexed across +LOG1, +LOG2
Archived log files: +ARCH
Fast recovery area (FRA): +FRA
OCR and voting disk files: +CRS


Oracle ASM
ASMLib

Oracle has developed a storage management interface called the ASMLib API. ASMLib is not required to run ASM; it is an add-on module that simplifies the management and discovery of ASM disks. ASMLib provides an alternative to the standard operating system interface for ASM to identify and access block devices.
The ASMLib API provides two major feature enhancements over standard interfaces:

 Disk discovery: Provides more information about the storage attributes to the database and the database administrator (DBA)
 I/O processing: Enables more efficient I/O
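Where ASMLib is used, disks are typically stamped and discovered with the oracleasm utility. A brief illustrative session (the device name is an example) might look like this:

# /usr/sbin/oracleasm createdisk DATA1 /dev/emcpowera1   # stamp a partition as an ASM disk
# /usr/sbin/oracleasm scandisks                          # rescan after presenting new LUNs
# /usr/sbin/oracleasm listdisks                          # list disks visible to ASMLib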

Best practices for ASM and database deployment

ASM provides out-of-the-box enablement of redundancy and optimal performance. However, the following items should be considered to increase performance or availability, or both:

 Implement multiple access paths to the storage array using two or more HBAs or initiators.
 Deploy multipathing software over these multiple HBAs to provide I/O load balancing and failover capabilities.
 Use disk groups with similarly sized and performing disks. A disk group containing a large number of disks provides a wide distribution of data extents, thus allowing greater concurrency for I/O, and reduces the occurrence of hotspots. Since a large disk group can easily sustain various I/O characteristics and workloads, a single (database area) disk group can be used to house database files, logfiles, and controlfiles.
 Use disk groups with four or more disks, and ensure these disks span several back-end disk adapters.

For example, a common deployment can be four or more disks in a database disk group (for example, a DATA disk group) spanning all back-end disk adapters/directors, and eight to ten disks for the FRA disk group. The size of the FRA will depend on what is stored and how much: full database backups, incremental backups, flashback database logs, and archive logs.

Note: An active copy of the controlfile and one member of each of the redo log groups are stored in the FRA.
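As a hedged sketch of how disk groups like the six used in this solution could be created (the disk paths are hypothetical, and external redundancy is shown because the CLARiiON RAID groups already provide protection):

$ sqlplus / as sysasm <<'EOF'
-- datafiles and tempfiles
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/emcpowera1', '/dev/emcpowerb1';
-- first copy of online redo logs and control files
CREATE DISKGROUP LOG1 EXTERNAL REDUNDANCY
  DISK '/dev/emcpowerc1';
EOF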


Oracle 11g DNFS


Overview of Oracle Direct NFS

Oracle 11g includes a feature for storing Oracle datafiles on a NAS device, referred to as Direct NFS or DNFS. DNFS integrates the NFS client directly inside the database kernel instead of the operating system kernel.
As part of this solution, the storage elements for Oracle RAC 11g were accessed using the DNFS protocol. DNFS is relatively easy to configure. It applies only to the storage of Oracle datafiles; redo log files, tempfiles, control files, and so on are not affected. You can attempt to configure the mount points where these files are stored to support DNFS, but this will have no impact.
DNFS provides performance advantages over conventional Linux kernel NFS (KNFS) because fewer context switches are required to perform an I/O. Because DNFS integrates the client NFS protocol into the Oracle kernel, all I/O calls are made in user space, rather than requiring a context switch to kernel space. As a result, the CPU utilization associated with database server I/O is reduced.

Disadvantages of KNFS
I/O caching and performance characteristics vary between operating systems. This leads to varying NFS performance across different operating systems (for example, Linux and Solaris), and across different releases of the same operating system (for example, RHEL 4.7 and RHEL 5.4).

DNFS and EMC Celerra

Using the DNFS configuration, the Oracle RAC 11g and EMC Celerra solution enables you to deploy an EMC NAS architecture with DNFS connectivity for Oracle RAC 11g database applications at lower cost and with reduced complexity compared to direct-attached storage (DAS) or a storage area network (SAN).
Figure 5 illustrates how DNFS can be used to deploy an Oracle 11g and EMC Celerra solution.

Figure 5. DNFS and Celerra unified storage


DNFS performance advantage

Table 11 describes the performance advantages that can be gained by using DNFS.

Table 11. DNFS performance advantages

Consistent performance: Consistent NFS performance is observed across all operating systems.

Improved caching and I/O management: The DNFS kernel is designed for improved caching and management of the I/O patterns that are typically experienced in database environments, that is, larger and more efficient reads/writes.

Asynchronous direct I/O: The DNFS kernel enables asynchronous direct I/O, which is typically the most efficient form of I/O for databases. Asynchronous direct I/O significantly improves database read/write performance by enabling I/O to continue while other requests are being submitted and processed.

Overcomes OS write locking: DNFS overcomes OS write locking, which can be inadequate in some operating systems and can cause I/O performance bottlenecks in others.

Reduced CPU and memory usage: Database server CPU and memory usage are reduced by eliminating the overhead of copying data to and from the OS memory cache to the database SGA cache.

Included in 11g: DNFS is included free of charge with Oracle 11g.

Enhanced data integrity

To ensure database integrity, immediate writes must be made to the database when requested. Operating system caching delays writes for efficiency reasons; this potentially compromises data integrity during failure scenarios.
DNFS uses database caching techniques and asynchronous direct I/O to ensure almost immediate data writes, thus reducing data integrity risks.

Load balancing and high availability

Load balancing and high availability (HA) are managed internally within the DNFS client itself, rather than at the OS level. This greatly simplifies network setup in HA environments and reduces dependence on IT network administrators by eliminating the need to set up network subnets and bond ports (for example, LACP bonding).
DNFS allows multiple parallel network paths/ports to be used for I/O between the database server and the IP storage array. Two paths per node were used in the testing performed for this solution. For efficiency and performance, these paths are managed and load balanced by the DNFS client, not by the operating system. The paths should be configured in separate subnets for effective load balancing by DNFS.


Less tuning required

Oracle 11g DNFS requires little additional tuning, other than the tuning considerations necessary in any IP storage environment with Oracle. In an unchanging environment, once tuned, DNFS requires no ongoing maintenance.


Memory configuration for Oracle 11g

Memory configuration and performance

Memory configuration in Oracle 11g is one of the most challenging aspects of configuring the database server.
If the memory is not configured correctly, the performance of the database server will be very poor. In addition:

- The database server will be unstable.
- The database may not open at all; if it does open, you may experience errors due to lack of shared pool space.

In an OLTP context, the size of the shared pool is frequently the limiting factor on the performance of the database.
For more information, see Effects of Automatic Memory Management on performance.
Automatic Memory Management

A feature called Automatic Memory Management was introduced in Oracle 11g 64-bit (Release 1). The purpose of Automatic Memory Management is to simplify the memory configuration process for Oracle 11g.
For example, in Oracle 10g, the user is required to set two parameters, SGA_TARGET and PGA_AGGREGATE_TARGET, so that Oracle can manage other memory-related configurations such as the buffer cache and shared pool. When using Oracle 11g-style Automatic Memory Management, the user does not set these SGA and PGA parameters. Instead, the following parameters are set:

- MEMORY_TARGET
- MEMORY_MAX_TARGET

Once these parameters are set, Oracle 11g can, in theory, handle all memory management issues, including both SGA and PGA memory. However, the Automatic Memory Management model in Oracle 11g 64-bit (Release 1) requires configuration of shared memory as a file system mounted under /dev/shm. This adds an additional management burden to the DBA/system administrator.

Effects of Automatic Memory Management on performance

Decreased database performance
EMC observed a significant decrease in performance when the Oracle 11g Automatic Memory Management feature was enabled.

Linux HugePages are not supported
Linux HugePages are not supported when the Automatic Memory Management feature is implemented. When Automatic Memory Management is enabled, the entire SGA memory must fit under /dev/shm and, as a result, HugePages are not used. For more information, see Oracle MetaLink Note 749851.1.
On Oracle 11g, tuning HugePages increases the performance of the database significantly. It is EMC's opinion that the performance improvements of HugePages, combined with no requirement for a /dev/shm file system, make the Oracle 11g automatic memory model a poor choice.

EMC recommendations
To achieve optimal performance on Oracle 11g, EMC recommends the following:

- Disable the Automatic Memory Management feature.
- Use the Oracle 10g style of memory management on Oracle 11g.

The memory management configuration procedure is described in the previous section. Per our testing, this provides optimal performance and manageability.
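For illustration, the following is a minimal sketch of switching an instance back to Oracle 10g-style memory management. The SGA and PGA sizes are hypothetical (the 100 GB SGA matches the sizing used elsewhere in this guide); adjust them to your environment and restart the instance for the changes to take effect:

$ sqlplus / as sysdba <<'EOF'
-- Disable Automatic Memory Management
alter system set memory_target=0 scope=spfile sid='*';
alter system reset memory_max_target scope=spfile sid='*';
-- Oracle 10g-style targets (sizes are examples only)
alter system set sga_target=100G scope=spfile sid='*';
alter system set pga_aggregate_target=8G scope=spfile sid='*';
EOF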

HugePages

The Linux 2.6 kernel includes a feature called HugePages. This feature allows you to specify the number of physically contiguous large memory pages that will be allocated and pinned in RAM for shared memory segments like the Oracle System Global Area (SGA).
The pre-allocated memory pages can only be used for shared memory and must be large enough to accommodate the entire SGA. Enabling HugePages can yield a very significant performance improvement for Oracle RAC 11g database servers.

Warning
HugePages must be tuned carefully and set correctly. Unused HugePages can only be used for shared memory allocations, even if the system runs out of memory and starts swapping. Incorrectly configured HugePages settings may result in poor performance and may even make the machine unusable.

HugePages parameters

HugePages is configured through the vm.nr_hugepages parameter in /etc/sysctl.conf; the current values are reported in /proc/meminfo. You can change the value by editing the sysctl.conf file and rebooting the server. Table 12 describes the HugePages parameters.

Table 12. HugePages parameters

HugePages_Total: Total number of HugePages that are allocated for shared memory segments. (This is a tunable value. You must determine how to set this value.)

HugePages_Free: Number of HugePages that are not being used.

Hugepagesize: Size of each HugePage.



Optimum values for HugePages parameters
The amount of memory allocated to HugePages must be large enough to accommodate the entire SGA:

HugePages_Total x Hugepagesize = Amount of memory allocated to HugePages

To avoid wasting memory resources, the value of HugePages_Free should be zero (or close to zero) once the database is started.

Note
The value of vm.nr_hugepages should be set to at least kernel.shmmax divided by the HugePage size (2,048 KB on the test system). When the database is started, HugePages_Free should show a value close to zero to reflect that memory is correctly tuned.

For more information on tuning HugePages, see Chapter 6: Installation and Configuration > Task 7: Configure and tune HugePages.
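The following is a worked sizing sketch using this guide's test values (a 100+ GB SGA and 2,048 KB HugePages); the kernel.shmmax value is taken from the /etc/sysctl.conf listing in Task 6:

SHMMAX=108447924224                                              # kernel.shmmax, in bytes
HPAGE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')   # 2048 on the test system
echo $(( SHMMAX / 1024 / HPAGE_KB ))                             # prints 51712, matching vm.nr_hugepages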


Chapter 5: Network Design


Overview

Introduction to network design

This chapter focuses on the network design and layout for this solution. It includes the technology details of the SAN and IP network configuration as well as the RAC interconnect network. To maximize network performance, jumbo frames are enabled at several layers.

Contents

This section contains the following topics:

- Concepts
- Best practices and recommendations
- SAN network layout
- IP network layout
- Virtual LANs
- Jumbo frames
- Public and private networks
- Oracle RAC 11g server network architecture


Concepts

Jumbo frames

Maximum Transmission Unit (MTU) sizes of greater than 1,500 bytes are referred to as jumbo frames.
Jumbo frames require Gigabit Ethernet across the entire network infrastructure: storage, switches, and database servers.

VLAN

Virtual local area networks (VLANs) logically group devices that are on different network segments or sub-networks.

Best practices and recommendations

Gigabit Ethernet

EMC recommends that you use Gigabit Ethernet for the RAC interconnects if RAC is used. If 10 GbE is available, use it in preference.

Jumbo frames and the RAC interconnect

For Oracle RAC 11g installations, jumbo frames are recommended for the private RAC interconnect. This boosts throughput and can also lower the CPU utilization caused by the software overhead of the bonding devices. Jumbo frames increase the device MTU size to a larger value (typically 9,000 bytes).

VLANs

EMC recommends that you use VLANs to segment different types of traffic to specific subnets. This provides better throughput, manageability, application separation, high availability, and security.


SAN network layout

SAN network layout for the validated scenario

The SAN network layout is configured as follows:

- Two Brocade 8000 switches are used for the test bed.
- Two connections from each database server are connected to the Brocade 8000 switches.
- One FC port from SPA and SPB is connected to each of the two FC switches at 4 Gb/s.

Zoning

Each FC port from the database servers is zoned to both SP ports. In line with EMC's best practices, single-initiator zoning was used, meaning one HBA/one SP port per zone.

IP network layout

Network design for the validated scenario

The IP network layout is configured as follows:

- TCP/IP provides network connectivity.
- DNFS provides file system semantics for Oracle RAC 11g.
- Client virtual machines run on a VMware ESX server. They are connected to a client network.
- The client, RAC interconnect, and redundant TCP/IP storage networks consist of dedicated network switches and VLANs.
- Jumbo frames are enabled on the RAC interconnect and storage networks.
- The Oracle RAC 11g servers are connected to the client, RAC interconnect, WAN, and production storage networks.


Virtual LANs

This solution uses four VLANs to segregate network traffic of different types. This improves throughput, manageability, application separation, high availability, and security.
Table 13 describes the database server network port setup.

Table 13. Database server network port setup

VLAN                CRS setting
Client network      Public
RAC interconnect    Private
Storage             Private
Storage             Private

Client VLAN

The client VLAN supports connectivity between the physically booted Oracle RAC 11g servers, the virtualized Oracle Database 11g, and the client workstations. The client VLAN also supports connectivity between the Celerra and the client workstations to provide network file services to the clients. Control and management of these devices are also provided through the client network.

RAC interconnect VLAN

The RAC interconnect VLAN supports connectivity between the Oracle RAC 11g servers for the network I/O required by Oracle CRS. One NIC is configured on each Oracle RAC 11g server to the RAC interconnect network.

Storage VLAN

The storage VLAN uses the NFS protocol to provide connectivity between servers and storage. Each database server connected to the storage VLAN has two NICs dedicated to the storage VLAN. Link aggregation is configured on the servers to provide load balancing and port failover between the two ports.
For validating DNFS, link aggregation was removed. DNFS was validated using one-, two-, three-, and four-port configurations. Link aggregation is not required with DNFS because Oracle 11g internally manages load balancing and high availability.

Redundant switches

In addition to VLANs, separate redundant storage switches are used.
The RAC interconnect connections are also on a dedicated switch. For real-world solution builds, it is recommended that these switches support GbE connections, jumbo frames, and port channeling.


Jumbo frames

Introduction to jumbo frames

Jumbo frames are configured at three layers:

- Celerra Data Mover
- Oracle RAC 11g servers
- Switch

Note
Configuration steps for the switch are not covered here, as they are vendor-specific. Check your switch documentation for details.

Celerra Data Mover

To configure jumbo frames on the Data Mover, execute the following command on the Control Station:

server_ifconfig server_2 int1 mtu=9000

Where:
- server_2 is the Data Mover
- int1 is the interface

Linux servers

To configure jumbo frames on a Linux server, execute the following command:

ifconfig eth0 mtu 9000

Alternatively, place the following statement in the network scripts in /etc/sysconfig/network-scripts:

MTU=9000
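For example, a persistent configuration for one of the storage-network ports might look like the following sketch. The device name matches the DNFS storage interfaces listed in Table 14, and the address matches the local address used in the oranfstab example in Task 9; treat both as illustrative for your own environment:

$ cat /etc/sysconfig/network-scripts/ifcfg-eth6
DEVICE=eth6
BOOTPROTO=static
IPADDR=192.168.4.10
NETMASK=255.255.255.0
ONBOOT=yes
MTU=9000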

RAC interconnect

Jumbo frames should be configured for the storage and RAC interconnect networks of this solution to boost throughput, and possibly also to lower the CPU utilization caused by the software overhead of the bonding devices.
Typical Oracle database environments transfer data in 8 KB and 32 KB block sizes, which require multiple 1,500-byte frames per database I/O when an MTU size of 1,500 is used. Using jumbo frames, the number of frames needed for every large I/O request is reduced, which in turn reduces the host CPU load caused by generating a large number of interrupts for each application I/O. The benefit of jumbo frames is primarily a complex function of the workload I/O sizes, network utilization, and Oracle database server CPU utilization, and so is not easy to predict.
For information on using jumbo frames with the RAC interconnect, see Oracle MetaLink Note 300388.1.



Verifying that jumbo frames are enabled

To test whether jumbo frames are enabled, use the following command:

ping -M do -s 8192 <target>

Where:
- target is the host or interface address to be tested

Jumbo frames must be enabled on all layers of the network for this command to succeed.
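For instance, a quick end-to-end check from a database server to a Data Mover storage interface (using the storage-network address that appears in the oranfstab example in Task 9) might look like this sketch:

# 8,192-byte payload with the don't-fragment flag set; this succeeds only if
# every hop on the path allows jumbo frames
ping -M do -s 8192 192.168.4.160
# If any hop is limited to a 1,500-byte MTU, ping reports "Frag needed and DF set"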

Public and private networks

Each node should have:

- One static IP address for the public network
- One static IP address for the private cluster interconnect

The private interconnect should only be used by Oracle to transfer cluster manager and cache fusion-related data.
Although it is possible to use the public network for the RAC interconnect, this is not recommended as it may degrade database performance (by reducing the amount of bandwidth available for cache fusion and cluster manager traffic).

Configuring virtual IP addresses

The virtual IP addresses must be defined in either the /etc/hosts file or DNS for all RAC nodes and client nodes. The public virtual IP addresses are configured automatically by Oracle when the Oracle Universal Installer is run, which starts Oracle's Virtual Internet Protocol Configuration Assistant (vipca).
All virtual IP addresses are activated when the following command is run:

srvctl start nodeapps -n <node_name>

Where:
- node_name is the hostname that is configured in the client's tnsnames.ora file


Oracle RAC 11g server network architecture

Database server network interfaces

Each Oracle RAC 11g server has four network interfaces:

- Two interfaces connect to the storage network.
- One interface connects the server to the RAC interconnect network, enabling the heartbeat and other network I/O required by Oracle CRS.
- One interface connects to the client.

Oracle RAC 11g server network interfaces - DNFS

Table 14 lists each interface and describes its use for the Oracle 11g DNFS configuration.

Table 14. Interfaces for DNFS configuration

Interface port ID    Description
eth0                 Client network
eth1                 RAC interconnect
eth6                 Storage network
eth7                 Storage network

Oracle RAC 11g server network interfaces - ASM

Table 15 lists each interface and describes its use for the Oracle 11g ASM configuration.

Table 15. Interfaces for ASM configuration

Interface port ID    Description
eth0                 Client network
eth1                 RAC interconnect
eth6                 Unused
eth7                 Unused


Chapter 6: Installation and Configuration


Overview

Introduction to installation and configuration

This chapter provides procedures and guidelines for installing and configuring the components that make up the validated solution scenario. The installation and configuration instructions presented in this chapter apply to the specific revision levels of the components used during the development of this solution.
Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate installation and configuration documentation for the revision levels of the hardware and software components planned in the solution. Version-specific release notes are especially important.

Note
Where tasks are not divided into NAS or ASM, they are the same for both configurations.

Contents

This chapter contains the following topics:

- Task 1: Install and configure EMC PowerPath
- Task 2A: Set up and configure NAS for Celerra
- Task 2B: Set up and configure ASM for CLARiiON
- Task 3: Set up and configure database servers
- Task 4: Configure NFS client options
- Task 5: Install Oracle grid infrastructure and Oracle RAC
- Task 6: Configure database server memory options
- Task 7: Configure and tune HugePages
- Task 8: Set database initialization parameters
- Task 9: Configure the Oracle DNFS client
- Task 10: Verify that DNFS has been enabled
- Task 11: Configure Oracle Database control files and logfiles
- Task 12: Enable passwordless authentication using ssh (optional)


Task 1: Install and configure EMC PowerPath

Overview of Task 1

EMC PowerPath provides I/O multipath functionality. With PowerPath, a node can access the same SAN volume via multiple paths (HBA ports), which enables both load balancing across the multiple paths and transparent failover between the paths.

Install EMC PowerPath

Installation
The installation is straightforward. In the solution environment, EMC ran the following command on the four nodes:

rpm -i EMCpower.LINUX-5.3.1.00.00-111.rhel5.x86_64.rpm

PowerPath license
To register the PowerPath license, run the following command:

emcpreg -install

Type the 24-character alphanumeric sequence found on the License Key Card delivered with the PowerPath media kit.

Licensing type
To set the licensing type, choose one of the following:

- Standard Edition for back-end failover only
- Base Edition for end-to-end failover
- Full PowerPath for failover and load balancing

Load-balancing policies
To set the PowerPath load-balancing policy to CLARiiON Optimize, run the following command:

powermt set policy=co

New device/path
To reconfigure a new device/path, run the following command:

powermt config

Use this command when scanning for new devices. It adds those new devices to the PowerPath configuration, configures all detected paths to PowerPath devices, and then adds paths to the existing devices.
For more information on prerequisites and installing PowerPath, see the EMC PowerPath for Linux Installation and Administration Guide.


Configure EMC PowerPath

After installation, you should be able to see the pseudo devices by using this command:

powermt display dev=all

To start and stop PowerPath, run the following commands:

/etc/init.d/PowerPath start
/etc/init.d/PowerPath stop

All ASM disk groups are then built using PowerPath pseudo names.

Note
A pseudo name is a platform-specific value assigned by PowerPath to the PowerPath device.

Because of the way in which the SAN devices are discovered on each node, there is a possibility that a pseudo device pointing to a specific LUN on one node might point to a different LUN on another node. The emcpadm command is used to ensure consistent naming of PowerPath devices on all nodes, as shown in Figure 6.

Figure 6. PowerPath pseudo device names in use

Using the emcpadm utility

Enhancements to the emcpadm utility allow you to preserve and restore PowerPath pseudo-device-to-array logical-unit bindings. The new commands simplify the process of renaming pseudo devices in a cluster environment. For example, you can rename pseudo devices on one cluster node, export the new device mappings, then import the mappings on another cluster node.


For examples that describe how to rename, import, and export device mappings, see
Table 16 and Table 17 or the EMC PowerPath 5.1 Product Guide.
Table 16. emcpadm commands

emcpadm check_mappings [-v] -f <pseudo device/LU mappings file>
Displays a comparison of the currently configured mappings and the mappings in <pseudo device/LU mappings file>. The display lists the currently configured devices along with the device remappings that will occur if you import <pseudo device/LU mappings file>.

emcpadm export_mappings -f <pseudo device/LU mappings file>
Saves the currently configured mappings to <pseudo device/LU mappings file>.

emcpadm import_mappings [-v] -f <pseudo device/LU mappings file>
Replaces the currently configured mappings with the mappings in <pseudo device/LU mappings file>. If differences exist among the current mappings and the file mappings, the mappings in <pseudo device/LU mappings file> take precedence. When you import the file mappings, current host devices are remapped according to the file mappings, where differences exist.

Table 17. emcpadm arguments

-f <pseudo device/LU mappings file>
Specifies the filename and location for <pseudo device/LU mappings file>.

-v
Specifies verbose mode.

Note
Before importing new mappings on a node or server, you must:
1. Preview the changes with emcpadm check_mappings.
2. Shut down all applications and database systems.
3. Unmount file systems.
4. Deport VxVM disk groups.

Example:
On NodeA of a cluster, to make every other node in the cluster have the same PowerPath configuration, run the following command:

emcpadm export_mappings -f NodeA.map

On NodeB of the cluster, copy the NodeA.map file and compare it with the current configuration:

emcpadm check_mappings -v -f NodeA.map

This shows a comparison of the two configurations and the changes that will be made if this mapping file is imported. To proceed, run the following on NodeB:

emcpadm import_mappings -v -f NodeA.map
powermt save


Task 2A: Set up and configure NAS for Celerra

Configure NAS and manage Celerra

For details on configuring NAS and managing Celerra, see Supporting Information > Managing and monitoring EMC Celerra.

Task 2B: Set up and configure ASM for CLARiiON

Configure ASM and manage CLARiiON

For details on configuring ASM and managing the CLARiiON, follow the steps below.

Step 1: Find the operating system (OS) version.

[root@fj903-esx01 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
[root@fj903-esx01 ~]#

Step 2: Check the PowerPath installation.

[root@fj903-esx01 ~]# rpm -qa EMC*
EMCpower.LINUX-5.3.1.00.00-111
[root@fj903-esx01 ~]#

Step 3: Partition all disks that need to be used as ASM disks.

[root@fj903-esx01 ~]# fdisk /dev/emcpowere
Command (m for help): u
Changing display/entry units to sectors
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (63-2950472703, default 63): 2048
Last sector or +size or +sizeM or +sizeK (2048-4294967294, default 4294967294):
Using default value 4294967294
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Step 4: Check the ASM rpms applied on the OS.

[root@fj903-esx01 ~]# rpm -qa | grep oracleasm
oracleasm-2.6.18-194.el5-2.0.5-1.el5
oracleasm-support-2.1.3-1.el5
oracleasmlib-2.0.4-1.el5
[root@fj903-esx01 ~]#

Step 5: Configure ASM.

[root@fj903-esx01 ~]# /etc/init.d/oracleasm configure

Step 6: Check the status of ASM.

[root@fj903-esx01 ~]# /etc/init.d/oracleasm status

Step 7: Create all ASM disks. For example:

[root@fj903-esx01 ~]# /etc/init.d/oracleasm createdisk DISK_LOG1 /dev/emcpowers1

Step 8: Scan the ASM disks on each node.

[root@fj903-esx01 ~]# /etc/init.d/oracleasm scandisks

Step 9: List the ASM disks.

[root@fj903-esx01 ~]# /etc/init.d/oracleasm listdisks
DISK_ARCH_1
DISK_ARCH_2
DISK_CRS
DISK_DATA_1
DISK_DATA_10
DISK_DATA_11
DISK_DATA_12
DISK_DATA_13
DISK_DATA_14
DISK_DATA_15
DISK_DATA_16
DISK_DATA_2
DISK_DATA_3
DISK_DATA_4
DISK_DATA_5
DISK_DATA_6
DISK_DATA_7
DISK_DATA_8
DISK_DATA_9
DISK_FRA_1
DISK_LOG1
DISK_LOG2
[root@fj903-esx01 ~]#
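If many disks must be stamped, a small loop can reduce typing. The following is a hypothetical sketch only; the name-to-device pairs must match your own mapping (Table 18 shows the mapping used in this solution), and the trailing 1 assumes the single partition created in step 3:

while read name dev; do
  /etc/init.d/oracleasm createdisk "$name" "$dev"
done <<'EOF'
DISK_DATA_1 /dev/emcpowerr1
DISK_DATA_2 /dev/emcpowero1
DISK_DATA_3 /dev/emcpowerj1
EOF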
PowerPath names

Table 18 shows the PowerPath names associated with the LUNs used in the ASM disk groups.

Table 18. ASM disk names

Disk group name   ASM disk name   Path             CLARiiON LUN
CRS               DISK_CRS        /dev/emcpowerx
DATA              DISK_DATA_1     /dev/emcpowerr   11
                  DISK_DATA_2     /dev/emcpowero   14
                  DISK_DATA_3     /dev/emcpowerj   20
                  DISK_DATA_4     /dev/emcpowerk   19
                  DISK_DATA_5     /dev/emcpowerh   22
                  DISK_DATA_6     /dev/emcpowerf   24
                  DISK_DATA_7     /dev/emcpowerw
                  DISK_DATA_8     /dev/emcpowerq   12
                  DISK_DATA_9     /dev/emcpowern   15
                  DISK_DATA_10    /dev/emcpowera
                  DISK_DATA_11    /dev/emcpowerl   18
                  DISK_DATA_12    /dev/emcpowerm   17
                  DISK_DATA_13    /dev/emcpowerp   13
                  DISK_DATA_14    /dev/emcpowerv   10
                  DISK_DATA_15    /dev/emcpowerg   23
                  DISK_DATA_16    /dev/emcpoweri   21
LOG1              DISK_LOG1       /dev/emcpowers
LOG2              DISK_LOG2       /dev/emcpoweru   27
FRA               DISK_FRA_1      /dev/emcpowerb   29
                  DISK_FRA_2      /dev/emcpowerc   28
ARCH              DISK_ARCH_1     /dev/emcpowerd   26
                  DISK_ARCH_2     /dev/emcpowere   25

Task 3: Set up and configure database servers

Check BIOS version

Fujitsu PRIMERGY RX600 S4 servers were used in our testing. These servers were preconfigured with BIOS version 5.00 Rev. 1.19.2244.
Regardless of the server vendor and architecture, you should check the BIOS version shipped with the system and determine whether it is the latest production version supported by the vendor. If it is not, flashing the BIOS is recommended.

Disable Hyper-Threading

Intel Hyper-Threading technology allows multithreaded operating systems to view a single physical processor as if it were two logical processors. A processor that incorporates this technology shares CPU resources among multiple threads. In theory, this enables faster enterprise-server response times and provides additional CPU processing power to handle larger workloads, improving server performance.
In EMC's testing, however, performance with Hyper-Threading was poorer than performance without it.
For this reason, EMC recommends disabling Hyper-Threading. There are two ways to disable Hyper-Threading:

- In the kernel
- Through the BIOS

Intel recommends disabling Hyper-Threading in the BIOS because it is cleaner than doing so in the kernel. Refer to your server vendor's documentation for instructions.


Task 4: Configure NFS client options

NFS client options

For optimal reliability and performance, EMC recommends the NFS client options listed in Table 19. The mount options are listed in the /etc/fstab file. A sample entry assembled from these options is shown after the table.

Table 19. NFS client options

Hard mount (hard) - Recommended: Always
The NFS file handles are kept intact when the NFS server does not respond. When the NFS server responds, all the open file handles resume, and do not need to be closed and reopened by restarting the application. This option is required for Data Mover failover to occur transparently without having to restart the Oracle instance.

NFS protocol version (vers=3) - Recommended: Always
Sets the NFS version to be used. Version 3 is recommended.

TCP (proto=tcp) - Recommended: Always
All the NFS and RPC requests are transferred over a connection-oriented protocol. This is required for reliable network transport.

Background (bg) - Recommended: Always
Enables client attempts to connect in the background if the connection fails.

No interrupt (nointr) - Recommended: Always
Disallows keyboard interruptions from killing a hung or failed process on a hard-mounted file system.

Read size and write size (rsize=32768, wsize=32768) - Recommended: Always
Sets the number of bytes NFS uses when reading or writing files from an NFS server. The default value is dependent on the kernel; however, throughput can be improved greatly by setting rsize/wsize=32768.

No auto (noauto) - Recommended: Only for backup/utility file systems
Disables automatic mounting of the file system on boot-up. This is useful for file systems that are infrequently used (for example, stage file systems).

Actimeo (actimeo=0) - Recommended: RAC only
Sets the minimum and maximum attribute cache timeouts for regular files and directories to 0 seconds.

Timeout (timeo=600) - Recommended: Always
Sets the time (in tenths of a second) the NFS client waits for a request to complete.
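For example, a datafile mount in /etc/fstab assembled from Table 19 might look like the following sketch. The server address and export reuse the values from the oranfstab example in Task 9; substitute your own:

192.168.4.160:/datafs1  /u04  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0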


Mount options for Oracle RAC files

Table 20 shows the mount options for Oracle RAC files when used with NAS devices.

Table 20. Mount options for Oracle RAC files

Operating system: Linux x86-64

Mount options for binaries:
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0

Mount options for Oracle datafiles:
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

Mount options for CRS voting disk and OCR:
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0

For more information, see Oracle MetaLink Note 359515.1.

sunrpc.tcp_slot_table_entries

The NFS module parameter sunrpc.tcp_slot_table_entries controls the number of concurrent I/Os to the storage system. The default value of this parameter is 16. The parameter should be set to the maximum value (128) for enhanced I/O performance.
To configure this option, type the following command:

[root@fj903-esx01 ~]# sysctl -w sunrpc.tcp_slot_table_entries=128
sunrpc.tcp_slot_table_entries = 128
[root@fj903-esx01 ~]#

Important
To make the setting persistent, also add the entry to /etc/sysctl.conf and then run sysctl -p, which reparses the file and outputs the resulting values.
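A minimal sketch of the persistent configuration (the same entry appears in the /etc/sysctl.conf listing in Task 6; skip the echo if it is already present):

echo "sunrpc.tcp_slot_table_entries = 128" >> /etc/sysctl.conf
sysctl -p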

No protocol overhead

Typically, in comparison to host file system implementations, NFS implementations increase database server CPU utilization by 1 percent to 5 percent. However, most online environments are tuned to run with significant excess CPU capacity.
EMC testing has confirmed that in such environments, protocol CPU consumption does not affect transaction response times.


Task 5: Install Oracle grid infrastructure and Oracle RAC

Install Oracle grid infrastructure for Linux

For information about installing the Oracle grid infrastructure for Linux, see Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

Install Oracle RAC for Linux

For information about installing Oracle RAC for Linux, see Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX.

Task 6: Configure database server memory options

Database server memory

Refer to your database server documentation to determine the total number of memory slots your database server has, and the number and density of the memory modules that you can install. EMC recommends that you configure the system with the maximum amount of memory feasible to meet the scalability and performance needs. Compared to the cost of the remaining components in an Oracle database server configuration, the cost of memory is minor, so configuring an Oracle database server with the maximum amount of memory is entirely appropriate.

Oracle 11g
EMC's testing of Oracle RAC 11g was done with servers containing 128 GB of RAM.

Memory configuration files

Table 21 describes the files that must be configured for memory management.

Table 21. Memory configuration files

/etc/sysctl.conf (created by the Linux installer)
Contains the shared memory parameters for the Linux operating system. This file must be configured in order for Oracle to create the SGA with shared memory.

/etc/security/limits.conf (created by the Linux installer)
Contains the limits imposed by Linux on users' use of resources. This file must be configured correctly in order for Oracle to use shared memory for the SGA.

Oracle parameter file (created by the Oracle installer, dbca, or the DBA who creates the database)
Contains the initialization parameters, including the memory settings, for the Oracle instance.


Configuring /etc/sysctl.conf

Configure the /etc/sysctl.conf file as follows:

# Oracle parameters
kernel.shmmax = 108447924224
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 15010 2131420 15010 142
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
vm.nr_hugepages = 51712
vm.min_free_kbytes = 409600
sunrpc.tcp_slot_table_entries = 128

Table 22 describes the recommended values for the kernel parameters.

Table 22. Recommended values for kernel parameters

kernel.shmmax
Defines the maximum size in bytes of a single shared memory segment that a Linux process can allocate in its virtual address space. Since the SGA is comprised of shared memory, SHMMAX can potentially limit the size of the SGA.
Recommended value: slightly larger than the SGA size.

kernel.shmmni
Sets the system-wide maximum number of shared memory segments.
Recommended value: 4096.

kernel.shmall
Sets the total amount of shared memory pages that can be used system wide. The PAGE_SIZE on the EMC Linux systems was 4096.
Recommended value: at least ceil(shmmax/PAGE_SIZE).


Configuring /etc/security/limits.conf

The section of the /etc/security/limits.conf file relevant to Oracle should be configured as follows:

# Oracle parameters
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft nofile 131072
oracle hard nofile 131072
oracle hard memlock 104857600
oracle soft memlock 104857600

Important
If you do not set the memlock parameter, your database will behave uncharacteristically.
Ensure that the memlock parameter has been configured; it is required in order to use HugePages. This is not covered in the Oracle Database 11g installation guide, so be sure to set this parameter.
For more information, see the Oracle-Base article Large SGA on Linux.
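As a quick check, the limits can be verified as the oracle user; the memlock value is in KB, so 104857600 KB corresponds to the 100 GB SGA sizing used in this guide:

su - oracle -c 'ulimit -l'     # expected: 104857600
su - oracle -c 'ulimit -n'     # expected: 131072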

Task 7: Configure and tune HugePages

Tuning HugePages

The following steps describe how to tune the HugePages parameters to ensure optimum performance.

Step 1: Ensure that the machine you are using has adequate memory. For example, the EMC test system had 128 GB of RAM and a 100 GB SGA.

Step 2: Set vm.nr_hugepages in /etc/sysctl.conf to a size into which the SGA will fit comfortably. For example, to create a HugePages pool of 101 GB, which would be large enough to accommodate the SGA, set the following value (with a 2,048 KB Hugepagesize):

vm.nr_hugepages = 51712

Step 3: Restart the database.

Step 4: Check the values of the HugePages parameters by typing the following command:

[root@fj903-esx01 ~]# grep Huge /proc/meminfo

On our test system, this command produced the following output:

HugePages_Total: 51712
HugePages_Free:   1857
HugePages_Rsvd:  10834
Hugepagesize:     2048 KB

Step 5: If the value of HugePages_Free is equal to zero, the tuning is complete. If the value of HugePages_Free is greater than zero:
a) Subtract the value of HugePages_Free from HugePages_Total. Make a note of the answer.
b) Open /etc/sysctl.conf and change the value of vm.nr_hugepages to the answer you calculated in step a).
c) Repeat steps 3, 4, and 5.
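When the database is running, a starting value can also be derived from the shared memory segments actually allocated. The following is a rough sketch along the lines of the approach in Oracle MetaLink Note 401749.1; treat it as illustrative, and run it with all instances up:

HPAGE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')
TOTAL_BYTES=$(ipcs -m | awk '$5 ~ /^[0-9]+$/ {sum += $5} END {print sum}')
echo "vm.nr_hugepages >= $(( TOTAL_BYTES / 1024 / HPAGE_KB + 1 ))"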

Enable HugePages for RAC 11.2.0.1 on Linux

With Grid Infrastructure 11.2.0.1, if the database is restarted by using the srvctl command, HugePages may not be used on Linux unless SQL*Plus is used to restart each instance. This issue is fixed in 11.2.0.2; in 11.2.0.1, the workaround is as follows.
Modify /etc/init.d/ohasd (this could be /etc/ohasd or /sbin/init.d/ohasd, depending on the platform). Replace the following:

start()
{
$ECHO -n $"Starting $PROG: "

with:

start()
{
$ECHO -n $"Starting $PROG: "
ulimit -n 65536
ulimit -l 104857600

and restart the node.
If you have a bigger SGA, adjust the "ulimit -l" value accordingly. In the workaround above, it is set to 100 GB.

More information about HugePages

For more information on enabling and tuning HugePages, refer to:

- Oracle MetaLink Note 361323.1
- Oracle MetaLink Note 983715.1
- Tuning and Optimizing Red Hat Enterprise Linux for Oracle 9i and 10g Databases


Task 8: Set database initialization parameters

Overview of Task 8

This task describes the initialization parameters that should be set in order to configure the Oracle instance for optimal performance on the CLARiiON CX4 series. These parameters are stored in the spfile or init.ora file for the Oracle instance.

Database block size

Table 23 shows the database block size parameter for this configuration.

Table 23. Database block size parameter

Parameter: Database block size
Syntax: DB_BLOCK_SIZE=n
Description: For best database performance, DB_BLOCK_SIZE should be a multiple of the OS block size. For example, if the Linux page size is 4096, DB_BLOCK_SIZE=4096*n.

Direct I/O

Table 24 shows the direct I/O parameter for this configuration.

Table 24. Direct I/O parameter

Parameter: Direct I/O
Syntax: FILESYSTEMIO_OPTIONS=setall
Description: This setting enables direct I/O and async I/O. Direct I/O is a feature available in modern file systems that delivers data directly to the application without caching in the file system buffer cache. Direct I/O preserves file system semantics and reduces CPU overhead by decreasing the kernel code path execution. I/O requests are passed directly to the network stack, bypassing some code layers. Direct I/O is beneficial to Oracle's log writer, both in terms of throughput and latency. Async I/O is beneficial for datafile I/O.

Multiple database writer processes

Table 25 shows the multiple database writer processes parameter for this configuration.

Table 25. Multiple database writer processes parameter

Parameter: Multiple database writer processes
Syntax: DB_WRITER_PROCESSES=n
Description: The recommended value for db_writer_processes is at least the number of CPUs. During testing, however, we observed very good performance with db_writer_processes set to just 1.

Multiblock read count

Table 26 shows the multiblock read count parameter for this configuration.

Table 26. Multiblock read count parameter

Parameter: Multiblock read count
Syntax: DB_FILE_MULTIBLOCK_READ_COUNT=n
Description: DB_FILE_MULTIBLOCK_READ_COUNT determines the maximum number of database blocks read in one I/O during a full table scan. The number of database bytes read is calculated by multiplying DB_BLOCK_SIZE by DB_FILE_MULTIBLOCK_READ_COUNT. The setting of this parameter can reduce the number of I/O calls required for a full table scan, thus improving performance.
Increasing this value may improve performance for databases that perform many full table scans, but degrade performance for OLTP databases where full table scans are seldom (if ever) performed. Setting this value to a multiple of the NFS READ/WRITE size specified in the mount limits the amount of fragmentation that occurs in the I/O subsystem. Note that this parameter is specified in database blocks while the NFS settings are in bytes; adjust as required.
EMC recommends that DB_FILE_MULTIBLOCK_READ_COUNT be set to between 1 and 4 for an OLTP database and to between 16 and 32 for DSS.

Disk async I/O

Table 27 shows the disk async I/O parameter for this configuration.

Table 27. Disk async I/O parameter

Parameter: Disk async I/O
Syntax: DISK_ASYNCH_IO=true
Description: RHEL 4 update 3 and later support async I/O with direct I/O on NFS. Async I/O is now recommended on all storage protocols. The default value is true in Oracle 11.2.0.1.
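Pulled together, the Task 8 parameters might appear in the instance parameter file as in the following sketch. The values shown are illustrative (an 8 KB block size and an OLTP-oriented multiblock read count), not mandated by this guide:

db_block_size=8192
filesystemio_options=setall
db_writer_processes=2
db_file_multiblock_read_count=4     # use 16-32 for DSS workloads
disk_asynch_io=true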


Task 9: Configure the Oracle DNFS client

Configure oranfstab

When you use DNFS, you must create a new configuration file, oranfstab, to specify the options/attributes/parameters that enable Oracle Database to use DNFS. The oranfstab file must be placed in the $ORACLE_HOME/dbs directory. When oranfstab is placed in this directory, the entries in the file are specific to a single database. The DNFS client searches for the mount point entries as they appear in oranfstab and uses the first matched entry as the mount point.
Use the following steps to configure the oranfstab file.

Step 1: Create a file called oranfstab in $ORACLE_HOME/dbs/:

[oracle@fj903-esx01 ~]$ cat /u01/app/oracle/product/11.2.0/dbhome_1/dbs/oranfstab
server: 192.168.4.160
local: 192.168.4.10
path: 192.168.4.160
export: /datafs1 mount: /u04
export: /log1fs mount: /u05
export: /flashfs mount: /u08
export: /data_efd1 mount: /u10
server: 192.168.5.160
local: 192.168.5.10
path: 192.168.5.160
export: /log2fs mount: /u06
export: /datafs2 mount: /u09
export: /data_efd2 mount: /u11
[oracle@fj903-esx01 ~]$

Step 2: Replicate the oranfstab file on all nodes and keep it synchronized.


Apply the ODM NFS library

To enable DNFS, Oracle Database uses an ODM library called libnfsodm11.so. You must replace the standard ODM library, libodm11.so, with the ODM NFS library, libnfsodm11.so.
Use the following steps to replace the standard ODM library with the ODM NFS library.

Step 1: Change the directory to $ORACLE_HOME/lib.

Step 2: Shut down Oracle.

Step 3: Run the following commands on the database servers:

$ mv libodm11.so libodm11.so_stub
$ ln -s libnfsodm11.so libodm11.so

Enable transChecksum on the Celerra Data Mover

EMC recommends that you enable transChecksum on the Data Mover that serves the Oracle DNFS clients. This avoids the likelihood of TCP port and XID (transaction identifier) reuse by two or more databases running on the same physical server, which could possibly cause data corruption.
To enable transChecksum, type:

# server_param <movername> -facility nfs -modify transChecksum -value 1

Note
This applies to NFS version 3 only. Refer to the NAS Support Matrix available on Powerlink to understand which Celerra versions support this parameter.

DNFS network setup

For the DNFS network setup:

- Port bonding and load balancing are managed by the Oracle DNFS client in the database; therefore, there are no additional network setup steps.
- If OS NIC/connection bonding is already configured, you should reconfigure the OS to release the connections so that they operate as independent ports. DNFS will then manage the bonding, high availability, and load balancing for the connections.
- Dontroute specifies that outgoing messages should not be routed by the operating system, but sent using the IP address to which they are bound. If dontroute is not specified, it is mandatory that all paths to the Celerra are configured in separate network subnets (see the sketch after this section).

The network setup can now be managed by an Oracle DBA through the oranfstab file. This frees the database sysdba from the specific bonding tasks previously necessary for OS LACP-type bonding, for example, the creation of separate subnets.
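A minimal oranfstab fragment showing dontroute (reusing the addresses from the Task 9 example; the keyword applies to the server entry in which it appears):

server: 192.168.4.160
local: 192.168.4.10
path: 192.168.4.160
dontroute
export: /datafs1 mount: /u04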



Mounting DNFS

As described in Configure oranfstab above, mounting through DNFS requires that you:

- Add oranfstab to the $ORACLE_HOME/dbs directory.
- For Oracle RAC, replicate the oranfstab file on all nodes and keep the copies synchronized.

Mounting multiple servers

When multiple servers are listed, the DNFS client searches the mount point entries in the order in which they appear in oranfstab and uses the first matched entry as the mount point.


Task 10: Verify that DNFS has been enabled

Overview of Task 10

This task contains a number of queries that you can run to verify that DNFS is enabled for the database.

Check the available DNFS storage paths

To check the available DNFS storage paths, run the following query:

SQL> select unique path from v$dnfs_channels;

PATH
----------------------------------------------------------------
192.168.4.160
192.168.5.160

Check the data files configured under DNFS

To check the data files configured under DNFS, run the following query:

SQL> select filename from v$dnfs_files;

FILENAME
----------------------------------------------------------------
/u04/oradata/mterac28/control01.ctl
/u04/oradata/mterac28/control02.ctl
/u09/oradata/mterac28/tpcc002.dbf
/u09/oradata/mterac28/tpcc004.dbf
/u09/oradata/mterac28/tpcc006.dbf
/u09/oradata/mterac28/tpcc008.dbf
/u09/oradata/mterac28/tpcc010.dbf
/u09/oradata/mterac28/tpcc012.dbf
/u09/oradata/mterac28/tpcc014.dbf
...
/u09/oradata/mterac28/undotbs2_8.dbf
/u09/oradata/mterac28/undotbs3_6.dbf
/u09/oradata/mterac28/undotbs3_8.dbf
/u09/oradata/mterac28/undotbs4_6.dbf


Check the server and the directories configured under DNFS

To check the server and the directories configured under DNFS, run the following query:

SQL> select inst_id, svrname, dirname, nfsport from gv$dnfs_servers;

The query output is shown below.

   INST_ID  SVRNAME          DIRNAME      NFSPORT
----------  ---------------  ----------  --------
         1  192.168.4.160    /datafs1        2049
         1  192.168.5.160    /datafs2        2049
         1  192.168.4.160    /log1fs         2049
         1  192.168.5.160    /log2fs         2049
         2  192.168.4.160    /datafs1        2049
         2  192.168.5.160    /datafs2        2049
         2  192.168.5.160    /log2fs         2049
         2  192.168.4.160    /log1fs         2049
         3  192.168.4.160    /datafs1        2049
         3  192.168.5.160    /datafs2        2049
         3  192.168.5.160    /log2fs         2049
         3  192.168.4.160    /log1fs         2049
         4  192.168.4.160    /datafs1        2049
         4  192.168.5.160    /datafs2        2049
         4  192.168.4.160    /log1fs         2049
         4  192.168.5.160    /log2fs         2049

16 rows selected.


Task 11: Configure Oracle Database control files and logfiles

Control files

EMC recommends that when you create the control file, you allow for growth by setting MAXINSTANCES, MAXDATAFILES, MAXLOGFILES, and MAXLOGMEMBERS to high values.
Your database should have a minimum of two control files located on separate physical ASM disk groups or NFS file systems. One way to multiplex your control files is to store a control file copy on every location that stores members of the redo log groups.
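For example, a sketch of multiplexing the control files across the redo log file systems used in this guide (/u05 and /u06); the paths are illustrative, and the files must be copied to the new locations and the instance restarted for the change to take effect:

$ sqlplus / as sysdba <<'EOF'
alter system set control_files=
  '/u05/oradata/mterac28/control01.ctl',
  '/u06/oradata/mterac28/control02.ctl'
  scope=spfile sid='*';
EOF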

Online and archived redo log files

EMC recommends that you:

- Run a mission-critical production database in ARCHIVELOG mode.
- Multiplex your redo log files for these databases. Loss of online redo log files could result in a database recovery failure. The best practice for multiplexing online redo log files is to place the members of a redo log group in different locations.

To understand how redo log and archive log files can be placed, refer to the Reference Architecture diagrams (Figure 1 and Figure 2).


Task 12: Enable passwordless authentication using ssh (optional)

Passwordless authentication using ssh

Passwordless authentication using ssh is a fundamental concept for making successful use of Oracle RAC 11g with Celerra.
However, during installation of Oracle Grid Infrastructure 11.2.0.1, you do not need to set up ssh passwordless authentication between RAC nodes in advance; you can complete this task through the Oracle Universal Installer GUI.

Note
Be careful about enabling passwordless authentication; only consider it when installing Oracle RAC, where it is mandatory. Never set up passwordless authentication for any user other than an Oracle RAC software owner.

ssh files

Passwordless authentication using ssh relies on the files described in Table 28.
Table 28. ssh files

~/.ssh/id_dsa.pub (created by ssh-keygen)
Contains the host's DSA key for ssh authentication (functions as the proxy for a password).

~/.ssh/authorized_keys (created by ssh)
Contains the DSA keys of hosts that are authorized to log in to this server without issuing a password.

~/.ssh/known_hosts (created by ssh)
Contains the DSA key and hostname of all hosts that are allowed to log in to this server using ssh.

The most important ssh file is id_dsa.pub.

Important
If the id_dsa.pub file is re-created after you have established passwordless authentication from a host onto another host, the passwordless authentication will cease to work.
Therefore, do not accept the option to overwrite id_dsa.pub if ssh-keygen is run and it discovers that id_dsa.pub already exists.


Enabling authentication: single user/single host

Use the following steps to enable passwordless authentication using ssh for a single user on a single host. A sketch of these steps appears after the list.

Step 1: Create the id_dsa.pub file using ssh-keygen.

Step 2: Copy the key of the host being authorized into the authorized_keys file of the host that allows the login.

Step 3: Complete a login so that ssh records the host that is logging in; that is, record the host's key and hostname in the known_hosts file.
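A sketch of these three steps for the oracle user from node1 to node2 (the hostnames are hypothetical):

ssh-keygen -t dsa                                        # step 1: accept the defaults; do not overwrite an existing key
cat ~/.ssh/id_dsa.pub | ssh oracle@node2 \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'     # step 2
ssh oracle@node2 date                                    # step 3: records node2 in known_hosts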

Enabling authentication: single user/multiple hosts

Prerequisites
To enable authentication for a user on multiple hosts, you must first enable authentication for the user on a single host (see Task 12 > Enabling authentication: single user/single host).

Procedure summary
After you have enabled authentication for a user on a single host, you can then enable authentication for the user on multiple hosts by copying the authorized_keys and known_hosts files to the other hosts.
Before Oracle RAC 11g R2, this was a very common task when setting up Oracle RAC prior to the installation of Oracle Clusterware. It is possible to automate this task by using the ssh_multi_handler.bash script. Below is a sample script; it needs to be modified to suit the real environment.

ssh_multi_handler.bash

#!/bin/bash
#------------------------------------------------------------#
# Script:  ssh_multi_handler.bash
# Purpose: Handles creation of authorized_keys
#------------------------------------------------------------#
ALL_HOSTS="rtpsol347 rtpsol348 rtpsol349 rtpsol350"
THE_USER=root

# Preserve any existing key files on this host
mv -f ~/.ssh/authorized_keys ~/.ssh/authorized_keys.bak
mv -f ~/.ssh/known_hosts ~/.ssh/known_hosts.bak

# Generate a key on each host and collect the public keys
for i in ${ALL_HOSTS}
do
    ssh ${THE_USER}@${i} "ssh-keygen -t dsa"
    ssh ${THE_USER}@${i} "cat ~/.ssh/id_dsa.pub" \
        >> ~/.ssh/authorized_keys
    ssh ${THE_USER}@${i} date
done

# Distribute the combined key files to every host
for i in ${ALL_HOSTS}
do
    scp ~/.ssh/authorized_keys ~/.ssh/known_hosts \
        ${THE_USER}@${i}:~/.ssh/
done

# Exercise every host-to-host pair so that all keys are recorded
for i in ${ALL_HOSTS}
do
    for j in ${ALL_HOSTS}
    do
        ssh ${THE_USER}@${i} "ssh ${THE_USER}@${j} date"
    done
done

# Restore the preserved key files on this host
mv -f ~/.ssh/authorized_keys.bak ~/.ssh/authorized_keys
mv -f ~/.ssh/known_hosts.bak ~/.ssh/known_hosts
exit
How to use ssh_multi_handler.bash

Use the following steps to automate the task of enabling authentication for the user on multiple hosts. At the end of the procedure, all of the equivalent users on the set of hosts will be able to log in to all of the other hosts without issuing a password.

Step 1: Copy and paste the text from ssh_multi_handler.bash into a new file on the Linux server.

Step 2: Edit the variable definitions at the top of the script.

Step 3: chmod the script to allow it to be executed.

Step 4: Run the script.


Enabling authentication: single host/different user

Another common task is to set up passwordless authentication across two users between two hosts. For example, you might enable the Oracle user on the database server to run commands as the root or nasadmin user on the Celerra Control Station.
You can set this up by using the ssh_single_handler.bash script. This script creates passwordless authentication from the presently logged-in user to the root user on the Celerra Control Station.

ssh_single_handler.bash

#!/bin/bash
#------------------------------------------------------------#
# Script:  ssh_single_handler.bash
# Purpose: Handles creation of authorized_keys
#------------------------------------------------------------#
THE_USER=root
THE_HOST=rtpsol33

ssh-keygen -t dsa
KEY=`cat ~/.ssh/id_dsa.pub`
ssh ${THE_USER}@${THE_HOST} "echo ${KEY} >> \
    ~/.ssh/authorized_keys"
ssh ${THE_USER}@${THE_HOST} date
exit


Chapter 7: Testing and Validation


Overview

Introduction to testing and validation

This chapter provides a summary and characterization of the tests performed to validate the solution. The objective of the testing was to characterize the end-to-end solution and component subsystem response under a reasonable load, representative of the market for Oracle 11g on Linux with Celerra unified storage using either a DNFS or an ASM configuration.

Note
Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated.
All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly.
EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute.

Contents

This section contains the following topics:

•  Testing tools
•  Test procedure
•  Test results
•  DNFS configuration test results
•  ASM configuration test results


Testing tools

Overview of testing tools

To properly test performance in the four-node RAC environment, Quest Benchmark
Factory for Database 5.8 was used to conduct repeatable and measurable
performance testing.

Data population tool

Benchmark Factory has its own data population function. The data load function is
created automatically when a scenario is created.

As shown in Figure 7, the scale factor determines the amount of information initially
loaded into the benchmark tables. For the TPC-C benchmark, each scale factor
represents one warehouse, per the TPC-C specification.

The TPC-C benchmark involves a mix of five concurrent transactions of different
types and complexity. The database is composed of nine tables with a wide range of
record counts. You can alter the scale factor to meet the database size requirement
before running the data load scenario.

Figure 7. Benchmark TPC-C properties

There was one load agent on the host, allowing data to be loaded in parallel (the
agent creates nine database sessions to populate the nine TPC-C tables) until the
1 TB goal was reached.

With a benchmark scale of 11,500, the data population completed in around 40
hours, including index creation.
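As a rough sanity check on these numbers (assuming the 1 TB target and the scale factor above), each warehouse works out to about 1 TB / 11,500 ≈ 90 MB of loaded data, which is consistent with the roughly 100 MB per warehouse (including indexes) commonly cited for TPC-C databases.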


Benchmark Factory console and agents

All tests were run from a Benchmark Factory console on a Microsoft Windows 2003
server, with 18 agents started on six Microsoft Windows 2003 servers.

Test procedure

EMC used the steps in the following table to validate the performance test; a
scripted form of step 3 follows the table.

Step   Action
1      Close all Benchmark Factory agents that are running.
2      Restart all the client machines.
3      Restart all database instances:
       srvctl stop database -d mterac28
       srvctl start database -d mterac28
4      Initiate the Benchmark Factory console and agents on the client machines.
5      Start the Benchmark Factory job.
6      Monitor the progress of the test.
7      Allow the test to finish.
8      Capture the results.
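For reference, step 3 can be scripted as shown below, using the database unique name from the table; the status check at the end is an illustrative addition to confirm that all four instances restarted.

# Bounce all instances of the RAC database between test runs
srvctl stop database -d mterac28
srvctl start database -d mterac28
srvctl status database -d mterac28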


Test results

Overview of test results

The test results for the four-node Oracle RAC 11g DNFS and ASM configurations
are detailed below.

The overall SGA size on each database server was set to around 100 GB. To make
full use of the memory resources on each server, SGA memory management was
configured manually, with the buffer cache, shared pool, and large pool each sized
individually.

Table 29 provides a summary of the cache size settings.

Table 29. Cache size settings

I#   Memory target   SGA target   DB cache   Shared pool   Large pool   Java pool   Streams pool   PGA target   Log buffer
1    -               -            90,112     9,216         2,048        -           -              1,536        55.47
2    -               -            90,112     9,216         2,048        -           -              1,536        55.47
3    -               -            90,112     9,216         2,048        -           -              1,536        55.47
4    -               -            90,112     9,216         2,048        -           -              1,536        55.47

In addition to the manual SGA configuration, HugePages were enabled on each
server to improve memory performance, and Oracle DNFS was enabled to improve
I/O performance. Jumbo frames were enabled on the servers, switches, and Celerra
for the RAC interconnect and IP storage networks to improve network throughput.

The same test procedure was performed on the ASM and DNFS configurations
individually. Database, OS, network, and storage-related statistics were captured
both for tuning and for performance comparison.
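As an illustration, HugePages and jumbo frames are typically enabled on Linux with settings like the following; the values shown are hypothetical and must be sized for the actual SGA and network interfaces.

# /etc/sysctl.conf -- reserve 2 MB HugePages to cover the ~100 GB SGA
# (illustrative count: SGA bytes / 2 MB page size, plus a small margin)
vm.nr_hugepages = 51500

# /etc/security/limits.conf -- let the oracle user lock the SGA in memory (KB)
oracle   soft   memlock   105906176
oracle   hard   memlock   105906176

# Enable jumbo frames on the storage/interconnect interface (illustrative device)
ifconfig eth2 mtu 9000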

Performance testing

The performance testing was designed as a set of measurements to determine the
saturation point of the solution stack. A reasonable amount of fine tuning was
performed to ensure that the performance measurements achieved were consistent
with real-world, optimum performance.

Scalability testing

For scalability testing, EMC ran the performance tests and added user load until
performance degraded or connections could no longer be served by the database.


DNFS configuration test results

The DNFS configuration for the four-node RAC used two ports, each connected to a
separate Data Mover.

Table 30 shows the test results.

Table 30. DNFS configuration test results

Users    TPS        Response time   DB CPU busy average   Physical reads   Physical writes   Redo size (K)
11,200   2,068.67   0.568           64.68                 2,156,916        588,131           2,851,585

Figure 8. TPS/response time for DNFS configuration



Table 31 shows the testbed results for a user load of 11,200.

Table 31. Testbed results for DNFS (user load: 11,200)

Virtual station ID   TPS      Executions   Rows     Bytes       Avg response time
1                    113.33   20416        27707    8873618     0.575
2                    115.24   20751        27893    8904559     0.558
3                    115.4    20792        28479    8846424     0.574
4                    116.38   20961        28146    8805911     0.573
5                    115.92   20884        28320    8848351     0.569
6                    115.25   20732        27888    8829376     0.575
7                    115.9    20903        28576    8897491     0.575
8                    114.85   20698        28003    8860614     0.563
9                    113.56   20454        26909    8831482     0.571
10                   113.84   20512        27193    8842175     0.581
11                   114.94   20694        28216    8861084     0.571
12                   114.96   20705        28257    8889954     0.555
13                   113.78   20503        28414    8790546     0.578
14                   114.21   20558        27321    8876515     0.560
15                   115.5    20814        27994    8897519     0.554
16                   116.63   21027        29640    8926348     0.551
17                   114.01   20548        26845    8858376     0.578
18                   114.97   20723        27573    8877471     0.576
Total                2068.7   372675       503374   159517814   0.569 (average)


Table 32 shows the transaction results for a user load of 11,200.

Table 32. Transaction results for DNFS (user load: 11,200)

Name                       Executions   Rows     Bytes       Avg response time
New Order Transaction      169300       167609   102576708   0.648
Payment Transaction        160062       160062   49759654    0.26
Order-Status Transaction   13900        13900    6534240     1.023
Delivery Transaction       14710        147100   588400      1.205
Stock-Level Transaction    14703        14703    58812       1.936

Table 33 shows the OS statistics for all database instances. For DNFS, the CPU wait
for I/O operations (% WIO) is almost zero, which is very low compared to the ASM
configuration.

Table 33. OS statistics for DNFS configuration

I#   Load Begin   Load End   % Busy   % Usr   % Sys   % WIO   % Idle   Busy Time (s)   Idle Time (s)
1    2.46         16.75      66.91    54.09   8.61    0.04    33.09    3,513.32        1,737.31
2    2.64         15.93      63.18    50.7    8.31    0.06    36.82    3,318.60        1,934.25
3    2.82         12.1       64.5     51.8    8.57    0.05    35.5     3,408.60        1,875.76
4    2.09         15.12      64.12    51.73   8.48    0.04    35.88    3,373.77        1,887.74

Table 34 shows the I/O statistics for all database files, including data files, temp
files, control files, and redo logs.

Table 34. I/O statistics for DNFS configuration

      Reads MB/s           Writes MB/s                     Reads requests/s        Writes requests/s
I#    Total    Data File   Total   Data File   Log File   Total      Data File    Total      Data File   Log File
1     13.45    12.55       6.03    3.73        2.28       1,660.78   1,603.10     550.88     369.02      180.82
2     13.83    12.93       5.47    3.32        2.14       1,712.61   1,655.03     529.38     346.8       181.66
3     14.13    13.22       5.91    3.65        2.24       1,749.66   1,691.62     545.32     357.35      186.96
4     13.55    12.66       5.65    3.35        2.28       1,681.68   1,624.07     538.7      347.44      190.27
Sum   54.97    51.36       23.06   14.06       8.94       6,804.71   6,573.82     2,164.28   1,420.62    739.71
Avg   13.74    12.84       5.76    3.51        2.24       1,701.18   1,643.46     541.07     355.15      184.93


ASM configuration test results

The ASM configuration for the four-node RAC used one two-port HBA card, with
each port connected individually to the CLARiiON CX4-960 back-end storage.

Table 35 shows the test results.

Table 35. ASM configuration test results

Users    TPS        Response time   DB CPU busy average   Physical reads   Physical writes   Redo size (K)
11,200   2,106.41   0.488           65.16                 2,065,183        1,357,951         3,028,227

Figure 9. TPS/response time for ASM configuration



Table 36 shows the testbed results for a user load of 11,200.

Table 36. Testbed results for ASM (user load: 11,200)

Virtual station ID   TPS       Executions   Rows     Bytes       Avg response time
1                    115.22    20730        28139    8998621     0.497
2                    117.16    21098        28471    9059748     0.489
3                    118.44    21312        29322    9068017     0.49
4                    118.29    21297        28515    8974151     0.504
5                    117.65    21160        28648    8966422     0.495
6                    116.97    21061        28485    8983977     0.493
7                    118.24    21253        28980    9026747     0.498
8                    117.5     21157        28718    9040102     0.484
9                    116.08    20875        27401    9023315     0.481
10                   116.19    20879        27871    8986565     0.494
11                   116.61    20942        28581    8957636     0.498
12                   116.51    20955        28606    8992325     0.48
13                   116.37    20915        28992    8970585     0.492
14                   116.77    20987        27877    9059288     0.476
15                   117.22    21082        28243    9016271     0.475
16                   118.18    21247        30034    9012472     0.478
17                   115.52    20811        26730    9004728     0.49
18                   117.5     21095        27940    9026151     0.485
Total                2106.42   378856       511553   162167121   0.489 (average)


Table 37 shows the transaction results for a user load of 11,200.

Table 37. Transaction results for ASM (user load: 11,200)

Name                       Executions   Rows     Bytes       Avg response time
New Order Transaction      171885       170203   104164236   0.544
Payment Transaction        162896       162896   50699967    0.222
Order-Status Transaction   14138        14138    6645654     0.93
Delivery Transaction       14931        149310   597240      1.064
Stock-Level Transaction    15006        15006    60024       1.751

Table 38 shows the OS statistics for all database instances. For ASM, the CPU wait
for I/O operations (% WIO) is around 5 percent, which is higher than the value
achieved with the DNFS configuration.

Table 38. OS statistics for ASM configuration

I#   Load Begin   Load End   % Busy   % Usr   % Sys   % WIO   % Idle   Busy Time (s)   Idle Time (s)
1    3.21         16.44      68.3     54.84   9.08    4.15    31.7     3,714.70        1,724.39
2    3.76         13.85      62.67    49.94   8.66    6.16    37.33    3,420.06        2,037.20
3    3.57         12.76      63.99    51.19   8.75    6.07    36.01    3,504.39        1,972.04
4    4.03         17.01      65.67    52.79   8.87    5.32    34.33    3,584.07        1,873.38

Table 39 shows the I/O statistics for all database files, including data files, temp
files, control files, and redo logs.

Table 39. I/O statistics for ASM configuration

      Reads MB/s           Writes MB/s                     Reads requests/s        Writes requests/s
I#    Total    Data File   Total   Data File   Log File   Total      Data File    Total      Data File   Log File
1     13.15    12.28       10.29   7.88        2.39       1,623.91   1,568.08     1,177.84   858.99      317.79
2     12.62    11.75       9.81    7.58        2.21       1,557.58   1,501.48     1,163.28   845.46      316.85
3     12.32    11.45       10.13   7.81        2.31       1,520.45   1,464.38     1,202.67   868         333.75
4     12.9     12.01       10.27   7.88        2.37       1,590.98   1,534.21     1,188.55   858.18      329.45
Sum   50.99    47.48       40.49   31.15       9.28       6,292.91   6,068.15     4,732.34   3,430.63    1,297.84
Avg   12.75    11.87       10.12   7.79        2.32       1,573.23   1,517.04     1,183.08   857.66      324.46


Conclusion of test results

Both the DNFS and ASM tests demonstrate the expected peak TPS statistics, with
only small variations in performance across the two environments, as shown in
Table 40.

The workload run was an OLTP benchmark with a read/write mix of 3:1 (75 percent
random reads and 25 percent random writes) using small I/Os.

TPS is transactions per second of the OLTP workload. This is an accurate
measurement of the database processing capabilities of the testbed, including the
storage layer. IOPS was not included in the table because NFS IOPS and SAN
IOPS are not comparable.

Table 40. Summary of performance results

       Peak TPS   CPU Busy (%)   CPU WIO (%)   DB file sequential read wait (ms)
DNFS   2,069      64.68          0.05          7.97
ASM    2,106      65.16          5.43          7.85

Figure 10 shows the peak TPS rates and DB file sequential read latency for the
DNFS and ASM configurations. The DB file sequential read latency for ASM is about
1.5 percent lower than for DNFS, while the peak TPS is very close for the two
configurations.

Figure 10. Peak TPS and DB file sequential read wait (ms)


Figure 11 shows the CPU utilization for both the DNFS and ASM configurations.

Figure 11. CPU utilization

Figure 12 shows the CPU utilization rate at peak TPS on the first node. The vmstat
utility, available on all UNIX and Linux systems, gives an overall indication of CPU
utilization.

The chart in Figure 12 is generated from vmstat output and shows that DNFS has
more CPU idle time than ASM during the peak TPS phase of TPC-C testing. To
some extent, this result demonstrates that offloading file management to an NFS
server by means of DNFS reduces CPU utilization on the production database
server.
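For reference, a typical way to capture these samples during a test run is shown below; the interval, sample count, and output path are illustrative.

# Sample CPU statistics every 5 seconds for ~1 hour and save them for charting;
# the us/sy/id/wa columns are the basis of Figures 12 and 13.
vmstat 5 720 > /tmp/node1_vmstat.log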


Figure 12. CPU statistics

Table 41 details the observations for the CPU statistics shown in Figure 12.

Table 41. CPU statistics

CPU time category   Description                                                      Observation
User Time           Time spent running non-kernel code (user time, including nice   ASM is slightly higher than DNFS.
                    time)
System Time         Time spent running kernel code (system time)                    ASM is slightly higher than DNFS; at some points, they are almost the same.
Idle Time           Time spent idle                                                  DNFS has more CPU idle time than ASM.
I/O Wait Time       Time spent waiting for I/O                                       For DNFS, almost zero CPU wait time for I/O.

Figure 13 is generated from the CPU statistics shown in Figure 12. The id column
was converted into % Utilization by subtracting the idle percentage from 100. The
middle horizontal dotted line shows that the average utilization for ASM is about 75
percent, while the average utilization for DNFS is about 70 percent. For ASM, the
utilization line is nearly flat at 100 percent during parts of this sampling period, while
for DNFS some headroom remains.
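The conversion itself is a one-liner, for example with awk over the captured vmstat log from earlier; the idle column is $15 in the classic vmstat layout, but verify the position against the header of your own output.

# Print % Utilization = 100 - idle for each sample, skipping the two header lines
awk 'NR > 2 { print 100 - $15 }' /tmp/node1_vmstat.log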


Figure 13. CPU utilization rate at peak TPS on the first node

At the point of CPU saturation, processes begin to wait for CPU in the run queue, as
shown in Figure 14. Correspondingly, the number of processes in the run queue for
ASM peaks at 170, while for DNFS the maximum is 129.

Figure 14. CPU run queue at peak TPS on the first node


To further explore the CPU utilization advantage of DNFS, statistics were captured
at a user load of 8,000, where TPS is almost the same for both configurations. With
the same workload, the differences in CPU utilization are clearer, as shown in
Table 42.

Table 42. Summary of performance results at the same workload

       User Load   Peak TPS   CPU Busy (%)   CPU WIO (%)
DNFS   8,000       1,669.27   49.91          0.06
ASM    8,000       1,669.99   50.62          10.94

Figure 15 shows the CPU utilization rate with the same workload on the first node.

Figure 15. CPU utilization at the same workload on the first node

Figure 16 is generated from the CPU statistics shown in Figure 15. The id column
was converted into % Utilization by subtracting the idle percentage from 100. The
middle horizontal dotted line shows that the average utilization for ASM is about 66
percent, while the average utilization for DNFS is about 53 percent. For ASM, the
utilization line is nearly flat at 100 percent during parts of this sampling period, while
for DNFS some headroom remains.


Figure 16. CPU utilization rate at the same workload on the first node

At the point of CPU saturation, processes begin to wait for CPU in the run queue, as
shown in Figure 17. Correspondingly, the number of processes in the run queue for
ASM peaked at 65, while for DNFS the maximum is 40.

Figure 17. CPU run queue at the same workload on the first node


Conclusion for test results

The test results demonstrate that EMC unified storage, in either a DNFS or an ASM
configuration, is competitive on cost and performance with a traditional storage
infrastructure under the tested user loads.


Chapter 8: Conclusion
Overview

Introduction

The EMC Celerra unified storage platform's high-availability features, combined with
EMC's proven storage technologies, provide a very attractive storage system for
Oracle RAC 11g over DNFS or ASM.

Findings and conclusion

Findings

Table 43 details this solution's objectives and the results achieved.

Table 43. Objectives and results

Objective                                                          Results
Demonstrate the baseline performance of the Celerra NS-960         EMC achieved a maximum TPS of 2,068.67 at a user load of
running over NFS with Oracle RAC 11g R2 DNFS on a 10 GbE           11,200 with a response time of 0.568 seconds.
network.
Demonstrate the baseline performance of the Celerra NS-960         EMC achieved a maximum TPS of 2,106.41 at a user load of
running over FC with an Oracle RAC 11g ASM environment.            11,200 with a response time of 0.488 seconds.
Scale the workload and show the database performance               EMC scaled the user load from 1,000 to 12,500, at which point
achievable on the array over NFS and over ASM.                     the response time exceeded 2 seconds and the TPS started to
                                                                   decline.

Conclusion

This solution provides the following benefits:

•  The solution enables the Oracle RAC 11g configuration by providing shared
   disks.
•  The Data Mover failover capability provides uninterruptible database access.
•  Redundant components on every level, such as the network connections,
   back-end storage connections, RAID, and power supplies, achieve a very
   high level of fault tolerance, thereby providing continuous storage access to
   the database.
•  The overall Celerra architecture and its connectivity to the back-end storage
   make it highly scalable, with the ease of increasing capacity by simply adding
   components for immediate usability.

Running Oracle RAC 11g with Celerra provides the best availability, scalability,
manageability, and performance for your database applications.
Oracle 11g environments using DNFS or ASM provide options to customers
depending on their familiarity and expertise with a chosen protocol, existing
architecture, and budgetary constraints. Testing proves that both implementations
have similar performance profiles, so it is the customer's responsibility to choose the
protocol and architecture that best fit their specific needs.

EMC unified storage provides flexibility and manageability for a storage
infrastructure that supports either of these architectures. Unified storage can also
offer hybrid architectures that utilize both protocols in a single solution; for example,
production could run ASM over an FC SAN while test/dev runs over IP with DNFS.

Next steps

EMC can help accelerate assessment, design, implementation, and management
while lowering the implementation risks and costs of an end-to-end solution for an
Oracle Database 11g environment.

To learn more about this and other solutions, contact an EMC representative or visit:
http://www.emc.com/solutions/application-environment/oracle/solutions-for-oracledatabase.htm


Supporting Information

Overview

Introduction

This appendix contains supporting information referred to in this guide.

Managing and monitoring EMC Celerra

Celerra Manager

Celerra Manager is a web-based graphical user interface (GUI) for remote
administration of a Celerra unified storage platform. Various tools within Celerra
Manager provide the ability to monitor the Celerra. These tools highlight potential
problems that have occurred or could occur in the future. Some of these tools are
delivered with the basic version of Celerra Manager, while more detailed monitoring
capabilities are delivered in the advanced version.

Celerra Manager can also be used to create Ethernet channels, link aggregations,
and fail-safe networks.

Celerra Data Mover ports

The Celerra Data Mover provides 10 GbE and 1 GbE storage network ports. The
number and type of ports vary significantly across Celerra models. Figure 18 shows
the back side of a Data Mover.

Figure 18. The back side of the Celerra Data Mover

For KNFS, storage network ports on the Data Movers are aggregated and connected
to the storage network. These handle all I/O required by the database servers to the
datafiles, online redo log files, archived log files, control files, OCR file, and voting
disk. Unused ports cge2 and cge3 are left open for future growth.

Link aggregation is removed for validating DNFS. Four Data Mover ports are used
as independent ports to validate DNFS. DNFS provides automatic load balancing
and failover across these ports, with no configuration required on the Celerra other
than assigning IP addresses to the ports and connecting them to the network.
Typically, a separate subnet is used for each physical connection between the
Celerra Data Movers and the ESX servers.
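On the database server side, the four independent ports would typically be presented to DNFS through an oranfstab entry similar to the following sketch; the server name, IP addresses, export, and mount point are hypothetical, not the exact values used in this validation.

# /etc/oranfstab -- illustrative DNFS multipath definition
server: celerra_dm2                       # Data Mover name (assumed)
path: 192.168.1.10                        # one IP per independent DM port,
path: 192.168.2.10                        # each on its own subnet
path: 192.168.3.10
path: 192.168.4.10
export: /oradata_fs  mount: /u02/oradata  # NFS export and local mount point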


Enterprise Grid Control storage monitoring plug-in

EMC recommends use of the Oracle Enterprise Manager monitoring plug-in for the
EMC Celerra unified storage platform. This system monitoring plug-in enables you
to:

•  Realize immediate value through out-of-box availability and performance
   monitoring
•  Realize lower costs through knowledge: know what you have and what has
   changed
•  Centralize all of the monitoring information in a single console
•  Enhance service modeling and perform comprehensive root cause analysis

For more information on the plug-in for the EMC Celerra server, see Oracle
Enterprise Manager 11g System Monitoring Plug-In for EMC Celerra Server.

Figure 19 shows the EMC Celerra OEM plug-in.

Figure 19. OEM 11g plug-in for EMC Celerra Server
