
Oracle SuperCluster T5-8 Owner's Guide

Part No: E40167-17


May 2016
Oracle SuperCluster T5-8 Owner's Guide
Part No: E40167-17
Copyright © 2015, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except
as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform,
publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation,
delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the
hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous
applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all
appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of
SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are
not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement
between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Access to Oracle Support

Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Contents

Using This Documentation ................................................................................ 13

Understanding the System ................................................................................ 15


Understanding Oracle SuperCluster T5-8 ...........................................................  15
Spares Kit Components ..........................................................................  16
Oracle SuperCluster T5-8 Restrictions ....................................................... 17
Identifying Hardware Components ...................................................................  19
Full Rack Components ...........................................................................  20
Half Rack Components ........................................................................... 22
Understanding the Hardware Components and Connections ..................................  23
Understanding the Hardware Components .................................................. 23
Understanding the Physical Connections .................................................... 26
Understanding the Software Configurations ........................................................ 46
Understanding Domains .......................................................................... 46
Understanding General Configuration Information .......................................  58
Understanding Half Rack Configurations ...................................................  61
Understanding Full Rack Configurations ...................................................  69
Understanding Clustering Software ...................................................................  84
Cluster Software for the Database Domain .................................................  85
Cluster Software for the Oracle Solaris Application Domains ........................  85
Understanding the Network Requirements .........................................................  86
Network Requirements Overview .............................................................  86
Network Connection Requirements for Oracle SuperCluster T5-8 ...................  89
Understanding Default IP Addresses .........................................................  90

Preparing the Site .............................................................................................  95


Cautions and Considerations ............................................................................ 95
Reviewing System Specifications .....................................................................  96


Physical Specifications ...........................................................................  96


Installation and Service Area ...................................................................  96
Rack and Floor Cutout Dimensions ..........................................................  97
Reviewing Power Requirements .......................................................................  99
Power Consumption ...............................................................................  99
Facility Power Requirements .................................................................  100
Grounding Requirements ....................................................................... 100
PDU Power Requirements .....................................................................  101
PDU Thresholds ..................................................................................  104
Preparing for Cooling ...................................................................................  106
Environmental Requirements .................................................................  107
Heat Dissipation and Airflow Requirements .............................................  107
Perforated Floor Tiles ...........................................................................  110
Preparing the Unloading Route and Unpacking Area ..........................................  111
Shipping Package Dimensions ...............................................................  111
Loading Dock and Receiving Area Requirements ......................................  111
Access Route Guidelines ....................................................................... 112
Unpacking Area ..................................................................................  113
Preparing the Network ..................................................................................  113
Network Connection Requirements .........................................................  113
Network IP Address Requirements .........................................................  114
▼ Prepare DNS for the System ............................................................. 114

Installing the System .......................................................................................  117


Installation Overview ...................................................................................  117
Oracle Safety Information .............................................................................  118
Unpacking the System ..................................................................................  119
Tools for Installation ............................................................................  119
▼ Find the Unpacking Instructions ........................................................  119
▼ Unpack and Inspect the System ......................................................... 121
Moving the Rack Into Place ..........................................................................  121
▼ Move Oracle SuperCluster T5-8 ........................................................  122
▼ Install a Ground Cable (Optional) ......................................................  124
▼ Adjust the Leveling Feet ..................................................................  125
Powering on the System for the First Time ....................................................... 126
▼ Connect Power Cords to the Rack .....................................................  126
▼ Power On the System ......................................................................  130


▼ Connect a Laptop to the System ........................................................  134


▼ Connect to a 10-GbE Client Access Network .......................................  136
Using an Optional Fibre Channel PCIe Card ..................................................... 139

Maintaining the System ...................................................................................  141


Cautions and Warnings .................................................................................  141
Powering Off the System ..............................................................................  142
Graceful Power Off Sequence ................................................................  142
▼ Emergency Power Off .....................................................................  145
▼ Power On the System .............................................................................. 145
▼ Identify the Version of SuperCluster Software .............................................. 145
SuperCluster Tools .......................................................................................  146
Managing Oracle Solaris 11 Boot Environments ................................................  147
Advantages to Maintaining Multiple Boot Environments .............................  147
▼ Create a Boot Environment ..............................................................  148
▼ Mount to a Different Build Environment ............................................. 150
▼ Reboot to the Original Boot Environment ...........................................  151
▼ Remove Unwanted Boot Environments ............................................... 151
Using Dynamic Intimate Shared Memory ......................................................... 151
DISM Restrictions ..............................................................................  152
▼ Disable DISM ................................................................................  152
Component-Specific Service Procedures ..........................................................  153
Maintaining Exadata Storage Servers ..............................................................  153
▼ Monitor Write-through Caching Mode ................................................  154
▼ Shut Down or Reboot an Exadata Storage Server .................................. 156
▼ Drop an Exadata Storage Server ........................................................  158
Tuning the System (ssctuner) .......................................................................  158
ssctuner Overview .............................................................................. 159
▼ Monitor ssctuner Activity ...............................................................  160
▼ View Log Files ..............................................................................  161
▼ Configure the EMAIL_ADDRESS Property ............................................... 162
▼ Change ssctuner Properties and Disable Features ................................  163
▼ Configure ssctuner to Run compliance(1M) Benchmarks ...................... 165
▼ Monitor and View the Compliance Benchmark ....................................  166
▼ Install ssctuner .............................................................................  168
▼ Enable ssctuner ............................................................................  169


Configuring CPU and Memory Resources (osc-setcoremem) ............................... 170


osc-setcoremem Overview .....................................................................  171
Minimum and Maximum Resources (Dedicated Domains) ........................... 173
Supported Domain Configurations ..........................................................  174
▼ Plan CPU and Memory Allocations ...................................................  175
▼ Display the Current Domain Configuration (osc-setcoremem) ................. 178
▼ Display the Current Domain Configuration (ldm) ..................................  180
▼ Change CPU/Memory Allocations (Socket Granularity) .........................  182
▼ Change CPU/Memory Allocations (Core Granularity) ...........................  186
▼ Park Cores and Memory ..................................................................  190
▼ Access osc-setcoremem Log Files .....................................................  196
▼ View the SP Configuration ...............................................................  199
▼ Revert to a Previous CPU/Memory Configuration .................................  201
▼ Remove a CPU/Memory Configuration ..............................................  202

Monitoring the System ....................................................................................  205


Monitoring the System Using Auto Service Request ..........................................  205
ASR Overview ....................................................................................  205
ASR Resources ...................................................................................  206
ASR Installation Overview ....................................................................  207
▼ Configure SNMP Trap Destinations for Exadata Storage Servers .............  208
▼ Configure ASR on the ZFS Storage Appliance .....................................  210
▼ Configure ASR on SPARC T5-8 Servers (Oracle ILOM) ........................ 213
Configuring ASR on the SPARC T5-8 Servers (Oracle Solaris 11) ................  215
▼ Approve and Verify ASR Asset Activation ..........................................  219
Monitoring the System Using OCM ................................................................  221
OCM Overview ...................................................................................  221
▼ Install Oracle Configuration Manager on SPARC T5-8 Servers ................  222
Monitoring the System Using EM Exadata Plug-in ............................................  226
System Requirements ...........................................................................  227
Known Issues With EM Exadata Plug-in .................................................. 227

Configuring Exalogic Software ........................................................................  229


Exalogic Software Overview .........................................................................  229
Exalogic Software Prerequisites .....................................................................  230
▼ Enable Domain-Level Enhancements .........................................................  230


▼ Enable Cluster-Level Session Replication Enhancements ...............................  231


Configuring Grid Link Data Source for Dept1_Cluster1 ......................................  234
Grid Link Data Source .........................................................................  235
▼ Create a GridLink Data Source on Dept1_Cluster1 ...............................  236
Configuring SDP-Enabled JDBC Drivers for Dept1_Cluster1 ..............................  238
▼ Configure the Database to Support Infiniband ......................................  238
▼ Enable SDP Support for JDBC .........................................................  239
▼ Monitor SDP Sockets Using netstat on Oracle Solaris .........................  240
Configuring SDP InfiniBand Listener for Exalogic Connections ...........................  240
▼ Create an SDP Listener on the InfiniBand Network ............................... 241

Understanding Internal Cabling ....................................................................... 245


Connector Locations ..................................................................................... 245
Identifying InfiniBand Fabric Connections .......................................................  253
IB Spine Switch ..................................................................................  253
IB Leaf Switch No. 1 ...........................................................................  253
IB Leaf Switch No. 2 ...........................................................................  254
Ethernet Management Switch Connections .......................................................  256
ZFS Storage Appliance Connections ...............................................................  258
Single-Phase PDU Cabling ............................................................................  258
Three-Phase PDU Cabling ............................................................................. 259

Connecting Multiple Oracle SuperCluster T5-8 Systems .................................  261


Multi-Rack Cabling Overview .......................................................................  261
Two-Rack Cabling .......................................................................................  263
Three-Rack Cabling .....................................................................................  265
Four-Rack Cabling .......................................................................................  267
Five-Rack Cabling .......................................................................................  270
Six-Rack Cabling ......................................................................................... 273
Seven-Rack Cabling .....................................................................................  277
Eight-Rack Cabling ......................................................................................  281

Connecting Expansion Racks .........................................................................  287


Oracle Exadata Storage Expansion Rack Components ........................................  287
Preparing for Installation ............................................................................... 288
Reviewing System Specifications ...........................................................  289


Reviewing Power Requirements .............................................................  289


Preparing for Cooling ...........................................................................  293
Preparing the Unloading Route and Unpacking Area ..................................  296
Preparing the Network ..........................................................................  297
Installing the Oracle Exadata Storage Expansion Rack ........................................ 299
Expansion Rack Default IP Addresses .............................................................  300
Understanding Expansion Rack Internal Cabling ...............................................  300
Front and Rear Expansion Rack Layout ...................................................  301
Oracle ILOM Cabling ........................................................................... 304
Administrative Gigabit Ethernet Port Cabling Tables ..................................  306
Single-Phase PDU Cabling ....................................................................  308
Three-Phase Power Distribution Unit Cabling ...........................................  309
InfiniBand Network Cabling ..................................................................  311
Connecting an Expansion Rack to Oracle SuperCluster T5-8 ...............................  315
InfiniBand Switch Information for Oracle SuperCluster T5-8 .......................  315
InfiniBand Switch Information for the Oracle Exadata Storage Expansion
Rack ..................................................................................................  316
Connecting an Oracle Exadata Storage Expansion Quarter Rack to Oracle
SuperCluster T5-8 ................................................................................  316
Connecting an Oracle Exadata Storage Expansion Half Rack or Oracle Exadata
Storage Expansion Full Rack to Oracle SuperCluster T5-8 ..........................  320
Two-Rack Cabling ...............................................................................  322
Three-Rack Cabling .............................................................................  324
Four-Rack Cabling ...............................................................................  326
Five-Rack Cabling ...............................................................................  328
Six-Rack Cabling ................................................................................. 332
Seven-Rack Cabling .............................................................................  336
Eight-Rack Cabling ..............................................................................  340
Nine-Rack Cabling ............................................................................... 345
Ten-Rack Cabling ................................................................................  345
Eleven-Rack Cabling ............................................................................  346
Twelve-Rack Cabling ...........................................................................  347
Thirteen-Rack Cabling ..........................................................................  348
Fourteen-Rack Cabling .........................................................................  348
Fifteen-Rack Cabling ............................................................................ 349
Sixteen-Rack Cabling ...........................................................................  350
Seventeen-Rack Cabling .......................................................................  351
Eighteen-Rack Cabling .........................................................................  351


Index ................................................................................................................  353

Using This Documentation

This document provides an overview of Oracle SuperCluster T5-8, and describes configuration
options, site preparation specifications, installation information, and administration tools.
■ Overview – Describes how to configure, install, tune, and monitor the system.
■ Audience – Technicians, system administrators, and authorized service providers.
■ Required knowledge – Advanced experience in system installation and administration.

Product Documentation Library


Documentation and resources for this product and related products are available on the system.
Access the documentation by using a browser to view this directory on the first compute server
installed in SuperCluster T5-8:

/opt/oracle/node/doc/E40166_01/index.html

Feedback
Provide feedback about this documentation at:

http://www.oracle.com/goto/docfeedback

Understanding the System

These topics describe the features and hardware components of Oracle SuperCluster T5-8.
These topics also describe the different software configurations that are available.
■ “Understanding Oracle SuperCluster T5-8” on page 15
■ “Identifying Hardware Components” on page 19
■ “Understanding the Hardware Components and Connections” on page 23
■ “Understanding the Software Configurations” on page 46
■ “Understanding Clustering Software” on page 84
■ “Understanding the Network Requirements” on page 86

Understanding Oracle SuperCluster T5-8


Oracle SuperCluster T5-8 is an integrated hardware and software system designed to provide
a complete platform for a wide range of application types and widely varied workloads.
Oracle SuperCluster T5-8 is intended for large-scale, performance-sensitive, mission-critical
application deployments. Oracle SuperCluster T5-8 combines industry-standard hardware and
clustering software, such as optional Oracle Database 11g Real Application Clusters (Oracle
RAC) and optional Oracle Solaris Cluster software. This combination enables a high degree of
isolation between concurrently deployed applications, which have varied security, reliability,
and performance requirements. Oracle SuperCluster T5-8 enables customers to develop a single
environment that can support end-to-end consolidation of their entire applications portfolio.

Oracle SuperCluster T5-8 provides an optimal solution for all database workloads, ranging from
scan-intensive data warehouse applications to highly concurrent online transaction processing
(OLTP) applications. With its combination of smart Oracle Exadata Storage Server Software,
complete and intelligent Oracle Database software, and the latest industry-standard hardware
components, Oracle SuperCluster T5-8 delivers extreme performance in a highly available,
highly secure environment. Oracle provides unique clustering and workload management
capabilities, so Oracle SuperCluster T5-8 is well suited for consolidating multiple databases into
a single grid. Delivered as a complete, pre-optimized, and pre-configured package of software,
servers, and storage, Oracle SuperCluster T5-8 is fast to implement and ready to tackle
your large-scale business applications.


Oracle SuperCluster T5-8 does not include any Oracle software licenses. Appropriate licensing
of the following software is required when Oracle SuperCluster T5-8 is used as a database
server:
■ Oracle Database Software
■ Oracle Exadata Storage Server Software

In addition, Oracle recommends that the following software also be licensed:


■ Oracle Real Application Clusters
■ Oracle Partitioning

Oracle SuperCluster T5-8 is designed to fully leverage an internal InfiniBand fabric that
connects all of the processing, storage, memory, and external network interfaces within Oracle
SuperCluster T5-8 to form a single, large computing device. Each Oracle SuperCluster T5-8
is connected to data center networks through 10-GbE (traffic) and 1-GbE (management)
interfaces.

You can integrate Oracle SuperCluster T5-8 systems with Exadata or Exalogic machines
by using the available InfiniBand expansion ports and optional data center switches. The
InfiniBand technology used by Oracle SuperCluster T5-8 offers high bandwidth,
low latency, hardware-level reliability, and security. If you are using applications that follow
Oracle's best practices for highly scalable, fault-tolerant systems, you do not need to make any
application architecture or design changes to benefit from Oracle SuperCluster T5-8. You can
connect many Oracle SuperCluster T5-8 systems, or a combination of Oracle SuperCluster T5-8
systems and Oracle Exadata Database Machines, to develop a single, large-scale environment.
You can integrate Oracle SuperCluster T5-8 systems with your current data center infrastructure
using the available 10-GbE ports in each SPARC T5-8 server.

Spares Kit Components

Oracle SuperCluster T5-8 includes a spares kit that contains the following components:
■ One of the following disks as a spare for the Exadata Storage Servers, depending on the
type of Exadata Storage Server:
■ X3-2 Exadata Storage Server:
- 600 GB 10K RPM High Performance SAS disk
- 3 TB 7.2K RPM High Capacity SAS disk
■ X4-2 Exadata Storage Server:
- 1.2 TB 10K RPM High Performance SAS disk
- 4 TB 7.2K RPM High Capacity SAS disk


■ X5-2L Exadata Storage Server:


- 1.6 TB Extreme Flash drive
- 8 TB High Capacity SAS disk
■ X6-2L Exadata Storage Server:
- 3.2 TB Extreme Flash drive
- 8 TB High Capacity SAS disk
■ One High Capacity SAS disk as a spare for the ZFS storage appliance:
■ One 3 TB High Capacity SAS disk as a spare for the Sun ZFS Storage 7320 storage
appliance, or
■ One 4 TB High Capacity SAS disk as a spare for the Oracle ZFS Storage ZS3-ES
storage appliance
■ Exadata Smart Flash Cache card
■ InfiniBand cables, used to connect multiple racks together

Oracle SuperCluster T5-8 Restrictions


The following restrictions apply to hardware and software modifications to Oracle SuperCluster
T5-8. Violating these restrictions can result in loss of warranty and support.

■ Oracle SuperCluster T5-8 hardware cannot be modified or customized. There is one
exception to this rule: the only allowed hardware modification to Oracle SuperCluster T5-8
is to the administrative 48-port Cisco 4948 Gigabit Ethernet switch included with Oracle
SuperCluster T5-8. Customers may choose to do one of the following:
■ Replace the Gigabit Ethernet switch, at customer expense, with an equivalent 1U
48-port Gigabit Ethernet switch that conforms to their internal data center network
standards. This replacement must be performed by the customer, at their expense and
labor, after delivery of Oracle SuperCluster T5-8. If the customer chooses to make
this change, then Oracle cannot make or assist with this change given the numerous
possible scenarios involved, and it is not included as part of the standard installation.
The customer must supply the replacement hardware, and make or arrange for this
change through other means.
■ Remove the CAT5 cables connected to the Cisco 4948 Ethernet switch, and connect
them to the customer's network through an external switch or patch panel. The customer
must perform these changes at their expense and labor. In this case, the Cisco 4948
Ethernet switch in the rack can be powered off and left disconnected from the data center
network.
■ The Oracle Exadata Storage Expansion Rack can only be connected to Oracle SuperCluster
T5-8 or an Oracle Exadata Database Machine, and only supports databases running on the

Oracle Database (DB) Domains in Oracle SuperCluster T5-8 or on the database servers in
the Oracle Exadata Database Machine.
■ Standalone Exadata Storage Servers can only be connected to Oracle SuperCluster T5-8 or
an Oracle Exadata Database Machine, and only support databases running on the Database
Domains in Oracle SuperCluster T5-8 or on the database servers in the Oracle Exadata
Database Machine. The standalone Exadata Storage Servers must be installed in a separate
rack.
■ Earlier Oracle Database releases can be run in Application Domains running Oracle Solaris
10. Non-Oracle databases can be run in either Application Domains running Oracle Solaris
10 or Oracle Solaris 11, depending on the Oracle Solaris version they support.
■ Oracle Exadata Storage Server Software and the operating systems cannot be modified, and
customers cannot install any additional software or agents on the Exadata Storage Servers.
■ Customers cannot update the firmware directly on the Exadata Storage Servers. The
firmware is updated as part of an Exadata Storage Server patch.
■ Customers may load additional software on the Database Domains on the SPARC T5-8
servers. However, to ensure best performance, Oracle discourages adding software except
for agents, such as backup agents and security monitoring agents, on the Database Domains.
Loading non-standard kernel modules to the operating system of the Database Domains is
allowed but discouraged. Oracle will not support questions or issues with the non-standard
modules. If a server crashes, and Oracle suspects the crash may have been caused by a non-
standard module, then Oracle support may refer the customer to the vendor of the non-
standard module or ask that the issue be reproduced without the non-standard module.
Modifying the Database Domain operating system other than by applying official patches
and upgrades is not supported. InfiniBand-related packages should always be maintained at
the officially supported release.
■ Oracle SuperCluster T5-8 supports separate domains dedicated to applications, with high
throughput/low latency access to the database domains through InfiniBand. Because Oracle
Database is client-server by nature, applications running in the Application Domains can
connect to database instances running in the Database Domain. Applications can be run in
the Database Domain, although it is discouraged.
■ Customers cannot connect USB devices to the Exadata Storage Servers except as
documented in Oracle Exadata Storage Server Software User's Guide and this guide. In
those documented situations, the USB device should not draw more than 100 mA of power.
■ The network ports on the SPARC T5-8 servers can be used to connect to external non-
Exadata Storage Servers using iSCSI or NFS. However, the Fibre Channel Over Ethernet
(FCoE) protocol is not supported.
■ Only switches specified for use in Oracle SuperCluster T5-8, Oracle Exadata Rack
and Oracle Exalogic Elastic Cloud may be connected to the InfiniBand network. Connecting
third-party switches or other switches not used in Oracle SuperCluster T5-8, Oracle Exadata
Rack, or Oracle Exalogic Elastic Cloud is not supported.


Identifying Hardware Components


Oracle SuperCluster T5-8 consists of SPARC T5-8 servers, Exadata Storage Servers, and the
ZFS storage appliance (Sun ZFS Storage 7320 appliance or Oracle ZFS Storage ZS3-ES storage
appliance), as well as required InfiniBand and Ethernet networking components.

This section contains the following topics:

■ “Full Rack Components” on page 20


■ “Half Rack Components” on page 22


Full Rack Components


FIGURE 1 Oracle SuperCluster T5-8 Full Rack Layout, Front View


Figure Legend

1 Exadata Storage Servers (8)
2 ZFS storage controllers (2)
3 Sun Datacenter InfiniBand Switch 36 leaf switches (2)
4 Sun Disk Shelf
5 Cisco Catalyst 4948 Ethernet management switch
6 SPARC T5-8 servers (2, with four processor modules apiece)
7 Sun Datacenter InfiniBand Switch 36 spine switch

You can expand the amount of disk storage for your system using the Oracle Exadata Storage
Expansion Rack. See “Oracle Exadata Storage Expansion Rack Components” on page 287
for more information.

You can connect up to eight Oracle SuperCluster T5-8 systems together, or a combination
of Oracle SuperCluster T5-8 systems and Oracle Exadata or Exalogic machines on the same
InfiniBand fabric, without the need for any external switches. See “Connecting Multiple Oracle
SuperCluster T5-8 Systems” on page 261 for more information.


Half Rack Components


FIGURE 2 Oracle SuperCluster T5-8 Half Rack Layout, Front View


Figure Legend

1 ZFS storage controllers (2)
2 Sun Datacenter InfiniBand Switch 36 leaf switches (2)
3 Sun Disk Shelf
4 Cisco Catalyst 4948 Ethernet management switch
5 SPARC T5-8 servers (2, with two processor modules apiece)
6 Exadata Storage Servers (4)
7 Sun Datacenter InfiniBand Switch 36 spine switch

You can expand the amount of disk storage for your system using the Oracle Exadata Storage
Expansion Rack. See “Oracle Exadata Storage Expansion Rack Components” on page 287
for more information.

You can connect up to eight Oracle SuperCluster T5-8 systems together, or a combination
of Oracle SuperCluster T5-8 systems and Oracle Exadata or Exalogic machines on the same
InfiniBand fabric, without the need for any external switches. See “Connecting Multiple Oracle
SuperCluster T5-8 Systems” on page 261 for more information.

Understanding the Hardware Components and Connections


These topics describe how the hardware components and connections are configured to provide
full redundancy for high performance or high availability in Oracle SuperCluster T5-8, as well
as connections to the various networks:

■ “Understanding the Hardware Components” on page 23


■ “Understanding the Physical Connections” on page 26

Understanding the Hardware Components


The following Oracle SuperCluster T5-8 hardware components provide full redundancy,
either through physical connections between components within the system, or through
redundant parts within the components themselves:

■ “SPARC T5-8 Servers” on page 24


■ “Exadata Storage Servers” on page 24
■ “ZFS Storage Appliance” on page 25
■ “Sun Datacenter InfiniBand Switch 36 Switches” on page 25


■ “Cisco Catalyst 4948 Ethernet Management Switch” on page 26


■ “Power Distribution Units” on page 26

SPARC T5-8 Servers


The Full Rack version of Oracle SuperCluster T5-8 contains two SPARC T5-8 servers, each
with four processor modules. The Half Rack version of Oracle SuperCluster T5-8 also contains
two SPARC T5-8 servers, but each with two processor modules.

Redundancy in the SPARC T5-8 servers is achieved two ways:


■ Through connections between the servers (described in “Understanding the SPARC T5-8
Server Physical Connections” on page 26)
■ Through components within the SPARC T5-8 servers:
■ Fan modules – Each SPARC T5-8 server contains 10 fan modules. The SPARC T5-8
server will continue to operate at full capacity if one of the fan modules fails.
■ Disk drives – Each SPARC T5-8 server contains eight hard drives. Oracle SuperCluster
T5-8 software provides redundancy between the eight disk drives.
■ Processor modules – Oracle SuperCluster T5-8 contains two SPARC T5-8 servers.
For the Full Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server
contains four processor modules. There are two sockets or PCIe root complex pairs on
each processor module, where 16 cores are associated with each socket, for a total of
eight sockets or PCIe root complex pairs (128 cores) for each SPARC T5-8 server.
For the Half Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server
contains two processor modules. There are two sockets or PCIe root complex pairs on
each processor module, where 16 cores are associated with each socket, for a total of four
sockets or PCIe root complex pairs (64 cores) for each SPARC T5-8 server. A command
sketch for verifying these counts follows this list.
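
The socket and core counts described above can be confirmed from a running Oracle Solaris 11
domain. The following is a minimal sketch, assuming shell access to the control domain of a
SPARC T5-8 server; the output reflects only the resources assigned to the domain it is run from,
so a domain that owns a subset of the server shows correspondingly fewer sockets and cores.

# Number of physical processors (sockets) visible to this domain
psrinfo -p

# Per-socket detail, including core and hardware thread counts
psrinfo -pv

# From the control domain only: all cores known to the Logical Domains
# Manager, whether bound to a domain or free
ldm list-devices -a core

For example, a Full Rack SPARC T5-8 server with all resources assigned to a single domain
reports eight physical processors of 16 cores each.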

Exadata Storage Servers


The Full Rack version of Oracle SuperCluster T5-8 contains eight Exadata Storage Servers.
The Half Rack version of Oracle SuperCluster T5-8 contains four Exadata Storage Servers.
Redundancy in the Exadata Storage Servers is achieved two ways:
■ Through connections between the Exadata Storage Servers. For more information, see
“Understanding the Exadata Storage Server Physical Connections” on page 36.
■ Through components within the Exadata Storage Servers:
■ Power supplies – Each Exadata Storage Server contains two power supplies. The
Exadata Storage Server can continue to operate normally if one of the power supplies
fails, or if one of the power distribution units fails.


■ Disk drives – Each Exadata Storage Server contains 12 disk drives, where you can
choose between disk drives designed for either high capacity or high performance when
you first order Oracle SuperCluster T5-8. Oracle SuperCluster T5-8 software provides
redundancy between the 12 disk drives within each Exadata Storage Server. For more
information, see “Cluster Software for the Database Domain” on page 85. A CellCLI
sketch for listing these disks follows this list.
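
As an illustrative sketch only, and not a required procedure, the disks in an individual Exadata
Storage Server can be listed with the CellCLI utility that runs on each storage server. The
attribute names below follow the Exadata Storage Server Software documentation; the output
varies by software release and by the disk type ordered.

# Summary list of the physical disks in this storage server
cellcli -e list physicaldisk

# Selected attributes, including the disk type and size
cellcli -e "list physicaldisk attributes name, diskType, physicalSize, status"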

ZFS Storage Appliance

Each Oracle SuperCluster T5-8 contains one ZFS storage appliance, in either the Full Rack or
Half Rack version. The ZFS storage appliance consists of the following:

■ Two ZFS storage controllers


■ One Sun Disk Shelf

Redundancy in the ZFS storage appliance is achieved two ways:

■ Through connections from the two ZFS storage controllers to the Sun Disk Shelf.
For more information, see “Understanding the ZFS Storage Appliance Physical
Connections” on page 39.
■ Through components within the ZFS storage appliance itself:
■ Power supplies – Each ZFS storage controller and Sun Disk Shelf contains two power
supplies. Each ZFS storage controller and the Sun Disk Shelf can continue to operate
normally if one of those power supplies fails.
■ Disk drives – Each ZFS storage controller contains two mirrored boot drives, so the
controller can still boot up and operate normally if one boot drive fails. The Sun Disk
Shelf contains 20 hard disk drives that are used for storage in Oracle SuperCluster
T5-8, and 4 solid-state drives that are used as write-optimized cache devices,
also known as logzillas. Oracle SuperCluster T5-8 software provides redundancy
between the disk drives. For more information, see “Understanding the Software
Configurations” on page 46.

Sun Datacenter InfiniBand Switch 36 Switches

Each Oracle SuperCluster T5-8 contains three Sun Datacenter InfiniBand Switch 36 switches,
in either the Full Rack or Half Rack version, two of which are leaf switches (the third is used as
a spine switch to connect two racks together). The two leaf switches are connected to each other
to provide redundancy should one of the two leaf switches fail. In addition, each SPARC T5-8
server, Exadata Storage Server, and ZFS storage controller has connections to both leaf switches
to provide redundancy in the InfiniBand connections should one of the two leaf switches fail.
For more information, see “Understanding the Physical Connections” on page 26.


Cisco Catalyst 4948 Ethernet Management Switch


The Cisco Catalyst 4948 Ethernet management switch contains two power supplies. The Cisco
Catalyst 4948 Ethernet management switch can continue to operate normally if one of those
power supplies fails.

Power Distribution Units


Each Oracle SuperCluster T5-8 contains two power distribution units, in either the Full Rack
or Half Rack version. The components within Oracle SuperCluster T5-8 connect to both power
distribution units, so that power continues to be supplied to those components should one of the
two power distribution units fail. For more information, see “Power Distribution Units Physical
Connections” on page 45.

Understanding the Physical Connections


The following topics describe the physical connections between the components within Oracle
SuperCluster T5-8:
■ “Understanding the SPARC T5-8 Server Physical Connections” on page 26
■ “Understanding the Exadata Storage Server Physical Connections” on page 36
■ “Understanding the ZFS Storage Appliance Physical Connections” on page 39
■ “Power Distribution Units Physical Connections” on page 45

Understanding the SPARC T5-8 Server Physical Connections


These topics provide information on the location of the cards and ports that are used for the
physical connections for the SPARC T5-8 server, as well as information specific to the four sets
of physical connections for the server:
■ “PCIe Slots (SPARC T5-8 Servers)” on page 27
■ “Card Locations (SPARC T5-8 Servers)” on page 29
■ “NET MGT and NET0-3 Port Locations (SPARC T5-8 Servers)” on page 31
■ “InfiniBand Private Network Physical Connections (SPARC T5-8 Servers)” on page 32
■ “Oracle ILOM Management Network Physical Connections (SPARC T5-8
Servers)” on page 35
■ “1-GbE Host Management Network Physical Connections (SPARC T5-8
Servers)” on page 35

■ “10-GbE Client Access Network Physical Connections (SPARC T5-8 Servers)” on page 35

PCIe Slots (SPARC T5-8 Servers)

Each Oracle SuperCluster T5-8 contains two SPARC T5-8 servers, regardless of whether it is a
Full Rack or a Half Rack. The distinguishing factor between the Full Rack and Half Rack versions of Oracle
SuperCluster T5-8 is not the number of SPARC T5-8 servers, but the number of processor
modules in each SPARC T5-8 server, where the Full Rack has four processor modules and the
Half Rack has two processor modules. See “SPARC T5-8 Servers” on page 24 for more
information.

Each SPARC T5-8 server has sixteen PCIe slots:

■ Full Rack – All 16 PCIe slots are accessible, and all 16 PCIe slots are occupied with either
InfiniBand HCAs or 10-GbE NICs.
■ Half Rack – All 16 PCIe slots are accessible, but only eight of the 16 PCIe slots are
occupied with either InfiniBand HCAs or 10-GbE NICs. The remaining eight PCIe slots are
available for optional Fibre Channel PCIe cards.

The following figures show the topology for the Full Rack and Half Rack versions of Oracle
SuperCluster T5-8. Also see “Card Locations (SPARC T5-8 Servers)” on page 29 for more
information.


FIGURE 3 Topology for the Full Rack Version of Oracle SuperCluster T5-8


FIGURE 4 Topology for the Half Rack Version of Oracle SuperCluster T5-8

Card Locations (SPARC T5-8 Servers)

The following figures show the cards that will be used for the physical connections for the
SPARC T5-8 server in the Full Rack and Half Rack versions of Oracle SuperCluster T5-8.


Note - For the Half Rack version of Oracle SuperCluster T5-8, eight of the 16 PCIe slots
are occupied with either InfiniBand HCAs or 10-GbE NICs. However, all 16 PCIe slots are
accessible, so the remaining eight PCIe slots are available for optional Fibre Channel PCIe
cards. See “Using an Optional Fibre Channel PCIe Card” on page 139 for more information.

FIGURE 5 Card Locations (Full Rack)

Figure Legend

1 Dual-port 10-GbE network interface cards, for connection to the 10-GbE client access network (see “10-GbE
Client Access Network Physical Connections (SPARC T5-8 Servers)” on page 35)
2 Dual-port InfiniBand host channel adapters, for connection to the InfiniBand network (see “InfiniBand Private
Network Physical Connections (SPARC T5-8 Servers)” on page 32)


FIGURE 6 Card Locations (Half Rack)

Figure Legend

1 Dual-port 10-GbE network interface cards, for connection to the 10-GbE client access network (see “10-GbE
Client Access Network Physical Connections (SPARC T5-8 Servers)” on page 35)
2 Dual-port InfiniBand host channel adapters, for connection to the InfiniBand network (see “InfiniBand Private
Network Physical Connections (SPARC T5-8 Servers)” on page 32)

NET MGT and NET0-3 Port Locations (SPARC T5-8 Servers)

The following figure shows the NET MGT and NET0-3 ports that will be used for the physical
connections for the SPARC T5-8 servers.


FIGURE 7 NET MGT and NET0-3 Port Locations

Figure Legend

1 NET MGT port, for connection to Oracle ILOM management network (see “Oracle ILOM Management
Network Physical Connections (SPARC T5-8 Servers)” on page 35)
2 NET0-NET3 ports, for connection to 1-GbE host management network (see “1-GbE Host Management Network
Physical Connections (SPARC T5-8 Servers)” on page 35)

InfiniBand Private Network Physical Connections (SPARC T5-8 Servers)

Each SPARC T5-8 server contains several dual-ported Sun QDR InfiniBand PCIe Low Profile
host channel adapters (HCAs). The number of InfiniBand HCAs and their locations in the
SPARC T5-8 servers varies, depending on the configuration of Oracle SuperCluster T5-8:

■ Full Rack: Eight InfiniBand HCAs, installed in these PCIe slots:


■ PCIe slot 3
■ PCIe slot 4
■ PCIe slot 7
■ PCIe slot 8
■ PCIe slot 11
■ PCIe slot 12
■ PCIe slot 15
■ PCIe slot 16
■ Half Rack: Four InfiniBand HCAs, installed in these PCIe slots:
■ PCIe slot 3
■ PCIe slot 8
■ PCIe slot 11
■ PCIe slot 16


See “Card Locations (SPARC T5-8 Servers)” on page 29 for more information on the
location of the InfiniBand HCAs.

Each of the two ports in an InfiniBand HCA (ports 1 and 2) connects to a different leaf switch to
provide redundancy between the SPARC T5-8 servers and the leaf switches. The following
figures show how redundancy is achieved with the InfiniBand connections between the SPARC
T5-8 servers and the leaf switches in the Full Rack and Half Rack configurations.

FIGURE 8 InfiniBand Connections for SPARC T5-8 Servers, Full Rack


FIGURE 9 InfiniBand Connections for SPARC T5-8 Servers, Half Rack

Note that only the physical connections for the InfiniBand private network are described in
this section. Once the logical domains are created for each SPARC T5-8 server, the InfiniBand
private network will be configured differently depending on the type of domain created on
the SPARC T5-8 servers. The number of IP addresses needed for the InfiniBand network will
also vary, depending on the type of domains created on each SPARC T5-8 server. For more
information, see “Understanding the Software Configurations” on page 46.
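
From within any Oracle Solaris 11 domain that owns one of these InfiniBand HCAs, the
resulting IB links and partitions can be inspected with standard Oracle Solaris networking
commands. This is a sketch under that assumption; link names and partition keys differ by
configuration and by domain type.

# Physical datalinks, including the InfiniBand ports (MEDIA column shows Infiniband)
dladm show-phys

# InfiniBand port GUIDs, port state, and partition keys
dladm show-ib

# IP-over-InfiniBand partition datalinks configured in this domain
dladm show-part

# IP addresses plumbed on those partition datalinks
ipadm show-addr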


Oracle ILOM Management Network Physical Connections (SPARC T5-8 Servers)

Each SPARC T5-8 server connects to the Oracle Integrated Lights Out Manager (ILOM)
management network through a single Oracle ILOM network port (NET MGT port) at the rear
of each SPARC T5-8 server. One IP address is required for Oracle ILOM management for each
SPARC T5-8 server.

See “NET MGT and NET0-3 Port Locations (SPARC T5-8 Servers)” on page 31 for more
information on the location of the NET MGT port.
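
The Oracle ILOM IP address for each SPARC T5-8 server is normally assigned during the
initial installation. The following is a sketch of how the service processor network settings
might be reviewed or changed from the Oracle ILOM command-line interface; the address
values shown are placeholders, not defaults.

-> show /SP/network

-> set /SP/network pendingipdiscovery=static
-> set /SP/network pendingipaddress=192.0.2.10
-> set /SP/network pendingipnetmask=255.255.255.0
-> set /SP/network pendingipgateway=192.0.2.1
-> set /SP/network commitpending=true

Committing the pending settings can briefly interrupt connectivity to the service processor.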

1-GbE Host Management Network Physical Connections (SPARC T5-8 Servers)

Each SPARC T5-8 server connects to the 1-GbE host management network through the four
1-GbE host management ports at the rear of each SPARC T5-8 server (NET0 - NET3 ports).
However, the way the 1-GbE host management connections are used differs from the physical
connections due to logical domains. For more information, see “Understanding the Software
Configurations” on page 46.

See “NET MGT and NET0-3 Port Locations (SPARC T5-8 Servers)” on page 31 for more
information on the location of the 1-GbE host management ports.
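
A minimal sketch for reviewing how these management connections appear within a given
domain, assuming a standard Oracle Solaris 11 environment; depending on the domain type,
the management network may be presented as the physical net0-net3 links or as virtual network
links, so the link names vary.

# Datalinks visible to this domain and their state
dladm show-link

# IP interfaces and addresses configured in this domain
ipadm show-addr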

10-GbE Client Access Network Physical Connections (SPARC T5-8 Servers)

Each SPARC T5-8 server contains several dual-ported Sun Dual 10-GbE SFP+ PCIe 2.0 Low
Profile network interface cards (NICs). The number of 10-GbE NICs and their locations in the
SPARC T5-8 servers varies, depending on the configuration of Oracle SuperCluster T5-8:
■ Full Rack: Eight 10-GbE NICs, installed in these PCIe slots:
■ PCIe slot 1
■ PCIe slot 2
■ PCIe slot 5
■ PCIe slot 6
■ PCIe slot 9
■ PCIe slot 10
■ PCIe slot 13
■ PCIe slot 14
■ Half Rack: Four 10-GbE NICs, installed in these PCIe slots:

■ PCIe slot 1
■ PCIe slot 6
■ PCIe slot 9
■ PCIe slot 14

See “Card Locations (SPARC T5-8 Servers)” on page 29 for the location of the 10-GbE
NICs.

Depending on the configuration, one or two of the ports on the 10-GbE NICs (ports 0 and 1)
will be connected to the client access network. In some configurations, both ports on the same
10-GbE NIC will be part of an IPMP group to provide redundancy and increased bandwidth. In
other configurations, one port from two separate 10-GbE NICs will be part of an IPMP group.

The number of physical connections to the 10-GbE client access network varies, depending
on the type of domains created on each SPARC T5-8 server. For more information, see
“Understanding the Software Configurations” on page 46.
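After the domains are configured, the 10-GbE IPMP group in a given domain can be inspected
from Oracle Solaris. This is a sketch only; the group and interface names in the output are
configuration-specific:

   # ipmpstat -g
   # ipmpstat -i

The first command lists each IPMP group and its member interfaces; the second shows which
member interface is currently active and which is in standby.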

Understanding the Exadata Storage Server Physical Connections
Each Exadata Storage Server contains three sets of physical connections:

■ “InfiniBand Private Network Physical Connections (Exadata Storage Servers)” on page 36
■ “Oracle ILOM Management Network Physical Connections (Exadata Storage
Servers)” on page 38
■ “1-GbE Host Management Network Physical Connections (Exadata Storage
Servers)” on page 39

InfiniBand Private Network Physical Connections (Exadata Storage Servers)

Each Exadata Storage Server contains one dual-ported Sun QDR InfiniBand PCIe Low Profile
host channel adapter (HCA). The two ports in the InfiniBand HCA are bonded together to
increase available bandwidth. When bonded, the two ports appear as a single port, with a single
IP address assigned to the two bonded ports, resulting in one IP address for InfiniBand private
network connections for each Exadata Storage Server.
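Because the Exadata Storage Servers run Oracle Linux, the bonded InfiniBand interface and its
single IP address can be verified from the storage server operating system. The interface name
bondib0 shown here is only an assumption (it is the name commonly used on Exadata storage
servers); substitute the name used on your servers:

   # ip addr show bondib0
   # cat /proc/net/bonding/bondib0

The first command shows the single IP address assigned to the bonded ports; the second shows
the two InfiniBand slave ports and which of them is currently active.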

The two ports in the InfiniBand HCA connect to a different leaf switch to provide redundancy
between the Exadata Storage Servers and the leaf switches. The following figures show how
redundancy is achieved with the InfiniBand connections between the Exadata Storage Servers
and the leaf switches in the Full Rack and Half Rack configurations.

FIGURE 10 InfiniBand Connections for Exadata Storage Servers, Full Rack


FIGURE 11 InfiniBand Connections for Exadata Storage Servers, Half Rack

Oracle ILOM Management Network Physical Connections (Exadata Storage Servers)

Each Exadata Storage Server connects to the Oracle ILOM management network through a
single Oracle ILOM network port (NET MGT port) at the rear of each Exadata Storage Server.
One IP address is required for Oracle ILOM management for each Exadata Storage Server.


1-GbE Host Management Network Physical Connections (Exadata Storage Servers)

Each Exadata Storage Server connects to the 1-GbE host management network through the
1-GbE host management port (NET 0 port) at the rear of each Exadata Storage Server. One IP
address is required for 1-GbE host management for each Exadata Storage Server.

Understanding the ZFS Storage Appliance Physical Connections

The ZFS storage appliance has five sets of physical connections:

■ “InfiniBand Private Network Physical Connections (ZFS Storage Appliance)” on page 39
■ “Oracle ILOM Management Network Physical Connections (ZFS Storage
Appliance)” on page 41
■ “1-GbE Host Management Network Physical Connections (ZFS Storage
Appliance)” on page 42
■ “SAS Physical Connections (ZFS Storage Appliance)” on page 42
■ “Cluster Physical Connections (ZFS Storage Appliance)” on page 44

InfiniBand Private Network Physical Connections (ZFS Storage Appliance)

The ZFS storage appliance connects to the InfiniBand private network through one of the
two ZFS storage controllers. The ZFS storage controller contains one Sun Dual Port 40Gb
InfiniBand QDR HCA. The two ports in each InfiniBand HCA are bonded together to increase
available bandwidth. When bonded, the two ports appear as a single port, with a single IP
address assigned to the two bonded ports, resulting in one IP address for InfiniBand private
network connections for the ZFS storage controller.

The two ports in the InfiniBand HCA connect to a different leaf switch to provide redundancy
between the ZFS storage controller and the leaf switches. The following figures show how
redundancy is achieved with the InfiniBand connections between the ZFS storage controller and
the leaf switches in the Full Rack and Half Rack configurations.


FIGURE 12 InfiniBand Connections for ZFS Storage Controllers, Full Rack


FIGURE 13 InfiniBand Connections for ZFS Storage Controllers, Half Rack

Oracle ILOM Management Network Physical Connections (ZFS Storage Appliance)

The ZFS storage appliance connects to the Oracle ILOM management network through
the two ZFS storage controllers. Each storage controller connects to the Oracle ILOM
management network through the NET0 port at the rear of each storage controller using
sideband management. One IP address is required for Oracle ILOM management for each
storage controller.


1-GbE Host Management Network Physical Connections (ZFS Storage Appliance)

The ZFS storage appliance connects to the 1-GbE host management network through the
two ZFS storage controllers. The storage controllers connect to the 1-GbE host management
network through the following ports at the rear of each storage controller:

■ NET0 on the first storage controller (installed in slot 25 in the rack)


■ NET1 on the second storage controller (installed in slot 26 in the rack)

One IP address is required for 1-GbE host management for each storage controller.

SAS Physical Connections (ZFS Storage Appliance)

Each ZFS storage controller is populated with a dual-port SAS-2 HBA card. The Sun Disk
Shelf also has two SIM Link In and two SIM Link Out ports. The two storage controllers
connect to the Sun Disk Shelf in the following manner:

■ Storage controller 1 – Both ports from the SAS-2 HBA card to the SIM Link Out ports on
the Sun Disk Shelf.
■ Storage controller 2 – Both ports from the SAS-2 HBA card to the SIM Link In ports on
the Sun Disk Shelf.

The following figures show the SAS connections between the two storage controllers and the
Sun Disk Shelf.


FIGURE 14 SAS Connections for the Sun ZFS Storage 7320 Storage Appliance

Figure Legend

1 Storage controller 1
2 Storage controller 2
3 Sun Disk Shelf


FIGURE 15 SAS Connections for the Oracle ZFS Storage ZS3-ES Storage Appliance

Cluster Physical Connections (ZFS Storage Appliance)

Each ZFS storage controller contains a single cluster card. The cluster cards in the storage
controllers are cabled together as shown in the following figure. This allows a heartbeat signal
to pass between the storage controllers to determine if both storage controllers are up and
running.


Power Distribution Units Physical Connections

Oracle SuperCluster T5-8 contains two power distribution units. Each component in Oracle
SuperCluster T5-8 has redundant connections to the two power distribution units:

■ SPARC T5-8 servers – Each SPARC T5-8 server has four AC power connectors. Two
AC power connectors connect to one power distribution unit, and the other two AC power
connectors connect to the other power distribution unit.
■ Exadata Storage Servers – Each Exadata Storage Server has two AC power connectors.
One AC power connector connects to one power distribution unit, and the other AC power
connector connects to the other power distribution unit.
■ ZFS storage controllers – Each ZFS storage controller has two AC power connectors.
One AC power connector connects to one power distribution unit, and the other AC power
connector connects to the other power distribution unit.
■ Sun Disk Shelf – The Sun Disk Shelf has two AC power connectors. One AC power
connector connects to one power distribution unit, and the other AC power connector
connects to the other power distribution unit.
■ Sun Datacenter InfiniBand Switch 36 switches – Each Sun Datacenter InfiniBand Switch
36 switch has two AC power connectors. One AC power connector connects to one power
distribution unit, and the other AC power connector connects to the other power distribution
unit.
■ Cisco Catalyst 4948 Ethernet management switch – The Cisco Catalyst 4948 Ethernet
management switch has two AC power connectors. One AC power connector connects to
one power distribution unit, and the other AC power connector connects to the other power
distribution unit.


Understanding the Software Configurations


Oracle SuperCluster T5-8 is set up with logical domains (LDoms), which provide users with the
flexibility to create different specialized virtual systems within a single hardware platform.

The following topics provide more information on the configurations available to you:
■ “Understanding Domains” on page 46
■ “Understanding General Configuration Information” on page 58
■ “Understanding Half Rack Configurations” on page 61
■ “Understanding Full Rack Configurations” on page 69

Understanding Domains
The number of domains supported on each SPARC T5-8 server depends on the type of Oracle
SuperCluster T5-8:
■ Half Rack version of Oracle SuperCluster T5-8: One to four domains
■ Full Rack version of Oracle SuperCluster T5-8: One to eight domains

These topics describe the domain types:


■ “Dedicated Domains” on page 46
■ “Understanding SR-IOV Domain Types” on page 48

Dedicated Domains
The following SuperCluster-specific domain types have always been available:
■ Application Domain running Oracle Solaris 10
■ Application Domain running Oracle Solaris 11
■ Database Domain

These SuperCluster-specific domain types have been available in software version 1.x and are
now known as dedicated domains.

Note - The Database Domains can also be in two states: with zones or without zones.

When a SuperCluster is set up as part of the initial installation, each domain is assigned one
of these three SuperCluster-specific dedicated domain types. With these dedicated domains,
every domain in a SuperCluster has direct access to the 10GbE NICs and IB HCAs (and Fibre
Channel cards, if those are installed in the card slots). The following graphic shows this concept
on a SuperCluster with four domains.

With dedicated domains, connections to the 10GbE client access network go through the
physical ports on each 10GbE NIC, and connections to the IB network go through the physical
ports on each IB HCA, as shown in the following graphic.

With dedicated domains, the domain configuration for a SuperCluster (the number of
domains and the SuperCluster-specific types assigned to each) is set at the time of the initial
installation, and can only be changed by an Oracle representative.
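The resulting assignment of PCIe slots, and therefore of the 10GbE NICs and IB HCAs, to the
dedicated domains can be displayed from the control domain with the Oracle VM Server for
SPARC ldm command. This is a sketch only; the device and domain names in the output depend
on your configuration:

   # ldm list-io

The output lists each PCIe bus, slot, and endpoint device together with the domain that
currently owns it.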


Understanding SR-IOV Domain Types

In addition to the dedicated domain types (Database Domains and Application Domains running
either Oracle Solaris 10 or Oracle Solaris 11), the following version 2.x SR-IOV (Single-Root
I/O Virtualization) domain types are now also available:

■ “Root Domains” on page 48


■ “I/O Domains” on page 52

Root Domains

A Root Domain is an SR-IOV domain that hosts the physical I/O devices, or physical functions
(PFs), such as the IB HCAs and 10GbE NICs installed in the PCIe slots. Almost all of its CPU
and memory resources are parked for later use by I/O Domains. Logical devices, or virtual
functions (VFs), are created from each PF, with each PF hosting 32 VFs.
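At the Oracle VM Server for SPARC level, the PFs hosted by a Root Domain and the VFs
created from them can be examined with the ldm command. On SuperCluster, VF creation is
normally performed by the Oracle installation and I/O Domain tooling rather than by hand; the
create-vf line below is shown only to illustrate the PF-to-VF relationship, and the PF name in it
is a placeholder (use the PF names reported by ldm list-io on your own system):

   # ldm list-io
   # ldm create-vf /SYS/MB/PCIE3/IOVIB.PF0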

Because Root Domains host the physical I/O devices, just as dedicated domains currently do,
Root Domains essentially exist at the same level as dedicated domains.

With the introduction of Root Domains, the following parts of the domain configuration for a
SuperCluster are set at the time of the initial installation and can only be changed by an Oracle
representative:

■ Type of domain:
■ Root Domain
■ Application Domain running Oracle Solaris 10 (dedicated domain)
■ Application Domain running Oracle Solaris 11 (dedicated domain)
■ Database Domain (dedicated domain)
■ Number of Root Domains and dedicated domains on the server

A domain can only be a Root Domain if it has either one or two IB HCAs associated with it.
A domain cannot be a Root Domain if it has more than two IB HCAs associated with it. If
you have a domain that has more than two IB HCAs associated with it (for example, the H1-1
domain in an Oracle SuperCluster T5-8), then that domain must be a dedicated domain.

When deciding which domains will be Root Domains, the last domain must always be the
first Root Domain; for each additional Root Domain, you work backward from the last
domain in your configuration. For example, assume you have four domains in your
configuration, and you want two Root Domains and two dedicated domains. In this case,
the first two domains would be dedicated domains and the last two domains would be Root
Domains.


Note - Even though a domain with two IB HCAs is valid for a Root Domain, domains with
only one IB HCA should be used as Root Domains. When a Root Domain has a single IB
HCA, fewer I/O Domains have dependencies on the I/O devices provided by that Root Domain.
Flexibility around high availability also increases with Root Domains with one IB HCA.

The following domains have only one or two IB HCAs associated with them and can therefore
be used as a Root Domain:

■ Small Domains (one IB HCA)


■ Medium Domains (two IB HCAs)

In addition, the first domain in the system (the Control Domain) will always be a dedicated
domain. The Control Domain cannot be a Root Domain. Therefore, you cannot have all of the
domains on your server as Root Domains, but you can have a mixture of Root Domains and
dedicated domains on your server or all of the domains as dedicated domains.

A certain amount of CPU core and memory is always reserved for each Root Domain,
depending on which domain is being used as a Root Domain in the domain configuration and
the number of IB HCAs and 10GbE NICs that are associated with that Root Domain:

■ The last domain in a domain configuration:


■ Two cores and 32 GB of memory reserved for a Root Domain with one IB HCA and
10GbE NIC
■ Four cores and 64 GB of memory reserved for a Root Domain with two IB HCAs and
10GbE NICs
■ Any other domain in a domain configuration:
■ One core and 16 GB of memory reserved for a Root Domain with one IB HCA and
10GbE NIC
■ Two cores and 32 GB of memory reserved for a Root Domain with two IB HCAs and
10GbE NICs

Note - The amount of CPU core and memory reserved for Root Domains is sufficient to support
only the PFs in each Root Domain. There are insufficient CPU core and memory resources to
support zones or applications in Root Domains, so zones and applications are supported only in
the I/O Domains.

The remaining CPU core and memory resources associated with each Root Domain are parked
in CPU and memory repositories, as shown in the following graphic.


CPU and memory repositories contain resources not only from the Root Domains, but also
any parked resources from the dedicated domains. Whether CPU core and memory resources
originated from dedicated domains or from Root Domains, once those resources have been
parked in the CPU and memory repositories, those resources are no longer associated with their
originating domain. These resources become equally available to I/O Domains.

In addition, CPU and memory repositories contain parked resources only from the compute
server that contains the domains providing those parked resources. In other words, if you have
two compute servers and both compute servers have Root Domains, there would be two sets
of CPU and memory repositories, where each compute server would have its own CPU and
memory repositories with parked resources.

For example, assume you have four domains on your compute server, with three of the four
domains as Root Domains, as shown in the previous graphic. Assume each domain has the
following IB HCAs and 10GbE NICs, and the following CPU core and memory resources:
■ One IB HCA and one 10GbE NIC
■ 16 cores
■ 256 GB of memory

In this situation, the following CPU core and memory resources are reserved for each Root
Domain, with the remaining resources available for the CPU and memory repositories:
■ Two cores and 32 GB of memory reserved for the last Root Domain in this configuration.
14 cores and 224 GB of memory available from this Root Domain for the CPU and memory
repositories.


■ One core and 16 GB of memory reserved for the second and third Root Domains in this
configuration.
■ 15 cores and 240 GB of memory available from each of these Root Domains for the
CPU and memory repositories.
■ A total of 30 cores (15 x 2) and 480 GB of memory (240 GB x 2) available for the CPU
and memory repositories from these two Root Domains.

A total of 44 cores (14 + 30 cores) are therefore parked in the CPU repository, and 704 GB of
memory (224 + 480 GB of memory) are parked in the memory repository and are available for
the I/O Domains.
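At the Oracle VM Server for SPARC level, parked CPU cores and memory generally appear as
resources that are not bound to any domain. As a rough cross-check (a sketch only, not a
replacement for the SuperCluster tools), you can list the unbound cores and memory from the
control domain:

   # ldm list-devices core
   # ldm list-devices memory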

With Root Domains, connections to the 10GbE client access network go through the physical
ports on each 10GbE NIC, and connections to the IB network go through the physical ports
on each IB HCA, just as they did with dedicated domains. However, cards used with Root
Domains must also be SR-IOV compliant. SR-IOV compliant cards enable VFs to be created on
each card, where the virtualization occurs in the card itself.

The VFs from each Root Domain are parked in the IB VF and 10GbE VF repositories, similar
to the CPU and memory repositories, as shown in the following graphic.

Even though the VFs from each Root Domain are parked in the VF repositories, the VFs are
created on each 10GbE NIC and IB HCA, so those VFs are associated with the Root Domain
that contains those specific 10GbE NIC and IB HCA cards. For example, looking at the
example configuration in the previous graphic, the VFs created on the last (rightmost) 10GbE
NIC and IB HCA will be associated with the last Root Domain.

I/O Domains

An I/O Domain is an SR-IOV domain that owns its own VFs, each of which is a virtual device
based on a PF in one of the Root Domains. Root Domains function solely as providers of VFs
to the I/O Domains, based on the physical I/O devices associated with each Root Domain.
Applications and zones are supported only in I/O Domains, not in Root Domains.

You can create multiple I/O Domains using the I/O Domain Creation tool. As part of the
domain creation process, you also associate one of the following SuperCluster-specific domain
types to each I/O Domain:

■ Application Domain running Oracle Solaris 11


■ Database Domain

Note that only Database Domains that are dedicated domains can host database zones. Database
I/O Domains cannot host database zones.

The CPU cores and memory resources owned by an I/O Domain are assigned from the CPU and
memory repositories (the cores and memory released from Root Domains on the system) when
an I/O Domain is created, as shown in the following graphic.


You use the I/O Domain Creation tool to assign the CPU core and memory resources to the I/O
Domains, based on the amount of CPU core and memory resources that you want to assign to
each I/O Domain and the total amount of CPU core and memory resources available in the CPU
and memory repositories.

Similarly, the IB VFs and 10GbE VFs owned by the I/O Domains come from the IB VF and
10GbE VF repositories (the IB VFs and 10GbE VFs released from Root Domains on the
system), as shown in the following graphic.


Again, you use the I/O Domain Creation tool to assign the IB VFs and 10GbE VFs to the
I/O Domains using the resources available in the IB VF and 10GbE VF repositories. However,
because VFs are created on each 10GbE NIC and IB HCA, the VFs assigned to an I/O Domain
will always come from the specific Root Domain that is associated with the 10GbE NIC and IB
HCA cards that contain those VFs.

The number and size of the I/O Domains that you can create depends on several factors,
including the amount of CPU core and memory resources that are available in the CPU and
memory repositories and the amount of CPU core and memory resources that you want to
assign to each I/O Domain. However, while it is useful to know the total amount of resources
that are parked in the repositories, it does not necessarily translate into the maximum
number of I/O Domains that you can create for your system. In addition, you should not create
an I/O Domain that uses more than one socket's worth of resources.


For example, assume that you have 44 cores parked in the CPU repository and 704 GB of
memory parked in the memory repository. You could therefore create I/O Domains in any of the
following ways:

■ One or more large I/O Domains, with each large I/O Domain using one socket's worth of
resources (for example, 16 cores and 256 GB of memory)
■ One or more medium I/O Domains, with each medium I/O Domain using four cores and 64
GB of memory
■ One or more small I/O Domains, with each small I/O Domain using one core and 16 GB of
memory

When you go through the process of creating I/O Domains, at some point, the I/O Domain
Creation tool will inform you that you cannot create additional I/O Domains. This could be due
to several factors, such as reaching the limit of total CPU core and memory resources in the
CPU and memory repositories, reaching the limit of resources available specifically to you as a
user, or reaching the limit on the number of I/O Domains allowable for this system.

Note - The following examples describe how resources might be divided up between
domains using percentages to make the conceptual information easier to understand.
However, you actually divide CPU core and memory resources between domains at a socket
granularity or core granularity level. See “Configuring CPU and Memory Resources (osc-
setcoremem)” on page 170 for more information.

As an example configuration showing how you might assign CPU and memory resources to
each domain, assume that you have a domain configuration where one of the domains is a Root
Domain, and the other three domains are dedicated domains, as shown in the following figure.


Even though dedicated domains and Root Domains are all shown as equal-sized domains
in the preceding figure, that does not mean that CPU core and memory resources must be
split evenly across all four domains (where each domain would get 25% of the CPU core and
memory resources). Using information that you provide in the configuration worksheets, you
can request different sizes of CPU core and memory resources for each domain when your
Oracle SuperCluster T5-8 is initially installed.

For example, you could request that each dedicated domain have 30% of the CPU core and
memory resources (for a total of 90% of the CPU cores and memory resources allocated to
the three dedicated domains), and the remaining 10% allocated to the single Root Domain.
Having this configuration would mean that only 10% of the CPU core and memory resources
are available for I/O Domains to pull from the CPU and memory repositories. However, you
could also request that some of the resources from the dedicated domains be parked at the time
of the initial installation of your system, which would further increase the amount of CPU core
and memory resources available for I/O Domains to pull from the repositories.


You could also use the CPU/Memory tool after the initial installation to resize the amount of
CPU core and memory resources used by the existing domains, depending on the configuration
that you chose at the time of your initial installation:

■ If all of the domains on your compute server are dedicated domains, you can use the CPU/
Memory tool to resize the amount of CPU core and memory resources used by those
domains. However, you must reboot those resized dedicated domains if you change the
amount of resources using the CPU/Memory tool.
■ If you have a mixture of dedicated domains and Root Domains on your compute server:
■ For the dedicated domains, you can use the CPU/Memory tool to resize the amount of
CPU core and memory resources used by those dedicated domains. You can also use the
tool to park some of the CPU core and memory resources from the dedicated domains,
which would park those resources in the CPU and Memory repositories, making them
available for the I/O Domains. However, you must reboot those resized dedicated
domains if you change the amount of resources using the CPU/Memory tool.
■ For the Root Domains, you cannot resize the amount of CPU core and memory
resources for any of the Root Domains after the initial installation. Whatever resources
you asked to have assigned to the Root Domains at the time of initial installation are
set and cannot be changed unless you have the Oracle installer come back out to your
site to reconfigure your system.

See “Configuring CPU and Memory Resources (osc-setcoremem)” on page 170 for more
information.
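The CPU/Memory tool is run from the control domain on each compute server and then steps
you through the reallocation interactively at socket or core granularity. The path shown below
is an assumption based on the standard SuperCluster tools location and may differ on your
system:

   # /opt/oracle.supercluster/bin/osc-setcoremem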

Assume you have a mixture of dedicated domains and Root Domains as mentioned earlier,
where each dedicated domain has 30% of the CPU core and memory resources (total of 90%
resources allocated to dedicated domains), and the remaining 10% allocated to the Root
Domain. You could then make the following changes to the resource allocation, depending on
your situation:

■ If you are satisfied with the amount of CPU core and memory resources allocated to the
Root Domain, but you find that one dedicated domain needs more resources while another
needs less, you could reallocate the resources between the three dedicated domains (for
example, having 40% for the first dedicated domain, 30% for the second, and 20% for the
third), as long as the total amount of resources add up to the total amount available for all
the dedicated domains (in this case, 90% of the resources).
■ If you find that the amount of CPU core and memory resources allocated to the Root
Domain is insufficient, you could park resources from the dedicated domains, which would
park those resources in the CPU and Memory repositories, making them available for the
I/O Domains. For example, if you find that you need 20% of the resources for I/O Domains
created through the Root Domain, you could park 10% of the resources from one or more
of the dedicated domains, which would increase the amount of resources in the CPU and
Memory repositories by that amount for the I/O Domains.


Understanding General Configuration Information


In order to fully understand the different configuration options that are available for Oracle
SuperCluster T5-8, you must first understand the basic concepts for the PCIe slots and the
different networks that are used for the system.

■ “Logical Domains and the PCIe Slots Overview” on page 58


■ “Management Network Overview” on page 59
■ “10-GbE Client Access Network Overview” on page 59
■ “InfiniBand Network Overview” on page 59

Logical Domains and the PCIe Slots Overview

Each SPARC T5-8 server in Oracle SuperCluster T5-8 has sixteen PCIe slots. The following
cards are installed in certain PCIe slots and are used to connect to these networks:

■ 10-GbE network interface cards (NICs) – Used to connect to the 10-GbE client access
network
■ InfiniBand host channel adapters (HCAs) – Used to connect to the private InfiniBand
network

See “PCIe Slots (SPARC T5-8 Servers)” on page 27 and “Card Locations (SPARC T5-8
Servers)” on page 29 for more information.

Optional Fibre Channel PCIe cards are also available to facilitate migration of data from legacy
storage subsystems to the Exadata Storage Servers integrated with Oracle SuperCluster T5-8
for Database Domains, or to access SAN-based storage for the Application Domains. The PCIe
slots that are available for those optional Fibre Channel PCIe cards will vary, depending on
your configuration. See “Using an Optional Fibre Channel PCIe Card” on page 139 for more
information.

Note - If you have the Full Rack version of Oracle SuperCluster T5-8, you cannot install a Fibre
Channel PCIe card in a slot that is associated with a Small Domain. See “Understanding Small
Domains (Full Rack)” on page 80 for more information.

The PCIe slots used for each configuration vary, depending on the type and number of logical
domains that are used for that configuration.


Management Network Overview

The management network connects to your existing management network, and is used for
administrative work. Each SPARC T5-8 server provides access to the following management
networks:

■ Oracle Integrated Lights Out Manager (ILOM) management network – Connected through
the Oracle ILOM Ethernet interface on each SPARC T5-8 server. Connections to this
network are the same, regardless of the type of configuration that is set up on the SPARC
T5-8 server.
■ 1-GbE host management network – Connected through the four 1-GbE host management
interfaces (NET0 - NET3) on each SPARC T5-8 server. Connections to this network will
vary, depending on the type of configuration that is set up on the system. In most cases, the
four 1-GbE host management ports at the rear of the SPARC T5-8 servers use IP network
multipathing (IPMP) to provide redundancy for the management network interfaces to the
logical domains. However, the ports that are grouped together, and whether IPMP is used,
varies depending on the type of configuration that is set up on the SPARC T5-8 server.

10-GbE Client Access Network Overview

This required 10-GbE network connects the SPARC T5-8 servers to your existing client
network and is used for client access to the servers. 10-GbE NICs installed in the PCIe slots are
used for connection to this network. The number of 10-GbE NICs and the PCIe slots that they
are installed in varies depending on the type of configuration that is set up on the SPARC T5-8
server.

InfiniBand Network Overview

The InfiniBand network connects the SPARC T5-8 servers, ZFS storage appliance, and Exadata
Storage Servers using the InfiniBand switches on the rack. This non-routable network is fully
contained in Oracle SuperCluster T5-8, and does not connect to your existing network.

When Oracle SuperCluster T5-8 is configured with the appropriate types of domains, the
InfiniBand network is partitioned to define the data paths between the SPARC T5-8 servers, and
between the SPARC T5-8 servers and the storage appliances.

The defined InfiniBand data path coming out of the SPARC T5-8 servers varies, depending on
the type of domain created on each SPARC T5-8 server:

■ “InfiniBand Network Data Paths for a Database Domain” on page 60


■ “InfiniBand Network Data Paths for an Application Domain” on page 60

InfiniBand Network Data Paths for a Database Domain

Note - The information in this section applies to a Database Domain that is either a dedicated
domain or a Database I/O Domain.

When a Database Domain is created on a SPARC T5-8 server, the Database Domain has the
following InfiniBand paths:

■ SPARC T5-8 server to both Sun Datacenter InfiniBand Switch 36 leaf switches
■ SPARC T5-8 server to each Exadata Storage Server, through the Sun Datacenter InfiniBand
Switch 36 leaf switches
■ SPARC T5-8 server to the ZFS storage appliance, through the Sun Datacenter InfiniBand
Switch 36 leaf switches

The number of InfiniBand HCAs that are assigned to the Database Domain varies, depending
on the type of configuration that is set up on the SPARC T5-8 server.

For the InfiniBand HCAs assigned to a Database Domain, the following InfiniBand private
networks are used:

■ Storage private network: One InfiniBand private network for the Database Domains to
communicate with each other and with the Application Domains running Oracle Solaris 10,
and with the ZFS storage appliance
■ Exadata private network: One InfiniBand private network for the Oracle Database 11g
Real Application Clusters (Oracle RAC) interconnects, and for communication between the
Database Domains and the Exadata Storage Servers

The two ports on each InfiniBand HCA connect to different Sun Datacenter InfiniBand
Switch 36 leaf switches to provide redundancy between the SPARC T5-8 servers and the
leaf switches. For more information on the physical connections for the SPARC T5-8 servers
to the leaf switches, see “InfiniBand Private Network Physical Connections (SPARC T5-8
Servers)” on page 32.
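From within a Database Domain, the InfiniBand partition datalinks that carry the storage and
Exadata private networks can be inspected with dladm. This is a sketch only; the partition link
names and partition keys are site-specific and are not shown here:

   # dladm show-ib
   # dladm show-part

The first command shows each HCA port, its state, and the partition keys it has joined; the
second shows the partition datalinks configured over those ports.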

InfiniBand Network Data Paths for an Application Domain

Note - The information in this section applies to an Application Domain that is either a
dedicated domain or an Application I/O Domain.


When an Application Domain is created on a SPARC T5-8 server (an Application Domain
running either Oracle Solaris 10 or Oracle Solaris 11), the Application Domain has the
following InfiniBand paths:

■ SPARC T5-8 server to both Sun Datacenter InfiniBand Switch 36 leaf switches
■ SPARC T5-8 server to the ZFS storage appliance, through the Sun Datacenter InfiniBand
Switch 36 leaf switches

Note that the Application Domain would not access the Exadata Storage Servers, which are
used only for the Database Domain.

The number of InfiniBand HCAs that are assigned to the Application Domain varies, depending
on the type of configuration that is set up on the SPARC T5-8 server.

For the InfiniBand HCAs assigned to an Application Domain, the following InfiniBand private
networks are used:

■ Storage private network: One InfiniBand private network for Application Domains to
communicate with each other and with the Database Domains, and with the ZFS storage
appliance
■ Oracle Solaris Cluster private network: Two InfiniBand private networks for the optional
Oracle Solaris Cluster interconnects

The two ports on each InfiniBand HCA will connect to different Sun Datacenter InfiniBand
Switch 36 leaf switches to provide redundancy between the SPARC T5-8 servers and the
leaf switches. For more information on the physical connections for the SPARC T5-8 servers
to the leaf switches, see “InfiniBand Private Network Physical Connections (SPARC T5-8
Servers)” on page 32.

Understanding Half Rack Configurations


In the Half Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server includes two
processor modules (PM0 and PM3), with two sockets or PCIe root complex pairs on each
processor module, for a total of four sockets or PCIe root complex pairs for each SPARC T5-8
server. You can therefore have from one to four logical domains on each SPARC T5-8 server in
a Half Rack.

These topics provide information on the domain configurations available for the Half Rack:

■ “Logical Domain Configurations and PCIe Slot Mapping (Half Rack)” on page 62
■ “Understanding Large Domains (Half Rack)” on page 63


■ “Understanding Medium Domains (Half Rack)” on page 65


■ “Understanding Small Domains (Half Rack)” on page 67

Logical Domain Configurations and PCIe Slot Mapping (Half Rack)
The following figure provides information on the available configurations for the Half Rack.
It also provides information on the PCIe slots and the InfiniBand (IB) HCAs or 10-GbE NICs
installed in each PCIe slot, and which logical domain those cards would be mapped to, for the
Half Rack.

FIGURE 16 Logical Domain Configurations and PCIe Slot Mapping (Half Rack)

Related Information
■ “Understanding Large Domains (Half Rack)” on page 63
■ “Understanding Medium Domains (Half Rack)” on page 65


■ “Understanding Small Domains (Half Rack)” on page 67

Understanding Large Domains (Half Rack)

These topics provide information on the Large Domain configuration for the Half Rack:

■ “Percentage of CPU and Memory Resource Allocation” on page 63


■ “Management Network” on page 63
■ “10-GbE Client Access Network” on page 63
■ “InfiniBand Network” on page 64

Percentage of CPU and Memory Resource Allocation

One domain is set up on each SPARC T5-8 server in this configuration, taking up 100% of the
server. Therefore, 100% of the CPU and memory resources are allocated to this single domain
on each server (all four sockets).

Note - You can use the CPU/Memory tool (setcoremem) to change this default allocation
after the initial installation of your system, if you want to have some CPU or memory
resources parked (unused). See “Configuring CPU and Memory Resources (osc-
setcoremem)” on page 170 for more information.

Management Network

Two out of the four 1-GbE host management ports are part of one IPMP group for this domain:

■ NET0
■ NET3
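As a hedged illustration, an IPMP group over the two ports listed above is typically built with
the Oracle Solaris ipadm command. On a deployed system this configuration is created by the
Oracle installation tooling rather than by hand; the datalink names net0 and net3 are assumed
to correspond to the NET0 and NET3 ports, and the group name (sc_ipmp0) and address shown
here are placeholders only:

   # ipadm create-ip net0
   # ipadm create-ip net3
   # ipadm create-ipmp -i net0 -i net3 sc_ipmp0
   # ipadm create-addr -T static -a 203.0.113.10/24 sc_ipmp0/v4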

10-GbE Client Access Network

All four PCI root complex pairs, and therefore four 10-GbE NICs, are associated with the
logical domain on the server in this configuration. However, only two of the four 10-GbE NICs
are used with this domain. One port is used on each dual-ported 10-GbE NIC. The two ports on
the two separate 10-GbE NICs would be part of one IPMP group. One port from the dual-ported
10-GbE NICs is connected to the 10-GbE network in this case, with the remaining unused ports
and 10-GbE NICs unconnected.


The following 10-GbE NICs and ports are used for connection to the client access network for
this configuration:

■ PCIe slot 1, port 0 (active)


■ PCIe slot 14, port 1 (standby)

A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.
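A hedged way to observe this data address and its current binding from within the domain is
with ipmpstat (interface and group names will differ on your system):

   # ipmpstat -a
   # ipmpstat -t

The first command shows the data address and which member interfaces are currently carrying
its traffic; the second lists the probe targets if probe-based failure detection is configured.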

Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.

InfiniBand Network

The connections to the InfiniBand network vary, depending on the type of domain:

■ Database Domain:
■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the last CPU in the domain.
So, for a Large Domain in a Half Rack, these connections would be through P1 on the
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
slot 16 (standby).
■ Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
So, for a Large Domain in a Half Rack, connections will be made through all four
InfiniBand HCAs, with P0 on each as the active connection and P1 on each as the
standby connection.
■ Application Domain:
■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the last CPU in the domain.
So, for a Large Domain in a Half Rack, these connections would be through P1 on the
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
slot 16 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the second CPU in the domain and P1 (standby) on the
InfiniBand HCA associated with the third CPU in the domain.
So, for a Large Domain in a Half Rack, these connections would be through P0 on the
InfiniBand HCA installed in slot 11 (active) and P1 on the InfiniBand HCA installed in
slot 8 (standby).

Understanding Medium Domains (Half Rack)


These topics provide information on the Medium Domain configuration for the Half Rack:
■ “Percentage of CPU and Memory Resource Allocation” on page 65
■ “Management Network” on page 65
■ “10-GbE Client Access Network” on page 66
■ “InfiniBand Network” on page 66

Percentage of CPU and Memory Resource Allocation

The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:
■ Config H2-1 (Two Medium Domains): The following options are available for CPU and
memory resource allocation:
■ Two sockets for each Medium Domain
■ One socket for the first Medium Domain, three sockets for the second Medium Domain
■ Three sockets for the first Medium Domain, one socket for the second Medium Domain
■ Four cores for the first Medium Domain, the remaining cores for the second Medium
Domain (first Medium Domain must be either a Database Domain or an Application
Domain running Oracle Solaris 11 in this case)
■ Config H3-1 (One Medium Domain and two Small Domains): The following options are
available for CPU and memory resource allocation:
■ Two sockets for the Medium Domain, one socket apiece for the two Small Domains
■ One socket for the Medium Domain, two sockets for the first Small Domain, one socket
for the second Small Domain
■ One socket for the Medium Domain and the first Small Domain, two sockets for the
second Small Domain

Management Network

Two 1-GbE host management ports are part of one IPMP group for each Medium Domain.
Following are the 1-GbE host management ports associated with each Medium Domain,
depending on how many domains are on the SPARC T5-8 server in the Half Rack:


■ First Medium Domain: NET0-1


■ Second Medium Domain, if applicable: NET2-3

10-GbE Client Access Network

Two PCI root complex pairs, and therefore two 10-GbE NICs, are associated with the Medium
Domain on the SPARC T5-8 server in the Half Rack. One port is used on each dual-ported
10-GbE NIC. The two ports on the two separate 10-GbE NICs would be part of one IPMP group.

The following 10-GbE NICs and ports are used for connection to the client access network for
this configuration:
■ First Medium Domain:
■ PCIe slot 1, port 0 (active)
■ PCIe slot 9, port 1 (standby)
■ Second Medium Domain, if applicable:
■ PCIe slot 6, port 0 (active)
■ PCIe slot 14, port 1 (standby)

A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.

Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.

InfiniBand Network

The connections to the InfiniBand network vary, depending on the type of domain:
■ Database Domain:
■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the second CPU in the domain.
So, for the first Medium Domain in a Half Rack, these connections would be through
P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA
installed in slot 11 (standby).
■ Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
So, for the first Medium Domain in a Half Rack, connections would be made through
both InfiniBand HCAs (slot 3 and slot 11), with P0 on each as the active connection and
P1 on each as the standby connection.
■ Application Domain:
■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the second CPU in the domain.
So, for the first Medium Domain in a Half Rack, these connections would be through
P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA
installed in slot 11 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the first CPU in the domain and P1 (standby) on the
InfiniBand HCA associated with the second CPU in the domain.
So, for first Medium Domain in a Half Rack, these connections would be through P0 on
the InfiniBand HCA installed in slot 3 (active) and P1 on the InfiniBand HCA installed
in slot 11 (standby).

Understanding Small Domains (Half Rack)

These topics provide information on the Small Domain configuration for the Half Rack:

■ “Percentage of CPU and Memory Resource Allocation” on page 67


■ “Management Network” on page 68
■ “10-GbE Client Access Network” on page 68
■ “InfiniBand Network” on page 69

Percentage of CPU and Memory Resource Allocation

The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:

■ One Medium Domain and two Small Domains: The following options are available for CPU
and memory resource allocation:
■ Two sockets for the Medium Domain, one socket apiece for the two Small Domains
■ One socket for the Medium Domain, two sockets for the first Small Domain, one socket
for the second Small Domain
■ One socket for the Medium Domain and the first Small Domain, two sockets for the
second Small Domain


■ Four Small Domains: One socket for each Small Domain

Management Network

The number and type of 1-GbE host management ports that are assigned to each Small Domain
varies, depending on the CPU that the Small Domain is associated with:

■ CPU0: NET0-1
■ CPU1: NET1, NET3 (through virtual network devices)
■ CPU6: NET0, NET2 (through virtual network devices)
■ CPU7: NET2-3

10-GbE Client Access Network

One PCI root complex pair, and therefore one 10-GbE NIC, is associated with the Small
Domain on the SPARC T5-8 server in the Half Rack. Both ports are used on each dual-ported
10-GbE NIC, and both ports on each 10-GbE NIC would be part of one IPMP group. Both ports
from each dual-ported 10-GbE NIC are connected to the 10-GbE network for the Small Domains.

The following 10-GbE NICs and ports are used for connection to the client access network for
the Small Domains, depending on the CPU that the Small Domain is associated with:

■ CPU0:
■ PCIe slot 1, port 0 (active)
■ PCIe slot 1, port 1 (standby)
■ CPU1:
■ PCIe slot 9, port 0 (active)
■ PCIe slot 9, port 1 (standby)
■ CPU6:
■ PCIe slot 6, port 0 (active)
■ PCIe slot 6, port 1 (standby)
■ CPU7:
■ PCIe slot 14, port 0 (active)
■ PCIe slot 14, port 1 (standby)

A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if the connection to one of the two
ports on the 10-GbE NIC fails.


Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.

InfiniBand Network

The connections to the InfiniBand network vary, depending on the type of domain:

■ Database Domain:
■ Storage private network: Connections through P1 (active) and P0 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P1 (active) and P0
(standby) on that InfiniBand HCA.
■ Exadata private network: Connections through P0 (active) and P1 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P0 (active) and P1
(standby) on that InfiniBand HCA.
■ Application Domain:
■ Storage private network: Connections through P1 (active) and P0 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P1 (active) and P0
(standby) on that InfiniBand HCA.
■ Oracle Solaris Cluster private network: Connections through P0 (active) and P1
(standby) on the InfiniBand HCA associated with the CPU associated with each Small
Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P0 (active) and P1
(standby) on that InfiniBand HCA.

Understanding Full Rack Configurations


In the Full Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server has four
processor modules (PM0 through PM3), with two sockets or PCIe root complex pairs on each
processor module, for a total of eight sockets or PCIe root complexes for each SPARC T5-8
server. You can therefore have from one to eight logical domains on each SPARC T5-8 server in
a Full Rack.

■ “Logical Domain Configurations and PCIe Slot Mapping (Full Rack)” on page 70
■ “Understanding Giant Domains (Full Rack)” on page 72
■ “Understanding Large Domains (Full Rack)” on page 74
■ “Understanding Medium Domains (Full Rack)” on page 77
■ “Understanding Small Domains (Full Rack)” on page 80

Logical Domain Configurations and PCIe Slot Mapping (Full Rack)

The following figure provides information on the available configurations for the Full Rack.
It also provides information on the PCIe slots and the InfiniBand (IB) HCAs or 10-GbE NICs
installed in each PCIe slot, and which logical domain those cards would be mapped to, for the
Full Rack.


FIGURE 17 Logical Domain Configurations and PCIe Slot Mapping (Full Rack)

Related Information
■ “Understanding Giant Domains (Full Rack)” on page 72
■ “Understanding Large Domains (Full Rack)” on page 74
■ “Understanding Medium Domains (Full Rack)” on page 77
■ “Understanding Small Domains (Full Rack)” on page 80


Understanding Giant Domains (Full Rack)


These topics provide information on the Giant Domain configuration for the Full Rack:
■ “Percentage of CPU and Memory Resource Allocation” on page 72
■ “Management Network” on page 72
■ “10-GbE Client Access Network” on page 72
■ “InfiniBand Network” on page 73

Percentage of CPU and Memory Resource Allocation

One domain is set up on each SPARC T5-8 server in this configuration, taking up 100% of the
server. Therefore, 100% of the CPU and memory resources are allocated to this single domain
on each server (all eight sockets).

Note - You can use the CPU/Memory tool (setcoremem) to change this default allocation
after the initial installation of your system, if you want to have some CPU or memory
resources parked (unused). See “Configuring CPU and Memory Resources (osc-
setcoremem)” on page 170 for more information.

Management Network

Two out of the four 1-GbE host management ports are part of one IPMP group for this domain:
■ NET0
■ NET3

10-GbE Client Access Network

All eight PCI root complex pairs, and therefore eight 10-GbE NICs, are associated with the
logical domain on the server in this configuration. However, only two of the eight 10-GbE NICs
are used with this domain. One port is used on each dual-ported 10-GbE NIC. The two ports on
the two separate 10-GbE NICs would be part of one IPMP group. One port from the dual-ported
10-GbE NICs is connected to the 10-GbE network in this case, with the remaining unused ports
and 10-GbE NICs unconnected.

The following 10-GbE NICs and ports are used for connection to the client access network for
this configuration:
■ PCIe slot 1, port 0 (active)
■ PCIe slot 14, port 1 (standby)


A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.

Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.

InfiniBand Network

The connections to the InfiniBand network vary, depending on the type of domain:
■ Database Domain:
■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the first processor module (PM0) in the domain and
P0 (standby) on the InfiniBand HCA associated with the last CPU in the last processor
module (PM3) in the domain.
So, for a Giant Domain in a Full Rack, these connections would be through P1 on the
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
slot 16 (standby).
■ Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
So, for a Giant Domain in a Full Rack, connections will be made through all eight
InfiniBand HCAs, with P0 on each as the active connection and P1 on each as the
standby connection.
■ Application Domain:
■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the first processor module (PM0) in the domain and
P0 (standby) on the InfiniBand HCA associated with the last CPU in the last processor
module (PM3) in the domain.
So, for a Giant Domain in a Full Rack, these connections would be through P1 on the
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
slot 16 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the first CPU in the second processor module (PM1)
in the domain and P1 (standby) on the InfiniBand HCA associated with the first CPU in
the third processor module (PM2) in the domain.
So, for a Giant Domain in a Full Rack, these connections would be through P0 on the
InfiniBand HCA installed in slot 4 (active) and P1 on the InfiniBand HCA installed in
slot 7 (standby).
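
For quick reference, the Full Rack Giant Domain examples above can be collected into a small
data structure. The following Python snippet only restates the slot and port assignments given
in the text; the dictionary keys and layout are illustrative, not an Oracle-defined format.

# InfiniBand HCA slot/port assignments for a Giant Domain in a Full Rack,
# as described above. Keys and structure are illustrative only.
GIANT_DOMAIN_FULL_RACK_IB = {
    "storage_private": {                      # Database and Application Domains
        "active":  {"slot": 3,  "port": "P1"},
        "standby": {"slot": 16, "port": "P0"},
    },
    "exadata_private": {                      # Database Domain only
        "active_port": "P0", "standby_port": "P1", "hcas": 8,
    },
    "solaris_cluster_private": {              # Application Domain only
        "active":  {"slot": 4, "port": "P0"},
        "standby": {"slot": 7, "port": "P1"},
    },
}

for network, assignment in GIANT_DOMAIN_FULL_RACK_IB.items():
    print(network, assignment)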


Understanding Large Domains (Full Rack)


These topics provide information on the Large Domain configuration for the Full Rack:
■ “Percentage of CPU and Memory Resource Allocation” on page 74
■ “Management Network” on page 75
■ “10-GbE Client Access Network” on page 75
■ “InfiniBand Network” on page 76

Percentage of CPU and Memory Resource Allocation

The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:
■ Config F2-1 (Two Large Domains): The following options are available for CPU and
memory resource allocation:
■ Four sockets for each Large Domain
■ Two sockets for the first Large Domain, six sockets for the second Large Domain
■ Six sockets for the first Large Domain, two sockets for the second Large Domain
■ One socket for the first Large Domain, seven sockets for the second Large Domain
■ Seven sockets for the first Large Domain, one socket for the second Large Domain
■ Config F3-1 (One Large Domain and two Medium Domains): The following options are
available for CPU and memory resource allocation:
■ Four sockets for the Large Domain, two sockets apiece for the two Medium Domains
■ Two sockets for the Large Domain, four sockets for the first Medium Domain, two
sockets for the second Medium Domain
■ Two sockets for the Large Domain and the first Medium Domain, four sockets for the
second Medium Domain
■ Six sockets for the Large Domain, one socket apiece for the two Medium Domains
■ Five sockets for the Large Domain, two sockets for the first Medium Domain, one
socket for the second Medium Domain
■ Five sockets for the Large Domain, one socket for the first Medium Domain, two
sockets for the second Medium Domain
■ Config F4-2 (One Large Domain, two Small Domains, one Medium Domain): The
following options are available for CPU and memory resource allocation:
■ Four sockets for the Large Domain, one socket apiece for the two Small Domains, two
sockets for the Medium Domain
■ Three sockets for the Large Domain, one socket apiece for the two Small Domains,
three sockets for the Medium Domain


■ Two sockets for the Large Domain, one socket apiece for the two Small Domains, four
sockets for the Medium Domain
■ Five sockets for the Large Domain, one socket apiece for the two Small Domains and
the Medium Domain
■ Config F5-2 (One Large Domain, four Small Domains): The following options are available
for CPU and memory resource allocation:
■ Four sockets for the Large Domain, one socket apiece for the four Small Domains
■ Three sockets for the Large Domain, one socket apiece for the first three Small
Domains, two sockets for the fourth Small Domain
■ Two sockets for the Large Domain, one socket apiece for the first and second Small
Domains, two sockets apiece for the third and fourth Small Domains
■ Two sockets for the Large Domain, one socket apiece for the first three Small Domains,
three sockets for the fourth Small Domain
■ Two sockets apiece for the Large Domain and the first Small Domain, one socket apiece
for the second and third Small Domains, two sockets for the fourth Small Domain
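
Whichever option is chosen, the domains on a SPARC T5-8 server in a Full Rack always account
for all eight sockets. The following Python sketch is an illustrative check of that rule; the
tuples are taken from the Config F2-1 and F5-2 options above, and the validation function itself
is not an Oracle tool.

# Each tuple lists sockets per domain for one allocation option on a Full Rack
# SPARC T5-8 server (eight sockets total). Illustrative check only.
ALLOCATION_OPTIONS = {
    "F2-1": [(4, 4), (2, 6), (6, 2), (1, 7), (7, 1)],
    "F5-2": [(4, 1, 1, 1, 1), (3, 1, 1, 1, 2), (2, 1, 1, 2, 2),
             (2, 1, 1, 1, 3), (2, 2, 1, 1, 2)],
}
SOCKETS_PER_SERVER = 8

def validate(options):
    for config, variants in options.items():
        for sockets in variants:
            assert sum(sockets) == SOCKETS_PER_SERVER, (config, sockets)
    return True

print(validate(ALLOCATION_OPTIONS))   # True: every option uses all 8 sockets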

Management Network

Two 1-GbE host management ports are part of one IPMP group for each Large Domain.
Following are the 1-GbE host management ports associated with each Large Domain,
depending on how many domains are on the SPARC T5-8 server in the Full Rack:

■ First Large Domain: NET0-1


■ Second Large Domain, if applicable: NET2-3

10-GbE Client Access Network

Four PCI root complex pairs, and therefore four 10-GbE NICs, are associated with the Large
Domain on the SPARC T5-8 server in the Full Rack. One port is used on each dual-ported 10-
GbE NIC. The two ports on the two separate 10-GbE NICs would be part of one IPMP group.

The following 10-GbE NICs and ports are used for connection to the client access network for
this configuration:

■ First Large Domain:
■ PCIe slot 1, port 0 (active)
■ PCIe slot 10, port 1 (standby)
■ Second Large Domain, if applicable:
■ PCIe slot 5, port 0 (active)
■ PCIe slot 14, port 1 (standby)

A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.

Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.

InfiniBand Network

The connections to the InfiniBand network vary, depending on the type of domain:

■ Database Domain:
■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the last CPU in the domain.
For example, for the first Large Domain in a Full Rack, these connections would be
through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand
HCA installed in slot 12 (standby).
■ Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
So, for a Large Domain in a Full Rack, connections will be made through the four
InfiniBand HCAs associated with the domain, with P0 on each as the active connection
and P1 on each as the standby connection.
■ Application Domain:
■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the last CPU in the domain.
For example, for the first Large Domain in a Full Rack, these connections would be
through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand
HCA installed in slot 12 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the second CPU in the domain and P1 (standby) on the
InfiniBand HCA associated with the third CPU in the domain.
For example, for the first Large Domain in a Full Rack, these connections would be
through P0 on the InfiniBand HCA installed in slot 11 (active) and P1 on the InfiniBand
HCA installed in slot 4 (standby).


Understanding Medium Domains (Full Rack)


These topics provide information on the Medium Domain configuration for the Full Rack:

■ “Percentage of CPU and Memory Resource Allocation” on page 77


■ “Management Network” on page 78
■ “10-GbE Client Access Network” on page 79
■ “InfiniBand Network” on page 79

Percentage of CPU and Memory Resource Allocation

The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:

■ Config F3-1 (One Large Domain and two Medium Domains): The following options are
available for CPU and memory resource allocation:
■ Four sockets for the Large Domain, two sockets apiece for the two Medium Domains
■ Two sockets for the Large Domain, four sockets for the first Medium Domain, two
sockets for the second Medium Domain
■ Two sockets for the Large Domain and the first Medium Domain, four sockets for the
second Medium Domain
■ Six sockets for the Large Domain, one socket apiece for the two Medium Domains
■ Five sockets for the Large Domain, two sockets for the first Medium Domain, one
socket for the second Medium Domain
■ Five sockets for the Large Domain, one socket for the first Medium Domain, two
sockets for the second Medium Domain
■ Config F4-1 (Four Medium Domains): The following options are available for CPU and
memory resource allocation:
■ Two sockets apiece for all four Medium Domains
■ One socket for the first Medium Domain, two sockets apiece for the second and third
Medium Domains, and three sockets for the fourth Medium Domain
■ Three sockets for the first Medium Domain, one socket for the second Medium Domain,
and two sockets apiece for the third and fourth Medium Domains
■ Three sockets for the first Medium Domain, two sockets apiece for the second and
fourth Medium Domains, and one socket for the third Medium Domain
■ Three sockets for the first Medium Domain, two sockets apiece for the second and third
Medium Domains, and one socket for the fourth Medium Domain
■ Config F4-2 (One Large Domain, two Small Domains, one Medium Domain): The
following options are available for CPU and memory resource allocation:


■ Four sockets for the Large Domain, one socket apiece for the two Small Domains, two
sockets for the Medium Domain
■ Three sockets for the Large Domain, one socket apiece for the two Small Domains,
three sockets for the Medium Domain
■ Two sockets for the Large Domain, one socket apiece for the two Small Domains, four
sockets for the Medium Domain
■ Five sockets for the Large Domain, one socket apiece for the two Small Domains and
the Medium Domain
■ Config F5-1 (Three Medium Domains, two Small Domains): The following options are
available for CPU and memory resource allocation:
■ Two sockets apiece for the first and second Medium Domains, one socket apiece for the
first and second Small Domains, two sockets for the third Medium Domain
■ One socket for the first Medium Domain, two sockets for the second Medium Domain,
one socket for the first Small Domain, two sockets apiece for the second Small Domain
and the third Medium Domain
■ Two sockets apiece for the first and second Medium Domains, one socket for the first
Small Domain, two sockets for the second Small Domain, one socket for the third
Medium Domain
■ Two sockets for the first Medium Domain, one socket for the second Medium Domain,
two sockets for the first Small Domain, one socket for the second Small Domain, two
sockets for the third Medium Domain
■ Config F6-1 (Two Medium Domains, four Small Domains): The following options are
available for CPU and memory resource allocation:
■ Two sockets for the first Medium Domain, one socket apiece for the four Small
Domains, two sockets for the second Medium Domain
■ Three sockets for the first Medium Domain, one socket apiece for the four Small
Domains and the second Medium Domain
■ One socket apiece for the first Medium Domain and the four Small Domains, three
sockets for the second Medium Domain
■ Config F7-1 (One Medium Domain, six Small Domains): Two sockets for the Medium
Domain, one socket apiece for the six Small Domains

Management Network

The number and type of 1-GbE host management ports that are assigned to each Medium
Domain varies, depending on the CPUs that the Medium Domain is associated with:

■ CPU0/CPU1: NET0-1
■ CPU2/CPU3: NET1, NET3 (through virtual network devices)


■ CPU4/CPU5: NET0, NET2 (through virtual network devices)


■ CPU6/CPU7: NET2-3

10-GbE Client Access Network

Two PCI root complex pairs, and therefore two 10-GbE NICs, are associated with the Medium
Domain on the SPARC T5-8 server in the Full Rack. One port is used on each dual-ported 10-
GbE NIC. The two ports on the two separate 10-GbE NICs would be part of one IPMP group.

The following 10-GbE NICs and ports are used for connection to the client access network for
the Medium Domains, depending on the CPUs that the Medium Domain is associated with:
■ CPU0/CPU1:
■ PCIe slot 1, port 0 (active)
■ PCIe slot 9, port 1 (standby)
■ CPU2/CPU3:
■ PCIe slot 2, port 0 (active)
■ PCIe slot 10, port 1 (standby)
■ CPU4/CPU5:
■ PCIe slot 5, port 0 (active)
■ PCIe slot 13, port 1 (standby)
■ CPU6/CPU7:
■ PCIe slot 6, port 0 (active)
■ PCIe slot 14, port 1 (standby)

A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if the connection to one of the two
ports on the 10-GbE NIC fails.

Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.

InfiniBand Network

The connections to the InfiniBand network vary, depending on the type of domain:
■ Database Domain:


■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the second CPU in the domain.
For example, for the first Medium Domain in a Full Rack, these connections would be
through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand
HCA installed in slot 11 (standby).
■ Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
For example, for the first Medium Domain in a Full Rack, connections would be made
through both InfiniBand HCAs (slot 3 and slot 11), with P0 on each as the active
connection and P1 on each as the standby connection.
■ Application Domain:
■ Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the second CPU in the domain.
For example, for the first Medium Domain in a Full Rack, these connections would be
through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand
HCA installed in slot 11 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the first CPU in the domain and P1 (standby) on the
InfiniBand HCA associated with the second CPU in the domain.
For example, for the first Medium Domain in a Full Rack, these connections would be
through P0 on the InfiniBand HCA installed in slot 3 (active) and P1 on the InfiniBand
HCA installed in slot 11 (standby).

Understanding Small Domains (Full Rack)

Note - If you have the Full Rack, you cannot install a Fibre Channel PCIe card in a slot that
is associated with a Small Domain. In Full Rack configurations, Fibre Channel PCIe cards
can only be added to domains with more than one 10-GbE NIC. One 10-GbE NIC must be
left for connectivity to the client access network, but in domains with more than one 10-GbE
NIC, the other 10-GbE NICs can be replaced with Fibre Channel HBAs. See “Understanding
Full Rack Configurations” on page 69 for more information on the configurations with
Small Domains and “Using an Optional Fibre Channel PCIe Card” on page 139 for more
information on the Fibre Channel PCIe card.

These topics provide information on the Small Domain configuration for the Full Rack:

■ “Percentage of CPU and Memory Resource Allocation” on page 81


■ “Management Network” on page 82


■ “10-GbE Client Access Network” on page 82
■ “InfiniBand Network” on page 83

Percentage of CPU and Memory Resource Allocation

The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:

■ Config F4-2 (One Large Domain, two Small Domains, one Medium Domain): The
following options are available for CPU and memory resource allocation:
■ Four sockets for the Large Domain, one socket apiece for the two Small Domains, two
sockets for the Medium Domain
■ Three sockets for the Large Domain, one socket apiece for the two Small Domains,
three sockets for the Medium Domain
■ Two sockets for the Large Domain, one socket apiece for the two Small Domains, four
sockets for the Medium Domain
■ Five sockets for the Large Domain, one socket apiece for the two Small Domains and
the Medium Domain
■ Config F5-1 (Three Medium Domains, two Small Domains): The following options are
available for CPU and memory resource allocation:
■ Two sockets apiece for the first and second Medium Domains, one socket apiece for the
first and second Small Domains, two sockets for the third Medium Domain
■ One socket for the first Medium Domain, two sockets for the second Medium Domain,
one socket for the first Small Domain, two sockets apiece for the second Small Domain
and the third Medium Domain
■ Two sockets apiece for the first and second Medium Domains, one socket for the first
Small Domain, two sockets for the second Small Domain, one socket for the third
Medium Domain
■ Two sockets for the first Medium Domain, one socket for the second Medium Domain,
two sockets for the first Small Domain, one socket for the second Small Domain, two
sockets for the third Medium Domain
■ Config F5-2 (One Large Domain, four Small Domains): The following options are available
for CPU and memory resource allocation:
■ Four sockets for the Large Domain, one socket apiece for the four Small Domains
■ Three sockets for the Large Domain, one socket apiece for the first three Small
Domains, two sockets for the fourth Small Domain
■ Two sockets for the Large Domain, one socket apiece for the first and second Small
Domains, two sockets apiece for the third and fourth Small Domains


■ Two sockets for the Large Domain, one socket apiece for the first three Small Domains,
three sockets for the fourth Small Domain
■ Two sockets apiece for the Large Domain and the first Small Domain, one socket apiece
for the second and third Small Domains, two sockets for the fourth Small Domain
■ Config F6-1 (Two Medium Domains, four Small Domains): The following options are
available for CPU and memory resource allocation:
■ Two sockets for the first Medium Domain, one socket apiece for the four Small
Domains, two sockets for the second Medium Domain
■ Three sockets for the first Medium Domain, one socket apiece for the four Small
Domains and the second Medium Domain
■ One socket apiece for the first Medium Domain and the four Small Domains, three
sockets for the second Medium Domain
■ Config F7-1 (One Medium Domain, six Small Domains): Two sockets for the Medium
Domain, one socket apiece for the six Small Domains
■ Config F8-1 (Eight Small Domains): One socket for each Small Domain

Management Network

The number and type of 1-GbE host management ports that are assigned to each Small Domain
varies, depending on the CPU that the Small Domain is associated with:
■ CPU0: NET0-1
■ CPU1: NET1, NET3 (through virtual network devices)
■ CPU2: NET0, NET2 (through virtual network devices)
■ CPU3: NET1, NET3 (through virtual network devices)
■ CPU4: NET0, NET2 (through virtual network devices)
■ CPU5: NET1, NET3 (through virtual network devices)
■ CPU6: NET0, NET2 (through virtual network devices)
■ CPU7: NET2-3

10-GbE Client Access Network

One PCI root complex pair, and therefore one 10-GbE NIC, is associated with the Small
Domain on the SPARC T5-8 server in the Full Rack. Both ports are used on the dual-ported
10-GbE NIC, and both ports on that 10-GbE NIC are part of one IPMP group. Both ports of the
dual-ported 10-GbE NIC are connected to the 10-GbE network for the Small Domains.

The following 10-GbE NICs and ports are used for connection to the client access network for
the Small Domains, depending on the CPU that the Small Domain is associated with:


■ CPU0:
■ PCIe slot 1, port 0 (active)
■ PCIe slot 1, port 1 (standby)
■ CPU1:
■ PCIe slot 9, port 0 (active)
■ PCIe slot 9, port 1 (standby)
■ CPU2:
■ PCIe slot 2, port 0 (active)
■ PCIe slot 2, port 1 (standby)
■ CPU3:
■ PCIe slot 10, port 0 (active)
■ PCIe slot 10, port 1 (standby)
■ CPU4:
■ PCIe slot 5, port 0 (active)
■ PCIe slot 5, port 1 (standby)
■ CPU5:
■ PCIe slot 13, port 0 (active)
■ PCIe slot 13, port 1 (standby)
■ CPU6:
■ PCIe slot 6, port 0 (active)
■ PCIe slot 6, port 1 (standby)
■ CPU7:
■ PCIe slot 14, port 0 (active)
■ PCIe slot 14, port 1 (standby)
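
Because each Small Domain uses both ports of a single 10-GbE NIC, the assignments above
reduce to one PCIe slot per CPU. The following Python dictionary restates those slot
assignments; the helper function and names are illustrative only.

# PCIe slot of the dual-ported 10-GbE NIC used by each Small Domain,
# keyed by the CPU the domain is associated with (Full Rack).
SMALL_DOMAIN_10GBE_SLOT = {
    "CPU0": 1, "CPU1": 9,  "CPU2": 2, "CPU3": 10,
    "CPU4": 5, "CPU5": 13, "CPU6": 6, "CPU7": 14,
}

def client_access_ports(cpu):
    """Return the (active, standby) ports of a Small Domain's IPMP group."""
    slot = SMALL_DOMAIN_10GBE_SLOT[cpu]
    return (f"PCIe slot {slot}, port 0 (active)",
            f"PCIe slot {slot}, port 1 (standby)")

print(client_access_ports("CPU5"))    # PCIe slot 13, ports 0 and 1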

A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if the connection to one of the two
ports on the 10-GbE NIC fails.

Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.

InfiniBand Network

The connections to the InfiniBand network vary, depending on the type of domain:


■ Database Domain:
■ Storage private network: Connections through P1 (active) and P0 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P1 (active) and P0
(standby) on that InfiniBand HCA.
■ Exadata private network: Connections through P0 (active) and P1 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P0 (active) and P1
(standby) on that InfiniBand HCA.
■ Application Domain:
■ Storage private network: Connections through P1 (active) and P0 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P1 (active) and P0
(standby) on that InfiniBand HCA.
■ Oracle Solaris Cluster private network: Connections through P0 (active) and P1
(standby) on the InfiniBand HCA associated with the CPU associated with each Small
Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P0 (active) and P1
(standby) on that InfiniBand HCA.

Understanding Clustering Software


Clustering software is typically used on multiple interconnected servers so that they appear as
if they are one server to end users and applications. For Oracle SuperCluster T5-8, clustering
software is used to cluster certain logical domains on the SPARC T5-8 servers together with the
same type of domain on other SPARC T5-8 servers. The benefits of clustering software include
the following:
■ Reduce or eliminate system downtime because of software or hardware failure
■ Ensure availability of data and applications to end users, regardless of the kind of failure
that would normally take down a single-server system
■ Increase application throughput by enabling services to scale to additional processors by
adding nodes to the cluster and balancing the load
■ Provide enhanced availability of the system by enabling you to perform maintenance
without shutting down the entire cluster


Oracle SuperCluster T5-8 uses the following clustering software:

■ “Cluster Software for the Database Domain” on page 85


■ “Cluster Software for the Oracle Solaris Application Domains” on page 85

Cluster Software for the Database Domain


Oracle Database 11g Real Application Clusters (Oracle RAC) enables the clustering of the
Oracle Database on the Database Domain. Oracle RAC uses Oracle Clusterware for the
infrastructure to cluster the Database Domain on the SPARC T5-8 servers together.

Oracle Clusterware is a portable cluster management solution that is integrated with the
Oracle Database and is a required component for using Oracle RAC. Oracle Clusterware
enables you to create a clustered pool of storage to be used by any combination of
single-instance and Oracle RAC databases.

Single-instance Oracle databases have a one-to-one relationship between the Oracle database
and the instance. Oracle RAC environments, however, have a one-to-many relationship between
the database and instances. In Oracle RAC environments, the cluster database instances access
one database. The combined processing power of the multiple servers can provide greater
throughput and scalability than is available from a single server. Oracle RAC is the Oracle
Database option that provides a single system image for multiple servers to access one Oracle
database.

Oracle RAC is a unique technology that provides high availability and scalability for all
application types. The Oracle RAC infrastructure is also a key component for implementing
the Oracle enterprise grid computing architecture. Having multiple instances access a single
database prevents the server from being a single point of failure. Applications that you deploy
on Oracle RAC databases can operate without code changes.

Cluster Software for the Oracle Solaris Application Domains

The Oracle Solaris Cluster software is an optional clustering tool used for the Oracle Solaris
Application Domains. On Oracle SuperCluster T5-8, the Oracle Solaris Cluster software is used
to cluster the Oracle Solaris Application Domain on the SPARC T5-8 servers together.


Understanding the Network Requirements


These topics describe the network requirements for Oracle SuperCluster T5-8.

■ “Network Requirements Overview” on page 86


■ “Network Connection Requirements for Oracle SuperCluster T5-8” on page 89
■ “Default IP Addresses” on page 90

Network Requirements Overview


Oracle SuperCluster T5-8 includes SPARC T5-8 servers, Exadata Storage Servers, and the ZFS
storage appliance, as well as equipment to connect the SPARC T5-8 servers to your network.
The network connections enable the servers to be administered remotely and enable clients to
connect to the SPARC T5-8 servers.

Each SPARC T5-8 server consists of the following network components and interfaces:

■ 4 embedded Gigabit Ethernet ports (NET0, NET1, NET2, and NET3) for connection to the
host management network
■ 1 Ethernet port (NET MGT) for Oracle Integrated Lights Out Manager (Oracle ILOM)
remote management
■ Either 4 (Half Rack) or 8 (Full Rack) dual-ported Sun QDR InfiniBand PCIe Low Profile
host channel adapters (HCAs) for connection to the InfiniBand private network
■ Either 4 (Half Rack) or 8 (Full Rack) dual-ported Sun Dual 10-GbE SFP+ PCIe 2.0 Low
Profile network interface cards (NICs) for connection to the 10-GbE client access network

Note - The QSFP modules for the 10-GbE PCIe 2.0 network cards are purchased separately.

Each Exadata Storage Server consists of the following network components and interfaces:

■ 1 embedded Gigabit Ethernet port (NET0) for connection to the host management network
■ 1 dual-ported Sun QDR InfiniBand PCIe Low Profile host channel adapter (HCA) for
connection to the InfiniBand private network
■ 1 Ethernet port (NET MGT) for Oracle ILOM remote management

Each ZFS storage controller consists of the following network components and interfaces:

■ 1 embedded Gigabit Ethernet port for connection to the host management network:


■ NET0 on the first storage controller (installed in slot 25 in the rack)


■ NET1 on the second storage controller (installed in slot 26 in the rack)
■ 1 dual-port QDR InfiniBand Host Channel Adapter for connection to the InfiniBand private
network
■ 1 Ethernet port (NET0) for Oracle ILOM remote management using sideband management.
The dedicated Oracle ILOM port is not used because sideband management is used.

The Cisco Catalyst 4948 Ethernet switch supplied with Oracle SuperCluster T5-8 is minimally
configured during installation. The minimal configuration disables IP routing, and sets the
following:

■ Host name
■ IP address
■ Subnet mask
■ Default gateway
■ Domain name
■ Domain Name Server
■ NTP server
■ Time
■ Time zone

Additional configuration, such as defining multiple virtual local area networks (VLANs) or
enabling routing, might be required for the switch to operate properly in your environment and
is beyond the scope of the installation service. If additional configuration is needed, then your
network administrator must perform the necessary configuration steps during installation of
Oracle SuperCluster T5-8.

To deploy Oracle SuperCluster T5-8, ensure that you meet the minimum network requirements.
There are three networks for Oracle SuperCluster T5-8. Each network must be on a distinct and
separate subnet from the others. The network descriptions are as follows:

■ Management network – This required network connects to your existing management
network, and is used for administrative work for all components of Oracle SuperCluster
T5-8. It connects the servers, Oracle ILOM, and switches connected to the Ethernet switch
in the rack. There is one uplink from the Ethernet switch in the rack to your existing
management network.

Note - Network connectivity to the PDUs is only required if the electric current will be
monitored remotely.


Each SPARC T5-8 server and Exadata Storage Server use two network interfaces for
management. One provides management access to the operating system through the 1-GbE
host management interface(s), and the other provides access to the Oracle Integrated Lights
Out Manager through the Oracle ILOM Ethernet interface.

Note - The SPARC T5-8 servers have four 1-GbE host management interfaces (NET 0
- NET3). All four NET interfaces are physically connected, and use IPMP to provide
redundancy. See “Understanding the Software Configurations” on page 46 for more
information.

The method used to connect the ZFS storage controllers to the management network varies
depending on the controller:
■ Storage controller 1: NET0 used to provide access to the Oracle ILOM network using
sideband management, as well as access to the 1-GbE host management network.
■ Storage controller 2: NET0 used to provide access to the Oracle ILOM network using
sideband management, and NET1 used to provide access to the 1-GbE host management
network.

Oracle SuperCluster T5-8 is delivered with the 1-GbE host management and Oracle ILOM
interfaces connected to the Ethernet switch on the rack. The 1-GbE host management
interfaces on the SPARC T5-8 servers should not be used for client or application network
traffic. Cabling or configuration changes to these interfaces are not permitted.
■ Client access network – This required 10-GbE network connects the SPARC T5-8 servers
to your existing client network and is used for client access to the servers. Database
applications access the database through this network using Single Client Access Name
(SCAN) and Oracle RAC Virtual IP (VIP) addresses.
■ InfiniBand private network – This network connects the SPARC T5-8 servers, ZFS
storage appliance, and Exadata Storage Servers using the InfiniBand switches on the rack.
For SPARC T5-8 servers configured with Database Domains, Oracle Database uses this
network for Oracle RAC cluster interconnect traffic and for accessing data on Exadata
Storage Servers and the ZFS storage appliance. For SPARC T5-8 servers configured with
the Application Domain, Oracle Solaris Cluster uses this network for cluster interconnect
traffic and to access data on the ZFS storage appliance. This non-routable network is fully
contained in Oracle SuperCluster T5-8, and does not connect to your existing network. This
network is automatically configured during installation.

Note - All networks must be on distinct and separate subnets from each other.
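
One way to sanity-check the requirement that the three networks use distinct subnets is to
compare the planned subnets programmatically before installation. The following Python sketch
uses the standard ipaddress module; the example subnets are the factory-default 192.168.1.x,
192.168.10.x, and 192.168.40.x ranges shown later in this chapter, written here as /24 networks
for illustration, and would be replaced with your own planned networks.

# Verify that the management, InfiniBand, and client access networks do not
# overlap. The example subnets are illustrative; substitute your own plan.
import ipaddress
from itertools import combinations

planned_subnets = {
    "management":    ipaddress.ip_network("192.168.1.0/24"),
    "infiniband":    ipaddress.ip_network("192.168.10.0/24"),
    "client_access": ipaddress.ip_network("192.168.40.0/24"),
}

for (name_a, net_a), (name_b, net_b) in combinations(planned_subnets.items(), 2):
    if net_a.overlaps(net_b):
        raise ValueError(f"{name_a} ({net_a}) overlaps {name_b} ({net_b})")

print("All networks are on distinct, non-overlapping subnets.")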

The following figure shows the default network diagram.


FIGURE 18 Network Diagram for Oracle SuperCluster T5-8

Network Connection Requirements for Oracle SuperCluster T5-8

The following connections are required for Oracle SuperCluster T5-8 installation:

TABLE 1 New Network Connections Required for Installation

Connection Type | Number of Connections | Comments
Management network | 1 for the Ethernet switch | Connect to the existing management network.
Client access network | Typically 2 per logical domain | Connect to the client access network. (You will not have redundancy through IPMP if there is only one connection per logical domain.)


Understanding Default IP Addresses


These topics list the default IP addresses assigned to Oracle SuperCluster T5-8 components
during manufacturing.

■ “Default IP Addresses” on page 90


■ “Default Host Names and IP Addresses” on page 90

Default IP Addresses

Four sets of default IP addresses are assigned at manufacturing:

■ Management IP addresses – IP addresses used by Oracle ILOM for the SPARC T5-8
servers, Exadata Storage Servers, and the ZFS storage controllers.
■ Host IP addresses – Host IP addresses used by the SPARC T5-8 servers, Exadata Storage
Servers, ZFS storage controllers, and switches.
■ InfiniBand IP addresses – InfiniBand interfaces are the default channel of communication
among SPARC T5-8 servers, Exadata Storage Servers, and the ZFS storage controllers. If
you are connecting Oracle SuperCluster T5-8 to another Oracle SuperCluster T5-8 or to an
Oracle Exadata or Exalogic machine on the same InfiniBand fabric, the InfiniBand interface
enables communication between the SPARC T5-8 servers and storage server heads in one
Oracle SuperCluster T5-8 and the other Oracle SuperCluster T5-8 or Oracle Exadata or
Exalogic machine.
■ 10-GbE IP addresses – The IP addresses used by the 10-GbE client access network
interfaces.

Tip - For more information about how these interfaces are used, see Figure 18, “Network
Diagram for Oracle SuperCluster T5-8,” on page 89.

Default Host Names and IP Addresses

Refer to the following topics for the default IP addresses used in Oracle SuperCluster T5-8:

■ “Default Host Names and IP Addresses for the Oracle ILOM and Host Management
Networks” on page 91
■ “Default Host Names and IP Addresses for the InfiniBand and 10-GbE Client Access
Networks” on page 92


Default Host Names and IP Addresses for the Oracle ILOM and Host
Management Networks

TABLE 2 Default Host Names and IP Addresses for the Oracle ILOM and Host Management Networks

Information Assigned at Manufacturing


Unit Number | Rack Component (Front View) | Oracle ILOM Host Names | Oracle ILOM IP Addresses | Host Management Host Names | Host Management IP Addresses
N/A PDU-A (left from rear view) sscpdua 192.168.1.210 N/A
PDU-B (right from rear view) sscpdub 192.168.1.211 N/A
42 Exadata Storage Server 8 ssces8-sp 192.168.1.108 cell08 192.168.1.8
41 (Full Rack only)
40 Exadata Storage Server 7 ssces7-sp 192.168.1.107 cell07 192.168.1.7
39 (Full Rack only)
38 Exadata Storage Server 6 ssces6-sp 192.168.1.106 cell06 192.168.1.6
37 (Full Rack only)
36 Exadata Storage Server 5 ssces5-sp 192.168.1.105 cell05 192.168.1.5
35 (Full Rack only)
34 ZFS Storage Controller 2 sscsn2-sp 192.168.1.116 sscsn2-m1 192.168.1.16
33 ZFS Storage Controller 1 sscsn1-sp 192.168.1.115 sscsn1-m1 192.168.1.15
32 Sun Datacenter InfiniBand Switch 36 sscnm1-m3 192.168.1.203 N/A N/A
(Leaf 2)
31 Sun Disk Shelf for the ZFS Storage N/A N/A N/A N/A
Appliance
30
29
28
27 Cisco Catalyst 4948 Ethernet ssc4948-m1 192.168.1.200 N/A N/A
Management Switch
26 Sun Datacenter InfiniBand Switch 36 sscnm1-m2 192.168.1.202 N/A N/A
(Leaf 1)
25 SPARC T5-8 Server 2 ssccn2-sp 192.168.1.110 ssccn2-m4 192.168.1.39
24
23 ssccn2-m3 192.168.1.29
22
21 ssccn2-m2 192.168.1.19
20
19 ssccn2-m1 192.168.1.10
18

17 SPARC T5-8 Server 1 ssccn1-sp 192.168.1.109 ssccn1-m4 192.168.1.38
16
15 ssccn1-m3 192.168.1.28
14
13 ssccn1-m2 192.168.1.18
12
11 ssccn1-m1 192.168.1.9
10
9 Exadata Storage Server 4 ssces4-sp 192.168.1.104 cell04 192.168.1.4
8
7 Exadata Storage Server 3 ssces3-sp 192.168.1.103 cell03 192.168.1.3
6
5 Exadata Storage Server 2 ssces2-sp 192.168.1.102 cell02 192.168.1.2
4
3 Exadata Storage Server 1 ssces1-sp 192.168.1.101 cell01 192.168.1.1
2
1 Sun Datacenter InfiniBand Switch 36 sscnm1-m1 192.168.1.201 N/A N/A
(Spine)

Default Host Names and IP Addresses for the InfiniBand and 10-GbE
Client Access Networks

TABLE 3 Default Host Names and IP Addresses for the InfiniBand and 10-GbE Client Access Networks
Information Assigned at Manufacturing
Unit Number | Rack Component (Front View) | InfiniBand Host Names | InfiniBand IP Addresses | 10-GbE Client Access Host Names | 10-GbE Client Access IP Addresses
N/A PDU-A (left from rear view) N/A N/A N/A N/A
PDU-B (right from rear view) N/A N/A N/A N/A
42 Exadata Storage Server 8 ssces8-stor 192.168.10.108 N/A N/A
41 (Full Rack only)
40 Exadata Storage Server 7 ssces7-stor 192.168.10.107 N/A N/A
39 (Full Rack only)
38 Exadata Storage Server 6 ssces6-stor 192.168.10.106 N/A N/A
37

(Full Rack only)
36 Exadata Storage Server 5 ssces5-stor 192.168.10.105 N/A N/A
35 (Full Rack only)
34 ZFS Storage Controller 2 N/A N/A N/A N/A
33 ZFS Storage Controller 1 sscsn1-stor1 192.168.10.15 N/A N/A
32 Sun Datacenter InfiniBand Switch 36 N/A N/A N/A N/A
(Leaf 2)
31 Sun Disk Shelf for the ZFS Storage N/A N/A N/A N/A
Appliance
30
29
28
27 Cisco Catalyst 4948 Ethernet N/A N/A N/A N/A
Management Switch
26 Sun Datacenter InfiniBand Switch 36 N/A N/A N/A N/A
(Leaf 1)
25 SPARC T5-8 Server 2 ssccn2-ib8 192.168.10.80 ssccn2-tg16 192.168.40.32

ssccn2-tg15 192.168.40.31
24 ssccn2-ib7 192.168.10.70 ssccn2-tg14 192.168.40.30

ssccn2-tg13 192.168.40.29
23 ssccn2-ib6 192.168.10.60 ssccn2-tg12 192.168.40.28

ssccn2-tg11 192.168.40.27
22 ssccn2-ib5 192.168.10.50 ssccn2-tg10 192.168.40.26

ssccn2-tg9 192.168.40.25
21 ssccn2-ib4 192.168.10.40 ssccn2-tg8 192.168.40.24

ssccn2-tg7 192.168.40.23
20 ssccn2-ib3 192.168.10.30 ssccn2-tg6 192.168.40.22

ssccn2-tg5 192.168.40.21
19 ssccn2-ib2 192.168.10.20 ssccn2-tg4 192.168.40.20

ssccn2-tg3 192.168.40.19
18 ssccn2-ib1 192.168.10.10 ssccn2-tg2 192.168.40.18

ssccn2-tg1 192.168.40.17
17 SPARC T5-8 Server 1 ssccn1-ib8 192.168.10.79 ssccn1-tg16 192.168.40.16

ssccn1-tg15 192.168.40.15
16 ssccn1-ib7 192.168.10.69 ssccn1-tg14 192.168.40.14

ssccn1-tg13 192.168.40.13
15 ssccn1-ib6 192.168.10.59 ssccn1-tg12 192.168.40.12

ssccn1-tg11 192.168.40.11
14 ssccn1-ib5 192.168.10.49 ssccn1-tg10 192.168.40.10

ssccn1-tg9 192.168.40.9
13 ssccn1-ib4 192.168.10.39 ssccn1-tg8 192.168.40.8

ssccn1-tg7 192.168.40.7
12 ssccn1-ib3 192.168.10.29 ssccn1-tg6 192.168.40.6

ssccn1-tg5 192.168.40.5
11 ssccn1-ib2 192.168.10.19 ssccn1-tg4 192.168.40.4

ssccn1-tg3 192.168.40.3
10 ssccn1-ib1 192.168.10.9 ssccn1-tg2 192.168.40.2

ssccn1-tg1 192.168.40.1
9 Exadata Storage Server 4 ssces4-stor 192.168.10.104 N/A N/A
8
7 Exadata Storage Server 3 ssces3-stor 192.168.10.103 N/A N/A
6
5 Exadata Storage Server 2 ssces2-stor 192.168.10.102 N/A N/A
4
3 Exadata Storage Server 1 ssces1-stor 192.168.10.101 N/A N/A
2
1 Sun Datacenter InfiniBand Switch 36 N/A N/A N/A N/A
(Spine)


Preparing the Site

This section describes the steps you should take to prepare the site for your system.

■ “Cautions and Considerations” on page 95


■ “Reviewing System Specifications” on page 96
■ “Reviewing Power Requirements” on page 99
■ “Preparing for Cooling” on page 106
■ “Preparing the Unloading Route and Unpacking Area” on page 111
■ “Preparing the Network” on page 113

Cautions and Considerations


Consider the following when selecting a location for the new rack.

■ Do not install the rack in a location that is exposed to:


■ Direct sunlight
■ Excessive dust
■ Corrosive gases
■ Air with high salt concentrations
■ Frequent vibrations
■ Sources of strong radio frequency interference
■ Static electricity
■ Use power outlets that provide proper grounding.
■ A qualified electrical engineer must perform any grounding work.
■ Each grounding wire for the rack must be used only for the rack.
■ The grounding resistance must not be greater than 10 ohms.
■ Verify the grounding method for the building.
■ Observe the precautions, warnings, and notes about handling that appear on labels on the
equipment.


■ Operate the air conditioning system for 48 hours to bring the room temperature to the
appropriate level.
■ Clean and vacuum the area thoroughly in preparation for installation.

Reviewing System Specifications


■ “Physical Specifications” on page 96
■ “Installation and Service Area” on page 96
■ “Rack and Floor Cutout Dimensions” on page 97

Physical Specifications
Ensure that the installation site can properly accommodate the system by reviewing its physical
specifications and space requirements.

Parameter Metric English


Height 1998 mm 78.66 in.
Width with side panels 600 mm 23.62 in.
Depth (with doors) 1200 mm 47.24 in.
Depth (without doors) 1112 mm 43.78 in.
Minimum ceiling height 2300 mm 90 in.
Minimum space between top of cabinet and ceiling 914 mm 36 in.
Weight (full rack) 869 kg 1,916 lbs
Weight (half rack) 706 kg 1,556 lbs

Related Information
■ “Installation and Service Area” on page 96
■ “Rack and Floor Cutout Dimensions” on page 97
■ “Shipping Package Dimensions” on page 111

Installation and Service Area


Select an installation site that provides enough space to install and service the system.


Location Maintenance Access


Rear maintenance 914 mm 36 in.
Front maintenance 914 mm 36 in.
Top maintenance 914 mm 36 in.

Related Information

■ “Physical Specifications” on page 96


■ “Rack and Floor Cutout Dimensions” on page 97

Rack and Floor Cutout Dimensions


If you plan to route cables down through the bottom of the rack, cut a rectangular hole in the
floor tile. Locate the hole below the rear portion of the rack, between the two rear casters and
behind the rear inner rails. The suggested hole width is 280 mm (11 inches).

If you want a separate grounding cable, see “Install a Ground Cable (Optional)” on page 124.

Caution - Do not create a hole where the rack casters or leveling feet will be placed.


FIGURE 19 Dimensions for Rack Stabilization

Figure Legend

1 Distance from mounting hole slots to the edge of the rack is 113 mm (4.45 inches)
2 Width between the centers of the mounting hole slots is 374 mm (14.72 inches)
3 Distance between mounting hole slots to the edge of the rack is 113 mm (4.45 inches)
4 Distance between the centers of the front and rear mounting hole slots is 1120 mm (44.1 inches)
5 Depth of cable-routing floor cutout is 330 mm (13 inches)
6 Distance between the floor cutout and the edge of the rack is 160 mm (6.3 inches)
7 Width of floor cutout is 280 mm (11 inches)

Related Information

■ “Perforated Floor Tiles” on page 110


■ “Physical Specifications” on page 96
■ “Installation and Service Area” on page 96


Reviewing Power Requirements


■ “Power Consumption” on page 99
■ “Facility Power Requirements” on page 100
■ “Grounding Requirements” on page 100
■ “PDU Power Requirements” on page 101
■ “PDU Thresholds” on page 104

Power Consumption
These tables describe power consumption of SuperCluster T5-8 and expansion racks.

These are measured values and not the rated power for the rack. For rated power specifications,
see “PDU Power Requirements” on page 101.

TABLE 4 SuperCluster T5-8

Condition | Full Rack | Half Rack
Maximum | 15.97 kVA / 15.17 kW | 9.09 kVA / 8.6 kW
Typical | 13.31 kVA / 12.64 kW | 7.5 kVA / 7.13 kW

TABLE 5 Exadata X5-2 Expansion Rack

Product Condition kW kVA


Extreme Flash quarter rack Maximum 3.6 3.7

Typical 2.5 2.6


High capacity quarter rack Maximum 3.4 3.4

Typical 2.4 2.4


Individual Extreme Flash storage server Maximum .6 .6

Typical .4 .4
Individual High capacity storage server Maximum .5 .5

Typical .4 .4
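
The kVA and kW figures in Table 4 are related in the usual way: kW equals kVA multiplied by
the power factor, and the current drawn is the apparent power divided by the supply voltage.
The following Python sketch applies that arithmetic to the Full Rack maximum values from
Table 4; the 208 V figure is an assumed example voltage for illustration only.

# Implied power factor and example current draw for the Full Rack maximum
# values in Table 4. The 208 V supply voltage is an illustrative assumption.
full_rack_max_kva = 15.97
full_rack_max_kw = 15.17

power_factor = full_rack_max_kw / full_rack_max_kva
print(f"Implied power factor: {power_factor:.2f}")          # about 0.95

example_voltage = 208
amps_equivalent = full_rack_max_kva * 1000 / example_voltage
print(f"Single-phase-equivalent current at {example_voltage} V: "
      f"{amps_equivalent:.1f} A")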


Related Information
■ “Facility Power Requirements” on page 100
■ “Grounding Requirements” on page 100
■ “PDU Power Requirements” on page 101

Facility Power Requirements


Provide a separate circuit breaker for each power cord.

Use dedicated AC breaker panels for all power circuits that supply power to the PDU. Breaker
switches and breaker panels should not be shared with other high-powered equipment.

Balance the power load between AC supply branch circuits.

To protect the rack from electrical fluctuations and interruptions, you should have a dedicated
power distribution system, an uninterruptible power supply (UPS), power-conditioning
equipment, and lightning arresters.

Related Information
■ “Power Consumption” on page 99
■ “Grounding Requirements” on page 100
■ “PDU Power Requirements” on page 101

Grounding Requirements
Always connect the cords to grounded power outlets. Computer equipment requires electrical
circuits to be grounded to the Earth.

Because different grounding methods vary by locality, refer to documentation such as IEC
documents for the correct grounding method. Ensure that the facility administrator or qualified
electrical engineer verifies the grounding method for the building, and performs the grounding
work.

Related Information
■ “Facility Power Requirements” on page 100


■ “Power Consumption” on page 99


■ “PDU Power Requirements” on page 101

PDU Power Requirements


When ordering the system you selected one of these options:
■ Low or high voltage
■ Single or three phase power

Refer to the following tables for Oracle marketing and manufacturing part numbers.

Note - Each system has two power distribution units (PDUs). Both PDUs in a rack must be the
same type.

TABLE 6 PDU Choices


Voltage Phases Reference
Low 2 Table 7, “Low Voltage 2 Phase PDUs,” on page 101
Low 3 Table 8, “Low Voltage 3 Phase PDUs,” on page 102
High 1 Table 9, “High Voltage 1 Phase PDUs,” on page 103
High 3 Table 10, “High Voltage 3 Phase PDUs,” on page 103

TABLE 7 Low Voltage 2 Phase PDUs


Low Voltage Dual Phase (2W+GND) Comments
kVA Size 22 kVA
Marketing part number 7100873
Manufacturing part number 7018123
Phase 2 No Grounded Neutral provided in input
cords
Voltage input rating 3x [2Ph. (2W+ground)] 208 Vac, 50/60 Hz, Can be connected to Ph-Ph input sources in
max. 36.8A per phase range of 200V - 240V AC nominal
Amps per PDU 110.4A (3 inputs x 36.8A)
Outlets 42 C13

6 C19
Number of inputs 3 inputs x50A plug, 2Ph (2W+ground) 3 inputs/plugs per PDU, or 6 total inputs/
plugs for system
Input current 36.8A max. current (per input)
Data center receptacle Hubbell CS8264C


Low Voltage Dual Phase (2W+GND) Comments


Outlet groups per PDU 6
Usable PDU power cord length 2 m (6.6 feet) PDU power cords are 4 m long (13 feet), but
sections are used for internal routing in the
rack.

Summary:
Number of PDUs in Oracle SuperCluster T5-8: 2
Number of circuits per PDU: 3 @ 50A 2-phase 208V
Total number of circuits required: 6 @ 50A 2-phase 208V (3 per PDU)
Total number of data center receptacles required: 6 x Hubbell CS8264C
Total Amps per PDU: 110.4A 2-phase 208V
Total Amps for entire Oracle SuperCluster T5-8: 220.8A 2-phase 208V (2 PDUs @ 110.4A 2-phase 208V each)

TABLE 8 Low Voltage 3 Phase PDUs

Low Voltage Three Phase (3W+GND) Comments


kVA Size 24 kVA
Marketing part number 6444A
Manufacturing part number 597-1162-01
Phase 3 No Grounded Neutral provided in input
cords
Voltage input rating 2x [3Ph. (3W+ground)] 208 Vac, 50/60 Hz, Can be connected to Ph-Ph input sources in
max. 34.6A per phase range of 190V - 220V AC nominal
Amps per PDU 120A (3 x 20A)
Outlets 42 C13

6 C19
Number of inputs 2 inputs x 60A plug, 3 ph (3W+GND) 2 inputs/plugs per PDU, or 4 total inputs/
plugs for system
Input current 34.6 A max. per phase
Data center receptacle IEC 309-3P4W-IP67 (60A, 250V, AC, 3Ph)
Outlet groups per PDU 6
Usable PDU power cord length 2 m (6.6 feet) PDU power cords are 4 m long (13 feet), but
sections are used for internal routing in the
rack.

Summary:
Number of PDUs in Oracle SuperCluster T5-8: 2
Number of circuits per PDU: 2 @ 60A 3-phase 208V
Total number of circuits required: 4 @ 60A 3-phase 208V (2 per PDU)
Total number of data center receptacles required: 4 x IEC 309-3P4W-IP67
Total Amps per PDU: 120A 3-phase 208V


Low Voltage Three Phase (3W+GND) Comments


Total Amps for entire Oracle SuperCluster T5-8: 240A 3-phase 208V (2 PDUs @ 120A 3-phase 208V each)

TABLE 9 High Voltage 1 Phase PDUs


High Voltage Single Phase (2W+GND) Comments
kVA Size 22 kVA
Marketing part number 7100874
Manufacturing part number 7018124
Phase 1
Voltage input rating 3x [1Ph. (2W+ground)] 230 Vac, 50 Hz, Can be connected to Ph-N input sources in
max. 32A per phase range of 220V - 240V AC nominal
Amps per PDU 96A (3 inputs x 32A)
Outlets 42 C13

6 C19
Number of inputs 3 inputs x 32A plug, 1Ph (2W+ground) 3 inputs/plugs per PDU, or 6 total inputs/
plugs for system
Input current 32A max. current (per input)
Data center receptacle IEC 309-2P3W-IP44
Outlet groups per PDU 6
Usable PDU power cord length 2 m (6.6 feet) PDU power cords are 4 m long (13 feet), but
sections are used for internal routing in the
rack.

Summary:
Number of PDUs in Oracle SuperCluster T5-8: 2
Number of circuits per PDU: 3 @ 32A 1-phase 230V
Total number of circuits required: 6 @ 32A 1-phase 230V (3 per PDU)
Total number of data center receptacles required: 6 x IEC 309-2P3W-IP44
Total Amps per PDU: 96A 1-phase 230V
Total Amps for entire Oracle SuperCluster T5-8: 192A 1-phase 230V (2 PDUs @ 96A 1-phase 230V each)

TABLE 10 High Voltage 3 Phase PDUs


High Voltage Three Phase (4W+GND) Comments
kVA Size 24 kVA
Marketing part number 6445A
Manufacturing part number 597-1163-01
Phase 3
Voltage input rating 2x [3Ph. (4W+ground)] 230/400 Vac, 50/60 Can be connected to Ph-N input sources in
Hz, max. 18A per phase range of 220V - 240V AC nominal


High Voltage Three Phase (4W+GND) Comments


Amps per PDU 109A (6 inputs x 18.1A)
Outlets 42 C13

6 C19
Number of inputs 2 inputs x25A plug, 3 ph (4W+GND) 2 inputs/plugs per PDU, or 4 total inputs/
plugs for system
Input current 18 A max. per phase
Data center receptacle IEC 309-4P5W-IP44 (32A, 400V, AC, 3Ph)
Outlet groups per PDU 6
Usable PDU power cord length 2 m (6.6 feet) PDU power cords are 4 m long (13 feet), but
sections are used for internal routing in the
rack.

Summary:
Number of PDUs in Oracle SuperCluster T5-8: 2
Number of circuits per PDU: 2 @ 25A 3-phase 230/400V
Total number of circuits required: 4 @ 25A 3-phase 230/400V (2 per PDU)
Total number of data center receptacles required: 4 x IEC 309-4P5W-IP44
Total Amps per PDU: 109A 3-phase 230/400V
Total Amps for entire Oracle SuperCluster T5-8: 218A 3-phase 230/400V (2 PDUs @ 109A 3-phase 230/400V each)
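
The Total Amps lines in these summaries follow directly from the number of current-carrying
inputs and the maximum current per input, with two PDUs per rack. The following Python sketch
reproduces that arithmetic for three of the PDU options above (the data layout is illustrative;
small differences from the printed totals are rounding).

# Amps per PDU = current-carrying inputs x max amps per input; rack total is
# two PDUs. For the 3-phase PDU, 6 = 2 input cords x 3 phases at 18.1 A each.
PDU_OPTIONS = {
    "22 kVA low voltage, 2-phase":  {"inputs": 3, "amps_per_input": 36.8},
    "22 kVA high voltage, 1-phase": {"inputs": 3, "amps_per_input": 32.0},
    "24 kVA high voltage, 3-phase": {"inputs": 6, "amps_per_input": 18.1},
}
PDUS_PER_RACK = 2

for name, pdu in PDU_OPTIONS.items():
    per_pdu = pdu["inputs"] * pdu["amps_per_input"]
    print(f"{name}: {per_pdu:.1f} A per PDU, "
          f"{per_pdu * PDUS_PER_RACK:.1f} A per rack")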

Related Information
■ “Facility Power Requirements” on page 100
■ “PDU Thresholds” on page 104
■ “Power Consumption” on page 99

PDU Thresholds
This section provides the default PDU current thresholds for warnings and alarms for the
different SuperCluster T5-8 configurations.

You can also view the values by accessing the PDU metering unit as described in the Sun Rack
II Power Distribution Units User's Guide.

View the PDU thresholds and alarm values based on the configuration of SuperCluster T5-8:
■ “PDU Thresholds (Quarter-Populated Racks)” on page 105
■ “PDU Thresholds (Half-Populated Racks)” on page 105
■ “PDU Thresholds (Fully-Populated Racks)” on page 106


PDU Thresholds (Quarter-Populated Racks)

TABLE 11 22kVA Single-Phase PDUs


Low Voltage High Voltage
PDU A PDU B Warning (Amps) Alarm (Amps) Warning (Amps) Alarm (Amps)
M1-3 M1-1 10 13 9 12
M1-2 M1-2 12 15 10 14
M1-1 M1-3 20 25 18 23

TABLE 12 24kVA 3-Phase PDUs


Low Voltage High Voltage
PDU A PDU B Warning (Amps) Alarm (Amps) Warning (Amps) Alarm (Amps)
M2-3 M1-3 9 12 6 8
M2-2 M1-2 6 8 4 5
M2-1 M1-1 8 11 3 4
M1-3 M2-3 15 19 8 10
M1-2 M2-2 17 22 8 10
M1-1 M2-1 17 22 10 14

PDU Thresholds (Half-Populated Racks)

TABLE 13 22kVA Single-Phase PDUs


Low Voltage High Voltage
PDU A PDU B Warning (Amps) Alarm (Amps) Warning (Amps) Alarm (Amps)
M1-3 M1-1 18 23 17 21
M1-2 M1-2 20 25 18 23
M1-1 M1-3 20 25 18 23

TABLE 14 24kVA 3-Phase PDUs


Low Voltage High Voltage
PDU A PDU B Warning (Amps) Alarm (Amps) Warning (Amps) Alarm (Amps)
M2-3 M1-3 16 20 6 8
M2-2 M1-2 20 26 11 14
M2-1 M1-1 15 20 10 14
M1-3 M2-3 15 19 8 10

M1-2 M2-2 17 22 8 10
M1-1 M2-1 17 22 10 14

PDU Thresholds (Fully-Populated Racks)

TABLE 15 22kVA Single-Phase PDUs


Low Voltage High Voltage
PDU A PDU B Warning (Amps) Alarm (Amps) Warning (Amps) Alarm (Amps)
M1-3 M1-1 34 36 31 32
M1-2 M1-2 32 36 29 32
M1-1 M1-3 26 33 24 30

TABLE 16 24kVA 3-Phase PDUs


Low Voltage High Voltage
PDU A PDU B Warning (Amps) Alarm (Amps) Warning (Amps) Alarm (Amps)
M2-3 M1-3 30 35 15 18
M2-2 M1-2 31 35 17 18
M2-1 M1-1 29 35 16 18
M1-3 M2-3 25 32 14 17
M1-2 M2-2 23 29 14 17
M1-1 M2-1 23 29 10 14
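
When monitoring the PDU metering units, measured module currents can be compared against
the warning and alarm values in these tables. The following Python sketch shows only the
comparison logic; the thresholds are the low-voltage PDU A values for a fully populated rack
from Table 15, the readings are made-up sample inputs, and treating the thresholds as inclusive
limits is an assumption.

# Classify a measured PDU module current against warning/alarm thresholds.
# Example thresholds: low-voltage PDU A, fully populated rack (Table 15).
FULL_RACK_LV_PDU_A_THRESHOLDS = {
    "M1-3": {"warning": 34, "alarm": 36},
    "M1-2": {"warning": 32, "alarm": 36},
    "M1-1": {"warning": 26, "alarm": 33},
}

def classify(module, measured_amps, thresholds=FULL_RACK_LV_PDU_A_THRESHOLDS):
    limits = thresholds[module]
    if measured_amps >= limits["alarm"]:
        return "alarm"
    if measured_amps >= limits["warning"]:
        return "warning"
    return "ok"

# Made-up sample readings, for illustration only.
for module, amps in [("M1-3", 30.0), ("M1-2", 33.5), ("M1-1", 35.0)]:
    print(module, amps, classify(module, amps))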

Related Information
■ “PDU Power Requirements” on page 101
■ “Facility Power Requirements” on page 100
■ “Grounding Requirements” on page 100
■ “Power Consumption” on page 99

Preparing for Cooling


■ “Environmental Requirements” on page 107
■ “Heat Dissipation and Airflow Requirements” on page 107


■ “Perforated Floor Tiles” on page 110

Environmental Requirements

Condition Operating Requirement Non-operating Requirement Comments


Temperature 5 to 32°C (41 to 89.6°F) -40 to 70°C (-40 to 158°F). For optimal rack cooling, data
center temperatures from 21 to 23°C (70 to 74°F)
Relative humidity 10 to 90% relative humidity, Up to 93% relative humidity. For optimal data center
noncondensing rack cooling, 45 to 50%,
noncondensing
Altitude 3048 m (10000 ft.) maximum† 12000 m (40000 ft.). Ambient temperature is de-rated
by 1 degree Celsius per 300 m
above 900 m altitude above sea
level

† Except in China where regulations may limit installations to a maximum altitude of 2000 m (6560 ft.).
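
The altitude note in the table (ambient temperature de-rated by 1 degree Celsius per 300 m
above 900 m) can be expressed as a small function. The following Python sketch is illustrative
only; it applies that rule to the 32°C operating ceiling and treats the derating as continuous
rather than stepped, which is an assumption.

# Maximum operating ambient temperature after altitude derating: 32 C up to
# 900 m, then minus 1 C per 300 m above 900 m, with 3048 m as the limit.
MAX_TEMP_C = 32.0
MAX_ALTITUDE_M = 3048

def max_ambient_c(altitude_m):
    if altitude_m > MAX_ALTITUDE_M:
        raise ValueError("above the maximum supported operating altitude")
    derating = max(0.0, (altitude_m - 900) / 300)   # 1 C per 300 m above 900 m
    return MAX_TEMP_C - derating

for altitude in (500, 900, 1500, 3000):
    print(altitude, "m ->", round(max_ambient_c(altitude), 1), "C")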

Related Information
■ “Heat Dissipation and Airflow Requirements” on page 107
■ “Perforated Floor Tiles” on page 110

Heat Dissipation and Airflow Requirements


The following table lists the maximum rate of heat released from a system. In order to cool the
system properly, ensure that adequate airflow travels through the system.

TABLE 17 SuperCluster T5-8 Specifications


Condition | Full Rack | Half Rack
Maximum | 54,405 BTU/hour (57.4 MJ/hour) | 31,013 BTU/hour (32.69 MJ/hour)
Typical | 45,422 BTU/hour (47.8 MJ/hour) | 25,591 BTU/hour (26.97 MJ/hour)

TABLE 18 Exadata X5-2 Expansion Rack Specifications


Configuration                               BTU/Hour    kJ/Hour
EF Quarter rack, Maximum                    12,362      13,042
EF Quarter rack, Typical                    8,654       9,129
HC Quarter rack, Maximum                    11,516      12,149
HC Quarter rack, Typical                    8,061       8,505
Individual EF storage server, Maximum       2,037       2,149
Individual EF storage server, Typical       1,426       1,504
Individual HC storage server, Maximum       1,825       1,926
Individual HC storage server, Typical       1,278       1,348

The direction of airflow is front to back for the main and expansion racks.

Caution - Do not restrict the movement of cool air from the air conditioner to the cabinet, or the
movement of hot air out of the rear of the cabinet.

Observe these additional requirements:

■ Allow a minimum clearance of 914 mm (36 inches) at the front of the rack, and 914 mm (36
inches) at the rear of the rack for ventilation. There is no airflow requirement for the left and
right sides, or the top of the rack.
■ If the rack is not completely filled with components, cover the empty sections with filler
panels.


FIGURE 20 Direction of Airflow Is Front to Back

TABLE 19 Airflow (Listed Specifications are Approximate)


Rack                                                                   Approximate CFM
Full rack, Maximum                                                     2,523
Full rack, Typical                                                     2,103
Half rack, Maximum                                                     1,436
Half rack, Typical                                                     1,185
Exadata X5-2 Expansion Rack, Extreme Flash, quarter rack, Maximum      572
Exadata X5-2 Expansion Rack, Extreme Flash, quarter rack, Typical      401
Exadata X5-2 Expansion Rack, High Capacity, quarter rack, Maximum      533
Exadata X5-2 Expansion Rack, High Capacity, quarter rack, Typical      373
Individual Extreme Flash storage server, Maximum                       94
Individual Extreme Flash storage server, Typical                       66
Individual High Capacity storage server, Maximum                       85
Individual High Capacity storage server, Typical                       59
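As a rough cross-check only (not an Oracle sizing rule), these airflow figures are consistent with the heat figures in Table 17 if you assume roughly a 20°F (11°C) temperature rise across the rack and use the common approximation CFM ≈ BTU per hour ÷ (1.08 × temperature rise in °F). For example, 54,405 BTU/hour ÷ (1.08 × 20) ≈ 2,519 CFM, which is close to the 2,523 CFM maximum listed for a full rack.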

Related Information

■ “Environmental Requirements” on page 107


■ “Perforated Floor Tiles” on page 110

Perforated Floor Tiles


The default installation for the system assumes a slab floor, but if you install the server on a
raised floor, use perforated tiles in front of the rack to supply cold air to the system. Each tile
should support an airflow of approximately 400 CFM.

Perforated tiles can be arranged in any order in front of the rack, as long as cold air from the
tiles can flow into the rack.

The following is the recommended number of floor tiles:

Rack Number of Tiles


Full rack 6
Half rack 4
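As a quick sanity check, these counts follow from the maximum airflow figures in the previous section and the assumption of approximately 400 CFM per tile: 2,523 CFM ÷ 400 CFM per tile is roughly 6.3, matching the six tiles recommended for a full rack, and 1,436 CFM ÷ 400 CFM per tile is roughly 3.6, matching the four tiles recommended for a half rack.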

Related Information

■ “Heat Dissipation and Airflow Requirements” on page 107


■ “Environmental Requirements” on page 107
■ “Rack and Floor Cutout Dimensions” on page 97


Preparing the Unloading Route and Unpacking Area


Ensure the system can get from the loading ramp to the installation site.
■ “Shipping Package Dimensions” on page 111
■ “Loading Dock and Receiving Area Requirements” on page 111
■ “Access Route Guidelines” on page 112
■ “Unpacking Area” on page 113

Shipping Package Dimensions


The following package dimensions apply to Oracle SuperCluster T5-8.

Parameter Metric English


Height 2159 mm 85 in.
Width 1219 mm 48 in.
Depth 1575 mm 62 in.
Shipping weight (full rack) 1009.2 kg 2225 lbs
Shipping weight (half rack) 721 kg 1586 lbs

Related Information
■ “Loading Dock and Receiving Area Requirements” on page 111
■ “Access Route Guidelines” on page 112
■ “Unpacking Area” on page 113
■ “Physical Specifications” on page 96

Loading Dock and Receiving Area Requirements


Before the rack arrives, ensure that the receiving area is large enough for the shipping package.

If your loading dock meets the height and ramp requirements for a standard freight carrier truck,
you can use a pallet jack to unload the rack. If the loading dock does not meet the requirements,
provide a standard forklift or other means to unload the rack. Alternatively, you can request that
the rack be shipped in a truck with a lift gate.

When the rack arrives, leave it in its shipping packaging until it reaches the installation
site.


Note - Acclimatization time: If the shipping package is very cold or hot, allow the packaging
to stand unopened in the computer room or a similar environment to come to the same
temperature as the computer room. Acclimatization might require up to 24 hours.

Related Information
■ “Shipping Package Dimensions” on page 111
■ “Access Route Guidelines” on page 112
■ “Unpacking Area” on page 113
■ “Physical Specifications” on page 96

Access Route Guidelines


Leave the server in its shipping container until it reaches its final destination.

Plan the access route to enable smooth transport of the system. The entire access route to the
installation site should be free of raised patterns that can cause vibration. Avoid obstacles such
as doorways or elevator thresholds that can cause abrupt stops or shocks, and the route must
meet the following requirements.

Access Route Item                                            With Shipping Pallet    Without Shipping Pallet
Minimum door height                                          2184 mm (86 in.)        2000 mm (78.74 in.)
Minimum door width                                           1220 mm (48 in.)        600 mm (23.62 in.)
Minimum elevator depth                                       1575 mm (62 in.)        1200 mm (47.24 in.)
Maximum incline                                              6 degrees               6 degrees
Minimum elevator, pallet jack, and floor loading capacity    1134 kg (2500 lbs)      1134 kg (2500 lbs)
(for shipping weights, see “Shipping Package Dimensions” on page 111)

Related Information
■ “Shipping Package Dimensions” on page 111
■ “Loading Dock and Receiving Area Requirements” on page 111
■ “Unpacking Area” on page 113
■ “Physical Specifications” on page 96

Unpacking Area
Unpack the system in a conditioned space, and remove the packaging material there to reduce
the number of particles brought into the data center.

Allow enough space to unpack the system from its shipping cartons. Ensure that there is enough
clearance and clear pathways for moving the rack from the unpacking location to the
installation location.

Related Information
■ “Shipping Package Dimensions” on page 111
■ “Access Route Guidelines” on page 112
■ “Loading Dock and Receiving Area Requirements” on page 111
■ “Physical Specifications” on page 96

Preparing the Network


Prepare your network for Oracle SuperCluster T5-8.
■ “Network Connection Requirements” on page 113
■ “Network IP Address Requirements” on page 114
■ “Prepare DNS for the System” on page 114

Network Connection Requirements


The minimum number of physical network connections is listed below.

Network drops (minimum quantities):

■ 1 Gb Ethernet – Full Rack: 1 x 1 GbE; Half Rack: 1 x 1 GbE
■ 10 Gb Ethernet – Full Rack: 4 x 10 GbE if using one connection per SPARC T5-8†, 8 x 10 GbE if using two connections per SPARC T5-8, or 32 x 10 GbE if using four connections per SPARC T5-8; Half Rack: 2 x 10 GbE if using one connection per SPARC T5-8†, 4 x 10 GbE if using two connections per SPARC T5-8, or 16 x 10 GbE if using four connections per SPARC T5-8

† You will not have redundancy using IPMP if there is only one connection per SPARC T5-8.

Related Information
■ “Network IP Address Requirements” on page 114
■ “Prepare DNS for the System” on page 114
■ “Understanding the Network Requirements” on page 86

Network IP Address Requirements


For Oracle SuperCluster T5-8, the number of IP addresses that you will need for each
network varies depending on the type of configuration you choose for your system. For
more information on the number of IP addresses required for your configuration, refer to the
appropriate configuration worksheet.

Related Information
■ “Network Connection Requirements” on page 113
■ “Prepare DNS for the System” on page 114
■ “Understanding the Network Requirements” on page 86

Prepare DNS for the System


Before You Begin You must finish the following tasks before the system arrives. Installation and the initial
configuration cannot proceed until these tasks are finished.

Note - Grid Naming Service (GNS) is not configured on the rack until after initial
configuration.

1. Provide the necessary information in the following documents:


■ Oracle SuperCluster T5-8 Site Checklists
■ The configuration worksheet document for your type of system

2. Use the host names and IP addresses specified in the completed configuration
worksheets document to create and register Domain Name System (DNS)
addresses for the system.
All public addresses, single client access name (SCAN) addresses, and VIP addresses must also
be registered in DNS prior to installation.

Note - The configuration worksheets document defines the SCAN as a single name with three
IP addresses on the client access network.

3. Configure all addresses registered in DNS for both forward resolution and
reverse resolution.
Reverse resolution must be forward confirmed (forward-confirmed reverse DNS) such that both
the forward and reverse DNS entries match each other.
The SCAN name to the three SCAN addresses must be configured in DNS for round robin
resolution.
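The following is a minimal sketch of how you might spot-check the DNS entries from a host that uses the same name servers. The host name, SCAN name, and address shown are placeholders only; substitute the values from your completed configuration worksheets.

# nslookup sc01-db01.example.com
# nslookup 192.0.2.50
# nslookup sc01-scan.example.com

The first (forward) lookup should return the registered IP address, the second (reverse) lookup should return the matching host name, and repeating the SCAN lookup several times should return the three SCAN addresses in rotating, round-robin order.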

Related Information
■ The configuration worksheets document for additional information about the worksheets
■ Oracle Grid Infrastructure Installation Guide for Linux for additional information about
SCAN addresses
■ Your DNS vendor documentation for additional information about configuring round robin
name resolution
■ “Network IP Address Requirements” on page 114
■ “Network Connection Requirements” on page 113

Installing the System

This chapter explains how to install Oracle SuperCluster T5-8.

■ “Installation Overview” on page 117


■ “Oracle Safety Information” on page 118
■ “Unpacking the System” on page 119
■ “Moving the Rack Into Place” on page 121
■ “Powering on the System for the First Time” on page 126
■ “Using an Optional Fibre Channel PCIe Card” on page 139

Installation Overview
This is a summary of the installation process.

1. Prepare for installation.
   1. Review the safety precautions, guidelines, site checklists, and site requirements. See “Oracle Safety Information” on page 118.
   2. Ensure that the site is prepared for the installation. See “Preparing the Site” on page 95.
   3. Complete the worksheets. See the Oracle SuperCluster T5-8 Site Checklists and Oracle SuperCluster T5-8 Configuration Worksheets.
2. Begin the installation. See “Unpacking the System” on page 119.
   1. Unpack Oracle SuperCluster T5-8.
   2. Place Oracle SuperCluster T5-8 in its allocated space.
   3. Perform preliminary checks before connecting the power cords.
   4. Verify the hardware by visually inspecting the system.
3. Power on the PDUs in the rack. See “Powering on the System for the First Time” on page 126.
   1. Connect rack power.
   2. Switch on the six PDU circuit breakers located on the rear of the main PDU (PDU A).
   3. Wait 3 to 5 minutes for power-on self-test device checks to finish and for all Oracle ILOM service processors to boot.
   4. Verify that server standby power is on for each SPARC T5-8 server in Oracle SuperCluster T5-8.
   5. Verify that the main power is on for each database compute node.
4. Verify that the ZFS storage controllers start up automatically. If the storage controllers do not start up:
   1. Press the soft switches located on the front of the two ZFS storage controllers.
   2. Wait 3 to 5 minutes for the ZFS storage appliance to initiate NFS services, daemons, and basic services.
   3. Ping the IP address assigned to the ZFS storage controllers to verify whether the system is up and running.
5. Start the Exadata Storage Cells. Wait for Oracle ILOM to boot, then either press the Power On buttons, or turn on the Exadata Storage Servers through Oracle ILOM.
6. Start the SPARC T5-8 servers.
   1. Press the soft switches located on the front of the SPARC T5-8 servers.
   2. Verify that power is applied to the Ethernet switch.
   3. Verify that power is applied to the Sun Datacenter InfiniBand Switch 36 switches.
7. Configure Oracle SuperCluster T5-8.
8. Finish the installation. (Optional) Install a PCIe card to copy a database from another server. See “Using an Optional Fibre Channel PCIe Card” on page 139.

Oracle Safety Information


Become familiar with Oracle's safety information before installing any Oracle server or
equipment:

■ Read the safety notices printed on the product packaging.


■ Read the Important Safety Information for Sun Hardware Systems (821-1590) document
that is included with the rack.


■ Read all safety notices in the Sun Rack II Safety and Compliance Guide (820-4762). This
guide is available at http://docs.oracle.com/en.
■ Read all safety notices in the Sun Rack II Power Distribution Units Users Guide (820-
4760). This guide is also available at http://docs.oracle.com/en.
■ Read the safety labels that are on the equipment.

Unpacking the System


■ “Tools for Installation” on page 119
■ “Find the Unpacking Instructions” on page 119
■ “Unpack and Inspect the System” on page 121

Tools for Installation


The following additional tools are required for installation:

■ No. 2 Phillips screwdriver


■ Antistatic wrist strap
■ A tool to cut plastic strapping tape

Find the Unpacking Instructions


Locate the unpacking instructions.
The unpacking instructions are attached to the outside of the shipping package.
Figure 21, “Unpacking the Rack,” on page 120 shows the major components of the shipping
package.


FIGURE 21 Unpacking the Rack


Unpack and Inspect the System

Caution - Rocking or tilting the rack on the shipping pallet can cause it to fall over and cause
serious injury or death.

1. Use a No. 2 Phillips screwdriver to remove the shipping brackets from the lower
front edges of each rack.

2. Check the rack for damage.

3. Check the rack for loose or missing screws.

4. Check the rack for the ordered configuration.

■ Refer to the Customer Information Sheet (CIS) on the side of the packaging.
■ Two wrenches are included. The 12-mm wrench is for the cabinet leveling feet. The 17-mm
wrench is for the bolts that hold the rack to the shipping pallet.
■ Two keys are included for the cabinet doors.
■ A box of other hardware and spare parts is included in the shipping carton. Note that some
of these parts are not required for this installation.

5. Check that all cable connections are secure and firmly in place:

a. Check the power cords.


Ensure that the correct connectors have been supplied for the data center facilities power
source.

b. Check the network data cables.

6. Check the floor tile preparations, if applicable.


See “Perforated Floor Tiles” on page 110.

7. Check the data center airflow around the installation site.


See “Heat Dissipation and Airflow Requirements” on page 107 for more information.

Moving the Rack Into Place


■ “Move Oracle SuperCluster T5-8” on page 122
■ “Install a Ground Cable (Optional)” on page 124
■ “Adjust the Leveling Feet” on page 125

Move Oracle SuperCluster T5-8


1. Plan a route to the installation site.
See “Access Route Guidelines” on page 112.

2. Ensure that the system doors are closed and secured.

3. Ensure that all four leveling feet on the rack are raised and out of the way.

Caution - Use at least two people to move the rack: one person in front and one person in back
to help guide the rack. Move the rack slowly at 0.65 meters per second (approximately two feet
per second) or slower.

4. Push the system from behind to the installation site.

Note - The front casters do not swivel; you must steer the cabinet by turning the rear casters.

Caution - Never push on the side panels to move the rack. Pushing on the side panels can tip
the rack over.


Caution - Never tip or rock the rack. It can fall over.


Install a Ground Cable (Optional)


The system power distribution units (PDUs) achieve earth ground through their power cords.
For additional grounding, attach a chassis earth ground cable to the system. The additional
ground point enables electrical current leakage to dissipate more efficiently.

Caution - Do not install a ground cable until you confirm that there is proper PDU receptacle
grounding. The PDU power input lead cords and the ground cable must reference a common
earth ground.

Note - A grounding cable is not shipped with the system.

1. Ensure that the installation site has properly grounded the power source in the
data center.
An earth ground is required. See “Grounding Requirements” on page 100.

2. Ensure that all grounding points, such as raised floors and power receptacles,
reference the facilities ground.

3. Ensure that direct, metal-to-metal contact is made for this installation.


The ground cable attachment area might have a painted or coated surface that must be removed
to ensure solid contact.

4. Attach the ground cable to one of the attachment points located at the bottom
rear of the system frame.


The attachment point is an adjustable bolt that is inside the rear of the rack on the right side.

Adjust the Leveling Feet


There are leveling feet at the four corners of the rack.

1. Locate the 12-mm wrench inside the rack.

2. Use the wrench to lower the leveling feet to the floor.


When lowered correctly, the four leveling feet should support the full weight of the rack.

Powering on the System for the First Time


■ “Connect Power Cords to the Rack” on page 126
■ “Power On the System” on page 130
■ “Connect a Laptop to the System” on page 134
■ “Connect to a 10-GbE Client Access Network” on page 136

Connect Power Cords to the Rack


1. At the facilities breaker panel, verify that all breakers for the rack are in the Off
position.

Caution - If the circuit breakers are in the On position, destructive sparking might occur when
you attach the AC cables to the rack.

2. Open the rear cabinet door.

3. Verify that the switches on the PDUs are in the Off position.


Ensure that both PDUs are turned completely off.


PDU-A is at the left side of the cabinet. PDU-B is at the right. Each PDU has six switches
(circuit breakers), one for each socket group.

FIGURE 22 Power Switches on PDU

Figure Legend

1 Power switch lies flat in the On position.


2 Power switch is tilted in the Off position.


4. Ensure that the correct power connectors have been supplied with the power
cords.

5. Unfasten the power cord cable ties.


The ties are for shipping only and are no longer needed.


6. Route the power cords to the facility receptacles either above the rack or below
the flooring.


7. Secure the power cords in bundles.

8. Connect the PDU power cord connectors into the facility receptacles.

Power On the System


1. Ensure that each of the main power cords is connected.

2. Turn on the facilities circuit breakers.

3. Switch on power distribution unit B (PDU B) only.

Note - Do not turn on PDU A at this time.

PDU B is located on the right side of the rear of the rack. See below.
Press the ON (|) side of the toggle switches on PDU B.


Powering on PDU B will power on only half of the power supplies in the rack. The remaining
power supplies will be powered on in Step 4.

Note - For the location of each of the components, see “Identifying Hardware
Components” on page 19.

The LEDs for the components should be in the following states when all of the PDU B circuit
breakers have been turned on:

a. Check the SPARC T5-8 servers:


■ Power OK green LED – Blinking
■ Service Action Required amber LED – Off
■ PS1 and PS3 Power LEDs – Green
■ PS0 and PS2 Power LEDs – Amber

If the LEDs are not in these states, press the Power buttons at the fronts of the SPARC
T5-8 servers.

b. Check the ZFS storage controllers:


■ Power OK green LED (front panel) – Blinking while the operating system is booting
up.
The Power OK LED changes to Steady On after the operating system has booted up (this
could take up to 10 minutes).
■ Service Action Required amber LED (front panel) – Off
■ Left power supply (rear panel) – Green
■ Right power supply (rear panel) – Off

If the LEDs are not in these states, press the Power buttons at the fronts of the ZFS storage
controllers.

c. At the rear of the Sun Disk Shelf, press both Power buttons to the On
positions.
The LEDs should be:
■ Service Action Required amber LED – Off
■ Left power supply (rear panel) – Amber
■ Right power supply (rear panel) – Green

d. Check the fronts of the Sun Datacenter InfiniBand Switch 36 switches:


■ Left power supply LED (PS0 LED) – Red


■ Right power supply LED (PS1 LED) – Green

e. Check the front of the Cisco Catalyst 4948 Ethernet management switch:

■ PS1 LED – Red


■ PS2 LED – Green

4. Switch on power distribution unit A (PDU A).


PDU A is located on the left side of the rack.
Press the ON (|) side of the toggle switches on PDU A.

The LEDs for the components should be in the following states when all of the PDU A circuit
breakers have been turned on.

a. Check the SPARC T5-8 servers:

■ Power OK green LED – Blinking


■ Service Action Required amber LED – Off


■ PS1 and PS3 Power LEDs – Green


■ PS0 and PS2 Power LEDs – Green

b. Check the ZFS storage controllers:

■ Power OK green LED – On


■ Service Action Required amber LED – Off
■ Left power supply – Green
■ Right power supply – Green

Tip - You can ping the IP address assigned to the ZFS storage controller to verify
whether the system is up and running. For the default NET0 IP addresses, see “Default
IP Addresses” on page 90. Alternatively, you can try to launch the administration console
for the storage appliance. Before you can ping the IP address or launch the administration
console, you must connect a laptop to the rack, as described in “Connect a Laptop to the
System” on page 134.

c. Check the Sun Disk Shelf:

■ Service Action Required amber LED – Off


■ Left power supply – Green
■ Right power supply – Green

d. Check the Exadata Storage Server:

■ Power OK LED – Off while Oracle ILOM is booting (about 3 minutes), then blinking
■ Service Action Required amber LED – Off

e. Check the fronts of the Sun Datacenter InfiniBand Switch 36 switches:

■ Left power supply LED (PS0 LED) – Green


■ Right power supply LED (PS1 LED) – Green

f. Check the front of the Cisco Catalyst 4948 Ethernet management switch:

■ PS1 LED – Green


■ PS2 LED – Green


Connect a Laptop to the System

1. Ensure that the laptop has functional USB and network ports.

2. Ensure that you have a Category 5E patch cable of maximum length 25 feet and
a serial cable of maximum length 15 feet.

3. Open the rear cabinet door of the rack.

4. Connect the network port of your laptop into an unused input port in the Cisco
Catalyst 4948 Ethernet switch.
This switch is inside a vented filler panel in Unit 19 of your system rack. Note that you should
not connect to any of the management or console ports on the switch. The ports are labeled on
the switch.

Note - If you require serial connectivity, you can use a USB-to-Serial adapter to connect from
the USB port of your laptop to the Cisco switch. A USB-to-Serial adapter is installed in the rack
on all of the gateway switches (Sun Network QDR InfiniBand Gateway Switches). An extra
adapter is included in the shipping kit in Oracle SuperCluster T5-8 configurations.

5. If you have not booted your laptop, start the operating system now.

■ If you are using the Windows operating system, do the following:

a. Go to Control Panel > Network Connections. Select your wired network


adapter in the list of network connections, then right-click and select
Properties.
The network properties screen is displayed.

b. Click the General tab, and select Internet Protocol (TCP/IP). Click
Properties.
The Internet Protocol (TCP/IP) Properties screen is displayed.

c. Select the Use the following IP address: option, and type a static IP
address for your laptop.


Note - This static IP should be on the same subnet and address range as the network on which
the Cisco Ethernet switch resides. You can use the default NET0 IP addresses of SPARC T5-8
servers assigned at the time of manufacturing or the custom IP address that you reconfigured
using the Oracle SuperCluster T5-8 Configuration Utility tool. For the list of default NET0 IP
addresses, see “Default IP Addresses” on page 90.

d. Although a default gateway is not necessary, enter the same IP address


in the Default Gateway field.

e. Click OK to exit the network connections screen.

■ If you are using a Linux operating system, do the following:

a. Log in as a root user.

b. At the command prompt, type the following command to display the


network devices, such as ETH0, attached to the system:
# ifconfig -a
The list of network devices or adapters attached to the system is displayed.

c. To set up the desired network interface, run the ifconfig command at


the command prompt, as in the following example:
# ifconfig eth0 192.0.2.10 netmask 255.255.255.0 up
In this example, the ifconfig command assigns the IPv4 address 192.0.2.10, with a
network mask of 255.255.255.0, to the eth0 interface.

6. For laptop connectivity, open any telnet or ssh client program, such as PuTTY.
Connect to one of the service processor IP addresses or to the IP address of a SPARC T5-8
server that is up and running.

Note - After you cable your laptop to the Cisco Catalyst 4948 Ethernet switch, you can use the
NET0 IP addresses of system components to communicate with them. For a list of default IP
addresses assigned at the time of manufacturing, see “Default IP Addresses” on page 90.
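For example, assuming the laptop is configured as shown above, and with 192.0.2.20 standing in for whichever NET0 or service processor address you are targeting (the address here is a placeholder, not a documented default), you could verify connectivity before opening a session:

# ping 192.0.2.20
# ssh root@192.0.2.20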


Note - If you or the Oracle installer have not run the Oracle SuperCluster T5-8 Configuration
Utility set of tools and scripts to reconfigure IP addresses for the system, you can use a set of
default IP addresses. If you or the Oracle installer have already run the Oracle SuperCluster
T5-8 Configuration Utility set of tools and scripts, you can use the network IP address that you
provided in the Oracle SuperCluster T5-8 Configuration Worksheets document.

Connect to a 10-GbE Client Access Network


Before You Begin The IP addresses and host names for the client access network should be registered in the
Domain Name System (DNS) prior to initial configuration. In addition, all public addresses,
single client access name (SCAN) addresses, and VIP addresses should be registered in DNS
prior to installation.

A 10-GbE client access network infrastructure is a required part of the installation process for
the Oracle SuperCluster T5-8.

The following components have been included with the system:


■ Four or eight dual-ported Sun Dual 10-GbE SFP+ PCIe NICs in each SPARC
T5-8 server, preinstalled in specific PCIe slots (see “Card Locations (SPARC T5-8
Servers)” on page 29 and “10-GbE Client Access Network Physical Connections (SPARC
T5-8 Servers)” on page 35 for more information)
■ Transceivers preinstalled in the 10-GbE NICs
■ Four 10-meter SFP-QSFP optical splitter cables for a Half Rack or eight cables for a Full
Rack

If you wish to use the supplied SFP-QSFP cables for the connection to your client access
network, you must provide the following 10-GbE client access network infrastructure
components:
■ A 10-GbE switch with QSFP connections, such as the Sun Network 10GbE Switch 72p
■ Transceivers for connections to your 10-GbE switch (four transceivers for a Half Rack or
eight transceivers for a Full Rack)

If you do not want to use the supplied SFP-QSFP cables for the connection to your client
access network, you must provide the following 10-GbE client access network infrastructure
components:
■ A 10-GbE switch
■ Suitable optical cables with SFP+ connections for the SPARC T5-8 server side
■ Suitable transceivers to connect all cables to your 10-GbE switch


If you do not have a 10-GbE client access network infrastructure set up at your site, you must
have a 10-GbE network switch available at the time of installation that the system can be
connected to, even if the network speed drops from 10 Gb to 1 Gb on the other side of the 10-
GbE network switch. Oracle SuperCluster T5-8 cannot be installed at the customer site without
the 10-GbE client access network infrastructure in place.

1. Locate the Sun Dual 10 GbE SFP+ PCIe NIC(s) that you will use to connect to the
client access network.
The NIC connections will vary, depending on the type and number of domains that you have set
up for your system. See “10-GbE Client Access Network Physical Connections (SPARC T5-8
Servers)” on page 35 and “Understanding the Software Configurations” on page 46 for more
information.

2. Locate the SFP cable that you will use for the connection to the 10-GbE client
access network.
You can use the 10-meter SFP-QSFP optical splitter cables provided with the system or your
own SFP cables.

3. Connect the SFP side of the cable to the port on the 10-GbE NIC.

4. Connect the other end of the cable to the 10-GbE switch that you provided.
The following figure shows an example connection layout for the 10-GbE client access network
for one of the two SPARC T5-8 servers in a Half Rack, where:

■ The two QSFP ends of the two SFP-QSFP cables connect to the QSFP ports on the 10-GbE
switch (in this example, the Sun Network 10GbE Switch 72p)
■ The SFP+ ends of the two SFP-QSFP cables connect to the SFP+ ports on the 10-GbE NICs
installed in the SPARC T5-8 server (some configurations would have fewer connections to
the SFP+ ports on the 10-GbE NICs, depending on the configuration)


FIGURE 23 Example Connection to the 10 GbE Client Access Network

Figure Legend

1 10 GbE switch with QSFP connections (Sun Network 10GbE Switch 72p shown)
2 QSFP connector ends of SFP-QSFP cables, connecting to QSFP ports on 10 GbE switch


3 SFP+ connector ends of SFP-QSFP cables, connecting to SFP+ ports on 10 GbE NICs in SPARC T5-8 server
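After cabling, one optional way to confirm that the links are up is to check the physical data links from the Oracle Solaris domain that owns the 10-GbE NICs. This is only a sketch; link names such as ixgbe0 depend on your configuration.

# dladm show-phys

Ports that are cabled and connected to an active switch port should report a STATE of up and a SPEED of 10000.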

Using an Optional Fibre Channel PCIe Card


Optional Fibre Channel PCIe cards are available to facilitate migration of data from legacy
storage subsystems to the Exadata Storage Servers integrated with Oracle SuperCluster T5-8 for
Database Domains, or to access SAN-based storage for the Application Domains. The optional
Fibre Channel PCIe cards are not included in standard Oracle SuperCluster T5-8 configurations
and must be purchased separately.

The following options are supported:

■ StorageTek 8 Gb FC PCI-Express HBA, Qlogic


■ StorageTek 8 Gb FC PCI-Express HBA, Emulex

For the Half Rack system, the optional Fibre Channel PCIe cards can be installed in any or all
of these remaining PCIe card slots that are not populated by 10 GbE NICs or InfiniBand HCAs:

■ Slot 2
■ Slot 4
■ Slot 5
■ Slot 7
■ Slot 10
■ Slot 12
■ Slot 13
■ Slot 15

See “Card Locations (SPARC T5-8 Servers)” on page 29 for more information on the slot
locations on the SPARC T5-8 servers.

For the Full Rack system, all 16 PCIe slots are occupied with either 10 GbE NICs or InfiniBand
HCAs, so installing an optional Fibre Channel PCIe card requires the removal of certain 10
GbE NICs in order to free up a PCIe slot. However, because the PCIe slots are assigned to
various domains, and the PCIe slots that are assigned to domains vary depending on your
domain configuration, it is recommended that you contact Oracle to install Fibre Channel PCIe
cards in the Full Rack version of Oracle SuperCluster T5-8 to ensure that the domains are
configured correctly after the 10 GbE NIC has been replaced with a Fibre Channel card.


Note - If you have the Full Rack, you cannot install a Fibre Channel PCIe card in a slot that
is associated with a Small Domain. In Full Rack configurations, Fibre Channel PCIe cards can
only be added to domains with more than one 10-GbE NIC. One 10-GbE NIC must be left for
connectivity to the client access network, but in domains with more than one 10-GbE NIC, the
other 10-GbE NICs can be replaced with Fibre Channel HBAs. See “Understanding Full Rack
Configurations” on page 69 and “Understanding Small Domains (Full Rack)” on page 80 for
more information on the configurations with Small Domains.

Once you have installed the optional Fibre Channel PCIe card, it will be associated with a
specific domain, depending on the slot it was installed in and your domain configuration. See
“Understanding the Software Configurations” on page 46 for more information.

Note the following restrictions when using the optional Fibre Channel PCIe cards:
■ When installed in slots associated with Application Domains running either Oracle Solaris
10 or Oracle Solaris 11, the Fibre Channel PCIe cards can be used for any purpose,
including database file storage for supported databases other than Oracle Database 11gR2.
■ When installed in slots associated with Database Domains, the Fibre Channel PCIe cards
can be used for data migration only, and not for storage of Oracle Database 11gR2 data.
■ Oracle discourages the use of additional network interfaces based on the GbE ports on the
Fibre Channel PCIe cards. Oracle will not support questions or issues with networks based
on these ports.

For instructions for installing a PCIe card, refer to:


■ The documentation provided with your PCIe card
■ The SPARC T5-8 server service manual at:
http://docs.oracle.com/cd/E35078_01/index.html

Maintaining the System

These topics describe maintenance information for Oracle SuperCluster T5-8.


■ “Cautions and Warnings” on page 141
■ “Powering Off the System” on page 142
■ “Power On the System” on page 145
■ “Identify the Version of SuperCluster Software” on page 145
■ “SuperCluster Tools” on page 146
■ “Managing Oracle Solaris 11 Boot Environments” on page 147
■ “Using Dynamic Intimate Shared Memory ” on page 151
■ “Component-Specific Service Procedures” on page 153
■ “Maintaining Exadata Storage Servers” on page 153
■ “Tuning the System (ssctuner)” on page 158
■ “Configuring CPU and Memory Resources (osc-setcoremem)” on page 170

Cautions and Warnings

The following cautions and warnings apply to Oracle SuperCluster T5-8 systems.

Caution - Do not touch the parts of this product that use high-voltage power. Touching them
might result in serious injury.

Caution - Keep the front and rear cabinet doors closed. Failure to do so might cause system
failure or result in damage to hardware components.

Caution - Keep the top, front, and back of the cabinets clear to allow proper airflow and prevent
overheating of components.

Use only the supplied hardware.


Powering Off the System

■ “Graceful Power Off Sequence” on page 142


■ “Emergency Power Off” on page 145

Graceful Power Off Sequence

Use this procedure to shut down the system under normal circumstances.

1. Shut down Oracle Solaris Cluster:

# /usr/cluster/bin/cluster shutdown -g 0 -y
2. If Ops Center is running, shut down the enterprise controller:

# /opt/SUNWxvmoc/bin/ecadm stop

For HA environments use this command:

# /opt/SUNWxvmoc/bin/ecadm ha-stop-no-relocate
3. Shut down the database using one of the methods described at:
http://docs.oracle.com/cd/B28359_01/server.111/b28310/start003.htm
4. “Shut Down the Exadata Storage Servers” on page 143.
5. “Power Off the Exadata Storage Servers” on page 143
6. “Gracefully Shut Down LDoms” on page 144
7. Gracefully shut down each SPARC T5-8 server:

# init 0
8. Gracefully shut down each ZFS storage controller.
Log in to the browser interface and click the power icon on the left side of the top pane.
9. Power off the switches, and the entire rack, by turning off the circuit breakers.

■ To power the system back on, see “Power On the System” on page 145.


Shut Down the Exadata Storage Servers

Perform this procedure for each Exadata Storage Server before you power them off. For more
information on this task, see the Exadata documentation at:

http://wd0338.oracle.com/archive/cd_ns/E13877_01/doc/doc.112/e13874/maintenance.htm#CEGBHCJG

1. Run the following command to check if there are any offline disks:

CellCLI > LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'

If any grid disks are returned, then it is not safe to take Exadata Storage Server offline, because
proper Oracle ASM disk group redundancy will not be maintained. Taking Exadata Storage
Server offline when one or more grid disks are in this state will cause Oracle ASM to dismount
the affected disk group, causing the databases to shut down abruptly.

2. Inactivate all the grid disks when Exadata Storage Server is safe to take offline.

CellCLI> ALTER GRIDDISK ALL INACTIVE

This command completes after all disks are inactive and offline.

3. Verify that all grid disks are INACTIVE.

CellCLI> LIST GRIDDISK WHERE STATUS != 'inactive'

If all grid disks are INACTIVE, then Exadata Storage Server can be shut down without
affecting database availability.

4. Shut down the cell.


See “Power Off the Exadata Storage Servers” on page 143.

Power Off the Exadata Storage Servers

Perform the following procedure for each Exadata Storage Server.

Note the following when powering off Exadata Storage Servers:


■ All database and Oracle Clusterware processes should be shut down prior to shutting down
more than one Exadata Storage Server.


■ Powering off one Exadata Storage Server will not affect running database processes or
Oracle ASM.
■ Powering off or restarting Exadata Storage Servers can impact database availability.

The following command shuts down Exadata Storage Server immediately:

# shutdown -h -y now

Gracefully Shut Down LDoms

The SPARC T5-8 server configurations vary based on the configuration chosen during
installation. If the server runs a single LDom, shut down the machine as you would any other
server, by cleanly shutting down the OS. If the server runs two LDoms, shut down the guest
domain first and then the primary (control) domain. If the server runs three or more domains,
identify the domains that run on virtualized hardware and shut them down first, then shut down
the guest domain, and finally shut down the primary (control) domain.
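One way to identify the domains and their current states before shutting them down is to list them from the primary (control) domain. This is a sketch only; domain names vary by configuration.

# ldm list

The STATE column shows which domains are active, and the FLAGS column identifies the control domain.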

1. Shut down, stop, and unbind each of the non-I/O domains.

# ldm stop domainname


LDom domainname stopped
# ldm unbind-domain domainname

2. Shut down, stop, and unbind any active I/O domains.

# ldm stop activedomainname


LDom activedomainname stopped
# ldm unbind-domain activedomainname

3. Halt the primary domain.

# shutdown -i5 -g0 -y

Because no other domains are bound, the firmware automatically powers off the system.


Emergency Power Off

If there is an emergency, such as earthquake or flood, an abnormal smell or smoke coming from
the machine, or a threat to human safety, then power to Oracle SuperCluster T5-8 should be
halted immediately. In that case, use one of the following ways to power off the system.

Power off the system one of the following ways:

■ Turn off power at the circuit breaker, or pull the emergency power-off switch in the
computer room.
■ Turn off the site EPO switch to remove power from Oracle SuperCluster T5-8.
■ Turn off the two PDUs in the rack.

After the emergency, contact Oracle Support Services to restore power to the machine.

Related Information
■ “Graceful Power Off Sequence” on page 142

Power On the System

Power on the system in the reverse order of shutdown.

1. Turn on both circuit breakers that provide power to the rack.


The switches will power on, and the Exadata Storage Servers, SPARC T5-8 servers and the ZFS
Storage Controllers will return to standby mode.

2. Boot each ZFS Storage Controller.

3. Boot each SPARC T5-8 server.

4. Boot each Exadata Storage Server.
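If a component does not return from standby when its power button is pressed, you can typically start the host from the component's Oracle ILOM service processor over the management network. This is a sketch only; the address shown is a placeholder, and on some Oracle ILOM versions the power target is /System rather than /SYS.

# ssh root@192.0.2.30
-> start /SYS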

Identify the Version of SuperCluster Software


Perform this procedure to determine the version of SuperCluster software.

1. On the management network, log in to one of the SPARC servers.


2. Type this command:

# svcprop -p configuration/build svc:/system/oes/id:default

3. Determine the version of software based on the output.


■ ssc-1.x.x
■ ssc-2.x.x (or later)
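For example, the command might return a build string similar to the following (the value shown is hypothetical); the leading ssc-1 or ssc-2 portion identifies the major version.

# svcprop -p configuration/build svc:/system/oes/id:default
ssc-2.1.0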

SuperCluster Tools
Use the table that corresponds to your version of SuperCluster software to see which tools are
available.

To identify your version, see “Identify the Version of SuperCluster Software” on page 145.

Note - The tables do not provide a complete list of tools for SuperCluster. Instead, they list the
tools that are covered in subsequent sections and indicate what is available based on the
specific version of SuperCluster software.

TABLE 20 SuperCluster v1.x Tools


■ ssctuner – A set of scripts and configuration files that run on SuperCluster Oracle Solaris 10 and Oracle Solaris 11 global zones to monitor and tune various parameters. See “Tuning the System (ssctuner)” on page 158.
■ osc-setcoremem – Enables you to change how CPU and memory resources are allocated across domains. See “Configuring CPU and Memory Resources (osc-setcoremem)” on page 170.

TABLE 21 SuperCluster v2.x Tools


■ Oracle I/O Domain Creation tool – Enables you to create I/O domains on demand, assigning CPU, memory, and I/O resources of your choice. Refer to the Oracle I/O Domain Administration Guide.
■ osc-setcoremem – Enables you to change how CPU and memory resources are allocated across domains. The tool automatically assigns the appropriate amount of memory to each domain based on how you allocated CPU resources, ensuring optimal performance by minimizing NUMA effects. See “Configuring CPU and Memory Resources (osc-setcoremem)” on page 170.
■ ssctuner – A set of scripts and configuration files that run on SuperCluster Oracle Solaris 10 and Oracle Solaris 11 global zones to monitor and tune various parameters. See “Tuning the System (ssctuner)” on page 158.

Managing Oracle Solaris 11 Boot Environments

When the Oracle Solaris OS is first installed on a system, a boot environment is created. You
can use the beadm(1M) utility to create and administer additional boot environments on your
system.

After your system is installed, create a backup of the original boot environment. If needed, you
can then boot to the backup of the original boot environment. These topics describe how to
create and manage boot environments on the system.

For more information about Oracle Solaris 11 boot environments, refer to:

http://docs.oracle.com/cd/E23824_01/html/E21801/toc.html

■ “Advantages to Maintaining Multiple Boot Environments” on page 147


■ “Create a Boot Environment” on page 148
■ “Mount to a Different Boot Environment” on page 150
■ “Reboot to the Original Boot Environment” on page 151
■ “Remove Unwanted Boot Environments” on page 151

Advantages to Maintaining Multiple Boot


Environments
Multiple boot environments reduce risk when updating or changing software because system
administrators can create backup boot environments before making any updates to the system.
If needed, they have the option of booting a backup boot environment.

The following examples show how having more than one Oracle Solaris boot environment and
managing them with the beadm utility can be useful.

■ You can maintain more than one boot environment on your system and perform various
updates on each of them as needed. For example, you can clone a boot environment by
using the beadm create command. The clone you create is a bootable copy of the original.
Then, you can install, test, and update different software packages on the original boot
environment and on its clone.
Although only one boot environment can be active at a time, you can mount an inactive
boot environment by using the beadm mount command. Then, you could use the pkg
command with the alternate root (-R) option to install or update specific packages on that
environment.
■ If you are modifying a boot environment, you can take a snapshot of that environment at
any stage during modifications by using the beadm create command. For example, if you
are doing monthly upgrades to your boot environment, you can capture snapshots for each
monthly upgrade.
Use the command as follows:

# beadm create BeName@snapshotdescription

The snapshot name must use the format, BeName@snapshotdescription, where BeName is
the name of an existing boot environment that you want to make a snapshot from. Enter a
custom snapshotdescription to identify the date or purpose of the snapshot.
You can use the beadm list -s command to view the available snapshots for a boot
environment.
Although a snapshot is not bootable, you can create a boot environment based on that
snapshot by using the -e option in the beadm create command. Then you can use the
beadm activate command to specify that this boot environment will become the default
boot environment on the next reboot.
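A minimal sketch of this snapshot workflow, assuming an existing boot environment named solaris (the snapshot and boot environment names are examples only):

# beadm create solaris@before-monthly-upgrade
# beadm list -s solaris
# beadm create -e solaris@before-monthly-upgrade solaris_rollback
# beadm activate solaris_rollback

The first command takes the snapshot, the second lists the snapshots for that boot environment, the third creates a bootable boot environment from the snapshot, and the last makes it the default at the next reboot.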

For more information about the advantages of multiple Oracle Solaris 11 boot environments,
see:

http://docs.oracle.com/cd/E23824_01/html/E21801/snap3.html#scrolltoc

Create a Boot Environment


If you want to create a backup of an existing boot environment, for example, prior to modifying
the original boot environment, you can use the beadm command to create and mount a new boot
environment that is a clone of your active boot environment. This clone is listed as an alternate
boot environment in the boot menu for SPARC systems.

1. Log in to the target system.


localsys% ssh systemname -l root


Password:
Last login: Wed Nov 13 20:27:29 2011 from dhcp-vpn-r
Oracle Corporation SunOS 5.11 solaris April 2011
root@sup46:~#
 

2. Manage ZFS boot environments with beadm.

root@sup46:~# beadm list


 
BE       Active    Mountpoint    Space    Policy     Created
----------------------------------------------------------
solaris    NR          /         2.17G    static   2011-07-13 12:01
 

Note - In the Active column, the first letter indicates the boot environment's current status and
the second letter indicates the status at the next reboot. In the example above, N indicates the
current (Now) boot environment, while R indicates which boot environment will be active at the
next reboot.

3. Create a new ZFS boot environment based on the current environment.

root@sup46:~# beadm create solaris_backup


root@sup46:~# beadm list
 
      BE       Active  Mountpoint Space  Policy     Created
-----------------------------------------------------------------
solaris         NR        /       2.17G  static   2011-07-13 12:01
solaris_backup  -         -       35.0K  static   2011-07-17 21:01

4. Change to the next boot environment.

root@sup46:~# beadm activate solaris_backup


root@sup46:~# beadm list
 
      BE       Active  Mountpoint Space  Policy     Created
-----------------------------------------------------------------
solaris_backup  R         -       2.17G  static   2011-07-17 21:01
solaris         N        /       1.86G  static   2011-07-13 12:01

5. Reboot to the new boot environment.


root@sup46:~# reboot
Connection to systemname closed by remote host.
Connection to systemname closed.
localsys% ssh systemname -l root
Password:
Last login: Thu Jul 14 14:37:34 2011 from dhcp-vpn-
Oracle Corporation SunOS 5.11 solaris April 2011
 
root@sup46:~# beadm list
 
      BE       Active  Mountpoint Space  Policy     Created
-----------------------------------------------------------------
solaris_backup  NR        -       2.19G  static   2011-07-17 21:01
solaris          -        /       4.12G  static   2011-07-13 12:01

Mount to a Different Boot Environment


Use the following commands to mount a different boot environment and
unmount the other boot environment.

root@sup46:~# beadm mount s_backup /mnt


root@sup46:~# df -k /mnt
Filesystem           1024-blocks Used     Available  Capacity  Mounted on
rpool1/ROOT/s_backup 286949376   2195449  232785749  1%        /mnt
 
root@sup46:~# df -k /
Filesystem           1024-blocks Used     Available  Capacity  Mounted on
rpool1/ROOT/s_backup 286949376   2214203  232785749  1%        /
 
root@sup46:~# ls /mnt
bin etc lib opt rpool1 system wwss
boot export media pkg sbin tmp
cdrom home micro platform scde usr
dev import mnt proc share var
devices java net re shared workspace
doe kernel nfs4 root src ws
root@sup46:~#
 
root@sup46:~# beadm umount solaris
root@sup46:~#
 


Reboot to the Original Boot Environment


Use the following commands to reboot to the original boot environment.

root@sup46:~# beadm activate solaris


root@sup46:~# reboot
Connection to systemname closed by remote host.
Connection to systemname closed.
localsys%
ssh systemname -l root
Password: Last login: Thu Jul 14 14:37:34 2011 from dhcp-vpn-
Oracle Corporation SunOS 5.11 solaris April 2011
root@sup46:~#

Remove Unwanted Boot Environments


Use the following commands to remove boot environments.

root@sup46:~# beadm list   


 
    BE       Active  Mountpoint Space  Policy     Created
-----------------------------------------------------------------
solaris_backup  -         -      13.25G  static   2011-07-17 21:19
solaris        NR         -       4.12G  static   2011-07-13 12:01
 
root@sup46:~# beadm destroy solaris_backup
Are you sure you want to destroy solaris_backup? This action cannot be undone(y/[n]): y
root@sup46:~# beadm list 
 
   BE       Active  Mountpoint Space  Policy     Created
-----------------------------------------------------------------
solaris       NR        /      4.12G  static   2011-07-13 12:01
 
root@sup46:~#

Using Dynamic Intimate Shared Memory

■ “ DISM Restrictions” on page 152


■ “Disable DISM” on page 152


DISM Restrictions

Dynamic Intimate Shared Memory (DISM) is not supported for use on Oracle SuperCluster
T5-8 Oracle Solaris environments in instances other than the ASM instance. The use of DISM
on the system outside of the ASM instance can lead to several different issues, ranging from
excessive swap usage (even when memory is available) to kernel panics to performance
problems. The ASM instance typically has such a small memory footprint that it should not
cause an issue.

This behavior typically occurs on instances created after installation, because Solaris 11 uses
Automatic Memory Management by default. To prevent this DISM issue when creating Oracle
Solaris 11 instances, disable DISM. For more information see: “Disable DISM” on page 152.

To decide if DISM is appropriate for your environment, and for more information about using
DISM with an Oracle database, refer to the Oracle white paper “Dynamic SGA Tuning of Oracle
Database on Oracle Solaris with DISM”:

http://www.oracle.com/technetwork/articles/systems-hardware-architecture/using-dynamic-intimate-memory-sparc-168402.pdf

Disable DISM

Dynamic Intimate Shared Memory (DISM) is not supported for use on Oracle SuperCluster
T5-8 Oracle Solaris environments in instances other than the ASM instance. For more
information, see “ DISM Restrictions” on page 152.

Note - Do not disable the use of automatic shared memory management (ASMM) within
the database, which is a very useful and desirable feature to reduce DBA management of the
database.

Disable the use of DISM by the database on Oracle Solaris in one of two
ways: either unset the SGA_MAX_SIZE / MEMORY_MAX_TARGET /
MEMORY_TARGET parameters, or ensure that SGA_MAX_SIZE is set to the same
value as the SGA_TARGET parameter or equal to the sum of all SGA components in
the instance.

■ For example, to set a 64 GB SGA:


 
alter system set SGA_TARGET=64G scope=spfile;
alter system set SGA_MAX_SIZE=64G scope=spfile;
alter system set MEMORY_MAX_TARGET=0 scope=spfile;
alter system set MEMORY_TARGET=0 scope=spfile;
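After restarting the instance, you can confirm the settings from SQL*Plus; this is a sketch only, mirroring the example values above:

SQL> show parameter sga_max_size
SQL> show parameter sga_target
SQL> show parameter memory_max_target

SGA_MAX_SIZE should equal SGA_TARGET, and MEMORY_MAX_TARGET and MEMORY_TARGET should be 0 if DISM is to remain disabled.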

Component-Specific Service Procedures


If you have a service contract for your Oracle SuperCluster T5-8, contact your service provider
for maintenance.

If you do not have a service contract, refer to each component's documentation for general
maintenance information.

Component Information                                          Maintenance Information
“SPARC T5-8 Servers” on page 24                                http://download.oracle.com/docs/cd/E23411_01/index.html
“ZFS Storage Appliance” on page 25                             http://download.oracle.com/docs/cd/E22471_01/html/820-4167/toc.html
“Sun Datacenter InfiniBand Switch 36 Switches” on page 25      http://download.oracle.com/docs/cd/E19197-01/index.html
“Cisco Catalyst 4948 Ethernet Management Switch” on page 26    http://www.cisco.com/en/US/docs/switches/lan/catalyst4900/4948E/installation/guide/4948E_ins.html
“Exadata Storage Servers” on page 24                           “Maintaining Exadata Storage Servers” on page 153
Oracle VM Server for SPARC                                     http://download.oracle.com/docs/cd/E23120_01/index.html
Oracle Solaris Cluster                                         http://download.oracle.com/docs/cd/E18728_01/index.html
Oracle Solaris 11                                              http://download.oracle.com/docs/cd/E19963-01/index.html
Oracle Solaris 10                                              http://download.oracle.com/docs/cd/E23823_01/index.html

Maintaining Exadata Storage Servers


The Exadata Storage Servers are highly optimized for use with the Oracle Database and employ
a massively parallel architecture and Exadata Smart Flash Cache to dramatically accelerate
Oracle Database processing and speed I/O operations. For more information, refer to: “Exadata
Storage Servers” on page 24.

For general maintenance information, refer to the Exadata Storage Server documentation,
located in the following directory on the Exadata Storage Servers installed in Oracle
SuperCluster T5-8:

/opt/oracle/cell/doc


These topics describe maintenance relevant to Exadata Storage Servers in Oracle SuperCluster
T5-8.
■ “Monitor Write-through Caching Mode” on page 154
■ “Shut Down or Reboot an Exadata Storage Server” on page 156
■ “Drop an Exadata Storage Server” on page 158

Related Information
■ See the Oracle Exadata Storage Server Software User's Guide for additional information
about the Oracle ASM disk repair timer.

Monitor Write-through Caching Mode

The disk controller on each Exadata Storage Server periodically performs a discharge and
charge of the controller battery. During the operation, the write cache policy changes from
write-back caching to write-through caching. Write-through cache mode is slower than write-
back cache mode. However, write-back cache mode has a risk of data loss if the Exadata
Storage Server loses power or fails. For Exadata Storage Server releases earlier than release
11.2.1.3, the operation occurs every month. For Oracle Exadata Storage Server Software release
11.2.1.3 and later, the operation occurs every three months, for example, at 01:00 on the 17th
day of January, April, July and October.

1. To change the start time for when the learn cycle occurs, use a command similar
to the following. The time reverts to the default learn cycle time after the cycle
completes.

CellCLI> ALTER CELL bbuLearnCycleTime="2011-01-22T02:00:00-08:00"

2. To see the time for the next learn cycle, use the following command:

CellCLI> LIST CELL ATTRIBUTES bbuLearnCycleTime

The Exadata Storage Server generates an informational alert about the status of the caching
mode for logical drives on the cell, for example:

HDD disk controller battery on disk controller at adapter 0 is going into a learn cycle.
This is a normal maintenance activity that occurs quarterly and runs for approximately
1 to 12 hours. The disk controller cache might go into WriteThrough caching mode during
the learn cycle. Disk write throughput might be temporarily lower during this time. The
message is informational only, no action is required.

3. Use the following command to view the status of the battery:

# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -a0

The following is an example of the output of the command:

BBU status for Adapter: 0


 
BatteryType: iBBU08
Voltage: 3721 mV
Current: 541 mA
Temperature: 43 C
 
BBU Firmware Status:
 
Charging Status : Charging
Voltage : OK
Temperature : OK
Learn Cycle Requested : No
Learn Cycle Active : No
Learn Cycle Status : OK
Learn Cycle Timeout : No
I2c Errors Detected : No
Battery Pack Missing : No
Battery Replacement required : No
Remaining Capacity Low : Yes
Periodic Learn Required : No
Transparent Learn : No
 
Battery state:
 
GasGuageStatus:
Fully Discharged : No
Fully Charged : No
Discharging : No
Initialized : No
Remaining Time Alarm : Yes
Remaining Capacity Alarm: No
Discharge Terminated : No
Over Temperature : No
Charging Terminated : No
Over Charged : No
 
Relative State of Charge: 7 %

Charger System State: 1


Charger System Ctrl: 0
Charging current: 541 mA
Absolute State of Charge: 0%
 
Max Error: 0 %
Exit Code: 0x00

Shut Down or Reboot an Exadata Storage Server


When performing maintenance on Exadata Storage Servers, it might be necessary to power down
or reboot the cell. If an Exadata Storage Server is to be shut down when one or more databases
are running, verify that taking the Exadata Storage Server offline will not impact Oracle ASM
disk group and database availability. The ability to take an Exadata Storage Server offline
without affecting database availability depends on the level of Oracle ASM redundancy used on
the affected disk groups, and on the current status of disks in the other Exadata Storage Servers
that hold mirror copies of the data on the Exadata Storage Server to be taken offline.

Use the following procedure to power down Exadata Storage Server.

1. Run the following command to check if there are other offline disks:

CellCLI> LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'

If any grid disks are returned, then it is not safe to take Exadata Storage Server offline because
proper Oracle ASM disk group redundancy will not be maintained. Taking Exadata Storage
Server offline when one or more grid disks are in this state will cause Oracle ASM to dismount
the affected disk group, causing the databases to shut down abruptly.

2. Inactivate all the grid disks when Exadata Storage Server is safe to take offline:

CellCLI> ALTER GRIDDISK ALL INACTIVE

The preceding command completes once all disks are inactive and offline.

3. Verify that all grid disks are INACTIVE to allow safe shut down of Exadata Storage
Server.

CellCLI> LIST GRIDDISK WHERE STATUS != 'inactive'

If all grid disks are INACTIVE, then Exadata Storage Server can be shut down without affecting
database availability.


4. Shut down the cell.
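
The shutdown command itself is not shown in this guide. On the storage cell's management
operating system, a typical way to halt the server (a sketch; run as root and only after the
checks in the previous steps) is:

# shutdown -h now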

5. After performing the maintenance, start the cell. The cell services start
automatically.

6. Bring all grid disks online using the following command:

CellCLI> ALTER GRIDDISK ALL ACTIVE

When the grid disks become active, Oracle ASM automatically synchronizes the grid disks to
bring them back into the disk group.

7. Verify that all grid disks have been successfully put online using the following
command:

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus

Wait until asmmodestatus is ONLINE or UNUSED for all grid disks. For example:

DATA_CD_00_dm01cel01 ONLINE
DATA_CD_01_dm01cel01 SYNCING
DATA_CD_02_dm01cel01 OFFLINE
DATA_CD_02_dm02cel01 OFFLINE
DATA_CD_02_dm03cel01 OFFLINE
DATA_CD_02_dm04cel01 OFFLINE
DATA_CD_02_dm05cel01 OFFLINE
DATA_CD_02_dm06cel01 OFFLINE
DATA_CD_02_dm07cel01 OFFLINE
DATA_CD_02_dm08cel01 OFFLINE
DATA_CD_02_dm09cel01 OFFLINE
DATA_CD_02_dm10cel01 OFFLINE
DATA_CD_02_dm11cel01 OFFLINE

Oracle ASM synchronization is complete only when all grid disks show
asmmodestatus=ONLINE or asmmodestatus=UNUSED. Before taking another Exadata Storage
Server offline, Oracle ASM synchronization must complete on the restarted Exadata Storage
Server. If synchronization is not complete, the check performed on another Exadata Storage
Server will fail. For example:

CellCLI> list griddisk attributes name where asmdeactivationoutcome != 'Yes'


DATA_CD_00_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_01_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_02_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_03_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"

DATA_CD_04_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"


DATA_CD_05_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_06_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_07_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_08_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_09_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_10_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"
DATA_CD_11_dm01cel02 "Cannot de-activate due to other offline disks in the diskgroup"

Drop an Exadata Storage Server


1. From Oracle ASM, drop the Oracle ASM disks on the physical disk using the
following command:

ALTER DISKGROUP diskgroup-name DROP DISK asm-disk-name

To ensure correct redundancy level in Oracle ASM, wait for the rebalance to complete before
proceeding.
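
One common way to confirm that the rebalance has finished (not part of this guide's procedure)
is to query the V$ASM_OPERATION view from the Oracle ASM instance; when the query returns no
rows, no rebalance operation is running:

SELECT * FROM V$ASM_OPERATION;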

2. Remove the IP address entry from the cellip.ora file on each database server
that accesses the Exadata Storage Server.

3. From Exadata Storage Server, drop the grid disks, cell disks, and cell on the
physical disk using the following command:

DROP CELLDISK celldisk-on-this-lun FORCE

4. Shut down all services on the Exadata Storage Server.
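
The specific command is not listed here; one common way to stop all cell services, assuming you
are logged in to the cell and using CellCLI, is:

CellCLI> ALTER CELL SHUTDOWN SERVICES ALL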

5. Power down the cell.

Tip - Refer to “Shut Down or Reboot an Exadata Storage Server” on page 156 for additional
information.

Tuning the System (ssctuner)

SuperClusters with Oracle Solaris global zones automatically monitor and tune various system
parameters in real time. Tuning is handled by the ssctuner utility.


These topics describe the ssctuner utility:


■ “ssctuner Overview” on page 159
■ “Monitor ssctuner Activity” on page 160
■ “View Log Files” on page 161
■ “Configure the EMAIL_ADDRESS Property” on page 162
■ “Change ssctuner Properties and Disable Features” on page 163
■ “Configure ssctuner to Run compliance(1M) Benchmarks” on page 165
■ “Monitor and View the Compliance Benchmark” on page 166
■ “Install ssctuner” on page 168
■ “Enable ssctuner” on page 169

Related Information
■ For more information about ssctuner, refer to MOS note 1903388.1 at:
https://support.oracle.com
■ For more information about SMF services on the Oracle Solaris OS, see the Oracle Solaris
System Administration Guide: Common System Management Tasks at:
http://docs.oracle.com/cd/E23824_01/html/821-1451/hbrunlevels-25516.
html#scrolltoc

ssctuner Overview

The ssctuner utility is a small set of Perl and Korn shell scripts and configuration files that run
on SuperCluster Oracle Solaris 10 and Oracle Solaris 11 global zones. By default, ssctuner is
installed and enabled at installation time.

The utility runs in real time as an SMF service to monitor and tune ndd parameters and various
system configuration parameters, including these files:
■ /etc/system
■ /kernel/drv/sd.conf
■ /kernel/drv/ssd.conf
■ /etc/inet/ntp.conf

The utility also periodically checks for the use of DISM or suboptimal NFS mount options.

By default, the utility runs every two hours and modifies parameters as needed.

Maintaining the System 159


Monitor ssctuner Activity

The utility also checks every two minutes to see if there are any virtual disk devices that were in
a degraded state and have come back online, and if so, clears that zpool.
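
The two-hour and two-minute intervals correspond to the MAJOR_INTERVAL and MINOR_INTERVAL
properties described in “Change ssctuner Properties and Disable Features” on page 163. One way
to view the current values (a sketch using the same svccfg syntax shown in that procedure) is:

# svccfg -s ssctuner listprop 'ssctuner_vars/*' | grep INTERVAL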

Note - If you manually tune a parameter for which ssctuner requires a different value,
ssctuner sets the value of that parameter back to what ssctuner requires and logs the changes
at this interval check. If you must control one or more of the parameters ssctuner manages,
consider turning off those specific components rather than disabling ssctuner completely. See
“Change ssctuner Properties and Disable Features” on page 163.

Note - Oracle Solaris 11 must be at SRU 11.4 or later, or ssd.conf/sd.conf settings might
cause panics.

Note - Do not set ndd parameters through another SMF service or init script. ssctuner must
manage the ndd parameters.

An ssctuner SMF variable called ssctuner_vars/COMPLIANCE_RUN can be set to an appropriate
benchmark; after you restart ssctuner, a compliance assessment is configured. By default, this
variable is set to none. For security purposes, you must enable this feature. See
“Configure ssctuner to Run compliance(1M) Benchmarks” on page 165.

Related Information
■ “Monitor ssctuner Activity” on page 160
■ “View Log Files” on page 161
■ “Configure the EMAIL_ADDRESS Property” on page 162
■ “Change ssctuner Properties and Disable Features” on page 163
■ “Configure ssctuner to Run compliance(1M) Benchmarks” on page 165
■ “Monitor and View the Compliance Benchmark” on page 166
■ “Install ssctuner” on page 168
■ “Enable ssctuner” on page 169

Monitor ssctuner Activity

View ssctuner activity.


# svcs -l ssctuner

Related Information
■ “View Log Files” on page 161
■ “ssctuner Overview” on page 159
■ “Configure the EMAIL_ADDRESS Property” on page 162
■ “Change ssctuner Properties and Disable Features” on page 163
■ “Configure ssctuner to Run compliance(1M) Benchmarks” on page 165
■ “Monitor and View the Compliance Benchmark” on page 166
■ “Install ssctuner” on page 168
■ “Enable ssctuner” on page 169

View Log Files


1. View the ssctuner service log.
ssctuner writes messages to syslog and to the ssctuner service log. Those messages are
tagged as ssctuner and might point to other file locations for more information.

# svcs -x ssctuner
svc:/site/application/sysadmin/ssctuner:default (ssctuner for Oracle SuperCluster)
State: online since September 28, 2012 07:30:15 AM PDT
See: ssctuner(l)
See: /var/svc/log/site-application-sysadmin-ssctuner:default.log
Impact: None.
 
# more /var/svc/log/site-application-sysadmin-ssctuner\:default.log
[ Sep 28 07:30:00 Disabled. ]
[ Sep 28 07:30:00 Rereading configuration. ]
[ Sep 28 07:30:10 Enabled. ]
[ Sep 28 07:30:10 Executing start method ("/opt/oracle.supercluster/ssctuner.ksh start"). ]
ssctuner local0.notice success: Saved rollback for : /etc/system
ssctuner local0.notice success: Saved ndd rollback.
ssctuner local0.notice success: Saved rollback for : /kernel/drv/sd.conf
ssctuner local0.notice success: enabled, version 0.99e. daemon PID= 14599
[ Sep 28 07:30:15 Method "start" exited with status 0. ]
ssctuner local0.notice success: daemon executing
ssctuner local0.notice success: Changes made to /etc/system
ssctuner local0.notice success: Changes made to /kernel/drv/sd.conf

2. View ssctuner messages in the /var/adm/messages file.


# grep -i ssctuner /var/adm/messages


Sep 28 07:30:10 etc6cn04 ssctuner: [ID 702911 local0.notice] success: Saved rollback for : /etc/system
Sep 28 07:30:10 etc6cn04 ssctuner: [ID 702911 local0.notice] success: Saved ndd rollback.
Sep 28 07:30:10 etc6cn04 ssctuner: [ID 702911 local0.notice] success: Saved rollback for : /kernel/drv/
sd.conf
Sep 28 07:30:15 etc6cn04 ssctuner: [ID 702911 local0.notice] success: enabled, version 0.99e. daemon PID=
14599
Sep 28 07:30:15 etc6cn04 ssctuner: [ID 702911 local0.notice] success: daemon executing
Sep 28 07:30:15 etc6cn04 ssctuner: [ID 702911 local0.notice] success: Changes made to /etc/system
Sep 28 07:30:15 etc6cn04 ssctuner: [ID 702911 local0.notice] success: Changes made to /kernel/drv/sd.conf

Related Information
■ “Monitor ssctuner Activity” on page 160
■ “ssctuner Overview” on page 159
■ “Configure the EMAIL_ADDRESS Property” on page 162
■ “Change ssctuner Properties and Disable Features” on page 163
■ “Configure ssctuner to Run compliance(1M) Benchmarks” on page 165
■ “Monitor and View the Compliance Benchmark” on page 166
■ “Install ssctuner” on page 168
■ “Enable ssctuner” on page 169

Configure the EMAIL_ADDRESS Property


You must configure the EMAIL_ADDRESS property so that ssctuner messages are emailed to the
appropriate person, even when the message is not logged in the system.

1. Configure ssctuner so that critical messages are sent to your email address.

# svccfg -s ssctuner setprop ssctuner_vars/EMAIL_ADDRESS="my_name@mycorp.com"
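
To confirm that the property now holds the intended value, an optional check using the same
svccfg syntax is:

# svccfg -s ssctuner listprop ssctuner_vars/EMAIL_ADDRESS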

2. If you plan to change any other ssctuner properties, do so before you perform the
remaining steps in this task.
See “Change ssctuner Properties and Disable Features” on page 163.

3. Restart the SMF service for changes to take effect.

# svcadm restart ssctuner


4. Ensure that the ssctuner service is enabled and no error messages are reported.
If you changed a property using incorrect syntax, the service does not come back. If this
happens, identify the offending property that you must fix.

# grep -i parameter /var/svc/log/site-application-sysadmin-ssctuner:default.log

Related Information
■ “ssctuner Overview” on page 159
■ “Monitor ssctuner Activity” on page 160
■ “View Log Files” on page 161
■ “Change ssctuner Properties and Disable Features” on page 163
■ “Configure ssctuner to Run compliance(1M) Benchmarks” on page 165
■ “Monitor and View the Compliance Benchmark” on page 166
■ “Install ssctuner” on page 168

Change ssctuner Properties and Disable Features

Caution - Do not perform this procedure without Oracle Support approval. Changing properties
or disabling ssctuner features can have unpredictable consequences.

Changing certain ssctuner properties such as EMAIL_ADDRESS and disk or memory usage
warning levels might be advantageous in some environments.

1. List the ssctuner properties to identify the property you want to change.

# svccfg -s ssctuner listprop 'ssctuner_vars/*'


ssctuner_vars/CRIT_THREADS_FIX boolean true
ssctuner_vars/CRIT_THREADS_NONEXA boolean false
ssctuner_vars/DISK_SPACE_CHECK boolean true
ssctuner_vars/DISK_USAGE_CRIT integer 90
ssctuner_vars/DISK_USAGE_WARN integer 85
ssctuner_vars/DISM_CHECK boolean true
ssctuner_vars/EMAIL_ADDRESS astring root@localhost
ssctuner_vars/EMAIL_MESSAGES boolean true
ssctuner_vars/FORCELOAD_VDC boolean false
ssctuner_vars/INTRD_DISABLE boolean true
ssctuner_vars/ISCSI_TUNE boolean true

ssctuner_vars/MAJOR_INTERVAL integer 120


ssctuner_vars/MEM_USAGE_CRIT integer 97
ssctuner_vars/MEM_USAGE_WARN integer 94
ssctuner_vars/MINOR_INTERVAL integer 2
ssctuner_vars/NDD_TUNE boolean true
ssctuner_vars/NFS_CHECK boolean true
ssctuner_vars/NFS_EXCLUDE astring
ssctuner_vars/NFS_INCLUDE astring
ssctuner_vars/NTPCONF_TUNE boolean true
ssctuner_vars/POWERADM_DISABLE boolean true
ssctuner_vars/SDCONF_TUNE boolean true
ssctuner_vars/SERD_THRESHOLD_TUNE boolean true
ssctuner_vars/SSDCONF_TUNE boolean true
ssctuner_vars/SYSLOG_DUP_SUPPRESS_HOURS integer 8
ssctuner_vars/SYSTEM_TUNE boolean true
ssctuner_vars/ZPOOL_FIX boolean true
ssctuner_vars/ZPOOL_NAME_CUST astring

2. Use the svccfg command to change property settings.


These are examples of properties you might need to change:
■ Change the disk (/ and zone roots) usage warning level to 80%.

~# svccfg -s ssctuner setprop ssctuner_vars/DISK_USAGE_WARN=80


■ Enable thread priority changing for non-Exadata Oracle Database domains:

~# svccfg -s ssctuner setprop ssctuner_vars/CRIT_THREADS_NONEXA=true


■ Enable zpool check and repair of vdisk zpools that are not generated by the SuperCluster
installer.

~# svccfg -s ssctuner setprop ssctuner_vars/ZPOOL_NAME_CUST=my_vdisk_pool


■ Exclude NFS mounts from warning mechanisms.

~# svccfg -s ssctuner setprop ssctuner_vars/NFS_EXCLUDE='mount_name_or_device'


■ Include NFS mounts in warning mechanism (overrides exclude).

~# svccfg -s ssctuner setprop ssctuner_vars/NFS_INCLUDE='mount_name_or_device'


■ Disable all NFS mount warnings (not recommended).

~# svccfg -s ssctuner setprop ssctuner_vars/NFS_CHECK=false


The NFS_EXCLUDE, NFS_INCLUDE, and ZPOOL_NAME_CUST properties must be simple strings, but
you can use simple regular expressions.
If you need the flexibility of regular expressions, be extremely careful to double-quote the
expressions. Also verify that the ssctuner service comes back after restarting and that no errors
are in the SMF log file.

3. Restart the SMF service for changes to take effect.

# svcadm restart ssctuner

4. Ensure that the ssctuner service is enabled and no error messages are reported.
If you changed a property using an incorrect syntax, the service does not come back. If this
happens, identify the offending property that you must fix:

# grep -i parameter /var/svc/log/site-application-sysadmin-ssctuner:default.log

After making any corrections or changes, repeat Step 3.

Related Information
■ “ssctuner Overview” on page 159
■ “Monitor ssctuner Activity” on page 160
■ “View Log Files” on page 161
■ “Configure the EMAIL_ADDRESS Property” on page 162
■ “Configure ssctuner to Run compliance(1M) Benchmarks” on page 165
■ “Monitor and View the Compliance Benchmark” on page 166
■ “Install ssctuner” on page 168
■ “Enable ssctuner” on page 169

Configure ssctuner to Run compliance(1M) Benchmarks

Use this procedure to configure ssctuner to run compliance benchmarks.

The assessment begins within 12 minutes and then reruns each time the node is rebooted.

By default, the COMPLIANCE_RUN variable is set to none; you must enable this feature by setting
it to a benchmark, as described in the following steps.

1. Identify available benchmarks.


In this example, two benchmarks are available: pci-dss and solaris.

# compliance list -b
pci-dss solaris

2. Set the ssctuner SMF variable to the chosen benchmark.


This example uses the solaris benchmark, which runs the recommended profile.

# svccfg -s ssctuner setprop ssctuner_vars/COMPLIANCE_RUN=solaris


# svcadm restart ssctuner

3. Verify that the compliance run is scheduled by viewing the SMF log file.

Note - Compliance runs are staggered to prevent DOS attacks on the ZFS storage appliance.

# grep compliance /var/svc/log/site-application-sysadmin-ssctuner\:default.log


[ Nov 16 11:47:54 notice: Performing compliance run after delay of 519 seconds... ]
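
If you later want to stop scheduling compliance runs, a sketch of the reverse operation (setting
the variable back to its documented default of none) is:

# svccfg -s ssctuner setprop ssctuner_vars/COMPLIANCE_RUN=none
# svcadm restart ssctuner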

Related Information
■ “ssctuner Overview” on page 159
■ “Monitor ssctuner Activity” on page 160
■ “View Log Files” on page 161
■ “Configure the EMAIL_ADDRESS Property” on page 162
■ “Change ssctuner Properties and Disable Features” on page 163
■ “Monitor and View the Compliance Benchmark” on page 166
■ “Install ssctuner” on page 168

Monitor and View the Compliance Benchmark


1. (Optional) View the SMF log as the benchmark runs:

# tail -f /var/svc/log/site-application-sysadmin-ssctuner\:default.log


root@etc28zadm0101:~# tail -f /var/svc/log/site-application-sysadmin-ssctuner\:default.log
[ Nov 16 11:47:55 CURRENT STATUS version=1.3.8  crit issue count=1  disabled feature
count=0 ]
[ Nov 16 11:47:55 CURRENT ISSUES : Please change ssctuner email address from
root@localhost ]
[ Nov 16 11:47:55 notice: Checking Oracle log writer and LMS thread priority. ]
[ Nov 16 11:47:56 notice: completed initialization. ]
[ Nov 16 11:47:56 Method "start" exited with status 0.]
[ Nov 16 11:49:55 notice: Checking Oracle log writer and LMS thread priority. ]
[ Nov 16 11:51:55 notice: Checking Oracle log writer and LMS thread priority. ]
[ Nov 16 11:53:55 notice: Checking Oracle log writer and LMS thread priority. ]
[ Nov 16 11:55:55 notice: Checking Oracle log writer and LMS thread priority. ]
Assessment will be named 'solaris.Baseline.2015-11-16,11:56'
Package integrity is verified
 OSC-54005
pass
 
The OS version is current
OSC-53005
pass
 
Package signature checking is globally activated
OSC-53505
pass

2. (Optional) Determine if the assessment is complete.


When you see Compliance assessment completed, either from Step 1 or by using this grep
command, continue to the next step.

# grep -i compliance /var/svc/log/site-application-sysadmin-ssctuner\:default.log


[ Nov 16 11:47:54 notice: Performing compliance run after delay of 519
seconds... ]
[ Nov 16 11:57:47 notice: Compliance assessment completed.]

3. List assessments.

# compliance list -a
solaris.Baseline.2015-11-16,11:56

4. Obtain an assessment HTML report.

Note - New assessments run (staggered in time) each time the domain reboots.


# compliance report -a solaris.Baseline.2015-11-16,11:56


/var/share/compliance/assessments/solaris.Baseline.2015-11-16,11:56/report.html

Related Information
■ “ssctuner Overview” on page 159
■ “Monitor ssctuner Activity” on page 160
■ “View Log Files” on page 161
■ “Configure the EMAIL_ADDRESS Property” on page 162
■ “Change ssctuner Properties and Disable Features” on page 163
■ “Configure ssctuner to Run compliance(1M) Benchmarks” on page 165
■ “Install ssctuner” on page 168

Install ssctuner
By default, ssctuner is installed and running. If for some reason ssctuner is not installed, use
this procedure to install it.

1. Install the ssctuner package.


Use the Oracle Solaris package command and package name based on the version of the OS.
■ Oracle Solaris 10 OS:

Note - You must be in the directory where the ORCLssctuner package tree resides.

# pkgadd -d ORCLssctuner
■ Oracle Solaris 11 OS:

Note - You must have the latest exa-family repository set as a publisher.

# pkg install ssctuner

2. Verify the package installation.


■ Oracle Solaris 10 OS:


# pkginfo ORCLssctuner
■ Oracle Solaris 11 OS:

# pkg info ssctuner

3. Verify that the ssctuner service is automatically started after the package
installation.

# svcs ssctuner

If the service does not transition to an online state after a minute or two, check the service log
file. See “View Log Files” on page 161.

4. Reboot the OS.


When ssctuner changes configuration files, you must reboot the OS for those changes to take
effect.

Related Information
■ “ssctuner Overview” on page 159
■ “Monitor ssctuner Activity” on page 160
■ “View Log Files” on page 161
■ “Configure the EMAIL_ADDRESS Property” on page 162
■ “Change ssctuner Properties and Disable Features” on page 163
■ “Configure ssctuner to Run compliance(1M) Benchmarks” on page 165
■ “Monitor and View the Compliance Benchmark” on page 166
■ “Enable ssctuner” on page 169

Enable ssctuner

Usually ssctuner is running. If for some reason ssctuner is not running, use this procedure to
enable it.

1. Enable ssctuner.


# svcadm enable ssctuner

2. Verify that the ssctuner service started.

# svcs ssctuner

If the service does not transition to an online state after a minute or two, check the service log
file. See “View Log Files” on page 161.

3. Check the /var/adm/messages log file to see if ssctuner changed any configuration
file settings.
See “View Log Files” on page 161.
If configuration settings changed, you must reboot the OS for the changes to take effect. If
settings did not change, you do not need to reboot the OS.

Related Information

■ “ssctuner Overview” on page 159


■ “Monitor ssctuner Activity” on page 160
■ “View Log Files” on page 161
■ “Configure the EMAIL_ADDRESS Property” on page 162
■ “Change ssctuner Properties and Disable Features” on page 163
■ “Configure ssctuner to Run compliance(1M) Benchmarks” on page 165
■ “Monitor and View the Compliance Benchmark” on page 166
■ “Install ssctuner” on page 168

Configuring CPU and Memory Resources (osc-setcoremem)

This section describes how to configure Oracle SuperCluster CPU and memory resources using
osc-setcoremem.

Prior to the Oracle SuperCluster July 2015 quarterly update, you configured CPU and memory
resources using the setcoremem tool (in some cases called setcoremem-t4, setcoremem-t5, or
setcoremem-m6 based on the SuperCluster model).

As of the Oracle SuperCluster July 2015 quarterly update, you use the osc-setcoremem tool,
and the previous commands are no longer available.


The osc-setcoremem tool is supported on all SuperCluster systems that run SuperCluster 1.x
software with the July 2015 quarterly update, and SuperCluster 2.x software.

Use these topics to change CPU and memory allocations for domains using the CPU/Memory
tool called osc-setcoremem.

■ Learn about the CPU/Memory tool: “osc-setcoremem Overview” on page 171 and “Minimum and Maximum Resources (Dedicated Domains)” on page 173
■ Find out if SuperCluster resources can be modified using the CPU/Memory tool: “Supported Domain Configurations” on page 174
■ Plan CPU and memory allocations: “Plan CPU and Memory Allocations” on page 175
■ Identify domain configurations: “Display the Current Domain Configuration (osc-setcoremem)” on page 178, “Display the Current Domain Configuration (ldm)” on page 180, “Access osc-setcoremem Log Files” on page 196, and “View the SP Configuration” on page 199
■ Configure domain CPU and memory resources at the socket or core level: “Change CPU/Memory Allocations (Socket Granularity)” on page 182 and “Change CPU/Memory Allocations (Core Granularity)” on page 186
■ Configure domain CPU and memory resources so that some resources are parked: “Park Cores and Memory” on page 190
■ Access information about previous executions of osc-setcoremem: “Access osc-setcoremem Log Files” on page 196 and “View the SP Configuration” on page 199
■ Revert to or remove a previous CPU/memory configuration: “Revert to a Previous CPU/Memory Configuration” on page 201 and “Remove a CPU/Memory Configuration” on page 202

osc-setcoremem Overview

SuperCluster compute server CPU and memory resources are initially allocated during
installation as defined by your configuration. CPU sockets are assigned to domains in the same
proportion as IB HCAs. Memory is assigned in the same proportions.

The osc-setcoremem tool enables you to migrate CPU cores and memory resources between
dedicated domains, and from dedicated domains to CPU and memory repositories for the use of
IO domains.


These points provide important information related to the use of osc-setcoremem:

■ The final CPU and memory layout for a dedicated domain is optimized for locality to
minimize accesses to non-local resources.
■ The granularity of CPU and memory migration is 1 core and 16GB.
■ Empty dedicated domains (domains with zero cores and zero memory) are not supported.
■ The tool tracks resource allocation and ensures that the selections you make are valid. See
“Minimum and Maximum Resources (Dedicated Domains)” on page 173.
■ Affected dedicated domains must be rebooted after any change.

The tool enables you to change the CPU and memory allocations in one of two levels of
granularity:

■ Socket granularity – The tool automatically allocates each domain a minimum of one
socket, then enables you to allocate remaining sockets to the domains. See “Change CPU/
Memory Allocations (Socket Granularity)” on page 182.
■ Core granularity – The tool automatically allocates each domain a minimum number of
cores, then enables you to allocate additional cores in one-core increments. See “Change
CPU/Memory Allocations (Core Granularity)” on page 186.

If you configure the CPU and memory resources so that some resources are not allocated
to any domain, those unallocated resources are parked. Parked resources are placed in a
logical CPU and memory repository and are available for I/O Domains. See “Park Cores and
Memory” on page 190.

You can park resources from dedicated domains anytime, but you cannot move parked
resources to dedicated domains once I/O Domains are created.

Also see “Supported Domain Configurations” on page 174.

Related Information

■ “Minimum and Maximum Resources (Dedicated Domains)” on page 173


■ “Supported Domain Configurations” on page 174
■ “Plan CPU and Memory Allocations” on page 175
■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178
■ “Display the Current Domain Configuration (ldm)” on page 180
■ “Change CPU/Memory Allocations (Socket Granularity)” on page 182
■ “Change CPU/Memory Allocations (Core Granularity)” on page 186
■ “Park Cores and Memory” on page 190


Minimum and Maximum Resources (Dedicated Domains)

The tool tracks resource allocation and ensures that the selections you make are valid. This
section describes how the minimum and maximum resources are determined.

This table summarizes the minimum resource requirements for dedicated domains on
SuperCluster T5-8:

Configuration                      Control Domain Minimum          Other Domain Minimum
Dedicated Domain with 1 HCA        5 cores / 32 GB memory          4 cores / 32 GB memory
Dedicated Domain with 2 HCAs       9 cores / 64 GB memory          8 cores / 64 GB memory
Dedicated Domain with 4 HCAs       17 cores / 128 GB memory        16 cores / 128 GB memory

The minimum amount of CPU resource that can be assigned to a dedicated domain is
determined by the number of IB and 10GbE devices in the domain (2 cores are required per IB
HCA).

The minimum amount of memory that can be assigned to a dedicated domain is determined as
follows:

■ The number of IB and 10GbE devices in the domain (2 16GB memory granules are required
per IB HCA)
■ The number of cores assigned to the domain (one 16GB granule in the same locality group
is required per 4 additional cores)

The maximum amount of CPU resource that can be assigned to a dedicated domain is
determined by the amount of resources available after taking these points into account:

■ Resources already assigned to other dedicated domains


■ Required minimal resource for dedicated domains with no resource yet assigned

The maximum amount of memory resources that can be assigned to a dedicated domain is
determined by the amount of resources available after taking these points into account:

■ Resources already assigned to other dedicated domains


■ Required minimum resources for dedicated domains with no resource yet assigned
■ The requirement that for each dedicated domain a memory granule footprint is placed in all
locality groups with allocated cores


Related Information
■ “osc-setcoremem Overview” on page 171
■ “Supported Domain Configurations” on page 174
■ “Plan CPU and Memory Allocations” on page 175
■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178
■ “Display the Current Domain Configuration (ldm)” on page 180
■ “Change CPU/Memory Allocations (Socket Granularity)” on page 182
■ “Change CPU/Memory Allocations (Core Granularity)” on page 186
■ “Park Cores and Memory” on page 190

Supported Domain Configurations

Use this table to identify your SuperCluster configuration, then review the supported resource
allocation activities.

Note - A dedicated domain can be any Application or Database Domain that is not associated
with I/O Domains. For more information about the different types of SuperCluster domains, see
“Understanding the Software Configurations” on page 46.

All domains are dedicated domains – supported resource allocation activities:

■ Plan how the CPU and memory resources are allocated to the domains. See “Plan CPU and Memory Allocations” on page 175.
■ Reallocate all of the resources across domains at the socket or core level (a reboot is required if primary domain resources are changed). See “Change CPU/Memory Allocations (Socket Granularity)” on page 182 and “Change CPU/Memory Allocations (Core Granularity)” on page 186.
■ Remove (park) resources from dedicated domains for licensing purposes. Note - Parked resources are not available for use by any domains. See “Park Cores and Memory” on page 190.
■ Revert to a previous resource configuration. See “Revert to a Previous CPU/Memory Configuration” on page 201.
■ Remove a CPU/memory configuration. See “Remove a CPU/Memory Configuration” on page 202.

Mixed domains (some are dedicated domains, some are Root Domains) – supported resource allocation activities:

Activities you can perform only at initial installation, before any I/O Domains are created:
■ Plan how the CPU and memory resources are allocated to the domains. See “Plan CPU and Memory Allocations” on page 175.
■ Reallocate all of the resources across domains at the socket or core level (a reboot is required if any dedicated domain resources are changed). See “Change CPU/Memory Allocations (Socket Granularity)” on page 182 and “Change CPU/Memory Allocations (Core Granularity)” on page 186.
■ Revert to a previous allocation configuration. See “Revert to a Previous CPU/Memory Configuration” on page 201.

Activities you can perform anytime:
■ Configure resources for I/O Domains. Refer to the I/O Domain Administration Guide.
■ Move resources from dedicated domains so that the resources are available to I/O Domains. See “Park Cores and Memory” on page 190.
■ Move resources between dedicated domains. See “Change CPU/Memory Allocations (Socket Granularity)” on page 182 and “Change CPU/Memory Allocations (Core Granularity)” on page 186.
■ Remove a CPU/memory configuration. See “Remove a CPU/Memory Configuration” on page 202.

Related Information
■ “osc-setcoremem Overview” on page 171
■ “Plan CPU and Memory Allocations” on page 175
■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178
■ “Display the Current Domain Configuration (ldm)” on page 180
■ “Change CPU/Memory Allocations (Socket Granularity)” on page 182
■ “Change CPU/Memory Allocations (Core Granularity)” on page 186
■ “Park Cores and Memory” on page 190

Plan CPU and Memory Allocations

There are two main approaches to modifying resource allocations:


■ All resources allocated – You move resources from domains to other domains, and ensure
that all resources are allocated.
■ Some resources are unallocated – You allocate less than the maximum available cores
and memory for a compute node. Any unused cores are considered parked cores and are
not counted for licensing purposes. However, parked cores are added to the logical CPU
and memory repository. If you have Root Domains, you can later allocate the repository
resources to I/O Domains. See “Park Cores and Memory” on page 190.

Depending on which command you use to view domain resources, you might need to convert
socket, core, and VCPU values.

■ Use these specifications for SuperCluster T5-8.


■ 1 socket = 16 cores
■ 1 core = 8 VCPUs
■ Use these specifications for SuperCluster M6-32.
■ 1 socket = 12 cores
■ 1 core = 8 VCPUs

1. Identify the current resource configuration for each compute node.


To display the current configuration, see one of these procedures:

■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178


■ “Display the Current Domain Configuration (ldm)” on page 180

In this example, one compute node on a SuperCluster T5-8 Full-Rack has five dedicated
domains and one Root Domain.

Domain                    Domain Type    Cores Before    Memory Before (GB)
primary                   Dedicated      32              512
ssccn2-dom1               Dedicated      16              256
ssccn2-dom2               Dedicated      16              256
ssccn2-dom3               Dedicated      16              256
ssccn2-dom4               Dedicated      16              256
ssccn2-dom5               Root           4               64
Unallocated Resources                    28              448

2. Add the domain resources together to determine the total number of resources.
Calculating the total amount of CPU and memory resources gives you a starting point for
determining your resource plan.
While identifying resources, keep these points in mind:

■ Root Domain resources – Are a small amount of resources that are reserved for the
exclusive use of Root Domains (4 cores and 64 GB of memory in this example). Do not
factor these resources into your plan.


■ Unallocated resources – These resources are placed in the CPU and memory repositories
when Root Domains are created, or by leaving some resources unallocated when you use
the osc-setcoremem command.

In this example, the resources for the dedicated domains and the unallocated resources are
summed to provide total resources. The Root Domain resources are not included in total
resources.

Domain                    Domain Type    Cores Before    Memory Before (GB)
primary                   Dedicated      32              512
ssccn2-dom1               Dedicated      16              256
ssccn2-dom2               Dedicated      16              256
ssccn2-dom3               Dedicated      16              256
ssccn2-dom4               Dedicated      16              256
ssccn2-dom5               Root           n/a             n/a
Unallocated Resources                    28              448
Total Resources                          124             1984
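
As a check of the arithmetic in this example: total cores = 32 + 16 + 16 + 16 + 16 + 28 = 124,
and total memory = 512 + (4 x 256) + 448 = 1984 GB. The Root Domain's reserved 4 cores and
64 GB of memory are excluded from both totals.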

3. Based on your site requirements, and the type and number of domains on
SuperCluster, decide how to allocate CPU and memory for each domain.
This is an example of a plan that reduces resources for primary, ssccn2-dom2, ssccn2-dom4,
so that the resources are parked (unallocated). The parked resources can later be used by I/O
Domains.
The total resources for before and after columns should match. This check ensures that all
resources are accounted for in your plan.

Domain                    Domain Type    Cores Before    Cores After    Memory Before (GB)    Memory After (GB)
primary                   Dedicated      32              16             512                   256
ssccn2-dom1               Dedicated      16              16             256                   256
ssccn2-dom2               Dedicated      16              8              256                   64
ssccn2-dom3               Dedicated      16              16             256                   256
ssccn2-dom4               Dedicated      16              4              256                   64
ssccn2-dom5               Root           n/a             n/a            n/a                   n/a
Unallocated Resources                    28              64             448                   1088
Total Resources                          124             124            1984                  1984


4. Consider your next action:

■ Change resource allocations at the socket granularity level.
See “Change CPU/Memory Allocations (Socket Granularity)” on page 182.
■ Change resource allocations at the core granularity level.
See “Change CPU/Memory Allocations (Core Granularity)” on page 186.
■ Increase unallocated resources.
See “Park Cores and Memory” on page 190.

Related Information
■ “osc-setcoremem Overview” on page 171
■ “Supported Domain Configurations” on page 174
■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178
■ “Display the Current Domain Configuration (ldm)” on page 180
■ “Change CPU/Memory Allocations (Socket Granularity)” on page 182
■ “Change CPU/Memory Allocations (Core Granularity)” on page 186
■ “Park Cores and Memory” on page 190

Display the Current Domain Configuration (osc-setcoremem)

This procedure describes how to display a compute node domain configuration using the osc-
setcoremem command.

Note - Alternatively, you can use ldm commands to get similar information. See “Display the
Current Domain Configuration (ldm)” on page 180.

1. Log in as superuser on the compute node's control domain.

2. Use the osc-setcoremem command to view domains and resources.

Note - If you don't want to continue to use the osc-setcoremem command to change resource
allocations, enter CTL-C at the first prompt.

Example:


# /opt/oracle.supercluster/bin/osc-setcoremem
 
osc-setcoremem
v1.0 built on Oct 29 2014 10:21:05
 
 
Current Configuration: Full-Rack T5-8 SuperCluster
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 32 | 512 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom2 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom3 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom4 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 28 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
[Note] Following domains will be skipped in this session.
 
Root Domains
------------
ssccn2-dom5
 
 
CPU allocation preference:
 
1. Socket level
2. Core level
 
In case of Socket level granularity, proportional memory capacity is
automatically selected for you.
 
Choose Socket or Core level [S or C] <CTL-C>

Related Information

■ “Display the Current Domain Configuration (ldm)” on page 180


■ “Change CPU/Memory Allocations (Socket Granularity)” on page 182
■ “Change CPU/Memory Allocations (Core Granularity)” on page 186
■ “Park Cores and Memory” on page 190


Display the Current Domain Configuration (ldm)

This procedure describes how to display a compute node domain configuration using a series of
ldm commands.

Note - Alternatively, you can use the osc-setcoremem command to get similar information. See
“Display the Current Domain Configuration (osc-setcoremem)” on page 178.

1. Log in as root on the compute node's control domain.

2. Identify which domains are Root Domains:


Root Domains are identified by IOV in the STATUS column.
In this example, ssccn2-dom5 is a Root Domain. The other domains are dedicated domains.

# ldm list-io | grep BUS


NAME TYPE BUS DOMAIN STATUS
pci_0 BUS pci_0 primary
pci_1 BUS pci_1 primary
pci_2 BUS pci_2 primary
pci_3 BUS pci_3 primary
pci_4 BUS pci_4 ssccn2-dom1
pci_5 BUS pci_5 ssccn2-dom1
pci_6 BUS pci_6 ssccn2-dom2
pci_7 BUS pci_7 ssccn2-dom2
pci_8 BUS pci_8 ssccn2-dom3
pci_9 BUS pci_9 ssccn2-dom3
pci_10 BUS pci_10 ssccn2-dom4
pci_11 BUS pci_11 ssccn2-dom4
pci_12 BUS pci_12 ssccn2-dom5 IOV
pci_13 BUS pci_13 ssccn2-dom5 IOV
pci_14 BUS pci_14 ssccn2-dom5 IOV
pci_15 BUS pci_15 ssccn2-dom5 IOV

3. View domains and resource allocation information.


In this example, ssccn2-dom5 is a Root Domain (from Step 2). The resources listed for Root
Domains only represent the resources that are reserved for the Root Domain itself. Parked
resources are not displayed.

# ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 256 523008M 0.1% 0.1% 19m


ssccn2-dom1 active -n---- 5001 128 256G 0.0% 0.0% 13m
ssccn2-dom2 active -n---- 5002 128 256G 0.0% 0.0% 13m
ssccn2-dom3 active -n---- 5003 128 256G 0.1% 0.1% 13m
ssccn2-dom4 active -n---- 5004 128 256G 0.0% 0.0% 14m
ssccn2-dom5 active -n--v- 5005 32 64G 0.0% 0.0% 1d 10h 15m

Based on your SuperCluster model, use one of these specifications to convert socket, core, and
VCPU values.

■ Use these specifications for SuperCluster T5-8.


■ 1 socket = 16 cores
■ 1 core = 8 VCPUs
■ Use these specifications for SuperCluster M6-32.
■ 1 socket = 12 cores
■ 1 core = 8 VCPUs
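
For example, the primary domain in the ldm list output above shows 256 VCPUs; on SuperCluster
T5-8 that corresponds to 256 / 8 = 32 cores, or 32 / 16 = 2 sockets.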

4. View the amount of parked resources.


In this example, the first command line reports the number of cores in the logical CPU
repository. The second command line reports the amount of memory in the logical memory
repository.

# ldm list-devices -p core | grep cid | wc -l


28
 
# ldm list-devices memory
MEMORY
PA SIZE
0x300000000000 224G
0x380000000000 224G

Related Information

■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178


■ “Change CPU/Memory Allocations (Socket Granularity)” on page 182
■ “Change CPU/Memory Allocations (Core Granularity)” on page 186
■ “Park Cores and Memory” on page 190


Change CPU/Memory Allocations (Socket Granularity)

Perform this procedure on each compute node to change CPU and memory resource allocation
at the socket granularity level.

Note - To find out if you can perform this procedure, see “Supported Domain
Configurations” on page 174.

The tool makes these changes:

■ Automatically detects Root Domains.


■ Calculates the minimum and maximum resources for all domains, and only enables you to
select valid quantities.
■ Modifies domain resources according to the choices you make.
■ Automatically assigns memory capacity in the same proportion to CPU resources.
■ (If needed) Stops nonprimary domains.
■ (If needed) Reboots the primary domain with new resources.
■ (If needed) Brings up nonprimary domains with new resources.

The examples in this procedure show a SuperCluster T5-8 Full-Rack compute node that has six
domains. The concepts in this procedure also apply to other SuperCluster models.

In this example, one socket and 256 GB memory are removed from the primary domain and
allocated to ssccn2-dom2.

This table shows the allocation plan (see “Plan CPU and Memory Allocations” on page 175).

Domain                       Domain Type    Sockets Before    Sockets After    Memory Before (GB)    Memory After (GB)
primary                      Dedicated      2                 1                512                   256
ssccn2-dom1                  Dedicated      1                 1                256                   256
ssccn2-dom2                  Dedicated      1                 2                256                   512
ssccn2-dom3                  Dedicated      1                 1                256                   256
ssccn2-dom4                  Dedicated      1                 1                256                   256
ssccn2-dom5                  Root           n/a               n/a              n/a                   n/a
Unallocated                                 28                28               448                   448
Total allocated resources                   34                34               1984                  1984

1. Log in as superuser on the compute node's control domain.

2. Activate any inactive domains using the ldm bind command.


The tool does not continue if any inactive domains are present.
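
A quick way to find inactive domains and bind one of them (a sketch; domain-name is a
placeholder for the name reported by ldm list) is:

# ldm list | grep -i inactive
# ldm bind domain-name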

3. Run osc-setcoremem to reconfigure the resources.


Respond when prompted. Press Enter to select the default value.

# /opt/oracle.supercluster/bin/osc-setcoremem
 
osc-setcoremem
v1.0 built on Oct 29 2014 10:21:05
 
 
Current Configuration: Full-Rack T5-8 SuperCluster
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 32 | 512 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom2 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom3 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom4 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 28 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
[Note] Following domains will be skipped in this session.
 
Root Domains
------------
ssccn2-dom5
 
 
CPU allocation preference:
 
1. Socket level
2. Core level


 
In case of Socket level granularity, proportional memory capacity is
automatically selected for you.
 
Choose Socket or Core level [S or C] s
 
Step 1 of 1: Socket count selection
 
primary : desired socket count [min: 1, max: 2. default: 1] : 1
you chose [1] socket for primary domain
 
ssccn2-dom1 : desired socket count [min: 1, max: 2. default: 1] : 1
you chose [1] socket for ssccn2-dom1 domain
 
ssccn2-dom2 : desired socket count [min: 1, max: 2. default: 1] : 2
you chose [2] sockets for ssccn2-dom2 domain
 
ssccn2-dom3 : desired socket count [min: 1, max: 1. default: 1] : 1
you chose [1] socket for ssccn2-dom3 domain
 
ssccn2-dom4 : desired socket count [min: 1, max: 1. default: 1] : 1
you chose [1] socket for ssccn2-dom4 domain
 
New Configuration in progress after Socket count selection:
 
+-------------------------+----------+----------+-----------+
| DOMAIN | SOCKETS | MEM_GB | TYPE |
+-------------------------+----------+----------+-----------+
| primary | 1 | 256 | Dedicated |
| ssccn2-dom1 | 1 | 256 | Dedicated |
| ssccn2-dom2 | 2 | 512 | Dedicated |
| ssccn2-dom3 | 1 | 256 | Dedicated |
| ssccn2-dom4 | 1 | 256 | Dedicated |
| *ssccn2-dom5 | 0.250 | 64 | Root |
+-------------------------+----------+----------+-----------+
| unallocated or parked | 1.750 | 448 | -- |
+-------------------------+----------+----------+-----------+
 
 
The following domains will be stopped and restarted:
 
ssccn2-dom2
 
This configuration requires rebooting the control domain.
Do you want to proceed? Y/N : y
 
IMPORTANT NOTE:
+----------------------------------------------------------------------------+
| After the reboot, osc-setcoremem attempts to complete CPU, memory re-configuration |
| Please check syslog and the state of all domains before using the system. |
| eg., dmesg | grep osc-setcoremem ; ldm list | grep -v active ; date |
+----------------------------------------------------------------------------+
 
All activity is being recorded in log file:
/opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_10-29-2014_16:15:44.log
 
Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
It may take a while. Please be patient and do not interrupt.
 
0% 10 20 30 40 50 60 70 80 90 100%
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
*=====*=====*=====*=====*=====*=====*=====*=====*=====*=====*
Broadcast Message from root (pts/1) on etc27dbadm0201 Wed Oct 29 16:21:19...
THE SYSTEM etc27dbadm0201 IS BEING SHUT DOWN NOW ! ! !
Log off now or risk your files being damaged
 
Task complete with no errors.

4. Check the system log and the status of all logical domains to ensure that they
are in active state before proceeding with the regular activity.
Example:

# dmesg | grep osc-setcoremem


Oct 29 16:27:59 etc27dbadm0201 root[2870]: [ID 702911 user.alert] osc-setcoremem: core, memory re-
configuration complete. system can be used for regular work.
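
As the tool's note suggests, you can also confirm that no logical domain is left in a non-active
state; this command should return only the header line:

# ldm list | grep -v active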

5. Verify the new resource allocation.


You can verify the resource allocation and check for possible osc-setcoremem errors in several
ways:
■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178
■ “Display the Current Domain Configuration (ldm)” on page 180
■ “Access osc-setcoremem Log Files” on page 196

6. Repeat this procedure if you need to change resource allocations on another compute node.

Related Information
■ “Supported Domain Configurations” on page 174
■ “Plan CPU and Memory Allocations” on page 175
■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178


■ “Display the Current Domain Configuration (ldm)” on page 180


■ “Access osc-setcoremem Log Files” on page 196

Change CPU/Memory Allocations (Core Granularity)

Perform this procedure on each compute node to change its CPU and memory resource
allocation at the core level.

Note - To find out if you can perform this procedure, see “Supported Domain
Configurations” on page 174.

The tool makes these changes:

■ Automatically detects Root Domains.


■ Calculates the minimum and maximum resources for all domains, and only enables you to
select valid quantities.
■ Presents viable memory capacities for you to select, based on your core allocations.
■ Modifies domain resources according to the choices you make.
■ (If needed) Stops nonprimary domains.
■ (If needed) Reboots the primary domain with new resources.
■ (If needed) Brings up nonprimary domains with new resources.

The examples in this procedure show a SuperCluster T5-8 compute node. The concepts in this
procedure also apply to other SuperCluster models.

This table shows the allocation plan for this example (see “Plan CPU and Memory
Allocations” on page 175).

Domain                       Domain Type    Cores Before    Cores After    Memory Before (GB)    Memory After (GB)
primary                      Dedicated      16              16             256                   256
ssccn2-dom1                  Dedicated      16              22             256                   384
ssccn2-dom2                  Dedicated      32              32             512                   512
ssccn2-dom3                  Dedicated      16              8              256                   64
ssccn2-dom4                  Dedicated      16              18             256                   320
ssccn2-dom5                  Root           n/a             n/a            n/a                   n/a
Unallocated                                 28              28             448                   448
Total allocated resources                   124             124            1536                  1536

1. Log in as superuser on the compute node's control domain.

2. Activate any inactive domains using the ldm bind command.


The tool does not continue if any inactive domains are present.

3. Run osc-setcoremem to reconfigure the resources.


Respond when prompted. Press Enter to select the default value.

# /opt/oracle.supercluster/bin/osc-setcoremem
 
osc-setcoremem
v1.0 built on Oct 29 2014 10:21:05
 
 
Current Configuration: Full-Rack T5-8 SuperCluster
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 16 | 256 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom2 | 32 | 512 | Dedicated | 4 | 32 |
| ssccn2-dom3 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom4 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 28 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
[Note] Following domains will be skipped in this session.
 
Root Domains
------------
ssccn2-dom5
 
 
CPU allocation preference:
 


1. Socket level
2. Core level
 
In case of Socket level granularity, proportional memory capacity is
automatically selected for you.
 
Choose Socket or Core level [S or C] C
 
Step 1 of 2: Core count selection
 
primary : desired number of cores [min: 9, max: 80. default: 16] : <Enter>
you chose [16] cores for primary domain
 
ssccn2-dom1 : desired number of cores [min: 4, max: 68. default: 16] : 22
you chose [22] cores for ssccn2-dom1 domain
 
ssccn2-dom2 : desired number of cores [min: 4, max: 50. default: 32] : <Enter>
you chose [32] cores for ssccn2-dom2 domain
 
ssccn2-dom3 : desired number of cores [min: 4, max: 22. default: 16] : 8
you chose [8] cores for ssccn2-dom3 domain
 
ssccn2-dom4 : desired number of cores [min: 4, max: 18. default: 16] : 18
you chose [18] cores for ssccn2-dom4 domain
 
New Configuration in progress after Core count selection:
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 16 | 256 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 22 | 256 | Dedicated | 4 | 96 |
| ssccn2-dom2 | 32 | 512 | Dedicated | 4 | 128 |
| ssccn2-dom3 | 8 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom4 | 18 | 256 | Dedicated | 4 | 80 |
| *ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 28 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
 
Step 2 of 2: Memory selection
 
primary : desired memory capacity in GB (must be 16 GB aligned) [min: 64, max: 1200. default:
256] : <Enter>
you chose [256 GB] memory for primary domain
 

ssccn2-dom1 : desired memory capacity in GB (must be 16 GB aligned) [min: 96, max: 1040. default:
352] : 384
you chose [384 GB] memory for ssccn2-dom1 domain
 
ssccn2-dom2 : desired memory capacity in GB (must be 16 GB aligned) [min: 128, max: 784. default:
512] : <Enter>
you chose [512 GB] memory for ssccn2-dom2 domain
 
ssccn2-dom3 : desired memory capacity in GB (must be 16 GB aligned) [min: 32, max: 304. default:
128] : 64
you chose [64 GB] memory for ssccn2-dom3 domain
 
ssccn2-dom4 : desired memory capacity in GB (must be 16 GB aligned) [min: 80, max: 320. default:
288] : 320
you chose [320 GB] memory for ssccn2-dom4 domain
 
New Configuration in progress after Memory selection:
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 16 | 256 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 22 | 384 | Dedicated | 4 | 96 |
| ssccn2-dom2 | 32 | 512 | Dedicated | 4 | 128 |
| ssccn2-dom3 | 8 | 64 | Dedicated | 4 | 32 |
| ssccn2-dom4 | 18 | 320 | Dedicated | 4 | 80 |
| *ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 28 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
 
The following domains will be stopped and restarted:
 
ssccn2-dom4
ssccn2-dom1
ssccn2-dom3
 
This configuration does not require rebooting the control domain.
Do you want to proceed? Y/N : y
 
All activity is being recorded in log file:
/opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_10-29-2014_16:58:54.log
 
Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
It may take a while. Please be patient and do not interrupt.
 
0% 10 20 30 40 50 60 70 80 90 100%
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
*=====*=====*=====*=====*=====*=====*=====*=====*=====*=====*
 
Task complete with no errors.
This concludes socket/core, memory reconfiguration.
You can continue using the system.

4. Verify the new resource allocation.


You can verify the resource allocation and check for possible osc-setcoremem errors in several
ways:
■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178
■ “Display the Current Domain Configuration (ldm)” on page 180
■ “Access osc-setcoremem Log Files” on page 196
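If you just want a quick look at the new allocations, you can also run the ldm command directly on the control domain. This is a minimal sketch, not a substitute for the referenced procedures:

# ldm list
# ldm list -o core,memory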

5. Repeat this procedure if you need to change resource allocations on another


compute node.

Related Information
■ “Supported Domain Configurations” on page 174
■ “Plan CPU and Memory Allocations” on page 175
■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178
■ “Display the Current Domain Configuration (ldm)” on page 180
■ “Access osc-setcoremem Log Files” on page 196

Park Cores and Memory

Perform this procedure on each compute node to move CPU and memory resources from
dedicated domains into logical CPU and memory repositories, making the resources available
for I/O Domains.

If you are parking cores and memory, plan carefully. Once you park resources and create I/O
Domains you cannot move resources back to dedicated domains.

Note - To find out if you can perform this procedure, see “Supported Domain
Configurations” on page 174.

The examples in this procedure show a SuperCluster T5-8 full rack. The concepts in this
procedure also apply to other SuperCluster models.

This is an example of a plan that reduces resources for primary, ssccn2-dom2, and ssccn2-dom4 so that those resources are parked (unallocated). The parked resources can later be used by I/O Domains.

This table shows the allocation plan (see “Plan CPU and Memory Allocations” on page 175).

Domain                 Domain Type   Cores Before   Cores After   Memory Before (GB)   Memory After (GB)
primary                Dedicated     32             16            512                  256
ssccn2-dom1            Dedicated     16             16            256                  256
ssccn2-dom2            Dedicated     16             8             256                  64
ssccn2-dom3            Dedicated     16             16            256                  256
ssccn2-dom4            Dedicated     16             4             256                  64
ssccn2-dom5            Root          n/a            n/a           n/a                  n/a
Unallocated Resources  --            28             64            448                  1088
Total Resources        --            124            124           1984                 1984

1. Log in as superuser on the compute node's control domain.

2. Activate any inactive domains using the ldm bind command.


The tool does not continue if any inactive domains are present.

3. Run osc-setcoremem to change resource allocations.


In this example, some resources are left unallocated, which parks them.
Respond when prompted. Press Enter to select the default value.

# /opt/oracle.supercluster/bin/osc-setcoremem
 
osc-setcoremem
v1.0 built on Oct 29 2014 10:21:05
 
 
Current Configuration: Full-Rack T5-8 SuperCluster
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 32 | 512 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom2 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom3 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom4 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 28 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
[Note] Following domains will be skipped in this session.
 
Root Domains
------------
ssccn2-dom5
 
 
CPU allocation preference:
 
1. Socket level
2. Core level
 
In case of Socket level granularity, proportional memory capacity is
automatically selected for you.
 
Choose Socket or Core level [S or C] C
 
Step 1 of 2: Core count selection
 
primary : desired number of cores [min: 9, max: 80. default: 32] : 16
you chose [16] cores for primary domain
 
ssccn2-dom1 : desired number of cores [min: 4, max: 68. default: 16] : <Enter>
you chose [16] cores for ssccn2-dom1 domain
 
ssccn2-dom2 : desired number of cores [min: 4, max: 56. default: 16] : 8
you chose [8] cores for ssccn2-dom2 domain
 
ssccn2-dom3 : desired number of cores [min: 4, max: 52. default: 16] : <Enter>
you chose [16] cores for ssccn2-dom3 domain
 
ssccn2-dom4 : desired number of cores [min: 4, max: 40. default: 16] : 4
you chose [4] cores for ssccn2-dom4 domain
 
New Configuration in progress after Core count selection:
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 16 | 512 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 16 | 256 | Dedicated | 4 | 64 |
| ssccn2-dom2 | 8 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom3 | 16 | 256 | Dedicated | 4 | 64 |
| ssccn2-dom4 | 4 | 256 | Dedicated | 4 | 32 |
| *ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 64 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
 
Step 2 of 2: Memory selection
 
primary : desired memory capacity in GB (must be 16 GB aligned) [min: 64, max: 1344. default:
256] : <Enter>
you chose [256 GB] memory for primary domain
 
ssccn2-dom1 : desired memory capacity in GB (must be 16 GB aligned) [min: 64, max: 1152. default:
256] : <Enter>
you chose [256 GB] memory for ssccn2-dom1 domain
 
ssccn2-dom2 : desired memory capacity in GB (must be 16 GB aligned) [min: 32, max: 928. default:
128] : 64
you chose [64 GB] memory for ssccn2-dom2 domain
 
ssccn2-dom3 : desired memory capacity in GB (must be 16 GB aligned) [min: 64, max: 928. default: 256] :
<Enter>
you chose [256 GB] memory for ssccn2-dom3 domain
 
ssccn2-dom4 : desired memory capacity in GB (must be 16 GB aligned) [min: 32, max: 704. default: 64] :
<Enter>
you chose [64 GB] memory for ssccn2-dom4 domain
 
New Configuration in progress after Memory selection:
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 16 | 256 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 16 | 256 | Dedicated | 4 | 64 |
| ssccn2-dom2 | 8 | 64 | Dedicated | 4 | 32 |
| ssccn2-dom3 | 16 | 256 | Dedicated | 4 | 64 |
| ssccn2-dom4 | 4 | 64 | Dedicated | 4 | 32 |
| *ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 64 | 1088 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
 
The following domains will be stopped and restarted:

 
ssccn2-dom4
ssccn2-dom2
 
This configuration requires rebooting the control domain.
Do you want to proceed? Y/N : Y
 
IMPORTANT NOTE:
+----------------------------------------------------------------------------+
|After the reboot, osc-setcoremem attempts to complete CPU, memory re-configuration|
| Please check syslog and the state of all domains before using the system. |
| eg., dmesg | grep osc-setcoremem ; ldm list | grep -v active ; date |
+----------------------------------------------------------------------------+
 
All activity is being recorded in log file:
/opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_10-29-2014_17:29:41.log
 
Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
It may take a while. Please be patient and do not interrupt.
 
0% 10 20 30 40 50 60 70 80 90 100%
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
*=====*=====*=====*=====*=====*=====*=====*=====*=====*=====*
Broadcast Message from root (pts/1) on etc27dbadm0201 Wed Oct 29 17:31:48...
THE SYSTEM etc27dbadm0201 IS BEING SHUT DOWN NOW ! ! !
Log off now or risk your files being damaged
 
Task complete with no errors.
 
When the system is back online, check the log file to ensure that all re-configuration steps were
successful.
 
# tail -20 /opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_10-29-2014_17:29:41.log
 
Task complete with no errors.
 
::Post-reboot activity::
 
Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
It may take a while. Please be patient and do not interrupt.
 
 
Executing ldm commands ..
 
0% 10 20 30 40 50 60 70 80 90 100%
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
*=====*=====*=====*=====*=====*=====*=====*=====*=====*=====*
 

Task complete with no errors.


This concludes socket/core, memory reconfiguration.
You can continue using the system.

4. If the tool indicated that a reboot was needed, after the system reboots, log in as
root on the compute node's control domain.

5. Verify the new resource allocation.


You can verify the resource allocation and check for possible osc-setcoremem errors in several
ways:

■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178


■ “Display the Current Domain Configuration (ldm)” on page 180
■ “Access osc-setcoremem Log Files” on page 196

6. Verify the parked cores.


In this example, there are 64 parked cores from these domains:

■ 28 cores were parked prior to the resource allocation (These cores were placed in the logical
CPU repository when the Root Domain was created)
■ 36 cores are parked from primary, ssccn2-dom2, and ssccn2-dom4.

# ldm list-devices -p core | grep cid | wc -l


64

7. Verify the parked memory.


In this example, memory is parked:

■ 2 x 224 GB memory blocks are parked from the Root Domain.


■ The other memory blocks are parked from the dedicated domains.

When the memory is added together it equals approximately 1088 GB.

# ldm list-devices memory


MEMORY
PA SIZE
0x30000000 15616M
0x2400000000 114176M
0x82000000000 128G
0x181000000000 192G
0x281000000000 192G
0x300000000000 224G
0x380000000000 224G
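If you prefer not to add the memory blocks by hand, a one-line check similar to the following can total them. This is a sketch that assumes the SIZE column is reported in M or G units, as in the example above; with this output it reports roughly 1086 GB, consistent with the approximately 1088 GB noted above:

# ldm list-devices memory | awk '$2 ~ /M$/ {t+=$2/1024} $2 ~ /G$/ {t+=$2} END {printf "total parked: ~%d GB\n", t}'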

8. Repeat this procedure if you need to change resource allocations on the other
compute node.

Related Information
■ “Supported Domain Configurations” on page 174
■ “Plan CPU and Memory Allocations” on page 175
■ “Display the Current Domain Configuration (osc-setcoremem)” on page 178
■ “Display the Current Domain Configuration (ldm)” on page 180
■ “Access osc-setcoremem Log Files” on page 196

Access osc-setcoremem Log Files

The osc-setcoremem command creates a timestamped log file for each session.

1. Log in as superuser on the compute node's control domain.

2. Change directories to the log file directory and list the contents.

# cd /opt/oracle.supercluster/osc-setcoremem/log
# ls
 

3. Use a text reader of your choice to view the contents of a log file.

# more log_file_name

Example:

# cat /opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_10-29-2014_16\:58\:54.log
 
# ./osc-setcoremem
 
osc-setcoremem
v1.0 built on Oct 29 2014 10:21:05
 
 
Current Configuration: Full-Rack T5-8 SuperCluster
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 16 | 256 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom2 | 32 | 512 | Dedicated | 4 | 32 |
| ssccn2-dom3 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom4 | 16 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 28 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
[Note] Following domains will be skipped in this session.
 
Root Domains
------------
ssccn2-dom5
 
 
CPU allocation preference:
 
1. Socket level
2. Core level
 
In case of Socket level granularity, proportional memory capacity is
automatically selected for you.
 
Choose Socket or Core level [S or C]
user input: 'C'
 
Step 1 of 2: Core count selection
 
primary : desired number of cores [min: 9, max: 80. default: 16] :
user input (desired cores): '' you chose [16] cores for primary domain
 
ssccn2-dom1 : desired number of cores [min: 4, max: 68. default: 16] :
user input (desired cores): '22' you chose [22] cores for ssccn2-dom1 domain
 
ssccn2-dom2 : desired number of cores [min: 4, max: 50. default: 32] :
user input (desired cores): '' you chose [32] cores for ssccn2-dom2 domain
 
ssccn2-dom3 : desired number of cores [min: 4, max: 22. default: 16] :
user input (desired cores): '8' you chose [8] cores for ssccn2-dom3 domain
 
ssccn2-dom4 : desired number of cores [min: 4, max: 18. default: 16] :
user input (desired cores): '18' you chose [18] cores for ssccn2-dom4 domain
 
New Configuration in progress after Core count selection:

 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 16 | 256 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 22 | 256 | Dedicated | 4 | 96 |
| ssccn2-dom2 | 32 | 512 | Dedicated | 4 | 128 |
| ssccn2-dom3 | 8 | 256 | Dedicated | 4 | 32 |
| ssccn2-dom4 | 18 | 256 | Dedicated | 4 | 80 |
| *ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 28 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
 
Step 2 of 2: Memory selection
 
primary : desired memory capacity in GB (must be 16 GB aligned) [min: 64, max: 1200. default:
256] :
user input (desired memory): '' GB you chose [256 GB] memory for primary domain
 
ssccn2-dom1 : desired memory capacity in GB (must be 16 GB aligned) [min: 96, max: 1040. default:
352] :
user input (desired memory): '384' GB you chose [384 GB] memory for ssccn2-dom1 domain
 
ssccn2-dom2 : desired memory capacity in GB (must be 16 GB aligned) [min: 128, max: 784. default:
512] :
user input (desired memory): '' GB you chose [512 GB] memory for ssccn2-dom2 domain
 
ssccn2-dom3 : desired memory capacity in GB (must be 16 GB aligned) [min: 32, max: 304. default: 128] :
user input (desired memory): '64' GB you chose [64 GB] memory for ssccn2-dom3 domain
 
ssccn2-dom4 : desired memory capacity in GB (must be 16 GB aligned) [min: 80, max: 320. default: 288] :
user input (desired memory): '320' GB you chose [320 GB] memory for ssccn2-dom4 domain
 
New Configuration in progress after Memory selection:
 
+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN | CORES | MEM_GB | TYPE | CORES | MEM_GB |
+-------------------------+-------+--------+-----------+-------+--------+
| primary | 16 | 256 | Dedicated | 9 | 64 |
| ssccn2-dom1 | 22 | 384 | Dedicated | 4 | 96 |
| ssccn2-dom2 | 32 | 512 | Dedicated | 4 | 128 |
| ssccn2-dom3 | 8 | 64 | Dedicated | 4 | 32 |
| ssccn2-dom4 | 18 | 320 | Dedicated | 4 | 80 |
| *ssccn2-dom5 | 4 | 64 | Root | 4 | 64 |
+-------------------------+-------+--------+-----------+-------+--------+
| unallocated or parked | 28 | 448 | -- | -- | -- |
+-------------------------+-------+--------+-----------+-------+--------+
 
 
The following domains will be stopped and restarted:
 
ssccn2-dom4
ssccn2-dom1
ssccn2-dom3
 
This configuration does not require rebooting the control domain.
Do you want to proceed? Y/N :
user input: 'y'
 
Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
It may take a while. Please be patient and do not interrupt.
 
 
Executing ldm commands ..
 
0% 10 20 30 40 50 60 70 80 90 100%
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
*=====*=====*=====*=====*=====*=====*=====*=====*=====*=====*
 
Task complete with no errors.
 
This concludes socket/core, memory reconfiguration.
You can continue using the system.

Related Information

■ “View the SP Configuration” on page 199


■ “Revert to a Previous CPU/Memory Configuration” on page 201
■ “Remove a CPU/Memory Configuration” on page 202

View the SP Configuration

When you reallocate resources using the osc-setcoremem command, osc-setcoremem saves the
new configuration to the service processor (SP) in this format:

CM_dom1_dom2_dom3_..._TimeStamp

where:

■ CM_ – indicates a core/memory configuration that was created sometime after the initial
installation.
■ domx is expressed with this nomenclature:
■ xC or xS – CPU resources in number (x) of cores (C) or sockets (S)
■ xG or xT – Memory resources in number (x) of gigabytes (G) or number of terabytes (T)
■ TimeStamp – in the format MMDDYYYYHHMM

This file name example . . .

CM_2S1T_1S512G_3S1536G_082020141354

. . . represents a configuration created on August 20, 2014 at 13:54 and has three domains with
these resources:

■ 2-sockets, 1-TB memory


■ 1-socket, 512 GB memory
■ 3-sockets, 1536 GB memory

To see more details about the resource allocations, you can use the SP configuration timestamp
to locate and view the corresponding osc-setcoremem log file.

1. Log in as superuser on the compute node's control domain.

2. Display the SP configuration.


Examples:

■ Output indicating no custom CPU/memory configurations:


The file called V_B4_4_1_20140804141204 is the initial resource configuration file that was
created when the system was installed.

# ldm list-config
factory-default
V_B4_4_1_20140804141204
after_install_backup [next poweron]
■ Output indicating three additional CPU/memory configurations:

# ldm list-config
factory-default
V_B4_4_1_20140804141204
after_install_backup
CM_2S1T_1S512G_3S1536G_082020141354
CM_2S1T_1S512G_3S1536G_082120140256
CM_1S512G_1S512G_4S2T_082120141521 [next poweron]

3. View the corresponding log file.

# more /opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_08-21-2014_15:21*.log

Related Information
■ “Access osc-setcoremem Log Files” on page 196
■ “Revert to a Previous CPU/Memory Configuration” on page 201
■ “Remove a CPU/Memory Configuration” on page 202

Revert to a Previous CPU/Memory Configuration

Use this procedure to revert a compute node to a previous CPU/Memory configuration. You
must perform this procedure on each member in a cluster. The tool does not automatically
propagate changes to every cluster member.

Note - To find out if you can perform this procedure, see “Supported Domain
Configurations” on page 174.

1. Log in as superuser on the compute node's control domain.

2. List previous configurations.

Note - You can also view previous configurations in the log files. See “Access osc-setcoremem
Log Files” on page 196.

# ldm list-config
factory-default
V_B4_4_1_20140804141204
after_install_backup
CM_2S1T_1S512G_3S1536G_082020141354
CM_2S1T_1S512G_3S1536G_082120140256
CM_1S512G_1S512G_4S2T_082120140321 [next poweron]

For details about SP configuration files see “View the SP Configuration” on page 199.

3. Revert to a previous configuration.

# ldm set-config CM_2S1T_1S512G_3S1536G_082020141354

4. Halt all domains, then halt the primary domain.
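For example, one approach (a sketch only; domain names are placeholders, and any zones running inside the domains should be shut down first) is to stop each guest domain from the control domain, then halt the control domain itself:

# ldm stop-domain domain-name
# init 0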

5. Restart the system from the service processor.

# #.
 
-> cd /SP
-> stop /SYS
Are you sure you want to stop /SYS (y/n) ? y
Stopping /SYS
 
-> start /SYS
Are you sure you want to start /SYS (y/n) ? y
Starting /SYS

6. Boot all domains and zones.

Related Information
■ “Access osc-setcoremem Log Files” on page 196
■ “View the SP Configuration” on page 199
■ “Remove a CPU/Memory Configuration” on page 202

Remove a CPU/Memory Configuration

The compute node's service processor has a limited amount of memory. If you are unable to
create a new configuration because the service processor ran out of memory, delete unused
configurations using this procedure.

1. List all current configurations.

# ldm list-config
factory-default
V_B4_4_1_20140804141204
after_install_backup
CM_2S1T_1S512G_3S1536G_082020141354
CM_2S1T_1S512G_3S1536G_082120140256
CM_1S512G_1S512G_4S2T_082120140321 [next poweron]

2. Determine which configurations are safe to remove.


It is safe to remove any configuration that contains the string CM_ or _ML, as long as it is not
marked [current] or [next poweron].
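For example, one way to list candidate configurations, based on the naming convention above, is:

# ldm list-config | grep -E 'CM_|_ML' | grep -Ev 'current|next poweron'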

3. Remove a configuration.
Example:

# ldm remove-spconfig CM_2S1T_1S512G_3S1536G_082020141354

Related Information
■ “Access osc-setcoremem Log Files” on page 196
■ “View the SP Configuration” on page 199
■ “Revert to a Previous CPU/Memory Configuration” on page 201

Monitoring the System

These topics describe the monitoring options for Oracle SuperCluster T5-8.

■ “Monitoring the System Using Auto Service Request” on page 205


■ “Monitoring the System Using OCM” on page 221
■ “System Requirements” on page 227

Monitoring the System Using Auto Service Request


These topics describe how to monitor the system using Oracle Auto Service Request (ASR).

Note - Oracle personnel might have configured ASR during the installation of SuperCluster.

■ “ASR Overview” on page 205


■ “ASR Resources” on page 206
■ “ASR Installation Overview” on page 207
■ “Configure SNMP Trap Destinations for Exadata Storage Servers” on page 208
■ “Configure ASR on the ZFS Storage Appliance” on page 210
■ “Configure ASR on SPARC T5-8 Servers (Oracle ILOM)” on page 213
■ “Configuring ASR on the SPARC T5-8 Servers (Oracle Solaris 11)” on page 215
■ “Approve and Verify ASR Asset Activation” on page 219

ASR Overview
ASR automatically opens service requests when specific hardware faults occur. In many cases,
Oracle Support Services can begin resolving the issue immediately, often before the system
administrator is aware that a problem exists.

The telemetry data that is sent from the ASR Manager to Oracle is encrypted.


When ASR detects a fault, ASR sends an email message to your MOS email account for ASR,
and to the technical contact for the activated asset, notifying them of the creation of the service
request.

To enable this feature, ASR Manager software must be installed on a server (a server other
than SuperCluster). The ASR Manager server must have connectivity to SuperCluster, and
an outbound Internet connection using HTTPS or an HTTPS proxy. Certain SuperCluster
components must be configured to send hardware fault telemetry to the ASR Manager server.

For SuperCluster systems, ASR uses these telemetry sources to detect fault events:
■ FMA – Provides CPU and memory fault information from the host.
■ Oracle ILOM – Provides fault information, power and environmental, and CPU and
memory fault information from the service processor.
■ Exadata-detected Events (HALRT) – Provides fault coverage for disks, flash, and PCI
cards within Oracle SuperCluster.
■ ZFS storage appliance – Provides fault events detected within the systems and disk arrays
of the included Storage Appliance.
■ IB switch management module – Provides fault coverage for power, memory, storage, and
battery.

Consider this information when using ASR:


■ ASR is applicable only for component faults. Not all component failures are covered; however, components that are most likely to generate faults, such as disks, fans, and power supplies, are covered by ASR.
■ ASR is not a replacement for other monitoring mechanisms, such as SMTP and SNMP
alerts. ASR is a complementary mechanism that expedites and simplifies the delivery
of replacement hardware. ASR should not be used for downtime events in high-priority
systems. For high-priority events, contact Oracle Support Services directly.
■ There are occasions when a service request might not be automatically filed. This can
happen due to problems with the SNMP protocol or loss of connectivity to the ASR
Manager. You must continue to monitor the systems for faults and call Oracle Support
Services if you do not receive notice that a service request has been automatically filed.

ASR Resources
For ASR details, refer to these resources:

Resource                                                          Links
The main ASR web page                                             http://www.oracle.com/asr
ASR documentation                                                 http://www.oracle.com/technetwork/systems/asr/documentation
ASR download page                                                 http://www.oracle.com/technetwork/systems/asr/downloads
Oracle Services Tools Bundle documentation                        http://docs.oracle.com/cd/E35557_01
A list of SuperCluster components that are qualified ASR assets   http://docs.oracle.com/cd/E37710_01/doc.41/e37287/toc.htm
ASR fault coverage                                                http://docs.oracle.com/cd/E37710_01/doc.41/e55817/toc.htm

ASR Installation Overview


This is a high-level description of the tasks you perform to configure ASR on SuperCluster. For
more details, refer to ASR documentation. See “ASR Resources” on page 206.

1. Create or verify your MOS account at http://support.oracle.com.


Ensure that your MOS account is correctly set up:
■ Oracle Premier Support for Systems or Oracle/Sun Limited Warranty
■ Technical contact responsible for SuperCluster system
■ Valid shipping address for parts
2. Designate a standalone system to serve as the ASR Manager and install the ASR Manager
software.
The server must run either Oracle Solaris or Linux, and Java must be installed.
You must have superuser access to the ASR Manager system.
To download ASR Manager software, go to: http://www.oracle.com/technetwork/systems/asr/downloads
For installation instructions, refer to the ASR documentation that corresponds to the version
of ASR Manager you plan to install. The documentation is available at:
http://docs.oracle.com/cd/E37710_01/index.htm
3. Ensure that the ASR Manager server has connectivity to the Internet using HTTPS.
You might need to open certain ports to your datacenter. For more information, see the
Oracle ASR Security White paper, located here:
http://docs.oracle.com/cd/E37710_01/index.htm
4. (Optional) Obtain these documents:
■ Oracle SuperCluster T5-8 Site Checklists
■ Oracle SuperCluster T5-8 Configuration Worksheets

The information in these documents can provide helpful information when you configure
SuperCluster for ASR.

5. Configure and activate SuperCluster ASR assets.


Refer to ASR documentation and tasks in this section.

Note - An active ASR Manager must be installed and running before you configure ASR
assets.

Note - To monitor Oracle Solaris 10 assets, you must install the latest STB bundle on
SuperCluster. Refer to Doc ID 1153444.1 to download the latest STB bundle from MOS:
https://support.oracle.com

6. Approve ASR assets in MOS.


Follow the instructions in the ASR documentation.

Configure SNMP Trap Destinations for Exadata


Storage Servers
Note - Do not attempt to copy and paste commands that span across multiple lines from this
section. Manually type commands that span across multiple lines to ensure the commands are
typed properly.

Complete the following steps on each Exadata Storage Server:

1. Log in as celladmin on the Exadata Storage Server.

2. On the Exadata Storage Server, add SNMP trap destinations:


# cellcli -e "alter cell snmpSubscriber=(host ='ASR-Manager-name-or-IP-address',port=162,
community=public,type=asr)"
Note that single quotes are required around the ASR-Manager-name-or-IP-address entry.
Following are the element definitions for the command above:
■ host='ASR-Manager-name-or-IP-address' – The ASR Manager hostname can be used when
DNS is enable for the site. If DNS is not running, the IP address is preferred, but the ASR
Manager hostname can be used if the entry is added to the /etc/hosts file.
■ type=asr – Shows the ASR Manager as being a special type of SNMP subscriber.
■ community=public – The required value of the community string. This value can be
modified to be a different string based on customer network requirements.

■ port=162 – The SNMP port. This port value is customer dependant. It can be configured as
a different port based on network requirements, or it may need to be changed for ASR to
work correctly in a managed environment.

3. Validate if Oracle ILOM auto-activation occurred (if the network and Oracle ILOM
are set up correctly):
# asr list_asset
Following is example output:

IP_ADDRESS     HOST_NAME       SERIAL_NUMBER   ASR       PROTOCOL   SOURCE
----------     ---------       -------------   ---       --------   -----
10.60.40.105   ssc1cel01       1234FMM0CA      Enabled   SNMP       ILOM
10.60.40.106   ssc1cel02       1235FMM0CA      Enabled   SNMP       ILOM
10.60.40.107   ssc1cel03       1236FMM0CA      Enabled   SNMP       ILOM
10.60.40.117   ssc1cel01-ilom  1234FMM0CA      Enabled   SNMP,HTTP  EXADATA-SW
10.60.40.118   ssc1cel02-ilom  1235FMM0CA      Enabled   SNMP,HTTP  EXADATA-SW
10.60.40.119   ssc1cel03-ilom  1236FMM0CA      Enabled   SNMP,HTTP  EXADATA-SW

■ If all Oracle ILOMs for the Exadata Storage Servers are in the list, go to Step 5.
■ If Oracle ILOMs are not in the list, go to Step 4.

4. On the ASR Manager, activate the Oracle ILOMs of the Exadata Storage Servers:
# asr activate_asset -i ILOM-IP-address
or
# asr activate_asset -h ILOM-hostname

Note - If the last step fails, verify that port 6481 on the Oracle ILOM is open. If port 6481 is
open and the step still fails, contact ASR Support.

5. Activate the Exadata OS side of the ASR support:


# asr activate_exadata -i host-management-IP-address -h host-management-hostname -l ILOM-
IP-address

or
# asr activate_exadata -i host-management-IP-address -h host-management-hostname -n ILOM-
hostname

6. Validate all Exadata Storage Servers are visible on the ASR Manager:
# asr list_asset

You should see both the Oracle ILOM and the host referenced in the list, with the same serial
number, as shown in the following example output:

IP_ADDRESS     HOST_NAME       SERIAL_NUMBER   ASR       PROTOCOL   SOURCE
----------     ---------       -------------   ---       --------   -----
10.60.40.105   ssc1cel01       1234FMM0CA      Enabled   SNMP       ILOM
10.60.40.106   ssc1cel02       1235FMM0CA      Enabled   SNMP       ILOM
10.60.40.107   ssc1cel03       1236FMM0CA      Enabled   SNMP       ILOM
10.60.40.117   ssc1cel01-ilom  1234FMM0CA      Enabled   SNMP,HTTP  EXADATA-SW
10.60.40.118   ssc1cel02-ilom  1235FMM0CA      Enabled   SNMP,HTTP  EXADATA-SW
10.60.40.119   ssc1cel03-ilom  1236FMM0CA      Enabled   SNMP,HTTP  EXADATA-SW

7. On the Exadata Storage Server, validate the configuration:


# cellcli -e "list cell attributes snmpsubscriber"

8. On the Exadata Storage Server, validate the SNMP configuration:


# cellcli -e "alter cell validate snmp type=asr"
The MOS contact will receive an email as confirmation.

9. Repeat these instructions for every Exadata Storage Server in your Oracle
SuperCluster T5-8.

10. When you have completed these instructions for every Exadata Storage Server
in your Oracle SuperCluster T5-8, approve and verify contacts to the Exadata
Storage Servers on MOS.
See “Approve and Verify ASR Asset Activation” on page 219 for those instructions.
For more information on the process, see ASR MOS 5.3+ Activation Process (Doc ID
1329200.1).

Configure ASR on the ZFS Storage Appliance


To activate the ZFS storage appliance included in your system, complete the following steps on
each ZFS storage controller:

1. In a web browser, type the IP address or host name you assigned to the host
management port of either ZFS storage controller as follows:
https://storage-controller-ipaddress:215
or

https://storage-controller-hostname:215
The login screen appears.

2. Type root in the Username field, type the root password, and press Enter.

3. Click the Configuration tab, and click SERVICES, and then on the left navigation
pane, click Services to display the list of services.

4. Scroll down in the screen and click Phone Home, as shown in the following
figure.

When you click Phone Home, the Phone Home page is displayed, as shown in the following
figure.

5. If you are using a web proxy to connect to the Internet from the storage
appliance, select the Use web proxy option, and type the following information:
■ In the Host:port field, type the complete host name of your web proxy server and the port.
■ In the Username field, type your user name for accessing the web proxy server.
■ In the Password field, type the password.

6. Click the pencil icon in the registration section.


A Privacy Statement is displayed. Click OK, complete the My Oracle Support user name and password section, and click OK.

7. When the account is verified, select the Sun Inventory and Enable Phone Home
options.

8. After typing the information, click APPLY.

9. When the Service Enable / Disable popup is presented, select the Enable option.

10. Repeat these instructions for every ZFS storage appliance in your system.


11. When you have completed these instructions for every ZFS storage appliance
in your system, approve and verify contacts to the ZFS storage appliances on
MOS.
See “Approve and Verify ASR Asset Activation” on page 219 for those instructions.
For more information on the process, see ASR MOS 5.3+ Activation Process (Doc ID
1329200.1).

Configure ASR on SPARC T5-8 Servers (Oracle


ILOM)

Note - Do not attempt to copy and paste commands that span across multiple lines from this
section. Manually type commands that span across multiple lines to ensure the commands are
typed properly.

To configure the Oracle ILOM for SPARC T5-8 servers, complete the following steps on each
SPARC T5-8 server:

1. Log in to the SPARC T5-8 server Oracle ILOM.

2. Display the available rules:


# show /SP/alertmgmt/rules
This lists the rules available, similar to the following:
1
2
3
...
15

3. Pick one of the rules and type the following command to determine if that rule is
currently being used:
# show /SP/alertmgmt/rules/rule-number
For example:
# show /SP/alertmgmt/rules/1
■ If you see output similar to the following:

Properties:
type = snmptrap
level = minor
destination = 10.60.10.243
destination_port = 0
community_or_username = public
snmp_version = 2c
testrule = (Cannot show property)
this rule is currently being used and should not be used for this exercise (the destination
address shown would be the IP address of the ASR Manager in this case). If you see output
similar to the preceding example, pick another rule and type the show /SP/alertmgmt/
rules/rule-number command again, this time using another rule in the list.
■ If you see output similar to the following:
Properties:
type = snmptrap
level = disable
destination = 0.0.0.0
destination_port = 0
community_or_username = public
snmp_version = 1
testrule = (Cannot show property)
this rule is currently unused and can be used for this exercise.

4. Type this command using the unused rule:


# set /SP/alertmgmt/rules/unused-rule-number type=snmptrap level=minor destination=IP-
address-of-ASR-Manager snmp_version=2c community_or_username=public
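To confirm that the rule now points at the ASR Manager, you can display it again using the same show command as in Step 3 (a suggested check, not part of the required steps):

# show /SP/alertmgmt/rules/unused-rule-number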

5. Log in to the ASR Manager server.

6. Activate Oracle ILOM for the SPARC T5-8 server:


asr> activate_asset -i ILOM-IP-address

7. Repeat these instructions on Oracle ILOM for both SPARC T5-8 servers in your
Oracle SuperCluster T5-8.

8. When you have completed these instructions for both SPARC T5-8 servers
in Oracle SuperCluster T5-8, approve and verify contacts to the SPARC T5-8
servers on MOS.
See “Approve and Verify ASR Asset Activation” on page 219 for those instructions.
For more information on the process, see ASR MOS 5.3+ Activation Process (Doc ID
1329200.1).

Configuring ASR on the SPARC T5-8 Servers


(Oracle Solaris 11)
Note - Do not attempt to copy and paste commands that span across multiple lines from this
section. Manually type commands that span across multiple lines to ensure the commands are
typed properly.

Oracle Solaris 11 includes the ability to send ASR fault events and telemetry to Oracle using
xml over HTTP to the ASR Manager.

To enable this capability, use the asr enable_http_receiver command on the ASR Manager.
Select a port for the HTTP receiver that is appropriate for your network environment and does
not conflict with other network services.

Perform the following tasks:


■ “Enable the HTTP Receiver on the ASR Manager” on page 215
■ “Enable HTTPS on ASR Manager (Optional)” on page 216
■ “Register SPARC T5-8 Servers With Oracle Solaris 11 or Database Domains to ASR
Manager” on page 217

Enable the HTTP Receiver on the ASR Manager

Follow this procedure on the ASR Manager to enable the HTTP receiver for Oracle Solaris 11
ASR Assets.

1. Log in to the ASR Manager system as root.

2. Verify the existing settings:


# asr show_http_receiver

3. Enable the HTTP receiver:


# asr enable_http_receiver -p port-number

where port-number is the port that you are designating for HTTP traffic.

Note - If you need to disable the HTTP receiver, run asr disable_http_receiver.

4. Verify the updated configuration:


# asr show_http_receiver

5. Verify the HTTP receiver is up and running.


In a browser, go to: http://ASR-Manager-name:port-number/asr
A message will display indicating that the HTTP receiver is up and running.
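If a browser is not convenient, a command-line check from any host that can reach the ASR Manager works as well. This assumes the curl utility is available; it is not part of the ASR tooling itself:

# curl http://ASR-Manager-name:port-number/asr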

Enable HTTPS on ASR Manager (Optional)

If you need to use HTTPS for security purposes, you can set up HTTPS/SSL for the ASR
Manager HTTP receiver.

1. Once the SSL certificate from a trusted authority is loaded into the keystore, add the following SSL connector in /var/opt/SUNWsasm/configuration/jetty/jetty.xml below the <Call name="addConnector"> sections:

<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.security.SslSocketConnector">
      <Set name="Port">443</Set>
      <Set name="maxIdleTime">30000</Set>
      <Set name="keystore">path-to-keystore</Set>
      <Set name="password">password</Set>
      <Set name="keyPassword">key-password</Set>
      <Set name="truststore">path-to-keystore</Set>
      <Set name="trustPassword">trust-password</Set>
    </New>
  </Arg>
</Call>

Passwords above can be plain text or obfuscated as follows:


java -classpath lib/jetty-6.1.7.jar:lib/jetty-util-6.1.7.jar
org.mortbay.jetty.security.Password plaintext-password
Then copy and paste the output line starting with OBF: (including the OBF: part) into this jetty.xml config file.
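If the certificate is not yet in a keystore, one way to import it is with the Java keytool utility. This is a sketch only; the alias, certificate file, keystore path, and password are placeholders you must adapt:

# keytool -importcert -alias asr-manager -file /path/to/certificate.pem -keystore path-to-keystore -storepass password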


2. Restart OASM.

■ On a system running Oracle Solaris, type:


# svcadm restart sasm
■ On a system running Oracle Linux, type:
# /opt/SUNWsasm/bin/sasm stop-instance
# /opt/SUNWsasm/bin/sasm start-instance

3. Verify the SSL setup by accessing the following URL from a browser:
https://ASR-Manager-name/asr

Register SPARC T5-8 Servers With Oracle Solaris 11 or


Database Domains to ASR Manager

Follow this procedure to register the SPARC T5-8 server with Oracle Solaris 11 or Database
Domains to the ASR Manager.

1. Log in to the SPARC T5-8 server as root.

2. Confirm that the asr-notify service is working:


# svcs asr-notify

■ If you see the following message:


svcs: Pattern 'asr-notify' doesn't match any instances
then confirm that the asr-notify service is installed:
# pkg list asr-notify
If you see the following message:
pkg list: no packages matching 'asr-notify' installed
then install the asr-notify service:
# pkg install system/fault-management/asr-notify
Enter the svcs asr-notify command again to confirm that the asr-notify service is
working.
■ If you see the following message:

# svcs asr-notify
STATE     STIME       FMRI

online    16:06:05    svc:/system/fm/asr-notify:default

then the asr-notify service is installed and is working properly.

3. To register the ASR manager, run:


# asradm register -e http://asr-manager-host:port-number/asr
For example:
# asradm register -e http://asrmanager1.mycompany.com:8777/asr
You should see screens asking for your Oracle Support account name and password. After
entering your Oracle Support account name and password, you should see a notification, saying
that your registration is complete:

Enter Oracle SSO User Name:


Enter password:
 
Registration complete.

4. Run the following command:


# asradm list
The screen output should be similar to the following:

PROPERTY       VALUE
Status         Successfully Registered with ASR manager
System Id      system-identification-number
Asset Id       asset-identification-number
User           username
Endpoint URL   http://asr-manager-host:port-number/asr

Upon successful results of the above commands, the registration of the ASR Manager is
complete.

5. Repeat these instructions for both SPARC T5-8 servers with Oracle Solaris 11 or
Database Domains in your system.

6. When you have completed these instructions for both SPARC T5-8 servers
in your system, approve and verify contacts to the SPARC T5-8 servers on
MOS. See “Approve and Verify ASR Asset Activation” on page 219 for those
instructions.
For more information on the process, see ASR MOS 5.3+ Activation Process (Doc ID
1329200.1).


Approve and Verify ASR Asset Activation


1. On the standalone system where ASR Manager is running, run the following
command to verify the status of your system assets:
list_asset
This command should list ASR assets in your Oracle SuperCluster T5-8, including SPARC
T5-8 servers, Exadata Storage Servers, and ZFS storage controllers.

2. Log in to My Oracle Support (https://support.oracle.com).

3. In the My Oracle Support Dashboard, click the More... tab, then click Settings
from the menu.

4. In the Settings pane on the left of the window, select Pending ASR Activations
(located under the Administrative sub menu).
A complete list of all qualified ASR assets that are awaiting approval is displayed.

Note - By default, all support identifiers that you are associated with are displayed. If this list of
assets is long, you can limit the display to show only assets associated to one support identifier.
You can also search for an asset's serial number.


Note - For each component in the system, you should see two host names associated with
each serial number. If you see only the Oracle ILOM host name, that means that you did not
activate ASR for that component. If you see more than two host names associated with each
serial number, you might need to request help for ASR. To do this, open a hardware SR with
“Problem Category” set to “My - Auto Service Request (ASR) Installation and Configuration
Issues.”

5. Click the asset's serial number.


If any required information about the asset is missing, the information pop-up indicates what is needed. The ASR Activation window appears and looks like the following figure.

Note - ASR Host name is updated when an activation request is sent to Oracle from the ASR
software on the asset. (For example, from the asr activate_asset command on the ASR
Manager.)

Required fields for ASR asset activation are:

■ Contact Name: You can only select a name associated with the support identifier. Click the
drop-down menu to see the list of available names.
A contact must have the "Create SR" privilege for the asset's support identifier.
■ Street Address 1: Type the street address for the asset.


■ Country: Select the asset's country location from the drop-down menu.
■ ZIP/Postal Code: Type the ZIP/postal code for the asset's location. If there is no postal code, insert "-".
■ Distribution Email List: You can add email addresses that will receive all ASR mail notifications. Separate multiple email addresses with a comma. For example:
ASR will send email to the Contact's email address and the Distribution Email List, if
provided. This is a useful feature if your organization has a team that should be informed
about Service Requests created by ASR.

6. Click the “Approve” button to complete the ASR activation.

Note - A system asset must be in an active ASR state in My Oracle Support in order for Service
Request autocreate to work.

7. To confirm that ASR can send information to the transport server, run:
# asradm send test email-address@company.com
This command sends a test alert e-mail to the e-mail address.

Monitoring the System Using OCM


These topics describe how to monitor the system with OCM.

■ “OCM Overview” on page 221


■ “Install Oracle Configuration Manager on SPARC T5-8 Servers” on page 222

OCM Overview
Oracle Configuration Manager (OCM) collects configuration information and uploads it to
the Oracle repository. When the configuration information is uploaded daily, Oracle Support
Services can analyze the data and provide better service. When a service request is logged, the

configuration data is associated with the service request. The following are some of the benefits
of Oracle Configuration Manager:

■ Reduced time for problem resolution


■ Proactive problem avoidance
■ Improved access to best practices, and the Oracle knowledge base
■ Improved understanding of the customer's business needs
■ Consistent responses and services

The Oracle Configuration Manager software is installed and configured in each ORACLE_HOME
directory on a host. For clustered databases, only one instance is configured for Oracle
Configuration Manager. A configuration script is run on every database on the host. The Oracle
Configuration Manager collects and then sends the data to a centralized Oracle repository.

For more information, see:

■ Oracle Configuration Manager Installation and Administration Guide


■ Oracle Configuration Manager Collection Overview

Install Oracle Configuration Manager on SPARC


T5-8 Servers

Note - Do not attempt to copy and paste commands that span across multiple lines from this
section. Manually type commands that span across multiple lines to ensure the commands are
typed properly.

1. Log in to the logical domain as root.

■ For the Application Domain, log in through the 1 GbE host management network using the
IP address or host name assigned to that domain. For example:
ssh -l root ssc0101-mgmt
■ For the Database Domain, log in through the 1 GbE host management network using the IP
address or host name assigned to the SPARC T5-8 server. For example:
ssh -l root ssc01db01

2. Locate the directory where the Oracle Configuration Manager is installed.

■ For the Application Domain, Oracle Configuration Manager should be installed in the
following directory:

/opt/ocm
■ For the Database Domain, Oracle Configuration Manager should be installed in the
following directory:
/usr/lib/ocm
■ If the Oracle Configuration Manager is not installed in either of those directories, locate and
change directories to the location where you installed Oracle Configuration Manager.
The directory where the Oracle Configuration Manager is installed will be referred to as ocm-
install-dir for the remainder of these procedures.

3. Change directories to:


/ocm-install-dir/ccr/bin

4. Look for the file emCCR.


■ If you see the emCCR file in this directory, then Oracle Configuration Manager has already
been installed and configured on your SPARC T5-8 server. Do not proceed with these
instructions in this case.
■ If you do not see the emCCR file in this directory, then Oracle Configuration Manager has not
been installed on your SPARC T5-8 server. Proceed to the next step.

5. Type the following command to configure Oracle Configuration Manager on the


SPARC T5-8 server:
/ocm-install-dir/ccr/bin/configCCR

6. Type the following information in the appropriate fields in Oracle Configuration


Manager:
■ E-mail address/User Name
■ Password (optional)
This installs Oracle Configuration Manager on your SPARC T5-8 server.

7. If you are logged in to the Database Domain, or if you are running a database on the Application Domain, enable the collection of data from the database:
/ocm-install-dir/ccr/admin/scripts/installCCRSQL.sh collectconfig -s $SID -r SYS
XML files should be generated in /ocm-install-dir/ccr/hosts/SPARC-T5-8-server/state/review.
■ If the XML files were generated in this directory, proceed to the next step.
■ If the XML files were not generated in this directory, contact Oracle for support with this
issue.

8. Perform a collection as follows:


ORACLE_HOME/ccr/bin/emCCR collect

9. Log in to My Oracle Support.

10. On the home page, select the Customize page link at the top right of the
Dashboard page.
Drag the Targets button to the left and on to your dashboard.


11. Search for your system in the targets search window at the top right of the
screen.


12. Double-click on your system to get information for your system.

Monitoring the System Using EM Exadata Plug-in


Starting with Oracle SuperCluster 1.1, you can monitor all Exadata-related software and
hardware components in the cluster using Oracle Enterprise Manager Exadata 12.1.0.3 Plug-in
only in the supported configuration described in the note below.

■ “System Requirements” on page 227


■ “Known Issues With EM Exadata Plug-in” on page 227


System Requirements
Only systems with software Version 1.1 (or later) with Database Domain on Control LDOM-only environments are supported. Earlier versions of the system can be made compatible if you update to the October 2012 QMU release. You can confirm this requirement by checking the version of the compmon package installed on the system (use either the pkg info compmon or pkg list compmon command). You must have at least this version of compmon installed: pkg://exa-family/system/platform/exadata/compmon@0.5.11,5.11-0.1.0.11:20120726T024158Z

Note - With the Oracle SuperCluster software version 2.x, the compmon command name changed
to osc-compmon. Use the new name if SuperCluster is installed with the Oracle SuperCluster
v2.x release bundle. See “Identify the Version of SuperCluster Software” on page 145.
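For example, you can check the installed package and its version with commands similar to the following; use the package name that matches your SuperCluster software version:

# pkg list compmon
# pkg list osc-compmon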

Known Issues With EM Exadata Plug-in


■ The pre-requisite check script exadataDiscoveryPreCheck.pl that is bundled in Exadata plug-in 12.1.0.3 does not support the catalog.xml file. Download the latest exadataDiscoveryPreCheck.pl file from My Oracle Support as described in the “Discovery Precheck Script” section of the Oracle Enterprise Manager Exadata Management Getting Started Guide (http://docs.oracle.com/cd/E24628_01/doc.121/e27442/title.htm).
■ If multiple database clusters share the same Exadata Storage server, in one Enterprise
Manager management server environment, you can discover and monitor the first DB
Machine target and all its components. However, for additional DB Machine targets sharing
the same Exadata Storage server, the Oracle Exadata Storage Server Grid system and the
Oracle Database Exadata Storage Server System will have no Exadata Storage server
members because they are already monitored.
■ If the perfquery command installed in the system has version 1.5.8 or later, you will
encounter a bug (ID 15919339) where most columns in the HCA Port Errors metric in the
host targets for the compute nodes will be blank. If there are errors occurring on the HCA
ports, they are not reported in Enterprise Manager.
To check your version, run the following command:
perfquery -V

Configuring Exalogic Software

This section provides information about using Exalogic software on Oracle SuperCluster T5-8.

■ “Exalogic Software Overview” on page 229


■ “Exalogic Software Prerequisites” on page 230
■ “Enable Domain-Level Enhancements” on page 230
■ “Enable Cluster-Level Session Replication Enhancements” on page 231
■ “Configuring Grid Link Data Source for Dept1_Cluster1” on page 234
■ “Configuring SDP-Enabled JDBC Drivers for Dept1_Cluster1” on page 238
■ “Configuring SDP InfiniBand Listener for Exalogic Connections” on page 240

Exalogic Software Overview


Oracle Exalogic Elastic Cloud Software (EECS) includes performance optimizations for
Oracle SuperCluster T5-8 to improve input/output, thread management, and request handling
efficiency.

Additional optimizations include reduced buffer copies, which result in more efficient input/
output. Finally, session replication performance and CPU utilization are improved through
lazy deserialization, which avoids performing, on every session update, extra work that is only
necessary when a server fails.

WebLogic Server clusters can be configured with cluster-wide optimizations that further
improve server-to-server communication. The first optimization enables multiple replication
channels, which improve network throughput among WebLogic Server cluster nodes. The
second cluster optimization enables InfiniBand support for Sockets Direct Protocol, which
reduces CPU utilization as network traffic bypasses the TCP stack.


Exalogic Software Prerequisites


The following are the prerequisites for configuring the Exalogic software products for the
system:

■ The environment, including the database, storage, and network, is preconfigured as described
in Chapter 3, “Network, Storage, and Database Preconfiguration,” of the Oracle Exalogic
Enterprise Deployment Guide, located at: http://docs.oracle.com/cd/E18476_01/doc.
220/e18479/toc.htm
■ Your Oracle Exalogic domain is configured, as described in Chapter 5, “Configuring
Oracle Fusion Middleware,” in the Oracle Exalogic Enterprise Deployment Guide, located
at: http://docs.oracle.com/cd/E18476_01/doc.220/e18479/toc.htm

Enable Domain-Level Enhancements


To enable domain-level enhancements, complete the following steps:

1. Log in to the Oracle WebLogic Server Administration Console.

2. Select Domain name in the left navigation pane.


The Settings for Domain name screen is displayed. Click the General tab.

3. In your domain home page, select Enable Exalogic Optimizations, and click
Save.

4. Activate changes.

5. Stop and start your domain.


The Enable Exalogic Optimizations setting collectively enables all of the individual features
described in the following table. The Startup Option entry for each feature indicates how to
enable or disable that feature independently.

Feature: Scattered Reads
Description: Increased efficiency during I/O in environments with high network throughput
Startup Option: -Dweblogic.ScatteredReadsEnabled=true/false
MBean: KernelMBean.setScatteredReadsEnabled

Feature: Gathered Writes
Description: Increased efficiency during I/O in environments with high network throughput
Startup Option: -Dweblogic.GatheredWritesEnabled=true/false
MBean: KernelMBean.setGatheredWritesEnabled

Feature: Lazy Deserialization
Description: Increased efficiency with session replication
Startup Option: -Dweblogic.replication.enableLazyDeserialization=true/false
MBean: ClusterMBean.setSessionLazyDeserializationEnabled

Note - After enabling the optimizations, you may see the following message:
java.io.IOException: Broken pipe. You may see the same message when storage failover occurs. In
either case, you can ignore the error message.
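
If you prefer to enable the features individually rather than through the Enable Exalogic
Optimizations setting, one approach (a sketch only, following the same startWebLogic.sh
pattern this chapter uses later for -Djava.net.preferIPv4Stack) is to append the startup
options from the preceding table to JAVA_OPTIONS:

JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.ScatteredReadsEnabled=true \
  -Dweblogic.GatheredWritesEnabled=true \
  -Dweblogic.replication.enableLazyDeserialization=true"

Set any of these options to false to disable that feature.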

Enable Cluster-Level Session Replication Enhancements


You can enable session replication enhancements for Managed Servers in a WebLogic cluster to
which you will deploy a web application at a later time.

Note - If you are using Coherence*web, these session replication enhancements do not
apply. Skip these steps if you use the dizzyworld.ear application as described in Chapter 8,
“Deploying a Sample Web Application to an Oracle WebLogic Cluster” in the Oracle®
Fusion Middleware Exalogic Enterprise Deployment Guide at http://docs.oracle.com/cd/
E18476_01/doc.220/e18479/deploy.htm.

To enable session replication enhancements for Dept1_Cluster1, complete the following steps:

1. Ensure that Managed Servers in the Dept1_Cluster1 cluster are up and running,
as described in Section 5.16 “Starting Managed Servers on ComputeNode1
and ComputeNode2” of the Oracle® Fusion Middleware Exalogic Enterprise
Deployment Guide at http://docs.oracle.com/cd/E18476_01/doc.220/e18479/
create_domain.htm#BABEGAFB.

2. To set replication ports for a Managed Server, such as WLS1, complete the
following steps:

a. Under Domain Structure, click Environment and Servers.


The Summary of Servers page is displayed.


b. Click WLS1 on the list of servers.


The Settings for WLS1 is displayed.

c. Click the Cluster tab.

d. In the Replication Ports field, enter a range of ports for configuring multiple
replication channels.
For example, replication channels for Managed Servers in Dept1_Cluster1 can listen on
ports 7005 through 7015. To specify this range of ports, enter 7005-7015.

3. Create a custom network channel for each Managed Server in the cluster (for
example, WLS1) as follows:

a. Log in to the Oracle WebLogic Server Administration Console.

b. If you have not already done so, click Lock & Edit in the Change Center.

c. In the left pane of the Console, expand Environment and select Servers.
The Summary of Servers page is displayed.

d. In the Servers table, click WLS1 Managed Server instance.

e. Select Protocols, and then Channels.

f. Click New.

g. Enter ReplicationChannel as the name of the new network channel and


select t3 as the protocol, then click Next.

h. Enter the following information:


Listen address: 10.0.0.1

Note - This is the floating IP assigned to WLS1.

Listen port: 7005

i. Click Next, and in the Network Channel Properties page, select Enabled and
Outbound Enabled.


j. Click Finish.

k. Under the Network Channels table, select ReplicationChannel, the network


channel you created for the WLS1 Managed Server.

l. Expand Advanced, and select Enable SDP Protocol.

m. Click Save.

n. To activate these changes, in the Change Center of the Administration


Console, click Activate Changes.
You must repeat the above steps to create a network channel for each of the remaining
Managed Servers in the Dept1_Cluster1 cluster. Enter the required properties, as described
in the following table.

Managed Server in Dept1_Cluster1    Name    Protocol    Listen Address    Listen Port    Additional Channel Ports
WLS2 ReplicationChannel t3 10.0.0.2 7005 7006 to 7014
WLS3 ReplicationChannel t3 10.0.0.3 7005 7006 to 7014
WLS4 ReplicationChannel t3 10.0.0.4 7005 7006 to 7014
WLS5 ReplicationChannel t3 10.0.0.5 7005 7006 to 7014
WLS6 ReplicationChannel t3 10.0.0.6 7005 7006 to 7014
WLS7 ReplicationChannel t3 10.0.0.7 7005 7006 to 7014
WLS8 ReplicationChannel t3 10.0.0.8 7005 7006 to 7014

4. After creating the network channel for each of the Managed Servers in your
cluster, click Environment > Clusters. The Summary of Clusters page is
displayed.

5. Click Dept1_Cluster1 (this is the example cluster to which you will deploy a web
application at a later time). The Settings for Dept1_Cluster1 page is displayed.

6. Click the Replication tab.

7. In the Replication Channel field, ensure that ReplicationChannel is set as the


name of the channel to be used for replication traffic.

8. In the Advanced section, select the Enable One Way RMI for Replication option.

9. Click Save.


10. Activate changes, and restart the Managed Servers.

11. Manually add the system property -Djava.net.preferIPv4Stack=true to the


startWebLogic.sh script, which is located in the bin directory of base_domain,
using a text editor as follows:

a. Locate the following line in the startWebLogic.sh script:


. ${DOMAIN_HOME}/bin/setDomainEnv.sh $*

b. Add the following property immediately after the above entry:


JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.net.preferIPv4Stack=true"

c. Save the file and close.

12. Restart all Managed Servers as follows:

a. In the administration console, click Environment > Servers. The Summary of


Servers page is displayed.

b. Select a Managed Server, such as WLS1, by clicking WLS1. The Settings for
WLS1 page is displayed.

c. Click the Control tab. Select WLS1 in the Server Status table. Click Start.

d. Repeat these steps for each of the Managed Servers in the WebLogic
cluster.

Note - To verify that multiple listening ports were opened, you can either run the netstat -na
command on the command line or check the Managed Server logs.
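
For example, after the restart you might check for the example replication port range
7005-7015 from the compute node shell (a sketch; adjust the pattern to your own port range):

# netstat -an | egrep '70(0[5-9]|1[0-5])'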

Configuring Grid Link Data Source for Dept1_Cluster1


You must create a Grid Link Data Source for JDBC connectivity between Oracle WebLogic
Server and a service targeted to an Oracle RAC cluster. The data source uses the Oracle Notification
Service (ONS) to adaptively respond to state changes in an Oracle RAC instance. These topics describe:

■ “Grid Link Data Source” on page 235


■ “Create a GridLink Data Source on Dept1_Cluster1” on page 236


Grid Link Data Source


A Grid Link data source includes the features of generic data sources plus the following support
for Oracle RAC:

■ “Fast Connection Failover” on page 235


■ “Runtime Connection Load Balancing” on page 235
■ “XA Affinity” on page 236
■ “SCAN Addresses” on page 236
■ “Secure Communication using Oracle Wallet” on page 236

Fast Connection Failover

A Grid Link data source uses Fast Connection Failover to:

■ Provide rapid failure detection.


■ Abort and remove invalid connections from the connection pool.
■ Perform graceful shutdown for planned and unplanned Oracle RAC node outages. The
data source allows in-progress transactions to complete before closing connections. New
requests are load balanced to an active Oracle RAC node.
■ Adapt to changes in topology, such as adding a new node.
■ Distribute runtime work requests to all active Oracle RAC instances.

See Fast Connection Failover in the Oracle Database JDBC Developer's Guide and Reference
at: http://docs.oracle.com/cd/B19306_01/java.102/b14355/fstconfo.htm.

Runtime Connection Load Balancing

Runtime Connection Load Balancing allows WebLogic Server to:

■ Adjust the distribution of work based on back end node capacities such as CPU, availability,
and response time.
■ React to changes in Oracle RAC topology.
■ Manage pooled connections for high performance and scalability.

If FAN (Fast Application Notification) is not enabled, Grid link data sources use a round-robin
load balancing algorithm to allocate connections to Oracle RAC nodes.


XA Affinity
XA Affinity for global transactions ensures that all the database operations for a global transaction
performed on an Oracle RAC cluster are directed to the same Oracle RAC instance. The
first connection request for an XA transaction is load balanced using RCLB and is assigned
an Affinity context. All subsequent connection requests are routed to the same Oracle RAC
instance using the Affinity context of the first connection.

SCAN Addresses
Oracle Single Client Access Name (SCAN) addresses can be used to specify the host and
port for both the TNS listener and the ONS listener in the WebLogic console. A Grid Link
data source containing SCAN addresses does not need to change if you add or remove Oracle
RAC nodes. Contact your network administrator for appropriately configured SCAN URLs for
your environment. For more information, see http://www.oracle.com/technetwork/database/
clustering/overview/scan-129069.pdf.

Secure Communication using Oracle Wallet


Allows you to configure secure communication with the ONS listener using Oracle Wallet.

Create a GridLink Data Source on Dept1_Cluster1


1. Log in to the Oracle WebLogic Server Administration Console.

2. If you have not already done so, in the Change Center of the Administration
Console, click Lock & Edit.

3. In the Domain Structure tree, expand Services, then select Data Sources.

4. On the Summary of Data Sources page, click New and select GridLink Data
Source.
The Create a New JDBC GridLink Data Source page is displayed.

5. Enter the following information:


■ Enter a logical name for the datasource in the Name field. For example, gridlink.
■ Enter a name for JNDI. For example, jdbc/gridlink.


■ Click Next.

6. In the Transaction Options page, de-select Supports Global Transactions, and


click Next.

7. Select Enter individual listener information and click Next.

8. Enter the following connection properties:

■ Service Name: Enter the name of the Oracle RAC service in the Service Name field. For
example, enter myService in Service Name.

Note - The Oracle RAC Service name is defined on the database, and it is not a fixed name.

■ Host Name: Enter the DNS name or IP address of the server that hosts the database. For an
Oracle GridLink service-instance connection, this must be the same for each data source in
a given multi data source.
■ Port: Enter the port on which the database server listens for connection requests.
■ Database User Name: Enter the database user name. For example, myDataBase.
■ Password: Enter the password. For example, myPassword1.
■ Confirm Password: Re-enter the password, then click Next.

Tip - For more information, see the Oracle Fusion Middleware Oracle WebLogic Server
Administration Console Online Help.

The console automatically generates the complete JDBC URL. For example:
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)
(HOST=left)(PORT=1234))(ADDRESS=(PROTOCOL=TCP)(HOST=right)(PORT=1234))(ADDRESS=
(PROTOCOL=TCP)(HOST=center)(PORT=1234)))(CONNECT_DATA=(SERVICE_NAME=myService)))

9. On the Test GridLink Database Connection page, review the connection


parameters and click Test All Listeners.
Oracle WebLogic attempts to create a connection from the Administration Server to the
database. Results from the connection test are displayed at the top of the page. If the test is
unsuccessful, you should correct any configuration errors and retry the test.
Click Next.

10. In the ONS Client Configuration page, do the following:

■ Select Fan Enabled to subscribe to and process Oracle FAN events.


■ In ONS host and port, enter a comma-separated list of ONS daemon listen addresses and
ports for receiving ONS-based FAN events. You can use Single Client Access Name
(SCAN) addresses to access FAN notifications.
■ Click Next.

11. On the Test ONS client configuration page, review the connection parameters
and click Test All ONS Nodes.
Click Next.

12. In the Select Targets page, select Dept1_Cluster1 as the target and All Servers in
the cluster.

13. Click Finish.

14. Click Activate Changes.

15. Now you must configure SDP-enabled JDBC drivers for the cluster.
For instructions, go to “Configuring SDP-Enabled JDBC Drivers for
Dept1_Cluster1” on page 238.

Configuring SDP-Enabled JDBC Drivers for Dept1_Cluster1


You must configure SDP-enabled JDBC drivers for the Dept1_Cluster1 cluster.

This section discusses the following topics:


■ “Configure the Database to Support InfiniBand” on page 238
■ “Enable SDP Support for JDBC” on page 239
■ “Monitor SDP Sockets Using netstat on Oracle Solaris” on page 240

Configure the Database to Support InfiniBand


Before enabling SDP support for JDBC, you must configure the database to
support InfiniBand, as described in the "Configuring SDP Protocol Support for
InfiniBand Network Communication to the Database Server" section in the Oracle
Database Net Services Administrator's Guide, located at:
http://download.oracle.com/docs/cd/B28359_01/network.111/b28316/performance.htm#i1008413


Ensure that you set the protocol to SDP.
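
As an illustration only (the host name and port below are placeholders taken from the examples
later in this chapter; your database listener configuration may differ), an SDP end point in the
Oracle Net configuration uses the same address form as a TCP end point, with the protocol set to SDP:

(ADDRESS = (PROTOCOL = SDP)(HOST = ssc01db01-ibvip.mycompany.com)(PORT = 1522))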

Enable SDP Support for JDBC


Complete the following steps:

1. Ensure that you have created the Grid Link Data Sources for the JDBC
connectivity on ComputeNode1 and ComputeNode2, as described in Section 7.6
“Configuring Grid Link Data Source for Dept1_Cluster1” of the Oracle® Fusion
Middleware Exalogic Enterprise Deployment Guide at http://docs.oracle.com/cd/
E18476_01/doc.220/e18479/optimization.htm#BABHEDI.
The console automatically generates the complete JDBC URL, as shown in the following
example:
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.x.x.x)(PORT=1522))
(CONNECT_DATA=(SERVICE_NAME=myservice)))

2. In the JDBC URL, replace TCP protocol with SDP protocol. For example:
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=sdp)(HOST=192.x.x.x)(PORT=1522))
(CONNECT_DATA=(SERVICE_NAME=myservice)))

3. Manually add the system property -Djava.net.preferIPv4Stack=true to the


startWebLogic.sh script, which is located in the bin directory of base_domain,
using a text editor as follows:

a. Locate the following line in the startWebLogic.sh script:


. ${DOMAIN_HOME}/bin/setDomainEnv.sh $*

b. Add the following property immediately after the above entry:


JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.net.preferIPv4Stack=true -Doracle.net.SDP=true"

c. Save the file and close.

4. Restart the Managed Server as follows:

a. In the administration console, click Environment > Servers. The Summary of


Servers page is displayed.

b. Select a Managed Server, such as WLS1, by clicking WLS1. The Settings for
WLS1 page is displayed.


c. Click the Control tab. Select WLS1 in the Server Status table. Click Start.

Monitor SDP Sockets Using netstat on Oracle Solaris

You can monitor SDP sockets by running the netstat command on the Application Domains
running Oracle Solaris 11 that contain the Oracle Exalogic Elastic Cloud Software. Run the
command on both these Application Domains and on the Database Domains to monitor SDP
traffic between them.

Log in to the operating system as root, and run the following command on the
command line:
# netstat -f sdp -s 1
This command displays the status of all SDP sockets (established or not), as in the following
sample output:

SDP     sdpActiveOpens  =       66357   sdpCurrEstab    =         748
        sdpPrFails      =           0   sdpRejects      =           0
        sdpOutSegs      = 39985638793
        sdpInDataBytes  = 9450383834191
        sdpOutDataBytes = 6228930927986

SDP     sdpActiveOpens  =           0   sdpCurrEstab    =           0
        sdpPrFails      =           0   sdpRejects      =           0
        sdpInSegs       =       14547
        sdpOutSegs      =       14525
        sdpInDataBytes  =     3537194
        sdpOutDataBytes =     2470907
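
To watch only the SDP data counters, you can filter the same output, for example (a sketch;
the grep pattern is illustrative):

# netstat -f sdp -s | grep -i databytes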

Configuring SDP InfiniBand Listener for Exalogic Connections

This section is intended for users who have the Oracle Exalogic Elastic Cloud Software
connected to a Database Domain in the system. It describes how to create an SDP listener on
the InfiniBand network. Perform these tasks on each of the Database Domains in the system
that use SDP-enabled data sources.


■ “Create an SDP Listener on the InfiniBand Network” on page 241

Create an SDP Listener on the InfiniBand Network


Oracle RAC 11g Release 2 supports client connections across multiple networks, and it
provides load balancing and failover of client connections within the network to which they are
connecting. To add a listener for the Oracle Exalogic Elastic Cloud Software connections
coming in on the InfiniBand network, first add a network resource for the InfiniBand network
with virtual IP addresses.

Note - This example lists two Database Domains. If you have more than two Database Domains
in your system, you must repeat Database Domain-specific lines for each Database Domain in
the cluster.

1. Edit /etc/hosts on each Database Domain in the cluster to add the virtual IP
addresses you will use for the InfiniBand network. Make sure that these IP
addresses are not already in use. The following is an example:
# Added for Listener over IB

192.168.10.21 ssc01db01-ibvip.mycompany.com ssc01db01-ibvip

192.168.10.22 ssc01db02-ibvip.mycompany.com ssc01db02-ibvip

2. On one of the Database Domains, as the root user, create a network resource for
the InfiniBand network, as in the following example:
# /u01/app/grid/product/11.2.0.2/bin/srvctl add network -k 2 -S
192.168.10.0/255.255.255.0/bondib0

3. Validate that the network was added correctly by running one of the following
commands:
# /u01/app/grid/product/11.2.0.2/bin/crsctl stat res -t | grep net
ora.net1.network

ora.net2.network -- Output indicating new Network resource

or
# /u01/app/grid/product/11.2.0.2/bin/srvctl config network -k 2
Network exists: 2/192.168.10.0/255.255.255.0/bondib0, type static -- Output indicating
Network resource on the 192.168.10.0 subnet


4. Add the Virtual IP addresses on the network created in Step 2, for each node in
the cluster.
srvctl add vip -n ssc01db01 -A ssc01db01-ibvip/255.255.255.0/bondib0 -k 2
srvctl add vip -n ssc01db02 -A ssc01db02-ibvip/255.255.255.0/bondib0 -k 2

5. As the "oracle" user (who owns the Grid Infrastructure Home), add a listener
which will listen on the VIP addresses created in Step 3.
srvctl add listener -l LISTENER_IB -k 2 -p TCP:1522,/SDP:1522

6. For each database that will accept connections from the middle tier, modify the
listener_networks init parameter to allow load balancing and failover across
multiple networks (Ethernet and InfiniBand). You can either enter the full
tnsnames syntax in the initialization parameter or create entries in tnsnames.ora
in $ORACLE_HOME/network/admin directory. The TNSNAMES.ORA entries must exist
in the GRID_HOME. The following example first updates tnsnames.ora. Complete
this step on each Database Domain in the cluster with the correct IP addresses
for that Database Domain. LISTENER_IBREMOTE should list all other Database
Domains that are in the cluster. DBM_IB should list all Database Domains in the
cluster.

Note - The TNSNAMES entries are only read by the database instance on startup. If you modify
an entry that is referred to by any init.ora parameter (such as LISTENER_NETWORKS), you must
restart the instance or issue an ALTER SYSTEM SET LISTENER_NETWORKS command for
the modification to take effect.

DBM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ssc01-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dbm)
  ))

DBM_IB =
  (DESCRIPTION =
    (LOAD_BALANCE=on)
    (ADDRESS = (PROTOCOL = TCP)(HOST = ssc01db01-ibvip)(PORT = 1522))
    (ADDRESS = (PROTOCOL = TCP)(HOST = ssc01db02-ibvip)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dbm)
  ))

LISTENER_IBREMOTE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ssc01db02-ibvip.mycompany.com)(PORT = 1522))
  ))

LISTENER_IBLOCAL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ssc01db01-ibvip.mycompany.com)(PORT = 1522))
      (ADDRESS = (PROTOCOL = SDP)(HOST = ssc01db01-ibvip.mycompany.com)(PORT = 1522))
  ))

LISTENER_IPLOCAL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ssc0101-vip.mycompany.com)(PORT = 1521))
  ))

LISTENER_IPREMOTE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ssc01-scan.mycompany.com)(PORT = 1521))
  ))

7. Modify the listener_networks init parameter. Connect to the database instance as


sysdba.
SQLPLUS> alter system set listener_networks='((NAME=network2)
(LOCAL_LISTENER=LISTENER_IBLOCAL)(REMOTE_LISTENER=LISTENER_IBREMOTE))',
'((NAME=network1)(LOCAL_LISTENER=LISTENER_IPLOCAL)(REMOTE_LISTENER=LISTENER_IPREMOTE))'
scope=both;
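
To confirm the new setting (an optional check, not part of the original procedure), query the
parameter from the same session:

SQLPLUS> show parameter listener_networks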

8. Stop and start LISTENER_IB for the modification in Step 7 to take effect.


srvctl stop listener -l LISTENER_IB

srvctl start listener -l LISTENER_IB
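
Optionally, verify that the listener restarted cleanly (a sketch; run as the Grid Infrastructure owner):

srvctl status listener -l LISTENER_IB

lsnrctl status LISTENER_IB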

Understanding Internal Cabling

These topics show the cable layouts for Oracle SuperCluster T5-8.

■ “Connector Locations” on page 245


■ “Identifying InfiniBand Fabric Connections” on page 253
■ “Ethernet Management Switch Connections” on page 256
■ “ZFS Storage Appliance Connections” on page 258
■ “Single-Phase PDU Cabling” on page 258
■ “Three-Phase PDU Cabling” on page 259

Connector Locations
The InfiniBand (IB) fabric connects the IB switches and the servers listed below
in Oracle SuperCluster T5-8. For IB cable information, see “Identifying InfiniBand Fabric
Connections” on page 253.

The Ethernet Management Switch also connects to the servers listed below. For 1 Gb/s
Cat 5E cabling information, see “Ethernet Management Switch Connections” on page 256.

The following figures are in this section:

■ Sun Datacenter InfiniBand Switch 36 (Figure 24, “Sun Datacenter InfiniBand Switch 36,”
on page 246):
■ IB Spine Switch (slot U1)
■ IB Leaf Switch No. 1 (slot U26)
■ IB Leaf Switch No. 2 (slot U32)
■ SPARC T5-8 servers (Figure 25, “SPARC T5-8 Server Card Locations (Full Rack),” on
page 247, Figure 26, “SPARC T5-8 Server Card Locations (Half Rack),” on page 248,
and Figure 27, “NET MGT and NET0-3 Port Locations on the SPARC T5-8 Server,” on
page 249):
■ SPARC T5-8 No. 1 (slot 10)


■ SPARC T5-8 No. 2 (slot 18)


■ ZFS storage controllers (Figure 28, “ZFS Storage Controller,” on page 249):
■ ZFS storage controller No. 1 (slot 33)
■ ZFS storage controller No. 2 (slot 34)
■ Exadata Storage Servers (Figure 29, “Exadata Storage Server ,” on page 250):
■ Exadata Storage Server No. 1 (slot 2)
■ Exadata Storage Server No. 2 (slot 4)
■ Exadata Storage Server No. 3 (slot 6)
■ Exadata Storage Server No. 4 (slot 8)
■ Exadata Storage Server No. 5 (slot 35) (Full Rack only)
■ Exadata Storage Server No. 6 (slot 37) (Full Rack only)
■ Exadata Storage Server No. 7 (slot 39) (Full Rack only)
■ Exadata Storage Server No. 8 (slot 41) (Full Rack only)
■ The Ethernet Management Switch (Figure 30, “Ethernet Management Switch (Cisco
Catalyst 4948 Ethernet Switch) ,” on page 251)
■ PDU circuit breakers and AC sockets (Figure 31, “Circuit Breakers and AC Sockets on
the PDU,” on page 252)

FIGURE 24 Sun Datacenter InfiniBand Switch 36

Figure Legend

1 NET MGT 0 and NET MGT 1 ports


2 InfiniBand ports 0A-17A (upper ports)
3 InfiniBand ports 0B-17B (lower ports)


FIGURE 25 SPARC T5-8 Server Card Locations (Full Rack)

Figure Legend

1 Dual-port 10 GbE network interface cards, for connection to the 10 GbE client access network
2 Dual-port InfiniBand host channel adapters, for connection to the InfiniBand network


FIGURE 26 SPARC T5-8 Server Card Locations (Half Rack)

Figure Legend

1 Dual-port 10 GbE network interface cards, for connection to the 10 GbE client access network
2 Dual-port InfiniBand host channel adapters, for connection to the InfiniBand network


FIGURE 27 NET MGT and NET0-3 Port Locations on the SPARC T5-8 Server

Figure Legend

1 NET MGT port, for connection to Oracle ILOM management network


2 NET0-NET3 ports, for connection to 1 GbE host management network

FIGURE 28 ZFS Storage Controller


Figure Legend

1 Power supply 1
2 Power supply 2
3 System status LEDs
4 Serial management port
5 Service processor network management port
6 Gigabit Ethernet ports NET 0, 1, 2, 3
7 USB ports 0, 1
8 HD15 video connector

FIGURE 29 Exadata Storage Server

Figure Legend

1 Power supply 1
2 Power supply 2
3 System status LEDs
4 Serial management port


5 Service processor network management port


6 Gigabit Ethernet ports NET 0, 1, 2, 3
7 USB ports 0, 1
8 HD15 video connector

FIGURE 30 Ethernet Management Switch (Cisco Catalyst 4948 Ethernet Switch)

Figure Legend

1 Indicators and reset switch


2 Ports 1-16, 10/100/1000BASE-T Ethernet
3 Ports 17-32, 10/100/1000BASE-T Ethernet
4 Ports 33-48, 10/100/1000BASE-T Ethernet
5 CON (upper), MGT (lower)
6 Ports 45-48, 10-Gigabit Ethernet


FIGURE 31 Circuit Breakers and AC Sockets on the PDU

Figure Legend

1 PDU A


2 PDU B

Identifying InfiniBand Fabric Connections


These topics show the following InfiniBand (IB) connections:

Note - For 1 Gb/s Ethernet, see “Ethernet Management Switch Connections” on page 256.

■ “IB Spine Switch” on page 253


■ “IB Leaf Switch No. 1” on page 253
■ “IB Leaf Switch No. 2” on page 254
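
If you need to cross-check these tables against the live fabric, the standard OpenFabrics
InfiniBand diagnostics can help (a sketch; it assumes the utilities are available on a host or
switch with access to the fabric):

# ibswitches

# iblinkinfo

ibswitches lists the switches visible on the fabric, and iblinkinfo reports per-port link state and
the peer connected to each port.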

IB Spine Switch

Spine Switch    Port    To Device    Device Location    Device Port
U1 1B Leaf switch 1 U26 8B
U1 0B Leaf switch 2 U32 8B

IB Leaf Switch No. 1

Leaf Switch 1    Port    To Device    Device Location    Device Port
U26 0A T5-8 No. 2 U18 PCIE-16 P2
U26 0B T5-8 No. 2 (Full Rack) U18 PCIE-15 P2
U26 1A T5-8 No. 2 U18 PCIE-8 P2
U26 1B T5-8 No. 2 (Full Rack) U18 PCIE-7 P2
U26 2A — — —
U26 2B — — —
U26 3A ZFS storage controller No. 2 U34 PCIE-0 P1
U26 3B T5-8 No. 2 (Full Rack) U18 PCIE-12 P2
U26 4A T5-8 No. 2 U18 PCIE-11 P2
U26 4B T5-8 No. 2 (Full Rack) U18 PCIE-4 P2

U26 5A T5-8 No. 2 U18 PCIE-3 P2
U26 5B T5-8 No. 1 U10 PCIE-16 P1
U26 6A T5-8 No. 1 (Full Rack) U10 PCIE-15 P1
U26 6B T5-8 No. 1 U10 PCIE-8 P1
U26 7A T5-8 No. 1 (Full Rack) U10 PCIE-7 P1
U26 7B Exadata Storage Server No. 8 U41 PCIE-3 P2
(Full Rack)
U26 8A IB leaf switch 2 U32 8A
U26 8B IB spine switch U1 1B
U26 9A IB leaf switch 2 U32 9B
U26 9B IB leaf switch 2 U32 9A
U26 10A IB leaf switch 2 U32 10B
U26 10B IB leaf switch 2 U32 10A
U26 11A IB leaf switch 2 U32 11B
U26 11B IB leaf switch 2 U32 11A
U26 12A Exadata Storage Server No. 7 U39 PCIE-3 P1
(Full Rack)
U26 12B T5-8 No. 1 (Full Rack) U10 PCIE-12 P1
U26 13A T5-8 No. 1 U10 PCIE-11 P1
U26 13B T5-8 No. 1 (Full Rack) U10 PCIE-4 P1
U26 14A T5-8 No. 1 U10 PCIE-3 P1
U26 14B ZFS storage controller No. 1 U33 PCIE-0 P1
U26 15A Exadata Storage Server No. 6 U37 PCIE-3 P2
(Full Rack)
U26 15B Exadata Storage Server No. 5 U35 PCIE-3 P2
(Full Rack)
U26 16A Exadata Storage Server No. 4 U8 PCIE-3 P2
U26 16B Exadata Storage Server No. 3 U6 PCIE-3 P1
U26 17A Exadata Storage Server No. 2 U4 PCIE-3 P1
U26 17B Exadata Storage Server No. 1 U2 PCIE-3 P1

IB Leaf Switch No. 2

Leaf Switch 2    Port    To Device    Device Location    Device Port
U32 0A T5-8 No. 2 U18 PCIE-16 P1

U32 0B T5-8 No. 2 (Full Rack) U18 PCIE-15 P1
U32 1A T5-8 No. 2 U18 PCIE-8 P1
U32 1B T5-8 No. 2 (Full Rack) U18 PCIE-7 P1
U32 2A — — —
U32 2B — — —
U32 3A ZFS storage controller No. 2 U34 PCIE-0 P2
U32 3B T5-8 No. 2 (Full Rack) U18 PCIE-12 P1
U32 4A T5-8 No. 2 U18 PCIE-11 P1
U32 4B T5-8 No. 2 (Full Rack) U18 PCIE-4 P1
U32 5A T5-8 No. 2 U18 PCIE-3 P1
U32 5B T5-8 No. 1 U10 PCIE-16 P2
U32 6A T5-8 No. 1 (Full Rack) U10 PCIE-15 P2
U32 6B T5-8 No. 1 U10 PCIE-8 P2
U32 7A T5-8 No. 1 (Full Rack) U10 PCIE-7 P2
U32 7B Exadata Storage Server No. 8 U41 PCIE 3 P1
(Full Rack)
U32 8A IB leaf switch No. 1 U26 8A
U32 8B IB spine switch U1 0B
U32 9A IB leaf switch No. 1 U26 9B
U32 9B IB leaf switch No. 1 U26 9A
U32 10A IB leaf switch No. 1 U26 10B
U32 10B IB leaf switch No. 1 U26 10A
U32 11A IB leaf switch No. 1 U26 11B
U32 11B IB leaf switch No. 1 U26 11A
U32 12A Exadata Storage Server No. 7 U39 PCIE 3 P2
(Full Rack)
U32 12B T5-8 No. 1 (Full Rack) U10 PCIE-12 P2
U32 13A T5-8 No. 1 U10 PCIE-11 P2
U32 13B T5-8 No. 1 (Full Rack) U10 PCIE-4 P2
U32 14A T5-8 No. 1 U10 PCIE-3 P2
U32 14B ZFS storage controller No. 1 U33 PCIE-0 P2
U32 15A Exadata Storage Server No. 6 U37 PCIE 3 P1
(Full Rack)
U32 15B Exadata Storage Server No. 5 U35 PCIE 3 P1
(Full Rack)
U32 16A Exadata Storage Server No. 4 U8 PCIE 3 P1
U32 16B Exadata Storage Server No. 3 U6 PCIE-3 P2

U32 17A Exadata Storage Server No. 2 U4 PCIE-3 P2
U32 17B Exadata Storage Server No. 1 U2 PCIE-3 P2

Ethernet Management Switch Connections


The Ethernet management switch (Cisco Catalyst 4948 Ethernet Management Switch, Figure
30, “Ethernet Management Switch (Cisco Catalyst 4948 Ethernet Switch) ,” on page 251) is
located in Oracle SuperCluster T5-8 at location U27.

The Ethernet management switch connects to the SPARC T5-8 servers, ZFS storage controllers,
Exadata Storage Servers, and PDUs through the ports listed in the following tables.

The following table lists the cables for the Ethernet Management switch.

Note - Ethernet management switch port numbers 45, 46, 47, and 48 are shared between item 4
(10BASE-T/100BASE-TX/1000BASE-T Ethernet) and item 6 (1000BASE-X Ethernet) in Figure 30.

Note - Ethernet management switch ports 1, 2, 3, and 4 are not used at this time.

Ethernet Switch Port    To Device    Device Location    Device Port    Cable
1 — — — —
2 — — — —
3 — — — —
4 — — — —
5 — — — —
6 — — — —
7 — — — —
8 — — — —
9 — — — —
10 — — — —
11 Exadata Storage Server No. 8 (Full Rack) U41 NET 0 Black, 10 ft
12 Exadata Storage Server No. 8 (Full Rack) U41 NET MGT Red, 10 ft
13 Exadata Storage Server No. 7 (Full Rack) U39 NET-0 Black, 10 ft
14 Exadata Storage Server No. 7 (Full Rack) U39 NET MGT Red, 10 ft
15 PDU-A PDU-A NET MGT White, 1 m

16 T5-8 No. 2 U18 NET-1 Black, 10 ft
17 T5-8 No. 2 U18 NET-0 Black, 10 ft
18 T5-8 No. 2 U18 NET MGT Red, 10 ft
19 PDU-B PDU-B NET MGT White, 1 m
20 T5-8 No. 2 U18 NET-2 Black, 10 ft
21 T5-8 No. 1 U10 NET-0 Black, 10 ft
22 T5-8 No. 1 U10 NET MGT Red, 10 ft
23 ZFS storage controller No. 2 U34 NET-1 Blue, 10 ft
24 T5-8 No. 2 U18 NET-3 Black, 10 ft
25 ZFS storage controller No. 2 U34 NET-0 Blue, 10 ft
26 T5-8 No. 1 U10 NET-1 Black, 10 ft
27 ZFS storage controller No. 2 U34 NET-2 Blue, 10 ft
28 T5-8 No. 1 U10 NET-2 Black, 10 ft
29 ZFS storage controller No. 1 U33 NET-1 Blue, 10 ft
30 T5-8 No. 1 U10 NET-3 Black, 10 ft
31 ZFS storage controller No. 1 U33 NET-0 Blue, 10 ft
32 ZFS storage controller No. 1 U33 NET-2 Blue, 10 ft
33 Exadata Storage Server No. 6 (Full Rack) U37 NET-0 Black, 10 ft
34 Exadata Storage Server No. 6 (Full Rack) U37 NET MGT Red, 10 ft
35 Exadata Storage Server No. 5 (Full Rack) U35 NET-0 Black, 10 ft
36 Exadata Storage Server No. 5 (Full Rack) U35 NET MGT Red, 10 ft
37 Exadata Storage Server No. 4 U8 NET-0 Black, 10 ft
38 Exadata Storage Server No. 4 U8 NET MGT Red, 10 ft
39 Exadata Storage Server No. 3 U6 NET-0 Black, 10 ft
40 Exadata Storage Server No. 3 U6 NET MGT Red, 10 ft
41 Exadata Storage Server No. 2 U4 NET-0 Black, 10 ft
42 Exadata Storage Server No. 2 U4 NET MGT Red, 10 ft
43 Exadata Storage Server No. 1 U2 NET-0 Black, 10 ft
44 Exadata Storage Server No. 1 U2 NET MGT Red, 10 ft
45 Sun Datacenter InfiniBand Switch 36 Leaf No. 2 U32 NET-0 Black, 10 ft
46 Sun Datacenter InfiniBand Switch 36 Leaf No. 1 U26 NET-0 Black, 10 ft
47 Sun Datacenter InfiniBand Switch 36 Spine U1 NET-0 Black, 10 ft
48 (Reserved for service access) — — Blue, 10 ft


ZFS Storage Appliance Connections


These connections are for these major components:
■ ZFS storage controller No. 1 (U33)
■ ZFS storage controller No. 2 (U34)
■ Sun Disk Shelf (U28)

From Storage Controller Slot    Storage Controller Port    To Slot    To Port    Cable
U33 PCIE-1 P0 U34 PCIE-1 P2 Yellow, 7 ft.
U33 PCIE-1 P1 U34 PCIE-1 P1 Black, 7 ft.
U33 PCIE-1 P2 U34 PCIE-1 P0 Green, 7 ft.
U34 PCIE-2 P1 U28 IOM0 P0 Black, 2 m
U33 PCIE-2 P1 U28 IOM0 P2 Black, 2 m
U34 PCIE-2 P0 U28 IOM1 P0 Black, 2 m
U33 PCIE-2 P0 U28 IOM1 P2 Black, 2 m

Single-Phase PDU Cabling


If the rack uses single-phase AC power, connect the devices in the rack to the power
distribution unit (PDU) as shown in the following table. Each device requires two power cords
for redundant power.

Caution - Turn off all PDU circuit breakers before connecting power cords to the PDU. Every
socket group has a circuit breaker.

See Figure 31, “Circuit Breakers and AC Sockets on the PDU,” on page 252 for the locations
of the PDU circuit breakers and for names of the PDU AC sockets.

Slot    PDU-A/PSU-0 (T5-8 AC0)    PDU-B/PSU-0 (T5-8 AC2)    PDU-A/PSU-1 (T5-8 AC1)    PDU-B/PSU-1 (T5-8 AC3)    To Device


U41 Group 5-6 Group 0-0 — — Exadata Storage Server No. 8 (Full
Rack)
U39 Group 5-5 Group 0-1 — — Exadata Storage Server No. 7 (Full
Rack)
U37 Group 5-4 Group 0-2 — — Exadata Storage Server No. 6 (Full
Rack)



U35 Group 5-3 Group 0-3 — — Exadata Storage Server No. 5 (Full
Rack)
U34 Group 5-2 Group 0-4 — — ZFS storage controller No. 2
U33 Group 5-1 Group 0-5 — — ZFS storage controller No. 1
U32 Group 5-0 Group 0-6 — — Sun Datacenter InfiniBand Switch
36 Leaf Switch No. 2
U28 Group 4-6 Group 1-0 — — Sun Disk Shelf
U27 Group 3-5 Group 2-1 — — Ethernet management switch
U26 Group 3-6 Group 2-0 — — Sun Datacenter InfiniBand Switch
36 Leaf Switch No. 1
U18 Group 3-7 Group 2-7 Group 4-7 Group 1-7 SPARC T5-8 No. 2
U10 Group 1-7 Group 4-7 Group 2-7 Group 3-7 SPARC T5-8 No. 1
U8 Group 0-4 Group 5-2 — — Exadata Storage Server No. 4
U6 Group 0-3 Group 5-3 — — Exadata Storage Server No. 3
U4 Group 0-2 Group 5-4 — — Exadata Storage Server No. 2
U2 Group 0-1 Group 5-5 — — Exadata Storage Server No. 1
U1 Group 0-0 Group 5-6 — — Sun Datacenter InfiniBand Switch
36 Spine Switch

Three-Phase PDU Cabling


For three-phase AC power, connect the devices in the rack to the power distribution units
(PDUs) as shown in the following table. Each device requires two power cords for redundant
power.

Caution - Turn off all PDU circuit breakers before connecting power cords to the PDU. Every
socket group has a circuit breaker.

See Figure 31, “Circuit Breakers and AC Sockets on the PDU,” on page 252 for the locations
of the PDU circuit breakers and for names of the PDU AC sockets.

Slot    PDU-A/PSU-0 (T5-8 AC0)    PDU-B/PSU-0 (T5-8 AC2)    PDU-A/PSU-1 (T5-8 AC1)    PDU-B/PSU-1 (T5-8 AC3)    To Device


U41 Group 5-6 Group 2-0 — — Exadata Storage Server No. 8 (Full
Rack)
U39 Group 5-5 Group 2-1 — — Exadata Storage Server No. 7 (Full
Rack)



U37 Group 5-4 Group 2-2 — — Exadata Storage Server No. 6 (Full
Rack)
U35 Group 5-3 Group 2-3 — — Exadata Storage Server No. 5 (Full
Rack)
U34 Group 5-2 Group 2-4 — — ZFS storage controller No. 2
U33 Group 5-1 Group 2-5 — — ZFS storage controller No. 1
U32 Group 5-0 Group 2-6 — — Sun Datacenter InfiniBand Switch
36 Leaf Switch No. 2
U28 Group 4-6 Group 1-0 — — Sun Disk Shelf
U27 Group 3-5 Group 0-1 — — Ethernet management switch
U26 Group 3-6 Group 0-0 — — Sun Datacenter InfiniBand Switch
36 Leaf Switch No. 1
U18 Group 3-7 Group 0-7 Group 4-7 Group 1-7 SPARC T5-8 No. 2
U10 Group 1-7 Group 4-7 Group 2-7 Group 5-7 SPARC T5-8 No. 1
U8 Group 0-4 Group 3-2 — — Exadata Storage Server No. 4
U6 Group 0-3 Group 3-3 — — Exadata Storage Server No. 3
U4 Group 0-2 Group 3-4 — — Exadata Storage Server No. 2
U2 Group 0-1 Group 3-5 — — Exadata Storage Server No. 1
U1 Group 0-0 Group 3-6 — — Sun Datacenter InfiniBand Switch
36 Spine Switch



Connecting Multiple Oracle SuperCluster T5-8
Systems

These topics provide instructions for connecting one Oracle SuperCluster T5-8 to one or more
Oracle SuperCluster T5-8 systems.

■ “Multi-Rack Cabling Overview” on page 261


■ “Two-Rack Cabling” on page 263
■ “Three-Rack Cabling” on page 265
■ “Four-Rack Cabling” on page 267
■ “Five-Rack Cabling” on page 270
■ “Six-Rack Cabling” on page 273
■ “Seven-Rack Cabling” on page 277
■ “Eight-Rack Cabling” on page 281

Multi-Rack Cabling Overview


You connect multiple Oracle SuperCluster T5-8 systems together using the three Sun Datacenter
InfiniBand Switch 36 switches included in each Oracle SuperCluster T5-8. These switches
attach to standard Quad Small Form-factor Pluggable (QSFP) connectors at the ends of the
InfiniBand cables. The procedures in this section assume that the racks are adjacent to each
other. If they are not, longer cables may be required for the connections.

The switch at rack unit 1 (U1) is referred to as the spine switch. The switches at rack unit 26
(U26) and rack unit 32 (U32) are referred to as leaf switches. In a single rack, the two leaf
switches are interconnected using seven connections. In addition, each leaf switch has one
connection to the spine switch. The leaf switches connect to the spine switch as shown in the
following graphic.


When connecting multiple racks together, remove the seven existing inter-switch connections
between the two leaf switches in each rack, as well as the two connections between the leaf
switches and the spine switch. From each leaf switch, distribute eight connections over the
spine switches in all racks. In multi-rack environments, the leaf switches inside a rack are no
longer directly interconnected, as shown in the following graphic.


As shown in the preceding graphic, each leaf switch in rack 1 connects to the following
switches:

■ Four connections to its internal spine switch


■ Four connections to the spine switch in rack 2

The spine switch in rack 1 connects to the following switches:

■ Eight connections to both internal leaf switches


■ Eight connections to both leaf switches in rack 2

In Oracle SuperCluster T5-8, the spine and leaf switches are installed in the following locations:

■ Spine switch in U1
■ Two leaf switches in U26 and U32

Two-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-U1) when
cabling two full racks together.


TABLE 22 Leaf Switch Connections for the First Rack in a Two-Rack System
Leaf Switch Connection Cable Length
R1-U32 within Rack 1 R1-U32-P8A to R1-U1-P3A 5 meters

R1-U32-P8B to R1-U1-P4A

R1-U32-P9A to R1-U1-P5A

R1-U32-P9B to R1-U1-P6A
R1-U32 to Rack 2 R1-U32-P10A to R2-U1-P7A 5 meters

R1-U32-P10B to R2-U1-P8A

R1-U32-P11A to R2-U1-P9A

R1-U32-P11B to R2-U1-P10A
R1-U26 within Rack 1 R1-U26-P8A to R1-U1-P3B 5 meters

R1-U26-P8B to R1-U1-P4B

R1-U26-P9A to R1-U1-P5B

R1-U26-P9B to R1-U1-P6B
R1-U26 to Rack 2 R1-U26-P10A to R2-U1-P7B 5 meters

R1-U26-P10B to R2-U1-P8B

R1-U26-P11A to R2-U1-P9B

R1-U26-P11B to R2-U1-P10B

The following table shows the cable connections for the second spine switch (R2-U1) when
cabling two full racks together.

TABLE 23 Leaf Switch Connections for the Second Rack in a Two-Rack System
Leaf Switch Connection Cable Length
R2-U32 within Rack 2 R2-U32-P8A to R2-U1-P3A 5 meters

R2-U32-P8B to R2-U1-P4A

R2-U32-P9A to R2-U1-P5A

R2-U32-P9B to R2-U1-P6A
R2-U32 to Rack 1 R2-U32-P10A to R1-U1-P7A 5 meters

R2-U32-P10B to R1-U1-P8A

R2-U32-P11A to R1-U1-P9A

R2-U32-P11B to R1-U1-P10A
R2-U26 within Rack 2 R2-U26-P8A to R2-U1-P3B 5 meters



R2-U26-P8B to R2-U1-P4B

R2-U26-P9A to R2-U1-P5B

R2-U26-P9B to R2-U1-P6B
R2-U26 to Rack 1 R2-U26-P10A to R1-U1-P7B 5 meters

R2-U26-P10B to R1-U1-P8B

R2-U26-P11A to R1-U1-P9B

R2-U26-P11B to R1-U1-P10B

Three-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-U1) when
cabling three full racks together.

TABLE 24 Leaf Switch Connections for the First Rack in a Three-Rack System

Leaf Switch Connection Cable Length


R1-U32 within Rack 1 R1-U32-P8A to R1-U1-P3A 5 meters

R1-U32-P8B to R1-U1-P4A

R1-U32-P9A to R1-U1-P5A
R1-U32 to Rack 2 R1-U32-P9B to R2-U1-P6A 5 meters

R1-U32-P10A to R2-U1-P7A

R1-U32-P10B to R2-U1-P8A
R1-U32 to Rack 3 R1-U32-P11A to R3-U1-P9A 5 meters

R1-U32-P11B to R3-U1-P10A
R1-U26 within Rack 1 R1-U26-P8A to R1-U1-P3B 5 meters

R1-U26-P8B to R1-U1-P4B

R1-U26-P9A to R1-U1-P5B
R1-U26 to Rack 2 R1-U26-P9B to R2-U1-P6B 5 meters

R1-U26-P10A to R2-U1-P7B

R1-U26-P10B to R2-U1-P8B
R1-U26 to Rack 3 R1-U26-P11A to R3-U1-P9B 5 meters

R1-U26-P11B to R3-U1-P10B


The following table shows the cable connections for the second spine switch (R2-U1) when
cabling three racks together.

TABLE 25 Leaf Switch Connections for the Second Rack in a Three-Rack System
Leaf Switch Connection Cable Length
R2-U32 within Rack 2 R2-U32-P8A to R2-U1-P3A 5 meters

R2-U32-P8B to R2-U1-P4A

R2-U32-P9A to R2-U1-P5A
R2-U32 to Rack 1 R2-U32-P11A to R1-U1-P9A 5 meters

R2-U32-P11B to R1-U1-P10A
R2-U32 to Rack 3 R2-U32-P9B to R3-U1-P6A 5 meters

R2-U32-P10A to R3-U1-P7A

R2-U32-P10B to R3-U1-P8A
R2-U26 within Rack 2 R2-U26-P8A to R2-U1-P3B 5 meters

R2-U26-P8B to R2-U1-P4B

R2-U26-P9A to R2-U1-P5B
R2-U26 to Rack 1 R2-U26-P11A to R1-U1-P9B 5 meters

R2-U26-P11B to R1-U1-P10B
R2-U26 to Rack 3 R2-U26-P9B to R3-U1-P6B 5 meters

R2-U26-P10A to R3-U1-P7B

R2-U26-P10B to R3-U1-P8B

The following table shows the cable connections for the third spine switch (R3-U1) when
cabling three full racks together.

TABLE 26 Leaf Switch Connections for the Third Rack in a Three-Rack System
Leaf Switch Connection Cable Length
R3-U32 within Rack 3 R3-U32-P8A to R3-U1-P3A 5 meters

R3-U32-P8B to R3-U1-P4A

R3-U32-P9A to R3-U1-P5A
R3-U32 to Rack 1 R3-U32-P9B to R1-U1-P6A 5 meters

R3-U32-P10A to R1-U1-P7A

R3-U32-P10B to R1-U1-P8A
R3-U32 to Rack 2 R3-U32-P11A to R2-U1-P9A 5 meters



R3-U32-P11B to R2-U1-P10A
R3-U26 within Rack 3 R3-U26-P8A to R3-U1-P3B 5 meters

R3-U26-P8B to R3-U1-P4B

R3-U26-P9A to R3-U1-P5B
R3-U26 to Rack 1 R3-U26-P9B to R1-U1-P6B 5 meters

R3-U26-P10A to R1-U1-P7B

R3-U26-P10B to R1-U1-P8B
R3-U26 to Rack 2 R3-U26-P11A to R2-U1-P9B 5 meters

R3-U26-P11B to R2-U1-P10B

Four-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-U1) when
cabling four full racks together.

TABLE 27 Leaf Switch Connections for the First Rack in a Four-Rack System

Leaf Switch Connection Cable Length


R1-U32 within Rack 1 R1-U32-P8A to R1-U1-P3A 5 meters

R1-U32-P8B to R1-U1-P4A
R1-U32 to Rack 2 R1-U32-P9A to R2-U1-P5A 5 meters

R1-U32-P9B to R2-U1-P6A
R1-U32 to Rack 3 R1-U32-P10A to R3-U1-P7A 5 meters

R1-U32-P10B to R3-U1-P8A
R1-U32 to Rack 4 R1-U32-P11A to R4-U1-P9A 10 meters

R1-U32-P11B to R4-U1-P10A
R1-U26 within Rack 1 R1-U26-P8A to R1-U1-P3B 5 meters

R1-U26-P8B to R1-U1-P4B
R1-U26 to Rack 2 R1-U26-P9A to R2-U1-P5B 5 meters

R1-U26-P9B to R2-U1-P6B
R1-U26 to Rack 3 R1-U26-P10A to R3-U1-P7B 5 meters

R1-U26-P10B to R3-U1-P8B
R1-U26 to Rack 4 R1-U26-P11A to R4-U1-P9B 10 meters



R1-U26-P11B to R4-U1-P10B

The following table shows the cable connections for the second spine switch (R2-U1) when
cabling four full racks together.

TABLE 28 Leaf Switch Connections for the Second Rack in a Four-Rack System
Leaf Switch Connection Cable Length
R2-U32 within Rack 2 R2-U32-P8A to R2-U1-P3A 5 meters

R2-U32-P8B to R2-U1-P4A
R2-U32 to Rack 1 R2-U32-P11A to R1-U1-P9A 5 meters

R2-U32-P11B to R1-U1-P10A
R2-U32 to Rack 3 R2-U32-P9A to R3-U1-P5A 5 meters

R2-U32-P9B to R3-U1-P6A
R2-U32 to Rack 4 R2-U32-P10A to R4-U1-P7A 5 meters

R2-U32-P10B to R4-U1-P8A
R2-U26 within Rack 2 R2-U26-P8A to R2-U1-P3B 5 meters

R2-U26-P8B to R2-U1-P4B
R2-U26 to Rack 1 R2-U26-P11A to R1-U1-P9B 5 meters

R2-U26-P11B to R1-U1-P10B
R2-U26 to Rack 3 R2-U26-P9A to R3-U1-P5B 5 meters

R2-U26-P9B to R3-U1-P6B
R2-U26 to Rack 4 R2-U26-P10A to R4-U1-P7B 5 meters

R2-U26-P10B to R4-U1-P8B

The following table shows the cable connections for the third spine switch (R3-U1) when
cabling four full racks together.

TABLE 29 Leaf Switch Connections for the Third Rack in a Four-Rack System
Leaf Switch Connection Cable Length
R3-U32 within Rack 3 R3-U32-P8A to R3-U1-P3A 5 meters

R3-U32-P8B to R3-U1-P4A
R3-U32 to Rack 1 R3-U32-P10A to R1-U1-P7A 5 meters

R3-U32-P10B to R1-U1-P8A



R3-U32 to Rack 2 R3-U32-P11A to R2-U1-P9A 5 meters

R3-U32-P11B to R2-U1-P10A
R3-U32 to Rack 4 R3-U32-P9A to R4-U1-P5A 5 meters

R3-U32-P9B to R4-U1-P6A
R3-U26 within Rack 3 R3-U26-P8A to R3-U1-P3B 5 meters

R3-U26-P8B to R3-U1-P4B
R3-U26 to Rack 1 R3-U26-P10A to R1-U1-P7B 5 meters

R3-U26-P10B to R1-U1-P8B
R3-U26 to Rack 2 R3-U26-P11A to R2-U1-P9B 5 meters

R3-U26-P11B to R2-U1-P10B
R3-U26 to Rack 4 R3-U26-P9A to R4-U1-P5B 5 meters

R3-U26-P9B to R4-U1-P6B

The following table shows the cable connections for the fourth spine switch (R4-U1) when
cabling four full racks together.

TABLE 30 Leaf Switch Connections for the Fourth Rack in a Four-Rack System

Leaf Switch Connection Cable Length


R4-U32 within Rack 4 R4-U32-P8A to R4-U1-P3A 5 meters

R4-U32-P8B to R4-U1-P4A
R4-U32 to Rack 1 R4-U32-P9A to R1-U1-P5A 10 meters

R4-U32-P9B to R1-U1-P6A
R4-U32 to Rack 2 R4-U32-P10A to R2-U1-P7A 5 meters

R4-U32-P10B to R2-U1-P8A
R4-U32 to Rack 3 R4-U32-P11A to R3-U1-P9A 5 meters

R4-U32-P11B to R3-U1-P10A
R4-U26 within Rack 4 R4-U26-P8A to R4-U1-P3B 5 meters

R4-U26-P8B to R4-U1-P4B
R4-U26 to Rack 1 R4-U26-P9A to R1-U1-P5B 10 meters

R4-U26-P9B to R1-U1-P6B
R4-U26 to Rack 2 R4-U26-P10A to R2-U1-P7B 5 meters

R4-U26-P10B to R2-U1-P8B
R4-U26 to Rack 3 R4-U26-P11A to R3-U1-P9B 5 meters



R4-U26-P11B to R3-U1-P10B

Five-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-U1) when
cabling five full racks together.

TABLE 31 Leaf Switch Connections for the First Rack in a Five-Rack System
Leaf Switch Connection Cable Length
R1 U32 within Rack 1 R1-U32-P8A to R1-U1-P3A 3 meters

R1-U32-P8B to R1-U1-P4A
R1 U32 to Rack 2 R1-U32-P9A to R2-U1-P5A 5 meters

R1-U32-P9B to R2-U1-P6A
R1 U32 to Rack 3 R1-U32-P10A to R3-U1-P7A 5 meters

R1-U32-P10B to R3-U1-P8A
R1 U32 to Rack 4 R1-U32-P11A to R4-U1-P9A 10 meters
R1 U32 to Rack 5 R1-U32-P11B to R5-U1-P10A 10 meters
R1 U26 within Rack 1 R1-U26-P8A to R1-U1-P3B 3 meters

R1-U26-P8B to R1-U1-P4B
R1 U26 to Rack 2 R1-U26-P9A to R2-U1-P5B 3 meters

R1-U26-P9B to R2-U1-P6B
R1 U26 to Rack 3 R1-U26-P10A to R3-U1-P7B 5 meters

R1-U26-P10B to R3-U1-P8B
R1 U26 to Rack 4 R1-U26-P11A to R4-U1-P9B 10 meters
R1 U26 to Rack 5 R1-U26-P11B to R5-U1-P10B 10 meters

The following table shows the cable connections for the second spine switch (R2-U1) when
cabling five full racks together.

TABLE 32 Leaf Switch Connections for the Second Rack in a Five-Rack System
Leaf Switch Connection Cable Length
R2 U32 within Rack 2 R2-U32-P8A to R2-U1-P3A 3 meters

R2-U32-P8B to R2-U1-P4A



R2 U32 to Rack 1 R2-U32-P11B to R1-U1-P10A 5 meters
R2 U32 to Rack 3 R2-U32-P9A to R3-U1-P5A 5 meters

R2-U32-P9B to R3-U1-P6A
R2 U32 to Rack 4 R2-U32-P10A to R4-U1-P7A 5 meters

R2-U32-P10B to R4-U1-P8A
R2 U32 to Rack 5 R2-U32-P11A to R5-U1-P9A 10 meters
R2 U26 within Rack 2 R2-U26-P8A to R2-U1-P3B 3 meters

R2-U26-P8B to R2-U1-P4B
R2 U26 to Rack 1 R2-U26-P11B to R1-U1-P10B 5 meters
R2 U26 to Rack 3 R2-U26-P9A to R3-U1-P5B 5 meters

R2-U26-P9B to R3-U1-P6B
R2 U26 to Rack 4 R2-U26-P10A to R4-U1-P7B 5 meters

R2-U26-P10B to R4-U1-P8B
R2 U26 to Rack 5 R2-U26-P11A to R5-U1-P9B 10 meters

The following table shows the cable connections for the third spine switch (R3-U1) when
cabling five full racks together.

TABLE 33 Leaf Switch Connections for the Third Rack in a Five-Rack System
Leaf Switch Connection Cable Length
R3 U32 within Rack 3 R3-U32-P8A to R3-U1-P3A 3 meters

R3-U32-P8B to R3-U1-P4A
R3 U32 to Rack 1 R3-U32-P11A to R1-U1-P9A 5 meters
R3 U32 to Rack 2 R3-U32-P11B to R2-U1-P10A 5 meters
R3 U32 to Rack 4 R3-U32-P9A to R4-U1-P5A 5 meters

R3-U32-P9B to R4-U1-P6A
R3 U32 to Rack 5 R3-U32-P10A to R5-U1-P7A 5 meters

R3-U32-P10B to R5-U1-P8A
R3 U26 within Rack 3 R3-U26-P8A to R3-U1-P3B 3 meters

R3-U26-P8B to R3-U1-P4B
R3 U26 to Rack 1 R3-U26-P11A to R1-U1-P9B 5 meters
R3 U26 to Rack 2 R3-U26-P11B to R2-U1-P10B 5 meters
R3 U26 to Rack 4 R3-U26-P9A to R4-U1-P5B 5 meters

R3-U26-P9B to R4-U1-P6B



R3 U26 to Rack 5 R3-U26-P10A to R5-U1-P7B 5 meters

R3-U26-P10B to R5-U1-P8B

The following table shows the cable connections for the fourth spine switch (R4-U1) when
cabling five full racks together.

TABLE 34 Leaf Switch Connections for the Fourth Rack in a Five-Rack System

Leaf Switch Connection Cable Length


R4 U32 within Rack 4 R4-U32-P8A to R4-U1-P3A 3 meters

R4-U32-P8B to R4-U1-P4A
R4 U32 to Rack 1 R4-U32-P10A to R1-U1-P7A 10 meters

R4-U32-P10B to R1-U1-P8A
R4 U32 to Rack 2 R4-U32-P11A to R2-U1-P9A 5 meters
R4 U32 to Rack 3 R4-U32-P11B to R3-U1-P10A 5 meters
R4 U32 to Rack 5 R4-U32-P9A to R5-U1-P5A 5 meters

R4-U32-P9B to R5-U1-P6A
R4 U26 within Rack 4 R4-U26-P8A to R4-U1-P3B 3 meters

R4-U26-P8B to R4-U1-P4B
R4 U26 to Rack 1 R4-U26-P10A to R1-U1-P7B 10 meters

R4-U26-P10B to R1-U1-P8B
R4 U26 to Rack 2 R4-U26-P11A to R2-U1-P9B 5 meters
R4 U26 to Rack 3 R4-U26-P11B to R3-U1-P10B 5 meters
R4 U26 to Rack 5 R4-U26-P9A to R5-U1-P5B 5 meters

R4-U26-P9B to R5-U1-P6B

The following table shows the cable connections for the fifth spine switch (R5-U1) when
cabling five full racks together.

TABLE 35 Leaf Switch Connections for the Fifth Rack in a Five-Rack System

Leaf Switch Connection Cable Length


R5 U32 within Rack 5 R5-U32-P8A to R5-U1-P3A 3 meters

R5-U32-P8B to R5-U1-P4A
R5 U32 to Rack 1 R5-U32-P9A to R1-U1-P5A 10 meters



R5-U32-P9B to R1-U1-P6A
R5 U32 to Rack 2 R5-U32-P10A to R2-U1-P7A 10 meters

R5-U32-P10B to R2-U1-P8A
R5 U32 to Rack 3 R5-U32-P11A to R3-U1-P9A 5 meters
R5 U32 to Rack 4 R5-U32-P11B to R4-U1-P10A 5 meters
R5 U26 within Rack 5 R5-U26-P8A to R5-U1-P3B 3 meters

R5-U26-P8B to R5-U1-P4B
R5 U26 to Rack 1 R5-U26-P9A to R1-U1-P5B 10 meters

R5-U26-P9B to R1-U1-P6B
R5 U26 to Rack 2 R5-U26-P10A to R2-U1-P7B 10 meters

R5-U26-P10B to R2-U1-P8B
R5 U26 to Rack 3 R5-U26-P11A to R3-U1-P9B 5 meters
R5 U26 to Rack 4 R5-U26-P11B to R4-U1-P10B 5 meters

Six-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-U1) when
cabling six full racks together.

TABLE 36 Leaf Switch Connections for the First Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R1 U32 within Rack 1 R1-U32-P8A to R1-U1-P3A 3 meters

R1-U32-P8B to R1-U1-P4A
R1 U32 to Rack 2 R1-U32-P9A to R2-U1-P5A 5 meters

R1-U32-P9B to R2-U1-P6A
R1 U32 to Rack 3 R1-U32-P10A to R3-U1-P7A 5 meters
R1 U32 to Rack 4 R1-U32-P10B to R4-U1-P8A 10 meters
R1 U32 to Rack 5 R1-U32-P11A to R5-U1-P9A 10 meters
R1 U32 to Rack 6 R1-U32-P11B to R6-U1-P10A 10 meters
R1 U26 within Rack 1 R1-U26-P8A to R1-U1-P3B 3 meters

R1-U26-P8B to R1-U1-P4B
R1 U26 to Rack 2 R1-U26-P9A to R2-U1-P5B 5 meters

R1-U26-P9B to R2-U1-P6B



R1 U26 to Rack 3 R1-U26-P10A to R3-U1-P7B 5 meters
R1 U26 to Rack 4 R1-U26-P10B to R4-U1-P8B 10 meters
R1 U26 to Rack 5 R1-U26-P11A to R5-U1-P9B 10 meters
R1 U26 to Rack 6 R1-U26-P11B to R6-U1-P10B 10 meters

The following table shows the cable connections for the second spine switch (R2-U1) when
cabling six full racks together.

TABLE 37 Leaf Switch Connections for the Second Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R2 U32 within Rack 2 R2-U32-P8A to R2-U1-P3A 3 meters

R2-U32-P8B to R2-U1-P4A
R2 U32 to Rack 1 R2-U32-P11B to R1-U1-P10A 5 meters
R2 U32 to Rack 3 R2-U32-P9A to R3-U1-P5A 5 meters

R2-U32-P9B to R3-U1-P6A
R2 U32 to Rack 4 R2-U32-P10A to R4-U1-P7A 5 meters
R2 U32 to Rack 5 R2-U32-P10B to R5-U1-P8A 10 meters
R2 U32 to Rack 6 R2-U32-P11A to R6-U1-P9A 10 meters
R2 U26 within Rack 2 R2-U26-P8A to R2-U1-P3B 3 meters

R2-U26-P8B to R2-U1-P4B
R2 U26 to Rack 1 R2-U26-P11B to R1-U1-P10B 5 meters
R2 U26 to Rack 3 R2-U26-P9A to R3-U1-P5B 5 meters

R2-U26-P9B to R3-U1-P6B
R2 U26 to Rack 4 R2-U26-P10A to R4-U1-P7B 5 meters
R2 U26 to Rack 5 R2-U26-P10B to R5-U1-P8B 10 meters
R2 U26 to Rack 6 R2-U26-P11A to R6-U1-P9B 10 meters

The following table shows the cable connections for the third spine switch (R3-U1) when
cabling six full racks together.

TABLE 38 Leaf Switch Connections for the Third Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R3 U32 within Rack 3 R3-U32-P8A to R3-U1-P3A 3 meters

R3-U32-P8B to R3-U1-P4A

R3 U32 to Rack 1 R3-U32-P11A to R1-U1-P9A 5 meters
R3 U32 to Rack 2 R3-U32-P11B to R2-U1-P10A 5 meters
R3 U32 to Rack 4 R3-U32-P9A to R4-U1-P5A 5 meters

R3-U32-P9B to R4-U1-P6A
R3 U32 to Rack 5 R3-U32-P10A to R5-U1-P7A 5 meters
R3 U32 to Rack 6 R3-U32-P10B to R6-U1-P8A 5 meters
R3 U26 within Rack 3 R3-U26-P8A to R3-U1-P3B 3 meters

R3-U26-P8B to R3-U1-P4B
R3 U26 to Rack 1 R3-U26-P11A to R1-U1-P9B 5 meters
R3 U26 to Rack 2 R3-U26-P11B to R2-U1-P10B 5 meters
R3 U26 to Rack 4 R3-U26-P9A to R4-U1-P5B 5 meters

R3-U26-P9B to R4-U1-P6B
R3 U26 to Rack 5 R3-U26-P10A to R5-U1-P7B 5 meters
R3 U26 to Rack 6 R3-U26-P10B to R6-U1-P8B 5 meters

The following table shows the cable connections for the fourth spine switch (R4-U1) when
cabling six full racks together.

TABLE 39 Leaf Switch Connections for the Fourth Rack in a Six-Rack System
Leaf Switch Connection Cable Length
R4 U32 within Rack 4 R4-U32-P8A to R4-U1-P3A 3 meters

R4-U32-P8B to R4-U1-P4A
R4 U32 to Rack 1 R4-U32-P10B to R1-U1-P8A 10 meters
R4 U32 to Rack 2 R4-U32-P11A to R2-U1-P9A 5 meters
R4 U32 to Rack 3 R4-U32-P11B to R3-U1-P10A 5 meters
R4 U32 to Rack 5 R4-U32-P9A to R5-U1-P5A 5 meters

R4-U32-P9B to R5-U1-P6A
R4 U32 to Rack 6 R4-U32-P10A to R6-U1-P7A 5 meters
R4 U26 within Rack 4 R4-U26-P8A to R4-U1-P3B 3 meters

R4-U26-P8B to R4-U1-P4B
R4 U26 to Rack 1 R4-U26-P10B to R1-U1-P8B 10 meters
R4 U26 to Rack 2 R4-U26-P11A to R2-U1-P9B 5 meters
R4 U26 to Rack 3 R4-U26-P11B to R3-U1-P10B 5 meters
R4 U26 to Rack 5 R4-U26-P9A to R5-U1-P5B 5 meters

R4-U26-P9B to R5-U1-P6B

R4 U26 to Rack 6 R4-U26-P10A to R6-U1-P7B 5 meters

The following table shows the cable connections for the fifth spine switch (R5-U1) when
cabling six full racks together.

TABLE 40 Leaf Switch Connections for the Fifth Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R5 U32 within Rack 5 R5-U32-P8A to R5-U1-P3A 3 meters

R5-U32-P8B to R5-U1-P4A
R5 U32 to Rack 1 R5-U32-P10A to R1-U1-P7A 10 meters
R5 U32 to Rack 2 R5-U32-P10B to R2-U1-P8A 10 meters
R5 U32 to Rack 3 R5-U32-P11A to R3-U1-P9A 5 meters
R5 U32 to Rack 4 R5-U32-P11B to R4-U1-P10A 5 meters
R5 U32 to Rack 6 R5-U32-P9A to R6-U1-P5A 5 meters

R5-U32-P9B to R6-U1-P6A
R5 U26 within Rack 5 R5-U26-P8A to R5-U1-P3B 3 meters

R5-U26-P8B to R5-U1-P4B
R5 U26 to Rack 1 R5-U26-P10A to R1-U1-P7B 10 meters
R5 U26 to Rack 2 R5-U26-P10B to R2-U1-P8B 10 meters
R5 U26 to Rack 3 R5-U26-P11A to R3-U1-P9B 5 meters
R5 U26 to Rack 4 R5-U26-P11B to R4-U1-P10B 5 meters
R5 U26 to Rack 6 R5-U26-P9A to R6-U1-P5B 5 meters

R5-U26-P9B to R6-U1-P6B

The following table shows the cable connections for the sixth spine switch (R6-U1) when
cabling six full racks together.

TABLE 41 Leaf Switch Connections for the Sixth Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R6 U32 within Rack 6 R6-U32-P8A to R6-U1-P3A 3 meters

R6-U32-P8B to R6-U1-P4A
R6 U32 to Rack 1 R6-U32-P9A to R1-U1-P5A 10 meters

R6-U32-P9B to R1-U1-P6A
R6 U32 to Rack 2 R6-U32-P10A to R2-U1-P7A 10 meters

R6 U32 to Rack 3 R6-U32-P10B to R3-U1-P8A 5 meters
R6 U32 to Rack 4 R6-U32-P11A to R4-U1-P9A 5 meters
R6 U32 to Rack 5 R6-U32-P11B to R5-U1-P10A 5 meters
R6 U26 within Rack 6 R6-U26-P8A to R6-U1-P3B 3 meters

R6-U26-P8B to R6-U1-P4B
R6 U26 to Rack 1 R6-U26-P9A to R1-U1-P5B 10 meters

R6-U26-P9B to R1-U1-P6B
R6 U26 to Rack 2 R6-U26-P10A to R2-U1-P7B 10 meters
R6 U26 to Rack 3 R6-U26-P10B to R3-U1-P8B 5 meters
R6 U26 to Rack 4 R6-U26-P11A to R4-U1-P9B 5 meters
R6 U26 to Rack 5 R6-U26-P11B to R5-U1-P10B 5 meters

Seven-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-U1) when
cabling seven full racks together.

TABLE 42 Leaf Switch Connections for the First Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R1 U32 within Rack 1 R1-U32-P8A to R1-U1-P3A 3 meters

R1-U32-P8B to R1-U1-P4A
R1 U32 to Rack 2 R1-U32-P9A to R2-U1-P5A 5 meters
R1 U32 to Rack 3 R1-U32-P9B to R3-U1-P6A 5 meters
R1 U32 to Rack 4 R1-U32-P10A to R4-U1-P7A 10 meters
R1 U32 to Rack 5 R1-U32-P10B to R5-U1-P8A 10 meters
R1 U32 to Rack 6 R1-U32-P11A to R6-U1-P9A 10 meters
R1 U32 to Rack 7 R1-U32-P11B to R7-U1-P10A 10 meters
R1 U26 within Rack 1 R1-U26-P8A to R1-U1-P3B 3 meters

R1-U26-P8B to R1-U1-P4B
R1 U26 to Rack 2 R1-U26-P9A to R2-U1-P5B 5 meters
R1 U26 to Rack 3 R1-U26-P9B to R3-U1-P6B 5 meters
R1 U26 to Rack 4 R1-U26-P10A to R4-U1-P7B 10 meters
R1 U26 to Rack 5 R1-U26-P10B to R5-U1-P8B 10 meters
R1 U26 to Rack 6 R1-U26-P11A to R6-U1-P9B 10 meters
R1 U26 to Rack 7 R1-U26-P11B to R7-U1-P10B 10 meters

The following table shows the cable connections for the second spine switch (R2-U1) when
cabling seven full racks together.

TABLE 43 Leaf Switch Connections for the Second Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R2 U32 within Rack 2 R2-U32-P8A to R2-U1-P3A 3 meters

R2-U32-P8B to R2-U1-P4A
R2 U32 to Rack 1 R2-U32-P11B to R1-U1-P10A 5 meters
R2 U32 to Rack 3 R2-U32-P9A to R3-U1-P5A 5 meters
R2 U32 to Rack 4 R2-U32-P9B to R4-U1-P6A 5 meters
R2 U32 to Rack 5 R2-U32-P10A to R5-U1-P7A 10 meters
R2 U32 to Rack 6 R2-U32-P10B to R6-U1-P8A 10 meters
R2 U32 to Rack 7 R2-U32-P11A to R7-U1-P9A 10 meters
R2 U26 within Rack 2 R2-U26-P8A to R2-U1-P3B 3 meters

R2-U26-P8B to R2-U1-P4B
R2 U26 to Rack 1 R2-U26-P11B to R1-U1-P10B 5 meters
R2 U26 to Rack 3 R2-U26-P9A to R3-U1-P5B 5 meters
R2 U26 to Rack 4 R2-U26-P9B to R4-U1-P6B 5 meters
R2 U26 to Rack 5 R2-U26-P10A to R5-U1-P7B 10 meters
R2 U26 to Rack 6 R2-U26-P10B to R6-U1-P8B 10 meters
R2 U26 to Rack 7 R2-U26-P11A to R7-U1-P9B 10 meters

The following table shows the cable connections for the third spine switch (R3-U1) when
cabling seven full racks together.

TABLE 44 Leaf Switch Connections for the Third Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R3 U32 within Rack 3 R3-U32-P8A to R3-U1-P3A 3 meters

R3-U32-P8B to R3-U1-P4A
R3 U32 to Rack 1 R3-U32-P11A to R1-U1-P9A 5 meters
R3 U32 to Rack 2 R3-U32-P11B to R2-U1-P10A 5 meters
R3 U32 to Rack 4 R3-U32-P9A to R4-U1-P5A 5 meters
R3 U32 to Rack 5 R3-U32-P9B to R5-U1-P6A 5 meters
R3 U32 to Rack 6 R3-U32-P10A to R6-U1-P7A 10 meters
R3 U32 to Rack 7 R3-U32-P10B to R7-U1-P8A 10 meters
R3 U26 within Rack 3 R3-U26-P8A to R3-U1-P3B 3 meters

R3-U26-P8B to R3-U1-P4B

R3 U26 to Rack 1 R3-U26-P11A to R1-U1-P9B 5 meters
R3 U26 to Rack 2 R3-U26-P11B to R2-U1-P10B 5 meters
R3 U26 to Rack 4 R3-U26-P9A to R4-U1-P5B 5 meters
R3 U26 to Rack 5 R3-U26-P9B to R5-U1-P6B 5 meters
R3 U26 to Rack 6 R3-U26-P10A to R6-U1-P7B 10 meters
R3 U26 to Rack 7 R3-U26-P10B to R7-U1-P8B 10 meters

The following table shows the cable connections for the fourth spine switch (R4-U1) when
cabling seven full racks together.

TABLE 45 Leaf Switch Connections for the Fourth Rack in a Seven-Rack System

Leaf Switch Connection Cable Length


R4 U32 within Rack 4 R4-U32-P8A to R4-U1-P3A 3 meters

R4-U32-P8B to R4-U1-P4A
R4 U32 to Rack 1 R4-U32-P10B to R1-U1-P8A 10 meters
R4 U32 to Rack 2 R4-U32-P11A to R2-U1-P9A 5 meters
R4 U32 to Rack 3 R4-U32-P11B to R3-U1-P10A 5 meters
R4 U32 to Rack 5 R4-U32-P9A to R5-U1-P5A 5 meters
R4 U32 to Rack 6 R4-U32-P9B to R6-U1-P6A 5 meters
R4 U32 to Rack 7 R4-U32-P10A to R7-U1-P7A 10 meters
R4 U26 within Rack 4 R4-U26-P8A to R4-U1-P3B 3 meters

R4-U26-P8B to R4-U1-P4B
R4 U26 to Rack 1 R4-U26-P10B to R1-U1-P8B 10 meters
R4 U26 to Rack 2 R4-U26-P11A to R2-U1-P9B 5 meters
R4 U26 to Rack 3 R4-U26-P11B to R3-U1-P10B 5 meters
R4 U26 to Rack 5 R4-U26-P9A to R5-U1-P5B 5 meters
R4 U26 to Rack 6 R4-U26-P9B to R6-U1-P6B 5 meters
R4 U26 to Rack 7 R4-U26-P10A to R7-U1-P7B 10 meters

The following table shows the cable connections for the fifth spine switch (R5-U1) when
cabling seven full racks together.

TABLE 46 Leaf Switch Connections for the Fifth Rack in a Seven-Rack System

Leaf Switch Connection Cable Length


R5 U32 within Rack 5 R5-U32-P8A to R5-U1-P3A 3 meters

R5-U32-P8B to R5-U1-P4A
R5 U32 to Rack 1 R5-U32-P10A to R1-U1-P7A 10 meters
R5 U32 to Rack 2 R5-U32-P10B to R2-U1-P8A 10 meters
R5 U32 to Rack 3 R5-U32-P11A to R3-U1-P9A 5 meters
R5 U32 to Rack 4 R5-U32-P11B to R4-U1-P10A 5 meters
R5 U32 to Rack 6 R5-U32-P9A to R6-U1-P5A 5 meters
R5 U32 to Rack 7 R5-U32-P9B to R7-U1-P6A 5 meters
R5 U26 within Rack 5 R5-U26-P8A to R5-U1-P3B 3 meters

R5-U26-P8B to R5-U1-P4B
R5 U26 to Rack 1 R5-U26-P10A to R1-U1-P7B 10 meters
R5 U26 to Rack 2 R5-U26-P10B to R2-U1-P8B 10 meters
R5 U26 to Rack 3 R5-U26-P11A to R3-U1-P9B 5 meters
R5 U26 to Rack 4 R5-U26-P11B to R4-U1-P10B 5 meters
R5 U26 to Rack 6 R5-U26-P9A to R6-U1-P5B 5 meters
R5 U26 to Rack 7 R5-U26-P9B to R7-U1-P6B 5 meters

The following table shows the cable connections for the sixth spine switch (R6-U1) when
cabling seven full racks together.

TABLE 47 Leaf Switch Connections for the Sixth Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R6 U32 within Rack 6 R6-U32-P8A to R6-U1-P3A 3 meters

R6-U32-P8B to R6-U1-P4A
R6 U32 to Rack 1 R6-U32-P9B to R1-U1-P6A 10 meters
R6 U32 to Rack 2 R6-U32-P10A to R2-U1-P7A 10 meters
R6 U32 to Rack 3 R6-U32-P10B to R3-U1-P8A 5 meters
R6 U32 to Rack 4 R6-U32-P11A to R4-U1-P9A 5 meters
R6 U32 to Rack 5 R6-U32-P11B to R5-U1-P10A 5 meters
R6 U32 to Rack 7 R6-U32-P9A to R7-U1-P5A 5 meters
R6 U26 within Rack 6 R6-U26-P8A to R6-U1-P3B 3 meters

R6-U26-P8B to R6-U1-P4B
R6 U26 to Rack 1 R6-U26-P9B to R1-U1-P6B 10 meters
R6 U26 to Rack 2 R6-U26-P10A to R2-U1-P7B 10 meters
R6 U26 to Rack 3 R6-U26-P10B to R3-U1-P8B 5 meters
R6 U26 to Rack 4 R6-U26-P11A to R4-U1-P9B 5 meters
R6 U26 to Rack 5 R6-U26-P11B to R5-U1-P10B 5 meters

R6 U26 to Rack 7 R6-U26-P9A to R7-U1-P5B 5 meters

The following table shows the cable connections for the seventh spine switch (R7-U1) when
cabling seven full racks together.

TABLE 48 Leaf Switch Connections for the Seventh Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R7 U32 within Rack 7 R7-U32-P8A to R7-U1-P3A 3 meters

R7-U32-P8B to R7-U1-P4A
R7 U32 to Rack 1 R7-U32-P9A to R1-U1-P5A 10 meters
R7 U32 to Rack 2 R7-U32-P9B to R2-U1-P6A 10 meters
R7 U32 to Rack 3 R7-U32-P10A to R3-U1-P7A 10 meters
R7 U32 to Rack 4 R7-U32-P10B to R4-U1-P8A 10 meters
R7 U32 to Rack 5 R7-U32-P11A to R5-U1-P9A 5 meters
R7 U32 to Rack 6 R7-U32-P11B to R6-U1-P10A 5 meters
R7 U26 within Rack 7 R7-U26-P8A to R7-U1-P3B 3 meters

R7-U26-P8B to R7-U1-P4B
R7 U26 to Rack 1 R7-U26-P9A to R1-U1-P5B 10 meters
R7 U26 to Rack 2 R7-U26-P9B to R2-U1-P6B 10 meters
R7 U26 to Rack 3 R7-U26-P10A to R3-U1-P7B 10 meters
R7 U26 to Rack 4 R7-U26-P10B to R4-U1-P8B 10 meters
R7 U26 to Rack 5 R7-U26-P11A to R5-U1-P9B 5 meters
R7 U26 to Rack 6 R7-U26-P11B to R6-U1-P10B 5 meters

Eight-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-U1) when
cabling eight full racks together.

TABLE 49 Leaf Switch Connections for the First Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R1 U32 within Rack 1 R1-U32-P8A to R1-U1-P3A 3 meters
R1 U32 to Rack 2 R1-U32-P8B to R2-U1-P4A 5 meters
R1 U32 to Rack 3 R1-U32-P9A to R3-U1-P5A 5 meters
R1 U32 to Rack 4 R1-U32-P9B to R4-U1-P6A 10 meters

R1 U32 to Rack 5 R1-U32-P10A to R5-U1-P7A 10 meters
R1 U32 to Rack 6 R1-U32-P10B to R6-U1-P8A 10 meters
R1 U32 to Rack 7 R1-U32-P11A to R7-U1-P9A 10 meters
R1 U32 to Rack 8 R1-U32-P11B to R8-U1-P10A 10 meters
R1 U26 within Rack 1 R1-U26-P8A to R1-U1-P3B 3 meters
R1 U26 to Rack 2 R1-U26-P8B to R2-U1-P4B 5 meters
R1 U26 to Rack 3 R1-U26-P9A to R3-U1-P5B 5 meters
R1 U26 to Rack 4 R1-U26-P9B to R4-U1-P6B 10 meters
R1 U26 to Rack 5 R1-U26-P10A to R5-U1-P7B 10 meters
R1 U26 to Rack 6 R1-U26-P10B to R6-U1-P8B 10 meters
R1 U26 to Rack 7 R1-U26-P11A to R7-U1-P9B 10 meters
R1 U26 to Rack 8 R1-U26-P11B to R8-U1-P10B 10 meters

The following table shows the cable connections for the second spine switch (R2-U1) when
cabling eight full racks together.

TABLE 50 Leaf Switch Connections for the Second Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R2 U32 within Rack 2 R2-U32-P8A to R2-U1-P3A 3 meters
R2 U32 to Rack 1 R2-U32-P11B to R1-U1-P10A 5 meters
R2 U32 to Rack 3 R2-U32-P8B to R3-U1-P4A 5 meters
R2 U32 to Rack 4 R2-U32-P9A to R4-U1-P5A 5 meters
R2 U32 to Rack 5 R2-U32-P9B to R5-U1-P6A 10 meters
R2 U32 to Rack 6 R2-U32-P10A to R6-U1-P7A 10 meters
R2 U32 to Rack 7 R2-U32-P10B to R7-U1-P8A 10 meters
R2 U32 to Rack 8 R2-U32-P11A to R8-U1-P9A 10 meters
R2 U26 within Rack 2 R2-U26-P8A to R2-U1-P3B 3 meters
R2 U26 to Rack 1 R2-U26-P11B to R1-U1-P10B 5 meters
R2 U26 to Rack 3 R2-U26-P8B to R3-U1-P4B 5 meters
R2 U26 to Rack 4 R2-U26-P9A to R4-U1-P5B 5 meters
R2 U26 to Rack 5 R2-U26-P9B to R5-U1-P6B 10 meters
R2 U26 to Rack 6 R2-U26-P10A to R6-U1-P7B 10 meters
R2 U26 to Rack 7 R2-U26-P10B to R7-U1-P8B 10 meters
R2 U26 to Rack 8 R2-U26-P11A to R8-U1-P9B 10 meters

The following table shows the cable connections for the third spine switch (R3-U1) when
cabling eight full racks together.

TABLE 51 Leaf Switch Connections for the Third Rack in an Eight-Rack System

Leaf Switch Connection Cable Length


R3 U32 within Rack 3 R3-U32-P8A to R3-U1-P3A 3 meters
R3 U32 to Rack 1 R3-U32-P11A to R1-U1-P9A 5 meters
R3 U32 to Rack 2 R3-U32-P11B to R2-U1-P10A 5 meters
R3 U32 to Rack 4 R3-U32-P8B to R4-U1-P4A 5 meters
R3 U32 to Rack 5 R3-U32-P9A to R5-U1-P5A 5 meters
R3 U32 to Rack 6 R3-U32-P9B to R6-U1-P6A 5 meters
R3 U32 to Rack 7 R3-U32-P10A to R7-U1-P7A 10 meters
R3 U32 to Rack 8 R3-U32-P10B to R8-U1-P8A 10 meters
R3 U26 within Rack 3 R3-U26-P8A to R3-U1-P3B 3 meters
R3 U26 to Rack 1 R3-U26-P11A to R1-U1-P9B 5 meters
R3 U26 to Rack 2 R3-U26-P11B to R2-U1-P10B 5 meters
R3 U26 to Rack 4 R3-U26-P8B to R4-U1-P4B 5 meters
R3 U26 to Rack 5 R3-U26-P9A to R5-U1-P5B 5 meters
R3 U26 to Rack 6 R3-U26-P9B to R6-U1-P6B 5 meters
R3 U26 to Rack 7 R3-U26-P10A to R7-U1-P7B 10 meters
R3 U26 to Rack 8 R3-U26-P10B to R8-U1-P8B 10 meters

The following table shows the cable connections for the fourth spine switch (R4-U1) when
cabling eight full racks together.

TABLE 52 Leaf Switch Connections for the Fourth Rack in an Eight-Rack System

Leaf Switch Connection Cable Length


R4 U32 within Rack 4 R4-U32-P8A to R4-U1-P3A 3 meters
R4 U32 to Rack 1 R4-U32-P10B to R1-U1-P8A 10 meters
R4 U32 to Rack 2 R4-U32-P11A to R2-U1-P9A 5 meters
R4 U32 to Rack 3 R4-U32-P11B to R3-U1-P10A 5 meters
R4 U32 to Rack 5 R4-U32-P8B to R5-U1-P4A 5 meters
R4 U32 to Rack 6 R4-U32-P9A to R6-U1-P5A 5 meters
R4 U32 to Rack 7 R4-U32-P9B to R7-U1-P6A 10 meters
R4 U32 to Rack 8 R4-U32-P10A to R8-U1-P7A 10 meters
R4 U26 within Rack 4 R4-U26-P8A to R4-U1-P3B 3 meters
R4 U26 to Rack 1 R4-U26-P10B to R1-U1-P8B 10 meters
R4 U26 to Rack 2 R4-U26-P11A to R2-U1-P9B 5 meters
R4 U26 to Rack 3 R4-U26-P11B to R3-U1-P10B 5 meters
R4 U26 to Rack 5 R4-U26-P8B to R5-U1-P4B 5 meters

R4 U26 to Rack 6 R4-U26-P9A to R6-U1-P5B 5 meters
R4 U26 to Rack 7 R4-U26-P9B to R7-U1-P6B 10 meters
R4 U26 to Rack 8 R4-U26-P10A to R8-U1-P7B 10 meters

The following table shows the cable connections for the fifth spine switch (R5-U1) when
cabling eight full racks together.

TABLE 53 Leaf Switch Connections for the Fifth Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R5 U32 within Rack 5 R5-U32-P8A to R5-U1-P3A 3 meters
R5 U32 to Rack 1 R5-U32-P10A to R1-U1-P7A 10 meters
R5 U32 to Rack 2 R5-U32-P10B to R2-U1-P8A 10 meters
R5 U32 to Rack 3 R5-U32-P11A to R3-U1-P9A 5 meters
R5 U32 to Rack 4 R5-U32-P11B to R4-U1-P10A 5 meters
R5 U32 to Rack 6 R5-U32-P8B to R6-U1-P4A 5 meters
R5 U32 to Rack 7 R5-U32-P9A to R7-U1-P5A 5 meters
R5 U32 to Rack 8 R5-U32-P9B to R8-U1-P6A 10 meters
R5 U26 within Rack 5 R5-U26-P8A to R5-U1-P3B 3 meters
R5 U26 to Rack 1 R5-U26-P10A to R1-U1-P7B 10 meters
R5 U26 to Rack 2 R5-U26-P10B to R2-U1-P8B 10 meters
R5 U26 to Rack 3 R5-U26-P11A to R3-U1-P9B 5 meters
R5 U26 to Rack 4 R5-U26-P11B to R4-U1-P10B 5 meters
R5 U26 to Rack 6 R5-U26-P8B to R6-U1-P4B 5 meters
R5 U26 to Rack 7 R5-U26-P9A to R7-U1-P5B 5 meters
R5 U26 to Rack 8 R5-U26-P9B to R8-U1-P6B 10 meters

The following table shows the cable connections for the sixth spine switch (R6-U1) when
cabling eight full racks together.

TABLE 54 Leaf Switch Connections for the Sixth Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R6 U32 within Rack 6 R6-U32-P8A to R6-U1-P3A 3 meters
R6 U32 to Rack 1 R6-U32-P9B to R1-U1-P6A 10 meters
R6 U32 to Rack 2 R6-U32-P10A to R2-U1-P7A 10 meters
R6 U32 to Rack 3 R6-U32-P10B to R3-U1-P8A 5 meters
R6 U32 to Rack 4 R6-U32-P11A to R4-U1-P9A 5 meters

R6 U32 to Rack 5 R6-U32-P11B to R5-U1-P10A 5 meters
R6 U32 to Rack 7 R6-U32-P8B to R7-U1-P4A 5 meters
R6 U32 to Rack 8 R6-U32-P9A to R8-U1-P5A 5 meters
R6 U26 within Rack 6 R6-U26-P8A to R6-U1-P3B 3 meters
R6 U26 to Rack 1 R6-U26-P9B to R1-U1-P6B 10 meters
R6 U26 to Rack 2 R6-U26-P10A to R2-U1-P7B 10 meters
R6 U26 to Rack 3 R6-U26-P10B to R3-U1-P8B 5 meters
R6 U26 to Rack 4 R6-U26-P11A to R4-U1-P9B 5 meters
R6 U26 to Rack 5 R6-U26-P11B to R5-U1-P10B 5 meters
R6 U26 to Rack 7 R6-U26-P8B to R7-U1-P4B 5 meters
R6 U26 to Rack 8 R6-U26-P9A to R8-U1-P5B 5 meters

The following table shows the cable connections for the seventh spine switch (R7-U1) when
cabling eight full racks together.

TABLE 55 Leaf Switch Connections for the Seventh Rack in an Eight-Rack System

Leaf Switch Connection Cable Length


R7 U32 within Rack 7 R7-U32-P8A to R7-U1-P3A 3 meters
R7 U32 to Rack 1 R7-U32-P9A to R1-U1-P5A 10 meters
R7 U32 to Rack 2 R7-U32-P9B to R2-U1-P6A 10 meters
R7 U32 to Rack 3 R7-U32-P10A to R3-U1-P7A 10 meters
R7 U32 to Rack 4 R7-U32-P10B to R4-U1-P8A 10 meters
R7 U32 to Rack 5 R7-U32-P11A to R5-U1-P9A 5 meters
R7 U32 to Rack 6 R7-U32-P11B to R6-U1-P10A 5 meters
R7 U32 to Rack 8 R7-U32-P8B to R8-U1-P4A 5 meters
R7 U26 within Rack 7 R7-U26-P8A to R7-U1-P3B 3 meters
R7 U26 to Rack 1 R7-U26-P9A to R1-U1-P5B 10 meters
R7 U26 to Rack 2 R7-U26-P9B to R2-U1-P6B 10 meters
R7 U26 to Rack 3 R7-U26-P10A to R3-U1-P7B 10 meters
R7 U26 to Rack 4 R7-U26-P10B to R4-U1-P8B 10 meters
R7 U26 to Rack 5 R7-U26-P11A to R5-U1-P9B 5 meters
R7 U26 to Rack 6 R7-U26-P11B to R6-U1-P10B 5 meters
R7 U26 to Rack 8 R7-U26-P8B to R8-U1-P4B 5 meters

The following table shows the cable connections for the eighth spine switch (R8-U1) when
cabling eight full racks together.

TABLE 56 Leaf Switch Connections for the Eighth Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R8 U32 within Rack 8 R8-U32-P8A to R8-U1-P3A 3 meters
R8 U32 to Rack 1 R8-U32-P8B to R1-U1-P4A 10 meters
R8 U32 to Rack 2 R8-U32-P9A to R2-U1-P5A 10 meters
R8 U32 to Rack 3 R8-U32-P9B to R3-U1-P6A 10 meters
R8 U32 to Rack 4 R8-U32-P10A to R4-U1-P7A 10 meters
R8 U32 to Rack 5 R8-U32-P10B to R5-U1-P8A 5 meters
R8 U32 to Rack 6 R8-U32-P11A to R6-U1-P9A 5 meters
R8 U32 to Rack 7 R8-U32-P11B to R7-U1-P10A 5 meters
R8 U26 within Rack 8 R8-U26-P8A to R8-U1-P3B 3 meters
R8 U26 to Rack 1 R8-U26-P8B to R1-U1-P4B 10 meters
R8 U26 to Rack 2 R8-U26-P9A to R2-U1-P5B 10 meters
R8 U26 to Rack 3 R8-U26-P9B to R3-U1-P6B 10 meters
R8 U26 to Rack 4 R8-U26-P10A to R4-U1-P7B 10 meters
R8 U26 to Rack 5 R8-U26-P10B to R5-U1-P8B 5 meters
R8 U26 to Rack 6 R8-U26-P11A to R6-U1-P9B 5 meters
R8 U26 to Rack 7 R8-U26-P11B to R7-U1-P10B 5 meters
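
A quick way to sanity-check a cabling table such as Table 56 before pulling cables is to transcribe it and verify that no switch port is assigned twice. The following sketch (an illustration only, not Oracle tooling; the helper name is hypothetical) encodes the eighth-rack rows as Python data and flags any duplicate leaf-switch or spine-switch port.

    # Connection pairs transcribed from TABLE 56 (leaf port, spine port).
    connections = [
        ("R8-U32-P8A", "R8-U1-P3A"),   ("R8-U32-P8B", "R1-U1-P4A"),
        ("R8-U32-P9A", "R2-U1-P5A"),   ("R8-U32-P9B", "R3-U1-P6A"),
        ("R8-U32-P10A", "R4-U1-P7A"),  ("R8-U32-P10B", "R5-U1-P8A"),
        ("R8-U32-P11A", "R6-U1-P9A"),  ("R8-U32-P11B", "R7-U1-P10A"),
        ("R8-U26-P8A", "R8-U1-P3B"),   ("R8-U26-P8B", "R1-U1-P4B"),
        ("R8-U26-P9A", "R2-U1-P5B"),   ("R8-U26-P9B", "R3-U1-P6B"),
        ("R8-U26-P10A", "R4-U1-P7B"),  ("R8-U26-P10B", "R5-U1-P8B"),
        ("R8-U26-P11A", "R6-U1-P9B"),  ("R8-U26-P11B", "R7-U1-P10B"),
    ]

    def report_duplicates(ports):
        """Print any port that appears more than once in the list."""
        seen = set()
        for port in ports:
            if port in seen:
                print("duplicate port assignment:", port)
            seen.add(port)

    report_duplicates([leaf for leaf, spine in connections])   # leaf-switch ports
    report_duplicates([spine for leaf, spine in connections])  # spine-switch ports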

Connecting Expansion Racks

These topics provide instructions for connecting Oracle SuperCluster T5-8 to an Oracle Exadata
Storage Expansion Rack.
■ “Oracle Exadata Storage Expansion Rack Components” on page 287
■ “Preparing for Installation” on page 288
■ “Installing the Oracle Exadata Storage Expansion Rack” on page 299
■ “Expansion Rack Default IP Addresses” on page 300
■ “Understanding Expansion Rack Internal Cabling” on page 300
■ “Connecting an Expansion Rack to Oracle SuperCluster T5-8” on page 315

Oracle Exadata Storage Expansion Rack Components


Oracle Exadata Storage Expansion Rack provides additional storage for Oracle SuperCluster
T5-8. The additional storage can be used for backups, historical data, and unstructured data.
Oracle Exadata Storage Expansion Racks can be used to add space to Oracle SuperCluster T5-8
as follows:
■ Add new Exadata Storage Servers and grid disks to a new Oracle Automatic Storage
Management (Oracle ASM) disk group.
■ Extend existing disk groups by adding grid disks in Oracle Exadata Storage Expansion
Rack.
■ Split Oracle Exadata Storage Expansion Rack among multiple Oracle SuperCluster T5-8
systems.

Oracle Exadata Storage Expansion Rack is available as a full rack, half rack, or quarter rack.
The following table lists the components included in each type of Oracle Exadata Storage
Expansion Rack.

TABLE 57 Components of Oracle Exadata Storage Expansion Racks


Oracle Exadata Storage Expansion Full Rack
■ 18 Exadata Storage Servers with:
  - 600 GB 15 K RPM High Performance SAS disks or 3 TB 7.2 K RPM High Capacity SAS disks (first Exadata Storage Server type)† or
  - 1.2 TB 10 K RPM High Performance SAS disks or 4 TB 7.2 K RPM High Capacity SAS disks (second Exadata Storage Server type)
■ 3 Sun Datacenter InfiniBand Switch 36 switches
■ High speed flash:
  - 28.8 TB of raw flash capacity (first Exadata Storage Server type)‡ or
  - 57.6 TB of raw flash capacity (second Exadata Storage Server type)‡
■ Keyboard, video, and mouse (KVM) hardware
■ 2 redundant 15 kVA PDUs (single-phase or three-phase, high voltage or low voltage)
■ 1 48-port Cisco Catalyst 4948, model number WS-C4948-S Ethernet switch

Oracle Exadata Storage Expansion Half Rack
■ 9 Exadata Storage Servers with:
  - 600 GB 15 K RPM High Performance SAS disks or 3 TB 7.2 K RPM High Capacity SAS disks (first Exadata Storage Server type)† or
  - 1.2 TB 10 K RPM High Performance SAS disks or 4 TB 7.2 K RPM High Capacity SAS disks (second Exadata Storage Server type)
■ 3 Sun Datacenter InfiniBand Switch 36 switches
■ High speed flash:
  - 14.4 TB of raw flash capacity (first Exadata Storage Server type)‡ or
  - 28.8 TB of raw flash capacity (second Exadata Storage Server type)‡
■ Keyboard, video, and mouse (KVM) hardware
■ 2 redundant 15 kVA PDUs (single-phase or three-phase, high voltage or low voltage)
■ 1 48-port Cisco Catalyst 4948, model number WS-C4948-S Ethernet switch

Oracle Exadata Storage Expansion Quarter Rack
■ 4 Exadata Storage Servers with:
  - 600 GB 15 K RPM High Performance SAS disks or 3 TB 7.2 K RPM High Capacity SAS disks (first Exadata Storage Server type)† or
  - 1.2 TB 10 K RPM High Performance SAS disks or 4 TB 7.2 K RPM High Capacity SAS disks (second Exadata Storage Server type)
■ 2 Sun Datacenter InfiniBand Switch 36 switches
■ High speed flash:
  - 6.4 TB of raw flash capacity (first Exadata Storage Server type)‡ or
  - 12.8 TB of raw flash capacity (second Exadata Storage Server type)‡
■ Keyboard, video, and mouse (KVM) hardware
■ 2 redundant 15 kVA PDUs (single-phase or three-phase, high voltage or low voltage)
■ 1 48-port Cisco Catalyst 4948, model number WS-C4948-S Ethernet switch


† In earlier releases, the high capacity disks were 2 TB.

‡ For raw capacity, 1 GB = 1 billion bytes. Capacity calculated using normal space terminology of 1 TB = 1024 * 1024 * 1024 * 1024 bytes. Actual formatted capacity is less.

Preparing for Installation


These topics provide information to prepare your site for the installation of the Oracle Exadata
Storage Expansion Rack. Planning for the expansion rack is similar to planning for Oracle
SuperCluster T5-8. This section contains information specific to the expansion rack, and also
refers to “Preparing the Site” on page 95 for general planning information.

■ “Reviewing System Specifications” on page 289


■ “Reviewing Power Requirements” on page 289
■ “Preparing for Cooling” on page 293
■ “Preparing the Unloading Route and Unpacking Area” on page 296
■ “Preparing the Network” on page 297

Reviewing System Specifications


■ “Physical Specifications” on page 289
■ “Installation and Service Area” on page 96
■ “Rack and Floor Cutout Dimensions” on page 97

Physical Specifications

TABLE 58 Exadata Expansion Rack Specifications

Parameter Metric English


Height 1998 mm 78.66 in.
Width with side panels 600 mm 23.62 in.
Depth with front and rear doors 1200 mm 47.24 in.
Depth without doors 1112 mm 43.78 in.
Minimum ceiling height 2300 mm 90 in.
Minimum space between top of cabinet and ceiling 914 mm 36 in.
Weight (full rack) 917.6 kg 2023 lbs
Weight (half rack) 578.3 kg 1275 lbs
Weight (quarter rack) 396.8 kg 875 lbs

Related Information

■ “Installation and Service Area” on page 96


■ “Rack and Floor Cutout Dimensions” on page 97

Reviewing Power Requirements


■ “System Power Consumption” on page 290
■ “Facility Power Requirements” on page 100
■ “Grounding Requirements” on page 100
■ “PDU Power Requirements” on page 290

System Power Consumption

TABLE 59 Oracle Exadata Storage Expansion Rack Power Consumption

Full Rack Half Rack Quarter Rack


Maximum 12.6 kW (12.9 kVA) 6.9 kW (7.1 kVA) 3.4 kW (3.5 kVA)
Typical 8.8 kW (9.0 kVA) 4.8 kW (5.0 kVA) 2.4 kW (2.5 kVA)

Related Information
■ “PDU Power Requirements” on page 290
■ “Grounding Requirements” on page 100
■ “Facility Power Requirements” on page 100

PDU Power Requirements

When ordering the Oracle Exadata Storage Expansion Rack, you must provide two
specifications:

■ Low or high voltage


■ Single or three phase power

Refer to the following tables for Oracle marketing and manufacturing part numbers.

Note - Each Oracle Exadata Storage Expansion Rack has two power distribution units (PDUs).
Both PDUs in a rack must be the same type.

TABLE 60 PDU Choices

Voltage Phases Reference


Low 1 Table 61, “Low Voltage 1 Phase PDUs for Oracle Exadata Storage Expansion Rack,” on
page 291
Low 3 Table 62, “Low Voltage 3 Phase PDUs for Oracle Exadata Storage Expansion Rack,” on
page 291
High 1 Table 63, “High Voltage 1 Phase PDUs for Oracle Exadata Storage Expansion Rack,” on
page 292

High 3 Table 64, “High Voltage 3 Phase PDUs for Oracle Exadata Storage Expansion Rack,” on
page 292

TABLE 61 Low Voltage 1 Phase PDUs for Oracle Exadata Storage Expansion Rack

Low Voltage One Phase Comments


kVA Size 15 kVA
Marketing part number N/A
Manufacturing part number N/A
Phase 1 ph
Voltage 200 to 240 VAC
Amps per PDU 72 A (3 inputs x 24 A)
Outlets 42 C13

6 C19
Number of inputs 3 inputs
Input current 24 A max. per input
Data center receptacle NEMA 30A/250 VAC 2-pole/3-wire L6-30P
Outlet groups per PDU 6
Usable PDU power cord length 2 m (6.6 feet) PDU power cords are 4 m long (13 feet), but
sections are used for internal routing in the
rack.
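
As an informal cross-check of the ratings in this table (an approximation, not an Oracle specification), three 24 A inputs at a nominal 208 VAC come to roughly the stated 15 kVA:

    # Rough kVA estimate for the low-voltage single-phase PDU (illustrative only).
    # The 208 VAC figure is an assumed nominal value within the 200 to 240 VAC range.
    inputs = 3
    amps_per_input = 24
    nominal_volts = 208
    kva = inputs * amps_per_input * nominal_volts / 1000.0
    print(round(kva, 1), "kVA")   # prints 15.0 kVA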

TABLE 62 Low Voltage 3 Phase PDUs for Oracle Exadata Storage Expansion Rack

Low Voltage Three Phase Comments


kVA Size 14.4 kVA
Marketing part number N/A
Manufacturing part number N/A
Phase 3 ph
Voltage 190 to 220 VAC
Amps per PDU 69 A (3 inputs x 23 A)
Outlets 42 C13

6 C19
Number of inputs 2 inputs x 60 A, 3 ph
Input current 40 A max. per phase
Data center receptacle IEC 60309 60A 4-pin 250 VAC 3 ph IP 67
Outlet groups per PDU 6

Usable PDU power cord length 2 m (6.6 feet) PDU power cords are 4 m long (13 feet), but
sections are used for internal routing in the
rack.

TABLE 63 High Voltage 1 Phase PDUs for Oracle Exadata Storage Expansion Rack

High Voltage Single Phase Comments


kVA Size 15 kVA
Marketing part number N/A
Manufacturing part number N/A
Phase 1 ph
Voltage 220 to 240 VAC
Amps per PDU 72 A (3 inputs x 24 A)
Outlets 42 C13

6 C19
Number of inputs 3 inputs
Input current 25 A max. per input
Data center receptacle IEC 60309 32A 3-pin 250VAC IP44
Outlet groups per PDU 6
Usable PDU power cord length 2 m (6.6 feet) PDU power cords are 4 m long (13 feet), but
sections are used for internal routing in the
rack.

TABLE 64 High Voltage 3 Phase PDUs for Oracle Exadata Storage Expansion Rack

High Voltage Three Phase Comments


kVA size 14.4 kVA
Marketing part number N/A
Manufacturing part number N/A
Phase 3 ph
Voltage 220/380 to 240/415 VAC
Amps per PDU 62.7 A (3 inputs x 20.9 A)
Outlets 42 C13

6 C19
Number of inputs 2 inputs x 25 A
Input current 25 A max. per phase
Data center receptacle IEC 60309 32A, 5-pin 230/400 VAC
Outlet groups per PDU 6

Usable PDU power cord length 2 m (6.6 feet) PDU power cords are 4 m long (13 feet), but
sections are used for internal routing in the
rack.

Related Information
■ “Facility Power Requirements” on page 100
■ “Grounding Requirements” on page 100
■ “System Power Consumption” on page 290

Preparing for Cooling


■ “Environmental Requirements” on page 293
■ “Heat Dissipation and Airflow Requirements” on page 294
■ “Perforated Floor Tiles” on page 296

Environmental Requirements
The following table lists temperature, humidity and altitude requirements.

TABLE 65 Temperature, Humidity, and Altitude


Condition Operating Requirement Non-operating Requirement Comments
Temperature 5 to 32°C (41 to 89.6°F) -40 to 70°C (-40 to 158°F). For optimal rack cooling, data center temperatures from 21 to 23°C (70 to 74°F)
Relative humidity 10 to 90% relative humidity, Up to 93% relative humidity. For optimal data center
noncondensing rack cooling, 45 to 50%,
noncondensing
Altitude 3048 m (10000 ft.) maximum 12000 m (40000 ft.). Ambient temperature is reduced
by 1 degree Celsius per 300 m
above 900 m altitude above sea
level
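
The altitude derating noted in the table can be applied with simple arithmetic. For example (an illustration only; the 2000 m site altitude is hypothetical), the 32°C operating ceiling drops by (2000 - 900) / 300, or about 3.7°C, to roughly 28°C:

    # Derate the maximum operating temperature for altitude (illustrative sketch).
    def max_ambient_c(altitude_m, base_max_c=32.0):
        """Apply the 1 degree C per 300 m reduction above 900 m from the table note."""
        if altitude_m <= 900:
            return base_max_c
        return base_max_c - (altitude_m - 900) / 300.0

    print(round(max_ambient_c(2000), 1))   # prints 28.3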

Related Information
■ “Heat Dissipation and Airflow Requirements” on page 294
■ “Perforated Floor Tiles” on page 296

Heat Dissipation and Airflow Requirements

The maximum and typical rates of heat released from the expansion racks are listed below. In
order to cool the system properly, ensure that adequate airflow travels through the system.

TABLE 66 Heat Dissipation

Rack Maximum Typical


Full rack 43,000 BTU/hour (45,400 kJ/hour) 30,100 BTU/hour (31,800 kJ/hour)
Half rack 23,600 BTU/hour (24,900 kJ/hour) 16,500 BTU/hour (17,400 kJ/hour)
Quarter rack 11,600 BTU/hour (12,250 kJ/hour) 8,100 BTU/hour (8,600 kJ/hour)
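
These figures track the electrical ratings in “System Power Consumption” on page 290: 1 kW of load dissipates roughly 3412 BTU per hour, so the 12.6 kW full-rack maximum corresponds to about 43,000 BTU/hour. The following sketch (an informal cross-check, not an Oracle formula) repeats the conversion for all three rack sizes:

    # Convert the maximum electrical load to heat load (illustrative cross-check).
    BTU_PER_HOUR_PER_KW = 3412   # approximate conversion factor
    for rack, kw in [("Full rack", 12.6), ("Half rack", 6.9), ("Quarter rack", 3.4)]:
        print(rack, round(kw * BTU_PER_HOUR_PER_KW), "BTU/hour")
    # Full rack 42991, Half rack 23543, Quarter rack 11601 -- close to TABLE 66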

Caution - Do not restrict the movement of cool air from the air conditioner to the cabinet, or the
movement of hot air out of the rear of the cabinet.

Observe these additional requirements:

■ Allow a minimum clearance of 914 mm (36 inches) at the front of the rack, and 914 mm (36
inches) at the rear of the rack for ventilation. There is no airflow requirement for the left and
right sides, or the top of the rack.
■ If the rack is not completely filled with components, cover the empty sections with filler
panels.

FIGURE 32 Direction of Airflow Is Front to Back

TABLE 67 Airflow (listed quantities are approximate)

Rack Maximum Typical


Full rack 2,000 CFM 1,390 CFM
Half rack 1,090 CFM 760 CFM
Quarter rack 530 CFM 375 CFM

Related Information

■ “Environmental Requirements” on page 293


■ “Perforated Floor Tiles” on page 296

Perforated Floor Tiles

If you install the server on a raised floor, use perforated tiles in front of the rack to supply cold
air to the system. Each tile should support an airflow of approximately 400 CFM.

Perforated tiles can be arranged in any order in front of the rack, as long as cold air from the
tiles can flow into the rack.

The following is the recommended number of floor tiles:

TABLE 68 Perforated Floor Tile Requirements

Rack Number of Tiles


Full rack 4
Half rack 3
Quarter rack 2
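
As a rough cross-check against the airflow figures in the previous section (an illustration only, not an Oracle sizing rule), the recommended tile counts supply at least the typical airflow listed for each rack size when each tile delivers approximately 400 CFM:

    # Compare airflow supplied by the recommended tiles with typical rack airflow.
    TILE_CFM = 400   # approximate per-tile airflow from the text above
    racks = [("Full rack", 4, 1390), ("Half rack", 3, 760), ("Quarter rack", 2, 375)]
    for name, tiles, typical_cfm in racks:
        print(name, tiles * TILE_CFM, "CFM supplied vs", typical_cfm, "CFM typical")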

Related Information

■ “Heat Dissipation and Airflow Requirements” on page 294


■ “Environmental Requirements” on page 293

Preparing the Unloading Route and Unpacking Area

■ “Shipping Package Dimensions” on page 297
■ “Loading Dock and Receiving Area Requirements” on page 111
■ “Access Route Guidelines” on page 112
■ “Unpacking Area” on page 113

Shipping Package Dimensions

TABLE 69 Exadata Expansion Rack Shipping Package Dimensions and Weights

Parameter Metric English


Height 2159 mm 85 in.
Width 1219 mm 48 in.
Depth 1575 mm 62 in.
Shipping weight (full rack) 1001.1 kg 2207 lbs
Shipping weight (half rack) 659.6 kg 1454 lbs
Shipping weight (quarter rack) 475.3 kg 1048 lbs

Related Information

■ “Loading Dock and Receiving Area Requirements” on page 111


■ “Access Route Guidelines” on page 112
■ “Unpacking Area” on page 113

Preparing the Network


■ “Prepare DNS for the System” on page 114
■ “Network Requirements Overview” on page 297
■ “Network Connection and IP Address Requirements” on page 299

Network Requirements Overview

The Oracle Exadata Storage Expansion Rack includes Exadata Storage Servers, as well as
equipment to connect the servers to your network. The network connections enable the servers
to be administered remotely.

Each Exadata Storage Server consists of the following network components and interfaces:

■ 1 embedded Gigabit Ethernet port (NET0) for connection to the host management network
■ 1 dual-ported Sun QDR InfiniBand PCIe Low Profile host channel adapter (HCA) for
connection to the InfiniBand private network
■ 1 Ethernet port (NET MGT) for Oracle ILOM remote management

The Cisco Catalyst 4948 Ethernet switch supplied with the Oracle Exadata Storage Expansion
Rack is minimally configured during installation. The minimal configuration disables IP
routing, and sets the following:
■ Host name
■ IP address
■ Subnet mask
■ Default gateway
■ Domain name
■ Domain Name Server
■ NTP server
■ Time
■ Time zone

To deploy the Oracle Exadata Storage Expansion Rack, ensure that you meet the minimum
network requirements. There are two networks used for the Oracle Exadata Storage Expansion
Rack. Each network must be on a distinct and separate subnet from the others. The network
descriptions are as follows:
■ Management network – This required network connects to your existing management
network, and is used for administrative work for the Exadata Storage Servers. It connects
the servers, Oracle ILOM, and switches connected to the Ethernet switch in the rack. There
is one uplink from the Ethernet switch in the rack to your existing management network.

Note - Network connectivity to the PDUs is only required if the electric current will be
monitored remotely.

Each Exadata Storage Server uses two network interfaces for management. One provides management access to the operating system through the 1 GbE host management interface, and the other provides access to the Oracle Integrated Lights Out Manager through the Oracle ILOM Ethernet interface.
■ InfiniBand private network – This network connects the Exadata Storage Servers using
the InfiniBand switches on the rack. This non-routable network is fully contained in the
Oracle Exadata Storage Expansion Rack, and does not connect to your existing network.
This network is automatically configured during installation.

Note - All networks must be on distinct and separate subnets from each other.

Related Information
■ “Network Connection and IP Address Requirements” on page 299

■ “Prepare DNS for the System” on page 114

Network Connection and IP Address Requirements


Prior to installation, network cables must be run from your existing network infrastructure to
the installation site. The network requirements for an Oracle Exadata Storage Expansion Rack
are as follows:

TABLE 70 Oracle Exadata Storage Expansion Rack Network IP Address Requirements


Rack Type Minimum Quantity Comments
Full expansion rack 42 ■ 18 for administration (one per Exadata Storage Server)
■ 18 for Oracle ILOM (one per Exadata Storage Server)
■ 4 for switches (3 for InfiniBand and 1 for Ethernet)
■ 2 for monitoring electric current of the PDUs (PDU connectivity is only
required if the electric current will be monitored remotely.)
Half expansion rack 24 ■ 9 for administration (one per Exadata Storage Server)
■ 9 for Oracle ILOM (one per Exadata Storage Server)
■ 4 for switches (3 for InfiniBand and 1 for Ethernet)
■ 2 for monitoring electric current of the PDUs (PDU connectivity is only
required if the electric current will be monitored remotely.)
Quarter expansion rack 13 ■ 4 for administration (one per Exadata Storage Server)
■ 4 for Oracle ILOM (one per Exadata Storage Server)
■ 3 for switches (2 for InfiniBand and 1 for Ethernet)
■ 2 for monitoring electric current of the PDUs (PDU connectivity is
required only if the electric current will be monitored remotely.)
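
The minimum quantities in this table follow directly from the component counts: two addresses per Exadata Storage Server (one for administration, one for Oracle ILOM), one per switch, and two for PDU monitoring. The following sketch (illustrative only; the helper name is hypothetical) recomputes them:

    # Recompute the minimum IP address counts shown in TABLE 70 (illustrative).
    def minimum_addresses(storage_servers, switches):
        # 1 administration + 1 Oracle ILOM address per server, 1 per switch, 2 for PDUs
        return 2 * storage_servers + switches + 2

    print(minimum_addresses(18, 4))   # full expansion rack    -> 42
    print(minimum_addresses(9, 4))    # half expansion rack    -> 24
    print(minimum_addresses(4, 3))    # quarter expansion rack -> 13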

Related Information
■ “Prepare DNS for the System” on page 114
■ “Network Requirements Overview” on page 297
■ “Expansion Rack Default IP Addresses” on page 300
■ “Understanding Expansion Rack Internal Cabling” on page 300

Installing the Oracle Exadata Storage Expansion Rack


The procedures for installing the Oracle Exadata Storage Expansion Rack are the same as those for installing Oracle SuperCluster T5-8. See “Installing the System” on page 117 for those procedures, then return here. Note that the sections on connecting to a 10 GbE client access network and using the optional Fibre Channel PCIe cards do not apply to the Oracle Exadata Storage Expansion Rack.

Expansion Rack Default IP Addresses

Component NET0 IP Addresses Oracle ILOM IP Addresses InfiniBand Bonded IP Addresses

Exadata Storage Server 18 192.168.1.68 192.168.1.168 192.168.10.68
Exadata Storage Server 17 192.168.1.67 192.168.1.167 192.168.10.67
Exadata Storage Server 16 192.168.1.66 192.168.1.166 192.168.10.66
Exadata Storage Server 15 192.168.1.65 192.168.1.165 192.168.10.65
Exadata Storage Server 14 192.168.1.64 192.168.1.164 192.168.10.64
Exadata Storage Server 13 192.168.1.63 192.168.1.163 192.168.10.63
Exadata Storage Server 12 192.168.1.62 192.168.1.162 192.168.10.62
Exadata Storage Server 11 192.168.1.61 192.168.1.161 192.168.10.61
Exadata Storage Server 10 192.168.1.60 192.168.1.160 192.168.10.60
Exadata Storage Server 9 192.168.1.59 192.168.1.159 192.168.10.59
Exadata Storage Server 8 192.168.1.58 192.168.1.158 192.168.10.58
Exadata Storage Server 7 192.168.1.57 192.168.1.157 192.168.10.57
Exadata Storage Server 6 192.168.1.56 192.168.1.156 192.168.10.56
Exadata Storage Server 5 192.168.1.55 192.168.1.155 192.168.10.55
Exadata Storage Server 4 192.168.1.54 192.168.1.154 192.168.10.54
Exadata Storage Server 3 192.168.1.53 192.168.1.153 192.168.10.53
Exadata Storage Server 2 192.168.1.52 192.168.1.152 192.168.10.52
Exadata Storage Server 1 192.168.1.51 192.168.1.151 192.168.10.51
Sun Datacenter InfiniBand Switch 36 switch 3 192.168.1.223 NA NA
Sun Datacenter InfiniBand Switch 36 switch 2 192.168.1.222 NA NA
Sun Datacenter InfiniBand Switch 36 switch 1 192.168.1.221 NA NA
Ethernet switch 192.168.1.220 NA NA
PDU-A 192.168.1.212 NA NA
PDU-B 192.168.1.213 NA NA
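
The server addresses in this table follow a simple pattern: Exadata Storage Server n uses 192.168.1.(50 + n) for NET0, 192.168.1.(150 + n) for Oracle ILOM, and 192.168.10.(50 + n) for the bonded InfiniBand interface. The sketch below (an illustration of the pattern only, not a configuration tool) regenerates those rows:

    # Regenerate the per-server default addresses from the table above (illustrative).
    for n in range(1, 19):   # Exadata Storage Servers 1 through 18 (full rack)
        net0 = "192.168.1.%d" % (50 + n)
        ilom = "192.168.1.%d" % (150 + n)
        ib = "192.168.10.%d" % (50 + n)
        print("Exadata Storage Server %2d  %s  %s  %s" % (n, net0, ilom, ib))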

Understanding Expansion Rack Internal Cabling


These topics show the cable layouts for the expansion rack.

■ “Front and Rear Expansion Rack Layout” on page 301


■ “Oracle ILOM Cabling” on page 304
■ “Administrative Gigabit Ethernet Port Cabling Tables” on page 306
■ “Single-Phase PDU Cabling” on page 308
■ “Three-Phase Power Distribution Unit Cabling ” on page 309
■ “InfiniBand Network Cabling” on page 311

Front and Rear Expansion Rack Layout


The following figures show the front and rear views of the expansion rack:

■ Figure 33, “Expansion Rack Layout (Full Rack),” on page 302


■ Figure 34, “Expansion Rack Layout (Half Rack),” on page 303
■ Figure 35, “Expansion Rack Layout (Quarter Rack),” on page 304

FIGURE 33 Expansion Rack Layout (Full Rack)

FIGURE 34 Expansion Rack Layout (Half Rack)

FIGURE 35 Expansion Rack Layout (Quarter Rack)

Oracle ILOM Cabling


This topic contains tables that list the Oracle ILOM network cabling. The Oracle ILOM port on the servers is labeled NET MGT, and connects to the Gigabit Ethernet switch located in rack unit 21 on the expansion racks.

■ Table 71, “Oracle ILOM Cabling for the Expansion Rack (Full Rack),” on page 305
■ Table 72, “Oracle ILOM Cabling for the Expansion Rack (Half Rack),” on page 305
■ Table 73, “Oracle ILOM Cabling for the Expansion Rack (Quarter Rack),” on page 306

TABLE 71 Oracle ILOM Cabling for the Expansion Rack (Full Rack)

From Rack Unit Type of Equipment Gigabit Ethernet Port


U41 Exadata Storage Server 2
U39 Exadata Storage Server 4
U37 Exadata Storage Server 6
U35 Exadata Storage Server 8
U33 Exadata Storage Server 10
U31 Exadata Storage Server 12
U29 Exadata Storage Server 14
U27 Exadata Storage Server 18
U25 Exadata Storage Server 22
U18 Exadata Storage Server 26
U16 Exadata Storage Server 30
U14 Exadata Storage Server 32
U12 Exadata Storage Server 34
U10 Exadata Storage Server 36
U8 Exadata Storage Server 38
U6 Exadata Storage Server 40
U4 Exadata Storage Server 42
U2 Exadata Storage Server 44

TABLE 72 Oracle ILOM Cabling for the Expansion Rack (Half Rack)

From Rack Unit Type of Equipment Gigabit Ethernet Port


U18 Exadata Storage Server 26
U16 Exadata Storage Server 30
U14 Exadata Storage Server 32
U12 Exadata Storage Server 34
U10 Exadata Storage Server 36
U8 Exadata Storage Server 38
U6 Exadata Storage Server 40
U4 Exadata Storage Server 42
U2 Exadata Storage Server 44

TABLE 73 Oracle ILOM Cabling for the Expansion Rack (Quarter Rack)

From Rack Unit Type of Equipment Gigabit Ethernet Port


U8 Exadata Storage Server 38
U6 Exadata Storage Server 40
U4 Exadata Storage Server 42
U2 Exadata Storage Server 44

Administrative Gigabit Ethernet Port Cabling Tables

This topic contains tables that list the administrative Gigabit Ethernet network cabling. The port on the servers is labeled Net-0, and connects to the Gigabit Ethernet switch located in rack unit 21 on the expansion racks.

■ Table 74, “Gigabit Ethernet Cabling for the Expansion Rack (Full Rack),” on page 306
■ Table 75, “Gigabit Ethernet Cabling for the Expansion Rack (Half Rack),” on page 307
■ Table 76, “Gigabit Ethernet Cabling for the Expansion Rack (Quarter Rack),” on page 307

TABLE 74 Gigabit Ethernet Cabling for the Expansion Rack (Full Rack)

From Rack Unit Type of Equipment Gigabit Ethernet Port


U41 Exadata Storage Server 1
U39 Exadata Storage Server 3
U37 Exadata Storage Server 5
U35 Exadata Storage Server 7
U33 Exadata Storage Server 9
U31 Exadata Storage Server 11
U29 Exadata Storage Server 13
U27 Exadata Storage Server 17
U25 Exadata Storage Server 21
U24 Sun Datacenter InfiniBand Switch 36 switch 45
U20 Sun Datacenter InfiniBand Switch 36 switch 46
U18 Exadata Storage Server 25
U16 Exadata Storage Server 29
U14 Exadata Storage Server 31
U12 Exadata Storage Server 33
U10 Exadata Storage Server 35

U8 Exadata Storage Server 37
U6 Exadata Storage Server 39
U4 Exadata Storage Server 41
U2 Exadata Storage Server 43
U1 Sun Datacenter InfiniBand Switch 36 switch 47
PDU-A PDU 15
PDU-B PDU 19

TABLE 75 Gigabit Ethernet Cabling for the Expansion Rack (Half Rack)

From Rack Unit Type of Equipment Gigabit Ethernet Port


U24 Sun Datacenter InfiniBand Switch 36 switch 45
U20 Sun Datacenter InfiniBand Switch 36 switch 46
U18 Exadata Storage Server 25
U16 Exadata Storage Server 29
U14 Exadata Storage Server 31
U12 Exadata Storage Server 33
U10 Exadata Storage Server 35
U8 Exadata Storage Server 37
U6 Exadata Storage Server 39
U4 Exadata Storage Server 41
U2 Exadata Storage Server 43
U1 Sun Datacenter InfiniBand Switch 36 switch 47
PDU-A PDU 15
PDU-B PDU 19

TABLE 76 Gigabit Ethernet Cabling for the Expansion Rack (Quarter Rack)

From Rack Unit Type of Equipment Gigabit Ethernet Port


U24 Sun Datacenter InfiniBand Switch 36 switch 45
U20 Sun Datacenter InfiniBand Switch 36 switch 46
U8 Exadata Storage Server 37
U6 Exadata Storage Server 39
U4 Exadata Storage Server 41
U2 Exadata Storage Server 43
PDU-A PDU 15
PDU-B PDU 19

Single-Phase PDU Cabling


This topic contains tables that list the single-phase cabling from each PDU to the power
supplies configured in each rack. The cables are terminated to PDU-A on the left, and routed to
the right to enter the cable management arm (CMA), and are bundled in groups of four.

■ Table 77, “Single-Phase PDU Cabling for the Expansion Rack (Full Rack),” on page 308
■ Table 78, “Single-Phase PDU Cabling for the Expansion Rack (Half Rack),” on page 309
■ Table 79, “Single-Phase PDU Cabling for the Expansion Rack (Quarter Rack),” on page
309

TABLE 77 Single-Phase PDU Cabling for the Expansion Rack (Full Rack)

Rack Unit PDU-A/PS-00 PDU-B/PS-01 Cable Length


U41 G5-6 G0-0 2 meters
U39 G5-3 G0-3 2 meters
U37 G5-0 G0-6 2 meters
U35 G4-6 G1-0 2 meters
U33 G4-4 G1-2 2 meters
U31 G4-2 G1-4 2 meters
U29 G3-6 G2-0 2 meters
U27 G3-5 G2-1 2 meters
U25 G3-3 G2-3 2 meters
U24 G3-1 G2-5 2 meters
U23 N/A G3-0 included
U22 G2-5 G3-1 1 meter
U21 G3-0 G2-6 2 meters
U20 G2-4 G3-2 2 meters
U18 G2-2 G3-4 2 meters
U16 G1-6 G4-0 2 meters
U14 G2-0 G3-6 2 meters
U12 G1-4 G4-2 2 meters
U10 G1-2 G4-4 2 meters
U8 G1-0 G4-6 2 meters
U6 G0-6 G5-0 2 meters
U4 G0-4 G5-2 2 meters
U2 G0-2 G5-4 2 meters
U1 G0-0 G5-6 2 meters

TABLE 78 Single-Phase PDU Cabling for the Expansion Rack (Half Rack)
Rack Unit PDU-A/PS-00 PDU-B/PS-01 Cable Length
U24 G3-1 G2-5 2 meters
U23 N/A G3-0 included
U22 G2-5 G3-1 1 meter
U21 G3-0 G2-6 2 meters
U20 G2-4 G3-2 2 meters
U18 G2-2 G3-4 2 meters
U16 G1-6 G4-0 2 meters
U14 G2-0 G3-6 2 meters
U12 G1-4 G4-2 2 meters
U10 G1-2 G4-4 2 meters
U8 G1-0 G4-6 2 meters
U6 G0-6 G5-0 2 meters
U4 G0-4 G5-2 2 meters
U2 G0-2 G5-4 2 meters

TABLE 79 Single-Phase PDU Cabling for the Expansion Rack (Quarter Rack)
Rack Unit PDU-A/PS-00 PDU-B/PS-01 Cable Length
U24 G3-1 G2-5 2 meters
U23 N/A G3-0 included
U22 G2-5 G3-1 1 meter
U21 G3-0 G2-6 2 meters
U20 G2-4 G3-2 2 meters
U8 G1-0 G4-6 2 meters
U6 G0-6 G5-0 2 meters
U4 G0-4 G5-2 2 meters
U2 G0-2 G5-4 2 meters

Three-Phase Power Distribution Unit Cabling


This topic contains tables that list the three-phase cabling from each PDU to the power supplies
configured in each rack. The cables are terminated to PDU-A on the left, and routed to the right
to enter the cable management arm (CMA), and are bundled in groups of four.

■ Table 80, “Three-Phase PDU Cabling for the Expansion Rack (Full Rack),” on page 310
■ Table 81, “Three-Phase PDU Cabling for the Expansion Rack (Half Rack),” on page 310

■ Table 82, “Three-Phase PDU Cabling for the Expansion Rack (Quarter Rack),” on page
311

TABLE 80 Three-Phase PDU Cabling for the Expansion Rack (Full Rack)

Rack Unit PDU-A/PS-00 PDU-B/PS-01 Cable Length


U41 G5-6 G2-0 2 meters
U39 G5-3 G2-3 2 meters
U37 G5-0 G2-6 2 meters
U35 G4-6 G1-0 2 meters
U33 G4-4 G1-2 2 meters
U31 G4-2 G1-4 2 meters
U29 G3-6 G0-0 2 meters
U27 G3-5 G0-1 2 meters
U25 G3-3 G0-3 2 meters
U24 G3-1 G0-5 2 meters
U23 N/A G5-0 included
U22 G2-5 G5-1 1 meter
U21 G3-0 G0-6 2 meters
U20 G2-4 G5-2 2 meters
U18 G2-2 G5-4 2 meters
U16 G1-6 G4-0 2 meters
U14 G2-0 G5-6 2 meters
U12 G1-4 G4-2 2 meters
U10 G1-2 G4-4 2 meters
U8 G1-0 G4-6 2 meters
U6 G0-6 G3-0 2 meters
U4 G0-4 G3-2 2 meters
U2 G0-2 G3-4 2 meters
U1 G0-0 G3-6 2 meters

TABLE 81 Three-Phase PDU Cabling for the Expansion Rack (Half Rack)

Rack Unit PDU-A/PS-00 PDU-B/PS-01 Cable Length


U24 G3-1 G0-5 2 meters
U23 N/A G5-0 included
U22 G2-5 G5-1 1 meter
U21 G3-0 G0-6 2 meters
U20 G2-4 G5-2 2 meters

U18 G2-2 G5-4 2 meters
U16 G1-6 G4-0 2 meters
U14 G2-0 G5-6 2 meters
U12 G1-4 G4-2 2 meters
U10 G1-2 G4-4 2 meters
U8 G1-0 G4-6 2 meters
U6 G0-6 G3-0 2 meters
U4 G0-4 G3-2 2 meters
U2 G0-2 G3-4 2 meters
U1 G0-0 G3-6 2 meters

TABLE 82 Three-Phase PDU Cabling for the Expansion Rack (Quarter Rack)

Rack Unit PDU-A/PS-00 PDU-B/PS-01 Cable Length


U24 G3-1 G0-5 2 meters
U23 N/A G5-0 included
U22 G2-5 G5-1 1 meter
U21 G3-0 G0-6 2 meters
U20 G2-4 G5-2 2 meters
U8 G1-0 G4-6 2 meters
U6 G0-6 G3-0 2 meters
U4 G0-4 G3-2 2 meters
U2 G0-2 G3-4 2 meters

InfiniBand Network Cabling


This topic contains tables that list the InfiniBand network cabling. The Sun Datacenter InfiniBand Switch 36 switches are located in rack units 1, 20, and 24.

■ Table 83, “InfiniBand Network Cabling for the Expansion Rack (Full Rack),” on page 312
■ Table 84, “InfiniBand Network Cabling for the Expansion Rack (Half Rack),” on page 313
■ Table 85, “InfiniBand Network Cabling for the Expansion Rack (Quarter Rack),” on page 314

TABLE 83 InfiniBand Network Cabling for the Expansion Rack (Full Rack)

From InfiniBand Switch Rack Unit Port To Rack Unit Type of Equipment Port Cable Description
U24 0A U41 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U24 0B U39 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U24 1A U37 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U24 1B U35 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U24 2A U33 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U24 2B U31 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U24 3A U29 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U24 4A U27 Exadata Storage Server PCIe 2, P1 2 meter QDR InfiniBand cable
U24 5A U25 Exadata Storage Server PCIe 2, P1 2 meter QDR InfiniBand cable
U24 13A U18 Exadata Storage Server PCIe 2, P2 2 meter QDR InfiniBand cable
U24 14A U16 Exadata Storage Server PCIe 2, P2 2 meter QDR InfiniBand cable
U24 14B U14 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 15A U12 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 15B U10 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 16A U8 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 16B U6 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 17A U4 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 17B U2 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 0A U41 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 0B U39 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 1A U37 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 1B U35 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 2A U33 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 2B U31 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 3A U29 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 4A U27 Exadata Storage Server PCIe 2, P2 2 meter QDR InfiniBand cable
U20 5A U25 Exadata Storage Server PCIe 2, P2 2 meter QDR InfiniBand cable
U20 13A U18 Exadata Storage Server PCIe 2, P1 2 meter QDR InfiniBand cable
U20 14A U16 Exadata Storage Server PCIe 2, P1 2 meter QDR InfiniBand cable
U20 14B U14 Exadata Storage Server PCIe 3, P1 2 meter QDR InfiniBand cable
U20 15A U12 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 15B U10 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 16A U8 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 16B U6 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 17A U4 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable

U20 17B U2 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 9B U24 Sun Datacenter InfiniBand Switch 9A 2 meter QDR InfiniBand cable
36 switch
U20 10B U24 Sun Datacenter InfiniBand Switch 10A 2 meter QDR InfiniBand cable
36 switch
U20 11B U24 Sun Datacenter InfiniBand Switch 11A 2 meter QDR InfiniBand cable
36 switch
U20 8A U24 Sun Datacenter InfiniBand Switch 8A 2 meter QDR InfiniBand cable
36 switch
U20 9A U24 Sun Datacenter InfiniBand Switch 9B 2 meter QDR InfiniBand cable
36 switch
U20 10A U24 Sun Datacenter InfiniBand Switch 10B 2 meter QDR InfiniBand cable
36 switch
U20 11A U24 Sun Datacenter InfiniBand Switch 11B 2 meter QDR InfiniBand cable
36 switch
U1 1B U20 Sun Datacenter InfiniBand Switch 8B 3 meter QDR InfiniBand cable
36 switch
U1 0B U24 Sun Datacenter InfiniBand Switch 8B 3 meter QDR InfiniBand cable
36 switch

TABLE 84 InfiniBand Network Cabling for the Expansion Rack (Half Rack)

From InfiniBand Switch Rack Unit Port To Rack Unit Type of Equipment Port Cable Description
U24 13A U18 Exadata Storage Server PCIe 2, P2 2 meter QDR InfiniBand cable
U24 14A U16 Exadata Storage Server PCIe 2, P2 2 meter QDR InfiniBand cable
U24 14B U14 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 15A U12 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 15B U10 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 16A U8 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 16B U6 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 17A U4 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 17B U2 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 13A U18 Exadata Storage Server PCIe 2, P1 2 meter QDR InfiniBand cable
U20 14A U16 Exadata Storage Server PCIe 2, P1 2 meter QDR InfiniBand cable
U20 14B U14 Exadata Storage Server PCIe 3, P1 2 meter QDR InfiniBand cable
U20 15A U12 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 15B U10 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 16A U8 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 16B U6 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable

U20 17A U4 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 17B U2 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 9B U24 Sun Datacenter InfiniBand Switch 36 switch 9A 2 meter QDR InfiniBand cable
U20 10B U24 Sun Datacenter InfiniBand Switch 36 switch 10A 2 meter QDR InfiniBand cable
U20 11B U24 Sun Datacenter InfiniBand Switch 36 switch 11A 2 meter QDR InfiniBand cable
U20 8A U24 Sun Datacenter InfiniBand Switch 36 switch 8A 2 meter QDR InfiniBand cable
U20 9A U24 Sun Datacenter InfiniBand Switch 36 switch 9B 2 meter QDR InfiniBand cable
U20 10A U24 Sun Datacenter InfiniBand Switch 36 switch 10B 2 meter QDR InfiniBand cable
U20 11A U24 Sun Datacenter InfiniBand Switch 36 switch 11B 2 meter QDR InfiniBand cable
U1 1B U20 Sun Datacenter InfiniBand Switch 36 switch 8B 3 meter QDR InfiniBand cable
U1 0B U24 Sun Datacenter InfiniBand Switch 36 switch 8B 3 meter QDR InfiniBand cable

TABLE 85 InfiniBand Network Cabling for the Expansion Rack (Quarter Rack)

From InfiniBand Switch Rack Unit Port To Rack Unit Type of Equipment Port Cable Description
U24 16A U8 Exadata Storage Server PCIe 2, P2 2 meter QDR InfiniBand cable
U24 16B U6 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 17A U4 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U24 17B U2 Exadata Storage Server PCIe 3, P2 3 meter QDR InfiniBand cable
U20 16A U8 Exadata Storage Server PCIe 2, P1 2 meter QDR InfiniBand cable
U20 16B U6 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 17A U4 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 17B U2 Exadata Storage Server PCIe 3, P1 3 meter QDR InfiniBand cable
U20 9B U24 Sun Datacenter InfiniBand Switch 36 switch 9A 2 meter QDR InfiniBand cable
U20 10B U24 Sun Datacenter InfiniBand Switch 36 switch 10A 2 meter QDR InfiniBand cable
U20 11B U24 Sun Datacenter InfiniBand Switch 36 switch 11A 2 meter QDR InfiniBand cable
U20 8A U24 Sun Datacenter InfiniBand Switch 36 switch 8A 2 meter QDR InfiniBand cable

U20 9A U24 Sun Datacenter InfiniBand Switch 36 switch 9B 2 meter QDR InfiniBand cable
U20 10A U24 Sun Datacenter InfiniBand Switch 36 switch 10B 2 meter QDR InfiniBand cable
U20 11A U24 Sun Datacenter InfiniBand Switch 36 switch 11B 2 meter QDR InfiniBand cable

Connecting an Expansion Rack to Oracle SuperCluster T5-8


You connect an expansion rack to Oracle SuperCluster T5-8 using three Sun Datacenter
InfiniBand Switch 36 switches. Oracle SuperCluster T5-8 and the Oracle Exadata Storage
Expansion Half Rack or Oracle Exadata Storage Expansion Full Rack include three Sun
Datacenter InfiniBand Switch 36 switches. Two switches are used as leaf switches, and the third
switch is used as a spine switch. These switches attach to standard Quad Small Form-factor
Pluggable (QSFP) connectors at the end of the InfiniBand cables. The procedures in this section
assume the racks are adjacent to each other. If they are not, then longer cables may be required
for the connections.

Note - The Oracle Exadata Storage Expansion Quarter Rack contains only two Sun Datacenter
InfiniBand Switch 36 switches (the leaf switches), so no spine switch is available in the
Oracle Exadata Storage Expansion Quarter Rack. See “Connecting an Oracle Exadata Storage
Expansion Quarter Rack to Oracle SuperCluster T5-8” on page 316 for more information.

InfiniBand Switch Information for Oracle SuperCluster T5-8
In Oracle SuperCluster T5-8, the switch at rack unit 1 (U1) is referred to as the spine switch.
The switches at rack unit 26 (U26) and rack unit 32 (U32) are referred to as leaf switches. In
a single rack, the two leaf switches are interconnected using seven connections. In addition,
each leaf switch has one connection to the spine switch. The leaf switches connect to the spine
switch as shown in the following graphic.

InfiniBand Switch Information for the Oracle Exadata Storage Expansion Rack
In the Oracle Exadata Storage Expansion Rack, the switch at rack unit 1 (U1) is referred to as
the spine switch, except in the Oracle Exadata Storage Expansion Quarter Rack, which has no
spine switch. The switches at rack unit 20 (U20) and rack unit 24 (U24) are referred to as leaf
switches.
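
The switch placement described in these two sections can be summarized in a small lookup
structure. The following Python sketch is illustrative only; the dictionary layout and helper
function are assumptions for this example, not part of any Oracle-supplied tool.

# Illustrative sketch only: spine and leaf switch locations by rack type,
# as described in the two sections above. Not an Oracle-provided API.
SWITCH_LOCATIONS = {
    "Oracle SuperCluster T5-8": {"spine": "U1", "leaves": ("U26", "U32")},
    "Oracle Exadata Storage Expansion Rack": {"spine": "U1", "leaves": ("U20", "U24")},
    # The standard Quarter Rack ships without a spine switch.
    "Oracle Exadata Storage Expansion Quarter Rack": {"spine": None, "leaves": ("U20", "U24")},
}

def describe(rack_type):
    """Print where the spine and leaf switches sit for a given rack type."""
    info = SWITCH_LOCATIONS[rack_type]
    spine = info["spine"] if info["spine"] else "none"
    print(f"{rack_type}: spine switch at {spine}, leaf switches at {' and '.join(info['leaves'])}")

for rack in SWITCH_LOCATIONS:
    describe(rack)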

Connecting an Oracle Exadata Storage Expansion Quarter Rack to Oracle SuperCluster T5-8
These topics describe how to connect an Oracle Exadata Storage Expansion Quarter Rack to
your Oracle SuperCluster T5-8.

Note - For instructions on connecting an Oracle Exadata Storage Expansion Half Rack or
Oracle Exadata Storage Expansion Full Rack to Oracle SuperCluster T5-8, see “Connecting an
Oracle Exadata Storage Expansion Half Rack or Oracle Exadata Storage Expansion Full Rack
to Oracle SuperCluster T5-8” on page 320.

Note the following restrictions when connecting an Oracle Exadata Storage Expansion Quarter
Rack to Oracle SuperCluster T5-8:

■ The standard version of the Oracle Exadata Storage Expansion Quarter Rack contains two
leaf switches and no spine switch. You typically make two connections from each leaf
switch in the Oracle Exadata Storage Expansion Quarter Rack to each leaf switch in the
Oracle SuperCluster T5-8. Each leaf switch in the Oracle SuperCluster T5-8 must therefore
have four open ports for this type of connection.
■ In the Half Rack version of Oracle SuperCluster T5-8, four ports are open on both leaf
switches (ports 2A, 2B, 7B and 12A). Use the instructions in this section to make the two
connections between each leaf switch in the Oracle Exadata Storage Expansion Quarter
Rack to the leaf switches in the Oracle SuperCluster T5-8.
■ In the Full Rack version of Oracle SuperCluster T5-8, only two ports are open on both leaf
switches (ports 2A and 2B). Therefore, if you are connecting an Oracle Exadata Storage
Expansion Quarter Rack to the Full Rack version of Oracle SuperCluster T5-8, you must
order a spine switch kit from Oracle and install that spine switch in the Oracle Exadata
Storage Expansion Quarter Rack. You would then connect the Oracle Exadata Storage
Expansion Quarter Rack to Oracle SuperCluster T5-8 using the instructions given in
“Connecting an Oracle Exadata Storage Expansion Half Rack or Oracle Exadata Storage
Expansion Full Rack to Oracle SuperCluster T5-8” on page 320.
■ Only one Oracle Exadata Storage Expansion Quarter Rack can be connected to the Half
Rack version of Oracle SuperCluster T5-8.

The following graphic shows the cable connections from an Oracle Exadata Storage Expansion
Quarter Rack to the Half Rack version of Oracle SuperCluster T5-8. The leaf switches within
each rack maintain their existing seven connections. The leaf switches interconnect between the
racks with two links each using the ports reserved for external connectivity.
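
As a quick arithmetic check of the port requirement above, the following Python sketch
(illustrative only; the variable names are assumptions) works out why four open ports are
needed on each Oracle SuperCluster T5-8 leaf switch.

# Minimal sketch, assuming the link counts stated above: two leaf switches in
# the expansion Quarter Rack, each running two links to every SuperCluster
# leaf switch, means four open ports per SuperCluster leaf switch.
expansion_leaf_switches = 2    # Quarter Rack leaf switches (no spine switch)
links_per_leaf_pair = 2        # links from one expansion leaf to one SuperCluster leaf

open_ports_needed = expansion_leaf_switches * links_per_leaf_pair
print(f"Open ports required on each SuperCluster leaf switch: {open_ports_needed}")
# In the Half Rack those ports are 2A, 2B, 7B, and 12A; the Full Rack has only
# 2A and 2B free, which is why it requires the spine switch kit instead.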


The following graphic shows the cable connections from an Oracle Exadata Storage Expansion
Quarter Rack to two or more racks. The following racks can connect to a standard Oracle
Exadata Storage Expansion Quarter Rack (with no spine switch):

■ Half Rack version of Oracle SuperCluster T5-8
■ Oracle Exadata Storage Expansion Half Rack
■ Oracle Exadata Storage Expansion Full Rack

The racks are interconnected using a fat-tree topology. Each leaf switch in the Oracle Exadata
Storage Expansion Quarter Rack connects to the spine switches in the other Half Racks or Full
Racks with two links each. If there are more than four racks, then use one link instead of two.
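
The link-count rule in the preceding paragraph can be expressed as a one-line function. This
Python sketch illustrates that rule only; the function name is an assumption, not an Oracle
utility.

# Hedged sketch of the rule stated above: a Quarter Rack leaf switch runs two
# links to each spine switch in the other racks, dropping to one link per
# spine switch when the system grows beyond four racks.
def links_per_spine_switch(total_racks):
    return 2 if total_racks <= 4 else 1

for racks in (2, 3, 4, 5, 6):
    print(f"{racks} racks: {links_per_spine_switch(racks)} link(s) from each Quarter Rack leaf to each spine switch")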


The following terms are used when referring to the two racks:

■ Rack 1 (R1) refers to Oracle SuperCluster T5-8 (Half Rack)
■ Rack 2 (R2) refers to the Oracle Exadata Storage Expansion Quarter Rack

In addition, because the InfiniBand switches are physically located in different rack units in the
two racks, the following terms are used when referring to the InfiniBand switches:

■ InfiniBand 1 (IB1) refers to the spine switch, located in U1 in Oracle SuperCluster T5-8
■ InfiniBand 2 (IB2) refers to the first leaf switch, located in:
■ U26 in Oracle SuperCluster T5-8 (Half Rack)
■ U20 in the Oracle Exadata Storage Expansion Rack
■ InfiniBand 3 (IB3) refers to the second leaf switch, located in:
■ U32 in Oracle SuperCluster T5-8 (Half Rack)
■ U24 in the Oracle Exadata Storage Expansion Rack

The following table shows the cable connections for a single Oracle Exadata Storage Expansion
Quarter Rack to the Half Rack version of Oracle SuperCluster T5-8.

TABLE 86 Leaf Switch Connections for the First Rack in a Two-Rack System
Leaf Switch Connection Cable Length
R1-IB3 to Rack 2 R1-IB3-P2A to R2-IB3-P2A 5 meters

R1-IB3-P2B to R2-IB3-P2B

R1-IB3-P7B to R2-IB2-P7B

R1-IB3-P12A to R2-IB2-P12A
R1-IB2 to Rack 2 R1-IB2-P7B to R2-IB3-P7B 5 meters

R1-IB2-P12A to R2-IB3-P12A

R1-IB2-P2A to R2-IB2-P2A

R1-IB2-P2B to R2-IB2-P2B
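
If you track these connections in software, the table above can be captured as a small data
structure and sanity-checked. The following Python sketch is illustrative only; it simply
restates TABLE 86 and confirms that only the open ports 2A, 2B, 7B, and 12A are used on the
Oracle SuperCluster T5-8 leaf switches.

# Illustrative sketch: the eight inter-rack cables from TABLE 86 as
# (from switch, from port, to switch, to port) tuples, plus two basic checks.
TABLE_86 = [
    ("R1-IB3", "P2A",  "R2-IB3", "P2A"),
    ("R1-IB3", "P2B",  "R2-IB3", "P2B"),
    ("R1-IB3", "P7B",  "R2-IB2", "P7B"),
    ("R1-IB3", "P12A", "R2-IB2", "P12A"),
    ("R1-IB2", "P7B",  "R2-IB3", "P7B"),
    ("R1-IB2", "P12A", "R2-IB3", "P12A"),
    ("R1-IB2", "P2A",  "R2-IB2", "P2A"),
    ("R1-IB2", "P2B",  "R2-IB2", "P2B"),
]

OPEN_PORTS = {"P2A", "P2B", "P7B", "P12A"}
# Only the documented open ports are used on the SuperCluster (R1) leaf switches.
assert all(port in OPEN_PORTS for _, port, _, _ in TABLE_86)
# No R1 leaf switch port is used twice.
assert len({(sw, port) for sw, port, _, _ in TABLE_86}) == len(TABLE_86)
print("TABLE 86: 8 cables, using only ports 2A/2B/7B/12A on the R1 leaf switches")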

Connecting an Oracle Exadata Storage Expansion Half Rack or Oracle Exadata Storage Expansion Full Rack to Oracle SuperCluster T5-8
These topics describe how to connect an Oracle Exadata Storage Expansion Half Rack or
Oracle Exadata Storage Expansion Full Rack to your Oracle SuperCluster T5-8.

Note - For instructions on connecting an Oracle Exadata Storage Expansion Quarter Rack to
the Half Rack version of Oracle SuperCluster T5-8, see “Connecting an Oracle Exadata Storage
Expansion Quarter Rack to Oracle SuperCluster T5-8” on page 316.

When connecting multiple racks together, remove the seven existing inter-switch connections
between the two leaf switches in each rack, as well as the two connections between the leaf
switches and the spine switch. From each leaf switch, distribute eight connections over the
spine switches in all racks. In multi-rack environments, the leaf switches inside a rack are no
longer directly interconnected, as shown in the following graphic.
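
The re-cabling rule above amounts to spreading each leaf switch's eight uplinks as evenly as
possible across the spine switches of all racks. The following Python sketch illustrates that
distribution for systems of up to eight racks; the helper is an assumption for this example,
not an Oracle tool, and racks beyond eight follow the separate pattern described later in this
section.

# Minimal sketch, assuming the cabling rule described above: each leaf switch
# keeps eight uplinks and distributes them as evenly as possible over the
# spine switches in all racks (up to eight racks).
def spread_uplinks(num_racks, uplinks=8):
    base, extra = divmod(uplinks, num_racks)
    # The first `extra` spine switches receive one additional link. Which
    # specific spine switches get the extra links varies by leaf switch; see
    # the tables that follow for the exact port assignments.
    return [base + 1 if i < extra else base for i in range(num_racks)]

for racks in (2, 3, 4, 5, 8):
    print(f"{racks} racks: links per spine switch = {spread_uplinks(racks)}")
# 2 racks -> [4, 4], 3 racks -> [3, 3, 2], 4 racks -> [2, 2, 2, 2],
# 5 racks -> [2, 2, 2, 1, 1], 8 racks -> one link to every spine switch,
# which matches the tables that follow.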


As shown in the preceding graphic, each leaf switch in rack 1 connects to the following
switches:

■ Four connections to its internal spine switch
■ Four connections to the spine switch in rack 2

The spine switch in rack 1 connects to the following switches:

■ Eight connections to the two leaf switches within its own rack (four to each leaf switch)
■ Eight connections to the two leaf switches in rack 2 (four to each leaf switch)

The following terms are used when referring to the two racks:

■ Rack 1 (R1) refers to Oracle SuperCluster T5-8
■ Rack 2 (R2) refers to the Oracle Exadata Storage Expansion Rack

In addition, because the InfiniBand switches are physically located in different rack units in the
two racks, the following terms are used when referring to the InfiniBand switches:

■ InfiniBand 1 (IB1) refers to the spine switch, located in U1 in both racks
■ InfiniBand 2 (IB2) refers to the first leaf switch, located in:
■ U26 in Oracle SuperCluster T5-8
■ U20 in the Oracle Exadata Storage Expansion Rack
■ InfiniBand 3 (IB3) refers to the second leaf switch, located in:

■ U32 in Oracle SuperCluster T5-8
■ U24 in the Oracle Exadata Storage Expansion Rack

The following sections provide information for connecting an Oracle Exadata Storage Expansion
Half Rack or Oracle Exadata Storage Expansion Full Rack to your Oracle SuperCluster T5-8:
■ “Two-Rack Cabling” on page 322
■ “Three-Rack Cabling” on page 324
■ “Four-Rack Cabling” on page 326
■ “Five-Rack Cabling” on page 328
■ “Six-Rack Cabling” on page 332
■ “Seven-Rack Cabling” on page 336
■ “Eight-Rack Cabling” on page 340
■ “Nine-Rack Cabling” on page 345
■ “Ten-Rack Cabling” on page 345
■ “Eleven-Rack Cabling” on page 346
■ “Twelve-Rack Cabling” on page 347
■ “Thirteen-Rack Cabling” on page 348
■ “Fourteen-Rack Cabling” on page 348
■ “Fifteen-Rack Cabling” on page 349
■ “Sixteen-Rack Cabling” on page 350
■ “Seventeen-Rack Cabling” on page 351
■ “Eighteen-Rack Cabling” on page 351

Two-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when
cabling two racks together.

TABLE 87 Leaf Switch Connections for the First Rack in a Two-Rack System
Leaf Switch Connection Cable Length
R1-IB3 within Rack 1 R1-IB3-P8A to R1-IB1-P3A 5 meters

R1-IB3-P8B to R1-IB1-P4A

R1-IB3-P9A to R1-IB1-P5A

R1-IB3-P9B to R1-IB1-P6A
R1-IB3 to Rack 2 R1-IB3-P10A to R2-IB1-P7A 5 meters

R1-IB3-P10B to R2-IB1-P8A

R1-IB3-P11A to R2-IB1-P9A

R1-IB3-P11B to R2-IB1-P10A
R1-IB2 within Rack 1 R1-IB2-P8A to R1-IB1-P3B 5 meters

R1-IB2-P8B to R1-IB1-P4B

R1-IB2-P9A to R1-IB1-P5B

R1-IB2-P9B to R1-IB1-P6B
R1-IB2 to Rack 2 R1-IB2-P10A to R2-IB1-P7B 5 meters

R1-IB2-P10B to R2-IB1-P8B

R1-IB2-P11A to R2-IB1-P9B

R1-IB2-P11B to R2-IB1-P10B

The following table shows the cable connections for the second spine switch (R2-IB1) when
cabling two racks together.

TABLE 88 Leaf Switch Connections for the Second Rack in a Two-Rack System
Leaf Switch Connection Cable Length
R2-IB3 within Rack 2 R2-IB3-P8A to R2-IB1-P3A 5 meters

R2-IB3-P8B to R2-IB1-P4A

R2-IB3-P9A to R2-IB1-P5A

R2-IB3-P9B to R2-IB1-P6A
R2-IB3 to Rack 1 R2-IB3-P10A to R1-IB1-P7A 5 meters

R2-IB3-P10B to R1-IB1-P8A

R2-IB3-P11A to R1-IB1-P9A

R2-IB3-P11B to R1-IB1-P10A
R2-IB2 within Rack 2 R2-IB2-P8A to R2-IB1-P3B 5 meters

R2-IB2-P8B to R2-IB1-P4B

R2-IB2-P9A to R2-IB1-P5B

R2-IB2-P9B to R2-IB1-P6B
R2-IB2 to Rack 1 R2-IB2-P10A to R1-IB1-P7B 5 meters

R2-IB2-P10B to R1-IB1-P8B

R2-IB2-P11A to R1-IB1-P9B

R2-IB2-P11B to R1-IB1-P10B
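
As a worked check of the counts listed earlier (eight internal leaf connections and eight
connections from the leaf switches in rack 2), the following Python sketch restates the
R1-IB1 ports from TABLE 87 and TABLE 88; it is illustrative only.

# Illustrative check: in a two-rack system the first spine switch (R1-IB1)
# terminates 16 leaf connections -- 8 from its own rack's leaf switches
# (TABLE 87) and 8 from the leaf switches in rack 2 (TABLE 88).
R1_IB1_PORTS = {
    "R1-IB3": ["P3A", "P4A", "P5A", "P6A"],   # TABLE 87, within rack 1
    "R1-IB2": ["P3B", "P4B", "P5B", "P6B"],   # TABLE 87, within rack 1
    "R2-IB3": ["P7A", "P8A", "P9A", "P10A"],  # TABLE 88, rack 2 to rack 1
    "R2-IB2": ["P7B", "P8B", "P9B", "P10B"],  # TABLE 88, rack 2 to rack 1
}

all_ports = [p for ports in R1_IB1_PORTS.values() for p in ports]
assert len(all_ports) == 16 and len(set(all_ports)) == 16   # no spine port reused
internal = sum(len(p) for leaf, p in R1_IB1_PORTS.items() if leaf.startswith("R1"))
print(f"R1-IB1: {internal} links from rack 1 leaves, {len(all_ports) - internal} links from rack 2 leaves")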

Three-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when
cabling three full racks together.

TABLE 89 Leaf Switch Connections for the First Rack in a Three-Rack System

Leaf Switch Connection Cable Length


R1-IB3 within Rack 1 R1-IB3-P8A to R1-IB1-P3A 5 meters

R1-IB3-P8B to R1-IB1-P4A

R1-IB3-P9A to R1-IB1-P5A
R1-IB3 to Rack 2 R1-IB3-P9B to R2-IB1-P6A 5 meters

R1-IB3-P10A to R2-IB1-P7A

R1-IB3-P10B to R2-IB1-P8A
R1-IB3 to Rack 3 R1-IB3-P11A to R3-IB1-P9A 5 meters

R1-IB3-P11B to R3-IB1-P10A
R1-IB2 within Rack 1 R1-IB2-P8A to R1-IB1-P3B 5 meters

R1-IB2-P8B to R1-IB1-P4B

R1-IB2-P9A to R1-IB1-P5B
R1-IB2 to Rack 2 R1-IB2-P9B to R2-IB1-P6B 5 meters

R1-IB2-P10A to R2-IB1-P7B

R1-IB2-P10B to R2-IB1-P8B
R1-IB2 to Rack 3 R1-IB2-P11A to R3-IB1-P9B 5 meters

R1-IB2-P11B to R3-IB1-P10B

The following table shows the cable connections for the second spine switch (R2-IB1) when
cabling three racks together.

TABLE 90 Leaf Switch Connections for the Second Rack in a Three-Rack System

Leaf Switch Connection Cable Length


R2-IB3 within Rack 2 R2-IB3-P8A to R2-IB1-P3A 5 meters

R2-IB3-P8B to R2-IB1-P4A

R2-IB3-P9A to R2-IB1-P5A
R2-IB3 to Rack 1 R2-IB3-P11A to R1-IB1-P9A 5 meters

R2-IB3-P11B to R1-IB1-P10A
R2-IB3 to Rack 3 R2-IB3-P9B to R3-IB1-P6A 5 meters

R2-IB3-P10A to R3-IB1-P7A

R2-IB3-P10B to R3-IB1-P8A
R2-IB2 within Rack 2 R2-IB2-P8A to R2-IB1-P3B 5 meters

R2-IB2-P8B to R2-IB1-P4B

R2-IB2-P9A to R2-IB1-P5B
R2-IB2 to Rack 1 R2-IB2-P11A to R1-IB1-P9B 5 meters

R2-IB2-P11B to R1-IB1-P10B
R2-IB2 to Rack 3 R2-IB2-P9B to R3-IB1-P6B 5 meters

R2-IB2-P10A to R3-IB1-P7B

R2-IB2-P10B to R3-IB1-P8B

The following table shows the cable connections for the third spine switch (R3-IB1) when
cabling three full racks together.

TABLE 91 Leaf Switch Connections for the Third Rack in a Three-Rack System

Leaf Switch Connection Cable Length


R3-IB3 within Rack 3 R3-IB3-P8A to R3-IB1-P3A 5 meters

R3-IB3-P8B to R3-IB1-P4A

R3-IB3-P9A to R3-IB1-P5A
R3-IB3 to Rack 1 R3-IB3-P9B to R1-IB1-P6A 5 meters

R3-IB3-P10A to R1-IB1-P7A

R3-IB3-P10B to R1-IB1-P8A
R3-IB3 to Rack 2 R3-IB3-P11A to R2-IB1-P9A 5 meters

R3-IB3-P11B to R2-IB1-P10A
R3-IB2 within Rack 3 R3-IB2-P8A to R3-IB1-P3B 5 meters

R3-IB2-P8B to R3-IB1-P4B

R3-IB2-P9A to R3-IB1-P5B

R3-IB2 to Rack 1 R3-IB2-P9B to R1-IB1-P6B 5 meters

R3-IB2-P10A to R1-IB1-P7B

R3-IB2-P10B to R1-IB1-P8B
R3-IB2 to Rack 2 R3-IB2-P11A to R2-IB1-P9B 5 meters

R3-IB2-P11B to R2-IB1-P10B

Four-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when
cabling four full racks together.

TABLE 92 Leaf Switch Connections for the First Rack in a Four-Rack System

Leaf Switch Connection Cable Length


R1-IB3 within Rack 1 R1-IB3-P8A to R1-IB1-P3A 5 meters

R1-IB3-P8B to R1-IB1-P4A
R1-IB3 to Rack 2 R1-IB3-P9A to R2-IB1-P5A 5 meters

R1-IB3-P9B to R2-IB1-P6A
R1-IB3 to Rack 3 R1-IB3-P10A to R3-IB1-P7A 5 meters

R1-IB3-P10B to R3-IB1-P8A
R1-IB3 to Rack 4 R1-IB3-P11A to R4-IB1-P9A 10 meters

R1-IB3-P11B to R4-IB1-P10A
R1-IB2 within Rack 1 R1-IB2-P8A to R1-IB1-P3B 5 meters

R1-IB2-P8B to R1-IB1-P4B
R1-IB2 to Rack 2 R1-IB2-P9A to R2-IB1-P5B 5 meters

R1-IB2-P9B to R2-IB1-P6B
R1-IB2 to Rack 3 R1-IB2-P10A to R3-IB1-P7B 5 meters

R1-IB2-P10B to R3-IB1-P8B
R1-IB2 to Rack 4 R1-IB2-P11A to R4-IB1-P9B 10 meters

R1-IB2-P11B to R4-IB1-P10B

The following table shows the cable connections for the second spine switch (R2-IB1) when
cabling four full racks together.

TABLE 93 Leaf Switch Connections for the Second Rack in a Four-Rack System
Leaf Switch Connection Cable Length
R2-IB3 within Rack 2 R2-IB3-P8A to R2-IB1-P3A 5 meters

R2-IB3-P8B to R2-IB1-P4A
R2-IB3 to Rack 1 R2-IB3-P11A to R1-IB1-P9A 5 meters

R2-IB3-P11B to R1-IB1-P10A
R2-IB3 to Rack 3 R2-IB3-P9A to R3-IB1-P5A 5 meters

R2-IB3-P9B to R3-IB1-P6A
R2-IB3 to Rack 4 R2-IB3-P10A to R4-IB1-P7A 5 meters

R2-IB3-P10B to R4-IB1-P8A
R2-IB2 within Rack 2 R2-IB2-P8A to R2-IB1-P3B 5 meters

R2-IB2-P8B to R2-IB1-P4B
R2-IB2 to Rack 1 R2-IB2-P11A to R1-IB1-P9B 5 meters

R2-IB2-P11B to R1-IB1-P10B
R2-IB2 to Rack 3 R2-IB2-P9A to R3-IB1-P5B 5 meters

R2-IB2-P9B to R3-IB1-P6B
R2-IB2 to Rack 4 R2-IB2-P10A to R4-IB1-P7B 5 meters

R2-IB2-P10B to R4-IB1-P8B

The following table shows the cable connections for the third spine switch (R3-IB1) when
cabling four full racks together.

TABLE 94 Leaf Switch Connections for the Third Rack in a Four-Rack System
Leaf Switch Connection Cable Length
R3-IB3 within Rack 3 R3-IB3-P8A to R3-IB1-P3A 5 meters

R3-IB3-P8B to R3-IB1-P4A
R3-IB3 to Rack 1 R3-IB3-P10A to R1-IB1-P7A 5 meters

R3-IB3-P10B to R1-IB1-P8A
R3-IB3 to Rack 2 R3-IB3-P11A to R2-IB1-P9A 5 meters

R3-IB3-P11B to R2-IB1-P10A
R3-IB3 to Rack 4 R3-IB3-P9A to R4-IB1-P5A 5 meters

R3-IB3-P9B to R4-IB1-P6A
R3-IB2 within Rack 3 R3-IB2-P8A to R3-IB1-P3B 5 meters

R3-IB2-P8B to R3-IB1-P4B

R3-IB2 to Rack 1 R3-IB2-P10A to R1-IB1-P7B 5 meters

R3-IB2-P10B to R1-IB1-P8B
R3-IB2 to Rack 2 R3-IB2-P11A to R2-IB1-P9B 5 meters

R3-IB2-P11B to R2-IB1-P10B
R3-IB2 to Rack 4 R3-IB2-P9A to R4-IB1-P5B 5 meters

R3-IB2-P9B to R4-IB1-P6B

The following table shows the cable connections for the fourth spine switch (R4-IB1) when
cabling four full racks together.

TABLE 95 Leaf Switch Connections for the Fourth Rack in a Four-Rack System
Leaf Switch Connection Cable Length
R4-IB3 within Rack 4 R4-IB3-P8A to R4-IB1-P3A 5 meters

R4-IB3-P8B to R4-IB1-P4A
R4-IB3 to Rack 1 R4-IB3-P9A to R1-IB1-P5A 10 meters

R4-IB3-P9B to R1-IB1-P6A
R4-IB3 to Rack 2 R4-IB3-P10A to R2-IB1-P7A 5 meters

R4-IB3-P10B to R2-IB1-P8A
R4-IB3 to Rack 3 R4-IB3-P11A to R3-IB1-P9A 5 meters

R4-IB3-P11B to R3-IB1-P10A
R4-IB2 within Rack 4 R4-IB2-P8A to R4-IB1-P3B 5 meters

R4-IB2-P8B to R4-IB1-P4B
R4-IB2 to Rack 1 R4-IB2-P9A to R1-IB1-P5B 10 meters

R4-IB2-P9B to R1-IB1-P6B
R4-IB2 to Rack 2 R4-IB2-P10A to R2-IB1-P7B 5 meters

R4-IB2-P10B to R2-IB1-P8B
R4-IB2 to Rack 3 R4-IB2-P11A to R3-IB1-P9B 5 meters

R4-IB2-P11B to R3-IB1-P10B

Five-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when
cabling five full racks together.

TABLE 96 Leaf Switch Connections for the First Rack in a Five-Rack System
Leaf Switch Connection Cable Length
R1 IB3 within Rack 1 R1-IB3-P8A to R1-IB1-P3A 3 meters

R1-IB3-P8B to R1-IB1-P4A
R1 IB3 to Rack 2 R1-IB3-P9A to R2-IB1-P5A 5 meters

R1-IB3-P9B to R2-IB1-P6A
R1 IB3 to Rack 3 R1-IB3-P10A to R3-IB1-P7A 5 meters

R1-IB3-P10B to R3-IB1-P8A
R1 IB3 to Rack 4 R1-IB3-P11A to R4-IB1-P9A 10 meters
R1 IB3 to Rack 5 R1-IB3-P11B to R5-IB1-P10A 10 meters
R1 IB2 within Rack 1 R1-IB2-P8A to R1-IB1-P3B 3 meters

R1-IB2-P8B to R1-IB1-P4B
R1 IB2 to Rack 2 R1-IB2-P9A to R2-IB1-P5B 3 meters

R1-IB2-P9B to R2-IB1-P6B
R1 IB2 to Rack 3 R1-IB2-P10A to R3-IB1-P7B 5 meters

R1-IB2-P10B to R3-IB1-P8B
R1 IB2 to Rack 4 R1-IB2-P11A to R4-IB1-P9B 10 meters
R1 IB2 to Rack 5 R1-IB2-P11B to R5-IB1-P10B 10 meters

The following table shows the cable connections for the second spine switch (R2-IB1) when
cabling five full racks together.

TABLE 97 Leaf Switch Connections for the Second Rack in a Five-Rack System
Leaf Switch Connection Cable Length
R2 IB3 within Rack 2 R2-IB3-P8A to R2-IB1-P3A 3 meters

R2-IB3-P8B to R2-IB1-P4A
R2 IB3 to Rack 1 R2-IB3-P11B to R1-IB1-P10A 5 meters
R2 IB3 to Rack 3 R2-IB3-P9A to R3-IB1-P5A 5 meters

R2-IB3-P9B to R3-IB1-P6A
R2 IB3 to Rack 4 R2-IB3-P10A to R4-IB1-P7A 5 meters

R2-IB3-P10B to R4-IB1-P8A
R2 IB3 to Rack 5 R2-IB3-P11A to R5-IB1-P9A 10 meters
R2 IB2 within Rack 2 R2-IB2-P8A to R2-IB1-P3B 3 meters

R2-IB2-P8B to R2-IB1-P4B
R2 IB2 to Rack 1 R2-IB2-P11B to R1-IB1-P10B 5 meters

R2 IB2 to Rack 3 R2-IB2-P9A to R3-IB1-P5B 5 meters

R2-IB2-P9B to R3-IB1-P6B
R2 IB2 to Rack 4 R2-IB2-P10A to R4-IB1-P7B 5 meters

R2-IB2-P10B to R4-IB1-P8B
R2 IB2 to Rack 5 R2-IB2-P11A to R5-IB1-P9B 10 meters

The following table shows the cable connections for the third spine switch (R3-IB1) when
cabling five full racks together.

TABLE 98 Leaf Switch Connections for the Third Rack in a Five-Rack System
Leaf Switch Connection Cable Length
R3 IB3 within Rack 3 R3-IB3-P8A to R3-IB1-P3A 3 meters

R3-IB3-P8B to R3-IB1-P4A
R3 IB3 to Rack 1 R3-IB3-P11A to R1-IB1-P9A 5 meters
R3 IB3 to Rack 2 R3-IB3-P11B to R2-IB1-P10A 5 meters
R3 IB3 to Rack 4 R3-IB3-P9A to R4-IB1-P5A 5 meters

R3-IB3-P9B to R4-IB1-P6A
R3 IB3 to Rack 5 R3-IB3-P10A to R5-IB1-P7A 5 meters

R3-IB3-P10B to R5-IB1-P8A
R3 IB2 within Rack 3 R3-IB2-P8A to R3-IB1-P3B 3 meters

R3-IB2-P8B to R3-IB1-P4B
R3 IB2 to Rack 1 R3-IB2-P11A to R1-IB1-P9B 5 meters
R3 IB2 to Rack 2 R3-IB2-P11B to R2-IB1-P10B 5 meters
R3 IB2 to Rack 4 R3-IB2-P9A to R4-IB1-P5B 5 meters

R3-IB2-P9B to R4-IB1-P6B
R3 IB2 to Rack 5 R3-IB2-P10A to R5-IB1-P7B 5 meters

R3-IB2-P10B to R5-IB1-P8B

The following table shows the cable connections for the fourth spine switch (R4-IB1) when
cabling five full racks together.

TABLE 99 Leaf Switch Connections for the Fourth Rack in a Five-Rack System
Leaf Switch Connection Cable Length
R4 IB3 within Rack 4 R4-IB3-P8A to R4-IB1-P3A 3 meters

R4-IB3-P8B to R4-IB1-P4A
R4 IB3 to Rack 1 R4-IB3-P10A to R1-IB1-P7A 10 meters

R4-IB3-P10B to R1-IB1-P8A
R4 IB3 to Rack 2 R4-IB3-P11A to R2-IB1-P9A 5 meters
R4 IB3 to Rack 3 R4-IB3-P11B to R3-IB1-P10A 5 meters
R4 IB3 to Rack 5 R4-IB3-P9A to R5-IB1-P5A 5 meters

R4-IB3-P9B to R5-IB1-P6A
R4 IB2 within Rack 4 R4-IB2-P8A to R4-IB1-P3B 3 meters

R4-IB2-P8B to R4-IB1-P4B
R4 IB2 to Rack 1 R4-IB2-P10A to R1-IB1-P7B 10 meters

R4-IB2-P10B to R1-IB1-P8B
R4 IB2 to Rack 2 R4-IB2-P11A to R2-IB1-P9B 5 meters
R4 IB2 to Rack 3 R4-IB2-P11B to R3-IB1-P10B 5 meters
R4 IB2 to Rack 5 R4-IB2-P9A to R5-IB1-P5B 5 meters

R4-IB2-P9B to R5-IB1-P6B

The following table shows the cable connections for the fifth spine switch (R5-IB1) when
cabling five full racks together.

TABLE 100 Leaf Switch Connections for the Fifth Rack in a Five-Rack System

Leaf Switch Connection Cable Length


R5 IB3 within Rack 5 R5-IB3-P8A to R5-IB1-P3A 3 meters

R5-IB3-P8B to R5-IB1-P4A
R5 IB3 to Rack 1 R5-IB3-P9A to R1-IB1-P5A 10 meters

R5-IB3-P9B to R1-IB1-P6A
R5 IB3 to Rack 2 R5-IB3-P10A to R2-IB1-P7A 10 meters

R5-IB3-P10B to R2-IB1-P8A
R5 IB3 to Rack 3 R5-IB3-P11A to R3-IB1-P9A 5 meters
R5 IB3 to Rack 4 R5-IB3-P11B to R4-IB1-P10A 5 meters
R5 IB2 within Rack 5 R5-IB2-P8A to R5-IB1-P3B 3 meters

R5-IB2-P8B to R5-IB1-P4B
R5 IB2 to Rack 1 R5-IB2-P9A to R1-IB1-P5B 10 meters

R5-IB2-P9B to R1-IB1-P6B
R5 IB2 to Rack 2 R5-IB2-P10A to R2-IB1-P7B 10 meters

R5-IB2-P10B to R2-IB1-P8B
R5 IB2 to Rack 3 R5-IB2-P11A to R3-IB1-P9B 5 meters
R5 IB2 to Rack 4 R5-IB2-P11B to R4-IB1-P10B 5 meters

Six-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when
cabling six full racks together.

TABLE 101 Leaf Switch Connections for the First Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R1 IB3 within Rack 1 R1-IB3-P8A to R1-IB1-P3A 3 meters

R1-IB3-P8B to R1-IB1-P4A
R1 IB3 to Rack 2 R1-IB3-P9A to R2-IB1-P5A 5 meters

R1-IB3-P9B to R2-IB1-P6A
R1 IB3 to Rack 3 R1-IB3-P10A to R3-IB1-P7A 5 meters
R1 IB3 to Rack 4 R1-IB3-P10B to R4-IB1-P8A 10 meters
R1 IB3 to Rack 5 R1-IB3-P11A to R5-IB1-P9A 10 meters
R1 IB3 to Rack 6 R1-IB3-P11B to R6-IB1-P10A 10 meters
R1 IB2 within Rack 1 R1-IB2-P8A to R1-IB1-P3B 3 meters

R1-IB2-P8B to R1-IB1-P4B
R1 IB2 to Rack 2 R1-IB2-P9A to R2-IB1-P5B 5 meters

R1-IB2-P9B to R2-IB1-P6B
R1 IB2 to Rack 3 R1-IB2-P10A to R3-IB1-P7B 5 meters
R1 IB2 to Rack 4 R1-IB2-P10B to R4-IB1-P8B 10 meters
R1 IB2 to Rack 5 R1-IB2-P11A to R5-IB1-P9B 10 meters
R1 IB2 to Rack 6 R1-IB2-P11B to R6-IB1-P10B 10 meters

The following table shows the cable connections for the second spine switch (R2-IB1) when
cabling six full racks together.

TABLE 102 Leaf Switch Connections for the Second Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R2 IB3 within Rack 2 R2-IB3-P8A to R2-IB1-P3A 3 meters

R2-IB3-P8B to R2-IB1-P4A
R2 IB3 to Rack 1 R2-IB3-P11B to R1-IB1-P10A 5 meters
R2 IB3 to Rack 3 R2-IB3-P9A to R3-IB1-P5A 5 meters

R2-IB3-P9B to R3-IB1-P6A
R2 IB3 to Rack 4 R2-IB3-P10A to R4-IB1-P7A 5 meters
R2 IB3 to Rack 5 R2-IB3-P10B to R5-IB1-P8A 10 meters
R2 IB3 to Rack 6 R2-IB3-P11A to R6-IB1-P9A 10 meters
R2 IB2 within Rack 2 R2-IB2-P8A to R2-IB1-P3B 3 meters

R2-IB2-P8B to R2-IB1-P4B
R2 IB2 to Rack 1 R2-IB2-P11B to R1-IB1-P10B 5 meters
R2 IB2 to Rack 3 R2-IB2-P9A to R3-IB1-P5B 5 meters

R2-IB2-P9B to R3-IB1-P6B
R2 IB2 to Rack 4 R2-IB2-P10A to R4-IB1-P7B 5 meters
R2 IB2 to Rack 5 R2-IB2-P10B to R5-IB1-P8B 10 meters
R2 IB2 to Rack 6 R2-IB2-P11A to R6-IB1-P9B 10 meters

The following table shows the cable connections for the third spine switch (R3-IB1) when
cabling six full racks together.

TABLE 103 Leaf Switch Connections for the Third Rack in a Six-Rack System
Leaf Switch Connection Cable Length
R3 IB3 within Rack 3 R3-IB3-P8A to R3-IB1-P3A 3 meters

R3-IB3-P8B to R3-IB1-P4A
R3 IB3 to Rack 1 R3-IB3-P11A to R1-IB1-P9A 5 meters
R3 IB3 to Rack 2 R3-IB3-P11B to R2-IB1-P10A 5 meters
R3 IB3 to Rack 4 R3-IB3-P9A to R4-IB1-P5A 5 meters

R3-IB3-P9B to R4-IB1-P6A
R3 IB3 to Rack 5 R3-IB3-P10A to R5-IB1-P7A 5 meters
R3 IB3 to Rack 6 R3-IB3-P10B to R6-IB1-P8A 5 meters
R3 IB2 within Rack 3 R3-IB2-P8A to R3-IB1-P3B 3 meters

R3-IB2-P8B to R3-IB1-P4B
R3 IB2 to Rack 1 R3-IB2-P11A to R1-IB1-P9B 5 meters
R3 IB2 to Rack 2 R3-IB2-P11B to R2-IB1-P10B 5 meters
R3 IB2 to Rack 4 R3-IB2-P9A to R4-IB1-P5B 5 meters

R3-IB2-P9B to R4-IB1-P6B

R3 IB2 to Rack 5 R3-IB2-P10A to R5-IB1-P7B 5 meters
R3 IB2 to Rack 6 R3-IB2-P10B to R6-IB1-P8B 5 meters

The following table shows the cable connections for the fourth spine switch (R4-IB1) when
cabling six full racks together.

TABLE 104 Leaf Switch Connections for the Fourth Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R4 IB3 within Rack 4 R4-IB3-P8A to R4-IB1-P3A 3 meters

R4-IB3-P8B to R4-IB1-P4A
R4 IB3 to Rack 1 R4-IB3-P10B to R1-IB1-P8A 10 meters
R4 IB3 to Rack 2 R4-IB3-P11A to R2-IB1-P9A 5 meters
R4 IB3 to Rack 3 R4-IB3-P11B to R3-IB1-P10A 5 meters
R4 IB3 to Rack 5 R4-IB3-P9A to R5-IB1-P5A 5 meters

R4-IB3-P9B to R5-IB1-P6A
R4 IB3 to Rack 6 R4-IB3-P10A to R6-IB1-P7A 5 meters
R4 IB2 within Rack 4 R4-IB2-P8A to R4-IB1-P3B 3 meters

R4-IB2-P8B to R4-IB1-P4B
R4 IB2 to Rack 1 R4-IB2-P10B to R1-IB1-P8B 10 meters
R4 IB2 to Rack 2 R4-IB2-P11A to R2-IB1-P9B 5 meters
R4 IB2 to Rack 3 R4-IB2-P11B to R3-IB1-P10B 5 meters
R4 IB2 to Rack 5 R4-IB2-P9A to R5-IB1-P5B 5 meters

R4-IB2-P9B to R5-IB1-P6B
R4 IB2 to Rack 6 R4-IB2-P10A to R6-IB1-P7B 5 meters

The following table shows the cable connections for the fifth spine switch (R5-IB1) when
cabling six full racks together.

TABLE 105 Leaf Switch Connections for the Fifth Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R5 IB3 within Rack 5 R5-IB3-P8A to R5-IB1-P3A 3 meters

R5-IB3-P8B to R5-IB1-P4A
R5 IB3 to Rack 1 R5-IB3-P10A to R1-IB1-P7A 10 meters
R5 IB3 to Rack 2 R5-IB3-P10B to R2-IB1-P8A 10 meters

R5 IB3 to Rack 3 R5-IB3-P11A to R3-IB1-P9A 5 meters
R5 IB3 to Rack 4 R5-IB3-P11B to R4-IB1-P10A 5 meters
R5 IB3 to Rack 6 R5-IB3-P9A to R6-IB1-P5A 5 meters

R5-IB3-P9B to R6-IB1-P6A
R5 IB2 within Rack 5 R5-IB2-P8A to R5-IB1-P3B 3 meters

R5-IB2-P8B to R5-IB1-P4B
R5 IB2 to Rack 1 R5-IB2-P10A to R1-IB1-P7B 10 meters
R5 IB2 to Rack 2 R5-IB2-P10B to R2-IB1-P8B 10 meters
R5 IB2 to Rack 3 R5-IB2-P11A to R3-IB1-P9B 5 meters
R5 IB2 to Rack 4 R5-IB2-P11B to R4-IB1-P10B 5 meters
R5 IB2 to Rack 6 R5-IB2-P9A to R6-IB1-P5B 5 meters

R5-IB2-P9B to R6-IB1-P6B

The following table shows the cable connections for the sixth spine switch (R6-IB1) when
cabling six full racks together.

TABLE 106 Leaf Switch Connections for the Sixth Rack in a Six-Rack System

Leaf Switch Connection Cable Length


R6 IB3 within Rack 6 R6-IB3-P8A to R6-IB1-P3A 3 meters

R6-IB3-P8B to R6-IB1-P4A
R6 IB3 to Rack 1 R6-IB3-P9A to R1-IB1-P5A 10 meters

R6-IB3-P9B to R1-IB1-P6A
R6 IB3 to Rack 2 R6-IB3-P10A to R2-IB1-P7A 10 meters
R6 IB3 to Rack 3 R6-IB3-P10B to R3-IB1-P8A 5 meters
R6 IB3 to Rack 4 R6-IB3-P11A to R4-IB1-P9A 5 meters
R6 IB3 to Rack 5 R6-IB3-P11B to R5-IB1-P10A 5 meters
R6 IB2 within Rack 6 R6-IB2-P8A to R6-IB1-P3B 3 meters

R6-IB2-P8B to R6-IB1-P4B
R6 IB2 to Rack 2 R6-IB2-P10A to R2-IB1-P7B 10 meters
R6 IB2 to Rack 1 R6-IB2-P9A to R1-IB1-P5B 10 meters

R6-IB2-P9B to R1-IB1-P6B
R6 IB2 to Rack 3 R6-IB2-P10B to R3-IB1-P8B 5 meters
R6 IB2 to Rack 4 R6-IB2-P11A to R4-IB1-P9B 5 meters
R6 IB2 to Rack 5 R6-IB2-P11B to R5-IB1-P10B 5 meters

Seven-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when
cabling seven full racks together.

TABLE 107 Leaf Switch Connections for the First Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R1 IB3 within Rack 1 R1-IB3-P8A to R1-IB1-P3A 3 meters

R1-IB3-P8B to R1-IB1-P4A
R1 IB3 to Rack 2 R1-IB3-P9A to R2-IB1-P5A 5 meters
R1 IB3 to Rack 3 R1-IB3-P9B to R3-IB1-P6A 5 meters
R1 IB3 to Rack 4 R1-IB3-P10A to R4-IB1-P7A 10 meters
R1 IB3 to Rack 5 R1-IB3-P10B to R5-IB1-P8A 10 meters
R1 IB3 to Rack 6 R1-IB3-P11A to R6-IB1-P9A 10 meters
R1 IB3 to Rack 7 R1-IB3-P11B to R7-IB1-P10A 10 meters
R1 IB2 within Rack 1 R1-IB2-P8A to R1-IB1-P3B 3 meters

R1-IB2-P8B to R1-IB1-P4B
R1 IB2 to Rack 2 R1-IB2-P9A to R2-IB1-P5B 5 meters
R1 IB2 to Rack 3 R1-IB2-P9B to R3-IB1-P6B 5 meters
R1 IB2 to Rack 4 R1-IB2-P10A to R4-IB1-P7B 10 meters
R1 IB2 to Rack 5 R1-IB2-P10B to R5-IB1-P8B 10 meters
R1 IB2 to Rack 6 R1-IB2-P11A to R6-IB1-P9B 10 meters
R1 IB2 to Rack 7 R1-IB2-P11B to R7-IB1-P10B 10 meters

The following table shows the cable connections for the second spine switch (R2-IB1) when
cabling seven full racks together.

TABLE 108 Leaf Switch Connections for the Second Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R2 IB3 within Rack 2 R2-IB3-P8A to R2-IB1-P3A 3 meters

R2-IB3-P8B to R2-IB1-P4A
R2 IB3 to Rack 1 R2-IB3-P11B to R1-IB1-P10A 5 meters
R2 IB3 to Rack 3 R2-IB3-P9A to R3-IB1-P5A 5 meters
R2 IB3 to Rack 4 R2-IB3-P9B to R4-IB1-P6A 5 meters
R2 IB3 to Rack 5 R2-IB3-P10A to R5-IB1-P7A 10 meters
R2 IB3 to Rack 6 R2-IB3-P10B to R6-IB1-P8A 10 meters

R2 IB3 to Rack 7 R2-IB3-P11A to R7-IB1-P9A 10 meters
R2 IB2 within Rack 2 R2-IB2-P8A to R2-IB1-P3B 3 meters

R2-IB2-P8B to R2-IB1-P4B
R2 IB2 to Rack 1 R2-IB2-P11B to R1-IB1-P10B 5 meters
R2 IB2 to Rack 3 R2-IB2-P9A to R3-IB1-P5B 5 meters
R2 IB2 to Rack 4 R2-IB2-P9B to R4-IB1-P6B 5 meters
R2 IB2 to Rack 5 R2-IB2-P10A to R5-IB1-P7B 10 meters
R2 IB2 to Rack 6 R2-IB2-P10B to R6-IB1-P8B 10 meters
R2 IB2 to Rack 7 R2-IB2-P11A to R7-IB1-P9B 10 meters

The following table shows the cable connections for the third spine switch (R3-IB1) when
cabling seven full racks together.

TABLE 109 Leaf Switch Connections for the Third Rack in a Seven-Rack System

Leaf Switch Connection Cable Length


R3 IB3 within Rack 3 R3-IB3-P8A to R3-IB1-P3A 3 meters

R3-IB3-P8B to R3-IB1-P4A
R3 IB3 to Rack 1 R3-IB3-P11A to R1-IB1-P9A 5 meters
R3 IB3 to Rack 2 R3-IB3-P11B to R2-IB1-P10A 5 meters
R3 IB3 to Rack 4 R3-IB3-P9A to R4-IB1-P5A 5 meters
R3 IB3 to Rack 5 R3-IB3-P9B to R5-IB1-P6A 5 meters
R3 IB3 to Rack 6 R3-IB3-P10A to R6-IB1-P7A 10 meters
R3 IB3 to Rack 7 R3-IB3-P10B to R7-IB1-P8A 10 meters
R3 IB2 within Rack 3 R3-IB2-P8A to R3-IB1-P3B 3 meters

R3-IB2-P8B to R3-IB1-P4B
R3 IB2 to Rack 1 R3-IB2-P11A to R1-IB1-P9B 5 meters
R3 IB2 to Rack 2 R3-IB2-P11B to R2-IB1-P10B 5 meters
R3 IB2 to Rack 4 R3-IB2-P9A to R4-IB1-P5B 5 meters
R3 IB2 to Rack 5 R3-IB2-P9B to R5-IB1-P6B 5 meters
R3 IB2 to Rack 6 R3-IB2-P10A to R6-IB1-P7B 10 meters
R3 IB2 to Rack 7 R3-IB2-P10B to R7-IB1-P8B 10 meters

The following table shows the cable connections for the fourth spine switch (R4-IB1) when
cabling seven full racks together.

TABLE 110 Leaf Switch Connections for the Fourth Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R4 IB3 within Rack 4 R4-IB3-P8A to R4-IB1-P3A 3 meters

R4-IB3-P8B to R4-IB1-P4A
R4 IB3 to Rack 1 R4-IB3-P10B to R1-IB1-P8A 10 meters
R4 IB3 to Rack 2 R4-IB3-P11A to R2-IB1-P9A 5 meters
R4 IB3 to Rack 3 R4-IB3-P11B to R3-IB1-P10A 5 meters
R4 IB3 to Rack 5 R4-IB3-P9A to R5-IB1-P5A 5 meters
R4 IB3 to Rack 6 R4-IB3-P9B to R6-IB1-P6A 5 meters
R4 IB3 to Rack 7 R4-IB3-P10A to R7-IB1-P7A 10 meters
R4 IB2 within Rack 4 R4-IB2-P8A to R4-IB1-P3B 3 meters

R4-IB2-P8B to R4-IB1-P4B
R4 IB2 to Rack 1 R4-IB2-P10B to R1-IB1-P8B 10 meters
R4 IB2 to Rack 2 R4-IB2-P11A to R2-IB1-P9B 5 meters
R4 IB2 to Rack 3 R4-IB2-P11B to R3-IB1-P10B 5 meters
R4 IB2 to Rack 5 R4-IB2-P9A to R5-IB1-P5B 5 meters
R4 IB2 to Rack 6 R4-IB2-P9B to R6-IB1-P6B 5 meters
R4 IB2 to Rack 7 R4-IB2-P10A to R7-IB1-P7B 10 meters

The following table shows the cable connections for the fifth spine switch (R5-IB1) when
cabling seven full racks together.

TABLE 111 Leaf Switch Connections for the Fifth Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R5 IB3 within Rack 5 R5-IB3-P8A to R5-IB1-P3A 3 meters

R5-IB3-P8B to R5-IB1-P4A
R5 IB3 to Rack 1 R5-IB3-P10A to R1-IB1-P7A 10 meters
R5 IB3 to Rack 2 R5-IB3-P10B to R2-IB1-P8A 10 meters
R5 IB3 to Rack 3 R5-IB3-P11A to R3-IB1-P9A 5 meters
R5 IB3 to Rack 4 R5-IB3-P11B to R4-IB1-P10A 5 meters
R5 IB3 to Rack 6 R5-IB3-P9A to R6-IB1-P5A 5 meters
R5 IB3 to Rack 7 R5-IB3-P9B to R7-IB1-P6A 5 meters
R5 IB2 within Rack 5 R5-IB2-P8A to R5-IB1-P3B 3 meters

R5-IB2-P8B to R5-IB1-P4B
R5 IB2 to Rack 1 R5-IB2-P10A to R1-IB1-P7B 10 meters
R5 IB2 to Rack 2 R5-IB2-P10B to R2-IB1-P8B 10 meters
R5 IB2 to Rack 3 R5-IB2-P11A to R3-IB1-P9B 5 meters

R5 IB2 to Rack 4 R5-IB2-P11B to R4-IB1-P10B 5 meters
R5 IB2 to Rack 6 R5-IB2-P9A to R6-IB1-P5B 5 meters
R5 IB2 to Rack 7 R5-IB2-P9B to R7-IB1-P6B 5 meters

The following table shows the cable connections for the sixth spine switch (R6-IB1) when
cabling seven full racks together.

TABLE 112 Leaf Switch Connections for the Sixth Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R6 IB3 within Rack 6 R6-IB3-P8A to R6-IB1-P3A 3 meters

R6-IB3-P8B to R6-IB1-P4A
R6 IB3 to Rack 1 R6-IB3-P9B to R1-IB1-P6A 10 meters
R6 IB3 to Rack 2 R6-IB3-P10A to R2-IB1-P7A 10 meters
R6 IB3 to Rack 3 R6-IB3-P10B to R3-IB1-P8A 5 meters
R6 IB3 to Rack 4 R6-IB3-P11A to R4-IB1-P9A 5 meters
R6 IB3 to Rack 5 R6-IB3-P11B to R5-IB1-P10A 5 meters
R6 IB3 to Rack 7 R6-IB3-P9A to R7-IB1-P5A 5 meters
R6 IB2 within Rack 6 R6-IB2-P8A to R6-IB1-P3B 3 meters

R6-IB2-P8B to R6-IB1-P4B
R6 IB2 to Rack 1 R6-IB2-P9B to R1-IB1-P6B 10 meters
R6 IB2 to Rack 2 R6-IB2-P10A to R2-IB1-P7B 10 meters
R6 IB2 to Rack 3 R6-IB2-P10B to R3-IB1-P8B 5 meters
R6 IB2 to Rack 4 R6-IB2-P11A to R4-IB1-P9B 5 meters
R6 IB2 to Rack 5 R6-IB2-P11B to R5-IB1-P10B 5 meters
R6 IB2 to Rack 7 R6-IB2-P9A to R7-IB1-P5B 5 meters

The following table shows the cable connections for the seventh spine switch (R7-IB1) when
cabling seven full racks together.

TABLE 113 Leaf Switch Connections for the Seventh Rack in a Seven-Rack System
Leaf Switch Connection Cable Length
R7 IB3 within Rack 7 R7-IB3-P8A to R7-IB1-P3A 3 meters

R7-IB3-P8B to R7-IB1-P4A
R7 IB3 to Rack 1 R7-IB3-P9A to R1-IB1-P5A 10 meters
R7 IB3 to Rack 2 R7-IB3-P9B to R2-IB1-P6A 10 meters

R7 IB3 to Rack 3 R7-IB3-P10A to R3-IB1-P7A 10 meters
R7 IB3 to Rack 4 R7-IB3-P10B to R4-IB1-P8A 10 meters
R7 IB3 to Rack 5 R7-IB3-P11A to R5-IB1-P9A 5 meters
R7 IB3 to Rack 6 R7-IB3-P11B to R6-IB1-P10A 5 meters
R7 IB2 within Rack 7 R7-IB2-P8A to R7-IB1-P3B 3 meters

R7-IB2-P8B to R7-IB1-P4B
R7 IB2 to Rack 1 R7-IB2-P9A to R1-IB1-P5B 10 meters
R7 IB2 to Rack 2 R7-IB2-P9B to R2-IB1-P6B 10 meters
R7 IB2 to Rack 3 R7-IB2-P10A to R3-IB1-P7B 10 meters
R7 IB2 to Rack 4 R7-IB2-P10B to R4-IB1-P8B 10 meters
R7 IB2 to Rack 5 R7-IB2-P11A to R5-IB1-P9B 5 meters
R7 IB2 to Rack 6 R7-IB2-P11B to R6-IB1-P10B 5 meters

Eight-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when
cabling eight full racks together.

TABLE 114 Leaf Switch Connections for the First Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R1 IB3 within Rack 1 R1-IB3-P8A to R1-IB1-P3A 3 meters
R1 IB3 to Rack 2 R1-IB3-P8B to R2-IB1-P4A 5 meters
R1 IB3 to Rack 3 R1-IB3-P9A to R3-IB1-P5A 5 meters
R1 IB3 to Rack 4 R1-IB3-P9B to R4-IB1-P6A 10 meters
R1 IB3 to Rack 5 R1-IB3-P10A to R5-IB1-P7A 10 meters
R1 IB3 to Rack 6 R1-IB3-P10B to R6-IB1-P8A 10 meters
R1 IB3 to Rack 7 R1-IB3-P11A to R7-IB1-P9A 10 meters
R1 IB3 to Rack 8 R1-IB3-P11B to R8-IB1-P10A 10 meters
R1 IB2 within Rack 1 R1-IB2-P8A to R1-IB1-P3B 3 meters
R1 IB2 to Rack 2 R1-IB2-P8B to R2-IB1-P4B 5 meters
R1 IB2 to Rack 3 R1-IB2-P9A to R3-IB1-P5B 5 meters
R1 IB2 to Rack 4 R1-IB2-P9B to R4-IB1-P6B 10 meters
R1 IB2 to Rack 5 R1-IB2-P10A to R5-IB1-P7B 10 meters
R1 IB2 to Rack 6 R1-IB2-P10B to R6-IB1-P8B 10 meters
R1 IB2 to Rack 7 R1-IB2-P11A to R7-IB1-P9B 10 meters
R1 IB2 to Rack 8 R1-IB2-P11B to R8-IB1-P10B 10 meters

The following table shows the cable connections for the second spine switch (R2-IB1) when
cabling eight full racks together.

TABLE 115 Leaf Switch Connections for the Second Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R2 IB3 within Rack 2 R2-IB3-P8A to R2-IB1-P3A 3 meters
R2 IB3 to Rack 1 R2-IB3-P11B to R1-IB1-P10A 5 meters
R2 IB3 to Rack 3 R2-IB3-P8B to R3-IB1-P4A 5 meters
R2 IB3 to Rack 4 R2-IB3-P9A to R4-IB1-P5A 5 meters
R2 IB3 to Rack 5 R2-IB3-P9B to R5-IB1-P6A 10 meters
R2 IB3 to Rack 6 R2-IB3-P10A to R6-IB1-P7A 10 meters
R2 IB3 to Rack 7 R2-IB3-P10B to R7-IB1-P8A 10 meters
R2 IB3 to Rack 8 R2-IB3-P11A to R8-IB1-P9A 10 meters
R2 IB2 within Rack 2 R2-IB2-P8A to R2-IB1-P3B 3 meters
R2 IB2 to Rack 1 R2-IB2-P11B to R1-IB1-P10B 5 meters
R2 IB2 to Rack 3 R2-IB2-P8B to R3-IB1-P4B 5 meters
R2 IB2 to Rack 4 R2-IB2-P9A to R4-IB1-P5B 5 meters
R2 IB2 to Rack 5 R2-IB2-P9B to R5-IB1-P6B 10 meters
R2 IB2 to Rack 6 R2-IB2-P10A to R6-IB1-P7B 10 meters
R2 IB2 to Rack 7 R2-IB2-P10B to R7-IB1-P8B 10 meters
R2 IB2 to Rack 8 R2-IB2-P11A to R8-IB1-P9B 10 meters

The following table shows the cable connections for the third spine switch (R3-IB1) when
cabling eight full racks together.

TABLE 116 Leaf Switch Connections for the Third Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R3 IB3 within Rack 3 R3-IB3-P8A to R3-IB1-P3A 3 meters
R3 IB3 to Rack 1 R3-IB3-P11A to R1-IB1-P9A 5 meters
R3 IB3 to Rack 2 R3-IB3-P11B to R2-IB1-P10A 5 meters
R3 IB3 to Rack 4 R3-IB3-P8B to R4-IB1-P4A 5 meters
R3 IB3 to Rack 5 R3-IB3-P9A to R5-IB1-P5A 5 meters
R3 IB3 to Rack 6 R3-IB3-P9B to R6-IB1-P6A 5 meters
R3 IB3 to Rack 7 R3-IB3-P10A to R7-IB1-P7A 10 meters
R3 IB3 to Rack 8 R3-IB3-P10B to R8-IB1-P8A 10 meters
R3 IB2 within Rack 3 R3-IB2-P8A to R3-IB1-P3B 3 meters
R3 IB2 to Rack 1 R3-IB2-P11A to R1-IB1-P9B 5 meters
R3 IB2 to Rack 2 R3-IB2-P11B to R2-IB1-P10B 5 meters

R3 IB2 to Rack 4 R3-IB2-P8B to R4-IB1-P4B 5 meters
R3 IB2 to Rack 5 R3-IB2-P9A to R5-IB1-P5B 5 meters
R3 IB2 to Rack 6 R3-IB2-P9B to R6-IB1-P6B 5 meters
R3 IB2 to Rack 7 R3-IB2-P10A to R7-IB1-P7B 10 meters
R3 IB2 to Rack 8 R3-IB2-P10B to R8-IB1-P8B 10 meters

The following table shows the cable connections for the fourth spine switch (R4-IB1) when
cabling eight full racks together.

TABLE 117 Leaf Switch Connections for the Fourth Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R4 IB3 within Rack 4 R4-IB3-P8A to R4-IB1-P3A 3 meters
R4 IB3 to Rack 1 R4-IB3-P10B to R1-IB1-P8A 10 meters
R4 IB3 to Rack 2 R4-IB3-P11A to R2-IB1-P9A 5 meters
R4 IB3 to Rack 3 R4-IB3-P11B to R3-IB1-P10A 5 meters
R4 IB3 to Rack 5 R4-IB3-P8B to R5-IB1-P4A 5 meters
R4 IB3 to Rack 6 R4-IB3-P9A to R6-IB1-P5A 5 meters
R4 IB3 to Rack 7 R4-IB3-P9B to R7-IB1-P6A 10 meters
R4 IB3 to Rack 8 R4-IB3-P10A to R8-IB1-P7A 10 meters
R4 IB2 within Rack 4 R4-IB2-P8A to R4-IB1-P3B 3 meters
R4 IB2 to Rack 1 R4-IB2-P10B to R1-IB1-P8B 10 meters
R4 IB2 to Rack 2 R4-IB2-P11A to R2-IB1-P9B 5 meters
R4 IB2 to Rack 3 R4-IB2-P11B to R3-IB1-P10B 5 meters
R4 IB2 to Rack 5 R4-IB2-P8B to R5-IB1-P4B 5 meters
R4 IB2 to Rack 6 R4-IB2-P9A to R6-IB1-P5B 5 meters
R4 IB2 to Rack 7 R4-IB2-P9B to R7-IB1-P6B 10 meters
R4 IB2 to Rack 8 R4-IB2-P10A to R8-IB1-P7B 10 meters

The following table shows the cable connections for the fifth spine switch (R5-IB1) when
cabling eight full racks together.

TABLE 118 Leaf Switch Connections for the Fifth Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R5 IB3 within Rack 5 R5-IB3-P8A to R5-IB1-P3A 3 meters
R5 IB3 to Rack 1 R5-IB3-P10A to R1-IB1-P7A 10 meters
R5 IB3 to Rack 2 R5-IB3-P10B to R2-IB1-P8A 10 meters
R5 IB3 to Rack 3 R5-IB3-P11A to R3-IB1-P9A 5 meters

R5 IB3 to Rack 4 R5-IB3-P11B to R4-IB1-P10A 5 meters
R5 IB3 to Rack 6 R5-IB3-P8B to R6-IB1-P4A 5 meters
R5 IB3 to Rack 7 R5-IB3-P9A to R7-IB1-P5A 5 meters
R5 IB3 to Rack 8 R5-IB3-P9B to R8-IB1-P6A 10 meters
R5 IB2 within Rack 5 R5-IB2-P8A to R5-IB1-P3B 3 meters
R5 IB2 to Rack 1 R5-IB2-P10A to R1-IB1-P7B 10 meters
R5 IB2 to Rack 2 R5-IB2-P10B to R2-IB1-P8B 10 meters
R5 IB2 to Rack 3 R5-IB2-P11A to R3-IB1-P9B 5 meters
R5 IB2 to Rack 4 R5-IB2-P11B to R4-IB1-P10B 5 meters
R5 IB2 to Rack 6 R5-IB2-P8B to R6-IB1-P4B 5 meters
R5 IB2 to Rack 7 R5-IB2-P9A to R7-IB1-P5B 5 meters
R5 IB2 to Rack 8 R5-IB2-P9B to R8-IB1-P6B 10 meters

The following table shows the cable connections for the sixth spine switch (R6-IB1) when
cabling eight full racks together.

TABLE 119 Leaf Switch Connections for the Sixth Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R6 IB3 within Rack 6 R6-IB3-P8A to R6-IB1-P3A 3 meters
R6 IB3 to Rack 1 R6-IB3-P9B to R1-IB1-P6A 10 meters
R6 IB3 to Rack 2 R6-IB3-P10A to R2-IB1-P7A 10 meters
R6 IB3 to Rack 3 R6-IB3-P10B to R3-IB1-P8A 5 meters
R6 IB3 to Rack 4 R6-IB3-P11A to R4-IB1-P9A 5 meters
R6 IB3 to Rack 5 R6-IB3-P11B to R5-IB1-P10A 5 meters
R6 IB3 to Rack 7 R6-IB3-P8B to R7-IB1-P4A 5 meters
R6 IB3 to Rack 8 R6-IB3-P9A to R8-IB1-P5A 5 meters
R6 IB2 within Rack 6 R6-IB2-P8A to R6-IB1-P3B 3 meters
R6 IB2 to Rack 1 R6-IB2-P9B to R1-IB1-P6B 10 meters
R6 IB2 to Rack 2 R6-IB2-P10A to R2-IB1-P7B 10 meters
R6 IB2 to Rack 3 R6-IB2-P10B to R3-IB1-P8B 5 meters
R6 IB2 to Rack 4 R6-IB2-P11A to R4-IB1-P9B 5 meters
R6 IB2 to Rack 5 R6-IB2-P11B to R5-IB1-P10B 5 meters
R6 IB2 to Rack 7 R6-IB2-P8B to R7-IB1-P4B 5 meters
R6 IB2 to Rack 8 R6-IB2-P9A to R8-IB1-P5B 5 meters

The following table shows the cable connections for the seventh spine switch (R7-IB1) when
cabling eight full racks together.

TABLE 120 Leaf Switch Connections for the Seventh Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R7 IB3 within Rack 7 R7-IB3-P8A to R7-IB1-P3A 3 meters
R7 IB3 to Rack 1 R7-IB3-P9A to R1-IB1-P5A 10 meters
R7 IB3 to Rack 2 R7-IB3-P9B to R2-IB1-P6A 10 meters
R7 IB3 to Rack 3 R7-IB3-P10A to R3-IB1-P7A 10 meters
R7 IB3 to Rack 4 R7-IB3-P10B to R4-IB1-P8A 10 meters
R7 IB3 to Rack 5 R7-IB3-P11A to R5-IB1-P9A 5 meters
R7 IB3 to Rack 6 R7-IB3-P11B to R6-IB1-P10A 5 meters
R7 IB3 to Rack 8 R7-IB3-P8B to R8-IB1-P4A 5 meters
R7 IB2 within Rack 7 R7-IB2-P8A to R7-IB1-P3B 3 meters
R7 IB2 to Rack 1 R7-IB2-P9A to R1-IB1-P5B 10 meters
R7 IB2 to Rack 2 R7-IB2-P9B to R2-IB1-P6B 10 meters
R7 IB2 to Rack 3 R7-IB2-P10A to R3-IB1-P7B 10 meters
R7 IB2 to Rack 4 R7-IB2-P10B to R4-IB1-P8B 10 meters
R7 IB2 to Rack 5 R7-IB2-P11A to R5-IB1-P9B 5 meters
R7 IB2 to Rack 6 R7-IB2-P11B to R6-IB1-P10B 5 meters
R7 IB2 to Rack 8 R7-IB2-P8B to R8-IB1-P4B 5 meters

The following table shows the cable connections for the eighth spine switch (R8-IB1) when
cabling eight full racks together.

TABLE 121 Leaf Switch Connections for the Eighth Rack in an Eight-Rack System
Leaf Switch Connection Cable Length
R8 IB3 within Rack 8 R8-IB3-P8A to R8-IB1-P3A 3 meters
R8 IB3 to Rack 1 R8-IB3-P8B to R1-IB1-P4A 10 meters
R8 IB3 to Rack 2 R8-IB3-P9A to R2-IB1-P5A 10 meters
R8 IB3 to Rack 3 R8-IB3-P9B to R3-IB1-P6A 10 meters
R8 IB3 to Rack 4 R8-IB3-P10A to R4-IB1-P7A 10 meters
R8 IB3 to Rack 5 R8-IB3-P10B to R5-IB1-P8A 5 meters
R8 IB3 to Rack 6 R8-IB3-P11A to R6-IB1-P9A 5 meters
R8 IB3 to Rack 7 R8-IB3-P11B to R7-IB1-P10A 5 meters
R8 IB2 within Rack 8 R8-IB2-P8A to R8-IB1-P3B 3 meters
R8 IB2 to Rack 1 R8-IB2-P8B to R1-IB1-P4B 10 meters
R8 IB2 to Rack 2 R8-IB2-P9A to R2-IB1-P5B 10 meters
R8 IB2 to Rack 3 R8-IB2-P9B to R3-IB1-P6B 10 meters
R8 IB2 to Rack 4 R8-IB2-P10A to R4-IB1-P7B 10 meters
R8 IB2 to Rack 5 R8-IB2-P10B to R5-IB1-P8B 5 meters

R8 IB2 to Rack 6 R8-IB2-P11A to R6-IB1-P9B 5 meters
R8 IB2 to Rack 7 R8-IB2-P11B to R7-IB1-P10B 5 meters

Nine-Rack Cabling
The following table shows the cable connections for the switches when cabling nine racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 122 Leaf Switch Connection for the Ninth Rack in the System

Leaf Switch Connection


R9 IB3 to Rack 1 R9-IB3-P8B to R1-IB1-P11A
R9 IB3 to Rack 2 R9-IB3-P9A to R2-IB1-P11A
R9 IB3 to Rack 3 R9-IB3-P9B to R3-IB1-P11A
R9 IB3 to Rack 4 R9-IB3-P10A to R4-IB1-P11A
R9 IB3 to Rack 5 R9-IB3-P10B to R5-IB1-P11A
R9 IB3 to Rack 6 R9-IB3-P11A to R6-IB1-P11A
R9 IB3 to Rack 7 R9-IB3-P11B to R7-IB1-P11A
R9 IB3 to Rack 8 R9-IB3-P8A to R8-IB1-P11A
R9 IB2 to Rack 1 R9-IB2-P8B to R1-IB1-P11B
R9 IB2 to Rack 2 R9-IB2-P9A to R2-IB1-P11B
R9 IB2 to Rack 3 R9-IB2-P9B to R3-IB1-P11B
R9 IB2 to Rack 4 R9-IB2-P10A to R4-IB1-P11B
R9 IB2 to Rack 5 R9-IB2-P10B to R5-IB1-P11B
R9 IB2 to Rack 6 R9-IB2-P11A to R6-IB1-P11B
R9 IB2 to Rack 7 R9-IB2-P11B to R7-IB1-P11B
R9 IB2 to Rack 8 R9-IB2-P8A to R8-IB1-P11B
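
The pattern in TABLE 122, and in the tables that follow for racks 10 through 17, is regular:
each additional rack runs one link from each of its two leaf switches to every spine switch in
racks 1 through 8, stepping through spine ports 11A/11B, 12A/12B, and so on, and wrapping
around to 0A/0B and 1A/1B for the sixteenth and seventeenth racks. The following Python sketch
illustrates that numbering only; the function is an assumption, not an Oracle tool.

# Hedged sketch of the spine-side port numbering used by racks 9 through 17,
# as shown in TABLE 122 through TABLE 130.
def spine_ports_for_rack(rack_number):
    slot = (rack_number + 2) % 18       # rack 9 -> 11 ... rack 15 -> 17, rack 16 -> 0, rack 17 -> 1
    return f"P{slot}A", f"P{slot}B"     # the IB3 leaf uses the A port, the IB2 leaf the B port

for rack in (9, 10, 15, 16, 17):
    port_a, port_b = spine_ports_for_rack(rack)
    print(f"Rack {rack}: leaf IB3 -> spine port {port_a}, leaf IB2 -> spine port {port_b}")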

Ten-Rack Cabling
The following table shows the cable connections for the switches when cabling ten racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 123 Leaf Switch Connections for the Tenth Rack in the System
Leaf Switch Connection
R10 IB3 to Rack 1 R10-IB3-P8B to R1-IB1-P12A
R10 IB3 to Rack 2 R10-IB3-P9A to R2-IB1-P12A
R10 IB3 to Rack 3 R10-IB3-P9B to R3-IB1-P12A
R10 IB3 to Rack 4 R10-IB3-P10A to R4-IB1-P12A
R10 IB3 to Rack 5 R10-IB3-P10B to R5-IB1-P12A
R10 IB3 to Rack 6 R10-IB3-P12A to R6-IB1-P12A
R10 IB3 to Rack 7 R10-IB3-P12B to R7-IB1-P12A
R10 IB3 to Rack 8 R10-IB3-P8A to R8-IB1-P12A
R10 IB2 to Rack 1 R10-IB2-P8B to R1-IB1-P12B
R10 IB2 to Rack 2 R10-IB2-P9A to R2-IB1-P12B
R10 IB2 to Rack 3 R10-IB2-P9B to R3-IB1-P12B
R10 IB2 to Rack 4 R10-IB2-P10A to R4-IB1-P12B
R10 IB2 to Rack 5 R10-IB2-P10B to R5-IB1-P12B
R10 IB2 to Rack 6 R10-IB2-P12A to R6-IB1-P12B
R10 IB2 to Rack 7 R10-IB2-P12B to R7-IB1-P12B
R10 IB2 to Rack 8 R10-IB2-P8A to R8-IB1-P12B

Eleven-Rack Cabling
The following table shows the cable connections for the switches when cabling eleven racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 124 Leaf Switch Connections for the Eleventh Rack in the System
Leaf Switch Connection
R11 IB3 to Rack 1 R11-IB3-P8B to R1-IB1-P13A
R11 IB3 to Rack 2 R11-IB3-P9A to R2-IB1-P13A
R11 IB3 to Rack 3 R11-IB3-P9B to R3-IB1-P13A
R11 IB3 to Rack 4 R11-IB3-P10A to R4-IB1-P13A

R11 IB3 to Rack 5 R11-IB3-P10B to R5-IB1-P13A
R11 IB3 to Rack 6 R11-IB3-P13A to R6-IB1-P13A
R11 IB3 to Rack 7 R11-IB3-P13B to R7-IB1-P13A
R11 IB3 to Rack 8 R11-IB3-P8A to R8-IB1-P13A
R11 IB2 to Rack 1 R11-IB2-P8B to R1-IB1-P13B
R11 IB2 to Rack 2 R11-IB2-P9A to R2-IB1-P13B
R11 IB2 to Rack 3 R11-IB2-P9B to R3-IB1-P13B
R11 IB2 to Rack 4 R11-IB2-P10A to R4-IB1-P13B
R11 IB2 to Rack 5 R11-IB2-P10B to R5-IB1-P13B
R11 IB2 to Rack 6 R11-IB2-P13A to R6-IB1-P13B
R11 IB2 to Rack 7 R11-IB2-P13B to R7-IB1-P13B
R11 IB2 to Rack 8 R11-IB2-P8A to R8-IB1-P13B

Twelve-Rack Cabling
The following table shows the cable connections for the switches when cabling twelve racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 125 Leaf Switch Connections for the Twelfth Rack in the System
Leaf Switch Connection
R12 IB3 to Rack 1 R12-IB3-P8B to R1-IB1-P14A
R12 IB3 to Rack 2 R12-IB3-P9A to R2-IB1-P14A
R12 IB3 to Rack 3 R12-IB3-P9B to R3-IB1-P14A
R12 IB3 to Rack 4 R12-IB3-P10A to R4-IB1-P14A
R12 IB3 to Rack 5 R12-IB3-P10B to R5-IB1-P14A
R12 IB3 to Rack 6 R12-IB3-P14A to R6-IB1-P14A
R12 IB3 to Rack 7 R12-IB3-P14B to R7-IB1-P14A
R12 IB3 to Rack 8 R12-IB3-P8A to R8-IB1-P14A
R12 IB2 to Rack 1 R12-IB2-P8B to R1-IB1-P14B
R12 IB2 to Rack 2 R12-IB2-P9A to R2-IB1-P14B
R12 IB2 to Rack 3 R12-IB2-P9B to R3-IB1-P14B
R12 IB2 to Rack 4 R12-IB2-P10A to R4-IB1-P14B
R12 IB2 to Rack 5 R12-IB2-P10B to R5-IB1-P14B

R12 IB2 to Rack 6 R12-IB2-P14A to R6-IB1-P14B
R12 IB2 to Rack 7 R12-IB2-P14B to R7-IB1-P14B
R12 IB2 to Rack 8 R12-IB2-P8A to R8-IB1-P14B

Thirteen-Rack Cabling
The following table shows the cable connections for the switches when cabling thirteen racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 126 Leaf Switch Connections for the Thirteenth Rack in the System
Leaf Switch Connection
R13 IB3 to Rack 1 R13-IB3-P8B to R1-IB1-P15A
R13 IB3 to Rack 2 R13-IB3-P9A to R2-IB1-P15A
R13 IB3 to Rack 3 R13-IB3-P9B to R3-IB1-P15A
R13 IB3 to Rack 4 R13-IB3-P10A to R4-IB1-P15A
R13 IB3 to Rack 5 R13-IB3-P10B to R5-IB1-P15A
R13 IB3 to Rack 6 R13-IB3-P15A to R6-IB1-P15A
R13 IB3 to Rack 7 R13-IB3-P15B to R7-IB1-P15A
R13 IB3 to Rack 8 R13-IB3-P8A to R8-IB1-P15A
R13 IB2 to Rack 1 R13-IB2-P8B to R1-IB1-P15B
R13 IB2 to Rack 2 R13-IB2-P9A to R2-IB1-P15B
R13 IB2 to Rack 3 R13-IB2-P9B to R3-IB1-P15B
R13 IB2 to Rack 4 R13-IB2-P10A to R4-IB1-P15B
R13 IB2 to Rack 5 R13-IB2-P10B to R5-IB1-P15B
R13 IB2 to Rack 6 R13-IB2-P15A to R6-IB1-P15B
R13 IB2 to Rack 7 R13-IB2-P15B to R7-IB1-P15B
R13 IB2 to Rack 8 R13-IB2-P8A to R8-IB1-P15B

Fourteen-Rack Cabling
The following table shows the cable connections for the switches when cabling fourteen racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 127 Leaf Switch Connections for the Fourteenth Rack in the System
Leaf Switch Connection
R14 IB3 to Rack 1 R14-IB3-P8B to R1-IB1-P16A
R14 IB3 to Rack 2 R14-IB3-P9A to R2-IB1-P16A
R14 IB3 to Rack 3 R14-IB3-P9B to R3-IB1-P16A
R14 IB3 to Rack 4 R14-IB3-P10A to R4-IB1-P16A
R14 IB3 to Rack 5 R14-IB3-P10B to R5-IB1-P16A
R14 IB3 to Rack 6 R14-IB3-P16A to R6-IB1-P16A
R14 IB3 to Rack 7 R14-IB3-P16B to R7-IB1-P16A
R14 IB3 to Rack 8 R14-IB3-P8A to R8-IB1-P16A
R14 IB2 to Rack 1 R14-IB2-P8B to R1-IB1-P16B
R14 IB2 to Rack 2 R14-IB2-P9A to R2-IB1-P16B
R14 IB2 to Rack 3 R14-IB2-P9B to R3-IB1-P16B
R14 IB2 to Rack 4 R14-IB2-P10A to R4-IB1-P16B
R14 IB2 to Rack 5 R14-IB2-P10B to R5-IB1-P16B
R14 IB2 to Rack 6 R14-IB2-P16A to R6-IB1-P16B
R14 IB2 to Rack 7 R14-IB2-P16B to R7-IB1-P16B
R14 IB2 to Rack 8 R14-IB2-P8A to R8-IB1-P16B

Fifteen-Rack Cabling
The following table shows the cable connections for the switches when cabling fifteen racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 128 Leaf Switch Connections for the Fifteenth Rack in the System
Leaf Switch          Connection
R15 IB3 to Rack 1 R15-IB3-P8B to R1-IB1-P17A
R15 IB3 to Rack 2 R15-IB3-P9A to R2-IB1-P17A
R15 IB3 to Rack 3 R15-IB3-P9B to R3-IB1-P17A
R15 IB3 to Rack 4 R15-IB3-P10A to R4-IB1-P17A
R15 IB3 to Rack 5 R15-IB3-P10B to R5-IB1-P17A
R15 IB3 to Rack 6 R15-IB3-P17A to R6-IB1-P17A
R15 IB3 to Rack 7 R15-IB3-P17B to R7-IB1-P17A
R15 IB3 to Rack 8 R15-IB3-P8A to R8-IB1-P17A
R15 IB2 to Rack 1 R15-IB2-P8B to R1-IB1-P17B
R15 IB2 to Rack 2 R15-IB2-P9A to R2-IB1-P17B
R15 IB2 to Rack 3 R15-IB2-P9B to R3-IB1-P17B
R15 IB2 to Rack 4 R15-IB2-P10A to R4-IB1-P17B
R15 IB2 to Rack 5 R15-IB2-P10B to R5-IB1-P17B
R15 IB2 to Rack 6 R15-IB2-P17A to R6-IB1-P17B
R15 IB2 to Rack 7 R15-IB2-P17B to R7-IB1-P17B
R15 IB2 to Rack 8 R15-IB2-P8A to R8-IB1-P17B

Sixteen-Rack Cabling
The following table shows the cable connections for the switches when cabling sixteen racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 129 Leaf Switch Connections for the Sixteenth Rack in the System
Leaf Switch          Connection
R16 IB3 to Rack 1 R16-IB3-P8B to R1-IB1-P0A
R16 IB3 to Rack 2 R16-IB3-P9A to R2-IB1-P0A
R16 IB3 to Rack 3 R16-IB3-P9B to R3-IB1-P0A
R16 IB3 to Rack 4 R16-IB3-P10A to R4-IB1-P0A
R16 IB3 to Rack 5 R16-IB3-P10B to R5-IB1-P0A
R16 IB3 to Rack 6 R16-IB3-P0A to R6-IB1-P0A
R16 IB3 to Rack 7 R16-IB3-P0B to R7-IB1-P0A
R16 IB3 to Rack 8 R16-IB3-P8A to R8-IB1-P0A
R16 IB2 to Rack 1 R16-IB2-P8B to R1-IB1-P0B
R16 IB2 to Rack 2 R16-IB2-P9A to R2-IB1-P0B
R16 IB2 to Rack 3 R16-IB2-P9B to R3-IB1-P0B
R16 IB2 to Rack 4 R16-IB2-P10A to R4-IB1-P0B
R16 IB2 to Rack 5 R16-IB2-P10B to R5-IB1-P0B
R16 IB2 to Rack 6 R16-IB2-P0A to R6-IB1-P0B
R16 IB2 to Rack 7 R16-IB2-P0B to R7-IB1-P0B
R16 IB2 to Rack 8 R16-IB2-P8A to R8-IB1-P0B

Seventeen-Rack Cabling
The following table shows the cable connections for the switches when cabling seventeen racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 130 Leaf Switch Connections for the Seventeenth Rack in the System
Leaf Switch          Connection
R17 IB3 to Rack 1 R17-IB3-P8B to R1-IB1-P1A
R17 IB3 to Rack 2 R17-IB3-P9A to R2-IB1-P1A
R17 IB3 to Rack 3 R17-IB3-P9B to R3-IB1-P1A
R17 IB3 to Rack 4 R17-IB3-P10A to R4-IB1-P1A
R17 IB3 to Rack 5 R17-IB3-P10B to R5-IB1-P1A
R17 IB3 to Rack 6 R17-IB3-P1A to R6-IB1-P1A
R17 IB3 to Rack 7 R17-IB3-P1B to R7-IB1-P1A
R17 IB3 to Rack 8 R17-IB3-P8A to R8-IB1-P1A
R17 IB2 to Rack 1 R17-IB2-P8B to R1-IB1-P1B
R17 IB2 to Rack 2 R17-IB2-P9A to R2-IB1-P1B
R17 IB2 to Rack 3 R17-IB2-P9B to R3-IB1-P1B
R17 IB2 to Rack 4 R17-IB2-P10A to R4-IB1-P1B
R17 IB2 to Rack 5 R17-IB2-P10B to R5-IB1-P1B
R17 IB2 to Rack 6 R17-IB2-P1A to R6-IB1-P1B
R17 IB2 to Rack 7 R17-IB2-P1B to R7-IB1-P1B
R17 IB2 to Rack 8 R17-IB2-P8A to R8-IB1-P1B

Eighteen-Rack Cabling
The following table shows the cable connections for the switches when cabling eighteen racks
together.

Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to
100 meters is supported.

TABLE 131 Leaf Switch Connections for the Eighteenth Rack in the System
Leaf Switch          Connection
R18 IB3 to Rack 1 R18-IB3-P8B to R1-IB1-P2A
R18 IB3 to Rack 2 R18-IB3-P9A to R2-IB1-P2A
R18 IB3 to Rack 3 R18-IB3-P9B to R3-IB1-P2A
R18 IB3 to Rack 4 R18-IB3-P10A to R4-IB1-P2A
R18 IB3 to Rack 5 R18-IB3-P10B to R5-IB1-P2A
R18 IB3 to Rack 6 R18-IB3-P2A to R6-IB1-P2A
R18 IB3 to Rack 7 R18-IB3-P2B to R7-IB1-P2A
R18 IB3 to Rack 8 R18-IB3-P8A to R8-IB1-P2A
R18 IB2 to Rack 1 R18-IB2-P8B to R1-IB1-P2B
R18 IB2 to Rack 2 R18-IB2-P9A to R2-IB1-P2B
R18 IB2 to Rack 3 R18-IB2-P9B to R3-IB1-P2B
R18 IB2 to Rack 4 R18-IB2-P10A to R4-IB1-P2B
R18 IB2 to Rack 5 R18-IB2-P10B to R5-IB1-P2B
R18 IB2 to Rack 6 R18-IB2-P2A to R6-IB1-P2B
R18 IB2 to Rack 7 R18-IB2-P2B to R7-IB1-P2B
R18 IB2 to Rack 8 R18-IB2-P8A to R8-IB1-P2B
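
When expansion racks are added over time, every cable listed in these tables must land on a port that is still free on R1-IB1 through R8-IB1. The Python sketch below is one possible way to catch copy errors in a hand-maintained cabling worksheet before anyone touches a cable; it is not an Oracle utility, and the record format simply mirrors the Connection column of these tables.

from collections import Counter

def ports_in_use(connections):
    """Count how often each switch port appears in a list of 'X to Y' records."""
    counts = Counter()
    for entry in connections:
        local, _, remote = entry.partition(" to ")
        counts[local.strip()] += 1
        counts[remote.strip()] += 1
    return counts

# Hypothetical worksheet entries, copied from the Connection column above.
worksheet = [
    "R18-IB3-P8B to R1-IB1-P2A",
    "R18-IB3-P9A to R2-IB1-P2A",
    "R18-IB2-P8B to R1-IB1-P2B",
    "R17-IB2-P8B to R1-IB1-P2B",  # deliberate mistake: this port belongs to rack 18
]
duplicates = [port for port, count in ports_in_use(worksheet).items() if count > 1]
print("ports cabled more than once:", duplicates or "none")

With this sample worksheet, the check reports R1-IB1-P2B, because the last record reuses a port that TABLE 131 assigns to rack 18 (TABLE 130 routes R17-IB2-P8B to R1-IB1-P1B instead).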



Index

A
alert messages
    battery learn cycle, 154
Application Domain
    clustering software, 85
    InfiniBand network, 61
ASR, 205
    configure SNMP traps, 208
    configure SPARC T5-8 servers ILOM, 213
    configure SPARC T5-8 servers Solaris OS, 215
    configure storage appliance, 210
    enable HTTPS on ASR Manager, 216
    register SPARC T5-8 domains, 217
    verify assets, 219
Auto Service Request, 205

B
batteries
    learn cycle, 154
boot environments, managing, 147

C
cabling tables for the Oracle SuperCluster T5-8, 245
cautions, 141
changing ssctuner properties, 163
Cisco Catalyst 4948 Ethernet management switch
    description, 26
client access connections
    SPARC T5-8 servers, 35
cluster connections
    ZFS storage appliance, 44
clustering software, 84
core granularity, 171, 186
CPU and memory
    changing allocations, 182, 186
    configuring, 170
    displaying configurations, 178
    parking, 190
    planning allocations, 175
    removing a resource configuration, 202
    reverting to a previous configuration, 201
    supported domain configurations, 174
    tool overview, 171

D
Database Domain
    clustering software, 85
    InfiniBand network, 60
dedicated domains, 174
DISM
    disabling, 152
    restrictions, 152
displaying CPU and memory allocations, 178
dynamic intimate shared memory, 151

E
emergency power off, 145
enabling ssctuner, 169
Exadata Storage Servers
    description, 24
    host management connections, 39
    InfiniBand connections, 36
    Oracle ILOM connections, 38
    physical connections, 36
    powering off, 143
    shutting down, 143

G
ground cable, attaching, 124

H
host management connections
    Exadata Storage Servers, 39
    SPARC T5-8 servers, 35
    ZFS storage appliance, 42
humidity requirements, 108

I
InfiniBand connections
    Exadata Storage Servers, 36
    SPARC T5-8 servers, 32
    ZFS storage appliance, 39
installation workflow, 117
installing ssctuner, 168
IP addresses
    default for Oracle SuperCluster T5-8, 90

L
laptop, connecting, 134
ldm command, 180
leaf switches, 261, 315, 316
leveling feet, adjusting, 125
logical domains
    host management network, 59
    InfiniBand network, 59

M
maintenance access requirements, 96
managing boot environments, 147
mixed domains, 174
monitoring ssctuner activity, 160
moving the Oracle SuperCluster T5-8, 122

N
network diagram for Oracle SuperCluster T5-8, 88
network requirements, 86, 297

O
OCM, 222
    install, 222
Oracle Configuration Manager, 221, 222
Oracle Exadata Storage HC Expansion Rack
    components, 287
Oracle ILOM connections
    Exadata Storage Servers, 38
    SPARC T5-8 servers, 35
    ZFS storage appliance, 41
Oracle SuperCluster T5-8
    cabling tables, 245
    default IP addresses, 90
    description, 15
    hardware components, 19
    monitoring using Auto Service Request, 205
    monitoring using Oracle Configuration Manager, 221
    moving, 122
    network diagram, 88
    physical connections, 26
    power distribution units
        description, 26
    understanding components, 23
osc-setcoremem command
    core granularity, 186
    displaying resource configuration, 178
    for configuring resources, 170
    log files, 196
    overview, 171
    parking resources, 190
    socket granularity, 182

P
parking cores and memory, 190
PCIe slots, SPARC T5-8 servers, 27
physical connections
    Exadata Storage Servers, 36
    Oracle SuperCluster T5-8, 26
    power distribution units, 45
    SPARC T5-8 servers, 26
    ZFS storage appliance, 39
planning CPU and memory allocations, 175
power cords, connecting, 126
power distribution units
    physical connections, 45
power switches, 128
powering off the system, 142
    gracefully, 142
powering on the system, 145

R
raised floor, preparing, 97
receiving requirements, 111
requirements
    humidity, 108
    maintenance access, 96
    receiving, 111
    space, 96
    temperature, 108
    unpacking, 111
Root Domains, 174

S
SAS connections, ZFS storage appliance, 42
setcoremem deprecated command, 170
setting
    battery learn cycle, 154
shutting down
    domains, 144
    storage servers, 143
socket granularity, 171, 182
SP configuration files, 199
space requirements, 96
SPARC T5-8 servers
    client access connections, 35
    description, 24
    host management connections, 35
    InfiniBand connections, 32
    Oracle ILOM connections, 35
    PCIe slots, 27
    physical connections, 26
spares kit, 16
spine switches, 261, 315, 316
ssctuner command
    enabling, 169
    installing, 168
    log files, 161
    monitoring, 160
    overview, 159
    properties, 163
Sun Datacenter InfiniBand Switch 36 switches
    description, 25
supported domain configurations, 174

T
temperature requirements, 108
tools need for installation, 119
tuning the system, 158

U
unpacking requirements, 111

V
viewing ssctuner log files, 161

W
warnings, 141

Z
ZFS storage appliance
    cluster connections, 44
    description, 25
    host management connections, 42
    InfiniBand connections, 39
    Oracle ILOM connections, 41
    physical connections, 39
    SAS connections, 42
