
Solutions Guide

Database Consolidation with
Oracle 12c Multitenancy
James Morle
Scale Abilities, Ltd.

Table of contents
Introduction
Testing scenario and architecture
Introduction to testing
Proof Point – move a PDB between CDBs
Proof Point – relocate a datafile to an alternate diskgroup
Proof Point – create a clone from PDB
Conclusion
For more information

Introduction

This whitepaper is a study of the new Multitenant features of Oracle Database 12c and how a shared disk architecture using Fibre Channel infrastructure can be used to facilitate dramatically simplified database consolidation compared to previous releases. The scenario chosen for this testing was one perceived to become common for Database Administrators (DBAs) involved in consolidating many databases into a single shared compute environment, frequently referred to as a Private Cloud. This is an attractive proposition for corporations with tens, hundreds or even thousands of databases, all important for the operation of the business. The consolidation of these databases into a single managed entity dramatically reduces management costs, improves quality of service and - apart from all that - corrals all those pesky databases into a contained area!

In this paper we will explore the new multitenant functionality by building a test database cluster, within which we relocate databases between nodes, instances and storage tiers. Although shared disk storage may be achieved through other means, this paper focuses on the proven combination of Fibre Channel and Oracle's Automatic Storage Management functionality. All testing performed for this whitepaper was carried out using the latest Gen 5 (16GFC) Fibre Channel components from Emulex, which ensures that maximum bandwidth is available in the Storage Area Network for any operations that require data copying, and provides low latency access for all other database I/O operations. Access to the high bandwidth throughput provided by Gen 5 parts will become increasingly important for data mobility in consolidated cloud environments such as the one demonstrated here.

Disclosure: Scale Abilities was commissioned commercially by Emulex Corporation to conduct this study. To ensure impartial commentary under this arrangement, Scale Abilities was awarded sole editorial control and Emulex granted sole veto rights for publication.

Testing scenario and architecture

The Multitenant Option of the Oracle 12c Database is implemented through the concept of Pluggable Databases (PDBs). A PDB is essentially the same as the database we always used to have pre-12c, but with all the generic data dictionary information removed. These PDBs cannot execute by themselves; they need to be plugged into a Container Database (CDB), which contains all the data dictionary information that is absent from the PDB. A CDB can host multiple PDBs, and so allows a single set of data dictionary information, background processes and memory allocation to be shared across multiple PDBs. The new functionality in Oracle 12c makes these operations extremely straightforward and provides a compelling case for consolidation using a shared disk infrastructure.
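For illustration, creating a brand-new PDB inside a CDB is a single SQL statement. The following is a minimal sketch only; the PDB name and admin credentials are hypothetical and are not part of the test environment described in this paper:

SQL> -- Run from the root container (CDB$ROOT)
SQL> create pluggable database demo_pdb admin user demo_admin identified by demo_password;

Pluggable database created.

SQL> alter pluggable database demo_pdb open;

Pluggable database altered.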

The logical architecture chosen was that of a shared cluster of compute nodes, managed using Oracle Grid Infrastructure 12c, with a variety of different storage devices and tiers available to those nodes. The test scenario was implemented as a five-node cluster, with a Gen 5 (16GFC) Fibre Channel network connecting all the storage to all hosts. This architecture is representative of an end-user cluster, where different CDBs are used to host databases of different service levels. Real end-user clusters could have considerably higher node counts depending on the amount of consolidation required and the desired maximum cluster size.

The Oracle Database was then tasked to provide two Container Databases (CDBs), which were constrained to discrete subsets of the cluster using the server pool functionality within the Oracle Clusterware. Two CDBs were defined: CDB1 was nominated as the container for low-criticality workloads, and CDB2 as the container for mission-critical workloads, potentially with different memory footprints on their respective server nodes.

Figure 1. Architecture Implementation
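The server pool split described above can be expressed in a few srvctl commands. The following is a sketch of how the two pools used in this paper might be defined and associated with the CDBs; the exact commands were not captured in the test logs:

[oracle@Oracle1 ~]$ # Define the two pools and pin them to specific hosts
[oracle@Oracle1 ~]$ srvctl add srvpool -serverpool lesscritical -min 0 -max -1 -servers "oracle1,oracle2,oracle3"
[oracle@Oracle1 ~]$ srvctl add srvpool -serverpool missioncritical -min 0 -max -1 -servers "oracle4,oracle5"
[oracle@Oracle1 ~]$ # Place each policy-managed CDB into its pool
[oracle@Oracle1 ~]$ srvctl modify database -db cdb1 -serverpool lesscritical
[oracle@Oracle1 ~]$ srvctl modify database -db cdb2 -serverpool missioncritical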

From a storage perspective, three different tiers of storage were defined to reflect the common reality of multiple classes of storage being present in the data center. These were defined as "Big Storage Array," to represent a top-tier traditional storage array; "Small Storage Array," to represent a mid-tier traditional storage array; and "Fast Storage Array," to represent the growing reality of solid-state storage. All the storage is shared across all cluster nodes, making the relocation of databases to different servers a very straightforward process, especially with the Multitenant Option in Oracle 12c.

The inherent shared architecture of the Multitenant Option makes the consolidation of databases much simpler than in previous releases. Given that all the data is available to all of the cluster nodes, a migration of PDBs between CDBs only requires a logical 'unplug' from the source CDB and a 'plug' into the recipient CDB – no data copying is required, only shared access.

Figure 2. Migration of PDBs between CDBs

The following diagram shows how a similar implementation of two databases would have worked in the 11g release:

Figure 3. Consolidation with Oracle 11g

In this 11g example, each database would exist as an entity in its own right, with a full copy of the data dictionary, and its own set of background processes and memory allocations (i.e. the instance) on each node in the cluster. Each of those instances would more or less compete for resource, and had no direct control over the resource consumption by the other instances. Certain measures could be taken, such as Instance Caging and Preferred Nodes, but these were really just workarounds prior to the 12c solution. In the 12c Multitenant world, the resources of all the databases can be effectively resource-managed, and there is no overhead for hosting multiple databases other than the resource actually required to host those databases.

Accordingly, both PDB1 and PDB2 were kept relatively simple, with just a few large tables created in each. Each database was approximately 1TB in size to demonstrate a relatively large data transport requirement. The testing focus was primarily on functionality and architecture, rather than raw performance. Any performance observations noted in this whitepaper are included for completeness, with only limited investigation performed regarding areas of potential improvement.
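Returning to the resource-management point above: in 12c, the sharing rules between PDBs can be declared once, at the CDB level, using a CDB resource plan. The following is a minimal sketch of such a plan; the plan name, share counts and limits are hypothetical and were not part of this test:

SQL> -- Run from CDB$ROOT: give PDB1 three shares of CPU, cap PDB2 at 50% utilization
SQL> begin
  2    DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  3    DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
  4      plan => 'demo_cdb_plan', comment => 'illustrative consolidation plan');
  5    DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
  6      plan => 'demo_cdb_plan', pluggable_database => 'PDB1', shares => 3);
  7    DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
  8      plan => 'demo_cdb_plan', pluggable_database => 'PDB2',
  9      shares => 1, utilization_limit => 50);
 10    DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
 11    DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
 12  end;
 13  /

PL/SQL procedure successfully completed.

SQL> alter system set resource_manager_plan = 'demo_cdb_plan';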

Hardware configuration

This paper was written following the execution of a series of tests in Emulex's Technical Marketing Costa Mesa lab environment. Each server node was an HP DL360 Generation 8 server with 96GB of memory and two CPU sockets populated with 8-core Intel E5-2690 processors running at 2.9GHz. Each server was equipped with dual-port Emulex LPe16002B Gen 5 Fibre Channel HBAs, with each port connected to independent Fibre Channel zones via Brocade 6510 Gen 5 Fibre Channel switches.

By using Emulex OneCommand Manager we were able to update the Emulex LightPulse LPe16002B adapters online to the latest firmware version; updating the firmware was a simple process during the initial setup. In addition, by using Emulex OneCommand Manager we verified the Emulex Gen 5 Fibre Channel adapters' connectivity to the FC targets.

The diagram below shows a simplified physical hardware topology, excluding Ethernet networks.

Figure 4. Physical hardware topology

The five-node cluster comprised the following hostnames: oracle1.emulex.com, oracle2.emulex.com, oracle3.emulex.com, oracle4.emulex.com and oracle5.emulex.com. Each server was configured with the following software:

 Operating System - Oracle Linux 6.4, using the 2.6.39-400 UEK kernel
 Emulex HBA device driver - Release 8.3.7.18
 Oracle 12c - Release 12.1.0.1 (RDBMS and Grid Infrastructure)

The storage tier was provided by two physical hardware devices. The traditional storage was provided by an HP 3PAR StoreServ V10000 storage array, configured to present LUNs from two separate storage pools. The larger of these pools was used to provide the role of the "Big Storage Array" in our scenario, and the smaller one provided the role of the "Small Storage Array". The role of the "Fast Storage Array" was provided by a SanBlaze v7.3 target emulation DRAM-based device.

All storage was presented directly through to ASM via dm-multipath and ASMlib. Three ASM diskgroups were created - one for each of the three 'scenarios' of storage arrays in the configuration. These were named +BIGARRAY, +SMALLARRAY and +FASTARRAY to correspond with the underlying hardware.

The cluster was split into two server pools using the Clusterware server pool functionality. The low-criticality container database (CDB1) is hosted by a server pool named 'lesscritical,' which includes the oracle1, oracle2 and oracle3 hosts. The mission-critical container (CDB2) is hosted by a server pool named 'missioncritical,' which includes the oracle4 and oracle5 hosts.

[oracle@Oracle1 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: lesscritical
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: oracle1,oracle2,oracle3
Server pool name: missioncritical
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: oracle4,oracle5
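Returning to the ASM diskgroups for a moment: the creation statements were not captured for this paper, but a diskgroup of this kind might be created along the following lines. This is a sketch only - the discovery string and attributes are assumptions based on the ASMlib disk names that appear later in the alert log extracts:

SQL> -- From an ASM instance, connected as SYSASM
SQL> create diskgroup SMALLARRAY external redundancy
  2  disk 'ORCL:ELX_3PAR_100G_*'
  3  attribute 'compatible.asm' = '12.1', 'compatible.rdbms' = '12.1';

Diskgroup created.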

We can also view the CRS resources to ensure we have instances of our CDBs running on the nodes we expect:

[oracle@Oracle1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdb1.db
      1        ONLINE  ONLINE       oracle1                  Open,STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        ONLINE  ONLINE       oracle2                  Open,STABLE
      4        ONLINE  OFFLINE                               STABLE
      5        ONLINE  ONLINE       oracle3                  Open,STABLE
ora.cdb2.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       oracle4                  Open,STABLE
      3        ONLINE  OFFLINE                               STABLE
      4        ONLINE  ONLINE       oracle5                  Open,STABLE
      5        ONLINE  OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       oracle1                  STABLE
      2        ONLINE  ONLINE       oracle3                  STABLE
      3        ONLINE  ONLINE       oracle2                  STABLE

ASM is configured using the new FlexASM feature of Oracle 12c. This is a very useful new feature that moves away from the former dependency on having one ASM instance on every cluster node. From Oracle 12c, it is possible to host full instances of ASM on a subset of the cluster nodes, and for other cluster nodes to collect ASM metadata information via a local 'proxy instance,' which in turn connects to the full ASM instances. For this testing, the full ASM instances were local on nodes 1, 2 and 3, and remote via the proxy instance on nodes 4 and 5. Although FlexASM was used in this case, it is also possible to perform all of the testing in this whitepaper using the traditional ASM model of having one instance of ASM on each cluster node.
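A quick way to confirm the ASM mode in use is asmcmd. The following is a sketch; the output shown is the typical format reported by these commands, not a capture from the test cluster:

[oracle@Oracle1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled

[oracle@Oracle1 ~]$ srvctl status asm
ASM is running on oracle1,oracle2,oracle3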

Introduction to testing

In pre-12c releases, it was necessary to host multiple full instances of each database on each node required. For example, if one wanted to consolidate databases named DB1 and DB2 onto a shared cluster using pre-12c technology, it would be necessary to have instances for both DB1 and DB2 running on at least two nodes of the cluster, including any passive nodes if running in active/passive mode. Each of these instances would have to be configured to accommodate the full workload of its application on each node. This resulted in gross memory and kernel resource waste on each server. Resource management was even more restricted, as each instance had no visibility or control of the resource requirements of instances on the shared server. The results worsened when instances would failover to other nodes, as workloads would very often end up hosted on an inappropriate node, resulting in wasted resources and rudimentary resource management capabilities.

The concept of Container Databases (CDBs) and Pluggable Databases (PDBs) is new in Oracle 12c. A CDB is the container for one or more PDBs, and a PDB is the actual 'database' from the viewpoint of the application. All the PDBs that are plugged into a CDB share a single Oracle instance (per node, in the case of RAC), and can be resource-managed by a single set of controls within the CDB. PDBs can be plugged into and unplugged from CDBs using simple commands, and they can be cloned and moved to other CDBs. In addition to the Multitenant option, 12c also now supports fully online datafile moves. It is believed this functionality goes hand in hand with the management of PDBs, as it offers the ability to relocate databases onto different tiers of storage, and is fundamental to providing the 'missing link' in database consolidation for Oracle databases.

The philosophy behind this testing was to perform key operations on PDBs that would be frequently required in a true consolidated environment and to discover what the utility-value was from running large shared clusters of multiple CDBs.

Proof Point – move a PDB between CDBs

This test is a straightforward move of a PDB from one CDB to another within the same cluster. The starting point of the test is a single PDB, named PDB1, in the CDB named CDB1. The objective is to move PDB1 to a different CDB named CDB2. Currently CDB2 has another PDB hosted within it named PDB2. PDB1 is a little over 1TB in size, and would be a cumbersome database to move using previous releases of Oracle. Since full access is available to all storage on each node of the cluster, it should be a simple operation in this configuration.

First, Scale Abilities checks that there is connection to the root of the CDB and checks the status of PDB1:

SQL> SELECT SYS_CONTEXT ('USERENV', 'CON_NAME') FROM DUAL;

SYS_CONTEXT('USERENV','CON_NAME')
-----------------------------------------------------------------------------
CDB$ROOT

SQL> select con_id,name,open_mode from v$PDBs;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         3 PDB1                           READ WRITE

In order to unplug the PDB, it must be closed on all instances:

SQL> alter pluggable database pdb1 close immediate instances=all;

Pluggable database altered.

SQL> select con_id,inst_id,name,open_mode from gv$PDBs
  2* order by 1,2
SQL> /

    CON_ID    INST_ID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2          2 PDB$SEED                       READ ONLY
         2          4 PDB$SEED                       READ ONLY
         3          2 PDB1                           MOUNTED
         3          4 PDB1                           MOUNTED

Next, PDB1 is unplugged:

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1_unplug.xml';

Pluggable database altered.

This changes the state of the PDB in the current CDB (CDB1) to 'unplugged' and prevents all access to PDB1. It also produces an XML file in the specified location that contains all the metadata required to plug the database back into a CDB. One aspect of this operation that is not immediately obvious is that the XML file is created by the foreground process on the database server to which the user is connected; in a RAC environment (and in a client/server environment), care must therefore be taken to understand where the database connection is made, as this is not necessarily the same server which is running SQL*Plus. The resulting XML is the master description of the PDB and contains a variety of interesting information, such as a list of files, options installed, size, location, name, container ID, database ID, version and so on. It is used by the next CDB into which the PDB is plugged to determine the steps required.
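Before plugging the manifest into another CDB, its compatibility can be verified from the target using DBMS_PDB.CHECK_PLUG_COMPATIBILITY. This step was not part of the test run; the following is a sketch of how it might be executed against CDB2:

SQL> set serveroutput on
SQL> declare
  2    compatible boolean;
  3  begin
  4    compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
  5                    pdb_descr_file => '/home/oracle/pdb1_unplug.xml',
  6                    pdb_name       => 'PDB1');
  7    if compatible then
  8      dbms_output.put_line('PDB1 is compatible with this CDB');
  9    else
 10      dbms_output.put_line('Not compatible - see PDB_PLUG_IN_VIOLATIONS');
 11    end if;
 12  end;
 13  /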

Looking back at CDB1, it is evident that PDB1 is now unplugged and cannot be opened up again:

SQL> select pdb_id,pdb_name,guid,status from CDB_PDBS
SQL> /

    PDB_ID PDB_NAME   GUID                             STATUS
---------- ---------- -------------------------------- -------------
         3 PDB1       E5A299062A7FF04AE043975BC00A8790 UNPLUGGED
         2 PDB$SEED   E5A288645805E887E043975BC00AF91C NORMAL

SQL> alter pluggable database pdb1 open instances=all;
alter pluggable database pdb1 open instances=all
*
ERROR at line 1:
ORA-65107: Error encountered when processing the current task on instance:2
ORA-65086: cannot open/close the pluggable database

SQL> !oerr ora 65086
65086, 00000, "cannot open/close the pluggable database"
// *Cause: The pluggable database has been unplugged.
// *Action: The pluggable database can only be dropped.
//

SQL> drop pluggable database pdb1;

Pluggable database dropped.

Now PDB1 can be plugged into CDB2. Start by connecting to the CDB2 root using the TNS alias created by dbca:

[oracle@Oracle1 ~]$ sqlplus sys@cdb2 as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Sep 6 08:13:08 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select pdb_id,pdb_name,guid,status from CDB_PDBS;

    PDB_ID PDB_NAME   GUID                             STATUS
---------- ---------- -------------------------------- -------------
         3 PDB2       E5A3120148FF19CDE043975BC00AAF39 NORMAL
         2 PDB$SEED   E5A30153B3FB11AFE043975BC00AC732 NORMAL

SQL> create pluggable database pdb1 using '/home/oracle/pdb1_unplug.xml' nocopy;

Pluggable database created.

The 'nocopy' option was selected to instruct Oracle to leave the datafiles in their original locations, which were all in the +SMALLARRAY diskgroup. The default action is to make copies of the datafiles and leave the originals intact. Another option is 'move', which allows the files to be relocated to a new destination - very useful for migrating to another diskgroup and/or storage array.
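To summarize the three file-handling modes of the plug-in operation, the following sketch shows the variants side by side. Only the nocopy form was actually executed in this test; the copy and move forms are shown for illustration, reusing the same manifest and diskgroup names:

SQL> -- Default behavior: copy the datafiles, leaving the originals intact
SQL> create pluggable database pdb1 using '/home/oracle/pdb1_unplug.xml'
  2  copy file_name_convert=('+SMALLARRAY','+BIGARRAY');

SQL> -- Move the datafiles to the new location
SQL> create pluggable database pdb1 using '/home/oracle/pdb1_unplug.xml'
  2  move file_name_convert=('+SMALLARRAY','+BIGARRAY');

SQL> -- Use the datafiles exactly where they are (as in this test)
SQL> create pluggable database pdb1 using '/home/oracle/pdb1_unplug.xml' nocopy;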

During the plugging in of PDB1, the following messages were emitted in the alert file, showing the mounting of the +SMALLARRAY diskgroup and the creation of a CRS dependency for future startup operations:

NOTE: ASMB mounting group 2 (SMALLARRAY)
NOTE: ASM background process initiating disk discovery for grp 2
NOTE: Assigning number (2,0) to disk (ORCL:ELX_3PAR_100G_0)
NOTE: Assigning number (2,1) to disk (ORCL:ELX_3PAR_100G_1)
NOTE: Assigning number (2,2) to disk (ORCL:ELX_3PAR_100G_2)
NOTE: Assigning number (2,3) to disk (ORCL:ELX_3PAR_100G_3)
NOTE: Assigning number (2,4) to disk (ORCL:ELX_3PAR_100G_4)
NOTE: Assigning number (2,5) to disk (ORCL:ELX_3PAR_100G_5)
NOTE: Assigning number (2,6) to disk (ORCL:ELX_3PAR_100G_6)
NOTE: Assigning number (2,7) to disk (ORCL:ELX_3PAR_100G_7)
NOTE: Assigning number (2,8) to disk (ORCL:ELX_3PAR_100G_8)
NOTE: Assigning number (2,9) to disk (ORCL:ELX_3PAR_100G_9)
SUCCESS: mounted group 2 (SMALLARRAY)
NOTE: grp 2 disk 0: ELX_3PAR_100G_0 path:ORCL:ELX_3PAR_100G_0
NOTE: grp 2 disk 1: ELX_3PAR_100G_1 path:ORCL:ELX_3PAR_100G_1
NOTE: grp 2 disk 2: ELX_3PAR_100G_2 path:ORCL:ELX_3PAR_100G_2
NOTE: grp 2 disk 3: ELX_3PAR_100G_3 path:ORCL:ELX_3PAR_100G_3
NOTE: grp 2 disk 4: ELX_3PAR_100G_4 path:ORCL:ELX_3PAR_100G_4
NOTE: grp 2 disk 5: ELX_3PAR_100G_5 path:ORCL:ELX_3PAR_100G_5
NOTE: grp 2 disk 6: ELX_3PAR_100G_6 path:ORCL:ELX_3PAR_100G_6
NOTE: grp 2 disk 7: ELX_3PAR_100G_7 path:ORCL:ELX_3PAR_100G_7
NOTE: grp 2 disk 8: ELX_3PAR_100G_8 path:ORCL:ELX_3PAR_100G_8
NOTE: grp 2 disk 9: ELX_3PAR_100G_9 path:ORCL:ELX_3PAR_100G_9
NOTE: dependency between database CDB2 and diskgroup resource ora.SMALLARRAY.dg is established

Next, let's check the status and open up PDB1 in its new container database:

SQL> select con_id,inst_id,name,open_mode from gv$PDBs order by 1,2
  2  /

    CON_ID    INST_ID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2          1 PDB$SEED                       READ ONLY
         2          3 PDB$SEED                       READ ONLY
         2          5 PDB$SEED                       READ ONLY
         3          1 PDB2                           READ WRITE
         3          3 PDB2                           READ WRITE
         3          5 PDB2                           READ WRITE
         4          1 PDB1                           MOUNTED
         4          3 PDB1                           MOUNTED
         4          5 PDB1                           MOUNTED

9 rows selected.

SQL> alter pluggable database pdb1 open instances=all;

Pluggable database altered.

Proof Point – relocate a datafile to an alternate diskgroup

Online relocation of datafiles is an important feature of Oracle 12c. Previous releases allowed a certain degree of online relocation, but still mandated a brief period of unavailability when the new file location was brought online. Oracle 12c promises to move a datafile with zero downtime. In this scenario, we start with a tablespace named APPDATA which has a single datafile that has been created in the wrong diskgroup:

SQL> select name,bytes/1048576 sz from v$datafile
SQL> /

NAME                                                                                   SZ
--------------------------------------------------------------------------------- ----------
+BIGARRAY/CDB2/DATAFILE/undotbs1.260.825310547                                          630
+SMALLARRAY/CDB1/E5A299062A7FF04AE043975BC00A8790/DATAFILE/system.281.825310527         260
+SMALLARRAY/CDB1/E5A299062A7FF04AE043975BC00A8790/DATAFILE/sysaux.282.825310527         110
+SMALLARRAY/CDB1/E5A299062A7FF04AE043975BC00A8790/DATAFILE/users.284.825312223            5
+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.304.825412353      1090560

Note: The attempt to move a datafile for a PDB while connected to the CDB$ROOT was unsuccessful, and the error message was not entirely helpful:

05:20:37 SQL> alter database move datafile '+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.304.825412353' to '+SMALLARRAY';
alter database move datafile '+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.304.825412353' to '+SMALLARRAY'
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "45"

Connecting to the PDB fixes this problem and allowed the file to be moved successfully:

05:20:48 SQL> alter session set container=PDB1;

Session altered.

Elapsed: 00:00:00.00
05:21:21 SQL> alter database move datafile '+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.304.825412353' to '+SMALLARRAY';

Database altered.

Elapsed: 04:51:57.49

Although the move was successful, it was very slow (more on that shortly). During the move, Scale Abilities was able to read and update a table within that tablespace:

SQL> select sum(object_id) from bigtable where rownum<10;

SUM(OBJECT_ID)
--------------
           324

SQL> update bigtable set object_id=object_id+10 where rownum<10;

9 rows updated.

SQL> commit;

Commit complete.

SQL> select sum(object_id) from bigtable where rownum<10;

SUM(OBJECT_ID)
--------------
           414

Now, that was a very long runtime, and worthy of some explanation. As mentioned earlier, this paper isn't focused on performance; the Scale Abilities lab will perform a more technical investigation in the future and write a blog post about it. In the meantime, here's what a trace of the session executing the move looked like:

WAIT #140641124141656: nam='control file sequential read' ela= 971 file#=0 block#=59 blocks=1 obj#=-1 tim=393150810594
WAIT #140641124141656: nam='db file sequential read' ela= 1494 file#=45 block#=119070465 blocks=128 obj#=-1 tim=393150812119
WAIT #140641124141656: nam='db file single write' ela= 1759 file#=45 block#=119070465 blocks=128 obj#=-1 tim=393150813980
WAIT #140641124141656: nam='DFS lock handle' ela= 193 type|mode=1128857605 id1=110 id2=1 obj#=-1 tim=393150814264
WAIT #140641124141656: nam='DFS lock handle' ela= 218 type|mode=1128857605 id1=110 id2=3 obj#=-1 tim=393150814537
WAIT #140641124141656: nam='DFS lock handle' ela= 11668 type|mode=1128857605 id1=110 id2=2 obj#=-1 tim=393150826256
WAIT #140641124141656: nam='control file sequential read' ela= 249 file#=0 block#=1 blocks=1 obj#=-1 tim=393150826570
WAIT #140641124141656: nam='control file sequential read' ela= 982 file#=0 block#=48 blocks=1 obj#=-1 tim=393150827577

however. This can be observed in the respective waits for ‘db file sequential read’ and ‘db file single write. which is likely a type of anomaly with the wait interface abstraction. Scale Abilities also tried a datafile move operation with the PDB explicitly closed on all but one instance. note the 11. The CI enqueue is a cross-instance invocation most likely associated with coordinating the DBWR processes on other instances to ensure dirty blocks are written out to both the original datafile and the new copy. there are a series of coordinating measures taking place in the RAC tier which slow down the progress.6ms wait for ‘DFS lock handle’ highlighted in green. but it did not improve the throughput and requires further investigation. which (when decoded) shows a wait for the CI enqueue.’ where it is reported as the file# parameter. which implies a disk throughput of 52MB/s. In particular. The total elapsed time for one iteration (1MB moved) in this trace file fragment is 18. Shutting down all the instances except the one where the move takes place dramatically reduces the overhead. It seems the move operation is taking place 128 blocks at a time (128*8KB=1MB in this case).Solution Implementer’s Series WAIT #140641124141656: nam='control file sequential read' ela= 952 file#=0 block#=50 blocks=1 obj#=-1 tim=393150828576 WAIT #140641124141656: nam='control file sequential read' ela= 957 file#=0 block#=59 blocks=1 obj#=-1 tim=393150829578 WAIT #140641124141656: nam='db file sequential read' ela= 1712 file#=45 block#=119070593 blocks=128 obj#=-1 tim=393150831343 WAIT #140641124141656: nam='db file single write' ela= 1843 file#=45 block#=119070593 blocks=128 obj#=-1 tim=393150833283 WAIT #140641124141656: nam='DFS lock handle' ela= 205 type|mode=1128857605 id1=110 id2=1 obj#=-1 tim=393150833571 WAIT #140641124141656: nam='DFS lock handle' ela= 164 type|mode=1128857605 id1=110 id2=3 obj#=-1 tim=393150833794 An interesting observation for aficionados of trace files is that the file number of the original and copy datafile appear to be registered as the same (45).984ms. very little of which is spent actually transferring data because of the time spent waiting for the DFS lock handle. but does not remove it altogether: WAIT #140653075283640: nam='DFS lock handle' ela= 4542 type|mode=1128857605 id1=110 id2=2 obj#=-1 tim=2388675442209 WAIT #140653075283640: nam='control file sequential read' ela= 806 file#=0 block#=1 blocks=1 obj#=-1 tim=2388675443083 WAIT #140653075283640: nam='control file sequential read' ela= 921 file#=0 block#=48 blocks=1 obj#=-1 tim=2388675444060 WAIT #140653075283640: nam='control file sequential read' ela= 965 file#=0 block#=50 blocks=1 obj#=-1 tim=2388675445060 WAIT #140653075283640: nam='control file sequential read' ela= 967 file#=0 block#=60 blocks=1 obj#=-1 tim=2388675446062 WAIT #140653075283640: nam='db file sequential read' ela= 1748 file#=45 block#=20943489 blocks=128 obj#=-1 tim=2388675447849 WAIT #140653075283640: nam='db file single write' ela= 2891 file#=45 block#=20943489 blocks=128 obj#=-1 tim=2388675450857 16 Database Consolidation with Oracle 12c Multitenancy .

After making this change, the throughput went up to around 90MB/s, which is still not very high, but these results are a good start for a non-performance focused exercise. The low throughput seems like it could be dramatically improved by Oracle with some algorithmic changes - rather than perform all the cross-instance work every 1MB, perform it, for example, every 50MB. The progress of the move can, at least, be viewed in v$session_longops:

SQL> SELECT sid, serial#, opname, ROUND(sofar/totalwork*100,2) PCT
  2  FROM gv$session_longops
  3  WHERE opname like 'Online%'
  4* AND sofar<totalwork
SQL> /

       SID    SERIAL# OPNAME                                          PCT
---------- ---------- ---------------------------------------- ----------
       392       6745 Online data file move                         77.42

Proof Point – create a clone from PDB

Oracle 12c also allows the creation of clones from existing PDBs. This is a very useful feature for managing development and test databases, and significantly reduces the associated labor intensity; it is an important feature for managing a consolidated platform. One (fairly major, in the opinion of Scale Abilities) downside to the clone functionality is that the source PDB must be opened "READ ONLY" for the clone operation to run:

SQL> alter pluggable database pdb1 close instances=all;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

SQL> create pluggable database pdb1_clone from pdb1
  2  file_name_convert=('+SMALLARRAY','+BIGARRAY');

Pluggable database created.

Elapsed: 00:43:16.13

This is much more indicative of the performance we should be seeing: a copy time of 43:16 represents a decent average write bandwidth of 406MB/s, considering the specification of this system. The ability to achieve a much higher number than this with a little careful tuning of the I/O subsystem seems promising, and it should be possible to use a much higher percentage of the 16GFC connectivity that we have available to speed up the relocation of data files between storage tiers. The observed 406MB/s would saturate a 4GFC Fibre Channel HBA even without further tuning – such operations highlight the need to provide for significantly greater I/O bandwidth in a multitenant environment than in a traditional Oracle cluster.
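Not captured in the test log is the follow-up once the clone completes: opening the clone, and returning the source PDB to read-write service. A minimal sketch, using the same command forms shown above:

SQL> -- Open the freshly created clone on all instances
SQL> alter pluggable database pdb1_clone open instances=all;

SQL> -- Return the source PDB to normal read-write operation
SQL> alter pluggable database pdb1 close immediate instances=all;
SQL> alter pluggable database pdb1 open instances=all;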

One interesting observation is that the clone operation is carried out by PX slave processes rather than the user's foreground process. For example, the following trace file fragment shows the PX slave performing the actual copy:

[root@Oracle2 trace]# more CDB2_3_p000_54766.trc
WAIT #140320369183712: nam='ASM file metadata operation' ela= 36 msgop=33 locn=0 p3=0 obj#=-1 tim=146952678700
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 4 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952678788
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 4 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952678864
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1249 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952680181
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 423 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952680671
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 2322 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952683051
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1259 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952684325
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 4 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952684418
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 2 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952684430
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 2272 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952686758
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 248 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952687020
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 333 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952687421
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1722 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952689153
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 122 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952689395
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1651 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952691056
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1237 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952692361

It is likely that the block size used for the copy is equal to one ASM allocation unit (default 1MB), considering the trace file indicates that this session is coordinating the copy directly with the ASM instance. In this test case we had only one file that was large enough to incur a high copy time; it is possible that multiple PX slaves could operate on PDBs that contain a number of large files. This would seem to indicate that considerably more parallel capabilities should be possible when cloning PDBs.

Conclusion

The new Multitenant functionality seems a natural fit for consolidation using server-pooled clusters and shared disk solutions using Gen 5 Fibre Channel. The ability to pool multiple pluggable databases into a small number of container databases dramatically reduces the system management burden and improves the ability to effectively resource-manage databases in a shared environment. The easy access to data using a shared storage infrastructure complements the ease of use afforded by the PDB concept and effectively eliminates data copies for many use cases. Multitenancy can also be implemented without shared disk and with locally attached SSD storage, but doing so removes many of the advantages of consolidation, and perhaps indicates the systems in question were not ideal candidates for consolidation.

Management of a consolidated environment requires, by its very nature, the ability to move workloads between servers and storage tiers as the demands and growth of the business dictate, and this will become a frequently used operation for DBAs. One possible implication of the increased mobility of databases and their storage is that the bottleneck will move from the labor-intensive manual processes formerly required, and put more pressure on the storage tier. It is evident that the bandwidth provision in the SAN infrastructure will become increasingly important. This will be especially true for the server HBAs, because the data mobility is a server-side action, potentially migrating large data sets from many storage arrays at any point in time. The availability of Gen 5 Fibre Channel HBAs will be instrumental in providing this uplift in data bandwidth.

Although some issues were observed with the throughput of the online datafile move functionality, it should be stressed that there is nothing architecturally 'wrong' in the method Oracle has elected to use. In particular, Scale Abilities fully expects the throughput of such moves to be significantly higher once the root cause is found.

For more information

www.Emulex.com
www.ImplementersLab.com
www.scaleabilities.co.uk

To help us improve our documents, please provide feedback at implementerslab@emulex.com.

© Copyright 2013 Emulex Corporation. The information contained herein is subject to change without notice. The only warranties for Emulex products and services are set forth in the express warranty statements accompanying such products and services. Emulex shall not be liable for technical or editorial errors or omissions contained herein. OneCommand is a registered trademark of Emulex Corporation. Oracle is a registered trademark of Oracle Corporation. HP is a registered trademark in the U.S. and other countries.

World Headquarters
3333 Susan Street, Costa Mesa, California 92626
+1 714 662 5600

Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547 | Dublin, Ireland +353 (0)1 652 1700 | Munich, Germany +49 (0) 89 97007 177
Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5322 1348 | Wokingham, United Kingdom +44 (0) 118 977 2929