In Oracle 12.2 a container database can run in local undo mode, which means each pluggable database has its own undo tablespace. This means the problems associated with shared undo are no longer present.
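As an illustrative sketch (not from these notes), local undo mode can be checked from DATABASE_PROPERTIES and enabled as follows; enabling it requires a restart in UPGRADE mode.

```sql
-- Check whether local undo is enabled (12.2 onward).
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name = 'LOCAL_UNDO_ENABLED';

-- Enable local undo mode; the CDB must be restarted in upgrade mode to do this.
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER DATABASE LOCAL UNDO ON;
SHUTDOWN IMMEDIATE;
STARTUP;
```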
In 12c the LGWR process spawns two worker slaves, and will spawn more on redo-intensive systems.
There are many more backup and recovery scenarios available when dealing with the multitenant architecture.
Backup and recovery at the CDB-level is similar to that of non-CDB instances and affects all PDBs associated with the CDB.
Backup and recovery at the PDB-level is possible, with some restrictions. Point in time recovery (PITR) of a PDB is possible using an auxiliary instance, similar to tablespace PITR.
A PDB-level PITR affects the possible flashback database operations at the CDB level.
Flashback Database
In Oracle 12.1 flashback database is only available at the CDB level and therefore affects all PDBs associated with the CDB. As mentioned previously, PITR of a PDB affects the possible flashback database
operations on the CDB.
In Oracle 12.2 flashback of a pluggable database is possible, making flashback database relevant again.
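A sketch of the 12.2 PDB flashback flow; pdb1 and the restore point name are illustrative, not from these notes.

```sql
-- Create a restore point from within the PDB.
ALTER SESSION SET CONTAINER = pdb1;
CREATE RESTORE POINT before_change;

-- Later, flash the PDB back from the root container.
ALTER SESSION SET CONTAINER = cdb$root;
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
FLASHBACK PLUGGABLE DATABASE pdb1 TO RESTORE POINT before_change;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
```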
Encryption key management has changed in Oracle database 12c, which affects transparent database encryption (TDE) for both non-CDB and CDB installations.
Under the multitenant architecture, many of the encryption key management operations must be done at both the CDB and PDB level for TDE to work. This also means encryption keys must be exported
and imported during unplug and plugin operations on PDBs.
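A hedged sketch of the key export/import around an unplug/plugin; the PDB name, file path, secret and keystore password are all illustrative.

```sql
-- Before unplugging: export the keys from inside the source PDB.
ALTER SESSION SET CONTAINER = pdb1;
ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS
  WITH SECRET "mySecret"
  TO '/tmp/pdb1_keys.p12'
  IDENTIFIED BY myKeystorePassword;

-- After plugging in: import the keys from inside the new PDB.
ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS
  WITH SECRET "mySecret"
  FROM '/tmp/pdb1_keys.p12'
  IDENTIFIED BY myKeystorePassword
  WITH BACKUP;
```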
The following features are not currently supported under the multitenant architecture in version 12.1.0.2.
DBVERIFY
Data Recovery Advisor
Flashback Pluggable Database (present in 12.2)
Flashback Transaction Backout
Database Change Notification
Continuous Query Notification (CQN)
Client Side Cache
Heat Map
Automatic Data Optimization
Oracle Streams
Flashback Transaction Query (in both local undo mode and shared undo mode)
Oracle Sharding (new feature in 12.2)
If you need any of these features, use a non-CDB architecture until they are supported.
The non-CDB architecture is deprecated in Oracle Database 12c, and may be desupported and unavailable in a release after Oracle Database 12c Release 2. Oracle recommends use of the CDB
architecture.
Lone-PDB is Free
A container database with a single pluggable database, also known as lone-PDB or single tenant, is free and available in all database editions. It is only when you want multiple PDBs in a single CDB that you have to pay for the multitenant option. As such, you can have multiple CDBs on a server, each with a single PDB, without incurring any extra cost.
Using the lone-PDB approach allows you to get used to the multitenant architecture without having to buy the multitenant option.
When the multitenant architecture was first announced, several claims were made about it improving the speed of patches and upgrades because of the ability to unplug a PDB from a CDB running a
previous database version and plug it into a CDB running a newer version. In reality, using unplug/plugin for an upgrade involves both pre-upgrade and post-upgrade steps that mean the total elapsed
time may not be improved by as much as it initially sounded. Even so, depending on the type of patch or upgrade, you may see some benefits to this approach.
Transfers of PDBs between CDBs of the same version using unplug/plugin are incredibly simple and quick!
Cloning is where the multitenant architecture really shines. A clone of a PDB can be performed within the local CDB, or to a remote CDB on the same or a different server. Although cloning using Clonedb and RMAN DUPLICATE is relatively straightforward, cloning a pluggable database is incredibly simple! It might be worth moving to the multitenant architecture just for this feature. Remote clones can also be used to convert non-CDB databases to PDBs.
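A sketch of both clone variants; the PDB names, paths, and the clone_link database link are illustrative. In 12.1 the source PDB must be open read-only during the clone.

```sql
-- Local clone within the same CDB.
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;
CREATE PLUGGABLE DATABASE pdb2 FROM pdb1
  FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb1/pdb2/');

-- Remote clone over a database link pointing at the source CDB.
CREATE PLUGGABLE DATABASE pdb3 FROM pdb1@clone_link
  FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb1/pdb3/');
```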
Multitenant : Create and Configure a Container Database (CDB) in Oracle Database 12c Release 1 (12.1)
Manual Creation
startup nomount pfile="/u01/app/oracle/admin/cdb1/scripts/init.ora";
CREATE DATABASE "cdb1"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE '/u01/app/oracle/oradata/cdb1/system01.dbf' SIZE 700M REUSE
AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/u01/app/oracle/oradata/cdb1/sysaux01.dbf' SIZE 550M REUSE
AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '/u01/app/oracle/oradata/cdb1/temp01.dbf' SIZE 20M REUSE
AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE '/u01/app/oracle/oradata/cdb1/undotbs01.dbf' SIZE 200M REUSE
AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/cdb1/redo01.log') SIZE 50M,
GROUP 2 ('/u01/app/oracle/oradata/cdb1/redo02.log') SIZE 50M,
GROUP 3 ('/u01/app/oracle/oradata/cdb1/redo03.log') SIZE 50M
USER SYS IDENTIFIED BY "&&sysPassword" USER SYSTEM IDENTIFIED BY "&&systemPassword"
enable pluggable database
seed file_name_convert=('/u01/app/oracle/oradata/cdb1/system01.dbf','/u01/app/oracle/oradata/cdb1/pdbseed/system01.dbf',
'/u01/app/oracle/oradata/cdb1/sysaux01.dbf','/u01/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf',
'/u01/app/oracle/oradata/cdb1/temp01.dbf','/u01/app/oracle/oradata/cdb1/pdbseed/temp01.dbf',
'/u01/app/oracle/oradata/cdb1/undotbs01.dbf','/u01/app/oracle/oradata/cdb1/pdbseed/undotbs01.dbf');
spool off
Create and Configure a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1)
Create a Pluggable Database (PDB) Manually
Method 1 (12.1.0.2)
ALTER SYSTEM SET db_create_file_dest = '/u02/oradata';
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1
CREATE_FILE_DEST='/u01/app/oracle/oradata';
Method 2
CONN / AS SYSDBA
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb2/');
Method 3
CONN / AS SYSDBA
ALTER SESSION SET PDB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb3/';
CREATE PLUGGABLE DATABASE pdb3 ADMIN USER pdb_adm IDENTIFIED BY Password1;
Commands
Before attempting to unplug a PDB, you must make sure it is closed. To unplug the database use the ALTER PLUGGABLE DATABASE command with the UNPLUG INTO clause to specify the location of the
XML metadata file.
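A sketch of the unplug sequence; pdb2 and the XML path match the plugin example used later, but the statements themselves are reconstructed, not copied from these notes.

```sql
-- The PDB must be closed before it can be unplugged.
ALTER PLUGGABLE DATABASE pdb2 CLOSE;
ALTER PLUGGABLE DATABASE pdb2 UNPLUG INTO '/u01/app/oracle/oradata/cdb1/pdb2/pdb2.xml';

-- Remove the PDB from the CDB, keeping its datafiles for the plugin.
DROP PLUGGABLE DATABASE pdb2 KEEP DATAFILES;
```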
Plugging a PDB into the CDB is similar to creating a new PDB. First check the PDB is compatible with the CDB by calling the DBMS_PDB.CHECK_PLUG_COMPATIBILITY function, passing in the XML metadata file and the name of the PDB you want to create using it.
SET SERVEROUTPUT ON
DECLARE
l_result BOOLEAN;
BEGIN
l_result := DBMS_PDB.check_plug_compatibility(
pdb_descr_file => '/u01/app/oracle/oradata/cdb1/pdb2/pdb2.xml',
pdb_name => 'pdb2');
IF l_result THEN
DBMS_OUTPUT.PUT_LINE('compatible');
ELSE
DBMS_OUTPUT.PUT_LINE('incompatible');
END IF;
END;
/
compatible
SQL>
Same container
CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb1/pdb2/pdb2.xml'
NOCOPY
TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb2 OPEN READ WRITE;
Using DBMS_PDB
The DBMS_PDB package allows you to generate an XML metadata file from a non-CDB 12c database, effectively allowing it to be described the way you do when unplugging a PDB. This allows the non-CDB to be plugged in as a PDB into an existing CDB.
export ORACLE_SID=db12c
sqlplus / as sysdba
SHUTDOWN IMMEDIATE;
STARTUP OPEN READ ONLY
BEGIN
DBMS_PDB.DESCRIBE(
pdb_descr_file => '/tmp/db12c.xml');
END;
/
export ORACLE_SID=db12c
sqlplus / as sysdba
SHUTDOWN IMMEDIATE;
export ORACLE_SID=cdb1
sqlplus / as sysdba
Switch to the PDB container and run the "$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql" script to clean up the new PDB, removing any items that should not be present in a PDB.
ALTER SESSION SET CONTAINER=pdb6;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
ALTER SESSION SET CONTAINER=pdb6;
ALTER PLUGGABLE DATABASE OPEN;
SELECT name, open_mode FROM v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB6 READ WRITE
1 row selected.
If the non-CDB is version 11.2.0.3 onward, you can consider using Transportable Database. If the non-CDB is pre-11.2.0.3, then you can still consider using transportable tablespaces.
Using Replication
Another alternative is to use a replication product like Golden Gate to replicate the data from the non-container database to a pluggable database.
Patching Considerations
If your instances are not at the same patch level, you will get PDB violations visible in the PDB_PLUG_IN_VIOLATIONS view. If the destination is at a higher patch level than the source, simply run the
datapatch utility on the destination instance in the normal way. It will determine what work needs to be done.
cd $ORACLE_HOME/OPatch
./datapatch -verbose
Connecting to Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)
$ export ORACLE_SID=cdb1
$ sqlplus / as sysdba
SQL> CONN system/password
Connected.
The V$SERVICES view can be used to display the available services from the database.
NAME PDB
------------------------------ ------------------------------
SYS$BACKGROUND CDB$ROOT
SYS$USERS CDB$ROOT
cdb1 CDB$ROOT
cdb1XDB CDB$ROOT
pdb1 PDB1
pdb2 PDB2
6 rows selected.
The lsnrctl utility allows you to display the available services from the command line.
SQL> -- EZCONNECT
SQL> CONN system/password@//localhost:1521/cdb1
Connected.
SQL>
SQL> -- tnsnames.ora
SQL> CONN system/password@cdb1
Connected.
SQL>
CDB1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol6-121.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cdb1)
)
)
Displaying the Current Container
The SHOW CON_NAME and SHOW CON_ID commands in SQL*Plus display the current container name and ID respectively.
CON_NAME
------------------------------
CDB$ROOT
SQL>
CON_ID
------------------------------
1
SELECT SYS_CONTEXT('USERENV', 'CON_NAME')
FROM dual;
SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT
SQL>
SYS_CONTEXT('USERENV','CON_ID')
--------------------------------------------------------------------------------
1
Session altered.
CON_NAME
------------------------------
PDB1
SQL> ALTER SESSION SET container = cdb$root;
Session altered.
CON_NAME
------------------------------
CDB$ROOT
SQL>
PDB users with the SYSDBA, SYSOPER, SYSBACKUP, or SYSDG privilege can connect to a closed PDB. All other PDB users can only connect when the PDB is open. As with regular databases, the PDB
users require the CREATE SESSION privilege to enable connections.
When attempting to connect to a PDB using the SID format, you will receive the following error.
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
Ideally, you would correct the connect string to use services instead of SIDs, but if that is a problem the USE_SID_AS_SERVICE_listener_name listener parameter can be used.
Edit the "$ORACLE_HOME/network/admin/listener.ora" file, adding the following entry, with the "listener" name matching that used by your listener.
USE_SID_AS_SERVICE_listener=on
Reload or restart the listener.
$ lsnrctl reload
Now both of the following connection attempts will be successful as any SIDs will be treated as services.
jdbc:oracle:thin:@ol6-121:1521:pdb1
jdbc:oracle:thin:@ol6-121:1521/pdb1
Startup and Shutdown Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)
SQL*Plus Commands
The following SQL*Plus commands are available to start and stop a pluggable database, when connected to that pluggable database as a privileged user.
STARTUP FORCE;
STARTUP OPEN READ WRITE [RESTRICT];
STARTUP OPEN READ ONLY [RESTRICT];
STARTUP UPGRADE;
SHUTDOWN [IMMEDIATE];
The following commands are available to open and close the current PDB when connected to the PDB as a privileged user.
The following commands are available to open and close one or more PDBs when connected to the CDB as a privileged user.
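The commands themselves are not captured in these notes; they presumably looked like the following (pdb1 and pdb2 are illustrative names).

```sql
-- When connected to the PDB, the current container is implied.
ALTER PLUGGABLE DATABASE OPEN;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

-- When connected to the CDB root, one or more PDBs can be named.
ALTER PLUGGABLE DATABASE pdb1, pdb2 OPEN READ WRITE;
ALTER PLUGGABLE DATABASE ALL OPEN;
ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 CLOSE IMMEDIATE;
```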
You can customise the trigger if you don't want all of your PDBs to start.
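The trigger referred to is presumably a system startup trigger, the usual way to open PDBs automatically before the SAVE STATE functionality arrived in 12.1.0.2. A sketch:

```sql
-- Open all PDBs whenever the CDB instance starts.
CREATE OR REPLACE TRIGGER open_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END open_pdbs;
/
```

To open only some PDBs, name them in the ALTER PLUGGABLE DATABASE statement instead of using ALL.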
We will start off by looking at the normal result of a CDB restart. Notice the PDBs are in READ WRITE mode before the restart, but in MOUNTED mode after it.
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 READ WRITE
SQL>
SHUTDOWN IMMEDIATE;
STARTUP;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 MOUNTED
PDB2 MOUNTED
SQL>
Next, we open both pluggable databases, but only save the state of PDB1.
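The statements are missing from these notes; presumably:

```sql
-- Open both PDBs, but preserve the open state across restarts for pdb1 only.
ALTER PLUGGABLE DATABASE pdb1, pdb2 OPEN;
ALTER PLUGGABLE DATABASE pdb1 SAVE STATE;
```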
COLUMN instance_name FORMAT A20
SQL>
Restarting the CDB now gives us a different result.
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 READ WRITE
SQL>
SHUTDOWN IMMEDIATE;
STARTUP;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 MOUNTED
SQL>
The saved state can be discarded using the following statement.
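The statement is missing from these notes; presumably:

```sql
ALTER PLUGGABLE DATABASE pdb1 DISCARD STATE;

-- The "no rows selected" output below presumably came from re-querying the view.
SELECT con_name, instance_name, state FROM dba_pdb_saved_states;
```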
no rows selected
SQL>
The state is only saved and visible in the DBA_PDB_SAVED_STATES view if the container is in READ ONLY or READ WRITE mode. The ALTER PLUGGABLE DATABASE ... SAVE STATE command does not error
when run against a container in MOUNTED mode, but nothing is recorded, as this is the default state after a CDB restart.
Like other examples of the ALTER PLUGGABLE DATABASE command, PDBs can be identified individually, as a comma separated list, using the ALL or ALL EXCEPT keywords.
The INSTANCES clause can be added when used in RAC environments. The clause can identify instances individually, as a comma separated list, using the ALL or ALL EXCEPT keywords. Regardless of the
INSTANCES clause, the SAVE/DISCARD STATE commands only affect the current instance.
Configure Instance Parameters and Modify Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)
When connected as a privileged user and pointing to the root container, any ALTER SYSTEM command will by default be directed at just the root container. This means the following two commands are
functionally equivalent in this context.
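The two commands are not shown in these notes; presumably something like the following, where open_cursors is an illustrative parameter and CONTAINER = CURRENT is the default.

```sql
-- Both statements affect only the root container in this context.
ALTER SYSTEM SET open_cursors = 300;
ALTER SYSTEM SET open_cursors = 300 CONTAINER = CURRENT;
```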
The PDBs are able to override some parameter settings by issuing a local ALTER SYSTEM call from the container. See documentation here.
To make a local PDB change, make sure you are either connected directly to a privileged user in the PDB, or to a privileged common user who has their container pointing to the PDB in question. As
mentioned previously, if the CONTAINER clause is not mentioned, the current container is assumed, so the following ALTER SYSTEM commands are functionally equivalent.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
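A sketch of the equivalent local commands (open_cursors is an illustrative parameter):

```sql
-- With the current container set to pdb1, both statements affect only pdb1.
ALTER SYSTEM SET open_cursors = 400;
ALTER SYSTEM SET open_cursors = 400 CONTAINER = CURRENT;
```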
Remember, to target the PDB you must either connect directly to a privileged user using a service pointing to the PDB, or connect to the root container and switch to the PDB container. Some of the
possible PDB modifications are shown below.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
-- Change the global name. This will change the container name and the
-- name of the default service registered with the listener.
ALTER PLUGGABLE DATABASE OPEN RESTRICTED FORCE;
ALTER PLUGGABLE DATABASE RENAME GLOBAL_NAME TO pdb1a.localdomain;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN;
Thanks to Pavel Rabel for pointing out the problem with this shared temporary tablespace, as described in this MOS note: PDB to Use Global CDB (ROOT) Temporary Tablespace Functionality is Missing (Doc ID 2004595.1).
-- Limit the total storage of the PDB (datafile and local temp files).
ALTER PLUGGABLE DATABASE STORAGE (MAXSIZE 5G);
-- Limit the amount of temp space used in the shared temp files.
ALTER PLUGGABLE DATABASE STORAGE (MAX_SHARED_TEMP_SIZE 2G);
Manage Tablespaces in a Container Database (CDB) and Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1)
CONN / AS SYSDBA
CON_NAME
------------------------------
CDB$ROOT
SQL>
Tablespace created.
SQL>
Tablespace altered.
SQL>
Tablespace dropped.
SQL>
Session altered.
CON_NAME
------------------------------
PDB1
SQL>
Alternatively, connect directly to the PDB as a local user with sufficient privilege.
CON_NAME
------------------------------
PDB1
SQL>
Once pointed to the correct container, tablespaces can be managed using the same commands you have always used. Make sure you put the datafiles in a suitable location for the PDB.
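The statements that produced the output below are not captured here; a sketch with illustrative names and paths:

```sql
-- Create, alter and drop a tablespace inside the PDB.
CREATE TABLESPACE dummy
  DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/dummy01.dbf' SIZE 10M
  AUTOEXTEND ON NEXT 1M;

ALTER TABLESPACE dummy
  ADD DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/dummy02.dbf' SIZE 10M;

DROP TABLESPACE dummy INCLUDING CONTENTS AND DATAFILES;
```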
Tablespace created.
SQL>
Tablespace altered.
SQL>
Tablespace dropped.
SQL>
Undo Tablespaces
Management of the undo tablespace in a CDB is unchanged from that of a non-CDB database.
In contrast, a PDB cannot have an undo tablespace. Instead, it uses the undo tablespace belonging to the CDB. If we connect to a PDB, we can see no undo tablespace is visible.
CONN pdb_admin@pdb1
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
SQL>
But we can see the datafile associated with the CDB undo tablespace.
NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/undotbs01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf
SQL>
NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/pdb1/temp01.dbf
SQL>
Temporary Tablespaces
Management of the temporary tablespace in a CDB is unchanged from that of a non-CDB database.
A PDB can either have its own temporary tablespace, or if it is created without a temporary tablespace, it can share the temporary tablespace of the CDB.
CONN pdb_admin@pdb1
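The statements that produced the output below are missing; presumably something like this (name and path are illustrative):

```sql
-- Create and drop a local temporary tablespace inside the PDB.
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '/u01/app/oracle/oradata/cdb1/pdb1/temp02.dbf' SIZE 10M;

DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
```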
Tablespace created.
SQL>
Tablespace dropped.
SQL>
Default Tablespaces
Setting the default tablespace and default temporary tablespace for a CDB is unchanged compared to a non-CDB database.
There are two ways to set the default tablespace and default temporary tablespace for a PDB. The ALTER PLUGGABLE DATABASE command is the recommended way.
CONN pdb_admin@pdb1
ALTER PLUGGABLE DATABASE DEFAULT TABLESPACE users;
ALTER PLUGGABLE DATABASE DEFAULT TEMPORARY TABLESPACE temp;
For backwards compatibility, it is also possible to use the ALTER DATABASE command.
CONN pdb_admin@pdb1
ALTER DATABASE DEFAULT TABLESPACE users;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
Manage Users and Privileges For Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)
When connected to a multitenant database the management of users and privileges is a little different to traditional Oracle environments. In multitenant environments there are two types of user.
Common User : The user is present in all containers (root and all PDBs).
Local User : The user is only present in a specific PDB. The same username can be present in multiple PDBs, but they are unrelated.
Likewise, there are two types of roles.
Common Role : The role is present in all containers (root and all PDBs).
Local Role : The role is only present in a specific PDB. The same role name can be used in multiple PDBs, but they are unrelated.
Some DDL statements have a CONTAINER clause added to allow them to be directed to the current container or all containers. Its usage will be demonstrated in the sections below.
When creating a common user the following requirements must all be met.
You must be connected to a common user with the CREATE USER privilege.
The current container must be the root container.
The username for the common user must be prefixed with "C##" or "c##" and contain only ASCII or EBCDIC characters.
The username must be unique across all containers.
The DEFAULT TABLESPACE, TEMPORARY TABLESPACE, QUOTA and PROFILE must all reference objects that exist in all containers.
You can either specify the CONTAINER=ALL clause, or omit it, as this is the default setting when the current container is the root.
The following example shows how to create common users with and without the CONTAINER clause from the root container.
CONN / AS SYSDBA
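The CREATE USER statements themselves are missing; a sketch with illustrative names and passwords:

```sql
-- Explicit CONTAINER clause.
CREATE USER c##test_user1 IDENTIFIED BY password1 CONTAINER=ALL;
GRANT CREATE SESSION TO c##test_user1 CONTAINER=ALL;

-- CONTAINER=ALL is the default when the current container is the root.
CREATE USER c##test_user2 IDENTIFIED BY password1;
GRANT CREATE SESSION TO c##test_user2;
```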
When creating a local user the following requirements must all be met.
CONN / AS SYSDBA
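The local user example is missing from these notes; presumably something like this (names are illustrative):

```sql
-- Local users are created while the current container is the PDB.
ALTER SESSION SET CONTAINER = pdb1;

-- CONTAINER=CURRENT is the default inside a PDB.
CREATE USER test_user3 IDENTIFIED BY password1 CONTAINER=CURRENT;
GRANT CREATE SESSION TO test_user3;
```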
Similar to users described previously, roles can be common or local. All Oracle-supplied roles are common and therefore available in the root container and all PDBs. Common roles can be created,
provided the following conditions are met.
You must be connected to a common user with CREATE ROLE and the SET CONTAINER privileges granted commonly.
The current container must be the root container.
The role name for the common role must be prefixed with "C##" or "c##" and contain only ASCII or EBCDIC characters.
The role name must be unique across all containers.
The role must be created with the CONTAINER=ALL clause.
The following example shows how to create a common role and grant it to a common and local user.
CONN / AS SYSDBA
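The example statements are missing; a sketch reusing the illustrative user names from above:

```sql
-- Create the common role in the root container.
CREATE ROLE c##test_role1 CONTAINER=ALL;
GRANT CREATE SESSION TO c##test_role1;

-- Grant it to a common user from the root.
GRANT c##test_role1 TO c##test_user1;

-- Grant it to a local user from within the PDB.
ALTER SESSION SET CONTAINER = pdb1;
GRANT c##test_role1 TO test_user3;
```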
Local roles are created in a similar manner to pre-12c databases. Each PDB can have roles with matching names, since the scope of a local role is limited to the current PDB. The following conditions must be met.
CONN / AS SYSDBA
-- Switch container.
ALTER SESSION SET CONTAINER = pdb1;
GRANT CREATE SESSION TO test_role1;
The basic difference between a local and common grant is the value used by the CONTAINER clause.
-- Common grants.
CONN / AS SYSDBA
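The common grant statements are not captured here; at the root they presumably used CONTAINER=ALL, for example:

```sql
-- Common grants are made from the root with CONTAINER=ALL.
GRANT CREATE SESSION TO c##test_user1 CONTAINER=ALL;
GRANT c##test_role1 TO c##test_user1 CONTAINER=ALL;
```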
-- Local grants.
CONN system/password@pdb1
GRANT CREATE SESSION TO test_user3;
GRANT CREATE SESSION TO test_role1;
GRANT test_role1 TO test_user3;
Backup and Recovery of a Container Database (CDB) and a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1)
RMAN Connections
Unless stated otherwise, this article assumes connections to RMAN are using OS authentication. This means you are connecting to the root container in the CDB with "AS SYSDBA" privilege.
$ export ORAENV_ASK=NO
$ export ORACLE_SID=cdb1
$ . oraenv
The Oracle base remains unchanged with value /u01/app/oracle
$ export ORAENV_ASK=YES
$ rman target=/
Recovery Manager: Release 12.1.0.1.0 - Production on Sun Dec 22 17:03:20 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN>
When connecting to a PDB to perform backup and recovery operations, the RMAN connection will look like the following. Notice the password prompt as no password was entered on the command line.
$ rman target=sys@pdb1
Recovery Manager: Release 12.1.0.1.0 - Production on Mon Dec 23 11:08:35 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN>
Backup
Container Database (CDB) Backup
Backup of a Container Database (CDB) is essentially the same as a non-Container Database. The main thing to remember is, by doing a full backup of the CDB you are also doing a full backup of all PDBs.
Connect to RMAN using OS authentication and take a full backup using the following command. This means you are connecting to the root container with "AS SYSDBA" privilege.
$ rman target=/
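The backup command itself is not captured here; a full CDB backup is typically:

```sql
-- Backs up the root container and every PDB, plus archived redo.
BACKUP DATABASE PLUS ARCHIVELOG;
```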
input datafile file number=00010 name=/u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf
channel ORA_DISK_1: starting piece 1 at 22-DEC-13
channel ORA_DISK_1: finished piece 1 at 22-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E45393F0DE5F1A8AE043D200A8C00DFC/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg4z3so_.bkp
tag=TAG20131222T163015 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00030 name=/u01/app/oracle/oradata/pdb2/sysaux01.dbf
input datafile file number=00029 name=/u01/app/oracle/oradata/pdb2/system01.dbf
input datafile file number=00031 name=/u01/app/oracle/oradata/pdb2/pdb2_users01.dbf
channel ORA_DISK_1: starting piece 1 at 22-DEC-13
channel ORA_DISK_1: finished piece 1 at 22-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E4B0CA84B47E6183E043D200A8C0A806/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg50766_.bkp
tag=TAG20131222T163015 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/u01/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/cdb1/pdbseed/system01.dbf
channel ORA_DISK_1: starting piece 1 at 22-DEC-13
channel ORA_DISK_1: finished piece 1 at 22-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E453004B82C71772E043D200A8C08EC5/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg51bmg_.bkp
tag=TAG20131222T163015 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:36
Finished backup at 22-DEC-13
Connect to RMAN using OS authentication and back up the root container using the following command. This means you are connecting to the root container with "AS SYSDBA" privilege.
$ rman target=/
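The command for backing up the root container alone is missing from these notes; presumably:

```sql
-- Backs up only the root container datafiles, not the PDBs.
BACKUP DATABASE ROOT;
```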
$ rman target=/
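When connected to the root, a specific PDB can be backed up by name; a sketch (pdb1 is illustrative):

```sql
BACKUP PLUGGABLE DATABASE pdb1;
```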
$ rman target=sys@pdb1
RMAN> BACKUP DATABASE;
Being connected to the PDB limits the scope of the backup command to the current PDB only, as shown in the output below.
$ rman target=sys@pdb1
$ rman target=sys@cdb1
$ rman target=/
# Or
$ rman target=sys@pdb1
RMAN>
Complete Recovery
Container Database (CDB) Complete Recovery
Restoring a CDB is similar to restoring a non-CDB database, but remember restoring a whole CDB will restore not only the root container, but all the PDBs also. Likewise a Point In Time Recovery (PITR) of
the whole CDB will bring all PDBs back to the same point in time.
Connect to RMAN using OS authentication and restore the whole CDB using the following restore script. This means you are connecting to the root container with "AS SYSDBA" privilege.
$ rman target=/
RUN {
SHUTDOWN IMMEDIATE; # use abort if this fails
STARTUP MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN;
}
A section of the output from the above restore script is shown below. Notice the datafiles from the CDB (cdb1) and all the PDBs (pdb1, pdb2 and pdb$seed) are all considered during the restore. The seed PDB is not actually restored because it is read-only and RMAN can see a restore is not necessary.
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/cdb1/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00006 to /u01/app/oracle/oradata/cdb1/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg4wr40_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg4wr40_.bkp tag=TAG20131222T163015
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:56
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00010 to /u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/E45393F0DE5F1A8AE043D200A8C00DFC/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_
9cg4z3so_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E45393F0DE5F1A8AE043D200A8C00DFC/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg4z3so_.bkp
tag=TAG20131222T163015
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00029 to /u01/app/oracle/oradata/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00030 to /u01/app/oracle/oradata/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00031 to /u01/app/oracle/oradata/pdb2/pdb2_users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/E4B0CA84B47E6183E043D200A8C0A806/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_
9cg50766_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E4B0CA84B47E6183E043D200A8C0A806/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg50766_.bkp
tag=TAG20131222T163015
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 22-DEC-13
Statement processed
Root Container Complete Recovery
Rather than recovering the whole CDB, including all PDBs, the root container can be recovered in isolation.
Connect to RMAN using OS authentication and restore the root container using the following restore script. This means you are connecting to the root container with "AS SYSDBA" privilege.
$ rman target=/
RUN {
SHUTDOWN IMMEDIATE; # use abort if this fails
STARTUP MOUNT;
RESTORE DATABASE ROOT;
RECOVER DATABASE ROOT;
# Consider recovering PDBs before opening.
ALTER DATABASE OPEN;
}
The following section of the output from the restore script shows only the root container datafiles are restored and recovered.
It is probably a very bad idea to restore and recover just the root container without doing the same for the PDBs. Any difference in metadata between the two could prove problematic.
$ rman target=/
RUN {
ALTER PLUGGABLE DATABASE pdb1, pdb2 CLOSE;
RESTORE PLUGGABLE DATABASE pdb1, pdb2;
RECOVER PLUGGABLE DATABASE pdb1, pdb2;
ALTER PLUGGABLE DATABASE pdb1, pdb2 OPEN;
}
When connected directly to a PDB, you can restore and recover the current PDB using a local user with the SYSDBA privilege, as shown in the following script.
$ rman target=admin_user@pdb1
SHUTDOWN IMMEDIATE;
RESTORE DATABASE;
RECOVER DATABASE;
STARTUP;
In the current release, the RMAN commands will not work in a "run" script without producing errors.
$ rman target=sys@pdb1
RUN {
ALTER TABLESPACE users OFFLINE;
RESTORE TABLESPACE users;
RECOVER TABLESPACE users;
ALTER TABLESPACE users ONLINE;
}
Datafile recoveries can be done while connected to the container or directly to the PDB.
$ rman target=/
# Or
$ rman target=sys@pdb1
RUN {
ALTER DATABASE DATAFILE 10 OFFLINE;
RESTORE DATAFILE 10;
RECOVER DATAFILE 10;
ALTER DATABASE DATAFILE 10 ONLINE;
}
Point In Time Recovery (PITR) of a CDB is the same as that of non-CDB instances. Just remember, you are performing a PITR on the CDB and all the PDBs at once.
$ rman target=/
RUN {
SHUTDOWN IMMEDIATE; # use abort if this fails
STARTUP MOUNT;
SET UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE DATABASE;
RECOVER DATABASE;
# Should probably open read-only and check it out first.
ALTER DATABASE OPEN RESETLOGS;
}
Point In Time Recovery (PITR) of a PDB follows a similar pattern to that of a regular database. The PDB is closed, restored and recovered to the required point in time, then opened with the RESETLOGS
option. In this case, the RESETLOGS option does nothing with the logfiles themselves, but creates a new PDB incarnation.
$ rman target=/
RUN {
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
SET UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE PLUGGABLE DATABASE pdb1;
RECOVER PLUGGABLE DATABASE pdb1;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}
The simplicity of PITR of PDBs hides a certain amount of complexity. For a start, a PDB shares the root container (and, in 12.1, the undo tablespace) with other PDBs, so behind the scenes an auxiliary instance is used to restore and recover a copy of the root container to the same point in time. This is done in the fast recovery area (FRA), provided it is configured. If the FRA is not configured, an AUXILIARY DESTINATION must be specified.
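As a sketch (the '/u01/aux' path is an assumption, not from the article), a PDB PITR with an explicit auxiliary destination might look like this:

```
RUN {
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
SET UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE PLUGGABLE DATABASE pdb1;
RECOVER PLUGGABLE DATABASE pdb1 AUXILIARY DESTINATION '/u01/aux';
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}
```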
Aside from the FRA space requirement, one other important restriction is relevant. If a point in time recovery of a PDB has been done, it is not possible to directly flashback the database to a time before the PDB point in time recovery. The workaround for this is discussed later in this article.
Oracle 12c includes a new RMAN feature which performs point in time recovery of tables using a single command. You can read about this feature and see examples of its use in the following article.
RMAN Table Point In Time Recovery (PITR) in Oracle Database 12c Release 1 (12.1)
The same mechanism is available for recovering tables in PDBs, with a few minor changes. For the feature to work with a PDB, you must connect to the root container as a user with the SYSDBA or SYSBACKUP privilege.
$ rman target=/
Issue the RECOVER TABLE command in a similar way to that shown for a non-CDB database, but include the OF PLUGGABLE DATABASE clause, as well as giving a suitable AUXILIARY DESTINATION
location for the auxiliary database. The following command also uses the REMAP TABLE clause to give the recovered table a new name.
Alternatively, you can stop at the point where the recovered table is held in a Data Pump dump file, which you can import manually at a later time. The following example uses the DATAPUMP DESTINATION, DUMP FILE and NOTABLEIMPORT clauses to achieve this.
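As a hedged sketch (the TEST.T1 table, the timestamp and the paths are assumptions, not from the article), the two variations described above might look like this:

```
# Recover the table into the PDB under a new name.
RECOVER TABLE test.t1 OF PLUGGABLE DATABASE pdb1
UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')"
AUXILIARY DESTINATION '/u01/aux'
REMAP TABLE 'TEST'.'T1':'T1_RECOVERED';

# Stop at the dump file, without importing the table.
RECOVER TABLE test.t1 OF PLUGGABLE DATABASE pdb1
UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')"
AUXILIARY DESTINATION '/u01/aux'
DATAPUMP DESTINATION '/u01/export'
DUMP FILE 'recovered_t1.dmp'
NOTABLEIMPORT;
```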
This article assumes the following things are in place for the examples to work.
You have a container database (CDB). You can see how to create one here.
Your container database (CDB) has at least one pluggable database (PDB). You can see how to create one here.
You have the flashback database feature enabled on the CDB. You can see how to do that here.
You have backups of your CDB and PDBs. You can see how to do that here.
With this in place, you can move on to the next sections.
$ sqlplus / as sysdba
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-(5/24/60);
ALTER DATABASE OPEN RESETLOGS;
$ rman target=/
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIME 'SYSDATE-(5/24/60)';
ALTER DATABASE OPEN RESETLOGS;
The restrictions on the use of flashback database are similar to those of a non-CDB database, with one extra restriction. If you perform a point in time recovery of a pluggable database (PDB), you cannot use flashback database to return the CDB to a point in time before that PITR of the PDB took place. This issue and the workaround for it are discussed in the next section.
$ rman target=/
RUN {
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
SET UNTIL TIME "TO_DATE('30-DEC-2013 10:15:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE PLUGGABLE DATABASE pdb1;
RECOVER PLUGGABLE DATABASE pdb1;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}
Then we flashback the CDB to 15 minutes ago.
$ rman target=/
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIME "TO_DATE('30-DEC-2013 10:00:00','DD-MON-YYYY HH24:MI:SS')";
ALTER DATABASE OPEN RESETLOGS;
This attempt fails with an error, because the flashback would take the CDB to a time before the PITR of the PDB.
Take a backup of everything (CDB and PDBs). It's always a good idea to take a backup before doing anything major to your database.
Shut down the PDB.
Offline all datafiles for the PDB.
Flashback the CDB.
Restore and recover the PDB to the point it was at before the flashback of the CDB.
You can see an example of this below.
$ rman target=/
# Backup everything.
BACKUP DATABASE PLUS ARCHIVELOG;
# PITR of pdb1.
RUN {
# PDB already closed. No SET UNTIL. We want to recover to the latest time.
#ALTER PLUGGABLE DATABASE pdb1 CLOSE;
#SET UNTIL TIME "TO_DATE('30-DEC-2013 10:15:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE PLUGGABLE DATABASE pdb1;
RECOVER PLUGGABLE DATABASE pdb1;
ALTER PLUGGABLE DATABASE pdb1 DATAFILE ALL ONLINE;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
}
Resource Manager with Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)
PDBs without a specific plan directive use the default PDB directive.
The following code creates a new CDB resource plan using the CREATE_CDB_PLAN procedure, then adds two plan directives using the CREATE_CDB_PLAN_DIRECTIVE procedure.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.create_cdb_plan(
plan => l_plan,
comment => 'A test CDB resource plan');
DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb1',
shares => 3,
utilization_limit => 100,
parallel_server_limit => 100);
DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb2',
shares => 3,
utilization_limit => 100,
parallel_server_limit => 100);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
Information about the available CDB resource plans can be queried using the DBA_CDB_RSRC_PLANS view.
SELECT plan_id,
plan,
comments,
status,
mandatory
FROM dba_cdb_rsrc_plans
WHERE plan = 'TEST_CDB_PLAN';
SQL>
Information about the CDB resource plan directives can be queried using the DBA_CDB_RSRC_PLAN_DIRECTIVES view.
SELECT plan,
pluggable_database,
shares,
utilization_limit AS util,
parallel_server_limit AS parallel
FROM dba_cdb_rsrc_plan_directives
WHERE plan = 'TEST_CDB_PLAN'
ORDER BY pluggable_database;
SQL>
For the rest of the article the cdb_resource_plans.sql and cdb_resource_plan_directives.sql scripts will be used to display this information.
A directive for a new PDB can be added to the existing plan using the CREATE_CDB_PLAN_DIRECTIVE procedure.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb3',
shares => 1,
utilization_limit => 75,
parallel_server_limit => 75);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
PLAN            PLUGGABLE_DATABASE             SHARES  UTIL PARALLEL
--------------- ------------------------------ ------ ----- --------
TEST_CDB_PLAN   ORA$DEFAULT_PDB_DIRECTIVE           1   100      100
TEST_CDB_PLAN   PDB1                                3   100      100
TEST_CDB_PLAN   PDB2                                3   100      100
TEST_CDB_PLAN   PDB3                                1    75       75
SQL>
The UPDATE_CDB_PLAN_DIRECTIVE procedure modifies an existing plan directive.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.update_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb3',
new_shares => 1,
new_utilization_limit => 100,
new_parallel_server_limit => 100);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
SQL>
The DELETE_CDB_PLAN_DIRECTIVE procedure deletes an existing plan directive from the CDB resource plan.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.delete_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb3');
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
SQL>
The UPDATE_CDB_DEFAULT_DIRECTIVE procedure modifies the default directive used by PDBs without a specific plan directive.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.update_cdb_default_directive(
plan => l_plan,
new_shares => 1,
new_utilization_limit => 80,
new_parallel_server_limit => 80);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
SQL> @cdb_resource_plan_directives.sql TEST_CDB_PLAN
SQL>
Modify CDB Autotask Directive
There is a plan directive associated with the database autotask functionality. The configuration of this can be altered using the UPDATE_CDB_AUTOTASK_DIRECTIVE procedure.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.update_cdb_autotask_directive(
plan => l_plan,
new_shares => 1,
new_utilization_limit => 75,
new_parallel_server_limit => 75);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
SQL>
Enable/Disable Resource Plan
Enabling and disabling resource plans in a CDB is the same as it was in pre-12c instances. Enable a plan by setting the RESOURCE_MANAGER_PLAN parameter to the name of the CDB resource plan, while connected to the root container.
SQL> ALTER SYSTEM SET resource_manager_plan = test_cdb_plan;

System altered.

SQL> ALTER SYSTEM SET resource_manager_plan = '';

System altered.
The DELETE_CDB_PLAN procedure removes a CDB resource plan, along with its directives.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.delete_cdb_plan(plan => l_plan);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
SQL> @cdb_resource_plans.sql
   PLAN_ID PLAN                           COMMENTS                       STATUS     MAN
---------- ------------------------------ ------------------------------ ---------- ---
16774 DEFAULT_CDB_PLAN Default CDB plan YES
16775 DEFAULT_MAINTENANCE_PLAN Default CDB maintenance plan YES
16776 ORA$INTERNAL_CDB_PLAN Internal CDB plan YES
16777 ORA$QOS_CDB_PLAN QOS CDB plan YES
SQL>
Pluggable Database (PDB)
The use of resource manager inside the PDB is essentially unchanged compared to the pre-12c instances. Just remember, you have to be connected to the specific PDB when you set the
RESOURCE_MANAGER_PLAN parameter. You can read about how resource manager works in a PDB or in a pre-12c instance here:
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
-- Create plan.
DBMS_RESOURCE_MANAGER.create_plan(
plan => 'hybrid_plan',
comment => 'Plan for a combination of high and low priority tasks.');
DBMS_RESOURCE_MANAGER.create_consumer_group(
consumer_group => 'batch_cg',
comment => 'Batch processing - low priority');
DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'hybrid_plan',
group_or_subplan => 'batch_cg',
comment => 'Low Priority - level 2',
mgmt_p1 => 20);
DBMS_RESOURCE_MANAGER.create_plan_directive(
plan => 'hybrid_plan',
group_or_subplan => 'OTHER_GROUPS',
comment => 'all other users - level 3',
mgmt_p1 => 10);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/
BEGIN
DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group(
grantee_name => 'batch_user',
consumer_group => 'batch_cg',
grant_option => FALSE);
DBMS_RESOURCE_MANAGER.set_initial_consumer_group('web_user', 'web_cg');
DBMS_RESOURCE_MANAGER.set_initial_consumer_group('batch_user', 'batch_cg');
END;
/
Running Scripts Against Container Databases (CDBs) and Pluggable Databases (PDBs) in Oracle Database 12c Release 1 (12.1)
SET CONTAINER
For DBA scripts that perform tasks at the container level, using "/ AS SYSDBA" will work as it did in previous releases. The problem comes when you want to perform a task within the pluggable database.
The simplest way to achieve this is to continue to connect using "/ AS SYSDBA", but to set the specific container in your script using the ALTER SESSION SET CONTAINER command.
sqlplus -s / as sysdba <<EOF
ALTER SESSION SET CONTAINER = pdb1;
EXIT;
EOF
To make the script generic, pass the PDB name as a parameter. Save the following code as a script called "set_container_test.sh".
#!/bin/bash
sqlplus -s / as sysdba <<EOF
ALTER SESSION SET CONTAINER = $1;
EXIT;
EOF
Running the script with the PDB name as the first parameter shows the container is being set correctly.
$ ./set_container_test.sh pdb1
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> SQL>
Session altered.
TWO_TASK
An obvious solution when connecting to specific users is to use the TWO_TASK environment variable. Unfortunately this does not work when using "/ AS SYSDBA".
$ export TWO_TASK=pdb1
$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Apr 18 16:54:34 2014
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
When connecting using a specific username/password combination TWO_TASK works as before.
$ export TWO_TASK=pdb1
$ sqlplus test/test
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
CON_NAME
------------------------------
PDB1
SQL>
Hopefully your scripts do not contain connections with username and password specified, but if they do, adding a service directly to the connection or using the TWO_TASK environment variable will allow you to connect to a specific PDB.
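For example, a connection sketch using an EZConnect string for the PDB service (the host details are borrowed from the tnsnames.ora entry used elsewhere in this article; the test user is hypothetical):

```
$ sqlplus test/test@//ol6-121.localdomain:1521/pdb1
```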
Place the following entries into the "$ORACLE_HOME/network/admin/sqlnet.ora" file, specifying the required wallet directory.
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/wallet)
)
)
SQLNET.WALLET_OVERRIDE = TRUE
SSL_CLIENT_AUTHENTICATION = FALSE
SSL_VERSION = 0
Create a wallet to hold the credentials. Since 11gR2 this is better done using orapki, as it prevents the auto-login working if the wallet is copied to another machine.
$ mkdir -p /u01/app/oracle/wallet
$ orapki wallet create -wallet "/u01/app/oracle/wallet" -pwd "mypassword" -auto_login_local
Oracle Secret Store Tool : Version 12.1.0.1
Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved.
Enter password:
$
Create a credential associated with a TNS alias using the mkstore command. The parameters are "alias username password".
$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential pdb1_test test test
Add a matching entry to the "$ORACLE_HOME/network/admin/tnsnames.ora" file.
PDB1_TEST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol6-121.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb1)
)
)
With this in place, we can now make connections to the database using the credentials in the wallet.
$ sqlplus /@pdb1_test
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
CON_NAME
------------------------------
PDB1
SQL>
Scheduler
The scheduler has been enhanced in Oracle 12c to include script-based jobs, allowing you to define scripts in-line, or call scripts on the file system. These are a variation on external jobs, but the
SQL_SCRIPT and BACKUP_SCRIPT job types make it significantly easier to deal with credentials and the multitenant environment. You can read more about this functionality here.
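As a hedged sketch (the job name, credential name and connect string are assumptions, not from the article), a SQL_SCRIPT job can be defined along these lines:

```
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'sql_script_test_job',
    job_type        => 'SQL_SCRIPT',
    job_action      => 'CONN test/test@pdb1
SELECT SYSDATE FROM dual;',
    credential_name => 'my_os_credential',
    start_date      => SYSTIMESTAMP,
    enabled         => TRUE);
END;
/
```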
catcon.pl
Another issue DBAs will encounter when running in a multitenant environment is the need to run the same script in multiple PDBs. That can be achieved using the methods mentioned previously, but
Oracle provides a Perl utility called "catcon.pl" which may be more convenient.
In a multitenant environment, some Oracle supplied scripts must be applied in the correct order, starting with the CDB$ROOT container. The "catcon.pl" utility takes care of that and provides container-
specific logs, allowing you to easily check the outcome of the action.
The full syntax of the utility is described here, but running the utility with no parameters displays the full usage.
$ perl catcon.pl
[-d directory] [-l directory]
[{-c|-C} container] [-p degree-of-parallelism]
[-e] [-s]
[-E { ON | errorlogging-table-other-than-SPERRORLOG } ]
[-g]
-b log-file-name-base
--
{ sqlplus-script [arguments] | --x<SQL-statement> } ...
Optional:
-u username (optional /password; otherwise prompts for password)
used to connect to the database to run user-supplied scripts or
SQL statements
defaults to "/ as sysdba"
-U username (optional /password; otherwise prompts for password)
used to connect to the database to perform internal tasks
defaults to "/ as sysdba"
-d directory containing the file to be run
-l directory to use for spool log files
-c container(s) in which to run sqlplus scripts, i.e. skip all
Containers not named here; for example,
-c 'PDB1 PDB2',
-C container(s) in which NOT to run sqlplus scripts, i.e. skip all
Containers named here; for example,
-C 'CDB PDB3'
Mandatory:
-b base name (e.g. catcon_test) for log and spool file names
NOTES:
- if --x<SQL-statement> is the first non-option string, it needs to be
preceeded with -- to avoid confusing module parsing options into
assuming that '-' is an option which that module is not expecting and
about which it will complain
- command line parameters to SQL scripts can be introduced using --p
interactive (or secret) parameters to SQL scripts can be introduced
using --P
For example,
perl catcon.pl ... x.sql '--pJohn' '--PEnter Password for John:' ...
$
Regarding running Oracle supplied scripts, the manual uses the example of running "catblock.sql" in all containers.
$ . oraenv
ORACLE_SID = [cdb1] ?
The Oracle base remains unchanged with value /u01/app/oracle
$ cd $ORACLE_HOME/rdbms/admin/
$ perl catcon.pl -d $ORACLE_HOME/rdbms/admin -l /tmp -b catblock_output catblock.sql
$ ls /tmp/catblock_output*
catblock_output0.log catblock_output1.log catblock_output2.log catblock_output3.log
$
The first output file contains the combined output from the "cdb$root" and "pdb$seed" containers. The last file contains an overall status message for the task. The files between contain the output for
all the user-created PDBs.
The "catcon.pl" utility can also be used to run queries against all containers in the CDB. The following command runs a query in each container, placing the output for each in a file called
"/tmp/query_outputN.log".
$ cd $ORACLE_HOME/rdbms/admin/
$ perl catcon.pl -e -l /tmp -b query_output -- --x"SELECT SYS_CONTEXT('USERENV', 'CON_NAME') FROM dual"
$ ls /tmp/query_output*
/tmp/query_output0.log /tmp/query_output1.log /tmp/query_output2.log /tmp/query_output3.log
$
You can target specific PDBs using the "-c" option, or exclude PDBs using the "-C" option. The example below runs a query in all user defined PDBs by omitting the root and seed containers.
$ rm -f /tmp/select_output*
$ cd $ORACLE_HOME/rdbms/admin/
$ perl catcon.pl -e -C 'CDB$ROOT PDB$SEED' -l /tmp -b select_output -- --x"SELECT SYS_CONTEXT('USERENV', 'CON_NAME') FROM dual"
$
Event Availability
The following database events are available at both the CDB and PDB level.
The following database events are only available at the PDB level and require the ON PLUGGABLE DATABASE clause explicitly. Using the ON DATABASE clause results in an error.
AFTER CLONE : After a clone operation, the trigger fires in the new PDB and then the trigger is deleted. If the trigger fails, the clone operation fails.
BEFORE UNPLUG : Before an unplug operation, the trigger fires in the PDB and then the trigger is deleted. If the trigger fails, the unplug operation fails.
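For example, a minimal sketch of an AFTER CLONE trigger (the trigger name and body are assumptions); note the explicit ON PLUGGABLE DATABASE clause these events require:

```
CREATE OR REPLACE TRIGGER after_clone_trg
AFTER CLONE ON PLUGGABLE DATABASE
BEGIN
-- Post-clone housekeeping for the new PDB goes here.
NULL;
END;
/
```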
Transparent Data Encryption (TDE) in Pluggable Databases (PDBs) in Oracle Database 12c Release 1 (12.1)
Keystore Location
A keystore must be created to hold the encryption key. The search order for finding the keystore is as follows.
If present, the location specified by the ENCRYPTION_WALLET_LOCATION parameter in the "sqlnet.ora" file.
If present, the location specified by the WALLET_LOCATION parameter in the "sqlnet.ora" file.
The default location for the keystore. If the $ORACLE_BASE is set, this is "$ORACLE_BASE/admin/DB_UNIQUE_NAME/wallet", otherwise it is "$ORACLE_HOME/admin/DB_UNIQUE_NAME/wallet", where
DB_UNIQUE_NAME comes from the initialization parameter file.
Keystores should not be shared between CDBs, so if multiple CDBs are run from the same ORACLE_HOME you must do one of the following to keep them separate.
Use the default keystore location, so each CDB database has its own keystore.
Specify the location using the $ORACLE_SID.
ENCRYPTION_WALLET_LOCATION =
(SOURCE =(METHOD = FILE)(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)))
Have a separate "sqlnet.ora" for each database, making sure the TNS_ADMIN variable is set correctly.
Regardless of where you place the keystore, make sure you don't lose it. Oracle 12c is extremely sensitive to loss of the keystore. During the writing of this article I was forced to revert to a clean
snapshot several times.
Create a Keystore
Edit the "$ORACLE_HOME/network/admin/sqlnet.ora" file, adding the following entry.
ENCRYPTION_WALLET_LOCATION =
(SOURCE =(METHOD = FILE)(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)))
Create the directory to hold the keystore.
mkdir -p /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore
Connect to the root container and create the keystore.
CONN / AS SYSDBA
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/cdb1/encryption_keystore/' IDENTIFIED BY myPassword;
HOST ls /u01/app/oracle/admin/cdb1/encryption_keystore/
ewallet.p12
SQL>
You can open and close the keystore from the root container using the following commands. If the CONTAINER=ALL clause is omitted, the current container is assumed. Open the keystore for all
containers.
-- Open
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword CONTAINER=ALL;
-- Close
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY myPassword CONTAINER=ALL;
You need to create and activate a master key in the root container and one in each of the pluggable databases. Using the CONTAINER=ALL clause does it in a single step. If the CONTAINER=ALL clause is
omitted, it will only be done in the current container and will need to be done again for each PDB individually. Information about the master key is displayed using the V$ENCRYPTION_KEYS view.
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY myPassword WITH BACKUP CONTAINER=ALL;
CON_ID KEY_ID
---------- ------------------------------------------------------------------------------
0 AdaYAOior0/3v0AoZDBV8hoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
0 AYmKkQxl+U+Xv3UHVMgSJC8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
SQL>
Information about the keystore is displayed using the V$ENCRYPTION_WALLET view.
SQL>
Connect to the PDB. If you didn't create the key in the previous step, create a new master key for the PDB.
CON_ID KEY_ID
---------- ------------------------------------------------------------------------------
0 ATbrc0RkAE//v/jcxOecSGIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
SQL>
Use the Keystore for TDE
You should now be able to create a table with an encrypted column in the PDB.
CONN test/test@pdb1
-- Encrypted column
CREATE TABLE tde_test (
id NUMBER(10),
data VARCHAR2(50) ENCRYPT
);
-- Encrypted tablespace
CONN sys@pdb1 AS SYSDBA
CONN test/test@pdb1
SHUTDOWN IMMEDIATE;
STARTUP;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword;
CONN test/test@pdb1
ID DATA
---------- --------------------------------------------------
1 This is a secret!
SQL>
ID DATA
---------- --------------------------------------------------
1 This is also a secret!
SQL>
If the CDB is restarted, the keystore must be opened in both the CDB and the PDBs.
CONN / AS SYSDBA
SHUTDOWN IMMEDIATE;
STARTUP;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword CONTAINER=ALL;
CONN test/test@pdb1
ID DATA
---------- --------------------------------------------------
1 This is a secret!
SQL>
ID DATA
---------- --------------------------------------------------
1 This is also a secret!
SQL>
ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
ORAENV_ASK=YES
sqlplus /nolog
Export the key information from PDB1.
ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "mySecret" TO '/tmp/export.p12' IDENTIFIED BY myPassword;
Unplug PDB1 from CDB1.
CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
ORAENV_ASK=NO
export ORACLE_SID=cdb2
. oraenv
ORAENV_ASK=YES
sqlplus /nolog
Plug PDB1 into the CDB2 instance with the new name of PDB2.
CONN / AS SYSDBA
CREATE PLUGGABLE DATABASE pdb2 USING '/tmp/pdb1.xml';
-- If you are not using OMF, you will have to convert the paths manually.
--CREATE PLUGGABLE DATABASE pdb2 USING '/tmp/pdb1.xml'
-- FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');
CONN / AS SYSDBA
HOST mkdir -p /u01/app/oracle/admin/cdb2/encryption_keystore/
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/cdb2/encryption_keystore/' IDENTIFIED BY myPassword;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword;
Import the key information into PDB2 and restart it. Until it opens cleanly it will not register with the listener, so switch the container manually.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb2;
ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "mySecret" FROM '/tmp/export.p12' IDENTIFIED BY myPassword WITH BACKUP;
CONN test/test@pdb2
ID DATA
---------- --------------------------------------------------
1 This is a secret!
SQL>
ID DATA
---------- --------------------------------------------------
1 This is also a secret!
SQL>
Auto-Login Keystores
Creation of an auto-login keystore means you no longer need to explicitly open the keystore after a restart. The first reference to a key causes the keystore to be opened automatically, as shown below.
CONN / AS SYSDBA
ADMINISTER KEY MANAGEMENT CREATE LOCAL AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/oracle/admin/cdb1/encryption_keystore/' IDENTIFIED BY myPassword;
SHUTDOWN IMMEDIATE;
STARTUP
CONN test/test@pdb1
ID DATA
---------- --------------------------------------------------
1 This is a secret!
SQL>
ID DATA
---------- --------------------------------------------------
1 This is also a secret!
SQL>
SYSKM
Key management can be performed by any user with the SYSDBA or SYSKM privilege.
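As a sketch, assuming a hypothetical keyadmin user, the SYSKM privilege is granted and used as follows:

```
GRANT SYSKM TO keyadmin;
CONN keyadmin AS SYSKM
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword;
```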
PDBs With Different Time Zones to the CDB in Oracle Database 12c Release 1 (12.1)
CONN / AS SYSDBA
SELECT DBTIMEZONE AS dbtime FROM dual;
DBTIME
------
+00:00
SQL>
Reset the time zone using the ALTER DATABASE command to specify the new TIME_ZONE value. The database will need to be restarted for this to take effect.
CONN / AS SYSDBA
ALTER DATABASE SET TIME_ZONE = 'Europe/London';
SHUTDOWN IMMEDIATE;
STARTUP;
We can see the database time zone has been changed.
CONN / AS SYSDBA
SELECT DBTIMEZONE FROM dual;
DBTIMEZONE
-------------
Europe/London
SQL>
Pluggable Database (PDB) Level
Setting the time zone in the pluggable database allows it to override the CDB setting.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
SELECT DBTIMEZONE AS dbtime FROM dual;
DBTIME
------
-07:00
SQL>
Reset the time zone using the ALTER DATABASE command to specify the new TIME_ZONE value. The pluggable database will need to be restarted for this to take effect.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
ALTER DATABASE SET TIME_ZONE = 'US/Eastern';
SHUTDOWN IMMEDIATE;
STARTUP;
We can see the pluggable database time zone is different to the container database.
CONN / AS SYSDBA
SELECT DBTIMEZONE FROM dual;
DBTIMEZONE
-------------
Europe/London
SQL>
ALTER SESSION SET CONTAINER = pdb1;
SELECT DBTIMEZONE FROM dual;
DBTIMEZONE
----------
US/Eastern
SQL>
Using the Database Upgrade Assistant (DBUA) against a container database (CDB) will upgrade all the associated pluggable databases (PDBs) also. If you don't want to commit to upgrading all the PDBs in
one step, you can upgrade them individually, or a subset of the PDBs, using the unplug/plugin method.
This article describes the method for upgrading a PDB using the unplug/plugin method. It assumes you have the following container databases.
export ORACLE_BASE=/u01/app/oracle
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus /nolog
From Oracle 12.2 onward the "preupgrd.sql" script has been removed and replaced by the "preupgrade.jar" file. The "preupgrade.jar" file is shipped with the Oracle software, but you should really download the latest version from MOS note 884522.1.
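As a sketch of the 12.2-style invocation (the paths are assumptions; check the MOS note for the current syntax), the JAR is run with the Java shipped in the Oracle home:

```
$ $ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/preupgrade.jar TERMINAL TEXT
```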
Run the "preupgrd.sql" script from the "12.1.0.2" home, not the current 12.1.0.1 home!
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
@/u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/preupgrd.sql
***************************************************************************
Executing Pre-Upgrade Checks in PDB1...
***************************************************************************
************************************************************
The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
prior to attempting your upgrade.
Failure to do so will result in a failed upgrade.
************************************************************
************************************************************
ACTIONS REQUIRED:
************************************************************
***************************************************************************
Pre-Upgrade Checks in PDB1 Completed.
***************************************************************************
***************************************************************************
***************************************************************************
SQL>
The output displays the generated scripts, including the "preupgrade.log" file. Both the log file and fixup scripts will be in the "$ORACLE_BASE/cfgtoollogs" directory or the "$ORACLE_HOME/cfgtoollogs"
directory, depending on whether the $ORACLE_BASE has been specified or not. Run the fixup script and perform any manual tasks listed in the "preupgrade.log" file. These should be listed by the
"preupgrade_fixups.sql" script also.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
@/u01/app/oracle/cfgtoollogs/cdb1/preupgrade/preupgrade_fixups.sql
Pre-Upgrade Fixup Script Generated on 2015-02-16 09:40:04 Version: 12.1.0.2 Build: 006
Beginning Pre-Upgrade Fixups...
Executing in container PDB1
**********************************************************************
Check Tag: APEX_UPGRADE_MSG
Check Summary: Check that APEX will need to be upgraded.
Fix Summary: Oracle Application Express can be manually upgraded prior to database upgrade.
**********************************************************************
Fixup Returned Information:
INFORMATION: --> Oracle Application Express (APEX) can be
manually upgraded prior to database upgrade
**********************************************************************
[Pre-Upgrade Recommendations]
**********************************************************************
*****************************************
********* Dictionary Statistics *********
*****************************************
**************************************************
************* Fixup Summary ************
EXEC DBMS_STATS.gather_dictionary_stats;
Connect to the root container and unplug the PDB.
CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
EXIT;
Upgrade the PDB
The PDB must be plugged into the destination CDB and upgraded.
export ORACLE_BASE=/u01/app/oracle
export ORAENV_ASK=NO
export ORACLE_SID=cdb2
. oraenv
export ORAENV_ASK=YES
sqlplus /nolog
Plug the "pdb1" pluggable database into the "cdb2" container, then open it in upgrade mode.
CONN / AS SYSDBA
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml';
ALTER PLUGGABLE DATABASE pdb1 OPEN UPGRADE;
EXIT;
Don't worry about the "Warning: PDB altered with errors." message at this point.
Run the "catupgrd.sql" script against the PDB. Notice the use of the "-c" flag to specify an inclusion list. If you were upgrading multiple PDBs, you could list them in a space-separated list so they are all
upgraded in a single step.
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -c "pdb1" -l /tmp catupgrd.sql
[CONTAINER NAMES]
CDB$ROOT
PDB$SEED
PDB1
PDB Inclusion:[PDB1] Exclusion:[]
Starting
[/u01/app/oracle/product/12.1.0.2/db_1/perl/bin/perl catctl.pl -c 'PDB1' -l /tmp -I -i pdb1 -n 2 catupgrd.sql]
catcon: See /tmp/catupgrdpdb1_*.lst files for spool files, if any
Number of Cpus =2
SQL PDB Process Count = 2
SQL Process Count = 2
[CONTAINER NAMES]
CDB$ROOT
PDB$SEED
PDB1
PDB Inclusion:[PDB1] Exclusion:[]
------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[PDB1] Exclusion:[]
Serial Phase #: 0 Files: 1 Time: 13s PDB1
Serial Phase #: 1 Files: 5 Time: 34s PDB1
Restart Phase #: 2 Files: 1 Time: 0s PDB1
Parallel Phase #: 3 Files: 18 Time: 11s PDB1
Restart Phase #: 4 Files: 1 Time: 0s PDB1
Serial Phase #: 5 Files: 5 Time: 14s PDB1
Serial Phase #: 6 Files: 1 Time: 7s PDB1
Serial Phase #: 7 Files: 4 Time: 6s PDB1
Restart Phase #: 8 Files: 1 Time: 0s PDB1
Parallel Phase #: 9 Files: 62 Time: 47s PDB1
Restart Phase #:10 Files: 1 Time: 0s PDB1
Serial Phase #:11 Files: 1 Time: 11s PDB1
Restart Phase #:12 Files: 1 Time: 0s PDB1
Parallel Phase #:13 Files: 91 Time: 8s PDB1
Restart Phase #:14 Files: 1 Time: 0s PDB1
Parallel Phase #:15 Files: 111 Time: 11s PDB1
Restart Phase #:16 Files: 1 Time: 0s PDB1
Serial Phase #:17 Files: 3 Time: 1s PDB1
Restart Phase #:18 Files: 1 Time: 0s PDB1
Parallel Phase #:19 Files: 32 Time: 19s PDB1
Restart Phase #:20 Files: 1 Time: 0s PDB1
Serial Phase #:21 Files: 3 Time: 6s PDB1
Restart Phase #:22 Files: 1 Time: 0s PDB1
Parallel Phase #:23 Files: 23 Time: 79s PDB1
Restart Phase #:24 Files: 1 Time: 0s PDB1
Parallel Phase #:25 Files: 11 Time: 34s PDB1
Restart Phase #:26 Files: 1 Time: 0s PDB1
Serial Phase #:27 Files: 1 Time: 0s PDB1
Restart Phase #:28 Files: 1 Time: 0s PDB1
Serial Phase #:30 Files: 1 Time: 0s PDB1
Serial Phase #:31 Files: 257 Time: 22s PDB1
Serial Phase #:32 Files: 1 Time: 0s PDB1
Restart Phase #:33 Files: 1 Time: 0s PDB1
Serial Phase #:34 Files: 1 Time: 1s PDB1
Restart Phase #:35 Files: 1 Time: 0s PDB1
Restart Phase #:36 Files: 1 Time: 0s PDB1
Serial Phase #:37 Files: 4 Time: 37s PDB1
Restart Phase #:38 Files: 1 Time: 0s PDB1
Parallel Phase #:39 Files: 13 Time: 51s PDB1
Restart Phase #:40 Files: 1 Time: 0s PDB1
Parallel Phase #:41 Files: 10 Time: 5s PDB1
Restart Phase #:42 Files: 1 Time: 0s PDB1
Serial Phase #:43 Files: 1 Time: 5s PDB1
Restart Phase #:44 Files: 1 Time: 0s PDB1
Serial Phase #:45 Files: 1 Time: 1s PDB1
Serial Phase #:46 Files: 1 Time: 1s PDB1
Restart Phase #:47 Files: 1 Time: 0s PDB1
Serial Phase #:48 Files: 1 Time: 164s PDB1
Restart Phase #:49 Files: 1 Time: 0s PDB1
Serial Phase #:50 Files: 1 Time: 33s PDB1
Restart Phase #:51 Files: 1 Time: 0s PDB1
Serial Phase #:52 Files: 1 Time: 38s PDB1
Restart Phase #:53 Files: 1 Time: 0s PDB1
Serial Phase #:54 Files: 1 Time: 44s PDB1
Restart Phase #:55 Files: 1 Time: 0s PDB1
Serial Phase #:56 Files: 1 Time: 58s PDB1
Restart Phase #:57 Files: 1 Time: 1s PDB1
Serial Phase #:58 Files: 1 Time: 73s PDB1
Restart Phase #:59 Files: 1 Time: 0s PDB1
Serial Phase #:60 Files: 1 Time: 88s PDB1
Restart Phase #:61 Files: 1 Time: 0s PDB1
Serial Phase #:62 Files: 1 Time: 117s PDB1
Restart Phase #:63 Files: 1 Time: 0s PDB1
Serial Phase #:64 Files: 1 Time: 0s PDB1
Serial Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0.2/db_1/lib;
export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0.2/db_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin
-I /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/../../sqlpatch
/u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/../../sqlpatch/sqlpatch.pl
-verbose -upgrade_mode_only -pdbs PDB1 > /tmp/catupgrdpdb1_datapatch_upgrade.log 2>
/tmp/catupgrdpdb1_datapatch_upgrade.err
returned from sqlpatch
Time: 13s PDB1
Serial Phase #:66 Files: 1 Time: 3s PDB1
Serial Phase #:68 Files: 1 Time: 3s PDB1
Serial Phase #:69 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0.2/db_1/lib;
export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0.2/db_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin
-I /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/../../sqlpatch
/u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -pdbs PDB1 >
/tmp/catupgrdpdb1_datapatch_normal.log 2> /tmp/catupgrdpdb1_datapatch_normal.err
returned from sqlpatch
Time: 8s PDB1
Serial Phase #:70 Files: 1 Time: 70s PDB1
Serial Phase #:71 Files: 1 Time: 6s PDB1
Serial Phase #:72 Files: 1 Time: 4s PDB1
Serial Phase #:73 Files: 1 Time: 0s PDB1
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
STARTUP;
@?/rdbms/admin/utlrp.sql
Run the "postupgrade_fixups.sql" script. Remember to perform any recommended manual steps.
@/u01/app/oracle/cfgtoollogs/cdb1/preupgrade/postupgrade_fixups.sql
Post Upgrade Fixup Script Generated on 2015-02-16 09:40:04 Version: 12.1.0.2 Build: 006
Beginning Post-Upgrade Fixups...
**********************************************************************
[Post-Upgrade Recommendations]
**********************************************************************
*****************************************
******** Fixed Object Statistics ********
*****************************************
**************************************************
************* Fixup Summary ************
**************************************************
*************** Post Upgrade Fixup Script Complete ********************
SQL>
EXECUTE DBMS_STATS.gather_fixed_objects_stats;
Recovery Manager (RMAN) Database Duplication Enhancements in Oracle Database 12c Release 1 (12.1)
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
Active Database Duplication using Compressed Backup Sets
In addition to conventional backup sets, active duplicates can also be performed using compressed backup sets by adding the USING COMPRESSED BACKUPSET clause, which further reduces the amount
of data passing over the network. The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using compressed backup sets.
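The RMAN command itself was not captured in these notes. A sketch of what such a duplicate might look like, reusing the SPFILE conversion clauses shown earlier (connection strings and paths are illustrative):

```sql
-- Sketch only: active duplicate using compressed backup sets.
-- rman target sys/Password1@cdb1 auxiliary sys/Password1@cdb2

DUPLICATE DATABASE TO cdb2
  FROM ACTIVE DATABASE
  USING COMPRESSED BACKUPSET
  SPFILE
    parameter_value_convert ('cdb1','cdb2')
    set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
  NOFILENAMECHECK;
```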
Oracle allows backup sets to be encrypted. Transparent encryption uses a wallet to hold the encryption key and is seamless to the DBA, since backup sets are encrypted and decrypted as required using
the wallet. Password encryption requires the DBA to enter a password for each backup and restore operation.
Since Oracle 12c now supports active duplicates using backup sets, it also supports encryption of those backup sets using both methods.
If the source database uses transparent encryption of backups, the wallet containing the encryption key must be made available on the destination database.
If password encryption is used on the source database, the SET ENCRYPTION ON IDENTIFIED BY <password> command can be used to define an encryption password for the active duplication process. If
you are running in mixed mode, you can use SET ENCRYPTION ON IDENTIFIED BY <password> ONLY to override transparent encryption.
The encryption algorithm used by the active duplication can be set using the SET ENCRYPTION ALGORITHM command, where the possible algorithms can be displayed using the
V$RMAN_ENCRYPTION_ALGORITHMS view. If the encryption algorithm is not set, the default (AES128) is used.
The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using password encrypted backup sets.
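The command itself is missing from the notes. A sketch of the general shape, with an illustrative password and an abbreviated SPFILE clause:

```sql
-- Sketch only: define a password for the encrypted backup sets, then duplicate.
SET ENCRYPTION ON IDENTIFIED BY Password1;

DUPLICATE DATABASE TO cdb2
  FROM ACTIVE DATABASE
  SPFILE parameter_value_convert ('cdb1','cdb2')
  NOFILENAMECHECK;
```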
There must be multiple channels available for multisection backups to work, so you will either need to configure persistent channel parallelism using CONFIGURE DEVICE TYPE ... PARALLELISM or set
the parallelism for the current operation by performing multiple ALLOCATE CHANNEL commands.
The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using multisection backups.
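The command is not shown in the notes. A sketch under the assumptions above (the section size is illustrative, and the exact clause ordering may differ):

```sql
-- Sketch only: persistent parallelism, then a multisection active duplicate.
CONFIGURE DEVICE TYPE disk PARALLELISM 2;

DUPLICATE DATABASE TO cdb2
  FROM ACTIVE DATABASE
  SECTION SIZE 400M
  SPFILE parameter_value_convert ('cdb1','cdb2')
  NOFILENAMECHECK;
```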
If you are building an "initSID.ora" file from scratch, you must remember to include the following parameter.
enable_pluggable_database=TRUE
The previous examples didn't have to do this as the SPFILE was created as a copy of the source SPFILE, which already contained this parameter setting.
The DUPLICATE command includes some additional clauses related to the multitenant option.
Adding the PLUGGABLE DATABASE clause allows you to specify which pluggable databases should be included in the duplication. The following example creates a new container database (cdb2), but it
only contains two pluggable databases (pdb1 and pdb2). The third pluggable database (pdb3) is not included in the clone.
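The DUPLICATE command for this example is missing from the notes. A sketch of the relevant clause (other clauses abbreviated):

```sql
-- Sketch only: duplicate the CDB, including just two of its PDBs.
DUPLICATE DATABASE TO cdb2
  PLUGGABLE DATABASE pdb1, pdb2
  FROM ACTIVE DATABASE
  SPFILE parameter_value_convert ('cdb1','cdb2')
  NOFILENAMECHECK;
```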
NAME
------------------------------
PDB$SEED
PDB1
PDB2
SQL>
Using the SKIP PLUGGABLE DATABASE clause will create a duplicate CDB with all the PDBs except those in the list. The following example creates a container database (cdb2) with a single pluggable
database (pdb3). The other two pluggable databases (pdb1 and pdb2) are excluded from the clone.
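Again, the command itself was not captured. A sketch of the exclusion form (other clauses abbreviated):

```sql
-- Sketch only: duplicate the CDB, excluding the listed PDBs.
DUPLICATE DATABASE TO cdb2
  SKIP PLUGGABLE DATABASE pdb1, pdb2
  FROM ACTIVE DATABASE
  SPFILE parameter_value_convert ('cdb1','cdb2')
  NOFILENAMECHECK;
```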
NAME
------------------------------
PDB$SEED
PDB3
SQL>
You can also limit the tablespaces that are included in a PDB using the TABLESPACE clause. If we connect to the source container database (cdb1) and check the tablespaces in the pdb1 pluggable
database we see the following.
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TEST_TS
SQL>
Next, we perform a duplicate for the whole of the pdb2 pluggable database, but just the TEST_TS tablespace in the pdb1 pluggable database.
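The command is not shown in the notes. A sketch using the pdb:tablespace notation (other clauses abbreviated):

```sql
-- Sketch only: whole of pdb2, plus just TEST_TS from pdb1.
DUPLICATE DATABASE TO cdb2
  PLUGGABLE DATABASE pdb2
  TABLESPACE pdb1:test_ts
  FROM ACTIVE DATABASE
  SPFILE parameter_value_convert ('cdb1','cdb2')
  NOFILENAMECHECK;
```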
SELECT name FROM v$pdbs;
NAME
------------------------------
PDB$SEED
PDB1
PDB2
SQL>
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
TEST_TS
SQL>
Clones always contain a fully functional CDB and functional PDBs. Even when we just ask for the TEST_TS tablespace in pdb1, we also get the SYSTEM, SYSAUX and TEMP tablespaces in the PDB. The
TABLESPACE clause can be used on its own without the PLUGGABLE DATABASE clause, if no full PDBs are to be duplicated.
The SKIP TABLESPACE clause allows you to exclude specific tablespaces, rather than use the inclusion approach. The following example clones all the pluggable databases, but excludes the TEST_TS
tablespace from pdb1 during the duplicate.
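The command for this case is also missing. A sketch of the exclusion form (other clauses abbreviated):

```sql
-- Sketch only: all PDBs, excluding a single tablespace from pdb1.
DUPLICATE DATABASE TO cdb2
  SKIP TABLESPACE pdb1:test_ts
  FROM ACTIVE DATABASE
  SPFILE parameter_value_convert ('cdb1','cdb2')
  NOFILENAMECHECK;
```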
NAME
------------------------------
PDB$SEED
PDB1
PDB2
PDB3
SQL>
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
SQL>
Appendix
The examples in this article are based on the following assumptions.
The source database is a container database (cdb1), with three pluggable databases (pdb1, pdb2 and pdb3).
The destination database is called cdb2.
Both the source and destination databases use file system storage and do not use Oracle Managed Files (OMF), hence the need for the file name conversions.
The basic setup for active duplicates was performed using the same process described for 11g.
Between every test the following clean-up was performed.
SHUTDOWN IMMEDIATE;
EXIT;
EOF
# Connect RMAN to the target and auxiliary instances using the tnsnames.ora entries.
rman target sys/Password1@cdb1 auxiliary sys/Password1@cdb2
In the initial release of Oracle Database 12c Release 1 (12.1.0.1) remote cloning of PDBs was listed as a feature, but it didn't work. The 12.1.0.2 patch has fixed that, but also added the ability to create a
PDB as a clone of a remote non-CDB database.
Prerequisites
The prerequisites for cloning a remote PDB or non-CDB are very similar, so I will deal with them together.
In this context, the word "local" refers to the destination or target CDB that will house the cloned PDB. The word "remote" refers to the PDB or non-CDB that is the source of the clone.
The user in the local database must have the CREATE PLUGGABLE DATABASE privilege in the root container.
The remote database (PDB or non-CDB) must be open in read-only mode.
The local database must have a database link to the remote database. If the remote database is a PDB, the database link can point to the remote CDB using a common user, or the PDB using a local or
common user.
The user in the remote database that the database link connects to must have the CREATE PLUGGABLE DATABASE privilege.
The local and remote databases must have the same endianness, options installed and character sets.
If the remote database uses Transparent Data Encryption (TDE), the local CDB must be configured appropriately before attempting the clone. If not, you will be left with a new PDB that will only open in
restricted mode.
The default tablespaces for each common user in the remote PDB *must* exist in the local CDB. If they don't, create the missing tablespaces in the root container of the local CDB. If you don't do this,
your new PDB will only be able to open in restricted mode (Bug 19174942).
When cloning from a non-CDB, both the local and remote databases must be using version 12.1.0.2 or higher.
In the examples below I have three databases running on the same virtual machine, but they could be running on separate physical or virtual servers.
cdb1 : The local database that will eventually house the clones.
db12c : The remote non-CDB.
cdb3 : The remote CDB, used for cloning a remote PDB (pdb5).
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a user in the remote database for use with the database link. In this case, we will use a local user in the remote PDB.
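The statements themselves were not captured in the notes. A minimal sketch (the user name and password are illustrative; the CREATE PLUGGABLE DATABASE privilege is required, per the prerequisites above):

```sql
-- Sketch only: local user in the remote PDB, for use by the database link.
ALTER SESSION SET CONTAINER = pdb5;

CREATE USER remote_clone_user IDENTIFIED BY remote_clone_user;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO remote_clone_user;
```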
CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb5 CLOSE;
ALTER PLUGGABLE DATABASE pdb5 OPEN READ ONLY;
EXIT;
Switch to the local server and create a "tnsnames.ora" entry pointing to the remote database for use in the USING clause of the database link.
PDB5 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol7-121.localdomain)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = pdb5)
)
)
Connect to the local database to initiate the clone.
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a database link in the local database, pointing to the remote database.
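The statement is missing from the notes. Given the link is tested as "clone_link" below, it would look something like this (credentials assumed to match a user created in the remote PDB for this purpose):

```sql
-- Sketch only: database link from the local CDB root to the remote PDB.
CREATE DATABASE LINK clone_link
  CONNECT TO remote_clone_user IDENTIFIED BY remote_clone_user
  USING 'pdb5';
```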
-- Test link.
DESC user_tables@clone_link
Create a new PDB in the local database by cloning the remote PDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file name
conversions.
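The clone command itself is not shown in the notes. Given the new PDB appears as PDB5NEW below, a sketch would be:

```sql
-- Sketch only: clone the remote PDB over the database link.
CREATE PLUGGABLE DATABASE pdb5new FROM pdb5@clone_link;
```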
SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.
NAME OPEN_MODE
------------------------------ ----------
PDB5NEW MOUNTED
SQL>
The PDB is opened in read-write mode to complete the process.
NAME OPEN_MODE
------------------------------ ----------
PDB5NEW READ WRITE
SQL>
As with any PDB clone, check that common users and the temporary tablespace are configured as expected.
export ORAENV_ASK=NO
export ORACLE_SID=db12c
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a user in the remote database for use with the database link.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;
EXIT;
Switch to the local server and create a "tnsnames.ora" entry pointing to the remote database for use in the USING clause of the database link.
DB12C =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol7-121.localdomain)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = db12c)
)
)
Connect to the local database to initiate the clone.
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a database link in the local database, pointing to the remote database.
-- Test link.
DESC user_tables@clone_link
Create a new PDB in the local database by cloning the remote non-CDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file
name conversions. Since there is no PDB to name, we use "NON$CDB" as the PDB name.
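The command is missing from the notes. Given the new PDB appears as DB12CPDB below, a sketch would be:

```sql
-- Sketch only: clone the remote non-CDB over the database link.
CREATE PLUGGABLE DATABASE db12cpdb FROM NON$CDB@clone_link;
```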
SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.
NAME OPEN_MODE
------------------------------ ----------
DB12CPDB MOUNTED
SQL>
Since this PDB was created as a clone of a non-CDB, before it can be opened we need to run the "$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql" script to clean it up.
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
The PDB can now be opened in read-write mode.
NAME OPEN_MODE
------------------------------ ----------
DB12CPDB READ WRITE
SQL>
The 12.1.0.2 patchset introduced the ability to do a metadata-only clone. Adding the NO DATA clause when cloning a PDB signifies that only the metadata for the user-created objects should be cloned,
not the data in the tables and indexes.
Setup
Create a clean PDB, then add a new user and a test table with some data.
CONN / AS SYSDBA
COUNT(*)
----------
1
SQL>
Metadata Clone
Perform a metadata-only clone of the PDB using the NO DATA clause.
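The command itself was not captured. A sketch, reusing the file name conversion that appears in the error example later in this section:

```sql
-- Sketch only: metadata-only clone of pdb1.
CREATE PLUGGABLE DATABASE pdb11 FROM pdb1
  FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb1/pdb11/')
  NO DATA;

ALTER PLUGGABLE DATABASE pdb11 OPEN;
```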
CONN / AS SYSDBA
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb11;
COUNT(*)
----------
0
SQL>
Restrictions
The NO DATA clause is only valid if the source PDB doesn't contain any of the following.
Index-organized tables
Advanced Queue (AQ) tables
Clustered tables
Table clusters
If it does, you will get the following type of error.
SQL> CREATE PLUGGABLE DATABASE pdb11 FROM pdb1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb1/pdb11/')
NO DATA;
CREATE PLUGGABLE DATABASE pdb11 FROM pdb1
*
ERROR at line 1:
ORA-65161: Unable to create pluggable database with no data
SQL>
Setup
To see this feature working we will create a clean PDB, then add 3 new tablespaces, each with a default user and a single object in them. This will mimic a situation where a single database has been used
to consolidate three different applications.
CONN / AS SYSDBA
DEFAULT TABLESPACE ts2
QUOTA UNLIMITED ON ts2;
SQL>
CONN / AS SYSDBA
TABLESPACE_NAME
--------------------
SYSTEM
SYSAUX
TEMP
TS1
TS2
TS3
6 rows selected.
SQL>
If we try to access the objects from each schema, we see this is not the case.
ID
----------
1
ID
----------
1
SQL>
As requested, the datafile for the TS3 tablespace has not been cloned, so we should do some post-clone clean up to make the PDB look consistent.
Setup
We need to create 3 PDBs to test the CONTAINERS clause. The setup code below does the following.
CONN / AS SYSDBA
CONN / AS SYSDBA
Creates a common user called C##COMMON_USER that owns an empty table called COMMON_USER_TAB in the root container.
Creates a populated version of the COMMON_USER_TAB table owned by the C##COMMON_USER user in each PDB.
Grants select privilege on the local user's table to the common user.
-- Create a common user that owns an empty table.
CONN / AS SYSDBA
CREATE USER c##common_user IDENTIFIED BY Common1 QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SYNONYM TO c##common_user CONTAINER=ALL;
CONN c##common_user/Common1@pdb1
CONN c##common_user/Common1@pdb2
CONN c##common_user/Common1@pdb3
CONN local_user/Local1@pdb2
GRANT SELECT ON local_user_tab TO c##common_user;
CONN local_user/Local1@pdb3
GRANT SELECT ON local_user_tab TO c##common_user;
CONN / AS SYSDBA
CONTAINERS Clause with Common Users
The CONTAINERS clause can only be used from a common user in the root container. With no additional changes we can query the COMMON_USER_TAB tables present in the common user in all the
containers. The most basic use of the CONTAINERS clause is shown below.
CONN c##common_user/Common1
SELECT *
FROM CONTAINERS(common_user_tab);
ID CON_ID
---------- ----------
1 4
2 4
1 5
2 5
1 3
2 3
6 rows selected.
SQL>
Notice the CON_ID column has been added to the column list, to indicate which container the result came from. This allows us to query a subset of the containers.
SELECT con_id, id
FROM CONTAINERS(common_user_tab)
WHERE con_id IN (3, 4)
ORDER BY con_id, id;
CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
4 rows selected.
SQL>
CONN c##common_user/Common1
CREATE TABLE c##common_user.local_user_tab_v (id NUMBER);
CONN c##common_user/Common1@pdb1
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;
CONN c##common_user/Common1@pdb2
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;
CONN c##common_user/Common1@pdb3
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;
With the blank table and views in place we can now use the CONTAINERS clause indirectly against the local user objects.
CONN c##common_user/Common1
SELECT con_id, id
FROM CONTAINERS(local_user_tab_v)
ORDER BY con_id, id;
CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
5 1
5 2
6 rows selected.
SQL>
The documentation suggests the use of synonyms in place of views will not work, since the synonyms must resolve to objects owned by the common user issuing the query.
"When a synonym is specified in the CONTAINERS clause, the synonym must resolve to a table or a view owned by the common user issuing the statement."
That's not quite true from my tests, but it doesn't stop you from using synonyms to local objects in the PDBs, provided the object in the root container is not a synonym. The following example uses a
real object in the root container, and local objects via synonyms in the pluggable databases.
CONN c##common_user/Common1
CREATE TABLE c##common_user.local_user_tab_syn (id NUMBER);
CONN c##common_user/Common1@pdb1
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;
CONN c##common_user/Common1@pdb2
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;
CONN c##common_user/Common1@pdb3
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;
CONN c##common_user/Common1
SELECT con_id, id
FROM CONTAINERS(local_user_tab_syn)
ORDER BY con_id, id;
CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
5 1
5 2
6 rows selected.
SQL>
Let's see what happens if we drop the common user table and replace it with a synonym of the same name, pointing to a table of the same structure as the local tables, but owned by the common user.
CONN c##common_user/Common1
SELECT *
FROM CONTAINERS(local_user_tab_syn);
SELECT *
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P004
ORA-00942: table or view does not exist
SQL>
If the synonyms consistently point to an object owned by the common user, it still doesn't work.
CONN c##common_user/Common1@pdb1
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER
SQL>
CONN c##common_user/Common1@pdb2
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER
SQL>
CONN c##common_user/Common1@pdb3
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER
SQL>
CONN c##common_user/Common1
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER
SQL>
SELECT con_id, id
FROM CONTAINERS(local_user_tab_syn)
ORDER BY con_id, id;
FROM CONTAINERS(local_user_tab_syn)
*
ERROR at line 2:
ORA-00942: table or view does not exist
SQL>
I'm not sure what the wording in the documentation means, but it doesn't read well to me.
The hint is placed in the select list as usual, with the basic syntax as follows. Substitute the hint you want in place of "<<PUT-HINT-HERE>>".
/*+ CONTAINERS(DEFAULT_PDB_HINT='<<PUT-HINT-HERE>>') */
As an example, we will run a query against the ALL_OBJECTS view and check the elapsed time.
CONN / AS SYSDBA
SET TIMING ON
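The timed query itself is missing from these notes. A sketch consistent with the output below:

```sql
-- Sketch only: aggregate over ALL_OBJECTS across all containers.
SELECT con_id, MAX(object_id)
FROM   CONTAINERS(all_objects)
GROUP  BY con_id
ORDER  BY con_id;
```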
CON_ID MAX(OBJECT_ID)
---------- --------------
1 75316
3 73209
4 73330
5 73323
Elapsed: 00:00:00.31
SQL>
We repeat the query, but this time add a PARALLEL(2) hint to the recursive queries run in each PDB, which should make the elapsed time slower on this small VM.
CON_ID MAX(OBJECT_ID)
---------- --------------
1 75316
3 73209
4 73340
5 73323
Elapsed: 00:00:06.17
SQL>
Notice the significantly longer elapsed time as a result of the parallel operations in the recursive SQL.
Clean Up
You can clean up all the pluggable databases and the common user created for these examples using the following script.
CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb2 CLOSE;
ALTER PLUGGABLE DATABASE pdb3 CLOSE;
DROP PLUGGABLE DATABASE pdb1 INCLUDING DATAFILES;
DROP PLUGGABLE DATABASE pdb2 INCLUDING DATAFILES;
DROP PLUGGABLE DATABASE pdb3 INCLUDING DATAFILES;
There are some issues with this feature unless you apply the relevant patch.
This feature does not work in the stock 12.1.0.2 release due to bug 18902135. The PDB logging clause is ignored when creating a new tablespace.
After you apply the 18902135 patch, if you set the PDB logging clause to NOLOGGING, the PDB logging clause is *always* used to determine the logging setting of the tablespace. It can't be overridden
by explicitly setting the logging clause in the CREATE TABLESPACE statement. This goes against what the documentation states, so it appears the bug fix has introduced a new bug! If you set the PDB
logging clause to LOGGING, the setting can still be overridden at the tablespace level.
The feature has finally been fixed if you apply the 20961627 patch.
CONN / AS SYSDBA
PDB_NAME LOGGING
-------------------- ---------
PDB$SEED LOGGING
PDB1 LOGGING
PDB2 LOGGING
PDB5 NOLOGGING
4 rows selected.
SQL>
If we create a new tablespace in the PDB without an explicit logging clause, we can see the default logging clause is used.
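The statements were not captured in the notes. A sketch (sizing is illustrative; assumes OMF, and that pdb5 has the NOLOGGING default shown above):

```sql
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb5;

-- No explicit logging clause, so the PDB default logging clause applies.
CREATE TABLESPACE test1_ts DATAFILE SIZE 10M;

SELECT tablespace_name, logging
FROM   dba_tablespaces
ORDER BY 1;
```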
TABLESPACE_NAME LOGGING
------------------------------ ---------
SYSAUX LOGGING
SYSTEM LOGGING
TEMP NOLOGGING
TEST1_TS NOLOGGING
4 rows selected.
SQL>
The default logging clause can be overridden if an explicit logging clause is used during tablespace creation.
TABLESPACE_NAME LOGGING
------------------------------ ---------
SYSAUX LOGGING
SYSTEM LOGGING
TEMP NOLOGGING
TEST1_TS LOGGING
4 rows selected.
SQL>
ALTER PLUGGABLE DATABASE
The PDB logging clause can also be set using the ALTER PLUGGABLE DATABASE command. In this case, the effect is seen during the creation of new tablespaces in the PDB.
TABLESPACE_NAME LOGGING
------------------------------ ---------
SYSAUX LOGGING
SYSTEM LOGGING
TEMP NOLOGGING
TEST1_TS LOGGING
4 rows selected.
SQL>
The default logging clause can be overridden if an explicit logging clause is used during tablespace creation.
TABLESPACE_NAME LOGGING
------------------------------ ---------
SYSAUX LOGGING
SYSTEM LOGGING
TEMP NOLOGGING
TEST1_TS NOLOGGING
4 rows selected.
SQL>
Default Tablespace Clause During PDB Creation in Oracle Database 12c Release 2 (12.2)
Default Tablespace Clause in 12.1
In both Oracle database 12.1 and 12.2 the DEFAULT TABLESPACE clause of the CREATE PLUGGABLE DATABASE command can be used to create a new default tablespace for a pluggable database created
from the seed.
The following example gives both the Oracle Managed Files (OMF) and non-OMF syntax. All further examples will assume you are using OMF. You can add the appropriate FILE_NAME_CONVERT or
PDB_FILE_NAME_CONVERT settings if you need them.
CONN / AS SYSDBA
-- Non-OMF syntax.
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb2/')
DEFAULT TABLESPACE users DATAFILE '/u01/app/oracle/oradata/cdb1/pdb2/users01.dbf' SIZE 1M AUTOEXTEND ON NEXT 1M;
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb2;
SELECT tablespace_name
FROM dba_tablespaces
ORDER BY 1;
TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
TEMP
USERS
SQL>
SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_PERMANENT_TABLESPACE';
PROPERTY_VALUE
----------------------------------------------------------------------------------------------------
USERS
SQL>
The DEFAULT TABLESPACE clause can't be used when creating a pluggable database from a user-defined PDB. In the examples below we attempt to use it both to specify a new default tablespace and
reference an existing tablespace. Both result in an error.
CONN / AS SYSDBA
SQL>
SQL>
Default Tablespace Clause in 12.2
In Oracle database 12.2 the DEFAULT TABLESPACE clause can be used regardless of the source of the clone. If the source is the seed PDB, the clause is used to create a new default tablespace, as it was in
Oracle 12.1. If the source is a user-defined PDB, the clause specifies which existing tablespace in the new PDB should be set as the default tablespace.
To see this in action, create a new pluggable database from the seed as we did before.
CONN / AS SYSDBA
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb2;
CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb3 OPEN;
We can see the tablespaces in the new PDB are the same as the source, but it's now using the TEST_TS tablespace, rather than the USERS tablespace, as the database default tablespace.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb3;
SELECT tablespace_name
FROM dba_tablespaces
ORDER BY 1;
TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
TEMP
TEST_TS
UNDOTBS1
USERS
SQL>
SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_PERMANENT_TABLESPACE';
PROPERTY_VALUE
----------------------------------------------------------------------------------------------------
TEST_TS
SQL>
Attempting to create a new tablespace, rather than reference an existing tablespace, during the creation process still results in an error.
CONN / AS SYSDBA
-- Clean up.
ALTER PLUGGABLE DATABASE pdb3 CLOSE;
DROP PLUGGABLE DATABASE pdb3 INCLUDING DATAFILES;
Disk I/O (IOPS, MBPS) Resource Management for PDBs in Oracle Database 12c Release 2 (12.2)
In the previous release there was no easy way to control the amount of disk I/O used by an individual PDB. As a result a "noisy neighbour" could use up lots of disk I/O and impact the performance of
other PDBs in the same instance. Oracle Database 12c Release 2 (12.2) allows you to control the amount of disk I/O used by a PDB, making consolidation more reliable.
MAX_IOPS : The maximum I/O operations per second for the PDB. Default "0". Values less than 100 IOPS are not recommended.
MAX_MBPS : The maximum megabytes of I/O per second for the PDB. Default "0". Values less than 25 MBPS are not recommended.
Some things to consider about their usage are listed below.
The parameters are independent. You can use none, one or both.
When the parameters are set at the CDB level they become the default values used by all PDBs.
When they are set at the PDB level they override any default values.
If the values are "0" at both the CDB and PDB level there is no I/O throttling.
Critical I/Os necessary for normal function of the instance are not limited, but do count towards the total I/O as far as the limit is concerned, so it is possible for the I/O to temporarily exceed the limit.
The parameters are only available for the multitenant architecture.
This feature is not available for Exadata.
Throttling will result in a resource manager wait event called resmgr: I/O rate limit.
Setting I/O Parameters
The example below sets the MAX_IOPS and MAX_MBPS parameters at the CDB level, the default values for all PDBs.
CONN / AS SYSDBA
-- Set defaults.
ALTER SYSTEM SET max_iops=100 SCOPE=BOTH;
ALTER SYSTEM SET max_mbps=400 SCOPE=BOTH;
-- Remove defaults.
ALTER SYSTEM SET max_iops=0 SCOPE=BOTH;
ALTER SYSTEM SET max_mbps=0 SCOPE=BOTH;
The example below sets the MAX_IOPS and MAX_MBPS parameters at the PDB level.
CONN / AS SYSDBA
Multitenant Page 51
ALTER SESSION SET CONTAINER = pdb1;
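The PDB-level commands themselves were lost from this copy, but they are presumably the same ALTER SYSTEM calls issued from within the PDB; the values here are illustrative.

```sql
-- Override the CDB defaults for this PDB (illustrative values).
ALTER SYSTEM SET max_iops=100 SCOPE=BOTH;
ALTER SYSTEM SET max_mbps=400 SCOPE=BOTH;

-- Setting them back to "0" removes the PDB-level overrides.
ALTER SYSTEM SET max_iops=0 SCOPE=BOTH;
ALTER SYSTEM SET max_mbps=0 SCOPE=BOTH;
```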
V$RSRCPDBMETRIC : A single row per PDB, holding the last of the 1 minute samples.
V$RSRCPDBMETRIC_HISTORY : 61 rows per PDB, holding the last 60 minutes worth of samples from the V$RSRCPDBMETRIC view.
DBA_HIST_RSRC_PDB_METRIC : AWR snapshots, retained based on the AWR retention period.
The following queries are examples of their usage.
CONN / AS SYSDBA
CONN / AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
We can now enable/disable flashback database with the following commands.
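The commands in question are the standard ones, sketched below; the FLASHBACK_ON output that follows comes from the confirmation query.

```sql
-- Enable flashback database (requires archivelog mode, enabled above).
ALTER DATABASE FLASHBACK ON;

-- Confirm the setting.
SELECT flashback_on FROM v$database;

-- Disable it again when no longer required.
-- ALTER DATABASE FLASHBACK OFF;
```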
FLASHBACK_ON
------------------
YES
1 row selected.
SQL>
The amount of flashback logs retained is controlled by the DB_FLASHBACK_RETENTION_TARGET parameter, which indicates the retention time in minutes.
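For example, to keep roughly a day of flashback logs (the value is illustrative):

```sql
ALTER SYSTEM SET db_flashback_retention_target=1440 SCOPE=BOTH;
```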
Creating restore points at the CDB level is the same as for the non-CDB architecture. The following examples create and drop a normal and guaranteed restore point at the CDB level.
CONN / AS SYSDBA
CONN / AS SYSDBA
CONN / AS SYSDBA
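The statements themselves were lost from this copy; they are presumably the usual ones, with illustrative restore point names below.

```sql
-- Normal restore point.
CREATE RESTORE POINT before_changes;
DROP RESTORE POINT before_changes;

-- Guaranteed restore point.
CREATE RESTORE POINT before_changes GUARANTEE FLASHBACK DATABASE;
DROP RESTORE POINT before_changes;
```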
It is preferable for the container database to be running in local undo mode, but flashback PDB does not depend on it. If the CDB is running in shared undo mode, it is more efficient to flashback to clean
restore points. These are restore points taken when the pluggable database is down, with no outstanding transactions.
Clean restore points can be created while connected to the PDB as follows.
CONN / AS SYSDBA
SHUTDOWN;
STARTUP;
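The creation step is missing from the sequence above. A clean restore point has to be created while the PDB is closed, so the full sequence presumably looks like this (restore point name illustrative).

```sql
-- Connected to the PDB.
SHUTDOWN IMMEDIATE;
CREATE RESTORE POINT pdb1_clean_rp;
STARTUP;
```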
They can also be created from the root container.
CONN / AS SYSDBA
CONN / AS SYSDBA
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT cdb1_before_changes;
ALTER DATABASE OPEN RESETLOGS;
CONN / AS SYSDBA
CONN / AS SYSDBA
CONN test/test@pdb1
CREATE TABLE t1 (
id NUMBER
);
ID
----------
1
SQL>
Flashback the PDB to the restore point.
CONN / AS SYSDBA
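The flashback statements were lost from this copy; in 12.2 the operation presumably follows this pattern (PDB and restore point names illustrative).

```sql
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
FLASHBACK PLUGGABLE DATABASE pdb1 TO RESTORE POINT pdb1_before_changes;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
```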
CONN test/test@pdb1
SQL>
Hot Clone a Remote PDB or Non-CDB in Oracle Database 12c Release 2 (12.2)
In the initial release of Oracle Database 12c Release 1 (12.1.0.1) remote cloning of PDBs was listed as a feature, but it didn't work. The 12.1.0.2 patch fixed that, but also added the ability to create a PDB as a clone of a remote non-CDB database. The biggest problem with remote cloning was the prerequisite of placing the source PDB or non-CDB into read-only mode before initiating the cloning process. This made the feature impractical for cloning production systems, as that level of downtime is typically unacceptable. Oracle Database 12c Release 2 (12.2) removes this prerequisite, which enables hot cloning of PDBs and non-CDBs for the first time.
Prerequisites
The prerequisites for cloning a remote PDB or non-CDB are very similar, so I will deal with them together.
In this context, the word "local" refers to the destination or target CDB that will house the cloned PDB. The word "remote" refers to the PDB or non-CDB that is the source of the clone.
The user in the local database must have the CREATE PLUGGABLE DATABASE privilege in the root container.
The remote CDB must use local undo mode. Without this you must open the remote PDB or non-CDB in read-only mode.
The remote database should be in archivelog mode. Without this you must open the remote PDB or non-CDB in read-only mode.
The local database must have a database link to the remote database. If the remote database is a PDB, the database link can point to the remote CDB using a common user, or to the PDB or an application container using a local or common user.
The user in the remote database that the database link connects to must have the CREATE PLUGGABLE DATABASE privilege.
The local and remote databases must have the same endianness.
The local and remote databases must either have the same options installed, or the remote database must have a subset of those present on the local database.
If the character set of the local CDB is AL32UTF8, the remote database can be any character set. If the local CDB does not use AL32UTF8, the character sets of the remote and local databases must match.
If the remote database uses Transparent Data Encryption (TDE) the local CDB must be configured appropriately before attempting the clone. If not you will be left with a new PDB that will only open in
restricted mode.
Bug 19174942 is marked as fixed in 12.2. I can't confirm this, so just in case I'll leave this here, but it should no longer be the case. The default tablespaces for each common user in the remote PDB *must* exist in the local CDB. If this is not true, create the missing tablespaces in the root container of the local CDB. If you don't do this your new PDB will only be able to open in restricted mode (Bug 19174942).
When cloning from a non-CDB, both the local and remote databases must be using version 12.1.0.2 or higher.
In the examples below I have three databases running on the same virtual machine, but they could be running on separate physical or virtual servers.
cdb1 : The local database that will eventually house the clones.
db12c : The remote non-CDB.
cdb3 : The remote CDB, used for cloning a remote PDB (pdb5).
Cloning a Remote PDB
Connect to the remote CDB and prepare the remote PDB for cloning.
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a user in the remote database for use with the database link. In this case, we will use a local user in the remote PDB.
CONN / AS SYSDBA
PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE
SQL>
SELECT log_mode
FROM v$database;
LOG_MODE
------------
ARCHIVELOG
SQL>
Because the remote CDB is in local undo mode and archivelog mode, we don't need to switch the remote PDB to read-only mode.
Switch to the local server and create a "tnsnames.ora" entry pointing to the remote database for use in the USING clause of the database link.
CDB3=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = my-server.my-domain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cdb3)
)
)
Connect to the local database to initiate the clone.
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a database link in the local database, pointing to the remote database.
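The link definition itself is missing here; something of this shape is assumed, where the user matches the one created in the remote PDB earlier (the user name and password are illustrative).

```sql
-- CLONE_LINK is the name referenced by the DESC test below.
CREATE DATABASE LINK clone_link
  CONNECT TO remote_clone_user IDENTIFIED BY remote_clone_user USING 'cdb3';
```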
-- Test link.
DESC user_tables@clone_link
Create a new PDB in the local database by cloning the remote PDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file name conversions.
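The clone command was lost from this copy; given the PDB5NEW name in the output that follows, it was presumably of this shape.

```sql
CREATE PLUGGABLE DATABASE pdb5new FROM pdb5@clone_link;
```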
SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.
NAME OPEN_MODE
------------------------------ ----------
PDB5NEW MOUNTED
SQL>
The PDB is opened in read-write mode to complete the process.
NAME OPEN_MODE
------------------------------ ----------
PDB5NEW READ WRITE
SQL>
As with any PDB clone, check that common users and the temporary tablespace are configured as expected.
Cloning a Remote Non-CDB
Connect to the remote non-CDB and prepare it for cloning.
export ORAENV_ASK=NO
export ORACLE_SID=db12c
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a user in the remote database for use with the database link.
SELECT log_mode
FROM v$database;
LOG_MODE
------------
ARCHIVELOG
SQL>
In Oracle 12.1 we would have switched the remote database to read-only mode before continuing, but this is not necessary in Oracle 12.2 provided the source database is in archivelog mode.
Switch to the local server and create a "tnsnames.ora" entry pointing to the remote database for use in the USING clause of the database link.
DB12C =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = my-server.my-domain)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = db12c)
)
)
Connect to the local database to initiate the clone.
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a database link in the local database, pointing to the remote database.
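As before, the link definition is missing from this copy; an assumed sketch, with illustrative credentials for the user created in the remote non-CDB.

```sql
CREATE DATABASE LINK clone_link
  CONNECT TO remote_clone_user IDENTIFIED BY remote_clone_user USING 'db12c';
```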
-- Test link.
DESC user_tables@clone_link
Create a new PDB in the local database by cloning the remote non-CDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file name conversions. Since there is no PDB to name, we use "NON$CDB" as the PDB name.
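The command itself is missing here; given the DB12CPDB name in the output that follows, it was presumably similar to this.

```sql
CREATE PLUGGABLE DATABASE db12cpdb FROM NON$CDB@clone_link;
```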
SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.
NAME OPEN_MODE
------------------------------ ----------
DB12CPDB MOUNTED
SQL>
Since this PDB was created as a clone of a non-CDB, before it can be opened we need to run the "$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql" script to clean it up.
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
The PDB can now be opened in read-write mode.
NAME OPEN_MODE
------------------------------ ----------
DB12CPDB READ WRITE
SQL>
As with any PDB clone, check that common users and the temporary tablespace are configured as expected.
Appendix
These tests were performed on a free trial of the Oracle Database Cloud Service, where the CDB1 instance and PDB1 pluggable database were created as part of the service creation. The additional instances were built on the same virtual machine using the commands below. I've included the DBCA commands to create and delete the CDB1 instance for completeness. They were not actually used.
# Remote container (cdb3) with PDB (pdb5).
dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb3 -sid cdb3 -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb5 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
EXIT;
EOF
You should switch to local undo mode unless you have a compelling reason not to. Some of the new multitenant features in 12.2 rely on local undo. This article demonstrates how to switch to shared
undo mode, only so you can see the process of switching back to local undo mode.
PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE
SQL>
We also check for the presence of the undo tablespaces for the root container (con_id=1) and user-defined pluggable database (con_id=3).
CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1
3 UNDOTBS1
SQL>
The following commands demonstrate how to switch to shared undo mode using the ALTER DATABASE LOCAL UNDO OFF command.
CONN / AS SYSDBA
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER DATABASE LOCAL UNDO OFF;
SHUTDOWN IMMEDIATE;
STARTUP;
Once the instance is restarted we can check the undo mode again and see we are now in shared undo mode.
PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED FALSE
SQL>
We still have the local undo tablespace for the user-defined pluggable database (con_id=3), even though the instance will no longer use it.
CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1
3 UNDOTBS1
SQL>
For clarity, we should remove it.
SELECT file_name
FROM dba_data_files
WHERE tablespace_name = 'UNDOTBS1';
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/cdb1/pdb1/undotbs01.dbf
SQL>
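The drop itself is missing from this copy; since the instance is now in shared undo mode the PDB's local undo tablespace is unused, so it can presumably be removed like this.

```sql
ALTER SESSION SET CONTAINER = pdb1;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```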
Tablespace dropped.
SQL>
The instance is now running in shared undo mode, with all old local undo tablespaces removed.
CONN / AS SYSDBA
SELECT property_name, property_value
FROM database_properties
WHERE property_name = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED FALSE
SQL>
We also check for the presence of the undo tablespaces and only see that of the root container (con_id=1).
CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1
SQL>
The following commands demonstrate how to switch to local undo mode using the ALTER DATABASE LOCAL UNDO ON command.
CONN / AS SYSDBA
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER DATABASE LOCAL UNDO ON;
SHUTDOWN IMMEDIATE;
STARTUP;
Once the instance is restarted we can check the undo mode again and see we are now in local undo mode.
PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE
SQL>
When we check for undo tablespaces we see Oracle has created a local undo tablespace for each user-defined pluggable database.
CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1
3 UNDO_1
SQL>
If we create a new pluggable database, we can see it is also created with a local undo tablespace.
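The creation command is missing here; a minimal sketch, with an illustrative name and admin user, relying on OMF for file placement.

```sql
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_admin IDENTIFIED BY Password1;
ALTER PLUGGABLE DATABASE pdb2 OPEN;
```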
CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1
3 UNDO_1
4 UNDOTBS1
SQL>
Memory Resource Management for PDBs in Oracle Database 12c Release 2 (12.2)
In the previous release there was no way to control the amount of memory used by an individual PDB. As a result a "noisy neighbour" could use up lots of memory and impact the performance of other PDBs in the same instance. Oracle Database 12c Release 2 (12.2) allows you to control the amount of memory used by a PDB, making consolidation more reliable.
PGA_AGGREGATE_TARGET : The target PGA size for the PDB.
SGA_MIN_SIZE : The minimum SGA size for the PDB.
SGA_TARGET : The maximum SGA size for the PDB.
There are a number of restrictions regarding what values can be used, which are explained in the documentation.
CONN / AS SYSDBA
SHOW PARAMETER sga_target;
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
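The parameter changes themselves were lost from this copy; they are presumably ALTER SYSTEM calls issued from within the PDB, with values subject to the documented restrictions (the values here are illustrative).

```sql
ALTER SYSTEM SET sga_min_size=400M SCOPE=BOTH;
ALTER SYSTEM SET sga_target=1G SCOPE=BOTH;
ALTER SYSTEM SET pga_aggregate_target=200M SCOPE=BOTH;
```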
System altered.
SQL>
The value can be set to "0" or reset if you no longer want to control this parameter.
V$RSRCPDBMETRIC : A single row per PDB, holding the last of the 1 minute samples.
V$RSRCPDBMETRIC_HISTORY : 61 rows per PDB, holding the last 60 minutes worth of samples from the V$RSRCPDBMETRIC view.
DBA_HIST_RSRC_PDB_METRIC : AWR snapshots, retained based on the AWR retention period.
The following queries are examples of their usage.
CONN / AS SYSDBA
FROM v$rsrcpdbmetric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
ORDER BY p.pdb_name;
From Oracle database 12.2 onward pluggable databases (PDBs) are created in parallel. You have some level of control over the number of parallel execution servers used to copy files during the creation
of a pluggable database (PDB).
The databases use Oracle Managed Files (OMF) so we don't need to worry about the FILE_NAME_CONVERT or PDB_FILE_NAME_CONVERT settings.
The following are functionally identical, both letting Oracle decide on the degree of parallelism (DOP).
-- Automatic DOP.
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5;
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5 PARALLEL;
Use an integer to manually specify the DOP. Oracle can choose to ignore this if it doesn't make sense. The DOP is limited by the number of datafiles. If the PDB only has 4 datafiles, a DOP of more than 4 will be capped at 4.
-- Manual DOP.
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5 PARALLEL 8;
To create a PDB serially, use the value "0" or "1".
-- Serial
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5 PARALLEL 0;
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5 PARALLEL 1;
Monitoring Parallel Execution Servers
If you are cloning small PDBs, like the seed, you may struggle to be quick enough to see the parallel execution servers. I used the following query whilst cloning a PDB with 10 datafiles on a system that
had no other load.
-- No PARALLEL Clause
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;
SQL>
-- PARALLEL
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;
SQL>
-- PARALLEL 1
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;
no rows selected
SQL>
-- PARALLEL 2
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;
SQL>
-- PARALLEL 4
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;
SQL>
-- PARALLEL 8
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;
SQL>
PDB Archive Files for Unplug and Plugin in Oracle Database 12c Release 2 (12.2)
In Oracle 12.1 a pluggable database could be unplugged to a ".xml" file, which describes the contents of the pluggable database. To move the PDB, you needed to manually move the ".xml" file and all
the relevant database files. In addition to this functionality, Oracle 12.2 allows a PDB to be unplugged to a ".pdb" archive file. The resulting archive file contains the ".xml" file describing the PDB as well
as all the datafiles associated with the PDB. This can simplify the transfer of the files between servers and reduce the chances of human error.
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
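The unplug itself is missing from this copy; the PDB must be closed first, and the ".pdb" extension is what triggers creation of an archive file rather than a bare XML file.

```sql
ALTER PLUGGABLE DATABASE pdb5 CLOSE;
ALTER PLUGGABLE DATABASE pdb5 UNPLUG INTO '/u01/pdb5.pdb';
```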
SQL>
You can delete the PDB and its datafiles, as they are all present in the archive file.
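The drop presumably looked like this.

```sql
DROP PLUGGABLE DATABASE pdb5 INCLUDING DATAFILES;
```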
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
SQL>
Plugin PDB from ".pdb" Archive File
Plugging a PDB into the CDB is similar to creating a new PDB. First check the PDB is compatible with the CDB by calling the DBMS_PDB.CHECK_PLUG_COMPATIBILITY function, passing in the archive file and the name of the PDB you want to create using it.
SET SERVEROUTPUT ON
DECLARE
l_result BOOLEAN;
BEGIN
l_result := DBMS_PDB.check_plug_compatibility(
pdb_descr_file => '/u01/pdb5.pdb',
pdb_name => 'pdb5');
IF l_result THEN
DBMS_OUTPUT.PUT_LINE('compatible');
ELSE
DBMS_OUTPUT.PUT_LINE('incompatible');
END IF;
END;
/
compatible
SQL>
If the PDB is not compatible, violations are listed in the PDB_PLUG_IN_VIOLATIONS view. If the PDB is compatible, create a new PDB using it as the source. If we were creating it with a new name we
might do something like this.
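The statement itself was lost from this copy. The text mentions creating with a new name, although the output that follows shows the original PDB5 name; a plugin reusing the original name was presumably of this shape.

```sql
CREATE PLUGGABLE DATABASE pdb5 USING '/u01/pdb5.pdb';
ALTER PLUGGABLE DATABASE pdb5 OPEN;
```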
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB5 READ WRITE
SQL>
Unplug PDB to ".xml" File
Before attempting to unplug a PDB, you must make sure it is closed. To unplug the database use the ALTER PLUGGABLE DATABASE command with the UNPLUG INTO clause to specify the location of the
XML metadata file.
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
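The unplug commands are missing from this copy; presumably the standard close-then-unplug sequence, writing the XML metadata file used later.

```sql
ALTER PLUGGABLE DATABASE pdb5 CLOSE;
ALTER PLUGGABLE DATABASE pdb5 UNPLUG INTO '/u01/pdb5.xml';
```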
SELECT name, open_mode
FROM v$pdbs
ORDER BY name;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB5 MOUNTED
SQL>
You can delete the PDB, choosing to keep the files on the file system.
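The drop presumably used the KEEP DATAFILES clause, so the files remain available for the subsequent plugin.

```sql
DROP PLUGGABLE DATABASE pdb5 KEEP DATAFILES;
```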
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
SQL>
Plugin PDB from ".xml" File
First check the PDB is compatible with the CDB by calling the DBMS_PDB.CHECK_PLUG_COMPATIBILITY function, passing in the XML metadata file and the name of the PDB you want to create using it.
SET SERVEROUTPUT ON
DECLARE
l_result BOOLEAN;
BEGIN
l_result := DBMS_PDB.check_plug_compatibility(
pdb_descr_file => '/u01/pdb5.xml',
pdb_name => 'pdb5');
IF l_result THEN
DBMS_OUTPUT.PUT_LINE('compatible');
ELSE
DBMS_OUTPUT.PUT_LINE('incompatible');
END IF;
END;
/
compatible
SQL>
If the PDB is not compatible, violations are listed in the PDB_PLUG_IN_VIOLATIONS view. If the PDB is compatible, create a new PDB using it as the source. If we were creating it with a new name we
might do something like this.
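The statement that produced the output below was lost; since the datafiles were kept in place by the earlier drop, it presumably reused them with NOCOPY.

```sql
CREATE PLUGGABLE DATABASE pdb5 USING '/u01/pdb5.xml' NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb5 OPEN;
```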
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB5 READ WRITE
SQL>
Setup
We need to create 3 PDBs to test the CONTAINERS clause. The setup code below does the following.
CONN / AS SYSDBA
CREATE PLUGGABLE DATABASE pdb1
ADMIN USER pdb_admin IDENTIFIED BY Password1
DEFAULT TABLESPACE users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CONN / AS SYSDBA
Creates a common user called C##COMMON_USER that owns an empty table called COMMON_USER_TAB in the root container.
Creates a populated version of the COMMON_USER_TAB table owned by the C##COMMON_USER user in each PDB.
Grants select privilege on the local user's table to the common user.
-- Create a common user that owns an empty table.
CONN / AS SYSDBA
CREATE USER c##common_user IDENTIFIED BY Common1 QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SYNONYM TO c##common_user CONTAINER=ALL;
CONN c##common_user/Common1@pdb1
CONN c##common_user/Common1@pdb2
CONN c##common_user/Common1@pdb3
CONN local_user/Local1@pdb2
GRANT SELECT ON local_user_tab TO c##common_user;
CONN local_user/Local1@pdb3
GRANT SELECT ON local_user_tab TO c##common_user;
CONN / AS SYSDBA
CONTAINERS Clause with Common Users
The CONTAINERS clause can only be used from a common user in the root container. With no additional changes we can query the COMMON_USER_TAB tables owned by the common user in all the containers. The most basic use of the CONTAINERS clause is shown below.
CONN c##common_user/Common1
SELECT *
FROM CONTAINERS(common_user_tab);
ID CON_ID
---------- ----------
1 4
2 4
1 5
2 5
1 3
2 3
6 rows selected.
SQL>
Notice the CON_ID column has been added to the column list, to indicate which container the result came from. This allows us to query a subset of the containers.
SELECT con_id, id
FROM CONTAINERS(common_user_tab)
WHERE con_id IN (3, 4)
ORDER BY con_id, id;
CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
4 rows selected.
SQL>
CONTAINERS Clause with Local Users
To query tables and views from local users, the documentation suggests you must create views on them from a common user. The following code creates views against the LOCAL_USER_TAB tables created earlier. We must also create a table in the root container with the same name as the views.
CONN c##common_user/Common1
CREATE TABLE c##common_user.local_user_tab_v (id NUMBER);
CONN c##common_user/Common1@pdb1
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;
CONN c##common_user/Common1@pdb2
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;
CONN c##common_user/Common1@pdb3
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;
With the blank table and views in place we can now use the CONTAINERS clause indirectly against the local user objects.
CONN c##common_user/Common1
SELECT con_id, id
FROM CONTAINERS(local_user_tab_v)
ORDER BY con_id, id;
CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
5 1
5 2
6 rows selected.
SQL>
The documentation suggests the use of synonyms in place of views will not work, since the synonyms must resolve to objects owned by the common user issuing the query.
"When a synonym is specified in the CONTAINERS clause, the synonym must resolve to a table or a view owned by the common user issuing the statement."
That's not quite true from my tests, but it doesn't stop you from using synonyms to local objects in the PDBs, provided the object in the root container is not a synonym. The following example uses a real object in the root container, and local objects via synonyms in the pluggable databases.
CONN c##common_user/Common1
CREATE TABLE c##common_user.local_user_tab_syn (id NUMBER);
CONN c##common_user/Common1@pdb1
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;
CONN c##common_user/Common1@pdb2
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;
CONN c##common_user/Common1@pdb3
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;
CONN c##common_user/Common1
SELECT con_id, id
FROM CONTAINERS(local_user_tab_syn)
ORDER BY con_id, id;
CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
5 1
5 2
6 rows selected.
SQL>
Let's see what happens if we drop the common user table and replace it with a synonym of the same name, pointing to a table of the same structure as the local tables, but owned by the common user.
CONN c##common_user/Common1
SELECT *
FROM CONTAINERS(local_user_tab_syn);
SELECT *
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P004
ORA-00942: table or view does not exist
SQL>
If the synonyms consistently point to an object owned by the common user, it still doesn't work.
CONN c##common_user/Common1@pdb1
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER
SQL>
CONN c##common_user/Common1@pdb2
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER
SQL>
CONN c##common_user/Common1@pdb3
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER
SQL>
CONN c##common_user/Common1
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER
SQL>
SELECT con_id, id
FROM CONTAINERS(local_user_tab_syn)
ORDER BY con_id, id;
FROM CONTAINERS(local_user_tab_syn)
*
ERROR at line 2:
ORA-00942: table or view does not exist
SQL>
I'm not sure what the wording in the documentation means, but it doesn't read well to me.
The hint is placed in the select list as usual, with the basic syntax as follows. Substitute the hint you want in place of "<<PUT-HINT-HERE>>".
/*+ CONTAINERS(DEFAULT_PDB_HINT='<<PUT-HINT-HERE>>') */
As an example, we will run a query against the ALL_OBJECTS view and check the elapsed time.
CONN / AS SYSDBA
SET TIMING ON
CON_ID MAX(OBJECT_ID)
---------- --------------
1 75316
3 73209
4 73330
5 73323
Elapsed: 00:00:00.31
SQL>
We repeat the query, but this time add a PARALLEL(2) hint to the recursive queries run in each PDB, which should make the elapsed time slower on this small VM.
CON_ID MAX(OBJECT_ID)
---------- --------------
1 75316
3 73209
4 73340
5 73323
Elapsed: 00:00:06.17
SQL>
Notice the significantly longer elapsed time as a result of the parallel operations in the recursive SQL.
Clean Up
You can clean up all the pluggable databases and the common user created for these examples using the following script.
CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb2 CLOSE;
ALTER PLUGGABLE DATABASE pdb3 CLOSE;
DROP PLUGGABLE DATABASE pdb1 INCLUDING DATAFILES;
DROP PLUGGABLE DATABASE pdb2 INCLUDING DATAFILES;
DROP PLUGGABLE DATABASE pdb3 INCLUDING DATAFILES;
A PDB lockdown profile allows you to restrict the operations and functionality available from within a PDB. This can be very useful from a security perspective, giving the PDBs a greater degree of separation and allowing different people to manage each PDB, without compromising the security of other PDBs within the same instance.
Basic Commands
The basic process of creating, enabling, disabling and dropping a lockdown profile is relatively simple. The user administering the PDB lockdown profiles described here will need the CREATE LOCKDOWN
PROFILE and DROP LOCKDOWN PROFILE system privileges. In these examples we will perform all these operations as the SYS user.
In the following example we create two PDB lockdown profiles in the root container. One will be used as the system default and one for a specific PDB.
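The profile creation and assignment commands are missing from this copy; a sketch with illustrative profile names follows.

```sql
CREATE LOCKDOWN PROFILE default_profile;
CREATE LOCKDOWN PROFILE pdb1_profile;

-- Setting PDB_LOCKDOWN in the root container makes a profile the default
-- for all PDBs; setting it inside a PDB overrides that default.
ALTER SYSTEM SET pdb_lockdown=default_profile;
```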
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
SHOW PARAMETER PDB_LOCKDOWN;
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
ALTER SYSTEM RESET PDB_LOCKDOWN;
-- Restart PDB.
SHUTDOWN IMMEDIATE;
STARTUP;
CONN / AS SYSDBA
ALTER SYSTEM RESET PDB_LOCKDOWN;
CONN / AS SYSDBA
SELECT profile_name,
rule_type,
rule,
clause,
clause_option,
option_value,
min_value,
max_value,
list,
status
FROM dba_lockdown_profiles
ORDER BY 1;
The database comes with three default PDB lockdown profiles called PRIVATE_DBAAS, PUBLIC_DBAAS and SAAS. These are empty profiles, containing no restrictions, which you can tailor to suit your
own needs if you so wish.
The remainder of the article will discuss the types of restrictions available when planning a PDB lockdown profile. All the commands below reference a profile called MY_PROFILE, which can be created
and dropped using the following commands.
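The commands themselves are missing from this copy; they are presumably just the following.

```sql
CREATE LOCKDOWN PROFILE my_profile;
DROP LOCKDOWN PROFILE my_profile;
```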
Having no specific option restrictions in place is the equivalent of using the following.
-- Enable.
ALTER LOCKDOWN PROFILE my_profile ENABLE OPTION = ('DATABASE QUEUING');
ALTER LOCKDOWN PROFILE my_profile ENABLE OPTION = ('PARTITIONING');
ALTER LOCKDOWN PROFILE my_profile ENABLE OPTION ALL;
ALTER LOCKDOWN PROFILE my_profile ENABLE OPTION ALL EXCEPT = ('PARTITIONING');
-- Disable.
ALTER LOCKDOWN PROFILE my_profile DISABLE OPTION = ('DATABASE QUEUING');
ALTER LOCKDOWN PROFILE my_profile DISABLE OPTION = ('PARTITIONING');
ALTER LOCKDOWN PROFILE my_profile DISABLE OPTION ALL;
ALTER LOCKDOWN PROFILE my_profile DISABLE OPTION ALL EXCEPT = ('DATABASE QUEUING','PARTITIONING');
Using ALL EXCEPT doesn't really make sense with only two options available, but it will be useful if more options are added in future.
Lockdown Features
Features can be enabled or disabled individually, or in groups known as feature bundles. The feature bundles and their individual features are listed in the ALTER LOCKDOWN PROFILE documentation.
Having no specific feature restrictions in place is the equivalent of using the following.
The following examples show how to enable or disable entire commands or groups of them using ALL and ALL EXCEPT.
ALTER LOCKDOWN PROFILE my_profile ENABLE STATEMENT = ('ALTER DATABASE', 'ALTER PLUGGABLE DATABASE');
ALTER LOCKDOWN PROFILE my_profile DISABLE STATEMENT = ('ALTER DATABASE', 'ALTER PLUGGABLE DATABASE');
ALTER LOCKDOWN PROFILE my_profile ENABLE STATEMENT ALL EXCEPT = ('ALTER DATABASE', 'ALTER PLUGGABLE DATABASE');
ALTER LOCKDOWN PROFILE my_profile DISABLE STATEMENT ALL EXCEPT = ('ALTER DATABASE', 'ALTER PLUGGABLE DATABASE');
The scope of the restriction can be reduced using the CLAUSE, OPTION, MINVALUE, MAXVALUE options and values.
-- Can only set CPU_COUNT to values 1, 2 or 3.
ALTER LOCKDOWN PROFILE my_profile DISABLE STATEMENT = ('ALTER SYSTEM')
CLAUSE = ('SET') OPTION = ('CPU_COUNT') MINVALUE = '1' MAXVALUE = '3';
The ALTER LOCKDOWN PROFILE documentation describes the available syntax.
Considerations
It should be obvious from the examples in the Basic Commands section that there is a flaw in this mechanism if you define poor lockdown profiles.
Imagine a scenario where you have a highly restrictive lockdown profile for one PDB, but a less restrictive default lockdown profile. If you don't restrict the ability to modify the PDB_LOCKDOWN
parameter in the PDB with the highly restrictive profile, what's to stop the PDB administrator from resetting the PDB-level parameter and reverting to the less restrictive default lockdown profile?
If you are planning to use a variety of PDB lockdown profiles in a single instance, you need to define your lockdown profiles very carefully to prevent this type of mistake. This is a classic case of garbage-in, garbage-out.
Option, feature and statement restrictions can be combined into a single PDB lockdown profile.
Whilst testing it's easy to get yourself into a bit of a mess. Remember, you can always switch back to the root container and drop the problematic lockdown profile and start again.
Pluggable Database (PDB) Operating System (OS) Credentials in Oracle Database 12c Release 2 (12.2)
There are a number of database features that require access to the operating system, for example external jobs without explicit credentials, PL/SQL library executions and preprocessor executions for
external tables. By default these run using the Oracle software owner on the operating system, which is a highly privileged user and represents a security risk if you are trying to consolidate multiple
systems into a single container.
Oracle 12.2 allows you to assign a different default operating system (OS) credential to each pluggable database (PDB), giving a greater degree of separation between the pluggable databases and
therefore better control over security.
CONN / AS SYSDBA
-- PDB1 Credential
BEGIN
DBMS_CREDENTIAL.create_credential(
credential_name => 'pdb1_user_cred',
username => 'pdb1_user',
password => 'pdb1_user');
END;
/
-- PDB2 Credential
BEGIN
DBMS_CREDENTIAL.create_credential(
credential_name => 'pdb2_user_cred',
username => 'pdb2_user',
password => 'pdb2_user');
END;
/
-- PDB3 Credential
BEGIN
DBMS_CREDENTIAL.create_credential(
credential_name => 'pdb3_user_cred',
username => 'pdb3_user',
password => 'pdb3_user');
END;
/
Check the credentials are all present and owned by the root container using the CDB_CREDENTIALS view.
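The query is not shown above, but something along these lines would do it; the column formatting is illustrative.

COLUMN credential_name FORMAT A20
COLUMN username FORMAT A15

SELECT con_id, credential_name, username
FROM   cdb_credentials
ORDER BY con_id;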
SQL>
Assign Credentials (PDB_OS_CREDENTIAL)
The PDB_OS_CREDENTIAL initialization parameter is used to define the default OS credential for the container. When this is set in the root container, it defines the default OS credential for all PDBs.
Setting it at the PDB level overrides the CDB default setting.
The documentation suggests you should be able to set the parameter in the root container as follows.
CONN / AS SYSDBA
ALTER SYSTEM SET PDB_OS_CREDENTIAL=cdb1_user_cred SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
If you try that, you get the following error from the ALTER SYSTEM command.
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65046: operation not allowed from outside a pluggable database
Instead, I had to do the following.
CONN / AS SYSDBA
SHUTDOWN IMMEDIATE;
CREATE PFILE='/tmp/pfile.txt' FROM SPFILE;
HOST echo "*.pdb_os_credential=cdb1_user_cred" >> /tmp/pfile.txt
CREATE SPFILE FROM PFILE='/tmp/pfile.txt';
STARTUP;
SHOW PARAMETER PDB_OS_CREDENTIAL
With the default in place we can set the PDB-specific credentials as follows.
CONN / AS SYSDBA
-- PDB1 Credential
ALTER SESSION SET CONTAINER=pdb1;
ALTER SYSTEM SET PDB_OS_CREDENTIAL=pdb1_user_cred SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
SHOW PARAMETER PDB_OS_CREDENTIAL
-- PDB2 Credential
ALTER SESSION SET CONTAINER=pdb2;
ALTER SYSTEM SET PDB_OS_CREDENTIAL=pdb2_user_cred SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
SHOW PARAMETER PDB_OS_CREDENTIAL
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pdb_os_credential string PDB2_USER_CRED
SQL>
-- PDB3 Credential
ALTER SESSION SET CONTAINER=pdb3;
ALTER SYSTEM SET PDB_OS_CREDENTIAL=pdb3_user_cred SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
SHOW PARAMETER PDB_OS_CREDENTIAL
PDBs With Different Character Sets to the CDB in Oracle Database 12c Release 2 (12.2)
In the previous release the character set for the root container and all pluggable databases associated with it had to be the same. This could limit the movement of PDBs and make consolidation difficult
where a non-standard character set was required.
In Oracle Database 12c Release 2 (12.2) a PDB can use a different character set to the CDB, provided the character set of the CDB is AL32UTF8, which is now the default character set when using the
Database Configuration Assistant (DBCA).
CONN / AS SYSDBA
SELECT *
FROM nls_database_parameters
WHERE parameter = 'NLS_CHARACTERSET';
PARAMETER VALUE
------------------------------ ------------------------------
NLS_CHARACTERSET AL32UTF8
SQL>
We can see the default character set of the root container is AL32UTF8, which means it can hold PDBs with different character sets.
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
EXIT;
EOF
Hot Clone the Source PDB
To prove we can house a database of a different character set in our destination CDB, we will be doing a hot clone. The setup required for this is described in the following article.
Multitenant : Hot Clone a Remote PDB or Non-CDB in Oracle Database 12c Release 2 (12.2)
Once you've completed the setup, you can perform a regular hot clone. Connect to the destination CDB.
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Clone the source PDB (pdb5) to create the destination PDB (pdb5new).
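The clone command itself is not shown above, but it would be something like the following sketch, assuming a database link called clone_link pointing at the source, as described in the linked article.

CREATE PLUGGABLE DATABASE pdb5new FROM pdb5@clone_link;
ALTER PLUGGABLE DATABASE pdb5new OPEN;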
SHOW PDBS
12-SEP-17 15.55.16.637023 PDB5NEW PDB not Unicode Character set mismatch: PDB ch
aracter set WE8ISO8859P1. CDB
character set AL32UTF8.
SQL>
Check the Destination PDB
Compare the character set of the CDB and the new pluggable database.
CONN / AS SYSDBA
SELECT *
FROM nls_database_parameters
WHERE parameter = 'NLS_CHARACTERSET';
PARAMETER VALUE
------------------------------ ------------------------------
NLS_CHARACTERSET AL32UTF8
SQL>
COLUMN parameter FORMAT A30
COLUMN value FORMAT A30
SELECT *
FROM nls_database_parameters
WHERE parameter = 'NLS_CHARACTERSET';
PARAMETER VALUE
------------------------------ ------------------------------
NLS_CHARACTERSET WE8ISO8859P1
SQL>
We can see we have a pluggable database with a different character set to that of the root container.
Miscellaneous
The root container must use the AL32UTF8 character set if you need it to hold PDBs with differing character sets.
The character set and national character set of an application container and all its application PDBs must match.
New PDBs, cloned from the seed database, always match the CDB character set. There is no way to create a new PDB with a different character set directly. You can use Database Migration Assistant for
Unicode (DMU) to convert the character set of a PDB.
As seen in this article, cloning can be used to create a PDB with a different character set, as can unplug/plugin.
LogMiner supports PDBs with different character sets compared to their CDB.
Data Guard supports PDBs with different character sets compared to their CDB for rolling upgrades.
From Oracle Database 12.2 onward it is possible to refresh the contents of a remotely hot cloned PDB provided it is created as a refreshable PDB and has only ever been opened in read-only mode. The
read-only PDB can be used for reporting purposes, or as the source for other clones, to minimise the impact on a production system when multiple up-to-date clones are required on a regular basis.
Prerequisites
In this context, the word "local" refers to the destination or target CDB that will house the cloned PDB. The word "remote" refers to the PDB that is the source of the clone.
The prerequisites for a PDB refresh are similar to those of a hot remote clone, so you should be confident with that before continuing. You can read about it in this article.
Multitenant : Hot Clone a Remote PDB or Non-CDB in Oracle Database 12c Release 2 (12.2)
In addition to the prerequisites for hot remote cloning, we must also consider the following.
A refreshable PDB must be in a separate CDB to its source, so it must be a remote clone.
You can change a refreshable PDB to a non-refreshable PDB, but not vice versa.
If the source PDB is not available over a DB link, the archived redo logs can be read from a location specified by the optional REMOTE_RECOVERY_FILE_DEST parameter.
New datafiles added to the source PDB are automatically created on the destination PDB. The PDB_FILE_NAME_CONVERT parameter must be specified to allow the conversion to take place.
In the examples below I have two databases running on the same virtual machine, but they could be running on separate physical or virtual servers.
cdb1 : The local database that will eventually house the refreshable clone.
cdb3 : The remote CDB, used for the source PDB (pdb5).
Create a Refreshable PDB
Remember, you must have completed all the preparations for a hot remote clone described in the linked article before going forward.
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a new PDB in the local database by cloning the remote PDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file name conversions. This example uses manual refresh mode.
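The create statement is not shown above, but it would look something like this sketch, assuming a database link called clone_link.

CREATE PLUGGABLE DATABASE pdb5_ro FROM pdb5@clone_link
  REFRESH MODE MANUAL;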
SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.
NAME OPEN_MODE
------------------------------ ----------
PDB5_RO MOUNTED
SQL>
The PDB is opened in read-only mode to complete the process.
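The open command is not shown, but it would be as follows.

ALTER PLUGGABLE DATABASE pdb5_ro OPEN READ ONLY;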
NAME OPEN_MODE
------------------------------ ----------
PDB5_RO READ ONLY
SQL>
Alter the Source PDB
We want to prove the new PDB can be refreshed, so we will add a new tablespace, user and table owned by that user in the source database.
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Make some changes to the source PDB.
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Switch to the refreshable PDB and check for the presence of the test table. It will not exist yet.
SQL>
The refresh operation can only take place from the refreshable PDB, not the root container.
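The refresh itself would look something like the following sketch. The PDB must be closed while it is refreshed, then it can be reopened in read-only mode.

ALTER SESSION SET CONTAINER=pdb5_ro;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE REFRESH;
ALTER PLUGGABLE DATABASE OPEN READ ONLY;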
ID
----------
1
1 row selected.
SQL>
Notice the tablespace has also been created in the refreshable PDB.
SELECT tablespace_name
FROM dba_tablespaces
ORDER BY 1;
TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
TEMP
TEST_TS
UNDOTBS1
USERS
6 rows selected.
SQL>
Refresh Modes
In the example above we created a refreshable PDB using the manual refresh mode. Alternatively we could allow it to refresh automatically. The possible variations during creation are shown below.
-- Non-refreshable PDB.
-- These two are functionally equivalent.
CREATE PLUGGABLE DATABASE pdb5_ro FROM pdb5@clone_link;

CREATE PLUGGABLE DATABASE pdb5_ro FROM pdb5@clone_link
REFRESH MODE NONE;

SQL>
The refresh mode can be altered after the refreshable PDB is created, as shown below.
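The statements are not shown above, but they would be along these lines; treat the interval as illustrative.

-- Switch to manual refresh mode.
ALTER PLUGGABLE DATABASE pdb5_ro REFRESH MODE MANUAL;

-- Switch to automatic refresh every 120 minutes.
ALTER PLUGGABLE DATABASE pdb5_ro REFRESH MODE EVERY 120 MINUTES;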
Appendix
These tests were performed on a free trial of the Oracle Database Cloud Service, where the CDB1 instance and PDB1 pluggable database were created as part of the service creation. The additional
instance was built on the same virtual machine using the commands below. I've included the DBCA commands to create and delete the CDB1 instance for completeness. They were not actually used.
-pdbName pdb5 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
EXIT;
EOF
Oracle 12.1 allowed 252 user-defined pluggable databases. Oracle 12.2 allows 4096 user-defined pluggable databases, including application root and application containers. From Oracle 12.1.0.2 onward
the non-CDB architecture is deprecated. As a result you may decide to use the Multitenant architecture, but stick with a single user-defined pluggable database (PDB), also known as single-tenant or
lone-PDB, so you don't have to pay for the Multitenant option. In Standard Edition you can't accidentally create additional PDBs, but in Enterprise Edition you are potentially one command away from
having to buy some extra licenses. This article gives an example of a way to save yourself from the costly mistake of creating more than one user-defined PDB in a Lone-PDB instance.
CON_ID NAME
---------- ------------------------------
2 PDB$SEED
3 PDB1
SQL>
There is nothing in Enterprise Edition to stop you creating additional user-defined pluggable databases, even if you don't have the Multitenant option.
CON_ID NAME
---------- ------------------------------
2 PDB$SEED
3 PDB1
4 PDB2
SQL>
Having done this the database will have a "detected usage" reported in the DBA_FEATURE_USAGE_STATISTICS view. It takes a while for this to be visible, but we'll force a sample to check it.
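The sampling call is not shown above. It is typically forced with the undocumented DBMS_FEATURE_USAGE_INTERNAL package, so treat the following as a sketch run as a privileged user.

EXEC DBMS_FEATURE_USAGE_INTERNAL.exec_db_usage_sampling(SYSDATE);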
SELECT name,
detected_usages,
aux_count,
last_usage_date
FROM dba_feature_usage_statistics
WHERE name = 'Oracle Pluggable Databases'
ORDER BY name;
SQL>
I'm doing this on a test instance, so it has detected the feature usage several times. The important point to notice here is the AUX_COUNT column, which indicates the number of user-defined PDBs
currently running. Using the Multitenant architecture results in the detected usage, regardless of the number of PDBs, so this alone does not indicate if you need to buy the Multitenant option. If the
AUX_COUNT column is greater than 1 for this feature, you need to buy the option!
SELECT name,
detected_usages,
aux_count,
last_usage_date
FROM dba_feature_usage_statistics
WHERE name = 'Oracle Pluggable Databases'
ORDER BY name;
SQL>
Notice the AUX_COUNT column now has a value of "1".
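The "System altered." message below comes from capping the number of PDBs, presumably with the MAX_PDBS parameter introduced in 12.2, along these lines.

ALTER SYSTEM SET MAX_PDBS=1;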
System altered.
SQL> CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1;
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created
SQL>
Prevent Accidental Creation of a PDB
We can prevent accidental creation of a PDB using a system trigger. The following trigger is fired for any "CREATE" DDL on the database where the ORA_DICT_OBJ_TYPE system defined event attribute is
set to 'PLUGGABLE DATABASE'. It checks to see how many user-defined PDBs are already present. If the number of user-defined PDBs are in excess of the maximum allowed (1), then we raise an error.
CONN / AS SYSDBA
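The trigger itself is not shown above, but based on the description and the ORA-20001 error below it would be something like the following sketch; the trigger name and limit variable are illustrative.

CREATE OR REPLACE TRIGGER prevent_pdb_create_trg
BEFORE CREATE ON DATABASE
DECLARE
  l_max_pdbs CONSTANT NUMBER := 1;
  l_count    NUMBER;
BEGIN
  IF ora_dict_obj_type = 'PLUGGABLE DATABASE' THEN
    -- Count existing user-defined PDBs, excluding the seed.
    SELECT COUNT(*)
    INTO   l_count
    FROM   v$pdbs
    WHERE  name != 'PDB$SEED';

    IF l_count >= l_max_pdbs THEN
      RAISE_APPLICATION_ERROR(-20001, 'More than ' || l_max_pdbs || ' PDB requires the Multitenant option.');
    END IF;
  END IF;
END;
/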
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: More than 1 PDB requires the Multitenant option.
ORA-06512: at line 12
SQL>
As expected, we are prevented from creating a second user-defined PDB.
If you do accidentally create more than one user-defined PDB in a container database and you are paranoid about a potential licensing breach, you might want to do the following.
Introduction
A proxy PDB can provide a local connection point that references a remote PDB. There are a few situations where this might be of interest to you.
You want to relocate a PDB to a different machine or data centre, without having to change any of the existing connection details. In this case you can relocate the PDB and create a proxy PDB of the
same name in the original location.
You want to run a PDB in the cloud, but you don't want to open access to multiple applications, having each of them connecting directly. Instead you make all your applications connect to the local PDB,
which in turn connects to the referenced PDB, so there is only a single route in and out of the cloud PDB.
You want to share a single application root container between multiple databases.
Multitenant : Proxy PDB in Oracle Database 12c Release 2 (12.2)
DML and DDL are sent to the referenced PDB for execution and the results are returned.
When connected to the proxy PDB, ALTER DATABASE and ALTER PLUGGABLE DATABASE commands refer to the proxy only, they are not passed to the referenced PDB.
In the same way, when connected to the root container, ALTER PLUGGABLE DATABASE commands refer to the proxy only.
A database link is used for the initial creation of the proxy PDB, but all subsequent communication between the servers doesn't use the DB link, so it can be removed once the creation is complete.
The database link used to create a proxy PDB must be created in the root container of the local instance, but can point to a common user in the referenced CDB root container, or a common or local user
in the referenced PDB itself.
The SYSTEM, SYSAUX, TEMP and UNDO tablespaces are copied to the local instance and kept synchronized. As a result, you still need to consider file name conversion like a normal clone, unless you are
using Oracle Managed Files (OMF).
There will be performance implications due to all the network traffic. This won't magically make remote data transfer faster.
Prerequisites
The prerequisites for creating a proxy PDB are similar to that of hot-cloning, so rather than repeat them, you can read them here.
In the examples below I have two databases running on the same virtual machine, but they could be running on separate physical or virtual servers.
cdb1 : The local database that will eventually house the proxy PDB.
cdb3 : The remote CDB, housing the remote referenced PDB (pdb5).
The databases use Oracle Managed Files (OMF) so I don't need to worry about the FILE_NAME_CONVERT or PDB_FILE_NAME_CONVERT settings.
The proxy PDB and referenced PDB share the same listener, so they can't have the same name. If they had different listeners, either on the same machine or on separate machines, they could have the
same name.
CONN / AS SYSDBA
NAME
---------
CDB3
SQL>
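The creation of the proxy PDB itself is not shown, but it would be something like the following sketch, assuming a database link called clone_link pointing at the referenced PDB.

CREATE PLUGGABLE DATABASE pdb5_proxy AS PROXY FROM pdb5@clone_link;
ALTER PLUGGABLE DATABASE pdb5_proxy OPEN;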
Create a new entry in the "tnsnames.ora" file for the proxy PDB in the local instance.
PDB5_PROXY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = myserver.mydomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb5_proxy)
)
)
You can now connect directly to the proxy PDB. Notice in the output below, the database name is showing as CDB3, even though we are connected to the pdb5_proxy container in the cdb1 instance.
NAME
---------
CDB3
SQL>
Once the proxy PDB is created the database link and link user are no longer needed.
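So they can be removed, along these lines, from the root container of the local instance.

DROP DATABASE LINK clone_link;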
CONN test/test@pdb5
CONN test/test@pdb5_proxy
ID
----------
1
SQL>
Insert another record into the table in the proxy PDB.
CONN test/test@pdb5_proxy
CONN test/test@pdb5
ID
----------
1
2
SQL>
We can see the proxy PDB and referenced PDB are working as expected.
Local Datafiles
What might seem a little odd is the SYSTEM, SYSAUX, TEMP and UNDO tablespaces are copied to the local instance and kept synchronized. All other tablespaces are only present in the referenced
instance.
If we query datafiles and tempfiles in the proxy PDB we are shown those of the referenced PDB. Notice the datafiles associated with the USERS and TEST_TS tablespaces.
NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/cdb3/pdb5/system01.dbf
/u02/app/oracle/oradata/cdb3/pdb5/sysaux01.dbf
/u02/app/oracle/oradata/cdb3/pdb5/undotbs01.dbf
/u02/app/oracle/oradata/cdb3/pdb5/users01.dbf
/u02/app/oracle/oradata/CDB3/469D84C85D196311E0538738A8C0B97D/datafile/o1_mf_test_ts_d877rjoo_.dbf
SQL>
NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/cdb3/pdb5/temp01.dbf
SQL>
If we check in the local instance we see a different pattern. Notice the datafiles associated with the USERS and TEST_TS tablespaces are not present.
CONN / AS SYSDBA
SHOW PDBS
NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB1/469F256E1081028AE0538738A8C079C7/datafile/o1_mf_system_d876rtd8_.dbf
/u02/app/oracle/oradata/CDB1/469F256E1081028AE0538738A8C079C7/datafile/o1_mf_sysaux_d876rtd9_.dbf
/u02/app/oracle/oradata/CDB1/469F256E1081028AE0538738A8C079C7/datafile/o1_mf_undotbs1_d876rtd9_.dbf
SQL>
NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB1/469F256E1081028AE0538738A8C079C7/datafile/o1_mf_temp_d876rtdb_.dbf
SQL>
Alternate Host and Port
The CREATE PLUGGABLE DATABASE ... AS PROXY FROM command can also include the HOST and PORT clauses.
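For example, something like the following sketch, where the host name and port are illustrative.

CREATE PLUGGABLE DATABASE pdb5_proxy AS PROXY FROM pdb5@clone_link
  HOST 'my-server.my-domain' PORT 1521;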
Proxy Views
You can see which are proxy PDBs using the V$PDBS.PROXY_PDB column or CDB_PDBS.IS_PROXY_PDB column.
SELECT name, proxy_pdb
FROM v$pdbs;
NAME PRO
------------------------------ ---
PDB$SEED NO
PDB1 NO
PDB5_PROXY YES
SQL>
PDB_NAME IS_
------------------------------ ---
PDB1 NO
PDB$SEED NO
PDB5_PROXY YES
SQL>
The V$PROXY_PDB_TARGETS view displays information about the connection details for the referenced PDB used by a proxy PDB.
SELECT con_id,
target_port,
target_host,
target_service,
target_user
FROM v$proxy_pdb_targets;
SQL>
From Oracle 12.2 onward you can relocate a PDB by moving it between two root containers with near-zero downtime.
Prerequisites
In this context, the word "local" refers to the destination or target CDB that will house the relocated PDB. The word "remote" refers to the PDB that is to be relocated.
The user in the local database must have the CREATE PLUGGABLE DATABASE privilege in the root container.
The remote CDB must use local undo mode. Without this the remote PDB must be opened in read-only mode for the duration of the operation.
The remote and local databases should be in archivelog mode.
The local database must have a public database link to the remote CDB using a common user.
The common user in the remote database that the database link connects to must have the CREATE PLUGGABLE DATABASE and SYSDBA or SYSOPER privilege.
The local and remote databases must have the same endianness.
The local and remote databases must either have the same options installed, or the remote database must have a subset of those present on the local database.
If the character set of the local CDB is AL32UTF8, the remote database can be any character set. If the local CDB does not use AL32UTF8, the character sets of the remote and local databases must
match.
If the remote database uses Transparent Data Encryption (TDE) the local CDB must be configured appropriately before attempting the relocate. If not you will be left with a new PDB that will only open
in restricted mode.
Bug 19174942 is marked as fixed in 12.2. I can't confirm this, so just in case I'll leave this here, but it should no longer be the case. The default tablespaces for each common user in the remote PDB
*must* exist in the local CDB. If this is not true, create the missing tablespaces in the root container of the local CDB. If you don't do this your new PDB will only be able to open in restricted mode (Bug
19174942).
In the examples below I have two databases running on the same virtual machine, but they could be running on separate physical or virtual servers.
cdb1 : The local database that will eventually house the relocated PDB.
cdb3 : The remote CDB that houses the PDB (pdb5) to be relocated.
Prepare Remote CDB
Connect to the remote CDB and prepare the remote PDB for relocating.
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Create a user in the remote database for use with the database link. In this case, we must use a common user in the remote CDB.
SELECT property_name, property_value
FROM database_properties
WHERE property_name = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE
SQL>
SELECT log_mode
FROM v$database;
LOG_MODE
------------
ARCHIVELOG
SQL>
Because the remote CDB is in local undo mode and archivelog mode, we don't need to put the remote PDB into read-only mode.
CDB3 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = my-server.my-domain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cdb3)
)
)
Connect to the local database to initiate the relocate.
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
Check the local CDB is in local undo mode and archivelog mode.
PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE
SQL>
SELECT log_mode
FROM v$database;
LOG_MODE
------------
ARCHIVELOG
SQL>
Create a public database link in the local CDB, pointing to the remote CDB.
Remember to remove this once the relocate is complete. It is a massive security problem to leave this in place!
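The link creation is not shown above, but it would be something like this sketch; the common user name and password are illustrative.

CREATE PUBLIC DATABASE LINK clone_link
  CONNECT TO c##clone_user IDENTIFIED BY clone_user USING 'cdb3';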
-- Test link.
DESC user_tables@clone_link
Relocate a PDB
Create a new PDB in the local CDB by relocating the remote PDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file name conversions.
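The relocate command is not shown, but it would be along these lines; RELOCATE AVAILABILITY MAX keeps connections to the source PDB available during the move. Treat this as a sketch.

CREATE PLUGGABLE DATABASE pdb5 FROM pdb5@clone_link RELOCATE AVAILABILITY MAX;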
SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.
NAME OPEN_MODE
------------------------------ ----------
PDB5 MOUNTED
SQL>
The PDB is opened in read-write mode to complete the process.
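The open command is not shown, but it would be as follows.

ALTER PLUGGABLE DATABASE pdb5 OPEN;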
NAME OPEN_MODE
------------------------------ ----------
PDB5 READ WRITE
SQL>
Drop the public database link.
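For example, from the root container of the local CDB.

DROP PUBLIC DATABASE LINK clone_link;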
If we switch back to the remote instance we can see PDB5 has been dropped.
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
sqlplus / as sysdba
no rows selected
SQL>
Managing Connections
Moving the database is only one aspect of keeping a system running. Once the database is in the new location, you need to make sure connections can still be made to it. The options are as follows.
If your connection information is centralised in an LDAP server (OID, AD etc.) then the definition can be altered centrally.
If both CDBs use the same listener, the relocated PDB will auto-register once the relocate is complete.
If both CDBs use different listeners, the LOCAL_LISTENER and REMOTE_LISTENER parameters can be used to configure cross-registration.
Appendix
These tests were performed on a free trial of the Oracle Database Cloud Service, where the CDB1 instance and PDB1 pluggable database were created as part of the service creation. The additional
instance was built on the same virtual machine using the commands below. I've included the DBCA commands to create and delete the CDB1 instance for completeness. They were not actually used.
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs
export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
EXIT;
EOF
Resource Manager PDB Performance Profiles in Oracle Database 12c Release 2 (12.2)
In the previous release it was possible to create a resource manager CDB resource plan to control the division of CPU and parallel execution server resources between PDBs. This required a separate plan
directive for each PDB, which doesn't scale well to thousands of PDBs. In Oracle Database 12c Release 2 (12.2) it is now possible to create a resource plan based on performance profiles which defines
the resource management for groups of PDBs. This can drastically reduce the number of plan directives required to handle thousands of PDBs.
Much of the resource manager CDB/PDB functionality is unchanged between 12.1 and 12.2, so some of the sections below link to the 12.1 article to save repetition.
The following code creates a new CDB resource plan using the CREATE_CDB_PLAN procedure, then adds two profile directives using the CREATE_CDB_PROFILE_DIRECTIVE procedure to represent the
typical gold and silver levels of service.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_prof_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.create_cdb_plan(
plan => l_plan,
comment => 'A test CDB resource plan using profiles');
DBMS_RESOURCE_MANAGER.create_cdb_profile_directive(
plan => l_plan,
profile => 'gold',
shares => 3,
utilization_limit => 100,
parallel_server_limit => 100);
DBMS_RESOURCE_MANAGER.create_cdb_profile_directive(
plan => l_plan,
profile => 'silver',
shares => 2,
utilization_limit => 50,
parallel_server_limit => 50);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
Information about the available CDB resource plans can be queried using the DBA_CDB_RSRC_PLANS view.
COLUMN plan FORMAT A30
COLUMN comments FORMAT A30
COLUMN status FORMAT A10
SET LINESIZE 100
SELECT plan_id,
plan,
comments,
status,
mandatory
FROM dba_cdb_rsrc_plans
WHERE plan = 'TEST_CDB_PROF_PLAN';
SQL>
Information about the CDB resource plan directives can be queried using the DBA_CDB_RSRC_PLAN_DIRECTIVES view. Notice we use the PROFILE column as well as the PLUGGABLE_DATABASE column.
SELECT plan,
pluggable_database,
profile,
shares,
utilization_limit AS util,
parallel_server_limit AS parallel
FROM dba_cdb_rsrc_plan_directives
WHERE plan = 'TEST_CDB_PROF_PLAN'
ORDER BY plan, pluggable_database, profile;
SQL>
For the rest of the article the cdb_resource_plans.sql and cdb_resource_profile_directives.sql scripts will be used to display this information.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_prof_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.create_cdb_profile_directive(
plan => l_plan,
profile => 'bronze',
shares => 1,
utilization_limit => 25,
parallel_server_limit => 25);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
SQL>
The UPDATE_CDB_PROFILE_DIRECTIVE procedure modifies an existing profile directive.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_prof_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.update_cdb_profile_directive(
plan => l_plan,
profile => 'bronze',
new_shares => 1,
new_utilization_limit => 20,
new_parallel_server_limit => 20);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
SQL>
The DELETE_CDB_PROFILE_DIRECTIVE procedure deletes an existing profile directive from the CDB resource plan.
DECLARE
l_plan VARCHAR2(30) := 'test_cdb_prof_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.delete_cdb_profile_directive(
plan => l_plan,
profile => 'bronze');
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
SQL>
Enable/Disable Resource Plan with PDB Performance Profiles
Enabling and disabling resource plans in a CDB is the same as it was in pre-12c instances. Enable a plan by setting the RESOURCE_MANAGER_PLAN parameter to the name of the CDB resource plan,
while connected to the root container.
CONN / AS SYSDBA
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'test_cdb_prof_plan';
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
ALTER SYSTEM SET DB_PERFORMANCE_PROFILE='' SCOPE=SPFILE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN;
CONN / AS SYSDBA
V$RSRCPDBMETRIC : A single row per PDB, holding the last of the 1 minute samples.
V$RSRCPDBMETRIC_HISTORY : 61 rows per PDB, holding the last 60 minutes worth of samples from the V$RSRCPDBMETRIC view.
DBA_HIST_RSRC_PDB_METRIC : AWR snapshots, retained based on the AWR retention period.
The following queries are examples of their usage.
CONN / AS SYSDBA
SELECT r.avg_running_sessions,
r.avg_waiting_sessions,
r.avg_cpu_utilization,
r.avg_active_parallel_stmts,
r.avg_queued_parallel_stmts,
r.avg_active_parallel_servers,
r.avg_queued_parallel_servers
FROM dba_hist_rsrc_pdb_metric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
AND p.pdb_name = 'PDB1'
ORDER BY r.begin_time;
Oracle Resource Manager : Per-Process PGA Limits in Oracle Database 12c Release 2 (12.2)
Oracle has a long history of improving the management of the Process Global Area (PGA). Oracle 9i introduced the PGA_AGGREGATE_TARGET parameter to automate the management of the PGA and
set a soft limit for its size. Oracle 11g introduced Automatic Memory Management (AMM), which you should probably avoid. Oracle 12c Release 1 introduced the PGA_AGGREGATE_LIMIT parameter to
define a hard limit for PGA size.
Oracle Database 12c Release 2 (12.2) has introduced two new features related to management of the PGA. First, the PGA_AGGREGATE_TARGET and PGA_AGGREGATE_LIMIT parameters can now be set
at the PDB level to limit the amount of PGA used by the PDB (described here). Second, Resource Manager can limit the amount of PGA used by a session, based on the session's consumer group. This
article focuses on the second feature.
SESSION_PGA_LIMIT Parameter
The SESSION_PGA_LIMIT parameter has been added to the CREATE_PLAN_DIRECTIVE and UPDATE_PLAN_DIRECTIVE procedures of the DBMS_RESOURCE_MANAGER package. This new parameter
specifies the upper limit in MB for PGA usage by a session assigned to the consumer group. If a session exceeds this limit, an ORA-10260 error is raised.
This parameter can be used in conjunction with other resource limits for a plan directive, but in this article it will be discussed in isolation. It can also be used in the non-CDB architecture, but here it will only
be considered inside a PDB.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area();
DBMS_RESOURCE_MANAGER.create_pending_area();
-- Create plan
DBMS_RESOURCE_MANAGER.create_plan(
plan => 'pga_plan',
comment => 'Plan for a combination of high and low PGA usage.');
DBMS_RESOURCE_MANAGER.create_consumer_group(
consumer_group => 'low_pga_cg',
comment => 'Low PGA usage allowed');
DBMS_RESOURCE_MANAGER.create_consumer_group(
consumer_group => 'high_pga_cg',
comment => 'High PGA usage allowed');
DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'pga_plan',
group_or_subplan => 'low_pga_cg',
session_pga_limit => 20);
DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'pga_plan',
group_or_subplan => 'high_pga_cg',
session_pga_limit => 100);
DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'pga_plan',
group_or_subplan => 'OTHER_GROUPS',
session_pga_limit => NULL);
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/
Enable the plan by setting the RESOURCE_MANAGER_PLAN parameter in the PDB, then map the test user to the low_pga_cg consumer group.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'pga_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area();
DBMS_RESOURCE_MANAGER.create_pending_area();
DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
attribute => DBMS_RESOURCE_MANAGER.oracle_user,
value => 'test',
consumer_group => 'low_pga_cg');
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/
SELECT username,
initial_rsrc_consumer_group
FROM dba_users
WHERE username = 'TEST';
USERNAME INITIAL_RSRC_CONSUMER_GROUP
------------------------------ ------------------------------
TEST LOW_PGA_CG
1 row selected.
SQL>
Test It
The following code connects as the test user and artificially tries to allocate excessive amounts of PGA using recursion.
CONN test/test@pdb1
DECLARE
PROCEDURE grab_memory AS
l_dummy VARCHAR2(4000);
BEGIN
grab_memory;
END;
BEGIN
grab_memory;
END;
/
DECLARE
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-10260: PGA limit (20 MB) exceeded - process terminated
SQL>
Notice the process was terminated once the session tried to use more than 20 MB of PGA. Assign the TEST user to the HIGH_PGA_CG consumer group.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area();
DBMS_RESOURCE_MANAGER.create_pending_area();
DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
attribute => DBMS_RESOURCE_MANAGER.oracle_user,
value => 'test',
consumer_group => 'high_pga_cg');
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/
Test it again.
CONN test/test@pdb1
DECLARE
PROCEDURE grab_memory AS
l_dummy VARCHAR2(4000);
BEGIN
grab_memory;
END;
BEGIN
grab_memory;
END;
/
DECLARE
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-10260: PGA limit (100 MB) exceeded - process terminated
SQL>
Heat Map, Information Lifecycle Management (ILM) and Automatic Data Optimization (ADO) in Oracle Database 12c Release 2 (12.2)
In Oracle Database 12.1 the Heat Map and Automatic Data Optimization (ADO) functionality was only available when using the non-CDB architecture. In Oracle Database 12.2 this functionality is now
supported in the multitenant architecture. This article gives an overview of Heat Map, Information Lifecycle Management (ILM) and Automatic Data Optimization (ADO) in Oracle Database 12c Release 2
(12.2). The examples are based around the multitenant architecture, but the information applies equally to the non-CDB architecture in Oracle Database 12.1 and 12.2.
Heat Map
The heat map functionality allows you to track data access at the segment level and data modification at the row and segment level, so you can identify the busy segments of the system. This
functionality is controlled by the HEAT_MAP parameter, which can be set at the system or session level.
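Heat map tracking is enabled by setting the parameter to ON, for example:

```sql
-- Enable heat map tracking for the instance.
-- The same setting can be made at session level with ALTER SESSION.
ALTER SYSTEM SET HEAT_MAP = ON;
```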
Display the current setting of the HEAT_MAP parameter at the PDB level.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
SHOW PARAMETER heat_map
The following views and DBMS_HEAT_MAP functions display the tracked information.
V$HEAT_MAP_SEGMENT
{USER|ALL|DBA}_HEAT_MAP_SEG_HISTOGRAM
{USER|ALL|DBA}_HEAT_MAP_SEGMENT
{USER|ALL|DBA}_HEATMAP_TOP_OBJECTS
{USER|ALL|DBA}_HEATMAP_TOP_TABLESPACES
DBMS_HEAT_MAP.BLOCK_HEAT_MAP
DBMS_HEAT_MAP.EXTENT_HEAT_MAP
DBMS_HEAT_MAP.OBJECT_HEAT_MAP
DBMS_HEAT_MAP.SEGMENT_HEAT_MAP
DBMS_HEAT_MAP.TABLESPACE_HEAT_MAP
Do some work that will be tracked.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
CONN test/test@pdb1
CREATE TABLE t1 (
id NUMBER,
description VARCHAR2(50),
CONSTRAINT t1_pk PRIMARY KEY (id)
);
INSERT INTO t1
SELECT level,
'Description for ' || level
FROM dual
CONNECT BY level <= 10;
COMMIT;
SELECT *
FROM t1;
SELECT *
FROM t1
WHERE id = 1;
We can now run some queries to see the tracked information.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
SELECT track_time,
object_name,
n_segment_write,
n_full_scan,
n_lookup_scan
FROM v$heat_map_segment
ORDER BY 1, 2;
SQL>
SELECT track_time,
owner,
object_name,
segment_write,
full_scan,
lookup_scan
FROM dba_heat_map_seg_histogram
ORDER BY 1, 2, 3;
SQL>
SELECT owner,
segment_name,
segment_type,
tablespace_name,
segment_size
FROM TABLE(DBMS_HEAT_MAP.object_heat_map('TEST','T1'));
SQL>
The heat map information can be really useful for identifying the busy and quiet segments in your database.
Create some tablespaces to represent the storage tiers. The following syntax uses Oracle Managed Files (OMF), hence no datafile names are needed.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
A table can be created with ADO ILM policies. The following example creates a partitioned invoices table. It manually allocates partitions to different storage tiers, and includes a tier policy on a
partition basis to migrate unused segments to tablespaces on slower storage. There is a compression policy at the table level, which is inherited by all partitions.
CONN test/test@pdb1
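The table creation itself is not shown in these notes. Based on the description above, it might look something like the following sketch. The column definitions, partition names and date ranges are assumptions; the invoices_2016_q4 partition name is taken from the ALTER TABLE example later in the article.

```sql
-- Hypothetical sketch: partitioned table with a table-level compression policy
-- and partition-level tiering policies. Names and bounds are assumptions.
CREATE TABLE invoices (
  invoice_no   NUMBER NOT NULL,
  invoice_date DATE   NOT NULL,
  comments     VARCHAR2(500)
)
ILM ADD POLICY ROW STORE COMPRESS BASIC SEGMENT AFTER 3 MONTHS OF NO MODIFICATION
PARTITION BY RANGE (invoice_date)
(
  PARTITION invoices_2016_q3 VALUES LESS THAN (TO_DATE('01-OCT-2016','DD-MON-YYYY'))
    TABLESPACE medium_storage_ts
    ILM ADD POLICY TIER TO slow_storage_ts SEGMENT AFTER 6 MONTHS OF NO ACCESS,
  PARTITION invoices_2016_q4 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY'))
    TABLESPACE fast_storage_ts
    ILM ADD POLICY TIER TO medium_storage_ts SEGMENT AFTER 3 MONTHS OF NO ACCESS
);
```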
SELECT policy_name,
object_owner,
object_name,
object_type,
inherited_from,
enabled,
deleted
FROM user_ilmobjects
ORDER BY 1;
SQL>
We can also add policies to an existing table. The following example repeats what we saw earlier by creating the table, then applying the ADO ILM policies.
CONN test/test@pdb1
ALTER TABLE invoices MODIFY PARTITION invoices_2016_q4
ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS;
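The table-level compression policy mentioned earlier is not shown; it could be re-added to the existing table with something along these lines (the policy details are assumptions):

```sql
-- Hypothetical table-level compression policy, inherited by all partitions.
ALTER TABLE invoices
  ILM ADD POLICY ROW STORE COMPRESS BASIC SEGMENT AFTER 3 MONTHS OF NO MODIFICATION;
```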
-- Table-level.
ALTER TABLE <table-name> ILM DISABLE POLICY <policy-name>;
ALTER TABLE <table-name> ILM DELETE POLICY <policy-name>;
ALTER TABLE <table-name> ILM DISABLE_ALL;
ALTER TABLE <table-name> ILM DELETE_ALL;
-- Partition-level.
ALTER TABLE <table-name> MODIFY PARTITION <partition-name> ILM DISABLE POLICY <policy-name>;
ALTER TABLE <table-name> MODIFY PARTITION <partition-name> ILM DELETE POLICY <policy-name>;
ALTER TABLE <table-name> MODIFY PARTITION <partition-name> ILM DISABLE_ALL;
ALTER TABLE <table-name> MODIFY PARTITION <partition-name> ILM DELETE_ALL;
The following views are available to display policy details.
{DBA|USER}_ILMDATAMOVEMENTPOLICIES
{DBA|USER}_ILMTASKS
{DBA|USER}_ILMEVALUATIONDETAILS
{DBA|USER}_ILMOBJECTS
{DBA|USER}_ILMPOLICIES
{DBA|USER}_ILMRESULTS
DBA_ILMPARAMETERS
ILM ADO Parameters
The full list of ILM ADO parameters is documented here. They can be displayed using the following query.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
SELECT name,
value
FROM dba_ilmparameters
ORDER BY 1;
NAME VALUE
-------------------- ----------
ENABLED 1
EXECUTION INTERVAL 15
EXECUTION MODE 2
JOB LIMIT 2
POLICY TIME 0
RETENTION TIME 30
TBS PERCENT FREE 25
TBS PERCENT USED 85
SQL>
These parameters can be altered using the DBMS_ILM_ADMIN.CUSTOMIZE_ILM procedure. There is a constant defined in the package for each parameter, with the name matching the parameter name
with the spaces replaced by underscores ("_").
BEGIN
DBMS_ILM_ADMIN.customize_ilm(DBMS_ILM_ADMIN.retention_time, 60);
END;
/
Service-Level Access Control Lists (ACLs) - Database Service Firewall in Oracle Database 12c Release 2 (12.2)
Setup
The LOCAL_REGISTRATION_ADDRESS_lsnr_alias setting must be added to the "listener.ora" file. It should either specify a protocol and group or be set to "ON", which defaults to "IPC" and "oinstall".
# LOCAL_REGISTRATION_ADDRESS_lsnr_alias = (address=(protocol=ipc)(group=oinstall))
# LOCAL_REGISTRATION_ADDRESS_lsnr_alias = ON
LOCAL_REGISTRATION_ADDRESS_LISTENER = ON
The FIREWALL attribute can be added to the listener endpoint to control the action of the database firewall.
Unset : If an ACL is present for the service it is enforced. If no ACL is present for the service, all connections are considered valid.
FIREWALL=ON : Only connections matching an ACL are considered valid. All other connections are rejected.
FIREWALL=OFF : The firewall functionality is disabled, so all connections are considered valid.
If we wanted to force the firewall functionality we might amend the default listener configuration as follows. Remember, the FIREWALL attribute is optional.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol7-122.localdomain)(PORT = 1521)(FIREWALL=ON))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
)
LOCAL_REGISTRATION_ADDRESS_LISTENER = ON
The DBSFWUSER user owns the DBMS_SFW_ACL_ADMIN package, which provides an API to manage service-level access control lists (ACLs). We will be using this API in the following examples.
CONN / AS SYSDBA
BEGIN
DBMS_SERVICE.create_service('my_cdb_service','my_cdb_service');
DBMS_SERVICE.start_service('my_cdb_service');
END;
/
SELECT name,
network_name
FROM cdb_services
ORDER BY 1;
NAME NETWORK_NAME
------------------------------ ------------------------------
SYS$BACKGROUND
SYS$USERS
cdb1 cdb1
cdb1XDB cdb1XDB
my_cdb_service my_cdb_service
pdb1 pdb1
SQL>
The IP_ADD_ACE procedure accepts a service name and a host parameter. The host parameter can be IPv4 or IPv6, and wildcards are allowed. Once the ACL is built it is saved using the COMMIT_ACL
procedure.
CONN / AS SYSDBA
BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('my_cdb_service','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('my_cdb_service','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('pdb1','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('pdb1','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
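The host parameter also supports wildcards, as mentioned above. A hedged example, assuming the "*" wildcard form for an IPv4 subnet (check the exact syntax against the DBMS_SFW_ACL_ADMIN documentation):

```sql
-- Hypothetical: allow any host in the 192.168.56.x subnet to reach the service.
BEGIN
  dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('my_cdb_service','192.168.56.*');
  dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
```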
The IP_ACL table holds all the saved ACLs, while the V$IP_ACL view lists the active ACLs.
SELECT service_name,
host
FROM dbsfwuser.ip_acl
ORDER BY 1, 2;
SERVICE_NAME HOST
------------------------------ ------------------------------
"MY_CDB_SERVICE" 192.168.56.136
"MY_CDB_SERVICE" OL7-122.LOCALDOMAIN
"PDB1" 192.168.56.136
"PDB1" OL7-122.LOCALDOMAIN
SQL>
At the time of writing the V$IP_ACL view seems to have an issue such that the data doesn't respond correctly to the format command of SQL*Plus.
With the ACL in place we can connect to the services from the database server, but not from any other machine. In the example below the SQL*Plus connections from the server work fine, but the
SQLcl connections from a PC fail with an "IO Error: Undefined Error" error.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
USER = sys
URL = jdbc:oracle:thin:@ol7-122.localdomain:1521/my_cdb_service
Error Message = IO Error: Undefined Error
Username? (RETRYING) ('sys/*********@ol7-122.localdomain:1521/my_cdb_service as sysdba'?)
$ ./sql test/test@ol7-122.localdomain:1521/pdb1
USER = test
URL = jdbc:oracle:thin:@ol7-122.localdomain:1521/pdb1
Error Message = IO Error: Undefined Error
Username? (RETRYING) ('test/*********@ol7-122.localdomain:1521/pdb1'?)
We can add an entry for the PC to allow it to connect.
CONN / AS SYSDBA
BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('my_cdb_service','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('pdb1','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
The SQLcl connections from the PC now work as expected.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
CONN / AS SYSDBA
BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('my_cdb_service','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('my_cdb_service','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('my_cdb_service','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('pdb1','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('pdb1','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('pdb1','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
COLUMN host FORMAT A30
SELECT service_name,
host
FROM dbsfwuser.ip_acl
ORDER BY 1, 2;
no rows selected
SQL>
We can stop and remove the test service using the following code.
CONN / AS SYSDBA
BEGIN
DBMS_SERVICE.stop_service('my_cdb_service');
DBMS_SERVICE.delete_service('my_cdb_service');
END;
/
PDB-Level Access Control Lists (ACLs)
PDB-level ACLs allow us to manage access to all services for a PDB, rather than having to name them individually.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
BEGIN
DBMS_SERVICE.create_service('my_pdb_service','my_pdb_service');
DBMS_SERVICE.start_service('my_pdb_service');
END;
/
SELECT name,
network_name
FROM dba_services
ORDER BY 1;
NAME NETWORK_NAME
------------------------------ ------------------------------
my_pdb_service my_pdb_service
pdb1 pdb1
SQL>
The IP_ADD_PDB_ACE procedure accepts a PDB name and a host parameter. The host parameter can be IPv4 or IPv6, and wildcards are allowed. Once the ACL is built it is saved using the COMMIT_ACL
procedure in the normal way.
CONN / AS SYSDBA
BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_pdb_ace('pdb1','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_pdb_ace('pdb1','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
The IP_ACL table holds all the saved ACLs, while the V$IP_ACL view lists the active ACLs.
SELECT service_name,
host
FROM dbsfwuser.ip_acl
ORDER BY 1, 2;
SERVICE_NAME HOST
------------------------------ ------------------------------
"566C59261E6B2CA6E0538838A8C001B3" 192.168.56.136
"566C59261E6B2CA6E0538838A8C001B3" OL7-122.LOCALDOMAIN
"MY_PDB_SERVICE" 192.168.56.136
"MY_PDB_SERVICE" OL7-122.LOCALDOMAIN
"PDB1" 192.168.56.136
"PDB1" OL7-122.LOCALDOMAIN
SQL>
With the ACL in place we can connect to the services from the database server, but not from any other machine. In the example below the SQL*Plus connections from the server work fine, but the
SQLcl connections from a PC fail with an "IO Error: Undefined Error" error.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
USER = sys
URL = jdbc:oracle:thin:@ol7-122.localdomain:1521/my_pdb_service
Error Message = IO Error: Undefined Error
Username? (RETRYING) ('sys/*********@ol7-122.localdomain:1521/my_pdb_service as sysdba'?)
$ ./sql test/test@ol7-122.localdomain:1521/pdb1
USER = test
URL = jdbc:oracle:thin:@ol7-122.localdomain:1521/pdb1
Error Message = IO Error: Undefined Error
Username? (RETRYING) ('test/*********@ol7-122.localdomain:1521/pdb1'?)
We can add an entry for the PC to allow it to connect.
CONN / AS SYSDBA
BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_pdb_ace('pdb1','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
The SQLcl connections from the PC now work as expected.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
CONN / AS SYSDBA
BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_pdb_ace('pdb1','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_pdb_ace('pdb1','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_pdb_ace('pdb1','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
SELECT service_name,
host
FROM dbsfwuser.ip_acl
ORDER BY 1, 2;
no rows selected
SQL>
We can stop and remove the test service using the following code.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
BEGIN
DBMS_SERVICE.stop_service('my_pdb_service');
DBMS_SERVICE.delete_service('my_pdb_service');
END;
/