
Oracle Multitenant

In Oracle 12.2 a container database can run in local undo mode, which means each pluggable database has its own undo tablespace. This means the problems associated with shared undo are no longer present.
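
As a minimal sketch, assuming a 12.2 CDB, the current undo mode can be checked and then switched to local undo while the database is open in UPGRADE mode.

SELECT property_name, property_value
FROM   database_properties
WHERE  property_name = 'LOCAL_UNDO_ENABLED';

SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER DATABASE LOCAL UNDO ON;
SHUTDOWN IMMEDIATE;
STARTUP;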

In 12c the LGWR process spawns two log writer worker processes (slaves) and will spawn more on redo-intensive systems.
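
As a quick illustrative check (the LG00, LG01, ... naming is the usual convention for these workers), the following query against V$PROCESS lists them.

SELECT pname
FROM   v$process
WHERE  pname LIKE 'LG%'
ORDER BY pname;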

Backup and Recovery

There are many more backup and recovery scenarios available when dealing with the multitenant architecture.
Backup and recovery at the CDB-level is similar to that of non-CDB instances and affects all PDBs associated with the CDB.
Backup and recovery at the PDB-level is possible, with some restrictions. Point in time recovery (PITR) of a PDB is possible using an auxiliary instance, similar to tablespace PITR.
A PDB-level PITR impacts on possible flashback database operations at the CDB level.

Flashback Database

In Oracle 12.1 flashback database is only available at the CDB level and therefore affects all PDBs associated with the CDB. As mentioned previously, PITR of a PDB affects the possible flashback database
operations on the CDB.

In Oracle 12.2 flashback of a pluggable database is possible, making flashback database relevant again.
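
A rough sketch of a 12.2 PDB flashback, assuming local undo mode and a restore point named before_change created in the PDB, looks like the following.

-- In the PDB.
ALTER SESSION SET CONTAINER = pdb1;
CREATE RESTORE POINT before_change;

-- From the root container.
ALTER SESSION SET CONTAINER = cdb$root;
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
FLASHBACK PLUGGABLE DATABASE pdb1 TO RESTORE POINT before_change;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;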

Transparent Data Encryption (TDE)

Encryption key management has changed in Oracle database 12c, which affects transparent database encryption (TDE) for both non-CDB and CDB installations.

Under the multitenant architecture, many of the encryption key management operations must be done at both the CDB and PDB level for TDE to work. This also means encryption keys must be exported
and imported during unplug and plugin operations on PDBs.
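
The following is only a rough sketch of those key operations, assuming a password-based keystore; the secret, file path and passwords are placeholders, and the exact clauses should be checked against the TDE documentation for your release.

-- In the source PDB, before the unplug.
ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS
  WITH SECRET "my_secret"
  TO '/tmp/pdb1_keys.exp'
  IDENTIFIED BY MyKeystorePassword1;

-- In the destination PDB, after the plugin.
ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS
  WITH SECRET "my_secret"
  FROM '/tmp/pdb1_keys.exp'
  IDENTIFIED BY MyKeystorePassword1
  WITH BACKUP;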

DBA_% and DBA_%_AE Views


In previous releases and non-CDB databases, the data dictionary view hierarchy from top-to-bottom is DBA > ALL > USER. In multitenant databases an additional layer is added, making it CDB > DBA >
ALL > USER.
When connecting to a PDB, the view hierarchy feels the same, so this shouldn't represent a problem for most people or tools. The problem comes when you connect to the root container, maybe using
"/ AS SYSDBA" or "SYS@CDB AS SYSDBA", and use the DBA views expecting to see all objects. In this case, you will only see the objects relevant to the root container, not all the PDBs.

Features Not Available with Multitenant

The following features are not currently supported under the multitenant architecture in version 12.1.0.2.

DBVERIFY
Data Recovery Advisor
Flashback Pluggable Database (present in 12.2)
Flashback Transaction Backout
Database Change Notification
Continuous Query Notification (CQN)
Client Side Cache
Heat Map
Automatic Data Optimization
Oracle Streams

In version 12.2 that list reads as follows.

Flashback Transaction Query (in both local undo mode and shared undo mode)
Database Recovery Advisor
Oracle Sharding (new feature in 12.2)
If you need any of these features, use a non-CDB architecture until they are supported.

The non-CDB architecture is deprecated in Oracle Database 12c, and may be desupported and unavailable in a release after Oracle Database 12c Release 2. Oracle recommends use of the CDB
architecture.

Lone-PDB is Free
A container database with a single pluggable database, also known as lone-PDB or single tenant, is free and available in all database editions. It is only when you want multiple PDBs in a single CDB that you have to pay for the multitenant option. As such, you can have multiple CDBs on a server, each with a single PDB, without incurring any extra cost.

Using the lone-PDB approach allows you to get used to the multitenant architecture without having to buy the multitenant option.

Patching, Upgrading and Cloning

When the multitenant architecture was first announced, several claims were made about it improving the speed of patches and upgrades because of the ability to unplug a PDB from a CDB running a
previous database version and plug it into a CDB running a newer version. In reality, using unplug/plugin for an upgrade involves both pre-upgrade and post-upgrade steps that mean the total elapsed
time may not be improved by as much as it initially sounded. Even so, depending on the type of patch or upgrade, you may see some benefits to this approach.

Transfers of PDBs between CDBs of the same version using unplug/plugin are incredibly simple and quick!

Cloning is where the multitenant architecture really shines. A clone of a PDB can be performed within the local CDB, or to a remote CDB on the same or a different server. Although cloning using Clonedb and RMAN DUPLICATE is relatively straightforward, cloning a pluggable database is incredibly simple! It might be worth moving to the multitenant architecture just for this feature. Remote clones can also be used to convert non-CDB databases to PDBs.

Multitenant : Create and Configure a Container Database (CDB) in Oracle Database 12c Release 1 (12.1)

Manual Creation

SET VERIFY OFF


connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/cdb1/scripts/CreateDB.log append
startup nomount pfile="/u01/app/oracle/admin/cdb1/scripts/init.ora";

CREATE DATABASE "cdb1"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE '/u01/app/oracle/oradata/cdb1/system01.dbf' SIZE 700M REUSE
AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/u01/app/oracle/oradata/cdb1/sysaux01.dbf' SIZE 550M REUSE
AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '/u01/app/oracle/oradata/cdb1/temp01.dbf' SIZE 20M REUSE
AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE '/u01/app/oracle/oradata/cdb1/undotbs01.dbf' SIZE 200M REUSE
AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/cdb1/redo01.log') SIZE 50M,
GROUP 2 ('/u01/app/oracle/oradata/cdb1/redo02.log') SIZE 50M,
GROUP 3 ('/u01/app/oracle/oradata/cdb1/redo03.log') SIZE 50M
USER SYS IDENTIFIED BY "&&sysPassword" USER SYSTEM IDENTIFIED BY "&&systemPassword"
enable pluggable database
seed file_name_convert=('/u01/app/oracle/oradata/cdb1/system01.dbf','/u01/app/oracle/oradata/cdb1/pdbseed/system01.dbf',
'/u01/app/oracle/oradata/cdb1/sysaux01.dbf','/u01/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf',
'/u01/app/oracle/oradata/cdb1/temp01.dbf','/u01/app/oracle/oradata/cdb1/pdbseed/temp01.dbf',
'/u01/app/oracle/oradata/cdb1/undotbs01.dbf','/u01/app/oracle/oradata/cdb1/pdbseed/undotbs01.dbf');
spool off

Create and Configure a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1)

Create a Pluggable Database (PDB) Manually

Method 1 (12.1.0.2)
ALTER SYSTEM SET db_create_file_dest = '/u02/oradata';
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1
CREATE_FILE_DEST='/u01/app/oracle/oradata';

Method 2
CONN / AS SYSDBA
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb2/');

Method 3
CONN / AS SYSDBA
ALTER SESSION SET PDB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb3/';
CREATE PLUGGABLE DATABASE pdb3 ADMIN USER pdb_adm IDENTIFIED BY Password1;

Commands

COLUMN pdb_name FORMAT A20


SELECT pdb_name, status FROM dba_pdbs ORDER BY pdb_name;
SELECT name, open_mode FROM v$pdbs ORDER BY name;
SHOW PDBS

Unplug a Pluggable Database (PDB) Manually

Before attempting to unplug a PDB, you must make sure it is closed. To unplug the database use the ALTER PLUGGABLE DATABASE command with the UNPLUG INTO clause to specify the location of the
XML metadata file.

ALTER PLUGGABLE DATABASE pdb2 CLOSE;


ALTER PLUGGABLE DATABASE pdb2 UNPLUG INTO '/u01/app/oracle/oradata/cdb1/pdb2/pdb2.xml';
DROP PLUGGABLE DATABASE pdb2 KEEP DATAFILES;

Plugin a Pluggable Database (PDB) Manually

Plugging a PDB into the CDB is similar to creating a new PDB. First check the PDB is compatible with the CDB by calling the DBMS_PDB.CHECK_PLUG_COMPATIBILITY function, passing in the XML metadata file and the name of the PDB you want to create using it.

SET SERVEROUTPUT ON
DECLARE
  l_result BOOLEAN;
BEGIN
  l_result := DBMS_PDB.check_plug_compatibility(
                pdb_descr_file => '/u01/app/oracle/oradata/cdb1/pdb2/pdb2.xml',
                pdb_name       => 'pdb2');

  IF l_result THEN
    DBMS_OUTPUT.PUT_LINE('compatible');
  ELSE
    DBMS_OUTPUT.PUT_LINE('incompatible');
  END IF;
END;
/
compatible

PL/SQL procedure successfully completed.

SQL>


CREATE PLUGGABLE DATABASE pdb5 USING '/u01/app/oracle/oradata/cdb1/pdb2/pdb2.xml'
  FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb2/','/u01/app/oracle/oradata/cdb1/pdb5/');

Same container
CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb1/pdb2/pdb2.xml'
NOCOPY
TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb2 OPEN READ WRITE;

Clone a Pluggable Database (PDB) Manually


Cloning an existing local PDB is similar to creating a new PDB from the seed PDB, except now we are using a non-seed PDB as the source, which we have to identify using the FROM clause.
If you are using 12.1, or 12.2 without local undo mode, make sure the source PDB is open in READ ONLY mode.

-- Setting the source to read-only is not necessary for Oracle 12.2 when running in local undo mode.


ALTER PLUGGABLE DATABASE pdb3 CLOSE;
ALTER PLUGGABLE DATABASE pdb3 OPEN READ ONLY;

CREATE PLUGGABLE DATABASE pdb4 FROM pdb3
  FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb3/','/u01/app/oracle/oradata/cdb1/pdb4/');

ALTER PLUGGABLE DATABASE pdb4 OPEN READ WRITE;

-- Switch the source PDB back to read/write if you made it read-only.


ALTER PLUGGABLE DATABASE pdb3 CLOSE;
ALTER PLUGGABLE DATABASE pdb3 OPEN READ WRITE;

Clone a Pluggable Database (PDB) Manually (Metadata Only : NO DATA)


The 12.1.0.2 patchset introduced the ability to do a metadata-only clone. Adding the NO DATA clause when cloning a PDB signifies that only the metadata for the user-created objects should be cloned,
not the data in the tables and indexes.
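
A minimal sketch of such a clone (the pdb7 name and paths are just examples) is a normal clone statement with the NO DATA clause added.

CREATE PLUGGABLE DATABASE pdb7 FROM pdb1 NO DATA
  FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb1/pdb7/');

ALTER PLUGGABLE DATABASE pdb7 OPEN READ WRITE;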

Delete a Pluggable Database (PDB) Manually


When dropping a pluggable database, you must decide whether to keep or drop the associated datafiles. The PDBs must be closed before being dropped.

ALTER PLUGGABLE DATABASE pdb2 CLOSE;


DROP PLUGGABLE DATABASE pdb2 KEEP DATAFILES;

ALTER PLUGGABLE DATABASE pdb3 CLOSE;


DROP PLUGGABLE DATABASE pdb3 INCLUDING DATAFILES;

ALTER PLUGGABLE DATABASE pdb4 CLOSE;


DROP PLUGGABLE DATABASE pdb4 INCLUDING DATAFILES;

Clone a Remote Non-CDB


The 12.1.0.2 patchset introduced the ability to create a PDB as a clone of a remote non-CDB, accessed over a database link.
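
A rough sketch of that remote clone, assuming a database link called clone_link from the CDB to the non-CDB (open read-only) and example names and paths, looks like the following; the new PDB still needs the noncdb_to_pdb.sql script run against it before it can be used normally.

-- In the CDB.
CREATE DATABASE LINK clone_link
  CONNECT TO system IDENTIFIED BY MyPassword1 USING 'db12c';

CREATE PLUGGABLE DATABASE pdb8 FROM db12c@clone_link
  FILE_NAME_CONVERT=('/u01/app/oracle/oradata/db12c/','/u01/app/oracle/oradata/cdb1/pdb8/');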

Using DBMS_PDB
The DBMS_PDB package allows you to generate an XML metadata file from a non-CDB 12c database, effectively allowing it to be described in the same way as an unplugged PDB. This allows the non-CDB to be plugged into an existing CDB as a PDB.

export ORACLE_SID=db12c
sqlplus / as sysdba

SHUTDOWN IMMEDIATE;
STARTUP OPEN READ ONLY

BEGIN
DBMS_PDB.DESCRIBE(
pdb_descr_file => '/tmp/db12c.xml');
END;
/

export ORACLE_SID=db12c
sqlplus / as sysdba

SHUTDOWN IMMEDIATE;

export ORACLE_SID=cdb1
sqlplus / as sysdba

CREATE PLUGGABLE DATABASE pdb6 USING '/tmp/db12c.xml'
  COPY
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/db12c/', '/u01/app/oracle/oradata/cdb1/pdb6/');

Switch to the PDB container and run the "$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql" script to clean up the new PDB, removing any items that should not be present in a PDB.
ALTER SESSION SET CONTAINER=pdb6;

@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
ALTER SESSION SET CONTAINER=pdb6;
ALTER PLUGGABLE DATABASE OPEN;

SELECT name, open_mode FROM v$pdbs;


NAME OPEN_MODE
------------------------------ ----------
PDB6 READ WRITE

1 row selected.

The non-CDB has now been converted to a PDB.

Using Data Pump (expdp, impdp)


A simple option is to export the data from the non-CDB and import it into a newly created PDB directly. Provided the import connects using a service pointing to the relevant PDB, this is no different to any other data transfer using Data Pump.
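
As an illustration (the schema, directory and credentials below are placeholders), the transfer is just a normal schema-mode export from the non-CDB followed by an import over a service that points to the PDB.

expdp system/MyPassword1@db12c schemas=SCOTT directory=TEST_DIR dumpfile=scott.dmp logfile=expdp_scott.log

impdp system/MyPassword1@pdb1 schemas=SCOTT directory=TEST_DIR dumpfile=scott.dmp logfile=impdp_scott.log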

If the non-CDB is version 11.2.0.3 onward, you can consider using Transport Database, as described here. If the non-CDB is pre-11.2.0.3, then you can still consider using transportable tablespaces.

Using Replication
Another alternative is to use a replication product like Oracle GoldenGate to replicate the data from the non-container database to a pluggable database.

Patching Considerations
If your instances are not at the same patch level, you will get PDB violations visible in the PDB_PLUG_IN_VIOLATIONS view. If the destination is at a higher patch level than the source, simply run the
datapatch utility on the destination instance in the normal way. It will determine what work needs to be done.

cd $ORACLE_HOME/OPatch
./datapatch -verbose

Connecting to Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)

$ export ORACLE_SID=cdb1
$ sqlplus / as sysdba
SQL> CONN system/password
Connected.
The V$SERVICES view can be used to display the available services from the database.

COLUMN name FORMAT A30

SELECT name, pdb
FROM   v$services
ORDER BY name;

NAME                           PDB
------------------------------ ------------------------------
SYS$BACKGROUND                 CDB$ROOT
SYS$USERS                      CDB$ROOT
cdb1                           CDB$ROOT
cdb1XDB                        CDB$ROOT
pdb1                           PDB1
pdb2                           PDB2

6 rows selected.

The lsnrctl utility allows you to display the available services from the command line.
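
For example, either of the following lists the services registered with the default listener.

$ lsnrctl status
$ lsnrctl services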

SQL> -- EZCONNECT
SQL> CONN system/password@//localhost:1521/cdb1
Connected.
SQL>

SQL> -- tnsnames.ora
SQL> CONN system/password@cdb1
Connected.
SQL>

CDB1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol6-121.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cdb1)
)
)
Displaying the Current Container
The SHOW CON_NAME and SHOW CON_ID commands in SQL*Plus display the current container name and ID respectively.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
CDB$ROOT
SQL>

SQL> SHOW CON_ID

CON_ID
------------------------------
1
SELECT SYS_CONTEXT('USERENV', 'CON_NAME')
FROM dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT

SQL>

SELECT SYS_CONTEXT('USERENV', 'CON_ID')
FROM   dual;

SYS_CONTEXT('USERENV','CON_ID')
--------------------------------------------------------------------------------
1

Switching Between Containers


When logged in to the CDB as an appropriately privileged user, the ALTER SESSION command can be used to switch between containers within the container database.

SQL> ALTER SESSION SET container = pdb1;

Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDB1
SQL> ALTER SESSION SET container = cdb$root;

Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
CDB$ROOT
SQL>

PDB users with the SYSDBA, SYSOPER, SYSBACKUP, or SYSDG privilege can connect to a closed PDB. All other PDB users can only connect when the PDB is open. As with regular databases, the PDB
users require the CREATE SESSION privilege to enable connections.
When attempting to connect to a PDB using the SID format, you will receive the following error.

ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
Ideally, you would correct the connect string to use services instead of SIDs, but if that is a problem the USE_SID_AS_SERVICE_listener_name listener parameter can be used.

Edit the "$ORACLE_HOME/network/admin/listener.ora" file, adding the following entry, with the "listener" name matching that used by your listener.

USE_SID_AS_SERVICE_listener=on
Reload or restart the listener.

$ lsnrctl reload
Now both of the following connection attempts will be successful as any SIDs will be treated as services.

jdbc:oracle:thin:@ol6-121:1521:pdb1
jdbc:oracle:thin:@ol6-121:1521/pdb1

Data Pump Connections (expdp, impdp)


Connections to the expdp and impdp utilities are unchanged, provided you specify a service.

expdp username/password@service ...


expdp \"username/password@service as sysdba\" ...

impdp username/password@service ...


impdp \"username/password@service as sysdba\" ...
Connections as SYSDBA must be to a common user. For example.

expdp scott/tiger@pdb1 tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log


expdp \"sys/SysPassword1@pdb1 as sysdba\" tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
expdp \"c##myuser/MyPassword1@pdb1 as sysdba\" tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

Startup and Shutdown Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)

Container Database (CDB)


Startup and shutdown of the container database is the same as it has always been for regular instances. The SQL*Plus STARTUP and SHUTDOWN commands are available when connected to the CDB as
a privileged user. Some typical values are shown below.

STARTUP [NOMOUNT | MOUNT | RESTRICT | UPGRADE | FORCE | READ ONLY]


SHUTDOWN [IMMEDIATE | ABORT]

Pluggable Database (PDB)


Pluggable databases can be started and stopped using SQL*Plus commands or the ALTER PLUGGABLE DATABASE command.

SQL*Plus Commands
The following SQL*Plus commands are available to start and stop a pluggable database, when connected to that pluggable database as a privileged user.

STARTUP FORCE;
STARTUP OPEN READ WRITE [RESTRICT];
STARTUP OPEN READ ONLY [RESTRICT];
STARTUP UPGRADE;
SHUTDOWN [IMMEDIATE];

ALTER PLUGGABLE DATABASE


The ALTER PLUGGABLE DATABASE command can be used from the CDB or the PDB.

The following commands are available to open and close the current PDB when connected to the PDB as a privileged user.

ALTER PLUGGABLE DATABASE OPEN READ WRITE [RESTRICTED] [FORCE];


ALTER PLUGGABLE DATABASE OPEN READ ONLY [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE OPEN UPGRADE [RESTRICTED];
ALTER PLUGGABLE DATABASE CLOSE [IMMEDIATE];

ALTER PLUGGABLE DATABASE OPEN READ ONLY FORCE;


ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE OPEN READ WRITE;


ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

The following commands are available to open and close one or more PDBs when connected to the CDB as a privileged user.

ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ WRITE [RESTRICTED] [FORCE];


ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ ONLY [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN UPGRADE [RESTRICTED];
ALTER PLUGGABLE DATABASE <pdb-name-clause> CLOSE [IMMEDIATE];

The <pdb-name-clause> clause can be any of the following:

One or more PDB names, specified as a comma-separated list.
The ALL keyword to indicate all PDBs.
The ALL EXCEPT keywords, followed by one or more PDB names in a comma-separated list, to indicate a subset of PDBs.
Some examples are shown below.

ALTER PLUGGABLE DATABASE pdb1, pdb2 OPEN READ ONLY FORCE;


ALTER PLUGGABLE DATABASE pdb1, pdb2 CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE ALL OPEN;


ALTER PLUGGABLE DATABASE ALL CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 OPEN;


ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 CLOSE IMMEDIATE;

Prior to 12.1.0.2 there was no built-in way to reopen PDBs after a CDB restart, so a common approach was a system trigger that opens them on startup. You can customise the trigger if you don't want all of your PDBs to start, as shown in the sketch below.
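
A sketch of that trigger approach (commonly used before SAVE STATE arrived in 12.1.0.2) looks like this; adjust the ALTER statement if only some PDBs should be opened.

CREATE OR REPLACE TRIGGER open_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END open_pdbs;
/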

Preserve PDB Startup State (12.1.0.2 onward)


The 12.1.0.2 patchset introduced the ability to preserve the startup state of PDBs through a CDB restart. This is done using the ALTER PLUGGABLE DATABASE command.

We will start off by looking at the normal result of a CDB restart. Notice the PDBs are in READ WRITE mode before the restart, but in MOUNTED mode after it.

SELECT name, open_mode FROM v$pdbs;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 READ WRITE

SQL>

SHUTDOWN IMMEDIATE;
STARTUP;

SELECT name, open_mode FROM v$pdbs;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 MOUNTED
PDB2 MOUNTED

SQL>
Next, we open both pluggable databases, but only save the state of PDB1.

ALTER PLUGGABLE DATABASE pdb1 OPEN;


ALTER PLUGGABLE DATABASE pdb2 OPEN;
ALTER PLUGGABLE DATABASE pdb1 SAVE STATE;
The DBA_PDB_SAVED_STATES view displays information about the saved state of containers.

COLUMN con_name FORMAT A20


COLUMN instance_name FORMAT A20


SELECT con_name, instance_name, state FROM dba_pdb_saved_states;

CON_NAME             INSTANCE_NAME        STATE
-------------------- -------------------- --------------
PDB1                 cdb1                 OPEN

SQL>
Restarting the CDB now gives us a different result.

SELECT name, open_mode FROM v$pdbs;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 READ WRITE

SQL>

SHUTDOWN IMMEDIATE;
STARTUP;

SELECT name, open_mode FROM v$pdbs;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 MOUNTED

SQL>
The saved state can be discarded using the following statement.

ALTER PLUGGABLE DATABASE pdb1 DISCARD STATE;

COLUMN con_name FORMAT A20


COLUMN instance_name FORMAT A20

SELECT con_name, instance_name, state FROM dba_pdb_saved_states;

no rows selected

SQL>

The state is only saved and visible in the DBA_PDB_SAVED_STATES view if the container is in READ ONLY or READ WRITE mode. The ALTER PLUGGABLE DATABASE ... SAVE STATE command does not error
when run against a container in MOUNTED mode, but nothing is recorded, as this is the default state after a CDB restart.
Like other examples of the ALTER PLUGGABLE DATABASE command, PDBs can be identified individually, as a comma-separated list, or using the ALL or ALL EXCEPT keywords.
The INSTANCES clause can be added when used in RAC environments. The clause can identify instances individually, as a comma-separated list, or using the ALL or ALL EXCEPT keywords. Regardless of the INSTANCES clause, the SAVE/DISCARD STATE commands only affect the current instance.
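
As a sketch of the RAC usage (the instance names are examples and the exact syntax should be checked against the documentation), the INSTANCES clause is appended to the SAVE STATE command.

ALTER PLUGGABLE DATABASE pdb1 SAVE STATE INSTANCES=ALL;
ALTER PLUGGABLE DATABASE pdb1 SAVE STATE INSTANCES=('cdb11','cdb12');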

Configure Instance Parameters and Modify Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)

Configure Instance Parameters in a CDB (ALTER SYSTEM)


Configuring instance parameters for a CDB is not much different than it was for non-CDB databases. The ALTER SYSTEM command is used to set initialization parameters, with some database
configuration modified using the ALTER DATABASE command.

When connected as a privileged user and pointing to the root container, any ALTER SYSTEM command will by default be directed at just the root container. This means the following two commands are
functionally equivalent in this context.

ALTER SYSTEM SET parameter_name=value;


ALTER SYSTEM SET parameter_name=value CONTAINER=CURRENT;
In addition to the default action, an initialization parameter change from the root container can target all containers using the following syntax.

ALTER SYSTEM SET parameter_name=value CONTAINER=ALL;


By using CONTAINER=ALL you are instructing the PDBs that they should inherit the specific parameter value from the root container. Unless overridden by a local setting for the same parameter, any
subsequent local changes to the root container for this specific parameter will also be inherited by the PDBs.

The PDBs are able to override some parameter settings by issuing a local ALTER SYSTEM call from the container. See documentation here.

Configure Instance Parameters in a PDB (ALTER SYSTEM)


In the previous section we mentioned that instance parameters can be set for all PDBs belonging to the CDB by using the CONTAINER=ALL clause of the ALTER SYSTEM command from the root container.
Even when this inheritance is set, the local PDB can override the setting using a local ALTER SYSTEM call. Only a subset of the initialization parameters can be modified locally in the PDB. These can be
displayed using the following query.

COLUMN name FORMAT A35


COLUMN value FORMAT A35

SELECT name, value
FROM   v$system_parameter
WHERE  ispdb_modifiable = 'TRUE'
ORDER BY name;

To make a local PDB change, make sure you are either connected directly to a privileged user in the PDB, or to a privileged common user who has their container pointing to the PDB in question. As mentioned previously, if the CONTAINER clause is not mentioned, the current container is assumed, so the following ALTER SYSTEM commands are functionally equivalent.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

ALTER SYSTEM SET parameter_name=value;


ALTER SYSTEM SET parameter_name=value CONTAINER=CURRENT;
Instance-level parameter changes in the root container are stored in the SPFILE in the normal way. When you change PDB-specific initialization parameters in the PDB they are not stored in the SPFILE.
Instead they are saved in the PDB_SPFILE$ table. See documentation here.

Modify a CDB (ALTER DATABASE)


From a CDB perspective, the ALTER DATABASE command is similar to that of a non-CDB database. You just need to understand the scope of the changes you are making. Some ALTER DATABASE commands applied to the CDB will by definition affect all PDBs plugged into the CDB. Others target just the root container itself. The scoping of the ALTER DATABASE command is shown in a table in the documentation here.

Modify a PDB (ALTER PLUGGABLE DATABASE)


Modifying a PDB is done by pointing to the relevant container and using the ALTER PLUGGABLE DATABASE command, but for backward compatibility reasons the ALTER DATABASE command will work for most of the possible modifications. Not surprisingly, the possible modifications available to a PDB are a subset of those possible for a CDB or non-CDB database.

Remember, to target the PDB you must either connect directly to a privileged user using a service pointing to the PDB, or connect to the root container and switch to the PDB container. Some of the
possible PDB modifications are shown below.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

-- Default edition for PDB.


ALTER PLUGGABLE DATABASE DEFAULT EDITION = ora$base;

-- Default tablespace type for PDB.


ALTER PLUGGABLE DATABASE SET DEFAULT BIGFILE TABLESPACE;
ALTER PLUGGABLE DATABASE SET DEFAULT SMALLFILE TABLESPACE;

-- Default tablespaces for PDB.


ALTER PLUGGABLE DATABASE DEFAULT TABLESPACE users;
ALTER PLUGGABLE DATABASE DEFAULT TEMPORARY TABLESPACE temp;

-- Change the global name. This will change the container name and the
-- name of the default service registered with the listener.
ALTER PLUGGABLE DATABASE OPEN RESTRICTED FORCE;
ALTER PLUGGABLE DATABASE RENAME GLOBAL_NAME TO pdb1a.localdomain;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN;

-- Time zone for PDB.


ALTER PLUGGABLE DATABASE SET TIME_ZONE='GMT';

-- Make datafiles in the PDB offline/online and make storage changes.


ALTER PLUGGABLE DATABASE DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf' OFFLINE;
ALTER PLUGGABLE DATABASE DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf' ONLINE;

ALTER PLUGGABLE DATABASE DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf'
  RESIZE 1G AUTOEXTEND ON NEXT 1M;

-- Supplemental logging for PDB.


ALTER PLUGGABLE DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER PLUGGABLE DATABASE DROP SUPPLEMENTAL LOG DATA;
In addition there is a mechanism to control the maximum size of the PDB and the amount of the shared temp space it can use.

Thanks to Pavel Rabel for pointing out the problem with this shared temporary tablespace, as described in the following MOS note: PDB to Use Global CDB (ROOT) Temporary Tablespace Functionality is Missing (Doc ID 2004595.1).

-- Limit the total storage of the PDB (datafiles and local temp files).
ALTER PLUGGABLE DATABASE STORAGE (MAXSIZE 5G);

-- Limit the amount of temp space used in the shared temp files.
ALTER PLUGGABLE DATABASE STORAGE (MAX_SHARED_TEMP_SIZE 2G);

-- Combine the two.


ALTER PLUGGABLE DATABASE STORAGE (MAXSIZE 5G MAX_SHARED_TEMP_SIZE 2G);

-- Remove the limits.


ALTER PLUGGABLE DATABASE STORAGE UNLIMITED;

Manage Tablespaces in a Container Database (CDB) and Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1)

Manage Tablespaces in a CDB


Management of tablespaces in a container database (CDB) is no different to that of a non-CDB database. Provided you are logged in as a privileged user and pointing to the root container, the usual
commands are all available.

CONN / AS SYSDBA

SQL> SHOW CON_NAME

CON_NAME
------------------------------
CDB$ROOT

SQL>

CREATE TABLESPACE dummy
  DATAFILE '/u01/app/oracle/oradata/cdb1/dummy01.dbf' SIZE 1M
  AUTOEXTEND ON NEXT 1M;

Tablespace created.

SQL>

ALTER TABLESPACE dummy ADD
  DATAFILE '/u01/app/oracle/oradata/cdb1/dummy02.dbf' SIZE 1M
  AUTOEXTEND ON NEXT 1M;

Tablespace altered.

SQL>

DROP TABLESPACE dummy INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

SQL>

Manage Tablespaces in a PDB


The same tablespace management commands are available from a pluggable database (PDB), provided you are pointing to the correct container. You can connect using a common user then switch to
the correct container.

SQL> CONN / AS SYSDBA


Connected.
SQL> ALTER SESSION SET CONTAINER = pdb1;

Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDB1
SQL>
Alternatively, connect directly to the PDB as a local user with sufficient privilege.

SQL> CONN pdb_admin@pdb1


Enter password:
Connected.
SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDB1
SQL>
Once pointed to the correct container, tablespaces can be managed using the same commands you have always used. Make sure you put the datafiles in a suitable location for the PDB.

CREATE TABLESPACE dummy
  DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/dummy01.dbf' SIZE 1M
  AUTOEXTEND ON NEXT 1M;

Tablespace created.

SQL>

ALTER TABLESPACE dummy ADD
  DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/dummy02.dbf' SIZE 1M
  AUTOEXTEND ON NEXT 1M;

Tablespace altered.

SQL>

DROP TABLESPACE dummy INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

SQL>

Undo Tablespaces
Management of the undo tablespace in a CDB is unchanged from that of a non-CDB database.

In contrast, a PDB cannot have its own undo tablespace. Instead, it uses the undo tablespace belonging to the CDB. If we connect to a PDB, we can see that no undo tablespace is visible.

CONN pdb_admin@pdb1

SELECT tablespace_name FROM dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS

SQL>
But we can see the datafile associated with the CDB undo tablespace.

SELECT name FROM v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/undotbs01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf

SQL>

SELECT name FROM v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/pdb1/temp01.dbf

SQL>

Temporary Tablespaces
Management of the temporary tablespace in a CDB is unchanged from that of a non-CDB database.

A PDB can either have its own temporary tablespace, or if it is created without a temporary tablespace, it can share the temporary tablespace of the CDB.

CONN pdb_admin@pdb1

CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '/u01/app/oracle/oradata/cdb1/pdb1/temp02.dbf' SIZE 5M
  AUTOEXTEND ON NEXT 1M;

Tablespace created.

SQL>

DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

SQL>
Default Tablespaces

Setting the default tablespace and default temporary tablespace for a CDB is unchanged compared to a non-CDB database.

There are two ways to set the default tablespace and default temporary tablespace for a PDB. The ALTER PLUGGABLE DATABASE command is the recommended way.

CONN pdb_admin@pdb1
ALTER PLUGGABLE DATABASE DEFAULT TABLESPACE users;
ALTER PLUGGABLE DATABASE DEFAULT TEMPORARY TABLESPACE temp;
For backwards compatibility, it is also possible to use the ALTER DATABASE command.

CONN pdb_admin@pdb1
ALTER DATABASE DEFAULT TABLESPACE users;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

Manage Users and Privileges For Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)

When connected to a multitenant database the management of users and privileges is a little different to traditional Oracle environments. In multitenant environments there are two types of user.

Common User : The user is present in all containers (root and all PDBs).
Local User : The user is only present in a specific PDB. The same username can be present in multiple PDBs, but they are unrelated.
Likewise, there are two types of roles.

Common Role : The role is present in all containers (root and all PDBs).
Local Role : The role is only present in a specific PDB. The same role name can be used in multiple PDBs, but they are unrelated.
Some DDL statements have a CONTAINER clause added to allow them to be directed to the current container or all containers. Its usage will be demonstrated in the sections below.

Create Common Users

When creating a common user the following requirements must all be met.

You must be connected to a common user with the CREATE USER privilege.
The current container must be the root container.
The username for the common user must be prefixed with "C##" or "c##" and contain only ASCII or EBCDIC characters.
The username must be unique across all containers.
The DEFAULT TABLESPACE, TEMPORARY TABLESPACE, QUOTA and PROFILE must all reference objects that exist in all containers.
You can either specify the CONTAINER=ALL clause, or omit it, as this is the default setting when the current container is the root.
The following example shows how to create common users with and without the CONTAINER clause from the root container.

CONN / AS SYSDBA

-- Create the common user using the CONTAINER clause.


CREATE USER c##test_user1 IDENTIFIED BY password1 CONTAINER=ALL;
GRANT CREATE SESSION TO c##test_user1 CONTAINER=ALL;

-- Create the common user using the default CONTAINER setting.


CREATE USER c##test_user2 IDENTIFIED BY password1;
GRANT CREATE SESSION TO c##test_user2;

Create Local Users

When creating a local user the following requirements must all be met.

You must be connected to a user with the CREATE USER privilege.


The username for the local user must not be prefixed with "C##" or "c##".
The username must be unique within the PDB.
You can either specify the CONTAINER=CURRENT clause, or omit it, as this is the default setting when the current container is a PDB.
The following example shows how to create local users with and without the CONTAINER clause from the root container.

CONN / AS SYSDBA

-- Switch container while connected to a common user.


ALTER SESSION SET CONTAINER = pdb1;

-- Create the local user using the CONTAINER clause.


CREATE USER test_user3 IDENTIFIED BY password1 CONTAINER=CURRENT;
GRANT CREATE SESSION TO test_user3 CONTAINER=CURRENT;

-- Connect to a privileged user in the PDB.


CONN system/password@pdb1

-- Create the local user using the default CONTAINER setting.


CREATE USER test_user4 IDENTIFIED BY password1;
GRANT CREATE SESSION TO test_user4;
If a local user is to be used as a DBA user, it requires the PDB_DBA role granted locally to it.
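
For example, while connected to the PDB, a grant like the following makes the local user created above a PDB administrator.

GRANT PDB_DBA TO test_user4;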

Create Common Roles

Similar to users described previously, roles can be common or local. All Oracle-supplied roles are common and therefore available in the root container and all PDBs. Common roles can be created,
provided the following conditions are met.

You must be connected to a common user with CREATE ROLE and the SET CONTAINER privileges granted commonly.
The current container must be the root container.
The role name for the common role must be prefixed with "C##" or "c##" and contain only ASCII or EBCDIC characters.
The role name must be unique across all containers.
The role is created with the CONTAINER=ALL clause.
The following example shows how to create a common role and grant it to a common and local user.

CONN / AS SYSDBA

-- Create the common role.


CREATE ROLE c##test_role1;
GRANT CREATE SESSION TO c##test_role1;

-- Grant it to a common user.


GRANT c##test_role1 TO c##test_user1 CONTAINER=ALL;

-- Grant it to a local user.


ALTER SESSION SET CONTAINER = pdb1;
GRANT c##test_role1 TO test_user3;
Only common operations can be granted to common roles. When the common role is granted to a local user, the privileges are limited to that specific user in that specific PDB.

Create Local Roles

Local roles are created in a similar manner to pre-12c databases. Each PDB can have roles with matching names, since the scope of a local role is limited to the current PDB. The following conditions must be met.

You must be connected to a user with the CREATE ROLE privilege.


If you are connected to a common user, the container must be set to the local PDB.
The role name for the local role must not be prefixed with "C##" or "c##".
The role name must be unique within the PDB.
The following example shows how to create a local role and grant it to a common user and a local user.

CONN / AS SYSDBA

-- Switch container.
ALTER SESSION SET CONTAINER = pdb1;

-- Alternatively, connect to a local or common user


-- with the PDB service.
-- CONN system/password@pdb1

-- Create the local role.


CREATE ROLE test_role1;
GRANT CREATE SESSION TO test_role1;


-- Grant it to a common user.


GRANT test_role1 TO c##test_user1;

-- Grant it to a local user.


GRANT test_role1 TO test_user3;
When a local role is granted to a common user, the privileges granted via the local role are only valid when the common user has its container set to the relevant PDB.

Granting Roles and Privileges to Common and Local Users


The rules for granting privileges and roles can seem a little confusing at first. Just remember, if you connect to a PDB and only deal with local users and roles, everything feels exactly the same as pre-12c
databases. It's only when you start to consider the scope of common users and roles that things become complicated.

The basic difference between a local and common grant is the value used by the CONTAINER clause.

-- Common grants.
CONN / AS SYSDBA

GRANT CREATE SESSION TO c##test_user1 CONTAINER=ALL;


GRANT CREATE SESSION TO c##test_role1 CONTAINER=ALL;
GRANT c##test_role1 TO c##test_user1 CONTAINER=ALL;

-- Local grants.
CONN system/password@pdb1
GRANT CREATE SESSION TO test_user3;
GRANT CREATE SESSION TO test_role1;
GRANT test_role1 TO test_user3;

Backup and Recovery of a Container Database (CDB) and a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1)

RMAN Connections
Unless stated otherwise, this article assumes connections to RMAN are using OS authentication. This means you are connecting to the root container in the CDB with "AS SYSDBA" privilege.

$ export ORAENV_ASK=NO
$ export ORACLE_SID=cdb1
$ . oraenv
The Oracle base remains unchanged with value /u01/app/oracle
$ export ORAENV_ASK=YES

$ rman target=/
Recovery Manager: Release 12.1.0.1.0 - Production on Sun Dec 22 17:03:20 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

connected to target database: CDB1 (DBID=797615285)

RMAN>
When connecting to a PDB to perform backup and recovery operations, the RMAN connection will look like the following. Notice the password prompt as no password was entered on the command line.

$ rman target=sys@pdb1
Recovery Manager: Release 12.1.0.1.0 - Production on Mon Dec 23 11:08:35 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

target database Password:


connected to target database: CDB1 (DBID=797615285)

RMAN>

Backup
Container Database (CDB) Backup
Backup of a Container Database (CDB) is essentially the same as a non-Container Database. The main thing to remember is, by doing a full backup of the CDB you are also doing a full backup of all PDBs.

Connect to RMAN using OS authentication and take a full backup using the following command. This means you are connecting to the root container with "AS SYSDBA" privilege.

$ rman target=/

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;


A section of the output from the above backup command is shown below. Notice how the datafiles associated with the CDB (cdb1) and all the PDBs (pdb1, pdb2, pdb$seed) are included in the backup.

Starting backup at 22-DEC-13


using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/u01/app/oracle/oradata/cdb1/sysaux01.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/cdb1/system01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/cdb1/undotbs01.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/cdb1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 22-DEC-13
channel ORA_DISK_1: finished piece 1 at 22-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg4wr40_.bkp tag=TAG20131222T163015 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:15
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
input datafile file number=00008 name=/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
input datafile file number=00010 name=/u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf

channel ORA_DISK_1: starting piece 1 at 22-DEC-13
channel ORA_DISK_1: finished piece 1 at 22-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E45393F0DE5F1A8AE043D200A8C00DFC/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg4z3so_.bkp
tag=TAG20131222T163015 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00030 name=/u01/app/oracle/oradata/pdb2/sysaux01.dbf
input datafile file number=00029 name=/u01/app/oracle/oradata/pdb2/system01.dbf
input datafile file number=00031 name=/u01/app/oracle/oradata/pdb2/pdb2_users01.dbf
channel ORA_DISK_1: starting piece 1 at 22-DEC-13
channel ORA_DISK_1: finished piece 1 at 22-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E4B0CA84B47E6183E043D200A8C0A806/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg50766_.bkp
tag=TAG20131222T163015 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/u01/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/cdb1/pdbseed/system01.dbf
channel ORA_DISK_1: starting piece 1 at 22-DEC-13
channel ORA_DISK_1: finished piece 1 at 22-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E453004B82C71772E043D200A8C08EC5/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg51bmg_.bkp
tag=TAG20131222T163015 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:36
Finished backup at 22-DEC-13

Root Container Backup


A backup of the root container is a backup of the CDB, excluding any of the PDBs.

Connect to RMAN using OS authentication and back up the root container using the following command. This means you are connecting to the root container with "AS SYSDBA" privilege.

$ rman target=/

RMAN> BACKUP DATABASE ROOT;


A section of the output from the above backup command is shown below. Notice how the datafiles associated with the CDB (cdb1) are included, but the PDBs (pdb1, pdb2, pdb$seed) are not included in the backup.

Starting backup at 23-DEC-13


using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/u01/app/oracle/oradata/cdb1/sysaux01.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/cdb1/system01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/cdb1/undotbs01.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/cdb1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 23-DEC-13
channel ORA_DISK_1: finished piece 1 at 23-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2013_12_23/o1_mf_nnndf_TAG20131223T112413_9cj7bxtg_.bkp tag=TAG20131223T112413 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:25
Finished backup at 23-DEC-13
Pluggable Database (PDB) Backup
There are two ways to back up pluggable databases. When connected to RMAN as the root container, you can back up one or more PDBs using the following command.

$ rman target=/

RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;


You can see this includes the datafiles for both referenced PDBs.

Starting backup at 23-DEC-13


using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
input datafile file number=00008 name=/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
input datafile file number=00010 name=/u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf
channel ORA_DISK_1: starting piece 1 at 23-DEC-13
channel ORA_DISK_1: finished piece 1 at 23-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E45393F0DE5F1A8AE043D200A8C00DFC/backupset/2013_12_23/o1_mf_nnndf_TAG20131223T113119_9cj7r8lp_.bkp
tag=TAG20131223T113119 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00030 name=/u01/app/oracle/oradata/pdb2/sysaux01.dbf
input datafile file number=00029 name=/u01/app/oracle/oradata/pdb2/system01.dbf
input datafile file number=00031 name=/u01/app/oracle/oradata/pdb2/pdb2_users01.dbf
channel ORA_DISK_1: starting piece 1 at 23-DEC-13
channel ORA_DISK_1: finished piece 1 at 23-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E4B0CA84B47E6183E043D200A8C0A806/backupset/2013_12_23/o1_mf_nnndf_TAG20131223T113119_9cj7sfbx_.bkp
tag=TAG20131223T113119 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
Finished backup at 23-DEC-13
Alternatively, connect to a specific PDB and issue the following command.

$ rman target=sys@pdb1

RMAN> BACKUP DATABASE;
Being connected to the PDB limits the scope of the backup command to the current PDB only, as shown in the output below.

Starting backup at 23-DEC-13


using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=237 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
input datafile file number=00008 name=/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
input datafile file number=00010 name=/u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf
channel ORA_DISK_1: starting piece 1 at 23-DEC-13
channel ORA_DISK_1: finished piece 1 at 23-DEC-13
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E45393F0DE5F1A8AE043D200A8C00DFC/backupset/2013_12_23/o1_mf_nnndf_TAG20131223T113504_9cj7z9kb_.bkp
tag=TAG20131223T113504 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 23-DEC-13
Tablespace and Datafile Backups
Multiple PDBs in the same CDB can have a tablespace with the same name, for example SYSTEM, SYSAUX and USERS. One way to remove that ambiguity is to connect to the appropriate PDB. Once RMAN is connected to the PDB, the tablespace backup commands are unchanged compared to previous versions.

$ rman target=sys@pdb1

RMAN> BACKUP TABLESPACE system, sysaux, users;


Alternatively, you can remove the ambiguity by qualifying the tablespace name with the PDB name when connected to the root container.

$ rman target=sys@cdb1

RMAN> BACKUP TABLESPACE pdb1:system, pdb1:sysaux, pdb1:users, pdb2:system;


Datafiles have unique file numbers and fully qualified names, so they can be backed up from the root container or the individual PDB.

$ rman target=/

# Or

$ rman target=sys@pdb1

RMAN> BACKUP DATAFILE 8, 9, 10;


If you are connecting to a PDB, only the files belonging to that PDB can be backed up. So for example, when connected to PDB1, we get an error if we try to back up the SYSTEM datafile from the root container.

RMAN> BACKUP DATAFILE 1;

Starting backup at 23-DEC-13


using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 12/23/2013 11:49:35
RMAN-20201: datafile not found in the recovery catalog
RMAN-06010: error while looking up datafile: 1

RMAN>

Complete Recovery
Container Database (CDB) Complete Recovery
Restoring a CDB is similar to restoring a non-CDB database, but remember restoring a whole CDB will restore not only the root container, but all the PDBs also. Likewise a Point In Time Recovery (PITR) of
the whole CDB will bring all PDBs back to the same point in time.

Connect to RMAN using OS authentication and restore the whole CDB using the following restore script. This means you are connecting to the root container with "AS SYSDBA" privilege.

$ rman target=/

RUN {
SHUTDOWN IMMEDIATE; # use abort if this fails
STARTUP MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN;
}

A section of the output from the above restore script is shown below. Notice the datafiles from the CDB (cdb1) and all the PDBs (pdb1, pdb2 and pdb$seed) are all considered during the restore. The
seed PDB is not actually restored because it is read-only and RMAN can see a restore is not necessary.

Starting restore at 22-DEC-13


allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK

skipping datafile 5; already restored to file /u01/app/oracle/oradata/cdb1/pdbseed/system01.dbf


skipping datafile 7; already restored to file /u01/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/cdb1/system01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/cdb1/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/cdb1/undotbs01.dbf

channel ORA_DISK_1: restoring datafile 00006 to /u01/app/oracle/oradata/cdb1/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg4wr40_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg4wr40_.bkp tag=TAG20131222T163015
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:56
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00010 to /u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/E45393F0DE5F1A8AE043D200A8C00DFC/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_
9cg4z3so_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E45393F0DE5F1A8AE043D200A8C00DFC/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg4z3so_.bkp
tag=TAG20131222T163015
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00029 to /u01/app/oracle/oradata/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00030 to /u01/app/oracle/oradata/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00031 to /u01/app/oracle/oradata/pdb2/pdb2_users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/E4B0CA84B47E6183E043D200A8C0A806/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_
9cg50766_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/E4B0CA84B47E6183E043D200A8C0A806/backupset/2013_12_22/o1_mf_nnndf_TAG20131222T163015_9cg50766_.bkp
tag=TAG20131222T163015
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 22-DEC-13

Starting recover at 22-DEC-13


using channel ORA_DISK_1

starting media recovery


media recovery complete, elapsed time: 00:00:10

Finished recover at 22-DEC-13

Statement processed
Root Container Complete Recovery

Rather than recovering the whole CDB, including all PDBs, the root container can be recovered in isolation.

Connect to RMAN using OS authentication and restore the root container using the following restore script. This means you are connecting to the root container with "AS SYSDBA" privilege.

$ rman target=/

RUN {
SHUTDOWN IMMEDIATE; # use abort if this fails
STARTUP MOUNT;
RESTORE DATABASE ROOT;
RECOVER DATABASE ROOT;
# Consider recovering PDBs before opening.
ALTER DATABASE OPEN;
}
The following section of the output from the restore script shows only the root container datafiles are restored and recovered.

Starting restore at 23-DEC-13


allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=247 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore


channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/cdb1/system01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/cdb1/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/cdb1/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00006 to /u01/app/oracle/oradata/cdb1/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/backupset/2013_12_23/o1_mf_nnndf_TAG20131223T112413_9cj7bxtg_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2013_12_23/o1_mf_nnndf_TAG20131223T112413_9cj7bxtg_.bkp tag=TAG20131223T112413
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:26
Finished restore at 23-DEC-13

Starting recover at 23-DEC-13


using channel ORA_DISK_1

starting media recovery


media recovery complete, elapsed time: 00:00:04

Finished recover at 23-DEC-13

It is probably a very bad idea to restore and recover just the root container without doing the same for the PDBs. Any difference in metadata between the two could prove problematic.

Pluggable Database (PDB) Complete Recovery


There are two ways to restore and recover PDBs. From the root container, you can restore and recover one or more PDBs using the following script.

$ rman target=/


RUN {
ALTER PLUGGABLE DATABASE pdb1, pdb2 CLOSE;
RESTORE PLUGGABLE DATABASE pdb1, pdb2;
RECOVER PLUGGABLE DATABASE pdb1, pdb2;
ALTER PLUGGABLE DATABASE pdb1, pdb2 OPEN;
}
When connected directly to a PDB, you can restore and recover the current PDB using a local user with the SYSDBA privilege, as shown in the following script.

$ sqlplus sys@pdb1 as sysdba

CREATE USER admin_user IDENTIFIED BY admin_user;


GRANT CREATE SESSION, PDB_DBA, SYSDBA TO admin_user;
EXIT;

$ rman target=admin_user@pdb1

SHUTDOWN IMMEDIATE;
RESTORE DATABASE;
RECOVER DATABASE;
STARTUP;

In the current release, these RMAN commands produce errors if they are placed inside a RUN block, so issue them individually as shown above.

Tablespace and Datafile Complete Recovery


Due to potential name clashes, restoring a tablespace must be done while connected to the PDB.

$ rman target=sys@pdb1

RUN {
ALTER TABLESPACE users OFFLINE;
RESTORE TABLESPACE users;
RECOVER TABLESPACE users;
ALTER TABLESPACE users ONLINE;
}

Datafile recoveries can be done while connected to the container or directly to the PDB.

$ rman target=/

# Or

$ rman target=sys@pdb1

RUN {
ALTER DATABASE DATAFILE 10 OFFLINE;
RESTORE DATAFILE 10;
RECOVER DATAFILE 10;
ALTER DATABASE DATAFILE 10 ONLINE;
}
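
If you are not sure which absolute file number to use, look it up first. The following query is a minimal sketch; when run from the root container the CON_ID column shows which container each datafile belongs to, while from a PDB only that PDB's files are listed.

COLUMN file_name FORMAT A60

SELECT con_id,
       file_id,
       file_name
FROM cdb_data_files
ORDER BY con_id, file_id;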

Point In Time Recovery (PITR)


Container Database (CDB) Point In Time Recovery (PITR)

Point In Time Recovery (PITR) of a CDB is the same as that of non-CDB instances. Just remember, you are performing a PITR on the CDB and all the PDBs at once.

$ rman target=/

RUN {
SHUTDOWN IMMEDIATE; # use abort if this fails
STARTUP MOUNT;
SET UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE DATABASE;
RECOVER DATABASE;
# Should probably open read-only and check it out first.
ALTER DATABASE OPEN RESETLOGS;
}

Pluggable Database (PDB) Point In Time Recovery (PITR)

Point In Time Recovery (PITR) of a PDB follows a similar pattern to that of a regular database. The PDB is closed, restored and recovered to the required point in time, then opened with the RESETLOGS
option. In this case, the RESETLOGS option does nothing with the logfiles themselves, but creates a new PDB incarnation.

$ rman target=/

RUN {
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
SET UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE PLUGGABLE DATABASE pdb1;
RECOVER PLUGGABLE DATABASE pdb1;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}
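
The effect of the RESETLOGS operation can be confirmed by querying the V$PDB_INCARNATION view from the root container. This is a minimal sketch; the view lists the current and prior incarnations for each PDB, and the exact columns available may vary slightly between versions.

SELECT con_id,
       pdb_incarnation#,
       status
FROM v$pdb_incarnation
ORDER BY con_id, pdb_incarnation#;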

The simplicity of PITR of PDBs hides a certain amount of complexity. For a start, a PDB shares the root container with other PDBs, so a PITR of the root container must be performed. This is done in the
fast recovery area (FRA) provided it is configured. If the FRA is not configured, an AUXILIARY DESTINATION must be specified.

Aside from the FRA space requirement, one other important restriction is relevant. If a point in time recovery of a PDB has been done, it is not possible to directly flashback the database to a time before
the PDB point in time recovery. The workaround for this is discussed later in this article.

Table Point In Time Recovery (PITR) in PDBs

Oracle 12c includes a new RMAN feature which performs point in time recovery of tables using a single command. You can read about this feature and see examples of its use in the following article.

RMAN Table Point In Time Recovery (PITR) in Oracle Database 12c Release 1 (12.1)

The same mechanism is available for recovering tables in PDBs, with a few minor changes. For the feature to work with a PDB, you must connect to the root container as a user with the SYSDBA or SYSBACKUP privilege.

$ rman target=/
Issue the RECOVER TABLE command in a similar way to that shown for a non-CDB database, but include the OF PLUGGABLE DATABASE clause, as well as giving a suitable AUXILIARY DESTINATION
location for the auxiliary database. The following command also uses the REMAP TABLE clause to give the recovered table a new name.

RECOVER TABLE 'TEST'.'T1' OF PLUGGABLE DATABASE pdb1


UNTIL SCN 5695703
AUXILIARY DESTINATION '/u01/aux'
REMAP TABLE 'TEST'.'T1':'T1_PREV';

Alternatively, you can just stop at the point where the recovered table is in a data pump dump file, which you can import manually at a later time. The following example uses the DATAPUMP
DESTINATION, DUMP FILE and NOTABLEIMPORT clauses to achieve this.

RECOVER TABLE 'TEST'.'T1' OF PLUGGABLE DATABASE pdb1


UNTIL SCN 5695703
AUXILIARY DESTINATION '/u01/aux'
DATAPUMP DESTINATION '/u01/export'
DUMP FILE 'test_t1_prev.dmp'
NOTABLEIMPORT;
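
The resulting dump file can be imported later using Data Pump. The following is a minimal sketch, assuming a directory object (the recover_dir name is made up for this example) is created to point at the '/u01/export' location used above and the TEST user exists in the PDB.

-- In the PDB, as a privileged user.
CREATE DIRECTORY recover_dir AS '/u01/export';
GRANT READ, WRITE ON DIRECTORY recover_dir TO test;

$ impdp test/test@pdb1 directory=recover_dir dumpfile=test_t1_prev.dmp tables=TEST.T1 remap_table=TEST.T1:T1_PREV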

Flashback of a Container Database (CDB) in Oracle Database 12c Release 1 (12.1)

This article assumes the following things are in place for the examples to work.

You have a container database (CDB). You can see how to create one here.
Your container database (CDB) has at least one pluggable database (PDB). You can see how to create one here.
You have the flashback database feature enabled on the CDB. You can see how to do that here. A quick status check is sketched after this list.
You have backups of your CDB and PDBs. You can see how to do that here.
With this in place, you can move on to the next sections.
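
As a quick check of the flashback prerequisite, confirm flashback logging is actually enabled before relying on it. This is a minimal sketch, run from the root container; the FLASHBACK_ON column should report YES.

CONN / AS SYSDBA

SELECT flashback_on FROM v$database;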

Flashback of Container Database (CDB)


The basic procedure for performing a flashback database operation on a container database (CDB) is the same as that for a non-CDB database in 12c and previous versions, as described here. So for
example, if we want to flashback the CDB to a point in time 5 minutes ago, we might do one of the following in SQL*Plus.

$ sqlplus / as sysdba

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-(5/24/60);
ALTER DATABASE OPEN RESETLOGS;

-- Open all pluggable databases.


ALTER PLUGGABLE DATABASE ALL OPEN;
Or the following in RMAN.

$ rman target=/

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIME 'SYSDATE-(5/24/60)';
ALTER DATABASE OPEN RESETLOGS;

# Open all pluggable databases.


ALTER PLUGGABLE DATABASE ALL OPEN;
In both cases we connect to the root container as a user with the SYSDBA or SYSBACKUP privilege.

The restrictions on the use of flashback database are similar to those of a non-CDB database, with one extra restriction. If you perform a point in time recovery of a pluggable database (PDB), you cannot
use flashback database to return the CDB to a point in time before that PITR of the PDB took place. This issue and the workaround for it are discussed in the next section.

Point In Time Recovery (PITR) of Pluggable Database (PDB) Restrictions


As mentioned previously, if you perform a point in time recovery of a pluggable database (PDB), you cannot use flashback database to return the CDB to a point in time before that PITR of the PDB took
place. The following example shows this.

Perform a PITR of a PDB to 5 minutes ago.

$ rman target=/

RUN {
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
SET UNTIL TIME "TO_DATE('30-DEC-2013 10:15:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE PLUGGABLE DATABASE pdb1;
RECOVER PLUGGABLE DATABASE pdb1;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}
Then we flashback the CDB to 15 minutes ago.

$ rman target=/

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIME "TO_DATE('30-DEC-2013 10:00:00','DD-MON-YYYY HH24:MI:SS')";
ALTER DATABASE OPEN RESETLOGS;
This results in the following error.

media recovery failed


RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of flashback command at 12/28/2013 23:20:08
ORA-39866: Data files for Pluggable Database PDB1 must be offline to flashback across PDB point-in-time recovery.
The workaround for this is to do the following.

Take a backup of everything (CDB and PDBs). It's always a good idea to take a backup before doing anything major to your database.
Shut down the PDB.
Take all the datafiles for the PDB offline.
Flashback the CDB.
Restore and recover the PDB to the point it was at before the flashback of the CDB.
You can see an example of this below.

$ rman target=/

# Backup everything.
BACKUP DATABASE PLUS ARCHIVELOG;

# Close PDB and take the datafiles offline.


ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb1 DATAFILE ALL OFFLINE;

# Flashback the CDB, along with all the PDBs.


SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIME "TO_DATE('30-DEC-2013 10:00:00','DD-MON-YYYY HH24:MI:SS')";
ALTER DATABASE OPEN RESETLOGS;

# Open all pluggable databases, except pdb1.


ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 OPEN;

# PITR of pdb1.
RUN {
# PDB already closed. No SET UNTIL. We want to recover to the latest time.
#ALTER PLUGGABLE DATABASE pdb1 CLOSE;
#SET UNTIL TIME "TO_DATE('30-DEC-2013 10:15:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE PLUGGABLE DATABASE pdb1;
RECOVER PLUGGABLE DATABASE pdb1;
ALTER PLUGGABLE DATABASE pdb1 DATAFILE ALL ONLINE;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
}
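
Once the restore and recovery of pdb1 completes, it is worth checking that all containers are back in the expected state. A minimal sketch:

SELECT name, open_mode FROM v$pdbs;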

Resource Manager with Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)

Container Database (CDB)


The following sections describe how resource manager can be used to control the resource usage between pluggable databases (PDBs). Resource manager does not currently have the ability to control
memory usage between PDBs.

Create CDB Resource Plan


A CDB resource plan is made up of CDB resource plan directives. The plan directives allocate shares, which define the proportion of the CDB resources available to the PDB, and specific utilization
percentages, which give a finer level of control. CDB resource plans are managed using the DBMS_RESOURCE_MANAGER package. Each plan directive is made up of the following elements:

pluggable_database : The PDB the directive relates to.


shares : The proportion of the CDB resources available to the PDB.
utilization_limit : The percentage of the CDB's available CPU that is available to the PDB.
parallel_server_limit : The percentage of the CDB's available parallel servers (PARALLEL_SERVERS_TARGET initialization parameter) that are available to the PDB.

PDBs without a specific plan directive use the default PDB directive.

The following code creates a new CDB resource plan using the CREATE_CDB_PLAN procedure, then adds two plan directives using the CREATE_CDB_PLAN_DIRECTIVE procedure.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.create_cdb_plan(
plan => l_plan,
comment => 'A test CDB resource plan');

DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb1',
shares => 3,
utilization_limit => 100,
parallel_server_limit => 100);

DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb2',
shares => 3,
utilization_limit => 100,
parallel_server_limit => 100);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
Information about the available CDB resource plans can be queried using the DBA_CDB_RSRC_PLANS view.

COLUMN plan FORMAT A30


COLUMN comments FORMAT A30
COLUMN status FORMAT A10
SET LINESIZE 100

SELECT plan_id,
plan,
comments,
status,
mandatory
FROM dba_cdb_rsrc_plans
WHERE plan = 'TEST_CDB_PLAN';

PLAN_ID PLAN COMMENTS STATUS MAN


---------- ------------------------------ ------------------------------ ---------- ---
92235 TEST_CDB_PLAN A test CDB resource plan NO

SQL>
Information about the CDB resource plan directives can be queried using the DBA_CDB_RSRC_PLAN_DIRECTIVES view.

COLUMN plan FORMAT A30


COLUMN pluggable_database FORMAT A25
SET LINESIZE 100

SELECT plan,
pluggable_database,
shares,
utilization_limit AS util,
parallel_server_limit AS parallel
FROM dba_cdb_rsrc_plan_directives
WHERE plan = 'TEST_CDB_PLAN'
ORDER BY pluggable_database;

PLAN PLUGGABLE_DATABASE SHARES UTIL PARALLEL


------------------------------ ------------------------- ---------- ---------- ----------
TEST_CDB_PLAN ORA$AUTOTASK 90 100
TEST_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 100 100
TEST_CDB_PLAN PDB1 3 100 100
TEST_CDB_PLAN PDB2 3 100 100

SQL>
For the rest of the article the cdb_resource_plans.sql and cdb_resource_plan_directives.sql scripts will be used to display this information.
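
Those scripts are not reproduced here, but a minimal sketch of what cdb_resource_plan_directives.sql could look like is shown below, assuming it simply parameterises the query above with the plan name passed as the first argument.

-- cdb_resource_plan_directives.sql (illustrative sketch only)
SET VERIFY OFF
COLUMN plan FORMAT A30
COLUMN pluggable_database FORMAT A25
SET LINESIZE 100

SELECT plan,
       pluggable_database,
       shares,
       utilization_limit AS util,
       parallel_server_limit AS parallel
FROM dba_cdb_rsrc_plan_directives
WHERE plan = UPPER('&1')
ORDER BY pluggable_database;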

Modify CDB Resource Plan


An existing resource plan is modified by creating, updating or deleting plan directives. The following code uses the CREATE_CDB_PLAN_DIRECTIVE procedure to add a new plan directive to the CDB
resource plan we created previously.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb3',
shares => 1,
utilization_limit => 75,
parallel_server_limit => 75);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

SQL> @cdb_resource_plan_directives.sql TEST_CDB_PLAN

PLAN PLUGGABLE_DATABASE SHARES UTIL PARALLEL


------------------------------ ------------------------- ---------- ---------- ----------
TEST_CDB_PLAN ORA$AUTOTASK 90 100
TEST_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 100 100
TEST_CDB_PLAN PDB1 3 100 100
TEST_CDB_PLAN PDB2 3 100 100
TEST_CDB_PLAN PDB3 1 75 75

SQL>
The UPDATE_CDB_PLAN_DIRECTIVE procedure modifies an existing plan directive.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.update_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb3',
new_shares => 1,
new_utilization_limit => 100,
new_parallel_server_limit => 100);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

SQL> @cdb_resource_plan_directives.sql TEST_CDB_PLAN

PLAN PLUGGABLE_DATABASE SHARES UTIL PARALLEL


------------------------------ ------------------------- ---------- ---------- ----------
TEST_CDB_PLAN ORA$AUTOTASK 90 100
TEST_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 100 100
TEST_CDB_PLAN PDB1 3 100 100
TEST_CDB_PLAN PDB2 3 100 100
TEST_CDB_PLAN PDB3 1 100 100

SQL>

The DELETE_CDB_PLAN_DIRECTIVE procedure deletes an existing plan directive from the CDB resource plan.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.delete_cdb_plan_directive(
plan => l_plan,
pluggable_database => 'pdb3');

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

SQL> @cdb_resource_plan_directives.sql TEST_CDB_PLAN

PLAN PLUGGABLE_DATABASE SHARES UTIL PARALLEL


------------------------------ ------------------------- ---------- ---------- ----------
TEST_CDB_PLAN ORA$AUTOTASK 90 100
TEST_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 100 100
TEST_CDB_PLAN PDB1 3 100 100
TEST_CDB_PLAN PDB2 3 100 100

SQL>

Modify CDB Default Directive


In addition to creating PDB-specific plan directives, the default directive can be amended for a CDB resource plan. The following example uses the UPDATE_CDB_DEFAULT_DIRECTIVE procedure to edit
the default directive for the CDB resource plan.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.update_cdb_default_directive(
plan => l_plan,
new_shares => 1,
new_utilization_limit => 80,
new_parallel_server_limit => 80);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

SQL> @cdb_resource_plan_directives.sql TEST_CDB_PLAN

PLAN PLUGGABLE_DATABASE SHARES UTIL PARALLEL


------------------------------ ------------------------- ---------- ---------- ----------
TEST_CDB_PLAN ORA$AUTOTASK 90 100
TEST_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 80 80
TEST_CDB_PLAN PDB1 3 100 100
TEST_CDB_PLAN PDB2 3 100 100

SQL>
Modify CDB Autotask Directive
There is a plan directive associated with the database autotask functionality. The configuration of this can be altered using the UPDATE_CDB_AUTOTASK_DIRECTIVE procedure.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.update_cdb_autotask_directive(
plan => l_plan,
new_shares => 1,
new_utilization_limit => 75,
new_parallel_server_limit => 75);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

SQL> @cdb_resource_plan_directives.sql TEST_CDB_PLAN

PLAN PLUGGABLE_DATABASE SHARES UTIL PARALLEL


------------------------------ ------------------------- ---------- ---------- ----------
TEST_CDB_PLAN ORA$AUTOTASK 1 75 75
TEST_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 80 80
TEST_CDB_PLAN PDB1 3 100 100
TEST_CDB_PLAN PDB2 3 100 100

SQL>
Enable/Disable Resource Plan
Enabling and disabling resource plans in a CDB is the same as it was in pre-12c instances. Enable a plan by setting the RESOURCE_MANAGER_PLAN parameter to the name of the CDB resource plan, while
connected to the root container.

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'TEST_CDB_PLAN';

System altered.

SQL> SHOW PARAMETER RESOURCE_MANAGER_PLAN

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
resource_manager_plan string TEST_CDB_PLAN
SQL>
To disable the plan, set the RESOURCE_MANAGER_PLAN parameter to another plan, or blank it.

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';

System altered.

SQL> SHOW PARAMETER RESOURCE_MANAGER_PLAN

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
resource_manager_plan string
SQL>
Delete CDB Resource Plan
The DELETE_CDB_PLAN procedure deletes CDB resource plans.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.delete_cdb_plan(plan => l_plan);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

SQL> @cdb_resource_plans.sql

PLAN_ID PLAN COMMENTS STATUS MAN


---------- ------------------------------ ------------------------------ ---------- ---
16774 DEFAULT_CDB_PLAN Default CDB plan YES
16775 DEFAULT_MAINTENANCE_PLAN Default CDB maintenance plan YES
16776 ORA$INTERNAL_CDB_PLAN Internal CDB plan YES
16777 ORA$QOS_CDB_PLAN QOS CDB plan YES

SQL>
Pluggable Database (PDB)
The use of resource manager inside the PDB is essentially unchanged compared to the pre-12c instances. Just remember, you have to be connected to the specific PDB when you set the
RESOURCE_MANAGER_PLAN parameter. You can read about how resource manager works in a PDB or in a pre-12c instance here:

-- Connect to privileged user on PDB.


CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

-- Create a resource plan.


BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area();
DBMS_RESOURCE_MANAGER.create_pending_area();

-- Create plan
DBMS_RESOURCE_MANAGER.create_plan(
plan => 'hybrid_plan',
comment => 'Plan for a combination of high and low priority tasks.');

-- Create consumer groups


DBMS_RESOURCE_MANAGER.create_consumer_group(
consumer_group => 'web_cg',
comment => 'Web based OLTP processing - high priority');

DBMS_RESOURCE_MANAGER.create_consumer_group(
consumer_group => 'batch_cg',
comment => 'Batch processing - low priority');

-- Assign consumer groups to plan and define priorities


DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'hybrid_plan',
group_or_subplan => 'web_cg',
comment => 'High Priority - level 1',
mgmt_p1 => 70);

DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'hybrid_plan',
group_or_subplan => 'batch_cg',
comment => 'Low Priority - level 2',
mgmt_p1 => 20);

DBMS_RESOURCE_MANAGER.create_plan_directive(
plan => 'hybrid_plan',
group_or_subplan => 'OTHER_GROUPS',
comment => 'all other users - level 3',
mgmt_p1 => 10);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/

-- Assign users to consumer groups


BEGIN
DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group(
grantee_name => 'web_user',
consumer_group => 'web_cg',
grant_option => FALSE);

DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group(
grantee_name => 'batch_user',
consumer_group => 'batch_cg',
grant_option => FALSE);

DBMS_RESOURCE_MANAGER.set_initial_consumer_group('web_user', 'web_cg');

DBMS_RESOURCE_MANAGER.set_initial_consumer_group('batch_user', 'batch_cg');
END;
/

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = hybrid_plan;
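
With the plan active, you can check which consumer group the sessions of interest have been mapped to. This is a minimal sketch, assuming the web_user and batch_user accounts created above have active sessions in the PDB.

SELECT username,
       resource_consumer_group
FROM v$session
WHERE username IN ('WEB_USER', 'BATCH_USER');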

Running Scripts Against Container Databases (CDBs) and Pluggable Databases (PDBs) in Oracle Database 12c Release 1 (12.1)

SET CONTAINER
For DBA scripts that perform tasks at the container level, using "/ AS SYSDBA" will work as it did in previous releases. The problem comes when you want to perform a task within the pluggable database.
The simplest way to achieve this is to continue to connect using "/ as SYSDBA", but to set the specific container in your script using the ALTER SESSION SET CONTAINER command.

sqlplus / as sysdba <<EOF

ALTER SESSION SET CONTAINER = pdb1;

-- Perform actions as before...


SHOW CON_NAME;

EXIT;
EOF
To make the script generic, pass the PDB name as a parameter. Save the following code as a script called "set_container_test.sh".

sqlplus / as sysdba <<EOF

ALTER SESSION SET CONTAINER = $1;

-- Perform actions as before...


SHOW CON_NAME;

EXIT;
EOF
Running the script with the PDB name as the first parameter shows the container is being set correctly.

$ chmod u+x set_container_test.sh


$ ./set_container_test.sh pdb1

SQL*Plus: Release 12.1.0.1.0 Production on Fri Apr 18 16:48:51 2014

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> SQL>
Session altered.

SQL> SQL> SQL>


CON_NAME
------------------------------
PDB1
SQL> SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
$

TWO_TASK
An obvious solution when connecting to specific users is to use the TWO_TASK environment variable. Unfortunately this does not work when using "/ AS SYSDBA".

$ export TWO_TASK=pdb1
$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Apr 18 16:54:34 2014

Copyright (c) 1982, 2013, Oracle. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

Enter user-name:
When connecting using a specific username/password combination TWO_TASK works as before.

$ export TWO_TASK=pdb1
$ sqlplus test/test

SQL*Plus: Release 12.1.0.1.0 Production on Fri Apr 18 16:57:46 2014

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Last Successful login time: Wed Apr 02 2014 10:05:22 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> SHOW CON_NAME;

CON_NAME
------------------------------
PDB1
SQL>

Hopefully your scripts do not contain connections with username and password specified, but if they do, adding a service directly to the connection or using the TWO_TASK environment variable will
allow you to connect to a specific PDB.
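
A service can be added directly to the connection using an EZConnect string, as in the following minimal sketch, which assumes the listener runs on the local machine on the default port of 1521.

$ sqlplus test/test@//localhost:1521/pdb1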

Secure External Password Store


Oracle 10g introduced the ability to use a Secure External Password Store for connecting to the database without having to explicitly supply credentials. The fact this is service-based means it works
really well with PDBs.

Place the following entries into the "$ORACLE_HOME/network/admin/sqlnet.ora" file, specifying the required wallet directory.

WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/wallet)
)
)

SQLNET.WALLET_OVERRIDE = TRUE
SSL_CLIENT_AUTHENTICATION = FALSE
SSL_VERSION = 0

Create a wallet to hold the credentials. Since 11gR2 this is better done using orapki, as it prevents the auto-login working if the wallet is copied to another machine.

$ mkdir -p /u01/app/oracle/wallet
$ orapki wallet create -wallet "/u01/app/oracle/wallet" -pwd "mypassword" -auto_login_local
Oracle Secret Store Tool : Version 12.1.0.1
Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved.

Enter password:

Enter password again:

$
Create a credential associated with a TNS alias. The parameters are "alias username password".

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential pdb1_test test test


Oracle Secret Store Tool : Version 12.1.0.1
Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:

Create credential oracle.security.client.connect_string1


$
Create an entry in the "$ORACLE_HOME/network/admin/tnsnames.ora" file with an alias that matches that used in the wallet.

PDB1_TEST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol6-121.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb1)
)
)
With this in place, we can now make connections to the database using the credentials in the wallet.

$ sqlplus /@pdb1_test

SQL*Plus: Release 12.1.0.1.0 Production on Sat Apr 19 10:19:38 2014

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Last Successful login time: Sat Apr 19 2014 10:18:52 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> SHOW USER


USER is "TEST"
SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDB1
SQL>

Scheduler
The scheduler has been enhanced in Oracle 12c to include script-based jobs, allowing you to define scripts in-line, or call scripts on the file system. These are a variation on external jobs, but the
SQL_SCRIPT and BACKUP_SCRIPT job types make it significantly easier to deal with credentials and the multitenant environment. You can read more about this functionality here.
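
To give a flavour of how this hangs together, the following sketch creates an in-line SQL_SCRIPT job. The credential names, passwords and job name are made up for this example; the job is assumed to need an OS credential to run SQL*Plus and a database credential, attached via the CONNECT_CREDENTIAL_NAME attribute, to connect to the PDB.

BEGIN
  -- OS credential used to run the script, and database credential used to connect.
  DBMS_CREDENTIAL.create_credential('os_credential', 'oracle', 'OsPassword1');
  DBMS_CREDENTIAL.create_credential('db_credential', 'test', 'test');

  DBMS_SCHEDULER.create_job(
    job_name        => 'sql_script_job',
    job_type        => 'SQL_SCRIPT',
    job_action      => 'SELECT SYSDATE FROM dual;',
    credential_name => 'os_credential',
    enabled         => FALSE);

  -- Point the job at the database credential, then enable it.
  DBMS_SCHEDULER.set_attribute('sql_script_job', 'connect_credential_name', 'db_credential');
  DBMS_SCHEDULER.enable('sql_script_job');
END;
/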

catcon.pl
Another issue DBAs will encounter when running in a multitenant environment is the need to run the same script in multiple PDBs. That can be achieved using the methods mentioned previously, but
Oracle provides a Perl utility called "catcon.pl" which may be more convenient.

In a multitenant environment, some Oracle supplied scripts must be applied in the correct order, starting with the CDB$ROOT container. The "catcon.pl" utility takes care of that and provides container-
specific logs, allowing you to easily check the outcome of the action.

The full syntax of the utility is described here, but running the utility with no parameters displays the full usage.

$ perl catcon.pl

Usage: catcon [-u username[/password]] [-U username[/password]]


[-d directory] [-l directory]
[{-c|-C} container] [-p degree-of-parallelism]
[-e] [-s]
[-E { ON | errorlogging-table-other-than-SPERRORLOG } ]
[-g]
-b log-file-name-base
--
{ sqlplus-script [arguments] | --x<SQL-statement> } ...

Optional:
-u username (optional /password; otherwise prompts for password)
used to connect to the database to run user-supplied scripts or
SQL statements
defaults to "/ as sysdba"
-U username (optional /password; otherwise prompts for password)
used to connect to the database to perform internal tasks
defaults to "/ as sysdba"
-d directory containing the file to be run
-l directory to use for spool log files
-c container(s) in which to run sqlplus scripts, i.e. skip all
Containers not named here; for example,
-c 'PDB1 PDB2',
-C container(s) in which NOT to run sqlplus scripts, i.e. skip all
Containers named here; for example,
-C 'CDB PDB3'

NOTE: -c and -C are mutually exclusive

-p expected number of concurrent invocations of this script on a given


host

NOTE: this parameter rarely needs to be specified

-e sets echo on while running sqlplus scripts


-s output of running every script will be spooled into a file whose name
will be
<log-file-name-base>_<script_name_without_extension>_[<container_name_if_any>].<default_extension>
-E sets errorlogging on; if ON is specified, default error logging table
will be used, otherwise, specified error logging table (which must
have been created in every Container) will be used
-g turns on production of debugging info while running this script

Mandatory:
-b base name (e.g. catcon_test) for log and spool file names

sqlplus-script - sqlplus script to run OR


SQL-statement - a statement to execute

NOTES:
- if --x<SQL-statement> is the first non-option string, it needs to be
preceeded with -- to avoid confusing module parsing options into
assuming that '-' is an option which that module is not expecting and
about which it will complain
- command line parameters to SQL scripts can be introduced using --p
interactive (or secret) parameters to SQL scripts can be introduced
using --P

For example,
perl catcon.pl ... x.sql '--pJohn' '--PEnter Password for John:' ...

$
Regarding running Oracle supplied scripts, the manual uses the example of running "catblock.sql" in all containers.

$ . oraenv
ORACLE_SID = [cdb1] ?
The Oracle base remains unchanged with value /u01/app/oracle
$ cd $ORACLE_HOME/rdbms/admin/
$ perl catcon.pl -d $ORACLE_HOME/rdbms/admin -l /tmp -b catblock_output catblock.sql
$ ls /tmp/catblock_output*
catblock_output0.log catblock_output1.log catblock_output2.log catblock_output3.log
$
The first output file contains the combined output from the "cdb$root" and "pdb$seed" containers. The last file contains an overall status message for the task. The files in between contain the output for
all the user-created PDBs.

The "catcon.pl" utility can also be used to run queries against all containers in the CDB. The following command runs a query in each container, placing the output for each in a file called
"/tmp/query_outputN.log".

$ cd $ORACLE_HOME/rdbms/admin/
$ perl catcon.pl -e -l /tmp -b query_output -- --x"SELECT SYS_CONTEXT('USERENV', 'CON_NAME') FROM dual"
$ ls /tmp/query_output*
/tmp/query_output0.log /tmp/query_output1.log /tmp/query_output2.log /tmp/query_output3.log
$
You can target specific PDBs using the "-c" option, or exclude PDBs using the "-C" option. The example below runs a query in all user defined PDBs by omitting the root and seed containers.

$ rm -f /tmp/select_output*
$ cd $ORACLE_HOME/rdbms/admin/
$ perl catcon.pl -e -C 'CDB$ROOT PDB$SEED' -l /tmp -b select_output -- --x"SELECT SYS_CONTEXT('USERENV', 'CON_NAME') FROM dual"
$

Database Triggers on Pluggable Databases (PDBs) in Oracle 12c Release 1 (12.1)


Trigger Scope
To create a trigger on a database event in a CDB requires a connection to the CDB as a common user with the ADMINISTER DATABASE TRIGGER system privilege.

CONN sys@cdb1 AS SYSDBA

CREATE OR REPLACE TRIGGER cdb1_after_startup_trg


AFTER STARTUP ON DATABASE
BEGIN
-- Do something.
NULL;
END;
/
To create a trigger on a database event in a PDB requires a connection to the PDB as either a common or local user with the ADMINISTER DATABASE TRIGGER system privilege in the context of the PDB.
The ON DATABASE and ON PLUGGABLE DATABASE clauses are functionally equivalent within the PDB, but some events require the ON PLUGGABLE DATABASE clause explicitly.

CONN sys@pdb1 AS SYSDBA

CREATE OR REPLACE TRIGGER pdb1_after_startup_trg


AFTER STARTUP ON PLUGGABLE DATABASE
BEGIN
-- Do something.
NULL;
END;
/

CREATE OR REPLACE TRIGGER pdb1_after_startup_trg


AFTER STARTUP ON DATABASE
BEGIN
-- Do something.
NULL;
END;
/
Some database event triggers are also available at schema level within the CDB or PDB. Functionally, these are unchanged by the multitenant option.

CONN sys@cdb1 AS SYSDBA

CREATE OR REPLACE TRIGGER cdb1_after_logon_trg


AFTER LOGON ON flows_files.SCHEMA
BEGIN
-- Do something.
NULL;
END;
/

CONN sys@pdb1 AS SYSDBA

CREATE OR REPLACE TRIGGER cdb1_after_logon_trg


AFTER LOGON ON test.SCHEMA
BEGIN
-- Do something.
NULL;
END;
/

Event Availability
The following database events are available at both the CDB and PDB level.

AFTER STARTUP : Trigger fires after the CDB or PDB opens.


BEFORE SHUTDOWN : Trigger fires before the CDB shuts down or before the PDB closes.
AFTER SERVERERROR : Trigger fires when a server error message is logged and it is safe to fire error triggers. Available at [PLUGGABLE] DATABASE or SCHEMA level.
AFTER LOGON : Trigger fires when a client logs into the CDB or PDB. Available at [PLUGGABLE] DATABASE or SCHEMA level.
BEFORE LOGOFF : Trigger fires when a client logs out of the CDB or PDB. Available at [PLUGGABLE] DATABASE or SCHEMA level.
AFTER SUSPEND : Trigger fires when a server error causes a transaction to be suspended. Available at [PLUGGABLE] DATABASE or SCHEMA level.
BEFORE SET CONTAINER : Trigger fires before the SET CONTAINER command executes. Available at [PLUGGABLE] DATABASE or SCHEMA level.
AFTER SET CONTAINER : Trigger fires after the SET CONTAINER command executes. Available at [PLUGGABLE] DATABASE or SCHEMA level.

The following database event is only available at the CDB level.


AFTER DB_ROLE_CHANGE : Fires when the database role switches from primary to standby or from standby to primary in a Data Guard configuration.

The following database events are only available at the PDB level and require the ON PLUGGABLE DATABASE clause explicitly. Using the ON DATABASE clause results in an error.
AFTER CLONE : After a clone operation, the trigger fires in the new PDB and then the trigger is deleted. If the trigger fails, the clone operation fails.
BEFORE UNPLUG : Before an unplug operation, the trigger fires in the PDB and then the trigger is deleted. If the trigger fails, the unplug operation fails.
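
For example, an AFTER CLONE trigger created in the source PDB might look like the following sketch. Remember it must use the ON PLUGGABLE DATABASE clause; it fires in the newly created clone and is then removed automatically.

CONN sys@pdb1 AS SYSDBA

CREATE OR REPLACE TRIGGER pdb1_after_clone_trg
AFTER CLONE ON PLUGGABLE DATABASE
BEGIN
  -- Do something in the new clone.
  NULL;
END;
/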

Transparent Data Encryption (TDE) in Pluggable Databases (PDBs) in Oracle Database 12c Release 1 (12.1)

Keystore Location
A keystore must be created to hold the encryption key. The search order for finding the keystore is as follows.

If present, the location specified by the ENCRYPTION_WALLET_LOCATION parameter in the "sqlnet.ora" file.
If present, the location specified by the WALLET_LOCATION parameter in the "sqlnet.ora" file.
The default location for the keystore. If the $ORACLE_BASE is set, this is "$ORACLE_BASE/admin/DB_UNIQUE_NAME/wallet", otherwise it is "$ORACLE_HOME/admin/DB_UNIQUE_NAME/wallet", where
DB_UNIQUE_NAME comes from the initialization parameter file.
Keystores should not be shared between CDBs, so if multiple CDBs are run from the same ORACLE_HOME you must do one of the following to keep them separate.

Use the default keystore location, so each CDB database has its own keystore.
Specify the location using the $ORACLE_SID.
ENCRYPTION_WALLET_LOCATION =
(SOURCE =(METHOD = FILE)(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)))
Have a separate "sqlnet.ora" for each database, making sure the TNS_ADMIN variable is set correctly.
Regardless of where you place the keystore, make sure you don't lose it. Oracle 12c is extremely sensitive to loss of the keystore. During the writing of this article I was forced to revert to a clean
snapshot several times.

Create a Keystore
Edit the "$ORACLE_HOME/network/admin/sqlnet.ora" files, adding the following entry.

ENCRYPTION_WALLET_LOCATION =
(SOURCE =(METHOD = FILE)(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)))
Create the directory to hold the keystore.

mkdir -p /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore
Connect to the root container and create the keystore.

CONN / AS SYSDBA

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/cdb1/encryption_keystore/' IDENTIFIED BY myPassword;

HOST ls /u01/app/oracle/admin/cdb1/encryption_keystore/
ewallet.p12

SQL>
You can open and close the keystore from the root container using the following commands. If the CONTAINER=ALL clause is omitted, the current container is assumed. Open the keystore for all
containers.

-- Open
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword CONTAINER=ALL;

-- Close
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY myPassword CONTAINER=ALL;
You need to create and activate a master key in the root container and one in each of the pluggable databases. Using the CONTAINER=ALL clause does it in a single step. If the CONTAINER=ALL clause is
omitted, it will only be done in the current container and will need to be done again for each PDB individually. Information about the master key is displayed using the V$ENCRYPTION_KEYS view.

ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY myPassword WITH BACKUP CONTAINER=ALL;

SET LINESIZE 100


SELECT con_id, key_id FROM v$encryption_keys;

CON_ID KEY_ID
---------- ------------------------------------------------------------------------------
0 AdaYAOior0/3v0AoZDBV8hoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
0 AYmKkQxl+U+Xv3UHVMgSJC8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA

SQL>
Information about the keystore is displayed using the V$ENCRYPTION_WALLET view.

SET LINESIZE 200


COLUMN wrl_parameter FORMAT A50
SELECT * FROM v$encryption_wallet;

WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_OR FULLY_BAC CON_ID


-------------------- -------------------------------------------------- ------------------------------ -------------------- --------- --------- ----------
FILE /u01/app/oracle/admin/cdb1/encryption_keystore/ OPEN PASSWORD SINGLE NO 0

SQL>
Connect to the PDB. If you didn't create the key in the previous step, create a new master key for the PDB.

CONN sys@pdb1 AS SYSDBA

-- We don't need to create a master key as we did it previously by using CONTAINER=ALL


-- ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword;
-- ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY myPassword WITH BACKUP;

SELECT con_id, key_id FROM v$encryption_keys;

CON_ID KEY_ID
---------- ------------------------------------------------------------------------------
0 ATbrc0RkAE//v/jcxOecSGIAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

SQL>
Use the Keystore for TDE
You should now be able to create a table with an encrypted column in the PDB.

CONN test/test@pdb1

-- Encrypted column
CREATE TABLE tde_test (
id NUMBER(10),
data VARCHAR2(50) ENCRYPT

);

INSERT INTO tde_test VALUES (1, 'This is a secret!');


COMMIT;
We can also create encrypted tablespaces.

-- Encrypted tablespace
CONN sys@pdb1 AS SYSDBA

CREATE TABLESPACE encrypted_ts


DATAFILE SIZE 128K
AUTOEXTEND ON NEXT 64K
ENCRYPTION USING 'AES256'
DEFAULT STORAGE(ENCRYPT);

ALTER USER test QUOTA UNLIMITED ON encrypted_ts;

CONN test/test@pdb1

CREATE TABLE tde_ts_test (


id NUMBER(10),
data VARCHAR2(50)
) TABLESPACE encrypted_ts;

INSERT INTO tde_ts_test VALUES (1, 'This is also a secret!');


COMMIT;
If the PDB is restarted, the keystore must be opened in the PDB before the data can be accessed.

CONN sys@pdb1 AS SYSDBA

SHUTDOWN IMMEDIATE;
STARTUP;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword;

CONN test/test@pdb1

SELECT * FROM tde_test;

ID DATA
---------- --------------------------------------------------
1 This is a secret!

SQL>

SELECT * FROM tde_ts_test;

ID DATA
---------- --------------------------------------------------
1 This is also a secret!

SQL>
If the CDB is restarted, the keystore must be opened in both the CDB and the PDBs.

CONN / AS SYSDBA

SHUTDOWN IMMEDIATE;
STARTUP;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword CONTAINER=ALL;

CONN test/test@pdb1

SELECT * FROM tde_test;

ID DATA
---------- --------------------------------------------------
1 This is a secret!

SQL>

SELECT * FROM tde_ts_test;

ID DATA
---------- --------------------------------------------------
1 This is also a secret!

SQL>

Unplug/Plugin PDBs with TDE


This section describes the process of unplugging PDB1 from the CDB1 instance and plugging into the CDB2 instance on the same machine with a new name of PDB2.

Switch to the CDB1 instance.

ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
ORAENV_ASK=YES
sqlplus /nolog
Export the key information from PDB1.

CONN sys@pdb1 AS SYSDBA

ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "mySecret" TO '/tmp/export.p12' IDENTIFIED BY myPassword;
Unplug PDB1 from CDB1.

CONN / AS SYSDBA

ALTER PLUGGABLE DATABASE pdb1 CLOSE;


ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
Switch to the CDB2 instance.

ORAENV_ASK=NO
export ORACLE_SID=cdb2
. oraenv
ORAENV_ASK=YES
sqlplus /nolog
Plug in the PDB1, with the new name of PDB2 into the CDB2 instance.

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb2 USING '/tmp/pdb1.xml';

-- If you are not using OMF, you will have to convert the paths manually.
--CREATE PLUGGABLE DATABASE pdb2 USING '/tmp/pdb1.xml'
-- FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');

ALTER PLUGGABLE DATABASE pdb2 OPEN READ WRITE;


Opening PDB2 will result in the following error, which we can ignore at this point.

Warning: PDB altered with errors.


If CDB2 doesn't already have a keystore at the root level, you will need to create it.

CONN / AS SYSDBA
HOST mkdir -p /u01/app/oracle/admin/cdb2/encryption_keystore/
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/cdb2/encryption_keystore/' IDENTIFIED BY myPassword;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword;
Import the key information into PDB2 and restart it. Until it opens cleanly it will not register with the listener, so switch the container manually.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb2;

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "myPassword";


ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "mySecret" FROM '/tmp/export.p12' IDENTIFIED BY "myPassword" WITH BACKUP;

-- Restart the PDB and open the keystore.


SHUTDOWN;
STARTUP;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "myPassword";
The encrypted data is now available as expected.

CONN test/test@pdb2

SELECT * FROM tde_test;

ID DATA
---------- --------------------------------------------------
1 This is a secret!

SQL>

SELECT * FROM tde_ts_test;

ID DATA
---------- --------------------------------------------------
1 This is also a secret!

SQL>
Auto-Login Keystores
Creation of an auto-login keystore means you no longer need to explicitly open the keystore after a restart. The first reference to a key causes the keystore to be opened automatically, as shown below.

CONN / AS SYSDBA
ADMINISTER KEY MANAGEMENT CREATE LOCAL AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/oracle/admin/cdb1/encryption_keystore/' IDENTIFIED BY myPassword;

SHUTDOWN IMMEDIATE;
STARTUP

CONN test/test@pdb1

SELECT * FROM tde_test;

ID DATA
---------- --------------------------------------------------
1 This is a secret!

SQL>

SELECT * FROM tde_ts_test;

ID DATA
---------- --------------------------------------------------
1 This is also a secret!

SQL>
SYSKM
Key management can be performed by any user with the SYSDBA or SYSKM administrative privilege.
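
A minimal sketch of granting and using SYSKM is shown below; the c##key_admin user name and password are made up for this example, and the grant is made from the root container.

CONN / AS SYSDBA

CREATE USER c##key_admin IDENTIFIED BY Password1 CONTAINER=ALL;
GRANT SYSKM TO c##key_admin CONTAINER=ALL;

CONN c##key_admin/Password1 AS SYSKM

-- Keystore operations, such as ADMINISTER KEY MANAGEMENT commands, can now be issued.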

PDBs With Different Time Zones to the CDB in Oracle Database 12c Release 1 (12.1)

Container Database (CDB) Level


Setting the timezone at the container database level is the same as setting it for a non-CDB instance. The CDB setting is the default for all pluggable databases.

Check the current time zone for the container database.

CONN / AS SYSDBA

SELECT dbtimezone FROM DUAL;

DBTIME
------
+00:00

SQL>
Reset the time zone using the ALTER DATABASE command to specify the new TIME_ZONE value. The database will need to be restarted for this to take effect.

CONN / AS SYSDBA

ALTER DATABASE SET TIME_ZONE='Europe/London';

SHUTDOWN IMMEDIATE;
STARTUP;
We can see the database time zone has been changed.

CONN / AS SYSDBA

SELECT dbtimezone FROM DUAL;

DBTIMEZONE
-------------
Europe/London

SQL>
Pluggable Database (PDB) Level
Setting the time zone in the pluggable database allows it to override the CDB setting.

Check the current time zone for the pluggable database.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

SELECT dbtimezone FROM DUAL;

DBTIME
------
-07:00

SQL>
Reset the time zone using the ALTER DATABASE command to specify the new TIME_ZONE value. The pluggable database will need to be restarted for this to take effect.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

ALTER DATABASE SET TIME_ZONE='US/Eastern';

SHUTDOWN IMMEDIATE;
STARTUP;
We can see the pluggable database time zone is now different from that of the container database.

CONN / AS SYSDBA

SELECT dbtimezone FROM DUAL;

DBTIMEZONE
-------------
Europe/London

SQL>

ALTER SESSION SET CONTAINER = pdb1;

SELECT dbtimezone FROM DUAL;

DBTIMEZONE
----------
US/Eastern

SQL>

Upgrade a PDB using Unplug/Plugin

Using the Database Upgrade Assistant (DBUA) against a container database (CDB) will upgrade all the associated pluggable databases (PDBs) also. If you don't want to commit to upgrading all the PDBs in
one step, you can upgrade them individually, or a subset of the PDBs, using the unplug/plugin method.

This article describes the method for upgrading a PDB using the unplug/plugin method. It assumes you have the following container databases.

cdb1 (12.1.0.1) : Containing pdb1, the PDB we wish to upgrade.


cdb2 (12.1.0.2) : No PDBs, although that is not a prerequisite.

Switch to the "cdb1" instance in the "12.1.0.1" environment.

export ORACLE_BASE=/u01/app/oracle

export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES
sqlplus /nolog

From Oracle 12.2 onward the "preupgrd.sql" script has been removed and replaced by the "preupgrade.jar" file, which is run as follows. The "preupgrade.jar" file is shipped with the Oracle
software, but you should really download the latest version from MOS 884522.1.

$ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/preupgrade.jar TERMINAL TEXT -c "pdb1"


The resulting output is similar to the "preupgrd.sql" script, an example of which is shown below.

Run the "preupgrd.sql" script from the "12.1.0.2" home, not the current 12.1.0.1 home!

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;

@/u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/preupgrd.sql

Loading Pre-Upgrade Package...

***************************************************************************
Executing Pre-Upgrade Checks in PDB1...
***************************************************************************

************************************************************

====>> ERRORS FOUND for PDB1 <<====

The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
prior to attempting your upgrade.
Failure to do so will result in a failed upgrade.

You MUST resolve the above errors prior to upgrade

************************************************************

************************************************************

====>> PRE-UPGRADE RESULTS for PDB1 <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:


/u01/app/oracle/cfgtoollogs/cdb1/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:


/u01/app/oracle/cfgtoollogs/cdb1/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:


/u01/app/oracle/cfgtoollogs/cdb1/preupgrade/postupgrade_fixups.sql

************************************************************

***************************************************************************
Pre-Upgrade Checks in PDB1 Completed.
***************************************************************************

***************************************************************************
***************************************************************************
SQL>

The output displays the generated scripts, including the "preupgrade.log" file. Both the log file and fixup scripts will be in the "$ORACLE_BASE/cfgtoollogs" directory or the "$ORACLE_HOME/cfgtoollogs"
directory, depending on whether the $ORACLE_BASE has been specified or not. Run the fixup script and perform any manual tasks listed in the "preupgrade.log" file. These should be listed by the
"preupgrade_fixups.sql" script also.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;

@/u01/app/oracle/cfgtoollogs/cdb1/preupgrade/preupgrade_fixups.sql
Pre-Upgrade Fixup Script Generated on 2015-02-16 09:40:04 Version: 12.1.0.2 Build: 006
Beginning Pre-Upgrade Fixups...
Executing in container PDB1

**********************************************************************
Check Tag: APEX_UPGRADE_MSG
Check Summary: Check that APEX will need to be upgraded.
Fix Summary: Oracle Application Express can be manually upgraded prior to database upgrade.
**********************************************************************
Fixup Returned Information:
INFORMATION: --> Oracle Application Express (APEX) can be
manually upgraded prior to database upgrade

APEX is currently at version 4.2.0.00.27 and will need to be


upgraded to APEX version 4.2.5 in the new release.
Note 1: To reduce database upgrade time, APEX can be manually
upgraded outside of and prior to database upgrade.
Note 2: See MOS Note 1088970.1 for information on APEX
installation upgrades.
**********************************************************************

**********************************************************************
[Pre-Upgrade Recommendations]
**********************************************************************

*****************************************
********* Dictionary Statistics *********
*****************************************

Please gather dictionary statistics 24 hours prior to


upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^

**************************************************
************* Fixup Summary ************

1 fixup routine generated an INFORMATIONAL message that should be reviewed.

**************** Pre-Upgrade Fixup Script Complete *********************


SQL>

EXEC DBMS_STATS.gather_dictionary_stats;
Connect to the root container and unplug the PDB.

CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
EXIT;
Upgrade the PDB
The PDB must be plugged into the destination CDB and upgraded.

Switch to the "cdb2" instance in the "12.1.0.2" environment.

export ORACLE_BASE=/u01/app/oracle

export ORAENV_ASK=NO
export ORACLE_SID=cdb2
. oraenv
export ORAENV_ASK=YES
sqlplus /nolog
Plugin the "pdb1" pluggable database into the "cdb2" container.

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml'


FILE_NAME_CONVERT=('/oradata/cdb1/pdb1', '/oradata/cdb2/pdb1');

ALTER PLUGGABLE DATABASE pdb1 OPEN UPGRADE;

Warning: PDB altered with errors.

SQL> EXIT;
Don't worry about the "Warning: PDB altered with errors." message at this point.

Run the "catupgrd.sql" script against the PDB. Notice the use of the "-c" flag to specify an inclusion list. If you were upgrading multiple PDBs, you could list them in a space-separated list so they are all
upgraded in a single step.

cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -c "pdb1" -l /tmp catupgrd.sql

Argument list for [catctl.pl]


SQL Process Count n = 0
SQL PDB Process Count N = 0
Input Directory d=0
Phase Logging Table t = 0
Log Dir l = /tmp
Script s=0
Serial Run S=0
Upgrade Mode active M = 0
Start Phase p=0
End Phase P=0
Log Id i=0
Run in c = pdb1
Do not run in C=0
Echo OFF e=1
No Post Upgrade x=0
Reverse Order r=0
Open Mode Normal o = 0
Debug catcon.pm z=0
Debug catctl.pl Z=0
Display Phases y=0
Child Process I=0

catctl.pl version: 12.1.0.2.0


Oracle Base = /u01/app/oracle

Analyzing file catupgrd.sql


Log files in /tmp
catcon: ALL catcon-related output will be written to /tmp/catupgrd_catcon_9258.lst
catcon: See /tmp/catupgrd*.log files for output generated by scripts
catcon: See /tmp/catupgrd_*.lst files for spool files, if any
Number of Cpus =2
Parallel PDB Upgrades = 2
SQL PDB Process Count = 2
SQL Process Count = 0
New SQL Process Count = 2

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1
PDB Inclusion:[PDB1] Exclusion:[]

Starting
[/u01/app/oracle/product/12.1.0.2/db_1/perl/bin/perl catctl.pl -c 'PDB1' -l /tmp -I -i pdb1 -n 2 catupgrd.sql]

Argument list for [catctl.pl]


SQL Process Count n = 2
SQL PDB Process Count N = 0
Input Directory d=0
Phase Logging Table t = 0
Log Dir l = /tmp
Script s=0
Serial Run S=0
Upgrade Mode active M = 0
Start Phase p=0
End Phase P=0
Log Id i = pdb1
Run in c = PDB1
Do not run in C=0
Echo OFF e=1
No Post Upgrade x=0
Reverse Order r=0
Open Mode Normal o = 0
Debug catcon.pm z=0
Debug catctl.pl Z=0
Display Phases y=0
Child Process I=1

catctl.pl version: 12.1.0.2.0


Oracle Base = /u01/app/oracle

Analyzing file catupgrd.sql


Log files in /tmp
catcon: ALL catcon-related output will be written to /tmp/catupgrdpdb1_catcon_9360.lst
catcon: See /tmp/catupgrdpdb1*.log files for output generated by scripts
catcon: See /tmp/catupgrdpdb1_*.lst files for spool files, if any
Number of Cpus =2
SQL PDB Process Count = 2
SQL Process Count = 2

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1
PDB Inclusion:[PDB1] Exclusion:[]

------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[PDB1] Exclusion:[]
Serial Phase #: 0 Files: 1 Time: 13s PDB1
Serial Phase #: 1 Files: 5 Time: 34s PDB1
Restart Phase #: 2 Files: 1 Time: 0s PDB1
Parallel Phase #: 3 Files: 18 Time: 11s PDB1
Restart Phase #: 4 Files: 1 Time: 0s PDB1
Serial Phase #: 5 Files: 5 Time: 14s PDB1
Serial Phase #: 6 Files: 1 Time: 7s PDB1
Serial Phase #: 7 Files: 4 Time: 6s PDB1
Restart Phase #: 8 Files: 1 Time: 0s PDB1
Parallel Phase #: 9 Files: 62 Time: 47s PDB1
Restart Phase #:10 Files: 1 Time: 0s PDB1
Serial Phase #:11 Files: 1 Time: 11s PDB1
Restart Phase #:12 Files: 1 Time: 0s PDB1
Parallel Phase #:13 Files: 91 Time: 8s PDB1
Restart Phase #:14 Files: 1 Time: 0s PDB1
Parallel Phase #:15 Files: 111 Time: 11s PDB1
Restart Phase #:16 Files: 1 Time: 0s PDB1
Serial Phase #:17 Files: 3 Time: 1s PDB1
Restart Phase #:18 Files: 1 Time: 0s PDB1
Parallel Phase #:19 Files: 32 Time: 19s PDB1
Restart Phase #:20 Files: 1 Time: 0s PDB1
Serial Phase #:21 Files: 3 Time: 6s PDB1
Restart Phase #:22 Files: 1 Time: 0s PDB1
Parallel Phase #:23 Files: 23 Time: 79s PDB1
Restart Phase #:24 Files: 1 Time: 0s PDB1
Parallel Phase #:25 Files: 11 Time: 34s PDB1
Restart Phase #:26 Files: 1 Time: 0s PDB1
Serial Phase #:27 Files: 1 Time: 0s PDB1
Restart Phase #:28 Files: 1 Time: 0s PDB1
Serial Phase #:30 Files: 1 Time: 0s PDB1
Serial Phase #:31 Files: 257 Time: 22s PDB1
Serial Phase #:32 Files: 1 Time: 0s PDB1
Restart Phase #:33 Files: 1 Time: 0s PDB1
Serial Phase #:34 Files: 1 Time: 1s PDB1
Restart Phase #:35 Files: 1 Time: 0s PDB1
Restart Phase #:36 Files: 1 Time: 0s PDB1
Serial Phase #:37 Files: 4 Time: 37s PDB1
Restart Phase #:38 Files: 1 Time: 0s PDB1
Parallel Phase #:39 Files: 13 Time: 51s PDB1
Restart Phase #:40 Files: 1 Time: 0s PDB1
Parallel Phase #:41 Files: 10 Time: 5s PDB1
Restart Phase #:42 Files: 1 Time: 0s PDB1
Serial Phase #:43 Files: 1 Time: 5s PDB1
Restart Phase #:44 Files: 1 Time: 0s PDB1
Serial Phase #:45 Files: 1 Time: 1s PDB1
Serial Phase #:46 Files: 1 Time: 1s PDB1
Restart Phase #:47 Files: 1 Time: 0s PDB1
Serial Phase #:48 Files: 1 Time: 164s PDB1
Restart Phase #:49 Files: 1 Time: 0s PDB1
Serial Phase #:50 Files: 1 Time: 33s PDB1
Restart Phase #:51 Files: 1 Time: 0s PDB1
Serial Phase #:52 Files: 1 Time: 38s PDB1
Restart Phase #:53 Files: 1 Time: 0s PDB1
Serial Phase #:54 Files: 1 Time: 44s PDB1
Restart Phase #:55 Files: 1 Time: 0s PDB1
Serial Phase #:56 Files: 1 Time: 58s PDB1
Restart Phase #:57 Files: 1 Time: 1s PDB1
Serial Phase #:58 Files: 1 Time: 73s PDB1
Restart Phase #:59 Files: 1 Time: 0s PDB1
Serial Phase #:60 Files: 1 Time: 88s PDB1
Restart Phase #:61 Files: 1 Time: 0s PDB1
Serial Phase #:62 Files: 1 Time: 117s PDB1
Restart Phase #:63 Files: 1 Time: 0s PDB1
Serial Phase #:64 Files: 1 Time: 0s PDB1
Serial Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0.2/db_1/lib;
export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0.2/db_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin
-I /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/../../sqlpatch
/u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/../../sqlpatch/sqlpatch.pl
-verbose -upgrade_mode_only -pdbs PDB1 > /tmp/catupgrdpdb1_datapatch_upgrade.log 2>
/tmp/catupgrdpdb1_datapatch_upgrade.err
returned from sqlpatch
Time: 13s PDB1

Serial Phase #:66 Files: 1 Time: 3s PDB1
Serial Phase #:68 Files: 1 Time: 3s PDB1
Serial Phase #:69 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0.2/db_1/lib;
export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0.2/db_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin
-I /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/../../sqlpatch
/u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -pdbs PDB1 >
/tmp/catupgrdpdb1_datapatch_normal.log 2> /tmp/catupgrdpdb1_datapatch_normal.err
returned from sqlpatch
Time: 8s PDB1
Serial Phase #:70 Files: 1 Time: 70s PDB1
Serial Phase #:71 Files: 1 Time: 6s PDB1
Serial Phase #:72 Files: 1 Time: 4s PDB1
Serial Phase #:73 Files: 1 Time: 0s PDB1

Grand Total Time: 1150s PDB1

LOG FILES: (catupgrdpdb1*.log)

Upgrade Summary Report Located in:


/u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/cdb2/upgrade/upg_summary.log

Total Upgrade Time: [0d:0h:19m:10s]

Time: 1152s For PDB(s)

Grand Total Time: 1152s

LOG FILES: (catupgrd*.log)

Grand Total Upgrade Time: [0d:0h:19m:12s]


$
Start the PDB and recompile any invalid objects.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
STARTUP;

@?/rdbms/admin/utlrp.sql
Run the "postupgrade_fixups.sql" script. Remember to perform any recommended manual steps.

@/u01/app/oracle/cfgtoollogs/cdb1/preupgrade/postupgrade_fixups.sql
Post Upgrade Fixup Script Generated on 2015-02-16 09:40:04 Version: 12.1.0.2 Build: 006
Beginning Post-Upgrade Fixups...

**********************************************************************
[Post-Upgrade Recommendations]
**********************************************************************

*****************************************
******** Fixed Object Statistics ********
*****************************************

Please create stats on fixed objects two weeks


after the upgrade using the command:
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

^^^ MANUAL ACTION SUGGESTED ^^^

**************************************************
************* Fixup Summary ************

No fixup routines were executed.

**************************************************
*************** Post Upgrade Fixup Script Complete ********************

PL/SQL procedure successfully completed.

SQL>

EXECUTE DBMS_STATS.gather_fixed_objects_stats;

Recovery Manager (RMAN) Database Duplication Enhancements in Oracle Database 12c Release 1 (12.1)

Active Database Duplication using Backup Sets


In previous releases, active duplicates were performed using implicit image copy backups, transferred directly to the destination server. From 12.1 it is also possible to perform active duplicates using
backup sets by including the USING BACKUPSET clause. Compared to image copy backups, the unused block compression associated with a backup set can greatly reduce the amount of data pulled
across the network for databases containing lots of unused blocks. The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using backup sets
rather than image copy backups.

DUPLICATE DATABASE TO cdb2


FROM ACTIVE DATABASE
USING BACKUPSET
SPFILE
parameter_value_convert ('cdb1','cdb2')

set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
Active Database Duplication using Compressed Backup Sets
In addition to conventional backup sets, active duplicates can also be performed using compressed backup sets by adding the USING COMPRESSED BACKUPSET clause, which further reduces the amount
of data passing over the network. The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using compressed backup sets.

DUPLICATE DATABASE TO cdb2


FROM ACTIVE DATABASE
USING COMPRESSED BACKUPSET
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
Active Database Duplication and Encryption
Always check the licensing implications of encryption before using it on a real system. Some encryption operations require the advanced security option.

Oracle allows backup sets to be encrypted. Transparent encryption uses a wallet to hold the encryption key and is seamless to the DBA, since backup sets are encrypted and decrypted as required using
the wallet. Password encryption requires the DBA to enter a password for each backup and restore operation.

Since Oracle 12c now supports active duplicates using backup sets, it also supports encryption of those backup sets using both methods.

If the source database uses transparent encryption of backups, the wallet containing the encryption key must be made available on the destination database.
If password encryption is used on the source database, the SET ENCRYPTION ON IDENTIFIED BY <password> command can be used to define an encryption password for the active duplication process. If
you are running in mixed mode, you can use SET ENCRYPTION ON IDENTIFIED BY <password> ONLY to override transparent encryption.
The encryption algorithm used by the active duplication can be set using the SET ENCRYPTION ALGORITHM command, where the possible algorithms can be displayed using the V$RMAN_ENCRYPTION_ALGORITHMS view. If the encryption algorithm is not set, the default (AES128) is used.
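As a quick reference, the available algorithm names can be listed with a simple query along these lines, run from SQL*Plus as a suitably privileged user.

SELECT algorithm_name
FROM v$rman_encryption_algorithms
ORDER BY algorithm_name;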

The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using password-encrypted backup sets.

SET ENCRYPTION ALGORITHM 'AES128';


SET ENCRYPTION ON IDENTIFIED BY MyPassword1 ONLY;

DUPLICATE DATABASE TO cdb2


FROM ACTIVE DATABASE
USING BACKUPSET
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
Active Database Duplication and Parallelism (Multisection)
Active database duplications can take advantage of the multisection backup functionality introduced in Oracle 12c, whether using image copies or backup sets. Including the SECTION SIZE clause
indicates multisection backups should be used.

There must be multiple channels available for multisection backups to work, so you will either need to configure persistent channel parallelism using CONFIGURE DEVICE TYPE ... PARALLELISM, or set the parallelism for the current operation by issuing multiple ALLOCATE CHANNEL commands.

The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using multisection backups.

CONFIGURE DEVICE TYPE disk PARALLELISM 4;

DUPLICATE DATABASE TO cdb2


FROM ACTIVE DATABASE
USING BACKUPSET
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK
SECTION SIZE 400M;
Multitenant Considerations
All the examples shown previously involve multitenant databases, but there are some extra considerations when you are using the multitenant architecture.

If you are building an "initSID.ora" file from scratch, you must remember to include the following parameter.

enable_pluggable_database=TRUE
The previous examples didn't have to do this as the SPFILE was created as a copy of the source SPFILE, which already contained this parameter setting.
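If you do need to hand-build the parameter file, a minimal sketch along the following lines is enough to get the auxiliary instance started. The file name and location are illustrative, so adjust them for your environment and add any other parameters you need.

echo "db_name=cdb2" > $ORACLE_HOME/dbs/initcdb2.ora
echo "enable_pluggable_database=TRUE" >> $ORACLE_HOME/dbs/initcdb2.ora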

The DUPLICATE command includes some additional clauses related to the multitenant option.

Adding the PLUGGABLE DATABASE clause allows you to specify which pluggable databases should be included in the duplication. The following example creates a new container database (cdb2), but it
only contains two pluggable databases (pdb1 and pdb2). The third pluggable database (pdb3) is not included in the clone.

DUPLICATE DATABASE TO cdb2 PLUGGABLE DATABASE pdb1, pdb2


FROM ACTIVE DATABASE
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
The resulting clone contains the following PDBs.

SELECT name FROM v$pdbs;

NAME
------------------------------
PDB$SEED
PDB1
PDB2

SQL>
Using the SKIP PLUGGABLE DATABASE clause will create a duplicate CDB with all the PDBs except those in the list. The following example creates a container database (cdb2) with a single pluggable
database (pdb3). The other two pluggable databases (pdb1 and pdb2) are excluded from the clone.

DUPLICATE DATABASE TO cdb2 SKIP PLUGGABLE DATABASE pdb1, pdb2


FROM ACTIVE DATABASE
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
The resulting clone contains the following PDBs.

SELECT name FROM v$pdbs;

NAME
------------------------------
PDB$SEED
PDB3

SQL>
You can also limit the tablespaces that are included in a PDB using the TABLESPACE clause. If we connect to the source container database (cdb1) and check the tablespaces in the pdb1 pluggable
database we see the following.

CONN sys/Password1@cdb1 AS SYSDBA


ALTER SESSION SET CONTAINER = pdb1;

SELECT tablespace_name FROM dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TEST_TS

SQL>
Next, we perform a duplicate for the whole of the pdb2 pluggable database, but just the TEST_TS tablespace in the pdb1 pluggable database.

DUPLICATE DATABASE TO cdb2 PLUGGABLE DATABASE pdb2 TABLESPACE pdb1:test_ts


FROM ACTIVE DATABASE
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
Checking the completed clone reveals both the pdb1 and pdb2 pluggable databases are present, but the pdb1 pluggable database does not include the USERS tablespace.

CONN sys/Password1@cdb2 AS SYSDBA

SELECT name FROM v$pdbs;


NAME
------------------------------
PDB$SEED
PDB1
PDB2

SQL>

ALTER SESSION SET CONTAINER = pdb1;

SELECT tablespace_name FROM dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
TEST_TS

SQL>
Clones always contain a fully functional CDB and functional PDBs. Even when we just ask for the TEST_TS tablespace in pdb1, we also get the SYSTEM, SYSAUX and TEMP tablespaces in the PDB. The TABLESPACE clause can be used on its own without the PLUGGABLE DATABASE clause, if no full PDBs are to be duplicated, as sketched below.
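For illustration only, a duplicate that uses the TABLESPACE clause on its own might look like the following. It reuses the SET clauses from the earlier examples and was not run as part of this walkthrough.

DUPLICATE DATABASE TO cdb2 TABLESPACE pdb1:test_ts
FROM ACTIVE DATABASE
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;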

The SKIP TABLESPACE clause allows you to exclude specific tablespaces, rather than use the inclusion approach. The following example clones all the pluggable databases, but excludes the TEST_TS
tablespace from pdb1 during the duplicate.

DUPLICATE DATABASE TO cdb2 SKIP TABLESPACE pdb1:test_ts


FROM ACTIVE DATABASE
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
Not surprisingly, the resulting clone contains all the pluggable databases, but the pdb1 pluggable database is missing the TEST_TS tablespace.

CONN sys/Password1@cdb2 AS SYSDBA

SELECT name FROM v$pdbs;

NAME
------------------------------
PDB$SEED
PDB1
PDB2
PDB3

SQL>

ALTER SESSION SET CONTAINER = pdb1;

SELECT tablespace_name FROM dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS

SQL>
Appendix
The examples in this article are based on the following assumptions.

The source database is a container database (cdb1), with three pluggable databases (pdb1, pdb2 and pdb3).
The destination database is called cdb2.
Both the source and destination databases use file system storage and do not use Oracle Managed Files (OMF), hence the need for the file name conversions.
The basic setup for active duplicates was performed using the same process described in the 11g active duplication article.
Between every test the following clean-up was performed.

# Set the paths using the source DB.


export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

# Set the SID for the new clone (destination).


export ORACLE_SID=cdb2

# Stop the clone if it already exists.


sqlplus / as sysdba <<EOF
SHUTDOWN IMMEDIATE;

EXIT;
EOF

# Clean up any previous clone attempts.


mkdir -p /u01/app/oracle/admin/cdb2/adump
mkdir -p /u01/app/oracle/admin/cdb2/cdump
mkdir -p /u01/app/oracle/oradata/cdb2/
rm -Rf /u01/app/oracle/oradata/cdb2/*
mkdir -p /u01/app/oracle/oradata/cdb2/pdbseed/
mkdir -p /u01/app/oracle/oradata/cdb2/pdb1/
mkdir -p /u01/app/oracle/oradata/cdb2/pdb2/
mkdir -p /u01/app/oracle/oradata/cdb2/pdb3/
rm $ORACLE_HOME/dbs/spfilecdb2.ora
rm $ORACLE_HOME/dbs/initcdb2.ora
rm $ORACLE_HOME/dbs/orapwcdb2

# Recreate an init.ora and password file.


echo "db_name=cdb2" > $ORACLE_HOME/dbs/initcdb2.ora
orapwd file=/u01/app/oracle/product/12.1.0.2/db_1/dbs/orapwcdb2 password=Password1 entries=10

# Mount the clone (auxiliary).


sqlplus / as sysdba <<EOF
STARTUP NOMOUNT;
EXIT;
EOF

# Connect to the target and auxiliary instances in RMAN using the tnsnames.ora entries.
rman target sys/Password1@cdb1 auxiliary sys/Password1@cdb2

Clone a Remote PDB or Non-CDB in Oracle Database 12c (12.1.0.2)

In the initial release of Oracle Database 12c Release 1 (12.1.0.1) remote cloning of PDBs was listed as a feature, but it didn't work. The 12.1.0.2 patch has fixed that, but also added the ability to create a
PDB as a clone of a remote non-CDB database.

Prerequisites
The prerequisites for cloning a remote PDB or non-CDB are very similar, so I will deal with them together.

In this context, the word "local" refers to the destination or target CDB that will house the cloned PDB. The word "remote" refers to the PDB or non-CDB that is the source of the clone.

The user in the local database must have the CREATE PLUGGABLE DATABASE privilege in the root container.
The remote database (PDB or non-CDB) must be open in read-only mode.
The local database must have a database link to the remote database. If the remote database is a PDB, the database link can point to the remote CDB using a common user, or the PDB using a local or
common user.
The user in the remote database that the database link connects to must have the CREATE PLUGGABLE DATABASE privilege.
The local and remote databases must have the same endianness, options installed and character sets.
If the remote database uses Transparent Data Encryption (TDE) the local CDB must be configured appropriately before attempting the clone. If not you will be left with a new PDB that will only open in
restricted mode.
The default tablespaces for each common user in the remote PDB *must* exist in the local CDB. If this is not true, create the missing tablespaces in the root container of the local CDB (a quick check is sketched after this list). If you don't do this, your new PDB will only be able to open in restricted mode (Bug 19174942).
When cloning from a non-CDB, both the local and remote databases must be using version 12.1.0.2 or higher.
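A quick way to sanity check the common user default tablespace prerequisite is sketched below. The container names match the environment described next; the idea is simply to compare the two result sets and create any tablespaces missing from the local root container.

-- In the remote PDB (pdb5): default tablespaces used by common users.
SELECT username, default_tablespace
FROM dba_users
WHERE common = 'YES'
ORDER BY username;

-- In the root container of the local CDB (cdb1): tablespaces that exist.
SELECT tablespace_name
FROM dba_tablespaces
ORDER BY tablespace_name;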
In the examples below I have three databases running on the same virtual machine, but they could be running on separate physical or virtual servers.

cdb1 : The local database that will eventually house the clones.
db12c : The remote non-CDB.
cdb3 : The remote CDB, used for cloning a remote PDB (pdb5).

Cloning a Remote PDB


Connect to the remote CDB and prepare the remote PDB for cloning.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a user in the remote database for use with the database link. In this case, we will use a local user in the remote PDB.

ALTER SESSION SET CONTAINER=pdb5;

CREATE USER remote_clone_user IDENTIFIED BY remote_clone_user;


GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO remote_clone_user;
Open the remote PDB in read-only mode.

CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb5 CLOSE;
ALTER PLUGGABLE DATABASE pdb5 OPEN READ ONLY;
EXIT;
Switch to the local server and create a "tnsnames.ora" entry pointing to the remote database for use in the USING clause of the database link.

PDB5 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol7-121.localdomain)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = pdb5)

)
)
Connect to the local database to initiate the clone.

export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a database link in the local database, pointing to the remote database.

DROP DATABASE LINK clone_link;

CREATE DATABASE LINK clone_link


CONNECT TO remote_clone_user IDENTIFIED BY remote_clone_user USING 'pdb5';

-- Test link.
DESC user_tables@clone_link
Create a new PDB in the local database by cloning the remote PDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file name
conversions.

CREATE PLUGGABLE DATABASE pdb5new FROM pdb5@clone_link;

Pluggable database created.

SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5NEW';

NAME OPEN_MODE
------------------------------ ----------
PDB5NEW MOUNTED

SQL>
The PDB is opened in read-write mode to complete the process.

ALTER PLUGGABLE DATABASE pdb5new OPEN;

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5NEW';

NAME OPEN_MODE
------------------------------ ----------
PDB5NEW READ WRITE

SQL>
As with any PDB clone, check that common users and the temporary tablespace are configured as expected.
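A couple of quick checks along these lines are usually enough; adjust for whatever your environment requires.

ALTER SESSION SET CONTAINER = pdb5new;

-- Common users known to the new PDB.
SELECT username
FROM dba_users
WHERE common = 'YES'
ORDER BY username;

-- Default temporary tablespace for the PDB.
SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';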

Cloning a Remote Non-CDB


Connect to the remote database to prepare it for cloning.

export ORAENV_ASK=NO
export ORACLE_SID=db12c
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a user in the remote database for use with the database link.

CREATE USER remote_clone_user IDENTIFIED BY remote_clone_user;


GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO remote_clone_user;
Open the remote database in read-only mode.

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;
EXIT;
Switch to the local server and create a "tnsnames.ora" entry pointing to the remote database for use in the USING clause of the database link.

DB12C =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol7-121.localdomain)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = db12c)
)
)
Connect to the local database to initiate the clone.

export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a database link in the local database, pointing to the remote database.

DROP DATABASE LINK clone_link;

CREATE DATABASE LINK clone_link


CONNECT TO remote_clone_user IDENTIFIED BY remote_clone_user USING 'db12c';

-- Test link.
DESC user_tables@clone_link
Create a new PDB in the local database by cloning the remote non-CDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file
name conversions. Since there is no PDB to name, we use "NON$CDB" as the PDB name.

CREATE PLUGGABLE DATABASE db12cpdb FROM NON$CDB@clone_link;

Pluggable database created.

SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.

SELECT name, open_mode FROM v$pdbs WHERE name = 'DB12CPDB';

NAME OPEN_MODE
------------------------------ ----------
DB12CPDB MOUNTED

SQL>
Since this PDB was created as a clone of a non-CDB, before it can be opened we need to run the "$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql" script to clean it up.

ALTER SESSION SET CONTAINER=db12cpdb;

@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
The PDB can now be opened in read-write mode.

ALTER PLUGGABLE DATABASE db12cpdb OPEN;

SELECT name, open_mode FROM v$pdbs WHERE name = 'DB12CPDB';

NAME OPEN_MODE
------------------------------ ----------
DB12CPDB READ WRITE

SQL>

Metadata Only PDB Clones in Oracle Database 12c Release 1 (12.1.0.2)

The 12.1.0.2 patchset introduced the ability to do a metadata-only clone. Adding the NO DATA clause when cloning a PDB signifies that only the metadata for the user-created objects should be cloned,
not the data in the tables and indexes.

Setup
Create a clean PDB, then add a new user and a test table with some data.

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb10 ADMIN USER pdb_adm IDENTIFIED BY Password1


FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb10/');

ALTER PLUGGABLE DATABASE pdb10 OPEN;

ALTER SESSION SET CONTAINER = pdb10;

CREATE TABLESPACE users


DATAFILE '/u01/app/oracle/oradata/cdb1/pdb10/users01.dbf'
SIZE 1M AUTOEXTEND ON NEXT 1M;

CREATE USER test IDENTIFIED BY test


DEFAULT TABLESPACE users
QUOTA UNLIMITED ON users;

CREATE TABLE test.t1 (


id NUMBER
);
INSERT INTO test.t1 VALUES (1);
COMMIT;

SELECT COUNT(*) FROM test.t1;

COUNT(*)
----------
1

SQL>

Metadata Clone
Perform a metadata-only clone of the PDB using the NO DATA clause.

CONN / AS SYSDBA

ALTER PLUGGABLE DATABASE pdb10 CLOSE;


ALTER PLUGGABLE DATABASE pdb10 OPEN READ ONLY;

CREATE PLUGGABLE DATABASE pdb11 FROM pdb10


FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb10/','/u01/app/oracle/oradata/cdb1/pdb11/')
NO DATA;

ALTER PLUGGABLE DATABASE pdb11 OPEN READ WRITE;

-- Switch the source PDB back to read/write


ALTER PLUGGABLE DATABASE pdb10 CLOSE;
ALTER PLUGGABLE DATABASE pdb10 OPEN READ WRITE;
Checking the contents of the test table in the new PDB shows the table is present, but it is empty.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb11;

SELECT COUNT(*) FROM test.t1;

COUNT(*)
----------
0

SQL>

Restrictions
The NO DATA clause is only valid if the source PDB doesn't contain any of the following.

Index-organized tables
Advanced Queue (AQ) tables
Clustered tables
Table clusters
If it does, you will get the following type of error.
SQL> CREATE PLUGGABLE DATABASE pdb11 FROM pdb1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb1/pdb11/')
NO DATA;
CREATE PLUGGABLE DATABASE pdb11 FROM pdb1
*
ERROR at line 1:
ORA-65161: Unable to create pluggable database with no data

SQL>
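As a rough pre-check before attempting a NO DATA clone, queries along the following lines, run in the source PDB, will highlight index-organized tables, clustered tables and AQ tables. This is a sketch rather than an exhaustive test.

-- Index-organized and clustered tables.
SELECT owner, table_name, iot_type, cluster_name
FROM dba_tables
WHERE iot_type IS NOT NULL
OR cluster_name IS NOT NULL;

-- Advanced Queue (AQ) tables.
SELECT owner, queue_table
FROM dba_queue_tables;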

PDB Subset Cloning in Oracle Database 12c Release 1 (12.1.0.2)


The 12.1.0.2 patchset introduced the concept of PDB subset cloning, which allows a subset of all the tablespaces in a PDB to be cloned. Excluding tablespaces can be useful when you want to build a PDB
to test a specific piece of functionality, which doesn't require the whole database. It is also useful when splitting instances that were used for consolidation into their individual functional areas.

Setup
To see this feature working we will create a clean PDB, then add 3 new tablespaces, each with a default user and a single object in them. This will mimic a situation where a single database has been used
to consolidate three different applications.

CONN / AS SYSDBA

-- Create a new PDB


CREATE PLUGGABLE DATABASE pdb20 ADMIN USER pdb_adm IDENTIFIED BY Password1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb20/');

ALTER PLUGGABLE DATABASE pdb20 OPEN;

ALTER SESSION SET CONTAINER = pdb20;

-- Create first TS, User, Table.


CREATE TABLESPACE ts1
DATAFILE '/u01/app/oracle/oradata/cdb1/pdb20/ts101.dbf'
SIZE 1M AUTOEXTEND ON NEXT 1M;

CREATE USER test1 IDENTIFIED BY test1


DEFAULT TABLESPACE ts1
QUOTA UNLIMITED ON ts1;

CREATE TABLE test1.t1 (


id NUMBER
);
INSERT INTO test1.t1 VALUES (1);
COMMIT;

-- Create second TS, User, Table.


CREATE TABLESPACE ts2
DATAFILE '/u01/app/oracle/oradata/cdb1/pdb20/ts201.dbf'
SIZE 1M AUTOEXTEND ON NEXT 1M;

CREATE USER test2 IDENTIFIED BY test2


DEFAULT TABLESPACE ts2

QUOTA UNLIMITED ON ts2;

CREATE TABLE test2.t2 (


id NUMBER
);
INSERT INTO test2.t2 VALUES (1);
COMMIT;

-- Create third TS, User, Table.


CREATE TABLESPACE ts3
DATAFILE '/u01/app/oracle/oradata/cdb1/pdb20/ts301.dbf'
SIZE 1M AUTOEXTEND ON NEXT 1M;

CREATE USER test3 IDENTIFIED BY test3


DEFAULT TABLESPACE ts3
QUOTA UNLIMITED ON ts3;

CREATE TABLE test3.t3 (


id NUMBER
);
INSERT INTO test3.t3 VALUES (1);
COMMIT;
We can see the separation between the schemas in the following query.

COLUMN owner FORMAT A20


COLUMN table_name FORMAT A20
COLUMN tablespace_name FORMAT A20

SELECT owner, table_name, tablespace_name


FROM dba_tables
WHERE table_name IN ('T1','T2','T3')
ORDER BY owner;

OWNER TABLE_NAME TABLESPACE_NAME


-------------------- -------------------- --------------------
TEST1 T1 TS1
TEST2 T2 TS2
TEST3 T3 TS3

SQL>

PDB Subset Cloning


PDB subset cloning is made possible using the USER_TABLESPACES clause, which allows you to specify the user-defined tablespaces to be included in the clone in one of several ways.

One or more named tablespaces in a comma-separated list.


NONE : No user-defined tablespaces are included in the clone.
ALL : All user-defined tablespaces are included in the clone. This is the same as omitting the clause completely.
ALL EXCEPT : Exclude one or more named user-defined tablespaces as a comma separated list.
The following example creates a clone including a named list of tablespaces.

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb21 FROM pdb20


FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb20/','/u01/app/oracle/oradata/cdb1/pdb21/')
USER_TABLESPACES=('ts1', 'ts2');

ALTER PLUGGABLE DATABASE pdb21 OPEN;

ALTER SESSION SET CONTAINER = pdb21;


If we query the list of tablespaces, it appears all of them are present.

SELECT tablespace_name from dba_tablespaces;

TABLESPACE_NAME
--------------------
SYSTEM
SYSAUX
TEMP
TS1
TS2
TS3

6 rows selected.

SQL>
If we try to access the objects from each schema, we see this is not the case.

SQL> SELECT * FROM test1.t1;

ID
----------
1

SQL> SELECT * FROM test2.t2;

ID
----------
1

SQL> SELECT * FROM test3.t3;


SELECT * FROM test3.t3
*
ERROR at line 1:
ORA-00376: file 30 cannot be read at this time
ORA-01111: name for data file 30 is unknown - rename to correct file
ORA-01110: data file 30:
'/u01/app/oracle/product/12.1.0.2/db_1/dbs/MISSING00030'

SQL>
As requested, the datafile for the TS3 tablespace has not been cloned, so we should do some post-clone clean up to make the PDB look consistent.

DROP TABLESPACE ts3 INCLUDING CONTENTS AND DATAFILES;


DROP USER test3 CASCADE;
The following example creates a clone with none of the user-defined tablespaces present.

CREATE PLUGGABLE DATABASE pdb22 FROM pdb20


FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb20/','/u01/app/oracle/oradata/cdb1/pdb22/')
USER_TABLESPACES=NONE;

ALTER PLUGGABLE DATABASE pdb22 OPEN;


The following example clones all the user-defined tablespaces, which is the same as omitting the USER_TABLESPACES clause.

CREATE PLUGGABLE DATABASE pdb23 FROM pdb20


FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb20/','/u01/app/oracle/oradata/cdb1/pdb23/')
USER_TABLESPACES=ALL;

ALTER PLUGGABLE DATABASE pdb23 OPEN;


The ALL EXCEPT variant allows you to list those tablespaces to be excluded.

CREATE PLUGGABLE DATABASE pdb24 FROM pdb20


FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb20/','/u01/app/oracle/oradata/cdb1/pdb24/')
USER_TABLESPACES=ALL EXCEPT('ts3');

ALTER PLUGGABLE DATABASE pdb24 OPEN;

PDB CONTAINERS Clause in Oracle Database 12c (12.1.0.2 and 12.2)

Setup
We need to create 3 PDBs to test the CONTAINERS clause. The setup code below does the following.

Creates a pluggable database called PDB1.


Creates a local user in PDB1 called LOCAL_USER that owns a populated table called LOCAL_USER_TAB.
Creates two clones of PDB1 called PDB2 and PDB3.
These examples use Oracle Managed Files (OMF). If you are not using OMF you will need to handle the file conversions manually using the FILE_NAME_CONVERT clause or the
PDB_FILE_NAME_CONVERT parameter.

CONN / AS SYSDBA

-- Create a pluggable database


CREATE PLUGGABLE DATABASE pdb1
ADMIN USER pdb_admin IDENTIFIED BY Password1
DEFAULT TABLESPACE users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;

ALTER PLUGGABLE DATABASE pdb1 OPEN;

ALTER SESSION SET CONTAINER = pdb1;

-- Create a local user.


CREATE USER local_user IDENTIFIED BY Local1 QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE TO local_user;

CREATE TABLE local_user.local_user_tab AS


SELECT level AS ID
FROM dual
CONNECT BY level <= 2;

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;


ALTER PLUGGABLE DATABASE pdb2 OPEN;
CREATE PLUGGABLE DATABASE pdb3 FROM pdb1;
ALTER PLUGGABLE DATABASE pdb3 OPEN;
The next part of the setup does the following.

Creates a common user called C##COMMON_USER that owns an empty table called COMMON_USER_TAB in the root container.
Creates a populated version of the COMMON_USER_TAB table owned by the C##COMMON_USER user in each PDB.
Grants select privilege on the local user's table to the common user.
-- Create a common user that owns an empty table.
CONN / AS SYSDBA
CREATE USER c##common_user IDENTIFIED BY Common1 QUOTA UNLIMITED ON users;

GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SYNONYM TO c##common_user CONTAINER=ALL;

-- Create a table in the common user for each container.


-- Don't populate the one in the root container.
CONN c##common_user/Common1
CREATE TABLE c##common_user.common_user_tab (id NUMBER);

CONN c##common_user/Common1@pdb1

CREATE TABLE c##common_user.common_user_tab AS


SELECT level AS ID
FROM dual
CONNECT BY level <= 2;

CONN c##common_user/Common1@pdb2

CREATE TABLE c##common_user.common_user_tab AS


SELECT level AS ID
FROM dual
CONNECT BY level <= 2;

CONN c##common_user/Common1@pdb3

CREATE TABLE c##common_user.common_user_tab AS


SELECT level AS ID
FROM dual
CONNECT BY level <= 2;

-- Grant select on the local user's table to the common user.


CONN local_user/Local1@pdb1
GRANT SELECT ON local_user_tab TO c##common_user;

CONN local_user/Local1@pdb2
GRANT SELECT ON local_user_tab TO c##common_user;

CONN local_user/Local1@pdb3
GRANT SELECT ON local_user_tab TO c##common_user;

CONN / AS SYSDBA
CONTAINERS Clause with Common Users
The CONTAINERS clause can only be used from a common user in the root container. With no additional changes we can query the COMMON_USER_TAB tables present in the common user in all the
containers. The most basic use of the CONTAINERS clause is shown below.

CONN c##common_user/Common1

SELECT *
FROM CONTAINERS(common_user_tab);

ID CON_ID
---------- ----------
1 4
2 4
1 5
2 5
1 3
2 3

6 rows selected.

SQL>
Notice the CON_ID column has been added to the column list, to indicate which container the result came from. This allows us to query a subset of the containers.

SELECT con_id, id
FROM CONTAINERS(common_user_tab)
WHERE con_id IN (3, 4)
ORDER BY con_id, id;

CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2

4 rows selected.

SQL>

CONTAINERS Clause with Local Users


To query tables and views from local users, the documentation suggests you must create views on them from a common user. The following code creates views against the LOCAL_USER_TAB tables
created earlier. We must also create a table in the root container with the same name as the views.

CONN c##common_user/Common1
CREATE TABLE c##common_user.local_user_tab_v (id NUMBER);

CONN c##common_user/Common1@pdb1
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;

CONN c##common_user/Common1@pdb2
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;

CONN c##common_user/Common1@pdb3
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;
With the blank table and views in place we can now use the CONTAINERS clause indirectly against the local user objects.

CONN c##common_user/Common1

SELECT con_id, id
FROM CONTAINERS(local_user_tab_v)
ORDER BY con_id, id;

CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
5 1
5 2

6 rows selected.

SQL>
The documentation suggests the use of synonyms in place of views will not work, since the synonyms must resolve to objects owned by the common user issuing the query.

"When a synonym is specified in the CONTAINERS clause, the synonym must resolve to a table or a view owned by the common user issuing the statement."
That's not quite true from my tests, but it doesn't stop you from using synonyms to local objects in the PDBs, provided the object in the root container is not a synonym. The following example uses a
real object in the root container, and local objects via synonyms in the pluggable databases.

CONN c##common_user/Common1
CREATE TABLE c##common_user.local_user_tab_syn (id NUMBER);

CONN c##common_user/Common1@pdb1
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;

CONN c##common_user/Common1@pdb2
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;

CONN c##common_user/Common1@pdb3
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;

CONN c##common_user/Common1

SELECT con_id, id
FROM CONTAINERS(local_user_tab_syn)
ORDER BY con_id, id;

CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
5 1
5 2

6 rows selected.

SQL>
Let's see what happens if we drop the common user table and replace it with a synonym of the same name, pointing to a table of the same structure as the local tables, but owned by the common user.

CONN c##common_user/Common1

DROP TABLE c##common_user.local_user_tab_syn PURGE;

CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;

SELECT *
FROM CONTAINERS(local_user_tab_syn);

SELECT *
*

ERROR at line 1:
ORA-12801: error signaled in parallel query server P004
ORA-00942: table or view does not exist

SQL>
If the synonyms consistently point to an object in the common user it still doesn't work.

CONN c##common_user/Common1@pdb1
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER

SQL>

CONN c##common_user/Common1@pdb2
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER

SQL>

CONN c##common_user/Common1@pdb3
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER

SQL>

CONN c##common_user/Common1
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER

SQL>

SELECT con_id, id
FROM CONTAINERS(local_user_tab_syn)
ORDER BY con_id, id;

FROM CONTAINERS(local_user_tab_syn)
*
ERROR at line 2:
ORA-00942: table or view does not exist

SQL>
I'm not sure what the wording in the documentation means, but it doesn't read well to me.

CONTAINERS Hint (12.2)


Oracle database 12.2 introduced the CONTAINERS hint, allowing you an element of control over the recursive SQL statements executed as a result of using the CONTAINERS clause.

The hint is placed in the select list as usual, with the basic syntax as follows. Substitute the hint you want in place of "<<PUT-HINT-HERE>>".

/*+ CONTAINERS(DEFAULT_PDB_HINT='<<PUT-HINT-HERE>>') */
As an example, we will run a query against the ALL_OBJECTS view and check the elapsed time.

CONN / AS SYSDBA

SET TIMING ON

SELECT con_id, MAX(object_id)


FROM CONTAINERS(all_objects)
GROUP BY con_id
ORDER BY 1;

CON_ID MAX(OBJECT_ID)
---------- --------------
1 75316
3 73209
4 73330
5 73323

Elapsed: 00:00:00.31
SQL>
We repeat the query, but this time add a PARALLEL(2) hint to the recursive queries run in each PDB, which should make the elapsed time slower on this small VM.


SELECT /*+ CONTAINERS(DEFAULT_PDB_HINT='PARALLEL(2)') */


con_id, MAX(object_id)
FROM CONTAINERS(all_objects)
GROUP BY con_id
ORDER BY 1;

CON_ID MAX(OBJECT_ID)
---------- --------------
1 75316
3 73209
4 73340
5 73323

Elapsed: 00:00:06.17
SQL>
Notice the significantly longer elapsed time as a result of the parallel operations in the recursive SQL.

Clean Up
You can clean up all the pluggable databases and the common user created for these examples using the following script.

-- !!! Double-check you really need to do all these steps !!!

CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb2 CLOSE;
ALTER PLUGGABLE DATABASE pdb3 CLOSE;
DROP PLUGGABLE DATABASE pdb1 INCLUDING DATAFILES;
DROP PLUGGABLE DATABASE pdb2 INCLUDING DATAFILES;
DROP PLUGGABLE DATABASE pdb3 INCLUDING DATAFILES;

DROP USER c##common_user CASCADE;

PDB Logging Clause in Oracle Database 12c Release 1 (12.1.0.2)


The PDB logging clause is used to set the default tablespace logging clause for a PDB. If a tablespace is created without an explicit logging clause, the default PDB logging clause is used.

There are some issues with this feature unless you apply the relevant patch.

This feature does not work in the stock 12.1.0.2 release due to bug 18902135. The PDB logging clause is ignored when creating a new tablespace.
After you apply the 18902135 patch, if you set the PDB logging clause to NOLOGGING, the PDB logging clause is *always* used to determine the logging setting of the tablespace. It can't be overridden
by explicitly setting the logging clause in the CREATE TABLESPACE statement. This goes against what the documentation states, so it appears the bug fix has introduced a new bug! If you set the PDB
logging clause to LOGGING, the setting can still be overridden at the tablespace level.
The feature has finally been fixed if you apply the 20961627 patch.

CREATE PLUGGABLE DATABASE


Adding the NOLOGGING clause during PDB creation sets the default logging mode for all subsequent tablespaces in the resulting PDB. The DBA_PDBS view displays the default logging clause for the PDB.

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb5


ADMIN USER pdb_adm IDENTIFIED BY Password1
NOLOGGING;

ALTER PLUGGABLE DATABASE pdb5 OPEN;

COLUMN pdb_name FORMAT A20

SELECT pdb_name, logging


FROM dba_pdbs
ORDER BY pdb_name;

PDB_NAME LOGGING
-------------------- ---------
PDB$SEED LOGGING
PDB1 LOGGING
PDB2 LOGGING
PDB5 NOLOGGING

4 rows selected.

SQL>
If we create a new tablespace in the PDB without an explicit logging clause, we can see the default logging clause is used.

ALTER SESSION SET CONTAINER = pdb5;

CREATE TABLESPACE test1_ts;

SELECT tablespace_name, logging


FROM dba_tablespaces
ORDER BY tablespace_name;

TABLESPACE_NAME LOGGING
------------------------------ ---------
SYSAUX LOGGING
SYSTEM LOGGING
TEMP NOLOGGING

TEST1_TS NOLOGGING

4 rows selected.

SQL>
The default logging clause can be overridden if an explicit logging clause is used during tablespace creation.

ALTER SESSION SET CONTAINER = pdb5;

DROP TABLESPACE test1_ts INCLUDING CONTENTS AND DATAFILES;


CREATE TABLESPACE test1_ts LOGGING;

SELECT tablespace_name, logging


FROM dba_tablespaces
ORDER BY tablespace_name;

TABLESPACE_NAME LOGGING
------------------------------ ---------
SYSAUX LOGGING
SYSTEM LOGGING
TEMP NOLOGGING
TEST1_TS LOGGING

4 rows selected.

SQL>
ALTER PLUGGABLE DATABASE
The PDB logging clause can also be set using the ALTER PLUGGABLE DATABASE command. In this case, the effect is seen during the creation of new tablespaces in the PDB.

ALTER SESSION SET CONTAINER = pdb5;

ALTER PLUGGABLE DATABASE pdb5 OPEN READ WRITE RESTRICTED FORCE;


ALTER PLUGGABLE DATABASE pdb5 LOGGING;
ALTER PLUGGABLE DATABASE pdb5 OPEN READ WRITE FORCE;

DROP TABLESPACE test1_ts INCLUDING CONTENTS AND DATAFILES;


CREATE TABLESPACE test1_ts;

SELECT tablespace_name, logging


FROM dba_tablespaces
ORDER BY tablespace_name;

TABLESPACE_NAME LOGGING
------------------------------ ---------
SYSAUX LOGGING
SYSTEM LOGGING
TEMP NOLOGGING
TEST1_TS LOGGING

4 rows selected.

SQL>
The default logging clause can be overridden if an explicit logging clause is used during tablespace creation.

DROP TABLESPACE test1_ts INCLUDING CONTENTS AND DATAFILES;


CREATE TABLESPACE test1_ts NOLOGGING;

SELECT tablespace_name, logging


FROM dba_tablespaces
ORDER BY tablespace_name;

TABLESPACE_NAME LOGGING
------------------------------ ---------
SYSAUX LOGGING
SYSTEM LOGGING
TEMP NOLOGGING
TEST1_TS NOLOGGING

4 rows selected.

SQL>

Default Tablespace Clause During PDB Creation in Oracle Database 12c Release 2 (12.2)
Default Tablespace Clause in 12.1
In both Oracle database 12.1 and 12.2 the DEFAULT TABLESPACE clause of the CREATE PLUGGABLE DATABASE command can be used to create a new default tablespace for a pluggable database created
from the seed.

The following example gives both the Oracle Managed Files (OMF) and non-OMF syntax. All further examples will assume you are using OMF. You can add the appropriate FILE_NAME_CONVERT or
PDB_FILE_NAME_CONVERT settings if you need them.

CONN / AS SYSDBA

-- Oracle Managed Files (OMF) syntax.


CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1
DEFAULT TABLESPACE users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;

-- Non-OMF syntax.
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb2/')
DEFAULT TABLESPACE users DATAFILE '/u01/app/oracle/oradata/cdb1/pdb2/users01.dbf' SIZE 1M AUTOEXTEND ON NEXT 1M;

ALTER PLUGGABLE DATABASE pdb2 OPEN;


Once the PDB is created you can see the presence of the extra tablespace and the database default tablespace setting in the PDB.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb2;

SELECT tablespace_name
FROM dba_tablespaces
ORDER BY 1;

TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
TEMP
USERS

SQL>

SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_PERMANENT_TABLESPACE';

PROPERTY_VALUE
----------------------------------------------------------------------------------------------------
USERS

SQL>
The DEFAULT TABLESPACE clause can't be used when creating a pluggable database from a user-defined PDB. In the examples below we attempt to use it both to specify a new default tablespace and
reference an existing tablespace. Both result in an error.

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb3 FROM pdb2


DEFAULT TABLESPACE users2 DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;

DEFAULT TABLESPACE users2 DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M


*
ERROR at line 2:
ORA-00922: missing or invalid option

SQL>

CREATE PLUGGABLE DATABASE pdb3 FROM pdb2


DEFAULT TABLESPACE users;

DEFAULT TABLESPACE users


*
ERROR at line 2:
ORA-00922: missing or invalid option

SQL>
Default Tablespace Clause in 12.2
In Oracle database 12.2 the DEFAULT TABLESPACE clause can be used regardless of the source of the clone. If the source is the seed PDB, the clause is used to create a new default tablespace, as it was in
Oracle 12.1. If the source is a user-defined PDB, the clause specifies which existing tablespace in the new PDB should be set as the default tablespace.

To see this in action, create a new pluggable database from the seed as we did before.

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1


DEFAULT TABLESPACE users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;

ALTER PLUGGABLE DATABASE pdb2 OPEN;


The USERS tablespace will be the default tablespace in this PDB and any others cloned from it, so let's try something different. Create a new tablespace in the PDB, but don't set it as the database default
tablespace.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb2;

CREATE TABLESPACE test_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;


Create a new PDB as a clone of this user-defined PDB, but tell it to use the TEST_TS tablespace as the database default tablespace.

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb3 FROM pdb2


DEFAULT TABLESPACE test_ts;

ALTER PLUGGABLE DATABASE pdb3 OPEN;
We can see the tablespaces in the new PDB are the same as the source, but it's now using the TEST_TS tablespace, rather than the USERS tablespace, as the database default tablespace.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb3;

SELECT tablespace_name
FROM dba_tablespaces
ORDER BY 1;

TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
TEMP
TEST_TS
UNDOTBS1
USERS

SQL>

SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_PERMANENT_TABLESPACE';

PROPERTY_VALUE
----------------------------------------------------------------------------------------------------
TEST_TS

SQL>
Attempting to create a new tablespace, rather than reference an existing tablespace, during the creation process still results in an error.

CONN / AS SYSDBA
-- Clean up.
ALTER PLUGGABLE DATABASE pdb3 CLOSE;
DROP PLUGGABLE DATABASE pdb3 INCLUDING DATAFILES;

CREATE PLUGGABLE DATABASE pdb3 FROM pdb2


DEFAULT TABLESPACE another_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;

DEFAULT TABLESPACE another_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M


*
ERROR at line 2:
ORA-00922: missing or invalid option
SQL>

Disk I/O (IOPS, MBPS) Resource Management for PDBs in Oracle Database 12c Release 2 (12.2)
In the previous release there was no easy way to control the amount of disk I/O used by an individual PDB. As a result a "noisy neighbour" could use up lots of disk I/O and impact the performance of
other PDBs in the same instance. Oracle Database 12c Release 2 (12.2) allows you to control the amount of disk I/O used by a PDB, making consolidation more reliable.


I/O Parameters (MAX_IOPS, MAX_MBPS)


The following parameters can be set at the CDB or PDB level to throttle I/O at the PDB level.

MAX_IOPS : The maximum I/O operations per second for the PDB. Default "0". Values less than 100 IOPS are not recommended.
MAX_MBPS : The maximum megabytes of I/O per second for the PDB. Default "0". Values less than 25 MBPS are not recommended.
Some things to consider about their usage are listed below.

The parameters are independent. You can use none, one or both.
When the parameters are set at the CDB level they become the default values used by all PDBs.
When they are set at the PDB level they override any default values.
If the values are "0" at both the CDB and PDB level there is no I/O throttling.
Critical I/Os necessary for normal function of the instance are not limited, but do count towards the total I/O as far as the limit is concerned, so it is possible for the I/O to temporarily exceed the limit.
The parameters are only available for the multitenant architecture.
This feature is not available for Exadata.
Throttling will result in a resource manager wait event called "resmgr: I/O rate limit", which you can check for as shown below.
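A sketch of such a check, using the cumulative instance-wide wait statistics, is shown here; run it in the container you are interested in.

SELECT event, total_waits, time_waited
FROM v$system_event
WHERE event = 'resmgr: I/O rate limit';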
Setting I/O Parameters
The example below sets the MAX_IOPS and MAX_MBPS parameters at the CDB level, the default values for all PDBs.

CONN / AS SYSDBA

-- Set defaults.
ALTER SYSTEM SET max_iops=100 SCOPE=BOTH;
ALTER SYSTEM SET max_mbps=400 SCOPE=BOTH;

-- Remove defaults.
ALTER SYSTEM SET max_iops=0 SCOPE=BOTH;
ALTER SYSTEM SET max_mbps=0 SCOPE=BOTH;
The example below sets the MAX_IOPS and MAX_MBPS parameters at the PDB level.

CONN / AS SYSDBA

ALTER SESSION SET CONTAINER = pdb1;

-- Set PDB-specific values.


ALTER SYSTEM SET max_iops=100 SCOPE=BOTH;
ALTER SYSTEM SET max_mbps=400 SCOPE=BOTH;

-- Remove PDB-specific values.


ALTER SYSTEM SET max_iops=0 SCOPE=BOTH;
ALTER SYSTEM SET max_mbps=0 SCOPE=BOTH;
Monitoring I/O Usage for PDBs
Oracle now provides views to monitor the resource (CPU, I/O, parallel execution, memory) usage of PDBs. Each view contains similar information, but for different retention periods.

V$RSRCPDBMETRIC : A single row per PDB, holding the last of the 1 minute samples.
V$RSRCPDBMETRIC_HISTORY : 61 rows per PDB, holding the last 60 minutes worth of samples from the V$RSRCPDBMETRIC view.
DBA_HIST_RSRC_PDB_METRIC : AWR snapshots, retained based on the AWR retention period.
The following queries are examples of their usage.

CONN / AS SYSDBA

SET LINESIZE 180


COLUMN pdb_name FORMAT A10
COLUMN begin_time FORMAT A26
COLUMN end_time FORMAT A26
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
ALTER SESSION SET NLS_TIMESTAMP_FORMAT='DD-MON-YYYY HH24:MI:SS.FF';

-- Last sample per PDB.


SELECT r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.iops,
r.iombps,
r.iops_throttle_exempt,
r.iombps_throttle_exempt,
r.avg_io_throttle
FROM v$rsrcpdbmetric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
ORDER BY p.pdb_name;

-- Last hour's samples for PDB1.


SELECT r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.iops,
r.iombps,
r.iops_throttle_exempt,
r.iombps_throttle_exempt,
r.avg_io_throttle
FROM v$rsrcpdbmetric_history r,
cdb_pdbs p
WHERE r.con_id = p.con_id
AND p.pdb_name = 'PDB1'
ORDER BY r.begin_time;

-- All AWR snapshot information for PDB1.


SELECT r.snap_id,
r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.iops,
r.iombps,
r.iops_throttle_exempt,
r.iombps_throttle_exempt,
r.avg_io_throttle
FROM dba_hist_rsrc_pdb_metric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
AND p.pdb_name = 'PDB1'
ORDER BY r.begin_time;

Flashback Pluggable Database (PDB) in Oracle Database 12c Release 2 (12.2)


In Oracle Database 12.1 flashback database operations were limited to the root container, and therefore affected all pluggable databases (PDBs) associated with the root container. Oracle Database 12.2
now supports flashback of a pluggable database, making flashback database relevant in the multitenant architecture again.

Enable/Disable Flashback Database


Before we can enable flashback database we need to make sure the database is in archivelog mode. You must do this from the root container.

CONN / AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

We can now enable/disable flashback database with the following commands.

ALTER DATABASE FLASHBACK ON;


--ALTER DATABASE FLASHBACK OFF;
We can check the status of flashback database using the following query.

SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
YES

1 row selected.

SQL>
The amount of flashback logs retained is controlled by the DB_FLASHBACK_RETENTION_TARGET parameter, which indicates the retention time in minutes.

-- Retention for 7 days.


ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=10080 SCOPE=BOTH;

Creating Restore Points


A restore point is just a text alias for an SCN. A guaranteed restore point prevents the database from removing any flashback logs between that point and the current time, so you should always remove unnecessary guaranteed restore points.

Creating restore points at the CDB level is the same as for the non-CDB architecture. The following examples create and drop a normal and guaranteed restore point at the CDB level.

CONN / AS SYSDBA

-- Normal restore point.


CREATE RESTORE POINT cdb1_before_changes;
DROP RESTORE POINT cdb1_before_changes;

-- Guaranteed restore point.


CREATE RESTORE POINT cdb1_before_changes GUARANTEE FLASHBACK DATABASE;
DROP RESTORE POINT cdb1_before_changes;
There are several options for creating restore points at the PDB level. If you connect to the PDB you can issue the commands as normal.

CONN / AS SYSDBA

ALTER SESSION SET CONTAINER=pdb1;

-- Normal restore point.


CREATE RESTORE POINT pdb1_before_changes;
DROP RESTORE POINT pdb1_before_changes;

-- Guaranteed restore point.


CREATE RESTORE POINT pdb1_before_changes GUARANTEE FLASHBACK DATABASE;
DROP RESTORE POINT pdb1_before_changes;
Alternatively you can create them from the root container by using the FOR PLUGGABLE DATABASE clause.

CONN / AS SYSDBA

-- Normal restore point.


CREATE RESTORE POINT pdb1_before_changes FOR PLUGGABLE DATABASE pdb1;
DROP RESTORE POINT pdb1_before_changes FOR PLUGGABLE DATABASE pdb1;

-- Guaranteed restore point.


CREATE RESTORE POINT pdb1_before_changes FOR PLUGGABLE DATABASE pdb1 GUARANTEE FLASHBACK DATABASE;
DROP RESTORE POINT pdb1_before_changes FOR PLUGGABLE DATABASE pdb1;
Information about restore points can be displayed using the V$RESTORE_POINT view.

Creating Clean Restore Points


Ignore this section if you are running in local undo mode.

It is preferable for the container database to be running in local undo mode, but flashback PDB does not depend on it. If the CDB is running in shared undo mode, it is more efficient to flashback to clean
restore points. These are restore points taken when the pluggable database is down, with no outstanding transactions.

Clean restore points can be created while connected to the PDB as follows.

CONN / AS SYSDBA

ALTER SESSION SET CONTAINER=pdb1;

SHUTDOWN;

-- Clean restore point.


CREATE CLEAN RESTORE POINT pdb1_before_changes;
DROP RESTORE POINT pdb1_before_changes;

-- Clean guaranteed restore point.


CREATE CLEAN RESTORE POINT pdb1_before_changes GUARANTEE FLASHBACK DATABASE;
DROP RESTORE POINT pdb1_before_changes;

STARTUP;
They can also be created from the root container.

CONN / AS SYSDBA

ALTER PLUGGABLE DATABASE pdb1 CLOSE;

-- Normal restore point.


CREATE CLEAN RESTORE POINT pdb1_before_changes FOR PLUGGABLE DATABASE pdb1;
DROP RESTORE POINT pdb1_before_changes FOR PLUGGABLE DATABASE pdb1;

-- Clean guaranteed restore point.


CREATE CLEAN RESTORE POINT pdb1_before_changes FOR PLUGGABLE DATABASE pdb1 GUARANTEE FLASHBACK DATABASE;
DROP RESTORE POINT pdb1_before_changes FOR PLUGGABLE DATABASE pdb1;

ALTER PLUGGABLE DATABASE pdb1 OPEN;


All restore points created while a pluggable database is closed are marked as clean, as shown by the CLEAN_PDB_RESTORE_POINT column in the V$RESTORE_POINT view.
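As a quick illustration, a sketch like the following shows which restore points are PDB restore points and which of those are clean. The PDB_RESTORE_POINT and CLEAN_PDB_RESTORE_POINT columns are assumed to be available, as they are in 12.2.

COLUMN name FORMAT A30

SELECT name, pdb_restore_point, clean_pdb_restore_point
FROM   v$restore_point;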

Flashback Container Database (CDB) and Pluggable Database (PDB)


The basic procedure to flashback a CDB is as follows.

CONN / AS SYSDBA

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT cdb1_before_changes;
ALTER DATABASE OPEN RESETLOGS;

-- Open all pluggable databases.


ALTER PLUGGABLE DATABASE ALL OPEN RESETLOGS;
The flashback operation itself can take one of several forms.

FLASHBACK DATABASE TO TIMESTAMP my_date;


FLASHBACK DATABASE TO BEFORE TIMESTAMP my_date;
FLASHBACK DATABASE TO SCN my_scn;
FLASHBACK DATABASE TO BEFORE SCN my_scn;
FLASHBACK DATABASE TO RESTORE POINT my_restore_point;
The flashback of a PDB varies depending on whether local undo mode is used or not. Typically, you will be using local undo mode, so the procedure will be as follows.

CONN / AS SYSDBA

ALTER PLUGGABLE DATABASE pdb1 CLOSE;


FLASHBACK PLUGGABLE DATABASE pdb1 TO RESTORE POINT pdb1_before_changes;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
The flashback operation itself can take one of several forms.

FLASHBACK PLUGGABLE DATABASE pdb1 TO TIMESTAMP my_date;


FLASHBACK PLUGGABLE DATABASE pdb1 TO BEFORE TIMESTAMP my_date;
FLASHBACK PLUGGABLE DATABASE pdb1 TO SCN my_scn;
FLASHBACK PLUGGABLE DATABASE pdb1 TO BEFORE SCN my_scn;
FLASHBACK PLUGGABLE DATABASE pdb1 TO RESTORE POINT my_restore_point;
If you are using shared undo mode, then the syntax is a little different as you will have to specify a location for an auxiliary instance.

FLASHBACK PLUGGABLE DATABASE my_pdb TO SCN my_scn AUXILIARY DESTINATION '/u01/aux';


FLASHBACK PLUGGABLE DATABASE my_pdb TO RESTORE POINT my_restore_point AUXILIARY DESTINATION '/u01/aux';
Flashback a Pluggable Database (PDB) Example
Create a restore point.

CONN / AS SYSDBA

CREATE RESTORE POINT pdb1_before_changes FOR PLUGGABLE DATABASE pdb1;


Make a change inside the PDB.

CONN test/test@pdb1

CREATE TABLE t1 (
id NUMBER
);

INSERT INTO t1 VALUES (1);


COMMIT;

SELECT * FROM t1;

ID
----------
1

SQL>
Flashback the PDB to the restore point.

CONN / AS SYSDBA

ALTER PLUGGABLE DATABASE pdb1 CLOSE;


FLASHBACK PLUGGABLE DATABASE pdb1 TO RESTORE POINT pdb1_before_changes;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
Check to see the table is missing.

CONN test/test@pdb1


SELECT * FROM t1;


SELECT * FROM t1
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL>

Hot Clone a Remote PDB or Non-CDB in Oracle Database 12c Release 2 (12.2)

In the initial release of Oracle Database 12c Release 1 (12.1.0.1) remote cloning of PDBs was listed as a feature, but it didn't work. The 12.1.0.2 patch fixed that, but also added the ability to create a PDB
as a clone of a remote non-CDB database. The biggest problem with remote cloning was the prerequisite of placing the source PDB or non-CDB into read-only mode before initiating the cloning process.
This made the feature impractical for cloning production systems, as that level of downtime is typically unacceptable. Oracle Database 12c Release 2 (12.2) removes this prerequisite, which enables hot
cloning of PDBs and non-CDBs for the first time.

Prerequisites
The prerequisites for cloning a remote PDB or non-CDB are very similar, so I will deal with them together.

In this context, the word "local" refers to the destination or target CDB that will house the cloned PDB. The word "remote" refers to the PDB or non-CDB that is the source of the clone.

The user in the local database must have the CREATE PLUGGABLE DATABASE privilege in the root container.
The remote CDB must use local undo mode. Without this you must open the remote PDB or non-CDB in read-only mode.
The remote database should be in archivelog mode. Without this you must open the remote PDB or non-CDB in read-only mode.
The local database must have a database link to the remote database. If the remote database is a PDB, the database link can point to the remote CDB using a common user, or to the PDB or an
application container using a local or common user.
The user in the remote database that the database link connects to must have the CREATE PLUGGABLE DATABASE privilege.
The local and remote databases must have the same endianness (see the example check after this list).
The local and remote databases must either have the same options installed, or the remote database must have a subset of those present on the local database.
If the character set of the local CDB is AL32UTF8, the remote database can be any character set. If the local CDB does not use AL32UTF8, the character sets of the remote and local databases must
match.
If the remote database uses Transparent Data Encryption (TDE) the local CDB must be configured appropriately before attempting the clone. If not you will be left with a new PDB that will only open in
restricted mode.
Bug 19174942 is marked as fixed in 12.2. I can't confirm this, so just in case I'll leave this here, but it should no longer be the case. The default tablespaces for each common user in the remote PDB
*must* exist in the local CDB. If this is not true, create the missing tablespaces in the root container of the local CDB. If you don't do this your new PDB will only be able to open in restricted mode (Bug
19174942).
When cloning from a non-CDB, both the local and remote databases must be using version 12.1.0.2 or higher.
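One way to check the endianness requirement is to run a query like the following on both the local and remote databases and compare the results. This is just a sketch using the standard V$DATABASE and V$TRANSPORTABLE_PLATFORM views.

SELECT d.platform_name, tp.endian_format
FROM   v$database d
       JOIN v$transportable_platform tp ON tp.platform_name = d.platform_name;
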
In the examples below I have three databases running on the same virtual machine, but they could be running on separate physical or virtual servers.

cdb1 : The local database that will eventually house the clones.
db12c : The remote non-CDB.
cdb3 : The remote CDB, used for cloning a remote PDB (pdb5).
Cloning a Remote PDB
Connect to the remote CDB and prepare the remote PDB for cloning.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a user in the remote database for use with the database link. In this case, we will use a common user in the remote CDB.

CREATE USER c##remote_clone_user IDENTIFIED BY remote_clone_user CONTAINER=ALL;


GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO c##remote_clone_user CONTAINER=ALL;
Check the remote CDB is in local undo mode and archivelog mode.

CONN / AS SYSDBA

COLUMN property_name FORMAT A30


COLUMN property_value FORMAT A30

SELECT property_name, property_value


FROM database_properties
WHERE property_name = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE

SQL>

SELECT log_mode
FROM v$database;

LOG_MODE
------------
ARCHIVELOG

SQL>
Because the remote CDB is in local undo mode and archivelog mode, we don't need to switch the remote PDB to read-only mode.

Switch to the local server and create a "tnsnames.ora" entry pointing to the remote database for use in the USING clause of the database link.

CDB3=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = my-server.my-domain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cdb3)
)
)
Connect to the local database to initiate the clone.

export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a database link in the local database, pointing to the remote database.

DROP DATABASE LINK clone_link;

CREATE DATABASE LINK clone_link


CONNECT TO c##remote_clone_user IDENTIFIED BY remote_clone_user USING 'cdb3';

-- Test link.
DESC user_tables@clone_link
Create a new PDB in the local database by cloning the remote PDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file
name conversions.

CREATE PLUGGABLE DATABASE pdb5new FROM pdb5@clone_link;

Pluggable database created.

SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.

COLUMN name FORMAT A30

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5NEW';

NAME OPEN_MODE
------------------------------ ----------
PDB5NEW MOUNTED

SQL>
The PDB is opened in read-write mode to complete the process.

ALTER PLUGGABLE DATABASE pdb5new OPEN;

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5NEW';

NAME OPEN_MODE
------------------------------ ----------
PDB5NEW READ WRITE

SQL>
As with any PDB clone, check that the common users and the temporary tablespace are configured as expected, as shown below.
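A possible sanity check, run from the root container of the local CDB, is sketched below. The PDB name matches this example, so adjust it for your environment; remember the CDB_% views only report on open containers.

COLUMN username FORMAT A30
COLUMN file_name FORMAT A80

SELECT username
FROM   cdb_users
WHERE  common = 'YES'
AND    con_id = (SELECT con_id FROM v$pdbs WHERE name = 'PDB5NEW')
ORDER BY username;

SELECT file_name
FROM   cdb_temp_files
WHERE  con_id = (SELECT con_id FROM v$pdbs WHERE name = 'PDB5NEW');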

Cloning a Remote Non-CDB


Connect to the remote database to prepare it for cloning.

export ORAENV_ASK=NO
export ORACLE_SID=db12c
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a user in the remote database for use with the database link.

CREATE USER remote_clone_user IDENTIFIED BY remote_clone_user;


GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO remote_clone_user;
Check the remote non-CDB is in archivelog mode.

SELECT log_mode
FROM v$database;

LOG_MODE
------------
ARCHIVELOG

SQL>
In Oracle 12.1 we would have switched the remote database to read-only mode before continuing, but this is not necessary in Oracle 12.2 provided the source database is in archivelog mode.

Switch to the local server and create a "tnsnames.ora" entry pointing to the remote database for use in the USING clause of the database link.

DB12C =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = my-server.my-domain)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = db12c)
)
)
Connect to the local database to initiate the clone.

export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a database link in the local database, pointing to the remote database.

DROP DATABASE LINK clone_link;

CREATE DATABASE LINK clone_link


CONNECT TO remote_clone_user IDENTIFIED BY remote_clone_user USING 'db12c';

-- Test link.
DESC user_tables@clone_link
Create a new PDB in the local database by cloning the remote non-CDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for
file name conversions. Since there is no PDB to name, we use "NON$CDB" as the PDB name.

CREATE PLUGGABLE DATABASE db12cpdb FROM NON$CDB@clone_link;

Pluggable database created.

SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.

COLUMN name FORMAT A30

SELECT name, open_mode FROM v$pdbs WHERE name = 'DB12CPDB';

NAME OPEN_MODE
------------------------------ ----------
DB12CPDB MOUNTED

SQL>
Since this PDB was created as a clone of a non-CDB, before it can be opened we need to run the "$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql" script to clean it up.

ALTER SESSION SET CONTAINER=db12cpdb;

@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
The PDB can now be opened in read-write mode.

ALTER PLUGGABLE DATABASE db12cpdb OPEN;

SELECT name, open_mode FROM v$pdbs WHERE name = 'DB12CPDB';

NAME OPEN_MODE
------------------------------ ----------
DB12CPDB READ WRITE

SQL>
As with any PDB clone, check that the common users and the temporary tablespace are configured as expected.

Appendix
These tests were performed on a free trial of the Oracle Database Cloud Service, where the CDB1 instance and PDB1 pluggable database were created as part of the service creation. The additional
instances were built on the same virtual machine using the commands below. I've included the DBCA commands to create and delete the CDB1 instance for completeness. They were not actually used.

# Empty local container (cdb1).


dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb1 -sid cdb1 -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb1 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs

# Remote container (cdb3) with PDB (pdb5).

dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb3 -sid cdb3 -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb5 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs

# Non-CDB instance (db12c).


dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname db12c -sid db12c -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase false \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs

# Delete the instances.


#dbca -silent -deleteDatabase -sourceDB cdb1 -sysDBAUserName sys -sysDBAPassword OraPasswd1
dbca -silent -deleteDatabase -sourceDB cdb3 -sysDBAUserName sys -sysDBAPassword OraPasswd1
dbca -silent -deleteDatabase -sourceDB db12c -sysDBAUserName sys -sysDBAPassword OraPasswd1
As explained earlier, in all cases Oracle Managed Files (OMF) was used so no file name conversions were needed. Also, the source databases were switched to archivelog mode.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba <<EOF

ALTER SYSTEM SET db_create_file_dest = '/u01/app/oracle/oradata';

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

ALTER PLUGGABLE DATABASE pdb5 OPEN;


ALTER PLUGGABLE DATABASE pdb5 SAVE STATE;

EXIT;
EOF

Local Undo Mode in Oracle Database 12c Release 2 (12.2)


In Oracle Database 12c Release 1 all containers in an instance shared the same undo tablespace. In Oracle 12c Release 2 each container in an instance can use its own undo tablespace. This new undo
management mechanism is called local undo mode, whilst that of previous releases is now known as shared undo mode. Local undo mode is the default mode in newly created databases, so you will
probably only need to consider switching undo modes for upgraded instances.

You should switch to local undo mode unless you have a compelling reason not to. Some of the new multitenant features in 12.2 rely on local undo. This article demonstrates how to switch to shared
undo mode, only so you can see the process of switching back to local undo mode.

Switching to Shared Undo Mode


We can display the current undo mode using the following query, which shows we are currently in local undo mode.

COLUMN property_name FORMAT A30


COLUMN property_value FORMAT A30

SELECT property_name, property_value


FROM database_properties
WHERE property_name = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE

SQL>

We also check for the presence of the undo tablespaces for the root container (con_id=1) and user-defined pluggable database (con_id=3).

SELECT con_id, tablespace_name


FROM cdb_tablespaces
WHERE tablespace_name LIKE 'UNDO%'
ORDER BY con_id;

CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1
3 UNDOTBS1

SQL>
The following commands demonstrate how to switch to shared undo mode using the ALTER DATABASE LOCAL UNDO OFF command.

CONN / AS SYSDBA

SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;

ALTER DATABASE LOCAL UNDO OFF;

SHUTDOWN IMMEDIATE;
STARTUP;
Once the instance is restarted we can check the undo mode again and see we are now in shared undo mode.

COLUMN property_name FORMAT A30


COLUMN property_value FORMAT A30

SELECT property_name, property_value


FROM database_properties
WHERE property_name = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED FALSE

SQL>
We still have the local undo tablespace for the user-defined pluggable database (con_id=3), even though the instance will no longer use it.

SELECT con_id, tablespace_name


FROM cdb_tablespaces
WHERE tablespace_name LIKE 'UNDO%'
ORDER BY con_id;

CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1
3 UNDOTBS1

SQL>
For clarity, we should remove it.

ALTER SESSION SET CONTAINER = pdb1;

SELECT file_name
FROM dba_data_files
WHERE tablespace_name = 'UNDOTBS1';

FILE_NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/cdb1/pdb1/undotbs01.dbf

SQL>

DROP TABLESPACE undotbs1;

Tablespace dropped.

SQL>
The instance is now running in shared undo mode, with all old local undo tablespaces removed.

Switching to Local Undo Mode


We display the current undo mode using the following query, which shows we are currently in shared undo mode.

CONN / AS SYSDBA

COLUMN property_name FORMAT A30


COLUMN property_value FORMAT A30

SELECT property_name, property_value


FROM database_properties
WHERE property_name = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED FALSE

SQL>
We also check for the presence of the undo tablespaces and only see that of the root container (con_id=1).

SELECT con_id, tablespace_name


FROM cdb_tablespaces
WHERE tablespace_name LIKE 'UNDO%'
ORDER BY con_id;

CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1

SQL>
The following commands demonstrate how to switch to local undo mode using the ALTER DATABASE LOCAL UNDO ON command.

CONN / AS SYSDBA

SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;

ALTER DATABASE LOCAL UNDO ON;

SHUTDOWN IMMEDIATE;
STARTUP;
Once the instance is restarted we can check the undo mode again and see we are now in local undo mode.

COLUMN property_name FORMAT A30


COLUMN property_value FORMAT A30

SELECT property_name, property_value


FROM database_properties
WHERE property_name = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE

SQL>
When we check for undo tablespaces we see Oracle has created a local undo tablespace for each user-defined pluggable database.

SELECT con_id, tablespace_name


FROM cdb_tablespaces
WHERE tablespace_name LIKE 'UNDO%'
ORDER BY con_id;

CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1
3 UNDO_1

SQL>
If we create a new pluggable database, we can see it is also created with a local undo tablespace.

CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1;


ALTER PLUGGABLE DATABASE pdb2 SAVE STATE;

SELECT con_id, tablespace_name


FROM cdb_tablespaces
WHERE tablespace_name LIKE 'UNDO%'
ORDER BY con_id;

CON_ID TABLESPACE_NAME
---------- ------------------------------
1 UNDOTBS1
3 UNDO_1
4 UNDOTBS1

SQL>

Memory Resource Management for PDBs in Oracle Database 12c Release 2 (12.2)
In the previous release there was no way to control the amount of memory used by an individual PDB. As a result a "noisy neighbour" could use up lots of memory and impact the performance of other
PDBs in the same instance. Oracle Database 12c Release 2 (12.2) allows you to control the amount of memory used by a PDB, making consolidation more reliable.

PDB Memory Parameters


The following parameters can be set at the PDB level.

DB_CACHE_SIZE : The minimum buffer cache size for the PDB.


SHARED_POOL_SIZE : The minimum shared pool size for the PDB.
PGA_AGGREGATE_LIMIT : The maximum PGA size for the PDB.
PGA_AGGREGATE_TARGET : The target PGA size for the PDB.
SGA_MIN_SIZE : The minimum SGA size for the PDB.
SGA_TARGET : The maximum SGA size for the PDB.
There are a number of restrictions regarding what values can be used, which are explained in the documentation. To summarise.

The NONCDB_COMPATIBLE parameter must be set to FALSE in the root container.
The MEMORY_TARGET parameter must be unset, or set to "0", in the root container (both of these settings are checked below).
The individual parameters have a variety of maximum limits to prevent you from over-allocating memory within the PDB and the instance as a whole. If you attempt to set an incorrect value an error will
be produced.
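The first two requirements can be confirmed from the root container with a couple of standard SHOW PARAMETER commands, for example.

CONN / AS SYSDBA

SHOW PARAMETER noncdb_compatible
SHOW PARAMETER memory_target
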
Setting PDB Memory Parameters
The process of setting memory parameters for a PDB is similar to setting regular instance parameters. The example below uses the SGA_TARGET parameter, but the approach is similar for the other
parameters.

Check the current settings for the root container.

CONN / AS SYSDBA
SHOW PARAMETER sga_target;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
sga_target big integer 2544M
SQL>
Check the current settings for the pluggable database.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;

SHOW PARAMETER sga_target;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
sga_target big integer 0
SQL>
Set the SGA_TARGET for the current PDB.

SQL> ALTER SYSTEM SET sga_target=1G SCOPE=BOTH;

System altered.

SQL> SHOW PARAMETER sga_target;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
sga_target big integer 1G
SQL>
Attempt to make the SGA_TARGET too big compared to the value in the root container.

SQL> ALTER SYSTEM SET sga_target=3G SCOPE=BOTH;


ALTER SYSTEM SET sga_target=3G SCOPE=BOTH
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-56747: invalid value 3221225472 for parameter sga_target; must be smaller
than parameter sga_target of the root container

SQL>
The value can be set to "0" or reset if you no longer want to control this parameter.

ALTER SYSTEM SET sga_target=0 SCOPE=BOTH;


ALTER SYSTEM RESET sga_target;
Monitoring Memory Usage for PDBs
Oracle now provides views to monitor the resource (CPU, I/O, parallel execution, memory) usage of PDBs. Each view contains similar information, but for different retention periods.

V$RSRCPDBMETRIC : A single row per PDB, holding the last of the 1 minute samples.
V$RSRCPDBMETRIC_HISTORY : 61 rows per PDB, holding the last 60 minutes worth of samples from the V$RSRCPDBMETRIC view.
DBA_HIST_RSRC_PDB_METRIC : AWR snapshots, retained based on the AWR retention period.
The following queries are examples of their usage.

CONN / AS SYSDBA

SET LINESIZE 150


COLUMN pdb_name FORMAT A10
COLUMN begin_time FORMAT A26
COLUMN end_time FORMAT A26
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
ALTER SESSION SET NLS_TIMESTAMP_FORMAT='DD-MON-YYYY HH24:MI:SS.FF';

-- Last sample per PDB.


SELECT r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.sga_bytes,
r.pga_bytes,
r.buffer_cache_bytes,
r.shared_pool_bytes
FROM v$rsrcpdbmetric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
ORDER BY p.pdb_name;

-- Last hour's samples for PDB1.


SELECT r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.sga_bytes,
r.pga_bytes,
r.buffer_cache_bytes,
r.shared_pool_bytes
FROM v$rsrcpdbmetric_history r,
cdb_pdbs p
WHERE r.con_id = p.con_id
AND p.pdb_name = 'PDB1'
ORDER BY r.begin_time;

-- All AWR snapshot information for PDB1.


SELECT r.snap_id,
r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.sga_bytes,
r.pga_bytes,
r.buffer_cache_bytes,
r.shared_pool_bytes
FROM dba_hist_rsrc_pdb_metric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
AND p.pdb_name = 'PDB1'
ORDER BY r.begin_time;

Parallel PDB Creation Clause in Oracle Database 12c Release 2 (12.2)

From Oracle database 12.2 onward pluggable databases (PDBs) are created in parallel. You have some level of control over the number of parallel execution servers used to copy files during the creation
of a pluggable database (PDB).

Parallel PDB Creation Clause


By default Oracle decides how many parallel execution servers should be used to copy the datafiles from the source (seed or PDB) to the new PDB. You can influence the decision using the PARALLEL
clause in the CREATE PLUGGABLE DATABASE command. This functionality relies on the COMPATIBLE parameter being set to 12.2 or higher.
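If you are unsure of the current setting, you can check the COMPATIBLE parameter from the root container before relying on this clause.

CONN / AS SYSDBA

SHOW PARAMETER compatible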

The databases use Oracle Managed Files (OMF) so we don't need to worry about the FILE_NAME_CONVERT or PDB_FILE_NAME_CONVERT settings.

The following are functionally identical, both letting Oracle decide on the degree of parallelism (DOP).

-- Automatic DOP.
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5;
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5 PARALLEL;
Use an integer to manually specify the DOP. Oracle can choose to ignore this if it doesn't make sense. The DOP is limited by the number of datafiles. If the PDB only has 4 datafiles, a DOP of more than 4
will be limited to 4. A quick way to check the datafile count is shown after the manual DOP example below.

-- Manual DOP.
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5 PARALLEL 8;
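As a rough guide to a sensible DOP, you can count the datafiles in the source PDB from the root container of the source CDB. This is just a sketch using the PDB name from this example; note that CDB_DATA_FILES only reports on open PDBs.

SELECT COUNT(*) AS datafiles
FROM   cdb_data_files
WHERE  con_id = (SELECT con_id FROM v$pdbs WHERE name = 'PDB5');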
To create a PDB serially, use the value "0" or "1".

-- Serial
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5 PARALLEL 0;
CREATE PLUGGABLE DATABASE pdb2 FROM pdb5 PARALLEL 1;
Monitoring Parallel Execution Servers
If you are cloning small PDBs, like the seed, you may struggle to be quick enough to see the parallel execution servers. I used the following query whilst cloning a PDB with 10 datafiles on a system that
had no other load.

SELECT qcsid, qcserial#, sid, serial#


FROM v$px_session
ORDER BY 1,2,3;
The typical output I saw for some tests is shown below.

-- No PARALLEL Clause
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;

QCSID QCSERIAL# SID SERIAL#


---------- ---------- ---------- ----------
4 22070 46 30666
4 22070 50 21167
4 22070 283 61472
4 22070 287 27180
4 4 22070

SQL>

-- PARALLEL
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;

QCSID QCSERIAL# SID SERIAL#


---------- ---------- ---------- ----------
4 22070 40 51220
4 22070 50 38168
4 22070 275 46743
4 22070 283 13753
4 4 22070

SQL>

-- PARALLEL 1
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;

no rows selected

SQL>

-- PARALLEL 2
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;

QCSID QCSERIAL# SID SERIAL#


---------- ---------- ---------- ----------
4 22070 50 18977
4 22070 283 12244
4 4 22070

SQL>

-- PARALLEL 4
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;

QCSID QCSERIAL# SID SERIAL#


---------- ---------- ---------- ----------
4 22070 32 37798
4 22070 34 26140
4 22070 280 12497
4 22070 282 15558
4 4 22070

SQL>

-- PARALLEL 8
SELECT qcsid, qcserial#, sid, serial#
FROM v$px_session
ORDER BY 1,2,3;

QCSID QCSERIAL# SID SERIAL#


---------- ---------- ---------- ----------
4 22070 32 44668
4 22070 40 47045
4 22070 44 53818
4 22070 46 28793
4 22070 275 39609
4 22070 282 14300
4 22070 283 11396
4 22070 287 35723
4 4 22070

SQL>

PDB Archive Files for Unplug and Plugin in Oracle Database 12c Release 2 (12.2)
In Oracle 12.1 a pluggable database could be unplugged to a ".xml" file, which describes the contents of the pluggable database. To move the PDB, you needed to manually move the ".xml" file and all
the relevant database files. In addition to this functionality, Oracle 12.2 allows a PDB to be unplugged to a ".pdb" archive file. The resulting archive file contains the ".xml" file describing the PDB as well
as all the datafiles associated with the PDB. This can simplify the transfer of the files between servers and reduce the chances of human error.

Unplug PDB to ".pdb" Archive File


Before attempting to unplug a PDB, you must make sure it is closed. To unplug the database use the ALTER PLUGGABLE DATABASE command with the UNPLUG INTO clause to specify the location of the
".pdb" archive file.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba

ALTER PLUGGABLE DATABASE pdb5 CLOSE;


ALTER PLUGGABLE DATABASE pdb5 UNPLUG INTO '/u01/pdb5.pdb';
You can see the archive file is now present.

HOST ls -al /u01/pdb5.pdb


-rw-r--r--. 1 oracle oinstall 161702502 Jan 7 21:01 /u01/pdb5.pdb

SQL>
You can now drop the PDB, including its datafiles, as they are all present in the archive file.

DROP PLUGGABLE DATABASE pdb5 INCLUDING DATAFILES;

SELECT name, open_mode


FROM v$pdbs
ORDER BY name;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY

SQL>
Plugin PDB from ".pdb" Archive File
Plugging a PDB into the CDB is similar to creating a new PDB. First check the PDB is compatible with the CDB by calling the DBMS_PDB.CHECK_PLUG_COMPATIBILITY function, passing in the archive file
and the name of the PDB you want to create using it.

SET SERVEROUTPUT ON
DECLARE
l_result BOOLEAN;
BEGIN
l_result := DBMS_PDB.check_plug_compatibility(
pdb_descr_file => '/u01/pdb5.pdb',
pdb_name => 'pdb5');

IF l_result THEN
DBMS_OUTPUT.PUT_LINE('compatible');
ELSE
DBMS_OUTPUT.PUT_LINE('incompatible');
END IF;
END;
/
compatible

PL/SQL procedure successfully completed.

SQL>
If the PDB is not compatible, violations are listed in the PDB_PLUG_IN_VIOLATIONS view. If the PDB is compatible, create a new PDB using the archive file as the source.

CREATE PLUGGABLE DATABASE pdb5 USING '/u01/pdb5.pdb';

ALTER PLUGGABLE DATABASE pdb5 OPEN READ WRITE;

SELECT name, open_mode


FROM v$pdbs
ORDER BY name;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB5 READ WRITE

SQL>
Unplug PDB to ".xml" File
Before attempting to unplug a PDB, you must make sure it is closed. To unplug the database use the ALTER PLUGGABLE DATABASE command with the UNPLUG INTO clause to specify the location of the
XML metadata file.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba

ALTER PLUGGABLE DATABASE pdb5 CLOSE;


ALTER PLUGGABLE DATABASE pdb5 UNPLUG INTO '/u01/pdb5.xml';
The pluggable database is still present, but you shouldn't open it until the metadata file and all the datafiles are copied somewhere safe. A query to list those datafiles is shown after the output below.

COLUMN name FORMAT A30

SELECT name, open_mode
FROM v$pdbs
ORDER BY name;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB5 MOUNTED

SQL>
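One way to list the datafiles that need to be copied along with the XML file is to query V$DATAFILE from the root container. This is a sketch using the PDB name from this example.

SELECT name
FROM   v$datafile
WHERE  con_id = (SELECT con_id FROM v$pdbs WHERE name = 'PDB5');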
You can delete the PDB, choosing to keep the files on the file system.

DROP PLUGGABLE DATABASE pdb5 KEEP DATAFILES;

SELECT name, open_mode


FROM v$pdbs
ORDER BY name;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY

SQL>
Plugin PDB from ".xml" File
First check the PDB is compatible with the CDB by calling the DBMS_PDB.CHECK_PLUG_COMPATIBILITY function, passing in the XML metadata file and the name of the PDB you want to create using it.

SET SERVEROUTPUT ON
DECLARE
l_result BOOLEAN;
BEGIN
l_result := DBMS_PDB.check_plug_compatibility(
pdb_descr_file => '/u01/pdb5.xml',
pdb_name => 'pdb5');

IF l_result THEN
DBMS_OUTPUT.PUT_LINE('compatible');
ELSE
DBMS_OUTPUT.PUT_LINE('incompatible');
END IF;
END;
/
compatible

PL/SQL procedure successfully completed.

SQL>
If the PDB is not compatible, violations are listed in the PDB_PLUG_IN_VIOLATIONS view. If the PDB is compatible, create a new PDB using it as the source. If we were creating it with a new name we
might do something like this.

CREATE PLUGGABLE DATABASE pdb2 USING '/u01/pdb5.xml'


FILE_NAME_CONVERT=('/u02/app/oracle/oradata/cdb3/pdb5/','/u02/app/oracle/oradata/cdb3/pdb2/');
Instead, we want to plug the database back into the same container, so we don't need to copy the files or recreate the temp file. We can do the following.

CREATE PLUGGABLE DATABASE pdb5 USING '/u01/pdb5.xml'


NOCOPY
TEMPFILE REUSE;

ALTER PLUGGABLE DATABASE pdb5 OPEN READ WRITE;

SELECT name, open_mode


FROM v$pdbs
ORDER BY name;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB5 READ WRITE

SQL>

PDB CONTAINERS Clause in Oracle Database 12c (12.1.0.2 and 12.2)

Setup
We need to create 3 PDBs to test the CONTAINERS clause. The setup code below does the following.

Creates a pluggable database called PDB1.


Creates a local user in PDB1 called LOCAL_USER that owns a populated table called LOCAL_USER_TAB.
Creates two clones of PDB1 called PDB2 and PDB3.
These examples use Oracle Managed Files (OMF). If you are not using OMF you will need to handle the file conversions manually using the FILE_NAME_CONVERT clause or the
PDB_FILE_NAME_CONVERT parameter.

CONN / AS SYSDBA

-- Create a pluggable database


CREATE PLUGGABLE DATABASE pdb1
ADMIN USER pdb_admin IDENTIFIED BY Password1
DEFAULT TABLESPACE users DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;

ALTER PLUGGABLE DATABASE pdb1 OPEN;

ALTER SESSION SET CONTAINER = pdb1;

-- Create a local user.


CREATE USER local_user IDENTIFIED BY Local1 QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE TO local_user;

CREATE TABLE local_user.local_user_tab AS


SELECT level AS ID
FROM dual
CONNECT BY level <= 2;

CONN / AS SYSDBA

CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;


ALTER PLUGGABLE DATABASE pdb2 OPEN;
CREATE PLUGGABLE DATABASE pdb3 FROM pdb1;
ALTER PLUGGABLE DATABASE pdb3 OPEN;
The next part of the setup does the following.

Creates a common user called C##COMMON_USER that owns an empty table called COMMON_USER_TAB in the root container.
Creates a populated version of the COMMON_USER_TAB table owned by the C##COMMON_USER user in each PDB.
Grants select privilege on the local user's table to the common user.
-- Create a common user that owns an empty table.
CONN / AS SYSDBA
CREATE USER c##common_user IDENTIFIED BY Common1 QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SYNONYM TO c##common_user CONTAINER=ALL;

-- Create a table in the common user for each container.


-- Don't populate the one in the root container.
CONN c##common_user/Common1
CREATE TABLE c##common_user.common_user_tab (id NUMBER);

CONN c##common_user/Common1@pdb1

CREATE TABLE c##common_user.common_user_tab AS


SELECT level AS ID
FROM dual
CONNECT BY level <= 2;

CONN c##common_user/Common1@pdb2

CREATE TABLE c##common_user.common_user_tab AS


SELECT level AS ID
FROM dual
CONNECT BY level <= 2;

CONN c##common_user/Common1@pdb3

CREATE TABLE c##common_user.common_user_tab AS


SELECT level AS ID
FROM dual
CONNECT BY level <= 2;

-- Grant select on the local user's table to the common user.


CONN local_user/Local1@pdb1
GRANT SELECT ON local_user_tab TO c##common_user;

CONN local_user/Local1@pdb2
GRANT SELECT ON local_user_tab TO c##common_user;

CONN local_user/Local1@pdb3
GRANT SELECT ON local_user_tab TO c##common_user;

CONN / AS SYSDBA
CONTAINERS Clause with Common Users
The CONTAINERS clause can only be used from a common user in the root container. With no additional changes we can query the COMMON_USER_TAB tables present in the common user in all the
containers. The most basic use of the CONTAINERS clause is shown below.

CONN c##common_user/Common1

SELECT *
FROM CONTAINERS(common_user_tab);

ID CON_ID
---------- ----------
1 4
2 4
1 5
2 5
1 3
2 3

6 rows selected.

SQL>
Notice the CON_ID column has been added to the column list, to indicate which container the result came from. This allows us to query a subset of the containers.

SELECT con_id, id
FROM CONTAINERS(common_user_tab)
WHERE con_id IN (3, 4)
ORDER BY con_id, id;

CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2

4 rows selected.

SQL>
CONTAINERS Clause with Local Users
To query tables and views from local users, the documentation suggests you must create views on them from a common user. The following code creates views against the LOCAL_USER_TAB tables
created earlier. We must also create a table in the root container with the same name as the views.

CONN c##common_user/Common1
CREATE TABLE c##common_user.local_user_tab_v (id NUMBER);

CONN c##common_user/Common1@pdb1
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;

CONN c##common_user/Common1@pdb2
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;

CONN c##common_user/Common1@pdb3
CREATE VIEW c##common_user.local_user_tab_v AS
SELECT * FROM local_user.local_user_tab;
With the blank table and views in place we can now use the CONTAINERS clause indirectly against the local user objects.

CONN c##common_user/Common1

SELECT con_id, id
FROM CONTAINERS(local_user_tab_v)
ORDER BY con_id, id;

CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
5 1
5 2

6 rows selected.

SQL>
The documentation suggests the use of synonyms in place of views will not work, since the synonyms must resolve to objects owned by the common user issuing the query.

"When a synonym is specified in the CONTAINERS clause, the synonym must resolve to a table or a view owned by the common user issuing the statement."
That's not quite true from my tests, but it doesn't stop you from using synonyms to local objects in the PDBs, provided the object in the root container is not a synonym. The following example uses a
real object in the root container, and local objects via synonyms in the pluggable databases.

CONN c##common_user/Common1
CREATE TABLE c##common_user.local_user_tab_syn (id NUMBER);

CONN c##common_user/Common1@pdb1
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;

CONN c##common_user/Common1@pdb2
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;

CONN c##common_user/Common1@pdb3
DROP TABLE c##common_user.common_user_tab;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR local_user.local_user_tab;

CONN c##common_user/Common1

SELECT con_id, id
FROM CONTAINERS(local_user_tab_syn)
ORDER BY con_id, id;

CON_ID ID
---------- ----------
3 1
3 2
4 1
4 2
5 1
5 2

6 rows selected.

SQL>
Let's see what happens if we drop the common user table and replace it with a synonym of the same name, pointing to a table of the same structure as the local tables, but owned by the common user.

CONN c##common_user/Common1

DROP TABLE c##common_user.local_user_tab_syn PURGE;

CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;

SELECT *
FROM CONTAINERS(local_user_tab_syn);

SELECT *
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P004
ORA-00942: table or view does not exist

SQL>
If the synonyms consistently point to an object in the common user it still doesn't work.

CONN c##common_user/Common1@pdb1
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER

SQL>

CONN c##common_user/Common1@pdb2
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER

SQL>

CONN c##common_user/Common1@pdb3
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER

SQL>

CONN c##common_user/Common1
DROP SYNONYM c##common_user.local_user_tab_syn;
CREATE SYNONYM c##common_user.local_user_tab_syn FOR c##common_user.common_user_tab;
DESC local_user_tab_syn;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NUMBER

SQL>

SELECT con_id, id
FROM CONTAINERS(local_user_tab_syn)
ORDER BY con_id, id;

FROM CONTAINERS(local_user_tab_syn)
*
ERROR at line 2:
ORA-00942: table or view does not exist

SQL>
I'm not sure what the wording in the documentation means, but it doesn't read well to me.

CONTAINERS Hint (12.2)


Oracle database 12.2 introduced the CONTAINERS hint, allowing you an element of control over the recursive SQL statements executed as a result of using the CONTAINERS clause.

The hint is placed in the select list as usual, with the basic syntax as follows. Substitute the hint you want in place of "<<PUT-HINT-HERE>>".

/*+ CONTAINERS(DEFAULT_PDB_HINT='<<PUT-HINT-HERE>>') */
As an example, we will run a query against the ALL_OBJECTS view and check the elapsed time.

CONN / AS SYSDBA

SET TIMING ON

SELECT con_id, MAX(object_id)


FROM CONTAINERS(all_objects)
GROUP BY con_id
ORDER BY 1;

CON_ID MAX(OBJECT_ID)
---------- --------------
1 75316
3 73209
4 73330
5 73323

Elapsed: 00:00:00.31
SQL>
We repeat the query, but this time add a PARALLEL(2) hint to the recursive queries run in each PDB, which should make the elapsed time longer on this small VM.

SELECT /*+ CONTAINERS(DEFAULT_PDB_HINT='PARALLEL(2)') */


con_id, MAX(object_id)
FROM CONTAINERS(all_objects)
GROUP BY con_id
ORDER BY 1;

CON_ID MAX(OBJECT_ID)
---------- --------------
1 75316
3 73209
4 73340
5 73323

Elapsed: 00:00:06.17
SQL>
Notice the significantly longer elapsed time as a result of the parallel operations in the recursive SQL.

Clean Up
You can clean up all the pluggable databases and the common user created for these examples using the following script.

-- !!! Double-check you really need to do all these steps !!!

CONN / AS SYSDBA
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb2 CLOSE;
ALTER PLUGGABLE DATABASE pdb3 CLOSE;
DROP PLUGGABLE DATABASE pdb1 INCLUDING DATAFILES;
DROP PLUGGABLE DATABASE pdb2 INCLUDING DATAFILES;
DROP PLUGGABLE DATABASE pdb3 INCLUDING DATAFILES;

DROP USER c##common_user CASCADE;

PDB Lockdown Profiles in Oracle Database 12c Release 2 (12.2)

A PDB lockdown profile allows you to restrict the operations and functionality available from within a PDB. This can be very useful from a security perspective, giving the PDBs a greater degree of
separation and allowing different people to manage each PDB, without compromising the security of other PDBs within the same instance.

Basic Commands
The basic process of creating, enabling, disabling and dropping a lockdown profile is relatively simple. The user administering the PDB lockdown profiles described here will need the CREATE LOCKDOWN
PROFILE and DROP LOCKDOWN PROFILE system privileges. In these examples we will perform all these operations as the SYS user.

In the following example we create two PDB lockdown profiles in the root container. One will be used as the system default and one for a specific PDB.

CONN / AS SYSDBA

CREATE LOCKDOWN PROFILE default_pdb_lockdown;


CREATE LOCKDOWN PROFILE pdb1_specfic_lockdown;
We need to add some restrictions, but for the moment we'll keep this simple.

ALTER LOCKDOWN PROFILE default_pdb_lockdown DISABLE FEATURE = ('NETWORK_ACCESS');


ALTER LOCKDOWN PROFILE pdb1_specfic_lockdown DISABLE FEATURE = ('NETWORK_ACCESS', 'OS_ACCESS');
We set the PDB_LOCKDOWN parameter in the root container to set a default lockdown profile for all PDBs.

ALTER SYSTEM SET PDB_LOCKDOWN = default_pdb_lockdown;


We can see this setting is in place at the PDB level, but we can also override it by setting a PDB-specific lockdown profile.

ALTER SESSION SET CONTAINER = pdb1;
SHOW PARAMETER PDB_LOCKDOWN;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
pdb_lockdown string DEFAULT_PDB_LOCKDOWN
SQL>

ALTER SYSTEM SET PDB_LOCKDOWN = pdb1_specfic_lockdown;


SHOW PARAMETER PDB_LOCKDOWN;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
pdb_lockdown string PDB1_SPECFIC_LOCKDOWN
SQL>
We can reset the value of the PDB_LOCKDOWN parameter at the PDB level to return to the default lockdown profile. The change doesn't appear to be visible until the PDB is restarted.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
ALTER SYSTEM RESET PDB_LOCKDOWN;

SHOW PARAMETER PDB_LOCKDOWN;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
pdb_lockdown string PDB1_SPECFIC_LOCKDOWN
SQL>

-- Restart PDB.
SHUTDOWN IMMEDIATE;
STARTUP;

SHOW PARAMETER PDB_LOCKDOWN;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
pdb_lockdown string DEFAULT_PDB_LOCKDOWN
SQL>
Resetting the PDB_LOCKDOWN parameter in the root container disables the default lockdown profile. Once again, the change doesn't seem to take effect until the instance is restarted.

CONN / AS SYSDBA
ALTER SYSTEM RESET PDB_LOCKDOWN;

SHOW PARAMETER PDB_LOCKDOWN;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
pdb_lockdown string DEFAULT_PDB_LOCKDOWN
SQL>

-- Restart the instance.


SHUTDOWN IMMEDIATE;
STARTUP;

SHOW PARAMETER PDB_LOCKDOWN;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
pdb_lockdown string
SQL>
PDB lockdown profiles are dropped as follows. If the instance or any PDBs reference them they will still be dropped, and the lockdown profile will no longer be active, but the PDB_LOCKDOWN
parameter will not be reset automatically.

CONN / AS SYSDBA

DROP LOCKDOWN PROFILE default_pdb_lockdown;


DROP LOCKDOWN PROFILE pdb1_specfic_lockdown;
Information about PDB lockdown profiles can be displayed using the DBA_LOCKDOWN_PROFILES view. You can use variations on the following query to check the impact of some of the commands used
in this article. You may want to alter the format of the columns, depending on what you are trying to display.

SET LINESIZE 200

COLUMN profile_name FORMAT A30


COLUMN rule_type FORMAT A20
COLUMN rule FORMAT A20
COLUMN clause FORMAT A20
COLUMN clause_option FORMAT A20
COLUMN option_value FORMAT A20
COLUMN min_value FORMAT A20
COLUMN max_value FORMAT A20
COLUMN list FORMAT A20

SELECT profile_name,
rule_type,
rule,
clause,
clause_option,
option_value,
min_value,
max_value,
list,
status
FROM dba_lockdown_profiles
ORDER BY 1;
The database comes with three default PDB lockdown profiles called PRIVATE_DBAAS, PUBLIC_DBAAS and SAAS. These are empty profiles, containing no restrictions, which you can tailor to suit your
own needs if you so wish.

The remainder of the article will discuss the types of restrictions available when planning a PDB lockdown profile. All the commands below reference a profile called MY_PROFILE, which can be created
and dropped using the following commands.

CREATE LOCKDOWN PROFILE my_profile;


DROP LOCKDOWN PROFILE my_profile;
Lockdown Options
In Oracle 12c Release 2 (12.2) there are only two options (DATABASE QUEUING, PARTITIONING) that can be enabled or disabled in a lockdown profile, but this may increase in future.

Having no specific option restrictions in place is the equivalent of using the following.

ALTER LOCKDOWN PROFILE my_profile ENABLE OPTION ALL;


Here are some examples of enabling or disabling options.

-- Enable.
ALTER LOCKDOWN PROFILE my_profile ENABLE OPTION = ('DATABASE QUEUING');
ALTER LOCKDOWN PROFILE my_profile ENABLE OPTION = ('PARTITIONING');
ALTER LOCKDOWN PROFILE my_profile ENABLE OPTION ALL;
ALTER LOCKDOWN PROFILE my_profile ENABLE OPTION ALL EXCEPT = ('PARTITIONING');

-- Disable.
ALTER LOCKDOWN PROFILE my_profile DISABLE OPTION = ('DATABASE QUEUING');
ALTER LOCKDOWN PROFILE my_profile DISABLE OPTION = ('PARTITIONING');
ALTER LOCKDOWN PROFILE my_profile DISABLE OPTION ALL;
ALTER LOCKDOWN PROFILE my_profile DISABLE OPTION ALL EXCEPT = ('DATABASE QUEUING','PARTITIONING');
Using ALL EXCEPT doesn't really make sense with only two options available, but it will be useful if more options are added in future.

Lockdown Features
Features can be enabled or disabled individually, or in groups known as feature bundles. The feature bundles and their individual features are listed in the ALTER LOCKDOWN PROFILE documentation.

Having no specific feature restrictions in place is the equivalent of using the following.

ALTER LOCKDOWN PROFILE my_profile ENABLE FEATURE ALL;


Here are some examples of enabling or disabling feature bundles and features.

-- Enable/disable one or more features.


ALTER LOCKDOWN PROFILE my_profile ENABLE FEATURE = ('UTL_HTTP');
ALTER LOCKDOWN PROFILE my_profile DISABLE FEATURE = ('UTL_HTTP', 'UTL_SMTP');

-- Enable/disable one or more feature bundles.


ALTER LOCKDOWN PROFILE my_profile ENABLE FEATURE = ('NETWORK_ACCESS');
ALTER LOCKDOWN PROFILE my_profile DISABLE FEATURE = ('NETWORK_ACCESS', 'OS_ACCESS');

-- Enable/disable all features.


ALTER LOCKDOWN PROFILE my_profile ENABLE FEATURE ALL;
ALTER LOCKDOWN PROFILE my_profile DISABLE FEATURE ALL;

-- Enable/disable all features with bundle and/or feature exceptions.


ALTER LOCKDOWN PROFILE my_profile ENABLE FEATURE ALL EXCEPT = ('NETWORK_ACCESS');
ALTER LOCKDOWN PROFILE my_profile DISABLE FEATURE ALL EXCEPT = ('OS_ACCESS', 'UTL_HTTP', 'UTL_SMTP');
Lockdown Statements
At present four ALTER statements (ALTER DATABASE, ALTER PLUGGABLE DATABASE, ALTER SESSION, ALTER SYSTEM) can be restricted using a PDB lockdown profile.

The following examples show how to enable or disable entire commands or groups of them using ALL and ALL EXCEPT.

ALTER LOCKDOWN PROFILE my_profile ENABLE STATEMENT = ('ALTER DATABASE', 'ALTER PLUGGABLE DATABASE');
ALTER LOCKDOWN PROFILE my_profile DISABLE STATEMENT = ('ALTER DATABASE', 'ALTER PLUGGABLE DATABASE');

ALTER LOCKDOWN PROFILE my_profile ENABLE STATEMENT ALL EXCEPT = ('ALTER DATABASE', 'ALTER PLUGGABLE DATABASE');
ALTER LOCKDOWN PROFILE my_profile DISABLE STATEMENT ALL EXCEPT = ('ALTER DATABASE', 'ALTER PLUGGABLE DATABASE');
The scope of the restriction can be reduced using the CLAUSE, OPTION, MINVALUE, MAXVALUE options and values.

ALTER LOCKDOWN PROFILE my_profile DISABLE STATEMENT = ('ALTER PLUGGABLE DATABASE')


CLAUSE = ('DEFAULT TABLESPACE', 'DEFAULT TEMPORARY TABLESPACE');

ALTER LOCKDOWN PROFILE my_profile DISABLE STATEMENT = ('ALTER SYSTEM')


CLAUSE ALL EXCEPT = ('FLUSH SHARED_POOL');

-- Can't set CPU_COUNT higher than 1.


ALTER LOCKDOWN PROFILE my_profile DISABLE STATEMENT = ('ALTER SYSTEM')
CLAUSE = ('SET') OPTION = ('CPU_COUNT') MAXVALUE = '1';

-- Can only set CPU_COUNT to values 1, 2 or 3.
ALTER LOCKDOWN PROFILE my_profile DISABLE STATEMENT = ('ALTER SYSTEM')
CLAUSE = ('SET') OPTION = ('CPU_COUNT') MINVALUE = '1' MAXVALUE = '3';
The ALTER LOCKDOWN PROFILE documentation describes the available syntax.

Considerations
It should be obvious from the examples in the Basic Commands section that there is a flaw in this mechanism if you define poor lockdown profiles.

Imagine a scenario where you have a highly restrictive lockdown profile for one PDB, but a less restrictive default lockdown profile. If you don't restrict the ability to modify the PDB_LOCKDOWN
parameter in the PDB with the highly restrictive profile, what's to stop the PDB administrator from resetting the PDB-level parameter and reverting to the less restrictive default lockdown profile?
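One possible mitigation, sketched below using the statement restriction syntax shown earlier and the PDB1_SPECFIC_LOCKDOWN profile from the previous examples, is to block ALTER SYSTEM changes to the PDB_LOCKDOWN parameter itself from within the restrictive profile. Test this in your own environment before relying on it.

ALTER LOCKDOWN PROFILE pdb1_specfic_lockdown DISABLE STATEMENT = ('ALTER SYSTEM')
  CLAUSE = ('SET') OPTION = ('PDB_LOCKDOWN');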

If you are planning to use a variety of PDB lockdown profiles in a single instance, you need to define your lockdown profiles very carefully to prevent this type of mistake. This is a classic case of garbage-
in, garbage-out.

Option, feature and statement restrictions can be combined into a single PDB lockdown profile.

Whilst testing it's easy to get yourself into a bit of a mess. Remember, you can always switch back to the root container and drop the problematic lockdown profile and start again.

Pluggable Database (PDB) Operating System (OS) Credentials in Oracle Database 12c Release 2 (12.2)

There are a number of database features that require access to the operating system, for example external jobs without explicit credentials, PL/SQL library executions and preprocessor executions for
external tables. By default these run using the Oracle software owner on the operating system, which is a highly privileged user and represents a security risk if you are trying to consolidate multiple
systems into a single container.

Oracle 12.2 allows you to assign a different default operating system (OS) credential to each pluggable database (PDB), giving a greater degree of separation between the pluggable databases and
therefore better control over security.

Create Operating System (OS) Users


In this example we will define a separate group and user for the CDB and each PDB using the following commands, run as the "root" user.

# groupadd -g 2000 cdb1_user


# useradd -g cdb1_user -u 2000 cdb1_user
# id cdb1_user
uid=2000(cdb1_user) gid=2000(cdb1_user) groups=2000(cdb1_user)
# passwd cdb1_user

# groupadd -g 1001 pdb1_user


# useradd -g pdb1_user -u 1001 pdb1_user
# id pdb1_user
uid=1001(pdb1_user) gid=1001(pdb1_user) groups=1001(pdb1_user)
# passwd pdb1_user

# groupadd -g 1002 pdb2_user


# useradd -g pdb2_user -u 1002 pdb2_user
# id pdb2_user
uid=1002(pdb2_user) gid=1002(pdb2_user) groups=1002(pdb2_user)
# passwd pdb2_user

# groupadd -g 1003 pdb3_user


# useradd -g pdb3_user -u 1003 pdb3_user
# id pdb3_user
uid=1003(pdb3_user) gid=1003(pdb3_user) groups=1003(pdb3_user)
# passwd pdb3_user
The CDB credential is used as the default OS user for all PDBs if they don't have a PDB-specific credential set.

Create Credentials (DBMS_CREDENTIAL)


Create the relevant database credential for each container using the DBMS_CREDENTIAL package.

CONN / AS SYSDBA

-- CDB Credential (Default for all PDBs)


BEGIN
DBMS_CREDENTIAL.create_credential(
credential_name => 'cdb1_user_cred',
username => 'cdb1_user',
password => 'cdb1_user');
END;
/

-- PDB1 Credential
BEGIN
DBMS_CREDENTIAL.create_credential(
credential_name => 'pdb1_user_cred',
username => 'pdb1_user',
password => 'pdb1_user');
END;
/

-- PDB2 Credential
BEGIN
DBMS_CREDENTIAL.create_credential(
credential_name => 'pdb2_user_cred',
username => 'pdb2_user',
password => 'pdb2_user');
END;
/

-- PDB3 Credential
BEGIN
DBMS_CREDENTIAL.create_credential(
credential_name => 'pdb3_user_cred',
username => 'pdb3_user',
password => 'pdb3_user');
END;
/
Check the credentials are all present and owned by the root container using the CDB_CREDENTIALS view.

COLUMN owner FORMAT A30


COLUMN credential_name FORMAT A30

SELECT con_id, owner, credential_name


FROM cdb_credentials
ORDER BY 1, 2, 3;

CON_ID OWNER CREDENTIAL_NAME


---------- ------------------------------ ------------------------------
1 SYS CDB1_USER_CRED
1 SYS PDB1_USER_CRED
1 SYS PDB2_USER_CRED
1 SYS PDB3_USER_CRED

SQL>
Assign Credentials (PDB_OS_CREDENTIAL)
The PDB_OS_CREDENTIAL initialization parameter is used to define the default OS credential for the container. When this is set in the root container, it defines the default OS credential for all PDBs.
Setting it at the PDB level overrides the CDB default setting.

The documentation suggests you should be able to set the parameter in the root container as follows.

CONN / AS SYSDBA
ALTER SYSTEM SET PDB_OS_CREDENTIAL=cdb1_user_cred SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
If you try that, you get the following error from the ALTER SYSTEM command.

ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65046: operation not allowed from outside a pluggable database
Instead, I had to do the following.

CONN / AS SYSDBA
SHUTDOWN IMMEDIATE;
CREATE PFILE='/tmp/pfile.txt' FROM SPFILE;
HOST echo "*.pdb_os_credential=cdb1_user_cred" >> /tmp/pfile.txt
CREATE SPFILE FROM PFILE='/tmp/pfile.txt';
STARTUP;
SHOW PARAMETER PDB_OS_CREDENTIAL

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
pdb_os_credential string cdb1_user_cred
SQL>
The credential was then visible from all the PDBs when using the SHOW PARAMETER PDB_OS_CREDENTIAL command.

With the default in place we can set the PDB-specific credentials as follows.

CONN / AS SYSDBA

-- PDB1 Credential
ALTER SESSION SET CONTAINER=pdb1;
ALTER SYSTEM SET PDB_OS_CREDENTIAL=pdb1_user_cred SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
SHOW PARAMETER PDB_OS_CREDENTIAL

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
pdb_os_credential string PDB1_USER_CRED
SQL>

-- PDB2 Credential
ALTER SESSION SET CONTAINER=pdb2;
ALTER SYSTEM SET PDB_OS_CREDENTIAL=pdb2_user_cred SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
SHOW PARAMETER PDB_OS_CREDENTIAL

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pdb_os_credential string PDB2_USER_CRED
SQL>

-- PDB3 Credential
ALTER SESSION SET CONTAINER=pdb3;
ALTER SYSTEM SET PDB_OS_CREDENTIAL=pdb3_user_cred SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
SHOW PARAMETER PDB_OS_CREDENTIAL

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
pdb_os_credential string PDB3_USER_CRED
SQL>
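If you want to confirm the per-container settings in a single query, rather than switching containers and running SHOW PARAMETER each time, something like the following can be run from the root container. This is only a sketch; it assumes the V$SYSTEM_PARAMETER view reports PDB-level overrides against the owning CON_ID, which is my understanding of its behaviour.

CONN / AS SYSDBA

COLUMN value FORMAT A30

SELECT con_id, value
FROM   v$system_parameter
WHERE  name = 'pdb_os_credential'
ORDER BY con_id;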

PDBs With Different Character Sets to the CDB in Oracle Database 12c Release 2 (12.2)

In the previous release the character set for the root container and all pluggable databases associated with it had to be the same. This could limit the movement of PDBs and make consolidation difficult
where a non-standard character set was required.

In Oracle Database 12c Release 2 (12.2) a PDB can use a different character set to the CDB, provided the character set of the CDB is AL32UTF8, which is now the default character set when using the
Database Configuration Assistant (DBCA).

Check the Destination CDB Character Set


Connect to the destination root container and run the following query to display the default character set of the database.

CONN / AS SYSDBA

COLUMN parameter FORMAT A30


COLUMN value FORMAT A30

SELECT *
FROM nls_database_parameters
WHERE parameter = 'NLS_CHARACTERSET';

PARAMETER VALUE
------------------------------ ------------------------------
NLS_CHARACTERSET AL32UTF8

SQL>
We can see the default character set of the root container is AL32UTF8, which means it can hold PDBs with different character sets.

Create a Source CDB and PDB


First we must create a CDB with the WE8ISO8859P1 character set so we have a suitable source CDB and PDB. The following command creates a CDB called cdb3 with a PDB called pdb5.

dbca -silent -createDatabase \


-templateName General_Purpose.dbc \
-gdbname cdb3 -sid cdb3 -responseFile NO_VALUE \
-characterSet WE8ISO8859P1 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb5 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u02/app/oracle/oradata/" \
-redoLogFileSize 50 \
-emConfiguration NONE \
-ignorePreReqs
We make the source CDB use Oracle Managed Files (OMF) and switch it to archivelog mode.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba <<EOF

ALTER SYSTEM SET db_create_file_dest = '/u02/app/oracle/oradata';

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

ALTER PLUGGABLE DATABASE pdb5 OPEN;


ALTER PLUGGABLE DATABASE pdb5 SAVE STATE;

EXIT;
EOF
Hot Clone the Source PDB
To prove we can house a database of a different character set in our destination CDB, we will be doing a hot clone. The setup required for this is described in the following article.

Multitenant : Hot Clone a Remote PDB or Non-CDB in Oracle Database 12c Release 2 (12.2)
Once you've completed the setup, you can perform a regular hot clone. Connect to the destination CDB.

export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Clone the source PDB (pdb5) to create the destination PDB (pdb5new).

CREATE PLUGGABLE DATABASE pdb5new FROM pdb5@clone_link;

SHOW PDBS

CON_ID CON_NAME OPEN MODE RESTRICTED


---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB5NEW MOUNTED
SQL>
Open the PDB for the first time.

ALTER PLUGGABLE DATABASE pdb5new OPEN;

SHOW PDBS

CON_ID CON_NAME OPEN MODE RESTRICTED


---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB5NEW READ WRITE NO
SQL>
If you have any problems, check the PDB_PLUG_IN_VIOLATIONS view. When I first wrote this article against an instance on Oracle Cloud I did not see any violations. On an on-premises 12.2.0.1 installation I see the
following Unicode violation, but it doesn't stop the new PDB from working.

SET LINESIZE 200

COLUMN time FORMAT A30


COLUMN name FORMAT A30
COLUMN cause FORMAT A30
COLUMN message FORMAT A30

SELECT time, name, cause, message


FROM pdb_plug_in_violations
WHERE time > TRUNC(SYSTIMESTAMP)
ORDER BY time;

TIME NAME CAUSE MESSAGE


------------------------------ ------------------------------ ------------------------------ ------------------------------
12-SEP-17 15.55.16.636705 PDB5NEW Parameter CDB parameter pga_aggregate_ta
rget mismatch: Previous 512M C
urrent 384M

12-SEP-17 15.55.16.637023 PDB5NEW PDB not Unicode Character set mismatch: PDB ch
aracter set WE8ISO8859P1. CDB
character set AL32UTF8.

SQL>
Check the Destination PDB
Compare the character set of the CDB and the new pluggable database.

CONN / AS SYSDBA

COLUMN parameter FORMAT A30


COLUMN value FORMAT A30

SELECT *
FROM nls_database_parameters
WHERE parameter = 'NLS_CHARACTERSET';

PARAMETER VALUE
------------------------------ ------------------------------
NLS_CHARACTERSET AL32UTF8

SQL>

ALTER SESSION SET CONTAINER=pdb5new;

COLUMN parameter FORMAT A30
COLUMN value FORMAT A30

SELECT *
FROM nls_database_parameters
WHERE parameter = 'NLS_CHARACTERSET';

PARAMETER VALUE
------------------------------ ------------------------------
NLS_CHARACTERSET WE8ISO8859P1

SQL>
We can see we have a pluggable database with a different character set to that of the root container.
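If you want to compare the character sets of all containers in one pass, a query using the CONTAINERS clause can be run from the root. This is a sketch only; it assumes the CONTAINERS clause can be applied to the NLS_DATABASE_PARAMETERS view, and it will only report containers that are open.

CONN / AS SYSDBA

COLUMN value FORMAT A30

SELECT con_id, value
FROM   CONTAINERS(nls_database_parameters)
WHERE  parameter = 'NLS_CHARACTERSET'
ORDER BY con_id;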

Miscellaneous
The root container must use the AL32UTF8 character set if you need it to hold PDBs with differing character sets.
The character set and national character set of an application container and all its application PDBs must match.
New PDBs, cloned from the seed database, always match the CDB character set. There is no way to create a new PDB with a different character set directly. You can use Database Migration Assistant for
Unicode (DMU) to convert the character set of a PDB.
As seen in this article, cloning can be used to create a PDB with a different character set, as can unplug/plugin.
LogMiner supports PDBs with different character sets compared to their CDB.
Data Guard supports PDBs with different character sets compared to their CDB for rolling upgrades.

PDB Refresh in Oracle Database 12c Release 2 (12.2)

From Oracle Database 12.2 onward it is possible to refresh the contents of a remotely hot cloned PDB, provided it is created as a refreshable PDB and has only ever been opened in read-only mode. The
read-only PDB can be used for reporting purposes, or as the source for other clones, to minimise the impact on a production system when multiple up-to-date clones are required on a regular basis.

Prerequisites
In this context, the word "local" refers to the destination or target CDB that will house the cloned PDB. The word "remote" refers to the PDB that is the source of the clone.

The prerequisites for a PDB refresh are similar to those of a hot remote clone, so you should be confident with that before continuing. You can read about it in this article.

Multitenant : Hot Clone a Remote PDB or Non-CDB in Oracle Database 12c Release 2 (12.2)
In addition to the prerequisites for hot remote cloning, we must also consider the following.

A refreshable PDB must be in a separate CDB to its source, so it must be a remote clone.
You can change a refreshable PDB to a non-refreshable PDB, but not vice versa.
If the source PDB is not available over a DB link, the archived redo logs can be read from a location specified by the optional REMOTE_RECOVERY_FILE_DEST parameter.
New datafiles added to the source PDB are automatically created on the destination PDB. The PDB_FILE_NAME_CONVERT parameter must be specified to allow the conversion to take place.
In the examples below I have two databases running on the same virtual machine, but they could be running on separate physical or virtual servers.

cdb1 : The local database that will eventually house the refreshable clone.
cdb3 : The remote CDB, used for the source PDB (pdb5).
Create a Refreshable PDB
Remember, you must have completed all the preparations for a hot remote clone described in the linked article before going forward.

Connect to the local database to initiate the clone.

export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a new PDB in the local database by cloning the remote PDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file name conversions. We are also using manual refresh mode.

CREATE PLUGGABLE DATABASE pdb5_ro FROM pdb5@clone_link


REFRESH MODE MANUAL;

Pluggable database created.

SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.

COLUMN name FORMAT A30

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5_RO';

NAME OPEN_MODE
------------------------------ ----------
PDB5_RO MOUNTED

SQL>
The PDB is opened in read-only mode to complete the process.

ALTER PLUGGABLE DATABASE pdb5_ro OPEN READ ONLY;

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5_RO';

NAME OPEN_MODE
------------------------------ ----------
PDB5_RO READ ONLY

SQL>
Alter the Source PDB
We want to prove the new PDB can be refreshed, so we will add a new tablespace, user and table owned by that user in the source database.

Connect to the source database.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Make some changes to the source PDB.

ALTER SESSION SET CONTAINER=pdb5;

CREATE TABLESPACE test_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;

CREATE USER test IDENTIFIED BY test


DEFAULT TABLESPACE test_ts
QUOTA UNLIMITED ON test_ts;

GRANT CREATE SESSION, CREATE TABLE TO test;

CREATE TABLE test.t1 (


id NUMBER
);

INSERT INTO test.t1 VALUES (1);


COMMIT;
Refresh the PDB
The source PDB now differs from the clone, so we should be able to easily see if the clone can be refreshed.

Connect to the target database.

export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Switch to the refreshable PDB and check for the presence of the test table. It will not exist yet.

ALTER SESSION SET CONTAINER=pdb5_ro;

SELECT * FROM test.t1;


SELECT * FROM test.t1
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL>
The refresh operation can only take place from the refreshable PDB, not the root container.

ALTER SESSION SET CONTAINER=pdb5_ro;

ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE REFRESH;

ALTER PLUGGABLE DATABASE OPEN READ ONLY;


Check for the presence of the test table again. It will now exist.

SELECT * FROM test.t1;

ID
----------
1

1 row selected.

SQL>
Notice the tablespace has also been created in the refreshable PDB.

SELECT tablespace_name
FROM dba_tablespaces
ORDER BY 1;

TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
TEMP
TEST_TS
UNDOTBS1
USERS

6 rows selected.

SQL>
Refresh Modes
In the example above we created a refreshable PDB using the manual refresh mode. Alternatively we could allow it to refresh automatically. The possible variations during creation are shown below.

-- Manual refresh mode.


CREATE PLUGGABLE DATABASE pdb5_ro FROM pdb5@clone_link
REFRESH MODE MANUAL;

-- Automatically refresh every 60 minutes.


CREATE PLUGGABLE DATABASE pdb5_ro FROM pdb5@clone_link
REFRESH MODE EVERY 60 MINUTES;

-- Non-refreshable PDB.
-- These two are functionally equivalent.
CREATE PLUGGABLE DATABASE pdb5_ro FROM pdb5@clone_link
REFRESH MODE NONE;

CREATE PLUGGABLE DATABASE pdb5_ro FROM pdb5@clone_link;


The current refresh mode can be queried using the DBA_PDBS view.

COLUMN pdb_name FORMAT A30

SELECT pdb_id, pdb_name, refresh_mode, refresh_interval


FROM dba_pdbs
ORDER BY 1;

PDB_ID PDB_NAME REFRES REFRESH_INTERVAL


---------- ------------------------------ ------ ----------------
2 PDB$SEED NONE
3 PDB1 NONE
4 PDB5_RO MANUAL

3 rows selected.

SQL>
The refresh mode can be altered after the refreshable PDB is created, as shown below.

-- Alter the refresh interval.


ALTER PLUGGABLE DATABASE pdb5_ro REFRESH MODE EVERY 60 MINUTES;
ALTER PLUGGABLE DATABASE pdb5_ro REFRESH MODE EVERY 120 MINUTES;

-- Set an automatically refreshed PDB to manual mode.


ALTER PLUGGABLE DATABASE pdb5_ro REFRESH MODE MANUAL;

-- Make a refreshable PDB non-refreshable.


ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb5_ro REFRESH MODE NONE;
ALTER PLUGGABLE DATABASE OPEN;
Remember, once the PDB is made non-refreshable, it can't be made refreshable again.

Appendix
These tests were performed on a free trial of the Oracle Database Cloud Service, where the CDB1 instance and PDB1 pluggable database were created as part of the service creation. The additional
instance was built on the same virtual machine using the commands below. I've included the DBCA commands to create and delete the CDB1 instance for completeness. They were not actually used.

# Empty local container (cdb1).


dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb1 -sid cdb1 -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb1 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs

# Remote container (cdb3) with PDB (pdb5).


dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb3 -sid cdb3 -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb5 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs

# Delete the instances.


#dbca -silent -deleteDatabase -sourceDB cdb1 -sysDBAUserName sys -sysDBAPassword OraPasswd1
dbca -silent -deleteDatabase -sourceDB cdb3 -sysDBAUserName sys -sysDBAPassword OraPasswd1
As explained earlier, in all cases Oracle Managed Files (OMF) was used so no file name conversions were needed. Also, the source databases were switched to archivelog mode.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba <<EOF

ALTER SYSTEM SET db_create_file_dest = '/u01/app/oracle/oradata';

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

ALTER PLUGGABLE DATABASE pdb5 OPEN;


ALTER PLUGGABLE DATABASE pdb5 SAVE STATE;

EXIT;
EOF

Prevent Accidental Creation of a Pluggable Database (PDB) - Lone-PDB

Oracle 12.1 allowed 252 user-defined pluggable databases. Oracle 12.2 allows 4096 user-defined pluggable databases, including application root and application containers. From Oracle 12.1.0.2 onward
the non-CDB architecture is deprecated. As a result you may decide to use the Multitenant architecture, but stick with a single user-defined pluggable database (PDB), also known as single-tenant or
lone-PDB, so you don't have to pay for the Multitenant option. In Standard Edition you can't accidentally create additional PDBs, but in Enterprise Edition you are potentially one command away from
having to buy some extra licenses. This article gives an example of a way to save yourself from the costly mistake of creating more than one user-defined PDB in a Lone-PDB instance.

Accidental Creation of a PDB


On checking the current instance we can see there is already an existing user-defined PDB.

SELECT con_id, name FROM v$pdbs;

CON_ID NAME
---------- ------------------------------
2 PDB$SEED
3 PDB1

SQL>
There is nothing in Enterprise Edition to stop you creating additional user-defined pluggable databases, even if you don't have the Multitenant option.

CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdbadmin IDENTIFIED BY Password1


FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb2/');

ALTER PLUGGABLE DATABASE pdb2 OPEN;

SELECT con_id, name FROM v$pdbs;

CON_ID NAME
---------- ------------------------------
2 PDB$SEED
3 PDB1
4 PDB2

SQL>
Having done this the database will have a "detected usage" reported in the DBA_FEATURE_USAGE_STATISTICS view. It takes a while for this to be visible, but we'll force a sample to check it.

-- Force usage sample.


EXEC DBMS_FEATURE_USAGE_INTERNAL.exec_db_usage_sampling(SYSDATE);

COLUMN name FORMAT A40


COLUMN detected_usages FORMAT 999999999999

SELECT name,
detected_usages,
aux_count,
last_usage_date
FROM dba_feature_usage_statistics
WHERE name = 'Oracle Pluggable Databases'
ORDER BY name;

NAME DETECTED_USAGES AUX_COUNT LAST_USAG


---------------------------------------- --------------- ---------- ---------
Oracle Pluggable Databases 16 2 04-OCT-16

SQL>
I'm doing this on a test instance, so it has detected the feature usage several times. The important point to notice here is the AUX_COUNT column, which indicates the number of user-defined PDBs
currently running. Using the Multitenant architecture results in the detected usage, regardless of the number of PDBs, so this alone does not indicate if you need to buy the Multitenant option. If the
AUX_COUNT column is greater than 1 for this feature, you need to buy the option!

Let's remove the PDB we just created.

ALTER PLUGGABLE DATABASE pdb2 CLOSE;


DROP PLUGGABLE DATABASE pdb2 INCLUDING DATAFILES;
What happens to the feature usage now?

-- Force usage sample.


EXEC DBMS_FEATURE_USAGE_INTERNAL.exec_db_usage_sampling(SYSDATE);

COLUMN name FORMAT A40


COLUMN detected_usages FORMAT 999999999999

SELECT name,
detected_usages,
aux_count,
last_usage_date
FROM dba_feature_usage_statistics
WHERE name = 'Oracle Pluggable Databases'
ORDER BY name;

NAME DETECTED_USAGES AUX_COUNT LAST_USAG


---------------------------------------- --------------- ---------- ---------
Oracle Pluggable Databases 17 1 04-OCT-16

SQL>
Notice the AUX_COUNT column now has a value of "1".

MAX_PDBS (12.2 Onward)


Oracle 12cR2 includes a new initialization parameter called MAX_PDBS, which allows you to set an upper limit for the number of user-defined PDBs. If you are using 12cR2 onward, use this parameter,
rather than the trigger approach described below.

SQL> ALTER SYSTEM SET max_pdbs=1;

System altered.

SQL> CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1;
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb_adm IDENTIFIED BY Password1
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

SQL>
Prevent Accidental Creation of a PDB
We can prevent accidental creation of a PDB using a system trigger. The following trigger is fired for any "CREATE" DDL on the database where the ORA_DICT_OBJ_TYPE system-defined event attribute is
set to 'PLUGGABLE DATABASE'. It checks to see how many user-defined PDBs are already present. If the number of user-defined PDBs is already at the maximum allowed (1), we raise an error.

CONN / AS SYSDBA

CREATE OR REPLACE TRIGGER max_1_pdb_trg


BEFORE CREATE ON DATABASE
WHEN (ora_dict_obj_type = 'PLUGGABLE DATABASE')
DECLARE
l_max_pdbs PLS_INTEGER := 1;
l_count PLS_INTEGER;
BEGIN
SELECT COUNT(*)
INTO l_count
FROM v$pdbs
WHERE con_id > 2;

IF l_count >= l_max_pdbs THEN


RAISE_APPLICATION_ERROR(-20001, 'More than 1 PDB requires the Multitenant option.' );
END IF;
END;
/
With the trigger in place, we attempt to create another pluggable database.

CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdbadmin IDENTIFIED BY Password1


FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb2/');

CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdbadmin IDENTIFIED BY Password1


*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: More than 1 PDB requires the Multitenant option.
ORA-06512: at line 12

SQL>
As expected, we are prevented from creating a second user-defined PDB.

Cleanup After an Accident


Looking at the feature usage described above, it would appear in 12.1 all you need to do to recover from accidentally creating more than one PDB is to drop the extra PDBs. At this point I don't know if
there is any other mechanism for tracking the maximum number of PDBs ever created in an instance, so I don't know if there is any record of a mistake left behind in the instance for future reference by
auditors.

If anyone knows something more about this, please contact me. :)

If you do accidentally create more than one user-defined PDB in a container database and you are paranoid about a potential licensing breach, you might want to do the following.

Create a new CDB instance with no PDBs.


Protect the new CDB instance with the trigger mentioned previously.
Unplug the PDB of interest from the original CDB.
Plug the PDB into the new clean CDB (see the sketch after this list).
Throw away the original CDB instance.
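The unplug/plugin steps might look something like the following. This is only a sketch, with an assumed manifest path, and it assumes the new CDB can see the original datafiles (NOCOPY), as it would if both CDBs were on the same server.

-- On the original CDB. The manifest path is an assumption for this example.
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;

-- On the new, clean CDB.
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml' NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb1 OPEN;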

Proxy PDB in Oracle Database 12c Release 2 (12.2)

Introduction
A proxy PDB can provide a local connection point that references a remote PDB. There are a few situations where this might be of interest to you.

You want to relocate a PDB to a different machine or data centre, without having to change any of the existing connection details. In this case you can relocate the PDB and create a proxy PDB of the
same name in the original location.
You want to run a PDB in the cloud, but you don't want to open access to multiple applications, having each of them connecting directly. Instead you make all your applications connect to the local PDB,
which in turn connects to the referenced PDB, so there is only a single route in and out of the cloud PDB.
You want to share a single application root container between multiple databases.
Multitenant : Proxy

Here are a few things to consider.

DML and DDL is sent to the referenced PDB for execution and the results returned.
When connected to the proxy PDB, ALTER DATABASE and ALTER PLUGGABLE DATABASE commands refer to the proxy only, they are not passed to the referenced PDB.
In the same way, when connected to the root container, ALTER PLUGGABLE DATABASE commands refer to the proxy only.
A database link is used for the initial creation of the proxy PDB, but all subsequent communication between the servers doesn't use the DB link, so it can be removed once the creation is complete.
The database link used to create a proxy PDB must be created in the root container of the local instance, but can point to a common user in the referenced CDB root container, or a common or local user
in the referenced PDB itself.
The SYSTEM, SYSAUX, TEMP and UNDO tablespaces are copied to the local instance and kept synchronized. As a result, you still need to consider file name conversion like a normal clone, unless you are
using Oracle Managed Files (OMF).
There will be performance implications due to all the network traffic. This won't magically make remote data transfer faster.
Prerequisites
The prerequisites for creating a proxy PDB are similar to those of hot cloning, so rather than repeat them, you can read them here.

In the examples below I have two databases running on the same virtual machine, but they could be running on separate physical or virtual servers.

cdb1 : The local database that will eventually house the proxy PDB.
cdb3 : The remote CDB, housing the remote referenced PDB (pdb5).
The databases use Oracle Managed Files (OMF) so I don't need to worry about the FILE_NAME_CONVERT or PDB_FILE_NAME_CONVERT settings.

The proxy PDB and referenced PDB share the same listener, so they can't have the same name. If they had different listeners, either on the same machine or on separate machines, they could have the
same name.

Create a Proxy PDB


Connect to the root container of the local instance (cdb1). With the prerequisites in place we can create and open the proxy PDB using the following commands.

CONN sys@cdb1 AS SYSDBA

CREATE PLUGGABLE DATABASE pdb5_proxy AS PROXY FROM pdb5@clone_link;


ALTER PLUGGABLE DATABASE pdb5_proxy OPEN;
If you connect to the root container using OS authentication, switch to the proxy PDB container and try to perform a query, you will get the following error.

CONN / AS SYSDBA

ALTER SESSION SET CONTAINER = pdb5_proxy;

SQL> SELECT name FROM v$database;


SELECT name FROM v$database
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from PROXYPDB$DBLINK
If you connect to SYS using a service, the switch works fine.

CONN sys@cdb1 AS SYSDBA

ALTER SESSION SET CONTAINER = pdb5_proxy;

SELECT name FROM v$database;

NAME
---------
CDB3

SQL>
Create a new entry in the "tnsnames.ora" file for the proxy PDB in the local instance.

PDB5_PROXY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = myserver.mydomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb5_proxy)
)
)
You can now connect directly to the proxy PDB. Notice in the output below, the database name is showing as CDB3, even though we are connected to the pdb5_proxy container in the cdb1 instance.

CONN sys@pdb5_proxy AS SYSDBA

SELECT name FROM v$database;

NAME
---------
CDB3

SQL>
Once the proxy PDB is created the database link and link user are no longer needed.

CONN sys@cdb1 AS SYSDBA

DROP DATABASE LINK clone_link;

CONN sys@cdb3 AS SYSDBA

DROP USER c##remote_clone_user CASCADE CONTAINER=ALL;


Test It
We will test the proxy PDB by making changes in both the proxy PDB and the referenced PDB. First, create a new tablespace and a test user with a quota in the new tablespace.

CONN sys@pdb5_proxy AS SYSDBA

CREATE TABLESPACE test_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;

CREATE USER test IDENTIFIED BY test


DEFAULT TABLESPACE test_ts
QUOTA UNLIMITED ON test_ts;

GRANT CREATE SESSION, CREATE TABLE TO test;


Connect to the referenced PDB using the newly created user and create a test table.

CONN test/test@pdb5

CREATE TABLE t1 (id NUMBER);


INSERT INTO t1 VALUES (1);
COMMIT;
Return to the proxy PDB and query the table.

CONN test/test@pdb5_proxy

SELECT * FROM t1;

ID
----------
1

SQL>
Insert another record into the table in the proxy PDB.

CONN test/test@pdb5_proxy

INSERT INTO t1 VALUES (2);


COMMIT;
Return to the referenced PDB and query the table.

CONN test/test@pdb5

SELECT * FROM t1;

ID
----------
1
2

SQL>
We can see the proxy PDB and referenced PDB are working as expected.
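As mentioned in the considerations earlier, ALTER PLUGGABLE DATABASE commands issued in the local CDB act on the proxy only, not the referenced PDB. The following is a quick sketch to illustrate this. It wasn't part of the original test, so adjust the connections to suit your environment.

CONN sys@cdb1 AS SYSDBA

-- Close the proxy in the local CDB.
ALTER PLUGGABLE DATABASE pdb5_proxy CLOSE IMMEDIATE;

-- The referenced PDB in the remote CDB is unaffected by the close.
CONN sys@cdb3 AS SYSDBA

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5';

-- Reopen the proxy before continuing.
CONN sys@cdb1 AS SYSDBA

ALTER PLUGGABLE DATABASE pdb5_proxy OPEN;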

Local Datafiles
What might seem a little odd is the SYSTEM, SYSAUX, TEMP and UNDO tablespaces are copied to the local instance and kept synchronized. All other tablespaces are only present in the referenced
instance.

If we query datafiles and tempfiles in the proxy PDB we are shown those of the referenced PDB. Notice the datafiles associated with the USERS and TEST_TS tablespaces.

CONN sys@pdb5_proxy AS SYSDBA

SET LINESIZE 100

SELECT name FROM v$datafile;

NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/cdb3/pdb5/system01.dbf
/u02/app/oracle/oradata/cdb3/pdb5/sysaux01.dbf
/u02/app/oracle/oradata/cdb3/pdb5/undotbs01.dbf
/u02/app/oracle/oradata/cdb3/pdb5/users01.dbf
/u02/app/oracle/oradata/CDB3/469D84C85D196311E0538738A8C0B97D/datafile/o1_mf_test_ts_d877rjoo_.dbf

SQL>

SELECT name FROM v$tempfile;

NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/cdb3/pdb5/temp01.dbf

SQL>
If we check in the local instance we see a different pattern. Notice the datafiles associated with the USERS and TEST_TS tablespaces are not present.

CONN / AS SYSDBA

SHOW PDBS

CON_ID CON_NAME OPEN MODE RESTRICTED


---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
5 PDB5_PROXY READ WRITE NO
SQL>

SET LINESIZE 100

SELECT name FROM v$datafile WHERE con_id = 5;

NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB1/469F256E1081028AE0538738A8C079C7/datafile/o1_mf_system_d876rtd8_.dbf
/u02/app/oracle/oradata/CDB1/469F256E1081028AE0538738A8C079C7/datafile/o1_mf_sysaux_d876rtd9_.dbf
/u02/app/oracle/oradata/CDB1/469F256E1081028AE0538738A8C079C7/datafile/o1_mf_undotbs1_d876rtd9_.dbf

SQL>

SELECT name FROM v$tempfile WHERE con_id = 5;

NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB1/469F256E1081028AE0538738A8C079C7/datafile/o1_mf_temp_d876rtdb_.dbf

SQL>
Alternate Host and Port
The CREATE PLUGGABLE DATABASE ... AS PROXY FROM command can also include the HOST and PORT clauses.

CREATE PLUGGABLE DATABASE pdb5_proxy AS PROXY FROM pdb5@clone_link PORT=1526 HOST='ol7-122.localdomain';


The PORT clause should be used if the referenced PDB is accessed by a port other than 1521. The HOST clause is used if the referenced PDB is to be accessed using a name other than that produced by the
hostname command on the remote server, for example a DNS alias or SCAN. The host and port are amended using the following commands, issued from the referenced PDB.

CONN sys@pdb5 AS SYSDBA

-- Alter and reset HOST.


ALTER PLUGGABLE DATABASE CONTAINERS HOST='myhost.example.com';
ALTER PLUGGABLE DATABASE CONTAINERS HOST RESET;

-- Alter and reset PORT.

ALTER PLUGGABLE DATABASE CONTAINERS PORT=1526;
ALTER PLUGGABLE DATABASE CONTAINERS PORT RESET;
After a change, any proxy PDBs pointing to the referenced PDB must be recreated.
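Recreating the proxy is just a drop and create. The sketch below assumes the clone database link and link user have been put back in place first, since a database link is needed for the creation.

CONN sys@cdb1 AS SYSDBA

ALTER PLUGGABLE DATABASE pdb5_proxy CLOSE;
DROP PLUGGABLE DATABASE pdb5_proxy INCLUDING DATAFILES;

CREATE PLUGGABLE DATABASE pdb5_proxy AS PROXY FROM pdb5@clone_link;
ALTER PLUGGABLE DATABASE pdb5_proxy OPEN;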

Proxy Views
You can see which are proxy PDBs using the V$PDBS.PROXY_PDB column or CDB_PDBS.IS_PROXY_PDB column.

COLUMN name FORMAT A30

SELECT name, proxy_pdb
FROM v$pdbs;

NAME PRO
------------------------------ ---
PDB$SEED NO
PDB1 NO
PDB5_PROXY YES

SQL>

COLUMN pdb_name FORMAT A30

SELECT pdb_name, is_proxy_pdb


FROM cdb_pdbs;

PDB_NAME IS_
------------------------------ ---
PDB1 NO
PDB$SEED NO
PDB5_PROXY YES

SQL>
The V$PROXY_PDB_TARGETS view displays information about the connection details for the referenced PDB used by a proxy PDB.

COLUMN target_host FORMAT A20


COLUMN target_service FORMAT A32
COLUMN target_user FORMAT A20

SELECT con_id,
target_port,
target_host,
target_service,
target_user
FROM v$proxy_pdb_targets;

CON_ID TARGET_PORT TARGET_HOST TARGET_SERVICE TARGET_USER


---------- ----------- -------------------- -------------------------------- --------------------
5 1521 my-server 469d84c85d196311e0538738a8c0b97d

SQL>

Relocate a PDB in Oracle Database 12c Release 2 (12.2)

From Oracle 12.2 onward you can relocate a PDB, moving it between two root containers with near-zero downtime.

Prerequisites
In this context, the word "local" refers to the destination or target CDB that will house the relocated PDB. The word "remote" refers to the PDB that is to be relocated.

The user in the local database must have the CREATE PLUGGABLE DATABASE privilege in the root container.
The remote CDB must use local undo mode. Without this, you must open the remote PDB in read-only mode.
The remote and local databases should be in archivelog mode.
The local database must have a public database link to the remote CDB using a common user.
The common user in the remote database that the database link connects to must have the CREATE PLUGGABLE DATABASE and SYSDBA or SYSOPER privilege.
The local and remote databases must have the same endianness.
The local and remote databases must either have the same options installed, or the remote database must have a subset of those present on the local database.
If the character set of the local CDB is AL32UTF8, the remote database can be any character set. If the local CDB does not use AL32UTF8, the character sets of the remote and local databases must match.
If the remote database uses Transparent Data Encryption (TDE) the local CDB must be configured appropriately before attempting the relocate. If not you will be left with a new PDB that will only open
in restricted mode.
Bug 19174942 is marked as fixed in 12.2. I can't confirm this, so just in case I'll leave this here, but it should no longer be the case. The default tablespaces for each common user in the remote PDB
*must* exist in local CDB. If this is not true, create the missing tablespaces in the root container of the local PDB. If you don't do this your new PDB will only be able to open in restricted mode (Bug
19174942).
In the examples below I have two databases running on the same virtual machine, but they could be running on separate physical or virtual servers.

cdb1 : The local database that will eventually house the relocated PDB.
cdb3 : The remote CDB that houses the PDB (pdb5) to be relocated.
Prepare Remote CDB
Connect to the remote CDB and prepare the remote PDB for relocating.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Create a user in the remote database for use with the database link. In this case, we must use a common user in the remote CDB.

CREATE USER c##remote_clone_user IDENTIFIED BY remote_clone_user CONTAINER=ALL;


GRANT CREATE SESSION, SYSOPER, CREATE PLUGGABLE DATABASE TO c##remote_clone_user CONTAINER=ALL;
Check the remote CDB is in local undo mode and archivelog mode.

COLUMN property_name FORMAT A30


COLUMN property_value FORMAT A30

SELECT property_name, property_value
FROM database_properties
WHERE property_name = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE

SQL>

SELECT log_mode
FROM v$database;

LOG_MODE
------------
ARCHIVELOG

SQL>
Because the remote CDB is in local undo mode and archivelog mode, we don't need to turn the remote database into read-only mode.

Prepare Local CDB


Switch to the local server and create a "tnsnames.ora" entry pointing to the remote CDB for use in the USING clause of the database link. The connection details must include the "(SERVER =
DEDICATED)" entry, or you will receive a "ORA-01031: insufficient privileges" error.

CDB3 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = my-server.my-domain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cdb3)
)
)
Connect to the local database to initiate the relocate.

export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba
Check the local CDB is in local undo mode and archivelog mode.

COLUMN property_name FORMAT A30


COLUMN property_value FORMAT A30

SELECT property_name, property_value


FROM database_properties
WHERE property_name = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE

SQL>

SELECT log_mode
FROM v$database;

LOG_MODE
------------
ARCHIVELOG

SQL>
Create a public database link in the local CDB, pointing to the remote CDB.

Remember to remove this once the relocate is complete. It is a massive security problem to leave this in place!

DROP PUBLIC DATABASE LINK clone_link;

CREATE PUBLIC DATABASE LINK clone_link


CONNECT TO c##remote_clone_user IDENTIFIED BY remote_clone_user USING 'cdb3';

-- Test link.
DESC user_tables@clone_link
Relocate a PDB
Create a new PDB in the local CDB by relocating the remote PDB. In this case we are using Oracle Managed Files (OMF), so we don't need to bother with the FILE_NAME_CONVERT parameter for file name conversions.

CREATE PLUGGABLE DATABASE pdb5 FROM pdb5@clone_link RELOCATE;

Pluggable database created.

SQL>
We can see the new PDB has been created, but it is in the MOUNTED state.

COLUMN name FORMAT A30

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5';

NAME OPEN_MODE
------------------------------ ----------
PDB5 MOUNTED

SQL>
The PDB is opened in read-write mode to complete the process.

ALTER PLUGGABLE DATABASE pdb5 OPEN;

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5';

NAME OPEN_MODE
------------------------------ ----------
PDB5                           READ WRITE

SQL>
Drop the public database link.

DROP PUBLIC DATABASE LINK clone_link;


As with any PDB clone, check the common users and the temporary tablespace are configured as expected.
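A quick sanity check along those lines might look like the following sketch, run against the relocated PDB.

CONN sys@cdb1 AS SYSDBA
ALTER SESSION SET CONTAINER=pdb5;

COLUMN default_temp_tablespace FORMAT A30

SELECT property_value AS default_temp_tablespace
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';

COLUMN username FORMAT A30

SELECT username
FROM   dba_users
WHERE  common = 'YES'
ORDER BY username;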

If we switch back to the remote instance we can see PDB5 has been dropped.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba

SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB5';

no rows selected

SQL>
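The relocate above used the default availability behaviour. The RELOCATE clause also accepts an AVAILABILITY MAX option which, as I understand it, is aimed at keeping connections to the original location working during the operation by forwarding them to the new location, provided the listeners are shared or cross-registered. I haven't run this as part of the article, so treat it as a sketch only.

CREATE PLUGGABLE DATABASE pdb5 FROM pdb5@clone_link RELOCATE AVAILABILITY MAX;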
Managing Connections
Moving the database is only one aspect of keeping a system running. Once the database is in the new location, you need to make sure connections can still be made to it. The options are as follows.

If your connection information is centralised in an LDAP server (OID, AD etc.) then the definition can be altered centrally.
If both CDBs use the same listener, the relocated PDB will auto-register once the relocate is complete.
If both CDBs use different listeners, the LOCAL_LISTENER and REMOTE_LISTENER parameters can be used to configure cross-registration, as sketched after this list.
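A sketch of the cross-registration, run on the destination CDB. The host names and ports are assumptions, so adjust them for your environment.

CONN sys@cdb1 AS SYSDBA

ALTER SYSTEM SET local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=new-server)(PORT=1521))';
ALTER SYSTEM SET remote_listener='old-server:1521';
ALTER SYSTEM REGISTER;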
Appendix
These tests were performed on a free trial of the Oracle Database Cloud Service, where the CDB1 instance and PDB1 pluggable database were created as part of the service creation. The additional
instance was built on the same virtual machine using the commands below. I've included the DBCA commands to create and delete the CDB1 instance for completeness. They were not actually used.

# Empty local container (cdb1).


dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb1 -sid cdb1 -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb1 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs

# Remote container (cdb3) with PDB (pdb5).


dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb3 -sid cdb3 -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb5 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-totalMemory 2048 \
-storageType FS \
-datafileDestination "/u01/app/oracle/oradata/" \
-redoLogFileSize 50 \
-initParams encrypt_new_tablespaces=DDL \
-emConfiguration NONE \
-ignorePreReqs

# Delete the instances.


#dbca -silent -deleteDatabase -sourceDB cdb1 -sysDBAUserName sys -sysDBAPassword OraPasswd1
dbca -silent -deleteDatabase -sourceDB cdb3 -sysDBAUserName sys -sysDBAPassword OraPasswd1
As explained earlier, in all cases Oracle Managed Files (OMF) was used so no file name conversions were needed. Also, the databases were switched to archivelog mode.

export ORAENV_ASK=NO
export ORACLE_SID=cdb3
. oraenv
export ORAENV_ASK=YES

sqlplus / as sysdba <<EOF

ALTER SYSTEM SET db_create_file_dest = '/u01/app/oracle/oradata';

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

ALTER PLUGGABLE DATABASE pdb5 OPEN;


ALTER PLUGGABLE DATABASE pdb5 SAVE STATE;

EXIT;
EOF

Resource Manager PDB Performance Profiles in Oracle Database 12c Release 2 (12.2)

In the previous release it was possible to create a resource manager CDB resource plan to control the division of CPU and parallel execution server resources between PDBs. This required a separate plan
directive for each PDB, which doesn't scale well to thousands of PDBs. In Oracle Database 12c Release 2 (12.2) it is now possible to create a resource plan based on performance profiles, which define
the resource management for groups of PDBs. This can drastically reduce the number of plan directives required to handle thousands of PDBs.

Much of the resource manager CDB/PDB functionality is unchanged between 12.1 and 12.2, so some of the sections below link to the 12.1 article to save repetition.

Create CDB Resource Plan with PDB Performance Profiles


The process for creating a CDB resource plan using PDB performance profiles is very similar to using CDB plan directives. Instead of targeting individual PDBs, the profiles define types of PDBs that have
the same resource usage profiles. The profile directives allocate shares, which define the proportion of the CDB resources available to the PDB, and specific utilization percentages, that give a finer level
of control. PDB performance profiles are managed using the DBMS_RESOURCE_MANAGER package. Each profile directive is made up of the following elements:

profile : The profile the directive relates to.


shares : The proportion of the CDB resources available to the PDB.
utilization_limit : The percentage of the CDB's available CPU that is available to the PDB.
parallel_server_limit : The percentage of the CDB's available parallel servers (PARALLEL_SERVERS_TARGET initialization parameter) that are available to the PDB.
PDBs without a specific plan directive use the default PDB directive.

The following code creates a new CDB resource plan using the CREATE_CDB_PLAN procedure, then adds two profile directives using the CREATE_CDB_PROFILE_DIRECTIVE procedure to represent the
typical gold and silver levels of service.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_prof_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.create_cdb_plan(
plan => l_plan,
comment => 'A test CDB resource plan using profiles');

DBMS_RESOURCE_MANAGER.create_cdb_profile_directive(
plan => l_plan,
profile => 'gold',
shares => 3,
utilization_limit => 100,
parallel_server_limit => 100);

DBMS_RESOURCE_MANAGER.create_cdb_profile_directive(
plan => l_plan,
profile => 'silver',
shares => 2,
utilization_limit => 50,
parallel_server_limit => 50);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
Information about the available CDB resource plans can be queried using the DBA_CDB_RSRC_PLANS view.

COLUMN plan FORMAT A30
COLUMN comments FORMAT A30
COLUMN status FORMAT A10
SET LINESIZE 100

SELECT plan_id,
plan,
comments,
status,
mandatory
FROM dba_cdb_rsrc_plans
WHERE plan = 'TEST_CDB_PROF_PLAN';

PLAN_ID PLAN COMMENTS STATUS MAN


---------- ------------------------------ ------------------------------ ---------- ---
83326 TEST_CDB_PROF_PLAN A test CDB resource plan using NO
profiles

SQL>
Information about the CDB resource plan directives can be queried using the DBA_CDB_RSRC_PLAN_DIRECTIVES view. Notice we use the PROFILE column as well as the PLUGGABLE_DATABASE column.

COLUMN plan FORMAT A30


COLUMN pluggable_database FORMAT A25
COLUMN profile FORMAT A25
SET LINESIZE 150 VERIFY OFF

SELECT plan,
pluggable_database,
profile,
shares,
utilization_limit AS util,
parallel_server_limit AS parallel
FROM dba_cdb_rsrc_plan_directives
WHERE plan = 'TEST_CDB_PROF_PLAN'
ORDER BY plan, pluggable_database, profile;

PLAN PLUGGABLE_DATABASE PROFILE SHARES UTIL PARALLEL


------------------------------ ------------------------- ------------------------- ---------- ---------- ----------
TEST_CDB_PROF_PLAN ORA$AUTOTASK 90 100
TEST_CDB_PROF_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 100 100
TEST_CDB_PROF_PLAN GOLD 3 100 100
TEST_CDB_PROF_PLAN SILVER 2 50 50

SQL>
For the rest of the article the cdb_resource_plans.sql and cdb_resource_profile_directives.sql scripts will be used to display this information.

Modify CDB Resource Plan with PDB Performance Profiles


An existing resource plan is modified by creating, updating or deleting profile directives. The following code uses the CREATE_CDB_PROFILE_DIRECTIVE procedure to add a new profile directive to the
CDB resource plan we created previously.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_prof_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.create_cdb_profile_directive(
plan => l_plan,
profile => 'bronze',
shares => 1,
utilization_limit => 25,
parallel_server_limit => 25);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

SQL> @cdb_resource_profile_directives.sql test_cdb_prof_plan

PLAN PLUGGABLE_DATABASE PROFILE SHARES UTIL PARALLEL


------------------------------ ------------------------- ------------------------- ---------- ---------- ----------
TEST_CDB_PROF_PLAN ORA$AUTOTASK 90 100
TEST_CDB_PROF_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 100 100
TEST_CDB_PROF_PLAN BRONZE 1 25 25
TEST_CDB_PROF_PLAN GOLD 3 100 100
TEST_CDB_PROF_PLAN SILVER 2 50 50

SQL>
The UPDATE_CDB_PROFILE_DIRECTIVE procedure modifies an existing profile directive.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_prof_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.update_cdb_profile_directive(
plan => l_plan,
profile => 'bronze',
new_shares => 1,
new_utilization_limit => 20,
new_parallel_server_limit => 20);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

SQL> @cdb_resource_profile_directives.sql test_cdb_prof_plan

PLAN PLUGGABLE_DATABASE PROFILE SHARES UTIL PARALLEL


------------------------------ ------------------------- ------------------------- ---------- ---------- ----------
TEST_CDB_PROF_PLAN ORA$AUTOTASK 90 100
TEST_CDB_PROF_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 100 100
TEST_CDB_PROF_PLAN BRONZE 1 20 20
TEST_CDB_PROF_PLAN GOLD 3 100 100
TEST_CDB_PROF_PLAN SILVER 2 50 50

SQL>
The DELETE_CDB_PROFILE_DIRECTIVE procedure deletes an existing profile directive from the CDB resource plan.

DECLARE
l_plan VARCHAR2(30) := 'test_cdb_prof_plan';
BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;

DBMS_RESOURCE_MANAGER.delete_cdb_profile_directive(
plan => l_plan,
profile => 'bronze');

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

SQL> @cdb_resource_profile_directives.sql test_cdb_prof_plan

PLAN PLUGGABLE_DATABASE PROFILE SHARES UTIL PARALLEL


------------------------------ ------------------------- ------------------------- ---------- ---------- ----------
TEST_CDB_PROF_PLAN ORA$AUTOTASK 90 100
TEST_CDB_PROF_PLAN ORA$DEFAULT_PDB_DIRECTIVE 1 100 100
TEST_CDB_PROF_PLAN GOLD 3 100 100
TEST_CDB_PROF_PLAN SILVER 2 50 50

SQL>
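The ORA$DEFAULT_PDB_DIRECTIVE entry shown in the listings above can also be adjusted if the catch-all allocation doesn't suit you. The following is only a sketch; it assumes the UPDATE_CDB_DEFAULT_PLAN_DIRECTIVE procedure and the parameter names shown here, so check the DBMS_RESOURCE_MANAGER documentation for your version before relying on it.

DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_prof_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  -- Adjust the directive used by PDBs that have no profile of their own.
  DBMS_RESOURCE_MANAGER.update_cdb_default_plan_directive(
    plan                      => l_plan,
    new_shares                => 1,
    new_utilization_limit     => 50,
    new_parallel_server_limit => 50);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/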
Enable/Disable Resource Plan with PDB Performance Profiles
Enabling and disabling resource plans in a CDB is the same as it was in pre-12c instances. Enable a plan by setting the RESOURCE_MANAGER_PLAN parameter to the name of the CDB resource plan,
while connected to the root container.

CONN / AS SYSDBA
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'test_cdb_prof_plan';

SHOW PARAMETER RESOURCE_MANAGER_PLAN

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
resource_manager_plan string test_cdb_prof_plan
SQL>
In addition to enabling the resource plan at the CDB level, we need to consider the PDB. Each PDB will use the default directive. To change an individual PDB to an alternative profile you need to set the
DB_PERFORMANCE_PROFILE parameter at the PDB level, as shown below.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;

ALTER SYSTEM SET DB_PERFORMANCE_PROFILE=gold SCOPE=SPFILE;


ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN;

SHOW PARAMETER DB_PERFORMANCE_PROFILE

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
db_performance_profile string GOLD
SQL>
To switch the PDB back to using the default directive reset the DB_PERFORMANCE_PROFILE parameter.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;

ALTER SYSTEM SET DB_PERFORMANCE_PROFILE='' SCOPE=SPFILE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN;

SHOW PARAMETER DB_PERFORMANCE_PROFILE


NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_performance_profile string
SQL>
To disable the plan, set the RESOURCE_MANAGER_PLAN parameter to another plan, or blank it.

CONN / AS SYSDBA

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';

SHOW PARAMETER RESOURCE_MANAGER_PLAN

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
resource_manager_plan string
SQL>
Monitoring CPU and Parallel Execution Server Usage for PDBs
Oracle now provides views to monitor the resource (CPU, I/O, parallel execution, memory) usage of PDBs. Each view contains similar information, but for different retention periods.

V$RSRCPDBMETRIC : A single row per PDB, holding the last of the 1 minute samples.
V$RSRCPDBMETRIC_HISTORY : 61 rows per PDB, holding the last 60 minutes worth of samples from the V$RSRCPDBMETRIC view.
DBA_HIST_RSRC_PDB_METRIC : AWR snapshots, retained based on the AWR retention period.
The following queries are examples of their usage.

CONN / AS SYSDBA

SET LINESIZE 300


COLUMN pdb_name FORMAT A10
COLUMN begin_time FORMAT A26
COLUMN end_time FORMAT A26
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
ALTER SESSION SET NLS_TIMESTAMP_FORMAT='DD-MON-YYYY HH24:MI:SS.FF';

-- Last sample per PDB.


SELECT r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.cpu_consumed_time,
r.cpu_wait_time,
r.avg_running_sessions,
r.avg_waiting_sessions,
r.avg_cpu_utilization,
r.avg_active_parallel_stmts,
r.avg_queued_parallel_stmts,
r.avg_active_parallel_servers,
r.avg_queued_parallel_servers
FROM v$rsrcpdbmetric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
ORDER BY p.pdb_name;

-- Last hours samples for PDB1


SELECT r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.cpu_consumed_time,
r.cpu_wait_time,
r.avg_running_sessions,
r.avg_waiting_sessions,
r.avg_cpu_utilization,
r.avg_active_parallel_stmts,
r.avg_queued_parallel_stmts,
r.avg_active_parallel_servers,
r.avg_queued_parallel_servers
FROM v$rsrcpdbmetric_history r,
cdb_pdbs p
WHERE r.con_id = p.con_id
AND p.pdb_name = 'PDB1'
ORDER BY r.begin_time;

-- All AWR snapshot information for PDB1.


SELECT r.snap_id,
r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.cpu_consumed_time,
r.cpu_wait_time,
r.avg_running_sessions,
r.avg_waiting_sessions,
r.avg_cpu_utilization,
r.avg_active_parallel_stmts,
r.avg_queued_parallel_stmts,
r.avg_active_parallel_servers,
r.avg_queued_parallel_servers
FROM dba_hist_rsrc_pdb_metric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
AND p.pdb_name = 'PDB1'
ORDER BY r.begin_time;

Oracle Resource Manager : Per-Process PGA Limits in Oracle Database 12c Release 2 (12.2)

Oracle has a long history of improving the management of the Process Global Area (PGA). Oracle 9i introduced the PGA_AGGREGATE_TARGET parameter to automate the management of the PGA and
set a soft limit for its size. Oracle 11g introduced Automatic Memory Management (AMM), which you should probably avoid. Oracle 12c Release 1 introduced the PGA_AGGREGATE_LIMIT parameter to
define a hard limit for PGA size.

Oracle Database 12c Release 2 (12.2) has introduced two new features related to management of the PGA. First, the PGA_AGGREGATE_TARGET and PGA_AGGREGATE_LIMIT parameters can now be set
at the PDB level to limit the amount of PGA used by the PDB (described here). Second, Resource Manager can limit the amount of PGA used by a session, based on the session's consumer group. This
article focuses on this second feature.

SESSION_PGA_LIMIT Parameter
The SESSION_PGA_LIMIT parameter has been added to the CREATE_PLAN_DIRECTIVE and UPDATE_PLAN_DIRECTIVE procedures of the DBMS_RESOURCE_MANAGER package. This new parameter
specifies the upper limit in MB for PGA usage by a session assigned to the consumer group. If a session exceeds this limit, an ORA-10260 error is raised.

This parameter can be used in conjunction with other resource limits for a plan directive, but in this article it will be discussed in isolation. It can also be used in the non-CDB architecture, but here it will only
be considered inside a PDB.

Create a Plan to Limit Session PGA


The following example creates a new resource plan using the SESSION_PGA_LIMIT parameter. The plan includes two main consumer groups, one allowing high PGA usage and one limited to low PGA
usage. It also includes a consumer group for maintenance tasks and a catch all group.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;

BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area();
DBMS_RESOURCE_MANAGER.create_pending_area();

-- Create plan
DBMS_RESOURCE_MANAGER.create_plan(
plan => 'pga_plan',
comment => 'Plan for a combination of high and low PGA usage.');

-- Create consumer groups


DBMS_RESOURCE_MANAGER.create_consumer_group(
consumer_group => 'high_pga_cg',
comment => 'High PGA usage allowed');

DBMS_RESOURCE_MANAGER.create_consumer_group(
consumer_group => 'low_pga_cg',
comment => 'Low PGA usage allowed');

DBMS_RESOURCE_MANAGER.create_consumer_group(
consumer_group => 'maint_subplan',
comment => 'Maintenance tasks');

-- Assign consumer groups to plan and define priorities


DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'pga_plan',
group_or_subplan => 'high_pga_cg',
session_pga_limit => 100);

DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'pga_plan',
group_or_subplan => 'low_pga_cg',
session_pga_limit => 20);

DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'pga_plan',
group_or_subplan => 'maint_subplan',
session_pga_limit => NULL);

DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'pga_plan',
group_or_subplan => 'OTHER_GROUPS',
session_pga_limit => NULL);

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/
Enable the plan by setting the RESOURCE_MANAGER_PLAN parameter in the PDB.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = pga_plan;


Assign the TEST user to the LOW_PGA_CG consumer group.

BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area();
DBMS_RESOURCE_MANAGER.create_pending_area();

DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
attribute => DBMS_RESOURCE_MANAGER.oracle_user,
value => 'test',
consumer_group => 'low_pga_cg');

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/

COLUMN username FORMAT A30


COLUMN initial_rsrc_consumer_group FORMAT A30

SELECT username, initial_rsrc_consumer_group


FROM dba_users
WHERE username = 'TEST';

USERNAME INITIAL_RSRC_CONSUMER_GROUP
------------------------------ ------------------------------
TEST LOW_PGA_CG

1 row selected.

SQL>
Test It
The following code connects to the test user and artificially tries to allocate excessive amounts of PGA using recursion.

CONN test/test@pdb1

DECLARE
PROCEDURE grab_memory AS
l_dummy VARCHAR2(4000);
BEGIN
grab_memory;
END;
BEGIN
grab_memory;
END;
/
DECLARE
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-10260: PGA limit (20 MB) exceeded - process terminated

SQL>
Notice the process was terminated once the session tried to use more than 20 MB of PGA.

Assign the TEST user to the HIGH_PGA_CG consumer group.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;

BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area();
DBMS_RESOURCE_MANAGER.create_pending_area();

DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
attribute => DBMS_RESOURCE_MANAGER.oracle_user,
value => 'test',
consumer_group => 'high_pga_cg');

DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/
Test it again.

CONN test/test@pdb1

DECLARE
PROCEDURE grab_memory AS
l_dummy VARCHAR2(4000);
BEGIN
grab_memory;
END;
BEGIN
grab_memory;
END;
/
DECLARE
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-10260: PGA limit (100 MB) exceeded - process terminated

SQL>
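
To confirm which consumer group a connected session is actually running under, the RESOURCE_CONSUMER_GROUP column of V$SESSION can be checked from a privileged session in the PDB. A minimal sketch:

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;

COLUMN username FORMAT A30
COLUMN resource_consumer_group FORMAT A30

-- Consumer group currently in effect for the TEST sessions.
SELECT sid,
       username,
       resource_consumer_group
FROM   v$session
WHERE  username = 'TEST';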

Heat Map, Information Lifecycle Management (ILM) and Automatic Data Optimization (ADO) in Oracle Database 12c Release 2 (12.2)

In Oracle Database 12.1 the Heat Map and Automatic Data Optimization (ADO) functionality was only available when using the non-CDB architecture. In Oracle Database 12.2 this functionality is now
supported in the multitenant architecture. This article gives an overview of Heat Map, Information Lifecycle Management (ILM) and Automatic Data Optimization (ADO) in Oracle Database 12c Release 2
(12.2). The examples are based around the multitenant architecture, but the information applies equally to the non-CDB architecture in Oracle Database 12.1 and 12.2.

Heat Map
The heat map functionality allows you to track data access at the segment level and data modification at the row and segment level, so you can identify the busy segments of the system. This
functionality is controlled by the HEAT_MAP parameter, which can be set at the system or session level.
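
For example, tracking can be enabled for just the current session rather than the whole PDB (a trivial sketch; the rest of this article uses the system-level setting).

-- Enable heat map tracking for the current session only.
ALTER SESSION SET heat_map = ON;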

Display the current setting of the HEAT_MAP parameter at the PDB level.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

SHOW PARAMETER heat_map;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
heat_map string OFF
SQL>
Enable the heat map for the PDB.

ALTER SYSTEM SET heat_map = ON;

SHOW PARAMETER heat_map;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
heat_map string ON
SQL>
Notice that the heat map is still disabled at the CDB level.

CONN / AS SYSDBA

SHOW PARAMETER heat_map;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
heat_map string OFF
SQL>
Once the heat map functionality is enabled, the database will track access and modification for all segments except those in the SYSTEM and SYSAUX tablespaces. You can display the heat map information
using the following views and pipelined table functions.

V$HEAT_MAP_SEGMENT
{USER|ALL|DBA}_HEAT_MAP_SEG_HISTOGRAM
{USER|ALL|DBA}_HEAT_MAP_SEGMENT
{USER|ALL|DBA}_HEATMAP_TOP_OBJECTS
{USER|ALL|DBA}_HEATMAP_TOP_TABLESPACES
DBMS_HEAT_MAP.BLOCK_HEAT_MAP
DBMS_HEAT_MAP.EXTENT_HEAT_MAP
DBMS_HEAT_MAP.OBJECT_HEAT_MAP
DBMS_HEAT_MAP.SEGMENT_HEAT_MAP
DBMS_HEAT_MAP.TABLESPACE_HEAT_MAP
Do some work that will be tracked.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

CREATE USER test IDENTIFIED BY test QUOTA UNLIMITED ON users;


GRANT CREATE SESSION, CREATE TABLE TO test;

CONN test/test@pdb1

CREATE TABLE t1 (
id NUMBER,
description VARCHAR2(50),
CONSTRAINT t1_pk PRIMARY KEY (id)
);

INSERT INTO t1
SELECT level,
'Description for ' || level
FROM dual
CONNECT BY level <= 10;
COMMIT;

SELECT *
FROM t1;

SELECT *
FROM t1
WHERE id = 1;
We can now run some queries to see the tracked information.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

COLUMN object_name FORMAT A20

SELECT track_time,
object_name,
n_segment_write,
n_full_scan,
n_lookup_scan
FROM v$heat_map_segment
ORDER BY 1, 2;

TRACK_TIME OBJECT_NAME N_SEGMENT_WRITE N_FULL_SCAN N_LOOKUP_SCAN


-------------------- -------------------- --------------- ----------- -------------
25-FEB-2017 18:25:31 T1 1 2 1
25-FEB-2017 18:25:31 T1_PK 1 0 1

SQL>

COLUMN owner FORMAT A20


COLUMN object_name FORMAT A20

SELECT track_time,
owner,
object_name,
segment_write,
full_scan,
lookup_scan
FROM dba_heat_map_seg_histogram
ORDER BY 1, 2, 3;

TRACK_TIME OWNER OBJECT_NAME SEG FUL LOO


-------------------- -------------------- -------------------- --- --- ---
25-FEB-2017 18:26:15 TEST T1 YES YES YES
25-FEB-2017 18:26:15 TEST T1_PK YES NO YES

SQL>

SET LINESIZE 100

COLUMN owner FORMAT A10


COLUMN segment_name FORMAT A20
COLUMN tablespace_name FORMAT A20

SELECT owner,
segment_name,
segment_type,
tablespace_name,
segment_size
FROM TABLE(DBMS_HEAT_MAP.object_heat_map('TEST','T1'));

OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME SEGMENT_SIZE


---------- -------------------- -------------------- -------------------- ------------
TEST T1 TABLE USERS 65536
TEST T1_PK INDEX USERS 65536

SQL>
The heat map information can be really useful for identifying the busy and quiet segments in your database.
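
The persisted dictionary views listed earlier can be queried in the same way as the V$ view. The following is a simple sketch that deliberately dumps all columns rather than assuming specific column names.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

-- Persisted heat map details for segments owned by TEST.
SELECT *
FROM   dba_heat_map_segment
WHERE  owner = 'TEST';

-- Top objects ranked by heat map activity.
SELECT *
FROM   dba_heatmap_top_objects;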

Automatic Data Optimization (ADO)


Enabling the heat map functionality also enables Automatic Data Optimization (ADO), part of Information Lifecycle Management (ILM). This allows the database to control compression and storage
tiering of segments based on usage patterns. Although it can be used with regular table segments, it only really makes sense with partitioning, as it is unlikely you will have whole tables that are not
accessed for long periods of time, whereas it is very common to have partitions holding rarely accessed data.

Create some tablespaces to represent the storage tiers. The following syntax uses Oracle Managed Files (OMF), hence no datafile names are needed.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

CREATE TABLESPACE fast_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE medium_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE slow_storage_ts DATAFILE SIZE 1M AUTOEXTEND ON NEXT 1M;
A table can be created with an ADO ILM policy. The following example creates a partitioned invoices table. It manually allocates partitions to different storage tiers, and includes a tier policy on a
partition basis to migrate unused segments to tablespaces on slower storage. There is a compression policy at the table level, which is inherited by all partitions.

CONN test/test@pdb1

DROP TABLE invoices PURGE;

CREATE TABLE invoices (


invoice_no NUMBER NOT NULL,
invoice_date DATE NOT NULL,
comments VARCHAR2(500)
)
PARTITION BY RANGE (invoice_date)
(
PARTITION invoices_2016_q1 VALUES LESS THAN (TO_DATE('01/04/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
PARTITION invoices_2016_q2 VALUES LESS THAN (TO_DATE('01/07/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
PARTITION invoices_2016_q3 VALUES LESS THAN (TO_DATE('01/10/2016', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts
ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS,
PARTITION invoices_2016_q4 VALUES LESS THAN (TO_DATE('01/01/2017', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts
ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS,
PARTITION invoices_2017_q1 VALUES LESS THAN (TO_DATE('01/04/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts
ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS,
PARTITION invoices_2017_q2 VALUES LESS THAN (TO_DATE('01/07/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts
ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS
)
ILM ADD POLICY ROW STORE COMPRESS BASIC SEGMENT AFTER 3 MONTHS OF NO ACCESS;
We can see the policies have been applied using the USER_ILMOBJECTS view.

SET LINESIZE 200

COLUMN policy_name FORMAT A20


COLUMN object_owner FORMAT A15
COLUMN object_name FORMAT A15

SELECT policy_name,
object_owner,
object_name,
object_type,
inherited_from,
enabled,
deleted
FROM user_ilmobjects
ORDER BY 1;

POLICY_NAME OBJECT_OWNER OBJECT_NAME OBJECT_TYPE INHERITED_FROM ENA DEL


-------------------- --------------- --------------- ------------------ -------------------- --- ---
P13 SYS INVOICES TABLE POLICY NOT INHERITED YES NO
P13 SYS INVOICES TABLE PARTITION TABLE YES NO
P13 SYS INVOICES TABLE PARTITION TABLE YES NO
P13 SYS INVOICES TABLE PARTITION TABLE YES NO
P13 SYS INVOICES TABLE PARTITION TABLE YES NO
P13 SYS INVOICES TABLE PARTITION TABLE YES NO
P13 SYS INVOICES TABLE PARTITION TABLE YES NO
P14 SYS INVOICES TABLE PARTITION POLICY NOT INHERITED YES NO
P15 SYS INVOICES TABLE PARTITION POLICY NOT INHERITED YES NO
P16 SYS INVOICES TABLE PARTITION POLICY NOT INHERITED YES NO
P17 SYS INVOICES TABLE PARTITION POLICY NOT INHERITED YES NO

SQL>
We can also add policies to an existing table. The following example repeats what we saw earlier by creating the table, then applying the ADO ILM policies.

CONN test/test@pdb1

DROP TABLE invoices PURGE;

CREATE TABLE invoices (


invoice_no NUMBER NOT NULL,
invoice_date DATE NOT NULL,
comments VARCHAR2(500)
)
PARTITION BY RANGE (invoice_date)
(
PARTITION invoices_2016_q1 VALUES LESS THAN (TO_DATE('01/04/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
PARTITION invoices_2016_q2 VALUES LESS THAN (TO_DATE('01/07/2016', 'DD/MM/YYYY')) TABLESPACE slow_storage_ts,
PARTITION invoices_2016_q3 VALUES LESS THAN (TO_DATE('01/10/2016', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts,
PARTITION invoices_2016_q4 VALUES LESS THAN (TO_DATE('01/01/2017', 'DD/MM/YYYY')) TABLESPACE medium_storage_ts,
PARTITION invoices_2017_q1 VALUES LESS THAN (TO_DATE('01/04/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts,
PARTITION invoices_2017_q2 VALUES LESS THAN (TO_DATE('01/07/2017', 'DD/MM/YYYY')) TABLESPACE fast_storage_ts
);

ALTER TABLE invoices MODIFY PARTITION invoices_2016_q3


ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS;

ALTER TABLE invoices MODIFY PARTITION invoices_2016_q4
ILM ADD POLICY TIER TO slow_storage_ts READ ONLY SEGMENT AFTER 6 MONTHS OF NO ACCESS;

ALTER TABLE invoices MODIFY PARTITION invoices_2017_q1


ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS;

ALTER TABLE invoices MODIFY PARTITION invoices_2017_q2


ILM ADD POLICY TIER TO medium_storage_ts READ ONLY SEGMENT AFTER 3 MONTHS OF NO ACCESS;

ALTER TABLE invoices


ILM ADD POLICY ROW STORE COMPRESS BASIC SEGMENT AFTER 3 MONTHS OF NO ACCESS;
We can disable or delete policies at the table or partition level using the following commands. A concrete example follows the list.

-- Table-level.
ALTER TABLE <table-name> ILM DISABLE POLICY <policy-name>;
ALTER TABLE <table-name> ILM DELETE POLICY <policy-name>;
ALTER TABLE <table-name> ILM DISABLE_ALL;
ALTER TABLE <table-name> ILM DELETE_ALL;

-- Partition-level.
ALTER TABLE <table-name> MODIFY PARTITION <partition-name> ILM DISABLE POLICY <policy-name>;
ALTER TABLE <table-name> MODIFY PARTITION <partition-name> ILM DELETE POLICY <policy-name>;
ALTER TABLE <table-name> MODIFY PARTITION <partition-name> ILM DISABLE_ALL;
ALTER TABLE <table-name> MODIFY PARTITION <partition-name> ILM DELETE_ALL;
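For example, to strip the ADO policies from the demo INVOICES table created above, the generic commands translate into something like the following. This is only an illustration using the objects from this article.

-- Remove the tiering policy from a single partition.
ALTER TABLE invoices MODIFY PARTITION invoices_2017_q2 ILM DELETE_ALL;

-- Remove all remaining policies, including the table-level compression policy.
ALTER TABLE invoices ILM DELETE_ALL;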
The following views are available to display policy details.

{DBA|USER}_ILMDATAMOVEMENTPOLICIES
{DBA|USER}_ILMTASKS
{DBA|USER}_ILMEVALUATIONDETAILS
{DBA|USER}_ILMOBJECTS
{DBA|USER}_ILMPOLICIES
{DBA|USER}_ILMRESULTS
DBA_ILMPARAMETERS
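As a quick sanity check after adding or removing policies, the summary view can be queried directly. A minimal sketch, dumping all columns rather than assuming specific column names:

CONN test/test@pdb1

-- List the ADO policies visible to the current schema.
SELECT *
FROM   user_ilmpolicies;
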
ILM ADO Parameters
The full list of ILM ADO parameters is documented with the DBMS_ILM_ADMIN package. They can be displayed using the following query.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

COLUMN name FORMAT A20

SELECT name, value


FROM dba_ilmparameters
ORDER BY name;

NAME VALUE
-------------------- ----------
ENABLED 1
EXECUTION INTERVAL 15
EXECUTION MODE 2
JOB LIMIT 2
POLICY TIME 0
RETENTION TIME 30
TBS PERCENT FREE 25
TBS PERCENT USED 85

SQL>
These parameters can be altered using the DBMS_ILM_ADMIN.CUSTOMIZE_ILM procedure. There is a constant defined in the package for each parameter, with the name matching the parameter name
with the spaces replaced by underscores ("_").

BEGIN
DBMS_ILM_ADMIN.customize_ilm(DBMS_ILM_ADMIN.retention_time, 60);
END;
/
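By default ADO policies are evaluated during the maintenance window, but an evaluation can be triggered manually for testing. The following is a sketch using the DBMS_ILM.EXECUTE_ILM procedure, run as a user with execute privilege on DBMS_ILM; the exact overload and parameter names should be checked against the DBMS_ILM documentation for your release.

CONN test/test@pdb1

SET SERVEROUTPUT ON
DECLARE
  l_task_id NUMBER;
BEGIN
  -- Evaluate and execute any eligible ADO policies for the INVOICES table.
  DBMS_ILM.execute_ilm(owner       => 'TEST',
                       object_name => 'INVOICES',
                       task_id     => l_task_id);

  DBMS_OUTPUT.put_line('Task ID: ' || l_task_id);
END;
/

The outcome of the evaluation can then be checked in the ILM views, for example USER_ILMTASKS and USER_ILMEVALUATIONDETAILS.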

Service-Level Access Control Lists (ACLs) - Database Service Firewall in Oracle Database 12c Release 2 (12.2)

Setup
The LOCAL_REGISTRATION_ADDRESS_lsnr_alias setting must be added to the "listener.ora" file. It should either specify a protocol and group or be set to "ON", which defaults to "IPC" and "oinstall".

# LOCAL_REGISTRATION_ADDRESS_lsnr_alias = (address=(protocol=ipc)(group=oinstall))
# LOCAL_REGISTRATION_ADDRESS_lsnr_alias = ON
LOCAL_REGISTRATION_ADDRESS_LISTENER = ON
The FIREWALL attribute can be added to the listener endpoint to control the action of the database firewall.

Unset : If an ACL is present for the service, it is enforced. If no ACL is present for the service, all connections are considered valid.
FIREWALL=ON : Only connections matching an ACL are considered valid. All other connections are rejected.
FIREWALL=OFF : The firewall functionality is disabled, so all connections are considered valid.
If we wanted to force the firewall functionality we might amend the default listener configuration as follows. Remember, the FIREWALL attribute is optional.

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol7-122.localdomain)(PORT = 1521)(FIREWALL=ON))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
)

LOCAL_REGISTRATION_ADDRESS_LISTENER = ON
The DBSFWUSER user owns the DBMS_SFW_ACL_ADMIN package, which provides an API to manage service-level access control lists (ACLs). We will be using this API in the following examples.

Service-Level Access Control Lists (ACLs)


Service-level ACLs can limit access to any named service handled by the listener, including those for a PDB.

Create and start a test service.

CONN / AS SYSDBA

BEGIN
DBMS_SERVICE.create_service('my_cdb_service','my_cdb_service');
DBMS_SERVICE.start_service('my_cdb_service');
END;
/

COLUMN name FORMAT A30


COLUMN network_name FORMAT A30

SELECT name,
network_name
FROM cdb_services
ORDER BY 1;

NAME NETWORK_NAME
------------------------------ ------------------------------
SYS$BACKGROUND
SYS$USERS
cdb1 cdb1
cdb1XDB cdb1XDB
my_cdb_service my_cdb_service
pdb1 pdb1

SQL>
The IP_ADD_ACE procedure accepts a service name and a host parameter. The host parameter can be IPv4 or IPv6, and wildcards are allowed. Once the ACL is built it is saved using the COMMIT_ACL
procedure.

CONN / AS SYSDBA

BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('my_cdb_service','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('my_cdb_service','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('pdb1','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('pdb1','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
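Since wildcards are allowed in the host parameter, a whole subnet could be covered by a single entry. The following is only a sketch; the subnet and the wildcard pattern ('192.168.56.*') are assumptions, so check the exact pattern syntax supported by IP_ADD_ACE before relying on it. This entry is not added as part of the worked example, so it does not appear in the output below.

CONN / AS SYSDBA

BEGIN
  -- Hypothetical wildcard entry covering the whole 192.168.56.x subnet.
  dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('my_cdb_service','192.168.56.*');
  dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/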
The IP_ACL table holds all the saved ACLs, while the V$IP_ACL view lists the active ACLs.

-- Display the saved ACLs.


COLUMN service_name FORMAT A30
COLUMN host FORMAT A30

SELECT service_name,
host
FROM dbsfwuser.ip_acl
ORDER BY 1, 2;

SERVICE_NAME HOST
------------------------------ ------------------------------
"MY_CDB_SERVICE" 192.168.56.136
"MY_CDB_SERVICE" OL7-122.LOCALDOMAIN
"PDB1" 192.168.56.136
"PDB1" OL7-122.LOCALDOMAIN

SQL>

-- Display the active ACLs.


SELECT service_name,
host,
con_id
FROM v$ip_acl
ORDER BY 1, 2;

SERVICE_NAME HOST CON_ID


------------------------------ ------------------------------ ----------
MY_CDB_SERVICE 192.168.56.136 1
MY_CDB_SERVICE OL7-122.LOCALDOMAIN 1
PDB1 192.168.56.136 3
PDB1 OL7-122.LOCALDOMAIN 3

SQL>
At the time of writing the V$IP_ACL view seems to have an issue such that the data doesn't respond correctly to the format command of SQL*Plus.

With the ACL in place we can connect to the services from the database server, but not from any other machine. In the example below the SQL*Plus connections from the server work fine, but the
SQLcl connections from a PC fail with an "IO Error: Undefined Error" message.

$ sqlplus sys/OraPasswd1@ol7-122.localdomain:1521/my_cdb_service as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Sep 19 18:50:20 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> CONN test/test@ol7-122.localdomain:1521/pdb1


Connected.
SQL>

$ ./sql sys/OraPasswd1@ol7-122.localdomain:1521/my_cdb_service as sysdba

SQLcl: Release 17.2.0 Production on Tue Sep 19 18:54:35 2017

Copyright (c) 1982, 2017, Oracle. All rights reserved.

USER = sys
URL = jdbc:oracle:thin:@ol7-122.localdomain:1521/my_cdb_service
Error Message = IO Error: Undefined Error
Username? (RETRYING) ('sys/*********@ol7-122.localdomain:1521/my_cdb_service as sysdba'?)

$ ./sql test/test@ol7-122.localdomain:1521/pdb1

SQLcl: Release 17.2.0 Production on Tue Sep 19 19:20:07 2017

Copyright (c) 1982, 2017, Oracle. All rights reserved.

USER = test
URL = jdbc:oracle:thin:@ol7-122.localdomain:1521/pdb1
Error Message = IO Error: Undefined Error
Username? (RETRYING) ('test/*********@ol7-122.localdomain:1521/pdb1'?)
We can add an entry for the PC to allow it to connect.

CONN / AS SYSDBA

BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('my_cdb_service','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_ace('pdb1','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
The SQLcl connections from the PC now work as expected.

$ ./sql sys/OraPasswd1@ol7-122.localdomain:1521/my_cdb_service as sysdba

SQLcl: Release 17.2.0 Production on Tue Sep 19 18:59:53 2017

Copyright (c) 1982, 2017, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> CONN test/test@ol7-122.localdomain:1521/pdb1


Connected.
SQL>
The IP_REMOVE_ACE procedure is used to remove service-level ACL entries. The following removes all the service-level ACLs created for this example.

CONN / AS SYSDBA

BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('my_cdb_service','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('my_cdb_service','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('my_cdb_service','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('pdb1','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('pdb1','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_ace('pdb1','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/

-- Display the saved ACLs.


COLUMN service_name FORMAT A30
COLUMN host FORMAT A30

SELECT service_name,
host
FROM dbsfwuser.ip_acl
ORDER BY 1, 2;

no rows selected

SQL>
We can stop and remove the test service using the following code.

CONN / AS SYSDBA

BEGIN
DBMS_SERVICE.stop_service('my_cdb_service');
DBMS_SERVICE.delete_service('my_cdb_service');
END;
/
PDB-Level Access Control Lists (ACLs)
PDB-level ACLs allow us to manage access to all services for a PDB, rather than having to name them individually.

Create and start a test service in the PDB.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

BEGIN
DBMS_SERVICE.create_service('my_pdb_service','my_pdb_service');
DBMS_SERVICE.start_service('my_pdb_service');
END;
/

COLUMN name FORMAT A30


COLUMN network_name FORMAT A30

SELECT name,
network_name
FROM dba_services
ORDER BY 1;

NAME NETWORK_NAME
------------------------------ ------------------------------
my_pdb_service my_pdb_service
pdb1 pdb1

SQL>
The IP_ADD_PDB_ACE procedure accepts a PDB name and a host parameter. The host parameter can be IPv4 or IPv6, and wildcards are allowed. Once the ACL is built it is saved using the COMMIT_ACL
procedure in the normal way.

CONN / AS SYSDBA

BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_pdb_ace('pdb1','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_pdb_ace('pdb1','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
The IP_ACL table holds all the saved ACLs, while the V$IP_ACL view lists the active ACLs.

-- Display the saved ACLs.


COLUMN service_name FORMAT A35
COLUMN host FORMAT A30

SELECT service_name,
host
FROM dbsfwuser.ip_acl
ORDER BY 1, 2;

SERVICE_NAME HOST
------------------------------ ------------------------------
"566C59261E6B2CA6E0538838A8C001B3" 192.168.56.136
"566C59261E6B2CA6E0538838A8C001B3" OL7-122.LOCALDOMAIN
"MY_PDB_SERVICE" 192.168.56.136
"MY_PDB_SERVICE" OL7-122.LOCALDOMAIN
"PDB1" 192.168.56.136
"PDB1" OL7-122.LOCALDOMAIN

SQL>

-- Display the active ACLs.


SELECT service_name,
host,
con_id
FROM v$ip_acl
ORDER BY 1, 2;

SERVICE_NAME HOST CON_ID


----------------------------------- ------------------------------ ----------
MY_PDB_SERVICE 192.168.56.136 3
MY_PDB_SERVICE OL7-122.LOCALDOMAIN 3
PDB1 192.168.56.136 3
PDB1 OL7-122.LOCALDOMAIN 3

SQL>
With the ACL in place we can connect to the services from the database server, but not from any other machine. In the example below the SQL*Plus connections from the server work fine, but the
SQLcl connections from a PC fail with an "IO Error: Undefined Error" message.

$ sqlplus sys/OraPasswd1@ol7-122.localdomain:1521/my_pdb_service as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Sep 19 20:26:15 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> CONN test/test@ol7-122.localdomain:1521/pdb1


Connected.
SQL>

$ ./sql sys/OraPasswd1@ol7-122.localdomain:1521/my_pdb_service as sysdba

SQLcl: Release 17.2.0 Production on Tue Sep 19 20:26:36 2017

Copyright (c) 1982, 2017, Oracle. All rights reserved.

USER = sys
URL = jdbc:oracle:thin:@ol7-122.localdomain:1521/my_pdb_service
Error Message = IO Error: Undefined Error
Username? (RETRYING) ('sys/*********@ol7-122.localdomain:1521/my_pdb_service as sysdba'?)

$ ./sql test/test@ol7-122.localdomain:1521/pdb1

SQLcl: Release 17.2.0 Production on Tue Sep 19 20:27:51 2017

Copyright (c) 1982, 2017, Oracle. All rights reserved.

USER = test
URL = jdbc:oracle:thin:@ol7-122.localdomain:1521/pdb1
Error Message = IO Error: Undefined Error
Username? (RETRYING) ('test/*********@ol7-122.localdomain:1521/pdb1'?)
We can add an entry for the PC to allow it to connect.

CONN / AS SYSDBA

BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_add_pdb_ace('pdb1','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/
The SQLcl connections from the PC now work as expected.

$ ./sql sys/OraPasswd1@ol7-122.localdomain:1521/my_pdb_service as sysdba

SQLcl: Release 17.2.0 Production on Tue Sep 19 20:29:35 2017

Copyright (c) 1982, 2017, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> CONN test/test@ol7-122.localdomain:1521/pdb1


Connected.
SQL>
The IP_REMOVE_PDB_ACE procedure is used to remove PDB-level ACL entries. The following removes all the PDB-level ACLs created for this example.

CONN / AS SYSDBA

BEGIN
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_pdb_ace('pdb1','ol7-122.localdomain');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_pdb_ace('pdb1','192.168.56.136');
dbsfwuser.DBMS_SFW_ACL_ADMIN.ip_remove_pdb_ace('pdb1','192.168.56.1');
dbsfwuser.DBMS_SFW_ACL_ADMIN.commit_acl;
END;
/

-- Display the saved ACLs.
COLUMN service_name FORMAT A30
COLUMN host FORMAT A30

SELECT service_name,
host
FROM dbsfwuser.ip_acl
ORDER BY 1, 2;

no rows selected

SQL>
We can stop and remove the test service using the following code.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

BEGIN
DBMS_SERVICE.stop_service('my_pdb_service');
DBMS_SERVICE.delete_service('my_pdb_service');
END;
/
