
DATABASE SECURITY

WORKSHOP

Presented By:

Oracle Corporation

Lab Exercises Workbook


(REV: 24-NOV-2008)
Author(s)
Kurt Lysy
Barbara Gingrande
Mark Waldron

Technical Contributors and Reviewers
Kenneth Zeng
Jack Brinson

Copyright © 2008, Oracle. All rights reserved.

This documentation contains proprietary information of Oracle Corporation. It is
provided under a license agreement containing restrictions on use and disclosure and
is also protected by copyright law. Reverse engineering of the software is prohibited. If
this documentation is delivered to a U.S. Government Agency of the Department of
Defense, then it is delivered with Restricted Rights and the following legend is
applicable:

Restricted Rights Legend

Use, duplication or disclosure by the Government is subject to restrictions for
commercial computer software and shall be deemed to be Restricted Rights software
under Federal law, as set forth in subparagraph (c)(1)(ii) of DFARS 252.227-7013,
Rights in Technical Data and Computer Software (October 1988).

This material or any portion of it may not be copied in any form or by any means
without the express prior written permission of Oracle Corporation. Any other copying
is a violation of copyright law and may result in civil and/or criminal penalties.

If this documentation is delivered to a U.S. Government Agency not within the
Department of Defense, then it is delivered with “Restricted Rights,” as defined in FAR
52.227-14, Rights in Data-General, including Alternate III (June 1987).

The information in this document is subject to change without notice. If you find any
problems in the documentation, please report them in writing to Education Products,
Oracle Corporation, 500 Oracle Parkway, Redwood Shores, CA 94065. Oracle
Corporation does not warrant that this document is error-free.

Oracle and all references to Oracle Products are trademarks or registered trademarks
of Oracle Corporation.

All other products or company names are used for identification purposes only, and
may be trademarks of their respective owners.

Contact For This Document

Please direct any questions or comments regarding the contents of this document to
Kurt Lysy (kurt.lysy@oracle.com).

TABLE OF CONTENTS

Summary Of Accounts And Passwords
Important Aliases And URLs
LAB EXERCISE 01 – Starting Up Oracle11g Environment
LAB EXERCISE 02 – Implementing Network Security
LAB EXERCISE 03 – Transparent Data Encryption
LAB EXERCISE 04 – Tablespace Level Data Encryption
LAB EXERCISE 05 – Database Vault Realms
LAB EXERCISE 06 – Database Vault Command Rules
LAB EXERCISE 07 – Using Custom Factors In Database Vault
LAB EXERCISE 08 – Customizing Database Vault Separation Of Duty With OLS
LAB EXERCISE 09 – Starting Up Audit Vault Agent & Collectors
LAB EXERCISE 10 – Injecting Audit Records
LAB EXERCISE 11 – Creating Audit Vault Alerts
LAB EXERCISE 12 – Audit Management
LAB EXERCISE 13 – Adding A Second Collection Source
LAB EXERCISE 14 – Configuring Audit Policy
LAB EXERCISE 15 – Audit Vault Reporting Features
LAB EXERCISE 16 – Starting Up Source Database and Grid Control 10.2.0.4
LAB EXERCISE 17 – Masking Sensitive Application Data
APPENDIX A – Command Line Examples For Database Vault
APPENDIX B – Oracle Label Security (OLS) Lab Exercise
APPENDIX C – Expanding Logical Volumes In VM Image
APPENDIX D – Glossary Of Terms
APPENDIX E – Releasing Flash Recovery Area Disk Space In VM Image
APPENDIX F – Configuring Audit Management Features In Release 10.2.0.3
APPENDIX G – How To Convert FAT Disk Partition To NTFS
APPENDIX H – Audit Vault Best Practices

Summary of Accounts and Passwords

IMAGE NAME AND IP ADDRESS:

dbsecurity.oracle.com
192.168.214.67

IMAGE OPERATING SYSTEM ACCOUNTS:

oracle/oracle1
root/oracle1

11g ACCOUNTS (databases DB01 and DB02):

sysman/oracle1
sys/oracle1
dvowner/oracle12#
dvacctmgr/oracle12#

AUDIT VAULT ACCOUNTS:

avadmin/oracle12#
avauditor/oracle12#
avdvo/oracle12#
avdvam/oracle12#
sys/oracle1

GRID CONTROL ACCOUNTS:

sysman/oracle1

Important Aliases And URLs

• Aliases:

Alias      Execution Path                     Description
---------  ---------------------------------  ----------------------------------------------------------
agemctl    '$ORA_AGENT_HOME/bin/emctl'        Runs emctl for Grid Control agent
agent      '. /home/oracle/agent.sh'          Sets environment for Grid Control agent
av         '. /home/oracle/av.sh'             Sets environment for Audit Vault server
av1        '. /home/oracle/avagent.sh'        Sets environment for Audit Vault agent
db01       '. /home/oracle/db01.sh'           Sets environment for DB01 database instance
db02       '. /home/oracle/db02.sh'           Sets environment for DB02 database instance
emrep      '. /home/oracle/em.sh'             Sets environment for Grid Control EM repository database
grid       '. /home/oracle/grid.sh'           Same as emrep alias
oms        '. /home/oracle/oms.sh'            Sets environment for Oracle Management Server (OMS)
opmnstat   '. /home/oracle/opmn_status.sh'    Shows current status of all OPMN processes in 10gAS infrastructure
ora        'env|grep ORA'                     Shows current environment settings for session

• URLs:

URL                                         Description
------------------------------------------  ------------------------------------------
http://dbsecurity.oracle.com:4889/em        Grid Control Console
https://dbsecurity.oracle.com:5501/em       DB01 Database Console Enterprise Mgr
https://dbsecurity.oracle.com:5501/dva      DB01 Database Console DVA
https://dbsecurity.oracle.com:5502/em       DB02 Database Console Enterprise Mgr
https://dbsecurity.oracle.com:5502/dva      DB02 Database Console DVA
http://dbsecurity.oracle.com:5503/em        AV Server Database Console Enterprise Mgr
http://dbsecurity.oracle.com:5700/av        Audit Vault Console

LAB EXERCISE 01 – Starting Up Oracle11g Environment

1. Log in to the VMware image as user oracle. The password is oracle1.

2. Open a terminal window (easily done by right-clicking on the desktop -> Open
Terminal.)

3. In the command window, type alias to view all aliases created for this image.
Enter the alias db01 on the command line to set the environment for bringing up
the Oracle11g database DB01.

oracle:/home/oracle> alias

alias 10g='. /home/oracle/10g.sh'
alias 11g='. /home/oracle/11g.sh'
alias agemctl='$ORA_AGENT_HOME/bin/emctl'
alias agent='. /home/oracle/agent.sh'
alias av='. /home/oracle/av.sh'
alias av1='. /home/oracle/avagent.sh'
alias db01='. /home/oracle/db01.sh'
alias db02='. /home/oracle/db02.sh'
alias emrep='. /home/oracle/em.sh'
alias grid='. /home/oracle/grid.sh'
alias l.='ls -d .* --color=tty'
alias ll='ls -l --color=tty'
alias ls='ls --color=tty'
alias oms='. /home/oracle/oms.sh'
alias opmnstat='. /home/oracle/opmn_status.sh'
alias ora='env|grep ORA'
alias vi='vim'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --
show-tilde'
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1

4. Run the commands hostname and ifconfig to view the name and IP address of the
virtual host you are using in the VMware image.

oracle:/home/oracle> hostname
dbsecurity.oracle.com
oracle:/home/oracle> ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:80:BE:2C
inet addr:192.168.214.67 Bcast:192.168.214.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe80:be2c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:103 errors:0 dropped:0 overruns:0 frame:0
TX packets:62 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8879 (8.6 KiB) TX bytes:5659 (5.5 KiB)
Interrupt:185 Base address:0x1400

lo Link encap:Local Loopback


inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:2540 errors:0 dropped:0 overruns:0 frame:0
TX packets:2540 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3082462 (2.9 MiB) TX bytes:3082462 (2.9 MiB)

5. (Optional) If you wish to connect to this image from local tools on your laptop
(such as PuTTY or Internet Explorer), you can update your local
hosts file with the following line (on Windows XP this file is located at
C:\WINDOWS\system32\drivers\etc):
192.168.214.67 dbsecurity.oracle.com dbsecurity

6. Change directory to /home/oracle/scripts. Run the script
start_DB.sh. This script brings up the 11g listener, database instance, and
Enterprise Manager Database Console. [Be patient with the time taken for both
the instance and the Database Console to come up; the image has only a modest
amount of memory allocated to it.]
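The script essentially chains the three startup operations together. A minimal sketch of
what such a script contains is shown below; this is an illustration only, and the actual
start_DB.sh in /home/oracle/scripts may differ in detail.

#!/bin/bash
# Illustrative start_DB.sh-style script: start listener, instance, and DB Console

# 1. Start the Oracle Net listener
lsnrctl start

# 2. Start the database instance
sqlplus / as sysdba <<EOF
startup
exit
EOF

# 3. Start Enterprise Manager Database Control
emctl start dbconsole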
oracle:/home/oracle> cd scripts
oracle:/home/oracle/scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/scripts> . start_DB.sh

LSNRCTL for Linux: Version 11.1.0.7.0 - Production on 01-OCT-2008 06:50:44

Copyright (c) 1991, 2008, Oracle. All rights reserved.

Starting /u01/oracle/product/11.1.0/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.1.0.7.0 - Production


System parameter file is
/u01/oracle/product/11.1.0/db_1/network/admin/listener.ora
Log messages written to /u01/oracle/diag/tnslsnr/dbsecurity/listener/alert/log.xml
Listening on:
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbsecurity.oracle.com)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbsecurity.oracle.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.1.0.7.0 - Production
Start Date 01-OCT-2008 06:50:44
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/u01/oracle/product/11.1.0/db_1/network/admin/listener.ora
Listener Log File
/u01/oracle/diag/tnslsnr/dbsecurity/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbsecurity.oracle.com)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully

SQL*Plus: Release 11.1.0.7.0 - Production on Wed Oct 1 06:50:44 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to an idle instance.

SQL> ORACLE instance started.

Total System Global Area 418484224 bytes


Fixed Size 1313792 bytes
Variable Size 230687744 bytes
Database Buffers 180355072 bytes
Redo Buffers 6127616 bytes
Database mounted.
Database opened.
SQL>
SQL> set echo off
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 -
Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Oracle Enterprise Manager 11g Database Control Release 11.1.0.7.0
Copyright (c) 1996, 2008 Oracle Corporation. All rights reserved.
https://dbsecurity.oracle.com:5501/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control ................. started.
------------------------------------------------------------------
Logs are generated in directory
/u01/oracle/product/11.1.0/db_1/dbsecurity.oracle.com_db01/sysman/log
oracle:/home/oracle/scripts>

7. Verify that you are able to successfully log in to the Enterprise Manager Database
Console. Launch a browser and enter the URL
https://dbsecurity.oracle.com:5501/em. Log in as
sysman/oracle1. [If using the Firefox browser within the image, you should
see this URL in the Bookmarks list.]

8. Verify that the Database Console is successfully running by viewing the summary
screen (sample output shown below). Log out of Enterprise Manager when
finished. Do not stop the Database Console, since we will be using it for the ASO
and Database Vault lab exercises.

LAB EXERCISE 02 – Implementing Network Security

A. Business Driver

As of 2007, all merchants accepting credit cards for payment need to be compliant
with the Payment Card Industry (PCI) standards. For more information, see:
https://www.pcisecuritystandards.org/.

CashBankTrust’s requirements for meeting Payment Card Industry (PCI) standards
involve encrypting certain data at rest (which we will cover in our next lab)
AND encrypting data as it passes over the network. Specifically, under section 4 of
the PCI requirements:

Requirement 4: Encrypt transmission of cardholder data across open, public


networks

Sensitive information must be encrypted during transmission over networks that


are easy and common for a hacker to intercept, modify, and divert data while in
transit.

4.1 Use strong cryptography and security protocols such as secure sockets layer
(SSL) / transport layer security (TLS) and Internet protocol security (IPSEC) to
safeguard sensitive cardholder data during transmission over open, public
networks. Examples of open, public networks that are in scope of the PCI DSS are
the Internet, WiFi (IEEE 802.11x), global system for mobile communications
(GSM), and general packet radio service (GPRS).

4.1.1 For wireless networks transmitting cardholder data, encrypt the


transmissions by using WiFi protected access (WPA or WPA2) technology,
IPSEC VPN, or SSL/TLS. Never rely exclusively on wired equivalent privacy
(WEP) to protect confidentiality and access to a wireless LAN. If WEP is used, do
the following:

• Use with a minimum 104-bit encryption key and 24-bit initialization value


• Use ONLY in conjunction with WiFi protected access (WPA or WPA2)
technology, VPN, or SSL/TLS
• Rotate shared WEP keys quarterly (or automatically if the technology
permits)
• Rotate shared WEP keys whenever there are changes in personnel with
access to keys
• Restrict access based on media access code (MAC) address.

4.2 Never send unencrypted PANs by e-mail.

Network encryption is one feature of Oracle’s Advanced Security Option (ASO).
When information travels to and from the Database, ASO provides a high level of
security by offering support for the following encryption standards: RC4 (40, 56, 128,
and 256 bits), DES (40 and 56 bits), 3DES (2 and 3 keys), AES (128, 192, and 256
bits).

B. Setting up Network Encryption

In any network connection, it is possible for both the client and server to support more
than one encryption algorithm and more than one integrity algorithm. When a connection
is made, the server selects which algorithm to use, if any, from those algorithms specified
in the sqlnet.ora files.

In this lab we will set up network encryption by directly making changes to the
sqlnet.ora file for the DB01 database.

To set up Network Encryption, you need only add the following lines to your
sqlnet.ora file in $ORACLE_HOME/network/admin:
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (MD5)
SQLNET.ENCRYPTION_TYPES_SERVER = (DES40, RC4_40)
SQLNET.CRYPTO_SEED = "Between Ten and Seventy Random Characters"

If the file
/u01/oracle/product/11.1.0/db_1/network/admin/sqlnet.ora
does not exist, create the file using a text editor such as vi or gedit and place the
lines above in the file.

NOTE: A ready-to-use sqlnet.ora is available for you to use for this purpose.
It is located in /home/oracle/aso_scripts. The file contains both server
and client attributes for use in the image. Enable ASO network encryption by
copying this file into the directory $ORACLE_HOME/network/admin.
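A minimal way to do that from the shell, assuming the db01 environment is already set
(back up any existing file first), is:

cd $ORACLE_HOME/network/admin
[ -f sqlnet.ora ] && cp sqlnet.ora sqlnet.ora.bak
cp /home/oracle/aso_scripts/sqlnet.ora .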

C. Understanding the Lab

Each parameter is explained below. These few lines are all you need to implement
network encryption and integrity checking.

SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.ENCRYPTION_SERVER = REQUIRED

To negotiate whether to turn on integrity (CHECKSUM) or encryption (ENCRYPTION),
you can specify four possible values for the Oracle Advanced Security integrity and
encryption configuration parameters – REJECTED, ACCEPTED, REQUESTED or
REQUIRED. The four values are listed in order of increasing security. The value
REJECTED provides the minimum amount of security between client and server
communications, and the value REQUIRED provides the maximum amount of
network security. In this scenario, this side of the connection specifies that the
security service must be enabled. The connection fails if the other side specifies
REJECTED or if there is no compatible algorithm supported by the other side.
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (MD5)

MD5 and SHA1 are the two integrity algorithms supported by Oracle ASO.
SQLNET.ENCRYPTION_TYPES_SERVER = (DES40, RC4_40)

This parameter enumerates some subset of the encryption algorithms supported by


ASO.
SQLNET.CRYPTO_SEED="Between Ten and Seventy Random Characters"

Several seeds are used to generate a random number on the client and on the server.
One of the seeds that can be used is a user-defined encryption seed. It can be 10 to 70
characters in length and changed at any time. The longer the string, the more secure
the environment.

Any client connecting to this server needs compatible settings in its local
sqlnet.ora file. A client that explicitly sets REJECTED for a service, or that shares
no common algorithm with the server, will have its connection refused.
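For illustration, a compatible client-side sqlnet.ora might contain the following
(a sketch; the algorithm lists only need to overlap with the server's, they do not have
to match exactly):

SQLNET.CRYPTO_CHECKSUM_CLIENT = REQUESTED
SQLNET.ENCRYPTION_CLIENT = REQUESTED
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (MD5)
SQLNET.ENCRYPTION_TYPES_CLIENT = (DES40, RC4_40)

Because ACCEPTED is the default on both sides, only the server strictly has to be
configured; being explicit on the client documents the intent and lets you pin the
algorithm list.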

Logged in as the oracle user, note that you can also use the adapters utility to ascertain
which encryption and checksumming algorithms are available in the installation, for
example:
[oracle@gss-grc-lnx ~(sec)]$ adapters
...
...
Installed Oracle Advanced Security options are:

RC4 40-bit encryption


RC4 56-bit encryption
RC4 128-bit encryption
RC4 256-bit encryption
DES40 40-bit encryption
DES 56-bit encryption
3DES 112-bit encryption
3DES 168-bit encryption
AES 128-bit encryption
AES 192-bit encryption
AES 256-bit encryption
MD5 crypto-checksumming
SHA-1 crypto-checksumming

Kerberos v5 authentication
RADIUS authentication

This change will take effect for all new connections to the database, since the
parameters within sqlnet.ora are read during the establishment of every Oracle Net
session. Note that existing connections, i.e. those in place prior to the changes made to
the sqlnet.ora files, will remain unaffected by these encryption settings. This has
implications for how CashBankTrust would enforce these new settings in a production
environment across, for example, an application server farm, where the use of pooled
database connections implies the need to force reconnects from the mid-tier in order
to pick up the new settings. In a 24x7 environment, this might be achieved by using
ONS (Oracle Notification Service) to mark all such pooled connections as stale, thus
forcing new connections to be established.

This example demonstrates how to configure network encryption for Oracle clients
such as Oracle Audit Vault (we will be modifying this configuration in a later lab).
For JDBC applications such as SQL Developer, JDeveloper or a J2EE application, a
similar configuration can be set up.

To show that you are using ASO network encryption, you can query the dynamic view
V$SESSION_CONNECT_INFO. This view displays one row for each network service
adapter the database instance is currently using. Run the alias db01, connect with
SQL*Plus as system/oracle1, issue the SQL*Plus COLUMN format commands, and then
run the query shown below. You can see that ASO network encryption adapters are in
fact being used by Oracle for this instance.

oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> sqlplus system/oracle1

SQL*Plus: Release 11.1.0.7.0 - Production on Sat Oct 11 02:06:04 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

SQL> col network_service_banner format a24


SQL> col client_driver format a15
SQL> select sid,serial#,network_service_banner,client_driver
2 from v$session_connect_info;

SID SERIAL# NETWORK_SERVICE_BANNER CLIENT_DRIVER


---------- ---------- ------------------------ ---------------
118 2
120 26
170 12 Oracle Bequeath NT Proto SQL*PLUS
col Adapter for Linux: V
ersion 11.1.0.7.0 - Prod
uction

170 12 Oracle Advanced Security SQL*PLUS
: authentication service
for Linux: Version 11.1
.0.7.0 - Production

SID SERIAL# NETWORK_SERVICE_BANNER CLIENT_DRIVER


---------- ---------- ------------------------ ---------------

170 12 Oracle Advanced Security SQL*PLUS


: encryption service for
Linux: Version 11.1.0.7
.0 - Production

170 12 Oracle Advanced Security SQL*PLUS


: DES40 encryption servi
ce adapter for Linux: Ve
rsion 11.1.0.7.0 - Produ
cti

SID SERIAL# NETWORK_SERVICE_BANNER CLIENT_DRIVER


---------- ---------- ------------------------ ---------------

170 12 Oracle Advanced Security SQL*PLUS


: crypto-checksumming se
rvice for Linux: Version
11.1.0.7.0 - Production

170 12 Oracle Advanced Security SQL*PLUS


: MD5 crypto-checksummin
g service adapter

8 rows selected.

For more information, see:


http://download.oracle.com/docs/cd/B19306_01/network.102/b14268/asoconfg.htm

LAB EXERCISE 03 – Transparent Data Encryption

A. Business Driver

As of 2007, all merchants accepting credit cards for payment need to be compliant with
the Payment Card Industry (PCI) standards. For more information, see:
https://www.pcisecuritystandards.org/.
CashBankTrust’s requirements for meeting Payment Card Industry (PCI) standards
involve encrypting certain data at rest. Specifically, under section 3 of the PCI
requirements:
Requirement 3: Protect stored cardholder data
Encryption is a critical component of cardholder data protection. If an intruder
circumvents other network security controls and gains access to encrypted data,
without the proper cryptographic keys, the data is unreadable and unusable to that
person. Other effective methods of protecting stored data should be considered as
potential risk mitigation opportunities. For example, methods for minimizing risk
include not storing cardholder data unless absolutely necessary, truncating
cardholder data if full PAN is not needed, and not sending PAN in unencrypted e-
mails.
3.1 Keep cardholder data storage to a minimum. Develop a data retention and
disposal policy. Limit storage amount and retention time to that which is required
for business, legal, and/or regulatory purposes, as documented in the data
retention policy.
3.2 Do not store sensitive authentication data subsequent to authorization (even if
encrypted). Sensitive authentication data includes the data as cited in the following
Requirements 3.2.1 through 3.2.3:
3.2.1 Do not store the full contents of any track from the magnetic stripe (that is on
the back of a card, in a chip or elsewhere). This data is alternatively called full
track, track, track 1, track 2, and magnetic stripe data. In the normal course of
business, the following data elements from the magnetic stripe may need to be
retained: the accountholder’s name, primary account number (PAN), expiration
date, and service code. To minimize risk, store only those data elements needed
for business. NEVER store the card verification code or value or PIN verification
value data elements.
3.2.2 Do not store the card-validation code or value (three-digit or four-digit number
printed on the front or back of a payment card) used to verify card-not-present
transactions.
3.2.3 Do not store the personal identification number (PIN) or the encrypted PIN
block.
3.3 Mask PAN when displayed (the first six and last four digits are the maximum
number of digits to be displayed). Note: This requirement does not apply to
employees and other parties with a specific need to see the full PAN; nor does the
requirement supersede stricter requirements in place for displays of cardholder
data (for example, for point of sale [POS] receipts).
3.4 Render PAN, at minimum, unreadable anywhere it is stored (including data on
portable digital media, backup media, in logs, and data received from or stored by
wireless networks) by using any of the following approaches:
• Strong one-way hash functions (hashed indexes)
• Truncation
• Index tokens and pads (pads must be securely stored)
• Strong cryptography with associated key management processes and procedures.
The MINIMUM account information that must be rendered unreadable is the PAN.
3.4.1 If disk encryption is used (rather than file- or column-level database
encryption), logical access must be managed independently of native operating
system access control mechanisms (for example, by not using local system or
Active Directory accounts). Decryption keys must not be tied to user accounts.
3.5 Protect encryption keys used for encryption of cardholder data against both
disclosure and misuse.
3.5.1 Restrict access to keys to the fewest number of custodians necessary
3.5.2 Store keys securely in the fewest possible locations and forms.
3.6 Fully document and implement all key management processes and procedures
for keys used for encryption of cardholder data, including the following:
3.6.1 Generation of strong keys
3.6.2 Secure key distribution
3.6.3 Secure key storage
3.6.4 Periodic changing of keys
• As deemed necessary and recommended by the associated application (for
example, re-keying); preferably automatically
• At least annually.
3.6.5 Destruction of old keys
3.6.6 Split knowledge and establishment of dual control of keys (so that it requires
two or three people, each knowing only their part of the key, to reconstruct the
whole key)
3.6.7 Prevention of unauthorized substitution of keys
3.6.8 Replacement of known or suspected compromised keys
3.6.9 Revocation of old or invalid keys
3.6.10 Requirement for key custodians to sign a form stating that they understand
and accept their key-custodian responsibilities.

Oracle Advanced Security Transparent Data Encryption (TDE), first introduced in


Oracle Database 10g Release 2, is the industry's most advanced encryption solution.
TDE provides built-in key management and complete transparency for encryption of
sensitive application data. The database encryption process is turned on using DDL
commands, completely eliminating the need for application changes, programmatic
key management, database triggers, and views.

B. Setup

The following scripts will be used in this lab and are available in the
/home/oracle/aso_scripts directory:
• change_oe_schema.sql

• create_db_users.sql

Execute the script create_db_users.sql in SQL*Plus from a new terminal
window logged in as the oracle user. This will create users that will be used in the
rest of the course. Additionally, the script grants KLYSY some special privileges that
he will need in his role as the highly privileged security DBA. Partial output of the
script is shown below (you may see errors because the script attempts to drop
users that may not exist; this is normal behavior):
oracle: /home/oracle/aso_scripts> sqlplus / as sysdba

...

SQL> @create_db_users.sql
...
<a number of commands are executed here>
...
SQL> set echo off

Also, a directory named /home/oracle/wallet has already been created for


you. This directory will be used to store the master encryption wallet file.

C. Creating a Wallet

In this lab you will enable transparent data encryption for selected columns of the OE
schema. You will create a wallet. You will also rekey each table in the OE schema based
on corporate security requirements. Note that by doing this work alone (as KLYSY), you
will be violating PCI requirement 3.6.6.

Attempt to modify the OE Schema

Now attempt to modify the OE schema (containing order entry information) by running
the script /home/oracle/aso_scripts/change_oe_schema.sql as shown
below. Why do you get errors (such as below) when attempting to create or modify some
of the OE schema tables?

SQL> connect klysy/klysy


Connected.
SQL> @/home/oracle/aso_scripts/change_oe_schema.sql
...
...
CREATE TABLE "OE"."CUSTOMER_PAYMENT"
*

ERROR at line 1:
ORA-28365: wallet is not open
...
...

[Note: If you get errors other than those related to the wallet not being open, try
running the script as oe/oe.]
A wallet is a container that is used to store authentication and signing credentials,
including the TDE master key, PKI private keys, certificates, and trusted certificates
needed by SSL. With TDE, wallets are used on the server to protect the TDE master
key.
Oracle provides two different types of wallets: encryption wallet and auto-open
wallet. The encryption wallet (filename ewallet.p12) is the one recommended for
TDE and the one we will be using in this lab. It needs to be opened manually after
database startup and prior to TDE encrypted data being accessed. If the Wallet is not
opened, the database will return an error when TDE protected data is queried. The
auto-open wallet (filename cwallet.sso) opens automatically when a database is
started; hence it can be used for unattended Data Guard (Physical Standby only)
environments where encrypted columns are shipped to secondary sites.
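Configuring the wallet through Database Control, as the next section does, amounts to
writing the wallet location into sqlnet.ora and then creating and opening the wallet. If
you prefer the command line, the equivalent steps are roughly as follows (a sketch,
using the /home/oracle/wallet directory prepared for this lab); first the sqlnet.ora entry:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /home/oracle/wallet)))

and then, in SQL*Plus as a suitably privileged user:

-- Creates the wallet (if necessary), sets the master key, and leaves the wallet open
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "firstpartsecondpart";

-- After a database restart, re-open the existing encryption wallet
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "firstpartsecondpart";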

Define the wallet location in Oracle Database Control

Wallet management in Oracle Database 11g can be performed in the UI of Database


Control. If you have not started this already, the command to execute (as the user oracle
with your environment already pointing to the DB01 Oracle Home) is:
$ emctl start dbconsole
Log on to the GUI of Database Control using the URL
https://dbsecurity.oracle.com:5501/em, logging in as klysy/klysy as sysdba.

After logging in as klysy/klysy, you will be presented with the following screen:

Click on the “Server” tab at the top, and then, on the screen that next appears, click
the “Transparent Data Encryption” link under the “Security” section on the right-hand
side.

The following screen will appear:

At this point, no encryption wallet exists. We would also want to specify a wallet
location on the server that is distinct from any other files/directories that may be
backed up as part of a backup regime (i.e. we need to have the wallet backed up, but
it is preferable to have it not co-located with the Oracle RDBMS backup).
Click the “Change” button on this screen – you will be presented with the following:

Making sure “Network Profile” is selected in the dropdown box, click the “Go”
button to the right. You will be taken to the following screen:

Enter the O/S credentials of the operating system user here, oracle/oracle1 (note: in
practice this would more likely be an individual distinct from KLYSY), and click the
“Login” button. The following screen appears:

Click on the triangle next to “Advanced Security Options”. Click the “Wallet
Location” at the bottom of this screen and the following screen appears:

Enter “/home/oracle/wallet” as the value of the Encryption Wallet Directory.

Click OK to save your changes. To reflect this change in your session, log out of
Database Control and log in again as klysy.

Create the wallet in Database Control

Now, navigate via the “Server” tab to the “Transparent Data Encryption” link (under the
“Security” sub-section). The following screen should appear:

At this point, you can enter a wallet password. Enter the password
“firstpartsecondpart”. The masking facility in 11g ensures that two separate
individuals can each be entrusted with their respective portion of the wallet password.
This is in accordance with requirement 3.6.6 of PCI. Click “OK” and the following
screen should appear:

The wallet is now open for use.

Encrypting Data

Run the script again


Attempt to run the change_oe_schema.sql script again, as the user klysy:
SQL> @/home/oracle/aso_scripts/change_oe_schema
[Note: If you get errors other than those related to a table already existing, try running
the script as oe/oe.]
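Under the covers, the script issues ordinary DDL with the ENCRYPT clause. The
statements it runs are along the following lines (a sketch only; the exact columns and
algorithms are defined in change_oe_schema.sql itself):

-- Add encryption to existing columns of an existing table
ALTER TABLE oe.customers MODIFY (credit_limit ENCRYPT);
ALTER TABLE oe.customers MODIFY (cust_email ENCRYPT);

-- Or declare encrypted columns directly at table creation time
CREATE TABLE oe.customer_payment (
  customer_id          NUMBER,
  payment_id           NUMBER,
  payment_type         VARCHAR2(20)  ENCRYPT,
  payment_name         VARCHAR2(100),
  payment_card_number  NUMBER        ENCRYPT,
  payment_expiration   VARCHAR2(20)  ENCRYPT
);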
Observe the encryption attributes
Run the SQL*Plus command DESCRIBE against the OE tables CUSTOMERS and
CUSTOMER_PAYMENT. Observe the encryption attributes set for selected columns
in each table.
SQL> describe oe.customers
Name Null? Type
-----------------------------------------------------------
CUSTOMER_ID NOT NULL NUMBER(6)
CUST_FIRST_NAME NOT NULL VARCHAR2(20)
CUST_LAST_NAME NOT NULL VARCHAR2(20)
CUST_ADDRESS OE.CUST_ADDRESS_TYP
PHONE_NUMBERS OE.PHONE_LIST_TYP
NLS_LANGUAGE VARCHAR2(3)
NLS_TERRITORY VARCHAR2(30)
CREDIT_LIMIT NUMBER(9,2) ENCRYPT

CUST_EMAIL VARCHAR2(30) ENCRYPT
ACCOUNT_MGR_ID NUMBER(6)
CUST_GEO_LOCATION MDSYS.SDO_GEOMETRY
DATE_OF_BIRTH DATE ENCRYPT
MARITAL_STATUS VARCHAR2(20) ENCRYPT
GENDER VARCHAR2(1)
INCOME_LEVEL VARCHAR2(20)
SSN VARCHAR2(10) ENCRYPT

SQL> describe oe.customer_payment


Name Null? Type
------------------------------------------------------------------
CUSTOMER_ID NUMBER
PAYMENT_ID NUMBER
PAYMENT_TYPE VARCHAR2(20) ENCRYPT
PAYMENT_NAME VARCHAR2(100)
PAYMENT_CARD_NUMBER NUMBER ENCRYPT
PAYMENT_EXPIRATION VARCHAR2(20) ENCRYPT
The CUSTOMERS table is populated with data, some of which is now encrypted. It
should be noted that this encryption is performed as an update operation, which permits
reads but no updates while it runs. In CashBankTrust’s case, this sort of table may have
over a billion rows and cannot be taken offline; in such a scenario the use of online
redefinition would be indicated. We would likely use the BY ROWID option, and the
interim table would have the PAYMENT_CARD_NUMBER column encrypted (see
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tables.htm#sthref2342
for details). The benefit of online redefinition is a very short amount of downtime, while
otherwise remaining transparent to users and applications.
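A hedged sketch of such an online redefinition is shown below. The interim table name
and the use of CONS_USE_ROWID are illustrative assumptions, not the prescribed
procedure for this lab:

-- Illustrative only: encrypt PAYMENT_CARD_NUMBER via online redefinition,
-- keeping OE.CUSTOMER_PAYMENT available throughout.

-- 1. Interim table with the desired (encrypted) column definition
CREATE TABLE oe.customer_payment_int (
  customer_id          NUMBER,
  payment_id           NUMBER,
  payment_type         VARCHAR2(20),
  payment_name         VARCHAR2(100),
  payment_card_number  NUMBER ENCRYPT,
  payment_expiration   VARCHAR2(20)
);

BEGIN
  -- 2. Check the table can be redefined by ROWID
  DBMS_REDEFINITION.CAN_REDEF_TABLE('OE', 'CUSTOMER_PAYMENT',
                                    DBMS_REDEFINITION.CONS_USE_ROWID);
  -- 3. Start the redefinition (data is copied into the interim table)
  DBMS_REDEFINITION.START_REDEF_TABLE('OE', 'CUSTOMER_PAYMENT',
                                      'CUSTOMER_PAYMENT_INT',
                                      options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
  -- 4. (Copy dependent objects with COPY_TABLE_DEPENDENTS as needed.)
  -- 5. Finish: only a brief lock while the two tables swap identities
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('OE', 'CUSTOMER_PAYMENT',
                                       'CUSTOMER_PAYMENT_INT');
END;
/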
Now, since the wallet is open, the data will be returned decrypted. Select one of the
encrypted columns of CUSTOMERS and verify that the values come back in clear text:
SQL> SELECT CUST_EMAIL FROM OE.CUSTOMERS;

CUST_EMAIL
------------------------------
Ishwarya.Roberts@LAPWING.COM
Dieter.Matthau@VERDIN.COM
Divine.Sheen@COWBIRD.COM
Frederico.Romero@CURLEW.COM
Goldie.Montand@DIPPER.COM
…and so on…

If the wallet is not open, any attempt to access encrypted data causes an error to
be returned:

SQL> alter system set encryption wallet close;

System altered.

SQL> select CUST_EMAIL from OE.CUSTOMERS;


select CUST_EMAIL from OE.CUSTOMERS

ERROR at line 1:
ORA-28365: wallet is not open

Rekeying the Encrypted Columns

Re-create the master key in the wallet

As klysy, perform the following commands:


SQL> alter system set encryption wallet open identified by
"firstpartsecondpart";
This will re-open the wallet that was closed in the last exercise.
SQL> alter system set encryption key identified by
"firstpartsecondpart";
This will force a re-creation of the master key that is held within the wallet. NOTE: If for
any reason the above statements fail, disconnect from sqlplus and reconnect as klysy.

Rekey the tables with encrypted columns

Rekey the CUSTOMERS and CUSTOMER_PAYMENT table:

SQL> alter table oe.customers rekey;

Table altered.

SQL> alter table oe.customer_payment rekey;

Table altered.

SQL> @

CUST_EMAIL
------------------------------
Ishwarya.Roberts@LAPWING.COM

Dieter.Matthau@VERDIN.COM
Divine.Sheen@COWBIRD.COM
Frederico.Romero@CURLEW.COM
Goldie.Montand@DIPPER.COM
…and so on…
319 rows selected.
This activity would be performed by CashBankTrust staff from time to time in order to
satisfy the following PCI requirements:
3.6.4 Periodic changing of keys
• As deemed necessary and recommended by the associated application (for example,
re-keying); preferably automatically
• At least annually.
3.6.8 Replacement of known or suspected compromised keys
Remember, the unencrypted data will be available whenever the wallet is opened. The
data will be encrypted on disk and in any copies or backups of the data. Network
encryption (covered by the previous lab) encrypts the data in motion over the network.

Customer Objections

After presenting this solution to CashBankTrust, they mention to you that they already
have in place a couple of different technologies within the organization to achieve
something similar. In one case, a custom-built Java application is using the Jasypt
library to add encryption capabilities at the application layer; that is, data sent by the
application to the database has already been encrypted by the application itself. In the
other case, they are using a disk-based encryption solution that operates at the I/O layer
on the database server itself. All I/O performed on Oracle datafiles thus results in
datafiles that are completely encrypted.
What are the specific benefits of column-based TDE over these two different approaches?

LAB EXERCISE 04 – Tablespace Level Data Encryption

CashBankTrust has been using column-level TDE for quite some time. In the last lab, we
saw them applying it to tables associated with their new e-commerce application for PCI
compliance. Now they would like to evaluate Tablespace Encryption. Tablespace
encryption is an attractive option to CashBankTrust for several reasons:
o Less upfront analysis needs to be done to identify candidates for encryption –
only entire tablespaces need to be identified. Unlike with column-level TDE,
no impact assessment needs to be made around columns used as indexes.
o CashBankTrust is beginning to make use of SecureFiles, the 11g unstructured
data type, for secure and efficient storage of all documents, etc. used in their
applications. With Tablespace Encryption, these files can be encrypted on
disk along with all the other data types.
o On-disk encryption is important for protecting PII (personally identifiable
information), to comply with regulations such as HIPAA, and to protect data
from being compromised and used for purposes other than those for which it
was intended.

NOTE: All scripts used in this lab exercise can be found in the directory
/home/oracle/scripts.

1. Change directory to /home/oracle/scripts. Enter the alias db01 to set


your environment variables to the db01 database.

oracle:/home/oracle> cd scripts/
oracle:/home/oracle/scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1

2. Launch SQL*Plus with the /nolog option. Run the script
create_enc_tablespace.sql to build the encrypted tablespace used in
this exercise. [NOTE: Be sure that there are no segments currently residing in
any tablespace named EXAMPLE_ENC before running this script! Use the
provided script show_contents_of_example_enc_ts.sql to check this
first. The output below performs the check. If there are segments, move them
to another tablespace before running create_enc_tablespace.sql.]

oracle:/home/oracle/scripts> sqlplus /nolog

SQL*Plus: Release 11.1.0.7.0 - Production on Sun Oct 5 21:08:32 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

SQL> @show_contents_of_example_enc_ts.sql
SQL>
SQL> connect / as sysdba

Connected.
SQL>
SQL> select segment_name,segment_type from dba_segments
2 where tablespace_name='EXAMPLE_ENC'
3 /

no rows selected

SQL> @create_enc_tablespace.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> drop tablespace example_enc including contents and datafiles;
drop tablespace example_enc including contents and datafiles
*
ERROR at line 1:
ORA-00959: tablespace 'EXAMPLE_ENC' does not exist

SQL>
SQL> create tablespace example_enc
2 datafile '/u01/oracle/oradata/db01/example01_enc.dbf'
3 size 50m
4 encryption using 'AES192'
5 default storage(encrypt)
6 /

Tablespace created.

SQL> set echo off
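Before moving on, you can confirm from the data dictionary that the new tablespace
really is encrypted, for example:

-- Check the encryption flag and, via V$ENCRYPTED_TABLESPACES, the algorithm in use
select tablespace_name, encrypted
from dba_tablespaces
where tablespace_name='EXAMPLE_ENC';

select t.name, e.encryptionalg, e.encryptedts
from v$tablespace t, v$encrypted_tablespaces e
where t.ts#=e.ts#;

For EXAMPLE_ENC you should see ENCRYPTED = YES and an encryption algorithm of
AES192, matching the clause used in create_enc_tablespace.sql.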

3. To test the performance behavior of encrypted tablespaces, we wish to compare
execution plans for several queries involving the SH.CUSTOMERS and
SH.PRODUCTS tables. Initially, all SH schema objects reside in the unencrypted
tablespace EXAMPLE, so we first capture execution plans for the case where
no SH objects reside in an encrypted tablespace. We then move the
SH.CUSTOMERS and SH.PRODUCTS tables into the encrypted
tablespace EXAMPLE_ENC, rebuild their indexes, and re-run the same queries
to compare execution plans. Run the script show_sh_tables.sql to display
all tables owned by SH.
SQL> @show_sh_tables.sql
SQL>
SQL> select substr(a.table_name,1,30)
"TABLE",substr(b.tablespace_name,1,10) "TABLESPACE",
2 substr(a.owner,1,10) "OWNER",
3 b.encrypted "ENC?"
4 from dba_tables a, dba_tablespaces b
5 where a.tablespace_name=b.tablespace_name
6 and owner in ('SH')
7 order by 3,1,2
8 /

TABLE TABLESPACE OWNER ENC


------------------------------ ---------- ---------- ---
CAL_MONTH_SALES_MV EXAMPLE SH NO
CHANNELS EXAMPLE SH NO

COUNTRIES EXAMPLE SH NO
CUSTOMERS EXAMPLE SH NO
DIMENSION_EXCEPTIONS EXAMPLE SH NO
DR$SUP_TEXT_IDX$I EXAMPLE SH NO
DR$SUP_TEXT_IDX$R EXAMPLE SH NO
FWEEK_PSCAT_SALES_MV EXAMPLE SH NO
PRODUCTS EXAMPLE SH NO
PROMOTIONS EXAMPLE SH NO
SUPPLEMENTARY_DEMOGRAPHICS EXAMPLE SH NO

TABLE TABLESPACE OWNER ENC


------------------------------ ---------- ---------- ---
TIMES EXAMPLE SH NO

12 rows selected.

SQL> set echo off


SQL>

4. Verify that all indexes for SH.CUSTOMERS and SH.PRODUCTS are valid by
running the script show_sh_index_status_cust_prod.sql. Rebuild
any invalid indexes by running the script
rebuild_sh_indexes_cust_prod_nonencrypt.sql.

SQL> @show_sh_index_status_cust_prod.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> select substr(index_name,1,30) "INDEX",substr(table_name,1,10)
"TABLE",
2 substr(tablespace_name,1,10) "TABLESPACE",status
3 from dba_indexes
4 where owner='SH' and table_name in ('PRODUCTS','CUSTOMERS')
5 /

INDEX TABLE TABLESPACE STATUS


------------------------------ ---------- ---------- --------
CUSTOMERS_PK CUSTOMERS EXAMPLE VALID
CUSTOMERS_GENDER_BIX CUSTOMERS EXAMPLE VALID
CUSTOMERS_MARITAL_BIX CUSTOMERS EXAMPLE VALID
CUSTOMERS_YOB_BIX CUSTOMERS EXAMPLE VALID
PRODUCTS_PROD_STATUS_BIX PRODUCTS EXAMPLE VALID
PRODUCTS_PK PRODUCTS EXAMPLE VALID
PRODUCTS_PROD_SUBCAT_IX PRODUCTS EXAMPLE VALID
PRODUCTS_PROD_CAT_IX PRODUCTS EXAMPLE VALID

8 rows selected.

SQL>
SQL> set echo off
SQL>

5. Run the script get_execution_plans.sql to create plans for four queries


against SH objects.
SQL> @get_execution_plans.sql
SQL> connect / as sysdba
Connected.
SQL> set autotrace traceonly explain

SQL> @@lab3_exec_plan_1.sql
SQL> -- Test Execution Plan #1
SQL> select cust_last_name
2 from sh.customers
3 where cust_gender='M'
4 and cust_marital_status='Married'
5 /

Execution Plan
----------------------------------------------------------
Plan hash value: 1936762766

--------------------------------------------------------------------------------
----------------------

| Id | Operation | Name | Rows | Bytes | C


ost (%CPU)| Time |

--------------------------------------------------------------------------------
----------------------

| 0 | SELECT STATEMENT | | 1731 | 27696 |


341 (0)| 00:00:05 |

| 1 | TABLE ACCESS BY INDEX ROWID | CUSTOMERS | 1731 | 27696 |


341 (0)| 00:00:05 |

| 2 | BITMAP CONVERSION TO ROWIDS| | | |


| |

| 3 | BITMAP AND | | | |
| |

|* 4 | BITMAP INDEX SINGLE VALUE| CUSTOMERS_MARITAL_BIX | | |


| |

|* 5 | BITMAP INDEX SINGLE VALUE| CUSTOMERS_GENDER_BIX | | |


| |

--------------------------------------------------------------------------------
----------------------

Predicate Information (identified by operation id):


---------------------------------------------------

4 - access("CUST_MARITAL_STATUS"='Married')
5 - access("CUST_GENDER"='M')

SQL> @@lab3_exec_plan_2.sql
SQL> -- Test Execution Plan #2
SQL> select prod_name,prod_desc
2 from sh.products
3 where prod_id<10
4 /

Execution Plan
----------------------------------------------------------
Plan hash value: 2114366748

--------------------------------------------------------------------------------
-----------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|


Time |

--------------------------------------------------------------------------------
-----------

| 0 | SELECT STATEMENT | | 1 | 58 | 2 (0)|


00:00:01 |

| 1 | TABLE ACCESS BY INDEX ROWID| PRODUCTS | 1 | 58 | 2 (0)|
00:00:01 |

|* 2 | INDEX RANGE SCAN | PRODUCTS_PK | 1 | | 1 (0)|


00:00:01 |

--------------------------------------------------------------------------------
-----------

Predicate Information (identified by operation id):


---------------------------------------------------

2 - access("PROD_ID"<10)

SQL> @@lab3_exec_plan_3.sql
SQL> -- Test Execution Plan #3
SQL> select c.cust_last_name
2 , s.time_id
3 , s.prod_id
4 from sh.sales s, sh.customers c
5 where c.cust_id = s.cust_id (+)
6 and s.prod_id = 7145
7 /

Execution Plan
----------------------------------------------------------
Plan hash value: 3177224361

--------------------------------------------------------------------------------
---------------------------------------

| Id | Operation | Name | Rows | Bytes |


Cost (%CPU)| Time | Pstart| Pstop |

--------------------------------------------------------------------------------
---------------------------------------

| 0 | SELECT STATEMENT | | 1 | 30 |
30 (0)| 00:00:01 | | |

| 1 | NESTED LOOPS | | | |
| | | |

| 2 | NESTED LOOPS | | 1 | 30 |
30 (0)| 00:00:01 | | |

| 3 | PARTITION RANGE ALL | | 1 | 17 |


29 (0)| 00:00:01 | 1 | 28 |

| 4 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 1 | 17 |


29 (0)| 00:00:01 | 1 | 28 |

| 5 | BITMAP CONVERSION TO ROWIDS | | | |


| | | |

|* 6 | BITMAP INDEX SINGLE VALUE | SALES_PROD_BIX | | |


| | 1 | 28 |

|* 7 | INDEX UNIQUE SCAN | CUSTOMERS_PK | 1 | |


0 (0)| 00:00:01 | | |

| 8 | TABLE ACCESS BY INDEX ROWID | CUSTOMERS | 1 | 13 |


1 (0)| 00:00:01 | | |

--------------------------------------------------------------------------------
---------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

6 - access("S"."PROD_ID"=7145)
7 - access("C"."CUST_ID"="S"."CUST_ID")

SQL> @@lab3_exec_plan_4.sql
SQL> -- Test Execution Plan #4
SQL> select distinct
2 a.cust_last_name,a.cust_city,b.prod_name
3 from sh.customers a, sh.products b,sh.sales c
4 where a.cust_id=c.cust_id
5 and c.prod_id=b.prod_id
6 /

Execution Plan
----------------------------------------------------------
Plan hash value: 2067517253

--------------------------------------------------------------------------------
----------------------------

| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)


| Time | Pstart| Pstop |

--------------------------------------------------------------------------------
----------------------------

| 0 | SELECT STATEMENT | | 918K| 54M| | 15699 (1)


| 00:03:09 | | |

| 1 | HASH UNIQUE | | 918K| 54M| 127M| 15699 (1)


| 00:03:09 | | |

|* 2 | HASH JOIN | | 918K| 54M| | 1926 (2)


| 00:00:24 | | |

| 3 | TABLE ACCESS FULL | PRODUCTS | 72 | 2160 | | 3 (0)


| 00:00:01 | | |

|* 4 | HASH JOIN | | 918K| 28M| 1904K| 1917 (2)


| 00:00:24 | | |

| 5 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 1246K| | 406 (1)


| 00:00:05 | | |

| 6 | PARTITION RANGE ALL| | 918K| 8075K| | 492 (3)


| 00:00:06 | 1 | 28 |

| 7 | TABLE ACCESS FULL | SALES | 918K| 8075K| | 492 (3)


| 00:00:06 | 1 | 28 |

--------------------------------------------------------------------------------
----------------------------

Predicate Information (identified by operation id):


---------------------------------------------------

2 - access("C"."PROD_ID"="B"."PROD_ID")
4 - access("A"."CUST_ID"="C"."CUST_ID")

SQL> set autotrace off


SQL> set echo off

6. Move the tables SH.CUSTOMERS and SH.PRODUCTS to the encrypted


tablespace using the script
move_sh_cust_prod_tables_to_enc_ts.sql. Verify whether the

indexes for these tables are still valid by running the script
show_sh_index_status_cust_prod.sql.

SQL> @move_sh_cust_prod_tables_to_enc_ts.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter table sh.customers move tablespace example_enc;

Table altered.

SQL> alter table sh.products move tablespace example_enc;

Table altered.

SQL>
SQL> set echo off
SQL> @show_sh_index_status_cust_prod.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> select substr(index_name,1,30) "INDEX",substr(table_name,1,10)
"TABLE",
2 substr(tablespace_name,1,10) "TABLESPACE",status
3 from dba_indexes
4 where owner='SH' and table_name in ('PRODUCTS','CUSTOMERS')
5 /

INDEX TABLE TABLESPACE STATUS


------------------------------ ---------- ---------- --------
CUSTOMERS_PK CUSTOMERS EXAMPLE UNUSABLE
CUSTOMERS_GENDER_BIX CUSTOMERS EXAMPLE UNUSABLE
CUSTOMERS_MARITAL_BIX CUSTOMERS EXAMPLE UNUSABLE
CUSTOMERS_YOB_BIX CUSTOMERS EXAMPLE UNUSABLE
PRODUCTS_PROD_STATUS_BIX PRODUCTS EXAMPLE UNUSABLE
PRODUCTS_PK PRODUCTS EXAMPLE UNUSABLE
PRODUCTS_PROD_SUBCAT_IX PRODUCTS EXAMPLE UNUSABLE
PRODUCTS_PROD_CAT_IX PRODUCTS EXAMPLE UNUSABLE

8 rows selected.

SQL>
SQL> set echo off

7. Rebuild all indexes for the tables SH.CUSTOMERS and SH.PRODUCTS in the
non-encrypted tablespace EXAMPLE by running the script
rebuild_sh_indexes_cust_prod_nonencrypt.sql. Verify that all of
these indexes are now valid.

SQL> @rebuild_sh_indexes_cust_prod_nonencrypt.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter index sh.CUSTOMERS_PK rebuild tablespace example;

Index altered.

SQL> alter index sh.CUSTOMERS_GENDER_BIX rebuild tablespace example;

Index altered.

SQL> alter index sh.CUSTOMERS_MARITAL_BIX rebuild tablespace example;

Index altered.

SQL> alter index sh.CUSTOMERS_YOB_BIX rebuild tablespace example;

Index altered.

SQL> alter index sh.PRODUCTS_PROD_STATUS_BIX rebuild tablespace example;

Index altered.

SQL> alter index sh.PRODUCTS_PK rebuild tablespace example;

Index altered.

SQL> alter index sh.PRODUCTS_PROD_SUBCAT_IX rebuild tablespace example;

Index altered.

SQL> alter index sh.PRODUCTS_PROD_CAT_IX rebuild tablespace example;

Index altered.

SQL>
SQL> select substr(index_name,1,30) "INDEX",substr(table_name,1,10)
"TABLE",
2 substr(tablespace_name,1,10) "TABLESPACE",status
3 from dba_indexes
4 where owner='SH' and table_name in ('PRODUCTS','CUSTOMERS')
5 /

INDEX TABLE TABLESPACE STATUS


------------------------------ ---------- ---------- --------
CUSTOMERS_PK CUSTOMERS EXAMPLE VALID
CUSTOMERS_GENDER_BIX CUSTOMERS EXAMPLE VALID
CUSTOMERS_MARITAL_BIX CUSTOMERS EXAMPLE VALID
CUSTOMERS_YOB_BIX CUSTOMERS EXAMPLE VALID
PRODUCTS_PROD_STATUS_BIX PRODUCTS EXAMPLE VALID
PRODUCTS_PK PRODUCTS EXAMPLE VALID
PRODUCTS_PROD_SUBCAT_IX PRODUCTS EXAMPLE VALID
PRODUCTS_PROD_CAT_IX PRODUCTS EXAMPLE VALID

8 rows selected.

SQL>

8. Run the script get_execution_plans.sql to see the plans for the same queries
run in step (5). Note that the indexes, which are not encrypted, are still being used
in these plans for accessing the encrypted table data.
SQL> @get_execution_plans.sql
SQL> connect / as sysdba
Connected.
SQL> set autotrace traceonly explain
SQL> @@lab3_exec_plan_1.sql
SQL> -- Test Execution Plan #1

SQL> select cust_last_name
2 from sh.customers
3 where cust_gender='M'
4 and cust_marital_status='Married'
5 /

Execution Plan
----------------------------------------------------------
Plan hash value: 1936762766

--------------------------------------------------------------------------------
----------------------

| Id | Operation | Name | Rows | Bytes | C


ost (%CPU)| Time |

--------------------------------------------------------------------------------
----------------------

| 0 | SELECT STATEMENT | | 1731 | 27696 |


341 (0)| 00:00:05 |

| 1 | TABLE ACCESS BY INDEX ROWID | CUSTOMERS | 1731 | 27696 |


341 (0)| 00:00:05 |

| 2 | BITMAP CONVERSION TO ROWIDS| | | |


| |

| 3 | BITMAP AND | | | |
| |

|* 4 | BITMAP INDEX SINGLE VALUE| CUSTOMERS_MARITAL_BIX | | |


| |

|* 5 | BITMAP INDEX SINGLE VALUE| CUSTOMERS_GENDER_BIX | | |


| |

--------------------------------------------------------------------------------
----------------------

Predicate Information (identified by operation id):


---------------------------------------------------

4 - access("CUST_MARITAL_STATUS"='Married')
5 - access("CUST_GENDER"='M')

SQL> @@lab3_exec_plan_2.sql
SQL> -- Test Execution Plan #2
SQL> select prod_name,prod_desc
2 from sh.products
3 where prod_id<10
4 /

Execution Plan
----------------------------------------------------------
Plan hash value: 2114366748

--------------------------------------------------------------------------------
-----------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|


Time |

--------------------------------------------------------------------------------
-----------

| 0 | SELECT STATEMENT | | 1 | 58 | 2 (0)|


00:00:01 |

| 1 | TABLE ACCESS BY INDEX ROWID| PRODUCTS | 1 | 58 | 2 (0)|

00:00:01 |

|* 2 | INDEX RANGE SCAN | PRODUCTS_PK | 1 | | 1 (0)|


00:00:01 |

--------------------------------------------------------------------------------
-----------

Predicate Information (identified by operation id):


---------------------------------------------------

2 - access("PROD_ID"<10)

SQL> @@lab3_exec_plan_3.sql
SQL> -- Test Execution Plan #3
SQL> select c.cust_last_name
2 , s.time_id
3 , s.prod_id
4 from sh.sales s, sh.customers c
5 where c.cust_id = s.cust_id (+)
6 and s.prod_id = 7145
7 /

Execution Plan
----------------------------------------------------------
Plan hash value: 3177224361

--------------------------------------------------------------------------------
---------------------------------------

| Id | Operation | Name | Rows | Bytes |


Cost (%CPU)| Time | Pstart| Pstop |

--------------------------------------------------------------------------------
---------------------------------------

| 0 | SELECT STATEMENT | | 1 | 30 |
30 (0)| 00:00:01 | | |

| 1 | NESTED LOOPS | | | |
| | | |

| 2 | NESTED LOOPS | | 1 | 30 |
30 (0)| 00:00:01 | | |

| 3 | PARTITION RANGE ALL | | 1 | 17 |


29 (0)| 00:00:01 | 1 | 28 |

| 4 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 1 | 17 |


29 (0)| 00:00:01 | 1 | 28 |

| 5 | BITMAP CONVERSION TO ROWIDS | | | |


| | | |

|* 6 | BITMAP INDEX SINGLE VALUE | SALES_PROD_BIX | | |


| | 1 | 28 |

|* 7 | INDEX UNIQUE SCAN | CUSTOMERS_PK | 1 | |


0 (0)| 00:00:01 | | |

| 8 | TABLE ACCESS BY INDEX ROWID | CUSTOMERS | 1 | 13 |


1 (0)| 00:00:01 | | |

--------------------------------------------------------------------------------
---------------------------------------

Predicate Information (identified by operation id):


---------------------------------------------------

6 - access("S"."PROD_ID"=7145)
7 - access("C"."CUST_ID"="S"."CUST_ID")

SQL> @@lab3_exec_plan_4.sql
SQL> -- Test Execution Plan #4
SQL> select distinct
2 a.cust_last_name,a.cust_city,b.prod_name
3 from sh.customers a, sh.products b,sh.sales c
4 where a.cust_id=c.cust_id
5 and c.prod_id=b.prod_id
6 /

Execution Plan
----------------------------------------------------------
Plan hash value: 2067517253

---------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |   918K|    54M|       | 15699   (1)| 00:03:09 |       |       |
|   1 |  HASH UNIQUE           |           |   918K|    54M|   127M| 15699   (1)| 00:03:09 |       |       |
|*  2 |   HASH JOIN            |           |   918K|    54M|       |  1926   (2)| 00:00:24 |       |       |
|   3 |    TABLE ACCESS FULL   | PRODUCTS  |    72 |  2160 |       |     3   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN           |           |   918K|    28M|  1904K|  1917   (2)| 00:00:24 |       |       |
|   5 |     TABLE ACCESS FULL  | CUSTOMERS | 55500 |  1246K|       |   406   (1)| 00:00:05 |       |       |
|   6 |     PARTITION RANGE ALL|           |   918K|  8075K|       |   492   (3)| 00:00:06 |     1 |    28 |
|   7 |      TABLE ACCESS FULL | SALES     |   918K|  8075K|       |   492   (3)| 00:00:06 |     1 |    28 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C"."PROD_ID"="B"."PROD_ID")
   4 - access("A"."CUST_ID"="C"."CUST_ID")

SQL> set autotrace off


SQL> set echo off

9. Finally, we wish to verify that indexes can still be used in execution plans if both
   tables and indexes reside in an encrypted tablespace. Run the script
   rebuild_sh_indexes_cust_prod_encrypt.sql to rebuild all indexes on the tables
   SH.CUSTOMERS and SH.PRODUCTS in the encrypted tablespace EXAMPLE_ENC. Verify that
   these indexes are valid.

SQL> @rebuild_sh_indexes_cust_prod_encrypt.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter index sh.CUSTOMERS_PK rebuild tablespace example_enc;

Index altered.

SQL> alter index sh.CUSTOMERS_GENDER_BIX rebuild tablespace example_enc;

Index altered.

SQL> alter index sh.CUSTOMERS_MARITAL_BIX rebuild tablespace example_enc;

Index altered.

SQL> alter index sh.CUSTOMERS_YOB_BIX rebuild tablespace example_enc;

Index altered.

SQL> alter index sh.PRODUCTS_PROD_STATUS_BIX rebuild tablespace example_enc;

Index altered.

SQL> alter index sh.PRODUCTS_PK rebuild tablespace example_enc;

Index altered.

SQL> alter index sh.PRODUCTS_PROD_SUBCAT_IX rebuild tablespace example_enc;

Index altered.

SQL> alter index sh.PRODUCTS_PROD_CAT_IX rebuild tablespace example_enc;

Index altered.

SQL>
SQL> select substr(index_name,1,30) "INDEX", substr(table_name,1,10) "TABLE",
  2         substr(tablespace_name,1,15) "TABLESPACE", status
  3  from dba_indexes
  4  where owner='SH' and table_name in ('PRODUCTS','CUSTOMERS')
  5  /

INDEX                          TABLE      TABLESPACE      STATUS
------------------------------ ---------- --------------- --------
CUSTOMERS_PK                   CUSTOMERS  EXAMPLE_ENC     VALID
CUSTOMERS_GENDER_BIX           CUSTOMERS  EXAMPLE_ENC     VALID
CUSTOMERS_MARITAL_BIX          CUSTOMERS  EXAMPLE_ENC     VALID
CUSTOMERS_YOB_BIX              CUSTOMERS  EXAMPLE_ENC     VALID
PRODUCTS_PROD_STATUS_BIX       PRODUCTS   EXAMPLE_ENC     VALID
PRODUCTS_PK                    PRODUCTS   EXAMPLE_ENC     VALID
PRODUCTS_PROD_SUBCAT_IX        PRODUCTS   EXAMPLE_ENC     VALID
PRODUCTS_PROD_CAT_IX           PRODUCTS   EXAMPLE_ENC     VALID

8 rows selected.

SQL>
SQL> set echo off
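
Beyond checking the index STATUS column above, you can also confirm at the dictionary
level that EXAMPLE_ENC really is an encrypted tablespace. The queries below are a minimal
sketch (not part of the lab scripts) using standard 11g dictionary views:

-- Sketch: confirm that EXAMPLE_ENC is flagged as encrypted and see which
-- algorithm the tablespace uses.
select tablespace_name, encrypted
from   dba_tablespaces
where  tablespace_name = 'EXAMPLE_ENC';

select ts.name, e.encryptionalg, e.encryptedts
from   v$encrypted_tablespaces e, v$tablespace ts
where  ts.ts# = e.ts#;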

10. Run the script get_execution_plans.sql to see the plans for the same queries run
    earlier. Are the associated indexes, now encrypted, still being used in these plans?
SQL> @get_execution_plans.sql
SQL> connect / as sysdba
Connected.
SQL> set autotrace traceonly explain
SQL> @@lab3_exec_plan_1.sql
SQL> -- Test Execution Plan #1
SQL> select cust_last_name
2 from sh.customers
3 where cust_gender='M'
4 and cust_marital_status='Married'
5 /

Execution Plan
----------------------------------------------------------
Plan hash value: 1936762766

------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                       |  1731 | 27696 |   341   (0)| 00:00:05 |
|   1 |  TABLE ACCESS BY INDEX ROWID  | CUSTOMERS             |  1731 | 27696 |   341   (0)| 00:00:05 |
|   2 |   BITMAP CONVERSION TO ROWIDS |                       |       |       |            |          |
|   3 |    BITMAP AND                 |                       |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE | CUSTOMERS_MARITAL_BIX |       |       |            |          |
|*  5 |     BITMAP INDEX SINGLE VALUE | CUSTOMERS_GENDER_BIX  |       |       |            |          |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("CUST_MARITAL_STATUS"='Married')
   5 - access("CUST_GENDER"='M')

SQL> @@lab3_exec_plan_2.sql
SQL> -- Test Execution Plan #2
SQL> select prod_name,prod_desc
2 from sh.products
3 where prod_id<10
4 /

Execution Plan
----------------------------------------------------------
Plan hash value: 2114366748

-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |    58 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PRODUCTS    |     1 |    58 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | PRODUCTS_PK |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROD_ID"<10)

SQL> @@lab3_exec_plan_3.sql
SQL> -- Test Execution Plan #3
SQL> select c.cust_last_name
2 , s.time_id
3 , s.prod_id
4 from sh.sales s, sh.customers c
5 where c.cust_id = s.cust_id (+)
6 and s.prod_id = 7145
7 /

Execution Plan
----------------------------------------------------------
Plan hash value: 3177224361

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name           | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                |     1 |    30 |    30   (0)| 00:00:01 |       |       |
|   1 |  NESTED LOOPS                        |                |       |       |            |          |       |       |
|   2 |   NESTED LOOPS                       |                |     1 |    30 |    30   (0)| 00:00:01 |       |       |
|   3 |    PARTITION RANGE ALL               |                |     1 |    17 |    29   (0)| 00:00:01 |     1 |    28 |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID| SALES          |     1 |    17 |    29   (0)| 00:00:01 |     1 |    28 |
|   5 |      BITMAP CONVERSION TO ROWIDS     |                |       |       |            |          |       |       |
|*  6 |       BITMAP INDEX SINGLE VALUE      | SALES_PROD_BIX |       |       |            |          |     1 |    28 |
|*  7 |    INDEX UNIQUE SCAN                 | CUSTOMERS_PK   |     1 |       |     0   (0)| 00:00:01 |       |       |
|   8 |   TABLE ACCESS BY INDEX ROWID        | CUSTOMERS      |     1 |    13 |     1   (0)| 00:00:01 |       |       |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("S"."PROD_ID"=7145)
   7 - access("C"."CUST_ID"="S"."CUST_ID")

SQL> @@lab3_exec_plan_4.sql
SQL> -- Test Execution Plan #4
SQL> select distinct
2 a.cust_last_name,a.cust_city,b.prod_name
3 from sh.customers a, sh.products b,sh.sales c
4 where a.cust_id=c.cust_id
5 and c.prod_id=b.prod_id
6 /

Execution Plan
----------------------------------------------------------
Plan hash value: 2067517253

---------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |   918K|    54M|       | 15699   (1)| 00:03:09 |       |       |
|   1 |  HASH UNIQUE           |           |   918K|    54M|   127M| 15699   (1)| 00:03:09 |       |       |
|*  2 |   HASH JOIN            |           |   918K|    54M|       |  1926   (2)| 00:00:24 |       |       |
|   3 |    TABLE ACCESS FULL   | PRODUCTS  |    72 |  2160 |       |     3   (0)| 00:00:01 |       |       |
|*  4 |    HASH JOIN           |           |   918K|    28M|  1904K|  1917   (2)| 00:00:24 |       |       |
|   5 |     TABLE ACCESS FULL  | CUSTOMERS | 55500 |  1246K|       |   406   (1)| 00:00:05 |       |       |
|   6 |     PARTITION RANGE ALL|           |   918K|  8075K|       |   492   (3)| 00:00:06 |     1 |    28 |
|   7 |      TABLE ACCESS FULL | SALES     |   918K|  8075K|       |   492   (3)| 00:00:06 |     1 |    28 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C"."PROD_ID"="B"."PROD_ID")
   4 - access("A"."CUST_ID"="C"."CUST_ID")

SQL> set autotrace off


SQL> set echo off

-44-
LAB EXERCISE 05 – Database Vault Realms

CashBankTrust is in the process of offshoring and outsourcing many IT operations. They
would like to restrict access to the HR and Banking schemas both from overseas
CashBankTrust employees and from contractors. Additionally, due to a corporate
decision to reduce the number of application databases, the Human Resources (HR) and
Banking databases have been integrated. Our goal is to use Database Vault to enforce
separation of duties between the HR and Banking schemas in the integrated database, as
well as to restrict access to both schemas. Database Vault is a Preventative Database
Control that will limit access to sensitive information and help ensure the integrity of
data. DBV will prevent issues from ever occurring in the first place.

Database Vault also plays a role in many regulatory customer scenarios, as follows:

Some companies have used Database Vault as part of compliance with the European
Union Privacy Directive, which does not allow the transfer of personal data to a country
outside of the EU which lacks adequate personal data privacy safeguards. Other
companies are deploying Database Vault in response to external and internal audits that
have uncovered policy enforcement issues, and have directed them to be addressed by
preventative controls.

Similar protections on the OE schema would also help CashBankTrust comply with
Section 7 of the PCI Requirements:

Requirement 7: Restrict access to cardholder data by business need-to-know


This requirement ensures critical data can only be accessed by authorized
personnel.
7.1 Limit access to computing resources and cardholder information only
to those individuals whose job requires such access.
7.2 Establish a mechanism for systems with multiple users that restricts
access based on a user’s need to know and is set to “deny all” unless
specifically allowed.

-46-
1. If you have not already done so, please be sure the following is started (see pages
8-10):
• Listener for db01
• The db01 database
• The db01 db console

2. Start up your browser if it is not already started. Access (via Bookmark) the
   URL https://dbsecurity.oracle.com:5501/dva. Login to the
   Database Vault Web Administration UI (DVA) as dvowner. (Password for
   dvowner is oracle12#.) Enter the other login attributes as shown below.

1. Use DVA to create a realm named HR Realm. Ensure that all objects owned by
the user HR are protected by this realm. Make HR the only authorized user.

-47-
a. Select Realms.

b. Select Create.

-48-
c. Enter name and description as shown. Leave all other fields as defaults.
Select OK.

d. Select the radio button for HR Realm, then select Edit.

-49-
e. Scroll down to the section entitled Secure Realm Objects. Select Create.

-50-
f. Enter HR as object owner, object type as “%”, and object name as “%”.
Select OK.

-51-
g. Scroll down to the Realm Authorizations section. Select Create.
Select HR from pulldown list as Grantee, select Participant radio
button as Authorization Type.

-52-
h. Select OK twice. Your Realm screen should show the newly created realm
HR Realm as shown below.
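
Steps a–h above use the DVA user interface. For reference only, the same realm can also
be scripted with the Database Vault PL/SQL API; the block below is a minimal sketch
(parameter names assume the 11g DVSYS.DBMS_MACADM package, run as the Database Vault
owner, e.g. dvowner) and is not one of the lab scripts:

-- Sketch: create the realm, protect all HR objects, and authorize HR as a participant.
begin
  dvsys.dbms_macadm.create_realm(
    realm_name    => 'HR Realm',
    description   => 'Protects all objects in the HR schema',
    enabled       => dvsys.dbms_macutl.g_yes,
    audit_options => dvsys.dbms_macutl.g_realm_audit_fail);

  dvsys.dbms_macadm.add_object_to_realm(
    realm_name   => 'HR Realm',
    object_owner => 'HR',
    object_name  => '%',
    object_type  => '%');

  dvsys.dbms_macadm.add_auth_to_realm(
    realm_name    => 'HR Realm',
    grantee       => 'HR',
    rule_set_name => NULL,
    auth_options  => dvsys.dbms_macutl.g_realm_auth_participant);
end;
/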

2. Back in your terminal window, set the alias to db01. Connect to SQL*Plus as
SYSDBA. Attempt to select data from HR.DEPARTMENTS. Attempt to create a
table in the HR schema. Notice that both operations fail even though SYSDBA
possesses sufficient system privileges.
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> sqlplus / as sysdba

SQL*Plus: Release 11.1.0.7.0 - Production on Sun Oct 5 17:54:28 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

SQL> select * from hr.departments;


select * from hr.departments
*
ERROR at line 1:

ORA-01031: insufficient privileges

SQL> create table hr.junk(id number);


create table hr.junk(id number)
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-47401: Realm violation for create table on HR.JUNK
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

3. As user KLYSY, attempt to perform the same operations as in Step 2. Verify that
   both operations fail. [Leave this session connected this time.]

SQL> connect klysy/klysy


Connected.
SQL> select * from hr.departments;
select * from hr.departments
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> create table hr.junk(id number);


create table hr.junk(id number)
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-47401: Realm violation for create table on HR.JUNK
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

4. As user OE, attempt to perform the same operations as in Step 2. Why does one
succeed and not the other?

SQL> connect oe/oe


Connected.
SQL> select employee_id, last_name, first_name
2 from hr.employees
3 where rownum < 10;

EMPLOYEE_ID LAST_NAME FIRST_NAME


----------- ------------------------- --------------------
198 OConnell Donald
199 Grant Douglas
200 Whalen Jennifer
201 Hartstein Michael
202 Fay Pat
203 Mavris Susan
204 Baer Hermann
205 Higgins Shelley
206 Gietz William

9 rows selected.

SQL> create table hr.junk(id number);


create table hr.junk(id number)
*

ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-47401: Realm violation for create table on HR.JUNK
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

5. In DVA, edit HR Realm. Add user KLYSY as a participant to HR Realm. Select
   OK to save the changes.

6. Return to your SQL Plus session as user KLYSY. Attempt to run the same
commands as before. Verify that both commands now succeed since KLYSY is
an authorized user for HR Realm.

SQL> connect klysy/klysy


Connected.
SQL> show user
USER is "KLYSY"
SQL>
SQL> select * from hr.departments where rownum<6;

DEPARTMENT_ID DEPARTMENT_NAME MANAGER_ID LOCATION_ID


------------- ------------------------------ ---------- -----------
10 Administration 200 1700
20 Marketing 201 1800
30 Purchasing 114 1700
40 Human Resources 203 2400
50 Shipping 121 1500

SQL> create table hr.junk(id number);

Table created.

-55-
7. Although Transparent Data Encryption provides physical, file-level data
protection for tables in the BANKING schema, it is clear that powerful users such
as DBAs still have the ability to read sensitive BANKING data as clear text. To
provide additional protection, create a realm named “Banking Realm”. Add the
entire BANKING schema to the realm, and do not provide any authorized users to
this realm.
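
Once the Banking Realm has been created, you can confirm its definition from SQL*Plus as
well. The queries below are a sketch against the Database Vault data dictionary views
(view names per 11g; run as an account that can see the DVSYS views, such as dvowner):

-- Sketch: list the realms and the objects protected by the Banking Realm.
select name, enabled
from   dvsys.dba_dv_realm
where  name in ('HR Realm', 'Banking Realm');

select realm_name, owner, object_name, object_type
from   dvsys.dba_dv_realm_object
where  realm_name = 'Banking Realm';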

8. Return to your SQL Plus session. Connect as SYSDBA. Verify that SYSDBA
cannot query data in the BANKING tables CUSTOMER, ACCOUNT, and
ACCOUNT_BALANCE.

oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/oracle/product/11.1.0/db_1
OH=/oracle/product/11.1.0/db_1
oracle:/home/oracle> sqlplus / as sysdba

SQL*Plus: Release 11.1.0.7.0 - Production on Fri Jul 18 02:06:54 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

SQL> select * from banking.customer;

select * from banking.customer
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> select * from banking.account;


select * from banking.account
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> select * from banking.account_balance;


select * from banking.account_balance
*
ERROR at line 1:
ORA-01031: insufficient privileges

9. As SYSDBA, attempt to truncate BANKING.CUSTOMER. Also attempt to drop
   one column in the same table. Verify that both DDL commands fail due to a realm
   violation error.
SQL> truncate table banking.customer;
truncate table banking.customer
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-47401: Realm violation for truncate table on BANKING.CUSTOMER
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

SQL> alter table banking.customer drop column customer_country;


alter table banking.customer drop column customer_country
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-47401: Realm violation for alter table on BANKING.CUSTOMER
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

10. As user BANKING, attempt to perform the following:
    a. Select data in the table ACCOUNT_BALANCE.
    b. Create a new table ACCT_BAL_TMP as select from ACCOUNT_BALANCE.
    c. Truncate table ACCOUNT_BALANCE.

Note that the query succeeds but the two DDL commands fail. Although BANKING
owns these tables, without being an authorized user for the Banking realm the schema
owner is limited only to queries and DML statements on its objects.
SQL> connect banking/banking
Connected.
SQL> select * from banking.account_balance;

ACCOUNT_ID ACCOUNT_B BAL_AVAIL_ADJ BAL_AVAIL_CLOSING
---------- --------- ------------- -----------------
1001 01-MAY-01 121455.9 101440.01
1002 01-JUL-01 782000 780211.23
1003 15-JUN-03 978332.9 765232
1004 19-APR-02 850200.18 850200.18
1005 28-OCT-04 232900.1 120918.75
1006 03-SEP-99 496039.88 490190.59
1017 22-JAN-92 101900 100248.95
1018 01-SEP-00 768950 569122.3

8 rows selected.

SQL> create table banking.acct_bal_tmp


2 as select * from banking.account_balance;
as select * from banking.account_balance
*
ERROR at line 2:
ORA-00604: error occurred at recursive SQL level 1
ORA-47401: Realm violation for create table on BANKING.ACCT_BAL_TMP
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

SQL> truncate table banking.account_balance;


truncate table banking.account_balance
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-47401: Realm violation for truncate table on BANKING.ACCOUNT_BALANCE
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

-58-
LAB EXERCISE 06 – Database Vault Command Rules

It has been decided by Corporate Security that the application DBAs for the BANKING
schema must be restricted from being able to select, insert, update, or delete any banking
data. In addition, the senior application DBAs must be the only users who have the
capability of dropping or truncating any tables in the BANKING schema. Therefore, the
only users who are permitted access to the BANKING schema are the following:

• Application user BANKING (full access)
• BANKING_DBA_SR (all DDL commands and selects, no DMLs)
• BANKING_DBA_JR (all DDL commands, no selects, no DMLs)

In this lab we will implement the above security requirements using a combination of
Database Vault realms, rule sets, and command rules.

1. Login to DVA as dvowner/oracle12#. Select Realms, select the button for
   Banking Realm, and select Edit.

-59-
2. Under Realm Authorizations, add users BANKING, BANKING_DBA_SR
and BANKING_DBA_JR as realm participants. After completing this step,
the Realm Authorization section for Banking Realm should look like the
following:

-60-
3. Create a rule set named “No JR Banking DBAs” to restrict access for user
   BANKING_DBA_JR. Select Rule Sets, select CREATE. For the rule
   expression use the string dvf.f$session_user != 'BANKING_DBA_JR'.

-61-
4. Create a rule set named “No Banking App DBAs” to restrict access for users
BANKING_DBA_SR and BANKING_DBA_JR. For the rule expression use
the string dvf.f$session_user not in
('BANKING_DBA_SR','BANKING_DBA_JR')
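
If you prefer to script the rule definitions rather than type them into DVA, the
individual rules can be created and attached to the rule sets through the Database Vault
API. The block below is a minimal sketch (11g DVSYS.DBMS_MACADM, run as dvowner; the rule
names are illustrative, and the rule sets themselves are assumed to exist already, as
created in steps 3 and 4):

-- Sketch: define the two rule expressions and attach each to its rule set.
begin
  dvsys.dbms_macadm.create_rule(
    rule_name => 'Not JR Banking DBA',
    rule_expr => 'dvf.f$session_user != ''BANKING_DBA_JR''');
  dvsys.dbms_macadm.add_rule_to_rule_set(
    rule_set_name => 'No JR Banking DBAs',
    rule_name     => 'Not JR Banking DBA');

  dvsys.dbms_macadm.create_rule(
    rule_name => 'Not A Banking App DBA',
    rule_expr => 'dvf.f$session_user not in (''BANKING_DBA_SR'',''BANKING_DBA_JR'')');
  dvsys.dbms_macadm.add_rule_to_rule_set(
    rule_set_name => 'No Banking App DBAs',
    rule_name     => 'Not A Banking App DBA');
end;
/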

-62-
5. Create a command rule for the SELECT command on behalf of the BANKING
schema. We wish to prevent the Junior Banking DBAs from being able to select
banking data, so assign the rule set “No JR Banking DBAs” to the SELECT
command rule.

Select Command Rules, followed by CREATE. Under the Command pulldown
menu, choose SELECT. Under Object Owner choose BANKING. Under Object
Name choose “%”. Under Rule Set choose “No JR Banking DBAs”. Select OK.

6. Create command rules for the INSERT, UPDATE, and DELETE commands.
   We wish to restrict both senior and junior application DBAs from running DML
   commands against any objects owned by BANKING. For each command rule
   specify Object Owner as BANKING, Object Name as “%”, and Rule Set “No
   Banking App DBAs”. The output below shows the creation of the command rule
   for the INSERT command.
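
The same command rules can also be scripted. The block below is a minimal sketch of the
INSERT rule only (signature per the 11g DVSYS.DBMS_MACADM package, run as dvowner);
repeat it with 'UPDATE' and 'DELETE' for the other two rules:

-- Sketch: attach the "No Banking App DBAs" rule set to INSERTs against any BANKING object.
begin
  dvsys.dbms_macadm.create_command_rule(
    command       => 'INSERT',
    rule_set_name => 'No Banking App DBAs',
    object_owner  => 'BANKING',
    object_name   => '%',
    enabled       => dvsys.dbms_macutl.g_yes);
end;
/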

-63-
7. Verify that you have successfully created the four command rules. Under Command
Rules, you should see the following command rules:

8. Verify that users BANKING and BANKING_DBA_SR are able to select data from
the table BANKING.ACCOUNT_BALANCE, but that user BANKING_DBA_JR is
not able to select data from this table.
SQL> connect banking/banking
Connected.
SQL> select * from banking.account_balance;

ACCOUNT_ID ACCOUNT_B BAL_AVAIL_ADJ BAL_AVAIL_CLOSING


---------- --------- ------------- -----------------
1001 01-MAY-01 121455.9 101440.01
1002 01-JUL-01 782000 780211.23
1003 15-JUN-03 978332.9 765232
1004 19-APR-02 850200.18 850200.18
1005 28-OCT-04 232900.1 120918.75
1006 03-SEP-99 496039.88 490190.59
1017 22-JAN-92 101900 100248.95
1018 01-SEP-00 768950 569122.3

8 rows selected.

SQL> connect banking_dba_sr/banking_dba_sr


Connected.
SQL> select * from banking.account_balance;

ACCOUNT_ID ACCOUNT_B BAL_AVAIL_ADJ BAL_AVAIL_CLOSING
---------- --------- ------------- -----------------
1001 01-MAY-01 121455.9 101440.01
1002 01-JUL-01 782000 780211.23
1003 15-JUN-03 978332.9 765232
1004 19-APR-02 850200.18 850200.18
1005 28-OCT-04 232900.1 120918.75
1006 03-SEP-99 496039.88 490190.59
1017 22-JAN-92 101900 100248.95
1018 01-SEP-00 768950 569122.3

8 rows selected.

SQL> connect banking_dba_jr/banking_dba_jr


Connected.
SQL> select * from banking.account_balance;
select * from banking.account_balance
*
ERROR at line 1:
ORA-01031: insufficient privileges

9. Verify that user BANKING is able to delete data from the table
BANKING.ACCOUNT_BALANCE, but that users BANKING_DBA_SR and
BANKING_DBA_JR are not able to delete data from this table.

SQL> connect banking/banking


Connected.
SQL> delete from banking.account_balance;
8 rows deleted.
SQL> rollback;
Rollback complete.
SQL> connect banking_dba_sr/banking_dba_sr
Connected.
SQL> delete from banking.account_balance;
delete from banking.account_balance
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> connect banking_dba_jr/banking_dba_jr


Connected.
SQL> delete from banking.account_balance;
delete from banking.account_balance
*
ERROR at line 1:

ORA-01031: insufficient privileges

10. As user BANKING, create a new table named CUSTOMER_TMP using CTAS from
the table CUSTOMER. Verify that users BANKING, BANKING_DBA_SR, and
BANKING_DBA_JR are all able to truncate the table CUSTOMER_TMP.
SQL> connect banking/banking
Connected.
SQL> create table banking.customer_tmp as select * from banking.customer;

Table created.

SQL> truncate table banking.customer_tmp;

Table truncated.

SQL> connect banking_dba_sr/banking_dba_sr


Connected.
SQL> truncate table banking.customer_tmp;

Table truncated.

SQL> connect banking_dba_jr/banking_dba_jr


Connected.
SQL> truncate table banking.customer_tmp;

Table truncated.

11. Verify that users BANKING, BANKING_DBA_SR, BANKING_DBA_JR and
    SYSDBA are all unable to drop the table CUSTOMER_TMP. This is because
    Database Vault is currently incompatible with the Recycle Bin functionality
    introduced in Oracle 10g. If the Recycle Bin is enabled (as it is by default), objects
    within a Realm may not be dropped because the data would then be completely
    visible to any DBA.

SQL> connect banking/banking


Connected.
SQL> drop table banking.customer_tmp;
drop table banking.customer_tmp
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-47401: Realm violation for alter table on BANKING.CUSTOMER_TMP
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

SQL> connect banking_dba_sr/banking_dba_sr


Connected.
SQL> drop table banking.customer_tmp;
drop table banking.customer_tmp
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-47401: Realm violation for alter table on BANKING.CUSTOMER_TMP
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55

ORA-06512: at line 31

SQL> connect banking_dba_jr/banking_dba_jr


Connected.
SQL> drop table banking.customer_tmp;
drop table banking.customer_tmp
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-47401: Realm violation for alter table on BANKING.CUSTOMER_TMP
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

SQL> connect / as sysdba


Connected.
SQL> drop table banking.customer_tmp;
drop table banking.customer_tmp
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-47401: Realm violation for drop table on BANKING.CUSTOMER_TMP
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31

12. Now we will turn off the Recycle Bin functionality in the database and again attempt
    to drop the table. Notice the change in behavior.

SQL> alter system set recyclebin=off scope=spfile;

System altered.

SQL> shutdown immediate;


Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 418484224 bytes


Fixed Size 1313792 bytes
Variable Size 260047872 bytes
Database Buffers 150994944 bytes
Redo Buffers 6127616 bytes
Database mounted.
Database opened.
SQL> connect banking/banking
Connected.
SQL> drop table banking.customer_tmp;

Table dropped.

SQL>

-67-
LAB EXERCISE 07 – Using Custom Factors in Database Vault

Database Vault allows companies to enforce policies around data access, imposing
limitations on database access the way a firewall imposes limitations on network traffic.
In this lab you will create the “job role” factor. CashBankTrust is evaluating the use of
Database Vault for protecting HR data. This exercise will allow you to assist them by
demonstrating the following:
• Creating a rule set that restricts an authorized user to the HR realm based on
  whether he has the job role of SA_MAN or HR_MAN
• Creating a command rule that only allows managers to query employee data
  (i.e., job role of ‘%MAN’)
  o Note: If we “promote” an employee from clerk to manager (i.e.,
    apply a committed update of one row of HR.EMPLOYEES to
    change an employee’s job role from SA_CLERK to SA_MAN), that
    employee would be immediately able to query employee data

1. In DVA, remove the realm named HR Realm. Select the radio button next to
this realm, then click REMOVE.

2. Set the alias db01. Connect to SQL*Plus with the /nolog option. Run the
   script /home/oracle/scripts/create_retrieve_job_id_function.sql
   to create a function named retrieve_job_id. This function,
   owned by user HR, returns the job_id of the connected user if the user’s login
   name matches a value of the EMAIL column in the table HR.EMPLOYEES.

oracle:/home/oracle/scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/scripts> sqlplus /nolog

-68-
SQL*Plus: Release 11.1.0.7.0 - Production on Sun Oct 5 20:09:39 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

SQL> @create_retrieve_job_id_function.sql
SQL>
SQL> connect hr/hr
Connected.
SQL>
SQL> CREATE OR REPLACE
2 FUNCTION RETRIEVE_JOB_ID
3 RETURN VARCHAR2 AS
4 v_job_id VARCHAR2(10);
5 BEGIN
6 SELECT job_id INTO v_job_id FROM HR.EMPLOYEES
7 WHERE EMAIL = DVF.F$SESSION_USER;
8 RETURN v_job_id;
9 EXCEPTION
10 WHEN NO_DATA_FOUND THEN
11 RETURN NULL;
12 END;
13 /

Function created.

SQL>
SQL> set echo off

3. As HR, grant execute on retrieve_job_id to user DVSYS. You can run the
script named grant_execute_to_dvsys.sql to complete this task.

SQL> @grant_execute_to_dvsys.sql
SQL>
SQL> connect hr/hr
Connected.
SQL>
SQL> grant execute on retrieve_job_id to dvsys;

Grant succeeded.

4. In DVA, login as dvowner/oracle12#. Create a factor named JobRole. This
   factor will use the function HR.RETRIEVE_JOB_ID as its retrieval method.

In the Factor page, select CREATE.

-69-
Fill in the values for Name and Retrieval Method as shown below. Be sure to set
Evaluation to “By Access”. Select OK to save.

5. Verify that the factor JobRole works as intended. Test use of JobRole for the
   users SMAVRIS, WSMITH, KPARTNER, and PKESTNER. Use the script
   test_jobrole_factor.sql to perform these tests. Note the returned
   values for dvf.f$jobrole. This factor returns a null value for
   PKESTNER because this user is not an employee.

SQL> @test_jobrole_factor.sql
SQL>
SQL> connect smavris/smavris
Connected.
SQL> select dvf.f$jobrole from dual;

F$JOBROLE
--------------------------------------------------------------------
HR_REP

SQL>
SQL> connect wsmith/wsmith
Connected.
SQL> select dvf.f$jobrole from dual;

F$JOBROLE
--------------------------------------------------------------------
SA_REP

SQL>
SQL> connect kpartner/kpartner
Connected.
SQL> select dvf.f$jobrole from dual;

F$JOBROLE
--------------------------------------------------------------------
SA_MAN

SQL>
SQL> connect pkestner/pkestner
Connected.
SQL> select dvf.f$jobrole from dual;

F$JOBROLE
--------------------------------------------------------------------

SQL>
SQL> set echo off

6. Create a rule set named “SA Employees Only”. Include a rule named “SA
Emps Only Rule” that evaluates to TRUE only if the connected user is an
employee having a jobrole of either SA_MAN or SA_REP. Use the rule
expression dvf.f$jobrole in ('SA_MAN','SA_REP').

First, create the new rule set as shown below:

-71-
Add the new rule to the rule set:

Be sure to use the correct syntax for the rule expression:

-72-
7. To test use of the new rule set, create a command rule on the SELECT
statement for the object HR.DEPARTMENTS. Apply the rule set “SA
Employees Only” to the command rule.
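
Before testing, you can confirm which command rules are now in force. The query below is
a sketch against the Database Vault view (name per 11g; run as dvowner or another account
with access to the DVSYS views):

-- Sketch: list all command rules, including the new SELECT rule on HR.DEPARTMENTS.
select command, object_owner, object_name, rule_set_name, enabled
from   dvsys.dba_dv_command_rule
order  by object_owner, command;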

8. Connect to SQL Plus (respectively) as users WSMITH, SMAVRIS,
   KPARTNER, PKESTNER and KLYSY. Verify that only WSMITH and
   KPARTNER are able to successfully query the table HR.DEPARTMENTS
   because of their job roles. Run the script test_selects_on_depts.sql
   to perform these tests.

SQL> @test_selects_on_depts.sql
SQL>
SQL> connect wsmith/wsmith
Connected.
SQL> select * from hr.departments where rownum<6;

DEPARTMENT_ID DEPARTMENT_NAME MANAGER_ID LOCATION_ID


------------- ------------------------------ ---------- -----------
10 Administration 200 1700
20 Marketing 201 1800
30 Purchasing 114 1700
40 Human Resources 203 2400

50 Shipping 121 1500

SQL>
SQL> connect smavris/smavris
Connected.
SQL> select * from hr.departments where rownum<6;
select * from hr.departments where rownum<6
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL>
SQL> connect kpartner/kpartner
Connected.
SQL> select * from hr.departments where rownum<6;

DEPARTMENT_ID DEPARTMENT_NAME MANAGER_ID LOCATION_ID


------------- ------------------------------ ---------- -----------
10 Administration 200 1700
20 Marketing 201 1800
30 Purchasing 114 1700
40 Human Resources 203 2400
50 Shipping 121 1500

SQL>
SQL> connect pkestner/pkestner
Connected.
SQL> select * from hr.departments where rownum<6;
select * from hr.departments where rownum<6
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL>
SQL> connect klysy/klysy
Connected.
SQL> select * from hr.departments where rownum<6;
select * from hr.departments where rownum<6
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL>
SQL> set echo off
SQL>

-74-
LAB EXERCISE 08 – Customizing Database Vault Separation of Duty With OLS

Oracle Label Security helps organizations address security and compliance requirements
using sensitivity labels such as confidential and sensitive. Sensitivity labels can be
assigned to users in the form of label authorizations and associated with operations and
objects inside the database using data labels. Label authorizations provide tremendous
flexibility in making access control decisions and enforcing separation of duty. Oracle
Label Security can be used to address numerous operational issues related to security,
compliance and privacy. Used with Oracle Database Vault, Oracle Label Security label
authorizations are factors that control access to applications, databases and data.
Oracle Database Vault command rules can check whether a user has been authorized
access to Sensitive data considered Personally Identifiable Information (PII). Database
administrators can be assigned different label authorizations, enforcing separation of
duty within a consolidated application environment. For example, a Select command
rule can check a user's label authorization before allowing access to an application
table.

Label authorizations combined with Oracle Database Vault enables powerful access
control policies within the database.
o Limit connections to databases based on whether label authorizations include
Sensitive Personally Identifiable Information (PII) access
o Limit access to application tables based on whether label authorizations
include Sensitive Personally Identifiable Information (PII) access
o Limit DDL such as Create Table based on whether label authorizations
include sensitive Personally Identifiable Information (PII) access.

1. Verify that the DB01 database instance and DB Control are both up and
running.
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> sqlplus / as sysdba

SQL*Plus: Release 11.1.0.7.0 - Production on Fri Oct 10 01:26:47 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

SQL> select instance_name,status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
db01 OPEN

-75-
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
- Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
oracle:/home/oracle> emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.1.0.7.0
Copyright (c) 1996, 2008 Oracle Corporation. All rights reserved.
https://dbsecurity.oracle.com:5501/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory
/u01/oracle/product/11.1.0/db_1/dbsecurity.oracle.com_db01/sysman/log

2. Bring up Enterprise Manager DB Control by launching the URL
   https://dbsecurity.oracle.com:5501/em. Login as
   lbacsys/lbacsys.

3. Navigate to Server -> Oracle Label Security. Click on CREATE to build a
   new Oracle Label Security policy.

-76-
4. Under the GENERAL tab, name the policy DBA_ACCESS_CONTROL and
specify the column name CLASSES. Be sure to click on the Enabled check
box, and set NO_CONTROL as the default policy enforcement option. Click
on the LABEL COMPONENTS tab when finished.

5. Enter level attributes as shown below. Click OK when finished.
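
For reference, the policy and its levels can also be created without the EM pages. The
block below is a minimal sketch using the OLS packaged procedures (run as LBACSYS; the
numeric level values are illustrative only -- the values entered in the screenshot are
what actually matter):

-- Sketch: create the DBA_ACCESS_CONTROL policy and its two sensitivity levels.
begin
  sa_sysdba.create_policy(
    policy_name     => 'DBA_ACCESS_CONTROL',
    column_name     => 'CLASSES',
    default_options => 'NO_CONTROL');

  sa_components.create_level(
    policy_name => 'DBA_ACCESS_CONTROL',
    level_num   => 1000,
    short_name  => 'S',
    long_name   => 'SENSITIVE');

  sa_components.create_level(
    policy_name => 'DBA_ACCESS_CONTROL',
    level_num   => 2000,
    short_name  => 'HS',
    long_name   => 'HIGHLY SENSITIVE');
end;
/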

-77-
6. You should now see the successfully created label policy. Click on GO to add
authorized users for this label policy.

7. Click on ADD USERS.

-78-
8. Under Database Users, click on ADD, then enter HR_DBA% followed by
clicking on GO to find the two DBA users of interest.

9. Since each user will be provided different authorization levels, it is necessary
   to add each user separately.

10. Click on the check box next to the user name HR_DBA_JR, then click SELECT.

11. Make certain that HR_DBA_JR shows up in your list of Database Users, as
shown below. [If this is not the case, select ADD under Database Users and
retry.] Then click NEXT.

-79-
12. Under Privileges, select the check box next to the READ policy as shown
below. Then click on NEXT.

13. Under Levels, set the value S (“Sensitive”) for all values as shown below,
then click NEXT.

14. Click NEXT without changing any settings under Audit.

-80-
15. Under Review, verify the settings followed by clicking on FINISH.

16. You should now see a message indicating that the user was successfully
added. Repeat steps 7-15 for adding the user HR_DBA_SR. The only
difference you will make is to assign clearance level HS (“Highly Sensitive”)
when applying Step 13. When finished, you see the following authorization
summary. Logout DB Console when finished.
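
The label authorizations you just assigned through EM can also be expressed with the OLS
API. The block below is a minimal sketch (run as LBACSYS; procedure and parameter names
assume the 11g SA_USER_ADMIN package):

-- Sketch: HR_DBA_JR gets a maximum level of S, HR_DBA_SR gets HS, and both get
-- the READ privilege selected in step 12.
begin
  sa_user_admin.set_levels(
    policy_name => 'DBA_ACCESS_CONTROL',
    user_name   => 'HR_DBA_JR',
    max_level   => 'S');
  sa_user_admin.set_user_privs(
    policy_name => 'DBA_ACCESS_CONTROL',
    user_name   => 'HR_DBA_JR',
    privileges  => 'READ');

  sa_user_admin.set_levels(
    policy_name => 'DBA_ACCESS_CONTROL',
    user_name   => 'HR_DBA_SR',
    max_level   => 'HS');
  sa_user_admin.set_user_privs(
    policy_name => 'DBA_ACCESS_CONTROL',
    user_name   => 'HR_DBA_SR',
    privileges  => 'READ');
end;
/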

17. Login to DVA at https://dbsecurity.oracle.com:5501/dva as
    dvowner/oracle12#. Enter all other credentials as shown below.

-81-
18. Drop any realms and/or command rules that have been created on HR schema
objects. Create a new realm named “HR Realm Special Authorization”,
including all HR objects and having the user accounts HR_DBA_JR and
HR_DBA_SR as the only authorized users for this realm. Click on OK when
finished.

-82-
19. Navigate to Rule Sets. Create a rule set named “Check DBA Clearance” containing a
    single rule as defined below. Click OK when finished. The rule expression is:
    dominates(sa_utl.numeric_label('DBA_ACCESS_CONTROL'),char_to_label('DBA_ACCESS_CONTROL','HS'))=1

-83-
20. The rule you created examines whether the clearance of the current user dominates
    (i.e., is higher than or equal to) “Highly Sensitive”. Verify that your successfully
    created rule looks like the following. Click OK when finished.
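
If you want to see what the rule expression evaluates to for a given session, you can run
it interactively. The query below is a sketch; it assumes the connected account is allowed
to call the OLS operators, and a result of 1 means that session's clearance dominates HS:

-- Sketch: evaluate the same expression the rule set uses, for the current session.
connect hr_dba_sr/hr_dba_sr
select dominates(sa_utl.numeric_label('DBA_ACCESS_CONTROL'),
                 char_to_label('DBA_ACCESS_CONTROL','HS')) as dominates_hs
from   dual;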

21. Create a command rule on the SELECT statement for the HR.EMPLOYEES
    table. Include the rule set “Check DBA Clearance”. Click OK when finished.

-84-
22. Connect to SQL Plus using the /nolog option. Our first test will be to show
    that with the exception of selects against the HR.EMPLOYEES table, both
    HR_DBA_JR and HR_DBA_SR have full access to the HR schema. As each
    of these users, attempt to query HR.DEPARTMENTS and also to reorganize
    HR.EMPLOYEES. Verify that all of these operations succeed for each user.

oracle:/home/oracle> sqlplus /nolog

SQL*Plus: Release 11.1.0.7.0 - Production on Fri Oct 10 04:57:29 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

SQL> connect hr_dba_sr/hr_dba_sr


Connected.
SQL> select * from hr.departments where rownum<6;

DEPARTMENT_ID DEPARTMENT_NAME MANAGER_ID LOCATION_ID


------------- ------------------------------ ---------- -----------
10 Administration 200 1700
20 Marketing 201 1800
30 Purchasing 114 1700
40 Human Resources 203 2400
50 Shipping 121 1500

SQL> alter table hr.employees move;

Table altered.

SQL> connect hr_dba_jr/hr_dba_jr


Connected.
SQL> select * from hr.departments where rownum<6;

DEPARTMENT_ID DEPARTMENT_NAME MANAGER_ID LOCATION_ID


------------- ------------------------------ ---------- -----------
10 Administration 200 1700
20 Marketing 201 1800
30 Purchasing 114 1700
40 Human Resources 203 2400

50 Shipping 121 1500

SQL> alter table hr.employees move;

Table altered.

23. Finally, as each of the two DBA users, attempt to query HR.EMPLOYEES.
    You should observe that only HR_DBA_SR, having the higher security
    clearance, is able to successfully read employee data.

SQL> connect hr_dba_sr/hr_dba_sr


Connected.
SQL> select employee_id,last_name,first_name from hr.employees
where rownum<6;

EMPLOYEE_ID LAST_NAME FIRST_NAME


----------- ------------------------- --------------------
100 King Steven
101 Kochhar Neena
102 De Haan Lex
103 Hunold Alexander
104 Ernst Bruce

SQL> connect hr_dba_jr/hr_dba_jr


Connected.
SQL> select employee_id,last_name,first_name from hr.employees where
rownum<6;
select employee_id,last_name,first_name from hr.employees where rownum<6
*
ERROR at line 1:
ORA-01031: insufficient privileges

-86-
LAB EXERCISE 09 – Starting Up Audit Vault Agent & Collectors

Because audit logs provide a detective, after the fact control, it is imperative to collect
and report data in a streamlined and timely manner as close to the event as possible. In
an effort to automate CashBankTrust’s detective controls, CashBankTrust is deploying
Audit Vault, a product that consolidates database audit records from heterogeneous
sources such as Oracle, SQL Server, Sybase and DB2. In this lab we show how the
product can be administered both from a web-based UI and using SQL statements.

In other workshops, you may discuss using Enterprise Manager’s capabilities for further
automating detective controls of the software, hardware and network environment within
which the database is running.

Your current image already contains the installed Audit Vault Server And Agent
components. Additionally, the three collectors DBAUD, OSAUD, and REDO have
already been configured. In this lab exercise, you will start up the Audit Vault server,
agent, and collectors. Additionally, you will generate activity in the source database
DB01 to create a set of audit records for use in Audit Vault.

1. Verify the current Oracle environment settings in your session using the Linux
alias ora. The configuration you have consists of one source database (SID is
db01) along with the Audit Vault warehouse database (SID is av), one agent,
and three collectors. All of the configuration details are listed below:

AUDIT VAULT SETTING                                            VALUE

Audit Vault Name                                               av
Audit Vault Home                                               /u01/oracle/product/10.2/av
Audit Vault Administrator Username                             avadmin
Administrator Password                                         oracle12#
Audit Vault Auditor Username                                   avauditor
Auditor Password                                               oracle12#
Audit Vault Listener                                           LISTENER
Listener Port Number (Audit Vault Server Listener)             1522
Enterprise Manager Database Control URL                        http://dbsecurity.oracle.com:5501/em
Audit Vault Console URL                                        http://dbsecurity.oracle.com:5700/av
Audit Vault Agent User (In The Audit Vault Server Database)    avagent1
Audit Vault Agent User Password                                oracle12#
Audit Vault Agent Name                                         agent1
Audit Vault Agent Home                                         /u01/oracle/product/10.2/avagent
Audit Vault Agent Port Number                                  7010
Agent Source User (In The db01 Source Database)                avsrcuser1
Agent Source User Password                                     Oracle12#db01

2. Run the alias db01 to set your environment to point to the db01 database.
   Verify that the listener and database are up and running. Start up either or both
   of these components if not already running. FOR PERFORMANCE
   REASONS, ENSURE THAT THE DATABASE CONSOLE IS NOT
   RUNNING. Finally, make sure that the master encryption key is open for the
   db01 database.

oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> lsnrctl status

LSNRCTL for Linux: Version 11.1.0.7.0 - Production on 05-OCT-2008 22:34:20

Copyright (c) 1991, 2008, Oracle. All rights reserved.

Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbsecurity.oracle.com)(
PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.1.0.7.0 - Production
Start Date 05-OCT-2008 17:33:25
Uptime 0 days 5 hr. 0 min. 55 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/u01/oracle/product/11.1.0/db_1/network/admin/listener.ora
Listener Log File
/u01/oracle/diag/tnslsnr/dbsecurity/listener/alert/log.xml
Listening Endpoints Summary...

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbsecurity.oracle.com)(PORT=1521))
)
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "db01.oracle.com" has 1 instance(s).
Instance "db01", status READY, has 1 handler(s) for this service...
Service "db01XDB.oracle.com" has 1 instance(s).
Instance "db01", status READY, has 1 handler(s) for this service...
Service "db01_XPT.oracle.com" has 1 instance(s).
Instance "db01", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle:/home/oracle> sqlplus / as sysdba

SQL*Plus: Release 11.1.0.7.0 - Production on Sun Oct 5 22:37:38 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

SQL> select instance_name,status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
db01 OPEN

If necessary, start up the database at this point. If you need to start up the database,
also open the wallet. If the wallet isn’t open, the Redo collector will not start up.

[
SQL> startup
SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY
"firstpartsecondpart";
]
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
- Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing
oracle:/home/oracle> emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.1.0.7.0
Copyright (c) 1996, 2008 Oracle Corporation. All rights reserved.
https://dbsecurity.oracle.com:5501/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory
/u01/oracle/product/11.1.0/db_1/dbsecurity.oracle.com_db01/sysman/log
oracle:/home/oracle> emctl stop dbconsole
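
A quick way to confirm the wallet state before starting the collectors is to query
v$encryption_wallet; a short sketch (run as SYSDBA in the db01 instance):

-- Sketch: STATUS should report OPEN before the REDO collector is started.
select wrl_type, wrl_parameter, status
from   v$encryption_wallet;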

3. Change directory to $HOME/scripts. Run the shell script
   $HOME/scripts/start_AV_listener.sh to bring up Audit Vault’s
   listener on port 1522.

oracle:/home/oracle> cd scripts/
oracle:/home/oracle/scripts> . start_AV_listener.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av

LSNRCTL for Linux: Version 10.2.0.3.0 - Production on 05-OCT-2008 22:43:57

Copyright (c) 1991, 2006, Oracle. All rights reserved.

Starting /u01/oracle/product/10.2/av/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 10.2.0.3.0 - Production


System parameter file is
/u01/oracle/product/10.2/av/network/admin/listener.ora
Log messages written to /u01/oracle/product/10.2/av/network/log/listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))

Listening on:
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbsecurity.oracle.com)(PORT=1522))
)

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias listener
Version TNSLSNR for Linux: Version 10.2.0.3.0 - Production
Start Date 05-OCT-2008 22:43:57
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/u01/oracle/product/10.2/av/network/admin/listener.ora
Listener Log File
/u01/oracle/product/10.2/av/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbsecurity.oracle.com)(PORT=1522))
)
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this
service...
The command completed successfully

4. Run the shell script $HOME/scripts/start_AV_Agent_oc4j.sh to
   bring up the Audit Vault agent’s OC4J instance.

oracle:/home/oracle/scripts> . start_AV_Agent_oc4j.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/avagent
OH=/u01/oracle/product/10.2/avagent
AVCTL started
Starting OC4J...
OC4J started successfully.

5. Run the shell script $HOME/scripts/start_AV_Server.sh to bring
   up the Audit Vault server database and its OC4J instance.

oracle:/home/oracle/scripts> . start_AV_Server.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av

SQL*Plus: Release 10.2.0.3.0 - Production on Sun Oct 5 22:48:24 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> ORACLE instance started.

Total System Global Area 608174080 bytes


Fixed Size 1263224 bytes

Variable Size 226494856 bytes
Database Buffers 377487360 bytes
Redo Buffers 2928640 bytes
Database mounted.
Database opened.
SQL> Disconnected from Oracle Database 10g Enterprise Edition Release
10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Oracle Database Vault options
AVCTL started
Starting OC4J...
OC4J started successfully.
TZ set to GB-Eire
Oracle Audit Vault 10g Database Control Release 10.2.3.0.0 Copyright (c)
1996, 2008 Oracle Corporation. All rights reserved.
http://dbsecurity.oracle.com:5700/av
Oracle Audit Vault 10g is running.
------------------------------------

Logs are generated in directory /u01/oracle/product/10.2/av/av/log

6. Run the shell script $HOME/scripts/start_AV_Agent.sh to bring up
   the Audit Vault agent named “Agent1”.

oracle:/home/oracle/scripts> . start_AV_Agent.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
AVCTL started
Starting agent...
Agent started successfully.
AVCTL started
Getting agent metrics...
--------------------------------
Agent is running
--------------------------------
Metrics retrieved successfully.

7. Run the shell script
   $HOME/scripts/start_AV_Collectors_db01.sh to bring up
   agent1’s three collectors for the DB01 source database.
oracle:/home/oracle/scripts> . start_AV_Collectors_db01.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 0.00
Bytes per second = 0.00
--------------------------------

AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 0.00
Bytes per second = 0.00
--------------------------------
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 0.00
Bytes per second = 0.00
--------------------------------

8. Launch the Audit Vault console, logging in as avadmin/oracle12#. Use the
   URL http://dbsecurity.oracle.com:5700/av. Be sure to select
   AV_ADMIN in the “Connect As” field. Click on LOGIN.

-92-
9. Navigate to Management -> Agents. You will see that agent1 is currently
running. The Status field should show a green Up arrow.

10. Navigate to Management -> Collectors. Note that all three collectors for the
DB01 source database are up. The Status field should show a green Up arrow
for each collector.

11. The table below shows all scripts used to start up and shut down the Audit
    Vault environment for the workshop. All scripts listed in the table below can
    be found in the image under the path /home/oracle/scripts. Using the
    table should make things considerably easier whenever you need to bounce
    your image or to bring up or bring down individual Audit Vault components.
    [Note that in order for all collectors to start up and to remain up you must
    ensure that the DB01 source database is first open before starting the
    collectors, and you need to also ensure that the master encryption wallet is
    open. If the master wallet is not open, the REDO collector will start up but will
    shut itself down after about 20-30 seconds. See the alert log for the DB01
    database for details.]

STARTING UP THE AUDIT VAULT ENVIRONMENT
Step  Script Name                   Description
1     start_AV_listener.sh          Starts the listener LISTENER (port 1522) for use by Audit Vault
2     start_AV_Agent_oc4j.sh        Starts the OC4J instance for the agent
3     start_AV_Server.sh            Starts the AV server database, OC4J instance, and dbconsole
4     start_AV_Agent.sh             Starts the AV agent
5     start_AV_Collectors_db01.sh   Starts the collectors for the DB01 source database

SHUTTING DOWN THE AUDIT VAULT ENVIRONMENT
Step  Script Name                   Description
1     stop_AV_Collectors_db01.sh    Stops the collectors for the DB01 source database
2     stop_AV_Agent.sh              Stops the AV agent
3     stop_AV_Server.sh             Stops the AV server database, OC4J instance, and dbconsole
4     stop_AV_Agent_oc4j.sh         Stops the OC4J instance for the agent
5     stop_AV_listener.sh           Stops the listener LISTENER on port 1522

-94-
LAB EXERCISE 10 – Injecting Audit Records

The purpose of this lab exercise is to enable database auditing and FGA in the DB01
source database and to inject audit records into the audit trails for transfer to the Audit
Vault warehouse.

1. Set the alias db01. Change directory to /home/oracle/av_scripts.
   Launch SQL Plus as SYSDBA. Verify that the initialization parameter
   AUDIT_TRAIL is set to the value DB_EXTENDED and that
   AUDIT_SYS_OPERATIONS is set to TRUE.
oracle:/home/oracle/av_scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/av_scripts> sqlplus / as sysdba

SQL*Plus: Release 11.1.0.7.0 - Production on Sun Oct 5 23:25:36 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

SQL> show parameter audit

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
audit_file_dest string /u01/oracle/admin/db01/adump
audit_sys_operations boolean TRUE
audit_syslog_level string
audit_trail string DB_EXTENDED

2. Run the script secconf.sql to enable native database-level and fine-grained
   auditing in the DB01 database. Remain connected to SQL Plus when finished.
SQL> @secconf.sql
Connected.
SQL>
SQL> Audit alter any table by access;

Audit succeeded.

SQL>
SQL> Audit create any table by access;

Audit succeeded.

SQL>
SQL> Audit drop any table by access;

Audit succeeded.
.
.
.
SQL> execute DBMS_FGA.ADD_POLICY( -
> object_schema => 'hr', -
> object_name => 'employees', -
> policy_name => 'chk_hr_emp', -
> audit_condition => 'salary>10000', -
> audit_column => 'salary');

PL/SQL procedure successfully completed.

SQL>
SQL> set echo off
SQL>

3. Run the script inject_audit.sql to create audit records in the DB01
   database. Partial output of the script is shown below. Exit SQL Plus when
   finished.

SQL> @inject_audit.sql
SQL>
SQL> @change_schema.sql
SQL> connect system/oracle1
Connected.
SQL>
SQL> create table hr.emp1 as select * from hr.employees;

Table created.

SQL> create table hr.emp2 as select * from hr.emp1;


.
.
.
SQL> select first_name,last_name,salary
2 from hr.employees
3 where employee_id=192;

FIRST_NAME LAST_NAME SALARY


-------------------- ------------------------- ----------
Sarah Bell 4000

SQL>
SQL> set echo off
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
- Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
oracle:/home/oracle/av_scripts>
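
Before forcing the warehouse refresh, you can sanity-check that the activity actually
landed in the local audit trails of db01. The queries below are a sketch using the
standard audit views (run as SYSDBA in db01):

-- Sketch: counts of standard and fine-grained audit records generated by the scripts.
select count(*) as std_audit_rows
from   dba_audit_trail
where  trunc(timestamp) = trunc(sysdate);

select count(*) as fga_rows
from   dba_fga_audit_trail
where  upper(policy_name) = 'CHK_HR_EMP';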

4. Set alias av to change your environment to point to the Audit Vault server.
Force a manual refresh of the Audit Vault warehouse by running the shell
script /home/oracle/av_scripts/refresh_warehouse.sh.

oracle:/home/oracle/av_scripts> av

ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
oracle:/home/oracle/av_scripts> . refresh_warehouse.sh
AVCTL started
Refreshing warehouse...
Waiting for refresh to complete...
done.

5. Login to the Audit Vault console as user avauditor/oracle12#. Be sure to
   select the value AV_AUDITOR in the “Connect As” menu option.

6. Verify that the far lower portion of the Overview screen contains summary
entries for audit events. Sample output is shown below.

-97-
7. Click on the hyperlink for the Audit Event Category named “Data Access”.
View the Data Access report, and verify that you see audit records that were
collected on today’s date.

8. Login to the Audit Vault console as avadmin/oracle12#. Navigate to
   Management -> Warehouse. Verify that you have an entry in the Refresh
   History tab showing a successful refresh of the warehouse that you previously
   performed. Exit the Audit Vault console when finished.

-98-
LAB EXERCISE 11 – Creating Audit Vault Alerts

Even though we may have a preventative control in Database Vault or a customized
mechanism, we still want the additional control of monitoring privileged users for
tracking and accountability purposes. Two database security policies that
CashBankTrust wants to enforce in its production databases are (1) no one is allowed to
drop or truncate production tables, and (2) no one is allowed to create, alter or drop user
accounts except via the provisioning application, which consists of an HR application
integrated with Oracle Identity Management products. Additionally, CashBankTrust
wants to monitor any attempts to drop indexes in the DB01 database only (due to chronic
performance problems with this database). Given the importance of these three security
requirements, CashBankTrust would like to proactively monitor these audit events by
deploying Audit Vault alerts for its databases.

This lab goes into detail on creating and using the alert that monitors account creation.
The purpose of this lab exercise is to create and use Audit Vault alerts for the DB01
source database.

1. Log into the Audit Vault console as avauditor/oracle12#. Click on the Audit
Policy > Alerts subtab. Click on the Create button.

2. Enter the following, then click OK to finish:

• Alert: CREATE_USER
• Alert Severity: Warning
• Audit Source Type: ORCLDB
• Audit Source: DB01.ORACLE.COM
• Audit Event Category: Account Management

• Audit Event: CREATE USER

3. Verify that you see the newly created alert in the summary screen, as
shown below.

4. Change directory to /home/oracle/av_scripts. Set alias db01. Login to
SQL Plus using the /nolog option. Run the script create_users_for_alert.sql to
generate some audit activity that will trigger the alert. Exit SQL Plus when
finished.

oracle:/home/oracle> cd av_scripts/
oracle:/home/oracle/av_scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1

oracle:/home/oracle/av_scripts> sqlplus /nolog

SQL*Plus: Release 11.1.0.7.0 - Production on Mon Oct 6 00:18:25 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.


SQL> @create_users_for_alert.sql
SQL> set echo on
SQL>
SQL> connect dvacctmgr/oracle12#
Connected.
SQL> create user limh identified by limh;

User created.

SQL> create user hbradbury identified by hbradbury;

User created.

SQL> create user prayner identified by prayner;

User created.

SQL> create user tnugent identified by tnugent;

User created.
.
.
.

5. Set alias av. Run the script refresh_warehouse.sh to refresh the Audit Vault
warehouse with the new audit records.

oracle:/home/oracle/av_scripts> av
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
oracle:/home/oracle/av_scripts> . refresh_warehouse.sh
AVCTL started
Refreshing warehouse...
Waiting for refresh to complete...
done.

-101-
6. In the Audit Vault console, force a refresh of the Overview screen. You should
see a number of alerts already appearing there.

7. Click on the hyperlink for the “Account Management” event under Alerts By
Audit Event Category. Partial output is shown below. Click on the first entry in
the list of alerts.

8. Examine the detail alert record.

-102-
9. We will next demonstrate the near real-time behavior of Audit Vault’s ability to
report on alerts without the need for a warehouse refresh. While viewing the Audit Vault
console’s Overview screen as user avauditor, note the current number of alerts.
The sample output below shows a total of 17 alerts, all for the “Account
Management” event category.

-103-
10. Set alias db01, and login to SQL Plus as dvacctmgr/oracle12#. Create a user named
prayner (password of prayner). Then immediately drop this user account.

oracle:/home/oracle/av_scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/av_scripts> sqlplus dvacctmgr/oracle12#

SQL*Plus: Release 11.1.0.7.0 - Production on Mon Oct 6 01:19:48 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

SQL> create user prayner identified by prayner;

User created.

SQL> drop user prayner;

User dropped.

-104-
11. Return to the Audit Vault console. You should now observe that the total number
of alerts has increased by 1. As shown below, the sample output shows that the total
number of alerts has incremented from 17 to 18. The total was incremented without
our having to force a refresh of the Audit Vault warehouse.

12. Click on the hyperlink for the “Account Management” event under Alerts By
Audit Event Category. Partial output is shown below. Select the first entry in the
list, since this is the most recent alert record.

13. View the detail alert record.

-105-
14. Logout of the Audit Vault console when finished.

-106-
LAB EXERCISE 12 – Audit Management

In the latest version of Audit Vault, two important new capabilities have been introduced:
a. The ability to move the native audit trail base tables to another tablespace
besides SYSTEM. Many customers had performed this action manually themselves,
but it was not previously supported. This capability allows for more granular
management of the audit trails.
b. The ability to create and schedule a job to perform automated removal of
audit records in an Oracle database. Once a record has been collected and
stored by Audit Vault, there is no longer any need to store this record in the
source database. Many customers balk at the storage requirements for audit
data, and Audit Vault helps address this concern by centralizing storage of
audit records, making management of these records more efficient and less
costly.
This lab exercise guides you through managing audit trails on a source Oracle database.

1. Set alias db01. Change directory to /home/oracle/av_scripts. Login to
SQL Plus with the /nolog option, and run the script
show_audit_trails.sql to view the two native audit trail base tables and
which tablespace each currently resides in. Notice that both audit trails currently
reside in the SYSTEM tablespace.

oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> cd av_scripts/
oracle:/home/oracle/av_scripts> sqlplus /nolog

SQL*Plus: Release 11.1.0.7.0 - Production on Mon Oct 6 02:47:06 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

SQL> @show_audit_trails.sql
Connected.
SQL>
SQL> select owner,table_name,tablespace_name
2 from dba_tables
3 where table_name in ('AUD$','FGA_LOG$');

OWNER TABLE_NAME TABLESPACE_NAME

---------- -------------------- --------------------
SYS FGA_LOG$ SYSTEM
SYSTEM AUD$ SYSTEM

SQL>
SQL> set echo off

2. In SQL Plus run the script configure_audit_mgt.sql. This script creates a
new tablespace named AUDIT_TRAIL_TS and moves the two native audit trail
base tables to this tablespace. [Ignore the error “Tablespace does not exist” when
attempting to drop the tablespace before creating it.]

SQL> @configure_audit_mgt.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> drop tablespace audit_trail_ts including contents and datafiles;
drop tablespace audit_trail_ts including contents and datafiles
*
ERROR at line 1:
ORA-00959: tablespace 'AUDIT_TRAIL_TS' does not exist

SQL> create tablespace audit_trail_ts


2 datafile '/u01/oracle/oradata/db01/audit01.dbf'
3 size 200m autoextend on next 10m maxsize unlimited
4 logging extent management local segment space management auto;

Tablespace created.

SQL>
SQL> execute dbms_audit_mgmt.set_audit_trail_location(-
> audit_trail_type => dbms_audit_mgmt.audit_trail_db_std,-
> audit_trail_location_value => 'AUDIT_TRAIL_TS');

PL/SQL procedure successfully completed.

SQL>
SQL> set echo off

3. Run the script show_audit_trails.sql to (again) view the
two native audit trail base tables and which tablespace each currently resides in.

Notice that both audit trails have been successfully moved into the
AUDIT_TRAIL_TS tablespace.

SQL> @show_audit_trails.sql
Connected.
SQL>
SQL> select owner,table_name,tablespace_name
2 from dba_tables
3 where table_name in ('AUD$','FGA_LOG$');

OWNER TABLE_NAME TABLESPACE_NAME


---------- -------------------- --------------------
SYS FGA_LOG$ AUDIT_TRAIL_TS
SYSTEM AUD$ AUDIT_TRAIL_TS

SQL>
SQL> set echo off

4. In SQL Plus, as SYS run the script create_audit_cleanup_job.sql to
create a job to clean up the audit trail once records have been moved to the Audit
Vault warehouse.

SQL> show user


USER is "SYS"
SQL> @create_audit_cleanup_job.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> BEGIN
2 DBMS_AUDIT_MGMT.INIT_CLEANUP(
3 AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
4 DEFAULT_CLEANUP_INTERVAL => 12 );
5 END;
6 /

PL/SQL procedure successfully completed.

SQL> BEGIN
2 DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
3 AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
4 AUDIT_TRAIL_PURGE_INTERVAL => 12,
5 AUDIT_TRAIL_PURGE_NAME => 'AUDIT_TRAIL_PURGE_JOB',
6 USE_LAST_ARCH_TIMESTAMP => TRUE );

7 END;
8 /

PL/SQL procedure successfully completed.

SQL>
SQL> SET SERVEROUTPUT ON
SQL> BEGIN
2 IF
3
DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD)
4 THEN
5 DBMS_OUTPUT.PUT_LINE('AUD$ is initialized for clean-up');
6 ELSE
7 DBMS_OUTPUT.PUT_LINE('AUD$ is not initialized for clean-up.');
8 END IF;
9 END;
10 /
AUD$ is initialized for clean-up

PL/SQL procedure successfully completed.

SQL>
SQL> set echo off

5. Query the dictionary view DBA_SCHEDULER_JOBS to verify that your purge
job has been successfully created.

SQL> select owner,job_type,state,job_name from dba_scheduler_jobs


2 where upper(job_name) like 'AUDIT%';

OWNER JOB_TYPE STATE JOB_NAME


---------- ---------------- --------------- ------------------------------
SYS PLSQL_BLOCK SCHEDULED AUDIT_TRAIL_PURGE_JOB
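
Note that the purge job above was created with USE_LAST_ARCH_TIMESTAMP => TRUE, so it only deletes audit records older than the last archive timestamp recorded for each trail. In an Audit Vault deployment this timestamp is expected to reflect what has already been collected; if you ever need to set it by hand, a minimal sketch (illustration only, not part of the lab scripts, and assuming the DBMS_AUDIT_MGMT version in this image supports the call) looks like this:

begin
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_aud_std,
    last_archive_time => systimestamp - interval '1' day);  -- example cutoff only
end;
/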

-110-
LAB EXERCISE 13 – Adding A Second Collection Source

Audit Vault will allow CashBankTrust to standardize database audit policies across databases,
as well as to consolidate and analyze audit records in preparation for an external or
internal audit. CashBankTrust’s audit and compliance requirements demand that two
databases, DB01 and DB02, both be audited and have their audit records centralized for
reporting and analysis. Currently only DB01 is configured as an Audit Vault collection
source. In this lab exercise you will configure three collectors for DB02 and verify that
its audit records can be captured and stored in Audit Vault along with audit data from
DB01.

This lab will guide you through the process of adding a second collection source
database for Audit Vault. All of the configuration scripts are located in
/home/oracle/av_scripts.

To start this lab, please ensure that the db01 and db02 databases and Audit Vault are all
running. To do this, run the /home/oracle/scripts/start_av.sh script (after setting
your environment first to db01 and then to db02 using the aliases) and then the
/home/oracle/start_av.sh script.

Temporary note: In order to run the following Audit Vault labs successfully, please issue
the following statements on the command line:

• chmod +x /home/oracle/av.sh
• chmod +x /home/oracle/db01.sh
• chmod +x /home/oracle/db02.sh

This issue will be permanently addressed in the next rev of the VMware image.

1. Create Database User

a. To collect audit information from the Oracle DB you must first create a
DB user in the source database who has appropriate access to the local
audit trails.
b. We’ve created a script (step1a) to add the source database user. Run it as
follows:
i. cd /home/oracle/av_scripts
ii. . step1a_create_sourcedb_user.sh

oracle:/home/oracle/av_scripts> . step1a_create_sourcedb_user.sh
SQL*Plus: Release 11.1.0.7.0 - Production on Mon Oct 6 20:23:28 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

SQL>
User created.

SQL> Granting privileges to AVCOLLUSER ... Done.


SQL> Granting privileges to AVCOLLUSER ... Done.
*NOTE*
======
The source has Database Vault option enabled, the source user must be
added
to 'Oracle Data Dictionary' realm as a participant for REDO collector to
work.

Connect to the source database as DV Owner and execute:

exec dbms_macadm.add_auth_to_realm('Oracle Data Dictionary',


'AVCOLLUSER', null, dbms_macutl.g_realm_auth_participant);

SQL> Disconnected from Oracle Database 11g Enterprise Edition Release


11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
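
Because DB02 has Database Vault enabled, act on the note in the output above before configuring the REDO collector. A minimal sketch is shown below; the account name dvowner is an assumption taken from Appendix A, so substitute your Database Vault owner account if it differs. The procedure call itself is the one given in the script output.

-- Connect to DB02 as the Database Vault owner (account name assumed)
connect dvowner

begin
  dbms_macadm.add_auth_to_realm('Oracle Data Dictionary',
      'AVCOLLUSER', null, dbms_macutl.g_realm_auth_participant);
end;
/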

2. Verify Collectors
a. Once you’ve created the source user on db02 you are ready to start the
Audit Vault configuration.
b. We’ll start by verifying that the 11g database is able to support auditing
for the three collectors:
i. DB Aud Collector
ii. OS Aud Collector
iii. Redo Collector
c. Run the step2_verify_collector.sh script
i. . step2_verify_collector.sh

oracle:/home/oracle/av_scripts> . step2_verify_collector.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
source DB02.ORACLE.COM verified for OS File Audit Collector collector
source DB02.ORACLE.COM verified for Aud$/FGA_LOG$ Audit Collector
collector
parameter _JOB_QUEUE_INTERVAL is not set; recommended value is 1

ERROR: parameter UNDO_RETENTION = 900 is not in required value range
[3600 - ANY_VALUE]
ERROR: parameter GLOBAL_NAMES = false is not set to required value true
ERROR: set the above init.ora parameters to recommended/required values
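
For reference, the two ERROR lines above correspond to ALTER SYSTEM changes along the lines of the sketch below. You do not need to run these yourself; the prepared script used in the next step makes these changes (plus several others) and then restarts the instance.

alter system set undo_retention = 3600 scope=both;   -- required range starts at 3600
alter system set global_names = true scope=both;     -- redo collector requires TRUE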

d. After running the verification step (step 2), you’ll see that the redo
collector requires some DB changes. We’ve prepared a script to make the
appropriate changes for you.
i. . step2a_change_db_parms.sh

oracle:/home/oracle/av_scripts> . step2a_change_db_parms.sh
ORACLE_SID=db02
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1

SQL*Plus: Release 11.1.0.7.0 - Production on Mon Oct 6 20:34:12 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

SQL> SQL> SQL> SQL>


System altered.

SQL>
System altered.

SQL>
System altered.

SQL>
System altered.

SQL>
System altered.

SQL>
System altered.

SQL>
System altered.

SQL>
System altered.

SQL> 2 3
SQL> Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> ORACLE instance started.

Total System Global Area 418484224 bytes


Fixed Size 1313792 bytes
Variable Size 192939008 bytes
Database Buffers 218103808 bytes
Redo Buffers 6127616 bytes
Database mounted.
Database opened.
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release
11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options

e. You should see that the script completes and that the changes have been
made.
f. Now that we’ve altered the DB to support the REDO collector, re-run the
verification step: . step2_verify_collector.sh
g. You should now see that the OS, DB and REDO collectors are all verified.
NOTE: Ignore the _JOB_QUEUE_INTERVAL parameter.

oracle:/home/oracle/av_scripts> . step2_verify_collector.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
source DB02.ORACLE.COM verified for OS File Audit Collector collector
source DB02.ORACLE.COM verified for Aud$/FGA_LOG$ Audit Collector
collector
parameter _JOB_QUEUE_INTERVAL = 4 is not set to recommended value 1
source DB02.ORACLE.COM verified for REDO Log Audit Collector collector

3. Add Source to Audit Vault


a. You are now ready to let Audit Vault know that there is a new source
ready to audit: . step3_add_source.sh

b. This step will add db02 as a new source in Audit Vault (enter
avcolluser / avcolluser when prompted). Once the source is added you are able to add
collectors for it.

oracle:/home/oracle/av_scripts> . step3_add_source.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
Enter Source user name: avcolluser
Enter Source password:
Adding source...
Source added successfully.
source successfully added to Audit Vault

remember the following information for use in avctl


Source name (srcname): DB02.ORACLE.COM
Storing user credentials in wallet...
Create credential oracle.security.client.connect_string4
done.
Mapping Source to Agent...

c. Let’s confirm that the source has been added correctly. In the VMware
guest, start Firefox.
d. Select Internet > Firefox.
e. Enter the Audit Vault URL:
i. http://dbsecurity.oracle.com:5700/av
ii. Login as avadmin/oracle12#, connecting as AV_ADMIN.
iii. Click on the Configuration tab.
f. You should see that the db02 source has been added.

-115-
4. Add OS Collector
a. Once we’ve added the source we can add collectors for that source. We’ll
add all three collectors for the DB02 database – DB, OS and Redo.
b. We’ll start with the OSAUD collector: . step4_add_os_collector.sh
c. Please take a look at the scripts to see what they’re doing. The
configuration steps are documented in great detail in the Audit Vault
Administrator’s Guide.

oracle:/home/oracle/av_scripts> . step4_add_os_collector.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
source DB02.ORACLE.COM verified for OS File Audit Collector collector
Adding collector...
Collector added successfully.
collector successfully added to Audit Vault

remember the following information for use in avctl


Collector name (collname): OSAUD_Collector_DB02

5. Add DB Collector
a. The DBAUD Collector is collecting audit records from the aud$ and
fga_log$ base tables.
b. Now let’s add the DBAUD Collector: . step5_add_db_collector.sh

oracle:/home/oracle/av_scripts> . step5_add_db_collector.sh
ORACLE_SID=av

ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
source DB02.ORACLE.COM verified for Aud$/FGA_LOG$ Audit Collector
collector
Adding collector...
Collector added successfully.
collector successfully added to Audit Vault

remember the following information for use in avctl


Collector name (collname): DBAUD_Collector_DB02

6. Add the REDO Collector


a. Finally, let’s add the REDO collector: . step6_add_redo_collector.sh

oracle:/home/oracle/av_scripts> . step6_add_redo_collector.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
parameter _JOB_QUEUE_INTERVAL = 4 is not set to recommended value 1
source DB02.ORACLE.COM verified for REDO Log Audit Collector collector
Adding collector...
Collector added successfully.
collector successfully added to Audit Vault

remember the following information for use in avctl


Collector name (collname): REDO_Collector_DB02
initializing REDO Collector
setting up APPLY process on Audit Vault server
setting up CAPTURE process on source database

7. Update tnsnames.ora, etc.

a. Once the collectors have been added, we need to run a final step in the
Audit Vault Agent Oracle Home. This step will complete the
configuration for this new source: . step8_setup_source.sh (enter
avcolluser / avcolluser when prompted)

oracle:/home/oracle/av_scripts> . step8_setup_source.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/avagent
OH=/u01/oracle/product/10.2/avagent

Enter Source user name: avcolluser
Enter Source password:
adding credentials for user avcolluser for connection [SRCDB2]
Storing user credentials in wallet...
Create credential oracle.security.client.connect_string4
done.
updated tnsnames.ora with alias [SRCDB2] to source database
verifying SRCDB2 connection using wallet

8. Start Collectors For DB02 Source


a. Run the script start_AV_Collectors_db02.sh to bring up all three
collectors for the DB02 source database.

oracle:/home/oracle/av_scripts> . start_AV_Collectors_db02.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 0.00
Bytes per second = 0.00
--------------------------------
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 8.69
Bytes per second = 5963.19
--------------------------------
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...

--------------------------------
Collector is running
Records per second = 0.00
Bytes per second = 0.00
--------------------------------

9. Confirm Configuration
a. We can confirm that all of the newly configured collectors were
configured and started successfully by logging into the Audit Vault
administration application and reviewing the collector information:
http://dbsecurity.oracle.com:5700/av
i. avadmin/oracle12# connect as AV_ADMIN
b. Click on the ‘Configuration’ Tab, then review the ‘Audit Source’ >
‘Source’ and ‘Collector’ tabs.
c. You should see the new DB02 source that we added, with its three
collectors.

-119-
LAB EXERCISE 14 – Configuring Audit Policy

In order to understand the data collected by Audit Vault, it is important to understand the
different categories of database auditing provided by Oracle. This lab creates audit
policies in each category, and walks you through the different database events that
generate each type of audit record. You may want to refer to this lab exercise later,
using it as a guide to deploying audit policy for Oracle databases.

This lab exercise provides an overview of:


o Statement Auditing
o Object Auditing
o Privilege Auditing
o Fine-Grained Auditing
o Capture Rules

This lab guides you through configuring audit policy in the Oracle 11g Database and
generating some audit activity.

1. Start up the environment (it may already be up from the last lab):
a. Start Audit Vault
b. Start db02
c. Once everything is started login to the Audit Vault console:
i. Select Internet > Firefox.
ii. Enter the Audit Vault URL: http://dbsecurity.oracle.com:5700/av,
logging in as avadmin / oracle12# and connecting as
AV_ADMIN
d. Click on the ‘Agent’ Sub-tab under the ‘Management’ tab and be sure the
Agent is started.
e. Once the agent is started you will be able to start the collectors for DB02.
f. Start all of the DB02 collectors. Select each collector’s radio button and
click the ‘Start’ button.

-120-
2. Retrieve the Audit Policies
a. Now, login as the Auditor: http://dbsecurity.oracle.com:5700/av,
logging in as avauditor/oracle12#, connecting as AV_AUDITOR
b. You will now see the Audit Vault console – home dashboard. Two
collection sources are shown on the Overview screen. At the moment
there will be nothing on it for the DB02 database. We’ve not
generated, collected or alerted/reported on any data for this database.

c. Click on the Audit Policy tab

d. You will see the two Oracle Database sources – DB01 and DB02.

e. For the DB02 source retrieve the Audit Policy from the DB. Select the
DB02 source, and click Retrieve from Source

f. You will notice that the database immediately has some policy. This
is due to the fact that 11g by default audits a set of events considered
important.

-122-
3. Configuring Audit Policy
a. We will now login to the DB02 database and configure its audit
policy. We will login to SQLPlus as the sys user, then run a script that
configures the Audit Policy:
i. cd /home/oracle/av_scripts
ii. db02
iii. sqlplus sys/oracle1 as sysdba
b. Once logged in, run the set_policy.sql script: @set_policy.sql. This
will set the audit policy on this Oracle 11g DB.

c. The script will generate some small errors, but you will see all of the
audit policy statements generate an ‘Audit succeeded’ confirmation.
4. Examine the Audit Policy
a. Now, we’ll go back to the Firefox session logged in as the Auditor
user.
b. For the DB02 source, retrieve the audit policy from the source.
i. Click the radio button for DB02
ii. Click Retrieve from Source
c. You will now see that there are more audit policies on the summary
page.

d. Click on the DB02 hyperlink to see the detail for the audit policy.

e. This is the Audit Policy detail screen. From here you will be able to:
i. Provision Audit Policy to the Source DB
ii. Review the Audit policy for the source
iii. Export the Audit Policy for the source

f. We’ll now review the policy for this DB source.

5. Understanding Statement Auditing

Statement auditing audits SQL statements by type of statement, not by the
specific schema objects on which the statement operates. Statement auditing can
be broad or focused, for example, by auditing the activities of all database users or
of only a select list of users. Statement auditing is typically broad, auditing the use
of several types of related actions for each option (a short example follows this
list). These statements are in the following categories:
a. Data definition statements (DDL). For example, AUDIT TABLE
audits all CREATE TABLE and DROP TABLE statements. AUDIT
TABLE tracks several DDL statements regardless of the table on
which they are issued. You can also set statement auditing to audit
selected users or every user in the database.
b. Data manipulation statements (DML). For example, AUDIT SELECT
TABLE audits all SELECT ... FROM TABLE or SELECT ... FROM
VIEW statements, regardless of the table or view.
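
A minimal illustration of statement auditing in SQL (these commands are not part of the lab scripts, and the user name hr is just an example):

-- Audit CREATE/DROP/TRUNCATE TABLE by all users, one record per statement
audit table by access;

-- Audit SELECTs on any table, but only for statements issued by HR
audit select table by hr by access;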

6. Understanding Object Auditing

Schema object auditing is the auditing of specific statements on a particular
schema object, such as AUDIT SELECT ON HR.EMPLOYEES. Schema object
auditing is very focused, auditing only a specific statement on a specific schema
object for all users of the database (a short example follows this list).
a. For example, object auditing can audit all SELECT and DML
statements permitted by object privileges, such as SELECT or
DELETE statements on a given table. The GRANT and REVOKE
statements that control those privileges are also audited.
b. Object auditing lets you audit the use of powerful database commands
that enable users to view or delete very sensitive and private data. You
can audit statements that reference tables, views, sequences,
standalone stored procedures or functions, and packages.
c. Oracle Database and Oracle Audit Vault always set schema object
audit options for all users of the database. You cannot set these options
for a specific list of users.
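
For example, to audit two specific statements against one table for every user (illustration only):

-- Object auditing: every SELECT or DELETE against this one table is audited
audit select, delete on hr.employees by access;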

7. Understanding Privilege Auditing

Privilege auditing is the auditing of SQL statements that use a system privilege.
You can audit the use of any system privilege. Like statement auditing, privilege
auditing can audit the activities of all database users or of only a specified list of
users (a short example follows this list).
a. For example, if you enable AUDIT SELECT ANY TABLE,
Oracle Database audits all SELECT tablename statements issued
by users who have the SELECT ANY TABLE privilege. This type
of auditing is very important for the Sarbanes-Oxley (SOX) Act
compliance requirements. Sarbanes-Oxley and other compliance
regulations require the privileged user be audited for inappropriate
data changes or fraudulent changes to records.
b. Privilege auditing audits the use of powerful system privileges
enabling corresponding actions, such as AUDIT CREATE
TABLE. If you set both similar statement and privilege audit
options, then only a single audit record is generated. For example,
if the statement clause TABLE and the system privilege CREATE
TABLE are both audited, then only a single audit record is
generated each time a table is created. The statement auditing
clause, TABLE, audits CREATE TABLE, ALTER TABLE, and

DROP TABLE statements. However, the privilege auditing option,
CREATE TABLE, audits only CREATE TABLE statements,
because only the CREATE TABLE statement requires the
CREATE TABLE privilege.
c. Privilege auditing does not occur if the action is already permitted
by the existing owner and schema object privileges. Privilege
auditing is triggered only if these privileges are insufficient, that is,
only if what makes the action possible is a system privilege.
d. Privilege auditing is more focused than statement auditing for the
following reasons:
i. It audits only a specific type of SQL statement, not a
related list of statements.
ii. It audits only the use of the target privilege.
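
A minimal privilege auditing sketch, reusing privileges audited earlier in this workbook (illustration only):

-- Privilege auditing: record any use of these system privileges
audit select any table by access;
audit alter any table, drop any table by access;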

8. Understanding Fine-Grained Auditing

Fine-grained auditing (FGA) enables you to create a policy that defines specific
conditions that must take place for the audit to occur. For example, fine-grained
auditing lets you audit the following types of activities:
a. A table being accessed between 9 p.m. and 6 a.m. or on Saturday and Sunday
b. An IP address from outside the corporate network being used
c. A table column being selected or updated
d. A value in a table column being modified
A fine-grained audit policy provides granular auditing of select, insert, update,
and delete operations. Furthermore, because you are auditing only very specific
conditions, you reduce the amount of audit information generated and can restrict
auditing to only the conditions that you want to audit. This creates a more
meaningful audit trail that supports compliance requirements. For example, a
central tax authority can use fine-grained auditing to track access to tax returns to
guard against employee snooping, with enough detail to determine what data was
accessed. It is not enough to know that a specific user used the SELECT privilege
on a particular table. Fine-grained auditing provides a deeper audit, such as when
the user queried the table or the IP address of the computer from which the user
performed the action. A short example follows this paragraph.
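
A sketch that builds on the chk_hr_emp fine-grained auditing policy used earlier in this workbook; the policy name and the statement_types value here are illustrative assumptions:

begin
  dbms_fga.add_policy(
    object_schema   => 'hr',
    object_name     => 'employees',
    policy_name     => 'chk_hr_emp_dml',            -- hypothetical policy name
    audit_condition => 'salary > 10000',
    audit_column    => 'salary',
    statement_types => 'SELECT,UPDATE,DELETE');     -- audit reads and changes
end;
/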

-127-
9. Understanding Capture Rules

You can create a capture rule to track changes in the database redo log files. The
capture rule specifies DML and DDL changes that should be checked when
Oracle Database scans the database redo log. You can apply the capture rule to an
individual table, a schema, or globally to the entire database. Unlike statement,
object, privilege, and fine-grained audit policies, you do not retrieve and activate
capture rule settings from a source database, because you cannot create them
there. You can only create capture rules in the Audit Vault Console.

10. Generating Auditable Events

a. Navigate back to the Audit Vault Home tab. You will notice that there
are events shown, but currently all audit events are for the DB01
database only.

b. Next we will generate activity/load on the DB02 database.

i. cd /home/oracle/av_scripts
ii. . inject_audit.sh

c. This script will generate activity that we can report on with Audit
Vault. The activity has been specially created to trigger the audit
policy we previously created. Many errors will be generated.
d. Once the activity is generated the Audit Vault Agent will collect it,
and move it to the Audit Vault Server.
e. We’ll now have to refresh the Audit Vault warehouse to see the data.
We will manually refresh the warehouse using the AVCTL command
line utility:
i. av
ii. $OH/avctl refresh_warehouse -wait

oracle:/home/oracle/av_scripts> av
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
oracle:/home/oracle/av_scripts> avctl refresh_warehouse -wait
AVCTL started
Refreshing warehouse...
Waiting for refresh to complete...
done.

f. The first time you run this utility it will take a few minutes.
g. Go back to the Firefox browser. On the Home Tab you will now see a
sizeable boost in Audit Activity.

h. Click on User Sessions -- You will see that there are multiple entries
for the user sessions category.

i. We’ve completed the following tasks:
i. Configured Audit Policy on an 11g database, DB02
ii. Synchronized Audit Vault with the 11g DB Audit Policy
iii. Generated activity matched to the policy
iv. Reviewed the reports in Audit Vault console

LAB EXERCISE 15 – Audit Vault Reporting Features

This lab guides you through reviewing Audit Vault reporting.

1. Start up the environment (it may already be up from the last lab):
a. Start Audit Vault
b. Start db02
c. Once everything is started login to the Audit Vault console.
i. Select Internet > Firefox.
ii. Enter the Audit Vault URL: http://dbsecurity.oracle.com:5700/av
iii. Login as avauditor/oracle12#, connecting as AV_AUDITOR
2. Viewing Alerts
a. This screen shows that no data has been added into the Audit Vault
repository in the past 24 hours. You will now need to refresh the
dashboard to get some data.

b. Select the ‘Last One Month’ radio button and click the ‘Go’ button. You
will see the Audit Vault dashboard get refreshed.
c. The Dashboard is organized into three sections. The first two pie charts in
the first horizontal section represent the Alerts that are generated in Audit
Vault.
d. The second row shows other alert information.
e. The final horizontal row contains the audit activity.
f. Click on the alert link.

g. You will see that you have two alerts (that we created earlier). These two
alerts were generated when we created some sample DB users.
h. Click on the audit activity detail button to see the detail for the record.

i. You will see that the alert drills into the audit report activity for the event.
In the report you will see information about the activity.

j. Now let’s go back to the Audit Vault ‘Home’ tab. Click on the ‘Home’
tab and scroll down to review the four alert charts.

k. Scroll down to the end of the ‘Home’ tab.
l. You will see the ‘Activity by Audit Event Category’ chart. All audit activity
that is collected from your sources is categorized and then made available
in this chart. There are 14 categories. During this lab we will review a
number of these categories and the activity that is captured in them.
3. Account Management Activity
a. Click on ‘Account Management’ link

b. Review all of the account management activity.


c. Click on a row. We’re going to select ‘CREATE USER’ for this
example. In this section you will find activity related to the following
audit policy statements:
i. AUDIT ALTER PROFILE;
ii. AUDIT ALTER USER;

iii. AUDIT CREATE PROFILE;
iv. AUDIT CREATE USER;
v. AUDIT DROP PROFILE;
vi. AUDIT DROP USER;

d. You will see the detail for that activity.

e. Scroll down to the bottom of this page. We’ve captured the SQL Text for
this ‘CREATE USER’ activity. This is being collected because we’ve
used the ‘DB, EXTENDED’ setting for the AUDIT_TRAIL parameter.

4. Application Management Activity

a. Return to the ‘Home’ tab. We now continue reviewing each of the activity
categories to see what data is captured. Start by clicking on ‘Application
Management’.

b. In this category you will see activity related to packages, procedures and
other PL/SQL code in the database. Here are some sample audit
statements that are captured:
i. AUDIT CREATE CONTEXT;
ii. AUDIT CREATE FUNCTION;
iii. AUDIT CREATE INDEXTYPE;
iv. AUDIT CREATE JAVA;
v. AUDIT CREATE LIBRARY;
vi. AUDIT CREATE OPERATOR;
vii. AUDIT CREATE PACKAGE;
viii. AUDIT CREATE PACKAGE BODY;
ix. AUDIT CREATE PROCEDURE;
x. AUDIT CREATE TRIGGER;
xi. AUDIT CREATE TYPE;
xii. AUDIT CREATE TYPE BODY;

-136-
5. Audit Activity
a. Return to the previous page and click on the Audit category. You will see
all of the system audit activity.

6. Data Access Activity

a. Return to the previous page and click on the Data Access category. All DML
activity is captured here. Here are some sample audit policy statements
that will generate records in this category.
i. AUDIT DELETE;
ii. AUDIT INSERT;
iii. AUDIT SELECT;
iv. AUDIT TRUNCATE TABLE;
v. AUDIT UPDATE;

-137-
7. Object Management Activity
a. Return to the previous page, click on the Object Management category.
This will have all DDL activity. Here are some sample audit policy
statements that will generate activity in this category.
i. AUDIT CREATE DIMENSION;
ii. AUDIT CREATE DIRECTORY;
iii. AUDIT CREATE INDEX;
iv. AUDIT CREATE MATERIALIZED VIEW;
v. AUDIT CREATE MATERIALIZED VIEW LOG;
vi. AUDIT CREATE OUTLINE;
vii. AUDIT CREATE PUBLIC DATABASE LINK;
viii. AUDIT CREATE PUBLIC SYNONYM;
ix. AUDIT CREATE SCHEMA;
x. AUDIT CREATE SEQUENCE;
xi. AUDIT CREATE SYNONYM;
xii. AUDIT CREATE TABLE;
xiii. AUDIT CREATE VIEW;

-138-
8. Peer Association Activity
a. Return to the previous page, click on the ‘Peer Association’ category.
This category contains all activity related to Database Links:
i. AUDIT CREATE DATABASE LINK;
ii. AUDIT DROP DATABASE LINK;

9. Role and Privilege Category Activity


a. Return to the previous page, click on the ‘Role and Privilege’ category,
which captures:
i. AUDIT ALTER ROLE;
ii. AUDIT CREATE ROLE;
iii. AUDIT DROP ROLE;

iv. AUDIT GRANT OBJECT;
v. AUDIT GRANT ROLE;
vi. AUDIT REVOKE OBJECT;
vii. AUDIT REVOKE ROLE;

10. System Management Activity


a. Return to the previous page, click on the ‘System Management’ category.
Here are some examples of the audit policy that would trigger activity in
this category.
i. AUDIT ALTER CLUSTER;
ii. AUDIT ALTER DATABASE;
iii. AUDIT ALTER ROLLBACK SEG;
iv. AUDIT ALTER SYSTEM;
v. AUDIT ALTER TABLESPACE;
vi. AUDIT ANALYZE CLUSTER;
vii. AUDIT CREATE CLUSTER;
viii. AUDIT CREATE CONTROL FILE;
ix. AUDIT CREATE DATABASE;
x. AUDIT CREATE ROLLBACK SEG;
xi. AUDIT CREATE TABLESPACE;
xii. AUDIT DISABLE ALL TRIGGERS;
xiii. AUDIT DROP CLUSTER;
xiv. AUDIT DROP ROLLBACK SEG;
xv. AUDIT DROP TABLESPACE;
xvi. AUDIT ENABLE ALL TRIGGERS;
xvii. AUDIT FLASHBACK;
xviii. AUDIT FLASHBACK DATABASE;
xix. AUDIT PURGE DBA_RECYCLEBIN;
xx. AUDIT PURGE TABLESPACE;
xxi. AUDIT SHUTDOWN;
xxii. AUDIT STARTUP;
xxiii. AUDIT SUPER USER DDL;

xxiv. AUDIT SUPER USER DML;
xxv. AUDIT SYSTEM GRANT;
xxvi. AUDIT SYSTEM REVOKE;
xxvii. AUDIT TRUNCATE CLUSTER;

11. User Session Activity


a. Finally, return to the previous page and click on the ‘User Session’
category. You will see all the user login/logoff/session information in this
category.

12. Using the new Audit Vault Ad Hoc Reporting Features


a. Now let’s use the latest Audit Vault reports.

b. Return to the Home tab > Click on Data Access:

c. Click on the ‘Cog’ to access the reporting options available to you.


d. You will see that you are able to:
i. Select more columns for the report
ii. Filter the report
iii. Sort rows
iv. Highlight rows
v. Generate a chart
vi. Save the report for future use
vii. Download the report to CSV
e. This lab will demonstrate the Highlighting functionality. Feel free to test
out the other options. The new Audit Vault reporting functionality is both
powerful and easy to use.
f. Select Highlight:

g. You will now have the ‘Highlight’ configuration panel at the top of the
screen:

h. Add in the following:
i. Name: Highlight Orders
ii. Background Color: Click ‘Yellow’
iii. Text Color: Click ‘Blue’
iv. Column: ‘Target’
v. Expression: ‘ORDERS’
i. Click Apply:

j. You will now see that there are some rows highlighted:

k. Now let’s filter on the ‘Event’ type.
l. Click on the ‘Event’ header in the table. You will see a drop down menu
of events. Select the UPDATE event:

m. The report is now filtered to just show updates:

n. Click on the ‘cog’.
o. Select the ‘Save Report’ option:

p. Enter the following:
i. Name: My Report
ii. Category: Select New Category
iii. Category: Enter SOX Update Reports
iv. Description: Report of updates
13. Using Out-of-the-Box and Saved Reports
a. Click on the ‘Audit Reports’ tab. This page contains all of the out-of-the-box
reports for Audit Vault. We’ll start by checking our saved report.
Click on the ‘Custom Report’ sub-tab:

b. The report that you just saved is located here.
c. Using this functionality we’re able to save commonly used reports and to
categorize them. This could be useful for grouping reports for regulatory
purposes.
d. Click on the My Report link:

e. You will see the report that we created earlier.


f. Click on the ‘Audit Reports’ tab to return to the reporting area. These
reports are intended to help you meet your reporting requirements as
quickly as possible.
g. Click on the ‘Login Failures’:

h. This report lists all of the DB login failures across the audited sources:

i. Return to the reporting home page, then click on the ‘Structure Changes’
link. As with the other reports you are able to drill into the detail for a
given activity:

LAB EXERCISE 16 – Starting Up Source Database and Grid Control 10.2.0.4

Enterprise Manager provides an integrated management solution for managing Oracle
databases with a top-down application management approach. Using Grid Control,
Oracle customers can:
o Manage the software and hardware lifecycle -- Today's data centers need to
respond to rapidly changing business demands, driving the need to provision
software and roll out changes in short order. Configuration policies can be
monitored for changes, and growth can be managed according to standards
and best practices.
o Maximize performance & availability -- Automatically monitor the entire
database environment and proactively resolve issues before they turn into
emergencies
o Elevate administrator productivity -- Give administrators the tools they need
to manage more databases more effectively while increasing their value to
the organization
o Eliminate failures from human error -- Take control of your IT environment by
addressing the causes of unplanned downtime through extensive out-of-box
automation, configuration, and change management capabilities
o Increase auditability of the Oracle database environment -- The information
provided by Enterprise Manager can be analyzed over time so that the
effectiveness of internal procedures and controls may be measured

PLEASE NOTE: In order to preserve memory within the image for Grid Control, first
bring down the entire Audit Vault stack (collectors, agent, and server components).

1. Since Grid Control consumes additional memory resources in the VMware
image, it is recommended to stop the DB Console for the database DB01
before launching Grid Control. Set alias db01, and then run the command
emctl stop dbconsole. Make sure that the DB Console shuts down
cleanly.

oracle:/home/oracle/scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/scripts> emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.1.0.7.0
Copyright (c) 1996, 2008 Oracle Corporation. All rights
reserved.
https://dbsecurity.oracle.com:5501/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
... Stopped.

-149-
2. Since we will be masking DB02, we need to be sure that the database instance
is running:

3. Change directory to /home/oracle/grid_scripts. Run the script
start_grid.sh. This script performs the following:
• Bring up the 11g listener, if it is not already up (ignore any errors
that come up if the 11g listener is already up)
• Start up the EM repository database
• Start up the grid agent
• Start up the Oracle Management Server (OMS)
• Start up the 10gAS infrastructure

Note that this script will take some time to run, commonly up to 5-7 minutes,
depending on the amount of memory you are able to allocate to the image itself.
The steps of starting up the OMS and the 10gAS infrastructure tend to consume
most of the overall running time of the script.

oracle:/home/oracle/grid_scripts> . start_grid.sh
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1

LSNRCTL for Linux: Version 11.1.0.7.0 - Production on 06-OCT-2008 05:50:24

Copyright (c) 1991, 2008, Oracle. All rights reserved.

TNS-01106: Listener using listener name LISTENER has already been started
ORACLE_SID=emrep
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/grid/db10g

SQL*Plus: Release 10.1.0.4.0 - Production on Mon Oct 6 05:50:26 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> ORACLE instance started.

Total System Global Area 536870912 bytes


Fixed Size 780056 bytes
Variable Size 166729960 bytes
Database Buffers 369098752 bytes
Redo Buffers 262144 bytes
Database mounted.
Database opened.
SQL> Disconnected from Oracle Database 10g Enterprise Edition Release
10.1.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_SID=emrep
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/grid/agent10g
Oracle Enterprise Manager 10g Release 4 Grid Control 10.2.0.4.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Starting agent .................. started.
ORACLE_SID=emrep
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/grid/oms10g
Oracle Enterprise Manager 10g Release 4 Grid Control
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
opmnctl: opmn started
Starting HTTP Server ...
Starting Oracle Management Server ...
Checking Oracle Management Server Status ...
Oracle Management Server is Up.
Oracle Enterprise Manager 10g Release 4 Grid Control
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Starting Oracle 10g Application Server Control ..... started.
opmnctl: starting opmn and all managed processes...

Processes in Instance: EnterpriseManager0.dbsecurity.oracle.com


-------------------+--------------------+---------+---------
ias-component | process-type | pid | status
-------------------+--------------------+---------+---------
DSA | DSA | N/A | Down
HTTP_Server | HTTP_Server | 2702 | Alive
LogLoader | logloaderd | N/A | Down
dcm-daemon | dcm-daemon | 2508 | Alive
OC4J | home | 4642 | Alive
OC4J | OC4J_EMPROV | 4643 | Alive
OC4J | OC4J_EM | 2848 | Alive
WebCache | WebCache | 4672 | Alive
WebCache | WebCacheAdmin | 4645 | Alive

-151-
4. In your browser, enter the URL
http://dbsecurity.oracle.com:4889/em, and login as
sysman/oracle1. If using the image’s browser, this URL should show up
in the Favorites pulldown menu as well as in the URL history.

5. Click on the TARGETS tab and verify that your host is shown as being up
(green Up arrow).

6. Click on the DATABASES tab to view all of the discovered databases and
their status. You should see three discovered databases in Grid Control.

-152-
7. Click on the hyperlink for the database DB02.ORACLE.COM and view the
current summary data and alerts for the database. [Partial output shown
below.]

-153-
LAB EXERCISE 17 – Masking Sensitive Application Data

Now that CashBankTrust is protecting its production data, it is imperative to maintain
that same level of confidentiality and protection even when providing realistic test data to
outside application developers and analysts. The Data Masking Pack (an add-on to
Oracle Enterprise Manager) allows masking of data, utilizing a variety of flexible
masking options, while preserving referential integrity and the normal, measurable
actions of applications.

This lab focuses on the EMPLOYEES and related tables, with the goal of protecting PII
(personally identifiable information) from outside developers who work on their HR
application.

1. In Grid Control, select TARGETS -> DATABASES. Select db02.oracle.com,
and then select ADMINISTRATION -> SCHEMA -> TABLES.

-154-
2. Login as system/oracle1, leave “Connect As” set to Normal, and then
click LOGIN.

3. For the table search, enter HR for schema and for object name enter
EMPLOYEES. Click GO.

4. Select View Data from the pulldown menu, then click GO.

-155-
5. Below is a partial output of the data in the table HR.EMPLOYEES. We will
compare this with the masked version of the table later in the lab exercise.

-156-
6. In Firefox, open a new Tab so that you may return to view this data. In the
new Tab, navigate to the Administration page for database db02.oracle.com.
Under Data Masking, select Definitions.

-157-
7. Click MASK to proceed with the steps for masking data.

8. You need to select the columns you want to include in this mask definition.
Accept the default mask name, and under columns click ADD.

9. There are several ways by which sensitive data can be identified. One
recommended approach is to identify the sensitive data by tagging the
associated sensitive columns with a keyword, such as MASK. Enter HR for
the Schema and MASK% for the Comment Name and click SEARCH.
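
A minimal sketch of how such tagging could have been done ahead of time; the comment text is illustrative, and only the MASK keyword matters for the search used in this step:

-- Tag a sensitive column so that searches on MASK% can find it later
comment on column hr.employees.employee_id is
  'MASK: contains personally identifiable information';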

-158-
10. Select the column for EMPLOYEE_ID and click ADD.

11. Notice how all associated foreign keys were added automatically. In this case,
there is an additional table named MANAGERS that is a part of the HR
application, but all of its constraints are enforced in the application and not in
the database. The MANAGERS table uses EMPLOYEES as its parent table
(through the EMPLOYEE_ID column), but the relationship is not registered in
the database as a foreign key constraint. Therefore, we must add a dependent
column on the EMPLOYEE_ID column. Click the + for the EMPLOYEE_ID
column under “Add” for Dependent Columns.

-159-
12. Enter HR for schema and MANAGERS for table name, then click SEARCH.

-160-
13. Select MGR_ID from the displayed column list, followed by clicking ADD.

14. The dependent column was added. Now we can define the mask format of the
EMPLOYEE_ID column. Click the Format icon.

15. Select RANDOM NUMBERS from the Add pulldown list and click GO.

16. Enter 100000000 for Start Value and 999999999 for End Value and click OK.

-161-
17. Verify that your column mask was correctly created, then click OK.

18. We will add four more columns in HR.EMPLOYEES for masking:
FIRST_NAME, LAST_NAME, PHONE_NUMBER, and SALARY. Click
ADD to add more columns. Select HR for schema, EMPLOYEES for table
name, then click SEARCH. [Notice how this time we are looking for all table
columns, not just those that have a special column comment.]

-162-
19. Select the four columns listed in Step 18. Click ADD.

20. We have to format each of these 4 added columns. Select the Format icon for
LAST_NAME.

-163-
21. We will use a format already defined in the Format Library. Click IMPORT
FROM LIBRARY.

22. Select the masking definition “Anglo American Last Name” then click IMPORT.

-164-
23. View the sample masked data generated for this column. Then click OK.

24. Select the Format icon for FIRST_NAME.

25. We will again use a masking format already defined in the Format Library.
Click IMPORT FROM LIBRARY

-165-
26. Select “Anglo American First Name”, and then click IMPORT.

27. Observe the sample generated mask data for this column. Then click OK.

28. Select the Format icon for Phone Number. Use the existing mask “Bay Area
Phone Number” from the Format Library.

-166-
29. Select the Format icon for Salary. For this column we will randomly shuffle the
original column data. Select SHUFFLE from the Add drop-down list and then click OK.

30. Now that we have identified all columns we wish to mask, click NEXT.

-167-
31. Next, the masking script is generated. When completed, an Impact Report is
displayed. Verify that there are no identified issues with running the masking
script, then click NEXT.

-168-
32. Name your job “EMPLOYEE_DATA_MASKING_JOB”. Enter
oracle/oracle1 as your host credentials. Then click NEXT.

33. Notice that you have the option of saving the full masking script at this point
if you wish to deploy it manually yourself. In our case we will have the script
run immediately. Click SUBMIT.

-169-
34. You should now see a screen indicating that the job was successfully
submitted. Click “View Job Details”.

35. Verify that your job has run successfully by examining the status output
below.

36. We can now verify that our data has been masked. Return to
ADMINISTRATION -> SCHEMA -> TABLES. For the HR.EMPLOYEES
table, select the pulldown option View Data, followed by clicking GO.

-170-
37. Compare the data now displayed to what you observed in Step 5 (in your first
Tab). Notice how the 5 columns of interest have been successfully masked.

-171-
APPENDIX A – Command Line Examples For Database Vault

• Creating Application Realms


connect dvowner/welcome1%

exec dvsys.dbms_macadm.create_realm ('HR Realm','Realm for Human Resources Application','Y',1);

exec dvsys.dbms_macadm.create_realm ('OE Realm','Realm for Order Entry Application','Y',1);

exec dvsys.dbms_macadm.create_realm ('SH Realm','Realm for Sales Application','Y',1);

• Adding Realm Secured Objects


connect dvowner/welcome1%

exec dvsys.dbms_macadm.add_object_to_realm ('HR Realm','HR','%','%');

exec dvsys.dbms_macadm.add_object_to_realm ('OE Realm','OE','%','%');

exec dvsys.dbms_macadm.add_object_to_realm ('SH Realm','SH','%','%');

• Adding Realm Authorizations


connect dvowner/welcome1%

exec dvsys.dbms_macadm.add_auth_to_realm ('HR Realm','HR_ADMIN',NULL,1);

exec dvsys.dbms_macadm.add_auth_to_realm ('OE Realm','OE_ADMIN',NULL,1);

exec dvsys.dbms_macadm.add_auth_to_realm ('SH Realm','SH_ADMIN',NULL,1);

• Creating Command Rules


Begin
dvsys.dbms_macadm.create_command_rule (
command => 'DROP TABLE',
rule_set_name => 'Non Working Hours',
object_owner => 'HR',
object_name => '%',
enabled => 'Y');
End;
/

• Creating Rules
Begin
dvsys.dbms_macadm.create_rule (
rule_name => 'Night Hours',
rule_expr => 'to_char(sysdate,''hh24'') not between ''08'' and ''16''');
End;
/
Begin
dvsys.dbms_macadm.create_rule (
rule_name => 'Weekend Hours',
rule_expr => 'to_char(sysdate,''D'') not between ''2'' and ''6''');
End;

/

• Creating Rule Sets


Begin
dvsys.dbms_macadm.create_rule_set (
rule_set_name => 'Non Working Hours',
description => 'Hours outside of 8am to 5pm weekdays and all weekends',
enabled => 'Y',
eval_options => 2, /* 1=All True, 2=Any True */
audit_options => 1, /* 0=Never, 1=On failure, 2=On success */
fail_options => 1, /* 1=Show errmsg, 2=Don't show errmsg */
fail_message => 'Cannot perform operation during work hours',
fail_code => 0,
handler_options => 0, /* 0=None, 1=On failure, 2=On success */
handler => NULL);
End;
/

• Adding Rules to Rule Sets


Begin
dvsys.dbms_macadm.add_rule_to_rule_set (
rule_set_name => 'Non Working Hours',
rule_name => 'Night Hours',
rule_order => 1);
End;
/
Begin
dvsys.dbms_macadm.add_rule_to_rule_set (
rule_set_name => 'Non Working Hours',
rule_name => 'Weekend Hours',
rule_order => 2);
End;
/

-173-
APPENDIX B – Oracle Label Security (OLS) Lab Exercise

This lab exercise demonstrates the use of Oracle Label Security to set up row-level
security based on label policies. All scripts used in this exercise are included in the
VMware image under $HOME/setup_scripts.

Lab Overview

Oracle Label Security makes separation of duty easy: When lbacsys, the default OLS
administrator account, creates a policy, a role with the name "<policy_name>_DBA" is
automatically granted to lbacsys with the ADMIN option, so that the role can be granted
to other users for them to complete and own the policy. In this lab exercise, these users
are named "sec_admin" and "hr_sec".

There are three parts to an access control policy:

1. The table containing the sensitive data (LOCATIONS) and the owner of this data
(hr), who determines the sensitivity of his data and who will get access to which
level of sensitivity.

2. The user-related part of the OLS policy is maintained by the user hr_sec, who
creates database users and roles and grants clearances to them.

3. The OLS labels (both for data and users), which enable the access mediation
defined by the data owner, are created by sec_admin. Furthermore, this user is
responsible for maintaining the performance of the application.

When the policy is tested and ready for production, lbacsys revokes all necessary
execution rights and roles from both "hr_sec" and "sec_admin".

I. Setup

To create users and roles, open a terminal window within your VMware
image and execute the following commands:

cd /home/oracle/labs
sqlplus /nolog
@ols_create_admin_users_and_roles.sql

Partial output shown below:

-174-
II. Creating a Policy

In this section, you will create a policy, grant the role to the admin users, and
create the levels and labels for the policy. Perform the following:

1. User LBACSYS creates a policy which will control access to the
hr.LOCATIONS table; the name of the policy is 'ACCESS_LOCATIONS';
the name of the hidden column which will be appended to the
hr.LOCATIONS table to hold the data labels is called 'OLS_COLUMN'.
From a SQL*Plus session, execute the script ols_create_policy.sql
to create your policy.
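
The screenshot of this step is not reproduced here. As a rough sketch, a script like ols_create_policy.sql would typically contain something along these lines (READ_CONTROL is the enforcement option referenced later in this exercise):

connect lbacsys
begin
  sa_sysdba.create_policy(
    policy_name     => 'ACCESS_LOCATIONS',
    column_name     => 'OLS_COLUMN',
    default_options => 'READ_CONTROL');
end;
/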

2. When the policy is created, an administration role for this policy is automatically
granted to LBACSYS with the 'admin' option. In order to enable proper separation
of duty, lbacsys grants this role and some additional execution rights to the admin
users 'HR_sec' and 'sec_admin'. From a SQL*Plus session, execute the script
ols_grant_role.sql:

-176-
3. The sec_admin user creates the levels for the policy. Each policy consists of
levels (one or more), and optional compartments and groups, which are not
included in this example. Execute the script ols_create_level.sql to
create levels for your policy.

-177-
4. The sec_admin user also creates the labels (which only contain levels, no
compartments or groups). Execute the script ols_create_label.sql:
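
Again, only the screenshots are omitted here. A hedged sketch of what ols_create_level.sql and ols_create_label.sql might contain follows; the level numbers, the short names (PUB, CONF, SENS) and the label tags are assumptions made for illustration:

connect sec_admin
begin
  -- Levels, from least to most sensitive
  sa_components.create_level('ACCESS_LOCATIONS', 1000, 'PUB',  'PUBLIC');
  sa_components.create_level('ACCESS_LOCATIONS', 2000, 'CONF', 'CONFIDENTIAL');
  sa_components.create_level('ACCESS_LOCATIONS', 3000, 'SENS', 'SENSITIVE');

  -- Data labels built from those levels (no compartments or groups)
  sa_label_admin.create_label('ACCESS_LOCATIONS', 10, 'PUB');
  sa_label_admin.create_label('ACCESS_LOCATIONS', 20, 'CONF');
  sa_label_admin.create_label('ACCESS_LOCATIONS', 30, 'SENS');
end;
/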

III. Setting User Authorizations

Later, data access rights will be limited by applying the labels
you created earlier to the data. Before this, you need to
authorize users and grant privileges to the policies, in order to
define the matching access rights to these users. Perform the
following:

1. The HR_SEC user binds the labels to the users, defining their clearance.
From a SQL*Plus session, execute the script
ols_set_user_label.sql to create user label authorizations:
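
For reference, a minimal sketch of what ols_set_user_label.sql might contain; the
clearances match the results observed in Section VIII, and the short label names assume
the levels sketched earlier:

Begin
  sa_user_admin.set_user_labels ('ACCESS_LOCATIONS', 'SKING',    'SENS');
  sa_user_admin.set_user_labels ('ACCESS_LOCATIONS', 'KPARTNER', 'CONF');
  sa_user_admin.set_user_labels ('ACCESS_LOCATIONS', 'LDORAN',   'PUB');
End;
/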

2. HR, the owner of the LOCATIONS table, needs full access to the table, since the
user will later add the data labels into the hidden column named OLS_COLUMN
defined earlier. From a SQL*Plus session, execute the script
ols_set_user_privs.sql:
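
For reference, a minimal sketch of what ols_set_user_privs.sql might contain; granting
the FULL privilege lets HR bypass the policy's access mediation on its own table:

Begin
  sa_user_admin.set_user_privs ('ACCESS_LOCATIONS', 'HR', 'FULL');
End;
/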

IV. Applying a Policy to a Table

You can apply Oracle Label Security policies to entire application schemas or to
individual application tables. You will apply it to the LOCATIONS table. Perform the
following:

1. The sec_admin user applies the policy to the table. From now on, since
READ_CONTROL has been set in the policy definition and no labels are added
to the rows, no one can read the data (except HR). Execute the script
ols_apply_policy.sql:
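
For reference, a minimal sketch of what ols_apply_policy.sql might contain; omitting
table_options makes the table inherit the policy's default options (which, per the step
above, include READ_CONTROL):

Begin
  sa_policy_admin.apply_table_policy (
    policy_name => 'ACCESS_LOCATIONS',
    schema_name => 'HR',
    table_name  => 'LOCATIONS');
End;
/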

V. Adding Labels to the Data

Before you can test the policy, you must add the label to the
data by performing the following:

1. HR, the owner of the LOCATIONS table, adds the labels for each row into the
   hidden column 'OLS_COLUMN'. In this case, you will assign the Sensitive label
   to the cities Beijing, Tokyo, and Singapore; the Confidential label to the cities
   Munich, Oxford, and Roma; and the Public label to all other cities. Run the
   script ols_add_label_column.sql.
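
For reference, a minimal sketch of the updates ols_add_label_column.sql might run as HR;
the short label names assume the levels sketched earlier:

UPDATE hr.locations
   SET ols_column = char_to_label ('ACCESS_LOCATIONS', 'SENS')
 WHERE city IN ('Beijing', 'Tokyo', 'Singapore');

UPDATE hr.locations
   SET ols_column = char_to_label ('ACCESS_LOCATIONS', 'CONF')
 WHERE city IN ('Munich', 'Oxford', 'Roma');

UPDATE hr.locations
   SET ols_column = char_to_label ('ACCESS_LOCATIONS', 'PUB')
 WHERE ols_column IS NULL;

COMMIT;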

VI. Creating an Index on OLS_COLUMN

To improve performance of data access, create a BITMAP INDEX on the OLS_COLUMN.
Perform the following steps:

1. As user sec_admin, create a bitmap index on the OLS_COLUMN by running the
   script ols_create_index.sql:
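
For reference, a minimal sketch of what ols_create_index.sql might contain; the index
name is illustrative, and it assumes sec_admin has been granted the necessary privileges
(for example, CREATE ANY INDEX):

CREATE BITMAP INDEX hr.locations_ols_idx
  ON hr.locations (ols_column);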

VII. Revoking Access from Admin Users

In order to secure the policy, you need to revoke policy-specific execution rights and
roles from sec_admin and hr_sec. Perform the following steps:

1. From your SQL*Plus session, execute the script ols_revoke_access.sql:
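
For reference, a minimal sketch of the statements ols_revoke_access.sql might run as
lbacsys, mirroring the grants assumed earlier:

REVOKE access_locations_dba FROM sec_admin, hr_sec;
REVOKE EXECUTE ON sa_components   FROM sec_admin;
REVOKE EXECUTE ON sa_label_admin  FROM sec_admin;
REVOKE EXECUTE ON sa_policy_admin FROM sec_admin;
REVOKE EXECUTE ON sa_user_admin   FROM hr_sec;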

VIII. Testing the Policy Implementation

After applying the policy to the table, authorizing users, and adding labels to the data,
you can now test the implementation by performing the following:

1. Execute the script ols_test_policy_sking.sql to test the policy for the SKING
   user. Note that SKING can see PUBLIC, CONFIDENTIAL and SENSITIVE data.
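
For reference, a minimal sketch of the kind of query ols_test_policy_sking.sql might run
(SQL*Plus prompts for the password); LABEL_TO_CHAR displays the row label applied to each
visible row:

CONNECT sking
SELECT city, label_to_char (ols_column) AS row_label
  FROM hr.locations
 ORDER BY row_label, city;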

2. Now you can test the policy for the KPARTNER user by executing the
script ols_test_policy_kpartner.sql. Note that KPARTNER
can see PUBLIC and CONFIDENTIAL data.

3. Now you can test the policy for the LDORAN user by executing the script
   ols_test_policy_ldoran.sql. Note that LDORAN can only see
   PUBLIC data.

IX. Cleanup

Now that you have tested your policies, drop the users and the
access policies by performing the following:

1. Execute the script ols_cleanup.sql:
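
For reference, a minimal sketch of what ols_cleanup.sql might do as lbacsys; passing TRUE
for drop_column also drops OLS_COLUMN from hr.LOCATIONS, and the DROP USER statements
assume the users were created solely for this lab:

Begin
  sa_sysdba.drop_policy ('ACCESS_LOCATIONS', TRUE);
End;
/
DROP USER sec_admin CASCADE;
DROP USER hr_sec CASCADE;
DROP USER sking CASCADE;
DROP USER kpartner CASCADE;
DROP USER ldoran CASCADE;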

Lab Summary

In this lab exercise, you've learned how to:

• Create a Policy
• Set User Authorizations
• Apply a Policy to a Table
• Add Labels to the Data
• Test the Policy

APPENDIX C – Expanding Logical Volumes In VM Image

*** This example shows how to expand a virtual disk from 20GB to 40GB for an image
running OEL4.

To locate the correct virtual disk to resize, go into VM -> Settings:

Since there may be several images with the same name, make sure that the correct file is
selected; this is why a command window was opened in the correct path in the screenshot.

Restart the image, logging in as root.

Run /sbin/fdisk -l to see the disk devices.

sda1 is the boot disk, so it is sda2 that we want to expand.

Next, delete partition 2 (since we are going to add it back resized).

Notice how the partition map now shows only the boot partition.

The warning above is okay; we just have to reboot the VMware image at this point.

The next step is to reboot.

After the reboot, log in as root.

Next, run the command:

ext2online -v /

to resize the file system on the fly.

(ETC.)

At the end of the above command running, we get:

(ETC.)

Finally, check the current space via df -h. We now see the expanded size of the volume.

APPENDIX D – Glossary Of Terms

Below is a summary of technical terms used throughout this document.

Data Dictionary    A set of tables and views used by Oracle to track information
                   about all objects in a database.

DDL                Data Definition Language. Includes statements like
                   CREATE/ALTER TABLE/INDEX, which define or change data
                   structure.

DML                Data Manipulation Language. Includes statements like INSERT,
                   UPDATE, and DELETE, which change data in tables.

Function           A PL/SQL subprogram that returns a single value.

Object             Anything that Oracle is able to create and maintain in a
                   database. Examples include tables, views, packages, functions,
                   procedures, and indexes.

Package            A schema object that groups logically related PL/SQL types,
                   variables, and subprograms. Packages usually have two parts, a
                   specification (spec) and a body.

PL/SQL             Oracle's procedural language extension to SQL. PL/SQL enables
                   you to mix SQL statements with procedural constructs. With
                   PL/SQL, you can define and execute PL/SQL program units such
                   as procedures, functions, and packages.

Procedure          A PL/SQL subprogram that performs specific operations. A
                   procedure may or may not return values when executed.

Role               A database object that includes privileges and other roles to
                   be granted to users for performing specific operations in a
                   database.

Schema             A collection of database objects, including logical structures
                   such as tables, views, sequences, stored procedures, synonyms,
                   indexes, clusters, and database links. A schema has the name
                   of the user who controls it.

Schema owner       A user who owns a collection of one or more objects in a
                   database.

SQL*Plus           Oracle tool used to run SQL statements against an Oracle
                   database.

Table              Basic unit of data storage in an Oracle database. Table data
                   is stored in rows and columns.

Username           The name by which a user is known to the Oracle database
                   server and to other users. Every username is associated with a
                   password, and both must be entered to connect to an Oracle
                   database.

View               A custom-tailored presentation of the data in one or more
                   tables. A view can also be thought of as a "stored query."
                   Views do not actually contain or store data; they derive their
                   data from the tables on which they are based.

APPENDIX E – Releasing Flash Recovery Area Disk Space In VM Image

Do not simply remove archive logs from the flash recovery area, since RMAN might
expect to need them for recovery and could give you errors on startup and shutdown. Run
the commands below, followed by removing the physical archivelogs in the flash recovery
area.

Also, be sure you are running the correct rman command. As shown below, it is best to
run rman with its full path. Otherwise, you might end up running the Linux OS command
instead!

oracle:/home/oracle> $OH/bin/rman

Recovery Manager: Release 11.1.0.6.0 - Production on Sat Jul 12 17:46:22 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

RMAN> connect target /

connected to target database: DB01 (DBID=1290309308)

RMAN> crosscheck archivelog all;

using target database control file instead of recovery catalog


allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=114 device type=DISK
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_02/o1_mf_1_2_46r9swwy_
.arc RECID=2 STAMP=659041889
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_3_659041283.dbf
RECID=3 STAMP=659042408
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_02/o1_mf_1_3_46rb9zrz_
.arc RECID=4 STAMP=659042408
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_4_659041283.dbf
RECID=5 STAMP=659042643
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_02/o1_mf_1_4_46rbkfsz_
.arc RECID=6 STAMP=659042643
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_5_659041283.dbf
RECID=7 STAMP=659042928
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_02/o1_mf_1_5_46rbtcmz_
.arc RECID=8 STAMP=659042928
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_6_659041283.dbf
RECID=9 STAMP=659043228
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_7_659041283.dbf
RECID=10 STAMP=659043333
validation succeeded for archived log

archived log file name=/oracle/product/11.0/db_1/dbs/arch1_8_659041283.dbf
RECID=11 STAMP=659043428
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_9_659041283.dbf
RECID=12 STAMP=659044033
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_10_659041283.dbf
RECID=13 STAMP=659044179
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_11_659041283.dbf
RECID=14 STAMP=659044455
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_12_659041283.dbf
RECID=15 STAMP=659044809
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_13_659041283.dbf
RECID=16 STAMP=659044818
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_14_659041283.dbf
RECID=17 STAMP=659150236
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_15_659041283.dbf
RECID=18 STAMP=659150236
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_04/o1_mf_1_15_46vmmws0
_.arc RECID=19 STAMP=659150236
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_16_659041283.dbf
RECID=20 STAMP=659150248
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_17_659041283.dbf
RECID=21 STAMP=659151067
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_18_659041283.dbf
RECID=22 STAMP=659151101
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_19_659041283.dbf
RECID=23 STAMP=659250925
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_20_659041283.dbf
RECID=24 STAMP=659251072
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_21_659041283.dbf
RECID=25 STAMP=659252955
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_22_659041283.dbf
RECID=26 STAMP=659899619
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_23_659041283.dbf
RECID=27 STAMP=659899766
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_24_659041283.dbf
RECID=28 STAMP=659900370
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_12/o1_mf_1_24_47lj5klk
_.arc RECID=29 STAMP=659900370
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_25_659041283.dbf
RECID=30 STAMP=659900525
validation succeeded for archived log

archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_12/o1_mf_1_25_47ljbdh0
_.arc RECID=31 STAMP=659900525
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_26_659041283.dbf
RECID=32 STAMP=659900795
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_12/o1_mf_1_26_47ljlt8m
_.arc RECID=33 STAMP=659900795
Crosschecked 32 objects

RMAN> delete expired archivelog all;

released channel: ORA_DISK_1


allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=114 device type=DISK
specification does not match any archived log in the recovery catalog

RMAN> exit

Recovery Manager complete.


oracle:/home/oracle>

APPENDIX F – Configuring Audit Management Features In Release 10.2.0.3

For Oracle 10.2.0.3, we show how to configure several new features that allow you to
move the source database native audit trails outside of the SYSTEM tablespace, as well as
to configure a job that automatically purges old audit records after they have been moved
to Audit Vault’s repository. Patch 6989148, which provides these capabilities, currently
can be installed only on 10.2.0.3 sources. There is no patch to provide these capabilities
to 11.1.0.6 sources. Release 11.1.0.7 does contain this functionality, however.

The steps shown below are for a 10.2.0.3 Oracle Home running one database (db03).

1. Our first step is to install Patch 6989148. Change directory to
   /home/oracle/patches and unzip the file p6989148_10203_LINUX.zip.
   Also, review the README.txt file for this patch.

oracle:/home/oracle> cd patches
oracle:/home/oracle/patches> ls
p6989148_10203_LINUX.zip README.txt
oracle:/home/oracle/patches> unzip p6989148_10203_LINUX.zip
Archive: p6989148_10203_LINUX.zip
creating: 6989148/
creating: 6989148/files/
creating: 6989148/files/lib/
creating: 6989148/files/lib/libserver10.a/
inflating: 6989148/files/lib/libserver10.a/aud.o
inflating: 6989148/files/lib/libserver10.a/kza.o
inflating: 6989148/files/lib/libserver10.a/kzam.o
.
.
.
oracle:/home/oracle/patches>

2. As stated in the file README.txt, we must shut down all instances for the 10.2.0.3
   Home since opatch checks to see if any binaries in the Home being patched
   are active. Set alias 10g to point to the correct environment settings.
   Shut down EM Database Control and the instance for database DB03. Ensure that
   the DB03 instance is shut down cleanly.

oracle:/oracle/product/10.2> 10g
ORA_AGENT_HOME=/oracle/product/grid/agent10g
ORACLE_SID=db03
ORACLE_BASE=/oracle
ORACLE_HOME=/oracle/product/10.2/db_1
OH=/oracle/product/10.2/db_1
oracle:/oracle/product/10.2> emctl stop dbconsole
TZ set to US/Mountain
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
http://lysy-kestner.oracle.com:5502/em/console/aboutApplication

Stopping Oracle Enterprise Manager 10g Database Control ...
... Stopped.
oracle:/oracle/product/10.2> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.3.0 - Production on Sun Sep 14 07:05:53 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options

SQL> shutdown immediate


Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
- Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options

3. Change directory to /home/oracle/patches/6989148. Run opatch to apply
   Patch 6989148 to the 10.2.0.3 Home.

oracle:/home/oracle/patches/6989148> $ORACLE_HOME/OPatch/opatch apply


Invoking OPatch 10.2.0.3.0

Oracle interim Patch Installer version 10.2.0.3.0


Copyright (c) 2005, Oracle Corporation. All rights reserved..

Oracle Home : /oracle/product/10.2/db_1


Central Inventory : /oracle/oraInventory
from : /etc/oraInst.loc
OPatch version : 10.2.0.3.0
OUI version : 10.2.0.3.0
OUI location : /oracle/product/10.2/db_1/oui
Log file location : /oracle/product/10.2/db_1/cfgtoollogs/opatch/opatch2008-
09-14_07-20-31AM.log

ApplySession applying interim patch '6989148' to OH


'/oracle/product/10.2/db_1'
Invoking fuser to check for active processes.
Invoking fuser on "/oracle/product/10.2/db_1/bin/oracle"

OPatch detected non-cluster Oracle Home from the inventory and will patch
the local system only.

Please shutdown Oracle instances running out of this ORACLE_HOME on the


local system.
(Oracle Home = '/oracle/product/10.2/db_1')

Is the local system ready for patching?

Do you want to proceed? [y|n]


y
User Responded with: Y
Backing up files and inventory (not for auto-rollback) for the Oracle Home
Backing up files affected by the patch '6989148' for restore. This might
take a while...

Patching component oracle.rdbms, 10.2.0.3.0...
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/aud.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kza.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kzam.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kzax.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kzft.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/psdsys.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/szaud.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/zlle.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kspare.o"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/catamgt.sql"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/catlbacs.sql"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/catnools.sql"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/dbmsamgt.sql"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/prvtamgt.plb"
ApplySession adding interim patch '6989148' to inventory

Verifying the update...


Inventory check OK: Patch ID 6989148 is registered in Oracle Home inventory
with proper meta-data.
Files check OK: Files from Patch ID 6989148 are present in Oracle Home.
Running make for target ioracle

The local system has been patched and can be restarted.

OPatch succeeded.

4. Run “opatch lsinventory” to verify that the patch has been registered with the
Oracle Inventory file.
oracle:/home/oracle/patches/6989148> $ORACLE_HOME/OPatch/opatch lsinventory
Invoking OPatch 10.2.0.3.0

Oracle interim Patch Installer version 10.2.0.3.0


Copyright (c) 2005, Oracle Corporation. All rights reserved..

Oracle Home : /oracle/product/10.2/db_1


Central Inventory : /oracle/oraInventory
from : /etc/oraInst.loc
OPatch version : 10.2.0.3.0
OUI version : 10.2.0.3.0
OUI location : /oracle/product/10.2/db_1/oui
Log file location : /oracle/product/10.2/db_1/cfgtoollogs/opatch/opatch2008-
09-14_07-25-38AM.log

Lsinventory Output file location :


/oracle/product/10.2/db_1/cfgtoollogs/opatch/lsinv/lsinventory2008-09-14_07-
25-38AM.txt

----------------------------------------------------------------------------
----
Installed Top-level Products (3):

Oracle Database 10g


10.2.0.1.0
Oracle Database 10g Products
10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 2
10.2.0.3.0
There are 3 products installed in this Oracle Home.

Interim patches (3) :

Patch 6989148 : applied on Sun Sep 14 07:23:13 MDT 2008


Created on 13 Jun 2008, 05:21:31 hrs PST8PDT
Bugs fixed:
6152756, 6940487, 7120290, 7120267, 6954407, 6023472, 4085593, 5021304
6989148, 6322324, 6655588, 6726958, 5248932, 6964283, 4740049, 5936250

Patch 5556081 : applied on Sun Sep 14 06:47:04 MDT 2008


Created on 9 Nov 2006, 22:20:50 hrs PST8PDT
Bugs fixed:
5556081

Patch 5557962 : applied on Sun Sep 14 06:46:50 MDT 2008


Created on 9 Nov 2006, 23:23:06 hrs PST8PDT
Bugs fixed:
4269423, 5557962, 5528974

----------------------------------------------------------------------------
----

OPatch succeeded.

5. As the file README.txt explains, after running opatch to install Patch
   6989148, we need to start up the DB03 database and then run several
   scripts to install DBMS_AUDIT_MGMT and other packages.

oracle:/home/oracle/patches/6989148> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.3.0 - Production on Sun Sep 14 07:29:06 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 419430400 bytes


Fixed Size 1262140 bytes
Variable Size 130026948 bytes
Database Buffers 285212672 bytes
Redo Buffers 2928640 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
- Production

With the Partitioning, Oracle Label Security, OLAP and Data Mining options
oracle:/home/oracle/patches/6989148> cd $OH/rdbms/admin
oracle:/oracle/product/10.2/db_1/rdbms/admin> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.3.0 - Production on Sun Sep 14 07:31:10 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options

SQL> @@catamgt.sql

Table created.

Comment created.

Comment created.
.
.
.
SQL> @@dbmsamgt.sql

Package created.

Grant succeeded.

SQL> @@prvtamgt.plb

Library created.

Package body created.


SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
- Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
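
With the packages installed, the new audit management features can be configured. The
sketch below follows the DBMS_AUDIT_MGMT interface as documented for later releases; the
tablespace name, intervals, and job name are illustrative, and the exact procedures
available in this backport should be confirmed against the patch README:

connect / as sysdba

Begin
  -- move the standard audit trail (SYS.AUD$) out of the SYSTEM tablespace
  dbms_audit_mgmt.set_audit_trail_location (
    audit_trail_type           => dbms_audit_mgmt.audit_trail_aud_std,
    audit_trail_location_value => 'AUDIT_TBS');   /* illustrative tablespace name */

  -- initialize audit trail cleanup with a default interval of 24 hours
  dbms_audit_mgmt.init_cleanup (
    audit_trail_type         => dbms_audit_mgmt.audit_trail_all,
    default_cleanup_interval => 24);

  -- purge records already collected by Audit Vault every 24 hours
  dbms_audit_mgmt.create_purge_job (
    audit_trail_type           => dbms_audit_mgmt.audit_trail_all,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'DB03_AUDIT_PURGE_JOB',  /* illustrative job name */
    use_last_arch_timestamp    => TRUE);
End;
/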

APPENDIX G – How To Convert FAT Disk Partition To NTFS

How to Convert FAT Disks to NTFS


Source: Microsoft Technet (http://www.microsoft.com/technet/prodtechnol/winxppro/maintain/convertfat.mspx)

Published: October 25, 2001

This article describes how to convert FAT disks to NTFS. See the Terms sidebar for definitions of FAT,
FAT32 and NTFS. Before you decide which file system to use, you should understand the benefits and
limitations of each of them.

Changing a volume's existing file system can be time-consuming, so choose the file system that best
suits your long-term needs. If you decide to use a different file system, you must back up your data and
then reformat the volume using the new file system. However, you can convert a FAT or FAT32 volume
to an NTFS volume without formatting the volume, though it is still a good idea to back up your data
before you convert.

Note Some older programs may not run on an NTFS volume, so you should research the current
requirements for your software before converting.

Choosing Between NTFS, FAT, and FAT32


You can choose between three file systems for disk partitions on a computer running Windows XP: NTFS,
FAT, and FAT32. NTFS is the recommended file system because it is more powerful than FAT or FAT32,
and includes features required for hosting Active Directory as well as other important security features.
You can use features such as Active Directory and domain-based security only by choosing NTFS as your
file system.

Converting to NTFS Using the Setup Program


The Setup program makes it easy to convert your partition to the new version of NTFS, even if it used FAT
or FAT32 before. This kind of conversion keeps your files intact (unlike formatting a partition).

Setup begins by checking the existing file system. If it is NTFS, conversion is not necessary. If it is FAT or
FAT32, Setup gives you the choice of converting to NTFS. If you don't need to keep your files intact and
you have a FAT or FAT32 partition, it is recommended that you format the partition with NTFS rather than
converting from FAT or FAT32. (Formatting a partition erases all data on the partition and allows you to
start fresh with a clean drive.) However, it is still advantageous to use NTFS, regardless of whether the
partition was formatted with NTFS or converted.

Converting to NTFS Using Convert.exe


A partition can also be converted after Setup by using Convert.exe. For more information about
Convert.exe, after completing Setup, click Start, click Run, type cmd, and then press ENTER. In the
command window, type help convert, and then press ENTER.

It is easy to convert partitions to NTFS. The Setup program makes conversion straightforward, whether
your partitions used FAT, FAT32, or the older version of NTFS. This kind of conversion keeps your files
intact (unlike formatting a partition).

To find out more information about Convert.exe
1. After completing Setup, click Start, click Run, type cmd, and then press ENTER.

2. In the command window, type help convert and then press ENTER. Information about converting
FAT volumes to NTFS is made available as shown below.

To convert a volume to NTFS from the command prompt

1. Open Command Prompt. Click Start, point to All Programs, point to Accessories, and then
   click Command Prompt.

2. In the command prompt window, type: convert drive_letter: /fs:ntfs

For example, typing convert D: /fs:ntfs would convert drive D: to the NTFS file system. You can convert
FAT or FAT32 volumes to NTFS with this command.

Important Once you convert a drive or partition to NTFS, you cannot simply convert it back to FAT or
FAT32. You will need to reformat the drive or partition which will erase all data, including programs and
personal files, on the partition.

Oracle Audit Vault Best Practices
Nov 2007

Introduction
Installing Audit Vault
  Deployment Plan
    Audit Vault Server
    Audit Vault Collection Agent
    Which Collector(s) Should I Deploy?
    Recommended Collector and Database Audit Configuration
  Near Real-Time Alerts
  Near Real-Time Reporting
    Recommendations on ETL process
Oracle Database Auditing
  Audit Trail Contents and Locations
    Recommended Database Audit Configuration
  Audit Settings – Secure Configuration
    Recommended Database Audit Settings
    Database Auditing Performance
    Auditing and the Audit Vault Collectors
Managing Audit Data on the Source
    Removing Audit Data from the Database
    Recommended Database Audit Cleanup Periods
    Removing Audit Data from the Operating System
Oracle Audit Vault Maintenance
    Audit Vault Server Log Files
    Audit Vault Collection Agent Log Files
    Oracle Audit Vault Disaster Recovery
    Recommended Recovery Configuration
Appendix A. Audit Trail Maintenance Scripts
Appendix B. Database Source Audit Settings



Introduction
Oracle Audit Vault automates the audit data consolidation and analysis process, turning
audit data into a key security resource to help address today's security and compliance
challenges. Oracle Audit Vault is built on Oracle’s industry leading database security
and data warehousing products. This paper provides best practices for deploying Oracle
Audit Vault in your enterprise. Information on deployment architectures and expected
performance is included. In addition, this paper provides information on the auditing
capabilities of the Oracle database and recommended best practices. Oracle Audit Vault
supports consolidating audit data from Oracle9i Release 2 and higher databases. Oracle
is currently working to support heterogeneous databases in a future release of Oracle
Audit Vault.

Please note that this document will be updated on a regular basis to contain the latest
information based on development and customer feedback. These best practices will be
included in future releases of the Oracle Audit Vault documentation.

Installing Audit Vault


The architecture of Audit Vault consists of two major components that work in concert to
store and secure the audit data. They are:

• Audit Vault Server – A stand-alone application stack that contains a data
  warehouse built on a customized installation of Oracle Database 10g (10.2.0.3),
  with Oracle Database Vault providing security, and OC4J components to support
  an Audit Vault Console and Enterprise Manager’s Database Control.
• Audit Vault Collection Agent – The Collection Agent is responsible for managing
collectors and maintaining the Audit Vault wallet.
o Collectors - A collector is specific to an audit source and acts as the
middleman between the source and the Audit Vault Server by pulling the
audit trail data from the source and sending it to the Audit Vault Server
over SQL*Net.
o Audit Vault Wallet – The wallet is used to maintain the password for the
collector to connect to the sources to pull audit data from the database.



Figure 1 Audit Vault Architecture

Deployment Plan
While Audit Vault provides consolidation and secure storage of audit data, planning the
installation of the Audit Vault components will ensure a faster installation and overall
success of implementing a compliant solution. The following sections discuss the
pre-installation considerations for the Audit Vault Server and Audit Vault Collection Agents.

Audit Vault Server


The Audit Vault Server should be installed on its own host or on a host that contains other
repository databases such as Enterprise Manager Grid Control or the Oracle Recovery
Manager (RMAN) repository database. Installing the Audit Vault Server separately
from the source database servers provides the following benefits:

• Higher Availability – When the Audit Vault Server is on a separate server from
  the source databases, its availability does not depend on the source hosts' up/down
  status, and audit data therefore continues to be collected from all sources that
  are running.
• Secured Audit Trail – By extracting the audit trail records off of the source
  database as quickly as possible, there is very little opportunity for privileged
  database and operating system users to modify any audit records.

The type of resources required to install and maintain the Audit Vault Server depends on
how fast you need the audit records to be inserted into Audit Vault and how long you must
retain the audit data.

In internal testing on a 2x6GB 3GHz Intel Xeon, Red Hat 3.2 Linux host, the Audit Vault
Server inserted up to 17,000 audit records per second. Storing 500,000 audit trail records
in the Audit Vault repository database requires approximately 300 MB of disk space. An
additional 2 GB of disk space is needed for the ORACLE_HOME files.

For scalability and availability, the Audit Vault Server may optionally be deployed with
Real Application Clusters (RAC), and with Data Guard for disaster recovery.

Check the Audit Vault Server Installation Guide for the platform on which you will be
installing for a list of that operating system's requirements.

Audit Vault Collection Agent


The Oracle database can write audit trail data into the database
(SYS.AUD$/SYS.FGA_LOG$) and/or operating system files. The online log (redo log)
of the Oracle database also contains information about before/after value changes of data.
Audit Vault deploys a process called a Collector, specific to each Oracle database audit
trail, to extract the audit data and send it to the Audit Vault Server. The three types of
collectors are DBAUD for database auditing, OSAUD for operating system files written
by the Oracle database, and REDO for extracting audit data from the redo stream.

The Audit Vault Collection Agent provides support for audit data collection. The agent
loads the collectors, provides them with a connection to the Audit Vault Server for sending
audit data, and reports run-time metrics on the collectors. Audit Vault communicates with
the audit data source through its agent.

The Audit Vault Collection Agent may be installed on the same host as the database that
is going to be audited, on the Audit Vault Server host, or on a host separate from both the
Audit Vault Server and the audited database.

Let’s look at each of these scenarios to determine the best location within your
environment for the Audit Vault Collection Agent.

• Same host as the audited databases (Recommended) – If the database audit trail
  destination is the operating system, the Audit Vault Collection Agent must be
  installed on the same hosts as those operating system files.



• Audit Vault Server host – If the database audit trail destination is the database
  tables (SYS.AUD$/SYS.FGA_LOG$), then the Audit Vault Collection Agent may
  be installed on the Audit Vault Server host. This would mean that all software
  components used by Audit Vault would be consolidated on a single host.

• Separate from the audited host and Audit Vault Server – If the database audit trail
  destination is the database tables (SYS.AUD$/SYS.FGA_LOG$), then the Audit
  Vault Collection Agent may be installed on a different host from the audited
  database or the Audit Vault Server.

Recommended Agent Configuration


Oracle recommends that the Audit Vault Collection Agent be installed on the same server
as the databases being audited. In the case of RAC the agent should be installed on each
instance. This configuration will allow the agent to service audit data from either the
database tables (SYS.AUD$/SYS.FGA_LOG$) or the operating system files.

Which Collector(s) Should I Deploy?


Audit Vault collectors transport audit data from the source to the Audit Vault Server.
The collectors are controlled by the Audit Vault Collection Agents described in the
previous section. Oracle Audit Vault Collection Agent may deploy three different Audit
Vault collectors depending on where the audit data is written - database tables or
operating system. Note that Oracle stores some valuable audit related information in the
REDO logs. As a result, Oracle Audit Vault provides a REDO Collector to retrieve the
information. Table 1 below lists the characteristics of the audit trail locations to help you
determine where to write the audit trail and which collector(s) should be deployed to
move the audit data into Audit Vault.

Table 1 (Audit Trail Characteristics) compares the three audit trail locations (OS log,
DB audit table, and redo log) across the following audit operations: SELECT, DML,
DDL, before and after values, success and failure, SQL text (for SYS), and SYS auditing.
Other considerations noted in the table are separation of duties for the OS log, FGA data
for the DB audit table, and supplemental logging for all values for the redo log.

The three collector types are called DBAUD, OSAUD, and REDO. Each collector type
retrieves audit records from different locations in the source Oracle database as shown
below in Table 2.

DBAUD
  Audit data sources: Database audit trail, where standard audit events are written
    to the database dictionary table SYS.AUD$; fine-grained audit trail, where audit
    events are written to the database dictionary table SYS.FGA_LOG$.
  Oracle Database settings to initiate auditing: Set the initialization parameter
    audit_trail=db or db_extended.
  Advantages: With the DB_EXTENDED value, the SQL text is collected as part of the
    audit trail. DB_EXTENDED does not audit activity by SYS users, so you should also
    deploy the OSAUD collector in conjunction.

OSAUD
  Audit data sources: Operating system files (OS files), where mandatory audit
    records are written and, optionally, where database audit trail (standard audit
    events) and fine-grained audit trail events are written to OS audit logs;
    operating system-specific audit trails (system audit trail), where database audit
    trail records are written to the Windows Event Log on Microsoft Windows systems
    or to syslog on Linux systems.
  Oracle Database settings to initiate auditing: Set the initialization parameter
    AUDIT_TRAIL to OS and the AUDIT_FILE_DEST parameter to the desired directory.
    Set the initialization parameters AUDIT_SYS_OPERATIONS to TRUE and
    AUDIT_FILE_DEST to the desired directory.
  Advantages:
    • Audit records stored in operating system files can be more secure than
      database-stored audit records because access can require file permissions that
      DBAs do not have.
    • Greater availability is another advantage of operating system storage for audit
      records, in that they remain available even if the database is temporarily
      inaccessible.

REDO
  Audit data sources: Redo log.
  Oracle Database settings to initiate auditing: Redo logs are part of the Oracle
    Database infrastructure and do not require any source database settings. The
    Audit Vault policy capture rule determines the metadata pulled from the redo log.
  Advantages: Used to track before and after changes to sensitive data columns, such
    as salary.

Table 2 Audit Vault Collector Types

Depending on the type of audit information that must be generated and retained, you may
deploy one, two, or all three of the collectors for each source database.

Recommended Collector and Database Audit Configuration


Oracle recommends using the operating system as your primary audit trail location and
deploying the OSAUD collector, as the operating system audit trail has the least amount
of performance overhead on the database. Please refer to the Oracle Database Auditing
section within this document for information on configuring the database to write audit
information to the operating system.



Near Real-Time Alerts
Security alerts can be used for proactive notification of compliance, privacy, and insider
threat issues across the enterprise. Oracle Audit Vault provides IT security personnel
with the ability to detect and alert on suspicious activity, attempts to gain unauthorized
access, and abuse of system privileges.

Oracle Audit Vault can generate alerts on specific system or user defined events, acting
as an early warning system against insider threats and helping detect changes to baseline
configurations or activity that could potentially violate compliance. Oracle Audit Vault
continuously monitors the audit data collected, evaluating the activities against defined
alert conditions.

Alerts are generated when data in a single audit record matches a custom defined alert
rule condition. For example, a rule condition may be defined to raise alerts whenever a
privileged user attempts to grant someone access to sensitive data.

In Oracle’s in-house testing of the Audit Vault Server, it was possible to achieve a
throughput of 17,000 insertions of audit trail records per second using a 2x6GB 3GHz
Intel Xeon, Red Hat 3.2 Linux x86 system. To achieve near real-time alerting capability,
the host should be sized to meet your business requirements.

Near Real-Time Reporting


After audit data is transferred from the source to the Audit Vault, an Oracle
DBMS_SCHEDULER job runs an ETL (extract, transform, load) process to
normalize the raw audit data into the data warehouse. In Oracle’s in-house testing, the
ETL job was able to process 500,000 records in a little over 50 seconds on a 2x6GB
3GHz Intel Xeon, Red Hat 3.2 Linux x86 system. Out of the box, the default
DBMS_SCHEDULER job runs every 24 hours.

Audit Vault provides statistics of the ETL process used to update the warehouse, as shown
below in Figure 2. By utilizing this information, you can estimate how often the job may
be run to update the data warehouse infrastructure. The data warehouse infrastructure is
documented in the Oracle Audit Vault Auditor’s Guide.

Recommendations on ETL process


The ETL process may be run more often to provide near real-time reporting. Oracle
recommends that the previous ETL job be completed before initiating the next ETL job.
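
As a hedged illustration of adjusting the refresh frequency, the warehouse load job can be
located in the Audit Vault repository and its interval changed with DBMS_SCHEDULER; the job
name below is hypothetical, so substitute the name returned by the query:

-- find the warehouse ETL job and its current schedule (run in the Audit Vault repository)
SELECT owner, job_name, repeat_interval
  FROM dba_scheduler_jobs
 WHERE job_name LIKE '%WAREHOUSE%' OR job_name LIKE '%AV%';

Begin
  dbms_scheduler.set_attribute (
    name      => 'AVSYS.AV_WAREHOUSE_LOAD',   /* hypothetical job name; use the one found above */
    attribute => 'repeat_interval',
    value     => 'FREQ=HOURLY;INTERVAL=1');   /* example: run the ETL hourly */
End;
/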



Figure 2 Audit Vault Warehouse Load Results

The Oracle Audit Vault has been developed on a flexible data warehouse infrastructure
that provides the ability to consolidate audit data so that it can be easily secured,
managed, accessed, and analyzed. In addition to the out-of-the-box reports provided by
Oracle Audit Vault, Audit Vault provides an open audit warehouse schema that can be
accessed from Oracle BI Publisher, Oracle Application Express, or any 3rd party
reporting tools for customized security and compliance reporting.

Oracle Database Auditing


Oracle has provided robust auditing capabilities since the release of Oracle7 in the early
1990s. Oracle database auditing can be highly customized to address specific
compliance and privacy requirements.

Audit records include information about the operation that was audited, the user
performing the operation, and the date and time of the operation. Audit records can be
stored in the database audit trail or in files on the operating system. There are two types
of general auditing: standard and fine-grained. Standard auditing includes operations on
privileges, schemas, objects, and statements. Fine-grained auditing is policy based; in
Oracle9i it is enforced on SELECT operations. Fine-grained auditing was enhanced in
Oracle Database 10g to enforce policy-based auditing on INSERT, UPDATE, and
DELETE operations.



Audit Trail Contents and Locations
Audit trail records can contain different types of information, depending on the events
audited and the auditing options set.

Some of that information includes:


• Operating system login user name (CLIENT USER)
• Database user name (DATABASE USER)
• Session identifier
• Terminal identifier
• Name of the schema object accessed
• Operation performed or attempted (ACTION)
• Date and time stamp in UTC (Coordinated Universal Time) format
• System privileges used (PRIVILEGE)
• Proxy Session audit ID
• Global User unique ID
• Instance number
• Process number
• Transaction ID
• SCN (system change number) for the SQL statement
• SQL text that triggered the auditing (SQLTEXT)
• Bind values used for the SQL statement, if any (SQLBIND)

Audit Vault extracts audit data from either the database tables or the operating system
files. To enable database auditing, the initialization parameter AUDIT_TRAIL should
be set to one of these values:

Parameter Value     Meaning

DB                  Enables database auditing and directs all audit records to the
                    database audit trail (SYS.AUD$), except for records that are
                    always written to the operating system audit trail.

DB_EXTENDED         Does all actions of AUDIT_TRAIL=DB and also populates the SQL
                    bind and SQL text columns of the SYS.AUD$ table.

OS (recommended)    Enables database auditing and directs all audit records to an
                    operating system file.



Recommended Database Audit Configuration
Oracle recommends that the audit trail be written to the operating system files as this
configuration imposes the least amount of overhead on the source database system.

In addition, the following database parameters should be set:


• Init.ora parameter: AUDIT_FILE_DEST -- Dynamic parameter specifying
the location of the operating system audit trail. The default location on
Unix/Linux is $OH/admin/$ORACLE_SID/adump. The default on Windows
is the event log. For optimal performance, it should refer to a directory on a
disk that is locally attached to the host running the Oracle instance.
• Init.ora parameter: AUDIT_SYS_OPERATIONS -- Enables the auditing of
operations issued by user SYS, and users connecting with SYSDBA or
SYSOPER privileges. The audit trail data is written to the operating system
audit trail. This parameter should be set to true.
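
A minimal sketch of setting these parameters on a source database; the audit file
destination path is illustrative, and a restart is required for the static parameters to
take effect:

ALTER SYSTEM SET audit_trail = OS SCOPE = SPFILE;
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;
ALTER SYSTEM SET audit_file_dest = '/oracle/admin/db01/adump' SCOPE = SPFILE;
-- restart the instance for the changes to take effect
SHUTDOWN IMMEDIATE
STARTUP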

Audit Settings – Secure Configuration


Oracle database auditing is highly granular, flexible, and extensible. In most enterprise
environments, auditing of basic activities such as failed and successful logins, privileged
user activity, database schema changes, and user policy changes will be required by IT
auditors.

When you issue an AUDIT command, an additional parameter, BY ACCESS or BY SESSION,
can be specified. BY ACCESS tells Oracle to create an audit record every time one of the
audited operations occurs; in contrast, BY SESSION creates an audit record only the first
time the operation occurs in the current session. If you need to know each time an
operation is executed, then BY ACCESS should be used.

Recommended Database Audit Settings


Oracle recommends the following audit settings for your source databases to collect
information on the operations executed. A SQL script can be found in Appendix B that
may be copied and run in your databases. When Audit Vault is installed, this script is also
included in the demo directory of the Audit Vault Server
$ORACLE_HOME/demo/secconf.sql.

Audit Command                                     What do you audit?

Audit alter any table by access;                  Database schema or structure changes
Audit create any table by access;
Audit drop any table by access;
Audit create any procedure by access;
Audit drop any procedure by access;
Audit alter any procedure by access;
Audit create external job by access;
Audit create any job by access;
Audit create any library by access;
Audit alter database by access;
Audit alter system by access;

Audit audit system by access;                     Database access and privileges
Audit create public database link by access;
Audit exempt access policy by access;
Audit alter user by access;
Audit create user by access;
Audit role by access;
Audit create session by access;
Audit drop user by access;
Audit grant any privilege by access;
Audit grant any object privilege by access;
Audit grant any role by access;
Audit alter profile by access;
Audit drop profile by access;

Table 3 Recommended Audit Settings

TIP: Do not audit the SYS.AUD$ or SYS.FGA_LOG$ tables. This will cause a
recursive condition.

Oracle also has the ability to create audit policies based on a condition, a feature called
Fine-Grained Auditing. By utilizing fine-grained auditing, you can monitor data access
based on content or condition. Conditions can include limiting the audit to specific types
of DML statements used in connection with the columns that you specify. Optionally, a
named routine can be called when an audit event occurs to handle errors and anomalies.

An example of a fine-grained audit policy that creates an audit trail record if a SELECT on
the SH.SALES table is executed by anyone other than the user APPS is shown in Figure 3.
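
A hedged sketch of how such a policy could be defined with DBMS_FGA; the policy name is
illustrative:

Begin
  dbms_fga.add_policy (
    object_schema   => 'SH',
    object_name     => 'SALES',
    policy_name     => 'SALES_SELECT_AUDIT',   /* illustrative policy name */
    audit_condition => 'SYS_CONTEXT (''USERENV'', ''SESSION_USER'') <> ''APPS''',
    statement_types => 'SELECT');
End;
/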

Based on your business requirements, fine-grained auditing can be tailored to meet the
auditing needs. For more information on database auditing, please see the Oracle
Security Guide documentation.



Figure 3 Audit Vault Fine Grained Audit Policy Example

Database Auditing Performance


On the source database, resources are required by both the audit process and the Audit
Vault Collection Agent.

Auditing and the Audit Vault Collectors


Using the recommended audit settings listed above in Table 3, Oracle performed in-house
testing on a 4x32GB 3GHz Intel Xeon, Red Hat 3.0 host running
Oracle Database 10g Release 2 (10.2.0.3). The table below demonstrates that database
auditing and the Audit Vault Collection Agent use up to 6% additional CPU overhead based
on the number of audit trail records created per second.

Table 4 Auditing and Collection Agent CPU Overhead

Table 4 shows the performance overhead of turning on auditing and running a
TPC-C-like workload using the recommended audit settings specified in Table 3. The
‘Collect’ column shows the performance overhead of the specific Audit Vault Collector
while the ‘Create’ number shows the performance overhead for database auditing when
10 or 100 audit records are generated per second.

Writing audit trail records to the operating system has the lowest overhead.



Managing Audit Data on the Source
Audit records include information about the operation that was audited, the user
performing the operation, and the date and time of the operation. As noted earlier, audit
records can be stored in either the database or on the operating system.

The tables named SYS.AUD$ and SYS.FGA_LOG$ are used when the audit data is
written to the database. These tables are located in the database SYS schema.

The Oracle Database also allows audit trail records to be directed to the operating system.
The target directory varies by platform, but on the UNIX platform it is usually
$ORACLE_HOME/rdbms/audit. On Windows, the information is accessed through the
Event Viewer.

Oracle Audit Vault provides the mechanisms to collect audit data generated by Oracle9i
Database Release 2, Oracle Database 10g Release 1, and Oracle Database 10g Release 2.
The database audit data can be collected from both the database and operating system
audit destinations. Transactional before/after values can be captured from the database
REDO transaction logs using the REDO collector for Oracle9i Release 2 and Oracle
Database 10g Release 2 databases.

Removing Audit Data from the Database


Over time, the database and operating system can potentially reach their maximum capacity
for storing new audit records. After auditing has been enabled for some time, the security
administrator will want to delete records from the database audit trail, both to free audit
trail space and to facilitate audit trail management. However, it is critical not to delete
data that has not yet been transferred to Oracle Audit Vault.

Before deleting audit data from the database, determine the last record inserted into the
Audit Vault Server. This can be done by using Audit Vault’s Activity Overview Report.
Open the Activity Overview to view the date of the summary data. Remember, the Audit
Vault report data is displayed based on the last completed ETL warehouse job. For more
information on the warehouse job, please look at the Oracle Audit Vault Administration
Guide documentation.



Figure 4 Audit Vault Activity Overview Report

The activity overview report returns data in descending order of time. So the first record
displayed is the last record to be inserted into the data warehouse.

Once you have established that data is being inserted into the Audit Vault Server in a
timely manner, you can use the scripts located in Appendix A to delete records from
SYS.AUD$ and SYS.FGA_LOG$ by running a database job.

Recommended Database Audit Cleanup Periods


Oracle recommends that you delete records that are 24 hours old or older. In the example
above, you would delete records that are older than May 3, 2007 8:02 PM UTC. All data
is stored in Audit Vault in UTC time format to maintain the order in which transactions
are executed, no matter what time zone the database sources are located in.
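
As a hedged illustration of the core deletion (the Appendix A scripts remain the
reference), records older than 24 hours could be removed as follows; the column names are
those used by Oracle Database 10g, and NTIMESTAMP# is stored in UTC:

DELETE FROM sys.aud$
 WHERE ntimestamp# < sys_extract_utc (systimestamp) - INTERVAL '1' DAY;

DELETE FROM sys.fga_log$
 WHERE ntimestamp# < sys_extract_utc (systimestamp) - INTERVAL '1' DAY;

COMMIT;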

Removing Audit Data from the Operating System


Similar to the database audit trail, Oracle stores audit data on the operating system by
creating or appending to operating system files based on the Oracle database session ID.
Since disk space is not infinite, the operating system audit trail files will need to be
deleted after the records are inserted into Audit Vault.

The operating system audit trail files are written by default on most UNIX systems in
$ORACLE_HOME/admin/$ORACLE_SID/adump. The files have an extension of
".aud". Optionally, the destination can be explicitly defined by the Oracle database
parameter AUDIT_FILE_DEST.



Before deleting audit trail records, determine the last record inserted into the Audit Vault
Server. This can be done by using the Activity Overview Report. Open the Activity
Overview to view the date of the summary data (see Figure 4 above). Remember, the
Audit Vault report data is displayed based on the last completed ETL warehouse job. For
more information on the warehouse job, please look at the Oracle Audit Vault
Administration Guide. Appendix A contains scripts that can be run as a cron job or
database job to delete operating system audit files that are no longer needed on UNIX
systems.

On the Windows operating system, the audit trail record is written to the Windows event
log. Use the Windows Event Viewer functionality to control the size of the event log file
or to overwrite records that are older than X number of days.

Figure 5 Windows Event Viewer

Oracle recommends that you use the option to overwrite records based on age.



Oracle Audit Vault Maintenance
Periodic maintenance of Oracle Audit Vault is important for maintaining optimal
performance. Oracle Audit Vault generates numerous logs and trace files during normal
day-to-day operations. The following sections provide important information regarding
the contents of the log files, their purpose and how and when the files can be removed.

Audit Vault Server Log Files


Much like the Oracle Database, the Oracle Audit Vault server generates log files that
provide current status and diagnostic information. The log files should be monitored and
periodically removed to control the amount of disk space used by the log files. These log
files may be found in <Audit_Vault_Server_Home>/av/log.

avorcldb.log
  Description: This log file tracks the commands issued by the avorcldb facility. The
    avorcldb facility is used during the initial configuration of audited sources and
    Audit Vault agents and collectors.
  Maintenance: It is safe to delete this file at any time.

avca.log
  Description: This log file tracks the creation of collectors and the starting and
    stopping of Audit Vault agents and collectors.
  Maintenance: This file may only be deleted after the Audit Vault Server is shut
    down.

av_client-%g.log.n
  Description: This log file contains information about collection metrics from the
    Audit Vault Collection Agent. The %g is a generation number that starts from 0
    (zero) and increases once the file size reaches the 10 MB limit.
  Maintenance: The files, which contain an extension of .log.n (for example,
    av_client-0.log.1), may be deleted at any time.

Enterprise Manager stores its logs in the directory
<Audit_Vault_Server_Home>/<Host_Name>_<SID>/sysman/log. The file emdb.nohup in this
directory contains a log of activity for the Audit Vault web application, including GUI
conversations, requests from the avctl utility, and communication with the various Audit
Vault collection agents. This can be used to debug communication issues between the
server and the agents.

Audit Vault Collection Agent Log Files


The Audit Vault Collection Agent creates several log files that must also be maintained to
control the amount of disk space used by the log files. These log files may be found in
<Audit_Vault_Collection_Agent_Home>/av/log.



agent.err
  Description: Contains a log of all errors encountered in agent initialization and
    operation.
  Maintenance: It is safe to delete this file at any time.

agent.out
  Description: Contains a log of all primary agent-related operations and activity.
  Maintenance: This file may only be deleted after the Audit Vault Collection Agent
    is shut down.

avca.log
  Description: Contains a log of all AVCA commands that have been run and the results
    of running each command.
  Maintenance: It is safe to delete this file at any time.

avorcldb.log
  Description: Contains a log of all AVORCLDB commands that have been run and the
    results of running each command.
  Maintenance: It is safe to delete this file at any time.

<CName><SName><SId>.log (CName = Collector_name, SName = Source_name, SId = Source_ID)
  Description: Contains a log of collection operations for the DBAUD and OSAUD
    collectors.
  Maintenance: This file may only be deleted after the Audit Vault Collection Agent
    is shut down.

av_client-%g.log.n
  Description: Contains a log of the agent operations and any errors returned from
    those operations. The %g is a generation number that starts from 0 (zero) and
    increases once the file size reaches the 10 MB limit. A concurrent existence of
    this file is indicated by a .n suffix appended to the file type name, where n is
    an integer issued in sequence, for example av_client-0.log.1.
  Maintenance: The files which contain an extension of .log.n may be deleted at any
    time.

sqlnet.log
  Description: Contains a log of SQL*Net information.



The directory <Audit_Vault_Collection_Agent_Home>/oc4j/j2ee/home/log contains the
logs generated by the Collection Agent OC4J. In this directory, the file
AVAgent-access.log contains a log of requests that the agent receives from the Audit Vault
Server. It can be used to debug communication issues between the server and the agent.

Oracle Audit Vault Disaster Recovery


By default, the Oracle Audit Vault data warehouse is in archive log mode. This protects
the audit data from media failure and ensures a more complete recovery. The archive
logs are placed in the flash recovery area.
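As a quick check (a minimal sketch, to be run as a privileged user on the Audit Vault
Server repository database), the following statements confirm that the database is running
in ARCHIVELOG mode and show where the flash recovery area is located:

-- Confirm ARCHIVELOG mode on the Audit Vault repository database
SELECT log_mode FROM v$database;

-- Show the flash recovery area location and size quota
SHOW PARAMETER db_recovery_file_dest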

Oracle Recovery Manager (RMAN) and a flash recovery area minimize the need to
manually manage disk space for your backup-related files and balance the use of space
among the different types of files. The basic Oracle Audit Vault installation places the
flash recovery area on the same disk as the Audit Vault Oracle Home and sets the default
size to 2 GB. The advanced installation method allows you to define the location and size
of the flash recovery area and the RMAN backup job.

Recommended Recovery Configuration


Oracle recommends that you review the flash recovery area settings and modify them to
meet your data protection needs. For more information on RMAN, flash recovery area,
and archive logs, please see the Oracle Database Backup and Recovery documentation.
The Audit Vault Oracle Homes should be backed up using your current procedures for
other Oracle Homes.
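For example, the flash recovery area usage can be reviewed and its quota adjusted with
statements such as the following sketch; the 10G value is only an illustration and should
be sized to cover your backup retention window:

-- Review current flash recovery area usage and reclaimable space
SELECT name, space_limit, space_used, space_reclaimable, number_of_files
FROM   v$recovery_file_dest;

-- Example only: raise the flash recovery area quota to 10 GB
ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;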



Appendix A. Audit Trail Maintenance Scripts
Rem
Rem $Header: os_aud_cleanup_setup.sql 05-apr-2007.02:42:14 srirasub Exp $
Rem
Rem os_aud_cleanup_setup.sql
Rem
Rem Copyright (c) 2007, Oracle. All rights reserved.
Rem
Rem NAME
Rem   os_aud_cleanup_setup.sql - OS audit trail cleanup setup
Rem
Rem DESCRIPTION
Rem   Creates a procedure and daily jobs that delete operating system audit
Rem   files which are older than a given number of days and do not belong to
Rem   an active session.
Rem
Rem NOTES
Rem   Run this script as SYS on the audited source database.
Rem
Rem MODIFIED (MM/DD/YY)
Rem srirasub 04/04/07 - Created
Rem

-- The following arguments are required to run this procedure
--
-- 1. Path of directory where temporary files can be written (eg. /tmp)
-- 2. Threshold (in no. of days) for deleting old audit files (eg. 7)
-- 3. $ORACLE_HOME
--
SET ECHO ON
SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100

create or replace procedure source_os_audit_cleanup as
  output_file utl_file.file_type;
  cursor c1 is select unique(audsid) from sys.V$SESSION;
  sessid      number;
  aud_dest    varchar2(1000);
  ver         varchar2(100);
begin
  -- Directory object pointing at the temporary directory passed as argument 1
  execute immediate 'create or replace directory os_aud_cleanup_dir as ''&1''';

  -- Write the list of currently active session IDs to session_list.txt
  output_file := utl_file.fopen('OS_AUD_CLEANUP_DIR', 'session_list.txt', 'W');
  open c1;
  loop
    fetch c1 into sessid;
    exit when c1%notfound;
    utl_file.put_line(output_file, sessid);
  end loop;
  close c1;
  utl_file.fclose(output_file);

  select value into aud_dest from v$parameter where name = 'audit_file_dest';

  select version into ver from v$instance;

  if ver like '10%' or ver like '11%'
  then
    -- 10g/11g: run the external job that invokes the Perl cleanup script
    execute immediate
      'BEGIN dbms_scheduler.run_job(''OS_CLEANUP_PERL'',TRUE); END;';
  else if ver like '9%'
  then
    -- 9i: write the audit file destination to audit_dest.txt
    output_file := utl_file.fopen('OS_AUD_CLEANUP_DIR', 'audit_dest.txt', 'W');
    utl_file.put_line(output_file, aud_dest);
    utl_file.fclose(output_file);
  end if;
  end if;
end;
/

Declare
  ver      varchar2(100);
  argv3    varchar2(1000);
  argv2    varchar2(1000);
  aud_dest varchar2(1000);
Begin
  select version into ver from v$instance;
  select value into aud_dest from v$parameter where name = 'audit_file_dest';

  -- Arguments passed to the Perl cleanup script:
  --   argv2 = full path of the active-session list written by the procedure
  --   argv3 = full path of os_aud_cleanup.pl under the supplied ORACLE_HOME
  argv2 := '&1' || '/' || 'session_list.txt';
  argv3 := '&3' || '/demo/os_aud_cleanup.pl';

  if ver like '10%' or ver like '11%'
  then

    execute immediate
    'BEGIN DBMS_SCHEDULER.CREATE_JOB (JOB_NAME => ''OS_CLEANUP_PERL'',
        JOB_TYPE            => ''executable'',
        JOB_ACTION          => ''' || argv3 || ''',
        NUMBER_OF_ARGUMENTS => 3,
        ENABLED             => FALSE); END;';

    execute immediate
    'BEGIN DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
        job_name          => ''OS_CLEANUP_PERL'',
        argument_position => 1,
        argument_value    => ''' || aud_dest || '''); END;';

    execute immediate
    'BEGIN DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
        job_name          => ''OS_CLEANUP_PERL'',
        argument_position => 2,
        argument_value    => ''' || argv2 || '''); END;';

    execute immediate
    'BEGIN DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
        job_name          => ''OS_CLEANUP_PERL'',
        argument_position => 3,
        argument_value    => ''&2''); END;';

    -- Daily job that calls the cleanup procedure created above
    execute immediate
    'BEGIN DBMS_SCHEDULER.CREATE_JOB (JOB_NAME => ''AUDIT_OS_CLEANUP'',
        JOB_TYPE        => ''STORED_PROCEDURE'',
        JOB_ACTION      => ''sys.source_os_audit_cleanup'',
        REPEAT_INTERVAL => ''FREQ=DAILY;INTERVAL=1'',
        ENABLED         => TRUE,
        COMMENTS        => ''Cleanup Job Run Daily''); END;';
  end if;
End;
/

declare
  ver varchar2(100);
begin
  select version into ver from v$instance;

  if ver like '9%'
  then
    sys.source_os_audit_cleanup;
  end if;
end;
/

exit;

#!/usr/local/bin/perl
#
# $Header: os_aud_cleanup.pl 05-apr-2007.01:47:12 srirasub Exp $
#
# os_aud_cleanup.pl
#
# Copyright (c) 2007, Oracle. All rights reserved.
#
# NAME
# os_aud_cleanup.pl - OS AUDit trail CLEANUP
#
# DESCRIPTION
# Perl Script to clean audit trails.
#
# NOTES
# <other useful comments, qualifications, etc.>
#
# MODIFIED (MM/DD/YY)
# srirasub 04/04/07 - Creation
#

$aud_dir = $ARGV[0];

%mon2num = qw(
jan 1 feb 2 mar 3 apr 4 may 5 jun 6
jul 7 aug 8 sep 9 oct 10 nov 11 dec 12
);

#list of all files in audit dir
if(!opendir(DIR, $aud_dir))
{
$oh = $ENV{'ORACLE_HOME'};

$aud_dir =~ s/\?/$oh/g;

if(!opendir(DIR, $aud_dir))
{
exit -1;
}
}

@files = grep(/ora_.*aud$/,readdir(DIR));
closedir(DIR);

#get timestamp to compare
$tstamp = time();
$tstamp1 = localtime;

#get list of active sessions from db
$session_list = $ARGV[1];
open(INFO, $session_list);
@sessids = <INFO>;
close(INFO);

#days parameter
$day_upper_limit = $ARGV[2];

#go thru all the files in audit destination directory
foreach $file (@files)
{
$flag = 1;
$file_name = $aud_dir. '/' . $file;
open(INFO, $file_name);
@lines = <INFO>;
close(INFO);

#check each line for matching session
foreach $line (@lines)
{
foreach $sess (@sessids)
{
$sessionid = $sess;
chop($sessionid);
$reg = 'SESSIONID: "' . $sessionid . '"';
if($line =~ $reg)
{

#this file can't be deleted as it has a session that
#is active
$flag = 0;
}
}

$prev = $line;
}

#since this file doesn't have any active session, it can be deleted
if($flag == 1)
{
$flag2 = 1;
foreach $line (@lines)
{
$reg = 'SESSIONID: "';

if($line =~ $reg)
{
chop($prev);
$days_diff = day_diff($prev, $tstamp1 );

if($days_diff < $day_upper_limit)
{
#this file can't be deleted as it has a session that
#doesn't satisfy the min-days criteria
$flag2 = 0;
}
}
$prev = $line;
}

if($flag2 == 1)
{
#delete the file
unlink("$file_name");
}
}
}

#subroutine to return difference between two dates.
sub day_diff
{
$ts1 = $_[0];
$ts2 = $_[1];

$ts1 =~ s/\s\s*/ /g;
$ts2 =~ s/\s\s*/ /g;

($a1,$a2,$a3,$a4,$a5,$a6,$a7) = split(/[ :]/, $ts1);
($b1,$b2,$b3,$b4,$b5,$b6,$b7) = split(/[ :]/, $ts2);

#approximate difference in days between the two ctime-style timestamps
$diff_mon = &month_difference($a2, $b2);
$days = (($b7-$a7)*365) + ($diff_mon)*30 + ($b3-$a3);
$days;
}

#subroutine to return difference between two months
sub month_difference
{
$mon1 = $mon2num{ lc substr($_[0], 0, 3) };
$mon2 = $mon2num{ lc substr($_[1], 0, 3) };

$diff = $mon2-$mon1;

$diff;
}

-- These scripts can be run directly on 9i, 10gR1 and 10gR2
-- The scripts should be run as SYS
-- The jobs run daily

-- For DBMS_JOB (9i), ALTER SYSTEM SET job_queue_processes=1;
-- may be required to run the jobs automatically. The value is set to 0
-- in most systems; any number in the range [1,1000] is valid.

create or replace procedure source_audit_cleanup(days number) as
  ver varchar2(100);
begin
  select version into ver from v$instance;

  if ver like '10%' or ver like '11%'
  then
    -- 10g/11g: ntimestamp# holds the UTC record timestamp
    execute immediate 'delete from sys.aud$
        where extract(day from
              sys_extract_utc(systimestamp)-ntimestamp#) > ' || days ||
        ' and sessionid not in (select audsid from sys.V$SESSION)';

    execute immediate 'delete from sys.fga_log$
        where extract(day from
              sys_extract_utc(systimestamp)-ntimestamp#) > ' || days ||
        ' and sessionid not in (select audsid from sys.V$SESSION)';
  else
    if ver like '9%' then
      -- 9i: the audit record timestamp column is timestamp#
      execute immediate 'delete from sys.aud$
          where extract(day from
                sys_extract_utc(systimestamp)-timestamp#) > ' || days ||
          ' and sessionid not in (select audsid from sys.V$SESSION)';

      execute immediate 'delete from sys.fga_log$
          where extract(day from
                sys_extract_utc(systimestamp)-timestamp#) > ' || days ||
          ' and sessionid not in (select audsid from sys.V$SESSION)';
    end if;
  end if;
end;
/

-- The parameter for the number of days is configurable:
-- change the value of "no_of_days" (current value = 7)
Declare
ver varchar2(100);
jobno binary_integer;
------
no_of_days number := 7;
------
begin
select version into ver from v$instance;

if ver like '10%' or ver like '11%'
then
execute immediate
'begin DBMS_SCHEDULER.CREATE_JOB (
JOB_NAME => ''AUDIT_CLEANUP'',
JOB_TYPE => ''PLSQL_BLOCK'',
JOB_ACTION => ''begin sys.source_audit_cleanup(days
=> ' || no_of_days ||'); end;'',
REPEAT_INTERVAL => ''FREQ=DAILY;INTERVAL=1'',
ENABLED => TRUE,
AUTO_DROP => FALSE,
COMMENTS => ''Cleanup Job Run Daily''); end;';
else if ver like '9%' then
execute immediate 'begin dbms_job.submit(job => :jobno,
what => ''begin
sys.source_audit_cleanup('|| no_of_days ||'); end;'',
interval => ''SYSDATE +
1''); end ;' using in out jobno;
dbms_output.put_line('Job No ' || jobno);
commit;
end if;
end if;
end;
/
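Once this block has been run on a 10g or 11g source, the scheduled cleanup jobs can be
verified with a quick dictionary query such as the following sketch (run as SYS;
DBMS_JOB-based 9i submissions would appear in DBA_JOBS instead):

-- Confirm that the daily cleanup jobs exist and are enabled
SELECT job_name, enabled, repeat_interval
FROM   dba_scheduler_jobs
WHERE  job_name IN ('AUDIT_CLEANUP', 'AUDIT_OS_CLEANUP');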



Appendix B. Database Source Audit Settings
Rem
Rem Copyright (c) 2007, Oracle. All rights reserved.
Rem
Rem DESCRIPTION
Rem Secure configuration settings for the database include audit
Rem settings (enabled, with admin actions audited).
Rem

-- Turn on auditing options

audit alter any table by access;
audit create any table by access;
audit drop any table by access;
audit create any procedure by access;
audit drop any procedure by access;
audit alter any procedure by access;
audit grant any privilege by access;
audit grant any object privilege by access;
audit grant any role by access;
audit audit system by access;
audit create external job by access;
audit create any job by access;
audit create any library by access;
audit create public database link by access;
audit exempt access policy by access;
audit alter user by access;
audit create user by access;
audit role by access;
audit create session by access;
audit drop user by access;
audit alter database by access;
audit alter system by access;
audit alter profile by access;
audit drop profile by access;
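After running these statements, the enabled options can be confirmed from the data
dictionary; a minimal check, run as a privileged user on the source database, is sketched
below:

-- Statement audit options currently enabled
SELECT audit_option, success, failure FROM dba_stmt_audit_opts ORDER BY audit_option;

-- System privilege audit options currently enabled
SELECT privilege, success, failure FROM dba_priv_audit_opts ORDER BY privilege;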



Oracle Audit Vault Best Practices
November 2007
Author: Tammy Bednar
Contributions: Paul Needham, Vipul Shah

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright © 2007, Oracle. All rights reserved.


This document is provided for information purposes only and the
contents hereof are subject to change without notice.
This document is not warranted to be error-free, nor subject to any
other warranties or conditions, whether expressed orally or implied
in law, including implied warranties and conditions of merchantability
or fitness for a particular purpose. We specifically disclaim any
liability with respect to this document and no contractual obligations
are formed either directly or indirectly by this document. This document
may not be reproduced or transmitted in any form or by any means,
electronic or mechanical, for any purpose, without our prior written permission.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
