WORKSHOP
Presented By:
Oracle Corporation
This material or any portion of it may not be copied in any form or by any means
without the express prior written permission of Oracle Corporation. Any other copying
is a violation of copyright law and may result in civil and/or criminal penalties.
The information in this document is subject to change without notice. If you find any
problems in the documentation, please report them in writing to Education Products,
Oracle Corporation, 500 Oracle Parkway, Redwood Shores, CA 94065. Oracle
Corporation does not warrant that this document is error-free.
Oracle and all references to Oracle Products are trademarks or registered trademarks
of Oracle Corporation.
All other products or company names are used for identification purposes only, and
may be trademarks of their respective owners.
Please direct any questions or comments regarding the contents of this document to
Kurt Lysy (kurt.lysy@oracle.com).
Summary of Accounts and Passwords

Host: dbsecurity.oracle.com (IP: 192.168.214.67)

Accounts (username/password):
oracle/oracle1
root/oracle1
sysman/oracle1
sys/oracle1
dvowner/oracle12#
dvacctmgr/oracle12#
avadmin/oracle12#
avauditor/oracle12#
avdvo/oracle12#
avdvam/oracle12#
sys/oracle1
sysman/oracle1
Important Aliases And URLs

• Aliases: see the output of the alias command in Lab Exercise 01.

• URLs:

URL                                        Description
http://dbsecurity.oracle.com:4889/em       Grid Control Console
https://dbsecurity.oracle.com:5501/em      DB01 Database Console (Enterprise Manager)
https://dbsecurity.oracle.com:5501/dva     DB01 Database Console (DVA)
https://dbsecurity.oracle.com:5502/em      DB02 Database Console (Enterprise Manager)
https://dbsecurity.oracle.com:5502/dva     DB02 Database Console (DVA)
http://dbsecurity.oracle.com:5503/em       AV Server Database Console (Enterprise Manager)
http://dbsecurity.oracle.com:5700/av       Audit Vault Console
LAB EXERCISE 01 – Starting Up Oracle11g Environment
2. Open a terminal window (easily done by right-clicking on the desktop and
choosing Open Terminal).
3. In the command window, type alias to view all aliases created for this image.
Enter the alias db01 on the command line to set the environment for bringing up
the Oracle 11g database DB01.
oracle:/home/oracle> alias
alias 10g='. /home/oracle/10g.sh'
alias 11g='. /home/oracle/11g.sh'
alias agemctl='$ORA_AGENT_HOME/bin/emctl'
alias agent='. /home/oracle/agent.sh'
alias av='. /home/oracle/av.sh'
alias av1='. /home/oracle/avagent.sh'
alias db01='. /home/oracle/db01.sh'
alias db02='. /home/oracle/db02.sh'
alias emrep='. /home/oracle/em.sh'
alias grid='. /home/oracle/grid.sh'
alias l.='ls -d .* --color=tty'
alias ll='ls -l --color=tty'
alias ls='ls --color=tty'
alias oms='. /home/oracle/oms.sh'
alias opmnstat='. /home/oracle/opmn_status.sh'
alias ora='env|grep ORA'
alias vi='vim'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
4. Run the commands hostname and ifconfig to view the name and IP of the virtual
host you are using in the VMware image.
oracle:/home/oracle> hostname
dbsecurity.oracle.com
oracle:/home/oracle> ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:80:BE:2C
inet addr:192.168.214.67 Bcast:192.168.214.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe80:be2c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:103 errors:0 dropped:0 overruns:0 frame:0
TX packets:62 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8879 (8.6 KiB) TX bytes:5659 (5.5 KiB)
Interrupt:185 Base address:0x1400
5. (Optional) If you wish to connect to this image from local tools on your laptop
(such as PuTTY or Internet Explorer), you can update your local hosts file
with the following line (on XP this file is located at
C:\WINDOWS\system32\drivers\etc):
192.168.214.67 dbsecurity.oracle.com dbsecurity
6. Change directory to /home/oracle/scripts. Run the script
start_DB.sh. This script brings up the 11g listener, database instance, and
Enterprise Manager Database Console. [Be patient: both the instance and the
Database Console can take a while to come up, since the image has only a modest
amount of memory allocated.]
oracle:/home/oracle> cd scripts
oracle:/home/oracle/scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/scripts> . start_DB.sh
Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbsecurity.oracle.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.1.0.7.0 - Production
Start Date 01-OCT-2008 06:50:44
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/u01/oracle/product/11.1.0/db_1/network/admin/listener.ora
Listener Log File
/u01/oracle/diag/tnslsnr/dbsecurity/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbsecurity.oracle.com)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Oracle Enterprise Manager 11g Database Control Release 11.1.0.7.0
Copyright (c) 1996, 2008 Oracle Corporation. All rights reserved.
https://dbsecurity.oracle.com:5501/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control ................. started.
------------------------------------------------------------------
Logs are generated in directory
/u01/oracle/product/11.1.0/db_1/dbsecurity.oracle.com_db01/sysman/log
oracle:/home/oracle/scripts>
7. Verify that you can successfully log in to the Enterprise Manager Database
Console. Launch a browser and enter the URL
https://dbsecurity.oracle.com:5501/em. Log in as
sysman/oracle1. [If using the Firefox browser within the image, you should
see this URL in the Bookmarks list.]
8. Verify that the Database Console is running by viewing the summary
screen (sample output shown below). Log out of Enterprise Manager when
finished. Do not stop the Database Console, since we will be using it for the ASO
and Database Vault lab exercises.
LAB EXERCISE 02 – Implementing Network Security
A. Business Driver
As of 2007, all merchants accepting credit cards for payment need to be compliant
with the Payment Card Industry (PCI) standards. For more information, see:
https://www.pcisecuritystandards.org/.
4.1 Use strong cryptography and security protocols such as secure sockets layer
(SSL) / transport layer security (TLS) and Internet protocol security (IPSEC) to
safeguard sensitive cardholder data during transmission over open, public
networks. Examples of open, public networks that are in scope of the PCI DSS are
the Internet, WiFi (IEEE 802.11x), global system for mobile communications
(GSM), and general packet radio service (GPRS).
Network encryption is one feature of Oracle’s Advanced Security Option (ASO).
When information travels to and from the database, ASO provides a high level of
security by supporting the following encryption standards: RC4 (40, 56, 128,
and 256 bits), DES (40 and 56 bits), 3DES (2 and 3 keys), and AES (128, 192, and
256 bits).
In any network connection, it is possible for both the client and server to support more
than one encryption algorithm and more than one integrity algorithm. When a connection
is made, the server selects which algorithm to use, if any, from those algorithms specified
in the sqlnet.ora files.
In this lab we will set up network encryption by directly making changes to the
sqlnet.ora file for the DB01 database.
To set up Network Encryption, you need only add the following lines to your
sqlnet.ora file in $ORACLE_HOME/network/admin:
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (MD5)
SQLNET.ENCRYPTION_TYPES_SERVER = (DES40, RC4_40)
SQLNET.CRYPTO_SEED = "Between Ten and Seventy Random Characters"
If the file
/u01/oracle/product/11.1.0/db_1/network/admin/sqlnet.ora
does not exist, create the file using a text editor such as vi or gedit and place the
lines above in the file.
NOTE: A ready-to-use sqlnet.ora is available for you to use for this purpose.
It is located in /home/oracle/aso_scripts. The file contains both server
and client attributes for use in the image. Enable ASO network encryption by
copying this file into the directory $ORACLE_HOME/network/admin.
Each parameter is explained below. This is all you need to do to implement Network
Security.
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.ENCRYPTION_SERVER = REQUIRED
To negotiate whether to turn on integrity (CHECKSUM) or encryption (ENCRYPTION),
you can specify four possible values for the Oracle Advanced Security integrity and
encryption configuration parameters – REJECTED, ACCEPTED, REQUESTED or
REQUIRED. The four values are listed in order of increasing security. The value
REJECTED provides the minimum amount of security between client and server
communications, and the value REQUIRED provides the maximum amount of
network security. In this scenario, this side of the connection specifies that the
security service must be enabled. The connection fails if the other side specifies
REJECTED or if there is no compatible algorithm supported by the other side.
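The negotiation matrix described above can be sketched in a few lines of code (an illustrative model only, not Oracle Net's actual implementation):

```python
# Sketch of the documented ASO client/server negotiation outcome for a
# security service (encryption or integrity). Illustrative only.
def negotiate(client: str, server: str) -> str:
    """Outcome when each side specifies REJECTED, ACCEPTED, REQUESTED, or REQUIRED."""
    sides = {client, server}
    if sides == {"REJECTED", "REQUIRED"}:
        return "CONNECTION FAILS"   # one side insists, the other refuses
    if "REJECTED" in sides:
        return "OFF"                # one side refuses, the other tolerates
    if sides == {"ACCEPTED"}:
        return "OFF"                # both merely willing; service stays off by default
    return "ON"                     # at least one side requests or requires it

print(negotiate("REQUIRED", "ACCEPTED"))   # ON
```

This makes the ordering concrete: the service turns on only when at least one side actively asks for it, and the connection fails only for the REQUIRED/REJECTED combination.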
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (MD5)
MD5 and SHA1 are the two integrity algorithms supported by Oracle ASO.
SQLNET.ENCRYPTION_TYPES_SERVER = (DES40, RC4_40)
This parameter lists the encryption algorithms this side of the connection is
prepared to use, chosen from the standards listed earlier.
SQLNET.CRYPTO_SEED = "Between Ten and Seventy Random Characters"
Several seeds are used to generate a random number on the client and on the server.
One of the seeds that can be used is a user-defined encryption seed. It can be 10 to 70
characters in length and changed at any time. The longer the string, the more secure
the environment.
Any client connecting to this server needs compatible settings in its local
sqlnet.ora file; otherwise, its connections will be rejected.
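For example, a client-side sqlnet.ora matching the server settings above might contain the following (a sketch; the ready-made file in /home/oracle/aso_scripts contains the exact attributes used in this image):

```
SQLNET.CRYPTO_CHECKSUM_CLIENT = REQUIRED
SQLNET.ENCRYPTION_CLIENT = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (MD5)
SQLNET.ENCRYPTION_TYPES_CLIENT = (DES40, RC4_40)
SQLNET.CRYPTO_SEED = "Between Ten and Seventy Random Characters"
```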
Logged in as the oracle user, note that you can also run the adapters utility to
see which encryption and checksumming algorithms are available in the
installation, for example:
[oracle@gss-grc-lnx ~(sec)]$ adapters
...
...
Installed Oracle Advanced Security options are:
Kerberos v5 authentication
RADIUS authentication
This change takes effect for all new connections to the database, since the
parameters in sqlnet.ora are read when every Oracle Net session is established.
Note that existing connections, i.e. those in place before the changes were made to
the sqlnet.ora files, remain unaffected by these encryption settings. This
has implications for how CashBankTrust would enforce the new settings in
a production environment across, for example, an application server farm, where the
use of pooled database connections implies the need to force reconnects from the
mid-tier in order to pick up the new settings. In a 24x7 environment, this might be
achieved via the use of ONS (Oracle Notification Service) to mark all such pooled
connections as stale, thus forcing new connections to be established.
This example demonstrates how to configure Network Encryption for Oracle clients
such as Oracle Audit Vault (we will be modifying this configuration in a later
course). For JDBC applications such as SQL Developer, JDeveloper or a J2EE
application, a similar configuration can be set up.
To show that you are using ASO network encryption, you can query the dynamic view
V$SESSION_CONNECT_INFO. This view displays one row for each network service
adapter the database instance is currently using. Run the alias db01, then connect to
SQL*Plus as system/oracle1. Issue the SQL*Plus COLUMN format commands
and run the query shown below. You can see that ASO network encryption
adapters are in fact being used by Oracle for this instance.
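A typical query against this view looks like the following (a sketch; the original lab's own COLUMN formats and predicates may differ):

```sql
COLUMN sid FORMAT 9999
COLUMN osuser FORMAT A10
COLUMN network_service_banner FORMAT A60

SELECT sid, osuser, network_service_banner
FROM   v$session_connect_info
WHERE  sid = SYS_CONTEXT('USERENV', 'SID');
```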
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> sqlplus system/oracle1
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
[The query output is wrapped in this capture; the returned rows include banners
such as:
Oracle Advanced Security: authentication service for Linux: Version 11.1.0.7.0 - Production]
8 rows selected.
LAB EXERCISE 03 – Transparent Data Encryption
A. Business Driver
As of 2007, all merchants accepting credit cards for payment need to be compliant with
the Payment Card Industry (PCI) standards. For more information, see:
https://www.pcisecuritystandards.org/.
CashBankTrust’s requirements for meeting Payment Card Industry (PCI) standards
involve encrypting certain data at rest. Specifically, under section 3 of the PCI
requirements:
Requirement 3: Protect stored cardholder data
Encryption is a critical component of cardholder data protection. If an intruder
circumvents other network security controls and gains access to encrypted data,
without the proper cryptographic keys, the data is unreadable and unusable to that
person. Other effective methods of protecting stored data should be considered as
potential risk mitigation opportunities. For example, methods for minimizing risk
include not storing cardholder data unless absolutely necessary, truncating
cardholder data if full PAN is not needed, and not sending PAN in unencrypted e-
mails.
3.1 Keep cardholder data storage to a minimum. Develop a data retention and
disposal policy. Limit storage amount and retention time to that which is required
for business, legal, and/or regulatory purposes, as documented in the data
retention policy.
3.2 Do not store sensitive authentication data subsequent to authorization (even if
encrypted). Sensitive authentication data includes the data as cited in the following
Requirements 3.2.1 through 3.2.3:
3.2.1 Do not store the full contents of any track from the magnetic stripe (that is on
the back of a card, in a chip, or elsewhere). This data is alternatively called full
track, track, track 1, track 2, and magnetic stripe data. In the normal course of
business, the following data elements from the magnetic stripe may need to be
retained: the accountholder’s name, primary account number (PAN), expiration
date, and service code. To minimize risk, store only those data elements needed
for business. NEVER store the card verification code or value or PIN verification
value data elements.
3.2.2 Do not store the card-validation code or value (three-digit or four-digit number
printed on the front or back of a payment card) used to verify card-not-present
transactions.
3.2.3 Do not store the personal identification number (PIN) or the encrypted PIN
block.
3.3 Mask PAN when displayed (the first six and last four digits are the maximum
number of digits to be displayed). Note: This requirement does not apply to
employees and other parties with a specific need to see the full PAN; nor does the
requirement supersede stricter requirements in place for displays of cardholder
data (for example, for point of sale [POS] receipts).
3.4 Render PAN, at minimum, unreadable anywhere it is stored (including data on
portable digital media, backup media, in logs, and data received from or stored by
wireless networks) by using any of the following approaches:
• Strong one-way hash functions (hashed indexes)
• Truncation
• Index tokens and pads (pads must be securely stored)
• Strong cryptography with associated key management processes and procedures.
The MINIMUM account information that must be rendered unreadable is the PAN.
3.4.1 If disk encryption is used (rather than file- or column-level database
encryption), logical access must be managed independently of native operating
system access control mechanisms (for example, by not using local system or
Active Directory accounts). Decryption keys must not be tied to user accounts.
3.5 Protect encryption keys used for encryption of cardholder data against both
disclosure and misuse.
3.5.1 Restrict access to keys to the fewest number of custodians necessary
3.5.2 Store keys securely in the fewest possible locations and forms.
3.6 Fully document and implement all key management processes and procedures
for keys used for encryption of cardholder data, including the following:
3.6.1 Generation of strong keys
3.6.2 Secure key distribution
3.6.3 Secure key storage
3.6.4 Periodic changing of keys
• As deemed necessary and recommended by the associated application (for
example, re-keying); preferably automatically
• At least annually.
3.6.5 Destruction of old keys
3.6.6 Split knowledge and establishment of dual control of keys (so that it requires
two or three people, each knowing only their part of the key, to reconstruct the
whole key)
3.6.7 Prevention of unauthorized substitution of keys
3.6.8 Replacement of known or suspected compromised keys
3.6.9 Revocation of old or invalid keys
3.6.10 Requirement for key custodians to sign a form stating that they understand
and accept their key-custodian responsibilities.
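Requirement 3.3 above (display at most the first six and last four digits of the PAN) can be illustrated with a short sketch; the helper below is hypothetical and not part of the lab scripts:

```python
# Illustrative PAN masking per PCI DSS 3.3: keep at most the first six and
# last four digits visible; mask the rest. Non-digit separators pass through.
def mask_pan(pan: str) -> str:
    digits = [c for c in pan if c.isdigit()]
    masked, i = [], 0
    for c in pan:
        if c.isdigit():
            keep = i < 6 or i >= len(digits) - 4
            masked.append(c if keep else "*")
            i += 1
        else:
            masked.append(c)   # keep spaces/dashes for readability
    return "".join(masked)

print(mask_pan("4111 1111 1111 1111"))  # 4111 11** **** 1111
```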
B. Setup
The following scripts will be used in this lab and are available in the
/home/oracle/aso_scripts directory:
• change_oe_schema.sql
• create_db_users.sql
...
SQL> @create_db_users.sql
...
<a number of commands are executed here>
...
SQL> set echo off
C. Creating a Wallet
In this lab you will enable transparent data encryption for selected columns of the OE
schema. You will create a wallet. You will also rekey each table in the OE schema based
on corporate security requirements. Note that by doing this work alone (as KLYSY), you
will be violating PCI requirement 3.6.6.
Now attempt to modify the OE schema (containing order entry information) by running
the script /home/oracle/aso_scripts/change_oe_schema.sql as shown
below. Why do you get errors (such as below) when attempting to create or modify some
of the OE schema tables?
ERROR at line 1:
ORA-28365: wallet is not open
...
...
[Note: If you get errors other than those related to the wallet not being open, try
running the script as oe/oe.]
A wallet is a container that is used to store authentication and signing credentials,
including the TDE master key, PKI private keys, certificates, and trusted certificates
needed by SSL. With TDE, wallets are used on the server to protect the TDE master
key.
Oracle provides two different types of wallets: encryption wallet and auto-open
wallet. The encryption wallet (filename ewallet.p12) is the one recommended for
TDE and the one we will be using in this lab. It needs to be opened manually after
database startup and prior to TDE encrypted data being accessed. If the Wallet is not
opened, the database will return an error when TDE protected data is queried. The
auto-open wallet (filename cwallet.sso) opens automatically when a database is
started; hence it can be used for unattended Data Guard (Physical Standby only)
environments where encrypted columns are shipped to secondary sites.
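The open/closed gating described above can be modeled in a few lines (a conceptual sketch with hypothetical class names, not an Oracle API):

```python
# Conceptual model of TDE wallet behavior: encrypted columns are readable
# only while the wallet is open. Names here are hypothetical.
class WalletNotOpen(Exception):
    pass

class Wallet:
    def __init__(self, auto_open: bool = False):
        # auto_open=True models cwallet.sso; False models ewallet.p12,
        # which must be opened manually after database startup.
        self.is_open = auto_open

    def open(self, password: str) -> None:
        self.is_open = True

def query_encrypted_column(wallet: Wallet) -> str:
    """Accessing TDE-protected data requires an open wallet."""
    if not wallet.is_open:
        raise WalletNotOpen("ORA-28365: wallet is not open")
    return "decrypted row data"

w = Wallet()                       # encryption wallet: closed after startup
try:
    query_encrypted_column(w)
except WalletNotOpen as e:
    print(e)                       # ORA-28365: wallet is not open
w.open("firstpartsecondpart")
print(query_encrypted_column(w))   # decrypted row data
```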
After logging in as klysy/klysy, you will be presented with the following screen:
Click on the “Server” tab at the top, and then, on the screen that next appears, click
the “Transparent Data Encryption” link under the “Security” section on the right-hand
side.
The following screen will appear:
At this point, no encryption wallet exists. We would also want to specify a wallet
location on the server that is distinct from any other files/directories that may be
backed up as part of a backup regime (i.e. we need to have the wallet backed up, but
it is preferable to have it not co-located with the Oracle RDBMS backup).
Click the “Change” button on this screen – you will be presented with the following:
Making sure “Network Profile” is selected in the dropdown box, click the “Go”
button to the right. You will be taken to the following screen:
Enter the O/S credentials of the operating system user here (note: in practice this
would more likely be a distinct individual from KLYSY) – oracle/oracle1 – and click
the “Login” button. The following screen appears:
Click on the triangle next to “Advanced Security Options”. Click the “Wallet
Location” at the bottom of this screen and the following screen appears:
Click OK to save your changes. To reflect this change in your session, log out of
Database Control and log in again as klysy.
Now, navigate via the “Server” tab to the “Transparent Data Encryption” link (under the
“Security” sub-section). The following screen should appear:
At this point, you can enter a wallet password. Enter the password
“firstpartsecondpart”. The masking facility in 11g ensures that two separate
individuals can be entrusted with the respective portion of a wallet password. This is
in accordance with requirement 3.6.6 of PCI. Click “OK” and the following screen
should appear:
The wallet is now open for use.
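The split-knowledge arrangement behind this password can be sketched as follows (illustrative only; the helper name is hypothetical, and Enterprise Manager's masked field is what actually keeps each part hidden):

```python
# PCI 3.6.6 split knowledge: each custodian supplies only their part of the
# wallet password; neither alone knows the whole. Hypothetical helper.
def assemble_wallet_password(part_one: str, part_two: str) -> str:
    return part_one + part_two

print(assemble_wallet_password("firstpart", "secondpart"))  # firstpartsecondpart
```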
Encrypting Data
CUST_EMAIL VARCHAR2(30) ENCRYPT
ACCOUNT_MGR_ID NUMBER(6)
CUST_GEO_LOCATION MDSYS.SDO_GEOMETRY
DATE_OF_BIRTH DATE ENCRYPT
MARITAL_STATUS VARCHAR2(20) ENCRYPT
GENDER VARCHAR2(1)
INCOME_LEVEL VARCHAR2(20)
SSN VARCHAR2(10) ENCRYPT
CUST_EMAIL
------------------------------
Ishwarya.Roberts@LAPWING.COM
Dieter.Matthau@VERDIN.COM
Divine.Sheen@COWBIRD.COM
Frederico.Romero@CURLEW.COM
Goldie.Montand@DIPPER.COM
…and so on…
If the wallet is not open, any attempt to access encrypted data causes an error to
be returned:
System altered.
ERROR at line 1:
ORA-28365: wallet is not open
Table altered.
Table altered.
SQL> @
CUST_EMAIL
------------------------------
Ishwarya.Roberts@LAPWING.COM
Dieter.Matthau@VERDIN.COM
Divine.Sheen@COWBIRD.COM
Frederico.Romero@CURLEW.COM
Goldie.Montand@DIPPER.COM
…and so on…
319 rows selected.
This activity would be performed by CashBankTrust staff from time to time in order to
satisfy the following PCI requirements:
3.6.4 Periodic changing of keys
• As deemed necessary and recommended by the associated application (for example,
re-keying); preferably automatically
• At least annually.
3.6.8 Replacement of known or suspected compromised keys
Remember, the unencrypted data will be available whenever the wallet is opened. The
data will be encrypted on disk and in any copies or backups of the data. Network
encryption (covered by the previous lab) encrypts the data in motion over the network.
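The rekey operation itself is a single DDL statement per encrypted table; a sketch (the table name and algorithm here are illustrative, not necessarily those used by the lab scripts):

```sql
-- Re-generate the column encryption key for a TDE-protected table
ALTER TABLE oe.customers REKEY USING 'AES192';
```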
Customer Objections
After presenting this solution to CashBankTrust, they mention to you that they already
have in place a couple of different technologies within the organization to achieve
something similar. In the one case, a custom-built Java application is using the Jasypt
library to add encryption capabilities at the application layer – i.e. data sent by the
application to the database is already encrypted by the application itself. In the other
case, they are using a disk-based encryption solution that operates at the I/O layer on the
database server itself. All I/O thus performed on Oracle datafiles results in datafiles that
are completely encrypted.
What are the specific benefits of column-based TDE over these two different approaches?
LAB EXERCISE 04 – Tablespace Level Data Encryption
CashBankTrust has been using column-level TDE for quite some time. In the last lab, we
saw them applying it to tables associated with their new e-commerce application for PCI
compliance. Now they would like to evaluate Tablespace Encryption. Tablespace
encryption is an attractive option to CashBankTrust for several reasons:
• Less upfront analysis needs to be done to identify candidates for encryption –
only entire tablespaces need to be identified. Unlike with column-level TDE,
no impact assessment needs to be made around columns used as indexes.
• CashBankTrust is beginning to make use of SecureFiles, the 11g unstructured
data type, for secure and efficient storage of all documents, etc. used in their
applications. With Tablespace Encryption, these files can be encrypted on
disk along with all the other data types.
• On-disk encryption is important for protecting PII (personally identifiable
information) to comply with regulations such as HIPAA and to protect data
from being compromised and used for purposes other than those for which it
was intended.
NOTE: All scripts used in this lab exercise can be found in the directory
/home/oracle/scripts.
oracle:/home/oracle> cd scripts/
oracle:/home/oracle/scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
2. Launch SQL*Plus with the /nolog option. Run the script
create_enc_tablespace.sql to build the encrypted tablespace used in
this exercise. [NOTE: Be sure that there are no segments currently residing in
any tablespace named EXAMPLE_ENC before running this script! Use the
provided script show_contents_of_example_enc_ts.sql to check this
first. The output below performs the check. If there are segments, move them
to another tablespace before running create_enc_tablespace.sql.]
SQL> @show_contents_of_example_enc_ts.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> select segment_name,segment_type from dba_segments
2 where tablespace_name='EXAMPLE_ENC'
3 /
no rows selected
SQL> @create_enc_tablespace.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> drop tablespace example_enc including contents and datafiles;
drop tablespace example_enc including contents and datafiles
*
ERROR at line 1:
ORA-00959: tablespace 'EXAMPLE_ENC' does not exist
SQL>
SQL> create tablespace example_enc
2 datafile '/u01/oracle/oradata/db01/example01_enc.dbf'
3 size 50m
4 encryption using 'AES192'
5 default storage(encrypt)
6 /
Tablespace created.
COUNTRIES EXAMPLE SH NO
CUSTOMERS EXAMPLE SH NO
DIMENSION_EXCEPTIONS EXAMPLE SH NO
DR$SUP_TEXT_IDX$I EXAMPLE SH NO
DR$SUP_TEXT_IDX$R EXAMPLE SH NO
FWEEK_PSCAT_SALES_MV EXAMPLE SH NO
PRODUCTS EXAMPLE SH NO
PROMOTIONS EXAMPLE SH NO
SUPPLEMENTARY_DEMOGRAPHICS EXAMPLE SH NO
12 rows selected.
4. Verify that all indexes for SH.CUSTOMERS and SH.PRODUCTS are valid by
running the script show_sh_index_status_cust_prod.sql. Rebuild
any invalid indexes by running the script
rebuild_sh_indexes_cust_prod_nonencrypt.sql.
SQL> @show_sh_index_status_cust_prod.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> select substr(index_name,1,30) "INDEX",substr(table_name,1,10)
"TABLE",
2 substr(tablespace_name,1,10) "TABLESPACE",status
3 from dba_indexes
4 where owner='SH' and table_name in ('PRODUCTS','CUSTOMERS')
5 /
8 rows selected.
SQL>
SQL> set echo off
SQL>
SQL> @@lab3_exec_plan_1.sql
SQL> -- Test Execution Plan #1
SQL> select cust_last_name
2 from sh.customers
3 where cust_gender='M'
4 and cust_marital_status='Married'
5 /
Execution Plan
----------------------------------------------------------
Plan hash value: 1936762766
[Plan table output is wrapped in this capture; the surviving row shows a
BITMAP AND operation, i.e. bitmap index access.]
4 - access("CUST_MARITAL_STATUS"='Married')
5 - access("CUST_GENDER"='M')
SQL> @@lab3_exec_plan_2.sql
SQL> -- Test Execution Plan #2
SQL> select prod_name,prod_desc
2 from sh.products
3 where prod_id<10
4 /
Execution Plan
----------------------------------------------------------
Plan hash value: 2114366748
[Plan table output is wrapped in this capture; the surviving row shows:
| 1 | TABLE ACCESS BY INDEX ROWID | PRODUCTS | 1 | 58 | 2 (0) | 00:00:01 |]
2 - access("PROD_ID"<10)
SQL> @@lab3_exec_plan_3.sql
SQL> -- Test Execution Plan #3
SQL> select c.cust_last_name
2 , s.time_id
3 , s.prod_id
4 from sh.sales s, sh.customers c
5 where c.cust_id = s.cust_id (+)
6 and s.prod_id = 7145
7 /
Execution Plan
----------------------------------------------------------
Plan hash value: 3177224361
[Plan table output is wrapped in this capture; the surviving rows show:
| 0 | SELECT STATEMENT | 1 | 30 | 30 (0) | 00:00:01 |
| 1 | NESTED LOOPS |
| 2 | NESTED LOOPS | 1 | 30 | 30 (0) | 00:00:01 |]
6 - access("S"."PROD_ID"=7145)
7 - access("C"."CUST_ID"="S"."CUST_ID")
SQL> @@lab3_exec_plan_4.sql
SQL> -- Test Execution Plan #4
SQL> select distinct
2 a.cust_last_name,a.cust_city,b.prod_name
3 from sh.customers a, sh.products b,sh.sales c
4 where a.cust_id=c.cust_id
5 and c.prod_id=b.prod_id
6 /
Execution Plan
----------------------------------------------------------
Plan hash value: 2067517253
[Plan table output is wrapped in this capture; no operation rows survive.]
2 - access("C"."PROD_ID"="B"."PROD_ID")
4 - access("A"."CUST_ID"="C"."CUST_ID")
6. Move the tables SH.CUSTOMERS and SH.PRODUCTS into the encrypted tablespace
EXAMPLE_ENC by running the script move_sh_cust_prod_tables_to_enc_ts.sql,
then check whether the indexes for these tables are still valid by running the script
show_sh_index_status_cust_prod.sql.
SQL> @move_sh_cust_prod_tables_to_enc_ts.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter table sh.customers move tablespace example_enc;
Table altered.
Table altered.
SQL>
SQL> set echo off
SQL> @show_sh_index_status_cust_prod.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> select substr(index_name,1,30) "INDEX",substr(table_name,1,10)
"TABLE",
2 substr(tablespace_name,1,10) "TABLESPACE",status
3 from dba_indexes
4 where owner='SH' and table_name in ('PRODUCTS','CUSTOMERS')
5 /
8 rows selected.
SQL>
SQL> set echo off
7. Rebuild all indexes for the tables SH.CUSTOMERS and SH.PRODUCTS in the
non-encrypted tablespace EXAMPLE by running the script
rebuild_sh_indexes_cust_prod_nonencrypt.sql. Verify that all of
these indexes are now valid.
SQL> @rebuild_sh_indexes_cust_prod_nonencrypt.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter index sh.CUSTOMERS_PK rebuild tablespace example;
Index altered.
SQL> alter index sh.CUSTOMERS_GENDER_BIX rebuild tablespace example;
Index altered.
Index altered.
Index altered.
Index altered.
Index altered.
Index altered.
Index altered.
SQL>
SQL> select substr(index_name,1,30) "INDEX",substr(table_name,1,10)
"TABLE",
2 substr(tablespace_name,1,10) "TABLESPACE",status
3 from dba_indexes
4 where owner='SH' and table_name in ('PRODUCTS','CUSTOMERS')
5 /
8 rows selected.
SQL>
8. Run the script get_execution_plans.sql to see the plans for the same queries
run earlier. Note that the indexes, which are not encrypted, are still being used
in these plans to access the encrypted table data.
SQL> @get_execution_plans.sql
SQL> connect / as sysdba
Connected.
SQL> set autotrace traceonly explain
SQL> @@lab3_exec_plan_1.sql
SQL> -- Test Execution Plan #1
SQL> select cust_last_name
2 from sh.customers
3 where cust_gender='M'
4 and cust_marital_status='Married'
5 /
Execution Plan
----------------------------------------------------------
Plan hash value: 1936762766
[Plan table output is wrapped in this capture; the surviving row shows a
BITMAP AND operation, i.e. bitmap index access.]
4 - access("CUST_MARITAL_STATUS"='Married')
5 - access("CUST_GENDER"='M')
SQL> @@lab3_exec_plan_2.sql
SQL> -- Test Execution Plan #2
SQL> select prod_name,prod_desc
2 from sh.products
3 where prod_id<10
4 /
Execution Plan
----------------------------------------------------------
Plan hash value: 2114366748

[plan table truncated]

2 - access("PROD_ID"<10)
SQL> @@lab3_exec_plan_3.sql
SQL> -- Test Execution Plan #3
SQL> select c.cust_last_name
2 , s.time_id
3 , s.prod_id
4 from sh.sales s, sh.customers c
5 where c.cust_id = s.cust_id (+)
6 and s.prod_id = 7145
7 /
Execution Plan
----------------------------------------------------------
Plan hash value: 3177224361

| 0 | SELECT STATEMENT  | | 1 | 30 | 30 (0)| 00:00:01 | | |
| 1 |  NESTED LOOPS     | |   |    |       |          | | |
| 2 |   NESTED LOOPS    | | 1 | 30 | 30 (0)| 00:00:01 | | |
[remaining plan rows truncated]

6 - access("S"."PROD_ID"=7145)
7 - access("C"."CUST_ID"="S"."CUST_ID")
SQL> @@lab3_exec_plan_4.sql
SQL> -- Test Execution Plan #4
SQL> select distinct
2 a.cust_last_name,a.cust_city,b.prod_name
3 from sh.customers a, sh.products b,sh.sales c
4 where a.cust_id=c.cust_id
5 and c.prod_id=b.prod_id
6 /
Execution Plan
----------------------------------------------------------
Plan hash value: 2067517253

[plan table truncated]

2 - access("C"."PROD_ID"="B"."PROD_ID")
4 - access("A"."CUST_ID"="C"."CUST_ID")
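The index placement checked in steps 7 and 8 can also be confirmed directly from the data dictionary by joining DBA_INDEXES to DBA_TABLESPACES, whose ENCRYPTED column (available in 11g) reports whether each tablespace is encrypted. A minimal sketch, assuming the SH objects from this lab:

```sql
-- Sketch: list SH index tablespaces alongside their encryption status.
-- DBA_TABLESPACES.ENCRYPTED reports YES/NO per tablespace in 11g.
SELECT i.index_name,
       i.tablespace_name,
       t.encrypted,
       i.status
FROM   dba_indexes     i
JOIN   dba_tablespaces t ON t.tablespace_name = i.tablespace_name
WHERE  i.owner = 'SH'
AND    i.table_name IN ('PRODUCTS','CUSTOMERS');
```

After the rebuild into EXAMPLE the rows should report ENCRYPTED = NO; after the later rebuild into EXAMPLE_ENC the same query should report YES.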
9. Finally, we wish to verify that indexes can still be used in execution plans if both
tables and indexes reside in an encrypted tablespace. Run the script
rebuild_sh_indexes_cust_prod_encrypt.sql to rebuild all indexes
on tables SH.CUSTOMERS and SH.PRODUCTS in the encrypted tablespace
EXAMPLE_ENC. Verify that these indexes are valid.
SQL> @rebuild_sh_indexes_cust_prod_encrypt.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter index sh.CUSTOMERS_PK rebuild tablespace example_enc;
Index altered.
Index altered.
Index altered.
Index altered.
Index altered.
Index altered.
Index altered.
Index altered.
SQL>
SQL> select substr(index_name,1,30) "INDEX",substr(table_name,1,10)
"TABLE",
2 substr(tablespace_name,1,15) "TABLESPACE",status
3 from dba_indexes
4 where owner='SH' and table_name in ('PRODUCTS','CUSTOMERS')
5 /
8 rows selected.
SQL>
SQL> set echo off
10. Run the script get_execution_plans.sql to see the plans for the same queries run
before. Are the associated indexes, now encrypted, still being used in these plans?
SQL> @get_execution_plans.sql
SQL> connect / as sysdba
Connected.
SQL> set autotrace traceonly explain
SQL> @@lab3_exec_plan_1.sql
SQL> -- Test Execution Plan #1
SQL> select cust_last_name
2 from sh.customers
3 where cust_gender='M'
4 and cust_marital_status='Married'
5 /
Execution Plan
----------------------------------------------------------
Plan hash value: 1936762766

[plan table truncated; row 3 is a BITMAP AND operation]

4 - access("CUST_MARITAL_STATUS"='Married')
5 - access("CUST_GENDER"='M')
SQL> @@lab3_exec_plan_2.sql
SQL> -- Test Execution Plan #2
SQL> select prod_name,prod_desc
2 from sh.products
3 where prod_id<10
4 /
Execution Plan
----------------------------------------------------------
Plan hash value: 2114366748

[plan table truncated]

2 - access("PROD_ID"<10)
SQL> @@lab3_exec_plan_3.sql
SQL> -- Test Execution Plan #3
SQL> select c.cust_last_name
2 , s.time_id
3 , s.prod_id
4 from sh.sales s, sh.customers c
5 where c.cust_id = s.cust_id (+)
6 and s.prod_id = 7145
7 /
Execution Plan
----------------------------------------------------------
Plan hash value: 3177224361

| 0 | SELECT STATEMENT  | | 1 | 30 | 30 (0)| 00:00:01 | | |
| 1 |  NESTED LOOPS     | |   |    |       |          | | |
| 2 |   NESTED LOOPS    | | 1 | 30 | 30 (0)| 00:00:01 | | |
|* 6 |  BITMAP INDEX SINGLE VALUE | SALES_PROD_BIX | | | | | 1 | 28 |
[remaining plan rows truncated]

6 - access("S"."PROD_ID"=7145)
7 - access("C"."CUST_ID"="S"."CUST_ID")
SQL> @@lab3_exec_plan_4.sql
SQL> -- Test Execution Plan #4
SQL> select distinct
2 a.cust_last_name,a.cust_city,b.prod_name
3 from sh.customers a, sh.products b,sh.sales c
4 where a.cust_id=c.cust_id
5 and c.prod_id=b.prod_id
6 /
Execution Plan
----------------------------------------------------------
Plan hash value: 2067517253

[plan table truncated]

2 - access("C"."PROD_ID"="B"."PROD_ID")
4 - access("A"."CUST_ID"="C"."CUST_ID")
LAB EXERCISE 05 – Database Vault Realms
Database Vault also plays a role in many regulatory customer scenarios, as follows:
Some companies have used Database Vault as part of compliance with the European
Union Privacy Directive, which does not allow the transfer of personal data to
countries outside the EU that lack adequate personal data privacy safeguards. Other
companies are deploying Database Vault in response to external and internal audits that
have uncovered policy enforcement issues and have directed that they be addressed by
preventative controls.
Similar protections on the OE schema would also help CashBankTrust comply with
Section 7 of the PCI Requirements:
1. If you have not already done so, please be sure the following is started (see pages
8-10):
• Listener for db01
• The db01 database
• The db01 db console
2. Start up your browser if it is not already running. Access (via Bookmark) the
URL https://dbsecurity.oracle.com:5501/dva. Log in to the
Database Vault Web Administration UI (DVA) as dvowner. (The password for
dvowner is oracle12#.) Enter the other login attributes as shown below.
1. Use DVA to create a realm named HR Realm. Ensure that all objects owned by
the user HR are protected by this realm. Make HR the only authorized user.
a. Select Realms.
b. Select Create.
c. Enter name and description as shown. Leave all other fields as defaults.
Select OK.
e. Scroll down to the section entitled Secure Realm Objects. Select Create.
f. Enter HR as object owner, object type as “%”, and object name as “%”.
Select OK.
g. Scroll down to the Realm Authorizations section. Select Create.
Select HR from pulldown list as Grantee, select Participant radio
button as Authorization Type.
h. Select OK twice. Your Realm screen should show the newly created realm
HR Realm as shown below.
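The same realm can also be defined from SQL*Plus with the Database Vault administration API, run as the Database Vault owner (dvowner). This is a sketch only: the parameter names and DBMS_MACUTL constants follow the 11g Database Vault documentation and may differ slightly by release.

```sql
-- Sketch: create HR Realm via DBMS_MACADM instead of the DVA UI.
-- Run as dvowner; constants come from DBMS_MACUTL.
BEGIN
  DBMS_MACADM.CREATE_REALM(
    realm_name    => 'HR Realm',
    description   => 'Protects all objects owned by HR',
    enabled       => DBMS_MACUTL.G_YES,
    audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);
  -- Protect every object type and object name owned by HR.
  DBMS_MACADM.ADD_OBJECT_TO_REALM(
    realm_name   => 'HR Realm',
    object_owner => 'HR',
    object_name  => '%',
    object_type  => '%');
  -- Make HR the only authorized (participant) user.
  DBMS_MACADM.ADD_AUTH_TO_REALM(
    realm_name    => 'HR Realm',
    grantee       => 'HR',
    rule_set_name => NULL,
    auth_options  => DBMS_MACUTL.G_REALM_AUTH_PARTICIPANT);
END;
/
```

Either route produces the realm tested in the next step; the UI and the API maintain the same underlying DVSYS metadata.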
2. Back in your terminal window, set the alias to db01. Connect to SQL*Plus as
SYSDBA. Attempt to select data from HR.DEPARTMENTS. Attempt to create a
table in the HR schema. Notice that both operations fail even though SYSDBA
possesses sufficient system privileges.
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> sqlplus / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
ORA-01031: insufficient privileges
3. As user KLYSY, attempt to perform the same operations as in Step 2. Verify that
both operations fail. [Leave this session connected this time.]
4. As user OE, attempt to perform the same operations as in Step 2. Why does one
succeed and not the other?
9 rows selected.
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-47401: Realm violation for create table on HR.JUNK
ORA-06512: at "DVSYS.AUTHORIZE_EVENT", line 55
ORA-06512: at line 31
6. Return to your SQL*Plus session as user KLYSY. Attempt to run the same
commands as before. Verify that both commands now succeed, since KLYSY is
an authorized user for HR Realm.
Table created.
7. Although Transparent Data Encryption provides physical, file-level data
protection for tables in the BANKING schema, it is clear that powerful users such
as DBAs still have the ability to read sensitive BANKING data as clear text. To
provide additional protection, create a realm named “Banking Realm”. Add the
entire BANKING schema to the realm, and do not provide any authorized users to
this realm.
8. Return to your SQL*Plus session. Connect as SYSDBA. Verify that SYSDBA
cannot query data in the BANKING tables CUSTOMER, ACCOUNT, and
ACCOUNT_BALANCE.
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/oracle/product/11.1.0/db_1
OH=/oracle/product/11.1.0/db_1
oracle:/home/oracle> sqlplus / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
select * from banking.customer
*
ERROR at line 1:
ORA-01031: insufficient privileges
Note that the query succeeds but the two DDL commands fail. Although BANKING
owns these tables, without being an authorized user for the Banking Realm, the schema
owner is limited to queries and DML statements on its objects.
SQL> connect banking/banking
Connected.
SQL> select * from banking.account_balance;
ACCOUNT_ID ACCOUNT_B BAL_AVAIL_ADJ BAL_AVAIL_CLOSING
---------- --------- ------------- -----------------
1001 01-MAY-01 121455.9 101440.01
1002 01-JUL-01 782000 780211.23
1003 15-JUN-03 978332.9 765232
1004 19-APR-02 850200.18 850200.18
1005 28-OCT-04 232900.1 120918.75
1006 03-SEP-99 496039.88 490190.59
1017 22-JAN-92 101900 100248.95
1018 01-SEP-00 768950 569122.3
8 rows selected.
LAB EXERCISE 06 – Database Vault Command Rules
It has been decided by Corporate Security that the application DBAs for the BANKING
schema must be restricted from selecting, inserting, updating, or deleting any banking
data. In addition, the senior application DBAs must be the only users who have the
capability of dropping or truncating any tables in the BANKING schema. Therefore, the
only users who are permitted access to the BANKING schema are the following:
In this lab we will implement the above security requirements using a combination of
Database Vault realms, rule sets, and command rules.
2. Under Realm Authorizations, add users BANKING, BANKING_DBA_SR
and BANKING_DBA_JR as realm participants. After completing this step,
the Realm Authorization section for Banking Realm should look like the
following:
3. Create a rule set named “No JR Banking DBAs” to restrict access for user
BANKING_DBA_JR. Select Rule Sets, select CREATE. For the rule
expression use the string dvf.f$session_user != ‘BANKING_DBA_JR’.
4. Create a rule set named “No Banking App DBAs” to restrict access for users
BANKING_DBA_SR and BANKING_DBA_JR. For the rule expression use
the string dvf.f$session_user not in
('BANKING_DBA_SR','BANKING_DBA_JR')
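Rule sets can likewise be scripted as the Database Vault owner. A hedged sketch of the "No Banking App DBAs" rule set, assuming the 11g DBMS_MACADM signatures and DBMS_MACUTL constants (the rule name below is an illustrative choice, not from the lab):

```sql
-- Sketch: define the "No Banking App DBAs" rule set from SQL*Plus.
-- Run as dvowner; parameter names follow the 11g API docs.
BEGIN
  DBMS_MACADM.CREATE_RULE_SET(
    rule_set_name   => 'No Banking App DBAs',
    description     => 'Blocks BANKING_DBA_SR and BANKING_DBA_JR',
    enabled         => DBMS_MACUTL.G_YES,
    eval_options    => DBMS_MACUTL.G_RULESET_EVAL_ALL,
    audit_options   => DBMS_MACUTL.G_RULESET_AUDIT_FAIL,
    fail_options    => DBMS_MACUTL.G_RULESET_FAIL_SHOW,
    fail_message    => NULL,
    fail_code       => NULL,
    handler_options => DBMS_MACUTL.G_RULESET_HANDLER_OFF,
    handler         => NULL);
  -- The rule expression matches the one entered in DVA.
  DBMS_MACADM.CREATE_RULE(
    rule_name => 'Not A Banking App DBA',
    rule_expr => 'dvf.f$session_user not in (''BANKING_DBA_SR'',''BANKING_DBA_JR'')');
  DBMS_MACADM.ADD_RULE_TO_RULE_SET(
    rule_set_name => 'No Banking App DBAs',
    rule_name     => 'Not A Banking App DBA');
END;
/
```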
5. Create a command rule for the SELECT command on behalf of the BANKING
schema. We wish to prevent the Junior Banking DBAs from being able to select
banking data, so assign the rule set “No JR Banking DBAs” to the SELECT
command rule.
6. Create command rules for the INSERT, UPDATE, and DELETE commands.
We wish to restrict both senior and junior application DBAs from running DML
commands against any objects owned by BANKING. For each command rule,
specify Object Owner as BANKING, Object Name as “%”, and Rule Set “No
Banking App DBAs”. The output below shows the creation of the command rule
for the INSERT command.
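For reference, each DVA command rule has a one-call API equivalent. A sketch for the INSERT case, run as the Database Vault owner (11g parameter names; treat the exact signature as an assumption for your release):

```sql
-- Sketch: the step 6 INSERT command rule via DBMS_MACADM.
-- Repeat with command => 'UPDATE' and command => 'DELETE'.
BEGIN
  DBMS_MACADM.CREATE_COMMAND_RULE(
    command       => 'INSERT',
    rule_set_name => 'No Banking App DBAs',
    object_owner  => 'BANKING',
    object_name   => '%',
    enabled       => DBMS_MACUTL.G_YES);
END;
/
```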
7. Verify that you have successfully created the four command rules. Under Command
Rules, you should see the following command rules:
8. Verify that users BANKING and BANKING_DBA_SR are able to select data from
the table BANKING.ACCOUNT_BALANCE, but that user BANKING_DBA_JR is
not able to select data from this table.
SQL> connect banking/banking
Connected.
SQL> select * from banking.account_balance;
ACCOUNT_ID ACCOUNT_B BAL_AVAIL_ADJ BAL_AVAIL_CLOSING
---------- --------- ------------- -----------------
1001 01-MAY-01 121455.9 101440.01
1002 01-JUL-01 782000 780211.23
1003 15-JUN-03 978332.9 765232
1004 19-APR-02 850200.18 850200.18
1005 28-OCT-04 232900.1 120918.75
1006 03-SEP-99 496039.88 490190.59
1017 22-JAN-92 101900 100248.95
1018 01-SEP-00 768950 569122.3
8 rows selected.
9. Verify that user BANKING is able to delete data from the table
BANKING.ACCOUNT_BALANCE, but that users BANKING_DBA_SR and
BANKING_DBA_JR are not able to delete data from this table.
ORA-01031: insufficient privileges
10. As user BANKING, create a new table named CUSTOMER_TMP using CTAS from
the table CUSTOMER. Verify that users BANKING, BANKING_DBA_SR, and
BANKING_DBA_JR are all able to truncate the table CUSTOMER_TMP.
SQL> connect banking/banking
Connected.
SQL> create table banking.customer_tmp as select * from banking.customer;
Table created.
Table truncated.
Table truncated.
Table truncated.
ORA-06512: at line 31
12. Now we will turn off the Recycle Bin functionality in the database and again attempt
to drop the table. Notice the change in behavior.
System altered.
Table dropped.
SQL>
LAB EXERCISE 07 – Using Custom Factors in Database Vault
Database Vault allows companies to enforce policies around data access, imposing
limitations on database access the way a firewall imposes limitations on network traffic.
In this lab you will create the “job role” factor. CashBankTrust is evaluating the use of
Database Vault for protecting HR data. This exercise will allow you to assist them by
demonstrating the following:
• Create a rule set that restricts an authorized user to the HR realm based on
whether he has the job role of SA_MAN or HR_MAN
• Create a command rule that only allows managers to query employee data
(i.e., job role of ‘%MAN’)
Note: If we “promote” an employee from clerk to manager (i.e.,
apply a committed update of one row of HR.EMPLOYEES to
change an employee’s job role from SA_CLERK to SA_MAN), that
employee would immediately be able to query employee data.
1. In DVA, remove the realm named HR Realm. Select the radio button next to
this realm, then click REMOVE.
2. Set the alias db01. Connect to SQL*Plus with the /nolog option. Run the
script /home/oracle/scripts/create_retrieve_job_id_function.sql
to create a function named retrieve_job_id. This function,
owned by user HR, returns the job_id of the connected user if the user’s login
name matches a value of the EMAIL column in the table HR.EMPLOYEES.
oracle:/home/oracle/scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/scripts> sqlplus /nolog
SQL*Plus: Release 11.1.0.7.0 - Production on Sun Oct 5 20:09:39 2008
SQL> @create_retrieve_job_id_function.sql
SQL>
SQL> connect hr/hr
Connected.
SQL>
SQL> CREATE OR REPLACE
2 FUNCTION RETRIEVE_JOB_ID
3 RETURN VARCHAR2 AS
4 v_job_id VARCHAR2(10);
5 BEGIN
6 SELECT job_id INTO v_job_id FROM HR.EMPLOYEES
7 WHERE EMAIL = DVF.F$SESSION_USER;
8 RETURN v_job_id;
9 EXCEPTION
10 WHEN NO_DATA_FOUND THEN
11 RETURN NULL;
12 END;
13 /
Function created.
SQL>
SQL> set echo off
3. As HR, grant execute on retrieve_job_id to user DVSYS. You can run the
script named grant_execute_to_dvsys.sql to complete this task.
SQL> @grant_execute_to_dvsys.sql
SQL>
SQL> connect hr/hr
Connected.
SQL>
SQL> grant execute on retrieve_job_id to dvsys;
Grant succeeded.
Fill in the values for Name and Retrieval Method as shown below. Be sure to set
Evaluation to “By Access”. Select OK to save.
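The same factor can be scripted with DBMS_MACADM.CREATE_FACTOR as the Database Vault owner. This sketch assumes the 11g signature and DBMS_MACUTL constants; the audit and fail options shown are illustrative choices, not values taken from the lab screenshots:

```sql
-- Sketch: the JobRole factor via the API (run as dvowner).
-- The retrieval method wraps the HR.RETRIEVE_JOB_ID function
-- created in step 2; evaluation is "By Access".
BEGIN
  DBMS_MACADM.CREATE_FACTOR(
    factor_name      => 'JobRole',
    factor_type_name => 'User',
    description      => 'Job role of the connected user',
    rule_set_name    => NULL,
    get_expr         => 'HR.RETRIEVE_JOB_ID',
    validate_expr    => NULL,
    identify_by      => DBMS_MACUTL.G_IDENTIFY_BY_METHOD,
    labeled_by       => DBMS_MACUTL.G_LABELED_BY_SELF,
    eval_options     => DBMS_MACUTL.G_EVAL_ON_ACCESS,
    audit_options    => DBMS_MACUTL.G_AUDIT_OFF,
    fail_options     => DBMS_MACUTL.G_FAIL_SILENTLY);
END;
/
```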
5. Verify that the factor JobRole works as intended. Test use of JobRole for the
users SMAVRIS, WSMITH, KPARTNER, and PKESTNER. Use the script
test_jobrole_factor.sql to perform these tests. Note the returned
values for dvf.f$jobrole. The factor returns a null value for
PKESTNER because this user is not an employee.
SQL> @test_jobrole_factor.sql
SQL>
SQL> connect smavris/smavris
Connected.
SQL> select dvf.f$jobrole from dual;
F$JOBROLE
--------------------------------------------------------------------
HR_REP
SQL>
SQL> connect wsmith/wsmith
Connected.
SQL> select dvf.f$jobrole from dual;
F$JOBROLE
--------------------------------------------------------------------
SA_REP
SQL>
SQL> connect kpartner/kpartner
Connected.
SQL> select dvf.f$jobrole from dual;
F$JOBROLE
--------------------------------------------------------------------
SA_MAN
SQL>
SQL> connect pkestner/pkestner
Connected.
SQL> select dvf.f$jobrole from dual;
F$JOBROLE
--------------------------------------------------------------------
SQL>
SQL> set echo off
6. Create a rule set named “SA Employees Only”. Include a rule named “SA
Emps Only Rule” that evaluates to TRUE only if the connected user is an
employee having a jobrole of either SA_MAN or SA_REP. Use the rule
expression dvf.f$jobrole in ('SA_MAN','SA_REP').
Add the new rule to the rule set:
7. To test use of the new rule set, create a command rule on the SELECT
statement for the object HR.DEPARTMENTS. Apply the rule set “SA
Employees Only” to the command rule.
SQL> @test_selects_on_depts.sql
SQL>
SQL> connect wsmith/wsmith
Connected.
SQL> select * from hr.departments where rownum<6;
50 Shipping 121 1500
SQL>
SQL> connect smavris/smavris
Connected.
SQL> select * from hr.departments where rownum<6;
select * from hr.departments where rownum<6
*
ERROR at line 1:
ORA-01031: insufficient privileges
SQL>
SQL> connect kpartner/kpartner
Connected.
SQL> select * from hr.departments where rownum<6;
SQL>
SQL> connect pkestner/pkestner
Connected.
SQL> select * from hr.departments where rownum<6;
select * from hr.departments where rownum<6
*
ERROR at line 1:
ORA-01031: insufficient privileges
SQL>
SQL> connect klysy/klysy
Connected.
SQL> select * from hr.departments where rownum<6;
select * from hr.departments where rownum<6
*
ERROR at line 1:
ORA-01031: insufficient privileges
SQL>
SQL> set echo off
SQL>
LAB EXERCISE 08 – Customizing Database Vault Separation of Duty With OLS
Oracle Label Security helps organizations address security and compliance requirements
using sensitivity labels such as confidential and sensitive. Sensitivity labels can be
assigned to users in the form of label authorizations and associated with operations and
objects inside the database using data labels. Label authorizations provide tremendous
flexibility in making access control decisions and enforcing separation of duty. Oracle
Label Security can be used to address numerous operational issues related to security,
compliance and privacy. Used with Oracle Database Vault, Oracle Label Security label
authorizations are factors that control access to applications, databases and data.
Oracle Database Vault command rules can check whether a user has been authorized
access to Sensitive data considered Personally Identifiable Information (PII). Database
administrators can be assigned different label authorizations, enforcing separation of
duty within a consolidated application environment. For example, a Select command
rule can check a user's label authorization before allowing access to an application
table.
Label authorizations combined with Oracle Database Vault enable powerful access
control policies within the database.
o Limit connections to databases based on whether label authorizations include
Sensitive Personally Identifiable Information (PII) access
o Limit access to application tables based on whether label authorizations
include Sensitive Personally Identifiable Information (PII) access
o Limit DDL such as Create Table based on whether label authorizations
include Sensitive Personally Identifiable Information (PII) access.
1. Verify that the DB01 database instance and DB Control are both up and
running.
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> sqlplus / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
INSTANCE_NAME STATUS
---------------- ------------
db01 OPEN
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
- Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
oracle:/home/oracle> emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.1.0.7.0
Copyright (c) 1996, 2008 Oracle Corporation. All rights reserved.
https://dbsecurity.oracle.com:5501/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory
/u01/oracle/product/11.1.0/db_1/dbsecurity.oracle.com_db01/sysman/log
4. Under the GENERAL tab, name the policy DBA_ACCESS_CONTROL and
specify the column name CLASSES. Be sure to click on the Enabled check
box, and set NO_CONTROL as the default policy enforcement option. Click
on the LABEL COMPONENTS tab when finished.
6. You should now see the successfully created label policy. Click on GO to add
authorized users for this label policy.
8. Under Database Users, click on ADD, then enter HR_DBA% followed by
clicking on GO to find the two DBA users of interest.
11. Make certain that HR_DBA_JR shows up in your list of Database Users, as
shown below. [If this is not the case, select ADD under Database Users and
retry.] Then click NEXT.
12. Under Privileges, select the check box next to the READ policy as shown
below. Then click on NEXT.
13. Under Levels, set the value S (“Sensitive”) for all values as shown below,
then click NEXT.
15. Under Review, verify the settings followed by clicking on FINISH.
16. You should now see a message indicating that the user was successfully
added. Repeat steps 7-15 to add the user HR_DBA_SR. The only
difference is to assign clearance level HS (“Highly Sensitive”)
in Step 13. When finished, you will see the following authorization
summary. Log out of DB Console when finished.
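The same label authorizations can be granted from SQL*Plus with the OLS administration API, run as an OLS administrator (e.g., LBACSYS). A sketch, assuming the SA_USER_ADMIN package documented for Oracle Label Security; the remaining label parameters are left to their defaults:

```sql
-- Sketch: grant READ clearances for the DBA_ACCESS_CONTROL policy.
-- HR_DBA_JR gets Sensitive, HR_DBA_SR gets Highly Sensitive.
BEGIN
  SA_USER_ADMIN.SET_USER_LABELS(
    policy_name    => 'DBA_ACCESS_CONTROL',
    user_name      => 'HR_DBA_JR',
    max_read_label => 'S');
  SA_USER_ADMIN.SET_USER_LABELS(
    policy_name    => 'DBA_ACCESS_CONTROL',
    user_name      => 'HR_DBA_SR',
    max_read_label => 'HS');
END;
/
```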
18. Drop any realms and/or command rules that have been created on HR schema
objects. Create a new realm named “HR Realm Special Authorization”,
including all HR objects and having the user accounts HR_DBA_JR and
HR_DBA_SR as the only authorized users for this realm. Click on OK when
finished.
9. Navigate to Rule Sets. Create a rule set named “Check DBA Clearance”
containing a single rule as defined below. Click OK when finished. The rule expression
is:
dominates(sa_utl.numeric_label('DBA_ACCESS_CONTROL'),char_to_label('DBA_ACCESS_CONTROL','HS'))=1
19. The rule you created checks whether the clearance of the current user dominates
(i.e., is higher than or equal to) “Highly Sensitive”. Verify that your successfully
created rule looks like the following. Click OK when finished.
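You can evaluate the dominance expression directly before wiring it into a command rule. A sketch, using the policy and label names assumed by this lab:

```sql
-- Sketch: evaluate the rule expression for the current session.
-- sa_utl.numeric_label returns the session's label for the policy;
-- char_to_label converts the literal 'HS' to a label number.
SELECT dominates(
         sa_utl.numeric_label('DBA_ACCESS_CONTROL'),
         char_to_label('DBA_ACCESS_CONTROL','HS')) AS dominates_hs
FROM   dual;
-- A result of 1 means the session label dominates HS; 0 means it does not.
```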
20. Create a command rule on the SELECT statement for the HR.EMPLOYEES
table. Include the rule set “Check DBA Clearance”. Click OK when finished.
11. Connect to SQL*Plus using the /nolog option. Our first test will be to show
that, with the exception of selects against the HR.EMPLOYEES table, both
HR_DBA_JR and HR_DBA_SR have full access to the HR schema. As each
of these users, attempt to query HR.DEPARTMENTS and also to reorganize
HR.EMPLOYEES. Verify that all of these operations succeed for each user.
Table altered.
50 Shipping 121 1500
Table altered.
12. Finally, as each of the two DBA users, attempt to query HR.EMPLOYEES.
You should observe that only HR_DBA_SR, having the higher security
clearance, is able to successfully read employee data.
LAB EXERCISE 09 – Starting Up Audit Vault Agent & Collectors
Because audit logs provide a detective, after-the-fact control, it is imperative to collect
and report data in a streamlined and timely manner, as close to the event as possible. In
an effort to automate its detective controls, CashBankTrust is deploying
Audit Vault, a product that consolidates database audit records from heterogeneous
sources such as Oracle, SQL Server, Sybase, and DB2. In this lab we show how the
product can be administered both from a web-based UI and using SQL statements.
In other workshops, you may discuss using Enterprise Manager’s capabilities for further
automating detective controls of the software, hardware and network environment within
which the database is running.
Your current image already contains the installed Audit Vault Server and Agent
components. Additionally, the three collectors DBAUD, OSAUD, and REDO have
already been configured. In this lab exercise, you will start up the Audit Vault server,
agent, and collectors. Additionally, you will generate activity in the source database
DB01 to create a set of audit records for use in Audit Vault.
1. Verify the current Oracle environment settings in your session using the Linux
alias ora. The configuration you have consists of one source database (SID is
db01) along with the Audit Vault warehouse database (SID is av), one agent,
and three collectors. All of the configuration details are listed below:
Agent Source User Password Oracle12#db01
2. Run the alias db01 to set your environment to point to the db01 database.
Verify that the listener and database are up and running. Start up either or both
of these components if not already running. FOR PERFORMANCE
REASONS, ENSURE THAT THE DATABASE CONSOLE IS NOT
RUNNING. Finally, make sure that the master encryption key is open for the
db01 database.
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> lsnrctl status
Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbsecurity.oracle.com)(
PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.1.0.7.0 - Production
Start Date 05-OCT-2008 17:33:25
Uptime 0 days 5 hr. 0 min. 55 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/u01/oracle/product/11.1.0/db_1/network/admin/listener.ora
Listener Log File
/u01/oracle/diag/tnslsnr/dbsecurity/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbsecurity.oracle.com)(PORT=1521))
)
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "db01.oracle.com" has 1 instance(s).
Instance "db01", status READY, has 1 handler(s) for this service...
Service "db01XDB.oracle.com" has 1 instance(s).
Instance "db01", status READY, has 1 handler(s) for this service...
Service "db01_XPT.oracle.com" has 1 instance(s).
Instance "db01", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle:/home/oracle> sqlplus / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
INSTANCE_NAME STATUS
---------------- ------------
db01 OPEN
If necessary, start up the database at this point. If you need to start up the database,
also open the wallet. If the wallet isn’t open, the Redo collector will not start up.
[
SQL> startup
SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY
"firstpartsecondpart";
]
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
- Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing
oracle:/home/oracle> emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.1.0.7.0
Copyright (c) 1996, 2008 Oracle Corporation. All rights reserved.
https://dbsecurity.oracle.com:5501/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory
/u01/oracle/product/11.1.0/db_1/dbsecurity.oracle.com_db01/sysman/log
oracle:/home/oracle> emctl stop dbconsole
oracle:/home/oracle> cd scripts/
oracle:/home/oracle/scripts> . start_AV_listener.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
Listening on:
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbsecurity.oracle.com)(PORT=1522))
)
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias listener
Version TNSLSNR for Linux: Version 10.2.0.3.0 - Production
Start Date 05-OCT-2008 22:43:57
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/u01/oracle/product/10.2/av/network/admin/listener.ora
Listener Log File
/u01/oracle/product/10.2/av/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbsecurity.oracle.com)(PORT=1522))
)
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this
service...
The command completed successfully
oracle:/home/oracle/scripts> . start_AV_Agent_oc4j.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/avagent
OH=/u01/oracle/product/10.2/avagent
AVCTL started
Starting OC4J...
OC4J started successfully.
Variable Size 226494856 bytes
Database Buffers 377487360 bytes
Redo Buffers 2928640 bytes
Database mounted.
Database opened.
SQL> Disconnected from Oracle Database 10g Enterprise Edition Release
10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Oracle Database Vault options
AVCTL started
Starting OC4J...
OC4J started successfully.
TZ set to GB-Eire
Oracle Audit Vault 10g Database Control Release 10.2.3.0.0 Copyright (c)
1996, 2008 Oracle Corporation. All rights reserved.
http://dbsecurity.oracle.com:5700/av
Oracle Audit Vault 10g is running.
------------------------------------
oracle:/home/oracle/scripts> . start_AV_Agent.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
AVCTL started
Starting agent...
Agent started successfully.
AVCTL started
Getting agent metrics...
--------------------------------
Agent is running
--------------------------------
Metrics retrieved successfully.
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 0.00
Bytes per second = 0.00
--------------------------------
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 0.00
Bytes per second = 0.00
--------------------------------
9. Navigate to Management -> Agents. You will see that agent1 is currently
running. The Status field should show a green Up arrow.
10. Navigate to Management -> Collectors. Note that all three collectors for the
DB01 source database are up. The Status field should show a green Up arrow
for each collector.
11. The table below shows all scripts used to start up and shut down the Audit
Vault environment for the workshop. All scripts listed in the table can be
found in the image under the path /home/oracle/scripts. Using the
table should make things considerably easier whenever you need to bounce
your image or to bring up or bring down individual Audit Vault components.
[Note that in order for all collectors to start up and remain up, you must
ensure that the DB01 source database is open before starting the
collectors, and that the master encryption wallet is also open. If the
master wallet is not open, the REDO collector will start but will
shut itself down after about 20-30 seconds. See the alert log for the DB01
database for details.]
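A quick way to confirm the wallet state before starting the collectors is to query V$ENCRYPTION_WALLET as SYSDBA on DB01. This is only a suggested check; the wallet password below is a placeholder, not one of the workshop accounts:

SQL> select wrl_type, status from v$encryption_wallet;
SQL> -- If STATUS shows CLOSED, open the wallet before starting the collectors:
SQL> alter system set encryption wallet open identified by "<wallet_password>";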
STARTING UP AUDIT VAULT ENVIRONMENT
Step  Script Name                    Description
1     start_AV_listener.sh           Starts the listener LISTENER (port 1522) for use by Audit Vault
2     start_AV_Agent_oc4j.sh         Starts OC4J instance for agent
3     start_AV_Server.sh             Starts AV server database, OC4J instance, and dbconsole
4     start_AV_Agent.sh              Starts AV agent
5     start_AV_Collectors_db01.sh    Starts collectors for DB01 source database

SHUTTING DOWN AUDIT VAULT ENVIRONMENT
Step  Script Name                    Description
1     stop_AV_Collectors_db01.sh     Stops collectors for DB01 source database
2     stop_AV_Agent.sh               Stops AV agent
3     stop_AV_Server.sh              Stops AV server database, OC4J instance, and dbconsole
4     stop_AV_Agent_oc4j.sh          Stops OC4J instance for agent
5     stop_AV_listener.sh            Stops the listener LISTENER on port 1522
LAB EXERCISE 10 – Injecting Audit Records
The purpose of this lab exercise is to enable database auditing and FGA in the DB01
source database and to inject audit records into the audit trails for transfer to the Audit
Vault warehouse.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Audit succeeded.
SQL>
SQL> Audit create any table by access;
Audit succeeded.
SQL>
SQL> Audit drop any table by access;
Audit succeeded.
.
.
.
SQL> execute DBMS_FGA.ADD_POLICY( -
> object_schema => 'hr', -
> object_name => 'employees', -
> policy_name => 'chk_hr_emp', -
> audit_condition => 'salary>10000', -
> audit_column => 'salary');
SQL>
SQL> set echo off
SQL>
SQL> @inject_audit.sql
SQL>
SQL> @change_schema.sql
SQL> connect system/oracle1
Connected.
SQL>
SQL> create table hr.emp1 as select * from hr.employees;
Table created.
SQL>
SQL> set echo off
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
- Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
oracle:/home/oracle/av_scripts>
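As an optional check: the FGA policy created above audits only SELECTs that reference the salary column and return rows satisfying salary > 10000. Still connected to DB01, a pair of statements along these lines should demonstrate it (the exact rows returned depend on the HR sample data):

SQL> -- Satisfies the audit_condition and touches the audited column,
SQL> -- so it generates a fine-grained audit record:
SQL> select last_name, salary from hr.employees where salary > 10000;
SQL> -- Inspect the local FGA trail for the policy:
SQL> select db_user, sql_text, timestamp
  2  from dba_fga_audit_trail
  3  where upper(policy_name) = 'CHK_HR_EMP';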
4. Set alias av to change your environment to point to the Audit Vault server.
Force a manual refresh of the Audit Vault warehouse by running the shell
script /home/oracle/av_scripts/refresh_warehouse.sh.
oracle:/home/oracle/av_scripts> av
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
oracle:/home/oracle/av_scripts> . refresh_warehouse.sh
AVCTL started
Refreshing warehouse...
Waiting for refresh to complete...
done.
6. Verify that the lower portion of the Overview screen contains summary
entries for audit events. Sample output is shown below.
7. Click on the hyperlink for the Audit Event Category named “Data Access”.
View the Data Access report, and verify that you see audit records that were
collected on today’s date.
LAB EXERCISE 11 – Creating Audit Vault Alerts
This lab goes into detail on creating and using the alert that monitors account creation.
The purpose of this lab exercise is to create and use Audit Vault alerts for the DB01
source database.
1. Log into the Audit Vault console as avauditor/oracle12#. Click on the Audit
Policy > Alerts subtab. Click on the Create button.
• Audit Event: CREATE USER
3. Verify that the newly created alert appears in the summary screen, as
shown below.
oracle:/home/oracle> cd av_scripts/
oracle:/home/oracle/av_scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/av_scripts> sqlplus /nolog
User created.
User created.
User created.
User created.
.
.
.
5. Set alias av. Run the script refresh_warehouse.sh to refresh the Audit Vault
database with new audit records.
oracle:/home/oracle/av_scripts> av
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
oracle:/home/oracle/av_scripts> . refresh_warehouse.sh
AVCTL started
Refreshing warehouse...
Waiting for refresh to complete...
done.
6. In the Audit Vault console, force a refresh of the Overview screen. You should
see a number of alerts appearing in the Overview screen.
7. Click on the hyperlink for the “Account Management” event under Alerts By
Audit Event Category. Partial output is shown below. Click on the first entry in
the list of alerts.
9. We will next demonstrate Audit Vault's ability to report on alerts in near real
time, without the need for a warehouse refresh. While viewing the Audit Vault
console's Overview screen as user avauditor, note the current number of alerts.
The sample output below shows a total of 17 alerts, all for the “Account
Management” event category.
10. Set alias db01, and log in to SQL Plus as dvacctmgr/oracle12#. Create a user named
prayner (password of prayner). Then immediately drop this user account.
oracle:/home/oracle/av_scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/av_scripts> sqlplus dvacctmgr/oracle12#
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
User created.
User dropped.
10. Return to the Audit Vault console. You should now observe that the total number
of alerts has incremented by 1. As shown below, the sample output
shows that the total number of alerts has incremented from 17 to 18, without
our having to force a refresh of the Audit Vault warehouse.
11. Click on the hyperlink for the “Account Management” event under Alerts By
Audit Event Category. Partial output is shown below. Select the first entry in the
list, since this is the most recent alert record.
13. Logout of the Audit Vault console when finished.
LAB EXERCISE 12 – Audit Management
In the latest version of Audit Vault, two important new capabilities have been introduced:
a. The ability to move the native audit trail base tables to a tablespace
other than SYSTEM. Many customers had performed this action manually themselves,
but it was not previously supported. This capability allows for more granular
management of the audit trails.
b. The ability to create and schedule a job to perform automated removal of
audit records in an Oracle database. Once a record has been collected and
stored by Audit Vault, there is no longer any need to store this record in the
source database. Many customers balk at the storage requirements for audit
data, and Audit Vault helps address this concern by centralizing storage of
audit records, making management of these records more efficient and less
costly.
This lab exercise guides you through managing audit trails on a source Oracle database.
oracle:/home/oracle> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle> cd av_scripts/
oracle:/home/oracle/av_scripts> sqlplus /nolog
SQL> @show_audit_trails.sql
Connected.
SQL>
SQL> select owner,table_name,tablespace_name
2 from dba_tables
3 where table_name in ('AUD$','FGA_LOG$');
OWNER      TABLE_NAME           TABLESPACE_NAME
---------- -------------------- --------------------
SYS        FGA_LOG$             SYSTEM
SYSTEM     AUD$                 SYSTEM
SQL>
SQL> set echo off
15. In SQL Plus run the script configure_audit_mgt.sql. This script creates a
new tablespace named AUDIT_TRAIL_TS and moves the two native audit trail
base tables to this tablespace. [Ignore the error “Tablespace does not exist” when
attempting to drop the tablespace before creating it.]
SQL> @configure_audit_mgt.sql
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> drop tablespace audit_trail_ts including contents and datafiles;
drop tablespace audit_trail_ts including contents and datafiles
*
ERROR at line 1:
ORA-00959: tablespace 'AUDIT_TRAIL_TS' does not exist
Tablespace created.
SQL>
SQL> execute dbms_audit_mgmt.set_audit_trail_location(-
> audit_trail_type => dbms_audit_mgmt.audit_trail_db_std,-
> audit_trail_location_value => 'AUDIT_TRAIL_TS');
SQL>
SQL> set echo off
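The excerpt above relocates only the standard audit trail (AUD$). Presumably configure_audit_mgt.sql also moves the fine-grained trail (FGA_LOG$) with an analogous call; done by hand, it would look like this:

SQL> execute dbms_audit_mgmt.set_audit_trail_location(-
  >   audit_trail_type => dbms_audit_mgmt.audit_trail_fga_std,-
  >   audit_trail_location_value => 'AUDIT_TRAIL_TS');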
16. Run the script show_audit_trails.sql to view (again) the two native
audit trail base tables and the tablespace in which each currently resides.
Notice that both audit trails have been successfully moved into the
AUDIT_TRAIL_TS tablespace.
SQL> @show_audit_trails.sql
Connected.
SQL>
SQL> select owner,table_name,tablespace_name
2 from dba_tables
3 where table_name in ('AUD$','FGA_LOG$');
SQL>
SQL> set echo off
SQL> BEGIN
2 DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
3 AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
4 AUDIT_TRAIL_PURGE_INTERVAL => 12,
5 AUDIT_TRAIL_PURGE_NAME => 'AUDIT_TRAIL_PURGE_JOB',
6 USE_LAST_ARCH_TIMESTAMP => TRUE );
7 END;
8 /
SQL>
SQL> SET SERVEROUTPUT ON
SQL> BEGIN
2 IF
3
DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD)
4 THEN
5 DBMS_OUTPUT.PUT_LINE('AUD$ is initialized for clean-up');
6 ELSE
7 DBMS_OUTPUT.PUT_LINE('AUD$ is not initialized for clean-up.');
8 END IF;
9 END;
10 /
AUD$ is initialized for clean-up
SQL>
SQL> set echo off
18. Query the dictionary view DBA_SCHEDULER_JOBS to verify that your purge
job has been successfully created.
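One possible form of this query (the exact columns you display are up to you):

SQL> select job_name, enabled, state
  2  from dba_scheduler_jobs
  3  where job_name = 'AUDIT_TRAIL_PURGE_JOB';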
LAB EXERCISE 13 – Adding A Second Collection Source
Audit Vault will allow CashBankTrust to standardize database policies across databases,
as well as to consolidate and analyze audit records in preparation for an external or
internal audit. CashBankTrust’s audit and compliance requirements demand that two
databases, DB01 and DB02, both be audited and have their audit records centralized for
reporting and analysis. Currently only DB01 is configured as an Audit Vault collection
source. In this lab exercise you will configure three collectors for DB02 and verify that
its audit records can be captured and stored in Audit Vault along with audit data from
DB01.
This lab will guide you through the process of adding a second collection source
database for Audit Vault. All of the configuration scripts are located in
/home/oracle/av_scripts.
To start this lab, please ensure that you have started the db01 and db02 databases as well
as Audit Vault. To do this, use the /home/oracle/scripts/start_av.sh script (after first
setting your environment to db01 and then to db02 using the aliases) and the
/home/oracle/start_av.sh script.
Temporary note: In order to run the following Audit Vault labs successfully, please issue
the following statements on the command line:
• chmod +x /home/oracle/av.sh
• chmod +x /home/oracle/db01.sh
• chmod +x /home/oracle/db02.sh
This issue will be permanently addressed in the next revision of the VMware image.
oracle:/home/oracle/av_scripts> . step1a_create_sourcedb_user.sh
SQL*Plus: Release 11.1.0.7.0 - Production on Mon Oct 6 20:23:28 2008
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
SQL>
User created.
2. Verify Collectors
a. Once you’ve created the source user on db02 you are ready to start the
Audit Vault configuration.
b. We’ll start by verifying that the 11g database is able to support auditing
for the three collectors:
i. DB Aud Collector
ii. OS Aud Collector
iii. Redo Collector
c. Run the step2_verify_collector.sh script
i. . step2_verify_collector.sh
oracle:/home/oracle/av_scripts> . step2_verify_collector.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
source DB02.ORACLE.COM verified for OS File Audit Collector collector
source DB02.ORACLE.COM verified for Aud$/FGA_LOG$ Audit Collector
collector
parameter _JOB_QUEUE_INTERVAL is not set; recommended value is 1
ERROR: parameter UNDO_RETENTION = 900 is not in required value range
[3600 - ANY_VALUE]
ERROR: parameter GLOBAL_NAMES = false is not set to required value true
ERROR: set the above init.ora parameters to recommended/required values
d. After running the verification step (step 2), you'll see that the REDO
collector requires some DB parameter changes. We've prepared a script to
make the appropriate changes for you.
i. . step2a_change_db_parms.sh
oracle:/home/oracle/av_scripts> . step2a_change_db_parms.sh
ORACLE_SID=db02
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
SQL>
System altered.
SQL>
System altered.
SQL>
System altered.
SQL>
System altered.
SQL>
System altered.
SQL>
System altered.
SQL>
System altered.
SQL> 2 3
SQL> Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> ORACLE instance started.
e. You should see that the script completes and that the changes have been
made.
f. Now we’ve altered the DB to support the REDO collector re-run the
verification step: . step2_verify_collector.sh
g. You should now see that the OS, DB and REDO collector are all verified.
NOTE: Ignore the _JOB_QUEUE_INTERVAL parameter.
oracle:/home/oracle/av_scripts> . step2_verify_collector.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
source DB02.ORACLE.COM verified for OS File Audit Collector collector
source DB02.ORACLE.COM verified for Aud$/FGA_LOG$ Audit Collector
collector
parameter _JOB_QUEUE_INTERVAL = 4 is not set to recommended value 1
source DB02.ORACLE.COM verified for REDO Log Audit Collector collector
b. This step will add db02 as a new source in Audit Vault (enter
avcolluser / avcolluser as the source user name and password). Once the
source is added, you are able to add collectors for it.
oracle:/home/oracle/av_scripts> . step3_add_source.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
Enter Source user name: avcolluser
Enter Source password:
Adding source...
Source added successfully.
source successfully added to Audit Vault
c. Let’s confirm that the source has been added correctly. In the VMware
guest, start Firefox.
d. Select Internet > Firefox.
e. Enter the Audit Vault URL:
i. http://dbsecurity.oracle.com:5700/av
ii. avadmin/oracle12# connect as AV_ADMIN
iii. Click on the Configuration Tab.
f. You should see that the db02 source has been added.
4. Add OS Collector
a. Once we’ve added the source we can add collectors for that source. We’ll
add all three collectors for the DB02 database – DB, OS and Redo.
b. We’ll start with the OSAUD collector: . step4_add_os_collector.sh
c. Please take a look at the scripts to see what they’re doing. The
configuration steps are documented in great detail in the Audit Vault
Administrators guide.
oracle:/home/oracle/av_scripts> . step4_add_os_collector.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
source DB02.ORACLE.COM verified for OS File Audit Collector collector
Adding collector...
Collector added successfully.
collector successfully added to Audit Vault
5. Add DB Collector
a. The DBAUD Collector is collecting audit records from the aud$ and
fga_log$ base tables.
b. Now let’s add the DBAUD Collector: . step5_add_db_collector.sh
oracle:/home/oracle/av_scripts> . step5_add_db_collector.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
source DB02.ORACLE.COM verified for Aud$/FGA_LOG$ Audit Collector
collector
Adding collector...
Collector added successfully.
collector successfully added to Audit Vault
oracle:/home/oracle/av_scripts> . step6_add_redo_collector.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
parameter _JOB_QUEUE_INTERVAL = 4 is not set to recommended value 1
source DB02.ORACLE.COM verified for REDO Log Audit Collector collector
Adding collector...
Collector added successfully.
collector successfully added to Audit Vault
oracle:/home/oracle/av_scripts> . step8_setup_source.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/avagent
OH=/u01/oracle/product/10.2/avagent
Enter Source user name: avcolluser
Enter Source password:
adding credentials for user avcolluser for connection [SRCDB2]
Storing user credentials in wallet...
Create credential oracle.security.client.connect_string4
done.
updated tnsnames.ora with alias [SRCDB2] to source database
verifying SRCDB2 connection using wallet
oracle:/home/oracle/av_scripts> . start_AV_Collectors_db02.sh
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 0.00
Bytes per second = 0.00
--------------------------------
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 8.69
Bytes per second = 5963.19
--------------------------------
AVCTL started
Starting collector...
Collector started successfully.
AVCTL started
Getting collector metrics...
--------------------------------
Collector is running
Records per second = 0.00
Bytes per second = 0.00
--------------------------------
9. Confirm Configuration
a. We can confirm that all of the newly configured collectors were
configured and started successfully by logging into the Audit Vault
administration application and reviewing the collector information:
http://dbsecurity.oracle.com:5700/av
i. avadmin/oracle12# connect as AV_ADMIN
b. Click on the ‘Configuration’ Tab, then review the ‘Audit Source’ >
‘Source’ and ‘Collector’ tabs.
c. You should see the new db02 source that we added, with the three
collectors.
LAB EXERCISE 14 – Configuring Audit Policy
In order to understand the data collected by Audit Vault, it is important to understand the
different categories of database auditing provided by Oracle. This lab creates audit
policies in each category, and walks you through the different database events that
generate each type of audit record. You may want to refer to this lab exercise later,
using it as a guide to deploying audit policy for Oracle databases.
This lab guides you through configuring audit policy in the Oracle 11g Database and
generating some audit activity.
1. Start up the environment (it may already be up from the last lab):
b. Start Audit Vault
c. Start db02
d. Once everything is started login to the Audit Vault console:
i. Select Internet > Firefox.
ii. Enter the Audit Vault URL: http://dbsecurity.oracle.com:5700/av,
logging in as avadmin / oracle12# and connecting as
AV_ADMIN
e. Click on the ‘Agent’ Sub-tab under the ‘Management’ tab and be sure the
Agent is started.
f. Once the agent is started you will be able to start the collectors for DB02.
g. Start all of the DB02 collectors. Select each collector’s radio button and
click the ‘Start’ button
3. Retrieve the Audit Policies
g. Now, login as the Auditor: http://dbsecurity.oracle.com:5700/av,
logging in as avauditor/oracle12#, connecting as AV_AUDITOR
h. You will now see the Audit Vault console home dashboard. Two
collection sources are shown on the Overview screen. At the moment
there will be nothing on it for the DB02 database, because we have
not yet generated, collected, or reported on any data for this database.
k. For the DB02 source retrieve the Audit Policy from the DB. Select the
DB02 source, and click Retrieve from Source
l. You will notice that the database already has some audit policy. This
is because Oracle Database 11g by default audits a set of events considered
important.
4. Configuring Audit Policy
g. We will now log in to the DB02 database and configure its audit
policy. We will log in to SQL Plus as the sys user, then run a script that
configures the Audit Policy:
i. cd /home/oracle/av_scripts
ii. db02
iii. sqlplus sys/oracle1 as sysdba
h. Once logged in, run the set_policy.sql script: @set_policy.sql. This
will set the audit policy on this Oracle 11g DB.
i. The script will generate some minor errors, but you will see that all of
the audit policy statements generate an ‘Audit succeeded’ confirmation.
5. Examine the Audit Policy
g. Now, we’ll go back to the Firefox session logged in as the Auditor
user.
h. For the DB02 source, retrieve the audit policy from the source.
i. Click the radio button for DB02
ii. Click Retrieve from Source
i. You will now see that there are more audit policies on the summary
page.
j. Click on the DB02 hyperlink to see the detail for the audit policy.
k. This is the Audit Policy detail screen. From here you will be able to:
i. Provision Audit Policy to the Source DB
ii. Review the Audit policy for the source
iii. Export the Audit Policy for the source
l. We’ll now review the policy for this DB source.
g. For example, object auditing can audit all SELECT and DML
statements permitted by object privileges, such as SELECT or
DELETE statements on a given table. The GRANT and REVOKE
statements that control those privileges are also audited.
h. Object auditing lets you audit the use of powerful database commands
that enable users to view or delete very sensitive and private data. You
can audit statements that reference tables, views, sequences,
standalone stored procedures or functions, and packages.
i. Oracle Database and Oracle Audit Vault always set schema object
audit options for all users of the database. You cannot set these options
for a specific list of users.
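For example, the following statements (illustrative only; hr.employees is used because it exists in this image) enable and then disable object auditing on a single table for all users:

SQL> -- Audit every SELECT and DELETE against hr.employees, one record per statement:
SQL> audit select, delete on hr.employees by access;
SQL> -- Turn the same object auditing back off:
SQL> noaudit select, delete on hr.employees;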
Privilege auditing is the auditing of SQL statements that use a system privilege.
You can audit the use of any system privilege. Like statement auditing, privilege
auditing can audit the activities of all database users or of only a specified list of
users.
a. For example, if you enable AUDIT SELECT ANY TABLE,
Oracle Database audits all SELECT tablename statements issued
by users who have the SELECT ANY TABLE privilege. This type
of auditing is very important for the Sarbanes-Oxley (SOX) Act
compliance requirements. Sarbanes-Oxley and other compliance
regulations require the privileged user be audited for inappropriate
data changes or fraudulent changes to records.
b. Privilege auditing audits the use of powerful system privileges
enabling corresponding actions, such as AUDIT CREATE
TABLE. If you set both similar statement and privilege audit
options, then only a single audit record is generated. For example,
if the statement clause TABLE and the system privilege CREATE
TABLE are both audited, then only a single audit record is
generated each time a table is created. The statement auditing
clause, TABLE, audits CREATE TABLE, ALTER TABLE, and
DROP TABLE statements. However, the privilege auditing option,
CREATE TABLE, audits only CREATE TABLE statements,
because only the CREATE TABLE statement requires the
CREATE TABLE privilege.
c. Privilege auditing does not occur if the action is already permitted
by the existing owner and schema object privileges. Privilege
auditing is triggered only if these privileges are insufficient, that is,
only if what makes the action possible is a system privilege.
d. Privilege auditing is more focused than statement auditing for the
following reasons:
i. It audits only a specific type of SQL statement, not a
related list of statements.
ii. It audits only the use of the target privilege.
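The distinction above can be seen in the audit statements themselves; these two examples illustrate statement auditing versus privilege auditing:

SQL> -- Statement auditing: audits CREATE TABLE, ALTER TABLE, and DROP TABLE:
SQL> audit table by access;
SQL> -- Privilege auditing: audits only statements that succeed by virtue of
SQL> -- the SELECT ANY TABLE system privilege:
SQL> audit select any table by access;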
Fine-grained auditing (FGA) enables you to create a policy that defines specific
conditions that must take place for the audit to occur. For example, fine-grained
auditing lets you audit the following types of activities:
f. A table being accessed between 9 p.m. and 6 a.m. or on Saturday and Sunday
g. An IP address from outside the corporate network being used
h. A table column being selected or updated
i. A value in a table column being modified
A fine-grained audit policy provides granular auditing of select, insert, update,
and delete operations. Furthermore, because you are auditing only very specific
conditions, you reduce the amount of audit information generated and can restrict
auditing to only the conditions that you want to audit. This creates a more
meaningful audit trail that supports compliance requirements. For example, a
central tax authority can use fine-grained auditing to track access to tax returns to
guard against employee snooping, with enough detail to determine what data was
accessed. It is not enough to know that a specific user used the SELECT privilege
on a particular table. Fine-grained auditing provides a deeper audit, such as when
the user queried the table or the IP address of the computer from which the user
performed the action.
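A sketch of such a policy, with a hypothetical policy name, that audits off-hours access to the salary column might look like this (following the same DBMS_FGA.ADD_POLICY pattern used earlier in this workshop):

SQL> execute DBMS_FGA.ADD_POLICY( -
  >   object_schema   => 'hr', -
  >   object_name     => 'employees', -
  >   policy_name     => 'offhours_salary', -
  >   audit_condition => 'TO_NUMBER(TO_CHAR(SYSDATE,''HH24'')) >= 21 OR TO_NUMBER(TO_CHAR(SYSDATE,''HH24'')) < 6', -
  >   audit_column    => 'salary', -
  >   statement_types => 'SELECT,UPDATE');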
10. Understanding Capture Rules
You can create a capture rule to track changes in the database redo log files. The
capture rule specifies DML and DDL changes that should be checked when
Oracle Database scans the database redo log. You can apply the capture rule to an
individual table, a schema, or globally to the entire database. Unlike statement,
object, privilege, and fine-grained audit policies, you do not retrieve and activate
capture rule settings from a source database, because you cannot create them
there. You can create capture rules only in the Audit Vault Console.
i. This script will generate activity that we can report on with Audit
Vault. The activity has been specially created to trigger the audit
policy we previously created. Many errors will be generated.
j. Once the activity is generated the Audit Vault Agent will collect it,
and move it to the Audit Vault Server.
k. We’ll now have to refresh the Audit Vault warehouse to see the data.
We will manually refresh the warehouse using the AVCTL command
line utility:
i. av
ii. $OH/avctl refresh_warehouse -wait
oracle:/home/oracle/av_scripts> av
ORACLE_SID=av
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/10.2/av
OH=/u01/oracle/product/10.2/av
oracle:/home/oracle/av_scripts> avctl refresh_warehouse -wait
AVCTL started
Refreshing warehouse...
Waiting for refresh to complete...
done.
l. The first time you run this utility it will take a few minutes.
m. Go back to the Firefox browser. On the Home Tab you will now see a
sizeable boost in Audit Activity.
n. Click on User Sessions -- You will see that there are multiple entries
for the user sessions category.
o. We’ve completed the following tasks:
i. Configured Audit Policy on an 11g database, DB02
ii. Synchronized Audit Vault with the 11g DB Audit Policy
iii. Generated activity matched to the policy
iv. Reviewed the reports in Audit Vault console
1. Start up the environment (it may already be up from the last lab):
a. Start Audit Vault
b. Start db02
c. Once everything is started login to the Audit Vault console.
i. Select Internet > Firefox.
ii. Enter the Audit Vault URL: http://dbsecurity.oracle.com:5700/av
iii. avauditor/oracle12#, connect as: AV_AUDITOR
2. Viewing Alerts
a. This screen shows that no data has been added into the Audit Vault
repository in the past 24 hours. You will now need to refresh the
dashboard to get some data.
b. Select the ‘Last One Month’ radio button and hit the ‘Go’ button. You
will see the Audit Vault dashboard get refreshed.
c. The Dashboard is organized into three sections. The first two pie charts in
this first horizontal section represent the Alerts that are generated in Audit
Vault.
d. The second row shows other alert information.
e. The final horizontal section contains the audit activity.
f. Click on the alert link
g. You will see that you have two alerts (that we created earlier). These two
alerts were generated when we created some sample DB users.
h. Click on the audit activity detail button to see the detail for the record.
i. You will see that the alert drills into the audit report activity for the event.
In the report you will see information about the activity.
j. Now let’s go back to the Audit Vault ‘Home’ tab. Click on the ‘Home’
tab and scroll down to review the four alert charts.
k. Scroll down to the end of the ‘Home’ tab.
l. You will see the ‘Activity by Audit Event Category’. All audit activity
that is collected from your sources will be categorized then made available
in this chart. There are 14 categories. During this lab we will review each
of the categories and activity that is captured in them.
3. Account Management Activity
a. Click on ‘Account Management’ link
iii. AUDIT CREATE PROFILE;
iv. AUDIT CREATE USER;
v. AUDIT DROP PROFILE;
vi. AUDIT DROP USER;
e. Scroll down to the bottom of this page. We’ve captured the SQL Text for
this ‘CREATE USER’ activity. This is being collected because we’ve
used the ‘DB, EXTENDED’ audit_trail parameter.
a. Return to the ‘Home’ tab. We now continue reviewing each of the activity
categories to see what data is captured. Start by clicking on ‘Application
Management’.
b. In this category you will see activity related to packages, procedures and
other PL/SQL code in the database. Here are some sample audit
statements that are captured:
i. AUDIT CREATE CONTEXT;
ii. AUDIT CREATE FUNCTION;
iii. AUDIT CREATE INDEXTYPE;
iv. AUDIT CREATE JAVA;
v. AUDIT CREATE LIBRARY;
vi. AUDIT CREATE OPERATOR;
vii. AUDIT CREATE PACKAGE;
viii. AUDIT CREATE PACKAGE BODY;
ix. AUDIT CREATE PROCEDURE;
x. AUDIT CREATE TRIGGER;
xi. AUDIT CREATE TYPE;
xii. AUDIT CREATE TYPE BODY;
5. Audit Activity
a. Return to the previous page and click on the Audit category; you will see
all of the system audit activity.
7. Object Management Activity
a. Return to the previous page, click on the Object Management category.
This will have all DDL activity. Here are some sample audit policy
statements that will generate activity in this category.
i. AUDIT CREATE DIMENSION;
ii. AUDIT CREATE DIRECTORY;
iii. AUDIT CREATE INDEX;
iv. AUDIT CREATE MATERIALIZED VIEW;
v. AUDIT CREATE MATERIALIZED VIEW LOG;
vi. AUDIT CREATE OUTLINE;
vii. AUDIT CREATE PUBLIC DATABASE LINK;
viii. AUDIT CREATE PUBLIC SYNONYM;
ix. AUDIT CREATE SCHEMA;
x. AUDIT CREATE SEQUENCE;
xi. AUDIT CREATE SYNONYM;
xii. AUDIT CREATE TABLE;
xiii. AUDIT CREATE VIEW;
8. Peer Association Activity
a. Return to the previous page and click on the ‘Peer Association’ category.
This category contains all activity related to database links:
i. AUDIT CREATE DATABASE LINK;
ii. AUDIT DROP DATABASE LINK;
iv. AUDIT GRANT OBJECT;
v. AUDIT GRANT ROLE;
vi. AUDIT REVOKE OBJECT;
vii. AUDIT REVOKE ROLE;
xxiv. AUDIT SUPER USER DML;
xxv. AUDIT SYSTEM GRANT;
xxvi. AUDIT SYSTEM REVOKE;
xxvii. AUDIT TRUNCATE CLUSTER;
b. Return to the Home tab, then click on Data Access.
g. You will now see the ‘Highlight’ configuration panel at the top of the
screen:
h. Add in the following:
i. Name: Highlight Orders
ii. Background Color: Click ‘Yellow’
iii. Text Color: Click ‘Blue’
iv. Column: ‘Target’
v. Expression: ‘ORDERS’
i. Click Apply:
j. You will now see that there are some rows highlighted:
k. Now let’s filter on the ‘Event’ type.
l. Click on the ‘Event’ header in the table. You will see a drop-down menu
of events. Select the UPDATE event:
n. Click on the ‘cog’.
o. Select the ‘Save Report’ option:
b. The report that you just saved is located here.
c. This functionality lets us save commonly used reports and organize them
into categories, which can be useful for grouping reports for regulatory
purposes.
d. Click on the My Report link:
h. This report lists all of the DB login failures across the audited sources:
i. Return to the reporting home page, then click on the ‘Structure Changes’
link. As with the other reports you are able to drill into the detail for a
given activity:
LAB EXERCISE 16 – Starting Up Source Database and Grid Control 10.2.0.4
PLEASE NOTE: In order to preserve memory within the image for Grid Control, first
bring down the entire Audit Vault stack (collectors, agent, and server components).
oracle:/home/oracle/scripts> db01
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
oracle:/home/oracle/scripts> emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.1.0.7.0
Copyright (c) 1996, 2008 Oracle Corporation. All rights
reserved.
https://dbsecurity.oracle.com:5501/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
... Stopped.
2. Since we will be masking DB02, we need to be sure that the database instance
is running.
Note that this script can take some time to run, commonly 5-7 minutes,
depending on the amount of memory you are able to allocate to the image.
Starting up the OMS and the 10gAS infrastructure tends to consume most of
the script’s overall running time.
oracle:/home/oracle/grid_scripts> . start_grid.sh
ORACLE_SID=db01
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/11.1.0/db_1
OH=/u01/oracle/product/11.1.0/db_1
TNS-01106: Listener using listener name LISTENER has already been started
ORACLE_SID=emrep
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/product/grid/db10g
4. In your browser, enter the URL
http://dbsecurity.oracle.com:4889/em, and login as
sysman/oracle1. If using the image’s browser, this URL should show up
in the Favorites pulldown menu as well as in the URL history.
5. Click on the TARGETS tab and verify that your host is shown as being up
(green Up arrow).
6. Click on the DATABASES tab to view all of the discovered databases and
their status. You should see three discovered databases in Grid Control.
7. Click on the hyperlink for the database DB02.ORACLE.COM and view the
current summary data and alerts for the database. [Partial output shown
below.]
LAB EXERCISE 17 – Masking Sensitive Application Data
This lab focuses on the EMPLOYEES and related tables, with the goal of protecting PII
(personally identifiable information) from outside developers who work on their HR
application.
2. Login as system/oracle1, leave “Connect As” set to Normal, and then
click LOGIN.
3. For the table search, enter HR for the schema and EMPLOYEES for the
object name. Click GO.
4. Select View Data from the pulldown menu, then click GO.
5. Below is a partial output of the data in the table HR.EMPLOYEES. We will
compare this with the masked version of the table later in the lab exercise.
6. In Firefox, open a new Tab so that you may return to view this data. In the
new Tab, navigate to the Administration page for database db02.oracle.com.
Under Data Masking, select Definitions.
7. Click MASK to proceed with the steps for masking data.
8. You need to select the columns you want to include in this mask definition.
Accept the default mask name, and under columns click ADD.
9. There are several ways to identify sensitive data. One recommended
approach is to tag the associated sensitive columns with a keyword, such as
MASK, in the column comment. Enter HR for the Schema and MASK% for
the Comment Name, then click SEARCH.
10. Select the column for EMPLOYEE_ID and click ADD.
11. Notice how all associated foreign keys were added automatically. In this case,
there is an additional table, MANAGERS, that is part of the HR application,
but its constraints are enforced in the application rather than in the database.
The MANAGERS table references EMPLOYEE_ID, but the relationship is
not registered in the database as a foreign key constraint. Therefore, we must
add a dependent column for the EMPLOYEE_ID column. Click the + for the
EMPLOYEE_ID column under “Add” for Dependent Columns.
12. Enter HR for schema and MANAGERS for table name, then click SEARCH.
13. Select MGR_ID from the displayed column list, followed by clicking ADD.
14. The dependent column was added. Now we can define the mask format of the
EMPLOYEE_ID column. Click the Format icon.
15. Select RANDOM NUMBERS from the Add pulldown list and click GO.
16. Enter 100000000 for Start Value and 999999999 for End Value and click OK.
17. Verify that your column mask was correctly created, then click OK.
19. Select the four columns listed in Step 18. Click ADD.
20. We have to format each of these four added columns. Select the Format icon
for LAST_NAME.
21. We will use a format already defined in the Format Library. Click IMPORT
FROM LIBRARY.
22. Select the masking format “Anglo American Last Name”, then click IMPORT.
23. View the sample masked data generated for this column. Then click OK.
25. We will again use a masking format already defined in the Format Library.
Click IMPORT FROM LIBRARY.
26. Select “Anglo American First Name”, and then click IMPORT.
27. Observe the sample generated mask data for this column. Then click OK.
28. Select the Format icon for Phone Number. Use the existing mask “Bay Area
Phone Number” from the Format Library.
28. Select the Format icon for Salary. For this column we will randomly shuffle the
original column data. Select SHUFFLE from the Add drop-down list and then click OK.
29. Now that we have identified all columns we wish to mask, click NEXT.
30. Next, the masking script is generated. When it completes, an Impact Report is
displayed. Verify that there are no identified issues with running the masking
script, then click NEXT.
31. Name your job “EMPLOYEE_DATA_MASKING_JOB”. Enter
oracle/oracle1 as your host credentials. Then click NEXT.
32. Notice that you have the option of saving the full masking script at this point
if you wish to manually deploy it yourself. In our case we will have the script
run immediately. Click SUBMIT.
33. You should now see a screen indicating that the job was successfully
submitted. Click “View Job Details”.
34. Verify that your job has completed successfully by examining the status output
below.
35. We can now verify that our data has been masked. Return to
ADMINISTRATION -> SCHEMA -> TABLES. For the HR.EMPLOYEES
table, select the pulldown option View Data, followed by clicking GO.
36. Compare the data now displayed to what you observed in Step 5 (in your first
Tab). Notice how the five columns of interest have been successfully masked.
APPENDIX A – Command Line Examples For Database Vault
• Creating Rules
BEGIN
  dvsys.dbms_macadm.create_rule (
    rule_name => 'Night Hours',
    rule_expr => 'to_char(sysdate,''hh24'') not between ''08'' and ''16''');
END;
/
BEGIN
  dvsys.dbms_macadm.create_rule (
    rule_name => 'Weekend Hours',
    rule_expr => 'to_char(sysdate,''D'') not between ''2'' and ''6''');
END;
/
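To take effect, rules like these are typically grouped into a rule set that is then attached to a command rule or realm. A minimal sketch follows; the rule set name, messages, and option values here are illustrative, and the full CREATE_RULE_SET parameter list should be checked against the DBMS_MACADM reference for your release:

```sql
BEGIN
  -- Create a rule set that is true only when all member rules are true
  -- (names and option values below are illustrative, not prescriptive).
  dvsys.dbms_macadm.create_rule_set (
    rule_set_name   => 'Off-Hours Access',
    description     => 'True outside normal working hours',
    enabled         => 'Y',
    eval_options    => 1,   -- require all rules to evaluate to true
    audit_options   => 1,
    fail_options    => 1,
    fail_message    => 'Access denied outside permitted hours',
    fail_code       => 20001,
    handler_options => 0,
    handler         => '');

  -- Attach the two rules created above to the rule set.
  dvsys.dbms_macadm.add_rule_to_rule_set (
    rule_set_name => 'Off-Hours Access',
    rule_name     => 'Night Hours');
  dvsys.dbms_macadm.add_rule_to_rule_set (
    rule_set_name => 'Off-Hours Access',
    rule_name     => 'Weekend Hours');
END;
/
```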
APPENDIX B – Oracle Label Security (OLS) Lab Exercise
This lab exercise demonstrates the use of Oracle Label Security to set up row-level
security based on label policies. All scripts used in this exercise are included in the
VMware image under $HOME/setup_scripts.
Lab Overview
Oracle Label Security makes separation of duty easy: when lbacsys, the default OLS
administrator, creates a policy, a role named "<policy_name>_DBA" is automatically
granted to lbacsys with the ADMIN option, so that the role can be granted to other users
who will complete and own the policy. In this lab exercise, these users are named
"sec_admin" and "hr_sec". The exercise involves three parties:
1. The owner of the sensitive data in the LOCATIONS table (hr), who determines the
sensitivity of the data and which users get access to each level of sensitivity.
2. The user hr_sec, who maintains the user-related part of the OLS policy by
creating database users and roles and granting clearances to them.
3. The user sec_admin, who creates the OLS labels (both for data and users) that
enable the access mediation defined by the data owner. This user is also
responsible for maintaining the performance of the application.
When the policy is tested and ready for production, lbacsys revokes the granted
execution rights and roles from both "hr_sec" and "sec_admin".
I. Setup
cd /home/oracle/labs
sqlplus /nolog
@ols_create_admin_users_and_roles.sql
II. Creating a Policy
In this section, you will create a policy, grant the role to the admin users, and
create the levels and labels for the policy. Perform the following:
2. When the policy is created, an administration role for this policy is automatically
granted to LBACSYS with the ADMIN option. To enable proper separation of
duty, lbacsys grants this role and some additional execution rights to the admin
users hr_sec and sec_admin. From a SQL*Plus session, execute the script
ols_grant_role.sql:
3. The sec_admin user creates the levels for the policy. Each policy consists of
levels (one or more), and optional compartments and groups, which are not
included in this example. Execute the script ols_create_level.sql to
create levels for your policy.
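The script’s contents are not reproduced here, but level creation is typically done with SA_COMPONENTS.CREATE_LEVEL. A sketch, using the three sensitivity levels this lab assigns later (the policy name and numeric tags are illustrative, not the lab’s actual values):

```sql
-- Executed as sec_admin. Higher level_num means more sensitive.
BEGIN
  sa_components.create_level (
    policy_name => 'ACCESS_LOCATIONS',
    level_num   => 3000,
    short_name  => 'SENS',
    long_name   => 'SENSITIVE');
  sa_components.create_level (
    policy_name => 'ACCESS_LOCATIONS',
    level_num   => 2000,
    short_name  => 'CONF',
    long_name   => 'CONFIDENTIAL');
  sa_components.create_level (
    policy_name => 'ACCESS_LOCATIONS',
    level_num   => 1000,
    short_name  => 'PUB',
    long_name   => 'PUBLIC');
END;
/
```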
4. The sec_admin user also creates the labels (which only contain levels, no
compartments or groups). Execute the script ols_create_label.sql:
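Label creation generally uses SA_LABEL_ADMIN.CREATE_LABEL; a sketch under the same illustrative policy and level names as above:

```sql
-- Executed as sec_admin. Each label here contains only a level.
-- data_label => TRUE makes the label usable as a row (data) label.
BEGIN
  sa_label_admin.create_label (
    policy_name => 'ACCESS_LOCATIONS',
    label_tag   => 3000,
    label_value => 'SENS',
    data_label  => TRUE);
  sa_label_admin.create_label (
    policy_name => 'ACCESS_LOCATIONS',
    label_tag   => 2000,
    label_value => 'CONF',
    data_label  => TRUE);
  sa_label_admin.create_label (
    policy_name => 'ACCESS_LOCATIONS',
    label_tag   => 1000,
    label_value => 'PUB',
    data_label  => TRUE);
END;
/
```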
1. The HR_SEC user binds the labels to the users, defining their clearance.
From a SQL*Plus session, execute the script
ols_set_user_label.sql to create user label authorizations:
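Clearances are typically granted with SA_USER_ADMIN.SET_USER_LABELS. A sketch, using the two test users that appear later in this lab and their observed clearances (the policy name and label short names are illustrative):

```sql
-- Executed as hr_sec. Each call grants the user a maximum read clearance;
-- KPARTNER can later read CONFIDENTIAL and below, LDORAN only PUBLIC.
BEGIN
  sa_user_admin.set_user_labels (
    policy_name    => 'ACCESS_LOCATIONS',
    user_name      => 'KPARTNER',
    max_read_label => 'CONF');
  sa_user_admin.set_user_labels (
    policy_name    => 'ACCESS_LOCATIONS',
    user_name      => 'LDORAN',
    max_read_label => 'PUB');
END;
/
```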
2. HR, the owner of the LOCATIONS table, needs full access to the table, since the
user will later add the data labels into the hidden column named OLS_COLUMN
defined earlier. From a SQL*Plus session, execute the script
ols_set_user_privs.sql:
1. The sec_admin user applies the policy to the table. From now on, since
READ_CONTROL has been set in the policy definition and no labels have been
added to the rows, no one can read the data (except HR). Execute the script
ols_apply_policy.sql:
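Applying a policy to a table is done with SA_POLICY_ADMIN.APPLY_TABLE_POLICY; a sketch (the policy name is illustrative, and READ_CONTROL matches the enforcement option described above):

```sql
-- Executed as sec_admin. READ_CONTROL enforces the policy on queries;
-- the hidden label column (OLS_COLUMN) was named when the policy was created.
BEGIN
  sa_policy_admin.apply_table_policy (
    policy_name   => 'ACCESS_LOCATIONS',
    schema_name   => 'HR',
    table_name    => 'LOCATIONS',
    table_options => 'READ_CONTROL');
END;
/
```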
Before you can test the policy, you must add the labels to the data by performing the
following:
1. HR, the owner of the LOCATIONS table, adds the label for each row to the
hidden column OLS_COLUMN. In this case, you will assign the Sensitive label
to the cities Beijing, Tokyo, and Singapore; the Confidential label to the cities
Munich, Oxford, and Roma; and the Public label to all other cities. Run the
script ols_add_label_column.sql.
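Row labels are commonly assigned with UPDATE statements that convert a label string to its numeric tag via CHAR_TO_LABEL; a sketch (policy name and label short names are illustrative):

```sql
-- Executed as HR, which has full access to its own table.
UPDATE hr.locations
   SET ols_column = char_to_label('ACCESS_LOCATIONS', 'SENS')
 WHERE city IN ('Beijing', 'Tokyo', 'Singapore');

UPDATE hr.locations
   SET ols_column = char_to_label('ACCESS_LOCATIONS', 'CONF')
 WHERE city IN ('Munich', 'Oxford', 'Roma');

-- All remaining rows get the Public label.
UPDATE hr.locations
   SET ols_column = char_to_label('ACCESS_LOCATIONS', 'PUB')
 WHERE ols_column IS NULL;

COMMIT;
```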
VII. Revoking Access from Admin Users
After applying the policy to the table, granting user clearances, and adding labels
to the data, you can now test the policy by performing the following:
2. Now you can test the policy for the KPARTNER user by executing the
script ols_test_policy_kpartner.sql. Note that KPARTNER
can see PUBLIC and CONFIDENTIAL data.
3. Now you can test the PRIVACY policy by executing the script
ols_test_policy_ldoran.sql. Note that LDORAN can only see
PUBLIC data.
IX. Cleanup
Now that you have tested your policies, drop the users and the
access policies by performing the following:
Lab Summary
• Create a Policy
• Set User Authorizations
• Apply a Policy to a Table
• Add Labels to the Data
• Test the Policy
APPENDIX C – Expanding Logical Volumes In VM Image
*** This example shows how to expand a virtual disk from 20 GB to 40 GB, for an
image running OEL4.
Since I have a lot of images with the same name, I needed to make sure that the
correct file is selected. This is why I opened a CMD window in the correct path.
Next, delete partition 2 (since we are going to add it back resized).
Notice how the partition map now shows only the boot partition.
The warning above is okay; we just have to reboot the VMware image at this point.
After the reboot, log in as root and run:
ext2online -v /
(etc.)
Finally, check the current space via df -h. We now see the expanded size of the
volume.
APPENDIX D – Glossary Of Terms
Role – A database object that includes privileges and other roles to be granted to
users for performing specific operations in a database.
APPENDIX E – Releasing Flash Recovery Area Disk Space In VM Image
Do not simply remove archive logs from the flash recovery area, since RMAN might
expect to need them for recovery and could give you errors on startup and shutdown. Run
the commands below, followed by removing the physical archive logs in the flash
recovery area.
Also, be sure you are running the correct rman command. As shown below, it is best to
run rman with its full path. Otherwise, you might end up running the Linux OS command
instead!
oracle:/home/oracle> $OH/bin/rman
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_8_659041283.dbf
RECID=11 STAMP=659043428
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_9_659041283.dbf
RECID=12 STAMP=659044033
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_10_659041283.dbf
RECID=13 STAMP=659044179
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_11_659041283.dbf
RECID=14 STAMP=659044455
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_12_659041283.dbf
RECID=15 STAMP=659044809
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_13_659041283.dbf
RECID=16 STAMP=659044818
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_14_659041283.dbf
RECID=17 STAMP=659150236
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_15_659041283.dbf
RECID=18 STAMP=659150236
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_04/o1_mf_1_15_46vmmws0
_.arc RECID=19 STAMP=659150236
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_16_659041283.dbf
RECID=20 STAMP=659150248
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_17_659041283.dbf
RECID=21 STAMP=659151067
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_18_659041283.dbf
RECID=22 STAMP=659151101
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_19_659041283.dbf
RECID=23 STAMP=659250925
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_20_659041283.dbf
RECID=24 STAMP=659251072
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_21_659041283.dbf
RECID=25 STAMP=659252955
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_22_659041283.dbf
RECID=26 STAMP=659899619
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_23_659041283.dbf
RECID=27 STAMP=659899766
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_24_659041283.dbf
RECID=28 STAMP=659900370
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_12/o1_mf_1_24_47lj5klk
_.arc RECID=29 STAMP=659900370
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_25_659041283.dbf
RECID=30 STAMP=659900525
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_12/o1_mf_1_25_47ljbdh0
_.arc RECID=31 STAMP=659900525
validation succeeded for archived log
archived log file name=/oracle/product/11.0/db_1/dbs/arch1_26_659041283.dbf
RECID=32 STAMP=659900795
validation succeeded for archived log
archived log file
name=/oracle/flash_recovery_area/DB01/archivelog/2008_07_12/o1_mf_1_26_47ljlt8m
_.arc RECID=33 STAMP=659900795
Crosschecked 32 objects
RMAN> exit
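The transcript above shows the output of a crosscheck. A typical cleanup sequence inside RMAN might look like the following (a sketch; after the physical files have been removed, CROSSCHECK marks the records expired, and DELETE permanently removes the expired records, so confirm your retention policy first):

```
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
```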
APPENDIX F – Configuring Audit Management Features In Release 10.2.0.3
For Oracle 10.2.0.3, we show how to configure several new features that allow you to
move the source database’s native audit trails out of the SYSTEM tablespace, and to
configure a job that automatically purges old audit records after they have been moved
to Audit Vault’s repository. Patch 6989148, which provides these capabilities, can
currently be installed only on 10.2.0.3 sources. There is no patch providing these
capabilities for 11.1.0.6 sources; Release 11.1.0.7 does contain this functionality,
however.
The steps shown below are for a 10.2.0.3 Oracle Home running one database (db03).
oracle:/home/oracle> cd patches
oracle:/home/oracle/patches> ls
p6989148_10203_LINUX.zip README.txt
oracle:/home/oracle/patches> unzip p6989148_10203_LINUX.zip
Archive: p6989148_10203_LINUX.zip
creating: 6989148/
creating: 6989148/files/
creating: 6989148/files/lib/
creating: 6989148/files/lib/libserver10.a/
inflating: 6989148/files/lib/libserver10.a/aud.o
inflating: 6989148/files/lib/libserver10.a/kza.o
inflating: 6989148/files/lib/libserver10.a/kzam.o
.
.
.
oracle:/home/oracle/patches>
2. As stated in the file README.txt, we must shut down all instances for the
10.2.0.3 Home, since opatch checks whether any binaries in the Home being
patched are active. Set alias 10g to point to the correct environment settings.
Shut down EM Database Control and the instance for database DB03. Ensure
that the DB03 instance is shut down cleanly.
oracle:/oracle/product/10.2> 10g
ORA_AGENT_HOME=/oracle/product/grid/agent10g
ORACLE_SID=db03
ORACLE_BASE=/oracle
ORACLE_HOME=/oracle/product/10.2/db_1
OH=/oracle/product/10.2/db_1
oracle:/oracle/product/10.2> emctl stop dbconsole
TZ set to US/Mountain
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
http://lysy-kestner.oracle.com:5502/em/console/aboutApplication
Stopping Oracle Enterprise Manager 10g Database Control ...
... Stopped.
oracle:/oracle/product/10.2> sqlplus / as sysdba
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
OPatch detected non-cluster Oracle Home from the inventory and will patch
the local system only.
Patching component oracle.rdbms, 10.2.0.3.0...
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/aud.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kza.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kzam.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kzax.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kzft.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/psdsys.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/szaud.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/zlle.o"
Updating archive file "/oracle/product/10.2/db_1/lib/libserver10.a" with
"lib/libserver10.a/kspare.o"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/catamgt.sql"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/catlbacs.sql"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/catnools.sql"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/dbmsamgt.sql"
Copying file to "/oracle/product/10.2/db_1/rdbms/admin/prvtamgt.plb"
ApplySession adding interim patch '6989148' to inventory
OPatch succeeded.
4. Run “opatch lsinventory” to verify that the patch has been registered with the
Oracle Inventory file.
oracle:/home/oracle/patches/6989148> $ORACLE_HOME/OPatch/opatch lsinventory
Invoking OPatch 10.2.0.3.0
----------------------------------------------------------------------------
----
Installed Top-level Products (3):
----------------------------------------------------------------------------
----
OPatch succeeded.
SQL> startup
ORACLE instance started.
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
oracle:/home/oracle/patches/6989148> cd $OH/rdbms/admin
oracle:/oracle/product/10.2/db_1/rdbms/admin> sqlplus / as sysdba
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
SQL> @@catamgt.sql
Table created.
Comment created.
Comment created.
.
.
.
SQL> @@dbmsamgt.sql
Package created.
Grant succeeded.
SQL> @@prvtamgt.plb
Library created.
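Once the packages are installed, the audit management features can be configured. A sketch of the typical calls follows; the tablespace name and cleanup interval are illustrative, and the constants and procedure names should be checked against the patch README for this backport:

```sql
-- Move the standard audit trail (SYS.AUD$) out of SYSTEM into its own
-- tablespace, then initialize cleanup so records can be purged after
-- Audit Vault has collected them.
BEGIN
  dbms_audit_mgmt.set_audit_trail_location (
    audit_trail_type           => dbms_audit_mgmt.audit_trail_aud_std,
    audit_trail_location_value => 'AUDIT_TBS');

  dbms_audit_mgmt.init_cleanup (
    audit_trail_type         => dbms_audit_mgmt.audit_trail_aud_std,
    default_cleanup_interval => 24);  -- hours
END;
/
```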
APPENDIX G – How To Convert FAT Disk Partition To NTFS
This article describes how to convert FAT disks to NTFS. See the Terms sidebar for definitions of FAT,
FAT32 and NTFS. Before you decide which file system to use, you should understand the benefits and
limitations of each of them.
Changing a volume's existing file system can be time-consuming, so choose the file system that best
suits your long-term needs. If you decide to use a different file system, you must back up your data and
then reformat the volume using the new file system. However, you can convert a FAT or FAT32 volume
to an NTFS volume without formatting the volume, though it is still a good idea to back up your data
before you convert.
Note: Some older programs may not run on an NTFS volume, so you should research the current
requirements for your software before converting.
Setup begins by checking the existing file system. If it is NTFS, conversion is not necessary. If it is FAT or
FAT32, Setup gives you the choice of converting to NTFS. If you don't need to keep your files intact and
you have a FAT or FAT32 partition, it is recommended that you format the partition with NTFS rather than
converting from FAT or FAT32. (Formatting a partition erases all data on the partition and allows you to
start fresh with a clean drive.) However, it is still advantageous to use NTFS, regardless of whether the
partition was formatted with NTFS or converted.
The Setup program makes it easy to convert partitions to NTFS, whether your partitions used FAT,
FAT32, or the older version of NTFS. This kind of conversion keeps your files intact (unlike formatting
a partition).
To find out more information about Convert.exe
1. After completing Setup, click Start, click Run, type cmd, and then press ENTER.
2. In the command window, type help convert and then press ENTER. Information about converting
FAT volumes to NTFS is made available as shown below.
To convert a FAT volume to NTFS from the command line:
1. Open Command Prompt: click Start, point to All Programs, point to Accessories, and then
click Command Prompt.
Important: Once you convert a drive or partition to NTFS, you cannot simply convert it back to FAT or
FAT32. You will need to reformat the drive or partition, which will erase all data, including programs
and personal files, on the partition.
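For reference, the conversion itself is a single command (shown here for drive D: purely as an example; if the volume is in use, convert.exe will offer to schedule the conversion at the next reboot):

```
C:\> convert D: /FS:NTFS
```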
Best Practices
Nov 2007
Introduction
Installing Audit Vault
  Deployment Plan
    Audit Vault Server
    Audit Vault Collection Agent
    Which Collector(s) Should I Deploy?
    Recommended Collector and Database Audit Configuration
  Near Real-Time Alerts
  Near Real-Time Reporting
    Recommendations on ETL process
Oracle Database Auditing
  Audit Trail Contents and Locations
    Recommended Database Audit Configuration
  Audit Settings – Secure Configuration
    Recommended Database Audit Settings
    Database Auditing Performance
    Auditing and the Audit Vault Collectors
Managing Audit Data on the Source
    Removing Audit Data from the Database
    Recommended Database Audit Cleanup Periods
    Removing Audit Data from the Operating System
Oracle Audit Vault Maintenance
    Audit Vault Server Log Files
    Audit Vault Collection Agent Log Files
    Oracle Audit Vault Disaster Recovery
    Recommended Recovery Configuration
Appendix A. Audit Trail Maintenance Scripts
Appendix B. Database Source Audit Settings
Please note that this document will be updated on a regular basis to contain the latest
information based on development and customer feedback. These best practices will be
included in future releases of the Oracle Audit Vault documentation.
Deployment Plan
While Audit Vault provides consolidation and secure storage of audit data, planning the
installation of the Audit Vault components will ensure a faster installation and overall
success of implementing a compliant solution. The following sections discuss the pre-
installation considerations for the Audit Vault Server and Audit Vault Collection Agents.
• Higher Availability – When the Audit Vault Server is on a separate server from
the source databases then the availability will not be dependent on the source
host’s up/down status and therefore the audit data continues to be collected from
all sources that are running.
• Secured Audit Trail – By extracting the audit trail records off of the source
database as fast as possible, there is very little opportunity for privileged database
and operating system users to modify any audit records.
When it comes to determining what resources are required to install and maintain the
Audit Vault Server, it depends on how fast you need the audit records to be inserted
into Audit Vault and how long you must retain audit data.
For scalability and availability, the Audit Vault Server may optionally be deployed with
Real Application Clusters (RAC), and with Data Guard for disaster recovery.
Check the Audit Vault Server Installation Guide for the platform you will be installing
on for a list of that operating system's requirements.
The Audit Vault Collection Agent may be installed on the same host as the database
being audited, on the Audit Vault Server host, or on a host separate from both.
Let's look at each of these scenarios to determine the best location within your
environment for the Audit Vault Collection Agent.
• Separate from audited host and Audit Vault Server – If the database audit trail
destination is the database tables (SYS.AUD$/SYS.FGA_LOG$), then the Audit
Vault Collection Agent may be installed on a host different from both the audited
database and the Audit Vault Server.
(Recovered from Table 1: the audit information captured can include DML, DDL,
before and after values, and success and failure.)
The three collector types are called DBAUD, OSAUD, and REDO. Each collector type
retrieves audit records from different locations in the source Oracle database as shown
below in Table 2.
Table 2 (excerpt) – REDO collector. Audit source: the redo log. Redo logs are part of
the Oracle Database infrastructure and do not require any source database settings;
the Audit Vault policy capture rule determines the metadata pulled from the redo log.
Typical use: tracking before and after changes to sensitive data columns, such as
salary.
Depending on the type of audit information you generate and need to retain, you may
deploy one, two, or all three of the collectors for each source database.
Oracle Audit Vault can generate alerts on specific system or user defined events, acting
as an early warning system against insider threats and helping detect changes to baseline
configurations or activity that could potentially violate compliance. Oracle Audit Vault
continuously monitors the audit data collected, evaluating the activities against defined
alert conditions.
Alerts are generated when data in a single audit record matches a custom defined alert
rule condition. For example, a rule condition may be defined to raise alerts whenever a
privileged user attempts to grant someone access to sensitive data.
In Oracle's in-house testing of the Audit Vault Server, it was possible to achieve a
throughput of 17,000 audit trail record insertions per second using a 2x 3 GHz Intel
Xeon system with 6 GB of memory running Red Hat 3.2 Linux x86. To achieve near
real-time alerting capability, the host should be sized to meet your business
requirements.
Audit Vault provides statistics of the ETL process to update the warehouse as shown
below in Figure 3. By utilizing this information, you can estimate how often the job may
be run to update the data warehouse infrastructure. The data warehouse infrastructure is
documented in the Oracle Audit Vault Auditor’s Guide.
Oracle Audit Vault is built on a flexible data warehouse infrastructure that provides
the ability to consolidate audit data so that it can be easily secured, managed,
accessed, and analyzed. In addition to the out-of-the-box reports provided by Oracle
Audit Vault, it offers an open audit warehouse schema that can be accessed from
Oracle BI Publisher, Oracle Application Express, or any third-party reporting tool for
customized security and compliance reporting.
Audit records include information about the operation that was audited, the user
performing the operation, and the date and time of the operation. Audit records can be
stored in the database audit trail or in files on the operating system. There are two types
of general auditing: standard and fine-grained. Standard auditing covers operations on
privileges, schemas, objects, and statements. Fine-grained auditing is policy based; in
Oracle9i it is enforced on SELECT operations, and Oracle Database 10g extended
policy-based auditing to INSERT, UPDATE, and DELETE operations.
Audit Vault extracts audit data from either the database tables or the operating system
files. To enable database auditing, the initialization parameter AUDIT_TRAIL should
be set to one of these values:
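For example, on an Oracle Database 10g source, database auditing can be enabled with a statement along these lines (DB_EXTENDED additionally records the SQL text and bind values; choose the value that matches your release and requirements):

```sql
-- AUDIT_TRAIL is a static parameter, so the instance must be restarted
-- for the change to take effect.
-- Common values: NONE, OS, DB, DB_EXTENDED (10g), XML.
ALTER SYSTEM SET audit_trail = DB_EXTENDED SCOPE = SPFILE;
```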
When you issue an AUDIT command, an additional clause, BY ACCESS or BY
SESSION, can be specified. BY ACCESS tells Oracle to create an audit record every
time the audited operation occurs; BY SESSION, in contrast, creates only one audit
record for the first occurrence of the operation in the current session. If you need to
know each time an operation is executed, use BY ACCESS.
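The two clauses can be compared on the same object (the SH.SALES table is used here purely for illustration):

```sql
-- One audit record for every execution of the statement:
AUDIT SELECT ON sh.sales BY ACCESS;

-- At most one audit record per session for this operation:
AUDIT SELECT ON sh.sales BY SESSION;
```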
TIP: Do not audit the SYS.AUD$ or SYS.FGA_LOG$ tables. This will cause a
recursive condition.
Oracle also has the ability to create audit policies based on a condition, a feature
called fine-grained auditing. By utilizing fine-grained auditing, you can monitor data
access based on content or condition. Conditions can include limiting the audit to
specific types of DML statements used in connection with the columns that you
specify. Optionally, a named routine can be called when an audit event occurs to
handle errors and anomalies. An example of a fine-grained audit policy that creates an
audit trail record if a SELECT on the SH.SALES table is executed by anyone other
than the user APPS is shown in Figure 2. Based on your business requirements,
fine-grained auditing can be tailored to meet your auditing needs. For more
information on database auditing, please see the Oracle Database Security Guide.
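A policy along the lines of that scenario can be defined with the DBMS_FGA package; the policy name below is illustrative:

```sql
-- Audit SELECT statements on SH.SALES issued by any user other than APPS.
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'SH',
    object_name     => 'SALES',
    policy_name     => 'SALES_SELECT_AUDIT',  -- illustrative name
    audit_condition => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') <> ''APPS''',
    statement_types => 'SELECT',
    enable          => TRUE);
END;
/
```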
Writing audit trail records to the operating system has the lowest overhead.
The SYS.AUD$ and SYS.FGA_LOG$ tables are used when the audit data is written to
the database. These tables are located in the database SYS schema.
The Oracle Database also allows audit trail records to be directed to the operating system.
The target directory varies by platform, but on the UNIX platform, it is usually
$ORACLE_HOME/rdbms/audit. On Windows, the information is accessed through
Event Viewer.
Oracle Audit Vault provides the mechanisms to collect audit data generated by Oracle9i
Database Release 2, Oracle Database 10g Release 1, and Oracle Database 10g Release 2.
The database audit data can be collected from both the database and operating system
audit destinations. Transactional before/after values can be captured from the database
REDO transaction logs using the REDO collector for Oracle9i Release 2 and Oracle
Database 10g Release 2 databases.
Before deleting audit data from the source database, determine the last record inserted
into the Audit Vault Server. This can be done by using Audit Vault's Activity
Overview report.
Open the Activity Overview to view the date of the summary data. Remember, the Audit
Vault report data is displayed based on the last completed ETL warehouse job. For more
information on the warehouse job, please look at the Oracle Audit Vault Administration
Guide documentation.
The Activity Overview report returns data in descending order of time, so the first
record displayed is the most recent record inserted into the data warehouse.
Once you have established that data is being inserted into the Audit Vault Server in a
timely manner, you can use the scripts located in Appendix A to delete records from
SYS.AUD$ and SYS.FGA_LOG$ by running a database job.
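A minimal sketch of such a cleanup, assuming the cutoff timestamp has been confirmed against the Activity Overview report (the :last_loaded bind variable is a placeholder; the Appendix A scripts remain the reference implementation):

```sql
-- Remove standard and fine-grained audit records already loaded into
-- the Audit Vault Server. NTIMESTAMP# holds the record timestamp (UTC).
DELETE FROM sys.aud$     WHERE ntimestamp# < :last_loaded;
DELETE FROM sys.fga_log$ WHERE ntimestamp# < :last_loaded;
COMMIT;
```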
The operating system audit trail files are written by default on most UNIX systems to
$ORACLE_HOME/admin/$ORACLE_SID/adump. The files have an extension of
".aud". Optionally, the destination can be explicitly defined by the Oracle database
parameter AUDIT_FILE_DEST.
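The current destination can be checked from the data dictionary:

```sql
-- Show where the OS audit trail files are being written.
SELECT value FROM v$parameter WHERE name = 'audit_file_dest';
```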
On the Windows operating system, the audit trail record is written to the Windows
event log. Use the Windows Event Viewer functionality to control the size of the
event log file or to overwrite records that are older than a specified number of days.
Oracle recommends using the option to overwrite records based on age.
• av_client-%g.log.n – Contains information about collection metrics from the Audit
Vault Collection Agent. The %g is a generation number that starts from 0 (zero) and
increases once the file size reaches the 10 MB limit. Files with an extension of
.log.n (for example, av_client-0.log.1) may be deleted at any time.
Oracle Recovery Manager (RMAN) and a flash recovery area minimize the need to
manually manage disk space for your backup-related files and balance the use of
space among the different types of files. The basic Oracle Audit Vault installation
places the flash recovery area on the same disk as the Audit Vault Oracle home and
sets the default size to 2 GB. The advanced installation method allows you to define
the location and size of the flash recovery area and the RMAN backup job.
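If the default 2 GB proves too small, the flash recovery area can be resized and relocated after installation; the path below is purely an example:

```sql
-- Grow the flash recovery area and move it to dedicated storage
-- (directory path is illustrative; use a location sized for your backups).
ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE = BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/flash_recovery_area' SCOPE = BOTH;
```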
output_file := utl_file.fopen
('OS_AUD_CLEANUP_DIR','session_list.txt', 'W');
open c1;
loop
fetch c1 into sessid;
exit when c1%notfound;
utl_file.put_line(output_file, sessid);
end loop;
close c1;
utl_file.fclose(output_file);
Declare
ver varchar2(100);
argv3 varchar2(1000);
argv2 varchar2(1000);
aud_dest varchar2(1000);
Begin
select version into ver from v$instance;
select value into aud_dest from v$parameter where name =
'audit_file_dest';
-- argv3 holds the full path of the os_aud_cleanup.pl script; the original
-- assignment was not preserved, so '&3' is assumed here.
argv3 := '&3';
if ver not like '9.%' then -- DBMS_SCHEDULER requires Oracle Database 10g or later
execute immediate
'BEGIN DBMS_SCHEDULER.CREATE_JOB (JOB_NAME => ''OS_CLEANUP_PERL'',
JOB_TYPE => ''executable'',
JOB_ACTION => ''' || argv3 || ''',
NUMBER_OF_ARGUMENTS => 3,
ENABLED => FALSE); END;';
execute immediate
'BEGIN DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
job_name => ''OS_CLEANUP_PERL'',
argument_position => 1,
argument_value => ''' || aud_dest || '''); END;';
-- The value for argument 2 was not preserved in the source; ''&1'' is assumed.
execute immediate
'BEGIN DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
job_name => ''OS_CLEANUP_PERL'',
argument_position => 2,
argument_value => ''&1''); END;';
execute immediate
'BEGIN DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
job_name => ''OS_CLEANUP_PERL'',
argument_position => 3,
argument_value => ''&2''); END;';
execute immediate
'BEGIN DBMS_SCHEDULER.CREATE_JOB (JOB_NAME => ''AUDIT_OS_CLEANUP'',
JOB_TYPE => ''STORED_PROCEDURE'',
JOB_ACTION => ''sys.source_os_audit_cleanup'',
REPEAT_INTERVAL => ''FREQ=DAILY;INTERVAL=1'',
ENABLED => TRUE,
COMMENTS => ''Cleanup Job Run Daily''); END;';
end if;
End;
/
/
declare
ver varchar2(100);
begin
select version into ver from v$instance;
end;
/
exit
#!/usr/local/bin/perl
#
# $Header: os_aud_cleanup.pl 05-apr-2007.01:47:12 srirasub Exp $
#
# os_aud_cleanup.pl
#
# Copyright (c) 2007, Oracle. All rights reserved.
#
# NAME
# os_aud_cleanup.pl - OS AUDit trail CLEANUP
#
# DESCRIPTION
# Perl Script to clean audit trails.
#
# NOTES
# <other useful comments, qualifications, etc.>
#
# MODIFIED (MM/DD/YY)
# srirasub 04/04/07 - Creation
#
%mon2num = qw(
jan 1 feb 2 mar 3 apr 4 may 5 jun 6
jul 7 aug 8 sep 9 oct 10 nov 11 dec 12
);
$aud_dir =~ s/\?/$oh/g;
if(!opendir(DIR, $aud_dir))
{
exit -1;
}
}
@files = grep(/ora_.*aud$/,readdir(DIR));
closedir(DIR);
$tstamp1 = localtime;
#days parameter
$day_upper_limit = $ARGV[2];
$prev = $line;
}
#since this file doesn't have any active sessions, it can be deleted
if($flag == 1)
{
$flag2 = 1;
foreach $line (@lines)
{
$reg = 'SESSIONID: "';
if($line =~ $reg)
{
chop($prev);
$days_diff = day_diff($prev, $tstamp1 );
if($flag2 == 1)
{
#delete the file
unlink("$file_name");
}
}
}
$diff = $mon2-$mon1;
$diff;
}
-- For DBMS_JOB,
-- ALTER SYSTEM SET job_queue_processes=1;
-- this may be required to run the jobs automatically
-- the value is set to 0 in most systems
-- any number in the range [1,1000] is valid.
Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.
Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com