
Using LogMiner to analyze redo logs for human errors/auditing:

To use LogMiner, the database must be in ARCHIVELOG mode and minimal supplemental
logging must be enabled.
Enable supplemental logging as follows:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
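Both prerequisites can be confirmed by querying V$DATABASE; LOG_MODE should report ARCHIVELOG, and SUPPLEMENTAL_LOG_DATA_MIN should report YES (or IMPLICIT):
SQL> SELECT LOG_MODE, SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;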

Steps in using LogMiner:


1. a. Extract the dictionary to a flat file.
Set the UTL_FILE_DIR initialization parameter to the directory where the dictionary will be
written. If the database is started with a PFILE, set this parameter there and restart the
database for it to take effect.
UTL_FILE_DIR=/usr/tmp, /u01/oracle/visdb/9.2.0/dbs/arch
Here /usr/tmp is the directory the dictionary is extracted to, and
/u01/oracle/visdb/9.2.0/dbs/arch is the directory to which redo logs are archived.
Execute the procedure below to extract the dictionary to a flat file; ensure no DDL runs while
it is executing.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/usr/tmp', OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

b. Extract the dictionary to the redo logs.

The database must be open and in ARCHIVELOG mode, and no DDL should run while the
extraction is in progress. The extraction affects performance, but when run during off-peak
hours it completes faster than extracting to a flat file.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
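To find which archived logs received the extracted dictionary (you will need them when adding logs in step 2), V$ARCHIVED_LOG exposes DICTIONARY_BEGIN and DICTIONARY_END flags; verify the columns on your release:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES' OR DICTIONARY_END = 'YES';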

c. Use the online catalog.

The only advantage is that no dictionary is extracted, which saves time: LogMiner uses the
dictionary currently in use by the database. The price paid for that saving is that you can only
analyze redo logs of the database on which LogMiner is running.

2. Mine the archived logs one by one. If we are using the same instance that is generating
the logs, we have to add the first log before starting LogMiner; continuous mining can then
be enabled when the session is started.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/oracle/visdb/9.2.0/dbs/arch/1_166.dbf', OPTIONS => DBMS_LOGMNR.NEW);
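The archived log file names to pass to ADD_LOGFILE can be taken from V$ARCHIVED_LOG; for example, to list the logs archived over the last day (the one-day window is only an illustration):
SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG WHERE FIRST_TIME > SYSDATE - 1 ORDER BY FIRST_TIME;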

Add further logs as follows:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/oracle/visdb/9.2.0/dbs/arch/1_167.dbf', OPTIONS => DBMS_LOGMNR.ADDFILE);

To remove a file from the list, use:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/oracle/visdb/9.2.0/dbs/arch/1_167.dbf', OPTIONS => DBMS_LOGMNR.REMOVEFILE);
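The files currently registered with the session, and the time range each one covers, can be checked in V$LOGMNR_LOGS (column names per the 9i/10g view; confirm on your version):
SQL> SELECT FILENAME, LOW_TIME, HIGH_TIME FROM V$LOGMNR_LOGS;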

For continuous mining, add ONLY the first log (run only the first command in this step).
3. Start the LogMiner session.
A LogMiner session can run either on the database that originally generated the logs or on
another database. (The online catalog as a dictionary does not support running on another
instance.)
a. Using the dictionary in a flat file:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => '/usr/tmp/dictionary.ora');

b. Using the dictionary in the redo logs:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);

c. Using the online catalog:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

4. To set additional options for the LogMiner session, combine option flags with +. The
command below considers only committed transactions and also enables continuous mining:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
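START_LOGMNR also accepts STARTTIME and ENDTIME parameters to restrict the mining window, which keeps V$LOGMNR_CONTENTS queries fast when hunting for a known incident. The timestamps below are purely illustrative:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => TO_DATE('2004-01-01 09:00:00','YYYY-MM-DD HH24:MI:SS'), ENDTIME => TO_DATE('2004-01-01 10:00:00','YYYY-MM-DD HH24:MI:SS'), OPTIONS => DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.COMMITTED_DATA_ONLY);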

To skip corrupted portions of the redo logs, use:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.SKIP_CORRUPTION);

To track DDL, use the option below. This option is invalid when the online catalog is used as
the dictionary.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING);

5. Query V$LOGMNR_CONTENTS for the transactions of interest. This view can only be queried
in the session in which LogMiner was started. You have to build your own SELECT depending
on the transactions you are interested in. The example below returns information about the
UPDATE statements run against the TESTLOG.TESTLOG1 table.
SQL> SELECT OPERATION, USERNAME, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'TESTLOG' AND SEG_NAME = 'TESTLOG1' AND OPERATION = 'UPDATE';
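For human-error auditing, columns such as TIMESTAMP, SESSION_INFO, and SQL_UNDO help establish who made a change, from where, and how to reverse it. A sketch against the same example table, this time looking for accidental DELETEs:
SQL> SELECT TIMESTAMP, USERNAME, SESSION_INFO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'TESTLOG' AND SEG_NAME = 'TESTLOG1' AND OPERATION = 'DELETE';
The SQL_UNDO statements returned can be executed to reinstate the deleted rows.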

6. End the log mining session once you have the details of the transactions you are interested in.
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;
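Putting the steps together, a minimal end-to-end session on the generating instance, using the online catalog as the dictionary and the example log path from step 2 (paths and owner are the illustrative ones used above):
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/oracle/visdb/9.2.0/dbs/arch/1_166.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.CONTINUOUS_MINE);
SQL> SELECT OPERATION, USERNAME, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'TESTLOG';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;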
