
Oracle 10g Availability Enhancements, Part 1: Backup and Recovery Improvements

By Jim Czuprynski
Synopsis. Oracle 10g offers significant enhancements that help ensure the high availability
of any Oracle database, especially in the arena of disaster recovery. This article - the first in
a series - concentrates on several new features available for backup, restoration, and
recovery of Oracle databases, especially when using Oracle Recovery Manager (RMAN).
If you have read my earlier article about setting up disaster recovery for Oracle databases,
you already know that I sincerely enjoy experimenting with the myriad robust features of
Oracle Recovery Manager (RMAN). I am willing to bet that any seasoned Oracle DBA sighs
knowingly and thankfully when she thinks of a potentially disastrous loss of data that has
been averted by a well-planned backup and recovery strategy that incorporates RMAN.
Oracle 10g expands significantly the RMAN backup, restoration, and recovery features that I
have grown to appreciate. Flash Backup and Recovery appears to be the most exciting
improvement, and I will cover that in greater detail in the next article in this series, but for
now, this article will focus on numerous significant features that beg for illustration.
Backup Enhancements
Expanded Image Copying Features. A standard RMAN backup set contains one or more
backup pieces, and each of these pieces consists of the data blocks for a particular datafile
stored in a special compressed format. When a datafile needs to be restored, therefore, the
entire datafile essentially needs to be recreated from the blocks present in the backup piece.
An image copy of a datafile, on the other hand, is much faster to restore because the
physical structure of the datafile already exists. Oracle 10g now permits image copies to be
created at the database, tablespace, or datafile level through the new RMAN directive
BACKUP AS COPY. For example, here is a command script to create image copies for all
datafiles in the entire database:
RUN {
  # Set the default channel configuration. Note the use of the
  # %U directive to ensure unique file names for the image copies
  ALLOCATE CHANNEL dbkp1 DEVICE TYPE DISK FORMAT 'c:\oracle\rmanbkup\%U';
  # Create an image copy of all datafiles in the database
  BACKUP AS COPY DATABASE;
}
See Listing 1.1 for additional RMAN script examples of this new feature.

Incrementally Updated Backups. As I explained in the previous section, it is now much
simpler to create image copy backups of the database. Another new Oracle 10g feature,
incrementally updated backups, allows me to apply incremental database changes to the
corresponding image copy backup - also known as rolling forward the datafile image copy -
of any datafile in the database. Since image copy backups are much faster to restore in a
media recovery situation, this new feature gives me the option to have updated image
copies ready for restoration without having to recreate the image copies on a regular basis.
To utilize this feature, I will need to use the new BACKUP ... FOR RECOVER OF COPY
command to create the incremental level 1 backups to roll forward the changes to the
image copy of the datafiles, and use the new RMAN RECOVER COPY OF DATABASE
command to apply the incremental backup to the image copies of the datafiles. Note that
the TAG directive becomes extremely important to this implementation, as it is used to
identify to which image copies the changes are to be rolled forward.
Here is a script that illustrates a daily cycle of creation and application of the incrementally
updated backups. This would be appropriate for a database that has sufficient disk space for
storage of image copies, and has a relatively high need for quick restoration of media:
RUN {
# Roll forward any available changes to image copy files
# from the previous set of incremental Level 1 backups
RECOVER
COPY OF DATABASE
WITH TAG 'img_cpy_upd';
# Create incremental level 1 backup of all datafiles in the database
# for roll-forward application against image copies
BACKUP
INCREMENTAL LEVEL 1
FOR RECOVER OF COPY WITH TAG 'img_cpy_upd'
DATABASE;
}
Though this appears a bit counter-intuitive at first, here is an explanation of what happens
during the initial run of this script:

- The RECOVER command actually has no effect, because it cannot find any
  incremental backups with a tag of img_cpy_upd.
- However, the BACKUP command will create a new incremental Level 0 backup that is
  labeled with the tag img_cpy_upd, because no backups have been created yet with
  this tag.

And during the second run of this script:

- The RECOVER command still will have no effect, because it cannot find any Level 1
  incremental backups with a tag of img_cpy_upd.
- The BACKUP command will create its first incremental Level 1 backup, labeled
  with the tag img_cpy_upd.

But during the third and subsequent runs of this script:

- The RECOVER command finds the incremental Level 1 backups from the previous
  night's run tagged as img_cpy_upd, and applies them to the existing datafile
  image copies.
- The BACKUP command will create the next incremental Level 1 backup, labeled
  with the tag img_cpy_upd.

After the third run of this script, RMAN would then choose the following files during a media
recovery scenario: the image copy of the database for tag img_cpy_upd from the previous
night, the most recent incremental level 1 backup, and all archived redo logs since the
image copy was taken. This strategy offers a potentially quick and flexible recovery, since
the datafile image copies will be relatively quick to restore, and the incremental level 1
backup plus all archived redo logs can be used to perform either a point-in-time or a
complete recovery.
See Listing 1.2 for an example of how this new feature could be implemented for a weekly
backup strategy.
Improved Incremental Backup Performance With Change Tracking. Another new
Oracle 10g optional feature, change tracking, promises to improve the performance of
incremental backup creation significantly. Prior to 10g, when an incremental backup was
taken, every block in each datafile being backed up had to be scanned to determine
whether it had changed since the last incremental backup and therefore needed to be
included in the new incremental backup.
With the new change tracking feature enabled, however, only the first Level 0
incremental backup needs to scan the datafiles completely; as the database runs, the IDs
of any changed blocks are written to a change tracking file. All subsequent incremental
backups query the change tracking file to determine which changed blocks need to be
backed up. Oracle automatically stores enough incremental backup metadata to ensure
that any of the eight most recent incremental backups can be used as the "parent" of a
new incremental backup.
Each Oracle database has only one change tracking file, and if the database has been
configured for Oracle Managed Files (OMF) it will be automatically created based on the
specification for DB_CREATE_FILE_DEST. However, if OMF is not enabled for the
database, the location of the change tracking file can be specified manually. The initial size
of the change tracking file is 10MB, and it grows in 10MB increments, but Oracle notes that
the 10MB initial extent should be sufficient to store change tracking information for any
database up to one terabyte in size.
If the location needs to be changed, change tracking can be disabled and a new change
tracking file created, but this causes the database to lose all change tracking
information. Unfortunately, the change tracking file cannot be moved without shutting
down the database, renaming the file with the appropriate ALTER DATABASE RENAME FILE
<filename> command, and then restarting the database.
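Based on that restriction, the move sequence would look something like this from SQL*Plus (the destination path shown here is purely illustrative; the source path matches the one used in Listing 1.3):

SHUTDOWN IMMEDIATE;
-- Move the physical file to its new location at the operating system level, then:
STARTUP MOUNT;
ALTER DATABASE RENAME FILE 'c:\oracle\rmanbkup\ocft\cft.f'
  TO 'd:\oracle\rmanbkup\ocft\cft.f';
ALTER DATABASE OPEN;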
Oracle does recommend that this feature be activated for any database whose disaster
recovery plan utilizes incremental backups of differing levels. Oracle also notes that there is
a small performance hit during normal operations, but that cost should be weighed against
the time saved by avoiding full datafile scans during incremental backup operations.

See Listing 1.3 for more extensive RMAN script examples of this new feature.
Improved Backup Resource Management. Oracle 9i added new features to help DBAs to
automatically manage backup file retention with the RETENTION POLICY directives of the
CONFIGURE command set. Oracle 10g has improved RMAN resource management even
further with the DURATION directive of the BACKUP command: It is now possible to tell
RMAN exactly how much system resources should be allocated to accomplish a backup task
so that it completes within a specified time frame.

For example, my client's primary production database backup is scheduled to begin at
00:15 every day, and needs to complete before batch processing commences at 03:00. In
my daily backup RMAN script, I can specify that the backup must complete within 2.5
hours, and RMAN will begin backing up the specified database files:

BACKUP DURATION 2:30 DATABASE;


If the backup cannot complete within this time frame, the RMAN script being executed will
return an error and terminate the backup - not necessarily a desirable outcome! However, if
I specify the PARTIAL directive, RMAN will not return an error, but will back up as many
files as it can in that time frame, starting with the files that have gone the longest without
a backup (a feature of using DURATION):
BACKUP DURATION 2:30 PARTIAL DATABASE FILESPERSET 1;
In this case, any files that could not be backed up will be logged as errors from the RMAN
script, but all other backup files will be retained. Oracle does recommend setting
FILESPERSET to 1 when using DURATION ... PARTIAL to ensure that any files for which
backups succeeded are retained. I can also tune backup performance so that RMAN will try
to complete the backups as quickly as possible by specifying the MINIMIZE TIME directive:
BACKUP DURATION 2:00 PARTIAL MINIMIZE TIME DATABASE FILESPERSET 1;
If I specify the MINIMIZE LOAD directive, on the other hand, RMAN will instead "stretch out"
backup operations so that fewer resources are utilized during that time frame:
BACKUP DURATION 3:30 PARTIAL MINIMIZE LOAD DATABASE FILESPERSET 1;

Server Parameter File (SPFILE) AutoBackups. Oracle 9i added the ability to configure
automatic control file backups to occur whenever specific RMAN operations happened, or
when the DBA performed a significant modification of the database's logical or physical
structure that affected the control file (e.g. adding a new tablespace, or renaming a
datafile).
Oracle 10g expands this feature to include the auto-backup of the database's server
parameter file - the binary copy of the initialization parameter file - as well. Though I have
to admit that I am still a fan of the initialization parameter file - old habits do die hard,
dang it! - it is obvious that Oracle views the SPFILE as the future basis for controlling
database parameter configuration.
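Since SPFILE autobackups piggyback on the control file autobackup mechanism, enabling them is simply a matter of turning on control file autobackups from within RMAN; a minimal sketch (the FORMAT path is illustrative, and note that %F is required in an autobackup format):

# Enable automatic backup of the control file and SPFILE
CONFIGURE CONTROLFILE AUTOBACKUP ON;
# Optionally, direct autobackups to a specific disk location
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'c:\oracle\rmanbkup\cf_%F';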
Enhanced BEGIN BACKUP. Finally, here is a neat enhancement for user-managed
backups: the BEGIN BACKUP command, which previously had to place tablespaces into
backup mode one at a time, has been enhanced so that all of the database's tablespaces
can be placed into backup mode at once:
-- Place all datafiles in backup mode before starting user-managed backup
ALTER DATABASE BEGIN BACKUP;
-- Take all datafiles out of backup mode after completing user-managed backup
ALTER DATABASE END BACKUP;
Though our shop uses RMAN for all production database backups, this command certainly
has value for smaller but no less mission-critical databases like OEM or RMAN recovery
catalog repositories.
Automatic Channel Failover. For those of you who create RMAN backups directly on tape
via a Media Management Layer (MML), Oracle 10g adds a new feature: If multiple channels
have been allocated for the backup step, but any one channel fails during that step, RMAN
will automatically try to use one of the other available channels to continue processing the
backups. Though I have had limited experience with using MML in conjunction with RMAN,
this feature appears to increase the flexibility and stability of directly backing up to alternate
media.
Restoration and Recovery Enhancements
RESTORE Failover. Oracle 10g has also significantly improved the restoration process
during initial restoration and recovery efforts:

- If RMAN should detect a damaged or missing backup file, it will automatically
  attempt to locate another usable copy of the image copy or backup piece, either at
  the default location or at an alternate multiplexed location.
- If it cannot find a usable current copy, it then looks at prior backup pieces or image
  copies and attempts to restore from those files.
- If RMAN cannot locate any appropriate backup or image copy, only then will it issue
  an error and terminate the RMAN session.

RESTORE ... PREVIEW. If you have ever wondered exactly which backup files or image
copies RMAN will use to perform restoration, Oracle 10g now offers the RESTORE ...
PREVIEW command set to show exactly what backup pieces or image copies RMAN plans to
utilize.
For example, to explore exactly what RMAN will choose when restoring the database's
SYSTEM tablespace, I can issue the RESTORE DATAFILE 1 PREVIEW; command from within
an RMAN session.
See Listing 1.4 for the resulting output and additional examples of this command set.

Automatic Creation of Missing Datafiles. Consider this scenario: Your junior DBA has
just added a new tablespace to the production database, but she neglected to take a full
backup of the database immediately after adding the tablespace. Then, as luck would have
it, a media failure occurs on the same disk where the new tablespace's datafile resides.
Here's the good news: With Oracle 9i, it's definitely possible to recreate the datafile for the
new tablespace - as long as all the archived redo logs and online redo logs that were
generated since the creation of the new tablespace are available, that is. Once the datafile
has been taken offline, the ALTER DATABASE CREATE DATAFILE <datafile name>;
command is issued to recreate the datafile. Then the RECOVER DATAFILE <datafile
name>; command is issued to recover the datafile, and the datafile's tablespace can be
brought back online.
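Sketched out in SQL*Plus, the Oracle 9i sequence described above would look roughly like this (the datafile and tablespace names here are hypothetical):

-- Take the lost datafile offline, recreate it, then recover it
ALTER DATABASE DATAFILE 'c:\oracle\oradata\zdcdb\newtbs01.dbf' OFFLINE;
ALTER DATABASE CREATE DATAFILE 'c:\oracle\oradata\zdcdb\newtbs01.dbf';
RECOVER DATAFILE 'c:\oracle\oradata\zdcdb\newtbs01.dbf';
ALTER TABLESPACE newtbs ONLINE;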
Moreover, here is the better news: Oracle 10g is now smart enough to handle this situation
without DBA intervention. If, during recovery, the database encounters a redo log entry for
the creation of a datafile, it will automatically recreate the missing datafile and add it to
the database.
Other Enhancements
Improved Access To RMAN Metadata. Oracle 10g provides some new dynamic views
that offer a DBA the ability to see what's really happening during and after a set of RMAN
tasks have been completed, thus saving the effort of having to constantly monitor a
command window or log file to determine their status.
V$RMAN_OUTPUT will show the status of an ongoing RMAN job. For example, here is
some sample output from the full database image copy backup run shown earlier:
Results of currently active Recovery Manager sessions

Activity
----------------------------------------------------------------
connected to target database: ZDCDB (DBID=1863541959)
using target database controlfile instead of recovery catalog
allocated channel: dbkp1
channel dbkp1: sid=145 devtype=DISK
Starting backup at 20-NOV-04
channel dbkp1: starting datafile copy
input datafile fno=00001 name=C:\ORACLE\ORADATA\ZDCDB\SYSTEM01.DBF
output filename=C:\ORACLE\RMANBKUP\DATA_D-ZDCDB_I-1863541959_TS-SYSTEM_FNO-1_2OG5IGDQ tag=TAG20041120T114042 recid=2 stamp=54272
channel dbkp1: datafile copy complete, elapsed time: 00:01:19
channel dbkp1: starting datafile copy
input datafile fno=00013 name=C:\ORACLE\ORADATA\ZDCDB\SYSAUX01.DBF
... (Some detail removed for brevity) ...
channel dbkp1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 20-NOV-04
released channel: dbkp1

66 rows selected.
In addition, V$RMAN_STATUS lists the historical status of all RMAN jobs. Here is the
resulting output from my (not always successful!) experiments with image backups:
Results of most recent Recovery Manager sessions

                     Command/
TimeStamp            Session  Action   Status
-------------------- -------- -------- ------------------------
2004-11-20T11:40:33  COMMAND  BACKUP   COMPLETED
2004-11-20T11:40:33  SESSION  RMAN     COMPLETED
2004-11-20T11:39:50  SESSION  RMAN     COMPLETED WITH ERRORS
2004-11-20T11:39:50  COMMAND  BACKUP   FAILED
2004-11-20T11:33:49  COMMAND  BACKUP   FAILED
2004-11-20T11:33:49  SESSION  RMAN     COMPLETED WITH ERRORS

6 rows selected.
See Listing 1.5 for the queries used to create this output.
Improved Recovery Catalog Maintenance. Oracle 10g offers a new catalog maintenance
command, UNREGISTER DATABASE, to remove all information about an Oracle database
from an RMAN repository.
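For example, while connected to the recovery catalog from within RMAN, removing all repository metadata for my evaluation database would look something like this:

# Remove all RMAN repository information for database ZDCDB
UNREGISTER DATABASE zdcdb;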
Dropping a Database Completely. If you really must drop an entire database, the new
DROP DATABASE command will remove all of the specified database's physical files,
including control files, datafiles, online redo log members, and server parameter files (if any
exist). Note that the database must be mounted in exclusive, restricted mode for this
command to succeed.
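The sequence might be sketched as follows from SQL*Plus (note that DROP DATABASE is irreversible, so treat this strictly as an illustration):

SHUTDOWN IMMEDIATE;
-- A single-instance mount is exclusive by default; RESTRICT blocks other sessions
STARTUP MOUNT RESTRICT;
DROP DATABASE;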
Conclusion
Oracle 10g's new Recovery Manager features greatly expand the flexibility and reliability of
any Oracle DBA's tool kit for disaster recovery planning, backup strategies and failure
recovery scenarios. And I've just scratched the surface! As promised, the next article in this
series will focus on one of the most intriguing new availability features: Flash Backup and
Recovery.
References and Additional Reading
While there is no substitute for direct experience, reading the manual is not a bad idea,
either. I have drawn upon the following Oracle 10g documentation for the deeper technical
details of this article:
B10734-01 Oracle Database Backup and Recovery Advanced User's Guide
B10735-01 Oracle Database Backup and Recovery Basics
B10750-01 Oracle Database New Features Guide

B10770-01 Oracle Database Recovery Manager Reference

/*
|| Oracle 10g RMAN Listing 1
||
|| Contains examples of new Oracle 10g Recovery Manager (RMAN) features.
||
|| Author: Jim Czuprynski
||
|| Usage Notes:
|| This script is provided to demonstrate various new Oracle 10g
|| Recovery Manager (RMAN) features, and should be carefully proofread
|| before executing it against any existing Oracle database to ensure
|| that no potential damage can occur.
*/

-----
-- Listing 1.1: Image Copy Enhancements
-----
RUN {
  # Set the default channel configuration
  ALLOCATE CHANNEL dbkp1 DEVICE TYPE DISK
    FORMAT 'c:\oracle\rmanbkup\ic_%d_%s_%t_%p';
  # Back up specific datafiles and retain them as image copies
  BACKUP AS COPY (DATAFILE 2, 6, 9 MAXSETSIZE 25M);
  # Back up a specific tablespace and retain it as an image copy
  BACKUP AS COPY (TABLESPACE example MAXSETSIZE 15M);
  # Back up the whole database and retain it as an image copy
  BACKUP AS COPY DATABASE;
}
-----
-- Listing 1.2: Incrementally-Updated Backups: A Weekly Implementation
-----
RUN {
  ######
  # This script will create image copy backups to which incremental
  # changes can be applied on a weekly schedule
  ######
  # Roll forward any available changes to image copy files
  # from the previous set of incremental Level 1 backups. Note that
  # the roll-forward will not occur until 7 days have elapsed!
  RECOVER
    COPY OF DATABASE
    WITH TAG 'img_cpy_upd'
    UNTIL TIME 'SYSDATE-7';
  # Create incremental level 1 backup of all datafiles in the database
  # for roll-forward application against weekly image copies
  BACKUP
    INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'img_cpy_upd'
    DATABASE;
}
-----
-- Listing 1.3: Managing Block Change Tracking
-----
-- Activate block change tracking if Oracle Managed Files (OMF) is in place
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

-- Activate block change tracking when OMF is +not+ in place. Note that:
-- 1.) Initial file size is 10MB
-- 2.) File size grows in 10MB increments
-- 3.) Will be approximately 1/30000th the size of the total database
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE 'c:\oracle\rmanbkup\ocft\cft.f' REUSE;

-- Verify the block change tracking file's existence
SELECT *
  FROM v$block_change_tracking;

-- Shut down block change tracking to move the block change tracking file.
-- Note that this will cause the loss of all block change tracking information,
-- but is the only alternative to shutting down the database
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE 'c:\oracle\rmanbkup\ocft\cft.f' REUSE;
-----
-- Listing 1.4: What files will RMAN use during a RESTORE operation?
-- NOTE: These commands should be issued from within an active RMAN session
-----
# Spool output to a log file
SPOOL LOG TO c:\oracle\rmancmd\restoresummary.lst;
# Show what files will be used to restore the SYSTEM tablespace's datafile
RESTORE DATAFILE 1 PREVIEW;
# Show what files will be used to restore a specific tablespace
RESTORE TABLESPACE hr PREVIEW;
# Show a summary for a full database restore
RESTORE DATABASE PREVIEW SUMMARY;
# Close the log file
SPOOL LOG OFF;
-----
-- The resulting output:
-----
Spooling started in log file: c:\oracle\rmancmd\restoresummary.lst

Recovery Manager: Release 10.1.0.2.0 - Production

RMAN>
Starting restore at 21-NOV-04
using channel ORA_DISK_1

List of Datafile Copies
Key     File S Completion Time Ckp SCN    Ckp Time        Name
------- ---- - --------------- ---------- --------------- ----
14      1    A 21-NOV-04       2034765    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_100_1_542808680
Finished restore at 21-NOV-04

RMAN>
Starting restore at 21-NOV-04
using channel ORA_DISK_1

List of Datafile Copies
Key     File S Completion Time Ckp SCN    Ckp Time        Name
------- ---- - --------------- ---------- --------------- ----
17      4    A 21-NOV-04       2034819    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_103_1_542808837
Finished restore at 21-NOV-04

RMAN>
Starting restore at 21-NOV-04
using channel ORA_DISK_1

List of Datafile Copies
Key     File S Completion Time Ckp SCN    Ckp Time        Name
------- ---- - --------------- ---------- --------------- ----
14      1    A 21-NOV-04       2034765    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_100_1_542808680
16      2    A 21-NOV-04       2034807    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_102_1_542808801
21      3    A 21-NOV-04       2034839    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_107_1_542808886
17      4    A 21-NOV-04       2034819    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_103_1_542808837
19      5    A 21-NOV-04       2034832    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_105_1_542808871
23      6    A 21-NOV-04       2034845    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_109_1_542808898
20      7    A 21-NOV-04       2034836    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_106_1_542808879
18      8    A 21-NOV-04       2034829    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_104_1_542808863
24      9    A 21-NOV-04       2034847    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_110_1_542808902
22      10   A 21-NOV-04       2034843    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_108_1_542808894
26      11   A 21-NOV-04       2034852    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_112_1_542808909
25      12   A 21-NOV-04       2034850    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_111_1_542808906
15      13   A 21-NOV-04       2034791    21-NOV-04       C:\ORACLE\RMANBKUP\IC_ZDCDB_101_1_542808756
Finished restore at 21-NOV-04
-----
-- Listing 1.5: Show RMAN current and historical activity
-----
-- What's the current RMAN session activity?
TTITLE 'Results of currently active Recovery Manager sessions'
COL output FORMAT A64 HEADING 'Activity'
SELECT
     output
  FROM v$rman_output
;

-- What are the results from prior RMAN commands and sessions?
TTITLE 'Results of most recent Recovery Manager sessions'
COL command_id FORMAT A20 HEADING 'TimeStamp'
COL row_type   FORMAT A8  HEADING 'Command/|Session'
COL operation  FORMAT A8  HEADING 'Action'
COL status     FORMAT A24 HEADING 'Status'
SELECT
     command_id
    ,row_type
    ,operation
    ,status
  FROM v$rman_status
 ORDER BY command_id DESC
;

Oracle 10g Availability Enhancements, Part 2: Flashback Database

By Jim Czuprynski
Synopsis. Oracle 10g offers significant enhancements that help ensure the high availability
of any Oracle database, as well as improvements in the database disaster recovery arena.
This article - part two of a series - explores one of the most intriguing new features of
Oracle 10g: Flashback Database.
The previous article in this series explored the myriad enhancements to Recovery Manager
(RMAN) that Oracle 10g has added to an Oracle DBA's tool belt when constructing a
well-planned backup and recovery strategy.
Oracle 9i provided the capability to "flash back" to a prior view of the database via queries
performed against specific logical entities. For example, if a user had erroneously added,
modified, or deleted a large number of rows, it was now possible to view the state of the
logical entity just before the operation had taken place. This capability was limited, of
course, by the amount of UNDO data retained in the database's UNDO segments and
bounded by the time frame specified by the UNDO_RETENTION initialization parameter.
Oracle 10g expands these logical flashback capabilities significantly, and I will cover them in
detail in the next article in this series.
However, when a DBA needed to return an entire database to a prior state to recover
from a serious logical error - for example, when multiple erroneous transactions within the
same logical unit of work have affected the contents of several database tables - the only
option was to perform an incomplete database recovery. Since an incomplete recovery
requires that all datafiles first be restored from the latest backup, followed by a careful
"roll forward" through the appropriate archived and online redo logs until the desired point
in time is reached, the database would be unavailable until this process completed.
With the addition of Flashback Database, Oracle 10g has significantly improved the
availability of a database while it is being restored and recovered to the desired point in
time. These new features, however, do take some additional effort to plan for and set up,
so let's start at the beginning: configuring the Flash Recovery Area.
Enabling The Flash Recovery Area
Before any Flash Backup and Recovery activity can take place, the Flash Recovery Area
must be set up. The Flash Recovery Area is a specific area of disk storage that is set aside
exclusively for retention of backup components such as datafile image copies, archived redo
logs, and control file autobackup copies. Its features include:
Unified Backup Files Storage. All backup components can be stored in one consolidated
spot. The Flash Recovery Area is managed via Oracle Managed Files (OMF), and it can utilize
disk resources managed by Oracle Automated Storage Management (ASM). In addition, the
Flash Recovery Area can be configured for use by multiple database instances if so desired.
Automated Disk-Based Backup and Recovery. Once the Flash Recovery Area is configured,
all backup components (datafile image copies, archived redo logs, and so on) are managed
automatically by Oracle.
Automatic Deletion of Backup Components. Once backup components have been
successfully created, RMAN can be configured to automatically clean up files that are no
longer needed (thus reducing risk of insufficient disk space for backups).
Disk Cache for Tape Copies. Finally, if your disaster recovery plan involves backing up to
alternate media, the Flash Recovery Area can act as a disk cache area for those backup
components that are eventually copied to tape.
Flashback Logs. The Flash Recovery Area is also used to store and manage flashback logs,
which are used during Flashback Backup operations to quickly restore a database to a prior
desired state.
Sizing the Flash Recovery Area. Oracle recommends that the Flash Recovery Area should
be sized large enough to include all files required for backup and recovery. However, if
insufficient disk space is available, Oracle recommends that it be sized at least large enough
to contain any archived redo logs that have not yet been backed up to alternate media.
Table 1 below shows the minimum and recommended sizes for the Flash Recovery Area
based on the sizes of these database files in my current Oracle 10g evaluation database:

Table 1. Sizing The Flash Recovery Area

Database Element                                 Estimated Size (MB)
------------------------------------------------ -------------------
Image copies of all datafiles                                   1200
Incremental backups                                              256
Online Redo Logs                                                  48
Archived Redo Logs retained for backup to tape                    96
Control Files                                                      6
Control File Autobackups                                           6
Flash Recovery Logs                                               96
------------------------------------------------ -------------------
Recommended Size:                                               1708
Minimum Size:                                                     96
Based on these estimates, I will dedicate 2GB of available disk space so I can demonstrate a
complete implementation of the Flash Recovery Area.

Setting Up the Flash Recovery Area. Activation of the Flash Recovery Area requires
specifying values for two additional initialization parameters:

- DB_RECOVERY_FILE_DEST_SIZE specifies the total size of all files that can be
  stored in the Flash Recovery Area. Note that Oracle recommends setting this value
  first.
- DB_RECOVERY_FILE_DEST specifies the physical disk location where the
  Flash Recovery Area will be stored. Oracle recommends that this be a separate
  location from the database's datafiles, control files, and redo logs. Also, note that if
  the database is using Oracle's new Automatic Storage Management (ASM) feature,
  then the shared disk area that ASM manages can be targeted for the Flash
  Recovery Area.
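For my evaluation database, the corresponding ALTER SYSTEM commands would look something like this (the directory shown is illustrative; the 2GB figure matches the space I set aside earlier, and SCOPE=BOTH assumes the database is running from an SPFILE):

-- Set the size limit first, per Oracle's recommendation ...
ALTER SYSTEM SET db_recovery_file_dest_size = 2G SCOPE=BOTH;
-- ... and then the destination itself
ALTER SYSTEM SET db_recovery_file_dest = 'c:\oracle\flash_recovery_area' SCOPE=BOTH;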

Activating the Flash Recovery Area. It is obviously preferable to set up the Flash
Recovery Area when a database is being created for the first time, since all that needs to
be done is to make the changes to the database's initialization parameters. However, if the
Flash Recovery Area is being set up for an existing database, all that's required is to issue
the appropriate ALTER SYSTEM commands.
Listing 2.1 shows the changes I have made to the database's initialization parameter file,
including an example of how to ensure that an additional copy of the database's archived
redo logs is created in the Flash Recovery Area.
Listing 2.2 shows the commands to issue to set up the Flash Recovery Area when the
database is already open before flashback logging has been activated.
Enabling Flashback Database
As its name implies, Flashback Database offers the capability to quickly "flash" a database
back to its prior state as of a specified point in time. Oracle does this by retaining a copy of
any modified database blocks in flashback logs in the Flash Recovery Area. A new flashback
log is written to the Flash Recovery Area on a regular basis (usually hourly, even if nothing
has changed in the database), and these logs are typically smaller in size than an archived
redo log. Flashback logs have a file extension of .FLB.
When a Flashback Database request is received, Oracle reconstructs the state of the
database just prior to the requested point in time using the contents of the appropriate
flashback logs. The database's redo logs are then used to fill in the remaining gap between
that reconstructed state and the exact point in time desired for recovery.
The beauty of this approach is that no datafiles need to be restored from backups; further,
only the few changes required to fill in the gaps are automatically applied from archived
redo logs. This means that recovery is much quicker than traditional incomplete recovery
methods, with much higher database availability.
It is worth noting the few prerequisites that must be met before a database may utilize
Flashback Database features:

- The database must have flashback logging enabled, and therefore a Flash Recovery
  Area must have been configured. (For a RAC environment, the Flash Recovery Area
  must also be stored in either ASM or in a clustered file system.)
- Since archived redo logs are used to "fill in the gaps" during Flashback Database
  recovery, the database must be running in ARCHIVELOG mode.

Activating Flashback Database. Once the Flash Recovery Area has been configured, the next
step is to enable Flashback Database by issuing the ALTER DATABASE FLASHBACK ON;
command while the database is in MOUNT EXCLUSIVE mode, similar to activating a
database in ARCHIVELOG mode.
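The full activation sequence can be sketched as a short SQL*Plus session (a minimal sketch, assuming the Flash Recovery Area has already been configured and the database is already in ARCHIVELOG mode):

```sql
-- Activate Flashback Database; the database must be mounted
-- exclusively but not open when FLASHBACK ON is issued.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT EXCLUSIVE;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;
```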
Setting the Flashback Retention Target. Once Flashback Database has been enabled, the
DB_FLASHBACK_RETENTION_TARGET initialization parameter determines exactly how
far a database can be flashed back. The default value is 1440 minutes (one full day), but
this can be modified to suit the needs of your database. For purposes of illustration, I have
set my demonstration database's setting to 2880 minutes (two full days).
Deactivating Flashback Database. Likewise, issuing the ALTER DATABASE FLASHBACK
OFF; command deactivates Flashback Database. Just as in the activation
process, note that this command must be issued while the database is in MOUNT
EXCLUSIVE mode.
See Listing 2.3 for queries that display the status of the Flash Recovery Area, status of the
related initialization parameters, and whether the database has been successfully configured
for flashback.
Storing Backups In Flash Recovery Area
Now that I have enabled the Flash Recovery Area and enabled flashback logging, I can next
turn my attention to preparing the database to use flashback logs during a Flashback
Database recovery operation.
Listing 2.4 lists the RMAN commands I will need to issue to configure the database for
Flash Recovery Area and Flashback Database use. Notice that I have not CONFIGUREd a
FORMAT directive for the RMAN channels used to create database backups; for these
examples, I am going to let RMAN place all backup components directly in the Flash
Recovery Area.
Listing 2.5 implements Oracle's recommended daily RMAN backup scheme using datafile
image copies and incrementally-updated backups. (See the previous article in this series for
a full discussion of this technique.)
Finally, Listing 2.6 shows the abbreviated results of the first cycle's run of this backup
scheme. Note that Oracle uses OMF naming standards for each backup component file - in
this example, datafiles, the "extra copy" of the archived redo logs, and control file
autobackups - stored in the Flash Recovery Area.
Flashback Database: An Example
Now that I have enabled flashback logging and have created sufficient backup components
that are being managed in the Flash Recovery Area, it is time to demonstrate a Flashback
Database operation.
Let's assume a worst-case scenario: One of my junior developers has been enthusiastically
experimenting with logical units of work on what he thought was his personal development
database, but instead mistakenly applied a transaction against the production database. He
has just accidentally deleted several thousand entries in the SH.SALES and SH.COSTS tables
- just in time to endanger our end-of-quarter sales reporting schedule, of course! Here are
the DML statements issued, along with the number of records removed:
DELETE FROM sh.sales
WHERE prod_id BETWEEN 20 AND 80;
10455 rows deleted
Executed in 89.408 seconds
DELETE FROM sh.costs
WHERE prod_id BETWEEN 20 AND 80;
6728 rows deleted
Executed in 18.086 seconds
COMMIT;
Commit complete
Executed in 0.881 seconds
Flashback Database to the rescue! Since I know the approximate date and time that this
transaction was committed to the database, I will issue an appropriate FLASHBACK
DATABASE command from within an RMAN session to return the database to that
approximate point in time. Here is a more complete listing of the FLASHBACK DATABASE
command set:
FLASHBACK [DEVICE TYPE = <device type>] DATABASE
TO [BEFORE] SCN = <scn>
TO [BEFORE] SEQUENCE = <sequence> [THREAD = <thread id>]
TO [BEFORE] TIME = '<date_string>'
Note that I can return the database to any prior point in time based on a specific System
Change Number (SCN), a specific redo log sequence number (SEQUENCE), or to a specific
date and time (TIME). If I specify the BEFORE directive, I am telling RMAN to flash the
database back to the point in time just prior to the specified SCN, redo log, or time,
whereas if the BEFORE directive is not specified, the database will be flashed back to the
specified SCN, redo log, or time as of that specified point in time, i.e., inclusively.
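For instance, each of the following is a valid form of the command (the SCN, sequence, and time values here are purely illustrative):

```sql
-- Flash back to a specific System Change Number:
FLASHBACK DATABASE TO SCN = 2127725;

-- Flash back to just before a specific redo log sequence:
FLASHBACK DATABASE TO BEFORE SEQUENCE = 210 THREAD = 1;

-- Flash back to a relative point in time (one hour ago):
FLASHBACK DATABASE TO TIME = 'SYSDATE - 1/24';
```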
First, I queried my database's Flashback Logs to determine which ones are available, found
the log just prior to the user error and decided to flash back the database based on that
log's starting SCN. Listing 2.7 contains the query I ran against
V$FLASHBACK_DATABASE_LOGFILE to obtain this information.
Just as I would do during a normal point-in-time incomplete recovery, I then shut down the
database by issuing the SHUTDOWN IMMEDIATE command, and then restarted the
database and brought it into MOUNT mode via the STARTUP MOUNT command. Instead of
having to perform a restoration of datafiles as in a normal incomplete recovery, I instead
simply issue the appropriate FLASHBACK DATABASE command to take the database back to
the SCN I desired.
Once the flashback is completed, I could have continued to roll forward additional changes
from the archived redo logs available; however, I simply chose to open the database at this
point in time via the ALTER DATABASE OPEN RESETLOGS; command. Here are the actual
results from the RMAN session:
C:>rman nocatalog target sys/@zdcdb
Recovery Manager: Release 10.1.0.2.0 - Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
connected to target database: ZDCDB (DBID=1863541959)
using target database controlfile instead of recovery catalog
RMAN> FLASHBACK DEVICE TYPE = DISK DATABASE TO SCN = 2127725;
Starting flashback at 08-DEC-04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=158 devtype=DISK
starting media recovery
media recovery complete
Finished flashback at 08-DEC-04
RMAN> alter database open resetlogs;
database opened
To see what is really going on during the flashback and recovery process, I have also
included a portion of the database's alert log. Note that Oracle automatically cleaned up
after itself: since they are of no use any longer after the RESETLOGS operation, Oracle even
deleted the outmoded Flashback Logs from the Flash Recovery Area.
Conclusion
Oracle 10g's Flash Recovery Area simplifies the storage and handling of backup components
and flashback logs, and the new Flashback Database features provide any Oracle DBA with a
much improved, faster option for incomplete database recovery. The next article in this
series will delve into the details of using Oracle 10g's expanded Logical Flashback features,
including some intriguing capabilities for recovering from logical errors at a much more
granular level than Flashback Database provides.
References and Additional Reading
While there is no substitute for direct experience, reading the manual is not a bad idea,
either. I have drawn upon the following Oracle 10g documentation for the deeper technical
details of this article:
B10734-01 Oracle Database Backup and Recovery Advanced User's Guide
B10735-01 Oracle Database Backup and Recovery Basics
B10750-01 Oracle Database New Features Guide
B10770-01 Oracle Database Recovery Manager Reference
/*
|| Oracle 10g RMAN Listing 2
||
|| Contains examples of new Oracle 10g FlashBack Recovery Area and
|| Flashback Database features.
||
|| Author: Jim Czuprynski
||
|| Usage Notes:
|| This script is provided to demonstrate various features of Oracle 10g's
|| FlashBack Recovery Area and Flashback Database features and should be
|| carefully proofread before executing it against any existing Oracle
|| database to insure that no potential damage can occur.
||
*/
------ Listing 2.1: Setting up the Flash Recovery Area - closed database
------ Entries to add to database's INIT.ORA:
###########################################
# Flashback Backup and Recovery settings
###########################################
db_recovery_file_dest_size = 2G                   # See article for suggested sizing guidelines
db_recovery_file_dest = 'c:\oracle\fbrdata\zdcdb' # Should be a separate area of disk
db_flashback_retention_target = 2880              # Will hold two days (2880 minutes) worth of Flashback
# Activate this to transmit an extra copy of archived redo logs to Flash Recovery Area
log_archive_dest_2 = 'location=use_db_recovery_file_dest'
log_archive_dest_state_2 = enable
------ Listing 2.2: Setting up the Flash Recovery Area - open database
------ Be sure to set DB_RECOVERY_FILE_DEST_SIZE first ...
ALTER SYSTEM SET db_recovery_file_dest_size = 5G SCOPE=BOTH SID='*';
-- ... and then set DB_RECOVERY_FILE_DEST and DB_FLASHBACK_RETENTION_TARGET
ALTER SYSTEM SET db_recovery_file_dest = 'c:\oracle\fbrdata\zdcdb' SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_flashback_retention_target = 2880;
------ Listing 2.3: Flash Recovery status queries
------
-- What Flashback options are currently enabled for this database?
TTITLE 'Flashback Options Currently Enabled:'
COL name  FORMAT A32 HEADING 'Parameter'
COL value FORMAT A32 HEADING 'Setting'
SELECT
    name
   ,value
  FROM v$parameter
 WHERE name LIKE '%flash%' OR name LIKE '%recovery%'
 ORDER BY name;
-- What's the status of the Flash Recovery Area?
TTITLE 'Flash Recovery Area Status'
COL name            FORMAT A32     HEADING 'File Name'
COL spc_lmt_mb      FORMAT 9999.99 HEADING 'Space|Limit|(MB)'
COL spc_usd_mb      FORMAT 9999.99 HEADING 'Space|Used|(MB)'
COL spc_rcl_mb      FORMAT 9999.99 HEADING 'Reclm|Space|(MB)'
COL number_of_files FORMAT 99999   HEADING 'Files'
SELECT
    name
   ,space_limit /(1024*1024) spc_lmt_mb
   ,space_used /(1024*1024) spc_usd_mb
   ,space_reclaimable /(1024*1024) spc_rcl_mb
   ,number_of_files
  FROM v$recovery_file_dest;
-- Is Flashback Database currently activated for this database?
TTITLE 'Is Flashback Database Enabled?'
COL name         FORMAT A12     HEADING 'Database'
COL current_scn  FORMAT 9999999 HEADING 'Current|SCN #'
COL flashback_on FORMAT A8      HEADING 'Flash|Back On?'
SELECT
    name
   ,current_scn
   ,flashback_on
  FROM v$database;
-- What's the earliest point to which this database can be flashed back?
TTITLE 'Flashback Database Limits'
COL oldest_flashback_scn     FORMAT 999999999 HEADING 'Oldest|Flashback|SCN #'
COL oldest_flashback_time    FORMAT A20       HEADING 'Oldest|Flashback|Time'
COL retention_target         FORMAT 999999999 HEADING 'Retention|Target'
COL flashback_size           FORMAT 999999999 HEADING 'Flashback|Size'
COL estimated_flashback_size FORMAT 999999999 HEADING 'Estimated|Flashback|Size'
SELECT
    oldest_flashback_scn
   ,oldest_flashback_time
   ,retention_target
   ,flashback_size
   ,estimated_flashback_size
  FROM v$flashback_database_log;
------ Listing 2.4: Configuring RMAN to use Flash Recovery Area
------
RUN {
# Configure RMAN specifically to use Flash Recovery Area features
CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
}
------ Listing 2.5: RMAN Daily Backup Scheme Using Image Copies
------
RUN {
##############################################################################
#
# RMAN Script: DailyImageCopyBackup.rcv
# Creates a daily image copy of all datafiles and Level 1 incremental backups
# for use by the daily image copies
##############################################################################
#
# Roll forward any available changes to image copy files
# from the previous set of incremental Level 1 backups
RECOVER
COPY OF DATABASE
WITH TAG 'img_cpy_upd';
# Create incremental level 1 backup of all datafiles in the database
# for roll-forward application against image copies
BACKUP
INCREMENTAL LEVEL 1
FOR RECOVER OF COPY WITH TAG 'img_cpy_upd'
DATABASE;
}
------ Listing 2.6: Results of First Daily Backup
------
List of Datafile Copies
Key     File S Completion Time Ckp SCN    Ckp Time  Name
------- ---- - --------------- ---------- --------- ----
41      1    A 07-DEC-04       2119100    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_SYSTEM_0VDM2NP9_.DBF
1       1    A 20-NOV-04       2006057    20-NOV-04 C:\RMANBKUP
43      2    A 07-DEC-04       2119143    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_UNDOTBS1_0VDM6MRV_.DBF
48      3    A 07-DEC-04       2119180    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_DRSYS_0VDM9OP2_.DBF
44      4    A 07-DEC-04       2119156    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_EXAMPLE_0VDM7S0X_.DBF
46      5    A 07-DEC-04       2119173    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_INDX_0VDM94ON_.DBF
50      6    A 07-DEC-04       2119186    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_TOOLS_0VDMB270_.DBF
47      7    A 07-DEC-04       2119176    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_USERS_0VDM9F8W_.DBF
45      8    A 07-DEC-04       2119166    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_XDB_0VDM8N66_.DBF
51      9    A 07-DEC-04       2119189    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_LMPT1_0VDMB6CL_.DBF
49      10   A 07-DEC-04       2119184    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_LMPT3_0VDM9Y6J_.DBF
53      11   A 07-DEC-04       2119193    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_LMPT2_0VDMBGJN_.DBF
52      12   A 07-DEC-04       2119191    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_LMPT4_0VDMBBGW_.DBF
42      13   A 07-DEC-04       2119127    07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\DATAFILE\O1_MF_SYSAUX_0VDM53DD_.DBF
List of Archived Log Copies
Key     Thrd Seq     S Low Time  Name
------- ---- ------- - --------- ----
148     1    203     A 05-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002030010493846599.ARC
149     1    204     A 06-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002040010493846599.ARC
150     1    205     A 06-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002050010493846599.ARC
151     1    206     A 06-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002060010493846599.ARC
152     1    207     A 06-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002070010493846599.ARC
153     1    208     A 06-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002080010493846599.ARC
154     1    209     A 06-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002090010493846599.ARC
155     1    209     A 06-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\ARCHIVELOG\2004_12_06\O1_MF_1_209_0V9Q1HHJ_.ARC
160     1    210     A 06-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002100010493846599.ARC
161     1    210     A 06-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\ARCHIVELOG\2004_12_08\O1_MF_1_210_0VH53GGG_.ARC
156     1    210     A 06-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002100010493846599.ARC
157     1    210     A 06-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\ARCHIVELOG\2004_12_07\O1_MF_1_210_0VDOMOGQ_.ARC
162     1    211     A 07-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002110010493846599.ARC
163     1    211     A 07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\ARCHIVELOG\2004_12_08\O1_MF_1_211_0VH53NS2_.ARC
158     1    211     A 07-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002110010493846599.ARC
159     1    211     A 07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\ARCHIVELOG\2004_12_07\O1_MF_1_211_0VDOPPDT_.ARC
164     1    212     A 07-DEC-04 C:\ORACLE\ORADATA\ZDCDB\ARCHIVE\ZDC002120010493846599.ARC
165     1    212     A 07-DEC-04 C:\ORACLE\FBRDATA\ZDCDB\ZDCDB\ARCHIVELOG\2004_12_08\O1_MF_1_212_0VH53V2V_.ARC
------ Listing 2.7: Flashback Log Query
------ What Flashback Logs are available?
TTITLE 'Current Flashback Logs Available'
COL log#          FORMAT 9999     HEADING 'FLB|Log#'
COL bytes         FORMAT 99999999 HEADING 'Flshbck|Log Size'
COL first_change# FORMAT 99999999 HEADING 'Flshbck|SCN #'
COL first_time    FORMAT A24      HEADING 'Flashback Start Time'
SELECT
    log#
   ,bytes
   ,first_change#
   ,first_time
  FROM v$flashback_database_logfile;

Oracle 10g Availability Enhancements, Part 3: FLASHBACK Enhancements
By Jim Czuprynski
Synopsis. Oracle 10g offers significant enhancements that help insure the high availability
of any Oracle database, as well as improvements in the database disaster recovery arena.
This article - part three of a series - concentrates on the expanded capabilities of the logical
Flashback command set.
The previous article in this series discussed how two new Oracle 10g features - the Flash
Recovery Area and the Flashback Database command -- expand a DBA's flexibility
during a point-in-time incomplete recovery of a database. However, when those features are
used in conjunction with the new logical FLASHBACK command set enhancements, a DBA
now has an extensive set of tools for recovering data with more granularity than ever
before.
A Quick Review: Flashback Query
Oracle 9i provided the ability to "flash back" to a prior view of the database via queries
performed against specific logical entities. For example, if a user had accidentally added,
modified or deleted a large number of rows erroneously, it was now possible to view the
state of the logical entity just before the operation had taken place. Of course, this
capability is limited by the amount of UNDO data retained in the database's UNDO
segments, and that is bounded by the time frame specified by the UNDO_RETENTION
initialization parameter.
For example, let's assume that late one Friday afternoon a junior DBA, in his zest to perform
emergency maintenance requested by a developer against the DEPARTMENTS table,
inadvertently set all of the values for the Department Managers to NULL values. If the senior
DBA knew the approximate time at which this had occurred, then she could issue the
following query to return the state of that table at that time:
SELECT *
FROM hr.departments
AS OF TIMESTAMP
TO_TIMESTAMP('12/05/2004 11:55:00', 'MM/DD/YYYY HH24:MI:SS');
Moreover, to restore the data back to its prior state, she could issue this UPDATE statement:
UPDATE hr.departments D1
   SET D1.manager_id = (
        SELECT D.manager_id
          FROM hr.departments
            AS OF TIMESTAMP
               TO_TIMESTAMP('12/05/2004 11:55:00', 'MM/DD/YYYY HH24:MI:SS') D
         WHERE D.manager_id IS NOT NULL
           AND D1.department_id = D.department_id
       );
COMMIT;
(See Listing 3.1 for a complete set of DML statements that simulate this scenario.)
The ability to flash back to a different version of the data is an obvious time-saver in this
situation, as it prevents having to resort to other brute-force methods to restore data
integrity. However, what if more than one table's data had been affected by this transaction?
For example, what will happen if a user error causes data to be modified in another table
based on a change in values in the DEPARTMENTS table, perhaps through a trigger?
Or - even worse - what if significant amounts of data had been deleted instead of just being
updated erroneously? Or what if the junior DBA had dropped a whole table instead of just
truncating the data stored within? Now the restoration and recovery of these data requires
some difficult decisions for the DBA:
- She could perform an incomplete recovery of the database using the traditional
method of restoring backups of all datafiles and then applying the appropriate
archived redo logs to roll the database forward until the database reached a point in
time just prior to the time that the user error occurred.
- Alternatively, if the new Oracle 10g Flashback Database feature had been enabled,
she could use that new method to "flash" the database back to a particular point in
time. (See the prior article in this series for complete details on this method.)
- Another option would be to perform a brute-force recovery via manual means,
perhaps using database exports (providing, of course, that the exports were current
enough). However, unless the DBA is intimately aware of the inter-relation of the
affected data, this may not be practical, and could even be more destructive.
Unfortunately, in all these cases, the state of the data that has been added, modified or
deleted after the recovery point in time would most likely also be lost. Moreover, that
probably means that her users are going to spend their weekend re-entering significant
amounts of lost data.
Wouldn't it be great if Oracle provided a way to reverse the effects of a particular DML
statement completely, or (even better!) a dropped table? Here is the good news: Oracle 10g
has significantly improved the existing set of logical FLASHBACK features to handle many of
these not-quite-a-disaster data recovery operations.
Flashback Version Query
Flashback Version Query improves upon the existing Flashback Query feature: It allows a
DBA to see all the different versions of a particular row within a specified time frame, as
long as those versions are still available within the UNDO tablespace's rollback segments.
This time frame can be defined based on either a beginning and an ending timestamp value,
or based on a range of starting and ending System Change Numbers (SCNs).
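In outline, a Flashback Version Query simply adds a VERSIONS BETWEEN clause to an ordinary SELECT; here is a sketch (the SCN range and filter values are illustrative only):

```sql
-- Show every version of the selected rows within an SCN range,
-- using the VERSIONS_* pseudocolumns to expose each version's history:
SELECT versions_xid
      ,versions_startscn
      ,versions_endscn
      ,versions_operation
      ,last_name
      ,salary
  FROM hr.employees
       VERSIONS BETWEEN SCN 2150700 AND 2150760
 WHERE department_id = 280;
```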
Flashback Version Query: An Illustration. I will use the Human Resources
demonstration schema to illustrate the new capabilities of Flashback Version Query. First, I
will establish a new baseline of "correct" data by adding four new Job Titles, a new
Department, and five new Employees to the database via a series of DML statements (see
Listing 3.2).
Next, I will issue a series of DML statements to simulate a set of "mistaken" operations
against the database (see Listing 3.3). Note that the changes against the HR.EMPLOYEES
table within these transactions also mean that new rows will be added automatically to the
HR.JOB_HISTORY table via a row-level trigger on HR.EMPLOYEES.
Now, I will use the query in Listing 3.4 to show what Flashback Version Query can tell me
about the versions of the rows in the database as a result of these DML statements. Here is
a sample of the results:
Current FLASHBACK VERSION Results For Selected Employees
                 Vsn       Vsn
                 Start     End
XID              SCN       SCN       Operatio Last Name     Dept    Salary
---------------- --------- --------- -------- ------------ ----- ---------
04002900E7000000 2150721             Insert   Campbell       280 110000.00
04000C00E7000000 2150749             Update   Asimov         280   5250.00
04002900E7000000 2150721   2150749   Insert   Asimov         280   5000.00
04000C00E7000000 2150749             Update   Heinlein       280  26250.00
04002900E7000000 2150721   2150749   Insert   Heinlein       280  25000.00
04000C00E7000000 2150749             Update   Bradbury       280  50925.00
04002900E7000000 2150721   2150749   Insert   Bradbury       280  48500.00
04001500E7000000 2150751             Update   Ellison        270  34125.00
04000C00E7000000 2150749   2150751   Update   Ellison        280  34125.00
04002900E7000000 2150721   2150749   Insert   Ellison        280  32500.00
04001B00E7000000 2150753             Delete   Brin           280  39375.00
04000C00E7000000 2150749   2150753   Update   Brin           280  39375.00
04001E00E7000000 2150747   2150749   Insert   Brin           280  37500.00
A New PseudoColumn: ORA_ROWSCN. While the ROWID pseudocolumn uniquely
identifies a row's block location within the database, the new pseudocolumn ORA_ROWSCN
identifies the System Change Number (SCN) of the most recently committed change to a
row. This pseudocolumn can therefore be used to establish the upper limit of SCNs that I
might want to search through. See Listing 3.4 for a sample query that utilized this new
pseudocolumn.
Another intriguing potential use of ORA_ROWSCN is the capability to control transaction
concurrency within applications. For example, if one user retrieves a row from
HR.EMPLOYEES for eventual update, but has not yet applied the change, the value of
ORA_ROWSCN will remain unchanged until those modifications are committed to the
database. In the meantime, if another user modifies that same row and commits the
changes, I can have my application check if the value of ORA_ROWSCN is still equal to its
original value.
If the value has changed, I can notify the original user that the row has changed since it
was originally retrieved for update, and request whether to continue the transaction or roll it

back. Prior to ORA_ROWSCN, my application would have to check every value of every
column for that row to determine if any value had changed. Alternatively, I could add a
separate numeric column, incremented whenever a change to a row was committed, to
each table that needs this level of transaction control. However, ORA_ROWSCN makes
accurate transaction control almost trivial.
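Such an optimistic-locking check might be sketched as follows (the table, key value, and bind variable names are illustrative only):

```sql
-- 1. Read the row and remember its ORA_ROWSCN along with the data:
SELECT salary, ora_rowscn
  FROM hr.employees
 WHERE employee_id = 100;

-- 2. Later, apply the change only if the row is still unchanged;
--    zero rows updated signals that another session committed
--    a change to this row in the meantime:
UPDATE hr.employees
   SET salary = :new_salary
 WHERE employee_id = 100
   AND ora_rowscn = :scn_read_in_step_1;
```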
Flashback Transaction Query
Like its cousin Flashback Version Query, Flashback Transaction Query gives me even more
flexibility: It allows me to see all changed rows within a particular set of transactions that
occurred within a range of timestamps or SCNs.
Flashback Transaction Query uses the FLASHBACK_TRANSACTION_QUERY view as a
window into the database's UNDO segments. I can use this view's transaction ID column
(XID) to identify what changes have been recorded during a specific transaction.
Reminiscent of Oracle's LogMiner toolset, Flashback Transaction Query can display the
actual DML statements to issue to reverse the original transaction.
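For example, a query of this shape returns the compensating UNDO SQL for a single transaction (the XID value here is one of the transaction IDs from the sample output; any transaction ID of interest can be substituted):

```sql
-- Retrieve the compensating UNDO SQL recorded for one transaction;
-- XID is a RAW(8) column, so the literal must pass through HEXTORAW:
SELECT xid
      ,operation
      ,table_owner
      ,table_name
      ,commit_scn
      ,undo_sql
  FROM flashback_transaction_query
 WHERE xid = HEXTORAW('04000C00E7000000')
 ORDER BY commit_scn;
```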
Listing 3.5 shows how to utilize a Flashback Version Query SELECT statement to drive the
retrieval of all transactions that have occurred during a specific range off SCNs. Here is the
result of that query:
Current FLASHBACK_TRANSACTION_QUERY Contents For Selected Employees
                            User   Table               Commit
XID#             Operation  Logon  Owner  Table Name   SCN
---------------- ---------- ------ ------ ------------ --------
UNDO SQL
--------------------------------------------------------------------------------
04000C00E7000000 UPDATE     SYS    HR     EMPLOYEES    2150749
update "HR"."EMPLOYEES" set "SALARY" = '37500' where ROWID = 'AAAGMsAAEAAAABWAAA';

04000C00E7000000 UPDATE     SYS    HR     EMPLOYEES    2150749
update "HR"."EMPLOYEES" set "SALARY" = '32500' where ROWID = 'AAAGMsAAEAAAABYAAE';

04000C00E7000000 UPDATE     SYS    HR     EMPLOYEES    2150749
update "HR"."EMPLOYEES" set "SALARY" = '48500' where ROWID = 'AAAGMsAAEAAAABYAAD';

04000C00E7000000 UPDATE     SYS    HR     EMPLOYEES    2150749
update "HR"."EMPLOYEES" set "SALARY" = '25000' where ROWID = 'AAAGMsAAEAAAABYAAC';

04000C00E7000000 UPDATE     SYS    HR     EMPLOYEES    2150749
update "HR"."EMPLOYEES" set "SALARY" = '5000' where ROWID = 'AAAGMsAAEAAAABYAAB';

04000C00E7000000 BEGIN      SYS                        2150749

04001500E7000000 INSERT     SYS    HR     JOB_HISTORY  2150751
delete from "HR"."JOB_HISTORY" where ROWID = 'AAAGMwAAEAAAABtAAG';

04001500E7000000 UPDATE     SYS    HR     EMPLOYEES    2150751
update "HR"."EMPLOYEES" set "DEPARTMENT_ID" = '280' where ROWID = 'AAAGMsAAEAAAABYAAE';

04001500E7000000 BEGIN      SYS                        2150751

04001B00E7000000 DELETE     SYS    HR     EMPLOYEES    2150753
insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL",
"PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID",
"DEPARTMENT_ID") values ('906','David','Brin','dbrin@astounding.com',
'212-555-1616',TO_DATE('10/31/1987 00:00:00', 'mm/dd/yyyy hh24:mi:ss'),
'WRITER-3','39375',NULL,'901','280');

04001B00E7000000 BEGIN      SYS                        2150753

04001E00E7000000 INSERT     SYS    HR     EMPLOYEES    2150747
delete from "HR"."EMPLOYEES" where ROWID = 'AAAGMsAAEAAAABWAAA';

04001E00E7000000 BEGIN      SYS                        2150747

04002900E7000000 INSERT     SYS    HR     EMPLOYEES    2150721
delete from "HR"."EMPLOYEES" where ROWID = 'AAAGMsAAEAAAABYAAE';

04002900E7000000 INSERT     SYS    HR     EMPLOYEES    2150721
delete from "HR"."EMPLOYEES" where ROWID = 'AAAGMsAAEAAAABYAAD';

04002900E7000000 INSERT     SYS    HR     EMPLOYEES    2150721
delete from "HR"."EMPLOYEES" where ROWID = 'AAAGMsAAEAAAABYAAC';

04002900E7000000 INSERT     SYS    HR     EMPLOYEES    2150721
delete from "HR"."EMPLOYEES" where ROWID = 'AAAGMsAAEAAAABYAAB';

04002900E7000000 INSERT     SYS    HR     EMPLOYEES    2150721
delete from "HR"."EMPLOYEES" where ROWID = 'AAAGMsAAEAAAABYAAA';

04002900E7000000 BEGIN      SYS                        2150721

19 rows selected.
Using SCNs vs. TIMESTAMPs. As you might expect, using an SCN to identify a transaction
or range of row versions is more accurate than using a TIMESTAMP. Oracle recommends
using SCNs over TIMESTAMPs when an extremely accurate Logical Flashback operation
needs to be performed; in fact, the documentation states that a TIMESTAMP can diverge
from the actual time of the corresponding SCN by as much as three minutes.
Effect of UNDO_RETENTION Setting. The length of time the row versions are available
obviously depends on the setting of the UNDO_RETENTION initialization parameter. By
default, this setting is 900 seconds (15 minutes); in some cases, I have set
UNDO_RETENTION as high as 10800 (3 hours) for some databases that I knew needed
longer UNDO retention durations. For the sake of these examples, I have set it to 1800 (30
minutes) in my demonstration database, so that I can more easily illustrate these two new
features without recreating examples every 15 minutes.
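Since UNDO_RETENTION is a dynamic parameter, it can be adjusted without restarting the database:

```sql
-- Raise the UNDO retention target to 30 minutes (1800 seconds):
ALTER SYSTEM SET undo_retention = 1800 SCOPE=BOTH;
```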
Rewinding Tables with FLASHBACK TABLE
While Flashback Version Query and Flashback Transaction Query offer the capability to
retrieve the state of a table's rows at a prior point in time, Oracle 10g also offers the ability
to restore an entire table to an earlier state within the boundaries of available UNDO data
via the FLASHBACK TABLE command.
To illustrate this, I will create a new table in the HR demo schema called APPLICANTS that
I will use to record information about each person applying for a job. I will use a row-level
trigger and a sequence to automatically increment the primary key column,
APPLICANTS.APPLICANT_ID, whenever a new entry is added to the table.
Listing 3.6 shows the DDL and DML statements necessary to create and populate this table
initially. Once the new table was populated, I recorded the maximum SCN (2177093) just
before I issued a series of additional INSERT statements shown in Listing 3.7.
I will issue the FLASHBACK TABLE command shown in Listing 3.8 to bring the table back
to its initial state. Note that I set the table's ENABLE ROW MOVEMENT parameter to TRUE
before attempting to "rewind" the table.
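In outline, the "rewind" boils down to just two statements (the SCN shown is the one recorded above, just before the second set of INSERTs):

```sql
-- Allow Oracle to relocate rows during the flashback operation:
ALTER TABLE hr.applicants ENABLE ROW MOVEMENT;

-- Rewind the table to its state as of the recorded SCN:
FLASHBACK TABLE hr.applicants TO SCN 2177093;
```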
Prerequisites. Before I can execute a FLASHBACK TABLE command, several conditions
must be met:
- The UNDO segments that hold the statements needed to "rewind" the table(s) back
to their prior state must still be available.
- The user account from which I am issuing the FLASHBACK TABLE command must
have been granted the FLASHBACK TABLE object privilege for the tables that I wish
to "rewind," or the user account must have been granted the FLASHBACK ANY
TABLE privilege.
- Also, the user account that is performing the FLASHBACK TABLE operation must
have been granted SELECT, INSERT, UPDATE, and DELETE rights on the affected
tables.
- Finally, the table(s) to be "rewound" must have the ENABLE ROW MOVEMENT
directive enabled. This directive allows Oracle to move rows into or out of the
selected table(s).
Caveats. Even though FLASHBACK TABLE offers some slick new capabilities, some warnings
are in order:
- It is important to remember that once the FLASHBACK TABLE operation is completed,
it only rolls back the transactions applied to the table or tables specified in the
command; the state of any other database objects is unchanged. If I now
reissue the second set of INSERTs, the sequence upon which the APPLICANTS table's
BEFORE INSERT trigger depends has not been reset, and the next set of applicants
will use the most current value of the sequence for the APPLICANT_ID value.
- A FLASHBACK TABLE operation cannot be rolled back, as an implicit COMMIT is
issued once it is complete. However, another FLASHBACK TABLE statement can be
issued to restore the table to a different point in time (providing, of course, that
sufficient UNDO is available for the successive operation).
- Also, FLASHBACK TABLE cannot be used to recover to a point in time prior to the
issuance of DDL statements that have modified the table's structure.
Restoring a Dropped Table: FLASHBACK DROP and the Recycle Bin
Rounding out the new logical flashback features, Oracle 10g offers the capability to recover
from one of the most destructive "accidental" operations that can happen to any Oracle
database: the complete removal of a table via the DROP TABLE command. To facilitate this
new feature, Oracle 10g has added a new storage area to the database called the Recycle
Bin where dropped objects are retained until the object is either recovered via a Flashback
Drop operation, or until the object is purged from the Recycle Bin.
Peering Into the Recycle Bin. Every time a table is dropped, it is assigned a unique
object identifier in the Recycle Bin. This 30-character object identifier is in the format
BIN$globalUID$version, where globalUID is a 24-character globally unique identifier
for the dropped object, and version is assigned for each version of the dropped object.
Oracle 10g provides several methods to view the Recycle Bin's contents and identify which
tables have been dropped:

A new column, DROPPED, has been added to the DBA_TABLES data dictionary
view to allow screening for tables that have been dropped from the database but are
now present in the Recycle Bin instead.
The SHOW RECYCLEBIN; command shows all dropped tables and their dependent
objects when issued from within a SQL*Plus session.

The RECYCLEBIN data dictionary view shows the contents of the Recycle Bin for the
current user.

Finally, the DBA_RECYCLEBIN data dictionary view shows the complete contents of
the Recycle Bin.
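For instance, the first and third of these methods translate into queries like the following:

```sql
-- Screen for dropped tables via the new DROPPED column of DBA_TABLES
SELECT owner, table_name
  FROM dba_tables
 WHERE dropped = 'YES';

-- Examine the current user's Recycle Bin directly
SELECT object_name, original_name, type, droptime
  FROM recyclebin;
```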

Viewing Different Versions of Dropped Tables. Even if a table is created and dropped
several times, all of the different iterations of the dropped table and its dependent objects
are retained in the Recycle Bin until they are purged. Using the dropped table's object
identifier, I can query directly against the dropped table's data in the Recycle Bin, thus
allowing me to determine exactly which version of the table should be recovered. I will
demonstrate this feature in an upcoming recovery example.
Listing 3.9 displays several examples of querying the Recycle Bin for its current status.
Recycle Bin Space Pressure and Automatic Purging. Oracle 10g automatically manages
the contents of the Recycle Bin to insure there is enough space to store any dropped tables
and their related objects. Unfortunately, this also means that there is no way to predict
when Oracle may need to purge objects from the Recycle Bin.
Oracle will keep objects in the Recycle Bin until it can no longer allocate new extents in the
tablespace where the dropped objects originally resided without expanding the tablespace.
This situation is known as space pressure. When space pressure demands that Recycle Bin

space be reclaimed, Oracle will purge the oldest objects first (i.e., first-in, first-out basis),
and it will purge a dropped table's dependent objects first (e.g. indexes, triggers) before it
purges the table itself.
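The SPACE column of DBA_RECYCLEBIN (expressed in database blocks) offers a rough gauge of how much space the Recycle Bin is holding in each tablespace, and therefore how close a tablespace may be to space pressure:

```sql
-- Blocks currently consumed by Recycle Bin objects, per tablespace
SELECT ts_name,
       SUM(space) AS blocks_used
  FROM dba_recyclebin
 GROUP BY ts_name;
```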
Manually Purging Recycle Bin Objects. Versions of objects that no longer need to be
retained can also be purged manually via the following commands, in order of increasing
destructiveness to the Recycle Bin:

Purging A Single Index. The PURGE INDEX <index name>; command purges
the most recent incarnation of the specified index from the Recycle Bin. Note that the
index cannot have been used to enforce a constraint for its supported table;
otherwise, Oracle will return an error.
Purging A Single Table. Issuing the PURGE TABLE <table name>; purges only
the most recent incarnation of the dropped table and its dependent objects (e.g.
indexes and triggers).

Purging All Objects in a Tablespace. The PURGE TABLESPACE
<tablespace_name>; purges all objects in the specified tablespace from the
Recycle Bin.

Purging All Schema Objects. The PURGE RECYCLEBIN; command will purge all
schema objects for the current user account from the Recycle Bin.

Purging All Objects. Finally, the PURGE DBA_RECYCLEBIN; command purges all
database objects in the Recycle Bin. Note that this command must be issued from a
user account with DBA privileges.

See Listing 3.10 for examples of these commands.


Example: Restoring a Dropped Table. As long as the table and its dependent objects are
still present in the Recycle Bin, the table can be recovered using the FLASHBACK TABLE
<table name> TO BEFORE DROP; command. To illustrate the power of this new feature, I
have dropped the HR.APPLICANTS table created in the previous FLASHBACK TABLE
example and then purged the entire Recycle Bin via the PURGE DBA_RECYCLEBIN;
command.
Next, I recreated the table via the code in Listing 3.6 and loaded it with the first set of
applicants. I dropped it again, created it again and loaded it with the first set as well as the
second set of applicants (Listing 3.7). Finally, I dropped the table a third time, reloaded it
with the first and second set of applicants, added a third set of applicants (Listing 3.11),
and dropped it once again. This left three distinct iterations of the table to experiment
against. Here are the results stored in the Recycle Bin after these operations:
SQL> TTITLE 'Current Recycle Bin Contents'
SQL> COL object_name   FORMAT A30 HEADING 'Object Name'
SQL> COL type          FORMAT A8  HEADING 'Object|Type'
SQL> COL original_name FORMAT A20 HEADING 'Original Name'
SQL> COL droptime      FORMAT A20 HEADING 'Dropped On'
SQL> SELECT
  2      object_name
  3     ,type
  4     ,original_name
  5     ,droptime
  6    FROM dba_recyclebin
  7   WHERE owner = 'HR'
  8  ;
Current Recycle Bin Contents

                                Object
Object Name                     Type     Original Name        Dropped On
------------------------------ -------- -------------------- --------------------
BIN$GXKs4x3zS+6aEyHIbjIO0g==$0 INDEX    APPLICANTS_LAST_NAME 2005-01-03:19:04:25
                                        _IDX
BIN$0YQGF9xpTgOPRqsqfuHtNA==$0 INDEX    APPLICANTS_LAST_NAME 2005-01-03:19:03:36
                                        _IDX
BIN$lpBfdfPSQfai8dZoa/DHUw==$0 INDEX    APPLICANTS_PK_IDX    2005-01-03:19:03:36
BIN$TdmwJaPjSIu5XTGn2vmweQ==$0 INDEX    APPLICANTS_PK_IDX    2005-01-03:19:04:25
BIN$eUzM3ZWMTQefYskd+7kAqw==$0 TRIGGER  TR_BRIU_APPLICANTS   2005-01-03:19:04:25
BIN$ldrmjTN0R8K8qyRRCmSsxw==$0 TABLE    APPLICANTS           2005-01-03:19:04:25
BIN$RSEdFMhCRcqCv5g7lYss6A==$0 TRIGGER  TR_BRIU_APPLICANTS   2005-01-03:19:03:36
BIN$992SjQhHRlqHZHB4Aa/dWQ==$0 TABLE    APPLICANTS           2005-01-03:19:03:36
BIN$tqINHzsMRT6EfbsgiD8eFQ==$0 INDEX    APPLICANTS_LAST_NAME 2005-01-03:19:06:56
                                        _IDX
BIN$877/hfooRKuuiVAKjsE7Jg==$0 INDEX    APPLICANTS_PK_IDX    2005-01-03:19:06:56
BIN$vLf00KzHSpGbMjkhnJETUw==$0 TRIGGER  TR_BRIU_APPLICANTS   2005-01-03:19:06:56
BIN$xJfp8JRWQ9KalGzPUw9Ygg==$0 TABLE    APPLICANTS           2005-01-03:19:06:56

12 rows selected.
By querying directly against the Recycle Bin using each dropped table's object identifier, I
can confirm that the second-most recent iteration of the HR.APPLICANTS table has exactly
30 rows:
SQL> -- Most recent iteration of HR.APPLICANTS
SQL> SELECT COUNT(*) FROM "BIN$xJfp8JRWQ9KalGzPUw9Ygg==$0";

  COUNT(*)
----------
        45

SQL> -- Second-most recent iteration of HR.APPLICANTS
SQL> SELECT COUNT(*) FROM "BIN$ldrmjTN0R8K8qyRRCmSsxw==$0";

  COUNT(*)
----------
        30

SQL> -- Third-most recent iteration of HR.APPLICANTS
SQL> SELECT COUNT(*) FROM "BIN$992SjQhHRlqHZHB4Aa/dWQ==$0";

  COUNT(*)
----------
        16
Oracle retrieves the most recently-dropped table first, so if I issued a FLASHBACK TABLE
hr.applicants TO BEFORE DROP; Oracle would restore the iteration with 45 rows. Since I
want to restore only the iteration with 30 rows, I will issue a FLASHBACK TABLE
<object_identifier> TO BEFORE DROP; command to insure that I have restored the
desired copy of the table:
FLASHBACK TABLE "BIN$ldrmjTN0R8K8qyRRCmSsxw==$0" TO BEFORE DROP;
Alternatively, I can restore a different iteration of the table under a different table name:
FLASHBACK TABLE "BIN$992SjQhHRlqHZHB4Aa/dWQ==$0"
TO BEFORE DROP
RENAME TO applicants_1;
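If both restorations were performed, a quick sanity check against the restored tables should confirm the expected row counts (30 and 16, respectively, per the earlier Recycle Bin queries):

```sql
SELECT COUNT(*) FROM hr.applicants;    -- expect 30 rows
SELECT COUNT(*) FROM hr.applicants_1;  -- expect 16 rows
```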
Conclusion
Oracle 10g's new Logical Flashback features significantly expand an Oracle DBA's abilities to
recover data, transactions and database objects that have been lost with a minimum of
effort. When these new features are used in conjunction with each other and with the
FLASHBACK DATABASE features described in the previous article, just about any data loss
situation can be forestalled. The next article -- the final one in this series -- will
concentrate on additional availability enhancements implemented as part of Data Guard
and LogMiner.
References and Additional Reading
While there is no substitute for direct experience, reading the manual is not a bad idea,
either. I have drawn upon the following Oracle 10g documentation for the deeper technical
details of this article:
B10734-01 Oracle Database Backup and Recovery Advanced User's Guide
B10735-01 Oracle Database Backup and Recovery Basics
B10750-01 Oracle Database New Features Guide
B10759-01 Oracle Database SQL Reference
B10770-01 Oracle Database Recovery Manager Reference

/*
|| Oracle 10g RMAN Listing 3
||
|| Contains examples of new Oracle 10g Logical Flashback features.
||
|| Author: Jim Czuprynski
||
|| Usage Notes:
|| This script is provided to demonstrate various features of Oracle 10g's
|| new Logical Flashback features and should be carefully proofread
|| before executing it against any existing Oracle database to insure
|| that no potential damage can occur.
*/
-----
-- Listing 3.1: Simple Flashback Query Examples
-----
-- Simulate a user error
UPDATE hr.departments
SET manager_id = NULL;
COMMIT;
-- View data as it existed before the transaction was committed. This
-- example assumes that the approximate time of the damage is known
SELECT *
FROM hr.departments
AS OF TIMESTAMP
TO_TIMESTAMP('12/05/2004 14:45:00', 'MM/DD/YYYY HH24:MI:SS');
-- Repair damaged data using the flashed-back data
UPDATE hr.departments D1
SET D1.manager_id = (
SELECT manager_id
FROM hr.departments
AS OF TIMESTAMP
TO_TIMESTAMP('12/05/2004 11:55:00', 'MM/DD/YYYY HH24:MI:SS') D
WHERE manager_id IS NOT NULL
AND d1.department_id = D.department_id
);
COMMIT;
-----
-- Listing 3.2: Add new Jobs, Departments, and Employees
-----

INSERT INTO hr.departments (department_id, department_name, manager_id,
location_id)
VALUES (280, 'Science Fiction Writers', 901, 1500);
INSERT INTO hr.jobs (job_id, job_title, min_salary, max_salary, job_type)
VALUES ('EDITOR', 'Science Fiction Editor', 100000, 199999, 'S');
INSERT INTO hr.jobs (job_id, job_title, min_salary, max_salary, job_type)
VALUES ('WRITER-1', 'Science Fiction Writer 1', 5000, 29999, 'S');
INSERT INTO hr.jobs (job_id, job_title, min_salary, max_salary, job_type)
VALUES ('WRITER-2', 'Science Fiction Writer 2', 25000, 64999, 'S');
INSERT INTO hr.jobs (job_id, job_title, min_salary, max_salary, job_type)
VALUES ('WRITER-3', 'Science Fiction Writer 3', 55000, 74999, 'S');
COMMIT;
INSERT INTO hr.employees (
employee_id,
first_name,
last_name,
email,
phone_number,
hire_date,
job_id,
salary,
commission_pct,
manager_id,
department_id
)
VALUES (
901,
'John',
'Campbell',
'jcampbell@astounding.com',
'212-555-1212',
TO_DATE('02/08/1943', 'MM/DD/YYYY'),
'EDITOR',
110000,
NULL,
100,
280
);
INSERT INTO hr.employees (
employee_id,
first_name,
last_name,
email,
phone_number,
hire_date,
job_id,
salary,
commission_pct,
manager_id,
department_id

)
VALUES (
902,
'Isaac',
'Asimov',
'iasimov@astounding.com',
'212-555-1313',
TO_DATE('01/01/1949', 'MM/DD/YYYY'),
'WRITER-1',
5000,
NULL,
901,
280
);
INSERT INTO hr.employees (
employee_id,
first_name,
last_name,
email,
phone_number,
hire_date,
job_id,
salary,
commission_pct,
manager_id,
department_id
)
VALUES (
903,
'Robert',
'Heinlein',
'bheinlein@astounding.com',
'212-555-1414',
TO_DATE('09/03/1945', 'MM/DD/YYYY'),
'WRITER-2',
25000,
NULL,
901,
280
);
INSERT INTO hr.employees (
employee_id,
first_name,
last_name,
email,
phone_number,
hire_date,
job_id,
salary,
commission_pct,
manager_id,
department_id
)
VALUES (
904,

'Ray',
'Bradbury',
'rbradbury@astounding.com',
'212-555-1515',
TO_DATE('10/31/1946', 'MM/DD/YYYY'),
'WRITER-1',
48500,
NULL,
901,
280
);
INSERT INTO hr.employees (
employee_id,
first_name,
last_name,
email,
phone_number,
hire_date,
job_id,
salary,
commission_pct,
manager_id,
department_id
)
VALUES (
905,
'Harlan',
'Ellison',
'hellison@astounding.com',
'212-555-1515',
TO_DATE('10/31/1962', 'MM/DD/YYYY'),
'WRITER-3',
32500,
NULL,
901,
280
);
COMMIT;
-----
-- Listing 3.3: Sample transactions:
-- 1.) Add a new employee
-- 2.) Update salaries and department IDs for selected employees
-- 3.) Delete the newly-added employee
-----
INSERT INTO hr.employees (
employee_id,
first_name,
last_name,
email,
phone_number,
hire_date,
job_id,
salary,

commission_pct,
manager_id,
department_id

)
VALUES (
906,
'David',
'Brin',
'dbrin@astounding.com',
'212-555-1616',
TO_DATE('10/31/1987', 'MM/DD/YYYY'),
'WRITER-3',
37500,
NULL,
901,
280
);
COMMIT;
UPDATE hr.employees
SET salary = salary * 1.05
WHERE employee_id >= 902;
COMMIT;
UPDATE hr.employees
SET department_id = 270
WHERE employee_id = 905;
COMMIT;
DELETE FROM hr.employees
WHERE employee_id = 906;
COMMIT;
-----
-- Listing 3.4: Flashback Version Example
-----
-- Using the new ORA_ROWSCN pseudocolumn
SELECT
ORA_ROWSCN,
employee_id,
first_name,
last_name
FROM hr.employees;
-- Show all changes to selected rows regardless of versions available.
-- Note the use of MINVALUE AND MAXVALUE for the timestamp range so that
-- all possible versions are shown
SET LINESIZE 120
TTITLE 'Current FLASHBACK VERSION Results For Selected Employees'

COL versions_xid       FORMAT A16       HEADING 'XID'
COL versions_startscn  FORMAT 99999999  HEADING 'Vsn|Start|SCN'
COL versions_endscn    FORMAT 99999999  HEADING 'Vsn|End|SCN'
COL versions_operation FORMAT A12       HEADING 'Operation'
COL last_name          FORMAT A12       HEADING 'Last Name'
COL department_id      FORMAT 9999      HEADING 'Dept'
COL salary             FORMAT 999999.99 HEADING 'Salary'
SELECT

versions_xid,
versions_startscn,
versions_endscn,
DECODE(
versions_operation,
'I', 'Insert',
'U', 'Update',
'D', 'Delete', 'Original') "Operation",
last_name,
department_id,
salary
FROM hr.employees
VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
WHERE employee_id >= 901;

-----
-- Listing 3.5: Flashback Transaction Example
-----
-- Show the contents of the FLASHBACK_TRANSACTION_QUERY view
SET PAGESIZE 120
SET LINESIZE 100
TTITLE 'Current FLASHBACK_TRANSACTION_QUERY Contents For Selected Employees'
COL xid         FORMAT A16      HEADING 'XID#'
COL commit_scn  FORMAT 99999999 HEADING 'Commit|SCN'
COL operation   FORMAT A10      HEADING 'Operation'
COL logon_user  FORMAT A06      HEADING 'User|Logon'
COL table_owner FORMAT A06      HEADING 'Table|Owner'
COL table_name  FORMAT A12      HEADING 'Table Name'
COL undo_sql    FORMAT A80      HEADING 'UNDO SQL'
SELECT
xid,
operation,
logon_user,
table_owner,
table_name,
commit_scn,
undo_sql
FROM flashback_transaction_query
WHERE xid IN (SELECT versions_xid
FROM hr.employees
VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
WHERE employee_id >= 901
AND versions_xid IS NOT NULL);
-----
-- Listing 3.6: Create a new table (HR.APPLICANTS)
-----
DROP TABLE hr.applicants CASCADE CONSTRAINTS;


create table HR.APPLICANTS
(
   applicant_id     NUMBER(5)     NOT NULL,
   last_name        VARCHAR2(24)  NOT NULL,
   first_name       VARCHAR2(24)  NOT NULL,
   middle_initial   VARCHAR2(1),
   gender           VARCHAR2(1),
   application_date DATE          NOT NULL,
   job_desired      VARCHAR2(10)  NOT NULL,
   salary_desired   NUMBER(10,2)  NOT NULL,
   added_on         DATE          DEFAULT SYSDATE NOT NULL,
   added_by         VARCHAR2(12)  NOT NULL,
   changed_on       DATE          DEFAULT SYSDATE NOT NULL,
   changed_by       VARCHAR2(12)  NOT NULL
)
TABLESPACE EXAMPLE
PCTFREE 10
PCTUSED 40
INITRANS 1
STORAGE
(
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
);
-- Comments
COMMENT ON TABLE hr.applicants
IS 'Controls domain of Applicants, i.e. persons who have applied for an
employment opportunity';
COMMENT ON COLUMN hr.applicants.applicant_id
IS 'Unique identifier for an Applicant';
COMMENT ON COLUMN hr.applicants.last_name
IS 'Applicant Last Name';
COMMENT ON COLUMN hr.applicants.first_name
IS 'Applicant First Name';
COMMENT ON COLUMN hr.applicants.middle_initial
IS 'Applicant Middle Initial';
COMMENT ON COLUMN hr.applicants.gender
IS 'Applicant Gender';
COMMENT ON COLUMN hr.applicants.application_date
IS 'Application Date';
COMMENT ON COLUMN hr.applicants.job_desired
IS 'Job Applied For';
COMMENT ON COLUMN hr.applicants.salary_desired
IS 'Desired Salary';
COMMENT ON COLUMN hr.applicants.added_on
IS 'Added On';
COMMENT ON COLUMN hr.applicants.added_by
IS 'Added By';
COMMENT ON COLUMN hr.applicants.changed_on
IS 'Last Updated On';
COMMENT ON COLUMN hr.applicants.changed_by
IS 'Last Updated By';

-- Create indexes and constraints


CREATE UNIQUE INDEX hr.applicants_pk_idx
ON hr.applicants(applicant_id)
TABLESPACE EXAMPLE
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE
(
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
);
ALTER TABLE hr.applicants
ADD CONSTRAINT applicants_pk
PRIMARY KEY (applicant_id);
CREATE INDEX hr.applicants_last_name_idx
ON hr.applicants(last_name)
TABLESPACE EXAMPLE
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE
(
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
);
-- Create/Recreate check constraints
ALTER TABLE hr.applicants
ADD CONSTRAINT applicant_gender_ck
CHECK ((gender IN('M', 'F') or gender IS NULL));
-- Create sequence
DROP SEQUENCE hr.seq_applicants;
CREATE SEQUENCE hr.seq_applicants
MINVALUE 1
MAXVALUE 999999999999999999999999999
START WITH 1
INCREMENT BY 1
CACHE 3;
-- Create INSERT/UPDATE row-level trigger
CREATE OR REPLACE TRIGGER hr.tr_briu_applicants
BEFORE INSERT OR UPDATE ON hr.applicants
FOR EACH ROW
DECLARE
entry_id NUMBER := 0;
BEGIN
IF INSERTING THEN
BEGIN
SELECT

hr.seq_applicants.NEXTVAL
INTO entry_id
FROM DUAL;
:new.applicant_id := entry_id;
:new.added_on := SYSDATE;
:new.added_by := DBMS_STANDARD.LOGIN_USER;
:new.changed_on := SYSDATE;
:new.changed_by := DBMS_STANDARD.LOGIN_USER;
END;
ELSIF UPDATING THEN
BEGIN
:new.changed_on := SYSDATE;
:new.changed_by := DBMS_STANDARD.LOGIN_USER;
END;
END IF;
END TR_BRIU_APPLICANTS;
/
-- Create a first set of applicants
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Aniston', 'Seth', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 88017.94);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Niven', 'Ray', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 82553.39);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Brown', 'Jackson', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 70113.04);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Murdock', 'Charlton', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 70389.16);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Bedelia', 'Colin', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 38720.86);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Chandler', 'Gino', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 55511.77);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Lerner', 'Hex', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 80587.46);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Robinson', 'Mekhi', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 49516.37);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Chestnut', 'Denis', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 73042.53);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Costa', 'Doug', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 65403.50);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Bello', 'Lucy', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 78432.05);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Playboys', 'Edward', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 54464.91);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Spader', 'Leonardo', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 49207.14);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Jovovich', 'Edwin', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 56825.48);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Graham', 'Ray', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 69169.14);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Barkin', 'Suzanne', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 49641.49);
COMMIT;
-----
-- Listing 3.7: Create a second set of applicants
-----
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Shandling', 'Mac', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 98871.03);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Sizemore', 'Casey', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 73455.15);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Pressly', 'Bob', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 63675.02);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Sandler', 'Joanna', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 56205.25);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Callow', 'Ramsey', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 90966.42);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Skerritt', 'Rade', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 44394.27);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('McBride', 'Earl', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 58023.76);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Shepherd', 'Charles', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 67411.52);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Nicholas', 'Mint', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 63045.15);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Bassett', 'Jennifer', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 86512.69);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Tobolowsky', 'Ronny', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 77830.91);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Finney', 'Marisa', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 92955.58);

insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Maxwell', 'Emily', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 72122.43);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('MacLachlan', 'Walter', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 97292.06);
COMMIT;
-----
-- Listing 3.8: Performing a FLASHBACK TABLE operation
-----
-- Enable row movement first, otherwise error ORA-08189 ("Cannot flashback
-- the table because row movement is not enabled") will result
ALTER TABLE hr.applicants ENABLE ROW MOVEMENT;
-- Issue the FLASHBACK TABLE command for the selected SCN
FLASHBACK TABLE hr.applicants TO SCN 2177093;
-----
-- Listing 3.9: The Recycle Bin
-----
-- Verify the contents of the Recycle Bin
SHOW RECYCLEBIN;
-- Show details of Recycle Bin contents
SET PAGESIZE 120
SET LINESIZE 100
TTITLE 'Current Recycle Bin Contents'
COL object_name   FORMAT A30     HEADING 'Object Name'
COL type          FORMAT A8      HEADING 'Object|Type'
COL original_name FORMAT A20     HEADING 'Original Name'
COL droptime      FORMAT A20     HEADING 'Dropped On'
COL dropscn       FORMAT 9999999 HEADING 'Drop|SCN'
SELECT
    object_name
   ,type
   ,original_name
   ,droptime
  FROM dba_recyclebin
 WHERE owner = 'HR';
-- Query directly from a table in the Recycle Bin using its object identifier
SELECT *
FROM "BIN$0M5fd0JLT2Gops3S5EDkNw==$0";
-----
-- Listing 3.10: Recycle Bin Housekeeping (in order of destructiveness)
-----

-- Purge an index from the Recycle Bin. Note that any index that's enforcing
-- a constraint can't be purged via this method
PURGE INDEX hr.applicants_last_name_idx;
-- Purge a table and its dependent objects from the Recycle Bin
PURGE TABLE hr.applicants;
-- Purge all dropped tables and their dependent objects for a specific
-- tablespace from the Recycle Bin
PURGE TABLESPACE example;
-- Purge all objects from the Recycle Bin for the current user account
PURGE RECYCLEBIN;
-- Purge all objects from the Recycle Bin for the entire database
PURGE DBA_RECYCLEBIN;
-----
-- Listing 3.11: Create a third set of applicants
-----
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Conners', 'Jose', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 49579.74);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('McFerrin', 'Jonatha', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 87397.11);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Webb', 'Night', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 85049.10);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Dzundza', 'Tramaine', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 35239.10);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('McCormack', 'Ethan', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 49729.40);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Withers', 'Andie', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 50804.93);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Goodman', 'Sonny', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 97469.88);

insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Goodman', 'Fiona', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 35213.20);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Watson', 'Melanie', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 76803.79);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Chaplin', 'Bridgette', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 67701.57);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Patton', 'Ted', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 43295.03);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Winwood', 'Chloe', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR3', 57301.55);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('King', 'Clint', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 50291.11);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Carrington', 'Joan', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR2', 91919.56);
insert into HR.APPLICANTS (LAST_NAME, FIRST_NAME, APPLICATION_DATE,
JOB_DESIRED, SALARY_DESIRED)
values ('Tyson', 'Hex', to_date('01-01-2005 12:00:00', 'dd-mm-yyyy
hh24:mi:ss'), 'IT_CNTR1', 56582.30);
COMMIT;