
ORACLE DATABASE BACKUP AND RECOVERY BEST PRACTICES AND NEW FEATURES

Presented by techaccess Team


Agenda

Oracle Data Protection Planning & Solutions
Oracle Backup and Recovery Solutions
  Physical Data Protection
    User-Managed Backup
    Recovery Manager
    Oracle Secure Backup
  Logical Data Protection
    Export and Import
    Oracle Data Pump
    Flashback Technologies
  Recovery Analysis
    Data Recovery Advisor

Assess Recovery Requirements


First step in data protection planning:

  Identify and prioritize critical data; design recovery requirements around data criticality.

  Assess tolerance for data loss, the Recovery Point Objective (RPO):
    How frequently should backups be taken?
    Is point-in-time recovery required?

  Assess tolerance for downtime, the Recovery Time Objective (RTO):
    Downtime = problem identification + recovery planning + system recovery
    Tiered RTO per level of granularity, e.g. database, tablespace, table, row

  Determine the backup retention policy: onsite, offsite, long-term.

  Assess data protection requirements:
    Physical: disasters, outages, failures, corruptions
    Logical: human errors, application errors

Interdependent RTO and RPO activities


A business process's RTO and RPO can involve several interdependent and sequential activities before the business process is fully recovered.

Oracle Data Pump


Overview of Oracle Data Pump
Data Pump Quick Start for Exp/Imp Users
New Features of Oracle Data Pump
Advanced Features of Oracle Data Pump
Getting the most from Oracle Data Pump

Data Pump Overview: Background and Usage


Replacement for the old exp and imp utilities; faster and more feature-rich.
Available starting in Oracle Database 10g; as of Oracle 11g, the original Export utility is no longer supported for general use.

Typical uses for Data Pump Export/Import:
  Logical backup of a schema or table
  Refreshing a test system from production
  Upgrades (cross-platform, or with storage reorganization)
  Moving data from production to offline usage (e.g. data warehouse, ad-hoc query)

Typical results for data load/unload:
  expdp is ~2x faster than original exp
  impdp is ~15-40x faster than original imp
  Using PARALLEL can further improve performance

Data Pump Overview: Features

Improved Speed
  Direct path load/unload
  Parallel workers

Flexibility
  INCLUDE or EXCLUDE many more object types
  REMAP schema, tablespace, data file
  Use multiple dump files for parallelism and ease of file management

Database Feature Support
  Encrypted columns
  Network move over database links
  Newer data types


Data Pump Quick Start

Directory Object
  Used for reading and writing dump files
  Allows the DBA to control where files will be written on the server system
  Default object DATA_PUMP_DIR created as of Oracle Database 10g Release 2

Interactive Command-line
  Allows the user to monitor and control Data Pump jobs
  Many job parameters can be adjusted on the fly

Example of Directory Object Usage:

Create the directory as a privileged user:
  $ sqlplus sys/<pwd> as sysdba
  SQL> CREATE DIRECTORY scott_dir AS '/usr/apps/datafiles';
  SQL> GRANT READ, WRITE ON DIRECTORY scott_dir TO scott;
  SQL> exit

User scott can then export/import using Data Pump:
  $ expdp scott/<pwd> DIRECTORY=scott_dir DUMPFILE=scott.dmp

Data Pump Quick Start

Tuning Parameters
  Data Pump handles most tuning internally

New command-line clients
  expdp/impdp instead of exp/imp

Parameter changes, a few examples:

  Data Pump Parameter        Original Exp/Imp Parameter
  -------------------------  --------------------------
  SCHEMAS                    OWNER
  REMAP_SCHEMA               TOUSER
  CONTENT=METADATA_ONLY      ROWS=N
  EXCLUDE=TRIGGER            TRIGGERS=N
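The mapping above translates directly for exp/imp users. For example, a metadata-only schema export without triggers could be written in both styles as follows (user, schema, and file names are illustrative):

```shell
# Original export: one schema, no row data, no triggers
exp system/<pwd> OWNER=scott ROWS=N TRIGGERS=N FILE=scott_meta.dmp

# Data Pump equivalent (directory object dp_dir assumed to exist)
expdp system/<pwd> SCHEMAS=scott CONTENT=METADATA_ONLY EXCLUDE=TRIGGER \
      DIRECTORY=dp_dir DUMPFILE=scott_meta.dmp
```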

New Features: Network Mode

Export over a network link:
  expdp scott/tiger network_link=db1 tables=emp dumpfile=scott.dmp directory=mydir

  Produces a local dump file set using the contents of a remote database
  Requires a local, writable database to act as an agent
  May be parallelized
  Will generally be significantly slower than exporting to a file on a local device

Import over a network link:
  impdp system/manager network_link=db1 directory=mydir

  Moves a portion of a database to a new database without creating a dump file
  Ideal when the footprint of the dump file set needs to be minimized
  May be parallelized
  Primarily a convenience: will generally be slower than exporting to a file, copying the file over the network, and importing on the target

New Features: Restartability

Data Pump jobs may be restarted without loss of data and with only minimal loss of time. Restarts may follow:
  System failure (e.g., loss of power)
  Database shutdown
  Database failure
  User stop of the Data Pump job
  Internal failure of the Data Pump job
  Exceeding dump file space on export

New Features: Restartability

Attach to and restart a stopped job:
  expdp system/manager attach=myjob
  Export> start_job

  impdp system/manager attach=myjob
  Import> start_job

Export behavior on restart:
  Export writes out objects based upon object type
  On restart, any uncompleted object types are removed from the dump file and the queries to regenerate them are repeated
  For data, incompletely written data segments (i.e., partitions or unpartitioned tables) are removed and totally rewritten when the job continues

Import behavior on restart:
  Restart is based upon the state of the individual objects recorded in the master table
  If an object was completed, it is ignored on restart
  If an object was not completed, it is reprocessed on restart
  If an object was in progress and its creation time is consistent with the previous run, it is reprocessed, but duplicate-object errors are ignored
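The same interactive mode used to restart a job can also monitor or stop one. A short sketch (the job name is illustrative):

```shell
# Attach to a running or stopped job
expdp system/<pwd> attach=myjob
# Export> status      shows per-worker progress of the attached job
# Export> stop_job    stops the job cleanly; it can later be resumed with start_job
```

Because the master table persists across the stop, `start_job` in a later session picks up exactly where the job left off.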

New Features: Parallelization

Multiple threads of execution may be used within a Data Pump job:
  Jobs complete faster, but use more database and system resources
  Only available with Enterprise Edition
  Speedup will not be realized if there are bottlenecks in I/O bandwidth, memory, or CPU
  Speedup will not be realized if the bulk of the job involves work that is not parallelizable

New Features: Parallelization

Parallel export:
  expdp system/manager directory=mydir dumpfile=a%u.dmp parallel=2

  There should be at least one file available per degree of parallelism; wildcarding file names (%u) is helpful
  All metadata is exported in a single thread of execution
  Typically each partition or unpartitioned table is processed by a single worker thread
  In certain cases, a very large partition will be processed across multiple threads of execution using parallel query

Parallel import:
  impdp system/manager directory=mydir dumpfile=a%u.dmp parallel=6

  The degree of parallelism for import does not have to match the degree used for export
  Processing of user data is split among the workers as is done for export
  Creation of package bodies is parallelized by splitting the definitions of packages across multiple parallel workers
  Index building is parallelized by temporarily specifying a DEGREE clause when an index is created

New Features: Include/Exclude

Example:
  impdp hr/hr directory=mydir dumpfile=mydump exclude=index

  Fine-grained object selection is allowed for both expdp and impdp
  Objects may be either excluded or included
  A list of object types and short descriptions of them may be found in the following views:
    DATABASE_EXPORT_OBJECTS
    SCHEMA_EXPORT_OBJECTS
    TABLE_EXPORT_OBJECTS

New Features: Include/Exclude

Exclude example:
  expdp hr/hr directory=mydir dumpfile=mydump exclude=index,trigger

  Objects described by the EXCLUDE parameter are omitted from the job
  Objects that are dependent upon an excluded object are also excluded (e.g., grants and statistics upon an index are excluded if the index is excluded)
  Multiple object types may be excluded in a single job

Include example:
  impdp hr/hr directory=mydir dumpfile=mydump include=procedure,function

  Objects described by the INCLUDE parameter are the only objects included in the job
  Objects that are dependent upon an included object are also included (e.g., grants upon a function are included if the function is included)
  Multiple object types may be included in a single job
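INCLUDE and EXCLUDE also accept a name filter after the object type, allowing selection of individual objects rather than whole types. For example (table names illustrative):

```shell
# Export only the EMP and DEPT tables from the HR schema
expdp hr/hr directory=mydir dumpfile=empdept.dmp include=TABLE:"IN ('EMP','DEPT')"
```

On most operating-system shells the quotes in such filters need escaping, so name filters are commonly placed in a parameter file and passed with PARFILE instead.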

New Features: SQLFILE

  Specifies a file into which the DDL that would have been executed by the import job is written
  The actual import is not performed; only the DDL file is created
  Can be combined with EXCLUDE/INCLUDE to tailor the contents of the SQLFILE

Example: to get a SQL script that will create just the tables and indexes contained in a dump file:
  impdp user/pwd DIRECTORY=DPUMP_DIR1 DUMPFILE=export.dmp INCLUDE=TABLE,INDEX SQLFILE=create_tables.sql

The output of SQLFILE is executable, but it does not include passwords.

Oracle Flashback Technologies

Overview of Oracle Flashback
Enabling Flashback technology
Flashback functions
New features of Flashback in Oracle 11g Release 2

Fast rewind of logical errors

Flashback Technologies: Overview

Oracle9i introduced Flashback Query to provide a simple, powerful, and completely non-disruptive mechanism for recovering from human errors. Oracle Database 10g extended Flashback Technology to provide fast, easy, low-impact recovery at the database, table, row, and transaction level:

  Flashback Database: restore the database to a past point in time
  Flashback Drop: restore accidentally dropped tables (based on free space in the tablespace)
  Flashback Table: restore the contents of tables to a past point in time (undo-based)
  Flashback Query: view data as it existed at a past point in time (undo-based)
  Flashback Transaction: back out a transaction and all subsequent conflicting transactions (redo-based)

Flashback Technologies: Overview

Flashback Database is implemented using a new type of log file called Flashback Database logs. The Oracle database server periodically logs before-images of data blocks in the Flashback Database logs.

Flashback revolutionizes error recovery:
  View good data as of a past point in time
  Simply rewind data changes
  The time to correct an error is on the order of the time it took to make it, rather than Correction Time = Error Time + f(DB_SIZE) as with traditional restore-based recovery
  An excellent tool for configuring QA, development, and training databases
  Flashback is easy: simple commands, no complex procedures

Enable and Disable Flashback Database

Enabling Flashback Database:
  Make sure the database is in archivelog mode.
  Configure the recovery area by setting two parameters:
    DB_RECOVERY_FILE_DEST
    DB_RECOVERY_FILE_DEST_SIZE
  Set the Flashback Database retention target: DB_FLASHBACK_RETENTION_TARGET
  Open the database in MOUNT EXCLUSIVE mode and turn on the flashback feature:
    SQL> STARTUP MOUNT EXCLUSIVE;
    SQL> ALTER DATABASE FLASHBACK ON;

Disabling Flashback Database:
  SQL> ALTER DATABASE FLASHBACK OFF;

Determine whether Flashback Database is enabled:
  SQL> select flashback_on from v$database;

  FLASHBACK_ON
  ------------------
  YES

Monitoring Flashback Database

Estimate the amount of flashback log generated at various times:
  SQL> select begin_time, flashback_data, db_data, redo_data, estimated_flashback_size
       from v$flashback_database_stat;

Monitor the Flashback Database retention target (how far into the past you can flash back):
  SQL> select * from v$flashback_database_log;

The default value for the flashback retention target is 1440 minutes (one day).
The flashback background process is RVWR:
  $ ps -ef | grep rvwr
  oracle 25169 1 0 16:32:22 ? 0:00 ora_rvwr_grid

Flashback Database

The database must be in MOUNT state before using Flashback Database operations. A flashback operation performs a point-in-time recovery, which is a form of incomplete recovery, so you need to open the database with the RESETLOGS clause after performing a flashback operation. You can flash back your database to any of the following:

  A specific point in time, specified by date and time
  A specific SCN
  The last RESETLOGS operation
  A named restore point
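Named restore points such as those used in the commands below must be created ahead of time. A guaranteed restore point additionally forces the database to keep the flashback logs needed to return to it. A sketch (the restore point names are illustrative):

```sql
-- Create an ordinary restore point
CREATE RESTORE POINT rp6;

-- Create a guaranteed restore point, e.g. before a risky upgrade
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- List existing restore points
SELECT name, scn, time FROM v$restore_point;
```

Guaranteed restore points should be dropped when no longer needed, since the retained flashback logs consume recovery area space.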

Flashback using RMAN


RMAN> flashback database to scn 1050951;
RMAN> flashback database to time 'sysdate-2/60/24';
RMAN> flashback database to restore point rp6;
RMAN> flashback database to before resetlogs;

Flashback Database

Flashback using SQL*Plus

SQL> flashback database to timestamp
     to_date('1/22/2007 00:00:00','mm/dd/yyyy hh24:mi:ss');
SQL> flashback database to scn 1050900;

SQL> flashback database to restore point rp1;
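Putting the pieces together, a complete Flashback Database session follows the sequence mount, flash back, verify, open with RESETLOGS (the SCN value is illustrative):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN 1050900;

-- Optionally open read-only first to verify the data before making the rewind permanent
ALTER DATABASE OPEN READ ONLY;

-- If satisfied, complete the point-in-time recovery
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN RESETLOGS;
```

If verification shows the target time was wrong, the database can be flashed back again (or recovered forward) before OPEN RESETLOGS is issued.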

Flashback Drop

In Oracle Database 10g, when a table is dropped, it is not really erased from the database; rather, it is renamed and placed in a logical container called the recycle bin, similar to the Recycle Bin in Windows. The extents allocated to the segment are not reclaimed until you purge the object. You can reinstate a dropped table at any time with a single command.

Recycle bin

The recycle bin retains dropped database objects until:
  You permanently drop them with the PURGE command
  You recover them with the FLASHBACK TABLE command
  There is no room in the tablespace for new rows or updates to existing rows

You can view the dropped objects in the recycle bin from two dictionary views (or with SHOW RECYCLEBIN in SQL*Plus):
  USER_RECYCLEBIN: lists the current user's dropped objects
  DBA_RECYCLEBIN: lists all dropped objects system-wide

You can purge recycle bin objects using the following commands:
  SQL> purge recyclebin;        -- purges the current user's objects from the recycle bin
  SQL> purge dba_recyclebin;    -- purges all objects from the recycle bin
  SQL> purge tablespace users;  -- purges all objects belonging to the USERS tablespace from the recycle bin

Flashback Drop

After checking the contents of the recycle bin, you can recover a dropped object with one of the following:
  SQL> flashback table accounts to before drop;
  SQL> flashback table accounts to before drop rename to new_accounts;
  SQL> flashback table "BIN$bQ8QU1bWSD2Rc9uHevUkTw==$0" to before drop;

You also have the option to drop a table permanently, bypassing or emptying the recycle bin:
  SQL> drop table test purge;
  SQL> purge table "BIN$0+ktoVChEmXgNAAADiUEHQ==$0";

Flashback Table

Flashback Table allows you to recover a table or set of tables to a specific point in time without restoring a backup. When you use Flashback Table to restore a table to a specific point in time, all associated objects, such as indexes, constraints, and triggers, are restored as well.

Flashback Table operations are not valid for the following object types:
  Tables that are part of a cluster
  Materialized views
  Advanced Queuing tables
  Static data dictionary tables
  System tables
  Partitions of a table
  Remote tables (via database link)

Row movement must be enabled on a table before you can flash it back:
  SQL> ALTER TABLE billing ENABLE ROW MOVEMENT;

Flashback Table

The data used to recover a table is stored in the undo tablespace. You can use the UNDO_RETENTION parameter to set the amount of time undo information is retained in the database; the default is 900 seconds (15 minutes). When an active transaction uses all the undo tablespace, the system will start reusing undo space that would otherwise have been retained, unless you have specified RETENTION GUARANTEE for the tablespace. To create an undo tablespace with the RETENTION GUARANTEE option, issue the following command:

  SQL> CREATE UNDO TABLESPACE undo_tbs
       DATAFILE '/u02/oradata/grid/undo_tbs01.dbf' SIZE 1G
       RETENTION GUARANTEE;

You must have the FLASHBACK TABLE or FLASHBACK ANY TABLE system privilege to use the Flashback Table feature.

Flashback Table

This statement returns the billing table to a specific SCN:

  SQL> FLASHBACK TABLE billing TO SCN 76230;

This statement returns the billing table to a specific timestamp:

  SQL> FLASHBACK TABLE billing
       TO TIMESTAMP TO_TIMESTAMP('06/25/03 12:00:00','MM/DD/YY HH:MI:SS');

Flashback Version Query


Flashback Query was first introduced in Oracle9i to provide a way for you to view historical data. In Oracle 10g, this feature has been extended: you can now retrieve all versions of a row that exist or ever existed between the time the query was issued and a point back in time. You can use the VERSIONS BETWEEN clause to retrieve all historical data related to a row; Flashback Versions Query retrieves all committed occurrences of the row. The row history data is stored in the undo tablespace. The UNDO_RETENTION initialization parameter specifies how long the database keeps committed undo information. If a new transaction needs undo space and there is not enough free space left, undo information older than the specified retention period may be overwritten. You can set the undo tablespace to RETENTION GUARANTEE to retain all row histories.

Flashback Version Query Example

SQL> create table emp (name varchar2(10), salary number(8,2));
Table created.
SQL> insert into emp values ('DANIEL', 2000);
1 row created.
SQL> commit;
Commit complete.
SQL> update emp set salary = 3000 where name = 'DANIEL';
1 row updated.
SQL> commit;
Commit complete.
SQL> select * from emp;

NAME           SALARY
---------- ----------
DANIEL           3000

SQL> select * from emp versions between scn minvalue and maxvalue;

NAME           SALARY
---------- ----------
DANIEL           3000
DANIEL           2000
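The VERSIONS clause also exposes pseudocolumns identifying each row version and the transaction that created it. Continuing the emp example above:

```sql
SELECT versions_xid, versions_starttime, versions_endtime,
       versions_operation, name, salary
FROM   emp
VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
WHERE  name = 'DANIEL';
-- VERSIONS_OPERATION is I (insert), U (update), or D (delete);
-- VERSIONS_XID links each version to FLASHBACK_TRANSACTION_QUERY.XID
```

The VERSIONS_XID value is what you feed into Flashback Transaction Query (next section) to find the UNDO_SQL that reverses a given change.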

Flashback Transaction Query


It provides a way for you to view changes made to the database at the transaction level, allowing you to diagnose problems, perform analysis, and audit transactions. You can use this feature in conjunction with Flashback Versions Query to roll back the changes made by a transaction. You can retrieve the transaction history from the FLASHBACK_TRANSACTION_QUERY view:

 Name                        Null?    Type
 --------------------------- -------- --------------
 XID                                  RAW(8)
 START_SCN                            NUMBER
 START_TIMESTAMP                      DATE
 COMMIT_SCN                           NUMBER
 (etc.)

Flashback Transaction Query


SQL> select xid, start_scn, start_timestamp, table_name, undo_sql
     from flashback_transaction_query
     where xid = '0009001F000000B2';

XID              START_SCN  START_TIMESTAMP      TABLE_NAME UNDO_SQL
---------------- ---------- -------------------- ---------- ----------------------
0009001F000000B2     714980 Feb 21 2004 23:30:31 EMP        update "ORACLE"."EMP" set "SALARY" = 2000 where ROWID = 'AAAMWJAAEAAAAFsAAA';

Flashback Data Archive (11g R2 New Feature)

Transparently tracks historical changes to all Oracle data in a highly secure and efficient manner.

Secure:
  No possibility to modify historical data
  Retained according to your specifications
  Automatically purged based on your retention policy

Efficient:
  Special kernel optimizations to minimize the performance overhead of capturing historical data
  Stored in compressed form in tablespaces to minimize storage requirements
  Completely transparent to applications
  Easy to set up

Flashback Data Archive Overview

A flashback data archive is a historical data store. Oracle Database 11g automatically tracks and archives the data in tables enabled for Flashback Data archive with a new Flashback Data archive background process, FBDA

Flashback Data Archive Overview

The Flashback Data Archive background process, FBDA, starts with the database.

FBDA operates first on the undo in the buffer cache. If the undo has already left the buffer cache, FBDA can also read the required values from the undo segments. FBDA consolidates the modified rows of flashback archive-enabled tables and writes them into the appropriate history tables, which make up the flashback data archive.

Flashback Data Archive Workflow


1. Create the flashback data archive.
2. Optionally, specify the default flashback data archive.
3. Enable the flashback data archive for the desired tables.
4. View flashback data archive data.

Flashback Data Archive Configuration


1. Create a default flashback data archive:

   CREATE FLASHBACK ARCHIVE DEFAULT fla2
     TABLESPACE tbs1
     QUOTA 10G
     RETENTION 2 YEAR;

2. Enable history tracking for a table:

   ALTER TABLE stock_data FLASHBACK ARCHIVE;

3. Disable history tracking:

   ALTER TABLE stock_data NO FLASHBACK ARCHIVE;

Flashback Data Archive Maintenance

Adding space:
  ALTER FLASHBACK ARCHIVE fla1 ADD TABLESPACE tbs3 QUOTA 5G;

Changing retention time:
  ALTER FLASHBACK ARCHIVE fla1 MODIFY RETENTION 2 YEAR;

Purging data:
  ALTER FLASHBACK ARCHIVE fla1
    PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);

Dropping a flashback data archive:
  DROP FLASHBACK ARCHIVE fla1;

Viewing flashback data archive metadata:
  *_FLASHBACK_ARCHIVE
  *_FLASHBACK_ARCHIVE_TS
  *_FLASHBACK_ARCHIVE_TABLES
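Once a table is flashback-archive-enabled, history older than the undo retention period can still be queried with ordinary flashback query syntax; the database serves it from the archive transparently. Using the stock_data table from the configuration example:

```sql
-- Rows as they looked one day ago, served from the flashback data archive
-- if the undo has already aged out
SELECT *
FROM   stock_data
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);
```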

Flashback Data Archive Guidelines


To ensure database consistency, always perform a COMMIT or ROLLBACK operation before querying past data.
All flashback processing uses the current session settings, such as national language and character set, not the settings that were in effect at the time being queried.
To obtain an SCN to use later with a flashback feature, you can use the DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER function.
To compute or retrieve a past time to use in a query, use a function return value as a timestamp or SCN argument; for example, add or subtract an INTERVAL value to the value of the SYSTIMESTAMP function.
To query past data at a precise time, use an SCN. If you use a timestamp, the actual time queried might be up to 3 seconds earlier than the time you specify, because the Oracle database server uses SCNs internally and maps them to timestamps at a granularity of 3 seconds.
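The SCN-capture guideline above looks like this in practice (the table name and SCN value are illustrative):

```sql
-- Record the current SCN before making a risky change
SELECT dbms_flashback.get_system_change_number FROM dual;

-- Later, query the data as of that recorded SCN
SELECT * FROM emp AS OF SCN 1050900;
```

Capturing an SCN rather than a timestamp avoids the 3-second timestamp-mapping granularity noted above.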

Flashback Transaction Backout (11g R2)

A logical recovery option to roll back a specific transaction and all its dependent transactions:
  Uses redo logs and supplemental logging
  Creates and executes compensating transactions
  You finalize the changes with a commit, or roll them back

Faster and easier than a laborious manual approach.

Flashing back a transaction

You can flash back a transaction by using Enterprise Manager or the command line. Enterprise Manager uses the Flashback Transaction Wizard, which calls the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option. If the PL/SQL call finishes successfully, it means that the transaction does not have any dependencies and a single transaction is backed out successfully.
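From the command line, the same procedure can be invoked directly. A sketch, assuming the transaction ID was first located via FLASHBACK_TRANSACTION_QUERY (the XID value is illustrative, and the exact element type accepted by SYS.XID_ARRAY should be checked against your release's documentation):

```sql
DECLARE
  xids sys.xid_array := sys.xid_array('0009001F000000B2');
BEGIN
  -- NOCASCADE raises an error if dependent transactions exist
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    numtxns => 1,
    xids    => xids,
    options => DBMS_FLASHBACK.NOCASCADE);
END;
/
-- Review DBA_FLASHBACK_TXN_REPORT, then COMMIT to keep or ROLLBACK to discard
```

The compensating transaction is left uncommitted, which is why the final COMMIT or ROLLBACK in the session decides the outcome.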

Viewing a flashback Dependency Report

After choosing your backout option, the dependency report is visible in the DBA_FLASHBACK_TXN_STATE and DBA_FLASHBACK_TXN_REPORT views.

  Review the dependency report, which shows all transactions that were backed out.
  Commit the changes to make them permanent, or roll back to discard them.

Flashback Database Enhancements in 11g R2

  Enable Flashback Database while the database is open
  Monitor Flashback Database progress by using the SOFAR and TOTALWORK columns of V$SESSION_LONGOPS
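The progress of a running Flashback Database operation can be watched with a query along these lines (the OPNAME filter is an assumption about how the operation is labeled):

```sql
SELECT sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done
FROM   v$session_longops
WHERE  opname LIKE 'Flashback%'
AND    totalwork > 0;
```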

Recovery Manager (RMAN)

Overview of Recovery Manager (RMAN)
Features of RMAN
RMAN Components
Repository Configuration in RMAN
New Features of RMAN in 11g R2

Overview of Recovery Manager (RMAN)