High Availability
Oracle Internet Directory is designed to meet the needs of a variety of important applications. For
example, it supports full, multimaster replication between directory servers: If one server in a replication
community becomes unavailable, then a user can access the data from another server. Information about
changes made to directory data on a server is stored in special tables on the Oracle9i database. These
are replicated throughout the directory environment by Oracle9i Replication, a robust replication
mechanism.
Oracle Internet Directory also takes advantage of all the availability features of the Oracle9i database. Because
directory information is stored securely in the Oracle9i database, it is protected by Oracle's backup
capabilities. Additionally, the Oracle9i database, running with large datastores and heavy loads, can
recover from system failures quickly.
Security
Oracle Internet Directory offers comprehensive and flexible access control. An administrator can grant or
restrict access to a specific directory object or to an entire directory subtree. Moreover, Oracle Internet
Directory implements three levels of user authentication: anonymous, password-based, and certificate-based
using Secure Sockets Layer (SSL) version 3 for authenticated access and data privacy.
*************************************************************************************
CHAP 2 Oracle Net Architecture
This illustration depicts the various layers of stack communication used in a client/server application
connection. On the client side, from the top down, the stack is constructed with the following layers:
• Client Application (uses OCI)
• Presentation - TTC
• Oracle Net Foundation Layer
• Oracle Protocol Support
• Network Protocol (TCP/IP, TCP/IP with SSL, VI, LU6.2)
On the server side, from the top down, the stack is constructed with the following layers:
• RDBMS (uses OPI)
• Presentation - TTC
• Oracle Net Foundation Layer
• Oracle Protocol Support
• Network Protocol (TCP/IP, TCP/IP with SSL, VI, LU6.2)
The Oracle Net Foundation Layer and Oracle Protocol Support together make up Oracle Net. Naming methods
are associated with the Oracle Net Foundation Layer on the client side; security services are associated
with the Oracle Net Foundation Layer on both the client and server sides.
Client Application
During a session with the database, the client uses Oracle Call Interface (OCI) to interact with the
database server. OCI is a software component that provides an interface between the client application
and the SQL language the database server understands.
Left figure: This illustration depicts stack communication layers used by JDBC drivers. The JDBC OCI
driver stack, from the top down, is constructed with the following layers:
Right figure: The JDBC Thin driver stack, from the top down, is constructed with the following layers:
Web clients that do not require an application Web server to access applications can access the Oracle
database directly, for example, by using a Java applet. In addition to regular connections, the database
can be configured to accept HTTP and Internet Inter-ORB Protocol (IIOP) connections. These protocols
are used for connections to Oracle9i JVM in the Oracle9i instance.
The Oracle database server is also configured to support HTTP and IIOP.
One Web browser uses the HTTP protocol to connect to the Oracle Net layer on the database server. The
second Web browser uses the IIOP protocol to connect to the Oracle Net layer on the database server.
The third Web browser shows a communication stack. From the top down, the stack is constructed with
the following layers: 1. Java Applet 2. JDBC Thin Driver 3. JavaNet
This browser uses the TCP/IP network protocol to connect to the Oracle Net layer on the database.
Note: TNS_ADMIN : You can add the TNS_ADMIN parameter to change the directory name for
configuration files from the default location. For example, if you set TNS_ADMIN to
ORACLE_BASE\ORACLE_HOME\test\admin, the configuration files are used from
ORACLE_BASE\ORACLE_HOME\test\admin.
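For example (the directory paths below are illustrative only), TNS_ADMIN could be set before starting a
client tool as follows:
Windows:        C:\> set TNS_ADMIN=D:\oracle\ora92\test\admin
UNIX (sh/ksh):  $ TNS_ADMIN=/u01/app/oracle/product/9.2.0/test/admin; export TNS_ADMIN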
*************************************************************************************
CHAP 3 Basic Oracle Net Server Side Configuration
Note: An Oracle9i database is supported only by an Oracle9i listener; an Oracle9i listener can also be used
with earlier versions of the Oracle database.
Connection Methods:
When a connection request is made by a client to a server, the listener performs one of the following:
• Spawns a process and bequeaths (passes) the connection to it (dedicated server configuration)
• Hands off the connection to a dispatcher in an Oracle Shared Server configuration (not possible in a
dedicated server configuration)
• Redirects the connection to a dispatcher or an existing server process (shared server configuration)
Note: Whether a connection is bequeathed, handed off, or redirected to an existing process, the session is
transparent to the user; it can be detected only by turning on tracing.
This illustration shows the connection sequence described in the preceding text. It also shows a database
instance that contains a dedicated server process, enabling the client to connect to an Oracle database.
Note: USE_SHARED_SOCKET: You can set the USE_SHARED_SOCKET parameter to TRUE to enable
the use of shared sockets. If this parameter is set to true, the network listener passes the socket
descriptor for client connections to the database thread. As a result, the client does not need to establish
a new connection to the database thread and database connection time improves. Also, all database
connections share the port number used by the network listener, which can be useful if you are setting
up third-party proxy servers. The default is FALSE.
This parameter only works in dedicated server mode in a TCP/IP environment (using Windows Sockets
API WINSOCK2). If this parameter is set, you cannot use the 9.0 listener to spawn Oracle 7.x databases.
To spawn a dedicated server for an Oracle database not associated with the same Oracle home as the
listener and have shared socket enabled, you must also set the variable USE_SHARED_SOCKET for both
Oracle homes.
This illustration shows the direct hand-off sequence described in the preceding text. It also shows a
database instance that contains a dispatcher and two shared server processes. One of the shared server
processes picks up the connection request from the dispatcher, enabling a connection to an Oracle
database.
REDIRECTED SESSION
1. The listener receives a client connection request.
2. The listener provides the location of the dispatcher to the client in a redirect message.
3. The client connects directly to the dispatcher.
This illustration shows the redirected connection sequence described in the preceding text. It also shows
a database instance that contains a dispatcher and two shared server processes. One of the shared
server processes picks up the connection request from the dispatcher, enabling a connection to an Oracle
database.
The listener.ora file is used to configure the listener. The listener.ora file must reside on the machine or
node on which the listener is to reside. The listener.ora file contains configuration information for the
following:
• The listener name
• The listener address
• Databases that use the listener
• Listener parameters
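As an illustration only (the listener name, host, port, Oracle home, and SID below are placeholders, not
values taken from these notes), a minimal listener.ora covering these items might look like:
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = papai)(PORT = 1521))
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ORADB)
      (ORACLE_HOME = D:\oracle\ora92)
      (SID_NAME = ORADB)
    )
  )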
Dynamic service registration: Registering Information with the Default, Local Listener
By default, the PMON process registers service information with its local listener on the default local
address of TCP/IP, port 1521. As long as the listener configuration is synchronized with the database
configuration, PMON can register service information with a nondefault local listener or a remote listener
on another node.
If you want PMON to register with a local listener that does not use TCP/IP, port 1521,
configure the LOCAL_LISTENER parameter in the initialization parameter file to locate the local listener. If
you are using shared server, you can also use the LISTENER attribute of the DISPATCHERS parameter in
the initialization parameter file to register the dispatchers with a nondefault local listener.
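A hedged sketch of the nondefault local listener case (the alias and port are illustrative): in the
initialization parameter file,
LOCAL_LISTENER = listener_1522
DISPATCHERS = "(PROTOCOL=TCP)(LISTENER=listener_1522)"
and in the tnsnames.ora file read by the instance,
listener_1522 = (ADDRESS = (PROTOCOL = TCP)(HOST = papai)(PORT = 1522))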
LSNRCTL:
Once the listener is configured, the listener can be administered with the Listener Control utility
Using LSNRCTL command
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.
C:\>lsnrctl
LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 09-OCT-2006 11:27:59
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL>
Controlling a nondefault listener:
1. LSNRCTL> set current_listener [listener_name]    2. $ lsnrctl start [listener_name]
Starting and Stopping the Listener
STOP Command: To stop the listener from the command line, enter:
lsnrctl STOP [listener_name] or $ lsnrctl stop [listener_name]
where listener_name is the name of the listener defined in the listener.ora file. It is not necessary to
identify the listener if you are using the default listener, named LISTENER.
START Command: To start the listener from the command line, enter:
lsnrctl START [listener_name] or $ lsnrctl start [listener_name]
where listener_name is the name of the listener defined in the listener.ora file. It is not necessary to
identify the listener if you are using the default listener, named LISTENER.
In addition to starting the listener, the Listener Control utility verifies connectivity to the listener.
Monitoring Runtime Behavior: The STATUS and SERVICES commands provide information about the
listener. When entering these commands, follow the syntax as shown for the STOP and START
commands.
STATUS Command
The STATUS command provides basic status information about a listener, including a summary of listener
configuration settings, the listening protocol addresses, and a summary of services registered with the
listener.
STATUS displays the following (the status can also be obtained from the OEM console):
* Name of the listener
* Version of listener
* Start time and up time
* Tracing level
* Logging and tracing configuration settings
* listener.ora file being used
* Whether a password is set in listener.ora file
* Whether the listener can respond to queries from an SNMP-based network management system
Command Description
CHANGE_PASSWORD Dynamically changes the encrypted password of a listener.
EXIT Quits the LSNRCTL utility.
HELP Provides the list of all available LSNRCTL commands.
QUIT Provides the functionality of the EXIT command.
RELOAD Shuts down everything except listener addresses and rereads the listener.ora
file. You use this command to add or change services without actually stopping
the listener.
SAVE_CONFIG Creates a backup of your listener configuration file (called listener.bak) and updates the
listener.ora file itself to reflect any changes.
SERVICES Provides detailed information about the services the listener listens for.
SET parameter This command sets a listener parameter.
SHOW parameter This command lists the value of a listener parameter.
You need to set an encrypted password for the listener, LSNR. Which two options could you use to set
the password? (Choose two.)
A. use Oracle Net Manager
B. use the Listener Control utility
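Both answers can be sketched briefly. With the Listener Control utility the sequence is roughly (listener
name taken from the question):
LSNRCTL> SET CURRENT_LISTENER lsnr
LSNRCTL> CHANGE_PASSWORD       (prompts for the old password, if any, and the new password)
LSNRCTL> SAVE_CONFIG           (writes the encrypted password into listener.ora)
Oracle Net Manager offers an equivalent password setting in the listener's general parameters.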
Host Naming:
• The client connects to the Oracle database using Oracle Net Services client software.
• The client and server are connecting using TCP/IP.
• The host name is resolved through an IP address translation mechanism such as Domain Name
Services (DNS), Network Information Services (NIS), or a centrally maintained TCP/IP /etc/hosts file.
• No Oracle Connection Manager features or security options are requested.
Advantages:
1. Requires minimal user configuration. The user need provide only the name of the host to establish
a connection.
2. Eliminates the need to create and maintain a local names configuration file (tnsnames.ora)
3. Eliminates the need to understand Oracle Names or OID administration procedures.
Disadvantage:
Available only in a limited environment; it can identify only one SID per node.
Local Naming
Simple distributed networks with a small number of services that change infrequently.
Advantages:
* Provides a relatively straightforward method for resolving net service name addresses
* Resolves net service names across networks running different protocols
* Configured using a graphical configuration tool (Oracle Net Manager)
Disadvantage: Requires local configuration of all net service name and address changes stored in
tnsnames.ora file
Required files: Client: sqlnet.ora and tnsnames.ora. Server: listener.ora.
Required setting: sqlnet.ora [NAMES.DIRECTORY_PATH = (TNSNAMES)]
Generated Files:
tnsnames.ora file
A configuration file that contains net service names mapped to connect descriptors. This file is used for
the local naming method. The tnsnames.ora file must reside in one of the following locations:
1. The directory specified by the TNS_ADMIN environment variable. If TNS_ADMIN is not defined as an
environment variable on Windows NT, it may be defined in the registry.
2. The node's global configuration directory. For Sun Solaris, this directory is /var/opt/oracle.
Windows NT does not have a central directory.
3. The $ORACLE_HOME/network/admin directory on UNIX or the ORACLE_HOME\network\admin
directory on Windows operating systems.
Which one of the following statements about the TNSPING utility is correct?
Ans. It does not require the username and password to check the connectivity of the service.
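For example, the net service name defined below could be checked with an optional repeat count:
C:\> tnsping ORADB 3
TNSPING only resolves the name and contacts the listener, reporting OK with a round-trip time or a TNS
error; no database logon is performed, which is why no username or password is needed.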
ORADB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = papai)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORADB)
)
)
INST1_HTTP =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = papai)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = SHARED)
(SERVICE_NAME = MODOSE)
(PRESENTATION = http://HRService)
)
)
EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)
Parameter Description
ORADB Net service name and domain name.
DESCRIPTION Keyword for describing the connect descriptor. Descriptions are always specified the same way.
ADDRESS Keyword for the address specification. If multiple addresses are specified, use the keyword
ADDRESS_LIST prior to the ADDRESS.
PROTOCOL Specifies the protocol used.
HOST Protocol-specific information for TCP/IP; specifies the host name or IP address of the server. Can
differ for another protocol.
PORT Protocol-specific information for TCP/IP; specifies the port number on which the server-side listener
is listening.
CONNECT_DATA Specifies the database service name or SID to which to connect.
sqlnet.ora file
A configuration file for the client or server that specifies the:
• Client domain to append to unqualified service names or net service names
• Preferred order of naming methods that the client should use when resolving a name
• External naming parameters
The sqlnet.ora file must reside in one of the following locations:
1. The directory specified by the TNS_ADMIN environment variable. If TNS_ADMIN is not defined as an
environment variable on Windows NT, it may be defined in the registry.
2. The $ORACLE_HOME/network/admin directory on UNIX or the ORACLE_HOME\network\admin
directory on Windows operating systems.
Troubleshooting
ORA-12520 TNS:listener could not find available handler for requested type of server Which action
should you take first to investigate the problem?
Ans. Executing the LSNRCTL SERVICES command to verify that the instances are registered
with the listener and that the appropriate service handler exists and is ready
*************************************************************************
CHAP 5 Usage and Configuration of Oracle Shared Server
Server Configurations
Dedicated server (two-task) process
Shared server process [part of Oracle Shared Server architecture]
The program interface in use here depends on whether the user and the dedicated server processes are
on the same machine. If they are, the host operating system’s interprocess communication mechanism is
used for the program interface between processes.
To request a dedicated server, the clause SERVER=DEDICATED must be included in the Oracle Net TNS
connection string within the tnsnames.ora file:
TEST.world =
(DESCRIPTION =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = wwed151-sun)
(PORT = 1521)
)
(CONNECT_DATA = (SERVICE_NAME = TEST.US.ORACLE.COM)
(SERVER=DEDICATED)
)
)
Technical Note: For most platforms, if your machine has plenty of memory to support dedicated
servers, you should use that configuration. In this situation, performance is likely to be better.
There are exceptions such as NT, in which performance may improve using the shared server
configuration due to the asynchronous nature of shared server architecture.
Technical Note
If the user call is from across a network, the dispatcher process chosen by the listener must match the
protocol of the network being used.
Processing a Request
1 A user sends a request to its dispatcher.
2 The dispatcher places the request into the request queue in the System Global Area (SGA).
3 A shared server picks up the request from the request queue and processes the request.
4 The shared server places the response on the calling dispatcher’s response queue.
5 The response is handed off to the dispatcher.
6 The dispatcher returns the response to the user.
Once the user call has been completed, the shared server process is released and is available to service
another user call in the request queue.
Request Queue
• One request queue is shared by all dispatchers.
• Shared servers monitor the request queue for new requests.
• Requests are processed on a first-in, first-out basis.
Response Queue
• Shared servers place all completed requests on the calling dispatcher’s response queue.
• Each dispatcher has its own response queue in the SGA.
• Each dispatcher is responsible for sending completed requests back to the appropriate user process.
• Users are connected to the same dispatcher for the duration of a session.
• Text and parsed forms of all SQL statements are stored in the SGA.
• The cursor state contains run-time memory values for the SQL statement, such as rows retrieved.
• User session data includes security and resource usage information.
• The stack space contains local variables for the process.
Technical Note: The change in the SGA and PGA is transparent to the user; however, if supporting
multiple users, you need to increase the SHARED_POOL_SIZE per connection.
Each shared server process needs to access the data spaces of all sessions so that any server can handle
requests from any session. Space is allocated in the SGA for each session’s data space. You can limit the
amount of space that a session can allocate by setting the resource limit PRIVATE_SGA to the desired
amount of space in the user profile.
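A hedged SQL sketch of that limit (the profile and user names are hypothetical; RESOURCE_LIMIT must be
TRUE for profile resource limits to be enforced):
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
CREATE PROFILE shared_srv_profile LIMIT
  PRIVATE_SGA 512K;   -- cap on per-session data space kept in the SGA
ALTER USER scott PROFILE shared_srv_profile;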
ADDRESS (ADD or ADDR): The network address on which the dispatchers listen (includes the protocol).
DESCRIPTION (DES or DESC): The network description of the end point on which the dispatchers listen
(includes the protocol).
DISPATCHERS (DIS or DISP): The initial number of dispatchers to start (default is 1).
SESSIONS (SES or SESS): The maximum number of network sessions for each dispatcher. The default is
OS specific (16K).
LISTENER (LIS or LIST): The network name of an address or address list of the listeners with which the
dispatchers register (the listener or listeners can reside on other nodes). The LISTENER attribute
facilitates administration of multi-homed hosts and specifies the appropriate listeners with which the
dispatchers will register; it overrides the LOCAL_LISTENER parameter and is typically used for a listener
on a nondefault port (not 1521).
CONNECTIONS (CON or CONN): An integer specifying the maximum number of network connections to
allow for each dispatcher. The default is OS specific (1024 for Solaris and NT).
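Putting several of these attributes together, an initialization parameter entry might be sketched as follows
(the protocol, counts, and listener alias are illustrative):
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)(SESSIONS=500)(CONNECTIONS=500)(LISTENER=listener_1522)"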
MAX_DISPATCHERS
MAX_DISPATCHERS specifies the maximum number of dispatcher processes allowed to be running
simultaneously. The default value applies only if dispatchers have been configured for the system.
The value of MAX_DISPATCHERS should at least equal the maximum number of concurrent sessions
divided by the number of connections for each dispatcher. For most systems, a value of 250 connections
for each dispatcher provides good performance. If the parameter file starts dispatchers for TCP and IPC,
you cannot later start dispatchers for another protocol without changing the parameter file and restarting
the instance.
SHARED_SERVERS
SHARED_SERVERS specifies the number of server processes that you want to create when an instance is
started up. If system load decreases, this minimum number of servers is maintained. Therefore, you
should take care not to set SHARED_SERVERS too high at system startup.
MAX_SHARED_SERVERS
MAX_SHARED_SERVERS specifies the maximum number of shared server processes allowed to be
running simultaneously. If artificial deadlocks occur too frequently on your system, you should increase
the value of MAX_SHARED_SERVERS. Oracle allocates additional shared servers dynamically, based on the
length of the request queue.
CIRCUITS
CIRCUITS specifies the total number of virtual circuits that are available for inbound and outbound
network sessions. It is one of several parameters that contribute to the total SGA requirements of an
instance.
SHARED_SERVER_SESSIONS
SHARED_SERVER_SESSIONS specifies the total number of shared server architecture user sessions to
allow. Setting this parameter enables you to reserve user sessions for dedicated servers.
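Taken together, a shared server section of the parameter file might be sketched as follows (all values are
illustrative, not recommendations):
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=2)"
MAX_DISPATCHERS = 5
SHARED_SERVERS = 5
MAX_SHARED_SERVERS = 20
SHARED_SERVER_SESSIONS = 300
CIRCUITS = 300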
Verifying Setup
• Verify that the dispatcher has registered with the listener when the database was started by issuing:
$ lsnrctl services
LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 11-OCT-2006 09:14:29
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC0)))
Services Summary...
Service "ORADB" has 2 instance(s).
Instance "ORADB", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Instance "ORADB", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "ORADBXDB" has 1 instance(s).
Instance "ORADB", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1002 state:ready
DISPATCHER <machine: PAPAI, pid: 3588>
(ADDRESS=(PROTOCOL=tcp)(HOST=papai)(PORT=1075))
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
The command completed successfully
• Verify that you are connected using Shared Server by making a single connection and querying
V$CIRCUIT, which should show one entry per shared server connection.
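For instance, a quick check might look like this (the output depends on your configuration):
SQL> SELECT circuit, dispatcher, server, status FROM v$circuit;
SQL> SELECT username, server FROM v$session WHERE username IS NOT NULL;
In V$SESSION, the SERVER column shows SHARED or NONE for shared server connections and DEDICATED
for dedicated connections.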
The following are useful views for obtaining information about your shared server configuration and for
monitoring performance.
View Description
V$DISPATCHER Provides information on the dispatcher processes, including name,
network address, status, various usage statistics, and index number.
V$DISPATCHER_RATE Provides rate statistics for the dispatcher processes.
V$QUEUE Contains information on the shared server message queues.
V$SHARED_SERVER Contains information on the shared server processes.
V$CIRCUIT Contains information about virtual circuits, which are user connections
to the database through dispatchers and servers.
V$SHARED_SERVER_MONITOR Contains information for tuning shared server.
V$SESSION Lists session information for each current session.
V$SGA Contains size information about various system global area (SGA)
groups. May be useful when tuning shared server.
V$SGASTAT Detailed statistical information about the SGA, useful for tuning.
V$SHARED_POOL_RESERVED Lists statistics to help tune the reserved pool and space within the
shared pool.
*************************************************************************
Network Failure
When your system uses networks such as local area networks and phone lines to connect client
workstations to database servers, or to connect several database servers to form a distributed database
system, network failures such as aborted phone connections or network communication software failures
can interrupt the normal operation of a database system. For example:
* A network failure can interrupt normal execution of a client application and cause a process failure to
occur. In this case, the Oracle background process PMON detects and resolves the aborted server
process for the disconnected user process, as described in the previous section.
* A network failure can interrupt the two-phase commit of a distributed transaction. After the network
problem is corrected, the Oracle background process RECO of each involved database automatically
resolves any distributed transactions not yet resolved at all nodes of the distributed database system.
Instance Failure:
An instance failure may occur for numerous reasons:
• A power outage occurs that causes the server to become unavailable.
• The server becomes unavailable due to hardware problems such as a CPU failure or memory corruption
or the operating system crashes.
• One of the Oracle server background processes (DBWR, LGWR, PMON, SMON, CKPT) experiences a
failure.
• No recovery action needs to be performed by you. All required redo information is read by SMON. To
restore from this type of failure, start the database:
SQL> connect / as sysdba;
Connected.
SQL> startup pfile=initDB00.ora;
...
Database opened.
• After the database has opened, notify users that any data that they did not commit will need to be re-
entered.
• There may be a time delay between starting the database and the “Database opened” notification—this
is the roll forward phase that takes place while the database is mounted.
– SMON performs the roll forward process by applying changes recorded in the online redo log files from
the last checkpoint.
– Rolling forward recovers data that has not been recorded in the database files, but has been recorded
in the online redo log, including the contents of rollback segments.
• Rollback can occur while the database is open, since either SMON or a server process can perform the
rollback operation. This allows the database to be available for users faster.
Media Failures:
Causes of Media Failures
• Head crash on a disk drive
• Physical problem in reading from or writing to database files
• File was accidentally erased
Business requirements
MTTR (Mean-Time-To-Recover):
Database availability is a key issue for a DBA. In the event of a failure the DBA should strive to reduce
the Mean-Time-To-Recover (MTTR). This strategy ensures that the database is unavailable for the
shortest possible amount of time. Anticipating the types of failures that can occur and using effective
recovery strategies, the DBA can ultimately reduce the MTTR.
MTBF (Mean-Time-Between-Failure):
Protecting the database against various types of failures is also a key DBA task. To do this, a DBA must
increase the Mean-Time-Between-Failures (MTBF). The DBA must understand the backup and recovery
structures within an Oracle database environment and configure the database so that failures will not
occur often.
Evolutionary Process: A backup and recovery strategy evolves as business, operational, and technical
requirements change. It is important that both the DBA and management review the validity of a backup
and recovery strategy on a regular basis.
Operational Requirements
• 24-hour operations
• User and operator appreciation
• Testing and validating backups
Testing Backups
Here are some questions to consider when selecting a backup strategy:
• Can I depend on system administrators, vendors, backup DBAs, and so forth when I need help?
• Can I test my backup and recovery strategies at frequently scheduled intervals?
• Are backup copies stored off-site?
• Is a plan well documented and maintained?
Database Volatility
Other issues that impact operational requirements include the volatility of the data and structure of the
database. Here are some questions to consider when selecting a backup strategy:
• Are tables frequently updated?
• Is data highly volatile? If so, you will need backups more frequently than a business where data is
relatively static.
• Does the structure of the database change often?
• How often do you add data files?
Technical Requirements
• Resources: Hardware, software, manpower, and time
• Physical image copies of the operating system files
• Logical copies of the objects in the database
• Database configurations
• Transaction volume affects desired frequency of backups
Natural Disaster
Perhaps your data is so important that you must ensure resiliency even in the event of a complete system
failure. Natural disasters and other issues can affect the availability of your data and must be considered
when creating a disaster recovery plan. Here are some questions to consider when selecting a backup
strategy:
• What will happen to your business in the event of a serious disaster such as:
– Flood, fire, earthquake, or hurricane
– Malfunction of storage hardware or software
• If your database server fails, will your business be able to operate during the hours, days, or even
weeks it might take to get a new hardware system?
• Do you store backups off-site?
Solutions
• Off-site backups
• Standby Database feature that enables a DBA to fall back on another database that is configured as a
standby in case the primary database fails.
• Geomirroring
• Messaging
• TP monitors
Loss of Key Personnel
In terms of key personnel, consider the following questions:
• How will a loss of personnel affect your business?
• If your DBA leaves the company or is unable to work, will your database system continue to run?
• Who will handle a recovery situation if the DBA is unavailable?
************************************************************************
Memory Structures:
Type Description
Data buffer cache Memory area used to store blocks read from data files. Data is read into the blocks by
server processes and written out by DBWn asynchronously.
Log buffer Memory containing before and after image copies of changed data to be written to the redo
logs.
Large pool An optional memory area in the SGA used for I/O by RMAN backup and restore, session
memory for Oracle Shared Server, and Oracle XA.
Shared pool Stores parsed versions of SQL statements, PL/SQL procedures, and data dictionary
information.
Java pool Used in server memory for all session-specific Java code and data within the JVM.
Background Processes
Type Description
Database writer (DBWn) Writes dirty buffers from the data buffer cache to the data files. This activity is
asynchronous.
Log writer (LGWR) Writes data from the redo log buffer to the redo log files.
System monitor (SMON) Performs automatic instance recovery. Recovers space in temporary segments
when they are no longer in use. Merges contiguous areas of free space depending on parameters set.
Process monitor (PMON) Cleans up the connection/server process dedicated to an abnormally terminated
user process. Performs rollback and releases the resources held by the failed process.
Checkpoint (CKPT) Synchronizes the headers of the data files and control files with the current redo log
and checkpoint numbers.
Archiver (ARCn) (optional) A process that automatically copies redo logs that have been marked for
archiving.
The User Process
The user process is created when a user starts a tool such as SQL*Plus, Forms, Reports, Enterprise
Manager, and so on. This process might be on the client or server, and provides an interface for the user
to enter commands that interact with the database.
The Server Process
The server process accepts commands from the user process and performs steps to complete user
requests. If the database is not in a multithreaded configuration, a server process is created on the
machine containing the instance when a valid connection is established.
Oracle Database
An Oracle database consists of the following physical files.
File Type Description Type
Data files Physical storage of data. At least one file is required per database; this file stores the
SYSTEM tablespace. Binary
Redo logs Contain before and after image copies of changed data, for recovery purposes. At least two
groups are required. Binary
Control files Record the physical structure and status of the database. Binary
Initialization parameter file Stores parameters required for instance startup. Text
Server initialization parameter file Stores persistent parameters required for instance startup. Binary
Password file (optional) Stores information on users who can start, stop, and recover the database. Binary
Archive logs (optional) Physical copies of the online redo log files. Created when the database is set in
ARCHIVELOG mode. Used in recovery. Binary
Dynamic Views
The Oracle server provides a number of standard data dictionary views to obtain information on the
database and instance. These views include:
• V$SGA: Queries the size of the instance for the shared pool, log buffer, data buffer cache, and fixed
memory sizes (operating system dependent).
• V$INSTANCE: Queries the status of the instance, such as the instance mode, instance name, startup
time, and host name.
• V$PROCESS: Queries the background and server processes created for the instance.
• V$BGPROCESS: Queries the background processes created for the instance.
• V$DATABASE: Lists status and recovery information about the database. It includes information on the
database name, the unique database identifier, the creation date, the control file creation date and time,
the last database checkpoint, and other information.
• V$DATAFILE: Lists the location and names of the data files that are contained in the database. It
includes information relating to the file number and name, creation date, status (online/off-line), enabled
(read-only, read-write), last data file checkpoint, size, and other information.
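A few sample queries against these views (column lists trimmed for readability):
SQL> SELECT instance_name, status, startup_time FROM v$instance;
SQL> SELECT name, created, log_mode FROM v$database;
SQL> SELECT file#, name, status, enabled FROM v$datafile;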
Large Pool
• Can be configured as a separate memory area in the SGA, used for memory with:
– I/O slaves: DBWR_IO_SLAVES
– Oracle backup and restore: BACKUP_TAPE_IO_SLAVES
– Session memory for the multi-threaded servers
• Is sized by the LARGE_POOL_SIZE parameter
Recovery Manager (RMAN) uses the large pool for backup and restore when you set the
DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters.
• DBWR_IO_SLAVES: This parameter specifies the number of I/O slaves used by the DBWn process. The
DBWn process and its slaves always write to disk. By default, the value is 0 and I/O slaves are not used.
– If DBWR_IO_SLAVES is set to a nonzero value, the numbers of I/O slaves used by the ARCn
process, LGWR process, and Recovery Manager are set to 4.
– Typically, I/O slaves are used to simulate asynchronous I/O on platforms that do not support
asynchronous I/O or implement it inefficiently. However, I/O slaves can be used even when
asynchronous I/O is being used. In that case, the I/O slaves will use asynchronous I/O.
• BACKUP_TAPE_IO_SLAVES: Specifies whether I/O slaves are used by Recovery Manager to back up,
copy, or restore data to tape.
– When BACKUP_TAPE_IO_SLAVES is set to TRUE, an I/O slave process is used to write to or
read from a tape device.
– If this parameter is set to FALSE (the default), then I/O slaves are not used for backups;
instead, the shadow process engaged in the backup will access the tape device.
Note: Because a tape device can only be accessed by one process at any given time, this parameter is a
Boolean, which allows or does not allow the deployment of an I/O slave process to access a tape device.
– In order to perform duplexed backups, this parameter needs to be enabled, otherwise an error
will be signalled. Recovery Manager will configure as many slaves as needed for the number of backup
copies requested when this parameter is enabled.
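A hedged parameter-file sketch of the settings discussed above (the sizes are placeholders, not
recommendations):
LARGE_POOL_SIZE = 32M
DBWR_IO_SLAVES = 4
BACKUP_TAPE_IO_SLAVES = TRUE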
Data Files
Data files store both system and user data on a disk. This data may be committed or uncommitted.
Configuring Tablespaces
Tablespaces contain one or more data files. It is important that tablespaces are created carefully to
provide a flexible and manageable backup and recovery strategy.
Here are some examples of tablespaces:
• SYSTEM: Backup and Recovery are more flexible if system and user data is contained in different
tablespaces.
• TEMPORARY: If the tablespace containing temporary segments (used in sort, and so on) is lost, it can
be re-created, rather than recovered.
• ROLLBACK SEGMENTS: Tablespaces containing online rollback segments are difficult to back up and
recover with the database online. Such tablespaces should be dedicated to contain only rollback
segments, such as the tablespace SYSTEM, which should contain only system and no application
segments.
• READ ONLY DATA: Backup time can be reduced because a tablespace needs to be backed up only
when the tablespace is made read-only.
• HIGHLY VOLATILE DATA: This tablespace can be backed up more frequently, also reducing recovery
time.
• INDEX DATA: Tablespaces to store index segments should be created. These tablespaces can often be
re-created instead of recovered.
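For the READ ONLY DATA point above, the usual pattern is to make the tablespace read-only, back it up
once, and back it up again only after it is changed back to read-write and modified (the tablespace name
is hypothetical):
ALTER TABLESPACE app_hist_data READ ONLY;
ALTER TABLESPACE app_hist_data READ WRITE;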
Function of LGWR:
1 When the redo log buffer is one third full.
2 When a timeout occurs (every three seconds).
3 When there is 1 MB of redo.
4 Before DBWn writes modified blocks in the database buffer cache to the data files.
5 When a transaction commits.
Dynamic Views
• V$LOG: Lists the number of members in each group. It contains:
– The group number
– The current log sequence number
– The size of the group
– The number of mirrors
– Status (CURRENT or INACTIVE)
– The checkpoint change numbers
• V$LOGFILE: Lists the names, status (STALE or INVALID), and group of each log file member.
• V$LOG_HISTORY: Contains information on log history from the control file.
Database Checkpoints:
• Checkpoints are used to determine where recovery should start.
• Checkpoint position: where recovery starts.
• Checkpoint queue: a linked list of dirty buffers, each identified by data file number/data block
number/redo byte address (RBA).
Database checkpoints ensure that all modified database buffers are written to the database files. The
data file headers are then marked current, and the checkpoint sequence number is recorded in the
control file. Checkpoints synchronize the buffer cache by writing to disk all buffers whose corresponding
redo entries were part of the log file being checkpointed.
Types of Checkpoints:
Full Checkpoint
• All dirty buffers are written
• SHUTDOWN NORMAL, IMMEDIATE, TRANSACTIONAL
• ALTER SYSTEM CHECKPOINT
Incremental Checkpoint [fast-start Checkpoint]
• Periodic writes
• Only writes the oldest blocks
Partial Checkpoint
• Only dirty buffers belonging to the affected tablespace are written
• ALTER TABLESPACE tablespace_name BEGIN BACKUP
• ALTER TABLESPACE tablespace_name OFFLINE NORMAL
CKPT PROCESS:
The CKPT process is always enabled. The CKPT process updates file headers at checkpoint completion.
More frequent checkpoints reduce the time needed for recovering from instance failure at the possible
expense of performance. Checkpoints occur in the following situations:
1. At every log switch (cannot be suppressed).
2. When fast-start checkpointing is set to force DBWn to write buffers in advance in order to shorten
the instance recovery.
3. At a frequency defined by the LOG_CHECKPOINT_INTERVAL initialization parameter. It specifies
the frequency of checkpoints in terms of the number of redo log file blocks that can exist between
an incremental checkpoint and the last block written to the redo log.
4. When the elapsed time since writing the redo block at the current checkpoint position exceeds the
number of seconds specified by the LOG_CHECKPOINT_TIMEOUT initialization parameter.
LOG_CHECKPOINT_TIMEOUT specifies the amount of time, in seconds, that has passed since the
incremental checkpoint at the position where the last write to the redo log (sometimes called the tail
of the log) occurred. This parameter also signifies that no buffer will remain dirty (in the cache) for
more than integer seconds.
5. At instance shutdown, unless the instance is aborted.
6. When forced by a database administrator (ALTER SYSTEM CHECKPOINT command)
7. When a tablespace is taken offline or an online backup is started.
8. NOTE: Read-only data files are an exception: their checkpoint numbers are frozen and do not
correspond to the number in the control file.
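The parameters named in items 2 to 4 above might be set as in this sketch (the values are illustrative only):
FAST_START_MTTR_TARGET = 300        # target instance recovery time, in seconds
LOG_CHECKPOINT_INTERVAL = 100000    # redo blocks between checkpoint position and tail of the log
LOG_CHECKPOINT_TIMEOUT = 1800       # maximum seconds a buffer may remain dirty
A manual checkpoint (item 6) is simply: ALTER SYSTEM CHECKPOINT;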
SYNCHRONIZATION:
1. At each checkpoint, the checkpoint number is updated in every database file header and in the
control file.
2. The checkpoint number acts as a synchronization marker for redo, control, and data files. If they
have the same checkpoint number, the database is considered to be in a consistent state.
3. Information in the control file is used to confirm that all files are at the same checkpoint number
during database startup. Any inconsistency between the checkpoint numbers in the various file
headers results in a failure, and the database cannot be opened. Recovery is required.
INSTANCE RECOVERY:
Checkpoints expedite instance recovery because at every checkpoint all changed data is written to a disk.
After data resides in data files, redo log entries before the last checkpoint need not be applied again
during the roll forward phase of instance recovery.
INFORMATION ON THE CONTROL FILE:
1. V$CONTROLFILE 2. V$PARAMETER (SHOW PARAMETER command)
Archiving Considerations
The choice of whether to enable archiving depends on the availability and reliability requirements of each
database. Archived logs can be stored in more than one location (duplexing or multiple destinations),
since they are vital for recovery. For production databases, it is recommended that you use the archive
log feature with multiple destinations.
DATABASE SYNCHRONIZATION:
1. All datafiles (except offline and read-only) must be synchronized for the database to open.
2. Synchronization is based on the current checkpoint number.
3. Applying changes recorded in the redo log files synchronizes datafiles.
4. Redo log files are automatically requested by the Oracle server.
Phase Explanation
1 Unsynchronized files: The Oracle server determines whether a database needs
recovery when unsynchronized files are found. Instance failure can cause this to happen,
such as a shutdown abort. This situation causes loss of uncommitted data because
memory is not written to disk and files are not synchronized before shutdown.
2 Roll forward process: DBWR writes both committed and uncommitted data to the data
files. The purpose of the roll forward process is to apply all changes recorded in the log
file to the data blocks.
Note
-Rollback segments are populated during the roll-forward phase. Because redo logs store
both before and after data images, a rollback segment entry is added if an uncommitted
block is found in the data file and no rollback entry exists.
- Redo logs are applied using log buffers. The buffers used are marked for recovery and
do not participate in normal transactions until they are relinquished by the recovery
process.
- Redo logs are only applied to a read-only data file if a status conflict occurs (that is, the
file header states the file is read-only, yet the control file recognizes it as read-write, or
vice versa).
3 Committed and uncommitted data in data files: Once the roll forward phase has
successfully completed, all committed data resides in the data files, although
uncommitted data still might exist.
4 Roll-Back Phase: To remove the uncommitted data from the files, rollback segments
populated during the roll forward phase or prior to the crash are used. Blocks are rolled
back when requested by either the Oracle server or a user, depending on who requests
the block first.
The database is therefore available even while rollback is running. Only those data
blocks participating in rollback are not available.
5 Committed data in data files: When both the roll forward and rollback phases have
completed, only committed data resides on disk.
6 Synchronized data files: All data files are now synchronized.
V$INSTANCE_RECOVERY COLUMN:
1. RECOVERY_ESTIMATED_IOS: Contains the number of dirty buffers in the buffer cache.
2. ACTUAL_REDO_BLKS: Current number of redo blocks required to be read for recovery.
3. TARGET_REDO_BLKS: Goal for the maximum number of redo blocks to be processed during
recovery. This value is the minimum of the next three columns: LOG_FILE_SIZE_REDO_BLKS,
LOG_CHKPT_TIMEOUT_REDO_BLKS, and LOG_CHKPT_INTERVAL_REDO_BLKS.
4. LOG_FILE_SIZE_REDO_BLKS: Number of redo blocks to be processed during recovery
corresponding to 90% of the size of the smallest log file.
5. LOG_CHKPT_TIMEOUT_REDO_BLKS: Number of redo blocks that must be processed during
recovery to satisfy LOG_CHECKPOINT_TIMEOUT.
6. LOG_CHKPT_INTERVAL_REDO_BLKS: Number of redo blocks that must be processed during
recovery to satisfy LOG_CHECKPOINT_INTERVAL.
7. FAST_START_IO_TARGET_REDO_BLKS: This field is obsolete. It is retained for backward
compatibility. The value of this field is always null.
8. TARGET_MTTR: Effective mean time to recover (MTTR) target in seconds. Usually, it should be
equal to value of the FAST_START_MTTR_TARGET parameter. If FAST_START_MTTR_TARGET is
set to such a small value that it is impossible to do a recovery within its time frame, then the
TARGET_MTTR field contains the effective MTTR target, which is larger than
FAST_START_MTTR_TARGET. If FAST_START_MTTR_TARGET is set to such a high value that even
in the worst case (the whole buffer cache is dirty) recovery would not take that long, then the
TARGET_MTTR field contains the estimated MTTR in the worst case scenario. This field is 0 if
FAST_START_MTTR_TARGET is not specified.
9. ESTIMATED_MTTR: The current estimated mean time to recover (MTTR) in seconds, based on the
number of dirty buffers and log blocks (reported even if FAST_START_MTTR_TARGET is not
specified).
10. CKPT_BLOCK_WRITES: Number of blocks written by checkpoint writes
V$INSTANCE_RECOVERY View
• RECOVERY_ESTIMATED_IOS: The estimated number of data blocks to be processed during recovery
based on the in-memory value of the fast-start checkpoint parameter
• ACTUAL_REDO_BLKS: The current number of redo blocks required for recovery
• TARGET_REDO_BLKS: The goal for the maximum number of redo blocks to be processed during
recovery. This value is the minimum of the following four columns:
– LOG_FILE_SIZE_REDO_BLKS: The number of redo blocks to be processed during recovery to guarantee
that a log switch never has to wait for a checkpoint
– LOG_CHKPT_TIMEOUT_REDO_BLKS: The number of redo blocks that need to be processed during
recovery to satisfy
LOG_CHECKPOINT_TIMEOUT
– LOG_CHKPT_INTERVAL_REDO_BLKS: The number of redo blocks that need to be processed during
recovery to satisfy
LOG_CHECKPOINT_INTERVAL
– FAST_START_IO_TARGET_REDO_BLKS: The number of redo blocks that need to be processed during
recovery to satisfy FAST_START_IO_TARGET
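A typical monitoring query against this view might be:
SELECT recovery_estimated_ios, actual_redo_blks, target_redo_blks,
       target_mttr, estimated_mttr
FROM   v$instance_recovery;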
TUNING PHASES OF CRASH AND INSTANCE RECOVERY:
Set the checkpointing parameters so that checkpoints occur frequently enough to keep recovery time
short. The RECOVERY_PARALLELISM initialization parameter is used to specify the number of concurrent
processes for instance or crash recovery operations. Using multiple processes in effect provides parallel
block recovery: different processes are allocated to different blocks during the roll forward phase.
NOARCHIVELOG Mode
By default, a database is created in NOARCHIVELOG mode. The characteristics of running a database in
NOARCHIVELOG mode are as follows:
• Redo log files are used in a circular fashion.
• A redo log file can be reused immediately after a checkpoint has taken place.
• Once redo logs are overwritten, media recovery is only possible to the last full backup.
Implications of NOARCHIVELOG Mode
• If a tablespace becomes unavailable because of a failure, you cannot continue to operate the database
until the tablespace has been dropped or the entire database has been restored from backups.
• You may only perform operating system backups of the database when the database is shut down.
• You must back up the entire set of database, redo, and control files during each backup.
• You will lose all data since the last full backup.
• You cannot perform online backups.
Media Recovery Options in NOARCHIVELOG mode
• You must restore the data files, redo log files, and control files from an earlier copy of a full database
backup.
• If you used the Export utility to back up the database, you can use the Import utility to restore lost
data. However, this results in an incomplete recovery and transactions may be lost.
ARCHIVELOG Mode
• A filled redo log file cannot be reused until a checkpoint has taken place and the redo log file has been
backed up by the ARCn background processes. An entry in the control file records the log sequence
number of the archived log file in the log history of the control file.
• The most recent changes to the database are available at any time for instance recovery, and the
archived redo log file copies can be used for media recovery.
Archiving Requirements
• The database must be in ARCHIVELOG mode. Issuing the command to put the database into
ARCHIVELOG mode updates the control file. The ARCn background processes can be enabled to
implement automatic archiving.
• Sufficient resources should be available to hold generated archived redo log files.
Note: After the mode has been changed from Noarchivelog mode to Archive log, you must back up all the
data files and the control file. Your previous backup is not usable anymore because it was taken while the
database was in Noarchivelog mode.
Setting the database to ARCHIVELOG mode does not by itself enable the archiver (ARCn) processes. For
automatic archiving, the LOG_ARCHIVE_START parameter must be TRUE; if it is FALSE, the DBA must
archive the redo logs manually. If the ARCn processes fail for any reason, then once transaction activity
has filled all the online redo logs, the Oracle server hangs.
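As a hedged sketch, switching to ARCHIVELOG mode with automatic archiving typically means setting
LOG_ARCHIVE_START = TRUE in the parameter file and then:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST
ARCHIVE LOG LIST confirms the new mode; as noted above, a whole database backup should be taken
immediately afterward.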
Automatic vs. Manual ARCHIVING
Automatic ARCHIVING: LOG_ARCHIVE_START = TRUE
ARCn background processes are enabled and they copy redo log files as they are filled.
Manual ARCHIVING: LOG_ARCHIVE_START = FALSE
Use SQL*Plus or OEM to copy the files.
Enabling the archive process is the second step in creating archived redo log files to use for recovery.
Automatic archiving is the recommended mode.
Guidelines:
• Before deciding on automatic or manual archiving, the database must be set to ARCHIVELOG mode;
failure to switch to ARCHIVELOG mode prevents ARCn from copying the redo log files.
• The database should be shut down cleanly (NORMAL, IMMEDIATE, or TRANSACTIONAL) before the
mode is changed.
• The DBA can spawn additional archive processes (up to the value of LOG_ARCHIVE_MAX_PROCESSES)
or kill superfluous archive processes at any time during the life of the instance.
Note: If the database is shut down at night, the next day the database would start again with only two
archive processes as it is set up in the init.ora file.
LOG_ARCHIVE_DEST_n OPTIONS
Use LOG_ARCHIVE_DEST_n to specify up to ten archival destinations.
Set each archive location as MANDATORY or OPTIONAL:
LOG_ARCHIVE_DEST_1 = "LOCATION=/archive1 MANDATORY REOPEN"
LOG_ARCHIVE_DEST_2 = "LOCATION=/archive2 MANDATORY REOPEN=600"
LOG_ARCHIVE_DEST_3 = "LOCATION=/archive3 OPTIONAL"
MANDATORY: Implies that archiving to this destination must complete successfully before an online
redo log file can be overwritten.
OPTIONAL: Implies that an online redo log file can be reused even if it has not been successfully
archived to this destination. This is the DEFAULT.
REOPEN ATTRIBUTE:-
*The REOPEN attribute defines whether archiving to a destination is reattempted after a failure and how
long to wait before retrying. The default is 300 seconds. There is no limit on the number of attempts made
to archive to a destination. Any errors in archiving are reported in the alert file at the primary site.
*If REOPEN is not specified, errors at optional destinations are recorded and ignored. No further redo log
will be sent to these destinations. Errors at mandatory destinations will prevent reuse of the online redo
log until the archiving is successful. The status of an archive destination is set to ERROR whenever
archiving is unsuccessful.
NOTE:- Archiving is not performed to a destination when the state is set to DEFER. If the state of this
destination is changed to ENABLE, any missed logs must be manually archived to this destination.
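The destination state mentioned in this note is normally controlled with ALTER SYSTEM; as an illustration
(the destination number is arbitrary):
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;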
Channel Allocation
You must allocate a channel before you execute backup and recovery commands. Each allocated channel
establishes a connection from RMAN to a target or auxiliary database (either a database created with the
duplicate command or a temporary database used in TSPITR) instance by starting a server session on the
instance. This server session performs the backup and recovery operations. Only one RMAN session
communicates with the allocated server sessions.
Each channel usually corresponds to one output device, unless your media management library is capable
of hardware multiplexing.
@BACKUP, COPY, RESTORE, and RECOVER commands require at least one channel.
@Allocating a channel starts a server process on the target database.
@Channels affect the degree of parallelism.
@Channels write to different media types.
@Channels can be used to impose limits.
RMAN> RUN
{ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='E:\BACKUP\USER052.BAK';
BACKUP DATAFILE 'E:\ORACLE\ORADATA\USERS01.DBF';}
@ The type of media desired determines the type of channel allocated. Query the V$BACKUP_DEVICE
view to determine supported device types.
You can impose limits for the COPY and BACKUP commands by specifying parameters in the ALLOCATE
CHANNEL command:
@ Read rate: limits the number of buffers read per second, per file, so that the backup does not degrade
online performance through excessive disk I/O. ALLOCATE CHANNEL ... RATE = integer
@ Kbytes: limits the size of a backup piece created by a channel. ALLOCATE CHANNEL ... MAXPIECESIZE =
integer
@ MAXOPENFILES: limits the number of concurrently open files for a large backup (default 16).
ALLOCATE CHANNEL ... MAXOPENFILES = integer
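Combining these limits, a hedged RMAN sketch might look like the following (the file names, rate, and
sizes are placeholders):
RMAN> RUN {
  ALLOCATE CHANNEL c1 TYPE DISK
    RATE = 5M
    MAXPIECESIZE = 2G
    MAXOPENFILES = 8
    FORMAT='E:\BACKUP\DF_%U.BAK';
  BACKUP DATABASE;
}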
Media Management
To use tape storage for your database backups, RMAN requires a media manager. A media manager is a
utility that loads, labels, and unloads sequential media such as tape drives for the purpose of backing up,
restoring, and recovering data.
Some media management products can manage the entire data movement between Oracle data files and
the backup devices. Some products that use high-speed connections between storage and media
subsystems can reduce much of the backup load from the primary database server.
@Oracle server calls MML software routines to back up and restore data files to and from media that is
controlled by the media manager.
@ After you type the RMAN connection command, the following events occur:
* A user process is created for Recovery Manager.
* The user process creates two Oracle server processes:
-One default process connected to the target database for executing SQL commands,
resynchronizing the control file, and recovery roll forward.
-One polling process connected to the target database to locate Remote Procedure Call (RPC)
completions (only one per instance).
Backup and recovery information is retrieved from the control file.
BATCH MODE:- You can type commands into a file and then run the command file. When running in
batch mode, RMAN reads input from a command file and writes output messages to a log file (if
specified). RMAN parses the command file in its entirety before compiling or executing any commands.
There is no need to place an exit command in the file because RMAN terminates when the end of the
file is reached. rman target / @b_file.rcv log tbs.log
RMAN Commands
RMAN commands are of two types:
• Stand-alone
– Executed individually
– Usually do not interact with OS
– No channel allocation
• Job
– Executed as a group
– Generally interact with OS
– Channel allocation
• Stand-alone or Job
STAND-ALONE:- Executed only at the RMAN prompt. Executed individually. Cannot appear as
subcommands within RUN. 1. CHANGE 2. CONNECT 3.CREATE CATALOG, RESYNC CATALOG
4. CREATE SCRIPT, DELETE SCRIPT, REPLACE SCRIPT, and PRINT SCRIPT.
JOB:- The job commands are usually grouped and RMAN executes the job commands
inside of a run command block sequentially. If any command within the block
fails, RMAN ceases processing—no further commands within the block are
executed.
Users can execute the commands in interactive mode or batch mode. To run RMAN
commands interactively, start RMAN and then type commands into the command
line interface.
You can type RMAN commands into a file. You can then run a list of commands in
batch mode by specifying the command file name in the command line.
CONFIGURE COMMAND
RMAN is preset with default configuration settings. Use the CONFIGURE command to:
• Configure automatic channels
• Specify the backup retention policy
• Specify the number of backup copies to be created
• Limit the size of backup sets
• Exempt a tablespace from backup
• Enable and disable backup optimization
With the following setting, RMAN does not consider any backup as obsolete: RMAN> CONFIGURE RETENTION POLICY TO NONE;
You set backup optimization on so that the BACKUP command does not back up files to a device type if
the identical file has already been backed up to the device type. For two files to be identical, their
contents must be exactly the same. The default value is OFF.
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
The control file is automatically backed up when a tablespace is added (a structural change) or when a
successful backup is recorded in the RMAN repository:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'D:\ORA9I\C%F';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 3;
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
Configure duplexed backup sets: max 4 copies
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE sbt TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'E:\ORACLE\ORA90\DATABASE\SNCFORA9I.ORA';
Use the CLEAR option to return to the default value:
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK CLEAR;
You must be connected to the target database. If you are connected in the NOCATALOG MODE, then the
database must be mounted. If you connect using a recovery catalog, then the target instance must be
started (but does not need to be mounted).
List backups of all files in the database: LIST BACKUP OF DATABASE;
List backup sets containing the USERS01.DBF datafile: LIST BACKUP OF DATAFILE "E:\B\USERS01.DBF";
List all copies of datafiles in the SYSTEM tablespace: LIST COPY OF TABLESPACE "SYSTEM";
CONSISTENT BACKUP:- A whole backup taken after the database is closed with SHUTDOWN
NORMAL, IMMEDIATE, or TRANSACTIONAL is called a consistent backup; all file headers are
consistent with the control file.
INCONSISTENT BACKUP:- When the database is open and operational, the datafile headers are not
consistent with the control file unless the database is open in read-only mode. When the database is
shut down with the ABORT option, this inconsistency persists.
PARTIAL DATABASE BACKUPS
-TABLESPACE BACKUP: A tablespace backup is a backup of the datafiles that make up a tablespace.
-DATA FILE BACKUPS: You can make backups of a single datafile if your database is in ARCHIVELOG
mode. You can make backups of read-only or offline-normal datafiles in NOARCHIVELOG mode.
-CONTROL FILE BACKUPS: You can configure RMAN for automatic backups of the control file after a
BACKUP or COPY command is issued.
Disadvantages
• For business operations where the database must be continuously available, a closed database backup
is unacceptable because the database is unavailable during backup.
• The amount of time that the database is unavailable is affected by the size of the database, the number
of data files, and the speed with which the copy operations on the data files can be performed.
Sometimes this may not fit within the available window of downtime, and the DBA must choose
another type of backup.
• A recovery is only as good as the last full closed database backup, and lost transactions may have to be
entered manually following a recovery operation.
Guidelines
• The default shutdown parameter is normal. Use transactional or immediate if there is any chance that
transactions or processes are still accessing the database.
• Consider a reliable, automated procedure for this operation to ensure that every file is correctly backed
up.
• Back up the parameter file and the password file when performing full closed backups.
• You do not need to include files associated with read-only tablespaces in full backups.
• If the database is opened while the offline or cold backup is performed, the backup is invalid and
cannot be guaranteed usable in a recovery situation.
[Although the parameter file and the password file are not physically part of the database, they should be
included as part of the backup.]
Query the V$BACKUP view to determine which files are in backup mode. When an ALTER TABLESPACE
BEGIN BACKUP command is issued the status changes to ACTIVE.
SQL> select * from v$backup;
FILE# STATUS CHANGE# TIME
------ ----------- ------- ------
1 NOT ACTIVE 0
2 NOT ACTIVE 0
3 ACTIVE 240088 23/03/99
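For illustration, a user-managed open (hot) backup of a single tablespace might look like the following;
the tablespace name and paths are assumptions. While the tablespace is in backup mode, its files show
as ACTIVE in V$BACKUP:
SQL> ALTER TABLESPACE users BEGIN BACKUP;
$ cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf
SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;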
The control file must correctly identify the tablespace as being in read-only mode; otherwise you must recover it.
Tablespaces, tables, indexes, or partitions may be set to NOLOGGING mode for faster loading of data
when using direct-load operations. When the NOLOGGING option is set for a direct-load operation, insert
statements are not logged in the redo log files. Because the redo logs do not contain the values that
were inserted while the table was in NOLOGGING mode, the data files pertaining to the table or partition
should be backed up immediately upon completion of the direct-load operation.
*Multiplex the control files and name them in the init.ora file by using the CONTROL_FILES parameter.
*The ALTER DATABASE BACKUP CONTROLFILE TO TRACE command creates a script to re-create the
control file. The file is located in the directory specified in the initialization parameter USER_DUMP_DEST.
This script does not contain RMAN metadata.
*In addition, the individual control files should also be backed up by using the ALTER DATABASE BACKUP
CONTROLFILE TO 'filename' command. This provides a binary copy of the control file at that time.
*During a full backup, shut down the instance normally and use an operating system backup utility to
copy the control file to backup storage.
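A short sketch of both control file backup forms; the destination file name is hypothetical:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
SQL> ALTER DATABASE BACKUP CONTROLFILE TO 'D:\BACKUP\CONTROL01.BKP';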
DBVERIFY UTILITY:
The DBVERIFY utility enables administrators to verify data files by checking the structural
integrity of data blocks within specified data files. Because the utility is external to the database, it
minimizes the impact on database activities. DBVERIFY Main Features
Step Explanation
1 The utility can be used to verify online data files.
2 You can invoke the utility on a portion of a data file.
3 The utility can also be used to verify offline data files.
4 You can direct the output of the utility to an error log.
Running DBVERIFY
The name of the executable for the DBVERIFY utility varies across operating systems. It is located in the
bin directory of the appropriate Oracle Home.
The name of the executable is OS-dependent. For UNIX you execute the dbv executable.
Example
To verify the integrity of the data01.dbf data file, starting with block 1 and ending with block 500, you
execute the following command:
UNIX
$ dbv /users/DB00/u03/data01.dbf start=1 end=500
DBVERIFY Output:
An example of the output from the previous command would look like the following:
DBVERIFY - Verification starting : FILE = /users/DB00/u03/data01.dbf
DBVERIFY - Verification complete
Total Pages Examined : 500
Total Pages Processed (Data): 22
Total Pages Failing (Data): 0
Total Pages Processed(Index): 16
Total Pages Failing(Index): 0
Total Pages Empty : 0
Total Pages Marked Corrupt: 0
Total Pages Influx: 0
where: Pages is the number of Oracle blocks processed.
*************************************************************************
CHAP 11 RMAN Backups
Backup Concepts
• Recovery Manager backup is a server-managed backup
– Recovery Manager uses Oracle server processes for backup operations
– Includes database, tablespaces, all or selected data files in a tablespace, control files, archive logs
• Closed backup
– Target database must be mounted (not open)
– Includes data files, control files
• Open Backup
– Tablespaces should not be put in backup mode
- Includes data files, control files, archive logs
Note: The online redo log files are not backed up when using Recovery Manager.
Backup Sets
A backup set consists of one or more physical files stored in an Oracle proprietary format, on either disk
or tape. Each backup set can contain one or more Oracle files.
You can make a backup set for one or more of data files, archive logs, or their copies.
Backup sets can be of two types:
• Data file: Can contain data files and control files, but not archived logs
• Archived log: Contains archived logs, not data files or control files
Note: Backup sets may need to be restored by Recovery Manager before recovery can be performed,
unlike image copies which generally are available on disks.
Backup Piece
• A backup piece is a file in a backup set.
• A backup piece can contain blocks from more than one data file.
BACKUP Command
The output can be written to tape or disk. You can control the number of backup sets that
Oracle produces as well as the number of input files that Recovery Manager places into a single backup
set. If any I/O errors are received when reading files or writing backup pieces, the job is aborted.
Option Significance
full Server session copies all blocks into the backup set, skipping only data file
blocks that have never been used. The server session does not skip blocks
when backing up archived redo logs or control files. A full backup has no
effect on subsequent incremental backups (it is not part of the incremental strategy).
incremental The server session copies data blocks that have changed since the last
level integer incremental level n backup, where n is any integer from 1 to 4.
When attempting an incremental backup of level greater than 0, the server
process checks that a level 0 backup or level 0 copy exists for each data
file in the BACKUP command.
If you specify incremental, then in the backupSpec you must set one of
the following parameters: DATAFILE, DATAFILECOPY, TABLESPACE, or
DATABASE. Recovery Manager does not support incremental backups of
control files, archived redo logs, or backup sets.
filesperset When you specify the filesperset parameter, Recovery Manager compares
integer the filesperset value to a calculated value (of number of files backed up
per number of channels) and takes the lowest integer of the two, thereby
ensuring that all channels are used.
If you do not specify filesperset, then Recovery Manager compares the
calculated value (number of files per allocated channels) to the default
value of 64 and takes the lowest of the two.
When there are more channels than files to back up, channels remain idle.
Input files cannot be split across channels.
skip Specify this parameter to exclude some data files or archived redo logs
from the backup set. You have following options within the parameter.
offline: Exclude offline data files from backup set.
readonly: Exclude data files belonging to read-only tablespaces.
inaccessible: Exclude data files or archived redo logs that cannot be read
because of I/O errors.
setsize Specifies a maximum size for a backup set in units of 1,024 bytes.
integer Recovery Manager attempts to limit all backup sets to this size. Useful for
backup of archive logs.
diskratio Directs Recovery Manager to assign only data files to backup sets spread
integer across the specified number of drives.Useful for data file backups when
data files are striped or reside on separate disk spindles.
delete input Deletes the input files upon successful creation of the backup set. Specify
this option only when backing up archived redo logs or data file copies. It
is equivalent to issuing a CHANGE . . . DELETE command for all of the
input files.
include current Creates a snapshot of the current control file and places it into each
controlfile backup set produced by this clause.
format Specifies the format of the name of the output. The following format parameters can be
used either individually or in combination.
%c Specifies the copy number of the backup piece within a set of duplexed
backup pieces.
%p Specifies the backup piece number within the backup set. This value starts
at 1 for each backup set and is increased by 1 as each backup piece is
created.
%s Specifies the backup set number. This number is a counter in the control
file that is increased for each backup set.
%d Specifies the database name.
%n Specifies the database name, padded on the right with x characters to a
total length of 8 characters.
%t Specifies the backup set time stamp, which is a 4-byte value derived as the
number of seconds elapsed since a fixed reference time. The combination
of %s and %t can be used to form a unique name for the backup set.
%u Specifies an 8-character name constituted by compressed representations
of the backup set number and the time that the backup set was created.
%U Specifies a convenient shorthand for %u_%p_%c that guarantees
uniqueness in generated backup filenames. If you do not specify a format,
Recovery Manager uses %U by default.
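Putting a few of these options together, a BACKUP command might look like the following sketch; the
channel, format path, and filesperset value are illustrative:
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  BACKUP FULL
    FILESPERSET 3
    FORMAT 'D:\BACKUP\%d_%s_%p'
    (DATABASE INCLUDE CURRENT CONTROLFILE);
}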
Backup Constraints
• The database must be mounted or open.
• Online redo log backups are not supported.
• Only “clean” backups are usable in NOARCHIVELOG mode.
• Only “current” data file backups are usable in ARCHIVELOG mode.
• No parameter or password files are backed up.
Image Copies
An image copy contains a single datafile, archived redo log file, or control file.
You can use the Recovery Manager COPY command or an operating system copy command to create an
image copy. An image copy produced with the Recovery Manager COPY command uses an Oracle server
session to perform the task and records the copy in the control file.
You can use the CHECK LOGICAL option to test data and index blocks that pass physical corruption
checks for logical corruption—for example, corruption of a row piece or index entry. If logical corruption
is detected, the block is logged in the alert log and trace file of the server process.
When the number of corrupted blocks detected reaches a threshold—defined by the MAXCORRUPT clause
—the copy process is terminated without populating the views.
Note: V$DATABASE_BLOCK_CORRUPTION should be queried at the completion of every image copy.
In the example, four channels are created, but only three will be used (channel d4 will remain idle). This
is how the command is executed:
1 Four channels are created for writing to disk: d1, d2, d3, d4.
2 The first COPY command uses three channels (server processes)—one for writing each data file to disk.
3 The second COPY command does not execute until the previous COPY command has finished
execution. It will use only one channel.
Note: When you use a high degree of parallelism, more machine resources are used, but the backup
operation can be completed faster.
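The example itself is not reproduced in these notes; a sketch of what it might look like follows, with
file numbers and copy names assumed:
RUN {
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d2 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d3 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d4 DEVICE TYPE DISK;
  COPY                                    # first COPY: three files, three channels
    DATAFILE 1 TO 'D:\COPY\df1.cpy',
    DATAFILE 2 TO 'D:\COPY\df2.cpy',
    DATAFILE 3 TO 'D:\COPY\df3.cpy';
  COPY DATAFILE 4 TO 'D:\COPY\df4.cpy';   # second COPY: one file, one channel
}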
Backup SPFILE
Automatically backed up when CONFIGURE CONTROLFILE AUTOBACKUP is set to ON
Explicitly backed up with BACKUP SPFILE
RMAN> BACKUP COPIES 2 DEVICE TYPE sbt SPFILE;
Tags
A tag is a meaningful name that you can assign to a backup set or file copy. The advantages of user tags
are as follows:
• Tags provide a useful reference to a collection of file copies or a backup set.
• Tags can be used in the LIST command to locate backed up files easily.
• Tags can be used in the RESTORE and SWITCH commands.
• The same tag can be used for multiple backup sets or file copies.
If a nonunique tag references more than one data file, then Recovery Manager chooses the most current
available file.
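For instance, a tag might be assigned at backup time and then referenced later; the tag name is
illustrative:
RMAN> BACKUP DATABASE TAG 'weekly_full';
RMAN> LIST BACKUP OF DATABASE TAG 'weekly_full';
RMAN> RESTORE DATABASE FROM TAG 'weekly_full';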
Recovery Manager data dictionary views used to query the control file are:
• V$ARCHIVED_LOG: Shows which archives have been created, backed up, and cleared in the database.
• V$BACKUP_CORRUPTION: Shows which blocks have been found corrupt during a backup
of a backup set.
• V$COPY_CORRUPTION: Shows which blocks have been found corrupt during an image copy.
• V$BACKUP_DATAFILE: Useful for creating equal sized backup sets by determining the number of blocks
in each data file. Can also find the number of corrupt blocks for the data file.
• V$BACKUP_REDOLOG: Shows archived logs stored in backup sets.
• V$BACKUP_SET: Shows backup sets that have been created.
• V$BACKUP_PIECE: Shows backup pieces created for backup sets.
Miscellaneous Issues
• Terminating a Recovery Manager Job
• Backing up the control file frequently
• Recording corrupt data file blocks in the control file and in the alert log
• Changing a fractured block while Recovery Manager is writing it
*************************************************************************
CHAP 12 User-Managed Complete Recovery
MEDIA RECOVERY:
Media recovery is used to recover a lost or damaged current DATAFILE or CONTROL FILE. You can also use it to
recover changes that were lost when a DATAFILE went offline without the OFFLINE NORMAL option.
RESTORING FILES:
When you restore a file, you are replacing a missing or damaged file with a backup copy.
RECOVERY OF FILES:
When you recover a file, changes recorded in the redo log files are applied to the restored files.
RECOVERY STEPS:
1. Damaged or missing files are restored from a backup.
2. Changes from the archived redo log files and online redo log files are applied as necessary. Undo blocks
are generated at this time. This is referred to as rolling forward or cache recovery.
3. The database may now contain committed and uncommitted changes.
4. The undo blocks are used to roll back any uncommitted changes. This is known as rolling back or
transaction recovery.
5. The database is now in a recovered state.
NOTE: For a DATABASE in NOARCHIVELOG mode, you do not have to restore all Oracle files if no redo log file
has been overwritten since the last backup, as illustrated in the following:
Scenario:
-There are two redo logs for a DATABASE,
-A closed DATABASE backup was taken at log sequence 144.
-While the DATABASE was at log sequence 145, data file 2 was lost.
Result: Log sequence 144 has not been overwritten, so data file 2 can be restored and recovered manually.
ADVANTAGES:
Easy to perform, with low risk of error. Recovery time is the time it takes to restore all files.
DISADVANTAGES:
Data is lost and must be reapplied manually. The entire DATABASE is restored to the point of the last whole
closed backup.
COMPLETE RECOVERY
1. Make sure that the DATAFILES to be restored are offline.
2. Restore only the lost or damaged DATAFILES.
3. Do not restore the control files, redo log files, password files, or parameter files.
4. Recover the DATAFILES.
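In practice, these four steps for a single damaged datafile might look like the following sketch; the
tablespace name and paths are assumptions:
SQL> ALTER TABLESPACE users OFFLINE IMMEDIATE;
$ cp /backup/users01.dbf /u01/oradata/orcl/users01.dbf
SQL> RECOVER TABLESPACE users;
SQL> ALTER TABLESPACE users ONLINE;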
COMPLETE RECOVERY IN ARCHIVELOG MODE
ADVANTAGES
1. Only need to restore lost files.
2. Recovers all data to the time of failure.
3. Recovery time is the time it takes to restore lost files and apply all archived log files.
DISADVANTAGES
1. Must have archived log files since the backup from which you are restoring.
To locate data files needing recovery, and where they need recovery from, use the
V$RECOVER_FILE view.
SQL> select * from v$recover_file;
FILE# ONLINE ERROR CHANGE# TIME
----- ------- ------ ------- ----
2 OFFLINE 288772 02-MAR-99
• The ERROR column returns two possible values to define the reason why the file needs to be recovered:
– NULL if the reason is unknown
– OFFLINE NORMAL if recovery is not needed
• The CHANGE# column returns the SCN (system change number) where recovery must start.
While starting up the database on a Monday morning, you get the following error after the database is
mounted:
ORA-01157: cannot identify/lock data file 9 - see DBWR trace file
ORA-01110: data file 9: '/u01/oracle/app/oradata/orcl/users01.dbf'
On investigation, you find that the file system, u01, on the operating system is corrupted and you need
to recover the data file to a new location. The database is running in ARCHIVELOG mode and the
database was backed up on last Friday.
You must ensure that the database is not accessible until the data file is recovered. Which two tasks must
you have accomplished before applying the archived redo log files? (Choose two.)
A. update the control file by using the ALTER DATABASE RENAME FILE command
B. restore the data file from the backup to the new location by using an operating system utility
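A sketch of how these two tasks and the subsequent recovery might be carried out; the new location
/u02 is an assumption:
$ cp /backup/users01.dbf /u02/oracle/app/oradata/orcl/users01.dbf
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RENAME FILE '/u01/oracle/app/oradata/orcl/users01.dbf' TO '/u02/oracle/app/oradata/orcl/users01.dbf';
SQL> RECOVER DATAFILE '/u02/oracle/app/oradata/orcl/users01.dbf';
SQL> ALTER DATABASE OPEN;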
Method 1: Recovering a Closed Database This method of recovery generally uses either the RECOVER DATABASE
or RECOVER DATAFILE commands when:
• The database is not operational 24 hours a day, 7 days a week.
• The recovered files belong to the system or rollback segment tablespace.
• The whole database, or a majority of the data files, need recovery.
Method 2: Recovering an Opened Database, Initially Opened This method of recovery is generally used when:
• File corruption, accidental loss of file, or media failure has occurred, which has not resulted in the database
being shut down.
• The database is operational 24 hours a day, 7 days a week. Downtime for the database must be kept to a
minimum.
• Recovered files do not belong to the system or rollback tablespaces.
Method 3: Recovering an Opened Database, Initially Closed This method of recovery is generally used when:
• A media or hardware failure has brought the system down.
• The database is operational 24 hours a day, 7 days a week. Downtime for the database must be
kept to a minimum.
• The restored files do not belong to the system or rollback tablespace.
Method 4: Recovering a Data File with No Backup This method of recovery is generally used when:
• Media or user failure has resulted in loss of a data file that was never backed up.
• All archived logs exist since the file was created.
• The restored files do not belong to the system or rollback tablespace.
Note: During recovery, all archived logs files need to be available to the Oracle server on disk. If they are on a
backup tape, you must restore them first.
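For Method 4, the lost file is typically re-created empty and then rebuilt from the redo stream; the file
name below is an assumption:
SQL> ALTER DATABASE DATAFILE '/u01/oradata/orcl/data05.dbf' OFFLINE;
SQL> ALTER DATABASE CREATE DATAFILE '/u01/oradata/orcl/data05.dbf';
SQL> RECOVER DATAFILE '/u01/oradata/orcl/data05.dbf';
SQL> ALTER DATABASE DATAFILE '/u01/oradata/orcl/data05.dbf' ONLINE;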
**************************************************************************************
To obtain detailed information about the datafiles associated with the temporary tablespace,
you must query the V$TEMPFILE or DBA_TEMP_FILES views in Oracle9i. Some of the
important columns in the V$TEMPFILE dynamic performance view are NAME, FILE#, TS#,
STATUS, ENABLED, and BYTES.
1. You can only restore using RMAN if the backups were taken or registered with RMAN.
2. To restore to a previous point in time, you may have to use the backup of an older control file and use the
RESTORE CONTROLFILE option. The database should be in NOMOUNT state to restore the control file.
3. The target database must be in mount mode for the restoration of datafiles.
4. All of the datafiles must be restored from a backup taken at the same time.
5. The ALTER DATABASE OPEN RESETLOGS command may be required if a backup of the control file was
restored.
6. A whole database backup is required after opening the database with the RESETLOGS option.
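Taken together, a typical RMAN restore after loss of the control file and datafiles might proceed as
follows (a sketch, assuming control file autobackups were configured):
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;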
*************************************************************************************
INCOMPLETE RECOVERY :
Incomplete recovery reconstructs the database to a prior point in time, before the time of the failure. This
results in the loss of data from transactions committed after the point of recovery. It requires a valid
offline or online backup of all of the DATAFILES made before the recovery point, and all archived logs
from the backup until the specified time of recovery.
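A sketch of a time-based incomplete recovery with RMAN, run with the database mounted; the
timestamp is illustrative:
RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2002-03-09 11:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;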
*************************************************************************
DELETE COMMAND
DELETE BACKUPSET 102; Delete a specific backup set.
DELETE NOPROMPT EXPIRED BACKUP OF TABLESPACE USERS; Delete an expired backup without confirmation.
By DEFAULT, the DELETE command displays a list of the files and prompts you for confirmation before deleting any
file in the list. No prompt is issued by default when running the DELETE command from a command file.
DELETE OBSOLETE;
Delete all backups, copies, and archived redo log files based on the configured retention policy.
DELETE OBSOLETE RECOVERY WINDOW OF 7 DAYS;
VIEWS
1.V$BACKUP_DATAFILE: Is useful for creating equal sized backup sets by determining the number of blocks in
each data file. It can also find the number of corrupt blocks for the DATAFILE.
2.V$BACKUP_REDOLOG: Shows archived logs stored in backup sets.
3.V$BACKUP_SET: Shows backup sets that have been created.
4.V$BACKUP_PIECE: Shows backup pieces created for backup sets.
5.V$SESSION_LONGOPS: To monitor the progress of backups and copies.
6.V$RECOVER_FILE: Identifies datafiles needing recovery, and where recovery needs to start.
7.V$RECOVERY_LOG: Contains useful information only for the Oracle process doing the recovery.
VIEWING THE RECOVERY CATALOG:
8.RC_DATABASE:To determine which databases are currently registered in the recovery catalog.
9.RC_DATAFILE
10.RC_TABLESPACE: To determine which TABLESPACES are currently stored in the recovery catalog for
the target DATABASE.
11.RC_STORED_SCRIPT: To determine which scripts are currently stored in the recovery catalog for the target
DATABASE.
12.RC_STORED_SCRIPT_LINE: You must query the recovery catalog view RC_STORED_SCRIPT_LINE to
obtain the code associated with the RMAN stored scripts. This view contains one row for each line of the
stored script.
*********************************************************************************
The recovery catalog is a schema that is created in a separate DATABASE. It contains the RMAN
metadata obtained from the target DATABASE CONTROLFILE. RMAN propagates information about the
DATABASE structure, archived redo logs, backup sets, and DATAFILE copies into the recovery catalog
from the control file of the target DATABASE. You can use the REPORT and LIST commands to obtain
information from the recovery catalog. You should use a catalog when you have multiple target
databases to manage. The recovery catalog is maintained by RMAN when you do the following:
1. Register the target DATABASE in the catalog.
2. Resynchronize the catalog with the control file of the target DATABASE.
3. Reset the DATABASE to a previous incarnation.
4. Change information about the backups or files.
5. Perform a backup, restore, or recovery operation.
You can store scripts in the recovery catalog.
RMAN creates rows in the recovery catalog that contain information about the target DATABASE. RMAN
copies all pertinent data about the target DATABASE from the control file into the recovery catalog.
RMAN>REGISTER DATABASE;
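A sketch of how the catalog connection, creation, and registration might be done; the catalog owner,
passwords, and service names are assumptions:
$ rman catalog rman/rman@catdb target sys/oracle@ora9i
RMAN> CREATE CATALOG;     # run once, connected as the catalog owner
RMAN> REGISTER DATABASE;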
Backups of database files created using O/S commands must be manually restored and recovered.
You can, however, register these files with the repository by using RMAN's CATALOG command. The
restore and recover operations thereafter are possible using RMAN.
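For example, an existing user-managed copy or archived log might be registered as follows; the file
names are hypothetical:
RMAN> CATALOG DATAFILECOPY 'D:\BACKUP\USERS01.DBF';
RMAN> CATALOG ARCHIVELOG 'D:\ARCH\ARC00123.001';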
Resynchronization of the recovery catalog ensures that the metadata is current with the target database
CONTROLFILE. Resynchronizations can be full or partial. In partial resynchronization, RMAN reads the
current control file to update changed data, but does not resynchronize metadata about the database
physical schema: DATAFILES, TABLESPACES, redo threads, rollback segments, and online redo logs. In a
full resynchronization, RMAN updates all changed records, including schema records. RMAN automatically
detects when it needs to perform a full or partial resynchronization and executes the operation as
needed. You can also force a full resynchronization by issuing a RESYNC CATALOG command. To ensure
that the catalog stays current, run the RESYNC CATALOG command periodically. A good rule of thumb is
to run it at least once every n days, where n is the setting for the initialization parameter
CONTROL_FILE_RECORD_KEEP_TIME. Because the controlfile employs a circular reuse system, backup
and copy records eventually get overwritten. Resynchronizing the catalog ensures that these records are
stored in the catalog and are not lost.
ISSUE THE RESYNC CATALOG COMMAND WHEN YOU:
1. Add or drop a TABLESPACE.
2. Add or drop a DATAFILE.
3. Relocate a database file.
RMAN> RESYNC CATALOG;
Any structural changes to the database cause the control file and recovery catalog to become “out of
synch”. The catalog will be synchronized automatically when a BACKUP or COPY command is issued with
a connection to the catalog. However, this synchronization can cause delay in the backup operation.
RESYNC CATALOG command updates the following records:
LOG HISTORY: Created when a log switch occurs. Recovery Manager tracks this information so that it
knows what archive logs it should expect to find.
ARCHIVED REDO LOG: Associated with archived logs that were created by archiving an online log, by
copying an existing archived log, or by restoring an archived log backup set.
BACKUP HISTORY: Associated with backup sets, backup pieces, backup set members, proxy copies,
and image copies.
PHYSICAL SCHEMA: Associated with DATAFILES and TABLESPACES.
VIEWS:
In addition to the REPORT and LIST commands, you can issue SQL queries against the data dictionary and
dynamic views that are created when the recovery catalog is created.
VIEWING THE RECOVERY CATALOG:
1.RC_DATABASE: To determine which databases are currently registered in the recovery catalog.
2.RC_DATAFILE
3.RC_TABLESPACE: To determine which TABLESPACES are currently stored in the recovery catalog
for the target DATABASE.
4.RC_STORED_SCRIPT: To determine which scripts are currently stored in the recovery catalog for the
target DATABASE.
5.RC_STORED_SCRIPT_LINE: To list the text of a specified stored script or you can use the PRINT
SCRIPT command.
CATALOG MAINTENANCE
(a) Register(b) Resynchronize (c) Reset (d) Change
(e) Backup (f) Restore (g) Recover
STORED SCRIPTS:
A Recovery Manager script is a set of commands that:
1. Specify frequently used backup, recover, and restore operations.
2. Are created using the CREATE SCRIPT command.
3. Are stored in the recovery catalog.
4. Can be called only by using the RUN command.
5. Enable you to plan, develop and test a set of commands for backing up, restoring, and
recovering the DATABASE.
6. Minimize the potential for operator errors.
RMAN provides a way of storing these scripts in the recovery catalog.
USE PRINT SCRIPT TO DISPLAY A SCRIPT
1. PRINT SCRIPT LEVEL0BACKUP;
Use CREATE SCRIPT TO STORE A SCRIPT
2. CREATE SCRIPT LEVEL0BACKUP {
BACKUP INCREMENTAL LEVEL 0
FORMAT 'E:\BACKUP\%d%s%p' FILESPERSET 5
(DATABASE INCLUDE CURRENT CONTROLFILE);
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
}
USE EXECUTE SCRIPTS TO RUN A SCRIPT
3. RUN { EXECUTE SCRIPT LEVEL0BACKUP;}
You can rewrite a script with the REPLACE SCRIPT command. You must supply the entire script, not just
the changed lines.
4. REPLACE SCRIPT LEVEL0BACKUP {
…….
FILESPERSET 3
……. }
USE DELETE SCRIPT TO REMOVE A SCRIPT
5. DELETE SCRIPT LEVEL0BACKUP;
There are three databases in your company: PDDB, QTDB, and SLDB. A single RMAN recovery catalog is
used for all the three databases. In the recovery catalog you have a stored script, Level0Backup, created
for performing a level 0 backup. For which database will the backup be performed when you execute this
script?
C. the target database to which RMAN is connected
***********************************************************************************************
*Which backup is considered a logical backup? C. exports of schema objects into a binary file
• Recover:
– A logical database to a point different from the rest of the physical database when multiple logical
databases exist in separate tablespaces of one physical database
– A tablespace in a Very Large Database (VLDB) when tablespace point-in-time recovery (TSPITR) is
more efficient than restoring the whole database from a backup and rolling it forward
Export Methods
• Interactive dialog: By specifying the EXP command at the operating system prompt with no parameters, the
Export utility prompts you for inputs, supplying default values.
• The export page of Data Manager within Oracle Enterprise Manager
• If command line mode is chosen, the selected options must be explicitly specified on the command line.
Any missing options will default to Export utility default values.
Note: Many options are only available by using the command line interface. However, you can use a
parameter file with command line.
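For example, a command-line export of two tables with a log file might look like this sketch; the file
names are illustrative:
$ exp scott/tiger TABLES=(emp,dept) FILE=emp_dept.dmp LOG=exp_emp_dept.log ROWS=y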
Export Parameters
Direct-Path Restrictions
The direct-path option of the Export utility has some restrictions that differentiate it from the
conventional-path export.
• The direct-path export feature cannot be invoked by using an interactive EXP session.
• When the direct-path option is used, the client-side character set must match the character set of the
server side. Use the environment variable
NLS_LANG to set the same character set as that of the server.
• The BUFFER parameter of the EXPORT utility has no effect on direct-path Export; it is used only by the
conventional-path option.
• You cannot direct-path Export rows that contain the datatypes of LOB, BFILE, REF, or object type
columns, including VARRAY columns and nested tables. Only the data definition to create the table is
exported, not the data.
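A direct-path export is requested with the DIRECT parameter; a sketch with illustrative file names:
$ exp system/manager FULL=y DIRECT=y FILE=full_direct.dmp LOG=full_direct.log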
Import Utility
The Import utility can be used for recovery of data by using a valid Export utility file.
Uses of the Import Utility for Recovery
• Creating table definitions since the table definitions are stored in the export file.
Choosing to import data without rows will create just the table definitions.
• Extracting data from a valid export file by using the Table, User, Tablespace, or Full Import modes.
• Importing data from a complete, incremental or cumulative export file.
• Recovering from user failure errors where a table is accidentally dropped or truncated, by using one of
the previously mentioned methods.
Import Modes
Mode Description
Table Import specified tables into a schema.
User Import all objects that belong to a schema
Tablespace Import all definitions of the objects contained in the tablespace
Full Database Import all objects from the export file
Table Mode
Table mode imports all specified tables in the user’s schema, rather than all tables. A privileged user can
import specified tables owned by other users.
User Mode
User mode imports all objects for a user’s schema. Privileged users can import all objects in the schemas
of a specified set of users.
Tablespace Mode
Tablespace mode allows a privileged user to move a set of tablespaces from one Oracle database to
another.
Full Database Mode
Full database mode imports all database objects, except those in the SYS schema. Only privileged users
can import in this mode.
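For instance, importing one user's objects into another schema might look like this sketch; the schema
and file names are illustrative:
$ imp system/manager FILE=scott.dmp FROMUSER=scott TOUSER=blake LOG=imp_blake.log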
Which of the following roles must be granted to a user to perform a full database
import? IMP_FULL_DATABASE
Import parameter
Parameter Description (Default)
BUFFER                 Size of the data buffer in bytes: integer (OS-specific)
DATAFILES              List the datafiles to be transported into the database (None)
DESTROY                Specifies whether or not the existing data files making up the database should be reused (N)
FROMUSER               A list of schemas containing objects to import (None)
FULL                   Import entire file (N)
HELP                   Display import parameters in interactive mode (N)
IGNORE                 Ignore create errors due to an object's existence (N)
INCTYPE                Specifies the type of incremental import; options are SYSTEM and RESTORE
INDEXES                Indexes to import (Y)
INDEXFILE              Specifies a file to receive index-creation commands (None)
LOG                    File for informational and error messages (None)
PARFILE                Parameter specification file (None)
POINT_IN_TIME_RECOVER  Indicates whether or not the Import utility recovers one or more tablespaces in an Oracle database to a prior point in time without affecting the rest of the database (release 8.0 only)
ROWS                   Include table rows in import (Y)
TABLES                 Tables to import (None)
TABLESPACES            List of tablespaces to be transported into the database (None)
TOUSER                 Specifies a list of usernames whose schemas will be imported (None)
TRANSPORT_TABLESPACE   Instructs Import to import transportable tablespace metadata from an export file (N)
TTS_OWNERS             List the users who own the data in the transportable tablespace set
USERID                 Username/password of the user performing the import (None)
Which of the following parameters would you use to record the errors that might be generated during the
import operation? A. LOG
D:\oracle\ora92\bin>EXP
Username: SCOTT/TIGER@NICORA
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
D:\oracle\ora92\bin>
******************
D:\oracle\ora92\bin>IMP
Username: SCOTT/TIGER@NICORA
*****************************************************************************
Which of the following options enables a user to get authenticated through a single
password instead of using multiple passwords? A. Wallet Manager