
CHAP 1 Networking Overview

Network Environment Challenges


• Configuring the network environment
• Maintaining the network
• Tuning, troubleshooting, and monitoring the network
• Implementing security in the network
• Integrating legacy systems

Configuring the Network Environment


To implement a successful networking environment, consider the following questions:
• What type of network are you configuring? Is it a small network with a few clients, or a more complex
network with many clients and many servers?
• Are you using a single protocol or multiple protocols?
• Is the network static or expanding?
• What configuration options do you have?
• Are there user-friendly tools available to configure the network?
• Is your network strictly client/server or is it multi-tiered?

Maintaining the Network


• How much network maintenance is required for your enterprise?
• Will you add clients and servers to your network?
• Do you anticipate frequent upgrades?

Tuning, Troubleshooting, and Monitoring the Network


• Does your network include the needed tools?
• How large a workload do you anticipate?
– Number of users
– Number of transactions
– Number of nodes
– Location of nodes

Implementing Security in the Network


• Do you need to secure your network environment?
• Is sensitive information being transmitted over the network?
• What tools are available for implementing security?

Integrating Legacy Systems


How will your legacy systems interact with your networking environment?

Simple Network: Two-Tier


• Network connects client and server
• Client and server speak the same “language” or protocol
The client and server communicate over a network using a given protocol, which must be installed on
both the client and the server. A common error in client/server network development is to prototype an
application in a small, two-tier environment and then scale up by simply adding more users to the server;
without a middle tier, each added user consumes additional server resources and the server quickly
becomes a bottleneck.

Simple to Complex Network: N-Tier


• Client can be a thin client or a PC
• Middle tier can contain applications and services
• Server holds actual data
• Translation services (as in adapting a legacy application on a mainframe to a client-server environment
or acting as a bridge between protocols)
• Scalability services (as in acting as a transaction-processing monitor to balance the load of requests
between servers)
• Intelligent agent services (as in mapping a request to a number of different servers, collating the
results, and returning a single response to the client)

Complex Network Issues


Networks should improve communication rather than impede distributed operations.
In a more complex network environment, several issues must be addressed:
• Different hardware platforms that run different operating systems
• Multiple protocols used on these platforms
• Variable syntax issues between the different but connected applications
• Different geographical locations in which the connected applications reside
A well-designed complex network can support a large-scale distributed system.

Oracle9i Networking Solutions


• Connectivity
• Directory Services
• Scalability
• Security
• Accessibility

Connectivity: Key Features


• Protocol independence
• Comprehensive platform support
• Integrated GUI administration tools
• Multiple configuration options
• Tracing and diagnostic toolset
• Basic security

Connectivity: Oracle Net Services


Oracle Net Services provides the industry’s broadest support for network transport protocols, including
TCP/IP, TCP/IP with SSL, Named Pipes, Novell SPX/IPX, IBM LU6.2, and DECnet. For simple
environments, Oracle Net Services default settings provide a transparent name resolution adapter. This
eliminates the need for generating configuration files. For more complicated environments, Oracle Net
Services employs the Oracle Internet Directory to store connection information.
Oracle Net Services addresses Internet connectivity through integration of standard solutions such as
Remote Authentication Dial-In User Service (RADIUS) and Lightweight Directory Access Protocol (LDAP)
with legacy systems.

Connectivity: Database Connectivity with HTTP/IIOP


Connections to the database are not limited to Oracle Net Services alone; clients can establish
connections by using Internet protocols such as Hypertext Transfer Protocol (HTTP) and Internet Inter-
ORB Protocol (IIOP). Using these Internet protocols, users can run applications from within a Web
browser to connect directly to an Oracle9i database. Internet technologies such as iFS (Internet File
System) and Enterprise JavaBeans (EJB) are supported over these connections, and the Internet standard
Secure Sockets Layer (SSL) protocol provides added security to network connections.

Directory Services: Directory Naming


• Process of resolving a network alias using an LDAP-compliant directory server
• Clients must be configured to use an LDAP-compliant directory server
LDAP is an Internet standard for directory services. It provides:
• A well-defined standard interface to a single, extensible directory service, such as the Oracle
Internet Directory
• Rapid development and deployment of directory-enabled applications
• An array of programmatic interfaces that enables seamless deployment of Internet-ready
applications
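
For illustration, a client configured for directory naming locates the directory server through an ldap.ora
file. A minimal sketch, in which the host, ports, and administrative context are assumptions rather than
values from these notes:

# Hypothetical ldap.ora (example values only)
DIRECTORY_SERVERS = (ldap-server.us.oracle.com:389:636)   # host:port:ssl_port
DIRECTORY_SERVER_TYPE = OID                               # Oracle Internet Directory
DEFAULT_ADMIN_CONTEXT = "dc=us,dc=oracle,dc=com"          # root of the Oracle entries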

Directory Services: Oracle Internet Directory


An Oracle database-based LDAP Version 3 directory server with the high performance, scalability,
robustness, and availability of Oracle9i, used for centralizing database user and connection information.
Features:
1. Integrates tightly with Oracle8i and Oracle9i databases
2. Scales for secure Internet computing
3. Provides a secure and reliable directory structure

High Availability
Oracle Internet Directory is designed to meet the needs of a variety of important applications. For
example, it supports full, multimaster replication between directory servers: If one server in a replication
community becomes unavailable, then a user can access the data from another server. Information about
changes made to directory data on a server is stored in special tables on the Oracle9i database. These
are replicated throughout the directory environment by Oracle9i Replication, a robust replication
mechanism.
Oracle Internet Directory also takes advantage of all the availability features of the Oracle9i database. Because
directory information is stored securely in the Oracle9i database, it is protected by Oracle's backup
capabilities. Additionally, the Oracle9i database, running with large datastores and heavy loads, can
recover from system failures quickly.
Security
Oracle Internet Directory offers comprehensive and flexible access control. An administrator can grant or
restrict access to a specific directory object or to an entire directory subtree. Moreover, Oracle Internet
Directory implements three levels of user authentication: anonymous, password-based, and certificate-
based, using Secure Sockets Layer (SSL) Version 3 for authenticated access and data privacy.

Scalability: Oracle Shared Server


A database server that is configured to allow many user processes to share very few server processes, so
the number of users that can be supported is increased. With shared server configuration, many user
processes connect to a dispatcher. The dispatcher directs multiple incoming network session requests to
a common queue. An idle shared server process from a shared pool of server processes picks up a
request from the queue. This means that a small pool of server processes can serve a large number of
clients. Contrast with dedicated server.
Dispatcher: A process that enables many clients to connect to the same server without the need for a
dedicated server process for each client. A dispatcher handles and directs multiple incoming network
session requests to shared server processes.
• Enables a large number of users to connect to a database simultaneously
• Database resources are shared, resulting in efficient memory and processing usage
• Connections are routed via a dispatcher
• Server processes are not dedicated to each client
• Server processes serve client processes as needed
Known as Oracle Multithreaded Server in Oracle8i.
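
A minimal configuration sketch, assuming TCP/IP and example process counts (all values here are
assumptions): shared server is enabled through initialization parameters such as the following.

# Hypothetical init.ora fragment for shared server
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"   # start three TCP/IP dispatchers
MAX_DISPATCHERS = 10                            # upper bound on dispatcher processes
SHARED_SERVERS = 5                              # shared servers created at instance startup
MAX_SHARED_SERVERS = 20                         # upper bound on shared server processes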

Scalability: Connection Manager


Connection Manager is a gateway process and control program tool normally configured and installed on
a middle tier. Connection Manager offers:
• Multiplexing of connections: Connection Manager can handle several incoming connections and transmit
them simultaneously over a single outgoing connection to the destination. The configuration is offered
only in a TCP/IP environment.
• Cross-protocol connectivity: Using this feature, a client and a server can communicate with different
network protocols.
• Network access control: Using Connection Manager, designated clients can connect to certain servers in
a network based on the TCP/IP protocol.
Benefits of Connection Manager
• Supports more users on the database tier when Connection Manager is deployed on a middle tier,
and provides better use of resources and scalability
• Enables cross-protocol communication
• Can act as an access control mechanism (a configuration sketch follows)
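
As an illustrative sketch, a Connection Manager middle tier is configured through a cman.ora file; the
host and client/server names below are assumptions. The RULE entry demonstrates the access control
feature by accepting connections from a single client to a single server:

# Hypothetical cman.ora (example addresses only)
CMAN = (ADDRESS=(PROTOCOL=tcp)(HOST=cman-host)(PORT=1630))
CMAN_RULES =
  (RULE_LIST=
    (RULE=(SRC=client-host)(DST=sales-server)(SRV=*)(ACT=accept)))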

Security: Oracle Advanced Security Features


Oracle Advanced Security provides data privacy, integrity, authentication, single sign-on, and access
authorization.
Encryption/Data Privacy:
• Encrypts data between network nodes
• RC4, DES, and Triple-DES encryption
Authentication:
• User authentication through several third-party authentication services, and through the use of SSL and
digital certificates
• Kerberos, RADIUS, CyberSafe
Data Integrity:
• Ensures the integrity of data packets during transmission
• Uses MD5 or SHA-1 cryptographic checksums
Single sign-on lets a user access multiple accounts and applications with a single password, entered
during a single connection.
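
These features are switched on through sqlnet.ora parameters on the client and the server. A minimal
server-side sketch, in which the algorithm choices are assumptions:

# Hypothetical server-side sqlnet.ora fragment
SQLNET.ENCRYPTION_SERVER = required               # refuse unencrypted connections
SQLNET.ENCRYPTION_TYPES_SERVER = (3DES168, RC4_128)
SQLNET.CRYPTO_CHECKSUM_SERVER = requested         # negotiate integrity checking
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (MD5)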

Security: Oracle Net Services & firewalls


• Oracle Corporation works with key firewall vendors to provide firewall support.
• The Oracle Net Firewall Proxy kit allows firewall vendors to provide connection support for Oracle
environments.
• The Oracle Net Firewall Proxy is based on Oracle Connection Manager.
• Two types: proxy-based firewalls and stateful packet inspection firewalls

Accessibility: Heterogeneous Services


Much of the processing power of the Oracle Transparent Gateways for Oracle7 and earlier versions of the
server has been integrated into Oracle8i and later versions of the Oracle database server as a module
called Heterogeneous Services.
• Enables access to legacy data as if it resided in a single, local relational database
• Retrieve and modify data stored in a non-Oracle system using the Oracle SQL dialect
• Execute stored procedures, services, or APIs at the non-Oracle system using Oracle PL/SQL calls
• Issue these SQL statements or PL/SQL calls from either Oracle client applications such as SQL*Plus or
Oracle programmatic interfaces such as Pro*C or OCI
The term "non-Oracle system" refers to the following:
Any system accessed by PL/SQL procedures written in C (that is, by external procedures)
Any system accessed through SQL (that is, by Oracle Transparent Gateways or generic connectivity)
Any system accessed procedurally (that is, by procedural gateways)
For example, the Oracle Transparent Gateway for Sybase on Solaris accesses a Sybase database running
on a Sun Solaris platform; similar gateways exist for DB2, SQL Server, and Informix.
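
To make this concrete, here is a hypothetical sketch in which sybs is an assumed net service name
already configured for the gateway; the client then reaches the non-Oracle system through an ordinary
database link:

-- Create a link through the gateway (sybs is a hypothetical net service name)
CREATE DATABASE LINK sybase_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'sybs';

-- Ordinary Oracle SQL against a table in the non-Oracle system
SELECT * FROM emp@sybase_link;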

Accessibility: External Procedures


• Functions written in a 3GL can be called from PL/SQL
• Allows developers more flexibility than SQL or PL/SQL provide
• The listener can listen for external procedure calls
• PL/SQL passes the following information to extproc (which loads the shared library and invokes the
external procedure):
1. Shared library name
2. External Procedure Name
3. Parameters
A PL/SQL procedure executing on an Oracle server can call an external procedure or function that is
written in the C programming language and stored in a shared library. In order for the Oracle database to
connect to external procedures, the server must be configured with a net service name and the listener
must be configured with protocol address and service information. Oracle Net Configuration Assistant
automatically configures the necessary information during installation.
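
A minimal sketch, assuming a C function c_factorial compiled into /u01/app/oracle/lib/mylib.so (both
names are hypothetical):

-- Register the shared library with the database
CREATE OR REPLACE LIBRARY my_c_lib AS '/u01/app/oracle/lib/mylib.so';

-- PL/SQL call specification for the external C function
CREATE OR REPLACE FUNCTION factorial (n BINARY_INTEGER)
  RETURN BINARY_INTEGER
AS LANGUAGE C
  LIBRARY my_c_lib
  NAME "c_factorial";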

Oracle Net Services Configuration and Administration Tools


Oracle Net Manager
A graphical user interface tool that combines configuration abilities with Oracle Names component control
to provide an integrated environment for configuring and managing Oracle Net. It can be used on either
the client or the server, and is integrated with Oracle Enterprise Manager.
Oracle Net Configuration Assistant
Oracle Net Configuration Assistant allows you to configure basic elements of the network, including
naming methods, listeners, and directory service access. Oracle Net Configuration Assistant runs in two
different modes: installation mode and stand-alone mode.
Oracle Net Control Utilities
Oracle Net Services provides the following command-line utilities to help you start, stop, configure, and
control each network component:
- Listener Control utility
- Oracle Connection Manager Control utility
- Oracle Names Control utility

*************************************************************************************
CHAP 2 Oracle Net Architecture

Oracle Net connections


Used to establish connections between applications on a network, depending on:
- The network configuration
- The locations of the nodes
- The applications
- The network protocol
Connection types:
- Client/server application
- Java application
- Web-client application

Client server application connection

This illustration depicts the various layers of stack communication used in a client/server application
connection. On the client side, from the top down, the stack is constructed with the following layers:
• Client Application (uses OCI)
• Presentation - TTC
• Oracle Net Foundation Layer
• Oracle Protocol Support
• Network Protocol (TCP/IP, TCP/IP with SSL, VI, LU6.2)
On the server side, from the top down, the stack is constructed with the following layers:
• RDBMS(uses OPI)
• Presentation - TTC
• Oracle Net Foundation Layer
• Oracle Protocol Support
• Network Protocol (TCP/IP, TCP/IP with SSL, VI, LU6.2)
The Oracle Net Foundation Layer and Oracle Protocol Support together comprise Oracle Net. Associated
with the Oracle Net Foundation Layer on the client side are the naming methods. Associated with the
Oracle Net Foundation Layer on both the client and server sides are the security services.
Client Application
During a session with the database, the client uses Oracle Call Interface (OCI) to interact with the
database server. OCI is a software component that provides an interface between the client application
and the SQL language the database server understands.

Two-Task Common (TTC)


The presentation layer of the OSI model used by client/server applications is Two-Task Common (TTC).
TTC provides character set and data type conversion between different character sets or formats on the
client and database server.
Oracle Net Foundation Layer
The Oracle Net foundation layer is responsible for establishing and maintaining the connection between
the client application and database server, as well as exchanging messages between them. The Oracle
Net foundation layer is able to perform these tasks because of a technology called Transparent Network
Substrate (TNS). TNS provides a single, common interface functioning over all industry-standard
protocols. In other words, TNS enables peer-to-peer application connectivity. In a peer-to-peer
architecture, two or more computers (called nodes when they are employed in a networking
environment) can communicate with each other directly, without the need for any intermediary devices.
On the client side, the Oracle Net foundation layer receives client application requests and resolves all
generic computer-level connectivity issues, such as:
• The location of the database server or destination
• Whether one or more protocols are involved in the connection
• How to handle interrupts between client and database server based on the capabilities of each
On the server side, the Oracle Net foundation layer performs the same tasks as it does on the client side
and also works with the listener to receive incoming connection requests.
In addition to establishing and maintaining connections, the Oracle Net foundation layer communicates
with naming methods to resolve names and uses Oracle Advanced security services to ensure secure
connections. Oracle Net Foundation Layer - implementation of Session Layer in OSI Model

Oracle Protocol Support


Positioned between the Oracle Net foundation layer and the network protocol layer, the Oracle protocol
support layer is responsible for mapping TNS functionality to industry-standard protocols used in the
client/server connection. This layer supports the following network protocols:
• TCP/IP
• TCP/IP with SSL
• Named Pipes
• LU6.2
• VI

Oracle Program Interface (OPI)


Instead of OCI, the database server uses Oracle Program Interface (OPI). For each statement sent from
OCI, OPI provides a response.

Web client application connection


Using an application Web server as a middle tier that is configured with either of the following:
• The JDBC OCI driver (Java Application Client)
• The JDBC Thin driver (Java Applet Client)
Connecting directly using HTTP/IIOP

Web client application connection: Middle Tier Application Web Server


Java application/applet client (Web browser) --HTTP--> Application Web server (Oracle Net client) --Oracle Net over TCP/IP--> Oracle server

Left figure: This illustration depicts stack communication layers used by JDBC drivers. The JDBC OCI
driver stack, from the top down, is constructed with the following layers:

• Java Client Application


• JDBC OCI Driver (uses OCI)
• Presentation - TTC
• Oracle Net Foundation Layer
• Oracle Protocol Support
• Network Protocol (TCP/IP, TCP/IP with SSL, VI, LU6.2)

Right figure: The JDBC Thin driver stack, from the top down, is constructed with the following layers:

• Java Applet / Application


• JDBC Thin Driver
• Presentation - JavaTTC
• JavaNet
• TCP/IP Network Protocol

Web Connection using HTTP/IIOP


Oracle Net is not required, but the Oracle database server must support the protocols; an application
Web server is also not required.
Client (Web browser) --HTTP/IIOP--> Oracle server (the Oracle server supports HTTP/IIOP)

Web clients that do not require an application Web server to access applications can access the Oracle
database directly, for example, by using a Java applet. In addition to regular connections, the database
can be configured to accept HTTP and Internet Inter-ORB Protocol (IIOP) connections. These protocols
are used for connections to Oracle9i JVM in the Oracle9i instance.
The Oracle database server is also configured to support HTTP and IIOP.
One Web browser uses the HTTP protocol to connect to the Oracle Net layer on the database server. The
second Web browser uses the IIOP protocol to connect to the Oracle Net layer on the database server.
The third Web browser shows a communication stack. From the top down, the stack is constructed with
the following layers: 1. Java Applet 2. JDBC Thin Driver 3. JavaNet
This browser uses the TCP/IP network protocol to connect to the Oracle Net layer on the database.

Connectivity Concepts and Terminology


Database Services: A database service entry contains the actual name of the database, as well as several
attributes, including those that constitute the connect descriptor.
Service Name: A logical representation of a database, which is the way a database is presented to clients.
Connect Descriptor: A specially formatted description of the destination for a network connection. The
destination service is identified by its service name for Oracle9i or Oracle8i databases or by its Oracle
System Identifier (SID) for Oracle8 databases.
Listener: A process that resides on the server whose responsibility is to listen for incoming client
connection requests and manage the traffic to the server. Every time a client requests a network session
with a server, a listener receives the actual request. If the client information matches the listener
information, then the listener grants the request and hands the connection to the server.
Service Registration: Service registration provides the listener with the following information:
- Service names for each running instance of the database
- Instance name(s) of the database
- Service handlers (dispatchers and dedicated servers) available for the instance
- Dispatcher, instance, and node load information
Service Handlers: Connection points to a database server; a service handler can be a dedicated server or
a dispatcher.

Oracle Net Configuration Models:


Localized management: Network address information stored in local (tnsnames.ora) files on each
computer in the network.
Centralized management:
Network address information is stored in a centralized directory service, such as an LDAP-compliant
directory server or an Oracle Names server.
Note: In future releases, Oracle Names will not be supported as a centralized naming method.
Oracle naming methods:
A resolution method used by a client application to resolve a name to a network address when
attempting to connect to a database service. Oracle Net supports five naming methods:
Local naming: A naming method that resolves a net service name, stored in a client's tnsnames.ora file,
to a connect descriptor. Local naming is most appropriate for simple distributed networks with a small
number of services that change infrequently.
Directory naming: A naming method that resolves a connect identifier to a connect descriptor, stored in a
central directory server or in an LDAP-compliant directory server, including Oracle Internet Directory,
Microsoft Active Directory, or Novell Directory Services.
Oracle Names: An Oracle directory service made up of a system of Oracle Names servers that provide
name-to-address resolution for each Oracle Net service on the network. Use Oracle Names naming if you
have an existing release 8.0 or release 7.x configuration.
Host naming: A naming method that enables users to connect to an Oracle database server by using a
host name alias in a TCP/IP environment. Host names are mapped to the server's global database name
in an existing names resolution service, such as Domain Name System (DNS), Network Information
Service (NIS) or a centrally-maintained set of /etc/hosts files.
External naming: A category of naming methods that resolves services, stored in non-Oracle naming
services, to network addresses. External naming methods include: Network Information Service (Sun
NIS), Cell Directory Service (DCE CDS)
Note: A small organization with a few databases can use host naming or local naming; a large
organization with several databases should use directory naming with a centralized LDAP-compliant
directory server.

Oracle Net Configuration Files


ldap.ora: Located on the database server and client computers configured for centralized management
features, this file contains parameters necessary to access a directory server.
listener.ora: Located on the database server, this configuration file for the listener includes:
* Protocol addresses it is accepting connection requests on
* Database and nondatabase services it is listening for
* Control parameters used by the listener
names.ora: Located on the Oracle Names server, this file includes the location, domain information, and
optional configuration parameters for an Oracle Names server.
sqlnet.ora: Located on client and database server computers, this file may contain:
* Client domain to append to unqualified service names or net service names
* Order of naming methods the client should use when resolving a name
* Logging and tracing features to use
* Route of connections
* Preferred Oracle Names servers
* External naming parameters
* Oracle Advanced Security parameters
* Database access control parameters
tnsnames.ora: Located on the clients, this file contains net service names mapped to connect descriptors.
This file is used for the local naming method.
Note: Configuration files are typically created in $ORACLE_HOME/network/admin on UNIX, and
ORACLE_HOME\network\admin on Windows operating systems.
*However, configuration files can be created in a variety of places, because Oracle Net searches for the
configuration files in the following order.
For the sqlnet.ora and ldap.ora files:
1. The current working directory from which the application is run
2. The directory specified by the TNS_ADMIN environment variable (if TNS_ADMIN is not defined as a
variable on Windows NT, it may be set in the registry)
3. The $ORACLE_HOME/network/admin directory on UNIX, or the ORACLE_HOME\network\admin
directory on Windows operating systems

For the cman.ora, listener.ora, and tnsnames.ora files:


1. The directory specified by the TNS_ADMIN environment variable (if TNS_ADMIN is not defined as a
variable on Windows NT, it may be set in the registry)
2. The node's global configuration directory on UNIX; for Sun Solaris, this directory is /var/opt/oracle
3. The $ORACLE_HOME/network/admin directory on UNIX, or the ORACLE_HOME\network\admin
directory on Windows operating systems

Note: TNS_ADMIN: You can set TNS_ADMIN to change the directory in which the configuration files are
looked for, overriding the default location. For example, if you set TNS_ADMIN to
ORACLE_BASE\ORACLE_HOME\test\admin, the configuration files are read from
ORACLE_BASE\ORACLE_HOME\test\admin.
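
For example, on UNIX (a sketch; the path is an assumption):

$ TNS_ADMIN=/u01/app/oracle/product/9.2.0/test/admin
$ export TNS_ADMIN
$ sqlplus scott/tiger@oradb    # name resolution now uses files under $TNS_ADMIN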

*************************************************************************************
CHAP 3 Basic Oracle Net Server Side Configuration

Characteristics of the Listener Process


The listener is a process running on a node that listens for incoming connections on behalf of a database
or a number of databases. The following are the characteristics of a listener:
• A listener process can listen for more than one database.
• Multiple listeners can listen on behalf of a single database to perform load balancing.
• The listener can listen on multiple protocols.
• The default name of the listener in Net8/Oracle Net is LISTENER.
• The name of the listener must be unique within the listener.ora file on the machine on which it resides.

Note: An Oracle9i database is supported only by an Oracle9i listener; an Oracle9i listener can also be
used with earlier versions of the Oracle database.

Connection Methods:
When a connection request is made by a client to a server, the listener performs one of the following:
• Spawns a process and bequeaths (passes) the connection to it (Dedicated Server Configuration)
• Hands Off the connection to a dispatcher in an Oracle Shared server configuration (not possible for
Dedicated Server Configuration)
• Redirects the connection to a dispatcher or an existing server process (Shared Server Configuration)
Note: Whether a connection session is bequeathed, handed off, or redirected to an existing process is
transparent to the user; it can be detected only by turning on tracing.

SPAWNED OR BEQUEATH SESSION


When the listener spawns a dedicated server process and bequeaths (passes) the connection to the
dedicated server process, the session is called a Bequeath session.
The following sequence of events occurs:
1. The listener receives a client connection request.
2. The listener starts a dedicated server process, and the dedicated server inherits the connection
request from the listener.
3. The client is now connected directly to the dedicated server.
If, because of the OS or protocol, a connection cannot be passed between two different processes on the
same machine, a redirect must take place instead.
Note: When a client disconnects, the dedicated server process associated with the client closes.

This illustration shows the connection sequence described in the preceding text. It also shows a database
instance that contains a dedicated server process, enabling the client to connect to an Oracle database.

Note: USE_SHARED_SOCKET: You can set the USE_SHARED_SOCKET parameter to TRUE to enable
the use of shared sockets. If this parameter is set to true, the network listener passes the socket
descriptor for client connections to the database thread. As a result, the client does not need to establish
a new connection to the database thread and database connection time improves. Also, all database
connections share the port number used by the network listener, which can be useful if you are setting
up third-party proxy servers. The default is FALSE.
This parameter only works in dedicated server mode in a TCP/IP environment (using Windows Sockets
API WINSOCK2). If this parameter is set, you cannot use the 9.0 listener to spawn Oracle 7.x databases.
To spawn a dedicated server for an Oracle database not associated with the same Oracle home as the
listener and have shared socket enabled, you must also set the variable USE_SHARED_SOCKET for both
Oracle homes.
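
For example, on Windows (a sketch; depending on the installation this variable may instead be set in
the registry under the Oracle home key):

C:\> set USE_SHARED_SOCKET=TRUE
C:\> lsnrctl start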

DIRECT HAND-OFF CONNECTIONS


The listener uses the dispatcher as a type of service handler to which it can direct client requests. When
a client request arrives, the listener performs one of the following actions:
• Hands the connection request directly to a dispatcher.
• Issues a redirect message to the client, containing the protocol address of a dispatcher. The
client then terminates the network session to the listener and establishes a network session to
the dispatcher, using the network address provided in the redirect message.
The listener uses direct hand off whenever possible. Redirect messages are used, for example, when
dispatchers are remote to the listener.
1. The listener receives a client connection request.
2. The listener hands the connect request directly to the dispatcher.
3. The client is now connected to the dispatcher.

This illustration shows the direct hand-off sequence described in the preceding text. It also shows a
database instance that contains a dispatcher and two shared server processes. One of the shared server
processes picks up the connection request from the dispatcher, enabling a connection to an Oracle
database.
REDIRECTED SESSION
1. The listener receives a client connection request.
2. The listener provides the location of the dispatcher to the client in a redirect message.
3. The client connects directly to the dispatcher.

This illustration shows the redirected connection sequence described in the preceding text. It also shows
a database instance that contains a dispatcher and two shared server processes. One of the shared
server processes picks up the connection request from the dispatcher, enabling a connection to an Oracle
database.

Service configuration and Registration:


The listener can be configured in two ways:
Dynamic Service Registration
A feature by which the PMON process automatically registers information with a listener. Because this
information is registered with the listener, the listener.ora file does not need to be configured with this
static information.
Static Service Registration
For Oracle8 and earlier releases, the listener.ora file must be configured; static registration is also
required for Oracle Enterprise Manager (OEM) and for external procedures and Heterogeneous Services.
Service registration provides the listener with information about:
1. Service names for each running instance of the database
2. Instance names of the database
3. Service handlers (dispatchers and dedicated servers) available for each instance; these enable the
listener to direct a client request appropriately
4. Dispatcher, instance, and node load information; this load information enables the listener to
determine which dispatcher can best handle a client connection request (if all dispatchers are
blocked, the listener can spawn a dedicated server for the connection)

Dynamic Service registration offers the following benefits:


Simplified configuration
Service registration reduces the need for the SID_LIST_LISTENER_NAME parameter setting, which
specifies information about the databases served by the listener, in the listener.ora file.
Note: The SID_LIST_listener_name parameter is still required if you are using Oracle Enterprise Manager
to manage the database.
Connect-time failover
Because the listener always knows the state of the instances, service registration facilitates automatic
failover of the client connect request to a different instance if one instance is down.
In a static configuration model, a listener would start a dedicated server upon receiving a client request.
The server would later find out that the instance is not up, causing an "Oracle not available" error
message.
Connection load balancing
Service registration enables the listener to forward client connect requests to the least loaded instance
and dispatcher or dedicated server. This balances the load across the service handlers and nodes.

Static Service Registration: Listener.ora file


When the Oracle software is installed, the LISTENER.ORA file is created for the starter database with the
following default settings:
• Listener name LISTENER
• Port 1521
• Protocols TCP/IP and IPC
• SID name Default instance
• Host name Default host name

The listener.ora file is used to configure the listener. The listener.ora file must reside on the machine or
node on which the listener is to reside. The listener.ora file contains configuration information for the
following:
• The listener name
• The listener address
• Databases that use the listener
• Listener parameters

# LISTENER.ORA Network Configuration File: C:\oracle\ora92\network\admin\listener.ora


# Generated by Oracle configuration tools.
# The name of the listener. The default name is LISTENER.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
# The ADDRESS_LIST parameter contains a block of addresses at which the listener listens for
# incoming connections. Each of the addresses defined in this block represents a different way by which
# a listener receives connection.
(ADDRESS_LIST =
# IPC addresses identify both incoming connection requests from applications on the same node as the
# listener and information sent or registered by a database dispatcher. If the IPC addresses identify
# connection requests from the same node, the KEY value is equal to the service name of the database.
# If the addresses identify a database dispatcher, the KEY value is equal to the database system
# identifier (SID). If the service name is the same as the SID, only one IPC address is needed.
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(ADDRESS_LIST =
# The TCP address identifies incoming TCP connections from clients on the network attempting to
# connect to port 1521. The clients use the port defined in their tnsnames.ora file to connect to this
# listener. Based on the SID_LIST defined for this listener, the listener specifies the database to which to
# connect.
(ADDRESS = (PROTOCOL = TCP)(HOST = Comp1)(PORT = 1521))
)
)
)
# A listener can listen for more than one database on a machine.
# The SID_LIST_listener_name block or parameter is where these SIDs are defined.
SID_LIST_LISTENER =
# The SID_LIST parameter is defined if more than one SID is defined.
(SID_LIST =
# The SID_DESC parameter must exist for each defined SID.
(SID_DESC =
(SID_NAME = PLSExtProc)
# The ORACLE_HOME is where the home directory of the database is defined.
# This enables the listener to identify the location of a database executable file.
(ORACLE_HOME = C:\oracle\ora92)
(PROGRAM = extproc)
)
# The GLOBAL_DBNAME should match the value of the SERVICE_NAMES parameter in the initialization parameter file
(SID_DESC =
# GLOBAL_DBNAME identifies the global database name, of the form database_name.database_domain
# (for example, ORADB.us.oracle.com, where db_name is ORADB and db_domain is us.oracle.com)
(GLOBAL_DBNAME = ORADB.us.oracle.com)
(ORACLE_HOME = C:\oracle\ora92)
# The SID_NAME parameter defines the name of the SID on behalf of which the listener accepts
# connections.
(SID_NAME = ORADB)
)
# By default, an example SID is defined here.
#...sample additional SID description ...
)
STARTUP_WAIT_TIME_LISTENER = 0
CONNECT_TIMEOUT_LISTENER = 10
TRACE_LEVEL_LISTENER = OFF
Listener.ora parameters
ADDRESS - Defines a single listener protocol address.
CONNECT_TIMEOUT_listener_name - Sets the number of seconds that the listener waits for the server
process to get a valid database query after the session has started.
LOG_DIRECTORY_listener_name (default UNIX: $ORACLE_HOME/network/log; NT:
ORACLE_HOME\network\log) - Controls the directory in which the log file is written.
LOG_FILE_listener_name (default listener.log) - Specifies the filename to which the log information is
written.
LOGGING_listener_name (default ON) - By default, logging is always on unless you provide this
parameter and turn logging off.
PASSWORDS_listener_name - Sets a nonencrypted password for authentication to the Listener Control
utility (LSNRCTL).
SAVE_CONFIG_ON_STOP_listener_name (default FALSE) - Any changes made by the LSNRCTL SET
command are made permanent if this parameter is set to TRUE.
SERVICE_LIST_listener_name - Defines the services served by the listener. This is the same as the
SID_LIST, made more generic for nondatabase servers.
SID_LIST_listener_name - Defines the SIDs of the databases served by the listener (a list of SID
descriptions).
STARTUP_WAIT_TIME_listener_name - Sets the number of seconds that the listener sleeps before
responding to the first LSNRCTL STATUS command. This ensures that a listener with a slow protocol has
time to start up before responding to a status request.
TRACE_DIRECTORY_listener_name (default UNIX: $ORACLE_HOME/network/trace; NT:
ORACLE_HOME\network\trace) - Controls the directory in which the trace file is written.
TRACE_FILE_listener_name (default listener.trc) - Sets the name of the trace file.
TRACE_LEVEL_listener_name (default OFF) - Turns tracing off or sets it to a specified level.
USE_PLUG_AND_PLAY_listener_name - Instructs the listener to register with a well-known Names
server; the listener continues to look for a well-known Names server until one is found.

Dynamic service registration: Configuring Service Registration


Dynamic service registration is configured in the database initialization file. It does not require any
configuration in the listener.ora file. However, listener configuration must be synchronized with the
information in the database initialization file.
To ensure service registration works properly, the initialization parameter file should contain the following
parameters:
SERVICE_NAMES for the database service name
INSTANCE_NAME for the instance name
For example:
SERVICE_NAMES=sales.us.acme.com
INSTANCE_NAME=sales
The SERVICE_NAMES parameter defaults to the global database name, which is composed of the values
of the DB_NAME and DB_DOMAIN parameters in the initialization parameter file.
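
PMON normally registers with the listener within a short interval after both the instance and the listener
are up. If the listener was started after the instance, registration can be forced immediately from
SQL*Plus:

SQL> ALTER SYSTEM REGISTER;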

Dynamic service registration: Registering Information with the Default, Local Listener
By default, the PMON process registers service information with its local listener on the default local
address of TCP/IP, port 1521. As long as the listener configuration is synchronized with the database
configuration, PMON can register service information with a nondefault local listener or a remote listener
on another node.
If you want PMON to register with a local listener that does not use TCP/IP, port 1521,
configure the LOCAL_LISTENER parameter in the initialization parameter file to locate the local listener. If
you are using shared server, you can also use the LISTENER attribute of the DISPATCHERS parameter in
the initialization parameter file to register the dispatchers with a nondefault local listener.

Note: The LISTENER attribute overrides the LOCAL_LISTENER parameter.


Set the LOCAL_LISTENER parameter as follows:
LOCAL_LISTENER=listener_alias
Set the LISTENER attribute as follows:
DISPATCHERS="(PROTOCOL=tcp)(LISTENER=listener_alias)"
listener_alias is then resolved to the listener protocol addresses through a naming method, such as a
tnsnames.ora file on the database server.
For example, if the listener is configured to listen on port 1421 rather than port 1521, you can set the
LOCAL_LISTENER parameter in the initialization parameter file as follows:
LOCAL_LISTENER=listener1
Using the same listener example, you can set the LISTENER attribute as follows:
DISPATCHERS="(PROTOCOL=tcp)(LISTENER=listener1)"
You can then resolve listener1 in the local tnsnames.ora as follows:
listener1=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)(PORT=1421)))

Configuring Protocol Addresses Listener for Oracle9i JVM


Connections to Oracle9i JVM (formerly named Oracle JServer in release 8.1) require the TCP/IP or TCP/IP
with SSL listening protocol addresses with presentation information.
If the database is release 8.1 or earlier, configure protocol addresses statically, using the following
procedure, even if a release 9.0 listener is used. If both listener and database are release 9.0, this
procedure is unnecessary because configuration occurs dynamically during service registration.
To configure a protocol address for Oracle JServer release 8.1:
1. Start Oracle Net Manager.
2. In the navigator pane, expand Local > Listeners.
3. Select an existing listener.
4. From the list in the right pane, select Listening Locations.
5. Choose Add Address. A new Address tab appears.
6. Select the TCP/IP or TCP/IP with SSL protocol from the Protocol list.
7. Enter the host name of the database in the Host field.
8. In the Port field, enter 2481 if the selected protocol is TCP/IP, or 2482 if the selected protocol is
TCP/IP with SSL.
9. Choose "Statically dedicate this address for JServer connections".
10. Choose File > Save Network Configuration.
The listener.ora file updates with the following:
listener=
(DESCRIPTION_LIST=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=sales1-server)(PORT=2481))
(PROTOCOL_STACK=
(PRESENTATION=giop)
(SESSION=raw))))

LSNRCTL:
Once the listener is configured, the listener can be administered with the Listener Control utility
Using LSNRCTL command
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.
C:\>lsnrctl
LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 09-OCT-2006 11:27:59
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL>
Controlling a nondefault listener:
1. Within the utility: LSNRCTL> SET CURRENT_LISTENER listener_name
2. From the command line: lsnrctl START listener_name
Starting and Stopping the Listener
STOP Command: To stop the listener from the command line, enter:
lsnrctl STOP [listener_name]
where listener_name is the name of the listener defined in the listener.ora file. It is not necessary to
identify the listener if you are using the default listener, named LISTENER.
START Command: To start the listener from the command line, enter:
lsnrctl START [listener_name]
where listener_name is the name of the listener defined in the listener.ora file. It is not necessary to
identify the listener if you are using the default listener, named LISTENER.
In addition to starting the listener, the Listener Control utility verifies connectivity to the listener.

Monitoring Runtime Behavior: The STATUS and SERVICES commands provide information about the
listener. When entering these commands, follow the syntax as shown for the STOP and START
commands.

STATUS Command
The STATUS command provides basic status information about a listener, including a summary of listener
configuration settings, the listening protocol addresses, and a summary of services registered with the
listener.
STATUS specifies the following (status can also be obtained from the OEM console):
* Name of the listener
* Version of listener
* Start time and up time
* Tracing level
* Logging and tracing configuration settings
* listener.ora file being used
* Whether a password is set in listener.ora file
* Whether the listener can respond to queries from an SNMP-based network management system
Command - Description
CHANGE_PASSWORD - Dynamically changes the encrypted password of a listener.
EXIT - Quits the LSNRCTL utility.
HELP - Provides the list of all available LSNRCTL commands.
QUIT - Provides the functionality of the EXIT command.
RELOAD - Shuts down everything except listener addresses and rereads the listener.ora file. Use this
command to add or change services without actually stopping the listener.
SAVE_CONFIG - Creates a backup of your listener configuration file (called listener.bak) and updates the
listener.ora file itself to reflect any changes.
SERVICES - Provides detailed information about the services the listener listens for.
SET parameter - Sets a listener parameter.
SHOW parameter - Lists the value of a listener parameter.

You need to set an encrypted password for the listener, LSNR. Which two options could you use to set
the password? (Choose two.)
A. Use Oracle Net Manager
B. Use the Listener Control utility
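
A minimal sketch of option B, assuming a listener named LSNR; the prompts are paraphrased, and
SAVE_CONFIG then writes the encrypted password into listener.ora:

C:\> lsnrctl
LSNRCTL> SET CURRENT_LISTENER LSNR
LSNRCTL> CHANGE_PASSWORD
Old password:            (press Enter if none was set)
New password:            (enter the new password)
Reenter new password:    (enter the new password again)
LSNRCTL> SAVE_CONFIG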

LSNRCTL SET and SHOW Modifiers


The SET modifier is used to change listener parameters in the Listener Control utility environment.
LSNRCTL> SET trc_level ADMIN
The SHOW modifier is used to display the values of the parameters set for the listener.
LSNRCTL> SHOW connect_timeout
Command - Description
SET CONNECT_TIMEOUT - Determines the amount of time the listener waits for a valid connection
request after a connection has been started.
SET CURRENT_LISTENER - Sets or shows parameters when multiple listeners are used.
SET LOG_DIRECTORY - Sets a nondefault location for the log file or returns the location to the default.
SET LOG_FILE - Sets a nondefault name for the log file.
SET LOG_STATUS - Turns listener logging on or off.
SET PASSWORD - Changes the password sent from the LSNRCTL utility to the listener process, for
authentication purposes only.
SET SAVE_CONFIG_ON_STOP - Saves any changes made by the LSNRCTL SET command permanently
if the parameter is on. All parameters are saved right before the listener exits.
SET STARTUP_WAITTIME - Sets the amount of time the listener sleeps before responding to a START
command.
SET TRC_DIRECTORY - Sets a nondefault location for the trace file or returns the location to the
default.
SET TRC_FILE - Sets a nondefault name for the trace file.
SET TRC_LEVEL - Turns on tracing for the listener.
Note: The SHOW command has a corresponding parameter for each SET parameter except PASSWORD.
*************************************************************************
CHAP 4 Naming Method Configuration

Host Naming:
º Connecting to an Oracle database using Oracle Net Services client software
º Your client and server are connecting using TCP/IP.
º The host name is resolved through an IP address translation mechanism such as Domain Name
Services (DNS), Network Information Services (NIS), or a centrally maintained TCP/IP /etc/hosts file.
º No Oracle Connection Manager features or security options are requested.
Advantages:
1. Requires minimal user configuration. The user can provide only the name of the host to establish
a connection.
2. Eliminates the need to create and maintain a local names configuration file (tnsnames.ora)
3. Eliminates the need to understand Oracle Names or OID administration procedures.
Disadvantage:
Available only in a limited environment; it can identify only one SID per node.

Host Naming Client Side: Client-Side Requirements


If you are using the host naming method, you must have TCP/IP installed on your client machine. In
addition you must install Oracle Net Services and the TCP/IP protocol adaptor.
The host name is resolved through an IP address translation mechanism such as Domain Name Services
(DNS), Network Information Services (NIS), or a centrally maintained TCP/IP hosts file; this mechanism
must be configured on the client side before attempting to use the host naming method.
Required: sqlnet.ora file [names.directory_path = (HOSTNAME)]

Host Naming Server Side: Server-Side Requirements


If you are using the host naming method, you must have TCP/IP installed on your server machine as
well. You also need to install Oracle Net Services and the TCP/IP protocol adaptor on the server side.
º Default listener named LISTENER running on TCP/IP, port 1521.
º The LOCAL_LISTENER parameter is set in the initialization parameter file to locate the local listener.
Note: The host name must match the connect string you specify from your client. The additional
information included is the database to connect to.
Required: sqlnet.ora [names.directory_path = (HOSTNAME, tnsnames)] and tnsnames.ora file

File Example: listener.ora


SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = wwed151-sun.us.oracle.com) # Host Naming Server Side
(ORACLE_HOME = /oracle803)
(SID_NAME = TEST)
)
)
Connecting from the client: sqlplus username/password@wwed151-sun.us.oracle.com

Local Naming
Simple distributed networks with a small number of services that change infrequently.
Advantages:
* Provides a relatively straightforward method for resolving net service name addresses
* Resolves net service names across networks running different protocols
* Configured using Graphical configuration tool (Oracle Net Manager)
Disadvantage: Requires local configuration of all net service name and address changes stored in
tnsnames.ora file
Required: Client> sqlnet.ora and tnsnames.ora file Server> listener.ora file
Required: sqlnet.ora [names.directory_path = (tnsnames)]
Generated Files:
tnsnames.ora file
A configuration file that contains net service names mapped to connect descriptors. This file is used for
the local naming method. The tnsnames.ora file must reside in one of the following locations:
1. The directory specified by the TNS_ADMIN environment variable If the TNS_ADMIN environment
variable is not defined as a variable on Windows NT, it may be in the registry.
2. The node's global configuration directory. For Sun Solaris, this directory is /var/opt/oracle.
Windows NT does not have a central directory.
3. The $ORACLE_HOME/network/admin directory on UNIX or the ORACLE_HOME\network\admin
directory on Windows operating systems.
Which one of the following statements about the TNSPING utility is correct?
Ans. It does not require the username and password to check the connectivity of the service.
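
For illustration (the net service name and repeat count are examples): TNSPING contacts only the
listener address resolved for a net service name, so no database logon, username, or password is
involved.

C:\> tnsping ORADB 3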

# TNSNAMES.ORA Network Configuration File: C:\oracle\ora92\network\admin\tnsnames.ora


# Generated by Oracle configuration tools.

ORADB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = papai)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORADB)
)
)

INST1_HTTP =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = papai)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = SHARED)
(SERVICE_NAME = MODOSE)
(PRESENTATION = http://HRService)
)
)

EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)

Parameter - Description
ORADB - Net service name (and domain name).
DESCRIPTION - Keyword for describing the connect descriptor. Descriptions are always specified the
same way.
ADDRESS - Keyword for the address specification. If multiple addresses are specified, use the keyword
ADDRESS_LIST before the ADDRESS entries.
PROTOCOL - Specifies the protocol used.
HOST - Protocol-specific information for TCP/IP; specifies the host name or IP address of the server.
Can differ for another protocol.
PORT - Protocol-specific information for TCP/IP; specifies the port number on which the server-side
listener is listening.
CONNECT_DATA - Specifies the database service name or SID to which to connect.

sqlnet.ora file
A configuration file for the client or server that specifies the:
• Client domain to append to unqualified service names or net service names
• Preferred order of naming methods that the client should use when resolving a name
• External naming parameters
The sqlnet.ora file must reside in one of the following locations:
1. The directory specified by the TNS_ADMIN environment variable If the TNS_ADMIN environment
variable is not defined as a variable on Windows NT, it may be in the registry.
2. The $ORACLE_HOME/network/admin directory on UNIX or the ORACLE_HOME\network\admin
directory on Windows operating systems.

# SQLNET.ORA Network Configuration File: C:\oracle\ora92\network\admin\sqlnet.ora


# Generated by Oracle configuration tools.
SQLNET.AUTHENTICATION_SERVICES= (NTS)
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
The NAMES.DIRECTORY_PATH parameter controls the order of naming methods that Oracle Net Services
uses to resolve net service names into connect descriptors.

Configuring the client profile (sqlnet.ora file) with USE_DEDICATED_SERVER=ON forces connections to
use a dedicated server, even in a shared server environment.

Troubleshooting

ORA-12154: TNS:could not resolve service name


Cause: Oracle Net could not locate the connect descriptor for the net service name specified in the
tnsnames.ora configuration file.
Action: Perform these steps:
1. Verify that a tnsnames.ora file exists.
2. Verify that there are not multiple copies of the tnsnames.ora file.
3. In the tnsnames.ora file, verify that the net service name specified in your connect string is mapped to
a connect descriptor.
4. Verify that there are no duplicate copies of the sqlnet.ora file.
5. If you are using domain names, verify that your sqlnet.ora file contains a NAMES.DEFAULT_DOMAIN
parameter. If this parameter does not exist, you must specify the domain name in your connect string.
6. If you are not using domain names, and this parameter exists, delete it or disable it by commenting it
out.
7. If you are connecting from a login dialog box, verify that you are not placing an "@" symbol before
your connect net service name.
8. Activate client tracing and repeat the operation (see the sqlnet.ora sketch below).
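
A minimal sketch of activating client tracing in the client's sqlnet.ora (the directory is an assumption):

# Hypothetical client-side sqlnet.ora tracing fragment
TRACE_LEVEL_CLIENT = SUPPORT                    # OFF, USER, ADMIN, or SUPPORT
TRACE_DIRECTORY_CLIENT = /u01/app/oracle/network/trace
TRACE_FILE_CLIENT = client                      # produces client.trc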

ORA-12198: TNS:could not find path to destination


ORA-12203: TNS:unable to connect to destination
Cause: The client cannot find the desired database.
Action: Perform these steps:
1. Verify that you have entered the net service name you wish to reach correctly.
2. Verify that the ADDRESS parameters in the connect descriptor for the net service name are correct.
3. If using local naming, verify that the tnsnames.ora file is stored in the correct directory.
4. Verify that the listener on the remote node has started and is running.
Enter: lsnrctl
LSNRCTL> STATUS [listener_name]
listener_name is the name of the listener defined in the listener.ora file. It is not necessary to identify the
listener if you are using the default listener, named LISTENER.
If the output indicates the listener is not running, try starting it with the command:
LSNRCTL> START [listener_name]
5. If you are connecting from a login box, verify that you are not placing an "@" symbol before your
connect net service name.

ORA-12533: TNS:illegal ADDRESS parameters


Cause: The protocol specific parameters in the ADDRESS section of the designated connect descriptor are
incorrect.
Action: Correct the protocol address.

TNS-12541 TNS:no listener


Cause: The connection request could not be completed because the listener is not running.
Action: Ensure that the supplied destination address matches one of the addresses used by the listener.
Compare the TNSNAMES.ORA entry with the appropriate LISTENER.ORA file (or TNSNAV.ORA if the
connection is to go by way of an Interchange). Check STATUS of listener and START the listener on the
remote machine.

ORA-12520 TNS:listener could not find available handler for requested type of server
Which action should you take first to investigate the problem?
Ans. Execute the LSNRCTL SERVICES command to verify that the instances are registered with the
listener and that the appropriate service handler exists and is ready.

*************************************************************************
CHAP 5 Usage and Configuration of Oracle Shared Server

Server Configurations
Dedicated server (two-task) process
Shared server process [part of the Oracle Shared Server architecture]

Dedicated Server Processes


• The user process and server process are separate.
• Each user process has its own server process.
• The user and server processes can run on different machines to take advantage of distributed
processing.
• There is a one-to-one ratio between the user and server processes.
• Even when the user process is not making a database request, the dedicated server exists but remains
idle.
• The dedicated server process is sometimes referred to as a shadow process, because it is acting on
behalf of one user process only.

The program interface in use here depends on whether the user and the dedicated server processes are
on the same machine. If they are, the host operating system’s interprocess communication mechanism is
used for the program interface between processes.

The Oracle Shared Server


In a shared server configuration, client user processes connect to a dispatcher. The PMON process
registers the location and load of the dispatchers with the listener. Because of service registration, the
listener.ora file is not required.
A dispatcher can support multiple client connections concurrently. Each client connection is
bound to a virtual circuit. A virtual circuit is a piece of shared memory used by the dispatcher for client
database connection requests and replies. The dispatcher places a virtual circuit on a common queue
when a request arrives.
An idle shared server picks up the virtual circuit from the common queue, services the request,
and relinquishes the virtual circuit before attempting to retrieve another virtual circuit from the common
queue. This approach enables a small pool of server processes to serve a large number of clients. A
significant advantage of shared server architecture over the dedicated server model is the reduction of
system resources, enabling the support of an increased number of users.
The shared server architecture requires Oracle Net Services. User processes targeting the shared
server must connect through Oracle Net Services, even if they are on the same machine as the Oracle
instance.
There are several things that must be done to configure your system for shared server.
A number of different processes are needed in a shared server system:
1. A network listener process that connects the user processes to dispatchers or dedicated servers
(the listener process is part of Oracle Net Services, not Oracle).
2. One or more dispatcher processes
3. One or more shared server processes

Benefits of Oracle Shared Server


• Reduces the number of processes against an instance
• Increases the number of possible users
• Achieves load balancing
• Reduces the number of idle server processes
• Reduces memory usage and system overhead
When to Use a Dedicated Server
• Submitting batch jobs (it is expected that there will be little or no idle time)
• Connecting with Server Manager as SYSDBA to start up, shut down, or perform recovery
• Connecting as internal

To request a dedicated server, the clause SERVER=DEDICATED must be included in the Oracle Net TNS
connection string within the tnsnames.ora file:

TEST.world =
(DESCRIPTION =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = wwed151-sun)
(PORT = 1521)
)
(CONNECT_DATA = (SERVICE_NAME = TEST.US.ORACLE.COM)
(SERVER=DEDICATED)
)
)
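Conversely, when the instance is configured for shared server, a client can explicitly request a shared
server connection with SERVER=SHARED in the CONNECT_DATA section (a sketch based on the example
above):
(CONNECT_DATA = (SERVICE_NAME = TEST.US.ORACLE.COM)
(SERVER=SHARED)
)
If no SERVER clause is given and dispatchers are available, the connection uses a dispatcher by default.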

Technical Note: For most platforms, if your machine has plenty of memory to support dedicated
servers, you should use that configuration. In this situation, performance is likely to be better.
There are exceptions such as NT, in which performance may improve using the shared server
configuration due to the asynchronous nature of shared server architecture.

Connecting to the Shared Server


1 The listener process waits for any connection requests from a user process. When a process requests a
connection, the listener determines whether to connect the user process to a dispatcher (depending on
the load of the dispatcher) or assign it a dedicated server process.
2 If the user process can connect to a dispatcher, the listener gives the user process the address of a
dispatcher process. If the user process requests a dedicated server, the listener creates a dedicated
server process and connects the user process to it.
3 Once the connection has been established, either through a dispatcher or a dedicated server process,
the connection is maintained for the duration of the session.

Technical Note
If the user call is from across a network, the dispatcher process chosen by the listener must match the
protocol of the network being used.

Processing a Request
1 A user sends a request to its dispatcher.
2 The dispatcher places the request into the request queue in the System Global Area (SGA).
3 A shared server picks up the request from the request queue and processes the request.
4 The shared server places the response on the calling dispatcher’s response queue.
5 The response is handed off to the dispatcher.
6 The dispatcher returns the response to the user.
Once the user call has been completed, the shared server process is released and is available to service
another user call in the request queue.

Request Queue
• One request queue is shared by all dispatchers.
• Shared servers monitor the request queue for new requests.
• Requests are processed on a first-in, first-out basis.

Response Queue
• Shared servers place all completed requests on the calling dispatcher’s response queue.
• Each dispatcher has its own response queue in the SGA.
• Each dispatcher is responsible for sending completed requests back to the appropriate user process.
• Users are connected to the same dispatcher for the duration of a session.

The SGA and PGA


The contents of the System Global Area (SGA) and the Program Global Area (PGA) differ when dedicated
servers or shared servers are used.
Dedicated Server: user session data is kept in the PGA.
SGA: Shared pool and other memory structures
PGA: Stack space | User session data | Cursor state

Shared Server: user session data is held in the SGA.
SGA: [User session data | Cursor state] Shared pool and other memory structures
PGA: Stack space

• Text and parsed forms of all SQL statements are stored in the SGA.
• The cursor state contains run-time memory values for the SQL statement, such as rows retrieved.
• User session data includes security and resource usage information.
• The stack space contains local variables for the process.
Technical Note: The change in the SGA and PGA is transparent to the user; however, if supporting
multiple users, you need to increase the SHARED_POOL_SIZE per connection.
Each shared server process needs to access the data spaces of all sessions so that any server can handle
requests from any session. Space is allocated in the SGA for each session’s data space. You can limit the
amount of space that a session can allocate by setting the resource limit PRIVATE_SGA to the desired
amount of space in the user profile.

Initialization Parameters for Shared Server


Parameter Description
Required initialization parameters for shared server:
DISPATCHERS: Configures dispatcher processes in the shared server architecture.
SHARED_SERVERS: Specifies the number of shared server processes created when an instance is
started up.
Optional. If you do not specify the following parameters, Oracle selects appropriate defaults:
MAX_DISPATCHERS: Specifies the maximum number of dispatcher processes that can run
simultaneously.
MAX_SHARED_SERVERS: Specifies the maximum number of shared server processes that can run
simultaneously.
CIRCUITS: Specifies the total number of virtual circuits that are available for inbound and outbound
network sessions.
SHARED_SERVER_SESSIONS: Specifies the total number of shared server user sessions to allow.
Setting this parameter enables you to reserve user sessions for dedicated servers.
Other initialization parameters affected by shared server that may require adjustment:
LARGE_POOL_SIZE: Specifies the size in bytes of the large pool allocation heap. Shared server may
force the default value to be set too high, causing performance problems or problems starting the
database.
SESSIONS: Specifies the maximum number of sessions that can be created in the system. May need to
be adjusted for shared server.

The DISPATCHERS Parameter


DISPATCHERS configures dispatcher processes in the shared server architecture. The parsing software
supports a name-value syntax to enable the specification of attributes in a position-independent case-
insensitive manner. For example: DISPATCHERS = “(PROTOCOL=TCP) (DISPATCHERS=3)”
Parameter Type String (Specify as a quoted string)
Parameter class: Dynamic (can use ALTER SYSTEM to modify)
Default value: NULL
Attribute Description
PROTOCOL (PRO or PROT): The network protocol for which the dispatchers listen.
ADDRESS (ADD or ADDR): The network address on which the dispatchers listen (includes the protocol).
DESCRIPTION (DES or DESC): The network description of the end point on which the dispatchers listen
(includes the protocol).
DISPATCHERS (DIS or DISP): The initial number of dispatchers to start (default is 1).
SESSIONS (SES or SESS): The maximum number of network sessions for each dispatcher. The default
is operating system specific (16 K).
LISTENER (LIS or LIST): The network name of an address or address list of the listeners with which the
dispatchers register (the listener or listeners can reside on other nodes). The LISTENER attribute
facilitates administration of multi-homed hosts and of listeners on a non-default port (not 1521); it
specifies the appropriate listeners with which the dispatchers will register, and it overrides the
LOCAL_LISTENER parameter.
CONNECTIONS (CON or CONN): An integer specifying the maximum number of network connections to
allow for each dispatcher. The default is operating system specific (1024 for Solaris and NT).

Initial no. of dispatchers = CEIL(avg. no of concurrent sessions/connections per dispatcher)


Example: 900 users are concurrently connected by TCP/IP, and the operating system supports 255
connections per process.
CEIL(900/255) = CEIL(3.53) = 4
DISPATCHERS = “(PROTOCOL=TCP) (DISPATCHERS=4)”
The number of connections per dispatcher depends on the operating system.
n.b. CEIL returns smallest integer greater than or equal to n.
Example: The following example returns the smallest integer greater than or equal to 15.7:
SELECT CEIL(15.7) "Ceiling" FROM DUAL; Ceiling: 16

MAX_DISPATCHERS
MAX_DISPATCHERS specifies the maximum number of dispatcher processes allowed to be running
simultaneously. The default value applies only if dispatchers have been configured for the system.
The value of MAX_DISPATCHERS should at least equal the maximum number of concurrent sessions
divided by the number of connections for each dispatcher. For most systems, a value of 250 connections
for each dispatcher provides good performance. If the parameter file starts dispatchers for TCP and IPC
only, you cannot later start dispatchers for another protocol without changing the parameter file and
restarting the instance.

Parameter type: Integer


Default value: 5
Parameter class: Static
Range of values: 5 or the number of dispatchers configured, whichever is greater [OS dependent]

MAX no. of dispatchers = CEIL(MAX no of concurrent sessions/connections per dispatcher)


Adding or Removing Dispatchers
• If the load on the dispatcher processes is consistently high, start additional dispatcher processes to
route user requests without waiting. You may start new dispatchers until the number of dispatchers
equals MAX_DISPATCHERS.
• The load on the dispatchers can be monitored using the data dictionary views V$CIRCUIT and
V$DISPATCHER.
• In contrast, if the load on dispatchers is consistently low, reduce the number of dispatchers.
The following example adds a dispatcher process where the number of dispatchers was previously two:
ALTER SYSTEM SET DISPATCHERS='(PROTOCOL=TCP)(DISPATCHERS=3)';
You can also use the ALTER SYSTEM command to reduce the number of dispatchers down to the number
specified in DISPATCHERS. If you want to have fewer than that, edit the init.ora file, and bounce the
database.
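A minimal monitoring sketch: the BUSY and IDLE columns of V$DISPATCHER report time in hundredths
of a second, so the busy rate of each dispatcher can be computed as follows:
SELECT name, (busy / (busy + idle)) * 100 "% busy"
FROM v$dispatcher;
If the busy rate stays high (for example, consistently above 50 percent), consider adding dispatchers as
shown above.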

SHARED_SERVERS
SHARED_SERVERS specifies the number of server processes that you want to create when an instance is
started up. If system load decreases, this minimum number of servers is maintained. Therefore, you
should take care not to set SHARED_SERVERS too high at system startup.

Parameter type: Integer


Default value: If you are using shared server architecture, then the value is 1. If you are not
using shared server architecture, then the value is 0.
Parameter class: Dynamic: ALTER SYSTEM
Range of values: Operating system-dependent

Modifying the Minimum Number of Shared Server Processes


After starting an instance, you can change the minimum number of shared server processes by using the
SQL ALTER SYSTEM command.
• Oracle will eventually terminate servers that are idle when there are more shared servers than the
minimum limit you specify.
• If you set SHARED_SERVERS to 0, Oracle terminates all current servers when they become idle and
does not start any new servers until you increase SHARED_SERVERS.
• Setting SHARED_SERVERS to 0 effectively disables the multithreaded server temporarily.
To control the minimum number of shared server processes, you must have the ALTER SYSTEM privilege.
The following statement sets the number of shared server processes to two:
ALTER SYSTEM SET SHARED_SERVERS = 2;

MAX_SHARED_SERVERS
MAX_SHARED_SERVERS specifies the maximum number of shared server processes allowed to be
running simultaneously. If artificial deadlocks occur too frequently on your system, you should increase
the value of MAX_SHARED_SERVERS. Oracle allocates shared servers dynamically, based on the length
of the request queue.

Parameter type: Integer


Default value: Derived from SHARED_SERVERS (20 or 2 * SHARED_SERVERS, whichever is greater)
Parameter class: Static
Range of values: Operating system-dependent

Estimating the Maximum Number of Shared Servers


In general, set this parameter for an appropriate number of shared server processes at times of highest
activity. Experiment with this limit, and monitor shared servers to determine an ideal setting for this
parameter.
To get the maximum numbers of servers started, query the data dictionary view
V$SHARED_SERVER_MONITOR.
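For example, a query along the following lines (a sketch; these columns are documented for the view)
shows shared server activity since instance startup:
SELECT servers_started, servers_terminated, servers_highwater
FROM v$shared_server_monitor;
A steadily climbing SERVERS_STARTED value suggests that SHARED_SERVERS is set too low for the
workload.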

CIRCUITS
CIRCUITS specifies the total number of virtual circuits that are available for inbound and outbound
network sessions. It is one of several parameters that contribute to the total SGA requirements of an
instance.

Parameter type: Integer


Default value: Derived:
* If you are using shared server architecture, then the value of SESSIONS
* If you are not using the shared server architecture, then the value is 0
Parameter class: Static

SHARED_SERVER_SESSIONS
SHARED_SERVER_SESSIONS specifies the total number of shared server architecture user sessions to
allow. Setting this parameter enables you to reserve user sessions for dedicated servers.

Parameter type: Integer


Default value: Derived: the lesser of CIRCUITS and SESSIONS - 5
Parameter class: Static
Range of values: 0 to SESSIONS - 5

Verifying Setup
• Verify that the dispatcher has registered with the listener when the database was started by issuing:
$ lsnrctl services
LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 11-OCT-2006 09:14:29
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC0)))
Services Summary...
Service "ORADB" has 2 instance(s).
Instance "ORADB", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Instance "ORADB", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "ORADBXDB" has 1 instance(s).
Instance "ORADB", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1002 state:ready
DISPATCHER <machine: PAPAI, pid: 3588>
(ADDRESS=(PROTOCOL=tcp)(HOST=papai)(PORT=1075))
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
The command completed successfully

• Verify that you are connected using Shared Server by making a single connection and then querying
V$CIRCUIT, which should show one entry per Shared Server connection.

SELECT dispatcher, circuit, server, status FROM v$circuit;

The following are useful views for obtaining information about your shared server configuration and for
monitoring performance.

View Description
V$DISPATCHER Provides information on the dispatcher processes, including name,
network address, status, various usage statistics, and index number.
V$DISPATCHER_RATE Provides rate statistics for the dispatcher processes.
V$QUEUE Contains information on the shared server message queues.
V$SHARED_SERVER Contains information on the shared server processes.
V$CIRCUIT Contains information about virtual circuits, which are user connections
to the database through dispatchers and servers.
V$SHARED_SERVER_MONITOR Contains information for tuning shared server.
V$SESSION Lists session information for each current session.
V$SGA Contains size information about various system global area (SGA)
groups. May be useful when tuning shared server.
V$SGASTAT Detailed statistical information about the SGA, useful for tuning.
V$SHARED_POOL_RESERVED Lists statistics to help tune the reserved pool and space within the
shared pool.
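As a quick cross-check against the views above, the SERVER column of V$SESSION shows whether each
session is currently using a shared or a dedicated server:
SELECT username, server, program FROM v$session;
A shared server session typically shows SHARED while a request is being serviced and NONE while it sits
idle at a dispatcher.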

*************************************************************************

CHAP 6 Backup and Recovery Overview

Backup and Recovery Issues


• Protect the database from numerous types of failures
• Increase Mean-Time-Between-Failures (MTBF)
• Decrease Mean-Time-To-Recover (MTTR)
• Minimize data loss

This section contains these topics:


* Statement Failure
* Process Failure
* User Error
* Network Failure
* Database Instance Failure
* Media Failure
Statement Failures:
Causes of Statement Failures
• Logic error in an application
• Attempt to enter bad data into the table
• Attempt an operation with insufficient privileges
• Attempt to create a table but exceed allotted quota limits
• Attempt an INSERT or UPDATE to a table, causing an extent to be allocated, but with insufficient free
space left in the tablespace

Resolutions for Statement Failures


• Correct the logic flow of the program.
• Modify and reissue the SQL statement.
• Provide the necessary database privileges.
• Change the user’s quota limit by using the ALTER USER command.
• Add file space to the tablespace.
• Enable resumable space allocation.

User Process Failures:


Causes of User Process Failures
• The user performed an abnormal disconnect in the session.
• The user’s session was abnormally terminated.
• The user’s program raised an address exception terminating the session.

Resolution of User Process Failures


• The PMON process detects an abnormally terminated user process.
• PMON rolls back the transaction and releases any resources and locks being held by it.

PMON Background Process


The PMON background process is usually sufficient for cleaning up after an abnormally terminated user
process.
• The PMON process detects an abnormally terminated server process.
• The PMON process rolls back the transaction of the abnormally terminated process, and releases any
resources and locks it has acquired.

User Error Failures:


Common Causes of User Error Failures
• The user accidentally drops or truncates a table.
• The user deleted required rows from a table.
• The user committed data but then discovered an error in the committed data.
Resolution of User Errors
• Train the database users.
• Recover from a valid backup.
• Bring back a table by importing it from an export file.
• Use LogMiner to determine the time of error.
• Recover with a point-in-time recovery.
• Use LogMiner to perform object level recovery.
[LogMiner is a relational tool that lets you read, analyze, and interpret online and archived log files using
SQL. You can also use LogMiner Viewer to access LogMiner functionality. LogMiner Viewer, which is
available with Oracle Enterprise Manager, provides a graphical user interface to LogMiner.]
• Use FlashBack to view and repair historical data.
[Oracle9i provides a new feature called Flashback Query, which lets you view and repair historical data.
Flashback Query offers the ability to perform queries on the database as of a certain wall clock time or
user-specified system commit number (SCN).]
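A minimal sketch of a Flashback Query session using the DBMS_FLASHBACK package (emp is a
hypothetical table; the offset of one hour is arbitrary):
EXECUTE DBMS_FLASHBACK.ENABLE_AT_TIME(SYSDATE - 1/24);
SELECT * FROM emp;   -- data as it existed one hour ago
EXECUTE DBMS_FLASHBACK.DISABLE;
Oracle9i Release 2 also offers the SELECT ... AS OF TIMESTAMP syntax for the same purpose.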

Network Failure
When your system uses networks such as local area networks and phone lines to connect client
workstations to database servers, or to connect several database servers to form a distributed database
system, network failures such as aborted phone connections or network communication software failures
can interrupt the normal operation of a database system. For example:
* A network failure can interrupt normal execution of a client application and cause a process failure to
occur. In this case, the Oracle background process PMON detects and resolves the aborted server
process for the disconnected user process, as described in the previous section.
* A network failure can interrupt the two-phase commit of a distributed transaction. After the network
problem is corrected, the Oracle background process RECO of each involved database automatically
resolves any distributed transactions not yet resolved at all nodes of the distributed database system.

Instance Failure:
An instance failure may occur for numerous reasons:
• A power outage occurs that causes the server to become unavailable.
• The server becomes unavailable due to hardware problems such as a CPU failure or memory corruption
or the operating system crashes.
• One of the Oracle server background processes (DBWR, LGWR, PMON, SMON, CKPT) experiences a
failure.

To recover from instance failure, the DBA:


• Starts the instance by using the “startup” command. The Oracle server will automatically recover,
performing both the roll forward and rollback phases.
• Investigates the cause of failure by reading the instance alert.log file and any other trace files that were
generated during the instance failure

Recovery from Instance Failure


• No special recovery action is needed from the DBA
• Start the instance
• Wait for the “Database opened” notification
• Notify users
• Check the alert file to determine the reason for the failure

• No recovery action needs to be performed by you. All required redo information is read by SMON. To
restore from this type of failure, start the database:
SQL> connect / as sysdba;
Connected.
SQL> startup pfile=initDB00.ora;
...
Database opened.
• After the database has opened, notify users that any data that they did not commit will need to be re-
entered.
• There may be a time delay between starting the database and the “Database opened” notification—this
is the roll forward phase that takes place while the database is mounted.
– SMON performs the roll forward process by applying changes recorded in the online redo log files from
the last checkpoint.
– Rolling forward recovers data that has not been recorded in the database files, but has been recorded
in the online redo log, including the contents of rollback segments.
• Rollback can occur while the database is open, since either SMON or a server process can perform the
rollback operation. This allows the database to be available for users faster.

Media Failures:
Causes of Media Failures
• Head crash on a disk drive
• Physical problem in reading from or writing to database files
• File was accidentally erased

Resolutions for Media Failures


• The recovery strategy depends on which backup method was chosen and which files are affected.
• If available, apply archived redo log files to recover data committed since the last backup.

Defining a Backup and Recovery Strategy


• Business requirements
• Technical requirements
• Operational requirements
• Management concurrence

Questions for the DBA


Here are some questions to consider when selecting a backup strategy:
• Does management understand the tradeoffs involved in their expectations of system availability?
• Is management willing to dedicate the resources needed to ensure a successful backup and recovery
strategy?
• Does management understand the importance of making backups and preparing recovery procedures?

Business requirements
MTTR (Mean-Time-To-Recover):
Database availability is a key issue for a DBA. In the event of a failure the DBA should strive to reduce
the Mean-Time-To-Recover (MTTR). This strategy ensures that the database is unavailable for the
shortest possible amount of time. Anticipating the types of failures that can occur and using effective
recovery strategies, the DBA can ultimately reduce the MTTR.
MTBF (Mean-Time-Between-Failure):
Protecting the database against various types of failures is also a key DBA task. To do this, a DBA must
increase the Mean-Time-Between-Failures (MTBF). The DBA must understand the backup and recovery
structures within an Oracle database environment and configure the database so that failures will not
occur often.
Evolutionary Process: A backup and recovery strategy evolves as business, operational, and technical
requirements change. It is important that both the DBA and management review the validity of a backup
and recovery strategy on a regular basis.
Operational Requirements
• 24-hour operations
• User and operator appreciation
• Testing and validating backups

Testing Backups
Here are some questions to consider when selecting a backup strategy:
• Can I depend on system administrators, vendors, backup DBAs, and so forth when I need help?
• Can I test my backup and recovery strategies at frequently scheduled intervals?
• Are backup copies stored off-site?
• Is a plan well documented and maintained?

Database Volatility
Other issues that impact operational requirements include the volatility of the data and structure of the
database. Here are some questions to consider when selecting a backup strategy:
• Are tables frequently updated?
• Is data highly volatile? If so, you will need backups more frequently than a business where data is
relatively static.
• Does the structure of the database change often?
• How often do you add data files?

Technical Requirements
• Resources: Hardware, software, manpower, and time
• Physical image copies of the operating system files
• Logical copies of the objects in the database
• Database configurations
• Transaction volume affects desired frequency of backups

Here are some questions to consider when selecting a backup strategy:


• How much data do you have?
• Do you have the machine power and capacity to support backups?
• Is the data easily recreated?
• Can you reload the data into the database from a flat file?
• Does the database configuration support resiliency to different types of failures?

Disaster Recovery Issues


• How will your business be affected in the event of a major disaster?
– Earthquake, flood, fire, or complete loss of machine
– Malfunction of storage hardware or software
– Loss of key personnel, such as the database administrator
• Do you have a plan for testing your strategy periodically?
• Do you perform the strategy tests?

Natural Disaster
Perhaps your data is so important that you must ensure resiliency even in the event of a complete system
failure. Natural disasters and other issues can affect the availability of your data and must be considered
when creating a disaster recovery plan. Here are some questions to consider when selecting a backup
strategy:
• What will happen to your business in the event of a serious disaster such as:
– Flood, fire, earthquake, or hurricane
– Malfunction of storage hardware or software
• If your database server fails, will your business be able to operate during the hours, days, or even
weeks it might take to get a new hardware system?
• Do you store backups off-site?
Solutions
• Off-site backups
• Standby Database feature that enables a DBA to fall back on another database that is configured as a
standby in case the primary database fails.
• Geomirroring
• Messaging
• TP monitors
Loss of Key Personnel
In terms of key personnel, consider the following questions:
• How will a loss of personnel affect your business?
• If your DBA leaves the company or is unable to work, will your database system continue to run?
• Who will handle a recovery situation if the DBA is unavailable?

Oracle Availability and Features


Oracle features for maintaining high availability of database include:
Oracle Parallel Server: Oracle Parallel Server is an optional feature that enables multiple database
instances to use one single database on a cluster. So, when one node fails, another node can take over
the tasks of the first node. The implementation of Oracle Parallel Server is discussed in detail in a
separate course, Implementing Parallel Server.
Oracle FailSafe: The Oracle FailSafe feature is available on Windows NT platforms only. In this
environment, two nodes share a disk system on which a database is located. At any one point, only one
instance is operational. When the instance that is operational fails, the other node instantiates (starts the
instance) automatically.

************************************************************************

CHAP 7 Instance Media Recovery Structures

Memory Structures:
Type Description
Data buffer cache: Memory area used to store blocks read from data files. Data is read into the blocks
by server processes and written out by DBWn asynchronously.
Log buffer: Memory containing before and after image copies of changed data to be written to the redo
logs.
Large pool: An optional memory area in the SGA, used for I/O by RMAN backup and restore, session
memory for Oracle Shared Server, and Oracle XA.
Shared pool: Stores parsed versions of SQL statements, PL/SQL procedures, and data dictionary
information.
Java pool: Used in server memory for all session-specific Java code and data within the JVM.
Background Processes
Type Description
Database writer (DBWn): Writes dirty buffers from the data buffer cache to the data files. This activity
is asynchronous.
Log writer (LGWR): Writes data from the redo log buffer to the redo log files.
System monitor (SMON): Performs automatic instance recovery. Recovers space in temporary segments
when they are no longer in use. Merges contiguous areas of free space depending on parameters set.
Process monitor (PMON): Cleans up the connection/server process dedicated to an abnormally
terminated user process. Performs rollback and releases the resources held by the failed process.
Checkpoint (CKPT): Synchronizes the headers of the data files and control files with the current redo
log and checkpoint numbers.
Archiver (ARCn) (optional): A process that automatically copies redo logs that have been marked for
archiving.
The User Process
The user process is created when a user starts a tool such as SQL*Plus, Forms, Reports, Enterprise
Manager, and so on. This process might be on the client or server, and provides an interface for the user
to enter commands that interact with the database.
The Server Process
The server process accepts commands from the user process and performs steps to complete user
requests. If the database is not in a multithreaded configuration, a server process is created on the
machine containing the instance when a valid connection is established.
Oracle Database
An Oracle database consists of the physical files.
File Type Description (Type)
Data files: Physical storage of data. At least one file is required per database. This file stores the
system tablespace. (Binary)
Redo logs: Contain before and after image copies of changed data, for recovery purposes. At least two
groups are required. (Binary)
Control files: Record the physical structure and status of the database. (Binary)
Initialization parameter file: Stores parameters required for instance startup. (Text)
Server initialization parameter file: Stores persistent parameters required for instance startup. (Binary)
Password file (optional): Stores information on users who can start, stop, and recover the database.
(Binary)
Archive logs (optional): Physical copies of the online redo log files. Created when the database is set in
ARCHIVELOG mode. Used in recovery. (Binary)

Dynamic Views
The Oracle server provides a number of standard data dictionary views to obtain information on the
database and instance. These views include:
• V$SGA: Queries the size of the instance for the shared pool, log buffer, data buffer cache, and fixed
memory sizes (operating system dependent).
• V$INSTANCE: Queries the status of the instance, such as the instance mode, instance name, startup
time, and host name.
• V$PROCESS: Queries the background and server processes created for the instance.
• V$BGPROCESS: Queries the background processes created for the instance.
• V$DATABASE: Lists status and recovery information about the database. It includes information on the
database name, the unique database identifier, the creation date, the control file creation date and time,
the last database checkpoint, and other information.
• V$DATAFILE: Lists the location and names of the data files that are contained in the database. It
includes information relating to the file number and name, creation date, status (online/off-line), enabled
(read-only, read-write), last data file checkpoint, size, and other information.
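For example, these queries (a minimal sketch) retrieve the recovery-related basics from two of the
views above:
SELECT name, log_mode, checkpoint_change# FROM v$database;
SELECT instance_name, status, startup_time FROM v$instance;
CHECKPOINT_CHANGE# is the system change number (SCN) of the last database checkpoint.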

Large Pool
• Can be configured as a separate memory area in the SGA, used for memory with:
– I/O slaves: DBWR_IO_SLAVES
– Oracle backup and restore: BACKUP_TAPE_IO_SLAVES
– Session memory for the multi-threaded servers
• Is sized by the LARGE_POOL_SIZE parameter
Recovery Manager (RMAN) uses the large pool for backup and restore when you set the
DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters.

Sizing the Large Pool


If the LARGE_POOL_SIZE initialization parameter is set, then the Oracle server attempts to allocate
buffers from the large pool. If LARGE_POOL_SIZE is set but is not large enough, the allocation fails; the
Oracle server does not try to get buffers from the shared pool, and the component requesting the
buffers does the following:
The RMAN command writes a message to the alert file and does not use I/O slaves for that operation.
If the LARGE_POOL_SIZE initialization parameter is not set, then the Oracle server attempts to allocate
shared memory buffers from the shared pool in the SGA.

Large Pool Parameters


• LARGE_POOL_SIZE: If this parameter is not set, then there is no large pool. The specified size of
memory is allocated from the SGA.
– Description: Size of the large pool, in bytes (can specify values in K or M)
– Default: 0 if the pool is not required by parallel execution and DBWR_IO_SLAVES is not set.
Otherwise, derived from the values of PARALLEL_MAX_SERVERS, PARALLEL_THREADS_PER_CPU,
CLUSTER_DATABASE_INSTANCES, DISPATCHERS, and DBWR_IO_SLAVES
– Minimum: 300 K
– Maximum: At least 2 GB (the maximum is operating system specific)

– To determine how the large pool is being used, query V$SGASTAT:


SQL> SELECT * FROM v$sgastat WHERE pool = 'large pool';
POOL        NAME         BYTES
large pool  free memory  4194304
Set the parameter only if an error is reported in alert.log stating that it does not have enough memory to
start I/O slaves.

• DBWR_IO_SLAVES: This parameter specifies the number of I/O slaves used by the DBWn process. The
DBWn process and its slaves always write to disk. By default, the value is 0 and I/O slaves are not used.
– If DBWR_IO_SLAVES is set to a nonzero value, the numbers of I/O slaves used by the ARCn
process, LGWR process, and Recovery Manager are set to 4.
– Typically, I/O slaves are used to simulate asynchronous I/O on platforms that do not support
asynchronous I/O or implement it inefficiently. However, I/O slaves can be used even when
asynchronous I/O is being used. In that case, the I/O slaves will use asynchronous I/O.

• BACKUP_TAPE_IO_SLAVES: It specifies whether I/O slaves are used by the Recovery Manager to
backup, copy, or restore data to tape.
– When BACKUP_TAPE_IO_SLAVES is set to TRUE, an I/O slave process is used to write to or
read from a tape device.
– If this parameter is set to FALSE (the default), then I/O slaves are not used for backups;
instead, the shadow process engaged in the backup will access the tape device.
Note: Because a tape device can only be accessed by one process at any given time, this parameter is a
Boolean, which allows or does not allow the deployment of an I/O slave process to access a tape device.
– In order to perform duplexed backups, this parameter needs to be enabled, otherwise an error
will be signalled. Recovery Manager will configure as many slaves as needed for the number of backup
copies requested when this parameter is enabled.

Function of the Data Buffer Cache:


• The data buffer cache is an area in the SGA that is used to store the most recently used data blocks.
• The server process reads tables, indexes, and rollback segments from the data files into the buffer
cache where it makes changes to data blocks when required.
• The Oracle server uses a least recently used (LRU) algorithm to determine which buffers can be
overwritten to accommodate new blocks in the buffer cache.

Function of the DBWn Background Process:


• The database writer process (DBWn) writes the dirty buffers from the database buffer cache to the data
files. It ensures that a sufficient number of free buffers (buffers that can be overwritten when server
processes need to read in blocks from the data files) is available in the database buffer cache.
• The database writer regularly synchronizes the database buffer cache and the data files: this is the
checkpoint event triggered in various situations.
• Although one DBWn is adequate for most systems, you can configure additional processes (DBW1
through DBW9) to improve write performance if your system modifies data heavily. These additional
DBWn processes are not useful on uniprocessor systems.

Data Files
Data files store both system and user data on a disk. This data may be committed or uncommitted.

Data Files Containing Only Committed Data


This is normal for a closed database, except when a failure has occurred or the “shutdown abort” option
has been used. If the instance is shut down with the normal, immediate, or transactional option, the
data files contain only committed data. This is because all uncommitted data is rolled back, and a
checkpoint is issued to force all committed data to disk.
Data Files Containing Uncommitted Data
While an instance is running, data files can contain uncommitted data. This happens when data is
changed but not committed (the data is now in the cache), and more space is needed in the cache
(uncommitted data is forced off to a disk). Only when all users eventually commit will the data files
contain only committed data. In the event of failure, during subsequent recovery, the redo logs and
rollback segments are used to synchronize the data files.

Configuring Tablespaces
Tablespaces contain one or more data files. It is important that tablespaces are created carefully to
provide a flexible and manageable backup and recovery strategy.
Here are some examples of tablespaces:
• SYSTEM: Backup and Recovery are more flexible if system and user data is contained in different
tablespaces.
• TEMPORARY: If the tablespace containing temporary segments (used in sort, and so on) is lost, it can
be re-created, rather than recovered.
• ROLLBACK SEGMENTS: Tablespaces containing online rollback segments are difficult to back up and
recover with the database online. Such tablespaces should be dedicated to rollback segments only, just
as the SYSTEM tablespace should contain only system segments and no application segments.
• READ ONLY DATA: Backup time can be reduced because a tablespace needs to be backed up only
when the tablespace is made read-only.
• HIGHLY VOLATILE DATA: This tablespace can be backed up more frequently, also reducing recovery
time.
• INDEX DATA: Tablespaces to store index segments should be created. These tablespaces can often be
re-created instead of recovered.

Function of the Redo Log Buffer


• The redo log buffer is a circular buffer that holds information about changes made to the database.
This information is stored in redo entries.
• Redo entries contain the information necessary to reconstruct, or redo, changes made to the database
by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations. Redo entries are used for database
recovery, if necessary.
• Redo entries are copied by Oracle server processes from the user’s memory space to the redo log
buffer.

Function of LGWR:
LGWR writes redo entries from the redo log buffer to the redo log files in the following situations:
1 When the redo log buffer is one-third full
2 When a timeout occurs (every three seconds)
3 When there is 1 MB of redo
4 Before DBWn writes modified blocks in the database buffer cache to the data files
5 When a transaction commits

Redo Log Files


Redo log files store all changes made to the database. If the database is to be recovered to a point in
time when it was operational, redo logs are used to ensure that all committed transactions are committed
to disk, and all uncommitted transactions are rolled back. The important points relating to redo logs are
as follows:
• LGWR writes to redo log files in a circular fashion. This behavior results in all members of a logfile
group being overwritten.
• While it is mandatory to have at least two log groups to support the cyclic nature, in most cases, you
would need more than just two redo log groups.
• To avoid a single-point media failure, it is recommended that you multiplex redo logs.
REDO LOG SWITCHES:
At a log switch, the current redo log group is assigned a log sequence number that identifies the
information stored in that redo log group and is also used for synchronization.
• A log switch occurs when LGWR stops writing to one redo log group and begins writing to
another.
• A log switch occurs when LGWR has filled one log file group.
• A DBA can force a log switch by using the ALTER SYSTEM SWITCH LOGFILE command.
• A checkpoint occurs automatically at a log switch.
• Processing can continue as long as at least one member of a group is available. If a member is
damaged or unavailable, messages are written to the LGWR trace file and to the alert log.

Dynamic Views
• V$LOG: Lists the number of members in each group. It contains:
– The group number
– The current log sequence number
– The size of the group
– The number of mirrors
– Status (CURRENT or INACTIVE)
– The checkpoint change numbers
• V$LOGFILE: Lists the names, status (STALE or INVALID), and group of each log file member.
• V$LOG_HISTORY: Contains information on log history from the control file.
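A minimal sketch of queries against the first two views:
SELECT group#, sequence#, members, status FROM v$log;
SELECT group#, status, member FROM v$logfile;
In V$LOGFILE, an empty STATUS simply means the member is in use and has no error.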

Guidelines for Multiplexing redo log files


• All members in the group contain identical information and are of the same size.
• Group members are updated simultaneously.
• Each group should contain the same number of members of the same size.
The redo log file configuration requires at least two redo log members per group, with each member on a
different disk to guard against failure. The locations of the online redo log files can be changed by
renaming the online redo log files. Before renaming the online redo log files, make sure that the new
online redo log file exists. The Oracle server changes only the pointers in the control files, but does not
physically rename or create any operating system files.

Database Checkpoints:
• Checkpoints are used to determine where recovery should start.
• Checkpoint position: the point in the redo stream from which recovery starts.
• Checkpoint queue: a linked list of dirty blocks, each identified by data file number, data block number,
and redo byte address (RBA).

Database checkpoints ensure that all modified database buffers are written to the database files. The
data file headers are then marked current, and the checkpoint sequence number is recorded in the
control file. Checkpoints synchronize the buffer cache by writing to disk all buffers whose corresponding
redo entries were part of the log file being checkpointed.
Types of Checkpoints:
Full Checkpoint
• All dirty buffers are written
• SHUTDOWN NORMAL, IMMEDIATE, TRANSACTIONAL
• ALTER SYSTEM CHECKPOINT
Incremental Checkpoint [fast-start Checkpoint]
• Periodic writes
• Only writes the oldest blocks
Partial Checkpoint
• Dirty buffers belonging to the tablespace
• ALTER TABLESPACE BEGIN BACKUP
• ALTER TABLESPACE tablespace_name OFFLINE NORMAL
CKPT PROCESS:
The CKPT process is always enabled. The CKPT process updates file headers at checkpoint completion.
More frequent checkpoints reduce the time needed for recovering from instance failure, at the possible
expense of performance. Checkpoints occur in the following situations:
1. At every log switch (cannot be suppressed).
2. When fast-start checkpointing is set to force DBWn to write buffers in advance in order to shorten
the instance recovery.
3. At a frequency defined by the LOG_CHECKPOINT_INTERVAL initialization parameter. It specifies
the frequency of checkpoints in terms of the number of redo log file blocks that can exist between
an incremental checkpoint and the last block written to the redo log.
4. When the elapsed time since writing the redo block at the current checkpoint position exceeds the
number of seconds specified by the LOG_CHECKPOINT_TIMEOUT initialization parameter.
LOG_CHECKPOINT_TIMEOUT specifies the amount of time, in seconds, that has passed since the
incremental checkpoint at the position where the last write to the redo log (sometime called the tail
of the log) occurred. This parameter also signifies that no buffer will remain dirty (in the cache) for
more than integer seconds.
5. At instance shutdown, unless the instance is aborted.
6. When forced by a database administrator (ALTER SYSTEM CHECKPOINT command)
7. When a tablespace is taken offline or an online backup is started.
8. NOTE: Read-only data files are an exception: their checkpoint numbers are frozen and do not
correspond with the number in the control file.
SYNCHRONIZATION:
1. At each checkpoint, the checkpoint number is updated in every database file header and in the
control file.
2. The checkpoint number acts as a synchronization marker for redo, control, and data files. If they
have the same checkpoint number, the database is considered to be in a consistent state.
3. Information in the control file is used to confirm that all files are at the same checkpoint number
during database startup. Any inconsistency between the checkpoint numbers in the various file
headers results in a failure, and the database cannot be opened. Recovery is required.
INSTANCE RECOVERY:
Checkpoints expedite instance recovery because at every checkpoint all changed data is written to a disk.
After data resides in data files, redo log entries before the last checkpoint need not be applied again
during the roll forward phase of instance recovery.

MULTIPLEXED CONTROL FILE FUNCTION:


This small binary file describes the structure of the database. Without this file, the database cannot be
mounted, and recovery or re-creation of the control file will be required. The recommended configuration
is a minimum of two control files on different disks.

CONTROL FILE CONTENTS:


1 Database name.
2 Time stamp of database creation.
3 Synchronization information (checkpoint and log sequence information) needed for recovery.
4 Names and locations of datafiles and redo log files.
5 Archiving mode of the database.
6 Current log sequence number.
7 Recovery Manager backup metadata.

INFORMATION ABOUT THE CONTROL FILE:
1 Query V$CONTROLFILE. 2 Query V$PARAMETER, or use the SHOW PARAMETER control_files command.

MULTIPLEX THE CONTROL FILE:


1 Shut down the database. 2 Make a copy of the existing control file on a different device by using an
operating system command. 3 Add the name of the new control file to the CONTROL_FILES
initialization parameter. 4 Start up the database.
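For example, after step 3 the parameter file might contain an entry like the following (both paths are
placeholders):
CONTROL_FILES = ("/disk1/oradata/DB00/ctrl01.ctl",
                 "/disk2/oradata/DB00/ctrl02.ctl")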

Function of the Archive Background Process


The ARCn process is an optional process. When enabled, it archives the redo log files to the designated
storage areas. This process has a great significance in backup, restoration, and recovery of a database
set to ARCHIVELOG mode, where databases are operational 24 hours a day and 7 days a week.
The ARCn process initiates when a log switch occurs and copies one member of the last (unarchived)
redo log group to at least one of the destinations specified by some init.ora parameters.

Function of the ARCHIVED LOG FILES


When the database is set to Archive log mode, the LGWR process waits for the online redo log files to be
archived (either manually or through the ARCn process) before they can be reused.
• A database backup, combined with archived redo log files, guarantees that all committed data
can be recovered to the point of failure.
• Valid database backups can be taken while the database is online.

Archiving Considerations
The choice of whether to enable archiving depends on the availability and reliability requirements of each
database. Archived logs can be stored in more than one location (duplexing or multiple destinations),
since they are vital for recovery. For production databases, it is recommended that you use the archive
log feature with multiple destinations.

DATABASE SYNCHRONIZATION:
1. All datafiles (except offline and read-only) must be synchronized for the database to open.
2. Synchronization is based on the current checkpoint number.
3. Applying changes recorded in the redo log files synchronizes datafiles.
4. Redo log files are automatically requested by the Oracle server.

PHASES OF INSTANCE RECOVERY


1. Data files out-of-synch
2. Roll forward (redo)
3. Committed and non committed data in files
4. Rollback (undo)
5. Committed data in files

1. The data files are not synchronized.


2. Roll forward phase: DBWR writes both committed and uncommitted data to the data files. The
purpose of the roll forward phase is to apply all changes recorded in the log file to the data blocks.
3. The datafiles contain committed and uncommitted changes. The database is opened.
4. During the transaction recovery or rollback phase, any changes that were not actually committed
are rolled back.
5. The datafiles now contain only committed changes to the database. All datafiles are now
synchronized.
Instance Recovery Phases

Phase Explanation
1 Unsynchronized files: The Oracle server determines whether a database needs
recovery when unsynchronized files are found. Instance failure can cause this to happen,
such as a shutdown abort. This situation causes loss of uncommitted data because
memory is not written to disk and files are not synchronized before shutdown.

2 Roll forward process: DBWR writes both committed and uncommitted data to the data
files. The purpose of the roll forward process is to apply all changes recorded in the log
file to the data blocks.
Note
-Rollback segments are populated during the roll-forward phase. Because redo logs store
both before and after data images, a rollback segment entry is added if an uncommitted
block is found in the data file and no rollback entry exists.
- Redo logs are applied using log buffers. The buffers used are marked for recovery and
do not participate in normal transactions until they are relinquished by the recovery
process.
- Redo logs are only applied to a read-only data file if a status conflict occurs (that is, the
file header states the file is read-only, yet the control file recognizes it as read-write, or
vice versa).

3 Committed and uncommitted data in data files: Once the roll forward phase has
successfully completed, all committed data resides in the data files, although
uncommitted data still might exist.
4 Roll-Back Phase: To remove the uncommitted data from the files, rollback segments
populated during the roll forward phase or prior to the crash are used. Blocks are rolled
back when requested by either the Oracle server or a user, depending on who requests
the block first.
The database is therefore available even while rollback is running. Only those data
blocks participating in rollback are not available.
5 Committed data in data files: When both the roll forward and rollback phases have
completed, only committed data resides on disk.
6 Synchronized data files: All data files are now synchronized.

TUNING CRASH AND INSTANCE RECOVERY PERFORMANCE


• Tuning the duration of crash and instance recovery
• Tuning the phases of crash and instance recovery

TUNING THE DURATION OF CRASH AND INSTANCE RECOVERY


Methods to keep the duration of CRASH AND INSTANCE RECOVERY within user-specified bounds:
• Set initialization parameters to influence the number of redo log records and data blocks involved
in recovery.
• Size the redo log file to influence checkpointing frequency.
• Issue SQL statements to initiate checkpoints.
• Parallelize instance recovery operations.

INITIALIZATION PARAMETERS INFLUENCING CHECKPOINTS:


It is recommended that you use only the FAST_START_MTTR_TARGET parameter [the expected MTTR,
specified in seconds] instead of a combination of FAST_START_IO_TARGET,
LOG_CHECKPOINT_INTERVAL [the number of redo log file blocks that can exist between an incremental
checkpoint and the last block written to the redo log], and LOG_CHECKPOINT_TIMEOUT [the amount of
time that has passed since the incremental checkpoint at the position where the last write to the redo
log occurred]. FAST_START_MTTR_TARGET provides the most precise control over the duration of
recovery and eliminates the need to set values for the other parameters manually.
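For example, to target an estimated recovery time of about 30 seconds (FAST_START_MTTR_TARGET is
dynamic, so no restart is required):
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 30;
Progress against the target can then be tracked in V$INSTANCE_RECOVERY, described next.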

V$INSTANCE_RECOVERY COLUMN:
1. RECOVERY_ESTIMATED_IOS: Contains the number of dirty buffers in the buffer cache.
2. ACTUAL_REDO_BLKS: Current number of redo blocks required to be read for recovery.
3. TARGET_REDO_BLKS: Goal for the maximum number of redo blocks to be processed during
recovery. This value is the minimum of the next three columns (LOG_FILE_SIZE_REDO_BLKS,
LOG_CHKPT_TIMEOUT_REDO_BLKS, and LOG_CHKPT_INTERVAL_REDO_BLKS).
4. LOG_FILE_SIZE_REDO_BLKS: Number of redo blocks to be processed during recovery
corresponding to 90% of the size of the smallest log file.
5. LOG_CHKPT_TIMEOUT_REDO_BLKS: Number of redo blocks that must be processed during
recovery to satisfy LOG_CHECKPOINT_TIMEOUT.
6. LOG_CHKPT_INTERVAL_REDO_BLKS: Number of redo blocks that must be processed during
recovery to satisfy LOG_CHECKPOINT_INTERVAL.
7. FAST_START_IO_TARGET_REDO_BLKS: This field is obsolete. It is retained for backward
compatibility. The value of this field is always null.
8. TARGET_MTTR: Effective mean time to recover (MTTR) target in seconds. Usually, it should be
equal to value of the FAST_START_MTTR_TARGET parameter. If FAST_START_MTTR_TARGET is
set to such a small value that it is impossible to do a recovery within its time frame, then the
TARGET_MTTR field contains the effective MTTR target, which is larger than
FAST_START_MTTR_TARGET. If FAST_START_MTTR_TARGET is set to such a high value that even
in the worst case (the whole buffer cache is dirty) recovery would not take that long, then the
TARGET_MTTR field contains the estimated MTTR in the worst case scenario. This field is 0 if
FAST_START_MTTR_TARGET is not specified.
9. ESTIMATED_MTTR: The current estimated mean time to recover (MTTR), in seconds, based on
the number of dirty buffers and log blocks (reported even if FAST_START_MTTR_TARGET is not
specified).
10. CKPT_BLOCK_WRITES: Number of blocks written by checkpoint writes

V$INSTANCE_RECOVERY View
• RECOVERY_ESTIMATED_IOS: The estimated number of data blocks to be processed during recovery
based on the in-memory value of the fast-start checkpoint parameter
• ACTUAL_REDO_BLKS: The current number of redo blocks required for recovery
• TARGET_REDO_BLKS: The goal for the maximum number of redo blocks to be processed during
recovery. This value is the minimum of the following four columns:
– LOG_FILE_SIZE_REDO_BLKS: The number of redo blocks to be processed during recovery to guarantee
that a log switch never has to wait for a checkpoint
– LOG_CHKPT_TIMEOUT_REDO_BLKS: The number of redo blocks that need to be processed during
recovery to satisfy
LOG_CHECKPOINT_TIMEOUT
– LOG_CHKPT_INTERVAL_REDO_BLKS: The number of redo blocks that need to be processed during
recovery to satisfy
LOG_CHECKPOINT_INTERVAL
– FAST_START_IO_TARGET_REDO_BLKS: The number of redo blocks that need to be processed during
recovery to satisfy FAST_START_IO_TARGET
TUNING PHASES OF CRASH AND INSTANCE RECOVERY:
Setting the checkpointing parameters described above causes checkpoints to occur more frequently, so
that less redo must be applied after a failure. The RECOVERY_PARALLELISM initialization parameter is
used to specify the number of concurrent processes for instance or crash recovery operations. Using
multiple processes in effect provides parallel block recovery. Different processes are allocated to
different blocks during the roll forward phase.

FAST-START ON-DEMAND ROLLBACK


A server process that encounters data to be rolled back performs the following:
• Rolls back the block containing the required row.
• Hands off further recovery, which may be done in parallel, to SMON.
A user transaction initiates rollback on only the block the transaction is attempting to access. The
remaining blocks are recovered in the background by SMON, potentially in parallel. The advantage is
that a transaction does not have to wait until all the work of a long transaction is rolled back.
FAST_START_PARALLEL_ROLLBACK:
Fast-start parallel rollback enables SMON to act as a coordinator and use multiple server processes to
complete the rollback operation. Parallel rollback is automatically started when SMON determines that the
dead transaction has generated a large number of rollback blocks.
Controlling Fast-Start Parallel Rollback
Define dynamic parameter: FAST_START_PARALLEL_ROLLBACK
Value Maximum Parallel Recovery Servers
FALSE None
LOW 2 * CPU_COUNT
HIGH 4 * CPU_COUNT

Monitoring Parallel Rollback


V$FAST_START_SERVERS
• STATE: recovering or idle
• PID, UNDOBLKSDONE
V$FAST_START_TRANSACTIONS
• USN, SLT, SEQ: Transaction ID
• UNDOBLKSDONE
• UNDOBLKSTOTAL
• CPUTIME: Time in seconds
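A sketch of monitoring queries against these views (note that the data dictionary spells the undo-block
columns UNDOBLOCKSDONE and UNDOBLOCKSTOTAL):
SELECT state, pid, undoblocksdone FROM v$fast_start_servers;
SELECT usn, undoblocksdone, undoblockstotal, cputime FROM v$fast_start_transactions;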
*************************************************************************

CHAP 8 Configuring the database Archiving Mode


Redo Log History
Under typical database operations, all transactions are recorded in the online redo log files. This allows
for automatic recovery of transactions in the event of a database failure.
• If the database is configured for NOARCHIVELOG mode, no redo history is saved to archived log files,
and recovery operations are limited and a loss of transactions may occur. This is the result of the
automatic recycling of log files, where older log files needed for recovery are overwritten and only the
most recent part of the transaction history is available.
• You can configure a database in ARCHIVELOG mode, so that a history of redo information is maintained
in archived files. The archived redo log files can be used for media recovery.
• The database can be created in ARCHIVELOG mode initially, but it is created in NOARCHIVELOG mode
by default.

NOARCHIVELOG Mode
By default, a database is created in NOARCHIVELOG mode. The characteristics of running a database in
NOARCHIVELOG mode are as follows:
• Redo log files are used in a circular fashion.
• A redo log file can be reused immediately after a checkpoint has taken place.
• Once redo logs are overwritten, media recovery is only possible to the last full backup.
Implications of NOARCHIVELOG Mode
• If a tablespace becomes unavailable because of a failure, you cannot continue to operate the database
until the tablespace has been dropped or the entire database has been restored from backups.
• You may only perform operating system backups of the database when the database is shut down.
• You must back up the entire set of database, redo, and control files during each backup.
• You will lose all data since the last full backup.
• You cannot perform online backups.
Media Recovery Options in NOARCHIVELOG mode
• You must restore the data files, redo log files, and control files from an earlier copy of a full database
backup.
• If you used the Export utility to back up the database, you can use the Import utility to restore lost
data. However, this results in an incomplete recovery and transactions may be lost.

ARCHIVELOG Mode
• A filled redo log file cannot be reused until a checkpoint has taken place and the redo log file has been
archived by one of the ARCn background processes. An entry in the control file records the log sequence
number of the archived log file in the log history of the control file.
• The most recent changes to the database are available at any time for instance recovery, and the
archived redo log file copies can be used for media recovery.
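You can verify the current archiving mode at any time with the V$DATABASE view or the SQL*Plus
ARCHIVE LOG LIST command:
SQL> SELECT log_mode FROM v$database;
SQL> ARCHIVE LOG LIST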

Archiving Requirements
• The database must be in ARCHIVELOG mode. Issuing the command to put the database into
ARCHIVELOG mode updates the control file. The ARCn background processes can be enabled to
implement automatic archiving.
• Sufficient resources should be available to hold generated archived redo log files.

Implications of Setting ARCHIVELOG Mode in the Control File


• The database is protected from loss of data when a media failure occurs.
• You can back up the database while it is still online.
• When a tablespace other than SYSTEM goes offline as a result of media failure, the remainder of the
database remains available because tablespaces (other than SYSTEM) can be recovered while the
database is open.
• More online redo log groups guarantee that the archiving of online redo files can be accomplished
before they are needed for reuse.

Media Recovery Options


• You can restore a backup copy of the damaged files and use archived log files to bring the datafile up-
to-date while the database is online or offline.
• You can restore the database to a specific point-in-time.
• You can restore the database to the end of a specified archived log file.
• You can restore the database to a specific system change number (SCN).

CHANGING THE ARCHIVING MODE:-


Changing the archiving mode is a DBA task. The DBA changes the mode by using the ALTER DATABASE
command while the database is in the MOUNT state.
SQL> alter database [ archivelog | noarchivelog ]
where: archivelog Establishes ARCHIVELOG mode for redo log file groups
noarchivelog Establishes NOARCHIVELOG mode for redo log file groups
[you must have the ALTER DATABASE privilege]
1. Shut down the database cleanly (normal/immediate/transactional): SHUTDOWN IMMEDIATE
2. Start the database in the MOUNT state: STARTUP MOUNT
3. Issue the command: ALTER DATABASE ARCHIVELOG;
4. Open the database: ALTER DATABASE OPEN;
5. Take a full backup of the database.

Note: After the mode has been changed from Noarchivelog mode to Archive log, you must back up all the
data files and the control file. Your previous backup is not usable anymore because it was taken while the
database was in Noarchivelog mode.

Setting the database to ARCHIVELOG mode does not enable the archiver (ARCn) processes. For
automatic archiving, the LOG_ARCHIVE_START parameter must be set to TRUE; if it is FALSE, the DBA
must archive the redo logs manually. If the ARCn processes fail for any reason, then, after transaction
activity has filled all the redo logs, the Oracle server hangs.
Automatic vs. Manual Archiving
Automatic archiving: LOG_ARCHIVE_START = TRUE
ARCn background processes are enabled, and they copy redo log files as they fill.
Manual archiving: LOG_ARCHIVE_START = FALSE
Use SQL*Plus or OEM to copy the files.
Enabling the archive process is the second step in creating archived redo log files for use in recovery.
Enabling automatic archiving is recommended (see the init.ora sketch after these guidelines).
Guidelines:
Before deciding on an archiving method, the database must be set to ARCHIVELOG mode; failing to
switch to ARCHIVELOG mode prevents ARCn from copying the redo log files.
The database should be shut down cleanly [normal/immediate/transactional] before the mode is changed.
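A minimal init.ora sketch for automatic archiving (the destination and format shown are illustrative):
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST_1 = "location=/archive1"
LOG_ARCHIVE_FORMAT = arch_%t_%s.arc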

Multiple Archive Processes


LOG_ARCHIVE_MAX_PROCESSES Parameter
Parallel data definition language (DDL) and parallel data manipulation language (DML) operations may
generate a large amount of redo logs. A single ARC0 process to archive these redo logs might not be able
to keep up. To avoid this problem, you can spawn multiple archiver processes. This can be done
manually or by using a job queue.
• Oracle9i allows the database administrator to define multiple archive processes by using the
LOG_ARCHIVE_MAX_PROCESSES parameter.
• A maximum of ten archive processes ARCn are allowed. The minimum value is one.
• When LOG_ARCHIVE_START is set to TRUE, an Oracle instance starts up with as many archiver
processes as defined by LOG_ARCHIVE_MAX_PROCESSES.

• The DBA can spawn additional archive processes or stop superfluous ones at any time during the life
of the instance by changing LOG_ARCHIVE_MAX_PROCESSES.

Stop or Start Additional Archive Processes:Dynamic Number of ARCn Processes


During heavy transactional load or activity, the DBA can temporarily start additional archive processes to
prevent bottlenecks on the archiving workload. Once the transactional activity comes down to a normal
level, the DBA can stop some ARCn processes.
For example, every day of the month, the database starts up with two archive processes. During the last
day of each month, the activity always increases:
SQL> alter system set LOG_ARCHIVE_MAX_PROCESSES=3;
The next day, if the database has not been shut down, the DBA can issue the following SQL command
to stop the additional archive process:
SQL> alter system set LOG_ARCHIVE_MAX_PROCESSES=2;
Because the parameter is dynamic, ALTER SYSTEM can be used at any time during the life of the
instance to start additional archive processes or stop superfluous ones.

Note: If the database is shut down at night, the next day the database would start again with only two
archive processes as it is set up in the init.ora file.

ENABLING AUTOMATIC ARCHIVING AT INSTANCE STARTUP:


To enable automatic archiving at instance startup, set the initialization parameter in the parameter file
before starting the instance. The database should be in ARCHIVELOG mode.
LOG_ARCHIVE_START = TRUE
Starts the number of ARCn processes determined by the initialization parameter
LOG_ARCHIVE_MAX_PROCESSES = n.
LOG_ARCHIVE_START = FALSE inhibits ARCn from starting upon instance startup.

ENABLING AUTOMATIC ARCHIVING AFTER INSTANCE STARTUP:


You can enable automatic archiving without shutting down the instance by using the ALTER SYSTEM
command. The database should be in ARCHIVELOG mode.
Step 1: Check the status of the ARCn processes: ARCHIVE LOG LIST
Step 2: Enable the ARCn processes:
UNIX: ALTER SYSTEM ARCHIVE LOG START TO 'ORADATA/ARCHIVE1';
NT: ALTER SYSTEM ARCHIVE LOG START TO 'D:\oracle\ora92\database\archive\log';
Step 3: ARCn automatically archives the log files as they fill.

DISABLING AUTOMATIC ARCHIVING


Use the ALTER SYSTEM command in SQL*Plus or OEM.
To stop the ARCn processes: ALTER SYSTEM ARCHIVE LOG STOP;
To ensure that automatic archiving is not enabled upon instance startup, change the initialization
parameter LOG_ARCHIVE_START=FALSE in the parameter file. Stopping the ARCn processes does not
set the database to NOARCHIVELOG mode: when all redo log groups are used and not archived, the
database hangs if it is in ARCHIVELOG mode.
Manually Archiving
The database must be in ARCHIVELOG mode.
Step 1: ALTER SYSTEM ARCHIVE LOG CURRENT;
Step 2: The server process for the user executing the command performs the archiving of the online
redo log files.
NOTE: You can use manual archiving, even when automatic archiving is enabled, to re-archive an
inactive group to another destination.
Connect as Administrator privilege to use these options with ALTER SYSTEM ARCHIVE LOG
<1>THREAD: =Specifies thread containing the redo log file group to be archived (for Oracle Parallel
Server).
<2>SEQUENCE: =Archives the online redo log file group identified by the log sequence number.
<3> CHANGE: =Archives based upon the SCN.
<4> GROUP: =Archives the online redo log file group identified by the group number.
<5> CURRENT: =Archives the current redo log file group of the specified thread.
<6> LOGFILE: =Archives the redo log file group with the member identified by filename.
<7> NEXT: =Archives the oldest online redo log file group that has not been archived.
<8> ALL: =Archives all online redo log file groups that are full but have not been archived.
<9> START: =Enables automatic archiving of redo log file groups.
<10> TO: =Specifies the location to which the redo log file group is archived.
<11> STOP: =Disables automatic archiving of redo log file groups.

Archiving Log Files Selectively


If you have DBA privileges, you can manually archive redo log files by using the
following command:
1 Execute the alter system archive log [options] SQL command:
SQL> alter system archive log sequence 052;
2 The server process for the user executing the command will perform the archiving of the online redo
log files.

Specifying Archive Log Destinations


Use LOG_ARCHIVE_DEST_n to specify up to 10 archival destinations.
*In Oracle8i only five destinations could be specified.
Alternatively, define two archiving locations by specifying a primary location with the
LOG_ARCHIVE_DEST parameter and a backup destination with the LOG_ARCHIVE_DUPLEX_DEST
parameter.
In Oracle Enterprise Edition, both the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_n parameters are
valid.

Specifying Multiple Archive Log Destinations


LOG_ARCHIVE_DEST_n to specify up to 10 archival destinations.
Which can be:
ON LOCAL DISK:- Use the LOCATION keyword. The location specified must be valid and cannot be an
NFS (network file system) mounted directory. You must specify the LOCATION keyword for at least one
destination.
log_archive_dest_1="location=/archive1"
ON A REMOTE STANDBY DATABASE:- Use the SERVICE keyword; the service name is resolved through
tnsnames.ora and is supported over IPC or TCP/IP. Only one archive destination per remote database
can be specified.
log_archive_dest_2="SERVICE=ORA_STB1"
Note: You must specify LOCATION for at least one destination.

LOG_ARCHIVE_DEST_n OPTIONS
Use LOG_ARCHIVE_DEST_n to specify up to ten archival destinations.
Set an archive location as MANDATORY or OPTIONAL:
LOG_ARCHIVE_DEST_1="location=/archive1 MANDATORY REOPEN"
LOG_ARCHIVE_DEST_2="location=/archive2 MANDATORY REOPEN=600"
LOG_ARCHIVE_DEST_3="location=/archive3 OPTIONAL"
MANDATORY: Implies that archiving to this destination must complete successfully before an online
redo log file can be overwritten.
OPTIONAL: Implies that an online redo log file can be reused even if it has not been successfully
archived to this destination. This is the DEFAULT.
REOPEN ATTRIBUTE:-
*The REOPEN attribute defines whether archiving to a destination must be reattempted after a failure,
and how many seconds to wait before retrying; the default is 300 seconds. There is no limit on the
number of attempts made to archive to a destination. Any errors in archiving are reported in the ALERT
FILE at the primary site.
*If REOPEN is not specified, errors at optional destinations are recorded and ignored. No further redo log
will be sent to these destinations. Errors at mandatory destinations will prevent reuse of the online redo
log until the archiving is successful. The status of an archive destination is set to ERROR whenever
archiving is unsuccessful.

Specifying Minimum Number of Local Destinations


• LOG_ARCHIVE_MIN_SUCCEED_DEST parameter: LOG_ARCHIVE_MIN_SUCCEED_DEST = 2
• An online redo log group can be reused only if:
– Archiving has been done to all mandatory locations, and
– The number of local locations archived successfully is greater than or equal to the value of the
LOG_ARCHIVE_MIN_SUCCEED_DEST parameter or the number of destinations defined as MANDATORY,
whichever is greater
Example: Consider a case where LOG_ARCHIVE_MIN_SUCCEED_DEST is set to 2. If the number of
mandatory local destinations is 3, then these three locations must be archived before an online redo log
file can be reused. On the other hand, if the number of mandatory local archive destinations is 1, then at
least one optional local archive destination must be archived before an online redo log file can be reused.
In other words, the LOG_ARCHIVE_MIN_SUCCEED_DEST can be used to make archiving to one or
more optional destinations mandatory, but not vice versa.
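A sketch of the second scenario expressed as init.ora settings (the paths are illustrative):
LOG_ARCHIVE_DEST_1 = "location=/archive1 MANDATORY"
LOG_ARCHIVE_DEST_2 = "location=/archive2 OPTIONAL"
LOG_ARCHIVE_DEST_3 = "location=/archive3 OPTIONAL"
LOG_ARCHIVE_MIN_SUCCEED_DEST = 2
Here the mandatory destination plus at least one of the optional destinations must be archived
successfully before an online redo log file can be reused.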

Controlling Archiving to destination


• An archival destination can be disabled by using the dynamic LOG_ARCHIVE_DEST_STATE_n
initialization parameter (the default state is ENABLE):
LOG_ARCHIVE_DEST_STATE_2 = DEFER
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3 = DEFER;
• Archiving to a destination can be enabled again:
LOG_ARCHIVE_DEST_STATE_2 = ENABLE
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3 = ENABLE;

NOTE:- Archiving is not performed to a destination when the state is set to DEFER. If the state of this
destination is changed to ENABLE, any missed logs must be manually archived to this destination.

Specifying the File Name Format


LOG_ARCHIVE_FORMAT specifies the file name format for archived redo log files.
%s or %S :- Includes the log sequence number as part of the filename.
%t or %T :- Includes the thread number as part of the filename.
Using the uppercase form (for example, %S) pads the value to a fixed length with leading zeros.
UNIX: LOG_ARCHIVE_FORMAT=arch%s.arc
WINDOWS: LOG_ARCHIVE_FORMAT=%%ORACLE_SID%%S%TS%S.arc
Ex:
LOG_ARCHIVE_DEST_N= ‘D:\oracle\ora92\database\archive\’
LOG_ARCHIVE_FORMAT = arch%s.arc
DYNAMIC VIEWS:
V$ARCHIVED_LOG:- Displays archived log information from the control file.
V$ARCHIVE_DEST:- For the current instance, describes all archive log destinations, including the
current value, binding (MANDATORY or OPTIONAL), and status (such as VALID, DEFERRED, ERROR, or
INACTIVE).
V$LOG_HISTORY:- Contains log file information from the control file.
V$DATABASE:- Shows the current archiving mode.
V$ARCHIVE_PROCESSES:- Provides information about the state of the various ARCn processes for
the instance.
COMMAND LINE:- The RMAN LIST command can be used to obtain information.
The ARCHIVE LOG LIST command shows the DBA the log mode and the status of archiving.
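For example, to check the state of each destination (a minimal sketch showing a few of the view's
columns):
SQL> SELECT dest_id, status, binding, destination FROM v$archive_dest;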
*************************************************************************************
CHAP 9 Oracle Recovery Manager Overview and Configuration
RMAN Features
RMAN provides a flexible way to:
• Back up the database, tablespaces, datafiles, control files, and archive logs
• Store frequently executed backup and recovery operations
• Perform incremental block level backup
• Compress unused blocks
• Specify limits for backups
• Detect corrupted blocks during backup
• Support Oracle Parallel Server
RMAN can be used through its command-line language as well as through OEM in Oracle8i and later
databases.
@ You can store frequently executed operations as scripts in the database.
@ Using the incremental block-level backup feature, you can limit the backup size to only those blocks
that have changed since the previous backup. This also helps to reduce the time it takes to perform
recovery operations in ARCHIVELOG mode.
@ You can use RMAN to manage the size of backup pieces and save time by parallelizing the backup
operation.
@ RMAN operations can be integrated with the scheduling of the operating system to automate backup
operations.
DYNAMICVIEWS: (1) V$BACKUP_CORRUPTION (2) V$COPY_CORRUPTION
@ Increase performance through: = (1) Automatic parallelization. (2) Generation of less redo. (3)
Restricting I/O for backups. (4) Tape streaming.

Recovery Manager Components


• Recovery Manager Executable
• Oracle Enterprise Manager
• Server sessions
• Target database
The database for which backup and recovery operations are being performed using the RMAN is called
the target database. The control file of the target database contains information about its physical
structure, such as the size and location of data files, online and archive logs, and control files. This
information is used by the server processes invoked by RMAN during backup and recovery operations.
• RMAN Repository
The data used by RMAN for backup, restore and recovery operations referred to as RMAN metadata is
stored in the control file of the target database or when available in a schema of a database. This schema
is called the Recovery Catalog. Most of the information stored in the recovery catalog is available in the
control file of the target database.
Although it is not mandatory to have a recovery catalog to use RMAN, it is beneficial to set up a
recovery catalog. Many of the features of RMAN, such as stored scripts and automatic backup, are not
available without the recovery catalog. The recovery catalog should be located in a database different
from the target database.
• Channel
To perform and record backup and recovery operations, RMAN requires a link to the target database.
This link is referred to as a channel. You must allocate a channel before you begin the execution of
backup or recovery commands.
• Media management library
Media Management Layer Media management layer (MML) is used by RMAN when writing to or reading
from tapes. The additional media management software required for using the tape medium is provided
by media and storage system vendors. The number of additional storage systems supporting RMAN is
continuously increasing.
RMAN REPOSITORY:
The data used by RMAN for backup, restore, and recovery operations is referred to as RMAN metadata.
It is stored in the control file of the target database and in an optional recovery catalog database.
The recovery catalog should be located in a database different from the target database.
The CONTROL_FILE_RECORD_KEEP_TIME parameter determines the retention time for RMAN records;
the default is 7 days. The control file can grow in size. The control file cannot be used to store RMAN
scripts.
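Because CONTROL_FILE_RECORD_KEEP_TIME is a dynamic parameter, the retention time can be
changed while the instance is running; for example (the value shown is illustrative):
SQL> ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME = 14;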

Channel Allocation
You must allocate a channel before you execute backup and recovery commands. Each allocated channel
establishes a connection from RMAN to a target or auxiliary database (either a database created with the
duplicate command or a temporary database used in TSPITR) instance by starting a server session on the
instance. This server session performs the backup and recovery operations. Only one RMAN session
communicates with the allocated server sessions.
Each channel usually corresponds to one output device, unless your media management library is capable
of hardware multiplexing.

Automatic Channel Allocation


@ Change the default device type: CONFIGURE DEFAULT DEVICE TYPE TO SBT;
@ Configure parallelism for automatic channels:
CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
@Configure automatic channel options:-
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT='E:\BACKUP\%U';
@ CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G;
@When a channel is automatically allocated by RMAN , its name is in the format ora_devicetype_n.

Manual Channel Allocation:


Recovery Manager uses the channel processes to communicate between the Oracle server and the
operating system.

@BACKUP, COPY, RESTORE, and RECOVER commands require at least one channel.
@Allocating a channel starts a server process on the target database.
@Channels affect the degree of parallelism.
@Channels write to different media types.
@Channels can be used to impose limits.
RMAN> RUN
{ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='E:\BACKUP\USER052.BAK';
BACKUP DATAFILE 'E:\ORACLE\ORADATA\USERS01.DBF';}

@ The type of media desired determines the type of channel allocated. Query the
V$BACKUP_DEVICE VIEW TO DETERMINE SUPPORTED DEVICE TYPES.

You can impose limits for the COPY and BACKUP commands by specifying parameters in the ALLOCATE
CHANNEL command:
@ RATE: limits the number of buffers read per second, per file, so that backups do not degrade online
performance through excessive disk I/O. ALLOCATE CHANNEL … RATE = integer
@ MAXPIECESIZE: limits the size of each backup piece file created by a channel. ALLOCATE CHANNEL …
MAXPIECESIZE = integer
@ MAXOPENFILES: limits the number of concurrently open files for a large backup [default 16].
ALLOCATE CHANNEL … MAXOPENFILES = integer

Example of manual Allocation of channel


ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE DISK:-
This command allocates a channel for the DELETE command. Maintenance channels cannot be used for
any other I/O operation such as backup or copy.
RMAN> RUN
{ALLOCATE CHANNEL C1 DEVICE TYPE DISK
FORMAT='/db01/BACKUP/%U';
BACKUP DATAFILE '/…/u03/USERS01.DBF';}
This allocates one channel, C1, with the file format '/db01/BACKUP/%U', and backs up one data file,
'/…/u03/USERS01.DBF'.

Media Management
To use tape storage for your database backups, RMAN requires a media manager. A media manager is a
utility that loads, labels, and unloads sequential media such as tape drives for the purpose of backing up,
restoring, and recovering data.
Some media management products can manage the entire data movement between Oracle data files and
the backup devices. Some products that use high-speed connections between storage and media
subsystems can reduce much of the backup load from the primary database server.
@ The Oracle server calls MML software routines to back up and restore data files to and from media
that is controlled by the media manager.

Types of connections with RMAN


Target Database
Recovery catalog Database
AUXILIARY DATABASE:
An auxiliary database is a database created by using the RMAN DUPLICATE command, or it may be a
temporary database used during tablespace point-in-time recovery (TSPITR). A standby database is a
copy of your production database that can be used for disaster recovery.

Connecting with a Recovery catalog


STARTING RMAN locally:-
UNIX: $ ORACLE_SID=DB01; export ORACLE_SID
      $ rman target /
NT:   c:\> set ORACLE_SID=DB01
      c:\> rman target /
STARTING RMAN remotely:
rman target sys/sys@db01
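To connect to a recovery catalog as well as the target, add a catalog connect string (a sketch;
rman_user/rman_pass@rcat is a hypothetical catalog owner and net service name):
$ rman target sys/sys@db01 catalog rman_user/rman_pass@rcat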

@ After you type the RMAN connection command, the following events occur:
* A user process is created for Recovery Manager.
* The user process creates two Oracle server processes:
-One default process connected to the target database for executing SQL commands,
resynchronizing the control file, and recovery roll forward.
-One polling process connected to the target database to locate Remote Procedure Call (RPC)
completions (only one per instance).
Backup and recovery information is retrieved from the control file.

Additional RMAN command Line Arguments


Sending RMAN output to a log file:
rman target sys/oracle log $HOME/ORADATA/u03/rman.log append
Executing a command file when RMAN is invoked:
rman target sys/oracle log $HOME/ORADATA/u03/rman.log append
@$HOME/STUDENT/LABS/my_rman_script.rcv
Recovery Manager Modes
• A command line interpreter CLI
• Interactive mode: $ rman target sys/oracle@db01, then RMAN> BACKUP DATABASE;
– Use it when doing analysis.
– Minimize regular usage.
– Avoid using with log option.
• Batch mode: $ rman target / @tbsbk.rcv log tbs.log
– Meant for automated jobs.
– Minimizes operator errors.
– Set the log file to obtain information.

BATCH MODE:- You can type commands into a file and then run the command file. When running in
batch mode, RMAN reads input from a command file and writes output messages to a log file (if
specified). RMAN parses the command file in its entirety before compiling or executing any commands.
There is no need to place an exit command in the file, because RMAN terminates when the end of the
file is reached. rman target / @b_file.rcv log tbs.log
RMAN Commands
RMAN commands are of two types:
• Stand-alone
– Executed individually
– Usually do not interact with OS
– No channel allocation
• Job
– Executed as a group
– Generally interact with OS
– Channel allocation
• Stand-alone or Job

STAND-ALONE:- Executed only at the RMAN prompt. Executed individually. Cannot appear as
subcommands within RUN. 1. CHANGE 2. CONNECT 3.CREATE CATALOG, RESYNC CATALOG
4. CREATE SCRIPT, DELETE SCRIPT, REPLACE SCRIPT, and PRINT SCRIPT.

JOB:- The job commands are usually grouped, and RMAN executes the job commands inside a RUN
command block sequentially. If any command within the block fails, RMAN ceases processing; no
further commands within the block are executed.

Users can execute the commands in interactive mode or batch mode. To run RMAN commands
interactively, start RMAN and then type commands into the command-line interface.
You can also type RMAN commands into a file and then run the commands in batch mode by specifying
the command file name on the command line.

CONFIGURE COMMAND
RMAN preset with default configuration settings
• Configure automatic channels
• Specify the backup retention policy
• Specify the number of backup copies to be created
• Limit the size of backup sets
• Exempt a tablespace from backup
• Enable and disable backup optimization

RMAN configuration parameters are:


Configure Automatic Channels. You can specify the default backup location and file naming convention
with CONFIGURE CHANNEL command:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'E:\ORACLE\BACKUP\%U';
Implement a retention policy by specifying that any backups or copies beyond a specified number need
not be retained. The default is a redundancy of 1:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
Implement retention policy by specifying a recovery window:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

To make RMAN consider no backup as obsolete: RMAN> CONFIGURE RETENTION POLICY TO NONE;

You set backup optimization on so that the BACKUP command does not back up files to a device type if
the identical file has already been backed up to the device type. For two files to be identical, their
contents must be exactly the same. The default value is OFF.
CONFIGURE BACKUP OPTIMIZATION ON;
Set the default device type:
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
When a tablespace is added/when a successful backup is recorded in the RMAN repository
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'D:\ORA9I\C%F';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 3;
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
Configure duplexed backup sets: max 4 copies
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'E:\ORACLE\ORA90\DATABASE\SNCFORA9I.ORA';
Use the CLEAR option to return to the default value:
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK CLEAR;

The SHOW Command:- Displays persistent configuration settings


Automatic channel configuration settings: SHOW CHANNEL;
Configured device types and parallelism: SHOW DEVICE TYPE;
Default device type: SHOW DEFAULT DEVICE TYPE;
RMAN retention policy configuration settings: SHOW RETENTION POLICY;
Maximum size for backup sets: SHOW MAXSETSIZE;
Tablespaces excluded from whole database backups: SHOW EXCLUDE;
Status of backup optimization: SHOW BACKUP OPTIMIZATION;
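All of these settings can be displayed at once:
RMAN> SHOW ALL;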

The LIST Command:


List backup sets and copies of datafiles in the database
List backup sets and copies of any datafile for a specified tablespace
List backup sets and copies containing archive logs for a specified range

You must be connected to the target database. If you are connected in the NOCATALOG MODE, then the
database must be mounted. If you connect using a recovery catalog, then the target instance must be
started (but does not need to be mounted).
List backups of all files in the database: LIST BACKUP OF DATABASE;
List backup sets containing the USERS01.DBF datafile: LIST BACKUP OF DATAFILE "E:\B\USER01.DBF";
List all copies of datafiles in the SYSTEM tablespace: LIST COPY OF TABLESPACE "SYSTEM";

The REPORT Command:


Produces a detailed analysis of the recovery catalog/repository.
Produces reports to answer: Which files need a backup?
Which backups can be deleted?
Which files are unrecoverable?
What is the structure of the database: REPORT SCHEMA;
Which files need to be backed up: REPORT NEED BACKUP;
Which backups can be deleted (that is, are obsolete): REPORT OBSOLETE;
Which files are not recoverable because of an unrecoverable operation: REPORT UNRECOVERABLE;

REPORT NEED BACKUP


Lists all datafiles that require a backup
-Assumes the most recent backup is used during a restore.
-Provide 4 options:
*Incremental: REPORT NEED BACKUP INCREMENTAL 3 DATABASE;
An integer specifies the maximum number of incremental backups that should be restored during
recovery. If this number, or more, is required, then the data file needs a new full backup.
*Days: REPORT NEED BACKUP DAYS 3 TABLESPACE SYSTEM;
An integer specifies the maximum number of days since the last full or incremental backup of a file. The
file needs a backup if its most recent backup is this number of days old or older.
*Redundancy: REPORT NEED BACKUP REDUNDANCY 3;
An integer specifies the minimum level of redundancy considered necessary.
*Recovery window: REPORT NEED BACKUP RECOVERY WINDOW OF 3 DAYS;
A time window in which RMAN should be able to recover the database
-Without options, takes into account the configured retention policy

Recovery Manager Packages


DBMS_RCVCAT and DBMS_RCVMAN:= Two packages, DBMS_RCVCAT and DBMS_RCVMAN, are used by
RMAN to perform its tasks. These are internal, undocumented packages created by the CREATE
CATALOG command. DBMS_RCVMAN is created in the target database by the scripts DBMSRMAN.SQL
and PRVTRMNS.PLB, which are called by CATPROC.SQL. DBMS_RCVCAT is used by Recovery
Manager to maintain information in the recovery catalog, and DBMS_RCVMAN queries the control
file or recovery catalog.
DBMS_BACKUP_RESTORE Package:- This package is created by the DBMSBKRS.SQL and
PRVTBKRS.PLB scripts called by CATPROC.SQL. It is used to interface with the Oracle server and the
operating system to create, restore, and recover backups of data files and archived redo log files.

RMAN USAGE CONSIDERATION


*SHARED RESOURCES: SHARED MEMORY, MORE PROCESSES
*PRIVILEGES GIVEN TO USERS
-DATABASE: SYSDBA
-OPERATING SYSTEM: ACCESS TO DEVICES
*REMOTE OPERATION:
You need to use a password file to connect to the target database over Oracle Net to perform privileged
operations, such as shut down, Startup, Backup and Recovery from a remote machine. You may have to
set up a password file. You should ensure that there is a strategy to backup the password file.
*GLOBALIZATION Environment Variables:
Before invoking RMAN, set the NLS_DATE_FORMAT and NLS_LANG environment variables. These
determine the format used for the time parameters in RMAN commands, such as RESTORE, RECOVER,
and REPORT (an example follows this list).
*USE OF THE RECOVERY CATALOG
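For example, on UNIX (a sketch; the date format mask is illustrative):
$ NLS_LANG=american_america.we8iso8859p1
$ NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'
$ export NLS_LANG NLS_DATE_FORMAT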
*************************************************************************************

CHAP 10 User Managed Backups


Terminology
WHOLE DATABASE BACKUP:-
A whole database backup (also known as a whole backup) refers to a backup of all data files and the
control file of the database. Whole backups can be performed when the target database is closed or open.
1. Shut down the database.
2. Copy all the files: data files, control files, log files, the password file, and the parameter file.
3. Restart the database.
You do not need to include files associated with read-only tablespaces in full backups.

CONSISTENT BACKUP:- A whole backup taken when the database is closed after a shutdown with
NORMAL, IMMEDIATE, or TRANSACTIONAL is called a consistent backup, in which all file headers are
consistent with the control file.

INCONSISTENT BACKUP:- When the database is open and operational, the datafile headers are not
consistent with the control file unless the database is open in read only mode. When the database
shutdown with the ABORT option this inconsistency persists.
PARTIAL DATABASE BACKUPS
-TABLESPACE BACKUP: A tablespace backup is a backup of the datafiles that make up tablespace.
-DATA FILE BACKUPS: You can make backups of a single datafile if your database is in Archive log
mode. You can make backups of read-only or offline normal datafiles in Noarchivelog mode.
-CONTROL FILE BACKUPS: You can configure RMAN for automatic backups of the control file after a
BACKUP or COPY command is issued.

User managed backup and recovery:


• Files are backed up with OS commands
• Backup files are restored with OS commands
• Recovery accomplished using SQL*Plus commands

Dynamic Views: V$DATAFILE, V$CONTROLFILE, V$LOGFILE, and V$TABLESPACE views.

Evaluating Backup Methods


• Closed database: NOARCHIVELOG mode
• Open or closed database: ARCHIVELOG mode
You can safeguard against loss of data resulting from media failures by choosing the most appropriate
backup method for maximum data recovery. A database backup is an operating system backup of the
data files taken while the database is open or closed.
Physical Backup Methods
• Operating system backup without archiving: Used to recover to the point of the last backup after a
media failure.
• Operating system backup with archiving: Used to recover to the point of failure after a media failure.

Consistent or Closed Database Backups


A consistent/closed database backup is an operating system backup of all the data files, control files,
parameter files, and the password file that constitute an Oracle database. It can be kept in online or
offline storage.

Advantages of Closed/consistent Database Backups


• Conceptually simple
• Easy to perform
• Require little operator interaction
• A closed database backup is conceptually simple because all you need to do is:
– Shut down the database
– Copy all required files to the backup location
– Open the database
• A minimal number of commands are necessary to perform a closed database backup.
• You can automate the closed database backup process by executing a simple script that requires
minimal operator interaction and does the following:
– Shuts down the database
– Copies the data files
– Opens the database
• All files copied during a closed database backup are consistent to a point-in-time. No transactions occur
because the database is unavailable for use.

Disadvantages
• For business operations where the database must be continuously available, a closed database backup
is unacceptable because the database is unavailable during backup.
• The amount of time that the database is unavailable is affected by the size of the database, the number
of data files, and the speed with which the copy operations on the data files can be performed.
Sometimes this may not fit within the available window of downtime, and the DBA must choose
another type of backup.
• A recovery is only as good as the last full closed database backup, and lost transactions may have to be
entered manually following a recovery operation.

Performing a Consistent/Closed Database Backup


Perform a full closed backup while the Oracle server instance is shut down.
1 Compile an up-to-date listing of all relevant files to back up.
2 Shut down the Oracle instance with the shutdown normal or shutdown immediate or shutdown
transactional command.
3 Back up all data files, redo log files, control files, parameter files, and the password file by using an
operating system backup utility.
4 Restart the Oracle instance.
SHUTDOWN IMMEDIATE;
HOST cp <files> /backup/ [Data files Control files Parameter files Log files Password file]
STARTUP OPEN;

Guidelines
• The default shutdown parameter is normal. Use transactional or immediate if there is any chance that
transactions or processes are still accessing the database.
• Consider a reliable, automated procedure for this operation to ensure that every file is correctly backed
up.
• Back up the parameter file and the password file when performing full closed backups.
• You do not need to include files associated with read-only tablespaces in full backups.
• If the database is opened while the offline or cold backup is performed, the backup is invalid and
cannot be guaranteed usable in a recovery situation.
[Although the parameter file and the password file are not physically part of the database, they should be
included as part of the backup.]

Opened Database Backup


Continuous-operation businesses have special implications for backup and recovery. If the business case
does not allow for shutting down the database to perform backups, then there must be a mechanism to
perform backups of the database while it is in use.
• A DBA can perform backups of all the tablespaces or individual data files while the database is in use,
by using the opened database backup method.
• Back up control file to a binary file or create a script to re-create the control file.
• The online redo log files do not need to be backed up. There are two loss scenarios: you lose the
current active online redo log group while the database is still open, or you lose the current active
online redo log group and the database is closed.
– If you lose the current active online redo log group while the database is still open, and, after clearing
the lost redo log, the database crashes because of a media failure before the next backup, then
incomplete recovery is required, because the redo information was never archived. You should therefore
immediately perform a closed whole backup after clearing a lost redo log.
– If you lose the current active online redo log group and the database is closed because of a media
failure, an incomplete recovery is required, and you lose the transactions stored in the lost redo log that
could not yet be archived.

Advantages of an Online Database Backup


• The database is available for normal use during the backup.
• A backup can be done at a tablespace or data file level.
• Supports business operations that operate all day every day.
Additional Concerns for an Online Database Backup
• More training is required for the DBA.
• Tested and automated scripts are recommended for performing opened database backups.

Opened Database Backup Requirements


A DBA can perform backups of tablespaces or individual data files while the database is in use, provided
two criteria are met:
• Database should be set to ARCHIVELOG mode.
• It must be ensured that the Online Redo logs are archived, either by enabling the Oracle automatic
archiving (ARCn) processes or by manually archiving the redo log files by using the ALTER SYSTEM
ARCHIVE LOG SEQUENCE command.

How to Perform an Opened Database Backup


1 Set the data file or tablespace in backup mode by issuing the ALTER TABLESPACE...BEGIN BACKUP
command. This prevents the sequence number in the data file header from changing, so that in case of
recovery, logs are applied from the backup start time. Even if the data file is in backup mode, it remains
available for normal transactions.
SQL> ALTER TABLESPACE USER_DATA BEGIN BACKUP;
2 Use an operating system backup utility to copy all data files in the tablespace to backup storage. The
log sequence numbers in the backup files may be different when each tablespace is backed up
sequentially.
UNIX: SQL>!CP /USERS/DISK1/USER01.DBF /USERS/BACKUP/USER01.DBF
NT: COPY C:\USERS\DISK1\USER01.ORA E:\USERS\BACKUP\USER01.ORA
3 After the data files of the tablespace have been backed up, set them into normal mode by issuing the
following command:
SQL> ALTER TABLESPACE USER_DATA END BACKUP;
4. ARCHIVE the unarchived redo logs
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Dynamic Views: You can obtain information about the status of data files while performing
opened database backups by querying the V$BACKUP view.
FAILURE DURING ONLINE TABLESPACE BACKUP:
During an online tablespace backup, the system may crash, a power failure may occur, the database may
be shut down, and so on. If any of these occurs, then:
1. The backup files will be unusable if the operating system did not complete the backup. You will need
to back up the files again.
2. The data files in online backup mode will not be synchronized with the database, because their
headers were frozen when the backup started.
3. The database will not open, because the Oracle server assumes that the files have been restored from
a backup.

Query the V$BACKUP view to determine which files are in backup mode. When an ALTER TABLESPACE
BEGIN BACKUP command is issued the status changes to ACTIVE.
SQL> select * from v$backup;
FILE# STATUS CHANGE# TIME
------ ----------- ------- ------
1 NOT ACTIVE 0
2 NOT ACTIVE 0
3 ACTIVE 240088 23/03/99

*File number 3 is currently in online backup mode.


*To unfreeze it, issue:
ALTER DATABASE DATAFILE 3 END BACKUP; or ALTER DATABASE END BACKUP; [in Oracle9i]
SQL> select * from v$backup;
FILE# STATUS CHANGE# TIME
------ ----------- ------- ------
1 NOT ACTIVE 0
2 NOT ACTIVE 0
3 NOT ACTIVE 240088 23/03/99

SQL> ALTER DATABASE OPEN;

READ-ONLY TABLESPACE BACKUP:


1. Change the status of a tablespace from read-write to read-only by using the command:
ALTER TABLESPACE user READ ONLY;
2. When the ALTER TABLESPACE command is issued, a checkpoint is performed for all data files
associated with the tablespace. The file headers are then frozen with the current SCN.
3. When you make a tablespace read-only, you must backup all of the datafiles for the tablespace.
4. The DBW0 process writes only to data files whose tablespaces are in read-write mode, and normal
checkpoints occur on these files.

The control file must correctly identify the tablespace in read-only mode, otherwise you must recover it.

BACKUP ISSUES WITH LOGGING AND NOLOGGING OPTIONS


LOGGING: All changes recorded to redo. Fully recoverable from last backup. No additional backup
required.
NOLOGGING: Minimal redo recorded. Not recoverable from last backup. May require additional backup.

Tablespaces, tables, indexes, or partitions may be set to NOLOGGING mode for faster loading of data
with direct-load operations. When the NOLOGGING option is set for a direct-load operation, the insert
statements are not logged in the redo log files. Because the redo logs do not contain the values that
were inserted while the table was in NOLOGGING mode, the data files pertaining to the table or partition
should be backed up immediately upon completion of the direct-load operation.
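A sketch of the pattern (the table names are illustrative; the APPEND hint requests a direct-load
insert):
SQL> ALTER TABLE sales NOLOGGING;
SQL> INSERT /*+ APPEND */ INTO sales SELECT * FROM sales_staging;
SQL> COMMIT;
-- back up the data files of the affected tablespace immediately afterward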

MANUAL CONTROL FILE BACKUPS:


• Creating a binary image: ALTER DATABASE BACKUP CONTROLFILE TO 'CONTROL1.BKP';
• Creating a text trace file: ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Certain status information in the control file, such as the current online redo log file and the names of the
database files, is used by the Oracle server during instance or media recovery.

*Multiplex the control files and name them in the init.ora file by using the CONTROL_FILES parameter.
*The ALTER DATABASE BACKUP CONTROLFILE TO TRACE command creates a script to re-create the
control file. The file is located in the directory specified in the initialization parameter USER_DUMP_DEST.
This script does not contain RMAN metadata.
*In addition, the individual control files should also be backed-up by using the ALTER DATABASE BACKUP
CONTROLFILE TO filename command. This provides a binary copy of the control file at that time.
*During a full backup, shut down the instance normally and use an operating system backup utility to
copy the control file to backup storage.

THE FOLLOWING COMMANDS CHANGE THE DATABASE CONTROLFILE VERSION.


• ALTER DATABASE {ADD | DROP} LOGFILE
• ALTER DATABASE {ADD | DROP} LOGFILE MEMBER
• ALTER DATABASE {ADD | DROP} LOGFILE GROUP
• ALTER DATABASE {NOARCHIVELOG | ARCHIVELOG}
• ALTER DATABASE RENAME FILE
• CREATE TABLESPACE
• ALTER TABLESPACE {ADD | RENAME} DATAFILE
• ALTER TABLESPACE {READ WRITE | READ ONLY}
• DROP TABLESPACE
Note: It is necessary to back up the control file after any of these commands is used.

Backup the initialization parameter file


CREATE PFILE FROM SPFILE;
CREATE PFILE='E:\BACKUP\INIT.ORA' FROM SPFILE;

DBVERIFY UTILITY:
The DBVERIFY utility enables administrators to perform verification of data files by checking the
structural integrity of data blocks within specified data files. That the utility is external to the database
minimizes the impact on database activities.
DBVERIFY Main Features
1. The utility can be used to verify offline data files (for example, backup copies).
2. You can invoke the utility on a portion of a data file.
3. The utility can be used to verify online data files.
4. You can direct the output of the utility to an error log.

Running DBVERIFY
The name of the executable for the DBVERIFY utility varies across operating systems. It is located in the
bin directory of the appropriate Oracle Home.
The name of the executable is OS-dependent. For UNIX you execute the dbv executable.

*External command line utility.


*Used to ensure that a backup database or data file is valid before a restore.
*May be a helpful diagnostic aid when data corruption problems are encountered.
dbv file=E:\oracle\oradata\ora9i\sys.bck logfile=dbv.log
% dbv file=/users/DBA00/data01.dbf logfile=dbv.log
DBVERIFY parameters:
1. FILE: Name of database file to verify.
2. START: Starting block address to verify. Block address is specified in Oracle blocks. If START is not
specified it assumes the first block in the file.
3. END: The ending block address to verify. If END is not specified, it assumes the last block in the file.
4. BLOCKSIZE: Required only if the file has a block size greater than 2KB.
5. LOGFILE: Specifies the file to which logging information should be written. Default is to send output to
terminal display.
6. FEEDBACK: Causes DBVERIFY to display a progress indicator (a single period) for every n pages verified.
7. HELP: Provides on-screen help.
8. PARFILE: Specifies the name of the parameter file to use.

Example
To verify the integrity of the data01.dbf data file, starting with block 1 and ending with block 500, you
execute the following command:
UNIX
$ dbv /users/DB00/u03/data01.dbf start=1 end=500
DBVERIFY Output:
An example of the output from the previous command would look like the following:
DBVERIFY - Verification starting : FILE = /users/DBA00/u03/data01.dbf
DBVERIFY - Verification complete
Total Pages Examined : 500
Total Pages Processed (Data): 22
Total Pages Failing (Data): 0
Total Pages Processed(Index): 16
Total Pages Failing(Index): 0
Total Pages Empty : 0
Total Pages Marked Corrupt: 0
Total Pages Influx: 0
where: Pages is the number of Oracle blocks processed.

*************************************************************************
CHAP 11 RMAN Backups

Backup Concepts
• Recovery Manager backup is a server-managed backup
– Recovery Manager uses Oracle server processes for backup operations
– Includes database, tablespaces, all or selected data files in a tablespace, control files, archive logs
• Closed backup
– Target database must be mounted (not open)
– Includes data files, control files
• Open Backup
– Tablespaces should not be put in backup mode
- Includes data files, control files, archive logs
Note: The online redo log files are not backed up when using Recovery Manager.

Backup Types Supported by Recovery Manager


There are two types of Recovery Manager backups:
• Image copies: Image copies are copies of a single data file, archived redo log file, or control file. A
copy can be made by using Recovery Manager or an operating system utility. The image copy of a data
file consists of all the blocks of the data file, including the unused blocks.
An image copy can include only one file, and a copy operation cannot be multiplexed.
• Backup sets: Backup sets can include one or more data files or archived logs. The output of the backup
operation may comprise one or more files. You can make a backup set in two distinct ways:
– Full backup: In a full backup, all blocks containing data for the files specified are backed up. One or
more files can be included in one full backup.
– Incremental backup: An incremental backup is a backup of data files that include only the blocks
that have changed since the last incremental backup. Incremental backups require a base-level (or
incremental level 0) backup, which back up all blocks containing data for the files specified. Incremental
level 0 and full backups copy all blocks in data files, but full backups cannot be used in an incremental
backup strategy.

Backup Sets
A backup set consists of one or more physical files stored in an Oracle proprietary format, on either disk
or tape. Each backup set can contain one or more Oracle files.
You can make a backup set for one or more of data files, archive logs, or their copies.
Backup sets can be of two types:
• Data file: Can contain data files and control files, but not archived logs
• Archived log: Contains archived logs, not data files or control files
Note: Backup sets may need to be restored by Recovery Manager before recovery can be performed,
unlike image copies which generally are available on disks.

Control Files in Data File Backup Sets


Each file in a backup set must have the same Oracle block size (control files and data files have the same
block size, whereas archived log block sizes are machine dependent). When a control file is included, it is
written in the last data file backup set.
A control file can be included in a backup set either:
• Explicitly using the INCLUDE CONTROL FILE syntax
• Implicitly by backing up file 1 (the system data file)

Characteristics of Backup Sets


• A backup set contains one or more physical files called backup pieces.
• A backup set is created by the BACKUP command to assist tape streaming. The FILESPERSET
parameter controls the number of data files contained in a backup set.
• A backup set can be written to disk or tape. Oracle provides one tape output by default for most
platforms, known as SBT [in Oracle8i SBT_TAPE] (System Backup to Tape), which writes to a tape device
when you are using a media manager.
• A restore operation must extract files from a backup set before recovery.
• Archived log backup sets cannot be incremental (they are full by default).
• A backup set performs compression by not including empty data blocks, such as blocks in data files
that exist beyond the high-water mark.

Backup Piece
• A backup piece is a file in a backup set.
• A backup piece can contain blocks from more than one data file.

Backup Piece Size


The piece size can be limited if required; MAXPIECESIZE is specified as part of the ALLOCATE CHANNEL
(or CONFIGURE CHANNEL) command:
RMAN> run {
2> allocate channel t1 type 'SBT' maxpiecesize = 4G;
3> backup
4> format 'df_%t_%s_%p' filesperset 3
5> (tablespace user_data); }
ALLOCATE CHANNEL … MAXPIECESIZE = integer or CONFIGURE CHANNEL … MAXPIECESIZE = integer
[specify sizes in bytes, kilobytes (K), megabytes (M), or gigabytes (G)]
In Oracle8i: SET LIMIT CHANNEL t1 KBYTES 4194304;

BACKUP Command
The output can be written to tape or disk. You can control the number of backup sets that
Oracle produces as well as the number of input files that Recovery Manager places into a single backup
set. If any I/O errors are received when reading files or writing backup pieces, the job is aborted.

When using the BACKUP command, you must do the following:


• Mount or open the target database. Recovery Manager allows you to make an inconsistent backup if
the database is in ARCHIVELOG mode, but you must apply redo logs to make the backups consistent for
use in recovery operations.
• Allocate a channel, manually or through the automatic channel configuration, for execution of the
BACKUP command.
• Optionally include the current control file by using the INCLUDE CURRENT CONTROLFILE syntax.

Option Significance
full: The server session copies all blocks into the backup set, skipping only data file blocks that have
never been used. The server session does not skip blocks when backing up archived redo logs or control
files. A full backup is not considered part of an incremental backup strategy.
incremental level n: The server session copies data blocks that have changed since the last incremental
backup, where n is any integer from 1 to 4. When attempting an incremental backup of a level greater
than 0, the server process checks that a level 0 backup or level 0 copy exists for each data file in the
BACKUP command. If you specify incremental, then in the backupSpec you must set one of the
following parameters: DATAFILE, DATAFILECOPY, TABLESPACE, or DATABASE. Recovery Manager does
not support incremental backups of control files, archived redo logs, or backup sets.
filesperset integer: When you specify the filesperset parameter, Recovery Manager compares the
filesperset value to a calculated value (the number of files backed up per number of channels) and takes
the lower of the two, thereby ensuring that all channels are used. If you do not specify filesperset, then
Recovery Manager compares the calculated value (number of files per allocated channel) to the default
value of 64 and takes the lower of the two. When there are more channels than files to back up,
channels remain idle. Input files cannot be split across channels.
skip: Specify this parameter to exclude some data files or archived redo logs from the backup set. You
have the following options within the parameter:
offline: Exclude offline data files from the backup set.
readonly: Exclude data files belonging to read-only tablespaces.
inaccessible: Exclude data files or archived redo logs that cannot be read because of I/O errors.
setsize integer: Specifies a maximum size for a backup set in units of 1,024 bytes. Recovery Manager
attempts to limit all backup sets to this size. Useful for backups of archive logs.
diskratio integer: Directs Recovery Manager to assign only data files to backup sets spread across the
specified number of drives. Useful for data file backups when data files are striped or reside on separate
disk spindles.
delete input: Deletes the input files upon successful creation of the backup set. Specify this option only
when backing up archived redo logs or data file copies. It is equivalent to issuing a CHANGE . . . DELETE
command for all of the input files.
include current controlfile: Creates a snapshot of the current control file and places it into each backup
set produced by this clause.
format: Specifies the format of the names of the output files. The following format parameters can be
used either individually or in combination:
%c: Specifies the copy number of the backup piece within a set of duplexed backup pieces.
%p: Specifies the backup piece number within the backup set. This value starts at 1 for each backup set
and is increased by 1 as each backup piece is created.
%s: Specifies the backup set number. This number is a counter in the control file that is increased for
each backup set.
%d: Specifies the database name.
%n: Specifies the database name, padded on the right with x characters to a total length of 8
characters.
%t: Specifies the backup set time stamp, which is a 4-byte value derived as the number of seconds
elapsed since a fixed reference time. The combination of %s and %t can be used to form a unique name
for the backup set.
%u: Specifies an 8-character name constituted by compressed representations of the backup set
number and the time the backup set was created.
%U: Specifies a convenient shorthand for %u_%p_%c that guarantees uniqueness in generated backup
filenames. If you do not specify a format, Recovery Manager uses %U by default.

Multiplexed Backup Sets


Two or more data files can be multiplexed into a backup set for tape streaming.
When more than one file is written to the same backup file or piece, Recovery Manager automatically
performs the allocation of files to channels, multiplexes the files, and skips any unused blocks. With a
sufficient number of files to back up concurrently, high-performance sequential output devices (for
example, fast tape drives) can be streamed. This is important for backups that must compete with other
online system resources. It is the responsibility of the operator or storage subsystem to change the tape
on the target database where the tape drive is located.
This process was designed for writing to tape but it can also be used to write to disk.
Example
RMAN > run { allocate channel c1 type ’SBT’;
2> backup (database filesperset = 3); }
The database contains three data files that will be multiplexed together (filesperset = 3) into one physical
file (set) and stored on tape. The data files are multiplexed by writing n number of blocks from data file
1, then data file 2, then data file 3, then data file 1, and so on until all files are backed up.

Parallelization of Backup Sets

Allocate multiple channels, specify filesperset, and include many files.
Parallelization of backup sets is achieved by:
• Allocating multiple channels.
• Specifying many files to back up.
• Specifying the FILESPERSET option in the BACKUP command. If
FILESPERSET is not specified, only one channel is used to create one backup piece containing all files—all
other channels remain idle.
Example
• There are nine files that need to be backed up (data files 1 through 9.)
• Data files have been carefully assigned so that each set has approximately the same number of data
blocks to back up (for efficiency.)
– Data files 1, 4, and 5 are assigned to backup set 1.
– Data files 2, 3, and 9 are assigned to backup set 2.
– Data files 6, 7, and 8 are assigned to backup set 3.
• Because there are three files per set, there is no need to use the FILESPERSET parameter.
Three backup sets will be written, each of which contains blocks from three data files. Three
channels are used to write in parallel.
Solution: Use the following command to achieve the specified requirements:
RMAN > run {
2> allocate channel c1 type disk;
3> allocate channel c2 type disk;
4> allocate channel c3 type disk;
5> backup
6> incremental level = 0
7> format '/disk1/backup/df_%d_%s_%p.bak'
8> (datafile 1,4,5 channel c1 tag=DF1)
9> (datafile 2,3,9 channel c2 tag=DF2)
10> (datafile 6,7,8 channel c3 tag=DF3);
11> sql 'alter system archive log current';
12> }

Duplexed Backup Sets

Can create up to four identical copies of each backup piece by duplexing the backup set, using:
• BACKUP COPIES
• SET BACKUP COPIES
• CONFIGURE ... BACKUP COPIES
RMAN> BACKUP COPIES 2 DATAFILE 1, DATAFILE 2
2> FORMAT '/BACKUP1/%U', '/BACKUP2/%U';
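Duplexing can also be made persistent rather than per command. As a sketch (the device type and copy count are illustrative), the CONFIGURE form is:
RMAN> CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
RMAN> CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
With these settings, subsequent BACKUP commands on that device type produce two copies of each backup piece without specifying COPIES each time.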

Backup of Backup Sets: the BACKUP BACKUPSET command can copy existing backup sets disk-to-disk or disk-to-tape.

Archived Log Backup Sets

• Can include only archive logs
• Are always full backups
RMAN > backup
2> FORMAT ’/disk1/backup/ar_%t_%s_%p’
3> archivelog ALL DELETE ALL INPUT;
A common problem experienced by DBAs is not knowing if an archived log has been completely copied
out to the archive log destination before attempting to back it up.
Recovery Manager has access to control file or recovery catalog information, so it knows which logs have
been archived and can be restored during recovery.
Characteristics of Archived Log Backup Sets
• Can include only archived logs, not data files or control files.
• Are always full backups. (There is no logic in performing incremental backups, because you can specify
the range of archived logs to backup.)
Example of Archived Log Backup (from slide) This example backs up archived logs to a backup set, where
each backup piece contains three archived logs. After the archived logs are copied, they are deleted from
disk and marked as deleted in the V$ARCHIVED_LOG view.
Backup Constraints
• The database must be mounted or open.
• Online redo log backups are not supported.
• Only “clean” backups are usable in NOARCHIVELOG mode.
• Only “current” data file backups are usable in ARCHIVELOG mode.
• No parameter or password files are backed up.

Image Copies
An image copy contains a single data file, archived redo log file, or control file. You can use the Recovery Manager COPY command or an operating system copy command to create an image copy. An image copy produced with the Recovery Manager COPY command uses an Oracle server session to perform the task and records the copy in the control file.

Characteristics of an Image Copy

• Can be written only to a disk
• Can be used immediately; does not need to be restored
• Is a physical copy of a single data file, archived log, or control file
• Is most like an operating system backup (contains all blocks)
• Can be part of an incremental strategy

An image copy has the following characteristics:

• An image copy can be written only to disk. Hence additional disk space may be required to retain the
copy on the disk. When large files are being considered, copying may take a long time, but restoration
time is reduced considerably because the copy is available on the disk.
• If files are stored on disk, they can be used immediately (that is, they do not need to be restored from
other media). This provides a fast method for recovery using the SWITCH command in Recovery
Manager, which is equivalent to the ALTER DATABASE RENAME FILE SQL statement.
• In an image copy all blocks are copied, whether they contain data or not, because an Oracle server
process copies the file and performs additional actions such as checking for corrupt blocks and registering
the copy in the control file. To speed up the process of copying, you can use the NOCHECKSUM
parameter.
• An image copy can be part of a full or incremental level 0 backup, because a file copy always includes all blocks. Use the level 0 option if the copy will be used in conjunction with an incremental backup set.
• An image copy can be designated as a level 0 backup in an incremental backup strategy, but no other levels are possible with image copies.

RMAN > run {
2> allocate channel d1 type disk;
3> copy
4> datafile '/ORADATA/users_01_db01.dbf' to '/backup/file3.dbf' tag=DF3,
5> archivelog 'arch_1060.arc' to 'arch_1060.bak'; }

RMAN > run {
2> allocate channel d1 type disk;
3> copy
4> datafile 3 to '/backup/file3.dbf',
5> datafile 1 to '/backup/file1.dbf'; }
You can use the CHECK LOGICAL option to test data and index blocks that pass physical corruption
checks for logical corruption—for example, corruption of a row piece or index entry. If logical corruption
is detected, the block is logged in the alert log and trace file of the server process.
When the number of corrupted blocks detected reaches a threshold—defined by the MAXCORRUPT clause
—the copy process is terminated without populating the views.
Note: V$DATABASE_BLOCK_CORRUPTION should be queried at the completion of every image copy.
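As a sketch of combining these options (the file number, threshold, and path are illustrative):
RMAN > run {
2> set maxcorrupt for datafile 3 to 10; # tolerate up to 10 corrupt blocks
3> copy check logical datafile 3 to '/backup/file3.dbf';
4> }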

Image Copy Parallelization: One COPY Command with Many Channels

By default, Recovery Manager executes each COPY command serially. However, you can parallelize the
copy operation by:
• Allocating multiple channels
• Specifying one COPY command for multiple files
RMAN > run {
2> allocate channel d1 type disk;
3> allocate channel d2 type disk;
4> allocate channel d3 type disk;
5> allocate channel d4 type disk;
6> copy # 3 files copied in parallel
7> datafile 1 to '/disk1/df1.dbf',
8> datafile 2 to '/disk1/df2.dbf',
9> datafile 3 to '/disk1/df3.dbf';
10> copy # Second copy command
11> datafile 4 to '/disk1/df4.dbf'; }
(In Oracle9i, CONFIGURE DEVICE TYPE DISK PARALLELISM 4; achieves the same channel allocation automatically.)

In the example, four channels are created, but only three will be used (channel d4 will remain idle). This
is how the command is executed:
1 Four channels are created for writing to disk: d1, d2, d3, d4.
2 The first COPY command uses three channels (server processes)—one for writing each data file to disk.
3 The second COPY command does not execute until the previous COPY command has finished
execution. It will use only one channel.
Note: When you use a high degree of parallelism, more machine resources are used, but the backup
operation can be completed faster.

COPY OF WHOLE DATABASE

RMAN> STARTUP MOUNT
RMAN> REPORT SCHEMA; #list of all data files in target database
RMAN>COPY datafile 1 to ’/disk1/df1.dbf’,…
RMAN>LIST COPY; #verify copy

Backup in NOARCHIVELOG Mode

• Ensure sufficient space for the backup.
• Shut down cleanly using the NORMAL, IMMEDIATE, or TRANSACTIONAL clause.
• Mount the database.
• Allocate multiple channels if not using automatic.
• Run the BACKUP command.
• Verify that the backup is finished and cataloged.
• Open the database for normal use.
RMAN> backup database filesperset 3;
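A minimal RMAN session following these steps might look like this sketch (assuming configured automatic channels; the FILESPERSET value matches the example above):
RMAN> shutdown immediate; # close the database cleanly
RMAN> startup mount; # mount, but do not open
RMAN> backup database filesperset 3;
RMAN> alter database open; # resume normal use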

CONTROL FILE AND SPFILE AUTOBACKUP

Enable with CONFIGURE CONTROLFILE AUTOBACKUP.
If enabled, RMAN automatically backs up the control file and the current SPFILE after each BACKUP or COPY command. An autobackup also occurs after structural changes to the database. The backup is given a default name.
The default setting of CONFIGURE CONTROLFILE AUTOBACKUP is OFF.
Format of the control file autobackup name:
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE disk TO 'controlfile_%F';
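A minimal configuration sketch (the format string is illustrative; an autobackup format must contain the %F substitution variable):
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/cf_%F';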
Backup SPFILE
The SPFILE is automatically backed up when CONFIGURE CONTROLFILE AUTOBACKUP is set to ON. It can also be explicitly backed up with the BACKUP SPFILE command:
RMAN> BACKUP COPIES 2 DEVICE TYPE sbt SPFILE;
Tags
A tag is a meaningful name that you can assign to a backup set or file copy. The advantages of user tags
are as follows:
• Tags provide a useful reference to a collection of file copies or a backup set.
• Tags can be used in the LIST command to locate backed up files easily.
• Tags can be used in the RESTORE and SWITCH commands.
• The same tag can be used for multiple backup sets or file copies.
If a nonunique tag references more than one data file, then Recovery Manager chooses the most current
available file.
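Because tags can be used with RESTORE and RECOVER, a sketch like the following restores a data file from a specific tagged backup (the tag name and file number are illustrative; the database must be mounted or the file taken offline first):
RMAN > run {
2> restore datafile 4 from tag = 'DF2';
3> recover datafile 4;
4> }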

Data Dictionary Views

The V$DATAFILE_HEADER dynamic performance view displays the error information
related to data files when an internal read performed by Oracle fails. Apart from this, the view
displays the status of the file and whether a file needs media recovery to be performed.
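For example, a quick check of the files the server believes need media recovery:
SQL> SELECT file#, status, error, recover
  2  FROM v$datafile_header;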
Recovery Manager data dictionary views used to query the control file are:
• V$ARCHIVED_LOG: Shows which archives have been created, backed up, and cleared in the database.
• V$BACKUP_CORRUPTION: Shows which blocks have been found corrupt during a backup
of a backup set.
• V$COPY_CORRUPTION: Shows which blocks have been found corrupt during an image copy.
• V$BACKUP_DATAFILE: Useful for creating equal sized backup sets by determining the number of blocks
in each data file. Can also find the number of corrupt blocks for the data file.
• V$BACKUP_REDOLOG: Shows archived logs stored in backup sets.
• V$BACKUP_SET: Shows backup sets that have been created.
• V$BACKUP_PIECE: Shows backup pieces created for backup sets.

MONITORING RMAN BACKUPS

Use the SET COMMAND ID command.
Query V$PROCESS and V$SESSION to determine which sessions correspond to RMAN channels.
Query V$SESSION_LONGOPS to determine the progress of backups and copies.
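A commonly used query sketch against V$SESSION_LONGOPS (the OPNAME filter assumes RMAN's usual operation naming):
SQL> SELECT sid, serial#, opname, sofar, totalwork,
  2         ROUND(sofar/totalwork*100, 2) pct_done
  3  FROM v$session_longops
  4  WHERE opname LIKE 'RMAN%'
  5  AND totalwork > 0
  6  AND sofar <> totalwork;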
Miscellaneous Issues
• Terminating a Recovery Manager Job
• Backing up the control file frequently
• Recording corrupt data file blocks in the control file and in the alert log
• Changing a fractured block while Recovery Manager is writing it
*************************************************************************
CHAP 12 User-Managed Complete Recovery
MEDIA RECOVERY:
Media recovery is used to recover a lost or damaged current data file or control file. You can also use it to recover changes that were lost when a data file went offline without the OFFLINE NORMAL option.
RESTORING FILES:
When you restore a file, you replace a missing or damaged file with a backup copy.
RECOVERY OF FILES:
When you recover a file, changes recorded in the redo log files are applied to the restored file.
RECOVERY STEPS:
1. Damaged or missing files are restored from a backup.
2. Changes from the archived redo log files and online redo log files are applied as necessary. Undo blocks
are generated at this time. This is referred to as rolling forward or cache recovery.
3. The database may now contain committed and uncommitted changes.
4. The undo blocks are used to roll back any uncommitted changes. This is known as rolling back or
transaction recovery.
5. The database is now in a recovered state.

RESTORATION AND DATAFILE MEDIA RECOVERY WITH USER-MANAGED PROCEDURES:

Restore files using operating system commands, and recover files using the SQL*Plus RECOVER command to apply redo log files to the restored files. You can perform automatic recovery or step through the log files to apply the changes.

ARCHIVELOG AND NOARCHIVELOG MODES:

The archiving mode in which you operate your database affects your options for recovery from a media failure.
NOARCHIVELOG MODE MAY BE SUITABLE WHEN:
1. Data loss between backups can be tolerated (during development, training).
2. It is faster to reapply transactions (from batch files).
3. Data rarely changes (non-OLTP).
ARCHIVELOG MODE IS PREFERABLE WHEN:
1. The database cannot be shut down for a closed backup.
2. Data loss cannot be tolerated.
3. It is easier to recover using archived redo log files than by reapplying transactions (OLTP).
Note: By default, the database is in NOARCHIVELOG mode.

RECOVERY IN NOARCHIVELOG MODE:

In NOARCHIVELOG mode, you need a valid closed database backup to recover. The files must be synchronized for the database to open.
You restore the following: (1) data files, (2) control files, and the redo log files if they were backed up. You restore the password and parameter files only if they are corrupt or lost.
NOTE: For a database in NOARCHIVELOG mode, you do not have to restore all Oracle files if no redo log file has been overwritten since the last backup, as illustrated in the following:
Scenario:
- There are two redo logs for the database.
- A closed database backup was taken at log sequence 144.
- While the database was at log sequence 145, data file 2 was lost.
Result: Log sequence 144 has not been overwritten, so data file 2 can be restored and recovered manually.
ADVANTAGES:
Easy to perform, with low risk of error. Recovery time is the time it takes to restore all files.
DISADVANTAGES:
Data is lost and must be reapplied manually. The entire database is restored to the point of the last whole closed backup.

USER MANAGED RECOVERY IN NOARCHIVELOG MODE:

1. SHUTDOWN ABORT;
2. Restore all files from the most recent backup, not just the corrupt file:
COPY d:\backup\*.* e:\oracle\ora9i\*.*
3. STARTUP. Notify users that they will need to reenter data entered since the time of the last backup.

RECOVERY IN NOARCHIVELOG MODE WITHOUT REDO LOG FILE BACKUPS

1. Shut down the instance.
2. Restore the most recent whole database backup with operating system commands.
3. Mount the database.
4. RECOVER DATABASE UNTIL CANCEL
5. CANCEL
6. Open the database with the RESETLOGS option to reset the current redo log sequence to 1:
ALTER DATABASE OPEN RESETLOGS;

Recovery in ARCHIVELOG MODE

* COMPLETE RECOVERY
- Uses redo data or incremental backups
- Updates the database to the most current point in time
- Applies all redo changes
* INCOMPLETE RECOVERY
- Uses backups and redo logs to produce a noncurrent version of the database

COMPLETE RECOVERY
1. Make sure that DATAFILES for restore are offline.
2. Restore only lost or damaged DATAFILES.
3. Do not restore the control files, redo log files, password files, or parameter files.
4. Recover the DATAFILES.
COMPLETE RECOVERY IN ARCHIVELOG MODE
ADVANTAGES
1. Only need to restore lost files.
2. Recovers all data to the time of failure.
3. Recovery time is the time it takes to restore lost files and apply all archived log files.
DISADVANTAGES
1. Must have archived log files since the backup from which you are restoring.

DETERMINING WHICH FILES NEED RECOVERY

1. Query V$RECOVER_FILE to determine which data files need recovery.
2. Query V$ARCHIVED_LOG for a list of all archived redo log files for the database.
3. Query V$RECOVERY_LOG for a list of all archived redo log files required for recovery.

To locate data files needing recovery, and where they need recovery from, use the
V$RECOVER_FILE view.
SQL> select * from v$recover_file;
FILE# ONLINE ERROR CHANGE# TIME
----- ------- ------ ------- ----
2 OFFLINE 288772 02-MAR-99
• The ERROR column returns two possible values to define the reason why the file needs to be recovered:
– NULL if the reason is unknown
– OFFLINE NORMAL if recovery is not needed
• The CHANGE# column returns the SCN (system change number) where recovery must start.

USER MANAGED RECOVERY PROCEDURES

CLOSED DATABASE RECOVERY:
1. Mount the database.
2. Restore the lost data file, for example to 'E:\ORADATA\USERS', using operating system commands.
3. RECOVER DATAFILE 'E:\ORADATA\USERS';
4. or RECOVER DATABASE;
OPEN DATABASE RECOVERY:
1. RECOVER TABLESPACE users;
2. or RECOVER DATAFILE 3;
RECOVER {AUTOMATIC} DATABASE: can be used only for a closed database recovery.
RECOVER {AUTOMATIC} TABLESPACE <NUMBER> | <NAME>: can be used only for an open database recovery.
RECOVER {AUTOMATIC} DATAFILE <NUMBER> | <NAME>: can be used for both open and closed database recovery.

USING ARCHIVED REDO LOG FILES DURING RECOVERY

Set a new location for archived log files with the following commands:
• To change the archive location, use the ALTER SYSTEM ARCHIVE LOG . . . command.
• To automatically apply redo log files:
– Issue SET AUTORECOVERY ON before starting media recovery (RECOVER).
– Enter AUTO when prompted for an archived log file.
– Use the RECOVER AUTOMATIC . . . command.

Restoring Archives to a Different Location

If archived logs are not restored to the LOG_ARCHIVE_DEST directory, then the Oracle server will need to be
notified before or during recovery, by:
• Specifying the location and name at the recover prompt:
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
• Use the ALTER SYSTEM ARCHIVE command:
SQL> alter system archive log start to <new location>;
• Use the RECOVER FROM <LOCATION> command:
SQL> recover from '<new location>' database;

How to Apply Redo Log Files Automatically

1 Before starting media recovery, issue the SQL*Plus statement:
SQL> set autorecovery on
2 Enter auto when prompted for a redo log file:
SQL> recover datafile 4;
ORA-00279: change 308810...12/02/97 17:00:14 needed for thread 1
ORA-00289: suggestion : /disk1/archive/arch_35.rdo
ORA-00280: change 308810 for thread 1 is in sequence #35
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
...
3 Use the AUTOMATIC option of the recovery command:
SQL> recover automatic datafile 4;
Media recovery complete.

Restoring Data Files to a Different Location

1 If the control files are restored to a different location, update the parameter file.
2 If a data file or redo log is restored to a different location or name, then:
– Mount the instance.
– Use the ALTER DATABASE command to update the control file with the new file location:
SQL> alter database rename file
2 ‘/disk1/data/user_01.dbf‘
3 to ‘/disk2/data/user_01.dbf‘;
Note: In the UNIX environment, the files must exist in the new location prior to issuing the ALTER DATABASE
RENAME command. This is not the case in an NT environment.
While starting up the database on a Monday morning, you get the following error after the database is
mounted:
ORA-01157: cannot identify/lock data file 9 - see DBWR trace file
ORA-01110: data file 9: '/u01/oracle/app/oradata/orcl/users01.dbf'
On investigation, you find that the file system, u01, on the operating system is corrupted and you need
to recover the data file to a new location. The database is running in ARCHIVELOG mode and the
database was backed up on last Friday.
You must ensure that the database is not accessible until the data file is recovered. Which two tasks must
you have accomplished before applying the archived redo log files? (Choose two.)
A. update the control file by using the ALTER DATABASE RENAME FILE command
B. restore the data file from the backup to the new location by using an operating system utility

Complete Recovery Methods

• Closed database recovery for:
– System data files
– Rollback segment data files
– Whole database
• Opened database recovery, with database initially opened: for file loss
• Opened database recovery with database initially closed: for hardware failure
• Data file recovery with no data file backup

Method 1: Recovering a Closed Database. This method of recovery generally uses either the RECOVER DATABASE or RECOVER DATAFILE command when:
• The database is not operational 24 hours a day, 7 days a week.
• The recovered files belong to the system or rollback segment tablespace.
• The whole database, or a majority of the data files, needs recovery.

Method 2: Recovering an Opened Database, Initially Opened. This method of recovery is generally used when:
• File corruption, accidental loss of a file, or media failure has occurred that has not resulted in the database being shut down.
• The database is operational 24 hours a day, 7 days a week, and downtime must be kept to a minimum.
• The recovered files do not belong to the system or rollback tablespaces.

Method 3: Recovering an Opened Database, Initially Closed. This method of recovery is generally used when:
• A media or hardware failure has brought the system down.
• The database is operational 24 hours a day, 7 days a week, and downtime must be kept to a minimum.
• The restored files do not belong to the system or rollback tablespace.

Method 4: Recovering a Data File with No Backup. This method of recovery is generally used when:
• Media or user failure has resulted in the loss of a data file that was never backed up.
• All archived logs exist since the file was created.
• The restored files do not belong to the system or rollback tablespace.

Note: During recovery, all archived logs files need to be available to the Oracle server on disk. If they are on a
backup tape, you must restore them first.

Read-Only Tablespace Recovery Issues

Special considerations must be taken for read-only tablespaces when:
• Re-creating a control file
• Renaming data files
• Using a backup control file

Loss of Control Files

You may need to create control files if:
• All control files are lost because of a failure
• The name of a database needs to be changed
• The current settings in the control file need to be changed

Recovering Control Files

Methods to recover from loss of control file:
• Use the current control file
• Create a new control file
• Use a backup control file
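As a precaution that keeps these methods available, both a binary copy and a creation script of the control file can be taken while the database is healthy; a small sketch (the backup path is illustrative):
SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/backup/control01.bkp';
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
The TO TRACE form writes a CREATE CONTROLFILE script to a trace file, which supports the "create a new control file" method above.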
**************************************************************************************
To obtain detailed information about the datafiles associated with the temporary tablespace,
you must query the V$TEMPFILE or DBA_TEMP_FILES views in Oracle9i. Some of the
important columns in the V$TEMPFILE dynamic performance view are NAME, FILE#, TS#,
STATUS, ENABLED, and BYTES.
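For example, using the columns named above:
SQL> SELECT name, file#, ts#, status, enabled, bytes
  2  FROM v$tempfile;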
CHAP 13 RMAN Complete Recovery
RESTORATION AND DATAFILE MEDIA RECOVERY USING RMAN:
RMAN automates the procedure for restoring files. When you issue the RESTORE command,
RMAN uses a server session to restore the correct backups and copies. The RMAN repository is
used to select the best available backup set or image copies to use in the restoration. By default,
RMAN does not restore a file if the file is already in the correct place and its header contains the
correct information. In releases prior to Oracle9i, the files were always restored. When you issue
the RMAN RECOVER command, RMAN applies changes from online redo log files and archived
redo log files, or uses incremental backups to recover the restored files. Using RMAN you can
perform recovery at the following levels:
1. DATABASE 2. TABLESPACE 3. DATAFILE
In complete recovery, all of the redo entries in the archived redo logs files and online redo log files are used
to recover the database. The damaged files are restored from a backup and the log files are used to update
the DATAFILES to the current point in time.
USING RMAN TO RECOVER A DATABASE IN NOARCHIVELOG MODE:
rman target /
RMAN> STARTUP MOUNT
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
1. You can only restore using RMAN if the backups were taken by, or registered with, RMAN.
2. To restore to a previous point in time, you may have to use the backup of an older control file and the RESTORE CONTROLFILE option. The database should be in NOMOUNT state to restore the control file.
3. The target database must be in MOUNT mode for the restoration of data files.
4. All of the data files must be restored from a backup taken at the same time.
5. The ALTER DATABASE OPEN RESETLOGS command may be required if a backup of the control file was restored.
6. A whole backup is required after an OPEN RESETLOGS command.

USING RMAN TO RECOVER A DATABASE IN ARCHIVELOG MODE:

RMAN TARGET /
RMAN> STARTUP MOUNT
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;

USING RMAN TO RESTORE DATAFILES TO A NEW LOCATION:

Use the SET NEWNAME command to restore the data file to the new location, and the SWITCH command to record the change in the control file.
1. Connect to RMAN.
2. STARTUP MOUNT
3. Use RMAN to restore the data files to the new location and record the change in the control file:
RUN { SET NEWNAME FOR DATAFILE 1 TO 'C:\BACKUP\FILE1.BAK';
RESTORE DATABASE; SWITCH DATAFILE ALL;
RECOVER DATABASE; ALTER DATABASE OPEN; }
USING RMAN TO RESTORE AND RECOVER A TABLESPACE
RUN { SQL "ALTER TABLESPACE USERS OFFLINE IMMEDIATE";
RESTORE TABLESPACE USERS; RECOVER TABLESPACE USERS;
SQL "ALTER TABLESPACE USERS ONLINE"; }

USING RMAN TO RELOCATE A TABLESPACE:

Use the SET NEWNAME command to restore the files, the SWITCH command to record the new name in the control file, and the RECOVER TABLESPACE command to recover the data files of the tablespace.
RUN { SQL "alter tablespace users offline immediate";
SET NEWNAME FOR DATAFILE 'e:\oracle\oradata\users01.dbf' TO 'c:\file3.dbf';
RESTORE (TABLESPACE users);
SWITCH DATAFILE 3; RECOVER TABLESPACE users; SQL "alter tablespace users online"; }
*************************************************************************************
CHAP 14 User-Managed Incomplete Recovery

INCOMPLETE RECOVERY:
Incomplete recovery reconstructs the database to a prior point in time, before the time of the failure. This results in the loss of data from transactions committed after the recovery point. It requires a valid offline or online backup of all of the data files made before the recovery point, and all archived logs from the backup until the specified time of recovery.


REASONS FOR PERFORMING INCOMPLETE RECOVERY:

• User error
– An important table was dropped.
– Bad data was committed in a table.
• Complete recovery fails because an archived log is lost.
• Loss of all control files
• Loss of all unarchived redo log files and a data file (when media failure destroys some or all of the online redo logs that were not yet archived)
TYPES OF INCOMPLETE RECOVERY:
There are three types: time-based, cancel-based, and change-based.
You may also need incomplete recovery when control files are lost and you are recovering to a point at which the database structure differed from the current structure.

TIME BASED RECOVERY:

This method of recovery is terminated after all changes up to a specified point in time are committed. Use it when:
(1) Unwanted changes to data were made or important tables were dropped, and the approximate time of the error is known.
(2) The approximate time at which a nonmirrored online redo log became corrupt is known.


CANCEL BASED RECOVERY:

Entering CANCEL at the recovery prompt instead of a log file name terminates this method of recovery. Use it when:
(1) A current redo log file or group is damaged and is not available for recovery. Mirroring should prevent
the need for this type of recovery.
(2) An archived redo log file needed for recovery is lost. Frequent backups and multiple archive
destinations should prevent the need for this type of recovery.
The redo log files were not multiplexed and the current redo log file is not yet archived.
Which recovery method would you follow to recover the SALES_HISTORY table?
Cancel-Based incomplete recovery


CHANGE BASED RECOVERY:

This method of recovery is terminated after all changes up to the specified system change number (SCN)
are committed. Use this approach when recovering databases in a distributed environment.


RECOVERY USING A BACKUP CONTROL FILE:

This method of recovery terminates when the specified method of recovery (cancel-, time-, or change-based) has completed or the control files are recovered. You must specify in the RECOVER DATABASE
command that an old copy of the control file will be used for recovery. Use this approach when:
(1) All control files are lost, the control file cannot be re-created, and a binary backup of the control
file exists. Mirroring the control file (onto different disks) and keeping a current text version of the
CREATE CONTROLFILE statement reduces the chances of using this method.
(2) Restoring a database, with a different structure than the current database, to a prior point in
time.


INCOMPLETE RECOVERY GUIDELINES:

(1) Take a whole database backup before recovery and another whole backup after recovery.
(2) Always verify that the recovery was successful.
(3) Back up and remove archived logs.


INCOMPLETE RECOVERY AND THE ALERT LOG:

During recovery, progress information is stored in the alert log. This file should always be checked before and after recovery.


USER MANAGED PROCEDURES FOR INCOMPLETE RECOVERY

1. Perform a full closed backup of the existing database. Shut down the database as all DATAFILES,
including the system TABLESPACE files, will be restored from a backup
2. Restore all DATAFILES to take your database back in time.
3. Mount the database. Ensure that the DATAFILES are online.
4. Recover the database.
5. Open the database by using the RESETLOGS option and verify the recovery.
6. Perform a whole closed backup of the database.


RECOVER COMMAND OVERVIEW

SYNTAX:
RECOVER DATABASE UNTIL TIME 'YYYY-MM-DD:HH24:MI:SS';
RECOVER DATABASE UNTIL CANCEL;
RECOVER DATABASE UNTIL CHANGE <integer>;
RECOVER DATABASE ... USING BACKUP CONTROLFILE;
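For example (the time stamp and change number are illustrative):
SQL> RECOVER DATABASE UNTIL TIME '2001-03-09:11:44:00';
SQL> RECOVER DATABASE UNTIL CANCEL;
SQL> RECOVER DATABASE UNTIL CHANGE 309121;
SQL> RECOVER DATABASE UNTIL TIME '2001-03-09:11:44:00' USING BACKUP CONTROLFILE;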


TIME BASED RECOVERY

SCENARIO: The current time is 12:00 p.m. on March 9, 2001. The EMPLOYEES table was dropped at approximately 11:45 a.m. and must be recovered.
1. If the database is open, shut it down by using the NORMAL, IMMEDIATE, or TRANSACTIONAL option.
2. Restore all DATAFILES from backup. You may need to restore archived logs to LOG_ARCHIVE_DEST, or use SET LOGSOURCE E:\ARCHIVE to change the location.
3. Mount the database and recover: RECOVER DATABASE UNTIL TIME '2001-03-09:11:44:00';
4. To synchronize DATAFILES with the control files and redo logs, open the database by using RESETLOGS:
ALTER DATABASE OPEN RESETLOGS;
ARCHIVE LOG LIST;
5. Before performing the whole closed database backup, query the EMPLOYEES table to make sure it exists.
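Put together as one session, the scenario might look like this sketch (the restore step uses operating system commands; paths are illustrative):
SQL> SHUTDOWN IMMEDIATE
$ cp /backup/*.dbf /oradata/db01/ (restore all data files with OS commands)
SQL> STARTUP MOUNT
SQL> RECOVER DATABASE UNTIL TIME '2001-03-09:11:44:00';
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> ARCHIVE LOG LIST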


CANCEL BASED RECOVERY:

SCENARIO: The current time is 12:00 p.m. on March 9, 2001. The EMPLOYEES table was dropped while someone was trying to fix bad blocks. Log files exist on the same disk. The table was dropped at approximately 11:45 a.m.
You are concerned about block corruption in the EMPLOYEES table resulting from the disk error. One of the online redo logs is missing; it has not been archived and contained information from 11:34 a.m. onward. Twenty-six minutes of data will be lost, and users can re-enter their data.
After searching through the /disk1/data directory, you notice that redo log LOG2.rdo cannot be located and has not been archived. Therefore, you cannot recover past this point. Querying V$LOG_HISTORY confirms the absence of archived log sequence 48 (log2.rdo). The work from this period will be lost if the database is recovered, but it can be re-entered manually.
(1) Shut down the database.
(2) Restore all DATAFILES from the most recent backup and mount the database.
(3) Issue RECOVER DATABASE UNTIL CANCEL and apply logs up to sequence 47; when log sequence 48 is requested, enter CANCEL at the prompt.
(4) Open the database with ALTER DATABASE OPEN RESETLOGS.


USING A BACKUP CONTROL FILE DURING RECOVERY:

SCENARIO: The current time is 12:00 p.m. on March 9, 2001. The tablespace containing the EMPLOYEES table has been dropped. The error occurred around 11:45 a.m. Backups are taken every night.
(1) The tablespace EMP_TS containing the EMPLOYEES table has been dropped.
(2) Place the database in restricted mode and disconnect all users: ALTER SYSTEM ENABLE RESTRICTED SESSION;
(3) Restore the control file and data files for the database from a time when the tablespace existed. Check the V$RECOVER_FILE view.
(4) Confirm the time of the error by checking the alert log.
(5) Perform the recovery: RECOVER DATABASE UNTIL TIME '2001-03-09:11:44:00' USING BACKUP CONTROLFILE;
(6) To synchronize the data files with the control files and redo logs, open the database with the RESETLOGS option.
(7) Verify that the EMPLOYEES table exists and make a whole backup.
The backup from last night contains the data files and control files required for recovery. The EMP_TS tablespace has one data file.


LOSS OF CURRENT REDO LOG FILES:

(1) Attempting to open the database immediately notifies you of the loss of the current redo log group. The group cannot be cleared with the CLEAR LOGFILE command because it is current.
(2) Note the current log sequence number from V$LOG; in this scenario it is sequence 61.
(3) Restore all DATAFILES from a previous backup, use the RECOVER DATABASE UNTIL CANCEL command, and cancel before redo log 61 is applied.
(4) Open the database using the RESETLOGS option.
(5) The database should now be operational, because any missing log files are re-created.
(6) Because you just performed incomplete recovery, the database should now be backed up.

*************************************************************************

CHAP 15 RMAN Incomplete Recovery

Incomplete Recovery Using RMAN

• Mount the database
• Allocate multiple channels for parallelization
• Restore all datafiles
• Recover the database by using UNTIL TIME, UNTIL CANCEL, UNTIL SEQUENCE, or UNTIL SCN
• Open the database using resetlogs
• Perform a whole database backup

Note: You can only restore using RMAN if the backups were taken or registered with RMAN.

RMAN Incomplete Recovery: UNTIL TIME

If the target database is open, perform a clean shutdown, then mount it. Do not take any backups during the recovery.
$ export NLS_LANG=american
$ export NLS_DATE_FORMAT='YYYY-MM-DD:HH24:MI:SS'
$ rman target rman/rman@db01

RMAN > run {
2> allocate channel t1 type disk;
3> allocate channel t2 type disk;
4> set until time = '2001-12-09:11:44:00';
5> restore database;
6> recover database;
7> alter database open resetlogs; }
Perform a backup.
** If using a recovery catalog, register the new incarnation of the database:
RMAN> RESET DATABASE;

RMAN Incomplete Recovery: UNTIL SEQUENCE

RMAN > run {
2> set until sequence 120 thread 1;
3> restore database;
4> recover database; # recovers through log 119
5> alter database open resetlogs; }
* This restores the database from the last available backup taken prior to sequence 120 and recovers it up to, but not including, the specified sequence number.

*************************************************************************

CHAP 16 RMAN Maintenance

CROSS CHECK BACKUPS AND COPIES:
1. Ensure repository information is synchronized with actual files.
2. You can use the LIST command to obtain a report of the backups and copies that you have made and then
use the CROSSCHECK command to check that these files still exist. If RMAN cannot find a file, it updates the
repository records to EXPIRED. You can determine which files are marked EXPIRED by issuing a LIST EXPIRED
command. Then you can run DELETE EXPIRED to remove the repository records for all expired files.
3. If the backup or copy is on disk, then the CROSSCHECK command determines whether the header of the backup piece is valid. If the backup is on tape, then the command simply checks that the backups exist.

THE CROSSCHECK COMMAND
CROSSCHECK BACKUP; crosschecks backup sets, backup pieces, and proxy copies (backups whose data transfer was performed directly by the media management software). By default, RMAN crosschecks backups of the whole database.
CROSSCHECK COPY; crosschecks data file copies, control file copies, archived redo logs, and image copies of archived redo logs. By default, all files in the database with status AVAILABLE or EXPIRED are checked. The CROSSCHECK command can also be used with options to restrict the check to a specific backup piece, backup set, data file copy, or control file copy.

RMAN> CROSSCHECK BACKUPSET OF DATABASE;
It checks for synchronization of all backup sets including the backup pieces and proxy copies.

DELETE BACKUPS AND COPIES:
1. Delete physical backups and image copies.
2. Update the repository status to DELETED.
3. Remove records from the recovery catalog.

DELETE COMMAND
DELETE BACKUPSET 102; (deletes a specific backup set)
DELETE NOPROMPT EXPIRED BACKUP OF TABLESPACE USERS; (deletes an expired backup without confirmation)
By default, the DELETE command displays a list of the files and prompts you for confirmation before deleting any file in the list. No prompt is the default when running the DELETE command from a command file.
DELETE OBSOLETE; (deletes all backups, copies, and archived redo log files that are obsolete under the configured retention policy)
DELETE OBSOLETE RECOVERY WINDOW OF 7 DAYS;

DELETING BACKUPS AND COPIES:
Delete input files upon successful creation of the backup. You can specify this option only when backing up archived redo log files, data file copies, and backup sets. The BACKUP ARCHIVELOG command backs up only one copy of each distinct log sequence number, so if you specify the DELETE INPUT option without the ALL keyword, RMAN deletes only the copy of the file that it backs up.

CHANGING THE AVAILABILITY OF RMAN BACKUPS AND COPIES:
You can use the CHANGE ... UNAVAILABLE command when a backup or copy cannot be found or is unavailable because of hardware maintenance. If a file is marked UNAVAILABLE, RMAN will not use the file when a RESTORE or RECOVER command is issued. When the file is found or the maintenance is completed, you can mark it available again by issuing the CHANGE ... AVAILABLE command.
(1) CHANGE BACKUPSET 100 UNAVAILABLE;
(2) CHANGE DATAFILECOPY 'E:\USERS01.DBF' UNAVAILABLE; changes the status of specific data file copies, archived redo logs, or image copies of archived redo logs. If you do not specify an option for COPY, then CHANGE COPY operates on all copies recorded in the repository.
(3) CHANGE BACKUP OF CONTROLFILE UNAVAILABLE; changes the status of control file backups. If you do not specify an option for BACKUP, then CHANGE BACKUP operates on all backups recorded in the repository.
(4) CHANGE COPY OF ARCHIVELOG SEQUENCE BETWEEN 230 AND 240 UNAVAILABLE; changes the status of archived redo log files.

EXEMPTING A BACKUP OR COPY FROM THE RETENTION POLICY:
You can use the CHANGE ... KEEP command to make a file exempt from the retention policy and CHANGE ... NOKEEP to make it conform to the retention policy again. KEEP overrides any configured retention policy for the backup or copy so that it is never considered obsolete. NOKEEP specifies that the backup or copy expires according to the retention policy.
CHANGE BACKUPSET 101 KEEP FOREVER NOLOGS; creates a long-term backup. FOREVER specifies that the backup or copy never expires; you must use a recovery catalog when FOREVER is specified, because the backup records eventually age out of the control file.
LOGS: Indicates that all of the archived logs required to recover this backup or copy must remain available as long as this backup or copy is available.
NOLOGS: Specifies that this backup or copy cannot be recovered because the archived logs needed to recover it will not be kept. You can only use this backup or copy to restore the database to the point in time at which the backup or copy was taken.
CHANGE DATAFILECOPY 'E:\USERS01.DBF' KEEP UNTIL TIME 'SYSDATE+60';
Makes a data file copy exempt from the retention policy for 60 days.

CATALOGING ARCHIVED REDO LOG FILES AND USER MANAGED BACKUPS:
You can use the CATALOG command to add information to the repository about:
(1) An operating system data file copy: CATALOG DATAFILECOPY 'E:\FILE1.DBF';
(2) An archived redo log copy: CATALOG ARCHIVELOG 'E:\ARCHIVE\arc_11', 'E:\ARCHIVE\arc_12';
(3) A control file copy: CATALOG CONTROLFILECOPY 'E:\BACKUP\CONTROL01.CTL';
If it is a control file backup, it should have been made with the ALTER DATABASE BACKUP CONTROLFILE command. RMAN treats operating system backups as data file copies. During cataloging, RMAN checks only the file header; it does not check whether the file was correctly copied by the operating system utility.

UNCATALOGING RMAN RECORDS:
Use the CHANGE ... UNCATALOG command to update a backup or copy record in the repository to DELETED status, or to delete a specific backup or copy record from the recovery catalog. The CHANGE ... UNCATALOG command only updates the repository; it does not remove physical backups or copies.
(1) Remove records for deleted archived redo log files: CHANGE ARCHIVELOG 'E:\ARCHIVE\arc_11' UNCATALOG;
(2) Remove records for a deleted data file copy: CHANGE DATAFILECOPY 'E:\FILE1.DBF' UNCATALOG;

VIEWS
1.V$BACKUP_DATAFILE: Is useful for creating equal sized backup sets by determining the number of blocks in
each data file. It can also find the number of corrupt blocks for the DATAFILE.
2.V$BACKUP_REDOLOG: Shows archived logs stored in backup sets.
3.V$BACKUP_SET: Shows backup sets that have been created.
4.V$BACKUP_PIECE: Shows backup pieces created for backup sets.
5.V$SESSION_LONGOPS: To monitor the progress of backups and copies.
6.V$RECOVER_FILE: Identifies files needing recovery and where recovery needs to start.
7.V$RECOVERY_LOG: Contains useful information only for the Oracle process doing the recovery.
VIEWING THE RECOVERY CATALOG:
8.RC_DATABASE:To determine which databases are currently registered in the recovery catalog.
9.RC_DATAFILE
10.RC_TABLESPACE: To determine which TABLESPACES are currently stored in the recovery catalog for
the target DATABASE.
11.RC_STORED_SCRIPT: To determine which scripts are currently stored in the recovery catalog for the target
DATABASE.
12.RC_STORED_SCRIPT_LINE: You must query the recovery catalog view RC_STORED_SCRIPT_LINE to
obtain the code associated with the RMAN stored scripts. This view contains one row for each line of the
stored script.

*********************************************************************************

CHAP 17 RECOVERY CATALOG creation and maintenance
RECOVERY CATALOG:

The recovery catalog is a schema that is created in a separate DATABASE. It contains the RMAN
metadata obtained from the target DATABASE CONTROLFILE. RMAN propagates information about the
DATABASE structure, archived redo logs, backup sets, and DATAFILE copies into the recovery catalog
from the control file of the target DATABASE. You can use the REPORT and LIST commands to obtain
information from the recovery catalog. You should use a catalog when you have multiple target
databases to manage. The recovery catalog is maintained by RMAN when you do the following:
1. Register the target DATABASE in the catalog.
2. Resynchronize the catalog with the control file of the target DATABASE.
3. Reset the DATABASE to a previous incarnation.
4. Change information about the backups or files.
5. Perform a backup, restore, or recovery operation.
You can also store scripts in the recovery catalog.

RECOVERY CATALOG CONTENTS:
The recovery catalog is an optional repository containing information on:
1. DATAFILE and archived redo log file backup sets and backup pieces. The catalog stores
information such as the name and time of the backup set.
2. DATAFILE copies. The catalog records the time stamp and name of data file copies.
3. Archived redo log files. The catalog maintains a record of which archived logs have been created by the server and any copies made by RMAN.
4. The physical structure of the target DATABASE. It contains information similar to that contained
in the target DATABASE CONTROLFILE.
5. Configuration settings which are persistent across RMAN sessions and are set with the
CONFIGURE command.
6. Stored scripts, which are named sequences of commands.

BENEFITS OF USING A RECOVERY CATALOG:
The recovery catalog can store information about more than one incarnation of a single database. This allows you to report on the target database from a noncurrent incarnation. If you want to store scripts that contain commands for backup and recovery operations, you must use a recovery catalog: scripts cannot be stored in the target database control file.

CREATE RECOVERY CATALOG
1. Create a tablespace to hold the recovery catalog.
2. Create a user and schema for the recovery catalog owner.
3. Grant the roles and privileges this user needs to maintain the recovery catalog and perform the backup and recovery operations: the RECOVERY_CATALOG_OWNER role, plus CONNECT and RESOURCE.
4. Create the catalog: CREATE CATALOG TABLESPACE RMAN_TS;
5. Connect to the target database. You must log in as a user with SYSDBA privileges on the target database to perform all the backup and recovery operations: RMAN TARGET SYS/SYS@DB01, then CONNECT CATALOG RMAN/RMAN@CATDB.
6. Register the target database in the catalog. If the target database is not registered in the recovery catalog, the catalog cannot be used to store information about the database. Recovery Manager uses the internal database identifier (DBID), which is calculated when the database is first created, as a unique identifier for the database. If you attempt to register a new database that was created by copying an existing database and then changing the DB_NAME, the registration will fail. You can avoid this problem by using the DUPLICATE command, which copies the database from a backup and generates a new database identifier.

RMAN creates rows in the recovery catalog that contain information about the target DATABASE. RMAN
copies all pertinent data about the target DATABASE from the control file into the recovery catalog.
RMAN>REGISTER DATABASE;
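Steps 1 through 6 can be consolidated into a session like this sketch (user names, passwords, service names, and the data file path are illustrative):
SQL> CREATE TABLESPACE rman_ts DATAFILE '/u01/oradata/catdb/rman_ts01.dbf' SIZE 20M;
SQL> CREATE USER rman IDENTIFIED BY rman DEFAULT TABLESPACE rman_ts QUOTA UNLIMITED ON rman_ts;
SQL> GRANT recovery_catalog_owner, connect, resource TO rman;
$ rman catalog rman/rman@catdb
RMAN> CREATE CATALOG TABLESPACE rman_ts;
RMAN> CONNECT TARGET sys/sys@db01
RMAN> REGISTER DATABASE;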

The backup of database files created using O/S commands must be manually restored and recovered.
You can, however, register these files with the repository by using RMAN's CATALOG command. The
restore and recover operations hereafter would be possible using RMAN.

CONNECTING USING A RECOVERY CATALOG
Initiating a session on UNIX:
$ ORACLE_SID=OR9I; export ORACLE_SID
$ RMAN TARGET SYS/ORACLE
RMAN> connect catalog rman/rman@catdb
Initiating a session on NT:
C:\> SET ORACLE_SID=ORA9I
C:\> RMAN TARGET SYS/ORACLE
RMAN> connect catalog rman/rman@catdb

RECOVERY CATALOG MAINTENANCE:
The CATALOG, CHANGE, and DELETE commands can be used to update the recovery catalog manually.
1. REGISTER 2.RESYNCHRONIZE 3. RESET
4. CHANGE/DELETE/CATALOG 5. BACKUP/RESTORE/RECOVER


RESYNCHRONIZATION OF THE RECOVERY CATALOG HAPPENS:

• Automatically with RMAN commands.
• Manually with RESYNC CATALOG.

Resynchronization of the recovery catalog ensures that the metadata is consistent with the target database control file. Resynchronizations can be full or partial. In a partial resynchronization, RMAN reads the
current control file to update changed data, but does not resynchronize metadata about the database
physical schema: DATAFILES, TABLESPACES, redo threads, rollback segments, and online redo logs. In a
full resynchronization, RMAN updates all changed records, including schema records. RMAN automatically
detects when it needs to perform a full or partial resynchronization and executes the operation as
needed. You can also force a full resynchronization by issuing a RESYNC CATALOG command. To ensure
that the catalog stays current, run the RESYNC CATALOG command periodically. A good rule of thumb is
to run it at least once every n days, where n is the setting for the initialization parameter
CONTROL_FILE_RECORD_KEEP_TIME. Because the controlfile employs a circular reuse system, backup
and copy records eventually get overwritten. Resynchronizing the catalog ensures that these records are
stored in the catalog and are not lost.
ISSUE THE RESYNC CATALOG COMMAND WHEN YOU:
1. Add or drop a TABLESPACE.
2. Add or drop a DATAFILE.
3. Relocate a database file.
RMAN> RESYNC CATALOG
Any structural changes to the database cause the control file and recovery catalog to become “out of
synch”. The catalog will be synchronized automatically when a BACKUP or COPY command is issued with
a connection to the catalog. However, this synchronization can cause delay in the backup operation.
RESYNC CATALOG command updates the following records:
LOG HISTORY: Created when a log switch occurs. Recovery Manager tracks this information so that it knows which archived logs it should expect to find.
ARCHIVED REDO LOG: Associated with archived logs that were created by archiving an online log, by
copying an existing archived log, or by restoring an archived log backup set.
BACKUP HISTORY: Associated with backup sets, backup pieces, backup set members, proxy copies,
and image copies.
PHYSICAL SCHEMA: Associated with DATAFILES and TABLESPACES.

RESETTING A DATABASE INCARNATION:
Use the RESET DATABASE command:
1. When the database is opened with the RESETLOGS option.
2. To direct RMAN to create a new database incarnation record.
3. To distinguish between opening with RESETLOGS and an accidental restore of an old control file.
An incarnation of a database is a number used to identify a version of the database prior to the log
sequence number being reset to zero. This prevents archived and online redo logs from being applied to
an incorrect incarnation of the database. The RESET DATABASE command is used by RMAN to store
database incarnation information in the recovery catalog. All subsequent backup and log archives are
associated with the new database incarnation.
RESET DATABASE TO INCARNATION <key>: This command is used to undo the effects of a RESETLOGS operation by restoring backups of a prior incarnation of the database. The key, which you must specify, is the primary key of the record for the database incarnation to which you want to return; it is obtained from the LIST INCARNATION OF DATABASE command.
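A short sketch (the incarnation key shown is illustrative; use the key reported by LIST):
RMAN> LIST INCARNATION OF DATABASE;
RMAN> RESET DATABASE TO INCARNATION 2;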

RECOVERY CATALOG REPORTING:
These commands analyze and list information contained inside the recovery catalog.
REPORT COMMAND: You can use the REPORT command to analyze various aspects of the backup,
copy, restore, and recovery operations.
LIST COMMAND: You can use the LIST command to display information on backup sets,
file copies, and archived logs, which are stored in the recovery catalog.

VIEWS:
In addition to the REPORT and LIST commands, you can query the data dictionary and the dynamic views that are created when the recovery catalog is created.
VIEWING THE RECOVERY CATALOG:
1.RC_DATABASE: To determine which databases are currently registered in the recovery catalog.
2.RC_DATAFILE
3.RC_TABLESPACE: To determine which TABLESPACES are currently stored in the recovery catalog
for the target DATABASE.
4.RC_STORED_SCRIPT: To determine which scripts are currently stored in the recovery catalog for the
target DATABASE.
5.RC_STORED_SCRIPT_LINE: To list the text of a specified stored script or you can use the PRINT
SCRIPT command.

CATALOG MAINTENANCE
(a) Register(b) Resynchronize (c) Reset (d) Change
(e) Backup (f) Restore (g) Recover

STORED SCRIPTS:
A Recovery Manager script is a set of commands that:
1. Specify frequently used backup, recover, and restore operations.
2. Are created using the CREATE SCRIPT command.
3. Are stored in the recovery catalog.
4. Can be called only by using the RUN command.
5. Enable you to plan, develop and test a set of commands for backing up, restoring, and
recovering the DATABASE.
6. Minimize the potential for operator errors.
RMAN provides a way of storing these scripts in the recovery catalog.
USE PRINT SCRIPT TO DISPLAY A SCRIPT
1. PRINT SCRIPT LEVEL0BACKUP;
USE CREATE SCRIPT TO STORE A SCRIPT
2. CREATE SCRIPT LEVEL0BACKUP {
BACKUP INCREMENTAL LEVEL 0
FORMAT 'E:\BACKUP\%d%s%p' FILESPERSET 5
(DATABASE INCLUDE CURRENT CONTROLFILE);
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT'; }
USE EXECUTE SCRIPT TO RUN A SCRIPT
3. RUN { EXECUTE SCRIPT LEVEL0BACKUP; }
You can rewrite a script with the REPLACE SCRIPT command. You must supply the entire script, not just the changed lines.
4. REPLACE SCRIPT LEVEL0BACKUP {
.......
FILESPERSET 3
....... }
USE DELETE SCRIPT TO REMOVE A SCRIPT
5. DELETE SCRIPT LEVEL0BACKUP;

There are three databases in your company: PDDB, QTDB, and SLDB. A single RMAN recovery catalog is
used for all the three databases. In the recovery catalog you have a stored script, Level0Backup, created
for performing a level 0 backup. For which database will the backup be performed when you execute this
script?
C. the target database to which RMAN is connected
***********************************************************************************************

CHAP 18 Export and Import Utilities

Oracle Export and Import Utility Overview
These utilities enable you to do the following:
• Archive historical data
• Save table definitions (with or without data) to protect from user error failure
• Move data between machines and databases or versions of the Oracle server
• Transport tablespaces between databases
Export and Import Utility Overview
• The Export utility provides a logical backup of
– A database object
– Schema’s objects
– A tablespace
– An entire database
• The Import utility is used to read a valid export file to move data into a database. Redo log history cannot be applied to objects that are imported from an export file; therefore data loss may occur, but it can be minimized.
The DBA can use the Export and Import utilities to complement normal operating system backups by using them to:
• Create a historical archive of a database object or an entire database, for example, when a schema is modified to support changing business requirements.
• Save table definitions in a binary file. This may be useful for maintaining a baseline of a given schema structure.
• Move data from one Oracle version to another, such as upgrading from Oracle8i to Oracle9i.
• Protect against:
– User errors where a user may accidentally drop or truncate a table
– A table that has become logically corrupted
– An incorrect batch job or other DML statement that has affected only a subset of the database

*Which backup is considered a logical backup? C. exports of schema objects into a binary file

• Recover:
– A logical database to a point different from the rest of the physical database when multiple logical
databases exist in separate tablespaces of one physical database
– A tablespace in a Very Large Data Base (VLDB) when tablespace point-intime recovery (TSPITR) is
more efficient than restoring the whole database from a backup and rolling it forward

Methods to Run the Export Utility
• An interactive dialog
• The export page of the Data Manager within Oracle Enterprise Manager
• Parameter files
• The command line interface, by specifying parameters

Export Methods
• Interactive dialog: by specifying the EXP command at the operating system prompt with no parameters, the Export utility prompts you for inputs, supplying default values.
• The export page of Data Manager within Oracle Enterprise Manager.
• If command-line mode is chosen, the selected options must be explicitly specified on the command line. Any missing options default to the Export utility default values.
Note: Many options are available only by using the command-line interface. However, you can use a parameter file with the command line.


Table Mode: table definitions, table data (all or selected rows), the owner's table grants, the owner's table indexes, table constraints.
User Mode: table definitions, table data, the owner's grants, the owner's indexes, table constraints.
Tablespace Mode: table definitions, grants, indexes, table constraints (table data travels with the transported data files).
Full Database Mode: table definitions, table data, grants, indexes, table constraints, triggers.

Table Mode
Table mode exports specified tables in the user’s schema, rather than all tables. A privileged user can
export specified tables owned by other users.
User Mode
User mode exports all objects in a user's schema. Privileged users can export all objects in the schemas of a specified set of users. This mode can be used to complement a full database export.
Tablespace Mode
You can use transportable tablespaces to move a subset of an Oracle database and plug it into another
Oracle database, essentially moving tablespaces between the databases. Moving data by way of
transportable tablespaces can be much faster than performing either an import/export of the same data,
because transporting a tablespace only requires the copying of data files and integrating the tablespace
structural information. You can also use transportable tablespaces to move index data, thereby avoiding
the index rebuilds you would have to perform when importing table data.
Full Database Mode
Full database mode exports all database objects, except those in the SYS schema. Only privileged users
can export in this mode.

Command Line Export
Syntax
exp keyword = (value1, value2, ..., valuen)
Examples
exp scott/tiger TABLES=(emp,dept) rows=y file=expincr1.dmp
exp system/manager OWNER=SCOTT direct=y file=expdat.dmp (expdat.dmp is the default file name)
exp SYS/SYS AS SYSDBA TRANSPORT_TABLESPACE=y TABLESPACES=(ts_emp) log=ts_emp.log

Export Parameters

Parameter               Description                                               Default

BUFFER                  Size of the data buffer in bytes (integer)                OS-specific
COMPRESS                Include all data in one extent: (Y)es/(N)o                Y
CONSISTENT              Read-consistent view of the database when data is         N
                        updated during an export: (Y)es/(N)o
CONSTRAINTS             Export table constraints: (Y)es/(N)o                      Y
DIRECT                  Use direct-path export: (Y)es/(N)o                        N
FILE                    Name of the output file                                   expdat.dmp
FULL                    Export the entire database: (Y)es/(N)o                    N
GRANTS                  Export object grants: (Y)es/(N)o                          Y
HELP                    Display export parameters in interactive mode: (Y)        N
INCTYPE                 Type of incremental export                                None
INDEXES                 Export indexes: (Y)es/(N)o                                Y
LOG                     Name of the file for informational and error messages     None
OBJECT_CONSISTENT       Use a read-only transaction for each object               N
OWNER                   List of users whose objects are exported                  None
PARFILE                 Name of the file in which parameters are specified        None
POINT_IN_TIME_RECOVER   Export one or more tablespaces for tablespace             N
                        point-in-time recovery (release 8.0 only)
RECORDLENGTH            Length in bytes of the file record                        OS-specific
RECOVERY_TABLESPACES    Tablespaces to be recovered by point-in-time              None
                        recovery (release 8.0 only)
ROWS                    Include table rows in the export file: (Y)es/(N)o         Y
STATISTICS              Type of optimizer statistics to generate when the         ESTIMATE
                        data is later imported: ESTIMATE, COMPUTE, or NONE
TABLES                  List of tables to export                                  None
TABLESPACES             Tablespaces to be transported (release 8.1 only)          None
TRANSPORT_TABLESPACE    Enable export of transportable tablespace                 N
                        metadata (release 8.1 only)
USERID                  Username/password of the user performing the export       None
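As an illustration of combining these parameters, the following sketch exports the SCOTT schema with a
read-consistent view while other sessions may be updating the data (the file names are illustrative):
$ exp system/manager OWNER=scott CONSISTENT=y file=scott.dmp log=scott.log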
Direct Path Export Concepts
By using the direct-path feature, you can extract data much faster. When the parameter DIRECT=Y is
specified, the Export utility reads directly from the data layer instead of going through the
SQL command-processing layer.
Mechanics of Direct-Path Export
• Direct mode is enabled by specifying the parameter DIRECT=Y.
• Direct-path export does not compete with other users for instance resources.
• In direct read mode, database blocks are read into a private area used by the session.
• Rows are transferred directly into the Two-Task Common (TTC) buffer for transport.
• The data in the TTC buffer is already in the format that the Export utility expects.
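A hedged example of a direct-path invocation (the table and file names are illustrative; RECORDLENGTH
is optional and is often raised to its maximum value of 65535 to improve throughput):
$ exp scott/tiger TABLES=(emp,dept) DIRECT=y RECORDLENGTH=65535 file=emp_direct.dmp log=emp_direct.log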

Direct Path Export


The direct-path option of the Export utility introduces some features that differentiate
it from the conventional-path export.
Direct-Path Features
• The type of export is indicated on the screen output, export dump file, and the log file specified with
the LOG parameter.
• Data is already in the format that Export expects, thus avoiding unnecessary data conversion. The data
is transferred to the Export client, which then writes the data into the export file.
• The direct-path export uses an optimized SELECT * FROM table without any predicate.
Note: The format of the column data and specification in the export dump file differ from those of
conventional-path export.

Direct-Path Restrictions
The direct-path option of the Export utility has some restrictions that differentiate it from the
conventional-path export.
• The direct-path export feature cannot be invoked from an interactive EXP session.
• When the direct-path option is used, the client-side character set must match the character set of the
server. Use the NLS_LANG environment variable to set the client character set to the same value as the
server's (see the example after this list).
• The BUFFER parameter of the Export utility has no effect on direct-path export; it is used only by the
conventional-path option.
• You cannot use direct-path export for rows that contain LOB, BFILE, REF, or object type columns,
including VARRAY columns and nested tables. Only the data definition needed to create the table is
exported, not the data.
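For example, before a direct-path export you might set the client character set to match the server. The
value shown is illustrative (WE8MSWIN1252 matches the session logs later in these notes):
On UNIX (Bourne/Korn shell):
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252; export NLS_LANG
On Windows:
D:\> set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252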

Specifying Direct Path Export


Before you can use direct-path export, you must run the catexp.sql script, located in the rdbms/admin
directory under your Oracle home.
Using the DIRECT Parameter as a Command-Line Option
You can invoke direct-path export by using the DIRECT command-line parameter at the operating
system prompt:
$ exp userid=scott/tiger full=y direct=y
Parameter File
An example of a parameter file, exp_par.txt:
USERID=scott/tiger
TABLES=(emp,dept)
FILE=exp_one.dmp
DIRECT=Y
To execute it from the operating system prompt: $ exp parfile=exp_par.txt

Import Utility
The Import utility can be used to recover data from a valid Export utility file.
Uses of the Import Utility for Recovery
• Creating table definitions: because the table definitions are stored in the export file, importing
without rows (ROWS=N) creates just the table definitions.
• Extracting data from a valid export file by using the Table, User, Tablespace, or Full import modes.
• Importing data from a complete, incremental, or cumulative export file.
• Recovering from user errors where a table is accidentally dropped or truncated, by using one of the
previously mentioned methods (see the sketches after this list).
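Two minimal sketches, assuming a valid dump file expdat.dmp that contains the EMP table (the file name
is illustrative). To re-create only the table definition:
$ imp scott/tiger file=expdat.dmp tables=(emp) rows=n
To recover a dropped or truncated table together with its data (for a truncated table, IGNORE=y lets
the rows load into the existing table):
$ imp scott/tiger file=expdat.dmp tables=(emp) ignore=y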

Import Modes
Mode            Description
Table           Imports specified tables into a schema
User            Imports all objects that belong to a schema
Tablespace      Imports all definitions of the objects contained in the tablespace
Full Database   Imports all objects from the export file

Table Mode
Table mode imports the specified tables into the user’s schema, rather than all tables. A privileged user
can import specified tables owned by other users.
User Mode
User mode imports all objects for a user’s schema. Privileged users can import all objects in the schemas
of a specified set of users.
Tablespace Mode
Tablespace mode allows a privileged user to move a set of tablespaces from one Oracle database to
another.
Full Database Mode
Full database mode imports all database objects, except those in the SYS schema. Only privileged users
can import in this mode.

Command Line Import


Syntax
imp keyword = value or keyword = (value, value2, … value n)
Example
imp scott/tiger TABLES=(emp,dept) rows=y file=expincr1.dmp
imp system/manager FROMUSER=scott file=expincr1.dmp
imp sys/sys as sysdba TRANSPORT_TABLESPACE=y TABLESPACES=ts_emp

Which of the following roles must be granted to a user to perform a full database
import? IMP_FULL_DATABASE
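For reference, the role is granted with a simple SQL statement from a DBA session (the grantee name is
illustrative):
SQL> GRANT IMP_FULL_DATABASE TO scott;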
Import Parameters
Parameter               Description                                               Default

BUFFER                  Size of the data buffer in bytes (integer)                OS-specific
DATAFILES               List of the data files to be transported into the         None
                        database
DESTROY                 Reuse the existing data files that make up the            N
                        database: (Y)es/(N)o
FROMUSER                List of schemas containing objects to import              None
FULL                    Import the entire export file: (Y)es/(N)o                 N
HELP                    Display import parameters in interactive mode: (Y)        N
IGNORE                  Ignore create errors due to an object’s existence:        N
                        (Y)es/(N)o
INCTYPE                 Type of incremental import: SYSTEM or RESTORE             None
INDEXES                 Import indexes: (Y)es/(N)o                                Y
INDEXFILE               File to receive index-creation commands                   None
LOG                     File for informational and error messages                 None
PARFILE                 Parameter specification file                              None
POINT_IN_TIME_RECOVER   Recover one or more tablespaces in an Oracle              N
                        database to a prior point in time without affecting
                        the rest of the database (release 8.0 only)
ROWS                    Include table rows from the export file: (Y)es/(N)o       Y
TABLES                  List of tables to import                                  None
TABLESPACES             List of tablespaces to be transported into the            None
                        database
TOUSER                  List of usernames into whose schemas objects will         None
                        be imported
TRANSPORT_TABLESPACE    Import transportable tablespace metadata from an          N
                        export file: (Y)es/(N)o
TTS_OWNERS              Users who own the data in the transportable               None
                        tablespace set
USERID                  Username/password of the user performing the import       None
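A common use of FROMUSER and TOUSER is to move objects between schemas. A minimal sketch (the target
schema BLAKE and the file names are illustrative), using a parameter file imp_par.txt:
USERID=system/manager
FILE=expincr1.dmp
FROMUSER=scott
TOUSER=blake
LOG=imp_scott.log
Then run: $ imp parfile=imp_par.txt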

Which of the following parameters would you use to record the errors that might be generated during the
import operation? A. LOG

Invoking Import as SYSDBA


You need to invoke Import as SYSDBA only in the following situations:
– At the request of Oracle technical support
– When importing a transportable tablespace set
$ imp sys/sys as sysdba
Note: Because the connect string contains spaces, most operating systems require it to be quoted
(with the quotation marks escaped where the shell demands it), for example: imp \"sys/sys AS SYSDBA\"

Import Process Sequence


1. New tables are created.
2. Data is imported.
3. Indexes are built.
4. Triggers are imported.
5. Integrity constraints are enabled on new tables.
6. Any bitmap, functional, and/or domain indexes are built.
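Because indexes are built in step 3, large imports are sometimes run with index creation deferred. A
hedged sketch (the file names are illustrative): first capture the index DDL with INDEXFILE (this pass
writes the creation commands to the file and imports nothing), then import the data without indexes,
and finally run the generated script in SQL*Plus:
$ imp scott/tiger file=expdat.dmp tables=(emp) indexfile=emp_idx.sql
$ imp scott/tiger file=expdat.dmp tables=(emp) indexes=n ignore=y
SQL> @emp_idx.sql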
NLS Considerations
• The export file identifies the character encoding scheme used for the character data in the file.
• The import utility translates data to the character set of its host system.
• A multibyte character set export file must be imported into a system that has the same characteristics.

D:\oracle\ora92\bin>EXP

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production


With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Enter array fetch buffer size: 4096 > 4096

Export file: EXPDAT.DMP > d:\scott.dmp

(2)U(sers), or (3)T(ables): (2)U > U

Export grants (yes/no): yes > YES


Export table data (yes/no): yes > YES
Compress extents (yes/no): yes > YES
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user SCOTT
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user SCOTT
About to export SCOTT's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export SCOTT's tables via Conventional Path ...
. . exporting table BONUS 0 rows exported
. . exporting table DEPT 4 rows exported
. . exporting table EMP 14 rows exported
. . exporting table SALGRADE 5 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
****************************

D:\oracle\ora92\bin>EXP

Export: Release 9.2.0.1.0 - Production on Wed Jun 21 11:25:45 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Username: SCOTT/TIGER@NICORA

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production


With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Enter array fetch buffer size: 4096 > 4096

Export file: EXPDAT.DMP > D:\EMP.DMP

(2)U(sers), or (3)T(ables): (2)U > T

Export table data (yes/no): yes > YES

Compress extents (yes/no): yes > YES

Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...

Table(T) or Partition(T:P) to be exported: (RETURN to quit) > EMP

. . exporting table EMP 14 rows exported


Table(T) or Partition(T:P) to be exported: (RETURN to quit) >

Export terminated successfully with warnings.

D:\oracle\ora92\bin>
******************

D:\oracle\ora92\bin>IMP

Import: Release 9.2.0.1.0 - Production on Wed Jun 21 11:20:56 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Username: SCOTT/TIGER@NICORA

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production


With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

Import file: EXPDAT.DMP > D:\SCOTT.DMP

Enter insert buffer size (minimum is 8192) 30720> 30720

Export file created by EXPORT:V09.02.00 via conventional path


import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
List contents of import file only (yes/no): no > NO

Ignore create error due to object existence (yes/no): no > NO

Import grants (yes/no): yes > YES

Import table data (yes/no): yes > YES

Import entire export file (yes/no): no > YES

. importing SCOTT's objects into SCOTT


. . importing table "BONUS" 0 rows imported
. . importing table "DEPT" 4 rows imported
. . importing table "EMP" 14 rows imported
. . importing table "SALGRADE" 5 rows imported
About to enable constraints...
Import terminated successfully without warnings.

******************

D:\oracle\ora92\bin>IMP

Import: Release 9.2.0.1.0 - Production on Wed Jun 21 11:28:30 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Username: SCOTT/TIGER@NICORA

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production


With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

Import file: EXPDAT.DMP > D:\EMP.DMP

Enter insert buffer size (minimum is 8192) 30720> 30720

Export file created by EXPORT:V09.02.00 via conventional path


import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
List contents of import file only (yes/no): no > NO

Ignore create error due to object existence (yes/no): no > NO

Import grants (yes/no): yes > YES

Import table data (yes/no): yes > YES

Import entire export file (yes/no): no > YES

. importing SCOTT's objects into SCOTT


. . importing table "EMP" 14 rows imported
About to enable constraints...
Import terminated successfully without warnings.

*****************************************************************************
Which of the following options enables a user to get authenticated though a single
password instead of using multiple passwords? A. Wallet Manager
