
Control Files

Every Oracle Database has a control file, which is a small binary file that records the physical
structure of the database. The control file includes:

 The database name

 Names and locations of associated datafiles and redo log files

 The timestamp of the database creation

 The current log sequence number

 Checkpoint information

The control file must be available for writing by the Oracle Database server whenever the
database is open. Without the control file, the database cannot be mounted and recovery is
difficult.

The control file of an Oracle Database is created at the same time as the database. By default, at
least one copy of the control file is created during database creation. On some operating systems
the default is to create multiple copies. You should create two or more copies of the control file
during database creation. You can also create control files later, if you lose control files or want
to change particular settings in the control files.


The control files of a database store the status of the physical structure of the database. The
control file is absolutely crucial to database operation. It contains (but is not limited to) the
following types of information:

 Database information (RESETLOGS SCN and its time stamp)
 Archive log history
 Tablespace and datafile records (filenames, datafile checkpoints, read/write status)
 Redo threads (current online redo log)
 Database creation date
 Database name
 Current archive log mode
 Log records (sequence numbers, SCN range in each log)
 Backup and recovery records used by RMAN
 Database block corruption information
 Database ID, which is unique to each database
The location of the control files is specified through the CONTROL_FILES initialization parameter.
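As a quick sketch, the control file locations of a running database can be listed from the instance itself (output will of course differ per installation):

```sql
-- List the control files of the current database
SELECT name FROM v$controlfile;

-- Alternatively, show the initialization parameter itself (SQL*Plus)
SHOW PARAMETER control_files
```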

What Is the Archived Redo Log?


Oracle Database lets you save filled groups of redo log files to one or more offline destinations,
known collectively as the archived redo log (also called the archive log). This process is possible
only if the database is running in ARCHIVELOG mode.

An archived redo log file is a copy of one of the filled members of a redo log group. It includes
the redo entries and the unique log sequence number of the identical member of the redo log
group.

For example, if you are multiplexing your redo log, and if group 1 contains identical member
files a_log1 and b_log1, then the archiver process (ARCn) will archive one of these member
files. Should a_log1 become corrupted, then ARCn can still archive the identical b_log1. The
archived redo log contains a copy of every group created since you enabled archiving.

When the database is running in ARCHIVELOG mode, the log writer process (LGWR) cannot
reuse and hence overwrite a redo log group until it has been archived. The background process
ARCn automates archiving operations when automatic archiving is enabled. The database starts
multiple archiver processes as needed to ensure that the archiving of filled redo logs does not fall
behind.

You can use archived redo logs to:


 Recover a database
 Update a standby database
 Get information about the history of a database using the LogMiner utility
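A minimal sketch of checking the archiving mode and enabling it (the database must be cleanly restarted into the MOUNT state, and SYSDBA privileges are assumed):

```sql
-- Check the current archiving mode
SELECT log_mode FROM v$database;

-- Enable ARCHIVELOG mode (requires a clean restart into MOUNT)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```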

Java Pool
The Java pool is an area of memory that stores all session-specific Java code and data within the
Java Virtual Machine (JVM). This memory includes Java objects that are migrated to the Java
session space at end-of-call.

For dedicated server connections, the Java pool includes the shared part of each Java class,
including methods and read-only memory such as code vectors, but not the per-session Java state
of each session. For shared server, the pool includes the shared part of each class and some UGA
used for the state of each session. Each UGA grows and shrinks as necessary, but the total UGA
size must fit in the Java pool space.
The Java Pool Advisor statistics provide information about library cache memory used for Java
and predict how changes in the size of the Java pool can affect the parse rate. The Java Pool
Advisor is turned on internally when STATISTICS_LEVEL is set to TYPICAL or higher. These
statistics reset when the advisor is turned off.

What is the Data File?


Datafiles are physical files of the operating system that store the data of all logical structures in
the database. They must be explicitly created for each tablespace. Oracle assigns each datafile
two associated file numbers, an absolute file number and a relative file number, that are used to
uniquely identify it. These numbers are described in the following table:

Type of File Number Description


Absolute Uniquely identifies a datafile in the database. In earlier releases of Oracle,
the absolute file number may have been referred to simply as the "file
number."
Relative Uniquely identifies a datafile within a tablespace. For small and medium
size databases, relative file numbers usually have the same value as the
absolute file number. However, when the number of datafiles in a
database exceeds a threshold (typically 1023), the relative file number
differs from the absolute file number.

File numbers are displayed in many data dictionary views. You can optionally use file numbers
instead of file names to identify datafiles or tempfiles in SQL statements. When using a file
number, specify the file number that is displayed in the FILE# column of the V$DATAFILE or
V$TEMPFILE view. This file number is also displayed in the FILE_ID column of the
DBA_DATA_FILES or DBA_TEMP_FILES view.
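For example, the absolute and relative file numbers can be compared side by side (a sketch; the exact column output varies by release):

```sql
-- FILE# is the absolute file number, RFILE# the relative one
SELECT file#, rfile#, name
FROM   v$datafile
ORDER BY file#;
```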


init.ora
The init.ora file stores the initialization parameters of Oracle. The values currently in effect
can be viewed through v$parameter. The init.ora file is read when an instance is started (for
example, using STARTUP in SQL*Plus).

Default location and name

The default location of init.ora is %ORACLE_HOME%\database on Windows, where the location
can be changed through the ORA_%ORACLE_SID%_PFILE registry value.
The default name for the file is init$ORACLE_SID.ora (Unix) or init%ORACLE_SID%.ora
(Windows).

However, it is possible to start the database with an init.ora file other than the default one. In this
case, there is no way to determine which init.ora was used once the database is running (at least
up to Oracle9i).

Oracle's buffer cache


The buffer cache is part of the SGA. It holds copies of data blocks so that Oracle can access
them more quickly than by reading them from disk.

Purpose
The purpose of the buffer cache is to minimize physical IO. When a block is read by Oracle, it
places this block into the buffer cache, because there is a chance that this block is needed again.
Reading a block from the buffer cache is less costly than reading it from the disk.
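One rough way to see how well the cache is minimizing physical IO is the classic hit-ratio calculation from V$SYSSTAT (a sketch; treat the ratio as an indicator, not a tuning goal in itself):

```sql
-- Fraction of logical reads satisfied without a physical read
SELECT 1 - (phy.value / (cur.value + con.value)) AS "Buffer Cache Hit Ratio"
FROM   v$sysstat phy, v$sysstat cur, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    cur.name = 'db block gets'
AND    con.name = 'consistent gets';
```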

Segments
The database buffer cache (as well as the shared SQL cache) is logically segmented into multiple
sets. This organization reduces contention on multiprocessor systems.

Buffer
The buffer cache consists of buffers. A buffer is a database block that happens to be in
memory.

MRU and LRU blocks


Blocks within the buffer cache are ordered from MRU (most recently used) blocks to LRU (least
recently used) blocks. Whenever a block is accessed, it moves to the MRU end of the list,
thereby shifting the other blocks down towards the LRU end. When a block is read from disk and
no buffer is available in the db buffer cache, one block in the buffer cache has to be evicted:
the block at the LRU end of the list.

Streams Pool
The Streams pool stores buffered queue messages and provides memory for Oracle Streams
capture processes and apply processes. The Streams pool is used exclusively by Oracle Streams.
In a single database, you can specify that Streams memory be allocated from a new pool in the
SGA called the Streams pool. To configure the Streams pool, specify the size of the pool in bytes
using the STREAMS_POOL_SIZE initialization parameter.

Unless you specifically configure it, the size of the Streams pool starts at zero. The pool size
grows dynamically as required by Oracle Streams.
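A minimal sketch of configuring the pool explicitly and verifying the allocation (the 64 MB size is an arbitrary example):

```sql
-- Reserve 64 MB for the Streams pool
ALTER SYSTEM SET streams_pool_size = 64M SCOPE = BOTH;

-- Verify the current allocation
SELECT pool, SUM(bytes)/1024/1024 "MB"
FROM   v$sgastat
WHERE  pool = 'streams pool'
GROUP BY pool;
```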

Oracle Enterprise Manager 11g


Today, IT organizations must proactively manage many technologies, ranging from Service-
Oriented Architecture (SOA) to Cloud Computing, all while maintaining a consistent standard of
service for business users and customers. In this dynamic IT environment, IT organizations need
a complete view of the health of business transactions as they traverse business services across
different tiers of technology and need to be responsive to business needs. IT needs to respond to
business needs in a language that business expects.

Oracle Enterprise Manager addresses these challenges with a bold new approach that
incorporates key integration points across IT management processes, business processes, and the
Oracle community. As a supplier of business applications, software and hardware infrastructure,
and a comprehensive set of management tools, Oracle is in a unique position to offer a new
Business-driven approach to IT management. This new and unique approach enables IT
organizations to drive IT efficiency and business agility.

Oracle Enterprise Manager 11g offers:

 Business-Driven Application Management


By providing deep visibility into business transactions and user experiences, Oracle
Enterprise Manager 11g enables IT departments to manage applications from a business
perspective. As a result, IT can better support business priorities and significantly
enhance end-user experience and customer interaction.

 Integrated Application-to-Disk Management


With the Launch of Oracle Enterprise Manager Ops Center 11g, and its unique
converged hardware management approach, Oracle further advances its capabilities to
help you manage your entire application stack—from application to disk—so you can
eliminate disparate tools and maximize return on your software and hardware
investments.

 Integrated Systems Management and Support


Oracle Enterprise Manager 11g delivers a new approach to systems management,
offering proactive notifications and fixes plus peer-to-peer knowledge sharing that can
significantly increase customer satisfaction.

Shared pool [Oracle]
The shared pool is the part of the SGA where (among others) the following things are stored:
 Optimized query plans
 Security checks
 Parsed SQL statements
 Packages
 Object information

Shared pool latch


The shared pool latch is used when memory is allocated or freed in the shared pool.

Library cache latch


Similar to the shared pool latch, this latch protects operations within the library cache.

UGA

Flushing the shared pool


The shared pool can be flushed with ALTER SYSTEM FLUSH SHARED_POOL.

Allocation of memory
Memory in the shared pool is managed and freed in an LRU fashion; that is, memory that hasn't
been used for a long time is aged out first. This is in contrast to the large pool, where memory is
managed as a heap (using allocate and free).

Subpools
In earlier releases, the shared pool was always allocated as one large heap. From Oracle 9i, there's
the possibility to split the shared pool into multiple separate subpools (or areas). This behaviour
is controlled by the hidden _kghdsidx_count parameter.

Examining the shared pool


select name, bytes/1024/1024 "MB"
from v$sgastat
where pool = 'shared pool'
order by bytes desc;

SGA (System Global Area)


The SGA is a chunk of memory that is allocated by an Oracle Instance (during the nomount
stage) and is shared among Oracle processes, hence the name. It contains all sorts of information
about the instance and the database that is needed to operate.

Components of the SGA

The SGA consists of the following four (five if MTS) parts:

 Fixed portion
 Variable portion
 Redo log buffer
 Database buffer cache

Fixed portion
The size of the fixed portion is constant for a given release and platform of Oracle; that is, it
cannot be changed through any means such as altering the initialization parameters.

Variable portion
The variable portion is called variable because its size (measured in bytes) can be changed.

The variable portion consists of:


 large pool (optional)
Provides working space for RMAN (RMAN will also work without large pool).
 Shared pool
The shared pool is used for objects that are shared among all users. For example:
table definitions, PL/SQL definitions, cursors and so on.

Shared pool

The shared pool can be further subdivided into:


 Control structures
 Character sets
 Dictionary cache
The dictionary cache stores parts of the data dictionary, because Oracle has to
query the data dictionary very often; it is fundamental to the functioning of
Oracle.

 Library cache
The library cache is further divided into
 + Shared SQL Area,
 + PL/SQL Procedures and
 + Control Structures (Latches and Locks).

The size of the Shared Pool is essentially governed by the initialization parameter
shared_pool_size (although shared_pool_size is usually smaller than the actual size of the shared
pool) and db_block_buffers (which plays a role for this size because the database buffer
cache must be administered).

v$db_object_cache displays objects (tables, indexes, clusters, synonym definitions, PL/SQL
procedures/packages and triggers) that are cached in the library cache.

Java pool
The size of the variable portion is roughly equal to the result of the following statement:
select sum(bytes)
from v$sgastat
where pool in ('shared pool', 'java pool', 'large pool');

Redo log buffer


The size of the redo log buffer is roughly equal to the log_buffer parameter.

Database buffer cache


Its size is equal to db_block_size * db_block_buffers. If the init parameter db_cache_size is set,
the buffer cache's size will be set according to this value.

UGA
If the instance is running in MTS mode, there's also a UGA: the user global area.

Parameters affecting the size of SGA


 db_block_buffers
 db_block_size
 db_cache_size
 db_keep_cache_size
 db_recycle_cache_size
 java_pool_size
 large_pool_size
 log_buffer
 shared_pool_size
 streams_pool_size
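The combined effect of these parameters can be inspected on the running instance (a sketch using standard views):

```sql
-- Summary of the SGA components (SQL*Plus)
SHOW SGA

-- Or, equivalently, from the dynamic performance views
SELECT name, value/1024/1024 "MB" FROM v$sga;
```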

Oracle Recovery Manager ( RMAN)


RMAN can be used to backup and restore database files, archive logs, and control files. It can
also be used to perform complete or incomplete database recovery. Note that RMAN cannot be
used to backup initialization files or password files.

RMAN starts Oracle server processes on the database to be backed up or restored. The backup,
restore, and recovery is driven through these processes hence the term 'server-managed
recovery'.

Recovery manager is a platform independent utility for coordinating your backup and restoration
procedures across multiple servers. In my opinion its value is limited if you only have one or
two instances, but it comes into its own where large numbers of instances on multiple platforms
are used. The reporting features alone mean that you should never find yourself in a position
where your data is in danger due to failed backups.

The functionality of RMAN is too diverse to be covered in this article so I shall focus on the
basic backup and recovery functionality.
 Create Recovery Catalog
 Register Database
 Full Backup
 Restore & Recover The Whole Database
 Restore & Recover A Subset Of The Database
 Incomplete Recovery
 Disaster Recovery
 Lists And Reports
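As a minimal sketch of the basic backup functionality, run from the RMAN command line (a local SYSDBA connection and ARCHIVELOG mode are assumed):

```
rman target /

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;
```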

Spfile and Init.ora Parameter File Startup of an Oracle9i Instance:


Before Oracle9i, Oracle instances were always started using a text file called an init.ora. This file
is by default located in the "$ORACLE_HOME/dbs" directory.

Configuration Files

Client Configuration Files


Clients typically have three configuration files that are created by Oracle Network Manager.
These files provide information about the following:
 network destinations
 network navigation
 tracing and logging, and security (encryption and checksumming)

TNSNAMES.ORA
This file contains a list of the service names and addresses of network destinations. A client (or a
server that is part of a distributed database) needs this file to tell it where it can make
connections.
Note: This file is not necessary if Oracle Names is used.
Note: This file is generated and modified by Oracle Network Manager. Do not edit it manually.
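A typical entry looks like the following (the net service name ORCL, the host name, and the service name are hypothetical placeholders):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```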

TNSNAV.ORA
This file is used only in a network that includes one or more Oracle MultiProtocol Interchanges.
It lists the communities of which the client (or server) is a member and includes the names and
addresses of the Interchanges available in local communities as a first hop toward destinations in
other communities.
Note: This file is generated by the Oracle Network Manager. Do not edit it manually.

SQLNET.ORA
This file contains optional diagnostic parameters, client information about Oracle Names, and
may contain other optional parameters such as native naming or security (encryption and
checksumming) parameters.

Note: SQLNET.ORA may contain node-specific parameters. Unless you are using Oracle
Names and the Dynamic Discovery Option, you should create it with Network Manager. You
may edit the SQLNET.ORA file for an individual client by using the SQLNET.ORA Editor,
which is described in the Oracle Network Products Troubleshooting Guide.

In addition, clients and servers on some protocols may require PROTOCOL.ORA, which you
must create manually.

PROTOCOL.ORA
This file contains protocol- and platform-specific options for protocols that require them, such as
Async and APPC/LU6.2.
Server Configuration Files
Servers in a network that includes distributed databases also require the files that are needed by
clients, because when servers connect to other database servers through database links they are,
in effect, acting like clients.

In addition to the client configuration files described above, each server machine needs a
LISTENER.ORA file to identify and control the behavior of the listeners that listen for the
databases on the machine.

LISTENER.ORA
This file includes service names and addresses of all listeners on a machine, the system
identifiers (SIDs) of the databases they listen for, and various control parameters used by the
Listener Control Utility.

Note: Unless you are using Oracle Names and the Dynamic Discovery Option, this file should
be generated and modified by the Oracle Network Manager. You should not edit it manually.

Note: LISTENER.ORA and TNSNAMES.ORA contain some similar information. The address
of the server in TNSNAMES.ORA is the same as the address of the listener for a server in
LISTENER.ORA. Similarly, the address in the TNSNAMES.ORA file includes the SID which is
required (as SID_NAME) in the LISTENER.ORA file. Figure A - 1 shows the similarities
between these files for a single server.
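A minimal LISTENER.ORA sketch showing the listener address and one SID entry (host name, ORACLE_HOME path, and SID are hypothetical):

```
LISTENER =
  (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = orcl)
      (ORACLE_HOME = /u01/app/oracle/product/10.1.0/db_1)
    )
  )
```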
Interchange Configuration Files
Each Interchange in a network requires three configuration files. The files provide information
about the following:
 network layout
 network navigation
 control parameters

TNSNET.ORA
This file contains a list of the communities in the network and the relative cost of traversing
them, and the names and addresses of all the Interchanges in the network. This file provides an
overview of the layout of the network for all the Interchanges.

TNSNAV.ORA
This file describes the communities of each individual Interchange on the network.

INTCHG.ORA
This file provides both required and optional parameters that control the behavior of each
Interchange.
Note: These files should be generated and modified through Oracle Network Manager. They
should not be edited by hand.

Oracle Names Configuration Files


Unless you are using the Dynamic Discovery Option, each Names Server in the network requires
an individual configuration file called NAMES.ORA, as well as parameters in SQLNET.ORA.

NAMES.ORA
Unless you are using the Dynamic Discovery Option, every node running a Names Server must
have a NAMES.ORA file. NAMES.ORA contains control parameters for the Names Server and
points the Names Server to the database where the network definition is stored.
Note: This file should be generated and modified by Oracle Network Manager. Do not edit it
manually.

SQLNET.ORA
This file contains client information about Oracle Names such as the default domain for service
names stored in Oracle Names, and lists preferred Oracle Names Servers. It may also contain
optional logging and tracing (diagnostic), native naming, and security (encryption,
checksumming, and authentication) parameters.

Oracle SNMP Support Configuration File

Each node managed by Oracle SNMP Support requires a configuration file named SNMP.ORA.
The parameters in SNMP.ORA:
 identify the services on the node that will be monitored by SNMP
 assign index numbers to the services that will be monitored
 contain contact information (name and phone number) for each service to be monitored
 specify a non-default time interval (in seconds) that the subagent polls an SNMP-
managed database
Note: You must generate the SNMP.ORA file with Network Manager.

The Oracle Enterprise Manager Console uses a daemon process for network communication with
the Oracle Intelligent Agents on remote systems. The network communication is done using
Oracle's SQL*Net product.

Job Scheduling, Event Management, Software Manager, Data Manager, Backup Manager, and
Tablespace Manager rely on communication between the Console, agent, and daemon, and
require SQL*Net.

SQL*Net requires a number of configuration files in order to work.


1. snmp_ro.ora and snmp_rw.ora are created by the Intelligent Agent.
2. listener.ora is required if a database is on the node. An agent can be installed on a
machine without a database.
On both the Console and host node, the sqlnet.ora file, which contains items such as domain
name and trace level, is needed.

On the host node where the Oracle database and agent reside, the following additional files are
needed.

listener.ora
Contains the listening addresses of the SQL*Net Listener on the machine plus the name and
ORACLE_HOME of any databases the listener knows about.

snmp.ora, or snmp_ro.ora and snmp_rw.ora


Contains the listening address of the agent, the names of SQL*Net listener and Oracle database
services it knows about, plus tracing parameters. snmp_ro.ora and snmp_rw.ora are created by
the 7.3.4 Intelligent Agent. snmp.ora is used by pre-7.3.3 machines.

UNDO_RETENTION parameter
In Oracle9i, Oracle also introduced the SPFILE, a binary parameter file stored on the
database server. Changes applied to the instance parameters can thereby be made persistent across
all startup/shutdown procedures.

Overview
Starting in Oracle9i, rollback segments were renamed undo segments. Traditionally, transaction
undo information was stored in rollback segments until a commit or rollback statement was issued,
at which point it was made available for overwriting.

Best of all, automatic undo management allows the DBA to specify how long undo
information should be retained after commit, preventing "snapshot too old" errors on long
running queries.

This is done by setting the UNDO_RETENTION parameter. The default is 900 seconds (15
minutes), and you can set this parameter to guarantee that Oracle keeps undo logs for extended
periods of time.

Rather than having to define and manage rollback segments, you can simply define an Undo
tablespace and let Oracle take care of the rest. Turning on automatic undo management is easy.
All you need to do is create an undo tablespace and set UNDO_MANAGEMENT = AUTO.

However, it is worth tuning the following important parameters:


1. The size of the UNDO tablespace
2. The UNDO_RETENTION parameter
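A minimal sketch of switching to automatic undo management (the tablespace name, file path, and sizes are arbitrary examples):

```sql
-- Create an undo tablespace
CREATE UNDO TABLESPACE undotbs1
  DATAFILE '/u01/app/oracle/oradata/orcl/undotbs1.dbf' SIZE 500M;

-- Enable automatic undo management (static parameter; takes effect at the next restart)
ALTER SYSTEM SET undo_management = AUTO SCOPE = SPFILE;
ALTER SYSTEM SET undo_tablespace = undotbs1;
ALTER SYSTEM SET undo_retention = 900;
```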

Calculate UNDO_RETENTION for a given UNDO Tablespace

You can choose to allocate a specific size for the UNDO tablespace and then set the
UNDO_RETENTION parameter to an optimal value according to the UNDO size and the
database activity. If your disk space is limited and you do not want to allocate more space than
necessary to the UNDO tablespace, this is the way to proceed. The following query will help you
to optimize the UNDO_RETENTION parameter:

Optimal UNDO Retention = Actual UNDO Size / ( DB_Block_Size * UNDO Blocks per sec )

Because the following queries use the V$UNDOSTAT statistics, run them only after
the database has been running with UNDO for a significant and representative time!
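One widely used sketch of that calculation, based on the current undo tablespace size and the peak undo generation rate recorded in V$UNDOSTAT, is:

```sql
SELECT d.undo_size / (TO_NUMBER(p.value) * g.undo_block_per_sec)
         AS "Optimal UNDO_RETENTION [s]"
FROM   (SELECT SUM(a.bytes) undo_size              -- total size of the online UNDO tablespace
        FROM   v$datafile a, v$tablespace b, dba_tablespaces c
        WHERE  c.contents = 'UNDO'
        AND    c.status = 'ONLINE'
        AND    b.name = c.tablespace_name
        AND    a.ts# = b.ts#) d,
       v$parameter p,
       (SELECT MAX(undoblks / ((end_time - begin_time) * 86400))
               undo_block_per_sec                   -- peak undo blocks generated per second
        FROM   v$undostat) g
WHERE  p.name = 'db_block_size';
```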

UNDO_RETENTION
Property                   Description
Parameter type             Integer
Default value              900
Modifiable                 ALTER SYSTEM
Range of values            0 to 2^32 - 1
Real Application Clusters  Oracle recommends that multiple instances have the same value.

UNDO_RETENTION specifies (in seconds) the low threshold value of undo retention.

For AUTOEXTEND undo tablespaces, the system retains undo for at least the time specified in
this parameter, and automatically tunes the undo retention period to satisfy the undo
requirements of the queries.

For fixed-size undo tablespaces, the system automatically tunes for the maximum possible undo
retention period, based on undo tablespace size and usage history, and ignores UNDO_RETENTION
unless retention guarantee is enabled.

The setting of this parameter should account for any flashback requirements of the system.
Automatic tuning of undo retention is not supported for LOBs. The RETENTION value for LOB
columns is set to the value of the UNDO_RETENTION parameter.

The UNDO_RETENTION parameter can only be honored if the current undo tablespace has enough
space. If an active transaction requires undo space and the undo tablespace does not have
available space, then the system starts reusing unexpired undo space. This action can potentially
cause some queries to fail with a "snapshot too old" message.

The amount of time for which undo is retained for the Oracle Database for the current undo
tablespace can be obtained by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT
dynamic performance view.
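For example (a sketch; V$UNDOSTAT reports one row per statistics interval):

```sql
-- Tuned undo retention (in seconds) over the recorded intervals
SELECT TO_CHAR(begin_time, 'DD-MON HH24:MI') begin_time,
       tuned_undoretention
FROM   v$undostat
ORDER  BY begin_time;
```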

RESOURCE_LIMIT
Property         Description
Parameter type   Boolean
Default value    false
Modifiable       ALTER SYSTEM
Range of values  true | false


RESOURCE_LIMIT determines whether resource limits are enforced in database profiles.

Values:
 TRUE Enables the enforcement of resource limits
 FALSE Disables the enforcement of resource limits
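A brief sketch of turning enforcement on and tying a limit to a profile (the profile and user names are hypothetical):

```sql
-- Enforce profile resource limits instance-wide
ALTER SYSTEM SET resource_limit = TRUE;

-- Example profile limiting concurrent sessions per user
CREATE PROFILE app_profile LIMIT SESSIONS_PER_USER 3;
ALTER USER scott PROFILE app_profile;
```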

Optimal Flexible Architecture


For Oracle Database 10g, the OFA recommended Oracle home path has changed. The OFA
recommended path is now similar to the following:

/u01/app/oracle/product/10.1.0/type[_n]

Ex:
/u01/app/oracle/product/10.1.0/db_1
/u01/app/oracle/product/10.1.0/client_1
/u01/app/oracle/product/10.1.0/db_2
Overview of the Optimal Flexible Architecture Standard

The OFA standard is designed to:

 Organize large amounts of complicated software and data on disk, to avoid device
bottlenecks and poor performance
 Facilitate routine administrative tasks such as software and data backup, which are often
vulnerable to data corruption
 Facilitate switching between multiple Oracle databases
 Adequately manage and administer database growth
 Help eliminate fragmentation of free space in the data dictionary, isolate other
fragmentation, and minimize resource contention

UTL_FILE_DIR
Property         Description
Parameter type   String
Syntax           UTL_FILE_DIR = pathname
Default value    There is no default value.
Modifiable       No
Range of values  Any valid directory path
UTL_FILE_DIR lets you specify one or more directories that Oracle should use for PL/SQL file
I/O. If you are specifying multiple directories, you must repeat the UTL_FILE_DIR parameter for
each directory on separate lines of the initialization parameter file.

All users can read or write to all files specified by this parameter. Therefore all PL/SQL users
must be trusted with the information in the directories specified by this parameter.
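For example, an init.ora fragment allowing PL/SQL file I/O in two directories might look like this (the paths are hypothetical; note the parameter is repeated, one line per directory):

```
UTL_FILE_DIR = /u01/app/oracle/outbound
UTL_FILE_DIR = /u01/app/oracle/logs
```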

LDAP_DIRECTORY_ACCESS
Property        Description
Parameter type  String
Syntax          LDAP_DIRECTORY_ACCESS = { NONE | PASSWORD | SSL }
Default value   NONE
Modifiable      ALTER SYSTEM
Basic           No
LDAP_DIRECTORY_ACCESS specifies whether Oracle refers to Oracle Internet Directory for user
authentication information. If directory access is turned on, then this parameter also specifies
how users are authenticated.

Values:
 NONE
Oracle does not refer to Oracle Internet Directory for Enterprise User Security
information.
 PASSWORD
Oracle tries to connect to the enterprise directory service using the database
password stored in the database wallet. If that fails, then the Oracle Internet
Directory connection fails and the database will not be able to retrieve enterprise
roles and schema mappings upon enterprise user login.
 SSL
Oracle tries to connect to Oracle Internet Directory using SSL.

PLSQL_NATIVE_LIBRARY_DIR
Property         Description
Parameter type   String
Syntax           PLSQL_NATIVE_LIBRARY_DIR = directory
Default value    There is no default value.
Modifiable       ALTER SYSTEM
Range of values  Any valid directory path

PLSQL_NATIVE_LIBRARY_DIR is a parameter used by the PL/SQL compiler. It specifies the name
of a directory where the shared objects produced by the native compiler are stored.

Oracle Processes

pmon
The process monitor performs process recovery when a user process fails. PMON is
responsible for cleaning up the cache and freeing resources that the process was using. PMON
also checks on the dispatcher processes (described later in this section) and server processes and
restarts them if they have failed.

mman
Used for internal database tasks.
dbw0
The database writer writes modified blocks from the database buffer cache to the datafiles.
Oracle Database allows a maximum of 20 database writer processes (DBW0-DBW9 and DBWa-
DBWj). The initialization parameter DB_WRITER_PROCESSES specifies the number of
DBWn processes. The database selects an appropriate default setting for this initialization
parameter (or might adjust a user specified setting) based upon the number of CPUs and the
number of processor groups.

lgwr
The log writer process writes redo log entries to disk. Redo log entries are generated in the
redo log buffer of the system global area (SGA), and LGWR writes the redo log entries
sequentially into a redo log file. If the database has a multiplexed redo log, LGWR writes the
redo log entries to a group of redo log files.

ckpt
At specific times, all modified database buffers in the system global area are written to the
datafiles by DBWn. This event is called a checkpoint. The checkpoint process is responsible for
signalling DBWn at checkpoints and updating all the datafiles and control files of the database to
indicate the most recent checkpoint.

smon
The system monitor performs recovery when a failed instance starts up again. In a Real
Application Clusters database, the SMON process of one instance can perform instance recovery
for other instances that have failed. SMON also cleans up temporary segments that are no longer
in use and recovers dead transactions skipped during system failure and instance recovery
because of file-read or offline errors. These transactions are eventually recovered by SMON
when the tablespace or file is brought back online.

reco
The recoverer process is used to resolve distributed transactions that are pending due to a
network or system failure in a distributed database. At timed intervals, the local RECO attempts
to connect to remote databases and automatically complete the commit or rollback of the local
portion of any pending distributed transactions.

cjq0
Job queue processes are used for batch processing. The CJQ0 process dynamically spawns job
queue slave processes (J000...J999) to run the jobs.

d000
Dispatchers are optional background processes, present only when the shared server
configuration is used.

qmnc
A queue monitor process which monitors the message queues. Used by Oracle Streams
Advanced Queuing.

mmon
Performs various manageability-related background tasks.

mmnl
Performs frequent and light-weight manageability-related tasks, such as session history
capture and metrics computation.
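The background processes actually running in an instance can be listed from V$BGPROCESS (a sketch; the PADDR <> '00' condition filters to started processes):

```sql
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```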

ORA-01653 : unable to extend table string.string by string in tablespace string

Cause : Failed to allocate an extent of the required number of blocks for a table segment
in the tablespace indicated.

Action : Use the ALTER TABLESPACE ADD DATAFILE statement to add one or more
files to the tablespace indicated.
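A sketch of the corrective statement (the tablespace name, file path, and sizes are hypothetical):

```sql
-- Add a datafile to the full tablespace, with autoextend as a safety margin
ALTER TABLESPACE users
  ADD DATAFILE '/u01/app/oracle/oradata/orcl/users02.dbf' SIZE 500M
  AUTOEXTEND ON NEXT 100M MAXSIZE 2G;
```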

Using MTBF and MTTR to Drive Improvement


MTBF and MTTR are common metrics manufacturers use to align maintenance activity to
production activity, thereby increasing uptime and manufacturing capacity. Using Informance,
manufacturers can quickly and accurately calculate MTBF and MTTR, which allows them to
better plan out their preventive maintenance activities.

MTBF = Mean Time Between Failures


To calculate MTBF, use this formula:
MTBF = Total uptime for a period of time / number of stops during the same period

In other words, MTBF is the average time between downtime periods. For example, if during a
week of production you have 30 hours of uptime and 10 stops then the MTBF would be 30 / 10 =
3 hours.

MTTR = Mean Time to Repair


To calculate MTTR, use this formula:
MTTR = Total downtime for the same period as used for MTBF / number of stops

Put simply, MTTR is the average time of a failure. For example, if during a week of production
you have 10 hours of downtime and 10 stops then the MTTR would be 10 / 10 = 1 hour.

Please note that it is possible to use the median instead of the mean when calculating
these numbers. The median is the middle number in an ordered set of numbers. Using the median
is beneficial because it eliminates any possible skew due to extreme numbers on either end of the
distribution.

RTO = Recovery Time Objective


The maximum amount of time that the database can be unavailable and still satisfy SLAs.
