++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Installing Oracle 11gR2 on Linux 6
step.1
Hosts File
The "/etc/hosts" file must contain a fully qualified name for the server.
<IP-address> <fully-qualified-machine-name> <machine-name>
For example:
127.0.0.1 localhost.localdomain localhost
192.168.0.181 ol6-112.localdomain ol6-112
step.2
Add or amend the following lines in the "/etc/sysctl.conf" file.
fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
The current values can be tested using the following command.
/sbin/sysctl -a | grep -e suid_dumpable -e aio-max-nr -e file-max -e shm -e sem -e ip_local_port_range -e rmem -e wmem
Once the "/etc/sysctl.conf" file is amended, the changes can be applied with the following command.
/sbin/sysctl -p
step.3
step.4
This will install all the necessary 32-bit packages for 11.2.0.1. From 11.2.0.2
onwards many of these are unnecessary, but having them present does not cause a
problem.
step.5
Create the new groups and users. We are not going to use the "asm" groups, since this installation will not use ASM.
Additional Setup
Set the password for the "oracle" user.
passwd oracle
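The group and user creation itself is not shown above. A minimal sketch of the typical commands follows; the GIDs/UIDs are example values, so adjust them to your environment.

```shell
# Create the inventory and OS DBA groups, then the oracle software owner.
# (GIDs/UIDs below are illustrative, not mandated.)
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
useradd -u 501 -g oinstall -G dba,oper oracle
```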
Amend the "/etc/security/limits.d/90-nproc.conf" file as described below. See MOS
Note [ID 1487773.1]
# Change this
* soft nproc 1024
# To this
* - nproc 16384
step.6
Set SELinux to permissive mode by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
SELINUX=permissive
Once the change is complete, restart the server.
If you have the Linux firewall enabled, you will need to disable or configure it.
step.7
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01
step.8
step.9
Log in as the oracle user and add the following lines at the end of the
".bash_profile" file.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
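The listing above is truncated. A typical full set of Oracle settings for the ".bash_profile" looks like the sketch below; the hostname, SID and paths reuse the example values from earlier in these notes and must be adjusted to your environment.

```shell
# Oracle Settings (example values -- adjust to your environment)
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=ol6-112.localdomain; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=DB11G; export ORACLE_SID

PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
```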
step.10
Log in as the oracle user. If you are using X emulation then set the DISPLAY
environment variable.
DISPLAY=<machine-name>:0.0; export DISPLAY
Start the Oracle Universal Installer (OUI) by issuing the following command in the
database directory.
./runInstaller
Proceed with the installation of your choice. The prerequisites checks will fail
for the following version-dependent reasons:
11.2.0.1: The installer shows multiple "missing package" failures because it does
not recognize several of the newer version packages that were installed. These
"missing package" failures can be ignored as the packages are present. The failure
for the "pdksh" package can be ignored because we installed the "ksh" package in
its place.
11.2.0.2: The installer should only show a single "missing package" failure for the
"pdksh" package. It can be ignored because we installed the "ksh" package in its
place.
11.2.0.3: The installer shows no failures and continues normally.
Edit the "/etc/oratab" file setting the restart flag for each instance to 'Y'.
DB11G:/u01/app/oracle/product/11.2.0/db_1:Y
g:\prints\core\12c admin.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
12c Database Administration
++++++++++++++++++++++++++++:
The use of SET CONTAINER avoids the need to create a new connection from scratch.
If there is an existing connection to a PDB or CDB$ROOT, the same connection can be
used to connect to the desired PDB or CDB$ROOT.
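A minimal sketch of the two approaches, assuming a CDB named CDB1 containing a pluggable database PDB1 (the names used later in these notes):

```sql
-- Reuse the existing connection to switch containers (no new OS process).
CONN / AS SYSDBA
SHOW CON_NAME                        -- CDB$ROOT
ALTER SESSION SET CONTAINER = pdb1;  -- same connection, new container
SHOW CON_NAME                        -- PDB1

-- A fresh CONNECT, by contrast, creates a new server process.
CONN sys/oracle@pdb1 AS SYSDBA
```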
- Connect to CDB
CON_NAME
------------------------------
CDB$ROOT
- Check the PID for the process created on the operating system
sho con_name
CON_NAME
------------------------------
PDB1
- Check that the operating system PID remains the same, as the earlier connection is
reused and a new connection has not been created
CON_NAME
------------------------------
CDB$ROOT
- Check that a new operating system PID has been created, as a new connection has
been created
The following "glogin.sql" sets the SQL*Plus prompt to display the current container and database name.
define gname=idle
column global_name new_value gname
set heading off
set termout off
col global_name noprint
select upper(sys_context ('userenv', 'con_name') || '@' || sys_context('userenv',
'db_name')) global_name from dual;
set sqlprompt '&gname> '
set heading on
set termout on
- Let's connect to PDB1 using "connect" and verify that glogin.sql is executed and
the prompt displays the CDB/PDB name
SQL> conn sys/oracle@pdb1 as sysdba
PDB1@CDB1>
- Verify that the prompt displays current container (PDB1) and container database
(CDB1)
- Now let's connect to PDB2 using ALTER SESSION SET CONTAINER and verify that
glogin.sql is not executed and the same prompt as earlier is displayed
PDB1@CDB1> alter session set container=pdb2;
Session altered.
PDB1@CDB1> sho con_name
CON_NAME
------------------------------
PDB2
-- Let's connect to PDB2 using connect and verify that glogin.sql is executed as
the prompt displays the PDB name PDB2
PDB2@CDB1>
Pending transactions are not committed when ALTER SESSION SET CONTAINER is used.
- Let's start a transaction in PDB1
ERROR at line 1:
ORA-65023: active transaction exists in container PDB1
- In another session, check that the transaction was not committed and no rows are
visible in table pdb1_tab
- Try to grant the SET CONTAINER privilege to a local user HR in PDB2. This fails
because a common privilege cannot be granted to a local user, and hence a local user
cannot use ALTER SESSION SET CONTAINER to connect to another PDB.
Database 12c brought with it many new features. There are improvements in many areas
and new concepts, most notably the concepts of the container database and the
pluggable database.
Container Databases (CDB) and Pluggable Databases (PDB) bring a radical change to
the core database architecture. Besides this major change there are many other
improvements in Database 12c. Some of those new features are listed below:
4) In Oracle 12c every node in the cluster does NOT need to have its own ASM
instance. Oracle Flex ASM, a new set of features, addresses this situation by
removing the strict requirement to have one ASM instance per cluster node. In this
scenario, if an ASM instance to which databases are connected fails, the databases
will dynamically reconnect to another ASM instance in the cluster.
5) RAC crsctl and srvctl commands have a new option named -eval to evaluate commands
before they are executed.
7) A new command, "alter database move datafile", makes it very simple to move data
and temp files from a file system into ASM while they are in use. Previously it was
not possible to do this activity online.
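A sketch of the syntax; the file name and disk group below are hypothetical examples.

```sql
-- Move a datafile online from the file system into ASM.
ALTER DATABASE MOVE DATAFILE '/u01/oradata/db12c/users01.dbf' TO '+DATA';

-- The KEEP option leaves the original copy behind as well.
ALTER DATABASE MOVE DATAFILE '/u01/oradata/db12c/users01.dbf' TO '+DATA' KEEP;
```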
8) Oracle has removed the Database Console in Oracle 12c. It was introduced with
Oracle 10g and it was not frequently used by DBAs.
9) Increase in the VARCHAR2 limit. Instead of the previous limit of 4000 bytes for
this field it is now possible to store up to 32 kilobytes. This new behaviour is
controlled by the MAX_STRING_SIZE initialization parameter. SQL*Plus won't be able
to insert that much data interactively, as the inherent limit of a SQL*Plus command
is around 2500 characters for a column, so you will need some other tool for it.
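Enabling the extended limit is a one-way change and, to the best of my knowledge, requires the database to be in upgrade mode while utl32k.sql is run; treat the steps below as a sketch to be checked against the 12c documentation.

```sql
-- Enable extended VARCHAR2 (32767 bytes); cannot be reverted to STANDARD.
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER SYSTEM SET max_string_size = EXTENDED;
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE;
STARTUP;
```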
g:\prints\core\ag advntg.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Active Data Guard
==================:
Benefits:
g:\prints\core\archive_solution.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Using ALTERNATE archive destination to handle archive overflow
Scenario - Define a secondary archive location, which will be used when the primary
destination is full.
Solution - We can define an archive destination with the ALTERNATE attribute, which
will take over if the primary destination is full.
As per the Oracle documentation, an archiving destination can have a maximum of one
alternate destination specified. An alternate destination is used when the
transmission of an online redo log from the primary site to the standby site fails.
Here we have to add NOREOPEN, otherwise it will not spill over to the ALTERNATE
location.
As per the Oracle documentation, if archiving fails and the REOPEN attribute is
specified with a value of zero (0), or NOREOPEN is specified, the Oracle database
server attempts to archive online redo logs to the alternate destination on the
next archival operation.
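A sketch of such a configuration; the directory paths are hypothetical examples.

```sql
-- Secondary destination, enabled only as an alternate.
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u02/arch_alt';
ALTER SYSTEM SET log_archive_dest_state_2 = ALTERNATE;

-- Primary destination: NOREOPEN makes the switch to the alternate happen
-- on the next archival operation instead of retrying the failed destination.
ALTER SYSTEM SET log_archive_dest_1 =
  'LOCATION=/u01/arch NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2';
```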
When archive logs are written to the primary location:
When the primary location is full and the archiver cannot write to it, the first
failure will throw the following error stack:
Metalink Documents:
NOTE 270069.1 - How to Automate Archive Log Overflow Using "Alternate"
NOTE 369120.1 - ALTERNATE Attribute of LOG_ARCHIVE_DEST_n Does Not Appear to Work
g:\prints\core\asm int.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ASM
Answer
Oracle ASM is Oracle's volume manager, specially designed for Oracle database data.
It has been available since Oracle Database 10g, and many improvements have been
made in versions 11g Release 1 and 2 and 12c.
ASM offers support for Oracle RAC clusters without the requirement to install 3rd
party software, such as cluster aware volume managers or file systems.
ASM is shipped as part of the database server software (Enterprise and Standard
editions) and does not cost extra money to run.
ASM simplifies administration of Oracle related files by allowing the administrator
to reference disk groups
rather than individual disks and files, which are managed by ASM.
Answer
INSTANCE_TYPE - Set to ASM or RDBMS depending on the instance type. The default is
RDBMS.
DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults
to +ASM but must be altered if you intend to run multiple ASM instances.
ASM_DISKGROUPS - The list of disk groups that should be mounted by an ASM instance
during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM
configuration changes are automatically reflected in this parameter.
ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered
for discovery. Altering the default value may improve the speed of disk group mount
time and the speed of adding a disk to a disk group. Changing the parameter to a
value which prevents the discovery of already mounted disks results in an error.
The default value is NULL, allowing all suitable disks to be considered.
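Put together, a minimal ASM instance parameter file might look like the sketch below; the disk string path is a hypothetical example.

```
# init+ASM.ora (sketch -- values are illustrative)
instance_type='asm'
asm_diskgroups='DATA','FRA'
asm_diskstring='/dev/oracleasm/disks/*'
```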
a) Provides automatic load balancing over all the available disks, thus reducing
hot spots in the file system
b) Prevents fragmentation of disks, so you don't need to manually relocate data to
tune I/O performance
c) Adding disks is straightforward - ASM automatically performs online disk
reorganization when you add or remove storage
d) Uses redundancy features available in intelligent storage arrays
e) The storage system can store all types of database files
f) Using disk groups makes configuration easier, as files are placed into disk
groups
g) ASM provides striping and mirroring
h) ASM and non-ASM Oracle files can coexist
The ASM instance creates an extent map which has a pointer to where each 1MB extent
of the data file is located. When a database instance creates or opens a database
file that is managed by ASM, the database instance messages the ASM instance and ASM
returns an extent map for that file. From that point the database instance performs
all I/O directly to the disks, unless the location of that file is being changed.
Three things might cause the extent map for a database instance to be updated:
1) rebalancing the disk layout following a storage configuration change (adding or
dropping a disk from a disk group), 2) opening of a new database file, and
3) extending an existing database file when a tablespace is enlarged.
Answer ASM disk groups, each of which comprises several physical disks that are
controlled as a single unit.
Answer They are defined within a disk group to support the required level of
redundancy. For two-way mirroring you would expect a disk group to contain two
failure groups so individual files are written to two locations.
Answer ASM should be installed separately from the database software in its own
ORACLE_HOME directory. This will allow you the flexibility to patch and upgrade ASM
and the database software independently.
Answer Several databases can share a single ASM instance. So, although one can
create multiple ASM instances on a single system, normal configurations should have
one and only one ASM instance per system.
For clustered systems, create one ASM instance per node (called +ASM1, +ASM2, etc).
Answer Generally speaking one should have only one disk group for all database
files, and optionally a second for recovery files, e.g. +DATA for datafiles and
+FRA for recovery files.
Here is an example of how you can enable automatic file management with such a
setup in each database served by that ASM instance:
You may also decide to introduce additional disk groups - for example, if you
decide to put historic data on low-cost disks, or if you want ASM to mirror
critical data across 2 storage cabinets.
Data with different storage characteristics should be stored in different disk
groups. Each disk group can have different redundancy (mirroring) settings (high,
normal and external), different failure groups, etc. However, it is generally not
necessary to create many disk groups with the same storage characteristics (i.e.
+DATA1, +DATA2, etc. all on the same type of disks).
Answer Striping is spreading data across multiple disks so that I/O is spread
across multiple disks, giving an increase in throughput. It provides read/write
performance but not failover support.
ASM offers two types of striping, with the choice depending on the type of database
file. Coarse striping uses a stripe size of 1MB, and you can use coarse striping
for every file in your database, except for the control files, online redo log
files, and flashback files. Fine striping uses a stripe size of 128KB. You can use
fine striping for control files, online redo log files, and flashback files.
Mirroring means redundancy. It may add a performance benefit for read operations
but overhead for write operations. Its basic purpose is to provide failover support.
There are three ASM mirroring options:
High Redundancy - In this configuration, for each primary extent, there are two
mirrored extents. For Oracle Database Appliance this means that during normal
operations there would be three extents (one primary and two secondary) containing
the same data, thus providing a "high" level of protection. Since ASM distributes
the partnering extents in a way that prevents all copies of an extent from becoming
unavailable due to a component failure in the I/O path, this configuration can
sustain at least two simultaneous disk failures on Oracle Database Appliance (which
should be rare but is possible).
Normal Redundancy � In this configuration, for each primary extent, there is one
mirrored (secondary) extent. This configuration protects against at least one disk
failure. Note that in the event a disk fails in this configuration, although there
is typically no outage or data loss, the system operates in a vulnerable state,
should a second disk fail while the old failed disk replacement has not completed.
Many Oracle Database Appliance customers thus prefer the High Redundancy
configuration to mitigate the lack of additional protection during this time.
External Redundancy - In this configuration there are only primary extents and no
mirrored extents. This option is typically used in traditional non-appliance
environments when the storage sub-system may have existing redundancy such as
hardware mirroring or other types of third-party mirroring in place. Oracle
Database Appliance does not support External Redundancy.
8. What is a diskgroup?
A disk group consists of multiple disks and is the fundamental object that ASM
manages. Each disk group contains the metadata that is required for the management
of space in the disk group. The ASM instance manages the metadata about the files
in a Disk Group in the same way that a file system manages metadata about its
files. However, the vast majority of I/O operations do not pass through the ASM
instance. In a moment we will look at how file
I/O works with respect to the ASM instance.
Answer An Oracle ASM disk group's filesystem structure is similar to the UNIX
filesystem hierarchy or the Windows filesystem hierarchy.
Answer Oracle ASM files are stored within the Oracle ASM diskgroup. If we dig into
internals, oracle ASM files are stored within the Oracle ASM filesystem structures.
17 How are the Oracle ASM files stored within the Oracle ASM filesystem structure?
Answer Oracle ASM files are stored within the Oracle ASM filesystem structures as
objects that RDBMS instances/Oracle database instance access. RDBMS/Oracle instance
treats the Oracle ASM files as standard filesystem files.
18 What are the Oracle ASM files that are stored within the Oracle ASM file
hierarchy?
Answer Files stored in Oracle ASM diskgroup/Oracle ASM file structures include:
1) Datafile
2) Controlfiles
3) Server Parameter Files(SPFILE)
4) Redo Log files
19 How can you access a database file in ASM diskgroup under RDBMS?
Answer Once the ASM file is created in ASM diskgroup, a filename is generated. This
file is now visible to the user via the standard RDBMS view V$DATAFILE.
ASM Metadata
Answer This is the ASM_POWER_LIMIT parameter, which controls the number of
allocation units the ASM instance will try to rebalance at any given time. The
default value is 1. In ASM versions before 11.2.0.2 the maximum value is 11; from
11.2.0.2 onwards values up to 1024 are allowed.
Answer No, we cannot modify the redundancy for Diskgroup once it has been created.
To alter it we will be required to create a new Diskgroup and move the files to it.
This can also be done by restoring full backup on the new Diskgroup.
35 Does the ASM instance automatically rebalance and take care of hot spots?
Answer No. This is a myth and ASM does not do it. It will initiate automatic
rebalance only when a new disk is added to Diskgroup or we drop a disk from
existing Diskgroup.
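Rebalancing can also be triggered or throttled manually; a sketch, using a hypothetical disk group named DATA:

```sql
-- Kick off a manual rebalance at a higher power level.
ALTER DISKGROUP data REBALANCE POWER 8;

-- Monitor rebalance progress.
SELECT operation, state, power, est_minutes FROM v$asm_operation;
```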
36 What is ASMLIB?
Answer ASMLIB is the support library for ASM. ASMLIB gives an Oracle database
using ASM more efficient and capable access to disk groups. The purpose of ASMLIB
is to provide an alternative interface to identify and access block devices.
Additionally, the ASMLIB API enables storage and operating system vendors to supply
extended storage-related features.
38 What is kfed?
Answer kfed is a utility which can be used to view ASM disk header information.
The syntax for using it is:
kfed read <device-name>
Answer An ASM storage system requires the use of an additional specialized database
instance called ASM, which will actually manage the storage for a set of Oracle
databases. In order to use ASM storage for your Oracle databases, you must first
ensure that you have Oracle's Cluster Synchronization Services (CSS) running on
your server.
CSS is responsible for synchronizing ASM instances and your database instances, and
it is
installed as part of your Oracle software. CSS also synchronizes recovery from an
ASM instance failure. You can find out if the CSS service is running by using the
following command:
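The command itself is not shown above. To the best of my knowledge the check is done with crsctl, but treat the exact command as an assumption that may vary by version:

```
# Check whether CSS is up (Oracle Restart / Grid Infrastructure).
crsctl check css

# Alternatively, look for the daemon directly.
ps -ef | grep -v grep | grep ocssd
```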
41 How to find out the databases, which are using the ASM instance?
Answer
ASMCMD> lsct
SQL> select DB_NAME from V$ASM_CLIENT;
When we put the database in begin backup mode, the headers of all datafiles are
frozen (their checkpoint SCN does not change), and during this period the database
generates excessive redo.
Suppose you are updating some data in a table and the size of the change vector is
1 KB; the background process writes the whole 8 KB block image to redo instead of
just the 1 KB change, which results in excessive redo generation.
g:\prints\core\bgp.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
10g New Background Processes
-MMAN
SGA Background Process
The Automatic Shared Memory Management feature uses a new background process named
Memory Manager (MMAN). MMAN serves as the SGA Memory Broker and coordinates the
sizing of the memory components. The SGA Memory Broker keeps track of the sizes of
the components and pending resize operations
-RVWR
Flashback database
-Jnnn
These are job queue processes which are spawned as needed by CJQ0 to complete
scheduled jobs. This is not a new process.
-CTWR
This is a new process Change Tracking Writer (CTWR) which works with the new block
changed tracking features in 10g for fast RMAN incremental backups.
-MMNL
The Memory Monitor Light (MMNL) process is a new process in 10g which works with
the Automatic Workload Repository new features (AWR) to write out full statistics
buffers to disk as needed.
-MMON
The Manageability Monitor (MMON) process was introduced in 10g and is associated
with the Automatic Workload Repository new features used for automatic problem
detection and self-tuning. MMON writes out the required statistics for AWR on a
scheduled basis.
-M000
MMON background slave (m000) processes.
-CJQn
This is the Job Queue monitoring process which is initiated with the
job_queue_processes parameter. This is not new.
-RBAL
This is the ASM related process that performs rebalancing of disk resources
controlled by ASM.
-ARBx
These processes are managed by the RBAL process and are used to do the actual
rebalancing of ASM
controlled disk resources. The number of ARBx processes invoked is directly
influenced by the asm_power_limit parameter.
-ASMB
The ASMB process is used to provide information to and from the Cluster
Synchronization Services used by ASM to manage the disk resources. It is also used
to update statistics and provide a heartbeat mechanism.
-ACMS
(atomic controlfile to memory service) per-instance process is an agent that
contributes to ensuring a distributed SGA memory update is either globally
committed on success or globally aborted in the event of a failure in an Oracle RAC
environment.
-DBRM
(database resource manager) process is responsible for setting resource plans and
other resource manager related tasks.
-DIA0
(diagnosability process 0) (only 0 is currently being used) is responsible for hang
detection and deadlock resolution.
-DIAG
(diagnosability) process performs diagnostic dumps and executes global oradebug
commands.
-EMNC
(event monitor coordinator) is the background server process used for database
event management and notifications.
-FBDA
(flashback data archiver process) archives the historical rows of tracked tables
into flashback data archives. Tracked tables are tables which are enabled for
flashback archive. When a transaction containing DML on a tracked table commits,
this process stores the pre-image of the rows into the flashback archive. It also
keeps metadata on the current rows.
FBDA is also responsible for automatically managing the flashback data archive for
space, organization, and retention and keeps track of how far the archiving of
tracked transactions has occurred.
-GTX0-j
(global transaction) processes provide transparent support for XA global
transactions in an Oracle RAC environment. The database autotunes the number of
these processes based on the workload of XA global transactions. Global transaction
processes are only seen in an Oracle RAC environment.
-KATE
performs proxy I/O to an ASM metafile when a disk goes offline.
-MARK
marks ASM allocation units as stale following a missed write to an offline disk.
-SMCO
(space management coordinator) process coordinates the execution of various space
management related tasks, such as proactive space allocation and space reclamation.
It dynamically spawns slave processes (Wnnn) to implement the task.
-VKTM
(virtual keeper of time) is responsible for providing a wall-clock time (updated
every second) and reference-time counter (updated every 20 ms and available only
when running at elevated priority).
-PZnn
(PQ slaves used for global views) These are RAC parallel server slave processes,
but they are not normal parallel slave processes. PZnn processes (starting at PZ99)
are used to query GV$ views, which is done using parallel execution on all
instances. If more than one PZ process is needed, then PZ98, PZ97, ... (in that
order) are created automatically.
-Onnn
(ASM slave processes) A group of slave processes establishes connections to the
ASM instance. Through this connection pool, database processes can send messages to
the ASM instance. For example, opening a file sends the open request to the ASM
instance via a slave. However, slaves are not used for long-running operations such
as creating a file. The use of slave (pool) connections eliminates the overhead of
logging into the ASM instance for short requests.
-Xnnn
Slave used to expel disks after disk group reconfiguration
-BWnn
There can be 1 to 100 Database Writer Processes. The names of the first 36 Database
Writer Processes are DBW0-DBW9 and DBWa-DBWz. The names of the 37th through 100th
Database Writer Processes are BW36-BW99. The database selects an appropriate
default setting for the DB_WRITER_PROCESSES parameter or adjusts a user-specified
setting based on the number of CPUs and processor groups.
-FENC
(Fence Monitor Process) Processes fence requests for RDBMS instances which are
using Oracle ASM instances
-IPC0
(IPC Service Background Process) Common background server for basic messaging and
RDMA primitives based on IPC (Inter-process communication) methods.
-LDDn
(Global Enqueue Service Daemon Helper Slave) Helps the LMDn processes with various
tasks
-LGnn
(Log Writer Worker) On multiprocessor systems, LGWR creates worker processes to
improve the performance of writing to the redo log. LGWR workers are not used when
there is a SYNC standby destination. Possible processes include LG00-LG99.
-LREG
(Listener Registration Process) Registers the instance with the listeners
-OFSD
(Oracle File Server Background Process) Serves file system requests submitted to an
Oracle instance
-RPOP
(Instant Recovery Repopulation Daemon) Responsible for re-creating and/or
repopulating data files from snapshot files and backup files
-SAnn
(SGA Allocator) Allocates SGA The SAnn process allocates SGA in small chunks. The
process exits upon completion of SGA allocation.
-SCRB
(ASM Disk Scrubbing Master Process) Coordinates Oracle ASM disk scrubbing
operations
-SCRn
(ASM Disk Scrubbing Slave Repair Process) Performs Oracle ASM disk scrubbing repair
operation
-SCVn
(ASM Disk Scrubbing Slave Verify Process) Performs Oracle ASM disk scrubbing verify
operation
Rac Processes:
A Pnnn process is a background parallel query slave process that is used for SQL
statements executed in parallel. They can be seen in RAC and single-instance
configurations. In addition to being utilized for DML and DDL, parallel execution
servers are also used for transaction recovery, instance crash recovery, and
replication operations. The number of Pnnn processes started in the database is
limited by the value of PARALLEL_MAX_SERVERS. They start with P000.
In 12c, if the query is a GV$ query, then these background processes are numbered
backward, starting from PPA7.
The LNSn process is a network server process used in a Data Guard (primary)
database.
"During asynchronous redo transmission, the network server (LNSn) process transmits
redo data out of the online redo logfiles on the primary database and no longer
interacts directly with the log writer process. This change in behavior allows the
log writer (LGWR) process to write redo data to the current online redo log file
and continue processing the next request without waiting for inter-process
communication or network I/O to complete."
g:\prints\core\bgprocess.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oracle Database 11g introduced 56 new background processes
==========================================================:
Note:
The following post is based on Oracle Database 11g and briefly describes some
important processes.
These processes are mandatory and can be found in all typical database environments.
CKPT - Checkpoint
Ensures data consistency and easy database recovery in case of a crash by
synchronizing all the data file headers and control files with recent checkpoint
information.
DBW0-j - DB Writer
Flushes or writes modified dirty data (buffers) from the database buffer cache to
disk (datafiles). You can configure additional DB writer processes (up to 20),
from DBW0-DBW9 and DBWa through DBWj.
These were introduced in Oracle Database 11g. The first three are mandatory and the
others could be running depending upon the features being used.
DIA0 - Diagnostic
Responsible for detecting hangs and resolving deadlocks
The Data Recovery Advisor is a tool that helps you to diagnose and repair data
failures and corruptions. The Data Recovery Advisor analyzes failures based on
symptoms and intelligently determines optimal repair strategies. The tool can also
automatically repair diagnosed failures.
The Data Recovery Advisor is available from Enterprise Manager (EM) Database
Control and Grid Control. You can also use it via the RMAN command-line.
In this example you will see how to use the DRA commands via the RMAN command line:
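The core DRA workflow at the RMAN prompt looks like this (a sketch; it is only useful against a database with a diagnosed failure):

```
# Data Recovery Advisor from the RMAN prompt.
LIST FAILURE;                 # show diagnosed failures
LIST FAILURE DETAIL;          # more information per failure
ADVISE FAILURE;               # generate repair options
REPAIR FAILURE PREVIEW;       # show the repair script without running it
REPAIR FAILURE;               # execute the recommended repair
```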
Restrictions:
Examples:
Others Command:
g:\prints\core\Database Architecture.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oracle Database Architecture
Shared pool
============
The size of the shared pool is defined by the parameter called SHARED_POOL_SIZE.
The shared pool contains many components, but the important components are the
library cache, the data dictionary cache and control structures.
Library cache
=============
The library cache consists of the shared SQL and PL/SQL areas.
The SQL and PL/SQL areas store the most recently used SQL and PL/SQL statements.
We cannot declare the size of the library cache; it is based entirely on the shared
pool size.
If the size of the library cache is small, statements are continuously reloaded in
the library cache, which can affect performance.
It is managed through an LRU algorithm.
Data dictionary cache
=====================
The data dictionary cache is also known as the row cache, because it stores the
information in the form of rows instead of buffers.
If the size of the data dictionary cache is small, then the database has to query
the data dictionary tables repeatedly, which degrades performance.
Control structures
==================
Locking information is stored in control structures.
DB_KEEP_CACHE_SIZE
==================
It will retain in memory the blocks which are likely to be reused.
DB_RECYCLE_CACHE_SIZE
=====================
It will eliminate from memory the blocks which have little chance of being reused.
Buffer modes
++++++++++++
Unused
++++++
The buffer is ready to use, or available to use, as it was never used.
Cleaned
+++++++
The data has been written to the database and the buffer is available for use.
Dirty
+++++
The data has been modified but not written to disk.
Redo log buffer
===============
The size of the redo log buffer is defined by the parameter called LOG_BUFFER.
It sequentially records all the changes made to the database.
LOG_BUFFER is a static parameter.
JAVA POOL
=========
The size of the Java pool is defined by the parameter called JAVA_POOL_SIZE.
If you want to execute Java commands inside the database then the Java pool will be
used.
Whenever you run dbca, netca etc. the memory is allocated from the Java pool.
Large pool
==========
The size of the large pool is defined by the parameter called LARGE_POOL_SIZE.
Whenever an RMAN session is initiated the memory is allocated from the large pool,
and once it is finished the memory is deallocated.
It does not follow the LRU algorithm.
g:\prints\core\dataguard by qadar.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oracle Data Guard
==================:
Oracle Data Guard is Oracle's disaster recovery solution. It protects our
production database from disasters, reduces the workload on it, and lets us use it
more effectively.
Simple Example:
Your primary database is running and you want to reduce downtime because of
unplanned outages. You create a replica of this primary database (termed the
standby database). You regularly ship redo generated in the primary database to the
standby database and apply it there. So that is our Data Guard standby database,
and it is in a continuous state of recovery, validating and applying redo to remain
in sync with the primary database.
->ORACLE 8i
->ORACLE 9i
->ORACLE 10g
-Real-Time Apply
-Support for Oracle RAC
-Fast-Start Failover
-Asynchronous redo transfer
-Flashback Database
->ORACLE 11g
DATA GUARD 11g SYNCHRONOUS REDO TRANSFER PROCESS ARCHITECTURE (SYNC) - ZERO DATA
LOSS
2 - LNS (Log Writer Network Service) ships committed redo to RFS (Remote File
Service). RFS writes it to the standby redo log file. With a physical standby, the
MRP (Managed Recovery Process) applies it to the standby database; with a logical
standby this is done by the LSP (Logical Standby Process).
3 - RFS informs LNS that the data has been processed successfully. LNS passes this
on to LGWR. Finally, the commit acknowledgement is sent to the user that initiated
the transaction.
DATA GUARD 11g ASYNCHRONOUS REDO TRANSFER PROCESS ARCHITECTURE (ASYNC)
2 - LNS (Log Writer Network Service) ships redo to RFS (Remote File Service). RFS
writes it to the standby redo log file. With a physical standby, the MRP (Managed
Recovery Process) applies it to the standby database; with a logical standby this
is done by the LSP (Logical Standby Process).
3 - Once the redo buffer is recycled, LNS automatically reads the online redo log
files and begins to send redo from the log files.
This is the most commonly used process architecture. Asynchronous redo transfer
does not guarantee zero data loss; the system recovers with minimal data loss.
startup migrate:
---------------
Used to upgrade a database up to 9i.
startup upgrade:
---------------
From 10g onwards we use STARTUP UPGRADE to upgrade a database.
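A minimal sketch of an upgrade-mode startup (the surrounding upgrade steps are omitted here):

```sql
-- Connect as SYSDBA and open the database in upgrade mode
-- (restricts access and sets parameters required by the upgrade scripts)
SQLPLUS / AS SYSDBA
STARTUP UPGRADE;
-- ... run the catalog upgrade scripts here ...
SHUTDOWN IMMEDIATE;
STARTUP;
```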
g:\prints\core\flashbackp by qadar.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Flashback Database enables you to wind your entire database backward in time,
reversing the effects of unwanted changes within a given time window. It is similar
to conventional database point-in-time recovery in its effects, allowing you to
return a database to its state at a time in the recent past. Flashback Database can
be used to reverse most unwanted changes to a database, as long as the datafiles
are intact.
Note:
-The flashback log files are never archived; they are reused in a circular manner.
-Redo log files are used to roll changes forward during recovery, while flashback
log files are used to roll changes backward during a flashback operation.
-Flashing back a database is possible only when there is no media failure. If you
lose a datafile or it becomes corrupted, you'll have to recover using a datafile
restored from backups.
We cannot use Flashback Database in the following situations:
-Since we need the current datafiles in order to apply changes to them, we can't
use the Flashback Database feature when a datafile has been damaged or lost.
-If we have a damaged disk drive, or if there is physical corruption (not logical
corruption due to application or user errors) in our database, we must still use
the traditional methods of restoring backups and applying archived redo logs to
perform the recovery.
Flashback Database provides the following flashback levels:
1) Row level
- Flashback Query: allows us to view old row data as of a point in time or an SCN.
We can view the older data and, if necessary, retrieve it and undo erroneous
changes.
- Flashback Versions Query: allows us to view all versions of the same row over a
period of time so that we can undo logical errors. It can also provide an audit
history of changes, effectively allowing us to compare present data against
historical data without performing any DML activity.
- Flashback Transaction Query: lets us view changes made at the transaction level.
This technique helps in analysis and auditing of transactions, such as when a batch
job runs twice and you want to determine which objects were affected. Using this
technique, we can undo changes made by an entire transaction during a specified
period.
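A minimal sketch of the three query forms (the table scott.emp, the column names and the time windows are illustrations only):

```sql
-- Flashback Query: rows as they were 30 minutes ago
SELECT * FROM scott.emp
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE)
WHERE empno = 7788;

-- Flashback Versions Query: all versions of a row in a time window
SELECT versions_xid, versions_starttime, sal
FROM scott.emp
VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR) AND SYSTIMESTAMP
WHERE empno = 7788;

-- Flashback Transaction Query: changes made by a given transaction
SELECT operation, undo_sql
FROM flashback_transaction_query
WHERE xid = HEXTORAW('...');   -- transaction id from the versions query
```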
2) Table level
3) Database level
Typical scenarios:
1. Dropped user
2. Truncated table
3. Batch job: partial changes
Flashback targets:
1. TO TIME
2. TO SCN
3. TO SEQUENCE (log archive sequence)
ENABLING FLASHBACK:
ALTER SYSTEM SET db_recovery_file_dest='/u01/flashy' SCOPE=spfile;
ALTER SYSTEM SET db_recovery_file_dest_size=10G SCOPE=spfile;
ALTER SYSTEM SET db_flashback_retention_target=1440; (minutes; default 1440 = 1 day)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;
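A restore point must exist before you can flash back to it; a sketch of creating one (the name before_upgrade is an illustration only):

```sql
-- A normal restore point relies on the flashback retention target;
-- a guaranteed restore point retains flashback logs regardless of it
CREATE RESTORE POINT before_upgrade;
CREATE RESTORE POINT before_upgrade_grp GUARANTEE FLASHBACK DATABASE;

-- List existing restore points
SELECT name, scn, time FROM v$restore_point;
```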
SQLPLUS / AS SYSDBA
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT <restore point name>;
ALTER DATABASE OPEN RESETLOGS;
Restore points can be dropped dynamically, i.e. with the database open.
SQLPLUS / AS SYSDBA
DROP RESTORE POINT <restore point name>;
EXIT
Flashback can be disabled with the database open. Any unused Flashback logs will be
automatically removed at this point and a message detailing the file deletion
written to the alert log.
SQLPLUS / AS SYSDBA
ALTER DATABASE FLASHBACK OFF;
EXIT
g:\prints\core\flback.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How to enable FLASHBACK in Oracle Database 11G R1 and below versions
SQL> SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
NO

SQL> SELECT open_mode FROM v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
NO
fra usage:
select * from v$flash_recovery_area_usage;
The main background processes which are related to Backup & Recovery process are :
Read further for brief description on these important oracle background processes.
The checkpoint process is a key database "concept" and it does three important
things:
- It signals the database writer process (DBWn) at each checkpoint to write all
modified buffers in the SGA buffer cache (the temporary location of DB blocks) to
the database datafiles (the permanent and original location of DB blocks). After
this has been done, online redo log files can be recycled.
- It updates the datafile headers with the checkpoint information (even if the file
had no changed blocks).
- It updates the control files with the checkpoint information.
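The recorded checkpoint information can be inspected from the dictionary views, for example:

```sql
-- Checkpoint SCN recorded in the control file
SELECT checkpoint_change# FROM v$database;

-- Checkpoint SCN recorded in each datafile header
SELECT file#, checkpoint_change#, checkpoint_time
FROM v$datafile_header;
```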
You can switch the logfile manually to see this in the alert log:

SQL> ALTER SYSTEM SWITCH LOGFILE;

System altered.
Alert.log entry>>
-------------------
Mon Feb 03 14:24:49 2014
Beginning log switch checkpoint up to RBA [0x8.2.10], SCN: 1006600
Thread 1 advanced to log sequence 8 (LGWR switch)
Current log# 2 seq# 8 mem# 0: /u01/oracle/DB11G/oradata/brij/redo02.log
Mon Feb 03 14:24:49 2014
Archived Log entry 2 added for thread 1 sequence 7 ID 0x4c45a3de dest 1:
-------------------
ii) Checkpoints can also be forced with the ALTER SYSTEM CHECKPOINT; command. We
generally do checkpoint before taking backups. At some point in time, the data that
is currently in the buffer cache would be placed on disk. We can force that to
happen right now with a user invoked checkpoint.
Frequent checkpoints usually mean the redo log file size is small (and they also
mean a slow system). But if you make your redo log files very large, that increases
the mean time to recover. So a DBA should determine log file size on the basis of
various factors such as database type (DWH/OLTP etc.), transaction volume, and
database behaviour as shown in alert log messages.
CKPT actually took over one of the earlier responsibilities of LGWR. LGWR was
responsible for updating the datafile headers before database release 8.0, but with
increasing database sizes and numbers of datafiles this job was given to the CKPT
process.
B) THE LOG WRITER PROCESS (ora_lgwr_<SID>)
LGWR plays the important role of writing data changes from the redo log buffer to
the online redo log files. Oracle's online redo log files record all changes made
to the database in sequential manner (the SCN is the counter).
Why do we multiplex only the redo log files and not the datafiles?
Oracle uses a "write-ahead" protocol, meaning the logs are written to before the
datafiles are. Data changes aren't necessarily written to datafiles when you commit
a transaction, but they are always written to the redo log. Before DBWR can write
any of the changed blocks to disk, LGWR must flush the redo information related to
those blocks. Therefore, it is critical to always protect the online logs against
loss by ensuring they are multiplexed.
Redo log files come into play when a database instance fails or crashes. Upon
restart, the instance will read the redo log files looking for any committed
changes that need to be applied to the datafiles.
The log writer (LGWR) writes to the online redo log files under the following
circumstances:
- At each commit
- Every three seconds
- When the redo log buffer is one-third full or contains 1MB of cached redo log
data
- When LGWR is asked to switch log files
Remember that the data will not reside in the redo buffer for very long. For these
reasons, having an enormous (hundreds/thousands of megabytes) redo log buffer is
not practical; Oracle will never be able to use it all since it pretty much
continuously flushes it.
On our 11gR2 database, the size is about 6MB. The minimum size of the default log
buffer is OS-dependent.
You can also see the value in the alert log when the database is starting.
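The current log buffer size can also be checked directly, for example:

```sql
SHOW PARAMETER log_buffer

-- Or from the SGA component view
SELECT name, bytes FROM v$sgainfo WHERE name = 'Redo Buffers';
```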
LGWR does lots of sequential writes(fast operation) to the redo log. This is an
important distinction and one of the reasons that Oracle has a redo log and the
LGWR process as well as the DBWn process. DBWn does lots of scattered writes (slow
operation). The fact that DBWn does its slow job in the background while LGWR does
its faster job while the user waits gives us better overall performance. Oracle
could just write database blocks directly to disk when you commit, but that would
entail a lot of scattered I/O of full blocks, and this would be significantly
slower than letting LGWR write the changes out sequentially.
Also, during a commit, the lengthiest operation is, and always will be, the
activity performed by LGWR, as this is physical disk I/O. For that reason LGWR is
designed so that as you process and generate redo, it constantly flushes your
buffered redo information to disk in the background. So when it comes to the
COMMIT, there is not much left to do and the commit is fast.
REMEMBER that the purpose of the log buffer is to temporarily buffer transaction
changes and get them quickly written to a safe location on disk (online redo log
files), whereas the database buffer tries to keep blocks in memory as long as
possible to increase the performance of processes using frequently accessed blocks.
Although ARCn is an optional process, it should be considered mandatory for all
production databases!
The job of the ARCn process is to copy an online redo log file to another location
when LGWR fills it up, before it can be overwritten by new data.
The archiver background process is used only if you're running your database in
archivelog mode. These archived redo log files can then be used to perform media
recovery. Whereas the online redo log is used to fix the datafiles in the event of
a power failure (when the instance is terminated), archived redo logs are used to
fix datafiles in the event of a hard disk failure.
For example, if we lose the disk drive containing the system.dbf datafile, we can
go to our old backups, restore that old copy of the file, and ask the database to
apply all of the archived and online redo logs generated since that backup took
place. This will catch that file up with the rest of the datafiles in our database,
and we can continue processing with no loss of data.
ARCn copies the online redo log a bit more intelligently than the operating system
cp or copy command would: if a log switch is forced, only the used space of the
online log is copied, not the entire log file.
SQL> SELECT log_mode FROM v$database;

LOG_MODE
------------
ARCHIVELOG
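If LOG_MODE still shows NOARCHIVELOG, a common sketch for enabling archivelog mode is:

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Verify the new mode and the archive destination
ARCHIVE LOG LIST;
```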
-Memory Structure
-Background Process
Memory Structure:
(I) System Global Area: once the instance is started, memory is allocated to the
SGA. It is a basic component of the Oracle instance and its size depends on the
RAM. The Oracle 10g parameters for the SGA and PGA are sga_target, sga_max_size and
pga_aggregate_target.
It consists of:
- Shared Pool
- Database Buffer Cache
- Redo Log Buffer
- Large Pool
- Streams Pool
- Java Pool
(1) Shared Pool:
- Its parameter is shared_pool_size
- It consists of the Library Cache and the Data Dictionary Cache
I) Library Cache:
(4) Large Pool:
- Parallel execution allocates buffers out of the large pool only when sga_target
is set
- It works to relieve the burden on the shared pool
show parameter parallel_automatic_tuning
(6)Stream Pool:
- Its sizing is taken care of by Oracle. With ASMM (Automatic Shared Memory
Management) enabled, Oracle allocates the SGA component sizes itself; ASMM takes
care of:
1)Shared pool
2)Library cache
3)Database buffer cache
4)Large pool
5)Java Pool
6)Stream Pool
Process Structure:
1) USER PROCESS:
- A program that requests interaction with the Oracle server
- It must first establish a connection
- It does not interact directly with the Oracle server
2) SERVER PROCESS:
- It interacts directly with the Oracle server
- It can be a dedicated or shared server
- It always responds to user requests
3) BACKGROUND PROCESSES:
1) DBWR
2) LGWR
3) SMON
4) PMON
5) CKPT
Let us look at each component:
1) DBWR:
2) LGWR:
- At commit
- Every 3 seconds
- When 1MB of redo has accumulated
- When the redo log buffer is one-third full
- Before DBWR writes
In the above situations, LGWR writes redo from the redo log buffer to the online
redo log files.
3)SMON:
4)PMON:
5) CKPT:
Database:
The database is a collection of data which consists of datafiles, control files
and redo log files.
1) Data file:
- It is a portion of an Oracle database; it stores data, which includes user data
and undo data
- Its extension is ".dbf"
- The default location is "$ORACLE_BASE/oradata"
- To view the locations in the database, use this command:
SELECT name FROM v$datafile;
2) Control file:
4) Archive log:
- A group of redo log files copied to one or more offline destinations, known
collectively as the archived redo log
- Its default location is the flash recovery area
- Archive log mode must be enabled in the database; only then are filled redo logs
saved to the archive log destination, otherwise LGWR simply overwrites the online
redo log files.
1) Storage
2) Memory
3) Process
i) Storage
Datafile:
datafiles --- inside the datafiles you have data; the data for tables and also
indexes is stored in datafiles.
undo data ---
temporary data --- whenever Oracle does a sort and can't keep all the information
in memory (in the PGA), it writes to temporary files.
* In Oracle 7, 8 and 8i you could have 1,022 datafiles; now you can have 65,536
datafiles.
* dba_data_files
* v$datafile
* v$dbfile
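The views listed above can be queried like this, for example:

```sql
-- Datafiles with sizes, from the data dictionary
SELECT file_name, tablespace_name, bytes/1024/1024 AS size_mb
FROM dba_data_files;

-- The same files as seen from the control file
SELECT name FROM v$datafile;
```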
Controlfile:
controlfile ----- it contains the structure of your database
* Oracle recommends at least two or three controlfiles in different locations.
* The information in all the controlfiles is the same.
* Inside the controlfile it stores the:
db name
db creation time
entire path of your datafile locations
checkpoint information
v$controlfile
* control_files is a very important parameter
Online Redologs:
online redologs ---
* all the DML and DDL changes are stored (undo and redo)
* all the changes made to the database are stored in the redolog files
* all the commands are recorded in the redolog files
* it is a recorder of the major changes in your database
* Oracle recommends that you multiplex your redologs in groups in different
locations
* Mainly used in recovery
* if archivelog mode is enabled, all the information in the redologs is stored in
archivelog files
datafiles (.dbf)
controlfiles (.ctl)
redologs (.rdo)
archivelog files
spfile
init.ora file
oracle password file
ii) Memory
By default the BACKUP command in RMAN creates BackupSet(s), each of which is one or
more BackupPiece(s). A datafile may span BackupPieces but may not span a BackupSet.
However, RMAN does allow another method: BACKUP AS COPY. This is analogous to "user
managed backups" created with OS commands, except that the ALTER TABLESPACE |
DATABASE BEGIN BACKUP command does not have to be issued.
If an active datafile is corrupted, the DBA can choose to SWITCH TO COPY instead of
having to restore the datafile copy. Thus, a switch can be a fast operation.
Obviously, the DBA must plan carefully where he creates such copies if he intends
to SWITCH at any time later (he wouldn't keep a datafile copy on a storage target
not protected by RAID or ASM).
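A sketch of this flow in RMAN (the datafile number 4 and the copy location are illustrations only):

```sql
-- Create an image copy of a datafile
RMAN> BACKUP AS COPY DATAFILE 4 FORMAT '/u02/copies/users01.dbf';

-- After corruption: switch to the copy instead of restoring
RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> SWITCH DATAFILE 4 TO COPY;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';
```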
g:\prints\core\sql_query_execution.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
All SQL statements have to go through the following stages of execution:
1) PARSE:
Every SQL statement has to be parsed, which includes checking the syntax,
validating it, ensuring that all references to objects are correct and ensuring
that the relevant privileges on those objects exist.
2) BIND:
After parsing, the Oracle server knows the meaning of the statement but may still
not have enough information (values for variables) to execute it. The process of
obtaining these values is called binding.
3) EXECUTE:
After binding, the Oracle server executes the statement.
4) FETCH:
In the fetch stage, rows are selected and ordered, and each successive fetch
retrieves another row of the result until the last row has been fetched. This stage
applies only to certain DML statements, such as SELECT.
sqlplus scott/tiger@prod
Question: What are the internal SQL execution steps? How does Oracle translate a
table name into a read request from a physical datafile?
Answer: Between hitting "enter" and seeing your results, there are many steps in
processing a SQL statement.
All Oracle SQL statements must be processed the first time they execute (unless
they are cached in the library cache). The SQL execution steps include:
Hard parse - a new SQL statement must be parsed from scratch. (See the hard parse
ratio, comparing hard parses to executes.)
Soft parse - a re-entrant SQL statement where the only unique features are the host
variables. (See the soft parse ratio, comparing soft parses to executes.)
Excessive parsing can occur when your shared_pool_size is too small (and re-entrant
SQL is paged out), or when you have non-reusable SQL statements without host
variables. See the cursor_sharing parameter for an easy way to make SQL re-entrant,
and remember that you should always use host variables in your SQL so that it can
be re-entrant.
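A small SQL*Plus sketch of host (bind) variables, which allows repeat executions to be soft parsed (the table scott.emp and the values are illustrations only):

```sql
-- Define and set a bind variable in SQL*Plus
VARIABLE v_deptno NUMBER
EXEC :v_deptno := 10;

-- Repeated executions with different values reuse the same cursor
SELECT ename, sal FROM scott.emp WHERE deptno = :v_deptno;

EXEC :v_deptno := 20;
SELECT ename, sal FROM scott.emp WHERE deptno = :v_deptno;
```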
SQL> startup
Pluggable Database opened.
2) Another way: you can use the "alter pluggable database" command from the root
container to start and shut down a pluggable database. You connect to the container
database.
Maybe you have a lot of pluggable databases in the container database, and shutting
them down one by one would be tedious. We can shut down all pluggable databases
with one command from the root container.
NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       MOUNTED
PDB2       MOUNTED
You may want to close all pluggable databases except one. You can do this with the
EXCEPT clause as follows.
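A sketch of the EXCEPT form (the PDB name pdb1 is an illustration only):

```sql
-- From the root container: close every PDB except pdb1
ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 CLOSE IMMEDIATE;

-- Open them all again
ALTER PLUGGABLE DATABASE ALL OPEN;
```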
NOTE: When you open all pluggable databases, you can do the same thing with the
OPEN command instead of the CLOSE command.
NOTE: When you shut down the container database, all PDBs shut down too. But when
you start the container database, the PDBs do not start automatically. To start the
PDBs we have to intervene manually, or we can create a trigger as follows.
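A common sketch of such a startup trigger (from 12.1.0.2 onwards, ALTER PLUGGABLE DATABASE ALL OPEN followed by SAVE STATE is an alternative):

```sql
-- Open all PDBs automatically whenever the CDB starts
CREATE OR REPLACE TRIGGER open_all_pdbs
AFTER STARTUP ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/
```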
g:\prints\core\stdby issue.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
STANDBY_FILE_MANAGEMENT parameter is not set to AUTO
If you have not set standby_file_management to AUTO on the primary and you add a
datafile to the primary, you will see the following in the alert log of the
standby.
This will stop your standby database from being in recovery mode until it is fixed,
so if you are using OMF, do the following to fix it.
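A sketch of setting the parameter (on a standby that has already hit the problem, the unnamed datafile must additionally be repaired, which is not shown here):

```sql
-- On both primary and standby: create standby datafiles automatically
ALTER SYSTEM SET standby_file_management = AUTO SCOPE=BOTH;

-- Verify
SHOW PARAMETER standby_file_management
```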