Oracle DBA
What is Oracle?
An Oracle database is a collection of related data treated as a unit. The
purpose of a database is to store and retrieve related information. A
database server can reliably manage a large amount of data in a multiuser
environment, so that multiple users can access the same data
concurrently with high performance, and it can recover efficiently
from failures.
Who is a DBA?
A DBA is a person responsible for managing the database server.
Sometimes a database can be very large and have a large number of
users. Then it is not a single person's job, but is shared by a group of
people (DBAs).
DBA's Responsibilities
1
9866465379
Database Administrators
These are the people who install the Oracle software, create the database,
and configure and manage it on the host computer.
Security Administrators or officers
These create users in the database, control and monitor user access
to the database, and look after system security.
Network Administrators
These look after the administration of Oracle network products.
Application Developers
These people design and implement database applications. They design
the database structure, estimate storage requirements for the
application, and communicate these to the database administrator.
Application Administrators
Every application can have its own administrator.
Database Users
These users interact with the database through applications or
utilities and are responsible for entering, modifying, and deleting data where permitted.
ORACLE_SID
ORACLE_BASE
ORACLE_HOME
LD_LIBRARY_PATH
Under Unix flavors we can add these variables to the shell profile files so that
they are set automatically at the time of logging into the system.
For example, if the default login shell is bash, then we add them to a
file called .bash_profile.
Append the following lines at the end of that file.
export ORACLE_SID=prod
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/10.2.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib
export PATH=$PATH:$ORACLE_HOME/bin
SYSDBA :
Allows startup and shutdown of the database
Allows altering the database
Allows creating and dropping a database
Allows creating an spfile
SYSOPER :
Allows startup and shutdown of the database
Allows creating an spfile
Allows altering the database
We can grant the SYSDBA and SYSOPER privileges to any database user,
who will thereafter act as a DBA.
SQL > grant sysdba to ramesh;
The above statement adds the user to the password file and enables the
user to connect as SYSDBA.
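Assuming the user ramesh exists with a known password (the password here is illustrative), the user could then open a privileged session, and the grant could later be removed:
SQL > connect ramesh/ramesh_password as sysdba
SQL > revoke sysdba from ramesh;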
Apart from user tables, the Oracle database also contains some system
tables that store data about the database itself.
These system tables include the names of all the tables in the database,
the column names and datatypes of those tables, the number of rows these
tables contain, security information about which users are allowed to
access these tables, etc.
This data about the database is referred to as metadata.
These tables have cryptic names such as OBJ$, FILE$, etc.
To make it easier to use SQL to examine the metadata tables, Oracle builds
views on these tables.
The Oracle 10g database contains two types of metadata views.
Data Dictionary Views
Depending upon the features configured and installed, Oracle 10g can contain
more than 1300 data dictionary views.
These have names that begin with DBA_, ALL_, and USER_.
For Example :
The DBA_TABLES view shows information on all the tables in the database.
The ALL_TABLES view shows the tables a particular user owns plus the
tables of other users to which that user has access.
The USER_TABLES view shows only those tables owned by the user.
Some Examples :
DBA_TABLES
Shows the names and physical storage information about all the tables.
DBA_USERS
Shows information about all the users in the database.
DBA_VIEWS
Shows information about all the views in the database.
DBA_TAB_COLUMNS
Shows the names and datatypes of all the table columns in the database.
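For instance, to list the current user's own tables and then the tables owned by SCOTT (assuming the standard SCOTT schema is present), we could query:
SQL > select table_name from user_tables;
SQL > select table_name, tablespace_name from dba_tables where owner = 'SCOTT';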
Dynamic Performance Views
Oracle 10g can contain around 350 dynamic performance views,
depending on the features selected.
Most of these have names that begin with V$.
Some examples :
V$DATABASE
Contains info about the database itself, such as the database name, when it
was created, etc.
V$VERSION
Shows which software version the database is using.
V$OPTION
Shows optional components that are installed in the database.
V$SQL
Shows info about the SQL statements that database users have been issuing.
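As an illustration, a few such queries (the output depends on the installation):
SQL > select name, created from v$database;
SQL > select banner from v$version;
SQL > select parameter, value from v$option where value = 'TRUE';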
Some views are available even when the database is not fully open and
running.
The data they contain is usually in lower case.
They contain dynamic, statistical data that is lost each time the database
is shut down.
Whenever we start a database instance, the parameter initialization file is read
first; it contains parameters and their values.
These parameters advise the instance of certain settings when it starts up
the database.
There are two types of parameter initialization files:
the parameter file (pfile) and the server parameter file (spfile).
We can use either of these two to configure the instance and database
options.
Pfile ( parameter file )
This is a text file which can be edited using a text editor.
Its name will be init<instancename>.ora.
It can be created from an spfile.
Spfile ( Server parameter file )
This is a binary file that cannot be edited using a text editor.
Changes can be made to the spfile while the instance is open and running by
executing SQL commands from a SQL prompt.
Its name will be spfile<instancename>.ora.
It can be created from a pfile.
For example : if ORACLE_SID is prod, then
the pfile name will be initprod.ora and
the spfile name will be spfileprod.ora.
The default location of these files is the $ORACLE_HOME/dbs folder.
We can specify more than 250 configuration parameters in the pfile or
spfile.
Oracle 10g divides these parameters into two categories: basic and
advanced.
Oracle recommends setting only about 30 parameters manually.
The remaining parameters can be set as directed by Oracle Support or to meet
the specific needs of an application.
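A minimal pfile might look like the sketch below; the file names and values are illustrative, not recommendations:
db_name=prod
db_block_size=8192
control_files=('/oracle/oradata/prod/control01.ctl','/oracle/oradata/prod/control02.ctl')
sga_target=600M
undo_management=AUTO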
Oracle Architecture
The Oracle server architecture can be described as follows.
The number of private SQL areas a user process can allocate is limited by
the OPEN_CURSORS parameter (the default is 50).
Session Memory
This memory is allocated to hold the session's variables and other
information related to the session.
The PGA also includes a sort area, which is used whenever a user
request requires a sort, bitmap merge, or hash join operation.
As of Oracle 9i, the PGA_AGGREGATE_TARGET parameter, in conjunction
with the WORKAREA_SIZE_POLICY initialization parameter, can ease
system administration by allowing the DBA to choose a total size for all
work areas and let Oracle manage and allocate the memory between all
user processes.
PGA
If the data dictionary cache is too small, requests for information from the
data dictionary will cause extra I/O to occur. These I/O bound data
dictionary requests are called recursive calls and should be avoided by
sizing the data dictionary cache correctly.
The shared pool is sized by the SHARED_POOL_SIZE parameter.
This is a dynamic parameter.
2. Database Buffer Cache
This holds blocks of data from disk that have been recently read to
satisfy a SELECT statement, or that contain blocks modified or added by
DML statements. The buffer cache contains both modified and unmodified
blocks. As of Oracle 9i the database buffer cache is dynamic. Tablespaces
in the database with block sizes other than the default block size require
their own buffer caches. As the processing and transactional needs change
during the day or during the week, the values of DB_CACHE_SIZE and
DB_nK_CACHE_SIZE can be changed dynamically without restarting
the instance (one cache for the default block size and up to four others).
Oracle can use two additional caches with the same block size as the
default block size: the KEEP buffer pool and the RECYCLE buffer pool.
As of Oracle 9i both of these pools allocate memory independently of the
other caches in the SGA.
When a table is created, you can specify the pool where the table's data
blocks will reside by using the BUFFER_POOL KEEP clause or the
BUFFER_POOL RECYCLE clause in the STORAGE clause. For a table that
is used frequently throughout the day, it is advantageous to place it
in the KEEP buffer pool to minimize the I/O needed to fetch the
blocks of the table.
Oracle uses an LRU algorithm to manage the contents of the Shared Pool
and Database Buffer Cache.
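For example, a small, frequently read lookup table could be assigned to the KEEP pool; the table name and cache size below are illustrative:
SQL > alter system set db_keep_cache_size = 16M;
SQL > create table country_codes (code varchar2(3), name varchar2(40))
storage (buffer_pool keep);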
Streams Pool
Fixed SGA
Redo Log Buffer
MAX(CHECKPOINT_CHANGE#)
-----------------------
789051
This query gets the current system SCN:
SQL> select dbms_flashback.get_system_change_number from dual;
GET_SYSTEM_CHANGE_NUMBER
------------------------
789124
You can also query V$TRANSACTION to find the SCN for a particular transaction.
Events that trigger a checkpoint
The following events trigger a checkpoint.
Redo log switch
LOG_CHECKPOINT_TIMEOUT has expired
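Both events can also be triggered manually from a privileged session, which is useful for testing:
SQL > alter system switch logfile;
SQL > alter system checkpoint;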
The checkpoint number is the SCN at which all dirty buffers have been
written to disk. A checkpoint can occur at the
object, tablespace, datafile, or database level.
Oracle Database
An instance is a temporary memory structure, but the Oracle database is
made up of a set of physical files that reside on the host computer's hard
disk. These are the control files, data files, and redo log files.
Additional files that are associated with the Oracle database but are not
technically part of it are the password file, pfile, spfile,
and archived redo log files.
The three types of files that make up a database are
1.Control files
These contain the locations of the other physical files, the database name,
the database block size, the database character set, and recovery information.
They are required to start the database instance.
The control files are created when the database is created, in the locations
specified by the CONTROL_FILES initialization parameter
in the parameter file.
Most production databases multiplex control files to multiple locations to
minimize the potential damage due to disk failure.
Oracle uses the CKPT background process to update these files
automatically.
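A multiplexed configuration in the parameter file might look like this sketch (the paths are illustrative):
control_files=('/disk1/oradata/prod/control01.ctl',
'/disk2/oradata/prod/control02.ctl')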
2. Datafiles
These are the physical files which actually store the data that has been
inserted into each table in the database.
Datafiles are the physical structures behind another database storage area
called a tablespace. A tablespace is a logical storage area in the database.
The information for a single table can span many datafiles, or many tables
can share a set of datafiles. A tablespace can have more than one datafile.
For each tablespace there must be at least one datafile.
Temporary tablespaces are listed in dba_temp_files.
Datafiles are usually the largest files in the database, ranging from
megabytes to terabytes. The maximum number of database files can
be set with the init parameter DB_FILES. The maximum number of datafiles
in a smallfile tablespace is 1022; a bigfile tablespace can contain
only one datafile. A datafile that contains a block whose SCN is
more recent than the SCN of its header is called a fuzzy datafile.
If you are interested in when a file's last checkpoint was:
select name, checkpoint_change#, to_char(checkpoint_time,
'DD.MM.YYYY HH24:MI:SS') from v$datafile_header;
The datafile size is still limited to 4,194,304 Oracle blocks. With a block
size of 8K, this limits the datafile to 32GB.
Whenever a user performs a SQL operation on a table, the user's
server process copies the affected data from the datafiles into the
database buffer cache. If the user has performed a committed
transaction that modifies the data, the Database Writer process
writes the modified data back to the datafiles.
GROUP# MEMBER
---------- ----------------------------------------
1 /disk1/sales/log1a.ora
2 /disk1/sales/log2a.ora
Whenever LGWR switches from the last redo log group back to the first,
any recovery information already in the first redo log group is
overwritten and is therefore no longer available for recovery.
But if the database is running in archivelog mode, the contents of
each used log are copied to a secondary location before the log
is reused by LGWR.
If the archiving feature is enabled, a background process called the
Archiver (ARCn) copies the contents of the redo log file to the
archive location.
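Archivelog mode is enabled while the database is mounted but not open; a typical sequence is sketched below:
SQL > shutdown immediate
SQL > startup mount
SQL > alter database archivelog;
SQL > alter database open;
SQL > archive log list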
All production databases run in archivelog mode, because they need to be
able to recover all transactions since the last backup in
case of hardware failure.
One of the goals of the parse phase is to generate a query execution plan
(QEP). Does the statement correspond to an open cursor, i.e., does the
statement already exist in the library cache? If yes, the statement need
not be parsed again and can be executed directly. If the cursor is not
open, it might still be cached in the cursor cache. If so, the
statement likewise need not be parsed again and can be executed directly. If
not, the statement has to be verified syntactically and semantically:
Syntax
This step checks whether the syntax of the statement is correct. For example, a
statement like select foo frm bar is syntactically incorrect (frm instead
of from).
Semantic
This step checks whether the referenced objects exist and whether the user
has the necessary privileges to access them.
View merging
If the query contains views, the query might be rewritten to join the
view's base tables instead of the views.
Statement Transformation
Optimization
The CBO uses gathered statistics to minimize the cost of executing the
query. The result of the optimization is the query evaluation plan (QEP).
If bind variable peeking is used, the resulting execution plan might be
dependent on the first bound bind value.
Execute
Memory for bind variables is allocated and filled with the actual bind
values. The execution plan is executed. Oracle checks whether the data it
needs for the query is already in the buffer cache. If not, it reads the data
off disk into the buffer cache. The records that are changed are locked:
no other session must be able to change a record while it is being updated.
Also, before and after images describing the changes are written to the
redo log buffer and the rollback segments. The original block receives a
pointer to the rollback segment. Then the data is changed.
Fetch
Data is fetched from database blocks. Rows that don't match the predicate
are removed. If needed (for example for an ORDER BY), the data is
sorted. The data is then returned to the application.
BOTH - The parameter takes effect in the current instance and is stored in
the SPFILE.
SPFILE - The parameter is altered in the SPFILE only. It does not affect
the current instance.
MEMORY - The parameter takes effect in the current instance, but is not
stored in the SPFILE.
A parameter value can be reset to the default using:
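For example, setting a parameter with a scope and then resetting it (the parameter name here is illustrative):
SQL > alter system set shared_pool_size = 200M scope=both;
SQL > alter system reset shared_pool_size scope=spfile sid='*';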
Logical Structures
Tablespaces and Data Files
Tablespaces are the primary logical storage structures of
any ORACLE database. The usable data of an ORACLE
database is logically stored in the tablespaces and
physically stored in the data files associated with the
corresponding tablespace.
databases and tablespaces
An ORACLE database is comprised of one or more logical
storage units called tablespaces. The database's data is
collectively stored in the database's tablespaces.
tablespaces and data files
Each tablespace in an ORACLE database is comprised of
one or more operating system files called data files. A
tablespace's data files physically store the associated
database data on disk.
schema objects, segments, and tablespaces
When a schema object such as a table or index is created,
its segment is created within a designated tablespace in
the database.
Overview of Segments
next 50k
minextents 2
maxextents 50
pctincrease 0);
All segments created in the tablespace will inherit the
default storage parameters unless their storage
parameters are specified explicitly to override the default.
Initial – size in bytes of the first extent in a segment.
Next – size in bytes of the second and subsequent extents.
Pctincrease – percentage by which each extent after the
second grows.
SMON periodically coalesces free space in a DMT, but only
if the PCTINCREASE setting is not zero.
Minextents – minimum number of extents allocated to
each segment upon creation.
Maxextents – maximum number of extents allocated in a
segment. We can also specify UNLIMITED.
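Put together, a dictionary managed tablespace with default storage parameters might be created like this sketch (the name, path, and sizes are illustrative):
SQL > create tablespace dmt1
datafile '/u01/app/oracle/oradata/sales/dmt1.dbf' size 50M
extent management dictionary
default storage (initial 50k
next 50k
minextents 2
maxextents 50
pctincrease 0);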
To create a locally managed tablespace
Important Points:
1. LMTs can be created as
a) AUTOALLOCATE: specifies that the tablespace is system managed.
Users cannot specify an extent size.
b) UNIFORM: specifies that the tablespace is managed with uniform
extents of SIZE bytes. The default SIZE is 1 megabyte.
2. One cannot create a locally managed SYSTEM tablespace in 8i.
3. This is possible in 9.2.0.x, where the SYSTEM tablespace is created
by DBCA as locally managed by default. With a locally managed
SYSTEM tablespace, dictionary managed tablespaces cannot be created
in the database.
AUTOALLOCATE specifies that extent sizes are system managed.
Oracle will choose "optimal" next extent sizes starting with 64KB. As the
segment grows larger extent sizes will increase to 1MB, 8MB, and
eventually to 64MB. This is the recommended option for a low or
unmanaged environment.
UNIFORM specifies that the tablespace is managed with uniform extents
of SIZE bytes (use K or M to specify the extent size in kilobytes or
megabytes). The default size is 1M. The uniform extent size of a locally
managed tablespace cannot be overridden when a schema object, such as
a table or an index, is created.
Also note: if you specify LOCAL, you cannot specify DEFAULT
STORAGE, MINIMUM EXTENT, or TEMPORARY.
SQL > create tablespace test
datafile '/u01/app/oracle/oradata/sales/test.dbf' size 100M
Extent management Local Autoallocate;
To create DMT
SQL > create tablespace ts2
datafile '/u01/app/oracle/oradata/sales/ts2.dbf' size 50M
extent management Dictionary;
Segment Space Management in LMT:
Use the SEGMENT SPACE MANAGEMENT clause to specify how free and used
space within a segment is to be managed.
Once it is established, segment space management cannot be modified for
the tablespace.
From Oracle 9i, one can have not only bitmap-managed tablespaces but
also bitmap-managed segments, by setting SEGMENT SPACE
MANAGEMENT to AUTO for a tablespace.
Automatic Segment Space Management eliminates the need to specify and
tune the PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters
for schema objects. It improves the performance of concurrent DML
operations significantly, since different parts of the bitmap can be used
simultaneously, eliminating serialization for free space lookups against
the FREELISTS. This is of particular importance when using RAC, or if
"buffer busy waits" are detected.
Manual – This setting uses free lists to manage free space within
segments.
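A locally managed tablespace with automatic segment space management might be created like this sketch (name, path, and sizes illustrative):
SQL > create tablespace ts3
datafile '/u01/app/oracle/oradata/sales/ts3.dbf' size 100M
extent management local uniform size 1M
segment space management auto;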
48
9866465379
49
9866465379
First, by extending the size of a datafile:
SQL > alter database datafile
'/u01/app/oracle/oradata/sales/ts1.dbf' resize 100M;
Second
We can also extend the size of the tablespace by adding a new
datafile to the tablespace.
Third
We can also use the autoextend feature: Oracle will automatically increase
the size of the datafile whenever space is required. Here we can also specify
by how much the file should grow each time and its maximum size.
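For example, adding a datafile and enabling autoextend (the increment and maximum are illustrative):
SQL > alter tablespace ts1 add datafile
'/u01/app/oracle/oradata/sales/ts1b.dbf' size 50M;
SQL > alter database datafile
'/u01/app/oracle/oradata/sales/ts1.dbf'
autoextend on next 10M maxsize 500M;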
USERS NO
TS1 NO
TS2 NO
Coalescing Tablespaces
SQL > alter tablespace ts1 coalesce;
Renaming Tablespaces
Dropping tablespaces
We can drop a tablespace and its contents.
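For example (the RENAME syntax is available from Oracle 10g onwards):
SQL > alter tablespace ts1 rename to ts1_new;
SQL > drop tablespace ts2 including contents and datafiles;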
dbms_space_admin.Tablespace_Migrate_To_Local
dbms_space_admin.Tablespace_Migrate_From_Local.
Converting DMT to LMT
SQL > exec dbms_space_admin.Tablespace_Migrate_To_Local('ts1');
Converting LMT to DMT:
SQL> exec
dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');
oracle/oradata;
OR
1) SQL> alter tablespace <TS name> begin backup;
2) $ cp <old name> <new name>
3) SQL> alter tablespace <TS name> end backup;
4) SQL> alter database datafile <old name> offline;
5) SQL> alter database rename file <old name> to <new name>;
6) SQL> recover datafile <new name>;
7) SQL> alter database datafile <new name> online;
There are two ways to specify alert thresholds for both locally managed
and dictionary managed tablespaces:
● By percent full
For both warning and critical thresholds, when space used becomes
greater than or equal to a percent of total space, an alert is issued.
● By free space remaining (in kilobytes (KB))
For both warning and critical thresholds, when remaining space
falls below an amount in KB, an alert is issued. Free-space-remaining
thresholds are more useful for very large tablespaces.
New tablespaces are assigned alert thresholds as follows:
BEGIN
DBMS_SERVER_ALERT.SET_THRESHOLD(
metrics_id => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
warning_operator => DBMS_SERVER_ALERT.OPERATOR_GT,
warning_value => '0',
critical_operator => DBMS_SERVER_ALERT.OPERATOR_GT,
critical_value => '0',
observation_period => 1,
consecutive_occurrences => 1,
instance_name => NULL,
object_type => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
object_name => 'USERS');
END;
/
DBMS_SERVER_ALERT.SET_THRESHOLD.
Modifying Database Default Thresholds
To modify database default thresholds for locally managed tablespaces,
invoke DBMS_SERVER_ALERT.SET_THRESHOLD as shown in the
previous example, but set object_name to NULL. All tablespaces that
use the database default are then switched to the new default.
● Viewing Alerts
Unlike normal data files, TEMPFILEs are not fully initialised (sparse).
When you create a TEMPFILE, Oracle only writes to the header and last
block of the file.
Tablespace groups
Database altered.
● What Is Undo?
In this case, if you have not already created the undo tablespace (in this
example, undotbs_01), the STARTUP command fails. The
UNDO_TABLESPACE parameter can be used to assign a specific undo
tablespace to an instance in an Oracle Real Application Clusters
environment.
The following is a summary of the initialization parameters for automatic
undo management:
UNDO_MANAGEMENT – If AUTO, use automatic undo management. The
default is MANUAL.
UNDO_TABLESPACE – An optional dynamic parameter specifying the name
of an undo tablespace. This parameter should be used only
when the database has multiple undo tablespaces and you
want to direct the database instance to use a particular
undo tablespace.
● Undo Retention
After a transaction is committed, undo data is no longer needed for
rollback or transaction recovery purposes. However, for consistent read
purposes, long-running queries may require this old undo information for
producing older images of data blocks. Furthermore, the success of
several Oracle Flashback features can also depend upon the availability of
older undo information. For these reasons, it is desirable to retain the old
undo information for as long as possible.
When automatic undo management is enabled, there is always a current
undo retention period, which is the minimum amount of time that Oracle
Database attempts to retain old undo information before overwriting it.
Old (committed) undo information that is older than the current undo
retention period is said to be expired. Old undo information with an age
that is less than the current undo retention period is said to be unexpired.
Oracle Database automatically tunes the undo retention period based on
undo tablespace size and system activity. You can specify a minimum
undo retention period (in seconds) by setting the UNDO_RETENTION
initialization parameter. The database makes its best effort to honor the
specified minimum undo retention period, provided that the undo
tablespace has space available for new transactions. When available space
for new transactions becomes short, the database begins to overwrite
expired undo. If the undo tablespace has no space for new transactions
after all expired undo is overwritten, the database may begin overwriting
unexpired undo information. If any of this overwritten undo information
is required for consistent read in a current long-running query, the query
could fail with the snapshot too old error message.
The following points explain the exact impact of the
UNDO_RETENTION parameter on undo retention:
Retention Guarantee
and the current system load. This tuned retention period can be
significantly greater than the specified minimum retention period.
7. If the undo tablespace is configured with the AUTOEXTEND
option, the database tunes the undo retention period to be
somewhat longer than the longest-running query on the system at
that time. Again, this tuned retention period can be greater than the
specified minimum retention period.
Note:
Automatic tuning of undo retention is not supported for LOBs. This is
because undo information for LOBs is stored in the segment itself and not
in the undo tablespace. For LOBs, the database attempts to honor the
minimum undo retention period specified by UNDO_RETENTION.
However, if space becomes low, unexpired LOB undo information may
be overwritten.
You can determine the current retention period by querying the
TUNED_UNDORETENTION column of the V$UNDOSTAT view. This
view contains one row for each 10-minute statistics collection interval
over the last 4 days. (Beyond 4 days, the data is available in the
DBA_HIST_UNDOSTAT view.) TUNED_UNDORETENTION is given
in seconds.
select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
to_char(end_time, 'DD-MON-RR HH24:MI') end_time,
tuned_undoretention
from v$undostat order by end_time;
tablespace size, the database tunes the undo retention period based on
85% of the tablespace size, or on the warning alert threshold percentage
for space used, whichever is lower. (The warning alert threshold defaults
to 85%, but can be changed.) Therefore, if you set the warning alert
threshold of the undo tablespace below 85%, this may reduce the tuned
length of the undo retention period.
You can activate the Undo Advisor by creating an undo advisor task
through the advisor framework. The following example creates an undo
advisor task to evaluate the undo tablespace. The name of the advisor is
'Undo Advisor'. The analysis is based on Automatic Workload Repository
snapshots, which you must specify by setting parameters
START_SNAPSHOT and END_SNAPSHOT. In the following example,
the START_SNAPSHOT is "1" and END_SNAPSHOT is "2".
DECLARE
tid NUMBER;
tname VARCHAR2(30);
oid NUMBER;
BEGIN
DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
DBMS_ADVISOR.execute_task(tname);
END;
/
After you have created the advisor task, you can view the output and
recommendations in the Automatic Database Diagnostic Monitor in
Enterprise Manager. This information is also available in the
DBA_ADVISOR_* data dictionary views.
You can create more than one undo tablespace, but only one of them can
be active at any one time.
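A second undo tablespace could be created and made the active one like this sketch (name, path, and size illustrative):
SQL > create undo tablespace undotbs_02
datafile '/u01/app/oracle/oradata/sales/undotbs02.dbf' size 200M;
SQL > alter system set undo_tablespace = undotbs_02;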
The preceding example shows how undo space was consumed in the system
during the 24 hours before 14:35:12 on 10/27/2004.
Finding the amount of undo generated in the current session
To illustrate the examples in the later sections of this article, we need to
devise a small transaction (here, a single update statement). We also
need to know the exact amount of undo generated by the statement.
Table-1 shows the creation of a table TEMP1, and shows an UPDATE
on table TEMP1. It uses a query against the dynamic data dictionary views
to find the exact amount of undo generated by the UPDATE. We will
need this value in subsequent examples. The default block size for the
database is 8K.
UNDO Blocks and Bytes generated in a transaction/statement
SQL> create table temp1 as
2 select * from all_objects where rownum < 5001;
Table created.
59 5001 687483932
SQL> commit;
Commit complete.
Since Oracle defers writing to the datafile, there is a chance of a power failure
or system crash before the row is written to disk. That is why Oracle
writes the statement to the redo logfile: in case of a power failure or
system crash, Oracle can re-execute the statements the next time you
open the database.
When you drop logfiles, the files are not deleted from the disk. You have
to use an O/S command to delete the files from disk.
Move the logfile from the old location to the new location using an operating
system command, then:
SQL > startup mount
Change the location in the control file:
SQL > alter database rename file
'/u01/app/oracle/oradata/sales/log1.ora' to
'/u02/app/oracle/oradata/sales/log2.ora';
Open the database.
SQL > alter database open;
SAMPLE MYSPACE
DBSNMP SYSAUX
SCOTT SYSTEM
SYSMAN SYSAUX
SYS SYSTEM
SYSTEM SYSTEM
TSMSYS SYSTEM
DIP SYSTEM
To see quotas in all other tablespaces assigned to a user.
SQL > select username,tablespace_name,max_bytes from
dba_ts_quotas where username='SCOTT';
USERNAME TABLESPACE_NAME MAX_BYTES
------------------------------ ------------------------------ ----------
SCOTT USERS 5242880
SCOTT MYSPACE 10485760
Here, apart from the default tablespace, quotas are given to user
SCOTT in the USERS and MYSPACE tablespaces.
To create a user
We can create a user by using CREATE USER statement.
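A sketch, with an illustrative username, password, and quota:
SQL > create user raju identified by raju123
default tablespace users
temporary tablespace temp
quota 5M on users;
SQL > grant create session to raju;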
BONUS SYSTEM
SALGRADE SYSTEM
Assigning a quota to a user in a tablespace other than the default:
SQL > alter user scott quota 10M on users2;
User altered.
Now we can query dba_ts_quotas to see the tablespace quotas for
user scott.
By default, all objects created by a user go to the user's default tablespace,
but while creating a table the user can select the tablespace:
SQL > create table mytable(name varchar(30))
tablespace users2;
The above table, created by user scott, will be stored in the users2
tablespace.
34 9 INACTIVE DEDICATED
35 60 INACTIVE DEDICATED
Here the user is connected to a dedicated server process.
But if we look at the process IDs using an operating system command,
it simply shows that the processes running on the host computer
belong to the Oracle database server; user scott is not shown.
ram $ ps -e | grep oracle
3702 ? 00:00:00 oracle
3704 ? 00:00:00 oracle
3706 ? 00:00:00 oracle
3708 ? 00:00:00 oracle
3710 ? 00:00:00 oracle
3712 ? 00:00:00 oracle
3714 ? 00:00:01 oracle
3716 ? 00:00:00 oracle
3718 ? 00:00:00 oracle
3720 ? 00:00:00 oracle
3722 ? 00:00:00 oracle
3726 ? 00:00:00 oracle
3734 ? 00:00:00 oracle
3736 ? 00:00:00 oracle
3739 ? 00:00:01 oracle
3903 ? 00:00:00 oracle
3970 ? 00:00:00 oracle
So whatever process is created on the host computer for a user connected
to the database, that user is not known to the OS; the process simply
belongs to oracle.
SYS 3739
SCOTT 3903
SCOTT 3970
We can also terminate a user session: first get the SID and serial number
of the session, then use ALTER SYSTEM KILL SESSION.
SQL > select sid,serial# from v$session where username='SCOTT';
SID SERIAL#
---------- ----------
34 9
35 60
SQL > alter system kill session '34,9';
System altered.
Now if the user tries to run a query, the system gives the following
error message.
SQL> select * from tab;
select * from tab
*
ERROR at line 1:
ORA-00028: your session has been killed
Dropping users
Use the DROP USER statement to remove a database user; optionally
we can also remove the user's objects. Oracle does not drop users whose
schemas contain objects unless we specify CASCADE.
SQL > drop user raju;
user dropped.
SQL > drop user raju cascade;
Privileges
A privilege is a right to execute a particular type of SQL statement or a
right to access another user's objects.
There are two types of privileges -
System Privileges :
create session, sysdba, sysoper etc.
Object Privileges :
select, insert, update etc.
The set of privileges is fixed.
We can grant these privileges to users depending on the
requirement, in two ways: grant a privilege directly to
a user, or create a role with the required privileges and then grant the
role to users.
To see system privileges
SQL > select name from system_privilege_map;
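The role-based approach mentioned above can be sketched as follows; the role name and the privileges chosen here are hypothetical, picked only for illustration:

```sql
-- Hypothetical sketch: bundle privileges into a role, then grant the role
CREATE ROLE myrole1;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW TO myrole1;
GRANT myrole1 TO scott;

-- Check which system privileges the role carries
SELECT privilege FROM role_sys_privs WHERE role = 'MYROLE1';
```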
CONNECT
RESOURCE
DBA
SELECT_CATALOG_ROLE
EXECUTE_CATALOG_ROLE
DELETE_CATALOG_ROLE
EXP_FULL_DATABASE
IMP_FULL_DATABASE
RECOVERY_CATALOG_OWNER
.
ROLE
------------------------------
AQ_ADMINISTRATOR_ROLE
AQ_USER_ROLE
GLOBAL_AQ_USER_ROLE
SCHEDULER_ADMIN
HS_ADMIN_ROLE
OEM_ADVISOR
OEM_MONITOR
MGMT_USER
CONNECT - Includes only the following system privilege: CREATE
SESSION
RESOURCE - Includes the following system privileges: CREATE
CLUSTER, CREATE INDEXTYPE, CREATE OPERATOR, CREATE
PROCEDURE, CREATE SEQUENCE, CREATE TABLE, CREATE
TRIGGER, CREATE TYPE
EXP_FULL_DATABASE - Provides the privileges required to perform
full and incremental database exports and includes: SELECT ANY
TABLE, BACKUP ANY TABLE, EXECUTE ANY PROCEDURE,
EXECUTE ANY TYPE, ADMINISTER RESOURCE MANAGER, and
INSERT, DELETE, and UPDATE on the tables SYS.INCVID,
SYS.INCFIL, and SYS.INCEXP. Also the following roles:
EXECUTE_CATALOG_ROLE and SELECT_CATALOG_ROLE.
When a user is created, the default for active roles is set to ALL.
ALL means all the roles granted to the user are active.
SCOTT RESOURCE
SCOTT CONNECT
We can use the MAX_ENABLED_ROLES parameter to set the
number of roles a user is allowed to have enabled at any time.
To see the privileges assigned to a role
SQL > select role,privilege from role_sys_privs
where role='RESOURCE';
ROLE PRIVILEGE
------------------------------ ----------------------------------------
RESOURCE CREATE SEQUENCE
RESOURCE CREATE TRIGGER
RESOURCE CREATE CLUSTER
RESOURCE CREATE PROCEDURE
RESOURCE CREATE TYPE
RESOURCE CREATE OPERATOR
RESOURCE CREATE TABLE
RESOURCE CREATE INDEXTYPE
Dropping roles
SQL > drop role myrole1;
that determine how applications access the network and how data is
subdivided into packets for transmission across the network. Oracle Net
communicates with the TCP/IP protocol to enable computer-level
connectivity and data transfer between the client and the database server.
By default, the PMON process registers service information with its local
listener on the default local address of TCP/IP, port 1521. As long as the
listener configuration is synchronized with the database configuration,
PMON can register service information with a nondefault local listener or
a remote listener on another node. Synchronization is simply a matter of
specifying the protocol address of the listener in the listener.ora
file and the location of the listener in the initialization parameter file.
If you want PMON to register with a local listener that does not use
TCP/IP, port 1521, configure the LOCAL_LISTENER parameter in the
initialization parameter file to locate the local listener.
For a shared server environment, you can alternatively use the
LISTENER attribute of the DISPATCHERS parameter in the
initialization parameter file to register the dispatchers with a nondefault
local listener. Because both the LOCAL_LISTENER parameter and the
LISTENER attribute enable PMON to register dispatcher information
with the listener, it is not necessary to specify both the parameter and the
attribute if the listener values are the same.
Set the LOCAL_LISTENER parameter as follows:
LOCAL_LISTENER=listener_alias
Using the same listener example, you can set the LISTENER attribute as
follows:
DISPATCHERS="(PROTOCOL=tcp)(LISTENER=listener1)"
STOP Command
START Command
STATUS Command
Output Section - Description
STATUS of the LISTENER - Specifies the following:
● Name of the listener
● Version of the listener
● Start time and up time
● Tracing level
● Logging and tracing configuration settings
● listener.ora file being used
● Whether a password is set in the listener.ora file
● Whether the listener can respond to queries from an SNMP-based
network management system
Listening Endpoints Summary - Lists the protocol addresses the listener
is configured to listen on
Services Summary - Displays a summary of the services registered with
the listener and the service handlers allocated to each service
Service - Identifies the registered service
SERVICES Command
Output
Section Description
Output
Section Description
● Select Listeners from the Administer list, and then select the
Oracle home that contains the location of the configuration files.
● Click Go.
The Listeners page appears.
● Select a listener, and then click Edit.
The Edit Listener page appears.
● Click the Static Database Registration tab, and then click Add.
The Add Database Service page appears. Enter the required
information in the fields.
● Click OK.
Bequeath Session
This enables clients to connect to a database without using a network
listener. The bequeath protocol does not use a network listener; it
internally spawns a dedicated server process for each client application.
This is used for local connections where an oracle database client
application communicates with an oracle database instance running on
the same machine. It works only in dedicated server mode.
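As an illustration (assuming the client runs on the database host and ORACLE_SID is set), a bequeath connection is simply a connect string with no network identifier:

```shell
$ export ORACLE_SID=prod
$ sqlplus scott/tiger        # no @service_name, so no listener is involved
```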
Oracle net services client side configuration
Configuration of the shared server
Oracle's shared server architecture increases the scalability of
applications and the number of clients that can be simultaneously
connected to the database. The shared server architecture also enables
existing applications to scale up without making any changes to the
application itself.
When using shared server, clients do not communicate directly with a
database's server process, a database process that handles a client's
requests on behalf of the database. Instead, client requests are routed to
one or more dispatchers. The dispatchers place the client requests on a
common queue. An idle shared server process from the shared pool of
server processes picks up and processes a request from the queue. This
means a small pool of server processes can serve a large number of
clients.
In the shared server model, a dispatcher can support multiple client
connections concurrently. In the dedicated server model, there is one
server process for each client. Each time a connection request is received,
a server process is started and dedicated to that connection until
completed. This introduces a processing delay.
Shared server is ideal in configurations with a large number of
connections because it reduces the server's memory requirements. Shared
server is well suited for both Internet and intranet environments.
We can query the V$SESSION view to find out whether a session has a
shared server or dedicated server connection.
SQL > select username,server from v$session
where type='USER' and username is not null;
USERNAME SERVER
------------------------------ ---------
RAMESH DEDICATED
MADHU NONE
SYS DEDICATED
Dedicated Servers
Dispatchers
The DISPATCHERS initialization parameter configures dispatcher
processes in the shared server environment. At least one dispatcher
process is required for shared server to work. If we do not specify this
parameter, and shared server is enabled by setting SHARED_SERVERS,
then the oracle database by default creates one dispatcher for the TCP
protocol.
Dispatcher initialization parameter attributes.
Address : This is used to specify the network protocol address of the
end point on which the dispatchers will listen.
Description : This is used to specify the network description of the end
point on which the dispatchers will listen including the network protocol
address.
Protocol : Specify the network protocol for which the dispatcher
generates a listening end point.
Dispatchers : This is used to specify the initial number of dispatchers to
start.
Connections : This will specify the maximum number of network
connections to allow for each dispatcher.
We can add the following line to the initialization parameter file to set the
initial number of dispatchers.
dispatchers = "(protocol=tcp)(dispatchers=2)"
or
dispatchers = "(Address=(protocol=tcp)(host=192.168.100.10))
(dispatchers=2)"
Forcing the Port Used by Dispatchers
To force the dispatchers to use a specific port as the listening endpoint,
add the port attribute as follows
dispatchers = "(address = (protocol = tcp) (port=6000))"
We can alter the number of dispatchers dynamically by using the ALTER
SYSTEM statement.
For example
SQL > alter system set dispatchers = "(index=0)(disp=3)";
We can also shut down a specific dispatcher process.
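A sketch of shutting down one dispatcher; the dispatcher name 'D000' below is hypothetical and would normally be looked up first in V$DISPATCHER:

```sql
-- Find the dispatcher names first
SELECT name, status FROM v$dispatcher;

-- Then shut one down; existing sessions on it are allowed to finish
ALTER SYSTEM SHUTDOWN IMMEDIATE 'D000';
```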
(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)
(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=sales.us.acme.com)
(SERVER=dedicated)))
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
SQL>
Now we can identify database name
SQL> select name from v$database;
NAME
---------
PROD
SQL>
And get the locations of all datafiles, log files
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/oracle/oradata/prod/system01.dbf
/oracle/oradata/prod/undotbs01.dbf
/oracle/oradata/prod/sysaux01.dbf
/oracle/oradata/prod/users01.dbf
/oracle/oradata/prod/example01.dbf
/oracle/oradata/prod/ts1.dbf
6 rows selected.
SQL>
SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------
/oracle/oradata/prod/redo01.log
/oracle/oradata/prod/redo03.log
/oracle/oradata/prod/redo02.log
SQL>
Create pfile from spfile.
SQL> create pfile from spfile;
File created.
SQL>
Now generate the CREATE CONTROLFILE SQL statement
SQL> alter database backup controlfile to trace;
Database altered.
SQL>
A trace file is generated in the location given by the parameter
user_dump_dest.
So identify the location
SQL> show parameter user_dump_dest;
Set ORACLE_SID
[raj@server1 ~]$ export ORACLE_SID=testdb
[raj@server1 ~]$
Then from the source database machine copy all the datafiles, redo logs,
the pfile, and con.sql (which contains the create control file statement)
to the target machine's database files location directory,
i.e. /oracle/testdb
[oracle@linux10 ~]$ cd /oracle/oradata/prod/
[oracle@linux10 prod]$ scp *.dbf *.log raj@192.168.1.202:/oracle/testdb
raj@192.168.1.202's password:
example01.dbf 100% 100MB 9.1MB/s 00:11
sysaux01.dbf 100% 240MB 10.0MB/s 00:24
system01.dbf 100% 480MB 9.8MB/s 00:49
temp01.dbf 100% 20MB 10.0MB/s 00:02
ts1.dbf 100% 100MB 9.1MB/s 00:11
undotbs01.dbf 100% 25MB 12.5MB/s 00:02
users01.dbf 100% 5128KB 5.0MB/s 00:01
redo01.log 100% 50MB 10.0MB/s 00:05
redo02.log 100% 50MB 10.0MB/s 00:05
redo03.log 100% 50MB 10.0MB/s 00:05
[oracle@linux10 prod]$
Then copy pfile and con.sql file
[oracle@linux10 prod]$ scp $ORACLE_HOME/dbs/initprod.ora
raj@192.168.1.202:/oracle/testdb
raj@192.168.1.202's password:
initprod.ora 100% 1033 1.0KB/s 00:00
[oracle@linux10 prod]$ scp
$ORACLE_BASE/admin/prod/udump/con.sql
raj@192.168.1.202:/oracle/testdb
raj@192.168.1.202's password:
con.sql 100% 621 0.6KB/s 00:00
Now we can start the source Database prod.
Then at the target machine perform the following steps to clone.
[raj@server1 ~]$ cd /oracle/testdb/
[raj@server1 testdb]$ ls
[raj@server1 testdb]$ ls
adump example01.dbf redo03.log ts1.dbf
bdump initprod.ora sysaux01.dbf udump
cdump redo01.log system01.dbf undotbs01.dbf
con.sql redo02.log temp01.dbf users01.dbf
[raj@server1 testdb]$
We can see, as listed above, that all the files were successfully
transferred from the source database machine.
Then modify the initialization parameter file as per the requirement and
also modify the con.sql file to create control files.
The current con.sql is as follows
CREATE CONTROLFILE REUSE DATABASE "PROD"
RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 50
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/oracle/oradata/prod/redo01.log' SIZE 50M,
GROUP 2 '/oracle/oradata/prod/redo02.log' SIZE 50M,
GROUP 3 '/oracle/oradata/prod/redo03.log' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
'/oracle/oradata/prod/system01.dbf',
'/oracle/oradata/prod/undotbs01.dbf',
'/oracle/oradata/prod/sysaux01.dbf',
'/oracle/oradata/prod/users01.dbf',
'/oracle/oradata/prod/example01.dbf',
'/oracle/oradata/prod/ts1.dbf'
CHARACTER SET WE8ISO8859P1
;
db_cache_size=905969664
java_pool_size=16777216
large_pool_size=16777216
shared_pool_size=285212672
streams_pool_size=0
*.audit_file_dest='/oracle/testdb/adump'
*.background_dump_dest='/oracle/testdb/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/oracle/testdb/control01.ctl'
*.core_dump_dest='/oracle/testdb/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_files=31
*.db_name='testdb'
*.db_recovery_file_dest='/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=prodXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='location=/oracle'
*.open_cursors=300
*.pga_aggregate_target=413138944
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1239416832
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/testdb/udump'
Save and exit, then change the name of the pfile from initprod.ora to
inittestdb.ora
[raj@server1 testdb]$ mv initprod.ora inittestdb.ora
[raj@server1 testdb]$
If you want you can copy to the default location.
Now start the instance in nomount state.
[raj@server1 testdb]$ sqlplus / as sysdba
Database altered.
SQL>
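The intermediate steps fell on pages missing here; between starting SQL*Plus and querying the new database they are presumably along these lines — start the instance in nomount, build the control file from the edited con.sql, and open with RESETLOGS (paths as used above; exact statements are an assumption):

```sql
SQL> startup nomount pfile='/oracle/testdb/inittestdb.ora';
SQL> @/oracle/testdb/con.sql        -- runs the edited CREATE CONTROLFILE
SQL> alter database open resetlogs;
```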
SQL> select name from v$database;
NAME
---------
TESTDB
INSTANCE_NAME STATUS
---------------- ------------
testdb OPEN
SQL>
no rows selected
SQL> select tablespace_name from dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS
EXAMPLE
TS1
7 rows selected.
SQL>
Add new temporary files to the TEMP tablespace or create a new
temporary tablespace
SQL> alter tablespace temp add tempfile
2 '/oracle/testdb/temp01.dbf' size 100m reuse;
Tablespace altered.
Example
Production server : 192.168.1.201
DBA os User : oracle
DB Name : prod
LOG_MODE
------------
ARCHIVELOG
File created.
SQL>
Then using RMAN perform the backup
[oracle@linux10 ~]$ rman target /
RMAN>
From the above command's output we see that the backup pieces are
stored in the /oracle/backup directory on the source database machine.
Copy all backup pieces to the machine where the cloning is to be done,
into the same directory as on the source database. Here it is /oracle/backup.
Log in to the destination machine and create all necessary directories
[raj@server1 ~]$ mkdir /oracle/backup (to keep backup pieces same
location as on the source machine)
[raj@server1 ~]$ mkdir /oracle/rclone (to keep all database physical files)
[raj@server1 ~]$ mkdir /oracle/rclone/bdump
[raj@server1 ~]$ mkdir /oracle/rclone/adump
[raj@server1 ~]$ mkdir /oracle/rclone/cdump
[raj@server1 ~]$ mkdir /oracle/rclone/udump
[raj@server1 ~]$ export ORACLE_SID=rclone (select an SID)
Then copy pfile and backup pieces to destination machine from source.
[oracle@linux10 backup]$ ls
prod_df709897253_s16_p1 prod_df709897271_s20_p1
prod_df709897253_s17_p1 prod_df709897271_s21_p1
prod_df709897256_s18_p1 prod_df709897272_s22_p1
prod_df709897256_s19_p1
[oracle@linux10 backup]$ scp * raj@192.168.1.202:/oracle/backup
raj@192.168.1.202's password:
prod_df709897253_s16_p1 100% 43MB 8.5MB/s 00:05
prod_df709897253_s17_p1 100% 4215KB 4.1MB/s 00:00
prod_df709897256_s18_p1 100% 381MB 9.8MB/s 00:39
prod_df709897256_s19_p1 100% 201MB 9.6MB/s 00:21
prod_df709897271_s20_p1 100% 6368KB 6.2MB/s 00:01
prod_df709897271_s21_p1 100% 96KB 96.0KB/s 00:00
prod_df709897272_s22_p1 100% 4608 4.5KB/s 00:00
[oracle@linux10 backup]$ cd
[oracle@linux10 ~]$ scp $ORACLE_HOME/dbs/initprod.ora
raj@192.168.1.202:/oracle/rclone
raj@192.168.1.202's password:
initprod.ora 100% 1033 1.0KB/s 00:00
[oracle@linux10 ~]$
Then at the destination machine modify the pfile parameters as per the
requirement, add two additional parameters to change the locations of the
datafiles and logfiles, and then change the name to init$ORACLE_SID.ora
[raj@server1 rclone]$ ls
initprod.ora
[raj@server1 rclone]$ mv initprod.ora
$ORACLE_HOME/dbs/initrclone.ora
[raj@server1 rclone]$
Open the pfile, make the changes and save it; finally it looks as follows.
db_cache_size=905969664
java_pool_size=16777216
large_pool_size=16777216
shared_pool_size=285212672
streams_pool_size=0
*.audit_file_dest='/oracle/rclone/adump'
*.background_dump_dest='/oracle/rclone/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/oracle/rclone/control01.ctl'
*.core_dump_dest='/oracle/rclone/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_files=31
*.db_name='rclone'
*.db_recovery_file_dest='/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.job_queue_processes=10
*.log_archive_dest_1='location=/oracle'
*.open_cursors=300
*.pga_aggregate_target=413138944
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1239416832
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/rclone/udump'
db_file_name_convert = (/oracle/oradata/prod,/oracle/rclone)
log_file_name_convert = (/oracle/oradata/prod,/oracle/rclone)
The last two parameters make RMAN automatically convert the file
names to the new location.
Select Local net service name configuration, then click the next button
Type the service name of the production database, here it is prod, and
then click the next button
Select the Local net service name (the connect string; this can be any
name)
Then click the next button
Now test the connection to the production database (also called here
Target)
[raj@server1 ~]$ sqlplus sys@prod as sysdba
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL>
Now on the destination machine start up the instance in nomount state.
This is also called the auxiliary instance.
[raj@server1 ~]$ export ORACLE_SID=rclone
[raj@server1 ~]$ sqlplus / as sysdba
RMAN>
Then execute Duplicate database command from rman prompt to clone.
RMAN> duplicate target database to 'rclone';
Starting Duplicate Db at 03-FEB-10
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
Templates are used in DBCA to create new databases and clone existing
databases. The information in templates includes database options,
initialization parameters, and storage attributes (for datafiles, tablespaces,
control files, and online redo logs).
Templates can be used just like scripts, but they are more powerful than
scripts because you have the option of cloning a database. Cloning saves
time by copying a seed database's files to the correct locations.
Templates are stored in the following directory:
ORACLE_HOME/assistants/dbca/templates
● Types of Templates
Other changes can be made after database creation using custom scripts
that can be invoked by DBCA
The datafiles and online redo logs for the seed database are stored in a
compressed format in a file with a .dfb extension. The corresponding
.dfb file's location is stored in the .dbc file.
Type : Non Seed : File extension .dbt include datafiles : No
This type of template is used to create a new database from scratch. It
contains the characteristics of the database to be created. Non-seed
templates are more flexible than their seed counterparts because all
datafiles and online redo logs are created to your specification, and
names, sizes, and other attributes can be changed as required.
template" option and select the "From an existing database (structure as
well as data)" sub-option, then click the "Next" button
On the "Source database" screen select the database instance and click the
"Next" button
description for the template, confirm the location for the template files
and click the "Next" button
Now we have a template created and we can use this to create our new
database.
[rajukb@linux10 ~]$ ls
$ORACLE_HOME/assistants/dbca/templates/cloneprod*
/oracle/10g/assistants/dbca/templates/cloneprod.ctl
/oracle/10g/assistants/dbca/templates/cloneprod.dbc
/oracle/10g/assistants/dbca/templates/cloneprod.dfb
[rajukb@linux10 ~]$
Provide the new Service Name for the new database. The SID will
automatically be set to the service name entered. Click "Next"
Let the "File System" option remain checked unless you want to use
ASM or raw for your new database
Select the Database File location and click the next button
Let the default values for the Flash Recovery Area remain as they are and
click "Next"
You can keep the default values for Memory and Sizing here or adjust
them as required
Now we are at the final screen and click the next button
Click the OK button and confirm; within the next few minutes the
database should be up and running.
Note that all user accounts except the system accounts are locked and
expired, so we need to unlock them to allow users to connect to the new
database. Set the passwords for the users and then exit.
Adding to inittab
Startup will be queued to init within 90 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
server1
CSS is active on all nodes.
Oracle CSS service is installed and running under init(1M)
[root@server1 ~]#
To check
[root@server1 ~]# /oracle/10.2/bin/crsctl check css
CSS appears healthy
[root@server1 ~]#
To stop
[root@server1 ~]# /oracle/10.2/bin/crsctl stop crs
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@server1 ~]#
To start
[root@server1 ~]# /oracle/10.2/bin/crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[root@server1 ~]#
WARNING: Re-reading the partition table failed with error 16: Device or
resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@server1 ~]# partprobe
[root@server1 ~]#
Already there are 12 partitions; now we have created 13 and 14.
Now install the ASMLib packages. First check the kernel release and
download the release-specific ASMLib packages.
[root@server1 ~]# uname -r
2.6.9-34.ELsmp
[root@server1 ~]#
[root@server1 ~]# ls
oracleasm-2.6.9-34.ELsmp-2.0.3-1.i686.rpm
oracleasmlib-2.0.4-1.el4.i386.rpm
oracleasm-support-2.1.3-1.el4.i386.rpm
[root@server1 ~]#
[root@server1 ~]# rpm -ivh oracleasm*
Preparing... ########################################### [100%]
1:oracleasm-support
########################################### [ 33%]
2:oracleasm-2.6.9-34.ELsm
########################################### [ 67%]
3:oracleasmlib ###########################################
[100%]
[root@server1 ~]#
After installing, the oracleasm command is created in /etc/init.d; using
this we can configure the ASM library driver and then mark the required
partitions as ASM candidate disks.
[root@server1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
[root@server1 ~]#
Now we can mark the ASM candidate disks
[root@server1 ~]# /etc/init.d/oracleasm createdisk d1 /dev/sda13
Marking disk "d1" as an ASM disk: [ OK ]
[root@server1 ~]# /etc/init.d/oracleasm createdisk d2 /dev/sda14
Marking disk "d2" as an ASM disk: [ OK ]
[root@server1 ~]#
[root@server1 ~]# /etc/init.d/oracleasm listdisks
D1
D2
[root@server1 ~]#
Now login as oracle DBA os user and create ASM disk group using
DBCA by selecting ASM candidate disks.
[raju@server1 ~]$ dbca
Now the ASM instance is running and the disk group is ready, so we can
create a database on this disk group either by using DBCA or manually.
Creating ASM instance and Disk group manually
Instance_type = asm
large_pool_size=12M
asm_diskstring='ORCL:D*'
asm_diskgroups='DG1'
background_dump_dest='/oracle/admin/+ASM/bdump'
core_dump_dest='/oracle/admin/+ASM/cdump'
user_dump_dest='/oracle/admin/+ASM/udump'
NAME PATH
------------------------------ ------------------------------
D1 ORCL:D1
D2 ORCL:D2
SQL>
SQL> select name, type, total_mb, free_mb from v$asm_diskgroup;
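If the disk group is created manually rather than through DBCA, the statement — run from the ASM instance, using the disk names shown by V$ASM_DISK above — would be a sketch like the following. The group name DG1 matches the asm_diskgroups parameter shown earlier; the redundancy choice is an assumption:

```sql
-- External redundancy: no ASM mirroring, protection left to the storage layer
CREATE DISKGROUP dg1 EXTERNAL REDUNDANCY
  DISK 'ORCL:D1', 'ORCL:D2';
```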
ASM instances are started and stopped in a similar way to normal
database instances. The options for the STARTUP command are:
● FORCE - Performs a SHUTDOWN ABORT before restarting the
ASM instance.
● MOUNT - Starts the ASM instance and mounts the disk groups
specified by the ASM_DISKGROUPS parameter.
● NOMOUNT - Starts the ASM instance without mounting any disk
groups.
● OPEN - This is not a valid option for an ASM instance.
The options for the SHUTDOWN command are:
● NORMAL - The ASM instance waits for all connected ASM
instances and SQL sessions to exit, then shuts down.
● IMMEDIATE - The ASM instance waits for any SQL
transactions to complete, then shuts down. It doesn't wait for
sessions to exit.
● TRANSACTIONAL - Same as IMMEDIATE.
● ABORT - The ASM instance shuts down instantly.
● Directories
ALTER DISKGROUP disk_group_1 DROP DIRECTORY
'+disk_group_1/my_dir_2' FORCE;
● Aliases
● Files
Files are not deleted automatically if they are created using aliases, as
they are not Oracle Managed Files (OMF), or if a recovery is done to a
point-in-time before the file was created. For these circumstances it is
necessary to manually delete the files, as shown below.
-- Drop file using an alias.
ALTER DISKGROUP disk_group_1 DROP FILE
'+disk_group_1/my_dir/my_file.dbf';
-- Drop file using a numeric form filename.
ALTER DISKGROUP disk_group_1 DROP FILE
'+disk_group_1.342.3';
-- Drop file using a fully qualified filename.
ALTER DISKGROUP disk_group_1 DROP FILE
'+disk_group_1/mydb/datafile/my_ts.342.3';
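The "Aliases" item above lost its examples to pagination; based on the same ALTER DISKGROUP syntax family used in the file examples, alias management looks roughly like this (directory and file names hypothetical):

```sql
-- Create a user-friendly alias for a fully qualified ASM filename
ALTER DISKGROUP disk_group_1
  ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
  FOR '+disk_group_1/mydb/datafile/my_ts.342.3';

-- Drop the alias (the underlying file is not removed)
ALTER DISKGROUP disk_group_1
  DROP ALIAS '+disk_group_1/my_dir/my_file.dbf';
```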
● ASM Views
The ASM configuration can be viewed using the V$ASM_% views, which
often contain different information depending on whether they are
queried from the ASM instance or a dependent database instance.
View: V$ASM_ALIAS
ASM Instance: Displays a row for each alias present in every disk group
mounted by the ASM instance.
DB Instance: Returns no rows.

View: V$ASM_CLIENT
ASM Instance: Displays a row for each database instance using a disk
group managed by the ASM instance.
DB Instance: Displays a row for the ASM instance if the database has
open ASM files.

View: V$ASM_DISK
ASM Instance: Displays a row for each disk discovered by the ASM
instance, including disks which are not part of any disk group.
DB Instance: Displays a row for each disk in disk groups in use by the
database instance.

View: V$ASM_DISKGROUP
ASM Instance: Displays a row for each disk group discovered by the
ASM instance.
DB Instance: Displays a row for each disk group mounted by the local
ASM instance.

View: V$ASM_FILE
ASM Instance: Displays a row for each file for each disk group mounted
by the ASM instance.
DB Instance: Displays no rows.

View: V$ASM_OPERATION
ASM Instance: Displays a row for each long running operation executing
in the ASM instance.
DB Instance: Displays no rows.
The following example creates a log file with a member in each of the
disk groups dgroup1 and dgroup2.
The following parameter settings are included in the initialization
parameter file:
DB_CREATE_ONLINE_LOG_DEST_1 = '+dgroup1'
DB_CREATE_ONLINE_LOG_DEST_2 = '+dgroup2'
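The ALTER DATABASE statement the example refers to was lost to pagination; with those two parameters set it is presumably just the bare OMF form, which creates one member of the new group in each destination (the size shown is an assumption):

```sql
-- With DB_CREATE_ONLINE_LOG_DEST_1/2 set, OMF places one member
-- of the new log group in each of dgroup1 and dgroup2
ALTER DATABASE ADD LOGFILE SIZE 100M;
```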
We can use the following method to migrate a database from a regular
file system to an ASM disk group.
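The ALTER SYSTEM commands themselves fell on a missing page (only their "System altered." confirmations survive); they are presumably the OMF destination settings that point new file creation at the disk group — the disk group name here is an assumption:

```sql
-- Point new datafile and online log creation at the ASM disk group
SQL> alter system set db_create_file_dest='+DG1' scope=spfile sid='*';
SQL> alter system set db_create_online_log_dest_1='+DG1' scope=spfile sid='*';
```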
System altered.
System altered.
SQL>
Remove the CONTROL_FILES parameter from the spfile so the control
files will be moved to the DB_CREATE_* destination and the spfile gets
updated automatically. If you are using a pfile the CONTROL_FILES
parameter must be set to the appropriate ASM files or aliases.
SQL> alter system reset control_files
2 scope=spfile
3 sid='*';
System altered.
SQL>
Then shutdown the database.
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release
10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
[raju@server1 ~]$
Now start the database in nomount mode
[raju@server1 ~]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Fri Feb 5
12:50:48 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database (not started)
RMAN>
RMAN> startup nomount;
Oracle instance started
Total System Global Area 603979776 bytes
Fixed Size 1220796 bytes
Variable Size 197136196 bytes
Database Buffers 398458880 bytes
database opened
RMAN>
Create new redo logs in ASM and delete the old ones.
SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------
/oracle/oradata/prod/redo03.log
/oracle/oradata/prod/redo02.log
/oracle/oradata/prod/redo01.log
SQL> ALTER DATABASE ADD LOGFILE;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE;
Database altered.
SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
SQL> alter database drop logfile group 3;
Enable change tracking if it was being used
SQL> ALTER DATABASE ENABLE BLOCK CHANGE
TRACKING;