
NETTLINX

INDEX

1. DATABASE ADMINISTRATOR ROLE


2. UNDERSTANDING ORACLE ARCHITECTURE
3. INSTANCE
4. DATABASE CREATION
5. MANAGING ROLLBACK SEGMENTS
6. MANAGING REDOLOG FILES
7. MANAGING CONTROL FILES
8. MANAGING USERS, ROLES & PRIVILEGES
9. MANAGING ORACLE NETWORKING USING NET8
10. MANAGING EXPORTS AND IMPORTS
11. MANAGING BACKUPS AND RECOVERY
12. RECOVERY MANAGER
13. WORKING WITH SQL LOADER
14. TUNING – ENHANCING THE PERFORMANCE OF DATABASE
15. OPTIMIZATION
16. MANAGING MULTI THREADED SERVERS
17. WORKING WITH RAW DEVICES IN ORACLE
18. AUDITING
19. LOCK MANAGEMENT
20. DBMS PACKAGES
21. INSTALLATION OF ORACLE 8.0.5 ON LINUX 5.2
22. INSTALLATION OF ORACLE 8.1.5 ON SUN SPARC
23. DBA LAB EXERCISES

DATABASE ADMINISTRATOR ROLE

A description of the perfect database administrator would be a benevolent king or
queen. To achieve success in this job, the database administrator must keep a tight
rein over the database kingdom. A DBA must surround the database castle with a
moat, lowering the drawbridge only to those who have earned the right to use the
database. Only friends – not foes – can enter the database. Like a benevolent king or
queen, the ideal database administrator is concerned with the safety and security of
the castle. This means appropriate measures are taken and procedures are
established to make sure the database is backed up and secured. The DBA also
measures the performance of the database and takes corrective action to ensure
adequate response time. This is a critical role, which should be present early in the
process.

Finally, the attribute of a perfect DBA that separates him or her from others is the
power and will to not give up until the last minute.

Responsibilities of a DBA:

 Must be responsible for putting security in place to make certain that only the
right people can access the right data. A DBA works closely with the data
architects to implement the database design.
 Must work closely with the technical team to ensure adherence to the policies
and procedures pertaining to the database. This includes development of
policies to control the movement of applications onto a production database.
 Must monitor the growth of the database to ensure the smooth functioning of
daily activities.
 Must monitor performance. This is a critical function of a DBA. He or she must
establish baselines and compare the database performance against them to
ensure adequate performance.
 Must tend to daily administration of the database.
 Must be able to tackle issues as soon as they spring up. A DBA’s position is
one of the most technically challenging roles that exist within all the teams.
 Must be available 24x7.
 Must work closely with the system administrator to install all software and
patches.
 Must have political skills. For example, a DBA might not want to upgrade the
system on the busiest day of the year. Common sense is required.
 Must ensure appropriate backup and recovery procedures are in place to
meet the business requirements. If a project is not backed up and the
database is lost, a month or more of the project team’s work could be lost.
 Installing and upgrading the Oracle server and application tools.
 Configure or aid in the configuration of the computer network.
 Allocate system storage and plan for future storage requirements for the
database system.
 Manage logical & physical database structures.
 Control and monitor user access to the database.
 Tune and troubleshoot the database.
 Plan and implement appropriate backup and recovery strategies for the
database.
 Minimize the database down time.
 Contact Oracle Corporation for technical support.

UNDERSTANDING ORACLE ARCHITECTURE

Oracle structures a database both physically and logically. An Oracle database’s
logical structure is the set of tablespaces and schema objects in the database. A
database’s physical structure is the set of operating system files that store the bits
and bytes of database information on disk. It is important to understand both the
physical and logical components of an Oracle database.

Physical structure:

The physical structure of a database comprises the following files.

 Datafiles
 Redologs
 Control files

Data files:

An oracle database has one or more physical data files that hold the actual data of all
logical structures like tables, indexes, etc. A data file can be associated with only one
database and only one Tablespace.

Redolog files:

The primary function of redologs is to record all the changes made to the database
before they are written to the data files. These files can be mirrored and are used in
performing recovery methods.

Control files:

These files record control information of all files within the database. They are used
to maintain internal consistency and play a vital role in recovery operations. These
are used to maintain internal consistency and play a vital role in recovery operations.
These files can also be mirrored. Oracle automatically modifies control files, which
users cannot edit. They are used to maintain internal consistency and guide during
recovery. It is divided into five parts.
 Information about the database: total no. of data files, redologs and threads
that are enabled (Parallel Server)
 Information about each log group and current log group that LGWR is writing.
 Each member of log group, the size, path, full name, log sequence number
etc.
 Datafile, datafile size, fullname, path, status etc.
 Log history of database.
The logical structure comprises tablespaces and schema objects such as tables,
indexes, views, etc.
Table space:
It is a logical area of storage in a database that directly corresponds to one or more
physical data files.

Schema Objects:

Schema is a logical collection of database objects of a user.


Eg: tables, views, synonyms, sequences, indexes, clusters, database triggers,
procedures, functions, packages & database links.
The relationship among databases, tablespaces and datafiles can be stated as:
 Each database is logically divided into one or more tablespaces.
 One or more datafiles are explicitly created for each tablespace to physically
store the data of all logical structures in a tablespace.
 The combined size of a tablespace’s datafiles is the total storage capacity of
the tablespace.
 The combined storage capacity of a database’s tablespaces is the total storage
capacity of the database.

INSTANCE

A system global area (SGA) and the Oracle background processes constitute an instance.
SGA: It is a shared memory region allocated by Oracle that contains data & control
information for an Oracle instance. An SGA comprises the buffer cache, redolog
buffers and the shared pool area.
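For a quick look at the sizes of these SGA components, you can query V$SGA or use
the SHOW SGA command in Server Manager (the rows returned are Fixed Size,
Variable Size, Database Buffers and Redo Buffers, in bytes):

SVRMGR>CONNECT INTERNAL
SVRMGR>SHOW SGA
SQL>SELECT * FROM V$SGA;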
Buffer Cache: The buffer cache stores the most recently used blocks of data. It can also
hold modified data that has not yet been permanently written to disk. When a row in
a table is updated, the foreground server process reads the block containing that row
from the datafile on disk into the buffer cache and then modifies the data block in
memory. If another user requests new data and no free block is available in the buffer
cache, DBWR is called and blocks from the buffer cache are written to the datafiles
using the LRU (Least Recently Used) mechanism.

Redolog Buffers: It stores redo entries, a log of changes made to a database.

Shared pool: The shared pool comprises the library cache and the dictionary cache.
The library cache stores and shares SQL statements and PL/SQL procedures in memory.

Library cache: When a SQL statement is issued, Oracle parses it and determines the
most efficient execution plan for the statement. Oracle then caches the statement in
the shared pool, and if another user issues the same statement, Oracle shares the
statement already in memory rather than repeating the same steps.

Data Dictionary: Oracle continuously requests and updates information in the
database’s data dictionary. To maximize the performance of this internal operation,
the data dictionary cache holds data dictionary information in memory.

PGA: The PGA (Program Global Area) is the memory buffer that contains data and
control information for a server process.
Eg: A client’s server process uses its PGA to hold the state of the session’s program
variables and packages.
Background Processes:
 DBWR - Database Writer
 LGWR - Log Writer
 CKPT - Checkpoint
 SMON - System Monitor
 PMON - Process Monitor
 ARCH - Archiver
 RECO - Recoverer
 Dnnn - Dispatcher
 LCKn - Lock
 Snnn - Server
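To verify which background processes are actually running for the instance, one
option is to query V$BGPROCESS; filtering on a non-null process address (PADDR) is a
common convention to list only the started processes:

SQL>SELECT NAME, DESCRIPTION FROM V$BGPROCESS WHERE PADDR <> '00';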

DBWR: It writes blocks from the buffer cache to the appropriate data files. It writes
blocks in memory back to disk when DBWR has been idle for a few seconds, when a
foreground server process wants to read a new block into memory but no free buffers
are available, or when Oracle performs a checkpoint.

LGWR: LGWR writes redolog entries generated in the redo log buffer to an on-line
redo log file. As a transaction is carried out, Oracle creates small records called redo
entries that contain just enough information to regenerate the changes made by the
transaction. Oracle temporarily stores these redo entries in the server’s redo log
buffer, a small memory area that caches redo entries for all transactions in the
system. Oracle does not consider a transaction committed until LGWR successfully
writes the transaction’s redo entries and a commit record to the transaction log.
It writes:
 When the redo log buffer is full.
 When a transaction is committed.
 Every 3 seconds.
 When the redo log buffer is one-third full.

CKPT: It is responsible for signaling DBWR at checkpoints and for updating the
headers of all the data files and control files of the database. It is optional; its duty
can be performed by LGWR. The purpose of a checkpoint is to establish mileposts of
transaction consistency on disk. A checkpoint indicates how many of the transaction
log’s redo entries Oracle must apply if a server crash occurs and database recovery is
necessary.
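The checkpoint frequency can be influenced through init.ora parameters; the values
below are only illustrative, not recommendations:

log_checkpoint_interval=10000 (redo log blocks written between checkpoints)
log_checkpoint_timeout=1800 (maximum number of seconds between checkpoints)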

SMON:
 Performs instance recovery at instance startup.
 Recovers other instances that have failed (in a multi-instance configuration).
 Cleans up temporary segments that are no longer in use.
 Recovers dead transactions skipped during crash and instance recovery.
 Coalesces free extents within the database, to make free space contiguous
and easy to allocate.

PMON: It performs process recovery when a user process fails and is also
responsible for cleaning up the cache, freeing the resources that the process was
using. It also checks on dispatcher and server processes and restarts them in case of
failures.

ARCH: It copies on-line redo log files to the configured destination when they are
full. It is active only when the database’s redo log is used in archivelog mode. The
sequential set of archived transaction log files that ARCH creates is called the
archived transaction log.

RECO: RECO is used to resolve distributed transactions that are pending due to a
network or system failure in a distributed database. At timed intervals, the local RECO
attempts to connect to the remote database and automatically completes the commit
or rollback of the local portion of any pending distributed transactions.

Dnnn: It is responsible for routing requests from connected user processes to
available shared server processes and returning responses back to the appropriate
user processes.

Lckn: It is used for inter-instance locking when the Oracle parallel server option is
used.

SCN: A system change number defines a committed version of a database at a precise
moment in time. When a transaction commits, it is assigned an SCN that uniquely
identifies the transaction. It is used in recovery operations. For example, if transaction
A does updates and commits, it may be assigned an SCN value of, say, 30. The next
transaction B that commits five minutes later will receive an SCN value of 31 or
greater. If B receives a value of, say, 37, it means Oracle has assigned the SCNs 30
and 37 to the two transactions A and B; SCNs need not be sequential. The SCN also
plays an important role in distributed databases: when a distributed transaction is
committed, the highest SCN of all database instances involved is given to the
distributed transaction, so there is never a problem with read consistency. At times of
high transaction volume, multiple transactions may commit at once, and the LGWR
process may then write multiple commit records to the online redolog files in a single
write. This is known as a group commit.

Low and High SCN: When a redolog file is filled up, Oracle switches to the next
redolog file. The new redolog file is marked with a low SCN, which is one greater than
the high SCN of the previous log file. The low SCN represents the lowest change
number stored in that log file. Similarly, when the log file is closed, the high SCN mark
is set to the highest SCN recorded in the log file.

You can get more information from V$LOG_HISTORY.
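For example, to see the low and high SCNs recorded for each log in the history (in
Oracle 8 these appear as FIRST_CHANGE# and NEXT_CHANGE#; column names differ
slightly in older releases):

SQL>SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$LOG_HISTORY;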

DATABASE CREATION

Database creation prepares several operating system files so that they work together
as an oracle database. A database needs to be created once, regardless of how many
datafiles it has or how many instances access it.

Prerequisites for creating a database:

 Sufficient memory should be there to start the Oracle instance.


 Sufficient disk space must be available on the computer.

Steps to create a database:


$ vi .bash_profile
export ORACLE_SID=NETTLINX (or any other name)
:wq
then execute the .bash_profile file ($ . .bash_profile) so that the new setting takes effect.

Create initialization (parameter) file by copying from the sample init.ora file to
init<ORACLE_SID>.ora. The name of the file can be anything but the name has to be
specified explicitly at the time of database startup.
$ cd $ORACLE_HOME/dbs
$ cp init.ora initNETTLINX.ora (as your ORACLE_SID=NETTLINX)
Make the necessary changes in your init<ORACLE_SID>.ora file. Eg: if
db_name=DEFAULT change it to db_name=NETTLINX.

$ vi initNETTLINX.ora
db_name=NETTLINX

control_files=(/disk1/oradata/NETTLINX/cont1.ctl,/disk2/oradata/NETTLINX/cont2.ctl)
background_dump_dest=/disk1/oradata/NETTLINX/bdump
user_dump_dest=/disk1/oradata/NETTLINX/udump
core_dump_dest=/disk1/oradata/NETTLINX/cdump
:wq
Create the necessary directories to place database files, redolog files, control files
and the dump_dest directories.

$ cd /disk1/oradata
$ mkdir NETTLINX
$ cd NETTLINX
$ mkdir bdump cdump udump(create these directories as specified in
init<ORACLE_SID>.ora).
$ cd /disk2/oradata
$ mkdir NETTLINX
$ cd /disk3/oradata
$ mkdir NETTLINX

Execute the CREATE DATABASE command, which is defined in the following lines (i.e., a
script written in the file ‘cr8NETTLINX.sql’ using “vi” and then executed).

Now, if you want to create NETTLINX database then issue.

$ vi cr8NETTLINX.sql
CREATE DATABASE NETTLINX
DATAFILE '/disk1/oradata/NETTLINX/system01.dbf' SIZE 25M
LOGFILE GROUP 1 ('/disk1/oradata/NETTLINX/redolog1a.log',
'/disk2/oradata/NETTLINX/redolog1b.log') SIZE 250K,
GROUP 2 ('/disk1/oradata/NETTLINX/redolog2a.log',
'/disk3/oradata/NETTLINX/redolog2b.log') SIZE 250K
CONTROLFILE REUSE;
:wq

NOTE: Defining the controlfile clause is optional, because we will be specifying it in
the init.ora file. Moreover, “REUSE” is used only if the respective file already exists
and is of the same size.

At the $ prompt, issue the command svrmgrl (Server Manager line mode), which will
take you to the “SVRMGR>” prompt.

SVRMGR>CONNECT INTERNAL
SVRMGR>STARTUP NOMOUNT
SVRMGR>@cr8NETTLINX.sql

When you execute this statement, Oracle performs the following operations:

 Creates the controlfile(s) for the database.


 Creates the redolog files for the database.
 Creates the datafiles(s) for the database.
 Creates the System Tablespace and the system rollback segment
 Creates the data dictionary.
 Creates user SYS and SYSTEM
 Specifies the character set used to store data in the database
 Mounts and opens the database for use.

After the above statement is processed, the CATPROC and CATALOG scripts are to be
executed as user “SYS”; these are present in the
“$ORACLE_HOME/rdbms/admin” directory.
The commands are as follows:
SVRMGR>@$ORACLE_HOME/rdbms/admin/catalog.sql #as sys or internal
SVRMGR>@$ORACLE_HOME/rdbms/admin/catproc.sql #as sys or internal
Then connect as system/manager and execute pupbld.sql. The commands are:
SVRMGR>CONNECT SYSTEM/MANAGER
SVRMGR>@$ORACLE_HOME/sqlplus/admin/pupbld.sql
For loading the help into the database, give the following command:

$ cd $ORACLE_HOME/sqlplus/admin/help
$ loadhelp
Then, to get help, go to SQL*Plus:
SQL>HELP <COMMAND>
Eg: to get help on create table
SQL>HELP CREATE TABLE

TABLESPACES

Creation of additional Tablespaces:

A database is divided into one or more logical storage units called Tablespaces. A
database administrator can use the Tablespaces to do the following:

 To control the user access by making it read only or read write.


 Control database size by adding/dropping tablespaces.
 Assign specific space quota for database users.
 Control availability of data by taking individual tablespaces online or offline.
 Perform online database backup or recovery operations.
 Allocate data storage across devices to improve performance.

Remember, for best performance it is recommended to place tables, indexes,
rollback segments and temporary segments in different tablespaces on different
hard drives.

Advantages of having different tablespaces are as follows:


 Separation of user data from data dictionary data (which is in System
tablespace)
 Separation of one application’s data from another’s
 Store different tablespaces datafiles on separate disk drives to reduce I/O
contention
 Separate Rollback segment data from user data
 Take individual tablespaces offline while others remain online.
 Reserve a tablespace for a particular type of database use, such as high
update activity, read-only activity or temporary storage segments
 Back up individual tablespaces while leaving others available to users.

Types of Table spaces (based on functionality of the data).


a) Read/Write(permanent) b) Read-only c) Temporary

The above types can be specified during or after tablespace creation (the default is
permanent). The database NETTLINX [created earlier] requires 4 tablespaces. They can
be created as follows:
SQL>CREATE TABLESPACE USER_NETTLINX DATAFILE
‘/disk1/oradata/NETTLINX/user_NETTLINX01.dbf’ SIZE 2M
DEFAULT STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 1
MAXEXTENTS 50 PCTINCREASE 0);
SQL>CREATE TABLESPACE TEMP_NETTLINX DATAFILE
‘/disk1/oradata/NETTLINX/temp_NETTLINX01.dbf’ SIZE 2M TEMPORARY;
SQL>CREATE TABLESPACE INDEX_NETTLINX DATAFILE
‘/disk1/oradata/NETTLINX/index_NETTLINX01.dbf’ SIZE 2M;
SQL>CREATE TABLESPACE RBS_NETTLINX DATAFILE
‘/disk1/oradata/NETTLINX/rbs_NETTLINX01.dbf’ REUSE;

NOTE: The REUSE option is used by assuming the rbs_NETTLINX01.dbf file already
exists and is of the same byte count. The ALTER TABLESPACE command can be used
to take a tablespace online or offline, to make it read-write or temporary, to rename
or add a datafile, or to change the default storage. The SYSTEM tablespace cannot be
taken offline.

Examples:

1. To add a data file to a TableSpace:


SQL>ALTER TABLESPACE USER_NETTLINX ADD datafile
‘/disk1/oradata/NETTLINX/user_NETTLINX02.dbf’ SIZE 2M;

2. To change the Default Storage parameters:


SQL>ALTER TABLESPACE USER_NETTLINX DEFAULT STORAGE
(INITIAL 10K NEXT 10K MINEXTENTS 2
MAXEXTENTS 20 PCTINCREASE 50);
3. To make a Tablespace online:
SQL>ALTER TABLESPACE USER_NETTLINX ONLINE;

4. To make a Tablespace offline:


SQL>ALTER TABLESPACE USER_NETTLINX OFFLINE
(Normal/immediate/Temporary);
5. To make a Tablespace Read only:
SQL>ALTER TABLESPACE USER_NETTLINX READ ONLY;

6. To make a Tablespace Temporary:


SQL>ALTER TABLESPACE USER_NETTLINX TEMPORARY;

Once a tablespace is made temporary, permanent objects (tables, indexes, etc.)
cannot be created in that tablespace.

7. To change the name of a datafile in a tablespace:


This can be done in two methods:
a. Make the desired table space offline.
SQL>ALTER TABLESPACE USER_NETTLINX OFFLINE;
b. Copy or move the desired datafile to new location at OS level.
$cp /disk1/oradata/NETTLINX/user_NETTLINX01.dbf
/disk2/oradata/NETTLINX/user_NETTLINX01.dbf

c. Issue the alter tablespace command.


SQL>ALTER TABLESPACE USER_NETTLINX RENAME DATAFILE
‘/disk1/oradata/NETTLINX/user_NETTLINX01.dbf’ TO
‘/disk2/oradata/NETTLINX/user_NETTLINX01.dbf’;

d. Bring the tablespace online mode.


SQL>ALTER TABLESPACE USER_NETTLINX ONLINE;

Second Method:

A. Bring the database to mount state.


SVRMGR>CONNECT INTERNAL
SVRMGR>STARTUP MOUNT

B. Copy or move the desired datafile to new location at OS level.


$cp /disk1/oradata/NETTLINX/user_NETTLINX01.dbf
/disk2/oradata/NETTLINX/user_NETTLINX01.dbf

C. Issue the alter database command.


SVRMGR>ALTER DATABASE RENAME FILE
‘/disk1/oradata/NETTLINX/user_NETTLINX01.dbf’ TO
‘/disk2/oradata/NETTLINX/user_NETTLINX01.dbf’;
D. Finally open the database.
SVRMGR>SHUTDOWN
SVRMGR>STARTUP
It is possible to rename a datafile but not a tablespace.

8. To drop a Table space


SQL>DROP TABLESPACE USER_NETTLINX INCLUDING CONTENTS;

9. To Coalesce a Table space:


Smaller contiguous free extents can be coalesced into one large free extent. By
default, SMON (System Monitor) process coalesces the free extents of tablespaces in
the background.
SQL>ALTER TABLESPACE USER_NETTLINX COALESCE;

10. To change the size of a datafile:


SQL>ALTER DATABASE DATAFILE
‘/disk2/oradata/NETTLINX/user_NETTLINX01.dbf’ RESIZE 5M;

11. To extend the size of a datafile automatically


SQL>ALTER DATABASE DATAFILE
‘/disk2/oradata/NETTLINX/user_NETTLINX01.dbf’
AUTOEXTEND ON NEXT 1M MAXSIZE 5M;

12. To extend the datafile:


SQL>ALTER DATABASE DATAFILE
‘/disk1/oradata/NETTLINX/user_NETTLINX01.dbf’
AUTOEXTEND ON NEXT 1M MAXSIZE 5M;

13. To assign table to a specific datafile:


SQL>ALTER TABLE JUNK ALLOCATE EXTENT
(DATAFILE ‘/disk1/oradata/NETTLINX/user_NETTLINX01.dbf’);

STORAGE PARAMETERS

Every Tablespace has default storage parameters. To override the system defaults in
that Tablespace a user can specify the parameters while creating the objects. The
following are the parameters:
INITIAL: The size in bytes of the first extent allocated when a segment is created.
Although the system default is expressed in data blocks, use bytes to set a value for
this parameter. You can also use the abbreviations K and M to indicate kilobytes and
megabytes.
Default: 5 datablocks
Minimum: 2 datablocks
Maximum: Operating system specific

NEXT: The size of the next extent to be allocated for a segment. The second extent
is equal to the original setting for NEXT. From the third extent onward, NEXT is set to
the previous size of NEXT multiplied by (1 + PCTINCREASE/100). You can also use K and
M to indicate kilobytes and megabytes as above.
Default: 5 datablocks
Minimum: 1 datablock
Maximum: Operating system specific

MAXEXTENTS: The total number of extents, including the first, that can ever be allocated
for the segment.
Default: Dependent on the data block size and operating system
Minimum: 1 (extent)
Maximum: Operating system specific
MINEXTENTS: The total number of extents to be allocated when the segment is
created. This allows for a large allocation of space at creation time, even if
contiguous space is not available.

Default: 1 (extent)
Minimum: 1 (extent)
Maximum: Operating system specific

If MINEXTENTS is more than 1, then the specified number of incremental extents is
allocated at creation time using INITIAL, NEXT and PCTINCREASE.

PCTINCREASE: The percentage by which each incremental extent grows over the last
incremental extent allocated for a segment. If PCTINCREASE is 0, then all incremental
extents are the same size. If PCTINCREASE is greater than 0, then each time NEXT
is calculated, it grows by PCTINCREASE. It cannot be negative. It is specified as a
percentage.
Default: 50(%)
Minimum: 0 (%)
Maximum: Operating system specific
NOTE: PCTINCREASE for rollback segments is always 0; it cannot be specified
for rollback segments.
PCTFREE: It is used to set the percentage of a block to be reserved (kept free) for
future updates. Once this threshold is reached, the block is considered full and is no
longer available for inserting new rows.
PCTUSED: It is used to allow a block to be reconsidered for the insertion of new
rows. When the percentage of the block being used falls below PCTUSED, either
through row deletion or through updates reducing column storage, the block is again
available for insertion of new rows.
INITRANS: It reserves a pre-allocated amount of space for an initial number of
transaction entries to access rows in the data block concurrently. Space is reserved
in the header of all data blocks of the associated data or index segment. The default
value is 1 for tables and 2 for clusters.
MAXTRANS: As multiple transactions concurrently access the rows of the same data
block, space is allocated for each transaction’s entry in the block.
Once the space reserved by INITRANS is depleted, space for additional
transaction entries is allocated out of the free space in the block, if available. Once
allocated, this space effectively becomes a permanent part of the block header. The
MAXTRANS parameter is used to limit the number of transaction entries that can
concurrently use data in a data block.
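As a rough illustration, the block-level and extent-level parameters described above
can be combined in a single CREATE TABLE statement; the table name and values
here are only examples, assuming the USER_NETTLINX tablespace created earlier:

SQL>CREATE TABLE EMP_HISTORY (EMPNO NUMBER, ENAME VARCHAR2(30))
PCTFREE 20 PCTUSED 40 INITRANS 2 MAXTRANS 255
TABLESPACE USER_NETTLINX
STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0);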
To change the initial extent of a Table:

SQL>CREATE TABLE JUNK(A NUMBER)STORAGE (INITIAL 10K);


SQL>SELECT * FROM USER_SEGMENTS WHERE SEGMENT_NAME = ‘JUNK’;
SQL>ALTER TABLE JUNK DEALLOCATE UNUSED KEEP 4K;
SQL>SELECT * FROM USER_SEGMENTS WHERE SEGMENT_NAME=’JUNK’;
Insert some rows and check USER_SEGMENTS again.
SQL>DELETE FROM JUNK;
SQL>ALTER TABLE JUNK DEALLOCATE UNUSED KEEP 2K;

NOTE: Check in USER_SEGMENTS; you will see that the initial extent is not decreased,
because you used the DELETE command, which does not reset the high-water mark.
But if you still want to decrease it further, then do
SQL>TRUNCATE TABLE JUNK;
SQL>ALTER TABLE JUNK DEALLOCATE UNUSED KEEP 1K;
SQL>SELECT*FROM USER_SEGMENTS WHERE SEGMENT_NAME = ‘JUNK’;

You can get the information from the following views:

DBA_SEGMENTS
DBA_EXTENTS
DBA_TABLES
DBA_INDEXES
DBA_TABLESPACES
DBA_DATA_FILES

DBA_FREE_SPACE

Listing Tablespaces and Default storage parameters:


SQL>SELECT tablespace_name “TABLESPACE”, initial_extent “INITIAL_EXT”,
next_extent “NEXT_EXT”, min_extents “MIN_EXT”,max_extents “MAX_EXT”,
pct_increase FROM sys.dba_tablespaces;

Listing the Datafiles and Associated Tablespaces of a Database:


SQL>SELECT FILE_NAME,BYTES,TABLESPACE_NAME FROM SYS.DBA_DATA_FILES;

Listing the free space(extents) of each tablespace:


SQL>SELECT TABLESPACE_NAME, FILE_ID,COUNT(*) “PIECES”,
MAX(BLOCKS)”MAXIMUM”,MIN(BLOCKS) “MINIMUM”,AVG(BLOCKS)
“AVERAGE”,SUM(BLOCKS)”TOTAL”
FROM SYS.DBA_FREE_SPACE
WHERE TABLESPACE_NAME = ‘SYSTEM’
GROUP BY TABLESPACE_NAME, FILE_ID;

SUM shows the amount of free space in each tablespace, PIECES shows the
amount of fragmentation in the datafiles of the tablespace, and MAXIMUM shows the
largest contiguous area of space. This query is useful when you are going to create a
new object or you know that a segment is about to extend, and you want to make
sure that there is enough space in the containing tablespace.

Managing partitioned Tables and Indexes

A Partitioned table or partitioned index has been divided into a number of pieces, or
partitions, which have the same logical attributes.

Advantages of partitioning tables:

1. Reduce the possibility of data corruption in multiple partitions.


2. Make it possible to back up and recover each partition independently.
3. Make it possible to control the mapping of partitions to disk drives (important
for balancing I/O load)

1. To Create partitions: creating partitions is very similar to creating a table or


index.

SQL>CREATE TABLE SALES(INVOICE_NO NUMBER,


SALE_YEAR INT NOT NULL,
SALE_MONTH INT NOT NULL,
SALE_DAY INT NOT NULL)
PARTITION BY RANGE(SALE_YEAR,SALE_MONTH,SALE_DAY)
(PARTITION SALES_Q1 VALUES LESS THAN (1994,04,01)TABLESPACE TSA,
PARTITION SALES_Q2 VALUES LESS THAN (1994,07,01)TABLESPACE TSB,
PARTITION SALES_Q3 VALUES LESS THAN (1994,10,01)TABLESPACE TSC,
PARTITION SALES_Q4 VALUES LESS THAN (1995,01,01) TABLESPACE TSD);

2. To Move Table Partitions: you can use the MOVE PARTITION clause to move a
partition.

SQL>ALTER TABLE parts MOVE PARTITION depot2 TABLESPACE ts094 NOLOGGING;
This statement always drops the partition’s old segment and creates a new segment,
even if you don’t specify a new tablespace.

3. To Add Table partitions:

SQL>ALTER TABLE sales ADD PARTITION jan96 VALUES LESS THAN(‘960201’)


TABLESPACE tsx;

4. To Add Index Partitions: You cannot explicitly add a partition to a local index.
Instead, new partitions are added to local indexes only when you add a partition to
the underlying table. You cannot add a partition to a global index because the
highest partition always has a partition bound MAXVALUE.

5. To Drop Table Partitions: Delete the rows from the partition before dropping the
partition.

SQL>DELETE FROM SALES WHERE TRANSID<10000;


SQL>ALTER TABLE SALES DROP PARTITION DEC94;
(OR)
Disable the constraints before dropping the partition.
SQL>ALTER TABLE SALES DISABLE CONSTRAINT DNAME_SALES1;
SQL>ALTER TABLE SALES DROP PARTITION DEC94;
SQL>ALTER TABLE SALES ENABLE CONSTRAINT DNAME_SALES1;

6. To Drop Index Partitions: You cannot explicitly drop a partition from a local index.
SQL>ALTER INDEX NPR DROP PARTITION P1;
SQL>ALTER INDEX NPR REBUILD PARTITION P2;

7. To Truncate Partitioned Tables: You can use the ALTER TABLE TRUNCATE
PARTITION statement to remove all rows from a table partition with or without
reclaiming space.

SQL>ALTER TABLE SALES TRUNCATE PARTITION DEC94;


(OR)
SQL>ALTER TABLE SALES DISABLE CONSTRAINT DNAME_SALES1;
SQL>ALTER TABLE SALES TRUNCATE PARTITION DEC94;
SQL>ALTER TABLE SALES ENABLE CONSTRAINT DNAME_SALES1;
(OR)
SQL>DELETE FROM SALES WHERE TRANSID<10000;
SQL>ALTER TABLE SALES TRUNCATE PARTITION DEC94;
8. To split Table partitions: You can split a table partition by issuing the ALTER TABLE
SPLIT PARTITION statement.

SQL>ALTER TABLE sales SPLIT PARTITION sales_q4 at (1995,12,01) into


(PARTITION sales_q4, PARTITION sales_q5);

9. To split index partitions: You cannot explicitly split a partition in a local index. You
can issue the ALTER INDEX SPLIT PARTITION statement to split a partition in a global
index if the partition is empty.

SQL>ALTER INDEX QUON1 SPLIT
PARTITION CANADA AT (100) INTO
(PARTITION CANADA1 …, PARTITION CANADA2 …);
SQL>ALTER INDEX QUON1 REBUILD PARTITION CANADA1;
SQL>ALTER INDEX QUON1 REBUILD PARTITION CANADA2;

10. To Merge Table Partitions: You can use either of the following strategies to merge
table partitions. To merge partition OSU1 into partition OSU2:

a) Export the data from OSU1


b) Issue the following statement:
SQL>ALTER TABLE OH DROP PARTITION OSU1;
c) Import the data from Step 1 into partition OSU2.
Another way to merge partition OSU1 into partition OSU2:
a) Exchange partition OSU1 of table OH with a “dummy” table.
b) Issue the following statement:
SQL>ALTER TABLE OH DROP PARTITION OSU1;
c) Insert as SELECT from the “dummy” table to move the data from OSU1
back into OSU2.
Exchanging Table Partitions: You can convert a partition into a non-partitioned
table, and a table into a partition of a partitioned table, by exchanging their data
(and index) segments.
Merging Adjacent Table Partitions: The following scenario describes how to merge two
adjacent table partitions. Suppose you have to merge two partitions, FEB95 and
MAR95, of the SALES table by moving the data from the FEB95 partition into the
MAR95 partition.

To Merge the 2 Table Partitions:

1. Create a temporary table to hold the FEB95 partition data.


SQL>CREATE TABLE SALES_FEB95(….) TABLESPACE TS_TEMP STORAGE
(INITIAL 2);

2. Exchange the FEB95 partition segment into the table SALES_FEB95.


SQL>ALTER TABLE SALES EXCHANGE PARTITION FEB95 WITH TABLE
SALES_FEB95 WITHOUT VALIDATION;
Now the SALES_FEB95 table place holder segment is attached to the FEB95
partition.

3. Drop the FEB95 partition, this frees the segment originally owned by the
SALES_FEB95 table.
SQL>ALTER TABLE SALES DROP PARTITION FEB95;

4. Move the data from the SALES_FEB95 table into the MAR95 partition via an
INSERT statement.

SQL>INSERT INTO SALES PARTITION (MAR95) SELECT * FROM SALES_FEB95;


Using the extended table name here is more efficient. Instead of attempting to
compute the partition to which a row belongs, Oracle verifies that it belongs to the
specified partition.

5. Drop the SALES_FEB95 table to free the segment originally associated with the
FEB95 partition.
SQL>DROP TABLE SALES_FEB95;

6. (Optional) rename the MAR95 partition


SQL>ALTER TABLE SALES RENAME PARTITION MAR95 TO FEB_MAR95;

Converting a partition view into a partitioned table: The following scenario describes
how to convert a partition view (also called a “manual partition”) into a partitioned
table. The partition view is defined as follows:

SQL>CREATE VIEW ACCOUNTS AS


SELECT * FROM ACCOUNTS_JAN95
UNION ALL
SELECT * FROM ACCOUNTS_FEB95
UNION ALL
…..
SELECT * FROM ACCOUNTS_DEC95;
TO INCREMENTALLY MIGRATE THE PARTITION VIEW TO A PARTITIONED TABLE:

1. Initially, only the two most recent partitions, ACCOUNTS_NOV95 and


ACCOUNTS_DEC95, will be migrated from the view to the table by creating the
partition table. Each partition gets a temporary segment of 2 blocks (as a
placeholder).

SQL>CREATE TABLE ACCOUNTS_NEW(…)


TABLESPACE TS_TEMP STORAGE (INITIAL 2)
PARTITION BY RANGE (OPENING_DATE)
(PARTITION JAN95 VALUES LESS THAN (‘950201’),
……..
PARTITION DEC95 VALUES LESS THAN (‘960101’));

2. Use the EXCHANGE command to migrate the tables to the corresponding


partitions.

SQL>ALTER TABLE ACCOUNTS_NEW EXCHANGE PARTITION NOV95 WITH
TABLE ACCOUNTS_NOV95 WITH VALIDATION;
SQL>ALTER TABLE ACCOUNTS_NEW EXCHANGE PARTITION DEC95 WITH
TABLE ACCOUNTS_DEC95 WITH VALIDATION;
So now the place holder data segments associated with the NOV95 and DEC95
partitions have been exchanged with the data segments associated with the
ACCOUNTS_NOV95 and ACCOUNTS_DEC95 tables.

3. Redefine the ACCOUNTS view.

SQL>CREATE OR REPLACE VIEW ACCOUNTS AS
SELECT * FROM ACCOUNTS_JAN95
UNION ALL
SELECT * FROM ACCOUNTS_FEB95
UNION ALL
…..
UNION ALL
SELECT * FROM ACCOUNTS_NEW PARTITION(NOV95)
UNION ALL
SELECT * FROM ACCOUNTS_NEW PARTITION(DEC95);

4. Drop the ACCOUNTS_NOV95 & ACCOUNTS_DEC95 tables, which own the


placeholder segments that were originally attached to the NOV95 and DEC95
partitions.

5. After all the tables in the UNION ALL view are converted into partitions, drop the
view and rename the partitioned table to the name of the view.

SQL>DROP VIEW ACCOUNTS;


SQL>RENAME ACCOUNTS_NEW TO ACCOUNTS;

MANAGING ROLLBACK SEGMENTS

Rollback segments store undo information and are used for the following purposes:

 To undo the previous command


 For read consistency and
 For crash recovery
Each Rollback Segment:

 Consists of several rollback entries from multiple transactions.


 Stores block information such as file and block_id, as well as data as it
existed before being modified.
 Must be created under special circumstances and brought online before being
used
 May increase due to large transactions.
 Will automatically shrink back to OPTIMAL if extended beyond it.
 Transactions can be assigned to rollback segments automatically or explicitly.

A rollback segment maintains a transaction table, which is read by SMON during
recovery. When you create a database, the SYSTEM tablespace and the SYSTEM
rollback segment are created; you cannot drop this segment. Depending on the
number of transactions, you have to decide how many rollback segments are
needed. There are two types of rollback segments:

 PUBLIC and
 PRIVATE

A public rollback segment is one that Oracle automatically acquires access to and
brings online during normal database operations. A private rollback segment is used
only if its name is explicitly mentioned in the parameter file. When any datafile or
tablespace is taken offline, Oracle creates a deferred rollback segment in the SYSTEM
tablespace. It contains rollback information that Oracle could not apply to the damaged
or offline tablespace. To check for deferred rollback segments:

SQL>SELECT SEGMENT_TYPE, SEGMENT_NAME FROM DBA_SEGMENTS
WHERE SEGMENT_TYPE LIKE ‘DEFERRED%’;

If you want to create a new Rollback Segment RBSI in Tablespace rbs_NETTLINX


SQL> CREATE ROLLBACK SEGMENT RBSI TABLESPACE RBS_NETTLINX
STORAGE (INITIAL 10K NEXT 10K OPTIMAL 20K MINEXTENTS 2
MAXEXTENTS 5);

If you want to either make it online or change its storage parameters, you have
to use ALTER commands as follows:

To change storage parameters:


SQL>ALTER ROLLBACK SEGMENT RBSI STORAGE (MAXEXTENTS 10);

To make it online or offline:


SQL>ALTER ROLLBACK SEGMENT RBSI ONLINE/OFFLINE;

To bring rollback segments online automatically, set the ROLLBACK_SEGMENTS
parameter in the init<ORACLE_SID>.ora parameter file.
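For example, assuming the rollback segment RBSI created above, the entry in
init<ORACLE_SID>.ora would look like this (add further segment names to the list as
they are created):

$ vi initNETTLINX.ora
rollback_segments=(RBSI)
:wq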

To explicitly assign a Rollback Segment to a transaction:

SQL>SET TRANSACTION USE ROLLBACK SEGMENT RBSI;


To shrink Rollback segment Manually:

SQL>ALTER ROLLBACK SEGMENT RBSI SHRINK TO 100K;

If you do not specify the size, it will shrink to the OPTIMAL size if one is set; otherwise, to MINEXTENTS.

To drop a Rollback Segment: Make it offline and then drop the segment

SQL>ALTER ROLLBACK SEGMENT RBSI OFFLINE;


SQL>DROP ROLLBACK SEGMENT RBSI;

You can get the information from the following views:

DBA_SEGMENTS
USER_SEGMENTS
DBA_ROLLBACK_SEGS
V$ROLLSTAT
V$ROLLNAME

For eg:

1. To display Rollback Segment information:

SQL>SELECT SEGMENT_NAME,TABLESPACE_NAME,STATUS FROM


SYS.DBA_ROLLBACK_SEGS;

2. To check the name, tablespace and its size:

SQL>SELECT SEGMENT_NAME,TABLESPACE_NAME,BYTES,BLOCKS,EXTENTS FROM


SYS.DBA_SEGMENTS WHERE SEGMENT_TYPE = ‘ROLLBACK’;

3. When you take a rollback segment offline, it does not actually go offline until all
active transactions in it have completed. Between the time when you attempt to take
it offline and when it is actually offline, its status in DBA_ROLLBACK_SEGS remains
ONLINE, but it is not used for new transactions. To determine whether any rollback
segments for an instance are in this state, use the following query:

SQL>SELECT NAME, XACTS "ACTIVE TRANSACTIONS" FROM V$ROLLNAME,
V$ROLLSTAT WHERE STATUS = ‘PENDING OFFLINE’ AND
V$ROLLNAME.USN = V$ROLLSTAT.USN;

MANAGING REDOLOG FILES

Redologs record all the changes made to the database. Every database must have at
least two redolog files. These files can be mirrored to avoid a single point of failure.
They are used by Oracle during instance recovery and for media recovery. The files
are written in a circular fashion to save disk space. Filled redolog files are archived
if the database is running in archivelog mode. It is strongly recommended that the
database run in archivelog mode. For example, if power fails abruptly and data in
memory cannot be written to the datafiles, Oracle can still recover the unrecorded
data by applying the redologs to the datafiles. The process of applying the redologs
during a recovery operation is called rolling forward.
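To check whether the database is currently running in archivelog mode, either of the
following can be used:

SVRMGR>CONNECT INTERNAL
SVRMGR>ARCHIVE LOG LIST
SQL>SELECT LOG_MODE FROM V$DATABASE;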

Mirrored REDO logs: The recommended redolog file configuration is at least two
redolog members per group.

 All members of a group of logfiles contain the same information


 Group members are updated simultaneously
 Each group should contain the same number of members as the other groups.

Log switches:

 A log switch occurs when Oracle switches from one redolog to another
 A log switch occurs when LGWR has filled one log file group
 A log switch can be forced by a DBA when the current redo log needs to be
archived.
 A log switch occurs upon database shutdown.

At a log switch the current redolog file is assigned a log sequence number that
identifies the information stored in that redolog and is also used for synchronization.
A checkpoint automatically occurs at a log switch.
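The current group, its log sequence number and whether it has been archived can be
checked, for example, with a query against V$LOG:

SQL>SELECT GROUP#, SEQUENCE#, MEMBERS, ARCHIVED, STATUS FROM V$LOG;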

1. To add a new group to existing database:

SVRMGR>STARTUP MOUNT
SVRMGR>ALTER DATABASE ADD LOGFILE GROUP 3
(‘/disk3/oradata/NETTLINX/redolog3a.log’,
‘/disk4/oradata/NETTLINX/redolog3b.log’) SIZE 500K;
SVRMGR>ALTER DATABASE OPEN;

2. To add a new member to the existing groups:

SVRMGR>STARTUP MOUNT
SVRMGR>ALTER DATABASE ADD LOGFILE MEMBER
‘/disk3/oradata/NETTLINX/redolog2b’ TO GROUP 2;

3. To rename a logfile:

a. Bring the database to mount state.

SVRMGR>ALTER DATABASE CLOSE;

b. Copy or move desired log files to new destination or to new names

$cp /disk3/oradata/NETTLINX/redolog2a.log /disk2/oradata/NETTLINX/redolog2a.log

C. Use Alter database command


SVRMGR>ALTER DATABASE RENAME FILE
‘/disk3/oradata/NETTLINX/redolog2a.log’ TO
‘/disk2/oradata/NETTLINX/redolog2a.log’;

d. finally open the database.


SVRMGR>ALTER DATABASE OPEN;

4. To clear online redolog files:

SVRMGR>ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;

5. To drop a log group or member: if you drop an online group, you will get the
following error: ORA-7360 unable to obtain information about log group

SVRMGR>ALTER DATABASE DROP LOGFILE GROUP 3; /* (to drop a group)*/


SVRMGR>ALTER DATABASE DROP LOGFILE MEMBER
‘/disk3/oradata/NETTLINX/redolog2a.log’;

To drop online redolog group, consider the following points:

 An instance requires at least two groups of online redolog files, regardless of


the number of members in the groups
 You can drop an online redolog group only if it is not the active group. If you
need to drop the active group, first force a log switch to occur.
SQL>ALTER SYSTEM SWITCH LOGFILE;

Make sure an online redolog is archived before dropping it. If you drop a member
from an online group, then you get the following error: ORA-313 Open failed for
member of log group.

To drop a member consider the following points:

 It is OK to drop online redolog files so that a mirrored online redolog
becomes temporarily unsymmetrical. For example, if you are using duplexed groups
of online redolog files, you can drop one member of one group, even though
all other groups have two members each. However, you should rectify this
situation immediately, so that all groups have at least two members, and
thereby eliminate the single point of failure of the online redolog.
 You can drop an online redolog member only if it is not part of the active group.
If you need to drop a member of the active group, first force a log switch to occur.

The information is available in the following views:

V$LOG
V$LOGFILE
V$LOG_HISTORY
V$LOGHIST
V$RECOVERY_LOG

MANAGING CONTROL FILES

Control files are created by Oracle at the locations specified in INIT.ORA. Every Oracle
database should have at least two control files, each stored on a different disk. If a
control file is damaged due to disk failure, the associated instance must be shut down.
Once the disk drive is repaired, the damaged control file can be restored using an
intact copy of the control file and the instance can be restarted.

If one control file is present at /disk1/oradata/NETTLINX/control1.ctl, and the second at
/disk2/oradata/NETTLINX/control2.ctl, and control1.ctl is lost, then:

$ cp /disk2/oradata/NETTLINX/control2.ctl /disk1/oradata/NETTLINX/control1.ctl
SVRMGR>STARTUP

No media recovery is required. By using mirrored control files you avoid unnecessary
problems if a disk failure occurs on the database servers.

Managing the size of the control file: Typical control files are small. The main
determinants of a control file’s size are the values set for the MAXDATAFILES,
MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY and MAXINSTANCES parameters of
the CREATE DATABASE statement that created the associated database. The
maximum control file size is operating system specific.

To check the number of files specified in the control file:

SQL>ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
$ cd /disk1/oradata/NETTLINX/udump
$ cp ora_2065.trc bkup.sql
If the MAXDATAFILES parameter is set to 2 and you try to add a third datafile:
SVRMGR>ALTER TABLESPACE USER_NETTLINX ADD DATAFILE
‘/disk1/oradata/NETTLINX/user03.dbf’ SIZE 1M;

ORA-1503 create control file failed


ORA-1166 file number 3 larger than MAXDATAFILES(2)
Then go to your trace file and change:

$ vi bkup.sql
MAXDATAFILES 10
:wq
SVRMGR>connect internal
SVRMGR>@bkup
SVRMGR>ALTER TABLESPACE USER_NETTLINX ADD DATAFILE
‘/disk2/oradata/NETTLINX/user04.dbf’ SIZE 400K;

Follow the same steps for the LOGFILES and LOGMEMBERS.

Creating an additional copy of control file:

Include the parameter in INIT.ORA


$ cd /disk2/oradata/NETTLINX
$ cp control1.ctl control2.ctl
SVRMGR>STARTUP

1. Trace the control file to udump destination and generate the create control file
syntax:

SQL>ALTER DATABASE BACKUP CONTROLFILE TO TRACE;


$ cd /disk2/oradata/NETTLINX/udump
$ vi ora_2065.trc
$ cp ora_2065.trc orabkup.sql
SVRMGR>SHUTDOWN ABORT
SVRMGR>@orabkup

2. Use the CREATE CONTROLFILE command:

SVRMGR>CREATE CONTROLFILE DATABASE “NETTLINX” RESETLOGS
NOARCHIVELOG
LOGFILE GROUP 1 (‘/data/oradata/NETTLINX/redolog1.log’,
‘/data/oradata/NETTLINX/redolog2.log’) SIZE 250K,
GROUP 2 (‘/data/oradata/NETTLINX/redolog3.log’,
‘/data/oradata/NETTLINX/redolog4.log’) SIZE 250K
DATAFILE ‘/data/oradata/NETTLINX/system01.dbf’ SIZE 25M;
SVRMGR>ALTER DATABASE OPEN RESETLOGS;

3. To drop a control file:

 Shutdown the database


 Edit the CONTROL_FILES parameter in INIT.ORA (remove the entry for the file
to be dropped)
 Restart the database
 The above steps do not delete the file physically from the disk

SVRMGR>SHUTDOWN IMMEDIATE
$ cat initNETTLINX.ora # here we are only observing the line which lists the control
files
control_files=(/disk1/oradata/NETTLINX/control1.ctl)
SVRMGR>STARTUP

MANAGING USERS, ROLES & PRIVILEGES

The primary purpose of managing users, roles and privileges is to establish the
correct level of security for the different types of database users.
Managing Database Users: An organization must establish a database security policy
that defines, among the other things, the appropriate levels of database access for
different types of users. Once this policy is defined, you can then manage database
users easily.

Another area that comes under managing database users is licensing. For your Oracle
Server, you have a license that states how many concurrent users are allowed to
connect to the database. Through the management of users and their access, you
can make sure that your facility complies with the license agreement.

Creating Users: You can create a new database user by using the CREATE USER
dialog box in SQL*DBA or the SQL command CREATE USER. When you create a new
user, giving a username & password is mandatory.

Whereas the following will take default values if not provided.

 Default tablespace
 Temporary tablespace
 Tablespace quotas
 Profile

Default Tablespace: Each user is associated with a default tablespace. When a user
creates a schema object and specifies no tablespace to contain the object, the user’s
default tablespace is used. The default tablespace feature provides ORACLE with
information to direct space usage in situations where an object’s tablespace is not
specified. A user’s default tablespace can be set when the user is created or changed
after the user has been created. If default tablespace option is not specified then the
schema objects of the user will go into the system tablespace of the oracle database.
Always, make sure that the default tablespace option is given while creating a user.
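For example, assuming the USER_NETTLINX tablespace and the user USER_01 created
in the example below, the default tablespace of an existing user could be changed as
follows:

SQL>ALTER USER USER_01 DEFAULT TABLESPACE USER_NETTLINX;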

Temporary Tablespace: Each user is associated with a temporary tablespace. When a


user executes an SQL statement that requires the creation of a temporary segment,
the user’s temporary tablespace contains the temporary segment.

The following command creates the new database user (user_01).

SQL>CREATE USER USER_01 IDENTIFIED BY NETTLINX


DEFAULT TABLESPACE USER_NETTLINX
TEMPORARY TABLESPACE TEMP_NETTLINX
QUOTA 5M ON USER_NETTLINX
QUOTA 5M ON TEMP_NETTLINX
QUOTA 3M ON SYSTEM
PROFILE CLERK;

The identified by clause is used to give the user a password. To change a user’s
password, issue the

ALTER USER command.

SQL>ALTER USER USER_01 IDENTIFIED BY<NEW PASSWORD>;

To drop a user:
SQL>DROP USER USER_01 CASCADE;

Use cascade option if user has any schema objects

PROFILES

System resource limits are managed with the user profiles. A profile is a named set
of resource limits that can be assigned to a user. These resources can generally be
established at the session and statement levels. A session is created every time a
database user connects to the database. If a session level resource limit is exceeded,
the current statement is rolled back and error message is returned to the user.

The database administrator has the option to globally enable or disable profiles. That
is, the DBA has the capability to make specific resource limits apply to all users. To create
a profile issue the create profile command. The following resource limits can be set
during profile creation.

Sessions_per_user : Limits the number of concurrent sessions for the
user.
Cpu_per_session : Limits the CPU time for a session. This is expressed in
hundredths of seconds.
Connect_time : Limits the total elapsed connect time of a session.
Failed_login_attempts : Number of failed login attempts after which the
account is locked.
Password_life_time : Number of days the same password can be used
before it expires.
Password_reuse_max : Number of password changes required before a
password can be reused.
Password_verify_function : PL/SQL function used to verify the complexity of a
new password.
Password_lock_time : Number of days the account remains locked after the
specified number of failed login attempts.
Password_grace_time : Number of days of grace period during which a warning
is issued before the password expires.
Idle_time : Limits the maximum amount of continuous inactive
time in a session.
Logical_reads_per_session : Limits the number of data blocks read in a
session.
Logical_reads_per_call : Limits the number of data blocks read for a
call to process a SQL statement.
Private_sga : Limits the amount of private space a session can
reserve in the system global area. This limit
applies only if you are using a multithreaded
server.
Composite_limit : Limits the total resource cost per session; this is
a composite of resources such as
cpu_per_session, connect_time,
logical_reads_per_session and private_sga.
The following statement creates a profile named “clerk”. This statement defines only a
few of the resources; all others assume the resource limits defined by the DEFAULT
profile. To enable profiles you have to include RESOURCE_LIMIT=TRUE in init.ora, or issue:

SQL>ALTER SYSTEM SET RESOURCE_LIMIT=TRUE;


SYNTAX
SQL>Create profile<profile name>
LIMIT SESSIONS_PER_USER [INTEGER/UNLIMITED/DEFAULT]
CPU_PER_SESSION [………………..do……………………….]
CONNECT_TIME [ ………………..do……………………….]
FAILED_LOGIN_ATTEMPTS [………………..do……………………….]
PASSWORD_LIFE_TIME [………………..do……………………….]
PASSWORD_REUSE_MAX [………………..do……………………….]
PASSWORD_VERIFY_FUNCTION [………………..do……………………….]
PASSWORD_LOCK_TIME [………………..do……………………….]
PASSWORD_GRACE_TIME [………………..do……………………….]
IDLE_TIME [………………..do……………………….]
LOGICAL_READS_PER_SESSION [………………..do……………………….]
LOGICAL_READS_PER_CALL [………………..do……………………….]
PRIVATE_SGA [………………..do……………………….]
COMPOSITE_LIMIT [………………..do……………………….];

For example:

SQL>CREATE PROFILE CLERK LIMIT


SESSIONS_PER_USER 1
CONNECT_TIME 560
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LIFE_TIME 60
PASSWORD_LOCK_TIME 1
PASSWORD_GRACE_TIME 10
CPU_PER_CALL UNLIMITED
IDLE_TIME 15;

The following information is available in the data dictionary for every user and
profile.
 List of users in the database
 Each user’s default tablespace for tables, clusters, and indexes
 Memory usage for each current session
 Space quotas for each user
 Each user’s assigned profile and resources limits
 The cost assigned to each applicable system resource

To Drop profile:
SQL>DROP PROFILE CLERK CASCADE;

Use the CASCADE option if the profile has been assigned to any user.

Note: one cannot drop the DEFAULT profile.

ROLES

Roles are named groups of related privileges that are granted to individual users and
other roles. Roles are created to manage the privileges for a database or to manage
the privileges for a user group. Roles have a certain set of properties that promote
an easier management of database privileges.

The properties are as follows:

 Simplified privilege management

 Dynamic privilege management
 Application-specific security

Creating a Role: The name you provide for the role must be unique among other
user names and roles in the database. Roles are not contained in the schema of the
user. When a role is created it has no privileges associated with it. You must grant
privileges or other roles to a new role. The grant command is used to assign
privileges and roles to the new role. To create a role, one must have the CREATE
ROLE system privilege. The following command creates the role named clerk:

SQL>CREATE ROLE CLERK IDENTIFIED BY NETTLINX;

System defined roles: Oracle provides several predefined roles with the Oracle server.
You can grant and revoke privileges and roles to these predefined roles, just as
you can to any role you define. The following is a list of the Oracle predefined roles
and their granted privileges:

Connect: Alter session, create cluster, create database link, create session, create
sequence, create synonym, and create view.

Resource: create cluster, create procedure, create sequence, create table, create
Trigger

DBA: All system privilege with admin option

EXP_FULL_DATABASE: Select any table, backup any table, and insert, delete and
update on the tables sys.incvid, sys.incfil and sys.incexp.

IMP_FULL_DATABASE: Become user, plus the other privileges needed to perform a
full database import.

DELETE_CATALOG_ROLE: Delete privileges on all dictionary packages for this role.

EXECUTE_CATALOG_ROLE: Execute privileges on all catalog tables and views for


this role.

SELECT_CATALOG_ROLE: Select privileges on all catalog tables and views for this
role.

Altering Roles: You can alter roles using <ALTER ROLE>command:

SQL>ALTER ROLE CLERK IDENTIFIED BY XYZ;


Dropping roles: To drop a role from a database. Use the <DROP ROLE> command.

SQL>DROP ROLE CLERK;

Granting roles: Roles can be granted to users, to other roles, and to public. Public
represents all users of the system. Use the SQL grant command or grant system
privilege. The next statement grants the role manager to the user user_01 with the admin
option.

SQL>GRANT MANAGER TO USER_01 WITH ADMIN OPTION;

A system role can be granted with the admin option. This option enables users to do
the following

 Grant or revoke the role to or from any user or role in the database
 Grant the role with admin option to other users and roles
 Alter or drop the role
 The creator of a role is automatically granted the role with admin option

Revoking roles: Roles can be revoked using the revoke command. The following is
an example of revoke command

SQL>REVOKE CLERK FROM TOM;

You cannot selectively revoke the admin option of a role. To revoke the admin option,
you must revoke the role and then regrant the role without the admin option.

Privileges: A privilege is a permission to execute an action or to access another


user’s object. The following are a few examples of privileges

 The right to access the database


 The right to Select data from another user’s database
 The right to execute another user’s stored procedures
 The right to create new users

These are two categories of privileges: system and object privileges. System
privileges enable the user to perform an action on a type of object, whereas object
privileges give the user permission to perform the action on a specific object.

System privileges: A system privilege is the right or permission to execute a
particular database action on a particular type of object. For example, the right to
create a tablespace is a system privilege. Because system privileges are powerful,
they should be granted only to trusted users, with discretion.

Granting system privileges: System privileges can be granted to users and roles
using the GRANT command. The following statement grants a system privilege to the
user Tom and to the role finance.

SQL>GRANT CREATE SESSION TO TOM, FINANCE;

System privileges cannot be granted along with object privileges and roles in the
same grant command.

Revoking system privileges: System privileges can be revoked using revoke


command. The user must have the admin option for a system privilege being
revoked.

SQL>REVOKE ALL FROM ROBERT;


Object privileges: An object privilege is permission to perform an action on a
specific object, such as a table, package or view. Some database objects don’t have
an associated object privilege. Following are the available object privileges and their
associated objects:

ALTER : Tables, Sequences
DELETE : Tables, Views
EXECUTE : Procedures
INDEX : Tables
INSERT : Tables, Views
REFERENCES : Tables
SELECT : Tables, Views, Sequences
UPDATE : Tables, Views

Granting object privileges: Object privileges can be granted to users and roles
using the grant command. The following statement grants object privileges to the
user Tom and role finance.

SQL>GRANT SELECT, ALTER ON EMP TO TOM, FINANCE;

To grant object privileges: You must own the object specified or have been granted
the object privileges with the grant option.
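For instance, to pass an object privilege on with the grant option (EMP and TOM are
the sample object and user used in the examples here):

SQL>GRANT SELECT ON EMP TO TOM WITH GRANT OPTION;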

Revoking object privileges: Object privileges can be revoked using the REVOKE
command:

SQL>REVOKE UPDATE ON EMP FROM TOM;

To set the maximum session for an instance:

SQL>ALTER SYSTEM SET LICENSE_MAX_SESSIONS = 100;

To set both the warning limit and the maximum limit:

SQL>ALTER SYSTEM
SET LICENSE_MAX_SESSIONS = 64
LICENSE_SESSIONS_WARNING = 54;

To set the user limit:

SQL>ALTER SYSTEM SET LICENSE_MAX_USERS = 30;

To see the current licensing limits:

SQL>SELECT SESSIONS_MAX S_MAX,
SESSIONS_WARNING S_WARNING,
SESSIONS_CURRENT S_CURRENT,
SESSIONS_HIGHWATER S_HIGH,
USERS_MAX
FROM V$LICENSE;

To see the current number of named users defined in the database:

SQL>SELECT COUNT(*) FROM DBA_USERS;

To use password file authentication: Instead of connecting as internal, a user can
connect to the database with administrative authority by following these steps:

1. Create the password file using the ORAPWD utility.

ORAPWD FILE=<filename> PASSWORD=<password> ENTRIES=<max_users>
Ex:

$ ORAPWD FILE=ORAPWDNETTLINX PASSWORD=NETTLINX ENTRIES=3

2. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to Exclusive.


3. Grant SYSDBA or SYSOPER to the user.
SVRMGR>CONNECT INTERNAL
SVRMGR>GRANT SYSDBA TO REDDY; (OR)
SVRMGR>GRANT SYSOPER TO REDDY;
4. The user REDDY can now connect to the database without connecting as internal.
SVRMGR>CONNECT REDDY/NETTLINX AS SYSDBA;
SVRMGR>SHUTDOWN
The V$PWFILE_USERS view gives information about the users registered in the password file.

To connect to the database directly through an OS login account:

1. Create a user named exactly like the OS account (here reddy is the OS account).


SQL>CREATE USER REDDY IDENTIFIED EXTERNALLY;

2. Grant privileges to reddy.


SQL>GRANT DBA TO REDDY;

3. Include the OS_AUTHENT_PREFIX="" parameter in the init<sid>.ora parameter file,

and restart the database so that the changed initialization parameter is read.

4. Connect to the database via “sqlplus” as follows


$ sqlplus /
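As a quick check (optional), once connected through the slash login you can confirm which database user the session maps to:

$ sqlplus /
SQL>SELECT USER FROM DUAL; /*should return REDDY*/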

To see all users and Associated information:

SQL>SELECT USERNAME,PROFILE,ACCOUNT_STATUS FROM DBA_USERS;

To see all Tablespace Quotas:

SQL>SELECT * FROM SYS.DBA_TS_QUOTAS;

NOTE: When specific quotas are assigned, the exact number is indicated in the
MAX_BYTES column. Unlimited quotas are indicated by "-1".
To see all profiles and Assigned limits:

SQL>SELECT * FROM DBA_PROFILES ORDER BY PROFILE;

To see Memory Use per User Session:

SQL>SELECT USERNAME, VALUE || ' BYTES' "CURRENT SESSION MEMORY"
FROM V$SESSION SESS, V$SESSTAT STAT, V$STATNAME NAME
WHERE SESS.SID = STAT.SID
AND STAT.STATISTIC# = NAME.STATISTIC#
AND NAME.NAME = 'session memory';

Examples:

1. The following statement creates the profile prof:


SQL>CREATE PROFILE PROF LIMIT
FAILED_LOGIN_ATTEMPTS 5
PASSWORD_LIFE_TIME 60
PASSWORD_REUSE_TIME 60
PASSWORD_REUSE_MAX UNLIMITED
PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION
PASSWORD_LOCK_TIME 1
PASSWORD_GRACE_TIME 10;
2. To attempt to create a user with the same password as the username, using profile prof:
SQL>CREATE USER REDDY IDENTIFIED BY REDDY PROFILE PROF;

ORA-28003: Password verification for the specified password failed

ORA-20001: Password same as user

3. To create a user with the same password as the username with profile Prof:

SQL>CREATE USER REDDY IDENTIFIED BY REDDY PROFILE PROF;

4. Changing the user's password to reddy again returns an error:

SQL>ALTER USER REDDY IDENTIFIED BY REDDY;


ORA-28007: The password cannot be reused

5. To lock a user’s account:

SQL>ALTER USER REDDY ACCOUNT LOCK;

6. To check the user account status:

SQL>SELECT USERNAME, USER_ID,ACCOUNT_STATUS,LOCK_DATE


FROM DBA_USERS WHERE USERNAME=’REDDY’;

7. To expire a user’s password


SQL>ALTER USER REDDY PASSWORD EXPIRE;

8. To check a user’s account status:

SQL>SELECT USERNAME, USER_ID,ACCOUNT_STATUS,LOCK_DATE


FROM DBA_USERS WHERE USERNAME=’REDDY’;

9. To unlock a user’s account:

SQL>ALTER USER REDDY ACCOUNT UNLOCK;

10. To check the account status

SQL>SELECT USERNAME, USER_ID,ACCOUNT_STATUS,EXPIRY_DATE


FROM DBA_USERS WHERE USERNAME=’REDDY’;

LISTING PRIVILEGE AND ROLE INFORMATION:

 ALL_COL_PRIVS, USER_COL_PRIVS, DBA_COL_PRIVS


 ALL_COL_PRIVS_MADE, USER_COL_PRIVS_MADE
 ALL_COL_PRIVS_RECD, USER_COL_PRIVS_RECD
 ALL_TAB_PRIVS, USER_TAB_PRIVS, DBA_TAB_PRIVS
 ALL_TAB_PRIVS_MADE, USER_TAB_PRIVS_MADE
 ALL_TAB_PRIVS_RECD, USER_TAB_PRIVS_RECD
 DBA_ROLES
 USER_ROLE_PRIVS, DBA_ROLE_PRIVS
 USER_SYS_PRIVS, DBA_SYS_PRIVS
 COLUMN_PRIVILEGES
 ROLE_ROLE_PRIVS, ROLE_SYS_PRIVS, ROLE_TAB_PRIVS
 SESSION_PRIVS, SESSION_ROLES

11. To list all system privilege grants:

SQL>SELECT * FROM SYS.DBA_SYS_PRIVS;

12. To list all Role Grants

SQL>SELECT * FROM SYS.DBA_ROLE_PRIVS;

13. To list object privileges granted to a user:

SQL>SELECT TABLE_NAME,PRIVILEGE,GRANTABLE FROM


SYS.DBA_TAB_PRIVS WHERE GRANTEE = ‘WARD’;

14. To list all the column specific privileges that have been granted

SQL>SELECT GRANTEE, TABLE_NAME, COLUMN_NAME, PRIVILEGE FROM


SYS.DBA_COL_PRIVS;

15. To list all roles currently enabled for the issuer:


SQL>SELECT * FROM SESSION_ROLES;

16. To list all system privileges currently available in the issuer’s Security domain,
both from explicit privilege grants and from enabled roles:

SQL>SELECT * FROM SESSION_PRIVS;

17. To list Roles of the Database

SQL>SELECT * FROM SESSION_ROLES;

18. To list Information About the Privilege Domains of Roles

SQL>SELECT GRANTED_ROLE,ADMIN_OPTION
FROM ROLE_ROLE_PRIVS WHERE ROLE = ‘SYSTEM_ADMIN’;

Password management policy: The database security system depends on passwords
being kept secret at all times. Still, passwords are vulnerable to theft, forgery, and
misuse. To allow for greater control over database security, Oracle's password
management policy is controlled by DBAs through profiles.

Account Locking: When a particular user exceeds a designated number of failed


login attempts, the server automatically locks that user’s account.

SQL>CREATE PROFILE PROF LIMIT


FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 1/24;
SQL>ALTER USER REDDY PROFILE PROF;

The maximum number of failed login attempts for the user REDDY is 4, and the
account will remain locked for 1 hour (1/24 of a day).

Password aging and expiration: DBA can specify a maximum lifetime for passwords

SQL>CREATE PROFILE PROF LIMIT


FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 30
PASSWORD_LIFE_TIME 90;
SQL>ALTER USER REDDY PROFILE PROF;

User reddy can use the same password for 90 days before it expires. DBA can specify
Grace period for password expiry

SQL>CREATE PROFILE PROF LIMIT


FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 30
PASSWORD_GRACE_TIME 3;
SQL>ALTER USER REDDY PROFILE PROF;

It will display warning message to change the password within 3 days.

Password History: DBA can specify a time interval during which users cannot reuse
a password

SQL>CREATE PROFILE PROF LIMIT


PASSWORD_REUSE_TIME 60;

The DBA can also specify the number of password changes the user must make before
the current password can be reused; here it is 3:

SQL>CREATE PROFILE PROF LIMIT


PASSWORD_REUSE_MAX 3
PASSWORD_REUSE_TIME UNLIMITED;

Password Complexity Verification: Oracle's password complexity verification
routine can be specified using a PL/SQL script (utlpwdmg.sql), which also sets the
default profile parameters. The verification routine has the format:

ROUTINE_NAME (
USERID_PARAMETER IN VARCHAR2(30),
PASSWORD_PARAMETER IN VARCHAR2(30),
OLD_PASSWORD_PARAMETER IN VARCHAR2(30)
) RETURN BOOLEAN;
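A minimal sketch of such a routine is shown below (the function name MY_VERIFY_FUNCTION and the two checks are only illustrative; the routine shipped in utlpwdmg.sql is more elaborate and must be created in the SYS schema):

SQL>CREATE OR REPLACE FUNCTION MY_VERIFY_FUNCTION
(USERNAME VARCHAR2, PASSWORD VARCHAR2, OLD_PASSWORD VARCHAR2)
RETURN BOOLEAN IS
BEGIN
-- reject a password equal to the username
IF UPPER(PASSWORD) = UPPER(USERNAME) THEN
RAISE_APPLICATION_ERROR(-20001, 'Password same as user');
END IF;
-- insist on a minimum length of 4 characters
IF LENGTH(PASSWORD) < 4 THEN
RAISE_APPLICATION_ERROR(-20002, 'Password too short');
END IF;
RETURN TRUE;
END;
/
SQL>CREATE PROFILE PROF2 LIMIT PASSWORD_VERIFY_FUNCTION MY_VERIFY_FUNCTION;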

MANAGING ORACLE NETWORKING – NET 8

This facilitates the sharing of data between databases, even if those Databases are
far off and on different types of servers running different operating systems and
communications protocols. Each database server in the distributed database
cooperates to maintain the consistency of the global database.

Suppose I am connected to database NETTLINX on a machine named local_host and I
want to get information from the ORACLE database on a machine named remote_host,
where the user in ORACLE is system/manager. The steps are as follows:

$ cd $ORACLE_HOME/network/admin –At machine local_host (client side machine)

$ vi tnsnames.ora
MY_ALIAS=
(DESCRIPTION=
(ADDRESS=
(PROTOCOL=TCP)
(HOST=REMOTE_HOST)
(PORT=1521)
)
(CONNECT_DATA=(SID=ORACLE))
)
$ cd $ORACLE_HOME/network/admin –At machine remote_host (server side
machine)
$ vi listener.ora
LISTENER=
(ADDRESS_LIST=
(ADDRESS=
(COMMUNITY=TCP.WORLD)
(PROTOCOL=TCP)
(HOST=REMOTE_HOST)
(PORT=1521)
)
)
STARTUP_WAIT_TIME_LISTENER=0
CONNECT_TIMEOUT_LISTENER=0
TRACE_LEVEL_LISTENER=ADMIN
SID_LIST_LISTENER=
(SID_LIST=
(SID_DESC=
(SID_NAME=ORACLE)
(ORACLE_HOME=<Oracle_home>)
)
)
:wq
/*save this file and start the listener.*/

$lsnrctl start /*do these commands on the remote host to start the listener*/
$sqlplus system/manager@my_alias -At local host.
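To verify the configuration (optional; both utilities are part of the Net8 installation), you can check the alias resolution from the client and the listener status on the server:

$ tnsping my_alias /*at local_host: confirms the alias resolves and the listener answers*/
$ lsnrctl status /*at remote_host: shows the services registered with the listener*/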

Database Links: Database links are used to access schema objects in a remote
database from the local database. To create a database link at the local database, the
syntax is:

SQL>CREATE DATABASE LINK LNK1


CONNECT TO XYZ IDENTIFIED BY XYZ
USING ‘MY_ALIAS’;
SQL>SELECT*FROM TAB@LNK1; /*gives the table information of USER XYZ*/
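DML against the remote schema's tables also goes through the link (a sketch, assuming the remote XYZ schema has an EMP table and the link user holds the necessary privileges):

SQL>UPDATE EMP@LNK1 SET SAL = SAL*1.1 WHERE DEPTNO = 10;
SQL>DELETE FROM EMP@LNK1 WHERE EMPNO = 9999;
SQL>COMMIT;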

You cannot perform DDL (CREATE, ALTER, DROP) operations through database links.
You can use INSERT, UPDATE and DELETE commands through database links if the
necessary permissions are available. If you want to remove the database link:

SQL>DROP DATABASE LINK <LINK NAME>;

SNAPSHOTS: A snapshot can be thought of as a table that holds the results of a
query, usually on one or more tables (called master tables) in a remote database.
When snapshots are used, a refresh interval is established to schedule refreshes of
the replicated data. Local updates can be prevented, and transaction-based refreshes
can be used; available for some types of snapshots, they send from the master database
only those rows that have changed for the snapshot. You need the CREATE SNAPSHOT,
CREATE TABLE, CREATE VIEW & CREATE INDEX privileges.

The queries that form the basis of snapshots are grouped into two categories.

Simple and complex snapshots: A simple snapshot's defining query has no GROUP
BY or CONNECT BY clauses, subqueries, joins or set operations. If a snapshot's
query has any of these clauses or operations, it is referred to as a complex snapshot.
When a snapshot is created, several internal objects are created in the schema of the
snapshot; these objects should not be altered. To create a snapshot the steps are as
follows:

SYNTAX:
SQL>CREATE SNAPSHOT <snapshot name>
REFRESH [COMPLETE|FAST]
WITH [PRIMARY KEY|ROWID]
START WITH SYSDATE
NEXT SYSDATE+1/(24*60*60) [i.e. every second]
AS SELECT * FROM <table name@link name>;

If you create a snapshot with the refresh fast option, then you need to create a snapshot
log on the master table at the remote site (i.e., at the server side).

SQL>CREATE SNAPSHOT LOG ON <TABLE NAME>;

COMPLETE: Entire data is regenerated every time the snapshot is refreshed.

FAST: only the rows that are modified are regenerated every time the snapshot is
refreshed using the snapshot log. Changed information is stored in the snapshot log.
Snapshot log is a table in the master database that is associated with the master
table. Oracle uses a snapshot log to track the rows that have been updated on the
master table.

Eg: If LINK1 has the order table, on which I want to create the snapshot then,

SQL>CREATE SNAPSHOT SNAP1


REFRESH COMPLETE
WITH ROWID
START WITH SYSDATE
NEXT SYSDATE + 1/(24*60*60)
AS SELECT * FROM ORDER@LINK1;

If you want to make your snap1 to refresh fast then,

SQL>ALTER SNAPSHOT SNAP1 REFRESH FAST;

If you want to drop the snapshot then,

SQL>DROP SNAPSHOT SNAP1;
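Between the scheduled refreshes a snapshot can also be refreshed on demand through the DBMS_SNAPSHOT package ('C' requests a complete refresh, 'F' a fast one):

SQL>EXECUTE DBMS_SNAPSHOT.REFRESH('SNAP1','C');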

Two-phase commit: ORACLE automatically controls and monitors the commit or

rollback of a distributed transaction and maintains the integrity of the global
database using a mechanism called two-phase commit. This mechanism guarantees
that all nodes participating in a distributed transaction either commit or roll back the
transaction, thereby maintaining integrity. It has two phases.

Prepare Phase: The initiating node asks all the participants to prepare (either to
commit or to rollback, even if there is a failure)

Commit Phase: If all participants respond to the initiating node that they are prepared,
the initiating node asks all nodes to commit the transaction; if any participant cannot
prepare, it asks all nodes to roll back the transaction. If a transaction fails for any
reason, its status is recorded at the commit point site (the node with the highest
commit point strength, determined by the COMMIT_POINT_STRENGTH initialization
parameter). In-doubt transactions are automatically resolved by the RECO process and
automatically removed from the pending transaction table.
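In-doubt distributed transactions that RECO has not yet resolved can be inspected in the DBA_2PC_PENDING view, for example:

SQL>SELECT LOCAL_TRAN_ID, STATE FROM DBA_2PC_PENDING;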

MANAGING EXPORTS AND IMPORTS

There are two types of backups: Logical and Physical.


Logical backup can be done with export utility where as Physical backup is again
divided into COLD backup and HOT backup, which will be dealt in the next chapter.

Export & import: Export is an ORACLE utility used to store ORACLE database data in
export format (.dmp) files for later retrieval. These files can later be used to write the
data back into an ORACLE database via Import. Import is used to retrieve data found
in export format files and load it into an ORACLE database.

Main Tasks:

 Data archival
 Upgrading to new releases
 Backing up Oracle database
 Moving between Oracle databases

Export’s basic function is to extract the object definition and table data, from an
Oracle database and store them in Oracle binary format. There are three levels of
export.

 Table level
 Schema level
 Database level

SYNTAX:

$ exp <username/password> parameters;

Export Parameters:

BUFFER …………….. Size of the data buffer
FILE …………….. Output file (default 'expdat.dmp')
COMPRESS …………….. Import data into one extent (Default 'Y')
GRANTS …………….. Export grants (Default 'Y')
INDEXES …………….. Export indexes (Default 'Y')
ROWS …………….. Export data rows (Default 'Y')
CONSTRAINTS …………….. Export constraints (Default 'Y')
LOG …………….. Logfile of screen output
FULL …………….. Export the entire database (Default 'N')
OWNER …………….. List of owner names
TABLES …………….. List of table names
INCTYPE …………….. Incremental export type (Incremental, Cumulative, Complete)
FEEDBACK …………….. Display progress every X rows (default '0')
PARFILE …………….. Parameter file
POINT_IN_TIME_RECOVERY …………….. Tablespace point-in-time recovery (N)
RECOVERY_TABLESPACES …………….. List of Tablespace names to recover
VOLSIZE …………….. Number of bytes to write to each tape volume
Point_In_Time_Recovery: It indicates whether or not the export utility exports
one or more tablespaces in an Oracle database. On import, you can recover the
tablespace to a prior point in time without affecting the rest of the database.

Recovery_Tablespaces: Specifies the Tablespace that will be recovered using


point_in_time recovery.

Eg:

1. If you want to export full database

$ exp system/manager file=exp.dmp log=exp.log full=y

2. If you want to export Reddy’s object,

$ exp system/manager file=reddy.dmp log=reddy.log owner=reddy

3. If you want to export a table emp which is owned by steeve,

$ exp steeve/steeve file=steeve.dmp log=steeve.log tables=emp

4. If you want to export your database, with only those tables that are changed
after the previous complete backup.

$ exp system/manager file=inc.dmp log=inc.log inctype=incremental

5. If you want to export tables emp, dept which are owned by reddy, with no
constraints,

$exp reddy/tiger file=reddy1.dmp log=reddy1.log tables=emp,dept constraints=n


OR
$cat demo.par –Creating an export parameter file.
file=reddy1.dmp
log=reddy1.log
tables=emp,dept
constraints=n
$exp reddy/tiger parfile=demo.par

6. If you want to export a partition: if the emp table has two partitions m and z, the
following exports only partition m from table emp.

$exp reddy/tiger tables=emp:m rows=y

Incremental, cumulative and complete Exports

Incremental Exports: An incremental export backs up only tables that have

changed since the last incremental, cumulative, or complete export. An incremental
export exports the table definition and all of the table's data, not just the changed rows.

Cumulative Exports: A Cumulative export backs up tables that have changed since
the last cumulative or complete export. A cumulative export compresses a number of
incremental exports into a single cumulative export file.
Complete Exports: A complete export establishes a base for incremental and
cumulative exports. It is also similar to full database export except it updates the
tables that track incremental and cumulative exports.

Assume that as manager of a data center, you do the following tasks:

 A complete export (X) every three weeks
 A cumulative export (C) every Sunday
 An incremental export (I) every night

Your export schedule is as follows:

Day 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

X I I I I I I C I I I I I I C I I I
S M T W T F S S M T W T F S S M T W

To restore through day 18, first you import the system information from the
incremental export taken on day 18. Then you import the data from:

1. The complete export taken on day 1


2. The cumulative export taken on day 8
3. The cumulative export taken on day 15
4. Three incremental exports taken on days 16,17,18

Import Parameters:

BUFFER …………….. Size of the data buffer
FILE …………….. Input file (default 'expdat.dmp')
GRANTS …………….. Import grants (Default 'Y')
INDEXES …………….. Import indexes (Default 'Y')
ROWS …………….. Import data rows (Default 'Y')
LOG …………….. Logfile of screen output
FULL …………….. Import the entire file (Default 'N')
TABLES …………….. List of table names
INCTYPE …………….. Incremental import type (system, restore)
PARFILE …………….. Parameter file
SHOW …………….. List file contents only (Default 'N')
IGNORE ……………… Ignore create errors (Default 'N')
DESTROY ……………… Overwrite Tablespace datafile (Default 'N')
INDEXFILE ……………… Write table/index DDL to the specified file
FROMUSER …………….. From which user
TOUSER ……………… To which user you want to import
COMMIT ……………… Commit array insert (Default 'N')
POINT_IN_TIME_RECOVER …………. Whether or not import recovers one or more
Tablespaces

SKIP_UNUSABLE_INDEXES …………. Whether or not import skips building indexes
that were set to the index unusable state
For eg:

1. If you want to import all reddy’s object to khanna

$ imp system/manager file=reddy.dmp log=reddy_imp.log fromuser=reddy

touser=khanna

2. If you want to import full database

$imp system/manager file=exp.dmp log=imp.log

3. If you want to import only emp and dept into steeve

$ imp system/manager file=reddy1.dmp tables=emp,dept fromuser=reddy

touser=steeve

4. If you want to import only the data of emp and ignore errors

$ imp system/manager file=steeve.dmp log=steeve_ign.log ignore=y

5. Migrating data across partitions and tables

$imp reddy/tiger file=export.dmp tables=(e:qc,e:qd) ignore=y

Importing Incremental, Cumulative and Complete export files: If you have the
following export files:
 One complete export X1
 Two cumulative exports C8 and C15
 Three incremental exports I16, I17 and I18

You should import in the following order:

$ imp system/manager inctype=system full=y file=I18
$ imp system/manager inctype=restore full=y file=X1
$ imp system/manager inctype=restore full=y file=C8
$ imp system/manager inctype=restore full=y file=C15
$ imp system/manager inctype=restore full=y file=I16
$ imp system/manager inctype=restore full=y file=I17
$ imp system/manager inctype=restore full=y file=I18

Postponing the local index maintenance:

SQL>ALTER TABLE T MODIFY PARTITION P2 UNUSABLE LOCAL INDEXES;


$imp reddy/tiger file=export.dmp tables=(t:p1,t:p2) ignore=y
skip_unusable_indexes=y
SQL>ALTER TABLE T MODIFY PARTITION P2 REBUILD UNUSABLE LOCAL
INDEXES;
We do logical Backups for:
 Moving database objects from one database to another
 Moving database objects from one user account to another user account
 When we are upgrading from one Oracle version to another
 When migrating from one platform to another platform
 As an alternative to the regular COLD or HOT backup (physical), so that if the
tape doesn't work at least we can recover from the logical backup.
 For a Database REORG, which means: after a while of use the database gets too
many extents (for its SEGMENTS), and the only way to get rid of them is by
making
a. a FULL export,
b. re-creating the database, and
c. a FULL import
(a sketch of this sequence follows this list).
By doing so, all the segments will be created with an INITIAL extent as big as the
table's space (basically it adds up all the extents' space and makes one INITIAL extent).
Another important thing that a REORG ensures is the INTEGRITY of Oracle data. Although
we probably never see any problems with the data, there is a small chance that
Oracle gets some bad data because of disk-block corruption, which would hide in the
dark until you try to select the data from that table. These kinds of intricacies can
be avoided by doing REORGs at least every 4 months.
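A minimal sketch of the REORG sequence (the file name full_reorg.dmp is only a placeholder):

$ exp system/manager full=y file=full_reorg.dmp log=exp_reorg.log buffer=2000000
/*shut down and re-create the database using your creation scripts*/
$ imp system/manager full=y file=full_reorg.dmp log=imp_reorg.log buffer=2000000 commit=y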

Points to remember before doing export or import:


1. Always specify the LOG parameter when you do an EXP, to know exactly what
happened while you were away from the machine. It will also be helpful
at the time of IMP so that we can compare the LOG files and make sure all
tables are getting the right number of rows.
2. Always specify BUFFER=2000000 so that it overrides the system default BUFFER
parameter, which is only 64K; this means whenever EXP has written 64K worth of
data to memory (not the SGA), it tries to write that data to the EXPORT FILE.
3. Always specify FILE=<filename>, otherwise Oracle starts writing to a default
file called EXPDAT.DMP, by which you may overwrite an earlier export file
which might be one that is needed.
4. Always perform EXP by giving the complete syntax (command line) rather
than performing it by the interactive method.
5. Whenever you are doing EXP or IMP don't do it at the prompt; instead
write the command in a file and execute it as a SHELL program, and not
directly, but by using either NOHUP or AT (UNIX commands). This will
ensure that even if there is a problem with your terminal or modem, the
command will still run as a background process.
6. After the export finishes, try to compress the EXPORT file so that it won't occupy much
space.
7. If you are doing EXP FULL=Y, and if the database size is relatively large, then
don't create indexes as part of IMP; rather IMP with INDEXES=N and
create the indexes manually after the IMP is done.
8. EXP & IMP are the only way to get all the SCHEMA definitions. You can get the
SCHEMA definitions by giving the following commands.
$ exp reddy/tiger file=expreddy.dmp log=expreddy.log
$ imp reddy/tiger file=expreddy.dmp log=reddy_DDL.sql

9. Do these commands immediately one after the other without dropping any
tables or deleting any data or objects. The import fails since we are trying
to create all of REDDY's objects once again when they are already there. So the
IMP process will generate errors and dump them into the log file. Once the IMP
finishes, we can go into the log file and, by removing the error messages, we
can get the entire SCHEMA definition (undocumented).

10. In point 7, we discussed that indexes should be created after the IMP, but we
don't have any SQL script to generate the indexes. Please see the code:
$exp sys/sys full=y file=expfull_Mar08.dmp log=expfull.log
buffer=2000000
$imp sys/sys full=y file=expfull_Mar08.dmp
indexfile=cr8_indexes.sql
(At this point we didn’t create any objects, except Oracle writes all the
INDEXES information to this file)
Now re-create the database and
$imp sys/sys full=y file=expfull_Mar08.dmp log=impfull.log
buffer=2000000 indexes=n commit=y
Edit the file “cr8_indexes.sql” since it has info like this:
CONNECT REDDY;
CREATE INDEX……..
CONNECT STEVE;

As we know, running this file would fail because no password is given with the
CONNECT commands. The CONNECT statements can be replaced by issuing the command:

SQL>ALTER SESSION SET CURRENT_SCHEMA = REDDY;

This substitution can be made for the whole file via the vi editor:

:1,$s/CONNECT/ALTER SESSION SET CURRENT_SCHEMA =/

This is very helpful since we don't really know all the passwords; instead of logging in
as each user we act as that user indirectly, from SYS or SYSTEM.

11. If you have large tables and if the RBS are not big enough to store the entire
table's information, you should use COMMIT=Y at the time of import (as shown
in the above example). This will ensure that the data is committed to the TABLE
whenever the BUFFER is full, which won't fill up the Rollback Segments. There is a
disadvantage in doing COMMIT=Y, which is: if the IMP fails in the middle (for
any kind of reason), the last imported table might contain a partial number of
rows, and this could cause further failures when this table acts as a master table
for other tables.

In the above scenario, it is best to just drop the last table and start the same
command once again.
MANAGING BACKUPS AND RECOVERY

Cold Backup: A cold backup is taken when the database is shut down normally. The
following files should be backed up:
 All datafiles
 All control files
 All online redo logs
 The init.ora (optional)

It is easy to take a backup of datafiles if the database architecture uses a consistent


directory structure. Having all of these files backed up while the database is closed
provides a complete image of the database as it existed at the time it was closed.

The full set of these files could be retrieved from the backups at a later date and the
database would be able to function. It is not valid to perform a file system backup of
the database while it is open, unless a hot backup is performed. The steps to take a
cold backup of ABC database are as follows:

$ mkdir BKUP
$ vi getfiles.sql
SET ECHO OFF
SET PAUSE OFF
SET FEED OFF
SET HEAD OFF
SPOOL cold_backup.sh
SELECT 'cp '||NAME||' BKUP' FROM V$DATAFILE;
SELECT 'cp '||NAME||' BKUP' FROM V$CONTROLFILE;
SELECT 'cp '||NAME||' BKUP' FROM V$LOGFILE;
SPOOL OFF
$ svrmgrl
SVRMGR>CONNECT INTERNAL
SVRMGR>startup
SVRMGR>@getfiles.sql
SVRMGR> SHUTDOWN IMMEDIATE
SVRMGR>EXIT
$sh cold_backup.sh /*Taking the cold backup to BKUP directory*/
$cd BKUP /*Changing to BKUP directory*/
$ls /*Checking the contents*/

Taking cold backup using tar command:


$ vi getfiles.sql
SET ECHO OFF
SET HEAD OFF
SET FEED OFF
SPOOL cold.sql
SELECT NAME FROM V$DATAFILE;
SELECT NAME FROM V$CONTROLFILE;
SELECT NAME FROM V$LOGFILE;
SPOOL OFF
EXIT
:wq
SQL>@getfiles.sql
SVRMGR>CONNECT INTERNAL
SVRMGR>SHUTDOWN
SVRMGR>EXIT
$ file=`cat cold.sql` /*stores the file names in a variable*/
$ echo "tar Pcvf /disk5/oradata/NETTLINX/cold.tar" $file>>tar.cmd /*generates the tar
command*/
$ sh tar.cmd /*taking the backup*/

To restore the database use:


$ tar xvf cold.tar

Hot backup: A hot backup is taken when the database is up and running in archive
log mode. A hot backup is taken tablespace by tablespace, which is also the
recommended method. You must put the tablespace in BEGIN BACKUP mode (using
the ALTER TABLESPACE command) and, after finishing the copy, set it back to END
BACKUP mode. It is worth noting that a hot backup will generate a lot of redo
entries.

$ mkdir HOTBKUP


$ vi hot.sql

SET ECHO OFF


SET FEEDBACK OFF
SET HEADING OFF
SPOOL hot_backup.sql
SELECT ‘SPOOL hot.log’ || CHR(10) FROM DUAL;
SELECT 'ALTER TABLESPACE '||TABLESPACE_NAME||' BEGIN BACKUP;'
||CHR(10)||'!cp '||FILE_NAME||' /disk5/oradata/NETTLINX/HOTBKUP'
||CHR(10)||'ALTER TABLESPACE '||TABLESPACE_NAME||' END BACKUP;'
FROM DBA_DATA_FILES;
SELECT 'ALTER DATABASE BACKUP CONTROLFILE TO '||
'''/disk5/oradata/NETTLINX/HOTBKUP/control.new'''||';' FROM DUAL;
SELECT CHR(10) FROM DUAL;
SELECT 'ALTER SYSTEM SWITCH LOGFILE;' FROM DUAL;
SELECT CHR(10) FROM DUAL;
SELECT ‘EXIT’ FROM DUAL;
SPOOL OFF
@hot_backup.sql /*execute the SQL file just generated*/
EXIT
$cd /disk5/oradata/NETTLINX/HOTBKUP /*To change to HOTBKUP directory*/
$ ls –l

Taking a hot backup using cursors:

$ vi hot.sql
SET SERVEROUTPUT ON
SET ECHO OFF
SET HEAD OFF
SET FEED OFF
SPOOL hotbkup.sql
DECLARE
CURSOR T_TAB IS
SELECT DISTINCT (TABLESPACE_NAME) FROM DBA_DATA_FILES;
CURSOR F_TAB (FS VARCHAR2) IS
SELECT FILE_NAME FROM DBA_DATA_FILES WHERE
TABLESPACE_NAME=FS;
D_REC DBA_DATA_FILES.FILE_NAME%TYPE;
T_REC DBA_DATA_FILES.TABLESPACE_NAME%TYPE;
BEGIN
OPEN T_TAB;
LOOP
FETCH T_TAB INTO T_REC;
EXIT WHEN T_TAB%NOTFOUND;
DBMS_OUTPUT.PUT_LINE ('ALTER TABLESPACE '||T_REC||' BEGIN BACKUP;');
END LOOP;
CLOSE T_TAB;
END;
/
ALTER SYSTEM SWITCH LOGFILE;
SELECT ‘ ‘||’!mv /disk5/oradata/NETTLINX/HOTBKUP/control.new
/disk5/oradata/NETTLINX/HOTBKUP/control.old’||’ ‘ FROM DUAL;
SELECT 'ALTER DATABASE BACKUP CONTROLFILE TO '
||”’ /disk5/oradata/NETTLINX/HOTBKUP/control.new”’||’;’ FROM DUAL;
SPOOL OFF
:wq

SQL>@hot.sql /*generates hotbkup.sql*/
SQL>@hotbkup.sql /*runs the generated backup statements*/

Recovery: Recovery is of three types. They are online block recovery, thread
recovery and media recovery. In all three cases the algorithm that applies the redo
records against an individual block is the same.

Block level: It is automatically performed by Oracle during normal operation of the


database and is transparent to the user and does not involve any action from the
DBA. When a process dies while changing a buffer, Oracle reconstructs the buffer
using the online redolog files for the current thread and writes it to the disk. The
buffer header contains the information regarding the range of redo records that
needs to be applied to it. If the PMON process is performing the block recovery,
Oracle does not allow it to spend large amounts of time working on the recovery of
the one buffer.

Thread recovery: If an instance crashes while it has the database open, it is

necessary to do thread recovery. This involves applying to the database all redo
changes in the thread that have occurred since the last time the thread was
checkpointed; the checkpoint SCN stored in the control file for the dead instance's
thread determines which changes still need to be applied. Thread recovery is done
either as a part of instance recovery or as crash recovery. Instance recovery is done
while the database is open and one instance detects that another instance has died;
this is possible only if you are running multiple instances against the database (for
example with the Parallel Server option). If you restart a dead instance before
instance recovery is done, then Oracle will do crash recovery. In general, the first
instance to open the database after an abnormal termination (shutdown abort or
database crash) does crash recovery.
Media recovery: Media recovery is done in response to recovery command issued
by the DBA. It is used to make backup data files current, or to restore changes that
were lost when a datafile went offline without check point.

For example, if you take a tablespace offline using the immediate option, the
datafiles will go offline without a checkpoint being performed by Oracle. Media
recovery can apply archived log files as well as online log files.

Online and Offline recovery:

Recovery command Database online Database offline

Recover database No Yes


Recover Tablespace Yes No
Recover datafile Yes Yes

Syntax:

SVRMGRL>RECOVER [AUTOMATIC] [FROM 'location'] [DATABASE]
[UNTIL TIME date]
[UNTIL CANCEL]
[UNTIL CHANGE scn]
[USING BACKUP CONTROLFILE];

Complete Vs Incomplete recovery: Recovering the database from a media failure


without losing any data is known as complete recovery. Otherwise it is known as
incomplete recovery. Complete recovery should be implemented when all the redolog
files, backup datafiles (for all the lost or damaged datafiles), and a current valid
controlfile are available.

Case 1: The database is running in NOARCHIVELOG mode and you lose a datafile
because of a media failure; you take a cold backup every night. How will you recover
the database? The scenario can be simulated as follows.

Steps: Take a cold backup of the database. Now, using the HOST command, remove
one datafile at the operating system level. Then abort the instance.

Now try to open the database; you will get an error stating that a particular datafile
is missing. Now shut down the database, restore the previous night's backup and
open the database. You have lost today's transactions. This is still complete recovery
even though today's work is gone, because as far as the database is concerned it did
not lose anything that came from last night's backup. It may appear to you to be
incomplete, but it is complete recovery up to that point in time.

Note: You cannot just restore the lost datafile from the previous backup and start up the
database, because the database would be in an inconsistent state, so it will fail.

Case 2: Everything is the same except that the database is running in ARCHIVELOG
mode. Here you restore the lost file from the previous night's backup, mount the
database and issue the command RECOVER AUTOMATIC DATAFILE. Oracle will apply
the relevant archived log files and online redo log files, and then you can open the
database. Here, you have lost no data, hence it is complete recovery.
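In outline, the sequence looks like this (the datafile name below is only a placeholder; use the file reported as missing):

$ cp /backup/users01.dbf /disk2/oradata/nettlinx/users01.dbf /*restore from last night's backup*/
SVRMGR>CONNECT INTERNAL
SVRMGR>STARTUP MOUNT
SVRMGR>RECOVER AUTOMATIC DATAFILE '/disk2/oradata/nettlinx/users01.dbf';
SVRMGR>ALTER DATABASE OPEN;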

Case 3: Everything is as above except that you lost the online redolog files
only. In this case you have the archived log files but not the online redolog
files. So you can recover only up to the last available archived log file, by
issuing the command RECOVER DATABASE UNTIL CANCEL. Cancel the media
recovery immediately after applying the last available archived log file, then
open the database with the RESETLOGS option. This will invalidate the previous
log files. This is an incomplete recovery.
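A minimal sketch of this sequence (assuming the datafiles have first been restored from the last backup):

SVRMGR>STARTUP MOUNT
SVRMGR>RECOVER DATABASE UNTIL CANCEL;
/*apply the suggested archived logs; type CANCEL after the last available one*/
SVRMGR>ALTER DATABASE OPEN RESETLOGS;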

Case 4: The database is running in archive log mode. We take a cold backup every
night. One day a programmer accidentally dropped an important table (say at
11:30:30 am) and you realized this at 2:00 p.m. As this is a critical database, you
have to recover the lost table without losing other users' data.

Steps:

 Switch the log file.


 Shutdown the database
 Take the cold backup of the current database (New backup)
 Restore only the datafiles from previous night’s backup provided there are no
structural changes to the database.
 Mount the database and issue the command.

SVRMGR>ALTER DATABASE RECOVER AUTOMATIC USING BACKUP CONTROLFILE
UNTIL TIME '1998-08-07:11:29:00'; [use BACKUP CONTROLFILE only if
you have restored the control file; otherwise recover using the current
control file]
 Open the database by resetting the logs.
 Export the table which the programmer had dropped.
 Shutdown the database.
 Restore the new backup taken above.
 Startup the database.
 Now import the table which was exported earlier.

Case 5: A DBA has lost both control files of a database which is in archive log
mode. To recover the database, use the CREATE CONTROLFILE command.

The scenario can be simulated as follows:

SVRMGR>!rm/disk1/oradata/nettlinx/control1.ctl
SVRMGR>!rm/disk1/oradata/nettlinx/control2.ctl

Steps:

$vi cr8controlfile.sql
CREATE CONTROLFILE REUSE DATABASE NETTLINX RESETLOGS ARCHIVELOG
LOGFILE GROUP 1 ('/disk1/oradata/nettlinx/redolog1.log',
'/disk2/oradata/nettlinx/redolog2.log') SIZE 250K,
GROUP 2 ('/disk1/oradata/nettlinx/redolog3.log',
'/disk2/oradata/nettlinx/redolog4.log') SIZE 250K
DATAFILE '/disk1/oradata/nettlinx/System01.dbf' SIZE 25M;
RECOVER DATABASE USING BACKUP CONTROLFILE;
SVRMGR>STARTUP NOMOUNT
SVRMGR>@cr8controlfile
SVRMGR>ALTER DATABASE OPEN RESETLOGS;

Note: Do not forget to take a fresh backup now.


RECOVERY MANAGER

Costs and benefits of using a recovery catalog: When you use a recovery catalog,
Recovery Manager can perform a wider variety of automated backup and recovery
functions; however, Recovery Manager requires that you maintain a recovery catalog
schema, along with any associated space used by that schema.

If you use a recovery catalog, you must decide which database you will use to install
the recovery catalog schema, and also how you will back this database up. The size
of the recovery catalog schema:

 is dependent on the number of databases monitored by the catalog
 is dependent on the number and size of Recovery Manager scripts stored in the
catalog
 will grow as the number of archived logs and backups for each database grows

If you use Recovery Manager to backup many databases, you may wish to create a
separate recovery catalog database, and create the Recovery manager user in that
database. You should also decide whether or not to operate this database in
ARCHIVELOG mode.

If you have more than one database to back up, you can create more than one
recovery catalog and have each database serve as the other’s recovery catalog. For
example, assume there are two production databases, one called "ACCT" and a
second called "PAY". You can install the recovery catalog for "ACCT" in the "PAY"
database, and the recovery catalog for the "PAY" database in "ACCT". This enables
you to avoid the extra space requirements and memory overhead of maintaining a
separate recovery catalog database. However, this solution is not practical if the
recovery catalogs for both databases reside in tablespaces on the same
physical disk.

Note: You must install the recovery catalog schema in a different database from the
target database you will be backing up. If you don’t, the benefits of using a recovery
catalog are lost if you lose the database and need to restore.

Note: It is difficult to restore and recover if you lose your control files and do not
use a recovery catalog. When you have lost all control files, the only way to restore
and recover the datafiles is to first create a control file manually and then restore and
recover.

Setting up the Recovery Catalog Schema: When you use a recovery catalog, you
need to set up the catalog schema. Oracle suggests you put the recovery catalog in its
own tablespace; however, it could be put in the SYSTEM tablespace, if necessary. To
set up the Recovery Catalog Schema:

 Create a tablespace RCVCAT which will be the default tablespace of the catalog
schema and will hold the catalog data.
 Using Server Manager (Line Mode) connect internal (or a SYSDBA) to the
database containing the recovery catalog.
 Issue the following commands:
SVRMGRL>SPOOL cr8rman.log
SVRMGRL>CONNECT INTERNAL
SVRMGRL>CREATE USER RMAN IDENTIFIED BY RMAN
TEMPORARY TABLESPACE TEMP
DEFAULT TABLESPACE RCVCAT
QUOTA UNLIMITED ON RCVCAT;
SVRMGRL>GRANT RECOVERY_CATALOG_OWNER TO RMAN;
SVRMGRL>CONNECT RMAN/RMAN
SVRMGRL>@$ORACLE_HOME/rdbms/admin/catrman.sql

Note: You must not run a catrman.sql script in the SYS schema. Run the catrman.sql
in the recovery catalog schema (RMAN)

To connect to Recovery Manager from the OS.

$ rman target internal/internal@acct rcvcat rman/rman@pay

To connect to Recovery Manager without Catalog

$ rman nocatalog
RMAN>connect target
Or
$ rman nocatalog
RMAN>register database

To connect to Recovery Manager with Password Files: If the target database uses
password files, you can connect using:

$ rman target internal/internal@acct nocatalog


or
$ rman nocatalog
RMAN>connect target internal/internal@acct

Recovery Manager automatically requests a connection to the target database as

SYSDBA. If you wish to connect as a DBA user who has SYSDBA privileges, issue the
following statement:

$ rman target reddy/tiger@acct nocatalog

Example:

Catalog maintenance commands are executed outside of a run command as follows:

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN>run{
Allocate channel c1 type disk;
Copy datafile 2 to ‘/disk4/oradata/NETTLINX/user1.dbf’;
}

Note: Where 2 is the file_id from the dba_data_files

To run Recovery Manager commands interactively, you need to start Recovery
Manager and then type the commands into the command line interface, as in the
previous example. When the commands are stored in an operating system file, the
file can be called from RMAN the same way you run a script in SQL*Plus, or it can
be passed on the command line in the following way:

#This is a file to copy the datafile


# file name Test.Rcv
run{
allocate channel c1 type disk;
copy datafile 2 to ‘/disk4/oradata/NETTLINX/user1.dbf’;
}
#end of file
$ rman target internal/internal@acct rcvcat rman/rman@pay cmdfile=test.rcv

If you want to record all the log information that appears on the screen
generated by RMAN, add the option mentioned below:

$ rman target internal/internal@acct rcvcat rman/rman@pay msglog=testrcv.log


append

Format Strings for Backup

You can use the following substitution variables to make unique format strings:

%d – database name (uppercase)


%t – backup set stamp
%s – backup set number
%p – piece number in the set
%u – an 8 character id composed of compressed representations of the backup

Full and Incremental Backup Sets

Datafile backup sets can be full or incremental. A full backup is a backup of one or
more datafiles that contains all blocks of the datafile(s). An incremental backup is a
backup of one or more datafiles that contains only those blocks that have been
modified since a previous backup. These concepts are described in more detail in the
following sections.

Full Backup Sets

A full backup copies all blocks into the backup set, skipping only datafile blocks that
have never been used. No blocks are skipped when backing up archivelogs or control
files. A full backup is not the same as a whole database backup; full is an indicator
that the backup is not incremental. Also, a full backup has no effect on subsequent
incremental backups, and is not considered part of the incremental strategy (in other
words, a full backup does not affect which blocks are included in subsequent
incremental backups). Oracle allows you to create and restore full backups of the
following:

 datafile
 datafile copy
 tablespace
 control file (current or backup)
 database

Archivelog backup sets are always full backups.
Incremental Backup Sets

An incremental backup is a backup of one or more datafiles that contain only those
blocks that have been modified since a previous backup at the same or lower level;
unused blocks are not written out.

Example:

Backing up a Data file:

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
copy datafile 2 to ‘/disk4/oradata/NETTLINX/junk.dbf’;
}

Determining Datafile Backup Status:

To view the backup status of a datafile, you can use the V$BACKUP view. This view
lists all online files and gives their backup status. It is most useful when the database
is open. It is also useful immediately after a crash, because it shows the backup
status of the files at the time of the crash. You can use this information to determine
whether you have left tablespaces in backup mode.

NOTE: V$BACKUP is not useful if the controlfile currently in use is a restored backup or
a new controlfile created since the media failure occurred. A restored or re-created
controlfile does not contain the information Oracle needs to fill V$BACKUP accurately.
Also, if you have restored a backup of a file, that file's STATUS in V$BACKUP reflects
the backup status of the older version of the file, not the most current version. Thus,
this view might contain misleading information on restored files.

In the STATUS column, "INACTIVE" indicates that the file is not currently being
backed up; "ACTIVE" indicates that the file is marked as currently being backed up.
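For example:

SQL>SELECT FILE#, STATUS, CHANGE#, TIME FROM V$BACKUP;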

Backing up a Tablespace:

The following example shows how to back up individual tablespaces. It is important
to back up frequently those tablespaces that contain important data (including the
SYSTEM tablespace and any tablespace that contains rollback segments). Tablespaces
containing only temporary segments need not be backed up. Because this example
backs up to disk, the format string determines the name of the backup file.

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
backup
format '/disk4/oradata/NETTLINX/test.dbf_%s_%t'
(tablespace system, user_Data);
}
Propagating Information from the Control File

The size of the target database’s control file will grow, depending on the number of

 backups performed
 archive logs created
 days (minimum number) this information is stored in the control file

You can specify the minimum number of days this information is kept in the control
file using the parameter CONTROL_FILE_RECORD_KEEP_TIME. Entries older than the
number of days are candidates for overwrites by newer information. The larger the
CONTROL_FILE_RECORD_KEEP_TIME setting is, the larger the control file will be. At
a minimum, you should resynchronize your recovery catalog at intervals less than
the CONTROL_FILE_RECORD_KEEP_TIME setting, because after this number of
days, the information in the control file will be overwritten with the most recently
created information; if you have not resynchronized and information has been
overwritten, that information cannot be propagated to the recovery catalog.
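The parameter is set in the init<SID>.ora file; for example (7 days is the usual default):

CONTROL_FILE_RECORD_KEEP_TIME = 7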

Note: The maximum size of the control file is port specific. See your operating
system specific Oracle documentation.

Backing up a Control File:

The current control file is automatically backed up when the first datafile of the
system tablespace is backed up. The current control file can also be explicitly
included in a backup or backed up individually.

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
backup
(datafile 2 include current controlfile format
‘/disk4/oradata/NETTLINX/test.dbf_%s_%t’);
}

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
backup
(tablespace system include current controlfile format
'/disk4/oradata/NETTLINX/test.dbf_%s_%t');
}

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
backup
format '/disk4/oradata/NETTLINX/test.dbf_%s_%t'
(tablespace system, user_Data);
}
$ rman target internal/internal@acct rcvcat rman/rman@pay
RMAN> run{
allocate channel t1 type disk;
copy
(current controlfile to '/disk4/oradata/NETTLINX/control.bkp');
}

Types of Recovery Manager Backups

The backup_type applies to all backup_specifications in the backup_specification_list.


The following two backup types are available.

Full

This is the default if neither full nor incremental is specified. A full backup copies all
the blocks into the backup set, skipping only datafile blocks that have never been
used. No blocks are skipped when backing up archive logs or control files. A full
backup has no effect on subsequent incremental backups, and is not considered to
be part of the incremental backup strategy.

Incremental

An incremental backup at a level greater than 0 copies only those blocks that have
changed since the last incremental backup. An incremental backup at level 0 is
identical in content to a full backup, but the level 0 backup is considered to be part of
the incremental strategy. Certain checks are performed when attempting to create
an incremental backup at a level greater than zero. These checks ensure that the
incremental backup would be usable by a subsequent recover command. Among the
checks performed are:

 A level 0 backup set must exist, or level 0 datafile copies must exist
for each datafile in the backup command. These must also not be
marked unavailable.

 Sufficient incremental backups taken since the level 0 must exist and
be available such that the incremental backup about to be created
could be used.

 Multiple levels of incremental backup are supported. A level N
incremental backup copies only those blocks that have changed since
the most recent incremental backup at level N or less.

 If incremental is specified, then all backup_object_lists in the
command must specify one of the following: datafile, datafilecopy,
tablespace, or database. Incremental backups of control files,
archivelogs or backup sets are not supported.
Backup command operand list:

Tag
Cumulative
Nochecksum
Filesperset
Setsize

Backup Object list:

Database
Tablespace
Datafile
Datafilecopy
Archivelog
Current Controlfile
Backup Control file
Backupset

Backup Operand List:

Tag
parms

Format

%p Backup piece number within the backup set
(initial value is 1, increments by 1)
%s Backup set number (it is a counter in the control file that starts with 1 and
increments by one for each backup set; a CREATE CONTROLFILE initializes the
counter back to 1)
%d Database name
%n Padded database name
%t Backup set stamp (a 4-byte value derived as the number of seconds
since a fixed reference date/time)
%u An 8-character name composed of compressed representations of the backup set
number.

Include current control file

Filesperset
Channel
Delete input

Copy Command Specifiers:

Datafile
Datafilecopy
Archivelog
Current control file
Backup control file
Optionally you can supply these keywords with copy command

Tag
Level 0

Backing up full database: (Including current control file)

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
backup
format '/disk4/oradata/NETTLINX/test.dbf_%s_%t'
(database include current controlfile);
}

Physical Incremental Backups:

Sunday: Take a complete physical backup (level 0)

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel d1 type disk;
backup incremental level 0
format '/disk4/oradata/NETTLINX/test.df5_t%t_s%s_p%p'
(database include current controlfile);
release channel d1;
}

Monday: Take an Incremental Physical Backup

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel d1 type disk;
backup incremental level 2
format '/disk4/oradata/NETTLINX/test.df5_t%t_s%s_p%p'
(database include current controlfile);
release channel d1;
}

Tuesday: Take an incremental physical backup

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel d1 type disk;
backup incremental level 2
format '/disk4/oradata/NETTLINX/test.df5_t%t_s%s_p%p'
(database include current controlfile);
release channel d1;
}
Wednesday: Take a cumulative physical backup

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel d1 type disk;
backup incremental level 1
format '/disk4/oradata/NETTLINX/dbf_t%t_s%s_p%p'
(database include current controlfile);
release channel d1;
}

Copying the archived log information to the catalog and deleting the files
from the archive log destination:

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
backup
(archivelog low logseq 1 high logseq 50 thread 1 all delete input
format '/disk4/oradata/NETTLINX/arch_%d_%u');
}

NOTE: ORACLE8 supports archiving to multiple destinations with the initialization

parameter LOG_ARCHIVE_DUPLEX_DEST = path

You can also back up archived logs to tape. The range of archived logs can be
specified by time or log sequence. Note that specifying an archive log range does not
guarantee that all redo in the range is backed up. For example, the last archived log
may end before the end of the range, or an archived log in the range may be
missing. Recovery manager simply backs up the logs it finds and does not issue
a warning. Note that online logs cannot be backed up; they must be archived first.

NLS_LANG = American
NLS_DATE_FORMAT = 'Mon DD YYYY HH24:MI:SS'
$ rman target internal/internal@acct rcvcat rman/rman@pay
RMAN> run{
allocate channel t1 type disk;
backup
(archivelog from time 'jan 25 1999 12:57:13' until time
'jan 25 1999 12:06:05'
all
format '/disk4/oradata/NETTLINX/arch1_%d_%u');
}

Here we back up all archived logs from sequence#288 to sequence#301 and delete
the archived logs after the backup is complete. If the backup fails the logs are not
deleted.
$ rman target internal/internal@acct rcvcat rman/rman@pay
RMAN> run{
allocate channel t1 type disk;
backup
(archivelog low logseq 288 high logseq 301 thread 1 all delete input
format '/disk4/oradata/NETTLINX/arch2_%d_%u');
}

The following commands back up all archived logs generated during the last 24
hours. We archive the current log first to ensure that all redo generated up to the
present gets backed up

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
sql “alter system archive log current”;
backup
(archivelog from time ‘SYSDATE-1’ all
format '/disk4/oradata/NETTLINX/arch3_%d_%u');
}

See also: for more information about you environment variables, see your operating
system-specific documentation.

Hint: Set the NLS_LANG and NLS_DATE_FORMAT environment variables before


invoking Recovery Manager.
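For a Bourne or Korn shell session this amounts to something like the following (a sketch):

$ NLS_LANG=american; export NLS_LANG
$ NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'; export NLS_DATE_FORMAT
$ rman target internal/internal@acct rcvcat rman/rman@pay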

RECOVERY & RESTORATION:

Restoring and Recovering a lost datafile

When one of the datafiles is lost the recovery amounts to complete recovery as you
have the latest control file and current online redolog file.

Steps involved are:

1. Make the datafile offline


2. Restore the datafile from the backup
3. Recover the datafile
4. Make the datafile online

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
sql 'alter database datafile 2 offline';
restore datafile 2;
recover datafile 2;
sql 'alter database datafile 2 online';
release channel t1;
}
Restoring and Recovering a lost Tablespace:

When one of the datafiles is lost the recovery amounts to complete recovery as you
have the latest control file and the current online redolog file.

Steps involved are:

1. Make the tablespace offline


2. Restore the tablespace from the backup
3. Recover the tablespace
4. Make the tablespace online

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
sql ‘alter tablespace user_data offline immediate’;
restore tablespace user_data;
recover tablespace user_data;
sql ‘alter tablespace user_data online’;
release channel t1;
}

Note!
Whenever the RMAN command fails you are required to release the channel allocated
for that operation

Ex:

RMAN> run{
release channel t1;
}

Full Database Restore/Recovery (Incomplete)

When the whole database is lost you depend upon the backup for restoring the
database, and then recover it by applying the archives.

Steps:

1. Start the Instance with nomount option

2. Restore the full database

3. Mount the database

4. Recover the database by applying Archives

5. Open the database with RESETLOGS option ( Incomplete Recovery )


First Method:

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
restore database;
sql ‘alter database mount’;
recover database;
sql ‘alter database open resetlogs’;
release channel t1;
}

Second Method:

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
set until logseq 443 thread 1;
allocate channel dev1 type disk;
restore controlfile to '/disk4/oradata/NETTLINX/ctlIacct.ora';
replicate controlfile from '/disk4/oradata/NETTLINX/ctlIacct.ora';
sql 'alter database mount';
restore database;
recover database;
sql 'alter database open resetlogs';
release channel dev1;
}

Third Method:

$ rman target internal/internal@acct rcvcat rman/rman@pay


RMAN> run{
allocate channel t1 type disk;
restore controlfile to '/disk4/oradata/NETTLINX/contlI.ctl';
replicate controlfile from '/disk4/oradata/NETTLINX/contlI.ctl';
restore database;
sql 'alter database mount';
recover database;
sql 'alter database open resetlogs';
release channel t1;
}
WORKING WITH SQL*LOADER

SQL*Loader moves data from external flat files into an Oracle database.

SQL*Loader can perform the following things:

 Load data from multiple datafiles of different file types
 Handle fixed-format, delimited-format, and variable-length records
 Manipulate data fields with SQL functions before inserting the data into
database columns
 Support a wide range of data types, including DATE, BINARY, PACKED
DECIMAL and ZONED DECIMAL
 Load multiple tables during the same run, loading selected rows into each
table
 Combine multiple physical records into a single logical record, or treat a
single physical record as multiple logical records
 Generate unique, sequential key values in specified columns
 Use your operating system's file or record management system to access
datafiles
 Load data from disk or tape
 Provide thorough error reporting capabilities, so you can easily adjust and
load all records

The control files contains the DDL definitions.

For eg:

$ vi case1.ctl
LOAD DATA
INFILE *
INTO TABLE DEPT
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(DEPTNO,DNAME,LOC)
BEGINDATA
12, “RESEARCH”, “SARATOGA”
10, “ACCOUNTING”, “CLEVELAND”
:wq

The LOAD DATA statement is required at the beginning of the control file.

INFILE * specifies that the data is found in the control file and not in an external
file. INTO TABLE is required to identify the table (dept) to be loaded into.
By default SQL*Loader requires the table to be empty before it inserts any records.
To invoke SQL*Loader the command is:

SYNTAX:
$ sqlldr <options>

The options are as follows:

Userid : Username and password
Control : Control file name
Log : Log file name
Bad : Bad file name
Data : Data file name
Discard : Discard file name
Discardmax : Number of discards to allow
Skip : Number of logical records to skip (Default 0)
Load : Number of logical records to load (Default: all)
Errors : Number of errors to allow (Default 50)
Rows : Number of rows in the conventional path bind array, or between
direct path data saves (Default 64 in conventional path, all in direct)
Direct : Use direct path load (Default FALSE)
Bindsize : Size of conventional path bind array in bytes (system-
dependent)
Silent : Suppress messages during run
Parfile : Parameter file

For Eg:

$sqlldr userid=reddy/manager control=case1.ctl log=case1.log


$sqlldr control=case.ctl log=case.log data=etc.dat userid=steeve/abc
errors=40 load=2000 discard=abc.dis discardmax=2

or

$ vi par.file –File with the parameters


control=case.ctl
log=case.log
data=etc.dat
userid=steeve/abc
errors=40
load=2000
discard=abc.dis
discardmax=2
$sqlldr parfile=par.file

Example: Case 1: Case 1 loads data embedded in the control file into the table dept.

$ vi case1.ctl
LOAD DATA
INFILE *
INTO TABLE DEPT
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(DEPTNO, DNAME, LOC)
BEGINDATA
12, “RESEARCH”, “SARATOGA”
10, “ACCOUNTING”, “CLEVELAND”
11, “ART”,SALEM
13, FINANCE, “BOSTON”
21, “SALES”, PHILA
22, “SALES”, ROCHESTER
42, “INT’ L”, “SAN FRAN”
:wq
$ sqlldr userid=reddy/tiger control=case1.ctl log=case1.log
Case 2: Case 2 loads the data in case2.dat into the table emp.

$vi case2.ctl
LOAD DATA
INFILE 'case2.dat'
INTO TABLE EMP
(EMPNO POSITION(01:04) INTEGER EXTERNAL,
ENAME POSITION(06:15) CHAR,
JOB POSITION(17:25) CHAR,
MGR POSITION(27:30) INTEGER EXTERNAL,
SAL POSITION(32:39) DECIMAL EXTERNAL,
COMM POSITION(41:48) DECIMAL EXTERNAL,
DEPTNO POSITION(50:51) INTEGER EXTERNAL)
:wq

$vi case2.dat

7782 CLARK MANAGER 7839 2572.50 10


7839 KING PRESIDENT 5500 10
7934 MILLER CLERK 7782 920.00 20
7566 JONES MANAGER 7839 1600.00 300.00 30
7654 MARTIN SALESMAN 7698 1312.50 1400.00 30

$sqlldr userid=reddy/tiger control=case2.ctl log=case2.log

Case 3: Case 3 appends data to the emp table and uses the SEQUENCE function, which
generates unique key values for the loaded data.

SQL>ALTER TABLE EMP ADD (PROJNO NUMBER, LOADSEQ NUMBER)


$vi case3.ctl
LOAD DATA
INFILE*
APPEND
INTO TABLE EMP
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(EMPNO, ENAME, JOB, MGR, HIREDATE DATE(20) "DD-Month-YYYY",
SAL, COMM, DEPTNO CHAR TERMINATED BY ':',
PROJNO,
LOADSEQ SEQUENCE(MAX,1))
BEGINDATA
7782,”CLARK”,”MANAGER”,7839,09-JUNE-1981,2572.50,,10:101
7839,”KING”,”PRESIDENT”,, 17-NOVEMBER-1981,5500.00,,10:102

7934,”MILLER”,”CLERK”,7782,23-JANUARY-1982,920.00,,10:102
7566,”JONES”,”MANAGER”,7839,02-APRIL-1981,3123.75,,20:101
7499,”allen”,”salesman”,7698,20-February-1981,1600.00,300.00,30:103
7654,”MARTIN”,”SALESMAN”,7698,28-SEPTEMBER-1981,1312.50,1400,30:103

$sqlldr userid=reddy/tiger control=case3.ctl log=case3.log

Case 4: Case 4 combines multiple physical records into one larger logical record using
CONTINUEIF. It also shows inserting negative numbers, using DISCARDMAX to specify a
maximum number of discards, and rejecting records because of duplicate values in an
index or because of invalid data.

$vi case4.ctl
LOAD DATA
INFILE ‘case4.dat’
DISCARDFILE ‘case4.dsc’
DISCARDMAX 999
REPLACE
CONTINUEIF THIS (1) = ‘*’
INTO TABLE EMP
(EMPNO POSITION(01:04) INTEGER EXTERNAL,
ENAME POSITION(06:15) CHAR,
JOB POSITION(17:25) CHAR,
MGR POSITION(27:30) INTEGER EXTERNAL,
SAL POSITION(32:39) DECIMAL EXTERNAL,
COMM POSITION(41:48) DECIMAL EXTERNAL,
DEPTNO POSITION(50:51) INTEGER EXTERNAL,
HIREDATE POSITION(52:60) INTEGER EXTERNAL)
:wq

$ vi case4.dat
*7782 clark man
ager 7839 2572.50-10 2512-Nov-85
*7839 king persi
dent 5500.00 2505-Apr-83
*7934 mil
ler manager 7839 3123.75 2517-Jul-85
:wq
$ sqlldr userid=reddy/tiger control=case4.ctl log=case4.log

Case 5: Case 5 explains how to use SQL*Loader to break down repeating groups in a flat file
and load the data into normalized tables. One physical record may generate multiple database
rows; the case also shows the use of the WHEN clause and loading the same field (empno) into
multiple tables.
$ vi case5.ctl
LOAD DATA
INFILE ‘case5.dat’
BADFILE ‘case5.bad’
DISCARDFILE ‘case5.dsc’
REPLACE
INTO TABLE EMP
(EMPNO POSITION(1:04) INTEGER EXTERNAL,
ENAME POSITION(6:15) CHAR,
DEPTNO POSITION(17:18) CHAR,
MGR POSITION(20:23) INTEGER EXTERNAL)
INTO TABLE PROJ
WHEN PROJNO!=’ ‘
(EMPNO POSITION(1:4) INTEGER EXTERNAL,
PROJNO POSITION(25:27) INTEGER EXTERNAL)
INTO TABLE PROJ
WHEN PROJNO!=’ ‘
(EMPNO POSITION(1:4) INTEGER EXTERNAL,
PROJNO POSITION(29:31) INTEGER EXTERNAL)
:wq
$sqlldr userid=reddy/tiger control=case5.ctl log=case5.log

Case 6: Case 6 loads the data into table emp using the direct path load method and
also builds the indexes.

$vi case6.ctl
LOAD DATA
INFILE ‘case6.dat’
INSERT
INTO TABLE EMP
SORTED INDEXES (EMPID)
(EMPNO POSITION(1:4) INTEGER EXTERNAL NULLIF EMPNO=BLANKS,
ENAME POSITION(6:15) CHAR,
JOB POSITION(17:25) CHAR,
MGR POSITION(27:30) INTEGER EXTERNAL NULLIF MGR=BLANKS,
SAL POSITION(32:39) DECIMAL EXTERNAL NULLIF SAL=BLANKS,
COMM POSITION(41:48) DECIMAL EXTERNAL NULLIF COMM=BLANKS,
DEPTNO POSITION(50:51) INTEGER EXTERNAL NULLIF DEPTNO=BLANKS)

:wq

$ sqlldr userid=reddy/tiger control=case6.ctl log=case6.log direct=true


TUNING – ENHANCING THE PERFORMANCE OF DATABASE

Tuning is the study and adjustment of a system's configuration. Everyone involved with the
system has some role in the tuning process. By tuning Oracle, you can tailor its
performance to best meet your needs.

Goals for tuning: consider the performance issues when designing the system;
tune at the hardware level and at the operating system level; identify performance
bottlenecks, determine the cause of the problem and take corrective action.

Steps of tuning process:

Tuning Memory allocation: Appropriate allocation of memory resources to Oracle
memory structures can have a large impact on performance:

n SQL and PL/SQL shared areas


n The data dictionary cache
n The buffer cache

Tuning I/O: Disk I/O tends to reduce the performance of many software applications

n Distributing I/O to avoid disk contention


n Creating extents large enough for your data

Tuning Contention: Contention may cause processes to wait until resources are
available. Typical areas of contention are (a query sketch for checking rollback
segment contention follows this list):

n Rollback Segments
n Processes of the multi-threaded server architecture
n Redo log buffer latches
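One way to check for rollback segment contention is to compare the waits on undo blocks
with the total data block requests; a minimal sketch using V$WAITSTAT and V$SYSSTAT
(the 1% threshold is a rule of thumb, not a fixed value):

SQL> SELECT CLASS, COUNT FROM V$WAITSTAT
     WHERE CLASS IN ('undo header', 'undo block',
                     'system undo header', 'system undo block');
SQL> SELECT SUM(VALUE) "DATA BLOCK GETS" FROM V$SYSSTAT
     WHERE NAME IN ('db block gets', 'consistent gets');

If the waits for any undo class exceed roughly 1% of the total data block gets,
consider creating additional rollback segments.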

Memory Tuning:

Tuning the db buffer cache: First find the buffer cache hit ratio. If the hit ratio is
low, increase DB_BLOCK_BUFFERS in init.ora. Additional information can be
seen in X$KCBRBH and X$KCBCBH. The query to find the ratio is as follows:

SQL> SELECT USERNAME, CONSISTENT_GETS, BLOCK_GETS, PHYSICAL_READS,
            100*(CONSISTENT_GETS+BLOCK_GETS-PHYSICAL_READS)/
                (CONSISTENT_GETS+BLOCK_GETS) HITRATIO
     FROM V$SESSION, V$SESS_IO
     WHERE V$SESSION.SID = V$SESS_IO.SID
     AND (CONSISTENT_GETS + BLOCK_GETS) > 0
     AND USERNAME IS NOT NULL;

In the above query:


Physical_reads: The number of physical reads from the database files.

Consistent_gets: The number of accesses made to the block buffer to
retrieve data in consistent mode.

Block_gets: The number of blocks accessed via single block gets (current mode).

Tuning the redo log buffer cache: To tune the redo log buffer you have to reduce the
waiting for the redo latches. Find the ratio between redo log space requests and redo
writes; if it is more than 1%, the log buffer needs tuning. This information
can be obtained from V$LATCH and V$SYSSTAT. The queries are as follows.

SQL>SELECT NAME, VALUE FROM V$SYSSTAT WHERE NAME=’redo log space


requests’;

The value should be near 0. If not, increase LOG_BUFFER by about 5% at a time until it
approaches 0.

SQL> SELECT B.NAME, A.GETS, A.MISSES, A.IMMEDIATE_GETS, A.IMMEDIATE_MISSES
     FROM V$LATCH A, V$LATCHNAME B
     WHERE B.NAME IN ('redo allocation', 'redo copy')
     AND A.LATCH# = B.LATCH#;

If misses/gets > 1% or immediate_misses/(immediate_gets + immediate_misses) > 1%,
there is latch contention and you need to look at the redo latch configuration.

In the above query.

Gets - This column shows the total no. of requests for information on the
corresponding latch.
Misses - This column shows the no. of requests resulting in latch misses.
Immediate_misses - This column shows the no. of unsuccessful immediate
requests for each latch.
Immediate_gets - This column shows the no. of successful immediate
requests for each latch.
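If the redo copy latches show contention on a multi-CPU machine, one common remedy in
this release is to allow more redo copy latches; a hedged init.ora sketch (these parameters
exist in Oracle 7.x/8.0 but are obsolete in later releases, and the values are only examples):

SVRMGR>SHUTDOWN
$ vi init.ora
log_simultaneous_copies = 4     # number of redo copy latches, typically 2 x number of CPUs
log_small_entry_max_size = 80   # redo entries larger than this are copied on a redo copy
                                # latch instead of the redo allocation latch
:wq
SVRMGR>STARTUP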

Tuning the library cache: The library cache is part of the shared pool and holds the shared
SQL and PL/SQL areas. This section tells you how to tune the library cache by:

n Examining library cache activity


n Reducing library cache misses
n Speeding access to shared SQL and PL/SQL areas in the library cache

THE QUERY IS AS FOLLOWS:

SQL> SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES",
            (SUM(RELOADS)/SUM(PINS))*100 "RELOAD RATIO %"
     FROM V$LIBRARYCACHE;

In the above query.


Pins - this column shows the no. of times an item in the library cache was executed.
Reloads - this column shows the no. of library cache misses on execution steps.

If the ratio of reloads to pins is greater than 1%, you should reduce the library
cache misses (increase the SHARED_POOL_SIZE parameter in init.ora).
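A minimal sketch of making the change (the value shown is only an example; size it for your
own system):

SVRMGR>SHUTDOWN
$ vi init.ora
shared_pool_size = 20000000     # increase the existing value, for example in 10-20% steps
:wq
SVRMGR>STARTUP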

STRIPING: Striping is the practice of dividing a large table's data into small
portions and storing these portions in separate datafiles on separate disks. This
permits multiple processes to access different portions of the table concurrently without
disk contention. Striping is particularly helpful in optimizing random access to
tables with many rows. Striping can be done manually as below:

SQL> CREATE TABLESPACE TEST DATAFILE
     '/disk1/oradata/NETTLINX/u1.dbf' SIZE 300K,
     '/disk2/oradata/NETTLINX/u2.dbf' SIZE 300K,
     '/disk3/oradata/NETTLINX/u3.dbf' SIZE 300K
     DEFAULT STORAGE (MINEXTENTS 3);
SQL> CREATE TABLE JUNK (A NUMBER) TABLESPACE TEST STORAGE (MINEXTENTS 3);

Then insert around one lakh (100,000) rows into the table. While the insertion is going on,
observe the status of the files in V$FILESTAT:

SQL> SELECT FILE_ID FROM DBA_DATA_FILES;   (to check the file_id of
     u1.dbf, u2.dbf and u3.dbf)
SQL> SELECT * FROM V$FILESTAT;             (observe the reads and writes per file)

If you have more datafiles and only one DBWR, performance may decrease, so you can
increase the number of database writers. On Unix, enable asynchronous I/O (aio, a kernel
tunable parameter) and then include the parameter in init.ora:

SVRMGR>SHUTDOWN
$ vi init.ora     # add the following parameter
db_writer_processes=3
:wq
SVRMGR>STARTUP
$ ps ux | grep ora_     (observe the dbwr processes)

PARALLEL QUERY OPTION: Without the parallel query option, Oracle processes a SQL
statement with a single server process. With the parallel query option, multiple processes
can work together simultaneously to process a single SQL statement. This capability is
called the parallel query option. The Oracle server can process the statement more quickly
than if only a single server process processed it, because query processing can be
effectively split among many CPUs on a single system.

$ vi init.ora # looking for following parameters


parallel_min_servers=1
parallel_max_servers=10
parallel_server_idle_time=4 (default in min).
:wq
SVRMGR>SHUTDOWN
SVRMGR>STARTUP
SQL> CREATE TABLE JUNK (A NUMBER) PARALLEL(DEGREE 5);
SQL>INSERT MANY ROWS
SQL>SELECT * FROM JUNK;

From any other terminal do:

$ ps ux|grep ora_

observe.

Oracle 1903 1 0 25:09:98 ? 1:30 ora_p000_DEMO


Oracle 1933 1 0 25:09:98 ? 1:30 ora_p0001_DEMO
Oracle 1953 1 0 25:09:98 ? 1:30 ora_p0002_DEMO
Oracle 1998 1 0 25:09:98 ? 1:30 ora_p0003_DEMO
Oracle 1920 1 0 25:09:98 ? 1:30 ora_p0004_DEMO

TABLE CACHE: To mark a table as a cache table, specify the CACHE clause in either the
CREATE TABLE or ALTER TABLE command. If a table is marked as a cache table, that
table's blocks will be considered the most recently used blocks in the data block
buffer cache, even if they are read via a full table scan. Thus you can avoid having your
small tables' blocks frequently removed from the data block buffer cache. The example
below shows the TEST table marked as a cache table; the first time its blocks are
read into the data block buffer cache, they will be marked as the most recently used
blocks in the cache.

SQL> CREATE TABLE TEST (A NUMBER) CACHE;   /* table is created up front
                                              with the CACHE option */
SQL> CREATE TABLE TEST (A NUMBER);         /* an example of a later decision to
                                              implement caching */
SQL> ALTER TABLE TEST CACHE;
To deactivate the cache status of the table use the nocache clause;

SQL> ALTER TABLE TEST NOCACHE;


OPTIMIZATION

Optimization is the process of choosing the most efficient way to execute a SQL
statement. This is an important step in the Processing of any data manipulation
language statement (select, insert, update or delete), which is done by the
Optimizer. The Optimizer formulates execution plans and chooses the most efficient
plan before executing a statement. There are two types of optimizers:

n Rule based and


n Cost based

Rule based: Using this approach the optimizer chooses an execution plan based on
the access paths available and the ranks of these paths.

Cost based: Using the cost based approach, the optimizer considers available access
paths and factors in information based on the statistics in the data dictionary objects
(tables, clusters or indexes) accessed by the statement to determine which execution
plan is most efficient. The analyze command generates these statistics. Cost based
will be effective only on the tables which are analyzed. The cost based approach also
considers hints. The cost based approach has three options: CHOOSE, ALL_ROWS and
FIRST_ROWS. These can be enabled by using the following commands.

SQL> ALTER SESSION SET OPTIMIZER_MODE=ALL_ROWS;


SQL> ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS;

TO ANALYZE THE TABLE EMP THE FOLLOWING COMMAND CAN BE USED:

SQL >ANALYZE TABLE EMP COMPUTE STATISTICS;
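For large tables, you can estimate statistics instead of computing them, and then verify
that statistics exist; a minimal sketch (the sample percentage is only an example):

SQL> ANALYZE TABLE EMP ESTIMATE STATISTICS SAMPLE 20 PERCENT;
SQL> SELECT TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN
     FROM USER_TABLES WHERE TABLE_NAME = 'EMP';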

There are 15 access paths they are:

n single row by ROWID


n single row by cluster join
n single row by hash cluster key with unique key or primary key
n single row by unique or primary key
n cluster join
n Hash cluster key
n Indexed cluster key
n Composite key
n Single column indexes
n Bounded range search on indexed columns
n Unbounded range search on indexed columns
n Sort merge join
n Max or Min of indexed column
n Order by on indexed columns
n Full table scans
Steps of execution plan:

This section deals with the following tables:

Company table –

Company_id – number (primary key)

Name varchar2
Address varchar2
City varchar2 (index)
State varchar2 (index)
Parent_company_id number (index)

Competitor:

Company_id number (primary key, foreign key to company_id)


Product_id number (primary key)

Sales:

Company_id number (foreign key to company_id of company)


Product_id number (primary key)
Sales_total number
SQL> @$ORACLE_HOME/rdbms/admin/utlxplan
This script creates the PLAN_TABLE used to hold explain plan output.

Types of operations:

AND-EQUAL: It merges sorted list of values returned by indexes.


SQL> EXPLAIN PLAN SET STATEMENT_ID = ‘TEST1’
FOR
SELECT NAME, CITY, STATE
FROM COMPANY
WHERE CITY = ‘ROANOKE’ AND STATE = ‘VA’;
SQL> SELECT LPAD(' ', 2*LEVEL) || OPERATION || ' ' || OPTIONS || ' ' || OBJECT_NAME
     FROM PLAN_TABLE
     WHERE STATEMENT_ID = 'TEST1'
     CONNECT BY PRIOR ID = PARENT_ID AND STATEMENT_ID = 'TEST1'
     START WITH ID = 1;

Note: For every explain plan operation, issue the above command with a different statement
id.

Concatenation: It does a union all of result sets.

SQL> EXPLAIN PLAN SET STATEMENT_ID = ‘TEST2’


FOR
SELECT NAME, CITY, STATE
FROM COMPANY
WHERE STATE = ‘TX’ AND
CITY IN (‘HOUSTON’, ‘AUSTIN’, ‘DALLAS’);
Filter: It performs a where clause condition when no index can be used to assist in
the evaluation.

SQL> EXPLAIN PLAN SET STATEMENT_ID = ‘TEST3’


FOR
SELECT NAME, CITY, STATE
FROM COMPANY
WHERE STATE = ‘VA’
CONNECT BY PARENT_COMPANY_ID = PRIOR COMPANY_ID
START WITH COMPANY_ID = 1;

Hash join: It joins tables by building an in-memory hash table on one of the tables and
then using a hashing function to locate the matching join rows in the second table.

SQL> EXPLAIN PLAN SET STATEMENT_ID = ‘TEST4’


FOR
SELECT COMPANY.NAME
FROM COMPANY, SALES
WHERE COMPANY.COMPANY_ID = SALES.COMPANY_ID
AND SALES.PERIOD_ID=3
AND SALES.SALES_TOTAL>1000;

Index unique scan: It selects a unique value from a unique index.

SQL> EXPLAIN PLAN SET STATEMENT_ID = ‘TEST2’


FOR
SELECT NAME, CITY, STATE
FROM COMPANY
WHERE COMPANY_ID=12345;

Nested loops: Nested loops joins tables when at least one of the join columns is indexed.

SQL> EXPLAIN PLAN SET STATEMENT_ID = ‘TEST4’


FOR
SELECT COMPANY.NAME
FROM COMPANY, SALES
WHERE COMPANY.COMPANY_ID = SALES.COMPANY_ID
AND SALES.PERIOD_ID=3
AND SALES.SALES_TOTAL>1000;

Using hints: Hints are suggestions that you give the optimizer for optimizing a SQL
statement. You can use hints to specify:

n The optimization approach for a SQL statement


n The goal of a cost based approach for a SQL statement
n The access path for a table accessed by the statement
n The join order for a join statement
n A join operation in a join statement
NOTE: The syntax for a hint is similar to a comment in a SQL statement; the only difference
is the addition of the "+" sign to indicate the start of the hint. The "+" cannot be preceded
by a space.

All_Rows: it is used to minimize the time it takes for all rows to be returned by the
query.

SQL> SELECT /*+ ALL_ROWS */ COMPANY.NAME
     FROM COMPANY, SALES
     WHERE COMPANY.COMPANY_ID = SALES.COMPANY_ID
     AND SALES.PERIOD_ID = 3
     AND SALES.SALES_TOTAL > 1000;

First_Rows: it tells the optimizer to optimize the query with the goal of the shortest
response time for the return of the first row from the query.

SQL> SELECT /*+ FIRST_ROWS */ COMPANY.NAME
     FROM COMPANY, SALES
     WHERE COMPANY.COMPANY_ID = SALES.COMPANY_ID
     AND SALES.PERIOD_ID = 3
     AND SALES.SALES_TOTAL > 1000;

Full: The FULL hint tells the optimizer to perform a full table scan on the specified table.

SQL> SELECT /*+ FULL (COMPANY) */


NAME, CITY, STATE
FROM COMPANY
WHERE CITY = ‘ROANOKE’
AND STATE = ‘VA’;

Rule: The rule hint tells the optimizer to use rule based optimizer for a query.

SQL> SELECT /*+ RULE */ COMPANY.NAME


FROM COMPANY, SALES
WHERE COMPANY.COMPANY_ID = SALES.COMPANY_ID
AND SALES.PERIOD_ID=3
AND SALES.SALES_TOTAL>1000;

Index: It can be used in three different ways:

n If a single index is listed, that index will be used


n If multiple indexes are listed, the optimizer will choose which indexes to use.
n If a table is listed, but no indexes are listed, the optimizer will choose an
index to use for that table.

SQL> SELECT /*+ INDEX (COMPANY)*/


NAME, CITY, STATE
FROM COMPANY
WHERE CITY = ‘ROANOKE’
AND STATE = ‘VA’;

Cache: The cache hint, when used for a table in a query, tells oracle to treat the
table as a cached table. i.e., cache tells Oracle to keep the blocks from the full table
scan of a table in the SGA's data block buffer cache area, instead of quickly removing
them from the SGA.

SQL> SELECT /*+ FULL(COMPETITOR) CACHE(COMPETITOR) */ *
     FROM COMPETITOR
     WHERE COMPANY_ID > 5;

SQL> SELECT /*+ FULL(COMPETITOR) NOCACHE(COMPETITOR) */ *
     FROM COMPETITOR
     WHERE COMPANY_ID > 5;
MANAGING MULTI-THREADED SERVERS

A process is a "thread of control", a mechanism in an operating system that can
execute a series of steps. A single process can have multiple threads of execution.
This simply means that a single multithreaded process can do the work of multiple
single-threaded processes. The advantage of a multithreaded process is that it can
do the work of many single-threaded processes but requires far less system
overhead. If a database server uses multithreaded server processes, it can support a
large number of clients with minimal system overhead.

The goals of a process structure are:

n To simulate a private environment for multiple processes to work simultaneously


n To allow multiple processes to share computer resources

The process architecture is designed to maximize performance. A client/server


system splits processing between two different components.

Client process: Responsible for executing a client application on a workstation.

Server process:

n Foreground server processes: Directly handles the request from the client
process.
n Background server process: Handle other specific jobs of the database server.

Processes involved in the multi-threaded server are:

n A network listener process that connects user processes to dispatchers or


to a dedicated server process.
n One or more Dispatcher Processes
n One or more share server processes

The network listener process waits for incoming connection requests and determines
whether each user process can use a shared server process. If so, the listener process
gives the user process the address of a dispatcher process. If the user process requests
a dedicated server, the listener process creates a dedicated server process and connects
the user process to it. Shared server processes are not associated with a specific user
process. Instead, a shared server process serves any client request in the multi-
threaded server configuration.

Using dedicated Server Processes:

n A database Server machine is currently running oracle using multiple


background processes.

n A client workstation runs a database application such as SQL*Plus.
  It attempts to establish a connection to the server using a SQL*Net driver.
  The database server is currently running the proper SQL*Net driver.
  The listener process on the database server detects the connection
  request from the client and creates a dedicated server process for it.

n The user executes a single SQL statement. Eg: the user inserts a row into a table.

n The dedicated Server Process receives the statement. At this point, two paths
can be followed to continue processing the SQL statement.

n If the shared pool contains a shared SQL area for an identical SQL statement,
the server process can use the existing shared SQL area to execute the
client's SQL statement.

n If the shared pool does not contain a shared SQL area for an identical SQL
statement, a new shared SQL area is allocated for the statement in the
shared pool.

n The server process retrieves data blocks from the actual datafiles, if
necessary, or uses the data blocks already stored in the buffer cache in the
SGA of the instance.

n The server process executes the SQL statements stored in the shared SQL
area. Data is first changed in the SGA. It is permanently written to disk when
DBWR process determines it is most efficient to do so.

n The LGWR process records the transaction in the on-line redolog file only on a
subsequent commit request from the user.

n If the request is successful, the server sends a message across the network to
the user, else appropriate error message is transmitted.

n Throughout this entire procedure, the other background processes are


running and watching for any conditions that require intervention. In addition,
oracle is managing other transactions and preventing contention between
different transactions that request the same data.

Using multi-threaded server process:

n A database server is currently running Oracle using multi-threaded Server


configuration.

n A client workstation runs a database application such as oracle forms.

n It attempts to establish a connection to the server using SQL * Net Driver.

n The database server is currently running the proper SQL * Net Driver.

n The listener process on the database server detects the connection request
from the client application and determines how the user process should be
connected to the available dispatcher.

n The user issues a SQL statement. For eg:- the user updates a row in a table.
n The Dispatcher process places the user process’s request on the request
queue, which is in the SGA and shared by all Dispatcher Processes.

n An available shared Server Process checks the common Dispatcher request


and picks up the next SQL statement on the queue. It then processes the SQL
statement as described in previous example. Once the shared server process
finishes processing the SQL statement, the process places the result on the
response queue of the Dispatcher process that sent the request.

n The Dispatcher process checks its response queue and sends completed
request back to the user processes that made the request.

n To implement MTS you have to set these parameters in init.ora and


listener.ora.

Mts_service=database name(optional if you have already specified db_name)


Mts_dispatchers= “ipc,2” (min.number of dispatchers to be started)
Mts_max_dispatchers= 5 (max.number of dispatchers you want to start)
Mts_servers= 5 (min.number of server process you want to start)
Mts_max_servers= 10 (max.number of server processes)
Mts_listener_address= “ (address=(protocol=ipc)(key=xyz))”

$ lsnrctl start
$ svrmgrl
SVRMGR>CONNECT INTERNAL
SVRMGR>SHUTDOWN
SVRMGR>STARTUP
SVRMGR>EXIT
$ sqlplus system/manager@<alias name in tnsnames.ora>

To check whether MTS is working:

SQL> SELECT * FROM V$MTS;


SQL> SELECT * FROM V$DISPATCHER; (check to see any changes in dispatcher
Busy time after some operations)
SQL> SELECT SERVER FROM V$SHARED_SERVER;(you can see SHARED in the
Server column)
SQL> SELECT * FROM V$QUEUE;

From operating system you can give this command to see whether MTS is working.

$ ps aux | grep ora_   (check for (LOCAL=YES) or (LOCAL=NO); no dedicated server
process should appear for the new connection).


WORKING WITH RAW DEVICES IN ORACLE

A raw device doesn't have any of the characteristics of a regular filesystem. Character
device drivers support raw devices and access them through the special files in the /dev
directory, bypassing the Unix I/O buffer.

ADVANTAGES

• Faster performance, because the Oracle server bypasses the Unix buffer cache
and eliminates the filesystem layer; this results in fewer instructions per I/O.

• Savings in memory usage, because the Oracle server doesn't use the Unix
buffer cache for db block reads/writes.

• Increased throughput using asynchronous I/O. Asynchronous I/O is
available on many Unix platforms but may require the use of raw disk devices or a
special kernel configuration. Enable asynchronous I/O by setting the async
write/use async I/O parameter to true.

• They are most beneficial for files that receive sequential writes.

• They can be used concurrently with regular filesystems.

DISADVANTAGES:

• You must devote an entire disk partition to a single db file leading to wasted
disk space.

• Backing up raw devices can be cumbersome when compared to backing up of


Filesystem db files.

• They limit your ability to improve performance.

• May reduce random access read performance.

• Can require more volume than buffered I/O configuration.

• I/O load balancing and adding files to your database can be more difficult with
raw device.

• Moving datafiles around, which is an option in a Unix filesystem, is potentially
impossible with raw devices.

• They do not allow for tablespace growth.

• Handling of bad blocks may be a problem without proper hardware.

CREATING A DATABASE USING RAW DEVICES:

For example, suppose you have the syst, user, roll, temp and indx partitions. By default
these will be owned by root, so change the ownership and group to the oracle user and the
dba group. Then, to create the database, the syntax is as follows:

SVRMGRL>CREATE DATABASE RDB


CONTROLFILE REUSE
MAXLOGFILES 10
CHARACTER SET “US7ASCII”
DATAFILE ‘/dev/rdsk/c0t0d0s1’ SIZE 25M
LOGFILE ‘/disk1/oradata/NETTLINX/redo1.log’ SIZE 250K REUSE,
‘/disk2/oradata/NETTLINX/redo2.log’ SIZE 250K REUSE;

SVRMGRL>CREATE TABLESPACE USERTS


DATAFILE ‘/dev/c0t0d0s2’ SIZE 20000K REUSE
DEFAULT STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 2
MAXEXTENTS 100);

SVRMGRL>CREATE TABLESPACE UTEMP


DATAFILE ‘/dev/c0t0d0s3’ SIZE 20000K REUSE
DEFAULT STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 2
MAXEXTENTS 100);

SVRMGRL>CREATE TABLESPACE URBS


DATAFILE ‘/dev/c0t0d0s4’ SIZE 20000K REUSE
DEFAULT STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 2
MAXEXTENTS 100);

SVRMGRL>CREATE TABLESPACE DATATS


DATAFILE ‘$ORACLE_HOME/dbs/data1.dbf’ SIZE 20000K REUSE
DEFAULT STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 2
MAXEXTENTS 100);

SVRMGRL>CREATE ROLLBACK SEGMENT RBS1


TABLESPACE URBS;

SVRMGRL> ALTER ROLLBACK SEGMENT RBS1 ONLINE;

BACKUP AND RECOVERY: When you are working with RAW devices an additional
layer is introduced in the backup and recovery procedures.

1. BACKUP: First you have to use the Unix command dd, which takes two arguments:

   i) if=input file    This is the name and path of the character (raw) device
   ii) of=output file  The target file

Eg: dd if='/dev/c0t0d0s1' of=/temp/sys.dd

This command extracts data from the raw device and writes it into the file 'sys.dd'. Now
you can use regular backup utilities like tar on these files.

Suppose you have the files: sys.dd, rbs.dd, temp.dd ...

Now using tar: tar cvf /temp/bckup.tar sys.dd rbs.dd temp.dd ...

2. RECOVERY: Suppose we need to recover the datafile /dev/c0t0d0s1. For this:

   i) First extract the respective file:

      tar xvf /temp/bckup.tar /temp/sys.dd

   ii) Now, using this file, restore the data:

      dd if='/temp/sys.dd' of='/dev/c0t0d0s1' conv=bmode

Note the difference between this command and the previous one: here the output
file is the block device, NOT the character device. The conv argument converts the
character or block data into block mode. You can use this basic syntax for all
backup and recovery procedures.
AUDITING

Auditing is done to monitor regular and suspicious activity on the database. When your
auditing purpose is to monitor for suspicious database activity, consider the following
guidelines.

n Audit generally, then specifically: when starting to audit for suspicious


database activity, it is common that not much information is available to
target specific users or schema objects. Therefore, audit options must be set
more generally at first. Once preliminary audit information is recorded and
analyzed the general audit options should be turned off and more specific
audit options enabled.

n Protect the audit trail: when auditing for suspicious database activity, protect
the audit trail so that audit information cannot be added, changed or deleted
without being audited.

When auditing normal database activity, consider the following guidelines:

n Audit only pertinent actions: To avoid cluttering meaningful information with


useless audit records and reduce the amount of audit trail administration, only
audit the targeted database activities.

n Archive audit records and purge audit trail: Once you have collected the
required information, archive the audit records of interests and purge audit
trail of this information.

In order to invoke auditing you have to set these parameters in init.ora

Audit_trail = true
Audit_file_dest = specify the path in which you have created the directory for
audit

Audit_trail takes three parameters:

Db : It stores the audit information in the database only


Os : Stores the audit information in physical files
None : Auditing is disabled

AUDIT_TRAIL enables or disables the writing of rows to the audit trail. Audited
records are not written if the value is NONE or if the parameter is not present.
The OS option enables system-wide auditing and causes audited records to be
written to the operating system's audit trail. The DB option enables system-wide
auditing and causes audited records to be written to the database audit trail (the
SYS.AUD$ table). The values TRUE and FALSE are also supported for backward
compatibility. TRUE is equivalent to DB, and FALSE is equivalent to NONE.
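A minimal init.ora sketch for enabling auditing (the directory shown is only an example
path; create it beforehand and restart the instance for the change to take effect):

$ mkdir -p /disk1/oradata/NETTLINX/audit
$ vi init.ora
audit_trail = db
audit_file_dest = /disk1/oradata/NETTLINX/audit
:wq
SVRMGR>SHUTDOWN
SVRMGR>STARTUP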
Creating and deleting the database audit trail views: The database audit trail
(SYS.AUD$) is a single table in each Oracle database's data dictionary. To help
you view meaningful auditing information in this table, several predefined views are
provided. You have to run CATAUDIT.SQL as SYS to create the audit trail views. Auditing
can be done on all types of commands.
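A minimal sketch of creating (and later dropping) the audit trail views, assuming the
standard script locations:

SQL> CONNECT SYS
SQL> @$ORACLE_HOME/rdbms/admin/cataudit.sql
SQL> @$ORACLE_HOME/rdbms/admin/catnoaud.sql   (drops the audit trail views when no
                                               longer needed)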

1. TO AUDIT INSERT STATEMENT:


SQL> AUDIT INSERT;

2. To audit insert, update, delete on emp table of reddy:


SQL> AUDIT INSERT,UPDATE,DELETE ON REDDY.EMP;

3. To audit alter user command:


SQL> AUDIT ALTER USER;

4. To audit the sessions of steeve and lori:


SQL> AUDIT SESSION BY STEEVE, LORI;

5. To audit all unsuccessful select, insert and delete statements on all tables and
unsuccessful uses of the EXECUTE ANY PROCEDURE system privilege, by all database
users, by access:
SQL> AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE,
     EXECUTE ANY PROCEDURE BY ACCESS WHENEVER NOT
     SUCCESSFUL;

6. To audit all drop statements:


SQL> AUDIT DROP TABLE;

7. To audit the lock table command:


SQL> AUDIT LOCK TABLE;

8. To disable audit:
SQL> NOAUDIT;

9. To disable audit for a session:


SQL> NOAUDIT SESSION;

10. To disable select table, insert table, delete table.


SQL> NOAUDIT INSERT TABLE, SELECT TABLE,DELETE TABLE;

To view the information of auditing you have to query:

SQL> SELECT * FROM SYS.DBA_STMT_AUDIT_OPTS;
SQL> SELECT * FROM SYS.DBA_PRIV_AUDIT_OPTS;
SQL> SELECT * FROM SYS.DBA_OBJ_AUDIT_OPTS;
SQL> SELECT * FROM ALL_DEF_AUDIT_OPTS;
SQL> SELECT * FROM SYS.AUD$;
SQL> SELECT USERNAME, OBJ_NAME, ACTION_NAME, SES_ACTIONS
     FROM SYS.DBA_AUDIT_OBJECT;
SQL> SELECT USERNAME, LOGOFF_TIME, LOGOFF_LREAD,
            LOGOFF_PREAD, LOGOFF_LWRITE, LOGOFF_DLOCK
     FROM SYS.DBA_AUDIT_SESSION;

Tables of auditing:

STMT_AUDIT_OPTION_MAP
AUDIT_ACTIONS
ALL_DEF_AUDIT_OPTS
DBA_STMT_AUDIT_OPTS
USER_OBJ_AUDIT_OPTS, DBA_OBJ_AUDIT_OPTS
USER_AUDIT_TRAIL, DBA_AUDIT_TRAIL
USER_AUDIT_STATEMENT, DBA_AUDIT_STATEMENT
USER_AUDIT_OBJECT, DBA_AUDIT_OBJECT
DBA_AUDIT_EXISTS
USER_AUDIT_SESSION, DBA_AUDIT_SESSION
USER_TAB_AUDIT_OPTS
LOCK MANAGEMENT

Oracle guarantees concurrency, consistency and integrity at any point in time. To
implement this, LOCKS are used. Locks are mechanisms used to prevent destructive
interactions between users accessing the same resource, such as an entire table or a
specific row in a table. A destructive interaction is any interaction
that incorrectly updates data or incorrectly alters underlying data structures, such as
tables, column definitions, indexes or privilege grants.

There are two types of locks: IMPLICIT LOCKS and EXPLICIT LOCKS.
Implicit locks are created by Oracle whereas explicit locks are user-created. Locks
can be acquired at two levels: row level and table level.

Row level: A row is always locked exclusively so that other users cannot modify the
row until the transaction holding the lock is committed or rolled back. Row locks
are always acquired automatically by Oracle as a result of the statement.

Table level: A transaction acquires a table lock when a table is modified by the
following DML statements: INSERT, UPDATE, DELETE, SELECT ... FOR UPDATE and LOCK TABLE.
A table lock can be held in any of several modes: row share (RS), row
exclusive (RX), share (S), share row exclusive (SRX) and exclusive (X).

SQL STATEMENT                             MODE OF      LOCK MODES PERMITTED
                                          TABLE LOCK   RS   RX   S    SRX  X

SELECT ... FROM table                     none         Y    Y    Y    Y    Y
INSERT INTO table                         RX           Y    Y    N    N    N
UPDATE table                              RX           Y*   Y*   N    N    N
DELETE FROM table                         RX           Y*   Y*   N    N    N
SELECT ... FROM table FOR UPDATE OF ...   RS           Y*   Y*   Y*   Y*   N
LOCK TABLE table IN ROW SHARE MODE        RS           Y    Y    Y    Y    N
LOCK TABLE table IN ROW EXCLUSIVE MODE    RX           Y    Y    N    N    N
LOCK TABLE table IN SHARE MODE            S            Y    N    Y    N    N
LOCK TABLE table IN SHARE ROW             SRX          Y    N    N    N    N
  EXCLUSIVE MODE
LOCK TABLE table IN EXCLUSIVE MODE        X            N    N    N    N    N

RS: row share   RX: row exclusive   S: share
SRX: share row exclusive   X: exclusive

*: if no conflicting row locks are held by another transaction; otherwise, waits occur.

To lock a table in different modes:

SQL>LOCK TABLE EMP IN ROW SHARE MODE;


SQL>LOCK TABLE EMP IN EXCLUSIVE MODE;
SQL>LOCK TABLE EMP IN SHARE MODE;
SQL>LOCK TABLE EMP IN SHARE ROW EXCLUSIVE MODE;
SQL>LOCK TABLE EMP IN ROW EXCLUSIVE MODE;

To view the information about locks you have to look into these tables:

V$LOCK
V$LOCKED_OBJECT
DBA_OBJECT_LOCK
V_$_LOCKS
DBMS PACKAGES

Creating user locks with Oracle lock management services: You can use Oracle
lock management services for your applications. It is possible to request a lock of a
specific mode, give it a unique name, change the lock mode, and release it. The
Oracle lock management services are available through procedures in the
DBMS_LOCK package. The following procedures are callable from DBMS_LOCK
package.

Function/Procedure Description

ALLOCATE_UNIQUE Allocate a unique lock ID to a named lock


REQUEST Request a lock of a specific mode
CONVERT Convert a lock from one mode to another
RELEASE Release a lock
SLEEP Put a procedure to sleep for a specified
Time

Naming locks:

DBMS_LOCK.ALLOCATE_UNIQUE(LOCKNAME IN Varchar2,
LOCKHANDLE OUT Varchar2,
EXPIRATION_SECS IN Integer default
864000);

Lockname: Specify the name of the lock for which you want to generate
a unique ID
Lock handle: Returns to the caller the handle to the lock ID generated
Expiration_secs: Specify the number of seconds to wait after the last
ALLOCATE_UNIQUE.

For eg:
SQL> VARIABLE LH VARCHAR2(128)
SQL> EXEC DBMS_LOCK.ALLOCATE_UNIQUE('TESTLOCK', :LH);

Requesting a lock: To request a lock with a given mode, use the REQUEST function. It is
overloaded so that it can be called with either the lock ID or the lock handle:

DBMS_LOCK.REQUEST(ID IN INTEGER,
                  LOCKMODE IN INTEGER DEFAULT X_MODE,
                  TIMEOUT IN INTEGER DEFAULT MAXWAIT,
                  RELEASE_ON_COMMIT IN BOOLEAN DEFAULT FALSE)
RETURN INTEGER;

For eg:
SQL> SELECT * FROM DBA_LOCKS;

If you get a lock id of 3300 then,

SQL> VARIABLE STATUS NUMBER
SQL> EXEC :STATUS := DBMS_LOCK.REQUEST(3300, DBMS_LOCK.S_MODE, 20, TRUE);
     /* requesting the lock in share mode */

Converting a lock: To convert a lock from one mode to another, use the CONVERT function
(also overloaded for lock ID or lock handle):

DBMS_LOCK.CONVERT(ID IN INTEGER,
                  LOCKMODE IN INTEGER,
                  TIMEOUT IN NUMBER DEFAULT MAXWAIT)
RETURN INTEGER;

RETURN VALUE       DESCRIPTION

0                  Success
1                  Timeout
2                  Deadlock
3                  Parameter error
4                  Don't own lock specified by ID or lock handle
5                  Illegal lock handle

For eg:

To convert the lock from shared to exclusive:

SQL> EXEC :STATUS := DBMS_LOCK.CONVERT(3300, DBMS_LOCK.X_MODE);

To release a lock:

DBMS_LOCK.RELEASE(ID IN INTEGER) RETURN INTEGER;
SQL> EXEC :STATUS := DBMS_LOCK.RELEASE(3300);

To suspend the session for a given period of time, use the SLEEP procedure.

DBMS_LOCK.SLEEP(SECONDS IN NUMBER);
SQL>EXEC DBMS_LOCK.SLEEP(10);

The DBMS_PIPE package allows two or more sessions in the same instance to
communicate. Oracle pipes are similar in concept to pipes in UNIX, but Oracle
pipes are not implemented using the operating system pipe mechanism;
information sent through Oracle pipes is buffered in the SGA. All information in the
pipes is lost when the instance is shut down. Depending on the security requirements,
you use either a public or a private pipe. The following table shows the
procedures that can be called:

Function/procedure Description

CREATE_PIPE Explicitly create the pipe


Pack_message Build message in the local buffer
Send_message Send messages on the named pipe
Receive_message Copy message from named pipe into
Local buffer
Next_item_type Return datatype of next item in buffer

Unpack_message Access next item in buffer


Remove_pipe Remove the named pipe
Purge Purge contents of named pipes
Reset_buffer Purge contents of local buffer
Unique_session_name Return unique session name

To create a new pipe:

DBMS_PIPE.CREATE_PIPE(PIPE_NAME IN VARCHAR2,
MAXPIPESIZE IN INTEGER DEFAULT 8192,
PRIVATE IN BOOLEAN DEFAULT TRUE)
RETURN INTEGER;

Pipe name : Specify the name of the pipe that you are creating. The name
Must be unique across the instance.
Maxpipesize : Specify the maximum size allowed for the pipe in bytes. The
Total sizes of all the messages on the pipe cannot exceed this
Amount
Private : Use the default, TRUE, to create a private pipe. Public pipes can
Be created implicitly when you call SEND_MESSAGE

For eg:

SQL> VARIABLE STATUS NUMBER
SQL> EXEC :STATUS := DBMS_PIPE.CREATE_PIPE('TESTPIPE', 8192, TRUE);

To remove explicitly: Pipes created implicitly by SEND_MESSAGE are automatically


removed when empty

DBMS_PIPE.REMOVE_PIPE(PIPENAME IN VARCHAR2)
RETURN INTEGER;
SQL> EXEC :STATUS := DBMS_PIPE.REMOVE_PIPE('TESTPIPE');

To purge, empty, the contents of a pipe call PURGE:

DBMS_PIPE.PURGE(PIPENAME IN VARCHAR2)
SQL>EXEC DBMS_PIPE.PURGE(‘TESTPIPE2’);

Receive a message: To receive a message from a pipe, first call RECEIVE_MESSAGE
to copy the message into the local message buffer. When you receive a message, it is
removed from the pipe; that is, a message can be received only once.

DBMS_PIPE.RECEIVE_MESSAGE(PIPENAME IN VARCHAR2,
TIMEOUT IN Integer default maxwait)
RETURN INTEGER;

For eg:

STATUS := DBMS_PIPE.SEND_MESSAGE('PROC1', 10);
STATUS := DBMS_PIPE.RECEIVE_MESSAGE('TESTPIPE', 10);
where STATUS is an integer variable and PROC1 and TESTPIPE are pipe names.
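A minimal sketch of two sessions communicating through a pipe (the message text is only
an example):

-- session 1: build a message in the local buffer and send it on the pipe
SQL> DECLARE
       STATUS INTEGER;
     BEGIN
       DBMS_PIPE.PACK_MESSAGE('hello from session 1');
       STATUS := DBMS_PIPE.SEND_MESSAGE('TESTPIPE');
     END;
     /

-- session 2: wait up to 10 seconds for a message, then unpack and display it
SQL> SET SERVEROUTPUT ON
SQL> DECLARE
       STATUS INTEGER;
       MSG    VARCHAR2(100);
     BEGIN
       STATUS := DBMS_PIPE.RECEIVE_MESSAGE('TESTPIPE', 10);
       IF STATUS = 0 THEN
         DBMS_PIPE.UNPACK_MESSAGE(MSG);
         DBMS_OUTPUT.PUT_LINE(MSG);
       END IF;
     END;
     /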

Creating alerts: The DBMS_ALERT package provides support for the asynchronous
notification of database events. By appropriate use of this package and database
triggers, an application can cause itself to be notified whenever values of interest in
the database are changed. The following table shows the procedures included in this
package:

Function/procedure      Description

Register                Register interest in receiving messages from an alert
Remove                  Disable notification from an alert
Signal                  Signal an alert (send a message to registered sessions)
Waitany                 Wait up to timeout seconds to receive a message from any
                        alert registered for the session
Waitone                 Wait up to timeout seconds to receive a message from a
                        named alert
Set_defaults            Set the polling interval

To set the default polling interval for alerts:

DBMS_ALERT.SET_DEFAULTS(POLLING_INTERVAL IN NUMBER);

For eg:
SQL> EXEC DBMS_ALERT.SET_DEFAULTS(120);

To register an alert:

DBMS_ALERT.REGISTER(NAME IN VARCHAR2);

For eg:

SQL>EXEC DBMS_ALERT.REGISTER(‘ALERT1’);

To signal an alert:

DBMS_ALERT.SIGNAL(NAME IN VARCHAR2,
                  MESSAGE IN VARCHAR2);

For eg:

SQL> EXEC DBMS_ALERT.SIGNAL('ALERT1', 'test message');

To disable notification from an alert:

SQL> EXEC DBMS_ALERT.REMOVE('ALERT1');
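The package is typically used together with a database trigger; a minimal sketch (the
trigger name, table and alert name are only examples, and the signal is delivered to
registered sessions only when the signalling transaction commits):

SQL> CREATE OR REPLACE TRIGGER EMP_ALERT
     AFTER INSERT ON EMP
     BEGIN
       DBMS_ALERT.SIGNAL('ALERT1', 'new employee inserted');
     END;
     /

-- in the session that wants to be notified
SQL> SET SERVEROUTPUT ON
SQL> DECLARE
       MSG    VARCHAR2(200);
       STATUS INTEGER;
     BEGIN
       DBMS_ALERT.REGISTER('ALERT1');
       DBMS_ALERT.WAITONE('ALERT1', MSG, STATUS, 60);   -- wait up to 60 seconds
       IF STATUS = 0 THEN
         DBMS_OUTPUT.PUT_LINE(MSG);
       END IF;
     END;
     /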

Usage of DBMS_JOB: This package allows control of the Oracle job queue, allowing
DBAs to schedule, execute and remove jobs from within Oracle itself,
independent of the operating system queueing mechanisms.
To submit a job:

SQL> ED ins.sql
BEGIN
  FOR I IN 1..10 LOOP
    INSERT INTO TEST VALUES (I);
  END LOOP;
END;
VARIABLE X NUMBER
DBMS_JOB.SUBMIT(:X, '<what: the PL/SQL to execute>', SYSDATE, '<interval expression>');

For eg:

SQL> EXEC DBMS_JOB.SUBMIT(:X, 'BEGIN FOR I IN 1..10 LOOP INSERT INTO TEST VALUES (I); END LOOP; END;', SYSDATE, 'SYSDATE+1/(24*60*60)');

To remove the job:

DBMS_JOB.REMOVE(JOB NUMBER);

For eg:

SQL>SELECT * FROM DBA_JOBS;

If job number is 99 then,

SQL>EXEC DBMS_JOB.REMOVE(99);

To change a specified job:

DBMS_JOB.CHANGE(JOB, WHAT, NEXT_DATE, INTERVAL);

For eg:

SQL> EXEC DBMS_JOB.CHANGE(99, NULL, SYSDATE, 'SYSDATE+1');
/* changing the interval from 1 second to 1 day; NULL leaves WHAT unchanged */

To run a job:

DBMS_JOB.RUN(JOBNUMBER);

For eg:

SQL>EXEC DBMS_JOB.RUN(99);

You can find the job number in the DBA_JOBS view.

The parameters specified above are:

Name : Program name, which you want to submit as a job


Sysdate : From which date you want to start a job
Next : The next interval when you want to execute the job
Jobnumber : The number assigned to a job.
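For submitted jobs to run automatically in the background, the job queue processes must
be enabled in init.ora; a minimal sketch (the values are only examples), together with a
query to check the queue:

$ vi init.ora
job_queue_processes = 2      # number of SNP background processes servicing the queue
job_queue_interval = 60      # how often (in seconds) the SNP processes wake up
:wq

SQL> SELECT JOB, WHAT, NEXT_DATE, INTERVAL, BROKEN FROM DBA_JOBS;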
INSTALLATION OF ORACLE ON LINUX 5.2

Install Linux 5.2.

Make sure you have enough disk space (at least 450 MB) to install Oracle. Preferably
use a separate filesystem (other than "root"); also try to avoid putting:

n the database engine
n the database files

on the same filesystem.

The most preferred (or minimum requirements are)

/root FS----------------- 500Mb


/oraeng FS----------------- 500Mb
/disk1 FS----------------- 1 GB
/disk2 FS----------------- 1 GB
/disk3 FS----------------- 1 GB

In the above eg., /disk1, /disk2 and /disk3 are external disk subsystems. The reason
we lay things out like this is that if the internal disk gets corrupted, we can simply re-
install Linux after replacing the drive and everything can function normally. Also
make sure your external drives are running with either RAID-0 or RAID-5, so that
disk problems won't stop the show.
Log in as root and do the following things:

1. Create the directory structure to hold the software


#mkdir -p /oraeng/app/oracle/product

2. Create group dba.


#groupadd -g 500 dba

3. Create a user called “oracle8” in which user account you’ll be installing the
software.
#useradd -u 501 -g 500 -s /bin/ksh -c "Oracle Owner" -d /oraeng/app/oracle/product/
8.0.5 -m oracle8

4. Change the group of all the slices to the oracle group i.e., "dba".
#chgrp -R dba /oraeng /disk1 /disk2 /disk3

5. Change the owner of all the slices to "oracle8"

#chown -R oracle8 /oraeng /disk1 /disk2 /disk3

6. To install Oracle 8 on Linux, you have to edit a file (shmparam.h) which is in
/usr/src/linux/include/asm; as the default KERNEL parameters won't allow
Oracle to grab even 5 MB of SGA, you have to make some KERNEL parameter
updates in the shmparam.h file.

#vi /usr/src/linux/include/asm/shmparam.h
#define SHM_IDX_BITS 16
#define SHMMAX 0x2000000
#define SHMMNI 100
#define SHMSEG 10
#define ULIMIT 2113674
:wq

Now login as Oracle8 user

7. Update your profile to suit your environment and do the following things

$vi .bash_profile

ORACLE_SID=ORCL; export ORACLE_SID


ORACLE_HOME=/oraeng/app/oracle/product/8.0.5; export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
PATH=$PATH:$ORACLE_HOME/bin; export PATH
:wq

8 Reboot the server to get changed values into effect


# init 6 (a command to reboot the server)

Login as root:

9. Mount the cdrom and answer appropriately

#mount -t iso9660 /dev/cdrom /mnt


#cd /mnt/orainst
#sh oratab.sh

10. Now login as oracle8 and start the installation process

$ cd /mnt/orainst
$ sh orainst

Now you answer appropriately and select what all Software you need. Also make
sure to select “New Installation with creation of DB objects” also. By doing so, the
Installer process will try to create a Database with the name ORCL (since ORCL is
your ORACLE_SID from your .bash_profile). This is very important since the software
May get installed right, but fails to create the database later. That is why it is
important to ask the installer to create the database also.

11. After the software is installed correctly, login as root again and do the
following things:

#cd /oraeng/app/oracle/product/8.0.5/orainst
#sh root.sh
But in many companies where you go as a DBA, in most cases the database is
already installed and you may be asked to UPGRADE the Oracle version, eg: from
7.1 to 7.3.

There are 2 ways you can do it. 1st way:(my preferred way)

1. First make a COLD backup of the Instances (databases) to the TAPE.


2. Also backup your Oracle HOME.
3. Now you start all the instances and take “full export”

$exp sys/sys file=expPROD_FULL.DMP log=expPROD_FULL.Log full=y


$exp sys/sys file=expDEVL_FULL.DMP log=expDEVL_FULL.Log full=y

So, here we are taking care of 2 databases.


4. Now you shutdown all the databases, including your listener. Make sure
No background processes are running for oracle.

Copy your dbs directory to /tmp (since it has all the INIT.ORA files)
$cd /oraeng
$rm –r app

Now follow above steps to install new oracle version from scratch. After the database
is installed again go to

$cd $ORACLE_HOME/orainst
$sh orainst

Here you choose "Upgrade DB objects" instead of "Install New Software"; then it'll
confirm your ORACLE_SID and try to upgrade the database from ver 7.1 to 7.3.
So, if you really observe, it is a 2-step process:

Upgrading the Engine


Upgrading the Database.

In case it doesn't work, you can always recreate your database and import the data
from your step 3 for all databases. If you have problems, you can always go back to
your original 7.1 engine and the databases, using the COLD backup and /oraeng
backup from steps 1 and 2.
INSTALLATION OF ORACLE 8.1.5 ON SUN-SPARC
(Oracle Enterprise Edition—OEE)

SYSTEM REQUIREMENTS:

1. 128MB RAM(Min)
2. Swap – Twice the RAM
3. CD-ROM—Oracle uses ISO-9660 Format CD-ROM disks with rockridge
Extensions

DISK SPACE REQUIREMENTS:

1. Minimal – 660mb(OEE) (Without Client&Programmer/2000)


2. Typical - 811mb(OEE) + 306Mb(Client) + 276Mb(Programmer/2000)

OS REQUIREMENTS: SOLARIS 2.6 OR 2.7 with the following packages: SUNWarc,
SUNWbtool, SUNWlibm, SUNWlibms, SUNWtoo, SUNWhea

Perform the following actions after logging in as ROOT user:

1. install Sun OS on the Sparc m/c with at least three mount points (/disk1,
/disk2, /disk3) for the Database storage files & one mount point (/oraeng)
for the software.

2. Create the necessary directory structures (i.e., app/oracle/product)


under /oraeng

3. Create groups DBA, Oinstall, Operator

4. Create user ORACLE8 and assign him to the Oinstall group

5. Transfer the ownership and group of these filesystem (oraeng, /disk1,


/disk2, /disk3) to ORACLE8 and Oinstall respectively.

6. Following are lines you would add to the /etc/system file to configure the
UNIX kernel with the minimum recommended values:

set shmsys:shminfo_shmmax=4294967295 (min)
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmns=200
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767

And REBOOT the System

7. Create a directory /var/tmp such that the installer (oracle8) has write
permission over it and it has at least 20MB space in it.

8. Mount the CD-ROM


$ mount -r -F hsfs <device_name> /mnt, or

if the automount option is there, the CD is automatically mounted on /cdrom

Actions to be done as oracle8 user:

1. Set the following parameters after logging in, in the .profile

ORACLE_HOME=/oraeng/app/oracle/product/8.1.5; export ORACLE_HOME
ORACLE_SID=ORCL; export ORACLE_SID
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/ucblib:/usr/openwin/lib:/usr/dt/lib;
export LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/ccs/bin:/usr/ucb/bin:.;
export PATH
CLASSPATH=$ORACLE_HOME/jlib; export CLASSPATH
DISPLAY=<Host_Name>:0.0; export DISPLAY

Note: Display setting is valid for only, workstations using bourne or korn shells. For
C shells the display setting is: setenv DISPLAY <HOST_Name>:0.0

2. Set the permissions on /disk1, /disk2, /disk3 & /oraeng, i.e., type umask at the
$ prompt and check that it shows 022.
3. cd /cdrom/cdrom0
4. ./runInstaller i.e., Start the installation by executing this file & answer
appropriately to complete the installation.
5. After the required software has been installed, login as ROOT user and run
root.sh file present in /oraeng/app/oracle/product/8.1.5

The Kernel Parameters

Shared Memory and Semaphore Parameters

Parameter   Recommended Value   Description

Shmmax      4294967295          The maximum size (in bytes) of a single shared
                                memory segment
Shmmin      1                   The minimum size (in bytes) of a single shared
                                memory segment
Shmmni      100                 The number of shared memory identifiers
Shmseg      10                  The maximum number of shared memory segments
                                that can be attached by a process
Semmns      200                 The number of semaphores in the system
Semmni      100                 The number of semaphore set identifiers in the
                                system
Semmsl      100                 Equal to the value of the PROCESSES
                                initialization parameter

Note: The maximum number of semaphores that can be in one semaphore set
should be equal to the maximum number of Oracle processes.

New features in Oracle8

n Tables and indexes in Oracle8 can be partitioned.

n Table and index partitions are defined with three new parts to the CREATE TABLE
and CREATE INDEX statements: PARTITION BY RANGE (column) to define the
partition key, VALUES LESS THAN (value) to define the upper bound of each
partition's sub-range, and the tablespace location and storage parameters. Only
the storage parameters need be preceded by the STORAGE clause. (A sketch of
such a statement appears at the end of this list.)
n Tables and indexes can have up to 64,000 partitions.
n Oracle8 can store up to 512 petabytes of data.
n The maximum number of tablespaces allowed is about 2 billion
n The number of datafiles per tablespace is 1,022.
n The number of columns supported in a table now is 1000.
n The maximum number of indexed columns is 32.
n No table containing a column defined to be type LONG, LONG RAW or any of
the new LOB data types can be partitioned.
n A table’s partitions can be altered in several ways.
n The ALTER TABLE DROP PARTITION statement drops a named partitioned
and its contents.
n The alter table add partition statement adds a partition over and above the
highest range currently existing on a partition in the table.
n The ALTER TABLE RENAME PARTITION statement renames a partition from
one thing to another.
n The ALTER TABLE MODIFY PARTITION statement sets the equi-partitioned local index
data to INDEX UNUSABLE status, allows the equi-partitioned local index to
be rebuilt, or allows the DBA to change physical storage parameters (but not
tablespace location).
n The ALTER TABLE TRUNCATE PARTITION statement deletes all data from the
table partition.
n The ALTER TABLE SPLIT PARTITION statement splits one partition into two.
n The ALTER TABLE MOVE PARTITION statement moves the partition to another
extent in the same tablespace or into another tablespace
n The ALTER TABLE EXCHANGE PARTITION statement turns a partition into its
own table or vice versa.
n The following restrictions apply to changing column and constraint definitions
on a partitioned table.
n The partition key’s datatype or size cannot be changed.
n All values for the partition-key column must be accommodated by a partition,
thus range of A-P, Q-Z, are allowed, but ranges A-O, R-Z are not.
n If no partition defined for the partitioned object contains values less than
(maxvalue), the partition-key column cannot contain NULL values.

n An insert on the table will fail if the value specified for the partition is outside
any range specified for any partition on the table.
n In addition to the dictionary views used to obtain information about tables and
indexes, the following new dictionary views support use of partitions. DBA_PART
_TABLES gives information about how the table is partitioned for all tables in
the database. DBA_PART_INDEXES gives information about how the index is
partitioned for all tables in the database. DBA_PART_KEY_COLUMNS identifies
the partition key used for all the tables and indexes in the database.
DBA_TAB_PARTITIONS offers information about the partitions of all tables in
the database. DBA_IND_PARTITIONS gives information about the partitions of
all indexes in the database.
n DBA_PART_COL_STATISTICS lists statistics for cost-based optimization for
partition columns, for all the tables and indexes in the database. DBA_PART_
HISTOGRAMS shows the distribution of data in partitions for all partitions in the
Database. DBA_TAB_HISTOGRAMS shows the distribution of the data in tables
For all tables in the database.
n Several utilities were changed to accommodate partitions. They include explain
plan, analyze, SQL*Loader. Export and Import. For explain plan, three new
columns were added to the PLAN_TABLE, called PARTITION_START. PARTITION_
STOP and PARTITION_ID. A new operation called partition was added, along with
Three new options for its execution, concatenated, single and empty. Some new
Options for the table access operation were added as well, corresponding to the
New indexes that are available. The options for TABLE ACCESS are by user
ROWID, by global index ROWID and by local index ROWID.
n For SQL*Loader, there are changes to the conventional path and the direct path
for conventional path. SQL*Loader may load one partition only, but several loads
can operate on the same table but different partitions to execute data loads on
partitioned tables more quickly. For the direct path. SQL*Loader allows the
PARALLEL parameter, which is set to true or false depending upon whether the
DBA wants to load an individual partition using the direct path in parallel.
n For IMPORT & EXPORT entire partitioned tables can be imported or exported or
an individual partition can be done.
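As a brief illustration of the partitioning syntax described in this list, a minimal sketch
(the table name, columns, range bounds and tablespaces are only examples):

SQL> CREATE TABLE SALES_HIST
     (SALE_ID NUMBER, SALE_DATE DATE, AMOUNT NUMBER)
     PARTITION BY RANGE (SALE_DATE)
     (PARTITION P1998 VALUES LESS THAN (TO_DATE('01-JAN-1999','DD-MON-YYYY'))
        TABLESPACE TS1998 STORAGE (INITIAL 100K NEXT 100K),
      PARTITION P1999 VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY'))
        TABLESPACE TS1999,
      PARTITION PMAX VALUES LESS THAN (MAXVALUE)
        TABLESPACE TSMAX);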
DBA LAB EXERCISE
