NETTLINX
DATABASE ADMINISTRATOR ROLE
Finally, an attribute of a perfect DBA, one that separates him or her from others, is the
determination and will not to give up till the last minute.
Responsibilities of a DBA:
Must be responsible for putting security in place to make certain that only the
right people can access the right data. A DBA works closely with the data
architects to implement the database design.
Must work closely with the technical team to ensure adherence to the policies
and procedures pertaining to the database. This includes developing policies
to control the movement of applications onto a production database.
Must monitor the growth of the database to ensure the smooth functioning of
daily activities.
Must monitor performance. This is a critical function of a DBA. He or she must
establish baselines and compare the database performance against them to
ensure adequate performance.
Must tend to daily administration of the database.
Must be able to tackle issues as soon as they spring up. A DBA's position is
one of the most technically challenging roles that exists within all the teams.
Must be available 24x7.
Must work closely with the system administrator to install all software and
patches.
Must have political skills. For example, a DBA might not want to upgrade the
system on the busiest day of the year. Common sense is required.
Must ensure appropriate backup and recovery procedures are in place to
meet the business requirements. If a project is not backed up and the
database is lost, probably a month or more of the project's teamwork would
be lost.
Installing and upgrading the Oracle server and application tools.
Configure or aid in the configuration of the computer network.
Allocate system storage and plan for future storage requirements for the
database system.
Manage logical & physical database structures.
Control and monitor user access to the database.
Tune and troubleshoot the database.
Plan and implement appropriate backup and recovery strategies for the
database.
Minimize database downtime.
Contact Oracle Corporation for technical support.
Physical structure:
Datafiles
Redologs
Control files
Data files:
An Oracle database has one or more physical data files that hold the actual data of all
logical structures like tables, indexes, etc. A data file can be associated with only one
database and only one tablespace.
Redolog files:
The primary function of redologs is to record all the changes made to the database
before they are written to the data files. These files can be mirrored and are used in
performing recovery methods.
Control files:
These files record control information about all files within the database. They are used
to maintain internal consistency and play a vital role in recovery operations. These
files can also be mirrored. Oracle modifies control files automatically; users cannot
edit them. A control file is divided into five parts:
Information about the database: the total number of data files, redologs and threads
that are enabled (parallel server).
Information about each log group and the current log group that LGWR is writing to.
Each member of a log group: the size, path, full name, log sequence number, etc.
Each datafile: size, full name, path, status, etc.
Log history of the database.
The logical structure comprises tablespaces and schema objects like tables, indexes,
views, etc.
Table space:
It is a logical area of storage in a database that directly corresponds to one or more
physical data files.
Schema Objects:
These are the logical structures, such as tables, indexes and views, that directly refer
to the database's data.
INSTANCE
A system global area (SGA) and the Oracle background processes constitute an instance.
SGA: It is a shared memory region allocated by Oracle that contains data & control
information for an Oracle instance. An SGA comprises the buffer cache, redolog
buffers and the shared pool area.
Buffer Cache: The buffer cache stores the most recently used blocks of data. It can also
hold modified data that has not yet been permanently written to disk. When a row in
a table is updated, a foreground server process reads the datafile information on the
disk into the buffer cache and then modifies the data block in memory. If another
user requests new data and no data block is free in the buffer cache, DBWR is called
and blocks from the buffer cache are written to the datafiles using the LRU (Least
Recently Used) mechanism.
Shared pool: The shared pool comprises the library cache and the dictionary cache. The
library cache stores and shares SQL statements and PL/SQL procedures in memory.
Library cache: When a SQL statement is issued, Oracle parses the statement and
determines the most efficient execution plan for it. Oracle then caches the statement
in the shared pool, and if another user issues the same statement, Oracle shares the
statement already in memory rather than repeating the same steps.
PGA: The PGA (Program Global Area) is the memory buffer that contains data and
control information for a server process.
Eg: A client’s server process uses its PGA to hold the state of the session’s program
variables and packages.
Background Process:
DBWR Database Writer
LGWR Log Writer
CKPT Checkpoint
SMON System Monitor
PMON Process Monitor
RECO Recoverer
Dnnn Dispatcher
LCKn Lock
Snnn Server
DBWR: It writes blocks from the buffer cache to the appropriate data files. It writes
a block in memory back to disk when DBWR sits idle for a few seconds, when a
foreground server process wants to read a new block into memory but no free
space is available, or when Oracle performs a checkpoint.
LGWR: LGWR writes redolog entries generated in the redo log buffer to an online
redo log file. As a transaction is carried out, Oracle creates small records
called redo entries that contain just enough information to regenerate the
changes made by the transaction. Oracle temporarily stores the transaction's redo
entries in the server's redo log buffer, a small memory area that temporarily
caches redo entries for all system transactions. Oracle does not consider a
transaction committed until LGWR successfully writes the transaction's redo entries
and a commit record to the transaction log.
It writes:
When the log buffer is full.
When a transaction is committed.
Every 3 seconds.
When the log buffer is one-third full.
CKPT: It is responsible for signaling DBWR at checkpoints and updating the headers of
all the data files and control files of the database. It is optional; its duty can be
performed by LGWR. The purpose of a checkpoint is to establish mileposts of transaction
consistency on disk. A checkpoint indicates how much of the transaction log's redo
entries Oracle must apply if a server crash occurs and a database recovery is
necessary.
SMON:
It performs instance recovery at instance startup.
In a multiple-instance configuration, it recovers other instances that have failed.
Cleans up temporary segments that are no longer in use.
Recovers dead transactions skipped during crash and instance recovery.
Coalesces the free extents within the database, to make free space contiguous
and easy to allocate.
PMON: It performs process recovery when a user process fails and is also
responsible for cleaning up the cache and freeing the resources that the process was
using. It also checks on dispatcher and server processes and restarts them in case of
failure.
ARCH: It copies online redo log files to the configured destination when they are
full. It is active only when the database's redo log is used in archivelog mode. The
sequential set of archived transaction log files that ARCH creates is called the
archived transaction log.
RECO: RECO is used to resolve distributed transactions that are pending due to a
network or system failure in a distributed database. At timed intervals, the local RECO
attempts to connect to the remote database and automatically commit or
roll back the local portion of any pending distributed transactions.
LCKn: It is used for inter-instance locking when the Oracle Parallel Server option is
used.
Low and High SCN: When a redolog file is filled up, Oracle switches to the next redolog
file. The new redolog file is marked with a low SCN, which is one greater than the high
SCN of the previous log file. The low SCN represents the lowest value of the change
number stored in that log file. Similarly, when the log file is closed, the high
SCN mark is set to the highest SCN recorded in the log file.
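The low and high SCNs recorded for each log can be seen in the data dictionary. A
minimal sketch, assuming the Oracle8 column names (Oracle7 calls them
LOW_CHANGE# and HIGH_CHANGE#):
SQL>SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$LOG_HISTORY;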
DATABASE CREATION
Database creation prepares several operating system files so that they work together
as an Oracle database. A database needs to be created only once, regardless of how many
datafiles it has or how many instances access it.
Create the initialization (parameter) file by copying the sample init.ora file to
init<ORACLE_SID>.ora. The name of the file can be anything, but then the name has to be
specified explicitly at the time of database startup.
$ cd $ORACLE_HOME/dbs
$ cp init.ora initNETTLINX.ora   (as your ORACLE_SID=NETTLINX)
Make the necessary changes in your init<ORACLE_SID>.ora file. E.g., if
db_name=DEFAULT, change it to db_name=NETTLINX.
$ vi initNETTLINX.ora
db_name=NETTLINX
control_files=(/disk1/oradata/NETTLINX/cont1.ctl,/disk2/oradata/NETTLINX/cont2.ctl)
background_dump_dest=/disk1/oradata/NETTLINX/bdump
user_dump_dest=/disk1/oradata/NETTLINX/udump
core_dump_dest=/disk1/oradata/NETTLINX/cdump
:wq
Create the necessary directories to place database files, redolog files, control files
and the dump_dest directories.
$ cd /disk1/oradata
$ mkdir NETTLINX
$ cd NETTLINX
$ mkdir bdump cdump udump   (create these directories as specified in
init<ORACLE_SID>.ora)
$ cd /disk2/oradata
$ mkdir NETTLINX
$ cd /disk3/oradata
$ mkdir NETTLINX
Execute the CREATE DATABASE command, which is defined in the following lines (i.e., a
script written in a file 'cr8NETTLINX.sql' using "vi" and then executed).
$ vi cr8NETTLINX.sql
CREATE DATABASE NETTLINX
DATAFILE '/disk1/oradata/NETTLINX/system01.dbf' SIZE 25M
LOGFILE GROUP 1 ('/disk1/oradata/NETTLINX/redolog1a.log',
                 '/disk2/oradata/NETTLINX/redolog1b.log') SIZE 250K,
        GROUP 2 ('/disk1/oradata/NETTLINX/redolog2a.log',
                 '/disk3/oradata/NETTLINX/redolog2b.log') SIZE 250K
CONTROLFILE REUSE;   (the control file paths given in init.ora will be reused)
:wq
At the $ prompt, issue the command svrmgrl (Server Manager line mode), which will take
you to the "SVRMGR>" prompt.
SVRMGR>CONNECT INTERNAL
SVRMGR>STARTUP NOMOUNT
SVRMGR>@cr8NETTLINX.sql
When you execute this statement, Oracle performs the operations needed to create the
database. After the above statement is processed, the CATALOG and CATPROC scripts are
to be executed, as user "SYS"; they are present in the
"$ORACLE_HOME/rdbms/admin" directory.
The commands are as follows:
SVRMGR>@$ORACLE_HOME/rdbms/admin/catalog.sql #as sys or internal
SVRMGR>@$ORACLE_HOME/rdbms/admin/catproc.sql #as sys or internal
Then, connect as system/manager and execute pupbld.sql. The commands are:
SVRMGR>CONNECT SYSTEM/MANAGER
SVRMGR>@$ORACLE_HOME/sqlplus/admin/pupbld.sql
For loading the SQL*Plus help into the database, give the following commands:
$ cd $ORACLE_HOME/sqlplus/admin/help
$ loadhelp
Then, to get help, go to SQL*Plus:
SQL>HELP <command>
E.g., to get help on CREATE TABLE:
SQL>HELP CREATE TABLE
Table spaces
A database is divided into one or more logical storage units called Tablespaces. A
database administrator can use the Tablespaces to do the following:
A tablespace can be marked permanent or temporary, either while it is created or
afterwards (the default is permanent). The database NETTLINX [created earlier] requires
4 tablespaces. They can be created as follows:
SQL>CREATE TABLESPACE USER_NETTLINX DATAFILE
‘/disk1/oradata/NETTLINX/user_NETTLINX01.dbf’ SIZE 2M
DEFAULT STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 1
MAXEXTENTS 50 PCTINCREASE 0);
SQL>CREATE TABLESPACE TEMP_NETTLINX DATAFILE
'/disk1/oradata/NETTLINX/temp_NETTLINX01.dbf' SIZE 2M
TEMPORARY ONLINE;
SQL>CREATE TABLESPACE INDEX_NETTLINX DATAFILE
'/disk1/oradata/NETTLINX/index_NETTLINX01.dbf' SIZE 2M;
SQL>CREATE TABLESPACE RBS_NETTLINX DATAFILE
‘/disk1/oradata/NETTLINX/rbs_NETTLINX01.dbf’ REUSE;
Examples:
Second Method:
STORAGE PARAMETERS
Every Tablespace has default storage parameters. To override the system defaults in
that Tablespace a user can specify the parameters while creating the objects. The
following are the parameters:
INITIAL: The size in bytes of the first extent allocated when a segment is created.
Though default system values are given in data blocks, use bytes to set a value for this
parameter. You can also use the abbreviations K and M to indicate kilobytes and
megabytes.
Default: 5 datablocks
Minimum: 2 datablocks
Maximum: Operating system specific
NEXT: The size of the next extent to be allocated for a segment. The second extent
is equal to the original setting of NEXT. From the third extent onward, NEXT is set to
the previous size of NEXT multiplied by (1 + PCTINCREASE/100). You can also use K and
M to indicate kilobytes and megabytes as above.
Default: 5 datablocks
Minimum: 1 datablock
Maximum: Operating system specific
MAXEXTENTS: The total number of extents, including the first, that can ever be allocated
for the segment.
Default: Dependent on the data block size and operating system
Minimum: 1 (extent)
Maximum: Operating system specific
MINEXTENTS: The total number of extents to be allocated when the segment is
created. This allows for a large allocation of space at creation time, even if
contiguous space is not available.
Default: 1 (extent)
Minimum: 1 (extent)
Maximum: Operating system specific
If MINEXTENTS is more than 1, then the specified number of incremental extents is
allocated at creation time using INITIAL, NEXT and PCTINCREASE.
PCTINCREASE: The percentage by which each incremental extent grows over the last
incremental extent allocated for a segment. If PCTINCREASE is 0, then all incremental
extents are the same size. If PCTINCREASE is greater than 0, then each time NEXT
is calculated, it grows by PCTINCREASE. It cannot be negative.
Default: 50(%)
Minimum: 0 (%)
Maximum: Operating system specific
NOTE: PCTINCREASE for a rollback segment is always 0; PCTINCREASE cannot be specified
for rollback segments.
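As a worked example of the growth rule above (the values are assumptions, not from the
original text): with INITIAL 50K, NEXT 50K and PCTINCREASE 50, extents are allocated as
50K, 50K, then 50K x 1.5 = 75K, then 75K x 1.5 = 112.5K (rounded to whole data blocks),
and so on.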
PCTFREE: It is used to set the percentage of a block to be reserved (kept free) for
future updates. After this limit is met, the block is considered to be full and it is
not available for inserting new rows.
PCT USED: It is used to allow a block to be reconsidered for the insertion of new
rows. When the percentage of a block being used falls below PCTUSED either
through row deletion or updates reducing column storage, the block is again
available for insertion of new rows.
INITRANS: It reserves a pre-allocated amount of space for an initial number of
transaction entries to access rows in the data block concurrently. Space is reserved
in the header of all data blocks of the associated data or index segment. The default
value is 1 for tables and 2 for clusters.
MAXTRANS: As multiple transactions concurrently access the rows of the same data
block, space is allocated for each transaction's entry in the block.
Once the space reserved by INITRANS is depleted, space for additional
transaction entries is allocated out of the free space in a block, if available. Once
allocated, this space effectively becomes a permanent part of the block header. The
MAXTRANS parameter is used to limit the number of transaction entries that can
concurrently use data in a data block.
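A minimal sketch showing how these parameters fit together in one statement (the table
and the values are assumptions for illustration):
SQL>CREATE TABLE JUNK (ID NUMBER)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 1
MAXEXTENTS 50 PCTINCREASE 0);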
To change the initial extent of a Table:
NOTE: Check in USER_SEGMENTS; you will see that the initial extent is not decreased,
because you used the DELETE command, which does not reset the high-water mark. If you
still want to decrease it further, then do:
SQL>TRUNCATE TABLE JUNK;
SQL>ALTER TABLE JUNK DEALLOCATE UNUSED KEEP 1K;
SQL>SELECT * FROM USER_SEGMENTS WHERE SEGMENT_NAME = 'JUNK';
DBA_SEGMENTS
DBA_EXTENTS
DBA_TABLES
DBA_INDEXES
DBA_TABLESPACES
DBA_DATA_FILES
DBA_FREE_SPACE
SUM shows the amount of free space in each tablespace, PIECES shows the
amount of fragmentation in the datafiles of the tablespace, and MAXIMUM shows the
largest contiguous area of space. This query is useful when you are going to create a
new object, or when you know that a segment is about to extend and you want to make
sure that there is enough space in the containing tablespace.
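The query itself is not shown above; a sketch of the usual form against DBA_FREE_SPACE:
SQL>SELECT TABLESPACE_NAME, SUM(BYTES) "SUM", COUNT(*) "PIECES", MAX(BYTES) "MAXIMUM"
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME;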
A Partitioned table or partitioned index has been divided into a number of pieces, or
partitions, which have the same logical attributes.
2. To Move Table Partitions: you can use the MOVE PARTITION clause to move a
partition.
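A minimal sketch, reusing the SALES/FEB95 names from the examples below (the target
tablespace is an assumption):
SQL>ALTER TABLE SALES MOVE PARTITION FEB95 TABLESPACE USER_NETTLINX;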
4. To Add Index Partitions: You cannot explicitly add a partition to a local index.
Instead, new partitions are added to local indexes only when you add a partition to
the underlying table. You cannot add a partition to a global index because the
highest partition always has a partition bound MAXVALUE.
5. To Drop Table Partitions: Delete the rows from the partition before dropping the
partition.
6. To Drop Index Partitions: You cannot explicitly drop a partition from a local index.
SQL>ALTER INDEX NPR DROP PARTITION P1;
SQL>ALTER INDEX NPR REBUILD PARTITION P2;
7. To Truncate Partitioned Tables: You can use the ALTER TABLE TRUNCATE
PARTITION statement to remove all rows from a table partition with or without
reclaiming space.
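A sketch of both variants (table and partition names assumed; DROP STORAGE reclaims
the space, REUSE STORAGE keeps it):
SQL>ALTER TABLE SALES TRUNCATE PARTITION FEB95 DROP STORAGE;
SQL>ALTER TABLE SALES TRUNCATE PARTITION FEB95 REUSE STORAGE;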
9. To split index partitions: You cannot explicitly split a partition in a local index. You
can issue the ALTER INDEX SPLIT PARTITION statement to split a partition in a global
index if the partition is empty.
10. To Merge Table Partitions: You can use either of the following strategies to merge
table partitions. To merge partition OSU1 into partition OSU2:
3. Drop the FEB95 partition, this frees the segment originally owned by the
SALES_FEB95 table.
SQL>ALTER TABLE SALES DROP PARTITION FEB95;
4. Move the data from the SALES_FEB95 table into the MAR95 partition via an
INSERT statement.
5. Drop the SALES_FEB95 table to free the segment originally associated with the
FEB95 partition.
SQL>DROP TABLE SALES_FEB95;
Converting a partition view into a partitioned table: The following scenario describes
how to convert a partition view (also called a "manual partition") into a partitioned
table. The partition view is defined as follows:
5. After all the tables in the UNION ALL view are converted into partitions, drop the
view and the partitioned table that was renamed as the view.
Rollback segments store undo information and are used for transaction rollback,
transaction recovery and read consistency. They are of two types:
PUBLIC and
PRIVATE
A public rollback segment is one that Oracle automatically acquires access to and brings
online for normal database operations. A private rollback segment serves only if its name
is explicitly mentioned in the parameter file. When any datafile or tablespace is taken
offline, Oracle creates a deferred rollback segment in the SYSTEM tablespace. It contains
transaction rollback information that Oracle could not apply to the damaged or offline
tablespace. To check for deferred rollback segments:
If you want to either bring a rollback segment online or change its storage parameters,
you have to use ALTER commands as follows:
If you do not specify the size, it will shrink to the OPTIMAL size if set; if not, to
MINEXTENTS.
To drop a rollback segment: make it offline and then drop the segment.
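For example (the rollback segment name RBS01 is an assumption):
SQL>ALTER ROLLBACK SEGMENT RBS01 OFFLINE;
SQL>DROP ROLLBACK SEGMENT RBS01;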
DBA_SEGMENTS
USER_SEGMENTS
DBA_ROLLBACK_SEGS
V$ROLLSTAT
V$ROLLNAME
For eg:
3. When you take a rollback segment offline, it does not actually go offline until all
active transactions in it have completed. Between the time you attempt to take it
offline and the time it actually goes offline, its status in DBA_ROLLBACK_SEGS remains
ONLINE, but it is not used for new transactions. To determine whether any rollback
segments for an instance are in this state, use the following query:
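The query is not shown above; the standard form joins V$ROLLNAME and V$ROLLSTAT:
SQL>SELECT NAME, XACTS "ACTIVE TRANSACTIONS"
FROM V$ROLLNAME, V$ROLLSTAT
WHERE STATUS = 'PENDING OFFLINE'
AND V$ROLLNAME.USN = V$ROLLSTAT.USN;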
Redologs record all the changes made to the database. Every database must have at
least two redolog files. These files can be mirrored to avoid a single point of failure.
They are used by Oracle during instance recovery and media recovery. These files are
written in a circular fashion to save disk space. The filled redolog files are archived
if the database is running in archivelog mode. It is strongly recommended that the
database run in archivelog mode. For example, if power fails abruptly and the data in
memory cannot be written to the datafiles, Oracle can still recover the unrecorded
data by applying the redologs to the datafiles. The process of applying the redologs
during a recovery operation is called rolling forward.
Mirrored redologs: The recommended redolog file configuration is at least two
redolog members per group.
Log switches:
A log switch occurs when Oracle switches from one redolog to another
A log switch occurs when LGWR has filled one log file group.
A log switch can be forced by a DBA when the current redolog needs to be
archived.
A log switch occurs upon database shutdown.
At a log switch the current redolog file is assigned a log sequence number that
identifies the information stored in that redolog and is also used for synchronization.
A checkpoint automatically occurs at a log switch.
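The forced log switch mentioned in the list above is issued with the following command
(it also appears later in the hot backup script):
SQL>ALTER SYSTEM SWITCH LOGFILE;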
SVRMGR>STARTUP MOUNT
SVRMGR>ALTER DATABASE ADD LOGFILE GROUP 3
('/disk3/oradata/NETTLINX/redolog3a.log',
'/disk4/oradata/NETTLINX/redolog3b.log') SIZE 500K;
SVRMGR>ALTER DATABASE OPEN;
SVRMGR>STARTUP MOUNT
SVRMGR>ALTER DATABASE ADD LOGFILE MEMBER
‘/disk3/oradata/NETTLINX/redolog2b’ TO GROUP 2;
3. To rename a logfile:
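The steps are not shown above; a sketch, assuming we rename the member redolog2b.log
to redolog2c.log while the database is mounted:
SVRMGR>STARTUP MOUNT
SVRMGR>!mv /disk3/oradata/NETTLINX/redolog2b.log /disk3/oradata/NETTLINX/redolog2c.log
SVRMGR>ALTER DATABASE RENAME FILE
'/disk3/oradata/NETTLINX/redolog2b.log' TO
'/disk3/oradata/NETTLINX/redolog2c.log';
SVRMGR>ALTER DATABASE OPEN;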
4. To drop a log group and its members: if you drop an online group, then you will get
the following error: ORA-7360 unable to obtain information about log group.
Make sure an online redolog is archived before dropping it. If you drop a member
from the online group, then you get the following error: ORA-313 Open failed for
member of log group.
V$LOG
V$LOGFILE
V$LOG_HISTORY
V$LOGHIST
V$RECOVERY_LOG
Control files are created by Oracle as specified in INIT.ORA. Every Oracle
database should have at least two control files, each stored on a different disk. If a
control file is damaged due to disk failure, the associated instance must be shut down.
Once the disk drive is repaired, the damaged control file can be restored using an
intact copy of the control file and the instance can be restarted.
$ cp /disk2/oradata/nettlink/control2.ctl /disk1/oradata/nettlink/control1.ctl
SVRMGR>STARTUP
No media recovery is required. By using mirrored control files, you avoid unnecessary
problems if a disk failure occurs on the database server.
Managing the size of the control file: Typical control files are small. The main
determinants of a control file's size are the values set for the MAXDATAFILES,
MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY and MAXINSTANCES parameters of
the CREATE DATABASE statement that created the associated database. The
maximum control file size is operating system specific.
$ vi bkup.sql
MAXDATAFILES 10
:wq
SVRMGR>connect internal
SVRMGR>@bkup
SVRMGR>ALTER TABLESPACE USER_NETTLINX ADD DATAFILE
‘/disk2/oradata/NETTLINX/user04.dbf’ SIZE 400K;
1. Trace the control file to the udump destination and generate the CREATE CONTROLFILE
syntax:
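The command that produces the trace is:
SVRMGR>ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
This writes a script containing the CREATE CONTROLFILE statement to a trace file in the
user_dump_dest directory.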
SVRMGR>SHUTDOWN IMMEDIATE
$ cat initNETTLINX.ora   # here we are only observing the line that reads the
control files
control_files=(/disk1/oradata/NETTLINX/control1.ctl)
SVRMGR>STARTUP
The primary purpose of managing users, roles and privileges is to establish the
correct level of security for the different types of database users.
Managing Database Users: An organization must establish a database security policy
that defines, among the other things, the appropriate levels of database access for
different types of users. Once this policy is defined, you can then manage database
users easily.
Another area that comes under managing database users is licensing. For your Oracle
Server, you have a license that states how many concurrent users are allowed to
connect to the database. Through the management of users and their access, you
can make sure that your facility complies with the license agreement.
Creating Users: You can create a new database user by using the CREATE USER
dialog box in SQL*DBA or the SQL command CREATE USER. When you create a new
user, giving a user name & password is mandatory.
Default tablespace
Temporary tablespace
Tablespace quotas
Profile
Default Tablespace: Each user is associated with a default tablespace. When a user
creates a schema object and specifies no tablespace to contain the object, the user’s
default tablespace is used. The default tablespace feature provides ORACLE with
information to direct space usage in situations where an object’s tablespace is not
specified. A user’s default tablespace can be set when the user is created or changed
after the user has been created. If the default tablespace option is not specified, then
the schema objects of the user will go into the SYSTEM tablespace of the Oracle database.
Always make sure that the default tablespace option is given while creating a user.
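A minimal sketch of CREATE USER with these attributes, reusing the tablespaces created
earlier (the user name, password and quota are assumptions):
SQL>CREATE USER USER_01 IDENTIFIED BY TIGER
DEFAULT TABLESPACE USER_NETTLINX
TEMPORARY TABLESPACE TEMP_NETTLINX
QUOTA 1M ON USER_NETTLINX;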
The IDENTIFIED BY clause is used to give the user a password. To change a user's
password, issue the ALTER USER command.
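For example (the new password is an assumption):
SQL>ALTER USER USER_01 IDENTIFIED BY NEWPASS;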
To drop a user:
SQL>DROP USER USER_01 CASCADE;
PROFILES
System resource limits are managed with the user profiles. A profile is a named set
of resource limits that can be assigned to a user. These resources can generally be
established at the session and statement levels. A session is created every time a
database user connects to the database. If a session-level resource limit is exceeded,
the current statement is rolled back and an error message is returned to the user.
The database administrator has the option to globally enable or disable profiles; that
is, the DBA has the capability to make specific resource limits apply to all users. To
create a profile, issue the CREATE PROFILE command. The following resource limits can
be set during profile creation.
For example:
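The example itself is missing here; a sketch of CREATE PROFILE, using the CLERKS name
that the DROP PROFILE example below refers to (the limit values are assumptions):
SQL>CREATE PROFILE CLERKS LIMIT
SESSIONS_PER_USER 2
CPU_PER_SESSION UNLIMITED
IDLE_TIME 30
CONNECT_TIME 480;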
The following information is available in the data dictionary for every user and
profile.
List of users in the database
Each user’s default tablespace for tables, clusters, and indexes
Memory usage for each current session
Space quotas for each user
Each user’s assigned profile and resources limits
The cost assigned to each applicable system resource
To Drop profile:
SQL>DROP PROFILE CLERKS CASCADE;
ROLES
Roles are named groups of related privileges that are granted to individual users and
other roles. Roles are created to manage the privileges for a database or to manage
the privileges for a user group. Roles have a certain set of properties that promote
an easier management of database privileges.
Creating a Role: The name you provide for the role must be unique among other
user names and roles in the database. Roles are not contained in the schema of the
user. When a role is created it has no privileges associated with it. You must grant
privileges or other roles to a new role. The grant command is used to assign
privileges and roles to the new role. To create a role, one must have the CREATE
ROLE system privilege. The following command creates the role named clerk:
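The command itself is not shown above:
SQL>CREATE ROLE CLERK;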
System-defined roles: Oracle provides five predefined roles with the Oracle server.
You can grant and revoke privileges and roles to these predefined roles just as
you can to any role you define. The following is a list of the Oracle predefined roles
and their granted privileges:
Connect: Alter session, create cluster, create database link, create session, create
sequence, create synonym, and create view.
Resource: create cluster, create procedure, create sequence, create table, create
Trigger
SELECT_CATALOG_ROLE: Select privileges on all catalog tables and views for this
role.
Granting roles: Roles can be granted to users, to other roles, and to PUBLIC. PUBLIC
represents all users of the system. Use the SQL GRANT command to grant a role or
system privilege. The next statement grants the role MANAGER to the user USER_01
with the admin option:
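The statement is missing above; assuming the role MANAGER already exists:
SQL>GRANT MANAGER TO USER_01 WITH ADMIN OPTION;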
A role can be granted with the admin option. This option enables the grantee to do
the following:
Grant or revoke the role to or from any user or role in the database.
Grant the role with the admin option to other users and roles.
Alter or drop the role.
The creator of a role is automatically granted the role with the admin option.
Revoking roles: Roles can be revoked using the REVOKE command. The following is
an example of the REVOKE command:
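The example is missing above; continuing with the assumed names:
SQL>REVOKE MANAGER FROM USER_01;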
You cannot selectively revoke the admin option of a role. To revoke the admin option,
you must revoke the role and then regrant the role without the admin option.
There are two categories of privileges: system and object privileges. System
privileges enable the user to perform an action on a type of object, whereas object
privileges give the user permission to perform an action on a specific object.
Granting system privileges: System privileges can be granted to users and roles
using the GRANT command. The following statement grants system privileges to the
user TOM and to the role FINANCE:
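The statement is missing above; the particular privileges chosen are assumptions:
SQL>GRANT CREATE SESSION, CREATE TABLE TO TOM, FINANCE;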
System privileges cannot be granted along with object privileges and roles in the
same grant command.
Granting object privileges: Object privileges can be granted to users and roles
using the GRANT command. The following statement grants object privileges to the
user TOM and the role FINANCE:
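The statement is missing above; the table EMP and the privileges chosen are assumptions:
SQL>GRANT SELECT, INSERT ON EMP TO TOM, FINANCE;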
To grant object privileges, you must own the object specified or have been granted
the object privileges with the GRANT OPTION.
Revoking object privileges: Object privileges can be revoked using the REVOKE
command:
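An example, continuing with the assumed names:
SQL>REVOKE SELECT, INSERT ON EMP FROM TOM;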
SQL>ALTER SYSTEM
SET LICENSE_MAX_SESSIONS = 64
LICENSE_SESSIONS_WARNING = 54;
$ orapwd file=<filename> password=<password> entries=<max_users>
Ex:
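A sketch of the command (the file name, password and entry count are assumptions):
$ orapwd file=orapwNETTLINX password=oracle entries=5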
NOTE: When specific quotas are assigned, the exact number is indicated in the
MAX_BYTES column. Unlimited quotas are indicated by -1.
To see all profiles and Assigned limits:
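The query is not shown; profiles and their limits are visible in the DBA_PROFILES view:
SQL>SELECT * FROM DBA_PROFILES ORDER BY PROFILE;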
Examples:
3. To create a user with the same password as the username with profile Prof:
14. To list all the column specific privileges that have been granted
16. To list all system privileges currently available in the issuer’s Security domain,
both from explicit privilege grants and from enabled roles:
SQL>SELECT GRANTED_ROLE,ADMIN_OPTION
FROM ROLE_ROLE_PRIVS WHERE ROLE = ‘SYSTEM_ADMIN’;
The maximum number of failed login attempts for the user REDDY is 4, and the
amount of time the account will remain locked is 30 days.
Password aging and expiration: The DBA can specify a maximum lifetime for passwords.
Here, user REDDY can use the same password for 90 days before it expires. The DBA can
also specify a grace period for password expiry.
Password history: The DBA can specify a time interval during which users cannot reuse
a password. The DBA can also specify the number of password changes the user must
make before his current password can be used again; here it is 3.
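A minimal sketch combining the password limits described above into one profile (the
profile name CLERKS and the values follow the surrounding text):
SQL>ALTER PROFILE CLERKS LIMIT
FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 30
PASSWORD_LIFE_TIME 90
PASSWORD_REUSE_MAX 3;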
ROUTINE_NAME (
USERID_PARAMETER IN VARCHAR (30),
PASSWORD_PARAMETER IN VARCHAR (30),
OLD_PASSWORD_PARAMETER IN VARCHAR (30)
).
This facilitates the sharing of data between databases, even if those databases are
far apart, on different types of servers, running different operating systems and
communication protocols. Each database server in the distributed database
cooperates to maintain the consistency of the global database.
$ lsnrctl start   /* do these commands on the remote host to start the listener */
$ sqlplus system/manager@my_alias   /* at the local host */
Database Links: Database links are used to access schema objects in a remote
database from the local database. The syntax to create a database link at the local
database is:
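A sketch (the link name LINK1 is reused in the snapshot example below; the remote
account and the alias my_alias from above are assumptions):
SQL>CREATE DATABASE LINK LINK1
CONNECT TO REDDY IDENTIFIED BY TIGER
USING 'my_alias';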
SNAPSHOTS: A snapshot can be thought of as a table that holds the results of a
query, usually on one or more tables, called master tables, in a remote database.
When snapshots are used, a refresh interval is established to schedule refreshes of
the replicated data. Local updates can be prevented, and transaction-based refreshes
can be used; available for some types of snapshots, these send from the master database
only those rows that have changed for the snapshot. You need the CREATE SNAPSHOT,
CREATE TABLE, CREATE VIEW & CREATE INDEX privileges.
The queries that form the basis of snapshots are grouped into two categories,
simple and complex. A simple snapshot's defining query has no GROUP BY
or CONNECT BY clauses, subqueries, joins or set operations. If a snapshot's
query has any of these clauses or operations, it is referred to as a complex snapshot.
When a snapshot is created, several internal objects are created in the schema of the
snapshot. These objects should not be altered. To create a snapshot, the steps are as
follows:
SYNTAX:
SQL>CREATE SNAPSHOT <snapshot name>
REFRESH [COMPLETE|FAST]
WITH [PRIMARY KEY|ROWID]
START WITH SYSDATE
NEXT SYSDATE + 1/(24*60*60)   [for every second]
AS SELECT * FROM <table name>@<link name>;
If you create a snapshot with the REFRESH FAST option, then you need to create a
snapshot log on the master table at the remote site (i.e., at the server side).
FAST: only the rows that are modified are regenerated every time the snapshot is
refreshed using the snapshot log. Changed information is stored in the snapshot log.
Snapshot log is a table in the master database that is associated with the master
table. Oracle uses a snapshot log to track the rows that have been updated on the
master table.
Eg: If LINK1 reaches the ORDERS table on which I want to create the snapshot, then:
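A sketch under those assumptions; the snapshot log is created at the remote (master)
site and the snapshot at the local site:
SQL>CREATE SNAPSHOT LOG ON ORDERS;        (at the master site)
SQL>CREATE SNAPSHOT ORDERS_SNAP
REFRESH FAST
START WITH SYSDATE
NEXT SYSDATE + 1/(24*60*60)
AS SELECT * FROM ORDERS@LINK1;            (at the local site)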
Prepare Phase: The initiating node asks all the participants to prepare (either to
commit or to roll back, even if there is a failure).
Commit Phase: If all participants respond to the initiating node that they are prepared,
the initiating node asks all nodes to commit the transaction; if all participants cannot
prepare, it asks them to roll back the transaction. If a transaction fails for
any reason, the status of the transaction is recorded at the commit point site. The
commit point site is decided by commit point strength at the beginning. All such
transactions are automatically resolved by RECO and automatically removed from the
pending transaction table.
Export & import: Export is an Oracle utility used to store Oracle database objects in
export-format (.dmp) files for later retrieval. These files can later be written
back into an Oracle database via import. Import is used to retrieve the data
found in export-format files into an Oracle database.
Main Tasks:
Data archival
Upgrading to new releases
Backing up Oracle database
Moving between Oracle databases
Export’s basic function is to extract the object definition and table data, from an
Oracle database and store them in Oracle binary format. There are three levels of
export.
Table level
Schema level
Database level
SYNTAX:
Export Parameters:
Eg:
4. If you want to export your database with only those tables that have changed
since the previous complete backup:
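A sketch of the command (file names assumed; INCTYPE selects the incremental export
type):
$ exp system/manager full=y inctype=incremental file=inc.dmp log=inc.log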
5. If you want to export the tables EMP and DEPT, which are owned by REDDY, with no
constraints:
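A sketch (the file name is an assumption):
$ exp reddy/tiger tables=(emp,dept) constraints=n file=emp_dept.dmp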
6. If you want to export a partition: if the EMP table has two partitions M and Z, the
following exports only partition M from table EMP.
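A sketch using the documented table:partition form (the file name is an assumption):
$ exp reddy/tiger tables=(emp:m) file=emp_m.dmp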
Cumulative Exports: A Cumulative export backs up tables that have changed since
the last cumulative or complete export. A cumulative export compresses a number of
incremental exports into a single cumulative export file.
Complete Exports: A complete export establishes a base for incremental and
cumulative exports. It is also similar to full database export except it updates the
tables that track incremental and cumulative exports.
Day  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
     X  I  I  I  I  I  I  C  I  I  I  I  I  I  I  I  I  I
     S  M  T  W  T  F  S  S  M  T  W  T  F  S  S  M  T  W
(X = complete export, C = cumulative export, I = incremental export)
To restore through day 18, first import the system information from the
incremental export taken on day 18. Then import the data from: the complete export of
day 1, the cumulative export of day 8, and each incremental export from day 9 through
day 18, in order.
Import Parameters:
4. If you want to import only the data of EMP and ignore errors:
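A sketch of the command (the dump file name is an assumption; IGNORE=Y skips
object-creation errors):
$ imp reddy/tiger tables=emp rows=y ignore=y file=exp.dmp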
9. Do these commands immediately one after the other, without dropping any
tables or deleting any data or objects. This statement fails since we are trying
to create all of REDDY'S objects once again, which are already there. So the IMP
process will generate errors and dump them all into the LOG file. Once the IMP
finishes, we can go into the LOG FILE and, by removing the error messages, we
can get the entire SCHEMA definition (undocumented).
In point 7, we discussed that INDEXES should be created after the IMP. But we
don't have any SQL script to generate the indexes. Please see the code.
$exp sys/sys full=y file=expfull_Mar08.dmp log=expfull.log
buffer=2000000
$imp sys/sys full=y file=expfull_Mar08.dmp
indexfile=cr8_indexes.sql
(At this point we didn’t create any objects, except Oracle writes all the
INDEXES information to this file)
Now re-create the database and
$imp sys/sys full=y file=expfull_Mar08.dmp log=impfull.log
buffer=2000000 indexes=n commit=y
Edit the file "cr8_indexes.sql", since it has info like this:
CONNECT REDDY;
CREATE INDEX ...
CONNECT STEVE;
As we know, this would fail because there is no password associated with the user ID,
and the entire file will be like this. This can be fixed by editing the CONNECT lines.
This is very helpful since we don't really know all the passwords; that's why we log in
as each user indirectly, from SYS or SYSTEM.
11. If you have large tables and the RBS are not big enough to store an entire
table's information, you should use COMMIT=Y at the time of import (as shown
in the above example). This will ensure that the data is committed to the table
whenever the buffer is full, which won't fill up the rollback segments. There is a
disadvantage in doing COMMIT=Y, which is: if the IMP fails in the middle (for
any reason), the last imported table might contain a partial number of
rows; this would cause further failures when this table acts as a master table
for other tables.
In that scenario, it is best to just drop the last table and start the same
command once again.
MANAGING BACKUPS AND RECOVERY
Cold Backup: Cold backup is taken when database is shutdown normal. The
following file should be backed up.
All datafiles
All control files
All online redo logs
The init.ora (optional)
The full set of these files could be retrieved from the backups at a later date, and the
database would be able to function. It is not valid to perform a file-system backup of
the database while it is open, unless a hot backup is performed. The steps to take a
cold backup of the ABC database are as follows:
$ mkdir BKUP
$ vi getfiles.sql
SET ECHO OFF
SET PAUSE OFF
SET FEED OFF
SET HEAD OFF
SPOOL cold_backup.sh
SELECT 'cp ' || NAME || ' BKUP' FROM V$DATAFILE;
SELECT 'cp ' || NAME || ' BKUP' FROM V$CONTROLFILE;
SELECT 'cp ' || MEMBER || ' BKUP' FROM V$LOGFILE;
SPOOL OFF
$ svrmgrl
SVRMGR>CONNECT INTERNAL
SVRMGR>startup
SVRMGR>@getfiles.sql
SVRMGR> SHUTDOWN IMMEDIATE
SVRMGR>EXIT
$sh cold_backup.sh /*Taking the cold backup to BKUP directory*/
$cd BKUP /*Changing to BKUP directory*/
$ls /*Checking the contents*/
Hot backup: A hot backup is taken when the database is up and running in archivelog
mode. A hot backup is taken tablespace by tablespace, which is also the recommended
method. You must put the tablespace in backup mode (using the ALTER TABLESPACE ...
BEGIN BACKUP command), and after finishing the copy you must set it back with END
BACKUP. It is worth noting that a hot backup will generate a lot of redo entries.
$ vi hot.sql
SET SERVEROUTPUT ON
SET ECHO OFF
SET HEAD OFF
SET FEED OFF
SPOOL hotbkup.sql
DECLARE
CURSOR T_TAB IS
SELECT DISTINCT TABLESPACE_NAME FROM DBA_DATA_FILES;
CURSOR F_TAB (FS VARCHAR2) IS
SELECT FILE_NAME FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME = FS;
D_REC DBA_DATA_FILES.FILE_NAME%TYPE;
T_REC DBA_DATA_FILES.TABLESPACE_NAME%TYPE;
BEGIN
OPEN T_TAB;
LOOP
FETCH T_TAB INTO T_REC;
EXIT WHEN T_TAB%NOTFOUND;
DBMS_OUTPUT.PUT_LINE ('ALTER TABLESPACE ' || T_REC || ' BEGIN BACKUP;');
-- copy each datafile of this tablespace to the backup directory used below
OPEN F_TAB (T_REC);
LOOP
FETCH F_TAB INTO D_REC;
EXIT WHEN F_TAB%NOTFOUND;
DBMS_OUTPUT.PUT_LINE ('!cp ' || D_REC || ' /disk5/oradata/NETTLINX/HOTBKUP');
END LOOP;
CLOSE F_TAB;
DBMS_OUTPUT.PUT_LINE ('ALTER TABLESPACE ' || T_REC || ' END BACKUP;');
END LOOP;
CLOSE T_TAB;
END;
/
ALTER SYSTEM SWITCH LOGFILE;
SELECT '!mv /disk5/oradata/NETTLINX/HOTBKUP/control.new
/disk5/oradata/NETTLINX/HOTBKUP/control.old' FROM DUAL;
SELECT 'ALTER DATABASE BACKUP CONTROLFILE TO '
|| '''/disk5/oradata/NETTLINX/HOTBKUP/control.new''' || ';' FROM DUAL;
SPOOL OFF
:wq
SVRMGR>@hot.sql      /* generates hotbkup.sql */
SVRMGR>@hotbkup.sql  /* performs the hot backup */
Recovery: Recovery is of three types. They are online block recovery, thread
recovery and media recovery. In all three cases the algorithm that applies the redo
records against an individual block is the same.
For example, if you take a tablespace offline using the immediate option, the
datafiles go offline without a checkpoint being performed by Oracle. Media
recovery can apply archived log files as well as online log files.
Syntax:
SVRMGR>RECOVER [AUTOMATIC] [FROM 'location'] [DATABASE]
[UNTIL TIME date]
[UNTIL CANCEL]
[UNTIL CHANGE scn]
[USING BACKUP CONTROLFILE];
Case 1: The database is running in NOARCHIVELOG mode, you lost a datafile
because of media failure, and you take a cold backup every night. How will you recover
the database? The scenario can be simulated as follows.
Steps: Take a cold backup of the database. Now, using the HOST command, remove one
datafile at the operating system level. Now abort the instance.
Now try to open the database; you will get an error stating that a particular datafile is
missing. Now shut down the database, restore the previous night's backup and
open the database. So you lost today's transactions. This is a complete recovery
even though you lost today's work, because as far as the database is concerned it did
not lose anything that came from last night. It may appear to you that it is incomplete,
but it is still a complete recovery for that time.
Note: You cannot simply restore the lost datafile from the previous backup and start up
the database, because the database will be in an inconsistent state, so it will fail.
Case 2: Everything is the same except that the database is running in ARCHIVELOG mode.
Here you restore the lost file from the previous night's backup, mount the database and
issue the RECOVER DATAFILE command with the AUTOMATIC option. Oracle will apply the
relevant archived log files and online redo log files, and then you can open the
database. Here you have lost no data, hence it is a complete recovery.
Case 3: Everything is as above except that you lost the online redolog files
only. In this case you have archived log files but not online redolog files. So
you can restore up to the last available archived log file only, by issuing the
command RECOVER DATABASE UNTIL CANCEL. Cancel the media recovery
immediately after applying the last archived file, then open the database with the
RESETLOGS option. This invalidates the previous log files. This is an
incomplete recovery.
Case 4: The database is running in archivelog mode. We take a cold backup
every night. One day a programmer accidentally dropped one important table
(say at 11:30:30 a.m.) and you realized this at 2:00 p.m. As this is a critical
database, you have to recover the lost table without losing the other users' data.
Steps:
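The steps are not given above; a hedged sketch of the usual time-based incomplete
recovery (the date and time are assumptions, and ideally this is done on a copy of the
database so that current data is not lost):
1. Restore all datafiles from the previous night's cold backup, keeping the current
control files and online redologs.
2. SVRMGR>STARTUP MOUNT
3. SVRMGR>RECOVER DATABASE UNTIL TIME '2008-03-08:11:30:00';
4. SVRMGR>ALTER DATABASE OPEN RESETLOGS;
5. Export the recovered table, then bring it back into the production database with
import.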
Case 5: A DBA has lost both control files of a database which is in archivelog
mode. To recover the database, use the CREATE CONTROLFILE command.
SVRMGR>!rm /disk1/oradata/nettlinx/control1.ctl
SVRMGR>!rm /disk2/oradata/nettlinx/control2.ctl
Steps:
$ vi cr8controlfile.sql
CREATE CONTROLFILE REUSE DATABASE NETTLINX RESETLOGS ARCHIVELOG
LOGFILE GROUP 1 ('/disk1/oradata/nettlinx/redolog1a.log',
'/disk2/oradata/nettlinx/redolog1b.log') SIZE 250K,
GROUP 2 ('/disk1/oradata/nettlinx/redolog2a.log',
'/disk3/oradata/nettlinx/redolog2b.log') SIZE 250K
DATAFILE '/disk1/oradata/nettlinx/system01.dbf' SIZE 25M;
RECOVER DATABASE USING BACKUP CONTROLFILE;
SVRMGR>STARTUP NOMOUNT
SVRMGR>@cr8controlfile.sql
SVRMGR>ALTER DATABASE OPEN RESETLOGS;
Costs and benefits of using a recovery catalog: When you use a recovery catalog,
Recovery Manager can perform a wider variety of automated backup and recovery
functions; however, Recovery Manager requires that you maintain a recovery catalog
schema, and any associated space used by that schema.
If you use a recovery catalog, you must decide which database you will use to install
the recovery catalog schema, how you will back that database up, and how large the
recovery catalog schema will be.
If you use Recovery Manager to backup many databases, you may wish to create a
separate recovery catalog database, and create the Recovery manager user in that
database. You should also decide whether or not to operate this database in
ARCHIVELOG mode.
If you have more than one database to back up, you can create more than one
recovery catalog and have each database serve as the other's recovery catalog. For
example, assume there are two production databases, one called "ACCT" and a
second called "PAY". You can install the recovery catalog for "ACCT" in the "PAY"
database, and the recovery catalog for the "PAY" database in "ACCT". This enables
you to avoid the extra space requirements and memory overhead of maintaining a
separate recovery catalog database. However, this solution is not practical if the
recovery catalogs for both reside in tablespaces residing on the same
physical disk.
Note: You must install the recovery catalog schema in a different database from the
target database you will be backing up. If you don’t, the benefits of using a recovery
catalog are lost if you lose the database and need to restore.
Note: It is difficult to restore and recover if you lose your control files and do not
use a recovery catalog. In that case, the only way to restore and recover when you have
lost all control files is to restore and recover the datafiles after creating a control
file manually.
Setting up the Recovery Catalog schema: When you use a recovery catalog, you
need to set up the schema. Oracle suggests you put the recovery catalog in its own
tablespace; however, it could be put in the SYSTEM tablespace, if necessary. To set up
the recovery catalog schema:
Note: You must not run a catrman.sql script in the SYS schema. Run the catrman.sql
in the recovery catalog schema (RMAN)
$ rman nocatalog
RMAN>connect target
Or
$ rman nocatalog
RMAN>register database
To connect to Recovery Manager with Password Files: If the target database uses
password files, you can connect using:
Example:
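A sketch (the service aliases and passwords are assumptions, following the ones used
in the later examples):
$ rman target internal/oracle@acct rcvcat rman/rman@pay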
If you want to record all the log information that appears on the screen
generated by RMAN, add the option mentioned below:
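The option is msglog; a sketch (the log file name is an assumption):
$ rman target internal/oracle@acct rcvcat rman/rman@pay msglog rman.log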
You can use the following substitution variables to make unique format strings:
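The list itself is missing here; as commonly documented for Oracle8 RMAN, the variables
include %d (database name), %s (backup set number), %p (backup piece number within the
set), %t (backup set timestamp) and %u (an eight-character name built from the set
number and creation time, as used in the examples below).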
Datafile backup sets can be full or incremental. A full backup is a backup of one or
more datafiles that contains all blocks of the datafile(s). An incremental backup is a
backup of one or more datafiles that contains only those blocks that have been modified
since a previous backup. These concepts are described in more detail in the following
sections.
A full backup copies all blocks into the backup set, skipping only datafile blocks that
have never been used. No blocks are skipped when backing up archivelogs or control
files. A full backup is not the same as a whole database backup; full is an indicator
that the backup is not incremental. Also, a full backup has no effect on subsequent
incremental backups, and is not considered part of the incremental strategy (in other
words, a full backup does not affect which blocks are included in subsequent
incremental backups). Oracle allows you to create and restore full backups of the
following:
- datafile
- datafile copy
- tablespace
- control file (current or backup)
- database
Archivelog backup sets are always full backups.
Incremental Backup Sets
An incremental backup is a backup of one or more datafiles that contains only those
blocks that have been modified since a previous backup at the same or lower level;
unused blocks are not written out.
Example:
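The example is missing above; a sketch of a level 0 incremental backup of the whole
database, in the style of the later examples (channel, format and connect strings are
assumptions):
$ rman target internal/internal@acct rcvcat rman/rman@pay
RMAN> run{
allocate channel t1 type disk;
backup incremental level 0
(database format '/disk4/oradata/NETTLINX/db_%d_%u');
}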
To view the backup status of a datafile, you can use the data dictionary view
V$BACKUP. This view lists all online files and gives their backup status. It is most
useful when the database is open. It is also useful immediately after a crash,
because it shows the backup status of the files at the time of the crash. You can use
this information to determine whether you have left tablespaces in backup mode.
NOTE: V$BACKUP is not useful if the control file currently in use is a restored backup
or a new control file created since the media failure occurred. A restored or re-created
control file does not contain the information Oracle needs to fill V$BACKUP accurately.
Also, if you have restored a backup of a file, that file's STATUS in V$BACKUP reflects
the backup status of the older version of the file, not the most current version. Thus,
this view might contain misleading information on restored files.
In the STATUS column, "INACTIVE" indicates that the file is not currently being
backed up; "ACTIVE" indicates that the file is marked as currently being backed up.
Backing up a Tablespace:
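No command follows this heading; a sketch in the same style (the tablespace and format
are assumptions):
$ rman target internal/internal@acct rcvcat rman/rman@pay
RMAN> run{
allocate channel t1 type disk;
backup
(tablespace user_nettlinx
format '/disk4/oradata/NETTLINX/ts_%d_%u');
}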
The size of the target database's control file will grow, depending on the number of:
- backups performed
- archive logs created
- days (minimum number) this information is stored in the control file
You can specify the minimum number of days this information is kept in the control
file using the parameter CONTROL_FILE_RECORD_KEEP_TIME. Entries older than this
number of days are candidates for overwrite by newer information. The larger the
CONTROL_FILE_RECORD_KEEP_TIME setting is, the larger the control file will be. At
a minimum, you should resynchronize your recovery catalog at intervals less than
the CONTROL_FILE_RECORD_KEEP_TIME setting, because after this number of
days the information in the control file will be overwritten with the most recently
created information; if you have not resynchronized and information has been
overwritten, it cannot be propagated to the recovery catalog.
Note: The maximum size of the control file is port specific. See your operating
system specific Oracle documentation.
The current control file is automatically backed up when the first datafile of the
system tablespace is backed up. The current control file can also be explicitly
included in a backup or backed up individually.
Full
This is the default if neither full nor incremental is specified. A full backup copies
all the blocks into the backup set, skipping only datafile blocks that have never been
used. No blocks are skipped when backing up archive logs or control files. A full
backup has no effect on subsequent incremental backups, and is not considered to
be part of the incremental backup strategy.
Incremental
An incremental backup at a level greater than 0 copies only those blocks that have
changed since the last incremental backup. An incremental backup at level 0 is
identical in content to a full backup, but the level 0 backup is considered to be part of
the incremental strategy. Certain checks are performed when attempting to create
an incremental backup at a level greater than zero. These checks ensure that the
incremental backup will be usable by a subsequent recover command. Among the
checks performed are:
- A level 0 backup set must exist, or level 0 datafile copies must exist,
for each datafile in the backup command. These must also not be
marked unavailable.
- Sufficient incremental backups taken since the level 0 must exist and
be available such that the incremental backup about to be created
could be used.
Tag
Cumulative
Nochecksum
Filesperset
Setsize
Database
Tablespace
Datafile
Datafilecopy
Archivelog
Current Controlfile
Backup Control file
Backupset
Tag
parms
Format
Filesperset
Channel
Delete input
Datafile
Datafilecopy
Archivelog
Current control file
Backup control file
Optionally, you can supply these keywords with the copy command:
Tag
Level 0
Copying the archiving information to the catalog and deleting the files
from the archivelog destination:
You can also back up archived logs to tape. The range of archived logs can be
specified by time or log sequence. Note that specifying an archivelog range does not
guarantee that all redo in the range is backed up. For example, the last archived log
may end before the end of the range, or an archived log in the range may be
missing. Recovery Manager simply backs up the logs it finds and does not issue
a warning. Note that online logs cannot be backed up; they must be archived first.
NLS_LANG = American
NLS_DATE_FORMAT = 'Mon DD YYYY HH24:MI:SS'
$ rman target internal/internal@acct rcvcat rman/rman@pay
RMAN> run{
allocate channel t1 type disk;
backup
(archivelog from time 'jan 25 1999 12:06:05' until time
'jan 25 1999 12:57:13'
format '/disk4/oradata/NETTLINX/arch1_%d_%u');
}
Here we back up all archived logs from sequence #288 to sequence #301 and delete
the archived logs after the backup is complete. If the backup fails, the logs are not
deleted.
$ rman target internal/internal@acct rcvcat rman/rman@pay
RMAN> run{
allocate channel t1 type disk;
backup
(archivelog low logseq 288 high logseq 301 thread 1 delete input
format '/disk4/oradata/NETTLINX/arch2_%d_%u');
}
The following commands back up all archived logs generated during the last 24
hours. We archive the current log first to ensure that all redo generated up to the
present gets backed up:
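The commands themselves are missing here; a sketch in the style of the previous
examples (paths assumed):
$ rman target internal/internal@acct rcvcat rman/rman@pay
RMAN> run{
allocate channel t1 type disk;
sql "alter system archive log current";
backup
(archivelog from time 'sysdate-1' delete input
format '/disk4/oradata/NETTLINX/arch3_%d_%u');
}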
See also: for more information about your environment variables, see your operating
system-specific documentation.
When one of the datafiles is lost the recovery amounts to complete recovery as you
have the latest control file and current online redolog file.
Note!
Whenever the RMAN command fails you are required to release the channel allocated
for that operation
Ex:
RMAN> run{
release channel t1;
}
When the whole database is lost, you can depend upon the backup for restoring the
database and recovering it by applying the archives.
Steps:
Second Method:
Third Method:
SQL*Loader moves data from external flat files into an Oracle database.
SYNTAX:
$ sqlldr <options>
For example:
$ sqlldr userid=reddy/tiger control=case1.ctl log=case1.log
or, using positional parameters,
$ sqlldr reddy/tiger case1.ctl
Example: Case 1: Case 1 loads data embedded in the control file into the table DEPT.
$ vi case1.ctl
LOAD DATA
INFILE *
INTO TABLE DEPT
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(DEPTNO,DNAME,LOC)
BEGINDATA
12, “RESEARCH”, “SARATOGA”
10, “ACCOUNTING”, “CLEVELAND”
11, “ART”,SALEM
13, FINANCE, “BOSTON”
21, “SALES”, PHILA
22, “SALES”, ROCHESTER
42, “INT’ L”, “SAN FRAN”
:wq
$ sqlldr userid=reddy/tiger control=case1.ctl log=case1.log
Case 2: Case2 loads the data of case2.dat to the table emp.
$vi case2.ctl
LOAD DATA
INFILE 'case2.dat'
INTO TABLE EMP
(EMPNO POSITION(01:04) INTEGER EXTERNAL,
ENAME POSITION(06:15) CHAR,
JOB POSITION(17:25) CHAR,
MGR POSITION(27:30) INTEGER EXTERNAL,
SAL POSITION(32:39) DECIMAL EXTERNAL,
COMM POSITION(41:48) DECIMAL EXTERNAL,
DEPTNO POSITION(50:51) INTEGER EXTERNAL)
:wq
$vi case2.dat
Case 3: Case 3 adds the data into the EMP table using the SEQUENCE function, which
generates unique keys for loaded data.
7934,”MILLER”,”CLERK”,7782,23-JANUARY-1982,920.00,,10:102
7566,”JONES”,”MANAGER”,7839,02-APRIL-1981,3123.75,,20:101
7499,”allen”,”salesman”,7698,20-February-1981,1600.00,300.00,30:103
7654,”MARTIN”,”SALESMAN”,7698,28-SEPTEMBER-1981,1312.50,1400,30:103
Case 4: Case 4 combines multiple physical records into one larger record using
CONTINUEIF, inserts negative numbers, uses DISCARDMAX to specify a maximum number of
discards, and rejects records due to duplicate values in an index or due to
invalid data.
$vi case4.ctl
LOAD DATA
INFILE ‘case4.dat’
DISCARDFILE ‘case4.dsc’
DISCARDMAX 999
REPLACE
CONTINUEIF THIS (1) = ‘*’
INTO TABLE EMP
(EMPNO POSITION(01:04) INTEGER EXTERNAL,
ENAME POSITION(06:15) CHAR,
JOB POSITION(17:25) CHAR,
MGR POSITION(27:30) INTEGER EXTERNAL,
SAL POSITION(32:39) DECIMAL EXTERNAL,
COMM POSITION(41:48) DECIMAL EXTERNAL,
DEPTNO POSITION(50:51) INTEGER EXTERNAL,
HIREDATE POSITION(52:60) INTEGER EXTERNAL)
:wq
$ vi case4.dat
*7782 clark man
ager 7839 2572.50-10 2512-Nov-85
*7839 king persi
dent 5500.00 2505-Apr-83
*7934 mil
ler manager 7839 3123.75 2517-Jul-85
:wq
$ sqlldr userid=reddy/tiger control=case4.ctl log=case4.log
Case 5: Case 5 explains how to use sqlldr to break down repeating groups in a flat file
and load the data into normalized tables, where one record may generate multiple
database rows; it demonstrates the use of the WHEN clause and the loading of the same
field (EMPNO) into multiple tables.
$ vi case5.ctl
LOAD DATA
INFILE ‘case5.dat’
BADFILE ‘case5.bad’
DISCARDFILE ‘case5.dsc’
REPLACE
INTO TABLE EMP
(EMPNO POSITION(1:04) INTEGER EXTERNAL,
ENAME POSITION(6:15) CHAR,
DEPTNO POSITION(17:18) CHAR,
MGR POSITION(20:23) INTEGER EXTERNAL)
INTO TABLE PROJ
WHEN PROJNO!=’ ‘
(EMPNO POSITION(1:4) INTEGER EXTERNAL,
PROJNO POSITION(25:27) INTEGER EXTERNAL)
INTO TABLE PROJ
WHEN PROJNO!=’ ‘
(EMPNO POSITION(1:4) INTEGER EXTERNAL,
PROJNO POSITION(29:31) INTEGER EXTERNAL)
:wq
$sqlldr userid=reddy/tiger control=case5.ctl log=case5.log
Case 6: Case 6 loads the data into the EMP table using the direct path load method and
also builds the indexes.
$vi case6.ctl
LOAD DATA
INFILE ‘case6.dat’
INSERT
INTO TABLE EMP
SORTED INDEXES (EMPID)
(EMPNO POSITION(1:4) INTEGER EXTERNAL NULLIF EMPNO=BLANKS,
ENAME POSITION(6:15) CHAR,
JOB POSITION(17:25) CHAR,
MGR POSITION(27:30) INTEGER EXTERNAL NULLIF MGR=BLANKS,
SAL POSITION(32:39) DECIMAL EXTERNAL NULLIF SAL=BLANKS,
COMM POSITION(41:48) DECIMAL EXTERNAL NULLIF COMM=BLANKS,
DEPTNO POSITION(50:51) INTEGER EXTERNAL NULLIF DEPTNO=BLANKS)
:wq
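Direct path load is requested on the SQL*Loader command line rather than in the
control file; following the pattern of the earlier cases, the invocation would be
(DIRECT=TRUE is what selects the direct path):
$ sqlldr userid=reddy/tiger control=case6.ctl log=case6.log direct=true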
TUNING
Tuning is the study of a system's configuration; everyone involved with the system
has some role in the tuning process. By tuning Oracle, you can tailor its
performance to best meet your needs.
Goals for tuning: Consider performance issues when designing the system; tune at
the hardware level and at the operating system level; identify performance
bottlenecks, determine the cause of the problem, and take corrective action.
Tuning I/O: Disk I/O tends to reduce the performance of many software applications.
Tuning Contention: Contention may cause processes to wait until resources are
available, for example on:
• Rollback segments
• Processes of the multi-threaded server architecture
• Redolog buffer latches
Memory Tuning:
Tuning the database buffer cache: First find the ratio of hits and misses. If the
miss rate is significant (a low hit ratio), increase the size of the buffer cache
(the DB_BLOCK_BUFFERS parameter in init.ora). This information can be seen in
X$KCBRBH and X$KCBCBH. The relevant statistics, and a query to find the ratio,
follow below.
Consistent_gets: this statistic reflects the number of accesses made to the block
buffer to retrieve data in consistent mode.
Db block gets: this statistic reflects the number of blocks accessed via single
block gets.
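The query itself is not reproduced above. A standard way to read these statistics
from V$SYSSTAT (the exact query is an assumption, not necessarily the author's):
SQL> SELECT NAME, VALUE FROM V$SYSSTAT
     WHERE NAME IN ('db block gets', 'consistent gets', 'physical reads');
Hit ratio = 1 - (physical reads / (db block gets + consistent gets)); a low hit
ratio suggests increasing the buffer cache.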
Tuning the redolog buffer cache: To tune the redolog buffer, one has to reduce the
waiting for the latches. Find the ratio between redolog space wait time and redo
writes; if the ratio is more than 1%, the buffer needs tuning. This information can
be obtained from V$LATCH and V$SYSSTAT (a sample query follows). The relevant
V$LATCH columns are:
Gets - the total number of requests for information on the corresponding latch.
Misses - the number of requests resulting in cache misses.
Immediate_misses - the number of unsuccessful immediate requests for each latch.
Immediate_gets - the number of successful immediate requests for each latch.
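The query is not shown above; a typical check against the standard views (the query
wording is an assumption) is:
SQL> SELECT NAME, GETS, MISSES, IMMEDIATE_GETS, IMMEDIATE_MISSES
     FROM V$LATCH WHERE NAME LIKE 'redo%';
SQL> SELECT NAME, VALUE FROM V$SYSSTAT
     WHERE NAME IN ('redo log space requests', 'redo writes');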
Tuning the library cache: The library cache is the part of the shared pool that
holds the shared SQL and PL/SQL areas. To tune it, compare pins against reloads:
SQL> SELECT SUM(PINS), SUM(RELOADS),
     (SUM(RELOADS)/SUM(PINS))*100
     FROM V$LIBRARYCACHE;
If the ratio of reloads to pins is greater than 1%, you should reduce the library
cache misses (increase the SHARED_POOL_SIZE parameter in init.ora).
STRIPING: Striping is the practice of dividing a large table's data into small
portions and storing those portions in separate datafiles on separate disks. This
permits multiple processes to access different portions of the table concurrently
without disk contention. Striping is particularly helpful in optimizing random
access to tables with many rows. Striping can be done manually, as below:
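The manual commands are not shown above; a minimal sketch (tablespace, file and
table names are hypothetical) places the tablespace's datafiles on different disks
and sizes the table's extents so that successive extents fall into different files:
SQL> CREATE TABLESPACE stripe_ts
     DATAFILE '/disk1/stripe01.dbf' SIZE 50M,
              '/disk2/stripe02.dbf' SIZE 50M,
              '/disk3/stripe03.dbf' SIZE 50M;
SQL> CREATE TABLE big_tab (id NUMBER, txt VARCHAR2(100))
     TABLESPACE stripe_ts
     STORAGE (INITIAL 45M NEXT 45M MINEXTENTS 3 PCTINCREASE 0);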
Then insert around 100,000 (1 lakh) rows into the table. While the insertion is
going on, observe the status of the files in V$FILESTAT.
If you have many datafiles and only one DBWR, performance may decrease, so you can
increase the number of DBWR processes. On UNIX, enable asynchronous I/O (the aio
kernel tunable parameter) and then include the parameter in init.ora:
SVRMGR>SHUTDOWN
$ vi init.ora # add or change the following parameter
db_writers=3
:wq
SVRMGR>STARTUP
$ ps ux | grep ora_ (observe the dbwr processes).
PARALLEL QUERY OPTION: Without the parallel query option, Oracle processes a SQL
statement with a single server process. With the parallel query option, multiple
processes can work together simultaneously to process a single SQL statement; this
capability is called the parallel query option. The Oracle server can process the
statement more quickly than a single server process could, because the query
processing is effectively split among many CPUs on a single system.
$ ps ux | grep ora_
Observe the additional parallel query server processes.
TABLE CACHE: To mark a table as a cache table, specify the CACHE clause in either
the CREATE TABLE or ALTER TABLE command. If a table is marked as a cache table,
that table's blocks will be treated as the most recently used blocks in the data
block buffer cache, even if they are read via a full table scan. Thus you can avoid
having your small tables' blocks frequently removed from the data block buffer
cache. The example below shows the TEST table marked as a cache table; the first
time its blocks are read into the data block buffer cache, they will be marked as
the most recently used blocks in the cache.
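The example itself does not appear above; the CACHE clause looks like this (TEST
and its column are placeholders):
SQL> CREATE TABLE TEST (ID NUMBER) CACHE;
or, for an existing table:
SQL> ALTER TABLE TEST CACHE;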
Optimization is the process of choosing the most efficient way to execute a SQL
statement. This is an important step in the processing of any data manipulation
language statement (SELECT, INSERT, UPDATE or DELETE) and is done by the optimizer.
The optimizer formulates execution plans and chooses the most efficient plan before
executing a statement. There are two types of optimizer:
Rule based: Using this approach, the optimizer chooses an execution plan based on
the access paths available and the ranks of these paths.
Cost based: Using the cost-based approach, the optimizer considers the available
access paths and factors in information based on the statistics in the data
dictionary for the objects (tables, clusters or indexes) accessed by the statement,
to determine which execution plan is most efficient. The ANALYZE command generates
these statistics, so the cost-based approach is effective only on tables that have
been analyzed. The cost-based approach also considers hints. It has three options,
CHOOSE, ALL_ROWS and FIRST_ROWS, which can be enabled using the commands sketched
below.
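The enabling commands are not reproduced above. As a sketch (OPTIMIZER_MODE is the
init.ora parameter, OPTIMIZER_GOAL its session-level counterpart in this Oracle
release, and EMP a placeholder table):
In init.ora: OPTIMIZER_MODE = CHOOSE
SQL> ALTER SESSION SET OPTIMIZER_GOAL = FIRST_ROWS;
SQL> ANALYZE TABLE EMP COMPUTE STATISTICS;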
Company table –
Name varchar2
Address varchar2
City varchar2 (index)
State varchar2 (index)
Parent_company_id number (index)
Competitor:
Sales:
Types of operations:
Note: For every explain plan operation, issue the above command with a different
statement ID.
Hash join: A hash join joins tables by building an in-memory hash table from one of
the tables and then using a hashing function to locate the joining rows in the
second table.
Nested loops: A nested loops join uses indexed access operations and is chosen when
at least one of the join columns is indexed.
Using hints: Hints are suggestions that you give the optimizer for optimizing a SQL
statement. You can use hints to specify, for example:
All_Rows: minimizes the time it takes for all rows to be returned by the query.
First_Rows: tells the optimizer to optimize the query with the goal of the shortest
response time for the return of the first row from the query.
Full: tells the optimizer to use a full table scan for the specified table.
Rule: tells the optimizer to use rule-based optimization for the query.
Cache: when used for a table in a query, tells Oracle to treat the table as a
cached table; i.e., CACHE tells Oracle to keep the blocks from the full table scan
of the table in the SGA's data block buffer cache area, instead of quickly removing
them from the SGA. An example appears below.
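As an illustration of hint syntax (EMP is a placeholder table and E its alias; the
hint comment must immediately follow the SELECT keyword):
SQL> SELECT /*+ FULL(E) CACHE(E) */ * FROM EMP E;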
Server processes:
• Foreground server processes: directly handle requests from the client process.
• Background server processes: handle other specific jobs of the database server.
The network listener process waits for incoming connection requests and determines
whether each user process can use a shared server process. If so, the listener
process gives the user process the address of a dispatcher process. If the user
process requests a dedicated server, the listener process creates a dedicated
process and connects the user process to it. Shared server processes are not
associated with a specific user process; instead, a shared server process serves
any client request in the multi-threaded server configuration.
• The dedicated server process receives the statement. At this point, two paths
can be followed to continue processing the SQL statement:
• If the shared pool contains a shared SQL area for an identical SQL statement,
the server process can use the existing shared SQL area to execute the client's
SQL statement.
• If the shared pool does not contain a shared SQL area for an identical SQL
statement, a new shared SQL area is allocated for the statement in the shared
pool.
• The server process retrieves data blocks from the actual data file, if
necessary, or uses the data blocks already stored in the buffer cache in the SGA
of the instance.
• The server process executes the SQL statement stored in the shared SQL area.
Data is first changed in the SGA; it is permanently written to disk when the DBWR
process determines it is most efficient to do so.
• The LGWR process records the transaction in the online redolog file only on a
subsequent commit request from the user.
• If the request is successful, the server sends a message across the network to
the user; otherwise an appropriate error message is transmitted.
• The database server is currently running the proper SQL*Net driver.
• The listener process on the database server detects the connection request from
the client application and determines how the user process should be connected to
an available dispatcher.
• The user issues a SQL statement, e.g. the user updates a row in a table.
• The dispatcher process places the user process's request on the request queue,
which is in the SGA and shared by all dispatcher processes.
• The dispatcher process checks its response queue and sends completed requests
back to the user processes that made the request.
$ lsnrctl start
$ svrmgrl
SVRMGR>CONNECT INTERNAL
SVRMGR>SHUTDOWN
SVRMGR>STARTUP
SVRMGR>EXIT
$ sqlplus system/manager@<alias name in tnsnames.ora>
From the operating system you can give a command to see whether MTS is working, as
sketched below.
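The command is not named in the text; a common check is to look for dispatcher and
shared server processes, whose names follow the ora_dNNN_<SID> and ora_sNNN_<SID>
patterns:
$ ps ux | grep ora_
Processes such as ora_d000_ORCL and ora_s000_ORCL indicate that MTS is running.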
RAW DEVICES
A raw device does not have the characteristics of regular filesystems. Character
device drivers support these raw devices; they access the raw devices through the
special files in the /dev directory, bypassing the UNIX I/O buffer.
ADVANTAGES:
• Faster performance, because the Oracle server bypasses the UNIX buffer cache and
eliminates the filesystem layer; this results in fewer instructions per I/O.
• Savings in memory usage, because the Oracle server does not use the UNIX buffer
cache for database block reads/writes.
• They are most beneficial for files that receive sequential writes.
• They can be used concurrently with the filesystem.
DISADVANTAGES:
• You must devote an entire disk partition to a single database file, which can
lead to wasted disk space.
• I/O load balancing and adding files to your database can be more difficult with
raw devices.
For example, suppose you have the syst, user, roll, temp and indx partitions. By
default these will be owned by root, so change the ownership and group to oracle
and dba. Then create the database; the syntax is sketched below.
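The commands are sketched here with hypothetical device names; on raw devices each
file size must be slightly smaller than the partition that holds it:
# chown oracle /dev/rdsk/c0t0d0s1 /dev/rdsk/c0t0d0s4 /dev/rdsk/c0t0d0s5
# chgrp dba /dev/rdsk/c0t0d0s1 /dev/rdsk/c0t0d0s4 /dev/rdsk/c0t0d0s5
SVRMGR>CREATE DATABASE test
DATAFILE '/dev/rdsk/c0t0d0s1' SIZE 100M
LOGFILE '/dev/rdsk/c0t0d0s4' SIZE 5M,
'/dev/rdsk/c0t0d0s5' SIZE 5M;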
BACKUP AND RECOVERY: When you are working with RAW devices an additional
layer is introduced in the backup and recovery procedures.
1. BACKUP: First you have to use the UNIX command dd, which takes two arguments,
if= (the input file) and of= (the output file), to copy each raw slice to an
ordinary file, as sketched below.
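The dd commands are not shown above; mirroring the recovery command later in this
section (device names are the ones used in these examples), the backup of each
slice would look like:
$ dd if=/dev/c0t0d0s1 of=/temp/sys.dd
$ dd if=/dev/c0t0d0s3 of=/temp/rbs.dd
$ dd if=/dev/c0t0d0s4 of=/temp/temp.dd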
Now using tar: tar cvf /temp/bckup.tar sys.dd rbs.dd temp.dd …….
2. RECOVERY: Suppose we need to recover the data file /dev/c0t0d0s1. For this:
i) First extract the corresponding image from the tar backup: tar xvf
/temp/bckup.tar sys.dd
ii) Now, using this file, restore the data:
dd if=/temp/sys.dd of=/dev/c0t0d0s1 conv=bmode
Note the difference between this command and the previous one: here the output file
is the block device, NOT the character device. The conv argument converts the
character or block data into block mode. You can use this basic syntax for all
backup and recovery procedures.
AUDITING
Auditing is done to check regular and suspicious activity on the database. When
your auditing purpose is to monitor for suspicious database activity, consider the
following guidelines:
• Protect the audit trail: when auditing for suspicious database activity, protect
the audit trail so that audit information cannot be added, changed or deleted
without being audited.
• Archive audit records and purge the audit trail: once you have collected the
required information, archive the audit records of interest and purge the audit
trail of this information.
In init.ora, set:
Audit_trail = true
Audit_file_dest = <the path of the directory you have created for audit files>
AUDIT_TRAIL enables or disables the writing of rows to the audit trail. Audited
records are not written if the value is NONE or if the parameter is not present.
The OS option enables system-wide auditing and causes audited records to be written
to the operating system's audit trail. The DB option enables system-wide auditing
and causes audited records to be written to the database audit trail (the SYS.AUD$
table). The values TRUE and FALSE are also supported for backward compatibility:
TRUE is equivalent to DB, and FALSE is equivalent to NONE.
Creating and deleting the audit trail views: The database audit trail (SYS.AUD$) is
a single table in each Oracle database's data dictionary. To help you view
meaningful auditing information in this table, several predefined views are
provided. You have to run CATAUDIT.SQL as SYS to create the audit trail views.
Auditing can be done on all types of commands.
5. To audit all unsuccessful SELECT, INSERT and DELETE statements on all tables and
unsuccessful uses of the EXECUTE ANY PROCEDURE system privilege, by all database
users, by access:
SQL> AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE,
EXECUTE ANY PROCEDURE BY ACCESS WHENEVER NOT SUCCESSFUL;
8. To disable auditing:
SQL> NOAUDIT ALL;
Audit trail views:
STMT_AUDIT_OPTION_MAP
AUDIT_ACTIONS
ALL_DEF_AUDIT_OPTS
DBA_STMT_AUDIT_OPTS
USER_OBJ_AUDIT_OPTS, DBA_OBJ_AUDIT_OPTS
USER_AUDIT_TRAIL, DBA_AUDIT_TRAIL
USER_AUDIT_STATEMENT, DBA_AUDIT_STATEMENT
USER_AUDIT_OBJECT, DBA_AUDIT_OBJECT
DBA_AUDIT_EXISTS
USER_AUDIT_SESSION, DBA_AUDIT_SESSION
USER_TAB_AUDIT_OPTS
LOCK MANAGEMENT
There are two types of locks: implicit locks and explicit locks. Implicit locks are
created by Oracle, whereas explicit locks are user-created. Locks can be taken at
two levels: row level and table level.
Row level: A row is always locked exclusively, so that other users cannot modify
the row until the transaction holding the lock is committed or rolled back. Row
locks are always acquired automatically by Oracle as a result of the statement.
Table level: A transaction acquires a table lock when a table is modified by the
following DML statements: INSERT, UPDATE, DELETE, SELECT ... FOR UPDATE and LOCK
TABLE. A table lock can be held in any of several modes: row share (RS), row
exclusive (RX), share (S), share row exclusive (SRX) and exclusive (X).
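As an illustration of an explicit lock in one of these modes (EMP is a placeholder
table; NOWAIT returns immediately if the lock cannot be acquired):
SQL> LOCK TABLE EMP IN EXCLUSIVE MODE NOWAIT;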
To view information about locks, look into these views:
V$LOCK
V$LOCKED_OBJECT
DBA_OBJECT_LOCK
V_$_LOCKS
DBMS PACKAGES
Creating user locks with Oracle lock management services: You can use Oracle
lock management services for your applications. It is possible to request a lock of a
specific mode, give it a unique name, change the lock mode, and release it. The
Oracle lock management services are available through procedures in the
DBMS_LOCK package. The following procedures are callable from the DBMS_LOCK
package:
Function/Procedure Description
Naming locks:
DBMS_LOCK.ALLOCATE_UNIQUE(LOCKNAME IN VARCHAR2,
LOCKHANDLE OUT VARCHAR2,
EXPIRATION_SECS IN INTEGER DEFAULT 864000);
Lockname: the name of the lock for which you want to generate a unique ID.
Lockhandle: returns to the caller the handle to the lock ID generated.
Expiration_secs: the number of seconds to wait after the last ALLOCATE_UNIQUE
before the lock name may be expired.
For eg (LOCKHANDLE is an OUT parameter, so a bind variable is needed):
SQL> VARIABLE LOCKHANDLE VARCHAR2(128)
SQL> EXEC DBMS_LOCK.ALLOCATE_UNIQUE('TESTLOCK', :LOCKHANDLE);
Requesting a lock: To request a lock with a given mode, use the REQUEST function.
It is overloaded so that the first parameter is either a lock ID or a lock handle:
DBMS_LOCK.REQUEST(ID IN INTEGER / LOCKHANDLE IN VARCHAR2,
LOCKMODE IN INTEGER DEFAULT X_MODE,
TIMEOUT IN INTEGER DEFAULT MAXWAIT,
RELEASE_ON_COMMIT IN BOOLEAN DEFAULT FALSE)
RETURN INTEGER;
To view existing locks:
SQL> SELECT * FROM DBA_LOCKS;
Converting a lock: To change a lock from one mode to another, use the CONVERT
function (again overloaded on ID or lock handle):
DBMS_LOCK.CONVERT(ID IN INTEGER / LOCKHANDLE IN VARCHAR2,
LOCKMODE IN INTEGER,
TIMEOUT IN NUMBER DEFAULT MAXWAIT)
RETURN INTEGER;
The return values are:
0 Success
1 Timeout
2 Deadlock
3 Parameter error
4 Don't own lock specified by ID or lock handle
5 Illegal lock handle
For eg (CONVERT is a function, so its return value must be captured; X_MODE is the
exclusive-mode constant):
SQL> VARIABLE STATUS NUMBER
SQL> EXEC :STATUS := DBMS_LOCK.CONVERT(3300, DBMS_LOCK.X_MODE);
To release a lock:
DBMS_LOCK.RELEASE(ID IN INTEGER)
RETURN INTEGER;
For eg:
SQL> EXEC :STATUS := DBMS_LOCK.RELEASE(3300);
To suspend the session for a given period of time, use the SLEEP procedure.
DBMS_LOCK.SLEEP(SECONDS IN NUMBER);
SQL>EXEC DBMS_LOCK.SLEEP(10);
The DBMS_PIPE package allows two or more sessions in the same instance to
communicate. Oracle pipes are similar in concept to UNIX pipes, but Oracle pipes
are not implemented using the operating system pipe mechanisms; information sent
through Oracle pipes is buffered in the SGA, and all information in the pipes is
lost when the instance is shut down. Depending on the security requirements, you
use either a public or a private pipe. The following table shows the procedures
that can be called:
Function/procedure Description
DBMS_PIPE.CREATE_PIPE(PIPENAME IN VARCHAR2,
MAXPIPESIZE IN INTEGER DEFAULT 8192,
PRIVATE IN BOOLEAN DEFAULT TRUE)
RETURN INTEGER;
Pipename : the name of the pipe you are creating. The name must be unique across
the instance.
Maxpipesize : the maximum size allowed for the pipe, in bytes. The total size of
all the messages on the pipe cannot exceed this amount.
Private : use the default, TRUE, to create a private pipe. Public pipes can be
created implicitly when you call SEND_MESSAGE.
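CREATE_PIPE is a function, so its return status must be captured; a small usage
sketch (TESTPIPE is a placeholder name):
SQL> VARIABLE STATUS NUMBER
SQL> EXEC :STATUS := DBMS_PIPE.CREATE_PIPE('TESTPIPE', 8192, TRUE);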
To remove a pipe:
DBMS_PIPE.REMOVE_PIPE(PIPENAME IN VARCHAR2)
RETURN INTEGER;
SQL> EXEC :STATUS := DBMS_PIPE.REMOVE_PIPE('TESTPIPE');
To purge the contents of a pipe:
DBMS_PIPE.PURGE(PIPENAME IN VARCHAR2);
SQL> EXEC DBMS_PIPE.PURGE('TESTPIPE2');
To receive a message:
DBMS_PIPE.RECEIVE_MESSAGE(PIPENAME IN VARCHAR2,
TIMEOUT IN INTEGER DEFAULT MAXWAIT)
RETURN INTEGER;
For eg:
STATUS := DBMS_PIPE.SEND_MESSAGE('PROC1', 10);
STATUS := DBMS_PIPE.RECEIVE_MESSAGE('TESTPIPE', 10);
where STATUS is a numeric variable and the first argument of each call is a pipe
name (PROC1 and TESTPIPE here).
Creating alerts: The DBMS_ALERT package provides support for the asynchronous
notification of database events. By appropriate use of this package and database
triggers, an application can cause itself to be notified whenever values of interest in
the database are changed. The following table shows the procedures included in this
package:
Function/procedure Description
DBMS_ALERT.SET_DEFAULTS(POLLING_INTERVAL IN NUMBER);
For eg:
SQL> EXEC DBMS_ALERT.SET_DEFAULTS(120);
To register an alert:
For eg:
SQL> EXEC DBMS_ALERT.REGISTER('ALERT1');
To signal an alert (the signal is delivered on commit):
For eg:
SQL> EXEC DBMS_ALERT.SIGNAL('ALERT1', 'message text');
To remove an alert registration:
SQL> EXEC DBMS_ALERT.REMOVE('ALERT1');
Usage of DBMS_JOB: This package allows control of the Oracle job queue, allowing
DBAs to schedule, execute and eliminate jobs from within Oracle itself, independent
of the operating system queuing mechanisms.
To submit a job:
SQL> ED ins.sql
BEGIN
FOR I IN 1..10 LOOP
INSERT INTO TEST VALUES (I);
END LOOP;
END;
Then submit it, receiving the job number in an OUT bind variable (the interval is
passed as a string expression, e.g. 'SYSDATE+1'):
SQL> VARIABLE X NUMBER
SQL> EXEC DBMS_JOB.SUBMIT(:X, '<program to run>', SYSDATE, '<interval expression>');
To remove a job:
DBMS_JOB.REMOVE(JOB NUMBER);
For eg:
SQL> EXEC DBMS_JOB.REMOVE(99);
To change a job:
DBMS_JOB.CHANGE(JOBNUMBER, WHAT, NEXT_DATE, INTERVAL);
For eg:
SQL> EXEC DBMS_JOB.CHANGE(99, NULL, NULL, 'SYSDATE+1');
/* changing the interval from 1 second to 1 day; the NULLs leave WHAT and
NEXT_DATE unchanged */
To run a job:
DBMS_JOB.RUN(JOBNUMBER);
For eg:
SQL>EXEC DBMS_JOB.RUN(99);
INSTALLATION OF ORACLE 8.0.5 ON LINUX
• Database engine (on the internal disk)
• Database files on the same filesystem
In the above layout, /disk1, /disk2 and /disk3 are external disk subsystems. The
reason for arranging things this way is that if the internal disk becomes
corrupted, we can simply re-install Linux after replacing the drive and everything
can function normally. Also make sure your external drives are running with either
RAID-0 or RAID-5, so that disk problems won't stop the show.
Log in as root and do the following things:
3. Create a user called "oracle8" under which you'll install the software:
# useradd -u 501 -g 500 -s /bin/ksh -c "Oracle Owner" -d /oraeng/app/oracle/product/
8.0.5 -m oracle8
4. Change the group of all the slices to the Oracle group, i.e., "dba":
# chgrp -R dba /oraeng /disk1 /disk2 /disk3
# vi /usr/src/linux/include/asm/shmparam.h
#define SHM_IDX_BITS 16
7. Update your profile to suit your environment and do the following things:
$ vi .bash_profile
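The profile entries are not reproduced above; based on the paths and the ORACLE_SID
used elsewhere in this section, they would look something like:
export ORACLE_HOME=/oraeng/app/oracle/product/8.0.5
export ORACLE_SID=ORCL
export PATH=$PATH:$ORACLE_HOME/bin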
Log in as root:
# cd /mnt/orainst
# sh orainst
Now answer appropriately and select whatever software you need. Also make sure to
select "New Installation with creation of DB objects". By doing so, the installer
process will try to create a database with the name ORCL (since ORCL is your
ORACLE_SID from your .bash_profile). This is very important, since the software may
get installed correctly but fail to create the database later. That is why it is
important to ask the installer to create the database as well.
11. After the software is installed correctly, log in as root again and do the
following:
# cd /oraeng/app/oracle/product/8.0.5/orainst
# sh root.sh
In many companies where you go as a DBA, in most cases the database is already
installed and you may be asked to UPGRADE the Oracle version, e.g. from 7.1 to 7.3.
There are two ways you can do it. 1st way (my preferred way):
Copy your dbs directory to /tmp (since it has all the INIT.ORA files):
$ cd /oraeng
$ rm -r app
Now follow the above steps to install the new Oracle version from scratch. After
the software is installed, go to:
$ cd $ORACLE_HOME/orainst
$ sh orainst
Here you choose "Upgrade DB objects" instead of "Install New Software"; the
installer will then confirm your ORACLE_SID and try to upgrade the database from
version 7.1 to 7.3. So, if you really observe, it is a 2-step process.
In case it doesn't work, you can always recreate your database and import the data
from your step 3 for all databases. If you have problems, you can always go back to
your original 7.1 engine and databases, from step 2 (COLD backup and /oraeng
backup).
INSTALLATION OF ORACLE 8.1.5 ON SUN-SPARC
(Oracle Enterprise Edition—OEE)
SYSTEM REQUIREMENTS:
1. 128MB RAM (minimum)
2. Swap space: twice the RAM
3. CD-ROM: Oracle uses ISO 9660 format CD-ROM disks with Rock Ridge extensions
1. Install SunOS on the SPARC machine with at least three mount points (/disk1,
/disk2, /disk3) for the database storage files and one mount point (/oraeng) for
the software.
6. The following are the lines you would add to the /etc/system file to configure
the UNIX kernel with the minimum recommended values:
set shmsys:shminfo_shmmax=4294967295 (minimum)
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmns=200
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
7. Create a directory /var/tmp such that the installer user (oracle8) has write
permission over it and it has at least 20MB of space in it.
Note: The DISPLAY setting (DISPLAY=<host_name>:0.0; export DISPLAY) is valid only
for workstations using the Bourne or Korn shells. For C shells the display setting
is: setenv DISPLAY <host_name>:0.0
2. Set the permissions on /disk1, /disk2, /disk3 and /oraeng, i.e., type umask at
the $ prompt and check for 022.
3. cd /cdrom/cdrom0
4. ./runInstaller, i.e., start the installation by executing this file and answer
appropriately to complete the installation.
5. After the required software has been installed, log in as the root user and run
the root.sh file present in /oraeng/app/oracle/product/8.1.5.
Note: The maximum number of semaphores that can be in one semaphore set
should be equal to the maximum number of Oracle processes.
• An insert on the table will fail if the value specified for the partition key is
outside every range specified for the table's partitions (see the sketch after
this list).
• In addition to the dictionary views used to obtain information about tables and
indexes, the following new dictionary views support the use of partitions.
DBA_PART_TABLES gives information about how the table is partitioned, for all
tables in the database. DBA_PART_INDEXES gives information about how the index is
partitioned, for all indexes in the database. DBA_PART_KEY_COLUMNS identifies the
partition key used for all the tables and indexes in the database.
DBA_TAB_PARTITIONS gives information about the partitions of all tables in the
database, and DBA_IND_PARTITIONS gives information about the partitions of all
indexes in the database.
• DBA_PART_COL_STATISTICS lists statistics for cost-based optimization for
partition columns, for all the tables and indexes in the database.
DBA_PART_HISTOGRAMS shows the distribution of data in partitions, for all
partitions in the database. DBA_TAB_HISTOGRAMS shows the distribution of the data
in tables, for all tables in the database.
• Several utilities were changed to accommodate partitions, including explain
plan, analyze, SQL*Loader, Export and Import. For explain plan, three new columns
were added to the PLAN_TABLE, called PARTITION_START, PARTITION_STOP and
PARTITION_ID. A new operation called PARTITION was added, along with three new
options for its execution: concatenated, single and empty. Some new options for
the TABLE ACCESS operation were added as well, corresponding to the new indexes
that are available: by user ROWID, by global index ROWID and by local index ROWID.
• For SQL*Loader, there are changes to both the conventional path and the direct
path. For the conventional path, SQL*Loader may load one partition only, but
several loads can operate on the same table but different partitions, to execute
data loads on partitioned tables more quickly. For the direct path, SQL*Loader
allows the PARALLEL parameter, which is set to true or false depending upon
whether the DBA wants to load an individual partition using the direct path in
parallel.
• For IMPORT and EXPORT, entire partitioned tables can be imported or exported,
or an individual partition can be handled on its own.
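As referenced in the first bullet above, a small range-partitioning sketch (table
and partition names are hypothetical). An insert whose partition key falls above
every bound is rejected with ORA-14400:
SQL> CREATE TABLE SALES_PART (SALE_ID NUMBER, SALE_DATE DATE)
PARTITION BY RANGE (SALE_DATE)
(PARTITION P1998 VALUES LESS THAN (TO_DATE('01-JAN-1999','DD-MON-YYYY')),
PARTITION P1999 VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY')));
SQL> INSERT INTO SALES_PART VALUES (1, TO_DATE('15-MAR-2000','DD-MON-YYYY'));
-- fails: the partition key is beyond the highest legal partition key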
DBA LAB EXERCISE