Describe one instance when you encountered an error in the alert log and how you overcame it. What actions did you take?
Oracle writes errors to the alert log file. Depending upon the error, corrective action needs to be taken, for example:
1) Deadlock error: take the trace file from the user dump destination and analyze it for the error.
2) ORA-01555 snapshot too old: check the query, try to fine-tune it, and check the undo size.
3) Unable to extend segment: check the tablespace size and, if required, add space to the tablespace with
'alter database datafile ... resize' or the alter tablespace add datafile command.
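The space fixes in point 3 can be sketched as follows (the file paths and sizes are illustrative assumptions, not from the source):

```sql
-- Enlarge an existing datafile (path and size are hypothetical examples):
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 500M;

-- Or add a new datafile to the affected tablespace:
ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/ORCL/users02.dbf' SIZE 500M;
```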
What is Ora-1555 Snapshot too Old error? Explain in detail?
Oracle rollback segments (more recently, undo) hold a copy of data before it was modified, and they
work in a round-robin fashion, writing and then eventually overwriting the entries once the
changes are committed.
They are needed to provide read consistency (a consistent set of data at a point in time) or to allow a
process to abandon or rollback the changes or for database recovery.
Here's a typical scenario: User A opens a query to fetch every row from a billion-row table. If User B updates and commits the
last row of the billion-row table, a rollback entry is created so User A can see the data as it was
before the update.
Other users are busily updating rows in the database, and this in turn generates rollback which may
eventually cause the entry needed for User A to be overwritten (after all, User B did commit the change,
so it's OK to overwrite the rollback segment). Maybe 15 minutes later the query is still running, and when
User A finally fetches the last row of the billion-row table, the rollback entry is gone. He gets
ORA-01555: snapshot too old: rollback segment too small.
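With automatic undo management, a common mitigation is to enlarge the undo tablespace and raise the retention target. A minimal sketch, with illustrative values:

```sql
-- Ask Oracle to keep undo for at least 2 hours (7200 s is an example value):
ALTER SYSTEM SET undo_retention = 7200;

-- Check how far back undo currently reaches (10g and later):
SELECT MAX(tuned_undoretention) FROM v$undostat;
```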
I have applied the following commands. Now what will happen: will the database give an
error, or will it work?
Shutdown abort;
Startup;
The database will definitely start without error, but all uncommitted data is lost: all
sessions are killed, all transactions are aborted, and nothing is written from the buffers, because shutdown abort
terminates the instance immediately without committing or checkpointing.
There are four modes to shut down the database:
1) Shutdown immediate, 2) Shutdown normal, 3) Shutdown transactional, 4) Shutdown abort
When the database is shut down by the first three methods, a checkpoint takes place, but shutdown
abort does not enforce a checkpoint; it simply shuts down without waiting for any users to
disconnect.
What is a mutating trigger error? In single-user mode we got a mutating-table error; as a DBA how will you
resolve it?
A mutating table error occurs when the same table is accessed more than once in one statement, typically
when a row-level trigger queries or modifies the table it is defined on. If the logic is in a BEFORE row
trigger, one common workaround is to move it into an AFTER statement-level trigger.
Explain the DUAL table. Is any data internally stored in the DUAL table? Lots of users are accessing select
sysdate from dual and they are getting some millisecond differences. If we execute SELECT
SYSDATE FROM EMP; what error will we get? Why?
DUAL is a system-owned table created during database creation. The DUAL table consists of a single column
and a single row with the value 'X'. We will not get any error if we execute select sysdate from scott.emp;
instead SYSDATE is treated as a pseudo column and its value is displayed for all the rows retrieved. For
example, if there are 12 rows in the EMP table, it will return the date 12 times.
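The behaviour above can be demonstrated with a quick sketch (SCOTT.EMP is the standard demo table; the row count will vary by installation):

```sql
SELECT sysdate FROM dual;      -- exactly one row
SELECT sysdate FROM scott.emp; -- the same date repeated once per row in EMP
```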
As an Oracle DBA, which UNIX commands should you be familiar with?
To check processes: ps -ef | grep pmon (or just ps -ef)
To monitor the alert log file: tail -f alert.log
To check CPU usage: top, vmstat 2 5
What is a Database instance?
A database instance, also known as a server, is a set of memory structures and background processes that
access a set of database files. It is possible for a single database to be accessed by multiple instances
(this is the Oracle Parallel Server option).
What are the Requirements of simple Database?
A simple database consists of:
One or more data files, One or more control files, Two or more redo log files, Multiple users/schemas,
One or more rollback segments, One or more Tablespaces, Data dictionary tables, User objects (table,
indexes, views etc.)
The server (instance) that accesses the database consists of:
SGA (database buffers, dictionary cache buffers, redo log buffers, shared SQL pool), SMON
(System Monitor), PMON (Process Monitor), LGWR (Log Writer), DBWR (Database Writer), ARCH
(Archiver), CKPT (Checkpoint), RECO, Dispatcher, and user processes with associated PGAs.
Which process writes data from data files to database buffer cache?
No background process writes data from the datafiles into the buffer cache; server processes read blocks from the datafiles into the cache. The background process DBWR works in the opposite direction, writing dirty buffers from the database buffer cache to the datafiles.
How to DROP an Oracle Database?
You can do it at the OS level by deleting all the files of the database. The files to be deleted can be
found using:
1) select * from dba_data_files; 2) select * from v$logfile; 3) select * from v$controlfile; 4) archive log
list
5) initSID.ora 6) clean the UDUMP, BDUMP, scripts etc, 7) Cleanup the listener.ora and the
tnsnames.ora. Make sure that the oratab entry is also removed.
Otherwise, go to DBCA and click on delete database.
In Oracle 10g there is a new command to drop an entire database (note that it takes no database name):
startup restrict mount;
drop database;
In fact, a DBA should never drop a database via OS-level commands, but rather use the GUI utility DBCA to
drop the database.
How can we determine the size of the log files?
Select sum(bytes)/1024/1024 "size_in_MB" from v$log;
What is difference between Logical Standby Database and Physical Standby database?
A physical or logical standby database is a database replica created from a backup of a primary
database. A physical standby database is physically identical to the primary database on a block-for-block basis. It is maintained in managed recovery mode to remain current and can be set to read only;
archived logs are copied and applied.
A logical standby database is logically identical to the primary database. It is updated using SQL
statements.
How do you find whether the instance was started with a pfile or an spfile?
1) SELECT name, value FROM v$parameter WHERE name = 'spfile';
This query returns NULL if you are using a PFILE.
2) SHOW PARAMETER spfile
This returns NULL in the value column if you are using a pfile and not an spfile.
3) SELECT COUNT(*) FROM v$spparameter WHERE value IS NOT NULL;
If the count is non-zero then the instance is using an spfile, and if the count is zero then it is using a
pfile:
SQL> SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type"
FROM sys.v_$parameter WHERE name = 'spfile';
What is a full backup?
A full backup is an operating system backup of all datafiles, online redo log
files, the control file and the parameter file that constitute an Oracle database. If you are using RMAN for
backup, then in RMAN a full backup means an incremental backup at level 0.
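In RMAN terms, the distinction can be sketched as follows (RMAN syntax, run from the RMAN prompt, not SQL*Plus):

```sql
-- A level 0 incremental backup can serve as the parent of later incrementals:
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- A plain full backup copies the same blocks but is never part of an
-- incremental strategy:
BACKUP DATABASE;
```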
While taking a hot backup (begin/end backup), what happens at the back end?
When we are taking a hot backup (begin backup - end backup), the datafile headers of the
datafiles in the corresponding tablespace are frozen. So Oracle stops updating the datafile headers but
continues to write data into the datafiles. During a hot backup Oracle generates more redo; this is because
Oracle writes out complete changed blocks to the redo log files.
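A minimal hot-backup sketch for a single tablespace (the tablespace name and copy path are illustrative; the database must be in ARCHIVELOG mode):

```sql
ALTER TABLESPACE users BEGIN BACKUP;  -- freezes the datafile header checkpoint
-- copy the tablespace's datafiles at OS level,
-- e.g.  cp /u01/oradata/ORCL/users01.dbf /backup/
ALTER TABLESPACE users END BACKUP;    -- thaws the header; headers catch up
```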
Which is the best option to move a database from one server to another server on the same
network, and why?
Import-Export, Backup-Restore, Detach-Attach
Import-Export is the best option to move a small database from one server to another server on the same
network, as it reduces network traffic. However, Import/Export works well only if you're dealing with very small
databases; with a few million rows it takes minutes to copy, compared to seconds using
backup and restore.
What are the different types of RMAN backup?
Full backup: during a full backup (level 0), every block ever used in the datafiles is backed up. The
only difference between a level 0 incremental backup and a full backup is that a full backup is never
included in an incremental strategy.
Cumulative backup: during a cumulative (level 1) backup, all blocks changed since the last level 0 backup are
backed up.
If you have an example table, what is the best way to get sizing data for the production table
implementation?
The best way is to analyze the table and then use the data provided in the DBA_TABLES view to get
the average row length and other pertinent data for the calculation. The quick and dirty way is to look
at the number of blocks the table is actually using and ratio the number of rows in the table to its
number of blocks against the number of expected rows.
How can you find out how many users are currently logged into the database? How can you find
their operating system id?
Look at the v$session or v$process views, and check the current logons count in the v$sysstat
view. On UNIX you can use a ps -ef | grep oracle | wc -l command, but this only works against a
single-instance installation.
How can you determine if an index needs to be dropped and rebuilt?
Run the ANALYZE INDEX ... VALIDATE STRUCTURE command on the index and then calculate the ratio
LF_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN); if it isn't near 1.0 (i.e. greater than 0.7 or so),
the index should be rebuilt. The same applies if the ratio BR_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN) is
nearing 0.3. It is not an easy decision, so I personally suggest consulting an expert before going to
rebuild.
What is tkprof and how is it used?
The tkprof tool is a tuning tool used to determine CPU and execution times for SQL statements. You
use it by first setting timed_statistics to true in the initialization file and then turning on tracing for
either the entire database via the sql_trace parameter or for the session using the ALTER SESSION
command. Once the trace file is generated you run the tkprof tool against the trace file and then look at
the output from the tkprof tool. This can also be used to generate explain plan output.
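The tracing workflow above can be sketched as follows (the trace file name and connect string are illustrative assumptions):

```sql
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;
-- ... run the statements to be profiled ...
ALTER SESSION SET sql_trace = FALSE;
-- Then, at the OS prompt (not in SQL*Plus), format the trace file:
--   tkprof orcl_ora_12345.trc report.txt explain=scott/tiger sys=no
```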
What is Explain plan and how is it used?
The EXPLAIN PLAN command is a tool for tuning SQL statements. To use it you must have a
plan table created in the schema you are running the explain plan for. This is created using the
utlxplan.sql script. Once the plan table exists, you run the EXPLAIN PLAN command giving as its
argument the SQL statement to be explained. The plan table is then queried to see the
execution plan of the statement. Explain plan output can also be generated using tkprof.
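A minimal sketch of the workflow (the table and predicate are illustrative; DBMS_XPLAN.DISPLAY is available from 9i onward as a convenient way to query the plan table):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```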
How do you prevent output from coming to the screen?
The SET option TERMOUT controls output to the screen. Setting TERMOUT OFF turns off screen
output. This option can be shortened to TERM.
How do you prevent Oracle from giving you informational messages during and after a SQL
statement execution?
The SET options FEEDBACK and VERIFY can be set to OFF.
How do you generate file output from SQL?
By use of the SPOOL command
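For example (the path is illustrative):

```sql
SPOOL /tmp/report.txt
SELECT username FROM dba_users;
SPOOL OFF
```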
A tablespace has a table with 30 extents in it. Is this bad? Why or why not.
Multiple extents in and of themselves aren't bad. However, if you also have chained rows this can hurt
performance.
How do you set up tablespaces during an Oracle installation?
You should always attempt to use the Optimal Flexible Architecture (OFA) standard or another partitioning
scheme to ensure proper separation of SYSTEM, ROLLBACK, REDO LOG, DATA, TEMPORARY
and INDEX segments.
You see multiple fragments in the SYSTEM tablespace, what should you check first?
Ensure that users don't have the SYSTEM tablespace as their TEMPORARY or DEFAULT tablespace
assignment by checking the DBA_USERS view.
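A sketch of the check described:

```sql
SELECT username, default_tablespace, temporary_tablespace
FROM   dba_users
WHERE  default_tablespace = 'SYSTEM'
   OR  temporary_tablespace = 'SYSTEM';
```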
What are some indications that you need to increase the SHARED_POOL_SIZE parameter?
Poor data dictionary or library cache hit ratios, getting error ORA-04031. Another indication is steadily
decreasing performance with all other tuning parameters the same.
Guideline for sizing db_block_size and db_multi_block_read for an application that does many
full table scans?
Oracle almost always reads in 64 KB chunks. The product of DB_BLOCK_SIZE and
DB_FILE_MULTIBLOCK_READ_COUNT should equal 64 KB or a multiple of it.
When looking at v$sysstat you see that sorts (disk) is high. Is this bad or good? If bad -How do
you correct it?
If you get excessive disk sorts this is bad. This indicates you need to tune the sort area parameters in
the initialization files. The major sort parameter is the SORT_AREA_SIZE parameter.
When should you increase copy latches? What parameters control copy latches?
When you get excessive contention for the copy latches as shown by the "redo copy" latch hit ratio.
You can increase copy latches via the initialization parameter LOG_SIMULTANEOUS_COPIES to
twice the number of CPUs on your system.
Where can you get a list of all initialization parameters for your instance? How about an
indication if they are default settings or have been changed?
You can look in the init.ora file for an indication of manually set parameters. For all parameters, their
value and whether or not the current value is the default value, look in the v$parameter view.
Describe hit ratio as it pertains to the database buffers. What is the difference between
instantaneous and cumulative hit ratio and which should be used for tuning?
The hit ratio is a measure of how many times the database was able to read a value from the buffers
versus how many times it had to read a data value from the disks. A value greater than 80-90% is
good; less could indicate problems. If you simply take the ratio of the existing statistics, this is a
cumulative value since the database started. If you compare pairs of readings taken over
some arbitrary time span, this is the instantaneous ratio for that span. Generally speaking, an
instantaneous reading gives more valuable data, since it tells you what your instance is doing for the
period over which it was measured.
Discuss row chaining, how does it happen? How can you reduce it? How do you correct it?
Row chaining occurs when a VARCHAR2 value is updated and the length of the new value is longer
than the old value and will not fit in the remaining block space; the row then chains to
another block (strictly speaking this is row migration; true chaining occurs when a row is too large for any single block). It can be reduced by setting the storage parameters on the table to appropriate values. It
can be corrected by export and import of the affected table.
You are getting busy buffer waits. Is this bad? How can you find what is causing it?
Buffer busy waits could indicate contention in redo, rollback or data blocks. You need to check the
v$waitstat view to see what areas are causing the problem. The "class" column tells you where
the problem is, and the "count" column tells you how many waits occurred. UNDO means rollback segments, DATA means
database buffers.
If you see contention for library caches how you can fix it?
Increase the size of the shared pool.
If you see statistics that deal with "undo" what are they really talking about?
Rollback segments and associated structures.
If a tablespace has a default pctincrease of zero what will this cause (in relationship to the SMON
process)?
The SMON process would not automatically coalesce its free space fragments.
If a tablespace shows excessive fragmentation what are some methods to defragment the
tablespace? (7.1,7.2 and 7.3 only)
In Oracle 7.0 to 7.2, the alter session set events 'immediate trace name coalesce level ts#';
command is the easiest way to defragment contiguous free space fragmentation. The ts# parameter
corresponds to the ts# value found in the SYS.TS$ table. In version 7.3, alter tablespace ... coalesce; is
best. If the free space is not contiguous, then export, drop and import of the tablespace contents may be
the only way to reclaim non-contiguous free space.
How can you tell if a tablespace has excessive fragmentation?
If a select against the dba_free_space view shows that the count of a tablespace's free extents is greater than
the count of its datafiles, then it is fragmented.
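One way to sketch that comparison per tablespace:

```sql
SELECT tablespace_name, COUNT(*) AS free_extents
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY free_extents DESC;
```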
You see the following on a status report: redo log space requests 23 redo log space wait time 0 Is
this something to worry about? What if redo log space wait time is high? How can you fix this?
Since the wait time is zero, no problem. If the wait time was high it might indicate a need for more or
larger redo logs.
If you see a pin hit ratio of less than 0.8 in the estat library cache report is this a problem? If so,
how do you fix it?
This indicates that the shared pool may be too small. Increase the shared pool size.
If you see the value for reloads is high in the estat library cache report is this a matter for
concern?
Yes, you should strive for zero reloads if possible. If you see excessive reloads then increase the size of
the shared pool.
You look at the dba_rollback_segs view and see that there is a large number of shrinks and they
are of relatively small size, is this a problem? How can it be fixed if it is a problem?
A large number of small shrinks indicates a need to increase the size of the rollback segment extents.
Ideally you should have no shrinks or a small number of large shrinks. To fix this just increase the size
of the extents and adjust optimal accordingly.
You look at the dba_rollback_segs view and see that you have a large number of wraps is this a
problem?
A large number of wraps indicates that your extent size for your rollback segments are probably too
small. Increase the size of your extents to reduce the number of wraps. You can look at the average
transaction size in the same view to get the information on transaction size.
You see multiple extents in the Temporary Tablespace. Is this a problem?
As long as they are all the same size this is not a problem. In fact, it can even improve performance
since Oracle would not have to create a new extent when a user needs one.
How do you set up your tablespaces on installation? (Level: Low)
The answer here should show an understanding of separation of redo and rollback, data and indexes
and isolation of SYSTEM tables from other tables. An example would be to specify that at least 7 disks
should be used for an Oracle installation.
Disk Configuration:
SYSTEM tablespace on 1, Redo logs on 2 (mirrored redo logs), TEMPORARY tablespace on 3,
ROLLBACK tablespace on 4, DATA and INDEXES 5,6
They should also indicate how they will handle archive logs and exports; as long as they have a
logical plan for combining or further separation, more or fewer disks can be specified.
You have installed Oracle and you are now setting up the actual instance. You have been waiting
an hour for the initialization script to finish, what should you check first to determine if there is a
problem?
Check to make sure that the archiver is not stuck. If archive logging is turned on during install a large
number of logs will be created. This can fill up your archive log destination causing Oracle to stop to
wait for more space.
When configuring SQLNET on the server what files must be set up?
INITIALIZATION file, TNSNAMES.ORA file, SQLNET.ORA file
When configuring SQLNET on the client what files need to be set up?
SQLNET.ORA, TNSNAMES.ORA
You have just started a new instance with a large SGA on a busy existing server. Performance is
terrible, what should you check for?
The first thing to check with a large SGA is that it is not being swapped out.
What OS user should be used for the first part of an Oracle installation (on UNIX)?
You must use root first.
When should the default values for Oracle initialization parameters be used as is?
Never
How many control files should you have? Where should they be located?
At least 2 on separate disk spindles (Mirrored by Oracle).
How many redo logs should you have and how should they be configured for maximum
recoverability?
You should have at least 3 groups of two redo logs with the two logs each on a separate disk spindle
(mirrored by Oracle). The redo logs should not be on raw devices on UNIX if it can be avoided.
Why are recursive relationships bad? How do you resolve them?
A recursive relationship exists when a table relates to itself. It is considered bad when it
is a hard relationship (i.e. neither side is a "may", both are "must"), as this can make it impossible
to put in a top or a bottom of the table. For example, in the EMPLOYEE table you
could not put in the PRESIDENT of the company because he has no boss, or the junior janitor because
he has no subordinates. These types of relationships are usually resolved by adding a small intersection
entity.
What does a hard one-to-one relationship mean (one where the relationship on both ends is
"must")?
This means the two entities should probably be made into one entity.
How should a many-to-many relationship be handled?
By adding an intersection entity table
What is an artificial (derived) primary key? When should an artificial (or derived) primary key
be used?
A derived key comes from a sequence. Usually it is used when a concatenated key becomes too
cumbersome to use as a foreign key.
When should you consider de-normalization?
Whenever performance analysis indicates it would be beneficial to do so without compromising data
integrity.
-UNIX-
The archive destination is probably full; back up the archive logs and remove them, and the archiver will
restart.
Where would you look to find out if a redo log was corrupted assuming you are using Oracle
mirrored redo logs?
There is no message that comes to the SQLDBA or SRVMGR programs during startup in this
situation; you must check the alert.log file for this information.
You attempt to add a datafile and get: ORA-01118: cannot add anymore datafiles: limit of 40
exceeded. What is the problem and how can you fix it?
When the database was created, the db_files parameter in the initialization file was set to 40. You can
shut down and reset this to a higher value, up to the value of MAXDATAFILES as specified at
database creation. If MAXDATAFILES is set too low, you will have to rebuild the control file to
increase it before proceeding.
You look at your fragmentation report and see that SMON has not coalesced any of your
tablespaces, even though you know several have large chunks of contiguous free extents. What is
the problem?
Check the dba_tablespaces view for the value of pct_increase for the tablespaces. If pct_increase is
zero, smon will not coalesce their free space.
Your users get the following error: ORA-00055 maximum number of DML locks exceeded?
What is the problem and how do you fix it?
The number of DML locks is set by the initialization parameter DML_LOCKS. If this value is set
too low (which it is by default), you will get this error. Increase the value of DML_LOCKS. If you are sure
that this is just a temporary problem, you can have the users wait and then try again later, and the error
should clear.
You get a call from you backup DBA while you are on vacation. He has corrupted all of the
control files while playing with the ALTER DATABASE BACKUP CONTROLFILE command.
What do you do?
As long as all datafiles are safe and he was successful with the BACKUP CONTROLFILE command, you can
do the following:
CONNECT INTERNAL
STARTUP MOUNT
(take any read-only tablespaces offline before the next step)
ALTER DATABASE DATAFILE .... OFFLINE;
RECOVER DATABASE USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS; (bring read-only tablespaces back online)
Shut down and back up the system, then restart. If they have a recent output file from the ALTER
DATABASE BACKUP CONTROLFILE TO TRACE; command, they can use that to recover as well.
If no backup of the control file is available, then the following will be required:
CONNECT INTERNAL
STARTUP NOMOUNT
CREATE CONTROLFILE .....;
However, they will need to know all of the datafiles, logfiles, and settings for MAXLOGFILES,
MAXLOGMEMBERS, MAXLOGHISTORY and MAXDATAFILES for the database to use the command.
You have taken a manual backup of a datafile using OS. How RMAN will know about it?
Whenever we take any backup through RMAN, information about the backup is recorded in the RMAN
repository. The repository can be either the controlfile or a recovery catalog. However, if you take a backup
through an OS command, RMAN is not aware of it and hence it is not reflected in the
repository. The same is true whenever we create a new controlfile, or a backup taken by RMAN is
transferred to another place using OS commands: the controlfile/recovery catalog does not know about
the prior backups of the database.
So, in order to restore the database with a newly created controlfile, we need to inform RMAN about the
backups taken before, so that it can pick one to restore.
This task is done with the CATALOG command in RMAN, which can:
Add information about backup pieces and image copies on disk to the repository.
Record a datafile copy as a level 0 incremental backup in the RMAN repository.
Record a datafile copy that was taken via OS commands.
But the CATALOG command has some restrictions. It cannot:
Catalog a file that belongs to a different database.
Catalog a backup piece that exists on an SBT (tape) device.
Example: Catalog Archive log
RMAN> CATALOG ARCHIVELOG '/oracle/oradata/arju/arc001_223.arc',
'/oracle/oradata/arju/arc001_224.arc';
Catalog Datafile
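A sketch of cataloging a datafile copy (the path is illustrative; RMAN syntax):

```sql
RMAN> CATALOG DATAFILECOPY '/oracle/backup/users01.dbf';
-- or record it as a level 0 incremental backup:
RMAN> CATALOG DATAFILECOPY '/oracle/backup/users01.dbf' LEVEL 0;
```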
A REDUNDANCY value of 1 says that as soon as a new backup is created, the old one is no longer needed and can be deleted. The
other retention policy option is RECOVERY WINDOW, specified in days, which defines the period of time
in which point-in-time recovery must be possible. Thus it defines how long backups should be retained.
What kind of backup you take Physical / Logical? Which one is better and Why?
Logical backup means backing up the individual database objects such as tables, views , indexes using
the utility called EXPORT, provided by Oracle. The objects exported in this way can be imported into
either same database or into any other database. The backed-up copy of information is stored in a
dumpfile, and this file can be read only using another utility called IMPORT. There is no other way
you can use this file. In this backup Oracle Export utility stores data in Binary file at OS level.
Physical backups rely on the Operating System to make a copy of the physical files like data files, log
files, control files that comprise the database. In this backup physically CRD (datafile, controlfile,
redolog file) files are copied from one location to another (disk or tape)
We don't prefer logical backup: it is very slow, and point-in-time recovery is not possible from it.
What is Partial Backup?
A Partial Backup is any operating system backup short of a full backup, taken while the database is
open or shut down.
A partial backup is an operating system backup of part of a database. The backup of an individual
tablespace's datafiles or the backup of a control file are examples of partial backups. Partial backups are
useful only when the database is in ARCHIVELOG mode.
What are the name of the available VIEW in oracle used for monitoring database is in backup
mode (begin backup).
V$BACKUP: the STATUS column of this view shows whether a tablespace is in hot backup mode. The status
'ACTIVE' shows the datafile to be in backup mode.
V$DATAFILE_HEADER: the FUZZY column also helps a DBA monitor datafiles which are in backup mode.
A fuzzy value of NO indicates that the datafile is in hot backup (begin backup) mode.
NOTE: the database doesn't start up when a datafile is in backup mode, so put the datafile back in
normal mode before shutting down the database.
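A sketch of the monitoring query, joining to v$datafile for the file name:

```sql
SELECT d.name, b.status
FROM   v$backup b
JOIN   v$datafile d ON b.file# = d.file#
WHERE  b.status = 'ACTIVE';  -- files currently in begin-backup mode
```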
What is Tail log backup? Where can we use it?
A tail log backup is the log backup taken after data corruption (a disaster). Even though there is file
corruption, we can try to take a log backup (the tail log backup). This will be used during point-in-time
recovery.
Consider a scenario where we have a full backup at 12:00 noon and one transaction log backup at 1:00
PM, with log backups scheduled to run every hour. If a disaster happens at 1:30 PM, we can try
to take a tail log backup at 1:30 (after the disaster). If it succeeds, then in recovery we first
restore the 12:00 noon full backup, then apply the 1:00 PM log backup, and finally the 1:30 tail log backup
(taken after the disaster).
How to check the size of SGA?
SQL> show SGA
Total System Global Area 167772160 bytes
Fixed Size 1247900 bytes
Variable Size 58721636 bytes
Database Buffers 104857600 bytes
Redo Buffers 2945024 bytes
How to define data block size
The primary block size is defined by the Initialization parameter DB_BLOCK_SIZE.
How can we determine the size of the log files.
SQL>Select sum(bytes)/(1024*1024) size_in_mb from v$log;
What do you do when the server cannot start due to a corrupt master database?
If the master database is corrupt, then other databases most likely have problems too, and the need for
MDF recovery becomes immediate. However, you can try to rebuild it with the rebuild utility (rebuild.exe) and
then restore it.
What is a flash back query? This feature is also available in 9i. What are the difference between
9i and 10g (related to flash back query).
Oracle 9i flashback: Flashback query.
10g enhancements: Flashback version query, Flashback transaction query view.
10g new features: Flashback Table, Flashback Database.
SQL>SHUTDOWN IMMEDIATE
Set the DB_NAME initialization parameter in the initialization parameter file (PFILE) to the new
database name.
Note:The DBNEWID utility does not change the server parameter file (SPFILE). Therefore, if you use
SPFILE to start your Oracle database, you must re-create the initialization parameter file from the
server parameter file, remove the server parameter file, change the DB_NAME in the initialization
parameter file, and then re-create the server parameter file. Because you have changed only the
database name, and not the database ID, it is not necessary to use the RESETLOGS option when
you open the database. This means that all previous backups are still usable.
4. SQL>Startup;
Steps to change the DBID only:
Repeat the same procedure above, but run:
nid TARGET=sys/password@TSH3
Then shut down and open the database with the RESETLOGS option.
What is the view name where we can get the space for tables or views?
DBA_SEGMENTS:
SELECT segment_name, SUM(bytes) FROM dba_segments GROUP BY segment_name;
If you find the SQL query that causes the problem, then take a SQL trace with
explain plan; it will show how the SQL query is executed by Oracle, and depending
upon that report you can tune your database.
For example: a table has 10,000 records but you want to fetch only 5 rows, yet
in that query Oracle does a full table scan. A full table scan for only 5 rows is
not a good thing, so create an index on the relevant column; this is one way to tune
the database.
What is the default maximum number of enabled roles in a database?
The MAX_ENABLED_ROLES init.ora parameter limits the number of roles any
user can have enabled simultaneously. The default is 30 in both Oracle 8i and 9i.
When you create a role it is enabled by default. If you create many roles, then you
may exceed the MAX_ENABLED_ROLES setting even if you are not the user of those
roles.
User Profiles:
User profiles are used to limit the amount of system and database resources
available to a user and to manage password restrictions. If no profiles are created
in a database, then the default profile, which specifies unlimited resources for
all users, will be used.
How to convert a locally managed tablespace to a dictionary managed
tablespace (and back)?
>execute dbms_space_admin.tablespace_migrate_from_local('tablespace_name');
>execute dbms_space_admin.tablespace_migrate_to_local('tablespace_name');
What is a cluster Key ?
The related columns of the tables are called the cluster key. The cluster key is
using a cluster index and its value is stored only once for multiple tables in the
cluster.
What are four performance bottlenecks that can occur in a database server
and how are they detected and prevented?
CPU bottlenecks
Undersized memory structures
Inefficient or high-load SQL statements
Database configuration issues
Four major steps to detect these issues:
Analyzing Optimizer Statistics
Analyzing an Execution Plan
Using Hints to Improve Data Warehouse Performance
Using Advisors to Verify SQL Performance
Analyzing Optimizer Statistics
Optimizer statistics are a collection of data that describes more details about the
database and the objects in the database. The optimizer statistics are stored in
the data dictionary. They can be viewed using data dictionary views similar to
the following:
SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME =
'GATHER_STATS_JOB';
Because the objects in a database constantly change, statistics must be
regularly updated so that they accurately describe these database objects.
Statistics are maintained automatically by Oracle Database or you can maintain
the optimizer statistics manually using the DBMS_STATS package.
Analyzing an Execution Plan
form in which RMAN can write backups to media managers such as tape drives
and tape libraries.
A backup set contains one or more binary files in an RMAN-specific format. This
file is known as a backup piece. A backup set can contain multiple datafiles. For
example, you can back up ten datafiles into a single backup set consisting of a
single backup piece. In this case, RMAN creates one backup piece as output. The
backup set contains only this backup piece.
What is an UTL_FILE? What are different procedures and functions
associated with it?
The UTL_FILE package lets your PL/SQL programs read and write operating
system (OS) text files. It provides a restricted version of standard OS stream file
input/output (I/O).
Subprogram -Description
FOPEN function-Opens a file for input or output with the default line size.
IS_OPEN function -Determines if a file handle refers to an open file.
FCLOSE procedure -Closes a file.
FCLOSE_ALL procedure -Closes all open file handles.
GET_LINE procedure -Reads a line of text from an open file.
PUT procedure -Writes a string to a file. This does not append a line terminator.
NEW_LINE procedure-Writes one or more OS-specific line terminators to a file.
PUT_LINE procedure -Writes a line to a file. This appends an OS-specific line
terminator.
PUTF procedure -A PUT procedure with formatting.
FFLUSH procedure-Physically writes all pending output to a file.
FOPEN function -Opens a file with the maximum line size specified.
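A minimal sketch tying a few of these subprograms together; it assumes a directory object named APP_DIR has already been created and granted, and the file name is illustrative:

```sql
DECLARE
  fh UTL_FILE.FILE_TYPE;
BEGIN
  fh := UTL_FILE.FOPEN('APP_DIR', 'demo.txt', 'w');  -- open for writing
  UTL_FILE.PUT_LINE(fh, 'Hello from UTL_FILE');      -- writes line + OS terminator
  UTL_FILE.FFLUSH(fh);                               -- force pending output to disk
  UTL_FILE.FCLOSE(fh);
END;
/
```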
Differentiate between TRUNCATE and DELETE?
The DELETE command logs each row change and generates undo, whereas
TRUNCATE simply deallocates the data without row-level undo. Hence data
removed by DELETE can be rolled back, but data removed by TRUNCATE
cannot. TRUNCATE is a DDL statement whereas DELETE is a DML statement.
What is an Oracle Instance?
Instance is a combination of memory structure and process structure. Memory
structure is SGA (System or Shared Global Area) and Process structure is
background processes.
Components of SGA:
Database Buffer Cache,
Shared Pool (further divided into the Library Cache and the Data Dictionary
Cache, or Row Cache) / Large Pool / Streams Pool / Java Pool,
Redo Log Buffer.
Background Process:
Mandatory Processes (SMON, PMON, DBWR, LGWR, CKPT, RECO)
Optional Processes (ARCn, MMAN, MMON, MMNL)
When Oracle starts an instance, it reads the initialization parameter file to determine the values of
initialization parameters. Then, it allocates an SGA, which is a shared area of memory used for
database information, and creates background processes. At this point, no database is associated with
these memory structures and processes.
information, Backup set and backup piece information, Backup datafile and redo
log information, Datafile copy information, The current log sequence number
When you start an Oracle DB, which file is accessed first?
To start an instance, the Oracle server needs a parameter file containing
information about the instance. It searches for the file in the following sequence:
1) spfile<SID>.ora -- if found, the instance is started.
2) Default SPFILE (spfile.ora) -- if spfile<SID>.ora is not found.
3) init<SID>.ora (PFILE) -- if no SPFILE is found.
4) Default PFILE (init.ora) -- used as a last resort to start the instance.
What is the Job of SMON, PMON processes?
SMON: The system monitor performs instance recovery at instance startup. In a
cluster environment it can also recover other instances that have failed. It
cleans up temporary segments that are no longer in use, recovers dead
transactions skipped during crash and instance recovery, and coalesces free
extents within the database to make free space contiguous and easier to
allocate.
PMON: The process monitor performs recovery when a user process fails. It is
responsible for cleaning up the cache and freeing resources used by the failed
process. In an MTS environment it checks on dispatcher and server processes,
restarting them in case of failure.
What is Instance Recovery?
When an Oracle instance fails, Oracle performs an instance recovery when the
associated database is re-started.
Instance recovery occurs in two steps:
Cache recovery: Changes being made to a database are recorded in the database
buffer cache and, simultaneously, in the online redo log files. When enough
changed data accumulates in the database buffer cache, it is written to the data
files. If an Oracle instance fails before the data in the database buffer cache
has been written to the data files, Oracle uses the data recorded in the online
redo log files to recover the lost changes when the
associated database is re-started. This process is called cache recovery.
Transaction recovery: When a transaction modifies data in a database, the
before image of the modified data is stored in an undo segment. The data stored
in the undo segment is used to restore the original values in case a transaction is
rolled back. At the time of an instance failure, the database may have
uncommitted transactions. It is possible that changes made by these
uncommitted transactions have gotten saved in data files. To maintain read
consistency, Oracle rolls back all uncommitted transactions when the associated
database is re-started. Oracle uses the undo data stored in undo segments to
accomplish this. This process is called transaction recovery.
In short, instance recovery consists of:
1. Rolling forward the committed transactions
2. Rolling back the uncommitted transactions
What is written in Redo Log Files?
The log writer (LGWR) writes the redo log buffer contents to the redo log files.
LGWR does this when a transaction commits, every three seconds, when the redo
log buffer is 1/3 full, and immediately before the Database Writer (DBWn)
writes its changed buffers to the data files.
How do you control number of Datafiles one can have in an Oracle
database?
19
When starting an Oracle instance, the database's parameter file indicates the
amount of SGA space to reserve for datafile information; the maximum number
of datafiles is controlled by the DB_FILES parameter. This limit applies only for
the life of the instance.
How many Maximum Datafiles can there be in an Oracle Database?
The default maximum is 255 datafiles, defined in the control file
(MAXDATAFILES) at database creation time.
It can be increased by setting the initialization parameter to a higher value at
database creation time; setting it too high can cause DBWR issues.
Before 9i the maximum number of datafiles in a database was 1022. From 9i
onwards the limit applies to the number of datafiles in a tablespace.
What is a Tablespace?
A tablespace is a logical storage unit within the database. It is logical because a
tablespace is not visible in the file system of the machine on which the database
resides. A tablespace in turn consists of at least one datafile, which, in turn, is
physically located in the file system of the server. The tablespace builds the
bridge between the Oracle database and the file system in which the table or
index data is stored.
There are three types of tablespaces in Oracle:
Permanent tablespaces, Undo tablespaces, Temporary tablespaces
What is the purpose of Redo Log files?
The purpose of the redo log files is to record all changes made to the data;
these records are then used during database recovery. It is always advisable to
have two or more redo log members per group, kept on separate disks, so you can
recover the data after a system crash.
Which default Database roles are created when you create a Database?
CONNECT, RESOURCE and DBA are the three default roles.
What is a Checkpoint?
A checkpoint performs the following three operations:
1. Every modified (dirty) block in the buffer cache is written to the data files.
That is, it synchronizes the data blocks in the buffer cache with the datafiles
on disk. It is the DBWR that writes the modified database blocks back to the
datafiles.
2. The latest SCN is written (updated) into the datafile header.
3. The latest SCN is also written to the controlfiles.
The update of the datafile headers and the control files is done by the CKPT
process when it is enabled (LGWR performed this task when CKPT was not
enabled). As of version 8.0, CKPT is enabled by default. The date and time of
the last checkpoint can be retrieved through checkpoint_time in
v$datafile_header. The SCN of the last checkpoint can be found in v$database
as checkpoint_change#.
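The two views mentioned above can be queried directly, for example:

```sql
-- SCN of the last checkpoint:
SELECT checkpoint_change# FROM v$database;
-- date/time of the last checkpoint, per datafile:
SELECT name, checkpoint_time FROM v$datafile_header;
```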
Which Process reads data from Datafiles?
The Server process reads the blocks from datafiles to buffer cache
Which Process writes data in Datafiles?
DBWn Process is writing the dirty buffers from db cache to data files.
Can you make a Datafile auto extendible. If yes, then how?
You must be logged on as a DBA user, then issue:
For a data file:
SQL> ALTER DATABASE DATAFILE 'c:\oradata\mysid\XYZ.dbf' AUTOEXTEND ON
NEXT 10M MAXSIZE 40G;
SQL> begin
dbms_fga.add_policy ( object_schema => 'SCOTT',
object_name => 'EMP2',
policy_name => 'EMP_AUDIT',
statement_types => 'SELECT' );
end;
/
PL/SQL procedure successfully completed.
SQL>select * from dba_fga_audit_trail;
no rows selected
In HR schema:
SQL> create table bankim(
name varchar2 (10),
roll number (20));
Table created.
SQL> insert into bankim values ('bankim', 10);
1 row created.
SQL> insert into bankim values ('bankim2', 20);
1 row created.
SQL> select * from bankim;
NAME             ROLL
---------- ----------
bankim           10
bankim2          20
SQL> select name from bankim;
NAME
bankim
bankim2
In sys schema:
SQL>set head off
SQL> select sql_text from dba_fga_audit_trail;
select count(*) from emp2
select * from emp2
select * from emp3
select count(*) from bankim
select * from bankim
select name from bankim
What does DBMS_FGA package do?
The DBMS_FGA package is the central mechanism for FGA: all the APIs are
defined in it. Typically, a user other than SYS is given the responsibility of
maintaining these policies. Following the convention used earlier, we will go
with the user SECUSER, who is entrusted with much of the security setup. The
following statement grants the user SECUSER enough authority to create and
maintain the auditing facility.
Grant execute on dbms_fga to secuser;
The biggest problem with this package is that the polices are not like regular
objects with owners. While a user with execute permission on this package can
create policies, he or she can drop policies created by another user, too. This
makes it extremely important to secure this package and limit the use to only a
few users who are called to define the policies, such as SECUSER, a special user
used in examples.
What is Cost Based Optimization?
The CBO is used to devise an execution plan for a SQL statement. The CBO takes
a SQL statement and weighs different ways (plans) to execute it. It assigns a
cost to each plan and chooses the plan with the smallest cost.
Roughly, the cost is calculated as: physical I/O + logical I/O / 1000 + net I/O.
How often you should collect statistics for a table?
The CBO needs statistics in order to assess the cost of the different access
plans. These statistics include:
size of tables, size of indexes, number of rows in the tables, number of distinct
keys in an index, number of levels in a B*-tree index, average number of blocks
per key value, average number of leaf blocks per key in an index.
These statistics can be gathered with dbms_stats and the monitoring feature.
How do you collect statistics for a table, schema and Database?
Statistics are gathered using the DBMS_STATS package. The DBMS_STATS
package can gather statistics on tables and indexes, as well as individual
columns and partitions of tables. When you generate statistics for a table,
column, or index, if the data dictionary already contains statistics for the object,
then Oracle updates the existing statistics. The older statistics are saved and can
be restored later if necessary. When statistics are updated for a database object,
Oracle invalidates any currently parsed SQL statements that access the object.
The next time such a statement executes, the statement is re-parsed and the
optimizer automatically chooses a new execution plan based on the new
statistics.
Collect Statistics on Table Level
sqlplus scott/tiger
exec dbms_stats.gather_table_stats( -
  ownname          => 'SCOTT', -
  tabname          => 'EMP', -
  estimate_percent => dbms_stats.auto_sample_size, -
  method_opt       => 'for all columns size auto', -
  cascade          => true, -
  degree           => 5)
Collect Statistics on Schema Level
sqlplus scott/tiger
exec dbms_stats.gather_schema_stats( -
  ownname          => 'SCOTT', -
  options          => 'GATHER', -
  estimate_percent => dbms_stats.auto_sample_size, -
  method_opt       => 'for all columns size auto', -
  cascade          => true, -
  degree           => 5)
Collect Statistics on Other Levels
DBMS_STATS can collect optimizer statistics on the following levels, see Oracle
Manual
GATHER_DATABASE_STATS
GATHER_DICTIONARY_STATS
GATHER_FIXED_OBJECTS_STATS
GATHER_INDEX_STATS
GATHER_SCHEMA_STATS
GATHER_SYSTEM_STATS
GATHER_TABLE_STATS
Can you make collection of Statistics for tables automatic?
Yes, statistics gathering can be scheduled, but in some situations automatic
statistics gathering may not be adequate. It is not suitable for databases whose
objects are modified heavily during the day: because the automatic statistics
gathering runs during an overnight batch window, the statistics on tables which
are significantly modified during the day may become stale.
There may be two scenarios in this case:
Volatile tables that are being deleted or truncated and rebuilt during the course
of the day.
Objects which are the target of large bulk loads that add 10% or more to the
object's total size.
In such cases you may wish to gather statistics for those objects manually so
that the optimizer can choose the best execution plan. There are two ways to
gather statistics:
1. Using the DBMS_STATS package.
2. Using the ANALYZE command.
How can you use ANALYZE statement to collect statistics?
ANALYZE TABLE emp ESTIMATE STATISTICS FOR ALL COLUMNS;
ANALYZE INDEX inv_product_ix VALIDATE STRUCTURE;
ANALYZE TABLE customers VALIDATE REF UPDATE;
ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows;
ANALYZE TABLE customers VALIDATE STRUCTURE ONLINE;
To delete statistics:
ANALYZE TABLE orders DELETE STATISTICS;
To get the analyze details:
SELECT owner_name, table_name, head_rowid, analyze_timestamp FROM
chained_rows;
On which columns you should create Indexes?
The following list gives guidelines in choosing columns to index:
You should create indexes on columns that are used frequently in WHERE
clauses.
You should create indexes on columns that are used frequently to join
tables.
You should create indexes on columns that are used frequently in ORDER
BY clauses.
You should create indexes on columns that have few of the same values or
unique values in the table.
You should not create indexes on small tables (tables that use only a few
blocks) because a full table scan may be faster than an indexed query.
If possible, choose a primary key that orders the rows in the most
appropriate order.
If only one column of the concatenated index is used frequently in
WHERE clauses, place that column first in the CREATE INDEX statement.
If more than one column in a concatenated index is used frequently in
WHERE clauses, place the most selective column first in the CREATE
INDEX statement.
Can you build an Index Online?
Yes, we can build an index online. This allows DML operations on the base
table during index creation. You can use the statements:
CREATE INDEX ... ONLINE and DROP INDEX ... ONLINE.
ALTER INDEX ... REBUILD ONLINE is used to rebuild the index online.
A table lock is required on the index's base table at the start of the CREATE or
REBUILD process to register the DDL, and another lock is required at the end of
the process to merge the accumulated changes into the final index structure.
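A short sketch of the statements named above, against a hypothetical emp table (the index and column names are assumptions):

```sql
CREATE INDEX emp_ename_ix ON emp (ename) ONLINE;  -- DML on emp remains possible
ALTER INDEX emp_ename_ix REBUILD ONLINE;          -- rebuild without blocking DML
```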
A table is created with the following setting
storage (initial 200k
next 200k
minextents 2
maxextents 100
pctincrease 40)
What will be size of 4th extent?
Percent Increase allows the segment to grow at an increasing rate.
The first two extents will be of a size determined by the Initial and Next
parameter (200k)
The third extent will be 1 + PCTINCREASE/100 times the second extent
(1.4*200=280k).
And the 4th extent will be 1 + PCTINCREASE/100 times the third extent
(1.4*280 = 392k), and so on...
Can you Redefine a table Online?
Yes. We can perform online table redefinition with the Enterprise Manager
Reorganize Objects wizard or with the DBMS_REDEFINITION package.
It provides a mechanism to make table structure modifications without
significantly affecting the availability of the table. While a table is being
redefined online, it remains accessible to both queries and DML.
Purposes for Table Redefinition:
Adding, removing, or renaming columns of a table
Converting a non-partitioned table to a partitioned table and vice versa
Switching a heap table to an index-organized table and vice versa
Modifying storage parameters
Adding or removing parallel support
Reorganizing (defragmenting) a table
Transforming data in a table
Restrictions for Table Redefinition:
One cannot redefine Materialized Views (MViews) and tables with MViews or
MView Logs defined on them.
One cannot redefine Temporary and Clustered Tables
One cannot redefine tables with BFILE, LONG or LONG RAW columns
One cannot redefine tables belonging to SYS or SYSTEM
One cannot redefine Object tables
Table redefinition cannot be done in NOLOGGING mode (watch out for heavy
archiving)
Cannot be used to add or remove rows from a table
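The DBMS_REDEFINITION flow can be sketched as below; it assumes SCOTT.EMP is being redefined through an interim table SCOTT.EMP_INT that has already been created with the desired new structure (both names are assumptions):

```sql
-- 1. Verify the table can be redefined online:
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP');
-- 2. Start: link EMP to the interim table and copy the data:
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INT');
-- 3. Finish: brief lock while the two tables swap definitions:
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INT');
```

In practice you would also copy dependent objects (COPY_TABLE_DEPENDENTS) and optionally resynchronize (SYNC_INTERIM_TABLE) before the finish step.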
Can you assign Priority to users?
Yes, we can do this through the Resource Manager. The Database Resource
Manager gives database administrators more control over resource management.
Method1:
DELETE FROM SHAAN A WHERE ROWID >
(SELECT min(rowid) FROM SHAAN B
WHERE A.EMPLOYEE_ID = B.EMPLOYEE_ID);
Method2:
delete from SHAAN t1
where exists (select 'x' from SHAAN t2
where t2.EMPLOYEE_ID = t1.EMPLOYEE_ID
and t2.rowid > t1.rowid);
Method3:
DELETE SHAAN
WHERE rowid IN
( SELECT LEAD(rowid) OVER
(PARTITION BY EMPLOYEE_ID ORDER BY NULL)
FROM SHAAN );
Method4:
delete from SHAAN where rowid not in
( select min(rowid)
from SHAAN group by EMPLOYEE_ID);
Method5:
SQL> create table table_name2 as select distinct * from table_name1;
SQL> drop table table_name1;
SQL> rename table_name2 to table_name1;
What is Automatic Management of Segment Space setting?
Automatic Segment Space Management (ASSM), introduced in Oracle9i, is an
easier way of managing space in a segment using bitmaps. It eliminates the need
for the DBA to set the parameters pctused, freelists, and freelist groups.
ASSM can be specified only with the locally managed tablespaces (LMT). The
CREATE TABLESPACE statement has a new clause SEGMENT SPACE
MANAGEMENT. Oracle uses bitmaps to manage the free space. A bitmap, in this
case, is a map that describes the status of each data block within a segment with
respect to the amount of space in the block available for inserting rows. As more
or less space becomes available in a data block, its new state is reflected in the
bitmap.
CREATE TABLESPACE myts DATAFILE '/oradata/mysid/myts01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M
SEGMENT SPACE MANAGEMENT AUTO;
What is COMPRESS and CONSISTENT setting in EXPORT utility?
If COMPRESS=Y, the INITIAL storage parameter is set to the total size of all
extents allocated for the object. The change takes effect only when the object is
imported.
Setting CONSISTENT=Y exports all tables and references in a consistent state.
This slows the export, as rollback space is used. If CONSISTENT=N and a record
is modified during the export, the data will become inconsistent.
What is the difference between Direct Path and Convention Path loading?
By default, SQL*Loader uses the conventional path to load data. This method
competes equally with all other Oracle processes for buffer resources, which
can slow the load. A direct path load eliminates much of the Oracle database
overhead by formatting Oracle data blocks and writing the data blocks directly
to the database files. If load speed is most important to you, use a direct
path load because it is faster.
What is an Index Organized Table?
An index-organized table (IOT) is a type of table that stores data in
a B*Tree index structure. Normal relational tables, called heap-organized tables,
store rows in any order (unsorted).
CREATE TABLE my_iot (id INTEGER PRIMARY KEY, value VARCHAR2 (50))
ORGANIZATION INDEX;
What are a Global Index and Local Index?
When you create a partitioned table, you should create an index on the table. The
index may be partitioned according to the same range values that were used to
partition the table. Local keyword in the index partition tells oracle to create a
separate index for each partition of the table. The Global clause in create index
command allows you to create a non-partitioned index or to specify ranges for
the index values that are different from the ranges for the table partitions. Local
indexes may be easier to manage than global indexes; however, global indexes
may perform uniqueness checks faster than local (partitioned) indexes perform
them.
What is difference between Multithreaded/Shared Server and Dedicated
Server?
Oracle Database creates server processes to handle the requests of user
processes connected to an instance.
Your database is always enabled to allow dedicated server processes, but you
must specifically configure and enable shared server by setting one or more
initialization parameters.
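As a sketch, shared server can be enabled with initialization parameters like these (the values are illustrative assumptions):

```sql
ALTER SYSTEM SET shared_servers = 5;
ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)';
```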
Can you import objects from Oracle ver. 7.3 to 9i?
Yes. Export dump files are upward compatible: a dump created with the 7.3
export utility can be imported into a 9i database. The reverse, importing a
higher-version dump into a lower-version database, is not supported.
How do you move tables from one tablespace to another tablespace?
Method 1:
Export the table, drop the table, create the table definition in the new tablespace,
and then import the data (imp ignore=y).
Method 2:
Create a new table in the new tablespace with the "CREATE TABLE x AS SELECT
* from y" command:
CREATE TABLE temp_name TABLESPACE new_tablespace AS SELECT * FROM
real_table;
Then drop the original table and rename the temporary table as the original:
DROP TABLE real_table;
RENAME temp_name TO real_table;
Note: After method #1 or #2 is done, be sure to recompile any procedures that
may have been invalidated by dropping the table. Method #1 is preferred, but #2
is easier if there are no indexes, constraints, or triggers; if there are, you
must manually recreate them.
Method 3:
If you are using Oracle 8i or above then simply use:
SQL>Alter table table_name move tablespace tablespace_name;
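One practical point worth adding: moving a table changes its rowids, so its indexes are left UNUSABLE and must be rebuilt. A sketch with assumed object names:

```sql
ALTER TABLE emp MOVE TABLESPACE users;
ALTER INDEX emp_pk REBUILD;   -- indexes are invalidated by the move
```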
How do see how much space is used and free in a tablespace?
SELECT * FROM SM$TS_FREE;
SELECT TABLESPACE_NAME, SUM(BYTES) FROM DBA_FREE_SPACE GROUP BY
TABLESPACE_NAME;
Can view be the based on other view?
Yes, a view can be created from another view by basing its defining SELECT
query on that view.
What happens if you do not specify the Dictionary option with the START
option in LogMiner?
It is recommended that you specify a dictionary option. If you do not,
LogMiner cannot translate internal object identifiers and datatypes to object
names and external data formats. Therefore, it would return internal object IDs
and present data as hex bytes. Additionally, the MINE_VALUE and
COLUMN_PRESENT functions cannot be used without a dictionary.
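A hedged sketch of starting LogMiner with the online catalog as its dictionary; the archived log file path is an assumption:

```sql
EXEC DBMS_LOGMNR.ADD_LOGFILE('/arch/arch_100.arc', DBMS_LOGMNR.NEW);
EXEC DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- mined redo is exposed through a view:
SELECT sql_redo FROM v$logmnr_contents WHERE seg_name = 'EMP';
EXEC DBMS_LOGMNR.END_LOGMNR;
```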
What is the Benefit and draw back of Continuous Mining?
The continuous mining option is useful if you are mining in the same instance
that is generating the redo logs. When you plan to use the continuous mining
option, you only need to specify one archived redo log before starting LogMiner.
Then, when you start LogMiner specify the
DBMS_LOGMNR.CONTINUOUS_MINE option, which directs LogMiner to automatically
add and mine subsequent archived redo logs and the online redo logs as well.
The drawback: continuous mining is not available in Real Application Clusters.
What is LogMiner and its Benefit?
LogMiner is a log analysis and recovery utility. You can use it to recover data
from the Oracle redo log and archived log files. The Oracle LogMiner utility
enables you to query redo logs through a SQL interface. Redo logs contain
information about the history of activity on a database.
Benefit of LogMiner?
1. Pinpointing when a logical corruption of a database occurred; for example,
when a row is accidentally deleted, LogMiner helps recover the database to an
exact time-based or change-based point.
2. Performing table-specific undo operations to return a table to its original
state. LogMiner reconstructs the SQL statements in reverse order from that in
which they were executed.
3. It helps in performance tuning and capacity planning. You can determine
which table gets the most update and insert. That information provides a
historical perspective on disk access statistics, which can be used for tuning
purpose.
4. Performing post auditing; LogMiner is used to track any DML and DDL
performed on database in the order they were executed.
What is Oracle DataGuard?
32
Oracle Data Guard is a tool that provides data protection and ensures disaster
recovery for enterprise data. It provides a comprehensive set of services that
create, maintain, manage, and monitor one or more standby databases to enable
production Oracle databases to survive disasters and data corruption. Data
Guard maintains these standby databases as transactionally consistent copies of
the production database. Then, if the production database fails, Data Guard can
switch any standby database to the production role, minimizing the downtime
associated with the outage. Data Guard can be used with traditional backup,
restoration, and cluster techniques to provide a high level of data protection
and data availability.
What is a Standby Database?
A standby database is a transactionally consistent copy of the primary database.
Using a backup copy of the primary database, you can create up to 9 standby
databases and incorporate them in a Data Guard configuration. Once created,
Data Guard automatically maintains each standby database by transmitting redo
data from the primary database and then applying the redo to the standby
database.
Similar to a primary database, a standby database can be either a single-instance
Oracle database or an Oracle Real Application Clusters database. A standby
database can be either a physical standby database or a logical standby
database:
Difference between Physical standby and Logical standby databases
A physical standby database provides a physically identical copy of the primary
database on a block-for-block basis. The database schema, including indexes, is
the same. A physical standby database is kept synchronized with the primary
database through Redo Apply, which recovers the redo data received from the
primary database and applies it to the physical standby database.
A logical standby database contains the same logical information as the
production database, although the physical organization and structure of the
data can be different. The logical standby database is kept synchronized with
the primary database through SQL Apply, which transforms the data in the redo
received from the primary database into SQL statements and then executes the
SQL statements on the standby database.
If you are going to setup standby database what will be your Choice Logical
or Physical?
We need to keep the physical standby database in recovery mode in order to
apply the received archive logs from the primary database. We can open
physical stand by database to read only and make it available to the
applications users (Only select is allowed during this period). Once the database
is opened in Read only mode then we can not apply redo logs received from
primary database.
We do not see such issues with logical standby database. We can open up the
database in normal mode and make it available to the users. At the same time, we
can apply archived logs received from primary database.
If the primary database needs to support both a fairly large OLTP user
community and a fairly large reporting group, it is better to use a logical
standby database rather than a physical standby, so the reporting load can run
against an open standby.
What are the requirements needed before preparing standby database?
The OS and architecture of the primary and standby database servers should be the same.
First we start up RMAN with a connection to the catalog and the target, making a
note of the DBID in the banner:
C:\>rman catalog=rman/rman@shaan target=HRMS/password@orcl3
connected to target database: W2K1 (DBID=691421794)
connected to recovery catalog database
Note the DBID from here. Next we list and delete any backupset recorded in the
repository:
RMAN> LIST BACKUP SUMMARY;
RMAN> DELETE BACKUP DEVICE TYPE SBT;
RMAN> DELETE BACKUP DEVICE TYPE DISK;
Next we connect to the RMAN catalog owner using SQL*Plus and issue the
following statement:
SQL> CONNECT rman/rman@shaan
SQL> SELECT db_key, db_id FROM db
WHERE db_id = 691421794;
DB_KEY      DB_ID
------ ----------
     1  691421794
The resulting key and id can then be used to unregister the database:
SQL> EXECUTE dbms_rcvcat.unregisterdatabase(1, 691421794);
PL/SQL procedure successfully completed.
If you kill a long-running session, how do you estimate how much rollback
remains?
It depends on how you killed the process. If you issued ALTER SYSTEM KILL
SESSION, you should be able to look at the USED_UBLK column in v$transaction
to get an estimate of the rollback being done. If you killed the server process
at the OS level and PMON is recovering the transaction, you can look at the
V$FAST_START_TRANSACTIONS view to get the estimate.
How do you see how many instances are running?
On Unix/Linux, each running instance has its own set of background processes,
so 'ps -ef | grep smon' shows one line per instance. Within a database, the
V$INSTANCE view shows the current instance.
object (or truncating them), and the resulting free space cannot be
used by any other object in that tablespace. This is a direct result of
using a non-zero pctincrease and having many oddly sized extents
(every extent is a unique size and shape). In Oracle 8i and above we
are all using locally managed tablespaces, which use either uniform
sizing or the automatic allocation scheme. In either case it is almost
impossible to get into a situation where you have unusable free space.
To see if you suffer from fragmentation you can query
DBA_FREE_SPACE (best to do an alter tablespace coalesce first to
ensure all contiguous free space is merged into one big free region).
You would look for any free space that is smaller than the smallest
next extent size for any object in that tablespace. Check with the
query below:
select * from dba_free_space
where tablespace_name = 'T'
and bytes <= (select min(next_extent)
              from dba_segments
              where tablespace_name = 'T')
order by block_id;
Is there a way we can flush out a known data set from the
database buffer cache?
In real life the cache would never be empty. It is true that 10g
introduced ALTER SYSTEM FLUSH BUFFER_CACHE, but it is not really
worthwhile for benchmarking: an empty buffer cache is an artificial
state that does not resemble what the database is actually doing.
What would be the best approach to benchmark the
response time for a particular query?
run query q1 over and over (with many different inputs)
run query q2 over and over (with many different inputs)
discard first couple of observations, and last couple
use the observations in the middle
What is difference between Char and Varchar2 and which is
better approach?
A CHAR datatype and a VARCHAR2 datatype are stored identically
(e.g. the word 'WORD' stored in a CHAR(4) and a VARCHAR2(4)
consumes exactly the same amount of space on disk; both have
leading byte counts).
The difference between a CHAR and a VARCHAR2 is that a CHAR(n)
will ALWAYS be N bytes long: it is blank-padded upon insert to
ensure this. A VARCHAR2(n), on the other hand, will be 1 to N bytes
long and will NOT be blank-padded. Using a CHAR on a varying-width
field can be a pain due to the search semantics of CHAR.
Consider the following examples:
SQL> create table t ( x char(10) );
Table created.
SQL> insert into t values ( 'Hello' );
1 row created.
SQL> select * from t where x = 'Hello';
X
----------
Hello
SQL> variable y varchar2(25)
SQL> exec :y := 'Hello'
PL/SQL procedure successfully completed.
SQL> select * from t where x = :y;
no rows selected
SQL> select * from t where x = rpad(:y,10);
X
----------
Hello
Notice how when doing the search with a VARCHAR2 variable (almost
every tool in the world uses this type), we have to RPAD() it to get a
hit. If the field is in fact ALWAYS 10 bytes long, using a CHAR will
not hurt; HOWEVER, it will not help either.
Rman always shows date in DD-MON-YY format. How to set
date format to M/DD/YYYY HH24:MI:SS in rman ?
You can just set the NLS_DATE_FORMAT before going into RMAN:
In Rman list backup how do i get time column that shows me
date and time including seconds as generally it is showing
only date.
Before connecting the rman target set the date format on command
prompt:
export NLS_DATE_FORMAT='dd-mon-yyyy hh24:mi:ss'  -- Linux
set NLS_DATE_FORMAT=dd-mon-yyyy hh24:mi:ss  -- Windows
then try to connect to the RMAN target:
rman target sys/oracle@orcl3 catalog rman/rman@shaan
rman> list backupset 10453
Why not use O/S backups instead of RMAN?
There is nothing wrong with doing just OS backups. OS backups are
just as valid as RMAN backups. RMAN is a great tool but it is not the
only way to do it. Many people still prefer using a scripting tool of
their choice such as Perl or ksh to do this.
RMAN is good if you have lots of databases. The catalog it uses
remembers lots of details for you. You don't have as much to think
about.
RMAN is good if you do not have good "paperwork" skills in
place. Using OS backups, it is more or less up to you to remember
where they are, what they are called and so on. You have to do all
of the bookkeeping RMAN would do.
RMAN provides incremental backups, something you cannot get
without RMAN.
RMAN provides tablespace point in time recovery. You can do this
without RMAN but you have to do it by yourself and it can be rather
convoluted.
SELECT sysdate,
       sysdate + (SUBSTR(TZ_OFFSET(DBTIMEZONE),1,1) || '1')
               * TO_DSINTERVAL('0 ' || SUBSTR(TZ_OFFSET(DBTIMEZONE),2,5) || ':00')
  FROM dual;
service is down ICM through FNDSM and other processes will try to
start it even on remote server) With GSM all services are centrally
managed via this Framework.
How can you license a product after installation?
You can use the AD utility adlicmgr to license products in Oracle
Applications.
In a situation when you want to know which was the last
query fired by the user. How to check?
SELECT s.username||'('||s.sid||')-'||s.osuser uname,
       s.sid||'/'||s.serial# sid, s.status "Status", p.spid, t.sql_text sqltext
  FROM v$sqltext_with_newlines t, v$session s, v$process p
 WHERE t.address = s.sql_address
   AND p.addr = s.paddr(+)
   AND t.hash_value = s.sql_hash_value
 ORDER BY s.sid, t.piece;
Can one copy Oracle software from one machine to another?
Yes, one can copy or FTP the Oracle Software between similar
machines. Look at the following example:
# use tar to copy files and directories with permissions and
ownership preserved
tar cf - $ORACLE_HOME | rsh <new_host> "cd $ORACLE_HOME; tar xf -"
To copy the Oracle software to a different directory on the same
server:
cd /new/oracle/dir/
(cd $ORACLE_HOME; tar cf - .) | tar xvf -
NOTE: Remember to relink the Intelligent Agent on the new
machine to prevent messages like "Encryption key supplied is not
the one used to encrypt file":
cd /new/oracle/dir/
cd network/lib
make -f ins_agent.mk install
A single transaction can have multiple deletes and a single
SCN number identifying all of these deletes. What if I want
to flash back only a single individual delete?
You would flash back to the SYSTEM SCN (not your transaction's SCN) at
that point in time. The SYSTEM has an SCN and your transaction has an
SCN; with flashback you care about the SYSTEM SCN, not your
transaction's SCN.
Are flash back queries useful for the developer or the DBA
both? How can I as a developer and DBA get to know the
SCN number of a transaction?
Oracle Flashback is a tool useful for both the DBA and the
developer. If data was deleted accidentally, either the DBA or the
developer can use flashback to recover it and fix the problem. As a
developer you can use
dbms_flashback.get_system_change_number to return the
current system SCN, and as a DBA you can use the LogMiner utility to
look back in time at various events and find SCNs as well.
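Fetching the current SCN can be sketched like this (requires EXECUTE on the supplied DBMS_FLASHBACK package):

```sql
-- Return the current system change number
SELECT dbms_flashback.get_system_change_number FROM dual;

-- Or capture it in PL/SQL:
DECLARE
  v_scn NUMBER;
BEGIN
  v_scn := dbms_flashback.get_system_change_number;
  dbms_output.put_line('Current SCN: ' || v_scn);
END;
/
```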
I have a new server. What is the best way I can have the
same oracle setup that is there on a prodn db? Either we
need to restore the file systems and relink oracle without
doing any installation?
My suggestion is to install the same software on the new server and
then apply the restore and recover procedure against the same
environment and directory structure.
As for relinking Oracle without doing any installation, see the
administration guide for your OS for details on things like this.
There is any difference between Oracle TCL and DCL
command?
DCL stands for Data Control Language. These commands, such as
GRANT and REVOKE, are used to configure and control access to
database objects, whereas TCL stands for Transaction Control
Language and is used to manage the changes made by DML
statements. It allows statements to be grouped together into
logical transactions:
COMMIT - save work done
SAVEPOINT - identify a point in a transaction to which you can later
roll back
ROLLBACK - restore the database to its state as of the last COMMIT
SET TRANSACTION - Change transaction options like isolation level
and what rollback segment to use
What happens when the lock is disabled on a table?
When you disable the table lock, you are no longer able to
perform DDL operations on that table, but you can still perform
DML operations easily.
For Example:
Create Table s1 (Eno number(2), ename varchar2(15), salary
number(5,2));
insert into s1 values (1, 'shahid', 400);
insert into s1 values (1, 'javed', 200);
insert into s1 values (2, 'karim', 100);
--disable lock on table
Alter table s1 disable table lock;
-- cannot drop/truncate table as table lock is disable
drop table s1;
truncate table s1;
-- you cannot add/modify/drop a column
Alter table s1 add comm number(5,2);
Alter table s1 modify salary number (10,4);
Alter table s1 drop column salary;
-- But still you are able to perform DML
update s1 set salary= 800 where eno=2;
select * from s1;
delete from s1 where eno=2;
insert into s1 values (2, 'mohan', 250);
Stored Procedure:
It does not return rows to the user.
It has to use cursors to fetch multiple rows.
It uses IN/OUT parameters to exchange values with the user.
It is stored in DATABASE or USER PERM space.
A stored procedure provides both input and output capabilities.
Macros:
It returns a set of rows to the user.
It is stored in DBC PERM space.
A macro allows only input values.
If port 1521 is the default port for the TNS listener and I have a
database server on port 1527, how can I make the clients
connect on this port? Or can I have one listener service
listen for 2 servers?
If you are using the "host naming" convention (a method that
does not require the client to have a tnsnames.ora file at all; you
must be using TCP and have only one default database per
host, and the client only needs to know the hostname of the server to
connect), then yes, 1521 is the default and only port.
If you are using tnsnames.ora, the Oracle nameserver, or any other
method to connect then no, 1521 is not a default port. In this case,
1521 is simply the port used by "convention". The clients would,
typically in their tnsnames.ora, connect to the listener on some
specified port number. 1521 is the convention used by many
people; it is neither mandatory nor necessary.
What is an IPC protocol and where and how it is used? I
have experience only in TCP/IP protocol. Is there any
advantage in using IPC over TCP?
IPC is inter-process communication: you have messages, pipes, socket
pairs and so on; it is a lot like just using sockets with TCP/IP. IPC is
generally limited to a single machine, not used over a network. IPC used
to be a tad faster than TCP but recent tests have shown this to be less
and less true.
If somebody made an alteration in your absence, how would
you notice? Or: how do you find the last DDL fired against a
particular schema and a particular table?
To find the last DDL performed, check the LAST_DDL_TIME column of
the ALL_OBJECTS, DBA_OBJECTS or USER_OBJECTS views, because each
time an object changes, its LAST_DDL_TIME is updated in these views.
Select CREATED, TIMESTAMP, last_ddl_time from all_objects
WHERE OWNER='HRMS' AND OBJECT_TYPE='TABLE' AND
OBJECT_NAME='PAYROLL_MAIN_FILE';
In the above query HRMS is the schema name and
payroll_main_file is the table name.
How to find tables that have a specific column name?
SELECT owner, table_name, column_name
FROM dba_tab_columns
WHERE column_name = UPPER('&column_name');
If the import has more than one table, this statement will only show information
about the current table being imported.
Method 2:
Use the FEEDBACK=n import parameter. This command will tell IMP to display a
dot for every N rows imported.
How will we increase performance on a particular table? I am inserting
2 GB of data into a table and it takes a long time. Is there any way
to increase insert performance on a particular table?
An index on a huge table is not the only way to improve insert
performance. Partition the table; that makes insertion faster and
also makes archived data easier to manage. Alternatively, first disable
constraints as well as indexes, perform the insertion, then enable them again.
You can use high-speed solid-state disk (RAM-SAN) to make Oracle inserts run
up to 300x faster than platter disk.
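The disable/insert/enable approach mentioned above could be sketched like this (the table, index, constraint and staging-table names are all hypothetical):

```sql
-- Disable the constraint and mark the index unusable before the bulk load
ALTER TABLE big_tab DISABLE CONSTRAINT big_tab_pk;
ALTER INDEX big_tab_ix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- Direct-path insert of the 2 GB of data
INSERT /*+ APPEND */ INTO big_tab SELECT * FROM staging_tab;
COMMIT;

-- Rebuild the index and re-enable the constraint afterwards
ALTER INDEX big_tab_ix REBUILD;
ALTER TABLE big_tab ENABLE CONSTRAINT big_tab_pk;
```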
How to reduce alert log Size?
If you move or delete your alert log file, it is recreated automatically at the
next startup; alternatively you can put a script at OS level to archive the old
file and start a new one. So the best way to reduce the size of the log is simply
to move your alert.log somewhere else. Oracle will recreate it at the next startup.
How you will know the instance is Primary or Standby?
By querying v$database one can tell if the host is primary or standby
On the primary database:
SQL> select database_role from v$database;
DATABASE_ROLE
----------------
PRIMARY
Or check the value of CONTROLFILE_TYPE in V$DATABASE: it is CURRENT for
the primary and STANDBY for the standby.
SQL> SELECT controlfile_type FROM V$database;
CONTROL
-------
CURRENT
On the Standby database:
SQL> select database_role from v$database;
DATABASE_ROLE
----------------
PHYSICAL STANDBY
SQL> SELECT controlfile_type FROM V$database;
CONTROL
-------
STANDBY
Note: You may need to connect as SYS if the instance is in the mount state.
How would you determine what sessions are connected and what
resources they are waiting for?
Use of V$SESSION and V$SESSION_WAIT
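A minimal join of the two views might look like this (a sketch; in 10g and later the wait columns are also available directly in V$SESSION):

```sql
-- Who is connected and what each session is currently waiting on
SELECT s.sid, s.username, s.status,
       sw.event, sw.state, sw.seconds_in_wait
  FROM v$session s, v$session_wait sw
 WHERE s.sid = sw.sid
 ORDER BY sw.seconds_in_wait DESC;
```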
Give two methods you could use to determine what DDL changes have been
made.
You could use Logminer or Streams
How would you determine who has added a row to a table?
Turn on fine grain auditing for the table.
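Enabling fine-grained auditing could be sketched with the supplied DBMS_FGA package (the schema, table and policy names here are made up):

```sql
-- Audit INSERT statements against a hypothetical HR.EMP table
BEGIN
  dbms_fga.add_policy(
    object_schema   => 'HR',
    object_name     => 'EMP',
    policy_name     => 'AUDIT_EMP_INS',
    statement_types => 'INSERT');
END;
/
-- Audited statements then show up in DBA_FGA_AUDIT_TRAIL,
-- including who ran them and the SQL text.
```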
I would create a text-based backup control file, stipulating where on disk all the
data files were, and then issue the RECOVER command with the USING BACKUP
CONTROLFILE clause.
Explain the difference between a data block, an extent and a segment.
A data block is the smallest unit of logical storage for a database object. As
objects grow they take chunks of additional storage that are composed of
contiguous data blocks. These groupings of contiguous data blocks are called
extents. All the extents that an object takes when grouped together are
considered the segment of the database object.
A table is classified as a parent table and you want to drop and re-create it.
How would you do this without affecting the child tables?
Disable the foreign key constraint to the parent, drop the table, re-create the
table, enable the foreign key constraint.
What column differentiates the V$ views to the GV$ views and how?
The INST_ID column which indicates the instance in a RAC environment the
information came from.
How would you go about increasing the buffer cache hit ratio?
Use the buffer cache advisory over a given workload and then query the
v$db_cache_advice table. If a change was necessary then I would use the alter
system set db_cache_size command.
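The advisory data can be queried like this (a sketch against the V$DB_CACHE_ADVICE view):

```sql
-- How would physical reads change if the default cache were resized?
SELECT size_for_estimate, size_factor, estd_physical_read_factor
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND advice_status = 'ON'
 ORDER BY size_for_estimate;
```

A size where estd_physical_read_factor drops noticeably below 1 suggests a worthwhile increase to db_cache_size.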
How would you determine the time zone under which a database was
operating?
select DBTIMEZONE from dual;
Explain the use of setting GLOBAL_NAMES equal to TRUE.
Setting GLOBAL_NAMES indicates how you might connect to a database. This
variable is either TRUE or FALSE and if it is set to TRUE it enforces database
links to have the same name as the remote database to which they are linking.
What background process refreshes materialized views?
The Job Queue Processes.
When a user process fails, what background process cleans up after it?
PMON
What are the roles and user accounts created automatically with the
database?
DBA - role Contains all database system privileges.
SYS user account - The DBA role is assigned to this account. All of the base
tables and views for the database's data dictionary are stored in this schema
and are manipulated only by Oracle.
SYSTEM user account - It has all the system privileges for the database and
additional tables and views that display administrative information and internal
tables and views used by oracle tools are created using this username.
What are the minimum parameters that should exist in the parameter file
(init.ora)?
DB_NAME - Must be set to a text string of no more than 8 characters; it is
recorded in the datafiles, redo log files and control files during database
creation.
DB_DOMAIN - A string that specifies the network domain where the database
is created. The global database name is formed from these two parameters
(DB_NAME & DB_DOMAIN).
CONTROL_FILES - List of control file names for the database. If no name is
mentioned then a default name is used.
backup copy of the datafile will be resolved by replacing them with the full image
of the block from the redologs.
How do you increase the performance of the % (LIKE) operator?
A % placed after the search word (LIKE 'ss%') enables the use of an index if one
exists on the searched column. This performs better than the other two ways
of using %: before the search word (LIKE '%ss') and both before and after
the search word (LIKE '%ss%').
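For example, with a hypothetical index on an ename column (table and index names are made up):

```sql
CREATE INDEX emp_ename_ix ON emp (ename);     -- hypothetical table/index

SELECT * FROM emp WHERE ename LIKE 'SS%';     -- can use the index (range scan)
SELECT * FROM emp WHERE ename LIKE '%SS';     -- leading % prevents a normal index range scan
SELECT * FROM emp WHERE ename LIKE '%SS%';    -- likewise, typically a full scan
```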
What is cache Fusion Technology?
Cache fusion treats multiple buffer caches as one joint global cache. This solves
the issues like data consistency internally, without any impact on the application
code or design. Cache fusion technology eases the process of a very high number
of concurrent users and SQL operations without compromising data consistency.
Do you have an idea about the reports server?
Reports server is also a component of the middle tier and is hosted in the same
node of the concurrent processing server. Reports server is used to produce
business intelligence reports.
What is the importance of replication and its use in Oracle?
Replication is the process of copying and maintaining database objects in
multiple databases that make up a distributed database system. Changes applied
at one site are captured and stored locally before being forwarded and applied
at each of the remote locations. Replication provides users with fast, local access to
shared data, and protects availability of applications because alternate data
access options exist. Even if one site becomes unavailable, users can continue to
query or even update the remaining locations.
In simple replication, you create a snapshot, a table corresponding to the query's
column list. When the snapshot is refreshed, that underlying table is populated
with the results of the query. As data changes in a table in the master database,
the snapshot is refreshed as scheduled and moved to the replicated database.
Advanced replication allows the simultaneous transfer of data between two or
more master sites. There are considerations to keep in mind when using multi-master replication. The important ones are sequences (which cannot be
replicated), triggers (which can turn recursive if you're not careful) and conflict
resolution.
What is the basic difference between Cloning and Standby databases?
The clone database is a copy of the database which can be opened in read write
mode. It is treated as a separate copy of the database that is functionally
completely separate. The standby database is a copy of the production database
used for disaster protection. In order to update the standby database; archived
redo logs from the production database can be used. If the primary database is
destroyed or its data becomes corrupted, one can perform a failover to the
standby database, in which case the standby database becomes the new primary
database.
Why do we use a materialized view instead of a table?
Materialized views are basically used to increase query performance since they
contain the results of a query. They should be used for reporting instead of a
table for faster execution.
Which BG process refreshes the materialized view?
Job Queue Process
What is the importance of transportable Tablespace in oracle?
If it is set at session level, trace file will be generated only for specified session.
The location of user process trace file is specified in the USER_DUMP_DEST
parameter.
How can you use automatic PGA memory management with oracle 9i or
above?
Set the WORK_AREA_SIZE_POLICY parameter to AUTO and set
PGA_AGGREGATE_TARGET
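For example (the target value below is purely illustrative):

```sql
-- Enable automatic PGA memory management
ALTER SYSTEM SET workarea_size_policy = AUTO;
ALTER SYSTEM SET pga_aggregate_target = 512M;
```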
When a user comes to you and asks that a particular SQL query is taking
more time. How will you solve this?
If you find that a particular query is taking a long time to execute, take a SQL
trace with an explain plan; it shows how the SQL query is executed by Oracle,
and depending on the report you can tune the database.
Then determine the table size and check what percentage of the table's data the
user's query actually needs.
For example: a table has 10000 records but you want to fetch only 5 rows, yet
for that query Oracle does a full table scan. A full table scan just for 5 rows is
not good, so create an index on that particular column.
If the user's query needs more than 80% of the data in the table, then creating
an index will again give poor performance, because Oracle will get contention on
the db buffer cache: the index blocks must be read first, and then almost all the
blocks of the table will be pulled in anyway. This increases the I/O, and other
users' requests may also slow down because existing data in the cache is flushed
out and reloaded.
Additionally we need to check system-level performance: is there any problem
with DBWn, is DBWn slow in writing modified data from the buffer to the
datafiles, and is the user's server process waiting for space in the buffer cache?
Check the alert log file too.
Check whether the user's query needs a join or sorting.
Check whether there is enough space in the temporary tablespace.
If the user still faces the issue, we need to drill down and check for problems at
the table block level, such as whether the table needs defragmenting because
the high-water mark has been reached.
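Tracing the problem session could be sketched like this (the SID and serial# values are made up; DBMS_MONITOR is available from 10g):

```sql
-- Trace your own session:
ALTER SESSION SET sql_trace = TRUE;

-- Or trace another session identified from v$session:
BEGIN
  dbms_monitor.session_trace_enable(
    session_id => 123, serial_num => 4567,
    waits => TRUE, binds => FALSE);
END;
/
-- The trace file appears in USER_DUMP_DEST and can be formatted with tkprof.
```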
What is the difference between the sqlnet.ora, listener.ora and tnsnames.ora
network files?
sqlnet.ora: The normal location for this file is D:\oracle\ora92\network\admin.
The sqlnet.ora file is the profile configuration file, and it resides on the client
machines and the database server. It is an optional text file that
contains basic configuration details used by SQL*Net: network
configuration details such as the domain name, what path to take in resolving
the name of an instance, the order of naming methods, authentication services, etc.
listener.ora: The normal location for this file is
D:\oracle\ora92\network\admin. This file resides on the database server and
configures the listener: the protocol addresses it listens on and the services
(databases) it serves.
tnsnames.ora: The normal location for this file is
D:\oracle\ora92\network\admin. This file is typically a client-side file but can
be located on both client and server; the client uses it to resolve a connect
alias into the connection details of the desired database. If you make
configuration changes on the server, ensure you can connect to the
database through the listener while logged on to the server. If you make
configuration changes on the client, ensure you can connect from your client
workstation to the database through the listener running on the server.
What is the address of official oracle support?
Metalink.oracle.com or support.oracle.com
Is the password in oracle case sensitive?
In Oracle 10g and earlier versions, no; since 11g, yes.
What is the difference between the IS NULL and IS NOT NULL operators?
The IS NULL and IS NOT NULL operators are used to find the NULL and not NULL
values respectively. The IS NULL operator returns TRUE, when the value is
NULL; and FALSE, when the value is not NULL. The IS NOT NULL operator
returns TRUE, when the value is not NULL; and FALSE, when the value is NULL.
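For example, against the standard EMP demo table (where the COMM column is nullable):

```sql
SELECT ename FROM emp WHERE comm IS NULL;      -- rows with no commission
SELECT ename FROM emp WHERE comm IS NOT NULL;  -- rows with a commission
```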
If a redo group contains redo records for a dirty buffer, that redo group is in
the ACTIVE state. As we know, the log files record changes made to data blocks;
those blocks are modified in the buffer cache (dirty blocks) and must eventually
be written to disk (RAM to permanent media).
When a redo log group contains no redo records belonging to a dirty buffer it
is in the INACTIVE state. Inactive redo logs can be overwritten.
There is one more state, UNUSED: when you create a new redo log group its log
file is initially empty, and at that point it is unused. Later it can enter any of the
states mentioned above.
What is the difference between the Oracle SID and the Oracle service name?
The Oracle SID is the unique name that identifies your instance/database,
whereas the service name is the TNS alias, which can be the same as or
different from the SID.
How to find the session ID for remote users?
-- To return the session id of the current session:
SELECT DISTINCT sid FROM v$mystat;
-- To return your session id in a remote environment (over a database link):
SELECT sid FROM v$mystat@remote_db WHERE rownum = 1;
We have a complete cold Backup taken on Sunday. The database crashed
on Wednesday. None of the database files are available. The only files we
have are the taped backup archive files till Wednesday. Is there a
possibility of recovering the database until the recent archive which we
have in the tape using the cold backup.
Yes, if you have all the archive logs since the cold backup then you can recover to
your last log
Steps:
1) Restore all backup datafiles, and controlfile. Also restore the password file and
init.ora if you lost those too. Don't restore your redo logs if you backed them up.
2) Make sure that ORACLE_SID is set to the database you want to recover
3) startup mount;
4) Recover database using backup controlfile;
At this point Oracle should start applying all your archive logs, assuming that
they're in log_archive_dest
5) alter database open resetlogs;
How to check RMAN version in oracle?
If you want to check RMAN catalog version then use the below query from
SQL*plus
SQL> Select * from rcver;
If you want to check simply database version.
SQL> Select * from v$version;
What is the minimum size of Temporary Tablespace?
1041 KB
Difference b/w image copies and backup sets?
An image copy is identical, byte by byte, to the original datafile, control file, or
archived redo log file. RMAN can write blocks from many files into the same
backup set, but it cannot do so in the case of an image copy.
An RMAN image copy and a copy you make with an operating system copy
command such as dd (which makes image copies) are identical. Since RMAN
image copies are identical to copies made with operating system copy
commands, you may use user-made image copies for an RMAN restore and
recovery operation after first making the copies known to RMAN by using the
catalog command.
You can make image copies only on disk, not on a tape device
("backup as copy database;"). Therefore, you can use the BACKUP AS COPY
option only for disk backups, and BACKUP AS BACKUPSET is the only option you
have for making tape backups.
How can we see the C:\ drive free space capacity from SQL?
Create an external table to read data from a file produced as below.
Create a BAT file free.bat:
@setlocal enableextensions enabledelayedexpansion
@echo off
for /f "tokens=3" %%a in ('dir c:\') do (
set bytesfree=%%a
)
set bytesfree=%bytesfree:,=%
echo %bytesfree%
endlocal && set bytesfree=%bytesfree%
You can create a scheduler job to run the above free.bat and write its output to
free_space.txt inside the Oracle directory, then query it through the external table.
Differentiate between Tuning Advisor and Access Advisor?
The Tuning Advisor:
It suggests indexes that might be very useful.
It suggests query rewrites.
It suggests SQL profiles.
The Access Advisor:
It suggests indexes that may be useful.
It makes suggestions about materialized views.
It also makes suggestions about table partitions in the latest versions of Oracle.
How to give Access of particular table for particular user?
GRANT SELECT (EMPLOYEE_NUMBER), UPDATE (AMOUNT) ON
HRMS.PAY_PAYMENT_MASTER TO SHAHID;
The command below checks the SELECT privilege on the table
PAY_PAYMENT_MASTER in the HRMS schema (if the connected user is different
from the schema owner):
SELECT PRIVILEGE
FROM ALL_TAB_PRIVS_RECD
WHERE PRIVILEGE = 'SELECT'
AND TABLE_NAME = 'PAY_PAYMENT_MASTER'
AND OWNER = 'HRMS'
UNION ALL
SELECT PRIVILEGE
FROM SESSION_PRIVS
WHERE PRIVILEGE = 'SELECT ANY TABLE';
What are the problem and complexities if we use SQL Tuning Advisor and
Access Advisor together?
I think both tools are useful for resolving SQL tuning issues. The SQL Tuning
Advisor mainly performs logical optimization by checking your SQL structure
and statistics, while the SQL Access Advisor suggests good data access paths,
that is, mainly work that can be done better on disk.
Both the SQL Tuning Advisor and the SQL Access Advisor are quite powerful as
they can automatically source the SQL they will tune from multiple different
sources, including the SQL cache, AWR, SQL Tuning Sets and user-defined workloads.
Regarding the complexity and problems of using these tools together, and how
to use them together better, check the Oracle documentation.
The db_files parameter is a "soft limit" parameter that controls the maximum
number of physical OS files that can map to an Oracle instance. The maxdatafiles
parameter is different - a "hard limit". When issuing a "create
database" command, the value specified for maxdatafiles is stored in the Oracle
control files; its default value is 32. The maximum number of database files can
be set with the db_files initialization parameter.
Regardless of the setting of this parameter, the maximum per database is 65533
(it may be less on some operating systems), and the maximum number of
datafiles per tablespace is OS dependent, usually 1022.
You are also limited by the size of database blocks and by the DB_FILES
initialization parameter for a particular instance. Bigfile tablespaces can contain
only one file, but that file can have up to 4G blocks.
What are latches and why are they used in Oracle?
A latch is a serialization mechanism. It is used to gain access to a shared data
structure: latching the structure prevents others from modifying it while you
are modifying it.
Why is it not necessary to take an UNDO backup?
In fact it is not necessary to include the UNDO tablespace in either COLD or
HOT backup scripts, though many DBAs do include it in their backup scripts.
You know that when you do transactions, redo entries are generated,
accepted! Just as for any other tablespace, whenever any change happens to the
UNDO tablespace or UNDO segments, Oracle generates redo entries. So even if
you have not backed up the UNDO tablespace, you have the redo entries through
which you can recover or roll back the transactions.
What is the effect on DB performance if virtual memory is used to store
part of the SGA?
For optimal performance in most systems, the entire SGA should fit in real
memory. If it does not, and if virtual memory is used to store parts of it, then
overall database system performance can decrease dramatically. The reason for
this is that portions of the SGA are paged (written to and read from disk) by the
operating system.
What is the role of lock_sga parameter?
The LOCK_SGA parameter, when set to TRUE, locks the entire SGA into physical
memory. This parameter cannot be used with automatic memory management
or automatic shared memory management.
What is CSSCAN?
CSSCAN (Database Character Set Scanner) is a SCAN tool that allows us to see the
impact of a database character set change or assist us to correct an incorrect
database nls_characterset setup. This helps us to determine the best approach
for converting the database characterset.
Differentiate between co-related sub-query and nested query?
A correlated subquery is one in which the inner query references a column of
the outer query, so it is conceptually evaluated once for each row processed by
the outer query. A nested (non-correlated) subquery is independent of the outer
query: it is evaluated once and its result is then used by the outer query.
Example: query using an IN() clause with an independent (nested) subquery.
SELECT EMPLOYEE_NUMBER, LOAN_CODE, DOCUMENT_NUMBER,
LOAN_AMOUNT
FROM PAY_LOAN_TRANS
WHERE EMPLOYEE_NUMBER IN (SELECT EMPLOYEE_NUMBER
FROM PAY_EMPLOYEE_PERSONAL_INFO
WHERE EMPLOYEE_NUMBER BETWEEN 1 AND 100);
Example: query using the = operator, also with a nested subquery.
SELECT * FROM PARTIAL_PAYMENT_SEQUENCE
WHERE SEQCOD = (SELECT MAX(SEQCOD) FROM
PARTIAL_PAYMENT_SEQUENCE);
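For contrast, a genuinely correlated subquery references a column of the outer row, so it is re-evaluated per row (a sketch; the DEPT_NO column is a made-up name for illustration):

```sql
-- Employees earning above the average salary of their own department:
SELECT e.employee_number, e.salary
  FROM pay_employee_personal_info e
 WHERE e.salary > (SELECT AVG(i.salary)
                     FROM pay_employee_personal_info i
                    WHERE i.dept_no = e.dept_no);  -- reference to the outer row
```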
One afternoon you suddenly get a call from an application user
complaining that the database is slow. What will be your first step to solve
this issue?
High performance is a common expectation of end users. In fact the database is
rarely slow or fast in itself; in most cases a session connected to the database
slows down when it receives an unexpected hit. Thus, to solve this issue you need
to find that unexpected hit. To know exactly what another session is doing, join
your query with v$session_wait.
SELECT NVL(s.username, '(oracle)') AS username, s.sid, s.serial#, sw.event,
sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.sid = sw.sid and s.username = 'HRMS'
ORDER BY sw.seconds_in_wait DESC;
Check the events the session is waiting on and try to find the object locks
for that particular session. Follow the link: Find Locks : Blockers
Locking is not the only cause affecting performance; disk I/O contention is another.
another case. When a session retrieves data from the database datafiles on disk
to the buffer cache, it has to wait until the disk sends the data. The wait event
shows up for the session as db file sequential read (for index scan) or db file
scattered read (for full table scan). Query link: DB File Sequential Read Wait/
DB File Scattered Read , DB Locks
When you see the event, you know that the session is waiting for I/O from the
disk to complete. To improve session performance, you have to reduce that
waiting period. The exact step depends on the specific situation, but the first
technique, reducing the number of blocks retrieved by the SQL statement,
almost always works.
Reduce the number of blocks retrieved by the SQL statement. Examine the SQL
statement to see if it is doing a full-table scan when it should be using an index, if
it is using a wrong index, or if it can be rewritten to reduce the amount of data it
retrieves.
Place the tables used in the SQL statement on a faster part of the disk.
Consider increasing the buffer cache to see if the expanded size will
accommodate the additional blocks, therefore reducing the I/O and the wait.
Tune the I/O subsystem to return data faster.
The database firewall can analyze SQL statements sent from database clients
and decide whether to pass, block, log, alert on, or substitute SQL
statements, based on a defined policy. Users can set whitelist and blacklist
policies to control the firewall, and it can detect injected SQL and block it.
The database firewall can do the following:
Monitor and block SQL traffic on the network with whitelist, blacklist and
exception list policies.
Protect against application bypass, SQL injection and similar threats.
Report on database activity.
Support other databases as well: MS-SQL Server, IBM DB2 and Sybase.
However, there are some key issues that it does not address. For example, a
privileged user can log in to the OS directly and make local connections to
the database, which bypasses the database firewall. For these issues you
would need other security options such as Audit Vault, VPD etc.
What is Oracle RAC One Node?
Oracle RAC One Node is a single instance running on one node of the cluster
while the second node is in cold standby mode. If the instance fails for some
reason, RAC One Node detects it and restarts the instance on the same node, or
the instance is relocated to the second node in case there is a failure or
fault on the first node. The benefit of this feature is that it provides a
cold failover solution, automates instance relocation without downtime and
does not need manual intervention. Oracle introduced this feature with the
release of 11gR2 (available with Enterprise Edition).
What are invalid objects in database?
Sometimes schema objects reference other objects: a view contains a query
that references a table or another view, and a PL/SQL subprogram invokes
other subprograms or references tables or views. These references are
established at compile time, and if the compiler cannot resolve them, the
dependent object being compiled is marked invalid.
An invalid dependent object must be recompiled against the new definition of a
referenced object before the dependent object can be used. Recompilation occurs
automatically when the invalid dependent object is referenced
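Invalid objects can also be listed and recompiled in bulk; a short sketch (the HRMS schema name is just an example):

```sql
-- List everything currently invalid
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';

-- Recompile all invalid objects in one schema
EXEC DBMS_UTILITY.compile_schema(schema => 'HRMS');
```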
How can we check DATAPUMP file is corrupted or not?
Sometimes we may be in a situation where we need to check whether a dumpfile
exported long ago is valid, or the application team says that the dumpfile
we provided is corrupted.
Use SQLFILE Parameter with import script to detect corruption. The use of this
parameter will read the entire datapump export dumpfile and will report if
corruption is detected.
impdp system/*** directory=dump_dir dumpfile=expdp.dmp
logfile=corruption_check.log
sqlfile=corruption_check.sql
This will write all DDL statements (which will be executed if an import is
performed) into the file which we mentioned in the command.
How can we find elapsed time for particular object during Datapump or
Export?
We have an undocumented parameter, METRICS, in Data Pump to check how long it
took to export different object types.
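For example, a sketch of an export run with this parameter (the directory and file names are illustrative, and METRICS is undocumented, so its output may vary by version):

```sql
expdp system/*** directory=dump_dir dumpfile=expdp_metrics.dmp
      logfile=expdp_metrics.log metrics=y
```

With metrics=y the log file reports per-object-type timing lines such as "Completed 25 TABLE objects in 12 seconds".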
How can you check the OS block size?
On Linux: tune2fs -l <device>
On Solaris: df -g /tmp
How to find location of OCR file when CRS is down?
You may need to find the location of the OCR (Oracle Cluster Registry) while
your CRS is down.
When the CRS is down:
Look into the ocr.loc file; the location of this file depends on the OS:
On Linux: /etc/oracle/ocr.loc
On Solaris: /var/opt/oracle/ocr.loc
When CRS is UP:
Set ASM environment or CRS environment then run the below command:
ocrcheck
How can you test whether your standby database is working properly?
To test your standby database, make a change to a particular table on the
production server and commit the change. Then manually switch a logfile so
those changes are archived. Manually ship the newest archived redo log file
and manually apply it on the standby database. Then open your standby
database in read-only mode and select from your changed table to verify those
changes are available. Once you are done, shut down your standby and start it
up again in standby mode.
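These steps can be sketched as follows (the table and column names are illustrative; run each command on the server indicated in the comment):

```sql
-- On the primary: make and commit a test change, then force a log switch
UPDATE hr.test_tab SET col1 = 'DG_TEST' WHERE id = 1;
COMMIT;
ALTER SYSTEM SWITCH LOGFILE;

-- On the standby: after shipping/applying the archived log, check progress
SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;

-- Open the standby read-only and verify the change arrived
ALTER DATABASE OPEN READ ONLY;
SELECT col1 FROM hr.test_tab WHERE id = 1;
```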
What is Dataguard & what is the purpose of Data Guard?
Oracle Data Guard is a disaster recovery solution from Oracle Corporation
that is used extensively in the industry for primary site failure, failover
and switchover scenarios.
a) Oracle Data Guard ensures high availability, data protection, and disaster
recovery for enterprise data.
b) Data Guard provides a comprehensive set of services that create, maintain,
manage, and monitor one or more standby databases to enable production
Oracle databases to survive disasters and data corruptions.
Will Patch Application affect System Performance?
Sometimes applying a certain patch can affect the performance of application
SQL statements. It is therefore recommended to collect a set of performance
statistics that can serve as a baseline before we make any major change, such
as applying a patch to the system.
What is your day to day activity as an Apps DBA?
As an Apps DBA we monitor the system for different alerts (Enterprise Manager
or third-party tools are used for configuring the alerts): tablespace issues,
CPU consumption, database blocking sessions etc. We also perform regular
maintenance activities such as cloning, patching and custom code migrations
(provided by developers), and work on user issues.
How often do you apply patches in your organization?
Usually for non-production around 4-6 patching requests come in weekly, and
the same patches are then applied to production in the outage or maintenance
window.
Production has a weekly maintenance window (e.g. Sat 6PM to 9PM) in which all
the changes (patches) are applied.
How often do you use cloning in your organization?
Cloning happens weekly or monthly depending on the organization's
requirements, generally when we need to perform a major task such as the
Oracle Financials annual closing.
Which processes read and write data from datafiles?
There is no background process which reads data from datafiles or database
buffer. Oracle creates server process to handle request from connected user
processes. A server process communicates with the user process and interacts
with oracle to carry out request from the associated user process.
For example: If a user queries some data not already in database buffer of the
SGA, then the associated server process reads the proper data block from the
datafiles into the SGA.
The DBWR background process is responsible for writing modified (dirty)
blocks from the buffer cache permanently to the datafiles on disk.
Why RMAN incremental backup fails even though full backup exists?
If you have taken an RMAN full backup using the command BACKUP DATABASE: a
level 0 backup is physically identical to a full backup. The only difference
is that the level 0 backup is recorded as an incremental backup in the RMAN
repository, so it can be used as the parent for a level 1 backup. A full
backup taken without level 0 cannot be used as the parent from which you can
take a level 1 backup.
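In RMAN terms the difference looks like this; the first command produces a backup that can parent incrementals, the second does not:

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;  -- recorded as incremental, can parent a level 1
RMAN> BACKUP DATABASE;                      -- full backup, cannot parent a level 1
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;  -- requires an existing level 0 parent
```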
How can you change or rename the database name?
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Create a text-based control file (saved on disk in the same location as the
datafiles), edit the database name in it, then recreate the control file and,
if required, issue the recover command using the BACKUP CONTROLFILE clause.
SHUTDOWN ABORT;   -- if the database is still open
STARTUP NOMOUNT;
CREATE CONTROLFILE REUSE SET DATABASE "<new_name>"
    RESETLOGS
    ARCHIVELOG
    MAXLOGFILES 10
    MAXLOGMEMBERS <your value>
    MAXDATAFILES 254
LOGFILE '<online redo log groups>'
DATAFILE '<names of all data files>';
What is the effect on working with Report when flex/confine mode are ON?
When flex mode is ON, Reports automatically resizes the parent when the child
is resized.
When confine mode is ON, an object cannot be moved outside its parent in the
layout.
How will you enforce security using stored procedure?
Don't grant users access directly to the tables within the application.
Instead, grant them the ability to execute the procedures that access the
tables. When a procedure executes, it executes with the privileges of the
procedure's owner, so users cannot access the tables except via the
procedure.
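A minimal sketch of this pattern (the APP schema, EMP table and role name are hypothetical):

```sql
-- The application owner APP exposes a procedure instead of the table
CREATE OR REPLACE PROCEDURE app.give_raise(p_empno NUMBER, p_amt NUMBER) AS
BEGIN
  UPDATE app.emp SET sal = sal + p_amt WHERE empno = p_empno;
END;
/
-- Users get EXECUTE on the procedure only; no direct table privileges
GRANT EXECUTE ON app.give_raise TO hr_clerk;
```

By default the procedure runs with definer's rights, i.e. with the privileges of APP, so callers never need SELECT or UPDATE on the table itself.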
What is RAC? What is the benefit of RAC over single instance database?
In Real Application Clusters environments, all nodes concurrently execute
transactions against the same database. Real Application Clusters coordinates
each node's access to the shared data to provide consistency and integrity.
Benefits:
Improve response time
Improve throughput
High availability
Transparency
Can you configure primary server and standby server on different OS?
No. The standby database must be on the same version of the database and the
same version of the OS.
If you want users will change their passwords after every 60 days then how
you will enforce this?
Oracle password security is implemented through oracle PROFILES which are
assigned to users. PASSWORD_LIFE_TIME parameter limits the number of days
the same password can be used for authentication.
You have to first create a database PROFILE and then assign each user to this
profile, or if you already have a PROFILE you just need to alter the above
parameter.
create profile Sadhan_users
limit
PASSWORD_LIFE_TIME 60
PASSWORD_GRACE_TIME 10
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX 0
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME UNLIMITED;
Then create user or already created user assigned to this profile.
SQL> Create user HRMS identified by oracle profile
sadhan_users;
If you have already assigned profile then you can directly modify the profile
parameter:
SQL> ALTER PROFILE sadhan_users LIMIT PASSWORD_LIFE_TIME 90;
What happens actually in case of instance Recovery?
When an Oracle instance fails, Oracle performs instance recovery when the
associated database is restarted. Instance recovery occurs in two steps:
Cache recovery: Changes being made to a database are recorded in the database
buffer cache as well as redo log files simultaneously. When there are enough data
in the database buffer cache, they are written to data files. If an Oracle instance
fails before these data are written to data files, Oracle uses online redo log files to
recover the lost data when the associated database is re-started. This process is
called cache recovery.
Transaction recovery: When a transaction modifies data in a database (the
before image of the modified data is stored in an undo segment which is used to
restore the original values in case the transaction is rolled back). At the time of
an instance failure, the database may have uncommitted transactions. It is
possible that changes made by these uncommitted transactions have gotten
saved in data files. To maintain read consistency, Oracle rolls back all
uncommitted transactions when the associated database is re-started. Oracle
uses the undo data stored in undo segments to accomplish this. This process is
called transaction recovery.
What is the main purpose of CHECKPOINT in oracle database?
A checkpoint is a database event which synchronizes the database blocks in
memory with the datafiles on disk. It has two main purposes: to establish
data consistency and to enable faster database recovery. For more
information: Discussion on Checkpoint and SCN
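A manual checkpoint and a quick way to observe its effect (requires DBA privileges):

```sql
ALTER SYSTEM CHECKPOINT;
-- The controlfile checkpoint SCN advances after the command completes
SELECT checkpoint_change# FROM v$database;
```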
What are the steps to install Oracle on a Linux system? List two kernel
parameters that affect Oracle installation.
Initially set up the disks and kernel parameters, then create the oracle user
and dba group, and finally run the installer to start the installation
process. SHMMAX and SHMMNI are two kernel parameters required to be set
before the installation process.
__________ parameter change will decrease paging/swapping?
Answer: Decreasing shared_pool_size
_______ Command is used to see the contents of SQL* Plus buffer
Answer: LIST
Transaction per rollback segment is derived from ________
Answer: Processes
LGWR process writes information into ___________
Answer: Redo log files.
A database's overall structure is maintained in __________
Answer: Control files
What is the use of NVL function?
The NVL function is used to replace NULL values with another, given value.
Syntax: NVL(value, replacement_value)
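A short example (ENAME, SAL and COMM are columns of the classic SCOTT.EMP demo table):

```sql
-- Treat a NULL commission as zero when computing total pay
SELECT ename, sal + NVL(comm, 0) AS total_pay FROM emp;
```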
What is WITH CHECK OPTION?
The WITH CHECK OPTION clause specifies the check level to be done in DML
statements. It is used to prevent changes to a view that would produce rows
not included in the view's subquery.
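A short sketch (the table, view and column values are illustrative):

```sql
CREATE VIEW dept30_emp AS
  SELECT * FROM emp WHERE deptno = 30
  WITH CHECK OPTION;

-- Fails with ORA-01402: the new row (deptno = 10) would fall
-- outside the view's WHERE clause
INSERT INTO dept30_emp (empno, ename, deptno) VALUES (9999, 'SHAAN', 10);
```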
How can you track the password change for a user in oracle?
Oracle only tracks the date the password will expire, based on when it was
last changed. Thus, by listing DBA_USERS.EXPIRY_DATE and subtracting
PASSWORD_LIFE_TIME, you can determine when the password was last changed. You
can also check the last password change time directly from the PTIME column
in the USER$ table (on which the DBA_USERS view is based). If you have
PASSWORD_REUSE_TIME and/or PASSWORD_REUSE_MAX set in a profile assigned to a
user account, you can reference the dictionary table USER_HISTORY$ to see
when the password was changed for that account.
SELECT user$.NAME, user$.PASSWORD, user$.ptime,
user_history$.password_date
FROM SYS.user_history$, SYS.user$
WHERE user_history$.user# = user$.user#;
What is the difference between a data block/extent/segment?
A data block is the smallest unit of logical storage for a database object. As
objects grow they take chunks of additional storage that are composed of
contiguous data blocks. These groupings of contiguous data blocks are called
extents. All the extents that an object takes when grouped together are
considered the segment of the database object.
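The hierarchy can be seen directly in the data dictionary (the owner and segment names are examples):

```sql
-- Each row is one extent of the segment; BLOCKS counts the
-- contiguous data blocks inside that extent
SELECT segment_name, extent_id, block_id, blocks
FROM   dba_extents
WHERE  owner = 'HRMS' AND segment_name = 'EMP'
ORDER  BY extent_id;
```

All the rows returned together describe the segment; each row is one extent; each extent is a run of contiguous data blocks.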
What is the difference between SQL*loader and Import utilities?
Both these utilities are used for loading data into the database. The
difference is that the Import utility relies on data produced by another
Oracle utility, Export, while SQL*Loader is a high-speed data loading
mechanism that allows data produced by other utilities or from different data
sources, such as flat files, to be loaded.
What is a log switch?
The point at which Oracle ends writing to one online redo log file and begins
writing to another is called a log switch. You can force a log switch by
using the command: ALTER SYSTEM SWITCH LOGFILE;
How can you pass the HINTS to the SQL processor?
Using a comment with a plus sign immediately after the comment opener, you
can pass hints to the SQL engine. For example: /*+ PARALLEL(emp, 4) */
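A sketch of a hinted query (the table alias and index name are hypothetical):

```sql
-- Ask the optimizer to use a specific index for this query
SELECT /*+ INDEX(e emp_name_ix) */ ename, sal
FROM   emp e
WHERE  ename = 'KING';
```

Note that the plus sign must immediately follow the comment opener; if it does not, the hint is treated as an ordinary comment and silently ignored.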
Give Example of available DB administrator utilities with their
functionality?
SQL*DBA It allows the DBA to monitor and control an Oracle database.
SQL*Loader It loads data from standard OS files or flat files into Oracle
database tables.
Export/Import They allow moving existing data in Oracle format to and from
an Oracle database.
Can you build indexes online?
YES. You can create and rebuild indexes online. This enables you to update base
tables at the same time you are building or rebuilding indexes on that table. You
can perform DML operations while the index building is taking place, but DDL
operations are not allowed. Parallel execution is not supported when creating or
rebuilding an index online.
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3)
ONLINE;
If an oracle database is crashed? How would you recover that transaction
which is not in backup?
If the database is in archivelog mode we can recover that transaction;
otherwise we cannot recover a transaction which is not in the backup.
What is the benefit of running the DB in archivelog mode over no
archivelog mode?
When a database is in noarchivelog mode, whenever a log switch happens there
is a loss of redo log information. To avoid this, the redo logs must be
archived, which is achieved by configuring the database in archivelog mode.
What is SGA? Define structure of shared pool component of SGA?
The System Global Area is a group of shared memory areas dedicated to an
Oracle instance. All Oracle processes use the SGA to hold information. The
SGA is used to store incoming data and internal control information needed by
the database. You can control the SGA memory by setting the parameters
db_cache_size, shared_pool_size and log_buffer.
The shared pool portion contains three major areas: the library cache (parsed
SQL statements, cursor information and execution plans), the dictionary cache
(user account information, privileges, segment and extent information) and
control structures (such as buffers for parallel execution messages).
You have more than 3 instances running on the Linux box? How can you
determine which shared memory and semaphores are associated with
which instance?
Oradebug is an undocumented utility supplied by Oracle. The oradebug help
command lists the commands available.
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name
How would you extract DDL of a table without using a GUI tool?
SELECT dbms_metadata.get_ddl('<OBJECT_TYPE>','<OBJECT_NAME>') FROM dual;
If you are getting high Busy Buffer waits then how can you find the reason
behind it?
Buffer busy wait means that queries are waiting for blocks to be read into
the db cache, or the block is busy in the cache and the session is waiting
for it. It could be an undo/data block or a segment header wait.
Run the first query below to find the P1, P2 and P3 of a session causing
buffer busy waits, then run the second query using those P1 and P2 values.
SQL> Select p1 "File #",p2 "Block #",p3 "Reason Code"
from v$session_wait Where event = 'buffer busy waits';
SQL> Select owner, segment_name, segment_type from
dba_extents
Where file_id = &P1 and &P2 between block_id and block_id
+ blocks -1;
Can flashback work on database without UNDO and with rollback
segments?
No. Flashback query enables us to query our data as it existed in a previous
state, in other words from a point in time before any other users made
permanent changes to it, and it relies on undo data to do so.
Can we have same listener name for two databases?
No
What happens during a checkpoint?
LGWR or CKPT writes the redo log sequence number to the datafile headers and
control files and tells DBWR to write dirty buffers from the dirty buffer
write queue (buffer cache) to disk. A checkpoint is a record indicating the
point in the redo log up to which all DB changes have been saved in the
datafiles.
The database ALWAYS has transactions going on. SMON and many other background
processes are always doing work; the database (unless it is opened read-only)
is always doing transactions, since the database never sleeps. Most of those
other programs do transactions and commit.
SQL> select username, program from v$session;
The question behind this: is the SCN a number that identifies a committed
transaction, or a number that merely identifies the sequence of statements
executed against the database?
SQL> create table s ( x int );
Table created.
SQL> Select dbms_flashback.get_system_change_number scn from
dual;
       SCN
----------
  79178265
SQL> begin
for i in 1 .. 1000
Loop
insert into s values ( i );
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select dbms_flashback.get_system_change_number - &SCN
from dual;
DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER-79178265
------------------------------------------------
                                               5
It only advanced by 5, but we did over 1,000 DML statements; thus an SCN is
not assigned to each SQL statement. The SCN is incremented upon commit.
SQL>
SQL> select dbms_flashback.get_system_change_number scn from
dual;
       SCN
----------
  79178271
SQL> begin
for i in 1 .. 1000
loop
insert into s values ( i );
COMMIT;
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select dbms_flashback.get_system_change_number - &SCN
from dual;
DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER-79178271
------------------------------------------------
                                            1016
See, now if you COMMIT 1,000 times the SCN does jump by 1,000 (the other
jumps are the background processes; SMON, MMON, PMON etc. are always doing
stuff, they do SQL all of the time; the database never rests).
Is there a limit on the number of SCNs that can be generated in a second?
It depends on the number of commits you are doing.
How can we check precision of SCN Timing?
SQL> select time_mp,time_dp, scn_wrp, scn_bas from
smon_scn_time;
It is done internally; if you look at that table's columns there is a hidden
field, TIM_SCN_MAP, and by using the APIs you can access that information.
SQL> select scn_to_timestamp(scn) ts, min(scn), max(scn)
from (
  select dbms_flashback.get_system_change_number() - level scn
  from dual
  connect by level <= 100
)
group by scn_to_timestamp(scn)
order by scn_to_timestamp(scn);
What if the transaction is rolled-back? Does the SCN again
increase?
Yes, it does; check out this example:
SQL> CREATE TABLE S1 (ENO NUMBER(4), ENAME
VARCHAR2(20));
Table created.
SQL> Select dbms_flashback.get_system_change_number scn
from dual;
       SCN
----------
   8806085
SQL> begin
for i in 1 .. 1000
loop
insert into S1 values ( 1, 'SHAAN' );
rollback;
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select scn, scn-8806085 from (
select dbms_flashback.get_system_change_number scn
from dual
);
       SCN SCN-8806085
---------- -----------
   8806085        2014
SQL> Select dbms_flashback.get_system_change_number scn
from dual;
       SCN
----------
   8806085
SQL> begin
for i in 1 .. 10000
loop
insert into S1 values ( 1, 'SHAAN' );
rollback;
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select scn, scn-8806085 from (
select dbms_flashback.get_system_change_number scn from
dual
);
       SCN SCN-8806085
---------- -----------
 155317184       20180
It advances even more if you do not rollback but commit instead:
SQL> Create table S2 ( eno number(4));
Table created.
SQL> select dbms_flashback.get_system_change_number scn
from dual;
       SCN
----------
   8828432
SQL> begin
for i in 1 .. 1000
loop
insert into s2 values ( i );
commit;
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select scn, scn-8828432 from (
select dbms_flashback.get_system_change_number scn from
dual
);
       SCN SCN-8828432
---------- -----------
   8830391        1959
SQL> select dbms_flashback.get_system_change_number scn
from dual;
       SCN
----------
   8830447
SQL> begin
for i in 1 .. 1000
loop
insert into s2 values ( i );
commit;
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select scn, scn-8830447 from (
select dbms_flashback.get_system_change_number scn from
dual
);
       SCN SCN-8830447
---------- -----------
   8842825       12378
Is there any difference between select CURRENT_SCN from
v$database and select
dbms_flashback.get_system_change_number scn from dual?
For a "normal" database (not a standby) they are, for all intents and
purposes, the same. They could be a little different if you do something
like:
SQL> select current_scn,
dbms_flashback.get_system_change_number from v$database;
since they would be evaluated at two slightly different points in time, but
consider them "the same".
What is the difference or similarity between SCN and
ORA_ROWSCN? Where does oracle store SCN?
The SCN is like a clock: it is always advancing (run
dbms_flashback.get_system_change_number, wait a few seconds, print it again;
it will have advanced). So just think of the SCN as a ticker, like time:
every time a transaction ends another unit is added, like adding seconds to
time. ORA_ROWSCN, by contrast, is an observed point in time: a value
associated with a block, or a row in a block, that represents the time the
block/row was last modified.
When alter system checkpoint command is used?
When we have a few dirty buffers of one table in the buffer cache and we
issue the command, the checkpoint SCN of the data block is updated and the
ITL is also updated as:
Itl   Xid                  Uba                 Flag  Lck  Scn/Fsc
0x01  0x0005.020.00002b46  0x00c00235.0d0f.15  --U-    3  fsc 0x0000.00ddffee
0x02  0x0001.012.00002088  0x00c0021d.0b70.07  --U-       fsc 0x0000.00df1407
But the header block of the file (after we dump and inspect it) still
contains the same SCN as before, irrespective of the change in the data
block.
Where does the SCN reside? Do archived and redo logs also contain SCNs?
The SCN does not really reside anywhere; it is like time itself. A value of
the SCN, taken at various times and representing the time something happened,
is stored in many places, sort of like a timestamp would be. Datafiles have
SCNs associated with them (times of various operations), control files have
them (times of various operations), log files have them (to record times of
various operations), undo segments have them (......); they are littered all
over the place, like timestamps.
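For example, the checkpoint SCNs recorded in the datafile headers can be compared with the one in the controlfile:

```sql
-- SCN recorded in each datafile header
SELECT file#, checkpoint_change# FROM v$datafile_header;
-- SCN recorded in the controlfile
SELECT checkpoint_change# FROM v$database;
```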
Overview DATA DICTIONARY: CHECKPOINT
V$INSTANCE_RECOVERY, V$LOG, V$LOG_HISTORY
V$INSTANCE_RECOVERY: lowest value in last four columns controls
checkpoints
redo log file size, log_checkpoint_timeout, log_checkpoint_interval,
fast_start_io_target
init parameter: log_checkpoint_interval, log_checkpoint_timeout,
log_checkpoints_to_alert
log_checkpoint_interval
redo log blocks (OS blocks not DB blocks) written before a
checkpoint
If set greater than redo log file size, checkpoints occur at log
switches
Ignored if set to zero.
log_checkpoint_timeout
number of seconds since last checkpoint before another is
performed
ignored if set to zero
default = 1800 seconds (30 minutes)
log_checkpoints_to_alert if true, write checkpoints to alert log
To decrease checkpoints:
set log_checkpoint_interval larger than the size of the online redo
logs
eliminate time-based checkpoints by setting
log_checkpoint_timeout = 0
increase size of online redo logs
Note: checkpoints DO NOT cause log switches, but log switches cause
checkpoints. For a manual checkpoint use ALTER SYSTEM CHECKPOINT.
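The checkpoint-related parameters discussed above can be inspected with:

```sql
SELECT name, value
FROM   v$parameter
WHERE  name LIKE 'log_checkpoint%' OR name = 'fast_start_io_target';
```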
To the point: it is not possible in the case of a single instance, but in RAC
you can apply opatch without downtime, as there will be separate
ORACLE_HOMEs and separate instances (one instance running on each
ORACLE_HOME).
You have collection of patch (nearly 100 patches) or patchset. How can you
apply only one patch from patcheset or patch bundle at ORACLE_HOME?
With napply itself (by providing the patch location and a specific patch id)
you can apply only one patch from a collection of extracted patches. For more
information check the opatch util napply help; it will give you a clear
picture.
For example:
opatch util napply <patch_location> -id 9 -skip_subset -skip_duplicate
This will apply only patch id 9 from the patch location and will skip
duplicates and subsets of patches already installed in your ORACLE_HOME.
How can you get minimum/detail information from inventory about
patches applied and components installed?
You can try the commands below for minimum and detailed information from the
inventory:
$ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc <location of oraInst.loc file>
$ORACLE_HOME/OPatch/opatch lsinventory -detail -invPtrLoc <location of oraInst.loc file>
Differentiate Patcheset, CPU and PSU patch? What kind of errors usually
resolved from them?
Critical Patch Updates (CPU) were the original quarterly patches released by
Oracle to target specific security fixes in various products. A CPU is a
subset of a patch set update (PSU). CPUs are built on the base patchset
version, whereas PSUs are built on the base of the previous PSU.
Patch Set Updates (PSUs) are also released quarterly along with CPU patches
and are a superset of CPU patches, in the sense that a PSU patch includes the
CPU patches plus other bug fixes released by Oracle. PSUs contain fixes for
bugs that cause wrong results, data corruption etc., but do not contain fixes
for bugs that may result in dictionary changes, major algorithm changes,
architectural changes or optimizer plan changes.
Regular patchset: please do not confuse regular patchsets with patch set
updates (PSU). Consider the regular patchset a superset of the PSU. Regular
patchsets contain major bug fixes. The importance of PSUs diminishes once a
regular patchset is released for a given version. In comparison to a regular
patchset, a PSU will not change the version of Oracle binaries such as
sqlplus, import/export etc.
If both CPU and PSU are available for given version which one, you will
prefer to apply?
From the above discussion it is clear that once you apply a PSU, the
recommended way is to apply only the next PSU. In fact, there is no need to
apply a CPU on top of a PSU, as the PSU contains the CPU (applying a CPU over
a PSU will be considered an attempt to roll back the PSU and will in fact
require more effort). So if you have not yet decided on or applied any of the
patches, I suggest you go with PSU patches. For more details refer to: Oracle
Products [ID 1430923.1], ID 1446582.1
PSU is a superset of CPU; why would someone choose to apply a CPU rather
than a PSU?
CPUs are smaller and more focused than PSUs and mostly deal with security
issues. This is theoretically a more conservative approach and can cause less
trouble than a PSU, as it changes less code. Thus, for anyone concerned only
with security fixes and not functionality fixes, a CPU may be a good
approach.
How can you find the PSU installed version?
The PSU is referenced at the 5th place in the Oracle version number, which
makes it easier to track (e.g. 10.2.0.3.1). To determine the PSU version
installed, use the OPatch utility:
opatch lsinventory -bugs_fixed | grep -i PSU
To find it from the database:
Select substr(action_time,1,30) action_time,
substr(id,1,10) id, substr(action,1,10)
action, substr(version,1,8) version,
substr(BUNDLE_SERIES,1,6) bundle, substr(comments,1,20)
comments from registry$history;
Note: the above query returns these details only if you have already
executed catbundle.sql.
Export (exp), Import (imp) are Oracle utilities which allow you to write data in
an ORACLE-binary format from the database into operating system files and to
read data back from those operating system files.
A simple automated script to export full database
SET ORACLE_SID=ORCL3
column instdate new_value v_instdate noprint
SELECT TO_CHAR(sysdate,'DDMMYYHH24') instdate FROM dual;
host exp system/oracle@orcl3 full=y consistent=y
file=D:\BACKUP\dump&&v_instdate..dmp
log=D:\BACKUP\dump&&v_instdate..log
exit
Which are the Import/Export modes?
Full export/import:
The EXP_FULL_DATABASE and IMP_FULL_DATABASE roles, respectively, are needed
to perform a full export/import. Use the FULL export parameter for a full
export.
Tablespace:
Use the tablespaces export parameter for a tablespace export.
User:
This mode can be used to export and import all objects that belong to a user. Use
the owner export parameter and the FROMUSER import parameter for a user
(owner) export-import.
Table:
Specific tables (or partitions) can be Exported/Imported with table export mode.
Use the tables export parameter for a table Export/ Import mode.
For more details example follow the other post: Using Import/Export
Is it possible to exp/ imp to multiple files?
Yes, is possible. Here is an example:
exp SCOTT/TIGER
FILE=C:\backup\File1.dmp,C:\backup\File2.dmp
LOG=C:\backup\scott.log
Export and Import Schema in the same and different database
EXP SYSTEM/SYSMAN@sadhan.world OWNER=ORAFIN
FILE=H:\dump\orafin1_dump.DMP GRANTS=Y BUFFER=10000 COMPRESS=Y
ROWS=Y LOG= H:\dump\Orafin1.DMP.LOG
IMP SYSTEM/sysman@sadhan.world FILE=D:\backup\orafin1_dump.DMP
FROMUSER=ORAFIN TOUSER=ITGFIN LOG=D:\backup\Orafin1.DMP.LOG
The above two commands will export the 'orafin' schema from sadhan database
and import (restore) into the 'itgfin' schema located on the same database.
EXP edss/edss@isscohr.world OWNER=EDSS
FILE=H:\New_dump\ISSCO_EDSS_DUMP.DMP GRANTS=Y BUFFER=10000
COMPRESS=Y LOG=H:\New_dump\ISSCO_EDSS_DUMP.LOG
IMP edss/edss@sad1.world FILE=H:\New_dump\ISSCO_EDSS_DUMP.DMP
FROMUSER=edss TOUSER=edss LOG=H:\New_dump\ISSCO_EDSS_DUMP.LOG
The above two commands will export the 'edss' schema from Issco database and
import (restore) into the same schema (edss) on the sad1 database.
How we can use exp/ imp when we have 2 different Oracle database
versions?
Use the exp utility of the lower (older) of the two database versions, and the
imp utility matching the version of the target database.
What I have to do before importing database Objects?
Before importing database objects, we have to drop or truncate the objects; if
not, the imported data will be appended to the existing data. If the sequences
are not dropped, the sequences will generate inconsistent values. If there are
any constraints on the target table, the constraints should be disabled during
the import and enabled after the import.
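The disable/enable step can be scripted by generating the DDL from a list of constraints. The table and constraint names below are hypothetical; in practice you would build the list from DBA_CONSTRAINTS:

```shell
#!/bin/sh
# Generate ALTER TABLE ... DISABLE/ENABLE CONSTRAINT statements
# from a hypothetical list of "table:constraint" pairs.
CONSTRAINTS="emp:emp_dept_fk dept:dept_pk"

disable_sql=""
enable_sql=""
for pair in $CONSTRAINTS; do
  tab=${pair%%:*}
  con=${pair##*:}
  disable_sql="$disable_sql
ALTER TABLE $tab DISABLE CONSTRAINT $con;"
  enable_sql="$enable_sql
ALTER TABLE $tab ENABLE CONSTRAINT $con;"
done

echo "-- run before import:$disable_sql"
echo "-- run after import:$enable_sql"
```

Run the first batch in SQL*Plus before the import and the second batch after it completes.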
Is it possible to import a table in a different tablespace?
By default, NO, because there is no tablespace parameter for the import
operation. However, this can be done in the following manner:
1) (Re)create the table in the target tablespace (the table will be empty).
2) Import the table using the INDEXFILE parameter (the import is not done, but
a file containing the index-creation statements is generated).
3) Modify this script to create the indexes in the tablespace we want.
4) Import the table using the IGNORE=y option (because the table exists).
5) Recreate the indexes by running the modified script.
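The workaround can be sketched as a command sequence. Everything is only echoed here, and the user, table and file names are illustrative placeholders:

```shell
#!/bin/sh
# The INDEXFILE workaround, step by step (echoed, not executed).
STEP1="# 1) recreate the empty table in the target tablespace (via SQL)"
STEP2="imp system/manager tables=emp indexfile=emp_idx.sql file=exp.dmp"
STEP3="# 3) edit emp_idx.sql: point the TABLESPACE clause at the target"
STEP4="imp system/manager tables=emp ignore=y indexes=n file=exp.dmp"
STEP5="sqlplus scott/tiger @emp_idx.sql"
printf '%s\n' "$STEP1" "$STEP2" "$STEP3" "$STEP4" "$STEP5"
```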
In which cases imp/exp is used?
Eliminate database fragmentation
Schema refresh (move the schema from one database to another)
Detect database corruption. Ensure that all the data can be read (if the data can
be read that means there is no block corruption)
Transporting tablespaces between databases
Backup database objects
How we can improve the EXP Performance?
Set the BUFFER parameter to a high value (e.g. 2M)
If you run multiple export sessions, ensure they write to different physical disks.
How we can improve the IMP performance?
Import the table using INDEXFILE parameter (the import is not done, but a file
which contains the indexes creation is generated), import the data and recreate
the indexes
Store the dump file to be imported on a separate physical disk from the oracle
data files
If there are any constraints on the target table, the constraints should be
disabled during the import and enabled after import
Set the BUFFER parameter to a high value (e.g. BUFFER=30000000, ~30MB) together
with COMMIT=y, or leave COMMIT=n (the default behavior: import commits after
each table is loaded; however, this uses a lot of rollback segment or undo
space for huge tables).
Create the dump with a direct path export (DIRECT=y is an exp parameter; the
classic imp utility itself has no direct path option)
(if possible) Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i)
considerably in the init<SID>.ora file
(if possible) Set LOG_BUFFER to a larger value and restart Oracle.
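An illustrative imp invocation combining the tips above might look as follows. The file, users and sizes are placeholders, not recommendations for every system, and the command is only echoed:

```shell
#!/bin/sh
# Example tuned imp command line; all names and sizes are placeholders.
IMP_CMD="imp system/manager file=big.dmp fromuser=scott touser=scott \
buffer=30000000 commit=y ignore=y log=imp_big.log"
echo "$IMP_CMD"
```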
Which are the common IMP/EXP problems?
ORA-00001: Unique constraint ... violated - Perhaps you are importing
duplicate rows. Use IGNORE=N to skip tables that already exist (imp will give
an error if the object is re-created), or drop/truncate and re-import the
table if a table refresh is needed.
IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y
import parameter to ignore these errors, but be careful as you might end up with
duplicate rows.
ORA-01555: Snapshot too old - Ask your users to STOP working while you are
exporting or use parameter CONSISTENT=NO (However this option could create
possible referential problems, because the tables are not exported from one
snapshot in time).
ORA-01562: Failed to extend rollback segment - Create bigger rollback
segments or set parameter COMMIT=Y (with an appropriate BUFFER parameter)
while importing.