
EXP IMP

Exp and Imp are logical backup utilities.

They must connect to an open database to take the backup or perform the recovery.

They support cross-platform data movement, e.g. Linux to Windows or Solaris to Linux.

They can be used to rectify database fragmentation.

Export reads the contents of the database and stores them as SQL statements in the export dump.

Export can take all schemas other than SYS.

Related roles

exp_full_database

imp_full_database

The backup levels are:

1. User

2. Table

3. Full

4. Tablespace

Parameters

buffer : Specifies the size, in bytes, of the buffer used to fetch the rows. If 0 is specified, only one row is fetched at a time.

compress : If Y, compresses all the extents of each table into a single extent.

consistent : If Y, export internally issues a "set transaction read only" statement to ensure data consistency. This may lead to "ORA-01555 Snapshot too old" errors if you don't have sufficient undo space.

direct : A performance parameter that bypasses the SQL command-processing layer and reads data directly.

file : The name of the export file; multiple files can be specified, separated by commas.

filesize : The maximum file size, specified in bytes.

full : The entire database is exported.

help=Y : Displays the parameters/options of the export command.

log : The log file used by export to write messages.

object_consistent : Ensures consistency at the individual object level.

owner : The schema(s) to be exported.

parfile : The file that contains the parameter list.

query : Exports only the rows that satisfy the given WHERE condition.

recordlength : May increase the performance of export when specified with the direct option.

rows : Specifies whether to export the rows or only the segment structures (definitions).

statistics : Specifies whether to export analyzer statistics; mostly NONE is specified.

tables : Takes a backup of specific tables and partitions.

tablespaces : Takes tablespace-level backups. The user should have the EXP_FULL_DATABASE role.

transport_tablespace : Exports the metadata needed for transportable tablespaces.

tts_full_check : If TRUE, export verifies that a consistent set of objects is exported when creating a transportable tablespace.

userid : The userid/password of the user doing the export.

volsize : Specifies the maximum number of bytes in an export file on each tape volume.
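These parameters are usually collected into a parameter file rather than typed on the command line. A minimal sketch of such a parfile (the paths, schema, and credentials are placeholders in the style of the examples later in these notes):

```text
userid=system/manager
owner=sss
file=/backup/exp/sss.dmp
log=/backup/exp/sss.log
direct=y
statistics=none
```

It would then be invoked as `$ exp parfile=/backup/exp/sss.par`.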

======================================================================

In real environments we give the following mandatory parameters.

For export:

userid

file

log

direct or buffer (either one should be given to improve performance)

statistics=none (so that segment statistics are not exported)


and other parameters

full=y

Takes all schemas other than SYS; objects created directly in the SYS schema are not exported and must be recreated manually.

tables=<table_name>

With this option we can take a backup of specific tables and partitions.

owner=<schema>

With this option you can export one or more schemas.

Frequently Asked Questions

========================================================================

Given a full export dump from production, how will you create a database from the dump?

Hint : View the contents of the export dump using SHOW=Y (which lists the SQL statements in the dump without executing them).

========================================================================

How will you do schema refresh from PROD to UAT with the below specification?

Schema_name=SSS

PROD ipaddress = 10.10.1.10

UAT ipaddress = 10.10.1.11

Note : Make sure that there is sufficient space for the export dump.

Use export in PROD environment like


$ exp system/manager file=/backup/exp/sss.dmp log=/backup/exp/sss.log owner=sss direct=y
statistics=none

Then we need to transfer the file from PROD to UAT. Before that, compress the dump file using

$ gzip /backup/exp/sss.dmp

which will create /backup/exp/sss.dmp.gz

Then move this file to UAT server under /backup directory using scp or ftp.

$ scp /backup/exp/sss.dmp.gz oracle@10.10.1.11:/backup
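A corrupted dump is otherwise only discovered at import time, so it is worth checksumming the dump before compressing and verifying after transfer. A minimal sketch, using a dummy file in place of the real export dump (all paths are illustrative):

```shell
# dummy file standing in for the real export dump
echo "dummy dump contents" > /tmp/sss.dmp

md5sum /tmp/sss.dmp > /tmp/sss.dmp.md5   # record checksum before compressing
gzip -f /tmp/sss.dmp                     # creates /tmp/sss.dmp.gz

# ... transfer sss.dmp.gz and sss.dmp.md5 to UAT with scp/ftp ...

gunzip -f /tmp/sss.dmp.gz                # restores /tmp/sss.dmp
md5sum -c /tmp/sss.dmp.md5               # prints "/tmp/sss.dmp: OK" on success
```

If the verification fails on UAT, re-transfer the file rather than attempting the import.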

Now on UAT Server

Note : Make sure that size of UNDO and archive log destination are sufficient.

There are two ways to import the data:

1. Drop the schema and recreate it.

2. Truncate the schema's tables and import. (If the user SSS has given any grants to other schemas, this truncate option is the best choice, and it suits cases where no tablespace change is needed.)

1. Drop the user and recreate it with the same default tablespace name as in production.

SQL> conn sys/sys as sysdba

SQL> drop user sss cascade;

SQL> create user sss identified by sakthi default tablespace USERS temporary tablespace temp;

SQL> grant connect,resource to sss;


then, do the import.

First unzip the dumpfile.

$ gunzip /backup/sss.dmp.gz

$ imp system/manager file=/backup/sss.dmp log=/backup/sss.log fromuser=sss touser=sss buffer=10485760 commit=y

The import will complete successfully if you have sufficient space in the database. The storage areas that should be monitored are:

UNDO

USERS

archive log destination

and other index tablespaces related to the objects in SSS.

2. Truncate and import

If grants have been given from the SSS schema, you can use this option so that the grants need not be given again.

Using TRUNCATE TABLE <tablename>;

you can delete all the rows. But you might get errors (ORA-02266) when truncating parent tables that are referenced by enabled foreign keys.

We can create the script for deleting as shown below.

SQL> conn system/manager

SQL> set pagesize 2000 head off

SQL> spool /tmp/truncate_tables.sql

SQL> select 'truncate table '||owner||'.'||table_name||';' from dba_tables where owner='SSS';

SQL> spool off;

Then execute /tmp/truncate_tables.sql in a SQL*Plus session.

If you get child-table constraint errors, execute the script again; if the errors persist, disable the foreign key constraints before truncating.
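The same truncate script can also be built outside the database from a plain list of table names. A sketch (the table names here are made up for illustration):

```shell
# hypothetical list of tables owned by SSS
cat > /tmp/sss_tables.txt <<'EOF'
EMP
DEPT
ORDERS
EOF

# build one TRUNCATE statement per table, qualified with the owner
while read t; do
  echo "truncate table SSS.${t};"
done < /tmp/sss_tables.txt > /tmp/truncate_tables.sql

cat /tmp/truncate_tables.sql
```

The generated /tmp/truncate_tables.sql can then be run in SQL*Plus just like the spooled version.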

Now, import the dump using the below script.

$ imp system/manager file=/backup/sss.dmp log=/backup/sss.log fromuser=sss touser=sss buffer=10485760 commit=y ignore=y

ignore=y will append the data if the table already exists.

Now we have imported the data, but we need to confirm whether it was successful.

POST REFRESH ACTIVITIES

Check that all the objects were copied into the SSS schema in UAT.

SQL> conn system/manager@PROD

SQL> select count(*) from dba_objects where owner='SSS';

SQL> conn system/manager@UAT

SQL> select count(*) from dba_objects where owner='SSS';

Note : The object counts should be equal. In case of any mismatch, find what type of object is missing:

SQL> conn system/manager@PROD

SQL> select object_type,count(*) from dba_objects where owner='SSS' group by object_type;


SQL> conn system/manager@UAT

SQL> select object_type,count(*) from dba_objects where owner='SSS' group by object_type;
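Once each count query has been spooled to a file on its server, the mismatch can be spotted with a plain diff. A sketch, with made-up spool contents (file names and counts are purely illustrative):

```shell
# made-up spool output from PROD and UAT (object_type, count)
cat > /tmp/prod_counts.txt <<'EOF'
INDEX 120
TABLE 85
VIEW 10
EOF
cat > /tmp/uat_counts.txt <<'EOF'
INDEX 118
TABLE 85
VIEW 10
EOF

# lines that differ between PROD and UAT point at the missing object type
diff /tmp/prod_counts.txt /tmp/uat_counts.txt || true
```

Here the diff would single out the INDEX row, telling us indexes are missing on UAT.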

If we still want to find the individual missing objects, use a database link to compare the schemas.

Create a DBlink in UAT for PROD.

SQL> conn system/manager@UAT

SQL> create database link LINK_PRD connect to system identified by manager using 'PROD';

Note : 'PROD' is the connect string that is configured in tnsnames.ora in UAT environment.

SQL> select object_name, object_type from dba_objects@LINK_PRD where owner='SSS'

minus

select object_name, object_type from dba_objects where owner='SSS';

Import/create the missing objects.

Then,

1. Recompile all the objects in the SSS schema of UAT environment.

SQL> conn sys/sys@UAT as sysdba

SQL> @?/rdbms/admin/utlrp.sql

2. Analyze the schema objects.

SQL> exec dbms_stats.gather_schema_stats('SSS');


========================================================================

How will you monitor imp command?

SQL> select sql_text, rows_processed
     from v$sqlarea
     where parsing_schema_name='TEST'
     and sql_text like 'INS%'
     order by last_load_time;

Note : The rows can be seen only if you specify commit=y option.

==========================================================================

How will you import a table into different tablespace?

Create the table structure with the new tablespace in the target database, e.g.

SQL> create table sss(a number, b number) tablespace SSS_TBS;

then import the table using

$ imp sakthi/software file=/backup/table.dmp log=/backup/table.log fromuser=sakthi touser=sakthi buffer=10485760 commit=y ignore=y

Since the table already exists, ignore=y makes import skip the CREATE TABLE statement in the dump and load the rows into the pre-created table in SSS_TBS.
