The exp/imp utilities must connect to an open database to take the backup (export) or perform the recovery (import).
Export supports cross-platform data movement, e.g. Linux to Windows or Solaris to Linux.
It reads the contents of the database and stores them as SQL statements in an export dump file.
Related roles
EXP_FULL_DATABASE
IMP_FULL_DATABASE
Export modes
1. User
2. Table
3. Full
4. Tablespace
Parameters
buffer : Specifies the size, in bytes, of the buffer used to fetch the rows. If 0 is specified, only one
row is fetched at a time.
compress : If Y, compresses all the extents into a single extent for each table.
consistent : If Y, internally issues a "set transaction read only" statement and ensures data
consistency. It may lead to "ORA-01555: snapshot too old" if you don't have sufficient undo space.
direct : A performance parameter that bypasses the SQL evaluation buffer and writes directly to the
export file.
file : The name of the export file; multiple files can be specified separated by commas.
recordlength : May increase the performance of the export when specified with the direct option.
rows : Specifies whether to export the table rows or only the segment structures.
tablespaces : Takes tablespace-level backups. The user should have the EXP_FULL_DATABASE role.
tts_full_check : If TRUE, export verifies that a consistent set of objects is exported when
creating a transportable tablespace set.
volsize : Specifies the maximum number of bytes in an export file on each tape volume.
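Putting several of these parameters together, a typical full export might look like the following (paths and buffer sizes are illustrative, not from the source):

```shell
# Conventional-path full export: 1 MB fetch buffer, consistent snapshot
exp system/manager file=/backup/exp/full.dmp log=/backup/exp/full.log \
    full=y consistent=y buffer=1048576

# Direct-path alternative: bypasses the SQL evaluation buffer;
# recordlength can improve throughput here (buffer applies only to conventional path)
exp system/manager file=/backup/exp/full.dmp log=/backup/exp/full.log \
    full=y direct=y recordlength=65535
```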
======================================================================
For export
userid
file
log
full=y
(takes all schemas other than SYS, but it will still take objects that were created in SYS)
tables = <table_name>
owner = <schema>
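The common export modes map onto these parameters as follows (SCOTT/EMP are placeholder names, not from the source):

```shell
# Full database export
exp system/manager file=full.dmp log=full.log full=y

# Schema (user) level export
exp system/manager file=scott.dmp log=scott.log owner=scott

# Table level export
exp system/manager file=emp.dmp log=emp.log tables=scott.emp
```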
========================================================================================================
Given a full export dump from production, how will you create a database from the dump?
========================================================================================================
How will you do schema refresh from PROD to UAT with the below specification?
Schema_name=SSS
Note : Make sure that there is sufficient space for the export dump.
Then we need to transfer the file from PROD to UAT. Before that, compress the dump file using
$ gzip /backup/exp/sss.dmp
Then move this file to the UAT server under /backup using scp or ftp.
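The compress-transfer-verify sequence can be sketched as below; a scratch file stands in for the real dump, the UAT hostname is a placeholder, and the scp line is commented out so the rest runs locally:

```shell
dump=/tmp/sss_demo.dmp
echo "dummy export contents" > "$dump"            # stand-in for the real sss.dmp

md5_before=$(md5sum "$dump" | awk '{print $1}')   # checksum before transfer

gzip -f "$dump"                                   # produces /tmp/sss_demo.dmp.gz
# scp "$dump.gz" oracle@uat-host:/backup/         # placeholder transfer step

gunzip -f "$dump.gz"                              # on UAT: restore the .dmp
md5_after=$(md5sum "$dump" | awk '{print $1}')    # checksum after transfer

[ "$md5_before" = "$md5_after" ] && echo "checksums match"
```

Comparing checksums on both sides catches a truncated or corrupted transfer before you waste time on a failing import.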
Note : Make sure that size of UNDO and archive log destination are sufficient.
1. Drop the user and recreate it with the same default tablespace as in production.
SQL> create user sss identified by sakthi default tablespace USERS temporary tablespace temp;
2. Truncate the schema and import it. (If the user SSS has given any grants to other schemas, this
truncate option retains them.)
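If the user is dropped and recreated (option 1), any privileges it had must be granted again; a minimal sketch, since the exact grants depend on what the schema had in production:

```sql
SQL> grant connect, resource to sss;
SQL> alter user sss quota unlimited on USERS;
```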
$ gunzip /backup/sss.dmp.gz
The import will be done successfully if you have sufficient space in the database. The storage that
should be monitored is
UNDO
USERS
If you have grants that were given from the SSS schema, then you can use the truncate option so that
the grants need not be given again.
Alternatively, you can delete all the rows, but you might get issues when deleting records from
parent tables. If you get a child-table (ORA-02292) error, execute the delete script again.
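One way to build the delete script mentioned above is to generate it from the data dictionary and re-run it until no child-table errors remain (a sketch; the spool path is illustrative):

```sql
SQL> set heading off feedback off
SQL> spool /tmp/delete_sss.sql
SQL> select 'delete from SSS.' || table_name || ';' from dba_tables where owner = 'SSS';
SQL> spool off
SQL> @/tmp/delete_sss.sql
SQL> commit;
```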
Now we have imported the data, but we need to confirm whether it was successful,
i.e. compare the object counts between the source and target schemas.
Note : The object counts should be equal; in case of any mismatch, the missing objects have to be found.
To find the missing objects, use a dblink to compare the schemas.
SQL> create database link LINK_PRD connect to system identified by manager using 'PROD';
Note : 'PROD' is the connect string that is configured in tnsnames.ora in UAT environment.
SQL> select object_name, object_type from dba_objects@LINK_PRD where owner='SSS'
     minus
     select object_name, object_type from dba_objects where owner='SSS';
Then, recompile any invalid objects:
SQL> @?/rdbms/admin/utlrp.sql
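utlrp.sql recompiles invalid objects; afterwards you can confirm that nothing in the schema is left invalid (SSS as in this example):

```sql
SQL> select object_name, object_type
     from dba_objects
     where owner = 'SSS' and status = 'INVALID';
```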
To monitor the rows processed during the import, query v$sqlarea:
SQL> select sql_text,
            rows_processed
     from v$sqlarea
     where parsing_schema_name='TEST'
     order by last_load_time;
Note : The rows_processed value can be seen during the import only if you specify the commit=y option.
==========================================================================
Create the table structure with the new tablespace in the target database like