
Data Pump IMPDP TABLE_EXISTS_ACTION = APPEND, REPLACE, [SKIP] and TRUNCATE


The conventional import utility (IMP) has the IGNORE=Y option, which ignores the error raised when an object with the same name already exists. Data Pump provides an enhanced version of IGNORE=Y called TABLE_EXISTS_ACTION. This parameter offers four different ways to handle an existing table and its data:
1. SKIP: The default value. This behaves exactly like the IGNORE=Y option in the conventional import utility.

2. APPEND: Appends the data from the dump. The rows in the dump are added to the table and the existing data remains unchanged.

3. TRUNCATE: Truncates the existing rows in the table and inserts the rows from the dump.

4. REPLACE: Drops the current table and recreates it as it is in the dump file.

Both the SKIP and REPLACE options are not valid if you set CONTENT=DATA_ONLY for the impdp.
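So a data-only load can only append to or truncate the existing table. A minimal sketch, reusing the directory and dump file from the examples below:

$ impdp directory=exp_dir dumpfile=emp.dmp logfile=imp_data.log content=data_only table_exists_action=truncate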

Let's look at some examples. This is my sample table, employee:
SQL> select * from employee;

EMP_NAME                             DEPT        SAL
------------------------------ ---------- ----------
Rupal                                  10       5000
Hero                                   10       5500
Jain                                   10       4000
John                                   20       6000

I took a Data Pump export of the employee table.

$ expdp directory=exp_dir tables=scott.employee dumpfile=emp.dmp logfile=emp.log
Export: Release 11.2.0.2.0 - Production on Tue May 1 23:31:04 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01": /******** AS SYSDBA directory=exp_dir
tables=scott.employee dumpfile=emp.dmp logfile=emp.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."EMPLOYEE"                          5.953 KB       4 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /home/oracle/shony/emp.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 23:31:20

A. TABLE_EXISTS_ACTION=SKIP

$ impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=skip
Import: Release 11.2.0.2.0 - Production on Tue May 1 23:36:07 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01": /******** AS SYSDBA directory=exp_dir
dumpfile=emp.dmp logfile=imp1.log table_exists_action=skip
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "SCOTT"."EMPLOYEE" exists. All dependent metadata and data will be
skipped due to table_exists_action of skip
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 23:36:13
B. TABLE_EXISTS_ACTION=APPEND

I have deleted the old rows and inserted 4 new rows into the employee table, so the rows in the dump and in the table are now different. I am going to import the dump with the APPEND option.

SQL> delete from employee;

4 rows deleted.

SQL> insert into employee (select * from emp where dept>20);

4 rows created.

SQL> commit;

SQL> select * from employee;

EMP_NAME                             DEPT        SAL
------------------------------ ---------- ----------
Kiran                                  30       5500
Peter                                  30       6800
King                                   30       7600
Roshan                                 30       5500
$ impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=append

Import: Release 11.2.0.2.0 - Production on Wed May 2 00:50:18 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01": /******** AS SYSDBA directory=exp_dir
dumpfile=emp.dmp logfile=imp1.log table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "SCOTT"."EMPLOYEE" exists. Data will be appended to existing table but
all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."EMPLOYEE"                          5.953 KB       4 rows
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 00:50:25

Now 4 more rows have been appended to the table.

  1* select * from employee
SQL> /

EMP_NAME                             DEPT        SAL
------------------------------ ---------- ----------
Kiran                                  30       5500
Peter                                  30       6800
King                                   30       7600
Roshan                                 30       5500
Rupal                                  10       5000
Hero                                   10       5500
Jain                                   10       4000
John                                   20       6000

8 rows selected.

C. TABLE_EXISTS_ACTION=TRUNCATE

Now let's try the table_exists_action=truncate option. It truncates the contents of the existing table and inserts the rows from the dump. Currently my employee table has the 8 rows from the last insert.

$ impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=truncate

Import: Release 11.2.0.2.0 - Production on Wed May 2 00:55:03 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01": /******** AS SYSDBA directory=exp_dir
dumpfile=emp.dmp logfile=imp1.log table_exists_action=truncate
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "SCOTT"."EMPLOYEE" exists and has been truncated. Data will be loaded
but all dependent metadata will be skipped due to table_exists_action of
truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."EMPLOYEE"                          5.953 KB       4 rows
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 00:55:09
  1* select * from employee
SQL> /

EMP_NAME                             DEPT        SAL
------------------------------ ---------- ----------
Rupal                                  10       5000
Hero                                   10       5500
Jain                                   10       4000
John                                   20       6000

D. TABLE_EXISTS_ACTION=REPLACE

This option drops the current table in the database, and the import recreates the table as it is in the dump file.

$ impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=replace

Import: Release 11.2.0.2.0 - Production on Wed May 2 00:57:35 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01": /******** AS SYSDBA directory=exp_dir
dumpfile=emp.dmp logfile=imp1.log table_exists_action=replace
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."EMPLOYEE"                          5.953 KB       4 rows
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 00:57:40


Now if you check the LAST_DDL_TIME for the table, it will be the same as the import time.

$ date
Wed May  2 00:58:21 EDT 2012

  1* select OBJECT_NAME, to_char(LAST_DDL_TIME,'dd-mm-yyyy hh:mi:ss') created from dba_objects where OBJECT_NAME='EMPLOYEE'
SQL> /

OBJECT_NAME          CREATED
-------------------- -------------------
EMPLOYEE             02-05-2012 12:57:40

Oracle 10g/11g Data Pump EXPDP SAMPLE Parameter Option

SAMPLE=10

I am going to export only 10% of the data in the table.

expdp directory=EXPDIR dumpfile=object_list.dmp logfile=object_list.log tables=scott.object_list sample=10

Oracle Data Pump technology enables very high-speed movement of data and metadata from one
database to another.

1. What Is Data Pump Export?

Data Pump Export is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility. It can be imported on the same system, or it can be moved to another system and loaded there.

Data Pump Export Modes

* Full Export Mode
* Schema Mode
* Table Mode
* Tablespace Mode

Full Export Mode: A full export is specified using the FULL parameter. In a full database export, the entire database is
unloaded. This mode requires that you have the EXP_FULL_DATABASE role.

Example:
The following is an example of using the FULL parameter. The dump file, expfull.dmp, is written to the dpump_dir2 directory.

> expdp hr/hr DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=y NOLOGFILE=y

Schema Mode: A schema export is specified using the SCHEMAS parameter. This is the default export mode.
If you have the EXP_FULL_DATABASE role, then you can specify a list of schemas and optionally
include the schema definitions themselves, as well as system privilege grants to those schemas.
If you do not have the EXP_FULL_DATABASE role, you can export only your own schema.
Example:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr,sh,oe

Table Mode: A table export is specified using the TABLES parameter. In table mode, only a specified set of tables,
partitions, and their dependent objects are unloaded. You must have the EXP_FULL_DATABASE role to
specify tables that are not in your own schema. All specified tables must reside in a single schema.

Example:

The following example shows a simple use of the TABLES parameter to export three tables found
in the hr schema: employees, jobs, and departments. Because user hr is exporting tables found

in the hr schema, the schema name is not needed before the table names.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments

Tablespace Mode: A tablespace export is specified using the TABLESPACES parameter. In tablespace mode, only the tables contained in the specified set of tablespaces are unloaded.

Example:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4, tbs_5, tbs_6

2. What Is Data Pump Import?

Data Pump Import is a utility for loading an export dump file set into a target system.

Data Pump Import Modes

* Full Import Mode
* Schema Mode
* Table Mode
* Tablespace Mode

Full Import Mode:

> impdp hr/hr DUMPFILE=dpump_dir1:expfull.dmp FULL=y LOGFILE=dpump_dir2:full_imp.log

Schema Mode:

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=skip.log SKIP_UNUSABLE_INDEXES=y

Table Mode:

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs

Tablespace Mode:

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLESPACES=tbs_1,tbs_2,tbs_3,tbs_4

1. What is the use of the CONSISTENT option in exp?

Cross-table consistency. Implements SET TRANSACTION READ ONLY. Default value N.
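For example, a sketch (the credentials and file name are placeholders):

exp scott/tiger owner=scott file=scott_consistent.dmp consistent=y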

2. What is the use of the DIRECT=Y option in exp?

Setting DIRECT=Y extracts data by reading it directly, bypassing the SGA and the SQL command-processing layer (the evaluating buffer), so it should be faster. Default value N.
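A sketch (the credentials and file name are placeholders):

exp scott/tiger tables=employee file=emp_direct.dmp direct=y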
3. What is the use of the COMPRESS option in exp?

Imports into one extent. It specifies how export manages the initial extent for the table data, and it is helpful during database re-organization. Export the objects (especially tables and indexes) with COMPRESS=Y: if a table was spanning 20 extents of 1M each (which is not desirable for performance) and you export it with COMPRESS=Y, the generated DDL will have an INITIAL extent of 20M, so the extents will be coalesced on import. Sometimes it is desirable to export with COMPRESS=N, in situations where you do not have contiguous space on disk (tablespace) and do not want imports to fail.
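A sketch of both variants (the file names are placeholders):

exp scott/tiger tables=employee file=emp_compress.dmp compress=y
exp scott/tiger tables=employee file=emp_nocompress.dmp compress=n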
4. How to improve exp performance?

1. Set the BUFFER parameter to a high value. Default is 256KB.
2. Stop unnecessary applications to free up resources.
3. If you are running multiple sessions, make sure they write to different disks.
4. Do not export to NFS (Network File System). Exporting to local disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=yes (direct mode export).
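A sketch combining some of these settings (credentials and values are illustrative; note that BUFFER applies to conventional path exports, while RECORDLENGTH is the setting that matters for direct path):

exp system/manager full=y file=full.dmp direct=y recordlength=65535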

5. How to improve imp performance?

1. Place the file to be imported on a disk separate from the datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to a big size.
4. Stop redo log archiving, if possible.
5. Use COMMIT=n, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It is advisable to drop indexes before importing to speed up the import process, or to set INDEXES=N and build the indexes after the import. Indexes can easily be recreated after the data has been successfully imported.
8. Use STATISTICS=NONE.
9. Disable INSERT triggers, as they fire during import.
10. Set the parameter COMMIT_WRITE=NOWAIT (in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during import.
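A sketch combining some of these options (credentials, file name and buffer size are illustrative):

imp scott/tiger file=full.dmp buffer=10485760 commit=n indexes=n statistics=none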
6. What is the use of the INDEXFILE option in imp?

It writes the DDL of the objects in the dump file into the specified file.
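For example (file names are placeholders; no data is actually imported when INDEXFILE is specified):

imp scott/tiger file=emp.dmp full=y indexfile=emp_ddl.sql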
7. What is the use of the IGNORE option in imp?

It ignores errors during the import (such as "object already exists") and continues the import.
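For example (a sketch):

imp scott/tiger file=emp.dmp tables=employee ignore=y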
8. What are the differences between expdp and exp (Data Pump vs. normal exp/imp)?

Data Pump is server-centric (files will be on the server).
Data Pump has APIs, so we can run Data Pump jobs from procedures.
In Data Pump, we can stop and restart jobs.
Data Pump will do parallel execution.
Tapes and pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user if the user doesn't exist.
9. Why is expdp faster than exp (or why is Data Pump faster than conventional export/import)?
Data Pump is block mode, exp is byte mode.
Data Pump will do parallel execution.
Data Pump uses direct path API.
10. How to improve expdp performance?

Use the PARALLEL option, which increases the number of worker threads. It should be set based on the number of CPUs.

11. How to improve impdp performance?

Use the PARALLEL option, which increases the number of worker threads. It should be set based on the number of CPUs.
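A sketch using PARALLEL (credentials are placeholders; the %U substitution variable lets each worker write to its own dump file):

expdp system/manager schemas=scott directory=exp_dir dumpfile=scott_%U.dmp parallel=4 logfile=scott_exp.log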
12. In Data Pump, where is the job info stored (or, if you restart a job, how does Data Pump know where to resume)?

Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME, which is deleted once the job is done. From this table, Oracle finds out how much of the job has completed, where to continue from, etc.
The default export job name is SYS_EXPORT_XXXX_01, where XXXX can be FULL, SCHEMA or TABLE.
The default import job name is SYS_IMPORT_XXXX_01, where XXXX can be FULL, SCHEMA or TABLE.
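You can list running jobs and re-attach to one, for example (the job name shown is illustrative):

SQL> select owner_name, job_name, state from dba_datapump_jobs;

$ expdp system/manager attach=SYS_EXPORT_SCHEMA_01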
13. What is the order of importing objects in impdp?

Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views
14. How to import only metadata?

CONTENT=METADATA_ONLY
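For example (a sketch):

impdp system/manager directory=exp_dir dumpfile=emp.dmp content=metadata_only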
15. How to import into a different user/tablespace/datafile/table?
REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
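A sketch combining two of the remap parameters (the schema and tablespace names are placeholders):

impdp system/manager directory=exp_dir dumpfile=emp.dmp remap_schema=scott:hr remap_tablespace=users:example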
16. How to export/import without using external directory?
17. Using Data Pump, how to export in higher version (11g) and import into lower version
(10g), can we import to 9i?
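For question 17, the usual approach is to set the VERSION parameter on the export to the target release; importing into 9i is not possible, since Data Pump was introduced in 10g. A sketch (the object and file names are placeholders):

expdp system/manager directory=exp_dir dumpfile=emp_v102.dmp version=10.2 tables=scott.employee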
18. Using normal exp/imp, how to export in higher version (11g) and import into lower
version (10g/9i)?
19. How to do transport tablespaces (and across platforms) using exp/imp or expdp/impdp?
