Data Pump Schema Refresh

Schema refresh is a regular job for any DBA, especially during migration projects, so today I decided to post about a schema refresh.

Here we assume the schema SCOTT is refreshed from the source (PROD) to the target (TEST) on an Oracle 11g server.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

On the source side (PROD):

Create a directory object or use an existing one (for example, data_pump_dir) and grant read and write permissions on it to the SYSTEM user.

SQL> grant read, write on directory data_pump_dir to system;

Grant succeeded.

Step 1: Exporting the data from PROD (source)

Create a shell script containing the expdp command, then run it in the background with nohup:

$ vi expdp_refresh_schema.sh

$ expdp system/****@sourcehostname dumpfile=expdpschema.dmp directory=data_pump_dir logfile=export.log schemas=scott

$ nohup sh expdp_refresh_schema.sh > refresh_schema.out &
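For reference, a minimal sketch of what expdp_refresh_schema.sh might contain; the password and connect string are placeholders to adapt:

# expdp_refresh_schema.sh -- illustrative sketch only
expdp system/****@sourcehostname \
  dumpfile=expdpschema.dmp \
  directory=data_pump_dir \
  logfile=export.log \
  schemas=scott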

Step 2: Copying the dump files from source to target

To copy dump files from one server to another, you can use WinSCP (a graphical tool for copying files between Windows and Linux), FTP, SFTP, SCP, etc.

$ scp  expdpschema.dmp   system@TargetHostname:/home/oracle/datapump

Here I'm copying the dump file from the source to /home/oracle/datapump on the target.

Step 3: Importing data from the dump file into the target database

Before importing the dump file into the target (TEST), make sure you back up or drop all existing objects in that schema; to clear all objects from a particular schema, run the script from here (an illustrative sketch also follows below). Also make sure the data_pump_dir directory object on the target points to the location the dump file was copied to (here /home/oracle/datapump), or adjust the DIRECTORY parameter accordingly.
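As an illustration only (not the script linked above), here is a minimal PL/SQL sketch that drops the most common object types owned by NEWSCOTT on the target; review and test it before running it against any real schema:

-- Illustrative only: drop common object types owned by NEWSCOTT (run as a DBA user)
BEGIN
  FOR obj IN (SELECT object_name, object_type
                FROM dba_objects
               WHERE owner = 'NEWSCOTT'
                 AND object_type IN ('TABLE','VIEW','SEQUENCE','PROCEDURE','FUNCTION','PACKAGE','SYNONYM'))
  LOOP
    IF obj.object_type = 'TABLE' THEN
      EXECUTE IMMEDIATE 'DROP TABLE NEWSCOTT."' || obj.object_name || '" CASCADE CONSTRAINTS';
    ELSE
      EXECUTE IMMEDIATE 'DROP ' || obj.object_type || ' NEWSCOTT."' || obj.object_name || '"';
    END IF;
  END LOOP;
END;
/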

$ impdp system/****@targethostname dumpfile=expdpschema.dmp directory=data_pump_dir logfile=import.log remap_schema=scott:newscott

Step 4: Verify target database objects against the source

SQL> select count(*) from dba_objects where owner='NEWSCOTT';

SQL> select count(*) from dba_tables where owner='NEWSCOTT';

The above counts should match those of the source SCOTT schema.
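For a finer-grained check, you can compare counts by object type on both sides (use owner SCOTT on the source and NEWSCOTT on the target):

SQL> select object_type, count(*) from dba_objects where owner='NEWSCOTT' group by object_type order by object_type;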

READ MORE………
To Kill a running Data pump Job :  http://chandu208.blogspot.com/2011/09/data-pump-scenarios.html
About data pump :  http://chandu208.blogspot.com/2011/04/oracle-data-pump.html

11g data pump parameters

Export Parameters :

Parameter Description
abort_step Undocumented feature
access_method Data Access Method – default is Automatic
attach Attach to existing job – no default
cluster Start workers across cluster; default is YES
compression Reduce the size of dump file contents: default is METADATA_ONLY
content Content to export: default is ALL
current_edition Current edition: ORA$BASE is the default
data_options Export data layer options
directory Default directory specification
dumpfile dumpfile names: format is (file1,…) default is expdat.dmp
encryption Encryption type to be used: default varies
encryption_algorithm Encryption algorithm to be used: default is AES128
encryption_mode Encryption mode to be used: default varies
encryption_password Encryption key to be used
estimate Calculate size estimate: default is BLOCKS
estimate_only Only estimate the length of the job: default is N
exclude Export exclude option: no default
filesize file size: the size of export dump files
flashback_time database time to be used for flashback export: no default
flashback_scn system change number to be used for flashback export: no default
full indicates a full mode export
include export include option: no default
ip_address IP Address for PLSQL debugger
help help: display description on export parameters, default is N
job_name Job Name: no default
keep_master keep_master: Retain job table upon completion
log_entry logentry
logfile log export messages to specified file
metrics Enable/disable object metrics reporting
mp_enable Enable/disable multi-processing for current session
network_link Network mode export
nologfile No export log file created
package_load Specify how to load PL/SQL objects
parallel Degree of Parallelism: default is 1
parallel_threshold Degree of DML Parallelism
parfile parameter file: name of file that contains parameter specifications
query query used to select a subset of rows for a table
remap_data Transform data in user tables
reuse_dumpfiles reuse_dumpfiles: reuse existing dump files; default is No
sample Specify percentage of data to be sampled
schemas schemas to export: format is ‘(schema1, .., schemaN)’
service_name Service name that job will charge against
silent silent: display information, default is NONE
status Interval between status updates
tables Tables to export: format is ‘(table1, table2, …, tableN)’
tablespaces tablespaces to transport/recover: format is ‘(ts1,…, tsN)’
trace Trace option: enable sql_trace and timed_stat, default is 0
transport_full_check TTS perform test for objects in recovery set: default is N
transportable Use transportable data movement: default is NEVER
transport_tablespaces Transportable tablespace option: default is N
tts_closure_check Enable/disable transportable containment check: def is Y
userid user/password to connect to oracle: no default
version Job version: Compatible is the default

Import Parameters :

Parameter Description
abort_step Undocumented feature
access_method Data Access Method – default is Automatic
attach Attach to existing job – no default
cluster Start workers across cluster; default is Y
content Content to import: default is ALL
data_options Import data layer options
current_edition Applications edition to be used on local database
directory Default directory specification
dumper_directory Directory for stream dumper
dumpfile import dumpfile names: format is (file1, file2…)
encryption_password Encryption key to be used
estimate Calculate size estimate: default is BLOCKS
exclude Import exclude option: no default
flashback_scn system change number to be used for flashback import: no default
flashback_time database time to be used for flashback import: no default
full indicates a full Mode import
help help: display description of import parameters, default is N
include import include option: no default
ip_address IP Address for PLSQL debugger
job_name Job Name: no default
keep_master keep_master: Retain job table upon completion
logfile log import messages to specified file
master_only only import the master table associated with this job
metrics Enable/disable object metrics reporting
mp_enable Enable/disable multi-processing for current session
network_link Network mode import
nologfile No import log file created
package_load Specify how to load PL/SQL objects
parallel Degree of Parallelism: default is 1
parallel_threshold Degree of DML Parallelism
parfile parameter file: name of file that contains parameter specifications
partition_options Determine how partitions should be handled: default is NONE
query query used to select a subset of rows for a table
remap_data Transform data in user tables
remap_datafile Change the name of the source datafile
remap_schema Remap source schema objects to new schema
remap_table Remap tables to a different name
remap_tablespace Remap objects to different tablespace
reuse_datafiles Re-initialize existing datafiles (replaces DESTROY)
schemas schemas to import: format is ‘(schema1, …, schemaN)’
service_name Service name that job will charge against
silent silent: display information, default is NONE
skip_unusable_indexes Skip indexes which are in the unusable state
source_edition Applications edition to be used on remote database
sqlfile Write appropriate SQL DDL to specified file
status Interval between status updates
streams_configuration import streams configuration metadata
table_exists_action Action taken if the table to import already exists
tables Tables to import: format is ‘(table1, table2, …, tableN)
tablespaces tablespaces to transport: format is ‘(ts1,…, tsN)’
trace Trace option: enable sql_trace and timed_stat, default is 0
transform Metadata_transforms
transportable Use transportable data movement: default is NEVER
transport_datafiles List of datafiles to be plugged into target system
transport_tablespaces Transportable tablespace option: default is N
transport_full_check Verify that Tablespaces to be used do not have dependencies
tts_closure_check Enable/disable transportable containment check: def is Y
userid user/password to connect to oracle: no default
version Job version: Compatible is the default

The following commands are valid while in interactive mode.

Command               Description
--------------------  ----------------------------------------------------------
ADD_FILE              Add dump file to dump file set.
CONTINUE_CLIENT       Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT           Quit client session and leave job running.
FILESIZE              Default file size (bytes) for subsequent ADD_FILE commands.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
START_JOB             Start/resume current job.
STATUS[=interval]     Frequency (secs) at which job status is monitored; the default (0) shows new status when available.
STOP_JOB              Orderly shutdown of job execution; exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job.

Oracle Export & Import

exp and imp are utilities present in the $ORACLE_HOME/bin directory and are installed when Oracle is installed. Their prime purpose is to move logical objects out of and into the database respectively – for example dumping all of the tables owned by a user to a single file is achieved using the exp utility.

exp and imp are the executables that allow you to make exports and imports of data objects (such as tables). Therefore, logical backups can be made with exp.

exp/imp allow you to transfer data across databases that reside on different hardware platforms and/or different Oracle versions. If the data is exported from one Oracle version and imported into another, use the exp of the lower version and the imp matching the target version. That means, if something needs to be exported from 10g into 9i, it must be exported with 9i's exp.

imp doesn't re-create an already existing table; it either errors out or, when told to (IGNORE=y), ignores the error and loads the rows into the existing table.

In order to use exp and imp, the catexp.sql script must be run. catexp.sql basically creates the exp_full_database and imp_full_database roles.

It is found under $ORACLE_HOME/rdbms/admin:

SQL> @?/rdbms/admin/catexp

The Export and Import tools support four modes of operation:

  • FULL : Exports all the objects in all schemas 
  • OWNER : Exports objects only belonging to the given OWNER 
  • TABLES : Exports Individual Tables 
  • TABLESPACE : Export all objects located in a given TABLESPACE.

EXPORT & IMPORT are used for the following tasks:

  • Backup ORACLE data in operating system files
  • Restore tables that were dropped
  • Save space or reduce fragmentation in the database
  • Move data from one owner to another

EXPORT Utility Commands:

Exporting of ORACLE database objects is controlled by parameters. To get familiar with the EXPORT parameters, type:
exp help=y
You will get a short description of each parameter, and the default settings will be shown.

Example of Exporting Full Database

$exp USERID=scott/tiger FULL=y FILE=myfull.dmp

In the above command, the FILE option specifies the name of the dump file, the FULL option specifies that you want to export the full database, and the USERID option specifies the user account used to connect to the database.

Note: To perform a full export the user should have the DBA role or the EXP_FULL_DATABASE privilege.

Example of Exporting Schemas

$exp USERID=scott/tiger OWNER=(SCOTT,ALI) FILE=exp_own.dmp

The above command will export all the objects stored in the SCOTT and ALI schemas.

Exporting Individual Tables

$exp USERID=scott/tiger TABLES=(scott.emp,scott.sales) FILE=exp_tab.dmp

This will export scott’s emp and sales tables.

Exporting Consistent Image of the tables

If you include the CONSISTENT=Y option in the export command, the Export utility will export a consistent image of the table, i.e. changes made to the table during the export operation will not be exported.
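For example, a hedged illustration of a consistent schema-level export (the dump file name is a placeholder):

$ exp scott/tiger OWNER=scott FILE=exp_scott_consistent.dmp CONSISTENT=y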

Using imp/exp across different Oracle versions

If exp and imp are used to move data between Oracle databases of different versions, the following rules apply:

  1. Exp must be of the lower version
  2. Imp must match the target version.

IMPORT Utility Commands:

Like EXPORT, the IMPORT utility is controlled by parameters. To get familiar with these parameters type: imp help=y
You will get a short description of usage and the default settings of the parameters.
To start IMPORT simply type imp. You will be prompted for your ORACLE userid and password. The next prompts depend on your answers.

Example Importing Individual Tables

$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(emp,dept)

This command will import only the EMP and DEPT tables into the SCOTT schema, and you will get output similar to the following:

Export file created by EXPORT:V10.00.00 via conventional path

import done in WE8DEC character set and AL16UTF16 NCHAR character set

. importing SCOTT's objects into SCOTT

. . importing table "DEPT"     4 rows imported

. . importing table "EMP"     14 rows imported

Import terminated successfully without warnings.

Example, Importing Tables of One User account into another User account

For example, suppose Ali has exported tables into a dump file mytables.dmp and now Scott wants to import them. To achieve this, Scott runs the following import command:

$imp scott/tiger FILE=mytables.dmp FROMUSER=ali TOUSER=scott

The import utility will then warn that the tables in the dump file were exported by user Ali, not you, and then proceed.

Example Importing Tables Using Pattern Matching

Suppose you want to import all tables from a dump file whose names match a particular pattern. To do so, use the "%" wildcard in the TABLES option. For example, the following command will import all tables whose names start with "a" and all tables whose names contain the letter "d".

$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(a%,%d%)

Migrating a Database across platforms.

The Export and Import utilities are the only method that Oracle supports for moving an existing Oracle database from one hardware platform to another. This includes moving between UNIX and NT systems and also moving between two NT systems running on different platforms.

The following steps present a general overview of how to move a database between platforms.

  1. As a DBA user, issue the following SQL query to get the exact names of all tablespaces. You will need this information later in the process.

SQL> SELECT tablespace_name FROM dba_tablespaces;

  2. As a DBA user, perform a full export from the source database, for example:

exp system/manager FULL=y FILE=myfullexp.dmp

  3. Move the dump file to the target database server. If you use FTP, be sure to copy it in binary format (by entering binary at the FTP prompt) to avoid file corruption.
  4. Create a database on the target server.
  5. Before importing the dump file, you must first create your tablespaces, using the information obtained in Step 1 (see the tablespace sketch below). Otherwise, the import will create the corresponding datafiles in the same file structure as at the source database, which may not be compatible with the file structure on the target system.
  6. As a DBA user, perform a full import with the IGNORE parameter enabled:

> imp system/manager FULL=y IGNORE=y FILE=myfullexp.dmp

Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the import to complete.
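For Step 5, here is a minimal sketch of pre-creating one tablespace on the target; the tablespace name, datafile path, and sizes are placeholders, not values from this post:

SQL> CREATE TABLESPACE users_data DATAFILE '/u02/oradata/TEST/users_data01.dbf' SIZE 500M AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;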

Oracle Data Pump

Introduction:
Oracle introduced Data Pump in Oracle Database 10g Release 1. This technology enables very high-speed transfer of data from one database to another. Data Pump provides two utilities, namely:

  1. Data Pump Export, which is invoked with the expdp command.
  2. Data Pump Import, which is invoked with the impdp command.

The above two utilities have a similar look and feel to the pre-Oracle 10g export and import utilities (exp and imp) but are completely separate. This means dump files generated by the original export utility (exp) cannot be imported by the new Data Pump import utility (impdp), and vice versa.

Data Pump Export (expdp) and Data Pump Import (impdp) are server-based rather than client-based, as is the case for the original export (exp) and import (imp). Because of this, dump files, log files, and SQL files are accessed relative to server-based directory paths. Data Pump requires that a directory object mapped to a file system directory be specified when invoking Data Pump export or import.

It is for this reason, and for convenience, that a directory object should be created before using the Data Pump export or import utilities. For example, to create a directory object named expdp_dir located at /u01/backup/exports, enter the following SQL statement:

SQL> create directory expdp_dir as '/u01/backup/exports';

Then grant read and write permissions to the users who will be performing the Data Pump export and import:

SQL> grant read, write on directory expdp_dir to system, user1, user2, user3;
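To confirm the directory object and the path it maps to, you can query dba_directories:

SQL> select directory_name, directory_path from dba_directories where directory_name = 'EXPDP_DIR';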

What is Oracle’s Data Pump?

Oracle simply states that Data Pump is a "very high-speed" mechanism for moving data and metadata from one database to another. Is Data Pump a complete replacement for export (exp) and import (imp)? Oracle clearly states that the only time you would/should use the original exp and imp is when you need backward compatibility with an earlier version that does not have Data Pump export and import (expdp and impdp): basically, if you want to import a 10g export into a pre-10g database, or import into a 10g database from a pre-10g database.

The following processes are involved in a Data Pump operation:

Client Process:     This process is initiated by the client utility (expdp or impdp). It makes calls to the Data Pump API. Once the Data Pump job is initiated, this process is not necessary for the progress of the job.

Shadow Process:     When a client logs in to the database, a foreground (shadow) process is created. It services the client's Data Pump API requests, creates the master table, and creates the Advanced Queuing queues used for communication. Once the client process ends, the shadow process also goes away.

Master Control Process:     The MCP controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the job into various metadata and data load or unload tasks and hands them over to the worker processes.

Worker Process:     The MCP creates worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP.

Data Pump is a server-side technology that can transfer large amounts of data very quickly, using parallel streams to achieve maximum throughput; it can be 15-45% faster than the older export/import utilities. Advantages of using Data Pump include:

  • Ability to estimate job times
  • Ability to restart failed jobs
  • Perform fine-grained object selection
  • Monitor running jobs
  • Directly load a database from a remote instance via the network
  • Remapping capabilities
  • Improved performance using parallel executions

Data Pump Uses

You can use Data Pump for the following:

  • migrating databases
  • copying databases
  • Transferring Oracle databases between different operating systems
  • Backing up important tables before you change them
  • Moving database objects from one tablespace to another
  • Transporting tablespaces between databases
  • Reorganizing fragmented table data
  • Extracting the DDL for tables and other objects such as stored procedures and packages

Data Pump components

Data Pump technology consists of three major components:

  • dbms_datapump – the main engine for driving data dictionary metadata loading and unloading
  • dbms_metadata – used to extract the appropriate metadata
  • command-line clients – expdp and impdp are the export/import equivalents

Invoking Data Pump Export

You can invoke Data Pump export from the command line. Export parameters can be specified directly on the command line.

Full Export Mode:    A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded. This mode requires the EXP_FULL_DATABASE role. Example:

$ expdp system/<password> DIRECTORY=expdp_dir DUMPFILE=expfull.dmp FULL=y LOGFILE=expfull.log

Schema Export Mode:   The schema export mode is invoked using the SCHEMAS parameter. If you do not have the EXP_FULL_DATABASE role, you can export only your own schema. If you have the EXP_FULL_DATABASE role, you can export several schemas in one go; optionally, you can include the system privilege grants as well.

$ expdp hr/hr DIRECTORY=expdp_dir DUMPFILE=schema_exp.dmp SCHEMAS=hr,sh,oe
Table Export Mode:  This export mode is specified using the TABLES parameter. In this mode, only the specified tables, partitions, and their dependent objects are exported. If you do not have the EXP_FULL_DATABASE role, you can export only tables in your own schema, and you can only specify tables in the same schema.

$ expdp hr/hr DIRECTORY=expdp_dir DUMPFILE=tables_exp.dmp TABLES=employees,jobs,departments
Invoking Data Pump Import

Data Pump import can be invoked from the command line. The import parameters can be specified directly on the command line.

Full Import Mode:   The full import mode loads the entire contents of the source (export) dump file into the target database. You must have been granted the IMP_FULL_DATABASE role on the target database. Data Pump import is invoked using the impdp command with the FULL parameter specified on the command line.
$ impdp system/<password> DIRECTORY=expdp_dir DUMPFILE=expfull.dmp FULL=y LOGFILE=impfull.log

Schema Import Mode:  The schema import mode is invoked using the SCHEMAS parameter. Only the contents of the specified schemas are loaded into the target database. The source dump file can be a full, schema-mode, table-mode, or tablespace-mode export file. If you have the IMP_FULL_DATABASE role, you can specify a list of schemas to load into the target database.
$ impdp hr/hr DIRECTORY=expdp_dir DUMPFILE=expfull.dmp SCHEMAS=hr,sh,oe
Table Import Mode:  This import mode is specified using the TABLES parameter. In this mode, only the specified tables, partitions, and their dependent objects are imported. If you do not have the IMP_FULL_DATABASE role, you can import only tables into your own schema.

$ impdp hr/hr DIRECTORY=expdp_dir DUMPFILE=expfull.dmp TABLES=employees,jobs,departments

Interesting Features

  • Data Pump export and import are not compatible with the old exp & imp. So if you need to import into a pre-10g database it is best to stick with the original export utility (exp).
  • There are new 10g features that are only supported (exported & imported) with the new Data Pump utility.
  • With Data Pump you can allocate parallel threads to fine-tune resource consumption during export and import. You can also have multiple client processes (expdp and impdp) attach to a Data Pump job. Both of these options allow you to throw more resources at getting the Data Pump job completed. Data Pump also works on a set of dump files rather than a single sequential dump file, and you can add dump files to a running dump process. I am sure we will have to be concerned here with too many I/O operations to a single disk area.
  • Restartability is supported for export jobs, and you can stop and restart jobs as you see fit. So if you see a job consuming too many resources, or running too long and about to affect production, you can just stop the job and restart it at a later time.
  • Because Data Pump works on the database server through packaged procedures and directory structures, you now have the ability to perform export and import operations over the network.
  • Various features allow for remapping, renaming, including, or excluding database objects and structures. Just as the DBMS_METADATA package (covered in the last two articles) can compare across schemas and translate one schema's ownership to another, Data Pump has these types of abilities for moving objects and data.
  • Versioning capabilities of database objects.
  • To use Data Pump you must have EXP_FULL_DATABASE or IMP_FULL_DATABASE, depending on whether you will be performing export or import operations. These roles allow you to expdp and impdp across ownership for items such as grants, resource plans, and schema definitions, and to remap, rename, or redistribute database objects and structures.
  • Access to data is through direct path and external tables. Both, under the covers, have the same data format, so Oracle can switch between the two depending on the best method to move your data; the choice partly depends on the targeted structure, as direct path cannot be used for some object types and database structures.
  • Monitoring mechanisms have been put in place to see how the Data Pump job is performing. You can monitor through the log file created or through one of three views that assist in the monitoring of Data Pump jobs:

DBA_DATAPUMP_JOBS shows all the active Data Pump jobs and details on the state of each job.

DBA_DATAPUMP_SESSIONS shows the user sessions that are attached to a Data Pump job.

V$SESSION_LONGOPS shows the progress of a Data Pump job in terms of how much work it needs to do and how much it has done. (Illustrative queries against these views appear after this feature list.)

  • Through the use of internal tables and what Oracle calls master and worker processes, there is intercommunication between the processes performing work on behalf of Data Pump, plus internal logging information that allows the user, and Oracle, to understand where in the process Data Pump is.
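As an illustration, a couple of queries against the monitoring views described above (column lists trimmed for readability):

SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;

SQL> select sid, serial#, opname, sofar, totalwork from v$session_longops where sofar <> totalwork;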

The available modes are as follows:

  • Full Import/Export Mode
  • Schema Mode
  • Table Mode
  • Tablespace Mode
  • Transportable Tablespace Mode

Running Data Pump

You can either run Data Pump from the command line, specifying options directly, or use a parameter file. There are many options, so it is best to check the Oracle documentation; I have given a few examples below.

Exporting

Database

# expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log

NOTE: Increase the parallel option based on the number of CPUs you have.

Schema

# expdp sys/password schemas=testuser dumpfile=data.dmp logfile=schema.log

Table

# expdp vallep/password tables=accounts,employees dumpfile=data.dmp content=metadata_only

Tablespace

# expdp vallep/password tablespaces=users dumpfile=data.dmp logfile=tablespace.log

Importing

Database

# impdp system/password full=y dumpfile=data.dmp nologfile=y

Schema change

# impdp system/password schemas='HR' remap_schema='HR:HR_TEST' content=data_only

# impdp system/passwd remap_schema='TEST:TEST3' tables=test logfile=… dumpfile=… directory=…

Other Options

directory specifies an Oracle directory object

filesize splits the dump file into pieces of a specific size (could be used if the filesystem has a 2GB limit)

parfile specify a parameter file (a sketch follows after this list)

compression compression is used by default, but you can turn it off

exclude/include metadata filtering

query selectively export table data using a SQL statement

estimate tells you how much space is required without running the job

network_link you can perform an export across a network

encryption you can encrypt data within the dump file

parallel increase worker processes to improve throughput; base it on the number of CPUs

remap_schema move objects from one schema to another

remap_datafile change the name of the datafile when moving across different systems

remap_tablespace move from one tablespace to another
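Since parfile is listed above, here is a hedged sketch of driving an export from a parameter file; the file name and values are illustrative only:

$ cat exp_scott.par
directory=data_pump_dir
dumpfile=scott_%U.dmp
filesize=2G
schemas=scott
parallel=2
logfile=exp_scott.log

$ expdp system/password parfile=exp_scott.par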

Excluding and Including Objects during Export

You can exclude objects while performing an export by using the EXCLUDE option of the Data Pump utility. For example, if you are exporting a schema and don't want to export tables whose names start with "A", you can type the following command:

$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT EXCLUDE=TABLE:"like 'A%'"

Then all tables in Scott's schema whose names start with "A" will not be exported.

Similarly, you can use the INCLUDE option to export only certain objects, like this:

$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT INCLUDE=TABLE:"like 'A%'"

This is the opposite of the EXCLUDE option, i.e. it will export only those tables of Scott's schema whose names start with "A".

Similarly, you can also exclude other object types such as INDEX, CONSTRAINT, GRANT, USER, and SCHEMA.
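For instance, an illustrative command that excludes more than one object type (each EXCLUDE is given separately; the dump file name is a placeholder):

$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_noidx.dmp SCHEMAS=SCOTT EXCLUDE=INDEX EXCLUDE=GRANT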

Using Query to Filter Rows during Export

You can use the QUERY option to export only the required rows. For example, the following will export only those rows of the employees table whose dept_id is greater than 10 and whose salary is above 10000.

$ expdp hr/hr QUERY=emp:'"WHERE dept_id > 10 AND sal > 10000"' NOLOGFILE=y DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp
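Because the operating-system shell tends to mangle the quotes, the QUERY clause is often easier to keep in a parameter file; a hedged sketch (the file name is illustrative):

$ cat exp_query.par
directory=dpump_dir1
dumpfile=exp1.dmp
nologfile=y
query=emp:"WHERE dept_id > 10 AND sal > 10000"

$ expdp hr/hr parfile=exp_query.par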

Suspending and Resuming Export Jobs (Attaching and Re-Attaching to the Jobs)

You can suspend running export jobs and later resume or kill them using Data Pump Export. You can start a job on one client machine and then, if some other work comes up, suspend it. Afterwards, when that work has finished, you can continue the job from the same client where you stopped it, or you can restart it from another client machine.

For example, suppose a DBA starts a full database export on one client machine, CLNT1, by typing the following command:

$ expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfulljob

After some time, the DBA wants to stop this job temporarily, so he presses CTRL+C to enter interactive mode. He then gets the Export> prompt, where he can type interactive commands.

Now he wants to stop this export job, so he types the following command:

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state and exits the client.

After finishing his other work, the DBA wants to resume the export job, but the client machine from which he originally started the job is locked because the user has locked his/her cabin. So the DBA goes to another client machine and reattaches to the job by typing the following command:

$ expdp scott/tiger@mydb ATTACH=myfulljob

After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.

Export> CONTINUE_CLIENT

A message is displayed that the job has been reopened, and processing status is output to the client.

Note: After reattaching to the job, the DBA can also kill it by typing KILL_JOB if he doesn't want to continue with the export job.

READ MORE……..

Data pump scenarios: http://chandu208.blogspot.com/2011/09/data-pump-scenarios.html

DATA PUMP Scenarios

1)   Import 11g data dump into 10g

You can use Oracle Data Pump to export a dump from an 11g server and import it into 10g using the Data Pump parameter called VERSION. When you use VERSION, Data Pump exports only the objects that are compatible with your target version.

So, if your database is 11g and you want to export from 11g and import into 10g:

From the 11g database,

$ expdp test_schema_name/password directory=datapump schemas=test_schema_name version=10.2.0.4.0

Once the export is done, you do the regular import on the 10g server.
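A hedged sketch of that 10g-side import; since no DUMPFILE was given above, expdp would have used the default name expdat.dmp, so adjust the names to your environment:

$ impdp test_schema_name/password directory=datapump dumpfile=expdat.dmp logfile=imp_test_schema.log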

2)   Import multiple dump files

If the dump file is large, it is usually split into smaller chunks for easier FTP upload/download.

If you are trying to import a dump file that is split into many chunks, then you need to modify the DUMPFILE parameter to include %U.

Ex: If the dump files are named EXP_PROD_1.dmp, EXP_PROD_2.dmp, etc., then use DUMPFILE=EXP_PROD_%U.dmp
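For example, an illustrative import using the substitution variable (the directory and log file names are placeholders):

$ impdp system/password directory=data_pump_dir dumpfile=EXP_PROD_%U.dmp logfile=imp_prod.log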

3)   How to kill data pump jobs

When you import or export using the Data Pump impdp or expdp commands, the work is done by a job. You also have the option of providing a job name using the JOB_NAME parameter.

The following SQL will give you the list of Data Pump jobs:

                   select * from dba_datapump_jobs;

If you want to kill your impdp or expdp job:

1) Make sure that your impdp/expdp command prompt window is active.
2) Press Control-C; it will pause the job. Don't press Control-C again and don't close the command prompt: that would just close the window while the job keeps running in the background.
3) Type KILL_JOB
ex: Import> kill_job
Are you sure you wish to stop this job (y/n): y

If by mistake you closed the window and your import/export job is still running:

1) Get the name of the job using
select * from dba_datapump_jobs

2) Open a new command prompt window. If you want to kill your import job type
impdp username/password@database attach=name_of_the_job

3) Once you are attached to the job, type KILL_JOB
ex: Import> kill_job
Are you sure you wish to stop this job (y/n): y

Once your job is killed, it will no longer show up in dba_datapump_jobs.

4) REUSE_DUMPFILES: (overwrite an existing dump file)

This option is available with the Data Pump expdp utility. Normally, when you perform an export using expdp and the dump file is already present in the export directory, it throws the error "ORA-27038: created file already exists". This happens when you want to perform repetitive exports using the same dump file name. Oracle provides the option reuse_dumpfiles=[Y/N] to avoid this error; set it to Y to overwrite the existing dump file. By default the option is N.

$ expdp scott/tiger directory=exp_dir dumpfile=x.dmp tables=example reuse_dumpfiles=y