Oracle 11g Data Pump parameters

Export Parameters:

Parameter Description
abort_step Undocumented feature
access_method Data Access Method – default is Automatic
attach Attach to existing job – no default
cluster Start workers across cluster; default is YES
compression Compression to apply to the dump file: default is METADATA_ONLY
content Content to export: default is ALL
current_edition Current edition: ORA$BASE is the default
data_options Export data layer options
directory Default directory specification
dumpfile Dump file names: format is (file1,…); default is expdat.dmp
encryption Encryption type to be used: default varies
encryption_algorithm Encryption algorithm to be used: default is AES128
encryption_mode Encryption mode to be used: default varies
encryption_password Encryption key to be used
estimate Calculate size estimate: default is BLOCKS
estimate_only Only estimate the length of the job: default is N
exclude Export exclude option: no default
filesize Maximum size of each export dump file
flashback_time database time to be used for flashback export: no default
flashback_scn system change number to be used for flashback export: no default
full indicates a full mode export
include export include option: no default
ip_address IP Address for PLSQL debugger
help help: display description on export parameters, default is N
job_name Job Name: no default
keep_master keep_master: Retain job table upon completion
log_entry Undocumented feature
logfile log export messages to specified file
metrics Enable/disable object metrics reporting
mp_enable Enable/disable multi-processing for current session
network_link Network mode export
nologfile No export log file created
package_load Specify how to load PL/SQL objects
parallel Degree of Parallelism: default is 1
parallel_threshold Degree of DML Parallelism
parfile parameter file: name of file that contains parameter specifications
query query used to select a subset of rows for a table
remap_data Transform data in user tables
reuse_dumpfiles reuse_dumpfiles: reuse existing dump files; default is No
sample Specify percentage of data to be sampled
schemas schemas to export: format is ‘(schema1, .., schemaN)’
service_name Service name that job will charge against
silent silent: display information, default is NONE
status Interval between status updates
tables Tables to export: format is ‘(table1, table2, …, tableN)’
tablespaces tablespaces to transport/recover: format is ‘(ts1,…, tsN)’
trace Trace option: enable sql_trace and timed_statistics, default is 0
transport_full_check TTS perform test for objects in recovery set: default is N
transportable Use transportable data movement: default is NEVER
transport_tablespaces Transportable tablespace option: default is N
tts_closure_check Enable/disable transportable containment check: def is Y
userid user/password to connect to oracle: no default
version Job version: Compatible is the default
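As a worked example of combining these export parameters, the sketch below writes a parameter file and shows the expdp invocation that would use it. The schema name SCOTT, the directory object DATA_PUMP_DIR, and the credentials are illustrative assumptions, not values from this post.

```shell
# Sketch: schema-mode export driven by a parameter file.
# SCOTT, DATA_PUMP_DIR, and the login are hypothetical placeholders.
cat > exp_scott.par <<'EOF'
directory=DATA_PUMP_DIR
dumpfile=scott_%U.dmp
logfile=exp_scott.log
schemas=SCOTT
parallel=2
compression=METADATA_ONLY
exclude=STATISTICS
EOF

# With an Oracle client installed you would then run:
#   expdp system/password parfile=exp_scott.par
cat exp_scott.par
```

Using parfile keeps quoting simple (no shell escaping for exclude/query clauses) and makes the job repeatable.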

Import Parameters:

Parameter Description
abort_step Undocumented feature
access_method Data Access Method – default is Automatic
attach Attach to existing job – no default
cluster Start workers across cluster; default is Y
content Content to import: default is ALL
data_options Import data layer options
current_edition Applications edition to be used on local database
directory Default directory specification
dumper_directory Directory for stream dumper
dumpfile import dumpfile names: format is (file1, file2…)
encryption_password Encryption key to be used
estimate Calculate size estimate: default is BLOCKS
exclude Import exclude option: no default
flashback_scn system change number to be used for flashback import: no default
flashback_time database time to be used for flashback import: no default
full indicates a full Mode import
help help: display description of import parameters, default is N
include import include option: no default
ip_address IP Address for PLSQL debugger
job_name Job Name: no default
keep_master keep_master: Retain job table upon completion
logfile log import messages to specified file
master_only only import the master table associated with this job
metrics Enable/disable object metrics reporting
mp_enable Enable/disable multi-processing for current session
network_link Network mode import
nologfile No import log file created
package_load Specify how to load PL/SQL objects
parallel Degree of Parallelism: default is 1
parallel_threshold Degree of DML Parallelism
parfile parameter file: name of file that contains parameter specifications
partition_options Determine how partitions should be handled: default is NONE
query query used to select a subset of rows for a table
remap_data Transform data in user tables
remap_datafile Change the name of the source datafile
remap_schema Remap source schema objects to new schema
remap_table Remap tables to a different name
remap_tablespace Remap objects to different tablespace
reuse_datafiles Re-initialize existing datafiles (replaces DESTROY)
schemas schemas to import: format is ‘(schema1, …, schemaN)’
service_name Service name that job will charge against
silent silent: display information, default is NONE
skip_unusable_indexes Skip indexes which are in the Unusable state
source_edition Applications edition to be used on remote database
sqlfile Write appropriate SQL DDL to specified file
status Interval between status updates
streams_configuration import streams configuration metadata
table_exists_action Action taken if the table to import already exists
tables Tables to import: format is ‘(table1, table2, …, tableN)’
tablespaces tablespaces to transport: format is ‘(ts1,…, tsN)’
trace Trace option: enable sql_trace and timed_statistics, default is 0
transform Metadata transforms to apply to imported objects
transportable Use transportable data movement: default is NEVER
transport_datafiles List of datafiles to be plugged into target system
transport_tablespaces Transportable tablespace option: default is N
transport_full_check Verify that Tablespaces to be used do not have dependencies
tts_closure_check Enable/disable transportable containment check: def is Y
userid user/password to connect to oracle: no default
version Job version: Compatible is the default
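The remap_* and table_exists_action parameters above are the ones most often combined in practice. A minimal sketch, again using a parameter file and hypothetical names (SCOTT, SCOTT_DEV, DATA_PUMP_DIR):

```shell
# Sketch: import the dump into a different schema, replacing any
# tables that already exist. All object names are placeholders.
cat > imp_scott.par <<'EOF'
directory=DATA_PUMP_DIR
dumpfile=scott_%U.dmp
logfile=imp_scott.log
remap_schema=SCOTT:SCOTT_DEV
table_exists_action=replace
parallel=2
EOF

# With an Oracle client installed you would then run:
#   impdp system/password parfile=imp_scott.par
cat imp_scott.par
```

table_exists_action accepts skip, append, truncate, or replace; replace drops and recreates the table, so use it only on targets you can safely overwrite.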

The following commands are valid while in interactive mode.

Command            Description
-----------------  ----------------------------------------------------------
ADD_FILE           Add dumpfile to dumpfile set.
CONTINUE_CLIENT    Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT        Quit client session and leave job running.
FILESIZE           Default filesize (bytes) for subsequent ADD_FILE commands.
HELP               Summarize interactive commands.
KILL_JOB           Detach and delete job.
PARALLEL           Change the number of active workers for current job.
START_JOB          Start/resume current job.
STATUS[=interval]  Frequency (secs) at which job status is monitored; the default (0) shows new status when available.
STOP_JOB           Orderly shutdown of job execution; exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job.
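You reach this interactive prompt either by pressing Ctrl+C in a running client or by attaching to the job from another session with the ATTACH parameter. A sketch, where the job name and credentials are hypothetical:

```shell
# Sketch: re-attach to a running Data Pump job (requires an Oracle
# client; the job name and login below are placeholders).
ATTACH_CMD='expdp system/password attach=SYS_EXPORT_SCHEMA_01'
echo "$ATTACH_CMD"

# Once attached, the Export> prompt accepts the commands above, e.g.:
#   STATUS=30            report job status every 30 seconds
#   PARALLEL=4           change the number of active workers
#   STOP_JOB=IMMEDIATE   abort workers and stop the job right away
```

A job stopped with STOP_JOB (but not killed) can later be resumed by attaching again and issuing START_JOB.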
