Basics of Data Pump Export in Oracle 12c

The command-line program expdp launches Oracle 12c’s Data Pump Export. Every Data Pump Export job is estimated at the beginning, so you see the estimate before the actual export runs. Remember that estimates aren’t always completely accurate.

From the OS command line, launch Data Pump Export and have it show a list of the parameters:

expdp help=y

You see something like this:

Export: Release 12.1.0.1.0 - Production on Sat Jul 20 06:56:47 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:
  Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:
  Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
  Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
        or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
...output snipped...

You can see that Data Pump lists all the parameters you have to choose from and gives a brief explanation of each.

You can specify parameters at two locations:

  • On the command line

  • In a parameter file
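
For example, a bare-bones parameter file, say my_exp.par (the names here are just illustrative), might look like this:

    DIRECTORY=dmpdir
    DUMPFILE=hr_exp.dmp
    LOGFILE=hr_exp.log
    SCHEMAS=hr

You’d then point Data Pump at it with expdp hr PARFILE=my_exp.par.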

Here are some of the more useful parameters in detail:

  • COMPRESSION: This parameter compresses the output of Data Pump while the job is running, which is handy when space is at a premium. Compression slows the export down somewhat, but that’s to be expected.
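
    For example, this command (the file and directory names are illustrative) compresses both metadata and table data as it runs; note that compressing table data requires a license for the Advanced Compression option:

    expdp hr DIRECTORY=dmpdir DUMPFILE=hr_comp.dmp COMPRESSION=ALL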

  • CONTENT: This specifies what you want to export. Do you want just the object definitions? Just the data? Both? Decide what you want and specify accordingly.
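
    For example, to grab just the object definitions and none of the rows (names are illustrative):

    expdp hr DIRECTORY=dmpdir DUMPFILE=hr_ddl.dmp CONTENT=METADATA_ONLY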

  • DIRECTORY: This specifies the directory where you want the dump file to go. This is an Oracle Object directory, not a simple path on the OS.
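
    A DBA creates that directory object ahead of time in SQL; the path here is just an example:

    CREATE DIRECTORY dmpdir AS '/u01/app/oracle/dumps';
    GRANT READ, WRITE ON DIRECTORY dmpdir TO hr;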

  • DUMPFILE: This parameter names the dump file to be output. You can also have Data Pump number the files if you like. This numbering is handy when you use parallelism or have Data Pump break the job into multiple files of manageable size. To have Data Pump number the files, use the %U argument:

    DUMPFILE=my_dp_exp_%U.dmp

    Data Pump substitutes a two-digit number, starting at 01 and counting up to 99. What if you need more than 99 files? Try something like this:

    DUMPFILE=my_dp_exp_set_a_%U.dmp,my_dp_exp_set_b_%U.dmp

    You can have it dump to multiple files, which is especially useful when you’re parallelizing the output.

  • ESTIMATE: This parameter controls how the job size is estimated: BLOCKS (the default) or STATISTICS. If all you want is the size, the related parameter ESTIMATE_ONLY=YES stops the job after estimating instead of exporting anything, which is very handy when space is at a premium.
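
    For example, to see the estimate without exporting anything (the schema name is illustrative):

    expdp hr SCHEMAS=hr ESTIMATE_ONLY=YES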

  • EXCLUDE: You can exclude certain objects from the export. For example, say you want everything except the HR and OE schemas, and no views or functions either. EXCLUDE can have multiple entries. You can say this:

    EXCLUDE=SCHEMA:"IN ('HR','OE')"
    EXCLUDE=VIEW,FUNCTION
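
    Because the OS shell tends to eat those quotation marks, filters like these are easiest to keep in a parameter file. A hypothetical no_hr_oe.par could hold the whole job (run it with expdp system PARFILE=no_hr_oe.par):

    FULL=Y
    DIRECTORY=dmpdir
    DUMPFILE=full_no_hr_oe.dmp
    EXCLUDE=SCHEMA:"IN ('HR','OE')"
    EXCLUDE=VIEW,FUNCTION
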
  • INCLUDE: Mutually exclusive with EXCLUDE, this parameter lets you export only specific types of objects. When the list is small, it can be very useful:

    INCLUDE=VIEW,TRIGGER
  • FILESIZE: You can break your Data Pump Export into multiple files, which aids file management. For example, if you have a 200GB export to do, you might not want a 200GB dump file to manage afterward. Instead, use this parameter to break it into 4GB chunks or something similar.
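
    For example, these settings (the file name is illustrative) pair FILESIZE with the %U numbering shown earlier to keep each piece at 4GB:

    DUMPFILE=my_dp_exp_%U.dmp FILESIZE=4GB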

  • FLASHBACK_TIME: If you want to dump the data as of a time other than the present, you can use this parameter to specify a date and time. As long as the old data still sits within your database’s undo retention window, this parameter can be very useful.
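
    For example, this parameter-file entry (the timestamp is illustrative) dumps the data as it stood at 6 a.m. on July 20:

    FLASHBACK_TIME="TO_TIMESTAMP('2013-07-20 06:00:00', 'YYYY-MM-DD HH24:MI:SS')"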

  • NETWORK_LINK: You can export data from a remote database across a database link: set up the link first, then name it with this parameter.
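
    For example (the link, TNS alias, and credentials here are all hypothetical), create the link in SQL*Plus and then name it in the export:

    CREATE DATABASE LINK remote_db CONNECT TO hr IDENTIFIED BY hr_pwd USING 'remote_tns';
    expdp hr DIRECTORY=dmpdir NETWORK_LINK=remote_db DUMPFILE=remote_hr.dmp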

  • PARALLEL: To help speed up your dump, you can parallelize it. Try different values to find the most efficient number of processes for your system. As a rule of thumb, start with the number of CPUs you have, while keeping in mind what the storage media you’re writing to can handle.
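
    For example, here’s a four-way parallel export; the %U in the file name gives each worker process its own file (names are illustrative):

    expdp hr SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=my_dp_exp_%U.dmp PARALLEL=4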

  • SCHEMAS: This parameter gives a list of schemas to Data Pump and tells it what to get. By default, Data Pump exports the schema that’s logging in to do the job.
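
    For example, to export the HR and OE schemas together (exporting schemas other than your own requires the DATAPUMP_EXP_FULL_DATABASE role):

    expdp system DIRECTORY=dmpdir DUMPFILE=hr_oe.dmp SCHEMAS=hr,oe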

  • TABLES: This restricts the export to a list of tables.
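
    For example, using two tables from the sample HR schema:

    expdp hr DIRECTORY=dmpdir DUMPFILE=emp_dept.dmp TABLES=employees,departments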

  • TABLESPACES: This parameter restricts the export to a list of tablespaces.
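
    For example, to dump everything stored in the USERS tablespace (the file name is illustrative):

    expdp system DIRECTORY=dmpdir DUMPFILE=users_ts.dmp TABLESPACES=users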
