AWR Extract and Load

Sep 13, 2010 / By Daniel Fink


In preparing my presentation for the Michigan Oak Table Symposium, I came across AWR extract and load. While these scripts are documented in the Oracle manuals (kind of), I don't see much discussion online, which is a good barometer for the popularity of a feature: not much discussion means not very well known or used. Although the scripts ship in $ORACLE_HOME/rdbms/admin in 10.2, they receive little attention in the documentation.

One of the frustrations with AWR (and Statspack) has been that diagnosing past performance issues and trending depend on keeping a large number of snapshots online. This means more production storage and more resource consumption when you query the snapshots. Wouldn't it be nice to take the AWR data from your production system, load it into a DBA repository, and do all your querying there? Perhaps even run your own custom ETL to pre-aggregate the data and create custom views and procedures?

As of 10.2, Oracle supplies two scripts that enable you to extract AWR data and load it into another database (even one already running AWR snapshots). You can even take the AWR data from a 10.2 database on Solaris and load it into an 11.2 database on Windows XP (other variations may work… but these are the two versions I have handy). I also took an 11.2 database on Linux and loaded it into the Windows database.

Extracting AWR data is pretty straightforward. Log in as a DBA or appropriately privileged account and run $ORACLE_HOME/rdbms/admin/awrextr.sql.

PRODDB> @?/rdbms/admin/awrextr.sql
~  This script will extract the AWR data for a range of snapshots  ~
~  into a dump file.  The script will prompt users for the         ~
~  following information:                                          ~
~     (1) database id                                              ~
~     (2) snapshot range to extract                                ~
~     (3) name of directory object                                 ~
~     (4) name of dump file                                        ~
The database id and snapshot range questions are the same as you see when you run any AWR report. By providing a snapshot range, you can extract a subset of the AWR data. If you want to keep 30 days online but copy the data to your DBA repository weekly, you just select the snapshot range covering the prior week; there is no need to extract extra data and then delete it from the repository. Since the process uses Data Pump, you will have to provide the directory object to write the file to. The directory name must be entered in UPPERCASE, and there is no default value. The final prompt asks for the name of the dump file, but it is really the file prefix: Data Pump will append '.dmp' to the name you enter.
Using the dump file prefix: awrdat_31844_31860
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  The AWR extract dump file will be located
|  in the following directory/file:
|   /home/oracle
|   awrdat_31844_31860.dmp
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  *** AWR Extract Started ...
|  This operation will take a few moments. The
|  progress of the AWR extract operation can be
|  monitored in the following directory/file:
|   /home/oracle
|   awrdat_31844_31860.log
End of AWR Extract
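The dbid and snapshot range that awrextr.sql prompts for can be checked ahead of time. A minimal sketch, run on the source database:

```sql
-- List the snapshot ranges available for extraction; awrextr.sql
-- will prompt for the dbid and a snap_id range from this set.
SELECT dbid,
       MIN(snap_id) AS first_snap,
       MAX(snap_id) AS last_snap
  FROM dba_hist_snapshot
 GROUP BY dbid;
```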

The Data Pump export took 10 to 20 minutes to extract 7 days of AWR data. The files were less than 50 MB and could be compressed to less than half that size. FTP (or scp) the file to the DBA repository server and uncompress it. Make certain that the dump file is stored in a directory that is also defined as an Oracle directory object.
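If the repository database does not already have a directory object over the landing path, a DBA can create one first. A minimal sketch (the directory name and OS path are illustrative, and the path must already exist):

```sql
-- Run as a suitably privileged user on the repository database.
CREATE DIRECTORY awr_dumps AS '/u01/app/oracle/awr_dumps';
GRANT READ, WRITE ON DIRECTORY awr_dumps TO system;
```

When awrload.sql prompts for the directory, remember to enter the name in UPPERCASE (AWR_DUMPS here).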

The AWR load was also fairly straightforward, with one minor wrinkle with the dump file name.

SQL> @?/rdbms/admin/awrload
~  This script will load the AWR data from a dump file. The   ~
~  script will prompt users for the following information:    ~
~     (1) name of directory object                            ~
~     (2) name of dump file                                   ~
~     (3) staging schema name to load AWR data into           ~
When you are prompted for the name of the dump file, do not include the .dmp suffix! There won't be an error immediately; you get to answer several more questions before you find out the name is wrong.
Specify the Name of the Dump File to Load
Please specify the prefix of the dump file (.dmp) to load:
Enter value for file_name: awrdat_31672_31860
Loading from the file name: awrdat_31672_31860.dmp

The process will then prompt for the staging schema name; the default is AWR_STAGE. If you accept the default, the script will create the AWR_STAGE user after asking you for the default and temporary tablespaces. Once the AWR load process has completed, the script will drop the AWR_STAGE user.
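Under the covers, awrload.sql drives the undocumented SYS.DBMS_SWRF_INTERNAL package (the same package that shows up in its error stacks). A rough sketch of what the script does, with the caveat that this is an internal, unsupported API and the parameter names here are assumptions:

```sql
-- Illustrative only: awrload.sql handles staging-user creation and
-- cleanup around these calls. Directory and prefix are examples.
BEGIN
  -- Import the dump file into the staging schema...
  DBMS_SWRF_INTERNAL.AWR_LOAD(SCHNAME => 'AWR_STAGE',
                              DMPFILE => 'awrdat_31672_31860',
                              DMPDIR  => 'AWR_DUMPS');
  -- ...then move the staged rows into the SYS-owned AWR tables.
  DBMS_SWRF_INTERNAL.MOVE_TO_AWR(SCHNAME => 'AWR_STAGE');
END;
/
```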

After the process completes, the AWR tables now have new data in them! You can query DBA_HIST_SNAPSHOT or any of the other DBA_HIST views (including DBA_HIST_ACTIVE_SESS_HISTORY). Unfortunately, the standard AWR and ASH reports use the current database dbid, which won’t be the same as the dbid of the data you have just loaded. You will need to create your own AWR/ASH scripts or modify the ones provided…but that is for the next blog post!
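Since the loaded data keeps the dbid of its source database, a quick way to find and query it is to filter the DBA_HIST views on that dbid. A sketch (the dbid value shown is illustrative):

```sql
-- List the dbids now present in the repository.
SELECT DISTINCT dbid FROM dba_hist_snapshot;

-- Query the snapshots belonging to the loaded production data.
SELECT snap_id, begin_interval_time, end_interval_time
  FROM dba_hist_snapshot
 WHERE dbid = 2749799854   -- dbid of the loaded data (illustrative)
 ORDER BY snap_id;
```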


5 Responses to “AWR Extract and Load”

  • Doug Burns says:

    A bit more here ….

    Unfortunately, the standard AWR and ASH reports use the current database dbid, which won’t be the same as the dbid of the data you have just loaded. You will need to create your own AWR/ASH scripts or modify the ones provided…but that is for the next blog post!

    You can just use the ‘i’ versions of the reports for that.

    I didn’t know you were working with Pythian, Dan!

  • Doug Burns says:

    Haven’t been reading any blogs for a while, mate! Just catching up :-)

    I trust you’re having a great time in Ann Arbor – it looks like it.

  • wenmei says:

    One of the biggest issues with these Oracle export/import procedures is that some of the AWR tables are partitioned tables, so the import, which uses Data Pump, works ONLY if you have Enterprise Edition on the target (repository) database…
    I had to write my own scripts to export/import, and I am considering using PostgreSQL 9 with partitioning for my AWR repo.
    patrick boulay

  • syed says:

    Hi, I am using 11g Enterprise Edition Release, 64-bit Production, on Windows. I am trying to import AWR data but am not able to. Below are the steps:
    - @$O_H/rdbms/admin/awrextr.sql, gave directory name DATA_PUMP_DIR, filename testextractawr (got the dump file and text file in the “dpdump” folder)

    - @$O_H/rdbms/admin/awrload.sql, gave directory name DATA_PUMP_DIR, filename testextractawr, gave the schema name, used default tablespace names.

    I get errors as below…
    … Creating awrreport user

    | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    | Loading the AWR data from the following
    | directory/file:
    | C:\app\Administrator/admin/prod/dpdump/
    | TESTEXTRACTAWR.txt.dmp
    | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    | *** AWR Load Started …
    | This operation will take a few moments. The
    | progress of the AWR load operation can be
    | monitored in the following directory/file:
    | C:\app\Administrator/admin/prod/dpdump/
    | TESTEXTRACTAWR.txt.log
    ERROR at line 1:
    ORA-20102: user name 'awrreport' is invalid
    ORA-06512: at “SYS.DBMS_SWRF_INTERNAL”, line 411
    ORA-01918: user '' does not exist
    ORA-06512: at “SYS.DBMS_SWRF_INTERNAL”, line 1520
    ORA-06512: at line 4

    ERROR at line 1:
    ORA-20105: unable to move AWR data to SYS
    ORA-06512: at “SYS.DBMS_SWRF_INTERNAL”, line 2324
    ORA-44001: invalid schema
    ORA-06512: at line 4

    … Dropping awrreport user

    End of AWR Load

    Don't know how to go about it. Kindly help me out.
