
AWR Extract and Load

In preparing my presentation for the Michigan Oak Table Symposium, I came across AWR extract and load. While these scripts are documented in the Oracle manuals (kind of), I don't see much discussion of them online, which is a good barometer of an item's popularity: not much discussion means not very well known or used. Although the scripts ship in $ORACLE_HOME/rdbms/admin in 10.2, they are not documented in that release.

One of the frustrations with AWR (and Statspack) has been that diagnosing past performance issues and trending both depend on keeping a large number of snapshots online. That means more production storage, plus the resource consumption of queries running against the production repository. Wouldn't it be nice to take the AWR data from your production system, load it into a DBA repository, and do all your querying there? Perhaps even run your own custom ETL to pre-aggregate the data and create custom views and procedures?
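As a sketch of what such pre-aggregation could look like once the data has been loaded (the awr_daily_cpu table name and the MAX-minus-MIN diffing approach are just illustrations, and the diff is only meaningful between instance restarts):

-- Hypothetical daily rollup of CPU usage per loaded source database,
-- built in the repository from the standard DBA_HIST views.
CREATE TABLE awr_daily_cpu AS
SELECT sn.dbid,
       sn.instance_number,
       TRUNC(sn.begin_interval_time) AS snap_day,
       MAX(st.value) - MIN(st.value) AS cpu_centisecs
FROM   dba_hist_sysstat  st,
       dba_hist_snapshot sn
WHERE  st.dbid            = sn.dbid
AND    st.instance_number = sn.instance_number
AND    st.snap_id         = sn.snap_id
AND    st.stat_name       = 'CPU used by this session'
GROUP  BY sn.dbid, sn.instance_number, TRUNC(sn.begin_interval_time);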

As of 10.2, Oracle supplies two scripts that enable you to extract AWR data and load it into another database (even one already taking its own AWR snapshots). You can even take the AWR data from a 10.2 database on Solaris and load it into an 11.2 database on Windows XP (other combinations may work, but these are the two versions I have handy). I also took an 11.2 database on Linux and loaded its data into the Windows database.

Extracting AWR data is pretty straightforward. Log in as a DBA or other appropriately privileged account and run $ORACLE_HOME/rdbms/admin/awrextr.sql.

PRODDB> @?/rdbms/admin/awrextr.sql
~~~~~~~~~~~~~
AWR EXTRACT
~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  This script will extract the AWR data for a range of snapshots  ~
~  into a dump file.  The script will prompt users for the         ~
~  following information:                                          ~
~     (1) database id                                              ~
~     (2) snapshot range to extract                                ~
~     (3) name of directory object                                 ~
~     (4) name of dump file                                        ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The database id and snapshot range questions are the same ones you see when you run any AWR report. By providing a snapshot range, you can extract a subset of the AWR data. If you want to keep 30 days online but copy the data to your DBA repository weekly, you just select the snapshot range covering the prior week (a query for finding that range appears after the output below). There is no need to extract extra data and then delete it from the repository. Since the process uses Data Pump, you have to provide a directory object for the output file. The directory name must be entered in UPPERCASE, and there is no default value. The final prompt asks for the name of the dump file, but it is really a file prefix: Data Pump appends '.dmp' to whatever you enter.
Using the dump file prefix: awrdat_31844_31860
|
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  The AWR extract dump file will be located
|  in the following directory/file:
|   /home/oracle
|   awrdat_31844_31860.dmp
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
|  *** AWR Extract Started ...
|
|  This operation will take a few moments. The
|  progress of the AWR extract operation can be
|  monitored in the following directory/file:
|   /home/oracle
|   awrdat_31844_31860.log
|
End of AWR Extract
PRODDB>
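
To find the snapshot range covering the prior week, a quick query on the source database before running the extract does the job. This is a sketch using the standard DBA_HIST_SNAPSHOT columns:

-- Begin/end snapshot ids for the last 7 days on the source database,
-- to answer the awrextr.sql snapshot range prompts.
SELECT MIN(snap_id) begin_snap,
       MAX(snap_id) end_snap
FROM   dba_hist_snapshot
WHERE  dbid = (SELECT dbid FROM v$database)
AND    begin_interval_time >= SYSDATE - 7;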

The Data Pump export took 10 to 20 minutes to extract 7 days of AWR data. The files were less than 50 MB and compressed to less than half that size. FTP (or scp) the file to the DBA repository server and uncompress it. Make certain that the dump file is stored in a directory that is also defined as an Oracle directory object.
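If no suitable directory object exists on the repository database yet, creating one is quick (the path and grantee below are just examples):

-- Define an Oracle directory pointing at the uncompressed dump file,
-- and let the loading account read and write there.
CREATE DIRECTORY awr_dumps AS '/home/oracle/awr_dumps';
GRANT READ, WRITE ON DIRECTORY awr_dumps TO load_admin;  -- load_admin is a placeholder account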

The AWR load was also fairly straightforward, with one minor wrinkle involving the dump file name.

SQL> @?/rdbms/admin/awrload
~~~~~~~~~~
AWR LOAD
~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  This script will load the AWR data from a dump file. The   ~
~  script will prompt users for the following information:    ~
~     (1) name of directory object                            ~
~     (2) name of dump file                                   ~
~     (3) staging schema name to load AWR data into           ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When you are prompted for the name of the dump file, do not include the .dmp suffix! There won't be an error immediately; you get to answer several more questions before you find out the name is wrong.
Specify the Name of the Dump File to Load
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please specify the prefix of the dump file (.dmp) to load:
Enter value for file_name: awrdat_31672_31860
Loading from the file name: awrdat_31672_31860.dmp

The process then prompts for the staging schema name; the default is AWR_STAGE. If you accept the default, the script creates the AWR_STAGE user after asking you which tablespaces to use. Once the AWR load completes, the script drops the AWR_STAGE user.

After the process completes, the AWR tables have new data in them! You can query DBA_HIST_SNAPSHOT or any of the other DBA_HIST views (including DBA_HIST_ACTIVE_SESS_HISTORY). Unfortunately, the standard AWR and ASH reports use the current database's dbid, which won't be the same as the dbid of the data you have just loaded. You will need to create your own AWR/ASH scripts or modify the ones provided…but that is for the next blog post!
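
In the meantime, a couple of queries will tell you the dbid of the freshly loaded data and confirm that its snapshots arrived (the dbid literal is a placeholder):

-- List the databases now represented in the AWR tables.
SELECT DISTINCT dbid, db_name
FROM   dba_hist_database_instance;

-- Check the loaded snapshots for the production dbid found above.
SELECT snap_id, begin_interval_time
FROM   dba_hist_snapshot
WHERE  dbid = 1234567890  -- placeholder: the loaded database's dbid
ORDER  BY snap_id;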
