Yesterday I was discussing with a fellow DBA ways to check the status of existing and past RMAN jobs. Good backup scripts usually write their output to some sort of log file, so checking the output is usually a straightforward task. However, backup jobs can be scheduled in many different ways (crontab, Grid Control, Scheduled Tasks, etc.) and finding the log file may be tricky if you don't know the environment well.
Furthermore, log files may already have been overwritten by the next backup, or simply deleted. An alternative way of accessing that information may therefore come in handy.
Fortunately, RMAN keeps the backup metadata around for some time and it can be accessed through the database’s V$ views. Obviously, if you need this information because your database just crashed and needs to be restored, the method described here is useless.
Backup jobs’ status and metadata
A lot of metadata about RMAN backup jobs can be found in the V$RMAN_% views. These views show past RMAN jobs as well as jobs currently running. As jobs complete backup sets, metadata about the sets and their pieces is also added to the control file and can be accessed through the V$BACKUP_% views.
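For a quick first look, the job-level view alone already answers "did last night's backup run, and did it complete?". A minimal sketch, assuming you just want the most recent jobs (the seven-day window is only an example):

select session_recid, session_stamp, start_time, end_time, status, input_type
from V$RMAN_BACKUP_JOB_DETAILS
where start_time > sysdate - 7
order by start_time;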
For the queries in this post I need only four of those views:
V$BACKUP_SET
V$BACKUP_SET_DETAILS
V$RMAN_BACKUP_JOB_DETAILS
GV$RMAN_OUTPUT
NOTE: I haven’t tested the below in Oracle 10g or earlier.
In the query below I use these views to combine, in a single result, the information I'm usually interested in when verifying backup jobs:
set lines 220
set pages 1000
col cf for 9,999
col df for 9,999
col elapsed_seconds heading "ELAPSED|SECONDS"
col i0 for 9,999
col i1 for 9,999
col l for 9,999
col output_mbytes for 9,999,999 heading "OUTPUT|MBYTES"
col session_recid for 999999 heading "SESSION|RECID"
col session_stamp for 99999999999 heading "SESSION|STAMP"
col status for a10 trunc
col time_taken_display for a10 heading "TIME|TAKEN"
col output_instance for 9999 heading "OUT|INST"

select
    j.session_recid, j.session_stamp,
    to_char(j.start_time, 'yyyy-mm-dd hh24:mi:ss') start_time,
    to_char(j.end_time, 'yyyy-mm-dd hh24:mi:ss') end_time,
    (j.output_bytes/1024/1024) output_mbytes, j.status, j.input_type,
    decode(to_char(j.start_time, 'd'), 1, 'Sunday', 2, 'Monday',
                                       3, 'Tuesday', 4, 'Wednesday',
                                       5, 'Thursday', 6, 'Friday',
                                       7, 'Saturday') dow,
    j.elapsed_seconds, j.time_taken_display,
    x.cf, x.df, x.i0, x.i1, x.l,
    ro.inst_id output_instance
from V$RMAN_BACKUP_JOB_DETAILS j
  left outer join (select
                       d.session_recid, d.session_stamp,
                       sum(case when d.controlfile_included = 'YES' then d.pieces else 0 end) CF,
                       sum(case when d.controlfile_included = 'NO'
                                 and d.backup_type||d.incremental_level = 'D' then d.pieces else 0 end) DF,
                       sum(case when d.backup_type||d.incremental_level = 'D0' then d.pieces else 0 end) I0,
                       sum(case when d.backup_type||d.incremental_level = 'I1' then d.pieces else 0 end) I1,
                       sum(case when d.backup_type = 'L' then d.pieces else 0 end) L
                   from V$BACKUP_SET_DETAILS d
                     join V$BACKUP_SET s on s.set_stamp = d.set_stamp and s.set_count = d.set_count
                   where s.input_file_scan_only = 'NO'
                   group by d.session_recid, d.session_stamp) x
    on x.session_recid = j.session_recid and x.session_stamp = j.session_stamp
  left outer join (select o.session_recid, o.session_stamp, min(inst_id) inst_id
                   from GV$RMAN_OUTPUT o
                   group by o.session_recid, o.session_stamp) ro
    on ro.session_recid = j.session_recid and ro.session_stamp = j.session_stamp
where j.start_time > trunc(sysdate)-&NUMBER_OF_DAYS
order by j.start_time;
The output of the query above looks like the one below:
SESSION      SESSION                                             OUTPUT                                        ELAPSED TIME                                           OUT
  RECID        STAMP START_TIME          END_TIME                MBYTES STATUS     INPUT_TYPE    DOW           SECONDS TAKEN          CF     DF     I0     I1      L  INST
------- ------------ ------------------- ------------------- ---------- ---------- ------------- --------- ---------- ---------- ------ ------ ------ ------ ------ -----
  35877    759180320 2011-08-14 19:25:29 2011-08-14 19:26:09        213 COMPLETED  ARCHIVELOG    Saturday          40 00:00:40
  35880    759180438 2011-08-14 19:27:28 2011-08-14 19:28:02        217 COMPLETED  ARCHIVELOG    Saturday          34 00:00:34
  35883    759180496 2011-08-14 19:28:25 2011-08-14 19:28:57        219 COMPLETED  ARCHIVELOG    Saturday          32 00:00:32
  35886    759182786 2011-08-14 20:06:26 2011-08-14 20:24:30     14,100 FAILED     DB FULL       Saturday        1084 00:18:04
  35888    759184489 2011-08-14 20:34:50                              0 FAILED     DB FULL       Saturday
  35890    759189013 2011-08-14 21:50:13                              0 FAILED     ARCHIVELOG    Saturday
  35892    759203414 2011-08-15 01:50:14 2011-08-15 01:53:55        875 COMPLETED  ARCHIVELOG    Sunday           221 00:03:41
  35895    759207985 2011-08-15 03:08:15                              0 FAILED     ARCHIVELOG    Sunday
  35899    759213359 2011-08-15 04:36:27 2011-08-15 04:38:53      3,335 COMPLETED  DATAFILE FULL Sunday           146 00:02:26
  35907    759217815 2011-08-15 05:50:15 2011-08-15 05:50:52        229 COMPLETED  ARCHIVELOG    Sunday            37 00:00:37
  35911    759218083 2011-08-15 05:54:43 2011-08-15 10:15:49    193,016 COMPLETED  DB FULL       Sunday         15666 04:21:06
  35913    759232215 2011-08-15 09:50:15 2011-08-15 09:51:39        628 COMPLETED  ARCHIVELOG    Sunday            84 00:01:24
...
  36209    760053012 2011-08-24 21:50:13 2011-08-24 21:53:21      2,422 COMPLETED  ARCHIVELOG    Tuesday          188 00:03:08                                           1
  36212    760067412 2011-08-25 01:50:12 2011-08-25 01:54:17      2,514 COMPLETED  ARCHIVELOG    Wednesday        245 00:04:05                                           1
  36215    760081812 2011-08-25 05:50:12 2011-08-25 05:51:40        907 COMPLETED  ARCHIVELOG    Wednesday         88 00:01:28                                           1
  36218    760096215 2011-08-25 09:50:15 2011-08-25 09:51:36        737 COMPLETED  ARCHIVELOG    Wednesday         81 00:01:21        0      0      0      0      9     1
  36221    760110617 2011-08-25 13:50:17 2011-08-25 13:54:58      3,524 COMPLETED  ARCHIVELOG    Wednesday        281 00:04:41        0      0      0      0      9     1
  36224    760114813 2011-08-25 15:00:13 2011-08-25 19:34:55    209,491 COMPLETED  DB FULL       Wednesday      16482 04:34:42        3    203      0      0     17     1
  36228    760125016 2011-08-25 17:50:16 2011-08-25 17:52:33      1,464 COMPLETED  ARCHIVELOG    Wednesday        137 00:02:17        0      0      0      0      9     1
  36237    760139412 2011-08-25 21:50:12 2011-08-25 21:52:44      1,879 COMPLETED  ARCHIVELOG    Wednesday        152 00:02:32        1      0      0      0      8     1
  36240    760153813 2011-08-26 01:50:13 2011-08-26 01:57:04      5,123 COMPLETED  ARCHIVELOG    Thursday         411 00:06:51        1      0      0      0     17     1
Most of the columns above either have an obvious meaning or derive directly from the V$ views, so they are explained in the Oracle documentation. The few columns I added to the output that need some additional explanation are the aggregations below (a standalone version of this aggregation is sketched after the list):
CF: Number of controlfile backups included in the backup set
DF: Number of datafile full backups included in the backup set
I0: Number of datafile incremental level-0 backups included in the backup set
I1: Number of datafile incremental level-1 backups included in the backup set
L: Number of archived log backups included in the backup set
OUT INST: Instance where the job was executed and the output is available (see below)
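If you only want these counters for one job, the aggregation can also be run on its own. This is simply the subquery used in the main query above, restricted to a single (SESSION_RECID, SESSION_STAMP) pair:

select d.session_recid, d.session_stamp,
       sum(case when d.controlfile_included = 'YES' then d.pieces else 0 end) cf,
       sum(case when d.controlfile_included = 'NO'
                 and d.backup_type||d.incremental_level = 'D' then d.pieces else 0 end) df,
       sum(case when d.backup_type||d.incremental_level = 'D0' then d.pieces else 0 end) i0,
       sum(case when d.backup_type||d.incremental_level = 'I1' then d.pieces else 0 end) i1,
       sum(case when d.backup_type = 'L' then d.pieces else 0 end) l
from V$BACKUP_SET_DETAILS d
  join V$BACKUP_SET s on s.set_stamp = d.set_stamp and s.set_count = d.set_count
where s.input_file_scan_only = 'NO'
  and d.session_recid = &SESSION_RECID
  and d.session_stamp = &SESSION_STAMP
group by d.session_recid, d.session_stamp;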
Please note that the aggregations are only shown for the most recent backup jobs in the example above, since the backup set records they are built from are purged from the control file after a few days (the retention is governed by the CONTROL_FILE_RECORD_KEEP_TIME parameter, discussed in the comments below).
Another important thing to note is that, in a RAC environment, some fields for a RUNNING backup job may contain invalid information until the backup job is finished. To get consistent information, run the query on the node where the backup is running.
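To find that node, a minimal sketch is below. It assumes that the status of a running job in V$RMAN_BACKUP_JOB_DETAILS starts with 'RUNNING' (for example 'RUNNING WITH WARNINGS'), which is worth verifying on your version, and it relies on the job's output rows in GV$RMAN_OUTPUT carrying the instance number:

select j.session_recid, j.session_stamp, j.status, min(o.inst_id) inst_id
from V$RMAN_BACKUP_JOB_DETAILS j
  join GV$RMAN_OUTPUT o
    on o.session_recid = j.session_recid and o.session_stamp = j.session_stamp
where j.status like 'RUNNING%'
group by j.session_recid, j.session_stamp, j.status;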
Backup set details
Once you've found the general information about the backup jobs, you may need more detail about the backup sets for one particular job. Each backup job is uniquely identified by the (SESSION_RECID, SESSION_STAMP) pair, both of which are listed by the query above.
The query below retrieves details for a backup job, given a pair of values for (SESSION_RECID, SESSION_STAMP):
set lines 220
set pages 1000
col backup_type for a4 heading "TYPE"
col controlfile_included heading "CF?"
col incremental_level heading "INCR LVL"
col pieces for 999 heading "PCS"
col elapsed_seconds heading "ELAPSED|SECONDS"
col device_type for a10 trunc heading "DEVICE|TYPE"
col compressed for a4 heading "ZIP?"
col output_mbytes for 9,999,999 heading "OUTPUT|MBYTES"
col input_file_scan_only for a4 heading "SCAN|ONLY"

select
    d.bs_key, d.backup_type, d.controlfile_included, d.incremental_level, d.pieces,
    to_char(d.start_time, 'yyyy-mm-dd hh24:mi:ss') start_time,
    to_char(d.completion_time, 'yyyy-mm-dd hh24:mi:ss') completion_time,
    d.elapsed_seconds, d.device_type, d.compressed,
    (d.output_bytes/1024/1024) output_mbytes,
    s.input_file_scan_only
from V$BACKUP_SET_DETAILS d
  join V$BACKUP_SET s on s.set_stamp = d.set_stamp and s.set_count = d.set_count
where session_recid = &SESSION_RECID
  and session_stamp = &SESSION_STAMP
order by d.start_time;
And the output is:
                                                                                ELAPSED DEVICE             OUTPUT SCAN
    BS_KEY TYPE CF?   INCR LVL  PCS START_TIME          COMPLETION_TIME         SECONDS TYPE       ZIP?    MBYTES ONLY
---------- ---- --- ---------- ---- ------------------- ------------------- ---------- ---------- ---- ---------- ----
     33429 L    NO               1  2011-08-25 15:00:24 2011-08-25 15:03:25        181 DISK       YES         812 NO
     33428 L    NO               1  2011-08-25 15:00:24 2011-08-25 15:03:08        164 DISK       YES         812 NO
     33429 L    NO               1  2011-08-25 15:00:24 2011-08-25 15:03:25        181 DISK       YES       1,187 NO
     33429 L    NO               1  2011-08-25 15:00:24 2011-08-25 15:03:25        181 DISK       YES         812 NO
     33429 L    NO               1  2011-08-25 15:00:24 2011-08-25 15:03:25        181 DISK       YES       1,187 NO
     33428 L    NO               1  2011-08-25 15:00:24 2011-08-25 15:03:08        164 DISK       YES       1,187 NO
     33428 L    NO               1  2011-08-25 15:00:24 2011-08-25 15:03:08        164 DISK       YES       1,187 NO
     33428 L    NO               1  2011-08-25 15:00:24 2011-08-25 15:03:08        164 DISK       YES         812 NO
     33430 L    NO               1  2011-08-25 15:03:11 2011-08-25 15:03:28         17 DISK       YES          89 NO
     33436 D    NO              45  2011-08-25 15:03:38 2011-08-25 19:05:51      14533 DISK       YES      91,898 NO
     33436 D    NO              45  2011-08-25 15:03:38 2011-08-25 19:05:51      14533 DISK       YES      91,898 NO
     33443 D    NO              52  2011-08-25 15:03:38 2011-08-25 19:31:51      16093 DISK       YES     106,166 NO
     33443 D    NO              52  2011-08-25 15:03:38 2011-08-25 19:31:51      16093 DISK       YES     106,166 NO
     33437 D    NO               2  2011-08-25 19:05:58 2011-08-25 19:16:24        626 DISK       YES       2,999 NO
     33438 D    NO               3  2011-08-25 19:16:28 2011-08-25 19:29:51        803 DISK       YES       4,948 NO
     33439 D    NO               1  2011-08-25 19:29:52 2011-08-25 19:29:52          0 DISK       YES           0 NO
     33440 D    NO               1  2011-08-25 19:29:53 2011-08-25 19:29:53          0 DISK       YES           0 NO
     33441 D    NO               1  2011-08-25 19:29:54 2011-08-25 19:29:54          0 DISK       YES           0 NO
     33442 D    NO               1  2011-08-25 19:29:55 2011-08-25 19:29:55          0 DISK       YES           0 NO
     33445 L    NO               1  2011-08-25 19:32:15 2011-08-25 19:34:11        116 DISK       YES         677 NO
     33445 L    NO               1  2011-08-25 19:32:15 2011-08-25 19:34:11        116 DISK       YES         618 NO
     33445 L    NO               1  2011-08-25 19:32:15 2011-08-25 19:34:11        116 DISK       YES         677 NO
     33446 L    NO               1  2011-08-25 19:32:15 2011-08-25 19:34:18        123 DISK       YES         618 NO
     33446 L    NO               1  2011-08-25 19:32:15 2011-08-25 19:34:18        123 DISK       YES         677 NO
     33446 L    NO               1  2011-08-25 19:32:15 2011-08-25 19:34:18        123 DISK       YES         618 NO
     33445 L    NO               1  2011-08-25 19:32:15 2011-08-25 19:34:11        116 DISK       YES         618 NO
     33446 L    NO               1  2011-08-25 19:32:15 2011-08-25 19:34:18        123 DISK       YES         677 NO
     33447 D    YES              1  2011-08-25 19:34:19 2011-08-25 19:34:20          1 DISK       NO           24 NO
     33448 D    YES              1  2011-08-25 19:34:48 2011-08-25 19:34:49          1 DISK       YES           3 NO
     33449 D    YES              1  2011-08-25 19:34:49 2011-08-25 19:34:50          1 DISK       NO           24 NO
Backup job output
And finally, sometimes it may be helpful to retrieve the job’s output from the metadata kept by the instance.
It might be that the original log on disk, if any, has been overwritten by a more recent backup, or simply that selecting the output from a V$ view is easier than connecting to the server to find out where the log file is.
The tricky thing here, though, is that the view that contains the output, V$RMAN_OUTPUT, exists in memory only; the job’s output is not stored in the controlfile or anywhere else in the database. Thus, if the instance gets restarted, the contents of that view are reset.
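Before chasing the output for a specific job, it can be worth checking which jobs still have output available in that in-memory buffer. A minimal sketch, counting how many output lines are still kept per job and instance:

select o.inst_id, o.session_recid, o.session_stamp, count(*) output_lines
from GV$RMAN_OUTPUT o
group by o.inst_id, o.session_recid, o.session_stamp
order by o.session_recid;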
To retrieve the job output for a specific backup job, identified by the (SESSION_RECID, SESSION_STAMP) pair, you can use the following query:
set lines 200
set pages 1000

select output
from GV$RMAN_OUTPUT
where session_recid = &SESSION_RECID
  and session_stamp = &SESSION_STAMP
order by recid;
Which shows:
OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
connected to target database: ORCL (DBID=4132479105)
using target database control file instead of recovery catalog
echo set on
backup archivelog all not backed up delete all input;
Starting backup at 26-AUG-11
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=233 instance=ORCL devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=261 instance=ORCL devtype=DISK
channel ORA_DISK_1: starting compressed archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
...
...
archive log filename=+ALOGS/ORCL/archivelog/2011_08_26/thread_2_seq_23561.462.933333703 recid=213400 stamp=933333707
archive log filename=+ALOGS/ORCL/archivelog/2011_08_26/thread_2_seq_23562.412.933335505 recid=213402 stamp=933335506
archive log filename=+ALOGS/ORCL/archivelog/2011_08_26/thread_2_seq_23563.632.933337017 recid=213404 stamp=933337020
Finished backup at 26-AUG-11
Starting Control File and SPFILE Autobackup at 26-AUG-11
piece handle=/fra/ORCL/autobackup/2011_08_26/o1_mf_s_933337284_34h856x2_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-AUG-11
exit;
30 Comments
Very informative. Thanks Andre.
Thanks, Fahd!
Hey!
Just in time! Very nice queries!
I think I’ll include them in the backup review template I’m working on right now.
Thanks!
Thanks, Maris!
Mmm… backup review template! That would be something nice to share ;)
Hi,
Instead of taking inputs like SESSION_RECID
and SESSION_STAMP, you could join on BS_KEY, which makes the input even easier. You can't get that value at the very first step, but you can build the query for it from V$BACKUP_SET_DETAILS.
Hope it saves you a step.
Hi, Pavan, thanks for your feedback.
You're right, the BS_KEY column in V$BACKUP_SET_DETAILS can uniquely identify a backup set. The reason I didn't use it is simply that my focus was on the backup job rather than the backup set. The queries in this post show the information at the job level: the first query shows the jobs' details, the second one shows all the backup sets for a given job, and the last one shows the job's output.
Whenever interested in information for a particular backup set, your suggestion is the easiest way to retrieve that information.
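For reference, a minimal sketch along those lines; BS_KEY is a value taken from the backup set details query above, and the column list here is just an example:

select d.bs_key, d.backup_type, d.controlfile_included, d.incremental_level, d.pieces,
       to_char(d.completion_time, 'yyyy-mm-dd hh24:mi:ss') completion_time,
       (d.output_bytes/1024/1024) output_mbytes
from V$BACKUP_SET_DETAILS d
where d.bs_key = &BS_KEY;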
Cheers!
Andre
Since the INPUT_TYPE column doesn't distinguish between differential and cumulative incrementals, how do you find out what backups were run?
Thanks!
Hi friend. Thank you very much! These scripts are very useful.
Thanks for the feedback, Victor.
Just to comment that V$RMAN_BACKUP_JOB_DETAILS is available as of 10gR2.
Thanks for the info, Luis.
Thanks for these scripts, Andre. I just have one minor comment: the decode function for the day of the week depends on the 'proper' NLS setting. Here in Germany, weeks start on Monday, so 1 should decode to Monday, not Sunday. This threw me off a bit at first.
But I guess you can safely replace the whole decode block with to_char(j.start_time, 'DAY'), which returns the proper day of the week directly.
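For example (the 'fm' modifier trims the blank-padding that 'DAY' adds; the name returned still depends on NLS_DATE_LANGUAGE, but not on which day the week starts on):

select to_char(sysdate, 'fmDay') dow from dual;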
Nice reporting script. I was in the middle of creating an alerting package when I came across this. Have you noticed that the output from v$rman_output is clipped, notably on the session output? So if you pass sbt_tape parms in the script, the parms get clipped at column 69. Using 11gR2. I'm trying to figure this out, because the output from the views is therefore NOT comparable to a log file.
Excellent. Thanks for the blog Andre!
Nice Job Bro !
Thanks André..
Hi,
It seems that the incremental level 0 backup is now properly identified as I + 0 (instead of D + 0).
So, to make the script compatible with both pre-12c and 12c databases, the line:
sum(case when d.backup_type||d.incremental_level = 'D0' then d.pieces else 0 end) I0,
could be changed to:
sum(case when d.backup_type||d.incremental_level in ('I0', 'D0') then d.pieces else 0 end) I0,
Regards,
Nick
Very, very nice…
Hi, I can see "RECVR AREA" in the INPUT_TYPE column of the view v$rman_backup_job_details.
Can anyone please explain what it means? Other than this I can see archivelog and control file backups. I suspect this is a full backup going to tape, but why does it show the recovery area?
Please help.
Hi, I wonder why it is so hard to find the incremental level of a backup in the RMAN views. Running your queries I see that the incremental level column is always empty. Is there any other way to find out the incremental level of a running backup?
tks!
Thanks, very useful.
I started a duplicate from standby with 8 channels and 5 auxiliary channels. Will the channels be running on the primary or on the standby? How can I check that information?
I used the query "select * from v$rman_backup_job_details" to find the details and status of RMAN backup jobs.
I take an RMAN backup daily and it takes 30 minutes.
So when I run the query above, I expect it to return 31 rows for October, but it actually returns only 10 rows.
For example, one row has a start time of 01-oct-17 08:30:00 and an end time of 04-oct-17 05:00:00.
But a backup only takes 30 minutes, so why does the output combine several days into one row?
Hi André Araújo,
First of all thanks for posting Rman backup status scripts.
One thing is missing: the database name in the output.
Another aspect: if we run a consolidated RMAN backup status report from an RMAN catalog database, it would show the backup status for multiple databases, which we need to check all at once.
Can it be done like that?
Thanks,
Ranvir
Hi
How can I get a consolidated per-day report, including the database name, for all databases, whether they are registered in a catalog or not?
Thanks,
Ranvir
This query gives very detailed information on RMAN backup status.
I found that the query below also reports the status of RMAN backups: incremental, full and archivelog.
col STATUS format a9
col hrs format 999.99
select
SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi') end_time,
elapsed_seconds/3600 hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;
@Santosh Your SQL doesn't differentiate between an INC0 and an INC1.
This is very helpful. How do I control how long V$BACKUP_SET_DETAILS retains data? We are finding it varies depending on the database, and I can't find any control/setting for this. We need to retain at least 30 days, but we are seeing anywhere from 24 to 48 days depending on the environment.
JP, check control_file_record_keep_time
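For reference, a minimal sketch of checking and, if needed, raising that retention. The parameter is expressed in days, 30 is just an example value, and increasing it can make the control file grow:

show parameter control_file_record_keep_time

alter system set control_file_record_keep_time = 30 scope=both;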