If you have an extremely busy OLTP system with a physical standby ready for a manual role transition, and you want to run very heavy reporting queries without affecting that system, consider offloading them to a separate report database fed by a Streams downstream capture configuration. It is easy to configure and, because the archived logs are shipped to the report database from the standby, it has essentially no performance impact on the OLTP system.
In this blog post, I will show how to set this up and how to maintain the archived log transport to the report database across a manual switchover.
Below is the configuration of the three databases I have for testing:
Production DB
Host – PRDSRV
DB name – TESTDB
DB unique name – PRDDB
OS – Linux x86_64
Oracle – 11.1.0.7
OLTP schema – OLTPUSER
log_archive_dest_1='location=use_db_recovery_file_dest mandatory'
log_archive_dest_2='service=STBDB lgwr async valid_for=(primary_role,online_logfiles) db_unique_name=STBDB reopen=15'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
archive_lag_target=1200
fal_client=PRDDB
fal_server=STBDB
Standby DB
Host – STBSRV
DB name – TESTDB
DB unique name – STBDB
OS – Linux x86_64
Oracle – 11.1.0.7
OLTP schema – OLTPUSER
log_archive_dest_1='location=use_db_recovery_file_dest mandatory'
log_archive_dest_2='service=PRDDB lgwr async valid_for=(primary_role,online_logfiles) db_unique_name=PRDDB reopen=15'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
archive_lag_target=1200
fal_client=STBDB
fal_server=PRDDB
Report DB
Host – RPTSRV
DB name – RPTDB
OS – Linux x86_64
Oracle – 11.1.0.7
Streams schema – STRMADMIN
log_archive_dest_1='location=use_db_recovery_file_dest mandatory'
log_archive_dest_state_1=enable
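Before going further, it is worth confirming that the source database runs in ARCHIVELOG mode (downstream capture mines archived logs) and, given the Data Guard setup, that force logging is on. A quick check on PRDDB might look like this (a small sketch, not one of the original steps):

-- Sketch: prerequisite check on PRDDB
select log_mode, force_logging, supplemental_log_data_min
from   v$database;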
So let’s start. First, I will configure the downstream capture.
1. Add RPTDB to tnsnames.ora on PRDSRV and STBSRV.
RPTDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = RPTSRV)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RPTDB)
    )
  )
2. Set up log_archive_dest_3 on PRDDB and STBDB.
alter system set log_archive_dest_3='service=RPTDB arch noregister template=+DG_FRA/TESTDB/ARCHIVELOG/TESTDB%r_%t_%s.log valid_for=(standby_role,all_logfiles) reopen=15' scope=both;
alter system set log_archive_dest_state_3=enable scope=both;
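Once the destination is enabled, a quick sanity check against V$ARCHIVE_DEST can confirm it is valid and error-free (a sketch; run it on whichever database currently holds the standby role, since the destination is only active there):

-- Sketch: confirm log_archive_dest_3 is valid and shows no transport errors
select dest_id, status, error, destination
from   v$archive_dest
where  dest_id = 3;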
3. Add supplemental log data on PRDDB and build the LogMiner dictionary.
alter database add supplemental log data;
select supplemental_log_data_min from v$database;
exec dbms_capture_adm.build;
select name,sequence#,first_change# from v$archived_log where dictionary_begin='YES';
4. Prepare OLTPUSER schema for instantiation on PRDDB.
exec dbms_capture_adm.prepare_schema_instantiation(schema_name => 'OLTPUSER');
select * from dba_capture_prepared_schemas;
5. Export the OLTPUSER schema on PRDDB.
select dbms_flashback.get_system_change_number from dual;

expdp system/*** directory=EXP_DIR dumpfile=oltpuser.dmp schemas=OLTPUSER logfile=oltpuser.log flashback_scn={SCN from the previous query}
6. Set up the queue, rules, rule set, and the apply and capture processes on RPTDB.
begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'STRMADMIN.TESTDB_QUEUE_TABLE',
    queue_name  => 'STRMADMIN.TESTDB_QUEUE',
    queue_user  => 'STRMADMIN');
end;
/
begin
  dbms_aqadm.start_queue(queue_name => 'TESTDB_QUEUE');
end;
/
begin
  dbms_rule_adm.create_rule(
    rule_name          => 'TESTDB_DML',
    condition          => '(:dml.get_object_owner() = "OLTPUSER")',
    evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT',
    rule_comment       => 'testdb streams_dml');
end;
/
begin
  dbms_rule_adm.create_rule(
    rule_name          => 'TESTDB_DDL',
    condition          => '(:ddl.get_object_owner() = "OLTPUSER" or :ddl.get_base_table_owner() = "OLTPUSER")',
    evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT',
    rule_comment       => 'testdb streams_ddl');
end;
/
begin
  dbms_rule_adm.create_rule_set(
    rule_set_name      => 'TESTDB_RULE_SET',
    evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT',
    rule_set_comment   => 'testdb streams');
end;
/
begin
  dbms_rule_adm.add_rule(
    rule_name     => 'TESTDB_DML',
    rule_set_name => 'TESTDB_RULE_SET',
    rule_comment  => 'testdb streams');
end;
/
begin
  dbms_rule_adm.add_rule(
    rule_name     => 'TESTDB_DDL',
    rule_set_name => 'TESTDB_RULE_SET',
    rule_comment  => 'testdb streams');
end;
/
begin
  dbms_apply_adm.create_apply(
    queue_name      => 'STRMADMIN.TESTDB_QUEUE',
    apply_name      => 'TESTDB_APPLY',
    apply_user      => 'STRMADMIN',
    source_database => 'TESTDB',
    apply_captured  => true);
end;
/
begin
  dbms_capture_adm.create_capture(
    queue_name                => 'STRMADMIN.TESTDB_QUEUE',
    capture_name              => 'TESTDB_CAPTURE',
    rule_set_name             => 'STRMADMIN.TESTDB_RULE_SET',
    source_database           => 'TESTDB',
    first_scn                 => {FIRST_CHANGE# from step 3},
    start_scn                 => {SCN from step 5},
    capture_user              => 'STRMADMIN',
    checkpoint_retention_time => 0.5);
end;
/
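Before starting anything, it can be reassuring to confirm that both processes were created with the expected SCNs and rule set (a small verification sketch, not part of the original steps):

-- Sketch: verify the newly created capture and apply processes on RPTDB
select capture_name, status, source_database, first_scn, start_scn, rule_set_name
from   dba_capture;

select apply_name, status, apply_captured
from   dba_apply;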
7. Copy the export dump file from PRDSRV to RPTSRV.
8. Import OLTPUSER schema on RPTDB.
impdp system/*** directory=IMP_DIR dumpfile=oltpuser.dmp schemas=OLTPUSER logfile=oltpuser_imp.log
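A Data Pump import of a schema exported with flashback_scn normally records the instantiation SCN for the apply process. A quick check like the following (a sketch) shows whether it was set; if it is missing, DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN can set it manually:

-- Sketch: check that the instantiation SCN was recorded for OLTPUSER on RPTDB
select source_schema, source_database, instantiation_scn
from   dba_apply_instantiated_schemas;

-- If missing, it could be set with the SCN captured in step 5, e.g.:
-- exec dbms_apply_adm.set_schema_instantiation_scn(source_schema_name => 'OLTPUSER', source_database_name => 'TESTDB', instantiation_scn => {SCN from step 5});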
9. Start apply and capture on RPTDB.
exec dbms_apply_adm.start_apply('TESTDB_APPLY');
exec dbms_capture_adm.start_capture('TESTDB_CAPTURE');
10. Check the capture state. If there is a gap, copy the missing archived logs from PRDDB to RPTDB and register them. Then insert a record into an OLTPUSER test table on PRDDB, switch the logfile, and check that the data appears on RPTDB (a smoke-test sketch follows the queries below).
select capture_name,state,state_change_time,capture_message_create_time from v$streams_capture;
select * from dba_registered_archived_log where source_database='TESTDB';
alter database register logical logfile '{archivelog name}' for 'TESTDB_CAPTURE';
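As a concrete smoke test, something like this can be run (OLTPUSER.STREAMS_TEST is a made-up table name used only for illustration; any small OLTPUSER table will do):

-- On PRDDB (hypothetical test table for illustration)
insert into oltpuser.streams_test values (1, 'downstream capture test');
commit;
alter system switch logfile;

-- On RPTDB, once the archived log has been shipped, captured and applied
select * from oltpuser.streams_test;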
11. The last step is to create a job that cleans up registered archived logs that are no longer needed. If the archived logs are on an ASM diskgroup, they can be deleted with the following command, executed on the ASM instance:
alter diskgroup {diskgroup name} drop file '{file name}';
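To identify which registered logs the capture process no longer needs, a query like this can feed the cleanup job (a sketch; it relies on the PURGEABLE column of DBA_REGISTERED_ARCHIVED_LOG, which is maintained from the capture checkpoint information):

-- Sketch: list registered archived logs the capture process no longer needs
select r.consumer_name, r.sequence#, r.name
from   dba_registered_archived_log r
where  r.source_database = 'TESTDB'
and    r.purgeable = 'YES'
order  by r.sequence#;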
Next, I will show how to maintain the archived log transport to RPTDB during a manual switchover.
1. Perform the switchover on the primary database PRDDB.
select name,db_unique_name,open_mode,database_role,switchover_status from v$database;
alter database commit to switchover to physical standby with session shutdown;
shutdown immediate
startup mount
2. Switch the target physical standby database STBDB to the primary role.
select name,db_unique_name,open_mode,database_role,switchover_status from v$database;
alter database commit to switchover to primary;
alter database open;
3. Start Redo Apply on PRDDB.
alter database recover managed standby database using current logfile disconnect from session;
select process,status,client_process,sequence#,delay_mins from v$managed_standby;
4. There will be one missing archived log on RPTDB: the one created with the End-Of-Redo indicator. Its sequence number can be found in the PRDDB alert log.
ARCH: Noswitch archival of thread 1, sequence 1051
ARCH: End-Of-Redo Branch archival of thread 1 sequence 1051
Archived Log entry 194 added for thread 1 sequence 1051 ID 0xd7df275e dest 1:
ARCH: Archiving is disabled due to current logfile archival
LOG_ARCHIVE_DEST_2 is a potential Physical Standby switchover target
LOG_ARCHIVE_DEST_3 currently has a gap
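The gap can also be confirmed from the RPTDB side by comparing the highest registered sequence with the End-Of-Redo sequence from the alert log (a sketch):

-- Sketch: on RPTDB, show the last archived log sequence registered for the capture
select source_database, thread#, max(sequence#) as last_registered_seq
from   dba_registered_archived_log
where  source_database = 'TESTDB'
group  by source_database, thread#;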
5. Copy the missing archivelog from PRDDB to RPTDB and register it.
-- On PRDSRV
select name from v$archived_log where sequence#=1051;

rman target /
copy archivelog '{archivelog name}' to '{temp folder}';
exit

-- On RPTSRV (after transferring the copied file there), connected to RPTDB
alter database register logical logfile '{archivelog name}' for 'TESTDB_CAPTURE';
Now that the downstream capture is set up and tested, any reports can be run on the report database with no impact on the production OLTP database. Moreover, since reporting queries often benefit from different types of indexes, or from indexes on different columns, the report database can carry its own indexes tailored to those queries.
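For example, an extra reporting-only index can be created directly on RPTDB without touching production (OLTPUSER.ORDERS and its columns below are hypothetical names used only for illustration):

-- Hypothetical example: a reporting-only index on the apply site
create index oltpuser.orders_report_ix
  on oltpuser.orders (status, order_date);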
Comments
Hi,
I just reviewed your example. What is the need for a capture process on RPTDB (STRMADMIN.TESTDB_QUEUE)? Since the capture process from the production environment would be sending LCRs across to RPTDB, and you have an apply process there to dequeue the LCRs and perform the actions.
Am I missing anything?
Does it perform CCA (combined capture and apply)?