Silent Rollback From Out-of-Place Patching for Oracle 19c GI

Posted in: Oracle, Technical Track

My last post covered how to do out-of-place (OOP) patching for Oracle GI (Grid Infrastructure). The next thing I wanted to try was the rollback procedure for this methodology. I searched MOS (My Oracle Support) and the Oracle documentation, but couldn't find rollback instructions using gridSetup.sh.

The first thing I did was try to follow MOS document 2419319.1 for OOP rollback using OPatchAuto, but I hit the error below:

[root@node2 ~]# . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid
[root@node2 ~]# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /u01/app/19.8.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/node2/crsconfig/crsdowngrade_node2_2020-10-08_11-01-27AM.log
2020/10/08 11:01:29 CLSRSC-416: Failed to retrieve old Grid Infrastructure configuration data during downgrade
Died at /u01/app/19.8.0.0/grid/crs/install/crsdowngrade.pm line 760.
The command '/u01/app/19.8.0.0/grid/perl/bin/perl -I/u01/app/19.8.0.0/grid/perl/lib -I/u01/app/19.8.0.0/grid/crs/install -I/u01/app/19.8.0.0/grid/xag /u01/app/19.8.0.0/grid/crs/install/rootcrs.pl -downgrade' execution failed
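
Before going further, it's worth checking the downgrade log itself, since it carries more detail than the console output. The grep below is my addition, not part of the original session, and uses the log path reported above:

[root@node2 ~]# grep -i "CLSRSC" /u01/app/grid/crsdata/node2/crsconfig/crsdowngrade_node2_2020-10-08_11-01-27AM.log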

The log shows there is no previous GI configuration for the CRS downgrade to fall back to. This makes sense: I did the OOP patching to 19.8 by cloning from the 19.3 base binaries, so no earlier GI home was ever in place before 19.8, and rootcrs.sh has nothing to downgrade to.
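
One way to confirm this is to look at the central inventory and verify which GI homes are registered. The check below is my addition and assumes the central inventory lives at /u01/app/oraInventory, as the install logs in this post suggest:

[root@node2 ~]# grep "HOME NAME" /u01/app/oraInventory/ContentsXML/inventory.xml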

Right after the error, I checked that everything was still OK with the 19.8 GI.

[root@node2 ~]# crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@node2 ~]# crsctl query crs releasepatch
Oracle Clusterware release patch level is [441346801] and the complete list of patches 
[31281355 31304218 31305087 31335188 ] have been applied on the local node. 
The release patch string is [19.8.0.0.0].

[root@node2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. 
The cluster upgrade state is [NORMAL]. The cluster active patch level is [441346801].
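
Don't be thrown off by the active version showing [19.0.0.0.0]: since 19c the active version reports the base release, and Release Updates only move the patch level. For a per-node view of the installed software version as well, the standard check below works (my addition, output omitted):

[root@node2 ~]# crsctl query crs softwareversion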

My next idea was to try the same procedure I used to switch from the 19.6 to the 19.8 GI home. Since I had already detached the 19.6 GI home, I had to reattach it to the inventory. Be aware that you have to reattach the GI home on all nodes; below is just the example for node 1.

[grid@node1 ~]$ /u01/app/19.8.0.0/grid/oui/bin/runInstaller -attachhome \
 -silent ORACLE_HOME="/u01/app/19.6.0.0/grid" \
 ORACLE_HOME_NAME="OraGI196Home"
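
To verify that the attach worked on each node, you can check the central inventory again; this check is my addition and assumes the same inventory location as above:

[grid@node1 ~]$ grep "19.6.0.0" /u01/app/oraInventory/ContentsXML/inventory.xml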

As before, I unset my Oracle environment variables and set ORACLE_HOME to the 19.6 GI home.

[grid@node1 ~]$ unset ORACLE_BASE
[grid@node1 ~]$ unset ORACLE_HOME
[grid@node1 ~]$ unset ORA_CRS_HOME
[grid@node1 ~]$ unset ORACLE_SID
[grid@node1 ~]$ unset TNS_ADMIN
[grid@node1 ~]$ env | egrep "ORA|TNS" | wc -l
0
[grid@node1 ~]$ export ORACLE_HOME=/u01/app/19.6.0.0/grid
[grid@node1 ~]$ cd $ORACLE_HOME

Once I had reattached the 19.6 GI home and unset my variables, I attempted the switch, but it failed with a series of permission errors.

[grid@node1 grid]$ pwd
/u01/app/19.6.0.0/grid
[grid@node1 grid]$ ./gridSetup.sh -switchGridHome -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2020-10-08_11-37-23AM.log
Could not backup file /u01/app/19.6.0.0/grid/rootupgrade.sh to 
/u01/app/19.6.0.0/grid/rootupgrade.sh.ouibak
Could not backup file /u01/app/19.6.0.0/grid/perl/lib/5.28.1/x86_64-linux-thread-multi/perllocal.pod to 
/u01/app/19.6.0.0/grid/perl/lib/5.28.1/x86_64-linux-thread-multi/perllocal.pod.ouibak
...
[FATAL] Failed to restore the saved templates to the Oracle home being cloned. Aborting the clone operation.
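
To see exactly which files were the problem, a find like the one below (my addition, assuming grid is the GI owner) lists everything in the 19.6 home not owned by grid:

[grid@node1 ~]$ find /u01/app/19.6.0.0/grid ! -user grid -ls | head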

I realized I had forgotten to change the ownership of the 19.6 GI home back to grid:oinstall; certain files and directories were still owned by root. I proceeded to fix the ownership of the 19.6 GI home on both nodes.

[root@node1 ~]# cd /u01/app/19.6.0.0
[root@node1 19.6.0.0]# chown -R grid:oinstall ./grid

[root@node2 ~]# cd /u01/app/19.6.0.0
[root@node2 19.6.0.0]# chown -R grid:oinstall ./grid

After changing the ownership, I ran gridSetup.sh as the grid owner, again with the Oracle variables unset. This is the same command I used for the 19.8 patching, but this time run from the 19.6 GI home.

[grid@node1 grid]$ pwd
/u01/app/19.6.0.0/grid
[grid@node1 grid]$ ./gridSetup.sh -switchGridHome -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2020-10-08_11-40-15AM.log

As a root user, execute the following script(s):
    1. /u01/app/19.6.0.0/grid/root.sh

Execute /u01/app/19.6.0.0/grid/root.sh on the following nodes: 
[node1, node2]

Run the scripts on the local node first. After successful completion, run the scripts in sequence on all other nodes.

Successfully Setup Software.

After gridSetup.sh finished, the only thing left to do was run the root.sh script on each node, starting with the local node.

[root@node1 ~]# /u01/app/19.6.0.0/grid/root.sh
Check /u01/app/19.6.0.0/grid/install/root_node1_2020-10-08_12-01-01-027843996.log for the output of root script

[root@node2 ~]# /u01/app/19.6.0.0/grid/root.sh
Check /u01/app/19.6.0.0/grid/install/root_node2_2020-10-08_12-09-00-516251584.log for the output of root script
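
Between node executions, I would confirm the stack came back up on the node just switched before moving on; the same check used earlier in this post works here (my addition at this step):

[root@node1 ~]# crsctl check cluster -all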

My final step was to verify that everything was running correctly from the 19.6 GI home.

[grid@node1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2701864972] and the complete list of patches 
[30489227 30489632 30557433 30655595 ] have been applied on the local node. 
The release patch string is [19.6.0.0.0].

[grid@node1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. 
The cluster upgrade state is [NORMAL]. The cluster active patch level is [2701864972].
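
As an extra check at the home level, you can also list the patches registered in the 19.6 GI home with OPatch. This is my addition and assumes ORACLE_HOME now points at /u01/app/19.6.0.0/grid:

[grid@node1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches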

[grid@node1 ~]$ ./rac_status.sh -a

		Cluster rene-ace-c

        Type      |      Name      |      node1      |      node2      |
  ---------------------------------------------------------------------
   asm            | asm            |      Online     |      Online     |
   asmnetwork     | asmnet1        |      Online     |      Online     |
   chad           | chad           |      Online     |      Online     |
   cvu            | cvu            |      Online     |        -        |
   dg             | DATA           |      Online     |      Online     |
   dg             | RECO           |      Online     |      Online     |
   network        | net1           |      Online     |      Online     |
   ons            | ons            |      Online     |      Online     |
   qosmserver     | qosmserver     |      Online     |        -        |
   vip            | node1          |      Online     |        -        |
   vip            | node2          |        -        |      Online     |
   vip            | scan1          |        -        |      Online     |
   vip            | scan2          |      Online     |        -        |
   vip            | scan3          |      Online     |        -        |
  ---------------------------------------------------------------------
    x  : Resource is disabled
       : Has been restarted less than 24 hours ago
   
      Listener    |      Port      |      node1      |      node2      |     Type     |
  ------------------------------------------------------------------------------------
   ASMNET1LSNR_ASM| TCP:1525       |      Online     |      Online     |   Listener   |
   LISTENER       | TCP:1521       |      Online     |      Online     |   Listener   |
   LISTENER_SCAN1 | TCP:1521       |        -        |      Online     |     SCAN     |
   LISTENER_SCAN2 | TCP:1521       |      Online     |        -        |     SCAN     |
   LISTENER_SCAN3 | TCP:1521       |      Online     |        -        |     SCAN     |
  ------------------------------------------------------------------------------------
   
   
        DB        |    Version    |      node1      |      node2      |    DB Type   |
  -------------------------------------------------------------------------------------
   renedev        | 19.6.0.0  (1) |        -        |       Open      |  SINGLE (P)  |
   reneqa         | 19.6.0.0  (2) |       Open      |       Open      |    RAC (P)   |
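
As an optional cleanup step that was not part of this walkthrough, once you are certain you won't switch back to 19.8, you could detach the 19.8 home from the inventory, mirroring the attach command used earlier. Remember to run it on all nodes:

[grid@node1 ~]$ /u01/app/19.6.0.0/grid/oui/bin/runInstaller -detachHome \
 -silent ORACLE_HOME="/u01/app/19.8.0.0/grid"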

I hope these two posts are helpful if you try this methodology. Should you face an error, be sure to let me know and I’ll try to help out.

Note: This was originally published on rene-ace.com.

