I will go through detailed steps here on how to recreate a disk group used by CRS.
Some things to bear in mind:
– I created this cluster based on another existing cluster, so I just followed the same patterns adopted in the existing one.
– Oracle Grid Infrastructure version is 12.1.0.2.
– SSDCRS disk group exists with External Redundancy and the intention here is to recreate it using Normal redundancy.
– When creating this cluster, I used runInstaller in silent mode, and in Oracle Grid Infrastructure 12.1.0.2 there was no option in the response file or on the command line to select Normal or High redundancy for the disk group holding the cluster files. Creating the cluster with an External redundancy disk group for the cluster files and recreating that disk group afterward was the only suitable option I found.
This blog post can also be split into smaller sections and used on its own to move the ASM spfile, the ASM password file, the voting files, or the Oracle Cluster Registry. Use it wisely.
Validating and preparing for the change
Important: Pay attention to the OS user you run each command as throughout this procedure: commands shown with a # prompt are run as root, and commands shown with a $ prompt are run as the Grid Infrastructure owner (shown here as grid; the node name node1 is a placeholder).
0. Before performing any change in the cluster, save an OCR backup:
[root@node1 ~]# . oraenv >>> +ASM1
[root@node1 ~]# ocrconfig -manualbackup
[root@node1 ~]# ocrconfig -showbackup
1. Check the current configuration of the OCR:
[root@node1 ~]# ocrcheck
2. Check the current configuration of Voting files:
[root@node1 ~]# crsctl query css votedisk
3. Check the current configuration of ASM password file and spfile:
[grid@node1 ~]$ . oraenv >>> +ASM1
[grid@node1 ~]$ srvctl config asm
[grid@node1 ~]$ asmcmd spget
Starting the changes
4. Create the disk group SSDDATA with the proper configuration specified by the client:
SQL> CREATE DISKGROUP SSDDATA EXTERNAL REDUNDANCY DISK
  '/dev/mapper/lun_oradisk_SSDDATA0000' NAME SSDDATA0000, '/dev/mapper/lun_oradisk_SSDDATA0001' NAME SSDDATA0001,
  '/dev/mapper/lun_oradisk_SSDDATA0002' NAME SSDDATA0002, '/dev/mapper/lun_oradisk_SSDDATA0003' NAME SSDDATA0003,
  '/dev/mapper/lun_oradisk_SSDDATA0004' NAME SSDDATA0004, '/dev/mapper/lun_oradisk_SSDDATA0005' NAME SSDDATA0005,
  '/dev/mapper/lun_oradisk_SSDDATA0006' NAME SSDDATA0006, '/dev/mapper/lun_oradisk_SSDDATA0007' NAME SSDDATA0007,
  '/dev/mapper/lun_oradisk_SSDDATA0008' NAME SSDDATA0008, '/dev/mapper/lun_oradisk_SSDDATA0009' NAME SSDDATA0009,
  '/dev/mapper/lun_oradisk_SSDDATA0010' NAME SSDDATA0010, '/dev/mapper/lun_oradisk_SSDDATA0011' NAME SSDDATA0011,
  '/dev/mapper/lun_oradisk_SSDDATA0012' NAME SSDDATA0012, '/dev/mapper/lun_oradisk_SSDDATA0013' NAME SSDDATA0013,
  '/dev/mapper/lun_oradisk_SSDDATA0014' NAME SSDDATA0014, '/dev/mapper/lun_oradisk_SSDDATA0015' NAME SSDDATA0015
  ATTRIBUTE 'AU_SIZE' = '4M', 'compatible.rdbms'='10.1', 'compatible.asm'='12.1.0.2';
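Note that a disk group created from SQL*Plus is mounted only on the local ASM instance, while the OCR and voting-file relocation in the next steps needs SSDDATA mounted on every node. A minimal check/mount sketch, assuming a second node named node2 (adjust for your node names):

[grid@node1 ~]$ asmcmd lsdg SSDDATA
[grid@node1 ~]$ srvctl start diskgroup -diskgroup SSDDATA -node node2
[grid@node1 ~]$ srvctl status diskgroup -diskgroup SSDDATA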
5. Relocate the OCR to the disk group SSDDATA:
[root@node1 ~]# . oraenv >>> +ASM1
[root@node1 ~]# ocrconfig -add +SSDDATA
[root@node1 ~]# ocrcheck
[root@node1 ~]# ocrconfig -delete +SSDCRS
6. Relocate Voting file to the disk group SSDDATA:
[root@node1 ~]# crsctl replace votedisk +SSDDATA
[root@node1 ~]# crsctl query css votedisk
7. Relocate ASM password file to disk group SSDDATA:
[grid@node1 ~]$ asmcmd pwget --asm
[grid@node1 ~]$ srvctl config asm -detail
[grid@node1 ~]$ asmcmd pwmove --asm +SSDCRS/orapwASM +SSDDATA/orapwASM
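After the move, a quick re-check of where ASM now expects its password file does not hurt; the same commands used before the move should now point to SSDDATA:

[grid@node1 ~]$ asmcmd pwget --asm
[grid@node1 ~]$ srvctl config asm -detail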
8. Relocate the ASM spfile to the disk group +SSDDATA:
[grid@node1 ~]$ . oraenv >>> +ASM1
[grid@node1 ~]$ sqlplus / as sysasm
SQL> create pfile='/tmp/initasm.ora' from spfile;
SQL> create spfile='+SSDDATA' from pfile='/tmp/initasm.ora';
[grid@node1 ~]$ $ORACLE_HOME/bin/gpnptool get -o- | xmllint --format - | grep -i spfile
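Before restarting anything, asmcmd can also confirm the spfile location now registered in the GPnP profile; the path returned should already start with +SSDDATA:

[grid@node1 ~]$ asmcmd spget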
9. Restart the cluster to see if it comes back with the new ASM parameter file:
[root@node1 ~]# . oraenv >>> +ASM1
[root@node1 ~]# crsctl stop cluster -all
[root@node1 ~]# crsctl start cluster -all
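Once the stack is back, it is worth confirming that all services returned on every node before continuing; one way to do that as root:

[root@node1 ~]# crsctl check cluster -all
[root@node1 ~]# crsctl stat res -t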
10. Check the parameter file again by querying the GPnP profile:
[grid@node1 ~]$ $ORACLE_HOME/bin/gpnptool get -o- | xmllint --format - | grep -i spfile
11. Make sure everything that was on the SSDCRS disk group is now on the SSDDATA disk group (re-run the checks above if needed), then drop the existing SSDCRS disk group. The DROP command raises an error if any file is still inside the disk group, so once you have confirmed that everything was moved or copied to SSDDATA, you can add the clause INCLUDING CONTENTS to the DROP command below (a sketch of that, with a couple of pre-checks, follows the command):
SQL> DROP DISKGROUP SSDCRS;
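DROP DISKGROUP also fails if another ASM instance still has SSDCRS mounted, so a couple of pre-checks help here. A hedged sketch, assuming the remaining nodes are named node2 and node3:

[grid@node1 ~]$ asmcmd ls -l +SSDCRS
[grid@node1 ~]$ srvctl stop diskgroup -diskgroup SSDCRS -node node2,node3
SQL> DROP DISKGROUP SSDCRS INCLUDING CONTENTS;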
12. Save a new OCR backup:
[root@node1 ~]# . oraenv >>> +ASM1
[root@node1 ~]# ocrconfig -manualbackup
[root@node1 ~]# ocrconfig -showbackup
13. Recreate the SSDCRS disk group with Normal redundancy and three failure groups:
CREATE DISKGROUP SSDCRS NORMAL REDUNDANCY
  FAILGROUP SSDXCRS DISK
    '/dev/mapper/lun_oradisk_SSDXCRS0000' NAME SSDXCRS0000, '/dev/mapper/lun_oradisk_SSDXCRS0001' NAME SSDXCRS0001,
    '/dev/mapper/lun_oradisk_SSDXCRS0002' NAME SSDXCRS0002, '/dev/mapper/lun_oradisk_SSDXCRS0003' NAME SSDXCRS0003,
    '/dev/mapper/lun_oradisk_SSDXCRS0004' NAME SSDXCRS0004, '/dev/mapper/lun_oradisk_SSDXCRS0005' NAME SSDXCRS0005,
    '/dev/mapper/lun_oradisk_SSDXCRS0006' NAME SSDXCRS0006, '/dev/mapper/lun_oradisk_SSDXCRS0007' NAME SSDXCRS0007,
    '/dev/mapper/lun_oradisk_SSDXCRS0008' NAME SSDXCRS0008, '/dev/mapper/lun_oradisk_SSDXCRS0009' NAME SSDXCRS0009,
    '/dev/mapper/lun_oradisk_SSDXCRS0010' NAME SSDXCRS0010, '/dev/mapper/lun_oradisk_SSDXCRS0011' NAME SSDXCRS0011,
    '/dev/mapper/lun_oradisk_SSDXCRS0012' NAME SSDXCRS0012, '/dev/mapper/lun_oradisk_SSDXCRS0013' NAME SSDXCRS0013,
    '/dev/mapper/lun_oradisk_SSDXCRS0014' NAME SSDXCRS0014, '/dev/mapper/lun_oradisk_SSDXCRS0015' NAME SSDXCRS0015
  FAILGROUP SSDYCRS DISK
    '/dev/mapper/lun_oradisk_SSDYCRS0040' NAME SSDYCRS0040, '/dev/mapper/lun_oradisk_SSDYCRS0041' NAME SSDYCRS0041,
    '/dev/mapper/lun_oradisk_SSDYCRS0042' NAME SSDYCRS0042, '/dev/mapper/lun_oradisk_SSDYCRS0043' NAME SSDYCRS0043,
    '/dev/mapper/lun_oradisk_SSDYCRS0044' NAME SSDYCRS0044, '/dev/mapper/lun_oradisk_SSDYCRS0045' NAME SSDYCRS0045,
    '/dev/mapper/lun_oradisk_SSDYCRS0046' NAME SSDYCRS0046, '/dev/mapper/lun_oradisk_SSDYCRS0047' NAME SSDYCRS0047,
    '/dev/mapper/lun_oradisk_SSDYCRS0048' NAME SSDYCRS0048, '/dev/mapper/lun_oradisk_SSDYCRS0049' NAME SSDYCRS0049,
    '/dev/mapper/lun_oradisk_SSDYCRS0050' NAME SSDYCRS0050, '/dev/mapper/lun_oradisk_SSDYCRS0051' NAME SSDYCRS0051,
    '/dev/mapper/lun_oradisk_SSDYCRS0052' NAME SSDYCRS0052, '/dev/mapper/lun_oradisk_SSDYCRS0053' NAME SSDYCRS0053,
    '/dev/mapper/lun_oradisk_SSDYCRS0054' NAME SSDYCRS0054, '/dev/mapper/lun_oradisk_SSDYCRS0055' NAME SSDYCRS0055
  FAILGROUP SSDZCRS DISK
    '/dev/mapper/lun_oradisk_SSDZCRS0080' NAME SSDZCRS0080, '/dev/mapper/lun_oradisk_SSDZCRS0081' NAME SSDZCRS0081,
    '/dev/mapper/lun_oradisk_SSDZCRS0082' NAME SSDZCRS0082, '/dev/mapper/lun_oradisk_SSDZCRS0083' NAME SSDZCRS0083,
    '/dev/mapper/lun_oradisk_SSDZCRS0084' NAME SSDZCRS0084, '/dev/mapper/lun_oradisk_SSDZCRS0085' NAME SSDZCRS0085,
    '/dev/mapper/lun_oradisk_SSDZCRS0086' NAME SSDZCRS0086, '/dev/mapper/lun_oradisk_SSDZCRS0087' NAME SSDZCRS0087,
    '/dev/mapper/lun_oradisk_SSDZCRS0088' NAME SSDZCRS0088, '/dev/mapper/lun_oradisk_SSDZCRS0089' NAME SSDZCRS0089,
    '/dev/mapper/lun_oradisk_SSDZCRS0090' NAME SSDZCRS0090, '/dev/mapper/lun_oradisk_SSDZCRS0091' NAME SSDZCRS0091,
    '/dev/mapper/lun_oradisk_SSDZCRS0092' NAME SSDZCRS0092, '/dev/mapper/lun_oradisk_SSDZCRS0093' NAME SSDZCRS0093,
    '/dev/mapper/lun_oradisk_SSDZCRS0094' NAME SSDZCRS0094, '/dev/mapper/lun_oradisk_SSDZCRS0095' NAME SSDZCRS0095
  ATTRIBUTE 'AU_SIZE' = '1M', 'compatible.rdbms'='12.1.0.2', 'compatible.asm'='12.1.0.2';
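Before moving the cluster files back, it may be worth confirming that the disks really ended up in the three intended failure groups. One way to check from the ASM instance:

SQL> SELECT d.failgroup, COUNT(*) AS disks
       FROM v$asm_disk d
       JOIN v$asm_diskgroup dg ON dg.group_number = d.group_number
      WHERE dg.name = 'SSDCRS'
      GROUP BY d.failgroup;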
Now move everything back to the SSDCRS disk group.
14. Relocate the OCR to the disk group SSDCRS:
[root@node1 ~]# . oraenv >>> +ASM1
[root@node1 ~]# ocrconfig -add +SSDCRS
[root@node1 ~]# ocrcheck
[root@node1 ~]# ocrconfig -delete +SSDDATA
15. Relocate Voting file to the disk group SSDCRS:
[root@node1 ~]# crsctl replace votedisk +SSDCRS
[root@node1 ~]# crsctl query css votedisk
16. Relocate ASM password file to disk group SSDCRS:
[grid@node1 ~]$ asmcmd pwget --asm
[grid@node1 ~]$ srvctl config asm -detail
[grid@node1 ~]$ asmcmd pwmove --asm +SSDDATA/orapwASM +SSDCRS/orapwASM
17. Relocate spfile to the disk group +SSDCRS:
[grid@node1 ~]$ . oraenv >>> +ASM1
[grid@node1 ~]$ sqlplus / as sysasm
SQL> create pfile='/tmp/initasm.ora' from spfile;
SQL> create spfile='+SSDCRS' from pfile='/tmp/initasm.ora';
[grid@node1 ~]$ $ORACLE_HOME/bin/gpnptool get -o- | xmllint --format - | grep -i spfile
18. Restart the cluster to see if it comes back with the new ASM parameter file:
[root@node1 ~]# . oraenv >>> +ASM1
[root@node1 ~]# crsctl stop cluster -all
[root@node1 ~]# crsctl start cluster -all
Checking everything after the changes
19. Check the parameter file again by querying the GPnP profile:
[grid@node1 ~]$ $ORACLE_HOME/bin/gpnptool get -o- | xmllint --format - | grep -i spfile
20. After ensuring all files are in their correct places, save a new OCR backup:
[root@node1 ~]# . oraenv >>> +ASM1
[root@node1 ~]# ocrconfig -manualbackup
[root@node1 ~]# ocrconfig -showbackup
Fortunately, from 12cR2 onwards, there is an option to create the disk groups with proper redundancy using response files.
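For reference, this is roughly what the relevant disk group section of a 12.2 Grid Infrastructure response file (gridsetup.rsp) looks like; the values below are illustrative only and not taken from this cluster:

oracle.install.asm.diskGroup.name=SSDCRS
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/mapper/lun_oradisk_SSDXCRS0000,SSDXCRS,/dev/mapper/lun_oradisk_SSDYCRS0040,SSDYCRS,/dev/mapper/lun_oradisk_SSDZCRS0080,SSDZCRS
oracle.install.asm.diskGroup.disks=/dev/mapper/lun_oradisk_SSDXCRS0000,/dev/mapper/lun_oradisk_SSDYCRS0040,/dev/mapper/lun_oradisk_SSDZCRS0080
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper/lun_oradisk_*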
I hope this is useful for you. Let me know if you run into any issues.
Comments
Hey Franky,
Thank you for sharing this..
So basically, we are creating a temporary diskgroup +SSDDATA and relocating OCR, VD, password file and spfile from +SSDCRS, dropping +SSDCRS, recreating it with normal redundancy and moving ocr, vd, password file and spfile back to +SSDCRS.
Point 13 states – Recreate the SSDCRS disk group with high redundancy:
Create diskgroup command is as follows –
CREATE DISKGROUP SSDCRS NORMAL REDUNDANCY
If SSDCRS has 3 failure groups (SSDXCRS, SSDYCRS and SSDZCRS), then the redundancy should be high.
Normal redundancy typically has 2 way mirroring by default and high redundancy should have 3 way mirroring.
Please let me know if providing 3-way mirroring (SSDXCRS, SSDYCRS and SSDZCRS) while creating the disk group (SSDCRS) with normal redundancy works, as I have never tried it.
Regards,
Maaz Khan
Hi Maaz,
Thank you for catching this. It was indeed a typo in my post in bullet 13: it is a Normal redundancy disk group with 3 failure groups. This is the correct setup for the CRS disk group, since each voting file is placed on a different disk in a separate failure group.
When using a Normal redundancy disk group for CRS you need at least 3 failure groups, while with High redundancy you must have at least 5 failure groups. That requirement comes from the voting disks, as explained above.
[root@node1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                  File Name                          Disk group
--  -----    -----------------                  ---------                          ----------
 1. ONLINE   c18a6b6248c44fa2bf5fb68a673100d2   (/dev/oracleasm/disks/SSDXCRS0300) [SSDCRS]
 2. ONLINE   e10d437ef2e64f54bf49d5149dc596c0   (/dev/oracleasm/disks/SSDYCRS0340) [SSDCRS]
 3. ONLINE   241ce04efbac4fedbfbfb4a8f8c56617   (/dev/oracleasm/disks/SSDZCRS0380) [SSDCRS]
The number of failure groups is independent of the number of data copies mirrored across the FGs. For normal redundancy you need a minimum of 2 FGs and for high redundancy the minimum is 3, which follows from the number of copies. For the CRS disk group, however, the voting disks require at least 3 FGs with normal redundancy and 5 FGs with high redundancy.
In Exadata, for example, each storage server is a failure group, so on a half rack you can have a normal redundancy disk group across 7 storage servers or a high redundancy disk group across the same 7 storage servers; in that case the voting disks are handled differently.
Hopefully I was clear enough. Let me know if you need further clarification. I recommend reading up on failure groups and the Partner Status Table (PST: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ostmg/mirroring-diskgroup-redundancy.html#GUID-9AAB5AE7-D819-4CDA-9552-4B41C3383C73). Oracle recommends a minimum of 3 failure groups for normal redundancy and 5 failure groups for high redundancy.
Best regards,
Franky Faust
Thank you Franky for clarifying.. :)
Yes, here we are dealing with crs diskgroups..
Regards,
Maaz