Are you relying on guaranteed restore points (GRP) as a fallback plan for your migration or upgrade strategy? If you’re using RAC, especially before 12.2, be careful!
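For context, a guaranteed restore point is typically created before the upgrade so the database can be flashed back if something goes wrong. A minimal sketch (the restore point name is illustrative):

```sql
-- Before the upgrade: create a guaranteed restore point
-- (the name pre_upgrade_grp is just an example)
CREATE RESTORE POINT pre_upgrade_grp GUARANTEE FLASHBACK DATABASE;

-- Fallback: flash the database back and open with RESETLOGS
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT pre_upgrade_grp;
ALTER DATABASE OPEN RESETLOGS;
```

On RAC, the flashback is run from a single instance with the database mounted; this is exactly the startup path where the ORA-29702 below bites.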
While performing a non-prod upgrade with the AutoUpgrade tool, I wanted to roll the database back after the upgrade completed and go through the process again. This is what happened:
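For reference, AutoUpgrade itself creates a guaranteed restore point during deploy and can drive the rollback for you. A sketch of the commands (config file name and job id are illustrative, and the restore option may vary by AutoUpgrade version):

```shell
# Run the upgrade (config file name is an example)
java -jar autoupgrade.jar -config DB12.cfg -mode deploy

# Roll a finished job back to its guaranteed restore point
# (job id 100 is illustrative)
java -jar autoupgrade.jar -config DB12.cfg -restore -jobs 100
```

It was this restore step, flashing back and restarting the instance, that surfaced the error below.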
```
SQL> startup
ORA-29702: error occurred in Cluster Group Service operation
```
While looking for more information, I found this blog post from Mike Dietrich, which I had missed last year: https://mikedietrichde.com/2020/11/13/ora-29702-and-your-instance-does-not-startup-in-the-cluster-anymore/.
This means my database isn’t starting anymore! Wow, I’m glad we’re in the testing phase!
The problem is caused by Bug 31561819 – “Incompatible maxmembers at CRSD level causing database instance not able to start.”
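A quick way to check whether the fix is already in place is to look for the patch number in the home's inventory. A sketch (assumes `ORACLE_HOME` is set to the home you want to check):

```shell
# List installed one-off patches and look for the fix for Bug 31561819
$ORACLE_HOME/OPatch/opatch lspatches | grep 31561819
```

No output means the patch is not installed in that home.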
As per Mike’s post, “you don’t need to even restore or flashback a database to hit this error. A simple instance in NOMOUNT state leads to the same error. Without even any datafile.”
The bug is fixed in the October 2020 OCW Release Updates:
- 19.9.0.0.201020 (Oct 2020) OCW RU
- 18.12.0.0.201020 (Oct 2020) OCW RU
- 12.2.0.1.201020 (Oct 2020) OCW RU
The takeaway: include this patch BEFORE starting any move. If you are on these versions, apply it right away!
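Applying the fix follows the usual one-off patch flow. A rough sketch (the staging path is illustrative, and since this is an OCW fix the exact procedure depends on whether you are patching a Grid Infrastructure home, where `opatchauto` as root is the norm, or a database home):

```shell
# Unzip the patch into a staging area (path is an example)
cd /u01/stage/31561819

# Apply to the current home; check conflicts first
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
$ORACLE_HOME/OPatch/opatch apply
```

As always, read the patch README for the authoritative steps for your configuration.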
In my case, this applies to a 12.1 to 19c upgrade, so the fix isn't available for the source home (12.1 is out of support and there's no Extended Support in place). Because of that, I had to consider alternative fallback plans, such as a physical standby. But that's a topic for another post.
Also, be aware of the recent change regarding restore point propagation in 19c, as described in MOS note Automatic Propagate Restore Points from Primary to Standby site in 19C (Doc ID 2463082.1).
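You can see the effect of this propagation by querying the restore points on the standby. A sketch, assuming the 19c `REPLICATED` column is present in your version:

```sql
-- On the standby, restore points propagated from the primary
-- appear with a "_PRIMARY" suffix on the name and are flagged
-- by the REPLICATED column (new in 19c)
SELECT name, guarantee_flashback_database, replicated
  FROM v$restore_point;
```

Worth checking in a Data Guard environment before assuming a restore point exists only where you created it.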
You should take the following steps:
- Apply this patch if you can!
- If not, be very careful with your fallback plans and as usual: test, test and test!
See you next post!
If you still have any questions, or you have thoughts about this post, please leave them in the comments.
Interested in working with Matheus? Schedule a tech call.