War Story: Applying Bundle Patch 1 to EM 12 Cloud Control


We have a few clients already using Enterprise Manager 12c Cloud Control. The interface and navigation have improved a lot from the 11g version in my opinion. However, as with any new release of anything, quite a few bugs still need to be fixed.

After working with Oracle on some of these bugs last week, we were asked to apply Bundle Patch 1 (BP1) to one of our clients’ installations.

The first thing I noticed when I started looking for information about BP1 was the number of warnings from different people in MOS and around the internet. The general advice was to follow the instructions closely and take every possible precaution. And so we did. All the preparation for the patching was completed following MOS note 1393173.1 closely, which is a long and very detailed document on how to apply BP1.

With all the safeguards in place (backups of everything), we started the patching. There are actually three patches to be applied: 13242773 (CC 12c Bundle Patch 1), 12321965 (Web Services Manager), and 13470978 (JDeveloper), as explained in the Oracle instructions. The steps to be followed are:

  1. Install Bundle Patch 1 on the OMS server;
  2. Upgrade all deployed plug-ins to the latest release on the OMS;
  3. Apply Bundle Patch 1 on the Management Agents;
  4. Upgrade all deployed plug-ins to the latest release on the Management Agents.

The patch installation started smoothly: Step 1 completed successfully, with the three patches applied to the OMS server without any hitches. At this point, I thought the major risks were already behind us. I was wrong.
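
For readers who haven’t seen this step before, its general shape is sketched below. Treat it as an outline only, with hypothetical staging paths; the individual patch READMEs and MOS note 1393173.1 remain the authoritative instructions.

# A rough outline only -- the patch READMEs and MOS note 1393173.1 are authoritative
# Stop the OMS (each patch README states whether the Admin Server should stay up)
$OMS_HOME/bin/emctl stop oms

# Apply the OMS bundle patch with OPatch from its unzipped staging directory
cd /u01/stage/13242773                      # hypothetical staging location
$OMS_HOME/OPatch/opatch apply

# Patches 12321965 and 13470978 follow the same pattern, but target the
# middleware homes named in their READMEs (use "opatch apply -oh <home>")

# Bring the OMS back up and confirm it is healthy
$OMS_HOME/bin/emctl start oms
$OMS_HOME/bin/emctl status oms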

Plug-in upgrades

The next step was to upgrade all the plug-ins on the OMS to their latest versions before applying BP1 to the agents. The first plug-in (Oracle Database) was upgraded without problems. However, I noticed that the complete upgrade was going to take much longer than I expected, since a single plug-in upgrade had taken me 20 minutes. I had at least three more to go, and they had to be applied individually. Each plug-in upgrade also requires a restart of the entire OMS stack, which adds to the delay.
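
If you want to check up front exactly which plug-ins and versions are deployed on your OMS before planning the upgrades, emcli can list them. A small sketch, assuming you log in as SYSMAN and that the list_plugins_on_server verb is available in your emcli release:

# Log in to emcli, sync it with the OMS, and list the plug-ins deployed on the server
emcli login -username=sysman
emcli sync
emcli list_plugins_on_server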

Nevertheless, it completed successfully, which was positive. We then proceeded with the upgrade of the second plug-in (Oracle Exadata), which was when things started to go pear-shaped.

This second plug-in took longer than the first one but seemed to be progressing without problems. It apparently completed all the installation steps successfully and began starting the OMS. This step ran for almost an hour, as the step log below shows, and eventually failed.

Since the OMS startup is a step of the upgrade process, the plug-in upgrade is considered to have failed as well, and an attempt is made to recover the middle tier to its original state. The logs show that the recovery completed successfully; however, after that, the OMS was left in a fuzzy state and couldn’t be started anymore.

$ emctl status oms -details
...
Oracle Management Server is not functioning because:
A recent plug-in upgrade operation has failed.  Follow these steps to recover:
1. Restore/recover the repository database (For example from a backup)
2. Start the Management server using 'emctl start oms' command. Oracle Management Server cannot be started without recovering the repository database.
After identifying and fixing the issues reported in log files you can retry plug-in upgrade.
For more details, Refer to 'Plug-in Deployment' section in the Enterprise Manager Advanced Configuration Guide.
The details of failed step and log file locations are given below.
Plugin Deployment/Undeployment Status
Destination          : OMS - hostname:service
Plugin Name          : Oracle Exadata
Version              : 12.1.0.2.0 [u120427]
ID                   : oracle.sysman.xa
Content              : Plugin
Action               : Deployment
Status               : Failed
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step                                     Start Time                End Time                  Status
---------------------------------------- ------------------------- ------------------------- ----------
Start deployment                         5/31/12 6:50:25 PM EST    5/31/12 6:50:25 PM EST    Success
Initialize                               5/31/12 6:50:36 PM EST    5/31/12 6:50:43 PM EST    Success
Install software                         5/31/12 6:50:45 PM EST    5/31/12 6:50:48 PM EST    Success
Validate plug-in home                    5/31/12 6:50:49 PM EST    5/31/12 6:50:49 PM EST    Success
Perform custom pre-configuration         5/31/12 6:50:49 PM EST    5/31/12 6:50:52 PM EST    Success
Check mandatory patches                  5/31/12 6:50:52 PM EST    5/31/12 6:50:54 PM EST    Success
Generate metadata SQL                    5/31/12 6:50:54 PM EST    5/31/12 6:51:01 PM EST    Success
Pre-configure repository                 5/31/12 6:51:01 PM EST    5/31/12 6:51:01 PM EST    Success
Pre-register DLF                         5/31/12 6:51:01 PM EST    5/31/12 6:51:01 PM EST    Success
Stop management server                   5/31/12 6:51:02 PM EST    5/31/12 6:51:59 PM EST    Success
Configure repository                     5/31/12 6:51:59 PM EST    5/31/12 6:59:30 PM EST    Success
Register DLF                             5/31/12 6:59:30 PM EST    5/31/12 7:01:28 PM EST    Success
Configure middle tier                    5/31/12 7:01:40 PM EST    5/31/12 7:06:38 PM EST    Success
OPSS jazn policy migration               5/31/12 7:06:40 PM EST    5/31/12 7:06:40 PM EST    Success
Register metadata                        5/31/12 7:06:40 PM EST    5/31/12 7:07:03 PM EST    Success
Perform custom post-configuration        5/31/12 7:07:03 PM EST    5/31/12 7:07:03 PM EST    Success
Update inventory                         5/31/12 7:07:03 PM EST    5/31/12 7:07:04 PM EST    Success
Start management server                  5/31/12 7:07:05 PM EST    5/31/12 8:06:36 PM EST    Failed
Recover Middletier                       5/31/12 8:06:45 PM EST    5/31/12 8:11:09 PM EST    Success
---------------------------------------- ------------------------- ------------------------- ----------
Diagnostic information for the failed step
---------------------------------------- ------------------------- ------------------------- ----------
Step name            : Start management server
...
Error message        : Could not start Management Server because of error: Unable to start OMS
---------------------------------------- ------------------------- ------------------------- ----------

Recovering from the failed plug-in installation

We contacted Oracle Support, and they went through all the logs and the installation steps, which we had thoroughly documented.

They concluded that there were no errors with the upgrade of the Exadata plug-in itself. The issue was that the OMS took too long to start after the plug-in upgrade, due to “external factors”, and the upgrade process essentially timed out and didn’t recover well from that. They recommended restoring the database to its state before the Oracle Exadata plug-in upgrade and trying the process again.

Unfortunately, we hadn’t taken intermediate backups of the database before each plug-in upgrade, since we didn’t realize that would be necessary. The only backup we had was the one taken prior to starting the upgrade. So, back to square one we went: all the backups were restored and we got the OMS up and running again, still unpatched.
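
For completeness, the restore itself was nothing exotic: stop the OMS stack, take the repository database back to a point before the failed upgrade, and bring everything up again. Below is a minimal sketch assuming an RMAN backup and a hypothetical restore time; your backup strategy and exact commands may well differ.

# Stop the entire OMS stack before touching the repository database
$OMS_HOME/bin/emctl stop oms -all

# Point-in-time restore of the repository to just before the plug-in upgrade
# (the time below is hypothetical -- use your own restore point)
rman target / <<EOF
startup force mount;
run {
  set until time "to_date('2012-05-31 18:40:00','YYYY-MM-DD HH24:MI:SS')";
  restore database;
  recover database;
}
alter database open resetlogs;
EOF

# Bring the OMS back up
$OMS_HOME/bin/emctl start oms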

Second attempt

After a few discussions with Oracle Support to decide how to proceed with the next attempt, we slightly modified our original plan and were ready to try again. The changes to the plan were:

  • This time, we added an extra backup to the plan, taken just before starting the OMS plug-in upgrades. The latest version of the upgrade document in MOS note 1393173.1 was also updated with this extra step.
  • Oracle also recommended installing an additional patch (13903572). This patch provides a new feature to deploy all plug-ins in one go via the command line. We modified our plan to use that option instead, which should reduce the deployment time significantly.

With this option, we can deploy all the plug-ins at once with the following command line:

# check pre-reqs first
emcli deploy_plugin_on_server -plugin="oracle.sysman.db;oracle.sysman.xa;oracle.sysman.emas;oracle.sysman.mos" -sys_password=<sys password> -prereq_check
# deploy plug-ins
emcli deploy_plugin_on_server -plugin="oracle.sysman.db;oracle.sysman.xa;oracle.sysman.emas;oracle.sysman.mos" -sys_password=<sys password>

The list of plug-ins above is specific to our case. If you’re using this method in your environment, make sure you list all the plug-ins you have installed on the OMS server. Please consult Oracle Support before using this option.

The BP1 installation started smoothly again, without any issues. After we finished applying the BP1 patches (and the additional one recommended by Oracle), we backed up the repository and all the software locations. We then started the mass upgrade of all the plug-ins in one go, fingers crossed, hoping this wouldn’t blow the OMS up again.
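
For reference, what I mean by backing up the repository and the software locations is roughly the sketch below. The paths are hypothetical; replace them with your own middleware home, OMS instance home, and inventory locations.

# Back up the repository database with RMAN
rman target / <<EOF
backup database plus archivelog;
EOF

# Back up the OMS software locations with the OMS stack down (hypothetical paths)
$OMS_HOME/bin/emctl stop oms -all
tar -czf /backup/middleware_$(date +%Y%m%d).tar.gz /u01/app/oracle/Middleware
tar -czf /backup/gc_inst_$(date +%Y%m%d).tar.gz /u01/app/oracle/gc_inst
tar -czf /backup/oraInventory_$(date +%Y%m%d).tar.gz /u01/app/oraInventory
$OMS_HOME/bin/emctl start oms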

This step was by far the longest of the entire upgrade process, taking more than 1.5 hours to complete. Nevertheless, this time it completed without a hitch, upgrading all the plug-ins successfully.

Patching the management agents

The next step was to apply BP1 to all the management agents. This is done through an Enterprise Manager Patch Plan. I created the plan with all the targets to be patched and the four patches that need to be applied to the agents (13242776, 13491785, 13550561, and 13550565). I had run the validation of the plan a few days before the implementation, and it had completed successfully. Just to be sure that everything was still okay, I verified that all the targets were reachable and ran the validation again, which completed successfully.

I then initiated the deployment of the plan. Surprisingly, at the end I found that the deployment had failed for quite a few of the targets, even though the validation had succeeded. Most of the failures were due to connectivity problems. I checked the agents and they seemed fine; to be safe, I restarted them, cleared their state, and ensured that they were communicating properly with the OMS.
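
The clean-up on each suspect agent was the usual routine, sketched below. The commands are run as the agent owner on the target host, with $AGENT_HOME pointing to the agent’s core home.

# Bounce the agent, clear its state, and force a fresh upload to the OMS
$AGENT_HOME/bin/emctl stop agent
$AGENT_HOME/bin/emctl clearstate agent
$AGENT_HOME/bin/emctl start agent
$AGENT_HOME/bin/emctl upload agent
$AGENT_HOME/bin/emctl status agent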

I checked with Oracle Support, and they advised checking the inventory on each of the managed targets to confirm the success or failure of the deployment. The four patches above are installed in different Oracle homes. The commands to check them are shown below:

# Patches installed in the agent home
$AGENT_HOME/OPatch/opatch lspatches -oh $AGENT_HOME -id 13242776 -verify
$AGENT_HOME/OPatch/opatch lspatches -oh $AGENT_HOME -id 13491785 -verify
# Patches installed in the agent plug-in homes
$AGENT_HOME/OPatch/opatch lspatches -oh $AGENT_HOME/../../plugins/oracle.sysman.oh.discovery.plugin_12.1.0.1.0 -id 13550561 -verify
$AGENT_HOME/OPatch/opatch lspatches -oh $AGENT_HOME/../../plugins/oracle.sysman.oh.agent.plugin_12.1.0.1.0 -id 13550565 -verify

After checking all the managed targets, I verified that some of the targets for which the deployment had reported failure were actually patched successfully.

So I prepared another Patch Plan, including only the remaining targets, validated it, and re-deployed those patches. There were no errors this time, and the patching of all the management agents was completed.

The rest of the patching plan, which consisted of upgrading the plug-ins on the management agents, was completed without any issues and very quickly. EM12c BP1 was finally installed!

All in all

I haven’t used the EM console much since the patching, but I’ve already noticed that most, if not all, of the UI rendering problems I was experiencing before seem to be gone now. In terms of other functionality, I’m not yet sure of the improvements. I still have to retest the original problem that prompted the installation of BP1 in the first place.

Based on the experience of this patching, if you’re looking into applying BP1 to your EM 12 Cloud Control, please consider the following:

  1. Read MOS note 1393173.1 carefully and follow all the steps in that document. Do not skip any steps, and make sure you have the latest version of the document.
  2. Consider taking additional backups of your repository and software locations at different points during the installation process, especially before starting the OMS plug-in upgrades.
  3. Open a proactive SR with Oracle and ask for advice on patching your environment. Refer to the additional patch 13903572 and ask whether you should use it, and whether the command-line option to mass-upgrade all the plug-ins in one go is appropriate for you.
  4. Ensure that all the Management Agents are healthy before starting the deployment of the patches to them. If you’re not sure, bounce them beforehand, just in case.
  5. If the deployment fails for some of the targets, check the inventory of each target as explained above and re-deploy the patches only to the targets where they are not present (see the sketch after this list).
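
If you have many agent hosts, one way to spot-check the agent-home patches on all of them is a simple loop over ssh. This is a hypothetical sketch, assuming passwordless ssh as the oracle user, made-up host names, and the same agent home path on every host; the plug-in home patches (13550561 and 13550565) would need their own -oh paths, as shown earlier.

# Hypothetical host list and agent home -- adjust to your environment
AGENT_HOME=/u01/app/oracle/agent12c/core/12.1.0.1.0
for host in emhost01 emhost02 emhost03; do
  echo "=== $host ==="
  ssh oracle@$host "$AGENT_HOME/OPatch/opatch lsinventory -oh $AGENT_HOME | grep -E '13242776|13491785'"
done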

Thanks to the Oracle Support team!

Regards,
Andre

About the Author

DBA since 1998, having worked with Oracle from version 7.3.4 to the latest one. Working at Pythian since 2009.

8 Comments

Gorjan Todorovski
June 19, 2012 12:59 am

Nice… Thanks Andre!

André Araujo
June 20, 2012 8:15 am

My pleasure, mate. Thank you for helping with the patching!!!

Geert De Paep
June 19, 2012 3:52 am

Thank you very much for taking the time to write down your experiences. I am sure that many others will benefit from this. I also hope that the Oracle guys read this and take it seriously, so that patching goes smoothly in the future. Personally, I expect patching to be something straightforward, without any need to read tons of documents and make various intermediate backups. After all, OEM 12c is positioned as a strategic product and should be fully reliable, instead of failing to start after a patch. The time to patch should also be much shorter.

André Araujo
June 20, 2012 8:25 am

Thanks for your comment, Geert. I’m 100% with you on improving the patching experience for OEM 12c. Simpler, more reliable patching is a must-have, and I hope Oracle puts some effort into improving it.

Hi,

I have an issue: it always fails in the validation state, resulting in one error for the Oracle home without any details. When I show the details, I find:

Task: RunOnAgentTarget odcagent12c_1_auohsrigs02 error

Could you advise, please?

André Araujo
October 1, 2012 11:41 pm

Thanks for your comment, Randa. Unfortunately, I didn’t hit the same issue and can’t advise on that.

I’d recommend opening an Oracle SR for this problem.

Regards,
Andre

I want to apply the patch on 12c Cloud Control. Can you please tell me the step-by-step process on Windows?

Thank you,
deepthi

André Araujo
March 11, 2013 3:20 pm

Hi, Deepthi,

Unfortunately, I don’t have detailed steps for Windows. I’d suggest you follow MOS note 1393173.1, as mentioned in the post above.

Regards,
Andre
