This seventh post digs into some of the silent installation commands of an 11.1 RAC. For a complete series agenda up to now, see below:
- Installation of 10.2 and 11.1 databases
- Patches of 10.2 and 11.1 databases
- Cloning software and databases
- Install a 10.2 RAC database
- Add a node to a 10.2 RAC database
- Remove a node from a 10.2 RAC database
- Install an 11.1 RAC database (this post!)
- Add a node to an 11.1 RAC database
- Remove a node from an 11.1 RAC database
- A ton of other stuff you should know
As with the installation of a 10.2 RAC database, this post shows how to (1) install the 11.1 clusterware, (2) install the 11.1 database software, and (3) create a RAC database. It doesn't explore any Patch Set upgrade, since 11.1.0.7 is not out yet. It does, however, look at another interesting question: how to upgrade the 10.2 clusterware to 11.1, since that has to be done in place.
So let’s get into it.
Checking the prerequisites
As with 10.2, the best way to start is definitely to check, double- and triple-check that all the prerequisites are met. You can refer to the 10g post to find more about how to use RDA and the CVU for this purpose. Check also the installation documentation for your platform and Metalink Note 169706.1.
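If you want to script that pre-check, the Cluster Verification Utility shipped with the clusterware distribution can be run directly from the staging area. Below is a minimal sketch using the node names of this series; adapt the node list to your own cluster:
$ cd clusterware
$ ./runcluvfy.sh stage -pre crsinst \
   -n rac-server1,rac-server2,rac-server3,rac-server4 \
   -verbose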
Install Oracle 11.1 Clusterware
Once you’ve made sure all the prerequisites are met, you can install or upgrade the 11.1 clusterware.
Install the 11.1 Clusterware from scratch
The steps and commands to install the 11.1 clusterware are exactly the same as the ones to install the 10.2 clusterware. The only difference is due to a typo in the crs.rsp response file that comes with the 11.1.0.6 distribution: the FROM_LOCATION parameter doesn't point to the correct location. To overcome this issue, just add the parameter to the runInstaller command line. Below is the syntax that matches that of the 10.2 post; refer to it for more detail about the meaning of the parameters.
cd clusterware
export DISTRIB=`pwd`
echo $DISTRIB
./runInstaller -silent \
   -responseFile $DISTRIB/response/crs.rsp \
   FROM_LOCATION=$DISTRIB/stage/products.xml \
   ORACLE_HOME="/u01/app/crs" \
   ORACLE_HOME_NAME="OraCrsHome" \
   s_clustername="rac-cluster" \
   sl_tableList={"rac-server1:rac-server1-priv:rac-server1-vip:N:Y",\
"rac-server2:rac-server2-priv:rac-server2-vip:N:Y",\
"rac-server3:rac-server3-priv:rac-server3-vip:N:Y",\
"rac-server4:rac-server4-priv:rac-server4-vip:N:Y"} \
   ret_PrivIntrList={"bond0:10.0.0.0:1","bond1:192.168.1.0:2",\
"bond2:10.1.0.0:3"} \
   n_storageTypeOCR=1 \
   s_ocrpartitionlocation="/dev/sdb1" \
   s_ocrMirrorLocation="/dev/sdc1" \
   n_storageTypeVDSK=1 \
   s_votingdisklocation="/dev/sdb2" \
   s_OcrVdskMirror1RetVal="/dev/sdc2" \
   s_VdskMirror2RetVal="/dev/sdd1"
Once the clusterware is installed, you only have to connect as root on each of the servers and run the orainstRoot.sh and root.sh scripts:
rac-server1# /u01/app/oraInventory/orainstRoot.sh
rac-server2# /u01/app/oraInventory/orainstRoot.sh
rac-server3# /u01/app/oraInventory/orainstRoot.sh
rac-server4# /u01/app/oraInventory/orainstRoot.sh
rac-server1# /u01/app/crs/root.sh
rac-server2# /u01/app/crs/root.sh
rac-server3# /u01/app/crs/root.sh
rac-server4# /u01/app/crs/root.sh
Note that, unlike what used to happen with 10.2, using a private network (i.e. 192.168.x.x, 10.x.x.x, or 172.[16-31].x.x) should not affect the installation.
Upgrade your 10.2 Clusterware to 11.1
While the installation procedure didn't change between 10.2 and 11.1, the clusterware upgrade differs slightly from applying a patch set on top of the clusterware. The principle stays the same, however: (1) the upgrade has to be applied in place, and (2) it can be done in a rolling fashion. The way you do it now is:
- Stop the clusterware and its managed resources on one or several nodes so that you can apply the upgrade on top of them.
- Apply the 11.1 release from one of the stopped nodes to all the stopped nodes.
- Run the rootupgrade script on all the nodes that were upgraded.
- Repeat (1), (2), and (3) for another set of servers, until you've upgraded all of them.
In the case of the RAC we installed in this series's article on the 10.2 RAC install, one way to do the upgrade would be to upgrade rac-server1 as a first step, then upgrade rac-server2, rac-server3, and rac-server4 all together as a second step. Obviously, you could also upgrade all the servers together or one by one, depending on your requirements. Let's have a look at the syntax for the first scenario.
Important Note: The Clusterware has to be 10.2.0.3 or higher to upgrade to 11.1.0.6.
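To confirm that requirement is met before you start, you can query the clusterware versions from any node; a quick check, assuming the CRS home used throughout this series:
$ /u01/app/crs/bin/crsctl query crs activeversion
$ /u01/app/crs/bin/crsctl query crs softwareversion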
Step 1: Prepare rac-server1 for the upgrade
You first have to push the clusterware distribution to the server and unzip it. Once this is done, you should run the preupdate.sh script as root, with the clusterware ORACLE_HOME and owner as parameters. Below is an example of the associated syntax:
rac-server1$ cd clusterware
rac-server1$ export DISTRIB=`pwd`
rac-server1$ su root
rac-server1# cd $DISTRIB/upgrade
rac-server1# ./preupdate.sh -crshome /u01/app/crs -crsuser oracle
This step will stop all the managed resources and the clusterware.
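If you want to make sure nothing is left running before launching the installer, a quick check (it should report that the clusterware is not running on the node):
rac-server1# /u01/app/crs/bin/crsctl check crs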
Step 2: Install the 11.1 clusterware on rac-server1
Connect as oracle (or, if it's not oracle, as the clusterware owner) and run a command like the one below:
$ cd clusterware
$ export DISTRIB=`pwd`
$ ./runInstaller -silent \
   -responseFile $DISTRIB/response/crs.rsp \
   FROM_LOCATION=$DISTRIB/stage/products.xml \
   REMOTE_NODES={} \
   ORACLE_HOME=/u01/app/crs \
   ORACLE_HOME_NAME="OraCrsHome"
If you've been reading this series, -silent, -responseFile, ORACLE_HOME, and ORACLE_HOME_NAME will be familiar to you. FROM_LOCATION is in the command because of a typo in the crs.rsp file that comes with 11.1.0.6 for Linux x86: it doesn't point to the right location. REMOTE_NODES is the list of nodes, in addition to the local node, onto which you'll install the 11.1 clusterware. In this case, because we apply the upgrade on rac-server1 only, which is the local node, the list has to be empty.
Step 3: Apply the rootupgrade script
Once you've applied the 11.1 release on top of the 10.2 clusterware on that first node, just run the rootupgrade script. It will complete the upgrade of that node and restart all the resources. As root:
rac-server1# cd /u01/app/crs/install
rac-server1# ./rootupgrade
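You can then verify, still from rac-server1, that the software version reported for the node is now 11.1 while the cluster active version stays at 10.2 until all the nodes have been upgraded; for example:
rac-server1$ /u01/app/crs/bin/crsctl query crs softwareversion
rac-server1$ /u01/app/crs/bin/crsctl query crs activeversion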
Step 4: Prepare the other servers for the upgrade
We will then apply the upgrade from rac-server2 to the three remaining servers. To proceed, push the clusterware distribution to rac-server2 and the preupdate.sh script to the three servers. Run that script as root on those three servers as you did on the first one:
rac-server2# /tmp/preupdate.sh -crshome /u01/app/crs -crsuser oracle
rac-server3# /tmp/preupdate.sh -crshome /u01/app/crs -crsuser oracle
rac-server4# /tmp/preupdate.sh -crshome /u01/app/crs -crsuser oracle
This step will stop all the managed resources and the clusterware.
Step 5: Install the 11.1 clusterware on rac-server2, rac-server3, and rac-server4 all together
Connect as oracle (or, if it's not oracle, as the clusterware owner) on the server you'll use to do the install, and run a command like the one below:
rac-server2$ cd clusterware
rac-server2$ export DISTRIB=`pwd`
rac-server2$ ./runInstaller -silent \
   -responseFile $DISTRIB/response/crs.rsp \
   FROM_LOCATION=$DISTRIB/stage/products.xml \
   REMOTE_NODES={rac-server3,rac-server4} \
   ORACLE_HOME=/u01/app/crs \
   ORACLE_HOME_NAME="OraCrsHome"
REMOTE_NODES is used to list the nodes, in addition to the local node, onto which you'll install the clusterware; the local node will also be installed.
Step 6: Apply the rootupgrade script
Once you've applied the 11.1 release on top of the 10.2 clusterware on those three nodes, just run the rootupgrade script as root on each one of them:
rac-server2# /u01/app/crs/install/rootupgrade
rac-server3# /u01/app/crs/install/rootupgrade
rac-server4# /u01/app/crs/install/rootupgrade
Once all the nodes are upgraded, you should be able to see that the 11.1 release is active by running the following on any of the nodes:
/u01/app/crs/bin/crsctl query crs activeversion
Install Oracle 11.1 RAC Database Software
Install the Oracle RAC Database Base Release
Once the clusterware has been installed, installing the RAC database software is very similar to installing non-RAC database software; you just need to specify which servers you want the software to be installed on. The first step is downloading the software and unzipping it:
$ unzip linux.x64_11gR1_database.zip
$ cd database
$ export DISTRIB=`pwd`
To install the database software, you don’t need to modify the response files. You only have to run a command like the one below, in the case of an Enterprise Edition:
export DISTRIB=`pwd`
runInstaller -silent \
   -responseFile $DISTRIB/response/enterprise.rsp \
   FROM_LOCATION=$DISTRIB/stage/products.xml \
   ORACLE_BASE=/u01/app/oracle \
   ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 \
   ORACLE_HOME_NAME=ORADB111_Home1 \
   CLUSTER_NODES={"rac-server1","rac-server2",\
"rac-server3","rac-server4"} \
   n_configurationOption=3 \
   s_nameForDBAGrp="dba" \
   s_nameForASMGrp="dba"
Or in the case of a Standard Edition:
runInstaller -silent \
   -responseFile $DISTRIB/response/standard.rsp \
   FROM_LOCATION=$DISTRIB/stage/products.xml \
   ORACLE_BASE=/u01/app/oracle \
   ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 \
   ORACLE_HOME_NAME=ORADB111_Home1 \
   CLUSTER_NODES={"rac-server1","rac-server2"} \
   n_configurationOption=3
As you can see, only a few parameters differ from the non-RAC database installation described in Part 1 of this series:
- CLUSTER_NODES contains the list of cluster nodes you want to install the database software on.
- FROM_LOCATION is used when the response file doesn't point to the location of the products.xml file.
- s_nameForDBAGrp, s_nameForASMGrp, and s_nameForOPERGrp are used to specify non-default groups for SYSDBA, SYSASM, and SYSOPER (see the example right after this list).
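For illustration only, here is what those parameters could look like if you wanted distinct OS groups rather than the single dba group used above; asmadmin and oper are hypothetical group names that would have to exist on every node:
   s_nameForDBAGrp="dba" \
   s_nameForASMGrp="asmadmin" \
   s_nameForOPERGrp="oper"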
Once the software is installed, you have to execute the root.sh script from the ORACLE_HOME. Connect as root on every server and run:
rac-server1# /u01/app/oracle/product/11.1.0/db_1/root.sh
rac-server2# /u01/app/oracle/product/11.1.0/db_1/root.sh
rac-server3# /u01/app/oracle/product/11.1.0/db_1/root.sh
rac-server4# /u01/app/oracle/product/11.1.0/db_1/root.sh
Install the Oracle RAC Database Patch Set
The first 11.1 Patch Set is not available at the time of this writing.
Configure the Listeners
The fastest way to create and configure the listeners is to use NETCA as below:
$ export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ netca /silent \
   /responsefile $ORACLE_HOME/network/install/netca_typ.rsp \
   /nodeinfo rac-server1,rac-server2,rac-server3,rac-server4
Unlike other tools, NETCA uses the "/" character instead of "-" for its flags. With 11.1, the DISPLAY environment variable can stay empty.
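To check that the listeners have been registered as clusterware resources on every node, you can list them with crs_stat; a quick sketch, assuming the CRS home used in this series:
$ /u01/app/crs/bin/crs_stat -t | grep -i lsnr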
Configure Automatic Storage Management
If you plan to use ASM from the newly installed ORACLE_HOME, or from another one you've installed earlier, you can use DBCA to configure it in silent mode. The syntax is the same as the 10.2 ASM configuration syntax:
$ dbca -silent \
   -nodelist rac-server1,rac-server2,\
rac-server3,rac-server4 \
   -configureASM \
   -asmSysPassword change_on_install \
   -diskString "/dev/sd*" \
   -diskList "/dev/sde,/dev/sdf" \
   -diskGroupName DGDATA \
   -redundancy EXTERNAL \
   -emConfiguration NONE
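Once DBCA returns, you can verify that the ASM instances are up on the nodes with srvctl from the database home; for example, for the first two nodes:
$ export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
$ $ORACLE_HOME/bin/srvctl status asm -n rac-server1
$ $ORACLE_HOME/bin/srvctl status asm -n rac-server2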
Create a RAC Database with DBCA
You can use DBCA again to create the RAC database. The syntax is the same as that explained in the 10.2 RAC post:
$ dbca -silent \
   -nodelist rac-server1,rac-server2,\
rac-server3,rac-server4 \
   -createDatabase \
   -templateName General_Purpose.dbc \
   -gdbName ORCL \
   -sid ORCL \
   -SysPassword change_on_install \
   -SystemPassword manager \
   -emConfiguration NONE \
   -storageType ASM \
   -asmSysPassword change_on_install \
   -diskGroupName DGDATA \
   -characterSet WE8ISO8859P15 \
   -totalMemory 500
From my tests, I have found that -nodelist has to come before the -createDatabase flag. 11.1 also allows you to specify the total amount of memory instead of a percentage.
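Finally, you can check that the new database and its instances are registered with the clusterware and running; a quick check with srvctl, using the names from the command above:
$ export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
$ $ORACLE_HOME/bin/srvctl status database -d ORCL
$ $ORACLE_HOME/bin/srvctl config database -d ORCL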
More to come
If you're used to installing 10.2 RAC databases in silent mode, doing it with 11.1 is very similar. The next two posts will explore the addition and removal of cluster nodes. Expect them very soon.
Comments
Hi,
I have tried the silent mode installation of the Oracle 11g database over an existing clusterware. But it is just installing on the local node from which I am executing the command, and is not installing on the remote node that I have specified in the node list. Whereas when I tried to install the same using the normal GUI installation, it worked well and completed the installation on the remote nodes too.
Please advise.
Ravi KAP
Ravi,
If you use the CLUSTER_NODES variable, it should do the work. However, if you can run the GUI, run it with "-record -destinationFile /tmp/output.rsp" and compare it with the content of the default response file overloaded with the variables you've added as command-line parameters. You'll easily find what is different between the GUI and the silent install.
Gregory
Hi,
The CLUSTER_NODES variable is good in the command, which I have just copy-pasted and modified accordingly.
And I did what you suggested, running the GUI, recording the RSP file, and comparing that with the command you have posted. The only difference that I see between the two is the "n_configurationOption" parameter, which carries a 3 in your command whereas the GUI has used a 1.
Question: I am running from the lin1 server and have enabled password-less communication with the lin2 server (executing the $SHELL and the add-agent commands). Is it also required to log on to the lin2 server and do the same?
Regards
Ravi
Hi
Did you get to explore Patch Set upgrades using the response file approach? I'd be interested in any problems/issues encountered.
Ron
In my attempt to install 11g database on a cluster, specifying nodes like this doesn’t work:
CLUSTER_NODES={"rac-server1","rac-server2",
"rac-server3","rac-server4"} \
while this did:
"CLUSTER_NODES={rac-server1,rac-server2,
rac-server3,rac-server4}" \