Oracle Silent Mode, Part 4: Installation Of A 10.2 RAC

Posted in: Technical Track

This fourth post introduces the fundamental silent installation commands for a 10.2 RAC. For a complete series agenda, see below:

  1. Installation of 10.2 And 11.1 Databases
  2. Patches of 10.2 And 11.1 databases
  3. Cloning Software and databases
  4. Install a 10.2 RAC Database (this post!)
  5. Add a Node to a 10.2 RAC database
  6. Remove a Node from a 10.2 RAC database
  7. Install a 11.1 RAC Database
  8. Add a Node to a 11.1 RAC database
  9. Remove a Node from a 11.1 RAC database
  10. A ton of other stuff you should know

As the title suggests, this post digs into how to (1) install the 10.2 Clusterware, (2) apply the latest Patch Set on top of it, (3) install the 10.2 database software, (4) apply the latest Patch Set on top of it, and (5) create a RAC database. These operations will be performed with the Oracle Universal Installer, NETCA and DBCA in silent mode. Before you start, in case you're not yet familiar with Oracle silent installation, have a look at the first post of the series.

Checking the prerequisites

Before you start the installation, make sure all the prerequisites are met. Use the 10.2 Clusterware and Real Application Clusters Installation Guide for your platform to set them up. You should also refer to Metalink Note 169706.1 for the latest updates, and make sure the prerequisites for non-RAC databases are met by running the RDA HCVE module as described in the first post of the series.

To check that you haven’t missed anything, you should also run the Cluster Verify Utility (CVU). This utility is part of the Clusterware distribution, but you can also download the latest release from OTN (note that you don’t have to be connected to OTN to download it; you can just wget it once you’ve got its URL).
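
For example, assuming you’ve already copied the download URL from OTN, fetching and unpacking the CVU could look like the sketch below (the URL and zip file name are placeholders):

$ mkdir ~/cvu && cd ~/cvu
$ # both the URL and the file name below are placeholders
$ wget "<URL-copied-from-OTN>"
$ unzip <downloaded-file>.zip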

Once the CVU is unzipped, you can run the check from its bin directory as below:

$ cd bin
$ ./cluvfy -help
$ ./cluvfy stage -pre crsinst                          \
    -n rac-server1,rac-server2,rac-server3,rac-server4 \
    -r 10gR2                                           \
    -verbose

The meaning of those parameters:

  • stage -pre crsinst indicates the CVU will run all the checks needed before the installation of the Clusterware.
  • -n contains a comma-separated list of the servers you’ll install the Clusterware on.
  • -r 10gR2 indicates that the checks are for a 10gR2 installation. This parameter is useful if you use the latest release of the CVU which by default is 11gR1 as I write.
  • -verbose is used to get more details about all the checks that are run.

If the server is configured properly, the only error the CVU should return is that it can’t find any candidate network interface for the VIP. Whether you get this message depends on the addresses you use as the public addresses of the servers. If they are Private Network Addresses as described here, the VIPCA step that is part of the last root.sh script will fail. I will explain the workaround for this in the Clusterware installation part of this post. Any other error you encounter must be investigated and corrected.

Something else to check is that the date is the same on all the servers. This is not mandatory, but it helps a lot when reading log and trace files. Check for the NTP daemon and for the date:

$ for i in 1 2 3 4; do 
   ssh rac-server$i "ps -aef |grep [n]tpd"
   ssh rac-server$i date
   done

Install Oracle 10.2 Clusterware

Once you’ve made sure all the prerequisites are met, you can start the installation of the Clusterware. If you have any doubt about the prerequisites, it’s worth double-checking: better to lose a few minutes now than hours later figuring out why you can’t make the software work correctly!

Install the Clusterware Base Release

In order to install the Clusterware, you first have to download the distribution from Oracle E-Delivery or from OTN (see Marc’s paper to do it with wget). Unzip the distribution in a directory and set the DISTRIB environment variable to point to that directory, as below:

$ gunzip -c 10201_clusterware_linux_x86_64.cpio.gz |\
    cpio -idmv
$ cd clusterware
$ export DISTRIB=`pwd`

Before you install the Clusterware with runInstaller in silent mode, you must decide on and set some of the information used by the installer:

  • What will be the directory (or ORACLE_HOME) you’ll use to install the Clusterware? Because you’ll upgrade the Clusterware in place (unlike the database software), avoid putting the release number in that directory path; in the example that follows, we’ll use /u01/crs.
  • What will be the Clusterware ORACLE_HOME name? For the same reason as the directory path, don’t use the release in that name; in the example that follows, we’ll use OraCrsHome.
  • What will be the cluster name? That name will be used in tools such as Oracle Enterprise Manager, so make it unique within your organization; in the example that follows, we’ll use rac-cluster.
  • What should be the various network aliases you’ll use? Each node will have three aliases:
    1. the public name for the cluster servers. These aliases must match the IP addresses that are set up on the network used by the application. They usually match the output of the hostname command. The IP for these aliases must be set up on the servers and configured in the DNS, or preferably in the /etc/hosts file so that the installer can get the IP from the aliases. In the examples, the public network aliases are rac-server1, rac-server2, rac-server3 and rac-server4; their respective IP addresses are 10.0.0.1, 10.0.0.2, 10.0.0.3 and 10.0.0.4.
    2. the private name for the cluster servers. These aliases must match the IP addresses that are set up on the interconnect network. The IP for these aliases must be set up on the servers and configured in the DNS, or preferably in the /etc/hosts file so that the installer can get the IP from the alias. In the examples, the private network aliases are rac-server1-priv, rac-server2-priv, rac-server3-priv and rac-server4-priv; their respective IP addresses are 192.168.1.1, 192.168.1.2, 192.168.1.3 and 192.168.1.4.
    3. the virtual IP aliases of the cluster servers. These aliases must match IP addresses that are NOT yet set up on the server network; VIPCA will actually set up these IP addresses, and the Clusterware will manage them. The associated addresses must use the same subnet as the public addresses. They must be registered in the DNS, or preferably in the /etc/hosts file so that the installer can get the IP from the alias. In the examples, the virtual network aliases are rac-server1-vip, rac-server2-vip, rac-server3-vip and rac-server4-vip. The associated IP addresses are 10.0.0.11, 10.0.0.12, 10.0.0.13 and 10.0.0.14.
  • What are the network interfaces used, and what are the associated subnets? When the network is set up, running ifconfig -a on each one of the nodes shows which network interfaces are used (e.g., eth0, bond1, …), which IPs are bound to them, and what the associated network masks are. With the standard installation, the subnet and network interface have to match across nodes. To put it another way, if 10.0.0.0 is the subnet of the public network and it uses eth0 on one server, it has to use eth0 on all the servers. (NB: the subnet is equal to “IP address” BITAND “netmask”; see the bash sketch after this list.) In the example, 10.0.0.0 is the public subnet using bond0; 192.168.1.0 is the private subnet using bond1; and 10.1.0.0 is another network used by the Clusterware as a storage network (for iSCSI or NFS), using bond2.
  • Will you have one or two copies of the OCR, and where should they be located? In the following example, you’ll create two copies of the OCR, in /dev/sdb1 and /dev/sdc1.
  • Will you have one or three copies of the Voting Disk, and where should they be located? In the following example, you’ll create three copies of the Voting Disk, in /dev/sdb2, /dev/sdc2 and /dev/sdd1.
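
As a quick illustration of the “IP address” BITAND “netmask” rule above, here is a minimal bash sketch that computes the public subnet of rac-server1 from the example addresses:

$ IP=10.0.0.1; MASK=255.255.255.0
$ IFS=. read -r i1 i2 i3 i4 <<< "$IP"
$ IFS=. read -r m1 m2 m3 m4 <<< "$MASK"
$ # bitwise AND, octet by octet
$ echo "$(( i1 & m1 )).$(( i2 & m2 )).$(( i3 & m3 )).$(( i4 & m4 ))"
10.0.0.0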

Once you’ve made sure you have all the information you need, you can run the Oracle Universal Installer for the Clusterware. There is no need to change the content of the crs.rsp response file, as you’ll pass the parameters you need on the command line:

cd $DISTRIB
./runInstaller -silent                                            \
  -responseFile $DISTRIB/response/crs.rsp                         \
  ORACLE_HOME="/u01/crs"                                          \
  ORACLE_HOME_NAME="OraCrsHome"                                   \
  s_clustername="rac-cluster"                                     \
  sl_tableList={"rac-server1:rac-server1-priv:rac-server1-vip:N:Y"\
,"rac-server2:rac-server2-priv:rac-server2-vip:N:Y"\
,"rac-server3:rac-server3-priv:rac-server3-vip:N:Y"\
,"rac-server4:rac-server4-priv:rac-server4-vip:N:Y"}              \
  ret_PrivIntrList={"bond0:10.0.0.0:1","bond1:192.168.1.0:2",\
"bond2:10.1.0.0:3"}                                               \
  n_storageTypeOCR=1                                              \
  s_ocrpartitionlocation="/dev/sdb1"                              \
  s_ocrMirrorLocation="/dev/sdc1"                                 \
  n_storageTypeVDSK=1                                             \
  s_votingdisklocation="/dev/sdb2"                                \
  s_OcrVdskMirror1RetVal="/dev/sdc2"                              \
  s_VdskMirror2RetVal="/dev/sdd1"

Here are some details about the parameters:

  • sl_tableList contains the list of servers to install, each entry in the form “<public-alias>:<private-alias>:<vip-alias>:N:Y”. The last two fields (N and Y) don’t have to be changed.
  • ret_PrivIntrList contains the list of network interfaces and, for each of them: (a) the interface device name (e.g., bond0), (b) the associated network, or “IP Address” BITAND “netmask” (e.g., 10.0.0.0), and (c) the purpose of the interface (1 for the public network, 2 for the interconnect network, and 3 for the storage network, if any).
  • n_storageTypeOCR and n_storageTypeVDSK designate the redundancy for the OCR and for the Voting Disk: 1 means the Clusterware manages the redundancy; 2 means it does not, and you rely on the storage layer to secure those files.
  • s_ocrpartitionlocation and s_ocrMirrorLocation are the locations of the OCR and its mirror.
  • s_votingdisklocation, s_OcrVdskMirror1RetVal and s_VdskMirror2RetVal are locations 1, 2 and 3 of the Voting Disks.

Once the software is installed, you can create the oraInst.loc file on all the servers if it’s the first Oracle software you have installed on them. To proceed, connect as root on each one of them, navigate to the newly-created Oracle Inventory, and run orainstRoot.sh:

rac-server1# /u01/app/oraInventory/orainstRoot.sh
rac-server2# /u01/app/oraInventory/orainstRoot.sh
rac-server3# /u01/app/oraInventory/orainstRoot.sh
rac-server4# /u01/app/oraInventory/orainstRoot.sh

Note:
You don’t have to install the Clusterware on each one of the servers; just install it on one node, and the installer will push it to all the nodes you’ve specified. If you hit an issue with the content of sl_tableList and the Clusterware isn’t pushed to the other nodes, install a one-node cluster and add the other nodes as described in part 5.
Don’t run the root.sh script on any of the nodes yet. You’ll do that once the Patch Set is installed on top of the Clusterware home you’ve just created.

Apply the Patch Set to the Clusterware

Once you’ve installed the Clusterware, and assuming you didn’t start it by running the root.sh script, applying the Patch Set is straightforward. It consists of:

  • downloading and unzipping the Patch Set on any of the nodes you’ve installed
  • executing the Universal Installer in silent mode on one node only, with the ORACLE_HOME and ORACLE_HOME_NAME parameters, without changing the content of the patchset.rsp response file. Note that you don’t have to specify the servers you want the Patch Set installed on; all the nodes will be patched by running the Patch Set on one node only.

Below are the commands you’ll run to apply the 10.2.0.4 Patch Set on Linux x86_64:

$ cd patchset10204
$ unzip p6810189_10204_Linux-x86-64.zip
$ cd Disk1
$ export DISTRIB=`pwd`

Then run the installer with the information about the ORACLE_HOME you want to patch:

$ ./runInstaller -silent                          \
     -responseFile $DISTRIB/response/patchset.rsp \
     ORACLE_HOME="/u01/crs"                       \
     ORACLE_HOME_NAME="OraCrsHome"

Note:
If the Clusterware has already been started and the OCR created, to apply the Patch Set, follow the Clusterware Rolling Upgrade Patch Set section below.

Creating the OCR and Voting Disks, and Starting the Clusterware

If you don’t have your public IPs set in the 192.168.x.x, 10.x.x.x or 172.[16-31].x.x ranges, creating the OCR and the voting disks and starting the Clusterware should be as easy as connecting to each one of the nodes in turn as root and running the root.sh script, as below:

rac-server1# /u01/crs/root.sh
rac-server2# /u01/crs/root.sh
rac-server3# /u01/crs/root.sh
rac-server4# /u01/crs/root.sh

Obviously, it’s not always that simple. If the last root.sh script ends with a VIPCA error saying that there is no suitable interface to create the VIP, you’ll have to configure the network settings manually. To proceed, connect as root on any of the nodes and set ORA_CRS_HOME so that it references the Clusterware ORACLE_HOME. Once the environment is set, you can list the network interfaces registered by the Clusterware with oifcfg; that list should be empty:

rac-server1# cd /u01/crs
rac-server1# export ORA_CRS_HOME=`pwd`
rac-server1# export PATH=$ORA_CRS_HOME/bin:$PATH
rac-server1# cd $ORA_CRS_HOME/bin
rac-server1# ./oifcfg getif -global
rac-server1# # Empty list...

If that’s the case (the list is empty), you can register the interfaces with oifcfg. We’ll assume that the public subnet, 10.0.0.0, is using the bond0 interface and that the interconnect subnet, 192.168.1.0, is using the bond1 interface. To register these two subnets and their interfaces in the OCR, run the command below:

rac-server1# cd $ORA_CRS_HOME/bin
rac-server1# ./oifcfg setif -global \
               bond0/10.0.0.0:public
rac-server1# ./oifcfg setif -global \
               bond1/192.168.1.0:cluster_interconnect
rac-server1# ./oifcfg getif -global
rac-server1# # Should now display the subnet list

Once you’ve registered the subnets, you’ll be able to create the nodeapps for all of the servers with the srvctl command. In the commands that follow, we assume that the public network mask is 255.255.255.0:

rac-server1# ./srvctl add nodeapps \
       -n rac-server1                           \
       -o /u01/crs                              \
       -A rac-server1-vip/255.255.255.0/bond0
CRS-0210: Could not find resource ora.rac-server1.LISTENER_RAC-SERVER1.lsnr. 
rac-server1# ./srvctl add nodeapps \
       -n rac-server2                           \
       -o /u01/crs                              \
       -A rac-server2-vip/255.255.255.0/bond0
CRS-0210: Could not find resource ora.rac-server2.LISTENER_RAC-SERVER2.lsnr. 
rac-server1# ./srvctl add nodeapps \
       -n rac-server3                           \
       -o /u01/crs                              \
       -A rac-server3-vip/255.255.255.0/bond0
CRS-0210: Could not find resource ora.rac-server3.LISTENER_RAC-SERVER3.lsnr. 
rac-server1# ./srvctl add nodeapps \
       -n rac-server4                           \
       -o /u01/crs                              \
       -A rac-server4-vip/255.255.255.0/bond0
CRS-0210: Could not find resource ora.rac-server4.LISTENER_RAC-SERVER4.lsnr. 

rac-server1# ./srvctl start nodeapps -n rac-server1
rac-server1# ./srvctl start nodeapps -n rac-server2
rac-server1# ./srvctl start nodeapps -n rac-server3
rac-server1# ./srvctl start nodeapps -n rac-server4

rac-server1# ./srvctl status nodeapps -n rac-server1
VIP is running on node: rac-server1 
GSD is running on node: rac-server1 
PRKO-2016: Error in checking condition of listener on node: rac-server1 
ONS daemon is running on node: rac-server1

Once the VIP is correctly configured, the cluster should be correctly installed. You can check that this is the case with cluvfy stage -post crsinst -n all -verbose, as sketched below. The error on the listener is expected, since the listener requires the database software, which isn’t installed yet.
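
For example, from the directory where you unzipped the CVU earlier (the path is just an example):

$ cd ~/cvu/bin
$ ./cluvfy stage -post crsinst -n all -verbose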

Clusterware Rolling Upgrade Patch Set

If the Clusterware has already been configured and is running, applying a Patch Set differs slightly from what I explained in the previous sections. While the Clusterware is running, you install the Patch Set on top of the existing Clusterware home. Then, for each of the cluster nodes, one by one:

  • Stop all the Oracle resources running on the node.
  • Stop the Clusterware.
  • Run the root102.sh script, which will restart the Clusterware and its managed resources.

Below are the commands to install the Patch Set. They have to be run from one node only:

$ cd patchset10204
$ unzip p6810189_10204_Linux-x86-64.zip
$ cd Disk1
$ export DISTRIB=`pwd`

Then run the installer with the information about the ORACLE_HOME you want to patch:

$ ./runInstaller -silent                          \
     -responseFile $DISTRIB/response/patchset.rsp \
     ORACLE_HOME="/u01/crs"                       \
     ORACLE_HOME_NAME="OraCrsHome"

Once this is done, work through the nodes one at a time: stop the Oracle resources running on the node, stop the Clusterware, and run root102.sh, which applies the patch and restarts the Clusterware and its managed resources.
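
If databases are already running out of this Clusterware, a minimal sketch of stopping the Oracle resources on a node is shown below; the database home path, the database name ORCL and the instance name ORCL1 are assumptions to adapt to your environment:

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ # stop the instance, ASM, and node applications running on rac-server1
$ $ORACLE_HOME/bin/srvctl stop instance -d ORCL -i ORCL1
$ $ORACLE_HOME/bin/srvctl stop asm -n rac-server1
$ /u01/crs/bin/srvctl stop nodeapps -n rac-server1

Then, as root, stop the Clusterware on the node and run root102.sh: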

rac-server1# cd /u01/crs
rac-server1# export ORA_CRS_HOME=`pwd`
rac-server1# cd $ORA_CRS_HOME/bin
rac-server1# ./crsctl stop crs
rac-server1# cd $ORA_CRS_HOME/install
rac-server1# ./root102.sh

rac-server2# cd /u01/crs
rac-server2# export ORA_CRS_HOME=`pwd`
rac-server2# cd $ORA_CRS_HOME/bin
rac-server2# ./crsctl stop crs
rac-server2# cd $ORA_CRS_HOME/install
rac-server2# ./root102.sh

rac-server3# cd /u01/crs
rac-server3# export ORA_CRS_HOME=`pwd`
rac-server3# cd $ORA_CRS_HOME/bin
rac-server3# ./crsctl stop crs
rac-server3# cd $ORA_CRS_HOME/install
rac-server3# ./root102.sh

rac-server4# cd /u01/crs
rac-server4# export ORA_CRS_HOME=`pwd`
rac-server4# cd $ORA_CRS_HOME/bin
rac-server4# ./crsctl stop crs
rac-server4# cd $ORA_CRS_HOME/install
rac-server4# ./root102.sh

Once you’ve run these commands, the Patch Set release should appear when you run:

rac-server4# cd $ORA_CRS_HOME/bin
rac-server4# crsctl query crs activeversion

Once the Clusterware is installed and correctly configured, you can install the database software, if you haven’t already.

Install Oracle 10.2 RAC Database Software

Install the Oracle RAC Database Base Release

Once the Clusterware is installed, installing the RAC database software is very similar to installing non-RAC database software; you just need to specify which servers you want the software installed on. The first step consists of downloading the software and extracting it from the archive, as below:

$ gunzip -c 10201_database_linux_x86_64.cpio.gz \
       | cpio -idmv
$ cd database
$ export DISTRIB=`pwd`

To install the database software, you don’t need to modify the response files. You only have to run a command like the one below (for Enterprise Edition):

$ ./runInstaller -silent                               \
      -responseFile $DISTRIB/response/enterprise.rsp   \
       ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 \
       ORACLE_HOME_NAME=ORADB102_Home1                 \
       CLUSTER_NODES={"rac-server1","rac-server2",\
"rac-server3","rac-server4"}                           \
       n_configurationOption=3

Or, for Standard Edition:

$ ./runInstaller -silent                               \
      -responseFile $DISTRIB/response/standard.rsp     \
       ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 \
       ORACLE_HOME_NAME=ORADB102_Home1                 \
       CLUSTER_NODES={"rac-server1","rac-server2",\
"rac-server3","rac-server4"}                           \
       n_configurationOption=3

As you can see, the only parameter that differs from the non-RAC database installation described in Part 1 of this series is the CLUSTER_NODES parameter that contains the list of cluster nodes you want to install the database software on.

Then run the root.sh script from the ORACLE_HOME. Connect as root on every server and run:

rac-server1# /u01/app/oracle/product/10.2.0/db_1/root.sh
rac-server2# /u01/app/oracle/product/10.2.0/db_1/root.sh
rac-server3# /u01/app/oracle/product/10.2.0/db_1/root.sh
rac-server4# /u01/app/oracle/product/10.2.0/db_1/root.sh

Install the Oracle RAC Database Patch Set

Applying a Patch Set to the database software is even simpler. Stop all the processes (instances, listeners, EM console), if any, that run on top of the database ORACLE_HOME, then run the exact same commands as you would for non-RAC database software:

$ cd patchset10204/Disk1
$ export DISTRIB=`pwd`
$ ./runInstaller -silent                               \
      -responseFile $DISTRIB/response/patchset.rsp     \
       ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 \
       ORACLE_HOME_NAME=ORADB102_Home1

Then, you can run the root.sh script from the ORACLE_HOME. Connect as root on every server and run:

rac-server1# /u01/app/oracle/product/10.2.0/db_1/root.sh
rac-server2# /u01/app/oracle/product/10.2.0/db_1/root.sh
rac-server3# /u01/app/oracle/product/10.2.0/db_1/root.sh
rac-server4# /u01/app/oracle/product/10.2.0/db_1/root.sh
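
If you want to double-check what ended up in the patched home, one option is to list its inventory with OPatch; a quick sketch:

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ # the installed products and their release should show up in the output
$ $ORACLE_HOME/OPatch/opatch lsinventory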

That’s it, you’ve installed the database software!

Configure the Listeners

In the case of a 10.2 RAC, you must configure and register the listeners in the OCR with NETCA; with 10.2, NETCA is the only way to do it. As with the Universal Installer, you can run NETCA in silent mode without modifying the response file that comes with it. To proceed, connect as oracle on one of the servers and run the commands below:

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export DISPLAY=:0
$ netca /silent \
 /responsefile $ORACLE_HOME/network/install/netca_typ.rsp \
 /nodeinfo rac-server1,rac-server2,rac-server3,rac-server4

Unlike the other tools, NETCA uses the / character instead of - for its flags. The DISPLAY environment variable cannot stay empty, even though it goes unused; that’s not a big concern, since no X display actually has to be listening at the value you set. If you want to modify some values, such as the listener ports, you can always edit the listener.ora files once the listeners are registered in the OCR.
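
To confirm that the listeners were registered, one option is to list the listener resources from the Clusterware home; a quick sketch:

$ # one listener resource per node is expected
$ /u01/crs/bin/crs_stat -t | grep -i lsnr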

Configure Automatic Storage Management

If you plan to use ASM from the newly installed ORACLE_HOME or another one you’ve installed earlier, you can use DBCA to configure it in silent mode. The syntax is the same as the one from the non-RAC install described in part 1, except for the -nodelist parameter that lists the nodes on which you want to configure ASM:

$ dbca -silent                        \
    -nodelist rac-server1,rac-server2,\
rac-server3,rac-server4               \
    -configureASM                     \
    -asmSysPassword change_on_install \
    -diskString "/dev/sd*"            \
    -diskList "/dev/sde,/dev/sdf"     \
    -diskGroupName DGDATA             \
    -redundancy EXTERNAL              \
    -emConfiguration NONE

Create a RAC Database with DBCA

In the case of a RAC database, using DBCA to create the database again speeds everything up. The syntax is the same as the one explained for the non-RAC database creation described in part 1, except for the -nodelist parameter that lists the nodes on which you want to create an instance for the database. The syntax is as below:

$ dbca -silent                             \
       -createDatabase                     \
       -templateName General_Purpose.dbc   \
       -gdbName ORCL                       \
       -sid ORCL                           \
       -SysPassword change_on_install      \
       -SystemPassword manager             \
       -emConfiguration NONE               \
       -storageType ASM                    \
         -asmSysPassword change_on_install \
         -diskGroupName DGDATA             \
       -nodelist rac-server1,rac-server2,\
rac-server3,rac-server4                    \
       -characterSet WE8ISO8859P15         \
       -memoryPercentage 40

Note that DBCA not only creates the database, it also:

  1. creates the Oracle*Net configuration files
  2. creates the init.ora files on all the servers
  3. registers the database and the instances in the OCR
  4. creates the dependencies for ASM (if you use ASM)
  5. creates the Undo Tablespaces and Redo log threads for all the instances
  6. starts the instances on all the nodes (a quick check is sketched after this list)
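
A quick way to confirm the last point, using the example names from this post (the output shown is approximately what you should see):

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ $ORACLE_HOME/bin/srvctl status database -d ORCL
Instance ORCL1 is running on node rac-server1
Instance ORCL2 is running on node rac-server2
Instance ORCL3 is running on node rac-server3
Instance ORCL4 is running on node rac-server4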

More to come

With some practice, installing a 10.2 RAC database in silent mode will become as easy for you as using the installer and assistants in interactive mode. However, this post has shown only how to perform the initial installation of a 10.2 RAC database; there is much more to learn before you can scale your RAC configurations up and down and benefit from the Grid approach. The next two posts will go one step further, showing how to add a node to a 10.2 RAC and then how to remove one.
