
Oracle RAC on Azure

Microsoft Azure provides an acceptable and affordable platform for a training environment. I am an Oracle DBA and use it to test functionality, new technologies, and features of different Oracle products. Azure supplies a template for Oracle Linux that can be used to run a single database, but when we try to create an Oracle RAC we hit two major issues. First, the Azure virtual network doesn't support multicast and, as a result, cannot be used for the interconnect. The second issue is shared storage: Azure provides shared file storage accessible over the SMB 2 protocol, but it isn't exactly what we need for RAC. How can we solve or work around those problems? I will share my experience and show how to set up a RAC on Azure.

For a two-node RAC we first need to create at least two virtual machines for the cluster nodes. I've chosen Oracle Linux 6.4 from the Azure Marketplace. I decided to create the machines with two network interfaces: one for the public network and another for the private interconnect. Here is my blog post on how to create a VM with two network interfaces. It may not be strictly necessary, since you can fork a virtual interface out of your only public network, but I decided to go this way and created the cluster nodes with two interfaces. Here is the network output from the first node:

[root@oradb5 network-scripts]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0D:3A:11:A3:71
          inet addr:10.0.1.11  Bcast:10.0.1.255  Mask:255.255.254.0
          inet6 addr: fe80::20d:3aff:fe11:a371/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:776 errors:0 dropped:0 overruns:0 frame:0
          TX packets:789 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:96068 (93.8 KiB)  TX bytes:127715 (124.7 KiB)

eth1      Link encap:Ethernet  HWaddr 00:0D:3A:11:AC:92
          inet addr:10.0.2.11  Bcast:10.0.3.255  Mask:255.255.254.0
          inet6 addr: fe80::20d:3aff:fe11:ac92/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:722 (722.0 b)  TX bytes:1166 (1.1 KiB)

We need to install the oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64 rpm. It installs all the required packages and sets up the kernel parameters and limits for the oracle user on our boxes:

yum install oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64

The next step is to enable multicast support on the network for the interconnect. You can read how to enable multicast support in my other blog post. As a result, you get a network interface edge0 which can now be used for our private network.
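I won't repeat that multicast setup here; the post above has the details. Purely as an illustration, and not necessarily the exact tool or values I used, the edge0 device name and the 1400-byte MTU seen below are typical of an n2n-style peer-to-peer overlay, which is usually brought up along these lines (the community name, key, and supernode host are placeholders):

# assumption: one host of your choosing runs the n2n supernode on UDP port 1200
supernode -l 1200
# on each cluster node: create the multicast-capable overlay interface edge0
edge -d edge0 -a 192.168.1.1 -c racpriv -k SomeSecret -l supernode-host:1200    # oradb5
edge -d edge0 -a 192.168.1.2 -c racpriv -k SomeSecret -l supernode-host:1200    # oradb6

Whatever mechanism you use, the end result should be a multicast-capable interface on each node with a private address for the cluster interconnect.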
Here is the ifconfig output after creating the virtual interface with multicast support:

[root@oradb5 ~]# ifconfig
edge0     Link encap:Ethernet  HWaddr 9E:1A:D8:0B:94:EF
          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::9c1a:d8ff:fe0b:94ef/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 b)  TX bytes:238 (238.0 b)

eth0      Link encap:Ethernet  HWaddr 00:0D:3A:11:A3:71
          inet addr:10.0.1.11  Bcast:10.0.1.255  Mask:255.255.254.0
          inet6 addr: fe80::20d:3aff:fe11:a371/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:118729 errors:0 dropped:0 overruns:0 frame:0
          TX packets:62523 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:143705142 (137.0 MiB)  TX bytes:20407664 (19.4 MiB)

eth1      Link encap:Ethernet  HWaddr 00:0D:3A:11:AC:92
          inet addr:10.0.2.11  Bcast:10.0.3.255  Mask:255.255.254.0
          inet6 addr: fe80::20d:3aff:fe11:ac92/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:271 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1274 (1.2 KiB)  TX bytes:43367 (42.3 KiB)

To verify multicast I've used the test tool from the Oracle support document
Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to Multicasting Requirement (Doc ID 1212703.1)
The check was successful:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i edge0
###########  Setup for node oradb5  ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
###########  Setup for node oradb6  ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
###########  testing Multicast on all nodes  ##########
Test for Multicast address 230.0.1.0
Nov 24 16:22:12 | Multicast Succeeded for edge0 using address 230.0.1.0:42000
Test for Multicast address 224.0.0.251
Nov 24 16:22:13 | Multicast Succeeded for edge0 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

So, we have solved the first obstacle and now need shared storage for our RAC. We have at least a couple of options here, and I believe somebody can advise us on others. We can use NFS-based shared storage, or we can use iSCSI. You may choose something from the Azure Marketplace like SoftNAS or StoneFly, or you may decide to create your own solution. In my case I just fired up another Oracle Linux VM, added a couple of storage disks to it using the portal, and set up an NFS server on that machine. Here is the high-level description.

We create a Linux-based VM on Azure using the Oracle Linux 6.4 template from the Marketplace; the size will be dictated by your requirements. I called the machine oradata. I've added a 20 GB disk to the oradata machine through the Azure portal, and created a partition and filesystem on it (the /etc/fstab entry itself is sketched at the end of this section):

[root@oradata ~]# fdisk -l
[root@oradata ~]# fdisk /dev/sdc
[root@oradata ~]# mkfs.ext4 /dev/sdc1
[root@oradata ~]# mkdir /share
[root@oradata ~]# mkdir /share/oradata1
[root@oradata ~]# e2label /dev/sdc1 sharedoradata1
[root@oradata ~]# vi /etc/fstab
[root@oradata ~]# mount -a
[root@oradata ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       7.4G  1.4G  5.7G  19% /
tmpfs           1.7G     0  1.7G   0% /dev/shm
/dev/sda1       485M   50M  410M  11% /boot
/dev/sda2       2.0G   67M  1.9G   4% /tmp
/dev/sdc1        20G  4.2G   15G  23% /share/oradata1
/dev/sdb1        60G  180M   56G   1% /mnt/resource
[root@oradata ~]#

I installed the necessary utilities using yum:

[root@oradata ~]# yum install nfs-utils

Then I configured the NFS server on the box:

[root@oradata ~]# chkconfig nfs on
[root@oradata ~]# vi /etc/exports
[root@oradata ~]# cat /etc/exports
/share/oradata1 10.0.0.0/23(rw,sync,no_root_squash)
[root@oradata ~]# service nfs restart
[root@oradata ~]# showmount -e
Export list for oradata:
/share/oradata1 10.0.0.0/23

Configure or stop the firewall (you may need to do this on your cluster nodes as well):

[root@oradata ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@oradata ~]# chkconfig iptables off
[root@oradata ~]#

On your cluster nodes you need to add the mount point for your shared storage to /etc/fstab and mount it.
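For reference, the /etc/fstab entry on the oradata server (edited above with vi, but not shown) can be as simple as the following sketch. It relies on the label set with e2label; your mount options may differ:

LABEL=sharedoradata1    /share/oradata1    ext4    defaults    0 0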
[root@oradb5 ~]# vi /etc/fstab
[root@oradb5 ~]# cat /etc/fstab | grep nfs
oradata:/share/oradata1 /u02/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,actimeo=0,vers=3,timeo=600 0 0
[root@oradb5 ~]# mount -a
[root@oradb5 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                7.4G  2.5G  4.6G  36% /
tmpfs                    3.5G     0  3.5G   0% /dev/shm
/dev/sda1                485M   69M  391M  15% /boot
/dev/sda2                2.0G   86M  1.8G   5% /tmp
/dev/sdc1                 60G   12G   45G  21% /u01/app
/dev/sdb1                281G  191M  267G   1% /mnt/resource
oradata:/share/oradata1   20G  4.2G   15G  23% /u02/oradata
[root@oradb5 ~]# mount | grep /u02/oradata | grep -v grep
oradata:/share/oradata1 on /u02/oradata type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,actimeo=0,vers=3,timeo=600,addr=10.0.1.101)
[root@oradb5 ~]#

Now we have the required storage for the OCR and voting disks, and a network for public and interconnect traffic, so we can install our cluster. We need to correct the /etc/hosts file on both nodes (you may choose to use the Azure DNS service instead):

[oracle@oradb5 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.1.11   oradb5
10.0.1.12   oradb6
10.0.1.15   oradb5-vip
10.0.1.16   oradb6-vip
10.0.1.19   oradb-clst-scan
192.168.1.1 oradb5-priv
192.168.1.2 oradb6-priv
10.0.1.101  oradata
[oracle@oradb5 ~]$

You can see I set up the public, VIP, and SCAN addresses in the hosts file. Of course this is not acceptable for any production implementation, or if you want more than one SCAN address; as I've already mentioned, you can use DNS for a proper installation. We copy the required software to one of the nodes, unpack it, and create a response file for the installation, like this:

[oracle@oradb5 ~]$ cat grid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.1.0
ORACLE_HOSTNAME=oradb5
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/12.1.0/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=dba
oracle.install.crs.config.gpnp.scanName=oradb-clst-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterType=STANDARD
oracle.install.crs.config.clusterName=oradb-clst
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.gpnp.gnsClientDataFile=
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.clusterNodes=oradb5:oradb5-vip,oradb6:oradb6-vip
oracle.install.crs.config.networkInterfaceList=eth0:10.0.0.0:1,eth1:10.0.2.0:3,edge0:192.168.1.0:2
oracle.install.crs.config.storageOption=FILE_SYSTEM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=/u02/oradata/voting/vdsk1,/u02/oradata/voting/vdsk2,/u02/oradata/voting/vdsk3
oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=NORMAL
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=/u02/oradata/ocr/ocrf1
oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL
oracle.install.crs.config.useIPMI=false
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.SYSASMPassword=
oracle.install.asm.diskGroup.name=
oracle.install.asm.diskGroup.redundancy=
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.diskGroup.disks=
oracle.install.asm.diskGroup.diskDiscoveryString=
oracle.install.asm.monitorPassword=
oracle.install.asm.ClientDataFile=
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsHost=
oracle.install.config.omsPort=0
oracle.install.config.emAdminUser=
oracle.install.config.emAdminPassword=

The file can be used for a silent installation; you may choose instead to run runInstaller in GUI mode. To run the installation in silent mode, you just need to go to your unpacked software and run:

[oracle@oradb5 grid]$ ./runInstaller -silent -responseFile /home/oracle/grid.rsp -ignoreSysPrereqs -ignorePrereq
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 415 MB.   Actual 1350 MB    Passed
Checking swap space: 0 MB available, 150 MB required.    Failed <<<<

>>> Ignoring required pre-requisite failures. Continuing...

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-02-01_09-41-01AM. Please wait ...

You've of course noticed that I ran the installation ignoring requirements. As a matter of fact, I first ran it without these flags, checked which prerequisite checks failed, made the necessary adjustments for the checks I decided were important, and left the others as they were. For example, my /etc/resolv.conf file was different because of settings on the DHCP server, and so on. I advise applying common sense and your own knowledge to decide which checks are important for you and which can be ignored. The installation will complete, and all you need to run is a couple of scripts to finish it:

As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[oradb5, oradb6]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.

Successfully Setup Software.
As install user, execute the following script to complete the configuration.
        1. /u01/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=<response_file>

        Note:
        1. This script must be run on the same host from where installer was run.
        2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

We run root.sh on each node one by one as the root user, and execute the configToolAllCommands script as the oracle user on the node where we ran the installation. The response file is required if we specified a password for ASM, ASM monitoring, or DBCA. Here is an example of the file contents:

oracle.assistants.server|S_SYSPASSWORD=welcome1
oracle.assistants.server|S_SYSTEMPASSWORD=welcome1
oracle.assistants.server|S_DBSNMPPASSWORD=welcome1
oracle.assistants.server|S_PDBADMINPASSWORD=welcome1
oracle.assistants.server|S_EMADMINPASSWORD=welcome1
oracle.assistants.server|S_ASMSNMPPASSWORD=welcome1

Change the permissions on the file to 600 before running the script:

[oracle@oradb5 grid]$ vi /home/oracle/cfgrsp.properties
[oracle@oradb5 grid]$ chmod 600 /home/oracle/cfgrsp.properties

We don't have any ASM or a BMC console in our installation, but I will leave the file here nevertheless, just for reference. Here is what we ran on our system:

[root@oradb5 ~]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_oradb5_2016-02-01_10-21-07.log for the output of root script
....
[root@oradb6 ~]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_oradb6_2016-02-01_10-38-50.log for the output of root script
....
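Before moving on to the configuration assistants, I like to confirm that the clusterware stack actually came up on both nodes after root.sh. This check is my own habit rather than part of the installer output; the first command should report CRS, CSS, and the Event Manager online on oradb5 and oradb6, and the second lists the cluster resources and their state:

[root@oradb5 ~]# /u01/app/12.1.0/grid/bin/crsctl check cluster -all
[root@oradb5 ~]# /u01/app/12.1.0/grid/bin/crsctl stat res -t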
[oracle@oradb5 grid]$ /u01/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/home/oracle/cfgrsp.properties
Setting the invPtrLoc to /u01/app/12.1.0/grid/oraInst.loc
perform - mode is starting for action: configure
....

Keep in mind that configToolAllCommands should also create the management database (MGMTDB) in your cluster. If it failed for some reason, you can try to recreate it using dbca in silent mode, like this:

/u01/app/12.1.0/grid/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType FS -datafileDestination /u02/oradata/ocr/oradb-clst/mgmtdb -datafileJarLocation /u01/app/12.1.0/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck -oui_internal

The RAC is created and can now be used for application high availability or for database tests. You may install the database software on the RAC either using the GUI installer or in silent mode, but don't forget to specify the cluster nodes during the installation. I would also like to mention that I would not recommend this setup as a production system, but it is quite suitable for tests, or for experiments if you want to verify or troubleshoot RAC-specific features.
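As a final note on the database software installation mentioned above: in silent mode it follows the same pattern as the grid install. Here is a minimal sketch, assuming a response file db.rsp built from the db_install.rsp template shipped with the software; the file name and values are illustrative, not my exact configuration.

db.rsp (partial):
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
oracle.install.db.InstallEdition=EE
oracle.install.db.DBA_GROUP=dba
oracle.install.db.CLUSTER_NODES=oradb5,oradb6

[oracle@oradb5 database]$ ./runInstaller -silent -responseFile /home/oracle/db.rsp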
