This topic has come up again and again, and quite a few people have asked me how to configure RAC on VMware Fusion on a Mac. It warrants a blog post, especially since a Mac is definitely the way to go for an Oracle DBA: a Unix desktop OS that just works. What could be better? Sorry, I digress without even starting!
Before I go any further, I should say that this is not a complete guide to installing Oracle RAC with VMware Fusion, just hints on setting up shared storage for Oracle RAC using a Mac as the host for VMware Fusion virtual machines (VMs). The reader is assumed to know how to set up Oracle RAC and to have a general understanding of VMware itself. There are plenty of guides on the Internet on setting up Oracle RAC on VMware, but they usually refer to VMware Server on Linux or Windows. Please note that I'm writing this largely from memory, so if you hit any issue, please leave a comment.
Disclaimers are over — moving on!
The root of the problem is that VMware Fusion, unlike VMware Server on Windows and Linux, doesn't support shared disks. If you try to edit the .vmx file manually to enable a shared disk, you get the error message "Clustering is not supported for VMware Fusion – this setting will be ignored". Fear not: you are running the best desktop OS anyway! ;-)
Besides shared disks from the host OS, there are a couple of other options you can investigate for shared storage: an NFS filesystem and iSCSI. iSCSI is good when you want to play with ASM, but as you will see, there is no problem using NFS for ASM (well, as a playground). Actually, I'll save that for another post.
One solution is to run an Openfiler setup in another VM; it can present storage as iSCSI targets and as NFS exports. iSCSI setup is not very easy (well, definitely not for me) and I avoid it like the plague. Accessing NFS mounts from Openfiler requires configuring LDAP on your Oracle RAC VMs, and even though it's not too difficult (I've done it a few times), it's still additional complexity and we don't want any of that on our playground.
Finally, running yet another VM adds memory and CPU overhead and makes IO slower. That's been my experience. YMMV.
Back to our host machine: don't forget it's a full-blown Unix OS (Mac OS X draws heavily from FreeBSD) and it naturally includes an NFS server.
A desktop Mac OS X install doesn't have the NFS server daemon running by default, but enabling it is a piece of cake:
macbook:~ gorby$ sudo nfsd enable
macbook:~ gorby$ sudo nfsd status
nfsd service is enabled
nfsd is running (pid 87970, 8 threads)
macbook:~ gorby$ ps -p 87970
  PID TT  STAT      TIME COMMAND
87970 ??  Ss     0:00.01 /sbin/nfsd
Update 13-Apr-09: before starting the NFS daemon, the /etc/exports file must exist, so run "sudo touch /etc/exports", or just start nfsd after you have created a proper /etc/exports as explained below (thanks to juergen for the hint).
You will want to check your VMware Fusion network configuration. By default, VMware Fusion creates two virtual NICs:
- vmnet1 — host-only network
- vmnet8 — NAT network
When I set up Oracle RAC on VMware, I usually configure the public interface on the NAT network (this way I have Internet access from the VMs if I need to download something directly) and the private interface on the host-only network. The private subnet can be completely different, as it doesn't need to be routed anywhere.
If you follow this advice, you will want to limit your NFS exports to the NAT subnet. Check the network configuration:
macbook:~ gorby$ ifconfig vmnet1
vmnet1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	inet 172.16.59.1 netmask 0xffffff00 broadcast 172.16.59.255
	ether 00:50:56:c0:00:01
macbook:~ gorby$ ifconfig vmnet8
vmnet8: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	inet 192.168.94.1 netmask 0xffffff00 broadcast 192.168.94.255
	ether 00:50:56:c0:00:08
My NAT network is 192.168.94.0/24.
At this point, you need to create the directories that you want to export from your host OS X and share with your Oracle RAC VMs. Let's say I want to share /exports/nfs1, /exports/nfs2 and /exports/nfs3. Next, edit /etc/exports, adding a line:
/exports/nfs1 /exports/nfs2 /exports/nfs3 -maproot=root -network 192.168.94 -mask 255.255.255.0
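To put those steps together, here is a minimal sketch of creating the directories and writing that line. It uses a ./demo sandbox instead of the real /exports and /etc/exports (so it is safe to run without sudo); the paths and the 192.168.94 subnet are just the examples from this post.

```shell
#!/bin/sh
# Sandbox stand-in for / so no sudo is needed; on the real host
# you would use /exports and /etc/exports instead.
ROOT=./demo

# Create the three directories to be shared with the RAC VMs.
mkdir -p "$ROOT/exports/nfs1" "$ROOT/exports/nfs2" "$ROOT/exports/nfs3"

# The single exports line: three paths, root mapping, limited to the NAT subnet.
EXPORT_LINE="/exports/nfs1 /exports/nfs2 /exports/nfs3 -maproot=root -network 192.168.94 -mask 255.255.255.0"
printf '%s\n' "$EXPORT_LINE" > "$ROOT/etc_exports"

cat "$ROOT/etc_exports"
```

On the real host you would write that line to /etc/exports with sudo and then restart nfsd as described below.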
-maproot=root is a convenience that gives root in a VM full root access on these exports. Now you just need to bounce nfsd: sudo nfsd restart.
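On the guest side, each RAC VM then mounts these exports. The options below follow the commonly documented recommendations for Oracle files on NFS (hard mounts over TCP, 32K transfer sizes, actimeo=0 to disable attribute caching); the server address 192.168.94.1 and the /u02 mount points are assumptions based on this post's setup, so adjust them to yours. The sketch only prints the commands, so you can run it anywhere and paste the output on a guest:

```shell
#!/bin/sh
# Assumed NFS server address (the host's vmnet8 IP from this post)
# and assumed guest-side mount points under /u02.
SERVER=192.168.94.1

# Commonly documented NFS mount options for Oracle storage:
# hard mounts, TCP, NFSv3, 32K read/write sizes, no attribute caching.
OPTS="rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0"

# Print one mount command per export.
for i in 1 2 3; do
  echo "mount -t nfs -o $OPTS $SERVER:/exports/nfs$i /u02/nfs$i"
done
```

Check the Oracle documentation for the exact options recommended for your database version before using this for anything serious.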
In the early releases of OS X 10.5, the nfsd daemon caused hangs from time to time under high load, but then it isn't really meant to be run on desktop OS X; NFS serving has generally been the domain of OS X Server. However, it's been quite stable on the latest 10.5.x patchsets.
I should say that I've been using this setup for more than a year, and I've done many demos with it during my presentations. I always have my test RAC cluster with me on my laptop, which is super handy if you drop by a geeky Oracle group. ;-)
Want to see how it works in real life? Drop by the first Sydney Oracle Meetup next week.
I suppose it isn’t a big deal, but since the host OS has access to both NAT and host-only networks, I usually export NFS to the host-only network to keep it off the public virtual NIC. Any particular reason you chose NAT for NFS instead of host-only? Works for me and I’m sure NAT works for you too.
That’s very useful Alex.
I was looking at this 6 months ago. I got as far as downloading Openfiler with a view to using iSCSI and then… who knows… suddenly six months has gone past.
And well done for linking this to the stackoverflow question.
Hopefully I’ll use this next week (but that means not watching things like The Sopranos and The Wire on my train commutes).
Uh, no. Mac? Just no!
Thank you for sharing the details on how to configure NFS on Mac OS X for use with RAC on VMware Fusion. I had to add some tweaks because Mac OS X Leopard (10.5) does NFS differently, and I had some issues with the portmapper. The new sandboxing security in Leopard makes it more challenging to configure NFS; I had to edit the /etc/exports file and restart the portmapper and nfsd a few times to get it working.
I had issues mounting as well, on Snow Leopard (10.6); in my case I was completely unsuccessful. The error message on the Ubuntu client makes me think my issue is related to yours.
From the client’s syslog:
“ubuntu kernel: [ 679.768629] RPC: server –myNFSServerIP– requires stronger authentication”.
Does this look similar to your NFS issues, and if so details on your tweaks will be appreciated and used.
Thanks to all who replied.
@Dan Well, the interconnect is the interconnect – I'm trying to limit the traffic on it. Keeping storage access separate from the interconnect lets you simulate some interesting failure scenarios. In fact, I would even prefer to configure it completely separately, to be able to fail the public and storage networks independently. The NAT network is private as well; it's just routed to the external world and NAT'ed.
@Dominic Great. Make sure you post back if you hit any issues.
@Ben Details please. :) Mine is already configured, so I was writing from memory. I don't recall issues with the portmapper. What's different in your /etc/exports?
Thanks very much for this summary. Before I saw the post, I had seen the Twitter conversation between you, Ben, and Dan on this subject by way of an oracle-related TweetDeck search. It’s nice to see this laid out in more than 140-character chunks. :-) You’ve inspired me to take my long-neglected RAC/Mac/NFS project back off the shelf.
Speaking of things VMware-related, did you see this news from VMware today? VirtualCenter access via mobile phone, including iPhone, Blackberry, and some Nokia devices. Requires running a small appliance that can access the VirtualCenter server and the managed ESX servers. https://communities.vmware.com/community/beta/vcmobileaccess
Thanks John. It’s always a pleasure to inspire someone!
Re VirtualCenter – well, I don't know how useful it is. Plus, I usually don't manage ESX. Anyway, their demo is not with an iPhone but with some Windows Mobile device. I don't even want to look. :)
In order to start nfsd, the file /etc/exports must exist; if it doesn't, startup will fail. Create this file by executing this line in your terminal:
$ sudo touch /etc/exports
Thanks Juergen — I added this to the instruction.
I greatly appreciate this article, but I am getting an error when trying to mount /u01:
mount: mount to NFS server ‘x’ failed: RPC Error: Authentication error.
I have been searching on internet and trying different things but can’t seem to get past the error.
Any insight would be greatly appreciated.
Does the error occur quickly on the mount attempt or does it take a while? Can you ping the NFS server x?
What’s the content of your /etc/exports file?
You can increase nfsd logging – see nfsd man pages.
I have the same issue: when I attempt an NFS mount, I get an "Authentication Failed" error message. Any suggestions?
Your NFS exports are most likely misconfigured, such as exporting to the wrong IPs. The same suggestion to increase the logging level applies.
Hi, Alex –
I am new to the Mac and just started playing with Fusion. I have a few questions on setting up NFS. For the instructions you've mentioned above, do I have to run all of them on the host machine? What commands do I need to run in the virtual machine to see the shared resources?
thanks for the help.
I've been trying to get RAC installed on my Mac using NFS for the shared disks. The Clusterware install goes well up until the root.sh script: the script runs fine on node 1 but fails on node 2 with "Failed to upgrade Oracle Cluster Registry configuration".
I've been told that NFS will work only as a NAS device, so I'm kind of stuck at the moment. Any help would be greatly appreciated.
Thanks for this article. I have been using Openfiler for a while now, and the benefit of running fewer virtual machines is drawing me to this method. My problem is I have done everything you mentioned above but don't know how to proceed. Do I just start my virtual machines and, presto, see the shared directories? How does the virtual machine treat these directories as disks? Basically, what do I do after enabling nfsd and exporting the directories? Your help is greatly appreciated.
Contents of my exports file:
/exports/raw1 /exports/raw2 /exports/raw3 /exports/raw4 -maproot=root -network 192.168.115.1 -mask 255.255.255.0
@av: the commands in the blog are to be run on your host, i.e. at the OS X prompt. The host will be your NFS server, sharing NFS mounts with its clients (the RAC virtual machines will be the clients). Hope this clarifies the architecture. Instructions on how to install Oracle RAC using NFS shared storage would be a separate article and won't fit into the comment format (and there are articles available on the Net).
@reiner: It will work on any device (granted, you should be careful about what you choose for a *production* implementation). In fact, Oracle has published its requirements for NFS shared storage, and you can use anything: Oracle will support you as long as the storage implements proper NFS v3/v4. To troubleshoot your issues, drill into the ocrconfig logs and check whether the mount point is mounted on the second node and the file is readable.
@Nde: when you install your RAC on VMs with shared storage, you actually mount those NFS exports; you would generally add an entry (or a few entries in your case) to /etc/fstab. Something like:
Again, the details of the install itself are a separate article altogether.
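A sketch of what those fstab entries might look like on the guest (the server address, /u02 mount points, and mount options are assumptions based on the setup in this post; check Oracle's documented NFS mount options for your version):

```
192.168.94.1:/exports/nfs1  /u02/nfs1  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0
192.168.94.1:/exports/nfs2  /u02/nfs2  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0
192.168.94.1:/exports/nfs3  /u02/nfs3  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0
```

After creating the mount point directories and adding the entries, a "mount -a" on each node picks them up.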
Nice update. It's been a while since I visited this with Mac OS X and VMware Fusion. A nice addition would be an article on how to install Oracle RAC on VMware Fusion with a Linux guest OS and a Mac OS X host from start to finish. Now that we have 11gR2, a RAC paper would be worthy!
Well, I’d rather focus on specifics and leave the rest as the homework for readers. :)
Btw, VMware Fusion 3.0 seems to allow shared devices so we might not need NFS to run RAC.
Hi Alex, I have a question: when you use NFS, what is your choice for the OCRs and voting disks? Do you create an OCFS2 share and put those files there, or is there any way to use raw partitions with NFS?
Thanks a lot,
If I use NFS, I use it as the shared storage for everything, including the voting disks and OCR. Otherwise, I'd place them on shared raw disks; I avoid OCFS2 by all means, as it adds just another layer of complexity without a real need (in most cases).
Hi Alex, thanks for your fast answer. I use a Mac during weekdays and a PC desktop on weekends, so I'm interested in both solutions. Last weekend, I spent 10 hours trying to build a cluster with Oracle Clusterware (on my PC). My problem is not the cluster; it's setting up shared devices to use as raw devices for the OCRs and voting disks. I did the entire setup using Openfiler with raw devices, following Note 465001.1 – Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5. This is because Oracle Clusterware 10.2.0.1 does not accept block device mappings for the OCR and voting disks. So I set up some devices on a third VM with Openfiler. All the steps were successful, including creating the raw devices with iSCSI UUIDs, but when I ran the root.sh script, which creates the OCR and formats the voting disks, it just returned errors. Have you ever experienced something like that using Openfiler? I'll try this approach today on VMware Fusion on the Mac, but I was reading on the net that some people say VMware Fusion 3 can now support shared disks. The only problem is that my PC has a six-core CPU with 8GB RAM, while my MacBook has only an Intel Core 2 Duo CPU and 2GB RAM.
Thanks and Regards
Bruno, forget about RAC on VMs with just 2GB. You want to have 4GB (you will struggle with 2GB).
I did install RAC with Openfiler a few years ago, but there can be a zillion things that go wrong. I won't be able to dig into this now.
Another approach, as you already know, is to use shared disks in VMware Server for Windows and VMware Fusion for Mac (assuming 3.0 does support it). You will indeed need to configure raw devices for 10.2.0.1; once you upgrade, you can move to block devices directly.
I got an error, "mount.nfs: access denied by server while mounting 192.168.1.65:/Users/Harid/exports/LUN_OTHERS", when I try to mount the shared NFS folder on my OpenSuSE VM. Could you give me some advice? Many thanks!