A few months ago, we had a test instance complaining that it couldn’t write to ASM. This was a single-instance (non-RAC) database on Oracle Enterprise Linux 5, using ASM for storage. We first saw these errors in the alert log:
ORA-15032: not all alterations performed
ORA-29702: error occurred in Cluster Group Service operation
ORA-29702: error occurred in Cluster Group Service operation
ERROR: error ORA-15032 caught in ASM I/O path
Uh-oh, that doesn’t look good. So I log into the ASM instance and try to see if the disks are OK:
SQL> select path, mount_status from v$asm_disk;
select path, mount_status from v$asm_disk
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-29702: error occurred in Cluster Group Service operation
ORA-29702: error occurred in Cluster Group Service operation
I can’t even query that. As Ted would say, “strange things are afoot at the Circle K.” To be safe, I thought I’d shut down the DBMS instance. That failed as well, and we had to resort to an abort:
SQL> shutdown immediate
ORA-00204: error in reading (block 1, # blocks 1) of control file
ORA-00202: control file: '+FOOTEST_DATA/footest1_footest_db/control01.ctl'
ORA-15081: failed to submit an I/O operation to a disk
SQL> shutdown abort
ORACLE instance shut down.
We decided to restart the whole DBMS/ASM/CSS stack, but CSS wouldn’t stop either:
-bash-3.2# /etc/init.d/init.cssd stop
Stopping Cluster Synchronization Services.
Unable to communicate with the Cluster Synchronization Services daemon.
Shutdown has begun. The daemons should exit soon.
We ended up rebooting the server entirely, after which everything came up cleanly. We filed an SR with Oracle Support, who directed us to Note 391790.1 (Unable To Connect To Cluster Manager Ora-29701). This note lists the cause, quite simply, as:
The hidden directory ‘/var/tmp/.oracle’ was removed while instances & the CRS stack were up and running. Typically this directory contains a number of “special” socket files that are used by local clients to connect via the IPC protocol (sqlnet) to various Oracle processes including the TNS listener, the CSS, CRS & EVM daemons or even database or ASM instances. These files are created when the “listening” process starts.
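The leading “s” in an ls -l mode string marks these as Unix-domain socket files: local clients rendezvous with the daemons by opening these filesystem paths, so deleting the files removes the only address the clients have, even though the daemon processes themselves keep running. A quick way to see just the socket files (a sketch, using the path from this incident):

```shell
# List only the Unix-domain socket files (-type s) under the directory.
# Deleting these strands new local connections; the daemons that created
# them stay up, which is why the failure looks like a hung stack rather
# than a crashed one.
find /var/tmp/.oracle -type s 2>/dev/null
```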
The solution is to restart CRS or reboot the machine. Our /var/tmp/.oracle directory looked like this:
$ ls -la /var/tmp/.oracle
total 12
drwxrwxrwt 2 root   root 4096 May  8 15:03 .
drwxrwxrwt 3 root   root 4096 May 10 07:02 ..
srwxrwxrwx 1 oracle dba     0 May  8 15:03 s#18854.1
srwxrwxrwx 1 oracle dba     0 May  8 15:03 s#18854.2
srwxrwxrwx 1 oracle dba     0 May  8 15:03 sEXTPROC
srwxrwxrwx 1 oracle dba     0 May  8 14:44 sfootestDBG_CSSD
srwxrwxrwx 1 oracle dba     0 May  8 14:44 sOCSSD_LL_footest_
srwxrwxrwx 1 oracle dba     0 May  8 14:44 sOCSSD_LL_footest_localhost
srwxrwxrwx 1 oracle dba     0 May  8 14:44 sOracle_CSS_LclLstnr_localhost_0
srwxrwxrwx 1 oracle dba     0 May  8 15:03 sPNPKEY
I did some sandbox testing and found that only the oracle and root OS users could delete that directory, and doing so reproduced the error every time.
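That permission behavior comes from the sticky bit on /var/tmp (the trailing “t” in drwxrwxrwt): inside a sticky directory, only an entry’s owner, the directory’s owner, or root may remove that entry. A sandbox reproduction of the layout (the scratch paths are hypothetical; don’t run cleanup against the real directory):

```shell
#!/bin/sh
# Recreate the /var/tmp layout in a scratch directory to show the mode.
# Paths are hypothetical stand-ins for /var/tmp and /var/tmp/.oracle.
SANDBOX=$(mktemp -d)
mkdir "$SANDBOX/vartmp"
chmod 1777 "$SANDBOX/vartmp"            # 1777 = world-writable + sticky bit
mkdir "$SANDBOX/vartmp/.oracle"         # stand-in for the socket directory
ls -ld "$SANDBOX/vartmp" | cut -c1-10   # mode string shows drwxrwxrwt
rm -rf "$SANDBOX"
```

With the sticky bit set, any account can still create files in the directory, but an unrelated unprivileged user cannot remove the oracle-owned socket files, which matches what the sandbox testing showed.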
However, I really was dumbstruck that Oracle would keep so critical a directory in /var/tmp! I politely noted this to Oracle Support, who justified the location with a few solid reasons:
- It has always been in this location (and still is in 11gR2).
- /var/tmp/.oracle is a hidden directory, so it probably won’t be noticed by any miscreants looking to cause trouble.
OK, I was being sarcastic: these reasons are awful. The only safeguard they gave was “make sure no one deletes it.” We scoured the server for cron jobs that might automatically clean out /var/tmp but found none, nor any bash history suggesting malice. The only distinguishing factor we could identify was that this test server ran in a VM (Citrix Xen), though one would hope that makes no difference. We never did find an explanation, but we now know not to delete /var/tmp/.oracle while the instances are running (even though we never did before).
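On Enterprise Linux 5, the usual automated culprit would be the daily tmpwatch job (/etc/cron.daily/tmpwatch), which can be told to skip a path with its --exclude flag; that is worth checking even though we found no such job here. Beyond vigilance, a periodic existence check can catch the condition before the I/O errors surface. A sketch of such a check (the path and the ocssd process name are assumptions based on this setup; neither comes from the Oracle note):

```shell
#!/bin/sh
# Warn if the socket directory disappears while CSS is still running.
# SOCK_DIR and the ocssd.bin process name are assumptions; adapt to your stack.
SOCK_DIR=/var/tmp/.oracle
if pgrep -x ocssd.bin >/dev/null 2>&1 && [ ! -d "$SOCK_DIR" ]; then
    echo "WARNING: $SOCK_DIR is missing while CSS is running"
    exit 1
fi
echo "OK: $SOCK_DIR check passed"
```

Dropped into cron with output mailed or logged, this would flag the deleted directory minutes after it happens rather than at the next ASM write.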
Surachart Opun has also blogged on this topic.