Are There Performance Penalties Running Oracle ASM on NFS?


In this blog post, I will share my SLOB test results and conclusions about Oracle database I/O performance when data files are placed directly on NFS versus on an ASM disk group located on NFS.

You may ask: “Why would anyone consider placing ASM files on NFS? Is it even possible at all?” There are legitimate reasons for doing so, and Oracle supports it. You can find more information in the following blog post: Reasons for using ASM on NFS.

I’ve provided some additional details on how I executed the tests below. Please do not hesitate to ask for more details. At this stage, I just want to mention that I used a modified version of my favorite testing tool, SLOB from Kevin Closson: SOS – SLOB on steroids.

You may also want to have a look at my dNFS presentation, since it covers the comparison between kNFS and dNFS. You can find it here.

There are no visible penalties

Placing data files directly on NFS and running SLOB with 22 readers gave me an average response time of 0.53 ms in the dNFS configuration and 1.73 ms in the kernelized NFS (kNFS) test. The ASM setup on the same NFS configuration returned 0.49 ms and 1.74 ms, respectively. Based on the test results, there are no clear performance penalties when running Oracle ASM on NFS compared with placing data files directly on NFS. I consider the less than 10% discrepancy between the dNFS tests negligible.
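For reference, these averages come straight from the “db file sequential read” lines in the AWR excerpts below (total wait time divided by the number of waits):

kNFS:      1,370 s / 791,093 waits   = 1.73 ms
dNFS:      1,229 s / 2,326,535 waits = 0.53 ms
ASM kNFS:  1,283 s / 737,114 waits   = 1.74 ms
ASM dNFS:  1,162 s / 2,379,107 waits = 0.49 ms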

Testing results

The following are the bits of the AWR reports that I found relevant; the full reports are available here. For orientation: in the Load Profile lines, the columns are per-second and per-transaction values (DB Time and DB CPU also show per-exec and per-call), and in the wait-event lines, the columns are total waits, time in seconds, average wait in milliseconds, % of DB time, and wait class.

Just kNFS

 Elapsed:                1.09 (mins)
      DB Time(s):               21.4              116.9       0.36       4.94
       DB CPU(s):                0.8                4.4       0.01       0.19
   Logical reads:           12,140.1           66,458.8
   Block changes:               41.8              228.8
  Physical reads:           12,042.2           65,923.3
db file sequential read             791,093       1,370      2   97.6 User I/O
DB CPU                                               53           3.8
awr_0w_22r.20121023_201639.txt
Tue Oct 23 20:16:40 EDT 2012

Just dNFS

real    1m13.117s
user    0m0.576s
sys     0m1.281s
   Elapsed:                1.04 (mins)
      DB Time(s):               21.3              110.7       0.13       4.68
       DB CPU(s):                5.0               26.0       0.03       1.10
   Logical reads:           37,408.2          194,450.9
   Block changes:               33.3              173.0
  Physical reads:           37,298.0          193,878.0
db file sequential read           2,326,535       1,229      1   92.5 User I/O
DB CPU                                              312          23.5
awr_0w_22r.20121023_203540.txt
Tue Oct 23 20:35:40 EDT 2012

ASM on kNFS

real    1m13.052s
user    0m0.606s
sys     0m1.259s
   Elapsed:                1.04 (mins)
      DB Time(s):               21.3              111.1       0.36       4.69
       DB CPU(s):                1.2                6.2       0.02       0.26
   Logical reads:           11,883.9           61,944.8
   Block changes:               43.7              227.8
  Physical reads:           11,786.3           61,435.9
db file sequential read             737,114       1,283      2   96.3 User I/O
DB CPU                                               74           5.6
awr_0w_22r.20121104_030233.txt
Sun Nov  4 03:02:34 EST 2012

ASM on dNFS

real    1m13.743s
user    0m0.633s
sys     0m1.241s
   Elapsed:                1.05 (mins)
      DB Time(s):               21.2              111.8       0.13       4.71
       DB CPU(s):                6.5               34.2       0.04       1.44
   Logical reads:           37,754.5          198,865.5
   Block changes:               43.9              231.3
  Physical reads:           37,639.5          198,259.8
db file sequential read           2,379,107       1,162      0   86.7 User I/O
DB CPU                                              411          30.6
awr_0w_22r.20121104_025602.txt

Details on how I executed the test

    • Both the NFS server and the Oracle database were located on the same host.
    • The volume holding the data file for all tests was created on a Linux RAM disk to exclude the impact of slow HDDs.
    • The NFS share was mounted over the loopback interface (127.0.0.1) to exclude the impact of a slow network component on the test results (see the sketch after this list).
    • The tests used “db file sequential read”, a.k.a. random reads.
    • The host ran on Oracle VM.
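To make the setup easier to reproduce, here is a minimal sketch of how such a loopback NFS-on-RAM-disk configuration can be built. All paths, sizes, and export names below are illustrative assumptions (I use tmpfs as the RAM-backed filesystem); the mount options are the ones commonly recommended for Oracle data files on Linux NFS:

# Create a RAM-backed filesystem to take slow disks out of the picture
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=4g tmpfs /mnt/ramdisk

# Export it over NFS; the "insecure" option matters for dNFS, which
# connects from non-privileged ports. Entry in /etc/exports:
#   /mnt/ramdisk 127.0.0.1(rw,sync,insecure,no_root_squash)
exportfs -ra

# Mount it over the loopback interface with the usual Oracle datafile options
mkdir -p /u02/oradata
mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 \
      127.0.0.1:/mnt/ramdisk /u02/oradata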

After reading the above points, you would probably say: “Hey Yury, your tests are far from a real-world workload.” And I totally agree. But I didn’t want to test a real-life load. In fact, it is quite difficult to get close to real-life workloads in any testing. The only goal I had was to see whether an additional IO layer (ASM) would make any difference in terms of IO performance. From my perspective, this setup may be one of the best for the purpose, since it eliminates a lot of components that could influence the testing results. The only concern on my side as of now is the fact that I ran an Oracle VM-aware kernel, which may have an impact on the kNFS vs. dNFS comparison. However, I think we are good with both NFS and ASM on NFS.

For those of you who are familiar with SLOB & SOS, I used the following command to run the tests (the egrep calls pull the Load Profile and top wait-event lines out of the newest AWR report). I’ve also published my SOS scripts here – SLOB on steroids.

v_r=22; v_w=0; v_c=60   # 22 reader sessions, 0 writer sessions
date
time bash -x runit.sh ${v_w} ${v_r} ${v_c} > ./runit.sh_${v_w}_${v_r}c1t.`date +%Y%m%d_%H%M%S`.log 2>&1
egrep "Elapsed:  |Logical reads:   |Redo size:   |Block changes:    |Physical reads:|DB Time\(s\):|DB CPU\(s\):" `ls -trp awr_* | tail -1`
egrep "db file sequential read|DB CPU" `ls -trp awr_* | tail -1` | head -3 | tail -2
ls -trp awr_* | tail -1
date


About the Author

Yury is a nice person who enjoys meeting and working with people on challenging projects in the Oracle space. He started working as an Oracle DBA 14 years ago (1997). For the past 10 years (since 2001), his main area of expertise has been Oracle e-Business Suite. Yury is an OCP 7, 8, 9, 10g and OCM 9i, 10g. He is a frequent presenter at Oracle-related conferences such as Hotsos, UKOUG and AUOUG. Yury is a socially active person. Apart from social media (Twitter, blogging, Facebook), he is the primary organizer of the Sydney Oracle Meetup group (250 people). So if you happen to be in Sydney (Australia), drop Yury a message and stop by one of his Meetups.

6 Comments

Yury Velikanov
November 6, 2012 9:57 pm

Just a comment on the 0.49 ms number (ASM on NFS). I have checked the results of several tests I executed with ASM on NFS, and the difference in results stayed within 0.44 ms – 0.59 ms. Those response times roughly correspond to the direct NFS tests.

Alex Timofeyev
November 29, 2012 6:10 pm

Yury, I’m trying to use ASM over dNFS with no luck so far. kNFS works fine, but once I switch to dNFS, I see no servers under v$dnfs_servers :(

The alert log shows:
alert_isdbasm.log:Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0

I’d appreciate some quick advice, please.
Thank you!

Yury Velikanov
December 2, 2012 10:55 pm

Have a look at the following presentation:
https://www.slideshare.net/yvelikanov/sharing-experience-implementing-direct-nfs

You may try to set the events I mention there and reproduce the problem.
Another thing to look for is the “insecure” option on the filer side.

Let me know if it helps.
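[For readers hitting the same issue, here is a minimal sketch of the usual dNFS checklist; the server name and oranfstab entries below are illustrative assumptions, not taken from Alex’s system:

# 1) Make sure the dNFS ODM library is enabled (11g syntax):
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# 2) Describe the filer in $ORACLE_HOME/dbs/oranfstab, for example:
#      server: mynfs
#      path: 127.0.0.1
#      export: /mnt/ramdisk  mount: /u02/oradata

# 3) Restart the instance and check that dNFS picked the server up:
sqlplus -s / as sysdba <<'EOF'
select svrname, dirname from v$dnfs_servers;
EOF
]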

Alain Azagury
April 24, 2016 3:27 am

Yury, when we set a second member for the redo logs on dNFS (the first member is not on dNFS), the database hangs when the NFS server becomes unavailable. This is contrary to Oracle’s documentation, which specifies that when LGWR can successfully write to at least one member in a group, writing proceeds as normal: LGWR writes to the available members of a group and ignores the unavailable members.

Unfortunately, it looks like if the unavailable member is accessed through dNFS, it may hang the database…
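[For context, a minimal sketch of the multiplexed-redo setup Alain describes; the group number and paths are illustrative assumptions, with /u02/oradata standing in for the dNFS-served mount:

sqlplus -s / as sysdba <<'EOF'
-- first member assumed on local storage; add a second member on the NFS volume
alter database add logfile member '/u02/oradata/redo01_b.log' to group 1;
EOF
]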


Alain,

your question is not directly related to this blog post, but I think I can help anyway. The situation you describe can happen when ARCn actually hangs while writing to NFS, for example when the storage becomes unresponsive. In those cases, ARCn will just hang and wait, and eventually no ARC processes will be available for work anymore, at which point the whole database will freeze.

This would not happen if the NFS filer returned a hard error back, because only then would the archiver process free up and be able to write to another location (like the one not on NFS) again. This is also described in My Oracle Support note 1669589.1.

Hope this helped,
Bjoern

Jose Rodriguez
July 13, 2017 11:10 am

Yury,
I am speaking from memory here and may be totally wrong but, AFAIK, ASM is a mere structure that hands out the physical locations of data file extents to the database. Hence, with a single data file and only read queries, ASM effectively won’t do any work.
On the contrary, during a massive write operation that includes growing the data file, ASM will be doing some work, like updating the extent map in memory at least, which may or may not be enough of a burden to be noticed during the SLOB tests.
Again, this is right off the top of my head, so I may be missing something important here.

