Oracle Database Appliance — Storage expansion with NFS (dNFS, HCC)

Posted in: Technical Track

The biggest objection to the Oracle Database Appliance (ODA) we hear from customers is the 4TB usable space limit (12TB of raw storage, triple mirrored). I think this is usually a perceived barrier rather than an objective one, more along the lines of being afraid to hit the limit if the system grows a lot. Nevertheless, Oracle has always listened to customers' concerns when it comes to purchasing barriers, and this time is no exception.

4TB is a limit no more

NFS has always been a good option for storing your ODA database backups. Now there is a simple way to go beyond the 4TB storage limitation: ODA now fully supports read-write NFS-mounted external storage for database files. The recommendation is to use the Oracle ZFS Storage Appliance (ZFSSA), since that is what Oracle has tested extensively, but there is no reason why it can't work with other NAS storage.

Direct NFS is your friend

Whether you are using ZFSSA or another NAS device, Direct NFS (dNFS) can be used instead of the standard Linux "kernelized" NFS client (I haven't tested it myself yet). dNFS does NFS IO more efficiently, with better performance and scalability and less CPU overhead.
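As a rough sketch of what enabling dNFS looks like on 11.2 (the hostname, IP address, and paths below are made up for illustration): relink the ODM library, then describe the NFS server in oranfstab so the database can open its own TCP connections to it.

    # enable the dNFS ODM library (run as the database software owner, instances down)
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk dnfs_on

    # $ORACLE_HOME/dbs/oranfstab -- one entry per NFS server (example values)
    server: zfssa1
    path:   192.168.20.10
    export: /export/oradata  mount: /u02/oradata

After a restart, v$dnfs_servers should show the connection once datafiles on that mount are accessed; if it stays empty, the IO is still going through the kernel NFS client.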

Hello HCC

Using the ZFS Storage Appliance also makes Hybrid Columnar Compression (HCC) available to ODA customers. HCC is the technology that originally appeared on Exadata only, but it has recently become available to Oracle Database customers using ZFSSA or Pillar as the database storage. HCC is actually free with Oracle Database Enterprise Edition: no additional database options and no additional ZFSSA options are required. However, HCC does require Oracle Database 11.2.0.3, while the latest ODA patchset runs 11.2.0.2. But wait… it seems 11.2.0.3 will be on ODA in April, so stay tuned.
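Once you are on 11.2.0.3 with the datafiles on ZFSSA, HCC is just a compression clause on the table. A minimal sketch, with made-up table and tablespace names:

    -- create an HCC-compressed copy of a table in a tablespace on the NFS storage
    CREATE TABLE sales_archive
      TABLESPACE nfs_data
      COMPRESS FOR QUERY HIGH   -- or COMPRESS FOR ARCHIVE LOW/HIGH for colder data
      AS SELECT * FROM sales;

    -- confirm the compression type that was applied
    SELECT table_name, compression, compress_for
      FROM user_tables
     WHERE table_name = 'SALES_ARCHIVE';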

Do you appliance?

One thing to remember with external NFS storage for ODA is the new dependency: you now need to take care of storage availability yourself. This somewhat breaks the all-in-one appliance idea, since managing your own storage is more work (and more coordination between the storage and DBA teams) than just racking a single 4U ODA device. One way to limit the ODA's dependency on external storage is to keep only read-only data in read-only tablespaces on the NFS-mounted storage and set the READ_ONLY_OPEN_DELAYED parameter to TRUE, as sketched below.
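A minimal sketch of that pattern, with made-up tablespace and datafile names:

    -- static parameter: datafiles of read-only tablespaces are opened on first access, not at database open
    ALTER SYSTEM SET read_only_open_delayed = TRUE SCOPE = SPFILE;

    -- keep historical data on the NFS mount and freeze it
    CREATE TABLESPACE hist_2011
      DATAFILE '/u02/oradata/hist_2011_01.dbf' SIZE 32G;
    -- ... move or load the historical data ...
    ALTER TABLESPACE hist_2011 READ ONLY;

With the parameter set, a NAS outage won't stop the database from opening; the read-only datafiles are only needed when someone actually queries them.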

Just like when using NAS for backups, it's advisable to segregate the NFS IO traffic on a separate bonded pair of NICs, and that's where the ODA has plenty of capacity with six 1Gbit and two 10Gbit NICs.
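Even with dNFS, the datafiles still need to be reachable through a regular mount point, and for that kernel-NFS mount, options along these lines are commonly recommended for Oracle datafiles on Linux. Treat this as a sketch and verify against the current Oracle support notes for your versions; the hostname and paths are made up:

    # /etc/fstab entry for the database file mount (example values)
    zfssa1:/export/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0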

And the timing is good for Pythian: a shiny new ZFS 7320 Storage Appliance should show up at our office any day now.

If you haven’t yet considered a new Oracle Database Appliance for your next project, drop us a line at [email protected]


About the Author

What does it take to be chief technology officer at a company of technology experts? Experience. Imagination. Passion. Alex Gorbachev has all three. He’s played a key role in taking the company global, having set up Pythian’s Asia Pacific operations. Today, the CTO office is an incubator of new services and technologies – a mini-startup inside Pythian. Most recently, Alex built a Big Data Engineering services team and established a Data Science practice. Highly sought after for his deep expertise and interest in emerging trends, Alex routinely speaks at industry events as a member of the OakTable.

13 Comments

Kevin Closson
March 26, 2012 11:55 pm

Great post, Alex.

Yes, Oracle continues to be the most responsive IT vendor by going out of their way to address customers' needs!

So, I take it then that the recipe for an ODA customer would be to get 11203 (as soon as it's available), apply 13041324 so that HCC works with NFS, and then, just to make sure one doesn't accidentally use another vendor's fully functional, standards-based NFS filer, apply 13362079?

13041324 – HCC ON ZFS AND PILLAR STORAGE
13362079 – HCC SHOULD NOT BE ENABLED FOR NON ZFS/ PILLAR STORAGE ARRAY

Reply
Alex Gorbachev
March 27, 2012 8:57 am

Now you almost spoiled another blog post. :)

Wait until 11.2.0.3 is out for ODA. If Oracle keeps its promise, the ODA patch bundles will include all required patches (that would include the 11.2.0.3.1 database patch bundle, which contains both of those patches). So you should be right for users of the generic platform, and ODA customers should have their lives simplified by Oracle.

Reply

With HCC only on ZFS, Oracle seems to be moving towards a strategy of "Oracle products work best with Oracle hardware." I can only foresee many more similar "synergistic" features in the coming releases. Is there anything wrong with that? [I do not work for Oracle anymore, but I still hold the stock :-)]

Reply

11.2.0.3 should be out soon, along with the multiple Oracle Home support that Oracle promised. ZFS should help out ODA customers that started out with a small solution and now need more space.

Reply
Kevin Closson
March 27, 2012 11:45 am

Alex,

Is it permissible to replace the 2 x8 PCIe 1GbE ethernet cards in each server with 10GbE cards?

Reply
Alex Gorbachev
March 27, 2012 2:20 pm

No hardware customization is supported. Each ODA server node already has two 10 GbE ports that Oracle generally suggests bonding together. Do you need more?

Reply
Kevin Closson
March 27, 2012 11:49 pm

Really? I read through the datasheet and saw GbE. All the better.

Smart customers will put Oracle to the task of doing a PoC with a quarter-rack and an ODA+S7000 side by side. If the queries are complex, the performance will be *very* close.

Reply
Alex Gorbachev
March 28, 2012 4:29 pm

I share your point that for quite a few workloads, ODA on its own will come close to a quarter rack.

The pressure (often subjective) is that you can't easily scale capacity beyond a single ODA. ZFSSA is the way out for storage capacity (again, not ideal, but it adds lots of comfort). Anyway, for customers who are concerned with super growth, a quarter-rack Exadata (or whatever non-ODA solution, for that matter) would be a good option. But if such growth does materialize, throw away (or reuse) that $50K appliance and move on; its job was done.

Of course, an ego often makes potential needs look bigger than they really are, but that's not a new problem, eh?

Reply
Alex Gorbachev
March 28, 2012 4:30 pm

And thanks for your feedback, Kevin. It's surely appreciated by the readers here and, rest assured, by the author. ;)

Reply
Kevin Closson
March 29, 2012 11:16 am

By the time you outgrow your ODA, Oracle will (presumably) be offering an Ivy Bridge ODA, so you can double performance right there. I would simply ride the ODA Moore's Law train.

Reply
Alex Gorbachev
March 29, 2012 12:25 pm

Totally viable option.

Reply

Alex, Kevin,

Thanks, just the information I was looking for (and Google found you).

ODA + dNFS + NetApp: we suddenly have an option to put the power of the ODA CPUs and memory to good use. Not sure if I can arrange it, but if we do a PoC, I'll let you know.

Regards,
PdV

Reply

ODA + dNFS + ZFS Storage Appliance is a much better option than with NetApp. Why, you may ask? The ZFS SA has an integration of features and functionality that no other storage vendor has, with lots more performance at a fraction of the cost. The first obvious one is HCC (Hybrid Columnar Compression), but there's lots more here and coming (cloning, RMAN backups, etc.).

Take a look:
https://blogs.oracle.com/si/entry/7420_spec_sfs_torches_netapp

Reply
