Insider’s Guide to ODA Performance


I have a confession to make: I hate webinars. I find it difficult to concentrate on a disembodied voice, and I typically get distracted and find myself checking emails and blogs even during the best webinars. Watching a webinar is a bit like watching a DVD of a live show – not as fun as the live show, and not as polished as a music video.

But one can’t escape the fact that webinars are quickly becoming a very popular learning method. The best presenters I know are giving webinars, and every DBA uses them to improve his or her knowledge. I’m happy to report that despite the fact that I sat alone in a room and talked to my cat about ODA performance for a full hour, the presentation went rather well.

If you are interested in the topic but missed the webinar or if you want to hear it again, you can view our recording.

I also uploaded the slides to SlideShare, so you can take a better look at our benchmark results and study our consolidation methods at your own pace. Unfortunately, SlideShare mangled a few of the slides, so if they look unclear you can use the recorded webinar to disambiguate.

There were a few questions I did not have time to address during the webinar. Interestingly enough, many of them have nothing to do with ODA but rather with my benchmarking methods.

There were two types of benchmarks in the webinar:
The single-node and RAC tests were run by me using Swingbench. I used the SOE benchmark, which creates an OLTP workload similar to what is used in TPC-C. Since my intention was to stress the interconnect and not the I/O system, I used a tiny database (10 GB of data files and indexes) that fit entirely in memory, and I pre-warmed the buffer cache on both nodes. The load was driven from a single Swingbench client (charbench) for the single-node test, and from two charbench processes running on the same machine for the cluster test – roughly as sketched below.
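For those curious about the mechanics, here is a minimal sketch of how that driver setup could be scripted. The charbench flags follow Swingbench's documented usage, but the install path, connect strings, and parameter values are all assumptions – verify them against your own environment and Swingbench version.

```python
import subprocess

# Assumed install path -- adjust for your environment.
CHARBENCH = "/opt/swingbench/bin/charbench"

def run_charbench(connect_string, config="soeconfig.xml", users=32, minutes=60):
    """Launch one charbench process against the given connect string."""
    return subprocess.Popen([
        CHARBENCH,
        "-c", config,                                   # SOE benchmark configuration file
        "-cs", connect_string,                          # EZConnect string for the target instance
        "-uc", str(users),                              # number of concurrent benchmark users
        "-rt", f"{minutes // 60:02d}:{minutes % 60:02d}",  # run time as hh:mm
    ])

# Single-node test: one client driving node 1.
run_charbench("//oda-node1/soe").wait()

# Cluster test: two clients on the same driver machine, one per node.
clients = [run_charbench("//oda-node1/soe"), run_charbench("//oda-node2/soe")]
for c in clients:
    c.wait()
```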

The storage tests were executed by Alex Gorbachev, who used Orion to run them. The thread counts and I/O sizes used are shown in the slides.

After the webinar, Kevin Closson chided us for not using his benchmark tool: SLOB. SLOB is mid-way between Orion, which is an artificial I/O test, and Swingbench, which emulates full application workloads and therefore runs into many unrelated bottlenecks and can be difficult to configure and interpret. I’ll refer you to Kevin’s excellent blog posts to read more about what SLOB is and what it does. My colleague Yury was very excited about the new benchmark tool and started using it to get more data about ODA performance. He has already shared his experience and preliminary results. I expect he’ll share more over the next week or two, so watch our blog.

Another question during the webinar was how ODA storage differs from Exadata storage and why ODA can’t use a storage cache.
If you watched the webinar, you already know that ODA’s storage is simple and elegant: 20 SAS disks and 4 SSDs, each dual-ported and connected to the server nodes through two HBAs and two SAS expanders per node. This is about as direct as shared storage can be, which accounts in part for the performance we measured – no more misconfigured SAN switches. The catch is that because nothing is shared except the disks themselves, there is no place to put a shared cache, and with RAC an unshared cache (on the HBAs, for example) could cause corruption, so it cannot be used. This means the storage system can easily get saturated, causing severe performance issues, especially for writes to the redo logs. This is part of the reason the redo logs are placed on the SSDs. In the webinar we suggested additional methods to avoid saturating the disks.
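To make the saturation risk concrete, here is a back-of-the-envelope calculation. Every number in it is an illustrative assumption, not a measured ODA figure:

```python
# Illustrative assumptions only -- not measured ODA figures.
SAS_DISKS = 20
IOPS_PER_DISK = 200   # ballpark for a 15k RPM SAS drive
MIRROR_WRITES = 2     # ASM mirroring: each write lands on at least two disks

read_capacity = SAS_DISKS * IOPS_PER_DISK
write_capacity = read_capacity // MIRROR_WRITES

print(f"Aggregate read capacity : ~{read_capacity} IOPS")
print(f"Aggregate write capacity: ~{write_capacity} IOPS")
# With no shared cache to absorb bursts, a commit-heavy workload can reach
# this ceiling quickly -- hence redo logs on the SSDs.
```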

Exadata, however, has entire servers dedicated to storage. These are commonly known as storage cells, and because they are entire servers with RAM and CPUs dedicated to storage, we actually get what Oracle sometimes describes as “smart storage”. The storage cells can not only cache blocks in memory, avoiding expensive disk reads, but also interact with the Oracle database to pre-process and pre-filter data, sending back only the data needed to process a specific query. This pre-filtering is especially valuable for data warehouse workloads, which frequently scan entire large tables but only require a subset of the rows and columns. Pre-filtering the data reduces contention on both the storage network and the database cores. ODA has none of these features, and all data processing is done on the database servers, as it would be on any 11gR2 database.
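To build intuition for what pre-filtering buys, here is a toy simulation in plain Python – not anything Exadata actually runs – contrasting a scan that ships every row to the database with one that filters and projects at the storage layer:

```python
import random

# A toy "table": 100,000 rows of four columns; column 1 holds a 0-9 key.
rows = [(i, random.randint(0, 9), "payload-a", "payload-b") for i in range(100_000)]

def scan_without_offload(table):
    """Conventional storage: every row crosses the wire; the DB filters."""
    shipped = table                                # ship everything
    result = [r[0] for r in shipped if r[1] == 3]  # filter at the database
    return len(shipped), result

def scan_with_offload(table):
    """Smart storage: the cell filters rows and projects columns first."""
    shipped = [r[0] for r in table if r[1] == 3]   # pre-filtered at "storage"
    return len(shipped), shipped

sent_plain, _ = scan_without_offload(rows)
sent_smart, _ = scan_with_offload(rows)
print(f"rows shipped without offload: {sent_plain}")
print(f"rows shipped with offload:    {sent_smart} (~{sent_smart / sent_plain:.0%})")
```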

Finally, someone asked whether they can run Windows on ODA. I reacted with some outrage – running Windows on an Oracle Database Appliance sounds like sacrilege to me. However, Oracle does offer a flexible licensing model on the ODA, where you only pay database license fees for the cores you actually use. If you wish to use the other cores to run Windows, you can certainly install Oracle VM on the ODA and run Windows in a virtual machine.

Thanks again to everyone who attended and to everyone who made this event happen. Feel free to ask additional questions in the comment section.


5 Comments

Kevin Closson
May 18, 2012 1:36 pm

Don’t let my chiding distract… I do think it was a good webinar. Hey, it’s an appliance…there can’t really be much to say about it, right?

:-)

Chiding again :-)

You can chide back at me 5/31 :-)


“If you wish to use the other cores to run windows, you can certainly install Oracle VM on the ODA and run Windows in a virtual machine.”

This is not true, or so I believe. The licensing scheme for ODA requires you to shut down the unused cores at the hardware level. I don’t see how you would give a Xen-based hypervisor access to those cores.

However, I don’t see why you could not turn the database appliance into a virtualization appliance with Oracle VM. This would of course defeat the Oracle Appliance Kit, but it would allow flexible licensing of Oracle Database Standard Edition on the ODA hardware as well as open it up to running applications and middleware on the ODA itself.


I’m pretty sure that with Oracle VM you don’t need to shut down cores at the hardware level. Of course, check with your local sales guy before listening to me :)

I’m less sure on whether you can run Windows on Oracle VM…

I like the idea of virtualization appliance and I agree that running middleware on ODA makes tons of sense!


While Oracle VM does not require you to shut down the cores at the hardware level, the Oracle Appliance Kit does. Or used to, anyway.

With the latest release of OAK you can now run everything in Oracle VM: the database runs in a special Xen domain using however many cores you have licensed, and any spare capacity can be used for other virtual machines.

Note that there is no Oracle VM clustering in the current offering. In my experience running Windows guest machines on Oracle VM is painfully slow, but I have not (yet) tested this on ODA.

