Analysis of the Oracle Exadata Storage Server and Database Machine

Posted in: Technical Track

Pythian has a full-featured Oracle Exadata Practice complete with successful implementations and reference customers.

*Updated* see comments.
Exadata — the smart storage server. I am definitely excited about this product, but my point of view is a bit different.

It’s fast, much faster than anything else out there right now. But how many shops will actually need this? How many shops can spend 2.2 million dollars on hardware and equipment?

What are the products, in a nutshell? The Oracle Exadata Storage Server (Data Sheet, PDF):

  • 2U storage “unit” with either 1 TB of SAS or 3.3 TB of SATA redundant capacity. There is a query processor in the box that can “offload” tasks from the main database server: predicate filtering, decompression, joins, and backups.
  • Storage units linked to database servers via dual InfiniBand, offering 20 Gbit/s (2.5 GBytes/sec) of bandwidth

The Database Machine (Data Sheet, PDF):

  • A standard 42U rack with 8 database servers and 12 Exadata storage servers.
  • Pre-installed Linux and Oracle. Pre-configured.
  • Across the 8 servers: a total of 256 GB RAM and 64 Intel cores @ 2.66 GHz, InfiniBand-connected and gigabit-switched.

The cost for one Database Machine: $2.33M ($650,000 + $1,680,000 in software), as grabbed from Larry’s keynote (thanks, Chet). I called the “call us now” phone number mentioned on the Oracle Exadata website to ask for pricing. They had no idea what I was asking about, and I’m still waiting for a salesperson to call me back. (Hint for Oracle: educate your sales staff about new products, just in case I decide to buy one the day after you announce it.)

You have to realize how “cheap” this is. It comes down to $25,000 per core for Oracle EE, RAC, and Partitioning! And extra “free” CPUs for decompressing, filtering and joining, and backups. That’s a good deal. Oh, did I mention you can interconnect several 42U racks?
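The per-core figure is easy to sanity-check. A quick sketch of the arithmetic (assuming the quoted $1,680,000 software figure is spread over the 64 database-server cores):

```python
# Back-of-the-envelope check of the price-per-core claim above.
software_cost = 1_680_000  # quoted software portion of the $2.33M total
db_cores = 8 * 8           # 8 database servers x 8 Intel cores each

per_core = software_cost / db_cores
print(f"${per_core:,.0f} per core")  # comes out near the $25,000/core figure
```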

Back to the main question, what problems does this product solve?


That’s right: the number one problem this product solves is configuration. 90% of the problems I’ve seen are due to improperly configured systems. I am not talking about init.ora settings here, or design, or indexing, or any of that. I am talking about configuration mistakes all over the place. Starting from the bottom up, these are the most common mistakes:

  • buying large disks without accounting for I/O bandwidth delivery
  • mis-configuring them in big meta-arrays (EMC style) with non-aligned stripe sizes. (See “turn-offs” in Christo Kutrovsky, Oracle Pinup)
  • sharing the spindles for redo, datafiles, backup, and a bunch of databases (3par style), thus ensuring that I/O is never sequential
  • getting single-channel Fibre Channel connectors to the database server
  • not configuring direct I/O or async I/O, or not setting the largest possible db_file_multiblock_read_count
  • not using ASM. ASM reduces overhead and manages data in extents of 1 MB or larger (an 11g feature): this is sequential data!
  • not using parallel query properly: using default values, or getting all of the above right but still not driving the bandwidth to perform
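
The first mistake on the list is worth a back-of-the-envelope sketch. The per-spindle rate below is an illustrative assumption, not a vendor figure; the point is that sequential bandwidth scales with spindle count, not with capacity:

```python
# Same usable capacity, very different sequential bandwidth.
PER_DISK_MBPS = 70.0  # assumed sequential rate of a single spindle

def array_bandwidth(num_disks: int, mbps_per_disk: float = PER_DISK_MBPS) -> float:
    """Aggregate sequential bandwidth of a striped array, ignoring controller limits."""
    return num_disks * mbps_per_disk

big_disks = array_bandwidth(4)     # 4 x 3 TB  = 12 TB
small_disks = array_bandwidth(12)  # 12 x 1 TB = 12 TB
print(big_disks, small_disks)      # 280.0 vs 840.0 MB/s for the same capacity
```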

Using Exadata necessarily and immediately solves all of these issues. You don’t have a choice: you get more I/O bandwidth when you buy extra space; there’s no other way.

No expensive consultants to install your system. From the DBA perspective, it’s heaven: no arguing with storage people for dedicated spindles, no arguing with CIOs about big vs. small disks, no arguing with system administrators about ASM. No hiring expensive consultants to “tune” the system or apply best practices.

You may laugh at all of the above issues, but many shops are exactly like that. Especially the big ones (the target market for Exadata), where everyone is too afraid to change anything in case they get blamed if it doesn’t work. The “best practices” are the only practices with the Database Machine.

To maximize performance, you have to get all the pieces together. Then and only then will you get all the benefits. And this is quite difficult to achieve, especially in large shops where several entire departments are involved.

In all my experience at Pythian, there has been only one client who, thanks to a combination of good managers, trust, and a desire for performance, followed my recommendations exactly. And you know what? They are getting their 400 MB/sec. The new server is reaching 800 MB/sec with dual 4 Gbit Fibre Channel.

Some interesting aspects of the Oracle Exadata Storage Server:


The data sheet presents two options: 1 TB SAS with 1,000 MB/s bandwidth, or 3.3 TB SATA with 750 MB/s. Compression is “extra”: in a typical data warehouse you get 2-3x compression, so your effective bandwidth from a single Exadata server will be 2,000-3,000 MB/s.
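The effective-bandwidth claim is straightforward arithmetic; a minimal sketch, using the data-sheet SAS rate and the typical 2-3x compression mentioned above:

```python
def effective_scan_rate(raw_mbps: float, compression_ratio: float) -> float:
    """Logical scan rate: each physical MB read holds `compression_ratio` MB of data."""
    return raw_mbps * compression_ratio

sas_raw = 1000.0  # MB/s, SAS option from the data sheet
print(effective_scan_rate(sas_raw, 2.0), effective_scan_rate(sas_raw, 3.0))
# 2000.0 3000.0 -> the 2,000-3,000 MB/s figure per Exadata server
```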


Mirroring is provided by ASM (either 2- or 3-way), and it is performed across Exadata storage servers (does that mean a minimum of two?).

Disk failure does not abort queries or transactions.

An Exadata Storage Server failure does abort queries or transactions, but with no data loss. This is important to know when calculating risk.


There’s a plug-in available for 10g Enterprise Manager, a GUI to manage all of this. Absolutely mandatory, in my opinion.


Oracle has solved the communication issues for big shops, and the result is indeed extreme performance. Don’t get me wrong here: Exadata is a brilliant idea that will solve some very difficult, specific problems for large data warehouse shops. But the Database Machine will do something much more real, something that will help far more people: it will make it impossible to mis-configure a database system.

Hats off to Oracle for releasing a product that solves a problem we face every day: convincing clients to get the right hardware setup for their database workload.

Learn more about Pythian’s services for evaluation, migration to and operational support for Oracle Exadata.


About the Author

An Oracle ACE with a deep understanding of databases, application memory, and input/output interactions, Christo is an expert at optimizing the performance of the most complex infrastructures. Methodical and efficiency-oriented, he equates the role of an ATCG Principal Consultant in many ways to that of a data analyst: both require a rigorous sifting-through of information to identify solutions to often large and complex problems. A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

19 Comments

Don’t get me started about configuration… :-)

I am not sure where this new product leaves the Oracle Optimized Warehouse Initiative: that is, a boxed and preconfigured server + storage + OS + database, ready to go and install all those custom 0’s and 1’s that users like to call data; especially the high-end Optimized Warehouses like the big SGI and IBM boxes.



I took a whole bunch of screenshots during the presentation and even captured the one with the pricing. You’ll find it here:



It is 40 Gb/s for InfiniBand (not 40 GB/s).


I think the main problem that the Exadata storage appliance and the Database Machine solve is that Oracle RAC created a huge demand for big shared disks, such as the SAN equipment that EMC and NetApp sell. That huge demand is something Oracle created without profiting from it, and now they are.


Actually, it is 16 Gb per InfiniBand port times two ports = 32 Gb (8 useful bits per 10 bits of encoding).

Log Buffer #116: A Carnival of the Vanities for DBAs
September 26, 2008 11:51 am

[…] Christo Kutrovsky offered his analysis of the Oracle Exadata Storage Server and Database Machine. In a nutshell, “ . . . the number one problem this product solves is […]


This solution is primarily geared towards data warehouses. Does anyone know if they plan to certify the Oracle E-Business Suite database to run on this architecture?


…actually, it’s 20Gb. The other leg is for failover. 20Gb is more than enough. You might care to read my blog:


Actually, it’s 2U, not 1U.

Christo Kutrovsky
September 28, 2008 8:00 pm

Just a note that I updated the following in this post:

– Exadata cells are 2U in size
– Pricing – 1 680 000 is for license (added picture of slide)
– Corrected GB to Gbits
– Changed total bandwidth as per Kevin’s comments


Slide seems to indicate that 1 680 000 is for storage server software and does not include database licenses.


“…actually, it’s 20Gb. The other leg is for failover.”

If the other port is used for failover, then the actual bitrate is 16Gb/s. IB is the only networking technology that advertises baud rate as opposed to say FC which also uses 8b/10b encoding but advertises for example 4Gb/s rather than 5 ;)


Christo, $1,680,000 is the price for the storage software only, not for all the Oracle software. The whole system will cost about $5,000,000. It is very expensive.


Software licenses can be negotiated way down. Actually, if you factor in the cost of the services you DON’T need, then you see what the actual ROI is. It’s a big number. This is an offering that is changing the landscape of IT orgs everywhere, especially with the announcement today.

Kumar Ramalingam
March 1, 2010 6:23 pm

They are trying to compete with Netezza in the DW market. The reality is that Oracle has not revealed who their Exadata customers are.


It is amazing how none of the operational challenges the box introduces are mentioned:
Backups can only be run by RMAN.
There is no SAN solution.
Dev and test have to be on Exadata.
A finely tuned DB would not need Exadata.
For an application to be fast, it needs to be parallelized.
All applications need to be RAC-aware and on 11gR2.

Edited by Christo: changed “arc” to “RAC”

Christo Kutrovsky
August 13, 2010 10:25 am


I agree there are operational challenges, particularly from experience supporting the new Database Machine; however, I don’t agree with your points.

– RMAN is a great way of doing backups, and Exadata offers greatly improved incremental backups. Without RMAN, backups of a 30-40 TB database would take forever.

– I am not sure what you mean by there are no SAN solutions?

– Dev and test do not have to be on Exadata. QA has to be, if you are benchmarking performance.

– No matter how finely tuned the database, Exadata offers some unique features. Whether those are deal-breakers is application-specific, but once you reach Exadata-sized applications, building a balanced machine is the challenge, because there are too many departments involved and too many company-policy limitations. Exadata solves all this by being a nicely built package.

– Assuming you are talking about a data warehouse reporting application, yes, it has to be parallelized. OLTP is parallelized by the fact that there are multiple users. And that’s normal; I don’t see the problem with this. There are no fast data warehouse systems that don’t use some kind of parallelism.

– You have to agree there are various levels of RAC awareness. These matter more if you are OLTP-biased, and less if you are data-warehouse-biased. And a RAC-aware application is just a well-“tuned” application.


Seems like I understand most of the concerns pointed out by Zman:

– has anyone explained so far what the proper methodology is for doing backups of data residing on an Exadata? Or how it fits into customers’ existing backup infrastructures? What needs to be changed, what is/isn’t supported (backup vendors), and finally, what is the best practice for doing backups, and should one expect additional costs here?

– no FC SAN in Exadata may be a problem. It mostly means no SAN-based/LAN-free backups and no direct attachment for tape drives/libraries. I guess the only method of pumping data out of an Exadata rack will be TCP/IP (1G or 10G Ethernet). This is still not the same as SAN. Anybody willing to share the knowledge?

– will Oracle offer an Exadata-Mini-Edition for 1/10th of the price to allow customers to have their dev/test envs running on a fully compatible HW/SW or maybe Exadata could be purchased as a software-only solution to be deployed on commodity servers?

– let’s assume Exadata is 10x faster than a comparable bunch of disks plus comparable high-end CPU/RAM power put together in a classic SMP_host-SAN-array manner. Will all applications be able to put enough pressure and generate enough parallelism onto an Exadata engine to reach, say, 80% of its potential?

I don’t want to say Exadata is a poor product or a bad concept. It seems to be a real breakthrough technology, but as it is still in its infancy, there are many areas where the product/technology is not mature. Best practices are not clear. Final results unknown. TCO/ROI proven only on paper.



Hi Oczkov,

– The Oracle MAA team has an excellent whitepaper on backup and recovery best practices for Exadata, recently updated. Integration into third-party backup infrastructures is fully supported, and works as for other ASM-based databases, using RMAN media manager plugins. Oracle’s Secure Backup media manager requires additional licenses on a per-tape-drive basis, and some third-party vendors require additional licenses for their RMAN media manager plugins.
– If you’re looking for speed and minimal resource usage, consider putting an InfiniBand HCA in your backup server. The IB backbone has much more capacity than an FC SAN.
– Exadata SQL syntax is the same as on other Oracle platforms, so using a regular non-Exadata development environment can be a viable option. You can of course place multiple instances (Dev/QA/Production) into a single Exadata system.
– There are plenty of performance results available online, and even on this blog. Any actual performance results depend on the application driving the load. There’s still no substitute for good old-fashioned benchmarking of your particular workload.


