Everyone knows that seminal papers need a simple, descriptive title. “A Relational Model of Data for Large Shared Data Banks”, for example. I think Michael Stonebraker overshot the target in a 2007 paper titled “The End of an Architectural Era”.
Why is this The End? According to Michael Stonebraker, “current RDBMS code lines, while attempting to be a ‘one size fits all’ solution, in fact, excel at nothing. Hence, they are 25-year-old legacy code lines that should be retired in favor of a collection of ‘from scratch’ specialized engines”.
He makes his point by noting that traditional RDBMS design is already being replaced by a variety of specialized solutions: data warehouses, stream processing, text and scientific databases. The only uses left for the RDBMS are OLTP and hybrid systems.
The provocatively named paper is simply a description of a system designed from scratch for modern OLTP requirements, and a demonstration that this system gives better performance than a traditional RDBMS on an OLTP-type load. The conclusion is that since the RDBMS can’t even excel at OLTP – it must be destined for the garbage pile. I’ll ignore the fact that hybrid systems are far from extinct and look at the paper itself.
The paper starts with a short review of the design considerations behind traditional RDBMS, before proceeding to list the design considerations behind the new OLTP system, HStore:
- The OLTP database should fit entirely in memory. There should be no disk writes at all. Based on TPC-C size requirements this should be possible, if not now then within a few years.
- The OLTP database should be single-threaded – no concurrency at all. This should be possible since OLTP transactions are all sub-millisecond, and in a memory-only system they should be even faster. This removes the need for complex concurrency-control algorithms and data structures, and improves performance even more. Ad-hoc queries will not be allowed.
- It should be possible to add capacity to an OLTP system without any downtime. This means incremental expansion – it should be possible to grow the system by adding nodes transparently.
- The system should be highly available, with a peer-to-peer configuration – the OLTP load should be distributed across multiple machines, and inter-machine replication should be used for availability. According to the paper, in such a system redo and undo logging become unnecessary. The paper references another paper that argues that rebuilding a failed node over the network is as efficient as recovering from a redo log. Obviously, eliminating redo logs eliminates one of the worst OLTP bottlenecks: writing data to disk synchronously.
- No DBAs. Modern systems should be completely self-tuning.
In other sections Stonebraker describes a few more properties of the system:
- With persistent redo logs gone, and locks/latches gone, the overhead of the JDBC interface is likely to be the next bottleneck. Therefore the application code should take the form of stored procedures inside the data store. The only command run externally should be “execute transaction X”.
- Given that the DB will be distributed and replicated, and network latencies are still measured in milliseconds, the two-phase commit protocol should be avoided.
- There will be no ad-hoc queries. The entire workload will be specified in advance.
- SQL is an old legacy language with serious problems that were exposed by Chris Date two decades ago. Modern OLTP systems should be programmable in a modern lightweight language such as Ruby. Currently the system is queried with C++.
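To make the “only stored procedures, only ‘execute transaction X’” interface concrete, here is a minimal sketch in Python. All names (`ProcStore`, `register`, `execute`) are mine, not HStore’s: the point is only that application code lives inside the data store and clients never send queries, just a transaction name and its arguments.

```python
# Toy sketch of a "stored procedures only" data store (hypothetical names).
# Clients never send SQL; they only say "execute transaction X with args".
class ProcStore:
    def __init__(self):
        self.data = {}    # in-memory rows, keyed by some identifier
        self.procs = {}   # registered stored procedures

    def register(self, name, fn):
        """Install application code inside the data store."""
        self.procs[name] = fn

    def execute(self, name, *args):
        """The only externally visible command: run transaction `name`."""
        return self.procs[name](self.data, *args)

def new_order(data, customer, item):
    # A whole transaction runs inside the store, start to finish.
    data.setdefault(customer, []).append(item)
    return len(data[customer])

store = ProcStore()
store.register("new_order", new_order)
store.execute("new_order", "c1", "widget")  # returns 1: first order for c1
```

The client-side API collapses to a single call, which is exactly why the JDBC-style per-statement round trips stop being a bottleneck.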
The requirements seem mostly reasonable and very modern – use replication as a method of high availability and scalability, avoid disks and their inherent latencies, avoid the complications of concurrency, avoid ad-hoc queries, avoid SQL and avoid annoying DBAs. If Stonebraker can deliver on his promise, if he can do all of the above without sacrificing the throughput and durability of the system, this sounds like a database we’ll all enjoy.
In the rest of the paper, the authors describe some special properties of OLTP workloads, and then explain how HStore exploits these properties to implement a very efficient distributed OLTP system. In the last part of the paper, the authors use HStore to run a TPC-C-like benchmark and compare the results with an RDBMS.
Here, in very broad strokes, is the idea (the paper explains in some detail how things are done; I only describe what is done):
The system is distributed, with each object partitioned over the nodes. You can specify how many copies of each row will be distributed, and this provides high availability (if one node goes down, all the data is still available on other nodes).
Each node is single-threaded. Once a SQL query arrives at a node, it runs to completion without interruption. There are no physical files. The data objects are stored as B-trees in memory, with each B-tree block sized to match the L2 cache line.
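The cache-line sizing is just arithmetic, and it is worth seeing how small such a node is. The numbers below are assumptions of mine (a 64-byte line, 8-byte keys and pointers), not figures from the paper:

```python
# Back-of-the-envelope: how many keys fit in a B-tree node sized to one
# cache line? Sizes below are assumptions, not the paper's figures.
CACHE_LINE = 64   # bytes in one cache line (typical on x86)
KEY_SIZE = 8      # one 8-byte integer key
PTR_SIZE = 8      # one 8-byte child pointer

# A node holding k keys needs k+1 child pointers:
#   k*KEY_SIZE + (k+1)*PTR_SIZE <= CACHE_LINE
fanout = (CACHE_LINE - PTR_SIZE) // (KEY_SIZE + PTR_SIZE)
print(fanout)  # → 3 keys per node under these assumptions
```

A fan-out of three sounds tiny, but every node visit costs exactly one cache-line fetch, which is the whole point when the entire tree lives in RAM.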
The system will have a simple cost-based optimizer. It can be simple because OLTP queries are simple. When multi-way joins happen, they always involve identifying a single tuple and then joining tuples to that record in a small number of 1-to-n joins. Group-by and aggregation don’t happen in OLTP systems.
The query plans can either run completely on one of the nodes, be decomposed into a set of independent transactions that each run completely on one node, or require results to be communicated between nodes.
The way to make all this efficient is by using a “database designer”: since the entire workload is known in advance, the database designer’s job is to make sure that most queries in the workload can run completely on a single node. It does this by smartly partitioning the tables, placing parts that are used together frequently on the same node, and copying tables (or just specific columns) that are read-only all over the place.
Since there are at least two copies of each row and each table, there must be a way to update them consistently. Queries that can complete on a single node can simply be sent to all relevant nodes, and we can be confident that they will all complete with identical results. The only complication is that each node must wait a few milliseconds before running the latest transaction, to allow for receiving prior transactions from other nodes. The order in which transactions run is determined by timestamps and node IDs. This guarantees an identical order of execution on all nodes, which is what makes the results consistent.
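The deterministic-ordering trick is simple enough to sketch. In this toy model (mine, not HStore’s code), every replica buffers incoming transactions briefly, then applies them sorted by the (timestamp, node_id) pair, so every copy converges to the same state regardless of network arrival order:

```python
# Sketch of deterministic replica ordering: apply buffered transactions
# in (timestamp, node_id) order, identically on every replica.
def apply_in_order(pending, state):
    # pending: (timestamp, node_id, operation) tuples received from peers;
    # sorting the tuples yields the same total order on every replica.
    for ts, node, op in sorted(pending, key=lambda t: (t[0], t[1])):
        op(state)
    return state

# Transactions arrive out of order: nodeB's write shows up first.
log = [
    (2, "nodeB", lambda s: s.__setitem__("x", s.get("x", 0) + 10)),
    (1, "nodeA", lambda s: s.__setitem__("x", 5)),  # still runs first
]
state = apply_in_order(log, {})
print(state["x"])  # → 15 on every replica, whatever the arrival order
```

The few-millisecond wait in the real system is exactly the buffering step: it gives stragglers with lower timestamps a chance to arrive before the sort-and-apply happens.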
In the case of transactions that span multiple sites and involve changes that affect other transactions (i.e. the order in which they execute relative to other transactions matters), one way to achieve consistency would be to lock the data sources for the duration of the transaction. HStore uses another method: each worker node receives its portion of the transaction from a coordinator. If there are no conflicting transactions with lower timestamps, the transaction runs and the worker sends the coordinator an “ok”; otherwise the worker aborts and notifies the coordinator. The transaction has failed, and it’s up to the application to recover from this. Of course, some undo must be used to roll back the successful nodes.
The coordinator monitors the number of aborts, and if there are too many unsuccessful transactions, it starts waiting longer between the time a transaction arrives at a node and the time the node attempts to run it. If there are still too many failures, a more advanced abort strategy is used. In short, this is a very optimistic database, where failure is preferred to locking.
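The abort-instead-of-lock protocol can be modeled in miniature. This is a toy of my own construction, not HStore’s actual protocol: workers vote ok or abort, the coordinator rolls back any worker that succeeded when another aborted, and it doubles its wait window when the abort rate climbs.

```python
import random

# Toy model of optimistic multi-site execution (not HStore's real code):
# workers abort on conflict instead of locking; the coordinator backs off
# when too many transactions fail.
class Worker:
    def __init__(self, conflict_rate):
        self.conflict_rate = conflict_rate
        self.applied = set()

    def try_execute(self, ts):
        if random.random() < self.conflict_rate:
            return False          # a lower-timestamp conflict: vote abort
        self.applied.add(ts)
        return True

    def undo(self, ts):
        self.applied.discard(ts)  # roll back a locally committed piece

class Coordinator:
    def __init__(self):
        self.wait_ms = 1
        self.aborts = 0
        self.attempts = 0

    def run(self, workers, txn_ts):
        self.attempts += 1
        votes = [w.try_execute(txn_ts) for w in workers]
        if all(votes):
            return True           # every site said ok: transaction commits
        self.aborts += 1
        for w, ok in zip(workers, votes):
            if ok:
                w.undo(txn_ts)    # undo the sites that had succeeded
        if self.aborts / self.attempts > 0.2:
            self.wait_ms *= 2     # too many failures: wait longer next time
        return False              # the application must retry or recover
```

Note there is no two-phase commit and no lock table anywhere: a conflicting transaction simply fails, which is the “failure is preferred to locking” trade-off described above.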
I’ll skip the part where a modified TPC-C proves that HStore is much faster than a traditional RDBMS tuned for 3 days by an expert. We all know that all benchmarks are institutionalized cheating.
What do I think of this database?
- It may be too optimistic in its definition of OLTP. I’m not sure we are all there with the pre-defined workload. Especially since adding queries can require a complete rebuild of the data-store.
- I’m wondering how he plans to get a consistent image of the data stored there to another system to allow querying. ETL hooks are clearly required, but it is unclear how they can be implemented.
- Likewise, there is no clear solution on how to migrate existing datasets into this system.
- HStore seems to depend quite heavily on the assumption that networks never fail or slow down. Not a good assumption from my experience.
- If Stonebraker is right and most datasets can be partitioned in a way that allows SQL and DML to almost always run on a single node, this can be used to optimize OLTP systems on RAC.
- I like the idea of memory only systems. I like the idea of replication providing recoverability and allowing us to throw away the redo logs. I’m not sure we are there yet, but I want to be there.
- I also like the idea of a system allowing only stored procedures to run.
- I’m rather skeptical about systems without DBAs; I’ve yet to see any large system work without someone responsible for keeping it working.
- I’m even more skeptical about systems without redo logs and how they manage to still be atomic, durable and just plain reliable. Unfortunately this paper doesn’t explain how redo-less systems can be recovered. It references another paper as proof that it can be done.
- Stonebraker deserves credit for anticipating the NoSQL boom 3 years in advance. Especially the replication and memory-only components.
I hope that in the next few months I’ll add a few more posts reviewing futuristic systems. I enjoy keeping in touch with industry trends and cutting-edge ideas.
“Especially since adding queries can require a complete rebuild of the data-store”
Says it all, doesn’t it?
If it depended on him, he’d have the entire edifice of IT recoding itself every three years, which is about as long as his new ventures last! Or more often if possible.
After all: developers are so “cheap”…
And of course, re-coding costs absolutely nothing. Whereas dbas, we all know, cost a “fortune”…
1- Do any DBAs nowadays have any say in application tuning?
2- Is any DBA nowadays paid a “fortune”?
3- Isn’t development by far the highest cost of any IT concern nowadays?
Why do people even listen to this lunatic?…
Remember that for MIT professors, developers are called “grad students” and are very cheap indeed :)
Why listen? Because amidst the bullshit, there are ideas that can work.
The CTO of Amazon read the paper and decided that much of it can’t work, but that in-memory distributed OLTP can.
Amazon built their own version, which is way more practical but also involves “eventual consistency”.
Three years later, everyone is talking about those NoSQL databases.
Maybe it’s just a fad, but you can’t ignore the fact that either Stonebraker predicted the future, or he invented it. Either way, it’s worth listening.
[…] Architectural Era about HStore, his modern implementation of an OLTP database system. Her article It’s the End of the World As We Know It (NoSQL Edition) is a good write-up and analysis for those (such as me) who don’t have time to read and digest […]
VoltDB is the commercialization of some of these ideas. There are some important differences from the paper – but the core ideas are present: non-blocking, stored procedure interface to a horizontally partitioned SQL row store optimized for throughput.
Your vitriol against Stonebraker aside, the issue of DBA cost versus developer cost is an interesting one. I’ve talked to organizations that put a lot of effort into minimizing DBA cost per database instance and other groups that lament development costs. Minimizing both is the logical desire (unless you’re a DBA or an application developer).
Thanks for stopping by to comment, Ryan!
I’m very interested in real-world stories of how these ideas worked for VoltDB customers. Especially if VoltDB really doesn’t allow for ad-hoc queries.
There are some excellent conceptual ideas in the Stonebraker paper, but:
It is irresponsible to propose eliminating some of these mechanisms without sufficient detail about their replacements. I will gladly give up undo and redo for a fully working, provable redundant recovery mechanism that’s been thoroughly tested – but not for one that doesn’t quite exist yet!
Come to think of it, IT software salesmen are notorious for doing exactly the same thing.
Cassandra and other NoSQL solutions are all the rage. They are non-blocking and so fast.
First, people use them because they are free. Second, they are not perfect: Reddit is down much more often and seems slow at times, and every other time I go to Twitter I get the “Oops, something is wrong” page.
Something is wrong: they all use Cassandra. I can’t fault the technology, but maybe the people implementing it? OK, maybe it is the technology too. Either way, it is not as “simple” as they make it out to be, and it is not a silver bullet.
A slightly more balanced way to look at this:
Cassandra is new software backed by relatively new algorithms.
Of course it’s not perfect.
Of course it’s not a silver bullet.
Since it requires a new way of data modeling, a lot of implementations will suck until we figure out how to do it right.
It is a real attempt to solve a real problem web sites ran into. It is interesting and worth following. If you have a problem that matches this solution – you can consider trying Cassandra, knowing that it has bugs and will not solve all your problems easily.
re: “If Stonebraker is right and most datasets can be partitioned in a way that allows SQL and DML to almost always run on a single node, this can be used to optimize OLTP systems on RAC.”
Stonebraker was unable to partition the nine-table TPC-C schema in such a way that each TPC-C transaction can be performed on a single node. The “Voter” teaching example used by VoltDB has only one table and two queries. :-) See https://www.slideshare.net/VoltDB/building-voltdb-applications