Benchmarking NDB vs Galera

Posted in: Technical Track

Inspired by the benchmark in this post, we decided to run some NDB vs Galera benchmarks for ourselves.

We confirmed that NDB does not perform well on m1.large instances. In fact, it is totally unacceptable: no setup should ever have a minimum latency of 220 ms, so m1.large instances are not an option. The instances appear to become CPU bound, yet CPU utilization never rises above ~50%. Maybe top/vmstat can't be trusted in this virtualized environment?
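One possible explanation for "CPU bound but only ~50% utilization" on EC2 is steal time: cycles the hypervisor withholds from the guest, which vmstat does report, in its last column (st). A small sketch for checking it (the helper name is ours, not a standard tool):

```shell
# Print the steal-time (st) value, the last field of each vmstat sample line,
# skipping vmstat's two header lines.
steal_col() { awk 'NR > 2 { print $NF }'; }
# Usage during a run:  vmstat 1 5 | steal_col
```

If st is consistently high, the guest really is starved for CPU even though us+sy looks moderate.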

So, why not use m1.xlarge instances? This sounds like a better plan!

As in the original post, our dataset is 15 tables of 2M rows each, created with:

./sysbench --test=tests/db/oltp.lua --oltp-tables-count=15 --oltp-table-size=2000000 --mysql-table-engine=ndbcluster --mysql-user=user --mysql-host=host1 prepare

The benchmark against NDB was executed with:

for i in 8 16 32 64 128 256
do
  ./sysbench --report-interval=30 --test=tests/db/oltp.lua --oltp-tables-count=15 --oltp-table-size=2000000 --rand-init=on --oltp-read-only=off --rand-type=uniform --max-requests=0 --mysql-user=user --mysql-port=3306 --mysql-host=host1,host2 --mysql-table-engine=ndbcluster --max-time=600 --num-threads=$i run > ndb_2_nodes_$i.txt
done
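To turn the result files from the loop above into graphable numbers, the average TPS can be pulled out of each one. A post-processing sketch, assuming the "transactions: N (X per sec.)" line format that sysbench 0.4/0.5 prints in its summary:

```shell
# Extract the average transactions-per-second figure from each sysbench
# result file written by the benchmark loop.
for f in ndb_2_nodes_*.txt; do
  [ -e "$f" ] || continue            # skip if no results are present yet
  tps=$(awk '/transactions:/ { gsub(/\(/, "", $3); print $3 }' "$f")
  echo "$f $tps"
done
```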

After we shut down NDB, we started Galera and recreated the tables, but found that sysbench runs were failing. A suggestion from Hingo was to use --oltp-auto-inc=off, which worked.
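A plausible reason --oltp-auto-inc=off helps: with wsrep_auto_increment_control=ON, Galera sets auto_increment_increment to the cluster size and gives each node a distinct auto_increment_offset, so auto-generated ids interleave across nodes instead of running contiguously, which can trip up sysbench's assumptions; disabling auto-increment lets sysbench supply the ids itself. The sketch below only illustrates the interleaving for a hypothetical 2-node cluster, it is not Galera code:

```shell
# With increment=2 and offsets 1 and 2, the two nodes hand out disjoint,
# interleaved auto-increment sequences.
node1_ids=$(seq 1 2 9 | tr '\n' ' ')    # ids node 1 would assign
node2_ids=$(seq 2 2 10 | tr '\n' ' ')   # ids node 2 would assign
echo "node1: $node1_ids"
echo "node2: $node2_ids"
```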

Our benchmark against Galera was executed with:

for i in 8 16 32 64 128 256
do
  ./sysbench --report-interval=30 --test=tests/db/oltp.lua --oltp-tables-count=15 --oltp-table-size=2000000 --rand-init=on --oltp-read-only=off --rand-type=uniform --max-requests=0 --mysql-user=user --mysql-port=3306 --mysql-host=host1,host2 --mysql-table-engine=innodb --max-time=600 --num-threads=$i --oltp-auto-inc=off run > galera_2_nodes_$i.txt
done

Below are the graphs of average throughput at the end of 10 minutes, and 95% response time.

[Graphs 2a-2d: average throughput after 10 minutes and 95% response time, NDB vs Galera, 2 nodes]

Galera clearly performs better than NDB with 2 instances!

But things become very interesting when we graph the reports generated every 10 seconds.

[Graphs 2e-2f: throughput and response time, reported every 10 seconds]

Surprised, right? What is that?

Here we see that even though the workload fits completely in the buffer pool, the high TPS causes aggressive flushing.
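This flushing pressure can be watched directly during the run by polling InnoDB's buffer pool counters. A minimal sketch, assuming the standard Innodb_buffer_pool_pages_dirty and Innodb_buffer_pool_pages_total status counters; it computes the dirty-page percentage from "name value" lines on stdin:

```shell
# Compute the dirty-page percentage of the InnoDB buffer pool from
# "SHOW GLOBAL STATUS"-style "name value" lines.
dirty_pct() {
  awk '$1 == "Innodb_buffer_pool_pages_dirty" { d = $2 }
       $1 == "Innodb_buffer_pool_pages_total" { t = $2 }
       END { printf "%.1f\n", 100 * d / t }'
}
# Poll during the benchmark, e.g.:
#   mysql -N -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%'" | dirty_pct
```

A dirty-page percentage that keeps climbing toward the flushing thresholds is exactly the aggressive-flushing pattern seen in the per-interval graphs.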

We assume the benchmark in the Galera blog post was CPU bound, while our benchmark is I/O bound.

We then added two more nodes (m1.xlarge instances), kept the dataset at 15 tables x 2M rows, and re-ran the benchmark with NDB and Galera. Performance on Galera gets stuck, due to I/O. In fact, with Galera we found that performance on 4 nodes was worse than on 2 nodes; we assume this is because the whole cluster moves at the speed of the slowest node.

Performance on NDB keeps growing as new nodes are added, so we added another 2 nodes for just NDB (6 nodes total).

[Graphs 2g-2h: throughput and 95% response time with 4 nodes (NDB and Galera) and 6 nodes (NDB only)]

The graphs show that NDB scales better than Galera, which is not what we expected to find.

It is perhaps unfair to say that NDB scales better than Galera; rather, NDB's checkpointing puts less stress on I/O than InnoDB's checkpointing, so the bottleneck is in InnoDB and not in Galera itself. To be more precise, the bottleneck is the slow I/O.
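The slow-I/O claim can be checked by watching device utilization while the benchmark runs. A sketch assuming extended iostat output where %util is the last column; the device name xvdb is only a placeholder for an EC2 ephemeral disk:

```shell
# Print the %util (last field) of extended iostat output lines for one device.
dev_util() { awk -v dev="$1" '$1 == dev { print $NF }'; }
# e.g.:  iostat -x 10 | dev_util xvdb
```

Sustained %util near 100% during the Galera runs would confirm that InnoDB checkpointing, not replication, is what saturates.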

The following graph shows the performance with 512 threads on 4 nodes (NDB and Galera) or 6 nodes (NDB only). Data was collected every 30 seconds.

[Graph 2i: throughput with 512 threads, 4 nodes (NDB and Galera) and 6 nodes (NDB only)]
