mysql> set global innodb fast=true;


So you ran into some basic limitations with MyISAM when your site got busier. Even single-row updates would lock the whole table and slow things down to a crawl. Then you converted to InnoDB to get the benefit of row-level locking, but now the site is even slower than before. What gives?

Well, we had exactly the same issue with one of our clients, a large online community site for models and photographers. Updates were killing the system under MyISAM, and converting the tables to InnoDB got rid of the table locks but made the site slower than ever. The buffer pool hit ratio was 1000/1000 and there were no obvious bottlenecks, except that everything was excruciatingly slow.

Here’s what was happening: MySQL by default sets autocommit=1 for every connection. That isn’t so bad for MyISAM, since its writes aren’t guaranteed to disk anyway, but InnoDB has to do a flush to disk for each implicit commit. That means every insert/update/delete on an InnoDB table results in a flush to disk.
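You can see the defaults for yourself from any connection; with them in place, each statement below is its own transaction and its own flush to disk (the table and column names here are made up purely for illustration):

mysql> SELECT @@autocommit;                                  -- 1 by default
mysql> SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'; -- 1 by default: flush the log at every commit
mysql> UPDATE members SET last_seen = NOW() WHERE id = 42;   -- implicit commit, one flush
mysql> DELETE FROM sessions WHERE member_id = 42;            -- another implicit commit, another flush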

What’s so bad about flushes to disk? Nothing at all. They ensure that any commit is guaranteed to be there when you restart the database after a crash. The only penalty is the physical speed of the commit: the disk has to write the data to a platter before the commit can return, and that takes time.

Assuming an average of around 15 ms per disk write (a 9 ms seek plus rotational latency), you’re limited to roughly 67 commits per second [1], which is really slow. And while the disk is busy seeking to the sector to be written, it is not doing reads for you. In effect, things just hang. InnoDB may work around part of this limitation by grouping some commits together, but even so, the limit remains, just at a slightly higher commit rate.
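If you want to watch this ceiling in action, InnoDB keeps a running count of log flushes, and sampling it over an interval gives a rough flushes-per-second figure. A quick sketch, assuming a server recent enough to have SHOW GLOBAL STATUS and the Innodb_os_log_fsyncs counter:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_os_log_fsyncs';  -- note the value
mysql> -- let the normal workload run for, say, 60 seconds
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_os_log_fsyncs';  -- note it again

Divide the difference by the elapsed seconds; on a busy autocommit workload the number tends to sit stubbornly near the per-disk limit described above.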

So what are the workarounds? Having the InnoDB log files on a dedicated disk might help some: your frequent flushes to disk will be on a different disk, so they won’t block selects from the data disk [2]. However, the physical limit on the number of flushes you can do still remains. A (battery-backed) write cache on the controller can help as well, but that may be overkill for what you want to do, those cards can be expensive, and eventually they too have to flush to disk.
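For the dedicated-disk option, the setting to look at is innodb_log_group_home_dir in my.cnf. A minimal sketch, assuming a separate spindle mounted at the purely hypothetical path /mnt/innodb-logs (shut the server down cleanly and move the existing ib_logfile* files there before restarting):

[mysqld]
# hypothetical mount point for a dedicated log disk
innodb_log_group_home_dir = /mnt/innodb-logs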

So let’s go back to why we went to InnoDB in the first place. We went because we did not want the database doing table-level locks. Data integrity on a crash was never guaranteed with MyISAM anyway, and it was not the reason we switched to InnoDB. So while a commit guaranteed to disk is nice to have, it’s not essential for this particular application. In the rare event of a database crash, we are willing to lose a few transactions.

Remember, this client is not a financial site, so the parameter of interest becomes innodb_flush_log_at_trx_commit. Setting it to 0 means that InnoDB no longer flushes every commit to disk. It still flushes the log about once a second, so in the worst case you may lose about one second’s worth of data. But on the other side, your disk is now freed up to do reads for you at full speed. The result: the site is fast again. Or, in the words of a user on the site: “OMFG someone fed the Hamster…the wheel is spinning again!”
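For reference, the change itself is a one-liner. The variable is dynamic, so it can be applied to a running server and then made permanent in my.cnf:

mysql> SET GLOBAL innodb_flush_log_at_trx_commit = 0;

[mysqld]
innodb_flush_log_at_trx_commit = 0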

1. This figure is illustrative only; it depends on the disk speed, disk layout, RAID controller, and so on, but the point is that eventually you run into a commit-rate limitation.

2. Commits are written only to the logfiles, not the datafiles; datafiles only ever get written at a checkpoint… but that’s for another blog post. Look up the InnoDB doublewrite buffer if you’re impatient!



2 Comments

A better “workaround” might be to disable AutoCommit and group logically related statements together in transactions, rather than letting InnoDB arbitrarily decide which bits of work are committed together as a unit. Unless you really are doing a bunch of rapid-fire single-statement transactions, in which case yeah, you’ve hit the ceiling of your commit rate and that won’t help. Another good solution is to use a RAID controller with a battery-backed write cache and let it group flushes together from the cache.
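For illustration, that grouping looks something like this (the statements themselves are made up); the whole unit of work then costs a single log flush instead of one per statement:

mysql> START TRANSACTION;   -- or SET autocommit = 0 for the session
mysql> UPDATE members SET photo_count = photo_count + 1 WHERE id = 42;
mysql> INSERT INTO activity_log (member_id, action) VALUES (42, 'upload');
mysql> COMMIT;              -- one flush for the whole group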

Innodb commits « François Schiettecatte’s Blog
June 21, 2007 3:26 pm

[…] Raj Thukral suggests setting the innodb_log_flush_at_trx_commit parameter in my.cnf to 0, thereby causing […]
