Optimizing your MySQL Tuning Methodology


Optimizing your queries

There are two general methods for creating a query plan. A rule-based optimizer walks through your query, checks whether you’re doing X or Y, and then applies optimization A or B accordingly. Rule-based optimizers grow rather large, as there are many possible cases to account for. Sometime in the past couple of decades, rule-based optimizers fell out of favour and cost-based optimizers took over instead: at the moment the query is submitted, the optimizer does some quick calculations of the cost of various ways of executing the query and picks the one that is probably best. The simplicity of the optimizer, combined with its “it generally works” nature, means it totally wins the popularity contest.
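
As a quick illustration, you can ask MySQL to show you the plan it settled on for any given query (the orders table here is just a hypothetical example):

    -- Show the plan the cost-based optimizer chose for a hypothetical query.
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

    -- On more recent versions, the JSON format also exposes the cost
    -- estimates behind that choice.
    EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 42;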

My Tuning Methodology, the Overview

With this in mind, I wanted to evaluate my own tuning methodology. At first blush, I thought my method rather coarse and unrefined. I generally look at every variable in my.cnf, do a quick search if I don’t know what it is, set it to something that “seems about right,” and then go to the next one.
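
If you want to see everything the server is actually running with while you work through the file, the variables are all queryable:

    -- Every server variable, as the running server sees it.
    SHOW GLOBAL VARIABLES;

    -- Or one subsystem at a time, e.g. everything InnoDB-related.
    SHOW GLOBAL VARIABLES LIKE 'innodb%';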

My tuning experience is almost exclusively in high-volume shops, which isn’t representative of the many MySQL installations of different sizes out in the world. In high-volume environments there are no good rules of thumb for tuning, because if you followed any such rules, your MySQL servers would be ground into the dust. So this rather naïve approach is actually a good start. Because there’s that word again: rule.

You can tune my.cnf by keeping several rules of thumb handy: a very long ruleset of “if this, then that,” either documented or held as a mental checklist. This works well on typical MySQL installations, but it isn’t my methodology. Mine is iterative and experiment-based.

My Tuning Methodology, the Details

So now I’ve got a my.cnf that I’ve touched line-by-line, picking values that seemed reasonable. Anyone who’s ever administered a MySQL server knows this my.cnf is going to be a disaster if left unchecked. But here’s where my magic begins, and why my methodology works in high-volume environments where no rules of thumb apply.

I place the my.cnf onto a machine that is taking representative load. That is, it receives queries much like it would if it were performing its production role. There are a lot of ways of doing this, but my favorite is to put the slave into a production pool at a small fraction (perhaps 1%) of the workload of a currently-production slave.

Next, I point trending software at the slave. What do I monitor? That’s easy. EVERYTHING. Every single statistic I can think of, I graph, from the OS and DBMS. Everything sar would ever tell you. Everything from “show global status”. Everything in log files I can turn into numbers.
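
Your trending software does the collection for you, but the raw material on the MySQL side is nothing more exotic than this:

    -- Every counter the server will report about itself.
    SHOW GLOBAL STATUS;

    -- The same counters are available as a table, which makes them easy to
    -- snapshot into a trending system (information_schema here; on newer
    -- servers they live in performance_schema.global_status instead).
    SELECT VARIABLE_NAME, VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS;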

Then I measure the “black-box expected throughput” of the machine. That is, it’s getting a fraction of the query traffic, so is it performing obviously better than the other hosts doing full work? Say this machine is supposed to be able to answer its typical queries in 250ms. Is it doing so in at most 250ms, and perhaps better? If so, I increase the load on the machine. I keep increasing the load until it starts to struggle. Say I get it up to 150% of normal load, and it’s now taking 260ms to answer its queries. Now I know it’s “stressed.”
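
The “is it answering in 250ms” number is best measured from the application’s point of view, but as a rough server-side sanity check (assuming performance_schema is available and enabled), something like this shows where the time goes per statement pattern:

    -- Average latency per statement digest. Timer columns are in picoseconds,
    -- so divide by 10^9 to get milliseconds.
    SELECT DIGEST_TEXT,
           COUNT_STAR                  AS executions,
           AVG_TIMER_WAIT / 1000000000 AS avg_ms
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY AVG_TIMER_WAIT DESC
    LIMIT 10;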

At this point, I start to look at every statistic I’ve graphed and look for things at their limits. Is CPU maxed out? Is disk I/O at a plateau? Are the read_rnd_next handler stats off the charts? I find it useful to have the same metrics coming from an existing reference slave, but sometimes things are obviously in a bad state on their own (if read_rnd_next is pushing 100,000,000/s, for example, we probably need some indexes).

From here, I begin to get an idea of tunings or, more precisely, “experiments I can run.” Maybe the rate of opened tables is going crazy. So I tune table_open_cache, restart the server, and see whether the graph corrects (secondary concern) and whether my server is responding better, say 10ms faster per query (primary concern, which is most correctly measured as “does the application seem happy with this server in its current state?”). Maybe disk I/O is crazy, but the InnoDB free-pages count is close to nothing. That suggests an experiment to increase innodb_buffer_pool_size.
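
To make the table-cache example concrete, the counters and the knob involved look roughly like this (the value is illustrative, not a recommendation; table_open_cache has long been dynamic, while innodb_buffer_pool_size needed the restart mentioned above until MySQL 5.7):

    -- If Opened_tables keeps climbing while Open_tables sits at its ceiling,
    -- the table cache is probably too small.
    SHOW GLOBAL STATUS LIKE 'Open%tables';

    -- The experiment: raise the cache, then watch the graph (secondary) and
    -- the application's query latency (primary).
    SET GLOBAL table_open_cache = 4096;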

These are perhaps obvious examples, but there can be subtler and more surprising tuning choices to be made when you start viewing graphs of statistics you don’t even know the meaning of. For example, suppose you didn’t know what handler_read_rnd_next meant. You look at it and see it’s 10,000x larger than handler_read_next, which is 50x larger than handler_read_first. You start looking up what those stats mean, et voilà, you’ve made a useful discovery.
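
If you want those particular counters side by side:

    -- Handler_read_rnd_next counts rows read in sequence from the data file
    -- (typically full table scans), Handler_read_next counts rows read in
    -- index order (range and index scans), and Handler_read_first counts
    -- index scans started at the first entry. Wildly skewed ratios between
    -- them are worth investigating.
    SHOW GLOBAL STATUS LIKE 'Handler_read%';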

At this point, I’ve formulated a hypothesis, which sounds a bit like this: “The graphs indicate that my system is doing suboptimal X. There is a tuning A that affects X. Thus I will change its current value from M to N, and I expect it will perform more optimally in X.” I make the change, and verify the evidence on the graphs. Most importantly, the primary concern is that the application, the thing using my DBMS, should perform better. If the graphs indicate everything is better, but the application is performing worse, we must accept the failed experiment!

This leads us to an anecdote about InnoDB tuning.

Why Experimental Evidence Matters

At a previous company, I was running several 100% InnoDB MySQL servers in a particular slave query pool and getting a particular performance characteristic. Looking at my.cnf, I noted that innodb_buffer_pool_size was rather small (2GB out of 24GB total available RAM) and that I had an extremely low cache hit rate. I formulated the obvious hypothesis: that I should increase the buffer pool to improve my deplorable cache hit rate. Upon increasing innodb_buffer_pool_size to 6GB (I believe in changing things in small increments), however, I discovered that although my cache hit rate was much nicer, my query latency was also up (and the application was clearly suffering).
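
For reference, the cache hit rate in question comes from a couple of InnoDB counters; the usual approximation looks like this:

    -- Innodb_buffer_pool_read_requests = logical reads,
    -- Innodb_buffer_pool_reads         = reads that had to go to disk.
    -- Approximate hit rate = 1 - (reads / read_requests).
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';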

The first thing to do was tune it back to 2GB to ensure things returned to the status quo. Then I formulated a new experiment that made no sense, but that I had to try: I tuned innodb_buffer_pool_size to 1GB… Performance INCREASED over the original 2GB setting!

Further experimentation settled on about a 750MB innodb_buffer_pool_size being optimal for the machines in this particular query pool (that is, smaller values began leading to worse performance again).

This was EXTREMELY counter-intuitive, as it should be to you if you’re a MySQL DBA. It took me a long time to accept the truth of this situation and move on. I formulated many other hypotheses in the process of trying to understand this phenomenon. MyISAM? Bad trending data? Matter/antimatter collision? Practical joke by my co-workers who were in the datacenter siphoning off the blue smoke from my machines?

To add to the mystery, I observed two additional phenomena. First, in other slave pools, increasing innodb_buffer_pool_size indeed had the expected result: a better cache hit rate and better performance. Second, several months later these machines were replaced with a new set of hardware configured similarly in RAM and disk, but with AMD processors instead of Intel. Upon loading them up with MySQL and putting them into the pool, they performed extremely badly compared to their peers. It took almost as long to rediscover the obvious, that increasing innodb_buffer_pool_size decreased latency on this new configuration, as it had originally taken to learn to ignore that intuition. But the FACTS, that is, the graphs coming out of the trending software, didn’t lie. On the new configuration we could see performance improve as innodb_buffer_pool_size was tuned up. I believe we settled on about 18GB of the total 24GB RAM as the optimal tuning.

Wrapping up the Experiment, Standardize on your Configuration

At some point, after staring at graphs, modifying parameters, altering tables, changing sysctl.conf settings, and remounting filesystems, it becomes harder to suggest tunings that will have a great effect. Keep in mind that the business has its own goals. After a while, getting another 5% of performance out of your servers isn’t worth as much to the business as, say, helping the developers push out the next release. So recognize that although you could spend the next several months staring at graphs and performing experiments (which are fun), at this point you should be satisfied with your increased performance and get back to the daily grind.

Roll your tunings out to the pool of DBMSes, and monitor the cluster to be sure that, on the whole, it’s performing as well as the individual test machine was. This is important! Sometimes a subset of the machines will behave in a deviant fashion you didn’t expect; be prepared for it. If all is well, move on to something more important. Don’t worry: changes in workload, hardware configuration, or application code will necessitate revisiting this procedure in the future.

 
