Had an interesting situation come up today with a client: a server crashed because it ran out of memory. The calculation we used to monitor memory usage did not take all factors into account. Looking into this, I noticed a couple of things:
- There are numerous calculations available online.
- It seems that none of them take everything into account.
Wouldn’t it be nice if there were a total-possible-memory-consumption status value? It would be a valuable piece of information. (And while I am requesting things, what about peak memory usage by the mysqld server since it has been online?) Realistically, there is almost no case where you would actually use the maximum amount of memory. But knowing how much memory the MySQL server could possibly use would still be a very useful bit of information.
Second, why isn’t there an “official” formula that actually covers everything? It’s easy to forget things. As a simple case, what about the query cache? Did you remember it?
Sure, there are some big things; key_buffer_size springs to mind. But all the little things add up too. On a busy system, did you consider thread_stack? It defaults to 128 KB per connection. With 500 connections, that’s about 64 MB. You may dismiss that and say it’s not much, but in the end it doesn’t even take 64 MB to crash your server.
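The per-connection arithmetic above is worth sanity-checking. A quick sketch, using the 128 KB default and the 500 connections from the example:

```python
# Per-connection thread stack cost: thread_stack * max_connections.
# Values taken from the example above (128 KB default, 500 connections).
thread_stack_kb = 128
max_connections = 500

total_kb = thread_stack_kb * max_connections
print(total_kb, "KB, i.e. roughly", total_kb // 1000, "MB")
```

The same multiplication applies to every per-connection buffer, which is why they appear inside the max_connections term of the formula below.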
Do you know how much the core of the MySQL server actually uses, outside of all the buffers and data structures it builds? The only figure I have seen is 32 MB.
What about the operating system? And I don’t even want to consider if other programs are running on the same server.
max_memory_needed = core_mysql + global_values + (thread_buffers * max_connections)

max_memory_needed = core_mysql
                  + key_buffer_size
                  + innodb_buffer_pool_size
                  + innodb_additional_mem_pool_size
                  + innodb_log_buffer_size
                  + max_tmp_tables * min(tmp_table_size, max_heap_table_size)
                  + query_cache_size
                  + 3 * myisam_sort_buffer_size
                  + max_connections * (read_buffer_size
                                       + join_buffer_size
                                       + read_rnd_buffer_size
                                       + thread_stack
                                       + 2 * max_allowed_packet)
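The formula can be written as a small function. This is only a sketch of the calculation, not an official tool: the 32 MB core figure is the rough estimate discussed here, the factor of 3 on myisam_sort_buffer_size is my concurrency guess, and the caller is assumed to supply variable values (in bytes) as a dict, e.g. built from SHOW VARIABLES.

```python
CORE_MYSQL = 32 * 1024 * 1024  # rough estimate of the core server footprint


def max_memory_needed(v):
    """Worst-case memory from a dict of server variables, all in bytes."""
    per_connection = (
        v["read_buffer_size"]
        + v["join_buffer_size"]
        + v["read_rnd_buffer_size"]
        + v["thread_stack"]
        + 2 * v["max_allowed_packet"]
    )
    return (
        CORE_MYSQL
        + v["key_buffer_size"]
        + v["innodb_buffer_pool_size"]
        + v["innodb_additional_mem_pool_size"]
        + v["innodb_log_buffer_size"]
        + v["max_tmp_tables"] * min(v["tmp_table_size"], v["max_heap_table_size"])
        + v["query_cache_size"]
        + 3 * v["myisam_sort_buffer_size"]  # assumed max concurrent MyISAM sorts
        + v["max_connections"] * per_connection
    )
```

Feeding it your own my.cnf values and comparing the result to physical RAM is a quick way to see how overcommitted a configuration is.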
core_mysql is the amount actually used by the core server itself, without all the buffers and data structures given above. The only figure I have seen for this is 32 MB, which is probably pretty close.
myisam_sort_buffer_size is the size of the buffer allocated when sorting MyISAM indexes during a REPAIR TABLE, or when creating indexes with CREATE TABLE or CREATE INDEX. I am estimating there will never be more than three of these operations going on at the same time. Of course, there is no guarantee of this.
I believe the above formula gives a pretty accurate representation of the total amount of memory that could possibly be used by the MySQL server itself; the operating system’s own footprint comes on top of that.
Where it becomes far more unclear is a more realistic “normal operating conditions” formula. The problem is that this will vary from application to application. One application might never hit more than 75% of the memory calculated by this formula; another might hit 50%, and another 87%.
How do you resolve this? Well, probably the best way is through long-term monitoring. Cacti or Ganglia spring to mind for this. If you configure a server conservatively and know that it is using 80% of RAM at any time over a significant period of time (including periods of time with spikes in server activity), then you know you can increase the amount of memory allocated somewhere.
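As an illustration of the number such monitoring tracks, here is a minimal sketch, assuming Linux and a known mysqld PID (Cacti or Ganglia would do this properly, with graphing and history):

```python
def rss_fraction(pid):
    """Resident set size of a process as a fraction of total RAM (Linux)."""
    # VmRSS in /proc/<pid>/status is the process's resident memory in kB.
    with open(f"/proc/{pid}/status") as f:
        rss_kb = next(int(line.split()[1]) for line in f
                      if line.startswith("VmRSS:"))
    # MemTotal in /proc/meminfo is the machine's total RAM in kB.
    with open("/proc/meminfo") as f:
        total_kb = next(int(line.split()[1]) for line in f
                        if line.startswith("MemTotal:"))
    return rss_kb / total_kb

# Sample this periodically for the mysqld PID; a value consistently well
# below your target (e.g. 0.8) suggests room to enlarge a buffer somewhere.
```

The threshold and the mysqld PID lookup are left out here; the point is only what "80% of RAM over a significant period" means in concrete terms.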
What I typically do (if I have a choice) is configure a server for the expected number of connections, and then, after establishing a baseline with a set number of connections and a minimum amount of buffering, monitor the server while it is in use. Then I add any additional memory the server can spare to the InnoDB buffer pool (presuming that InnoDB is the main storage engine in use). Always leave some free memory, and don’t allocate everything down to the last byte. Once MySQL starts hitting your swap partition, very bad things can happen: the server slows down and users complain.
How do you determine your buffer allocations and memory usage? Is it just hit-or-miss? And did I miss something in the above formula? I want to hear from you!
I always use the minimum number of connections I can get away with. Just my two cents.
Still missing from the formula is the memory consumed by memory tables (number of memory tables * max_heap_table_size).
Also, there is no actual limit to the number of temporary tables. The documentation for max_tmp_tables says:
…The maximum number of temporary tables a client can keep open at the same time. (This option does not yet do anything.) …
And last one I can think of is binlog_cache_size, which may be allocated per connection if using transactional engines, so this one goes inside the “max_connections * (…)” formula.
First of all..thanks!!
You are absolutely right about the maximum number of temporary tables. I had looked that up at some point while writing the formula but evidently forgot to update it. I really wish that max_tmp_tables were implemented so you could know exactly how much memory temporary tables could use.
I completely forgot about binlog cache. Thanks!
Well, it says so here:
Option Sets Variable: Yes, max_tmp_tables
Variable Name: max_tmp_tables
Variable Scope: Both
Dynamic Variable: Yes
The maximum number of temporary tables a client can keep open at the same time. (This option does not yet do anything.)