MySQL Memory Usage Profile Script

Posted in: Technical Track

There are various tools available for gathering and monitoring MySQL performance data, but it’s not always clear what the numbers mean, which ones are important, and what can be tuned. I’m not going to try to post a comprehensive MySQL tuning manual here. This post is about taking a snapshot profile of a MySQL instance that will let you define what type of instance you are dealing with, and give you some idea of whether the current configuration is stable.

I’ll be using a script that I’ve designed for daily reporting on the mysql instances that we monitor. This script will be available in a follow-up post, but for now, you can see the output of the report itself and a break-down of the more important sections.

To start with, let’s look at an example report:
Sample Report 1

Global engine stats: 13 [InnoDB] 183 [MyISAM]

With the exclusion of ‘avail’, which is our own monitoring software, this instance is entirely MyISAM-based. This is crucial to know when you are distributing your resources, because many buffers and variables are only used by one engine or the other. (I’ll get into that more further down.)

Temp Space:
max_heap_table_size: 16777216
tmp_table_size (20971520) reduced to max_heap_table_size (16777216)
::created_tmp_disk_tables / created_tmp_tables: 524375 / 945129
:: if large, consider increasing tmp_table_size (current: 20971520)

Temporary and Heap space are used for storage of temporary data and ‘Memory’ storage tables respectively. However, the actual space available for temp can be dictated by max_heap_table_size when it is less than tmp_table_size. If these values are too small, you’ll see a high percentage of temporary tables that are forced onto disk for the additional space, which is very bad for performance, but sometimes unavoidable. In our case, over half of temporary tables are written to disk. This instance could definitely benefit from more heap memory!
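The disk-temp-table check above can be sketched as follows (the counter values are the ones from this report; the ~25% threshold is a rule of thumb of my own, not from MySQL itself):

```python
# Share of temporary tables forced to disk, from the report's counters.
created_tmp_disk_tables = 524375
created_tmp_tables = 945129

disk_ratio = created_tmp_disk_tables / created_tmp_tables
print(f"temp tables forced to disk: {disk_ratio:.1%}")  # roughly 55%

# Raise tmp_table_size AND max_heap_table_size together, since the
# effective in-memory limit is the smaller of the two.
effective_limit = min(20971520, 16777216)
print(f"effective in-memory temp table limit: {effective_limit} bytes")
```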

Open Tables:
::opened_tables: 4696 - current 300 open
:: if large, consider increasing table_open_cache (current: 300)

MySQL requires a file descriptor for every table that is ‘opened’ (accessed) by a connection. To save time, it caches these descriptors between connections. Ideally, table_open_cache should be large enough to accommodate, for each connection, the number of tables it could use in a join (max_connections * N), although various operating systems impose file-descriptor limits much lower than that. In our example, the table cache of 300 is completely used, which means the setting is not too high; but it is lower than the max_connections setting, which means that if all connections were in use, they could not all access tables simultaneously.

The number of opened_tables is relatively small though, so this is probably not a problem.
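That sizing rule can be sketched like this (the tables-per-join figure is a hypothetical estimate you would have to make for your own workload):

```python
# Rough table_open_cache sizing check for the instance above.
max_connections = 500
tables_per_join = 4          # hypothetical estimate for this workload
table_open_cache = 300       # current setting from the report

ideal = max_connections * tables_per_join
if table_open_cache < max_connections:
    print("cache smaller than max_connections: connections may contend for descriptors")
print(f"rule-of-thumb ideal: {ideal}")  # 2000 with these assumptions
```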

Handler Types - suggests scan types
::handler_read_key:		234180339
::handler_read_first: 		7485442
::handler_read_next: 		653975869
::handler_read_previous: 	216841
::handler_read_rnd: 		87718789
::handler_update: 		46201061
::handler_delete: 		12406222
::handler_write: 		2437434595
::handler_rollback: 		634167

COM_… Values:
::com_alter_table: 		536
::com_commit: 			942294
::com_create_table: 		150
::com_delete: 			2972374
::com_drop_table: 		85
::com_flush: 			246
::com_insert: 			9805299
::com_insert_select: 		5
::com_purge: 			240
::com_replace: 			124713
::com_rollback: 		318714
::com_select: 			57998672
::com_truncate: 		1
::com_update: 			13356230

The com_ values and handler types tell you what kind of queries are being run, and how they are processed. I won’t go into too much detail here, but the idea is, if you are doing a lot of sequential reads, and few random reads, it could mean that you have insufficient indexes. Different handlers can also use different buffers, so you may want to size them accordingly. See the mysql documentation for information on each one.

General memory usage:
binlog_cache_size: 		32768
query_cache_size: 		16777216
::holding 6689 queries with 8946176 free memory
::qcache hit ratio: 91 %
::qcache_lowmem_prunes can suggest qcache too small - 7347343
TOTAL: 16.03 M

qcache_hit_ratio = qcache_hits / (qcache_hits + qcache_inserts + qcache_not_cached)
~= qcache_hits / (com_select+qcache_hits)

The main thing to look at here is the query cache hit ratio (91%), which is actually quite good. This means that 91% of applicable queries coming in can get their results directly from the query cache without going to the engine. That said, 6689 queries are only using about half of the available cache.

The last equation listed there gives you an idea of what potential the query cache has. A low hit ratio (often caused by frequent data changes) can mean that query caching is not useful, because any update to a table removes the associated queries from the cache.
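The two formulas can be computed directly from the status counters. A small sketch with made-up counter values (the report only shows the final 91% figure, not the raw counters):

```python
# Exact and approximate query cache hit ratios, per the formulas above.
qcache_hits = 100_000
qcache_inserts = 8_000
qcache_not_cached = 2_000
com_select = 10_000          # selects that missed or bypassed the cache

exact = qcache_hits / (qcache_hits + qcache_inserts + qcache_not_cached)
approx = qcache_hits / (com_select + qcache_hits)
print(f"exact: {exact:.1%}, approx: {approx:.1%}")
```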

MyISAM fixed memory usage:
key_buffer_size: 512 M
::key_buffer usage: 44 % (high-water: 66 %)
::Read ratio to disk instead of buffer: 0.08 % (ideally less than 1%)
::Write ratio to disk instead of buffer: 49.64 % (may be high if lots of deletes/updates)
delay_key_write is ON, lowers key_disk_write ratio.
myisam_sort_buffer_size: 8388608 (for sorting MyISAM indexes)
TOTAL: 520 M

key_buffer_use = 1 - ((key_blocks_unused * key_cache_block_size) / key_buffer_size)
key_buffer_highwater = (key_blocks_used * key_cache_block_size) / key_buffer_size

The administrators of this database may want to consider increasing the key_buffer_size, since this database relies on MyISAM; however, it seems to be performing well at the moment, and the buffer is not full. The write ratio is high, but that is expected in a database with lots of changes.
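The two key_buffer formulas above can be checked with a few lines of arithmetic. The block counts here are hypothetical, chosen to reproduce the report’s 44%/66% figures (key_cache_block_size defaults to 1024 bytes):

```python
# key_buffer usage and high-water mark, per the formulas above.
key_buffer_size = 512 * 1024 * 1024
key_cache_block_size = 1024
key_blocks_unused = 293601   # hypothetical
key_blocks_used = 345000     # hypothetical high-water counter

usage = 1 - (key_blocks_unused * key_cache_block_size) / key_buffer_size
highwater = (key_blocks_used * key_cache_block_size) / key_buffer_size
print(f"usage: {usage:.0%}, high-water: {highwater:.0%}")  # 44%, 66%
```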

InnoDB fixed memory usage:
innodb_buffer_pool_size: 2147483648
innodb_additional_mem_pool_size: 5242880
innodb_log_buffer_size: 1048576
TOTAL: 2054 M

2G is a fair size for most InnoDB databases. Unfortunately, we determined that this instance is using MyISAM, so that 2G of physical memory is almost completely going to waste!

Per connection memory usage:
thread stack: 131072
net_buffer_length: 16384		(alloc. per client, grows to 16776192)
TOTAL: 0.15 M
Per allocation memory usage:
bulk_insert_buffer_size: 8388608	(alloc. for LOAD DATA INFILE, extended INSERT ... VALUES)
sort_buffer_size: 2097144		(allocated per sort, e.g. ORDER BY / GROUP BY)
read_buffer_size: 131072		(allocated for sequential scans)
read_rnd_buffer_size: 262144		(pre-alloc. for sorted reads)
join_buffer_size: 131072		(using full-table-scan, alloc. per join)
TOTAL: 10.49 M

The entries under Per Connection Memory Usage are allocated for each connection to the database. The variable max_connections should be considered when determining these values. The Per allocation memory usage variables are buffers that are allocated to each connection, but only when they are needed. Under normal circumstances, these buffers would never be all used for any one connection.

max_connections: 500
Potential Connection Memory: 75 M
::Current Connections - 2 (2 active)
::Max_Used_Connections - 96
::Thread Cache (0) hit ratio - 0.01 %
Theoretical Potential Consumption: 2649 M
MySQL formula: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections
= 536870912 + (131072 + 2097144) * 500 = 1574.49 M
Physical memory available: 2027.7 M


:: estimated potential memory may be too high for resources available

If you investigate memory problems in the MySQL documentation, it offers an equation for estimating how much memory you will need (shown above). This is a useful equation, but in my experience the value is usually on the low side.

In my calculations I include all of the potential per-connection memory, but I do not include the per-allocation memory. I found this to be a generally useful estimate of how much memory mysql will require. It is up to the administrator to consider how much breathing room is desired for the per-allocation memory buffers.
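The two estimates can be compared directly, using the values from this report (the per-connection total here uses only the thread stack and net_buffer_length shown above; the script likely counts a few more per-thread buffers, which is why the report shows 75 M):

```python
# MySQL documentation formula vs. the per-connection estimate described above.
MB = 1024 * 1024
key_buffer_size = 512 * MB
read_buffer_size = 131072
sort_buffer_size = 2097144
max_connections = 500

mysql_formula = key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections
print(f"MySQL formula: {mysql_formula / MB:.2f} M")   # ~1574.49 M

per_connection = 131072 + 16384          # thread stack + net_buffer_length
potential_connection_mem = per_connection * max_connections
print(f"potential connection memory: {potential_connection_mem / MB:.2f} M")
```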

Still to come: another example and the memory-profile script. Feel free to comment on anything specific you want me to expand on in the next post.


8 Comments

Where’s the download? Am I missing it?


You didn’t miss it.
I’ll be writing a second post within the week which will have the script attached. It still needs some minor changes to make it generally usable, since it was designed for our internal system. I’ll post a comment here when it’s ready.

Sorry for the inconvenience.


Ah, no prob. No inconvenience, thanks for sharing! :)



To make the script work under FreeBSD, the OS check and memory check need to be extended:

— Line 63

} elsif ($^O eq 'freebsd') {
    $OStype = "freebsd";

— Line 475

elsif( $OStype eq 'freebsd' ) {
    # hw.physmem reports bytes; convert to KB to match the other platforms
    my $sysctl_output = `/sbin/sysctl hw.physmem`;
    $sysctl_output =~ m/^[^:]+:\s+(\d+)$/s;
    $physical_mem = int($1/1024);
}


To get the physical memory from Unix:
$physical_mem = `top -d 1 | grep -i 'Memory' | awk '{ print \$2 }' | sed -e 's/M//g'`;
$physical_mem = int($physical_mem);


Thanks for the script. Would you please let me know how I can pass the db & user parameters as I’m getting an error:

DBI connect('database=','',...) failed: Access denied for user 'root'@'localhost' (using password: NO) at line 594


Run the script using the "-h" argument to see the configuration options.

Example: perl -h

