Generic benchmark tools can be very useful for testing performance on your system. These tools normally ship with a set of predefined workloads, but those often fail to match your specific workload in useful ways.
One of the best ways to reproduce your workload is to have a good sense of the application that uses the database and of how it issues requests to the database. If this is not an option, you can instead analyze traffic, find the most common queries, and use those to define the workload.
You can analyze traffic in many ways: with tcpdump, the general query log, the binary log (DML statements only), or the slow query log.
You can then feed the captured queries to pt-query-digest (or the obsolete mk-query-digest) to find the most common and/or heaviest ones.
In the system analyzed here, the workload was mainly write-intensive and involved just four tables:
- tableA received single-row INSERT statements;
- for each INSERT into tableA, on average 200 INSERTs were performed in the other three tables, distributed as follows: 100 into tableB, 95 into tableC, and 5 into tableD (more specifically, each INSERT into tableB triggered an INSERT into either tableC or tableD).
The system also received SELECT statements, but only a small number of them, all simple primary-key lookups.
To simulate the workload, we wrote a simple Perl script that spawns a number of threads performing the DML statements, and other threads performing the SELECT statements.
At regular intervals, the script prints statistics and progress.
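The original Perl script isn't included in the post. As an illustration only, here is a hypothetical Python sketch that replays the same insert ratios; the table names match the text, but everything else (including the use of SQLite instead of MySQL, so the sketch is self-contained) is an assumption:

```python
import random
import sqlite3
import threading
import time

DB_PATH = "workload_sim.db"  # hypothetical path

def setup(path=DB_PATH):
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS tableA (id INTEGER PRIMARY KEY, payload TEXT);
        CREATE TABLE IF NOT EXISTS tableB (id INTEGER PRIMARY KEY, a_id INTEGER);
        CREATE TABLE IF NOT EXISTS tableC (id INTEGER PRIMARY KEY, b_id INTEGER);
        CREATE TABLE IF NOT EXISTS tableD (id INTEGER PRIMARY KEY, b_id INTEGER);
    """)
    conn.close()

def write_worker(iterations, path=DB_PATH):
    conn = sqlite3.connect(path, timeout=60)
    cur = conn.cursor()
    for _ in range(iterations):
        cur.execute("INSERT INTO tableA (payload) VALUES ('x')")
        a_id = cur.lastrowid
        # ~100 rows into tableB per tableA row; each tableB row triggers
        # one insert into either tableC (95%) or tableD (5%).
        for _ in range(100):
            cur.execute("INSERT INTO tableB (a_id) VALUES (?)", (a_id,))
            b_id = cur.lastrowid
            child = "tableC" if random.random() < 0.95 else "tableD"
            cur.execute("INSERT INTO %s (b_id) VALUES (?)" % child, (b_id,))
        conn.commit()
    conn.close()

def read_worker(iterations, path=DB_PATH):
    conn = sqlite3.connect(path, timeout=60)
    cur = conn.cursor()
    for _ in range(iterations):
        # Simple primary-key lookup, as in the original workload.
        cur.execute("SELECT payload FROM tableA WHERE id = ?",
                    (random.randint(1, 1000),))
        cur.fetchone()
    conn.close()

if __name__ == "__main__":
    setup()
    start = time.time()
    writers = [threading.Thread(target=write_worker, args=(3,)) for _ in range(2)]
    readers = [threading.Thread(target=read_worker, args=(50,)) for _ in range(2)]
    for t in writers + readers:
        t.start()
    for t in writers + readers:
        t.join()
    print("done in %.1fs" % (time.time() - start))
```

The real test pointed many concurrent connections at a remote mysqld; the thread counts and iteration counts above are placeholders.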
The benchmark test was executed in a setup with two hosts: one running the client, and another running the database servers.
The RDBMSs tested were MariaDB 5.2.3 (with TokuDB 5.2.7 and InnoDB) and Percona 5.5.20.
Additionally, Percona 5.5.20 was tested as multiple instances running on the same host.
The goal of the first benchmark test was to compare TokuDB against InnoDB for this specific workload.
We executed MariaDB with TokuDB with the following (simple) config file:
We found the performance of InnoDB significantly better than that of TokuDB in this instance, though this test, where the dataset fits almost entirely in memory, does not show the real power of TokuDB, which excels at insertion rate at scale. Because these tables have very few indexes, TokuDB's Fractal Tree indexes weren't very effective. Furthermore, the benchmarks were running on FusionIO, so InnoDB performance didn't degrade as much as it would on spinning disks. We excluded TokuDB from the subsequent benchmark tests because they are all cases that don't play to TokuDB's strengths.
We temporarily abandoned MariaDB, and tested Percona 5.5.20 with the following config file:
We tried various innodb_flush_method settings, and the graphs show that O_DIRECT performs slightly better than the default fsync(), even if the benchmark shows an odd warm-up phase. We also tried ALL_O_DIRECT, which performed badly.
Additionally, we tried innodb_log_block_size=4096 instead of the default 512, but nothing changed: the insert rate was unaffected.
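For reference, the two knobs just discussed are set in my.cnf; an illustrative fragment (section layout assumed, values as described in the text; innodb_log_block_size is a Percona Server/XtraDB variable):

```ini
[mysqld]
# O_DIRECT performed slightly better than the default fsync() here;
# ALL_O_DIRECT performed badly.
innodb_flush_method = O_DIRECT
# Tried as an alternative to the default 512; no effect on insert rate.
innodb_log_block_size = 4096
```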
One of the goals of this benchmark was to test if running multiple mysqld instances on the same host performs better than a single mysqld instance.
On this specific hardware, the answer seems to be yes. With 8 mysqld instances configured from the same config file listed below (but with different paths and ports), throughput is significantly higher. Note that innodb_buffer_pool_size was set to 256M per instance to try to stress the IO subsystem.
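The post doesn't show how the eight instances were managed; one conventional approach is mysqld_multi, which reads numbered sections from a single config file. A sketch of the first two sections, with hypothetical paths and ports:

```ini
[mysqld1]
port     = 3307
socket   = /var/run/mysqld/mysqld1.sock
datadir  = /data/mysql1
pid-file = /data/mysql1/mysqld1.pid
innodb_buffer_pool_size = 256M

[mysqld2]
port     = 3308
socket   = /var/run/mysqld/mysqld2.sock
datadir  = /data/mysql2
pid-file = /data/mysql2/mysqld2.pid
innodb_buffer_pool_size = 256M
```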
All the above tests were executed using 36 client connections for writes and 36 client connections for reads.
We then ran a new cycle of tests, but instead of using 36 x 2 connections, we used 80 x 2 (80 for writes and 80 for reads).
With 80 connections, throughput was higher than with 36, but at nearly regular intervals performance dropped. This seems independent of the size of the buffer pool.
It is interesting to note that with only one mysqld instance, FusionIO was performing at 4.7k – 4.8k IOPS, while with 8 mysqld instances FusionIO was performing at 27k – 29k IOPS. As expected, with a small buffer pool performance tends to slowly degrade when the data doesn’t fit in memory.
We tried various values of innodb_write_io_threads, but this didn't make any difference, since most writes went to the redo log rather than to the tablespaces.
To better analyze the throughput, we reduced the sample time to 10 seconds and reran the test:
It is clear that throughput drops from time to time, and for a nearly constant amount of time. While the test was running, we tried to monitor the mysqld instances, but there was no clear indication of why they were stalling: the redo log was nowhere close to full, and InnoDB wasn't performing aggressive flushing. The amount of data read from disk was quite low, but the amount written was spiking; yet the writes weren't coming from InnoDB.
The reason for the stalls became apparent when we analyzed the content of /proc/meminfo: the Linux Virtual Memory (VM) subsystem was performing dirty pages flushing!
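The counter in question can be read straight from /proc/meminfo. A minimal helper (hypothetical, not part of the original tooling) that extracts the Dirty value:

```python
def parse_dirty_kb(meminfo_text):
    """Return the 'Dirty:' counter from /proc/meminfo contents, in kB."""
    for line in meminfo_text.splitlines():
        if line.startswith("Dirty:"):
            # Format: "Dirty:          2048 kB"
            return int(line.split()[1])
    raise ValueError("no Dirty: line found")
```

On a live Linux box you would poll it with `parse_dirty_kb(open("/proc/meminfo").read())` and watch the value climb until the VM subsystem kicks in.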
We changed dirty_background_ratio from 10 (the default) to 1, and reran the test:
sysctl -w vm.dirty_background_ratio=1
Throughput is now much more stable, although performance has dropped by 2.8%. It is interesting to note that throughput drops at nearly the same times regardless of the value of dirty_background_ratio.
A quick analysis of the MySQL source code shows that binary logs are synced to disk when they are closed; therefore the drops in throughput may be caused by the rotation (and resulting flush) of the binary logs.
We then raised vm.dirty_background_ratio up to 10 (the default value) and lowered max_binlog_size from 1G to 64M.
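The my.cnf side of this change is a single line; an illustrative fragment:

```ini
# Rotate (and therefore sync) the binary log every 64 MB instead of 1 GB,
# so each rotation flushes a smaller burst of data
max_binlog_size = 64M
```

The writeback ratio was restored to its default with `sysctl -w vm.dirty_background_ratio=10`.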
Throughput doesn't drop as drastically as in the two previous tests, but oscillates at more regular intervals.
At the end of this test, performance with max_binlog_size=64M is ~4% lower than the initial test with max_binlog_size=1G (in both cases, vm.dirty_background_ratio=10).
The last setup of 8 instances with a 256M buffer pool each and max_binlog_size=64M was then compared with a new setup: 4 instances with a 512M buffer pool each (2GB total in both cases) and max_binlog_size=64M:
An interesting outcome of this last test is that total throughput rose by around 4% (recovering what was originally lost by using 64M binlogs) and that the total number of IOPS dropped to ~16k, leaving headroom for more IO in case of a different workload.
We then ran a new test using only 2 mysqld instances. It confirms what was already easy to guess from a similar test with only one mysqld instance: a smaller number of mysqld instances can't fully utilize the IO capacity and therefore delivers lower throughput.
Conclusions (most of them as expected) for this specific workload on this specific hardware:
- O_DIRECT performs slightly better than the default fsync for innodb_flush_method.
- A higher number of clients provides more throughput than a smaller one; not enough tests were performed to find the optimal number of clients.
- Throughput drops when the data doesn't fit in the buffer pool.
- A higher number of mysqld instances running on the same server is able to better utilize the IOPS that FusionIO can provide (conversely, it would likely be a very bad idea to run multiple mysqld instances on the same spinning disk or array).
- The sync of binary logs during binlog rotation can stall the system; lowering dirty_background_ratio or max_binlog_size stabilizes throughput.