Here’s the hardware configuration on which I performed test cases:
I have inserted 8 million records using sysbench.
root@ubuntu:/home/ghoshbabu# sysbench --db-driver=mysql --mysql-user=root --mysql-password=**** \
  --mysql-socket=/var/run/mysqld/mysqld.sock --mysql-db=sysbench --range_size=1000 \
  --table_size=2000000 --tables=4 --threads=4 --events=0 --time=60 \
  --rand-type=uniform /usr/share/sysbench/oltp_write_only.lua prepare
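The prepare step above creates 4 tables of 2,000,000 rows each, so the expected total is 8 million rows; a trivial sanity check using the same parameters:

```shell
# Sysbench parameters from the prepare command above
tables=4
table_size=2000000

# Total rows sysbench should have inserted
total_rows=$(( tables * table_size ))
echo "$total_rows"   # prints 8000000
```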
Table size after inserting:
mysql> select Table_Name, round((data_length)/1024/1024,2) as Data_Size_MB,
       round((index_length)/1024/1024,2) as Index_Size_MB
       from information_schema.tables where table_schema in ('sysbench')
       order by round((data_length)/1024/1024,2) desc;
+------------+--------------+---------------+
| TABLE_NAME | Data_Size_MB | Index_Size_MB |
+------------+--------------+---------------+
| sbtest2    |          420 |          0.00 |
| sbtest1    |          390 |          0.00 |
| sbtest3    |          390 |          0.00 |
| sbtest4    |          390 |          0.00 |
+------------+--------------+---------------+
Row count after inserting data:
mysql> select count(1) from sysbench.sbtest1;
+----------+
| count(1) |
+----------+
|  2000000 |
+----------+
1 row in set (3.21 sec)

mysql> select count(1) from sysbench.sbtest2;
+----------+
| count(1) |
+----------+
|  2000000 |
+----------+
1 row in set (3.14 sec)

mysql> select count(1) from sysbench.sbtest3;
+----------+
| count(1) |
+----------+
|  2000000 |
+----------+
1 row in set (3.18 sec)

mysql> select count(1) from sysbench.sbtest4;
+----------+
| count(1) |
+----------+
|  2000000 |
+----------+
1 row in set (3.27 sec)
Here are the time taken and backup size when using the MySQL Shell utility to back up 8 million records with the default zstd compression:
MySQL 127.0.0.1:33060+ ssl JS > util.dumpInstance("/home/ghoshbabu/test_backup", {compatibility: ["strip_restricted_grants"],threads:4})
Acquiring global read lock
Global read lock acquired
Initializing - done
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing users DDL
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
103% (8.00M rows / ~7.70M rows), 628.38K rows/s, 123.45 MB/s uncompressed, 56.02 MB/s compressed
Dump duration: 00:00:12s
Total duration: 00:00:12s
Schemas dumped: 2
Tables dumped: 4
Uncompressed data size: 1.56 GB
Compressed data size: 708 MB
Compression ratio: 2.2
Rows written: 8000000
Bytes written: 708 MB
Average uncompressed throughput: 121.59 MB/s
Average compressed throughput: 55.19 MB/s
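The reported ratio of 2.2 follows directly from the summary figures above (1.56 GB uncompressed, 708 MB compressed); a quick cross-check:

```shell
# Sizes taken from the dumpInstance summary above, in MB
uncompressed_mb=1560
compressed_mb=708

# Compression ratio = uncompressed / compressed
ratio=$(awk -v u="$uncompressed_mb" -v c="$compressed_mb" 'BEGIN { printf "%.1f", u/c }')
echo "$ratio"   # prints 2.2
```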
Load dump using MySQL Shell
util.loadDump()
Here is the time taken to restore 8 million records using MySQL Shell Utility:
MySQL 127.0.0.1:33060+ ssl JS > util.loadDump("/home/ghoshbabu/test_backup",{skipBinlog: true,deferTableIndexes: 'all',analyzeTables: 'on',progressFile :"/home/ghoshbabu/test_backup.json" })
Loading DDL and Data from '/home/ghoshbabu/test_backup' using 4 threads.
Opening dump...
Target is MySQL 8.0.27. Dump was produced from MySQL 8.0.27
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
2 thds loading - 2 thds indexing \ 100% (1.56 GB / 1.56 GB), 13.20 MB/s, 4 / 4 tables done
Executing common postamble SQL
Recreating indexes - done
Analyzing tables - done
28 chunks (8.00M rows, 1.56 GB) for 4 tables in 2 schemas were loaded in 3 min 8 sec (avg throughput 14.85 MB/s)
0 warnings were reported during the load.
I have inserted 32 million records using sysbench.
sysbench --db-driver=mysql --mysql-user=root --mysql-password=root \
  --mysql-socket=/var/run/mysqld/mysqld.sock --mysql-db=sysbench --range_size=1000 \
  --table_size=8000000 --tables=4 --threads=4 --events=0 --time=60 \
  --rand-type=uniform /usr/share/sysbench/oltp_write_only.lua prepare
Table size after inserting:
mysql> select Table_Name, round((data_length)/1024/1024/1024,2) as Data_Size_GB,
       round((index_length)/1024/1024/1024,2) as Index_Size_GB
       from information_schema.tables where table_schema in ('sysbench')
       order by round((data_length)/1024/1024/1024,2) desc;
+------------+--------------+---------------+
| TABLE_NAME | Data_Size_GB | Index_Size_GB |
+------------+--------------+---------------+
| sbtest1    |         1.64 |          0.00 |
| sbtest3    |         1.62 |          0.00 |
| sbtest2    |         1.60 |          0.00 |
| sbtest4    |         1.53 |          0.00 |
+------------+--------------+---------------+
Row count after inserting data:
mysql> select count(1) from sysbench.sbtest1;
+----------+
| count(1) |
+----------+
|  8000000 |
+----------+
1 row in set (13.01 sec)

mysql> select count(1) from sysbench.sbtest2;
+----------+
| count(1) |
+----------+
|  8000000 |
+----------+
1 row in set (12.95 sec)

mysql> select count(1) from sysbench.sbtest3;
+----------+
| count(1) |
+----------+
|  8000000 |
+----------+
1 row in set (12.64 sec)

mysql> select count(1) from sysbench.sbtest4;
+----------+
| count(1) |
+----------+
|  8000000 |
+----------+
1 row in set (12.37 sec)
Here are the time taken and backup size when using the MySQL Shell utility to back up 32 million records with the default zstd compression:
Load dump using MySQL Shell
util.loadDump()
Here is the time taken to restore 32 million records using the MySQL Shell utility:
Here are the time taken and backup size when using the XtraBackup tool to back up 8 million records with the default qpress compression:
time /usr/bin/xtrabackup --rsync --user="root" --password=${pw} --parallel=4 \
  --compress --compress-threads=4 --backup --target-dir=/home/ghoshbabu/backups
xtrabackup: Transaction log of lsn (2135471688) to (2135471708) was copied.
230321 11:36:26 completed OK!
5.02user 2.56system 0:12.00elapsed 65%CPU (0avgtext+0avgdata 249564maxresident)k
4004952inputs+2417672outputs (0major+82264minor)pagefaults 0swaps
Compressed data size
ghoshbabu@ubuntu:~$ sudo du -sh /home/ghoshbabu/backups/
1.2G /home/ghoshbabu/backups/
Here is the total time taken to restore 8 million records using the XtraBackup utility. The steps include:
1. Decompress the backup (~45 sec):
root@ubuntu:/home/ghoshbabu# time /usr/bin/xtrabackup --decompress --target-dir=/home/ghoshbabu/backups
2. Remove the leftover .qp files (~30 sec):
root@ubuntu:/home/ghoshbabu/backups# for i in $(find -name "*.qp"); do rm -f $i; done
3. Apply logs / prepare the data directory (~30 sec):
time /usr/bin/xtrabackup --prepare --target-dir=/home/ghoshbabu/backups
4. Change ownership to the mysql user:
root@ubuntu:/home/ghoshbabu# chown -R mysql:mysql mysql
5. Start the mysql service:
root@ubuntu:/var/lib# service mysql start
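The cleanup loop in step 2 breaks on file names containing spaces; find's built-in -delete is a safer equivalent. A small sketch (the function name is mine), demonstrated against a scratch directory rather than a real backup; note that newer XtraBackup releases can skip this step entirely with --decompress --remove-original, if your version supports it:

```shell
# Safer equivalent of step 2: remove leftover .qp files after --decompress
remove_qp_files() {
  find "$1" -type f -name '*.qp' -delete
}

# Demonstration on a throwaway directory (not a real backup)
dir=$(mktemp -d)
touch "$dir/ibdata1.qp" "$dir/ibdata1"
remove_qp_files "$dir"
ls "$dir"   # only ibdata1 remains
```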
Here are the time taken and backup size when using the XtraBackup tool to back up 32 million records with the default qpress compression:
Time taken to restore 32 million records using the XtraBackup utility:
Here are the time taken and backup size when using the mysqldump utility to back up 8 million records with gzip compression:
root@ubuntu:/home/ghoshbabu/test_backup# time /usr/bin/mysqldump --defaults-file=/home/ghoshbabu/.my.cnf \
  --flush-privileges --single-transaction --triggers --routines --events \
  --set-gtid-purged=OFF --all-databases | gzip -c > mysqldump.sql.gz
real 1m23.516s
user 1m28.574s
sys 0m3.222s
Compressed data size
root@ubuntu:/home/ghoshbabu# du -sh /home/ghoshbabu/test_backup/mysqldump.sql.gz
738M /home/ghoshbabu/test_backup/mysqldump.sql.gz
Time taken to restore 8 million records using the mysql client:
root@ubuntu:/home/ghoshbabu/test_backup# time gunzip < mysqldump.sql.gz | mysql -uroot -p
real 24m22.377s
user 0m23.366s
sys 0m2.058s
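Comparing this with the MySQL Shell restore of the same 8 million rows (3 min 8 sec, shown earlier) gives a rough sense of the single-threaded penalty; arithmetic only, using the timings reported above:

```shell
# Restore times for the 8M-row dataset, taken from the runs above (seconds)
shell_restore=$(( 3*60 + 8 ))        # MySQL Shell util.loadDump: 3 min 8 sec
mysqldump_restore=$(( 24*60 + 22 ))  # mysql < dump: 24 min 22 sec

# How many times slower the single-threaded restore was
speedup=$(awk -v a="$mysqldump_restore" -v b="$shell_restore" 'BEGIN { printf "%.1fx", a/b }')
echo "$speedup"   # prints 7.8x
```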
Here are the time taken and backup size when using the mysqldump utility to back up 32 million records with gzip compression:
Time taken to restore 32 million records using the mysql client:
XtraBackup is the fastest for larger data sets, as each test case shows. The MySQL Shell utility achieves the best compression ratio and remains quite competitive with XtraBackup for both backup and restore speed.
mysqldump backup and restore are single-threaded operations and therefore take far longer, so choose the tool that best fits your environment.