Exploring Backup/Restore Comparison Using MySQL Shell Utility vs. Percona XtraBackup Utility vs. MySQLDump

In this blog post, we compare backup and restore times using the MySQL Shell utility, Percona XtraBackup, and MySQLDump. Each tool uses its default compression method.

Here’s the hardware configuration on which I performed the test cases:

  • OS: Ubuntu 18.04 (bionic)
  • Cores: 2
  • Processor: 11th Gen Intel(R) Core(TM) i3-1115G4 @ 3.00 GHz
  • Memory: 4 GB

 

Case 1:

I inserted 8 million records (4 tables of 2 million rows each) using sysbench.

root@ubuntu:/home/ghoshbabu# sysbench --db-driver=mysql --mysql-user=root --mysql-password=**** --mysql-socket=/var/run/mysqld/mysqld.sock --mysql-db=sysbench --range_size=1000 --table_size=2000000 --tables=4 --threads=4 --events=0 --time=60 --rand-type=uniform /usr/share/sysbench/oltp_write_only.lua prepare

Table size after inserting:

mysql> select Table_Name, round((data_length)/1024/1024,2) as Data_Size_MB, round((index_length)/1024/1024,2) as Index_Size_MB
from information_schema.tables where table_schema in ('sysbench') order by round((data_length)/1024/1024,2) desc;
+------------+--------------+---------------+
| TABLE_NAME | Data_Size_MB | Index_Size_MB |
+------------+--------------+---------------+
| sbtest2    |          420 |          0.00 |
| sbtest1    |          390 |          0.00 |
| sbtest3    |          390 |          0.00 |
| sbtest4    |          390 |          0.00 |
+------------+--------------+---------------+

Row count after inserting data:

mysql> select count(1) from sysbench.sbtest1;
+----------+
| count(1) |
+----------+
|  2000000 |
+----------+
1 row in set (3.21 sec)

mysql> select count(1) from sysbench.sbtest2;
+----------+
| count(1) |
+----------+
|  2000000 |
+----------+
1 row in set (3.14 sec)

mysql> select count(1) from sysbench.sbtest3;
+----------+
| count(1) |
+----------+
|  2000000 |
+----------+
1 row in set (3.18 sec)

mysql> select count(1) from sysbench.sbtest4;
+----------+
| count(1) |
+----------+
|  2000000 |
+----------+
1 row in set (3.27 sec)

 

1. MySQL Shell Dump Utility:

  • It's a logical dump and load utility, introduced in MySQL 8.0.21
  • One can use multiple threads to export/import the data in parallel
  • It also supports the following compression algorithms (selectable via the compression option; see the sketch after this list):
    - Zstandard (zstd - default)
    - Gzip
    - None
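
Choosing an algorithm is just a matter of passing the compression option (along with threads) to util.dumpInstance(). A minimal sketch from a regular shell, assuming a root@localhost connection; the target directory is illustrative, not from the test below:

mysqlsh --js root@localhost -e 'util.dumpInstance("/backups/full_dump", {threads: 4, compression: "gzip"})'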

 

Case 1:

Here are the time taken and backup size when using the MySQL Shell utility to back up 8 million records with the default zstd compression:

  • Total time: 12 sec
  • Compressed data size: 708 MB
MySQL  127.0.0.1:33060+ ssl  JS > util.dumpInstance("/home/ghoshbabu/test_backup", {compatibility: ["strip_restricted_grants"],threads:4})
Acquiring global read lock
Global read lock acquired
Initializing - done
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing users DDL
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
103% (8.00M rows / ~7.70M rows), 628.38K rows/s, 123.45 MB/s uncompressed, 56.02 MB/s compressed
Dump duration: 00:00:12s
Total duration: 00:00:12s
Schemas dumped: 2
Tables dumped: 4
Uncompressed data size: 1.56 GB
Compressed data size: 708 MB
Compression ratio: 2.2
Rows written: 8000000
Bytes written: 708 MB
Average uncompressed throughput: 121.59 MB/s
Average compressed throughput: 55.19 MB/s

Load the dump using MySQL Shell's util.loadDump():

Here is the time taken to restore 8 million records using the MySQL Shell utility:

  • Total time: 3 min 8 sec
MySQL  127.0.0.1:33060+ ssl  JS > util.loadDump("/home/ghoshbabu/test_backup",{skipBinlog: true,deferTableIndexes: 'all',analyzeTables: 'on',progressFile :"/home/ghoshbabu/test_backup.json" })
Loading DDL and Data from '/home/ghoshbabu/test_backup' using 4 threads.
Opening dump...
Target is MySQL 8.0.27. Dump was produced from MySQL 8.0.27
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
2 thds loading - 2 thds indexing \ 100% (1.56 GB / 1.56 GB), 13.20 MB/s, 4 / 4 tables done
Executing common postamble SQL
Recreating indexes - done
Analyzing tables - done
28 chunks (8.00M rows, 1.56 GB) for 4 tables in 2 schemas were loaded in 3 min 8 sec (avg throughput 14.85 MB/s)
0 warnings were reported during the load.
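
For reference, the load options used above mean roughly the following (shell-style comments, not commands):

# skipBinlog: true        - do not write the load into the binary log
# deferTableIndexes: all  - create secondary indexes only after the table data is loaded
# analyzeTables: on       - run ANALYZE TABLE on each table once it has been loaded
# progressFile            - file where load progress is tracked, so an interrupted load can resume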

 

Case 2:

I inserted 32 million records (4 tables of 8 million rows each) using sysbench.

sysbench --db-driver=mysql --mysql-user=root --mysql-password=root --mysql-socket=/var/run/mysqld/mysqld.sock --mysql-db=sysbench --range_size=1000 --table_size=8000000 --tables=4 --threads=4 --events=0 --time=60 --rand-type=uniform /usr/share/sysbench/oltp_write_only.lua prepare

Table size after inserting:

mysql> select Table_Name, round((data_length)/1024/1024/1024,2) as Data_Size_GB, round((index_length)/1024/1024/1024,2) as Index_Size_GB
from information_schema.tables where table_schema in ('sysbench') order by round((data_length)/1024/1024/1024,2) desc;
+------------+--------------+---------------+
| TABLE_NAME | Data_Size_GB | Index_Size_GB |
+------------+--------------+---------------+
| sbtest1    |         1.64 |          0.00 |
| sbtest3    |         1.62 |          0.00 |
| sbtest2    |         1.60 |          0.00 |
| sbtest4    |         1.53 |          0.00 |
+------------+--------------+---------------+

Row count after inserting data:

mysql> select count(1) from sysbench.sbtest1;
+----------+
| count(1) |
+----------+
|  8000000 |
+----------+
1 row in set (13.01 sec)

mysql> select count(1) from sysbench.sbtest2;
+----------+
| count(1) |
+----------+
|  8000000 |
+----------+
1 row in set (12.95 sec)

mysql> select count(1) from sysbench.sbtest3;
+----------+
| count(1) |
+----------+
|  8000000 |
+----------+
1 row in set (12.64 sec)

mysql> select count(1) from sysbench.sbtest4;
+----------+
| count(1) |
+----------+
|  8000000 |
+----------+
1 row in set (12.37 sec)

Here are the time taken and backup size when using the MySQL Shell utility to back up 32 million records with the default zstd compression:

  • Total time: 1 min 5 sec
  • Compressed data size: 2.83 GB
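
The console output for this run is omitted; the dump was presumably produced with the same util.dumpInstance call as in Case 1, which from a regular shell would look roughly like this (connection details and target directory are illustrative):

mysqlsh --js root@localhost -e 'util.dumpInstance("/home/ghoshbabu/test_backup", {compatibility: ["strip_restricted_grants"], threads: 4})'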


Load the dump using MySQL Shell's util.loadDump():


Here is the time taken to restore 32 million records using the MySQL Shell utility:

  • Total time: 10 min 16 sec
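
As in Case 1, the load was presumably run with util.loadDump() and the same options, roughly:

mysqlsh --js root@localhost -e 'util.loadDump("/home/ghoshbabu/test_backup", {skipBinlog: true, deferTableIndexes: "all", analyzeTables: "on", progressFile: "/home/ghoshbabu/test_backup.json"})'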

 

2. Percona XtraBackup Utility

  • Percona XtraBackup is an open-source hot (physical) backup utility for MySQL
  • It supports parallelism during backup and restore operations
  • It also supports compressed backups (using the qpress utility; see the install note after this list)
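
Note that the qpress binary must be available for the --decompress step used during restore. On Ubuntu it can typically be installed from the Percona repository (assuming percona-release is already configured):

sudo apt-get install qpress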

 

Case 1:

Here are the time taken and backup size when using XtraBackup to back up 8 million records with the default qpress compression:

  • Total time: 12 sec
  • Compressed data size: 1.2 GB
time /usr/bin/xtrabackup --rsync  --user="root" --password=${pw} --parallel=4 --compress --compress-threads=4 --backup --target-dir=/home/ghoshbabu/backups
xtrabackup: Transaction log of lsn (2135471688) to (2135471708) was copied.
230321 11:36:26 completed OK!
5.02user 2.56system 0:12.00elapsed 65%CPU (0avgtext+0avgdata 249564maxresident)k
4004952inputs+2417672outputs (0major+82264minor)pagefaults 0swaps


Compressed data size 
ghoshbabu@ubuntu:~$ sudo du -sh /home/ghoshbabu/backups/
1.2G     /home/ghoshbabu/backups/

Here is the total time taken to restore 8 million records using the XtraBackup utility:

  • Total time: 1 min 54 sec
The steps are:

1. Decompress the backup (~45 sec):
root@ubuntu:/home/ghoshbabu# time /usr/bin/xtrabackup --decompress --target-dir=/home/ghoshbabu/backups

2. Remove the leftover .qp files (~30 sec):
root@ubuntu:/home/ghoshbabu/backups# for i in $(find . -name "*.qp"); do rm -f $i; done

3. Apply the logs and prepare the backup (~30 sec):
time /usr/bin/xtrabackup --prepare --target-dir=/home/ghoshbabu/backups

4. Change ownership to the mysql user (see the copy-back note after these steps):
root@ubuntu:/home/ghoshbabu# chown -R mysql:mysql mysql

5. Start the MySQL service:
root@ubuntu:/var/lib# service mysql start
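
One detail the steps above gloss over: before changing ownership and starting MySQL (steps 4 and 5), the prepared backup has to end up in the MySQL data directory. A minimal sketch of that step, assuming the default datadir /var/lib/mysql and a stopped MySQL instance:

# move the old data directory aside, then copy the prepared backup into place
mv /var/lib/mysql /var/lib/mysql_old
xtrabackup --copy-back --target-dir=/home/ghoshbabu/backups
chown -R mysql:mysql /var/lib/mysql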

 

Case 2:

Here are the time taken and backup size when using XtraBackup to back up 32 million records with the default qpress compression:

  • Total time: 29 sec
  • Compressed data size: 4.6 GB
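
The backup command itself is presumably unchanged from Case 1, only run against the larger data set:

time /usr/bin/xtrabackup --rsync --user="root" --password=${pw} --parallel=4 --compress --compress-threads=4 --backup --target-dir=/home/ghoshbabu/backups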

Here is the time taken to restore 32 million records using the XtraBackup utility:

  • Total time: 4 min 44 sec

 

3. MySQLDump/Restore Utility

  • MySQLDump is a command-line utility that generates logical backups: files of SQL statements that can reproduce a database in its original state
  • Backup and restore are single-threaded operations; it does not support parallelism

 

Case 1:

Here are the time taken and backup size when using the MySQLDump utility to back up 8 million records with gzip compression:

  • Total time: 1 min 23 sec
  • Compressed data size: 738 MB
root@ubuntu:/home/ghoshbabu/test_backup# time /usr/bin/mysqldump --defaults-file=/home/ghoshbabu/.my.cnf  --flush-privileges  --single-transaction --triggers --routines --events  --set-gtid-purged=OFF  --all-databases  |gzip  -c > mysqldump.sql.gz

real    1m23.516s
user   1m28.574s
sys     0m3.222s

Compressed data size 
root@ubuntu:/home/ghoshbabu# du -sh /home/ghoshbabu/test_backup/mysqldump.sql.gz
738M    /home/ghoshbabu/test_backup/mysqldump.sql.gz

Here is the time taken to restore 8 million records by piping the dump into the mysql client:

  • Total time: 24 min 22 sec
root@ubuntu:/home/ghoshbabu/test_backup# time gunzip < mysqldump.sql.gz | mysql -uroot -p

real    24m22.377s
user    0m23.366s
sys     0m2.058s

 

Case 2:

Here are the time taken and backup size when using the MySQLDump utility to back up 32 million records with gzip compression:

  • Total time: 6 min 47 sec
  • Compressed data size: 2.9 GB
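
Again, the command is presumably the same as in Case 1, just against the larger data set:

time /usr/bin/mysqldump --defaults-file=/home/ghoshbabu/.my.cnf --flush-privileges --single-transaction --triggers --routines --events --set-gtid-purged=OFF --all-databases | gzip -c > mysqldump.sql.gz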


Here is the time taken to restore 32 million records by piping the dump into the mysql client:

  • Total time: 3 h 59 sec

[Figure: backup/restore time comparison graph]

[Figure: compressed backup size comparison graph]
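
For quick reference, the numbers from the cases above:

+---------------------+----------------+-----------------+-------------+-----------------+------------------+-------------+
| Tool (compression)  | 8M rows backup | 8M rows restore | backup size | 32M rows backup | 32M rows restore | backup size |
+---------------------+----------------+-----------------+-------------+-----------------+------------------+-------------+
| MySQL Shell (zstd)  | 12 sec         | 3 min 8 sec     | 708 MB      | 1 min 5 sec     | 10 min 16 sec    | 2.83 GB     |
| XtraBackup (qpress) | 12 sec         | 1 min 54 sec    | 1.2 GB      | 29 sec          | 4 min 44 sec     | 4.6 GB      |
| MySQLDump (gzip)    | 1 min 23 sec   | 24 min 22 sec   | 738 MB      | 6 min 47 sec    | 3 h 59 sec       | 2.9 GB      |
+---------------------+----------------+-----------------+-------------+-----------------+------------------+-------------+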

Conclusion

XtraBackup is the fastest option, especially on the larger data set, as we have seen from case to case. The MySQL Shell utility achieves a much better compression ratio, and its backup and restore times are still reasonably close to XtraBackup's.

MySQLDump backup and restore are single-threaded operations and take far longer, so choose the tool that best fits your environment.
