Installation
The easiest way is to install the Percona repo and get the latest package from there, e.g.:
yum -y install https://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
yum -y install percona-xtrabackup
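You can verify the installation by printing the version:
xtrabackup --version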
Backup user
The following creates a user with the minimum required privileges:
CREATE USER 'bkpuser'@'localhost' IDENTIFIED BY 's3cret';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.*
TO 'bkpuser'@'localhost';
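With that in place, a basic local backup would look something like this (assuming /backups as the destination; innobackupex creates a timestamped subdirectory there by default):
innobackupex --user=bkpuser --password=s3cret /backups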
Streaming backups
Keep in mind the nc command sends the output through the network without any encryption. Also, it is usually a good idea to compress the stream to save space.
Option 1: tar
To have nc listen on a designated port (e.g. 9876) on the destination host, run:
nc -l 9876 | tar -ivxf -
On the origin:
innobackupex --stream=tar /tmp | nc desthost 9876
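As noted earlier, it is usually worth compressing the stream; a gzip variant of the same tar method could look like this. On the destination:
nc -l 9876 | gunzip -c | tar -ivxf -
On the origin:
innobackupex --stream=tar /tmp | gzip -c | nc desthost 9876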
Option 2: xbstream
Using xbstream is similar; in this case I am adding parallel gzip (aka pigz) to the mix. Note the destination has to decompress the stream before extracting it:
nc -l 9876 | unpigz -c | xbstream -x -C /var/lib/mysql/
innobackupex --stream=xbstream /tmp | pigz -c --fast | nc dest 9876
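Since nc offers no encryption, if the stream has to cross an untrusted network you can swap nc for ssh. A sketch, assuming key-based ssh access for user on desthost:
innobackupex --stream=xbstream /tmp | pigz -c --fast | ssh user@desthost 'unpigz -c | xbstream -x -C /var/lib/mysql/'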
Parallelising
With the --parallel option, xtrabackup will read files in parallel, which is much faster. This option does not work with the tar method, so you will need to use xbstream:
innobackupex --parallel=16 \
--stream=xbstream \
/tmp > backup.xbs
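The resulting file can later be unpacked with xbstream, e.g.:
xbstream -x < backup.xbs -C /var/lib/mysql/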
Compression & encryption
Xtrabackup has built-in options to compress (using the quicklz algorithm) and encrypt your backups. Compression needs to happen before encryption to get a good ratio, so pigz is not useful here. First generate a key to be used for encryption, and store it somewhere safe:
echo -n $(openssl rand -base64 24) > /data/key
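Since anyone who can read the key can decrypt the backups, it is a good idea to restrict its permissions:
chmod 600 /data/key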
In this case I am using both compression and encryption, but you can choose to drop either one:
innobackupex --compress \
--compress-threads=8 \
--encrypt=AES256 \
--encrypt-key-file=/data/key \
--encrypt-threads=8 \
--parallel=8 \
--stream=xbstream \
/tmp > backup.xbs.qp
Decompression & decryption
Oddly, the qpress package is needed for decompression but not for compression. You can get it from the Percona repo. Decompression needs to happen before the apply-log phase, and is done as follows:
xbstream -x < /backups/backup.xbs.qp -C /var/lib/mysql
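If qpress is not installed yet, it can be pulled from the same Percona repo we set up earlier:
yum -y install qpress
Then run decompression and decryption: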
innobackupex \
--decompress \
--decrypt=AES256 \
--encrypt-key-file=/data/key \
--parallel=8 \
--use-memory=8G \
/var/lib/mysql
After that is done you can remove any leftover xbcrypt and qpress files, and apply logs normally:
find /var/lib/mysql \( -name "*.xbcrypt" -or -name "*.qp" \) -delete
innobackupex --apply-log \
--use-memory=8G \
/var/lib/mysql
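Before starting MySQL on the restored datadir, make sure the files are owned by the mysql user. A typical final step, assuming a systemd-managed service named mysqld, would be:
chown -R mysql:mysql /var/lib/mysql
systemctl start mysqld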
Incremental backups
It is best to add changed page tracking to my.cnf, which allows incremental backups to complete faster (see the sample setting at the end of this section). Also, if the backups will not be stored locally, we need to pass the --extra-lsndir option pointing to a local path. This way we keep a copy of the checkpoints file available, which is needed for incrementals to work. Here's how to take an incremental backup (and compress it):
innobackupex --incremental \
--stream=xbstream \
--extra-lsndir=/backup/lsn_dir \
--incremental-basedir=/backup/lsn_dir |
pigz -c > /backup/inc/inc1.xbs.gz
The restore steps would be:
- Decompress the full backup
unpigz -c full.xbs.gz | xbstream -x -C /var/lib/mysql/
- Decompress all the incrementals you want to apply
unpigz -c inc1.xbs.gz | xbstream -x -C /backup/inc/1/
- Apply logs to the base backup, using the --redo-only option
innobackupex --apply-log \
--redo-only /var/lib/mysql/ \
--use-memory=24G
- Apply any incrementals using the --redo-only option, e.g.
innobackupex --apply-log \
--redo-only /var/lib/mysql/ \
--use-memory=24G \
--incremental-dir=/backup/inc/1
innobackupex --apply-log \
--redo-only /var/lib/mysql/ \
--use-memory=24G \
--incremental-dir=/backup/inc/2
- After the last incremental has been applied, run apply-log one last time without the --redo-only option
innobackupex --apply-log \
--use-memory=24G \
/var/lib/mysql
- Profit!
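As mentioned at the start of this section, if you are running Percona Server you can enable changed page tracking so xtrabackup does not have to scan every page to find the deltas. A minimal my.cnf sketch, assuming Percona Server 5.6 or later where the innodb_track_changed_pages variable is available:
[mysqld]
# track changed pages so incremental backups can skip full data scans
innodb_track_changed_pages = ON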