MySQL high availability with ProxySQL, Consul and Orchestrator
Proof of concept
With the goal of minimizing the number of servers required for the POC, I installed three servers that run both MySQL and Consul: mysql1, mysql2 and mysql3. On mysql3, I also installed ProxySQL, Orchestrator and consul-template. In a real production environment, you'd separate the roles more like this:
- ProxySQL + consul-template + Consul (client mode)
- MySQL
- Orchestrator + Consul (client mode)
- Consul (server mode)
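In the POC below, all three nodes run Consul in server mode. In a production layout like the one above, the ProxySQL and Orchestrator hosts would instead run a local agent in client mode so they can reach the cluster through 127.0.0.1:8500. As a rough illustration, a minimal client-mode config might look like this (node name and IP are placeholders):

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "proxysql1",
  "server": false,
  "retry_join": [ "192.168.56.100", "192.168.56.101", "192.168.56.102" ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.110"
}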
Installing Consul
- Install Consul on mysql1, mysql2 and mysql3:
$ sudo yum -y install unzip
$ sudo useradd consul
$ sudo mkdir -p /opt/consul
$ sudo touch /var/log/consul.log
$ cd /opt/consul
$ sudo wget https://releases.hashicorp.com/consul/1.0.7/consul_1.0.7_linux_amd64.zip
$ sudo unzip consul_1.0.7_linux_amd64.zip
$ sudo ln -s /opt/consul/consul /usr/local/bin/consul
$ sudo chown consul:consul -R /opt/consul* /var/log/consul.log
- Bootstrap the Consul cluster from one node. I've picked mysql3 here:
$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql3",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [ "192.168.56.100", "192.168.56.101", "192.168.56.102" ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.102"
}

$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
- Start Consul on mysql1 and have it join the cluster:
$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql1",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [ "192.168.56.100", "192.168.56.101", "192.168.56.102" ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.100"
}

$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102
- Start Consul on mysql2 and have it join the cluster:
$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql2",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [ "192.168.56.100", "192.168.56.101", "192.168.56.102" ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.101"
}

$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102

At this point we have a working three-node Consul cluster. We can test writing k/v pairs to it and retrieving them back:
$ consul kv put foo bar
Success! Data written to: foo
$ consul kv get foo
bar
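You can also double-check cluster membership; consul members should list all three nodes as alive servers (the output below is illustrative):

$ consul members
Node    Address              Status  Type    Build  Protocol  DC   Segment
mysql1  192.168.56.100:8301  alive   server  1.0.7  2         dc1  <all>
mysql2  192.168.56.101:8301  alive   server  1.0.7  2         dc1  <all>
mysql3  192.168.56.102:8301  alive   server  1.0.7  2         dc1  <all>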
Configuring Orchestrator to write to Consul
Orchestrator has built-in support for Consul. If Consul lives on separate servers, you should still install the Consul agent/client on the Orchestrator machine. This allows for local communication between Orchestrator and the Consul cluster (via the API) to prevent issues during network partitioning scenarios. In our lab example, this is not required as the Consul servers are already present on the local machine.
- Configure Orchestrator to write to Consul on each master change. Add the following lines to the Orchestrator configuration:
$ vi /etc/orchestrator.conf.json
  "KVClusterMasterPrefix": "mysql/master",
  "ConsulAddress": "127.0.0.1:8500",
- Restart Orchestrator:
$ service orchestrator restart
- Populate the current master value manually. We need to tell Orchestrator to populate the values in Consul while bootstrapping the first time. This is accomplished by calling orchestrator-client. Orchestrator will update the values automatically if there is a master change.
$ orchestrator-client -c submit-masters-to-kv-stores
- Check the stored values from command line:
$ consul kv get mysql/master/testcluster
mysql1:3306

Slave servers can also be stored in Consul; however, they will not be maintained automatically by Orchestrator. We'd need an external script that uses the Orchestrator API, run from cron. A full solution is out of scope for this post, but a rough sketch of the idea follows below. The template I am using later assumes the slaves are kept under the mysql/slave/testcluster prefix.
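As an illustration only, a cron job along these lines could keep the slave entries in sync. It assumes orchestrator-client is available on the host and reuses the master key that Orchestrator already maintains; the script name and key layout are examples to adapt to your environment:

$ vi /usr/local/bin/update-slaves-in-consul.sh
#!/bin/bash
# Hypothetical helper: refresh the replica list for testcluster in Consul.
CLUSTER=testcluster

# Orchestrator already maintains the master entry, e.g. "mysql1:3306"
MASTER=$(consul kv get mysql/master/${CLUSTER})

# Drop stale entries, then store one key per current replica hostname
consul kv delete -recurse mysql/slave/${CLUSTER}/
for replica in $(orchestrator-client -c which-replicas -i ${MASTER}); do
  consul kv put mysql/slave/${CLUSTER}/${replica%%:*} ${replica%%:*}
done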
Using Consul template to manage ProxySQL
We have ProxySQL running on mysql3. The idea is to have consul-template dynamically update the ProxySQL configuration when there are changes to the topology.
- Install consul-template on mysql3:
$ mkdir /opt/consul-template
$ cd /opt/consul-template
$ sudo wget https://releases.hashicorp.com/consul-template/0.19.4/consul-template_0.19.4_linux_amd64.zip
$ sudo unzip consul-template_0.19.4_linux_amd64.zip
$ sudo ln -s /opt/consul-template/consul-template /usr/local/bin/consul-template
- Create a template for the ProxySQL config file. The template pulls the master hostname from the Consul key populated by Orchestrator and loops over the keys under the slave prefix (adjust the key paths if yours differ). Note this template also deals with slave servers.

$ vi /opt/consul-template/templates/proxysql.ctmpl
DELETE FROM mysql_servers where hostgroup_id=0;
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 0, "{{ key "mysql/master/testcluster/hostname" }}" );
{{ range tree "mysql/slave/testcluster" }}
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 1, "{{ .Value }}" );
{{ end }}
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
- Create the consul-template config file. Note that we need a Consul agent installed locally, providing the Consul API endpoint on port 8500, so that consul-template can subscribe to 127.0.0.1:8500.
$ vi /opt/consul-template/config/consul-template.cfg
consul {
  auth {
    enabled = false
  }
  address = "127.0.0.1:8500"
  retry {
    enabled = true
    attempts = 12
    backoff = "250ms"
    max_backoff = "1m"
  }
  ssl {
    enabled = false
  }
}

reload_signal = "SIGHUP"
kill_signal = "SIGINT"
max_stale = "10m"
log_level = "info"

wait {
  min = "5s"
  max = "10s"
}

template {
  source = "/opt/consul-template/templates/proxysql.ctmpl"
  destination = "/opt/consul-template/templates/proxysql.sql"
  # log in to proxysql and execute the template file containing sql statements to set the new topology
  command = "/bin/bash -c 'mysql --defaults-file=/etc/proxysql-admin.my.cnf < /opt/consul-template/templates/proxysql.sql'"
  command_timeout = "60s"
  perms = 0644
  backup = true
  wait = "2s:6s"
}
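Before wiring consul-template to ProxySQL, you can optionally dry-run the template to confirm it renders the expected SQL; with -dry the rendered output goes to stdout and nothing is written or executed:

$ consul-template -template "/opt/consul-template/templates/proxysql.ctmpl:/opt/consul-template/templates/proxysql.sql" -once -dry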
- Start consul-template
$ nohup /usr/local/bin/consul-template -config=/opt/consul-template/config/consul-template.cfg > /var/log/consul-template/consul-template.log 2>&1 &
The next step is to perform a master change (e.g. via the Orchestrator GUI) and observe the effects. Something like this should appear in the logs:
[root@mysql3 config]$ tail -f /var/log/consul-template/consul-template.log
[INFO] (runner) rendered "/opt/consul-template/templates/proxysql.ctmpl" => "/opt/consul-template/templates/proxysql.sql"
[INFO] (runner) executing command "/bin/bash -c 'mysql --defaults-file=/etc/proxysql-admin.my.cnf < /opt/consul-template/templates/proxysql.sql'" from "/opt/consul-template/templates/proxysql.ctmpl" => "/opt/consul-template/templates/proxysql.sql"

What happened? Orchestrator updated the K/V pair in Consul; consul-template detected the change, generated a .sql file with the commands to update ProxySQL, and then executed them.
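To confirm the new topology actually reached ProxySQL, you can also query the admin interface directly (assuming the admin credentials stored in /etc/proxysql-admin.my.cnf):

$ mysql --defaults-file=/etc/proxysql-admin.my.cnf -e "SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;"

The new master should show up in hostgroup 0, and any slaves stored in Consul in hostgroup 1.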
Conclusion
ProxySQL, Orchestrator and Consul are a great combination for building highly available MySQL clusters. Some assembly is required, but the results will definitely pay off in the long term. If you want to read more about the benefits of a setup like this, make sure to check out my post about graceful switchover without returning any errors to the application, as well as Matthias' post about autoscaling ProxySQL in the cloud.