MySQL high availability with ProxySQL, Consul and Orchestrator

Tags: MySQL, Technical Track, High Availability, Open Source, Orchestrator, HAProxy, Consul, Consul-Template
In this post, we will explore one approach to MySQL high availability with ProxySQL, Consul and Orchestrator.
This is a follow-up to my previous post about a similar architecture that used HAProxy instead. I've re-used some of the content from that post so that you don't have to go read through it; everything you need is in here.
Let’s briefly go over each piece of the puzzle:
– ProxySQL is in charge of connecting the application to the appropriate backend (reader or writer).
It can be installed on each application server directly or we can have an intermediate connection layer with one or more ProxySQL servers. The former probably makes sense if you have a small number of application servers; as the number grows, the latter option becomes more attractive. Another scenario for the latter would be to have a "shared" ProxySQL layer that connects applications to different database clusters.
– Orchestrator’s role is to monitor the servers and perform automatic (or manual) topology changes as needed.
– Consul is used to decouple Orchestrator from ProxySQL, and serves as the source of truth for the topology. Orchestrator is in charge of updating the identity of the master in Consul whenever there is a topology change. Why not have Orchestrator update ProxySQL directly? For one, Orchestrator hooks are fired only once, so what happens if there is an error or a network partition at that moment? Also, Orchestrator would need to know the ProxySQL admin credentials, which could introduce a security issue. The key layout this results in is sketched right after this list.
– Consul-template runs locally on the ProxySQL server(s) and subscribes to the Consul K/V store; when it detects a change in any value, it triggers an action. In this case, the action is to propagate the information to ProxySQL by rendering and executing a template (more on this later).
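To make the moving pieces concrete, here is the K/V layout used throughout this post (the cluster is named testcluster; note that the slave prefix is a convention of this post, not something Orchestrator maintains):
mysql/master/testcluster             # written by Orchestrator: current master as host:port
mysql/master/testcluster/hostname    # sub-key also written by Orchestrator; used by the ProxySQL template
mysql/slave/testcluster/<hostname>   # one entry per replica, maintained outside Orchestrator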
Proof of concept
With the goal of minimizing the number of servers required for the POC, I installed three servers which each run MySQL and a Consul server: mysql1, mysql2 and mysql3. On mysql3, I also installed ProxySQL, Orchestrator and Consul-template. In a real production environment, you'd separate the roles more like this:
- ProxySQL + consul-template + Consul (client mode)
- MySQL
- Orchestrator + Consul (client mode)
- Consul (server mode)
Installing Consul
- Install Consul on mysql1, mysql2 and mysql3:
$ sudo yum -y install unzip
$ sudo useradd consul
$ sudo mkdir -p /opt/consul
$ sudo touch /var/log/consul.log
$ cd /opt/consul
$ sudo wget https://releases.hashicorp.com/consul/1.0.7/consul_1.0.7_linux_amd64.zip
$ sudo unzip consul_1.0.7_linux_amd64.zip
$ sudo ln -s /opt/consul/consul /usr/local/bin/consul
$ sudo chown consul:consul -R /opt/consul* /var/log/consul.log
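A quick sanity check that the binary is reachable on the PATH (it should report the release downloaded above):
$ consul version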
- Bootstrap the Consul cluster from one node. I've picked mysql3 here:
$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql3",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [ "192.168.56.100", "192.168.56.101", "192.168.56.102" ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.102"
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
- Start Consul on mysql1 and have it join the cluster:
$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql1",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [ "192.168.56.100", "192.168.56.101", "192.168.56.102" ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.100"
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102
- Start Consul on mysql2 and have it join the cluster:
$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql2",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [ "192.168.56.100", "192.168.56.101", "192.168.56.102" ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.101"
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102
At this point we have a working three-node Consul cluster. We can test writing k/v pairs to it and retrieving them back:
$ consul kv put foo bar
Success! Data written to: foo
$ consul kv get foo
bar
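We can also confirm that all three nodes joined the cluster:
$ consul members   # all three nodes should show up with status "alive"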
Configuring Orchestrator to write to Consul
Orchestrator has built-in support for Consul. If Consul lives on separate servers, you should still install the Consul agent/client on the Orchestrator machine. This allows for local communication between Orchestrator and the Consul cluster (via the API) to prevent issues during network partitioning scenarios. In our lab example this is not required, as the Consul servers are already present on the local machine.
- Configure Orchestrator to write to Consul on each master change. Add the following lines to the Orchestrator configuration:
$ vi /etc/orchestrator.conf.json
"KVClusterMasterPrefix": "mysql/master",
"ConsulAddress": "127.0.0.1:8500",
- Restart Orchestrator:
$ service orchestrator restart
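Once Orchestrator is back up, a quick sanity check is to confirm it knows about our cluster:
$ orchestrator-client -c clusters   # should list testcluster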
- Populate the current master value manually. We need to tell Orchestrator to write the current values to Consul when bootstrapping the first time; this is accomplished by calling orchestrator-client. From then on, Orchestrator will update the values automatically whenever there is a master change.
$ orchestrator-client -c submit-masters-to-kv-stores
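Recent Orchestrator versions write not only the mysql/master/testcluster key itself, but also hostname, port, ipv4 and ipv6 sub-keys under it; the ProxySQL template later in this post relies on the hostname sub-key. Everything under the prefix can be inspected with:
$ consul kv get -recurse mysql/master/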
- Check the stored values from the command line:
$ consul kv get mysql/master/testcluster
mysql1:3306
Slave servers can also be stored in Consul; however, they will not be maintained automatically by Orchestrator. We'd need an external script that uses the Orchestrator API, run from cron. Productionizing that is out of scope for this post, but a minimal sketch follows. The template I am using below assumes the slaves are kept under the mysql/slave/testcluster prefix.
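A minimal sketch of such a script, assuming Orchestrator's HTTP API is reachable on 127.0.0.1:3000 and jq is available (error handling and cleanup of stale keys are omitted):
#!/bin/bash
# Hypothetical cron job: publish the current replicas of testcluster to Consul.
# It asks the Orchestrator API for all instances in the cluster, keeps the
# read-only ones, and writes one key per replica under mysql/slave/testcluster.
CLUSTER=testcluster
curl -s "http://127.0.0.1:3000/api/cluster/${CLUSTER}" \
  | jq -r '.[] | select(.ReadOnly == true) | .Key.Hostname' \
  | while read -r host; do
      consul kv put "mysql/slave/${CLUSTER}/${host}" "${host}"
    done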
Using Consul template to manage ProxySQL
We have ProxySQL running on mysql3. The idea is to have Consul-template dynamically update the ProxySQL configuration when there are changes to the topology.
- Install Consul-template on mysql3:
$ sudo mkdir /opt/consul-template
$ cd /opt/consul-template
$ sudo wget https://releases.hashicorp.com/consul-template/0.19.4/consul-template_0.19.4_linux_amd64.zip
$ sudo unzip consul-template_0.19.4_linux_amd64.zip
$ sudo ln -s /opt/consul-template/consul-template /usr/local/bin/consul-template
- Create a template for the ProxySQL config file. Note this template also deals with slave servers. The {{ ... }} expressions are consul-template syntax: the master hostname is read from the hostname sub-key maintained by Orchestrator, and the slaves are iterated from the mysql/slave/testcluster prefix. Both hostgroups are cleared first so that demoted or removed servers don't linger.
$ vi /opt/consul-template/templates/proxysql.ctmpl
DELETE FROM mysql_servers where hostgroup_id IN (0,1);
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 0, "{{ key "mysql/master/testcluster/hostname" }}" );
{{ range tree "mysql/slave/testcluster" }}REPLACE into mysql_servers (hostgroup_id, hostname) values ( 1, "{{ .Value }}" );
{{ end }}LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
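For illustration, with mysql1 as the master and mysql2/mysql3 published as slaves, the rendered file would look roughly like this:
$ cat /opt/consul-template/templates/proxysql.sql
DELETE FROM mysql_servers where hostgroup_id IN (0,1);
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 0, "mysql1" );
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 1, "mysql2" );
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 1, "mysql3" );
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;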
- Create the consul-template config file. Note that we need a Consul agent installed locally, which provides a Consul API endpoint on port 8500, in order for consul-template to subscribe to 127.0.0.1:8500.
$ vi /opt/consul-template/config/consul-template.cfg
consul {
  auth {
    enabled = false
  }
  address = "127.0.0.1:8500"
  retry {
    enabled = true
    attempts = 12
    backoff = "250ms"
    max_backoff = "1m"
  }
  ssl {
    enabled = false
  }
}
reload_signal = "SIGHUP"
kill_signal = "SIGINT"
max_stale = "10m"
log_level = "info"
wait {
  min = "5s"
  max = "10s"
}
template {
  source = "/opt/consul-template/templates/proxysql.ctmpl"
  destination = "/opt/consul-template/templates/proxysql.sql"
  # log in to proxysql and execute the template file containing sql statements to set the new topology
  command = "/bin/bash -c 'mysql --defaults-file=/etc/proxysql-admin.my.cnf < /opt/consul-template/templates/proxysql.sql'"
  command_timeout = "60s"
  perms = 0644
  backup = true
  wait = "2s:6s"
}
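Before starting it as a daemon, it's worth doing a one-off dry run; this renders the template to stdout without executing the command:
$ consul-template -config=/opt/consul-template/config/consul-template.cfg -dry -once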
- Start consul-template:
$ nohup /usr/local/bin/consul-template -config=/opt/consul-template/config/consul-template.cfg > /var/log/consul-template/consul-template.log 2>&1 &
The next step is to perform a master change and observe the effects. This can be done via the Orchestrator GUI, or from the command line as sketched below.
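For example, a planned (graceful) takeover can be triggered with orchestrator-client; the exact flags can vary between Orchestrator versions, so treat this as a sketch:
$ orchestrator-client -c graceful-master-takeover -alias testcluster
After the topology change, something like this should show up in the logs: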
[root@mysql3 config]$ tail -f /var/log/consul-template/consul-template.log
[INFO] (runner) rendered "/opt/consul-template/templates/proxysql.ctmpl" => "/opt/consul-template/templates/proxysql.sql"
[INFO] (runner) executing command "/bin/bash -c 'mysql --defaults-file=/etc/proxysql-admin.my.cnf < /opt/consul-template/templates/proxysql.sql'" from "/opt/consul-template/templates/proxysql.ctmpl" => "/opt/consul-template/templates/proxysql.sql"
What happened? Orchestrator updated the K/V in Consul; Consul-template detected the change, generated a .sql file with the commands to update ProxySQL, and then executed them.