$ sudo yum -y install unzip
$ sudo useradd consul
$ sudo mkdir -p /opt/consul
$ sudo touch /var/log/consul.log
$ cd /opt/consul
$ sudo wget https://releases.hashicorp.com/consul/1.0.7/consul_1.0.7_linux_amd64.zip
$ sudo unzip consul_1.0.7_linux_amd64.zip
$ sudo ln -s /opt/consul/consul /usr/local/bin/consul
$ sudo chown consul:consul -R /opt/consul* /var/log/consul.log
$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql3",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [
    "192.168.56.100",
    "192.168.56.101",
    "192.168.56.102"
  ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.102"
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql1",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [
    "192.168.56.100",
    "192.168.56.101",
    "192.168.56.102"
  ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.100"
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102
$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql2",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [
    "192.168.56.100",
    "192.168.56.101",
    "192.168.56.102"
  ],
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.101"
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102

At this point we have a working three-node Consul cluster. We can test writing k/v pairs to it and retrieving them back:
$ consul kv put foo bar
Success! Data written to: foo
$ consul kv get foo
bar
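Besides the k/v test, we can confirm that all three servers joined and elected a leader. These are standard Consul CLI commands (not shown in the original steps); the output will list mysql1, mysql2 and mysql3 as alive server members:

```
$ consul members
$ consul operator raft list-peers
```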
$ vi /etc/orchestrator.conf.json
"KVClusterMasterPrefix": "mysql/master",
"ConsulAddress": "127.0.0.1:8500",
$ service orchestrator restart
$ orchestrator-client -c submit-masters-to-kv-stores
$ consul kv get mysql/master/testcluster
mysql1:3306

Slave servers can also be stored in Consul; however, they will not be maintained automatically by Orchestrator. We'd need to create an external script that makes use of the Orchestrator API and run it from cron, but that is out of scope for this post. The template I am using below assumes they are kept under the mysql/slave/testcluster prefix.
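As a rough idea of what such a cron script could look like, the sketch below pulls the instance list for testcluster from the Orchestrator HTTP API (assumed here to listen on 127.0.0.1:3000; the endpoint and key layout are my assumptions, not part of the original setup) and mirrors the replicas into Consul under the mysql/slave/testcluster prefix:

```shell
#!/bin/bash
# Hypothetical sync script: mirror Orchestrator's view of testcluster replicas
# into Consul. Assumes the Orchestrator API is reachable on 127.0.0.1:3000
# and that jq is installed.

# The master key is maintained by Orchestrator itself; strip the :port suffix
MASTER=$(consul kv get mysql/master/testcluster 2>/dev/null | cut -d: -f1)

# /api/cluster/alias/<alias> returns all instances of the cluster as JSON
for HOST in $(curl -s http://127.0.0.1:3000/api/cluster/alias/testcluster | jq -r '.[].Key.Hostname' 2>/dev/null); do
    [ "$HOST" = "$MASTER" ] && continue   # skip the master, only store replicas
    consul kv put "mysql/slave/testcluster/$HOST" "$HOST"
done
```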
$ mkdir /opt/consul-template
$ cd /opt/consul-template
$ sudo wget https://releases.hashicorp.com/consul-template/0.19.4/consul-template_0.19.4_linux_amd64.zip
$ sudo unzip consul-template_0.19.4_linux_amd64.zip
$ sudo ln -s /opt/consul-template/consul-template /usr/local/bin/consul-template
$ vi /opt/consul-template/templates/proxysql.ctmpl
DELETE FROM mysql_servers where hostgroup_id=0;
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 0, "{{ key "mysql/master/testcluster/hostname" }}" );
{{ range tree "mysql/slave/testcluster" }}REPLACE into mysql_servers (hostgroup_id, hostname) values ( 1, "{{ .Value }}" );
{{ end }}LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
$ vi /opt/consul-template/config/consul-template.cfg
consul {
  auth {
    enabled = false
  }
  address = "127.0.0.1:8500"
  retry {
    enabled = true
    attempts = 12
    backoff = "250ms"
    max_backoff = "1m"
  }
  ssl {
    enabled = false
  }
}
reload_signal = "SIGHUP"
kill_signal = "SIGINT"
max_stale = "10m"
log_level = "info"
wait {
  min = "5s"
  max = "10s"
}
template {
  source = "/opt/consul-template/templates/proxysql.ctmpl"
  destination = "/opt/consul-template/templates/proxysql.sql"
  # log in to proxysql and execute the template file containing sql statements to set the new topology
  command = "/bin/bash -c 'mysql --defaults-file=/etc/proxysql-admin.my.cnf < /opt/consul-template/templates/proxysql.sql'"
  command_timeout = "60s"
  perms = 0644
  backup = true
  wait = "2s:6s"
}
$ nohup /usr/local/bin/consul-template -config=/opt/consul-template/config/consul-template.cfg > /var/log/consul-template/consul-template.log 2>&1 &
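nohup is fine for testing, but in production you would want the process supervised so it survives reboots. A minimal systemd unit could look like the sketch below (the unit file and its options are my own suggestion, not part of the original setup):

```
# /etc/systemd/system/consul-template.service (hypothetical)
[Unit]
Description=consul-template
After=network.target

[Service]
ExecStart=/usr/local/bin/consul-template -config=/opt/consul-template/config/consul-template.cfg
Restart=on-failure

[Install]
WantedBy=multi-user.target
```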
The next step is to perform a master change (e.g. via the Orchestrator GUI) and see the effects. Something like this should show up in the logs:
[root@mysql3 config]$ tail -f /var/log/consul-template/consul-template.log
[INFO] (runner) rendered "/opt/consul-template/templates/proxysql.ctmpl" => "/opt/consul-template/templates/proxysql.sql"
[INFO] (runner) executing command "/bin/bash -c 'mysql --defaults-file=/etc/proxysql-admin.my.cnf < /opt/consul-template/templates/proxysql.sql'" from "/opt/consul-template/templates/proxysql.ctmpl" => "/opt/consul-template/templates/proxysql.sql"

What happened? Orchestrator updated the K/V pair in Consul, Consul Template detected the change and generated a .sql file with the commands to update ProxySQL, and then executed them.
ProxySQL, Orchestrator and Consul are a great solution to put together for highly available MySQL clusters. Some assembly is required, but the results will definitely pay off in the long term. If you want to read more about the benefits of a setup like this, make sure to check out my post about graceful switchover without returning any errors to the application, as well as Matthias' post about autoscaling ProxySQL in the cloud.