$ sudo yum -y install unzip
$ sudo useradd consul
$ sudo mkdir -p /opt/consul
$ sudo touch /var/log/consul.log
$ cd /opt/consul
$ sudo wget https://releases.hashicorp.com/consul/1.0.7/consul_1.0.7_linux_amd64.zip
$ sudo unzip consul_1.0.7_linux_amd64.zip
$ sudo ln -s /opt/consul/consul /usr/local/bin/consul
$ sudo chown consul:consul -R /opt/consul* /var/log/consul.log
2. Bootstrap the Consul cluster from one node. I've picked mysql3 here:
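Before moving on, it's worth confirming the binary is reachable through the symlink (the command should print the 1.0.7 version we just unpacked):
$ consul version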
$ sudo vi /etc/consul.conf.json
{
"datacenter": "dc1",
"data_dir": "/opt/consul/",
"log_level": "INFO",
"node_name": "mysql3",
"server": true,
"ui": true,
"bootstrap": true,
"client_addr": "0.0.0.0",
"advertise_addr": "192.168.56.102"
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
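Since this node was started with "bootstrap": true, it should elect itself leader right away. A quick way to confirm, assuming the default client port of 8500:
$ curl -s http://127.0.0.1:8500/v1/status/leader
Once a leader address is returned, the other nodes can safely join.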
3. Start Consul on mysql1 and have it join the cluster
$ sudo vi /etc/consul.conf.json
{
"datacenter": "dc1",
"data_dir": "/opt/consul/",
"log_level": "INFO",
"node_name": "mysql1",
"server": true,
"ui": true,
"bootstrap": false,
"client_addr": "0.0.0.0",
"advertise_addr": "192.168.56.100"
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102
4. Start Consul on mysql2 and have it join the cluster
$ sudo vi /etc/consul.conf.json
{
"datacenter": "dc1",
"data_dir": "/opt/consul/",
"log_level": "INFO",
"node_name": "mysql2",
"server": true,
"ui": true,
"bootstrap": false,
"client_addr": "0.0.0.0",
"advertise_addr": "192.168.56.101"
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102
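Before trusting the cluster, it's worth confirming that all three servers show up as members and as Raft peers; this can be run from any of the nodes:
$ consul members
$ consul operator raft list-peers
All three nodes should be listed as alive servers, with one of them acting as leader.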
At this point we have a working three-node Consul cluster. We can test it by writing a k/v pair and reading it back:
$ consul kv put foo bar
Success! Data written to: foo
$ consul kv get foo
bar
$ vi /etc/orchestrator.conf.json
"KVClusterMasterPrefix": "mysql/master",
"ConsulAddress": "127.0.0.1:8500",
2. Restart Orchestrator
$ service orchestrator restart
3. Populate the current master value manually
$ orchestrator-client -c submit-masters-to-kv-stores
4. Check the stored values from the command line
$ consul kv get mysql/master/testcluster
mysql1:3306
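The same value can also be read over Consul's HTTP API; the raw flag returns just the value, without the base64/JSON envelope:
$ curl -s http://127.0.0.1:8500/v1/kv/mysql/master/testcluster?raw
mysql1:3306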
$ mkdir /opt/consul-template
$ cd /opt/consul-template
$ sudo wget https://releases.hashicorp.com/consul-template/0.19.4/consul-template_0.19.4_linux_amd64.zip
$ sudo unzip consul-template_0.19.4_linux_amd64.zip
$ sudo ln -s /opt/consul-template/consul-template /usr/local/bin/consul-template
2. Create a template for the HAProxy config file
$ vi /opt/consul-template/templates/haproxy.ctmpl
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend writer-front
    bind *:3307
    mode tcp
    default_backend writer-back

frontend stats-front
    bind *:80
    mode http
    default_backend stats-back

frontend reader-front
    bind *:3308
    mode tcp
    default_backend reader-back

backend writer-back
    mode tcp
    option httpchk
    server master {{ key "mysql/master/testcluster" }} check port 9200 inter 12000 rise 3 fall 3

backend stats-back
    mode http
    balance roundrobin
    stats uri /haproxy/stats
    stats auth user:pass

backend reader-back
    mode tcp
    balance leastconn
    option httpchk
    server slave1 192.168.56.101:3306 check port 9200 inter 12000 rise 3 fall 3
    server slave2 192.168.56.102:3306 check port 9200 inter 12000 rise 3 fall 3
    server master 192.168.56.100:3306 check port 9200 inter 12000 rise 3 fall 3
3. Create consul template config file
$ vi /opt/consul-template/config/consul-template.cfg
consul {
auth {
enabled = false
}
address = "127.0.0.1:8500"
retry {
enabled = true
attempts = 12
backoff = "250ms"
max_backoff = "1m"
}
ssl {
enabled = false
}
}
reload_signal = "SIGHUP"
kill_signal = "SIGINT"
max_stale = "10m"
log_level = "info"
wait {
min = "5s"
max = "10s"
}
template {
source = "/opt/consul-template/templates/haproxy.ctmpl"
destination = "/etc/haproxy/haproxy.cfg"
command = "sudo service haproxy reload || true"
command_timeout = "60s"
perms = 0600
backup = true
wait = "2s:6s"
}
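Before letting consul-template manage the live HAProxy config, you can do a dry run, which renders the template to stdout instead of writing /etc/haproxy/haproxy.cfg or running the reload command:
$ consul-template -config=/opt/consul-template/config/consul-template.cfg -dry -once
This is a handy way to catch template or Consul connectivity errors before they can affect the proxy.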
4. Give sudo permissions to consul-template so it can reload haproxy
$ sudo vi /etc/sudoers
consul ALL=(root) NOPASSWD:/usr/bin/lsof, ...,/sbin/service haproxy reload
5. Start consul template
$ nohup /usr/local/bin/consul-template -config=/opt/consul-template/config/consul-template.cfg > /var/log/consul-template/consul-template.log 2>&1 &
And that is all the pieces we need. The next step is doing a master change (e.g. via the Orchestrator GUI) and seeing the effects:
[root@mysql3 config]$ tail -f /var/log/consul-template/consul-template.log
2018/04/17 12:56:25.863912 [INFO] (runner) rendered "/opt/consul-template/templates/haproxy.ctmpl" => "/etc/haproxy/haproxy.cfg"
2018/04/17 12:56:25.864024 [INFO] (runner) executing command "sudo service haproxy reload || true" from "/opt/consul-template/templates/haproxy.ctmpl" => "/etc/haproxy/haproxy.cfg"
2018/04/17 12:56:25.864078 [INFO] (child) spawning: sudo service haproxy reload
Redirecting to /bin/systemctl reload haproxy.service
What happened? Orchestrator updated the k/v pair in Consul; consul-template detected the change, re-rendered the HAProxy config file, and then reloaded HAProxy.
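A simple end-to-end check is to connect through HAProxy's writer port and see which backend answers; after the master change, @@hostname should report the new master (this assumes a MySQL account that is allowed to connect from the proxy host):
$ mysql -h 127.0.0.1 -P 3307 -e "SELECT @@hostname"
The reader port (3308) can be checked the same way.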
HAProxy is still widely used as a proxy/load balancer in front of MySQL, so it's nice to be able to combine it with Orchestrator and Consul into a high-availability solution. While this is a viable alternative, for a new deployment I usually recommend going with ProxySQL instead. For one, you get graceful switchover without returning any errors to the application. The setup is also a bit easier, as there are fewer moving parts with ProxySQL Cluster (one could get rid of Consul). Finally, having a SQL-aware proxy opens up more interesting possibilities, like read/write splitting and query mirroring.