There are a number of solutions available for MySQL architectures that provide automatic failover, but the vast majority involve potentially significant and complex changes to an existing configuration. Fortunately, HAProxy can be leveraged for this purpose with minimal impact to the current system. Alex Williams posted this clever solution a couple of years ago on his blog. Here we take a closer look at the details of implementing it in an existing system.
HAProxy is a free, open-source load balancer, proxy, and high-availability application for the TCP and HTTP protocols. Since it was built primarily to handle web traffic, it has a robust rule syntax for using HTTP checks to verify the health of backend systems. The key to Alex's solution is creating xinetd services on the database servers that respond with HTTP messages. The HTTP check that determines database status works like this:
- The HAProxy server sends HTTP requests to the database servers at configured intervals to specified ports
- The /etc/services file on the database servers maps those ports to services in their /etc/xinetd.d directory
- The services can call any specified script, so we build scripts that connect to the databases and check for whatever conditions we choose.
The services then return an OK or a Service Unavailable response depending on those conditions. Code for these scripts is included in Alex's article.
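To make the steps above concrete, here is a minimal sketch of such a check script; xinetd would be configured to run it for connections on the check port. The script name, host, and the haproxy_check user are assumptions for illustration — the actual code is in Alex's article:

```shell
#!/bin/bash
# Hypothetical mysqlchk script. xinetd runs it once per connection;
# it writes an HTTP response to stdout, which HAProxy's httpchk parses.

MYSQL_HOST="localhost"
MYSQL_USER="haproxy_check"   # assumed monitoring-only MySQL account

# Emit a minimal HTTP response: $1 = status line, $2 = body
http_response() {
    local body="$2"
    printf 'HTTP/1.1 %s\r\n' "$1"
    printf 'Content-Type: text/plain\r\n'
    printf 'Content-Length: %d\r\n' "${#body}"
    printf 'Connection: close\r\n'
    printf '\r\n%s\r\n' "$body"
}

# Any condition can go here -- a simple connection test, replication
# lag, the read_only flag, and so on.
if mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -e 'SELECT 1;' >/dev/null 2>&1; then
    http_response '200 OK' 'MySQL is running.'
else
    http_response '503 Service Unavailable' 'MySQL is down.'
fi
```

HAProxy only cares about the status line, so the body text is just a convenience for anyone testing the port by hand with curl or telnet.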
Our database configuration for this implementation is two pairs of Master-Slave databases in an Active-Passive relationship, with Master-Master replication between the sets. Siloing the passive Master-Slave pair provides a hot spare as well as continuous uptime during deployments, provided we have a means of swapping the active/passive roles of each pair. To accomplish the latter, we built two HAProxy configuration files, haproxy_pair1.cfg and haproxy_pair2.cfg, the only difference between the two being which Master-Slave pair is active. Having two files also gives immediate visibility into the current configuration: whichever file HAProxy is running with tells you which pair is active.
Just as in Alex's sample configuration, our web application uses two DNS entries, appmaster and appslave, for writes and reads respectively. The IPs for these hostnames are attached to our HAProxy server, allowing HAProxy to bind to them at startup and route traffic to the appropriate database server.
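As a sketch, the relevant portion of haproxy_pair1.cfg might look like the following (the server names, IPs, and check port are placeholders, not our actual values; the pair2 file would simply point each listener at the other pair's servers):

```
# haproxy_pair1.cfg -- pair 1 is active
listen appmaster 10.0.0.10:3306          # IP behind the appmaster DNS entry
    mode tcp
    option httpchk
    # writes go to the active master; port 9200 is the xinetd check port
    server db1-master 10.0.1.1:3306 check port 9200 inter 5000

listen appslave 10.0.0.11:3306           # IP behind the appslave DNS entry
    mode tcp
    option httpchk
    # reads go to the active pair's slave
    server db1-slave 10.0.1.2:3306 check port 9200 inter 5000
```

Because the check is plain HTTP on a side port, HAProxy can health-check MySQL in tcp mode without speaking the MySQL protocol itself.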
On our HAProxy server we then customized the /etc/init.d/haproxy script to handle an additional parameter, “flipactive”, which gives us the ability to swap the database pairs:
# first detect which cfg file haproxy is using
ps_out=`ps -ef | grep haproxy | grep cfg`

# if pair2 is active then reload with pair1
if [[ "$ps_out" =~ pair2 ]]; then
    # the -sf flag does a friendly reread of the config file: the new
    # process lets the old one finish its connections before exiting
    /usr/local/sbin/haproxy -f /etc/haproxy/haproxy_pair1.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
else
    # if pair1 is active then reload with pair2
    /usr/local/sbin/haproxy -f /etc/haproxy/haproxy_pair2.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
fi
The rest of the configuration details are covered pretty well in Alex’s article as well as in the HAProxy documentation.
We successfully developed and implemented this technique with the devops team at SlideShare last month to build rolling DDL scripting across multiple databases using Capistrano. It gave them explicit control over which database was being updated, and with it the means to update one database while the other served the current application code.
Interested in working with Laine? Schedule a tech call.