Editor’s Note: Because our bloggers have lots of useful tips, every now and then we update and bring forward a popular post from the past. Today’s post was originally published on November 26, 2019.
It’s not uncommon these days for us to use a high availability stack for MySQL consisting of Orchestrator, Consul and ProxySQL. You can read more details about this stack in Matthias Crauwels’ blog post How to Autoscale ProxySQL in the Cloud, as well as Ivan Groenwold’s post on MySQL High Availability With ProxySQL, Consul and Orchestrator. The high-level concept is simply that Orchestrator monitors the state of the MySQL replication topology and reports changes to Consul, which in turn can update ProxySQL hosts using a tool called consul-template.
Until now we’ve typically implemented the ProxySQL portion of this stack using an autoscaling group of sorts, due to the high levels of CPU usage that can be associated with ProxySQL. It’s better to be able to scale up and down as traffic increases and decreases, so you’re not paying for resources you don’t need. This, however, comes with a few disadvantages. The first is the amount of time it takes to scale up. If you’re using an autoscaling group and it launches a new instance, it will need to take the following steps:
- There will be a request to your cloud service provider for a new VM instance.
- Once the instance is up and running as part of the group, it will need to install ProxySQL along with supporting packages such as consul (agent) and consul-template.
- Once the packages are installed, they'll need to be configured to work with the consul server nodes as well as the ProxySQL nodes that are participating in the ProxySQL cluster.
- The new ProxySQL host will announce to Consul that it’s available, which in turn will update all the other participating nodes in the ProxySQL cluster.
This process can take several minutes, which is a long time to wait when you need the extra capacity right away. One way around this is to package ProxySQL and its supporting processes as Docker containers and run them on Kubernetes, where scaling up means starting additional pods from pre-built images instead of provisioning and configuring new VMs. When building the container images, I tried to stick to the following best practices:
- Each container should run only a single process. We know we’re working with ProxySQL, consul (agent), and consul-template, so these will all need to be in their own containers.
- The primary process running in each container should run as PID 1.
- The primary process running in each container should not run as root.
- Log output from the primary process in the container should be sent to STDOUT so that it can be collected by Docker logs.
- Containers should be as deterministic as possible, meaning they should run the same (or as close to it as possible) regardless of the environment they are deployed in.
Consul (Agent) Container
Below is a generic version of the Dockerfile I’m using for consul (agent). The objective is to install Consul, then instruct it to connect as an agent to the Consul cluster made up of the consul (server) nodes.
FROM centos:7
RUN yum install -q -y unzip wget && \
yum clean all
RUN groupadd consul && \
useradd -r -g consul -d /var/lib/consul consul
RUN mkdir /opt/consul && \
mkdir /etc/consul && \
mkdir /var/log/consul && \
mkdir /var/lib/consul && \
chown -R consul:consul /opt/consul && \
chown -R consul:consul /etc/consul && \
chown -R consul:consul /var/log/consul && \
chown -R consul:consul /var/lib/consul
RUN wget -q -O /opt/consul/consul.zip https://releases.hashicorp.com/consul/1.6.1/consul_1.6.1_linux_amd64.zip && \
unzip /opt/consul/consul.zip -d /opt/consul/ && \
rm -f /opt/consul/consul.zip && \
ln -s /opt/consul/consul /usr/local/bin/consul
COPY supportfiles/consul.conf.json /etc/consul/
USER consul
ENTRYPOINT ["/usr/local/bin/consul", "agent", "--config-file=/etc/consul/consul.conf.json"]
Simply put, the code above follows these instructions:
- Start from CentOS 7. This is a personal preference of mine. There are probably more lightweight distributions that can be considered, such as Alpine as recommended by Google, but I’m not the best OS nerd out there so I wanted to stick with what I know.
- Install our dependencies, which in this case are unzip and wget.
- Create our consul user, group and directory structure.
- Install consul.
- Copy over the consul config file from the host where the Docker build is being performed.
- Switch to the consul user.
- Start consul (agent).
- Container runs a single process:
- The ENTRYPOINT runs Consul directly, meaning nothing else is being run. Keep in mind that ENTRYPOINT specifies what should be run when the container starts. Because the packages are baked into the image by the Dockerfile, the container won’t have to install anything at startup, but we still need to launch Consul when the container starts.
- Process should be PID 1:
- Any process run by ENTRYPOINT will run as PID 1.
- Process should not be run as root:
- We switched to the Consul user prior to starting the ENTRYPOINT.
- Log output should go to STDOUT:
- If you run Consul using the command noted in the ENTRYPOINT, you’ll see log output goes to STDOUT.
- Should be as deterministic as possible:
- We’ve copied the configuration file into the container, meaning the container doesn’t have to fetch support files from anywhere else before Consul starts. The only way Consul’s behavior will change is if we recreate the container image with a new configuration file.
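For reference, here’s a rough sketch of the kind of consul.conf.json the Dockerfile copies into /etc/consul/. Treat the datacenter name and server addresses as placeholders; they’re assumptions for illustration, not values from the original setup.
{
  "datacenter": "us-central1",
  "data_dir": "/var/lib/consul",
  "server": false,
  "retry_join": ["consul-server-1.example.com", "consul-server-2.example.com", "consul-server-3.example.com"],
  "client_addr": "0.0.0.0",
  "log_level": "INFO"
}
The important parts are "server": false, which makes this node an agent rather than a server, and retry_join, which points the agent at the consul (server) nodes it should connect to.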
ProxySQL Container
Below is a generic version of the Dockerfile I'm using for ProxySQL. The objective is to install ProxySQL and make it available to receive traffic on port 6033 for write traffic, 6034 for read traffic, and 6032 for the admin console, which is how consul-template will interface with ProxySQL.
FROM centos:7
RUN groupadd proxysql && \
useradd -r -g proxysql proxysql
RUN yum install -q -y https://github.com/sysown/proxysql/releases/download/v2.0.6/proxysql-2.0.6-1-centos67.x86_64.rpm mysql curl && \
yum clean all
COPY supportfiles/* /opt/supportfiles/
COPY startstop/* /opt/
RUN chmod +x /opt/entrypoint.sh
RUN chown proxysql:proxysql /etc/proxysql.cnf
USER proxysql
ENTRYPOINT ["/opt/entrypoint.sh"]
Simply put, the code above follows these instructions:
- Start from CentOS 7.
- Create our ProxySQL user and group.
- Install ProxySQL and its dependencies, in this case the mysql client and curl. Curl is used to poll the GCP metadata API to determine which region the container is running in. We’ll cover this in more detail below.
- Move our configuration files and ENTRYPOINT script to the container.
- Make sure the ProxySQL config file is owned by the proxysql user so the ENTRYPOINT script can overwrite it at startup.
- Switch to the ProxySQL user.
- Start ProxySQL via the ENTRYPOINT script provided with the container, shown below.
#!/bin/bash
# Determine the region this container is running in by querying the GCE metadata server for the zone
dataCenter=$(curl -s http://metadata.google.internal/computeMetadata/v1/instance/zone -H "Metadata-Flavor: Google" | awk -F "/" '{print $NF}' | cut -d- -f1,2)
...
case $dataCenter in
us-central1)
cp -f /opt/supportfiles/proxysql-us-central1.cnf /etc/proxysql.cnf
;;
us-east1)
cp -f /opt/supportfiles/proxysql-us-east1.cnf /etc/proxysql.cnf
;;
esac
...
exec proxysql -c /etc/proxysql.cnf -f -D /var/lib/proxysql
The script starts by polling the GCP API to determine what region the container has been launched in. Based on the result, it will copy the correct config file to the appropriate location, then start ProxySQL.
Let's see how the combination of the Dockerfile and the ENTRYPOINT script allows us to meet best practices.
- Container runs a single process:
- ENTRYPOINT calls the entrypoint.sh script, which does some conditional logic based on the regional location of the container, then ends by running ProxySQL. This means at the end of the process ProxySQL will be the only process running.
- Process should be PID 1:
- The command “exec” at the end of the ENTRYPOINT script will start ProxySQL as PID 1.
- Process should not be run as root:
- We switched to the ProxySQL user prior to starting the ENTRYPOINT.
- Log output should go to STDOUT:
- If you run ProxySQL using the command noted at the end of the ENTRYPOINT script you’ll see that log output goes to STDOUT.
- Should be as deterministic as possible:
- We’ve copied the potential configuration files into the container. Unlike Consul, there are multiple configuration files and we need to determine which will be used based on the region the container lives in, but the configuration files themselves will not change unless the container image itself is updated. This ensures that all containers running within the same region will behave the same.
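For context, the region-specific proxysql-*.cnf files would differ mostly in things like server defaults and hostgroup contents; the listener setup that exposes the ports mentioned above is common to both. A trimmed sketch of that part of the config (the admin credentials and thread count here are assumptions) might look like this:
datadir="/var/lib/proxysql"
admin_variables=
{
    # Admin interface used by consul-template to push configuration changes
    admin_credentials="admin:admin"
    mysql_ifaces="0.0.0.0:6032"
}
mysql_variables=
{
    # 6033 for write traffic, 6034 for read traffic (routing to hostgroups is handled by query rules)
    interfaces="0.0.0.0:6033;0.0.0.0:6034"
    threads=4
}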
Consul-template container
Below is a generic version of the Dockerfile I'm using for consul-template. The objective is to install consul-template and have it act as the bridge between Consul (via the consul agent container) and ProxySQL, updating ProxySQL as needed when keys and values change in Consul.
FROM centos:7
RUN yum install -q -y unzip wget mysql nmap-ncat curl && \
yum clean all
RUN groupadd consul && \
useradd -r -g consul -d /var/lib/consul consul
RUN mkdir /opt/consul-template && \
mkdir /etc/consul-template && \
mkdir /etc/consul-template/templates && \
mkdir /etc/consul-template/config && \
mkdir /opt/supportfiles && \
mkdir /var/log/consul/ && \
chown -R consul:consul /etc/consul-template && \
chown -R consul:consul /etc/consul-template/templates && \
chown -R consul:consul /etc/consul-template/config && \
chown -R consul:consul /var/log/consul
RUN wget -q -O /opt/consul-template/consul-template.zip https://releases.hashicorp.com/consul-template/0.22.0/consul-template_0.22.0_linux_amd64.zip && \
unzip /opt/consul-template/consul-template.zip -d /opt/consul-template/ && \
rm -f /opt/consul-template/consul-template.zip && \
ln -s /opt/consul-template/consul-template /usr/local/bin/consul-template
RUN chown -R consul:consul /opt/consul-template
COPY supportfiles/* /opt/supportfiles/
COPY startstop/* /opt/
RUN chmod +x /opt/entrypoint.sh
USER consul
ENTRYPOINT ["/opt/entrypoint.sh"]
Simply put, the code above follows these instructions:
- Start from CentOS 7.
- Install our dependencies, which are unzip, wget, mysql (client), nmap-ncat and curl.
- Create our Consul user and group.
- Create the consul-template directory structure.
- Download and install consul-template.
- Copy the configuration file, template files and ENTRYPOINT script to the container.
- Make the ENTRYPOINT script executable.
- Switch to the Consul user.
- Start consul-template via the ENTRYPOINT script that’s provided with the container, shown below.
#!/bin/bash
# Determine the region this container is running in by querying the GCE metadata server for the zone
dataCenter=$(curl -s http://metadata.google.internal/computeMetadata/v1/instance/zone -H "Metadata-Flavor: Google" | awk -F "/" '{print $NF}' | cut -d- -f1,2)
...
cp /opt/supportfiles/consul-template-config /etc/consul-template/config/consul-template.conf.json
case $dataCenter in
us-central1)
cp /opt/supportfiles/template-mysql-servers-us-central1 /etc/consul-template/templates/mysql_servers.tpl
;;
us-east1)
cp /opt/supportfiles/template-mysql-servers-us-east1 /etc/consul-template/templates/mysql_servers.tpl
;;
esac
cp /opt/supportfiles/template-mysql-users /etc/consul-template/templates/mysql_users.tpl
### Ensure that proxysql has started
while ! nc -z localhost 6032; do
sleep 1;
done
### Ensure that consul agent has started
while ! nc -z localhost 8500; do
sleep 1;
done
exec /usr/local/bin/consul-template --config=/etc/consul-template/config/consul-template.conf.json
This code is very similar to the ENTRYPOINT file used for ProxySQL in the sense that it checks which region the container is in, then moves configuration and template files into the appropriate locations. However, there is some additional logic here that checks that ProxySQL is up and listening on port 6032 and that consul (agent) is up and listening on port 8500. The reason for this is that consul-template needs to be able to communicate with both of these processes. You have no real assurance about the order in which the containers in a pod will start, so to avoid excessive errors in the consul-template log, I have it wait until it knows its dependent services are running.
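For reference, the consul-template.conf.json that the script copies into place is what ties each rendered template to a command that applies it to ProxySQL over the admin interface (which is why the mysql client is installed in this image). A minimal sketch, with the destination paths and admin credentials as assumptions, might look like this:
{
  "consul": {
    "address": "127.0.0.1:8500"
  },
  "template": [
    {
      "source": "/etc/consul-template/templates/mysql_servers.tpl",
      "destination": "/etc/consul-template/mysql_servers.sql",
      "command": "mysql -h 127.0.0.1 -P 6032 -u admin -padmin < /etc/consul-template/mysql_servers.sql"
    },
    {
      "source": "/etc/consul-template/templates/mysql_users.tpl",
      "destination": "/etc/consul-template/mysql_users.sql",
      "command": "mysql -h 127.0.0.1 -P 6032 -u admin -padmin < /etc/consul-template/mysql_users.sql"
    }
  ]
}
Each time the watched keys change, consul-template re-renders the destination file and runs the associated command, which pushes the new configuration into ProxySQL.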
Let’s go through our best practices checklist one more time against our consul-template container code.
- Container runs a single process:
- ENTRYPOINT calls the entrypoint.sh script, which does some conditional logic based on the regional location of the container, then ends by running consul-template. This means at the end of the process consul-template will be the only process running.
- Process should be PID 1:
- The command “exec” at the end of the ENTRYPOINT script will start consul-template as PID 1.
- Process should not be run as root:
- We switched to the consul user prior to starting the ENTRYPOINT.
- Log output should go to STDOUT:
- If you run consul-template using the command noted at the end of the ENTRYPOINT script, you’ll see log output goes to STDOUT.
- Should be as deterministic as possible:
- Just like ProxySQL and consul (agent), all the supporting files are packaged with the container. Yes, there is logic to determine what files should be used, but you have the assurance that the files won’t change unless you create a new version of the container image.
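To make the template side more concrete, here’s a simplified idea of what a mysql_servers.tpl could look like. The Consul key layout, hostgroup ID, and port below are assumptions; the real template depends entirely on how Orchestrator and Consul are storing your topology data.
DELETE FROM mysql_servers;
{{ range tree "mysql/servers/cluster1" }}
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '{{ .Key }}', 3306);
{{ end }}
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
When rendered, this produces a plain SQL file that consul-template then runs against the ProxySQL admin interface on port 6032, loading the new server list to runtime and persisting it to disk.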
Putting it all together
Okay, we have three containers representing the three processes we need to package together so ProxySQL can work as part of our HA stack. Now we need to put it all together in a pod so Kubernetes can run it against our resources. In my use case, I’m running this on GCP, meaning once my containers have been built they’ll need to be pushed up to the Google Container Registry. After this we can create our workload to run our pod and specify how many pods we want to run. Getting this up and running can be done with just a few short and simple steps:
- Create a Kubernetes cluster if you don’t already have one. This is what provisions the Compute Engine VMs the pods will run on.
- Push your three Docker images to the Google Container Registry. This makes the images available for use by the Kubernetes engine (see the command sketch after this list).
- Create your Kubernetes workload, which can be done simply via the user interface in the GCP console. All that’s required is selecting the latest version of the three containers you’ve pushed up to the registry, optionally applying some metadata like an application name, Kubernetes namespace, and labels, then selecting which cluster you want to run the workload on.
- The pod is started.
- The three containers will start. In Kubernetes, pods are atomic: either all the containers start without error, or the pod will not consider itself started.
- The consul-template container will poll consul (agent) and ProxySQL on their respective ports until it’s confirmed those processes have started, then consul-template will start.
- Consul-template will create the new SQL files meant to configure ProxySQL based on the contents of the Consul key / value store.
- Consul-template will run the newly created SQL files against ProxySQL via its admin interface.
- The pod is now ready to receive traffic.
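For completeness, pushing the three images to the Google Container Registry (mentioned in the second step above) is just the standard Docker workflow; the project ID, image names and tags below are assumptions:
docker build -t gcr.io/my-project/proxysql:1.0 ./proxysql
docker build -t gcr.io/my-project/consul-agent:1.0 ./consul-agent
docker build -t gcr.io/my-project/consul-template:1.0 ./consul-template
docker push gcr.io/my-project/proxysql:1.0
docker push gcr.io/my-project/consul-agent:1.0
docker push gcr.io/my-project/consul-template:1.0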
The YAML
During the process of creating your workload, or even after the fact, you’ll be able to see the YAML you’d normally have to create with standard Kubernetes deployments. Let’s have a look at the YAML that was created for my particular workload.
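The exact manifest will vary, but a trimmed sketch of a Deployment for this three-container pod looks something like the following. The names, labels, image paths and replica count here are assumptions for illustration, not the actual YAML generated for my workload:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxysql
  labels:
    app: proxysql
spec:
  replicas: 3
  selector:
    matchLabels:
      app: proxysql
  template:
    metadata:
      labels:
        app: proxysql
    spec:
      # All three containers share the pod's network namespace, which is why
      # consul-template can reach ProxySQL and consul (agent) over localhost.
      containers:
      - name: proxysql
        image: gcr.io/my-project/proxysql:1.0
        ports:
        - containerPort: 6032
        - containerPort: 6033
        - containerPort: 6034
      - name: consul-agent
        image: gcr.io/my-project/consul-agent:1.0
      - name: consul-template
        image: gcr.io/my-project/consul-template:1.0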