Container orchestrators and persistent volumes: part 2: Kubernetes
In my previous post, I showed how external persistent volumes are handled by DC/OS on AWS and how persistent services recover from a node failure. Now let's review how Kubernetes handles similar situations, this time on Google Cloud.
Kubernetes on GCP in 10 minutes

I used this tutorial from the Kubernetes documentation to bring up a small cluster on GCP. TL;DR: run the following command from your Cloud Shell and you should have a working cluster in about 5 minutes: [code language="bash"] gabocic@kubernetes-196513:~$ curl -sS https://get.k8s.io | bash [/code] I then installed the latest version of kubectl by downloading it directly from storage.googleapis.com. See this link for more details about installing the Kubernetes CLI.
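As a sketch, the direct download looks like the snippet below. The version number is an assumption (pick whichever release matches your cluster); the storage.googleapis.com path is the standard location for kubectl release binaries:

```shell
# Sketch of installing kubectl from the Kubernetes release bucket.
# KUBECTL_VERSION is an assumption for illustration -- substitute the release you need.
KUBECTL_VERSION="v1.9.0"
KUBECTL_URL="https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
echo "Would download: ${KUBECTL_URL}"
# Uncomment to actually download and install:
# curl -LO "${KUBECTL_URL}" && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```

Swap `linux/amd64` for your platform if you are not on a 64-bit Linux shell.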
A quick recap

In Part 1, I created a persistent service (a MySQL instance) on DC/OS using external persistent volumes. This means the data was not actually stored on the node where the container was running, but on an AWS EBS volume. We also saw that upon node failure, the volume was transparently moved and mounted on the new node running the service. Let's run a similar test on Kubernetes and see how it reacts.
Deploying a MySQL service

I chose to use the web GUI to interact with the cluster, so I first launched kubectl proxy from Google Cloud Shell and used the web preview feature to make the GUI available from my laptop. Additional details about how to do this can be found here. For those new to Kubernetes, the Create button in the upper-right corner lets you create any Kubernetes resource by providing the corresponding YAML. To create the MySQL service, I used this example YAML. I'm not going to describe it in detail, but we are basically creating a Deployment for our MySQL instance, a PersistentVolumeClaim, and a Service to access the server. I copied the storage-related sections below so we can take a closer look:
[code language="yaml"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
[/code]

From the snippet above, you can infer that we are requesting a persistent 20Gi volume through a claim named mysql-pv-claim. Further down in the YAML, we can see that the resulting persistent volume should be mounted on /var/lib/mysql:
[code language="yaml"]
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
[/code]

After uploading the YAML object descriptions, you can see that our Deployment, PersistentVolumeClaim and Service were created and are healthy. To see which Kubernetes node our MySQL Docker container is running on, check the mysql pod details (kubernetes-minion-group-tj5w, in this case). To confirm the server is reachable and to create some test data, I connected to the instance using the kubectl syntax below, which creates a temporary pod running the MySQL client: [code language="bash"] gabocic@kubernetes-196513:~$ ./kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword [/code] Now let's see what happens when kubernetes-minion-group-tj5w is stopped from the GCP console. As one would expect, all MySQL components are marked as unhealthy, but the pod was not migrated to a healthy node. Why? As with DC/OS on AWS, the Kubernetes nodes were created as part of an instance group, and stopping one triggers a healing operation. To prevent this, I disabled autoscaling for the corresponding instance group and shut down the instance again. If you are wondering whether the disk can be detached from a stopped machine, the answer is no, it can't, so I had to destroy the instance for the service migration to complete successfully. You can see in the screenshots below that, after recovery, mysql is running on node kubernetes-minion-group-tvf9 and that the persistent volume created for the service is also attached to this node.
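If you'd rather verify the recovery from the CLI than from the screenshots, a quick sketch is below. The resource names follow the YAML above, and the kubectl subcommands are standard, but the actual output will depend on your cluster:

```shell
# Check which node hosts the mysql pod, and which PersistentVolume backs it.
# Falls back to a message when no cluster is reachable.
for cmd in "kubectl get pods -o wide" \
           "kubectl get pvc mysql-pv-claim" \
           "kubectl get pv"
do
    echo "+ ${cmd}"
    ${cmd} 2>/dev/null || echo "  (needs a running cluster)"
done
```

The NODE column of `kubectl get pods -o wide` should show the new minion after failover, and `kubectl get pvc mysql-pv-claim` should still report the claim as Bound.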