Running SQL Server 2019 on Google Kubernetes Engine
Have you tried running a SQL Server deployment on Google Kubernetes Engine yet? Here is a complete guide to creating and running SQL Server 2019 on GKE. We will create the following resources to host SQL Server on GKE:
- A namespace to host resources for SQL Server deployment
- A secret for the ‘sa’ account
- Persistent Volume Claims (PVCs) for volumes required for SQL Server setup and data files
- We will also look at resizing the data volumes if required
- SQL Server Deployment
- Service of type internal load balancer to expose the SQL Server deployment
Create a namespace for SQL Server
We are going to host the resources for the SQL Server instance in a separate namespace, so we will use the kubectl command line to create a new namespace:
kubectl create namespace rdbms
The following resources will be hosted in the namespace:
- Secret for the ‘sa’ account
- Persistent volume claims and persistent volumes for database storage
  - Base volume
  - Data volume for .mdf files
  - Log volume for .ldf files
- Deployment and pods for SQL Server
- Service (Load Balancer) for exposing SQL Server Deployment
Create a secret for ‘sa’ account
You will need to create a secret that will be used for the ‘sa’ account when running the SQL Server container. The password must meet SQL Server’s complexity requirements, otherwise the container will fail to start.
kubectl create secret generic sql-server-secret --from-literal=MSSQL_SA_PASSWORD="P@ssw0rd" -n rdbms
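To confirm the secret was created, and to decode the stored password if needed, you can run a quick check (a minimal sketch; the base64 decode assumes a Linux or Cloud Shell environment):
kubectl get secret sql-server-secret -n rdbms
kubectl get secret sql-server-secret -n rdbms -o jsonpath="{.data.MSSQL_SA_PASSWORD}" | base64 --decode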
Create volumes for SQL Server
Note: All Persistent Volume Claims (PVCs) are created in the “rdbms” namespace from the create a namespace step. Make sure to change the namespace in the “kubectl apply” commands if you are using a different one.
First, we will create a persistent volume claim of 2 GB for the base volume, where the SQL Server directories will be created. If you wish to have a larger persistent disk allocated for this claim, update the YAML file. Below is the YAML file for your reference.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-base-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
If there are no changes to the YAML file, you can execute the below command to claim the base volume.
kubectl apply -f https://raw.githubusercontent.com/sa-proj/proj-containers/main/sql-server-deployment/mssql-base-volume-pvc.yaml -n rdbms
Next, we will create a persistent volume claim of 10 GB for the data volume, where the SQL Server MDF files will be placed. If you wish to have a larger persistent disk allocated for this claim, update the YAML file. Below is the YAML file for your reference.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
If there are no changes to the YAML file, you can execute the below command to claim the data volume.
kubectl apply -f https://raw.githubusercontent.com/sa-proj/proj-containers/main/sql-server-deployment/mssql-data-volume-pvc.yaml -n rdbms
Finally, we will create a persistent volume claim of 10 GB for the log volume, where the SQL Server LDF files will be placed. If you wish to have a larger persistent disk allocated for this claim, update the YAML file. Below is the YAML file for your reference.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-log-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
If there are no changes to the YAML file, you can execute the below command to claim the log volume.
kubectl apply -f https://raw.githubusercontent.com/sa-proj/proj-containers/main/sql-server-deployment/mssql-log-volume-pvc.yaml -n rdbms
To list all claims and verify that their status is Bound, run the below kubectl get command:
kubectl get pvc -n rdbms
What if you wish to expand the volume size later?
You can expand a PVC in Google Kubernetes Engine. Check that your storage class configuration has allowVolumeExpansion: true.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  …
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
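To check whether your storage class already allows expansion, you can list the storage classes and, if needed, patch the class. This is a sketch that assumes the default GKE storage class is named “standard”; adjust the name to match your cluster.
kubectl get storageclass -o custom-columns=NAME:.metadata.name,EXPANSION:.allowVolumeExpansion
kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'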
Once the storage class is updated with allowVolumeExpansion: true, you can expand a volume by changing the capacity of the respective PVC. Update the field spec.resources.requests.storage with the desired capacity you want to expand to. Let’s expand the base volume to 5 GB as an example.
kubectl edit pvc mssql-base-volume -n rdbms
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Update the field with the desired size of 5 GB:
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Once the PVC capacity has increased, restart the respective pod so that the volume details are updated in the pod. For more details on resizing volumes, check the official documentation from Google.
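As an alternative to editing the PVC interactively, you can apply the same change with kubectl patch and then restart the deployment so the pod picks up the resized volume. This is a sketch that assumes the mssql-db-deployment created later in this guide.
kubectl patch pvc mssql-base-volume -n rdbms -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'
kubectl rollout restart deployment mssql-db-deployment -n rdbms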
Create SQL Server Deployment
Since my GKE cluster is an internal (private) cluster, I was unable to pull the image directly from the mcr.microsoft.com registry. I had to use Cloud Shell to pull the image and push it to Google Container Registry so that my deployment could pull the image and create a pod.
1. Run the docker pull command to download the image locally in Cloud Shell
docker pull mcr.microsoft.com/mssql/server:2019-latest
2. Tag the image to include gcr.io and the Google project ID where your cluster is located
docker tag mcr.microsoft.com/mssql/server:2019-latest gcr.io/<PROJECT_ID>/server:2019-latest
3. Push the image to Google Container Registry
docker push gcr.io/<PROJECT_ID>/server:2019-latest
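To verify that the image landed in Google Container Registry before updating the deployment, you can list the repository (a quick check; replace <PROJECT_ID> with your project ID):
gcloud container images list --repository=gcr.io/<PROJECT_ID>
gcloud container images list-tags gcr.io/<PROJECT_ID>/server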
4. Download the YAML file and use an editor to update the image in the YAML file along with the licensing information. For demo purposes, I am using Developer edition.
wget https://raw.githubusercontent.com/sa-proj/proj-containers/main/sql-server-deployment/mssql-db-deployment.yaml
Below is the YAML file for your reference.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-db-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      terminationGracePeriodSeconds: 10
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sql-server-secret
                  key: MSSQL_SA_PASSWORD
            - name: MSSQL_DATA_DIR
              value: /var/opt/mssql/data
            - name: MSSQL_LOG_DIR
              value: /var/opt/mssql/log
          volumeMounts:
            - name: mssql-base-volume
              mountPath: /var/opt/mssql
            - name: mssql-log-volume
              mountPath: /var/opt/mssql/log
            - name: mssql-data-volume
              mountPath: /var/opt/mssql/data
      volumes:
        - name: mssql-base-volume
          persistentVolumeClaim:
            claimName: mssql-base-volume
        - name: mssql-data-volume
          persistentVolumeClaim:
            claimName: mssql-data-volume
        - name: mssql-log-volume
          persistentVolumeClaim:
            claimName: mssql-log-volume
If there are no changes and your cluster can download the image from a public repository, you can use the below “kubectl apply” command to create the SQL Server deployment:
kubectl apply -f https://raw.githubusercontent.com/sa-proj/proj-containers/main/sql-server-deployment/mssql-db-deployment.yaml -n rdbms
If you modified the local file in Cloud Shell, use the below “kubectl apply” command to create the SQL Server deployment:
kubectl apply -f mssql-db-deployment.yaml -n rdbms
Check whether your deployment is running and the containers are up:
kubectl get deployments -n rdbms
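You can also wait for the rollout to finish instead of polling the deployment status; for example:
kubectl rollout status deployment/mssql-db-deployment -n rdbms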
Check if the underlying pod is running fine
kubectl get pods -n rdbms
Copy the pod name from the output of the above command
If there are any errors and the pod fails to start, use the describe command to get more details:
kubectl describe pods <<pod name>> -n rdbms
To check the logs for the pod, run the below kubectl logs command:
kubectl logs <<pod name>> -n rdbms
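If the container keeps restarting, it can help to follow the log stream or to look at the previous container’s logs (standard kubectl flags):
kubectl logs -f <<pod name>> -n rdbms
kubectl logs <<pod name>> -n rdbms --previous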
Expose the SQL Server deployment
Next, we are going to expose the deployment by creating a service of type internal load balancer. The YAML for the service is given below. If you want to change the port on which SQL Server will accept TCP connections, update the YAML file.
apiVersion: v1
kind: Service
metadata:
  name: mssql-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
If there are no changes to the YAML file, you can use the below “kubectl apply” command to create the service and expose the SQL Server deployment:
kubectl apply -f https://raw.githubusercontent.com/sa-proj/proj-containers/main/sql-server-deployment/mssql-service.yaml -n rdbms
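Once the service is created, the internal load balancer IP you will need in the next section appears in the EXTERNAL-IP column of the service listing:
kubectl get service mssql-service -n rdbms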
Testing Connectivity
- Grab the IP address of the internal load balancer (the EXTERNAL-IP shown by the kubectl get service command above).
- Add network tags to the Kubernetes cluster nodes and add a firewall rule in the network to allow port 1433 from IP addresses within your VPC network or from the source tag of a VM in that VPC network (see the example rule below).
- Connect to the VM instance, install either sqlcmd or SSMS, and try connecting to SQL Server using the IP address of the internal load balancer, specifying ‘sa’ as the username and the password set for ‘sa’ at the time of secret creation (see the sample connection below).
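Below is a sketch of the firewall rule and the connection test. The rule name and placeholders (<VPC_NETWORK>, <SOURCE_CIDR>, <NODE_NETWORK_TAG>, <LOAD_BALANCER_IP>, <SA_PASSWORD>) are illustrative assumptions; substitute the values from your environment and the ‘sa’ password stored in the secret.
gcloud compute firewall-rules create allow-mssql-1433 --network=<VPC_NETWORK> --allow=tcp:1433 --source-ranges=<SOURCE_CIDR> --target-tags=<NODE_NETWORK_TAG>
sqlcmd -S <LOAD_BALANCER_IP>,1433 -U sa -P '<SA_PASSWORD>' -Q "SELECT @@VERSION"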
Congratulations! You have successfully connected to your SQL Server 2019 instance hosted on Google Kubernetes Engine.