Prerequisites
- Basic Kubernetes knowledge: We are going to assume you have at least a basic knowledge of Kubernetes, including its concepts and file structures (e.g. pods, deployments and services). If not, we suggest you start by reading Introduction to Kubernetes concepts.
- Existing Kubernetes cluster: You will need access to a Kubernetes cluster via kubectl. This guide follows on from our previous learn guide, Getting started with Kubernetes.
- Rails app: We will be using K8s Sample Rails App, but feel free to deploy your own Rails app and use our repo as a reference for files such as the Dockerfile and config/deploy/*.
- Container registry: You will need to Dockerize your Rails app and push it to a remote container registry. We cover this in our Dockerize a Rails app guide.
- Cluster config: You can also view or use our global cluster config.
What this learn guide will cover
- Configure an Ingress Controller using Traefik (pronounced traffic).
- Launch a single MySQL service within the Kubernetes cluster (not recommended for production).
- Deploy the Rails app to the cluster.
- Update and redeploy the Rails app with zero downtime.
Considerations for a production environment
- Traefik will be set up as a single instance, but for production it would be better to run a highly available, multi-pod setup.
- The MySQL setup is for demonstration purposes only and should not be used like this in a production environment because:
- It is not highly available and the data is not shared between the nodes, which means losing a node could lose your data. It would be better to run a Galera cluster inside the Kubernetes cluster, but that is beyond the scope of this guide; if you're interested we already have a guide for Creating a MariaDB Galera Cluster (outside the Kubernetes cluster).
- We are going to access the MySQL service as the root user. This is technically fine as the MySQL service is not exposed to the outside world and we have no other apps running inside the cluster. If we did have other apps inside the same cluster then we would make use of namespaces and app-specific authentication.
Note: In the Kubernetes documentation section Configuration Best Practices they suggest grouping related objects into a single YAML configuration file, but for clarity we have split them into separate YAML files.
Get started - Global Kubernetes configuration
We are going to store all of our global cluster configuration files in ~/Code/kube-system.
Configure Traefik as an Ingress Controller
Create a docker-registry secret to allow your cluster to access your private Docker Hub repository.
kubectl create secret docker-registry dockerhub-credentials \
--docker-server=https://index.docker.io/v1/ \
--docker-username=<your_username> \
--docker-password='<your_password>' \
--docker-email=<your_email>
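You can confirm the secret was created (this assumes you are working in the default namespace, which is where our app's pods will run):
kubectl get secret dockerhub-credentials --output=yaml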
Create the file ~/Code/kube-system/traefik/01-rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-lb
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-lb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-lb
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-lb
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-lb
  namespace: kube-system
Create the file ~/Code/kube-system/traefik/02-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
data:
  traefik.toml: |
    # traefik.toml
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
      compress = true
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
      compress = true
        [entryPoints.https.tls]
    [acme]
    email = "hello@example.com"
    storageFile = "/acme/acme.json"
    entryPoint = "https"
    caServer = "https://acme-staging.api.letsencrypt.org/directory"
    onDemand = true
    onHostRule = true
      [acme.httpChallenge]
      entryPoint = "http"
    [[acme.domains]]
      main = "traefik.public.k8s.example.com"
Create the file ~/Code/kube-system/traefik/03-ingress-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: traefik-ingress-controller
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1 # single instance, as noted in the considerations above
  revisionHistoryLimit: 0
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
        configmap-version: "2"
    spec:
      serviceAccountName: traefik-ingress-lb
      terminationGracePeriodSeconds: 60
      volumes:
        - name: config
          configMap:
            name: traefik-conf
        - name: acme
          hostPath:
            path: /srv/configs # directory on the host; acme.json is created inside it
      initContainers:
        - name: init-traefik
          image: busybox
          command: ['sh', '-c', 'touch /acme/acme.json ; chmod 600 /acme/acme.json']
          volumeMounts:
            - mountPath: "/acme"
              name: "acme"
      containers:
        - image: containous/traefik:1.7 # pinned to 1.x; the flags and TOML above use the 1.x syntax
          name: traefik-ingress-lb
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/config"
              name: "config"
            - mountPath: "/acme"
              name: "acme"
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
            - containerPort: 8080
          args:
            - --configfile=/config/traefik.toml
            - --web
            - --kubernetes
            - --logLevel=DEBUG
Create the file ~/Code/kube-system/traefik/04-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  labels:
    k8s-app: traefik-ingress-lb
spec:
  type: LoadBalancer
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      name: http
    - port: 443
      name: https
  # externalIPs: # Uncomment these lines if you need external IPs to access the cluster
  #   - 127.0.0.1 # e.g. self-hosted GitLab
  #   - 127.0.0.2 # e.g. self-hosted GitLab runner
Apply it to the cluster:
kubectl apply -f ~/Code/kube-system/traefik/ -n kube-system
Note: The CA (Let's Encrypt) is set to use the staging server to ensure we don't accidentally rate-limit ourselves. Remove the following line from ~/Code/kube-system/traefik/02-config.yaml if you would like to use the production server.
caServer = "https://acme-staging.api.letsencrypt.org/directory"
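You can verify Traefik is up and running using the label we set in the deployment above:
kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb
kubectl get service traefik -n kube-system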
Add a MySQL database service
Create a secret for the MySQL password:
kubectl create secret generic mysql-pass --from-literal=password=<your_password>
Create the file ~/Code/kube-system/mysql/01-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-sample-app-mysql
  labels:
    app: k8s-sample-app
spec:
  ports:
    - port: 3306
  selector:
    app: k8s-sample-app
    tier: mysql
  clusterIP: None
Create the file ~/Code/kube-system/mysql/02-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: k8s-sample-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Create the file ~/Code/kube-system/mysql/03-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    app: k8s-sample-app
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  hostPath:
    path: /data/pods/mysql/datadir
Create the file ~/Code/kube-system/mysql/04-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: k8s-sample-app-mysql
  labels:
    app: k8s-sample-app
spec:
  selector:
    matchLabels:
      app: k8s-sample-app
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: k8s-sample-app
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Create it on the cluster:
kubectl create -f ~/Code/kube-system/mysql/
Ensure the PersistentVolume was created and bound to the claim. Please note it can take a couple of minutes for the PV and PVC to bind.
kubectl get pv
You should see a response like this:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mysql-pv 10Gi RWO Retain Bound default/mysql-pv-claim 2m
Verify the pod is running (again, it might take a couple of minutes):
kubectl get pods
Example response:
NAME READY STATUS RESTARTS AGE
k8s-sample-app-mysql-5c7cc5fdf9-rdg9v 1/1 Running 0 4m
You can run an interactive MySQL client to verify it is working:
kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h k8s-sample-app-mysql -p<your_password>
# you can then run normal MySQL commands like:
mysql> show databases;
Dockerize
You now need to Dockerize your Rails app so that it can be deployed to Kubernetes.
DNS records
You have two options here:
- Set up multiple A records per app/project domain (e.g. k8s-sample-app.example.com) to point to all of the cluster node IPs.
- Set up multiple A records for a cluster domain (e.g. public.k8s.example.com) and then create a CNAME record per app to point to the cluster domain.
Although option 1 is slightly quicker, we will go with option 2 as it is the more manageable option; as the cluster grows you will only need to add one CNAME record per app as opposed to four A records (one per node). Also, if you ever add, remove or change a node you would only have to update the node IPs for the cluster domain.
Your DNS records should now look something like this (excluding the node-specific records from the previous guide):
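- public.k8s.example.com: one A record per cluster node, each pointing at that node's IP
- traefik.public.k8s.example.com: CNAME pointing to public.k8s.example.com (the domain used in the Traefik ACME config above)
- k8s-sample-app.example.com: CNAME pointing to public.k8s.example.com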
Project-specific Kubernetes configuration
We will keep our Kubernetes deployment files inside our repository under config/deploy so that they can be checked into source control (Git), as recommended by the Kubernetes docs.
Create the app-specific secrets (Rails environment variables):
kubectl create secret generic k8s-sample-app-secrets --from-literal=secret_key_base='<your_really_secure_key_base>'
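If you need to generate a new key base first, and your app is on Rails 5 or later, you can do so from the app directory:
cd ~/Code/k8s-sample-app
bin/rails secret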
You can inspect a secret at any time, for example:
kubectl get secrets mysql-pass --output=yaml
Create the file ~/Code/k8s-sample-app/config/deploy/01-config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-sample-app-config
data:
  APP_NAME: k8s-sample-app
  PORT: "3000"
  RACK_ENV: production
  MYSQL_USER: root
  MYSQL_DATABASE: k8s_sample_app_production
Create the file ~/Code/k8s-sample-app/config/deploy/02-deployment-web.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-sample-app-web
  labels:
    name: k8s-sample-app-web
spec:
  replicas: 1
  revisionHistoryLimit: 0
  strategy:
    type: RollingUpdate # default value, but explicitly set for demo
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        name: k8s-sample-app-web
    spec:
      imagePullSecrets:
        - name: dockerhub-credentials
      initContainers:
        - name: k8s-sample-app-web-migrate
          image: <your_username>/k8s-sample-app:v0.1
          args: ["bundle", "exec", "rake", "db:create", "db:migrate"]
          envFrom: &envfrom1
            - configMapRef:
                name: k8s-sample-app-config
          env: &env1
            - name: SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  name: k8s-sample-app-secrets
                  key: secret_key_base
            - name: MYSQL_HOST
              value: k8s-sample-app-mysql
            - name: MYSQL_PW
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
      containers:
        - name: k8s-sample-app-web
          image: <your_username>/k8s-sample-app:v0.1
          args: ["bundle", "exec", "rails", "server", "-p", "3000"]
          resources:
            requests:
              cpu: 200m
              memory: 200Mi
          ports:
            - containerPort: 3000
          livenessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 5
            timeoutSeconds: 1
          readinessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 5
            timeoutSeconds: 1
          envFrom: *envfrom1
          env: *env1
Create the file ~/Code/k8s-sample-app/config/deploy/03-service.yml
apiVersion: v1
kind: Service
metadata:
  name: k8s-sample-app-web
  labels:
    name: k8s-sample-app-web
spec:
  selector:
    name: k8s-sample-app-web
  ports:
    - port: 3000
      name: k8s-sample-app-web
Create the file ~/Code/k8s-sample-app/config/deploy/04-ingress.yml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-sample-app
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
    - host: k8s-sample-app.example.com
      http:
        paths:
          - backend:
              serviceName: k8s-sample-app-web
              servicePort: 3000
Deploy to the cluster
kubectl apply -f ~/Code/k8s-sample-app/config/deploy/
Check the pod status:
kubectl get pods
The response should list our MySQL service and our Rails app. Again, this might take a couple of minutes while the pod is created and initialised (the init container runs the database migrations first).
NAME READY STATUS RESTARTS AGE
k8s-sample-app-mysql-5c7cc5fdf9-r7vds 1/1 Running 0 1h
k8s-sample-app-web-6675d7fb68-4hrgf 1/1 Running 0 1m
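Assuming the DNS records from earlier have propagated, you can check the app responds through Traefik (the -k flag skips certificate verification, which you will need while the Let's Encrypt staging CA is in use):
curl -k https://k8s-sample-app.example.com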
Make changes and redeploy
Make changes to the local repo:
cd ~/Code/k8s-sample-app
# e.g. change the text in app/views/pages/home.html.erb
Build a new Docker image (with a new version/tag):
docker build -t <your_username>/k8s-sample-app:v0.2 .
Push the new image to Docker Hub:
docker push <your_username>/k8s-sample-app:v0.2
Update the two image values in ~/Code/k8s-sample-app/config/deploy/02-deployment-web.yml (one for the init container, one for the web container):
# from
image: <your_username>/k8s-sample-app:v0.1
# to
image: <your_username>/k8s-sample-app:v0.2
Apply the changes to the cluster:
kubectl apply -f ~/Code/k8s-sample-app/config/deploy/
When checking the pods this time you should see a new pod running and the old pod terminating:
kubectl get pods
NAME READY STATUS RESTARTS AGE
k8s-sample-app-mysql-5c7cc5fdf9-r7vds 1/1 Running 0 1h
k8s-sample-app-web-6675d7fb68-4hrgf 1/1 Running 0 1m
k8s-sample-app-web-6675d7fb68-7wjv2 1/1 Terminating 0 5m
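You can also watch the rollout directly; this command blocks until the deployment completes:
kubectl rollout status deployment/k8s-sample-app-web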
Scaling our deployment
Deployments have a default .spec.strategy.type of "RollingUpdate", but we explicitly set it for the purposes of this guide. You will notice we also set .spec.strategy.rollingUpdate.maxUnavailable to 0 (zero) to ensure our app doesn't become unavailable during deployments, since we set .spec.replicas to 1; maxUnavailable defaults to 25% (rounded down) of the desired pods, which would be fine when working with more than one pod.
Method 1. Using kubectl scale
kubectl scale deployment --replicas 2 k8s-sample-app-web
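Then confirm the new replica count (the second pod may take a moment to become ready):
kubectl get deployment k8s-sample-app-web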
Method 2. Changing our config file
Alternatively you could edit ~/Code/k8s-sample-app/config/deploy/02-deployment-web.yml, change replicas: 1 to replicas: 2, and then apply it to the cluster:
kubectl apply -f ~/Code/k8s-sample-app/config/deploy/
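Either way, listing the pods by label (the label comes from the deployment template above) should now show two web pods:
kubectl get pods -l name=k8s-sample-app-web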