In this tutorial, we are going to implement a system for CI/CD pipelines using Jenkins and Harbor running on a Kubernetes cluster.

  • Jenkins: One of the most used and popular open source solutions to facilitate CI/CD.
  • Harbor: A CNCF project that aims to offer an open source solution for rolling up your own container registry as an alternative to proprietary solutions such as Amazon ECR and Google Container Registry.

By the end of this tutorial, you will have two Kubernetes clusters: one for your operational CI/CD system to run Jenkins and Harbor, and the other for your team’s staging environment running your backend applications.

⚠️ During this tutorial, you will incur fees in your Civo account for the period that you keep your Kubernetes clusters running. There is a clean-up process at the end of this tutorial to remove these resources and stop the charges.

Prerequisites

Before starting this tutorial, you will need the following in place:

  • A Civo account: If you don't have one yet, you can sign up here.
  • A computer with the following tools installed: Civo CLI, kubectl, Docker Desktop, and Helm (each is covered below).
  • A GitHub account: In this tutorial, you will store your application in a GitHub repository and set it up so that anytime there are commits pushed into the repository, they will be built on your Jenkins server.

Civo CLI

To check if you have Civo CLI installed, run the following command on your terminal:

$ civo version

The output should show you the version of the Civo CLI that is currently installed. If it doesn't, then you should first install Civo CLI. Check the documentation page on how to install it.

After you have the Civo CLI installed, run:

$ civo apikey show
+---------+----------------------------------------------------+
| Name    | Key                                                |
+---------+----------------------------------------------------+
| tempKey | your-key-here                                      |
+---------+----------------------------------------------------+

If the output tells you that no API key has been supplied, follow the instructions in the Civo CLI documentation to set up the API key.

Kubectl

To check if you have kubectl installed, run the following command. It should show you the version of your installed kubectl:

$ kubectl version --client  

Docker Desktop

Next, to check if you have Docker Desktop installed, run the following command on your terminal:

$ docker version

It should show you complete information about the Docker client and server running on your computer. You will use the Docker CLI with Docker Desktop to build an image and push it to your Harbor server.

You will use Helm to install packages on your Kubernetes cluster. You can check if you have it on your machine by running helm version. If you don’t have it yet, check the instructions on the Helm documentation page.
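To confirm all four prerequisite tools at once, you can run a small shell loop (a convenience sketch, not part of the original setup):

```shell
# Check that each prerequisite CLI is on the PATH; print a warning otherwise.
for tool in civo kubectl docker helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: NOT FOUND - install it before continuing"
  fi
done
```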

Initial Setup

Using your terminal, create a new directory to work on this tutorial.

$ mkdir civo-jenkins-harbor
$ cd civo-jenkins-harbor
$ export PROJECT_DIR=$(pwd)

The PROJECT_DIR environment variable will be used often to move between directories in this tutorial.

Set up Jenkins on Kubernetes

To follow along with this tutorial, you will need to ensure that Jenkins is deployed on your Kubernetes cluster. I have put together a tutorial on how you can do this here. By the time you finish that tutorial, you will have one cluster with Jenkins installed on that cluster. Also, your kubectl configuration will point to the context of that cluster.

You can check the name of the context by running the following command:

$ kubectl config current-context

Let's save it into a variable to make it easier for you to switch to that context later on:

$ OPS_CONTEXT=$(kubectl config current-context)

Install required Jenkins plugins

For this tutorial, you need two Jenkins plugins:

  • GitHub plugin
  • Kubernetes CLI plugin

You need the first one to connect your GitHub repo to your Jenkins server, and the second one so that you can use a Kubernetes config file as credentials later on when you deploy your Docker image to your staging cluster.

To go to the Jenkins plugin page, replace <jenkins-base-url> with your actual Jenkins base URL in the following URL and open it in your browser:

http://<jenkins-base-url>/manage/pluginManager/available

Once the page is loaded, search for GitHub. You should see a plugin named GitHub at the top of the search results. Tick the checkbox at the left of the plugin and then click the Install button at the top right corner. Wait until all the progress items show successful status.

GitHub Progress Items

GitHub Download Progress

By installing the GitHub Jenkins plugin, your Jenkins server now has a webhook with the URL http://<jenkins-base-url>/github-webhook/. Please be aware that this webhook is publicly accessible, so in a real production environment, you need to add authentication or source IP verification.

You can find more details about the GitHub Jenkins plugin on its documentation page. With that configured, you can add the Jenkins webhook to your GitHub repository.

Go back to the available plugins page and search for Kubernetes CLI. Install the plugin.

Install Kubernetes CLI Plugin

Install Kubernetes CLI Plugin Download Progress

With that, your Jenkins server now has the required plugins for this tutorial.

Creating a GitHub Repo for Jenkins

Create a Repo for your Application

Now that you have Jenkins configured, it’s time to create an application that you will build in Jenkins. Back in your terminal, create a new directory for your application:

$ mkdir $PROJECT_DIR/my-app
$ cd $PROJECT_DIR/my-app

Using your web browser, access the create-repo page on GitHub. Fill in your repository name, select Private as your repo type, and click the Create repository button.

Create a Repo for your Application

A new page will load, and there, you will find the instructions for quick setup. Click the copy icon under the create a new repository on the command line section header.

Create a new repository on the command line

Paste the commands into your terminal and press Enter to execute them. You will see output like the following lines in your terminal.

$ echo "# my-app" >> README.md
$ git init
$ git add README.md
$ git commit -m "first commit"
$ git branch -M main
$ git remote add origin git@github.com:rizaldim/my-app.git
$ git push -u origin main
Initialized empty Git repository in /Users/rizaldim/repo/civo-jenkins-harbor/my-app/.git/
[main (root-commit) fc464c6] first commit
 1 file changed, 1 insertion(+)
 create mode 100644 README.md
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 224 bytes | 224.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
To github.com:rizaldim/my-app.git
 * [new branch]     main -> main
branch 'main' set up to track 'origin/main'.

Go back to your browser and refresh the page. You should now see there is a README.md file inside your repository.

README file in repository

Before proceeding to the next section, run the following command to save your application's GitHub repository URL into a shell variable named REPO_URL. Replace <username> and <repo-name> with your GitHub username and repository name.

$ REPO_URL=https://github.com/<username>/<repo-name>
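Later, the Jenkins job configuration will ask for the repository's SSH address rather than the HTTPS one. If you want to derive it from REPO_URL instead of copying it from the GitHub page, a small sed one-liner works (a convenience sketch; REPO_SSH is my own variable name, shown here with the example repo from above):

```shell
# Derive the SSH remote form (git@github.com:user/repo.git) from the HTTPS URL.
REPO_URL=https://github.com/rizaldim/my-app   # replace with your own repository
REPO_SSH=$(echo "$REPO_URL" | sed -E 's#^https://github\.com/#git@github.com:#').git
echo "$REPO_SSH"
```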

Set up SSH Keys in the GitHub Repo

Next, to allow access to the GitHub repo from the Jenkins server, you need to create an SSH public and private key pair specific to the Jenkins server. Create a new directory named ssh-keys and change into it:

$ mkdir $PROJECT_DIR/ssh-keys
$ cd $PROJECT_DIR/ssh-keys

Run the following command to create the keys. Replace your_email@example.com with your actual email:

$ ssh-keygen -t ed25519 -C "your_email@example.com"

You will be asked where to save the key. Type in id_jenkins. Leave the passphrase empty and press Enter. Press Enter one more time to confirm. The output should be similar to the following:

Generating public/private ed25519 key pair.
Enter file in which to save the key (/Users/rizaldim/.ssh/id_ed25519): id_jenkins
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in id_jenkins
Your public key has been saved in id_jenkins.pub
The key fingerprint is:
SHA256:14boNEjQCPWhurRpaJn/Itx4rOFHakFd9zPs2FBOD7I zal@fastmail.com
The key's randomart image is:
+--[ED25519 256]--+
|  .ooo.          |
|   .=.+ +        |
|  . o + O o      |
| . o . E B +     |
|. o   . S = o    |
| +o=   + = .     |
|o=%    .         |
|oBo*             |
|.o=.o.           |
+----[SHA256]-----+

If you run ls, you should see 2 new files in the ssh-keys directory: id_jenkins.pub and id_jenkins. The first one is the public key, and the other one is the private key.

Copy the content of the id_jenkins.pub file and then go to your repository settings page in GitHub by opening the following URL in your browser. Replace <repo-url> with your GitHub repository URL.

<repo-url>/settings/keys

Once the page is opened, click the Add deploy key button.

Add deploy key initial page

Type in jenkins as the title. Paste the content of the id_jenkins.pub file into the Key field. Click Add key.

Add deploy key process

With that, you can use your private key to access your GitHub repository from your Jenkins installation.

Add Jenkins Webhook to GitHub Repo

Replace <username> and <repo-name> in the following URL with your GitHub username and repository name, and open it in your browser to access your repository's webhook settings:

https://github.com/<username>/<repo-name>/settings/hooks

Click the Add webhook button.

Add webhooks initial page

Replace <jenkins-base-url> with your Jenkins base URL in the following URL and paste it into the Payload URL field:

http://<jenkins-base-url>/github-webhook/

Click the Add webhook button.

Add webhooks process

After you add the webhook, GitHub will try to ping the webhook by sending a test payload, and you will see whether the ping is successful in the webhook list. A successful ping will look like the picture below:

Successful webhooks addition

Create a new credential in Jenkins

Before you can test the webhook, you need to create a new pipeline job in Jenkins. To create a new pipeline, you need to create a new credential first. This credential will then be used in your pipeline to authenticate to GitHub before accessing your repository.

Replace <jenkins-base-url> with your actual Jenkins base URL in the following URL and open it in your browser to access your Jenkins global credentials page.

http://<jenkins-base-url>/manage/credentials/store/system/domain/_/

Creating new credentials in Jenkins

Click the Add Credentials button. In the New credentials form, under the Kind field choose SSH Username with private key. Under the ID field type in github-key. Under the Username field type in your GitHub username.

New Credentials in Jenkins

Click the Enter directly radio button and then click Add. A text box will appear. Paste the content of the id_jenkins file that you created earlier into the text box. It should be in the form of:

-----BEGIN OPENSSH PRIVATE KEY-----
Long-random-text
-----END OPENSSH PRIVATE KEY-----

Click Create.

Creating new private key

The newly created credential is now in your credentials list.

Global credentials with new credential

Before you can use this credential, you need to change the settings for the key verification configuration. Replace <jenkins-base-url> with your actual Jenkins URL in the following URL and open it in your browser to access your Jenkins security page.

http://<jenkins-base-url>/manage/configureSecurity/

Scroll down the page until you find the Git Host Key Verification Configuration section. Choose Accept first connection for Host Key Verification Strategy. Click Save.

Git Host Key Verification Configuration

With that, you can now use the newly created credentials in your Jenkins pipeline.

Create a job for your application

With the credential configured, it’s time to add a new pipeline job in Jenkins. To do this, replace <jenkins-base-url> with your actual Jenkins URL in the following URL and open it in your browser:

http://<jenkins-base-url>/view/all/newJob

A page to create a new Jenkins pipeline will be opened. Type in my-app-staging for the item name, choose Pipeline as the item type, and then click OK.

Enter Item Name

A new page for configuring your newly created job will open. Under the General section, tick the GitHub project checkbox and then type in your GitHub repo SSH address. You can find the address in your GitHub repo homepage.

GitHub SSH Address

GitHub SSH Address Project URL

Under the Build triggers section, tick the GitHub hook trigger for GITScm polling checkbox.

Build Triggers

Under the Pipeline section, choose Pipeline script from SCM for the definition. Then choose Git as SCM type. Paste your GitHub repo SSH address into the Repository URL field.

Repository URL in GitHub

Jenkins will show an error message telling you that it fails to connect to the GitHub repo. To fix this error, click on the Credentials dropdown, and it will show you your GitHub username as one of the credentials. This is the credential that you just created in the previous section. Click that item. The error message should disappear. Change the branch specifier to */main. Click Save.

Jenkins Credentials

Trigger build by pushing to GitHub

Now, to test all the configurations above (webhook, credentials, and job pipeline), you need to push a new commit to your GitHub repo, and the commit has to include a Jenkinsfile. Inside the my-app directory, create a new file named Jenkinsfile and paste the following code into it.

pipeline {
  agent any
  stages {
    stage('Stage 1') {
      steps {
        echo 'Hello world!'
      }
    }
  }
}

Then, run the following commands to create a new commit and push it to GitHub.

$ git add Jenkinsfile
$ git commit -m "Add Jenkinsfile"
$ git push

After a few seconds, on the my-app-staging job page, you will see a new build triggered under the Build History section on the left-side menu.

Build History

Click on the date and then click Console Output.

Console Output

You will see the output of your job execution. Wait until it shows Finished: SUCCESS. A couple of lines above, you can see your commit information and the output of the job.

Jenkins Finished SUCCESS

With that, you have successfully connected your GitHub repo to your Jenkins server so that anytime a new commit is pushed to the main branch of your GitHub repo, the my-app-staging pipeline job will be triggered. Now you can move to the next step: creating your private container registry using Harbor.

Set up a container registry using Harbor

Install Harbor Helm package

Before you install Harbor in your Kubernetes cluster, you need to find the DNS entry for your cluster. Run the following command to find it and store the value in the CLUSTER_DNS variable. Replace <cluster-name> with your actual cluster name:

$ CLUSTER_DNS=$(civo kubernetes show -o custom -f DNSEntry <cluster-name>)

First, add the official Harbor chart repository, then run the following commands to install Harbor in your Kubernetes cluster:

$ helm repo add harbor https://helm.goharbor.io
$ helm repo update
$ helm install harbor harbor/harbor \
  --create-namespace \
  --namespace harbor \
  --set expose.ingress.hosts.core=harbor.$CLUSTER_DNS \
  --set expose.tls.enabled=false \
  --set externalURL=http://harbor.$CLUSTER_DNS \
  --wait

It will take a few minutes for the installation to finish. If you want to watch the process while waiting, you can open a new terminal and run the following:

$ watch -n 3 kubectl get pods -n harbor

The output will be like the output below. When the process is finished, all the pods will have Running status:

NAME                                 READY   STATUS    RESTARTS      AGE
harbor-portal-85dd5576df-x767p       1/1     Running   0             2m22s
harbor-redis-0                       1/1     Running   0             2m22s
harbor-registry-f6d447b86-xllvm      2/2     Running   0             2m22s
harbor-database-0                    1/1     Running   0             2m22s
harbor-trivy-0                       1/1     Running   0             2m22s
harbor-core-79d9d6f798-77jbb         1/1     Running   1 (72s ago)   2m22s
harbor-jobservice-6859d7bc76-2hphw   1/1     Running   3 (63s ago)   2m22s

Go back to the first terminal where you ran the helm install command. Run the following command to check the ingress object created for Harbor:

$ kubectl get ingress -n harbor

You should now have 1 ingress in the harbor namespace named harbor-ingress. Under the PORTS column you should see one port, 80. Copy the domain under the HOSTS column and access it in your browser. This domain is your harbor domain. You should now have your Harbor login page opened.

Harbor login page

Use username admin and password Harbor12345 to log in.

Harbor Dashboard

Change Harbor's account password

Now that you're logged in, change your account password. Click the admin username at the top-right corner and then click Change password. Type in the current password. Then type in your new password on the New password and Confirm password fields. Then click OK.

Change Harbor Password

Test pushing image to Harbor

By default, you have 1 project created in your Harbor registry named library. You can use this project to test pushing images to Harbor. Access the project details page by opening the following URL in your browser. Replace <harbor-url> with your Harbor domain:

http://<harbor-url>/harbor/projects/1/repositories

Click the Configuration tab and untick the Public checkbox to make the project private.

Test pushing image to Harbor

Click the Save button at the bottom of the page. The access level should now be Private.

Test pushing image to Harbor Private

Since there is no TLS configured on your Harbor domain, you need to do a few extra steps before you can push Docker images to your Harbor registry. The TLS configuration itself is outside the scope of this tutorial. For a production environment, you should configure it using something like cert-manager.

Open your Docker desktop application and click the gear icon at the top-right corner to open the settings.

Docker Desktop Application

Click Docker Engine on the left-side menu. Replace the content of the text area with the following lines. Don't forget to replace <harbor-domain> with your actual Harbor domain. Once you are done, click Apply & restart:

{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "insecure-registries": ["<harbor-domain>"]
}

Next, store your Harbor domain in a shell variable. Replace <harbor-domain> with your actual Harbor domain:

$ HARBOR_DOMAIN=<harbor-domain>

Log in to your Harbor server using the admin username and the password that you set previously:

$ docker login $HARBOR_DOMAIN -u admin -p <your-password>

Now, pull the hello-world Docker image, retag it with your Harbor domain, and push it to your Harbor server:

$ docker pull hello-world
$ docker tag hello-world $HARBOR_DOMAIN/library/hello-world
$ docker push $HARBOR_DOMAIN/library/hello-world

In the Harbor web UI, open the Repositories tab in the library project. It should now show one repository named library/hello-world with one artifact. That artifact is the Docker image that you just pushed to Harbor.

Image in Harbor Dashboard

Set up staging environment

Create staging cluster

After setting up Jenkins and Harbor in your operational cluster, create a new cluster for your staging environment. In an actual software development process, it’s good practice to separate your environments into different Kubernetes clusters. This way, if there are errors in your staging or operational environment, they won’t affect the running services in your production environment.

Run the command below to create a new cluster. For the purpose of this tutorial, you only need 1 node.

$ civo k8s create cluster-staging \
--nodes 1 \
--size g4s.kube.medium \
--cluster-type k3s \
--cni-plugin flannel \
--network default \
--create-firewall \
--save \
--merge \
--wait

Use kubectl to list the Kubernetes contexts in your local config. You should find a new context named cluster-staging. This context is now the active context so any kubectl command that you run will be executed on the staging cluster.

$ kubectl config get-contexts

Create a simple server-side Python application

Currently, there is no program or application inside the $PROJECT_DIR/my-app directory. You are going to create a server-side Python application using a framework named Flask.

Run the following commands to create a new Python virtual environment inside the my-app directory and activate it.

$ cd $PROJECT_DIR/my-app
$ python3 -m venv venv
$ source ./venv/bin/activate

Next, install the Flask package.

$ pip install flask

Create a file named main.py and paste the following Python code into the file.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "<p>Hello, World!</p>"

Now, run the following command to run the application locally.

$ flask --app main run --host 0.0.0.0 --port 8888

You will see output similar to the following lines, telling you that the application can now be accessed.

 * Serving Flask app 'main'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8888
 * Running on http://192.168.18.6:8888
Press CTRL+C to quit

In your browser, try to access localhost:8888. You should see the Hello, World! message.

Create a simple server-side Python application

Go back to your terminal and press Ctrl-C to stop the application. Now, you need a Dockerfile to containerize the application. But first, run the following command to store the list of packages needed by your application in a requirements.txt file:

$ pip freeze > requirements.txt

Then create a new file named Dockerfile and paste into it the following code:

FROM python:3.9-alpine

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY main.py .

EXPOSE 8888

CMD ["flask", "--app", "main", "run", "--host", "0.0.0.0", "--port", "8888"]

Build and push docker image to Harbor

Now that you have your application and a Dockerfile, build the Docker image and push it to your Harbor container registry:

$ docker build --platform linux/amd64 \
  --push \
  --tag $HARBOR_DOMAIN/library/my-app:v1 .

If you check the repositories in the library project in Harbor, there is now a new repository named library/my-app.

Build and push docker image to Harbor

Make Harbor accessible from the staging cluster

Before you can run your application's Docker image on the staging cluster, you need to create a secret object with the credentials needed by your cluster to pull the Docker image from the Harbor container registry. In this tutorial you are going to use your Harbor admin account credentials for this. Be warned that this is only for the purpose of this tutorial, and it's not a good practice from a security standpoint. In your actual work environment, you should create a separate Harbor account and grant it the permissions that it needs to pull docker images from Harbor. You can read more about Harbor user permissions on the Harbor documentation page.

Create a new secret in your staging cluster with the docker-registry type using your Harbor admin account. Replace <password> with your actual admin password.

$ kubectl create secret docker-registry regcred \
  --docker-server=$HARBOR_DOMAIN \
  --docker-username=admin \
  --docker-password=<password>

You will use this secret during your Kubernetes deployment later.

Enable pulling images from insecure registries

By default, Kubernetes doesn't allow you to pull images from insecure registries. Since there is no TLS configured for your Harbor domain, you need to do a few extra steps to enable pulling images from your Harbor registry. As a reminder, in an actual production environment, you should configure TLS certificates for your Harbor domain with something like cert-manager. By doing so you don't need to do the extra steps laid out in this section.

First, install kubectl-node-shell on your machine. You need this tool to open a shell on your Kubernetes cluster node.

$ curl -LO https://github.com/kvaps/kubectl-node-shell/raw/master/kubectl-node_shell
$ chmod +x ./kubectl-node_shell
$ sudo mv ./kubectl-node_shell /usr/local/bin/kubectl-node_shell

Get the list of your nodes. Since you created your cluster with only one node previously, there should be only one node.

$ kubectl get nodes

Copy the name of the node, and then run the following. Replace <node-name> with your actual node name.

$ kubectl node-shell <node-name>

A new command prompt will appear, and you now have a shell session on your node. Change into the /etc/rancher/k3s directory.

# cd /etc/rancher/k3s

Run the following command to create a new file named registries.yaml inside that directory. Replace <harbor-domain> with your actual Harbor domain.

# cat > registries.yaml << EOF
mirrors:
  "<harbor-domain>":
    endpoint:
      - "http://<harbor-domain>"
EOF

Your changes will only take effect after you restart your cluster. Run the following to do that.

# rc-service k3s restart
 * Stopping node-problem-detector …                                   [ ok ]
 * Stopping k3s ...

It will look like your terminal session hangs, but what actually happens is that your node shell session ends because you restarted the k3s service. Wait for a minute, then press Ctrl-C to get your terminal back.

As a reminder, if you have more than 1 node in your cluster, you need to create the registries.yaml file and restart the k3s service on every node. That way, no matter which node your pod runs on, the node will be able to pull the image from the Harbor registry.
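If you'd like to rehearse the file contents locally before touching a node, the same heredoc can be rendered on your machine (a convenience sketch; harbor.example.com is a placeholder for your real Harbor domain):

```shell
# Render the registries.yaml content locally so you can paste the exact text
# into each node's shell. HARBOR_DOMAIN holds a placeholder value here.
HARBOR_DOMAIN=harbor.example.com
cat <<EOF
mirrors:
  "$HARBOR_DOMAIN":
    endpoint:
      - "http://$HARBOR_DOMAIN"
EOF
```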

Now that you have configured the node in your cluster to pull the image from your Harbor registry, you are ready to deploy your application in your staging cluster.

Deploy my-app Docker image manually

The next step is to deploy your application's Docker image in the staging cluster. In the $PROJECT_DIR/my-app directory, create a new file named deployment.yaml and paste the following yaml code into the file.

Replace <harbor-domain> with your actual Harbor domain.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  namespace: default
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: main
        image: <harbor-domain>/library/my-app:v1
        ports:
        - containerPort: 8888

You might notice that there is an imagePullSecrets field in the pod's template specification in the yaml code above. This refers to the secret that you created in the previous section, and the credentials in it will be used by the cluster to pull the Docker image.

To create the deployment based on deployment.yaml file, run the following command:

$ kubectl apply -f deployment.yaml

Run the following command to check the deployment status and the pods created.

$ kubectl get deployment
$ kubectl get pods

After a few seconds, you should have your pod with Running status.

$ kubectl get pods
NAME                                 READY   STATUS      RESTARTS   AGE
install-traefik2-nodeport-cl-prwxl   0/1     Completed   0          72m
my-app-deployment-84db7fdc4c-7dtvz   1/1     Running     0          3s

By running the following command, you can access your pod through localhost port 8888.

$ kubectl port-forward deploy/my-app-deployment 8888

To check, access localhost:8888 using your browser. You should see the same Hello, World! message, but this time, your application is running in your staging cluster. Please note that accessing your pod this way is for debugging purposes only. In a real working environment, you should use an Ingress controller to make your application accessible from outside of the cluster.
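For reference, exposing the app through the cluster's ingress controller (Civo's k3s clusters ship with Traefik, as the install-traefik2-nodeport pod above suggests) would look roughly like the following Service and Ingress manifests. This is a sketch only: the object names and the host value are assumptions, and you would point the host at a DNS name that resolves to your staging cluster.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service      # hypothetical name
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8888        # the port the Flask container listens on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress      # hypothetical name
spec:
  rules:
  - host: my-app.example.com   # placeholder; use a DNS name for your cluster
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80
```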

Create a pipeline for application

At this point, you have successfully run your application in the staging cluster. You now need to implement your pipeline by defining it in the Jenkinsfile inside your repository. But before editing the Jenkinsfile, you need to prepare a few things in your operational cluster to enable the Jenkins server running there to build Docker images.

Add kaniko and kubectl to Jenkins pod template

For building Docker images in the Jenkins pipeline, you will use kaniko, an open-source tool developed by Google to build container images inside a container. To let Jenkins use kaniko, you need to edit the Jenkins pod template and add a kaniko container. While doing so, you also need to add a container to run kubectl, which you will use in your pipeline to update your app deployment in the staging cluster.

Replace <jenkins-base-url> in the following URL with your actual Jenkins base URL and open it in your browser.

http://<jenkins-base-url>/manage/cloud/kubernetes/templates/

Click default to view the details of the default pod template.

Add kaniko and kubectl to Jenkins pod template

Scroll down a bit until you find the Add container dropdown just above the Environment variables field. Click the dropdown and click Container template.

Add kaniko and kubectl to Jenkins pod template

Type in kaniko as the container name. In the Docker Image field type in gcr.io/kaniko-project/executor:debug. Leave the default values for other fields unchanged.

Add kaniko and kubectl to Jenkins pod template

After that, add another container template for kubectl. Type in kubectl as the container name and portainer/kubectl-shell:latest as the docker image.

Add kaniko and kubectl to Jenkins pod template

Scroll down a bit; just below the Environment variables field, you will find the Volumes field. Click the Add Volume dropdown and click Secret Volume. Type in regcred as the Secret name and /kaniko/.docker as the Mount path.

Add kaniko and kubectl to Jenkins pod template

To finish editing the pod template, click the Save button.

Add secret for Harbor credential

Next, you need to create the regcred secret that you just added as a volume for the pod template. Before you can do that, you need to encode your Harbor username and password in base64 by running the following command in your terminal. Replace <password> with your actual Harbor admin password.

$ echo -n admin:<password> | base64

Then, create a new file named config.json and copy the following content into that file. Replace <harbor-domain> with your Harbor domain and <value> with the output of the previous command.

{
  "auths": {
    "<harbor-domain>": {
      "auth": "<value>"
    }
  }
}
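If you prefer to script these two steps, the following generates config.json in one go (a convenience sketch; the domain and password values below are placeholders, and HARBOR_PASSWORD is my own variable name):

```shell
# Build the base64 auth string and write config.json in one step.
HARBOR_DOMAIN=harbor.example.com   # placeholder; use your Harbor domain
HARBOR_PASSWORD=Harbor12345        # placeholder; use your real admin password
AUTH=$(printf '%s' "admin:$HARBOR_PASSWORD" | base64)
cat > config.json <<EOF
{
  "auths": {
    "$HARBOR_DOMAIN": {
      "auth": "$AUTH"
    }
  }
}
EOF
```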

Since kubectl is still pointing at your staging cluster's context, you need to switch back to the context of your Jenkins cluster.

$ kubectl config use-context $OPS_CONTEXT

Finally, run the following command to create a Kubernetes secret that will be used by your Jenkins pod.

$ kubectl create secret generic regcred --from-file config.json --namespace jenkins

Please note that you are using the admin user for the Harbor credentials, and this is for demo purposes only. In a real production environment, you should use a different Harbor user account with fewer privileges. It's even better if you use a Harbor user with LDAP/Active Directory authentication or OIDC provider authentication. You can read more about Harbor user authentication in its documentation.

Edit the pipeline to build the image on Jenkins

Open the Jenkinsfile inside my-app directory and replace the content with the following code. Replace <harbor-domain> inside the stage('Build') block with your actual Harbor domain.

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        container('kaniko') {
          sh '''
        /kaniko/executor --context . \
            --insecure \
            --destination=<harbor-domain>/library/my-app:${BUILD_NUMBER}
        '''
        }
      }
    }
  }
}

In the Jenkins pipeline code above, notice that you use the Jenkins variable ${BUILD_NUMBER}. By doing this, every image built with this pipeline is tagged with its Jenkins build number, so each build produces a uniquely tagged image.
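To make the tagging scheme concrete, here is a small sketch of what the --destination value expands to over the first few builds (harbor.example.com stands in for your Harbor domain):

```shell
# Simulate the image reference produced by the first three Jenkins builds
for BUILD_NUMBER in 1 2 3; do
  echo "harbor.example.com/library/my-app:${BUILD_NUMBER}"
done
```

Because each build gets its own tag, you can later roll a deployment back simply by pointing it at an earlier build number.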

Push the changes to GitHub repository

The next thing to do is commit the changes in the my-app directory and push them to the GitHub repository. But first, create a new file named .gitignore and paste the following lines into it.

/__pycache__/
/venv/

This is to prevent your local Python cache and virtual environment from being included in your GitHub repository.

Add your changes and create a new git commit. Then, push it to the GitHub repository.

$ git add .
$ git commit -m "Update Jenkinsfile"
$ git push

Open the my-app-staging pipeline page in your Jenkins dashboard. You can open it using the following URL, replacing <jenkins-base-url> with your actual Jenkins base URL.

http://<jenkins-base-url>/job/my-app-staging/

Notice now that there is a new build in your build history.


Click on the build date and then click Console Output on the left-side menu to show the output of your build. You should be able to monitor your build progress there.


In the output, you will see the Docker image being built and then pushed to your Harbor registry. Once the image is pushed to the Harbor container registry, the output ends with Finished: SUCCESS.


You can also check the repositories in the Harbor registry's library project. You should now have two artifacts inside the library/my-app repository.


Deploy your image to the staging cluster

Now, it's time for the final step. You need to edit your pipeline to add the step to deploy your image on your staging cluster. To do this, you need to add another credential in your Jenkins server, and you will use this credential to update the image of your app deployment in the staging cluster.

Using the Civo CLI, save the cluster-staging kubeconfig to your local machine.

$ cd $PROJECT_DIR
$ civo k8s config cluster-staging > cluster-staging-conf

Once again, access your Jenkins global credentials page and click the Add Credentials button. Select Secret file as the credential type. Under the File field, click Browse and select the cluster-staging-conf file that you just saved inside the $PROJECT_DIR directory. Type in cluster-staging-kubeconfig as the ID and click Create.


Below the closing brace of the stage('Build') block, paste the following code. This adds another stage that updates the image in your app's deployment. Replace <cluster-staging-ip> with your staging cluster's external IP; you can find it by running civo k8s show cluster-staging -o custom -f MasterIP. Replace <harbor-url> with your actual Harbor server URL.

...
    stage('Deploy') {
      steps {
        container('kubectl') {
          withKubeConfig([
            credentialsId: 'cluster-staging-kubeconfig',
            serverUrl: 'https://<cluster-staging-ip>:6443'
          ]) {
            sh '''
              kubectl set image --namespace default \
                deployment/my-app-deployment \
                main=<harbor-url>/library/my-app:${BUILD_NUMBER}
              '''
          }
        }
      }
    }
...

So, if your Harbor server URL is 1.2.3.4 and your staging cluster master IP is 74.220.24.102, the complete Jenkinsfile should look as follows.

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        container('kaniko') {
          sh '''
        /kaniko/executor --context . \
            --insecure \
            --destination=1.2.3.4/library/my-app:${BUILD_NUMBER}
        '''
        }
      }
    }
    stage('Deploy') {
      steps {
        container('kubectl') {
          withKubeConfig([
            credentialsId: 'cluster-staging-kubeconfig',
            serverUrl: 'https://74.220.24.102:6443'
          ]) {
            sh '''
              kubectl set image --namespace default \
                deployment/my-app-deployment \
                main=1.2.3.4/library/my-app:${BUILD_NUMBER}
            '''
          }
        }
      }
    }
  }
}

Edit the main.py file: change the Hello, World! text to Good morning, World!, just so you can confirm later that the new version of the app has been deployed. Commit your changes and push them to GitHub.

$ git add .
$ git commit -m 'Add deploy step'
$ git push

Now open the my-app-staging job page in your browser. A new build should have been triggered. As before, click on the date and then Console Output to monitor the build output. Notice that after building and pushing the Docker image, there is now a new stage that updates your app deployment in the staging cluster.


Once the build is finished, use kubectl port-forward to access your app.

$ kubectl config use-context cluster-staging
$ kubectl port-forward deploy/my-app-deployment 8888

Open http://localhost:8888 in your browser. You should now see the Good morning, World! message instead of Hello, World!.


With that, you have successfully built your app on your Jenkins server and deployed it to your staging cluster.

Clean up

Run the following commands to delete all the Civo volumes used by cluster-staging:

$ kubectl config use-context cluster-staging
$ for volumeId in $(kubectl get pvc -A -o jsonpath='{.items[*].metadata.uid}'); do
  civo volume rm pvc-$volumeId -y
done
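The loop works because Kubernetes names dynamically provisioned volumes pvc-<uid of the PVC>, and the Civo volume appears under that same name. Here is a dry-run sketch with made-up UIDs standing in for the jsonpath output; it only echoes the commands it would run:

```shell
# Dry run of the cleanup loop: fake PVC UIDs stand in for the output of
#   kubectl get pvc -A -o jsonpath='{.items[*].metadata.uid}'
FAKE_UIDS="aaaa1111-0000-4000-8000-000000000001 bbbb2222-0000-4000-8000-000000000002"
for volumeId in $FAKE_UIDS; do
  echo "civo volume rm pvc-${volumeId} -y"   # echo only; nothing is deleted
done
```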

Next, delete all the Civo volumes used in the Jenkins cluster:

$ kubectl config use-context $OPS_CONTEXT
$ for volumeId in $(kubectl get pvc -A -o jsonpath='{.items[*].metadata.uid}'); do
  civo volume rm pvc-$volumeId -y
done

Finally, delete the Jenkins cluster and cluster-staging by running the following commands:

$ civo k8s delete cluster-staging -y
$ civo k8s delete $OPS_CONTEXT -y

Summary

In this tutorial, you implemented a CI/CD system using Jenkins and Harbor. You started by installing Jenkins in your operational cluster. Then, you installed Harbor as your container registry in the same cluster. Using a sample application hosted on GitHub, you defined a pipeline job using a Jenkinsfile, and you set up the necessary configuration to automatically build the app anytime a new commit is pushed to your GitHub repository.

If you want to continue to refine the system you have implemented, there is room for improvement. You might start by improving security: configure TLS certificates for both your Jenkins and Harbor installations. To further secure your Harbor registry, you should also create a dedicated Harbor robot account and a separate Kubernetes service account instead of using your Harbor admin account and your own kubeconfig. Read the Jenkins Kubernetes plugin documentation page to find out how to do that.