In Part 1 of our tutorial series, we set up a simple Digger pipeline to perform infrastructure as code (IaC) deployments, all within a GitHub Actions pipeline. But what if we want to expand that deployment for a larger installation, or harden it for production? In this part, we will look at deploying the orchestrator backend.

The orchestrator backend allows for efficient management of CI pipelines, especially for teams working on multiple PRs simultaneously. This tutorial dives into configuring and deploying the Digger orchestrator backend, setting up secure webhooks using cert-manager, and integrating it with GitHub to manage larger infrastructure environments efficiently.

What is the Orchestrator Backend?

The orchestrator backend is a service that triggers pipeline runs. Most CI pipeline runs are triggered by events within the source code management system: code commits, pull requests, and other internal activity. Most CI systems also provide a way to trigger runs externally, and this is what the Digger orchestrator uses to build a configuration that works more efficiently with multiple users and larger environments.

Some of the other core points include (a sample configuration follows the list):

  • Quicker response to PR comments and faster status check updates
  • Parallelization where appropriate
  • Appropriate queuing with multiple PRs
  • PR-level locks
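
Most of these benefits depend on the backend knowing how your repository is split into projects. As a rough illustration (the project names and directories below are placeholders, not part of this tutorial's setup), a digger.yml at the repository root might look like this, letting the orchestrator queue and parallelize runs per project:

# digger.yml - illustrative layout only; adjust names and directories to your repository
projects:
  - name: staging
    dir: environments/staging
  - name: production
    dir: environments/production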

Setting Up Cert-Manager

As we will expose a webhook on a public address for GitHub itself to call, we need a signed TLS certificate. To provide one, install the cert-manager app from the Civo Marketplace. Installing the app deploys the core cert-manager components, but we still need to configure it.

The following will set up cluster-issuers to allow us to continue:

# Install the kubeconfig for your digger cluster
civo k3s config core --save
kubectl config use-context core
# Set up a cluster-issuer for Let's Encrypt
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ****REPLACE-WITH-EMAIL****
    preferredChain: ""
    privateKeySecretRef:
      name: prod-letsencrypt-key
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - selector: {}
      http01:
        ingress:
          class: traefik

EOF
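
Before moving on, it is worth checking that the cert-manager pods from the Marketplace app are running and that the new issuer has registered with Let's Encrypt (the cert-manager namespace shown here may differ depending on how the app was installed):

kubectl get pods -n cert-manager
# The ClusterIssuer should report READY: True once ACME registration completes
kubectl get clusterissuer letsencrypt-prod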

Record the default URL for the cluster. We use that as our default ingress domain name for the webhook. (Note: you could use a custom domain if you wish)

civo k3s show core
# Make a note of the "DNS A record" information

Installing Digger with Helm

Helm makes it easy to deploy complex applications on Kubernetes. Let's start by adding the Digger Helm repository and installing the chart:

cat - > values.yaml <<EOF
digger:
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/ingress.class: traefik
    # Must later be removed, but must be present for the initial install
    className: traefik
    host: 0123456abcd-fefe-1010101.k8s.civo.com #Replace with cluster DNS A record, from above
    tls:
      secretName: digger-backend-tls
  secret:
    # CHANGE BELOW!!!
    httpBasicAuthUsername: civoadmin
    httpBasicAuthPassword: REPLACEPASSWORD76341
    bearerAuthToken: "22321cdede" # You should generate with something like openssl rand -base64 32
    hostname: https://0123456abcd-fefe-1010101.k8s.civo.com #Replace with cluster DNS A record, from above, with https:// protocol ahead of it.
    githubOrg: "myorg" #replace with your GitHub organization
    # Replace below, AFTER GitHub setup process
    # githubAppID: ""
    # githubAppClientID: ""
    # githubAppClientSecret: ""
    # githubAppKeyFile: #base64 encoded file
    # githubWebhookSecret: ""
postgres:
  enabled: true
  secret:
    useExistingSecret: false
    password: "testpass123"
EOF
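
The placeholder credentials above should be replaced before installing. One quick way to generate random values (assuming openssl is available on your machine):

openssl rand -base64 24   # use for httpBasicAuthPassword
openssl rand -base64 32   # use for bearerAuthToken

With the values file prepared, add the repository and install the chart: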

helm repo add digger https://diggerhq.github.io/helm-charts/
helm repo update 
helm install digger digger/digger-backend --namespace digger --create-namespace -f values.yaml

This command installs Digger in its own namespace, keeping our cluster organized. After installation, you will still need to configure Digger to work with your GitHub organization, but first check that the pods have started successfully:

kubectl get pods -n digger

You will need to delete the .spec.ingressClassName field from the Ingress that is created, so that cert-manager can issue the certificate with the version of Traefik installed by default:

kubectl edit ingress -n digger
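
If you prefer a non-interactive approach, the same field can be removed with a JSON patch. This assumes the Ingress created by the chart is named digger-backend; confirm the actual name first with kubectl get ingress -n digger:

# Remove .spec.ingressClassName from the chart-created Ingress
kubectl -n digger patch ingress digger-backend --type=json \
  -p='[{"op": "remove", "path": "/spec/ingressClassName"}]'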

Configuring GitHub Integration

Next, we are going to set up the GitHub integration for your organization. To do this, go to https://CIVOCLUSTERURL/ and log in using the basic auth username and password from above.

This information is taken from the httpBasicAuthUsername and httpBasicAuthPassword values in the values.yaml file created above.

Next, go to https://CIVOCLUSTERURL/github/setup to start the GitHub app setup.

You will see a screen similar to the one below. Review all the information, then click Setup:

Configuring GitHub Integration Setup

GitHub will prompt for authentication information. Then, confirm the creation of a new GitHub app.

Configuring GitHub Integration Creation of New App

You will return to the Digger GitHub configuration screen.

Digger GitHub configuration screen

Record the information, then edit the values.yaml from the original Helm configuration to fill in these values. The keys are commented out, so they just need to be uncommented and populated correctly.
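
The githubAppKeyFile value expects the GitHub App private key base64-encoded. If GitHub gave you a .pem key file, one way to encode it (the filename below is just a placeholder for whatever you saved):

# -w0 disables line wrapping (GNU coreutils); on macOS use: base64 -i <file>
base64 -w0 digger-github-app.private-key.pem

Once the values are complete, reapply them to the installed Helm release: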

helm upgrade digger digger/digger-backend --namespace digger -f values.yaml
Make sure to also delete the .spec.ingressClassName attribute from the Ingress after the upgrade, as the Helm values will add it back:
kubectl edit ingress -n digger
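
At this point, it is worth verifying that cert-manager issued the certificate and that the backend answers over HTTPS. Replace the hostname with your cluster's DNS A record; a 401 response is expected because the dashboard is protected by basic auth:

kubectl get certificate -n digger
curl -I https://0123456abcd-fefe-1010101.k8s.civo.com/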

After applying the Helm configuration, click the link provided on the GitHub setup screen. It should look something like https://github.com/apps/digger-app-111111111/installations/new, with the 1s replaced by the random numbers assigned by GitHub. This will complete the configuration!

Completed Configuration of GitHub Integration

Running Digger in the Pipeline

Now that we have our GitHub App, let's configure Digger in the pipeline to use it! The workflow is the same as in Part 1 of this series; we just adjust a few parameters, adding the digger-hostname parameter to specify the URL of your self-hosted Digger installation.

      - name: digger run
        uses: diggerhq/digger@latest
        with:
          digger-hostname: https://0123456abcd-fefe-1010101.k8s.civo.com # Replace with cluster DNS A record, from above, with https:// protocol ahead of it
          # To disable all locking:
          # disable-locking: true
          # no-backend: true

Summary

Congratulations - you are now self-hosting Digger! Now that you have successfully set up the Digger orchestrator and connected it with GitHub, you can explore options for scaling and hardening this setup.

Option 1: The database is currently self-hosted within Kubernetes. Layer a backup system on this, or host the Postgres database on more resilient architecture for critical environments. Keep in mind that all important configurations remain in the pipeline and runners.
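
As a starting point, even an ad-hoc dump of the in-cluster database is better than nothing. A minimal sketch, assuming names only for illustration; the actual pod, user, and database names depend on your deployment (check with kubectl get pods -n digger):

# Pod, user, and database names below are placeholders - adjust to your deployment
kubectl exec -n digger digger-postgres-0 -- \
  pg_dump -U postgres digger > digger-backup-$(date +%F).sql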

Option 2: You only need to run this configuration if you have a high risk of concurrent PR operations. For smaller organizations, just running a standalone Digger without locks should work, with only potentially minor delays in runs.
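
For that lighter-weight setup, the pipeline step shown earlier can skip the backend entirely by uncommenting the options already included there (a sketch mirroring the Part 1 configuration):

      - name: digger run
        uses: diggerhq/digger@latest
        with:
          # Run without the orchestrator backend and without PR-level locks
          no-backend: true
          disable-locking: true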