DeepSeek, a Chinese AI startup, has recently launched its latest model, DeepSeek-R1, which rivals leading AI models like OpenAI's o1 in performance but at a fraction of the cost. This open-source model has quickly gained attention, topping Apple's App Store charts and causing significant ripples in the tech industry.

Deploying DeepSeek-R1 on a Civo GPU-powered Kubernetes cluster allows you to harness its advanced capabilities efficiently. This guide will walk you through the process, enabling you to leverage DeepSeek-R1's power for your AI applications seamlessly.

Simplifying LLM Deployment with Civo’s LLM Boilerplate

Setting up a GPU-enabled Kubernetes cluster to run LLMs can be complex and time-consuming, especially for those who require seamless integration, data security, and regulatory compliance. To address this challenge, we've created a step-by-step guide to deploying a Kubernetes GPU cluster on Civo using the Civo LLM Boilerplate.

What You'll Learn

In this tutorial, you'll learn how to automate the setup of DeepSeek on a Kubernetes GPU cluster on Civo Cloud using Terraform or GitHub Actions, and deploy essential tools such as:

  • The Ollama inference server, which runs the models.
  • The Ollama Web UI, a browser-based chat interface.
  • The NVIDIA GPU device plugin, which enables GPU support in Kubernetes.
  • An optional example application.

Project Goal

The goal of this project is to enable customers to easily use Open Source LLMs such as DeepSeek, by providing:

  • Access to the latest Open Source LLMs made available through Ollama.
  • A user interface that gives non-technical users access to the models.
  • A path to produce insights with LLMs while maintaining sovereignty over your data.
  • Support for regulated use cases where ChatGPT can't be used.

Prerequisites

Before beginning, ensure you have the following:

  • A Civo account and your Civo API key (available from the Civo Dashboard).
  • Terraform installed locally, if you are following the Terraform path.
  • Docker installed, if you plan to build the example application.
  • A GitHub account, if you prefer the GitHub Actions path.

Deploying DeepSeek on Civo using Terraform

Project Setup

  1. Obtain your Civo API key from the Civo Dashboard.
  2. Create a file named terraform.tfvars in the project's root directory.
  3. Insert your Civo API key into this file as follows:
civo_token = "YOUR_API_KEY"

Project Configuration

Project configurations are managed within the tf/variables.tf file. This file contains definitions and default values for the Terraform variables used in the project.

Variable             Description                                         Type    Default Value
cluster_name         The name of the cluster.                            string  "llm_boilerplate"
cluster_node_size    The GPU node instance to use for the cluster.       string  "g1.l40s.kube.x1"
cluster_node_count   The number of nodes to provision in the cluster.    number  1
civo_token           The Civo API token, set in terraform.tfvars.        string  N/A
region               The Civo region to deploy the cluster in.           string  "LON1"
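
Any of these defaults can be overridden in your terraform.tfvars file rather than editing variables.tf. For example (the cluster name below is purely illustrative):

civo_token         = "YOUR_API_KEY"
cluster_name       = "deepseek-demo"
cluster_node_count = 1
region             = "LON1"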

Deployment Configuration

Deployment of components is controlled through boolean variables within the tf/variables.tf file. Set these variables to true to enable the deployment of the corresponding component.

Variable                    Description                                                 Type  Default Value
deploy_ollama               Deploy the Ollama inference server.                         bool  true
deploy_ollama_ui            Deploy the Ollama Web UI.                                   bool  true
deploy_app                  Deploy the example application.                             bool  false
deploy_nv_device_plugin_ds  Deploy the NVIDIA GPU device plugin for enabling GPU support.  bool  true
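
For example, to deploy the example application alongside the default components, add the corresponding override to terraform.tfvars:

deploy_app = true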

Deploy the LLM Boilerplate

To deploy, simply run the following commands:

Step 1: Initialize Terraform

terraform init

This command initializes Terraform, installs the required providers, and prepares the environment for deployment.

Step 2: Plan Deployment

terraform plan

This command displays the deployment plan, showing what resources will be created or modified.

Step 3: Apply Deployment

terraform apply

This command applies the deployment plan. Terraform will prompt for confirmation before proceeding with the creation of resources.
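
If you later automate this step (for example in a CI pipeline), the confirmation prompt can be skipped with the flag below; for a first manual run, reviewing the prompt is the safer option:

terraform apply -auto-approve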

Building and Deploying the Example Application

Step 1: Build the custom application container

Enter the application folder:

cd app

Build the docker image:

docker build -t {repo}/{image} .

Push the docker image to a registry:

docker push {repo}/{image}
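
For example, with a hypothetical Docker Hub account myuser and image name llm-app, the two commands would be:

docker build -t myuser/llm-app:latest .
docker push myuser/llm-app:latest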

Navigate to the helm chart:

cd ../infra/helm/app

Modify the Helm values to point to your Docker registry, e.g.:

replicaCount: 1
image:
    repository: {repo}/{image}
    pullPolicy: Always
    tag: "latest"

service:
    type: ClusterIP
    port: 80
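
Note that tag: "latest" combined with pullPolicy: Always tells Kubernetes to re-pull the image each time a pod starts, so newly pushed builds are picked up on the next restart. Pin a specific version tag instead if you need reproducible rollouts.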

Step 2: Initialize Terraform

Navigate to the terraform directory:

cd ../tf

Then:

terraform init

This command initializes Terraform, installs the required providers, and prepares the environment for deployment.

Step 3: Plan Deployment

terraform plan

This command displays the deployment plan, showing what resources will be created or modified.

Step 4: Apply Deployment

terraform apply

This command applies the deployment plan. Terraform will prompt for confirmation before proceeding with the creation of resources.

Deployment takes around 10 minutes: Terraform stands up the Civo Kubernetes cluster, assigns a GPU node, and deploys the Helm charts and GPU configuration before downloading the models and running them on your NVIDIA GPU.
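
Once the cluster is available, you can watch the components come online with kubectl, assuming you have downloaded the cluster's kubeconfig from the Civo Dashboard:

kubectl get pods --all-namespaces --watch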

Troubleshooting

If you experience any issues during the deployment (for example, if you experience a timeout), you can reattempt the deployment by rerunning:

terraform apply
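
If repeated attempts still fail, you can tear the stack down and start again from a clean slate:

terraform destroy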

Deploy DeepSeek through GitHub Actions

For those who prefer a fully automated cloud-based approach, GitHub Actions offers a powerful solution. As part of GitHub's CI/CD platform, Actions allows you to automate your software workflows, including deployments. This method simplifies the deployment process and makes it repeatable and far less error-prone, which is particularly beneficial for managing and updating large-scale machine learning models like DeepSeek without manual intervention.
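
The template repository already contains the workflow definitions, so you normally won't write one yourself. For orientation only, a workflow of this shape might look roughly like the sketch below; the trigger, job name, and tf working directory are assumptions, not the boilerplate's actual workflow:

name: Deploy DeepSeek
on: workflow_dispatch

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository and install Terraform on the runner
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Terraform init and apply
        working-directory: tf
        env:
          # TF_VAR_* environment variables map onto Terraform input variables
          TF_VAR_civo_token: ${{ secrets.CIVO_TOKEN }}
        run: |
          terraform init
          terraform apply -auto-approve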

First, navigate to the repository: https://github.com/civo-learn/civo-llm-boilerplate, and then use the template to create a new repository.

After doing so, go to the settings of your newly created repository and make sure GitHub Actions are allowed to run.

In the repository settings, create a new secret called CIVO_TOKEN and set it to your Civo account token.
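
If you have the GitHub CLI installed and authenticated, the same secret can be created from the command line:

gh secret set CIVO_TOKEN --body "YOUR_API_KEY"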

Now, you can head to the Actions tab and run the deployment.

Accessing and Managing Your Deployment

Once you have successfully deployed DeepSeek using either Terraform or GitHub Actions, the next step is to verify and utilize the deployment:

Checking the Load Balancers

After deployment, you can check the load balancers attached to your Kubernetes cluster to locate the Open Web UI endpoint. Navigate to the load balancer section in your Civo Dashboard and find the DNS name labeled “ollama-ui-open-webui.”
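
Alternatively, with the cluster's kubeconfig downloaded, you can find the same endpoint with kubectl (the exact service name may vary slightly in your cluster):

kubectl get svc --all-namespaces | grep webui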

Completing the initial Open Web UI setup, which involves registering an initial administrator account and configuring the deployment options, will grant you access to a "ChatGPT-like" interface where you can interact with the deployed LLM directly.

From this window, you can further configure your environment, such as setting security preferences and deciding what access newly registered users receive. You can also make other users administrators in addition to the first registered account.

Deploying Additional Models

If you wish to expand your LLM capabilities, simply navigate to the settings menu found in the top right-hand corner of the Open Web UI screen. Select “models” from the left-hand menu to add or manage additional models. This feature allows for versatile deployment configurations and model management, ensuring that your setup can adapt to various requirements and tasks.

If you would like to change the default models deployed, simply modify the variables.tf file in the infra/tf folder. The default_models variable holds the list of all the Ollama models you wish to deploy.

 variable "default_models" {
  description = "List of default models to use in Ollama Web UI."
  type        = list(string)
  default     = ["llama3.2", "deepseek-r1"] #Include additional models here if required
}
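
After editing the list, rerun terraform apply so the change is rolled out and Ollama pulls any newly added models.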

Summary

Congratulations! You have successfully deployed a Kubernetes GPU cluster on Civo Cloud using Terraform and set up various components for running LLMs, including the Ollama inference server and web interface.

With this boilerplate, you now have a scalable and flexible infrastructure for leveraging Open Source LLMs, allowing you to customize deployments, integrate additional tools, or expand your cluster as needed.

If you want to learn more about LLMs, check out some of these resources: