One of the often-overlooked aspects of Kubernetes is container networking, specifically the Container Network Interface (CNI). CNIs provide a standardized way to configure network interfaces for containers within a Kubernetes cluster. This abstraction layer allows for flexibility and simplifies network management by decoupling the specifics of network plugin implementation from Kubernetes.
In this guide, we’ll explore Cilium, an eBPF-powered CNI for Kubernetes, and how you can leverage Cilium network policies to secure cluster communications.
Unlike traditional CNIs that rely on iptables-based firewall rules, Cilium leverages eBPF to modify network behavior efficiently. Lin Sun wrote about this in more detail, outlining how Cilium is deployed as a DaemonSet that runs on each Kubernetes worker node. Inside the Cilium pod, an Envoy proxy runs to mediate traffic into pods on the same node when L7 policies are in use.
Prerequisites
This tutorial assumes some familiarity with Kubernetes. In addition, you will need the following installed locally to follow along:
- A Civo account and the Civo CLI
- kubectl
- Helm (optional, if you prefer to install Cilium yourself)
Installing Cilium on Civo
The default CNI plugin on Civo Kubernetes clusters is flannel. However, we can select a different CNI during cluster creation. To do this using the Civo CLI, run the following command:
civo k3s create --create-firewall --nodes 1 -m --save --switch --wait cilium-demo --region NYC1 --cni-plugin=cilium
The command above will launch a one-node cluster in the NYC1 region; the --cni-plugin flag specifies which CNI plugin to use. At the time of writing, Civo supports Flannel and Cilium.
In a few minutes, you should have a fresh cluster with Cilium installed as the CNI.
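Once the cluster is ready, you can confirm that Cilium is running by checking the agent pods (they typically run in the kube-system namespace and carry the k8s-app=cilium label):
kubectl -n kube-system get pods -l k8s-app=cilium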
Alternatively, you can install Cilium using Helm.
Install Cilium using Helm
Add the helm repo:
helm repo add cilium https://helm.cilium.io/
Deploy the release:
helm install cilium cilium/cilium --version 1.15.2 \
--namespace kube-system
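Once the release is deployed, you can wait for the Cilium agent DaemonSet to finish rolling out (the chart creates it as cilium in the namespace used above):
kubectl -n kube-system rollout status daemonset/cilium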
Deploying a sample application
Before creating any policies, we will need an application to test them on. In a directory of your choice, create a file called deployment.yaml and add the following code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: whoami
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: whoami
  labels:
    app: whoami
spec:
  type: LoadBalancer
  selector:
    app: whoami
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
The deployment above creates a single instance of the whoami HTTP server and a service that listens on port 80. Save the file and apply it to your cluster using kubectl:
# create the whoami namespace
kubectl create ns whoami
kubectl apply -f deployment.yaml
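You can confirm that the pod and service are up before moving on:
kubectl -n whoami get pods,svc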
L7 Network policies with Cilium
What makes Cilium unique with regard to network policies is its ability to create L7 network policies, something the standard NetworkPolicy resource that ships with Kubernetes does not support. For those unfamiliar, L7 policies allow users to create rules specifically for HTTP services.
In a directory of your choice, create a file named policy.yaml and add the following configuration using a text editor of your choice:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-api
  namespace: whoami
spec:
  endpointSelector:
    matchLabels:
      app: whoami
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: whoami
    - toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api"
The policy above allows incoming HTTP GET requests to the "/api" path on port 80 for pods labeled with "app: whoami" within the "whoami" namespace. The endpointSelector selects the pods with the label "app: whoami" as the target for this policy.
Apply the policy using kubectl:
kubectl apply -f policy.yaml
To test the policy, obtain the IP of the load balancer provisioned by the whoami service:
kubectl get service -n whoami whoami -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
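If you prefer, you can capture the IP in a shell variable and substitute it for <yourLBIP> in the commands below (LB_IP is just an arbitrary name used here):
LB_IP=$(kubectl get service -n whoami whoami -o jsonpath='{.status.loadBalancer.ingress[0].ip}')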
Test the API endpoint:
curl http://<yourLBIP>/api
Output:
{"hostname":"whoami-7f89db7768-k4dtz","ip":["127.0.0.1","::1","10.0.0.76","fe80::201e:feff:fe25:97df"],"headers":{"Accept":["*/*"],"User-Agent":["curl/8.4.0"],"X-Envoy-Expected-Rq-Timeout-Ms":["3600000"],"X-Forwarded-Proto":["http"],"X-Request-Id":["2c5737a4-2c31-4bdf-a5c0-d3e11932db38"]},"url":"/api","host":"212.2.245.110","method":"GET","remoteAddr":"192.168.1.4:36524"}
Test another endpoint:
The whoami service exposes another endpoint at /ip, which we have not explicitly allowed, so let’s test that:
curl http://<yourLBIP>/ip
Output:
Access denied
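If you want to confirm which policy is enforcing this, you can inspect it with kubectl (cnp is the short name for the CiliumNetworkPolicy resource):
kubectl -n whoami describe cnp allow-api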
Locking down cross-namespace traffic
So far, we have looked at how L7 policies enable you to restrict access to certain application endpoints. In a production environment, you typically have more than one service deployed, and chances are they don’t all live in the same namespace. In the following example, we’ll demonstrate how to lock down cross-namespace traffic.
Begin by creating a new namespace using kubectl:
kubectl create ns sleep
Next, we will deploy the sleep service from the Istio samples:
kubectl apply -n sleep -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
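Before testing, wait for the sleep deployment to become ready:
kubectl -n sleep rollout status deploy/sleep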
Out of the box, Kubernetes allows namespaces to talk to each other, which makes sense for anyone just starting with Kubernetes; sometimes it’s easier not to think about security in the early stages of adoption.
We can verify the pods in the sleep namespace can talk to the whoami service using kubectl:
kubectl -n sleep exec -it deploy/sleep -- curl whoami.whoami.svc.cluster.local/ip
Running this command should result in an access denied message, as the same ingress rules still apply. But what if we didn’t want the sleep namespace to talk to whoami at all?
Let’s write a policy to deny traffic from the sleep namespace to the whoami namespace. In a text editor of your choice, create a file called ns-policy.yaml and add the code below:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: deny-sleep-to-whoami
namespace: sleep
specs:
- endpointSelector:
matchLabels: {} # Applies to all pods
egress:
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": sleep # Allow egress from sleep within its namespace
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": whoami # But deny egress to whoami namespace
ingress:
- fromEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": sleep # Allow ingress to sleep within its namespace
- fromEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": whoami # Allow ingress from whoami within its namespace
In the policy above, the endpointSelector with an empty matchLabels ensures this policy applies to all pods in the sleep namespace (the namespace the policy is created in), regardless of their labels.
To control egress, or “outbound” traffic, the toEndpoints rule with matchLabels: "k8s:io.kubernetes.pod.namespace": sleep allows pods in the sleep namespace to talk to other pods within the same sleep namespace.
Because Cilium network policies are allow-lists, once an egress rule is in place any destination not explicitly listed is denied. Since the whoami namespace is not listed, pods in sleep can no longer talk to pods in whoami.
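The namespace selectors above rely on the identity labels Cilium derives for each pod (such as k8s:io.kubernetes.pod.namespace). You can see the endpoints Cilium manages for a namespace, and inspect their identities, via the CiliumEndpoints resources:
kubectl -n sleep get ciliumendpoints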
Apply the policy using kubectl:
kubectl apply -f ns-policy.yaml
Curl the whoami service:
kubectl -n sleep exec -it deploy/sleep -- curl whoami.whoami.svc.cluster.local/api
Instead of access denied, you should be greeted with the following:
curl: (6) Could not resolve host: whoami.whoami.svc.cluster.local
This shows our policy is active: with egress restricted to the sleep namespace, the pod can no longer even reach cluster DNS (which runs in the kube-system namespace) to resolve the whoami service, and traffic to the whoami namespace is no longer allowed.
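If you still want name resolution to work while keeping whoami unreachable, you could extend the egress section of ns-policy.yaml with a rule that allows DNS traffic to CoreDNS. The snippet below is a minimal sketch, assuming CoreDNS runs in kube-system with the label k8s-app: kube-dns (the k3s default); with it in place, the request to whoami would be expected to fail at the connection stage rather than at DNS resolution:
    egress:
      - toEndpoints:
          - matchLabels:
              "k8s:io.kubernetes.pod.namespace": sleep # Existing rule: allow egress within the sleep namespace
      - toEndpoints:
          - matchLabels:
              "k8s:io.kubernetes.pod.namespace": kube-system
              k8s-app: kube-dns # Assumes the default CoreDNS label
        toPorts:
          - ports:
              - port: "53"
                protocol: UDP # DNS lookups over UDP; add a TCP entry as well if needed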
Clean up (Optional)
After completing this tutorial, you may want to clean up the resources you created. To delete the Kubernetes cluster, run the following command:
civo k3s delete cilium-demo --region NYC1
This command will delete the cilium-demo cluster from the NYC1 region in your Civo account.
Summary
One of the many things I love about using Cilium for securing cluster communications is how easy it is. To get some of the features demonstrated in this tutorial, a quick Google search would typically point you towards deploying a service mesh, which can be daunting for new Kubernetes users; perhaps your company avoids bringing in a mesh entirely.
Regardless, Cilium offers many compelling features, and its network policy support is a great addition for users who are already leveraging it as their CNI.
Looking to learn more about Cilium? Take a look at these resources:
- In this meetup, Kunal and Raymond go over some of the observability and networking features of Cilium.
- Wondering how Cilium compares to other CNIs like Flannel? Check out this guide.