In the previous part of this series, we discussed the foundational concepts of a service mesh, outlined its advantages, and introduced Istio as one of the leading open-source solutions in this domain.
With that knowledge in hand, we can shift our attention to traffic management: controlling how requests are routed and distributed within our cluster.
In this tutorial, we'll start with a basic demonstration of letting traffic into our cluster, cover Canary deployments, and wrap up with some best practices for increasing network resiliency.
Prerequisites
This post assumes you have Istio set up in your Kubernetes cluster. If you haven’t done so already, take a look at the first part for instructions on how to install Istio.
An introduction to Istio service routing
Before diving into a demo, we need to discuss service routing. Service routing in Istio involves creating custom resources that tell Istio where applications live in our cluster and how to reach them.
Service routing is a pivotal concept to grasp before we proceed to hands-on demonstrations. At the heart of it are two crucial components: Virtual Services and Gateways.
- Virtual Services define the rules for routing traffic to different versions of your services, allowing for fine-grained control.
- Gateways, on the other hand, act as the entry and exit points to your cluster, handling traffic from outside sources and directing it to the appropriate services.
When a user makes a request, it enters the cluster through the Istio gateway. The gateway then hands the request off to the matching virtual service, which in turn identifies the right destination for it.
With this high-level picture of service routing in mind, let's move on to the implementation phase.
Creating a deployment
Before we can send traffic anywhere, we'll need an application deployed into the cluster. In your text editor of choice, create a file called deployment.yaml and add the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pong-server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pong-server
      version: v1
  template:
    metadata:
      labels:
        app: pong-server
        version: v1
    spec:
      containers:
        - name: pong-server
          image: ghcr.io/s1ntaxe770r/pong:e0fb83f27536836d1420cffd0724360a7a650c13
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: pong-server-service
spec:
  selector:
    app: pong-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
This creates a Deployment and a Service for the app we'll be using in this demonstration. The application is a small Go API with a couple of endpoints that will be useful for testing later in the series.
Enabling Automatic Injection
Before applying the deployment, we'll need to label the default namespace. This enables Istio's automatic sidecar injection for any new pods created in the default namespace, so injection doesn't have to be configured for each deployment separately. To label the namespace, execute the following command:
kubectl label namespace default istio-injection=enabled --overwrite
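You can confirm the label took effect with kubectl; the ISTIO-INJECTION column should read "enabled":
kubectl get namespace default -L istio-injection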
After labeling the namespace, you can apply the deployment file you created above using kubectl:
kubectl apply -f deployment.yaml
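If injection is working, each pod should contain two containers: the application itself and the Envoy sidecar. A quick sanity check (the exact pod name will differ in your cluster):
kubectl get pods -l app=pong-server
The READY column should show 2/2 for the pong-server pod.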
Creating a Gateway
For the Virtual Service to effectively route incoming traffic, it relies on an Istio Gateway. Let's proceed by setting up the necessary Gateway. In your preferred text editor, create a file named gateway.yaml and include the following configuration:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: pong-server-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
Now, let's break down the key elements of this configuration:
- metadata: Specifies the metadata for the Gateway, including its name.
- spec:
  - selector: Identifies the pods that will be targeted by this Gateway. In this case, it's selecting pods labeled with istio: ingressgateway.
  - servers: Defines the server settings for the Gateway.
    - port: Specifies the port number (80 for HTTP).
    - name: Names the port "http".
    - protocol: Indicates that the protocol being used is HTTP.
    - hosts: Lists the hosts that this Gateway will accept traffic for. Here, it's set to accept traffic for any host ("*").
Apply the gateway using kubectl:
kubectl apply -f gateway.yaml
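The istio: ingressgateway label in the Gateway's selector matches the ingress gateway pods that Istio created in the istio-system namespace during installation. You can confirm they're running with:
kubectl get pods -n istio-system -l istio=ingressgateway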
Creating a Virtual Service
Now, we can create a virtual service that uses the Gateway we created. In your editor of choice, create a file called virtual-service.yaml and add the following:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: pong-server-virtual-service
spec:
  hosts:
    - "*"
  gateways:
    - pong-server-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: pong-server-service
            port:
              number: 80
Here are the important bits to note:
- metadata: Specifies the metadata for the Virtual Service, including its name.
- spec:
  - hosts: Indicates that this Virtual Service will handle traffic for any host (indicated by "*").
  - gateways: Associates the Virtual Service with the previously defined pong-server-gateway.
  - http:
    - match: Specifies criteria for matching incoming requests. In this case, we're matching requests with a URI prefix of "/".
    - route: Defines where matching requests will be directed. Here, it's set to route to the pong-server-service Service in the default namespace.
Apply the virtual service configuration to your cluster:
kubectl apply -f virtual-service.yaml
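If you have istioctl installed, this is a good point to validate the configuration; the analyze subcommand flags common problems, such as a Virtual Service that references a host or gateway that doesn't exist:
istioctl analyze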
Obtaining your load balancer IP
The istio-ingressgateway service created during the Istio installation is of type LoadBalancer, so on most cloud providers it is assigned an external IP. We'll use that IP to interact with our service. Retrieve it with kubectl:
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
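Note that on local clusters such as minikube or kind, which don't provision cloud load balancers, the external IP may remain empty or pending. In that case, one workaround is to port-forward the ingress gateway and substitute localhost:8080 for the load balancer IP in the commands that follow:
kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80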
Test the service
You should be able to hit the /ping endpoint using curl:
curl http://<yourloadbalancerIp>/ping
Introducing Canary Deployments
Canary deployments play a pivotal role in modern software development and deployment practices. Instead of releasing a new version to everyone at once, you introduce it gradually to a small, controlled subset of users or traffic. This safety net lets developers closely monitor how the new release behaves in a real-world scenario, gaining insight into its stability and performance. In essence, a Canary deployment acts as a trial run, allowing you to catch and fix potential issues before they reach a wider audience.
What is a Canary Deployment?
A Canary Deployment, sometimes referred to as a "Canary Release," is a technique that allows for the controlled and incremental rollout of a new version of an application. Named after the practice of using canaries in coal mines to detect dangerous gases, this strategy involves exposing a small percentage of users or traffic to the new version while keeping the majority on the stable release. This "canary group" serves as an early indicator of any potential issues that might arise with the new version.
Use Cases for Canary Deployments
Canary deployments find their strength in scenarios where a cautious and controlled approach to releasing updates is essential. They are particularly valuable in high-stakes environments such as:
- E-commerce Platforms: Ensuring that a new feature or update doesn't disrupt the shopping experience for a large user base.
- Critical Business Applications: Minimizing the risk of downtime or functionality issues when deploying mission-critical applications.
- Service-Level Agreements (SLAs): Where meeting or exceeding SLAs is paramount, a gradual rollout is crucial for validation.
- Feature Flagging: Gradually enabling new features for specific user segments or geographic locations.
- Performance Testing: Using real-world traffic to validate the performance and scalability of a new release.
Implementing Canary Deployments with Istio
We'll begin by creating a new deployment for the new release. Open up deployment.yaml and add the following configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pong-server-v2
spec:
  selector:
    matchLabels:
      app: pong-server
      version: v2
  template:
    metadata:
      labels:
        app: pong-server
        version: v2
    spec:
      containers:
        - name: pong-server
          image: ghcr.io/s1ntaxe770r/pong:e0fb83f27536836d1420cffd0724360a7a650c13
          env:
            - name: PONG_VERSION
              value: "v2"
          ports:
            - containerPort: 8080
For this demonstration, we are simulating a new version of our application using an environment variable. This lets you differentiate between the two versions without making changes to the underlying code.
Apply the deployment using kubectl:
kubectl apply -f deployment.yaml
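Both versions should now be running side by side behind the same service. You can confirm this by listing the pods along with their version labels:
kubectl get pods -l app=pong-server -L version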
Creating Destination Rules
Using a DestinationRule, we can define subsets of a service based on its version. Subsets let us organize a service into distinct groups, each representing a different version or variant, which becomes particularly important when multiple versions of a service run concurrently. We define these subsets based on attributes like version labels.
To do this, create a file named destination-rule.yaml and add the following:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: pong-server-destination-rule
spec:
  host: pong-server-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
This DestinationRule defines two subsets based on the version label in each deployment.
Apply the destination rule:
kubectl apply -f destination-rule.yaml
Performing the Canary
To complete the traffic split, the virtual service needs one last update. Open up virtual-service.yaml and update it as follows:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: pong-server-virtual-service
spec:
  hosts:
    - "*"
  gateways:
    - pong-server-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: pong-server-service.default.svc.cluster.local
            subset: v1 # Using subset 'v1' from the DestinationRule
            port:
              number: 80
          weight: 80
        - destination:
            host: pong-server-service.default.svc.cluster.local
            subset: v2 # Using subset 'v2' from the DestinationRule
            port:
              number: 80
          weight: 20
In the updated manifest, a second destination has been added to the route configuration in the Virtual Service, splitting incoming traffic between the two subsets, v1 and v2, as outlined in the DestinationRule.
With the weight parameter set at 80 for v1 and 20 for v2, 80% of the traffic is directed to the proven, stable version (v1), ensuring most users experience the reliable service, while the 20% allocated to the newer version (v2) facilitates controlled testing. This strategic distribution of traffic exemplifies the core principle of Canary deployments: a measured transition for end users.
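Once you've gained confidence in v2, promoting it is just a matter of shifting the weights, for example to 50/50 and eventually to 100% on v2, and reapplying the manifest. As a sketch, the final cut-over route would look like this (only the route section shown):
route:
  - destination:
      host: pong-server-service.default.svc.cluster.local
      subset: v2
      port:
        number: 80
    weight: 100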
Testing the Canary
To test the changes we just implemented, head over to your terminal and run the following commands:
Export your Ingress IP:
ING=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Curl the version endpoint:
for i in $(seq 50); do curl http://$ING/version; done
After a few requests, you should see v2 printed to the terminal.
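To get a rough picture of the split, you can tally the responses. This one-liner assumes the /version endpoint returns the version string in the response body (the echo adds a newline so each response lands on its own line):
for i in $(seq 100); do curl -s http://$ING/version; echo; done | sort | uniq -c
Over 100 requests, you should see roughly 80 responses from v1 and 20 from v2, though the exact counts will vary.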
If you do not observe this behavior, it's important to double-check a few things. First, ensure that the destination rule and virtual service configurations have been applied correctly; you can verify this using the kubectl get destinationrule and kubectl get virtualservice commands. Additionally, make sure that the pods for both versions (v1 and v2) are running and healthy; you can use kubectl get pods to check their status. If everything appears to be in order and you're still experiencing issues, take a look at the post on Canary deployments on the Istio blog.
Visualizing Traffic
Testing the canary through the terminal is fine, but Istio enables us to do better. In your terminal, run:
istioctl dashboard kiali
This will open up the Kiali dashboard. Head over to Applications > pong-server, and you should see a traffic graph. Kiali renders a map that visualizes the Canary deployment we just implemented and provides metrics such as latency and requests per second.
Summary
In this tutorial, we navigated the intricacies of Istio's traffic management. Beginning with the foundational steps of allowing traffic into our Kubernetes cluster, we delved into Canary Deployments, a controlled release strategy that allows for the incremental rollout of new versions.
In the next section, we’ll take a look at some of the security features Istio provides.
Further resources
To keep learning about traffic management and deployment strategies with Istio, here are some great resources to check out: