From GitOps and Kubernetes by Billy Yuen, Alexander Matyushentsev, Todd Ekenstam, and Jesse Suen. This article delves into Canary deployments: what they are, how they work, and where you might consider using them.
Take 40% off GitOps and Kubernetes by entering fccyuen into the discount code box at checkout at manning.com.
Canary deployment is a technique to reduce the risk of introducing a new software version in production by rolling out the change to a small subset of users for a short period before making it available to everybody. The canary acts as an early indicator of failure, letting you catch a problematic deployment before it has full impact on all customers at once. If the canary deployment fails, the rest of your servers aren’t affected, and you can terminate the canary and triage the problem.
NOTE Based on our experience, most production incidents are due to a change in the system, such as a new deployment. Canary deployment is another opportunity to test your new release before it reaches the entire user population.
The ingress controller fronts both blue and green services, but in this case, ninety percent of traffic goes to the Blue (Production) service, and ten percent goes to the Green (Canary) service. Because Green gets only ten percent of the traffic, we scale up only one Green pod to minimize resource usage (Figure 1).
Figure 1. Canary Deployment starting with one Green pod
With the Canary running and receiving production traffic, we can monitor its health (latency, errors, etc.) for a fixed period (for example, one hour) to determine whether to scale up the Green Deployment and route all traffic to the Green Service, or, in case of issues, route all traffic back to the Blue Service and terminate the Green pod. Figure 2 depicts a successfully completed canary deployment, with the fully scaled Green pods (three) getting one hundred percent of the production traffic.
Figure 2. Canary Deployment completed successfully with all Green pods
Canary with Deployment
In this tutorial, we perform a Canary Deployment using a native Kubernetes Deployment and Service.
- Create the Blue Deployment and Service (Production).
- Create ingress to direct traffic to blue service.
- View the application in the browser (blue).
- Deploy Green Deployment (one pod) and Service and wait for all pods to be ready.
- Create the canary ingress to direct ten percent traffic to green service.
- View the web page again in the browser (ten percent green with no error).
- Scale up the green deployment to three pods.
- Update the canary ingress to send one hundred percent traffic to the green service.
- Scale down the blue deployment to zero.
We can create the production deployment by applying blue_deployment.yaml.
$ kubectl apply -f blue_deployment.yaml
deployment.apps/blue created
service/blue-service created
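blue_deployment.yaml itself isn't reproduced in this excerpt. Based on the green manifest in Listing 1 and the replica counts used later in this tutorial, it presumably differs only in name, labels, image tag, and replica count — a sketch (assumption, not the book's verbatim listing):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
  labels:
    app: blue
spec:
  replicas: 3            # production starts fully scaled (assumption from the later sed commands)
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: argoproj/rollouts-demo:blue
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: blue-service
  labels:
    app: blue
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: blue
  type: NodePort
```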
Now we can expose an ingress controller, making the blue service accessible from your browser, by applying blue_ingress.yaml. The ‘kubectl get ingress’ command returns the ingress controller hostname and IP address.
$ kubectl apply -f blue_ingress.yaml
ingress.extensions/demo-ingress created
configmap/nginx-configuration created
$ kubectl get ingress
NAME           HOSTS       ADDRESS          PORTS   AGE
demo-ingress   demo.info   192.168.99.111   80      60s
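blue_ingress.yaml isn't reproduced in this excerpt either. Judging from the canary ingress in Listing 2 and the resource names in the output above, it presumably looks roughly like this (assumption):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.info          # only traffic for this hostname is routed
    http:
      paths:
      - path: /
        backend:
          serviceName: blue-service
          servicePort: 80
---
apiVersion: v1
data:
  allow-backend-server-header: "true"
  use-forwarded-headers: "true"
kind: ConfigMap
metadata:
  name: nginx-configuration
```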
NOTE The nginx ingress controller only intercepts traffic with the hostname defined in the custom rule. Please make sure you add “demo.info” and its IP address to your /etc/hosts.
Once you have created the ingress controller, blue service, and deployment, and updated /etc/hosts with “demo.info” and the correct IP address, you can enter the URL demo.info and see the blue service running.
Now we are ready to deploy the new Green version. Let’s apply green_deployment.yaml to create the Green service and deployment.
$ kubectl apply -f green_deployment.yaml
deployment.apps/green created
service/green-service created
Listing 1. green_deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
  labels:
    app: green
spec:
  replicas: 1            #A
  selector:
    matchLabels:
      app: green
  template:
    metadata:
      labels:
        app: green
    spec:
      containers:
      - name: green
        image: argoproj/rollouts-demo:green
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: green-service
  labels:
    app: green
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: green
  type: NodePort
#A The replica count is set to one for the initial green deployment
Next, we create the canary ingress by applying canary_ingress.yaml, which routes ten percent of the traffic to the canary (Green) service.
$ kubectl apply -f canary_ingress.yaml
ingress.extensions/canary-ingress configured
configmap/nginx-configuration unchanged
Listing 2. canary_ingress.yaml.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"        #A
    nginx.ingress.kubernetes.io/canary-weight: "10"   #B
spec:
  rules:
  - host: demo.info
    http:
      paths:
      - path: /
        backend:
          serviceName: green-service
          servicePort: 80
---
apiVersion: v1
data:
  allow-backend-server-header: "true"
  use-forwarded-headers: "true"
kind: ConfigMap
metadata:
  name: nginx-configuration
#A Tells the Nginx ingress controller to treat this ingress as a “Canary” and associates it with the main ingress by matching host and path.
#B Routes ten percent of the traffic to green-service.
Now you can go back to the browser and monitor the green service. You should see roughly ten percent of the bars partially green and ten percent of the dots in green, with no errors (Figure 3).
Figure 3. Canary Deployment with ten percent traffic to green
If you see the correct result (a healthy canary), you’re ready to complete the Canary Deployment (Green Service). We then scale up the green deployment, send all traffic to the green service, and scale down the blue deployment.
$ sed -i '' 's/replicas: 1/replicas: 3/g' green_deployment.yaml
$ kubectl apply -f green_deployment.yaml
deployment.apps/green configured
service/green-service unchanged
$ sed -i '' 's/10/100/g' canary_ingress.yaml
$ kubectl apply -f canary_ingress.yaml
ingress.extensions/canary-ingress configured
configmap/nginx-configuration unchanged
$ sed -i '' 's/replicas: 3/replicas: 0/g' blue_deployment.yaml
$ kubectl apply -f blue_deployment.yaml
deployment.apps/blue configured
service/blue-service unchanged
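Two cautions about the sed commands above: the `sed -i ''` form is BSD/macOS syntax (on GNU/Linux, use `sed -i` with no empty-string argument), and the pattern 's/10/100/g' rewrites every occurrence of "10" in the file, which can clobber unrelated values. A more targeted substitution is safer — a sketch against a miniature stand-in for canary_ingress.yaml (the file path and pattern here are illustrative):

```shell
# Build a miniature fragment standing in for canary_ingress.yaml
cat > /tmp/canary_fragment.yaml <<'EOF'
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "10"
EOF

# Rewrite only the canary-weight line (GNU sed shown; use `sed -i ''` on macOS)
sed -i 's/canary-weight: "10"/canary-weight: "100"/' /tmp/canary_fragment.yaml

cat /tmp/canary_fragment.yaml
```

After running this, only the weight changes to "100"; the `canary: "true"` annotation is untouched.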
Now you should be able to see all green bars and dots as one hundred percent of the traffic is routed to the Green Service (Figure 4).
NOTE In real production use, we need to ensure all green pods are up before we send one hundred percent of the traffic to the canary service. Optionally, we can incrementally increase the percentage of traffic to the green service as the green deployment scales up.
Figure 4. Canary Deployment with one hundred percent traffic to green
Canary with Argo Rollouts
As you can see, a Canary Deployment can help detect issues early and prevent a problematic deployment, but it involves many additional steps in the deployment process. In the next tutorial, we use Argo Rollouts to simplify the Canary deployment process.
- Create the Production Rollout and Service (Blue).
- Create ingress to direct traffic to production service.
- View the application in the browser (Blue).
- Apply the manifest with the Green image; ten percent of traffic goes to the canary for sixty seconds.
- View the web page again in the browser (ten percent green with no error).
- Wait sixty seconds.
- View the application again in the browser (all Green).
First, create the ingress controller, demo-service, and “Blue” deployment (Listing 3).
$ kubectl apply -f ingress.yaml
ingress.extensions/demo-ingress created
configmap/nginx-configuration created
$ kubectl apply -f canary_rollout.yaml
rollout.argoproj.io/demo created
service/demo-service created
$ kubectl get ingress
NAME           HOSTS       ADDRESS          PORTS   AGE
demo-ingress   demo.info   192.168.99.111   80      60s
Listing 3. canary_rollout.yaml.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
  labels:
    app: demo
spec:
  replicas: 3
  strategy:
    canary:             #A
      steps:
      - setWeight: 10   #B
      - pause:
          duration: 60  #C
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: argoproj/rollouts-demo:blue
        imagePullPolicy: Always
#A Deploy with the Canary strategy.
#B Scale out enough pods to serve ten percent of the traffic. In this example, Rollouts scales up one Green pod alongside the three Blue pods, resulting in the Green pod getting twenty-five percent of the traffic. Argo Rollouts can work with a Service Mesh or Nginx Ingress for fine-grained traffic routing.
#C Wait sixty seconds. If there is no error or user interruption, scale Green up to one hundred percent.
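The twenty-five percent figure in callout #B follows from how a plain Service load-balances: without a service mesh or weighted ingress, traffic splits evenly across all ready pods, so the canary's share is its fraction of the total pod count rather than the requested weight. A quick check of the arithmetic:

```shell
# Even per-pod load-balancing: canary share = green pods / total pods
BLUE_PODS=3
GREEN_PODS=1
TOTAL=$((BLUE_PODS + GREEN_PODS))
echo "canary share: $((100 * GREEN_PODS / TOTAL))%"   # prints "canary share: 25%"
```

With three Blue pods and one Green pod, the Green pod receives twenty-five percent of requests, matching the callout above.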
NOTE For the initial deployment (Blue), Rollout ignores the Canary setting and performs a regular deployment.
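The canary strategy described in the callouts above takes a single pause before going to one hundred percent. Argo Rollouts' steps syntax also lets you ramp traffic in several increments, as the earlier note on incrementally increasing canary traffic suggests — a hypothetical sketch (the weights and durations here are illustrative, not from the book):

```yaml
strategy:
  canary:
    steps:
    - setWeight: 10      # start with a small canary slice
    - pause:
        duration: 60     # observe for a minute
    - setWeight: 50      # ramp up if healthy
    - pause:
        duration: 60
    - setWeight: 100     # full cutover
```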
Once you have created the ingress controller, service, and deployment, and updated /etc/hosts with “demo.info” and the correct IP address, you can enter the URL demo.info and see the blue service running.
Once the Blue service is fully up and running, we can update the manifest with the green image and apply it.
$ sed -i '' 's/demo:blue/demo:green/g' canary_rollout.yaml
$ kubectl apply -f canary_rollout.yaml
rollout.argoproj.io/demo configured
service/demo-service unchanged
Once the canary starts, you should see something similar to Figure 3. After one minute, the Green ReplicaSet scales up as Blue scales down, with all bars and dots going Green (Figure 4).
That’s all for this article. If you want to see more of the book’s contents, preview them on our browser-based liveBook platform here.