By Marko Lukša

This article was excerpted from the book Kubernetes in Action.

 

A replication controller is a Kubernetes resource that ensures a pod (or multiple copies of the same pod) is always up and running. If the pod disappears for any reason (for example, when its node disappears from the cluster), the replication controller creates a new pod immediately. Figure 1 shows what happens when a node (Node 1) goes down and takes two pods with it. Pod A is a standalone pod, while Pod B is backed by a replication controller. After the node disappears, the replication controller creates a new pod (Pod B2) to replace the now missing Pod B. Pod A, on the other hand, is lost completely: nothing will ever recreate it.



Figure 1 When a node fails, only pods backed by a replication controller are recreated


The replication controller in the previous example manages only a single pod, but replication controllers, in general, are meant to manage multiple replicas of a pod, hence their name.


The operation of a replication controller

A replication controller, in essence, constantly monitors the list of running pods and makes sure the actual number of pods of some type always matches the desired number. If there are too few pods running, it creates new pods based on a pod template that is configured on the replication controller at that moment. If there are too many pods running, it removes the excess pods.

You might be wondering how there can be more than the desired number of running pods. This can happen for a number of reasons:

  • A pod of the same type was created manually.
  • A node running a pod disappears, the replication controller creates a replacement pod, and then the lost node reappears along with its original pod.
  • The desired number of pods is decreased, and so on.

I’ve used the term pod types a few times, but actually there’s no such thing. Replication controllers don’t operate on pod types, but simply on sets of pods that match a certain label selector. A replication controller’s job is really just making sure an exact number of pods always matches its label selector. If it doesn’t, the replication controller takes the appropriate action to reconcile the actual number with the desired one. The operation of a replication controller can be thought of as the constantly running loop shown in figure 2.



Figure 2 Replication controller’s reconciliation loop
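The decision made on each pass of this loop can be illustrated with plain shell arithmetic. The numbers below are made up for illustration; the real controller compares the count of pods reported by the API server against the configured spec.replicas:

```shell
# Toy illustration of one iteration of the reconciliation loop.
desired=3   # the configured replica count (spec.replicas)
actual=2    # pods currently matching the label selector
if [ "$actual" -lt "$desired" ]; then
  echo "create $((desired - actual)) pod(s) from the template"
elif [ "$actual" -gt "$desired" ]; then
  echo "delete $((actual - desired)) excess pod(s)"
else
  echo "desired state reached; do nothing"
fi
```

The real loop runs inside the Kubernetes controller manager and reacts to watch events rather than polling, but the comparison it performs is conceptually this simple.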


Parts of a replication controller

A replication controller has three essential parts:

  • a label selector, which determines what pods are in the replication controller’s scope,
  • a replica count, which specifies the desired number of pods that should be running, and
  • a pod template, which is used when creating new pods.

A replication controller’s replica count, its label selector, and even its pod template can all be modified at any time, but only changes to the replica count affect existing pods. Changes to the label selector and the pod template have no effect on existing pods whatsoever. Changing the label selector makes the existing pods fall out of the replication controller’s scope, so the controller stops caring about them completely. A replication controller also doesn’t care about the actual “contents” of its pods (the Docker images, environment variables and other things) once it has created them. The template therefore affects only new pods created by this replication controller; it is simply used as a cookie cutter to stamp them out.
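The same scope rule works in the other direction, too: relabeling a pod removes it from the controller’s scope, and because the controller then sees one matching pod too few, it creates a replacement. For example (the pod name here is hypothetical):

```
$ kubectl label pod kubia-53thy app=foo --overwrite
```

The relabeled pod keeps running, but it is now an unmanaged, standalone pod.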

Like many things in Kubernetes, a replication controller, although an incredibly simple concept, provides or enables the following powerful features:

  • It makes sure a pod (or multiple pod instances) is always running by starting new pods when an existing pod fails, is terminated or is deleted.
  • When a cluster node fails, it creates replacement pods for all the pods that were running on the failed node (of course only those that were under the replication controller’s control).
  • It enables easy horizontal scaling of pods. You can scale a replication controller up or down manually or have the scaling performed automatically by a horizontal pod autoscaler.
  • It enables rolling updates of pods: you run two replication controllers, one managing pods of the previous version and another managing pods of the new version, then slowly decrease the number of replicas on the first while increasing it on the second.
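Scaling up or down, for instance, is a single command (shown here against the kubia controller we create later in this article):

```
$ kubectl scale rc kubia --replicas=10
```

The controller notices the new desired count and creates (or deletes) pods until exactly ten replicas match its label selector.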

But it’s important to note that, powerful as they are, replication controllers never relocate existing pod instances. A pod instance is never moved to another node; instead, the replication controller completely replaces the old instance with a new one.


Creating, using and deleting a replication controller

Let’s see how to create a replication controller and then use it to horizontally scale a group of pods. But first, let’s do a clean sweep of our Kubernetes cluster to remove all resources we’ve created so far. We can delete all pods, replication controllers and other objects at once with the following command:

$ kubectl delete all --all

The command will list every object as it deletes it. As soon as all the pods terminate, our Kubernetes cluster should be empty again.


Creating a replication controller

Like pods and other Kubernetes resources, we create a replication controller by posting a JSON or YAML descriptor to the Kubernetes REST API endpoint.

Let’s create a YAML file called kubia-rc.yaml for our replication controller:

apiVersion: v1
kind: ReplicationController       ❶
metadata:
  name: kubia                     ❷
spec:
  replicas: 3                     ❸
  selector:                       ❹
    app: kubia                    ❹
  template:                       ❺
    metadata:                     ❺
      labels:                     ❺
        app: kubia                ❺
    spec:                         ❺
      containers:                 ❺
      - name: kubia               ❺
        image: luksa/kubia        ❺

❶ What this descriptor is describing
❷ The name of this replication controller (RC)
❸ The desired number of pod instances
❹ The pod selector determining what pods the RC is operating on
❺ The pod template for creating new pods

 

When we post it to the API, Kubernetes will create a new replication controller named kubia, which will make sure there are always three instances of a pod matching the label selector app=kubia running. When there aren’t enough pods, new pods will be created from the provided pod template. The three parts of our replication controller are shown in figure 3.



Figure 3 The three key parts of a replication controller (pod selector, replica count and pod template)


The pod labels in the template must obviously match the label selector of the replication controller, otherwise the controller would just keep creating new pods indefinitely, since spinning up a new pod would not bring the actual replica count any closer to the desired number of replicas. To prevent such scenarios, the API server doesn’t allow creating a replication controller where the selector does not match the labels in the pod template.
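For instance, a fragment like the following, whose selector says app=kubia but whose template labels its pods app=foo, would fail validation when posted to the API (this is an intentionally invalid example):

```yaml
spec:
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: foo     # doesn't match the selector above; the API server rejects this descriptor
```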

To create the replication controller, we use the kubectl create command, which you already know by now:

$ kubectl create -f kubia-rc.yaml
replicationcontroller "kubia" created

As soon as the replication controller is created, it goes to work. Since there are no pods with the app=kubia label, the replication controller will spin up three new pods from the pod template. Here’s a list of the pods. Has the replication controller done its job?

$ kubectl get po
NAME          READY     STATUS              RESTARTS   AGE
kubia-53thy   0/1       ContainerCreating   0          2s
kubia-k0xz6   0/1       ContainerCreating   0          2s
kubia-q3vkg   0/1       ContainerCreating   0          2s
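We can also inspect the replication controller itself with kubectl get rc. The exact columns vary across kubectl versions, but the output will look roughly like this, with the desired and the currently observed replica counts side by side:

```
$ kubectl get rc
NAME      DESIRED   CURRENT   AGE
kubia     3         3         3s
```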