Kubernetes ReplicationControllers, Deployments, and Upgrading an Existing ReplicationController to a Kubernetes Deployment

Upendra Kumarage
May 13, 2020

Most production-level Kubernetes clusters already use Kubernetes Deployments. But some previously deployed Kubernetes applications may still be using the ReplicationController, which is deprecated and will eventually be removed. Although upgrading a ReplicationController to a Deployment is not a big challenge, it can be difficult to find all the relevant information, scattered here and there, and digest it in a short period of time. Also, if an upgrade is planned, the impact on the live application and its traffic needs to be considered as well.

So, I thought of presenting this scattered information in a briefly organized manner. As I always believe, once we have a high-level idea of what we are dealing with and how we plan to resolve the issue, it becomes fairly easy to gain further knowledge and continue forward. There could be ways of doing this other than what I have explained here, and there could be much easier methods as well. The standard Kubernetes documentation, forum discussions, and other articles scattered across the internet hold a treasure trove of information on these topics.

First of all, what is a ReplicationController and what is a Deployment? A ReplicationController ensures that, at any given time, a specific number of Pods of a certain application is up and running in the Kubernetes cluster. A ReplicationController automatically replaces any failed Pods, thus keeping the number of Pods consistent.

A Kubernetes Deployment can be considered the successor of the ReplicationController. Like a ReplicationController, a Deployment automatically replaces any failed Pods, keeping the number of Pods consistent. But a Deployment supports several more extensive use cases beyond maintaining a consistent number of Pods, such as rolling updates and rollbacks. You can refer to [2] for more information on these use cases.
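As a quick taste of what this unlocks, a Deployment can be inspected and rolled back with kubectl's rollout subcommands, which have no ReplicationController equivalent (the Deployment name below is the one we will create later in this article),

kubectl rollout status deployment/nginx-app-defaultv1 -n myapps
kubectl rollout history deployment/nginx-app-defaultv1 -n myapps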

Sample ReplicationController and a Service for further understanding

To discuss this further, I am going to use the Nginx web server, as it is the most widely used example as well as the simplest to deploy. Provided below is a ReplicationController template for deploying Nginx Pods,

nginx-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-app-defaultv1
  namespace: myapps
spec:
  replicas: 2
  selector:
    app: nginx-v1
  template:
    metadata:
      name: nginx-v1
      labels:
        app: nginx-v1
    spec:
      containers:
      - name: nginx-app-defaultv1
        image: nginx
        ports:
        - containerPort: 80

Here, note the apiVersion and kind,

apiVersion: v1
kind: ReplicationController

In a ReplicationController, the Kubernetes API version is represented as v1. I am not going to explain Kubernetes API versions here; to get more information on Kubernetes API versioning and naming conventions, you can refer to [3]. The kind is a string value representing the REST resource this object represents. In this template, the kind is set to ReplicationController.

Also, I have created a new namespace named “myapps” and added it to metadata.namespace in the above template. This was mainly done so that we can isolate our Deployment testing from the rest of the Kubernetes applications you have in your cluster. I assume you are familiar with Kubernetes namespaces, as we are going to deploy our ReplicationController, Deployment, and the relevant Service into this namespace.
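If you want to follow along, the namespace can be created with,

kubectl create namespace myapps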

Let’s assume that, due to an unhealthy worker node, a Pod is recreated by the ReplicationController. The new Pod is created with a different IP from the first one. To mitigate this issue, production-grade Kubernetes applications use Kubernetes Services. When a Kubernetes Service is created, it is assigned a unique IP address (also called the clusterIP). This IP address is tied to the lifespan of the Service and will not change unless the Service is terminated or recreated. The Pods and other clients are configured to talk to the Service instead of individual Pod IPs, and requests received by the Service are automatically load-balanced across the Pods behind it.

A fuller description of Services and how they communicate would lengthen this article, so let’s assume that our Nginx Pods are exposed using the following Service template.

nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-app-defaultv1
  namespace: myapps
  labels:
    run: nginx-app-defaultv1
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    # must match the Pod labels defined in the ReplicationController template,
    # otherwise the Service will not route traffic to our Nginx Pods
    app: nginx-v1

Here,

apiVersion: v1
kind: Service

Notice that the apiVersion is the same but the kind has changed to Service. Again, I am not going to explain how a Service template is written, as that would lengthen this article; you can refer to [4] for a further understanding of Kubernetes Services. My intention here is to explain how to upgrade an existing ReplicationController to a Kubernetes Deployment, along with the Deployment-related features.

We usually use the kubectl create command to create a resource from a file. I assume that many production-grade Kubernetes clusters have their own deployment scripts consisting of such kubectl commands. However, for testing purposes, let’s create the Nginx Pods from the ReplicationController and the Service with the commands below. As mentioned earlier, I have created a new namespace named “myapps” for this demonstration.

kubectl create -f nginx-rc.yaml
kubectl create -f nginx-service.yaml

Now the Pods created by the ReplicationController and the relevant Service are up and running. You can check the ReplicationController details with,

kubectl get rc -n myapps
[Screenshot: Nginx ReplicationController]
kubectl describe rc nginx-app-defaultv1 -n myapps
[Screenshot: ReplicationController description]

Same way, you can find the Service information we created for exposing the Nginx Service,

kubectl get svc -n myapps
kubectl describe svc nginx-app-defaultv1 -n myapps
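One sanity check worth doing here: the Service’s spec.selector must match the Pod labels (app: nginx-v1 in our templates) for traffic to reach the Pods. You can confirm that the Service has picked up the Nginx Pods by listing its endpoints,

kubectl get endpoints nginx-app-defaultv1 -n myapps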

Upgrading our ReplicationController to Deployment

So now we have two up and running Nginx Pods which were created from the ReplicationController and we have an up and running Service to communicate with the Pods. Now, the next task is updating the Nginx template to use Deployments instead of ReplicationController.

The two major changes you need to make in order to upgrade a ReplicationController to a Deployment are to the apiVersion and the kind. They should be changed as follows,

apiVersion: apps/v1
kind: Deployment

Is that all we need to do? Let’s try to create Pods from our new Deployment. I have named the template upgraded to a Deployment nginx-deployment.yaml,

kubectl create -f nginx-deployment.yaml

It does not work; Kubernetes will throw the following error,

[Screenshot: Error message]

So changing the apiVersion and kind alone is not enough to upgrade the ReplicationController to a Deployment. You also need to add a matchLabels clause to spec.selector. What matchLabels basically does is determine which Pods are managed by the Deployment. Our updated template looks as below,

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app-defaultv1
  namespace: myapps
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-v1
  template:
    metadata:
      name: nginx-v1
      labels:
        app: nginx-v1
    spec:
      containers:
      - name: nginx-app-defaultv1
        image: nginx
        ports:
        - containerPort: 80

Now we can create a Deployment using the above template. So, the next question: we already have the same Nginx Pods created by our ReplicationController, and we have one Service running that allows us to communicate with the Nginx Pods, so will there be any conflict? The good news is that there will not be. You can create Pods from the Deployment and they will run simultaneously with the Pods created from the ReplicationController, since each controller only manages the Pods it owns through their owner references.

kubectl create -f nginx-deployment.yaml
[Screenshot: Nginx Pods from both the ReplicationController and the Deployment]
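The Pod listing in the screenshot can be reproduced with,

kubectl get pods -n myapps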

Here, you can see the two Pods created from the ReplicationController and the two Pods created from the Deployment. The highlighted Pods were created by the Deployment; note that the suffix appended to their names is a bit longer than that of the ReplicationController's Pods, because it includes an extra hash. You can get the details of the Deployment using,

kubectl get deployments -n myapps
[Screenshot: Deployment information]
kubectl describe deployment nginx-app-defaultv1 -n myapps
[Screenshot: Deployment description]
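Under the hood, a Deployment manages its Pods through a ReplicaSet, which is where the extra hash in the Pod names comes from. You can list it with,

kubectl get rs -n myapps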

A final note to wrap this up: if you are using ReplicationControllers in a production-grade Kubernetes cluster and you are planning to upgrade to Deployments, it is quite possible to do the upgrade without any downtime. As you can see, Pods from both the ReplicationController and the Deployment are up and running at the same time. The Service does not need to be changed, and the newly created Pods from the Deployment will use the same Service, as we are not changing any of the Service-related directives.

Furthermore, if you are using automated scripts in your production environment, it is advisable to create the Deployment manually using the kubectl create command, as there might be dependencies in your scripts on which Pods are deployed or undeployed along with their respective Services. Once the Pods have been created, you can verify the functionality you expect from them, and then scale down the Pods created from the ReplicationController to re-evaluate the functionality of the new Pods created by the Deployment. If any inconsistency is noted in the new Pods, you can easily scale the ReplicationController back up to the number of Pods you desire. This will protect your production environment from any traffic loss during the upgrade process.
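For example, the scale-down and, if needed, the scale-back-up can be done with kubectl scale (the replica counts here are illustrative),

kubectl scale rc nginx-app-defaultv1 --replicas=0 -n myapps
kubectl scale rc nginx-app-defaultv1 --replicas=2 -n myapps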

Initially, I thought of also describing how to do a rolling update with a Deployment and explaining the new features a Deployment has that a ReplicationController does not. But that would further lengthen this article, so I will write another article covering rolling updates and a brief introduction to the other features of a Deployment.

References

[1] https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/

[2] https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment

[3] https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md

[4] https://kubernetes.io/docs/concepts/services-networking/service/

[5] https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

[6] https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources

[7] https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/

[8] https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application-introspection/
