In a previous post of this Kubernetes guide, I discussed how to create a Pod in Kubernetes using imperative and declarative configurations. In this post, you will learn how to create a Deployment in Kubernetes. In Kubernetes, a Deployment is a resource object that defines how your application is deployed and managed in a cluster. You just need to define the desired number of instances (a.k.a. replicas) of your application, the container image, resource requirements, and other options in a configuration file, and Kubernetes takes care of deploying and managing the application.

ReplicationController and ReplicaSet

As discussed in the previous post, Kubernetes encapsulates containers with your application in Pods. An instance of a Pod running in the Kubernetes cluster is commonly referred to as a replica. Replicas play an important role in offering scalability and high availability for your application.

Apart from Deployments, let's discuss the concepts of ReplicationController and ReplicaSet in Kubernetes. A ReplicationController is a Kubernetes object that ensures that the desired number of replicas (or Pods) is always running in the cluster. If a Pod is deleted or crashes for some reason, the ReplicationController automatically creates a new Pod to bring your deployment back to the desired state. The ReplicaSet is the next-generation version of the ReplicationController and serves the same purpose. The important thing to note is that the ReplicaSet has superseded the ReplicationController; it additionally supports set-based label selectors, and the Kubernetes documentation now recommends using Deployments, which manage ReplicaSets for you, instead of ReplicationControllers.

You can define a ReplicaSet (or a ReplicationController) with a YAML configuration file. Fortunately, a Deployment handles creating and managing the ReplicaSets and Pods for you, so you do not need to create the YAML files manually. The Kubernetes documentation states that you do not need to manually manipulate the ReplicaSets managed by a Deployment.
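For reference, a minimal ReplicaSet manifest looks roughly like the following sketch (the name webapp-rs is just a placeholder; the selector must match the labels in the Pod template):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-rs          # placeholder name
spec:
  replicas: 2              # desired number of Pods
  selector:
    matchLabels:
      app: webapp          # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: nginx
        image: nginx
```

Again, you normally won't write this file yourself, because a Deployment generates and manages it for you.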

Create a Deployment

Now that you understand the concept of ReplicaSets, let's learn how to create a Deployment in Kubernetes. Remember the --dry-run=client trick we used to generate a YAML file for a Pod quickly? We can use the same trick to write a Deployment manifest. To do so, use the kubectl create deployment command, as shown below:

kubectl create deployment webapp --image=nginx --replicas=2 --dry-run=client -o yaml > webapp-deployment.yaml
Create a YAML configuration file for a Deployment in Kubernetes

Let's take a brief look at the various options used with the command:

  • webapp: The name of the Deployment
  • --image=nginx: Specifies the name of the container image
  • --replicas=2: Specifies the number of replicas (Pods) to run
  • --dry-run=client: Used to preview the object instead of creating it
  • -o yaml: Specifies that the output format is YAML (short for --output=yaml)

Finally, we use the shell redirection operator (>) to redirect the output to the webapp-deployment.yaml file. To view the configuration file or to make further changes, open it in a text editor:

nano webapp-deployment.yaml
View or change the Deployment manifest in Kubernetes

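The generated manifest should look roughly like this (kubectl may emit slightly different fields depending on its version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: webapp
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webapp
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
```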
You can see that the configuration looks similar to that of a Pod, except that the apiVersion is now apps/v1 and the kind is Deployment. Additionally, there is a new field called replicas, which is currently set to 2; it tells Kubernetes to run two Pods. Furthermore, there is an entirely new template section that Pod configuration files lack. This section defines the metadata and spec of the Pods that this Deployment will run. Kubernetes uses the labels and the selector field to group the Pods that belong to this Deployment.

Once you have the configuration file ready, use the following command to create the Deployment:

kubectl create -f webapp-deployment.yaml
Create a Deployment in Kubernetes

Alternatively, you can use the kubectl apply command to create the Deployment. To view the Deployments, use the kubectl command, as shown below:

kubectl get deployments
View Deployments in Kubernetes

Let's take a look at the output:

  • The NAME column shows the name of the Deployment (webapp, in our case).
  • The READY column uses the ready/desired format to show how many replicas of your application are ready. Note that 2/2 here indicates that two replicas out of two are ready. If there were a problem in the rollout, you would see 0/2 or 1/2.
  • The UP-TO-DATE column contains the number of replicas that are updated to achieve the desired state.
  • The AVAILABLE column specifies how many replicas are available to users.
  • The AGE column shows the amount of time that has passed since the Deployment was created.
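For reference, the output typically resembles the following (the age value is illustrative):

```
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
webapp   2/2     2            2           30s
```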

You can also see the rollout status of the Deployment with the following command:

kubectl rollout status deployment/webapp
View the rollout status of the Kubernetes Deployment

By now, you know that the Deployment is supposed to create ReplicaSets and Pods. If you run the kubectl get all command, you will be able to see all the Kubernetes objects at once in the current or specified namespace.

View all objects in the Kubernetes cluster

The screenshot above shows the Pods and ReplicaSets with the Deployment object. Just ignore the service object for now. You can also view each resource individually with the kubectl get <resource> command. For example, to view the ReplicaSets, run this command:

kubectl get replicaset

An alternative is this command:

kubectl get rs
View the ReplicaSets created by a Kubernetes Deployment

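The output resembles the following (the hash suffix and age will differ in your cluster):

```
NAME                DESIRED   CURRENT   READY   AGE
webapp-569ffb4c8f   2         2         2       2m
```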
Notice how the Pod template hash is automatically appended to the ReplicaSet name and the Pod names. When we increase or decrease the number of replicas in a Deployment, the ReplicaSet is updated accordingly.

Update a Deployment

Now comes the most interesting part. Do you remember from the previous post that we had to delete the existing Pod and recreate it manually with the updated configuration? As mentioned earlier, a Deployment automatically handles creating, updating, and deleting Pods and ReplicaSets. However, there are scenarios in which you might want to update a Deployment yourself. Let's consider two common cases: scaling a Deployment and updating its configuration.

Scale a Deployment using an imperative command

We defined two replicas for the webapp Deployment. When two replicas are not enough, you need to scale the Deployment up. You can easily scale the replicas up or down with an imperative command, as shown below:

kubectl scale deployment webapp --replicas=4
Scale a deployment in Kubernetes with an imperative command

Here, we used the kubectl scale deployment command, where webapp is the name of the Deployment and --replicas=4 specifies the desired number of replicas. After running this command, you can see that four Pods are now running. Alternatively, you can use the kubectl edit deployment webapp command to edit the live Deployment directly and update the replicas to four.

Both methods work, but changes made this way are difficult to track. For instance, if you encounter a problem due to a bad config change, tracking it down and fixing it will be harder, because imperative commands are recorded only in the user's shell history.

Update a Deployment using a declarative command

Since we created the Deployment using the webapp-deployment.yaml configuration file, the best approach is to open the file in a text editor, update the number of replicas or make other changes as needed, and then run the kubectl apply -f webapp-deployment.yaml declarative command. The benefit of using this approach is that it allows you to track the changes in the Deployment and possibly revert them when necessary.

If you open the configuration file in a text editor, you will notice that the replicas field is still set to two, even though the Deployment was scaled to four replicas with an imperative command. This is why you should avoid imperative commands when multiple admins manage the cluster. Let's keep the replicas at two but change the container image from nginx to a custom container image stored in a private Harbor registry.

Modify the Deployment configuration file and change the container image

You can see that I have updated the container image. Since this container image is not public, we also need to specify Harbor credentials using an image pull secret. Harbor is an open-source, cloud-native registry that stores, signs, and scans container images for vulnerabilities. When the updated configuration is applied, you will notice that old Pods are terminated, and new ones are created automatically.
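The relevant part of the updated Pod template looks something like the following sketch (the image path and tag are placeholders; substitute your own registry path):

```yaml
spec:
  template:
    spec:
      containers:
      - name: nginx
        image: harbor.testlab.local/library/webapp:v1   # placeholder private image
      imagePullSecrets:
      - name: harbor-cred    # secret holding the Harbor registry credentials
```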

Update a Kubernetes deployment

If you check the Deployment rollout status again, you will see that the rollout is stuck with the following message:

Waiting for deployment "webapp" rollout to finish: 1 out of 2 new replicas have been updated

Let's take a look at the Pods to understand what's going on. Here, you will notice that the status of one Pod is ImagePullBackOff. Let's dig a little deeper by running the kubectl describe command on the Pod and examining the Events section.

kubectl get pods -l app=webapp
kubectl describe pod webapp-569ffb4c8f-b9x54
Use the kubectl describe pod command to troubleshoot the image pull error

The Events section gives us a hint about why the image pull failed, as you can see in the screenshot. The reason is that I forgot to create the secret in the Kubernetes cluster, even though it is referenced in the Deployment configuration under the imagePullSecrets field. Even though the Deployment update wasn't successful, it's important to note that the existing Pods are still running, which means your application is still available to users. This is because a Kubernetes Deployment uses the RollingUpdate strategy by default to update Pods incrementally, resulting in zero downtime.

View the deployment strategy in Kubernetes

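The default strategy values in the Deployment spec look like this:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%          # up to 25% extra Pods may be created during the update
    maxUnavailable: 25%    # at most 25% of the desired Pods may be unavailable
```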
While troubleshooting, if you determine that the rollout problem is due to a bug that could take a while to fix, you can simply revert the rollout with the kubectl rollout undo deployment webapp command. But in our case, let's fix the image pull error by creating a secret in the Kubernetes cluster. To do so, run the following command:

kubectl create secret docker-registry harbor-cred \
	--docker-server=harbor.testlab.local \
	--docker-username=surender \
	--docker-password=P@ssw0rd123 \
	--docker-email=surender@testlab.local
Create a secret for the private container registry in the Kubernetes cluster

After creating the secret, delete the Pod that was stuck in the ImagePullBackOff state; the ReplicaSet then creates a replacement Pod, which triggers the rollout again. This time, the rollout should be successful and error-free.

Force the Deployment rollout again after resolving the image pull error

The screenshot shows that the deployment rollout is now successful, and as per the desired new configuration, we have two replicas (Pods) running.

I hope this post helps you get started with Kubernetes Deployments. In the next post, we will discuss Services in Kubernetes to expose the Deployment.

© 4sysops 2006 - 2023
