Understanding DaemonSets
From a configuration perspective, a DaemonSet is similar to a ReplicaSet or a Deployment. The only difference is that the kind is set to DaemonSet, and there is no spec.replicas field, because a DaemonSet creates exactly one Pod per node by default. When a new node is added to the cluster, the DaemonSet creates a Pod on it. When the node is removed from the cluster, the Pod is garbage-collected.
If you delete the DaemonSet, the Pod is removed from all nodes. This property of DaemonSet makes it ideal for running various services on cluster nodes, such as a logging agent, a monitoring agent, a container networking solution, and storage services.
You might be wondering how DaemonSet is different from the static Pod that we discussed in the previous post. Remember, static Pods are directly managed by the kubelet service on the node—independently from the Kubernetes cluster. The DaemonSet, on the other hand, is controlled by the Kubernetes API server, which means you can use native Kubernetes features, such as Taints and tolerations, labels, and node affinity with DaemonSets.
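For example, if you only want a DaemonSet to schedule Pods on a subset of nodes, you can add a nodeSelector to the Pod template. The following fragment is just a sketch; disktype: ssd is a hypothetical node label used for illustration:

spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd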
The kubeadm tool uses a DaemonSet to deploy the kube-proxy component on each cluster node. kube-proxy is a network proxy that is responsible for managing the network traffic between Kubernetes Pods and Services. Kubernetes uses a Container Network Interface (CNI) to provide Pod-to-Pod and Pod-to-Service communication, as well as external connectivity for the Pods. Normally, CNI is not a part of Kubernetes, so to deploy a Kubernetes cluster yourself (e.g., with the kubeadm tool), you need to use a plugin that provides the necessary networking services to the Kubernetes cluster.
Flannel is an example of a CNI plugin that also uses DaemonSets. To view DaemonSets in all Kubernetes namespaces, use this command:
kubectl get daemonsets --all-namespaces
The screenshot shows that the kube-proxy DaemonSet is created in the kube-system namespace. The kube-flannel-ds DaemonSet is created in the kube-flannel namespace by the flannel plugin.
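If you want to take a closer look at these DaemonSets, you can inspect them directly. The commands below assume the default namespaces used by kubeadm and the flannel plugin, as shown above:

kubectl describe ds kube-proxy -n kube-system
kubectl get ds kube-flannel-ds -n kube-flannel -o wide

The -o wide output additionally lists the container images and the selector used by the DaemonSet.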
Create a DaemonSet
There is no straightforward imperative command for creating a DaemonSet. So, you first need to create a configuration (YAML) file. For this demo, let's deploy a Prometheus node exporter on all cluster nodes using a DaemonSet. A node exporter is a program that collects and exposes hardware and operating system metrics for Linux systems. Let's create a node-exporter-ds.yaml file.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter-ds
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:latest
        ports:
        - name: metrics
          containerPort: 9100
        volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
        - name: root
          mountPath: /rootfs
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: root
        hostPath:
          path: /
You can see that the configuration file is similar to a Deployment or ReplicaSet, except for the kind, which is set to DaemonSet. There is no replicas field since a DaemonSet is supposed to run one copy of the Pod per node. Here, I defined a Pod with a node-exporter container using the prom/node-exporter image, which exposes the metrics on port 9100. The root (/), /sys, and /proc directories of the node are mounted read-only into the container so that it can read the metrics from them.
Once you have the YAML file ready, apply it to create the DaemonSet.
kubectl apply -f node-exporter-ds.yaml
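Optionally, you can watch the rollout progress with the following command, which waits until the DaemonSet has a running Pod on every eligible node:

kubectl rollout status daemonset/node-exporter-ds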
Now you can view the DaemonSet. Run this command:
kubectl get ds node-exporter-ds
The various columns of the kubectl get command show the status of Pods that are managed by the DaemonSet. The screenshot shows only two Pods, but if you take a look at the nodes, there are three nodes in the cluster. Let's take a look at the Pods to see what's going on.
kubectl get pods -o wide -l app=node-exporter
The above command shows the Pods with an app=node-exporter label, which filters out the other Pods that we are not currently interested in. You can see that the Pods are created on the worker nodes (kube-srv2 and kube-srv3) only. Why not on the control plane node (kube-srv1)? If you remember from my Taints and tolerations post, the control plane node has a Taint set that prevents your Pods from running on it. Run this command to see the Taints configured on the control plane node.
kubectl describe nodes kube-srv1 | grep Taints
You can see there is already a Taint with a NoSchedule effect. If you want to collect the metrics from the control plane node as well, you can modify the node-exporter-ds.yaml file and add a toleration for the Taint set on the control plane node under the template.spec section, as shown below:
tolerations:
- key: node-role.kubernetes.io/control-plane
  effect: NoSchedule
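If your cluster was created with an older Kubernetes version, the control plane node may still carry the legacy node-role.kubernetes.io/master taint instead of (or in addition to) the control-plane taint. In that case, you can tolerate both keys; this is just a variation of the snippet above:

tolerations:
- key: node-role.kubernetes.io/control-plane
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  effect: NoSchedule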
After updating the file, run the kubectl apply -f node-exporter-ds.yaml command once again to update the DaemonSet. This time, you will notice the node-exporter Pod is running on the control plane node too.
kubectl apply -f node-exporter-ds.yaml
kubectl get pods -o wide -l app=node-exporter
kubectl get ds node-exporter-ds
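At this point, you can optionally verify that the exporter actually serves metrics. The commands below are a rough sketch: they grab the name of one node-exporter Pod, forward its metrics port to your workstation, and fetch the first few lines of metrics:

POD=$(kubectl get pods -l app=node-exporter -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "$POD" 9100:9100 &
curl -s http://localhost:9100/metrics | head

Terminate the background port-forward process when you are done.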
That's it. You have a node-exporter service running on all cluster nodes. You can now configure a Prometheus server or any other monitoring tool to scrape the metrics from the Kubernetes nodes. If you add a new node in the future, the DaemonSet takes care of creating a Pod on it. If a node is deleted, the corresponding Pod is garbage-collected as well. To delete the DaemonSet, use this command:
kubectl delete ds node-exporter-ds
As soon as you run this command, the DaemonSet and the node-exporter Pods are deleted from all nodes.
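If you only want to remove the DaemonSet object but keep the existing Pods running (for example, while migrating them to a different controller), recent kubectl versions let you orphan the Pods instead. Keep in mind that the orphaned Pods are no longer managed afterwards:

kubectl delete ds node-exporter-ds --cascade=orphan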
Conclusion
Remember, I only covered a simple use case of DaemonSets. In a production environment, you might want to create a separate namespace for such DaemonSets and Pods and define resource requests and limits so that the monitoring Pods do not put too much overhead on the worker nodes. When implemented properly, DaemonSets let you deploy all kinds of node-level services.
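As a rough illustration of the resource settings mentioned above, you could add a block like the following to the node-exporter container spec; the values are placeholders and need to be tuned for your environment:

resources:
  requests:
    cpu: 50m
    memory: 30Mi
  limits:
    cpu: 100m
    memory: 64Mi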