Install the metrics server
To install the metrics server in a Kubernetes cluster, we will first pull the latest YAML manifest from the GitHub repository. To do so, run this command:
wget -O metrics-server.yaml https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
The actual name of the manifest in the repository is components.yaml, but the -O option saves it locally under a more descriptive name (metrics-server.yaml). The manifest contains all the configurations needed to run the metrics server in the cluster.
Now open the metrics-server.yaml file in a text editor, navigate to the Deployment section, and add the --kubelet-insecure-tls flag under the template.spec.containers section, as shown in the screenshot below.
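After the edit, the relevant portion of the Deployment spec should look roughly like this. Note that the other args and their values may differ slightly between metrics server releases; only the last line is the one you add:

```yaml
spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
        - --metric-resolution=15s
        - --kubelet-insecure-tls   # added: skip kubelet certificate verification (lab use only)
```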
The kubelet running on each worker node uses a self-signed certificate that is not trusted by the metrics server. The --kubelet-insecure-tls flag tells the metrics server to skip kubelet certificate verification. In a production environment, you should use valid kubelet certificates signed by a trusted certificate authority (CA) to avoid the risk of potential man-in-the-middle attacks.
Now, apply the manifest file to install the metrics server:
kubectl apply -f metrics-server.yaml
The screenshot shows that all the Kubernetes resources, such as the ServiceAccount, ClusterRole, ClusterRoleBinding, Deployment, and Service, are created successfully. If you don't add the --kubelet-insecure-tls flag and the kubelets use a self-signed certificate, the metrics server Pod will not come up, as shown in the screenshot below:
kubectl get all -l k8s-app=metrics-server -n kube-system
The 0/1 under the READY column indicates that 0 containers are ready out of 1. The metrics server will not work until you get 1/1 here.
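If the Pod stays at 0/1, the Pod events and container logs usually reveal the cause, typically an x509 certificate error when the flag is missing. A couple of commands to check (the k8s-app=metrics-server label comes from the manifest):

```shell
# Inspect Pod events to see why the readiness probe is failing
kubectl -n kube-system describe pod -l k8s-app=metrics-server

# The container log typically shows the TLS/x509 error when
# --kubelet-insecure-tls is missing and kubelets use self-signed certs
kubectl -n kube-system logs deployment/metrics-server
```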
How does the metrics server work?
Let's discuss how the metrics server collects resource utilization metrics. The kubelet, which runs on each node, exposes a /metrics/resource endpoint that reports the CPU and memory utilization of the node and the Pods running on it. The metrics server periodically scrapes this endpoint, aggregates the metrics data for all nodes, and then exposes it in the Kubernetes API server through the Metrics API. To view the metrics endpoint on a particular node, you can use this command:
kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics/resource
The screenshot shows that metrics are available in Prometheus format, which is readable by humans and machines. The metrics server scrapes these metrics, and you can use the kubectl top command to view resource utilization. You can also use other monitoring tools, such as Prometheus, InfluxDB, Grafana, Datadog, etc., to scrape these metrics for historical reporting purposes.
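To confirm that the aggregated data actually reaches the Metrics API, you can also query the API server directly. The paths below are the standard metrics.k8s.io endpoints; pipe the output through jq for readability if it is installed:

```shell
# Aggregated node metrics served by the metrics server via the API server
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq .

# Pod metrics for a specific namespace (kube-system used here as an example)
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods | jq .
```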
View resource metrics
Once the metrics server is deployed in the cluster, you can view the node and Pod metrics. To view the node metrics, run the kubectl top nodes command.
This command shows the utilization of CPU (cores), CPU percentage, memory (bytes), and memory percentage for all cluster nodes.
To view the Pod metrics, run the kubectl top pods command.
This command shows the CPU (cores) and memory (bytes) utilization for the Pods. To view the metrics for the actual containers running in the Pods, you can append the --containers flag, as shown below:
kubectl top pods --containers
This command shows the Pod name, container name, CPU (cores), and memory (bytes) metrics for each container running in the Pod.
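A few kubectl top variations come in handy when you are looking for the heaviest consumers; --all-namespaces and --sort-by are standard kubectl flags:

```shell
# Show Pods across all namespaces, sorted by CPU usage
kubectl top pods --all-namespaces --sort-by=cpu

# Sort by memory usage instead
kubectl top pods --sort-by=memory
```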
When running the kubectl top command, if you get the Metrics API not available error, as shown in the screenshot, make sure the Pod (container) for the metrics server is fully ready.
You can check the metrics server Pod status with this command:
kubectl get pods -l k8s-app=metrics-server -n kube-system
Remember that the metrics server only collects and serves the most recent metrics. It does not retain historical data for reporting purposes. To keep track of historical metrics data, you might want to use monitoring solutions for Kubernetes like Prometheus or Grafana.
The main advantage of the metrics server is its ability to scale a Deployment automatically based on the metrics collected from the kubelets. In the next post, we will discuss the Kubernetes HorizontalPodAutoscaler (HPA), which works with the metrics server to automatically scale the number of Pods (replicas) up and down.