Knowing how to view Kubernetes logs is essential when it comes to troubleshooting. Whereas the worker nodes store the Pod logs, container logs, and kubelet logs, the master node stores the API server logs, controller manager logs, and scheduler logs. If your entire Kubernetes cluster is in trouble, the cluster events can help you track down the issue.

Kubernetes beginners often find it confusing to access container and Pod logs. Since Kubernetes Pods run on a different network than the host network, you cannot access their logs as easily as the logs of applications, such as Apache or Nginx, that run directly on a server or a virtual machine (VM). If the application running in a container is configured to write its logs to stdout and stderr, the container log will include the application's logs.

Logs on worker nodes

Kubernetes container vs. Pod logs

The main difference between Pod logs and container logs is that Pod logs hold the logs from all the Pod's containers, while container logs hold only the logs of a specific container. By default, container logs and Pod logs are stored on worker nodes.

If the kubelet (responsible for managing containers and Pods) runs as a systemd service (as opposed to a regular process), its logs are written to journald, the Linux system logging service.

Furthermore, individual containers write the logs to the stdout and stderr streams. Stdout is a standard output stream used for writing normal output, such as information, results, and messages, whereas stderr is a standard error stream used for writing error messages, such as warnings, exceptions, and failures.

The kubelet creates two different log directories for containers and Pods on each worker node:

/var/log/pods: Under this directory, the logs are organized in subdirectories that follow a <namespace>_<pod-name>_<pod-id>/<container-name> naming scheme. The container directory contains a log file, which is a symlink to the actual log file stored by the container runtime.

Viewing the directory structure of /var/log/pods

The screenshot shows two Pods running on the kube-srv3 node. The /var/log/pods folder on the node contains the directories for those Pods. Each Pod directory contains the individual container name directory (nginx-container, in this case) and the respective log file (0.log, in this case), which is a symlink that points to the actual JSON log file located at /var/lib/docker/containers/<container-id>/<container-id>-json.log.
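
If you want to verify the symlink target on your own node, you can resolve it with readlink. This is just a sketch based on the example Pod above; your namespace, Pod name, and Pod ID will differ:

sudo readlink -f /var/log/pods/default_nginx-pod_64ae98c8-fbee-403b-b817-8affbbc671f2/nginx-container/0.log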

/var/log/containers: This directory also contains symlinks to the /var/log/pods/<namespace>_<pod-name>_<pod-id>/<container-name>/*.log files, which in turn point to the actual JSON log files written by the container runtime at /var/lib/docker/containers/<container-id>/<container-id>-json.log.

Viewing the directory structure of /var/log/containers
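
To see these symlinks on your own worker node, you can simply list the directory; the output depends on the Pods scheduled on that node:

sudo ls -l /var/log/containers/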

The /var/lib/docker/containers/ directory contains the JSON log files organized in <container-id> directories. Remember, these log files are written by the container runtime, which only knows about the containers. It doesn't know about Pods, so logs are organized based on container IDs.

Viewing the directory structure of /var/lib/docker/containers
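
As a quick sketch, you can list the runtime's log directories and peek at one of the JSON log files; the container ID below is a placeholder that you must replace with a real ID from your node:

sudo ls /var/lib/docker/containers/
sudo tail -n 2 /var/lib/docker/containers/<container-id>/<container-id>-json.log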

Because the kubelet communicates with the Kubernetes API server and therefore knows which containers belong to which Pods, it is the component that creates the /var/log/pods and /var/log/containers directories.

Viewing Pod logs

Now that you understand where logs are stored on the Kubernetes nodes, let's discuss how to access them. To view the logs, you can use the kubectl logs command. When you run this command, the API server locates the node where the Pod is running and forwards the request to the kubelet on that node. The kubelet retrieves the logs and sends them back to the API server, which returns them to kubectl.

kubectl logs <pod-name>
View the logs of a single-container Pod

This command pulls the logs from a Pod that runs a single container. To follow the log in real time, you can include the --follow (or -f) flag; for example, kubectl logs -f <pod-name>. If your Pod crashes or restarts, you can use the --previous flag to view the logs of the previous instance and learn why the Pod was restarted.

kubectl logs <pod-name> --previous
View the logs of a crashed or restarted Pod to know the reason for a crash or restart

You can see that the Pod was restarted because I manually sent a kill signal to its process. The --previous flag comes in handy for troubleshooting.
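
A few other kubectl logs flags are also worth knowing for troubleshooting; the Pod name below is a placeholder:

kubectl logs <pod-name> --tail=20
kubectl logs <pod-name> --since=1h --timestamps

The --tail flag limits the output to the last n lines, --since restricts it to a time window, and --timestamps prefixes each line with its timestamp.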

If your Pods are running in a different namespace, you need to use the --namespace (or -n) option and specify the namespace. For example, to view the logs of a kube-scheduler Pod that is running as a static Pod in the kube-system namespace, I will first get the Pods from all namespaces using the --all-namespaces (or -A) flag. Then, I use the kubectl logs command, as shown below.

kubectl get pods --all-namespaces
kubectl logs <kube-scheduler-pod-name> --namespace kube-system
View the logs of the kube-scheduler Pod running in the kube-system namespace

Viewing container logs

If the Pod is a multicontainer Pod, you can specify the container name to view its logs, as shown below.

kubectl logs <pod-name> <container-name>
View the logs of a multicontainer Pod

I have a Pod named multi-pod that runs two containers: busybox-container and ubuntu-container. The screenshot shows how to view the logs of two containers running in the same Pod. To view the logs of all containers, you can use the --all-containers flag, as shown below.

kubectl logs <pod-name> --all-containers
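
Alternatively, you can pass the container name explicitly with the -c (or --container) flag, which reads a bit more clearly in scripts:

kubectl logs <pod-name> -c <container-name>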

Keep in mind that kubectl only works where a kubeconfig for the cluster is available, which is typically the control plane (master) node. On the worker nodes themselves, there are several other ways to access the logs.

Another option to view container logs on a node is to use the regular cat or tail Linux commands and read the log file stored in the /var/log/pods directory. See the screenshot below for reference:

sudo tail /var/log/pods/default_nginx-pod_64ae98c8-fbee-403b-b817-8affbbc671f2/nginx-container/0.log
View Pod logs with the tail command on a worker node

We have already discussed how the kubelet creates the /var/log/pods directory. Here, I used the tail command to view the last 10 lines of a log file.
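
Because the Docker runtime writes each log line as a JSON object, you can also pretty-print the entries with jq, provided the tool is installed on the node. This sketch reuses the path from the example above; note that containerd uses a different, non-JSON log format, so this only applies to Docker:

sudo tail -n 5 /var/log/pods/default_nginx-pod_64ae98c8-fbee-403b-b817-8affbbc671f2/nginx-container/0.log | jq .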

Viewing Docker logs

The container runtime (such as Docker) redirects the containers' stdout and stderr streams to log files in JSON format under the /var/lib/docker/containers/<container-id> directory on each node.

If the container runtime running on the worker node is Docker, you can simply use the docker logs command to view the container logs.
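
For example, you could first find the container and then fetch its logs; the name filter below is a placeholder for your container's name:

docker ps --filter "name=nginx-container"
docker logs <container-id>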

View container logs with the docker logs command on a worker node

First, we used the docker ps command with a filter to find out the name or ID of the desired container. Then, we used the docker logs command to view its logs. If you are using the containerd runtime instead of Docker, you can use the equivalent crictl ps and crictl logs commands.
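
The crictl equivalents look like this; they usually require root, and the name filter and container ID are again placeholders:

sudo crictl ps --name nginx-container
sudo crictl logs <container-id>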

Viewing kubelet logs

The kubelet logs are the logs from the kubelet process, which is a process that runs on each worker node in a Kubernetes cluster. The kubelet is responsible for launching and managing containers, monitoring the health of containers, and reporting the status of containers to the API server.

If the kubelet is running as a systemd service, you can use the journalctl --unit kubelet command. To filter or format the output, you can use various options, such as -f to follow the log in real time, -r to reverse the order, or --since and --until to specify a time range.

journalctl --unit kubelet --since today | grep webapp-pod
View the kubelet logs with the journalctl command on a worker node
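
For instance, to review the kubelet log for a specific time window in reverse order, you can combine the flags mentioned above; the timestamps are placeholders:

journalctl --unit kubelet --since "2023-10-05 09:00" --until "2023-10-05 10:00" -r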

Logs on the master node

The master (or control plane) node controls the entire Kubernetes cluster. It stores API server logs, controller manager logs, and scheduler logs.

API server logs

The API server is a critical component of the Kubernetes control plane. It serves as the primary management and control point for the entire Kubernetes cluster. The API server provides the Kubernetes API, which allows users, administrators, and other components within the cluster to interact with and manage Kubernetes resources, such as pods, services, deployments, and nodes.

To view its logs, you can use this command:

kubectl logs -l component=kube-apiserver -n kube-system
View kube-apiserver logs on the master node

The --selector (or -l) option is used to specify the component label, and the --namespace (or -n) option is used to specify the namespace (i.e., kube-system) where the cluster component is running.

Controller manager logs

The controller manager is one of the key components of the Kubernetes control plane. Its primary responsibility is to manage various controllers that regulate the desired state of different resources within the cluster. Each controller is responsible for ensuring that a specific type of resource (e.g., pods, replica sets, services) maintains its desired state, as defined in the Kubernetes API server.

To view the controller manager logs, run this command:

kubectl logs -l component=kube-controller-manager -n kube-system 
View controller manager logs on the master node

Scheduler logs

The scheduler is a critical component of the Kubernetes control plane and is responsible for making decisions about where and how to deploy Pods within the cluster. Its primary role is to ensure that Pods are scheduled to run on suitable nodes based on various factors, such as resource requirements, node capacity, and user-defined constraints. To view the scheduler logs, run this command:

kubectl logs -l component=kube-scheduler -n kube-system
View kube-scheduler logs on the master node

If you can SSH into the master node, you can read the log files for each component directly with the cat or tail commands, as shown in the example after this list:

  • /var/log/kube-apiserver.log
  • /var/log/kube-controller-manager.log
  • /var/log/kube-scheduler.log
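
For example, assuming these files exist on your distribution, you could read the last 20 lines of the scheduler log like this:

sudo tail -n 20 /var/log/kube-scheduler.log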

Remember, if you configured your cluster with the kubeadm tool, these components run as static Pods. In that case, you can follow the steps mentioned in the Viewing Pod logs section to view the logs for each component. You only need to specify the complete Pod name and the kube-system namespace.
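
As a hypothetical example for a kubeadm cluster, where each static Pod's name ends with the node name:

kubectl get pods --namespace kube-system
kubectl logs kube-apiserver-<node-name> --namespace kube-system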

Cluster events

Cluster events in Kubernetes are objects that provide information about what is happening inside a cluster, such as what decisions were made by the scheduler or why some pods were evicted from the node. They are created by Kubernetes components, such as the API server, scheduler, and controller manager.

The cluster events contain the following information:

  • Type of the event (e.g., Normal or Warning)
  • Timestamp of the event
  • Name of the resource that the event is related to, like the name of a pod or a node
  • Reason for the event
  • Message of the event
  • Source of the event, such as the API server or the scheduler

Cluster events can be used to troubleshoot problems with a Kubernetes cluster, monitor the health of the cluster, and understand how the cluster is being used.

To see what is happening inside a cluster, you can use the kubectl get events command, as shown below:

kubectl get events
View Kubernetes cluster events

This command shows recent events from the default namespace. You can use the --namespace flag to view events from a particular Kubernetes namespace.
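
Two variations that often help when troubleshooting are filtering for warnings and sorting the events chronologically across all namespaces:

kubectl get events --field-selector type=Warning
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp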

Wrapping up

You just learned how to view Pod logs in Kubernetes. In a real production environment, you might want to use a cluster-level logging solution to collect and store logs from all the containers in the cluster. Unfortunately, Kubernetes does not offer a native cluster-level logging solution, but there are many open-source alternatives that you can integrate with Kubernetes. Fluentd is a popular open-source tool that can be easily deployed in Kubernetes to collect logs and send them to various destinations, such as Elasticsearch or Amazon S3.
