Kubernetes beginners often find it confusing to access container and Pod logs. Since Kubernetes Pods run on a different network than the host network, you cannot access the logs as directly as you can for applications, such as Apache or Nginx, that run directly on a server or a virtual machine (VM). If the application running in a container is configured to write its logs to stdout and stderr, the container log will include the application's logs.
Logs on worker nodes
Kubernetes container vs. Pod logs
The main difference between Pod logs and container logs is that Pod logs hold the logs from all the Pod's containers, while container logs only hold the logs from the specific container. By default, container logs and Pod logs are stored on worker nodes.
If the kubelet (responsible for managing containers and Pods) runs as a systemd service (as opposed to a regular process), its logs are written to journald, the Linux logging system.
Furthermore, individual containers write the logs to the stdout and stderr streams. Stdout is a standard output stream used for writing normal output, such as information, results, and messages, whereas stderr is a standard error stream used for writing error messages, such as warnings, exceptions, and failures.
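To see both streams in action, you can start a throwaway Pod whose container writes one line to each stream. This is a minimal sketch; the Pod name logdemo and the messages are made up for illustration:
kubectl run logdemo --image=busybox --restart=Never -- sh -c 'echo "normal message to stdout"; echo "error message to stderr" >&2'
kubectl logs logdemo
Both lines appear in the container log because the runtime captures stdout and stderr alike.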
The kubelet creates two different log directories for containers and Pods on each worker node:
/var/log/pods: Under this directory, the logs are organized in subdirectories that follow a <namespace>_<pod-name>_<pod-uid>/<container-name> naming scheme. The container directory contains a log file, which is a symlink to the actual log file stored by the container runtime.
The screenshot shows two Pods running on the kube-srv3 node. The /var/log/pods folder on the node contains the directories for those Pods. Each Pod directory contains the individual container name directory (nginx-container, in this case) and the respective log file (0.log, in this case), which is a symlink that points to the actual JSON log file located at /var/lib/docker/containers/<container-id>/<container-id>-json.log.
/var/log/containers: This directory also contains symlinks to the /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/*.log files, which in turn point to the actual JSON log files written by the container runtime at /var/lib/docker/containers/<container-id>/<container-id>-json.log.
The /var/lib/docker/containers/ directory contains the JSON log files organized in <container-id> directories. Remember, these log files are written by the container runtime, which only knows about the containers. It doesn't know about Pods, so logs are organized based on container IDs.
Since the kubelet knows how to communicate with the Kubernetes API server, it is the component that creates the /var/log/pods and /var/log/containers directories.
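On a worker node, you can verify this layout with standard shell commands. The wildcard below simply resolves whichever symlinks exist on your node:
sudo ls /var/log/pods
sudo ls -l /var/log/containers
sudo readlink -f /var/log/containers/*.log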
Viewing Pod logs
Now that you understand where logs are stored on the Kubernetes nodes, let's discuss how to access them. To view the logs, you can use the kubectl logs command. When you run this command, the API server locates the node where the Pod is running and forwards the request to the kubelet on that node. The kubelet retrieves the logs and sends them back to the API server, which returns them to kubectl.
kubectl logs <pod-name>
This command pulls the logs from a Pod that runs a single container. To follow the log in real time, you can include the --follow (or -f) flag; for example, kubectl logs -f <pod-name>. If your Pod crashes or restarts, you can use the --previous flag to view the logs of the previous instance to learn why the Pod was restarted.
kubectl logs <pod-name> --previous
You can see that the Pod was restarted because I manually sent a kill signal to its process. The --previous flag comes in handy for troubleshooting.
If your Pods are running in a different namespace, you need to use the --namespace (or -n) option and specify the namespace. For example, to view the logs of a kube-scheduler Pod that is running as a static Pod in the kube-system namespace, I will first get the Pods from all namespaces using the --all-namespaces (or -A) flag. Then, I use the kubectl logs command, as shown below.
kubectl get pods --all-namespaces
kubectl logs <kube-scheduler-pod-name> --namespace kube-system
Viewing container logs
If the Pod is a multi-container Pod, you can specify the container name to view its logs, as shown below.
kubectl logs <pod-name> <container-name>
I have a Pod named multi-pod that runs two containers: busybox-container and ubuntu-container. The screenshot shows how to view the logs of two containers running in the same Pod. To view the logs of all containers, you can use the --all-containers flag, as shown below.
kubectl logs <pod-name> --all-containers
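For instance, assuming the multi-pod Pod described above, the following commands display the logs of a single container and of all containers. The optional --prefix flag labels each line with its source Pod and container:
kubectl logs multi-pod busybox-container
kubectl logs multi-pod --all-containers --prefix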
So far, we have used the kubectl command, which is typically run from a control plane (master) node or any machine with access to the cluster. However, there are also several ways to access logs directly on worker nodes.
Another option to view container logs on a node is to use the regular cat or tail Linux commands and read the log file stored in the /var/log/pods directory. See the screenshot below for reference:
sudo tail /var/log/pods/default_nginx-pod_64ae98c8-fbee-403b-b817-8affbbc671f2/nginx-container/0.log
We have already discussed how the kubelet creates the /var/log/pods directory. Here, I used the tail command to view the last 10 lines of a log file.
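To confirm that this file is just a symlink, you can resolve it with readlink. The Pod UID below is the one from the previous example; yours will differ:
sudo readlink -f /var/log/pods/default_nginx-pod_64ae98c8-fbee-403b-b817-8affbbc671f2/nginx-container/0.log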
Viewing Docker logs
The container runtime (such as Docker) redirects the containers' stdout and stderr streams and stores the logs in JSON format under the /var/lib/docker/containers/<container-id> directory on each node.
If the container runtime running on the worker node is Docker, you can simply use the docker logs command to view the container logs.
First, we used the docker ps command with a filter to find the name or ID of the desired container. Then, we used the docker logs command to view its logs. If you are using containerd as the runtime instead of Docker, you can use the equivalent crictl ps and crictl logs commands.
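The following commands sketch that workflow. The nginx filter value is only an example, and crictl usually requires root privileges and a configured runtime endpoint:
docker ps --filter "name=nginx"
docker logs <container-id>
sudo crictl ps
sudo crictl logs <container-id>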
Viewing kubelet logs
The kubelet logs are the logs of the kubelet, an agent that runs on each worker node in a Kubernetes cluster. The kubelet is responsible for launching and managing containers, monitoring their health, and reporting their status to the API server.
If the kubelet is running as a systemd service, you can use the journalctl --unit kubelet command. To filter or format the output, you can use various options, such as -f to follow the log in real time, -r to reverse the order, or --since and --until to specify a time range.
journalctl --unit kubelet --since today | grep webapp-pod
Logs on the master node
The master (or control plane) node controls the entire Kubernetes cluster. It stores API server logs, controller manager logs, and scheduler logs.
API server logs
The API server is a critical component of the Kubernetes control plane. It serves as the primary management and control point for the entire Kubernetes cluster. The API server provides the Kubernetes API, which allows users, administrators, and other components within the cluster to interact with and manage Kubernetes resources, such as pods, services, deployments, and nodes.
To view its logs, you can use this command:
kubectl logs -l component=kube-apiserver -n kube-system
The --selector (or -l) option is used to specify the component label, and the --namespace (or -n) option is used to specify the namespace (i.e., kube-system) where the cluster component is running.
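If you are unsure which label values exist in your cluster, listing the kube-system Pods with their labels is a quick sanity check:
kubectl get pods --namespace kube-system --show-labels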
Controller manager logs
The controller manager is one of the key components of the Kubernetes control plane. Its primary responsibility is to manage various controllers that regulate the desired state of different resources within the cluster. Each controller is responsible for ensuring that a specific type of resource (e.g., pods, replica sets, services) maintains its desired state, as defined in the Kubernetes API server.
To view the controller manager logs, run this command:
kubectl logs -l component=kube-controller-manager -n kube-system
Scheduler logs
The scheduler is a critical component of the Kubernetes control plane and is responsible for making decisions about where and how to deploy Pods within the cluster. Its primary role is to ensure that Pods are scheduled to run on suitable nodes based on various factors, such as resource requirements, node capacity, and user-defined constraints. To view the scheduler logs, run this command:
kubectl logs -l component=kube-scheduler -n kube-system
If you can SSH into the master node, you can read the log files for each component directly with the cat or tail commands, as shown in the example after this list:
- /var/log/kube-apiserver.log
- /var/log/kube-controller-manager.log
- /var/log/kube-scheduler.log
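For example, to check the last lines of the API server log (assuming your cluster actually writes these files; kubeadm-based clusters usually do not):
sudo tail -n 20 /var/log/kube-apiserver.log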
Remember, if you configured your cluster with the kubeadm tool, these components run as static Pods. In that case, you can follow the steps mentioned in the Viewing Pod logs section to view the logs for each component. You only need to specify the complete Pod name and the kube-system namespace.
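With kubeadm, a static Pod's name typically ends with the node name, so the commands look like this; <node-name> is a placeholder for your control plane node:
kubectl get pods --namespace kube-system
kubectl logs kube-apiserver-<node-name> --namespace kube-system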
Cluster events
Cluster events in Kubernetes are objects that provide information about what is happening inside a cluster, such as what decisions were made by the scheduler or why some pods were evicted from the node. They are created by Kubernetes components, such as the API server, scheduler, and controller manager.
The cluster events contain the following information (see the example command after this list):
- Type of the event (Normal or Warning)
- Timestamp of the event
- Name of the resource that the event is related to, like the name of a pod or a node
- Reason for the event
- Message of the event
- Source of the event, such as the API server or the scheduler
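A minimal sketch that extracts exactly these fields from the Event objects using standard kubectl output formatting:
kubectl get events -o custom-columns=TIME:.lastTimestamp,TYPE:.type,REASON:.reason,OBJECT:.involvedObject.name,SOURCE:.source.component,MESSAGE:.message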
Cluster events can be used to troubleshoot problems with a Kubernetes cluster, monitor the health of the cluster, and understand how the cluster is being used.
To see what is happening inside a cluster, you can use the kubectl get events command, as shown below:
kubectl get events
This command shows recent events from the default namespace. You can use the --namespace flag to view events from a particular Kubernetes namespace.
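To narrow things down, you can combine the namespace flag with time-based sorting; both are standard kubectl options:
kubectl get events --namespace kube-system --sort-by=.lastTimestamp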
Wrapping up
You just learned how to view Pod logs in Kubernetes. In a real production environment, you might want to use a cluster-level logging solution to collect and store logs from all the containers in the cluster. Kubernetes does not offer a native cluster-level logging solution, but there are many open-source alternatives that you can integrate with it. Fluentd is a popular open-source tool that can be easily deployed in Kubernetes to collect logs and send them to various destinations, such as Elasticsearch or Amazon S3.