In a previous post, we discussed using Kubernetes Deployments to run a containerized application on a Kubernetes cluster. So far, we have also learned that a Deployment automatically manages the lifecycle of Pods and ReplicaSets. In this post, you will learn about Kubernetes Services.
Why a Kubernetes Service?
If you are wondering why you need a Kubernetes Service, remember that each Pod in a Kubernetes cluster is allocated a dynamic IP address. The Pods are ephemeral, meaning they are created and destroyed automatically in a Kubernetes cluster. Their IP addresses change as they are recreated, so we cannot rely on the IP addresses to communicate with the Pods. Furthermore, the Pods in a Kubernetes cluster are only accessible from within the cluster by default. So how can end users access the application from outside the cluster? Well, this is where a Kubernetes Service comes into the picture.
A Kubernetes Service offers an abstraction and a stable endpoint to expose a group of Pods over the network, allowing communication between various components of an application and external users. For example, you might have a group of Pods running a frontend webapp, while another group of Pods are running a backend datastore. A Kubernetes Service helps you connect the frontend webapp to the backend. Similarly, a Kubernetes Service helps you expose the frontend webapp to end users.
Another question you might ask is how a Service would know which Pods to pick as endpoints. The Pods are chosen using the labels assigned to them. If a label app=webapp is assigned to your Pods, you will define a selector with the same label in the Service manifest, and it will select all the Pods matching that label.
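For example, the label-to-selector relationship might look like the following sketch (the names and image are illustrative, not from a real deployment):

```yaml
# Deployment whose Pod template carries the label app=webapp
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp        # the label the Service will match on
    spec:
      containers:
        - name: webapp
          image: webapp:1.0   # illustrative image
---
# Service selecting those Pods by the same label
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp            # must match the Pod labels above
  ports:
    - port: 80
```

Every Pod whose labels match the selector is automatically added to the Service's list of endpoints.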
Types of Services
There are different types of Services in Kubernetes, including the following:

- ClusterIP
- NodePort
- LoadBalancer
- ExternalName
- Headless
Choose the type of Service you need according to your requirements. Let's take a look at each Service type, including its use case.
ClusterIP
A ClusterIP Service exposes the application such that it is only reachable from within the Kubernetes cluster. When you do not specify a Service type in a Service manifest, a ClusterIP Service is created by default. This type of Service is useful for enabling communication between different components of an application (e.g., frontend and backend) within the cluster itself.
To create a ClusterIP Service, you can create a YAML file with the following code:
apiVersion: v1
kind: Service
metadata:
  name: webapp-cluster-ip-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: webapp
Notice that the Service manifest contains apiVersion, kind, metadata, and spec sections, similar to the Pod and Deployment manifests that we saw in previous posts. Let's take a look at the differences.
- The kind is the type of Kubernetes object defined.
- The spec.type is ClusterIP. If you omit this field, a ClusterIP Service is created by default.
- The ports section is an array, so you can define multiple TCP ports in the configuration. Note that the ports defined here are from the Service side, and only the port field is mandatory.
- The port specifies the Service's port number.
- The targetPort specifies the port number of the container in which your application is running.
- The spec.selector section defines the selectors to match the Pod labels.
The following screenshot shows a Service manifest and a Deployment manifest open side-by-side:
Note that if the selector defined in the Service manifest doesn't match the Pod labels, the Service will not be able to identify any Pods as its endpoints.
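Since ports is an array, a single Service can also expose several ports at once; when it does, Kubernetes requires each port entry to be named. A minimal sketch, with illustrative names and port numbers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-multi-port-service   # illustrative name
spec:
  ports:
    - name: http       # names are mandatory when more than one port is defined
      port: 80
      targetPort: 8080
    - name: metrics
      port: 9090
      targetPort: 9090
  selector:
    app: webapp
```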
NodePort
A NodePort Service exposes the application on a static port on each Kubernetes node, making it accessible from outside the cluster via any node's IP address (or hostname) and the static (or automatically allocated) port number. The supported NodePort range is 30000–32767. You can use the NodePort Service type to expose a development app to a group of internal users. However, it is not ideal for exposing a production app to end users because it is not very secure: you are opening the same port on every node in the cluster, and users can reach the application through the IP address and port combination of any node. This can be convenient for some use cases but problematic for others.
To create a NodePort Service, create a YAML file with this code:
apiVersion: v1
kind: Service
metadata:
  name: webapp-node-port-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32001
  selector:
    app: awesome-webapp
In the case of a NodePort Service, we just have to set the spec.type to NodePort. In addition, you can define an optional nodePort if you want to expose your application on a particular port. If omitted, a free port is automatically allocated from the range mentioned above.
LoadBalancer
A LoadBalancer Service exposes the application to external users using the cloud provider's native load balancer, which distributes the traffic to multiple Pods. Note that this Service type can only be used when your Kubernetes cluster is running on a supported cloud platform, such as AWS, Google Cloud, or Azure.
To create a LoadBalancer Service, use the following code:
apiVersion: v1
kind: Service
metadata:
  name: webapp-load-balancer-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: webapp
Note that when you set the spec.type to LoadBalancer, Kubernetes asks your cloud provider to provision a native load balancer for the Service. Remember that cloud-native load balancers are usually not free, so every Service of the LoadBalancer type costs you money. If your Kubernetes cluster runs in a private network without a supported cloud integration, the load balancer is never provisioned, and the Service essentially behaves like a NodePort Service. The LoadBalancer Service is the preferred way to expose a production application to external users.
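Cloud providers typically let you tune the provisioned load balancer through provider-specific annotations on the Service. As one hedged example, the legacy in-tree AWS integration recognizes an annotation that requests an internal rather than internet-facing load balancer (annotation support varies by provider and controller version, so check your provider's documentation):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-internal-lb-service   # illustrative name
  annotations:
    # Provider-specific: asks the AWS integration for an internal load balancer
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: webapp
```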
ExternalName
An ExternalName Service maps the application to the contents of the externalName field (e.g., a DNS record) defined in the Service manifest without creating any proxy or load balancer. It is particularly useful in situations where you want your application to connect to an external service hosted outside the Kubernetes cluster (e.g., Amazon RDS).
To create an ExternalName service, use the following code:
apiVersion: v1
kind: Service
metadata:
  name: mysql-external-name-service
spec:
  type: ExternalName
  externalName: mysql101.1234.ap-southeast-1.rds.amazonaws.com
You can see that, in the case of an ExternalName Service, there is a new field named externalName. While the other Service types return the internal IP address of a Kubernetes resource, an ExternalName Service returns a CNAME record with the value of the externalName field. This way, you can connect an application running in a Kubernetes cluster to an external service.
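An application then simply connects to the Service's in-cluster DNS name, and CNAME resolution does the rest. A minimal sketch of a consuming Pod, with illustrative names and image (and assuming the Service lives in the default namespace):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod       # illustrative name
spec:
  containers:
    - name: webapp
      image: webapp:1.0  # illustrative image
      env:
        # The app uses the stable in-cluster name; DNS resolves it via
        # CNAME to the endpoint in the externalName field.
        - name: DB_HOST
          value: mysql-external-name-service.default.svc.cluster.local
```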
Headless Service
A headless Service allows direct communication with individual Pods without a single Service endpoint or load balancing. A regular Kubernetes Service gets a cluster IP address assigned to it, and incoming traffic is automatically load-balanced across the Pods. In the case of a headless Service, however, no cluster IP is assigned.

A headless Service is particularly useful when deploying stateful applications, such as databases, in a Kubernetes cluster. It comes in handy when you want to control how Pods talk to each other directly without relying on regular Kubernetes Services. The advantage of adding Pods to a headless Service instead of configuring each individual Pod is that clients can still connect to the Pods through the Service's DNS name, as they can with regular Services. In addition, you can scale all the Pods under a Service up or down with a single command or an autoscaler.
To create a headless Service in Kubernetes, use the following code:
apiVersion: v1
kind: Service
metadata:
  name: webapp-headless-service
spec:
  clusterIP: None
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: webapp
You can see that setting the spec.clusterIP field to None creates a headless Service. This means that the Service will not be allocated a cluster IP; whenever a client looks up the Service, DNS simply returns the list of Pods with a matching label, and the client can decide which Pod to connect to.
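Headless Services are most often paired with a StatefulSet, which gives each Pod a stable, per-Pod DNS name through the Service. A hedged sketch, with illustrative names and image:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: webapp           # illustrative name
spec:
  serviceName: webapp-headless-service   # ties the Pods to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: webapp:1.0   # illustrative image
          ports:
            - containerPort: 80
```

Each Pod then gets a stable DNS name of the form webapp-0.webapp-headless-service.default.svc.cluster.local, which clients can use to address that specific replica.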
Create a Service
In the previous section, we discussed how to create different types of Services in Kubernetes, depending on your use case. Once you have the Service manifest ready, use the kubectl create or kubectl apply command. For the demo, let's create a NodePort Service.
kubectl apply -f webapp-node-port-service.yaml
The screenshot shows that the NodePort Service was created successfully. To view the Service, you can use the kubectl get service <service-name> and the kubectl describe service <service-name> commands, as shown in the screenshot below:
You can also use the svc alias instead of service as shown in the screenshot. Note that a Service can be created before actually creating the Pods or the Deployment in the cluster. In that case, the Service will continue to wait for the matching Pods with endpoints set to <none>, as you can see in the screenshot above.
As soon as you create the Deployment, the IP addresses of the Pods will start showing up under the Service's endpoints:
kubectl apply -f webapp.yaml
kubectl get pods -l app=awesome-webapp -o wide
kubectl describe svc webapp-node-port-service
If you still don't see the endpoints populated, make sure the Pod labels and the Service selector match. Now, note the port number from the NodePort field, which is 32001, and try accessing the webapp using the IP address and port number of any node.
You can see in the screenshot that webapp is accessible on my PC, which is in the same network as that of the Kubernetes cluster nodes. I hope this post helps you understand the different types of Kubernetes Services. In the next post, you will learn about Taints and Tolerations in Kubernetes.