Virtualization vs. containers
Traditional virtualization emulates a physical computer. It provides virtual hardware, such as CPUs, memory, and storage, enabling virtual machines (VMs) to run workloads. You can run multiple VMs with multiple operating systems (OS) on a single host.
However, VMs can take up a lot of system resources. Each VM runs not just a full copy of an OS, but a virtual copy of all the hardware that the operating system needs to run. All this uses RAM and CPU cycles. That's still more efficient than dedicating a bare-metal server to each workload, but for smaller applications, deploying an entire OS can be overkill, especially if the application is running in the cloud. This is where containers come in.
Rather than spinning up an entire virtual machine with its own OS, containers rely on the underlying host OS. Containers use features of the host OS to isolate processes and control the processes' access to CPUs, memory, and disk space. The container holds all the code, along with the dependencies that enable the application to run anywhere: on a desktop computer, on traditional on-premises server infrastructure, or in the cloud.
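To make the resource-control point concrete, here is a minimal sketch with the Docker CLI. The `--cpus` and `--memory` flags tell the host OS (via cgroups on Linux) how much CPU and RAM the container's processes may consume; the image name is just an example:

```shell
# Run a web server container capped at 1.5 CPU cores and 512 MB of RAM.
# The limits are enforced by the host kernel, not by a guest OS.
docker run --cpus="1.5" --memory="512m" -d nginx:1.25
```

Unlike a VM, no hardware is emulated here; the kernel simply constrains an ordinary process tree.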
However, there are some drawbacks. Because a container does not include a full OS of its own, applications that depend on the UI or APIs of a particular OS, such as Windows, have to be adapted.
The Docker platform
Docker was the first widely adopted container platform and represents another evolution of virtualization. Docker packages an application together with its dependencies into a single image. You can then run containers based on that image on any Docker-compatible engine. Containers can be deployed manually or with an orchestration tool such as Kubernetes.
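As an illustration of packaging an application with its dependencies, a minimal hypothetical Dockerfile for a small Python app might look like this (the base image, file names, and app are assumptions for the example, not from a specific product):

```dockerfile
# Base image supplies the userland and the Python runtime
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY app.py .

# Command executed when a container is started from this image
CMD ["python", "app.py"]
```

Building with `docker build -t myapp:1.0 .` produces a single image that runs identically on any Docker-compatible engine.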
Kubernetes is an open-source orchestration and automation system for deploying, operating, and scaling containerized applications. Kubernetes is cost-efficient because it matches infrastructure to demand: it can scale applications and their required resources up during peak times and scale them down again during less busy periods.
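A declarative Kubernetes manifest captures this idea: you state how many replicas of a containerized app you want and which resources each needs, and Kubernetes maintains that state. The names and image below are placeholder examples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # example image; any container image works
        resources:
          requests:
            cpu: 100m        # the scheduler reserves these resources
            memory: 128Mi
```

Scaling up for peak load is then a one-line change or a command such as `kubectl scale deployment web --replicas=5`; a HorizontalPodAutoscaler can do the same automatically based on load.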
VMware's container approach
vSphere Integrated Containers (VIC) was VMware's primary offering before Tanzu became available. The VIC Engine allows developers who are familiar with Docker to write code for containers and deploy the containers alongside traditional VM-based workloads on vSphere clusters.
However, the VIC platform isn't optimal, as all the tools and products required for containers run inside VMs. On the other hand, with Tanzu, they are "baked" into the ESXi hypervisor. The management of Tanzu and the namespaces is done via the vCenter Server and the vSphere web client.
In the same way that VMware has integrated vSAN services into ESXi, the Tanzu Kubernetes platform is now a part of ESXi as well. You can activate Tanzu and namespaces through the vCenter Server after deploying and installing ESXi, your HA cluster, and shared storage. You can imagine that Tanzu is much more efficient than VIC.
VMware Tanzu is a Kubernetes platform that lets vSphere administrators manage their Kubernetes clusters with the same tools and workflows they already use for VMs. A great advantage of Tanzu is that admins do not need to deploy NSX-T to handle virtual networking; a traditional vSphere Distributed Switch architecture is sufficient.
When vSphere with Tanzu is activated on a vSphere cluster, it creates a Kubernetes control plane inside the hypervisor layer. This layer contains specific objects that enable running Kubernetes workloads within ESXi.
vSphere admins can grant permissions on resources at the namespace level to DevOps admins. Once you configure those permissions, resource quotas, and storage, you'll obtain a URL that you can hand out to your DevOps admins so that they can log in, access their namespace, and create containers and package apps.
When you grant them owner permissions, they can deploy new namespaces and workloads, or destroy them when they are no longer needed. They can also share the namespace and assign, view, or edit owner permissions for other DevOps engineers or groups.
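From the DevOps admin's side, a typical session against the URL handed out by the vSphere admin looks roughly like this (server address, user name, and namespace are placeholders):

```shell
# Authenticate against the Supervisor Cluster endpoint using the
# kubectl-vsphere plugin shipped with vSphere with Tanzu
kubectl vsphere login --server=https://192.168.1.10 \
    --vsphere-username devops@vsphere.local \
    --insecure-skip-tls-verify

# Switch to the namespace the vSphere admin assigned
kubectl config use-context my-namespace

# Deploy workloads inside that namespace with standard Kubernetes tooling
kubectl apply -f deployment.yaml
```

From this point on, the DevOps admin works with plain `kubectl`, while the vSphere admin retains control over quotas, storage, and permissions at the namespace level.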
vSphere requirements for Tanzu
From a networking perspective, you can (but don't have to) install and configure NSX-T. If you don't, however, you'll lose the ability to run certain components, such as vSphere Pods or Embedded Harbor Registry. You'll still be able to run containers on vSphere with Tanzu as an orchestration platform.
Additionally, unless you use NSX-T, which bundles its own load balancer, you must install an external load balancer together with a vSphere Distributed Switch.
Your vSphere cluster must have high availability (HA) enabled and shared storage configured. You can (but don't have to) run VMware vSAN for shared storage.
Why are containers becoming increasingly popular? The major reason is that they can run almost anywhere and can be easily moved to different hosts or cloud providers. With Tanzu, VMware successfully jumped on the container bandwagon.
Admins can share vSphere resources and run containers in their VMware environment. Tanzu also offers multi-cloud support (VMware Cloud on AWS). You can manage Tanzu Kubernetes clusters through the familiar vSphere UI without maintaining two separate infrastructure stacks. Resource consumption, quotas, security, and authentication are managed the same way as for your traditional VM workloads.
Being able to delegate part of the administrative work to your DevOps admins means you remain in control of the overall infrastructure, with vSphere and ESXi as the base.