
How to Run a Virtual Machine (VM) on a Local Kubernetes Cluster

Vivek Sonar
November 13, 2023 | 8 min read

Kubernetes has emerged as the de facto standard for container orchestration because it revolutionized the way applications are deployed and managed at scale. But sometimes it makes more sense to run certain workloads, such as GPU-intensive jobs, tokenization services, or LDAP/Active Directory applications, on virtual machines (VMs) instead of containers for security, compatibility, or performance reasons.


Virtual machines have long been the go-to technology for running diverse workloads, offering greater isolation, compatibility with legacy applications, and the ability to run different operating systems. However, managing VMs in a traditional infrastructure alongside containerized applications on Kubernetes can be challenging and time-consuming. This is where KubeVirt steps in, offering a solution that lets you run versatile, isolated VMs inside your local Kubernetes cluster.


In this article, we will explore how you can use KubeVirt to run virtual machines inside your local Kubernetes cluster, letting you embrace a hybrid architecture and make better use of your infrastructure resources. We will cover the concepts, tools, and best practices required to set up and manage VMs within a Kubernetes environment. Whether you are a developer, an infrastructure engineer, or an IT professional looking to optimize resource allocation and simplify your deployment pipelines, this article will serve as your guide to running virtual machines on a local Kubernetes cluster. So, let's get started.

What is KubeVirt?

KubeVirt is an open-source project, originally started at Red Hat, that makes provisioning, managing, and controlling VMs inside Kubernetes alongside containerized applications straightforward. Under the hood it builds on the libvirt and QEMU/KVM virtualization stack to run VMs inside pods on a local Kubernetes cluster.

But why do we need KubeVirt?

Developers often require dedicated environments to test their applications, especially when dealing with complex setups that involve multiple services and dependencies. By utilizing KubeVirt in a local Kubernetes cluster, developers can create VMs to simulate realistic testing environments. Apart from this, KubeVirt can also be used to deploy and test legacy applications that run on virtual machines (VMs) on your local cluster.

Prerequisites (Setting up local Kubernetes cluster)

Before getting started with KubeVirt and running VMs on our local Kubernetes cluster, we need a local Kubernetes cluster already in place. In this article, we use Minikube; however, alternatives like K0s, K3s, or Kind are also popular choices. You can follow these instructions to download and install Minikube on your machine. After installing Minikube, start it by running the command below in your command-line interface (CLI):

minikube start

Minikube will by default create a cluster named “minikube” after you execute this command.
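Virtual machines need noticeably more headroom than typical container workloads, so it can help to give the cluster extra CPU and memory when you start it. A minimal sketch, where the sizing below is only an illustration and assumes your default Minikube driver supports it:

# Hypothetical sizing: 4 CPUs and 8 GiB of RAM for the Minikube node
minikube start --cpus=4 --memory=8192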

Installing KubeVirt

Step 1: Enabling the KubeVirt Addon

You can enable the KubeVirt addon in Minikube by using the following command:

minikube addons enable kubevirt
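If you are running a local cluster other than Minikube (for example Kind or K3s), you can install KubeVirt with the upstream operator manifests instead of the addon. This is a sketch based on the KubeVirt quickstart; check the project's releases page for the current version:

# Fetch the latest stable KubeVirt version, then deploy the operator and its custom resource
export VERSION=$(curl -s https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml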

Once KubeVirt is enabled, you can check whether it is working properly by running the following command:

minikube kubectl -- get all -n kubevirt

Note: There should be 7 pods, 3 services, 1 daemonset, 3 deployments, and 3 replica sets deployed. It might take a few minutes for these resources to deploy, depending on your network connection.
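If you prefer not to poll manually, you can also block until KubeVirt reports itself as available (an optional convenience, not a required step):

minikube kubectl -- -n kubevirt wait kv kubevirt --for condition=Available --timeout=5m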


Once the resources are deployed & running, you can move to step 2.

Step 2: Installing Virtctl

Virtctl is a command-line tool provided by KubeVirt for starting and stopping VMs and for accessing their serial consoles and graphical (VNC) displays. To install virtctl, run the following commands:

# Look up the installed KubeVirt version from the cluster:
VERSION=$(kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.observedKubeVirtVersion}")
# Build the release artifact suffix for your OS and architecture (e.g. linux-amd64):
ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/') || windows-amd64.exe
echo ${ARCH}
# Download the matching virtctl binary, make it executable, and install it:
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}
chmod +x virtctl
sudo install virtctl /usr/local/bin
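To confirm that the binary is installed correctly, you can print its version (the exact output will vary with the release you downloaded):

virtctl version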

Now that we have installed virtctl, we are all set to create our first VM resource inside our local Kubernetes cluster.

Step 3: Creating a VM resource inside the local Kubernetes cluster

Create a file named VirtualMachine.yaml and paste the YAML configuration below into it:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

Alternatively, you can download the file with the following command:

wget -O VirtualMachine.yaml https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/vm.yaml
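A quick note on the cloudInitNoCloud volume: the userDataBase64 field carries the VM's cloud-init user data as a base64-encoded string, and the sample value simply decodes to a short greeting. If you want to pass your own user data, you can encode it the same way; the #cloud-config contents below are only an illustration:

# Decode the sample payload from the manifest (prints "Hi.\n"):
echo "SGkuXG4=" | base64 -d

# Encode your own cloud-init user data for the userDataBase64 field:
printf '#cloud-config\npassword: changeme\nchpasswd: { expire: False }\n' | base64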

Now that your VirtualMachine.yaml is ready, go to your terminal and apply it to the Minikube cluster using the command below:

minikube kubectl -- apply -f VirtualMachine.yaml

Then run the command below to check the status of your VM.

kubectl get vms

Step 4: Starting a Virtual Machine

To start the virtual machine using virtctl, run this command:

./virtctl start testvm

Then confirm that the VM is running by using this command:

minikube kubectl -- get vms
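Behind the scenes, starting the VM creates a VirtualMachineInstance (VMI) and a virt-launcher pod that hosts the actual virtual machine process; you can inspect both with:

minikube kubectl -- get vmis
minikube kubectl -- get pods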

As the VM starts, it moves through three phases, namely Scheduling, Scheduled, and Running. Here’s a detailed explanation of each of them:

  • Scheduling: During this stage, the Kubernetes scheduler determines the appropriate node on which the VM can be scheduled based on resource availability. It takes into account factors such as CPU, memory, storage, and network requirements specified for the VM.
  • Scheduled: Once the VM is scheduled, the KubeVirt controller creates the necessary resources and configurations on the target node. This includes allocating storage volumes, setting up the network interfaces, and initializing the VM's environment.
  • Running: At this stage, the VM has started running on the assigned node. The KubeVirt controller monitors the VM's status and ensures it's running properly within the Kubernetes cluster.
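Once the VM reports Running, you can attach to its serial console with virtctl. The CirrOS demo image prints its login credentials on the console; press Ctrl+] to disconnect:

./virtctl console testvm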

Stopping the VMs

To stop a running VM, you can use the following command:

./virtctl stop {VM_Name}

Note: Replace {VM_Name} with the name of the VM you wish to stop.

Deleting the VMs

To delete a VM you have stopped, you can use the following command:

kubectl delete vm {VM_Name}

Note: Replace {VM_Name} with the name of the VM you wish to delete.


Stopping Minikube

Now that you are done with the tutorial, you can stop the Minikube instance by running the following command:

minikube stop

Securing Your VMs


Running VMs inside your cluster using KubeVirt introduces specific security considerations. Here are some essential practices you should follow.

  1. Isolation and resource allocation:
    1. Namespace isolation: Utilize Kubernetes namespaces to isolate VM workloads and prevent unauthorized access or data leakage between different namespaces.
    2. Resource quotas: Define resource quotas for VMs to prevent resource contention. Set limits on CPU, memory, and storage usage to avoid VMs monopolizing resources (see the example after this list).
  2. Hypervisor security:
    1. Secure hypervisor configuration: Ensure that the hypervisor used by KubeVirt is properly configured with security best practices. Follow the guidelines provided by the specific hypervisor technology, such as KVM or QEMU.
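For the resource-quota point above, here is a minimal sketch of what such a policy could look like. The namespace name vm-workloads and the numbers are only illustrations; the quota applies to the virt-launcher pods that KubeVirt creates for your VMs in that namespace:

# Hypothetical quota for a namespace dedicated to VM workloads
apiVersion: v1
kind: ResourceQuota
metadata:
  name: vm-quota
  namespace: vm-workloads
spec:
  hard:
    requests.cpu: "8"        # total CPU the VM pods may request
    requests.memory: 16Gi    # total memory the VM pods may request
    limits.cpu: "16"
    limits.memory: 32Gi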

Conclusion

In this blog, we explored how you can run virtual machines (VMs) inside your local Kubernetes cluster using KubeVirt. By combining the power of Kubernetes' container orchestration with the flexibility of VMs, you can unlock a range of possibilities for your applications.


As the world of technology evolves, the combination of VMs and Kubernetes will continue to play a crucial role in modern software development and deployment. By staying informed, exploring new possibilities, and following best practices, developers can take full advantage of this powerful combination and drive innovation in their projects.