Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everybody.
Welcome to Conf42, Kube Native 2024.
Hello everyone.
We are truly happy to be here to talk about KubeVirt and how it brings
virtualization into the Kubernetes world.
This is our first talk at Conf42, and we are very excited about that.
Hope it won't be our last talk.
Today we'll dive deep into how you can leverage KubeVirt to run virtual
machines alongside your containers,
giving you a unified platform for managing both of them.
By the end of the session, you'll know how to get started with KubeVirt and integrate
it into your Kubernetes environment.
But first, let's introduce ourselves, Batuhan.
Hello again, my name is Batuhan, and I am working as a platform
engineer here at TrendVault.
For those interested in software supply chain security, you might know
me as one of the early contributors of Sigstore, for which I was selected
as best Sigstore evangelist in 2022.
I am proud to be the very first Kubestronaut from Turkey.
I am also CNCF ambassador and the captain and one of the organizers of KCD Istanbul.
And my name is Koray.
I work as a Kubernetes consultant and technical trainer for Kubermatic.
I am also a CNCF ambassador and a Kubestronaut.
I also contribute to the Kubernetes project on SIG K8s Infra.
To start with, Batuhan, you're a platform engineer, and I know that you've worked
on a few migration projects from virtual machines towards containers.
Let's start by looking at the fundamental difference between
containers and virtual machines.
Yeah, sounds great, right?
People might know that containers are lightweight, right?
Designed to run isolated applications with minimal overhead by sharing
the host's kernel and resources.
This makes them incredibly efficient for modern workloads.
On the flip side, VMs virtualize entire operating systems, including the kernel.
So each VM runs its own operating system, right?
They require a hypervisor, adding more overhead, but offering
full isolation, which is a good thing from a security perspective.
While containers are ideal for microservices and stateless applications,
many companies still rely on VMs, especially for legacy systems that haven't
transitioned to the container world.
For those cases, VMs remain the go-to choice, due to the
need for full OS isolation and compatibility with older workloads.
The real challenge is that some applications still require the
VM model due to dependencies or architecture, but their owners also want to take
advantage of the benefits of containerization.
Koray, what do you think happens in those scenarios?
This is where KubeVirt comes into play.
So why KubeVirt?
Why do we need to bring virtualization into Kubernetes?
KubeVirt allows you to manage both containers and VMs within
a single Kubernetes platform.
So instead of managing separate infrastructure for VMs and
containers, KubeVirt provides a way to unify your workloads.
This is especially helpful for teams in transition.
The ones that are moving from VMs to containers, or those who want to mix
both types of workloads for flexibility. You can run traditional VM-based
applications while modernizing other parts of your stack
with containers, and they can all run under the same orchestration layer.
Batuhan, what do you think you can actually do with KubeVirt?
Yeah, that's a good question.
First, you can manage VMs just like any other Kubernetes resource, like
deployments, pods, services, etc.
Meaning that you can scale, restart, and manage VMs using
familiar Kubernetes tools.
So this allows you to run legacy applications that aren't easily
containerized, which is particularly beneficial for organizations needing
to support both modern and traditional workloads on a single platform.
KubeVirt also supports live migration, which is quite helpful, enabling
you to move running VMs between nodes without any downtime.
That's crucial for maintaining uptime during maintenance.
Plus, you can manage the full life cycle of VMs: provisioning, starting,
stopping, and deleting, using standard Kubernetes commands like
kubectl create, kubectl scale, etc.
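As a rough sketch, assuming a VM named testvm already exists in the cluster, that lifecycle looks like this:

```shell
# List virtual machines, just like any other Kubernetes resource
kubectl get vms

# Start and stop a VM with virtctl (or the kubectl virt plugin)
virtctl start testvm
virtctl stop testvm

# Live-migrate the VM to another node, and finally delete it
virtctl migrate testvm
kubectl delete vm testvm
```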
So yeah, this is a lot of virtualization, right?
Exactly.
Let's look at how KubeVirt works under the hood.
So at its core, KubeVirt is built on top of Kubernetes, leveraging
its native API for VM management.
And that means, as you mentioned before, scheduling, networking, and storage
are all responsibilities of the
underlying Kubernetes cluster.
So how does it work?
There are several key components here.
We have virt-operator, which installs and manages the life
cycle of the other KubeVirt components.
There is virt-controller, which watches for the VM resources and creates the
necessary virtual machine instances, the VMIs.
Then we have virt-launcher: it's a per-VM pod that launches
and runs the VM workload.
So each VM runs inside a pod on a Kubernetes node.
Yeah.
Obviously, KubeVirt uses libvirt to handle VM interactions with the hypervisor,
like QEMU/KVM or any other hypervisor.
And we have virt-handler: that component runs on each Kubernetes
node, and it's responsible for handling VM-related requests, such as
life cycle operations and migrations.
This architecture allows Kubernetes to treat VMs as just another
workload alongside the containers.
Batuhan, can you simplify it a little bit?
So yeah, here is the simplified version of the previous diagram,
highlighting the key points.
The main takeaway is that KubeVirt integrates seamlessly with Kubernetes.
In this setup, VMs are treated as pods running on Kubernetes
nodes, meaning that they follow the same scheduling and resource
management processes as containers.
KubeVirt also leverages the Kubernetes API alongside libvirt and
QEMU, enabling seamless VM life cycle management while taking advantage
of Kubernetes features like autoscaling, networking, and storage.
This integration simplifies the process of deploying and managing both VMs and
containers within the same environment by eliminating the need for separate tools.
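To make that concrete, here is a minimal VirtualMachine manifest in the style of the upstream KubeVirt examples; the name and container disk image are illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

Applying this with kubectl apply creates the VM object; when the VM is started, KubeVirt spawns a virt-launcher pod for it, which Kubernetes schedules like any other pod.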
Yeah.
This brings us to running all workloads on Kubernetes.
It's quite easy as our friend Mark says, right?
So it's demo time.
Before we start, a special thanks to metalstack.cloud.
They provided their bare metal managed Kubernetes service for us to use.
So if you want to run KubeVirt on public cloud providers, you have to enable nested
virtualization, because it's another virtualization layer on top of VMs.
In the case of AWS, for example, you have to use metal instance types.
That's why here we'll be using a bare metal service.
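As a quick sanity check, assuming a Linux node with an Intel CPU, you can verify that hardware and nested virtualization are available like this:

```shell
# A non-zero count means the CPU exposes hardware virtualization (Intel VMX or AMD SVM)
grep -Ec '(vmx|svm)' /proc/cpuinfo

# On Intel hosts, "Y" (or "1") means nested virtualization is enabled in the kvm_intel module
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo "kvm_intel module not loaded"
```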
And for the demo, we have a repo where we include both Flux and Argo CD
installations, or scenarios.
If you prefer more UI-based stuff, you can also try out Argo CD, but
today we'll be working with Flux, and we'll be creating our VMs on
KubeVirt using a GitOps approach.
So I have already bootstrapped the flux in the cluster.
Let's check it out.
And you will see that Flux is already deployed, and now I would like to
start by deploying KubeVirt first.
So I will be exporting some environment variables.
This one needs to be the KubeVirt version, v1.2.0,
and I'll create a directory for it.
And in that directory, I'll be copying the
KubeVirt operator and KubeVirt custom resource manifests.
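The steps above can be sketched as follows; the exact release version and target directory are assumptions:

```shell
# Assumed KubeVirt release; check the project's releases page for the current one.
export VERSION=v1.2.0
mkdir -p clusters/kubevirt

# Official release manifests: the operator and the KubeVirt custom resource.
curl -L -o clusters/kubevirt/kubevirt-operator.yaml \
  "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
curl -L -o clusters/kubevirt/kubevirt-cr.yaml \
  "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"
```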
If we take a look now, as you see, in my GitOps configuration directory
the flux-system is there, and there are also the manifests for KubeVirt.
Eventually we'll be deploying our VMs using a Helm chart that I just created,
which will also handle some other stuff, including creating SSH keys for the VM.
So you can do such things, or copy them from here as well.
The VM provisioner is the app that is used in the Argo CD scenario.
You can also take a look at that one.
So first I would like to push these things and let Flux deploy
KubeVirt for us. If I do, let's start checking it out.
So here Flux will first deploy virt-operator, and then the operator will be responsible
for deploying the other components,
like virt-launcher and virt-handler.
Let's wait for Flux to do it, or maybe we can just manually trigger the
reconciliation on the Flux side.
Let's try it: flux reconcile source git flux-system.
Now, if we check the pods, as you see, virt-operator
is being deployed, and then it will start
deploying the other components to the system once it's up and running.
And we can also check it here, like this.
So as you see, it's being deployed now, and we can also wait
for KubeVirt to be deployed.
So right now the phase is Deploying, and then we'll wait until
the KubeVirt part is up and ready.
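A sketch of that wait, using the standard KubeVirt status checks:

```shell
# Print the KubeVirt CR phase; it moves from Deploying to Deployed
kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.status.phase}'

# Or block until the Available condition is met
kubectl -n kubevirt wait kubevirt/kubevirt --for=condition=Available --timeout=5m
```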
I hope this will be fast.
Meanwhile, we can also check again
what's happening with the pods,
if there are any failures.
No, there are no failures.
So virt-controller, virt-operator, virt-api:
they're all deployed and ready.
If you check again: yeah,
now the condition is met.
And if we check all the resources in the kubevirt namespace, we'll see
that virt-controller, virt-handler,
virt-api, and virt-operator are deployed.
There are a few services.
Also, virt-handler is a DaemonSet.
So they're all running.
Yeah.
To be able to interact with KubeVirt VMs, you need to install
the virtctl binary, or the kubectl virt plugin.
I've already installed them, so I will not show that, but
essentially, you will need them working.
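For reference, a sketch of both install options (the release version is an assumption):

```shell
# Option 1: download the standalone virtctl binary from the KubeVirt releases
export VERSION=v1.2.0
curl -L -o virtctl \
  "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64"
chmod +x virtctl && sudo mv virtctl /usr/local/bin/

# Option 2: install it as a kubectl plugin via krew
kubectl krew install virt
```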
As I said, I've already created a Helm chart and pushed it to an OCI
repository on the GitHub container registry.
So I'll be using that one to create a VM.
First, I will create a virtual-machines
directory and put all my virtual
machine related manifests there.
And now I would like to create a source
from my GitHub registry.
Then you can check what is set in the values YAML file, but
simply, I'll be providing the memory for my VM, the hostname of my VM, and
which container disk will be used.
So I'll be using Fedora, and I can also provide some user data,
like changing the password; here the password is fedora.
And you can also provide more information, like SSH keys or other stuff.
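Since the chart is the speakers' own, the exact keys are assumptions, but the values file described above might look something like this:

```yaml
# Hypothetical values for the speakers' VM chart; key names are assumptions.
memory: 1Gi
hostname: test-vm-1
containerDisk: quay.io/containerdisks/fedora:latest
cloudInit:
  userData: |
    #cloud-config
    password: fedora
    chpasswd: { expire: false }
```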
So I'll be creating this values file, and then I will be creating the HelmRelease.
I will use the VM chart.
The HelmRelease is test-vm-1.
I'll use the Helm repository that I just created;
when I push the code, it will be created.
I will create a virtual-machines namespace and then use the values
file that I also created here.
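A sketch of the Flux objects involved; the OCI URL, chart name, and namespace are placeholders, not the speakers' actual values:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: kubevirt-vm
  namespace: virtual-machines
spec:
  type: oci
  url: oci://ghcr.io/<your-user>/charts   # placeholder registry path
  interval: 10m
---
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: test-vm-1
  namespace: virtual-machines
spec:
  interval: 10m
  chart:
    spec:
      chart: vm                            # placeholder chart name
      sourceRef:
        kind: HelmRepository
        name: kubevirt-vm
  values:
    memory: 1Gi
```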
So if we check now, under virtual machines I have the
kubevirt-vm Helm repo, and also one VM YAML to create the VM.
So let's again push those for Flux to pick up, and then watch
the pods again to see what's happening now.
So I'm expecting a new namespace here called virtual-machines, which will
have a pod for my test-vm-1, hopefully soon; or maybe I can reconcile it again
manually, yes.
Flux reconcile.
Yeah.
So as you see, the virtual machine is being started.
So this is a Helm chart:
it will first create some SSH keys for me, and then
it will start virt-launcher.
So virt-launcher is working.
So this is the pod that contains my VM,
and it's being initialized.
I think it's running.
I think it's running.
And if we check the virtual machines so I have some, I have a virtual
machine running on that node.
It's in running phase, so it's I think it's ready to be connected.
As I said, I'll do kubectl get secret -n virtual-machines.
So this Helm chart created a private and public key for me.
I will need to download those to connect.
So I simply create these two files, public and private key, and then the
command to connect to the VM is kubectl virt ssh. I'm using an identity file;
this is the private key that I just downloaded.
I will use the local-ssh
connection type, with the fedora user, to test-vm-1, and it's running
in the virtual-machines namespace.
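Put together, the connection steps might look like this; the secret name and data field are assumptions about the chart's output:

```shell
# Download the private key the chart generated (hypothetical secret/field names)
kubectl -n virtual-machines get secret test-vm-1-ssh \
  -o jsonpath='{.data.private-key}' | base64 -d > vm-key
chmod 600 vm-key

# Connect with the local ssh client as the fedora user
kubectl virt ssh --local-ssh -i vm-key -n virtual-machines fedora@vm/test-vm-1
```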
So if I do that: yes, I accept the key, and now I am on my Fedora machine.
So this is a VM that runs all the stuff; it has its own kernel and everything.
If you run uname -a, you will see that it's running its own Fedora kernel and everything.
This is a VM, and the good part is that now, from this VM, I am able
to connect to the Kubernetes API.
So if I try to connect to the kubernetes
service in the default namespace:
yeah, I'm unauthorized, I know that, but
I was just able to reach the Kubernetes API.
So the application you have running here, whatever that is, can
connect to other services inside your Kubernetes cluster, so that your
virtualized application can connect directly to your containerized applications.
Simply put, that's what this demonstrates.
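The connectivity check above can be sketched as:

```shell
# From inside the VM: the API server is reachable via the in-cluster service DNS name
curl -k https://kubernetes.default.svc
# An anonymous request gets a 401/403 JSON response from the API server,
# which still proves network reachability from the VM into the cluster
```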
Some other companies, like Red Hat or Kubermatic, provide
other approaches built on KubeVirt. For example, Kubermatic
creates Kubernetes clusters by using KubeVirt VMs:
there are VMs on the KubeVirt cluster, and using those VMs a Kubernetes
cluster is created. You can find that sort of information in their documentation.
To conclude: KubeVirt makes it easier to manage hybrid workloads.
So you don't have to choose between VMs and containers.
With its seamless integration into Kubernetes, KubeVirt opens up new
possibilities for legacy modernization and workload consolidation.
So thank you for joining our session.
If you have any questions, you can reach both of us on
LinkedIn, X, and also GitHub.
Thank you very much.
Have a great conference.