Transcript
This transcript was autogenerated. To make changes, submit a PR.
Are you an SRE, a developer, or a quality engineer who wants to tackle the challenge of improving reliability in your DevOps? You can enable your DevOps for reliability with Chaos Native. Create your free account at ChaosNative Litmus Cloud.
Hello everyone. Welcome to my session, Top New CNCF Projects to Look Out For. It's going to be a great time, I hope, and thank you very much, everyone, for attending. But let's get straight to it: top new CNCF projects to look out for. You can see the session title on the slide there, and that's what we will be going through today. Before all of my sessions, I like to talk very briefly about what to expect, what value you get by attending this talk, and the learning goals for the session. The learning goal here, and what I hope you will take away afterwards, is that you get inspired. I hope you encounter new technologies and projects that you haven't encountered before, so that you can start using them, either in your hobby projects or in your work, or you simply get to know really cool CNCF projects and are inspired by them to continue further. So that's the main goal of this session
as well. And I update this session periodically anytime
new projects pop up that are interesting and whatnot.
So keep tuning back if you
are interested. So who am I, and why am I speaking to you here today? I'm Annie. Hi, nice to meet you. I'm a CNCF ambassador and a product marketing manager at Cast AI. Cast AI does Kubernetes cost optimization, Kubernetes cost optimization by automation, that's where the tongue twister happened. We promise to cut your cloud bill in half, and that's what I do for my day job. But I also do a lot of speaking at conferences, and I am a Kubernetes and CNCF meetup co-organizer. I'm also an Azure MVP, as well as an early-stage startup coach and a co-host of the Cloud Gossip podcast, which you can find at cloudgossip.net.
So let's get started, now that I hope we got all the tongue twisters out of the way, and kick off with the main content for today's session.
So, the Cloud Native Computing Foundation, CNCF. That's the topic today, essentially, and the projects that come from there. Let's briefly look into what CNCF, aka the Cloud Native Computing Foundation, is. Its goal and mission is building sustainable ecosystems for cloud native software, and it hosts critical components of the global technology infrastructure. CNCF is home to, for example, Kubernetes and Prometheus, all of these great projects. CNCF brings together the world's top developers, users, and vendors, and runs the largest open source developer conferences, for example KubeCon + CloudNativeCon, it's a long name. And CNCF is part of the nonprofit Linux Foundation as well.
I think the impact of cloud native is relatively well known, but just as a quick recap: CNCF does a yearly survey where they explore the current landscape of the cloud native world, what technologies people are using, what challenges they are facing, and so forth. This data is from the CNCF 2020 survey. So let's explore the stats a bit. The use of containers in production has increased to 92%, up from 84% last year and up 300% from the first survey in 2016. Containers are really booming. And Kubernetes use in production has increased to 83%, up from 78% last year. So even though Kubernetes is already a household name, the growth just continues. The impact on the world of software at the moment is really huge.
Then, if we now want to look at what the landscape actually looks like: that was the impact, but what does the landscape look like, and what are its different parts? This is what it looks like, and I know it is quite a lot. It's a lot of different areas; one could talk about just one area of this for hours, days, and so forth. So it definitely is a lot to take in when you first look at this image. I understand and totally sympathize with you. But in this session we will take a closer look at a simplified view of this landscape and explore a few of the projects in more depth. The main goal of this session is also to simplify this view. So if you ever saw this picture and thought, that's quite a lot, now we will take a simplified view and focus on a few aspects of it. So let's get to that then. There are three stages to
CNCF projects: there's the sandbox phase, the incubating phase, and then the graduated phase. Projects start in the sandbox phase, where they are just kicking off and are relatively small. They then move to the incubating phase as they mature and gain more maintainers, end users, and so on. And when they get to the graduated phase, they are full-fledged, mature projects, fully recommended for production. The numbers there indicate roughly how many projects there are per category; they change quite a lot, constantly. There are around 40 to 50 sandbox projects, about 20 to 30 incubating projects, and about 14 or a few more graduated projects. Just to give you an idea of how the landscape has grown: when I started talking about CNCF topics a few years ago, there were essentially only one or two graduated projects, just Kubernetes and maybe Prometheus, and that's it. So the projects keep on maturing and moving between the levels. There's constant growth as
far as these things go. If we take another view of what these different phases actually mean on the adoption curve: in the sandbox phase, innovators and techies are using the project. When it reaches the incubating phase, the early adopters and visionaries are using it. And when it reaches the graduated phase, that means it's already being used by the early or even late majority, so it's a full-fledged thing. I think this is a really helpful view for understanding what it looks like to mature within the CNCF.
Then a bit of expectation management, as well as letting you know which projects I will be covering in this session. This session is obviously not built on any scientific method; I don't have a PhD in how to select CNCF projects, nor is it fortune telling. I do not know for sure what will succeed, and I do not know for sure what will fail either. What this is based on is what I'm excited about, what people around me are excited about, and which projects have the best communities. Well, 'best' is always subjective, but the biggest, most active communities, with a lot of people talking about them and so forth. For open source projects, these things are great indicators of how successful a project will be, because the more people you have using it and building with it, the more passion there is toward the project as well. Then, a bit more expectation management:
Usually, CNCF intro-to-projects talks are around 30 to 45 minutes, and this is a shorter talk covering many projects. So by pure math, I will not be doing a deep dive, or even a smallish dive, into any of the projects. It will be more of an overview of multiple projects, so you can see a snapshot of what CNCF projects look like and what you could maybe use in your own projects. That's the goal here, not a deep dive, but I will give resources for continuing with your own deep dive further along. The projects in this session are Helm, Linkerd, Keda, Flux, KUDO, and Meshery, plus a super quick sneak-peek project at some point as well. But it's Helm, Linkerd, Keda, Flux, KUDO, and Meshery that we will be going through today. As I said, I do update the session regularly, so this session is constantly in flux.
But let's get started with the first project, which is... oh, and I should mention that of all the projects, some are graduated, some are incubating, and some are sandbox. So even if you are new to the CNCF world, you will get to know some very mature projects, and if you're already super deep in the CNCF world, you will find out about a few new sandbox projects as well. So there's a bit of something for everyone.
So then Helm, which is the package manager for Kubernetes. Helm is really the best way to find, share, and use software built for Kubernetes. Helm is a graduated project, so it is very mature; it's one of the earlier ones, very old in a good way, and fully recommended for production. So what is Helm? As mentioned, it's package management for Kubernetes, so it's essentially Homebrew, apt, or Chocolatey, just for Kubernetes. I think one of the Helm maintainers said it well: package management is tooling that enables someone who has knowledge of an application and a platform to package up that application, so that someone else, who has neither extensive knowledge of the application nor of the way it needs to run on the platform, can use it as well. That is really the power of package management, and therefore the power of Helm.
So what, then, are the benefits of Helm, if that's what package management is? Helm helps you manage complexity: charts describe even the most complex apps, provide repeatable application installation, and serve as a single point of authority. It also makes updates easy, taking the pain out of updates with in-place upgrades and custom hooks. Helm also has simple sharing: charts are easy to version, share, and host on public or private servers. And you can use Helm to roll back to an older version of a release with ease as well, so it offers really good rollbacks. What, then, are the principles of Helm, or its features? Helm takes security very seriously, and Helm can already be recommended for public deployment. Helm has multiple maintainers and multiple companies backing it, so it is very mature: it has power-user email lists, release candidates, all of these. It supports Mac, Linux, and Windows, and it passed 1 million downloads a month already in 2019. A proper household name.
So how is Helm used, then? It's used via charts, and the prerequisites are: having a Kubernetes cluster, deciding what security configurations to apply to your installation, if any, and installing and configuring Helm. And then there's the bonus project that I mentioned, Artifact Hub, which is a CNCF sandbox project that helps you find Helm charts. The cloud native landscape was in a situation where a lot of the different projects were starting to have their own artifact hubs; for example, Helm had Helm Hub. But that makes the user experience very fragmented and difficult to manage, and kind of, everyone is reinventing the wheel. So Artifact Hub was created with the goal of providing a single experience for consumers, so that any project in the CNCF can leverage it. So that's a very nice sneak peek, a super quick mention of a CNCF sandbox project as well.
So a Helm demo is next up in our agenda. The demo that I will be doing today is easily deploying a complex application, in this case WordPress, to Kubernetes using a Helm chart. So let's switch over to the side here, choose this terminal, and grab my notes, because I need notes; I make way too many typos if I don't use notes. But that's wonderful and lovely anyhow.
So we start with az account show to give you an understanding of what's happening. We see that, yes, I am logged into Azure and everything is working fine and well. Then, if we do kubectl get services, we can see a bit more info about our cluster. This shows us that, yes, we have an empty Kubernetes cluster running there. It's empty, so for this demo you can see that I am indeed doing everything from scratch. Then we can do kubectl get services --all-namespaces to explore a bit further, and here we see that the cluster is in fact not completely empty, but for the purposes of this demo it is. There's a lot of Linkerd stuff running there, which is useful for the Linkerd demo that is coming up, but for now we're focusing on Helm, and for that purpose the cluster is empty. So if we do helm list, we can see that there are no Helm releases in use either. So it is, in fact, truly empty.
So then we do helm repo add bitnami, adding the Bitnami repository there. For me it says "already exists with the same configuration, skipping", but that's simply because I've done this before; that's what's supposed to happen. For you it would do more things. Then we run helm search repo wordpress, and there we get some more info, eventually, once it starts moving forward; everything is always slow with the demo effect, any time you try to do anything time-sensitive and crucial. So there we go. It took a bit of time, and let's hope the demo effect doesn't continue, or is at least a bit faster. Here we see that there are two WordPress charts we could be using, but we will use the Bitnami one because it is newer. So then we can do helm install bitnami/wordpress --generate-name.
This is when the magic starts to happen, and this step will take a bit longer than the previous one. Since the previous one already took a while, this might take a while too, but that's fine; we have things to do while we wait. Here we see that a lot of things happened: we are actually starting to spin up our WordPress. Now we can use the same command as before, kubectl get services, to see if our cluster is no longer empty this time. And as you can see, there we go, it is not empty; there's a lot happening there. We can then take the external IP, because we want to access our things as well. So we will go here and open a browser, there we go.
Then we get the browser over here and open it up. It will probably take a while to load, because the app only just spun up, but we have stuff to do while we wait, so we'll leave it there to load while we do those things. Now we can see that the Kubernetes cluster is no longer empty and we already have the external IP for our WordPress. Soon we will get a nice WordPress where we can start our blog or whatever we want to do with it. But to get inside WordPress, to the admin page, we obviously need a username and password as well, and there's a very neat, easy way to handle this too. We see the username there already, and if we want the password, we just take this whole command, put it in, and there we go, that's where the password is. Let's copy-paste it to safety, so that when the page finishes loading, we can get to using it. So let's see; we'll be ready when it opens. It might take a while. Oh, now it's already working; every time I say that it's going to work soon, it magically works. It's wonderful. So then we go to the admin side of things.
We put in the user, we put in the password that we just got, we log in, and there we see it: we have our WordPress ready to be used, all nice and good-looking. That's really great. So all in all, we've done quite a lot in a small amount of time: we've installed WordPress and MariaDB into our Kubernetes cluster, configured WordPress, and stored the admin credentials securely as Kubernetes secrets. Super easy, super quick, and that's truly why I love Helm. Great.
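For reference, the whole demo boils down to a handful of commands; the secret name below depends on the generated release name, so treat it as a placeholder:

```shell
# Add the Bitnami chart repository and look up the WordPress chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo wordpress

# Install WordPress with an auto-generated release name
helm install bitnami/wordpress --generate-name

# Watch for the external IP of the WordPress service to appear
kubectl get services

# Decode the generated admin password from its Kubernetes secret
# (replace wordpress-1621234567 with your actual release name)
kubectl get secret wordpress-1621234567 \
  -o jsonpath="{.data.wordpress-password}" | base64 --decode
```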
Done with the first demo; moving on to the next set of things. I sometimes go through a few case studies about using Helm, but I think we don't have time for that today. You can read more in the case studies on the CNCF website; there's a lot of good data and information there about how these projects are used by companies in production and so forth. But now we will move to the next project, which is Linkerd, a service mesh. Linkerd is a very recently graduated project. It was incubating for a while, obviously, but just a month or so ago, Linkerd moved to the graduated phase. So Linkerd is, as mentioned, a service mesh: an ultralight, ultrafast, security-first service mesh for Kubernetes. It's similar to Istio as far as things go, but a bit more streamlined, and maybe a bit faster and lighter.
The goal of Linkerd is to reduce the mental overhead of having a service mesh to begin with. So what does Linkerd do? It provides observability, so those important service-level metrics: success rates, latencies, all of these. It provides reliability, so retries, timeouts, and so forth, and it provides security as well. What are the benefits of using Linkerd, then? It has a really thriving open source community; it's 100% Apache licensed, with a really active community. It has a simple and minimalist design, so no complex APIs or configuration are needed for most applications; Linkerd will just work out of the box. And it has deep runtime diagnostics: you get a comprehensive suite of diagnostic tools, including automatic service dependency maps and live traffic samples. As mentioned before, it's very fast and very light, and it installs in seconds with zero configuration. It installs into a single namespace, and services can be added to the mesh one at a time. It has actionable service metrics as well: success rates, request volume, and latency for every service.
So what are the Linkerd principles; how is it built? It's built to just work: as I mentioned, it works out of the box. It's ultralight, the lightest service mesh around; it's actually also the oldest one, the first service mesh around. It's super simple, to reduce operational complexity, and it's security first: security is not an extra, it's a default. Linkerd also has its own proxy. Envoy is a CNCF proxy project, but Linkerd uses its own proxy, called linkerd2-proxy, built specifically for Linkerd, so that it is more secure and custom.
Then we have the second demo of the day, which will also be the final demo of the day, though we'll continue after it with the slide deck and everything. The Linkerd demo is a bit different from the Helm demo app I showed before; it's a proper enterprise-grade app: the Linkerd demo app, with Linkerd added to it, and then we can see what we get from the app from there. And quickly, before the demo, we can go through what is needed to use Linkerd: just injecting it, and that's it. So it's super simple.
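As a rough sketch of what that injection step means in practice, assuming the Linkerd CLI is installed and the app is the emojivoto demo application from the Linkerd docs:

```shell
# Install the Linkerd control plane into the cluster
linkerd install | kubectl apply -f -

# Add the Linkerd proxy to an existing app by injecting its manifests
kubectl get deploy -n emojivoto -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```

linkerd inject only adds an annotation to the pod spec; the proxy sidecar is then added automatically when the pods are recreated.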
So then the Linkerd demo, which is easy real-time service metrics. Let's switch over to the demo notes and go here, to another terminal; they are color-coordinated. We take this command, which is kubectl in the emojivoto namespace with a port-forward, so that we can actually get our demo app working. We'll grab it here and move over to here, where our amazing WordPress is still running very well and handling connections. And then we see that, yes, there we have it, which is this app. Let's make it a bit bigger so that we can maybe see a bit better.
So it's emojivoto. It's very simple as far as the functionality goes: you have all of these emojis, and we have to decide which emoji to vote for. We can vote for the embarrassed one, or the smiley, and so forth, and we can pick another one. We can vote for the masked one, which is our favorite, we can vote for the monkey one, and so forth. It also has a built-in voting bot, so that you can see this: if we click view the leaderboard, we can see that quite a lot more votes have been cast than our few simple votes there; quite a lot of voting is happening. That's the bots voting in the background. So for this app, we're going to see what they are doing and what's happening on that side. So then let's go to this, for example, and see.
example and then let's there.
So to see what the bots are doing and
what's happening in the background we can see, we can put
Linkerdmog at the top deploy and
now we see a lot of action happening,
a lot of get data that we are getting and
then we can actually go to
another terminal to get things up
and running here with tap deploy
on the other side and there we can see what's happening inside a
single bot. So it's a bit
of more detailed view. So then we see
all of these things and then we go to here
and we put Linkerd dashboard in.
We will get even more info
here. So then we have that opening up in our default browser. Let's make it a bit bigger so that we are not disturbed by all the background things happening. In here we see a graphical way to explore this data. If we want to see, for example, the same data that I showed before, we go here, click on the emojivoto namespace, and click start, and there we see the same data, but in, to me at least, a slightly nicer format to read through. Then, if we go to namespaces and click on emojivoto, for example, we can see the structure that we had before: deployments, pods, replica sets, all of these things. And if we go to Grafana here, we see even more visual things, brought to you by Prometheus. It's going to take a while to load, but we will get there, no worries. Loading, loading. Now we're starting to see the global success rate, global request volume, per-deployment success rates, per-workload request volume, and all of these nice things in a really visual, nice format. So that's how easy and nice Linkerd is and works.
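For reference, the commands used in this demo were roughly the following; note that on newer Linkerd releases (2.10 and later) the top, tap, and dashboard commands moved under the viz extension (for example linkerd viz top), so treat the exact form as version-dependent:

```shell
# Forward the emojivoto web frontend to localhost
kubectl -n emojivoto port-forward svc/web-svc 8080:80

# Live, aggregated traffic view per deployment
linkerd top deploy -n emojivoto

# Stream individual requests flowing through the proxies
linkerd tap deploy -n emojivoto

# Open the graphical dashboard in the default browser
linkerd dashboard
```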
So then we can get back to the slide set once again. There we go; that was the Linkerd demo. Next we're going to move on to Keda, which is Kubernetes event-driven autoscaling. Keda is an incubating project, very recently incubating, actually: one or two months ago it went from sandbox to incubating.
So what is Keda, then? Keda really focuses on serverless: focus on your code, event-driven code, scaling on demand, compute pay-per-use, and these things. The way the Keda maintainers view serverless is as the automation glue between services, rapid APIs, event-stream and queue processing. Default Kubernetes scaling is not really well suited for event-driven applications; Kubernetes is more for resource-based scaling, so CPU and memory. Keda provides event-driven scale controlling that can run inside any cluster, so that it can monitor the rate of events and act preemptively, before CPU is even affected. That's where the benefit and the power of Keda lies. You can install it into a new or existing cluster, and it has extensible and pluggable scalers to grab metrics from multiple sources as well.
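To give a flavor of what an event-driven scaler looks like, here is a sketch of a KEDA ScaledObject that scales a deployment on Azure Service Bus queue length instead of CPU; the deployment, queue, and environment variable names are illustrative assumptions, not from the talk:

```shell
# Sketch: scale "order-processor" on queue depth, down to zero when idle
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
  namespace: default
spec:
  scaleTargetRef:
    name: order-processor      # the Deployment to scale
  minReplicaCount: 0           # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus   # one of KEDA's pluggable scalers
      metadata:
        queueName: orders
        connectionFromEnv: SERVICEBUS_CONNECTION
EOF
```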
So what are the Keda principles, then? It does not rebuild anything that Kubernetes offers out of the box; it is single purpose, simple, non-intrusive, and it works with any container and any workload. There are two public case studies for Keda: Alibaba Cloud and Cast AI. Cast AI, the company that I work for, is an end user of Keda as well.
Then moving on to another project; there are so many to cover. Next is Flux, and the GitOps family of projects is the tagline here. Flux is an incubating project as well, just like Keda. So what is Flux, really briefly? Flux is an incubating project, but it's actually relatively mature for one: it's already recommended by the CNCF technology radar as the technology of choice for GitOps, and the project has a lot of end users. So what is GitOps in this context? You have all of these CLI tools: kubectl apply, kubectl set image, helm upgrade, and so forth. GitOps replaces all of those with the most widely known CLI tool, git, so with git push.
So it really comes down to this: instead of changing the state of your cluster with multiple tools, you can use one workflow. You modify something, push it to a git repository, and it therefore ends up in the cluster as well. This can be anything, from namespaces and so forth. Git also gives you a nice history of what has happened to your cluster in the past. So GitOps provides one really nice model for making infrastructure, app, and Kubernetes add-on changes, and you get a consistent end-to-end workflow across your entire organization. What this means in a nutshell is that you have an easy snapshot of your cluster that you can restore from. So if anything happens, if you lose your cluster, you just point your new cluster at Flux and restore everything. "Everything" obviously doesn't include stateful sets or databases; it's everything that was in the Kubernetes state, essentially. To simplify: it's the desired state that stays saved in git, not the actual thing, so it's kind of like a save point that you can restore back to.
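With the Flux CLI, the "point your new cluster at the repo" workflow maps to something like this; the owner, repository, and path values are placeholders:

```shell
# Connect the cluster to a git repository; from then on Flux keeps
# the cluster reconciled against the manifests stored there
flux bootstrap github \
  --owner=my-org \
  --repository=fleet-config \
  --path=clusters/production

# After bootstrap, changes flow through git instead of kubectl:
#   edit a manifest, git push, and Flux applies it to the cluster
```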
So why is Flux so great for GitOps, then? The Flux project aims to provide a complete continuous delivery platform on top of Kubernetes, supporting all the common practices and tooling in the field, for example Kustomize, Helm, and metrics with Prometheus, and so on. So that's Flux in a nutshell.
What are the Flux practices and benefits, then? Flux has defined the GitOps practices as: describe your system declaratively, keep configuration under source control, use software agents to reconcile and ensure correctness, and alert on drift as well. And the benefits are collaboration on infrastructure, access control, an auditable history, drift correction, and clear boundaries between the dev team and Kubernetes as well. So then,
moving on to KUDO, the Kubernetes Universal Declarative Operator. KUDO is a sandbox project, so it's not as mature as the previous projects, but still very cool and nice. KUDO really goes into the issue of stateless versus stateful apps. If all apps were stateless, everything would be super simple, but not all apps are: stateful apps need application-specific logic and knowledge to run, so Kafka might differ from Cassandra and so forth. And Kubernetes has been very focused on stateless apps; stateful apps do not really like that. Kubernetes created stateful sets to mitigate this problem, but they do not really solve the fundamental issue. The solution here is operators.
So what are operators, then? An operator manages and monitors the lifecycle of an application, and it takes a lot of custom knowledge to build one: each operator is unique and purpose-built for each application. Often Operator Framework or Kubebuilder is used to build an operator, but building operators requires deep expertise and may require thousands of lines of code, so substantial engineering effort is needed. This is where KUDO comes in: rather than using a custom operator, KUDO provides a universal operator with the concept of plans built in.
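In practice, KUDO ships as a kubectl plugin, and using a prebuilt operator looks roughly like this; kafka is one of the community-maintained KUDO operators, and the instance name follows KUDO's default naming, so treat both as assumptions:

```shell
# Install the KUDO controller into the cluster
kubectl kudo init

# Install an operator package; its lifecycle plans come built in
kubectl kudo install kafka

# Inspect the plans (deploy, upgrade, and so on) for the instance
kubectl kudo plan status --instance=kafka-instance
```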
So what are the benefits of KUDO, then? KUDO can create operators without needing deep knowledge of Kubernetes or coding: you just define the lifecycle stages and use the Kubernetes API, so it's a lot easier to learn. And it has Kubernetes-native management, aka using kubectl and other familiar tools, so it's simple to use as well. A really cool sandbox project. Then, kicking off with the last project for today: Meshery, the service mesh management plane. It's a sandbox project, super new.
It got accepted into the CNCF as a project, let's say a month ago, a few weeks ago; a super new project, but actually very popular already. It's already the most popular project for mentorships in the Linux Foundation, and it has 15 maintainers and 300-plus contributors, so it's growing very fast. Service mesh management plane is the name of the game here: Meshery is that service mesh management plane. If you end up in a situation where you have to, or need to, use multiple service meshes, the reasons might be legacy or the personal preferences of team members, so you end up with more than one type of service mesh running inside one cluster. For example, if Linkerd and Istio are in the same cluster, things might get a bit difficult. This is where Meshery comes in: it provides service mesh management by being the management plane. Usually a service mesh comprises a control plane and a data plane, and this adds kind of a third plane, a management plane that manages all the different service meshes. It provides federation, integrates with backend systems, helps perform chaos engineering, gives deeper insight into performance; really a long list of things it can do. So it is quite wonderful. Meshery supports over ten different service meshes. It provides multi-mesh management: lifecycle, workload, performance, configuration, patterns and practices, chaos, and filters as well. Meshery is about halfway to its complete architecture, so it's not at a full version one yet, but it's getting there, and as I mentioned, it is very popular for being such an early-stage project in the CNCF.
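Getting started with Meshery happens through its mesheryctl CLI; a minimal sketch, with the caveat that the exact commands may differ between Meshery versions:

```shell
# Start Meshery locally; it runs as a set of Docker containers
mesheryctl system start

# Check that Meshery's components are up
mesheryctl system status

# The UI (the management plane) then opens in the browser, where the
# different service meshes can be managed side by side
```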
So, starting to wrap things up. I still have some learn-more resources coming up, but wrapping up: we went through a CNCF overview, and we went through multiple projects, so Helm, Linkerd, KUDO, Flux, Keda, and Meshery, plus the bonus project, Artifact Hub. Then a few resources for you. I recommend checking out the CNCF survey, and you can check out all the project sites for Helm, Linkerd, KUDO, Keda, and Meshery. You can check out the case studies area of the CNCF site to learn more about how these projects are being implemented in real life. The CNCF end user technology radar is really cool; I recommend checking that out. And the CNCF technical oversight committee does their predictions for the future of cloud native tech at essentially every KubeCon; that's a super nice, very enjoyable session, highly recommend checking it out. TechWorld with Nana has a lot of good content if you are starting your cloud native journey, for example on how to use these tools step by step. The CNCF YouTube channel has a lot of great content as well; for example, all the maintainers usually have a session there that is more of a deep dive, compared to my session, on each of the projects, so you can learn more there.
I'll be adding this slide deck to my GitHub as well, so you can find it under Annie Talvasto. And if you're interested in finding out more about Cast AI, the company I work for, where we do Kubernetes cost optimization and are really good at it, you can go to cast.ai. If you leave your email there, we will email you some things and also give you a $25 gift card to the CNCF store, which you can use to get your Kubernetes plushies, Kubernetes t-shirts, all of that fun. So if you want that, you can do that. And then
a few recommendations still. I did a podcast episode on Adventures in Open Source with Tom Kerkhove. Tom is a maintainer of Keda, and in a very candid manner he talked about how it feels, and how it works, to maintain an open source project for the CNCF, and how he splits his time between his personal life, work, and maintaining projects, all of these things. A really wonderful insight into the mind of a maintainer. And if you're interested in starting to contribute to CNCF projects, there are a lot of good resources on that; for example, the keynote on getting started in the Kubernetes community from a KubeCon a few years back is really great on how to get started, whether you want to contribute or start doing meetups and all of these things, so I recommend checking those out as well. The one thing, as I mentioned in the beginning, that I hope everyone takes away from this is inspiration. I hope everyone has had a lovely time and learned something new. If you have any questions later, I'm always super happy to answer them on Twitter, for example; you can find me there at Annie Talvasto. So reach out, ask those burning questions, and I'm super happy to help there as well. I hope everyone has an absolutely wonderful conference, an absolutely wonderful time, and a great week, and I'll talk to you on Twitter or someplace else as well. Thank you everyone.