Transcript
This transcript was autogenerated. To make changes, submit a PR.
Welcome everybody, and thank you for joining me at Conf42 Cloud Native. I hope you're enjoying all the great talks from all these awesome speakers. Today we're going to be looking at two revolutionary technologies: we're going to learn about GitOps and see how GitOps is done using Flux, and we'll apply GitOps patterns across multi-cloud resources by using Crossplane. I think there's no better way to learn than hands on, so most of this talk is going to be a live demonstration to see these cool technologies in action. But before we do that, let's take a quick look at the theory behind GitOps and the tools we're going to be using. Before we get started, I want to tell you a little bit about who I am. My name is Leonardo Murillo. I'm the founder of Cloud Native Architects, a consulting firm specializing in continuous and progressive delivery, site reliability and continuous security. I'm a CNCF community organizer for the Costa Rica chapter, a CNCF speaker and co-chair of the CNCF GitOps Working Group. I'm also a DevOps Institute ambassador. I love to connect, to network and to share ideas around these projects that I'm so passionate about, so please do connect with me. Find me on LinkedIn, Twitter, or look me up on my personal blog. You can see the details to reach me on the screen. Right now
we're going to be looking at some technologies and patterns that are very revolutionary, so let's briefly talk about them. My focus today is to actually demonstrate how these work, but let's get a good idea of what we're going into. The first concept that we need to grasp is GitOps.
GitOps is an operating model for the continuous delivery of cloud native applications. What makes it unique is the fact that it's driven by four principles that, to me, wrap its power in four fundamental tenets. The first one is the principle of declarative desired state. There's a fundamental difference between a declarative system and an imperative system. Imperative means you're giving instructions: you're telling a system what to do to reach an expected desired state. GitOps is not imperative but declarative: you're just describing in code how you want that system to look, but you're not specifying which steps to take to get there. Now, going into principle two: the desired state of your system should be immutable and versioned, which means once a desired state has been pushed and committed to your fork, to your trunk, to your main branch, then it should never change. It represents a static, immutable point in time, and the only way to modify that, to progress your system to a newer desired state, is by committing a new version. So it's a versioned and immutable desired state.
Once new versions of state are committed, there should be a continuous process of state reconciliation, which means agents (and this is automated, not human agents) are continuously validating that what you have declared in your state store, the repository where you're storing your declarations of desired state, matches what is running on your target system. There's this continuous process of reconciliation of state. And the fourth principle means that there should be no other mechanism to interact with or manipulate the target system. Everything that you do, every operation that you perform, should be through code and through a declaration. The only way for anybody to manipulate the state of a system is by committing a new version of desired state. But today we're going to be looking at GitOps applied to a very specific area: multi-cloud resources. Multi-cloud resources could be anything from a managed SQL instance to another Kubernetes cluster. So how do we declare those in code?
Well, we're going to be using Crossplane. Crossplane is a tool that was created by Upbound and eventually handed over to the CNCF, where it was granted sandbox status. Crossplane allows us to declare resources in code using Kubernetes natives, namely custom resource definitions. Crossplane gives us a whole variety of CRDs to create pretty much any imaginable resource across the most relevant public clouds that we work with nowadays: it has support for Alibaba, Google Cloud, Azure, AWS, even private cloud providers. It gives us a CRD for pretty much any resource that you want to create. So this allows us to declare in code, with Kubernetes manifests, what we want to create in the cloud. But how do we make it so that those declarations in some state store, in some repository, continuously get reconciled with our actual runtime state? That's where Flux comes in. Flux is a tool created by Weaveworks, also handed over to the CNCF, that has recently reached incubation status. Flux is a tool for doing GitOps on top of Kubernetes.
It tracks a repository, or a path within a repository, and makes sure, through operators and controllers, that the state of your cluster is consistent with that declared in the repository. And it's not just by chance that it satisfies the principles of GitOps: after all, Weaveworks, the creator of Flux, was the company that coined the term GitOps. So we're going to be using this to reconcile Crossplane objects on our Kubernetes cluster, to make sure that the desired state of our clouds is always consistent. And now is the time that we've been waiting for. This is what we came here for: to see how this actually operates. So now I'm going to show you how this looks in action. Okay, let's get to code. Let me
show you a little bit of what our setup is going to look like. Since we're going to be doing GitOps, of course we need a repository. So I've created a blank, brand new repository that you can actually access. It's in my personal GitHub, murillodigital, and it's called conf42-multicloud. There's nothing in here other than a readme. You'll also be able to see these two terminals. On this terminal I'm going to be applying changes to this repository, and the first thing, of course, that I'm going to do is clone this repo. And on this screen you are seeing k9s, which is a really awesome tool; in case you haven't used it, look it up. k9s is basically showing us what is currently happening in the cluster. So here we're currently seeing all pods across all namespaces. That's our setup. This is what we're going to be working with. Step number one is to clone this repository. I'm just going to go ahead and clone it onto my development workstation. And here it is. We've got an empty repo and an empty cluster.
So we need to bootstrap this cluster using Flux. That's the first step that we have to perform, so that from that point forward we will no longer directly interact with the cluster, rather only through changes to the repository. Luckily, Flux gives us a very clear idea as to how to get this done. Since our repo is in GitHub, we are going to be using the Flux CLI, which you can see here how to install in the Flux documentation, to bootstrap our cluster. For that to work, we need to export our token and username so that the Flux CLI knows how to authenticate as us to be able to work with this repo. Okay, this I've already done.
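As a sketch, the Flux documentation has you export a GitHub personal access token and your username as environment variables before bootstrapping; the token value here is of course a placeholder:

```shell
# GitHub personal access token with repo scope (placeholder value)
export GITHUB_TOKEN=<your-github-pat>
export GITHUB_USER=murillodigital
```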
So if we look at my terminal, I have already set up the GitHub environment, and this effectively allows Flux to know who we are. We're actually just going to use these variables on the Flux CLI command line to trigger the bootstrapping of our cluster. I've already installed the Flux CLI, so if I run flux here, and I run this command right here, I can actually validate that I'm ready to use Flux: the required versions of Kubernetes and kubectl pass. Okay, so now I need to run a command that is going to allow us to bootstrap this cluster against our repository. I'm going to show you that command real quick because I have it pretty handy, and I'll tell you what it is. So I'm going to modify it to match what we have.
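The bootstrap command looks roughly like this; the owner and repository match the demo, while the path shown is the one used in the Flux getting-started docs and is an assumption (the demo's actual path may differ):

```shell
flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=conf42-multicloud \
  --branch=main \
  --path=clusters/my-cluster \
  --personal
```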
The name of our repo is conf42-multicloud. And we're saying: hey, Flux, I want you to bootstrap using a GitHub repository, with me as the owner (this is a personal repository), this is the name of the repo, this is the name of the branch, and this path is what is going to tell Flux where to store the manifests that it will create: whatever it runs against the cluster it also stores in the repo. So there is a starting point of consistency between what's in the repo and what's running on the cluster. So once
we run this (I'm just going to copy this over and paste it right here), Flux is going to connect to GitHub, it's going to clone the repository that we had already created, and it's going to generate all sorts of manifests and apply them. And here you can see on the right side how Flux has already created multiple controllers, a helm controller, a kustomize controller and others, as it bootstraps the cluster so that it will be able to maintain consistency between the repo that we're tracking and what is running on the cluster itself. Now, if we go back to the repository, you'll see how new content is being added here. These manifests that have been created are the declarations that match what Flux just did on the system. So once the cluster has been bootstrapped, we have a consistent state between what Flux has pushed to my repo and what is running on the cluster. Okay,
now we have our cluster being tracked by Flux, with Flux having pushed to our repo all the manifests that represent the desired state consistent with the runtime. Now it's time for us to install Crossplane. The fourth principle that we looked at said that there should be no direct manipulation of the target system, which means at this point we will not be doing anything that directly talks to the Kubernetes API. If you look at the Crossplane documentation, the recommended way to install is using Helm, so we are going to be using Helm as well, but not by running a helm install; we're going to be using declarations. Luckily, Flux allows us to use two different types of technologies to deploy applications: we can either use Helm or we can use Kustomize. Following the approach recommended by the Crossplane documentation, we are going to be using Helm. But before we can install Crossplane, we want to create a namespace for it. Remember, we're not doing anything directly against the cluster, so the first thing that we need to do is push a declaration that creates that namespace for us.
When the state is reconciled, that namespace will appear. I've changed k9s to track our namespaces, so here we're seeing all namespaces, and I have already pulled into my clone the changes that Flux added to my repo under my cluster path. There is a very valuable observation to make around this. When I bootstrapped Flux, I specified which path I wanted Flux to use to store the manifests, which means you can do multiple things. You can, in a single repo, store the state for multiple clusters, as long as they live in different paths. And you can also point multiple clusters to the same path, so you can effectively control the state across a fleet of clusters by modifying files in a single location. Now the other important observation is
that you will want to have some logic behind how you structure the files within your repository. Here we already have a basis for that: we have a specific path that matches one or more clusters. Within that path we'll want to create a directory structure that is also meaningful, that means something to you when you look at it. The default setup that flux bootstrap creates is a flux-system subdirectory, which is actually consistent with the flux-system namespace that it creates. So I'm going to follow the same pattern, and I'm going to create a crossplane-system subdirectory where I'm going to be adding all the resources, all the manifests, that I want to be synchronized into my crossplane-system namespace. The first thing I'm going to do is create a directory, crossplane-system, and here I'm going to add a manifest (these are good old Kubernetes manifests) to create our namespace.
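The manifest is just a plain Namespace object, something like:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: crossplane-system
```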
Okay, nothing out of the ordinary, just a namespace manifest. What's going to happen now is I'm going to commit this, "adding namespace for crossplane" (oh wait, I've got to add it first), and I'm going to push it to the repo. As soon as it's pushed, we'll see it here. Now we have crossplane-system. And now this is
where the magic starts to happen. Pay attention: I haven't done anything other than push this file to the repository. Soon you'll see a new namespace show up here, the crossplane-system namespace that Flux is creating for us as it looks to reconcile the state. Right now the desired state of my system is different from my runtime state; it's inconsistent. Flux will make sure that it ends up being consistent by creating those resources for us. Now, this might take a few seconds because of the cycle at which Flux validates what's running against what's available in your state store. So let's continue moving on. Oh, there it is: crossplane-system now exists.
Okay, so we have a namespace for Crossplane. Now let's install Crossplane. We're going to use Helm to install Crossplane. Flux comes with custom resource definitions for Helm applications and Kustomize applications that we're going to use for that. You can look at the documentation on the Crossplane website to get the parameters that we will need, as far as Helm is concerned, to install this application. Again, we're just going to create manifests. So I'm going to create a new manifest here that's going to be called helm.yaml. I'm going to copy over a Helm release that I already have available, and we'll walk through it real quick. One second.
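The file contains two Flux objects, a HelmRepository and a HelmRelease. This is a sketch based on the Flux source-controller and helm-controller APIs of the time; the chart version is a placeholder you would pin yourself:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: crossplane-stable
  namespace: flux-system
spec:
  interval: 10m
  url: https://charts.crossplane.io/stable
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: crossplane
  namespace: crossplane-system
spec:
  interval: 10m
  chart:
    spec:
      chart: crossplane
      version: "1.x.x"  # placeholder; pin the version you want
      sourceRef:
        kind: HelmRepository
        name: crossplane-stable
        namespace: flux-system
```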
Okay, so in this helm.yaml file we're going to be adding two different objects, both coming from the Flux APIs. One is the HelmRepository. A HelmRepository is basically a type of resource that specifies a Helm repo; the URL of the Helm repo is the same URL that you would use if you did a helm repo add. This one, for instance, I got from the Crossplane documentation. Then we're going to create a HelmRelease. A HelmRelease specifies a Helm repository from which we want to get a Helm application, and which application we want to install. So here we're saying: I want to install the crossplane chart at this specific version, and I'm going to get it from this Helm repo called crossplane-stable. crossplane-stable is the same name that we used here in the HelmRepository, and I'm actually telling it in which namespace this HelmRepository object exists. Again, we don't do anything other than adding this, committing it and pushing it. Now I'm going to take a look here at what we get in terms
of pods. Right now we only have flux-system and kube-system; there's nothing running in crossplane-system, but we just asked it to create a new HelmRelease. Matter of fact, we can even look for the HelmRelease here. Oh, there it is: we have an in-progress reconciliation for the crossplane HelmRelease in crossplane-system. This is actually installing Crossplane for us. This is the equivalent of doing a helm install, except that Flux is doing it for us, because we added a HelmRelease and a HelmRepository manifest. And here you can see that now it says it has reconciled successfully. If we look at pods, now we have Crossplane running in crossplane-system. All we did was push a manifest with a HelmRelease and a HelmRepository, and we have a Helm install of Crossplane. Just pretty much awesome, right? But now we have
Crossplane unable to talk to anything. We're going to look at two different types of objects now: providers and provider configurations. A provider is what Crossplane uses to talk to any one specific cloud; there are providers for GCP, for AWS, and for other clouds. And a provider configuration configures that provider, given a set of credentials that it will use to authenticate against that specific cloud, plus some additional properties. For instance, if we're looking at the GCP provider, we need to configure which project we want that provider to use. So next we're going to push providers and provider configs. Remember one thing, though, that we'll see soon: provider configs also need secrets. Those secrets actually hold the credentials that the provider is going to use to talk to those clouds with some identity. We're also going to look at those. Okay, so we're ready
to install our providers. We're going to go through the same process as we've done already: we're going to declare those in code and push them to the repo. We need to create one provider for AWS and one provider for GCP. In this demo we're going to be creating a managed SQL database in GCP and in AWS, and since we're going to be talking to two separate clouds, we need to define two separate providers. We're going to do that by adding provider manifests to our repository.
First, let's add the AWS provider manifest, which is a very simple manifest that we're going to just copy over, and we're going to do the same for our GCP provider: create another file and paste in that provider. Now, both providers are basically the same; the difference is the package that we're installing. For AWS, we install the provider-aws package; for GCP, we install the provider-gcp package. These are not namespaced resources, they're cluster-wide resources, and they are now available in our cluster thanks to Crossplane's Helm install. These CRDs are now available for us to leverage. We're going to create two providers, one for GCP and one for AWS.
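The two Provider manifests look roughly like this; the package versions shown are placeholders you would pin to a real release:

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.20.0  # placeholder version
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp
spec:
  package: crossplane/provider-gcp:v0.18.0  # placeholder version
```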
We're going to add these two files, and we're going to push this to the repo. Let me change this here so that we actually see these resources getting created: in k9s I'll look up the providers resource, providers.pkg.crossplane.io.
Okay, here you can see that we now have two providers, AWS and GCP, and they're both healthy and installed. This means we now have all the necessary components for Crossplane to communicate with AWS and GCP, with two notable exceptions: we need to configure these providers, and as part of that configuration we need secrets. Those secrets are going to contain the credentials that we pass to the providers so that they can authenticate as us with the specific cloud we're going to be engaging. So let's do that. First, we're going to add those secrets. And of course, by the time you see this, these tokens are going to be invalid already, so there's no risk here. But I'm going to show you what a GCP secret looks like, and an AWS secret.
These secrets basically contain, in the case of GCP, the full JSON key as you would create it for an identity. So let me show you a little bit of what that looks like. This is our GCP secret: it's basically our base64-encoded JSON key. And I'm going to do the same for AWS, which includes our AWS access key ID and secret access key.
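As a sketch, a credentials Secret for the GCP provider looks something like this; the secret name and key are illustrative, and the value is a placeholder, not a real credential:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gcp-credentials        # illustrative name
  namespace: crossplane-system
type: Opaque
data:
  # base64-encoded GCP service account JSON key (placeholder)
  creds: <base64-encoded-service-account-json>
```

The AWS secret is analogous, holding the access key ID and secret access key instead of a JSON key.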
If you want to understand how these keys were created, you can look at the Crossplane documentation, which is very thorough. When you go to install and configure, scroll all the way down and you'll see how to create the AWS credentials or the GCP credentials. For the GCP credentials you will need to create a service account; for AWS you need an identity with an AWS access key ID and key secret. So please refer to that URL to understand how to create these secrets. Once we have the secrets created, which we have here, we're going to commit them, "adding secrets for cloud connection", and push them to the repo. We also need another resource called a provider configuration, and we're going to need one for each of our cloud providers.
So I'm going to show secrets here; hold on a second. Here you can see that we have our AWS and GCP secrets ready and available. Now we're going to create our provider configurations. We need one for each cloud, and I'll walk you through what those look like. So first let's start with the GCP provider configuration. I'll paste it over from here and show you what that looks like.
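A GCP ProviderConfig manifest looks roughly like this; the project ID and secret name are placeholders, and the fields follow the classic provider-gcp API:

```yaml
apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: my-gcp-project    # placeholder project
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gcp-credentials    # placeholder secret name
      key: creds               # key within that secret
```

The AWS variant is nearly identical, just under the AWS API group and without the project field.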
The ProviderConfig kind of object is also made available by Crossplane: there is an API group for GCP, one for AWS, and one for each of the other clouds. Here we're basically configuring how we're going to talk to this cloud provider: we're specifying which secret we're going to use to talk to the cloud and, in the case of GCP, which project we want to create our resources in. And that's about it. So let's also create the AWS provider config, which is going to be very similar. In the case of the AWS provider config, we don't really need anything other than the secret that we're going to use to talk to the cloud: the name of the secret, and the key within that secret. And as you can see, the only difference here is the fact that the API group this comes from is not GCP but AWS. We're going to add these two files and commit them, "configure providers".
We're going to push that to the repository. So now we have the secrets that we need to talk to the clouds, and we have our providers installed and configured. We are now ready to deploy some cloud resources into AWS and GCP using Crossplane CRDs.
Awesome. So let's do a little bit of a recap of where we are. We started off with an empty k3s cluster and an empty repo. We used the Flux CLI to bootstrap Flux into our cluster, which effectively put in place in our repo all the manifests consistent with what's running in the cluster. Then we used Flux, with its HelmRepository and HelmRelease CRDs, to install Crossplane into our cluster. We added providers and provider configs for Crossplane to be able to talk to our clouds, including the necessary secrets for those provider configs to be able to authenticate. And now it's just a matter of adding resources. For practical purposes, we're going to provision an RDS instance in AWS and a Cloud SQL instance in GCP. I've switched one of the screens on our setup just now, so on the right side we're looking at an AWS console and a GCP console, specifically the RDS service and the Cloud SQL service. So now we want to create our databases. I'm going to copy over the GCP database manifest, because I have it already ready, and we'll look at it real quick. This is Crossplane.
manifest which we're going to look at real quick. This is Crossplane.
And Crossplane gives us different APIs for the different
types of services that are for every cloud provider. So in this case, we're looking
at the database GCP crossplane IO API and we're looking at
cloud SQL instance. There's some important values that you're
using to want to pay attention to here. One is the provider config ref.
This tells Crossplane which provider config to use for
spinning up this resource, for provisioning this resources. This is valuable because you
can have a single provider, say GCP with multiple provider configs
so that you can use different billing accounts or you can identify as different
users or service accounts or identities within GCP, for instance,
and use access controls to limit who can use which
single cloud provider, multiple provider configs. Here you can specify
which provider config to use as well as pass specific
attributes for the provider to configure your resource. So in
this case, we're telling it which database version we want to use,
which engine we want to use, in which region, et cetera. Now there is a
very valuable feature of crossplane, very powerful,
which is this write connection secret to ref configuration attribute in these manifest.
This allows you to specify the name of a secret in which crossplane
is going to insert the endpoint and credentials
for this specific resource. It's going to pull the data from your cloud provider and
make it available as a secret within your Kubernetes cluster,
which is of course super efficient whenever you create resources and want
applications to connect to those resources without having to do any manual effort. So I'm
going to just do what I've done with all the other manifests that
we've looked at today, add in a commitment,
adding GCP database and push it.
I'm going to show you here on the right side the Cloud SQL console for GCP. Now, this is not instantaneous; it's going to take a few seconds. So while we wait, I'm going to also copy over the AWS database manifest to my repo.
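The RDS manifest is a close analogue; again the names and values are placeholders, following the classic provider-aws RDSInstance API:

```yaml
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: my-aws-db            # placeholder name
spec:
  providerConfigRef:
    name: default
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: aws-rds-postgresql-conn    # placeholder secret name
  forProvider:
    region: us-east-1
    dbInstanceClass: db.t3.micro
    masterUsername: adminuser
    allocatedStorage: 20
    engine: postgres
    engineVersion: "12"
    skipFinalSnapshotBeforeDeletion: true
```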
Let's look at it real quick while this is refreshing; we should see the database come up soon. This is basically a very similar configuration to what we saw. The kind of resource is different: now we're looking at an RDSInstance database. The API is different: this is no longer GCP, this is AWS. But you'll see a lot of commonality in how you configure this resource. You're specifying which provider config to use, which is the same thing we talked about (you can have multiple configurations for, say, AWS); you're passing your provider parameters for how to configure this specific resource; and you're specifying which secret to write the connection details into. I'm going to add this file, and I'm going to refresh here real quick to see if we're seeing the database already.
Okay, so we waited a minute or two, and now let's take a look at our consoles. We have our RDS instance already created, matching the spec that we defined in our CRD, and we have our Cloud SQL instance created, also matching what we had in our CRD. Just phenomenal. All it took was adding two manifests to the repo, and we're actually creating resources in our clouds. Now, there was one specific attribute of Crossplane that I highlighted while we were looking at the database manifests: the fact that it stores the connection string, credentials and other details that you need to connect to the database as secrets within your cluster. So let's take
a look at that real quick. This is our previous console. Let's get secrets in the crossplane-system namespace, where we asked it to store them. Here you can see that we have the AWS RDS PostgreSQL connection and Cloud SQL PostgreSQL connection secrets. These are the names that we specified in our CRDs where we wanted the connection secrets to be stored. Let's look at one of them real quick; we're going to look at the AWS connection secret.
And here you can see (we're not going to decode it) that we have our endpoint, we have a password, we have a port and a username. These are the details that have been pulled from, in this case, AWS by Crossplane and used to populate the secret, which you can now use anywhere to connect to this database. It's super powerful.
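As a sketch of how you might inspect one of these connection secrets from the command line (the secret name here is illustrative):

```shell
# List the connection secrets Crossplane wrote
kubectl -n crossplane-system get secrets

# Decode a single field, e.g. the endpoint, from a connection secret
kubectl -n crossplane-system get secret aws-rds-postgresql-conn \
  -o jsonpath='{.data.endpoint}' | base64 -d
```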
Cool. So we created our resources. Now, we don't want to leave anything lingering there, so let's just get rid of them. How do you get rid of them? Well, you just remove them from your repo. Let's remove the GCP database and the AWS database, because we want to clean up. We're going to get rid of our databases, and we push. Since we're doing GitOps, this is also going to reconcile, and it's going to destroy our databases. This will also take a few seconds, a minute or two, but we'll see these databases be destroyed. So that is the power of Crossplane, using the Kubernetes API to manage your cloud resources, fully managed using a GitOps operating model, where I did not interact with the Kubernetes API directly at all, and everything that I did is now tracked in Git and represents different points in time in the evolution of my platform. So that is what I wanted to show you: Crossplane and Flux, demonstrating the power of GitOps for multi-cloud resource management. I honestly think this is the future of how we're going to operate clusters. I did not interact with the cluster directly at all.
And there's auditability, there's versioning, and there's this very powerful mechanism to control and collaborate in managing our platforms, our clusters and our cluster fleets. So I hope you found this valuable and exciting. You can always reach me: find me on LinkedIn, find me on Twitter. I'd love to show you more. The repo is available for you to look at the code that we worked with today. And I'd like to say a big thanks to Conf42 for giving me the opportunity to show you how GitOps, Crossplane and Flux can work together to really revolutionize how we manage multi-cloud resources. Thank you, and until next time.