Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, my name is Jim. Thanks for coming to my talk on
Terraform, GitOps, and Kubernetes: how to manage your infrastructure and
applications as code. A little bit of background on me:
I've been working in the software world for over
16 years now. I've had a number of roles at various companies.
Currently I am a developer advocate with Harness, and
if you'd like to get in touch with me, you can find me on Twitter.
Here's my GitHub and my email address.
So if you haven't heard of Harness before, our mission is
to enable every software engineering team in the world to deliver
code reliably, efficiently, and quickly to their users.
We have a software platform that covers the entire
software delivery lifecycle, from CI/CD all the
way into production, with features like cloud cost management and
feature flags, and we recently acquired ChaosNative,
so we'll be offering chaos engineering integrations in the near future.
If you'd like to learn more, you can check out our website at harness.io.
So in 2022, I hope that everyone has heard of
infrastructure as code. There are a lot of different solutions out there, but today I'll
focus on Terraform. Once you start learning about Terraform, you'll quickly want
to get your code checked into a source code repository,
hopefully Git. And from there you'll need to structure that
code and build a delivery pipeline. So today I'll
share two approaches that I've seen development teams adopt and be
successful with, managing code from development into production.
Each has its own benefits and potential drawbacks.
We'll be looking at an approach that uses a Terraform
tfvars file, one per environment, and then another that
uses directories, one per environment. So let's have
a look at some of the tools we'll be using. Of course we'll be
using Terraform, and we'll be using GitHub and Docker;
hopefully everyone's used those. Specific to Terraform and Kubernetes
is the Terraform Kubernetes provider, and we'll be using Drone CI.
So hopefully the first three are familiar to everyone; the last two I'll
show right here. So if you're unfamiliar with the Terraform
Kubernetes provider, it's pretty fantastic.
There are a lot of ways to manage your Kubernetes resources,
and Terraform is yet another tool to add to the pile. But I
feel like if you're in an organization that's comfortable with Terraform, you should
really consider the Terraform Kubernetes provider as an option for your Kubernetes
resources: your deployments, your stateful sets, your services,
and so on. What I'll be sharing today in these
pipelines is specific to Kubernetes, but this is a workflow
that you could follow for any tool or service
that has a Terraform provider.
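To give a concrete feel for the provider, here's a minimal sketch of a deployment managed through it. The resource name, namespace, and image are hypothetical stand-ins, not the exact code from the demo:

```hcl
# Configure the Kubernetes provider from a local kubeconfig
provider "kubernetes" {
  config_path = "~/.kube/config"
}

# A deployment managed as a Terraform resource instead of a YAML manifest
resource "kubernetes_deployment" "podinfo" {
  metadata {
    name      = "podinfo"
    namespace = "demo-dev" # hypothetical namespace
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "podinfo"
      }
    }

    template {
      metadata {
        labels = {
          app = "podinfo"
        }
      }

      spec {
        container {
          name  = "podinfo"
          image = "stefanprodan/podinfo"
        }
      }
    }
  }
}
```

From there, a terraform apply creates or updates the deployment just like any other Terraform-managed resource.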
I was at a company where we used the GitHub provider to manage all of
our GitHub repositories as code, and it worked very similarly.
If you haven't heard of Drone before, Drone is an open source, container-native CI tool
written in Go. It was started in 2012,
so Drone turns ten this year. Drone was acquired by Harness
in August of 2020. Drone is an extremely small-footprint
tool where you just need the server and a runner to
start executing pipelines. It supports a variety of source code
solutions: GitHub, Bitbucket, GitLab. It's also multi-OS
and multi-architecture, so if you need to run builds on Windows or Mac or
even Arm, Drone will work for all of them. So let's have
a look at a demo of these two workflows in action. First we'll
start with the tfvars per environment.
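As a rough sketch of how this approach is laid out (the variable names and values here are hypothetical, not the demo's exact files), one set of Terraform files is parameterized by variables, and each environment gets its own tfvars file:

```hcl
# variables.tf -- shared by every environment
variable "namespace" {
  type = string
}

variable "message" {
  type = string
}

# dev.tfvars would then contain something like:
#   namespace = "demo-dev"
#   message   = "hello conf 42"
#
# and prod.tfvars:
#   namespace = "demo-prod"
#   message   = "hello conf 42"
```

The pipeline selects the environment at apply time, for example with terraform apply -var-file=dev.tfvars.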
So here I have a repository, and I have
pods running in two namespaces, demo-prod
and demo-dev. So let's take a look
at the dev one. This is the
podinfo project, just a simple microservice
that tells you about the pod and where it's running. So this is
in the dev namespace, and the pod is deployed
based on the promote workflow that I'm showing here.
So let's make a change and see it through. I'll
go into the deployment Terraform file and change the message:
hello Conf 42,
new message. And we'll just commit that right to the main branch.
This is going to kick off my Drone pipeline,
if I click it correctly. There we go. So here's
the demo-dev pipeline kicking off: it clones my repository,
sets my kube configuration, initializes the environment,
and then runs an apply, and it's complete.
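A .drone.yml for a pipeline like this might look roughly like the following. The step name, image tag, and secret name are my own hypothetical stand-ins, not the exact file from the demo:

```yaml
kind: pipeline
type: docker
name: demo-dev

steps:
  - name: terraform
    image: hashicorp/terraform:1.1.9
    environment:
      # Cluster credentials stored as a Drone secret (secret name is hypothetical)
      KUBE_CONFIG:
        from_secret: kube_config
      # Point the Terraform Kubernetes provider at the materialized kubeconfig
      KUBE_CONFIG_PATH: kubeconfig
    commands:
      - echo "$KUBE_CONFIG" > kubeconfig
      - terraform init
      - terraform apply -auto-approve -var-file=dev.tfvars

# Run on every push to main
trigger:
  branch:
    - main
  event:
    - push
```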
So now if we go look at the pod, we should see our message
if we refresh, once we reestablish the
tunnel port forward. Yes, hello conf 42 now
appears. So one feature of Drone is the ability
to handle a promotion pipeline, and that's how I've configured
this one. So all I have to do is take that build and promote it
to prod, in this case. So here's a new
pipeline kicked off. You'll see the name is different: demo-prod.
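The promotion side can be wired up with Drone's trigger section. A rough sketch, again with hypothetical names:

```yaml
kind: pipeline
type: docker
name: demo-prod

steps:
  - name: terraform
    image: hashicorp/terraform:1.1.9
    commands:
      - terraform init
      # Same code, but applied with the prod variables file
      - terraform apply -auto-approve -var-file=prod.tfvars

# Only run when a build is promoted to the prod target,
# e.g. from the UI or: drone build promote <repo> <build> prod
trigger:
  event:
    - promote
  target:
    - prod
```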
This time similar steps are running,
configuration and initialization, and now an apply,
but this time it's using the prod tfvars file. All right,
the deployment is complete, and we can try port forwarding
into the prod pod on 9898.
Yes,
here we are in the prod namespace with our new conf
42 message. So a couple of thoughts on that
tfvars-per-environment workflow: it works well for a small number
of tightly coupled resources, and it works
well when all of your environments, however many
you have, are running continuously. So now let's look at the
other approach, a directory-based approach, in this other repository
that I have. So here is the
other repository, and you'll see here we have dev and prod as
separate directories. If we look at both of them,
they have the same files. The only differences
are going to be in locals.tf, where the namespace
has to be different. Other than that, everything is
pretty much the same. So if we
bring up a port forward to the dev
pod, here it is: podinfo, directory
based. And we
can make some changes in here and see them reflected. So let's
go write in the new message,
same as before, hello conf 42, and we'll commit
that right into main and watch the Drone
job kick off. The same steps are running,
initialization and
now an apply step. Great. So now if I come over
and reestablish my port forward.
There we go. Our message is there. Perfect.
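To recap how this repository is structured: the dev and prod directories hold the same files, and only locals.tf differs between them. A rough sketch, with hypothetical values:

```hcl
# Repository layout:
#   dev/   main.tf  locals.tf  ...
#   prod/  main.tf  locals.tf  ...

# dev/locals.tf
locals {
  namespace = "demo-dev"
}

# prod/locals.tf is identical except for the namespace:
# locals {
#   namespace = "demo-prod"
# }
```

Each directory is its own Terraform root, so the pipeline just runs init and apply inside whichever directory changed.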
So if we go back to the repositories,
something I can do in this workflow that I couldn't do in the other,
or couldn't do easily, is tear
this down. Let's rename this file to destroyed.
I like renaming files to underscore destroyed; it's very obvious
what's running and what's not if you do it this way.
So I'm going to say destroy
the dev deployment. I'll do this in a pull request this time,
just to be safe.
And we'll watch the Drone pull request build run.
This time it's going to run a plan. Great. So it's telling
me it's going to destroy the deployment, which is exactly what I
want, because this is dev.
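That plan-on-pull-request, apply-on-merge behavior can be expressed as two pipelines in one .drone.yml, separated by a document marker. This is a rough sketch with hypothetical names, not the demo's exact file:

```yaml
kind: pipeline
type: docker
name: plan

steps:
  - name: terraform-plan
    image: hashicorp/terraform:1.1.9
    commands:
      - terraform init
      - terraform plan

# Pull requests only get a dry-run plan
trigger:
  event:
    - pull_request

---
kind: pipeline
type: docker
name: apply

steps:
  - name: terraform-apply
    image: hashicorp/terraform:1.1.9
    commands:
      - terraform init
      - terraform apply -auto-approve

# Merges to main actually apply the change
trigger:
  branch:
    - main
  event:
    - push
```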
I will merge this in and
go back and watch the pipeline.
Great. So there it is being destroyed.
So if you've ever worked with a
CI/CD pipeline for Kubernetes and wondered how to
handle deleting a YAML file:
if you delete a YAML file, your CD pipeline no longer has it, so it can't
run kubectl delete on that file. But if
you manage it in Terraform, everything could just be an apply
operation. If a file is removed, that's fine;
Terraform will go ahead and remove that resource for you.
So it's a workflow that really isn't possible if
you're doing YAML-based manifests.
So that's that demo. Some thoughts on the directory
approach: there's the pull request workflow, where I could do
a pull request for prod, which I couldn't easily do in the previous example.
And that ability to tear down the
development resources, as I just did, I think is a big
advantage: I can save some money, save some resources, and
not run stuff in development when it's not needed.
So the big question is: is this GitOps?
What we just did, we're using Git, we're following a pipeline;
is this GitOps? So there is now
OpenGitOps, at opengitops.dev, created by a working
group comprised of members of various companies.
OpenGitOps is a set of open-source standards,
best practices, and community-focused education to
help organizations adopt a structured, standardized approach to
implementing GitOps, which is great. So now we have a really good
resource to hopefully standardize all this.
And they've given us four principles: it needs to be declarative,
versioned and immutable, pulled automatically, and continuously
reconciled. The first two principles, I think what
we just went through absolutely follows. Pulled
automatically, maybe not as much: we're really only executing
this when commit events are happening in the repository,
or a deployment event; in the first example, a promotion event.
Continuously reconciled? No, I don't think we're meeting that. We're
not running something continuously that looks at the current state of
the infrastructure resources, compares it to the desired state, and
reconciles them if needed.
So that's something to think about. The two
big tools that everybody talks about when they talk about GitOps
are Argo CD and Flux. So the question I'd like everyone to
come away from this with is: can we follow GitOps
without Flux or Argo? These
are two great tools, but nothing's perfect,
and not everything is running in a
Kubernetes cluster, right?
Argo CD recently had a fairly high profile
security vulnerability which could have potentially led to an attacker
obtaining sensitive data from your Kubernetes cluster.
So what do we do if we're not using Kubernetes
and we want to do GitOps? Kubernetes is great,
but there are a lot of other ways that we can run software in 2022,
and we're just going to have more in the future, not less.
Using Git as a single source of truth for your infrastructure and
applications brings a lot of benefits. So let's keep talking
about all the different ways that we can safely and securely apply that
configuration to our environments through GitOps principles
and with these different tools. So with
that, I will leave you with some resources.
At Harness, we have our own Slack community
at harnesscommunity.slack.com, and we have a Discourse
that you can get involved with at community.harness.io.
We also have regular meetups at meetup.com/harness. If
you want to learn more about Drone, the official documentation is at docs.drone.io,
and I recently wrote a
blog post on how to run your own Drone CI.
So if you liked what you saw in those demos, it's very easy to get
started running your own Drone CI right on your laptop, or anything that can run Docker.
You'll get started in just a few minutes at the link at the bottom.
And if this sounds interesting to you, being part of a company
that creates tools to help developers build, test, deploy, and manage
their code more effectively, we are hiring.
Just go to harness.io and look for the careers section.
And with that we've reached the end. Thank you very much for
watching. I hope you enjoy the rest of the conference.