Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone. Thanks for joining my session. My name is Samuel
Baruffi. I am a solutions architect here with AWS.
And for today's presentation I'm going to be talking about how
we can bootstrap EKS clusters with EKS Blueprints.
A quick agenda on what we're going to go through in the next few
minutes. We're going to start with a high level understanding of
Kubernetes, and we're going to then move on to a high level
understanding of EKS. And that is going to set the stage for us
to talk about what EKS Blueprints is, how EKS Blueprints works,
and why you should potentially use it in your environment. After
that, I'm going to present you with some resources on how you
can get started with EKS Blueprints. And if you want to run a
workshop with your team or by yourself, I'm going to provide
those links and that information. And then hopefully I'll
finalize the session with a quick demo, just showcasing an
example of how we could use EKS Blueprints in your environment.
So let's start the journey of the presentation. Okay,
you as an organization or you as an architect have
decided to use Kubernetes. Okay, you've heard
Kubernetes is a popular container orchestration platform.
So what comes next? Right?
So a lot of companies, and this has become very popular in the
last, I'll say, six to eight years, have decided to use
Kubernetes. Why did they decide to use Kubernetes? Well, there
are multiple reasons. We're just going to narrow it down into
four main reasons here. The first one is easy: it's the ease of
use. You have a standard way to declare, through YAML files and
through common APIs that can be flexible and extensible, how you
deploy your applications and your platform on top of a common
ecosystem called Kubernetes. The second great thing about
Kubernetes is consistency. It's built on top of common APIs
regardless of where you run. So if you're managing your own
Kubernetes clusters in your on-premises environment, the
Kubernetes APIs will behave the same way, assuming you are on
the same Kubernetes version, because each Kubernetes version
will have different API settings and different API availability
as Kubernetes grows as an ecosystem. And of course
the third one is the ecosystem. Kubernetes has become the
default container orchestrator, so there are hundreds of
thousands of solutions across the Cloud Native Computing
Foundation ecosystem that can easily run on top of Kubernetes.
And then I think the best one is the community. The Kubernetes
community is large and very helpful, so you are just building
on top of that. The skill sets of people that know containers
are likely to include Kubernetes.
So we've talked about why Kubernetes; now let's talk about EKS
in the cloud. So you've decided to run your container ecosystem,
that is Kubernetes, with AWS, so you can actually use EKS.
EKS stands for Elastic Kubernetes Service.
It's a managed Kubernetes platform
on the AWS cloud. The great thing about EKS is that you can
easily create clusters, and AWS will manage a lot of those heavy
lifting operations for you. One of them is managing the cluster
control plane: the control plane is something you do not need to
manage; AWS will manage it for you. And this slide actually
talks about how AWS manages and gives you that single control
plane API for Kubernetes without you touching or needing to care
about anything else. So AWS will manage the Kubernetes APIs for
you and will create the etcd data store for you. AWS actually
replicates the data across multiple availability zones. A
cluster will be a single tenant cluster for only you and your
account. It provides a highly available API because it's running
across multiple availability zones behind the scenes. It also
comes with a 99.95% SLA. You have full support to open cases and
get help from AWS support engineers at any time. And you can
scale: if your cluster is growing significantly and the control
plane requires more resources, AWS will automatically, behind
the scenes, scale up and down different instances to support
your control plane on Kubernetes. It also manages your upgrades,
whether major versions or minor versions with patches. And all
that means is that you as a developer or you as an architect
don't need to worry about the complexity of the Kubernetes
control plane. You can actually focus only on your applications
and your business value. There are multiple ways you can run
Kubernetes on AWS. Of course, EKS is our managed service
platform for Kubernetes.
And if you see here what this slide is saying, there are two
different flavors of EKS. First is Amazon EKS, which is what
I've just described. And Amazon EKS can run across multiple
places on AWS. The first one is, of course, the AWS Regions that
we just talked about. You can go to us-east-1, Northern
Virginia, and deploy EKS, and EKS should be available across all
the Regions on AWS. I think we currently have 32 or 33 different
Regions globally. But you can also deploy EKS on Local Zones and
Wavelength Zones. Those are specific new types of availability
zones, you could call them, that run in specific metro areas or,
for Wavelength, close to 5G cellular locations. Also, if you
want to run a physical piece of infrastructure, called Outposts,
that AWS will manage and connect to your AWS infrastructure, you
can actually run EKS on top of that. But if you also want to run
EKS on premises, or maybe on some other clouds, you can actually
use what we call EKS Anywhere. It's a set of best practices and
deployment tooling that helps you create the control plane and
manage the control plane anywhere else while utilizing all the
common functionalities of EKS. We're not going to talk about EKS
Anywhere, but we are just displaying here a capability that is
available for you as well.
So moving on in our journey, you have chosen EKS for your
Kubernetes clusters. What is next? Right? Normally in the
Kubernetes journey, you choose an orchestrator for Kubernetes,
in this case EKS, and then you need to focus on the data plane.
We talked about the control plane, where EKS completely takes
care of that for you. Now, the data plane is literally where
you're going to be running your pods, and therefore your
applications. With EKS, there are a couple of different options
that make it very flexible for you to choose how you want to run
your pods. So as an administrator, you can choose to run pods on
EC2-based instances. So you can actually scale up and create EC2
instances where your pods are going to be running. Or you can
choose AWS Fargate, which is a completely serverless container
environment that you don't need to manage. Each pod will get
specific Fargate infrastructure, and those are charged based on
how long they live and the memory and CPU configuration for
each of them. Now,
when you're talking about EC2, there are multiple ways that EKS
allows you to have flexibility for both cost and performance in
how you decide to manage your EKS data plane. So the first one,
which is very commonly known, is managed node groups. With
managed node groups, AWS, apart from taking care of the control
plane, will help take care of the data plane as well. That means
it will create an Auto Scaling group behind the scenes for a
specific instance type, and you can have multiple node groups
with different instance types, and those specific node groups
can be for specific applications within your clusters. So there
is a lot of flexibility that you can create. Also, managed node
groups allow for ease of upgrade. So when you are upgrading from
one version of Kubernetes to another, AWS, within the managed
node groups, can actually help you achieve that ease of upgrade.
Now, when EKS was launched, managed node groups were not an
available functionality. The only functionality available was
what we call self managed node groups. That just means that you
will create your node group, and everything, from the Auto
Scaling group creation and management to upgrades, is your
responsibility as an operator. There are very few occasions
where you should go and use self managed node groups, but this
slide is just displaying the capability. And the other option,
which is probably the best option for everyone to use, is called
Karpenter. So Karpenter is an open source cluster autoscaler
alternative that can run on EKS and other cloud providers as
well. And with that, it removes the idea of a managed node group
and just treats your cluster as a single kind of environment.
And depending on your applications, you can actually say, for
this specific application with these tags, run these on Spot
Instances, and Karpenter will take care of it with cost,
performance and availability in mind. Depending on your
configuration, Karpenter will take care of that without you even
thinking about managed node groups. A major differentiator of
Karpenter versus managed node groups is that with Karpenter you
can have a variety of different EC2 instance types being spun up
at the same time, whereas with node groups, each node group will
actually be forced into a single EC2 instance type family and
so forth. So now that
we know what we have decided on, right? As part of the journey,
we're going to use EKS, and this is the data plane that you're
seeing here. We're going to create two managed node groups. One
node group will have an m5 instance type and the other node
group will have m6g, so we might be running different types of
applications. And within each managed node group you're going to
have two availability zones where multiple instances are going
to be scaled up and down. And at the same time, you can
configure applications to be deployed across this environment.
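The data plane just described can be sketched in Terraform. This is a hedged, illustrative sketch using the community terraform-aws-modules/eks module (the same module the demo uses later); the cluster name, sizes, and variables here are assumptions, not the actual slide content:

```hcl
# Illustrative sketch only: two managed node groups with different
# instance families, spread across the subnets (two AZs) you pass in.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  cluster_name    = "demo-cluster"         # assumed name
  cluster_version = "1.27"
  vpc_id          = var.vpc_id             # assumed to exist already
  subnet_ids      = var.private_subnet_ids # subnets in two AZs

  eks_managed_node_groups = {
    # x86 node group
    general = {
      instance_types = ["m5.large"]
      min_size       = 1
      max_size       = 5
      desired_size   = 2
    }
    # Graviton (ARM) node group for a different kind of workload
    graviton = {
      instance_types = ["m6g.large"]
      ami_type       = "AL2_ARM_64"
      min_size       = 1
      max_size       = 5
      desired_size   = 2
    }
  }
}
```

Each map key becomes its own node group, with its own Auto Scaling group created behind the scenes.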
So when we look at the container journey, we have decided on the
orchestrator and we have decided how we're going to do the data
plane. What comes next? Remember, Kubernetes is a platform, it's
an ecosystem. The cluster by itself is not really powerful
without its add-ons. So add-ons can be anything, right? If you
are familiar with the Cloud Native Computing Foundation
landscape, and if you're not, just Google it and take a look,
there is no shortage of amazing tooling that can be deployed
within the Kubernetes ecosystem. But it's really, really hard to
deploy those, because there is no guide for how to put all of
those together. So, continuing the journey, what we've decided
is, okay, we have our cluster, our EKS cluster, and we want to
deploy some NGINX proxy, maybe on this specific managed node
group. We want to deploy some, maybe, Open Policy Agent. In this
other node group we want to use Grafana and Prometheus for our
monitoring and observability. How do we actually get all of
those together? Right? So we want to do this, but how do we
achieve that in a very repeatable, easy to manage way?
Again, in the Kubernetes journey, you've decided, and you've
created that. Let's say you install the cluster add-ons
manually. You actually went into each of those GitHub
repositories, like Prometheus, Grafana, OPA, you learned how to
deploy those, and you deployed those on your clusters. You
probably spent a few days or weeks deploying that for a single
cluster. And now come day two operations, right? So what are day
two operations? Well, what you need to consider is which users
and which developers will have access to different parts of your
clusters. Maybe your cluster is what we call a multitenant
cluster, meaning that multiple applications and multiple teams
in your organization are going to be using this cluster. So now
you need to think about it, okay? You have multiple developers
that are going to be assuming a specific developer role, to
which we will give proper permissions on Kubernetes. But you
might have some temporary users that might just be connecting to
your cluster on a few occasions in a sporadic way. But then you
have your platform team, which has kind of an admin type of role
to access those. So now you need to think about that as well.
On top
of that, hopefully all of us are following best practices, where
you might have different environments and different clusters for
each environment. So what I've just described here, you should
be replicating in your dev environment, you should be
replicating in your test environment, and you should be
replicating in the production environment. Now think about it:
if you're on the platform team, or the DevOps team, or the SRE
team, and you need to replicate that across dozens of Kubernetes
clusters, it becomes very painful, and it becomes very hard to
actually manage if you are not using a way to automate it.
And that is the perfect segue for Amazon EKS Blueprints. What is
Amazon EKS Blueprints? Amazon EKS Blueprints is an open source
framework that allows you to easily configure and deploy EKS
clusters in an automated and secure way. You can choose between
the two, depending on your preference for infrastructure as
code, with Terraform or CDK. There are two flavors of EKS
Blueprints: CDK, the Cloud Development Kit, which uses your
normal programming language, like Python, Node.js, JavaScript,
or Java, to actually build infrastructure; or you can use
Terraform, which is a popular open source infrastructure as code
tool. The great thing about EKS Blueprints is that it's based on
the best practices and recommendations from AWS on how to create
and manage EKS clusters, from cluster creation, VPC creation,
multi-team tenant creation, and add-on creation, to the upgrade
and lifecycle of those clusters.
With that said, EKS Blueprints is also integrated with your
popular Kubernetes tools and services. So this is where the
add-ons come in. The great thing about EKS Blueprints is that
it's fully extensible and customizable. If you want to create
your own deployments and your own add-ons, you can build on top
of this platform that is available for you. You can leverage,
again, your preferred tool, as I talked about a moment ago. You
can use CDK EKS Blueprints or you can use Terraform EKS
Blueprints. There are two different repositories that you can
see here as part of the AWS open source initiative. If you want
to use Terraform, you just go to terraform-aws-eks-blueprints;
if you want to go with CDK, you just choose cdk-eks-blueprints.
So, continuing this trajectory, how does EKS Blueprints actually
create a solution for you? Let's just look at that. First, EKS
Blueprints will allow and help you to create your clusters. So
everything that comes with it, the VPC, the security groups, the
cluster creation, all of that will be taken care of for you. And
of course it gives you the flexibility to properly configure
those. So you can choose if you want to run Amazon EKS on the
Bottlerocket operating system, or if you want to use Amazon
Linux as the operating system, or if you want to use Fargate;
you have the flexibility to mix and match as well. So once you
have the cluster, now you want to build and install different
add-ons. Maybe you're doing a lot of GitOps and you are using
Argo CD or maybe Flux for your GitOps; you can install those by
default and already configure different repositories where those
GitOps tools are going to be looking for different applications
to deploy. Or you can deploy Cluster Autoscaler if you're using
maybe a managed node group and you really want to run Cluster
Autoscaler on top of your EKS. This diagram and this image is
very minimalist. There are many, many more add-ons that are
supported in EKS Blueprints, and you can find those in the
documentation that is going to be shared at the end of this
presentation.
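As a rough sketch of what enabling a GitOps add-on looks like in the Terraform flavor, here is a hedged example based on the aws-ia/eks-blueprints-addons module; the exact inputs and flag names depend on the blueprints version you use:

```hcl
# Illustrative: toggling Argo CD on as a cluster add-on. The module is
# wired to an existing cluster via outputs of the eks module.
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  # Install Argo CD with the blueprint's default configuration.
  enable_argocd = true
}
```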
The great thing about the installation of add-ons is literally
what I'm showing here: this is just an example of how you can
install, for example, the metrics server and Kubecost. It's
literally two lines to install those add-ons on your cluster,
and it comes with the best practices. So all the best practices
on how you should enable the metrics server and how you should
install Kubecost on your clusters, just with these specific
lines, are actually taken care of for you. And this is one of
the great things: of course, each add-on might provide different
flexibility and options if you want to customize. And you can
always fork and create your own modules on Terraform, or your
own constructs on CDK, if you feel like it.
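The "two lines" pattern looks roughly like this. Treat the flag names as illustrative, since they vary between blueprints versions (the older kubernetes-addons module and the newer eks-blueprints-addons module name things slightly differently):

```hcl
# Illustrative: one enable_* flag per add-on, with AWS best practices
# applied by the module. Cluster wiring is elided.
module "eks_blueprints_addons" {
  source = "aws-ia/eks-blueprints-addons/aws"
  # ... cluster_name, cluster_endpoint, oidc_provider_arn, etc. ...

  enable_metrics_server = true
  enable_kubecost       = true # flag name assumed; check your version
}
```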
But then, on top of that, remember EKS Blueprints can also
create different teams and manage the permissions for you. So
you can manage the access and different permissions, always
using infrastructure as code. So what do we get with EKS
Blueprints? First, you get cluster management: you configure and
deploy your EKS clusters using AWS best practices. You can also
replicate across multiple AWS accounts and Regions, because
remember, this is just infrastructure as code that is very
easily replicable, and you can create EKS clusters with existing
VPCs or actually create new VPCs if you deem it necessary. It
also manages add-ons: out of the box integration with very
popular Kubernetes add-ons, and those keep getting added as time
progresses, so you get the specific best practices for those if
you want.
And again, you don't need to do everything that is on this list.
Flexibility is something that comes with EKS Blueprints. But if
you want to do team management, you can actually create distinct
teams for admins, application owners, developers, SREs, whatever
you deem necessary. You have the flexibility for team management
on top of your EKS clusters. And then, this is a little bit more
advanced, but if you really want to do workload management, you
can actually leverage GitOps tooling like Flux and Argo CD to
run workloads as you deploy your Kubernetes on top of that. So
you can do self service onboarding of new workloads via pull
requests. So you, as the platform team, can create a cluster
configuration, the GitOps tooling gets a repository for your
application team, and as soon as they push and do a PR with a
new version, as long as all the GitOps configuration is properly
set up with your YAML files for Kubernetes, those tools like
Argo CD and Flux are going to continue to deploy new versions of
the application into your cluster. So this
is pretty cool. Now we're getting into the resources part,
right? So like I said, you can use Terraform or CDK, depending
on your preference. Here are both links to the GitHub
repositories. Remember, those are open source. What I recommend,
if you're new to EKS Blueprints: there is a nice workshop for
EKS Blueprints for Terraform and a nice workshop for EKS
Blueprints for CDK. So just click on those links and navigate;
they will give you a step by step on how to get started with the
different flavors. One of the great things
is that, as part of the GitHub repositories, EKS Blueprints also
provides you with different patterns and different examples. So
you are not on your own to learn how to create specific EKS
configurations based on a specific scenario. Let's say you want
to use EKS Blueprints to create a fully private EKS cluster, so
a VPC with no connection to the Internet, fully private within
your VPC. Well, there is an example, for both Terraform and CDK,
that will tell you exactly how to actually do that. Or if you
want to use observability with ADOT for application telemetry,
OpenSearch for maybe shipping your logs, and managed Prometheus,
you can actually go there and check. So I think there is not a
better time to
actually jump into the demo. So what we're going to do now: we
are going to jump into my AWS console and I'll show you a
simple but useful example of how to use EKS Blueprints with
Terraform. And maybe I'll try to install a different EKS add-on
through the Terraform template. So see you there, and hopefully
it'll be useful for you. Perfect. So let's dive deep into the
demo. What I'm trying to do here: I will show you, I have this
Terraform template already deployed, because it can take 15 to
20 minutes, sometimes more, to be created. So I'm just going to
show you quickly the Terraform template, and then I'm going to
try to do a demo, just deploying an application into my cluster
and also installing a different add-on into my cluster. So here
we have some variables, like the region that is being deployed,
and some providers that I'm using, like the Kubernetes provider,
the certificates that my cluster is going to be using, and some
Helm configuration. This is all boilerplate. You can have
dynamic configuration if you want, but by default you don't need
to change anything here.
So if you scroll down, you see that the part that really matters
for us is the cluster section. In this case, we're starting with
the module "eks". This is not part of EKS Blueprints, but it's
the official EKS Terraform module. We are setting the specific
version for the module and the specific version for my
Kubernetes cluster, in this case 1.27. And then, if we scroll a
little bit here, I'm using the VPC, which you can see is
actually already being created down below. So I'm creating a new
VPC for my cluster. I'm setting a managed node group of m5.large
with a minimum size of one, a maximum size of five, and a
desired size of two.
And down below here is where things start to get a little bit
more interesting. I'm creating add-ons. So as part of my
add-ons, I am creating some EKS add-ons. Those use the official
EKS add-ons feature, so if you go to the console, you'll be able
to see those, and you can enable those through EKS Blueprints as
well. In this case, I'm enabling the AWS EBS CSI driver, in case
I would in the future want to create some StatefulSet using EBS;
CoreDNS; VPC CNI; and kube-proxy. Those are using the most
recent versions for my Kubernetes cluster. And then I'm adding
two more that are not part of the official EKS add-ons set,
which in this case are the metrics server and cert-manager,
which are deployed automatically just by setting these specific
settings.
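Here is an illustrative reconstruction of the add-on settings just walked through (not the exact demo template; names follow the official modules' conventions, and the enable_* flags are assumptions):

```hcl
# EKS-managed add-ons, pinned to the most recent compatible versions.
module "eks" {
  # ... cluster configuration as described above ...

  cluster_addons = {
    aws-ebs-csi-driver = { most_recent = true }
    coredns            = { most_recent = true }
    vpc-cni            = { most_recent = true }
    kube-proxy         = { most_recent = true }
  }
}

# Two extra add-ons that are not EKS-managed, installed through the
# blueprints add-ons module instead.
module "eks_blueprints_addons" {
  source = "aws-ia/eks-blueprints-addons/aws"
  # ... cluster wiring elided ...

  enable_metrics_server = true
  enable_cert_manager   = true
}
```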
The other thing that EKS Blueprints allows you to do is the
creation of teams. So here I'm creating three different teams,
and I just want to quickly explain what those mean. First, I'm
creating an admin team. So here you can see that I have created
an admin team and I have this flag set to true, meaning that
this user will actually have access to do anything on my cluster
as an admin. The other thing I'm doing is creating some dev
teams. Just keep in mind that these dev teams are view only, so
they cannot write anything to my cluster. And this is on
purpose. We are trying to follow an approach here where, if
you're using GitOps, it is the GitOps tooling, like Flux, Argo
CD or any other tool that you are using, that is actually
writing into your EKS cluster. In this case, the dev teams are
just
there to actually do some namespace configuration and create
some permissions to only view the resources. So if you're a
developer on one of these teams, you'd be able to only go and
see the resources within your namespace, right? So if you see
here, I'm just setting some labels; for the red team I'm
creating this label, projects: secret. For blue, I'm not
creating any specific label. Then I'm merging those labels. And
by the way, this is all default; you can copy the configuration.
The interesting things are here. So for each team I'm creating a
namespace. For each key, which is pretty much each team, I'm
creating a label for my namespace, so it'll be team blue and
team red. I'm creating some resource quotas for my namespace.
This is really a best practice for Kubernetes: when you're
creating a namespace, you dedicate space for those namespaces.
You also set some limit ranges for your pods, for persistent
volume claims, and for the container itself, and then some tags.
Down below, you see some supporting resources.
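The team setup just described can be sketched roughly as follows. This is heavily hedged: it is loosely modeled on the aws-ia/eks-blueprints-teams module, and all attribute names, quota values, and variables are illustrative assumptions, not the demo's actual template:

```hcl
# Illustrative: an admin team with full cluster access...
module "admin_team" {
  source = "aws-ia/eks-blueprints-teams/aws"

  name         = "admin-team"
  enable_admin = true # the "flag set to true" from the demo
  users        = [data.aws_caller_identity.current.arn]
  cluster_arn  = module.eks.cluster_arn
}

# ...and a view-only dev team with its own namespace, labels,
# resource quotas, and limit ranges.
module "team_blue" {
  source = "aws-ia/eks-blueprints-teams/aws"

  name        = "team-blue"
  users       = var.team_blue_user_arns # assumed variable
  cluster_arn = module.eks.cluster_arn

  namespaces = {
    team-blue = {
      labels = { team = "blue" }
      resource_quota = {
        hard = {
          "requests.cpu"    = "1000m"
          "requests.memory" = "4Gi"
          "pods"            = "10"
        }
      }
    }
  }
}
```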
Literally, the supporting resources here are just creating the
VPC module for me and the security groups. So this is already
created. You can see that in Oregon, us-west-2, it created a
Kubernetes cluster for me on EKS. And you can see here, on
compute, that just two m5.large instances have been created. If
I go and quickly show you, if I do kubectl get nodes, you'll be
able to see those nodes created. And if I do kubectl get pods -A
to see all the pods that have been created, let me just run
this, just one second while it's updating, you can see now that
I have the CSI driver already installed, because remember, it
was an add-on. I have kube-proxy, the metrics server,
cert-manager, CoreDNS and the AWS node pods. So you can see that
those things are actually getting properly configured for me,
and I didn't need to do anything. One thing I would like to do
now is
I want to create an NGINX deployment, like a simple NGINX
deployment with three pods on my nodes, and actually create an
ingress using the AWS Load Balancer Controller. As you saw, I
don't have the load balancer controller installed on my EKS
cluster, so I want to install it. I'm literally going to
uncomment this and save. Then what I'm going to do is just run
the Terraform again, and hopefully what Terraform will do is
install this add-on for me. So let me just quickly
go here and paste this command. And apologies, while recording
my screen this terminal is a little bit slow; Chrome is just not
behaving very well here. So we're just going to give it a moment
until this shows on the screen. So now I have applied my
Terraform to install the AWS Load Balancer Controller into my
cluster, and hopefully in a couple of seconds, actually minutes
it might take behind the scenes... actually, you saw that within
a couple of seconds it got deployed. And if I go and check all
the pods that have been deployed on my cluster, hopefully you'll
be able to see that now the AWS Load Balancer Controller is
installed on my cluster. So let's just wait a few seconds here.
Again, apologies, my screen is a little bit slow here, but you
can see that now I have the AWS Load Balancer Controller pods,
which are part of my add-on, properly installed.
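For reference, the line I uncommented corresponds to a single add-on flag; in the Terraform blueprints add-ons module the pattern is roughly this (the flag name is an assumption based on the module's enable_* convention):

```hcl
module "eks_blueprints_addons" {
  # ... existing configuration unchanged ...

  # Uncommenting this one flag is what installs the controller: the
  # blueprint handles the IAM role, service account, and Helm chart
  # behind the scenes.
  enable_aws_load_balancer_controller = true
}
```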
Now, the next thing I want to do: I have created this simple
deployment that is going to be deployed for team blue. Right
now, in my bash console, I'm actually an admin. So if you see
here, let me just do kubectl config, one second. If you see
here, right now I am an admin. So right now I am actually
operating as part of the admin team, accessing kubectl as an
admin, so I have the capability of deploying anything. So what I
would like to do is deploy this specific deployment, which is
just using NGINX on port 80. It's creating a service of type
ClusterIP, and then finally it's creating an ingress using the
AWS Load Balancer Controller. Behind the scenes, that will
hopefully create a load balancer for me on AWS, and that load
balancer will then forward the traffic. You can see here it's an
Internet-facing load balancer, so you have a public IP address,
and then it's actually redirecting into my service on forward
slash. So what we're going to do now: we're literally going to
go here and say kubectl apply -f and the name of my file. So
behind the scenes
it has actually created those resources in my namespace called
team blue. Remember, team blue is a namespace that comes with a
dev team. So if I go and check here: kubectl get all -n
team-blue. So right now, hopefully, you'll be able to see it
created some of those resources. It created my deployment, it
created my service. And now let's check the ingress: kubectl get
ingress -n team-blue. So you can see here, it actually created
my load balancer for me. So if I go to my console and literally
go to load balancers, let's see, it's probably behind the scenes
creating my load balancer. So we're just going to see here: you
can see the k8s team blue nginx load balancer, it's
provisioning. And you'll be able to see that this DNS record
here is exactly the same as the DNS record that you see on this
screen. So we will wait for this to get provisioned. While we
wait, what I want to show
you is: right now I am logged in as an admin. I will just change
my context to be logged in as a team blue user. So I'm going to
run this; this is changing my configuration. And if I go, just
give it a second, just waiting for the command here to come
back. So if I go and check my config, you can see that now I am
team blue. What is the difference? If I try to do kubectl get
all -A, to try to see all the namespaces, you'll see that this
will fail, saying you don't have permissions to see everything
in all namespaces, because you are a developer only on team
blue. So you can see that I got a lot of forbidden errors. But
now, if I try to do kubectl get pods in the specific team blue
namespace, hopefully this should return my pods in team blue. So
let's just wait a few seconds here. You can see that I am able
to see them. And if I want to see, for example, the ingress, I
can see the ingress, right? So hopefully you'll be able to see
the ingress.
I see this ingress. So let's just copy this address here. Let's
actually just copy it and see if it has finalized provisioning.
It has finished provisioning; the load balancer is actually now
active. So it's active. If we go and check the rules, it's
forwarding port 80 into this target group. And if you look at
the target group, you can see that the target group has all
three pods healthy as part of my EKS cluster, right? Remember, I
created the deployment. What does this mean? If I go and copy
this DNS name and open it over HTTP, I can see NGINX, and behind
the scenes it is actually redirecting into my EKS cluster. So
that is the demo.
What we've done so far and achieved in the demo: I had created
an EKS cluster using EKS Blueprints which didn't actually have
the AWS Load Balancer Controller. I easily just uncommented and
redeployed my Terraform, which behind the scenes installed the
load balancer controller. Once the load balancer controller was
installed, I deployed my simple NGINX application, which uses
the AWS Load Balancer Controller, into my namespace for team
blue. I deployed as an admin, because only the admin has
permissions to deploy. Once it finished deploying, I changed my
role in my kubectl context, sorry, config, to use the
development role for team blue. And then I checked the ingress,
pasted it in my browser, and hopefully you could all see that
it's actually redirecting. So I just want to say
thanks to the people that have tuned in. Hopefully this provides
a little bit of an idea. You can just Google, for example, if
you are interested in Terraform, "terraform eks get started".
You can see here, if you click here, you can see the
documentation. And if you want, just Google "eks blueprints
terraform workshop", for example. It's part of my presentation
as well, we have the links, but if you want to go and do a
workshop, I highly recommend you do this. If you have any
questions, feel free to reach out on Twitter or X, and also on
LinkedIn. Again, my name is Samuel Baruffi. Thank you so much
for the time. Have a great one.