Transcript
My name is Yuri Besonov, and I am a senior solutions architect at AWS.
In my role, I work with customers and AWS partners on solutions
related to container technologies.
In today's talk, I am going to walk you through a journey we see
customers face when they choose to build their own internal development
platform, or internal delivery platform, and expose internal APIs
on top of Kubernetes.
Today we will discuss the different application delivery streams we see
out there, what infrastructure controllers are, and how you can use them
to accelerate the delivery time of your applications.
We will also define what compositions are in Kubernetes and
review the tooling that supports them.
We are going to finish with an overview of a solution that you
can try in your own environment.
But before we talk about delivery streams, let's define a problem statement.
Let's set up the context with a sample application.
Our sample application is taken from the EKS workshop, and it is a good
representation of an application that consists of multiple sub-applications,
each with one or more components.
Those components can be runtime components, for example Kubernetes
pods, deployments, services, and ingresses, which live inside the
Kubernetes cluster, and also infrastructure resources,
such as AWS services in this case, which live outside of the cluster,
for example Amazon DynamoDB, Amazon MQ, or Amazon Relational Database Service.
When deploying an application, we need to consider not only deploying its
runtime, but also its backend services.
In this case, we need to take care of the components that live inside
the cluster as well as those outside of the Kubernetes cluster.
In the next slide, I will walk you through some of the delivery streams
we see out there, some of the delivery methods, and the time it takes
to deploy an application in each stream.
The first stream is when developers simply deploy runtimes and
configuration to Kubernetes.
Usually a component's Kubernetes configuration and its runtime are grouped
together in a single repository.
Kubernetes has supported the creation and management of external networking
and storage services since the very beginning, and the AWS resources that
correspond to those services can be managed in a single stream as part of
the application runtime configuration.
For example, Kubernetes manifests such as services, ingresses, and
persistent volumes can in turn deploy an Application Load Balancer, a
Network Load Balancer, or persistent storage volumes using Kubernetes constructs.
This delivery stream usually takes minutes to complete.
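As a minimal sketch of that first stream, assuming the AWS Load Balancer Controller is installed in the cluster (the component and service names here are illustrative), an Ingress manifest like the following is enough to have an Application Load Balancer provisioned in your AWS account:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ui                                             # hypothetical component
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing  # ALB settings via annotations
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb          # reconciled by the AWS Load Balancer Controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ui         # in-cluster Service that backs the ALB
                port:
                  number: 80
```

The developer only writes standard Kubernetes objects; the controller in the cluster translates them into the external AWS resource.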
The second, more complicated stream usually happens when
additional backend services are needed.
A pattern which we see is an infrastructure definition repository that
holds the entire organization's infrastructure-as-code definitions,
for example for Amazon DynamoDB, RDS, S3, and other components.
An infrastructure, DevOps, or platform team (you may have different names
for those teams) needs to approve the pull request, which will trigger the
creation of AWS services in the AWS account.
Since this stream includes a handover process from one team to another,
these delivery streams can sometimes take hours.

The last stream is the slowest one.
It involves a traditional ticketing system to provision resources.
The database team here is just an example; it can be any other team.
The pattern here is that the developers have no way to influence or
trigger a change on the infrastructure service side, and they have to
wait for another team to process their request and fulfill their tickets.
The question that we, and developers, often ask ourselves is: how
can we make the deployment process faster?
How can we improve the turnaround from changing a line of application
code to deployed infrastructure and a running application in the cloud?
Before diving into the techniques and practices for improving
this process, let's set out the requirements of each team, because they are different.
From a platform team perspective, they want to build a consistent
deployment process while enforcing organizational standards.
They need secure, compliant environments, and they need
observability, because they may have many environments for various teams.
Application developers, on the other hand, want to have full ownership of
their application components, including the backend infrastructure services,
and they would like to have it the easy way, and they would like to have it fast.
In the infrastructure-as-code domain, we already see a definition
of clear boundaries and ownership when provisioning resources.
This quote is taken from the AWS Cloud Development Kit (CDK) Best Practices
Guide, and it calls out that whoever develops and manages the runtime
should also manage the infrastructure.
That's why infrastructure and runtime code should live in the same
package, in the same repository.
When you scan this QR code, it will take you to the AWS CDK
best practices guide, where you can read the details of that approach.
By applying this practice, we can see that a developer now manages both
the runtime and the infrastructure definition in the same code repository.
Once code is merged, two streams are triggered automatically.
One goes to the Kubernetes cluster, creating the runtime deployments,
pods, and so on, along with the backing services we discussed previously,
like load balancers or persistent volumes, which can be deployed
from the cluster.
The second stream goes directly to infrastructure-as-code tooling,
whether it's AWS CDK, which generates a CloudFormation template, or Terraform.
In turn, these tools create the needed infrastructure.
Applying this practice reduces the deployment time from hours to minutes.
Focusing on Kubernetes again, we can build a similar process using the
well-known operator framework to build and maintain external services
from within Kubernetes.
It's true that operator frameworks can be used to deploy and maintain
systems external to Kubernetes, but writing your own operator is rather
complex and what I would call undifferentiated heavy lifting.
This is where pre-built infrastructure controllers can help us
significantly speed up implementation time.
Those infrastructure controllers allow you to manage cloud services using
the Kubernetes API you already know, just like you do with your runtime.
This allows you to create your own platform API on top of Kubernetes
and have a declarative infrastructure configuration, which helps you
leverage the Kubernetes ecosystem to provision external resources as well
as internal resources in the cluster.
Two of the main tools available now are AWS Controllers for Kubernetes
(ACK) and Crossplane.
ACK is developed by AWS.
It enables the management of a variety of AWS resources and services,
and it is supported by AWS. Crossplane is a CNCF open source project
with an extensible framework, and hence a broad community around it.
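As a small sketch of what an infrastructure controller looks like in practice, assuming the ACK DynamoDB controller is installed in the cluster (the table name is illustrative), a DynamoDB table can be requested with a plain Kubernetes manifest:

```yaml
apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: orders                  # hypothetical table for our sample application
spec:
  tableName: orders
  billingMode: PAY_PER_REQUEST
  attributeDefinitions:
    - attributeName: id
      attributeType: S
  keySchema:
    - attributeName: id
      keyType: HASH             # partition key
```

The controller reconciles this object and creates or updates the actual table in the AWS account, the same way a Deployment controller reconciles pods.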
Returning to our previous diagram, we can see how those
tools can help us to provision the application backing services.
At that point, developers might ask if that means they need to have
extensive knowledge of infrastructure configuration, and need to know how
to deploy those various cloud services and how to combine them together
in the right way.
No, it's not necessary.
Let's have a look at this example.
At the bottom, developers simply request a database instance.
It doesn't really matter which instance; the developers need a database.
Of course, after creation they have the responsibility to configure it,
ensure it is running, and be able to modify it when necessary, so they
have control over that database instance.
However, the platform team might decide to have different implementations
depending on whether this instance runs in the cloud or on premises,
and to abstract the implementation details from the developer.
In our example, we see that the platform team can decide to implement this
database instance on premises using a PostgreSQL database and a
Prometheus instance for observability.
On AWS, on the other hand, they want to leverage IAM for identity,
Amazon RDS for the database, and Amazon CloudWatch for observability,
using more managed services to reduce operational overhead.
This method is usually called a composition, which means an abstraction
of the underlying implementation details behind a simple set of APIs.
Going back to infrastructure as code, we already have this capability
of building abstractions over different implementations while providing
clear interfaces on how to consume them.
As you can see on this slide, these quotes are taken from the documentation
sites of AWS CDK and HashiCorp Terraform, and they describe exactly that:
composition is the key pattern for defining higher-level
abstractions through constructs.
In AWS CDK we even call it the same way, a composition.
Going back to the previous diagram, we now know how we would use
infrastructure-as-code tooling with composition to abstract the
implementation details of our database instance.
But what about Kubernetes?
The CNCF Application Delivery working group published the platforms
whitepaper, calling out two open source projects available now that can
help with compositions.
Those projects are KubeVela and Crossplane.
KubeVela is a software delivery platform that implements the Open
Application Model.
This model focuses on building applications rather than configuring
Kubernetes objects, and it can be more flexible, more natural,
more native for developers.
Crossplane is the same tool that we discussed previously, but this time
it is about the capability to build a platform on top of Kubernetes,
as well as providing infrastructure controllers.
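On the platform side, the claim from the earlier example could be wired to AWS with a Crossplane Composition. The following is a minimal sketch, assuming the Crossplane AWS provider is installed; the composite type, resource versions, and values are illustrative:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: database-aws                 # AWS implementation of the Database API
  labels:
    provider: aws                    # matched by the claim's compositionSelector
spec:
  compositeTypeRef:
    apiVersion: platform.example.org/v1alpha1
    kind: XDatabase                  # composite type behind the Database claim
  resources:
    - name: rds-instance
      base:
        apiVersion: rds.aws.upbound.io/v1beta1
        kind: Instance               # managed resource from the AWS provider
        spec:
          forProvider:
            region: us-east-1
            engine: postgres
            instanceClass: db.t3.micro
            allocatedStorage: 20
```

A second Composition labeled `provider: on-premises` could compose a PostgreSQL operator resource instead, without any change on the developer side.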
Another advantage of using the Kubernetes API to provision AWS resources
is that you can leverage the extensive Kubernetes ecosystem tooling to
package, deploy, and validate your infrastructure definitions,
the same way you would use it with runtime configuration, with your
manifests, deployments, and pods.
A small example can be using Helm or cdk8s for packaging components;
Argo CD or Flux for GitOps, to reconcile the state of a repository with
the state of the cluster and infrastructure; and Kyverno or Open Policy
Agent for policy enforcement.
A sketch of the GitOps piece is shown below.
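For instance, a minimal Argo CD Application could point at the infrastructure folder of an application repository; the repository URL, names, and paths here are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: checkout-infra                   # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/checkout.git   # placeholder repository
    targetRevision: main
    path: infrastructure     # ACK/Crossplane manifests next to the runtime code
  destination:
    server: https://kubernetes.default.svc
    namespace: checkout
  syncPolicy:
    automated:
      prune: true                        # keep cluster and repo in sync
      selfHeal: true
```

Now I am going to show you how to use this in practice, how to use compositions and infrastructure controllers together.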
We at AWS built a reference solution for managing AWS services using
infrastructure controllers through compositions.
In this solution, we demonstrate how to provision an Amazon EKS
cluster using compositions and infrastructure controllers.
Yes, you can even provision the cluster itself with the help of compositions.
Once the platform team adds a cluster composition manifest to the
repository, the GitOps tooling reconciles the composition request,
which in turn triggers the creation of a cluster with all the needed
tooling deployed to it. You then have a cluster with all the necessary
components, which allows you to take the next step.
The solution also describes an application onboarding process, where
a developer commits their runtime code and infrastructure configuration
to an application-specific Git repository.
After that, the developer creates a pull request to a centralized
repository, so the cluster's GitOps tooling will start reconciling the
application-specific repository.
Once the platform team approves the PR, the GitOps tooling starts the
reconciliation of the application repository.
From there, the cluster's infrastructure controller tooling provisions
the relevant AWS resources for that specific application.
This creates a complete lifecycle, from a written line of code to a
deployed application with all of the required infrastructure and
solution components.
The solution itself is covered in a series of blog posts and has detailed
sample code to get started with.
If you scan this QR code, you will get additional information, and
you will be able to try the solution in your own AWS account.
To summarize what we learned today: we saw how grouping runtime and
infrastructure code defines clear responsibility boundaries, and how
infrastructure controllers extend Kubernetes capabilities to provision
AWS services.
We also saw that the Kubernetes API can enable standardization.
This allows us to leverage the Kubernetes ecosystem tooling,
especially GitOps, for provisioning not only Kubernetes resources,
but also infrastructure resources.
With that, I would like to say thank you.
I hope you enjoyed the session and please feel free to reach out
on LinkedIn and have a discussion about anything related to platforms,
containers, and architectures.
And I would really appreciate it if you spent one minute answering
a short survey about the session.
Your feedback is very valuable.
Thank you.