Transcript
This transcript was autogenerated. To make changes, submit a PR.
All right.
Before we dive into the topic, let's quickly revisit the problem.
You have a groundbreaking idea for your new application, and you have wisely
chosen to use containers for deployment.
But the challenge remains, how do you effectively run this container on AWS?
Hi, my name is Chakri Thipsupha, a solution architect from AWS.
And today I will walk you through some of the ways to run
containers on our cloud platform.
So let's explore the options.
Considering that nearly 80 percent of all cloud containers run on AWS today,
it's clear that AWS is the platform of choice when it comes to containerization.
But why do customers select us for hosting their container workloads?
Mainly because AWS makes it easy to build a container ecosystem, offering
more than 34 ways to run containers in the cloud and providing the
flexibility to suit diverse requirements from our customers.
And these are some of the key services with which you can build, deploy,
and manage your containerized applications with ease and efficiency.
I would like to divide these services into three layers.
So starting with the first one: the image repository.
We have Amazon Elastic Container Registry, or Amazon ECR.
You can upload, or push, your container image here.
At the same time you can also scan it, either on push or on a
regular period, like every week or every month, and you can keep
track of the versions of your container image here.
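As a rough sketch of that push-and-scan workflow (not part of the talk), assuming the AWS CLI v2 and Docker are installed and configured; the account ID, region, and repository name below are placeholders:

```shell
# Hypothetical account ID, region, and repository name -- replace with your own.
ACCOUNT_ID=123456789012
REGION=ap-southeast-1
REPO=my-web-app

# Create a private repository with scan-on-push enabled.
aws ecr create-repository \
  --repository-name "$REPO" \
  --image-scanning-configuration scanOnPush=true \
  --region "$REGION"

# Authenticate Docker to ECR, then tag and push the local image.
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"
docker tag "$REPO:latest" "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
docker push "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
```

With scan-on-push enabled, ECR scans each image version as it arrives, which matches the on-push scanning mentioned above.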
And after that, you have your container image inside the cloud.
Then you need to choose the orchestrator.
So here I include some of the options that you can choose from.
We have Amazon Elastic Container Service, Amazon Elastic Kubernetes
Service, and also AWS App Runner.
We also have some third-party orchestrators as well, for example
from Red Hat.
But today we are going to explore these three options.
And then, when you select the orchestrator, which will be covered later,
your container needs to run somewhere, right?
So that is the compute layer.
You have the choice to run it on Fargate, which is the serverless option,
or on EC2 instances.
So these are the three main categories of services that
we are going to cover today.
Before we dive deeper into those services, I want to pause a
little bit and share about the shared responsibility model on AWS.
Some of you might have heard about this before.
The key point is that, no matter what AWS service you use, the availability
and also the security posture of your application is a joint responsibility
between you, the customer, and AWS.
So there is going to be a part of the stack that we own the responsibility
for, and a part of the stack that you own the responsibility for.
We are going to use this lens of shared responsibility to go through each
of the services that I mentioned on the last slide.
These were the early days of containers on AWS.
Around 2013, containers were starting to become more and more popular,
and we saw customers using EC2 instances to install a container runtime
like Docker to run web applications.
So with one instance, it's totally fine.
But how about with a hundred or a thousand instances?
Customers also started to use Kubernetes or Mesos to orchestrate their
workloads, maintaining their own control plane on separate EC2 instances.
So that was the first opportunity that we spotted.
It just felt wrong that customers had to run more instances just to
manage their existing instances.
That's why, in 2015, we launched ECS.
We basically moved the container orchestrator down to the AWS side of
responsibility, and then used that container orchestrator to communicate
with the EC2 instances via an agent.
But you still needed to own a lot of things, of course, including the
load balancer, the auto scaling algorithm, and also the CI/CD pipelines,
both the CI/CD pipeline for your container image and the CI/CD pipeline
for your application.
So this was still a lot of stuff on the customer side of responsibility,
not to mention OS patching, runtime patching, and agent patching.
For someone who just wants to run a web application, that is still too much.
That's why, in 2017, we moved the line of responsibility higher with Fargate.
Fargate is the serverless option that you can use to run your containers.
It means that if you want to run containers, you don't ever have to launch
an EC2 instance anymore.
We took responsibility for the underlying tasks like OS patching, agent
patching, and everything.
You as the customer don't need to worry about the instance layer at all.
But there are still some tasks that belong to you, the customer.
You can guess: the load balancer, auto scaling, and both CI/CD pipelines,
for the container and for the application code.
The same also applies to Elastic Kubernetes Service, which we launched in 2018.
For this one, we saw a lot of customers running self-managed Kubernetes
workloads on their own EC2 instances in the cloud.
That's why we wanted to have a Kubernetes control plane, with its
availability and also its version upgrades taken care of on the AWS
side of responsibility.
And EKS supports EC2, and it can also support Fargate as well, if you
would like a serverless offering.
But then it came to the point of how we could make it even easier for you.
You can guess, right?
In 2021, we moved the line higher again.
Here you don't need to take care of your container at all.
You don't need to take care of the load balancer.
You don't need to take care of the CI/CD pipeline for your code or for
your container image at all, or even auto scaling.
We launched AWS App Runner, so you just need to take care of your
application source code.
Let's see how it works when it comes to AWS App Runner.
So let's see.
So this is how the user experience for App Runner looks.
We can start from the web development team.
Either they have the source code in GitHub, or they already have the
image and have pushed it into ECR, the image repository.
So we have two choices.
Then, with the right permissions, you allow App Runner to load the source
code from GitHub, or to load the image from a container image repository
like ECR.
And it's just one simple API call that you make to create a service
inside AWS App Runner.
After that, it will take a while, like 7 to 10 minutes, depending on the
size of your source code and also the size of the image, and it will
return a URL.
This URL can be for internal use, or it can be internet facing as well,
depending on the option that you selected.
You won't see the instance or Fargate.
You won't see anything.
It will just return the URL and then the client can make the
HTTP request to that endpoint.
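As an illustrative sketch (not shown in the talk), that "one simple API call" could be made with the AWS CLI as below; the account ID, role ARN, image URI, and port are hypothetical placeholders:

```shell
# Create an App Runner service from an ECR image (placeholder identifiers).
aws apprunner create-service \
  --service-name hotel \
  --source-configuration '{
    "ImageRepository": {
      "ImageIdentifier": "123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/hotel:latest",
      "ImageRepositoryType": "ECR",
      "ImageConfiguration": { "Port": "8080" }
    },
    "AuthenticationConfiguration": {
      "AccessRoleArn": "arn:aws:iam::123456789012:role/AppRunnerECRAccessRole"
    },
    "AutoDeploymentsEnabled": true
  }'

# The response includes Service.ServiceUrl -- the HTTPS endpoint that
# clients can call once the service reaches the RUNNING state.
```

The ServiceUrl field in the response is the URL mentioned above that clients send their HTTP requests to.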
So let's see it in action after this.
Yep.
So I will just give a demo example of how we can take an image and then
create the AWS App Runner application.
So let's see it in action.
So here we have the Docker image already.
We have one image.
I already tagged it.
Here the size is 300 megabytes.
It's my web application.
And then we would like to push this one to ECR, which is the image repository.
If you remember the first layer that I mentioned, I already created a
private repository here.
The name is hotel.
And you can also see the commands for how to push it from your local
device, your local computer.
Here I just copy the instructions to push my image to ECR.
I just copy them, because I already tagged it, and log in to ECR.
And, yeah, I already pushed it.
Let's check inside ECR.
So here is the image that we pushed.
The version is the latest one, and the size is 100 megabytes.
Now we can create the service in App Runner.
So this is App Runner, and as I mentioned, we can select either the code
from a repository in GitHub or an image from ECR.
Now we are going to use ECR, and we can browse: that is hotel, if I'm
not wrong.
Yep, hotel with the tag latest.
And here you also need to provide the credential, the IAM role that
allows App Runner to download the image from ECR.
Specify the name of the service and also the size of your Fargate,
meaning how many vCPUs and how much RAM.
You can also specify the port here, which port you would like to open.
For auto scaling, if you have more requests, it will scale up from one
instance to 25 instances of Fargate for you.
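As a hedged sketch of that scaling setup (not shown on screen), a custom App Runner auto scaling configuration can be created with the AWS CLI; the name and limits below are illustrative:

```shell
# App Runner scales on concurrent requests per instance.
# Here: 1 to 25 instances, each handling up to 100 concurrent requests.
aws apprunner create-auto-scaling-configuration \
  --auto-scaling-configuration-name hotel-scaling \
  --min-size 1 \
  --max-size 25 \
  --max-concurrency 100

# Pass the returned AutoScalingConfigurationArn to create-service via
# --auto-scaling-configuration-arn to apply it to a service.
```

The console form in the demo fills in the same MinSize/MaxSize values behind the scenes.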
And for networking, you can choose either a public endpoint or a private
endpoint.
You can configure it in terms of incoming and also outgoing traffic as well.
Yeah.
And I think that's it.
And you can click next.
And also review here.
It just takes some time to create and deploy.
So let's just create it.
And it will take some time, like 7 to 10 minutes in my case.
So here we have the service and the status is running already.
Going into this, we would like to see some logs down there.
Here are the logs, showing that App Runner just pulled the image from ECR
and then deployed it through the pipeline.
The deployment with this ID is already successful.
So what else?
Something that we can configure later: we can have a connection to our
database, and we can have a connection to our parameter store, like SSM,
or to Secrets Manager to store a SQL username and password or something
like that.
Here, for networking on the outgoing side, I would like to connect to my
RDS instance that sits in another VPC.
You can configure that later.
So let's test it out.
We have the URL already here.
It can be customized later as well, so I just use the same one.
And voila, my web application is working.
This is the room management for my hotel.
It's called Chakri Hotel.
And we can list the rooms from our RDS instance, our database instance.
And we have three rooms.
Can we add some more?
So here I would like to add a room, room number four on the fourth floor.
And we've already added it.
So this one writes to the database, and you can see that it's already
there, and we can read it from the database already.
So you can see that it's quite easy to get going with App Runner.
And you can explore the other options as well, like ECS and EKS,
depending on your use case.
So then, coming to how we can choose from this number of orchestrators:
I have three main criteria for you to consider.
The first one is operational complexity together with flexibility,
because these two criteria tend to go the same way.
For App Runner, you don't really have any operational complexity at all.
It's quite low, because you just have your code and upload it, or you
have your container image and just click to create a service, and then
you have your application endpoint.
But for ECS, you still have to manage the infrastructure, like Fargate;
you still need to manage the CI/CD pipeline, the auto scaling, and
everything.
Just like with EKS, where you need to upgrade your OS, your EC2
instances, and the version of the EKS control plane as well.
And the flexibility is quite high for EKS, because all the open source
or CNCF projects that support Kubernetes, you can run on EKS.
So this is the first criterion: operational complexity and flexibility.
They go the same way.
And then how about the scalability?
As you can see, App Runner scales automatically depending on the number
of requests, as you may remember.
But for ECS and EKS, you can scale based on different metrics, like CPU
utilization, memory utilization, the request count coming into your load
balancer, or other metrics.
So you can scale based on a lot of things, which gives you more
flexibility with ECS and EKS.
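To illustrate that metric-based scaling (not part of the talk), a target-tracking policy for an ECS service can be set up via Application Auto Scaling; the cluster and service names below are hypothetical:

```shell
# Register the ECS service's desired count as a scalable target (1-10 tasks).
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/hotel-cluster/hotel-service \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 1 --max-capacity 10

# Track average CPU utilization at 60%: ECS adds or removes tasks
# to keep the service near that target.
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --resource-id service/hotel-cluster/hotel-service \
  --scalable-dimension ecs:service:DesiredCount \
  --policy-name cpu-target-60 \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 60.0,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    }
  }'
```

Swapping the predefined metric type (e.g. to memory or ALB request count per target) is how the other metrics mentioned above would be targeted.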
And the next one: how about the cost optimization capability?
For App Runner, unfortunately you cannot do much, because we take care of
everything under the hood; we use ECS with Fargate to provide the AWS App
Runner service.
But for ECS and EKS, you can use Reserved Instances, you can use Savings
Plans, and you can use auto scaling to cover cost optimization.
You can use Graviton, which is Arm CPU based, to get better price
performance compared to x86 architecture.
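As a sketch of that Graviton option (not shown in the talk), an ECS task definition can request ARM64 Fargate capacity via the runtimePlatform setting; all names, sizes, and the image URI here are illustrative:

```shell
# Register a Fargate task definition that runs on Graviton (ARM64) capacity.
cat > taskdef-arm.json <<'EOF'
{
  "family": "hotel",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "networkMode": "awsvpc",
  "runtimePlatform": {
    "cpuArchitecture": "ARM64",
    "operatingSystemFamily": "LINUX"
  },
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/hotel:latest",
      "essential": true
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef-arm.json
```

Note that the container image itself must also be built for ARM64 (for example with docker buildx) for this to run.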
So you can see that you have multiple criteria to choose your
orchestrator based on your use case.
For example, if you would like your workload to be portable between
multiple clouds, or on premises as well, EKS may be a better option,
because ECS is our proprietary orchestrator.
So the rule of thumb is to always start your experiments with the highest
abstraction first, and only go down the stack if you need to.
You can start by doing some experiments with App Runner.
Then, if it doesn't work, or if you would like to control the scaling
mechanism, you move to ECS.
And if you would like to run a lot of CNCF open source projects, you can
use EKS instead.
So this is the rule of thumb.
As we come to the end of my presentation, it's clear that each service
offers unique advantages and also considerations to take in.
So now I would like to leave you with a question: which of these services
aligns best with your organization's goals and needs for containerization
on AWS?
Your answer may not be immediate, but exploring these options further
could lead to valuable insights for your cloud strategy.
So thank you so much for your attention.
And, yeah, so I welcome any further discussion.
You can reach out to me via LinkedIn or yeah, you can just contact
me and ask anything about AWS.
Thank you so much.