Transcript
This transcript was autogenerated. To make changes, submit a PR.
Azure Container Apps is one of many cloud-native offerings
in Microsoft Azure which enables you to run microservices
and containerized applications on a serverless platform.
Azure Container Apps enables executing application code
packaged in any container, with any runtime or programming model.
With Container Apps, you enjoy the benefits of running containers
while leaving behind the concerns of managing cloud infrastructure and complex
container orchestrators. My name is Bojan Vrhovnik,
senior Cloud Solution Architect working for Microsoft,
and in this session we will explore Azure Container Apps, going from
simple demos to more complex requirements. By moving our
application to containers, deploying them from our private registry
manually or by using the command line, and then leveraging automatic
deployments via GitHub, we will improve our application
gradually and explore the different features and options provided by
the Azure Container Apps solution.
When we say "part of cloud native", we mean integrating
different building blocks to achieve your business goals. You can build
your cloud-native apps with Azure fully managed services,
seamlessly integrated development tools, and built-in enterprise-grade
security. You can use the tools, techniques, and
technologies of your choice while implementing a microservices-based
cloud-native architecture that makes it easier to
develop and scale your applications. You can work efficiently through
an end-to-end development experience: coding, debugging,
deployment, monitoring, and management with integrated
tools and DevOps as a process. For example, you can
build container apps; connect to a SQL database; store files
in Azure Storage; audit access and react based on policy;
use services to translate text, recognize objects,
get transcripts, and more from Cognitive Services;
enable just-in-time access; verify container images for
vulnerabilities; set up continuous delivery and integration with
enterprise-grade solutions; plug in proven community plugins
to efficiently scale and manage apps; and much more, with new capabilities added
each month. In short, it enables you to build not
just container apps but enterprise-grade solutions, with focus on
the logic and the application itself; the rest is covered by Microsoft and
Azure. With multiple options
to run containers in Azure, where does Azure Container Apps, or
ACA for short, fit in? You can run any container in ACA.
Its ability to scale down to zero to reduce
cost can be quite useful in background scenarios. For example,
you can execute jobs scaled out to many instances, and after
execution finishes, it scales back down to zero automatically. Or you
want to host web applications with your own domain certificates and
integrated authentication, with multiple versions for users. Or
you decide to host APIs for your customers to consume, with
an easy way to do blue-green deployments or A/B testing scenarios.
Or perhaps you need dynamic scaling based on
CPU load, HTTP requests, or any other factor
important for your needs, without configuring a complex environment
behind the scenes. Or maybe you have multiple teams,
each building their own microservice, and you need integrated
support for failures, retries, timeouts, distributed calls
over the network, service-to-service invocation, pub/sub,
different internal and external services, and much more. Or maybe
you decided to build a game: you need to scale, spawn
actors, and manage state across different services,
without creating all of the infrastructure behind the scenes.
This is just the tip of the iceberg of possible scenarios
with ACA. But we already have a solution for that in Azure,
called Azure Kubernetes Service, so how does AKS compare to
ACA? AKS is infrastructure focused,
which means you are highly flexible,
whereas ACA is more focused on the application and sees infrastructure
as an abstraction. What does that mean exactly in terms
of control and cost? With AKS you have full access to the API server
and a high level of control over the cluster configuration, and you pay
for the nodes that you're using. ACA, on the other hand, is an abstraction
built on top of Kubernetes: you have no access to the API server, and you
pay based on what you consume. In terms of deployment,
on AKS you deploy via Kubernetes deployment manifests
(YAML), Helm charts, or the CLI, while on ACA you
can use the portal, the CLI, or infrastructure-as-code templates
to deploy container apps. If we look at integration,
on AKS you can install components like KEDA, Dapr, or a service mesh,
but you also need to maintain and bootstrap them.
On ACA they are fully managed and supported;
you use the features without having to bootstrap them in the environment.
Let's see this in action.
Let's start by creating a resource inside the Azure portal.
Let's select containers as a category
and then find Container App. Let's press create
to start the wizard. In this wizard we select the
resource group where our application will reside, choose a name,
and then configure the application environment. We will talk
about the application environment a little bit later; let's leave it as
the default. Next, we set up the
container application. What we define here is the
kind of container we'll be using. You can either
select the quickstart image, which is a sample
container application running just a simple web
app, or select your own container registry.
By default this is enabled to accept traffic from anywhere,
which means it will be publicly available on port 80.
So let's go to the next step, where we define tags, which are
a really nice feature for defining
name-value pairs to categorize resources,
which can be really handy for billing purposes or when we apply
policies at the subscription level.
So when we press create, it will kick off a deployment,
and if we refresh the process,
it will show us a successful deployment. What we can then do is simply
go to the application URL which we configured.
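For readers following along from a terminal, here is a minimal sketch of the same flow with the Azure CLI (resource names are placeholders, not the ones from the demo; `az containerapp up` provisions the environment and Log Analytics for you):

```bash
# Install/refresh the Container Apps CLI extension
az extension add --name containerapp --upgrade

# Create a container app from the quickstart image with external
# ingress on port 80; the environment is created automatically.
az containerapp up \
  --name demo-app \
  --resource-group demo-rg \
  --location westeurope \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --target-port 80 \
  --ingress external

# Print the public application URL (FQDN)
az containerapp show --name demo-app --resource-group demo-rg \
  --query properties.configuration.ingress.fqdn -o tsv
```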
Let's go back to the slides to learn a little bit more about environments.
So let's go into a little more detail: what happened behind
the scenes? First we filled out some information about the
container app: the resource group, the region in which we would
like to host the application, and the name of the application, and then we needed to specify
an environment. We chose the default option, which created default
settings. But what is an environment?
In short, it is a virtual boundary
around a collection of container apps. In Kubernetes, we achieve
the same logic with the use of namespaces. We define what we need
and configure the environment based on the requirements. Let's see this
in action. Let's navigate to the container
environment; you can find it in the overview tab.
Clicking it redirects us to a page where we can set up additional
managed services like Dapr, certificates, and Azure Files,
and where we can also configure logging and monitoring options
for our containers. Here we can see how Dapr can be configured
without us setting up or bootstrapping everything from
scratch; we can just use the services. We can
also define where our logs and metrics should be stored,
choosing either Azure Log Analytics or Azure Monitor, and which
Log Analytics workspace to use. We then have one source
for all of the logs and metrics from our services,
and we can even search custom tables
with specific information about the containers:
the revision, the name, and of course additional information
such as which ports are available, all of which can be reviewed.
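As a sketch, creating an environment wired to an existing Log Analytics workspace can also be done from the CLI (workspace and resource names here are placeholders):

```bash
# Look up the workspace ID and key for an existing workspace
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group demo-rg --workspace-name demo-logs \
  --query customerId -o tsv)
WORKSPACE_KEY=$(az monitor log-analytics workspace get-shared-keys \
  --resource-group demo-rg --workspace-name demo-logs \
  --query primarySharedKey -o tsv)

# Create the environment and send its logs to that workspace
az containerapp env create \
  --name demo-env \
  --resource-group demo-rg \
  --location westeurope \
  --logs-destination log-analytics \
  --logs-workspace-id "$WORKSPACE_ID" \
  --logs-workspace-key "$WORKSPACE_KEY"
```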
So when we are debugging our application from scratch:
the environment is up and running, and we can define services which
will be available to our containers. In this case, what we saw was a
simple container provided by the ACA team.
What if we want to run different containers? What if we need to test
out features with a subset of users to get feedback? Or maybe we
want to apply changes without downtime, or
go back to a previous version of the container.
How do we tackle that challenge? This is where revisions come
into play. Revisions enable us to have
different versions available, so-called snapshots, and we can
decide how the traffic will flow from users
to services. Let's see this in action.
We have a simple ASP.NET Core web application with two pages, Environment
and Second Page. The first contains code which
reads an environment variable named Message; if that variable
is not set in the system, it outputs
"environment variable not set". The second page
just displays some text. I built two containers:
one whose navigation contains only the Environment link, and a second one
which contains both links.
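The page logic described here is essentially a one-liner; a minimal sketch (the variable name Message and the fallback text follow the demo, the page markup is omitted):

```csharp
// Read the "Message" environment variable; fall back to the
// default text the demo page shows when it is not set.
var message = Environment.GetEnvironmentVariable("Message")
              ?? "Environment variable not set";
Console.WriteLine(message);
```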
Let's go to revision management and create a new revision where we
will use the container we built locally.
So what I do in this case is select the container image and name it
so that I know what I'm working on.
Then we select the web application image that
we deployed; it has the tag simple-web-app-environment,
so in this case only the Environment link
will be displayed. Here we can also add environment
variables; we will use manual entry
to showcase how environment variables can be injected.
We don't need the simple hello world demo,
so we will delete it, and we will give the revision a name
so that we can easily find it in revision
management. Let's create the revision. After a
few seconds it will be provisioned, and it will take all
of the traffic for this website.
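The same revision could be created from the CLI; a hedged sketch (image and registry names are placeholders):

```bash
# Create a new revision from our own image and inject the
# environment variable the page reads.
az containerapp update \
  --name demo-app \
  --resource-group demo-rg \
  --image myregistry.azurecr.io/simple-web-app:environment \
  --revision-suffix simpleweb \
  --set-env-vars "Message=Hello from revision one"
```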
And now what we can do is click on this link and
select the revision URL, a revision-specific address which we can
use to check that everything is okay; and if we click it, the environment variable
which we set is shown there. Next, what
we want to do is use the container with the Second Page link, so we
want to create a new revision. We go and select the
container and change it to the image with the second page. In this case
we also change the environment variable value so that we can tell which
revision responded, and we save all of the changes.
Of course, we name it in a way that we understand,
so that we can refer to it when we need it, and click
create to create the second revision.
When this finishes, you will see that we now have two revisions:
one is the simple web and the second one is the simple web with
second page, and the new one now receives 100%
of the traffic. Let's test to see if this works as expected.
Now we have two links, Environment and Second Page, which we see
here with the updated environment variable. But what if you want
to do traffic splitting? What if you want to
send, for example, 30% or 50%
or 40% of the traffic
to a specific revision? This is where we can
choose the multiple revision mode, where we can define what
percentage of the traffic goes to each revision. For demo purposes,
we will use 50/50, so that on every second request
it will display a different web page.
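For reference, the same 50/50 split can be configured from the CLI; a sketch with placeholder app and revision names:

```bash
# Switch the app to multiple-revision mode
az containerapp revision set-mode \
  --name demo-app --resource-group demo-rg --mode multiple

# Split traffic evenly between the two revisions
az containerapp ingress traffic set \
  --name demo-app --resource-group demo-rg \
  --revision-weight demo-app--simpleweb=50 demo-app--secondpage=50
```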
So let's save all of these changes, and
when they are saved, we can check that the revision still works: we go to
that revision and check that everything is as it should be.
We can test the solution ourselves, or we can send this link to a customer
to test the application, so we know that everything is as
it should be. And now we can see if the split
works. We go to the application URL; the request hits
the Envoy ingress, which routes it to one of the revisions
according to the weights. So let's refresh a few
times so that you can see the result. As you can see,
the second page is displayed, and after a refresh it is gone.
In essence, what happened is that the container app now has
multiple versions, or snapshots, of the workload,
and we can decide by business rules how to apply our logic as
needed, without configuring Helm charts or any infrastructure
behind the scenes. Our app is now running,
but then we receive a lot of requests and the system is
not handling the load as we expect. We need to scale
the solution horizontally to handle the load. Even though we split the traffic,
requests keep coming in, and we are not achieving
a resilient application. So how do we configure
autoscaling? Let's check this in Azure.
So let's go to the scale and replicas option and
select the revision we want to work with: we will work with
the revision which has the second application with both links,
and click edit and deploy. There is a tab called
Scale, where we can decide how
many replicas we will have. Since the requirement is
a lot of incoming HTTP requests, we can
add a scale rule on concurrent
requests, which will scale based on that specific
metric. When those thresholds are met, it will scale
the replicas accordingly, up and down.
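A sketch of the same rule via the CLI, using the numbers from the demo (the app name is a placeholder):

```bash
# Scale between 2 and 10 replicas, adding replicas when a replica
# sees on the order of 100 concurrent HTTP requests.
az containerapp update \
  --name demo-app \
  --resource-group demo-rg \
  --min-replicas 2 \
  --max-replicas 10 \
  --scale-rule-name http-concurrency \
  --scale-rule-type http \
  --scale-rule-http-concurrency 100
```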
Let's create the scale rule, and
when the scale rule is created, we can see that the traffic
to the new revision is zero, because we created a new revision.
So we need to change the revision mode back to
single. We could leave it as is and define the traffic,
but in our case we want support for automatic scaling
behind our app URL. And now
that it's successfully updated, we can go and
check that the application is working. What we can
see is that the configuration in
this case now has a minimum of two and a maximum of
ten replicas, and the system
is provisioned to handle a hundred concurrent requests
coming in on that side.
Now that we have the basics covered, let us use this knowledge and deploy a
slightly more complex solution to Azure Container Apps. We will explore
the solution on the local machine and then set up the environment
in the cloud. The solution is already containerized and located in
our Azure Container Registry. If you want to follow along, you
can use the scripts in the repository (the link is provided on the
screen); go to the scripts folder, where you will find scripts for
various tasks. Let's see the complex application in action.
I built a web application which represents work
tasks, which can be private or public,
which you can comment on, export to PDF,
get statistics from, and much more. The idea here is to have a
web application and a database; in this case
the database is a SQL database. You connect to
that database directly through a repository
pattern. Then you also have a web API
which is exposed internally, which means this application connects
to that specific API, getting back
statistics specifically for the signed-in user
about the work tasks, daily tasks,
public tasks, and so on. Then you also have public
access, which means a user can access the application through
a web browser directly via the web application, or
connect directly to the API that is exposed
publicly to the outside world. Then we have
a background service which collects
the data out of the SQL database and stores
it to a file located on the file system.
In this case, this is a JSON file with all of the statistics,
saved daily to a specific folder, and
it can then be retrieved via the API or
via an SDK library back to the user
on the system. So how does this look in
code? Let's check inside our developer environment.
In the structure we have the UI (user interface), which contains
the web application and the API, and then we also have
the background service. Then we have generators, which generate some data
to populate the database with bogus data.
Then we have the data layer, which represents
our models, our repository patterns to connect to
the database behind the scenes, and many other useful services.
So let's run the application to showcase how
it looks and how it works.
This application will now run on the local web server.
You see below that we are running two applications: one
is the local web and the other is the report API.
Now when we go here, you see that I'm running it on my
operating system. If I log into the system,
I am redirected to the dashboard,
and here I have options to see my tasks that are located
here, and I can go inside a task and see the comments on that specific
task. What I can then do is go to Home, and when I
go to Home, you'll see that an API call was
made: the content you see here is retrieved
from the report API, giving me the latest
stats about my own achievements if I'm logged
in; if I'm not logged in, I don't receive this information back.
Then I can also go to a task, for example
a public task, and say that I would like to download a PDF;
this also issues a call to the REST API, giving me
back the public statistics about tasks
available for a specific period of time. Okay, so let's
deploy this application to Azure.
Let's create a new resource,
go to containers, and create a new container app. Let's
give our container application a meaningful
name like conf-web, and then create a new container
apps environment where we'll specify additional
settings. Let's name it conf-env, and
for monitoring select the Log Analytics workspace
we created before. Let's specify the container:
from our container registry I used the web image
for the application, with the latest tag,
and let us enable the application to be accessible
from the outside world on port 80.
Let's review and create the solution, enter
some tags which will help us with billing purposes,
then review and create the deployment.
Let's repeat the same story for the reports and
also for the stats and all the other applications
that we will run. Let's repeat this for
the report API, and
let's do the same with our filestat
service. The only difference here is that
it does not need external access,
so we won't be configuring ingress in this case.
Let's go and
add some tags for billing purposes, then review
and create, and we are ready to start with the application itself
when this is finished.
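A condensed sketch of those three deployments via the CLI (registry, image, and app names are placeholders following the walkthrough; registry credentials are omitted). Note the stats service is created without any ingress:

```bash
# Externally reachable apps: web front end and report API
for app in conf-web conf-report; do
  az containerapp create \
    --name "$app" \
    --resource-group conf24-rg \
    --environment conf-env \
    --image conf24registry.azurecr.io/$app:latest \
    --target-port 80 \
    --ingress external
done

# Background stats service: no ingress at all
az containerapp create \
  --name conf-stats \
  --resource-group conf24-rg \
  --environment conf-env \
  --image conf24registry.azurecr.io/conf-stats:latest
```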
Let's go to the container apps
and click on our conf-web
container. Let's check
whether the container is up and running by
clicking on the application link, and we can see that
the application is there. Let's try to create
a new user and type in some details,
and what we will see is an error, because we don't have a
SQL connection defined. We can try the same
with the reports API, but because we don't have
an endpoint to exercise, what we can do instead is check the log stream
to see if the application is up and running, and we can see that
there are some errors regarding environment variables.
What kind of variables do we need? In our user
interface web application we have a few variables which we
need to set up. If you go and check inside the
application itself, we have an app option with
the URL to the report service, we have the connection
string for the SQL database, and then we have some authentication options.
Below you see the Azure Storage settings, which
is something we will implement in a later stage, and the same goes
for the report API: here we have the
same options that you saw in the web application. The same
goes for the stats service, where we have only one option, which
is the SQL connection string.
So let us first fix the connection string. Connection strings are sensitive data, so what
we need is a way to secure that data.
We will use secrets inside our container apps in
order to protect the connection string so that malicious users cannot see
the value; otherwise they would be able to connect to our database. Let us
fix the web application: here you will see that we have an option to add
secrets, and we will add the secrets required by our application.
The first secret that we will add is the SQL connection string.
We will give it the key sqlcon
so that we can reference it later on, and we will add it
to the system itself. We'll also add two
secrets which will be needed in later stages.
The first is the API key, because
what we need in order to call the API from
our web application is a key: based on that key, the
API will know that we are authenticated. So let us paste in
a key. Then we also
add another secret, the salt.
The hash salt is used to
hash the route values. Let me add
this as well. Now we have the secrets added. Let us first check the
application so that we see the problem again. Let
me open this web application, and as we saw before,
go to the login page, register a new user,
enter some data, then register, and you can see that the application
is still not working. So let's use the secrets that we added
to the system. Let's go to revision management.
Let's open this web application, choose
containers, and then edit and deploy.
In this case we select the container image that was used:
the web image with the latest
version; everything else stays the same. The only thing we add here is
environment variables. First we need the connection string,
so we add SqlOptions__ConnectionString
and reference a secret, the sqlcon secret
we stored. So we added the SQL connection string.
Let me save the data,
name the revision so that we know what we are
referring to, and then create the revision.
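The same secrets-and-reference dance via the CLI, as a sketch (secret values are placeholders; the double-underscore key is the .NET convention for mapping environment variables to options, and the exact option names here are reconstructions from the talk):

```bash
# Store the secrets on the container app
az containerapp secret set \
  --name conf-web \
  --resource-group conf24-rg \
  --secrets sqlcon="<sql-connection-string>" \
            apikey="<api-key>" \
            salt="<hash-salt>"

# Reference a secret from an environment variable in a new revision
az containerapp update \
  --name conf-web \
  --resource-group conf24-rg \
  --revision-suffix sqlfix \
  --set-env-vars "SqlOptions__ConnectionString=secretref:sqlcon"
```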
Now we can go back to the revisions: the provisioning
was successful. So let me go to the overview
and run the application again. And now if we
go to the login and say that we would like to register,
let's say we want to add
bojan@outlook.com and some
password; when we press register, it goes through,
and we have regular access to the pages. Now we can go to the tasks,
public tasks, see all of the tasks, and perform
all of the application actions we want. If we go to Home,
you see that we don't have anything there.
It takes a little time, because we have a retry policy enabled
with Polly, and we get an empty result.
So here you see that the page is
presented to us, but something is missing, because we didn't provide
any connection to the reporting service.
We fixed the connection string; now what we need to do is fix
access to our reporting service. So let's go to
conf-report and copy the application URL,
go back to conf-web, select containers,
choose edit and deploy, and select
the container that we would like to fix.
Here you see that we have the latest version. What I did
was add the API key and hash salt from
the referenced secrets. What we need to do now is
add a new option,
ApiOptions__ReportApiUrl,
in this case as a manual entry with the value that we
copied here. Let's save the
container changes, name the revision
report-app,
and then create the revision.
Let's go to the overview tab, open the web browser,
and log in with bojan@outlook.com.
When we log in, what
we can now do is create a task, for example a test task,
make it publicly
available, add some data, save, and
add some comments.
And now when we press the Home button, what you will see is
that we get back a result, which is the call to the API
itself. So we fixed the web application and the report service;
now we need to fix the stats service as well.
We have a stats service which stores the data inside a file.
Since we didn't configure any volumes or similar to
store the data, what we get is an exception: when we
go to the log stream, we see an exception that something is wrong.
Here we explicitly disabled ingress, because this is a background
service. But what we want to do now is fix
this specific problem. In order to fix it, I already provisioned
a storage connection string for Azure Storage
in the secrets tab. So what we'll be doing is
saving all of the files, all of the statistics, regularly
each day to a file located in Azure Storage.
Now, in order for us to use this, I built a new
container with the tag storage,
which we can leverage to fix this specific problem. In
code, I provided an interface and an implementation
which uses Blob Storage behind the scenes to store the data inside
Azure Storage. All that code is located on my GitHub.
So let's go to containers, edit and deploy the container, and
let us change this container to use the connection
string. In our image
I have a storage option, and what I will do now
is add
AzureStorageOptions__ConnectionString,
reference the secret that we provided,
save, name the revision storage,
and then create the revision. When
the revision is created, let's check that the application is running,
and now we can go to the log stream and
see the output from the log.
After it connects, we will see that everything
is okay, and here you see that the stats run completed,
storing the data inside that file.
We fixed the stats report, but what is the
challenge? In order to understand that, let's see the implementation itself.
In the task report controller we have one
endpoint which returns the most active tasks, and it
uses a repository pattern, in this case the work stats repository.
If I go into the details, this is an interface, and this interface
has only three methods: generate stats, get all,
and get stats for a defined range.
So what is the problem? On localhost, the filestat service
uses the file system to
store the statistics. In Azure,
we implemented this interface with
Blob Storage. So if I go to the data layer, you see here that I
have a storage data project with just one class which
implements this work stats repository, and here you can see that
we have different parameters that we need to provide in
order for it to work with Azure Storage. Now, if
we wanted to save this to another store,
for example SQL
or Cosmos DB, what we would need to do is create a new
library, implement this work stats interface,
and register it in the reporting service.
So in the reporting service we can go inside
Program.cs, and here we have the configuration itself.
Now, in my case I configured Dapr already, but here, for example,
what we could do is copy
this registration and say that we would like to use the
Blob work stats repository as the implementation
of the work stats repository interface.
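A hypothetical sketch of that Program.cs registration; the interface and class names are reconstructions from the talk, not necessarily the exact identifiers in the repository:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Swap the implementation behind the same interface: register the
// Blob Storage repository instead of the file-system one.
builder.Services.AddScoped<IWorkStatsRepository, BlobWorkStatsRepository>();

var app = builder.Build();
app.Run();

// Stand-ins for the types described in the talk.
public interface IWorkStatsRepository { /* GenerateStats, GetAll, GetStats(range) */ }
public class BlobWorkStatsRepository : IWorkStatsRepository { /* talks to Azure Blob Storage */ }
```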
The problem here is that now we have three parameters
that we need to provide, which means we need to configure the
application settings so that they can be injected
into the application, and then we need to build a
new container. So in this case you see that we have specific settings
related to Azure Storage, and then we need to adapt the Dockerfile
(in the containers folder you'll find all of the Dockerfiles),
provide support for that, and
tag the image appropriately. So how can we solve this problem? What
if we want to change the storage without changing the container
with new settings, new configuration, new environment variables, new
tags, and so on? We could create a generic service,
but then we get different requirements from the business or from customers, and
we need to adapt to that change. Maybe we need to observe
the calls, or we need to communicate securely between the services.
This is where Dapr comes into play; this is where Dapr shines.
Dapr is a portable, event-driven runtime that makes it easy
for any developer to build resilient, stateless and stateful applications that
run on the cloud and the edge. It provides best practices for common capabilities
when building microservices applications, which the developer can use
in a standard way and deploy to any environment. It does this by providing
distributed-system building blocks. Each of these building-block APIs is
independent, meaning that you can use one or some of them in your
application. So how does it work? Dapr uses the sidecar
container pattern. When enabled, it runs a sidecar container
listening for our requests, either via HTTP or via gRPC.
We issue a command saying that we need state or an event, and the
Dapr sidecar container gets the data and returns the info to us.
What we need to do is configure the sidecar container to use the
different stores, and our app then calls the same API signature
without us needing to change all of the structure behind the scenes.
Let's see this in action. Let us first see the implementation in
code. This Dapr work stats repository
basically just calls
the Dapr client: it builds the connection,
everything that is needed, and then we call
GetStateAsync with the state that we want to retrieve, or we
save state to a specific data store
with a specific key and value.
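A hedged sketch of such a repository using the Dapr.Client NuGet package; the type, store, and key names are assumptions mirroring the walkthrough:

```csharp
using Dapr.Client;

public record WorkStats(int PublicTasks, int DailyTasks);

public class DaprWorkStatsRepository
{
    private readonly DaprClient _dapr = new DaprClientBuilder().Build();
    private const string StoreName = "statestore"; // Dapr component name
    private const string Key = "workstats";        // state key

    // Read the stats through the Dapr state API
    public Task<WorkStats?> GetAsync() =>
        _dapr.GetStateAsync<WorkStats?>(StoreName, Key);

    // Write the stats back through the same API
    public Task SaveAsync(WorkStats stats) =>
        _dapr.SaveStateAsync(StoreName, Key, stats);
}
```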
So let's see now how we configure Dapr in our container apps.
Let us enable Dapr in our environment. So let's go to the container
apps environment. Here we have the Dapr components option,
so let's click add. When we add the component, we need
to first provide a name, for example statestore. Then we need to provide
a component type, meaning the component which
will receive and store the state. In our
case we'll be using Blob Storage, version
1. Then we need to provide some additional metadata.
The first one is the account name,
so which storage account we'll be using. In this case we have the
conf24 data storage account. Then we need to provide the container
name, so which blob container will be
accessed. We have everything stored inside
files, so let's go and enter that one.
And then we need to provide an authentication mechanism, the
azureClientId, with a specific value.
Let's put in just a placeholder value for now, because
we will configure this later on. What
we need to provide to the Dapr sidecar container is an
authentication mechanism so that it can authenticate
to Azure Storage, in this case the Blob Storage container.
And next, we need to provide the scope,
so which applications will be able to load this
component inside their apps. Let me add this; we will
change it later on, because we still need to configure the application.
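The same component can be described in the Container Apps Dapr component schema (note: this is ACA's simplified YAML, not the Kubernetes CRD) and applied with the CLI. Account, container, and scope names follow the demo; the client ID is a placeholder for now:

```bash
cat > statestore.yaml <<'EOF'
componentType: state.azure.blobstorage
version: v1
metadata:
  - name: accountName
    value: conf24datastorage
  - name: containerName
    value: files
  - name: azureClientId
    value: <placeholder-until-the-identity-exists>
scopes:
  - conf-reports
EOF

az containerapp env dapr-component set \
  --name conf-env \
  --resource-group conf24-rg \
  --dapr-component-name statestore \
  --yaml statestore.yaml
```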
So let's go back to the apps
and open the app. Now what we need to do is enable Dapr.
We enable Dapr in this case and provide some information:
the app ID, in this case conf-reports,
and which protocol we'll be using, in this case
HTTP, which is how our application will communicate with
the sidecar container. Then we save
these settings, and with that, our application is configured to use Dapr.
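The CLI equivalent of enabling the sidecar, as a sketch (app name and port are assumptions):

```bash
az containerapp dapr enable \
  --name conf-report \
  --resource-group conf24-rg \
  --dapr-app-id conf-reports \
  --dapr-app-protocol http \
  --dapr-app-port 80
```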
Next, we need to define which applications may load the components
configured inside our container environment. So let's go back to the
container environment, to
the Dapr components, and click the state store.
And what we should see now is the ability to add
apps. Let's add the conf-reports app ID,
which we configured earlier, and
save the Dapr component. This state store
will now be available to our application. So let's
go inside our application, and on the Dapr tab you will see
that this application can now use the state store
component which we configured in the environment itself.
Let us save all of the changes, and when
these changes are saved, let's configure our component to be
able to communicate with Azure Storage.
I compiled a new container which I need to configure to
use Dapr. So let's go to containers,
edit and deploy, click on the conf-report-storage container,
and change it to the image
tag dapr, which uses the APIs that we saw earlier.
And what we need to do now is configure some settings.
First we define the store name,
DaprOptions__StoreName, with the name that we
specified, in our case statestore.
Then we also need to provide a key, so we add
DaprOptions__Key as a
manual entry with the value workstats,
and let us save these changes,
name the revision dapr, and then
create the revision. Why
workstats? Because in the storage account, under
the files container, we have the workstats
file which the component will be reading
from and writing to. So let's go back to see whether this has
finished. When this refreshes,
you see that we have the solution up and running.
But if we click on the containers tab,
what we can see is that we now have daprd, and daprd
is configured to listen: this is
the sidecar container that is listening to requests that are
coming in. So let's create a request. Let me clear this
and connect: let me copy
this URL, open a new tab, and execute the
request. When we go to the log stream,
what we then find out is that Dapr is
getting an exception. And you
can see here that it has a problem with the identity, because we didn't configure
the identity: "failed to acquire a token". If we
look at the Dapr log entries here,
you can see the app ID,
the specific instance, the scope, and
so on. So Dapr communicated between the replicas,
but it didn't execute the request to
the file storage. So let's configure the Dapr component to
be able to authenticate to Azure Storage.
Let's create a user-assigned identity and
provide some information: the conf24 resource
group, the region, and of course the name that we'll
be using. Let's also define some tags
for billing purposes. Let's wait for this to finish,
then go to the conf24 storage account to add access
rights so the identity will be able to access the storage.
Let's use the Storage Blob Data Contributor role in this case, and select
the principal which will have the access. In this case,
our principal will be the user-assigned identity, the
report Dapr user identity, as we selected.
Let's review and assign. When we have done this,
let's go back to that user identity and
copy the client ID, because this is something that we will need in order
to set up our component.
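A sketch of that identity setup end to end in the CLI (names are placeholders in the spirit of the demo):

```bash
# 1. Create the user-assigned identity
az identity create --name report-dapr-id --resource-group conf24-rg

# 2. Grant it blob access on the storage account
PRINCIPAL_ID=$(az identity show --name report-dapr-id \
  --resource-group conf24-rg --query principalId -o tsv)
STORAGE_ID=$(az storage account show --name conf24datastorage \
  --resource-group conf24-rg --query id -o tsv)
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Storage Blob Data Contributor" \
  --scope "$STORAGE_ID"

# 3. Attach the identity to the container app
IDENTITY_ID=$(az identity show --name report-dapr-id \
  --resource-group conf24-rg --query id -o tsv)
az containerapp identity assign \
  --name conf-report \
  --resource-group conf24-rg \
  --user-assigned "$IDENTITY_ID"

# 4. The clientId is what goes into the component's azureClientId
az identity show --name report-dapr-id --resource-group conf24-rg \
  --query clientId -o tsv
```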
Let's go to that component: we had the
azureClientId metadata, which we will now populate, and
save the details.
Now we need to go back to the application and configure
the identity so that the app will be aware of it.
Let's go inside user-assigned identities, add the identity
that we configured, and save the changes.
When this is saved, let's go to revision management,
and since there were some changes,
let's restart the revision
to pick up all of the changes.
And now check whether the application is up and running. Yes,
now let's go and issue another request to see if
everything is working as expected. So let's refresh the URL,
and now what we should see is a result coming back.
Now the application is up and running. If we want, we can also
define continuous deployment. So if we have our application
on GitHub, we can easily sign in to GitHub,
choose a repository, and then define where the
images should be stored and when the build process will execute.
In our case the GitHub Action
will tag the images with the GitHub commit ID, but we
can modify that action based on our needs.
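Wiring this up from the CLI looks roughly like the following sketch; every value shown is a placeholder (repository, registry credentials, service principal, and a GitHub PAT with repo access):

```bash
az containerapp github-action add \
  --name conf-web \
  --resource-group conf24-rg \
  --repo-url https://github.com/<owner>/<repo> \
  --branch main \
  --registry-url conf24registry.azurecr.io \
  --registry-username <acr-username> \
  --registry-password <acr-password> \
  --service-principal-client-id <client-id> \
  --service-principal-client-secret <client-secret> \
  --service-principal-tenant-id <tenant-id> \
  --token <github-pat>
```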
To recap, what we saw is just a glimpse of what is possible with
Azure Container Apps and how we can focus on the application business
logic, making sure we don't lose time on the infrastructure itself.
For more details, check out these great resources, and
thank you for listening.