Transcript
This transcript was autogenerated. To make changes, submit a PR.
My name is Vince Lesa. I'm a solutions architect working for AWS, and I've been
helping customers be successful on AWS for a year and a half.
In this talk I would like to show you how you can build applications
using Go with AWS CDK and how to deploy them to
your AWS account. So let's get started.
Okay, let's start at the beginning. It starts with manual deployments, where we
used wikis and playbooks which were sometimes outdated, or you actually asked
someone to click your infrastructure together through the AWS console. If
something needed to be changed, it was hard to see who to approach for that and
what had actually been done in your AWS account.
The next step was actually a bit better: scripting things in Bash. It worked
very well until the complexity grew too big, because Bash was not really
designed for building complex deployment frameworks, and the scripts were
usually hard to maintain. So it worked quite well until it grew bigger and
bigger, and usually the advice was: hey, it worked last time, so don't touch
it. An important next step that has been taken is the use of CloudFormation
or Terraform. These infrastructure provisioning engines hide the complexity
of things like state management, and also include features
like rollbacks, error handling, and drift detection.
They allowed developers to describe their infrastructure in a declarative way
and hand it over to the engine, which starts deploying the resources into the
account. This ease of use increased the adoption of infrastructure as code
among companies, enterprises, and developer communities. That in turn led to
growth in the size of the templates, which eventually made them harder to
maintain, because it's hard to maintain ever-growing declarative files like
JSON and YAML. And you can imagine that this also ran into some problems.
So the next idea was to use generators, which allow you to write code,
transform it, and generate JSON and YAML documents from the classes and
methods you already know in programming languages. And on top of that came new
tools such as AWS CDK and Pulumi, which provide you with a set of tools and a
framework to create custom abstractions and cloud infrastructure. So let's
take a look at AWS CDK. The AWS Cloud Development Kit is an open source
development framework to model and provision your cloud infrastructure
resources using the programming languages you already know and love. You can
do this with different programming languages like Python, TypeScript, Java,
C#, JavaScript, and F#. And recently you can also use Go, which is currently
in preview.
With CDK you will be much faster than with the previous tools, because you can
work with familiar languages and concepts like classes and methods without
having to switch context. You also have all the tooling and support of a
programming language, such as autocomplete, inline documentation, testing,
linting, and the use of a debugger, by writing your code within the IDE you
already know and love. The most important part is that you're able to build
abstractions and components of infrastructure and applications, and we provide
many sane default values, so there's no need for you to read a lot of
documentation to get started quickly and in a safe way. And of course, many of
these default values can be adjusted to your needs.

So let's take a look at the components that AWS CDK consists of. First we have
the core framework, then the AWS Construct Library, and then the command line
interface. With the core framework you can create and structure apps that
contain one or multiple stacks. A stack is a logical unit of infrastructure
which contains multiple resources and is mapped one-to-one to a CloudFormation
stack. It's good practice to divide resources into stacks that have different
lifecycles: you might create one stack for network infrastructure such as a
VPC, another stack which has a container cluster using EKS or the Elastic
Container Service, and another stack which is the application that runs on top
of the cluster. The AWS Construct Library is a set of components crafted by
AWS to create resources for specific services. This helps you to decouple
libraries and use only the dependencies that you need in your project. It is
also built with best practices and security considerations in mind to provide
a good development experience, ease of use, and fast installation cycles. The
CDK CLI helps you interact with the core framework, initialize project
structures, inspect differences between deployments, and deploy easily to your
AWS accounts. So let's take a look at an example.
Here we create a new VPC, an ECS cluster, and an application behind an
Application Load Balancer, just by providing the name of the Docker image from
Docker Hub and the maximum number of availability zones for the VPC. And
that's it. From nothing more than this, CDK generates more than 800 lines of
CloudFormation template with the VPC, subnets, internet gateway, routing
tables, Application Load Balancer, the Fargate cluster, service, and task
definitions. So in total around 37 resources from just this amount of code.
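As a rough sketch of what that example looks like in Go (assuming the
aws-cdk-go modules awsec2, awsecs, and awsecspatterns plus the jsii runtime;
the identifiers and image name here are illustrative, not the exact ones from
the slide):

    // Inside the stack function: a VPC, a cluster, and a load-balanced
    // Fargate service from one high-level construct.
    vpc := awsec2.NewVpc(stack, jsii.String("Vpc"), &awsec2.VpcProps{
        MaxAzs: jsii.Number(2), // maximum number of availability zones
    })
    cluster := awsecs.NewCluster(stack, jsii.String("Cluster"), &awsecs.ClusterProps{
        Vpc: vpc,
    })
    // This single construct expands into the load balancer, listeners,
    // security groups, Fargate service, and task definition.
    awsecspatterns.NewApplicationLoadBalancedFargateService(stack, jsii.String("Service"),
        &awsecspatterns.ApplicationLoadBalancedFargateServiceProps{
            Cluster: cluster,
            TaskImageOptions: &awsecspatterns.ApplicationLoadBalancedTaskImageOptions{
                // Any public Docker Hub image; this one is a placeholder.
                Image: awsecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample"), nil),
            },
        })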
Now let's look at the developer flow of CDK and how we actually ship this code
to our AWS accounts. We start a project by executing cdk init, which generates
a project structure for a specific programming language, and we can start
creating our apps, stacks, constructs, and the resources that are needed. Once
we have created the project, we can build it by using go build, or if you're
using TypeScript you could use npm run build; that is the next step: building
your application with the code that has been provisioned. Then we need to
synthesize our code to CloudFormation templates. You can do this by running
cdk synth, which generates the CloudFormation templates and the assets needed,
bundled into something called a cloud assembly. Before we deploy, we can
inspect what will change if we deploy the cloud assembly, and which resources
will be deleted, updated, or created; we do this by using cdk diff. Finally,
with cdk deploy we push the changes to CloudFormation, and from there the
service creates all the necessary cloud resources in your AWS account. So
let's take a look at the CDK constructs.
CDK includes the AWS Construct Library, which contains constructs representing
AWS resources, and this library includes constructs that represent all
resources available on AWS. For example, the S3 Bucket class represents an
Amazon S3 bucket, and the DynamoDB Table class represents a DynamoDB table.
This library is the starting point for you to create resources. One of the big
advantages is composition of services to represent complex infrastructure,
like a networking setup, or creating clusters or databases. So let's take a
look at the various levels of constructs and how you can actually use them.
Constructs are organized in three different levels. Level one constructs are
automatically generated from the CloudFormation resource specification, so
this is a one-to-one mapping between classes and AWS resources. Level two
constructs are higher-level service constructs that represent resources such
as S3 buckets or a VPC, including other resources, so they can be a
composition of multiple resources that are still tied to one service. These
constructs are simpler and require fairly little input to generate complex
CloudFormation templates, and they have many defaults and opinions already
baked into them. Level three constructs are opinionated abstractions created,
for instance, by AWS, such as an ECS service with an Application Load Balancer
and a VPC. But you can also find level three constructs in third-party
libraries provided by the community, or in your own set of libraries that you
want to distribute within your organization. This is the level you use to
create your own abstractions that can be reused. So let's take a closer look
at a level one construct.
With a level one construct you have direct access to the generated
CloudFormation elements. This provides control, but it also requires knowledge
about the properties in the resource specification. All fields are optional
here, and CDK tries to generate properties where it can, such as names or
well-defined defaults. As you can see over here, it says new CfnBucket; level
one constructs are usually prefixed with Cfn, which stands for CloudFormation.
In this case we give it the name MyBucket, and we can also specify the bucket
name over here.
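A minimal sketch of that level one construct in Go, assuming the awss3 module
from aws-cdk-go (the bucket name is a placeholder):

    // Level one: a direct mapping to the AWS::S3::Bucket CloudFormation resource.
    awss3.NewCfnBucket(stack, jsii.String("MyBucket"), &awss3.CfnBucketProps{
        BucketName: jsii.String("my-example-bucket"), // explicit CloudFormation property
    })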
A level two construct can generate more complex structures: it can contain
multiple resources, but they are still in the scope of a specific service. So
with a single line of code we can create a VPC that spans two availability
zones, includes four subnets, and has more than 65,000 IPs split equally. We
get an internet gateway, route tables, and everything else, and it's actually
all you need for a fully configured VPC according to AWS security best
practices. Anyone who has built a VPC from scratch in CloudFormation can
relate to how challenging it is to set this up, how long it takes, and how
many lines of code you need to do this in a declarative way. With just this
one line of code, new Vpc, everything is set up for you with the default
values, ready for you to deploy resources into this VPC.
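That one-liner in Go would look roughly like this (a sketch assuming the
awsec2 module; the defaults mentioned above come from the construct itself):

    // Level two: one call that expands into subnets, route tables, and an
    // internet gateway, with a /16 address space split across the AZs.
    awsec2.NewVpc(stack, jsii.String("Vpc"), &awsec2.VpcProps{
        MaxAzs: jsii.Number(2), // two availability zones, as on the slide
    })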
Then we also have level three constructs, which are compositions of multiple
resources across different services. This is the example that you have seen
before, where we create an Application Load Balancer with the corresponding
security groups, task definitions, and listeners, but it also creates the
Fargate service together with the IAM roles and policies needed for the
service and for the logging. It creates the log groups and the task
definitions just by providing two parameters: the ECS cluster and the name of
the Docker image from the public Docker repository. This shows how powerful
level three constructs can be, and how you can create your own abstractions
that you can share publicly or privately within your organization.
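To illustrate that last point, here is a sketch of what a small abstraction of
your own could look like in Go; the function name, resource choices, and
defaults are purely illustrative and not from the talk:

    // A reusable helper a team could publish: an S3 bucket with opinionated,
    // security-minded defaults baked in (illustrative example).
    // Assumes "github.com/aws/constructs-go/constructs/v10" and the awss3 module.
    func NewSecureBucket(scope constructs.Construct, id string) awss3.Bucket {
        return awss3.NewBucket(scope, jsii.String(id), &awss3.BucketProps{
            Encryption:        awss3.BucketEncryption_S3_MANAGED,
            Versioned:         jsii.Bool(true),
            BlockPublicAccess: awss3.BlockPublicAccess_BLOCK_ALL(),
        })
    }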
So now let's take a look at how we can deploy our own Go application using CDK
with Go. What we're going to do is deploy a URL shortener, which has been
written in Go and will be deployed as a Lambda function. We're going to expose
this Lambda function to the outside world using API Gateway, which the user
can interact with. The URL shortener also needs storage to translate between
the shortened URL and the original, and for that we're going to use a DynamoDB
table. So let's take a look.

Okay, let's now deploy our Go application using CDK on AWS. First things
first, we need to install the CDK CLI, and we can do that by using npm: we run
npm install aws-cdk and this will install the CLI. From there you can use the
cdk command to init your application. So what we're going to do is run cdk
init app --language go. You can also provide other languages, but as this is a
Go talk, we're going to use Go. So let's execute it.
The CDK CLI will create a bunch of files for us, and now all the files are
there. We can now run cdk synth, which will synthesize the code that has been
created and generate CloudFormation, and we can look at the output of this
synth command. If you're familiar with AWS CloudFormation, this will look
quite familiar. The demo resource that it creates is an SNS topic, but we're
going to change that for our application, of course. So let's look at the
files that have been created.
As you can see, it creates a go.mod file and a cdk.json, and it initializes a
git repository for you with a README. I think the file that we are most
interested in is the CDK Go file, and over here, this is actually the
infrastructure code that we're going to use. So let's look at that file and
what's in it. As you can see, there's a bunch of imports. There is a struct
created for the stack properties. There's a function to create a new stack,
and as you can see over here, there's already the topic that has been created
by the init command. There's a main entry point, and basically what's in that
main entry point is the creation of a new app, the creation of the stack which
will be assigned to the app, and then the app synth, which actually generates
the cloud assembly. There are also some ways for you to use different
environments by specifying the environment over here, but this is not
something that we're going to do now. So let's go back to the code that we're
going to put in to deploy our application, and let's remove this.
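Trimmed down, the generated skeleton we just walked through looks roughly like
this (a sketch; the exact identifiers depend on your project name and CDK
version):

    // Sketch of the structure cdk init generates, with the demo topic removed.
    func NewDemoStack(scope constructs.Construct, id string, props *awscdk.StackProps) awscdk.Stack {
        stack := awscdk.NewStack(scope, &id, props)
        // our resources will go here
        return stack
    }

    func main() {
        defer jsii.Close()
        app := awscdk.NewApp(nil)
        NewDemoStack(app, "DemoStack", &awscdk.StackProps{
            // Env can optionally pin the account and region to deploy into.
        })
        app.Synth(nil)
    }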
If we look at our architecture, the first thing that we want to create is a
DynamoDB table. To save you some time watching me type, I've copied in some of
the code that's needed to create a DynamoDB table. We create a variable called
table and we say awsdynamodb.NewTable, and we pass in stack, which is the
stack that we're going to use. Then we provide a name; in this example the
name is Table. As you can see, the string is wrapped in jsii.String. The
reason for this is that we also want to reference strings from other
resources, and you will see an example of that later on; for this we have to
provide string pointers, and this is how we do that. The last argument that we
provide is the table properties, and one of the things that we need to have in
the table properties is the partition key. The partition key consists of a
name, slug, and it has the attribute type string. You can also provide other
attribute types like binary and number.
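Put together, the table definition looks roughly like this (a sketch assuming
the awsdynamodb module; the identifiers in the actual demo may differ
slightly):

    // A DynamoDB table keyed on the shortened-URL slug; used further below.
    table := awsdynamodb.NewTable(stack, jsii.String("Table"), &awsdynamodb.TableProps{
        PartitionKey: &awsdynamodb.Attribute{
            Name: jsii.String("slug"),              // the shortened-URL key
            Type: awsdynamodb.AttributeType_STRING, // BINARY and NUMBER also exist
        },
    })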
As you can see, because I'm using Go in this editor, I have all the
IntelliSense I need to discover how to create these AWS resources.
Now that we have created our table, we want to create our Lambda function. For
our Lambda function we are using a construct, the Go function, which is in the
aws-lambda-go package. So as you can see over here, we say new GoFunction. We
also provide the stack and the name, which will be function, and the
GoFunction properties. What we provide over here is the entry, which is
actually our Go application; it's called go URL. So what we're going to do is
copy this application into our project. Let's copy it over.

If we look at the application, it's just a plain Go application of the kind
you already know how to create. There's a main package, we created some
commands, and the API has two calls that you can make: one is to create a URL,
which returns a slug to us, and the other one is accessing the URL, which
returns a permanent redirect to where the slug points. For the storage we're
using DynamoDB. So what we're going to do is deploy this application over here
as a Lambda function, and we point to the application by specifying it in the
entry. Of course, our application needs to know which DynamoDB table it should
actually write the URLs to. For this we're using environment variables, and we
can do it like this: we provide a map of type string to string pointer, the
key will be the DynamoDB table variable, and we reference the table which we
created before and take its table name. This means that the table name will be
generated by the construct, and the name of the table that is generated will
be provided to the environment variable. This way you keep things in code, and
if things change, they automatically change with it in the Lambda function
later on.
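In code, this part looks roughly like the following. It's a sketch: the
construct lives in the aws-lambda-go (alpha) construct module, and the entry
path and environment variable name are assumptions based on the description
above.

    // A Lambda function built from the local Go application; the table name is
    // injected through an environment variable (names here are illustrative).
    handler := awslambdago.NewGoFunction(stack, jsii.String("Function"), &awslambdago.GoFunctionProps{
        Entry: jsii.String("./go-url"), // directory containing the Go application
        Environment: &map[string]*string{
            // The generated table name is resolved at deploy time, so renaming
            // the table automatically updates the function's configuration.
            "DYNAMODB_TABLE": table.TableName(),
        },
    })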
Okay, now our Lambda function needs to be able to access the table to read and
write table entries. On AWS we use something called Identity and Access
Management, where we give the Lambda function a role which has the permissions
to do that. Best practice is to use least privilege: normally you have to
create this role, assign the privileges to that role, and assign that role to
the function. With CDK there are some convenient methods to help you with
this. What you can do is say table, then grantReadWriteData, and then pass in
the resource that we want to grant these permissions to, which is the handler.
This will take care of all those things, like creating the IAM role and
assigning the policies with the permissions to read and write data to the
table.
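That convenience method is a single call (continuing the same sketch; table
and handler are the variables created earlier):

    // Creates the IAM role policy that lets the function read and write the table.
    table.GrantReadWriteData(handler)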
Now we want the API Gateway to be created that proxies the requests coming
through the API Gateway to our Go application. So what we're going to create
is something called an integration, and that integration will be a new Lambda
proxy integration. We provide some properties. One is the handler, which
points to the function handler that we just created previously. The other one
is that we have to provide version one of the payload format used by the API
Gateway: API Gateway V2 has a new payload format, but the Lambda Go library is
still on version one, so that's why we say that the payload format should be
version one. Then, once we have created this integration, we provide it to an
HTTP API, which is in the AWS API Gateway V2 package. And again here we
provide the stack, we provide a name called Api, and in the properties we say
that the default integration will be the Lambda integration that we created
before.
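A sketch of that wiring, assuming the apigatewayv2 and
apigatewayv2-integrations modules as they existed when Go support was in
preview (these constructs have since been renamed, so treat the names as
approximate):

    // Route every request on the HTTP API to the Go Lambda function.
    integration := awsapigatewayv2integrations.NewLambdaProxyIntegration(
        &awsapigatewayv2integrations.LambdaProxyIntegrationProps{
            Handler: handler,
            // The Lambda Go library still expects the version 1 payload format.
            PayloadFormatVersion: awsapigatewayv2.PayloadFormatVersion_VERSION_1_0(),
        })
    api := awsapigatewayv2.NewHttpApi(stack, jsii.String("Api"), &awsapigatewayv2.HttpApiProps{
        DefaultIntegration: integration,
    })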
We now have all the resources created. As you know, if you create an API
Gateway, the URL that you're going to invoke is generated by the API Gateway,
and we want to make use of that URL. So what we're going to do is output that
URL back when we create the stack. How you can do that with AWS CDK is we say
new CfnOutput, which stands for CloudFormation output. Again we provide the
stack, we provide a name, which will be the API URL, and we say that the
output props value will be the API URL.
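That output is just a couple of lines (continuing the same sketch; api is the
HTTP API created above):

    // Surface the generated API endpoint as a CloudFormation output.
    awscdk.NewCfnOutput(stack, jsii.String("apiUrl"), &awscdk.CfnOutputProps{
        Value: api.Url(),
    })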
So this is basically all the code that we have to write. Now let's deploy this
to our AWS account and let's see if we can get it running.
Okay, in order to deploy using CDK, what we type here is cdk deploy. What it
will do is execute our code, synthesize it to CloudFormation, and start
deploying our code to our AWS account. It starts building the assets. And over
here it's going to ask for confirmation, because our code contains some
changes to the security scheme: it's going to create IAM roles and change
policies. So it will ask you if you want to deploy these changes, just as an
extra level of security. In this case it's okay, so we say yes. You can now
see the CDK stack start deploying; it begins by creating the CloudFormation
change set and pushing it to the CloudFormation API. Now let's wait for the
CloudFormation stack to be deployed.
Here's the table, and it creates the IAM roles.
Okay, now the stack has been deployed and it returned the API URL to me. So
let's start using this URL and see if our application works. Using this curl
command, we say curl, we provide an application/json content type, and we put
in the URL that has just been shown to us. As you can see, it returns the slug
that has been generated, along with the original URL. Now we can use this
slug: we go to the URL, paste in the slug, and as we do so, you see it now
returns a moved-permanently redirect towards that URL, meaning that the API
and the Go application we have deployed are working.
So now you have seen how easy it can be to deploy your Go application using
CDK with Go, from the IDE you already know, and how few lines you needed to
deploy this type of application. AWS also provides something called AWS
Solutions Constructs, which are architecture patterns available as open source
extensions for CDK. They help you assemble production-ready workloads
according to the best practices that we provide for such solutions. This is a
growing library of patterns, and you can filter it to find the right solution
that matches your use case. So I recommend you take a look and see if they can
work for your solutions as well.
CDK is an open ecosystem, so we have a public roadmap to show which features
are prioritized by customer feedback. We are happy to have many contributions
from the community and would love to have your voice in the future development
of the tool. There are also many other resources to be found around CDK, like
the awesome-cdk repository, which is a collection of third-party level three
construct libraries. There's also something called CDK Patterns, and feel free
to inspect how others are building service compositions. So there are plenty
of resources out there if you want to get started with CDK, such as the CDK
Workshop, which guides you step by step through how you can use CDK to deploy
your applications. There are also the CDK examples, which show you some
example implementations using CDK, and many more. Feel free to drop into our
community chat or look us up at the GitHub repository, where you can create
issues but also pull requests, and interact with the team if you need certain
features added to the framework.
There are also community resources out there like CDK Patterns. Something else
that's nice to mention is cdk8s, which is CDK used to generate Kubernetes
manifest files, so you can also share them within your organization and get
the same benefits that you see for AWS resources, but tailored to Kubernetes.
With this, I would like to thank you for listening to my talk. Hopefully CDK
is a tool that's worth looking into for you. Enjoy the rest of your
conference. Thank you.