Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hey everyone, I'm glad to see you here. Thank you for joining the talk. In the next 35-40 minutes we will be talking about infrastructure as code for JavaScript applications on AWS, with TypeScript. And my promise to you is that by the end of this talk you're going to have a complete, production-ready infrastructure for your front-end application. So if you're ready, let's crack on.
Before we continue, let me introduce myself. My name is Denis, Denis Artyuhovich. I'm from Belarus originally, now based in London. I'm a Team Lead at DAZN, and at DAZN we're changing every aspect of how fans are engaging with sports, starting from the content distribution and a truly immersive experience for live matches and VOD content. We are available on various types of devices, including smart TVs, mobiles and tablets, laptops, game consoles such as PS5 or Xbox, whatever. And I think you can imagine how many types of various infrastructure we have. So why do we need infrastructure as code? I think the answer is obvious, but nowadays it's increasingly crucial to automate literally everything that can be automated, right?
And yeah, let's consider a normal feature development flow. We're starting with one idea, implementing one feature, then another shiny idea comes to the stage and we have two features, and then boom, hundreds of them. And that's actually exactly our case: we have hundreds, probably thousands, of different features, and we can expect changes in our infrastructure even multiple times per day.
So I think you can imagine how stressful it could be if you're going to do it manually. What if someone forgets to document a specific checkbox? Or what if the UI of our cloud provider, let's say AWS, has changed, and we click on the wrong tick box and our site is not available anymore? Oh, that can be stressful. To avoid such situations, we need to take advantage of infrastructure as code.

And before we continue, let's consider these two programming paradigms, because they are quite relevant when we're talking about coding our infrastructure. So the imperative one stands for explicit instructions, which is actually the biggest advantage of the approach: you're basically specifying all your instructions, usually in some bash scripts, and you kind of have full control, full flexibility.
But at the same time it's the biggest concern, because you need to maintain everything yourself, and likely it won't scale well. With the declarative approach, instead of specifying everything step by step, you're just describing the outcome, the shape of the state which you expect to be applied. And yeah, it's usually way simpler and scales better, if someone has implemented those providers which allow you to work in a declarative way. So just to recap: imperative stands for explicit instructions, where if something goes wrong, you're likely going to blame the system, because you wrote so much code, you're a truly amazing developer, but it doesn't work as expected. With the declarative approach, you rely on providers which are doing the job for you, and yeah, you don't really care what happens under the hood.

So the good news is that Terraform, which is an open-source infrastructure as code software tool developed by HashiCorp, provides us an option to write our infrastructure as code in a declarative way. And what's even more important, today it supports TypeScript and more than 1,000 providers. Or, better to say, CDKTF, the Cloud Development Kit for Terraform, which builds on the CDK developed by AWS, supports TypeScript. And in the next 30 minutes or so we're going to build a production-ready infrastructure for our front-end application. I will be using React as a boilerplate for the application itself, but you can use whatever you prefer. If you want to do it with Vue or Angular, whatever, any framework you choose, it's going to work for any JavaScript application.
So yeah, I think we can start. And to start with, we're going to generate the project, and it's actually the only time when we're going to touch React itself. So I'm going to use create-react-app to generate the basic project. I call it iac-talk-demo-project. I'm passing the template, which is typescript, and after the command has executed and finished, we should be able to see this website which is familiar to many of us. If you've done it, we are ready to start creating our infrastructure: we have a prepared project.
So as a prerequisite we need to install the CDKTF CLI, and you can install it globally on your laptop or you can have it per project, it's up to you.
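As a rough sketch, assuming npm, the setup could look like this (the --local flag keeps the Terraform state on your machine for now):

```sh
# Install the CDKTF CLI globally (or add it as a dev dependency per project)
npm install -g cdktf-cli

# Create the folder for the infrastructure next to the React app
mkdir terraform && cd terraform

# Scaffold a TypeScript CDKTF project that stores its state locally
cdktf init --template=typescript --local
```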
So once we have it, we can run cdktf to generate the Terraform boilerplate, but before that we need to create the terraform folder where we're going to do all this magic. So we are running cdktf init and passing the template typescript, similarly to what we have done with create-react-app. And yeah, it asks us for the project name, we keep terraform as the default name of our folder, and it starts execution. After this you're going to see some failures. No worries, it's just because the Terraform CLI is probably not expecting that there will be another TypeScript project in the same place, but it's really easy to fix. Those errors are related to our React project, so we just need to fix our tsconfig and add skipLibCheck with the flag true, and after that everything will be okay. So I'm passing skipLibCheck: true, and after that you're going to see that now we have the terraform folder and we have a main.ts file.
This main.ts file is what we will be modifying and writing our infrastructure in. As you can see, it's just a class which extends TerraformStack, and yeah, as I said, everything we're going to do, we're going to do here in this file. But later you can create, let's say, a source folder and any subfolders, and structure your infrastructure in whatever way you prefer, because you have TypeScript and literally everything is possible.
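For reference, the scaffolded main.ts is roughly this shape (the project name matches the talk's demo; exact imports depend on your CDKTF version):

```typescript
import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";

class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    // all the infrastructure resources will be defined here
  }
}

const app = new App();
new MyStack(app, "iac-talk-demo-project");
app.synth();
```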
So now let's add the provider. As I said, we're going to use AWS as the cloud provider, so we need to modify our cdktf.json file and add the provider there. I'm going to use the AWS one.
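A minimal cdktf.json for this could look like the sketch below; the version constraint on the AWS provider is illustrative, not something fixed by the talk:

```json
{
  "language": "typescript",
  "app": "npx ts-node main.ts",
  "terraformProviders": ["aws@~> 3.0"]
}
```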
There are no other providers required at the moment, and we are ready to start. So now let's think about how our infrastructure is going to look, what we actually need from it. We definitely need to store our static files somewhere, right? So for this purpose we're going to use S3, which stands for Simple Storage Service, provided by AWS. It has great availability, I think about 99.99%, scalability and all this stuff. But what's important for us is that it allows us to store files and get access to them. It even supports website hosting on its own domains. So let's try it and let's create the code for it. We're going to import the AWS provider and the S3 bucket from the generated providers, which are available for us because on the previous slide we added them to the list of providers. I'm going to use my domain name as the bucket name, because we are building a real infrastructure. So I think it makes sense.
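Inside the stack constructor, the provider and bucket might be declared like this. This is a sketch: demo.example.com is a placeholder domain, and whether nested blocks are objects or arrays depends on the generated provider version:

```typescript
import { AwsProvider, S3Bucket } from "./.gen/providers/aws";

// inside the MyStack constructor:
new AwsProvider(this, "aws", { region: "us-east-1" });

const bucket = new S3Bucket(this, "bucket", {
  bucket: "demo.example.com",              // bucket named after the domain
  acl: "public-read",                      // temporary; locked down later via OAI
  website: { indexDocument: "index.html" },
});
```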
You can use as many providers as you like in your configuration. You can use any region you prefer for your S3 bucket, but just keep in mind that later on we will be using the ACM provider to issue an SSL certificate, and AWS requires it to be in us-east-1. So if you're going to use a different region for your S3 bucket provider, don't forget to create another one for ACM. But we will cover it a bit later.
Okay, so with the S3 bucket everything is quite straightforward. We're just passing our bucket name, and we're specifying the access control list as public-read, because who cares about security, right? Sorry, I'm making fun, of course we're going to fix this a bit later. We're specifying that our S3 bucket will be a website, and that we're going to have index.html as the index document. After this, what we need to do is run yarn build, synth our code, go to the cdktf.out folder and run the following commands. So we start with init; this is a one-time command because we're just starting, we don't need to run it later on. Then we can continue with just plan and apply. Plan is not the same as validation, but it shows you what is planned to be applied to your infrastructure, what state you are planning for the next apply. And apply is basically the command to apply your changes.
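Assuming the scripts that cdktf init generates, the whole cycle is roughly:

```sh
# inside the terraform/ folder
yarn build       # compile the TypeScript
yarn synth       # emit Terraform JSON into cdktf.out/

cd cdktf.out
terraform init   # one-time: set up providers and the backend
terraform plan   # preview what would change
terraform apply  # create/update the resources
```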
Okay, so if you run them, you're going to see output like this in your terminal: the website endpoint will be this domain. And actually now what we can do is try to build our project. This is a React project, so we go out from the terraform folder back to our application folder, and we're going to copy everything to our S3 bucket. And regarding the deployment step, I would like to highlight a few bits, actually. Deployment can be done together with the infrastructure, Terraform can handle it, but in this particular case I think it's a bit pointless, because you can expect deployments, I don't know, in a more frequent manner. And usually you're going to have either some separate CI integration for your deployments, or probably even some deployment dashboard, it depends on your preferences. So usually those things are separated, and we're going to keep them separated here too. So we will be using just command-line instructions, and we're going to use the AWS CLI. So you need to have the AWS CLI preinstalled to execute this command. But yeah, basically we're just saying aws s3 sync, passing the build folder which we would like to put into the S3 bucket, and specifying our S3 bucket name which we have just created. We're also passing the access control list as public-read, same as we've done previously. As I said, we're going to fix it a bit later.
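The deployment command itself is a one-liner; the bucket name here is the same placeholder domain as before:

```sh
# from the application folder, after `yarn build` has produced build/
aws s3 sync ./build s3://demo.example.com --acl public-read
```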
So if we run it, now we can go to bucket-name.s3.amazonaws.com/index.html and see that our website is available. And let me congratulate you, because that's a great achievement we just implemented: probably as simple as possible, but our own infrastructure as code for the front-end application.

But to make it more production-ready, we need to connect it with a real domain name, right? So for this we're going to use Route 53, which is the Domain Name System service provided by AWS, DNS in short. And yeah, let's have a look at our user flow in this case. So basically, users are going to type www.(whatever our domain is), they will then be transferred to the AWS infrastructure, which is going to route our call to Route 53, and Route 53 will be aliasing our users to the S3 bucket. Okay, so now that we know what we're going to do, we can build it. For this we're going to import Route53Zone and Route53Record, and we're going to create the hosted zone first, for which we just need to pass the provider and the domain host. It is the domain host because I'm actually going to create our domain as a subdomain of this main domain, if that makes sense. And we're going to use an A record. So again, we're creating a Route 53 record, passing our domain name and specifying the behavior, that it's going to alias our users to the bucket.
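A sketch of the zone and the A record; example.com and demo.example.com are placeholders, and `bucket` refers to the S3 bucket resource created earlier. Attribute shapes vary between generated provider versions:

```typescript
import { Route53Zone, Route53Record } from "./.gen/providers/aws";

const zone = new Route53Zone(this, "zone", {
  name: "example.com",                 // the main (apex) domain
});

new Route53Record(this, "record", {
  zoneId: zone.zoneId,
  name: "demo.example.com",            // our subdomain
  type: "A",
  alias: [{
    name: bucket.websiteDomain,        // the S3 website endpoint
    zoneId: bucket.hostedZoneId,
    evaluateTargetHealth: false,
  }],
});
```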
We run yarn build and yarn synth again from the terraform folder, go to the cdktf.out folder, run plan and apply, and yeah, our state will be applied. But during this execution, this time you will see that the apply step is going to fail at the very last part. And the reason for this... I did it intentionally. Sorry, let me probably show it a bit more. Yeah, so it fails, and it fails on the validation step. And the reason for this is that I intentionally bought the domain not on AWS itself, I bought it from a different domain provider, which is, I think, quite a common case. And you need to remember that you'll need to connect them together somehow, right? So let's have a look at what we have created on AWS, and I think it's going to become clear. We go to Route 53 and we see that the hosted zone is actually already created, and in this hosted zone we're going to see that there are even records created. There will be two default records and one A record. The NS and SOA records are created by default, and the A record is what we have created. So now, from the NS record we need to copy the NS values and add them to our domain provider. If you use a quite popular domain provider and there is a provider for it in Terraform, you can even handle this in Terraform. But it is a one-time instruction, so we can even do it manually, which is what I'm going to do, because my domain provider is a Russian one and it's not very popular: they don't have any API for this, I don't think they even support updates with, I don't know, a REST API or something. I really think they don't, but yeah. So I'm going to do it manually. I'm just adding my NS servers there, I'm pressing save, and I need to wait 5-10 minutes until the NS servers are updated. And as soon as they are updated, we're ready to run terraform apply again. There is no need to run any other commands, as we haven't made any changes, so there's no reason to build it again or anything.
So we're running apply again. It's asking whether we want to apply these changes, but this time you're going to see that it completes successfully. So it means now we can open our domain and see that our website is available here.

But still, right, to make it production-ready we need a few more things. I think we need at least an SSL certificate to make it secure. And I think we need to make it more performant, right? Because currently we have created our website and put it into the S3 bucket, which is available only in the United States, because we used the us-east-1 region. And you can imagine that the round trip won't be as fast as we may expect it to be compared to edge locations, right? So we need to take advantage of CDNs and distribute our code across different edge locations, so our users can have the best performance possible when fetching our website.
So let's try to implement it. For this we're going to use CloudFront and AWS Certificate Manager. CloudFront is a CDN provider (CDN stands for content delivery network), and Certificate Manager will be responsible for issuing the SSL certificate which we will assign to CloudFront. Cool. Let's have a look at our infrastructure diagram one more time. Again we have Route 53, and now we have CloudFront too. So Route 53, instead of aliasing directly to S3, will be redirecting to the closest edge location available with CloudFront, and CloudFront itself will be responsible for retrieving objects for caching, during cache invalidation or for the very first requests. It's also going to have a certificate issued by ACM.

And to create it, let's start with the certificate. We're going to create the certificate with the ACM certificate provider. We're specifying a wildcard here because I would like to issue this certificate for all subdomains. I'm creating a validation record, and after this I'm creating the validation itself, passing the record and passing the certificate. Once it's ready, we can create the CloudFront distribution and assign it. So with the CloudFront distribution, we're just specifying the origin, passing our bucket and domain name. After this we're going to specify the default root object, and we're saying that there are no restrictions yet. You can later assign any restrictions you may have, for example if you'd like to block certain countries because, I don't know, maybe you don't have the rights to launch there or something, whatever your call. And we're going to specify the default cache behavior, which is going to be for the GET, HEAD and OPTIONS methods only, because we don't really need it for POST or anything, as this infrastructure is for the front end only. We're going to redirect everyone to HTTPS: as we have now issued a certificate, why not use it? We're specifying the default TTL as one day in seconds, and we're not specifying the ordered cache behavior yet. The ordered cache behavior is actually responsible for overrides. So for example, you can have the default cache behavior for everything, let's say cached for one day, but if you want a specific folder to be cached only for one minute, or let's say forever, you can list it in the ordered cache behavior. You can also say that index.html shouldn't be cached at all. And yeah, that's also possible.
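Put together, the certificate, its validation and the distribution could look roughly like this. It's a sketch under the same placeholder names; in particular, how the computed domainValidationOptions list is accessed differs between CDKTF versions, so that part is shown schematically:

```typescript
import {
  AcmCertificate,
  AcmCertificateValidation,
  CloudfrontDistribution,
} from "./.gen/providers/aws";

// wildcard certificate for every subdomain; must be issued in us-east-1
const cert = new AcmCertificate(this, "cert", {
  domainName: "*.example.com",
  validationMethod: "DNS",
});

// a DNS validation record would be built here from
// cert.domainValidationOptions, then the validation itself:
new AcmCertificateValidation(this, "cert-validation", {
  certificateArn: cert.arn,
});

new CloudfrontDistribution(this, "cdn", {
  enabled: true,
  aliases: ["demo.example.com"],
  defaultRootObject: "index.html",
  origin: [{
    originId: "s3",
    domainName: bucket.bucketRegionalDomainName,
  }],
  defaultCacheBehavior: {
    targetOriginId: "s3",
    allowedMethods: ["GET", "HEAD", "OPTIONS"],   // front end only, no POST
    cachedMethods: ["GET", "HEAD"],
    viewerProtocolPolicy: "redirect-to-https",    // force HTTPS
    defaultTtl: 86400,                            // one day, in seconds
    forwardedValues: { queryString: false, cookies: { forward: "none" } },
  },
  restrictions: { geoRestriction: { restrictionType: "none" } },
  viewerCertificate: {
    acmCertificateArn: cert.arn,
    sslSupportMethod: "sni-only",
  },
});
```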
Then we are assigning the viewer certificate, our ACM certificate, and we're going to change Route 53: instead of aliasing to the S3 bucket like previously, now it's going to alias users to the CloudFront distribution. So we specify the CloudFront distribution domain name and hosted zone ID. As soon as we're ready, we can again run build, synth and terraform plan and apply, and magic happens. Everything should pass successfully, and you should be able to open the website, and this time you're going to see that it is secure. And moreover, it wasn't fetched from the United States... okay, for the very first call it probably still was fetched from the United States S3 bucket, but in general it's now fetched from CloudFront. So it's distributed across all the edge locations AWS supports, and yeah, that's awesome.

And now I think it's time to cover the security part, because we skipped it at the beginning of the talk, right? So we'll use one more thing, which is called origin access identity.
So let me show you a diagram to help you understand how exactly it's going to work. So as you can see, not many things have changed, but now CloudFront is going to retrieve objects for caching from S3 as previously, but this time the S3 bucket is going to have some bucket policies which restrict any operations on the S3 bucket to only those who have the origin access identity. It means CloudFront should have a specific origin access identity, which will be listed in the bucket policies, to have access to it. Okay, so our S3 bucket will no longer be publicly available. And yeah, that's going to be our complete infrastructure.
So to achieve that, what we need to do is create the origin access identity, and then we need to create a policy document. Again, sorry, one note, it was probably worth mentioning earlier: please don't be scared if you see some configuration which you don't really understand, because sometimes it's not really related to infrastructure as code as such, sometimes it's related to the cloud provider. In our case there are quite a few things which are related to AWS itself, and if you haven't worked with it, you probably just don't know in which format it wants the configuration to be passed. But it's quite easy to get it by looking at their docs. Plus, for each provider supported by Terraform, you can find rich documentation on the Terraform website which helps you to understand everything. Sorry. So we're creating the policy document, we're specifying that we want to have access to objects, and we're specifying that we want to have access to the list of objects. And after this we're creating the S3 bucket policy, assigning the bucket itself and the policy itself. I think it's straightforward.
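A sketch of the OAI and the bucket policy, matching the two statements described above (reading objects and listing the bucket); names are placeholders:

```typescript
import {
  CloudfrontOriginAccessIdentity,
  DataAwsIamPolicyDocument,
  S3BucketPolicy,
} from "./.gen/providers/aws";

const oai = new CloudfrontOriginAccessIdentity(this, "oai", {
  comment: "access to the website bucket",
});

// allow only the OAI to read objects and list the bucket
const policyDoc = new DataAwsIamPolicyDocument(this, "policy-doc", {
  statement: [
    {
      actions: ["s3:GetObject"],
      resources: [`${bucket.arn}/*`],
      principals: [{ type: "AWS", identifiers: [oai.iamArn] }],
    },
    {
      actions: ["s3:ListBucket"],
      resources: [bucket.arn],
      principals: [{ type: "AWS", identifiers: [oai.iamArn] }],
    },
  ],
});

new S3BucketPolicy(this, "policy", {
  bucket: bucket.id,
  policy: policyDoc.json,
});

// and on the CloudFront origin:
// s3OriginConfig: { originAccessIdentity: oai.cloudfrontAccessIdentityPath }
```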
After we've had a look at the diagram, yeah, we're adding these changes to our CloudFront distribution so this origin access identity will be available for CloudFront. Now what we can do, the simplest option, is just remove everything we have previously deployed to the S3 bucket and redeploy it again, this time not specifying the ACL at all. So it will use the default one, the private one.
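With the AWS CLI, the cleanup and private redeploy could be:

```sh
# remove the old public objects, then re-upload without an explicit ACL,
# so objects fall back to the default (private) one
aws s3 rm s3://demo.example.com --recursive
aws s3 sync ./build s3://demo.example.com
```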
And if you've done it, we can try to open the S3 domain again and see that we have this nice access-denied page provided by AWS for us, while with CloudFront and Route 53, with the real, normal domain, everything is okay. So the site is still operational. So all good. So if you're now sweating as I am, take a deep breath.
We're very close to the end of this talk. We're not going to write more infrastructure, during this talk at least, but there are a few bits which I'd like to cover with you. And the first one is the remote backend. If you remember, at the beginning of the talk, when we initialized the project, we started with a local backend. And I said that if you know what backends in the Terraform world mean, that's great; if not, we're going to cover it. And it's time to cover it, because when you work alone, in theory, yeah, you can handle everything and store it on your local machine. But if you work with at least one other dev, or in a team, you need to store your Terraform state somewhere, right? You may think first about GitHub, but it's not the best idea, because the state is certainly going to have some sensitive information. And Terraform provides better options for this, and they are called backends. It supports various databases, including DynamoDB, it supports the S3 bucket, and they even have their own cloud storage for this. So as we started with AWS, let's stick with it and let's put our backend on AWS.

So let's create the S3 backend. For this we actually need just three parameters. We need to specify the bucket name; it shouldn't be the same bucket as the one we use for our application, it's a completely separate thing, so it can be completely private, created manually or with a different Terraform project. We specify the key, which is just our name for the state file; I just specified it as the infrastructure-as-code talk demo project. And we specify the region, which can be any.
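In CDKTF the backend is a construct from the cdktf package itself; a sketch with placeholder names:

```typescript
import { S3Backend } from "cdktf";

// inside the stack constructor; this bucket is NOT the website bucket
new S3Backend(this, {
  bucket: "my-terraform-states",                    // private, pre-existing
  key: "infrastructure-as-code-talk-demo-project",  // name of the state file
  region: "us-east-1",
});
```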
After we're done with this, we need to build and synth, of course, and go to the cdktf.out folder. And here we need to run terraform init again. This command, if you remember, we ran at the beginning of the talk; we need to run it again, and this time it's going to ask us whether we want to copy the existing state to the new backend. If the answer is yes, the local version will be removed, and the deployed version will live in our S3 bucket. This file is actually just JSON with the state describing our current infrastructure.

So now, why TypeScript? Why do we want to use TypeScript for coding our infrastructure? I think it's important that there is no new language to learn, because with Terraform previously you needed to learn HashiCorp Configuration Language, which is not that bad. I mean, it's not even complex at all; it's slightly more complex than YAML or JSON, because unlike those, it has loops. But we are developers, right? And I've been coding for many years; I really like the power of programming languages, and that's certainly what I'd like to create my infrastructure in, and get all the advantages it shares with me, such as powerful autocomplete and typings. Because now it's way easier to understand what exactly I need to specify for certain providers. Previously I always needed to visit the Terraform website to check the documentation; now I can just check the typings, and sometimes there are even comments, and yeah, that's great. It speeds up the development.
Plus we have all the mature language advantages. We have, I don't know, all those array methods like map, reduce, whatever. We have options for code structuring, because we have modules; that's something you don't expect from HashiCorp Configuration Language, where you just store everything in a single folder and you don't know what depends on what. With TypeScript you have modules: you can import certain bits, you can share certain bits, you can create functions which accept some parameters and mix them into your state, whatever, do any magic you want. There are way more features available to you now. And we have new ways of sharing, because previously there were options to share complete modules through the Terraform ecosystem; now we have a completely new ecosystem when we're talking about JavaScript, and I mean npm. So we can partially or fully share our infrastructure as reusable functions, classes, whatever. We can implement tests for our infrastructure in the same language, in TypeScript.
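As a tiny, hypothetical example of what that sharing can look like, here is a plain function (no CDKTF required) that centralizes a bucket-naming convention so every stack, or even every team, can reuse and unit-test it:

```typescript
interface BucketSettings {
  bucket: string;
  tags: Record<string, string>;
}

// one naming convention in one place instead of copy-pasted configuration
function bucketSettingsFor(env: string, domain: string): BucketSettings {
  return {
    bucket: env === "prod" ? domain : `${env}.${domain}`,
    tags: { environment: env, managedBy: "cdktf" },
  };
}
```

Helpers like this can be published to npm and tested like any other package.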
So I think it's now a big step closer to developers. And yeah, that's one of the reasons why I'm doing this talk, especially for front-end developers, because I found that, of course, DevOps and back-end developers usually know what infrastructure as code is and why it's needed, while for many front-end engineers I found that it is still a buzzword. And I want to change that, because it's very important not only to create the infrastructure for the website, like we just did during this talk, but also to create infrastructure for monitoring and alerting. Let's say we want to specify some specific conditions to alert on with New Relic, or maybe we want to specify them with Sentry; I don't know, maybe we want to integrate some incident management flow, including PagerDuty. For all these things we can create infrastructure, and we should create infrastructure, with any IaC solution. And again, I would personally recommend Terraform.
But let me mention one more thing, because I think it's incredibly crucial. Terraform itself is quite mature and used in production; we use it for lots of different services. But CDKTF is still under active development. So CDKTF is something you may expect changes in, because it's in an active development phase. I think that's obvious. It's now in beta, so it's not alpha anymore, which is great, but they still have this warning on their GitHub, so just remember about it. And yeah, everything that is generated, I mean all the Terraform output: if it's generated and it works for you, it's going to keep working, because after you run synth you already have the generated Terraform JSON file, which works with Terraform as before, and that's the stable part. If that's okay for you, if you're ready to have some minor issues with CDKTF, then please adopt it in your projects. I found it super useful, because I really like the power of TypeScript. If you feel that you need a more stable solution, stick with HashiCorp Configuration Language for now, and then migrate your infrastructure partially or fully to TypeScript.
Okay, all the code samples, all of them, are available in this repo. They cover the full talk; I tried to follow the same history in the commits as we just did during the presentation. Plus there is one more repo which I'd like to share with you, terraform-typescript-frontend-infrastructure, which has slightly more advanced structuring, so you may use it as a reference too. Cool. Thank you so much again, my name is Denis, hope you enjoyed it, and see you later.