Transcript
This transcript was autogenerated. To make changes, submit a PR.
Gaining real time feedback into the behavior of your distributed systems, and observing changes, exceptions and errors in real time, allows you to not only experiment with confidence, but respond instantly to get things working again.
Hello everyone, and welcome to Development Productivity in a Post-Serverless World. I want to start with a story. So you may be wondering, why am I showing you a slide with a scooter, or a moped as they call it in the UK, and what apparently looks like a computer or a server?
So this is a story from 20 years ago, when I was working for a telecommunications company back home in Naples. And what we were doing in that company was providing authentication for some of the biggest broadband providers in the area. So we did not have the infrastructure to actually run and provide the broadband ourselves, but those broadband providers would offload authentication to us. So when somebody tried to connect to the Internet from their home via their router, a request would come to us to say: should I authenticate this user? Should I authorize this user to access the Internet or not? And then, based on accounts and all the information that we had, we would either authorize or deny the request.
Now, all of this was handled by a server called Argo. Argo was a server that we had had for some time. It was living in our office, it was connected to the Internet. Argo was living in a room with air conditioning to make sure it didn't get too hot. And Argo looked very much like the computer you can see here on the slide. And everyone knew Argo. He had been with us for a few years. We would regularly look after him, we would regularly patch Argo, and we relied on Argo to be there to do his job of authorizing or denying requests to allow people to go on the Internet.
Now, one day what happened was that there was a power cut in the area, and although we had a backup generator to give power to Argo, we could not keep Argo alive for long. And we did not have another Argo. Argo was the only server that we had to handle the job. We could not recreate a new Argo in a reasonable amount of time. We did not know what was on Argo. Argo was a few years old, and during that time he was patched, he was reconfigured, updates were applied, so we didn't really know what was going on. We had neither the hardware nor the knowledge to recreate a new Argo in time. So what did I do?
I unplugged Argo. I took Argo on my scooter with me, and I drove through the traffic of Naples as fast as I could to take Argo to my house. Actually, it was my parents' house at the time, to my bedroom. So I arrived in my bedroom, I plugged Argo back in, and I connected it to the Internet. I phoned up our broadband provider partners and gave them the IP address of my house so they could redirect requests back to Argo. And job done.
We were back in business. Actually, Argo was very noisy. He had a very big fan to keep him cool. So one night I couldn't really sleep with the noise that Argo was making, so I unplugged him again and moved him into my living room. And so my IP changed, and that caused a little bit of downtime in the service, but there you go. Why am I telling you this in a post-serverless talk?
I'm telling you this because there was a time in which servers were
not ephemeral, they were very important,
they were long lived. And entire
businesses were actually built and would
rely on physical servers to
actually keep going.
So, my name is Domenico, and I'm a principal engineer at Economics. Recently we rebranded ourselves, and now we are called Hardall. So those are some of the things that I do, and some of the things that I really can't do. I've been in the industry for more than 20 years; I still cannot write a simple regular expression.
So what I want to talk to you about today is two things. One, I want to talk about how we got from servers, like Argo, to running functions in the cloud. What was the journey like, to move from physical servers to AWS Lambda and Azure Functions and all that kind of good stuff? And also, I want to talk a little bit about infrastructure as code, and what kind of role I think that's going to play in the future.
So let's start from servers to serverless.
So initially, back in the day, we had physical servers like Argo. I mean, they were much better looking than Argo: they were flat, they could fit in a rack, and they were living in a data center. And then, traditionally, you had the developers, the software engineers, on one side, and the operations engineers on the other side. And the worlds between these two were very different: the tools that they were using, the day-to-day activities that they would perform, were very different. Developers would write the code. They had little to no idea of what was going on in the production environment. They would hand over the code, sometimes in the form of a package, to operations, who would then go ahead and deploy the software onto the servers. And operations were looking after patching and maintaining not just the configuration of the servers, but also procuring the hardware, issuing replacement parts and that kind of stuff. And so the distance between the two worlds was very big: significantly, two different jobs with two different kinds of skill sets that you would need.
Then virtualization came in, and this is where, I guess, the distance between the two worlds started to reduce a little bit, because here you'd have things like starting to do some scripting, and potentially some configuration as code, over those machines. I mean, there was still physical hardware, potentially owned and run by operations, and the servers here were still long lived. But as I see it, in terms of tooling it started to reduce the distance between the two teams, because both operations and development might start to do some bash or some scripting. And developers started to get a little bit more information and knowledge around what the virtual machine looked like, and potentially they would also do some of the configuration on those machines themselves to actually run their own software. And so, like I said, the distance kind of shrank a little bit. And then
Docker came in, and I think this was the most significant shift in terms of reducing the distance between development and operations, because here is where the environment in which the software runs becomes ephemeral. We don't talk anymore in terms of servers, and we don't give those servers names; they're not long lived anymore. Instead we talk in terms of processes, and maybe the amount of memory I need to run my application, and the number of CPUs. But we also change the way that we write code: we stop relying on servers being long lived and configuration being applied upfront. And this is also where, I guess, the tooling and the ecosystem brought those two worlds together, because now developers can deploy their own application much more easily. Tooling is shared: for example, in the Kubernetes world, if you use that as an orchestrator, you might use Helm as a tool shared between operations and development. And even if you think about activities like scaling, for example, developers started to take ownership of some of those, because now it's a much easier thing to do in terms of the tooling and the ecosystem. And so the distance between the activities, the tooling, the skills of those two worlds significantly reduced at this time, with the advent of Docker.
And then after that, serverless came in, specifically function as a service: the idea that you write some code, you push that code into a cloud environment, and that code magically runs for you. And this had, I guess, an impact in terms of reducing the distance between the two worlds, but in absolute terms it was not, in my opinion, as impactful as the advent of Docker. And whilst here we don't have servers to manage, and we now talk about business logic rather than processes, and about events that trigger those functions, the distance between the two worlds, I think, is still there, even in function as a service. And why do I think there is still distance there? It's because, for one, the tooling and languages that we use for the infrastructure that runs the serverless functions and for the business logic itself are still different. So, for example,
here you can see on the right-hand side an example of an AWS Lambda function written in TypeScript, and on the left-hand side you see this YAML configuration file, which is used to provision the serverless ecosystem in which that Lambda function runs. Now, this particular example is using the Serverless Framework as a packaging tool for Lambda-based applications. But you can see that there is still the notion of infrastructure versus application, and the way that we define the two is still using two different languages. We still need to understand the YAML and what's behind it, and we still need to understand TypeScript, and that is still two different sets of tooling.
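To make that split concrete, here is a minimal sketch of the application half of such a project. The file name, route and response body are my own illustrations, not what was on the slide; the infrastructure half would be a separate serverless.yml mapping an HTTP event to this exported function, in a different language with a different mental model.

```typescript
// handler.ts -- the "application" half of a Serverless Framework project.
// A serverless.yml (the "infrastructure" half) would point an HTTP event,
// e.g. path: hello, at this exported function.
export const handler = async (event: { path?: string }) => {
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: "hello from Lambda" }),
  };
};
```

The point is not the handler itself, but that its routing, permissions and runtime settings live in that other YAML file, which is exactly the two-language split being described.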
Now, the question for me is: what's coming next? Have we reached a plateau in terms of reducing that distance, and therefore maximizing the productivity and the speed with which we can write applications? Or is there anything else? And if so, what is the next thing that is going to reduce the distance between the two worlds even further? Is it going to allow us to be even faster at creating systems and applications? Okay, so with that, let's move on and try to explore what might be coming next, talking about infrastructure as code and what kind of role it might play.
So, actually, I think that there's not going to be infrastructure as code anymore, or at least infrastructure as code is not going to exist in the same sense, in the same way that it exists today. Because I think that, going forward, we're going to move to a world where we think more and more about systems as a whole, over differentiating between infrastructure and application. And so what we code is going to be the system, as in something that produces value for the business and solves a problem, rather than thinking about this is the application and this is the infrastructure on which the application runs. The two things are going to be merged together.
So, actually, the real item on our agenda is: forget about infrastructure as code, and let's talk about two emerging patterns that I see more and more when talking to people and the companies implementing them, which I think are going to play a bigger role in the future. I mean, of course, that's my bet. I might be wrong, but this is where I would put my money. And one is platform as code, and the other one is infrastructure from code. Now, I think the best way
to talk about those patterns is to actually see some code and write something. And for that
I'm going to use some specific tools and libraries.
Just a little bit of a disclaimer here. I have no commercial
relationship whatsoever with the tools I'm going to show you. I'm not trying to sell
you anything. I just think that those two tools I'm going
to show you are a good example of an implementation of those
patterns. But you can achieve the same things, I'm sure,
with many different other tools. Okay, so the first pattern we're going to be talking about is platform as code. And with platform as code, I guess the idea for me is that there are three main characteristics that those types of systems would have. The first one is that the language that you use to code your system is essentially the same: you code the platform, rather than thinking about either the infrastructure or the application, and you do that by using a single language. So there is no YAML or TypeScript anymore; there is only one thing. And also, from a deployment perspective, it's an atomic system, a single deployment unit. You deploy the system as a whole in a single CI/CD pipeline, so you no longer deploy the infrastructure and then deploy the application on top of that. So, with these three characteristics in mind, let's see how this looks concretely. And for that, we're
going to be using Pulumi, as a tool that I think is a good implementation of this pattern, and particularly the metaprogramming aspect of Pulumi. So, whilst you can use Pulumi to just create infrastructure, like you would do with Terraform or CDK or CloudFormation, Pulumi has some interesting metaprogramming characteristics which allow us to meet the three characteristics that I'm outlining here on the slide. Okay, so let's move to writing some code then.
Okay, so what we have here, as you can see, is an empty application. So what I'm going to do is create an application in Pulumi. So we're going to say pulumi new typescript. I've just instructed Pulumi: let's create a new TypeScript application. Let's give it a project name; I'm going to call it test-conf-42. And let's use dev as the name for the stack. And now it's installing a bunch of dependencies, a bunch of npm packages. I'm using TypeScript here because that's one of the languages I'm used to, but of course Pulumi supports a bunch of different languages. So this was now created. So if we do pulumi up, this is now going to create the application.
So you can see it's asking me (it's going to create a new stack): do you want to create the application? I'm going to say yes. And now this is happening in my own account, right? So I still own the cloud infrastructure on which this will run. So you can see this is now an empty application: I have a single index.ts file, which is standard for TypeScript or Node-based applications in general, and this is empty at the moment. So there's nothing in this application. So let's try to
put something. Okay, so I'm just going to copy and paste this code.
Now, what I'm creating here is a simple HTTP REST API with a route on /hello, which is going to return some JSON. So you can see, I'm not defining any API gateway or any infrastructure as such. If you're familiar with Express as a framework, for example, this looks very much like Express, and I'm using one single file, one single language, to create my application. Now, before we can actually run this, I need to install the npm package that I'm using here.
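The snippet I pasted looks roughly like the following. This is a sketch from memory using the @pulumi/cloud package; the API name and response body are illustrative, not exactly what was on screen.

```typescript
// index.ts -- one file, one language; Pulumi infers the cloud resources.
import * as cloud from "@pulumi/cloud";

// An Express-like router: behind the scenes Pulumi will provision an
// API gateway plus a function to serve each route.
const api = new cloud.API("hello-api");

api.get("/hello", async (req, res) => {
    res.status(200).json({ hello: "world" });
});

// publish() deploys the routes; the resulting URL is what `pulumi up` prints.
export const url = api.publish().url;
```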
So, npm install @pulumi/cloud. And let's try to run it: if you do pulumi up, this now will try to create the application for me. So let's give this a second. Do you want to perform this update? I'm going to say yes.
And okay, so you can see that it did not manage to update, because it's asking for the cloud provider. And this is because I own this infrastructure, so I need to tell Pulumi on which cloud provider I want this application to be created. I'm going to do that by saying the cloud provider I want to use is AWS. Okay. And then, because of that, I also need to install the AWS npm package, and I'm going to need to tell it in which region I want to run the application. So I need to give it some configuration about my AWS environment. So I'm using eu-west-1, and now I should be able to actually deploy the application.
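Put together, the CLI steps so far look something like this. A sketch: the exact config keys are how I remember @pulumi/cloud being wired up, so treat them as an assumption.

```shell
pulumi new typescript                        # scaffold the project, pick a stack name (dev)
npm install @pulumi/cloud @pulumi/cloud-aws  # the cloud-neutral API plus its AWS implementation
pulumi config set cloud:provider aws         # tell @pulumi/cloud which provider to target
pulumi config set aws:region eu-west-1       # where to deploy
pulumi up                                    # preview, confirm, deploy; prints the API URL
```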
So you can see that this is now telling me it's going to create a number of AWS-specific resources, like an API gateway, a REST API, permissions and staging. So Pulumi is inferring what the infrastructure should be, based on the application I want to create. But if you look at this, it's a single file, it's a single language. I haven't specified any infrastructure myself; I didn't talk about Lambda at all. I'm just saying: this is my code for the application. So I am given a URL where the application is running. So let's have a look at that. So let me share this so you can see. Okay, so if I do /hello, here you can see my hello world. So my application was deployed. Okay, let's see something more that we can actually do here. So another thing, for example, that
define a queue. So if
I want something to actually happen asynchronously.
So you can see here I can define a queue,
and here I'm being explicit about there is like an
AWS queue and what the queue is. But then I can say
when there is an event on this queue, I want to run some code,
this console log, it's my code here.
And again, this is usually like for example a callback function,
which as a typescript or JavaScript developer you might be very familiar
with. Also the thing that I can do here, I can share code
between you. I can say I have a constant of variable
and this is const of dummy and
this has some value and actually
you can go ahead and use this dummy value
everywhere. You can use it in here or you can use it in here.
As you can see it's one application, one file,
I guess one thing. So I can actually go
ahead and then run this.
And now behind the scene this is going to
create for me an sqs queue, and I'm
assuming a lambda function with the trigger on that sqs queue.
But you can see I've done all of this using one
single language, no terraform,
no yaml, no cdk. My entire
application is in one file. So what I coded is
the platform, the system rather than the
infrastructure on one hand and then the application on the other hand. And I'm
using construct and paradigms that as a developer
I'm very familiar with.
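Reconstructed, the queue part of the demo might look like the following. This is a sketch: I'm assuming the `onEvent` subscription helper from @pulumi/aws, and the resource names and log message are my own.

```typescript
import * as aws from "@pulumi/aws";

// A value shared between handlers: it's all one program, so closures just work.
const dummy = "some shared value";

// Explicitly an AWS resource: an SQS queue.
const queue = new aws.sqs.Queue("jobs");

// The callback becomes a Lambda function triggered by the queue;
// Pulumi creates the function and the event-source mapping for us.
queue.onEvent("jobs-handler", async (event) => {
    for (const record of event.Records) {
        console.log("received:", record.body, dummy);
    }
});
```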
And this is just, I guess, an abstraction layer on top of the application, because all of this is still running in my own AWS environment. And you can see there is still some knowledge about infrastructure, because here you are defining a queue explicitly. Okay, so hopefully you will have seen how something like the metaprogramming capabilities of Pulumi could help in writing an application like this, and in thinking about the platform, rather than infrastructure and application.
I guess the next pattern I want to talk about is infrastructure from code, and I guess it's an evolution on top of the platform as code pattern that we just discussed. And the idea is that we're going to have application-driven systems. So all we think about is the application; we forget about the cloud, or the infrastructure that we even need to run the application. And that infrastructure just becomes a sub-resource of the application: it gets inferred from the application. So the application comes first, and then the infrastructure becomes a side effect of that application. And the idea here would be that the system that I build is decoupled from any cloud provider. I write some code and I run this code somewhere; I don't own anymore, I guess, the cloud provider on which this code is running. Okay,
so let's use a tool again to show this, and the tool I want to use to show you this is Serverless Cloud. So Serverless Cloud is a new offering from the same company that runs the Serverless Framework. And I think, again, it's just a good implementation of the infrastructure from code pattern that I want to show you.
So let's see. We start again from an empty directory, and I've pasted some code snippets to run. Okay, so the first thing we're going to do is create a new application. cloud is a little CLI to manage Serverless Cloud applications. I'm going to say: create a new one; I want to create a new application in TypeScript. So yes, I want to create a new one, I confirm I want to use TypeScript, and I give it a name. So let's call it cloud-42.
Now this is installing a bunch of things. So you can see a structure already emerged here, again with an index.ts file, very similar to what we've done before. And one of the things that is interesting here is that it's connecting to my personal sandbox. So, without doing anything, without configuring any AWS or Azure environment, I was already given a URL at which my application is running; they are hosting the infrastructure for me. So let's try to write some code.
So maybe what we can do is: I'm going to get rid of all of this, and I'm going to have a very simple hello world. So you can see this is a very Express-like structure, where I just give it a route and then define my API, and this is the path I'm going to listen on. So the interesting thing is that the moment I save this, it gets updated in my remote cloud test environment.
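The hello world I typed is roughly the following, a sketch using the @serverless/cloud SDK (the route and payload are illustrative):

```typescript
// index.ts -- on save, this syncs straight to the personal sandbox in the cloud.
import { api } from "@serverless/cloud";

// Express-like: define a route and a handler, and nothing else.
api.get("/hello", async (req, res) => {
    res.send({ hello: "world" });
});
```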
So if I open up this URL (maybe let's open up the browser as well here), and I do /hello, you can see (I don't know if you can see this) my hello world; my hello world here responded. And again, no cloud configuration whatsoever: I just defined the API and they're running this for me. So let's look at some more interesting things that maybe we can do.
So you can see here, you get the primitives: you get an API, you can deal with some data, you can do some scheduling, and you can have some environment parameters as well. And those primitives are the ones on top of which I can build, essentially, my application. So maybe let's look at the data one. I'm just going to copy these two APIs here, where you have an endpoint to store the data and one to get the data. So you can see that I can use the data primitive here to store some data, and then I'm going to get some data. So the moment I save this again,
look how fast this is: this is already synced with my cloud environment. So if we open this one up again (so let's look at the browser), we have get data and store data. So if I do get data, this is returning nothing, because I haven't stored the data yet. And then if I do store data, the value is stored; and then if I do get data again, I get the value back. So again, you can see that here I'm already dealing with data. I haven't defined any database myself; I don't even know what the database is. I'm assuming it's DynamoDB running in AWS, but I have no idea, and nor do I care at this stage, because this is all run for me behind the scenes.
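The two endpoints were along these lines. A sketch: the key names and payloads are my own, and `data` here is Serverless Cloud's built-in key-value primitive, with wildcard keys for range reads.

```typescript
import { api, data } from "@serverless/cloud";

// Store a value under a key; no database is defined anywhere.
api.get("/store-data", async (req, res) => {
    await data.set("user:1", { name: "Domenico" });
    res.send({ stored: true });
});

// Read it back; a wildcard key such as "user:*" queries a whole range,
// which is what the later /users example builds on.
api.get("/get-data", async (req, res) => {
    const users = await data.get("user:*");
    res.send(users);
});
```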
The other interesting thing that we can look at is
scheduling. For example, the other primitive that we have available here
is scheduling. So you can see I can schedule something
to happen every minute and if
I save this, this will be deployed and you will also get
direct logging
in your console, in your terminal here.
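The scheduled job is essentially a one-liner with the `schedule` primitive; a sketch (I'm assuming the rate-expression form of `schedule.every`):

```typescript
import { schedule } from "@serverless/cloud";

// Runs in the cloud every minute; console output streams back
// to the local terminal while the dev session is connected.
schedule.every("1 minute", async () => {
    console.log("tick");
});
```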
And so this, I guess, also solves some of the problems with serverless around the local development story, which is not, I guess, the best in terms of tooling and ecosystem. But of course you can do, I guess,
more complex things, like with the data. So, for example, one of the other things that we can look at here would allow you to do queries. So if you look at something like an API like this, /users, where you can get the data for the users: here we're getting all of the users, but here you could write a query. So this does give you the ability to operate with a little bit more complex data structures. And you can see here I'm also getting the logs for the scheduled function I created before, and all I have to do is just save it, and everything is running behind the scenes. And again, this is a step forward from what we talked about before. Now, it's still a single language, it's all TypeScript or C# or Go, whatever language you want to go for, but I only care about those primitives. I don't even know now what kind of infrastructure or cloud environment this is running on. I guess some interesting
thing that you can do here: this is now kind of my local development environment, which is running in the cloud, but I can actually create a copy of that environment by sharing it with someone. I can type this share command here, and what this is doing behind the scenes is creating a new copy of the environment. It will give me a new URL, and then I can share that URL with a team member or a QA. And the interesting idea here is that the new environment that is getting created also contains the data. So whatever data I've created so far (and I think I've created one entry with store data), that data will be there. And in fact, if I now open this new URL which I was given (let me show you the browser) and I do get data, you see, I already have the value in, because the data was also copied over. And of
course, I guess, from here I can then do a deployment. So I can actually deploy to prod. And now that I am deploying to prod, one thing that we're going to see is that the data will not be synced, right? Because when you deploy to an actual environment, you only want the actual code, the application, deployed, but not the data itself. Now, because the prod environment does not exist yet for this application, behind the scenes this is also creating the environment for me. It will give me a new URL, and that's now the production URL. And then any change I make in my local environment here will not affect production; that's still my local environment.
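The environment commands from this part of the demo, sketched from memory (the exact argument names are an assumption):

```shell
cloud              # start a dev session: code syncs to your personal sandbox on save
cloud share        # snapshot code *and* data into a new environment with its own URL
cloud deploy prod  # promote the code (but not the data) to a named stage
```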
Okay, cool. So hopefully you will have seen how something like this can help us implement the infrastructure from code pattern, where we take a step forward and really think about the application as the main thing that we code, rather than the infrastructure that runs it. And in this case, with this particular tool, we are also offloading the running of the infrastructure to somebody else.
That is everything I had for you today. So if you have any questions, or you are interested in understanding more about some of the patterns that we discussed today, please drop me an email or follow me on Twitter at mimotzo, and I'm happy to engage in conversation about those topics. Thank you very much, and have a good rest of your day.