Transcript
What if you could work with some of the world's most innovative companies,
all from the comfort of a remote workplace?
Andela has matched thousands of technologists across the globe
to their next career adventure. We're empowering
new talent worldwide, from São Paulo to Egypt
and Lagos to Warsaw. Now the future of
work is yours to create. Anytime,
anywhere, the world is at your fingertips.
This is Andela.

Ahoy there! My name is Savas and I would like to go serverless with you. But before you head out with a total stranger into the cloudy serverless wild, let me introduce myself. I am Savas Ziplies, founder and managing director at Ellipsis and a passionate software engineer for over 15 years now, having worked on research and product development in multiple areas and industries, such as mobile apps, backends, and highly available server applications.
With the advent of DevOps and serverless, and the goal of bringing development back closer to operations, I would like to show you today how to leverage your daily Go environment and just deploy it serverless from the tip of your finger. So let's start, and let's start with the most important question: why serverless?
And this is hard to answer. It's an excellent question, but it's hard to answer, and I hope that by the end of this presentation you will know for yourself whether serverless is good for you or not. But defining serverless is already hard, because serverless can mean a lot to some people. Mobile developers, for example, that just use a backend service like Firebase: for them it's already serverless, because they don't have any server anymore. But this is rather Backend as a Service compared to a real serverless vendor Function as a Service, which is what we will talk about today.

So what we will talk about today is not caring about the infrastructure. There are servers in the end; it's just the cloud, the servers of others. But you don't own the servers anymore, so you are serverless. And this is the important part: no dedicated servers for you, no dedicated instances for you. You just think about the functions and the functionality that you want to deliver to your app. So you are just simply developing.
If we break it down to the four pillars of serverless for today: we are talking about the infrastructure as a service that we want to deploy to; it will take care of the high availability and the nearly endless scalability that we want to use; what we deploy only runs when we need it, so we also only pay when we need it; and we just focus completely on the development. We just continue as we did before, develop our application, and care only a tiny little bit about the operations side.

So if you look at the market right now, what is available? You have probably heard about Microsoft Azure Functions, you have probably heard about AWS Lambda, maybe even Google Cloud Functions. And some might even have heard about IBM Cloud Functions. Why IBM Cloud Functions here? Because it is based on Apache OpenWhisk, which is an open framework that you can potentially also use to deploy Functions as a Service on your own Kubernetes cluster or your own server environment, or the Fn Project, which provides something similar that you could use. On top of that, there are also other full-service providers like Vercel or Netlify, which also have Functions as a Service, but more abstracted for the web development part. If you really want to deploy pretty much anything, they provide a full service, abstracting away what is really behind it. So with so much going on, let's just go for it. Let's develop our serverless application.
So if you think about it from an architectural perspective, it's pretty simple. We have, for example, our web application, and our web application has some requirements. So we write down the user stories and what we want to develop: for example, we need to provide the time, we need to update an entry, filter data, search data, yada yada yada. The web application just wants to push the requests and get the data that we have stored in a database, for example, from our single functions. If you would now develop this, you could have, for example, three developers picking different user stories, working on the time implementation, the update implementation, and the search implementation, who just go ahead and create their functions. One might develop it in TypeScript, another might develop it in Go, another in Python. So it's pretty flexible, and everybody can work in the best environment they would like to work in.
Then everything is deployed. Let's take the example of AWS: you have the different endpoints, for example /api/time, /api/update, /api/search, which you can then access from a client perspective for the different functionality. So the client and the frontend developer are happy.
If we now look at what is really happening inside these functions and take a first close look at a Google Cloud Function, you can see that it's basically just a simple function, in this case hello world, that just implements the HTTP handler interface, with the ResponseWriter and the Request in there. And then you just go ahead and do whatever you would like. In this case it's pretty simple: you just return your hello name or hello world, without any main or anything, and you can just deploy it and have a serverless function online on Google.
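As a minimal sketch (the package and function names here are illustrative, not taken from the talk's slide), such a Cloud Function is nothing more than a plain Go HTTP handler:

```go
// Package hello contains a minimal HTTP-triggered Cloud Function.
package hello

import (
	"fmt"
	"net/http"
)

// HelloWorld is the entry point configured at deployment time.
// It has the standard http.HandlerFunc signature; no main() is needed.
func HelloWorld(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "World"
	}
	fmt.Fprintf(w, "Hello, %s!", name)
}
```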
For AWS it's pretty similar. You can also deploy a hello world or hello name function, but you have to proxy these requests first: because of the API Gateway on AWS, you have to proxy them into the Lambda handler that is already provided by AWS. After that it's pretty much the same as we have seen on Google and with many other providers. So it's a little bit more complicated compared to some others, but in the end it's the same.
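A comparable sketch for AWS Lambda (handler name is illustrative) shows that proxying: instead of a ResponseWriter, you receive and return API Gateway proxy events:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler receives the request already wrapped in an API Gateway proxy
// event and returns a proxy response instead of writing to a connection.
func handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	name := req.QueryStringParameters["name"]
	if name == "" {
		name = "World"
	}
	return events.APIGatewayProxyResponse{
		StatusCode: 200,
		Body:       fmt.Sprintf("Hello, %s!", name),
	}, nil
}

func main() {
	// lambda.Start hooks the handler into the AWS-provided runtime.
	lambda.Start(handler)
}
```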
So if we continue now with the AWS example: you just create your project, add the aws-lambda-go library, and add the lambda.Start handler as we have seen on the slide before. Then you create your AWS account. You can create a free account; there is a free tier with 1 million Lambda requests that you can execute for free. So just go ahead and create something: build your application, install the AWS CLI, zip everything up, create your IAM permissions (because without permissions you cannot invoke or upload anything), then create your function, which basically just uploads your compiled Go application, push it online, and then you can invoke it. If you invoke it, this very creative example returns hello world, and you can rinse and repeat function by function, leveraging your time, update, filter, search, whatever you need. So it's very easy and quick to start, and easy in development, function by function.
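Condensed into commands, that flow might look roughly like this (function name, role ARN, and region are placeholders; the IAM role must already exist and permit Lambda execution):

```sh
# Build a Linux binary and zip it; the go1.x runtime uses the binary name as the handler
GOOS=linux GOARCH=amd64 go build -o main .
zip function.zip main

# Create the function with an existing execution role (placeholder ARN)
aws lambda create-function \
  --function-name hello-world \
  --runtime go1.x \
  --handler main \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-exec-role

# Invoke it and inspect the response
aws lambda invoke --function-name hello-world response.json
cat response.json
```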
Basically you could work user story by user story, and you would really follow a single-purpose design from the get-go. But as you have already seen, there are differences just between Google and AWS, and it's similar with other vendors. So you are immediately vendor-locked here. And in some circumstances these single-purpose functions might be a little bit too microservice for some developments. You just want to think in an app: you want to develop an app and not think in single functions right from the start.
On top of that, if you start creating 50 functions, 100 functions, maybe with dependencies on different infrastructure, like a user database in between, it might become very tedious to develop locally, and especially to test locally if there are interdependencies. So it's not native to what we are normally used to in development. Everything can be covered with structure and good architectural design, of course, but maybe we just want to stick with what we already know. So these native FaaS functions fulfill their use cases: if you really need small, quick, highly available functions that you just want to deploy on single clouds and incorporate into your existing application. But maybe it's not right for you if you want to develop a whole application. And this is what we want to do.
So let's go a little bit further. The next level, when we look at the simple serverless architecture that we have seen, is that we do not think in functions anymore, but think in applications. We are developing apps, because this is what we do. It can be one app, it can be two apps, it can be multiple apps. We can start off with one app and then split it later into multiple apps because of requirements that we have derived from running it. You just really start developing and focusing on your application deployments.
Our goal is to reuse existing frameworks and libraries that we already have in our Go environment and that we work with on a daily basis. We want to start simple, just developing our application, but stay flexible in case we potentially need more or want to split it up. We want to deploy it to AWS in our example today, but be flexible to also deploy it elsewhere. And we don't want to care about the infrastructure, of course.
So I would like to call this, for today, a Framework as a Service approach, to stay with the abbreviation FaaS. We are flexible and extendable by using an existing framework which already has so many plugins, so many modules, so much middleware that we can use. It has all the routing, it has the JWT authentication, so we don't have to do anything there. We want to be able to develop locally, deploy it in a Docker container, deploy it on our own server, but also be able to deploy it serverless. So: development first, operations second. And when we are using a common web framework and a common design pattern, we are also using something that we already know, a structure that we already know. So we are keeping it simple, not serverless. For the example and the project structure given today, we are looking at a monolithic microservice architecture, combining convention and configuration altogether.
For the example I have picked Fiber, an Express.js-inspired Golang web framework, very fast, very easy to use, which will be incorporated in the example today. This we want to deploy to AWS with the AWS SAM CLI, and building, testing, and deploying can be done via Docker, of course.

So if we look at the project structure that is provided here, it's basically segregated into, I would say, three or four parts. We have the build part, where the build configurations for the different providers live, in our case AWS. Then the most important part is the command part: this is where our serverless apps really live. We have our API, a queue handler, for example a web API, or whatever we need. All our serverless applications live in the command area and are fully fledged applications. On top of that, we can add any common code, for example models or helper functions that we want to share between our different commands, and then some additional files like environment files, Docker Compose, and so on. I won't go through all the individual code lines, as those are just normal Go projects. I have provided a GitHub repository which you can check out yourself to see if everything works out for you too.
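A hypothetical layout matching that description (folder names are illustrative; the actual repository may differ):

```
.
├── build/              # build/deploy configuration per provider, here AWS SAM
├── cmd/                # each serverless app is a fully fledged application
│   ├── api/            # HTTP API app
│   ├── queue/          # queue handler app
│   └── web/            # web app
├── pkg/                # shared models and helpers used by all commands
├── .env                # local environment configuration
└── docker-compose.yml  # local infrastructure for development and testing
```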
So if we look closer at one of the commands, for example the API command, it's just a fully fledged API. It stands on its own; it's just a go-mod-initialized app where you can just develop: include Fiber, add your routes, add your middleware, add your database connection, whatever you need, you can add it there. But you can also add additional, for example vendor-specific, commands, say ones that are tied to AWS. Everything can be mixed altogether. The important part is that you can start simple and extend later on. You are not locked in by anything from the get-go, but can decide later on, for example: in this application I have noticed that this endpoint requires more memory, so I split it out into a different function, which is basically just a new command. Split it out, create a new deployment configuration, and you have a new serverless function with the according memory configuration that you require.
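As a sketch, such an API command is just an ordinary Fiber app (the route shown is illustrative, not taken from the repository):

```go
package main

import (
	"log"
	"time"

	"github.com/gofiber/fiber/v2"
)

func main() {
	app := fiber.New()

	// Routes, middleware, database connections, etc. are added here
	// exactly as in any locally developed Fiber project.
	app.Get("/api/time", func(c *fiber.Ctx) error {
		return c.JSON(fiber.Map{"time": time.Now().UTC()})
	})

	log.Fatal(app.Listen(":3000"))
}
```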
Now, to handle the different entries coming from AWS, or locally, or later on from GCP, or whatever we can think of, we want to create a single point of entry, which in our case is our main.go file. There we basically just read an environment variable, in our case the server environment, to decide where we are running. If we are running on AWS, we start the Lambda handler that we have seen earlier. If we are not in any specific server environment, we just start a normal local Fiber web server and can test it or deploy it however we like. So we have a single point of code and a single point of entry here.

For AWS, as mentioned, we have to add a little bit more, as we have to proxy the requests into Fiber. But it's quite easy, as AWS provides everything themselves: aws-lambda-go-api-proxy is a GitHub repository which provides adapters that can be attached to Fiber, but also Gin or Echo and other web frameworks. So based on our server environment we just init it, attach it, proxy our context, and we are back in our normal Fiber context. And everything after that is just normal development as we know it.
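Put together, a sketch of such a main.go might look like this (the SERVER_ENVIRONMENT variable name and the route are illustrative; the talk only says "server environment"):

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	fiberadapter "github.com/awslabs/aws-lambda-go-api-proxy/fiber"
	"github.com/gofiber/fiber/v2"
)

func main() {
	app := fiber.New()
	app.Get("/api/time", func(c *fiber.Ctx) error {
		return c.JSON(fiber.Map{"time": time.Now().UTC()})
	})

	if os.Getenv("SERVER_ENVIRONMENT") == "AWS" {
		// Wrap the Fiber app once; the adapter translates API Gateway
		// proxy events into Fiber requests and back again.
		adapter := fiberadapter.New(app)
		lambda.Start(func(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
			return adapter.ProxyWithContext(ctx, req)
		})
		return
	}

	// No specific server environment: run as a normal local web server.
	log.Fatal(app.Listen(":3000"))
}
```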
And this is why we can just develop. We are not caring about serverless, we are not caring about AWS, we are not caring about GCP. And this is the goal: we just want to develop and test, and everything that works locally should just work serverless automatically, while being highly available and scalable in the cloud, available for everybody. So just develop, finish your app, and when you're ready, we bring it to the cloud. Now we're really going serverless. We already have our build structure, we have our commands; now we want to bring it to AWS. How do we bring it to AWS?
By using AWS SAM. SAM stands for Serverless Application Model, and the name already gives it away: it's application modeling. We are not modeling an infrastructure; we are modeling our services, modeling our application as a service online. It serves as an ops layer between the app that we have developed locally and the serverless deployment. In the end, you can just install the AWS SAM CLI, call sam init, select from different templates, zip your artifacts or create containers, for example via ECR on AWS, select Go as your programming language in our case, and just create your application. SAM takes care of everything: it creates the folders, it creates the configuration. And theoretically you could just call sam build, which builds and zips the initial template that has been created, then call sam deploy, and just deploy it to your configured AWS account. And suddenly you have your first serverless application running online. With sam deploy --guided you even get asked the important questions about what you want to do: where you want to host it, which region, for example, and you can configure everything from the tip of your fingers.
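The basic command flow the talk describes, using the standard SAM CLI commands:

```sh
# Scaffold a new SAM project from a template (interactive; pick Go as the runtime)
sam init

# Build and package the functions defined in template.yaml
sam build

# First deployment: --guided asks for stack name, region, confirmations, etc.
# and stores the answers in samconfig.toml for later runs
sam deploy --guided

# Subsequent deployments just reuse the stored configuration
sam deploy
```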
And this is what we want to do. But as we have our own structure, with applications rather than the single functions that would be created if you just call sam init, we have to adapt it to our project structure. So we look at the template.yaml that is created by SAM, because this is basically the infrastructure as code, or the app as code, that we are using. In the template.yaml you have resources, and our function, our API function for example, is a resource: an AWS::Serverless::Function. But instead of just having single files, we point the code URI back to our folder structure where we have our commands and define the whole build as our single function. We can add additional properties, like for example memory size and timeout. And always remember: you want to start off low, because the lower you can run, the cheaper you will end up in a serverless environment. This is basically all the configuration that we need to give to SAM; if we then call sam build and sam deploy, SAM takes care of the rest.
The important part then is: how does our app get called? We have to add an event. An event is the trigger for how our app is invoked. And as our app is a function, we want to attach it to an API resource and just proxy in all the requests that are coming in and handle them internally. So compared to what we have seen earlier, where we created an /api/time, /api/update, /api/search, for example, we don't split it up, because we are doing the handling, not the operations side. We just proxy all incoming requests into our own app and handle the rest ourselves.
On top of that, we have to configure the environment variables, of course, so that we can set the server environment and our application knows it is running in an AWS environment. And a lot else, of course, because, as already mentioned, we are defining resources. If you define resources, like for example an RDS database, you can reference the RDS database that you have defined in the same template and just attach it as an environment variable. That way you don't have to create anything in the AWS console, copy the URL, paste it in here, and then the URL changes and everything is broken. You just reference your deployments directly here, and AWS SAM and CloudFormation take care of the rest.
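Put together, the relevant part of such a template.yaml might look roughly like this (logical names, paths, and the variable name are illustrative; the actual repository may differ):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: cmd/api/         # points at the whole app, not a single file
      Handler: main
      Runtime: go1.x
      MemorySize: 128           # start low: the lower, the cheaper
      Timeout: 10
      Environment:
        Variables:
          SERVER_ENVIRONMENT: AWS   # tells main.go to start the Lambda handler
      Events:
        ProxyAll:
          Type: Api
          Properties:
            Path: /{proxy+}     # proxy all incoming requests into the app
            Method: ANY
```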
Sometimes you might need additional data. I picked an example: you might need Chromium to be used by your serverless application, because you want to invoke a Chrome instance and visit a specific website, because you need the JavaScript to be executed to get some information. To have this running without necessarily having to build a new container image, you can just add the data as a layer on top of your serverless function. You would just download whatever data you need, in this case the Chromium build, have it in your build structure, define where it is lying around with the content URI, and then it is just attached as a serverless layer that is always hooked into the execution of the serverless function. So every time the function is executed, the data that you have uploaded here is available. And as functions and layers are always versioned, you can always upload new versions, for example of Chromium, but you can also roll back and switch back and forth in the end.
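A sketch of such a layer in the template (names and paths are illustrative):

```yaml
Resources:
  ChromiumLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: chromium
      ContentUri: build/layers/chromium/   # where the downloaded build lies around
      CompatibleRuntimes:
        - go1.x

  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      # ... function properties as before ...
      Layers:
        - !Ref ChromiumLayer   # hooked into every execution of the function
```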
As already mentioned, you might want to configure databases, DynamoDB, SQS queues, or Redis clusters. Whatever you can think of, you can basically configure in this template, because it is a CloudFormation-based template, and everything that AWS offers can be configured in it. Creating an RDS instance: easy. Creating a queue: very easy. And the important part is that you don't have to do anything in the console and, as mentioned, copy URLs around. You define everything internally, and everything is handled by AWS SAM and injected accordingly. So if something has to wait for a certain instance, AWS takes care of this. If you update an instance, for example increase the allocated storage because you want to go from 10 GB to 100 GB, AWS takes care of this modification of the resource; you don't have to do it yourself. So there is no manual setup required, and everything is resources that are just interconnected and usable for you.
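For example, a database defined next to the function and injected by reference, so no URL ever needs to be copied by hand (values are illustrative; in practice the credentials would come from a secret store):

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      DBInstanceClass: db.t3.micro
      AllocatedStorage: 10        # raise to 100 later; CloudFormation applies the change
      MasterUsername: app
      MasterUserPassword: '{{resolve:ssm-secure:/app/db-password:1}}'

  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      # ...
      Environment:
        Variables:
          DATABASE_HOST: !GetAtt AppDatabase.Endpoint.Address  # resolved at deploy time
```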
So if we look at it from a development-cycle perspective: you start off developing and configuring what you actually use. Maybe you start with "oh, I need a database for my app", okay, so you configure your database and start developing. You start developing, creating your first commands, maybe thinking: I have to add this route, these functions, this service, whatever. Then you just build it and deploy it. Everything is on AWS; you check it out, and either it's running fine or maybe it's not running, so you reconfigure and turn the cycle around again. Everything is just adding, modifying, removing, and everything is handled by AWS SAM, so you don't have to care about anything else.
If you now look at how this would eventually play out: you call sam build and you have, say, three functions, in this case the API function, a web function, and a queue function. These functions are built and zipped by sam build based on the configuration that we have made in the template. Then you are able to deploy. Once you have deployed with a guided deployment, the samconfig.toml is created, which is basically the deployment configuration that is then available on the system and can always be reused. You can also override, for example, the AWS profile, because you want to use a different profile or deploy the same application to two different AWS environments, maybe for security reasons or whatever. You can even override specific environment variables and inject something for testing purposes. So you are pretty much free to do a lot, and SAM takes care of the rest. You are now deploying apps instead of functions.
And when SAM deploys, you can check everything that is actually happening. SAM always asks for confirmation, or at least it does in the default configuration, and you should actually never change that. You can see where everything is uploaded, the S3 bucket; you can see what has been added as a resource, what has been modified as a resource, what else has been uploaded, for example. All of this is available and visible from the command line, so that you can see what is really happening and can explicitly confirm: yes, this is supposed to be deployed.
So if we look at the deployment configuration, it is pretty simple: it is exactly what you entered when you first started a guided deployment via SAM. But the important part is that you can have multiple different stages. You basically have profiles here: a default profile, a dev profile, a staging profile, a production profile. And this is the important part: you can have multiple deployments directly in one configuration, without needing different files, different setups, or different checkouts. So if we take the two files that we have right now: there is the template.yaml, which is basically the infrastructure as code, or the app as code if you want to call it that, and the samconfig.toml, which is the deployment as code.
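A sketch of such a samconfig.toml with two stages (values are illustrative); a specific stage is then selected with sam deploy --config-env staging:

```toml
version = 0.1

[default.deploy.parameters]
stack_name        = "my-app"
region            = "eu-central-1"
confirm_changeset = true        # keep the confirmation prompt

[staging.deploy.parameters]
stack_name        = "my-app-staging"
region            = "eu-central-1"
confirm_changeset = true
```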
There are, of course, also some drawbacks. The example given is pretty much locked to AWS SAM, or AWS in general. But we have split it up well enough, I would say, that we can just exchange the mediator that we have here, AWS SAM, and introduce a different one, for example for GCP or Azure or whatever we want, via scripts or via other tools. So we have just created an ops layer that we can exchange for something else to deploy our app, which works locally and would work out of the box anyway, to any other vendor.
The other thing that we did: we introduced a framework. This framework offers us a lot of functionality and libraries that we can potentially use, but it also introduces some overhead, and the overhead can increase the boot and execution time of your serverless function. This is important, because every millisecond of reserved memory that your serverless function uses counts, and costs can escalate pretty heavily. But as you are using existing frameworks: frameworks have common optimizations, frameworks are normally well known, and there is a lot of documentation. So in most cases you can circumvent the overhead that you generate there, and it doesn't affect your serverless function too much, at least in my experience.

So with all my advocating for serverless right now, everything could go serverless, right? If you ask yourself, "do I need serverless?", I would say no. Serverless is not the first thing to think about. As mentioned, think about your application first. Start developing your application; then, with a big "but", you can look at the pros and cons and consider whether you should introduce a deployment to a serverless environment.
It provides high availability and nearly endless scalability without the need for a big DevOps team. So this really brings the developer back to operations: one person can maintain a fully running serverless application on their own. It's easy to start, and it scales with your customers, supporting peaks when you have, for example, marketing campaigns. So if you are ever in the limelight, you're ready for it. You're only paying when it's running, and the availability is only given when it's running, so that's good. If you have maybe no traffic and no customers, that's not good, but at least then you are not paying. And I have only given you a glimpse of serverless deployments and serverless abilities right now, so there's much more to discover. So just go ahead.
But on the opposite side, you have no insight into the infrastructure. This is good and bad: you don't want to care about the infrastructure, but it also limits the optimizations available to you. You cannot care about servers, you cannot care about memory, basically, or CPUs, or what hardware is used. This is all dictated by the vendor that you are locked into. Costs can easily escalate in a serverless environment, as serverless really requires optimization, because you are paying, as mentioned, for every millisecond and every megabyte of memory. If you set a timeout of, let's say, five minutes, you are reserving 1 GB of RAM, and you have a failed, I don't know, abort condition that is not kicking in, then every time you are paying for five minutes and 1 GB, and this can get very, very expensive very easily. So always think about monitoring, very important, and about setting limits.
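To make that concrete, assuming roughly the published Lambda rate of about $0.0000167 per GB-second: a hung invocation that runs into a five-minute timeout with 1 GB reserved burns 300 GB-seconds, about half a cent per call. A retry loop or a traffic spike driving a million such invocations already costs around $5,000, where a healthy 100 ms call would have cost only a tiny fraction of that.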
The other thing: if you have constantly running apps or constantly hit endpoints, then maybe serverless is not the best fit, because you are basically paying for every invocation of the serverless app. Maybe it's better to have it warmed up, have it caching, have it running on normal instances, and just scale in a more classic manner. Also, you are only as scalable as the rest of what you configure. If you configure resources like, for example, the smallest database available, then of course you might be able to catch 10 million requests from an advertising campaign, but your database cannot hold them. The weakest part always defines your availability and scalability.
And one important point: in a serverless environment, you are limited to the computational resources that the vendor provides you. For example, you can define a timeout and a memory size, but you cannot define the number of CPUs. In the AWS case, at the current state, with every 768 megabytes you get one vCPU. So until you go over that memory limit, you only have one vCPU. If you need more vCPUs, you have to increase the memory size; but with more memory you again pay more money, and you have to decide whether you want to go down that route or optimize your application. So this is a good and a bad part: you are forced to optimize your application to reduce memory and execution time.
So now, looking at the pros and cons, I cannot really give you a decisive answer on whether you should go serverless. The only thing I can say is: just go, just try it out. There are so many free tiers and free accounts that you can register, for Microsoft, AWS, or whatever. Take the example that is provided on GitHub with this presentation and just try it. Find out for yourself whether it works in your environment and whether it is a good use case for your application. You know best what to do, but think about your application first and not the operations part. Serverless is only here to help you, not to dictate what you should do. Thanks a lot for your time.