Transcript
This transcript was autogenerated. To make changes, submit a PR.
Conference for 2023. My name is Roman Boykar and I work as a Senior Specialist Solutions Architect for Serverless. Today I'm going to talk a little bit about Rust and why I think it is the best fit for writing your serverless workloads, and especially Lambda functions. To be honest, when I started looking at Rust and learning it a few years ago, I was a little bit skeptical that I would use Rust as my primary language for writing Lambda functions. But over time I came to the conclusion that yes, Rust is the best fit for writing Lambda functions, especially given how the Lambda runtime runs your code. So let's
get started and look at today's
agenda. So first of all,
I will remind you a little bit about Lambda, what it does and how it runs your code, and give a short overview. Then we will look at how you can kickstart a new project and create and run your Lambda functions, and what tools and options are there. Then we will be talking a little bit about Lambda extensions. This is another area where you can apply Rust, and it fits perfectly there as well. We will look at how to write a Lambda extension with Rust, and then we will cover and summarize all the things we looked at today.
So let's first talk a little bit about Lambda and how it runs your code. Essentially, from the developer standpoint, you can think of Lambda as another compute option: given code written in one of many different languages, it provides compute runtimes and compute resources to run that code. You write code, and that code interacts with different underlying resources such as AWS services, databases, or any other resources. And Lambda essentially allows you to react to different incoming events from other AWS services or external APIs.
And as you see, you can use different languages. You probably can't see Rust here, because in Lambda we have different types of runtimes. We have so-called managed runtimes, which provide a ready environment for you, for Python, JavaScript, or Java. Or you can use so-called bring-your-own runtimes, where you create a custom runtime and run whatever language you want. So with Rust it is usually a custom runtime, but don't think that you will spend a lot of time creating all those primitives yourself. I will explain later that you don't need to care a lot about this.
But if we look at the bigger picture: you have your code, and then you will probably need to provide some configuration options, referring maybe to some external resources or some configuration parameters. Then, because your Lambda function needs to react to different events, you will need to configure either an event source mapping or provide a way for your Lambda function to receive those events from external sources. And there are different features which allow you to version your Lambda functions, to use different deployment strategies, et cetera. But I'm not going to focus a lot on these aspects.
And of course, security is a very important thing, and you need to think about what resources are available to your Lambda functions, or on the other hand, who can run and execute your Lambda functions. This is clearly defined and protected by assigning appropriate IAM and Lambda permissions and using execution roles on the Lambda functions. And in the end,
once you write your code, there are two different options for how you can package the code and deploy it to the Lambda runtime. The most commonly used packaging is a zip archive: you can package everything as a zip archive and deploy it into the Lambda runtime. Or you can use container images. For example, if you have already built some pipelines around producing containers and container images, then you can probably adapt them to also produce images for Lambda functions. In terms of Rust, in the end you will have a binary, and you can package this binary either as a zip or as a container image. There are some differences, and I usually encourage you to start with zip, because it is easier to implement and to deploy, unless you have some specific requirements where you want and need to use container images.
Another very important thing, especially in terms of Rust, is how many resources in terms of hardware are available to your Lambda functions. If you look at the configuration options, you will see that there are not many options here. Essentially, the only thing you can configure is the amount of memory given to your Lambda function. But interestingly enough, the amount of memory you give to your Lambda function also determines how many CPU cycles the Lambda runtime will give to that function. The CPU capacity is allocated proportionally to the amount of memory. So the more memory you give to the function (the maximum is ten gigs here), the more CPU capacity your function will get.
And essentially, if you think in terms of virtual CPUs: if you give your function around 1,769 MB of memory, that translates into one full vCPU, and it scales up to six virtual CPUs. If you give the maximum amount of ten gigs to your Lambda function, that essentially means the function will be assigned six full virtual CPUs.
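As a rough sketch of that proportional allocation (the linear model here is my own approximation, anchored only on the two figures mentioned above: roughly 1,769 MB for one full vCPU, and six vCPUs at the 10 GB maximum):

```rust
// Rough estimate of the vCPU share a Lambda function gets,
// assuming CPU is allocated proportionally to configured memory:
// about 1,769 MB of memory corresponds to one full vCPU,
// and the share never exceeds six vCPUs.
fn approx_vcpus(memory_mb: u32) -> f64 {
    let vcpus = memory_mb as f64 / 1769.0;
    vcpus.min(6.0)
}

fn main() {
    println!("{:.2}", approx_vcpus(1769));  // one full vCPU
    println!("{:.2}", approx_vcpus(128));   // a small fraction of a vCPU
    println!("{:.2}", approx_vcpus(10240)); // close to the six-vCPU maximum
}
```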
And here comes another very important aspect: the pricing. The pricing for your Lambda functions consists of two things. The first one is the number of requests. So the more requests you issue to the Lambda function, the more you will pay. And the other dimension is the duration. Duration is calculated in gigabyte-seconds, so it is essentially the amount of memory you configure for this particular function multiplied by the time your function is running.
running. So essentially
that means the more memory you assign to a lambda function,
and the longer this function is running,
the more you will pay. But the
duration is metered in one milliseconds, so it is quite
granular. If you can optimize the
time of how long your
lambda function is running, it essentially means
that you can optimize also on the cost.
And another very important thing to realize here is that you will pay the same for a function which runs 100 milliseconds with two gigs of RAM configured as for a function which runs 200 milliseconds with one gig of RAM configured.
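A quick back-of-the-envelope check of that claim (only the duration component of the bill is modeled here; the per-request charge is left out for simplicity):

```rust
// Duration cost of one Lambda invocation, in gigabyte-seconds:
// configured memory (in GB) multiplied by billed run time (in seconds).
fn gb_seconds(memory_gb: f64, duration_ms: f64) -> f64 {
    memory_gb * duration_ms / 1000.0
}

fn main() {
    // 100 ms at 2 GB and 200 ms at 1 GB bill the same number
    // of gigabyte-seconds: 0.2 in both cases.
    println!("{}", gb_seconds(2.0, 100.0)); // 0.2
    println!("{}", gb_seconds(1.0, 200.0)); // 0.2
}
```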
But what if you can minimize both things, so you have less memory configured and your function runs for less time? It essentially means that you will pay less. And here Rust comes
into play, because again, it is super fast and super optimized, and it essentially allows you to run your functions faster. In many cases it can also run with less memory and less CPU, because again, it is quite optimized. And essentially that implies that your functions will cost you less. So let's talk
a little bit about Rust's performance and cost characteristics applied to serverless workloads and to Lambda functions. First of all, there are different tests run by different people. Probably one of the most well-known is this test of a hello world application. Essentially it is not a super real-world scenario, but it can give you a high-level view of different runtimes and the amount of time it takes to run these simple workloads. But again, in real life we probably won't be running hello world applications. There is a lot of data, and a lot of people have already compared different runtimes: Python versus Rust, like in this example, or Node.js and TypeScript versus Rust. And essentially, in many cases,
we observe that customers who adopt Rust, compared to any other runtimes on Lambda, see both: they see the performance gains, and, given how Lambda bills, those performance gains usually also convert into price and cost gains. And essentially, if you look at the cost efficiency, there are a lot of examples. For example, one of our customers saw that the amount of CPU and memory used in production dropped significantly. And essentially, yes, it is good in terms of performance, but with Lambda, as I explained, it also means you pay less.
Another great example: I have a customer who was running the majority of their workloads in Node.js, and essentially they were quite happy. But then they realized that there are certain types of workloads that require lower latency, and they started to optimize for latency and began evaluating different options. In the end, they rebuilt some of their applications in Rust, and they observed, first of all, lower memory usage for their functions. So the same function running on, for example, the Node runtime and on Rust required different memory configurations, and they could get the same or even better performance at lower memory configurations for their Lambda functions running in Rust, compared to the Node or Go runtimes. And essentially, because the memory is lower and the speed is better, they also observed some cost gains as well. So this is
a quite common pattern. You may ask: does it mean that I should rewrite all my Lambda functions in Rust immediately? Of course not. It will take time and effort to teach your engineers Rust and rewrite everything in Rust. And you probably don't even need to do that immediately.
I have another quite interesting customer example. The team was primarily using Node.js for their serverless application. And essentially their serverless application consisted of hundreds of different functions, but they identified two or three hot functions: almost every request coming into their system hit those two or three Lambda functions. And essentially they decided: yes, we want to optimize those functions, we want to minimize the latency, we want to make those functions the most performant ones.
And essentially they rewrote only those three functions in Rust, and they already observed quite a great impact in terms of performance. First, because those functions were hot, and as I mentioned, all requests coming into the system had to pass through those functions. And they also observed some cost gains, because again, those functions were the most frequently called ones, and reducing their cost was also a great benefit. But they happily stayed with Node.js for all other functions, and they still haven't rewritten the whole application in Rust. So usually this is the best
strategy: if you have already invested a lot in your serverless applications and you already run in different runtimes, identify the most important, most critical parts of your application, and probably rewrite those in Rust. And the benefit is quite obvious.
Another very important thing about Rust is sustainability. Again, because you use less compute, you use resources more efficiently. And serverless is already quite sustainable, because you don't have to run resources 24/7: serverless automatically scales up and down depending on your workloads. But applying Rust to serverless will make you even more sustainable. And Rust is heavily used under the hood by different parts of AWS. For example, if we look at Lambda under the hood, Lambda is using Firecracker VMs to isolate your workloads, and those Firecracker VMs are written completely in Rust. So even if you don't use Rust as your language to implement Lambda functions, under the hood Lambda will already use Rust, and it will already be beneficial in terms of sustainability.
But how can you start? Say you know Rust, and you want to create and run some functions in Rust. How can you do that? I personally recommend looking into a tool called Cargo Lambda. It is an extension for the cargo tool, and essentially it provides you with a set of workflows which allow you to bootstrap a new application, to test that application locally, to deploy it to your test AWS account, and to build that application. It supports cross-compilation, for example, so you can target different Lambda architectures, because in Lambda we have two different CPU options: you can run your functions on the x86 or the Arm architecture, and with Cargo Lambda you can easily build for both of those architectures.
Again, here you can find a link to Cargo Lambda. If you scan this QR code, it will navigate you to the site, where you will find quite good documentation on how to use this tool. Another thing which I personally like about Cargo Lambda is that it is agnostic to infrastructure-as-code tools. So you may use Cargo Lambda with different infrastructure tools: for example, if you use Terraform, or SAM, or the CDK, you can integrate them with Cargo Lambda.
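To give you an idea, a typical Cargo Lambda workflow looks roughly like this (the project name and sample payload here are placeholders):

```shell
# Bootstrap a new function project from a template
cargo lambda new my-function

# Build and serve the function locally, watching for changes
cargo lambda watch

# Invoke the locally running function with a sample payload
cargo lambda invoke my-function --data-ascii '{"command": "hello"}'

# Cross-compile a release binary for the Arm architecture
cargo lambda build --release --arm64

# Deploy the built function to your AWS account
cargo lambda deploy my-function
```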
Then, once you have bootstrapped your application: let's look at a sample application, and I will quickly guide you through what a typical small serverless application written in Rust may look like. This is a small application consisting of an API Gateway for receiving incoming HTTP requests, then a Lambda function where we implement some business logic, and, as a quite common use case, DynamoDB as the data storage. And we have quite good documentation on how to build and use Lambda functions with Rust. Again, you can follow the link on this page and get a quite comprehensive tutorial and guide on how to build and use Rust with Lambda functions.
Let's briefly look at the main things in this sample project. First of all, as you see, we import a number of libraries, and essentially there are two important ones. The first is lambda_http. Remember, I mentioned that if you want to use Rust on Lambda, you need to create your own custom runtime. A custom runtime is responsible for how your code interacts with Lambda, and there are certain specifications you need to follow: how you get the events from the Lambda runtime, and how you should pass the responses back. You could implement that yourself, but we have already done it for you. We have the Rust runtime for AWS Lambda: it's an open source project, and it encapsulates all the interactions between your code and the Lambda runtime. It also adds a lot of syntactic sugar. For example, if your Lambda function consumes events from API Gateway, there's another abstraction on top of the Lambda Rust runtime which encapsulates how those HTTP events come into the Lambda function and how you can interact with them. And the other important import here is for the DynamoDB interaction. For that we're using the AWS SDK for Rust, which essentially allows you to interact with DynamoDB or any other AWS services.
Then the most important function in your code will be the function handler. This handler function is the main place where you write and put your code. In this example, it gets the event, and this event abstracts the data coming from API Gateway or any other HTTP sources. In the end, your function should return a Result type, and if the function completes successfully, you should return Ok and some response from your Lambda function.
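For instance, a minimal handler using the lambda_http crate might look like this (a sketch close to the Cargo Lambda HTTP template; the response body text is a placeholder):

```rust
use lambda_http::{Body, Error, Request, Response};

// The handler receives an HTTP event from API Gateway, abstracted
// as a Request, and returns a Result: Ok with a Response on success.
async fn function_handler(_event: Request) -> Result<Response<Body>, Error> {
    let resp = Response::builder()
        .status(200)
        .header("content-type", "text/plain")
        .body("Hello from Rust on Lambda".into())
        .map_err(Box::new)?;
    Ok(resp)
}
```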
Then, within the handler, it's arbitrary Rust code. You can write some basic things directly in the handler function, but in reality, if you have more sophisticated business logic, you will put that logic in separate Rust functions and methods outside the handler function, and call them from there. Another thing you can do within the handler function is interact with the Lambda environment. For example, you can use println! or the tracing mechanism to emit logs to CloudWatch. You can interact with the local /tmp file system; it's an ephemeral file system available to the Lambda function during the lifecycle of the execution environment. And of course, you can make any network calls to external resources or to any other AWS services.
Another thing you should add to your Lambda function is a main function. Essentially, this is the entry point which will be called by the Lambda runtime at the start of the execution environment. The only thing you should be aware of here is that this is asynchronous code, and we use Tokio for that. So you must annotate the main function with tokio::main and make it an asynchronous function.
Another very important thing: because this main function is called during the initialization of your Lambda execution environment, you can declare some shared resources which will be reused by your handler function. Usually you define some external dependencies there, such as creating an SDK client or reading the configuration options for the function. Those configuration options and SDK clients will then be preserved in memory, and your handler function doesn't need to re-initialize and recreate all those shared resources on subsequent calls. This is a best practice for initializing common resources that will be reused in your handler function.
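Sketching that pattern (the TABLE_NAME environment variable is a hypothetical piece of configuration; a real function would typically also create an SDK client here in the same way):

```rust
use lambda_http::{run, service_fn, Body, Error, Request, Response};

async fn function_handler(table_name: &str, _event: Request) -> Result<Response<Body>, Error> {
    // The shared configuration is already in memory; nothing needs
    // to be re-initialized on subsequent invocations of this handler.
    let resp = Response::builder()
        .status(200)
        .body(format!("using table {table_name}").into())
        .map_err(Box::new)?;
    Ok(resp)
}

// The Tokio runtime drives the asynchronous main function.
#[tokio::main]
async fn main() -> Result<(), Error> {
    // Initialization code runs once, when the execution environment
    // starts. Read configuration (and create SDK clients) here,
    // not inside the handler.
    let table_name = std::env::var("TABLE_NAME")?;

    // The closure passes a reference to the shared value into
    // the handler on every invocation.
    run(service_fn(|event| function_handler(&table_name, event))).await
}
```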
And then, as I already mentioned, the Ok result in your handler function ends the execution. Usually you return a JSON document if it's a synchronous invocation. If it's asynchronous, you can still return something, but you can also return an empty Ok response, because for asynchronous invocations the Lambda runtime won't forward any data back to the caller. This is all quite good, and you can build your business logic with Lambda functions. But another quite interesting application for Rust in terms of Lambda and serverless is so-called Lambda extensions. First of all,
let me briefly describe what Lambda extensions are. An extension is an additional process which runs alongside your main function and your main code. It essentially allows you to capture some diagnostic information, run some instrumentation for the main code, fetch configuration settings from external parameter stores, or react to function activity, maybe imposing some additional security guardrails on your Lambda functions. In terms of extensions, there are two different types, but for today's talk we will focus more on external extensions. An external extension is a separate process that runs in the same execution environment. You can query different parameters from your Lambda runtime, or you can use this separate process for monitoring, observability, and security.
One important thing here is that this external extension still shares the same runtime environment, and essentially that means it shares resources like memory, CPU, and everything else. Another very important thing is that your extension doesn't have to be written in the same language as your main Lambda function. That essentially means you can create extensions in Rust and augment the behavior of Lambda functions written in Node.js, Python, Java, or any other language. But, importantly, those extensions can impact the performance of the main function, because the resources are shared. And here is where Rust comes into play, because again, it is the most performant language. Essentially, I now see a lot of extensions being created with Rust. So it is a perfect fit: if you want to augment the behavior of the function and you want to create an extension, Rust is a great option.
Here again, if you use Cargo Lambda, you can easily bootstrap a new Lambda extension. And essentially, here you see that there's another function which you need to implement: the events extension function, and it gets the Lambda events. Then, depending on what happens with your Lambda runtime, you can interact with the main code. You can react when the Lambda function is being invoked, so you can respond to those invoke events, or when the Lambda runtime is being shut down, so you can do some maintenance cleanup, emit some logs, or do some other things. And then you have the main function as well, and essentially this main function just passes this events extension function to the runtime and executes it.
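A minimal sketch of such an extension using the lambda-extension crate from the same Rust runtime project (the shape follows the crate's examples; the log lines are placeholders):

```rust
use lambda_extension::{service_fn, Error, LambdaEvent, NextEvent};

// The extension receives lifecycle events from the Lambda runtime
// and can react to invocations and to the environment shutting down.
async fn events_extension(event: LambdaEvent) -> Result<(), Error> {
    match event.next {
        NextEvent::Invoke(e) => {
            // React to the function being invoked.
            println!("function invoked, request id: {}", e.request_id);
        }
        NextEvent::Shutdown(e) => {
            // Do some maintenance cleanup before the environment goes away.
            println!("shutting down: {}", e.shutdown_reason);
        }
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // The main function passes the events extension function
    // to the extension runtime and executes it.
    lambda_extension::run(service_fn(events_extension)).await
}
```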
So here I encourage you to look at extensions; there are pretty good samples of what you can do with Lambda extensions. Probably one of the most well-known ones is the Lambda adapter, and it was created by one of my colleagues. This Lambda adapter allows you to take API Gateway HTTP events and transform them back into actual HTTP calls. So, for example, if you want to run in your Lambda functions some legacy applications written in traditional frameworks like Express.js, Flask, or PHP, you can take that application and you don't have to change anything. The application will still listen on a particular port, but you can't expose a port from a Lambda function directly. And this adapter, written in Rust in the form of an extension, essentially makes a call to the port your application is listening on inside the Lambda runtime, gets the response, and transforms that response back into the actual JSON payload understood by API Gateway, for example. So it's a nice example of what you can achieve with Rust and Lambda extensions.
So in summary, Rust is the best fit for Lambda functions, and it essentially allows you to get the best cost-performance characteristics. So I highly encourage you: if you want to use Rust with Lambda, go and experiment. With Rust, you can both build business logic in normal Lambda functions, and also use Rust to create different Lambda extensions to extend the capabilities of your Lambda functions, even if the business logic is still running and executing in other languages.
And there are a lot of tools. Not only Cargo Lambda: we also have support for Rust in SAM, so you can quite easily start using those tools and start building new projects with them. Thank you very much. Hopefully I encouraged you enough to start trying to build some Lambda functions with Rust. In case you have any questions and want to communicate, feel free to reach out and ping me on Twitter. With that, thank you very much, and I hope you enjoy this conference.