Transcript
Welcome to this talk on gRPC and Python, where we will show you how to build fast, scalable APIs for your services. Let's go over some of the things that
we're going to cover in this talk. We're going to talk about the history of
APIs. We're going to talk about what gRPC is, why you should use it, and what its benefits are. We're going to talk about some of the disadvantages of gRPC.
We're going to have a quick demo where we show you how to
build a client and server in Python.
We're going to talk about some advanced features like interceptors.
And finally, we're going to show you how you can serve front ends using gRPC-Web. So
let's get started. So what
is an API? An API stands for
application programming interface, which is a set of definitions
and protocols for building and integrating application software.
What does that mean? Well, an API
is just a way for a program to let other programs
communicate with it. So in this example here,
we see program A has its API that it exposes to allow anything to communicate with it. And program B, if it wants to communicate with program A, will do so via the exposed APIs. So here, program B invokes the API on program A and asks it to do something. And program A can either respond with an OK or an error.
Now, APIs are the foundations of modern computing.
Almost any major system you see today has
a bunch of different components that talk to each other,
and they all do this via APIs. The modern web
is hugely based on a set of different disparate
services, all talking to each other via APIs.
And this is enabled by a variety of technologies.
So let's take a look at some of the technologies that are used to
build web APIs. Early on we had this technology called SOAP,
or simple object access protocol. Now, this is from the late 1990s and early 2000s. It was a fairly complicated and clunky piece of technology. It used XML as its definition language, and it required the use of these things called WSDLs that needed to be shared across the server and client. It was not great. It was clumsy and error prone, and as a result, in the late 2000s, we had this new technology come up called REST, or representational state transfer.
Now, REST is a way of modeling your API as a bunch of resources and exposing operations on those resources. If you've ever used an API, chances are it's been a REST API. REST was a huge step up from SOAP. It was simpler, easier to use, and easier to understand. It used JSON as its communication language, which is far simpler and far more lightweight than XML. Recently,
however, we've had some new technologies come up.
GraphQL is an API technology that was invented by Facebook. GraphQL allows you to model your API as a graph that your client can then traverse to ensure that it accesses the data it needs in a single request, thus avoiding the multiple round trips necessary with REST and SOAP. You'll find GraphQL used heavily in mobile and in scenarios where network bandwidth is a huge concern. Finally, we have these two technologies
here, Apache Thrift and gRPC, which are fairly similar. Thrift was built by Facebook, gRPC by Google, but they were both solving the same problem, which is having an API technology that was designed to be strongly typed and high performance. gRPC seems to have won out in terms of adoption. It is far more widely used than Thrift, and that is why we will be focusing on gRPC for the rest of this talk. So what is gRPC?
Well, gRPC is a cross-platform, open source, high-performance remote procedure call framework, which is a fancy way of saying gRPC is a way to invoke procedures on other services in a very fast and performant way. It uses a bunch of technologies under the hood in order to ensure that this is possible, and we will look into those later on in this talk. The way gRPC works is very simple and very similar to regular APIs.
The client makes a request to the server
and the server responds. Now let's take a look at some of the advantages
of gRPC. gRPC was built from the ground up to be high performance. It uses HTTP/2 natively, which has several advantages over HTTP/1. HTTP/2 has support for binary framing and compression. This ensures that the messages that are sent over the wire are a lot smaller. HTTP/2 also has support for multiplexing a bunch of calls over a single TCP connection. gRPC also has code generation.
When you share a protobuf file between a client and a server, the client can use the protobuf to generate a client stub. This means that the client doesn't really need to re-implement the API; it already has the interface defined and available, and you can just plug it in and start using the API easily.
gRPC is very strongly typed. Protobuf files allow you to specify not just the message, but also the types within the message. This leads to the elimination of a huge class of type errors that we've seen in previous API technologies such as REST. gRPC also has native support for streaming. It has unary as well as bidirectional (duplex) streaming support, which means the client and the server can establish a connection and basically just pass messages back and forth.
Finally, gRPC has support for deadlines or timeouts, which means the client can let the server know how long it is willing to wait for a response before the server can close the connection on its own. So let's say as a client I specify a timeout of 1 second. If the server doesn't respond within a second, the server can close the connection.
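To make that concrete, here is a minimal sketch of a call with a deadline in Python. The stub and message names are borrowed from the demo later in this talk and are assumptions about the generated module names, not code from the talk itself.

```python
import grpc

# Assumed names: these match the Greeter demo shown later in this talk,
# but the exact generated module names depend on your protoc invocation.
from greeting_pb2 import GreetingRequest
from greeting_pb2_grpc import GreeterStub

channel = grpc.insecure_channel("localhost:50051")
stub = GreeterStub(channel)

try:
    # The timeout (in seconds) becomes the deadline for this call.
    response = stub.greet(GreetingRequest(name="Ada"), timeout=1.0)
    print(response.greeting)
except grpc.RpcError as err:
    # If the server doesn't answer within a second, the call fails with
    # DEADLINE_EXCEEDED instead of hanging forever.
    print(err.code())
```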
Now let's take a look at some of the disadvantages of gRPC. Well, gRPC is a newer technology. It still has some bugs, but the gRPC team is really good at patching them quickly. gRPC doesn't really have native support for browsers. If I build a front-end app, I cannot directly talk to a gRPC API like I can with a REST API.
There are some new technologies, such as gRPC-Web or grpc-gateway, that do make it possible for browsers to talk to gRPC APIs. gRPC also requires proto files to be shared between the client and server, as well as client stubs to be generated ahead of time. Now this means as a client, I cannot just go about invoking any gRPC API that's available out there. I need to have access to the proto file, I need to generate the client stub, and only then can I invoke that API. gRPC is also a little more complicated than REST. It's not horribly hard to use, but it's definitely not as easy as REST, where I can just fire a client up and call any API that's available to me.
Finally, gRPC is not as widely adopted as REST, which means if you build a gRPC API, all of your users might not know how to invoke and call that API. Well, now that we've
seen these, what are some of the use cases for
gRPC? Well, gRPC is really good for service-to-service communication. It's really good if you have a services architecture where you have a bunch of different services that are constantly making calls to each other. gRPC lets you make those calls extremely efficiently and extremely quickly, and because you are already able to share the protobuf files, it makes it very rare that you'll get malformed API requests. gRPC is also very useful for point-to-point real-time communication and high-performance, low-latency applications. Anywhere that you need raw speed, gRPC is usually going to outperform an equivalent REST API.
Okay, now that we have all of this information available to us,
let's actually take a look at the demo to implement a gRPC server and client.
Okay, so welcome to the demo.
So on the right you'll see I have a sample
project that I have created for the purposes of
this demo. It uses Poetry, but you could use any build tool that you want. There are a few important things to focus on, and I'll walk you through this project. So the first thing to notice is this folder called protos, within which I have defined this file called greeting.proto, which is a protobuf file.
And what protobuf is, is it's basically an interface definition language. And what that means is it allows me to define my API in a language-agnostic manner. So here you'll see what I've done is I've defined a service called Greeter that exposes a method called greet, which takes in a greeting request containing a string called name and returns a greeting response containing a string called greeting. This is the entirety of the API for my service. I have one gRPC service that exposes one method.
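Based on that description, the proto file looks roughly like this. The file name and field numbers are assumptions, not copied verbatim from the project.

```protobuf
// protos/greeting.proto (assumed file name)
syntax = "proto3";

// The request carries a single string field called name.
message GreetingRequest {
  string name = 1;
}

// The response carries a single string field called greeting.
message GreetingResponse {
  string greeting = 1;
}

// One service, Greeter, exposing one unary method, greet.
service Greeter {
  rpc greet(GreetingRequest) returns (GreetingResponse);
}
```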
And what gRPC is going to do under the hood is it's going to take this protobuf file and generate code for me in whatever language I choose. And I can plug that code into either my server or my client, and it allows them to not worry too much about the interface of the API, because they already have that provided to them. Now, the way that's done is through this tool called protoc. protoc is a tool that ships with gRPC. You can use it to compile this protobuf file into code for a particular language, and I've done that here.
If you look in the grpc types folder, it's created these three different files for me. I only care about these two. This is the actual definition of the protobuf in Python. We don't really need to care too much about this file because it's not really meant for human consumption. And this is the other one, which is the gRPC file, again, not really meant for human consumption. These are more sort of like stubs that are meant to be used in the code we actually write. The way you do this is by installing a set of libraries, the first of which is called grpcio-tools, and the other one is mypy-protobuf. You need to install these two libraries in your project. I'm not going to do that here just because they take a really long time to install and I already have them installed. Once that's done, you can run this protoc command here. I'm not going to get into the details of protoc, but protoc is basically a way to tell gRPC to take my protobuf file that you see here and generate these files for me.
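For reference, the install and protoc steps described here look roughly like the following. The proto path and the output directory (grpc_types) are assumptions based on the project layout described above.

```bash
# Install the codegen dependencies into the project (assumed Poetry workflow).
poetry add grpcio-tools mypy-protobuf

# Compile the proto into Python code plus type stubs; paths are assumptions.
poetry run python -m grpc_tools.protoc \
    --proto_path=protos \
    --python_out=grpc_types \
    --grpc_python_out=grpc_types \
    --mypy_out=grpc_types \
    protos/greeting.proto
```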
Okay, now that we have our code generated and available
for us, we can go ahead and start looking at the implementation of
our server. Now if
you notice, in our protobuf file we had one method called greet. And so any server that we build has to implement this method here, and that's exactly what we're doing. You'll notice I've created this folder called services just because I like to create a different file for the implementation of each method, but that is totally up to you. It's just a convention that I follow. But within my services source folder I have this file called greeter.py. And what you'll see I'm doing here is I'm importing gRPC, but from the actual compiled files that we have here, I am importing the request definition, the response definition, and the servicer definition. And that comes from this file here as well as this file here.
Once we have that, I'm going to actually go ahead and implement that greet method.
And you see here I have the definition for
the greet method. It takes in a greeting request and
it returns a greeting response. And the actual
implementation is fairly simple. For this demo I'm just taking the name off the gRPC request and appending it to the string "Hello". So whatever I pass in as the name, I'm just going to get "Hello, name" back as a response. And this is basically the implementation
of everything that you see here.
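Here is a minimal sketch of what that servicer looks like. The module paths and the generated class names follow the usual grpcio code-generation conventions and are assumptions, not copied from the repository.

```python
# services/greeter.py (assumed path) -- a minimal sketch of the servicer.
import grpc

from grpc_types.greeting_pb2 import GreetingRequest, GreetingResponse
from grpc_types.greeting_pb2_grpc import GreeterServicer


class Greeter(GreeterServicer):
    def greet(self, request: GreetingRequest,
              context: grpc.ServicerContext) -> GreetingResponse:
        # Take the name off the request and prepend "Hello".
        return GreetingResponse(greeting=f"Hello, {request.name}")
```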
Now that we have this implemented over here, we need some sort of gRPC server to accept and serve clients, and that is done by this file called server.py, which again is super simple: import gRPC. There's one important thing to note here. Again, from the compiled code that we have, we need to import this function called add greeter servicer to server. And you'll notice that that comes from here. You notice this add greeter servicer to server. What this function does is it takes that servicer that we've just defined here and it's going to add it to the gRPC
server. And for those of you that come from Django
land,
this is analogous to adding a URL path to your
Django server. So now that
we have that in, we also need to import the actual
implementation of the greeter and
then the server defines a single function called serve. What this does is I'm going to start a gRPC server in a thread pool, and then I call this function called add greeter servicer to server and I pass this greeter in. What that's going to do is it's going to take the server that I just created here and add this method onto that server, so that the server that I've just created can actually serve clients requesting the greet method. I'm going to specify the port, I'm going to start the server, and it's just going to block and run.
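Put together, the serve function described here looks roughly like this. The import paths and the add_GreeterServicer_to_server name follow standard generated-code conventions and are assumptions.

```python
# server.py -- a minimal sketch of the serve() function described above.
from concurrent import futures

import grpc

from grpc_types.greeting_pb2_grpc import add_GreeterServicer_to_server
from services.greeter import Greeter


def serve() -> None:
    # Start a gRPC server backed by a thread pool.
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    # Register the Greeter implementation, a bit like adding a URL path in Django.
    add_GreeterServicer_to_server(Greeter(), server)
    # Listen on the demo port, start, then block until shutdown.
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```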
And now because I use poetry I can basically
just define a simple script, a simple command
to run the server. And I am actually going to do
that right now. I am actually going to run the server.
And as you can see here,
my server has started and is running. Now that
we have the server up and running,
let's take a look at defining our gRPC client. Again, it's fairly simple. I have this file called client.py. Again, very simple. From the compiled code I'm going to again import the greeting request from this file and the greeter stub from this file here. The greeter stub is basically just a way to let the client know what methods it can call. You can see it has this self.greet, which is a unary gRPC call. So this lets the client know that it can call the greet method, which is a unary method.
So if I go back into my client.py here, I've imported the greeter stub and the greeting request, and I have a simple run method here, which asks, "What is your name?" And then what I'm doing here is I'm creating an insecure channel on gRPC on localhost:50051. That's the port on which my server, which you can see here, is running: port 50051. Once that's done, I am going to create an instance of this greeter stub that I've just imported and pass the channel into that. Once that's done, I'm actually going to create a request that I wish to send over. And my request is basically just an instance of this message that I've created here. It takes one input called name, so I'm going to pass that in, which is "What is your name?" And then I'm just going to call the greet method on that stub and print the response on the screen. Very simple.
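The client described here boils down to roughly the following sketch; again, the module names are assumptions based on the generated code.

```python
# client.py -- a minimal sketch of the client described above.
import grpc

from grpc_types.greeting_pb2 import GreetingRequest
from grpc_types.greeting_pb2_grpc import GreeterStub


def run() -> None:
    name = input("What is your name? ")
    # Open a plaintext channel to the server running on port 50051.
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = GreeterStub(channel)
        # Build the request message and make the unary greet call.
        response = stub.greet(GreetingRequest(name=name))
        print(response.greeting)


if __name__ == "__main__":
    run()
```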
Let's call it and see if it works.
So, in my Python demo project, I'm going to do poetry run, and I think it's called run grpc client. Let's run this method here. What is your name? I'm going to type my name in. Boom. It made the call to the server. I got a response. There we have it, folks. That's gRPC. It's very simple. This project is available on GitHub if you'd like to take a further look at how this actually works.
But that's it for the demo. Thank you.
Okay, so now let's look at some advanced features that gRPC provides. We're going to start by looking at interceptors. So what are interceptors? Interceptors are a gRPC concept that allows apps to interact with incoming or outgoing gRPC calls. You could think of this as middleware, but for gRPC. So just like with middleware, interceptors allow you to intercept the incoming message. They allow you to verify certain aspects or certain metadata on the message, and they allow you to either let the message through or deny the message. However, unlike middleware, which you'll find on a lot of RESTful services, gRPC interceptors can be on both the client as well as the server.
as well as the server. So for
example, you could have a client interceptor that adds
authorization credentials onto every message. And you could
have a server interceptor that checks and validates whether those
credentials are okay. Some common use
cases for interceptors are for logging,
monitoring, authentication,
validation, as well as adding tracing to
your services.
So this is sort of how interceptors
fit into the regular request-response cycle. The client would make a request, which would then be picked up by the client interceptor, which could potentially add some metadata or modify the request in some way. The request then gets sent over the wire to a server interceptor, which again can check and validate certain aspects of the message before passing it to the server.
So now let's take a look at some kinds of interceptors that gRPC offers. gRPC offers four basic types of interceptor.
You first have the client unary interceptor,
which is an interceptor that lives on the client,
that is used when you're making unary calls to the
server. So this is where you could
add some client-side metadata onto the message. For example, let's say you're making a call from a device, a gRPC call from a device to a server. You could add some metadata, such as the device type, the OS installed on the device, et cetera,
along with the contents of the message and then send it over the wire.
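As an illustration (not code from the talk), a client unary interceptor that attaches metadata like a device type could look roughly like this in Python; every name here is hypothetical.

```python
import collections

import grpc


# grpc.ClientCallDetails is abstract, so a small namedtuple is a common way
# to build a modified copy of the call details with extra metadata attached.
class _ClientCallDetails(
        collections.namedtuple(
            "_ClientCallDetails",
            ("method", "timeout", "metadata", "credentials")),
        grpc.ClientCallDetails):
    pass


class MetadataClientInterceptor(grpc.UnaryUnaryClientInterceptor):
    def intercept_unary_unary(self, continuation, client_call_details, request):
        # Copy any existing metadata and append our own key/value pair.
        metadata = list(client_call_details.metadata or [])
        metadata.append(("device-type", "android"))
        new_details = _ClientCallDetails(
            client_call_details.method,
            client_call_details.timeout,
            metadata,
            client_call_details.credentials)
        # Hand the modified call details to the next step in the chain.
        return continuation(new_details, request)


# Wrap a channel so every unary call goes through the interceptor.
channel = grpc.intercept_channel(
    grpc.insecure_channel("localhost:50051"), MetadataClientInterceptor())
```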
You then have a server unary interceptor,
which is an interceptor that lives on the server, and again
that is used for unary messages.
An example of this is an interceptor that checks whether the message has authentication metadata. So whether the message has
some sort of username or password or some sort of token that is valid,
the server interceptor can check for the validity of that authentication
data that you've provided and decide whether to allow the
message to continue or terminate the cycle.
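A server unary interceptor doing that kind of authentication check could look roughly like this; the metadata key and token are purely illustrative.

```python
import grpc


class AuthServerInterceptor(grpc.ServerInterceptor):
    def __init__(self, expected_token: str):
        self._expected_token = expected_token

        def deny(request, context):
            # Terminate the cycle with an UNAUTHENTICATED error.
            context.abort(grpc.StatusCode.UNAUTHENTICATED, "Invalid token")

        # A handler that rejects the call instead of passing it to the service.
        self._deny_handler = grpc.unary_unary_rpc_method_handler(deny)

    def intercept_service(self, continuation, handler_call_details):
        metadata = dict(handler_call_details.invocation_metadata)
        if metadata.get("authorization") == self._expected_token:
            # Token is valid: let the call continue to the real handler.
            return continuation(handler_call_details)
        return self._deny_handler


# Interceptors are passed to grpc.server() when the server is created, e.g.:
# grpc.server(futures.ThreadPoolExecutor(), interceptors=[AuthServerInterceptor("secret")])
```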
Then we have the client stream interceptor, which is the streaming version of the client unary interceptor, used when you have streaming messages between the client and server. An example of this would be if your client is sending packets of video over a client stream, the client interceptor could add a checksum for each packet that's sent over that stream. And then finally we have the server stream interceptor, which is the streaming version of the server unary interceptor, used for streaming messages. Again, this lives on the server. An example of this is it could check the value of the checksum that has been sent by the client against the actual message and see whether it's valid. If not, it could decide to ask the client to resend that packet of data. All right,
that's it for interceptors. Now let's take a look at how we can serve front ends using gRPC.
So gRPC was designed to be a machine-to-machine protocol, which means browsers natively do not support gRPC. In order to get over this limitation, something called gRPC-Web was developed. Now, gRPC-Web allows browser apps to call gRPC services using the gRPC-Web client and protobuf. So gRPC-Web is similar to normal gRPC. However, it is a slightly different protocol. It supports HTTP/1.1 as well, making it compatible with most modern browsers. Now, the way this works is it requires you to generate a gRPC-Web client from a proto file, similar to what you would do with a regular gRPC client. That client now lives on the web app, and the browser-based web app will make gRPC calls through that client. Once this is done, you need some sort of translation layer that's present at the server, and this is usually provided by some sort of proxy, such as Envoy, though there are other things that you could use as well. The advantage of this is it allows browser apps to benefit from the high performance and low network usage of the binary messages that gRPC uses.
All right, I hope you enjoyed this talk. Thank you.