Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, and welcome to my talk about the Kubernetes Gateway API. I will be happy to show you the differences between this new API and the Ingress API that we have used until now. Okay, let's start with some introductions.
And I am Gregorio Palama. I work as a DevOps and cloud engineer at Pinwave, and I am a community manager at the local GDG chapter in Pescara. You can find me on social networks, especially on X and on LinkedIn.
Okay, let's start by saying what we are talking about. We are talking about exposing our services outside the cluster. Well, note that in this slide and in all the other slides we refer to services as in the Service resource in Kubernetes, and that's done for simplicity.
We can say that until now we have used the Ingress API to expose our services outside the cluster. It's an API that has been generally available since Kubernetes 1.19 and is widely used. Let's see how it works using a diagram known as north-south, which depicts the way traffic goes from our client at the top, the north, to our service at the bottom, the south.
So let's say, for example, that our client performs a request to a DNS name, myapplication.com. Attached to that DNS name there is an Ingress called my-ingress, and that Ingress will route the request to our service, let's say it is called my-service. Well, what happens from there on is that my-service is backed by an Endpoints resource that collects the IPs and ports of our pods, which will receive the request and answer with a response that will be handed back to the client. The important thing is the way that the request is routed.
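A minimal Ingress manifest matching this flow might look like the following sketch (the host and names come from the example; the path rule is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapplication.com
    http:
      paths:
      # Route everything under / on myapplication.com to my-service
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```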
Okay, here we can see the YAML representing the example we've just seen, and I added, for example, a way to define a particular path that the Ingress can handle for us. Okay, before we go to see the limitations of the Ingress API, let's start by reading a small part of the official documentation about Ingress. Well, the official documentation says that Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster, and traffic routing is controlled by rules defined on the Ingress resource. Well, first of all, HTTP and HTTPS: the Ingress API was designed with these two protocols in mind, no other protocols.
Also, everything is defined in the same resource, and that's a limitation whose implications we will see as we go forward in this presentation. Let's see the personas that were in mind when the Ingress API was designed. Well, the only persona was the user, the Ingress owner. And that's a big limitation, because the user had the responsibility to manage the infrastructure configuration, the TLS configuration, and the routing configuration too. Over the years a persona was added, and it was added when the IngressClass resource was introduced. IngressClasses are resources that contain additional configuration, including the name of the controller that should implement the class. That's what the documentation says.
Okay, what does it mean? First of all, as I said, we have another persona. The user, who was the original owner of the Ingress resource, still has the responsibility of working on the routing configuration. The new personas, infrastructure provider and cluster operator, have the responsibility of working on the infrastructure configuration and on the TLS configuration. Well, this was a nice addition, but still, the first design was made with just one persona in mind. So this is an addition; the whole API has not gone through a process of redesign. So we still have other limitations, for example extensibility: it relies on annotations. We have different implementations of the Ingress API, and every implementation has to rely on annotations if it wants to add something that is not provided by the API itself.
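For instance, with the ingress-nginx controller a feature like path rewriting lives entirely in an annotation that only that controller understands; other implementations would ignore it or use their own annotations (a sketch, not the slide's example):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Understood only by ingress-nginx; not part of the Ingress API itself
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapplication.com
    http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```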
So you end up in a situation where some implementations created custom extensions. The easiest way is to rely on annotations, so we have different kinds of annotations, different ways of annotating, from implementation to implementation. And these custom extensions are valid for that single implementation only. And that's a strong limitation. Well, let's get to the Gateway API. The Gateway API has been generally available since version 1.0 of the API, which was released on October 31, 2023. So it's kind of new, if we want to call it new; it's new if we refer to its general availability. The API itself has been available, though not generally available, for two or three years. Well,
let's get to the details. A gateway describes anything that exposes capabilities of a backend service, while providing extra capabilities for traffic routing and manipulation, and sometimes more advanced features.
What does it mean? Let's start by saying that the first thing in the mind of whoever designed this API was the personas. Why is that? Well, we will see it shortly. Let's get to the personas, and we can see that each persona even has a name, because the designers wanted to give a strong meaning to the personas. The first persona is called Ian; he is an infrastructure provider. The second one is Chihiro, and he is a cluster operator. And the third one is Ana, an application developer.
Well, the first implication of this is that we don't have to think about each of these personas as a single person; we can think about them as teams. For example, we can have Ana working with her team as application developers, but we can also have another team under the same persona; we can think about two teams working on the routing that our traffic management system needs to implement. Okay,
what are their responsibilities? First of all, Ian, the infrastructure provider, has the responsibility of configuring the infrastructure, and he can work on multiple clusters. So we see that here the design is totally different, because for the first time we are thinking about working on the configuration of multiple clusters, and the whole API is designed around this kind of possibility.
Let's get to Chihiro, the cluster operator. He has the responsibility of configuring the cluster and the entry points: not only the entry points themselves, he can also configure TLS and so on. And finally, the application developer, Ana: well, she has the responsibility of configuring the routing rules. And that's something that is application related, because the application needs those routing rules, and the application guides the developer with its intrinsic needs. So the application developer is the only one who really knows how the routing configuration should be done.
So we can see a separation of concerns that the Ingress API doesn't provide, but the Gateway API does. And that's a great thing, because every team, every persona, can focus on their own responsibility and their own concern while doing their own work.
Okay, let's get to the resources. The first resource of the Gateway API is the GatewayClass. It defines a set of gateways that share a common configuration and behavior, and each GatewayClass will be handled by a single controller. The GatewayClass is a cluster-scoped resource, and it is somehow similar to the IngressClass for Ingresses.
The second resource is the Gateway itself. It describes how traffic can be translated to services within the cluster and defines a request for a specific load balancer config that implements the GatewayClass configuration and behavior contract. So the GatewayClass configures a behavior, and the Gateway defines a request for that specific behavior. It may be attached to one or more route references. So we have a way to define security boundaries that the Ingress API didn't provide, and it is possible to limit the routes that can attach to a gateway.
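A sketch of a Gateway that limits which routes may attach, using illustrative class and namespace names:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: gateway-api-ns1
spec:
  gatewayClassName: my-gateway-class
  listeners:
  - name: my-application
    protocol: HTTP
    port: 80
    allowedRoutes:
      # Only HTTPRoutes may attach, and only from the selected namespace
      kinds:
      - kind: HTTPRoute
      namespaces:
        from: Selector
        selector:
          matchLabels:
            kubernetes.io/metadata.name: gateway-api-ns2
```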
The third resource is the route. Routes define protocol-specific rules for mapping requests from a gateway to Kubernetes services. In version v1alpha2 there are four route resource types included, and a route can attach to one or more gateways. As I said, the possibility to attach can be limited, and filters and advanced rules can be applied on the route. Well, I said that there are four route types in the current version. They are these ones: HTTPRoute, TLSRoute, TCPRoute and UDPRoute, and GRPCRoute.
As we can see, we have HTTPRoute and TLSRoute, which map the HTTP and HTTPS capabilities of the Ingress API. But we also have TCPRoute and UDPRoute, covering two protocols that are not available in the Ingress API. And we also have GRPCRoute, for another protocol that was not available in the Ingress API. And we have a bunch of new functionalities with the Gateway API that we didn't have with the Ingress API.
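As a sketch of one of the non-HTTP types, a TCPRoute that forwards raw TCP traffic to a backend service might look like this (names and the listener are illustrative; TCPRoute still lives in the experimental v1alpha2 channel):

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: my-tcp-route
spec:
  parentRefs:
  - name: my-gateway
    sectionName: tcp-listener  # a listener defined with protocol: TCP
  rules:
  - backendRefs:
    - name: my-tcp-service
      port: 9000
```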
Okay, as we did for the Ingress API, let's get to the north-south picture, so we can understand how it works. Let's start from our client that sends a request to myapplication.com. In this situation we will have a gateway called, for example, my-gateway, where with the Ingress API we had my-ingress.
Well, the gateway implements a GatewayClass that defines all the rules and all the behaviors, and, for example, the controller that will be used to implement that gateway. And the gateway itself sends the request to the route; in this case I called the example route my-route. It will have its particular rules defined, for example, by one or two or more teams. And these rules will define how the route routes the request to the service or the services inside my cluster. Well, from there on it will be handled the same way we've seen with the Ingress API. What's different here is everything that comes before the Service resource. Okay, let's get to the examples.
First of all, in the gateway example we can see a gateway called my-gateway, in the namespace gateway-api-ns1. It implements a class that is called my-gateway-class, and, for example, we have a listener that is called my-application. And we can see here that we are limiting the routes that can attach to this gateway: in this case we said that only an HTTPRoute can attach to this gateway. And not only that: we also said that only a route in the gateway-api-ns2 namespace can attach to this gateway. Let's get to the route.
This is an HTTPRoute, so it can attach to the my-gateway gateway, and it is in the namespace gateway-api-ns2. So it respects everything that is defined for an allowed route: it is an HTTPRoute, and it is in the right namespace, so it can attach to that gateway. And it has some specifications, the parentRefs: a parent is defined by namespace, name, and kind Gateway. So we are defining which gateway it is going to attach to, and it is going to attach to the gateway called my-gateway in the namespace gateway-api-ns1. We also define some rules, and in this case we also define the service that is the backend for this route: my-service on port 80.
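Putting the pieces together, the HTTPRoute described here might be sketched as:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-route
  namespace: gateway-api-ns2
spec:
  parentRefs:
  # Which gateway this route attaches to
  - kind: Gateway
    name: my-gateway
    namespace: gateway-api-ns1
  rules:
  - backendRefs:
    - name: my-service
      port: 80
```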
Well, it's a very simple, easy example. Let's get to something maybe more complex, even if it is still a simple example: the same route, but with different rules. In this case we have, for example, two services, my-service and my-second-service. But here we are giving particular rules that define the weight: in this case, one time out of ten the route will route the request to my-second-service. So we have 10% of the traffic served by my-second-service, and all the other traffic served by my-service. That's easy: we have a load balancing strategy directly on the route, and it can be done directly by the application developers.
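The weighted variant might be sketched like this (the weights split traffic roughly 90/10):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-route
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - backendRefs:
    - name: my-service
      port: 80
      weight: 9  # about nine requests out of ten
    - name: my-second-service
      port: 80
      weight: 1  # about one request out of ten
```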
Let's get to more complex rules. In this case we have only one service that runs as the backend of this route, but we have different matching rules. In this case we have a header that should match, and the match should be exact: so we need a header that is called conference, with a value of conf42. And also, everything that is requested on the given path will be routed to the my-service service.
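Those matching rules might be sketched like this (the path value is hypothetical, since it is garbled in the recording):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-route
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /something  # hypothetical path
      headers:
      # The request must carry this exact header to match
      - type: Exact
        name: conference
        value: conf42
    backendRefs:
    - name: my-service
      port: 80
```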
Still another example: in this case we also have filters. Filters allow us to modify, for example, some fields of the request. In this case we are modifying the request by adding a header, and the header that we will be adding is my-header, and it will have a value of conf42.
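The filter might be sketched as:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-route
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - filters:
    # Add a header to the request before it reaches the backend
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: my-header
          value: conf42
    backendRefs:
    - name: my-service
      port: 80
```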
Okay, let's get to the demo. For the demo I will use a cluster that is served by Google Cloud Platform, so I will use Google Kubernetes Engine. And as we can see from the documentation, we have different gateway classes that are served by the platform itself, and we can choose one of these if we want to create a gateway. I will use this one, gke-l7-global-external-managed, because it gives me an external IP and a load balancer that will balance every request. Okay,
let's get to our example. First of all, let's create everything that we will need to see how this demo works. I will create two pods, my-pod-1 and my-pod-2; two services, my-service-1 and my-service-2; a gateway called my-gateway; and a route called my-route. Okay,
let's see what's inside the definition of my-gateway. We are using the gke-l7-global-external-managed class, and it's just a gateway with a listener on port 80 and protocol HTTP. Okay, let's get to the route. We can see here that the route is attached to the gateway called my-gateway, and we have some matching rules: we have configured two paths, /1, which will route the request to my-service-1, and /2, which will route the request to my-service-2. Okay, the first thing we can do here is get the address. Okay, we have that address, which is the public IP of our gateway. It is given by the platform, and we can use it to perform a request. We configured two paths, so, for example, we can request something like /1. Let's try it.
Okay, we can see my-pod-1. The two pods run the same container, and the container is a simple one that I created that just prints out the pod's name. So in this case we have my-pod-1. Let's try the other path, /2. Okay, we have my-pod-2. That's because I used the same container for two different pods, called my-pod-1 and my-pod-2.
But what happens if I perform a request to the root address? Well, I didn't configure any route for that path, so a fault filter abort will be received as the response. That's perfectly normal, because I didn't configure anything for that route. Okay, what we can do here is, for example, add another route; let's see what's inside of it. Okay, I created some tests, and this is the fourth test that I created, but it is the most interesting one.
For example, I configured this path, /3. Oh, I forgot to mention this filter; it is valid for the other paths too. It just says that everything that is in the matched path will be replaced, because the application inside the pod answers on the root path. So if we have, for example, /1, it will be translated to the plain slash before it is routed to the service and then to the pod. In this case it's the same, with /3 as the path, but 20% of the traffic will be sent to my-service-1 and 80% of the traffic will be sent to my-service-2. What we will expect to see here is that two out of ten requests will print my-pod-1, and the other ones my-pod-2. Okay,
let's apply it and
let's see what are the results.
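Reconstructed from the description, the fourth test route might look roughly like this (the exact manifest from the demo is not in the transcript, so the field values are assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-route-4
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /3
    filters:
    # Strip the matched prefix, because the app answers on the root path
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: my-service-1
      port: 80
      weight: 20  # about two requests out of ten
    - name: my-service-2
      port: 80
      weight: 80  # about eight requests out of ten
```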
Let's wait for it to be configured. We now also have our my-route-4, and what we can test is that path, /3. Okay, let's request the path /3. We can see my-pod-2 as the answer, and that is the most likely situation, since it's the 80% of the traffic. But let's try to send more requests, and we should see my-pod-1 too. And here we are. So 80% of the traffic is backed by my-service-2, and the other 20% of the traffic is backed by my-pod-1, so by the service my-service-1. The interesting thing here is that we have two routes, my-route and my-route-4. And that's interesting because we use the same gateway but two different routes. And that's something that we can think about in the real world.
Like two different teams working, for example, on two different microservices of the same application, that configured different routes: not just the /1, /2 and /3 paths, but a whole subset of paths. Each team created an HTTPRoute with the rules for its single microservice, and the two teams created two routes that are attached to the same gateway. So we have Chihiro, the cluster operator, who will configure the gateway, working on the configuration of, for example, the single DNS name, while each team working on a single microservice configures all the routes for that microservice. And that's a great step forward in the possibility of separation of concerns when it comes to the security level of the things that are served by the API.
Okay, let's get back to our slides; we've seen a live demo. And let's get to the next interesting feature of the Gateway API, the GAMMA initiative. GAMMA stands for Gateway API for Mesh Management and Administration, and basically it is an initiative that allows the API to work not only north-south but also east-west, so for the mesh, the communication between the services.
Okay, what do we have here? The GAMMA initiative introduced three (we can call them two, but they are basically three) changes. The first change is that a route can be associated directly with a service too. You've seen that the parentRef was a gateway until now, but we can also define it as a service: we can define the kind Service and indicate the name of the service as the parentRef of a route. The GAMMA initiative also said that a service has two facets, a front end and a back end. The front end is the combination of the cluster IP and the DNS name of the service. The back end is the collection of the endpoint IPs: every IP of the endpoints backing the service makes up the back end of the service.
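A GAMMA-style route attached to a Service rather than a Gateway might be sketched as follows (mesh support was still experimental at this point, so the details may vary by implementation):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: my-route
spec:
  parentRefs:
  # The parent is a Service (core API group), not a Gateway
  - group: ""
    kind: Service
    name: my-service
  rules:
  - backendRefs:
    - name: my-service
      port: 80
```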
Okay, let's see how the traffic is handled by looking at the east-west graph of our traffic. In this case the request is not made by an external client; the client, in this case, is another microservice, for example. So a pod asks for a service, and that's the particularity: the pod still performs a request to a service. But here we have three different scenarios that can happen. The first scenario is that the service has no routes attached to it, but it still has an endpoint. So the request is routed to one of the pod backends selected by the endpoint. That's the standard way a service and its endpoints work.
The second scenario is the one where we have a route that is attached to my-service, so we have my-route attached to my-service, but no path defined in my-route matches the one requested on my-service. In this case, the request will be aborted.
The third scenario is the one where one or more paths defined in my-route match the one requested on my-service. In this case, all the rules inside my-route will be executed, and the request will be sent to the endpoint backing the service that is defined as the backend inside my-route. In this case, we see that the request is handed to the endpoint: not to the service, but directly to the endpoint. That is the default behavior; we can also define that it should still route the request to a service. It's clear that it should then be a different service, and not my-service. Okay,
let's get back to the general discussion about the Gateway API and talk about the implementations. The API is a set of rules, of design guides. The actual implementations are many, and not every one of them is generally available. For example, the one that I used, Google Kubernetes Engine, is generally available; Kong is generally available. But we have some implementations that are not generally available and are in beta stage, for example Apache APISIX and Istio. At this link we can see a list of implementations with their maturity level.
Okay, let's get to the takeaways of this talk. First of all, the Ingress API will not be deprecated. It's a great API; it works, and it has worked until now. But it will not be extended or enhanced, and that's one of the things that brought us to the Gateway API. So we can still use it, but let's keep in mind that we will not see any new feature, any new extension. The Ingress API has an insufficient permission model. Okay, we have used it until now, and somehow we, or maybe someone, decided that there is a shared responsibility on the Ingress resource. The IngressClass helped, introducing another resource that is maintained by a different team instead of the application development team.
But still, the Ingress API has an insufficient permission model. The Gateway API permission model, with the three personas, is definitely better than the Ingress API one. And we have a new role-oriented, extensible, portable and very expressive API, the Gateway API. We can use it, and it has different good implementations, and all of them adhere to the standard. And they will always adhere to the standard, because the standard is definitely extensible. So what we are saying here is not that we should stop using the Ingress API because it is evil: it works great, it just has some limitations. And those limitations are not in the Gateway API. So that's why we should start using the Gateway API instead of the Ingress API.
Here you can find some useful links, starting from the website of the Gateway API with the official documentation, and some examples with the Gateway API and with the Ingress API. And you can also find a YouTube channel with some official videos from Google that talk about the Gateway API and give us some examples on specific topics.
Okay, we are at the end of this talk, and here you can find a feedback form that I kindly ask you to submit, because every good talk becomes good because of the feedback: it's not designed to be good from the start, it becomes good, and great. So, if you want, I will be very grateful if you send me some feedback. And that's the end of my talk. I thank you for following me, and I really hope the talk and the concepts I talked about will be useful for you. And let's hope that you will switch to the Gateway API very soon, because it is really great. Thank you, and bye.