Transcript
Hi everyone, and welcome to "Thanks for all, Kubernetes Ingress API: long life to Gateway API." And I'd say yes, the title in the foreground describes the current situation pretty well. But no spoilers: let's start the Gateway API adventure instead.
Just a few words about me: my name is Yuri, and this is my little cave, where I usually write small tech articles, tech tips, my suggested readings, and my travel and sport adventures. So, if you want, take note of it, and you can reach me there for any reason.
Of course, also to talk about Gateway API. The very first question is: why, Yuri, are you about to tell us all these things? And the answer is right in front of us.
Since October 2023 our lovely Kubernetes Ingress API has been frozen. Yes, you heard that right: frozen. What does that mean? It means that no new features will be added to Ingress anymore, in favor of the new Gateway API standard. This is a screenshot from the official Kubernetes Ingress API documentation, which you can also check yourselves. My guess is that the word "frozen" has been carefully chosen to avoid riots from Kubernetes users, and I suppose it will then be softly deprecated. But this is an opinion of mine, so take it as such.
But anyway, don't panic. For those who don't know this quote yet, I don't really know what to say. Today I want to be a good guy and leave here, on the bottom right, the most awesome book I have ever read in my life, which is The Hitchhiker's Guide to the Galaxy, or, in the Italian version, Guida galattica per gli autostoppisti.
So don't despair: I'm here to give you as much information as possible about the rise of Gateway API. Today the adventure will cover, very quickly, what Ingress is, then an introduction to Gateway API; next we will see a demo of how to use Gateway API on Google Cloud Platform, and we will finish with a summary.

A brief history of Ingress. Before the Gateway API introduction I had to prepare a few slides about the Ingress API, mainly for two reasons. The first one is that to understand why Gateway API was born, we have to go through a brief history of Ingress. The second one is that we simply owe Ingress a brief history.
So, in 2015 Kubernetes introduced the Ingress API, the same year Kubernetes 1.0 was released, and in 2020 the Ingress API became stable. What was the aim? The aim of Ingress was to create an API object to solve a very specific and basic problem: the management of external HTTP access to our workloads running on Kubernetes. The Ingress API operates at layer 7, even if over time many layer 3/4 implementations have appeared, but layer 7 is the main purpose of Ingress. The other goal was portability, at least when Ingress is used in its very basic way; and the basic way, as we will see next, is usually not enough in real-world scenarios.

How does Ingress work?
Let's first of all imagine to
connect to a Kubernetes cluster and apply the
YAML template on the right. For those who don't really know it, this is the most common way to expose an HTTP service on a Kubernetes cluster. It basically describes, in a declarative and simple way, how we want to expose what. In this case we are trying to expose our service-one on a given hostname and path.
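As a reference, here is a minimal sketch of that kind of manifest; the hostname, service name and port are illustrative placeholders, not necessarily the exact values from the slide.

```yaml
# A minimal Ingress of the kind shown on the slide.
# Hostname, service name and port are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-one
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-one
            port:
              number: 80
```

We apply it and, surprisingly, nothing happens.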
So we missed something. And what we missed is mainly the ingress controller. The ingress controller is what really makes it happen. Putting it all together in this diagram, we can see how the ingress controller continuously watches for new Kubernetes Ingress resources like the one we saw before, registers the new configuration, and is then ready to serve the traffic coming from the outside. This external traffic is usually first ingested by an external load balancer, typically managed by a cloud platform, and then forwarded to our ingress controller. This traffic is also usually called north-south: the traffic that comes from the outside and goes down to the inside. In contrast, east-west traffic is the traffic of pod-to-pod communication, the internal one.
What's wrong with Ingress? Well, basically I'd say nothing. As we said before, it solved, and for many it is still solving, a very basic and specific problem, which is HTTP routing in a very basic way, while also giving ingress controller implementers the possibility to customize it. But using Ingress in real-world scenarios, with the increasing complexity of modern systems, the multi-tenancy needs of Kubernetes infrastructures, multi-cloud environments, and the real need for portability, left the poor Ingress with three problems. First, a lack of native advanced usage: for example, no path rewriting is available in a native way. Second, we ended up locked into a lot of provider-specific configuration, I mean the hundreds of ingress annotations available from every ingress controller implementation. And third, and most importantly, the lack of role orientation: the main problem here, as we will also see next, is that the Ingress configuration all resides in the same file, so for example developers have to deal with TLS configuration, hostname configuration and, of course, routing.

How we survived:
as said before, each ingress controller has taken its own path, defining its proprietary way to enrich the Ingress API features with ingress annotations. So we ended up enriching our Ingress YAML configuration with a lot of custom, sometimes unstable, configuration: the ingress annotations that, as said before, carried us into an ingress controller provider lock-in.
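To make that lock-in concrete, here is a sketch of that workaround era, using the NGINX Ingress Controller rewrite-target annotation as one example; other controllers need different, incompatible annotations for the same behavior, and all names here are illustrative.

```yaml
# Path rewriting done through a controller-specific annotation.
# The same behavior on another controller requires a completely different annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-one
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: service-one
            port:
              number: 80
```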
Adding to the big picture, I want to underline here how service meshes are another important component of the landscape, and I want to mention them because they will be useful in the next slides. The service mesh is an important brick in modern, complex applications, and the pioneer of meshes has of course been Istio. Thanks to service meshes we have been able to pull a lot of networking logic out of our applications and delegate all that stuff to the service mesh configuration, leaving only the business logic inside the application: for example, delegating to the service mesh TLS termination, mutual authentication, throttling, circuit breaking and so on.

Let's give a warm welcome to Gateway API.
Like an ID card, I'd like to see who it is. Gateway API is an open source project managed by the SIG Network community. Who is the SIG Network community, for those who don't know it? It is the Network Special Interest Group, officially recognized by Kubernetes itself as a contributor to a lot of Kubernetes subsystems, for example DNS, Ingress, network plugins, network policies, and now also Gateway API. Gateway API reached its first stable release on 31 October 2023, and you are probably thinking the same thing as me: was it a trick or a treat? We will see.
Again, a big round of applause for Gateway API. I also want to leave here an episode of the Kubernetes Podcast from Google, which you can of course find on Spotify. The guest was Rob Scott, a software engineer at Google and the lead of the SIG Network Gateway API project. There, Rob tells us a lot of interesting things about Gateway API. Let's dive a little deeper into Gateway API.
Gateway API is basically a collection of new custom resources: we can mention, for example, GatewayClass, Gateway, HTTPRoute, TCPRoute and many others. In the diagram on the right you can immediately see how Gateway API was designed to be more role oriented. The infrastructure provider takes care of the GatewayClass concept, which we will see next. The cluster operator instead takes care of the Gateway concept, again covered next; just a spoiler, the cluster operator can manage the TLS configuration and the available hostnames. Finally, the application developer cares only about the HTTPRoute, so they take care only of what they want to take care of: the routing of the application, and nothing else.
Let's start with some analogies with the old Ingress. It is a little bit weird to say "old Ingress", but I think we have to get used to it. If you're familiar with the Ingress API, you can think of Gateway API as a more expressive, next generation of that API. It is obviously not written in any Gateway API documentation, even if it is mentioned in the podcast episode I left before, but Gateway API was clearly inspired by Istio. Some concepts, just to recap. The first one: role oriented, which we just mentioned. The second one: truly portable in real-world scenarios. The third: expressive, because thanks to Gateway API we can now turn the majority of custom ingress annotations into native Gateway API configuration. And the fourth: extensible, since like Ingress, Gateway API is extensible in order not to force any limits on implementers by default.

This might sound
trivial, but it is essential to distinguish an API gateway from Gateway API. We are talking here about Gateway API. An API gateway is instead a general concept that describes anything exposing the capabilities of a backend service, for example AWS API Gateway. Gateway API, again, is an interface; we are talking about an interface. Many API gateways already implement Gateway API: we can mention, of course, Google Cloud Platform, which is covered in the next demo, Kong, WSO2, NGINX, Traefik and AWS. You can find the full implementation status here. Another analogy
with Ingress, which will serve as a primer for the hands-on part: we can see on the left our lovely Ingress, and on the right what it becomes. The Ingress is basically decomposed into little pieces, which are the custom resources of Gateway API. Starting from the top, ingress classes are exactly the same concept as the GatewayClass; as before, these are usually already provided by cloud providers. That was the violet arrow. Following the light blue arrow down, the hostname configuration lands in a Gateway resource defined by Gateway API. Still talking about the light blue arrow, in this way we give the cluster operator the power to decide which hosts are allowed, and of course to include the TLS configuration. We finally arrive at the application developer role: on the developer side we are finally in charge only of what matters, essentially the HTTP routing, which also includes path definitions, path rewriting, header manipulation and so on.
The last analogy I want to make: just as we dealt with ingress controllers, we now have to deal with gateway controllers, even if gateway controllers are not always managed by Kubernetes users. For example on GKE, which we will see next, the gateway controller is entirely managed by the cloud provider. Let's look at Gateway API with a bit more focus. These are the main custom resources that Gateway API adds, and the main three are GatewayClass, Gateway and HTTPRoute.
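Roughly, and with illustrative names, the three reference each other like this: a Gateway points at a GatewayClass, and an HTTPRoute attaches to that Gateway.

```yaml
# GatewayClass (infrastructure provider) <- Gateway (cluster operator) <- HTTPRoute (developer).
# Names are illustrative placeholders.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external-gateway
spec:
  gatewayClassName: example-gateway-class   # provided by the infrastructure provider
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: external-gateway                  # attaches this route to the Gateway above
```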
Hands-on time! Let's start the hands-on. As I said, it is based on Google Cloud Platform, and in this repository on my GitHub space I collected all the material, so you will be able to retrieve it later if you want. First of all I had to provision a GKE Kubernetes cluster; I provisioned it in Autopilot mode, which is pretty cool. Then I have to connect to the Kubernetes cluster, so first of all let's connect to it. Okay, we can now see some pods and also a node. Awesome. So, switching back to the slides.
What we will do now is basically two kinds of exercises. In the first one we will deploy a simple blue application and expose it using Ingress; then we will use Gateway API to do exactly the same thing, and finally see how Gateway API solves a lot of problems that Ingress could not. So let's start with the first exercise, and this is exactly what we will do: basic traffic routing using Ingress, flowing only to the blue application. First of all, let's return to the CLI and install the blue application. Let's check that everything is working as expected. Okay, awesome.
Going back to the slides: we now have to wait for the GKE cloud load balancer to be provisioned. Why? Because what I deployed is an Ingress resource configured as follows: it uses the GCE ingress class, the hostname is the custom one shown on the slide, and the path where our dummy blue application is exposed is this one.
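A sketch of what such an Ingress might look like; the hostname, service name and port are illustrative placeholders rather than the exact values from my repository.

```yaml
# Exercise 1: the blue app exposed with a plain Ingress on GKE.
# Hostname, service name and port are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue-ingress
  annotations:
    kubernetes.io/ingress.class: gce   # the GCE ingress class mentioned above
spec:
  rules:
  - host: blueapp.ingress.example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blue-app
            port:
              number: 3000
```

Now let's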
go to the Google Cloud Platform web console and see what is happening. First of all I have to go to Network Services, Load Balancing, and watch something start to move. Let's wait a couple of seconds... here we go, now it's ready. So the Ingress made it work. We can now go back to the slide of the first exercise: we saw the load balancer, and now we can copy its public IP. In order to reach our custom fake hostname, we have to put that public IP into our /etc/hosts file.
So, going back to the CLI: I can copy the public IP here and switch back to the CLI. I already have the record; I only have to swap in the right IP, which is this one. Okay, now going back to the slides, we can try to reach our hostname, which is exposed using the basic Ingress API. Let's try. It's not ready yet, because the load balancer is still becoming ready.
Let's wait a couple of seconds... the load balancer is ready now. Refreshing, we can see that our colored page, the blue 1.0 application, is ready, as you can see here. I wanted to show some of the headers arriving from the outside and the path that reaches our blue application. Cool, so we covered the basic Ingress exposure. One last thought: what are the limitations here? For example, how can I add some HTTP URL rewrites or header manipulations? The answer is, unfortunately, you can't, or rather you can't in a native way. We already talked in the previous slides about how we survived: we ended up in a provider-specific ingress controller implementation lock-in, so yeah, we had to use a lot of ingress controller annotations to reach this aim.
Moving on: the core exercise here is the Gateway API one. Let's now see how Gateway API solves a lot of Ingress API limitations, for example with richer HTTP rules, more scalability and more role orientation, in a real-world scenario. I'd like to install a second, green application, and then use another namespace, an infrastructure namespace, where I will put the Gateway resources under the cluster operator role. Then, putting the developer hat back on, I'll write some HTTP routing rules in order to expose our applications, both the blue one and the green one, and finally do some header manipulation, path rewriting, canary and blue-green deployments using only native Gateway API functionality. So let's start from the beginning
and let's install the green application
back at the CLI. Let's wait for readiness... it's already running. Going back to the slides: I then have to label the development namespace, and we will see why later. The next thing to do is to create the infrastructure namespace, where we will then put the Gateway definition. So let's go back to the CLI and create the infrastructure namespace. What we have to do now, back on the slides, is put on the cluster operator hat and install the Gateway resource. Note that the GatewayClass
is already provided by the cloud provider, in this case GKE.
In the meantime I can copy the command, and if I return to the CLI and check with a get on gatewayclass, I can see four different gateway classes; we will use the first one. These are all provided by the cloud provider.
So let's install our gateway.
What I applied here, let's look at it in Visual Studio Code, is exactly this file. It is a resource of kind Gateway, from the official Gateway API apiVersion. Its name is that one, in the infrastructure namespace created just before, and the gatewayClassName is one of those available on Google Cloud Platform. Here I decided to create a basic HTTP listener on port 80, and the hostname is a custom one, the new name that will be exposed through Gateway API. Here we can also tell the Gateway to accept routes only from specific namespaces; I decided to accept them from all namespaces, but we could be more granular.
Then I also want to mention the TLS configuration. Here, still in the Gateway configuration and still under the cluster operator role, we can define how the TLS part is configured: we can say that it involves a Secret with this name, and a pretty cool feature is that we can take this Secret from a different namespace, for example an infrastructure certificates namespace. This way we can delegate the management of certificates to another corporate team.
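Putting the pieces together, a rough sketch of such a Gateway might look like the following; the gateway class, hostname, secret and namespace names are illustrative assumptions rather than the exact values from my repository, and note that a cross-namespace certificateRef also requires a ReferenceGrant in the Secret's namespace.

```yaml
# Cluster operator hat: a Gateway in the infrastructure namespace.
# Class name, hostname, secret and namespaces are illustrative placeholders.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external-gateway
  namespace: infrastructure
spec:
  gatewayClassName: gke-l7-global-external-managed   # one of the classes provided by GKE
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "app.gatewayapi.example"
    allowedRoutes:
      namespaces:
        from: All              # could be restricted with Same or a Selector
  - name: https
    protocol: HTTPS
    port: 443
    hostname: "app.gatewayapi.example"
    tls:
      mode: Terminate
      certificateRefs:
      - name: app-tls-cert
        namespace: infrastructure-certs   # cross-namespace ref; needs a ReferenceGrant there
    allowedRoutes:
      namespaces:
        from: All
```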
That is pretty cool. I have already applied this with the cluster operator hat on, so let's see how it is going on the cluster side. Let's go back and describe the Gateway resource.
We can see here that something is moving: it was applied, and it seems it is already synced. So, going back to Google Cloud Platform, I expect to see that a new load balancer has been created. Awesome, it is this one, and it has no rules yet, because we deployed only the Gateway part. Let's go back to the slides: yeah, no rules created yet. Now, in order to create some rules, we have to put on the developer hat.
Let's install the first, very easy HTTPRoute resource. This is the command; let's go back to the CLI and then see what this easy route is. Looking at Visual Studio Code, under the developer hat folder I have the easy route, which is this one: kind HTTPRoute, with this name, in the development namespace. Note that the development and infrastructure namespaces are different. Here we are referring to the Gateway by name, the one in the infrastructure namespace. And the only allowed hostname, sorry, is this one, so if I try to use a different hostname it will be rejected. This is a basic rule: it matches everything on this hostname with this path and sends it all to our dummy version-one blue application on port 3000.
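A sketch of such a basic HTTPRoute, with illustrative names and hostname:

```yaml
# Developer hat: a basic HTTPRoute in the development namespace,
# attached to the Gateway that lives in the infrastructure namespace.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: development
spec:
  parentRefs:
  - name: external-gateway
    namespace: infrastructure
  hostnames:
  - "app.gatewayapi.example"   # must be allowed by the Gateway listener
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: blue-app
      port: 3000
```

Let's see how it is going on the HTTPRoute side.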
Here we go, oh cool: we can see it is already a success, which means the route was correctly attached to the Gateway in the infrastructure namespace. Returning to the cloud provider, we should now see that something changes: indeed, a new endpoint group is ready.
Returning to the slides: what we need now is to try to reach the new hostname, which is served by Gateway API; the previous one was served by Ingress, so they are different. Let's pick the new public IP from the new cloud load balancer, which is this one, go back to the CLI and update the second record as well. Cool. Back to the slides: we now expect to reach the newly exposed application. Well, the application is always the same, the blue one, but it is now exposed using Gateway API instead of Ingress. Let's try. Cool, it's exactly the same result we reached using Ingress: you can see here gatewayapi.net, and here it's ingress.net, so it's exactly the same result.
Let's add some more features coming from Gateway API, still wearing the developer hat. For example, let's add a custom header, again using only native Gateway API capabilities. The new HTTPRoute is described in this file; let's apply it and see what it contains. First of all let's apply it, and then, going back to Visual Studio Code, let's see what I did. What I did is described exactly here: on the right I kept the previous HTTPRoute. Note that I'm basically editing exactly the same route, and what changes is basically this part.
I added a filter which, as the name says, is applied before the request is forwarded to our blue application. The filter type is RequestHeaderModifier and, as the name says, with it I'm adding a new header whose name is my-header and whose value is foo. That is the only thing I changed.
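The changed rule looks roughly like this; the header name and value follow the demo, while the backend name is an illustrative placeholder.

```yaml
# Same HTTPRoute, same rule, with a RequestHeaderModifier filter added.
rules:
- matches:
  - path:
      type: PathPrefix
      value: /
  filters:
  - type: RequestHeaderModifier
    requestHeaderModifier:
      add:
      - name: my-header
        value: foo
  backendRefs:
  - name: blue-app
    port: 3000
```

Let's see how it is going on the HTTPRoute side: describing the route again, we can see that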
a few seconds ago there was a reconciliation of the Gateway resource triggered by the HTTPRoute. So I now expect my new custom header to reach our blue application. Let's go back to the slides, try the Gateway API URL again and see if something has changed. Wow, the header is already here: I can see that my custom header my-header is present and its value is exactly foo, as expected.
The next exercise here is path rewriting. Let's go back, apply it first, and then see what is happening. So let's apply the new route file with the header and path rewrite; it's configured, and now let's see what has changed on the HTTPRoute side. What changes? You can see it on the left; I left the previous version on the right. It has exactly the same name, so we are editing exactly the same route, and we are just adding a new filter, right after the previous RequestHeaderModifier: a URLRewrite, with which I could also rewrite the hostname, but that is not covered in this demo. Here I'm rewriting the path, and the new path is exactly this one.
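The filters section now looks roughly like this; the rewritten prefix is an illustrative placeholder.

```yaml
# Same rule as before, with a URLRewrite filter added after the header modifier.
filters:
- type: RequestHeaderModifier
  requestHeaderModifier:
    add:
    - name: my-header
      value: foo
- type: URLRewrite
  urlRewrite:
    path:
      type: ReplacePrefixMatch
      replacePrefixMatch: /new-path   # illustrative rewritten prefix
```

I have already applied it, so let's see how it is going.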
And 53 seconds ago the reconciliation was a success, so I expect it is already working as expected.
What I expect now is that if I try to reach the blue application, the path printed on the blue application side will not be slash anymore, but the new path. Let's try. Cool, the new path is already here: the blue application receives the new path, while from the outside everything stays the same, since we are still using the slash path.
Let's do a couple of pretty cool deployment examples: we will try a canary deploy and then a blue-green deploy. The canary deploy is described in the canary HTTPRoute YAML. What I'll try to do is split the traffic based on a custom header: the header is called traffic and its value has to be test. If this condition is matched, the canary configuration should route the traffic to the new application, and only in that case, because in all other cases, so by default, all traffic has to be served by the blue application. Let's introduce the canary. First let's apply it, back at the CLI, and then look at the canary route YAML in Visual Studio Code.
Cool. On the left is the new HTTPRoute. Again, the name is always the same, so we are editing the same route, and we can see that a new match has appeared here. What we are doing now is adding a new match that matches only traffic carrying this header, the header named traffic with the value test. In that case we are telling the route to send all the traffic to the green application; by default it continues to serve the blue application.
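Roughly, the rules now look like this sketch; the backend names are illustrative, while the header match follows the demo.

```yaml
# Canary by header: requests carrying "traffic: test" go to the green app,
# everything else keeps being served by the blue app.
rules:
- matches:
  - path:
      type: PathPrefix
      value: /
    headers:
    - name: traffic
      value: test
  backendRefs:
  - name: green-app
    port: 3000
- matches:
  - path:
      type: PathPrefix
      value: /
  backendRefs:
  - name: blue-app
    port: 3000
```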
What do we expect now? As we saw, if we set the traffic header with the test value, we expect to see the green application. First of all, let's check that by default everything behaves as before: the blue one is here. Now let's try to add the custom header using a browser header-manipulation extension. Let's try; we may have to wait a couple of seconds for the load balancer to sync. I think it's ready now. Let's set the header again and, yeah, we are now reaching the green application, and we keep reaching the green application. If we remove the header, we are back to the blue one. Cool, so we also did the canary deploy using Gateway API. And
the last one I'd like to show you is the blue green deploy
which is basically traffic splitting by percentage: we want 50% of the traffic to be served by the blue application and the other 50% by the green one. As before, let's apply it and see what has changed on the HTTPRoute side. The blue-green route is on the left; on the right we see the canary, which was the previous version. Again, it is always the same route being edited.
What we see on the left is that we are no longer using the two-match strategy; we are using a single match, which always matches the slash path, but we are telling the HTTPRoute to send 50% of the traffic to the blue service and the other 50% to the green one.
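The weighted rule looks roughly like this; the backend names are illustrative.

```yaml
# Blue/green by weight: a single match on "/" with a 50/50 traffic split.
rules:
- matches:
  - path:
      type: PathPrefix
      value: /
  backendRefs:
  - name: blue-app
    port: 3000
    weight: 50
  - name: green-app
    port: 3000
    weight: 50
```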
And in this way we can do a sort of gradual release, for example reducing the traffic over time... sorry, the contrary: this way we gradually introduce the green application, and we can then shift more and more of the traffic to the green one.
But for now let's try a 50/50 split, which I already applied. What do we expect? Let's see how it is going from the HTTPRoute side: let's describe it again. It seems already reconciled, yes, I'd say yes. So let's go back to Chrome and see if it is working as expected. Now, using no header, I expect to be sent to the green one with a 50% probability, and to the blue one with the other 50%. Let's try. Cool, it's already applied, and we can see that we reach the green one and the blue one with roughly 50% probability each. And yeah, this was the last example in this hands-on. Let's go back to the slides for the final remarks. I hope you enjoyed the hands-on on Google Cloud Platform.
Let's now see a quick summary. In summary, we had the chance to see how Gateway API evolved the Ingress API and which features have been introduced thanks to Gateway API. This is a very quick recap of what we said and tried: this diagram represents exactly where we are now thanks to Gateway API, and we saw exactly this in the hands-on, from the point of view of the two typical user personas, the cluster operator and the developer, that usually work on Kubernetes platforms. Just two last things, looking into the future
of Gateway API, of course. The first is a question: will Gateway API replace the Ingress API? The official answer is no, as you can see here from the official Gateway API documentation. But in my opinion I'd say the answer is yes. Again, it's a personal opinion, so take it as such.
And the second and last one is this: Gateway API for service mesh. Yes, this is why I wanted to mention Istio before, because Gateway API is working on a standard solution for service meshes. Pretty cool. And the really cool thing, just a spoiler, because this kind of feature is still in alpha status, is that using exactly the same HTTPRoute we saw before, we will be able to attach these awesome features to Kubernetes Services instead of Gateway resources. So we are enriching basic Kubernetes Services with a lot of features that come from Gateway API.
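As a rough sketch of that mesh pattern (the GAMMA initiative), still alpha at the time of this talk and with illustrative names, the same HTTPRoute shape simply points its parentRefs at a Service:

```yaml
# Mesh (GAMMA) pattern: an HTTPRoute attached to a Kubernetes Service
# instead of a Gateway. Names are illustrative placeholders.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: blue-mesh-route
  namespace: development
spec:
  parentRefs:
  - group: ""        # the core API group, meaning a Service
    kind: Service
    name: blue-app
    port: 3000
  rules:
  - backendRefs:
    - name: blue-app
      port: 3000
```

Thank you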
all again. And I leave here my blog
URL and most importantly the GitHub project where
I collected all the stuff from the hands on.
Yeah, you can try it on your
own, you can contact me whenever you want.
And thanks again.