Abstract
SD-WAN is increasingly being used to stitch network connectivity between enterprise locations and the applications running there. In many cases, Kubernetes provides the fine-grained management for the microservices that compose those applications.
The ability to influence the SD-WAN based on microservice metadata adds even greater power to the microservice application model and the SD-WAN. In hybrid/multicloud application deployments, optimizing service communication between remote locations is highly desirable and SD-WAN application routing capability is a nice addition to the operations toolbox.
This talk will show how an SD-WAN controller, using Kubernetes configuration and state, can adapt the network for optimal application performance. The talk will present a few use cases showing what is possible today via custom tooling, as well as go through possible future approaches.
Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello and welcome to this talk about how Kubernetes can drive SD-WAN. My name is Alberto Rodriguez Natal, and I'm a technical leader at the enterprise networking CTO team at Cisco. My email is on the screen right now, so feel free to send me a message if you have any questions as you watch the presentation. So why are we talking about Kubernetes and SD-WAN today? What is SD-WAN?
You may or may not have heard the concept of SD-WAN before, but I'm sure that you have heard about wide area networks, or WAN for short. Those are the kind of networks that interconnect geographically distributed locations. Think, for instance, of an enterprise that may have different branches distributed geographically, may have some headquarters, and may need to connect those branches to the cloud, among themselves, with remote workers, with IoT devices and so on. So this big network that spans potentially globally is what has usually been called the WAN. So what is SD-WAN now?
Software defined WAN? Well, software defined WAN is a solution, or a set of technologies, that allows you to build a WAN in a software defined manner, following the pattern of software defined networking, or SDN. I'm trying to summarize SD-WAN with a single slide here. In short, we can say that SD-WAN is all about connecting any location over any transport that happens to be available, providing any service you want to provide, and supporting any deployment model that suits your enterprise, right? So for instance, you have a branch that you want to connect to the cloud. You may want to use the Internet and an MPLS line, and you want to apply some security to that connection. You can apply and enable all of this through an SD-WAN solution.
Now, what is the relationship with Kubernetes? So let's consider a typical SD-WAN use case where you have an SD-WAN connection that connects some users in a branch to some applications in the cloud. This figure is trying to represent that model, where you have two SD-WAN edges, each of them closer to the users or closer to the application, and you have an SD-WAN controller that is handling these SD-WAN edges in order to establish an SD-WAN fabric, or an SD-WAN tunnel, that connects these two geographically separated locations. Right?
So why do we care about Kubernetes in this case? Because that app that shows on the right may be a Kubernetes app, right? And when you have Kubernetes, typically what you have is not a single big monolithic app; typically what you have is a collection of services. So even though you could easily apply optimizations via SD-WAN on the traffic that goes between the branch and the data center or the cloud, when you have a multitude of services, you may want to apply different optimizations and different policies to each of those services, so that you don't look at the traffic as a single green line, but rather as a collection of different lines that connect to each of these services. So one thing that we are after here is making SD-WAN aware of Kubernetes applications, of microservices applications, right? So that SD-WAN can apply, per service, per microservice, a specific optimization. So we end up with a more colorful deployment, as you see here.
The question is how do we achieve that? What can we do to enable SD-WAN to be aware of Kubernetes applications? This is an interesting problem, because traditionally SD-WAN and Kubernetes have been like ships in the night. They are deployed together, or close by; you may have an SD-WAN terminating right in front of a Kubernetes application, but neither has any idea that the other is right next to it. However, the fact that today these two are agnostic to each other doesn't mean that they need to be. So there is a big opportunity here to make SD-WAN aware of cloud native applications, and then enable this traffic optimization, security and so on, both in the overlay and the underlay of the SD-WAN, so that you get all the benefits of an SD-WAN deployment for your cloud native applications.
So now, how do we do this? One way to do it is with something we have called the Cloud Native SD-WAN project. Cloud Native SD-WAN, or CNWAN for short, is an open source project that takes care of this mapping between applications on the Kubernetes side and SD-WAN policies on the SD-WAN side. This is just one way you can make the SD-WAN aware of Kubernetes. You may have SD-WAN solutions that have some of this already, and they are becoming more and more aware of Kubernetes. If your SD-WAN solution doesn't have this kind of awareness, Cloud Native SD-WAN is a good way to achieve it. The end goal is to make the network aware of these application requirements, so that it can dynamically optimize application performance.
So let's discuss a bit how CNWAN works, and let's use it to understand how SD-WAN can become aware of cloud native applications. Let's start from the same picture we had before. I'm leaving some space because this slide is going to get populated soon enough. So we have SD-WAN, we have Kubernetes. Who else will be here? Well, NetOps and DevOps. These are the people in charge of either the SD-WAN, on the NetOps side, or the Kubernetes infrastructure and the applications, on the DevOps side. So these two teams are doing their best to make sure that the performance is stellar on both the network and the application infrastructure. How can we help these two teams integrate better, so that the network plus the application perform much better than they would on their own?
So let's try to follow the flow of a cloud native application that is connected to an SD-WAN network, right? It all starts with DevOps deploying a service on Kubernetes, typically with a YAML file that provides certain metadata that would be useful for the SD-WAN. Things like: this is a service that I'm exposing via a LoadBalancer type, for instance, and here are the labels that I'm applying, or maybe even some annotations that I'm adding to the service in the YAML configuration. That gives you some information about what the application is about, information that later on the NetOps team can use to map into SD-WAN policies.
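As a concrete illustration, a manifest along these lines carries the kind of metadata being described. This is only a sketch: the `example.com/traffic-profile` annotation key is a hypothetical name, standing in for whatever convention the DevOps and NetOps teams agree on.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: video
  labels:
    app: conferencing            # ordinary labels the SD-WAN side can harvest
  annotations:
    # Hypothetical key: any convention agreed between the two teams works.
    example.com/traffic-profile: real-time
spec:
  type: LoadBalancer             # exposed outside the cluster
  selector:
    app: conferencing
    component: video
  ports:
    - port: 8443
      protocol: TCP
```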
So how do we extract this application information and metadata from Kubernetes? Well, the way CNWAN does it is with something we call the CNWAN operator, which monitors the services that have been deployed in the Kubernetes cluster and are being exposed outside, and harvests information about these services, extracting the labels and annotations and so on, so that they can be made available to the SD-WAN. How do we make them available? What the CNWAN operator does is introduce that information into a service registry, so that you can go to the service registry and find information about these applications.
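A minimal sketch of that harvesting loop, assuming client-go, with a hypothetical `registryPublish` function standing in for a real service registry client (the actual operator watches for changes rather than polling, and is configurable about what it harvests):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// registryPublish is a stand-in for writing to a service registry.
func registryPublish(name, addr string, meta map[string]string) {
	fmt.Printf("registry <- %s @ %s %v\n", name, addr, meta)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		svcs, err := client.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			for _, svc := range svcs.Items {
				// Only services exposed outside the cluster matter to the SD-WAN.
				if svc.Spec.Type != corev1.ServiceTypeLoadBalancer ||
					len(svc.Status.LoadBalancer.Ingress) == 0 {
					continue
				}
				registryPublish(svc.Name, svc.Status.LoadBalancer.Ingress[0].IP, svc.Annotations)
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```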
We have a component that we call the CNWAN reader, which reads this service registry to find the information that the CNWAN operator has put there. And now, since you need to adapt this information to specific SD-WAN policies, we have a CNWAN adapter that is specific to the SD-WAN solution, right? It speaks the particular APIs that the SD-WAN solution has, and these APIs may be different from vendor to vendor. It's used to pass information about the applications to the SD-WAN controller, so that the NetOps can program policies that match the metadata that we found on the application side.
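One way to picture the reader/adapter split is as a small per-vendor interface. This is an illustrative sketch under assumed names (`ServiceInfo`, `Adapter`, `logAdapter`), not CNWAN's actual API:

```go
package main

import "fmt"

// ServiceInfo is what the reader retrieves from the service registry:
// an endpoint plus the metadata the operator harvested.
type ServiceInfo struct {
	Name     string
	Address  string
	Metadata map[string]string // e.g. {"traffic-profile": "real-time"}
}

// Adapter hides the vendor-specific SD-WAN controller API; each SD-WAN
// solution would get its own implementation.
type Adapter interface {
	ApplyPolicy(svc ServiceInfo) error
}

// logAdapter just prints what a real adapter would push to a controller.
type logAdapter struct{}

func (logAdapter) ApplyPolicy(svc ServiceInfo) error {
	fmt.Printf("controller: map %s (%s) to profile %q\n",
		svc.Name, svc.Address, svc.Metadata["traffic-profile"])
	return nil
}

func main() {
	// A real reader would poll the registry; here we feed one entry by hand.
	svc := ServiceInfo{
		Name:     "video",
		Address:  "203.0.113.10:8443",
		Metadata: map[string]string{"traffic-profile": "real-time"},
	}
	var a Adapter = logAdapter{}
	if err := a.ApplyPolicy(svc); err != nil {
		fmt.Println("apply failed:", err)
	}
}
```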
And this is what we need to enable per-microservice optimization on the SD-WAN, so we end up with the colorful picture that we saw a moment ago. Now, I understand that this is a lot to take in, especially if it is the first time that you are seeing this, so let's walk through the same flow, but this time with an example, so you see what we are after here. And for that we're going to use the example of a video conferencing application. Let's say that we have deployed a video conferencing application on Kubernetes, and it so happens to be composed of four microservices: one for voice, another for video, others for slides and chat, and so on. As you can imagine, each of these microservices may have different requirements in terms of how the network should handle its traffic.
So what can we do here? Well, let's drop in all the components that we have discussed up until now. So we have the NetOps and DevOps teams, then the CNWAN components, as well as the SD-WAN controller and the service registry. This time we are going to start from the network perspective and describe how NetOps can program the SD-WAN to prepare it to receive the metadata from the applications. One thing you have to do is populate the CNWAN adapter with the kind of metadata that is expected from the application. These can be labels that are applied to the services, to the microservices, or annotations that the DevOps add to the YAML files as they deploy the services. The point here is that this is the metadata that comes from the application, and this is what the NetOps use to then build some policies.
Now, on the SD-WAN controller, we create some specific policies to match that metadata. Here we are defining four different policies that match the metadata we have defined. So for instance, for the real-time metadata we define a policy that tries to optimize for minimal latency and, if possible, no drops. For, let's say, the file-transfer kind of metadata, what we care about is a lot of bandwidth and, again, minimizing drops. So you see here on the slide the different policies that we have defined.
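A mapping along these lines is what the NetOps side would maintain. The format below is purely illustrative (no particular controller prescribes this schema), and only the two profiles named above are filled in:

```yaml
# Illustrative metadata-to-policy table; keys and structure are hypothetical.
policies:
  - match:
      traffic-profile: real-time       # e.g. voice, video
    action:
      latency: minimize
      loss: minimize
  - match:
      traffic-profile: file-transfer   # e.g. slides
    action:
      bandwidth: maximize
      loss: minimize
  # ...two more entries would cover the remaining microservices.
```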
Now, this metadata needs to come from the application. So, as we saw before, when the DevOps deploy the services along with the metadata, the CNWAN operator extracts this metadata and puts it in the service registry, so that the CNWAN reader can then retrieve the application information along with the metadata. The right application is mapped to the right policy, and the flows get, again, optimized over the SD-WAN.
So this is an example that shows the kind of optimizations you can apply on the SD-WAN overlay, that is, on the SD-WAN edges, from SD-WAN edge to SD-WAN edge, over a certain underlay network that is typically not under the control of the SD-WAN. Now let's look at another example where you control not only the overlay but also the underlay, and we are going to talk about how you can use that, for instance, for bandwidth autoscaling. In this case, we are taking advantage of some network connectivity providers that nowadays offer APIs to take advantage of underlay capabilities. So if you happen to be using one of these providers in your SD-WAN, you can optimize both at the overlay, with the SD-WAN, and at the underlay, with one of these providers. And a very good example to showcase this kind of capability is following the same pattern that Kubernetes follows in terms of autoscaling. Kubernetes is very good at scaling applications both horizontally and vertically. Why can't we do the same on the network, right? If you have the right tools, you can certainly do that on the network as well.
For this example, we are looking again at the same kind of deployment model, but we are adding this time a connectivity provider in the middle that, in the previous example, we were agnostic to. This time we are interacting with it through an underlay API, an API that the connectivity provider offers to the SD-WAN. If we have this in place, and we again deploy the same tools that we had available before, so that we have this workflow of extracting application metadata and using it for SD-WAN policies, now we can, for instance, not only extract information about labels and annotations on the application, we can also extract the number of replicas that a particular application has and use that on the underlay to scale the bandwidth. So as the number of replicas grows on the Kubernetes side, the bandwidth allocated on the underlay provider grows as well. And not only that; what is even more interesting is that when the replicas scale down, you can also scale down the connectivity, the bandwidth that you have requested. This translates into some cost savings for you at the end of the day, which is something nice that we are all after, I guess, right?
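A minimal sketch of that replica-to-bandwidth loop, again assuming client-go, with a hypothetical `setUnderlayBandwidth` function standing in for a real provider's underlay API (the deployment name, namespace and per-replica bandwidth budget are likewise assumptions):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// setUnderlayBandwidth stands in for a connectivity provider's underlay API.
func setUnderlayBandwidth(mbps int32) {
	fmt.Printf("underlay API: set circuit bandwidth to %d Mbps\n", mbps)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const mbpsPerReplica = 100 // assumed per-replica bandwidth budget
	last := int32(-1)
	for {
		dep, err := client.AppsV1().Deployments("default").
			Get(context.TODO(), "video", metav1.GetOptions{})
		if err == nil && dep.Status.ReadyReplicas != last {
			// Scale the circuit up or down with the replica count.
			last = dep.Status.ReadyReplicas
			setUnderlayBandwidth(last * mbpsPerReplica)
		}
		time.Sleep(30 * time.Second)
	}
}
```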
So this is, in general, another good example of how enabling this kind of integration between the network and the application can deliver optimization not only in terms of performance, but also in terms of cost. So, wrapping up: we have discussed several things in this presentation, but the bottom line is that if you have an SD-WAN and you are connecting to a Kubernetes cluster through that SD-WAN, please make sure that those two talk to one another, so that your Kubernetes applications can take advantage of the SD-WAN optimizations.
Now I want to leave you with a question. We have discussed today how SD-WAN can learn about Kubernetes metadata, but what about service mesh metadata? Is there anything we can do there? We believe that the answer is yes, and we are working on that front right now, so hopefully you should see some ideas flowing in that direction soon. And yeah, if you want to take a look at what we have done in the CNWAN project, you have the repo there, and you can reach out to me either via email or find me on LinkedIn. Feel free to send me any questions or feedback on this presentation, and that will be it. Thanks for your attention, and I'm looking forward to the discussion.