Transcript
Hello everyone, my name is Kufran, and I will be talking about the Istio service mesh. We will look into how we can use the Istio service mesh to secure our applications running inside Kubernetes clusters, and how we can implement authentication and authorization policies. So let's go ahead and get started.
So this is the outline of the talk. First we will start with an introduction to Istio. We will look into service identities and how a Kubernetes service account is converted into the service identity that Istio uses. We will look into authentication policies, like TLS and mTLS, and then we will look into authorization policies, which let us enforce access rules within our services. And then finally we will have some Q&A.
So what do we mean by a service mesh? A service mesh is an infrastructure framework that handles communication between your services. For example, you may want some kind of network policies, retries, or splitting of traffic based on weight and latency. Most of the time this is implemented as network proxies that intercept the incoming traffic, and they are deployed along with your application code. This is how they work: you will have one container which contains what we call a sidecar proxy, and another container which is your application.
So what does Istio offer? Istio's features can be categorized across three categories. First is security: Istio can help us secure service-to-service communication, and we can implement access control within our services, like authentication and authorization rules. Second is traffic management: with the help of Istio we can have complex routing rules, retries, timeouts, circuit breakers, et cetera. This is more related to enhancing the resiliency of your services, and to routing. And the third part is observability: the service mesh offers rich metrics and traces of how your traffic is flowing within your service mesh, which we can use for benchmarking, debugging issues, or investigating latency within our service mesh architecture.
Some important terminology. When I say workload, what I mean is a pod or application deployed within your Kubernetes cluster. And when I say service, it is a microservice or an application that is going to serve some kind of API or feature to an end user, or maybe to another service.
So this is how your pod looks before we have Istio. You basically have two parts inside one pod: you have your application logic, and then you have infrastructure logic, like routing and circuit breaker code, that is deployed along with your business logic as part of your application. The same goes for container two; without Istio, these network policies are part of your business logic. What we can do with Istio is abstract your network policies, circuit breaker code, routing, and metrics-related code into a sidecar proxy, because Istio offers all these features built in, and we can leverage that. And you can see now we have two containers: one container which only contains your business logic, and another sidecar container that only contains your infrastructure logic, your network policies. These two containers can now be managed independently, so making a change to your network policies will not require rolling out your business logic or application code, and vice versa. So what does the sidecar do?
The sidecar is deployed along with your application pod, and in general the name of your sidecar container will be istio-proxy. The istio-proxy doesn't really modify your incoming requests; it is transparent to the application code. The only thing the sidecar proxy does is enforce your network policies on the incoming requests, and if these policies are not satisfied, it will drop the incoming request and your application container will never see that request. Istio uses the Envoy proxy as the sidecar proxy; it is written in C++.
So how do we inject the sidecar? There are multiple ways to inject the sidecar proxy into your workloads. One way is manual: you deploy your application container and you also deploy the sidecar container yourself. The automatic way is to label the namespaces where you want sidecar proxies to be injected automatically, and your Istio control plane will take care of injecting the sidecar whenever you deploy a new pod. You can also add an annotation to your deployment's pod template, and that will make sure the sidecar proxy is automatically injected. So you have different levels of granularity for how you want to inject the sidecar proxy into your application pods.
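As a sketch, assuming a namespace named foo and a standard Istio install, namespace-level automatic injection is enabled with a label:

    kubectl label namespace foo istio-injection=enabled

and a single workload can opt in through an annotation on its Deployment's pod template:

    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true"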
Here there are three pods, pod A, pod B, and pod C, and as you can see, with each pod there is a sidecar container, istio-proxy, deployed. All the incoming and outgoing traffic is now being intercepted by the Istio sidecar proxy, and then it forwards the traffic to your actual container, your application.
And this is the basic architecture of the Istio sidecar. These are the different components of Istio, and with the latest version, Istio 1.5, all these components are consolidated inside the Istio control plane. For example, Citadel was a part of the control plane which used to generate the TLS certificates and communicate them to the new pods, the sidecar proxies. There was Mixer, which used to take care of telemetry-related data, and there was Galley, which used to make sure your configurations are correct. But with the newer versions of Istio, all these components are consolidated inside the Istio control plane.
Okay, so let's talk about Istio service mesh security. When we talk about security, we will specifically look into authentication and authorization policies. And when you talk about authentication and authorization, we have to start from somewhere: service identities. Before we start enforcing authentication and authorization policies, we need some kind of identity assigned to each of our services inside our service mesh. This is the starting point. Once we have established identities assigned to each service, we can start implementing authentication and authorization policies. There are different ways to assign identities to the services inside your service mesh. One of the most common ways is using a service account: you can use the Kubernetes service account, and that service account basically becomes the identity for the application running under it. You can also use a GCP service account or an AWS IAM user or role, or if you are deploying Kubernetes on-prem, you can use a user account or some other account that you have. But in general, in most cases, you will be using the Kubernetes service account.
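As a minimal sketch (the names httpbin-sa and foo are just for illustration), the service account is an ordinary Kubernetes object, and Istio derives a SPIFFE identity from it:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: httpbin-sa
      namespace: foo
    # Pods running under this service account get the identity
    #   spiffe://cluster.local/ns/foo/sa/httpbin-sa
    # (assuming the default trust domain, cluster.local)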
So how do we convert your service account into a certificate that becomes the identity of your service? Whenever a new pod is created, it contains a sidecar container, istio-proxy, and your application. Your istio-proxy will create a public/private key pair and send a certificate signing request to the Istio control plane. The control plane will then sign it, creating a certificate scoped to the service account, and send it back to the istio-proxy. The control plane takes care of rotating your certificates whenever they are about to expire, and it serves your certificates through the SDS API. Your certificates are stored in memory, so if your container gets deleted, your certificate gets deleted as well. This is how the workflow looks: you create a new pod, your istio-proxy sidecar creates a public/private key pair and sends a signing request to the control plane, the control plane generates a certificate, and that certificate is then used as the identity for your service A.
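If you want to peek at the result, one way (assuming the httpbin workload from earlier, with <httpbin-pod-name> as a placeholder for the actual pod) is to list the secrets the sidecar received over SDS:

    # Shows the workload certificate and root CA held in memory by the proxy
    istioctl proxy-config secret <httpbin-pod-name> -n foo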
So by now we have established identities for each of the services we are going to have in our Istio service mesh architecture. And once we have the identities there, we can start implementing the authentication and authorization policies.
Istio provides two types of authentication. One is end-user authentication, for which you can use a JSON Web Token, JWT. We are more interested in service-to-service authentication, or mTLS. There are multiple modes in which you can enforce authentication. One is permissive, where the services running in your Kubernetes workloads will accept plain-text traffic as well as mTLS traffic. Second is strict, where the services will only accept mTLS. And third is disable, where we say, okay, we don't want any kind of TLS encryption, we just want plain-text communication between our applications or services. And this is how a request flows within your Kubernetes application under mTLS. On the left side we have workload A, in the middle workload B, and on the right side workload C, and a client makes a request. We can have different authentication policies for each workload: for example, you want mTLS for workload A, but workload B can accept plain-text traffic as well as mTLS, and workload C only accepts mTLS. So you can have different authentication policies for each of your workloads, and this is how your request flows within your service mesh.
And this is how you can implement the authentication policies. There are different levels of granularity at which you can implement authentication. One way is to implement it mesh-wide, where all the services in your service mesh have to use mTLS; if anyone tries to communicate with your applications without mTLS, their request will be dropped. Another way is to enable mTLS within a namespace, so any communication within that namespace will use mTLS, and anyone trying to communicate with an application inside that namespace will have to use mTLS, otherwise their request will be dropped. The third way is to implement mTLS only for a specific service or application. So there are multiple levels of granularity at which you can implement this. But in general, we want mTLS enabled across the whole service mesh, not just one namespace; we want it enabled for all of our applications inside our service mesh.
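As a sketch, a strict namespace-scoped policy looks like this; the same resource named default in the istio-system namespace would make it mesh-wide:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: foo        # scopes the policy to this namespace
    spec:
      mtls:
        mode: STRICT        # PERMISSIVE and DISABLE are the other modes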
Okay, so let's go ahead. I have a demo, so we can look at how authentication policies work in the real world with a quick demo.

All right, so let's implement the authentication policies. I will be deploying two applications in two different namespaces. I have httpbin, which is a basic HTTP server you can ping, and it will respond with a 200 status. This is what the deployment YAML looks like, and we are deploying it under a specific service account. And there is a sleep pod that we will be using to make requests to the httpbin application. So there are two applications, two different services: one is sleep, one is httpbin, and we will try to make a request from the sleep pod to the httpbin pod. This is what my authentication policy looks like for the namespace foo. httpbin will be deployed in the namespace foo, while I will deploy the sleep pod in the namespace bar. The foo namespace will have mTLS, that is, it will have the sidecar proxy injected; this namespace is part of your Istio service mesh. Your sleep pod is not part of your Istio service mesh; it will not have any sidecar containers.
Okay,
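A rough sketch of this setup, using the httpbin and sleep samples that ship with the Istio release (the sample paths are assumptions about where you unpacked Istio):

    kubectl create ns foo && kubectl label ns foo istio-injection=enabled
    kubectl create ns bar   # no injection label, so bar stays outside the mesh
    kubectl apply -n foo -f samples/httpbin/httpbin.yaml
    kubectl apply -n bar -f samples/sleep/sleep.yaml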
So first we will try it without the authentication policy in place, and we will see how the communication works. Then we will apply the authentication policy and see how it behaves, how it automatically drops the traffic from the sleep pod. Okay, so let's go ahead and deploy it. I'll just go ahead, create the namespaces, and deploy. Okay.
All right, the namespace is created; it was already there, and there is no change in the deployment YAML files. Everything is up and running; let's just check that everything is up and running. You can see httpbin has two containers: one is the sidecar, one is the httpbin container. And the sleep pod, this one, contains only one container; it doesn't have any sidecar proxy injected. These are two different applications deployed in two different namespaces. Okay, so now let's go ahead and see whether any authentication policies are deployed. We have one authentication policy, the one I was talking about, so let's see how your request will be dropped.
Okay, so we have enabled strict mTLS in our namespace foo: anyone who tries to communicate with the namespace foo has to do mTLS. Our sleep service is not running inside the service mesh, so it cannot perform mTLS, and our incoming request will automatically be rejected by the sidecar proxy running alongside the httpbin container. So let's give it a try. Let me exec into the sleep pod. I'm inside the sleep pod, and I try to access httpbin, which only allows mTLS traffic. You see: connection reset. This connection is reset by the sidecar proxy running alongside your httpbin container, because we have a strict authentication policy which says no communication is allowed without mTLS. Okay.
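The check from the demo looks roughly like this (port 8000 is what the httpbin sample service uses; adjust for your own setup):

    # From the sleep pod outside the mesh; with STRICT mTLS on foo,
    # the sidecar next to httpbin resets this plain-text connection
    kubectl exec -n bar deploy/sleep -c sleep -- \
      curl -sv http://httpbin.foo:8000/ip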
And now let's go ahead, open a new terminal, and delete the authentication policy which we deployed, and we will see how it behaves. All right, so we have deleted the authentication policy. With the policy deleted, by default it will allow any kind of traffic, so what is expected now is that namespace foo should allow traffic coming over mTLS or non-mTLS. Okay, so let's go back to our sleep pod and do the call again. As you can see, we are able to communicate with the httpbin service, which is inside the service mesh, while the sleep pod is not part of the service mesh. So this is how authentication policies are deployed: you can apply them namespace-wide, or you can enable them for all of your services and workloads, and there are different modes you can enforce.
Okay, so let's go back to the presentation again.
So now we have implemented secure communication between our services running inside Kubernetes. We have enabled TLS, or mTLS, encryption, and all the communication between our services is secured. Now, whenever you are running services inside your cluster, you may want to expose them to the Internet or to some private endpoints, right? This is where the gateway comes into the picture. The ingress gateway serves as an entry point for any incoming traffic: all the incoming traffic goes to the ingress gateway, and from the ingress gateway it gets forwarded to the downstream services. The ingress gateway does the TLS encryption, or TLS termination, you could say, and you can have different kinds of rules for how you want to route the traffic and manage incoming requests. It is essentially a reverse proxy running in your ingress pods, and it enforces all the rules you want to implement, and implements the routing.
There are different ways you can manage the incoming traffic and handle the TLS part of it. There are three main modes, passthrough, simple, and mutual, and there are more policies you can implement beyond those. What passthrough does is this: whenever a client makes a request, both the client and the application pod do the mTLS. Passthrough does not terminate your TLS on the ingress gateway; it passes the traffic as-is to your application pod, and then your application pod actually does the mTLS with the client. That is the more complicated one. There is a simpler policy, simple, which means you want to do server-side TLS. When I say server-side TLS, it means your client doesn't need to provide a certificate: you just create an ingress gateway, you pass your server's TLS certificate to the ingress gateway, and whenever a client tries to communicate with your server, the client just validates the TLS certificate of your service and can continue. Your TLS termination happens on the ingress gateway. And there is mutual, the third policy, where the client and the server, that is, the ingress gateway, both perform two-way mTLS, and your TLS termination happens on the ingress gateway. So only in the case of passthrough does your TLS termination not happen on the ingress gateway; in the other two policies, your TLS termination happens on the ingress gateway. And this is how you can define your ingress gateway.
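A minimal sketch of such a Gateway, assuming a hostname nginx.example.com and the default ingress gateway deployment (both are stand-ins for the demo's real values):

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: nginx-gateway
      namespace: ingress
    spec:
      selector:
        istio: ingressgateway   # binds to the default ingress gateway pods
      servers:
      - port:
          number: 443
          name: https
          protocol: HTTPS
        tls:
          mode: PASSTHROUGH     # SIMPLE or MUTUAL would terminate TLS here
        hosts:
        - nginx.example.com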
And we will go ahead and I will show you the demo.
Okay.
All right, so before I show you the ingress, I have the Istio control plane deployed, and I specifically want to talk about this one. There is one ingress gateway and one egress gateway, and whenever you deploy the control plane you will deploy an ingress gateway that gives you a public or private IP address or DNS name, and that is what your client will use whenever they want to access any of the services within your cluster through this ingress gateway. There are more configurations you can specify, like memory, HPA, availability, and so on; we will not go into those. This is the name of the ingress gateway that we will use inside our gateway resource.
Okay, so let's go ahead and see. I have created the certificates, one for the client and one for the NGINX pod, and this is what our deployment looks like. I will deploy an NGINX pod that contains the TLS certificate for the application, the NGINX server. And I have enabled mTLS within our namespace ingress, which is where our NGINX server is deployed. This is the NGINX config, where we are mounting the certificates; it is mounted as a config map and passed to the running pod. And this is what our ingress gateway looks like.
We are using the default ingress gateway that I showed you a moment ago, which gives you a public or private IP address. This ingress gateway will accept traffic only on port 443. In this example we are using passthrough: the TLS termination mode is PASSTHROUGH, which means the client will do its TLS, but the TLS termination will not happen on the gateway; it will happen on the application pod. The NGINX pod, for example, will take care of the TLS termination. You can have more configuration, like which minimum TLS version you want to enforce, or specific cipher suites, and the hostname: incoming requests should come for this hostname, and you can add more hosts. So this is a critical part of your ingress gateway: you want to make sure you specify exactly what is allowed and block everything else. We only enable traffic on port 443, we do TLS passthrough, and we have specific cipher suites. This is where you have to be very careful, very strict, about what you accept and what you don't accept. In this example, we only accept traffic on port 443, anything else is not allowed, and any hostname which does not match this one is not allowed; the request will be dropped on the ingress gateway itself. Then we route the incoming request: we check the SNI host and route the request to the application running inside our ingress namespace, listening on port 443. So any incoming request will be routed to the NGINX pod.
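The routing half of that, as a sketch with the same stand-in names, is a VirtualService that matches on the SNI host and forwards the still-encrypted traffic:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: nginx
      namespace: ingress
    spec:
      hosts:
      - nginx.example.com
      gateways:
      - nginx-gateway
      tls:                       # TLS (not HTTP) match: traffic stays encrypted
      - match:
        - port: 443
          sniHosts:
          - nginx.example.com
        route:
        - destination:
            host: my-nginx.ingress.svc.cluster.local
            port:
              number: 443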
Okay, so let me go ahead. I have already deployed all of these things, they are already configured, and now I will make a request. This is my hostname, which I am setting on the request; this is the port number; this is the public IP address of my Kubernetes cluster. The client is passing the client certificate, and we have the NGINX server, which has its own certificate, and both components will perform the mTLS.
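That request looks roughly like this (the hostname, IP variable, and certificate file names are placeholders for the demo's actual values):

    # --resolve pins the hostname to the ingress IP without touching DNS
    curl -v --resolve "nginx.example.com:443:${INGRESS_IP}" \
      --cacert ca.crt --cert client.crt --key client.key \
      https://nginx.example.com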
So let me go ahead, and you can see we are able to access the NGINX pod. And if I modify it and use port 80, the gateway should not entertain this request. You see, your request is terminated with an SSL error. Similarly, if your hostname does not match, for example if you make the request with some other domain or hostname, it should automatically reject the request, because in our gateway config we only accept traffic for this hostname. So this is where you configure and protect your entry point: this is how you protect your ingress from other issues, and where you implement the termination policies. All right, let's go back.
Okay, so far we have seen how we can protect the traffic within our cluster and how to securely expose our applications to the outside world. The third part is this: right now, any application can try to access any other application. For example, can service A send a request to service B? Yes, because we don't have any authorization policies implemented yet. We want to make sure only the required resources are allowed access. For example, a database should be accessible by the back end, not the front end, right? So we want to restrict access to our services; we want to protect our services from any other services running inside our service mesh.
There are different ways you can define the authorization policies, where you can allow or deny access to some applications or resources, and these requests are validated at runtime by the Istio sidecar proxy. If the authorization policies are not satisfied, your request will be automatically dropped. For example, service A tries to access service B, and the authorization policies say service B cannot be accessed by any service other than service C. So if the request is coming from service A, it will be dropped by service B.
In general, what you want to do when you deploy your application is to set the authorization policy to deny all, which means your application cannot be accessed by any other application. Then you slowly open your service up to other services. For example, you deploy a database, you block all access, and then you slowly open it up to other applications, like a back-end application. So this is how you can approach implementing authorization policies: deny all by default, and then slowly open access for other applications, as in the sketch below.
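A minimal deny-all, assuming the policy lives in a namespace called auth (an ALLOW policy with no rules matches nothing, so every request is denied):

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: deny-all
      namespace: auth
    spec: {}   # empty spec: no selector, no rules -> deny everything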
Okay, so this is how the workflow looks. Workload A has its own authorization policies; workload B has its own authorization policies. Whenever workload A tries to access workload B, workload B's authorization policies have to be met; if they don't match, the sidecar proxy will terminate your request. These are validated at runtime, and your control plane pushes all of your authorization policies as you make changes. And this is what an authorization policy looks like; I will just show the demo instead.
Let's go back. This is the third and last demo. All right. I am deploying three applications: one is sous, one is inventory, and one is users. Let me go ahead; we will deploy these applications in the same namespace for simplicity, but whatever I say here is applicable if the applications are deployed in other namespaces as well; I will explain what I mean by that. Okay, so we have three applications running, inventory, sous, and users, three different applications, and each application has the sidecar proxy injected.
These are the authorization policies; let's look at them. We are deploying this authorization policy inside the namespace auth, where our pods are running. The policy is applied to the sous application, and what it says is that sous can only be accessed from the namespace auth by a workload running under the service account inventory-sa. So we are restricting it: only an application running inside the namespace auth, under the service account named inventory-sa, is allowed, and only for GET requests. An application running under this service account can make a GET request to the sous pod. Okay.
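A sketch of what such a policy can look like (the namespace, app label, and service account name are taken from the demo, so treat them as assumptions):

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: sous-viewer
      namespace: auth
    spec:
      selector:
        matchLabels:
          app: sous             # applies to the sous workload only
      action: ALLOW
      rules:
      - from:
        - source:
            principals: ["cluster.local/ns/auth/sa/inventory-sa"]
        to:
        - operation:
            methods: ["GET"]    # GETs only; a POST is denied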
And we have the users authorization policy. When I don't specify any rules, by default any request made to the users application will be dropped. Nobody can access the users application, because with no rules it just denies everything. All right, now let me go back and check whether the authorization policies are there. They are already deployed; I deployed them beforehand. We have the sous authorization policy, which I explained, and there is one users authorization policy. Okay, and let's go ahead and go to the inventory pod. From the inventory pod, we will try to access both of these applications. Go to the inventory pod and try to access users. Oh, sorry: users is not accessible by anyone, so it should be denied by default; inventory cannot access users. But we have deployed the policy which says sous can be accessed by inventory.
Okay, let me go ahead. You see, it is able to access the sous service, the sous application. And if I try to make a POST request, it should be denied, right? You see, it says forbidden, because the inventory service can only make GET requests; it has to satisfy this rule.
Right? Now I will go ahead and delete those authorization policies. Let's go back. Let me just verify there are no authorization policies. See? No authorization policies. We go back to our inventory pod, we make a POST request, and you see, it is now able to successfully make a POST request to the sous application. It can make a GET request too, you see? 200. Okay. And this is the pod which is serving the request. The same goes for users: there are no authorization policies, so you should be able to reach the users application from inventory as well, and you can also make POST requests if you want. All right, so without authorization policies, everything is open, anyone can access it; with authorization policies, you can restrict the access.
All right, so with this, we are done with the talk. Thanks for joining, and have a nice rest of your day.