Transcript
Hi everyone, my name is Mahendra Bagul.
Today we are going to play with kind (Kubernetes in Docker), the NGINX ingress
controller, and a secured gRPC server. So let's
get started. A little bit about me: my name is Mahendra Bagul.
I am a senior software engineer at InfraCloud Technologies, India. You can
find me on GitHub at this link. My user id is Mahendra Bagul, and
my interests are basically around Kubernetes, Golang, and cloud native technologies.
So for today we are going to have this agenda.
So the presentation has been divided into two parts,
theory and demo. In the theory part we will be talking
about Kubernetes architecture, kind (Kubernetes in Docker), the NGINX ingress controller,
Kubernetes service and ingress deployment and how they both usually work,
and certificates and mTLS, and
then the architecture of the final deployment. This architecture of the final deployment is
related to the demo which I have. And in the demo
part we will see the kind cluster and NGINX ingress controller deployment, and we will also try to run
a secured gRPC server behind the ingress. Okay,
this is a typical Kubernetes architecture. So you have a
control plane, or master nodes, and then you have a set of worker nodes,
okay. On the control plane, or on the master nodes,
you will see the API server, scheduler, controller manager, and etcd, these
components. Whereas on the worker nodes you will see Docker,
I mean any container runtime like Docker or
containerd, and the kubelet and
kube-proxy, right? So whenever
you are trying to hit the Kubernetes cluster, whenever you
are trying to reach the pods or any Kubernetes objects,
you are basically using kubectl commands, right? So your kubectl commands are
basically handled by the API server. So you can see
kubectl is just like a CLI client for the API server,
right? And when you have a Docker container, when you
have a container runtime on your worker nodes, you are able to
create the pods, right, on that worker node. And inside
your pod you will have multiple containers, right? So a pod is basically
just like a wrapper over containers, and the
relationship is one to many: one pod can have multiple containers.
So in this diagram you can see pod one has three containers whereas pod
two has just one container, right? The same
scenario you can see here in worker node two as well,
right? There are many user interfaces available in the market as
well for your API server. So when
you deploy your application on a Kubernetes cluster,
you need to expose it to the outside world
so that your users can reach your application, right?
That can be done using services or using ingresses.
An ingress also makes use of services.
But NodePort is like a raw or crude way
of exposing your application to the outside world, and it is
not the preferred way to do it in production. When you
use NodePort, you basically open a port on your worker
nodes. So here you can see in the diagram that there is a set
of pods which are wrapped by a service, and that
service is of type NodePort. And now your user is able to reach
your application through the NodePort type
service.
service. And you can create code port type service using
this command. So to use this command you need
to have a hello world deployment in place. And then you can
use expose command of Kubectl to expose the hello world deployment.
Then you can specify the type of the service and then you can also specify
the name of the service, right? And then example service, service of type
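Just to make that concrete, the kubectl expose command described here (something like `kubectl expose deployment hello-world --type=NodePort --name=example-service`) produces roughly the following manifest. This is only a sketch; the selector label and port numbers are illustrative and depend on how the hello-world deployment is actually defined.

```yaml
# Roughly what the expose command creates; selector and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: hello-world   # assumes the deployment's pods carry this label
  ports:
  - port: 8080         # service port; the node port itself is auto-assigned unless specified
    targetPort: 8080
```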
Whereas an ingress, which is again another object of Kubernetes:
here, what you do is you also need to create it
using kubectl commands or using YAML files or JSON files.
And it is just, again, you can say in the layman's terms, it is
just a URL map, right? So there are two services behind
the ingress, and the URL map is basically like a decision maker
for which service the user's request should be forwarded
to, right? So for example, if I'm trying to hit blue,
then service blue will be called. If I'm trying to hit
green, then service green will be called, right?
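As a rough sketch of that URL-map idea, an ingress with two backends could look something like the following. The names and paths here are illustrative, not taken from the slide.

```yaml
# Illustrative "URL map": two paths routed to two different services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: color-ingress
spec:
  rules:
  - http:
      paths:
      - path: /blue
        pathType: Prefix
        backend:
          service:
            name: service-blue
            port:
              number: 80
      - path: /green
        pathType: Prefix
        backend:
          service:
            name: service-green
            port:
              number: 80
```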
When you make use of ingress in your Kubernetes cluster, then you also need
to install an ingress controller to handle or to manage
your ingresses. There are many ingress controllers available
in the market. The NGINX ingress controller is one of them;
Contour and Gloo are a few other examples of ingress controllers.
So here in the diagram you can see there is a user, and the user
is trying to reach your applications through the ingress.
Then the request will first come to the service, and the service will
then forward the request to your pods, right? And then the pods
are again running some containers, right, where your actual application
is running. We are going to make use of the same diagram in the
next slide. But just before that, let me talk
a little bit about kind, Kubernetes in Docker, as well. So it's just a
tool for running local Kubernetes clusters, right, and the
nodes are nothing but Docker containers. When you create a
Kubernetes cluster using kind, the nodes
are just Docker containers. So earlier it was just designed for testing
Kubernetes, but later, due to its popularity, it is now being used for
local development or inside CI (continuous integration)
tools as well. As part of my demos I
have a gRPC server and a client as well. So my gRPC server
is in Golang and the gRPC client is in Node.js, and they
are both talking over HTTP/2, and they are
sharing this file, the employee service definition. So that's a proto file.
Protobuf is a contract-defining
mechanism, I would say, and due to that,
the server knows in what format data needs to be sent and the client
also knows in what format the data will be received.
So the proto file is nothing but the contract, right. The main
motive of this presentation is to achieve mTLS between these two components, because
our gRPC server is going to be running inside the Kubernetes
cluster, whereas our Node.js client will be running on my local machine.
We need to have server certificates for the gRPC
server, and we also need to have client certificates for the gRPC client, which
is in Node.js, and then they will be able to establish the communication.
Right. To understand mTLS a bit more,
let's go through this diagram. So here also you can see there is a
client and then there is a server. The client has its client
cert and the server has its server cert, right.
And then there is a common entity who knows the client's
and the server's identity, right, and it is called a certificate authority.
Okay, so this is basically a component which again provides
the certificates to these two components. So when
the client tries to access a protected resource on the server,
the server will send or present its server certificate.
So the server is now trying to say who it actually is.
The client will then validate the server certificate with the CA.
Once that check passes, the client will then present its
client cert. Now the server side also needs to verify whether the client cert
presented by the client is valid or not, and that is done through the
CA. And then once both of these
checks are passed, the client is able to access the protected resource.
Okay, so this is how the mTLS flow works, and we
will be seeing or observing the same flow in the demo as well.
So this is going to be my final deployment. So I have a
Node.js gRPC client, which I already talked about, and it
is going to run on my local machine. And then we have a Kubernetes cluster
created using kind, the Kubernetes in Docker tool.
And then we have a namespace called golang-2021-meetup.
And then in that namespace you can see I have a gRPC
server ingress, which is sitting just next to
the service called golang-grpc-server. And then I have a
golang-grpc-server deployment and pod, right?
And this deployment and pod are using two config maps,
golang-grpc-config and employee-database. So the employee
database config map just contains the JSON.
I will be showing you the exact content of it when
I will be walking you through the demo. And then it is also
making use of a secret, the gRPC server certificates.
As this is my server component, it
should run with its server certificates, right? So the certificates are
stored in a secret, and then that secret is mounted on the pod,
right? And then here in the ingress you can also see that it is also
referring to the secret. That is because we are
going to send the client certificates from the
client, through the ingress, through the service, to the pod, okay?
And we don't want TLS offloading or SSL
offloading at the ingress level. We are going to
pass the TLS request directly through
to the pod, or to the container, right? So that's why
the ingress here is also referring to the secret.
I will be talking more on this one in the demo. So let's move on
to the first demo. So here we are going to create a kind cluster
and install the NGINX ingress controller. So just before that
I want to walk
you through the directory structure,
okay. So I have a folder called code. Okay, you can ignore this part
because I'm going to push this whole repository to
GitHub. And then in the code folder you can see there is a
configs folder. So inside kind there are the commands and the kind-config.yaml,
and then in mtls you will see a bunch of certificates,
because we are going to need client certificates as well as server
certificates, right? And there will also be the CA component.
And in golang-grpc-server I have the gRPC
server related code, and in node-grpc-client I
have the Node.js gRPC client.
Right? So this is the structure I have. Let us first
go through the first demo, right? And that is
about creating a kind-based cluster, right? So what
I will do is I will create a kind-based Kubernetes
cluster. So the command is kind create cluster, and it is referring
to the kind-config.yaml, right. So let me open kind-config.yaml and
let me walk you through that. So the kind is Cluster.
This cluster will be a three-node cluster: there is a control
plane, so one master node,
and two worker nodes, okay. And a few things
which you should be noticing are that I'm specifying these node
labels, right, so ingress-ready. These labels are required
when you are going to install the NGINX ingress controller, because
this is how you will be able to reach
your Kubernetes cluster from your local machine, okay.
And I'm mapping a few ports, like 80 and 443, okay? So let us move ahead
with this command.
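For reference, a kind config like the one described here (one control plane and two workers, the ingress-ready node label, and port mappings for 80 and 443) looks roughly like this; the exact file in the repository may differ slightly.

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"   # label the NGINX ingress controller deployment targets
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker
```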
I first need to move to the directory structure and
then I will be firing the same command again,
code, then configs,
then kind.
Yeah it should work now. So you can see it is trying
to create a cluster kind. Okay.
And then it is creating the nodes now. So meanwhile we
will go through the next set of commands. So the
next command will be, or the next step will be to install the
NGINX ingress controller, right. So I'm just firing a kubectl apply command, and
the manifest
for installing the NGINX ingress controller is at this link. Okay,
so let me just copy this command and wait for
the cluster to
be ready.
So it will take some time because it is a three-node cluster, so it
will try to spin up three containers,
right, because one node is one container. So nodes in a kind-based cluster
are nothing but Docker containers.
Meanwhile, let me walk you through the next commands as well.
So the next command is just for waiting, because
when you fire the kubectl apply command with this
NGINX ingress controller deployment file, right,
it takes some time to get all the pods
up and running, and you need to have all the pods
and all the configurations ready. And the appropriate way to
check is to fire this command. So here what we are
doing is we are just waiting in the namespace ingress-nginx,
and then we are also waiting for the condition ready,
right. And then the selector is app.kubernetes.io/component=controller,
right? Yeah,
here you can see it wrote the
configuration, it started the control plane,
it also installed CNI, installed the storage class, and it is
now joining the worker nodes, right,
okay, coming back to this command, sorry for moving
back and forth. So in the last command we are going
to edit the deployment of the ingress-nginx controller as well.
So I will tell you what we are going to modify here.
So this flag, right, --enable-ssl-passthrough: this needs to be
added as an argument to the ingress-nginx controller, otherwise your SSL
passthrough mechanism will not work. And this was like a major
blocker for me when I was working on a similar assignment.
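To make that edit concrete, after adding the flag the controller container's args in the ingress-nginx-controller deployment end up looking roughly like this. This is only an excerpt; the other arguments are whatever the install manifest already sets.

```yaml
# Excerpt of the ingress-nginx-controller Deployment after the edit (illustrative).
spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        # ...existing arguments from the install manifest stay as they are...
        - --enable-ssl-passthrough   # the flag added in this demo
```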
Yeah, here you can see now the cluster is ready. Let me try
to list all the pods. So it
will also take some time, because all the pods may not be
in the up and running state
or in the ready state. So here you can see a few pods, like
etcd-kind-control-plane, are not in
the ready state yet.
Let us watch. So kube-controller-manager-kind-control-plane
is not... yeah, it got up
and is in the up and running state now. Yeah, all the pods are in the
ready state. So let us do one thing. Let me go back
in the commands, copy this line. And I need to
fire this because now I'm going to install the NGINX
ingress controller on the kind-based Kubernetes cluster.
So this link, right,
basically has a file, and it has got a namespace, a service
account, a config map, and a bunch of other Kubernetes objects.
These are all required to have
the NGINX ingress controller up and running in your cluster. So I'm going to
fire this wait command now.
So it will wait till all the pods are up and running.
So let me start another terminal window with Tilix
and increase the font. Let me try
running...
let me copy this command: watch kubectl.
Yeah, here also the kubectl wait command,
it succeeded. Here you can see the condition was met and then
the command has ended. Right. So now moving back to
the commands file and here
you can see I now need to edit the deployment,
right, which is in the ingress-nginx namespace,
and the deployment name is ingress-nginx-controller.
And I need to basically add this flag, right.
So I will just close this terminal window,
and here I am editing the
deployment. Okay, moving down.
Yeah, so here you can see in the spec part, under containers,
the container is the NGINX ingress controller, right. And I now need to add
one more argument over here.
So what I will do is I will just copy and paste this.
So I have added this now and I will just exit from this
file. Yeah, the deployment was edited. Okay, let's
move back.
Let me see whether all the pods are
up and running or not. So in the ingress-nginx
namespace, the controller component should be up and running, right. It
is not in the ready state because after I
modified the deployment, the container
got restarted, right. So yeah,
now we can see that it is in the ready state, right, 1/1.
Let's move back to the presentation. So this was demo one:
this is how we create a kind-based
Kubernetes cluster and install the NGINX ingress controller on it.
Okay, then in demo two, we have a
gRPC server and we need to run it on Kubernetes.
And then we will basically expose this gRPC server, through a service,
through the NGINX ingress, right. Moving on, we now need to
explore this mTLS part.
But just before that, let me walk you through the Golang gRPC server.
So this is a Golang-based project. Here you can see
in the protobufs folder I have employee.proto,
okay? So this is the way to
define the contract in protobufs. I'm using proto3,
making use of the employee package,
and then I have a service, okay. This service
will basically have all the RPC methods in it. So GetDetails is
one of the RPC methods which I have used in this demo.
And then there are messages as well. So EmployeeRequest
has just one field, called id.
So whenever the client tries to access an employee by
id, the request needs to be formed in this way,
and the response will be of this type. So EmployeeResponse
has a nested message, EmployeeDetails, and the EmployeeDetails
message has these four fields: id, email, first name, last name.
Okay, and in the employee service you can see the RPC is
GetDetails, the EmployeeRequest is being passed
from the client, and it returns an EmployeeResponse.
Right. So in the server package I have main.go,
okay. But just before that: when you have employee.proto,
you need to have some Golang code, right,
because your server is in Golang. So you need to have a Golang implementation
of this proto file. There is a protoc
compiler which you can use to create these files.
Okay, so these files are auto-generated. When you open one, you will notice
this message: code generated by protoc-gen-go, do not edit.
So this is generated using the command, and this
code is basically generated as per
the information you specify here. Okay, so both files are
created using protoc,
and it is as per the proto which you have created.
Okay, so when you create these two files, you can now make use of
these two files in your server main.go. Okay,
so let me walk you through what I have in the server.
I will not go into much detail, but let me
just show you GetDetails, right. So GetDetails is the server method here,
right, and EmployeeRequest is what I'm accepting here.
Right. And then I'm checking whether the certificate is valid.
Okay, that's the first check. And inside that method
I'm also checking the CN, the common name, and so on.
And then when that check is passed, here in getEmployeeDetails
I basically retrieve
all the employees, then run
a for loop and just extract the
employee by id, right, and then just return that employee.
A few other things which you might be interested in:
here you can see I'm trying to create a cert
pool using the certificates present at this path. Okay. And then
once I have all the certificates, I
need to add them to the TLS config here, right? So I
get the cert pool and the certificate here, and then I try to
create the TLS config. So I return the TLS config from here and then
make use of it here. So in the main method
you can see I'm trying to run the gRPC
server using the Creds option
and passing this TLS config. Right,
okay, so the next thing would be the kubernetes
folder. So here I have the config maps. I have got two
config maps, as I had shown you in the diagram, right? One
of them is to store the server
details; like here you can see I have the server address and server
port in the config map, okay. And in the next config map
I just have the employee database. So all the employees I have placed in the config map.
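As a rough illustration of those two config maps; the names, keys, and JSON shape here are assumptions based on the description, not copied from the repository.

```yaml
# Illustrative sketch of the two config maps described above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: golang-grpc-config
  namespace: golang-2021-meetup
data:
  server-address: "0.0.0.0"   # assumed key names and values
  server-port: "50051"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: employee-database
  namespace: golang-2021-meetup
data:
  employees.json: |
    [
      {"id": 1, "email": "jane@example.com", "firstName": "Jane", "lastName": "Doe"},
      {"id": 2, "email": "john@example.com", "firstName": "John", "lastName": "Doe"}
    ]
```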
In the deployment I have just one container
running, and the container will be running this
Golang gRPC server code, right? I have already got a Docker image
for this on my Docker Hub account (mahendrabagul). A
few things to notice here are that I'm passing the server port and
the employees from the config maps as environment variables,
right? And here, down below, you can see I'm
mounting the certificates, which are
in the secret, on the pod, right?
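Putting those pieces together, the deployment wiring described here (environment variables from the config maps plus the certificate secret mounted as a volume) looks roughly like the sketch below; the image reference, variable and key names, and mount path are assumed for illustration.

```yaml
# Illustrative sketch of the golang-grpc-server Deployment wiring.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: golang-grpc-server
  namespace: golang-2021-meetup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: golang-grpc-server
  template:
    metadata:
      labels:
        app: golang-grpc-server
    spec:
      containers:
      - name: golang-grpc-server
        image: mahendrabagul/golang-grpc-server:latest   # placeholder image reference
        env:
        - name: SERVER_PORT                   # assumed variable and key names
          valueFrom:
            configMapKeyRef:
              name: golang-grpc-config
              key: server-port
        - name: EMPLOYEES
          valueFrom:
            configMapKeyRef:
              name: employee-database
              key: employees.json
        volumeMounts:
        - name: certs
          mountPath: /certs                   # assumed path the server reads its certificates from
          readOnly: true
      volumes:
      - name: certs
        secret:
          secretName: grpc-server-certificates   # assumed secret name
```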
Yeah, so I don't have a YAML file
for creating the secrets here; I have got it in the mtls directory,
but I will show you when I go into the mtls directory.
So I'm just mounting the secrets on the pod here, right.
And in the ingress part, a few things to notice: you
need to specify these annotations. Okay, so auth-tls-pass-certificate-to-upstream
is true. Then you also need to specify
where you have your gRPC server certificates, right? Let me
go back to the diagram so that it becomes clearer. Yeah, so if
you see here, the ingress is referring to this secret, right? So here
this annotation is referring to that secret. Also, the
backend protocol is GRPC, and SSL passthrough
is true; that's what you need to mention. And this passthrough setting
needs to be specified because that's what we enabled when we installed the NGINX
ingress controller as well, right? And here in the
rules part you have the golang-2021 conf42 host;
that's like the /etc/hosts entry.
And here in the service section you can see golang-grpc-server.
So this is the service I'm specifying from the ingress, so you can
relate it to this diagram here. So I have opened the ingress YAML
configuration here, and this ingress is referring to the service, right? So this
is what I'm doing here: golang-grpc-server, this is the service I am referring to over
here, right? And the port is 50051, okay?
And then the TLS host is that same golang-2021 conf42 host.
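The ingress described above, with the SSL passthrough, gRPC backend, and client-certificate annotations, would look roughly like this; the host, secret name, and path are written as I understood them from the talk and may not match the repository exactly.

```yaml
# Illustrative sketch of the gRPC ingress with mTLS passthrough.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: golang-grpc-server
  namespace: golang-2021-meetup
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    # namespace/secret holding the gRPC server certificates (name assumed)
    nginx.ingress.kubernetes.io/auth-tls-secret: golang-2021-meetup/grpc-server-certificates
spec:
  rules:
  - host: golang2021.conf42.com        # the /etc/hosts entry used in the demo (approximate)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: golang-grpc-server
            port:
              number: 50051
  tls:
  - hosts:
    - golang2021.conf42.com
    secretName: grpc-server-certificates   # assumed secret name
```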
Again, in the service, it
is just like a plain, normal Kubernetes service
definition over here. So it is the golang-grpc-server service I'm defining over here,
just wrapping the deployment and pods, right?
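For completeness, that plain service is roughly the following; the selector label is an assumption and just has to match the pod labels from the deployment.

```yaml
# Illustrative sketch of the plain service in front of the gRPC server pods.
apiVersion: v1
kind: Service
metadata:
  name: golang-grpc-server
  namespace: golang-2021-meetup
spec:
  selector:
    app: golang-grpc-server   # must match the deployment's pod labels (assumed)
  ports:
  - port: 50051
    targetPort: 50051
```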
With this understanding, let's move to the mtls folder.
So I have got a few shell scripts; let me minimize
this and then expand this one.
I have got a few shell script files:
gen-certs.sh and gen-secrets.sh.
So gen-certs.sh has all the commands
to create the root CA, the server CA, then the client
CA, the server certificates, the client certificates, and the certificate
chain as well, right? And when you
fire this command, the whole set of certificates will be generated.
Then once you generate these certificates, you also need to
create secrets out of this file set, so for the
secret creation I have added kubectl commands here, right? You
can see here kubectl delete namespace: if there is already a namespace present for
this golang-2021-meetup, then it will be deleted, and I'm
creating a new one and creating the secrets here, right? So just creating
the gRPC server certificates secret, okay?
And in the openssl.cnf I have a few configurations for
my certificates here. I'm going to push this repository, so you can have a
detailed look at it, or you can watch another meetup
video of mine where I have gone through these files in detail.
Okay, so let us do one thing:
let me move to code, then
configs,
then mtls, and then run
gen-certs.
So it is now creating all
the certificates. Once
all the certificates are generated, we will try firing
this gen-secrets.sh. There was no namespace
earlier, because we created a fresh cluster, so now a namespace gets created and also a
secret gets created, right? So once we have these certificates,
once we have these secrets, what we can do is move to
another folder,
that is golang-grpc-server.
And then we will just try... so we first
need to do kubectl apply
for all these YAML files,
right? We will do this using this command, k apply -f
kubernetes, and it will basically create the
config maps (two of them), the deployment, one ingress, and one service,
right, for the objects that we just discussed. Right, these four.
And let us now move back to the terminal window and let
us watch kubectl get pods -A.
So here you can see the golang-2021-meetup
namespace was created and a new pod
is running, right. And it's not in the ready state yet.
It will take some time, because this is a new cluster
and I had removed all my images on my local machine,
so it will try to pull the image from Docker Hub. Okay, it's in
the ready state now. Let's do one thing.
So our Golang server is up and
running now inside the Kubernetes cluster. What
we will do now is move to the
node-grpc-client and we
will try to run it, right? This Node.js gRPC client
will run on my local machine and it will talk to the Golang
gRPC server running on the Kubernetes cluster behind the ingress,
right? So let me first run it and then we
will walk through the code. Okay, so node-grpc-client.
So the command to run is node...
oh, it's index.js, right?
So there are just four employees. So when I specified id two, that employee was
retrieved, right? This message is coming from the Kubernetes
cluster, from a pod running on the Kubernetes cluster which was
created using kind. So let me
now walk you through the node gRPC client code,
right? So I have index.js, which I ran. Basically, let
me close this;
it's taking some time because I have got two other clusters running on my
local machine as well. Yeah. So here you can see in index.js
I'm using grpc-js, which is the gRPC library.
And then, using helper methods from that library, I'm
loading the protobuf file. So in the protobufs directory we will
have the same file here, right, as you can see,
right. So then once I
read that file, I create an employee service object, a JavaScript object,
right. Then I'm specifying this:
I'm trying to create credentials, basically gRPC
credentials. So createSsl is a helper method and it basically
takes three parameters. The first parameter is basically the CA cert
chain, then the
key, and then the cert. So you can see the gRPC client key and the gRPC
client cert, right?
And using these generated credentials I
will try to create a gRPC client. So here you can see,
you can notice this URL, right, the golang-2021
conf42 host with port 443. So this is basically
the URL configured on the ingress. So if I
go and open the ingress YAML,
right, so here you can see the host configuration. So this is the host specified
over here, right. And as this is a TLS connection,
it runs on port 443. And you might have
also noticed that when I created the kind-based cluster,
I had exposed 443,
right, in the extra port mappings; that's what I had done. So this is like a way
to run your mTLS-based
applications on a kind-based Kubernetes cluster. Okay, so
the other file which you should be looking at is this
index.js in the client folder. So here I'm just
trying to hit, or trying to call, this GetDetails. So this GetDetails
is an RPC method defined in the proto file,
right here. So I'm just trying to
hit it over here, and then once I get the response I'm just
trying to log it here, so you can see the employee
details for the employee id; that's the message printed over here, and then
the whole message, right? Yeah. So that is the end of the
second demo. At InfraCloud we are hiring. You can
go and visit this careers link,
or you can just send your resume to
clusters@infracloud.io. Thank you. So this was an
attempt from my end at explaining, or playing with, kind,
the NGINX ingress controller, and a secured gRPC server.
Thank you everyone.