Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, and welcome to my presentation about how to debug a container with a sidecar in Kubernetes using Gefyra. My name is Michael. I am a software engineer and co-founder at Blueshoe. Blueshoe is a Munich-based software development provider for cloud-native software. We are creating a couple of tools and platforms, for instance Unikube or Gefyra, and in my day-to-day business I am advising our teams and our customers on how to do cloud-native development. If you want to reach me, drop me a line at michael at unikube.io or find me on GitHub.

And here is my agenda for today. First of all, I'd like to introduce you to Gefyra and its project goals. Then I'd like to talk about my development infrastructure, my demo application and how to debug my bug with Gefyra. In the end, I will wrap everything up. So let's get started.
Gefyra has been under development for a couple of months now. We started it to give developers a completely new way of writing and testing their applications by using Kubernetes right from the beginning, and to make this as convenient as possible. We'd like to promote more service-oriented thinking and to foster domain-driven mindsets, and we'd like to see the adoption of advanced Kubernetes patterns become part of modern overall software architectures. Generally speaking, using Kubernetes right from the beginning will increase developer productivity by an order of magnitude. Since you probably already have members in your teams putting a lot of effort into describing Kubernetes environments for production or testing, why don't you take this work and bring it to your developers as well?

In the past we have been working with Telepresence 2 for quite some time, but at least our teams have been a little dissatisfied with its features and approaches, and we couldn't find a good solution for us. So we set out and created Gefyra.
One of the major project goals of Gefyra is to make use of the conventional mechanisms and tooling that your developers are already working with nowadays. For instance, we'd like to support your favorite IDE, code hot-reloading, and mounting the current state of my source into the development container. You can override environment variables, drop into an interactive shell and read the logs easily. And, most importantly for this example, you can attach a debugger quite easily to the application container instance which is currently subject to your development work.

Gefyra creates a dedicated Docker network on top of your local Docker host, and by employing a sidecar which creates the underlying VPN connection to your Kubernetes cluster, your application behaves as if it were already running within the context of a Kubernetes namespace. That makes it very convenient for developers to write code that is, at execution time, already part of the Kubernetes cluster.
So, speaking about my example project here: I will bootstrap a local Kubernetes cluster using k3d. I have to do a couple of cluster configs, such as selecting the right Kubernetes version, setting the correct port forwardings for HTTP traffic and, more importantly, for Gefyra to work.
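For orientation, a k3d invocation along these lines would cover those points. This is only a sketch: the cluster name, the k3s image tag and especially the extra UDP port for Gefyra's VPN connection (shown as 31820) are assumptions here, not values from the talk.

```bash
# Sketch only: cluster name, image tag and the Gefyra UDP port are assumptions.
k3d cluster create oauth2-demo \
  --agents 1 \
  --image rancher/k3s:v1.24.3-k3s1 \
  -p "8080:80@loadbalancer" \
  -p "31820:31820/UDP@agent:0"
# --image pins the Kubernetes version, the first -p forwards local HTTP
# traffic (localhost:8080) to the cluster's ingress, and the UDP mapping
# exposes a port for Gefyra's VPN connection.
```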
I have dependencies which I am going to install using Helm charts and Kustomize, and I will be providing a plain YAML file, or a couple of plain YAML files, to describe my custom backend application. And of course developers are also using kubectl during development time in order to interact with the Kubernetes cluster.
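During development that mostly means watching and inspecting the demo workloads; the namespace, deployment and container names below follow the talk, while the commands themselves are ordinary kubectl usage.

```bash
kubectl -n oauth2-demo get pods                                    # watch the demo pods come up
kubectl -n oauth2-demo logs deploy/oauth2-demo -c oauth2-demo-app  # read the backend's logs
```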
So I need a Kubernetes cluster; as I said, it'll be k3d. My demo application will demonstrate an OAuth2 login flow, or more precisely an OpenID Connect flow, so I need some sort of identity provider. In my example I will be using Keycloak, and for that to work I need a custom realm with an OAuth2 client for my backend service, and of course a test user with the required privileges to log in. I also need an ingress configuration that supports that fully fledged local OAuth2 login flow for my demonstration. On the other hand, of course, I need the workloads in order to run my demo backend application, and for this example I will be employing the OAuth2 Proxy in a Kubernetes sidecar pattern. The backend application itself is made with FastAPI, a popular Python web framework. But this example, or better yet this pattern, is not specific to Python as a programming language; rather, it's agnostic to the programming language that you are using.
To set everything up, I will be using another tool that's created by our teams. It's called Getdeck, and it will set this environment up for me in a second. I will just run deck get against this public repository, and I'm selecting the deck oauth2-demo. This is all open source. It processes five sources of different kinds, such as Helm charts for Keycloak, plain YAML in order to initialize Keycloak with the realm, the user and everything, and of course my demo application as the backend. In addition, for this deck the Kubernetes dashboard is available, which is convenient for developers to watch all the services and pods coming up. The published ports are listed down below here, so my HTTP traffic will go to localhost port 8080. Wonderful.
So, what's inside my example? I have at least two local domains running in order to establish the OpenID Connect flow. On the one hand there is the oauth2-demo domain. I am using nip.io here on port 8080 in order to serve my OAuth2 demo, which is built with the sidecar pattern. That means that each and every request that comes through my ingress is first authorized by the OAuth2 Proxy, and only authorized requests are passed through to my Python backend application. That means I have separated my backend completely from the OAuth2 login flow, which greatly increases at least the security, because I don't have to write custom code for the login mechanism and I'm using standard software here. On the other hand I have Keycloak running on another domain: it's keycloak on my localhost nip.io on port 8082.
So, just a quick recap of how the OpenID Connect flow works. If I want to access my backend application from my browser, I first have to request an authorization code, and this is done by a redirect from the OAuth2 reverse proxy to the JSON web token issuer, in this case Keycloak. Upon providing my login credentials I receive the authorization code and a redirect, which takes me back to the OAuth2 reverse proxy. From there my authorization code gets exchanged for a valid access token that will travel along in an HTTP header to my backend application.
Having a closer look at the workload YAML that creates the sidecar pattern: in the container section you can find two containers running in one pod. The first is the OAuth2 Proxy, which is served on its port and published through the Kubernetes service. The second container is the OAuth2 demo app. This one will be the target of the Gefyra bridge operation in a couple of minutes, and this container is listening on port 8155; the OAuth2 Proxy is configured to upstream to that port. This sidecar pattern is then rolled out with each replica of the deployment, meaning that for every pod scheduled by the oauth2-demo deployment, exactly this pattern is part of the pod. Therefore I'm not scaling just one component, for instance my OAuth2 demo app; rather, I am scaling the OAuth2 Proxy as well as my backend application. And this is pretty much what the sidecar pattern is useful for.
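To make the shape of that workload concrete, here is a minimal sketch of such a Deployment. The image names and the proxy's listen port (4180, OAuth2 Proxy's default) are assumptions for illustration; the namespace, the container names and the app port 8155 follow the talk.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-demo
  namespace: oauth2-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-demo
  template:
    metadata:
      labels:
        app: oauth2-demo
    spec:
      containers:
        # Sidecar: authorizes every incoming request before it reaches the app.
        - name: oauth2-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:latest  # tag is illustrative
          args:
            - --http-address=0.0.0.0:4180                  # assumption: proxy listen port
            - --upstream=http://127.0.0.1:8155             # pass authorized requests to the app
          ports:
            - containerPort: 4180
        # The FastAPI backend, later the target of the Gefyra bridge.
        - name: oauth2-demo-app
          image: oauth2-demo-app:latest                    # assumption: illustrative image name
          ports:
            - containerPort: 8155                          # app port from the talk
```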
I will head over to my browser
and I point it to auth two demo on my
local domain. Just hitting the refresh button here and
the OAuth two proxy welcomes me with this login
screen and it is configured to use OpenID connect for
the login flow. So I hit the sign
So I hit "Sign in with OpenID Connect", and please notice how I get redirected to Keycloak here. I am now using my demo user, it's john at gefyra.dev, and the same goes for the password. I hit the login and here we go: I get redirected back to the oauth2-demo domain, and the response of this simple example application is JSON telling me hello world. This application also allows me to retrieve items, so I can go to the items route, put in an item ID of 123, and it says: ooh, an internal server error. So this is an HTTP 500, a bug that is currently present in the current state of the application.
And yeah, we already found the bug. Now we can take care of what is required with Gefyra in order to fix it. So this is the code that is serving the items route. The first thing that comes to mind: I think it's a not too uncommon anti-pattern to have an if/else branch that separates two environments. In this case it is probing whether an access token is available, and if not, which is the case for a development environment that was not created using Kubernetes and this pattern, the token is not really used for creating the response. As you can see, the email is not given in this case. But once this application moves into a Kubernetes environment employing the sidecar and also forwarding the access token, I'm hitting this nasty bug. As an attentive audience, you might have already spotted what the problem is here; I'd like to keep it open for a couple of seconds.
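Since the actual source is only visible on screen, here is a minimal sketch of what such an items route with that if/else anti-pattern can look like. It is not the demo's real code: the header name, the PyJWT-based decoding and the module layout are assumptions; only the behaviour (token present vs. absent, and the failing key access) follows the talk.

```python
# Sketch of the anti-pattern described in the talk, not the original demo source.
import jwt  # assumption: PyJWT; the real demo may decode the token differently
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int, request: Request):
    # The OAuth2 Proxy sidecar forwards the access token in an HTTP header.
    auth = request.headers.get("authorization")
    if auth:
        # Kubernetes path: decode the forwarded JSON web token.
        claims = jwt.decode(auth.removeprefix("Bearer "),
                            options={"verify_signature": False})
        # The nasty bug: the claim key is actually lowercase "email",
        # so this lookup raises a KeyError and the request ends in an HTTP 500.
        return {"item_id": item_id, "email": claims["Email"]}
    # Development path without the sidecar: the token is not used at all,
    # which is why no email shows up here and the bug never appears locally.
    return {"item_id": item_id}
```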
Right. The process with Gefyra is quite simple. You run gefyra up in order to set up the development environment and infrastructure. You can then do a gefyra run with virtually any image that you'd like to run in the context of a Kubernetes namespace; maybe it's a new service that is not yet reflected in Kubernetes workloads, but you would like to run and develop it within an existing mesh. Afterwards you can run the bridge command, which intercepts an existing service and tunnels all the traffic hitting that particular container to your development instance. In this state you may write code, fix bugs, or write new features. Once you're done, you can commit and push, and the CI/CD pipeline will take over. From a developer's perspective, you're now able to run the gefyra down command in order to tear the development infrastructure down.
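In short, the cycle described above boils down to four subcommands; the concrete run and bridge invocations for this demo are shown below.

```bash
gefyra up          # set up the local development infrastructure
# gefyra run ...     start your container locally, wired into the cluster (see below)
# gefyra bridge ...  intercept the in-cluster container and tunnel its traffic to you
gefyra down        # tear the development infrastructure down again
```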
So, the gefyra run command with this parameter list does the following. I specify the image; this is basically the same image that is deployed to my Kubernetes cluster at the moment. I'd like to assign a name for further reference: my fastapi demo. I want this container instance to run in the oauth2-demo namespace, which is the namespace of my example application, and I'd like to mount my current working tree into this container. I am also overriding the start command here. I am using debugpy, which is a Python implementation of the debug adapter protocol, a protocol that is available for a long list of programming languages too. debugpy waits for a debug client to connect, in this case on port 5678, and then I start Uvicorn, my application server. This is basically the same command that the application is started with in Kubernetes at the moment, except for the reload flag at the end, which allows me to reload the application upon changing my source code.
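Put together, the invocation looks roughly like the following. Treat it as a sketch: Gefyra's flag spellings have changed between releases, and the image, the mount path, the exact container name spelling (myfastapi-demo) and the uvicorn module name (main:app) are assumptions; the oauth2-demo namespace, debug port 5678, app port 8155 and the reload flag follow the talk.

```bash
# Sketch only: flag spellings, image, mount path and module name are assumptions.
gefyra run \
  -i oauth2-demo-app:latest \
  -N myfastapi-demo \
  -n oauth2-demo \
  -v "$(pwd)":/app \
  -c "python -m debugpy --listen 0.0.0.0:5678 -m uvicorn main:app --host 0.0.0.0 --port 8155 --reload"
```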
Now I will head over to VS Code to demonstrate how this looks.
So first I have to run the gefyra up command. It installs Gefyra's operator into my local Kubernetes cluster and does everything else which is required for my local development infrastructure. I have the code which is currently serving the server error already at hand here, and I will be running a development instance of this application container in a second. So, Gefyra is up and ready, and I am running the run command. Okay, this container has been started locally; I can see it in docker ps, with my FastAPI running locally here.
Now, for VS Code to connect to the debugger, I'm going to create a launch.json. We'll do this with the debug extension: create a launch.json and select the remote attach option in order to connect to a remote Python debug server. It asks me about the host name, and in order to find out the IP address of my local container I am running the docker inspect command, which tells me the IP address. By the way, this will be part of Gefyra in one of the upcoming releases to make this even easier.
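The docker inspect call is not spelled out on the slide; one common way to print just the IP address looks like this (the container name is the one used in the run sketch above):

```bash
# Print only the container's IP address from its network settings.
docker inspect \
  -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
  myfastapi-demo
```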
So I put the IP address in here, it's the default port, I hit enter, and VS Code creates a launch.json.
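The generated configuration looks roughly like this; the host IP is just a placeholder for whatever docker inspect returned, and the path mapping is VS Code's default suggestion.

```jsonc
// .vscode/launch.json (sketch; the host IP is a placeholder)
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Remote Attach",
      "type": "python",
      "request": "attach",
      "connect": {
        "host": "172.17.0.2",  // placeholder: the IP reported by docker inspect
        "port": 5678           // debugpy's listen port from the run command
      },
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}",
          "remoteRoot": "."
        }
      ]
    }
  ]
}
```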
Now, by starting the debugger, I am getting connected to my local container instance, and it already tells me that Uvicorn is running on this container address and has basically started the application already. Wonderful. Getting back to my slides: I have now started the container instance running in my Gefyra context, and it is running as part of the Kubernetes namespace in my application landscape.
So, in order to receive requests, I have to create the Gefyra bridge, and this command is going to do that. It's a bridge, and on one end of the bridge I target my fastapi demo, the local container instance that I just created; on the other end there is everything scheduled by the Kubernetes deployment oauth2-demo. This deployment is placed in the namespace oauth2-demo, and I am particularly interested in all containers within the pods that are named oauth2-demo-app. You remember, it's the second container in my sidecar pattern, which runs on port 8155, so this is what I'd like to forward to my local container instance. And I assign the name my fastapi at this point, again for further reference.
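As with the run command, here is a sketch of what that bridge invocation can look like. The flag spellings are assumptions and have changed between Gefyra releases; the namespace, deployment name, container name and port come from the talk, and the local container name matches the run sketch above.

```bash
# Sketch only: flag spellings are assumptions and vary by Gefyra version.
gefyra bridge \
  -N myfastapi-demo \
  -n oauth2-demo \
  --deployment oauth2-demo \
  --container-name oauth2-demo-app \
  --port 8155:8155 \
  -I myfastapi
```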
So, getting back to VS Code, I will change to the terminal again, clear it, and run the gefyra bridge command, and it selects one pod with this name. Gefyra is now waiting for the bridge to become active, and once the connection is established, my local container instance is serving the requests for the oauth2-demo domain. I can demonstrate this if I go back to the browser: here I can go, for instance, back to the front page, and I get the already working hello world response. If I go to items 123, it's still the internal server error, of course, because I did not change any code so far.
so far. So getting back to
vs code and challenging
to my code that is being executed,
I can now drop a breakpoint
at this position in order to
interrupt the execution and to find out what's wrong
with this code. So if I now
start a new request on that particular route,
it is interrupted and vs code
already popped up with the breakpoint being
halted the execution. And I'm now able to inspect
I'm now able to inspect the data, which is the decoded JSON web token, and to look at the keys of the web token. You can see it's just an ordinary JSON web token, nothing special here, but it tells me the email key is actually written with a lowercase e at the beginning. And if I let this run continue, I'm even able to look at the output of the application, and there it tells me about the KeyError too. So now I can change this uppercase E to a lowercase e and remove the breakpoint.
And since I activated the reload flag for my application server, the code will be reloaded, and my application is now executing my changed code. I hit refresh and it still says internal server error; that is because I forgot to save the source file. Okay, here we go: the response is now working and I have fixed the bug. Wonderful.
Yeah, basically that's it. Back to my slides. As I said, in order to fix this bug, I had to change the uppercase Email to a lowercase email, because this key access caused an exception. That's it. I have demonstrated that Gefyra is able to provide a Kubernetes-native development approach by creating a local container instance with all the conventional capabilities for developers writing their code, while that code is already being executed within a Kubernetes context, side by side with the existing adjacent services.
So there's no need anymore for a Docker Compose or Vagrant setup, or any custom scripts that create a development infrastructure which doesn't really match your production systems. If you want to follow along with this example, please head over to gefyra.dev, use cases, OAuth2 demo, and if you're interested in using Getdeck, I would be happy to see you in that repository too. If you have any questions, please reach out, and in any case, have a wonderful day.