Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everyone, my name is Matt Williams and I'm an evangelist at
Infra. I'm super excited to have you joining me for my session here at Conf42.
It's my home lab: why would I want SSO? Now that's a
nice and short title for the schedule, but really that
title should be, it's my home lab. Why would
I want single sign on or roles or users? And this whole talk
is specific to Kubernetes. I mentioned that
I'm an evangelist at Infra and we'll learn more about what Infra
does later in this session. So let's get started with some definitions.
What is this homelab thing in the title?
Well, a homelab is whatever you want it to be. Traditionally,
it's a place to practice. Maybe at work you're using
different tools and technologies like kubernetes or something else,
and you want to practice with it. You would like to try out different
uses that your boss might not approve of
until you prove that it's really worth trying. All of these things are
great uses for a homelab. It's also a great place for tools
for your home. Maybe you and your family have
a shared calendar and you don't want to host it with one of the clouds.
Or maybe it's collaboration tools such as having your own file server that you
share with your significant other and with your kids. All of
that is a good use for a homelab. Okay, so what does a
homelab look like? What goes into it? Well, there are no
rules, so it's whatever you've got. Yeah, you're probably
going to see plenty of people on YouTube or Instagram with
big professional racks that look really,
really cool. And they have all sorts of fancy networking
equipment and rack mounted servers. You don't need to have that.
A rack is nice if you can afford it, but it isn't required.
Just use what you've got. Maybe you have an extra laptop, or
even your current laptop has some extra cycles to spare. Maybe you have
a Raspberry Pi or some other box that you've acquired along the way.
All of those things can be used in your home lab. Maybe you
actually do have a rack that you got from eBay that a company was no
longer using and so you got it really, really cheap. And maybe you got some
of those older Dell servers. They work perfectly well in
a homelab. So really the limit is defined by your
budget, maybe even the approval levels of your significant
other. So here's what was in my first home lab,
I was working for a company called Capteris, which was then bought
out by OpenText. I needed to get up to speed on Fax over IP,
so I bought a used Cisco 2600 with some analog
phone cards and then I hooked it up to the OpenText fax
gateway, got some simple routers, and used some virtual networking
tools. Now this was great for me at the time, but it would totally
not go over well today. That Cisco box sounded
like a jet engine, and remember what I said about getting approval
from a significant other that would definitely never
fly today. I'm a lot further along in my career, but my
homelab doesn't look any more professional. My servers are
just computers that don't get any love anymore. The workhorse is
a 2011 Mac Mini, but here you can also see
that there's a Pi by the utility sink next to a fly swatter. And here's
a stack of Pis and some other boxes I collected as I was
bringing them together in one spot in the house. Most of the software running
sits on Proxmox, a virtualization platform, but I'm also
using TrueNAS to try out its virtualization tools.
I have Portainer as a front end to Docker and Kubernetes, running
as VMs and containers. I have a few Kubernetes clusters using
minikube, kind, MicroK8s, K3s, and more.
Why? Because I want to practice and play to
see what's interesting with each of them, what makes them
unique and special. I also run various software
experiments. Now, being married with
a three-year-old means the weekends spent unwashed,
eating takeout junk food, and learning a new language are
long since gone. But I still experiment with Next.js
or SvelteKit in the evenings after reading Sophie Mouse to
Stella, and I want a place to host those experiments.
That tends to be one of the Kubernetes clusters. One of
the other key bits of software running is home assistant.
I've automated the lights and other systems based on sensors
all around the house. Remember what I said about hosting a calendar?
We have one that shows what the trash trucks will be taking this week.
Are they also taking compost or recycling that day? And that's displayed
on an old iPad on one of the walls. And of course, a homelab doesn't
just have to be machines that are located inside my home,
it can also be stuff up in the cloud. So I have a bunch of
accounts with AWS, Azure, Google Cloud, even Oracle
Cloud, and I use all the free-for-life tiers and
free trial periods. I use all those types of services on all the
different clouds. And that way I can kind of spread
around the free stuff and get to play around with lots of tools
without paying a whole lot. I think my AWS bill
each month is about $0.53. So that's
what my home lab looks like today. So why
Kubernetes in the home lab? Well, I talked about that
a little bit a few slides ago. It's really a chance for me to practice.
I use kubernetes at work and want to understand how it works, so I
play around with it at home. And going back a few decades when
we had clusters, they had to be identical hardware. I remember running
a Novell NetWare active-passive cluster, and the machines had to be absolutely
identical. But these days with Kubernetes, you can
have a hodgepodge of machines, like whatever these things are in this picture.
And as far as computers, that's precisely what I've got,
a hodgepodge of machines. Well, maybe some of those raspberry pis
look exactly the same. But now I can have a cluster that spreads across multiple
types of machines, which is awesome. And that allows
me to create more consistent deployment practices even in my own
home. Remember I told you about those software experiments? I have one way to
deploy all those software experiments rather than having to think about it.
If I wanted to push something to Docker, I'd do it one way.
If I wanted it in Proxmox containers, there'd be another way.
If I wanted it to go to Kubernetes, there'd be yet another way.
Now I have one way to deploy all these software experiments
or other tools that I find on the Internet. So if I'm using lots
of different tools that I find online, and one of them gets compromised while
running as admin, it now has access to the secrets and data
of all these other services. And that's bad.
So by implementing users and roles and least privilege,
I can go a long way toward avoiding those problems.
Okay, so let's talk about users. Well, they don't
exist. Everything in Kubernetes is a resource, and there's no
resource for users. Actually, users are just certificates.
So to create a user, we need to create certificates and then
put those certs into a kubeconfig file. So here's what that
file looks like, minus some of the actual super long strings that are the
certs. Up at the top we have the cluster and how to access it.
Down at the bottom is the user and the cert associated with it.
In the middle is the context that links the user to the cluster,
and we can have many of each type defined in a single file.
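A stripped-down kubeconfig along those lines might look like this (the names, server address, and truncated cert data are illustrative):

```yaml
apiVersion: v1
kind: Config
clusters:                       # the cluster and how to access it
  - name: homelab
    cluster:
      server: https://192.168.1.10:6443
      certificate-authority-data: <base64 CA cert>
users:                          # the user and the cert associated with it
  - name: matt
    user:
      client-certificate-data: <base64 client cert>
      client-key-data: <base64 client key>
contexts:                       # links a user to a cluster
  - name: matt@homelab
    context:
      cluster: homelab
      user: matt
current-context: matt@homelab
```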
Okay, so what is a role? Well, a role just defines a level
of access that a user has to the cluster. And that level of
access is defined with a resource and a verb. Now here's an example
of a role. This role is called marketing-dev, and
it says that for the pod resource, the user can get, watch, and
list. Normally there'd be a lot of sets of resources
and verbs, but I wanted to keep it simple for this session. But that's
how we create a role. Just define it in a yaml file, then apply that
to the cluster. So let's create a user with the tools
built into Kubernetes and the OS. It's not hard,
but it is a bit tedious. It's all about creating the key,
signing it, then adding it to your kubeconfig file. Easy right?
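As for the role itself, the marketing-dev definition described a moment ago might be written like this (a minimal sketch; the namespace is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: marketing-dev
  namespace: default            # Roles are namespaced; ClusterRoles are not
rules:
  - apiGroups: [""]             # "" is the core API group, where pods live
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
```

Apply it with kubectl apply -f and it becomes available to bind to users.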
So you create the key and then a certificate signing request.
That request goes to the server, then use the kubectl command to
approve it. Next you download the signed request and
build the kubeconfig file. Finally you distribute the file.
The commands I showed were fragments of the real commands, but you can
find the full commands with explanations in this blog post.
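The overall flow can be sketched as shell commands. This is a hedged sketch: the username matt, the group marketing, the CSR object name, and the cluster name are all illustrative, and the kubectl steps assume a reachable cluster — see the blog post for the full, explained commands.

```shell
# 1. Create a private key and a certificate signing request (CSR) for the user.
#    CN becomes the Kubernetes username; O becomes a group.
openssl genrsa -out matt.key 2048
openssl req -new -key matt.key -out matt.csr -subj "/CN=matt/O=marketing"

# 2. Send the CSR to the cluster and approve it with kubectl.
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: matt
spec:
  request: $(base64 < matt.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl certificate approve matt

# 3. Download the signed cert and build the kubeconfig entries.
kubectl get csr matt -o jsonpath='{.status.certificate}' | base64 --decode > matt.crt
kubectl config set-credentials matt --client-key=matt.key \
  --client-certificate=matt.crt --embed-certs=true
kubectl config set-context matt --cluster=my-cluster --user=matt

# 4. Finally, distribute the resulting kubeconfig file to the user.
```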
Now possession of this cert means you have access to the
cluster. So it's important to verify the user still has access.
That can be done by regenerating the certs and redistributing the
config file every five to 30 minutes. And of
course that sounds painful. So you might come back to the idea of just giving
everyone admin access. Well remember, Kubernetes is
just remote execution as a service. And if everything
shares the same credentials and one job or user is compromised,
then the entire environment is compromised. Maybe your user isn't
compromised, but is fired. One disgruntled
user with admin can do a lot of damage. So if
admin for all isn't a great solution, surely there must
be a way to automate it. I'll give you two solutions here.
First is this script from Brendan Burns. Now today Brendan
is a corporate vice president at Microsoft,
but eight years ago Joe Beda, Craig McLuckie, and Brendan
created a little open source project you might have heard of called Kubernetes.
And this script basically goes through the same steps I did
just now, but it skips what is probably
the hardest part: distribution of that config file.
So how about something easier and more self contained? Well that's where
Infra comes in. Infra is a 100% open source solution
to this problem. It's 100% free to use.
We've been working on it for a couple of years now. And the original founder
has also created another open source tool called Kitematic.
There are two ways you can use it: you can
host it yourself, or you can use Infra Cloud. Infra Cloud isn't fully
released yet, but if you're interested, we can get you in on the beta.
So let's do a demo now to see how easy it is.
Okay, so here we are at the command line. I'm going to install Infra,
the server, to my Kubernetes cluster. I'm using
Docker Desktop for my Kubernetes cluster, and I'm
going to use a values file. So I'm just going to show you this demo
values file. It's really simple, and
it just shows that I'm defining a user. That user is called matt@example.com,
and it's got a password of password. Now normally I wouldn't be doing
a password as password, because it's not very secure.
Normally I would create a secret in Kubernetes
and then refer to that secret within this file.
But it's a demo.
And then I've got a grant, and that grant just says that
the matt@example.com user we just defined is
an admin within the resource of Infra.
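Based on that description, the demo values file might look something like this. This is a sketch: the exact key names are assumptions, so check the Infra Helm chart's documented values before using it.

```yaml
# Sketch of a demo values file for the Infra Helm chart.
# Key names are assumptions based on the narration above.
server:
  config:
    users:
      - name: matt@example.com
        password: password      # demo only; normally reference a Kubernetes secret
    grants:
      - user: matt@example.com
        role: admin             # admin within Infra itself
        resource: infra
```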
Okay, cool. So now we
can run helm upgrade --install infra infrahq/infra, and I'm going
to specify the values file, demo-values.yaml.
And that's it. We're installed.
One of the things it tells me is if I run this
command, it'll get me the endpoint where my
server is. And there it's
localhost. So let's go ahead and open up
the browser and go to localhost.
And when it first starts, it can take a few seconds
for it to boot up. But if I refresh,
there we go and my login screen
shows up. So I created a user called matt@example.com
and the password is password.
Okay, so now I'm in. The first thing
I want to do, other than zoom in a bunch, is
to connect a cluster. So I'm going
to call this one docker-desktop, and
it gives me a command to run.
So I'll copy this and go back to
here, I'll paste that in. Now you might be
thinking, wait, you just installed infra and you have to do another install.
Well, this is a self-hosted instance of Infra, and
there are two pieces to it. There's the Infra server,
which is what you log into and which is the authoritative source
for user information. But then there's the connector.
And you will have one server
and then lots of connectors, one for each of the
clusters that you want to manage.
Now there's one other thing that I want to add to this. Well, let's take
a look at this command. It's helm upgrade --install, and
this time I'm installing the Infra connector. Well, that's what I'm going to call it.
And it's at the same repo that
we used before, I think.
And then I'm setting a server host of localhost.
Then I want to call this docker-desktop: when I use the UI,
I want to call this particular cluster docker-desktop. And then I'm
setting an access key. That access key just
gives the connector something to
give to the server to say, hey, I'm the connector
that you just generated that command for.
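Put together, the settings being passed with --set correspond roughly to these values. This is a sketch: the key names are assumptions based on the narration, not confirmed chart values.

```yaml
# Sketch of the connector settings described above; key names are assumptions.
connector:
  config:
    server: localhost           # where the Infra server is running
    name: docker-desktop        # how this cluster will be labeled in the UI
    accessKey: <generated-key>  # proves to the server which connector this is
```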
Now there's one more thing because I'm using Docker desktop and
I'm using self signed certificates and I'm
not using let's encrypt or some other solution for certificates.
I need to tell it to, well I need to tell it something.
So what was it?
It was this command.
So here I'm saying set connector config
skipTLSVerify to true. That just lets
me skip any sort of problems with my self signed
certificates. Normally in the real world with real
clusters you would never do this. It's just because
it's on Docker desktop and I'm using self signed certificates
so I can press enter and
now I'm done. So now if I go back to the browser,
you can see my cluster has been connected, so I can click on finish.
So now it knows about my docker desktop. In fact, if I click on this,
I'll see I can grant individual
users access to the whole cluster as well
as to individual namespaces. I only have one user,
so let's go ahead and create a new user. I'll add user
and I'll call them user1@example.com,
and it generates a temporary password. I'm going to copy that
and save it.
Okay, so that defines a new user. I can
also define groups. So if I have a few dozen or a few hundred
users, and some are dev,
some are QA, some are marketing, some are in whatever
other groups, I can create groups
that hold each of those users. So I'll add a group and call it
dev. And now for dev,
I want to add both Matt and
user1.
Cool. Now if
you have hundreds of users like I suggested, you probably
wouldn't want to enter them manually here. You could also use the
command line and you can add a whole bunch of users right
there on the command line. But you're probably not even going to want to do
that. Instead you probably have something
like Okta or Google OIDC
or Azure AD or another OIDC
provider. And so with providers you can just connect to those and
we'll get all the users and groups from there.
But my test users and
groups are set. So now I want to grant access,
let's see for the entire thing
I want to grant my dev group,
let's say view access and add that.
And then for, let's go
to the default namespace and I'll
add Matt as an admin
and I'll add user1.
Let's give him exec and
add that.
Okay, so now I'm set. Now I can go back to my terminal
and one thing I'm going to do is check what directory
I'm in. I'm in my downloads folder, so I'll go to
~/.kube. And when I run
kubectl or any Kubernetes command, it's getting my
configuration information from this config file. So I'm just going
to move the config file: move config to
config-conf42.
And now if I run kubectx, which is a context
plugin for krew, for kubectl, it says there
are no Kubernetes clusters. So I'm going to cancel that.
And now I can run infra install...
and what was my first user? matt@example.com.
No, not install, it's infra login.
And it's not login matt@example.com, it's login
localhost, because that's what
I want to log into. I don't want to log in to the user,
I want to log in to my cluster. I want to trust this certificate.
And now my username is matt@example.com
and the password is my super secure password.
Okay, so which directory am I in? I'm still in
~/.kube. So now let's take a look at config.
We see right away there are a bunch of things in here.
I can see I've got a cluster that's
been defined, and the name of that is
infra docker-desktop. I've got another cluster defined, which
is docker-desktop default. I've got a couple of contexts,
and then I've got a user,
matt@example.com. That's pretty cool.
And so now I can just run kubectx,
and I see both of those contexts that exist.
And if I go into any one of them, let's say that one, and
do a get pods,
I see everything. Cool. But if I were to go
in as, let's say, infra login localhost with
my user as user1@example.com and the
password... oh wait, no, that password was the new
password I was given, and I have to update it because it's
a temporary password. I'll set it to
password password.
And now if I do,
let's do bat config.
I see the same things, except down at the bottom my user
is user1@example.com.
Oh, actually, my
older matt@example.com user is still there.
If I had run infra
logout before logging in as user1, then the matt@example.com
user information would have been purged from
my config file. But if
I take a look at infra list, it just shows me, okay,
I have view access to the whole cluster, but I also have
exec access just to the default namespace.
So what we've done here is create
two users and assign them different roles.
View and exec. View is one of the
default roles that comes with Kubernetes. Exec is
one of three roles that we add
when we install the connector: we add exec,
we add port-forward, we add logs.
I think that's it. And then you can also add your own
roles. For any role that you've created in your Kubernetes cluster, just add
one label to it, and we'll be able to see
it and then be able to assign those roles within the UI.
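For example, a custom role could be exposed to Infra with a single label along these lines. The exact label key here is an assumption; check Infra's documentation for the real one, and the role itself is illustrative.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: log-reader
  labels:
    app.infrahq.com/include-role: "true"  # assumed label key; see Infra's docs
rules:
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list"]
```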
Or actually you can do everything from
the command line as well. You don't just have to use the UI. So that's
the quick demo. Let's run back to the presentation.
In summary, you've seen that homelabs let you practice with whatever
tools and technologies you want, whether they're things at work
or things you wish you were doing at work.
If you're using Kubernetes, either at work or at home, you should be
using users and roles and single sign-on. Unfortunately, users in
Kubernetes are hard, but tools like Infra and that script
from Brendan Burns help make it easier. I hope you enjoyed this,
and if you have any questions, you can reach out to me on Twitter,
or I'll be in the Discord. Thanks so much and goodbye.