Transcript
This transcript was autogenerated. To make changes, submit a PR.
Well, hello there. My name is JJ Asghar and I'm a developer advocate
for IBM Cloud. You're here to see a simple
Python application deployed to OpenShift or Kubernetes,
and I'm going to walk you through it during this talk.
I'm going to start with some slides first, just to kind of make sure that
we all have the same language and vernacular. And then from there I'm going
to go into it as quickly as possible, and hopefully you'll see
how easy it is to get a very simple application,
a simple Python application, deployed to OpenShift
specifically. All this stuff you can take and run with
yourself. And hopefully you'll see how easy it is. And let's see if we
can do this in under 20 minutes, from
soup to nuts. This will be fun. Come on.
All right. Hopefully you can see it: OpenShift and
Kubernetes in way too short of a time. Again, hi, JJ,
developer advocate. And I really do have the email address awesome@ibm.com.
You can find me on Twitter at jjasghar. If you
ever have any questions, or if I can help in any way, never hesitate to
reach out. My job is to be a personable nerd, to
help you get the information you need.
So let's start with some building blocks. First, we need to start somewhere.
In order for us to understand the advantage of using containerization,
or Kubernetes, or OpenShift for that matter,
we need to build up from a foundation. So let's start
there. Everything inside the container
ecosystem is built off of something called a container, hence the
term. There are two major players in the space right now:
one called Podman, from Red Hat, and the other one called Docker.
Unfortunately, my marketing people informed me I'm not allowed to use the Docker logo
due to branding rules, so of course it's an old man yelling at the cloud.
There are multiple offerings: Docker build,
Podman, there's Kaniko. There are different ways of doing it, but they all adhere
to something called the OCI spec. They all need a registry, someplace
to actually put the container and
hold it. And everything's built from something called a Dockerfile,
which is a souped-up bash script, in all honesty.
And then finally, as long as it
adheres to the OCI spec, you can package it and run it anywhere
with a container runtime. So that whole promise of a
developer saying, hey, it worked on my laptop: if they containerize it
and send it off to QA or production,
as long as everything adheres to the OCI spec, it will all work.
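To make that concrete, here is a hedged sketch of what a Dockerfile for a small Flask app like this talk's demo might look like; the base image, port, and file names are my own illustrative assumptions, not the repo's actual Dockerfile.

```dockerfile
# Hypothetical Dockerfile for a small Flask app; the base image,
# port, and file names are illustrative assumptions.
FROM registry.access.redhat.com/ubi8/python-39

# Install dependencies first so this layer can be cached
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy in the application itself
COPY app.py .

# The port the Flask app listens on inside the container
EXPOSE 8080

CMD ["python", "app.py"]
```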
As you start getting more advanced in the container ecosystem,
you need some way to orchestrate all of those containers. If you actually look at what
Kubernetes is, it is a very stripped-down
way to specifically just run
a container, or orchestrate running containers.
There are obviously companies, like IBM, that
will allow you to run Kubernetes on our systems; we have
an offering called IKS, or IBM Kubernetes Service.
But Kubernetes itself is actually just community supported. The
actual underlying system of Kubernetes is open source,
run by the community, and there is no single
company behind it; it is just
a project. The framework is so small that it's designed
and scoped specifically to run containers and orchestrate them across a
bunch of different stuff, and we'll get into that in a minute. The platform, or
the user, is responsible for integrating more than just the core. So if
you need security on top of it, or other different systems, you actually have
to do the work to make that happen. That's the reason why
you see things like K3s out there, or other systems that
have built-in bits and pieces. And that brings us to the actual enterprise offering
of OpenShift. What OpenShift is, is a containerization platform
built on top of Kubernetes: the production-grade, actual
enterprise-ready Kubernetes for your system.
That's actually a lie, it's actually 4.8 now. There's
actual enterprise support behind it, which is one of the coolest parts about
it. So you actually do get the Red Hat ten-year support.
With OpenShift, there is a best-of-breed
stance, where it is an opinionated grouping of systems
around Kubernetes. But the beauty of it is that you
actually get the support to run your business on it. There are some strong
opinions inside of it, but if you go whole hog into the
OpenShift ecosystem, your developers, as you will see here in a moment,
will be able to have the environment that they need to run their
containerized applications. This is really important, because it's
no longer that you have to build it; you get to, in essence,
just buy it and get your business out the door, instead
of spending the time and effort to, build is
a good word for it, build what you need.
It's very much a conversation that you need to have with your teams,
and an understanding that if you are going into this ecosystem, there is an advantage
to leveraging something that's already supported and that everybody knows about,
instead of building up bits and pieces of what you need yourself,
if that makes sense. So I actually
lied at the very beginning of this,
inside of the Kubernetes ecosystem. And by the way, I'll use Kubernetes and OpenShift
interchangeably; I'll mention it specifically when it's an OpenShift difference.
If I say Kubernetes, it works on OpenShift,
and I'll specifically say OpenShift when it works on just
OpenShift, or the OpenShift layer.
But as I said, I lied at the very beginning of this:
when I said everything's built off of containers, that's not actually
100% true when it comes to Kubernetes. As you see here, with Kubernetes and OpenShift
at the bottom, the actual smallest thing it knows about is something called
a deployment. And as you can see here, there are three
different deployments going on. You have a deployment with a replica count of two,
where it will make sure that those two pods,
a pod being a grouping of n number of containers,
are always running. And the deployments for pods B and E
will always be running somewhere in the cluster, too. So you tell Kubernetes,
hey, I need a deployment with two replicas of pod A, and it will find a place
and go to hell and back for you to make sure that it is
running, which is important to know. And the smallest thing that
Kubernetes knows about is actually a pod; it's not a container.
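For reference, a deployment like that "replica of two" one is just a short chunk of YAML; this is a hedged sketch, with the name, labels, and image made up for illustration:

```yaml
# Illustrative Deployment; the name, labels, and image are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-a
spec:
  replicas: 2                # Kubernetes keeps two copies running at all times
  selector:
    matchLabels:
      app: pod-a
  template:
    metadata:
      labels:
        app: pod-a
    spec:
      containers:
        - name: web
          image: quay.io/example/pod-a:latest   # hypothetical image
          ports:
            - containerPort: 8080
```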
Another important thing to mention is nodes, which are actually
the compute of Kubernetes or OpenShift. And when
you talk to it, you talk to the API, and Kubernetes
figures out where to run the thing. So even though we have three nodes here,
to us as the downstream user,
Kubernetes and OpenShift is just one big blob. That's important
to know, and a good thing to anchor
your thinking on: all Kubernetes or OpenShift is,
is just an API to compute. If you anchor your
thoughts and kind of build up your knowledge around that simple concept,
where all Kubernetes is, is an API to compute,
everything else kind of just falls into place.
I've taught Kubernetes to quite a few people now,
and after discovering that and speaking to some people about it,
it really does make sense, because that's what you do: you give it
a declaration of how many containers inside of a pod you want
to run, and it will do the job to make sure that it happens.
The storage and all that stuff is all around it,
but the core aspect of Kubernetes is that it's an API
to compute. Again, you can actually have affinity rules and anti-affinity
rules for specific nodes. Say, for instance, you're doing some ML
or graphics work: you can make sure that the nodes specifically have
graphics cards or whatever. But that's a little bit out of scope for this conversation.
What we are trying to understand is that we have this API that takes
a container and just runs it. And I apologize, I keep looking off over to the
side here. My screen is a little bit weird,
so I'm looking while trying to share at the same time, but hopefully it
makes sense. I am more than aware
of where my eyes are going sometimes. Anyway,
let's keep going. So let's talk about some internal concepts
of Kubernetes and OpenShift, to kind of make sure that when I
start playing around with it live here in a second, you kind
of get the reference points.
Of course, because you have no idea what's actually inside the system,
or what the names of the pods are, you need some way to name them.
And what do all engineers go for on the web?
DNS. Of course, we have our own DNS inside of Kubernetes.
It's called KubeDNS, and it does service discovery for you.
So you can have a service named foo-web and
map it to machines or pods.
For instance, you take this pod C here and label it, to make sure that
whenever pod A, internally inside the
cluster, is looking for foo-web, it knows where to go.
Now, that's not 100% true anymore; you now have ways to extend the
services outside the cluster into other offerings like Route
53 or different external DNS providers.
But for our conversation, KubeDNS is just
internal, and it's a way for the pods to talk to one
another so they don't have to know about each other directly. And if
a pod dies or whatever, traffic knows to go wherever the label points,
and the desired state actually gets it to the machine.
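A rough sketch of that service-to-label mapping, with hypothetical names, looks like this:

```yaml
# Illustrative Service; the names and labels are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: foo-web
spec:
  selector:
    app: foo-web        # any pod labeled app=foo-web receives the traffic
  ports:
    - port: 80          # the port other pods use to reach foo-web
      targetPort: 8080  # the port the container actually listens on
```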
It's much more complex than that, but for our understanding, that's more or less
what we need to know. The next thing you need
to know about is something called namespaces. Namespaces are extremely powerful inside
of Kubernetes, and as you notice here, we have two namespaces:
my-application and your-application. They're exactly the same.
They each have a service and two pods, and they just talk
to each other internally, inside the namespace.
This is very important to know. This is the way
you can slice and dice your Kubernetes cluster up into
your application and my application, or, for that matter,
dev, QA, and prod. So you can actually run dev,
QA, and prod all in the same cluster, and they cannot talk to
one another. Okay, now that's a little bit of a lie. There are
ways to cross namespaces, but for the purity
of this talk, that's not possible.
It's a way to slice and dice your Kubernetes cluster. So now,
all of a sudden, you don't have to worry about Jane Doe's
computer underneath her desk running UAT; you can actually
run it on the actual hardware that your Kubernetes cluster is running on.
This allows you to do real performance testing, and there are actually
huge benefits around it. So it's well worth spending your time looking into
this.
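For reference, carving a cluster up that way is just a matter of declaring the namespaces; a minimal sketch, with the dev/QA/prod names as examples:

```yaml
# Illustrative namespaces for dev, QA, and prod in one cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
```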
Okay, so now with namespaces done, the next thing, and this is specific
to OpenShift, is something called projects. Projects extend
the namespace ecosystem into some really interesting
isolated layers on top of it. It adds real
RBAC to it. So if you have, for instance, Active Directory,
you can actually tie your specific
projects to different
AD groups. So your devs can only get into payment-dev,
for instance, or your payment-prod can only be reached by your
DevOps people or whatever, and they are completely isolated
on top of it. Projects really are insanely powerful.
Also, with the workflow of OpenShift
specifically, and this is an OpenShift-specific thing, you can
actually leverage projects just like feature branches when you're using
Git. So if you use Git with feature branches,
you can create a project as easily as you do a
Git branch, push your code inside of it, have it
reach the things it needs so you can see that it works, then merge
that back into your main branch and delete that project, basically ephemerally.
This specifically allows developers to really use
the same kind of tooling and understanding that they use for their job, on
an interface that makes sense.
Projects really are something that just
ramps up the OpenShift ecosystem to help developers be successful.
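A hedged sketch of that feature-branch-style flow with the oc CLI; the project and repository names here are made up:

```shell
# Spin up a throwaway project for a feature branch (names are illustrative)
oc new-project feature-payment-retry

# Deploy the branch into it and test it against what it needs
oc new-app https://github.com/example/payments#feature-payment-retry

# Once it's merged back to main, throw the whole project away
oc delete project feature-payment-retry
```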
So we've talked about what's going
on internally, inside of the cluster. Now we need to talk about how to actually get
into it. And there are a couple of different ways of doing
that. The first one, actually not the default,
but the one that people mainly go to
for Kubernetes, is something called a load balancer. It's just like an F5
load balancer, and you can actually rip out the load balancer and put in other
load balancers. It takes input
from an external IP and balances it across the pods
that are required. You could actually have a whole talk just on load
balancing alone. If you're a networking nerd like I used to be,
you can go down that rabbit hole for a very long time. But there are all
these knobs and dials you can flip around to play
with. I would strongly suggest doing your own homework to
see how it fits for your application.
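As a minimal sketch, exposing a service that way looks roughly like this, with the names again made up:

```yaml
# Illustrative Service of type LoadBalancer; the provider
# provisions the external IP for you.
apiVersion: v1
kind: Service
metadata:
  name: foo-web-lb
spec:
  type: LoadBalancer
  selector:
    app: foo-web
  ports:
    - port: 80
      targetPort: 8080
```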
The hardest thing to understand in Kubernetes is something
called an ingress. This is actually the way,
in Kubernetes, that you map all your pods to the
different services, and how things come in. Now, it
becomes a really massive YAML file, as you see here in the example
on the far left, under ingress there. And that's a very simple example,
where it maps the external path to
foo-web. And then if it ever needs to hit data, it finds
the service for data and then goes to port 3000
on those pods. That's it.
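Reconstructed from that description, a simple ingress along those lines might look like this; the hostname and service names are assumptions:

```yaml
# Illustrative Ingress mapping paths to the foo-web and data services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com          # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo-web
                port:
                  number: 80
          - path: /data
            pathType: Prefix
            backend:
              service:
                name: data
                port:
                  number: 3000
```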
And that's pretty nice. It's a way to really map out how your microservice-based
architecture fits inside of Kubernetes. But the
problem is that it's very static. So anytime you need to add a new pod, or
anytime you need to move something, you have to edit the ingress, and it becomes
very tedious. And imagine you have 30 or 40 different services:
this YAML file gets massive, and it's really unruly
to deal with. So you have to really pay attention to this. And that's one
reason why, inside of OpenShift, they have something called routes.
Routes actually extend the ingress so you don't have to worry
about it. It is really just a couple of commands
that you run, and it deals with that YAML file for you.
It takes the challenge of ingress,
and having to deal with it manually, and turns it into an interface where you
can flip things on and off very easily, so you can
play around and get the things you need done quickly.
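For reference, the commands in question look roughly like this; the service name is illustrative:

```shell
# Expose an existing service as a route, then look up the URL it was given
oc expose service/foo-web
oc get route foo-web
```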
We're going to play with a route, too, during my demo,
just to kind of show you the advantage of it. And then we'll kind of
go from there. So I'm going to
talk about some things in the ecosystem, just to make sure that if you do
see these pop up as you go down this journey, you at least
have some form of reference, and it's not just gobbledygook.
Gobbledygook, I can't say words. And then
you can kind of work from there. The first
thing you need to talk about is Helm. You'll see Helm all over the place.
Helm has become the de facto way to install applications
on OpenShift and Kubernetes, even though the OpenShift ecosystem
has moved to something called operators, and that's a much larger conversation.
But when you see Helm, think of Helm as just a package manager: a way
to programmatically install a bunch of things inside of Kubernetes or
OpenShift and just kind of get them on there.
GitLab is a perfect example of it. If you want your own GitLab instance,
there's a Helm chart to install, and Helm takes care of the
lion's share of the work for you, so you don't have to worry about databases and
things like that. It's very powerful and
a very interesting project, so take some time and do some research
there.
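As a hedged example, the usual Helm flow for something like GitLab looks roughly like this; the release name and domain are placeholders, and the chart's real configuration needs more care than this:

```shell
# Add the GitLab chart repository and install the chart;
# Helm then stands up the databases and services for you.
helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm install my-gitlab gitlab/gitlab \
  --set global.hosts.domain=example.com   # hypothetical domain
```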
Next we have Istio. Istio is, in essence, the
service mesh. Service mesh could be its own conference, and I'm not
going to go into that, but this is a way to intelligently
talk to and work around the
containers inside of your Kubernetes cluster, or OpenShift cluster.
It allows you to have secure communication. You can actually trace
communication between the containers, which is
really neat. There are two major players: Istio, and the other one is called
Linkerd. I think it's Linkerd 2 now. And Istio
is kind of like everything and the kitchen sink,
where Linkerd is just what you need to run
the service mesh. Two very different worlds. IBM has
leaned very heavily into Istio, and we have a lot of
really smart people working on Istio all the time,
so hence the reason why I'm showing this one off. Again,
it's one of those things that you'll eventually get to inside of the Kubernetes
ecosystem. I strongly suggest waiting until you're much
more comfortable in the ecosystem before going there. But that is what
this is. And if you didn't know, istio is
Greek for sail. I learned that recently myself.
And finally, the next thing we're going to talk about is Knative.
Knative is, in essence, the
serverless platform on top of Kubernetes.
So you might have heard of Lambda, or Code Engine from
IBM; Knative is a way to
run those types of scale-to-zero infrastructures on top
of your Kubernetes cluster. So if you are hosting your own Kubernetes cluster,
for whatever reason, you can layer Knative on top of it and get the
power of scale-to-zero, which saves you a lot of
resources in your cluster. There's a lot more there,
but it's good to know: Knative is serverless.
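A hedged sketch of a Knative service, with scale-to-zero included by default; the image is Knative's public Go sample, and the rest is illustrative:

```yaml
# Illustrative Knative Service; it scales down to zero when idle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # public sample image
          env:
            - name: TARGET
              value: "Conf42"
```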
Okay, so now we've talked about that ecosystem, and we've kind of
talked about all the different things. Let's actually get something deployed to OpenShift.
So I'm going to go ahead and turn this off and
I'm going to shrink that.
So first thing first is here is my amazing application.
I'm trying to make this bigger, and I don't know why it's not.
View.
Zoom in.
All right, there we go. That's probably too big. There we go. This is
actually on GitHub: jjasghar/cloud-native-python-example-app.
And I have this amazing application, app.py.
And as Python people, you know that this
is pretty much the most simple thing you can ever do.
I actually did this at Pyjamas, and that's the reason why it's there.
But as you can see here, I have a simple Flask app, and this is
"the worst amazing app ever. Speaking at Pyjamas." That's all it does:
it's a simple Python application running in Flask.
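For reference, an app like this is only a few lines; here is a minimal sketch of what app.py roughly looks like, though the actual file in the repo may differ:

```python
# Minimal sketch of a Flask app like the demo's; details are assumptions.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "This is the worst amazing app ever. Speaking at Pyjamas."

if __name__ == "__main__":
    # Bind to 0.0.0.0 so traffic from outside the container is accepted
    app.run(host="0.0.0.0", port=8080)
```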
Okay, cool. But the most important part is being able to get this
deployed to OpenShift. So here we go.
Here is OpenShift right here,
and I'm logged in as my user. I'm an
administrator right now, so I'm going to go ahead and flip over to Developer.
And as I was talking about earlier with projects, as you can see,
here are a bunch of projects inside of our cluster. I'm going to go ahead
and create a project, and we're going to call this
conf42, and then I'm going to hit create.
And while that's being created, I'm going to make this just a little bit bigger.
And as you can see here, now I am in the conf42 project,
and I have a bunch of neat stuff here. This is me
as a developer, so I am using this the way you hopefully would
as a downstream user. As you can see here,
it's like, hey, you need to add something to your project.
Well, one of the best things about it is you can just pull directly from
Git, so I'm going to go ahead and click on this "From Git" repository option.
I'm going to go back real quick to this URL,
grab it from the HTTPS tab over here,
and paste this URL directly in here.
It validates it, looks at it and reads it, and it's like, hey, check it
out: it already figured out that it's a Python application. So without me even
doing anything, I just said, hey, look at this repo out on the Internet,
go ahead and pull it and see what's going on. And it grabs it and pulls
down the Python. It defaults to something called a UBI,
which is a Universal Base Image, which is basically, if you've
ever heard of Alpine, the Red Hat version of Alpine. So everything's stripped
out to be as small as it possibly can be.
And it's like, hey, let's call this something. I'm going to
go ahead and change this to conf42.
Then we'll go down
over here to conf42, and then
we're going to create a deployment, just like we were saying earlier; that's what
it needs to know about. And then it actually has this little checkbox where
it's like, hey, do you need to expose this to the real world? And that's
what I want: I want to expose this public URL. Go ahead and create.
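As a side note, everything I just clicked through has a CLI equivalent; a hedged sketch of the same flow with oc, flags simplified, would be roughly:

```shell
# Roughly the same flow from the oc CLI (flags simplified)
oc new-project conf42
oc new-app https://github.com/jjasghar/cloud-native-python-example-app \
  --name=conf42
oc expose service/conf42   # create the public route
```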
So this kicks off. This takes a couple of minutes, but we can actually watch
this more or less in real time. So as you see here,
it's waiting for the build. Oh, it's pending,
so it's doing this just now. And it even created
a service for us out of the gate,
and it even opens up the URL for us, too. But we need
a way for that build to happen. While that's happening,
I'm going to go back over here, and I'm going to show you what actually,
finally convinced me that OpenShift was the way of the future,
and that is the built-in webhooks.
Oh, I was supposed to delete that
before doing this demo, and I apologize for that.
Here we go. So we all use webhooks to
talk to applications, whether it be back and forth or whatever,
and you can actually have webhooks on GitHub. As you can see here,
whenever any event happens, it sends a POST request with
it. So I'm going to go ahead and create a webhook here.
I'm going to go back to our system, and it should be building.
There we go, it's building, which is good.
I'm going to go back to our build config here.
And if you see here, inside of OpenShift, they have
built-in webhooks for each build config. So what
I'm going to do is copy this URL here,
paste it inside of here,
change the content type to JSON, and then
create the webhook. So now, whenever I make a
change to this repository, it will kick off a build on my
OpenShift cluster. So if I go here to make sure it's there:
recent deliveries, we got a big old green checkmark.
So that's good to know. So now we know for a fact that whenever we make
a change to this repository,
it'll send a POST to our OpenShift cluster, which allows us,
in essence, to have continuous delivery. So anytime I
merge into main in this thing, it will kick
off a build, which is pretty cool.
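For reference, the build-config webhook URLs that OpenShift generates follow a predictable shape; every segment in this hedged example is a placeholder:

```text
https://api.<cluster-domain>:6443/apis/build.openshift.io/v1/namespaces/<project>/buildconfigs/<name>/webhooks/<secret>/github
```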
But we're not going to quite see that yet, because we're going to go back
to Topology, and we've got to actually see our system work first. So as this
is building, it's taking about two minutes. There we go, it just finished;
it's pushing out the pod for us.
It's thinking about it. And there's a built-in registry inside of OpenShift,
which is one of the neat parts, too, so you don't have to worry about
Docker Hub or Quay or whatever. It's all self-contained
inside of it. So your source code stays inside of your OpenShift cluster,
and it's creating that container. Give it a second.
Put on some Jeopardy music, patiently wait.
There we go, running. As you can see, a nice little blue circle here,
and I'm going to go ahead and click on this link right here.
And there we go: "This is the worst app ever.
Speaking at Pyjamas." We are not speaking there, are we? No, so we are speaking
at Conf42. So what I'm going to do here is
come back over to our application, and I'm going to be
bad. Never do this in the real world: I'm going to go
ahead and edit this one right here.
And we're speaking at Conf42.
42, I keep putting a 3 down. And
of course I'm going to go ahead and commit to the master branch.
Don't do this in the real world. And there we go, we update that.
So if we go back over here now and look
at our... oh, there it is. It already had the POST; it's already building
it again. And if I've got my timing right,
it should only take a second. We should see the pod do
an intelligent restart: it actually spins up another pod,
kicks over to it, and then kills the other one. You can have liveness
checks inside of it, so if you have applications that take a while to start,
you can do intelligent rollouts, where it makes sure the new pod comes up in a
good state before taking out the old one, which is important, too. So you
don't have any real outages for your application either.
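Those liveness and readiness checks are just a snippet on the container spec; a hedged sketch, with the path and port assumed for an app like this one:

```yaml
# Illustrative probes on a container spec; OpenShift waits for
# readiness before cutting traffic over to the new pod.
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10
readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5
```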
So there we go. Still running,
still running. Come on,
come on, don't make a liar out of
me.
How about now? We can actually click
into the view here and actually look at the logs. There we
go. We are installing some packages.
Why are you taking so long?
Come on. There we go. All right,
we're on step ten of, I think, twelve. Oh, there we
go. No, it's only ten steps. It's copying the blobs into the
registry, pushing it and then storing.
There's the actual version. There we go.
Now it's pushing into the registry. Sorry, there's another thing ahead of it.
And so if we go back to our topology here,
click on this. There we go. We see, we are creating that
new container and we will kill this
pod in a moment.
Health checks, of course, are super powerful here for your
application, so your downstream users never have problems.
And then...
how about now? Now?
Come on, computer,
there we go. See, it ran. Now it's kicked it over. So I'll go back
to that URL here and I will refresh. And there's
Conf42. Wonderful.
And that's it, in essence. You saw how quickly I did this;
this was under 30 minutes, and what have I done?
I went ahead and told you everything you need to know and
wired it all up together, to make sure you see how beautiful and how awesome
OpenShift is for your downstream users. And I just did it through the GUI. Obviously there are
CLI commands, and there are other ways to extend it to do
testing in front of it before releasing
to the production environment. There are just wonderful, wonderful things,
but that's just the bare minimum to understand the power of deploying
to OpenShift, from one side out the other.
So thanks so much, and never hesitate...
oops,
there we go, that one. That's how to find me: jjasghar
on Twitter, or awesome@ibm.com. And thanks, y'all,
for having me. Take it easy.