Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, welcome to Conf42 DevSecOps days.
I'm really excited that we get to talk about container scanning.
Today we're going to dig into adding container scanning to
a DevOps pipeline. Here's the part where I tell you I am definitely
going to post the slides on my site tonight. I've been that
person chasing the speaker for slides. It's a royal pain, which is
why you can go to robrich.org, click
on Presentations here at the top, and here's Container Scanning:
Run Fast and Stay Safe. The slides are online right
now. While you're here on robrich.org, let's click on About
Me and learn about some of the things that I've done recently. I'm a Cyral
developer advocate. If you've struggled with your data mesh,
I would love to learn from you. I'm also a Microsoft MVP,
a Docker captain, and a friend of Redgate. AZ GiveCamp is really
fun. AZ GiveCamp brings volunteer developers together with charities to build
free software. We start building software Friday after work.
Sunday afternoon we deliver the completed software to the charities.
Sleep is optional, caffeine provided. If you're in Phoenix, come join us
for the next AZ GiveCamp. Or if you'd like a GiveCamp in your
neighborhood, hit me up on email or Twitter and let's get a GiveCamp
near you too. Some of the other things that I've done: I
do a lot of container, Docker, and Kubernetes training and
consulting. If you have that need, hit me up.
And one of the things I'm particularly proud of: I replied to a .NET Rocks
podcast episode. They read my comment on the air and they sent me a mug.
Woohoo. So there's my claim to fame, my coveted .NET
Rocks mug. So let's dig into container
scanning now. Let's start with this question:
Doesn't kubernetes just do this for me?
Well, kind of. Let's take a look at what is Kubernetes.
Oftentimes when we're introduced to Kubernetes, we see a diagram kind
of like this. It shows the control plane here in green. It shows the
worker nodes here in blue. And we can see the various components, the microservices
within Kubernetes. Now, there's lots of microservices here that
do a lot to keep Kubernetes running, but they focus on
keeping Kubernetes running. They do nothing for our process in our
container. Now, maybe when the user comes
in, if they don't have the right port or hostname, they won't get
past this load balancer. But once they're inside the pod,
once they're inside the container, if they can compromise that container,
they can easily pivot to any other pod within Kubernetes.
Kubernetes does nothing to protect the security
of the process running inside the container.
Okay, so we don't have anything in Kubernetes. Let's flip over to Docker.
Now, here in Docker we can compare Docker on the right to
virtual machines on the left. With Docker we still have a
hypervisor, we still have the same sandbox that
we have with virtual machines. So the processes can't communicate
with each other through a shared network or a shared memory
space, but they can communicate with each other like
virtual machines do: across the network. So if we're running Docker
Swarm, or even if we're running containers inside of Kubernetes,
this container can reach out to this container, and this container can reach out
to this container. There's nothing inside Docker
that keeps us secure from a process perspective.
We still need to manage the process here.
So both from kubernetes and from Docker's perspective,
they are responsible for keeping track of their pieces.
We're responsible for the content in our container.
We could think of kubernetes much like a firewall in front of the container.
Now Kubernetes will do a lot for keeping track of our process.
And if it crashes it'll restart. But it doesn't do a whole lot for
making sure our containers don't get popped and making sure content doesn't
leak. Now kubernetes does have namespaces,
but namespaces are not a security boundary, they're an organizational
boundary. Once we schedule content within kubernetes,
every container is available to every other container over the network.
Kubernetes focuses on keeping Kubernetes running.
It's our task to secure the process in the container.
So what is a container? Well, a container has a file system,
it has users, it has a process, it has ports.
This looks a lot like a Linux machine.
And we know how to secure a Linux machine. That's easy. We can
do things like making sure we're not running as root,
removing excess users, removing unused software,
keeping software up to date. Pretty standard stuff:
we're preventing unauthorized access into our system.
Well, it's a Linux machine, except the way that
we accomplish that is a little bit different. These are ephemeral,
immutable, and deterministic hardware. They're short lived,
unchanging, and the same every time. Now that's really cool.
That means that we don't run patch Tuesday inside of a container.
Rather, we build a new container and replace the existing container with that new
content. Now if we're going to replace the container,
that works out really well. We just build a new container. And we know
that the content in that new container now has the patches that
we need. That also means that we don't really need to
keep track of what's running inside the container, because at the point where we
created that container, that's when we established that. Now if
a container does get popped, that's easy enough. We just evict that container and
spin up a new one. We may need to compensate for damage
that it pivoted to, but we don't need to, for example, scrub the
system. So, securing containers:
by default, every container can communicate with every other container,
both inside Docker and inside Kubernetes. Now, that
makes sense, kind of. It makes our processes, our microservices,
really discoverable. But if we want to limit
traffic across our Docker switch or around our
Kubernetes cluster, we'll need to use a service mesh to do that.
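A service mesh is one way to limit that traffic; Kubernetes also has a lighter-weight built-in option, NetworkPolicy, which is enforced when the cluster's network plugin supports it. Here's a minimal sketch with hypothetical labels and names, denying all ingress to the `app: api` pods except from pods labeled `app: frontend`:

```yaml
# Illustrative NetworkPolicy: only pods labeled app=frontend may reach
# pods labeled app=api in this namespace; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Note that a NetworkPolicy only filters traffic; a full service mesh adds mutual TLS, identity, and observability on top of this.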
By default, if we pop any container in the cluster, we can now
pivot to any other container in the cluster.
Now all of our internal methods have IP addresses.
That's what we get with microservices. So if you're able to pop any
of our containers, all of our containers are now at risk.
As we look at threat vectors, we can look at threat vectors coming
from outside. Maybe we have CVEs in
installed software. Maybe our app has a vulnerability that gets
exploited by incoming requests. Or maybe
we've exposed secrets inside of the content that we
distribute to machines. Maybe the app didn't render
that. But maybe we included more data
than we should have. And these secrets could come from outside, but they
could also come from underneath. Maybe our container is
running with excessive permissions. Or maybe our software has
a vulnerability that makes it wake up and do unexpected things.
We can kind of group these into the
threats coming from outside, the threats coming from underneath.
And then once we're compromised, what can they pivot to?
Wow, this sounds a lot like Linux.
So the good news: because these are ephemeral, immutable, and
deterministic hardware, if any of our machines get compromised,
if any of our containers get compromised, we don't have to do a big scrub,
we just shut them down and create new ones. Now that's really
nice. We do need to keep track of what did they pivot to and probably
restart those containers as well. But for the
most part, the damage is done and we've removed it.
We don't need to uninstall things. We don't need to scrub stuff out of
things. Deterministic hardware ensures that we're
the same every time. That's actually
really good for container scanning as well, because once we know the
contents inside of a container, we know the contents. We've established
what's running inside that container, and we can take that list of software
and compare it against a CVE database to know if our container
is compromised or what software we might need to upgrade.
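The compare step just described can be sketched in a few lines. This is a hedged illustration, not Anchore's implementation: the inventory and CVE feed data shapes below are made up for the example.

```python
# Sketch: match a container's package inventory against a CVE feed.
# The data shapes here are illustrative, not a real scanner's formats.

def find_vulnerable(inventory, cve_feed):
    """inventory: {package_name: version}
    cve_feed: list of {"package", "version", "cve", "severity"} dicts.
    Returns the feed entries matching an installed package and version."""
    return [
        entry for entry in cve_feed
        if inventory.get(entry["package"]) == entry["version"]
    ]

inventory = {"openssl": "1.1.1k", "zlib": "1.2.11"}
cve_feed = [
    {"package": "openssl", "version": "1.1.1k",
     "cve": "CVE-2021-3711", "severity": "High"},
    {"package": "curl", "version": "7.61.0",
     "cve": "CVE-2019-5482", "severity": "High"},
]
matches = find_vulnerable(inventory, cve_feed)
print([m["cve"] for m in matches])  # only the openssl CVE; curl isn't installed
```

Because the inventory is captured once per image hash, this comparison can be rerun against a freshly synced feed without rescanning the container.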
Now, as we start talking about a DevOps pipeline,
we'll start running tests against our system. We might have unit
and integration tests. We might have static analysis of our source
code. We might have open source license compliance.
We might take inventory of our machine for known vulnerabilities.
And then finally, we'll take all of these results and we'll compare it
to our corporate level of tolerance.
Now, it's really easy to say, if any
of these tests fail, we're not going to deploy. But what
if there's a vulnerability that is not patched?
What if we don't have a vaccine for this
threat? Should we shut down our business until there's a
vaccine present? Should we hobble along? Should we
deploy it anyway and see how we can manage that
risk? It's possible that we may choose to say
yes. Even if there's a vulnerability in our image, it may still be
good to push it. It might be better than the content that we
have running in production, because we've been able to patch other things.
So let's take a look at each of these types of tests as we look
at unit and integration tests. There's nothing new
that I can add here. You need to do it.
Running unit and integration tests helps us validate our software,
and even so much more as we start running our build
more frequently and deploying this content into production.
A good suite of unit and integration tests helps us validate
that updated libraries don't cause harm to our software.
In short, unit and integration tests,
you should do this. Next up, let's take
a look at static code analysis. Now, there's lots of different tools
that we can use for static code analysis. This is where we start to
look at our source code and see if we can find problems or vulnerabilities
inherent in the way we've built our software. Now, I'm not here to pitch
a software product. But as you grab these slides from robrich.org,
you can click on each of these blue links to be able to get more
information about these products. Hopefully that will help you to get
past the blank page and discover the product that is the best fit
for you. Static code analysis can help us avoid vulnerabilities
within our system. Now as we avoid these vulnerabilities,
then we can build more robust systems.
Next up, license analysis. Now here's some more software products.
Grab these slides from robrich.org and click through each of these blue links
to be able to take a look at ways that we can validate our licenses.
Now, what if we include a GPL library? Does that mean that
our entire system needs to be open source? If that's a concern,
validating these dependencies and the license requirements
associated with them may be really important to the organization.
Take a look at these software products and pivot to the software
product that works great for your environment.
Now let's take a look at policy validation.
Now we have the results of our static analysis. We have the results of our
unit and integration tests. We have the results of our license analysis,
and now we're given the choice to go
or no go. If we have any failing unit tests,
should we not deploy this new version of the software?
If we have any license validation adjustments,
should we fail the build? Now, these are great checks to make.
We may need to take a more pragmatic approach though, to say,
well, it's better than it was, or we need
to get this patch out right now because we're losing a whole lot of business.
And the fact that there's a failing test really
doesn't help us to solve this really urgent concern,
where we're bleeding money out of our organization.
So we may choose to have a different risk
policy associated with the urgency of the deployment.
So where's the container scanning in all of this? We've been talking for
some time about adding testing and validation of
our software within our DevOps pipeline. Where's the serverless
part? Where's the container part? Yeah,
all of this so far is principles that work in DevOps pipelines.
Whether you're focusing on container or whether you're focusing on
any other type of software, you'll have unit tests,
integration tests, static analysis,
license validation, and let's add in the container space.
Container scanning when we take a look at container scanning,
our task is twofold. The first is to discover the content
that we have running in our container. Now we need to look at
the libraries that our application depends on. We need to look
at the content that we have installed in the operating system
and depending on our programming environment, we might have multiple
sets of libraries. Maybe our website has our back end libraries,
but it also has some node libraries to be able to construct our front end.
In this case, we need to inventory all of this software. We need
the package name and the version number. Now the
great thing, once we have this inventory of software, we can compare this against
a list of known vulnerabilities. Are any of our packages
vulnerable or what severity is the vulnerability?
Are any of these packages patched and should we upgrade? Now this is
great. That's the purpose of container scanning. We inventory our software
and we compare it to a list of vulnerabilities to take a look at what
software might be vulnerable. Now the great thing here is once
we've inventoried our software, our container, we know
at that container hash all of the packages that are
installed in our system. So periodically we can
recompare that software list to our vulnerability database
and understand if our container is newly vulnerable
based on newly discovered vulnerabilities in some of the software installed
in our system. We don't need to rescan the system.
It's an ephemeral, immutable, and deterministic
system. If there's any problem, we can evict that container and start
up a new one. But we know exactly the content that's in
that container. We just need to continuously compare it to our vulnerability
database. So when should we scan?
Now here's where we add some creativity to this process. There's lots
of points where we can scan our container and based on
that comparison, validate if our container is vulnerable.
Now the container scanning process does take a while.
So if your goal is to have a ten minute build
and container scanning takes 10 minutes, then you won't be able to
complete the container scanning within the build allotment. So maybe
we schedule the scanning but we validate the results
downstream or depending on our policy,
our risk assessment. Maybe we just accept that our build
is now going to take 20 minutes instead of ten. Yeah,
this process of discovering the software within our container
is pretty involved and it takes a while.
So when can we scan? Well, we can start off by putting
a process inside of our DevOps pipeline. The beauty
here is that as our software gets built
we can go build up this image. And once the image
is built, but before we push it to the registry, we can kick off the
process that will inventory this image.
And then perhaps we compare that to our vulnerability database and
identify if there are any severe threats within our container.
Now this is all before we've pushed it to our registry.
And if we can wait that long, then we know exactly when this
software is vulnerable and we can block that thing from even getting into
our registry if it's too vulnerable for our taste.
Now next up, we might choose to include this
scan in our container registry. What if we have some
content that is pulled straight from Docker hub or another
registry and isn't built through our DevOps pipeline?
We may need to scan our container registry periodically to
understand if there are any new containers that might be vulnerable and
added to our registry. Now in the case where we pull a
container and push it into our registry to be able to use it directly,
then this is a great place to catch it. We could also just
enumerate all of the containers. We know that if we've inventoried
our system with this particular container hash, that we
already know the content that's in there. So we can just
quickly scan through our registry, validate that we've inventoried
each of the containers there, and then continue matching
that inventory list to our vulnerability database.
Now next up, we could scan the content in
our cluster. Now arguably it's too late, our content
is already running. But if ever there's content that
we pull directly from Docker hub to start in our cluster,
then this is kind of the only place that we can catch it.
Now arguably you should probably pull the content and push it into
your own registry so that you can validate it first. But if ever we have
a pod that directly references external systems,
then we'll probably need to scan our cluster as
well. Now the assumption of each of these is that
that's the process where all things flow through. So if
we're going to pull content from another registry, then we
can't just scan in the DevOps pipeline.
If we're going to pull content from Docker Hub and not push
it into our own registry, then we can't just scan our
registry. We need to take a look at the content running in our cluster as
well. And there's one more spot. Kubernetes webhooks.
Kubernetes webhooks are a great place to be able to catch the content because
that's the pipeline that starts pods within our system.
Let's double-click into that and take a look at how it works. Now here as
we take a look at the Kubernetes webhook story, we have a mechanism
that has two spots where we can tie into it.
The first is the mutating webhooks.
Now in series, each webhook
gets called and it gets to change the yaml associated
with this request. Here's a spot where we might inject
sidecar containers for authentication or other resources.
Next up, we have a validating webhook. Now, it calls
each of them in parallel and each one gets to say yes or no.
Now perhaps at this point we say, hey, that container is running as
root, we're not going to let it through. We could also at this point
quickly go inventory the system and validate
that there are no severe vulnerabilities within our containers.
Now if this is the first time that we've seen this container image,
we may not have enough time to complete that full container
scan. But if we have seen this layer hash before,
then we can probably just go look at the vulnerability database and
validate that this system isn't too vulnerable to be able to get started in
our cluster. So Kubernetes webhooks is a great
place to be able to give
that go or no go signal into our
cluster, because if we say no to that pod,
that container won't even get started in our cluster.
That's great. Everything that starts in our cluster is
going to go through this process. So that validating webhook is a great place
to be able to dig in and do some final container
scanning. Now, hopefully we've already seen that image before,
so we're just validating the results. If this is the first time we've
seen the image and it takes 10 minutes to inventory the software,
then we're not replying in time, and that validating webhook
is probably going to fail straight away. So maybe we fail it and
in 10 minutes, once we have the results, we try again to schedule
that pod. So these are the various
places where we can take a look at our content and each one has value
depending on where we have this content. If we're going to
always rebuild every image, including the images
from other sources, then putting in our DevOps pipeline might
be sufficient. But if there are any containers that aren't
built in our DevOps pipeline, we'll need to also add content into our
registry. Now if there are any images
that aren't pulled directly from our private registry, but are pulled from a
public registry, then we'll also need to add either a Kubernetes validating
webhook or a periodic scan through our cluster.
And the Kubernetes validating webhook is a great place to do that
double check. Nothing will start inside of our cluster
if we've stopped it at the validating webhook.
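The validating-webhook decision described above amounts to answering Kubernetes' AdmissionReview request with allowed true or false. Here's a hedged Python sketch of just that decision; the scan-result cache and severity threshold are assumptions, and a real webhook would run this behind an HTTPS endpoint registered with the API server.

```python
# Sketch of the decision inside a Kubernetes validating admission webhook.
# scan_cache maps image references we've already inventoried to their
# worst known severity; anything unscanned or too severe is rejected.

SEVERITY_RANK = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

def admission_response(uid, image, scan_cache, max_allowed="Medium"):
    worst = scan_cache.get(image)
    if worst is None:
        allowed, reason = False, "image not yet scanned; retry once the scan completes"
    elif SEVERITY_RANK[worst] > SEVERITY_RANK[max_allowed]:
        allowed, reason = False, f"worst vulnerability severity is {worst}"
    else:
        allowed, reason = True, "ok"
    # Shape of an admission.k8s.io/v1 AdmissionReview response
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": allowed,
            "status": {"message": reason},
        },
    }

cache = {"myapp@sha256:abc": "Medium", "other@sha256:def": "Critical"}
print(admission_response("u1", "myapp@sha256:abc", cache)["response"]["allowed"])  # True
print(admission_response("u1", "other@sha256:def", cache)["response"]["allowed"])  # False
```

Rejecting unscanned images, as this sketch does, matches the fail-then-retry pattern from the talk: the first attempt fails fast, and once the inventory is cached a rescheduled pod passes or fails on the real results.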
Now, what should we use for container scanning software?
I'm definitely not here to pitch container scanning software and
there are many more choices than this,
but this might help you get past a blank page as you grab these slides
from robrich.org. Click through each of these links to be able
to take a look at that software and choose the one that best
resonates with you and your organization. Now your choices
may be different than mine, but for the purpose of demos,
I do need to pick one. So I will reluctantly pick Anchore,
maybe alphabetically, but Anchore is
free and open source, so it's not a bad choice. It is
definitely not fast, but it does a good job.
Anchore: it's free and open source. It runs as a set
of microservices, and it will do the inventory both
of operating system packages and of app packages.
It's great for container scanning for CVEs,
but it is not fast. The docs are also
not great. Let's take a look, though, at how we might include Anchore
scanning and when you choose the particular software that you're going to use,
then you can use the same methodology to run your build. Our first
stop in Anchore is to download the Docker Compose file.
Now this allows us to be able to start up the Anchore microservices.
What I find interesting about this is that it is a suite of
microservices, but this docker compose file just starts the same
image a bunch of times, passing in different arguments. So is
it microservices or is it a distributed
monolith? That's an argument for another day. Once we've got
docker compose up, we can do a docker compose exec
or a pip install
to get at the Anchore CLI. That Anchore CLI will
allow us to run the commands necessary to be able to get
at the container scanning content. So whether we've
exec'd in or pip installed, we now have the Anchore CLI
and we can begin the process. The first step
is to do an anchore-cli system status. That will tell
us about the content that we have running now.
That's perfect. We have all of our microservices up. We know that the
Anchore system is ready, and the next thing we do is an anchore-cli system
feeds list. Now, as part of listing the feeds, it will
kick off the process of syncing the virus
definitions. Pardon me, not virus, the vulnerability definitions.
Now they have vulnerability definitions for each of these package
managers, NuGet, NPM, Maven and
they also have vulnerability lists for each
version of Linux. So we have red hat and Ubuntu and Alpine
and each of the versions of these so that we can take
a container and we can inventory it in all the ways.
Now the docs say that it takes about 10 minutes to populate all
the scan data, but let's take a look at the one that I did.
Here's my
feeds update. We can see that I started
on the 20th and I finished on the 23rd.
Yeah, that was a little bit more than 10 minutes.
Okay, so we've taken a look at how we get the
Anchore system updated. Now once we have
that list of feeds... actually, let's take another look at the list of feeds
and take a look at them. So we have Gem,
Java, NPM, NuGet, Python, and that's really
great because we have the vulnerabilities for each one of the package managers.
We also then have vulnerabilities in each version of Alpine,
including Amazon Linux 2, CentOS, Debian,
RHEL, Ubuntu. And so if you have
an operating system listed here, you'll be able to get at the vulnerabilities
in the packages that might be installed in that operating system,
together with the packages that you may have installed as
part of your application.
So that's the next step is grabbing the
content and using that to be able to check our container
to see if our container is vulnerable. Let's take a look at the commands.
Our first step is an anchore-cli image add, and we'll
give it that container, or rather, we'll give it that image
and version. Now, this will queue the process.
It won't actually execute synchronously. Now, if we choose, we
can do an anchore-cli image wait, and that wait will wait
until that task is done. So now we've got the
build spinning for maybe 10 minutes. That's fine.
We're taking a look at this image and we're waiting for it to be done.
And once it's done, then we can take a look at the
results. anchore-cli image get will get the results
and we can further filter those results based on
the inventory list or the vulnerability results.
As we take a look at these vulnerability results we can choose the
particular type of results that we'd like. Now in this case I'm
looking for all, but we could also look for those specific to a
type of package manager or the operating system and that will
export that list of results in JSON.
We can also take a look at the list of installed packages. So I'm going
to take a look at the content and in this case I'm going to take
a look at the content across all the systems. Now we can save that
list of content off and later we can recompare
that list of contents against our vulnerability database.
Did a new vulnerability get discovered? We can quickly identify
the active containers that have that
particular version installed. So now we know immediately
which builds we should restart to be able to secure that software from
that vulnerability.
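That recompare step can be as simple as a lookup over the saved inventories. A sketch under assumed data shapes: we keep each image hash's package list, and when a new vulnerability is announced we ask which images contain the affected package and version.

```python
# Sketch: given saved per-image inventories (keyed by image hash) and a
# newly announced vulnerability, find which images need a rebuild.
# Data shapes are illustrative, not Anchore's storage format.

def images_affected(inventories, package, affected_versions):
    """inventories: {image_hash: {package_name: version}}"""
    return sorted(
        image for image, pkgs in inventories.items()
        if pkgs.get(package) in affected_versions
    )

inventories = {
    "sha256:aaa": {"log4j-core": "2.14.1", "zlib": "1.2.11"},
    "sha256:bbb": {"log4j-core": "2.17.2"},
    "sha256:ccc": {"zlib": "1.2.11"},
}
# Simplified: pretend only version 2.14.1 of log4j-core is affected.
print(images_affected(inventories, "log4j-core", {"2.14.1"}))  # ['sha256:aaa']
```

The returned image hashes are exactly the builds to restart, with no rescanning needed, since the inventory was captured when each image was built.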
So let's take a look at how we might integrate this build. I have here
a regular build that just does the normal
things. Docker build, Docker push,
kubernetes apply. Now that's perfect.
This DevOps pipeline, maybe I'm kicking it off from Teamcity
or Jenkins and it's just running things script to be able
to run the content. Now how would I add container scanning?
Let's come in and we'll add all of those anchor commands. So after I've
built my content, before I push my content, I'm going
to go run these anchor commands. Now in this case I'm going to pass
in not only my image name but also my Dockerfile, so Anchore can
do some additional checks. And I'm going to choose to wait.
So yeah, this may take a long time, but I really want to
make sure that my container is good before I push it into my
container registry. So once it's done I'm going to go
grab the results in JSON format and I'll export those
to vuln.json. Now, next I'm going
to take a look inside that vulnerability JSON and look
for anything that is vulnerable. If there's anything that's
vulnerable, I'm going to fail the build. Now we talked
previously about how that might be a little bit aggressive. What if there's a
vulnerability that doesn't have a cure?
Should we block the push or might we want
to just push it anyway and take extra care with that build?
Now there's probably patched software in other places,
so just saying that there is any vulnerability at all
might not be pragmatic enough for our organization. Now depending
on your needs, that might be exactly what you need. Let's take a look at
that vulnerability JSON. Here's that list of vulnerabilities and
we can see the CVE, we can take a look at the content, and
we get severity. In this case
it's medium, and we know whether this CVE
is patched. So in this case there is no
fix. A medium vulnerability
with no fix. Should we block pushing? Should we let it
go? Maybe a pragmatic approach might
be no high vulnerabilities and no unpatched vulnerabilities.
That might be a good pragmatic guess.
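That pragmatic guess is easy to encode once we have the vulnerability list as JSON. A hedged sketch follows; the severity and fix field names mimic the output discussed above but are assumptions about your scanner's exact schema, and both knobs are there because the talk leaves the unpatched question open.

```python
# Sketch of a pragmatic policy gate: optionally block on severity and on
# unpatched vulnerabilities. Field names are assumed; adjust them to
# your scanner's actual JSON schema.

def build_should_fail(vulns, block_severities={"High", "Critical"},
                      block_unpatched=True):
    for v in vulns:
        has_fix = v.get("fix") not in (None, "", "None")
        if v["severity"] in block_severities:
            return True          # too severe, regardless of fix status
        if block_unpatched and not has_fix:
            return True          # no cure available yet
    return False

medium_fixed = [{"cve": "CVE-A", "severity": "Medium", "fix": "1.2.3"}]
medium_nofix = [{"cve": "CVE-B", "severity": "Medium", "fix": "None"}]

print(build_should_fail(medium_fixed))                        # False
print(build_should_fail(medium_nofix))                        # True (strict guess)
print(build_should_fail(medium_nofix, block_unpatched=False)) # False (lenient)
```

Flipping block_unpatched captures the two positions in the talk: block everything without a cure, or tolerate it and manage the risk.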
Okay, so once we've identified the vulnerabilities, and we've talked about
how this is likely not all that practical, we can
next take a look at the content that we've got. So let's
take a look at this content. And yeah, that's a wall of
packages. This JSON wasn't formatted very well. So instead
let's take a look specifically at the content, just for my operating system.
Here's all the packages and their versions. So I
can now take a look at the content and compare this periodically
to my vulnerability database to know if my software has
become newly vulnerable based on recently discovered vulnerabilities.
I'll save both the vulnerability list and the content
off to a safe spot where I tag
that together with the image hash. And now
I can recheck that container periodically to
know if that image has been newly compromised.
Now once I've validated this image, and I know it's good enough,
now I'll push that to my registry. Now I know that only things
that have been validated are in my registry.
Well, what if a software package becomes newly vulnerable,
but is currently in the registry? Should we purge it out of the registry?
Well, maybe if it's the image that
is currently running in production. What if a pod needs
to restart now? Maybe there's no image that it
can use to restart.
It might take a more pragmatic rather than a more absolute methodology
here to say, well, let's purge old containers out of
our registry that are vulnerable, that aren't also
in use. Okay,
so we've modified our build to include container scanning.
And this was really helpful in being able to get this Anchore build
to validate that our software is not vulnerable beyond
what is tolerable for our build. Now that
was great. We can do that for an on prem build. Let's also
take a look at how we might do this with GitHub actions. Now the
cool part about doing it with GitHub Actions, as opposed to doing it on-prem,
is that GitHub Actions will keep the database up to date.
So we don't need to sync the feeds list, we don't need to
start Anchore. All of that is handled by GitHub Actions.
That's perfect. Now here's one GitHub Action that
will allow us to do all of those steps in one place.
We'll pass in the image, we'll pass in the docker file. We're also going
to look through app packages, so it'll look for NPM and Nuget
and Python packages as well. And then here's a fail-build
flag, which I haven't been able to get to work.
In theory it's supposed to fail if there are
too many vulnerabilities, but it doesn't.
Now they also recommend in their docs to go grab the
anchore-reports folder that this task outputs and echo it
to the screen, or even better, push it as a build asset.
Now I can put it inside my dashboard alongside the test results,
and I'm specifically running this if always, if the build
fails for any reason, I still want the reports.
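Pieced together, the workflow steps just described might look roughly like this. This is a sketch, not the exact workflow from the demo: the scan-action input names (image-reference, dockerfile-path, fail-build, include-app-packages) and the anchore-reports path follow early releases of anchore/scan-action and may differ in current releases.

```yaml
# Sketch of the scan and upload-reports steps; input names are assumptions
# based on early releases of anchore/scan-action.
- name: Scan image
  uses: anchore/scan-action@v1
  with:
    image-reference: "container-scanning:${{ github.sha }}"
    dockerfile-path: "./Dockerfile"
    include-app-packages: true
    fail-build: true        # noted in the talk as not working reliably
- name: Upload scan reports
  if: always()              # keep the reports even when the build fails
  uses: actions/upload-artifact@v2
  with:
    name: anchore-reports
    path: anchore-reports/
```

The if: always() on the upload step is the key detail from the talk: the reports survive even when an earlier step fails.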
So let's take a look at this GitHub build. Now,
I have two versions of the software in this case, here's the container
scanning pass version, and here's the container scanning fail version.
And I've just committed different versions of software in each place.
Now you can see that with the failing one it
fails, and with the passing one it passes.
But I did some experiments here. Could I get it to pass
or fail based on doing different flags? Now here's where I was trying
to use the build fail flag,
but I also tried committing a JavaScript file
straight into my public folder, and it didn't
use static analysis of the files. Rather it
only looked at my package manifest. So it looked at my
package.json and my packages.config and
used that to be able to infer the software that was installed.
So if you just grab one Javascript file and you set it in place,
yeah, unfortunately Anchore is going to miss it. So let's
take a look at the build that produced this. And this build is identical in
both sides. It's just the content that it's using
to be able to validate if it succeeds or fails.
So we'll start off by checking out our software. Standard stuff here.
We'll do a docker build, and in this case I'm tagging
it as container-scanning and giving it the GitHub SHA as
my version. Now that definitely isn't semantic versioning,
but it does mean that if ever we're using this image,
we'll know exactly the version of the software that built it,
which might help us reproduce that build. Next,
let's kick off the Anchore scan. We talked about how I can't get fail-build
to work, so maybe setting it to false or leaving it off
might be a good move. But in this case I've passed in my
Dockerfile as well, and I want to include app packages.
Next I'm going to output the scan results to
the console. Now that's helpful if we want to use
our build output. But even better, I'm going to take those reports
and I'm going to upload those as build assets together with
my unit test results. Now I can include these as the
artifacts resulting from this build. Periodically I might want to
recheck my containers to see if they're vulnerable, and having both
the content results
and the vulnerability list might be really helpful for that.
I'm specifically doing this if always so that if it
failed previously, I'm still going to get these results.
Now let's go check to see if it's vulnerable. In this
case I'm looking for any vulnerability and if I find any then I
will fail the build. Now what if there's a vulnerability that isn't patched?
Should we not allow that in our
cluster? Or should we allow content at
a particular level? Maybe block severe
vulnerabilities, and maybe block unpatched
vulnerabilities. This is definitely a thing that
you need to discuss with your team and find your corporate level of
comfort with how many vulnerabilities can be in place.
Now we've learned in this industry that just saying
that there's a problem without a cure
may not be the best reason to block
everything.
So we took a look at a GitHub actions run and how
we can use GitHub actions to avoid needing to sync
our content. And now we have this cloud based build that is excellent
at being able to validate our containers and get this content
into place. We took a look
at both the failing and passing GitHub action
scenario that was really cool. And so you might choose to clone this
repository and use that as an example,
or take this methodology and use it with a different tool.
Kubernetes. Kubernetes is a great mechanism for keeping
our containers running, but it doesn't protect the process.
Protecting the process running in our container is our responsibility.
Kubernetes only protects itself. We need to secure the container.
This has been a lot of fun getting to show you container scanning here at
Conf42. I'll be at the spot the conference has
designated for a live q and A. Or if you're watching this later, hit me
up on Twitter at rob underscore rich. And you can grab these slides
and the code right now at robrich.org. Click on Presentations.
Thanks for joining us and getting to talk about container scanning. I'll see you in
the next session.