Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, I'm Martin Wimpress. I'm senior director for developer
relations and community at Slim.AI. Today I'll be
sharing five examples of best practices to better
secure your production-ready containers.
I'll briefly introduce Slim.AI. Slim.AI was created
to give developers the power to build safer cloud native
applications with less friction. In conjunction with our open
source DockerSlim, the Slim.AI SaaS platform allows developers
to optimize their containers, reducing both overall size and
vulnerability count. By increasing efficiency and decreasing the
attack surface, Slim.AI ensures you're only shipping into
production what you need to. Today I'm going to present five
security best practices for your production-ready containers.
I'll briefly introduce our sample application,
then I'll look at container best practices with a specific focus
on where it impacts security.
You should understand exactly what you're shipping
into production, and I'll introduce several tools that
can help with that, which you can start using today.
We'll then make an objective decision on what the best base image
is for our example application. And finally, we'll minify the
container images with Docker Slim to significantly reduce the attack
surface. So we're going to use Python for our
example. Here's a very simple Python Flask app that
implements an even simpler RESTful API, and that app is just
for illustrative purposes. Its function is unimportant.
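To give a feel for the scale of the app, here's a minimal sketch of that kind of Flask service (the route, port, and file name are illustrative, not the actual app from the talk):

    # app.py -- a minimal Flask app exposing a trivial RESTful endpoint
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/status")
    def status():
        # Return a simple JSON payload
        return jsonify(status="ok")

    if __name__ == "__main__":
        # Bind to all interfaces so the app is reachable from outside the container
        app.run(host="0.0.0.0", port=5000)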
We could have done this example with Node.js, for example.
We'll containerize this app using four different base images
and several different container composition techniques.
So let's just take a look at one of those Dockerfiles.
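The file itself isn't reproduced in this transcript, but a minimal sketch matching the description below would look something like this (the Python tag, file names, and port are illustrative):

    # Pinned official Python base image
    FROM python:3.10-bullseye

    # Define a working directory for our app
    WORKDIR /app

    # Copy and install dependencies first, so this layer is only
    # invalidated when requirements.txt changes
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Copy only the files that are required
    COPY app.py .

    # Expose the port the app listens on
    EXPOSE 5000

    # Use an entrypoint for proper signal handling
    ENTRYPOINT ["python3", "app.py"]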
This Dockerfile adheres to container best practice.
It uses an official Python base image.
A working directory is defined for our app. It has good
layer construction to minimize cache invalidation
and optimize build performance. Files are copied
only as required. A port is exposed, and it
uses an entry point for proper signal handling. But there are
a few things that can help specifically with container security,
so let's take a closer look at each of those.
If you do not specify a user in your Dockerfile,
your app will run as root. nobody is an unprivileged
system account. It's available by default in Debian,
Ubuntu, RHEL, Alpine, Distroless, etc.
The nobody account is intended to run things
that don't need any special permissions. It's usually
reserved for services so that if they get
compromised, the would-be attacker has minimal
impact or access to the rest of the system.
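In Dockerfile terms, this is a one-line addition (a sketch; place it after any build steps that genuinely need root, such as package installation):

    # Drop privileges: run the app as the unprivileged nobody account
    USER nobody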
By contrast, if the app is running in the root context,
then the would-be attacker potentially has complete access to
the container and possibly access to all the tools and
utilities shipped in the container that they can now use to disrupt
your operations. Choosing a version number for
your base images is often referred to as pinning.
Some tutorials teach newcomers to pin
their images to the latest tag. However,
containers are meant to be ephemeral, meaning that
they can be created, destroyed, started, stopped,
reproduced with ease and reliability. Using the
latest tag means there isn't a single source of truth for your
software bill of materials, resulting in your container getting
whatever the most recently updated version is.
A new version pushed to the latest tag can introduce major
version bumps to the system and the language, which will likely
introduce breaking changes. Pinning a specific major and
minor version in your Dockerfile is a trade-off. You're choosing
to not automatically receive system upgrades and
language improvements and fixes via the update mechanisms,
but most DevSecOps teams prefer to
employ security scanning as a way to control updates,
rather than dealing with the unpredictability that comes with container
build and runtime failures. We'll now see how pinning
a base image tag can be helpful.
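Concretely, the difference is just the FROM line (tags illustrative):

    # Unpinned: you get whatever was most recently pushed to latest
    FROM python:latest

    # Pinned to a specific major and minor version: predictable and reproducible
    FROM python:3.10-bullseye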
Avoiding RUN apt-get upgrade in Dockerfiles
was long considered best practice. In the vast majority of
cases, this is not good advice. Base images from vendors and
large projects are frequently updated using
the same tag to include critical bug fixes and security
updates. However, there can often be days between the updates
being published in the package repositories and the revised base
images being pushed to the registries. Relying on the base image
alone is not sufficient, even for images blessed by Docker and
built by companies with plenty of resources. Now imagine a small
open source project maintained in somebody's spare time.
If you pin a stable base image, package updates
are purely focused on security fixes and severe bug
fixes. You can safely apply system updates without
fear of unexpected upgrades that may introduce breaking
changes, but you do need to be sure you're
really applying the latest updates.
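On a pinned Debian or Ubuntu base, a sketch of applying those updates looks like this (the cache cleanup is a conventional extra to keep the layer small):

    FROM ubuntu:22.04

    # Pull in the latest security and severe bug fixes from the pinned,
    # stable release, then clear the apt cache
    RUN apt-get update && \
        apt-get -y upgrade && \
        rm -rf /var/lib/apt/lists/*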
Docker builds can be slow, so we use layer caching to reuse
build steps from prior builds to speed up the current one.
And while this does improve performance, there's a potential downside.
Caching can lead to insecure images. For most
Dockerfile commands, if the text of the command hasn't changed,
the previously cached layer will be reused in the current build.
When you're relying on caching, those apt-get update,
install, and upgrade RUN commands will
add old, possibly insecure packages into
your images, even after the distro vendor
has released security updates.
So sometimes you're going to want to bypass
the caching, and you can do so by passing a couple of arguments
to docker build. --pull pulls
the latest version of the base image instead of using
the locally cached one, and --no-cache
ensures all additional layers in the Dockerfile get rebuilt
from scratch instead of relying on the layer cache.
If you add those arguments to docker build, the new image will
have the latest system-level packages and security updates.
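For example (the image name is hypothetical):

    # Fetch the newest base image and rebuild every layer from scratch
    docker build --pull --no-cache -t myapp:latest .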
But if you want the benefits of caching and get
security updates in a reasonable amount of time, you'll need to
have two build processes: your normal build process that
happens whenever new code is released, and a nightly
process that rebuilds your container image from scratch
using docker build --pull --no-cache,
to ensure you have all the security updates.
So at this point, we now have a container image that adheres
to best practice, but what's actually inside it?
I'm not going to deep dive into container vulnerability
scanning and software bill of materials generation;
that could be a talk in its own right. However,
you should absolutely perform vulnerability scans and generate
SBOMs in your production container image build pipelines,
and review the results. These are also extremely
useful tools for understanding what's in your containers,
what you are shipping to production, and what your potential exposure
is. For the purposes of this presentation, I have used docker
scan, which is powered by Snyk, and docker sbom,
which is powered by Syft. Other scanning
utilities are available, such as Trivy, Grype,
and Clair. I do recommend that you give them all a try.
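Both are a single command against a built image (image name hypothetical):

    # Vulnerability scan, powered by Snyk
    docker scan myapp:latest

    # Generate a software bill of materials, powered by Syft
    docker sbom myapp:latest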
I use the Slim.AI SaaS platform, and more recently the
Slim.AI Docker extension, to demystify containers
and really get to know what's inside them. Knowing what's
inside your container is critical to securing your software
supply chain. The Slim platform lifts
the veil on container internals so you can analyze,
optimize, and compare changes before deploying
your cloud native apps. Let's use container
scanning and analysis to determine what the best base image would
be. For our example application,
the regular official Python base image is built
from Debian 11 and weighs in at 915 megabytes,
but smaller starting points are available.
I'll containerize our example app using four different
base images, including the official Python
image based on Alpine 3.15,
the official slim Python image based on Debian
11, a Distroless multi-stage build,
and Ubuntu 22.04, which doesn't include
Python by default, so it has to be installed via a Dockerfile
RUN command.
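Of those four, the Distroless variant is the least obvious, so here's a rough multi-stage sketch (image tags and file names are illustrative; the Distroless python3 image's entrypoint is the interpreter itself):

    # Build stage: install dependencies using a full Python image
    FROM python:3.10-bullseye AS build
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir --target=/app/deps -r requirements.txt
    COPY app.py .

    # Final stage: only the app and its dependencies land in Distroless
    FROM gcr.io/distroless/python3-debian11
    WORKDIR /app
    COPY --from=build /app /app
    ENV PYTHONPATH=/app/deps
    # The base image's entrypoint is python3, so CMD just names the script
    CMD ["app.py"]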
Sometimes it's necessary to install additional system packages
as dependencies for your applications, or otherwise to help you
build the image.
The Ubuntu base image doesn't include Python, so it needs
to be installed via apt-get using a Dockerfile
RUN command. The default options for system
package installation on Debian, Ubuntu, and RHEL
can result in much bigger images than expected.
That 915 megabyte Python base image
I mentioned earlier is a good example.
More packages make the container image larger,
which may in turn increase the attack surface of the container.
So when you do need to install additional system packages,
avoid installing the recommended dependencies.
I've included examples here for Ubuntu and RHEL that install Python
3 without those unnecessary recommended packages.
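The Ubuntu one is a sketch along these lines (on RHEL the equivalent idea is dnf install --setopt=install_weak_deps=0):

    FROM ubuntu:22.04
    # Install Python 3 while skipping the recommended extras
    RUN apt-get update && \
        apt-get install -y --no-install-recommends python3 python3-pip && \
        rm -rf /var/lib/apt/lists/*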
Using --no-install-recommends on my
Ubuntu-based container reduced the image size by approximately
266 megabytes. So let's build our
app with each of those base images and see how those final
image sizes stack up against one another.
Here's the results in terms of image size. The images all
include Python, our example app, and its dependencies.
And those dependencies are eleven packages that get installed via pip.
The adage goes smaller is safer. A smaller
image size should correspond to fewer packages,
which should result in fewer vulnerabilities.
So let's check that. There's
no denying that these Alpine results are excellent,
with zero vulnerabilities. The official Python
image based on Debian 11 is not looking great,
however, with 84 vulnerabilities, of which 13
are critical and three are high. And these
are all in system packages installed via APT.
Distroless, which is also based on Debian 11, has
46 vulnerabilities. Of those,
three are critical and seven are high. Again, these are all in system
packages installed via APT, and with Distroless
it's also difficult to do much about this.
Unlike traditional Debian-derived images,
there is no apt-get with which to install the latest updates. It
can be worked around, but it is nontrivial. And that
brings us to Ubuntu, the largest container image
but the second best vulnerability assessment. No critical or
high vulnerabilities at all, just seven medium and 16 low
risk vulnerabilities. Well, why is this? Ubuntu is
derived from Debian, right? Ubuntu is a commercially
backed Linux distro with a full-time security team that
has SLAs to mitigate vulnerabilities for their customers
and users, which also includes mitigating all critical
and high vulnerabilities for the supported lifetime of the
distro. Debian, on the other hand, is a community project.
And while many Debian contributors, including myself,
do fix security issues in Debian, it simply cannot
provide the same level of commitment to security as the commercially
backed Linux vendors such as Canonical, Red Hat, and
SUSE. So looking at these results,
Alpine is the clear winner, right?
Well, sadly not. Python and Node and some
other languages can suffer significantly slower build times
and introduce runtime bugs and unexpected behavior
when using Alpine. This is due to the differences
between musl, used in Alpine, and glibc, used in most
other distributions. And this topic could also be a
talk in its own right. Personally, I do not
recommend using Alpine for Python apps, but it can be great for
Go and Rust. So what
if I could have the low complexity of maintaining Ubuntu-based
containers and the security profile of Alpine? And what
if I could make containers smaller than Alpine?
Well, let's try that. The terms
slim, minify, and optimize are used
interchangeably to describe the act of reducing the size of
a container image. DockerSlim is free and open source
software available from GitHub. Both DockerSlim and the
Slim.AI SaaS platform can automatically optimize your
containers without you changing anything in your container image,
minifying it by up to 30 times,
making it more secure and reducing the attack surface of
the container. DockerSlim has been used with Node.js,
Python, Ruby, Java, Golang,
Elixir, Rust, R, and
PHP, all running on top of Ubuntu, Debian,
CentOS, Alpine, Distroless, and more.
We always get asked how DockerSlim works, so let's just
take a quick look at that. DockerSlim
optimizes containers by understanding your application and
what it actually needs. Using various analysis
techniques, including static and dynamic tracing,
DockerSlim will throw away what the container doesn't
need. A new single-layer image is created, composed
from only those files in the original fat
image that are actually required by your app in
order for it to function.
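In its simplest form, that's one command (image name hypothetical; --http-probe exercises the app's HTTP endpoints during the dynamic analysis):

    # Analyze myapp:latest and produce a minified myapp.slim image
    docker-slim build --http-probe myapp:latest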
You can understand your container images before and after
optimization using the Slim.AI SaaS platform or the Slim.AI
Docker Desktop extension. There are also a number of benefits
to slimming containers: you only ship to production
what your app requires; slim containers can be up to 30
times smaller than fat containers; slim container
images are faster to deploy due to their smaller size, and faster
to start due to having fewer files inside them;
slim containers can be less expensive to store and transfer;
and slim containers reduce your attack surface.
In our report titled "What We Discovered Analyzing
the Top 100 Public Container Images", we saw
an increasing trend of dev, test, QA, and infrastructure
tooling being left inside production containers. Unused
shells, interpreters, tools, and utilities left
in your container images can be used against your infrastructure
to disrupt your operations if a container is compromised.
So let's take a look at those slimmed containers.
In most cases, there are significant size reductions to
be had slimming any container image, regardless of the build technique
or base image used. In our example here, we're seeing
between three and five times size reductions; quite modest,
but valuable all the same, as the container attack
surface has been significantly reduced. We often
see ten to 30 times size reductions in complex
applications. And our slim Ubuntu-based
image is now just five megabytes larger than the slim Alpine-based
image, but with none of the compatibility concerns,
and 37 megabytes smaller than the
unoptimized Alpine image. But has slimming
also improved the vulnerability assessment? Let's find out.
Analyzing whether vulnerable components exist in slimmed containers is
currently a manual task, as the metadata the scanning tools
use is often no longer available. That said,
it only takes a few minutes at most,
using the Slim.AI SaaS or the Slim.AI
Docker Desktop extension to search for vulnerable components and confirm
that they're no longer present. We've already confirmed
the Ubuntu-based image was free of critical and high risk vulnerabilities.
The seven medium risk vulnerabilities were in
e2fsprogs, which are Linux filesystem utilities,
libsqlite3, and perl-base. It took
literally seconds to confirm that none of those components
exist in the slimmed image. So now we
just have the low risk vulnerabilities remaining.
And two of those low risk vulnerabilities were in the Python
mailcap module, something we should be interested in,
given we have a Python application here, and it was specifically in
the findmatch function. Again, it took seconds
to use the Slim.AI Docker Desktop extension to
confirm that the Python mailcap module was no longer
present in the slim image. In fact, let's do
that now. So this is our
unoptimized Ubuntu
image with our app inside it. So let's just analyze that
very quickly. We're going to search
for mailcap because that's the Python module that
we're interested in. So let's bring those results up
and we can see here. Here is the mailcap module,
and here's the compiled bytecode for that module.
So this is in the unoptimized image, so we can confirm that there
it is, it's present. But now what we'll do is we'll look
at the slim image,
so we'll analyze this one and we'll do
the same search. So we'll
search for mailcap and as you can
see, no results have come back. So this is great. We had a Python app
with a vulnerable component. Our app doesn't use that component,
so consequently that vulnerability is entirely
absent from our resulting container. A couple
of minutes was all that was required to use the Slim.AI Docker
Desktop extension to confirm that all of the
components affected by the remaining six low risk vulnerabilities
have also been entirely removed. So yes,
we can have our cake and eat it too.
We have an Ubuntu-based image with no known vulnerabilities,
of comparable size to a slimmed Alpine image,
but with none of the potential complexity of working with Alpine.
It's also worth noting that the high risk vulnerability
CVE-2021-3999 in glibc,
which exists in the Distroless and official
Python images that are based on Debian 11,
was still present in those images, unlike Ubuntu,
where it's already been mitigated, and unlike Alpine,
where it never existed by virtue of Alpine using musl.
So therefore, in this case, Ubuntu is
the best base image for our example application.
So let's take a look at what we've learned.
Following container best practice will set you up for success.
Always run Internet-facing apps and services via unprivileged
user accounts. Pin your base images.
Use a stable base image and apply your security updates.
Be mindful of layer caching introducing potentially
insecure packages into your container images.
Container scanning and analysis are essential to
fully understand what's inside your containers.
Do not install recommended packages, to help keep
your containers slim and secure. Linux vendors have
security SLAs; assess your base image options
and pick what is most suitable for your project.
You can slim just about any kind of container image,
including those based on Alpine and Distroless, and
slimming significantly reduces vulnerabilities
and the attack surface. But yes, that was more than
five, and you're welcome. So thanks
for listening. I hope you've picked up a few tips
to help improve the security of your containers.
We invite you to give Slim.AI SaaS and the Slim.AI
Docker extension a try. Both are free to use. And
also take DockerSlim for a spin. If you have follow-up
questions, then join the Slim.AI Discord and ask.
Or you can follow us on Twitter and ask there. Keep building
and enjoy the rest of the conference.