Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hey folks, welcome. Let's talk about container security. But before
going into that, we need to talk about containers.
What is a container? If you look at the real world, your day-to-day life, a container is something that holds something else. It holds a good, like your soda can or the airtight container you put your food in. And basically what you're trying to do is make sure that the outside world cannot influence the contents inside. One thing is, you need to pick your container correctly.
However, if you translate that concept into software,
people think in many cases that if you put an application into
a container, that by default it is safe from outside vulnerabilities,
the threats from the outside world. Unfortunately, this is not
true. And that's why we are going to, or I am going to show you
what you can do to build safe containers. One thing you
have to keep in mind is that if you look at a container
in real life, it's a protective barrier around
the good, to protect the goods from the outside influence.
However, if you look at software, we more or less need two-way traffic. We need to get outside of the container, for instance having a
UI towards any user, or you need to connect to
it with a database connection or a request or something like that.
So it is different and we need to cope with that.
But first, my name is Brian Vermeer. I'm a developer advocate for Snyk. I am a Java developer by trade. Currently I'm a Java Champion.
I do a lot of stuff in the community and I love
that. But today we are going to talk about securing containers
and specifically Docker containers. Because Docker is the most
used, well known way to create containers.
It is not the only way, I know that. But there are a lot of downloads of Docker images from Docker Hub, and people create their containers based on those. So today we
can more or less say this is a best practice session on Docker image security.
Let's get into the first tip I can give you, and that is: prefer a minimal base image. If you
look at how we build docker images, we build
that in a file, a Dockerfile. And normally you start with FROM something. And that FROM something is, say, FROM ubuntu, or FROM debian, or FROM node. You build
your application on top of an existing image that you probably download
from Docker Hub. In late 2019, we did research on the top ten free Docker images that you could download from Docker
Hub and we tested them for known vulnerabilities inside
the images and this was the result. All ten images had
vulnerabilities by default, although most of them are well
known images, well maintained images, images that might be certified
in some way or recognized by Docker or whatever you might call it.
Looking at each and every one of these images — take specifically the Node image — a lot of vulnerabilities come into your application if you just use this image. What we did in this research was
we took these images without any specific tag.
So basically that means at that point in time we took the latest image.
A lot of people do that and build their image that way. That means that
if I do this today, the results would probably be slightly different.
Moreover, the importance is that the latest image might not be
the best image. For instance, if you take the node image and look at it closely, that node image is Node.js built on top of something else: a full-blown operating system, Debian Jessie at that point in time.
So that means everything that comes from that layer of abstraction,
the full blown debian operating system plus the node
image and then your stuff comes on top of that. So it's layered
and then you have to think about that and think of yourself. Do I actually
need a full blown operating system to
build my tiny little REST server on? Probably not.
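To give an idea, the difference is just the FROM line in your Dockerfile. The image tags below are illustrative examples, not taken from the talk:

```dockerfile
# Full-blown OS underneath: hundreds of binaries you may never use
FROM node:10

# A slimmed-down Debian variant: same Node, far fewer packages
# FROM node:10-slim

# An Alpine-based variant: an even smaller footprint
# FROM node:10-alpine
```

Swapping one line changes how much of an operating system you ship alongside your application.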
If we take this any further and look at a full operating system — because most images are built on top of an operating system layer — we see
that there are a lot of differences between different operating systems. If you
take Debian, for instance: you might use the full Debian image, but probably not. If you convert to a Debian slim image instead of the normal Debian image, you have already remediated a bunch of vulnerabilities. You have less stuff in your image, so it cannot harm you. If you go any further to the right on
this image, you see that things like Ubuntu or fedora
or even alpine may help you remediate a lot of threats. So choosing
your image correctly, your base image, your foundation of
your docker container is crucial. Think about it.
Do I need every single binary and every single library that
comes with a full blown operating system? Do I need that for
my application? Probably not. And in many cases things like Fedora, or even Alpine, might be suitable enough. Let me translate that to, for instance, a Java image; here I'm using OpenJDK 11.
If I do the latest image, the latest image is bound to the
Ubuntu version of this image. And at the point where I tested it, a couple of months ago, that had 25 known vulnerabilities in Ubuntu — not so much in OpenJDK. If I choose
a Debian version, which I can, which is a far, far bigger one, it gives me a lot more vulnerabilities but also a lot more unused binaries
in my case. Therefore, in some cases it would be wise to use Alpine. Looking at this from an architectural perspective from the beginning — which choice do I make for my base image — is a valid question, because
Linux operating system vulnerabilities steadily increase. If you look over time at how this increases between the different sorts of operating systems, you see that Debian, for instance, in this case is the "winner". But that doesn't mean
you need to use it. Maybe you do, maybe you don't. But make a conscious
choice. Make sure that you only use the stuff that you need,
because what you do not have in your container cannot
harm you. Let's go to the second tip. The second thing that I want to show you is the least-privilege principle, and in this case the least-privileged user. Think about it: the least-privilege principle basically says that you should only be able to do what needs to be done,
nothing more, nothing less. For instance, if I come to the doctor,
I want to make sure that my doctor knows what my medical history is.
That doesn't mean that every doctor in the hospital and every nurse in the hospital
needs to know what my medical history is. Moreover, if he wants to operate
on my nose, he doesn't need to know what my shoe size is because that's
irrelevant. Make sure that you act the same way with
users and applications. If you run a docker
image straight away, or you write a Dockerfile and you just run the application within your Docker image straight away, you run
it by default as root. Is that necessary?
Probably not. So the best thing you can do is create
a specific user for that. Create a specific user with only the privileges it needs, and we can do something like this.
I'm creating an image based on Ubuntu and run my stuff on that. Let me just highlight the stuff that is important, because these three lines are important. In this case I create a specific
user and a specific user group for it. It's a system user
without a password, without a home directory, without shell access,
and I couple it to the group I created. In the second highlighted line I give it the ownership of
my application folder because that's what I need. I only need
the privileges on that folder. In the third line that
I marked over here, you see that I call the user. Because I can create a user and give it privileges, but if I never actually call the user I created, it will not do anything and the container will still run as root. If I do it like this, then every command afterwards will be executed by the user I created. So now you see
that by doing a few more lines you add a new user
with limited privileges. And that might make
sure that your scope is smaller than it was. Because think about security
like this: security is always a chain of attacks. If something happens — say an attacker can enter an application or a container — and there's something wrong inside your application or container, they can connect from one step to another step to another step, and make it worse and worse and worse.
So at every level, every stage where you can apply secure practices, you should do that, to prevent things from blowing up in your face when something does happen.
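A sketch of the three highlighted lines described above; user, group, and path names here are illustrative, not copied from the slide:

```dockerfile
FROM ubuntu:20.04
COPY . /app

# 1. Create a system group and a system user: no password,
#    no home directory, no login shell
RUN groupadd -r appgroup && \
    useradd -r -g appgroup -s /usr/sbin/nologin appuser

# 2. Give that user ownership of only the application folder
RUN chown -R appuser:appgroup /app

# 3. Switch to the user: every instruction after this line runs
#    as appuser, not as root
USER appuser
CMD ["/app/run.sh"]
```

Without the final USER instruction, the user exists but everything still runs as root.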
Also think about the image you're using. I'm now using an Ubuntu image,
but if I use, for instance, a node image — and this is an example from Node 10 — the node images already come with a specific user,
the node user. If I'm not aware of that and I don't do what the underlined line over here does, I'm just running as the root user. But that node user is already there and has limited privileges, so I can use it. You just need to call it before you execute the command that you want to run. So be aware of that, and find out if your base image or
the image that you're using already contains a user that you can use for this
instead of creating it. If not, you can create it yourself.
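For the Node case, a minimal sketch might look like this (paths illustrative):

```dockerfile
FROM node:10
COPY . /app
WORKDIR /app

# The official node images already ship a low-privileged 'node' user:
# call it instead of creating your own
USER node
CMD ["node", "server.js"]
```

One line — USER node — is the difference between running as root and running with limited privileges.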
Next one, finding, fixing and monitoring open source vulnerabilities
in your operating system. We already talked about operating systems at the beginning, when picking your base image. But even if you look at buildpacks like this, you see there is a large difference depending on what the buildpacks are built from. This again is somewhat older research, but just to give you some
numbers: it depends on whether a buildpack is built on Debian or on Ubuntu, for instance, but it also changes over time. So if you test your image at the beginning, when you created it, that's already a very good start. But make sure that over time you do that again and again and again, because vulnerabilities will be found and fixed over time. There was a question we
asked in our open source security report.
Well, a year and a half ago, when do you scan your docker images
for operating system vulnerabilities? And a lot of people unfortunately do not, or do not know. They don't take care of that layer
of abstraction. We probably take care of our application, we probably take care
of our firewalls and that kind of stuff. But your operating system,
if you deliver a complete container, if you deliver a
complete image, then you are responsible for that operating system layer
as well. So don't import it blindly. Make sure that you scan it and
you are aware of what is happening. One applicable tool you can use is, for instance, the tooling that Snyk delivers. This is an example with the Snyk CLI installed; you can find that on the website, I won't go into that. Say, for instance, I'm fetching the image over here with docker pull — I'm pulling the node:10 container. I can then do something like snyk container test and point it at that container I have on my local machine, scan it right away, and it will give you the number of vulnerabilities that are in there and even tell you, for instance, how you can fix them. In some cases,
if you create a container yourself and you add the Dockerfile that was used to create that container, we can give you remediation advice on the base image — for instance: use another base image to remediate X amount of vulnerabilities right away, because,
well, we know the info. The other thing you can do is testing is good.
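Sketched end to end, the CLI flow described here looks roughly like this (image and Dockerfile names are illustrative; exact output varies):

```shell
# Pull the image locally, then scan it for known vulnerabilities
docker pull node:10
snyk container test node:10

# Passing the Dockerfile as well yields base-image remediation advice
snyk container test myimage:latest --file=Dockerfile

# Snapshot the image so Snyk rescans it daily and alerts on new issues
snyk container monitor node:10
```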
Testing is good when you're building, when you're creating, but you also need to be aware when you're in production. So one thing you can do is a snyk container monitor, to monitor that image.
What that does is it creates a snapshot of that point in time,
sends it over to the Snyk UI, which will scan it on a daily basis,
and if there are new problems or new remediation
advice, it will ping you actively, which is helpful for
a development team, not only pointing out hey, something is wrong but
also we have a fix and it looks something like this. I did it for
the node:10 image. I did that today. And you see that over here?
That we have the amount of vulnerabilities over here. If something has changed,
I will get pinged. In this case, my preference is by email, in my email
box. And that is interesting, because did you know that 44% of Docker image vulnerabilities can be fixed with newer base images?
If I just switch out the base image, not doing anything to the application or other key binaries that I might manually put in, I can already remediate a lot of problems — by getting to a smaller base image, or by getting to another version of the base image. So if we ask people,
how do you find out about new vulnerabilities in your deployed container in
production? A lot of people unfortunately don't, because once
it's in production, it's basically out of sight and we're looking at new features. That holds for applications, but also for containers. And know that
20% of Docker image vulnerabilities can be fixed just by rebuilding the image. And that is interesting because in a lot of cases, if you look
at Docker images or Dockerfiles, they look like this: FROM ubuntu:latest. And you know that the latest
now might be different from the latest in two weeks
or three weeks, because once a problem is found, it will be fixed and a
new version will be out, and that will be the latest, and so forth and
so forth. The same holds for things like the apt-get I'm doing over here to install Python. The Python
version now might be different from the Python version in two weeks or two months.
This means that if you have an application, even if the application doesn't change, the shell around it — the container it lives in — can change just by rebuilding that image. Reuse that Dockerfile, change nothing, and rebuild it, because every time you build it,
it potentially can have updates. So best practice
in this one is just rebuild it over time, once a week, once a month,
whatever is feasible for your application, just redo
that, even if your application did not change a single character.
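A hedged sketch of that periodic rebuild (tag name illustrative):

```shell
# Rerun the build from the same, unchanged Dockerfile; --no-cache forces
# Docker to rebuild every layer instead of reusing cached ones, so
# updated base images and patched packages are actually pulled in
docker build --no-cache -t myapp:latest .
```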
However, if you do this, a best practice would be to use the --no-cache flag to make sure that the build doesn't hit the cache. Normally it shouldn't, but this way I force it not to hit the cache at all and to download the latest versions. So rebuilding it can already solve a lot
of things, especially if that container has been in production for ages — or for months or weeks, but you get me. So,
okay, story is enough, but what could possibly go wrong?
Interesting question, because when you create a container
or you use a container and build on top of that, all the binaries are
there. And I'm going to show you an application in just
a second that is using one of the binaries that
is in that container or on your operating system. And that binary is
ImageMagick. And the version of ImageMagick I'm using has an improper input validation vulnerability. That sounds okay-ish, but with this improper input validation I can do code execution, and then it becomes kind of tragic. That's why this vulnerability is called ImageTragick. I think it's funny.
However, I'm going to show you an application in a second,
and that application uses ImageMagick. Let me get right
to that. All right, I'm running a container,
it runs on localhost here on my machine, on port 30112.
So let's get to here. And you see, let me reload
it. This is a very simple Node application that uses ImageMagick, where I can upload a picture and it resizes the picture to an ideal format for Twitter. So what I can do is choose a
file and let's choose a file on my machine. Let's choose a
picture of myself that is large.
So I upload it and I say resize, and it resizes the picture to the right size. That is very convenient for Twitter. Okay, cool. So that works; my unit tests around it are fine. But I know this is using ImageMagick under the hood. What if my image was something different,
not just a regular image on my machine, but something like this. Let me
go over here.
Yes, it's here: rce1.jpeg. And this is my JPEG. Cool, right? Because this is possible with ImageMagick — this is valid. The problem is,
normally I'm allowed to call the URL in
this file. However, with this pipe, I break out of this URL
and basically execute touch rce1. So instead of actually downloading the JPEG, wherever it is, I am executing a command. And if I can execute this command, I can obviously — well, maybe — execute other commands. So if I look
in the image — the image is already here — and you see that there is no rce1 file yet.
But uploading this image, like I said, let's do
this again and choose that rce1 file I just showed you. So rce1.jpeg breaks out of that URL and does the touch rce1. So I open it, it resizes, and nothing comes back, because there is no image. And
look at what's
there. There is an empty file called rce1. So I can do a
code execution and that is interesting because if I can do a code execution
like this, I can probably execute other code — create scripts, for instance, or run scripts. Got me?
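For context, ImageTragick-style payloads are typically a few lines of MVG text saved with an image extension. A hedged sketch of what such a file might contain (not the exact file from the demo; the URL is a placeholder):

```
push graphic-context
viewbox 0 0 640 480
fill 'url(https://example.com/image.jpg"|touch "rce1)'
pop graphic-context
```

The pipe character inside the url(...) is what breaks out of the download and turns the rest of the line into a shell command.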
Cool. Second one, I've got something interesting as well.
So instead of just doing rce1, I have an rce2 — you already guessed it.
So if
I'm looking at rce2, it does something similar.
But what that thing does is it tries to go
to a URL, but it breaks out of it again with this quote, because this quote basically says: evaluate this first. And what it evaluates is: I'm going to the address host.docker.internal, which refers back to the host machine this Docker image runs on. I'm looking
for a file, I'm getting the content of that file and print it out in
that same file on my docker machine. So I recreate that file and I
basically run it. So what's in this r.sh? Well, r.sh is a script, and that script again goes to my localhost and gets a tarball:
Netcat. And Netcat is an application for networks,
not really important, but it unzips or untars the tarball
and it basically installs Netcat on that machine.
So by doing this, if this works,
I can execute scripts. Cool,
right. So first of all, what I need to do, I need
to make sure that my localhost serves this — normally this would be something on the outside. So let's serve this folder, because Netcat is in this folder if I'm not mistaken. Yes, the r.sh and the Netcat tarball are over here. So let's serve
this file on port 5000.
That's what I need. Address already in use.
Right, let me
check. Oh, I already did it in another terminal, so I can terminate that one. Cool, I'm back here. It said the address was already in use; now it's not. So it's now listening on port 5000 and serving as a web server — just a small tool I can use. What I'm going to
do is upload that rce2. So going back to the first page, uploading rce2 and resizing it. And it does all sorts of things. You see it's
running but it doesn't do anything. I do need to do something on
my local machine. So Netcat is running, but on my local machine over here, let's do it: I can run Netcat in listening mode, -lnv on port 3131. Just wait a few seconds and let's see what
happens. This is on my
local machine and because Netcat is running this doesn't do anything yet.
Nothing yet.
Let's retry.
All right, I should put it in listening mode first. So let's redo
this.
First thing I need to do is start up Netcat over here on my local machine with -lnv 3131, putting it in listening mode.
Interesting. Now I'm uploading that file I just showed you.
So I'm uploading that rce2 file. I open it and
I resize it. And you see it's running, it's downloading the tarball,
it's unzipping, et cetera, et cetera, et cetera. So by doing this,
I'm executing stuff and I'm downloading other stuff because hey, it's possible.
So with this listening mode, I basically try to make a connection from my local
machine to my Docker host. And you see, by doing an ls, I already have all the files in the root over here. So I now basically have some sort of shell access into the container from my local machine. And I can do
all sorts of things. But because I can download a tarball,
I can execute it by using this construct.
I am executing code or scripts on
a Docker machine that wasn't designed for that. And that's because we're using an ImageMagick version that is outdated and has a problem. So the problem is not so much in my application, but in a binary that is shipped to me together with my Docker image. So, interesting
part. Take care of that. That's what can go wrong.
All right, getting back to my presentation,
next thing, use a linter. And we all know linters from
coding. If you are an application developer, you probably use a
linter or a source code analysis to write better code
to find bugs, maybe to find security issues as well.
And there are also security linters that you can use for docker files.
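Running such a linter is typically a one-liner. A sketch with Hadolint (the warning shown is an example of its real DL3020 rule; file name illustrative):

```shell
hadolint Dockerfile
# e.g. DL3020 Use COPY instead of ADD for files and folders
```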
One of these tools is Hadolint — I'm not really sure how to pronounce it. However,
it can help you scan your docker file and see if there are problems
or not. If you created a Dockerfile, you just feed it to this tool, just like you use a linter or scanner for your application code to prevent bugs or issues. You can do this with Hadolint as well. Hadolint can, for instance, tell you to please use COPY instead of ADD for files and folders, because that's better practice. So this is another easy tool that you can use when writing Dockerfiles yourself, to prevent certain silly mistakes. Write that
down. Okay, next one. It's not only about your
container, because your container is a shell, is a
wrapper, and inside your container there's an application.
But in both cases: your container is outward-facing, and your application probably is as well. So think about that. Your application should also
be secure. It's not only your application or only your container,
it's both. And looking at your application, say this is the binary
that you put into that application or into that container. How much
of that binary of your application is actually the code that you
wrote, that you wrote yourself or your team members wrote? Probably something
like this. The rest of it, the rest of the yellow part, is probably frameworks and libraries, and other libraries.
And we know that libraries import libraries import libraries, right?
So we depend a lot on, well, dependencies that
you put into your manifest file, like your package.json or your pom.xml. That's a good choice, because we do not want to do the heavy lifting.
We do not want to create plumbing like yet another rest endpoint
or something like that. We want to create value and that's the code that we
wrote. However, do you know what's in that big yellow ball? What's happening in these
dependencies? Because we're responsible for everything: the code you wrote, the dependencies, and the container. So make sure that if you pull in
dependencies, that these dependencies are good, that they are not harmful and
that they are up to date, maybe because if you look at vulnerabilities that
are found each year by ecosystem, it is growing, unfortunately.
And the point is, it's not so much the dependency that you pull in yourself, but the fact that that dependency might depend on something else, on something else, on something else, several layers deep.
Most of the problems are in the indirect dependencies, so we
should take care of that as well by for instance,
scanning your application. And it again sounds like very theoretical.
And what can go wrong with that? Right? Well, let me show
you. I'm showing you a very small spring
application. And let me get this down, let me get
into the application. This application is quite small. This application is
a grocery list. As you can see, when I start this application up, it will
fill my grocery list with beans for fifty cents and milk for $1.09.
It's Java code. It's not really interesting, to be honest. It's a Spring Boot application.
This is the item I'm using the item. This is my domain.
It's not interesting at all, because these three fields are the most important: the id, which is automatically generated, a name, and a cost. The rest of the functions are getters and setters to interact with name and cost. The interesting part comes in the
item repository. In the item repository, as you see over here, I am using Spring Data and Spring Data REST. With Spring Data I can just create an interface — not even an implementation, just an interface extending CrudRepository.
And by naming convention I can create basic functions like findByName: give it a name parameter, and
spring data will take care of the rest. It will automatically generate
the implementation for me based on all other things like what is
my driver, et cetera. So I do not have to implement my SQL query myself; by naming convention it generates it for me, which is pretty neat, because now I can focus on the business value.
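The repository described above is, in essence, just this (class and method names reconstructed from the talk; the exact code on screen may differ):

```java
import java.util.List;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface ItemRepository extends CrudRepository<Item, Long> {
    // Spring Data derives the query from the method name
    List<Item> findByName(@Param("name") String name);
}
```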
Cool. With Spring Data REST, which I import over here, I just put an annotation on it. And what that does is, from that CrudRepository that connects with my database, I instantly create REST endpoints. So my CRUD repository methods are available as REST endpoints, which is even greater, because now I have the basic functions available and I can work on integration with other things. Right? Cool. Very easy prototyping, you would
say. So let's run this application and let's hope it works
because it's always praying to demo gods that these things work.
And it works. It's up and running. Let's go back to my browser
and let's go to localhost:8080. It refers me to the HAL browser. And the HAL browser is just to
show you how a few things work. Like, I can call items/1, which gives me the first item in my grocery list,
which is beans for $0.50. If I do number two,
you will get milk for $1.09. Cool. But I can also do things like search/findByName — which was the method in my CRUD repository that got turned into an endpoint — and by giving it the query parameter beer, I can actually search for it. So now I have beer for $5.99.
Pretty cool. So we're done, let's go. But there's a problem in the
plumbing. In Spring Data REST, this particular version is vulnerable, and the vulnerability is not so obvious. So let's go into that. Let me make sure I am in the right place.
Let's go to that application and let's go to the exploits.
I will show you what you can do by showing you the
JSON body that I can give a certain curl request.
It's this body, let me just enlarge it for
you. Let me see what's going on. It is a piece of JSON.
And if I put this piece of JSON as part of a curl patch request,
for some reason I am allowed to utilize SpEL, the Spring Expression Language. With an expression language I am allowed to
call variables, call objects, create new objects,
that kind of stuff. And what I do over here is I get the current
runtime and I execute a command. I get it as an input stream, redirect it
to an output stream and I can show it to you. So if command is
something different, not the word command but an actual command
like env or delete or mkdir or whatever,
and this works, I can basically do anything
within the application or break out of the application and do anything on
the machine, in this case your docker image. So this
is cool, but let's just actually hack it.
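For context, this class of exploit against vulnerable Spring Data REST versions is usually a PATCH request whose JSON body smuggles a SpEL expression into the patch path. A hedged sketch — not the exact request from the demo, and the endpoint and command are illustrative:

```shell
# Hypothetical target; the "path" field carries the SpEL payload that
# grabs the runtime and executes an arbitrary command
curl -X PATCH http://localhost:8080/items/1 \
  -H 'Content-Type: application/json-patch+json' \
  -d '[{"op":"replace","path":"T(java.lang.Runtime).getRuntime().exec(\"touch /tmp/pwned\")/name","value":"x"}]'
```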
Let's go for the passwd file. So if we look over here, we see that it is a curl PATCH request, right? And that curl PATCH request has a content type, and
we see the actual thing over here, like the runtime getting the runtime execute.
In this case I'm creating a string that says
/etc/passwd, and I call this request with this body on the endpoint items/1, which was normally there as just a GET endpoint
to get my first item in my grocery list. But unfortunately I can do
a PATCH request this way as well. This is of course my local machine, and by running this I will have access to my passwd file. Now, my passwd file doesn't contain a lot of information — luckily, not anymore these
days. But if I can read this, and your Docker container doesn't constrain the user's privileges but runs as, for instance, the root user, we can read and write a lot of stuff in your container without you knowing it. Cool. So take
care of your application as well. So I showed you what
can go wrong with an application. The next thing I want to tell you about is multi-stage builds. A multi-stage build is a marvelous thing in Docker, because what you can do is split
builds in different steps and what you can do with that for instance is
you can divide your build image from your production image.
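A hedged sketch of such a split for a Java application (image tags and paths are illustrative, not copied from the slide):

```dockerfile
# Stage 1: build image — full JDK plus Maven, hundreds of MB of tooling
FROM maven:3-jdk-11 AS build
COPY . /build
WORKDIR /build
RUN mvn package

# Stage 2: production image — only a JRE on Alpine, plus the artifact
FROM openjdk:11-jre-alpine
COPY --from=build /build/target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]
```

Everything in the first stage stays there; only what COPY --from pulls over ends up in the image you ship.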
If you look on how to's on the Internet, on stack overflow,
or when you google things, you find very small, sparse Dockerfiles showing how you can build an image,
but in many cases, at build time or at creation,
you need some more information, and you do not want to leak
that into the image that goes to production. With a
multistage build, you can split these things. And for instance, that's interesting. With Java.
I'm using a Java image over here. I'm using the OpenJDK
maven 3 image, and that means it contains the JDK, the Java Development Kit — roughly speaking, the JVM plus the Java runtime environment plus the build tooling. So it contains Maven,
the complete JDK, and I copy my full source code in it
and I run it. But for a Java application, that's not what I need.
I can use Maven, I can use a JDK to build my
artifact. And I only need the build result because it's a compiled
language. So I do not need Maven and the whole JDK,
I just need a runtime environment to run this JAR or WAR, whatever it is. And you see the result, because all this tooling is in there.
Basically, you just want the car, not the whole factory that built the car — but that's what you're shipping here. It's over 600 megabytes. If I
do it like this, I still use that same image in the top part of my Dockerfile — that is the build image, for building. And in the second part I refer to that.
So what I do with the second part is I create a production image,
and that only holds the Java runtime environment based on alpine.
And that is a very small distribution. That is what goes
into the actual product that I'm putting on my
server. I copy the jar file and I run it so
that final product will just be over 100 megabytes big,
and that's a lot less. Also if you are, for instance, building Node images — like here, I have an example of a node:12 image where I need to provide credentials
to get to a certain registry. You do not want this token, which is secret — you do
not want to leak it into your production image, because if it
is there, you can probably find it back; it's somewhere in the cache, and by doing, for instance, a docker history or something, you can retrieve these kinds of things. If we do a multi-stage build, everything that is
in the building image stays in the building image, and I only copy the stuff
over that I need. So things like secrets, or extra binaries that you need during creation for checks or quality assurance or whatever you want to do — do that in your build part, and you only need
the product of it, right? So in this case the production image
is based on node:12 again, but a slim version, which is a smaller
version of it. Separating these two is a very good practice and something you
should be doing. So you're using open source. Me too.
I'm an open source contributor as well, and open source is great,
but you have to keep one thing in mind. If a
large open source package or container that is widely used is
vulnerable and compromised, there are a lot of victims.
So as a developer, you're responsible for the application and
the container around it to ship it to production. Make sure you are aware
what's in it. If there are already known vulnerabilities and how to
remediate them, because you do not want to be eventually secure. You want to be
secure as soon as possible, as soon as these vulnerabilities
are known. A little recap: choose the right base image.
Make sure that the base image does not contain stuff you don't need. Make sure
that it's small, because of all the binaries that ship with a full-blown operating system, you might not need every single one. So make
it small. Make it concise. What you don't have cannot harm you.
Rebuild your image often, even if your application doesn't change,
the base image that your image is built on, or other binaries that you pull in, might change and might have fixes
as well. Scan your images during development, but also when you go to production, by taking a snapshot and monitoring it. The same holds
for your applications of course, but be aware of that, that scanning is
one of the things that you need to constantly do in every single
part of your software development lifecycle. The multistage builds I
showed you is a good practice to make a separation
between your build image that might need stuff like tokens or
might need extra binaries to do the building stuff. Separate that
from the production image that is small and based on, for instance,
an Alpine image or something like that. Use a security linter. A tool like Hadolint
that I showed you is a very nice small tool that you can add to
your toolset to scan your Dockerfile and prevent
silly mistakes. And last but not least, make sure that you do not
run your Docker container as root. By default this is the case, so create a user, or, if the container you base your image on already has a specific user, make sure that you actually call it. But don't run it as root.
All right, that's about it for me. Thank you for listening, thank you for watching. All the tooling I showed you, you can use for free. And see you later.
Cheers.