Transcript
Hi everybody. Thank you so much for joining this presentation about secrets management.
As you probably can tell, this is a pre-recording, and alongside it we actually have a running Discord server where you can find me to ask me any questions about this presentation or give me feedback.
Shall we get started? Alright, let's go.
Before we go to the actual content, a little bit about me: I'm Jeroen Willemsen. I'm a security architect at Xebia and a full-stack developer.
There are various ways you can reach out to me, and I've assembled all of those at allmylinks.com.
One specific thing I would like to ask you: actually go over there and visit my trueq website, where you can give me feedback about this presentation, because I would really love to hear what you thought of it and how I can improve. That way I can learn from you while sharing knowledge and make it a better presentation next time. All right, thank you so much. Shall we get started?
The major question that all of us have possibly faced already is: can you keep a secret? Whether as children playing together, while growing up to become adults, or when we joined information security or cybersecurity, because so many secrets were actually shared with us, or shared on systems that we're maintaining or securing.
Before we can actually start talking about how to do that and how to keep those secrets in a proper way, let's first quickly glance at what types of secrets you might have. Of course, passwords are the first thing that always comes up: okay, this is a basic secret you should really secure. But there are so many more secrets out there. Think about your HMAC keys or your encryption keys. Think about your IAM access keys for your cloud provider, QR codes that might allow you to access certain materials, your authentication links, your OTP codes, your private signing key, your GPG key. There are so many different keys and passwords and different types of tokens that we should consider when we try to protect a secret.
Now, as you can tell, all of these secrets can be pretty important, and protecting them can be a very interesting experience, because we can often see it as a journey. We start protecting them in one way, we learn from that, and then we possibly have to change our strategy and how we protect these secrets.
What we'd like to do is basically take you on a journey through various places where we see secrets being stored or shared, and possibly somewhat secured. We'll start with secrets hard-coded in the application, go to configuration, then we'll move to containers, Docker containers that is. And then we'll talk a little bit about how secrets can be protected in pods, or Kubernetes, the platform, in the first place. Then we'll quickly touch upon a third-party solution provider like HashiCorp Vault. And last but not least, we'll talk about secrets being managed in AWS. As you can tell, this is a bit of a Kubernetes-driven talk, but it doesn't really matter. I hope you'll find this an enjoyable presentation that you can basically learn from, or in other ways get a little bit inspired about your own secrets management strategy. Shall we get started? Let's go. So the first one is basically secrets in code.
First of all, many people start laughing about that: why do people have stuff hard-coded? That should never happen. But what we often find is that, as a developer, you try to prototype something to make it work. That prototype might actually need a certain password or whatever that's linked to some API being provided to make your prototype work. So what often happens first is that we just hard code it over there, and then we basically forget, or don't find the time, to clean that out.
Notice that the code currently on the screen is in Java. All of our code is basically based on Java Spring Boot and anything beyond that. So here it is: your public Java class called Constants, where we provide a public static String password which has a certain value.
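As a rough sketch (the class and value here are illustrative, not the literal ones from Wrong Secrets), it looks something like this:

```java
// A hard-coded secret in a constants class: anyone who can read the source,
// or decompile the shipped artifact, can read the secret.
public class Constants {
    public static final String PASSWORD = "super-secret-value"; // illustrative value
}
```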
The funny thing is that we often end up with discussions afterwards with developers saying: hey, but this is running in the back end and nobody should be able to touch that, so why is hard coding a problem? We'll touch upon that a little bit later.
Of course, we can find various variants of how that is done. Another one is, for instance, where you use Spring Boot and certain annotations, like the @Value annotation, where you have an application.properties, or another property file, from which you load the actual value of the secret. Though this is not hard-coded in the Java code directly, it's still in some sort of property file that's committed to git next to the actual controller class that consumes it. So: another hard-coded secret.
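Roughly, that variant looks like this (a minimal sketch; the property name and value are made up):

```java
// application.properties, committed to git next to the code:
//   secret.password=still-a-hardcoded-secret
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChallengeController {
    // Spring resolves the placeholder from the property file at startup,
    // so the secret still lives in the repository.
    @Value("${secret.password}")
    private String password;
}
```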
Luckily there is an easy way of overriding this from various other places, which we'll glance at a little bit later. But basically, you'll easily find this type of secret in your application code.
Of course, a way to override this is by using your Docker container. So on the left over here you see our Dockerfile, where we have an argument-based password and an environment-based password. The argument-based password has a default set; this can be changed. And then we have an environment-based password which says "this is it". The nice thing is that when you build the container, the ARG-based password can be overloaded as a build argument. So you can, for instance, put something in on the command line, as you can see in the example below, and then you can compile, or sorry, build the container. On the right-hand side, you actually see how this could be consumed: we have our ARG-based password and our Docker ENV password being loaded up as the values for the String fields for the ARG-based password and the hard-coded ENV password.
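The consumption side looks roughly like this (a sketch with made-up names; the real Dockerfile and challenge classes differ):

```java
// Dockerfile, for context:
//   ARG argbased_password="default-value"      <- overridable with --build-arg
//   ENV ARG_BASED_PASSWORD=$argbased_password  <- persisted into the image
//   ENV DOCKER_ENV_PASSWORD="this is it"
public class DockerChallenge {
    // Both values end up as plain environment variables in the running
    // container, and both are recorded in the image layers for anyone to read.
    private final String argBasedPassword = System.getenv("ARG_BASED_PASSWORD");
    private final String dockerEnvPassword = System.getenv("DOCKER_ENV_PASSWORD");
}
```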
This by itself already looks like: hey, this is no longer really hard coding, is it now? Because it's no longer in our Java classes or in our configuration that's in code. Unfortunately, the ENV Docker password is still actually within the container. And what we'll also find out is that the build argument is also in the container. And that actually brings us to maybe having a short demo about this.
We put all of these different samples in a project called Wrong Secrets, which has been created thanks to the lovely people Ben de Haan and Nanne Baars together with me. And we're currently looking for new people that can help us out setting up
more different types of secrets as well. So if you ever stumbled upon a funny way a secret was stored within your company or somewhere else, get in touch with us. The Wrong Secrets project
is basically a simple website, as you can see over here, where we provide various challenges in which you have to find the secret and put it in. This goes all the way from Docker secrets to Kubernetes secrets to stuff stored in AWS. You find your challenges, you'll be able to get some scoring by providing the solutions, and that way you can do your secrets hunt throughout these different challenges.
But before we keep on
talking about that, let's go into a little demo actually, shall we?
So over here we actually see the system being started up. We'll run a container locally with the first secrets loaded inside. We see the Spring Boot application starting up, and once it's started, we want to go into the application. So let's go over to the first challenge, which is the hard-coded password.
So as you can tell, the question here is: can you find the hard-coded password? And if you just try something, you'll see it doesn't work. But let's go back to our presentation and take the default password that we found over there. So once it's copied (apologies, copy-pasting was a bit of a problem during the pre-recording over here), let's put it in and say submit. That's the actual password, the one from the code. Let's do another challenge,
shall we? So now one of the Docker-based challenges. Let's take the actual ARG-based password, the one we put into the command line, copy that in, go to the challenge that is about it, put it in, and submit it. And now the nice thing is that, well, this of course looks kind of unsolvable in the first place, but it actually isn't, because you can find these secrets very easily. But before we run into that,
let's go over and discuss a little bit of the troubles that we actually
have here. First of all, we run into visibility problems. A lot of people can actually see the secrets, because the secrets in your Docker container are right now part of the Docker layers that you can easily access, so you can tell what the actual values are. The secrets in code can also be easily accessed the moment you have access to git, or the moment you can download the container and decompile it, and then you can easily find the values inside.
Apart from the visibility of the secret, we have a rotation issue. Because the moment you need to rotate a secret, for instance a password to a certain API, and you rotated it at the API while still having to rotate it in your code, you'll be running with an outdated password, possibly locking out the account at the API. Or worse: when you have variants of the code which do have the proper password, and variants of the code that do not, we end up in problems as well.
Then of course, there's reachability and authorization. Anybody who has access to the Docker container has access to the secrets. Anybody who has access to the code has access to the Java and the configuration secrets. That also implies some sort of authorization problem, because often the spread of who can access this Docker container goes way beyond the group of people that are actually allowed to see that secret or consume it. What could happen as well: even if you drill down the authorization part in terms of the Docker container registry, it could still be the case that an ex-employee who still has the containers on his computer, and shouldn't be authorized anymore, can still use those secrets to gain access to systems he should no longer be able to use.
Then of course there's the problem of history, because given various instances of the Docker container and various versions of the code in git, we actually capture every variant of the secrets, which sometimes unfortunately makes it a bit predictable what the next version of the secret might be. And then there's, of course, the auditability problem: can you actually tell who has accessed these secrets, at which moment, and from where? If somebody just downloads the container, of course you can't. Last but not least, throughout this code we actually have an identification challenge: can you actually identify what the secrets were for in the first place? Because just a name like "password" or "env-based argument" won't really help you ascertain what the secret was used for. That makes it even harder if you want to clean this up and migrate it, because: are we still using this password? And if so, for what? Now, many people will say: let's use the beeping system by just wiping it out, and then possibly something goes wrong, which is our beep to say, hey, let's put it back in. Unfortunately, that can lead to very problematic situations, so it's better to actually know what's happening.
So how can we detect and prevent this? For this, we actually have a bunch of open source tools that can help you. Two that are particularly easy to use are TruffleHog and Dockle. Let's first go over to TruffleHog. TruffleHog allows you to basically scan code and find the secrets defined in that code. Over here you can see that TruffleHog nicely found our public String password and some static new key; not sure what that is for yet, but for now at least we found the password. This will actually help you do this secret hunting yourself for the Wrong Secrets project: by trying this tool, you'll find the answer to solutions one and four. But more importantly, you can also run this tool on your own code base and see if there are any secrets leaked inside the git repository. And then you can use stuff like BFG or other tools to actually nicely wipe that out. But be careful with rewriting history in git, it can become very painful, so make sure you have a backup standing by. But that's only for the code in git. How about your container? For that, tools like Dockle can help. Dockle will easily identify certain suspicious ENV keys, like the ones found over here: the ARG-based password and the Docker ENV password. These types of warnings or errors or fatals should actually help you identify certain environment variables that you might not want to trust. So that basically covers the first part, the Docker containers.
So, did we detect everything? Maybe you want to try the Wrong Secrets project yourself and see if we really detected everything, or whether there are still secrets left inside that we kept there, maybe not even used anymore in the current version. But you should at least understand that these tools don't cover everything, because secrets can be changed to caps, concatenated, encoded, enriched with funny salts or whatever in the naming to make them harder to detect by SAST tools, or just actually be encrypted with another key that's harder to detect, for which the key can actually still be at the same site, within the Docker container or within the source control. For this there's a bunch of tools you could try out, like CodeQL on GitHub, or TruffleHog, or many other tools. And I would really encourage you to try those different tools on your own repository and see if you can actually find stuff you shouldn't, or didn't want to, commit.
Knowing what you know right now about code and containers, it's pretty clear that that's not the direction to go. So what about Kubernetes config maps? Over here, on the right-hand side, you can see config map definitions with some funny data entry where we actually have a secret hidden for you. So this is not in the code of the app, but config maps are often committed to git, so it's still hard-coded in a repository. And config maps by default are easily accessible, unless you really start doing your ABAC or RBAC correctly. That means you need to set up additional configuration for your Kubernetes cluster, which is often pretty challenging if you don't think it through properly.
By default, config maps are not encrypted; they just sit in the storage called etcd. And if you want to secure that, you still have to start encrypting etcd, which requires additional work. And over here I can really recommend putting some metadata in; that's something you can add easily. You already see some metadata, but luckily you can add additional free-form fields as well, where you can identify what this secret is for, so migration becomes a little bit easier. If you want to know a bit more about this, or try it out and see how it could possibly go wrong, please try challenge number five of the Wrong Secrets application.
So config maps are really not the way to go. Kubernetes secrets have actually been created for this, with the single purpose of holding a secret. On the right-hand side, again, an example of a Kubernetes secret. As you can see at the data entry holding the secret, by default all of these secrets are base64 encoded, which makes it very easy to think something encrypted is stored over there, while it is really just an encoding.
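To be clear: base64 is an encoding, not encryption. Anyone who can read the secret manifest can decode the value, for instance like this:

```java
import java.util.Base64;

public class DecodeSecret {
    public static void main(String[] args) {
        // The value as it would appear in a Kubernetes secret manifest.
        String encoded = "ZnVubnkgc2VjcmV0IHZhbHVl"; // illustrative value
        System.out.println(new String(Base64.getDecoder().decode(encoded)));
        // prints: funny secret value
    }
}
```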
And the nice thing is that it really makes sense, because isolating secrets into their own files makes sense, as does using your ABAC or RBAC correctly and making sure that this Kubernetes secret can only be used in the namespace by the services that actually require it. That still requires quite some effort, but it becomes easier to think of it in the way recommended by the people behind Kubernetes. Secrets can be encrypted, but that's again a bit challenging when it comes to, for instance, key lifecycle and other things involved in protecting it properly.
And the problem remains that when your Kubernetes cluster is compromised, it's still easy to obtain the secret, which basically means that you have to secure your cluster in a proper way and continuously audit it, to make sure that there's no easy step-up from compromising a pod all the way to its secret, or secrets of other pods for that matter. And of course we still have our identification, rotation and expiry challenges, because we have to make sure that the secret itself actually contains the right value, that we have enough metadata inside so we know what the secret is for, and that the moment the secret expires, we do update our Kubernetes secrets.
That already sounds a bit easier than having stuff directly in code, because given the whole RBAC/ABAC setup, we can actually solve the authorization problem, and we can have proper access logging in place so we know who actually accessed the secret. So there's a lot we can do with this, which makes it a great basis for your secrets management in the first place. Given all the errors you can make, I would like to invite you to try challenge six from the Wrong Secrets project, where you can see how easily you can mess this up. So if you want to stick to Kubernetes secrets,
make sure you configure RBAC well. That means you apply your least-privilege principle: make sure people just can't access secrets directly, make sure that services can only run in the namespace they're designed for, and make sure that pods are only in the place they really need to be. And on top of that, make sure that secrets can only be accessed by those entities that actually consume or produce them, and nothing else. Have your secrets metadata in place, so it becomes easier to migrate to some other solution, because everything in IT is of course a bit in flux. And when you actually migrate away from a certain app that you need a secret for, you then know that you can ditch this specific secret, because you know it's related to this API. Make sure that your
underlying storage, called etcd, is encrypted. Make sure that you actually enforce a stringent security context with admission controllers, or, if you're still on an older version of Kubernetes, have PSPs in place, to make sure that whenever a pod is compromised, it doesn't mean that the worker node gets compromised immediately, or that it's easy to do other types of lateral movement towards other secrets than those that should have been exposed to the compromised pod.
Harden your worker nodes. That's very important.
But I don't want to make this a presentation about hardening Kubernetes.
There are plenty of beautiful resources out there to watch out for.
Please just google them and you'll find your way. And if you can automate the rotation of your secrets, do so, to make sure they don't get stolen. Last but not least,
have regular security validations of your complete setup.
Your secrets are the diamonds in your cluster, and at some
point they're just the key to the other diamonds in your cluster, which is
the actual data that we're trying to protect. All right,
so much for Kubernetes secrets. A lovely place to be in, a lovely thing to work with, with some challenges that can actually be managed quite well. How about third-party providers like HashiCorp Vault? Here you can see a bit of an example of a HashiCorp Vault setup. And before we go into the details of how HashiCorp Vault works, let's just go over a few of the things it can do for you. It can do secrets management, where it manages your static secrets, or dynamic secrets that can easily be changed. It actually offers credentials as a service via backends we can configure. For instance, you can use the database secrets backend, with which you can let Vault create temporary credentials for users of that database. There's a PKI provider packed in. There is encryption as a service via the transit backend.
The secrets themselves are versioned. And there is a huge auditing system involved, which makes it very easy to see what has happened with Vault in the first place.
And you can seal the vault, which basically means nothing goes in or out anymore, so you can first resolve the security issue you're having and then move ahead again. So how does it authenticate users? One way is for a user to authenticate with LDAP credentials through Vault, which then goes to your actual identity provider and verifies those credentials. Based on that, it attaches policy to a token, which is then returned to the user, who can use it for various actions.
Something similar can happen for a pod running on Kubernetes, for instance. A pod is deployed with a service account token, which is offered to the application. The application can then authenticate towards Vault using the service account token, which in turn is validated by Vault by asking the Kubernetes API: hey, is this token okay? Can we really move ahead with this? And based on that, Vault returns a token that the application can use to, from there onward, start consuming secrets.
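With the Spring Vault client, that flow looks roughly like this (a sketch; the Vault address, role name, and secret path are assumptions, not the exact Wrong Secrets values):

```java
import org.springframework.vault.authentication.KubernetesAuthentication;
import org.springframework.vault.authentication.KubernetesAuthenticationOptions;
import org.springframework.vault.client.VaultEndpoint;
import org.springframework.vault.core.VaultTemplate;
import org.springframework.vault.support.VaultResponse;
import org.springframework.web.client.RestTemplate;

public class VaultKubernetesAuthExample {
    public static void main(String[] args) {
        // The Kubernetes service account token is read by default from
        // /var/run/secrets/kubernetes.io/serviceaccount/token inside the pod.
        KubernetesAuthenticationOptions options = KubernetesAuthenticationOptions.builder()
                .role("secret-challenge") // Vault role bound to the service account
                .build();
        VaultEndpoint endpoint = VaultEndpoint.create("vault.default.svc.cluster.local", 8200);
        endpoint.setScheme("http"); // plain HTTP for a local dev setup; use TLS in production
        VaultTemplate vault = new VaultTemplate(endpoint,
                new KubernetesAuthentication(options, new RestTemplate()));
        // Vault validates the token against the Kubernetes API and, if the
        // attached policy allows it, returns the secret.
        VaultResponse response = vault.read("secret/data/secret-challenge");
        System.out.println(response.getData());
    }
}
```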
Hold on. So right now we used Vault multiple times as the means to manage secrets. How does this pattern work in general? Because this doesn't just apply to HashiCorp Vault, but to quite a few secrets management services, also those in the cloud. Basically, we
have a consumer of a solution, which could be your service that requires a password or an access key for a database. Then we have a solution that requires authentication, for instance that given database. The main secrets management solution basically provides the authentication means to the consumer of the solution, to let it authenticate towards the solution that requires that authentication in the first place. That is often done by providing temporary credentials set up by the main secrets management solution: at the solution requiring authentication, in this case the database, it sets up a temporary role, which can then be provided back to the consumer. But in order to do that, the main secrets management solution needs some sort of temporary-credential mechanism that's based on a longer-living credential.
Because if you want to create new temporary accounts for the consumer of the database, that means that you, as a secrets provider, need to have a longer-living credential at the actual database in order to create those various roles. So yes, we can now easily rotate secrets for the consumer of the database, but it gets a bit harder to rotate those secrets for the actual database itself. Then, of course, the main secrets solution itself, for instance Vault or your cloud provider, has its own access keys as well, which are required to create users. For that, you basically need a secondary secrets management solution that will hold the root credentials for your main system. Because otherwise, if everything breaks down at your main secrets management solution, there's no way to access it anymore if you don't have some break-glass procedure.
Therefore, we have now ended up with two secrets management solutions in the first place. All right, don't forget about that. And the same holds there again: make sure you can easily spot how it's been used and by whom, that it's auditable, and that you can rotate the secrets inside the secondary secrets management solution.
But that's enough to think about for the future. Let's see how we can actually use Vault, shall we? So here we have a short demo of the key-value backend being used by the Java Spring Boot Cloud application, where we basically have a vault password that we want to obtain using a lot of different autoconfiguration parts. Shall we take a short demo?
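On the application side there is not much to it, because Spring Cloud Vault exposes the KV entries as ordinary properties (a sketch; the property name is an assumption):

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class VaultPasswordHolder {
    // Spring Cloud Vault adds the KV secrets as a property source, so the
    // secret can be injected like any other configuration value.
    @Value("${vaultpassword:not-set}")
    private String vaultPassword;
}
```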
So if you look at the Wrong Secrets repository, you can basically see how to start this up. After the script has fully started, you see the secret challenge application being launched, and at port 8200 we actually have Vault running, listening for you to sign in to the Vault administration back end.
So as you can see over here, we actually have the secrets management solution running, and at port number 8200 we also have the port exposed for Vault, where you can actually go into the administration. There you can see that there is a secret created, and there's Kubernetes authentication configured as well. So we have some basic default configuration set up over here. And there's a secret-challenge application role which can access certain secrets. Then there's the policy to basically make sure that that role can access that stuff: we allow the secret challenge to read two different paths where we basically store the secrets. On that path, the secret-challenge path, you can find a vault password which has a given value.
Then we take this value; note that it's base64 encoded. (This is actually an old version of the secret challenge app, from when we used it for our All Day DevOps talk "From Code to Vault".) And when we submit it over there, you can see: hey, that's the correct solution. Basically the raw entry in Vault. As you can tell, over here there's the deployment, and then here there's a bunch of bootstrap properties and the actual configuration code required to put it in. Note that
this is already a bit older, but over here you can see we use the service account token to authenticate towards a given Vault instance. As you can see, you could also use the Vault token to directly authenticate towards Vault, but that's something I really don't recommend; make sure you actually leverage what Kubernetes offers you over there. And for now, for the sake of time, let's move ahead; a little bit more of it is explained at the Wrong Secrets project itself.
The nice thing is, because the service basically leverages the Kubernetes service account token offered to it, we have now made sure that, from a service perspective, we covered authentication and authorization. And the nice thing is that with Vault we can also make sure that on the consumer level we do authentication and authorization in a proper way, because you have to authenticate towards a given LDAP. Note that in our demo we're using Vault tokens directly, generated when setting up Vault in the first place. That's of course a very bad idea, because a token doesn't tell anything about who set it up and whether he's still authorized to access that particular secret.
We have auditability in place; of course not during this demo, but what you would normally do is forward all of that stuff to ELK, and there you can see what happens. And a nice thing is that you can also audit the actual configuration, because the whole configuration of Vault is covered in HCL, and with that you can configure the policies, the resources and a lot of other different things, which allows for easy auditing. We also have temporality covered, because by default some of those temporary secrets are only there for a given session and are then invalidated again. And a nice thing is that it's also very easy to rotate the secrets in the KV backend, and they are versioned, so you can move ahead. Another nice thing is that you can also put metadata in, so you can see what is going on with them.
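Rotating such a KV secret then comes down to writing a new version (a sketch; the mount, path, and key are assumptions):

```java
import java.util.Map;
import org.springframework.vault.core.VaultKeyValueOperations;
import org.springframework.vault.core.VaultKeyValueOperationsSupport.KeyValueBackend;
import org.springframework.vault.core.VaultTemplate;

public class RotateSecret {
    // Writing to a KV version-2 backend creates a new version of the secret,
    // so older versions remain available for rollback and auditing.
    static void rotate(VaultTemplate vault) {
        VaultKeyValueOperations kv = vault.opsForKeyValue("secret", KeyValueBackend.KV_2);
        kv.put("secret-challenge", Map.of("vaultpassword", "freshly-rotated-value"));
    }
}
```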
Blast radius is still something you have to take care of yourself; that's not really related to Vault. It basically means you need to make sure that the secret itself is not consumable by other platforms, aka just don't reuse your passwords. There's a lot of other things you can do with Vault as well. You could, for instance, allow temporary access to AWS or another cloud provider, make sure you have temporary credentials for services and users of databases, or use PKI. There are so many things you can do with it.
So that makes it actually quite an interesting one-stop shop for all your secrets management. The problem, of course, is that it can become quite challenging, because if you put all of these different secrets management procedures into one product, you end up with an extensive amount of HCL, additional Kubernetes or Terraform code to further provision it in the future, or other types of code in terms of how you configure or deploy it correctly. Integrating the different auth backends safely actually requires a lot of attention, because it's easy to make mistakes in terms of how you expose the credentials, how you make sure that the roles get revoked properly, and how the temporary credentials get cleaned up in the first place. And not every DevOps consumer, as in your developer, knows how to work with that. It takes quite some training. If you
still like this solution a lot, make sure that you store enough metadata about the secret where you store the actual secret. Make sure that you have backups in place, because the storage Vault is running on could be damaged as well. Like I mentioned earlier, we're using root tokens to use the Vault in the Wrong Secrets project. Make sure you don't need those root tokens anymore, because they're too powerful and they are not related to any person in the first place. And even when they are, in a certain way, it's still hard to track whether a token is actually being used by that person or by somebody else who has obtained it. So get ready for having your master secrets secured in your secondary secrets management setup.
And you still need to harden the environment Vault runs on. That means if Vault runs on Kubernetes, well, we just talked about what you have to do over there. Even if it just runs as a cluster somewhere else, make sure you harden the cluster, the network, and everything that's provisioned alongside it. And credential-related backends can still be challenging. And there are so many more things that can actually be challenging with this. That doesn't mean it's a bad solution; similar to Kubernetes secrets, it can be used very well, but you do have to take into consideration all the different challenges that you might end up with and prepare for those. And well,
as a little demonstration of how problematic it can be: if we scroll down on what we just showed previously, you'll see that there's actually a lot more committed to git. In fact, we committed some of our root tokens for Vault. So you can see that you're possibly not the only one that might have root tokens for Vault in git. Make sure you get rid of those and invalidate them. So, enough about Vault. Let's move to something else, shall we?
Move to the cloud. The examples we're going to discuss today are based on AWS. Of course, Google and Azure are on their way for the Wrong Secrets project in OWASP, but they're kind of similar. Let's talk about the solutions that we have. You can store secrets in the AWS SSM Parameter Store, shown on the left, or use AWS Secrets Manager, whose icon is shown on the right. Both are covered in the Wrong Secrets project with their own challenges.
The idea is basically that the secret lives over there, and there are a few common challenges with these systems when you store secrets there. First of all, you need to make sure that the values are encrypted properly, for which you have to leverage AWS KMS and correctly configure the keys used for that encryption, or use alternatives for encryption. Then you have to take care of rotating and versioning the secrets in a proper way. And of course, the AWS SSM Parameter Store and Secrets Manager work a little bit differently in terms of how they expose the secret and how you can regulate access to it. Then, of course, you shouldn't forget to monitor the access to these two services using CloudTrail. And as already mentioned with all the previous solutions, make sure you store some metadata about the secret, because even if you move it to the cloud, it becomes very easy to forget what the secret was for in the first place.
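For reference, reading from both services with the AWS SDK for Java v2 looks roughly like this (a sketch; the secret and parameter names are made up):

```java
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.GetParameterRequest;

public class ReadAwsSecrets {
    public static void main(String[] args) {
        // Secrets Manager: the caller's IAM role needs secretsmanager:GetSecretValue.
        try (SecretsManagerClient sm = SecretsManagerClient.create()) {
            String secret = sm.getSecretValue(GetSecretValueRequest.builder()
                    .secretId("wrongsecret").build()).secretString();
            System.out.println(secret);
        }
        // Parameter Store: withDecryption lets KMS decrypt a SecureString parameter.
        try (SsmClient ssm = SsmClient.create()) {
            String param = ssm.getParameter(GetParameterRequest.builder()
                    .name("/wrongsecrets/password").withDecryption(true).build())
                    .parameter().value();
            System.out.println(param);
        }
    }
}
```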
And of course there are many other things you have to take care of, but this is only about how to store the secret. Another important challenge is, of course: should you be allowed to access it? For that, in AWS you can use STS to authenticate against the service, and then you get some temporary credential, like a role. Luckily, you can see all these types of authentication attempts in CloudTrail. That role, an IAM
role, has its own definitions in terms of the role itself and the attached policies, which tell whether the authenticated entity is actually allowed to go to the AWS SSM Parameter Store or AWS Secrets Manager. By that, you can carefully design your system in terms of access rights: whether a certain given entity should be allowed to read a secret. On top of that, the Secrets Manager also has its own resource policies to define whether somebody or something should be allowed to access the secret in the first place.
The only problem with both of these, the Secrets Manager's resource policies as well as the IAM policies, is that it's easy to try to run off as fast as possible to make it work in the cloud, and then basically create too-broad policies, or too-broad definitions in terms of the roles, which ends up in too-powerful entities that are allowed to do too many different things and can eventually easily obtain a secret. For that, I would like to welcome you to try challenge eleven from the Wrong Secrets project and find out what we mean by this.
So, good to keep in mind: make sure you have fine-grained policies in place and that you don't attach all of those policies to a single role.
And then, of course, the question is, if you look at your setup in AWS, assuming some sort of EKS or Fargate solution: at what level is an entity allowed to go? Is a worker node, which hosts a bunch of pods and services, allowed to go to the SSM Parameter Store or to the Secrets Manager? Or do you do this on a Kubernetes role level? Or do you specify your authentication and authorization means on the pod level? There are a few things you can take away from that. The first is: the closer the authentication is done to the actual back-end service in place, for instance the specific pod that hosts the container that requires the secret, the more secure it becomes, because it becomes harder to compromise. This also means you have more work to do, because you need to set up those fine-grained access policies; you need to make sure that the specific back-end service running in that pod is actually able to get the secret. So there's a lot more work involved. Next, you have to make sure that the secret is, of course, only exposed to the pod that really requires it.
Of course, the next thing we need to take care of is how we instruct the cloud to create, set up and configure these services, as in IAM, STS, the Secrets Manager and the Parameter Store, in your EKS cluster. You can do that by clicking around in the console, but it's a far better idea to use infrastructure as code to do so.
Unfortunately, there might be a few problems with that, because you might also try to use infrastructure as code to actually insert the secret. This can be done in various ways, and various providers for infrastructure as code actually resolve this in a proper way. But we created a nice challenge, number nine, in the Wrong Secrets project, which shows how not to do this. Basically, go ahead and try it out and see what actually happens in Terraform if your secrets end up in the Terraform state, for instance.
And then, of course: how do you authenticate? And there we go back to the old problem. If you want to authenticate to set up the infrastructure, you again need to store those secrets somewhere. Do you see how this continuously keeps on moving? Make sure that you secure those secrets. And that's also one of the things: if you use infrastructure as code from, for instance, a pipeline to make this work, you also have to secure that CI/CD pipeline, with which you basically provide the instructions to your cloud provider to set up the infrastructure. As you can tell from our Wrong Secrets project, we didn't include those in the GitHub Actions, because it's easy to make mistakes, and it's very easy to let the secrets slip somewhere so that other people, in the name of the CI/CD pipeline, can use those secrets to then build up infrastructure within your own cloud, or destroy it. That doesn't mean it's a bad idea to use a CI/CD pipeline to set up your infrastructure. It's actually a great idea, but it requires careful attention, based on the things we just shared with you, to make sure that the secrets used to authenticate towards your cloud provider to set up your infrastructure are kept well and kept secret.
So let's just dive into one of those infrastructure-as-code challenges, shall we? Going back to our little Wrong Secrets project, let's do a little challenge: challenge number nine. Over here, we basically use Terraform to provision our environment in AWS, which is a great idea because it becomes very reproducible. Just to make sure: is this really not a hard-coding joke? No, it's not. Okay, so let's open up our Terraform state, which for this sample is stored on our local hard drive. Of course, that's not the recommended way to do it in your enterprise environment, but for now, for the demo, it's the easiest way to work with.
So we open Visual Studio and open the state file. And then we start looking for the password. Here we actually found some password. Okay, that's strange. So while provisioning through Terraform, we actually generated the secret itself, and there it is: the correct secret. The reason that worked is because we used a Terraform provider for the Secrets Manager and the AWS SSM Parameter Store that does not encrypt the secret. Luckily, there are alternative providers that actually have a configuration to encrypt the secret in a proper way. Use those when you really have to provide secrets through infrastructure as code in this way.
So, that's a lot of small chunks everywhere. Shall we try to sum up what we've covered today in terms of lessons learned? First of all, as you can tell, there are so many ways we can mess up secrets management, and there's really no single solution that will always work and always cater to your needs. Because face it: you can make mistakes. So make sure you can, and will, rotate your secrets, not only because of the risks involved, of course, but also because the APIs you might need those secrets for, or any other type of system that requires the secret, might force you to rotate it in the first place. Label your secrets, so migration, cleaning up, or improving your service landscape won't hurt that much. Make sure you create a small blast radius, aka make sure that the secret you're using is not reused in some other context, and that the secret only opens up a least-privilege role at the system where it's required. That way you can make sure that when the secret gets compromised, your full system doesn't get compromised.
Make sure that the creation, consumption and monitoring of your secrets is easy. If anybody in your organization who needs to work with this has trouble understanding it, start revising your secrets management solution, start revising your security procedures around it, and see if you can make it workable and simple. Because if it's not simple to everybody, it becomes hard to use: people will actually try to bypass it, and you might find those beautiful yellow stickies attached to monitors at people's desks, at home or in the office when we've all returned again after Covid, with the actual secret on them. Make sure you have security and break-glass procedures in place for whenever the primary secrets management solution starts to fail. Make sure that your secrets are actually short-lived where possible.
And of course, there are so many other things. Storing secrets encrypted is one thing; using the right access controls and policies is another. Not logging them directly is a good idea, not copying them locally to your computer or to your git repository is a good idea, and there are so many others. Moving to the cloud: infrastructure as code is great, but be careful with secrets in the state of the infrastructure-as-code provisioner. Use a solution that works for you, but be careful how you manage it. That holds for the whole secrets management solution in the first place. Secrets management is, in that case, kind of the result of your whole infosecurity program: how you set up your IAM, your hardening policies, your procedures, how you make sure your code is in good shape, and all the other things.
So that basically means it's a journey. You always have to upgrade, you always have to improve, because there's always some place where you might have made it a little bit too easy to obtain the secret. And you will always need a secondary secrets management system, because you need, of course, to secure the root secrets of your primary secrets management system in the first place. And learn from us, from our GitHub Actions: if you don't have time to harden the pipeline, that is your CI/CD pipeline, secrets have no place in there. Luckily, there are various resources on how to harden your CI/CD pipeline in a proper way, which will then allow you to safely inject the secrets over there and start using your pipeline to its fullest potential.
If you have any questions, feel free to ask them through Discord, which you can find on this page. If you want to have the slides at a later stage: we'll also be sharing them at the conference, and we'll share them via Twitter. Feel free to get in touch with us. Thank you so much for your time.