Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everyone, thank you again for joining in on my talk about practical threat hunting and the impact it can bring to your organization.
So first of all, I wanted to bring forward this quote: "A goal without a plan is just a wish." Of course, this can be applied to anything, from your daily life to your work, but it is true for threat hunting in particular. The latter is a subject that has become very important, if not fundamental, for all those organizations that want to invest in a defensive infrastructure. But in general, in order to exploit its full potential, threat hunting needs to be backed by an organized and sound strategy or plan. Next, I wanted to introduce myself.
I'm Fulgio, I'm 25, and I have a master's degree in computer science with a deeper dive into cybersecurity. I've been working for the past year and a half as an analyst, and I'm currently head of the threat hunting project for an international client, which I will talk about later.
So the pyramid that I was hinting at before in the title slide is the famous, or infamous, Pyramid of Pain, developed by David Bianco. It scores the amount of pain that specific indicators, or IOCs, can bring to adversaries and attackers if the blue team, the defensive team, has had the time and care to protect against them. So we go from the simplest hashes and IPs to the more intricate TTPs, that is, tactics, techniques and procedures. Of course, these are the ones that need more time and are more complex to take care of, especially on the detection side.
However, these are also the ones that can bring more value and more impact to the organization. And so in this talk I will explain, or at least show, why threat hunting should be more focused on TTPs and how this can be achieved. Of course you could say: but why is it worth it? Is it necessary? And since I am Italian, I will use a wine metaphor. We can say that threat hunting by itself is a nice bottle of wine, but with just a little more we could buy a better, more tasteful wine that improves our experience. And with a little bit more we can even reach for a charcuterie board with cheese, and the cheese, of course, can go well with some orange jam. Overall, these are all things that do not increase the amount of money by much, which in our metaphor is the effort needed to have a better wine experience, but in the cybersecurity analyst field they bring a greater impact, greater achievements.
So my colleague Nico, who sadly was unable to join us today, and I have been tackling the problem, the endeavor, of threat hunting based on TTPs through the project that we have been bringing forward for approximately the past year: the Pandora project.
The Pandora project starts from three main points. The first one is the single point of failure: we wanted to avoid basing our detection solely on one defensive system, so in this case we took both the EDR and the SIEM of the client as the main defensive systems, and in that way we applied redundancy. Second, we wanted to focus on the reduction of the dwell time of the adversary inside the organization, basing our activity on an assume-breach mindset. And third, we wanted to be proactive towards attacks, so proactive threat hunting, and not just reactive, acting only after an incident has happened. So we wanted to complement, ahead of time, the defensive services and systems already put in place by the client.
So where does the Pandora project position itself? We have the attackers, whose attacks, or at least the majority of them, are already blocked by the well-defined services that are available commercially: EDR, SIEM, AV and so on. But of course, as we know, especially nowadays there are a lot of techniques capable of circumventing such defensive measures. And so we want to block those activities, those possible evasions of the defenses. As the picture shows, we place ourselves between the infrastructure and the already present defensive systems.
But what kind of methodology, what approach, did we use towards threat hunting? In particular, we decided to shift away from a simplistic model, we could say, based solely on simple IOCs, the base of the Pyramid of Pain, where we find hashes and IPs, since these are very brittle, meaning that an attacker can change them very easily and very quickly. This is done in order to focus on TTPs, the tactics, techniques and procedures we mentioned, which are more general, much more complicated, and above all more advantageous, since the attacker needs to totally change their approach to the attack if we are able to defend against their kill chain, at least the kill chain that has been observed.
And to do so, we applied the methodology developed and explained by MITRE, whose main graphic is shown on the right, but which I decided to redraw in order to show what we actually did and to explain it better. First of all, we have the initial phase, where we inevitably need to analyze and create a threat model, in this case of our client, but in general of whatever we want to defend. Because, of course, virtually any kind of attacker can reach any kind of device or company whatsoever, but we know that APTs, and attackers in general, have preferences: they are more oriented towards specific sectors or specific kinds of technologies. So the first important aspect is to understand what we actually want to defend against. This will also help us understand which TTPs we actually want to tackle, because of course we could tackle them all, but that would be far too expensive. So we want to focus first on what actually has a much higher probability of reaching us.
After this, we take, for example, the first TTP and work out what we actually want to detect. So we study it, we create the abstract analytics, and then we pass to the next phase, which is determining the data requirements, where we need to understand whether we actually have visibility over this possible attack, this technique. In this case, we need to understand whether the data model is complete. Otherwise, as the next phase suggests, we need to understand what the gaps are and fill them as far as possible. So we need to, for example, enable new event logs, introduce new sensors, or fix some misconfigurations, because possibly we already have such information but we are losing it, so it is not tracked.
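To make this phase concrete, here is a minimal Python sketch of such a coverage check: given the log sources a technique needs and what the SIEM actually ingests, it lists the gaps to fill. The technique names, source labels and function are illustrative assumptions, not the actual Pandora tooling.

```python
# Hypothetical coverage check: which log sources does each technique need,
# and which of them is the SIEM actually ingesting? Names are illustrative.
REQUIRED_SOURCES = {
    "Ingress Tool Transfer": {"Security/4688", "Sysmon/1", "Proxy logs"},
    "System Binary Proxy Execution": {"Security/4688", "Sysmon/1"},
}

def data_gaps(available: set) -> dict:
    """Return, per technique, the required sources we have no visibility on."""
    return {
        technique: needed - available
        for technique, needed in REQUIRED_SOURCES.items()
        if needed - available
    }

# Example: Sysmon is not deployed, so both techniques show a visibility gap
# that must be filled, or worked around with what is available.
print(data_gaps({"Security/4688", "Proxy logs"}))
```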
And after this, we have one of the most important phases: the development, first of all, of the rules, which of course can be written in the different query languages of the defensive services that are in place. But we also have the baselining phase, which I will explain in more detail, and which distinguishes this approach, and this project, from the solutions already in use.
Then we have, of course, the testing phase, where we take the queries, the detection analytics that we have developed or want to develop, to understand whether they actually work or not. Because of course theory is one thing, but practice is another. For this, we developed a specific virtual environment in order to replicate specific TTPs. In this way, we tested both the base capabilities of the defensive services and the complementary detection capabilities that we introduced with our rules. And once we have tested all of this, we of course deploy the rules themselves and then start the hunting activity, for example once a rule creates an alert.
So, as I said, baselining is one of the fundamental aspects of this methodology, of this project. Also because, of course, services like commercially available EDRs and SIEMs already have rules in place that tackle a great quantity of TTPs. However, they deliberately do not cover all of them, because they present themselves as general solutions, not tailored ones like the solution we are presenting now. Everything depends on the infrastructure of the organization itself, because every organization has, internally, a lot of events that create a lot of noise, which inevitably needs to be tuned either by the vendor or by the analysts and blue team of the organization, and this can of course become very expensive. In our case, we do this tuning beforehand: we look at the possible events that can introduce noise, for example daily routines introduced by GPOs, or activities of the operating system itself. Or we could have short-lasting outliers, meaning the introduction of new technologies or new software, or the fixing of something, which introduces a lot of noise that we actually need to understand: where it is coming from, who is doing it, and whether it is actually required. And of course we have the classical noise introduced by admins and IT people in general, who can perform all sorts of activities that mimic, or somewhat resemble, specific TTPs, even though they are benign. So baselining is a fundamental part, especially in order to avoid the disruption of daily operations due to the enormous number of potential false positives.
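As a rough illustration of the baselining idea, the following Python sketch builds a baseline of routine (host, process, command line) combinations from a historical window and then surfaces only the events that fall outside it. Data shapes and thresholds are hypothetical, not Pandora's actual implementation.

```python
from collections import Counter

# Hypothetical baselining pass: learn which (host, process, command line)
# combinations are routine (GPO runs, OS tasks, admin activity) over a
# historical window, then hunt only on what falls outside that baseline.

def build_baseline(history, min_count=5):
    """Combinations seen at least min_count times are considered routine."""
    counts = Counter((e["host"], e["process"], e["cmdline"]) for e in history)
    return {key for key, n in counts.items() if n >= min_count}

def hunt(events, baseline):
    """Keep only events the baseline does not explain: far fewer false positives."""
    return [e for e in events
            if (e["host"], e["process"], e["cmdline"]) not in baseline]
```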
Then we have the threat model which, as I said, is also a very important aspect. In our case, we took many different sources, for example threat reports, and we didn't stop at just one or a couple of them, because in order to have a complete picture we need to look at the different studies that have been brought forward, also to reduce the potential bias introduced by a single vendor and its reports. And the result, partially visible on the right, is a ranking of the most used TTPs inside the threat model, and so at least an indication of what the first objectives to tackle could be.
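A minimal sketch of that ranking step, assuming the TTPs have already been extracted from each report: count how many independent sources mention each technique, so that no single vendor dominates the picture. The report names and technique IDs below are purely illustrative.

```python
from collections import Counter

# Illustrative only: TTPs attributed to relevant actors, one list per report.
reports = {
    "vendor_a": ["T1105", "T1197", "T1059"],
    "vendor_b": ["T1105", "T1059", "T1547"],
    "vendor_c": ["T1105", "T1197"],
}

# Rank by the number of independent sources citing each technique.
ranking = Counter(ttp for ttps in reports.values() for ttp in set(ttps))
for ttp, mentions in ranking.most_common():
    print(f"{ttp}: cited by {mentions} of {len(reports)} reports")
```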
And then we have the testing environment that we developed for this purpose. We produced three virtual machines. The first one is linked directly to the EDR, so it runs an EDR agent, in order to let us see what the visibility is, for example, over some mimicked attacks and techniques on this machine. For this reason these machines are called detonation machines, since we detonate specific behaviors on them. Then we have another machine, the SIEM one, that does pretty much the same exact thing as the EDR one, but with the SIEM agent. And then we have the log collector, which reaches those machines. They are of course placed in a defensive network segment so that they do not spread malicious content all over the organization's network, and the collector also supports possible postmortem analysis, if we want, for example, to view the event logs. In particular, it is fundamental that these machines, the detonation machines at least, mimic exactly the most common environment present inside your organization: in this case the end user device, which is one of the devices most exposed to attackers and the one most probably going to be infected eventually down the line.
So, the deliverables that can be retrieved after applying such a methodology are, in our case, many. We have playbooks, in the sense that once you have studied a TTP and mastered it, you of course also know what to check, how it can be circumvented, and what to actually observe, so you can create a simple guide for the analysts. But also, as we said, we have the mitigation of the gaps, through for example network changes, new security event logs and new log sources. And, as before, we have a new testing environment put in place, also usable for other possible activities, like the simple detonation of a malware sample. But we also have threat discoveries: for example, we could spot anomalies already present inside the organization. And of course we are developing these detection analytics also for possible future attacks against the organization.
Now I will present a very simple case, just to show what we actually did. We took two different techniques that share a common tool, in this case bitsadmin: we are talking about Ingress Tool Transfer and Persistence through System Binary Proxy Execution. We started by understanding what bitsadmin is and by developing the abstract analytics: what do we actually want to detect, what would we like to see in order to trigger an alert. Then we have the data dictionary, that is, the sources that can tell us what on an endpoint can trigger us, can alert us. Here we have either Event ID 4688, the creation of a new process, or Sysmon Event ID 1. However, in our case the organization did not deploy Sysmon, and so we had to work with Event ID 4688 only. In that case we also had, for example, to fix the formatting and parsing inside the SIEM in order for the queries to work properly. After these phases, as we said, we go directly to the creation of the rules, understanding what the baseline is and applying whitelisting accordingly.
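As an illustration, here is the abstract analytic expressed as a minimal, SIEM-agnostic Python sketch over parsed Event ID 4688 records; a real deployment would express the same logic in the SIEM's own query language. The field names and the whitelist entry are assumptions.

```python
import re

# Suspicious bitsadmin switches: downloading a remote file (ingress tool
# transfer) or attaching a command to a BITS job (persistence).
SUSPICIOUS = re.compile(r"/(transfer|download|addfile|setnotifycmdline)",
                        re.IGNORECASE)
WHITELIST = ("inventory_sync",)  # baselined benign jobs (illustrative)

def matches(event: dict) -> bool:
    """event: a parsed 4688 record with NewProcessName and CommandLine fields."""
    proc = event.get("NewProcessName", "").lower()
    cmd = event.get("CommandLine", "")
    if not proc.endswith("bitsadmin.exe"):
        return False
    if any(benign in cmd for benign in WHITELIST):
        return False  # explained by the baseline
    return bool(SUSPICIOUS.search(cmd))
```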
And then we have the testing phase. Here, as we can see, we used Atomic Red Team, a framework that lends itself perfectly to this activity, since it can replicate specific TTPs, and in our case we reproduced them. And as you can see in the following slide, the tests demonstrated that neither the EDR nor the SIEM was capable of detecting such activities; instead, with the application of the rules that we showed previously, of course translated into the appropriate query language, the activities were in fact detected.
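Atomic Red Team itself is driven from PowerShell (its entry point is Invoke-AtomicTest), but as a rough Python equivalent of the kind of behavior it detonates for the bitsadmin case, a lab-only sketch could look like this. The URL and paths are placeholders; run anything like this only on an isolated detonation machine.

```python
import subprocess

# Lab-only detonation, comparable to an Atomic Red Team bitsadmin test for
# ingress tool transfer: start a BITS download so that EDR/SIEM visibility
# and the new rule can be verified. URL and destination are placeholders.
def detonate_bits_download(url: str, dest: str) -> None:
    subprocess.run(
        ["bitsadmin", "/transfer", "huntTest", "/download",
         "/priority", "normal", url, dest],
        check=True,
    )

# detonate_bits_download("http://lab-host.local/file.txt", r"C:\Temp\file.txt")
```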
So, as we can see here on this slide, we have actual results. We have an alert from the EDR on the left, and the results of the query on the right, the query that then feeds the alert itself. And so we can see how this approach actually increased the visibility, the efficiency and the precision of the detection capabilities of the organization's defensive systems. And after such an application, after this rule, this scenario, what's next? What can be done?
So, first of all, techniques are not set in stone. As we know, every day new vulnerabilities are discovered, so it's possible, for example, that a technique that you have tackled has been updated. Then you need to rework what you have already done in the past and update it in order to be efficient again. After this, we can of course move on to the next TTP down the line, in order to eventually, and hopefully, finish the list that we have produced. And we could even go directly and tackle APTs instead of TTPs, APTs meaning adversaries and attackers, that is, tackling specifically their kill chain, their way of moving, even though I don't suggest taking that route before having at least a mature enough model, or defensive infrastructure, behind you.
So that was all from me for today. I hope you liked the talk and that you brought something home. And if you have any kind of question, you can always reach me at the contacts that I'm showing you. Thanks again, and have a nice conference.