Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello and welcome to this video. It's a pleasure to see you here, and today I want to talk with you about supply chain security: what are the key points, what are the tools you can use, how you can protect yourself, and what you can do to integrate all this into your CI or into your software development process. By the way, my name is Sven Ruppert. I'm from Germany, and I'm a developer advocate for JFrog.
And as you can see, I'm here in the woods, in the Swedish woods. I'm here for a hike of, let's say, five to six days, and I will take you on two journeys: first of all the journey here through the Swedish woods, and the second one through the supply chain security topic. But now it's time to start, so let's go.
Okay, let's start with the topic: supply chain security. Supply chain security is a very, very big topic, and it includes not only software development. It includes everything: what materials you need, what the processes are, who's delivering what, third-party components, and so on. So the whole supply chain is
under attack right now. And we have different things that have been evolving around supply chain security over the last one or two years. One thing is that geopolitics is more and more influencing these attacks against the supply chain. In earlier days you were attacked just because you were a big company and someone wanted money from you. Now you are a target because you are a political target. And this is changing the world dramatically.
Because instead of individuals, you now have groups or even governments attacking you. And if you are inside the supply chain of one of these big companies that other governments see as unfriendly, it could be that a government is now attacking you. The same with ransomware. A long time ago, ransomware was mostly used to encrypt all your stuff, and then you could pay money to get it back. So the attackers did it just to earn money. But right now we can see that ransomware attacks are more and more politically oriented. That means the attackers never want money; they just want to infect your system. They want to encrypt it, and that's it.
So we have different things coming up more and more. And if we now consider that the big companies are spending large amounts of money, resources, and intellectual property to harden their own systems, it means that the attackers are now moving along the supply chain. The pressure is not on the final targets anymore; it goes along the supply chain. That means even if you are a small or medium-sized business with, let's say, 20 people, delivering to one of these core companies, then you are under attack, because you are the weakest point inside the supply chain. But now we will have a look at the part of the supply chain that is important for us: the software development part. Now we want to talk a little bit about the software side of supply chain security. That means we are just looking at what is going on during the time we are coding and working.
So I'm stealing now a little bit from the project SLSA. SLSA is an open source project from the Linux Foundation; I will explain it a little bit later. But I have here this picture, which I'm putting somewhere on the screen, and it describes a typical software supply chain for software developers.
So you have, for example, common attacks against it. If you have source code, make sure that you always have code review, so that nobody can sneak in malicious code. Then this code goes to the source code version system, a Git server or something like this. How to harden this? Well, you have to make sure that an administrator closes all ports, maintains it, and that administrative rights are limited to the people who need them. That's a good thing.
Then the source code is grabbed by the CI environment. What could happen here? Instead of compromising the source code on the server, the build infrastructure itself could be compromised. Or you can have something like fetching the original source code and overlaying it with external source code; that is a very common attack. Then the build infrastructure itself: what can you do? You have to harden it. Again, it's part of the administrative things, and so on. And then the binaries are pushed to some kind of repository. So we are going along from source threats and build threats to dependency threats inside this repository, where all the dependencies or the created binaries are. They will be delivered to production. Again, this is a piece of software, and here we need support from administrators and so on.
So one thing is that all these components should communicate with each other in some kind of zero trust environment, to make sure that nobody can sneak in between them. On the other side, we now have two things left if we focus on what we, as software developers, can do: we have the source code we are creating, which moves along the pipeline and is transformed into binaries; and then we have the binaries we are creating plus all the dependencies. So this is more or less what we have in our hands: source code and binaries. All the rest is infrastructure and is secured by infrastructure-related topics. Now I want to talk a little bit about the project where I got this graphic from. It's the project SLSA, short for Supply-chain Levels for Software Artifacts, and it is part of the Linux Foundation. It's an open source project and
it's a documentation project. So there is no implementation or whatever.
It's really a documentation project. And this documentation project is organized by individual security experts, and some companies are contributing as well. But the main thing is that they want to collect all the best practices around supply chain security and make them accessible for people who are not coming directly from the security area, so that you can start reading. So we have
two main topics in this project. The first topic is the levels. The levels from zero to four describe how you can improve your supply chain step by step: starting to use a CI environment, using audits, and so on. If you want to know more about these levels and the project SLSA, I have a dedicated YouTube video on my channel, in German as well as in English, where I go through all these details.
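As a rough orientation, the levels can be written down as data. The wording below is my condensed paraphrase of the SLSA (v0.1-era) level requirements, not the official text; see slsa.dev for the authoritative definitions:

```python
# A rough, paraphrased summary of the SLSA v0.1 levels (condensed, unofficial).
SLSA_LEVELS = {
    0: "No guarantees: no provenance, ad-hoc builds.",
    1: "Build process is fully scripted/automated and produces provenance.",
    2: "Version control plus a hosted build service that generates "
       "authenticated provenance.",
    3: "Source and build platforms meet hardening requirements; provenance "
       "is resistant to tampering.",
    4: "Two-person review of all changes and hermetic, reproducible builds.",
}

def next_step(current_level: int) -> str:
    """Calibration aid: tell a team what the next level up asks for."""
    if current_level >= max(SLSA_LEVELS):
        return "Already at the highest defined level."
    return SLSA_LEVELS[current_level + 1]
```

That is exactly the "calibrate yourself" use case: look up where you are, then read what the next level would demand.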
But for you it's important to know that there are these different levels; then you can calibrate where you are and what would be the next logical or good step to increase your security. The other
part of this project is the documentation about the common attacks: source threats, build threats, dependency threats, and so on. There you will find a description of how each attack works, prominent examples of it, what happened, what you can do against it, and so on. So you have these levels and you have the descriptions of the attacks themselves, and altogether it is very good documentation for hardening your own environment, usable as some kind of guideline. The project itself is currently in an alpha state; we are talking here about mid-2022. But even in this alpha state the project is really, really good. So I really recommend having a look there
and trying it out. We spoke about the project SLSA, but there is an implementation as well; it's a Linux Foundation project too. And this is the project Pyrsia. Pyrsia is in an alpha state as well. JFrog started this decentralized package management, and it covers exactly the stage from where we start building until we deliver the binary. What's excluded is the security of the source code itself: it does not help you to have code reviews or non-compromised source code, but it will start helping you with building this stuff and distributing it.
So we have different things here. This project Pyrsia is more or less a decentralized package management system for Maven, Docker, and so on. So what we have is this build layer: we are pushing to one of those peer-to-peer nodes the URL where the open source code is, plus a commit reference. Then different nodes will grab this source code and start building it locally. After the build is done, they start sharing information about how big the binary is, the fingerprints, and all this stuff. And if they all have the same result, you can imagine that in a peer-to-peer system it's not easy to compromise exactly the nodes that were randomly selected for this build. So the binary itself is safe. After this, the binary will be transported to the distribution layer, and the distribution layer is the peer-to-peer network where you're requesting exactly this binary. So we have different properties here: for example, externally referenced binaries are not as secure as binaries built inside the Pyrsia network; there are gateways to Docker Hub and Maven Central as authorized nodes; and so on. So it's a bigger project, just getting started right now.
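The fingerprint comparison between independent build nodes can be sketched in a few lines of Python. This is my own illustration of the idea, not Pyrsia's actual protocol or API:

```python
import hashlib
from collections import Counter

def fingerprint(artifact):
    # Each node hashes the binary it built from the same source + commit.
    return hashlib.sha256(artifact).hexdigest()

def consensus(digests, quorum=1.0):
    """Accept a build result only if enough independent nodes agree.

    Returns the agreed digest, or None if the quorum is not reached."""
    digest, votes = Counter(digests).most_common(1)[0]
    return digest if votes / len(digests) >= quorum else None

# Three honest nodes, one compromised node producing a different binary:
honest = fingerprint(b"compiled-artifact")
evil = fingerprint(b"compiled-artifact-with-backdoor")
assert consensus([honest, honest, honest]) == honest
assert consensus([honest, honest, evil]) is None       # full agreement required
assert consensus([honest, honest, evil], quorum=0.6) == honest
```

The point is that an attacker would have to compromise exactly the randomly selected nodes, and enough of them, to push a fake binary through.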
The first reference implementations are done, so have a look at pyrsia.io. And if you want to know more about it: on my YouTube channel you will find a video exactly about this project. Okay, now we want
to talk a little bit about the four main pillars of security testing and protection. And this is something where we talk about SAST, DAST, IAST, and RASP. I want to start with the topic SAST: static application security testing. That means you start scanning all parts of your application, binaries, configuration, source code, and all this, but the application is not running. So you can start with SAST immediately; with the first line of code you can analyze it. What you're missing is the dynamic context, because you're looking at the static semantics. But you can scan 100% of all parts; you need access to all components, otherwise you can't scan them, but you can scan immediately from the first step. The next one, if we are not in the static application security
context, is the dynamic one. Dynamic application security testing (DAST) means that we are now able to run the application. So we need to have something running: it could be a test system, it could be integration, staging, whatever. But when we are running it, we are looking from outside, like a hacker. This testing is mostly done with the most common vulnerabilities, like SQL injection, where you try to hack in from outside. Here you don't need knowledge about the internal technology; you just try to hack the system over the API from outside.
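The "attack from outside" idea can be sketched as a toy probe: send well-known SQL injection payloads at an input and look for leaked database error signatures in the reply. The payloads, signatures, and `send_request` hook below are illustrative inventions, not a real DAST tool:

```python
# Toy sketch of black-box (DAST) probing. Real tools are far more
# sophisticated; these payloads and signatures are classic examples only.
SQLI_PAYLOADS = ["'", "' OR '1'='1", "'; DROP TABLE users; --"]

DB_ERROR_SIGNATURES = [
    "sql syntax",                   # MySQL-style error fragment
    "sqlite3.operationalerror",     # SQLite exception leaking through
    "unterminated quoted string",   # PostgreSQL parser error
]

def looks_injectable(response_body):
    """Heuristic: a leaked database error suggests unsanitized input."""
    body = response_body.lower()
    return any(sig in body for sig in DB_ERROR_SIGNATURES)

def probe(send_request):
    """Run every payload through `send_request(payload) -> body` and
    collect the payloads that triggered a suspicious response."""
    return [p for p in SQLI_PAYLOADS if looks_injectable(send_request(p))]
```

In a real run, `send_request` would be an HTTP call against a test or staging system; note that the probe only sees what the API exposes, which is exactly the limitation discussed next.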
The good thing is that you don't need to know the internal technology. The bad thing is that you are not able to test 100% of the system; you can only test, indirectly, most parts of the system. Mostly it turns out that you are not able to scan everything, because you can only scan what's reachable via the API. And then we have the need that something is running, and you must be able to reach it: mostly you have cloud-based providers, or you must be able to provide this attack infrastructure yourself.
So this is dynamic application security testing. How to combine both, or how to get rid of the weak parts of the different testing mechanisms? This is called IAST: interactive application security testing. And this means that you try to do both. You can see it like security debugging: you are ramping up the application, you are attacking it from outside, you're analyzing inside the application what's going on, and then you can start modifying the attack vector. You need knowledge about the technology, the attack vectors, how to do this, the tool stack, and so on. The big challenge here is that for SAST and DAST you can just buy tools, but for IAST you need to know a lot of stuff yourself. So this is mostly not for beginners. It's something where you need really trained and skilled people from the security environment, so that what they are doing is useful. Otherwise you are testing something, you think it's secure, and it's not. Okay, so IAST is the combination of SAST and DAST, but you need highly skilled people to do it yourself. Now we are coming to the field where we are talking about
runtime protection. It's called RASP: runtime application security protection. And as you can see, it's not testing anymore, it's protection. And this protection means that we can only do it in the production environment; you're not able to do it in staging or testing environments. You need production. What's mostly going on here is that you have an agent. This agent instruments the application itself, like a performance tool does, and tries to analyze what's going on inside the system: do we have suspicious or malicious activities, and so on. And with machine learning, in real time, you try to identify whether there is an ongoing attack, and if so, you have two choices: shutting down the system automatically, or alerting. So that's RASP. Now, having SAST, DAST, IAST, and RASP, where should you focus? RASP is just the last line of defense; it makes no sense to trust only RASP, so it's an additional thing. IAST needs very highly skilled and trained people, so mostly it's not the right starting point. And if you want to start with DAST, you need something that's already running, so it isn't a good starting point either.
But the best thing, what I would recommend, is to start with tooling for static application security testing, because with SAST you can start immediately with the first line of code. So, scanning all this stuff: you can scan different things, you can scan your source code, you can scan configurations, you can scan third-party components. And where should you start scanning to get the most juice? If you look at how many lines of code you're writing and how many lines of code you are adding via dependencies, in most project environments it turns out that even if you have hundreds of thousands of lines of self-written code, you will have millions of lines inside your dependencies. So inside the application you have your own code and a huge amount of dependencies, and the dependencies are bigger than the source code you are writing. For the operating system, you're just writing a few lines of configuration; the rest is dependencies. A Docker image even starts with a FROM statement, the same goes for Helm charts, and so on. And don't forget the binaries of your CI environment, of your dev tools, and so on. The whole dev stack is a bunch of binaries as well. So what should you focus on?
Binaries, especially open source binaries, because these are the low-hanging fruits. So the big question is: if we should focus on binaries, where is the best place to store them? And it makes sense to have all binaries in one central place, because then you can use exactly the same scanning mechanism for all layers.
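The control you gain from a single gateway can be pictured as one policy function sitting in front of every download, whatever the package type. The blocked packages and the CVSS threshold below are invented for illustration; this is not Artifactory's actual policy engine:

```python
# Sketch of why a single binary gateway helps: one policy function can gate
# every download, regardless of package type. Names and rules are invented.
BLOCKED = {("maven", "log4j:log4j"), ("npm", "event-stream")}
MAX_CVSS_ALLOWED = 7.0

def may_fetch(pkg_type, name, known_cvss=None):
    """Decide whether the gateway should serve this package at all."""
    if (pkg_type, name) in BLOCKED:
        return False                      # curated deny-list hit
    if known_cvss is not None and known_cvss >= MAX_CVSS_ALLOWED:
        return False                      # known vulnerability too severe
    return True

assert may_fetch("maven", "org.slf4j:slf4j-api")
assert not may_fetch("maven", "log4j:log4j")
assert not may_fetch("docker", "some-base-image", known_cvss=9.8)
```

If builds fetch directly from Docker Hub or Maven Central instead, no such single choke point exists, which is exactly the problem described next.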
So with Artifactory we have all these different dependency managers: Maven and NuGet, but Debian as well as Docker, and so on. If you are just collecting binaries for your own project, that's fine, but why not collect, over the same gateway, the binaries for your network infrastructure, Linux servers, and so on? Because then you have one central place where all binaries are, and there you can apply security rules with security scanners; and you need a scanner that is aware of metadata. I will cover this in a few minutes. But the main thing is having one central place. And even if you're building stuff, don't grab it directly from Docker Hub or Maven Central, because then you have no control over what is allowed to be used or fetched, how many times, what version, and so on. So: a single place for binaries that all stuff comes through. Now the big question: scanning binaries. A lot of
companies say they are scanning binaries, but what is the big difference? The big difference is when you have the binary management and the knowledge of all the dependency managers, and then you connect that with a scanner. In our case it's Artifactory, where all the binaries and the metadata are, and then Xray is connected to Artifactory and scans it. If you're just scanning a single binary, you can identify what the binary is, but you are missing information: is it a dependency, are there transitive dependencies, do we have compile scope, test scope, runtime scope, dynamically linked, statically linked, and so on? A lot of metadata is missing if you are just looking at bare binaries. And this is a big difference to a lot of other solutions. And we can cover the whole tech stack: Maven, NuGet, Linux Debian repositories, Docker, and so on. So this is a big advantage: having not only the binary but also the metadata available. And this is exactly what we have with Artifactory and Xray. Next thing: talking about
shift left, or: what's the best place to scan? We now have the possibility to access the metadata, and we have the possibility to scan binaries. But where is a good place to scan? It could be, for example, the CI environment. The CI environment is a security gateway for everything. Shift left into the CI means that you are scanning with every build, with every run, and so on. That's necessary; it's a good place to scan for vulnerabilities, malicious packages, and all this stuff, and it is more or less a machine gateway. But is this exactly the place where you should start scanning? Is this the earliest place where it makes sense to scan for vulnerabilities and compliance issues? I'm not sure about it, because the CI environment is somewhere along the supply chain, somewhere along the path to production, and we have a bunch of other places where it makes sense to scan as well. I want to go a little bit into the history, and then you will understand why the CI environment should not be the only place where you start scanning for vulnerabilities and security issues.
It's called the SolarWinds hack. I'm not going into all the details of the SolarWinds hack, but the main thing is: this company creates the Orion platform and has around 300,000 customers, and every customer manages their network infrastructure with this tool. They have a CI environment to build their product, and they have an automatic update cycle, so that every new release is automatically pushed to all the customers. What happened? The attackers broke into the system, and they did not change or steal data or whatever. They went straight to the CI environment and modified it. And the CI environment then built, with every cycle, a compromised binary. This compromised binary was then delivered by the automatic update to all these customers. After this was done, it took just a few days, and they had infected over 15,000 customers. 15,000 customers means not 15,000 servers; no, it means 15,000 networks. And 15,000 infected networks is such a huge thing that even the US president, Mr. Biden, took it as a reason to say: okay, we have to change something. I will come back to this a little bit later, but what does it mean for us? For us, it means we
have two attack directions to worry about. We have to protect ourselves so that we are not consuming malicious or vulnerable stuff, and we have to make sure that we are not pushing this stuff out, so that we are not distributing it. And these two different directions we have to protect mean that we should be aware of all the security-related topics not only inside the CI environment; we have to go a few steps earlier and start there. But now let's talk a little bit about how this SolarWinds hack affected our way of developing. What happened after the SolarWinds hack?
So Mr. Biden signed this executive order on cybersecurity, and it means that every piece of software that is operated, run, owned, or otherwise used by the US government must fulfill the requirement of an SBOM. An SBOM is a software bill of materials: the full list of all ingredients that are used to build a binary. Practically, it means if you have a binary and all its dependencies, then you need a list of all dependencies with version, name, and fingerprint.
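That minimal "list of ingredients" can be sketched in a few lines. Real-world SBOMs use standardized formats such as CycloneDX or SPDX; the component names and bytes here are just examples:

```python
import hashlib
import json

def sbom_entry(name, version, artifact):
    # Name + version + a content fingerprint: the minimum per "ingredient".
    return {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(artifact).hexdigest(),
    }

# A toy SBOM for one binary with two (made-up) dependencies:
sbom = {
    "component": "my-service",
    "dependencies": [
        sbom_entry("jackson-databind", "2.13.3", b"...jar bytes..."),
        sbom_entry("slf4j-api", "1.7.36", b"...jar bytes..."),
    ],
}
print(json.dumps(sbom, indent=2))
```

The fingerprint matters because a name and version alone can be spoofed; the hash ties the entry to the exact bytes that were shipped.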
So that's the full list. We have known at JFrog for a long time that this is important, not only since the SolarWinds hack, and we call it build info. Build info is more or less a superset of an SBOM, because you can generate the SBOM out of it; the SBOM is just a part of it. You will find the build info inside the platform, next to the Artifactory repositories. For every binary you are creating, you're collecting information like environment variables, dependencies, date, time, agent name, all this stuff. This information is pushed and stored together with the binary inside Artifactory, so you have the whole creation context for this binary. And that means you later have post-mortem analysis possibilities, so that you can identify: oh, all builds on agent three are somehow different. You can have diffs and so on. The good thing is that
not only the immutable information is there; you have the current vulnerability information as well. So if you built a binary yesterday and this binary has passed to production, and tomorrow you learn there is a vulnerability because the vulnerability database just had an update, then you can see there: oh, here's a vulnerability, it's affecting us, this binary is used in this Helm chart, in this production environment, whatever, and all without scanning production. You have this information, and you never build binaries twice. This is the build info, a superset of the SBOM. Now, if the CI environment is not the earliest possible place to scan, what should be the next one? What I can recommend is, for example: use your IDE. Inside your IDE you have the possibility to scan all dependencies as well, and this is done by the IDE plugin.
The IDE plugin itself is free from JFrog, and you can integrate it into IntelliJ, VS Code, Eclipse, and so on. Then you connect this IDE plugin to your Artifactory and Xray instance. What happens is: if you are, for example, adding the first dependency to your pom file, this information will be transferred to Xray, and Xray will give you the whole dependency tree together with the vulnerability information. So straight after adding a dependency, you will have all the knowledge about transitive dependencies, vulnerabilities, possible fixed versions, and so on. And this information will include, for example, references to CVEs and all the other stuff that you need to decide if this one is critical for you. But this is way earlier than a complaining CI environment.
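What such a metadata-aware scanner does can be pictured as a walk over the transitive dependency tree against a vulnerability database. The tree, the versions, and the CVE placeholder below are all invented for illustration:

```python
# Toy picture of metadata-aware scanning: walk the (transitive) dependency
# tree and flag every coordinate found in a vulnerability DB.
DEP_TREE = {
    "my-app:1.0": ["spring-web:5.3.0", "commons-io:2.11.0"],
    "spring-web:5.3.0": ["jackson-databind:2.9.0"],
    "commons-io:2.11.0": [],
    "jackson-databind:2.9.0": [],
}
VULN_DB = {"jackson-databind:2.9.0": ["CVE-XXXX-YYYY (deserialization)"]}

def flag_vulnerable(root):
    """Depth-first walk over transitive dependencies, collecting DB hits."""
    hits, stack, seen = {}, [root], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in VULN_DB:
            hits[node] = VULN_DB[node]
        stack.extend(DEP_TREE.get(node, []))
    return hits
```

Note the hit is two levels down: you never declared `jackson-databind` yourself, which is why the transitive view matters so much.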
Now we have this IDE plugin, but there are several other ways to get to this information as well, and one is the command line interface. Even if you haven't started an IDE so far, you can use the command line interface to scan the project. So you clone a Git repository, change into this repository, and start a JFrog audit. Then you will get a list of all vulnerability findings. The scan is based on the project infrastructure: if there's a pom.xml, it will give you all the Maven things, or it will detect some other dependency manager. So the main thing is that you can now scan projects on the command line. But on the other side, you can scan binaries as well. If you have a locally created Docker image, one thing you can do is use the Docker plugin or Docker Desktop plugin, so that you're scanning the self-created Docker image there. Or you can export this Docker image, so that you have the tar archive you created, and then scan that one. You can do it with JARs, with Docker images, with whatever. So you can scan this, and then you will get the full list of all vulnerabilities. And this report you can push into
the section for on-demand scanning. On-demand scanning means that this scan result will be stored inside Artifactory, and you can share this information with your colleagues, so that you can work together on improving this Docker image and so on. The good thing here is that you can do all this without polluting the CI environment or the repositories, because you're grabbing the stuff, building it, scanning it locally, and pushing the report to this central place, where you can keep it as an audit reference. Then you can work with full flexibility on what you need, before you're using resources inside the CI environment or stuff is bleeding into the repositories you want to keep clean. So CLI and on-demand scanning are cool stuff, because you can do it straight during your prototyping. If you're scanning for vulnerabilities, you will find
one thing quite fast and quite often, and that is the CVSS score. I'm not going into all the details of what a CVSS score is, because on my YouTube channel you will find different videos about the CVSS metrics and how to use the CVSS calculator to adjust the score, with the environmental metrics, to your environment. But the CVSS values are something you should be aware of. And inside the JFrog platform you will find not only the CVSS values, CVSS 2 and 3.1; you will find the base metrics as well. So there are two things you should check. First, if there's a CVE for the vulnerability, check that you get additional information: if you click on this vulnerability inside the platform, you will get mitigation and remediation information, references, all this stuff. And you can use the CVE to search, for example in Google, for additional information. On the other side, you will get the CVSS value and the base metrics. The base metrics, in short, are more or less the worst-case description of this vulnerability. And there you can see whether this is important
for you. For example, maybe there is a high CVSS value because the vulnerability can be exploited over a public network, but your system is air-gapped; then it cannot reach you. So you should start reading these base metrics and then transform them, with the environmental metrics, to your environment. I have a dedicated talk about using the CVSS calculator with the environmental metrics to scale the CVSS values for your needs. It's a little bit too much for this talk, but keep in mind that you have all this information. So wherever you're scanning: have a look at the CVSS score, have a look at whether there's a CVE number to get more information, and decide whether this vulnerability is critical for you, yes or no.
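One concrete, standardized piece of this: CVSS v3.1 defines qualitative severity ratings for score ranges. A small helper to bucket scores (the boundaries follow the CVSS v3.1 specification's rating scale; the example scores are illustrative):

```python
def cvss_severity(score):
    """Map a CVSS v3.x score to its qualitative rating,
    per the rating scale in the CVSS v3.1 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

assert cvss_severity(9.8) == "Critical"
assert cvss_severity(5.3) == "Medium"
```

Remember, though, that this rating reflects the base (worst-case) metrics; the environmental metrics mentioned above can move a vulnerability up or down for your specific setup.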
Okay. Finally, I found my place for tonight. Perfect. It was a long hike, definitely a long hike here in the Swedish woods. But I found this half island, and there's a perfect place to swim a bit. And there I will build up my camp for tonight, and that's it. We had a good talk about DevSecOps, about SAST, DAST, IAST, RASP, and all this stuff about tooling and what you can do. And by the way, if you want to try things out, you have different possibilities. First of all, if you want to contact me, use Twitter or LinkedIn; don't use email, it's a disaster. If you want more like this, check out my YouTube channels. I have a German one, I have an English one; make sure that you are selecting the right one for you, and then you will find a bunch of this IT-related cybersecurity stuff. Otherwise: webinars and workshops. If you want hands-on sessions, they are for free. Go to JFrog under webinars, and then you can find a webinar or register for a workshop, and then, web-based, you can get hands-on with this cybersecurity stuff. Otherwise, whatever day and time you're watching this: have a nice day, enjoy the rest of your day, relax, whatever is necessary. I will do the same. For me, the day is done; that's the last recording for today. I will place my stuff over there and enjoy the sunset. And that's it for today. Stay safe, and see you.