Abstract
What is OWASP PurpleTeam?
PurpleTeam is a Developer focussed security regression testing CLI and SaaS targeting Web applications and APIs.
The CLI is specifically targeted at sitting within your build pipelines but can also be run manually.
The SaaS that does the security testing of your applications and APIs can be deployed anywhere.
Kim will briefly discuss the four year journey that has brought PurpleTeam from a proof of concept (PoC) to a production-ready, Developer-first security regression testing CLI and SaaS.
He will give an overview of the NodeJS micro-services, whose many features allow a Build User (DevSecOps practitioner) to customise their Test Runs without having to write any tests, simply by configuring a Job file; provide multiple options for dealing with false/true positives; and allow alert thresholds to be set in multiple places and for multiple testers (app-tester, tls-tester, server-tester), letting the Build User define what constitutes a successful or failed Test Run.
Why would I want it in my build pipelines?
In this section Kim will discuss the problems that PurpleTeam solves, such as training the Build User with advice and tips on security defects as they fix the defects that PurpleTeam highlights.
As well as the huge cost savings of finding and fixing your application and infrastructure security defects early (as you’re introducing them) as opposed to late (weeks or months later with external penetration testing) or not at all.
OK, I want it, how do we/I set it up?
Kim will walk you through all of the components and how to get them set up and configured.
Great, but what do the workflows look like and how do I use it?
Let’s walk through the different ways PurpleTeam can be run and utilised, such as:
- Running purpleteam stand-alone (with UI)
- Running purpleteam from within your pipelines as a spawned sub process (headless: without UI)
- Running all of the PurpleTeam components, including debugging each and every one of them if and when the need arises
Transcript
This transcript was autogenerated. To make changes, submit a PR.
PurpleTeam is a security regression testing CLI (that's the front end) and software as a service backend targeting web applications and APIs, specifically built for developers.
The CLI is specifically targeted at sitting within your build pipelines, as in you're running it headless, but it can also be run manually with a character user interface. You can choose from either local and/or cloud environments. If you go with the local environment, you'll need to set up both the front end (that's the CLI) and the backend microservices. If you choose the cloud, all you need to do is get the CLI onto the system you want to run it on, configure it, and create a job file.
So it's been a four year journey that has brought PurpleTeam from a proof of concept to where it is now. I finished writing a book series to help developers upskill their security, and I ran lots of workshops with the proof of concept from that book, Holistic InfoSec for Web Developers, to elicit developer feedback and confirm that what I wrote was actually true.
Now, these are the books I've written. You'll notice the Holistic InfoSec for Web Developers series there. Now, this is the actual website for it. You can read these books freely online, or you can decide to buy them if you really like them. So the proof of concept that PurpleTeam was born out of was in this first volume. Most of the time it's been seven days a week, and it's been two full time jobs.
I donated PurpleTeam local to OWASP in quarter one of 2021. PurpleTeam Cloud went to market in Q4 of 2021. I couldn't get the publicity that was needed to get enough customers on board to make Cloud financially viable. Plus, family relationships were getting strained. So I donated Cloud to OWASP in Q1 2022.
So now the community gets to reap the benefits of a production ready, hosted security regression testing software as a service that you can plug into your build pipelines to continuously test your web applications and APIs.
Building a tool that helps developers write secure code is a great way to
learn about security. If you want to learn more about information
security, we can assign you a mentor and you can help yourself
and the community by building PurpleTeam with us.
Now, this is a high level architectural overview of the PurpleTeam solution in the cloud. This is the same, but run locally in the local environment. So what you've got here is the backend components: you've got the orchestrator, which takes the requests from the CLI and sends feedback back to the CLI too, on what's happening with the testers. Now, we've got an application tester, we've got a server tester, and we've got a TLS tester. This is your system under test. This is the Internet.
Now, what happens when you send a job file through from the CLI is that the orchestrator takes it and sends it to the testers. Now, with our server scanner and TLS scanner, the emissaries sit within their containers; they're embedded. The server scanner is not yet implemented, but it's going to be soon, and we're going to be using Nikto. The TLS scanner is all up and running, and it's been in production for quite a few months now. For the application scanner, based on the number of test sessions that are sent through in the job file from the CLI to the orchestrator, we spin up that number of emissary sets. These are your stage two containers; these are your stage one containers. We spin up n number of Zap and Selenium containers. And the way this is done is that the application tester talks to, in the local environment, SAM CLI; in the cloud, it's just Lambda. We've got our Lambda functions there, which spin up the Zap and Selenium containers using docker-compose-ui; in the cloud, it's just ECS.
Now, as feedback comes back from the emissaries, the testers are responsible for grooming that feedback and sending it through Redis, and the orchestrator subscribes to the Redis channels. Now, if we're using long polling between the CLI and the orchestrator, the orchestrator pushes those messages onto a list, so that when the CLI long poll requests come in, the orchestrator can pop them off the list and send them back in batches. If we're using server sent events, then as soon as the messages come in from Redis, they are sent back in real time to the CLI. For all intents and purposes, it doesn't make a lot of difference which one of these you use; there are small pros and cons to each, and it's configurable in the config for the orchestrator.
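As a rough sketch of what the server sent events path can look like (illustrative only: the channel name, event name, and use of the node-redis client are my assumptions, not the orchestrator's actual code):

```js
// Illustrative sketch: forward tester feedback from a Redis pub/sub channel
// to the CLI as server-sent events. Channel and event names are hypothetical.
const { createClient } = require('redis');

async function testerFeedbackSse(req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  });
  const subscriber = createClient();
  await subscriber.connect();
  // Each message groomed by a tester is pushed to the CLI as soon as it arrives.
  await subscriber.subscribe('app-tester-feedback', (message) => {
    res.write(`event: testerProgress\ndata: ${message}\n\n`);
  });
  // With long polling you would instead RPUSH messages onto a Redis list and
  // LPOP them off in batches when each long-poll request from the CLI arrives.
  req.on('close', () => { subscriber.quit(); });
}
```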
Once the application testers have finished their testing, they talk to SAM CLI again (or Lambda if it's in AWS), which runs another Lambda function to bring these containers down.
How do pipelines help us as developers?
How do pipelines help us as a business that creates software? And why would I want PurpleTeam in my build pipelines?
To answer these questions, I'm going to take you back to a section that's been in a number of my previous talks: how have we found bugs in software traditionally? Basically, we haven't really, or we've done it really late. So every team has a week or two to find all the defects we've been conscientiously adding for months. It's approximately $20,000 per week, and the engagement is generally two weeks for a small to medium web application or API, before release. The software project itself runs about six months for the same size project, and it's about $40,000 for the two week engagement, per project.
So generally, five criticals, ten highs, ten mediums, and ten low severity bugs are going to be found, and many bugs are left waiting to be exploited. The business decides to only fix the five criticals, and each bug now has an average cost of 15-plus times what it would have cost to fix if it had been found and fixed when it was introduced. So that's five bugs, times 15, times $320 (which is approximately two developer hours of cost), which is $24,000.
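Spelling that arithmetic out with the figures just quoted:

$$ \underbrace{5}_{\text{criticals}} \times \underbrace{15}_{\text{cost multiplier}} \times \underbrace{\$320}_{\approx\,2\ \text{developer hours}} = \$24{,}000 $$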
So the bottom line: for a six month project, you're going to have a two week red teaming engagement that'll cost you about $40,000. Now, only five red team bugs are going to be fixed, at a cost of $24,000. So the bottom line: this is too expensive, it's too late, and too many bugs are left unfixed because it's now so late in the software development lifecycle. And each bug now costs 15-plus times what it would have cost if it had been found and fixed when it was introduced. Now,
these are based on statistics that I documented in Holistic InfoSec for Web Developers (that's my book), taken from various sources.
Instead of deferring the finding and fixing of security defects to a traditional red teaming exercise, PurpleTeam helps us find and fix our defects as we're creating them. But how, you might ask? Well, PurpleTeam runs against our web applications and APIs as we're creating them, informing us of security defects that we're introducing in close to real time. PurpleTeam reports show us how to reproduce the attacks that were found, and they also provide tips on how not to introduce the same types of vulnerabilities again.
So now we know we need PurpleTeam.
So for the setup, we're going to be looking at local and cloud. For the local setup, we get to the setup page and we're going to work through these components. So again, we've got the high level architectural overview there of the local environment. So, we need to set up a Docker network; these are the details on how to do that. And you're going to need a system under test: this is the system that you want to be security testing. If you don't have one at the moment and you're just wanting to take PurpleTeam for a spin to see how it goes, then we'd suggest using something like OWASP NodeGoat. Now, we use this as well, and we've got a docker compose override that you can use to help you get going. And there's also the PurpleTeam infrastructure as code system under test project, which basically brings up and brings down NodeGoat very quickly, with a clean slate each time.
So, you need to set up your Lambda functions; the details are on the readme page here. Stage two containers: the details are in the readme for the stage two containers there. These are all in GitHub, the orchestrator as well. For the orchestrator, there are a couple of environment variables that you need to set up, and you need to apply some permissions to directories. If you've got a firewall running, you'll need to set up some firewall rules, and you'll need host IP forwarding turned on; there are some details there around that. Now, for the testers: we've got the application scanner, we've got the TLS scanner, and the server scanner, which is not yet implemented. The details for setting these up are in their readmes. Same for the PurpleTeam CLI, which I'm going to show you now.
So if you're using the cloud environment,
then all you need to do is set up the CLI. You won't need to
set up any of those backend components because they're all done for you.
So, the CLI install options: there are three options here. The first option is good if you're planning on running or debugging purpleteam stand-alone with a UI. This next option is good if you're planning on running or debugging purpleteam as a spawned NodeJS process, for example from your NodeJS build pipelines; so yes, you can run it and debug it that way as well (see the sketch below). And this npm install globally option is good if you're planning on running or debugging purpleteam from a build pipeline written in a different language. And you need to configure the CLI; those are the configuring details there.
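For that spawned NodeJS process option, a minimal sketch looks something like this (the purpleteam binary name and the test argument are my assumptions; check the CLI readme for the exact invocation and exit code semantics):

```js
// Minimal sketch: run the PurpleTeam CLI headless as a spawned sub-process
// from a NodeJS build pipeline. Binary name and args are assumptions.
const { spawn } = require('node:child_process');

const testRun = spawn('purpleteam', ['test'], { stdio: 'inherit' });

testRun.on('exit', (code) => {
  // Gate the pipeline on the Test Run outcome via the exit code.
  process.exitCode = code ?? 1;
});
```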
And this is where you work out, or define, whether you want to run headless or with a CUI (character user interface). And there are some details around the job file there, which is also found in the documentation on the purpleteam-labs.com website; there's a lot more information there.
And these are just the run sections: how you actually go about running the CLI. These are the sections in that readme, just a little bit further down than what I just showed you. So for those three options, whichever one you picked to install, there's an associated run option that you can use to work out how you're going to run the CLI. It's pretty simple.
If you're running in the cloud, then job done: all you have to do is have that CLI on your system and configured, and have a job file ready to feed it.
If you're using local, then there's a little bit more to know about running the back end components.
So, the back end components workflow: this is the documentation for it; it steps you through these sections.
There are some details here around testing the Lambdas themselves. Generally you won't need to do that, just if you are contributing to the PurpleTeam project and debugging Lambdas; you probably won't need to know that unless you're actually contributing.
Yeah, so this is the debugging section, the main debugging section. This is details on how to debug the application scanner and its sub-processes; that's the sub-processes that spin up the stage two containers, which have Cucumber in them as well. And it's the same for the other testers. The orchestrator is the same, just that there's no sub-process. And for the front end, this is how you run the front end. There are also some details around this in the readme of the PurpleTeam CLI that we've already looked at, but there's just a little bit more detail here.
Full system run: these are the actual steps for the full system run. So this is starting your CLI and having all the backend components set up and running, ready to accept requests. I'm going to walk through this soon.
Hi. Today I'm going to show you a test run with the backend components as well. I'm starting docker stats to show you which containers are coming and going. We start docker-compose-ui, which is responsible for taking orders from our Lambda functions to start and stop the stage two containers. We start SAM local, which is responsible for hosting our Lambda functions locally, and we already have our system under test running. Now, once we've built our stage one images with npm run dc-build, we can bring them up with npm run dc-up, and then we start the CLI.
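Pulling those steps together, a local full system run looks roughly like this (docker stats and the two npm scripts are the ones just mentioned; the other steps are placeholders here, so follow each component's readme for the exact commands):

```
# Terminal 1: watch containers come and go
docker stats

# Terminal 2: start docker-compose-ui (takes orders from the Lambda
#   functions to start/stop the stage two containers)
# Terminal 3: start SAM local to host the Lambda functions locally
# Terminal 4: bring up your system under test (e.g. OWASP NodeGoat)

# Terminal 5: build, then bring up, the stage one containers
npm run dc-build
npm run dc-up

# Terminal 6: start the CLI and feed it your job file
```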
In the bottom left terminal you can see the validated, filtered and sanitized job file contents. In the top right terminal, docker stats is showing us the stage two containers being brought up. In the bottom left terminal, we're checking and retrying that the stage two containers have come up and are responsive. All testers are now running. As the test run proceeds, in the CLI tester complete panel (that's the donut meters) you will see the percentages progress; these are total percentages per tester. In the running statistics panel, just to the right of the donut meters, each row represents a test session as defined in the job file.
Here I'm tailing the CLI TLS tester log, just to save right-arrowing on the CLI terminal to the TLS tester screen and not being able to also see the app tester progress.
Back to the running statistics panel. The thresholds you see are also defined in the job file, as alert thresholds. A given test session will be considered a fail if the bug count exceeds the alert threshold. Alert thresholds are useful for brownfields projects where you have existing defects but still want a test to pass.
referring to these quite often. Back to the running statistics
panel, you'll notice a complete column. These cells represent
percentage complete of the test session, where you may have more
than one of these for a given tester.
In order to initiate a test run, the build user needs to define
and supply a job file. This is the documentation
that will help explain the schema and help you construct your
job file.
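To give a feel for the shape before we look at real examples, here is a heavily abridged sketch (the property names are from memory and may differ from the real schema, so always work from the documentation just mentioned):

```json
{
  "data": {
    "type": "Job",
    "attributes": { "version": "...", "sutProtocol": "https" },
    "relationships": {
      "data": [
        { "type": "appScanner", "id": "lowPrivUser" },
        { "type": "tlsScanner", "id": "NA" }
      ]
    }
  },
  "included": [
    {
      "type": "appScanner",
      "id": "lowPrivUser",
      "attributes": { "username": "user1", "alertThreshold": 12 },
      "relationships": { "data": [{ "type": "route", "id": "/profile" }] }
    },
    {
      "type": "tlsScanner",
      "id": "NA",
      "attributes": { "alertThreshold": 3 }
    },
    {
      "type": "route",
      "id": "/profile",
      "attributes": { "method": "POST" }
    }
  ]
}
```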
Next, I'll show you some example job files.
This job file is very similar to the one we're using for this test run, except we're targeting the NodeGoat SUT at purpleteamlabs.com, which is deployed using the PurpleTeam infrastructure as code system under test project.
The new bugs panel of the CLI shows bugs over
and above any specified alert thresholds. If this
count is above zero, then you are going to have at least one failed test
session. The total tester progress meter to the
right of new bugs shows the combined progress of
all testers.
These logs I'm showing you are the raw CLI logs taken from the current, finished test run. This particular log is from the Lowpriv user test session of the current test run, currently being written to the top of the two CLI window panes as we speak. You'll notice that this particular test session is only testing a single route: the profile route of our system under test. This particular log is from the admin user test session of the current test run, currently being written to the bottom of the two CLI window panes as we speak. This test session is testing two of our system under test routes, the profile route followed by the memos route.
As you can see, the server tester is currently inactive.
Now we're looking at the TLS tester log. There is only
ever one of these per test run.
You'll notice the color codes in amongst the text. These are used
to display the log text in color. We'll see how this works soon.
We're looking at the same CLI logs as before. Tools such as cat, less, and tail, if configured correctly, will render the color codes. Just reiterating: these CLI logs are currently being written; I've just taken them from the finished test run. This is the Lowpriv user test session CLI log from the application tester. As you can see, this is a failed test session. This is the one and only TLS scanner test session CLI log that I showed you before, but with the color codes rendered. These CLI logs are what is printed to the CLI terminal, whether you are running it in CUI mode or no-UI mode. Right-arrowing and left-arrowing in the CLI terminal will switch between the different testers' windows.
As you can see, this is a failed test session. When you see the "outcomes have been downloaded to" message, that means the test run is complete and you can now inspect the report files generated by the emissaries and the result files generated by Cucumber.
This is what the outcomes archive looks like once it's been packed by the orchestrator and sent to the CLI; you'll notice the report and result files. This is the HTML report file generated by the application emissary, Zaproxy, for the Lowpriv user app scanner test session. It lists the alerts, or defects, along with how they were found and how you can reproduce them, as well as directions for fixing them.
This is the HTML report file generated by the application
emissary for the Admin User app scanner test session.
This is the HTML report file generated by the TLS emissary, testssl.sh, for the one and only TLS scanner test session.
This is the markdown report file generated by the application emissary
for the Admin user app scanner test session.
This is the CSV report file generated by the TLS
emissary for the one and only TLS scanner test session.
Here I'm highlighting the severity levels.
These can be one of low, medium, high, or critical.
Refer to the job file documentation for further details on these.
This is the JSON report file generated by the TLS emissary
for the one and only TLS scanner test session.
These are the three NDJSON result files generated by Cucumber for the three test sessions: the Lowpriv user app scanner test session, the admin user app scanner test session, and the one and only TLS scanner test session.
The app scanner admin user test session for the profile route has
completed. It's now starting on the memos route. The app
scanner Lowpriv user test session for the single profile route
has finished. The log, which has just scrolled off the screen,
provides defect counts and details of where to look in
the reports. This is the log and outcomes
files documentation.
The app scanner admin user test session for the memos route
has completed, which means the test session it's in is
finished. In this case, both Lowpriv user and admin
user test sessions have failed. The CLI log
file that I showed earlier contains details of how to use the
report files to locate and remediate the defects.
You'll also notice that the stage two containers have been brought
down. Now we've just right arrowed to the TLS tester to
watch it finish.
The test session for the TLS scanner has now finished.
This also failed because the defect count exceeded the alert
threshold that the build user defined in the job file.
You may also notice that the total tester progress meter
hasn't reached 100%. This is because the server scanner
isn't currently enabled. As you can see, the outcomes
files have been downloaded for you to inspect.
We're looking for contributors to come and join the OWASP PurpleTeam project, to help us continue to make PurpleTeam awesome.
Thanks.