Abstract
We often hear about how silos in organizations create issues and complexity around collaboration, productivity, strategy, and so many other areas. It’s also true for security, and it’s why as the term DevOps has become popular, so has the term DevSecOps. For developers that aren’t trained in security, how do we get them to adopt a security mindset and apply it in the process? How do we put the ‘Sec’ in DevOps for developers? In this talk, Greta Jocyte will discuss how she encourages a security-inclusive framework within her team, her customers, and her organization.
Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, my name is Greta Jocyte. I'm a senior technical program
manager at Microsoft, and I'd like to talk to you about how you can bring
a security mindset to your team.
To set some context on why I'm talking about the subject,
I'd like to very quickly talk about my own career journey. When I
finished college, I went straight into a role at IBM working on a
security product. I had never worked in the space of security before,
and I had never even studied it as a subject when I was in college.
You can imagine that working on a security product, it was very important for
us to think about how we were securing our own systems,
as this was something our customers were relying on. And this was a very intimidating
thought for me when I was first entering my career.
After that I was working as a development manager also on
a security product. And at this stage of my career it was
important for me to think about how I was instilling
a security mindset in my team, and how I was bringing that
into our culture to make sure that we had
a high quality bar for security standards
and that our customers could really rely on our products.
Today I'm a technical program manager at Microsoft.
We work with some of the largest companies around the world solving
their toughest technical challenges. We work in a code-with
model, where our software developers work alongside their software
developers to tackle whatever challenge it is that we're
working on, and as part of that we like to instill engineering
best practices, and that includes thinking about security.
I'm sure many of you are aware of why security is
important to think about or the impact of having poor security.
But to really get that message across, I wanted to share some recent figures
with you. These come from the most recent IBM Cost of a Data Breach report,
which found that the average cost of a data breach globally
is $4.35 million.
When an organization doesn't have sufficient
security measures in place, that number rises
significantly to $5.57 million.
And more concerning to me is that once a data
breach incident occurs, on average it takes 227
days for a team to be able to remediate that
breach. So what can
you and your team do to think about security and
shift security further left in your DevOps
or DevSecOps practices?
The first thing that I would recommend is to document
engineering best practices. I know that for
most developers the thought of documentation is daunting.
It's often not the task that we look forward to. But I also
know that when we're adopting a new tool
or a new product and it has good documentation, we often
really appreciate it. I myself have worked on
a high performing team that had really high standards for
security, but they had never documented what their best
practices around security were. And so it was really hard for
me to keep up with them. And when we had new people onboarding onto the
team, or there were any major shifts and that knowledge
was no longer there from another team member, this could
often create chaos. And so
if your team is just starting on the journey of security and
you aren't really sure of what your security best practices should
be, I'd highly recommend that you take a look at some resources,
such as OWASP. They have a Top Ten list, and they
have a lot of other resources as well. There's the SANS Top 25,
and on the team that I work on, we've also documented something that we
call the CSE Engineering Playbook.
And this is an overall engineering best practices playbook
that covers a lot of different topics, but we also include security, and I'd
highly recommend taking a look at that as part of your engineering best practices.
I'd recommend that you document things such as your pull request policy,
which might provide guidance such as how many engineers need to
review a pull request before it's merged,
or what the guidance is around actually writing
good, constructive feedback on a pull request.
This is something that I hadn't seen before until I moved
to CSE, and I have found it really
creates a great culture around providing good feedback
and enables others to really have a growth mindset
in taking on that feedback, instead of taking it personally or
negatively. I'd also recommend that
you add things around logging and error handling
and a vulnerability management policy. A vulnerability management
policy might include things like: when you've found a vulnerability,
depending on its severity, how long do you have
to try and mitigate it? What happens if you can't mitigate
it? Is there some type of escalation path in place?
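To make that concrete, here's a minimal sketch of the severity-based timelines such a policy might encode. The severity levels and windows below are hypothetical examples, not a standard.

```python
from datetime import datetime, timedelta

# Hypothetical remediation windows a vulnerability management policy
# might define; your policy's severities and deadlines may differ.
REMEDIATION_SLA = {
    "critical": timedelta(days=1),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
    "low": timedelta(days=90),
}

def remediation_deadline(found_at: datetime, severity: str) -> datetime:
    """Return the date by which a finding must be mitigated; anything
    past this date follows the policy's escalation path instead."""
    return found_at + REMEDIATION_SLA[severity]
```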
This here is just an example from the CSE Engineering Playbook that
I've mentioned, and this is some guidance that we've created around pull requests.
This is just a short snippet of it, and you can scroll down
to see a lot more when you're actually in the page itself.
And this is just to give you an example of what your own
guidance might look like for your own engineering best practices.
And something from the developer mindset to think about every
time you're committing code is: have I reviewed
our engineering best practices before I actually commit this?
The next thing I'd like to talk about is threat modeling. And I
know that the concept of threat modeling, if you've never done it before,
can be quite intimidating.
However, there are so many frameworks and resources
out there to support you through this process that
it really isn't as scary as it may seem.
you should do threat modeling really whenever you're doing any major
architectural changes, or sometimes even for smaller ones.
And this is something that you might have in your engineering best practices
to define at what stages you need to do threat modeling
for your team. There are a ton of different frameworks
out there that can help you go through the process
of threat modeling. For example, STRIDE, which is one that I would
often use, or the MITRE ATT&CK framework. There are also
a lot of tools that will take you through the process itself as
well. I mentioned OWASP earlier; they have a
Threat Dragon tool that will help you go through the threat modeling
process if you've never done it before.
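As a minimal illustration of what a STRIDE pass can produce, here is a sketch of recording findings per component. The components and threats below are invented examples, not the output of any particular tool.

```python
# A minimal sketch of recording findings from a STRIDE review.
# The components and threats below are invented examples.
stride_findings = {
    "external-web-service": {
        "Spoofing": "no mutual authentication on inbound calls",
        "Information disclosure": "plain-HTTP endpoint still enabled",
    },
    "storage": {
        "Tampering": "storage container allows anonymous writes",
    },
}

for component, threats in stride_findings.items():
    for category, finding in threats.items():
        print(f"{component}: [{category}] {finding}")
```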
I also highly recommend that you upskill at least one
developer who is able to go through a threat modeling
exercise. It's often not realistic
to go and upskill your entire team, taking up a week
or two on a topic. And so getting
at least one developer who's upskilled in this area that can
then share it with the rest of the team can be a really effective way
to start bringing that knowledge into your team.
I have previously worked with a customer that had a
policy that required that we did threat modeling for any major architectural
changes. And we were building an entire platform.
And so we had documented our entire
architecture. We had multiple meetings with their
security team to get them up to speed on what our architecture was,
how the services were communicating to each other, and we handed
that over to them to review. We waited
weeks, and it almost came to months,
to hear back from them and find out if they
had identified any vulnerabilities, or if there were any major changes
that we needed to make before we could release this.
And eventually we heard back from them, and we heard that actually
they weren't able to do the threat modeling process, because they'd never worked with
any services in the cloud before, and they weren't comfortable doing this
exercise because everything that they had ever worked with before was
on premises. So what we had to do in this case was
we as a team went through the threat modeling exercise ourselves,
and we used one or two people from the team who had expertise in this
area in order to be able to do that.
Then we went back to one of our security
experts at Microsoft and just confirmed that there weren't any major gaps
that we had potentially missed in our threat modeling exercise. And we shared
this back with the customer's security team to let them know that
we had thought about this, and that until they felt comfortable going
through this exercise themselves, we had gone through it too.
Not only did this help us build trust with the customer,
but one of the major outcomes of this was that once the
developers were actually working on the solution,
they were thinking about security and hardening even
more as they were going through their general development
practices.
So this is an example of what a very basic threat model might
look like. So on the left hand side of this diagram,
we see that we have an external web service,
we see that we have our Internet boundary, and then our cloud
network is our trust boundary. We see that we have three services
within there, and then of course a storage solution.
And then we see four arrows that indicate
the data flow between all of these different services.
At the top right, we see our list of assets
and their entry points, and how they're authenticated.
And then finally, at the bottom right, you see
the potential threats and the mitigations to those threats.
So one potential threat might be that any
data that's in transit isn't being encrypted. And the mitigation
to that would be to make sure that you have the latest version of TLS
enabled.
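As a small sketch of that mitigation in code, a Python client could pin a minimum TLS version like this. The endpoint URL is a placeholder.

```python
import ssl
import urllib.request

# Refuse connections that negotiate anything older than TLS 1.2,
# so data in transit is never sent over a downgraded protocol.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# "https://internal-service.example.com/health" is a placeholder URL.
with urllib.request.urlopen(
    "https://internal-service.example.com/health", context=context
) as response:
    print(response.status)
```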
And for engineers who are going
through a threat modeling practice, or who are even just introducing a new
component into the architecture, some things that you
might think about are: is my data encrypted when
it's at rest? Is it encrypted when it's in transit?
Who has access to this service,
and how is that access actually enforced? Or how is the
access policy enforced?
The next thing I'd like to talk about is automation.
I think most developers know the power of automation.
It makes everyone's life easier, and everything
is far less error-prone when we have a tool that we can trust
to do some type of action on a regular basis.
There are different tools that you can use throughout your
development process to automate some of the security
steps that you might need to take. Some things to think about
automating are, for example, container and dependency scanning
tools, if you're using, for example, Kubernetes and
Docker in your infrastructure. Static code analysis
tools are thankfully quite popular, and I
hope that's something you're already using; this is a great way to
check for vulnerabilities in your code. Git commit
hooks are super powerful for small things that
can have big consequences, for example, having a credential
scanner to make sure that you're not accidentally merging
any secrets into your public repository.
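As a minimal sketch of that idea, here is a hand-rolled pre-commit hook in Python that blocks commits containing obvious secret-like patterns. The regexes are illustrative only; a dedicated scanner such as git-secrets is the better real-world choice.

```python
#!/usr/bin/env python3
"""Minimal pre-commit hook sketch: block commits whose staged diff
contains obvious secret-like patterns. Save as .git/hooks/pre-commit
and make it executable. Illustrative only, not a real scanner."""
import re
import subprocess
import sys

# Hypothetical patterns: an AWS-style access key id, and generic
# key/password/secret assignments with a quoted literal value.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]"),
]

def staged_diff() -> str:
    # Only scan what is about to be committed.
    result = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def main() -> int:
    diff = staged_diff()
    for pattern in PATTERNS:
        if pattern.search(diff):
            print(f"Possible secret matching {pattern.pattern!r}; commit blocked.")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```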
You can also automate your secrets rotation. This is something
that is not fun to do as a manual task,
and so I'd highly recommend automating this step.
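As a sketch of the rotation logic itself, assuming a hypothetical store_secret function standing in for whatever secret store you actually use, a scheduled job might look like this:

```python
import secrets
from datetime import datetime, timedelta, timezone

ROTATION_PERIOD = timedelta(days=30)  # hypothetical policy window

def store_secret(name: str, value: str) -> None:
    """Placeholder: write the new value to your real secret store
    (a vault or key management service) here."""
    raise NotImplementedError

def rotate_if_due(name: str, last_rotated: datetime) -> None:
    # Generate and store a fresh random value once the period elapses.
    if datetime.now(timezone.utc) - last_rotated >= ROTATION_PERIOD:
        store_secret(name, secrets.token_urlsafe(32))
```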
And of course there are a lot of pre-built security frameworks,
depending on the languages that your team is using. For example, in .NET
and C#, or if you're using Node.js and
you want to introduce authentication or something like
that, there's more than likely a library or framework that you can
use in order to enable that. So you don't need to reinvent the wheel,
and you can rely on those types of tools.
This is just an example to show how simple it is to introduce
automation sometimes. I've taken the simplest example here,
but this is using git-secrets as
a git commit hook. And here the most basic
step of it really is just doing a make install.
This is also a list of some container scanning tools that you can use.
So things like Trivy or Aqua. SonarQube is a
super popular tool already, and they have a dependency-check
plugin that you can use. And there's also a tool called
Mend, which was previously known as WhiteSource.
And a question that a developer might ask themselves is
what are our automated tools not catching?
While they're super useful and do make
the developer's process much easier, they do have their gaps.
They're not catching everything. And it's really important to understand what
the limitations of your tools are, and what you are doing to mitigate those
limitations.
Last but not least, I'd like for you to consider infrastructure.
We all know that technology is ever
evolving and we get new tools and technologies
that help to make the development process much easier.
But as we start to introduce these new tools and technologies, it's really important
that we understand how we're making sure
that we're not introducing any new attack vectors into our
solution. And so as an example, if your
team is already using Kubernetes, or if they're looking to adopt
Kubernetes soon, has your team gone and understood what
the security best practices are around adopting Kubernetes
and maintaining a Kubernetes infrastructure?
Is your team looking at things like runtime security and
binary authorization?
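As one small example of turning such best practices into an automated check, here is a sketch that flags privileged containers in a Deployment-shaped manifest. It assumes the third-party PyYAML package and is nowhere near a complete audit.

```python
import sys
import yaml  # third-party PyYAML package

def privileged_containers(manifest: dict):
    """Yield names of containers that request privileged mode,
    assuming a Deployment-shaped manifest (spec.template.spec)."""
    pod_spec = manifest.get("spec", {}).get("template", {}).get("spec", {})
    for container in pod_spec.get("containers", []):
        if container.get("securityContext", {}).get("privileged"):
            yield container.get("name", "<unnamed>")

# Usage: python audit.py deployment.yaml
with open(sys.argv[1]) as f:
    for doc in yaml.safe_load_all(f):
        for name in privileged_containers(doc or {}):
            print(f"privileged container found: {name}")
```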
And have you also taken a look at your cloud provider's security configuration?
As a lot of organizations are shifting to the cloud,
there are a lot of security concerns. And there's
also a mindset that the responsibility is
potentially on my cloud provider, and it never is.
It's a shared responsibility, and if
you're adopting a service from your cloud
provider, it's important that you understand what the security
best practices are around it. So as an
example, and I'm outing Azure a little bit here,
but by default on a lot of Azure services
there's actually a setting that is
turned on that allows any service from
another tenant or subscription to
communicate with your service. Of course that is
a security concern and we never want that enabled.
We only want to communicate with trusted parties.
And so anytime I go and work on a customer engagement and
they have existing Azure services in place,
I always go and make sure that they have this disabled.
Another thing to think about when you're considering security outside
of just your code base or your DevOps
pipeline is to consider the maintainability of those
additional security measures that you're taking.
So for example, we have a motto in
the organization that I work in, where we always meet the customer
where they are. When we're working with a relatively junior
team, we're not going to recommend more
complex security solutions, for example,
introducing VPNs, when they have a very simple
infrastructure that they're working with in the first place. So we
might introduce a phased approach where we have an initial set of recommendations
that is more maintainable for the team and then we provide
them with where they need to get to and what additional security
steps they need to introduce in the future.
So this is just an example of the Kubernetes best practices that
are documented on their website. It's not an exhaustive
list at all, but it's a really great resource to
take a look at for your team if they're using Kubernetes. And whatever
it is that you're using, make sure you go and take a look at the
documentation and look for what it has around security recommendations.
This here is an example from one of the Azure services, virtual
machines: just some of the recommended best
practices that users should follow.
And so the question that a developer should ask themselves whenever
they're making a change to their infrastructure is,
is my change in compliance with our security policies?
To conclude my talk, I'd like to share some potential next
steps that you can take. And the first one is to take a look at
your team and where they are on their journey to shifting left.
You can take a look at the CSE Engineering Playbook, OWASP,
SANS, or any other good security
resources that you're aware of, to really understand what the best
practices are, how your team aligns with those best practices,
and where your potential gaps are. I'd also highly
recommend that you get at least one person on your team upskilled
on security fundamentals. This person can help you get
that knowledge sharing done across your team and
can help you really get that security mindset adopted across
your team. That's everything that I wanted to share with
you, so I thank you for your time again.
If you'd like to take a look at the engineering playbook that I spoke
about today, you can just search online for the CSE Engineering
Playbook. I'm sure it's the first thing that will pop up.
Thank you.