Conf42 DevSecOps 2024 - Online

- premiere 5PM GMT

DevSecOps as an approach to building and deploying secure applications by “shifting left”

Abstract

Security is crucial in today’s DevOps world, but it often only kicks in when the data center is already burning. I’ll walk you through a broad set of tools, easy to deploy and integrate into your existing Azure DevOps and GitHub Actions pipelines, to optimize security from the start.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hey everyone, Peter De Tender here, more than happy to have you here for attending my session, discovering and talking about DevSecOps as an approach. I know it's a long title: DevSecOps as an approach to building and deploying secure applications by shifting left. So my name is Peter, I'm a Microsoft Technical Trainer at Microsoft, which means I'm actually helping our enterprise customers across the globe every single day, every single week, delivering workshops where, no surprise, Microsoft DevOps solutions, both Azure DevOps and GitHub, are my prime topics. If you have any questions during the conference, or you're watching this later on as a recording, don't hesitate to reach out on LinkedIn or Twitter. I'm on Bluesky in the meantime as well, where my handle is PDTIT.

Now, what we're going to cover in the next 45-ish minutes is the baseline of DevOps. From there, I obviously need to talk a bit about shifting left, because it's part of the title of my session, and then we'll map the concept, the culture of DevOps, onto some DevSecOps tooling, where I'm primarily focusing on Azure DevOps and GitHub within our Microsoft DevOps product family. And as you can figure out, I'm going to have quite some demos. Q&A is a little bit challenging, although we're live during the conference on the Discord channel; but again, if you have any questions afterwards, don't hesitate to reach out.

You already know a little bit about me, and it's obviously more important to know about the session. But within the Microsoft role, I've been a trainer for close to six years now (and before I joined Microsoft as well), initially out of Belgium, supporting the West European customers. About three years ago I managed to move to the Redmond area, and in the meantime, because it's all virtual, I'm delivering workshops supporting customers all over the globe. In the little bit of free time that I have, I somehow still like to share knowledge with communities: presenting at conferences like this one, writing technical posts (although my blog has been a little bit slow the last couple of months), and also supporting publishers and writers, giving them ideas about writing content, if I'm not publishing a book myself.

With that, we're going to jump into the actual topic of the session, starting with the concept of DevOps. Now, I'm pretty sure this is not the first time you hear this, as we have the whole day wrapped around DevOps, so I'm pretty sure this is not surprising if you've been watching all the other sessions. My definition here, I would say the Microsoft definition, is the union of people, processes, and products to enable continuous delivery of value to an end user. And you can think about this in almost each and every concept. The union of people is really bringing the teams together: we have the developer teams, we have the ops teams, and we're going to try and break down that wall, trying to create a bridge, you could say, to bring those teams together. Now, in my DevOps classes, I always explain that it's a lot more than just dev and ops. Why? Because we also have product managers, we have the business stakeholders, we have C-level, we have marketing, and obviously we also have the user and the customer, depending a bit on the scenario we're looking at. But that's, in short, what we're talking about.
Next to that, the process is the automation piece, which I'll talk about a little bit more later on, and then the product can be releasing a piece of application to your internal users and obviously towards customers as well.

Now, I also have a Peter definition. If you're taking DevOps a little bit seriously, and I've been training and implementing DevOps at customers all over the globe for the last couple of years, at some point in time you come up with your own definition. So Peter's definition of DevOps is integrating the culture of delivering value to the end user, relying on team collaboration and workload automation. Why is that important? Because honestly, DevOps is 60 percent related to the culture, again, thinking about the concepts, breaking down the barriers, and only, I would say, 40 percent focused on the tooling. Now, I'm not going to break down the importance of tooling, but as in a lot of scenarios, if you focus too much on the tooling, implementing all the tools, but the teams are not collaborating and there's no focus on automating anything, then in the end you're not really doing DevOps. So that's my little twist on the DevOps definition, focusing on the culture.

Now, a little bit more specific within DevOps: why should we do DevOps? I would say it's really to automate, and not only yourself. That's what I always talk about in the first couple of minutes of my DevOps workshops: you're going to love DevOps because it's really about automating yourself, automating your job, as much as possible. So wrapping this up in a couple of slides, it means automating yourself and, even more so, automating everything.

The starting point is the development cycle. The developer is the team or the individual, but preferably the team, I would say, responsible for building and writing the code, testing it, thinking about the language: .NET with C#, could be a Java app, could be Node, could be Python. That's the flexibility we have. Once the developer creates code, diving into Visual Studio, VS Code, Eclipse, IntelliJ, so many other examples, we're ready for the next step, and that's validating code. The validation means that we're not only trusting the developer's work by checking the code, but we're also going to integrate some other capabilities: checking the syntax, and later on checking for security, checking for vulnerabilities, for example, to validate that the work the developer created is actually working fine. From there we move it into a package. The package, in a web application example, could be a Web Deploy zip package in the .NET framework, or anything similar in other languages. The package also allows us to actually release it, to deploy it right away, and that's where we move it into a running state. The running state, don't get fooled too much here, doesn't mean moving into production. Running could be running on your local machine, going through a bunch of validation steps, and then running it in a test environment to do functional testing and smoke testing, then maybe moving it into a staging environment, where you're going to run performance testing, maybe security validation testing, and then ultimately you want to move it into production. So the traditional way of deploying and releasing, dev and test, staging, production, is totally fine. You could also look into canary or early adopter releases.
Maybe you're more familiar with blue-green deployment, a typical scenario in containers, but in the end it's all doing DevOps in some way or the other. And then, last, we have operations, which means that we're going to manage it, we're going to maintain it, we're going to operate it, and then, technically, you could say, we go back to the starting point. So that's the overview of my DevOps cycle.

From here, we're going to expand on it a little bit more, touching on the security of each and every one of those cycles. The cycles themselves are not changing. The cool thing here is that we're now moving into DevSecOps, where we're still doing the same thing, we're still automating ourselves, we're still automating as much as possible, but now we're going to squeeze in security wherever possible. Now, what does that mean? Just a couple of highlights. In the development stage, we're going to rely on, for example, threat modeling; I'll show you in a demo later on what that means. We're going to integrate security code scanning and linting, like on a PowerShell script, for example, or we're going to integrate built-in security code scanning tools, maybe relying on third-party tools to bring in other capabilities, maybe because you've standardized across different languages which might not be covered by the core DevOps tool that you're using. We're going to make sure that we're storing credentials outside of the source code, where I'll show you a little bit of what Key Vault could look like. And then there's peer reviewing. That's not really about the technology, it's not about the tooling, but it brings me back to the cultural piece, where you rely on colleagues to do code reviews, and to get advice on what it means to actually write secure code, and so on.

In the validation stage, a lot of it is quite similar to the actual development piece, right? That's where, again, we're going to run code analysis, still validating that we don't have credentials in code, and integrating secret management, again in Key Vault or another secret store, or using the local secret options like in a Visual Studio .NET scenario. In the end: no longer storing secrets in the appsettings.json file, that's the main message. We can integrate approvals, which means that we're going to use source code branching and pull requests, which allows us to validate code submissions before they actually get merged into our source control. We integrate unit testing. And, if you want another example, if you're building containerized applications, then we can also integrate container vulnerability scanning, which is obviously the same concept as code scanning, but now scanning not only the code but also the container wrapper around it.

Then we move into the next cycle, closer to the running state. There we're going to secure infrastructure as code, because up till now my examples only touched on writing code, being the developer, but it's only really DevOps if we also start embracing automation for the operations team. So if you're using public cloud, you could look into Azure templates, ARM and Bicep; Terraform works across different clouds, so Terraform could again be a good option; or CloudFormation in AWS. In each case we rely on infrastructure as code, writing template syntax to deploy something in a cloud environment.
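To make that concrete, here is a minimal sketch of what deploying such a Bicep template from an Azure Pipelines step could look like. The service connection name, resource group, and template path are placeholder values, not taken from the demo:

```yaml
# Minimal sketch: deploy infrastructure as code (Bicep) from a pipeline.
# 'azure-demo-connection', 'rg-eshoponweb' and the template path are placeholders.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    displayName: Deploy Bicep template
    inputs:
      azureSubscription: 'azure-demo-connection'   # service connection (placeholder)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az deployment group create \
          --resource-group rg-eshoponweb \
          --template-file infra/main.bicep \
          --parameters environment=staging
```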
In that same stage we integrate container scanning and quality gates, allowing you to run your release pipelines only when certain conditions are met; if the conditions are not met, then we're not even running the release pipeline. And then there's everything you should know about managing and securing your cloud environment: integrating Azure network security, integrating Microsoft Defender products could be a good option if you're using cloud, on the Microsoft side at least, and then a little bit more on the infrastructure security, integrating overall network security, maybe pen testing, totally away from developing and code.

And then we're going to run it. That's again where we have the security capabilities of the target platform. Using Azure as an example, that would be RBAC, role-based access control, and again still validating that credentials are not part of code; we still need to store them somewhere, so you could use app settings, variables or environment variables, App Configuration, and again preferably Key Vault. Once everything is up and running, you could look into something like Defender for Cloud, which is our security posture tool, allowing you to get a view on your security state, looking into threat detection and mitigation, the actual operational piece: how to not only detect issues but prevent them, but also do some investigation and create incidents, where in the Microsoft world it could for example be a tool like Sentinel, or any other SIEM solution if you want to look into that.

Now, what we've seen in the field over the last 20-something years, and that's where I started my career, in '96, so I've been around for a while, is basically this: on-prem, cloud, hybrid, the security focus was always happening all the way at the end. Why? Because that's where the runtime is running. That's also the environment which is typically under attack. Nowadays we actually find out, and not only nowadays, I mean we've been made aware of this for multiple years already, that when an attack happens, it's typically way too late, which means it's hard to detect, it's hard to fix, and it's probably costly to try and fix. And every now and then, like with a ransomware attack, there's a lot of information available in the news from companies who didn't even manage to recover from ransomware attacks. All this to say that instead of waiting all the way to the end of the cycle, once it's running in production, to validate security, the mindset now is: we're going to shift left. And that's the clarification, I would say, of the session title. Even more, I would say, don't just think about it all the way at the left, the planning phase, the developing phase, but also make sure that security comes back in each and every step of the scenario.

That's pretty much it from the presentation perspective. From here, it's going to shift quite fast to a couple of different demos. I just have a few slides as a placeholder for the actual demos, to know what I'm supposed to show you. I'm going to start with the first one, and again, I'm mixing some Azure DevOps with GitHub to show you that it's not always about the tooling, it's a lot more about understanding the culture, the concept, and knowing what it's about. So, assuming that we are a developer, or a DevOps team member, I'm going to open up my DevOps portal. There we go.
This is my DevOps world, so I'm using the Azure DevOps cloud service. One of the first scenarios we have is the project abstraction. The way we organize this is obviously permission based: we're going to create projects, and within each and every project we have the permission model. In this case, let's say Peter is working as a DevOps team member on my retail application. I'm going to give Peter specific permissions, which could be owner, although owner is maybe not even required, or contributor permissions, or so many other different levels. And on another project, maybe I'm just a project manager; I don't need any technical permissions to create code, to upload code, to run pipelines, and I have that flexibility. If you do not have the permissions to see the project or to participate in the project team, you're not even seeing any of those details.

Within a project we can then use the different Azure DevOps features. For now, since I was talking about source control, which would be step number one, we're integrating our code in a source control scenario, which would be Git-based source control, using Azure DevOps Repos or using GitHub, and I'll show you that in the next couple of minutes. So I'm running my application, and one of the scenarios I have here is a repository where I'm storing code. I've got my Azure Pipelines, and down here in my source folder, that's where I actually have my web application using .NET code. I'm mixing my Azure deployment, the infrastructure as code, you could say, with Azure Bicep, and I'm running some images and whatnot, all in one single repository. That works totally fine, but there might be a better option. The better option is what I have in another project, where I'm using a scenario that's called multi-repo. In this example, the logic is roughly the same. I still have my eShopOnWeb, that's my actual application code for my retail website. But now I've got some ARM templates for infrastructure as code, and I'm migrating my ARM templates into Bicep, so I'm going to store those in yet another repository. The mindset behind it is that I can integrate my source control permissions into my repos as well. If my developer is writing code, working on the retail website, I only need to give them permissions to interact with my application code repo; they don't see anything from the infrastructure side, and obviously the other way around. So that's an easy solution, the multi-repo model, to split up where I'm storing my code.

From here, let's say we're moving on in the stack and I'm going to make some changes. I'm in my application, I'm just going to use an Azure container example, and I'm in the main repo. Inside, I'm going to make some changes. I'm going to use the Azure DevOps portal to show you how easy it is to edit code, but in a real-life scenario you would do this from your local machine, your development environment: making changes, committing the changes, synchronizing to the repo backend. I'm going to save you all that to speed up my demo a little bit. So I'm going to make a change here, I'm just going to add a comment, "Peter update software" or something similar, not too important. I'm going to commit, I'm going to confirm, and that's pretty much it. Now, what happens here is okay, but it's also quite tricky. Why? Because now I'm overwriting code, and this could potentially break my code. So what we need is a mechanism to protect my main code.
And that's where branching, pull requests, and approvals come in. So what we're going to do is go back into our Repos, and from there into branches. As you can see, I already have multiple branches, and behind each branch I can interact with my branch policies. What I would recommend here is, for example, requiring reviewers; remember I talked about peer reviewing. If I enable this, I can specify how many reviewers, and I could actually specify who is part of the reviewing team. Then if a code submission, a new update in the code, gets submitted, that's going to trigger a pull request, and I'm not even allowed to merge if there's no approval. So that's a good option there. There are a few other settings, but I guess the baseline should be clear enough. If I take one step back, I go back into, whoops, my branches; let me go back to my repo, back to my branch. I'm going to open up the other branch, so I've got main and I've got a second one. Whenever we make a change in another branch besides main, it means we cannot directly update the main branch anymore; we need to offer a pull request. Now, a pull request is actually a suggestion, you could say, where my colleague Jason here made some updates in code and is now suggesting them for me to merge into main. We can still provide a title, we define a description, and again we define the reviewers. And that, I would say, brings in the culture again: we're not just allowing any developer, any DevOps team member, to do whatever they want, but we actually want to stimulate that team collaboration and evaluation.

The next step in our shifting-left cycle, after we have code uploaded into our repos (which, by the way, is the same thing in GitHub), is code security scanning and credential management. So I'm going to shift back to my DevOps environment, where I'm going to switch back to my other project. I have my code, I'm going to run my pipeline, and as you can see, my pipeline has been running and it's detecting security vulnerabilities: some outdated packages, known vulnerabilities. How does this work? What I did is inject a code scanning tool. And how do you inject a code scanning tool? You're technically doing the same thing in your pipelines, just running dotnet build, dotnet run, dotnet publish, for example, but now also integrating scanning. Next to that, what I have here in my environment is overall code coverage, and I could also integrate with third-party tools. Now, how do you decide which tool you want to use? It's quite easy: you go to marketplace.visualstudio.com and you can look for extensions, for integrations. You can install security tools in Visual Studio Code, or you can switch to Azure DevOps, search for security, and it's going to give you a pretty extensive list of tools. Some of them are specific to containers. Aqua here is a pretty good option, Snyk is a pretty good option, the built-in Azure DevOps security scanner is a good option, and probably so many other ones as well, but I don't really know all of them. It also depends a bit on the development language you're using, like PowerShell code, .NET code, Java code, where some scanners might be better than others for that specific language. Another option is not using Azure DevOps, obviously nothing wrong with it, but using GitHub instead. The baseline of GitHub is about the same.
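As a quick side note before the GitHub part: the required-reviewers branch policy shown above can also be configured from the command line. This is a rough sketch using the Azure DevOps CLI extension, wrapped in a pipeline step only to keep everything in YAML form; the organization, project, and repository ID are placeholder values:

```yaml
# Rough sketch: configure a required-reviewers branch policy on 'main'
# with the Azure DevOps CLI extension. Organization, project and
# repository ID are placeholder values.
steps:
  - script: |
      az extension add --name azure-devops
      az repos policy approver-count create \
        --organization https://dev.azure.com/contoso \
        --project eShopOnWeb \
        --repository-id 00000000-0000-0000-0000-000000000000 \
        --branch main \
        --minimum-approver-count 2 \
        --creator-vote-counts false \
        --allow-downvotes false \
        --reset-on-source-push true \
        --blocking true \
        --enabled true
    displayName: Require two reviewers on main
    env:
      AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)   # authenticates the az devops extension
```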
Now, what we can do with GitHub is, again, creating repositories, using source code, using branching, using reviewing, everything I talked about on the Azure DevOps side. That's all quite similar; the look and feel is obviously slightly different, but apart from that it's doing the same thing. We're now inside a repo, I'm just going to grab one of these, a pretty outdated one with a sample React app from years back. Now I'm going to dive into Security, and the beauty of GitHub code scanning is that it's available by design, so you don't have to go out and search for your own tools on a marketplace. It's already part of the platform for public and private repos; the only thing you need to do is decide whether you want to use it, yes or no, so you just enable or disable it. From there you get Dependabot, which is a scheduled scanning tool, and it's going to report back about your potential vulnerabilities. If you allow it to, it can actually help you fix the problem by bumping a vulnerable package to a newer version that fixes the issue, something like that. A sketch of what a Dependabot configuration can look like follows below.

The other part here is credentials and secret management. Why is that important? Because something like this could happen. This was literally a screenshot from some years back, January 2021, so four, almost five years ago, where my personal access token, a secret, a secured key, got published to GitHub. Now, if you know a little bit about Peter, there's one thing Peter doesn't do, and that's sharing personal access tokens on GitHub. So what happened here: I was delivering a workshop on DevOps, interestingly enough, showing how to create an Azure DevOps personal access token, obviously sharing my screen during the whole day, and one of my learners was screen capturing the recording, found the personal access token in the recording, which obviously was not allowed, and then stored it in one of their own GitHub repos. The cool thing was that GitHub detected this right away and, within just a couple of minutes, sent me a notification, because they recognized the personal access token as being linked to an Azure DevOps organization. And even more interesting: you could go, okay, makes sense, you're working for Microsoft, so it's probably Microsoft helping here, but it was not even related to the Microsoft relationship. It's just a core functionality of GitHub's built-in security. If you have Azure resources, storage account keys, SQL connection strings, AWS relational databases, personal access tokens, and whatnot, the GitHub security documentation allows you to validate which of your tools and their corresponding secrets can already be recognized out of DevOps, sorry, out of GitHub security.

The other piece in the demo was Key Vault. So how does that work? Imagine I have my pipeline, and in my pipeline I want to use some variables. You could create a variable group where you're going to store some variable names. I'm using this for my demo deployments: if I want to deploy Azure containers, I need a container name, so I'm going to use a variable, but then I also need some specific secrets, like in this case an Azure Communication Services connection string to interact with some AI services, and I'm going to store that as a secret. How do you get from a traditional text string to a secret? That's what this little icon here is doing.
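As mentioned a moment ago, here is a minimal sketch of a Dependabot configuration, the .github/dependabot.yml file, for a JavaScript sample like that React repo. The ecosystem, directory, and schedule are example values, not taken from the demo:

```yaml
# Minimal sketch of .github/dependabot.yml for a JavaScript/React sample.
# Ecosystem, directory and schedule are example values.
version: 2
updates:
  - package-ecosystem: "npm"      # scan npm dependencies (package.json / lockfile)
    directory: "/"                # location of the manifest in the repo
    schedule:
      interval: "weekly"          # how often Dependabot checks for updates
    open-pull-requests-limit: 5   # cap the number of simultaneous update PRs
```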
Back in the variable group: that icon is going to encrypt the value within your library and turn it into asterisks. Next to that, you can define permissions, so this is another security model: which of the pipelines in my project can connect to this library? Instead of repeating the same variables in each and every pipeline, you can move them into the library. The other nice thing is that, instead of storing them encrypted here, we can integrate the variable list in Azure DevOps with Key Vault. There is a small catch, I would say: if I enable this "link secrets from Key Vault" option, it's going to clear all my variables here, so it's forcing you to have a dedicated library of variables for Key Vault and one for anything that's not Key Vault.

What is Key Vault itself? In the Azure platform, you have Key Vault as a service, and what you're going to do there is define keys, secrets, and, if you want, certificates. Now, I told you at the start that as a developer it is recommended not to store your connection strings, API keys, and so many other examples hard-coded in your application. What we've seen in the field is that a lot of customers are indeed not storing these in application code anymore, but storing them in Azure settings. What that means is, if I go into Functions or Web Apps or databases, it's all based on the same concept: I can pull up my variables. So I go into my Azure Functions, I open up my settings, and inside my environment variables I'm going to play with my variables. That approach is, again, no longer in code, but storing them inside Azure. What's the benefit if it's not in code but it is in Azure? You have Azure role-based access control to define who gets access to the Azure platform and can actually manage and interact with our environments. So I've got my App Insights instrumentation key, just as an example, and it's connecting to a value; even App Insights, a monitoring tool for applications, requires a key, and that's what you see here. I might have a connection to a storage account, so I create a variable in my application, in my web app, in my function, and I need a connection string; that's what I'm showing here. This connection string, again, should be treated as confidential, and that's where Key Vault now comes in.

In my third example here, I'm connecting my app, my function, to Azure AI services, and it needs an API key. To do that, we need the API key as the variable, but this time the value is no longer pointing to the connection string itself. It's not holding the actual content, the keys, the connection strings; it's now pointing to Key Vault. The cool thing here is that, for the look and feel, the experience of your app, you're not changing anything. The only thing you need to do in the app code is add the Key Vault package for your development language, allowing the app to recognize what that @Microsoft.KeyVault string is. From there, it's pointing to Key Vault: the @Microsoft.KeyVault reference connects to a Key Vault URI, inside Key Vault it connects to the secret, and from there it pulls up the Computer Vision API key value. So how does this work inside my Key Vault? This is my Key Vault object; under keys and secrets, I'm going to define my actual variables, and from there I'm going to store the actual key. Now, this, by the way, is not the key; this is just a unique object ID. The key itself is hidden, it's a secret value, it's encrypted. And in my variable settings, this Key Vault reference is the only thing I need to specify.
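For reference, setting such a Key Vault-backed app setting from a pipeline could look roughly like the sketch below. The function app, resource group, vault, and secret names are placeholders, not the ones from the demo:

```yaml
# Sketch: point a Function App setting at a Key Vault secret instead of storing the value.
# Function app, resource group, vault and secret names are placeholder values.
steps:
  - task: AzureCLI@2
    displayName: Set Key Vault reference as app setting
    inputs:
      azureSubscription: 'azure-demo-connection'   # service connection (placeholder)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az functionapp config appsettings set \
          --name func-eshop-demo \
          --resource-group rg-eshoponweb \
          --settings "ComputerVisionApiKey=@Microsoft.KeyVault(SecretUri=https://kv-eshop-demo.vault.azure.net/secrets/ComputerVisionApiKey/)"
```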
That allows you to really integrate security, moving all secrets away from code and storing them in Key Vault. On top of that, there's the other piece in the security story: we're going to define explicit permissions, so that not everyone can connect to Key Vault. What we do here is, after adding the keys, we move into access policies, or we use Azure role-based access control, which is the newer way of managing permissions. There I'm specifying: hey, my dear function, you get "get" permissions, because you only need to get the keys, you don't need to change them; and, for example, in the same way, Peter, the admin account, can do "get" and "list". Now, there's a little bit of a chicken-and-egg scenario here. Why? Because someone, being Peter in this case, my admin, my DevOps team member, still needs to add the original, initial key into the story. So you could ask: what's the point in storing all that encrypted if Peter has seen the key anyway? And that's where Key Vault key rotation comes in. In our Microsoft documentation you'll find guidance on what it means to actually rotate keys. For example, if I go into a storage account, imagine I have my access keys: manually, I can run the key rotation, it's going to change key one and key two over here, I copy-paste them, I store the connection string or the key in Key Vault, and I'm done. You can automate this process, like setting a rotation reminder, and you can interact with Azure Automation, running a Cloud Shell script or a PowerShell script, to run that key rotation for you. Technically, it's updating key one, storing it in Key Vault, updating key two, storing it in Key Vault, but from an app perspective, your app is pointing to the Key Vault object, not to the Key Vault object's content. So that's, in short, how that part works. And that was pretty much it on not storing secrets in code. If you have the correct code scanning tool, it will actually detect that you do have keys in code, and then the solution is moving them out into a Key Vault scenario.

The next option we touched on in our shifting-left overview is: what about containers? What I'm showing here on the slide is just a listing of container scanning tools. There are, again, so many different ones. I mentioned Aqua, without doing any marketing for them, it's just one of the many tools. I also have Qualys here. Why Qualys? Because we have it as an integrated security scanning feature inside, for example, Azure Container Registry. So that's going to be my next demo: how to integrate container scanning as part of our DevOps process. There are a couple of different options to do that. I'm going to switch back to my DevOps environment; I don't need this anymore. I'm getting a bit lost in my DevOps and browser portals. Okay, but we're back in business. Cool. One option is using Sonar. Sonar is one of the many third-party code scanning tools, also valid for containers. The way it works: you have your build pipeline, scanning code, validating code, or compiling code, and now we're going to integrate a scanning task. If I open up the pipeline syntax, I'm installing a NuGet package (yes, I know this is an outdated version, it's an old sample, but that's totally fine), we're going to run dotnet restore, later on we're going to run dotnet test and so on, and what we're doing now is integrating a third-party scanning tool as part of the pipeline.
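Roughly, that pipeline looks something like the sketch below, assuming the SonarCloud marketplace extension is installed; the service connection, organization, and project key are placeholders, and task names differ per scanning tool:

```yaml
# Sketch: a .NET build pipeline with SonarCloud analysis wrapped around the build.
# Service connection, organization and project key are placeholder values.
steps:
  - task: SonarCloudPrepare@1
    displayName: Prepare SonarCloud analysis
    inputs:
      SonarCloud: 'sonarcloud-connection'     # service connection (placeholder)
      organization: 'contoso'
      scannerMode: 'MSBuild'
      projectKey: 'eshoponweb'
      projectName: 'eShopOnWeb'

  - task: DotNetCoreCLI@2
    displayName: Restore and build
    inputs:
      command: 'build'
      projects: '**/*.csproj'

  - task: DotNetCoreCLI@2
    displayName: Run unit tests
    inputs:
      command: 'test'
      projects: '**/*Tests.csproj'

  - task: SonarCloudAnalyze@1
    displayName: Run code analysis

  - task: SonarCloudPublish@1
    displayName: Publish quality gate result
    inputs:
      pollingTimeoutSec: '300'
```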
What this does is, again, the same thing: it loops through the code and runs the build pipeline, but as part of the build process it now imports the latest, updated pattern files from this third-party scanning tool, runs them, and provides the details. The outcome, if I look into the actual pipeline run, is that it does the NuGet restore, the dotnet steps, all that fun stuff, preparing the analysis, building the solution, running some functional testing, smoke testing, anything like traditional DevOps, and from there it integrates the scanning. Oops, sorry, that was a bit too fast, running the scanning; I wanted to move this little guy out of the way, but I couldn't get there. There we go. So in my scanning task here, it's downloading the third-party vendor's Sonar pattern files, running a scan, and looping through all the detection mechanisms. Eventually, depending on the outcome, how many vulnerabilities, what kind of criticality, and so many other parameters you can define, it's going to flag the pipeline as a successful build or not.

You can reuse this information once you move into release. What we can do here is go back into our pipelines and create a new pipeline. I'll show you the classic interface, the more graphical one, but the concept is the same. Imagine we're going to publish to our Azure App Service, where I can now enable my pipeline quality gates. Before I can do that, I need to define some settings, which I wasn't planning on doing here. I can add pre-deployment conditions, I can integrate quality gates, and I can validate against my third-party SonarCloud, in this example, or run REST API calls, or check with Azure Policy: is the environment I'm publishing to actually compliant with PCI, with HIPAA, with ISO 27001? If not, then I'm not allowing the release, so I'm not publishing my code, something like that. Or, with the Sonar example I was using before, it allows me to run the Sonar scanning during the build, and then during the release I can go back to it: hey, can you check against the results of my build from a SonarCloud perspective, and the same for any other third-party tool for that matter. Based on that, we get the green light or not to literally move into that end-to-end CI/CD automation concept. (A sketch of how that maps to YAML pipelines follows a little further below.)

The last little piece here, and then I think we're going to wrap it up, is Defender for Cloud. I talked about Defender a little bit in the introduction, setting the scene on DevOps and integrating security, where one of the pillars was all the way at the end: runtime. Your workload is running in Azure; I showed you Functions, I showed you web apps, I showed you Key Vault. Now, Defender for Cloud creates a security posture overview. It's not a security or protection mechanism as such, although it actually comes with those features for specific Azure scenarios, but that's the paid version. Even in the free edition, which you can just enable in your Azure subscriptions, it's going to list security recommendations. You have secure score, a ranking, a set of points you could say, where the more points, the more secure your environment runs. Based on that, it also gives you an overview of your resources, potential security risks, any alerts, attack paths, and whatnot.
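Coming back to those release gates for a moment: in YAML pipelines, the equivalent of pre-deployment gates and approvals is configured as checks on an environment rather than in the classic release designer. A minimal sketch, with the service connection, app name, and environment name as placeholder values:

```yaml
# Sketch: a deployment stage targeting an environment; approvals and checks
# (quality gates, policy compliance, REST API calls) are configured on the
# 'production' environment in the Azure DevOps portal, not in this YAML.
stages:
  - stage: Deploy
    jobs:
      - deployment: DeployWebApp
        environment: production          # environment with approvals/checks attached
        pool:
          vmImage: ubuntu-latest
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  displayName: Deploy to Azure App Service
                  inputs:
                    azureSubscription: 'azure-demo-connection'   # placeholder
                    appName: 'app-eshop-demo'                    # placeholder
                    package: '$(Pipeline.Workspace)/drop/*.zip'
```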
Back in Defender for Cloud: if I drill down on my secure score within my subscription, I open up my recommendations and it lists everything I should try and fix in my environment. From here I could, for example, filter on containers, because that was the last piece I was talking about, and it's going to show me: you, Peter, are running a container registry; inside there's a container image, and you should run a vulnerability scan. If I have something else in another Azure resource, like a storage account here, it's going to show me that public access to that storage account should be disabled, and so on. And since we're talking about DevOps, inside Defender for Cloud there is now also DevOps security. What it does is allow you to read out your repo state from Azure DevOps and from GitHub, bringing that DevOps security view into your security team's overall management. I just need to refresh, because it looks like it's not coming through. There we go. So I showed you my Azure DevOps, I showed you my GitHub, and these are all my repos. My React example, which I used in one of my demos to show you the built-in GitHub Dependabot and CodeQL security scanning, has now also moved into my security view. I can open it up and see what the security state is: it has no exposed secrets, but GitHub was reporting some security issues, and they're now nicely reported in my security environment. Once more, that's the focus on the culture: bringing down the walls between the DevOps teams, the operations teams, and the security teams, and allowing everyone to get a view on the security state.

Cool. So with that, I'm going to wrap it up. I started with Peter's DevOps definition, and I'm pretty sure no one remembers it, but you can move all the way back to the start. What I'm doing now is updating my definition: it's no longer just integrating a culture of delivering value to end users, relying on team collaboration and workload automation, that's just doing DevOps, but now we're also integrating end-to-end security. And that's the baseline of the story. So, what did I cover? I started my session 43 minutes ago by setting the scene on DevOps, talked about the concept of shifting left, and then the majority of the session touched on the tooling, Azure DevOps and GitHub as the Microsoft DevOps solutions, with the second half, or three quarters, of the session jumping around across different demos. I hope you enjoyed it, and I hope you learned something new around DevOps, around DevSecOps, and a little bit about the Microsoft story. There's obviously a lot more that we could cover, but I only got 45 minutes to do that. If you're part of a DevOps team and had no idea how to integrate security, or why it's important, then I hope I helped you a bit in understanding that message, and gave you a little bit of an awakening, a little bit of teasing your brain. With that, I'm going to close out. Thank you for watching my session, thank you for being here. Please don't hesitate to reach out if you have any questions: petender@microsoft.com, or PDTIT on LinkedIn, on Bluesky, and on Twitter, although I'm not that active there anymore. And if you forget everything else, aka.ms/PDTIT will send you to my blog, where you'll find all the other details. Thank you for now, enjoy the rest of Conf42 DevSecOps, and I hope to see you again in any of our other sessions.

Peter De Tender

Business Program Manager - Azure Technical Trainer @ Microsoft
