Conf42 Python 2025 - Online

- premiere 5PM GMT

Enhancing Test Automation and Security with Python and AI for Quality-Driven DevSecOps

Abstract

Discover how Python and AI are transforming test automation and security in DevSecOps! Learn to automate security checks, enhance test accuracy, and ensure continuous quality without breaking functionality, all with Python-based tools.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, welcome, and thank you for joining me today. I'm excited to take you through how Python and AI can enhance test automation and security for a quality-driven DevSecOps approach. Before we dive in, let me give a quick introduction about myself. I'm Sriman Yaram, Senior Software Engineering Manager in Test at Coupa Software, with over 19 years of experience in distributed systems, microservices, and quality engineering practices. I have worked across industries like FinTech, SaaS, and enterprise products, delivering highly scalable and resilient products used by Fortune 500 companies, with several products recognized as industry leaders and best in class by research firms like Forrester, Gartner, and IDC. Outside of work, I enjoy mentoring and sharing knowledge with the tech community, and to stay ahead in the evolving software landscape I explore AI and its impact on the ever-changing software industry. Before we dive in, here is a disclaimer. I would like to clarify that the views and content shared today are entirely my own and do not reflect those of my employer or any affiliated organization. The code samples provided are for demonstration purposes only and may not be fully functional; they serve to illustrate concepts rather than being production-ready code. Okay, a quick agenda. First, an introduction to DevOps and DevSecOps, where we'll focus on understanding DevSecOps, its importance, and how it differs from DevOps. Then we'll go into the role of Python in DevSecOps: Python automates security tasks and integrates into CI/CD pipelines. Then we'll move into leveraging AI in DevSecOps, where AI enhances security with automated threat detection and proactive insights. Then we'll go into a tool-based approach. I have tried mapping these tools onto the DevSecOps phases, right from the development phase. So we divide the tools into the dev phase.
What are the tools that we can use there, and in the code review phase, the build phase, the deployment phase, and the production phase, and how can these tools be effectively implemented? Then we will go into some best practices and practical tips for gradual adoption and for focusing on high-impact areas. I mention CI/CD very frequently in the upcoming slides, but for this demo our focus is on showing the key open source tools; these tools can be seamlessly integrated into CI/CD pipelines such as GitHub Actions for automated security testing. Now, the introduction. DevOps transforms software development by focusing on speed, collaboration, and automation. However, security often takes a backseat, leading to vulnerabilities. That's where DevSecOps comes into the picture. DevSecOps builds on DevOps by embedding security from day one, right from the development phase itself, making it continuous, and it's a shared responsibility across development, operations, and security teams. Why shift to DevSecOps? Cyber attacks are becoming more sophisticated and more frequent. Compliance regimes like GDPR, SOC, and HIPAA require proactive security measures. A breach can result in severe financial and reputational damage. DevSecOps helps us detect vulnerabilities right in the development phase and prevent security gaps before they escalate. On this slide, my key takeaways are: DevOps focused on speed; DevSecOps ensures speed with security, catching costly mistakes as early as possible, while ensuring compliance from the start as well. Let's quickly see the role of Python in DevSecOps. As we know, Python is a powerful language and a game changer in DevSecOps. Python is a top choice for security automation because of its simplicity, flexibility, and rich ecosystem.
It has a rich ecosystem of libraries, and while other languages can do similar tasks, Python stands out for its ease of use and fast deployment, making security automation accessible to everyone. I would like to highlight where Python really helps. Vulnerability scanning: detecting flaws early with tools like Safety, Bandit, Dependabot, TruffleHog, and Gitleaks; there are a lot of tools available. AI-based security solutions: Python's strength in AI and ML helps build smarter threat detection models faster than almost anything else. Another good area is threat detection: it analyzes real-time logs to catch suspicious activities before they become bigger issues. Other things I would like to highlight about Python: it's easy to learn and use, even for non-security professionals or for experts in other programming languages; it works well with popular security and DevOps tools; and there is large community support with ready-to-use libraries, which makes automation much easier. Again, the key takeaway: Python makes DevSecOps practical and efficient by simplifying security automation and enabling teams to focus on building secure applications without extra complexity. Now comes the role of AI in security and DevSecOps. Why AI in security? It's a provoking question, right? What are the key benefits we are getting, and how does it contribute to DevSecOps? Security threats are getting smarter and faster. AI helps us stay ahead by predicting potential risks through pattern analysis, just like a weather app forecasts storms and weather conditions based on past data; by detecting suspicious activity in real time, like a home security system that alerts you when it detects unusual movement; and by responding instantly to stop threats from spreading, like an automatic sprinkler system that puts out a fire before it gets worse.
Again, going back to the second question: what are the key benefits we are going to get? Automated threat scanning: AI can scan a huge amount of data quickly, like a spam filter that sorts through emails to catch phishing attempts. Intelligent code analysis: it not only spots security flaws but also suggests fixes, just like a grammar checker that corrects your mistakes as you write. Continuous security improvement: AI keeps learning and adapting; it's like a fitness tracker that adjusts goals based on your activity patterns. Now, the third question I will try to address: how does AI contribute to DevSecOps? Smarter test pipelines: AI creates better test cases; it's like your GPS or maps app suggesting the best route based on real-time traffic. Real-time anomaly detection: AI monitors logs and systems; it's like a surveillance camera that notifies you when an unexpected visitor arrives. Automated compliance checks: AI ensures policies are followed; think of it like tax software that helps you stay compliant without manual effort. To conclude this slide, my key takeaway is that AI is not here to replace us; it's here to make security smarter, faster, and easier to manage. Before we go into a deep dive on the tools, here is how I have divided the tools into six phases, considering the development life cycle. The first is the dev phase, where we use static analysis tools. Again, security is not just about having the tools; it's about making them smarter with AI to catch issues early and automate responses effectively. AI helps optimize security tools to work better and faster. So, as I mentioned, at the dev phase we are going to use static analysis tools, and at the code review phase we will use secret detection tools like Gitleaks and TruffleHog.
If you look at the landscape of these tools, there are hundreds available, but for this demo I just picked two tools for each phase. Some of them are common choices, but for demonstration purposes I took two; there are commercial tools as well as open source tools, and many others besides. The idea is to show how we can integrate these tools into our pipeline, and then leverage AI to make sure threats are addressed as soon as they are detected. For static analysis we are going to use Pylint and CodeQL, and we'll leverage AI's help here as well. AI does not just point out issues; it suggests fixes and predicts risks too, like a smart assistant that helps you write better, more secure, higher-quality code. Next we are speaking about the code review phase and secret detection: Gitleaks or TruffleHog. AI helps cut down false alarms and provides guidance on storing secrets securely; think of it like an experienced auditor who knows exactly what to look for. Then we go into dependency scanning in the build phase, with tools like Safety or Dependabot. If you are leveraging AI, it can analyze trends and warn about potential vulnerabilities before they become serious, similar to how a health app predicts potential issues based on trends. Then comes the deployment phase, where we do functional testing. Tools like Postman and Healenium can be used here, and leveraging AI enables automatic test case generation, fixing broken tests, and reducing flakiness in test execution; as we discussed, it's like a GPS rerouting when you hit a roadblock. Next comes security testing, post-deployment. This is where DAST comes into the picture.
Among DAST tools, OWASP ZAP is one option. Leveraging AI can help you prioritize vulnerabilities based on their real impact; in other words, it's like a doctor telling you which health issues need urgent attention. Then, finally, post-deployment. It's not just about sitting and relaxing, right? We have to watch the production phase as well, to see if there are any anomalies in production. That's monitoring: when we leverage AI, it continuously watches the logs and detects anomalies before they escalate, just like a home security system that learns and adapts to new patterns. Again, the key takeaway here: AI makes security smarter by improving detection, reducing manual effort, and helping teams focus on what matters most. Now we'll look into some of the tools we discussed on the past slide. In the dev phase, we are focusing on two tools: Pylint and CodeQL. Before we go into the code demonstration, let me help you understand what Pylint is. Think of Pylint as a spell checker for your Python code. It enforces coding standards, detects issues like unused variables, and ensures compliance with best practices such as PEP 8. Integrating it with AI offers smart suggestions, auto-fixes, and better issue prioritization. Then comes the other tool we were talking about: CodeQL. What is CodeQL? CodeQL acts like a security detective, scanning your code to identify vulnerabilities such as SQL injection and XSS right at the development phase, by analyzing the code itself. Now, for integrating with AI, the example we are taking here is Salesforce CodeT5-base.
That's a model developed by Salesforce. It has been trained on past vulnerability databases and provides deeper insights for better prevention; basically, it's trained to analyze code and generate insights, helping summarize findings and provide actionable fixes, boosting your development productivity by automating tedious tasks. Another thing I want to highlight here is why we are taking a local model. There are different models; we could use GPT, BERT, or any other model. But here is the thing: I'm taking a local model, so it runs on your local machine, where your pipelines are, where this code is running; that's where the model runs as well. Please remember certain things here. Privacy comes into the picture: some companies won't allow findings to be shared with external systems, and GDPR compliance comes into the picture too. That's why I took the Salesforce model here, running locally. But there is an advantage and a disadvantage as well; let's talk about that. Running a model locally ensures privacy, without sending data to external servers. However, these models may not be up to date; that's the con, because it's running locally. It has learned from past data, but not from what is happening currently, so the model needs to be frequently updated to keep a good vulnerability database with you. Again, there are other possible models that can run locally: CodeBERT, which is great for code understanding; GPT Codex, which provides intelligent code suggestions; Tabnine; PolyCoder; and others. When you're making a decision, just weigh privacy versus updates: a local model ensures privacy, but may miss the latest security trends, as we discussed. Then there is the performance boost.
AI helps developers focus on real issues, saving time and effort. Now, let's go into the code deep dive. If you look at lines five to eight, there is a function, run_pylint, which runs the linting process. Here we took a sample Python file; you can also run it on the entire repository, and everything from standard output is captured in the results. Then we are also running CodeQL. Again, there is a process; you can check the documentation for how it runs. We create a database from the Python code and scan it with the QL query files covering the latest security vulnerabilities it understands, so it checks based on those. New queries keep getting added, so this also needs to be updated to pick up the newer ones. We store that output in a result variable as well. Then I extract findings based on line numbers, because I'm trying to remove all the junk from the output. Finally, for each line number I retrieve, I pass the finding to my model; the model interprets it and gives me an output. Let's see the output first. This is how it looks. Look at Pylint before AI integration: it says "Pylint analysis report". Focus on line number seven: constant name secret_key does not conform to UPPER_CASE naming style, which just says that secret_key does not follow the uppercase naming convention; Pylint is checking naming conventions like that, camel case and so on. Now line number 15: a command injection risk is detected via os.popen, and that's the end of the Pylint report. For demonstration, I'm taking just these two lines.
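The Pylint and CodeQL runner just described could be sketched roughly like this. This is a minimal sketch, not the talk's exact demo code: the function names, the sample.py target, and the extract_findings filter are my own illustrative choices, and the CodeQL CLI invocations assume a recent CodeQL CLI with the default Python query suite.

```python
import subprocess

def run_pylint(target="sample.py"):
    """Run Pylint on a file and return its text report from stdout."""
    result = subprocess.run(["pylint", target], capture_output=True, text=True)
    return result.stdout

def run_codeql(db_dir="codeql-db", src_root="."):
    """Create a CodeQL database for the Python code, then analyze it,
    producing SARIF results (flags assumed from the CodeQL CLI docs)."""
    subprocess.run(["codeql", "database", "create", db_dir,
                    "--language=python", f"--source-root={src_root}"], check=True)
    result = subprocess.run(["codeql", "database", "analyze", db_dir,
                             "--format=sarif-latest", "--output=results.sarif"],
                            capture_output=True, text=True)
    return result.stdout

def extract_findings(report):
    """Strip the 'junk' from a Pylint report, keeping only finding lines."""
    return [line for line in report.splitlines() if ".py:" in line]

# Mocked report so the extraction step can be shown without the tools installed:
mock_report = (
    'sample.py:7:0: C0103: Constant name "secret_key" does not conform '
    "to UPPER_CASE naming style\n"
    "sample.py:15:0: call to os.popen flagged (possible command injection)\n"
    "Your code has been rated at 6.00/10"
)
print(extract_findings(mock_report))
```

Each extracted finding line would then be handed to the local model for a suggested fix, as described above.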
And if you give the same report to AI, look at before and after: it's not only suggesting fixes for the naming conventions. AI is also suggesting how to handle the hard-coded secret in secret_key. So it's not only detecting coding-standard issues; it's suggesting fixes as well, because we pass that report to the AI, the Salesforce model, and it suggests those things. Similarly, for line 15 it says to use subprocess.run instead of os.popen. What this does is speed up our work; it supercharges the developer by pointing them in the right direction. It's a suggestion, versus what we get from Pylint alone. That's where the flexibility, the real value of these tools, comes into the picture: we are speeding up our work. Now, similarly, take the CodeQL security analysis report. Before AI integration, on line number 22: SQL injection vulnerability detected in a raw query. We know a SQL injection could be possible at line 22, but how do we fix it? We may not know; an expert developer might, but just from seeing this finding, many of us would not be able to say what the possible SQL injection attack is. On line number 30, cross-site scripting is possible as well. Now look at the AI-suggested fixes: it says replace the raw query with a parameterized SQL query, and it shows how. That's the power of the AI we are leveraging, the model we are using, which helps us move faster: it's not only detecting, it's also providing the solution. These kinds of capabilities are available in commercial tools; however, with open source we can achieve this too, using AI's help to get solutions for the problems we find.
For line number 30: use an escaping function to sanitize user input in HTML rendering. That's the end of the AI fixes. So there are a bunch of tools we can use, but if you leverage AI's help you get supercharged; that's where it helps. Now let's talk about the review phase and secret detection. Even many good coders sometimes hard-code a secret; it could be in a test file, or it could be a token or anything else. So we need a tool that can help us in the review phase; it's very hard for a developer to catch this in manual review, and there's a chance they'll miss it. That's where these tools come into the picture: secret detection made simple with Gitleaks and TruffleHog, and again AI powered by Salesforce CodeT5-base. You can use any model that can run locally, but for our demonstration I'm using this one. Secrets like API keys, passwords, and tokens can unintentionally end up in code, posing a serious security risk. Finding them manually is difficult, but tools like Gitleaks and TruffleHog can find them efficiently. Before we go into the details and the code demo, let's see what Gitleaks is. Gitleaks scans entire repositories and finds secrets using predefined patterns and entropy checks. It can work with any CI/CD pipeline to catch serious issues before deployment; that means your PR cannot be merged until you address the findings. You can integrate it with pipelines or even with GitHub Actions. TruffleHog, again, dives deeper into commit history to uncover high-entropy strings that could be exposed credentials or other security issues; it's ideal for post-commit reviews to catch overlooked secrets. Now, if you are integrating with AI, how can it help us?
So AI, as we have seen in the past example, brings smarter detection: CodeT5 improves accuracy by learning from past findings and reducing false positives; sometimes a finding is a false positive, and that helps us too. Context awareness: it understands the code context to distinguish real secrets from noise. Automatic remediation: it provides actionable recommendations, such as moving secrets to a secret vault. Continuous learning: it adapts to evolving patterns, identifying new risks as they emerge. Overall it enhances productivity, helping developers quickly address identified issues without manual intervention. Before we go deeper into the code, here is my key takeaway: with AI-powered tools like Salesforce CodeT5, secret detection becomes smarter, faster, and more reliable, ensuring better security without extra manual effort. Now, look at the code, lines six to 13. What we are doing is running the tools; you can use either TruffleHog or Gitleaks. Refer to lines 24 and 25, where we call first TruffleHog and then Gitleaks through the runScan method. We run them, capture the output, and then pass both the TruffleHog output and the Gitleaks output into our model; the model parses it and suggests the fixes. Let's see what the fixes look like, before and after. The TruffleHog output before is a JSON file, which is very hard to read; as developers we can make sense of it, but it's vague: strings found, AWS key, password, and all sorts of things. After AI integration, this is how it looks. Let's focus directly on the AI-suggested fix: move the keys to AWS Secrets Manager.
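A rough sketch of that runScan flow might look like the following. This assumes TruffleHog v3's filesystem scan and Gitleaks' detect command; the parse_findings helper and the mocked finding are mine, for illustration only.

```python
import json
import subprocess

def run_trufflehog(repo_path="."):
    """Scan the working tree with TruffleHog; --json emits one JSON finding per line."""
    result = subprocess.run(["trufflehog", "filesystem", repo_path, "--json"],
                            capture_output=True, text=True)
    return result.stdout

def run_gitleaks(repo_path="."):
    """Scan the repository with Gitleaks and read back its JSON report."""
    subprocess.run(["gitleaks", "detect", "--source", repo_path,
                    "--report-format", "json", "--report-path", "gitleaks.json"])
    with open("gitleaks.json") as fh:
        return fh.read()

def parse_findings(raw):
    """Turn line-delimited JSON output into dicts, skipping blank lines."""
    return [json.loads(line) for line in raw.splitlines() if line.strip()]

# Mocked TruffleHog-style output, so the parsing can be shown standalone:
mock = ('{"DetectorName": "AWS", "Raw": "AKIA_EXAMPLE"}\n'
        '{"DetectorName": "Generic", "Raw": "password=hunter2"}')
print([f["DetectorName"] for f in parse_findings(mock)])
```

The parsed findings are what get handed to the local model for remediation suggestions such as the Secrets Manager fix described above.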
It's acting like your buddy who is trying to help you understand: hey, move these things to a secrets manager and reference them via environment variables. That's a very good suggestion. And Gitleaks, again: it says secrets were found in such-and-such file and method, and the suggestion is to use environment variables or a secrets manager instead of storing credentials in the file. So these are the suggestions we are getting; it's not only flagging that there is a problem here, it's telling us we can fix it this way. That's the power we get from the integration. Now, let's see the build phase. In the build phase we try to find the hidden vulnerabilities in the third-party libraries we are using. Tools like Safety and Dependabot will help us find them. However, there are challenges in dependency management. Managing software dependencies is complex and challenging, and developers often face several key problems. Security risk: dependencies may contain vulnerabilities that expose the application to potential threats, and tracking and resolving these vulnerabilities manually is very time-consuming. It's frustrating as well, because when you change one library, it may not be compatible, for multiple reasons. Deprecated code issues: upgrading dependencies can introduce breaking changes due to deprecated functions, and identifying the affected code and fixing it manually requires a significant amount of time and effort. There is also the high cost of regression, because you have to test the entire application to confirm it still works.
Every dependency upgrade is time-consuming, and overall it's a problem. So what can we do? Use AI, which can help us in dependency management: enhancing it by automatically detecting vulnerabilities, performing compatibility analysis, and predicting risk. Here is my approach; I'll explain the code in detail, but it's a multi-step process. First, automated vulnerability scanning: we use a tool like Safety here (you could use Dependabot or another tool as well) to detect known vulnerabilities. It scans and finds the vulnerabilities introduced by the third-party libraries we are using. Then we analyze the code for deprecations: say there is a library we found and want to upgrade, but is there any deprecated method or code issue? I use a technique called AST-based parsing to identify outdated functions. Then dependency visualization; that code is not shown on this slide, we'll see it on the next one. It's graph-based mapping to understand the relationships: we use NetworkX to understand how the dependencies relate. Finally, we go with AI-powered risk prediction: this is where we use GCN models to prioritize updates and minimize failures. Again, we can add real-time security data by integrating with GitHub advisories to stay updated; GitHub's advisory database carries up-to-date library and vulnerability information.
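The first step of that pipeline could look something like this. It's a minimal sketch: Safety's JSON field names vary by version, so the severity key and the prioritize helper below are assumptions of mine rather than Safety's documented schema.

```python
import json
import subprocess

def scan_dependencies(requirements="requirements.txt"):
    """Run Safety against a requirements file and parse the JSON report."""
    result = subprocess.run(["safety", "check", "-r", requirements, "--json"],
                            capture_output=True, text=True)
    return json.loads(result.stdout or "[]")

def prioritize(vulns):
    """Order findings so the most severe come first (field names assumed)."""
    rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    return sorted(vulns, key=lambda v: rank.get(v.get("severity", "low"), 3))

# Mocked findings, standing in for real Safety output:
mock = [{"package": "requests", "severity": "medium"},
        {"package": "numpy", "severity": "high"}]
print([v["package"] for v in prioritize(mock)])  # numpy first
```

The prioritized list is what the later AI steps consume when deciding which upgrades matter most.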
Going into the detail of the code: lines one to five are where the function scans the project dependencies using the Safety tool to identify known vulnerabilities. The output is parsed into JSON format for further processing; that's where the JSON format comes in. We will see in the next two slides how AI can help, but to summarize here: it prioritizes the detected vulnerabilities based on severity and exposure, and filters possible noise. Then let's focus on the next code block, lines nine to 16. That function scans the Python code to detect deprecated functions; as I mentioned, we use an AST technique here to analyze the code structure and find them. How can AI help here? AI can suggest alternative functions and estimate the impact of replacing the deprecated methods, all analyzed automatically. Now let's go to the next block of code. Here, in step number three, we visualize the dependency vulnerabilities using NetworkX. What it basically does is build a dependency graph to visualize the relationships between dependencies and their vulnerabilities, and highlight risky dependencies visually. I haven't executed this code, but in the example here you can see the dependency graph. If you are leveraging AI, it can detect complex relationships and predict the impact of changes before they happen. The fourth block, lines 30 to 42, utilizes a graph neural network (GNN) to predict risk scores for dependencies. The AI learns patterns from historical vulnerabilities and dependency relationships, and provides actionable risk scores to prioritize upgrades efficiently.
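The AST-based deprecation check described for lines nine to 16 can be sketched with Python's standard ast module. The DEPRECATED set and old_function are placeholder names I chose for the demo, not names from the talk's repository.

```python
import ast

# Hypothetical deprecated names we want to flag in the scanned code
DEPRECATED = {"old_function"}

def find_deprecated_calls(source):
    """Parse the source into an AST and report (name, line) for each
    call to a function on the deprecation list."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DEPRECATED:
                hits.append((node.func.id, node.lineno))
    return hits

sample = "x = old_function(1)\ny = new_function(2)\n"
print(find_deprecated_calls(sample))  # [('old_function', 1)]
```

Because this walks the parsed syntax tree rather than grepping text, it won't be fooled by comments or strings that merely mention the deprecated name.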
Whichever library you want to upgrade, it helps you understand the risk and reduce unnecessary updates by focusing on high-risk dependencies. This is my approach; there are multiple approaches you can take. And finally, the fifth step: fetch real-time security updates from GitHub's advisory database. You can use your GitHub token and pull the security advisories from GitHub as well. Now, let's see what this looks like before AI. It's a manual process: it shows that the dependency numpy has high risk, the dependency requests has medium risk, and there is a potentially deprecated old function in example_code.py at line number 15. Again, as I mentioned, I'm not sharing live code outputs; I'm just showing how this works, with examples written by me. example_code.py has a deprecated method, which I captured here for demonstration purposes. After AI, the automated process looks like this: dependency numpy, CVE severity high, risk score 0.85, so we got 0.85 as the highest; recommended action, upgrade numpy to the given version. Impact analysis: deprecated function detected in example_code.py, replace the old function with the new function. That's the beauty of AI coming into the picture. Again, I haven't taken a real-time example, just a mocked-up one. For the dependency requests, the score is 0.65; the recommendation is to review API usage and apply patches, with no breaking changes detected. So the challenges before AI: dependencies are listed without prioritization, developers must manually assess impact and compatibility issues, and extensive manual effort is required. With the improvements from AI: prioritization with risk scores; you could see the risk score of 0.85 for numpy.
And automated code analysis: AI can help you identify the potential breaking changes and suggest fixes as well, so overall it reduces the testing effort and saves time. On this particular topic, my key takeaway is that AI empowers developers to handle dependency management effectively and with confidence, by automating the analysis, prioritizing the risks, and overall reducing the time and effort. Now, let's talk about functional testing. Here we are using Postman, Newman, and Healenium. The goal is to ensure correctness and reliability by validating the business workflows end to end, whether through APIs or the UI, and also to identify both security and functional issues, reducing production issues, which in turn saves cost and effort. The tools, as I mentioned, are Postman and Newman; to give a quick overview for those who don't know them, Postman simplifies API testing, while Newman automates those workflows and enables continuous validation in CI/CD pipelines. Healenium, again, is an AI-powered self-healing tool that adapts to UI changes by tracking element locators, reducing flaky test failures. Say there is a flaky test case that broke because of a changed locator: Healenium can help us get past that and keep moving, which is very much needed in testing. With AI integration here, we can do smart test prioritization, focusing on high-risk areas by analyzing test history and failure trends, ensuring resources are used efficiently; and with self-healing automation coming into the picture, as I mentioned, we reduce or minimize flakiness, which helps us maintain the test cases and focus on testing new features and other things.
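Driving Newman from Python for that CI/CD validation could look roughly like this. It's a sketch under assumptions: the collection.json filename is illustrative, and the summarize helper mirrors Newman's JSON reporter layout (run.stats.assertions) only loosely.

```python
import json
import subprocess

def run_collection(collection="collection.json", report="report.json"):
    """Run a Postman collection via Newman and export a JSON report."""
    subprocess.run(["newman", "run", collection,
                    "--reporters", "json",
                    "--reporter-json-export", report], check=False)
    with open(report) as fh:
        return json.load(fh)

def summarize(report):
    """Pull a pass/fail summary out of the report (field layout assumed)."""
    stats = report["run"]["stats"]["assertions"]
    return {"total": stats["total"], "failed": stats["failed"]}

# Mocked report, standing in for a real Newman export:
mock = {"run": {"stats": {"assertions": {"total": 12, "failed": 1}}}}
print(summarize(mock))  # {'total': 12, 'failed': 1}
```

A summary like this is also a convenient input for the AI-driven test prioritization mentioned above, since failure counts per run feed the failure-trend analysis.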
AI can also provide actionable insights: root cause analysis and recommendations for improving test coverage. My key takeaway here is that AI-driven functional testing reduces flakiness, accelerates debugging, and improves reliability, making tests smarter and faster. Okay, now let's focus on security testing. Why is security testing still needed? Even with secure coding practices, security testing is essential to detect runtime vulnerabilities, evolving threats, and configuration issues that might arise post-deployment. It ensures compliance, protects against emerging attack vectors, and provides an additional layer of defense. That's why security testing is very important. We are using two tools here, OWASP ZAP and Nikto, and both enhance security testing with DAST. So what is DAST? Dynamic Application Security Testing. The tools we have seen so far are SAST tools, Static Application Security Testing; now we are looking at DAST tools. OWASP ZAP, also called the Zed Attack Proxy, is a very popular tool that simulates real-world attacks to see how the application behaves at runtime. The attacks and vulnerabilities can vary: SQL injection, cross-site scripting, security misconfigurations in our live applications. It basically analyzes the web application without access to the source code, mimicking an attacker's behavior. While DAST focuses on runtime security, as I mentioned, SAST focuses on source and binaries to detect vulnerabilities at the code level before deployment. That's why we also need security testing at the DAST level, against the running application. So how can AI enhance our security testing?
AI can enhance security testing by automating scans, prioritizing critical vulnerabilities, and reducing false positives, allowing the team to focus on real threats and faster remediation before issues reach production. Okay, AI-enhanced security testing, code overview. If you look at how I am using this here: generally people use shell scripts to run these tools, but I have coded everything here. In step one, we are running the OWASP ZAP and Nikto scans; we first trigger Nikto at line number three, and then ZAP. Once that is done, we parse the results, then we prioritize which findings we want to fix, and finally we reduce the false positives. To go back again: lines one to seven perform the automated scans on the application. Then in lines 14 to 20, if you see, we are prioritizing the findings; AI ranks vulnerabilities based on severity and risk factors. Then we move on to false positives. We want to reduce the false positives, so we are basically applying a filter: lines 23 to 31 refine the results, ensuring accuracy by filtering out irrelevant alerts, because we want to focus on the right ones, right? Then step five is where we suggest fixes; we take AI's help in providing actionable security fixes, which can accelerate the development. And then in line block number six, if you see, we are actually running the script, and I used Streamlit here. The key takeaway from this particular script: security testing with AI ensures proactive protection by detecting actual, evolving threats, automating the prioritization, and accelerating the remediation.
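Since the full demo script isn't reproduced here, the prioritize-and-filter steps described above can be sketched in a simplified, self-contained way. The severity weights, confidence values, threshold, and finding names below are illustrative assumptions, not the actual model or scan output.

```python
def prioritize_findings(findings):
    """Rank scan findings by a combined risk score:
    severity weight multiplied by model confidence."""
    weights = {"critical": 1.0, "high": 0.8, "medium": 0.5, "low": 0.2}
    for f in findings:
        f["risk_score"] = weights.get(f["severity"], 0.1) * f["confidence"]
    return sorted(findings, key=lambda f: f["risk_score"], reverse=True)

def filter_false_positives(findings, threshold=0.3):
    """Drop findings whose score suggests they are likely noise."""
    return [f for f in findings if f["risk_score"] >= threshold]

# Hypothetical merged output from the ZAP and Nikto scans.
raw = [
    {"name": "SQL injection on /login", "severity": "critical", "confidence": 0.92},
    {"name": "Missing X-Frame-Options",  "severity": "low",      "confidence": 0.90},
    {"name": "XSS in search box",        "severity": "high",     "confidence": 0.70},
]
triaged = filter_false_positives(prioritize_findings(raw))
```

The point of the sketch is the pipeline shape, scan, score, rank, filter; in practice the confidence would come from a trained model rather than a hard-coded field.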
So now let's look at how it basically looks before and after. Before AI integration, the challenges we see: manual security testing required constant oversight, large volumes of false positives overwhelmed the security teams, lack of prioritization led to delayed responses, and fixing vulnerabilities required extensive research and effort. However, after AI, if you see, it's very neatly done: findings are prioritized, taking the risk score and risk factors into account. If you see, the predictive risk score is 0.92, critical; that means we need to address it immediately. Automated scans run on schedules without manual intervention, and AI filters out the noise to focus on critical vulnerabilities. Now, if you see, it's prioritized, and it also suggests fixes, how you can fix them. If you take the SQL injection attack on login as an example: use parameterized queries and input validation. Then the second suggested fix, if you see, is to implement proper input sanitization using a security library. So it detects, and it prioritizes. The advantages we get, again: increased efficiency, the security team can focus on real threats without wasting time on false positives; improved accuracy, AI reduces the chance of human error in manual prioritization and response; and overall, faster remediation, faster fixes. My key takeaway again: integrating AI into security testing transforms a reactive approach into a proactive and efficient process, reducing exposure and ensuring faster resolution. Okay. Now let's see, with AI integration, what the logs look like, and the challenges in traditional security monitoring. So if you see.
If you take the same scenario, detecting threats like brute-force attacks and unauthorized access could lead to inefficiencies: slow detection, a reactive approach, false positives, and manual containment, all of which take time. Okay. Take the example logs before AI integration, if you see: user login attempted from such-and-such IP address, multiple login failures detected, potential brute-force attack detected. Without AI action, the security team has to go and manually analyze the logs and block the IP after detection. In the meanwhile, there could be serious threats happening. What is happening is response delays and further possible unauthorized access attempts. With AI, we are enhancing security monitoring: AI identifies suspicious patterns instantly, proactive responses reduce false positives, and automated containment, that's the beautiful part, immediately blocks threats and enforces security measures. Now, if you see the example here, we have detected an alert on our login page. The risk is critical; the scoring, which we have done with our previous algorithm, is 9.5. The containment action we are taking: the IP address is blocked, because we don't want more of those requests coming in. We have also identified which customers were affected, and we immediately enabled MFA and revoked the session tokens as well. Then we also suggest the developer implement rate limiting and CAPTCHA. All of this helps us take action immediately, using AI. So the key takeaway here: AI-driven monitoring enables faster detection, faster resolution, automation of containment, and smarter remediation, reducing overall risk very effectively. With that said, let's talk about some of the best practices. Okay.
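As a rough illustration of the brute-force detection described above, here is a minimal sketch that counts failed logins per source IP from raw log lines and flags offenders for containment. The log format, field names, and threshold are hypothetical; a real AI-driven monitor would weight many more signals, such as timing, geography, and account spread.

```python
from collections import defaultdict

def detect_brute_force(log_lines, threshold=5):
    """Return the set of source IPs with `threshold` or more
    failed login attempts, as candidates for automated blocking."""
    failures = defaultdict(int)
    for line in log_lines:
        if "LOGIN FAILED" in line:
            # Assumes each log line ends with an "ip=<address>" field.
            ip = line.rsplit("ip=", 1)[-1].strip()
            failures[ip] += 1
    return {ip for ip, count in failures.items() if count >= threshold}

# Hypothetical log stream: 6 failures from one IP, 1 from another.
logs = ["LOGIN FAILED user=admin ip=10.0.0.7"] * 6 + [
    "LOGIN FAILED user=bob ip=10.0.0.9",
    "LOGIN OK user=alice ip=10.0.0.5",
]
blocked = detect_brute_force(logs)
```

In the monitoring flow described in the talk, the flagged IPs would feed the containment step: block the address, force MFA, and revoke active sessions for affected accounts.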
So, best practices for AI-driven DevSecOps. Okay, let's look at this practically. Start small and grow smart: AI can be overwhelming, so it's recommended to start with small, manageable projects instead of trying to implement everything in one place. Pick a specific area and fix it first. Prioritize what matters most: some internal services may not be that critical, so take customer-facing applications as an example and focus where security enhancements are most needed. Automate as many repetitive tasks as possible, and then slowly move into what is most required. These areas can give you quick wins and help establish trust in AI's capabilities. Keep it transparent: one of the biggest concerns with AI is trust, so ensure that AI-driven decisions are clear and easy to understand. Transparency helps teams feel confident in AI recommendations and makes it easier to adopt and act on AI-driven insights. If people don't trust it, it won't be that useful. Always improve: AI is not a one-time setup; it needs continuous learning and improvement. We discussed local models versus frequently upgraded models. Again, regular updates, fine-tuning, and feedback loops help maintain the accuracy and effectiveness of our models, and over time the system becomes predictable and trustworthy. As your systems evolve, your AI models should also evolve. Integrate it smoothly: AI should fit into your existing DevSecOps workflows seamlessly. Don't enable everything at once; start with smaller steps, as we discussed. It should not feel overwhelming to developers, and it should be an enhancement to the current process rather than a burden.
Choose AI solutions that complement your tools and workflows to make the transition effortless. Initially you will see some hiccups and some setbacks, but slowly you have to educate the developers, and not just developers, every engineering team, and get things done. Okay. Now, the future of AI in DevSecOps. Let's take a look at how AI is shaping the future of DevSecOps and why it is becoming an essential part of modern security practices. AI enhances speed, accuracy, and efficiency; enables proactive security and automation; scales without human bias; enables smarter decision making through insights; and drives continuous improvement with feedback loops. So overall, what I want to say is that as AI continues to evolve, it will play a bigger role in DevSecOps, driving efficiency, automation, smarter security strategies, smarter remediation, and smarter containment, helping organizations build safer and more resilient applications.
...

Srimaan Yarram

Senior Engineering Manager - Quality @ Coupa Software
