Security automation, and Application Security automation in particular, has picked up tremendous steam over the past year. At we45 we have seen a steady increase in engagement from our customers and prospects, through data sheets, blogs and conversations, around AppSec tooling, security regression and DevSecOps in general. While we have met security and product engineering teams across varying levels of complexity and market verticals, the conversations have converged on a surprisingly small set of common questions on this topic.
Here is our compilation of what we'd like to call The Top 7 Myths of AppSec Automation, for all of you who are currently practicing DevSecOps or planning to adopt it in the near future.
One of the obvious strategies teams envisage for automation is integrating Static (SAST) and Dynamic (DAST) analysis tools into the development pipeline. Since these tools have built-in integrations with Continuous Integration (CI) and defect tracking services, teams can get their tool automation plumbing up and running in a matter of days. Sounds pretty straightforward, right? Well, not so much.
What tends to get overlooked is the fact that these scanners generate a lot of 'noise' in the system, by way of false positives, repetitive results across scanners, disparity in vulnerability nomenclature (the same bug under many names) and so on. In an automated system, all this noise gets raised as high-priority tickets in the bug tracking system, making it close to impossible for the engineering team to prioritize and remediate issues.
Vulnerability correlation is essential because it greatly reduces manual triaging of results across tools. The crux of correlation lies in its ability to de-duplicate repeated results across SAST and DAST, normalize vulnerability nomenclature, flag false positives and arrive at a single set of unique results. These results can then be pushed to the defect trackers, ensuring that engineering and security teams have a single set of vulnerabilities to scrutinize further.
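The correlation steps above can be sketched in a few lines. This is a minimal illustration, not any particular product's logic: the alias map, the false-positive list and the scanner names are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical alias map: different scanners report the same bug under
# different names, so aliases are normalized to one canonical title.
NAME_ALIASES = {
    "sqli": "SQL Injection",
    "sql injection": "SQL Injection",
    "reflected xss": "XSS",
    "cross site scripting": "XSS",
}

# Previously triaged noise that should never reach the defect tracker.
KNOWN_FALSE_POSITIVES = {("XSS", "/health")}

@dataclass(frozen=True)
class Finding:
    scanner: str    # e.g. a SAST or DAST tool name
    name: str       # vulnerability title as the scanner reports it
    location: str   # file path or URL

def correlate(findings):
    """De-duplicate findings across scanners into one unique set."""
    unique = {}
    for f in findings:
        canonical = NAME_ALIASES.get(f.name.strip().lower(), f.name)
        key = (canonical, f.location)
        if key in KNOWN_FALSE_POSITIVES:
            continue  # flagged noise is dropped before ticketing
        unique.setdefault(key, []).append(f.scanner)
    return unique

results = correlate([
    Finding("scanner-a", "SQLi", "/search"),
    Finding("scanner-b", "SQL Injection", "/search"),   # duplicate, merged
    Finding("scanner-b", "Reflected XSS", "/health"),   # known false positive
])
```

After correlation, `results` holds a single entry for the SQL injection, recording which scanners agreed on it; only that unique set would be pushed to the defect tracker.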
While it is true that security automation will have some impact on your build time, the impact can be minimized depending on the architecture of your build pipeline. One option (especially for teams who are early adopters) is to set up a separate security pipeline. This can run as a parallel process and can be configured to run scans on a frequency and timeline that will not impact your mainstream application build time-frame.
Another option is to set up the SAST and dependency checkers as part of your main pipeline and a separate pipeline for DAST scanning. Since SAST and dependency checkers have shorter execution times, they have a reduced impact on your app pipeline. In this scenario you can configure static analysis to run daily and dynamic analysis to run weekly. The impact can be reduced further by tuning the scanner policy to run sanity scans on daily builds and deeper scans on weekly builds.
Additionally, in the event of high-severity flaws being identified, teams can configure pipeline rules to break the build.
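A build-breaking rule of this kind boils down to a severity gate run as the last step of the security stage. A minimal sketch, assuming a simple four-level severity scheme (the level names and findings are illustrative, not tied to any scanner's schema):

```python
# Illustrative severity ranking; real scanners have their own schemas.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def should_break_build(findings, threshold="high"):
    """Return True if any finding meets or exceeds the threshold."""
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)

findings = [
    {"name": "Verbose error page", "severity": "low"},
    {"name": "SQL Injection", "severity": "critical"},
]

if should_break_build(findings):
    print("High-severity flaws found: failing the build")
    # In CI this step would exit non-zero, e.g. raise SystemExit(1)
```

In a real pipeline the non-zero exit code is what the CI service interprets as a failed stage, stopping the build from progressing.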
Product engineering teams have accepted that in order to build a secure application from the bottom up, engineering and security teams need to work in unison. However, in almost every conversation I have had on the subject, I am asked why QA needs to be involved; what value would they bring?
In simple terms, the functional walkthrough scripts developed by QA are crucial in providing DAST scanners additional context, enabling the tool to scan with greater efficiency and depth. Let me break it down for you.
Almost all DAST scanners tend to be crawler-based, which essentially means that to be effective the scanner has to traverse multiple pages of the web application. However, applications today are increasingly built as single-page apps or rely heavily on a microservices architecture. In such a scenario, application functions are not invoked based on URLs but are called based on user input. Without multiple URLs to traverse and build a site map, DAST scanners will only scan the home page at best and end up doing a very cursory scan.
A QA walkthrough script essentially validates the functionality of the application by traversing its modules sequentially. For example, booking air tickets on a travel e-commerce website: the flow would be Login→Select Destination→Select Dates→Search & Select Carriers→Proceed to Checkout→Make Payment→Ticket Booked. The walkthrough script provides all the input parameters and output responses of the application as part of this workflow. Proxying the walkthrough script through the DAST scanner gives it the context of the sequential workflow; the scanner can then traverse the application along the workflow and fire its payloads accordingly.
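The idea can be sketched as a workflow replayed through a client whose proxy points at the DAST scanner. The endpoints, parameters and the `send` stub below are hypothetical placeholders; in practice `send` would be an HTTP client configured to route through the scanner's proxy so it observes every step.

```python
# A walkthrough script boiled down to data: each step is one request the
# QA flow makes. All endpoints and parameters here are hypothetical.
BOOKING_FLOW = [
    ("POST", "/login", {"user": "qa", "password": "***"}),
    ("GET", "/destinations", {"query": "SFO"}),
    ("POST", "/search", {"from": "BLR", "to": "SFO", "date": "2024-06-01"}),
    ("POST", "/checkout", {"carrier": "AC123"}),
    ("POST", "/payment", {"card": "test-card"}),
]

def replay(flow, send):
    """Replay the workflow in order through `send`.

    In practice `send` is an HTTP client whose proxy setting points at the
    DAST scanner, so the scanner records the full sequential workflow and
    can fire its payloads at each step in context.
    """
    responses = []
    for method, path, params in flow:
        responses.append(send(method, path, params))
    return responses

# Stub client so the sketch runs without a live app or proxy.
log = []
responses = replay(BOOKING_FLOW, lambda m, p, d: log.append((m, p)) or 200)
```

Because each step names its method, path and parameters explicitly, the same structure works for API calls that a crawler could never discover.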
This is especially important in the case of microservices or APIs. A walkthrough script specifies the API URL and the specific input parameters, extending the capabilities of DAST (which otherwise only comprehends web pages) to APIs as well. Walkthrough scripts also allow module-level scanning, which comes in handy when you want to test only the iterative feature additions to the application.
For continuous security automation to be effective, it is essential to change the current pen-testing process into an iterative one, aligned to new releases as much as possible. The first step is to adopt a threat modeling approach to pen-testing. Threat modeling is ideally conducted during application whiteboarding and allows security testers to draw up threat scenarios and associated mitigations for critical sections of the application. Threat models can then be mapped to test cases. Pen testers can focus their effort on validating threat scenarios pertaining to logic flaws, which are unique to each application. Threat scenarios of a generic nature (such as CSRF or XSS) can be scripted and automated. These generic test scripts can run against every build, saving a lot of time compared to manual validation of every vulnerability.
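As an example of such a generic scripted check, here is a minimal reflected-XSS probe. It is a sketch, not a complete scanner: the path, parameter name and the `fetch` stand-in for an HTTP client are all assumptions.

```python
# Hypothetical reflected-XSS regression check: inject a marker payload and
# verify it never comes back unescaped. `fetch(path, params)` stands in for
# an HTTP client pointed at the build under test.
PAYLOAD = "<script>alert('xss-probe')</script>"

def reflected_xss_check(fetch, path="/search", param="q"):
    """Return True if the payload is NOT reflected verbatim (check passes)."""
    body = fetch(path, {param: PAYLOAD})
    # Escaped output (e.g. &lt;script&gt;) is fine; verbatim reflection is not.
    return PAYLOAD not in body

# Stub responses standing in for the app: one escapes input, one does not.
safe = lambda path, params: "Results for &lt;script&gt;alert(...)&lt;/script&gt;"
vulnerable = lambda path, params: f"Results for {params['q']}"
```

Wired into CI, a check like this runs against every build, so the generic scenario never needs manual re-validation.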
The current system of pen-testing, which involves testing the application in its entirety at periodic intervals and delivering PDF reports at the end, does not fit a DevOps environment. With PDF reports, developers are unable to reproduce the vulnerability or the specific conditions under which the bug was found. They also have no way of validating remediations without going back to the security team. Under constant pressure to maintain a stable app and deliver new features, developers more often than not ignore security reports, as they simply do not fall within their list of priorities or daily workflow.
To keep pen-testing relevant for DevOps, security needs to adopt regression testing. Logic flaws identified during pen tests can be scripted and automated as part of engineering's CI services. Every time the application is built, the CI invokes the scripts to validate that the logic flaws have not resurfaced. This reduces engineering's dependency on security and enables security to conduct pen-testing iteratively, in line with new feature releases.
It is a common (and widespread) misconception that with application security automation in place, penetration testing is no longer needed; the reasoning goes that since pen-testing is primarily a manual process, it has no place in an automated environment.
Contrary to popular belief, testing done purely by tools does not constitute a full pen test. A pen test comprises a Vulnerability Assessment (VA) and manual exploitation, the Pen Test (PT) proper. VA is a tool-driven process that scans the application in its entirety, thereby giving coverage. But VA only identifies 30-40% of the vulnerabilities in an application: essentially the low-hanging fruit, the generic flaws present in all types of applications.
Let's be clear: there is no substitute for manual penetration tests. PT is the only way to identify logic flaws, such as a privilege escalation or an authorization bypass, which cannot be found by tools. These flaws are unique to each application and are typically of high severity. While VA gives coverage, PT brings depth, and together they constitute a comprehensive security assessment of your application.
To be relevant and effective in a DevOps environment, security testers need to invest in developing coding skills. Security testers typically have a black-box or 'outside-in' view of an application. They do not understand the application's architecture or the finer nuances of how the app has been coded. As a result, any assessment conducted by security is taken as a finger-pointing exercise by engineering.
Investing in coding skills helps testers understand the unique aspects of each programming language and appreciate why a particular piece of functionality has been coded a certain way. It also broadens their skill set to conducting white-box assessments and code reviews. Conducting a tabletop code walkthrough alongside developers enables security to make appropriate code changes without adversely affecting the stability of the app.
Coding skills also help security create 'Exploit as Code'. Scripting high-severity vulnerabilities (logic bombs) identified during pen tests and automating them as part of the build ensures that they are caught early in the build process. Over time, these exploit scripts act as a regression suite, validating logic flaws across multiple iterations of the application.
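An exploit-as-code script can be as small as the exploit it captures. The sketch below replays a hypothetical privilege-escalation finding as a per-build check; the endpoint, roles and the `get` stand-in for an authenticated HTTP client are all assumptions for illustration.

```python
# Exploit-as-code sketch: a privilege-escalation exploit found during a pen
# test, captured as a repeatable script that runs on every build.
def exploit_closed(get, path="/admin/reports"):
    """Replay the exploit: a plain user requesting an admin-only page.

    Returns True when the exploit stays closed (access denied) and False
    when the regression has reopened. `get(path, role)` stands in for an
    HTTP client authenticated as that role, returning the status code.
    """
    return get(path, "user") == 403

# Stubs standing in for a patched build and a regressed one.
patched = lambda path, role: 403 if path.startswith("/admin") and role != "admin" else 200
regressed = lambda path, role: 200
```

Run against each build (with the stub replaced by a real client), a failing check tells engineering exactly which exploit has resurfaced, without waiting for the next full pen test.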
Implementing continuous security automation does not mean that 100% automation will be achieved from day one. There are two core aspects to AppSec automation: the first is the raw plumbing of DAST/SAST integration in continuous delivery, the second is continuous security regression. Identifying the right mix of open-source and commercial tools, configuring them with appropriate policies and scan frequencies, and finally integrating and automating them to run as part of, or in parallel to, the build pipeline is achievable in a matter of days (as long as all dependencies and access requirements are met).
However, building security regressions takes more time. In most cases, the target apps are already in production. A baselining exercise has to be carried out to identify the backlog of security vulnerabilities. Once identified, these have to be scripted and automated as part of the build pipeline.
The above compilation reflects very real conversations we have had with security testers, product architects, heads of engineering, developers, QA and DevOps professionals. So if any of these myths were holding you back from jumping on the automation bandwagon, hopefully our humble opinion has changed your mind.