Team we45
June 28, 2018

Vulnerability Correlation in an Agile Pipeline

Introduction

There has been a significant increase in companies taking up application testing as part of their agile process. As applications become more complex, with more add-ons and bug fixes, application testing has become an exercise in risk mitigation rather than an effort to secure the entire application in one pass.

The time consumed throughout the SDLC plays a major role in this strategy, with manual labor being the biggest factor. Hence, to reduce manual labor, companies are embracing the integration of security automation into their CI/CD pipelines. Justifiably so: automation reduces manual labor, improves application security, and enables faster release cycles. In other words, companies understand the value of integrating security into their DevOps pipeline.

Another factor driving the adoption of security automation is the maturity and availability of a wide variety of security tools, both licensed and open source. There are different tools for each stage of an application's development cycle, including SAST tools (static, white-box testing), DAST tools (dynamic testing), IAST tools (interactive testing), and RASP (Runtime Application Self-Protection). Each of these tools has its own strengths and weaknesses, so companies implementing DevSecOps most likely have multiple tools integrated into their pipeline to get comprehensive coverage.

Enhancing application security, increasing efficiency, and reducing manual labor through automation are all great. But they come with a certain headache: how do companies deal with the different results yielded by this wide range of testing tools? This probably hits home for many developers who have spent late nights wading through multiple reports, trying to make sense of them all. Below are some of the key factors that turn automated security testing results into a conundrum.

Key Factors

  • Vulnerability Naming Convention: The industry lacks a common vulnerability naming convention. Tools built by different companies tend to use their own names for the vulnerabilities they detect. This becomes a disaster when your engineers have to deal with the wide range of reports and manually correlate the reported vulnerabilities. Duplicate findings compound the problem: engineers might end up creating multiple tickets in their bug tracking system for the same flaw.
  • Report Format: In addition to the different names, each tool yields its results in a different format. Even if you narrow the output down to JSON or XML, it is machine-readable rather than human-readable, and turning it into something a human can act on takes extra time and energy. A short sketch after this list makes both the naming and the format problem concrete.
  • False Positives & Risk Scores: All tools come with a certain margin of error. Depending on the context, each tool can report a high or low rate of false positives. In addition, each report carries its own rating of the risks associated with the flaws it finds. One can only go through them manually to figure out which findings are false positives, which are true positives, and which of them actually pose a risk to the application.
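
To make the naming and format problems concrete, here is a minimal Python sketch. The two scanner outputs and the alias table are invented for illustration, but the pattern, parsing each tool's native format and mapping vendor-specific names onto a shared key such as a CWE ID, is the essence of what a correlation step has to do:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical output from two scanners reporting the *same* flaw.
# "Tool A" emits JSON, "Tool B" emits XML, and each uses its own name
# for what is, in both cases, CWE-89 (SQL Injection).
TOOL_A_JSON = '''{
  "findings": [
    {"name": "SQL Command Injection", "file": "app/views.py",
     "line": 42, "severity": "High"}
  ]
}'''

TOOL_B_XML = '''<report>
  <issue title="SQLi" location="app/views.py:42" risk="3"/>
</report>'''

# A crude alias table; in practice this mapping is the hard part,
# since every vendor invents its own vulnerability names.
NAME_TO_CWE = {
    "sql command injection": "CWE-89",
    "sqli": "CWE-89",
}

def normalize_tool_a(raw):
    for f in json.loads(raw)["findings"]:
        yield {"cwe": NAME_TO_CWE[f["name"].lower()],
               "file": f["file"], "line": f["line"],
               "severity": f["severity"]}

def normalize_tool_b(raw):
    for issue in ET.fromstring(raw).iter("issue"):
        path, line = issue.get("location").split(":")
        # Map the tool's 1-3 numeric risk onto a shared scale.
        sev = {"1": "Low", "2": "Medium", "3": "High"}[issue.get("risk")]
        yield {"cwe": NAME_TO_CWE[issue.get("title").lower()],
               "file": path, "line": int(line), "severity": sev}

print(list(normalize_tool_a(TOOL_A_JSON)))
print(list(normalize_tool_b(TOOL_B_XML)))
# Both print the same normalized record: one flaw, not two.
```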

A Possible Solution

An Application Vulnerability Correlation (AVC) tool would present a possible solution to the issues mentioned above. This correlation tool should have the following features:

  • Correlation and De-duplication: The AVC tool should be able to correlate and consolidate results across static, dynamic, and software composition analysis. This gives the engineering team a more in-depth understanding of each flaw, both at runtime and in code. Since tools often detect the same well-known flaws under varied naming conventions, the tool should de-duplicate them into a single unique finding, eliminating the need to manually aggregate scan results from multiple sources (a sketch of these steps follows this list).
  • Risk Prioritization: The different testing tools tend to differ in how they rate the risk of the vulnerabilities they recognize. The AVC tool should combine these different risk scores and rate findings against an industry standard, such as an application security index score or CWE. The benefit of using something like CWE is that it also helps normalize vulnerability naming conventions. In addition, the AVC tool should use an intelligent tagging mechanism that marks false positives, which over time can help the tool reduce future false positives.
  • Defect Tracking: Most teams already use a defect tracking tool, such as Jira, in their pipeline. A good AVC tool takes all of the unique (correlated) results and raises bug tickets in your defect tracking system, reducing manual involvement. Raising flaws in defect trackers also increases the visibility of application security to the engineering team, resulting in faster turnaround on remediation. These flaws should be categorized by severity, so engineering can prioritize which ones to address first.

  • Reporting: Furthermore, the AVC tool's consolidated report should be user friendly, that is, in a human-readable format, without sacrificing any detail about the vulnerabilities. Engineers need as much reliable information as they can get to remediate those vulnerabilities. In addition, the report should include a comprehensive remediation advisory.
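
The correlation, de-duplication, and risk prioritization features above can be sketched in a few lines. The field names and severity scale below are illustrative assumptions rather than any particular AVC product's schema; the idea is to merge findings that share a (CWE, file, line) key, keep the most severe rating when tools disagree, and sort the merged list so the riskiest flaws surface first:

```python
# Normalized findings as they might arrive from several scanners
# (field names are illustrative, not from any particular AVC tool).
FINDINGS = [
    {"cwe": "CWE-89", "file": "app/views.py", "line": 42,
     "severity": "High",   "tool": "sast-1"},
    {"cwe": "CWE-89", "file": "app/views.py", "line": 42,
     "severity": "Medium", "tool": "dast-1"},
    {"cwe": "CWE-79", "file": "app/forms.py", "line": 10,
     "severity": "Low",    "tool": "sast-1"},
]

SEVERITY_RANK = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

def correlate(findings):
    """De-duplicate on (CWE, file, line) and keep the worst severity."""
    merged = {}
    for f in findings:
        key = (f["cwe"], f["file"], f["line"])
        if key not in merged:
            merged[key] = {**f, "tools": [f["tool"]]}
        else:
            merged[key]["tools"].append(f["tool"])
            # When tools disagree on risk, err on the side of caution.
            if SEVERITY_RANK[f["severity"]] > SEVERITY_RANK[merged[key]["severity"]]:
                merged[key]["severity"] = f["severity"]
    # Highest-risk findings first, so engineering triages those first.
    return sorted(merged.values(),
                  key=lambda f: SEVERITY_RANK[f["severity"]], reverse=True)

for f in correlate(FINDINGS):
    print(f["severity"], f["cwe"], f"{f['file']}:{f['line']}", f["tools"])
# -> High CWE-89 app/views.py:42 ['sast-1', 'dast-1']
#    Low CWE-79 app/forms.py:10 ['sast-1']
```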

we45's Orchestron is a correlation engine with the features needed to address your correlation needs and fit into your CI/CD pipeline. It correlates and consolidates results from the multiple tools in your pipeline and provides a comprehensive report, along with advisories on remediation best practices. Correlated vulnerabilities are automatically logged into bug tracking tools like Jira and GitHub. The delivered report also includes CWE IDs, CVSS scores, and DREAD scores, which your engineering team can use to prioritize and fix bugs as necessary. In addition, Orchestron has a webhook feature that lets you integrate easily with testing tools written in different languages.
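
As a generic illustration of that defect tracking step (not Orchestron's internals), the sketch below files one Jira ticket per correlated finding through Jira's public REST API. The URL, project key, and credentials are placeholders, and the finding dictionary is assumed to look like the output of the correlation sketch above:

```python
import requests  # third-party; pip install requests

JIRA_URL = "https://your-company.atlassian.net"  # placeholder instance
AUTH = ("bot@example.com", "api-token")          # placeholder credentials

def file_bug(finding):
    """Open one Jira ticket for a correlated finding (Jira REST API v2)."""
    payload = {
        "fields": {
            "project":   {"key": "SEC"},         # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['severity']}] {finding['cwe']} in "
                       f"{finding['file']}:{finding['line']}",
            "description": "Reported by: " + ", ".join(finding["tools"]),
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "SEC-123"
```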

A correlation tool can dramatically reduce the headaches of dealing with multiple reports in a CI/CD pipeline. Even if you are automating security into your pipeline, you are doing yourself a disservice by not using one. It not only reduces manual labor; it also increases the visibility of vulnerabilities and enables faster closure of security issues throughout your secure SDLC.

If you're wondering where to start with your search for an AVC, you can find Orchestron's community edition repository here.