We’ve reached a point in time when practitioners draw distinctions between “pentesting” and “pen-testing”... and rightfully so. While the merit of the distinction in this particular example is debatable, what deserves a closer look is the motivation behind the quest for such specificity.

Circa 2010, when we commenced our journey into cyber-security, VAPT (Vulnerability Assessment and Penetration Testing) was the preferred umbrella term for anything to do with a technical assessment - whether appsec, a network audit or a sweep scan. Technology groups have come a long way since then, thanks to the need for, and the adoption of, a focused approach to assessing an information asset.

In this blog, I would like to bring out some significant progressions in our security assessments that have come about in assessing applications over the recent past.
The Test Case Buckets
Application security assessments are no longer just a matter of tools and platforms; they are about depth and scale. With the application landscape shifting toward microservices, SPAs and serverless deployments, we see that there is only so much that security tools (by themselves) can bring to the table. Depth of assessment now has more to do with tailor-made test scenarios that often sit outside the capabilities of tools. However, this is not to dilute the capabilities that security platforms bring to the table. An effective application security assessment compartmentalises test scenarios into those that are tool-dependent (and hence automatable at some point in time) and those that depend mostly on the skills of a pentester (typically logic flaws or configuration-driven flaws).

By extension of this fundamental principle, we at we45 felt the need to give our customers the opportunity to derive maximum value from our assessment services based on where they feel they are on their software security journey.
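The bucketing idea above can be made concrete in a few lines of code. This is a minimal, hypothetical sketch (the test-case names and the `bucket` labels are illustrative, not we45's actual taxonomy): each test case is tagged by whether a tool can execute it or a pentester must drive it manually, so subsequent iterations can filter for the automatable subset.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two "buckets": tool-dependent cases are
# candidates for automation; the rest stay with a human pentester.
@dataclass
class TestCase:
    name: str
    bucket: str  # "tool-automatable" or "manual"

CASES = [
    TestCase("SQL injection on login form", "tool-automatable"),
    TestCase("Price manipulation in checkout flow", "manual"),
    TestCase("Missing security headers", "tool-automatable"),
    TestCase("Privilege escalation via workflow bypass", "manual"),
]

def in_bucket(cases, kind):
    """Return the names of test cases in the given bucket."""
    return [c.name for c in cases if c.bucket == kind]

automatable = in_bucket(CASES, "tool-automatable")
manual_only = in_bucket(CASES, "manual")
```

Even this trivial tagging pays off across iterations: the `automatable` list becomes the backlog for scripting, while `manual_only` scopes the pentester's time.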
Penetration Testing - Plain and Simple (2010 - Present)
This is still one of the primary reasons customers engage with us for the very first time, especially those with the following primary objectives:
On some very rare occasions, we have also been called in to augment the skills of an internal security team, simply to bring a third-party view to their test cases. In essence, one-off penetration tests are a great way for customers to gauge the skills of the vendor and to introspect on the resources (technology and skill) required internally to remediate vulnerabilities effectively. Such assessments ALWAYS commence with our engineers drawing up detailed abuser-driven threat models that in turn map to their associated test cases. This is a great way to establish transparency with the customer on the coverage and context of the penetration test. The threat modeling exercise also paves the way for us to understand “blind spots” in the application’s workflows. Each test case is then categorised into one of the buckets described above, thereby minimising the dependency on any one person for subsequent assessment iterations.

The we45 touch: We believe that the merit of a sound security assessment depends as much on the remediation advisory as on the vulnerability itself. To this extent, we try to ensure that our reports define more than one remediation strategy, since not every standard remediation will work in the context of a given application.
Custom Security Automation (CSA) bundle (2014 - Present)
An extension of the standard penetration test, our progression to custom security automation came about in the early days of our DevSecOps ideation models, focusing on the single most important aspect of penetration tests: effective remediation. Development teams often do not have the same tools or skills that penetration testers do, which results in difficulty reproducing vulnerabilities and subsequently remediating them. Ineffective remediation also results in the issue resurfacing or regressing across release schedules. The CSA tackles this problem with a two-pronged approach: automation and training.

Product teams who are used to the drill of frequent assessment iterations are looking to scale this activity, and scale translates to automation. They are oftentimes looking to achieve one or more of the following:
However, what often goes unnoticed is the availability of reusable technology components that can aid this objective. One such component is the scripting and automation technology used by Quality Assurance (QA) teams. With the CSA bundle, our security engineers use components such as Selenium and Cucumber to help reproduce both pure-logic and tool-detected vulnerabilities. These scripts are delivered along with the assessment reports, so our customers can use them to locate and understand each vulnerability.

The kicker: the scripts can be used by teams as part of their existing QA runs as regression scripts. So not only do they now have a way to help development teams find and fix the problem better, they can also truly verify whether a bug is fixed.

The CSA also bundles training along with the assessment contract, thereby setting a very strong context for the assessment itself. The developer security training is conducted either before or after the assessment, with interesting outcomes:
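The core of such a regression script is a repeatable assertion about the vulnerability. As a minimal sketch (assuming a previously reported reflected XSS; in a real CSA deliverable the response body would come from a Selenium-driven browser session, and the check might live behind a Cucumber step definition), the script encodes "is the payload still reflected unescaped?" as a yes/no function QA can run on every release:

```python
import html

# Canonical reflected-XSS probe used in the original finding.
PAYLOAD = '<script>alert(1)</script>'

def xss_regressed(response_body: str) -> bool:
    """Return True if the raw payload appears unescaped in the page,
    i.e. the previously reported XSS has resurfaced."""
    return PAYLOAD in response_body

# Simulated responses for illustration only; a real run would fetch
# these from the application under test via Selenium.
vulnerable_body = f"<p>Hello {PAYLOAD}</p>"
fixed_body = f"<p>Hello {html.escape(PAYLOAD)}</p>"
```

Because the assertion is plain code rather than a tool finding, the development team can rerun it locally while fixing the issue, and QA can wire it into their existing suite as a permanent regression guard.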
Application Security as Code (ASaC) (2017 - Present)
Scaling application security for mature teams and organizations usually comes in two distinct problem statements:
In both these scenarios, there is immense pressure on security engineering teams to meet the following objectives with a fixed number of skilled resources against an increasing number of assessment iterations:
Automation at its most granular level boils down to “code”. We’ve always been an “eat your own dog food” company, and so we realised that our message that security engineering needs to “get code” has to start at home!

AppSec as Code (ASaC) is a mixture of service and solution enhancements that scale primary penetration testing tasks using code and the power of integrations. For example, even a plain and simple penetration test can be made to scale exponentially by breaking down the individual phases of Reconnaissance, Discovery, Mapping and Exploitation into granular test cases. This is where the bucketed approach comes in handy once again. By ascertaining which tools best fit a specific test case (or a group of test cases), combining them with outcome-based assertions (such as finding an issue directly from the tool, or by parsing the tool’s results for a value), and stitching them together, we aim to increase reusability across test cases. The ASaC cuts across the following critical areas:
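The "outcome-based assertion" half of that sentence is the part that turns a tool run into code. A minimal, hypothetical sketch (the JSON schema and finding IDs below are illustrative, not any real scanner's output format): a Discovery-phase tool emits a report, and a small parsing function asserts on the outcome rather than on the tool itself, so the same assertion can be stitched behind any tool that produces comparable output.

```python
import json

# Illustrative tool output, e.g. from a scanner run in the Discovery
# phase. Field names are assumptions for this sketch, not a real schema.
raw_output = json.dumps({
    "target": "https://app.example.com",
    "findings": [
        {"id": "TLS-WEAK-CIPHER", "severity": "medium"},
        {"id": "XSS-REFLECTED", "severity": "high"},
    ],
})

def high_severity_findings(tool_json: str) -> list:
    """Outcome-based assertion: parse the tool's result and return the
    findings that should fail this stage of the assessment pipeline."""
    report = json.loads(tool_json)
    return [f for f in report["findings"] if f["severity"] == "high"]

blockers = high_severity_findings(raw_output)
```

Because the assertion is decoupled from the tool, swapping the scanner only requires a new parser; the pass/fail logic, and its place in the stitched-together pipeline, stays the same.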
Positive Disclaimer: ASaC is not meant as a replacement for manual assessment strategies; it aims to augment them. We stand by our conviction that true value comes from the force-multiplier effect between tools and manual testing.
It is very important that security testing vendors recognise and acknowledge the many changing dimensions of product teams in terms of awareness, skill and, consequently, the maturity of their application security quotient. While for some teams a penetration test is the critical gate to getting conversations going with their customers, for others it is a cost center within engineering that needs a much-deserved overhaul.