The Baby Steps of DevSecOps!

Rahul Raghavan
February 5, 2017

I've been on the road over the past 10-14 months speaking to security and product engineering teams about DevSecOps, Security in DevOps, Secure SDLC, or Suzie as some might call it (it doesn't really matter anymore). I'm just going to stick with 'DevSecOps' for the rest of this article. For starters, DevSecOps is no longer the 'new kid on the block' from a conceptual perspective. Thanks to RSA 2017, concept selling has been at such a high pitch that product engineering teams have been overwhelmed with so much information that some of them think DevSecOps is a cakewalk, others feel they're behind the curve, and the rest assume they've attained nirvana! The interesting question at hand for beginners - what is the HOP (before the SKIP and JUMP) of DevSecOps?

It's not all about the Tech

DevSecOps, like security (and almost everything else), has the mystic triad - Technology, People and Process. The pillar that's obviously making the most noise is the technology angle of security automation - the scanners and the tooling platforms. Unfortunately, the other two pillars form an equal, if not larger, share of the solution. Security and Engineering have forever been at loggerheads over getting a product release out to production. In most cases, engineering wins (after much deliberation), with release timelines taking higher precedence. This (by design and by default) leads to security assessments being conducted either as an 'end-of-the-chain' activity or at a frequency of no more than twice a year - compared to more than 6 major releases in a year.

The solution to this classic Catch-22 situation is twofold - Visibility and Culture.

Contrary to popular belief, developers DO care about security (there, I've said it). They just need to be equipped to address it better.

Engineering teams are seldom equipped with the necessary artillery to have a go at security issues the way they do at functional issues. A key bottleneck developers face while solving security issues is "replication". Often, a developer looking at a P1 bug has no way to truly ascertain whether his/her remediation is successful. This leads to frequent back-and-forth 'bug validation' between security and engineering, eating into precious project bandwidth. The other aspect is prioritisation. There needs to be a common baseline (strengthened by platforms, of course) that surfaces only vulnerabilities that are truly valid. More often, engineering is presented with a laundry list of bugs, more than 30% of which have a low probability of impact yet take up significant engineering bandwidth for remediation. Finally, one cannot overlook the fact that DevSecOps demands a significant culture shift in the way engineering works, and it's about time the security community starts appreciating and empathising with it. The first step in security automation is to internally "sell" the value of automation across the board. There needs to be tangible value for everyone in product engineering.
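As a hypothetical illustration of such a prioritisation baseline, findings could be filtered on validation status and a severity bar before they ever reach engineering's backlog. The finding fields (`cvss`, `validated`) and the threshold below are assumptions for the sketch, not a prescription - substitute whatever your scanners and validation workflow actually emit.

```python
# Minimal triage sketch: reduce a raw scanner dump to the validated,
# high-impact subset, so engineering sees actionable bugs rather than
# a laundry list. Field names here are hypothetical placeholders.

def triage(findings, min_cvss=7.0):
    """Keep only findings that are validated and meet the CVSS bar."""
    return [
        f for f in findings
        if f.get("validated") and f.get("cvss", 0.0) >= min_cvss
    ]

raw = [
    {"id": "VULN-1", "cvss": 9.1, "validated": True},   # real P1 -> keep
    {"id": "VULN-2", "cvss": 3.2, "validated": True},   # low impact -> drop
    {"id": "VULN-3", "cvss": 8.0, "validated": False},  # unconfirmed -> drop
]

print([f["id"] for f in triage(raw)])  # -> ['VULN-1']
```

The point of the sketch is less the filter itself and more where it sits: between the scanner and the bug tracker, so the baseline is applied consistently rather than renegotiated per release.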

Fine Tuning

I (and I speak on behalf of my team) have always been fascinated by applications. No two e-commerce sites are the same. The motivation behind a dating app's architecture is vastly different from that of a mobile social platform. It is this pervasive and dynamic landscape of applications that makes securing them both challenging and interesting. Add to this a flavour of continuous automation. I think I speak for the community at large when I say there is no 'off-the-shelf-standard-one-size-fits-all' framework for DevSecOps that works perfectly at a granular level for product engineering teams across the board. However, what would definitely work is a reasonably sound, underlying skeletal structure that can then be customised to the application in focus.

To start off, product engineering teams (engineering + security + DevOps) need to arrive at a realistic 'automation frequency' during the early stages of adopting DevSecOps. Assuming the current frequency of manual security assessments for a product is once a year, it is next to impossible for the team to aim at implementing a system that performs automated security assessments for every release (twice a month) within a 12-month period. This again goes back to the understanding that DevSecOps is not 'just plugging in technology components'. In the initial stages, a well-defined implementation plan should aim at achieving security automation at a frequency that's a reasonable stretch from the current frequency of manual assessments. In the above example, a reasonable goal could be to expect the framework to cater to accumulated product releases, say, thrice a year. An alternate (if applicable) model would be to implement the system for certain modules of a larger product, perfect the system over a couple of months and then throw in the other components. Either way, the essence is to start small and not overwhelm.

Another key element in the SKIP phase is to ensure that existing technology investments (like DAST/SAST scanners, automation platforms, CI services, bug-tracking platforms, etc.) are extended to their fullest capabilities to work in an integrated model. The corollary: any newly procured technology platform must integrate seamlessly with the automation framework.
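A minimal sketch of what that integrated model can look like: a CI step that invokes an existing scanner and forwards only findings not already on file, so reruns don't flood the bug tracker. The scanner command (`scanner-cli`), its JSON report shape, and the tracker hand-off are all stand-ins, not real tools - the structure, not the names, is the point.

```python
# Sketch of a CI glue step: run an existing DAST/SAST scanner, parse its
# report, and keep only findings not already tracked. "scanner-cli" and
# the report format are hypothetical placeholders for your real tooling.
import json
import subprocess

def run_scan(target_url):
    """Invoke a (hypothetical) scanner CLI and parse its JSON report."""
    result = subprocess.run(
        ["scanner-cli", "--target", target_url, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def new_findings(findings, known_ids):
    """Drop findings already in the tracker, so reruns stay quiet."""
    return [f for f in findings if f["id"] not in known_ids]

# In a real pipeline, the new findings would then be posted to the
# bug-tracking platform via its API, closing the loop for engineering.
```

The de-duplication step is what makes the integration sustainable at a higher automation frequency: without it, every pipeline run re-reports the same backlog and erodes trust in the system.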

Build - Operate - Transfer

Finally, it is fundamental that engineering teams are equipped with the skills required to continuously 'review and tune' a DevSecOps framework. A truly successful implementation of an application security automation system is one that has minimal dependence on third-party solution providers in the long run. Security automation technology firms should devise engagement models that are self-sustainable and maintainable by product engineering teams.

This blog was originally published on LinkedIn by Rahul.