Finding the right metrics to measure the effectiveness of your security programs can be challenging and subjective. While almost everyone agrees on the ultimate objective of preventing breaches, there is much debate about how to objectively measure and report on the effectiveness of everything between your first dollar invested in security and the investments you plan for the coming year.
If you are a senior security leader, you have already faced questions from your executive team or in the boardroom: “How’s our security program doing?” When it comes to application security, development and security teams can report that testing tools are in place and that all priority issues are being resolved, but does that really answer the question being asked?
Application security is an ongoing process. Every day, developers write new code, pull in third-party code, and make calls to external APIs. With every change, new security issues can be introduced into the code base of both new and existing applications as they evolve. Given this constant change, how can we answer the question about the effectiveness of our application security programs?
I recently spent an hour with the program lead for Veracode’s new “Veracode Analytics” reporting solution and left feeling excited. I’m excited because I see the beginnings of a new approach to measuring the ongoing effectiveness of application security, not just at a point in time, but over time. Let me explain.
When we typically think about application security, we think about one of two approaches: preemptively closing security gaps in an application, or preventing malicious behavior at runtime by intervening during execution. Both approaches have made a notable difference in preventing application-based attacks for the organizations employing them, but are organizations getting the most out of these tools?
To answer that question, enter Veracode Analytics. Veracode is offering development and security teams a new perspective: measuring not only the number of issues identified and resolved, but also the types of recurring issues being introduced by specific development teams, and tracking this over time. This approach focuses on development behaviors, helping security and development teams shine a light on areas where specific dev teams need coaching or training to prevent the introduction of new issues.
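The idea of tracking recurring issue types by team over time can be sketched in a few lines. This is a hypothetical illustration, not Veracode's actual data model: the record fields, team names, and CWE labels below are all made up for the example.

```python
from collections import defaultdict

# Hypothetical flaw records as (scan_period, team, issue_type) tuples.
# The structure and values are illustrative only, not Veracode's schema.
findings = [
    ("2023-Q1", "payments", "CWE-89 SQL Injection"),
    ("2023-Q1", "payments", "CWE-89 SQL Injection"),
    ("2023-Q1", "mobile",   "CWE-79 Cross-Site Scripting"),
    ("2023-Q2", "payments", "CWE-89 SQL Injection"),
    ("2023-Q2", "mobile",   "CWE-79 Cross-Site Scripting"),
]

def recurring_issues_by_team(findings):
    """Count how often each (team, issue type) pair appears per period."""
    counts = defaultdict(int)
    for period, team, issue in findings:
        counts[(team, issue, period)] += 1
    return dict(counts)

counts = recurring_issues_by_team(findings)
# A SQL injection count that persists quarter after quarter for one team
# is exactly the signal that the team needs targeted coaching or training.
```

The point is not the bookkeeping itself but what the grouped view reveals: the same issue type reappearing from the same team across periods points to a training gap rather than a one-off mistake.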
Let’s face it, most security issues that are flagged require code changes, slowing down the development process. With the intense focus on getting code shipped faster, our ability to reduce the number of security issues introduced can have a direct impact on delivery times.
I spent many years as an application development manager where I was measured on my ability to rapidly introduce new feature sets. If my security program slowed me down, it was a negative force against my core objectives. When I think about the effectiveness of an application security program, yes, I want to be sure the code my team ships is secure, but I also care deeply about reducing the rate at which new security issues are introduced.
When we talk about the effectiveness of our application security programs, in addition to asking whether reported issues are resolved, we should be measuring the rate at which new issues are introduced, and looking for that metric to decline as development teams improve their security skills. Whether you are using Veracode, Synopsys, Micro Focus Fortify, AppScan, WhiteHat, or another application security solution, measurement is the first step. Do you know which security issues your developers struggle with most? Do you know which development teams in your organization need the most help? Is your application security program effective?
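The declining metric described above can be expressed as a simple period-over-period trend. A minimal sketch, assuming you can already extract a count of newly introduced flaws per release from whatever tool you use (the counts below are invented for illustration):

```python
def introduction_rate(new_issue_counts):
    """Return period-over-period deltas in newly introduced issues.

    Negative deltas mean the team is introducing fewer new issues each
    period -- the declining trend an effective program should show.
    """
    return [later - earlier
            for earlier, later in zip(new_issue_counts, new_issue_counts[1:])]

# Hypothetical counts of newly introduced flaws across four releases.
new_issues = [42, 35, 30, 22]
deltas = introduction_rate(new_issues)       # [-7, -5, -8]
improving = all(d <= 0 for d in deltas)      # True: the trend is declining
```

In practice you would want to normalize these counts against development activity (for example, issues per thousand lines of changed code), so that a quiet release does not masquerade as improvement.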