Eliminating Friction Between Development And Security

Listen to learn: how to collaborate with developers and how collaboration can aid in cybersecurity efforts.

SecurityMetrics Podcast | 55


"In order for us to meet our end objective of risk mitigation on software and applications, we have to get the developers on our side. If you do not collaborate with the developers, you're not going to be able to manage that risk."

Harshil Parikh sits down with Host and Principal Security Analyst Jen Stone (MCIS, CISSP, CISA, QSA) to discuss how to eliminate friction between development and security.

Listen to learn:

  • How to collaborate with developers
  • How collaboration can aid in cybersecurity efforts
  • How setting clear expectations can improve teamwork

Resources:

Download our Guide to PCI Compliance! - https://www.securitymetrics.com/lp/pci/pci-guide

Download our Guide to HIPAA Compliance! - https://www.securitymetrics.com/lp/hipaa/hipaa-guide

[Disclaimer] Before implementing any policies or procedures you hear about on this or any other episodes, make sure to talk to your legal department, IT department, and any other department assisting with your data security and compliance efforts.

Transcript of Eliminating Friction Between Development And Security

Hello, and welcome back to the SecurityMetrics podcast. My name is Jen Stone. I'm one of the principal security analysts here at SecurityMetrics. A very interesting topic today, something that I think is going to be very helpful to a lot of the security people and compliance people who are listening.


Occasionally, I'll run into conflict, or maybe conflict is too heavy a word for it, but just a little bit of friction between developers and the security team. And so I have someone here today to talk to me about this who is very well versed and is going to give us some really good advice. Let me tell you a little bit about Harshil Parikh. Harshil is a seasoned security leader with experience building security and compliance functions from the ground up.


Most recently, he served as the CISO at Medallia, where he built the security and compliance team from scratch, scaled the team globally, achieved compliance with SOC 2, ISO, and FedRAMP, went through an IPO, and secured more than seven mergers and acquisitions.


He's been a frequent speaker at the RSA Conference, AppSecUSA, and CSX Europe. Currently, Harshil serves as the cofounder and CEO of Tromzo, a developer-first application security management platform designed to control and secure the software delivery pipeline end to end, simplifying security at every step of application development. Harshil, thank you for joining me today.


Thank you for having me here, Jen. And, by the way, thank you for the kind introduction.


Well, I think you're going to have a lot of really great information for us.


And a lot of organizations already understand this friction. They have experienced firsthand this friction between teams focused on development and those responsible for security, but some of our listeners might not be aware that this is a challenge. Can you describe it for us from your perspective?


Yeah, I would love to. So the fundamental shift that we have seen over the past few years is that every single business wants to be more agile. Everyone wants to do things faster, at a cheaper cost, and in a more agile way. And that's great. We've seen that happen over the pandemic. Everyone's shifting to technology more quickly.


And as a result, companies want to move faster, deliver features faster.


But where does security fall into place in all of this? Right? So, typically, if you think about software development, security has always been sort of that gate where there are change review boards or change approval boards, and everyone has to come in there and get their changes approved before the next release. Well, guess what? That worked when you had a waterfall model of development, but it doesn't really work in a DevOps model.


Right.


Even simple controls, like change review on code changes. Right? If it's in scope for SOC 2, ISO, what have you, how do you know, when you have thousands of developers pushing code every single day, that they're actually reviewing the change?


That it is being peer reviewed. Right? There's no easy way to enforce those controls at the scale of most companies. So it becomes really difficult to implement even simple controls like that.


And then you think about more sophisticated controls, like every single release meeting your compliance controls: you're doing your testing correctly, everything is being tested, risks are identified and resolved in a timely manner. All of those things are almost impossible to do at scale when you have thousands of developers. Right? It just doesn't work.


So we felt this problem ourselves in our previous life, and there was no easy way for us to solve it. So we decided that, okay. You know, somebody has to solve this. So we started a company to do exactly that.


That's terrific. And I don't wanna minimize the problem, because I've seen it firsthand: the change to agile, especially when, like you said, you have many, many developers contributing to the code base. The old style of controlling what's getting released just isn't effective. So I don't wanna say that this is a problem we can fix easily or quickly, but maybe there are some quick-hit ways that you know of that can better engage developers to take security seriously, or help them use their processes to make security just a part of that flow.


What can you recommend on that?


Yeah. I mean, I think it goes back to what developers are really incentivized to do on a day-to-day basis. Right? So if you're in an organization where the only priority for developers is to ship new features, that's what they will do.


But if their leadership is communicating the expectation that, yes, you have to ship features that are compliant, that are secure, then that's what they will do. But in addition to leaders just talking about it, how do you actually enforce those controls? That's the key component. Right? So what we have to shift our mindset to is, instead of controls living in a spreadsheet or some Confluence document or SharePoint, those controls have to be automated into the systems where developers live.


Right.


Developers live in things like GitHub and Jenkins and, you know, other CI/CD pipelines. So how do you automate those controls in those CI/CD pipelines? That is the fundamental question. I think if you start automating controls in the CI/CD pipeline, it becomes very natural for developers to just follow and meet those controls.


So I think that's an incredibly effective way to do it. It's just that doing it is really hard.


For sure. And when you and I were preparing for this conversation, you mentioned that there were some best practices that could be put in place, starting with security guardrails.


So can you tell me more about security guardrails and how they can be expanded into real-world activities or documents or automations? What are security guardrails?


Yeah. I mean, guardrails, at the end of the day, are just protection mechanisms that tell you whether you are on the approved path or not. As a developer, if you go down a nonstandard route that's not authorized, not compliant, not secure, then something will tell you that you're going off path. Right?


So think of guardrails as, going back to our previous conversation, controls in the CI/CD pipelines. If you're requiring this control, that you have to have your code reviewed by somebody else, which in the world of GitHub could be a pull request that has to have an additional reviewer, then enforcing that control becomes a guardrail. You cannot merge your code unless you have somebody review that code. That just becomes a software control.


Now, there are some capabilities in source control systems like GitHub and GitLab where you can enforce those types of controls. Managing that is a little bit tricky, but that becomes an example of a control. So now, if I'm a developer, I don't even need to understand SOC 2. I don't need to understand ISO.


I don't need to understand FedRAMP.


But the systems where I live tell me that for me to be able to move ahead, I need to be reviewing my code with somebody else.
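As a rough illustration of the guardrail Harshil describes, a merge gate can be reduced to a simple check that a change has an independent peer review before it can merge. The data shape and approval threshold below are made up for illustration; they are not any particular platform's API.

```python
# Minimal sketch of a CI/CD guardrail: block a merge unless the pull
# request has the required number of independent peer approvals.
from dataclasses import dataclass, field


@dataclass
class PullRequest:
    author: str
    approvers: list = field(default_factory=list)  # reviewers who approved


def merge_allowed(pr: PullRequest, required_approvals: int = 1) -> bool:
    # A self-approval does not count as an independent peer review.
    independent = [a for a in pr.approvers if a != pr.author]
    return len(independent) >= required_approvals


pr = PullRequest(author="dev1", approvers=["dev1"])
print(merge_allowed(pr))  # False: only a self-approval so far

pr.approvers.append("dev2")
print(merge_allowed(pr))  # True: one independent peer review
```

In real source control systems this same rule is usually expressed as configuration (for example, branch protection requiring one approving review) rather than custom code, which is exactly what makes it a guardrail the developer never has to think about.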


Okay. So these are some standardized things that are built into how the work is done, without making developers be your security experts, without making them be your compliance experts. Am I hearing that correctly?


Exactly. Yes. Because if you put yourself in the shoes of a developer: ten years ago, you were only writing software. And there was a QA team that would test it.


There was a deployment team that would deploy it, and there was an infrastructure team that would manage infrastructure. All that stuff was done by somebody else. Now developers have to do all of that. Right?


Developers are writing their own tests. Developers are the ones who are deploying their own code. Developers are the ones who are configuring AWS and the cloud infrastructure to run their code. So now, putting the developer hat on, you have to do so many things.


You cannot be an expert at all of those things, and then add the security angle to it as well. If you're expecting developers to be good at security, that's one of the seven other things that they have to do. They're never going to be experts at compliance or security. So it's our job as compliance and security professionals to make that easy for them, to reduce the cognitive overload on developers and say, hey.


Here are the three things you just have to do. If you have questions, we'll help you. But if you don't wanna think about it, just do these three things, and you're done. Right?


You don't have to worry about it.


Right.


And just make it easy for them.


So one of the things that you mentioned was secure defaults. There are ways to set up the CI/CD (continuous integration and continuous deployment) pipeline for them. There are secure defaults that can be set up in there. And then another thing that you mentioned was vulnerability management. This is something that I think is a great kind of QA check. A lot of times, you already have QA steps or procedures that are part of that deployment process, and including an application vulnerability management check in it seems to be something that you recommend as well.


Yeah. I mean, we've been talking about that. We, meaning the industry, have been talking about that for a long time. But here's the reality. The most common source of vulnerabilities in applications today is open source dependencies. Right? The vast majority of issues come from that.


But when you think of it as just testing as a part of QA, or testing as a part of the CI/CD pipeline, what you're doing is testing the net new code. Right? The new code that the developer added. But the problem with that is, if a developer is adding a new dependency today, there's a very high likelihood that it will be the latest version of the dependency. They're not going back and pulling out a two-year-old version of a dependency today.


Mhmm.


So most likely, it is free of vulnerabilities today. But then normal decay happens, those things become stale, and vulnerabilities get identified over a period of time. So the dependency that you added today will have vulnerabilities by next year.


Right.


So if you're testing only the net new code being added, you're not gonna see a lot of things. What you'll see is that in the overall code base, over a period of time, new vulnerabilities get added. So somebody has to think about what's the SLA, what's the remediation time frame, of not just net new code, but also things that have already existed. It's just risk that we're incurring.


We have to manage that risk. And most compliance standards have SLAs of, you know, seven days, thirty, sixty, ninety days based on the severity of it. Mhmm. So when those things get identified, developers don't understand why there's an SLA.


They don't know what the requirements are. So how do you make it easy for them to see that, okay, this was a piece of code that was added last year.


Now it has a vulnerability today, and it has to be fixed within fifteen days. Well, the person who might have added the code last year might not be in your company anymore. So who's gonna fix it? Right?
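As a sketch of the SLA tracking described here, using the common severity-based windows Harshil mentions (seven, thirty, sixty, ninety days), the remediation clock can be modeled as starting when the vulnerability is identified, not when the code was originally written. The exact windows vary by compliance program; the numbers below are illustrative.

```python
# Sketch of severity-based remediation SLAs. A finding in year-old code
# gets the same clock as a finding in code merged yesterday: the clock
# starts at identification.
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 60, "low": 90}


def remediation_due(identified_on: date, severity: str) -> date:
    # Due date is identification date plus the severity's SLA window.
    return identified_on + timedelta(days=SLA_DAYS[severity])


def is_overdue(identified_on: date, severity: str, today: date) -> bool:
    return today > remediation_due(identified_on, severity)


print(remediation_due(date(2022, 6, 1), "critical"))  # 2022-06-08
print(is_overdue(date(2022, 6, 1), "critical", date(2022, 6, 20)))  # True
```

A report built on this kind of calculation is what lets you tell a developer "this has to be fixed within fifteen days" without them needing to know which compliance standard imposed the deadline.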


So how do you figure out all of those nuances: when you get issues, when you get vulnerabilities, who needs to fix them, what is the time frame, what's the right communication mechanism? Because developers are not gonna go into your GRC tool or vulnerability scanner to look at issues. So how do you communicate those things to the developers who are actually supposed to fix these things?


And how do you report it from a compliance perspective?


So that whole piece of managing vulnerabilities and risk is a very complicated piece.


And it's been around for a long time, but now it's becoming more and more important as more developers start pushing code faster.


Right. It feels like ownership: assigning ownership, taking ownership, understanding who owns the code, not just when it's released, but also, like you said, a year down the road, when nobody's developing that piece and it exists. Who owns it then, in terms of not just who's supposed to be running the scans, but also who's going to fix it if a vulnerability comes up? Right?


Right. Yeah. And that's a big one. Because people change, teams change, things change over a period of time.


And let's be honest. I mean, all developers want to do the new shiny things. Nobody wants to go back and fix two year old or ten year old code.


Sometimes it could be interesting work, but a lot of times it's not, for the developers. So you gotta figure out how to convince them or force or influence them in one way or the other to do it.


Right.


And that's where the leadership buy-in comes into the picture. Right? As an engineering leader, typically, you wanna maintain the health of what you own. So maintaining the security health of the software you're building and deploying is also an important task.


As a security lead, if you don't have centralized visibility, if you don't have mature reporting and analytics, that becomes increasingly more difficult, I think.


Yeah. Exactly. And the one other trend that we are seeing is there's more and more sprawl of technology.


Right? So now you have different platforms, different technologies, multi-cloud environments. People are in AWS and Azure and GCP, and so many different technologies are being used. As a result, security teams have so many different types of scanners and risk assessment systems, and each of them creates its own data silo.


Mhmm.


So now, if you think about a static analysis tool, a dependency scanner, an infrastructure-as-code scanner, a cloud scanner, CSPM solutions, all of those things come together, and now you have seven to eight different sources of data.


How do you act on that? Like, how do you know which one is more important than the other? How do you even make sense of it and communicate to the people who need to fix or do something about it? So that becomes a challenge.


So that's where visibility and centralization become an important initiative to really be able to move the needle forward.


Right. And so, do you have recommendations on how to increase centralized visibility and really get your arms around that reporting and analytics? Because if you don't have it centralized, how do you prioritize one thing against another?


Yeah. So what we used to do, without a good system in place, was funnel all of this data into Jira. And a lot of times, it would be a lot of tickets. Right? So, basically, any task tracking system.


And for most cases of software security, it was okay, because it's not high volume in most cases. Mhmm. In some cases, it is.


But infrastructure-related vulnerabilities are usually very, very high volume. Right? So you don't wanna create a hundred and twenty five thousand tickets and assign them to somebody. That's not gonna work.


Right.


So then we had to figure out a way to consolidate those. We used to put it all in a central database, run some queries on it, selectively extract meaningful items, and put those in Jira. But the idea I'm trying to communicate here is that we have to put those things into one single place, whether it's Jira or ServiceNow or Zendesk, or even an open source project like DefectDojo, or a commercial product. Whatever it is, it doesn't matter, but you have to put it somewhere and then triage and prioritize it.
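The consolidation step Harshil describes, funneling scanner output into one place, de-duplicating it, and extracting only the meaningful items, might look roughly like the sketch below. The scanner names, record fields, and severity cutoff are all made up for illustration.

```python
# Sketch of consolidating findings from several scanners: keep one record
# per (asset, vulnerability) pair, then surface only the items worth
# turning into tickets.
def consolidate(findings, min_severity="high"):
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    seen = {}
    for f in findings:  # each f: dict with asset, vuln_id, severity, source
        key = (f["asset"], f["vuln_id"])
        # De-duplicate across scanners; the highest reported severity wins.
        if key not in seen or order[f["severity"]] > order[seen[key]["severity"]]:
            seen[key] = f
    return [f for f in seen.values() if order[f["severity"]] >= order[min_severity]]


raw = [
    {"asset": "web-app", "vuln_id": "CVE-2021-0001", "severity": "high", "source": "sast"},
    {"asset": "web-app", "vuln_id": "CVE-2021-0001", "severity": "high", "source": "sca"},
    {"asset": "vm-42",   "vuln_id": "CVE-2021-0002", "severity": "low",  "source": "cloud"},
]
print(len(consolidate(raw)))  # 1: duplicates merged, low-severity noise held back
```

The point is not this particular filter, but that triage happens once, in one place, before anything reaches a developer's ticket queue.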


So now you understand, okay, these particular assets are in scope for SOC 2 or FedRAMP or ISO, and I need to treat them with a little bit more care so that we are in compliance, versus, you know, something that's just an internal system, no external access, no customer access. You can treat that a little bit differently.


So you have to make those prioritization decisions and then figure out how to communicate them to the right people.


Right. That centralized view is so important, and so is being able to choose what to do next, because every organization has a limited number of people. They have a limited amount of money. They have limits on what they can actually accomplish. So you might have a long list of vulnerabilities, but they may or may not be something you wanna address right away; like you said, if it's a high-severity security issue or if it's related to compliance. But without having a centralized way of looking at it, I don't think anybody can really effectively get their arms around that.


Yeah. You're right. And at the end of the day, a vulnerability itself will have, like, a severity rating: critical, high, medium, low. It might have a CVSS value to it as well.


But just by itself, that's not very important. You have to look at the criticality of the underlying asset. Right? Is it a system, application, or server that's being accessed by external-facing untrusted traffic?


Mhmm. Or is it an internal system that two people in marketing use, and it's not even accessible from the Internet? Right? So it depends on what the underlying asset is, and vulnerability scanners will never give you that business risk view.


So you marry those two things together: what's the technical severity of it, and what's the business risk of the underlying asset. That's how you make prioritization decisions.


Exactly.


That's a very simple way of thinking about it. Right? Obviously, there are so many more sophisticated algorithms, but here's a quick and easy way of doing it.
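That quick-and-easy version might be sketched as multiplying the technical severity (for example, a CVSS base score) by an assumed asset-criticality weight. The weights and exposure categories below are invented for illustration; any real program would tune them.

```python
# Sketch of "technical severity x business risk" prioritization.
# Weights are illustrative, not from any standard.
ASSET_CRITICALITY = {
    "internet-facing": 3.0,  # exposed to untrusted external traffic
    "internal": 1.5,         # employees only
    "isolated": 0.5,         # no Internet exposure, very few users
}


def priority_score(cvss: float, asset_exposure: str) -> float:
    # Higher score means fix sooner.
    return cvss * ASSET_CRITICALITY[asset_exposure]


# A medium CVSS on an Internet-facing app can outrank a higher CVSS
# on an isolated internal tool.
print(priority_score(6.5, "internet-facing"))  # 19.5
print(priority_score(9.0, "isolated"))         # 4.5
```

This is the "marry the two together" idea in its smallest form: the scanner supplies the first factor, and only your own asset inventory can supply the second.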


Yeah. And in practice, it takes time. It takes thought, and it takes some real, usually senior-level, drive behind something that all-encompassing and kind of overarching to prioritize vulnerabilities against each other, especially in the way that you talked about. This episode is brought to you by our SecurityMetrics penetration testing team.


They do a lot of pen tests: network layer, application layer, segmentation checks. They're very, very knowledgeable, and some of them have even won competitions at DEF CON. So you can rely on these guys to know what they're doing.


Head over to www.securitymetrics.com/penetration-testing to learn more about pen testing.


So, an intriguing statement I found is that application security must be developer first.


Help me understand what developer first means and why that's important in in application security.


Right. So, the reason developer first is important, okay, let's take a step back. So, application security, or software security.


It's the security of the applications or the software. Right? Now, who writes those applications? Who builds those applications?


It's the developers. So guess what? As security people, we can find all the bugs we want, but we can't really do anything about them until and unless the developers actually agree to fix those things. Right?


Right.


So our mission as security professionals is not to just find problems. That's an important piece, but it's not the end of it. The end objective is to manage the risk associated with them. So how do you manage it? First, you have to identify the risk, and then you mitigate it or you treat it somehow.


And that's the core component that's outside of our control: how you treat that risk.


That can only be done by the developers who are building and writing those applications.


So in order for us to meet our end objective of risk mitigation on the software and applications, we have to get the developers on our side, or we have to work jointly together as a team. And for too long, security has been operating as a very different entity. Security people are typically a team that sits in one corner of the building, doesn't talk to anybody else, and we all think that we are super important. And we are. But at the same time, if you don't collaborate with the developers, you're not gonna be able to manage that risk.


Right. Absolutely. So that brings us again to, I think you've mentioned it a couple times, how do we make it easier? How do we make security easier or more front and center for the developers?


Yeah. So I was just thinking about that, and the analogy that I came up with is probably a really bad analogy. But let's imagine you're driving down a highway, and there's a speed limit of sixty-five, but your car doesn't have a speedometer.


So you have no way of knowing: are you driving within the speed limit, or are you driving at a hundred and twenty miles an hour? You have no way to tell. Mhmm. That's the current state of development today, where developers are building and pushing code, but they have no way to tell whether they are doing it securely or not.


We are pumping thousands and thousands of security vulnerabilities into some system somewhere. But that's not even associated with an individual developer's work. It's not associated with a particular team. It's like this global view of, here are fifty five thousand vulnerabilities.


Go do something about it. Right? That's not very actionable. So if you don't have that speedometer for developers to know how well they're doing, how do they stay within that speed limit?


How do they build secure code? So I think that's a key piece of information that we need to communicate to developers: what is the expectation?


As security professionals, we want you developers to write secure code, and that means here are the things you should do. You should use secure defaults. You should follow secrets management. You should use these tools for scanning.


You should fix these vulnerabilities within x number of days or whatever that is. Right? Mhmm. So we need to communicate those expectations and give them a way to actually do something about it.


That doesn't exist today. So if we are able to make them self-reliant, self-service, that's how we can make a big impact. We can help developers become better at it.


But they have to be aware of it first. They have to know the purpose of the company, how to balance that against existing work. Right?


Yeah. A hundred percent. I mean, just adding on to that same example: even if you have speedometers and speed limits posted, there are still people who get tickets for speeding.


You know? And I'm not saying security people should be like cops. That's not how we should be. Right.


And maybe in some company cultures, it's okay. But there have to be implications for violating security controls. Right? If there are no implications for security controls being violated, then nobody's going to do anything about it.


Because it doesn't matter to them. Right?


Yeah. It doesn't matter to them. So there are a lot of people who just want to do the right thing. There are a lot of developers intrinsically motivated to write good code, secure code, good quality code.


Not everyone is the same way. So both angles need to be covered, in my opinion. And that's where a lot of the reporting aspects come in, where the dev leadership can see what the metrics and the dashboards look like. Then they can be the owners of that decision making.


And, honestly, this is not different than anything else. Developers are used to doing this type of measurement and getting better at things from other angles, like quality, performance, scale, reliability. They do this all day long. Right?


Mhmm. It's just that security is not at the same level today, because we haven't been friendly enough with developers, or collaborated enough with developers, so far.


Earlier, like ten years ago, there used to be, and in some companies there still are, completely different teams that would do QA testing. Right?


Developers would write code, ship it to QA, QA would do all the testing, and they would ship bugs back. That's where we are in security today. But in terms of quality and QA, the world has moved on: in most modern companies, developers write their own code, the tests run automatically, they see the results in their CI/CD pipelines, and they go and fix issues. They can accept the risk in some cases, but they own that decision.


That doesn't happen in security. And a lot of times, developers don't understand a lot of security. So we have to bridge that gap. We have to make security kind of similar to quality, where it just happens naturally in their developer workflows.


And you were talking about motivating people to do good work with regards to security. I know for myself, I am very unmotivated by people being upset and angry consequences. You know, just yelling. When I was young, I had a riding instructor.


I did a lot of horseback riding when I was a kid. My jumping instructor happened to be German. And he would just yell at me in German. And I didn't understand a word he was saying, because I did not speak German.


Right? So he would be yelling at me, and I knew that I was doing the right thing when the yelling got less.


And that's a terrible way to learn anything. It makes me feel like sometimes we're yelling at developers about security, and they're going, why?


I cannot learn to ride from someone yelling in German. So, one of the ways that I think is a really good way to motivate people, especially in our industry, is gamifying things. Have you had any experience with the gamification of that?


Yeah. So we've seen a few good examples of that being done very well. A lot of times, people go crazy with this gamification and build very sophisticated programs, and that works. But it could also be something very simple. Back in the day, in my previous role, we used to build a very simple leaderboard of different teams.


Our target for gamification was not the individual developer, because a lot of times they don't care.


We wanted to get to the managers, the dev team managers. Right? Because managers want to be proud of their team. They wanna be proud of the work they're delivering, and they're uber competitive. So we built a very simple leaderboard of every single development manager, and we would bring in the MTTR metric: how long did it take your team to fix the bugs that you own?


Mhmm.


That's it. Very simple. And we stack ranked the teams, not the individuals, based on whoever was the best at fixing bugs in a timely manner.


And that actually drove a very friendly competition between quite a few of those managers, and that sparked this whole cycle of getting better and better at fixing things on time, remediating risks on time. So we've got a few examples of that working quite well.
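A minimal version of that leaderboard might look like the sketch below, assuming each fixed finding is tagged with the owning team and the number of days it took to remediate. The team names and day counts are made up.

```python
# Sketch of a team MTTR leaderboard: compute mean time to remediate
# per team and stack-rank the teams, best (lowest MTTR) first.
def leaderboard(fixed_findings):
    """fixed_findings: list of (team, days_to_fix) pairs."""
    by_team = {}
    for team, days in fixed_findings:
        by_team.setdefault(team, []).append(days)
    # Mean time to remediate, in days, per team.
    mttr = {team: sum(d) / len(d) for team, d in by_team.items()}
    # Lowest MTTR ranks first.
    return sorted(mttr.items(), key=lambda kv: kv[1])


data = [("payments", 5), ("payments", 9), ("search", 20), ("search", 2)]
print(leaderboard(data))  # [('payments', 7.0), ('search', 11.0)]
```

Ranking teams rather than individuals, as Harshil describes, keeps the competition at the manager level, where the pride and the uber-competitiveness actually live.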


Excellent.


Yeah. I think we've all seen that the things that we focus on, the things that we measure, are the things that we can affect.


So, really a great set of information and advice for people who are looking at increasing security in development. Before we close today, do you have any additional thoughts on working with developers to increase their understanding of security, or on creating that secure culture and atmosphere?


Yeah. I mean, I think the only thing I would add, in addition to all the topics we talked about, is that it's very important to get both the bottoms-up influence and motivation with the developers and the top-down leadership approval.


If you're missing one of those things, it's not going to really work in the longer term. You have to tackle it at both levels. It sounds intimidating, sounds difficult, sounds like a lot of work, but the results show up when you do both of those really well. If you have leadership buy-in, developers will spend the time.


Their sprints and their work will account for the time investments that need to be made in security. If you have bottoms-up support from the individuals, they will actually listen to you. They will respond to you. They'll hear you.


They'll understand you. Right. In a lot of cases, they'll even do better security than what you could do as a security professional. So it's really important for us as security professionals to go both bottoms up and top down to get that support.


I agree. Some of the best security professionals that I have worked with, the most knowledgeable, the most capable, are actually software engineers. That's what they do. And yet they have a desire to understand and implement good security. And when it all comes together in that way, the entire organization is better off, I think.


There you go. That's right.


Thank you so much for talking to me today, and I hope to have you back again in the future.


I love the conversation, Jen. Thank you.


Alright. Bye bye. Thanks for watching. To watch more episodes of SecurityMetrics podcast, click on the box on the left. If you prefer to listen to this podcast, it's available on all your favorite podcast platforms. See you on the slopes.
