Most developers will tell you that coding is one part science and one part art. As a recovering developer, I couldn't agree more. I've taken this philosophy and applied it to my journey as a security engineer.
In this article, I'm going to detail 5 things I've done that improved security but aren't typically part of a security engineer's job responsibilities. Some of these center on writing code, but not all of them! Even if you're not a developer, I hope these stories inspire you to think creatively about your own security practices.
We were using GitHub Enterprise but were told we couldn't use GitHub Actions by our CTO. Instead, he wanted us to use Jenkins for all automation and builds. This limitation meant we couldn't enforce required actions at the organization level, and would have to manually add security steps to each Jenkins job. That's a daunting process, especially when you have a lot of teams and projects to manage (literally thousands of repositories).
I developed a GitHub app that acted as an orchestration layer, triggering Jenkins jobs based on GitHub events. I was able to integrate our SAST, SCA, and secret detection tools. Using Jenkins lowered the barrier to entry for contributions: if you could configure a Jenkins job, you could contribute to the project, no traditional programming required!
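The core of that orchestration layer is small: verify GitHub's webhook signature, then map the event type to a Jenkins job. Here's a stripped-down sketch in Python; the event-to-job mapping and job names are invented for illustration, and the real app handled far more event types and failure cases.

```python
import hashlib
import hmac
from typing import Optional

# Illustrative mapping from GitHub webhook event types to Jenkins jobs.
# A real deployment would load this per repository from configuration.
EVENT_JOB_MAP = {
    "push": "sast-scan",
    "pull_request": "secret-scan",
}


def verify_signature(payload: bytes, webhook_secret: str, signature_header: str) -> bool:
    """Check the X-Hub-Signature-256 header GitHub sends with every delivery."""
    expected = "sha256=" + hmac.new(
        webhook_secret.encode(), payload, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison to avoid leaking the expected digest.
    return hmac.compare_digest(expected, signature_header)


def jenkins_build_url(jenkins_base: str, event: str) -> Optional[str]:
    """Return the Jenkins remote-API URL to trigger for this event, or None."""
    job = EVENT_JOB_MAP.get(event)
    if job is None:
        return None
    return f"{jenkins_base}/job/{job}/buildWithParameters"
```

`buildWithParameters` is Jenkins' standard remote-trigger endpoint; in practice the app POSTed there with an API token, passing the repository and commit SHA as build parameters so each scan knew what to check out.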
It definitely wasn't perfect, but it was reasonably effective. Given the option, I would have loved to use an off-the-shelf solution like GitHub Actions, or tools with GitHub integrations built in. Unfortunately, the budget wasn't there, so we were limited to what we could do with open source and custom solutions. In the end, I learned a lot about GitHub Enterprise and got some really interesting insight into the kinds of things being developed across the company.
When I'm wearing my attacker hat, I'm always looking for ways to get my hands on developer secrets. If I get access to a private repository, the first thing I'm doing is scanning for secrets. Unfortunately, I'm ALSO guilty of accidentally checking in secrets to repositories at least a handful of times, even though I very clearly understand the impact.
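That first-pass scan can be sketched with a few regexes. The rules below are purely illustrative; dedicated scanners like gitleaks or TruffleHog ship hundreds of patterns plus entropy checks on top.

```python
import re

# Illustrative detection rules -- real tools have far broader coverage.
SECRET_PATTERNS = {
    # AWS access key IDs have a fixed, highly recognizable prefix.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # A crude catch-all for quoted values assigned to key-like names.
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}


def scan_text(text: str):
    """Return (rule_name, matched_text) pairs for every suspected secret."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Run over a repo's files (and, ideally, its full git history, since a deleted secret is still retrievable from old commits), this is enough to surface the low-hanging fruit an attacker would grab first.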
One of the most challenging aspects of this problem was that I needed to change the mindset of some of my coworkers. Their belief was that the secrets were only "for development" and that exposing them in a private repo wasn't a security risk. Given that some of those secrets granted access to cloud resources, I tended to disagree.
Taking a step back and analyzing the root cause of the problem, I realized that developers didn't want to spend time managing secrets because it was simply too much work. If I gave them an easy-to-use tool, they might be more willing to use it. It would be a win-win: secrets would stay out of source control, and developers wouldn't have to listen to me complain about them!
I wrote some code that leveraged Azure Key Vault to manage the development secrets for our testing libraries. Something off-the-shelf like HashiCorp Vault would have been a great solution, but we didn't have the resources to deploy and configure it at the time. Instead, this solution integrated seamlessly and used SSO to authenticate with Azure Key Vault, retrieving secrets just in time.
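The heart of the tool was a tiny just-in-time lookup: fetch a secret from Key Vault the first time a test needs it, keep it in memory for the rest of the run, and never write it to disk. A simplified sketch, with the Azure call abstracted behind a `fetch` callable (the class name and TTL are illustrative; in our case `fetch` wrapped `azure.keyvault.secrets.SecretClient.get_secret()`, authenticated via the developer's SSO session with `DefaultAzureCredential`):

```python
import time
from typing import Callable, Dict, Tuple


class JITSecretCache:
    """Fetch secrets on first use and cache them briefly in memory,
    so they never live in source control or local config files."""

    def __init__(self, fetch: Callable[[str], str], ttl_seconds: float = 300.0):
        self._fetch = fetch          # backend call, e.g. a Key Vault client
        self._ttl = ttl_seconds      # how long a cached value stays fresh
        self._cache: Dict[str, Tuple[str, float]] = {}

    def get(self, name: str) -> str:
        entry = self._cache.get(name)
        if entry is not None and time.monotonic() - entry[1] < self._ttl:
            return entry[0]          # cache hit: no round trip to the vault
        value = self._fetch(name)    # cache miss: fetch just in time
        self._cache[name] = (value, time.monotonic())
        return value
```

Because the fetcher is pluggable, tests can swap in a fake while real runs hit Key Vault, and the calling code never changes.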
There were a couple of hidden wins here as well:
Did it solve the problem of secrets in source control in its entirety? No, not even close, but I'm still counting it as a win. Improving security isn't about getting to 100% immediately, it's about incremental improvements.
In the early stages of a production incident, there's a lot of uncertainty. It's very difficult to immediately pinpoint a root cause, and there are hypotheses that don't pan out. If the team is concerned that there's a security implication, they will reach out to the security team for their input.
In a lot of security orgs, the security team will investigate and analyze, then leave once it's clear there's no security impact. There's nothing wrong with that; the security team may not be as well equipped to handle the situation. But if you have experience with the product, the infrastructure, site reliability, or any other skills that might help the response team, I'd consider sticking around if they don't mind having you there!
While unpleasant, stressful experiences are typically bonding experiences. If I'm brought into an incident and I don't have a competing / urgent issue, I will stay and see the incident through to completion with the response team. I've found this to be one of the best ways of building trust. Trust is one of the most important tools at a security engineer's disposal. If you have earned trust, a team is likely to listen when you raise the alarm over a serious security issue.
Ok, you caught me. This one isn't weird at all, at least for a security engineer. I still think it's worth mentioning though, because it was an opportunity to shift the perspective of security reviews.
There was an architecture guild board at my previous job, and I was honored to be invited. My role was to ensure that we followed security best practices and make security recommendations for new technology or products being implemented. It was a great way to get to know the engineering leaders, and I was able to become more involved in the development process.
The perception of security changed significantly. Instead of "Ugh, we need to do a security review," the conversation became collaborative. While developers worked on a proof of concept for a new technology, I would investigate security best practices for that tech in parallel. We'd meet offline, and the security review became an ongoing, collaborative process. I made myself available to the teams, and we worked together to see things through. It was a great opportunity to learn more about the business and our core products.
Knowing your products helps you identify where the skeletons are.
Nobody wants to use a tool that's difficult, buggy, laggy, or just plain ugly. If you wind up on a website that looks like it was built in 1999, you're probably not going to stay there very long.
Like it or not, some people feel that way about security tools, not just dated websites. That's why it's so critical that security tools are easy to use and intuitive, and I've found the best way to get there is to prioritize the tool's user interface and usability.
A lot of modern tools are web-based. I am an absolutely awful designer, despite my experience as a frontend developer. For me, it was faster, easier and better to outsource the design step. I just purchase a theme in my framework of choice through a site like ThemeForest, and then get to work updating the content and implementing the features for my tool.
I have seen adoption metrics go up after simple UI/UX improvements to my own internal tools, and I'm sure that's not a coincidence. Spending the time to build a great experience for your users is well worth the effort.
These initiatives might not be the first things that come to mind when you think of application security, but they've been crucial in my growth as a security engineer. By thinking creatively and engaging deeply with teams and technology, I've been able to implement effective and sometimes unconventional security solutions. Remember, even in security, sometimes the unconventional path is the one that leads to the best solutions. Innovation doesn't happen by following a set of rules: it happens by being willing to take risks and try new things.
Ready to enhance your app's security? AppSec Assistant delivers AI-powered security recommendations within Jira.