
Gene Gotimer

Gene Gotimer is a DevSecOps Engineer who loves playing with new tools, focusing on agile processes, securing development practices, and automating everything. Gene feels strongly that repeatability, quality, and security are tightly intertwined; each depends on the other two, making agile and DevSecOps crucial to software development.

I am a DevSecOps Engineer. Most of my day is spent begging the development teams to update dependencies and images. Alright, that’s a bit of a stretch. I’m not begging; I'm just trying to prioritize it for them. And it isn’t most of my day, but it is an everyday, ordinary, recurring theme.

So why do I keep pushing for it? Why is it important, even for services and software that aren’t otherwise being actively developed? Most development teams would prefer to build new features rather than update old components, so why do I pick the fight?

It’s the security part of DevSecOps. I want to make sure the software we release is unlikely to be hacked. I could spend a lot of time reviewing every bit of code our developers write, ensuring they have been diligent and left no security holes. But most of the code we deploy isn’t code we wrote; it is the third-party frameworks and libraries we use, which make up anywhere from 40% to 90% of our application. And I want to make sure that a security hole in one of them isn’t the way hackers get into our network.

The lifecycles of those libraries, frameworks, and other dependencies are different from ours. They update at a different frequency, on different days, sometimes releasing when it is really inconvenient for us. Trying to sync up our releases with theirs isn’t reasonable. So, my recommendation is to update constantly.

If we are proactive and update all those dependencies as soon as possible, we will be ready to push out a new version of our code without stopping to find out which updates are available. Our release isn’t delayed while we test the latest combination of updates and our latest changes, ensuring that everything still works together and that we don’t need to rewrite code and interfaces at the last minute. Functionality that changed in the frameworks gets rolled into our application without the pressure of a looming deadline. Nothing is an emergency. We can make deploying our next version a business decision, not one held up by technical, behind-the-scenes tasks. And that is one of the goals of continuous delivery and DevSecOps.

Sounds like a lot of work, doesn’t it? Yes, but we can make those deployments easier. Since we’ll be doing them all the time, we can and should invest in building a repeatable, reliable pipeline. That means automation because manual processes are neither repeatable nor reliable over time. Once we have a smooth, automated process for deploying our software, we don’t have any reason to reserve it for special occasions. We can stop wondering whether we should deploy; the question becomes “why not?” Since it is automated, we’ve reduced the cost and effort of deploying on a whim. Exercising the automated pipeline reduces risk through practice and experience. More frequent deployments become an opportunity, not a burden.

But what if we aren’t adding new functionality? If we aren’t rebuilding the app anyway, and we don’t have another version planned for release, possibly ever, what’s the point of updating then? Outdated dependencies can’t delay a deployment that will never happen anyway, right?

There are two problems with this thinking. First, having no planned releases doesn’t mean no releases are going to happen. In fact, this is a great example of why we want to be proactive. We still want to be ready for the business to change its mind and decide we need a new release. Part of DevSecOps is putting ourselves in a position to support our business needs and desires. Second, those pesky libraries and frameworks are always changing, and some of those changes are crucial because they fix bugs and security holes. We can’t sacrifice an application to hackers as an entry point into our network just because it isn’t actively being developed.

That’s okay. We scanned it a few weeks ago, so we know it is secure. Again, this is dangerous thinking. We didn’t know it was secure; we just didn’t find out it was insecure. That’s a big difference. New vulnerabilities are discovered in components all the time. Fortunately, the maintainers of those components usually learn about a vulnerability soon after it is discovered and, in response, mitigate the issue and push out a new version. That doesn’t help us unless we build that new version into our application.

Even if this new version of a library isn’t intended to close a security hole, that doesn’t mean we can skip it. Newly discovered vulnerabilities aren’t always in the latest version. Older versions are often found to be vulnerable to new types of attacks or problems, and some of those problems only become exploitable when combined with older versions of other software. Furthermore, new versions include bug fixes. Even if those aren’t labeled as security fixes, availability is a key factor in security. If our application is constantly down, our end users won’t care whether it is due to a distributed denial-of-service (DDoS) attack, a security vulnerability, or just buggy code. All they know is that they can’t use our software when they want to.

But we know that version of our application works, so why change it? In fact, isn’t it safer to not make any changes? It seems to make logical sense, so let’s break down why we might feel that way. First, if the deployment is risky, we need that process fixed. We practice deployments all the time, as we already discussed. We want to make it so easy to deploy a new version that there aren’t any reasons not to. So, we should keep putting effort into improving the process until we feel confident it works. Second, if we feel changing the application code is risky, that is a problem in and of itself. Our pipeline should have enough unit and functional testing that we have confidence in what we are releasing. If our developers feel the code isn’t ready for prime time, they might be right. But we can reassure them with a safety net of tests that describe, document, and demonstrate the behavior of the code. This is even more reason to build and mature our pipeline. 

Ultimately, we want releasing software to be a low-stress, low-risk non-event. We need our development and deployment pipeline to support that. One of the best DevSecOps feelings is confidently pushing out a release without expecting it to fail or, more optimistically, with full faith that it will be successful and uneventful.

This update doesn’t have any security fixes. Can we just skip it? Don’t fall into this trap. Always be updating, right? Let’s say we are using version 3.2.1 of a library. Version 3.3.0 comes out, but it doesn’t affect anything we use. So, we skip it. Then, 4.0.0 comes out, likewise with nothing that forces us to upgrade. We don’t need any of the new features and don’t want to deal with the interface and behavior changes that will mean code changes in our application. By the time 4.1.0 comes out, we’ve saved tons of time by skipping all those useless updates, right?

Except version 4.1.0 closes a security hole that has existed since version 3.0.0. It is actively being exploited in the wild. We need to get to version 4.1.0 right now, and that means dealing with all those intermediate changes. It’s an emergency that can’t wait and will certainly affect the work already in progress. There are multiple minor releases that shouldn’t (but might) change behaviors and a major update that certainly will and might force interface changes as well. Plus, a bigger set of changes means we should do more testing. Instead of low-stress and proactive, this release will be disruptive, tiring, and reactive. 

If we can make these upgrades part of our normal development cadence, then we aren’t forced into doing a bunch of version updates on the same component when a critical finding, or a new feature we want to use, forces it upon us. Frequent small investments of time keep us from having to drop everything for an unplanned, time-consuming effort.

I’m convinced: I should always be updating. But how often is always? In a perfect world, we’d update every dependency as soon as a new version became available. But all of this advice exists precisely because we don’t live in a perfect world.

Most package managers have a capability or plugin to show us newer versions of any dependencies we use. Whether it is npm outdated, mvn versions:display-dependency-updates, or cargo outdated, finding out about new dependency versions is usually straightforward.
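For instance, in an npm-based project, that check might look something like this (the packages and versions shown here are purely illustrative, not from any real project):

    $ npm outdated
    Package  Current   Wanted   Latest   Location
    express  4.17.1    4.17.3   4.18.2   node_modules/express
    lodash   4.17.20   4.17.21  4.17.21  node_modules/lodash

Here, Wanted is the newest version that satisfies the version range we declared, while Latest is the newest version published, and the one we ultimately want to be running.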

However, we still need to prioritize and schedule updates within our development process. We’ll never get anything done if we are constantly interrupted with dependencies to update. We are trying to be proactive, not overwhelmed. 

Software composition analysis (SCA) tools look at the components our application relies on and match those versions with known vulnerabilities. They not only show us components with known vulnerabilities but also explain how severe those vulnerabilities are, allowing us to make informed decisions about what to upgrade and when. 
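Many ecosystems have a basic form of this built in. For example, npm’s audit command checks our dependency tree against a public advisory database; the output below is abbreviated and illustrative:

    $ npm audit
    # npm audit report

    lodash  <4.17.21
    Severity: high
    Command Injection in lodash
    fix available via `npm audit fix`

    1 high severity vulnerability

Dedicated SCA tools go further, reporting severity scores, exploitability details, and suggested upgrade paths that we can feed directly into our prioritization.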

I recommend treating critical vulnerabilities as close to a “drop-everything” situation as we reasonably can. Critical vulnerabilities have a Common Vulnerability Scoring System (CVSS) score of 9.0 or above on a 10-point scale. Even if we don’t literally drop everything, we at least don’t want to wait until our next iteration or sprint to plan the fix. Get it done with haste.

High vulnerabilities (CVSS scores from 7.0 through 8.9) should be at the top of the “plan-the-work-immediately” priority list, but the work itself can usually be scheduled into planned work so it isn’t too disruptive.

Keep in mind that it is worth reading through the descriptions of the vulnerabilities to determine whether we are susceptible. A vulnerability scored as critical might only be high or medium based on how we configure or use the component. If we aren’t at immediate risk, it will still need to be fixed, but this might push the priority down.

Upgrades that aren’t fixing security vulnerabilities should be coordinated with other work to manage disruption. Major updates will need more care and effort than minor updates. Patch updates are often trivial. 
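Putting those guidelines together, the sketch below restates this triage policy as code. The severity bands follow the standard CVSS v3 scale (critical is 9.0 and above, high is 7.0 through 8.9); the suggested actions are just this chapter’s recommendations, so treat the function as an illustration rather than a drop-in tool:

    # A sketch of the triage policy described above, not a real tool.
    def triage(cvss_score, exploitable_as_we_use_it=True):
        """Suggest how urgently to schedule a dependency update."""
        if cvss_score >= 9.0 and exploitable_as_we_use_it:
            # Critical: as close to drop-everything as we can manage.
            return "update now; don't wait for the next sprint"
        if cvss_score >= 7.0 and exploitable_as_we_use_it:
            # High: top of the plan-the-work-immediately list.
            return "plan the work immediately"
        if cvss_score > 0:
            # Lower severity, or not exploitable the way we use the
            # component: still fix it, just with less urgency.
            return "schedule with upcoming planned work"
        # No known vulnerability: a routine upgrade, coordinated
        # with other work to manage disruption.
        return "fold into the normal development cadence"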

And always be updating to avoid getting caught unprepared.