The 2023 conference season officially kicked off on February 1st in Seattle, where over 1,000 attendees, speakers, and security tool vendors gathered for CloudNativeSecurityCon, the first stand-alone, in-person event of its kind. Over the course of two days and more than 70 presentations, the cloud-native security community shared its knowledge about the state of open source security. Along the way, we had some great times and conversations about SBOMs, SLSA, SCA, and the many security challenges we all face.
With so much packed into the two days of the event, it would be impossible to cover it all, so here are just a few highlights.
An Event Born From The Community
Starting with the first keynote, we learned CloudNativeSecurityCon came about from community conversations at the Cloud Native Computing Foundation's flagship event, KubeCon + CloudNativeCon, back in 2019. Event co-lead Emily Fox from Apple told the backstory of how a small group of people had a conversation during an open session and went on to start a security meetup, leading eventually to this stand-alone event. It very much reminded me of the famous quote from Margaret Mead: "Never doubt that a small group of thoughtful, committed citizens can change the world: indeed, it’s the only thing that ever has."
Priyanka Sharma, Executive Director of the Cloud Native Computing Foundation (CNCF), introduced the whole event and laid out the importance of this gathering. She explained that over the next year, approximately $188.3 billion is projected to be spent on security. It will take more than top-down solutions alone to solve the issues the industry faces. Instead, change needs to be driven by developers as the whole world continues to shift left. At the same time, 77% of organizations CNCF surveyed said 'poor training' and 'lack of collaboration' were major challenges they needed to tackle. The future can only be secure if we all work together, see everyone as being on the same team, and focus on better education. These tenets are central to the mission of the event.
The Threat Horizon
One theme that came up in multiple sessions, as well as in hallway conversations, was new threats and how we can best prepare ourselves for what is emerging.
In his keynote "Fighting The Next War - Future Threats to OSS and Software Supply Chain Security," Brian Behlendorf, Managing Director of the Open Source Security Foundation, explained a bit about how we got here, starting from the early days of the internet when encryption was not encouraged. As shocking as it might be to someone just starting out in the security field, early on, cryptography was seen as the realm of the military and governments; application builders were discouraged from using encryption. Fortunately, that has changed, but the legacy is that there are a lot of exploits still lingering out there that should have been fixed in the early days of the internet but just weren't.
Brian said we need a new set of assumptions to better prepare us for the future. These include:
- There is no such thing as true zero-trust architecture. There is always a bias built in.
- Attacks that are considered hard will get easier over time.
- If there is an issue where you ever think 'no one would do that,' someone will eventually do it and then automate it for anyone to exploit.
- Things fall apart given enough time, especially software. All bugs are eventually discoverable and exploitable.
Brian finished his keynote with a quick round-up of new attack surfaces. He thinks we will see a new level of sophistication in automated spear phishing attacks, making it even more vital to find new ways to enable multi-factor authentication. MFA itself is also an area that needs constant vigilance, as more MFA authorization schemes fall victim to corrupted reset processes. He also predicts that we will see corrupted AI models that are trained to intentionally introduce defects, meaning the more we rely on things like ChatGPT to write code, the more security vulnerabilities we are going to see over time. He further predicts we will see artificial intelligence being abused to do things like jam up pull request queues around zero-day fixes, making it harder to find and apply the correct patches.
Fortunately, there are groups of dedicated volunteers working on solutions to these issues. Brian would love for you to join them and lend a hand where you can. You can find out more by visiting the OpenSSF website.
Zack Butcher, the founding engineer at Tetrate, kept the conversation on emerging threats going with his talk "From Google to NIST — The Future of Cloud Native Security." He asked us to consider one of the biggest concerns when pondering zero-trust: "what if the attacker is coming from inside the network?" While there is no single answer, Zack laid out five big checklist items used at Google to address this concern:
- Encryption in Transit
- Service Authentication
- Service-to-service Authorization
- End-user Authentication
- End user-to-resource Authorization
If you feel you have a good answer for each of those points at every hop, then you are in a good position. He stressed that this needs to be addressed at every step throughout your pipelines, not just at the front door.
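Those five checks can be thought of as a per-hop policy gate. Here is a minimal, purely illustrative Python sketch of auditing every hop in a request path against them; the data model and function names are my own assumptions, not anything from the talk or from Google's tooling:

```python
# Hypothetical sketch: audit each service-to-service hop against the five
# zero-trust checks Zack listed. The control names mirror the talk; the data
# model and policy functions are illustrative assumptions only.

REQUIRED_CONTROLS = [
    "encryption_in_transit",       # e.g., mutual TLS on the connection
    "service_authentication",      # the calling service proves its identity
    "service_to_service_authz",    # the callee authorizes that service
    "end_user_authentication",     # the original end user is authenticated
    "end_user_to_resource_authz",  # that user is authorized for the resource
]

def audit_hop(hop: dict) -> list[str]:
    """Return the controls missing from a single hop in the request path."""
    return [c for c in REQUIRED_CONTROLS if not hop.get(c, False)]

def audit_pipeline(hops: list[dict]) -> dict[str, list[str]]:
    """Check every hop, not just the front door, as the talk stresses."""
    return {hop["name"]: missing
            for hop in hops
            if (missing := audit_hop(hop))}

# Example: the gateway hop is fully covered, but an internal hop
# skips the two end-user checks.
hops = [
    {"name": "gateway->frontend", **{c: True for c in REQUIRED_CONTROLS}},
    {"name": "frontend->orders", "encryption_in_transit": True,
     "service_authentication": True, "service_to_service_authz": True},
]
print(audit_pipeline(hops))
# → {'frontend->orders': ['end_user_authentication', 'end_user_to_resource_authz']}
```

The point of modeling it this way is Zack's: the audit runs over every hop in the path, so an internal service that skips end-user checks surfaces just as loudly as an unprotected front door.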
Panel Of Opinions And Hot Takes
The session that I heard referenced or quoted the most in the hallways was the panel discussion "Cloud Native Security Landscape: Myths, Dragons, and Real Talk," led by Edd Wilder from Sysdig, featuring Sysdig CTO and founder Loris Degioanni; Kim Lewandowski, the founder of Chainguard; Isaac Hepworth, Group Product Manager at Google; and Randall Degges, Head of Developer Relations & Community at Snyk. Answers printed here are summaries only. I highly encourage you to watch the recording.
The first question up: "What is the single most important thing in security that people should be paying attention to?"
Isaac: There is a universal feeling that open-source dependencies introduce some level of risk into the software supply chain, but no one has a good grasp on this or what to do about it.
Randall: The fact is devs don't really care all that much about security, especially if it gets to be seen as 'too much.' We need to work to automate everything, taking the pressure off of developers.
Loris: It is just as important to understand the full software lifecycle as it is to understand dependencies, which is where a lot of people are focused right now. A core reason people are overwhelmed is the focus on accuracy and precision. We must make this easier to consume and make it less stressful.
Kim: Supply chain security is the most pressing issue, and we need to adopt a 3 stage journey to fixing it: "Know, Fix, Force." Know what is running, where it runs, and what is in it. Fix all the things that make it insecure. Force, rather enforce, compliance across the organization. If parts of your infrastructure or org are insecure, then everything is insecure.
Next, the topic turned to Developer roles versus Security team roles.
Kim: Ideally, we are all on the same team, but the reality at many companies is the 'security team' versus the 'dev team.' Devs want to keep obstacles out of the way while they do their jobs.
Randall: The three keys to getting a dev to fix stuff are "education, gamifying the experience, and seamless tooling." YouTube and blogs are good for education, but what is the security team doing to better curate what is out there for the dev team? If we can make the experience of fixing security issues fun by using competitions and prizes, then we should. And lastly, if a tool can't be seamlessly integrated, then it is not likely going to be used. We should prioritize extending the tools that devs have already adopted, making them super powerful.
Loris: Instead of focusing on tools, we need to focus on building a common language to get agreement on solving security problems. A common language will lead to visibility and enable communication that is not possible at the moment. Agreement leads to empowered people.
Isaac: SBOMs are mandatory, and this has caused a lot of teams to scramble. Come June 15th, the compliance date associated with the executive order, we will have over 250,000 SBOMs on file, but what do we do with those?
Instead of just being overwhelmed by security metadata from SBOMs, we can see this as the beginning of the 'bus for dependency solving.' This is a unique time for us to possibly build a new metadata transport layer of the supply chain, though the needed tools for this transformation have yet to emerge. While not a silver bullet, SBOMs can act as a basis for finding a better path forward for security.
The 'S' in 'SBOM' does not stand for Silver bullet.
Randall closed out the panel discussion with a call to action for anyone who can submit a pull request: We should be focused on solving known CVEs rather than fixating on less widespread yet overhyped issues like typosquatting. If we all committed to fixing one small security bug each, we would have an overall more secure ecosystem.
SBOMs Are On Everyone's Mind
We already broached the topic of software bill of materials in the previous section; SBOMs were a major topic throughout the event.
In his keynote "Panic in San Francisco: The Critical Vulnerability That Wasn't," Shane Lawrence from Shopify laid out how SBOMs were used in a real-world situation to save the internet. In August 2021, a new version of OpenSSL introduced a buffer overflow bug that could trigger a denial of service or, worse, allow remote code execution. This became public knowledge on Halloween 2022. When the fix was announced, the next challenge was to understand where this version of OpenSSL was installed, and that is where SBOMs saved the day.
Shane reminded us all that SBOMs are not new; they have been around for years, originally used for license tracking. Teams were able to leverage available SBOMs to track where the vulnerable version could be lurking in their stacks and get things patched quickly and effectively.
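This "where is the vulnerable version installed?" query is exactly the kind of question an SBOM makes mechanical. As a rough sketch, here is how you might grep a CycloneDX-style SBOM (JSON) for components in an affected version range; the inventory contents and version range below are invented for illustration, not the real OpenSSL advisory data:

```python
# Illustrative sketch: query a CycloneDX-style SBOM for a component whose
# version falls in an affected range. The "components" layout follows the
# CycloneDX JSON shape; the sample inventory and range are assumptions.
import json

AFFECTED_RANGE = ("3.0.0", "3.0.6")  # hypothetical inclusive affected range

def version_tuple(v: str) -> tuple[int, ...]:
    """Parse a simple dotted numeric version like '3.0.5'."""
    return tuple(int(part) for part in v.split("."))

def find_vulnerable(sbom: dict, name: str, lo: str, hi: str) -> list[str]:
    """Return versions of `name` in the SBOM that fall within [lo, hi]."""
    hits = []
    for comp in sbom.get("components", []):
        if comp.get("name") == name:
            v = comp.get("version", "")
            if version_tuple(lo) <= version_tuple(v) <= version_tuple(hi):
                hits.append(v)
    return hits

sbom = json.loads("""{
  "bomFormat": "CycloneDX",
  "components": [
    {"type": "library", "name": "openssl", "version": "3.0.5"},
    {"type": "library", "name": "zlib",    "version": "1.2.13"},
    {"type": "library", "name": "openssl", "version": "1.1.1"}
  ]
}""")
print(find_vulnerable(sbom, "openssl", *AFFECTED_RANGE))  # → ['3.0.5']
```

Run this across every SBOM on file and you have the patch list; without the SBOMs, the same question means auditing hosts one by one.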
When you run one of the most visited websites on earth, you gain a lot of experience keeping things secure. In their session, "How to Secure Your Supply Chain at Scale," Hemil Kadakia & Yonghe Zhao shared some of the lessons they have learned firsthand in keeping Yahoo safe. Like almost all websites, Yahoo runs on open source, which makes up between 85% and 97% of their stack. Unfortunately, attackers are well aware of this and have ramped up the number of attacks against the supply chain by 742% over the last two years alone.
In order to keep their software supply chain secure, they run three checks on every component:
- Image provenance check - can they attest that it contains what they think it contains?
- Image signature check - can they trust the signing authority?
- Image freshness check - how old is the component, and is a newer patched version available?
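To make the shape of those three checks concrete, here is a minimal Python sketch of them as a single policy gate. The provenance and signature checks are stubbed as lookups against trusted inventories (a real deployment would use attestation tooling such as Grafeas, as Yahoo does), and every name, digest, and threshold here is an illustrative assumption:

```python
# Rough sketch of the three image checks as one policy gate. The trusted
# inventories and the 90-day freshness threshold are assumptions for
# illustration; real checks would verify cryptographic attestations.
from datetime import datetime, timedelta, timezone

TRUSTED_SIGNERS = {"release-team@example.com"}   # hypothetical signing authority
KNOWN_PROVENANCE = {"registry.example.com/app@sha256:abc123"}
MAX_AGE = timedelta(days=90)                      # assumed freshness policy

def check_image(image: dict, now: datetime) -> dict[str, bool]:
    """Run the provenance, signature, and freshness checks on one image."""
    return {
        "provenance": image["digest"] in KNOWN_PROVENANCE,
        "signature": image["signer"] in TRUSTED_SIGNERS,
        "freshness": now - image["built_at"] <= MAX_AGE,
    }

image = {
    "digest": "registry.example.com/app@sha256:abc123",
    "signer": "release-team@example.com",
    "built_at": datetime(2023, 1, 10, tzinfo=timezone.utc),
}
now = datetime(2023, 2, 1, tzinfo=timezone.utc)
print(check_image(image, now))
# → {'provenance': True, 'signature': True, 'freshness': True}
```

A gate like this only pays off when it runs automatically on every component, which is exactly the friction-reducing, default-behavior point the speakers went on to make.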
Relying on these three simple checks and leveraging Grafeas to audit their supply chain, they are able to set policies throughout Yahoo that keep everyone safe. They also shared some of the takeaways that helped this policy approach gain wider adoption, including the importance of making the process automatic and default behavior; the more they could reduce friction, the happier everyone was. This also helped reduce the time to deployment, as security was baked early into their process.
They also shared that pre-planning for adoption and enforcement was critical. They took the time to onboard everyone and explained the importance of provenance, the software build chain of custody, to the overall security of their software. They also rolled out health charts to visualize overall adoption, helping motivate teams while clearly communicating the status of the project overall.
From a more hands-on perspective, there was also a session from folks who are actively working to leverage SBOMs to keep their secure environments secure at a company where that is especially important, Lockheed Martin. Software Factory Engineers Ian Dunbar-Hall and Jerod Heck gave us an in-depth overview of how Lockheed Martin builds internal and open source tooling. For them, it starts by thinking about SBOMs as 'packaging definitions' and as a way to move those definitions between secure environments.
Some of the tools they have contributed to in order to help in this area include Hoppr, a tool to "collect, process, & bundle your software supply chain," and Renovate, which detects updates to packages, container images, and other projects in GitLab. Their overarching goal is to deliver a single, reusable process that can scale between any and all teams. Ideally, this lets them maintain one data flow and one central security team across the entire business.
While walking us through their workflows, they did admit there are still hurdles to overcome, the largest being incomplete SBOMs. There is an ongoing push to standardize tooling, but a lot of legacy code and legacy systems remain in use. They are also working to roll out unified reporting and improve component validation. If you want to further explore their approach to pipeline delivery using Hoppr and Renovate, among other tools, they have released a demo available on GitLab.
Trust And The Supply Chain
In his keynote "Back to the Future: Next-Generation Cloud Native Security," Matt Jarvis, Director of Developer Relations at Snyk, cited trust as the cornerstone upon which all our security relies. Specific to our supply chains, it comes down to signing to build chains of trust. We must have zero-trust and always verify, but at what point down the chain do we have to just trust our tools? He raised the issue of trusting our operating systems, the operating systems used to build those operating systems, and so on down the development tool stack. At what point do we lose the ability to audit and verify? Unfortunately, there is no clear answer to this dilemma, but it is an important point to recognize as we continue on the journey toward true zero-trust.
Trust was also central to the keynote from RedHat's Director of Product Security Emmy Eide "Trust and Risk in the Software Supply Chain." She shared research that showed that six out of every seven project vulnerabilities come from transitive dependencies, meaning they are inherited from the components used to build the software and not from new lines of code. In her opinion, there is one clear way forward: partnership. We are all part of the same ecosystem, and it is critical that we come together to solve common problems and CVEs to keep us all secure. The alternative is, unfortunately, what we all too commonly see now, people going for the path of least resistance, leaving vulnerabilities unsolved and kicked down the road for someone else to deal with.
SLSA and GUAC
One subset of the supply chain conversation centered on the framework Supply chain Levels for Software Artifacts, SLSA, pronounced 'salsa.' Going hand-in-hand with SLSA is Graph for Understanding Artifact Composition, GUAC. These two standardized approaches came up in multiple sessions, including the keynote "The Next Steps in Software Supply Chain Security" from Google's Brandon Lum, who was also the co-chair for the event.
Brandon explained there are a lot of current and emerging standards for supply chain security, including FRSCA, Sigstore, CycloneDX, and Scorecards, just to name a few. The issue right now is not a lack of solutions but that everyone is overwhelmed with solutions. While it is almost trivial to produce a 300MB SBOM at this point, there is not yet a clear answer to how we make sense of that much metadata.
Brandon suggests a path forward that starts with verifying the trustworthiness of solutions and sees SLSA as a good starting place. Once we know what level of trust we have for our software composition and signatures, we can start to leverage visualization tools like GUAC, which will help us digest the findings and prioritize our reactions. Combine this approach with package managers like PyPI and npm, and we have a chance at industry-wide standards of trust being established.
He also laid out how TAG-security classifies solutions as reactive, preventative, and proactive. If something is reactive, it answers the questions, "how am I affected?" and "how do I remediate the issue?" Preventative solutions mean asking, "have I taken the right safeguards? Are there sufficient security checks and approvals when I am choosing software?" Proactive solutions are concerned with preventing large-scale supply chain compromises.
If you are interested in getting more involved, Brandon left us with a call to action to dive deeper and even linked to a TAG-security issue where you can comment and start your journey.
Another session that explained the need for better software supply chain security and a path forward was "Spicing up Container Image Security with SLSA & GUAC," from Ian Lewis, Developer Advocate at Google. He explained that one of the biggest issues in security right now is that development and CI environments are not being treated with the same level of scrutiny as production. Dev and CI are approached with a 'YOLO mentality,' and many people think that only production systems need to be hardened. Malicious actors also know this, and this is one reason why we see more and more attacks along the supply chain year after year.
As with other talks throughout the event, the idea of trust once again took center stage. He asked, "How do you know software from ianmlewis on Docker Hub is the same software from ianmlewis seen on GitHub? And can we trust that images built from ianmlewis's code on GitHub are the same as ones built from source code from another ianmlewis repo?"
The answer comes down to software attestation: the metadata, or provenance, plus the software signature, plus the identity of the signer. Attestation verifies the 'what, where, when, and how' of software. It needs to be made of facts you can actually verify; cryptographic signatures are a good example of something that is verifiable. And the best framework for software attestation is, once again, Supply chain Levels for Software Artifacts, SLSA.
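The "provenance + signature + signer identity" shape can be sketched in a few lines. A real SLSA attestation uses in-toto statements and asymmetric signatures (for example, via Sigstore); this stdlib-only toy uses an HMAC purely to show the verifiable-facts idea, and every key, field name, and value in it is an assumption for illustration:

```python
# Toy model of an attestation: provenance (the what/where/when/how metadata),
# a signature over it, and the identity of the signer. HMAC stands in for a
# real asymmetric signature scheme; all keys and fields are made up.
import hashlib
import hmac
import json

SIGNER_KEYS = {"builder@example.com": b"shared-secret-for-demo-only"}

def sign(provenance: dict, signer: str) -> dict:
    """Produce an attestation: provenance + signature + signer identity."""
    payload = json.dumps(provenance, sort_keys=True).encode()
    mac = hmac.new(SIGNER_KEYS[signer], payload, hashlib.sha256).hexdigest()
    return {"provenance": provenance, "signer": signer, "signature": mac}

def verify(att: dict) -> bool:
    """An attestation only holds if the signer is known AND the facts match."""
    key = SIGNER_KEYS.get(att["signer"])
    if key is None:  # unknown identity: nothing to trust
        return False
    payload = json.dumps(att["provenance"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["signature"])

prov = {"what": "app:v1.2", "where": "ci.example.com",
        "when": "2023-02-01", "how": "container build from commit deadbeef"}
att = sign(prov, "builder@example.com")
print(verify(att))                      # True: the facts check out
att["provenance"]["what"] = "app:evil"  # tamper with the provenance...
print(verify(att))                      # False: verification now fails
```

The toy makes the core point: the signature binds the signer's identity to specific, checkable facts, so changing any of the 'what, where, when, and how' breaks verification.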
The biggest reason Ian cites for SLSA being the go-to standard is that it establishes common terminology while at the same time defining incrementally adoptable requirements. The provenance format, basically a specific JSON model, gives us all a standard communication format that is reusable across projects and languages. The real advantage of a common framework is that it makes it much simpler to verify and therefore trust software builds. Open source tools like the CLI slsa-verifier can be integrated into pipelines and can even verify container images quickly, easily, and in a non-disruptive way. Ian suggests checking out the SLSA development blog to stay up to date on the project.
Going hand in hand with SLSA is the framework that will help with discoverability and auditing, the previously discussed GUAC framework. As we continue to shift left, making it easier for devs to identify gaps sooner will help teams be more proactive. Standardizing on GUAC for visualization can help those conversations happen more effectively.
A Community Learning Together
There were a lot more sessions and plenty more pieces of wisdom shared throughout the event. This article would be a lot longer if I attempted to cram everything into this one post. Fortunately for everyone, the sessions were recorded and are already available online. This includes mine, "Security Does Not Need to Be Fun: Ignoring OWASP to Have a Terrible Time."
One of the biggest takeaways from this event is that we are all truly in this together. Given that almost all software relies on third-party, open source software, making our supply chains secure truly is going to require a whole-industry effort. While there are plenty of tools to help us build SBOMs and awesome frameworks like SLSA and GUAC to help us make sense of our software composition, securing our software means finding a more collaborative future. Fortunately, organizations like OpenSSF and the Cloud Native Computing Foundation are already leading the charge through ongoing open source projects and through putting on events like CloudNativeSecurityCon. I am already looking forward to the next Cloud Native event, where we can keep the conversation going.