Denver, Colorado, is home to vast mountain views of the easternmost section of the Southern Rocky Mountains, commonly referred to as the Front Range. This area boasts the highest continuous paved road in the United States: the Mount Blue Sky Scenic Byway climbs to a staggering 14,130 feet (4,307 meters), offering a literal peak perspective on the landscape below. And just as gaining elevation gives you a clearer view of the terrain, the annual OWASP community event in Denver, SnowFROC 2025, once again gave attendees a clearer view of what’s lurking in today’s application security landscape.
The "FROC" in SnowFROC stands for Front Range OWASP Conference. The "Snow" in front of the name refers to the fact that it is typically snowy when the event is held, but while this year was indeed cold, we thankfully did not have to deal with any precipitation.
This year saw nearly 400 developers, defenders, and curious minds come together to dig into the challenges shaping today’s AppSec landscape. With a heavy focus on secrets, AI, and non-human identity (NHI) management, the event painted a clear picture: OWASP has a major role to play here in 2025 in protecting our ever-evolving applications and development practices. Speakers also made it clear that AI coding assistants can certainly help with some aspects of securing your code, but we are still nowhere near ready for them to produce production-ready, secure code without a lot of oversight.

Here are just a few highlights from a packed day of sessions that mixed practical advice and people sharing what they learned along the way.
An Open-Source-at-Heart Community
At the core of many of the talks were stories from building and sharing code openly, beginning with the riveting keynote from HD Moore, Founder of runZero, on his work creating and evolving Metasploit, the world’s most used penetration testing framework. The talk was part history and part lessons he wished he had learned earlier.
He reminded the audience that the GPL, or any open-source license, is there to protect the community and not the developer. While he warned that being an active open source developer likely makes you a maintainer for the whole life of the project, he encouraged more people to work openly to help us all evolve our work securing applications.
He also acknowledged the struggle to make OSS development a sustainable career, a struggle made worse when corporate users treat OSS as “free as in beer” and do not contribute back. Still, he said he has seen more and more people find success with paid feature development, support contracts, and managing hosted turnkey solutions.
The 2025 OWASP Top 10 for NHI Risks comes into focus
One of the standout sessions came from dual presenters Tal Skverer and Danielle Guetta, both contributors to the OWASP Top 10 for Non-Human Identities (NHI). In their joint session "Exposing the OWASP Non-Human Identity Top 10: Risks, Realities, and AI Impacts," they defined what NHIs are and walked us through a number of ways security can go wrong.
If you haven’t encountered the OWASP NHI Top 10 yet, you’re not alone, and that’s part of the problem this session set out to solve. As they pointed out, most devs and security folks don’t think of “identities” beyond humans. But in today’s systems, programmatic access, whether it’s service accounts, CI/CD bots, or cloud APIs, is everywhere. Each one is a potential breach point. This is an all-too-common problem: one in five organizations reported a recent NHI-related incident, and a full 36% of those incidents were reported as severe.
While they walked us through all ten entries, they drilled into two in particular.
NHI1:2025 – Improper Offboarding. These NHIs often don’t expire or get deactivated when projects end or people leave, and no one knows who owns them. That makes them easy pickings for insider threats or post-exploitation attackers. If we are going to solve offboarding, we must assign risk ownership, as anyone holding that responsibility should already be working to reduce or avoid risk.
NHI4:2025 – Overprivileged NHIs. Unfortunately, it is all too common for developers to give NHIs admin-level access “just to get it working” and then forget to dial it back. If that secret is compromised, an attacker can reach the crown jewels, possibly with write access.
Overprivileged NHIs, and really the whole NHI Top 10, are exacerbated by the adoption of agentic AI. We are treating these bots as if they were human, granting all sorts of access, but without the oversight we generally provide through human Identity and Access Management (IAM) governance. We must find better ways to adhere to the principle of least privilege whenever we grant any entity, human or NHI, access rights.
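The speakers did not prescribe a specific implementation, but the two entries they highlighted suggest a simple pattern: register every NHI with an explicit risk owner (for offboarding) and an explicit minimal scope set (to catch overprivileged requests). Here is a minimal, hypothetical Python sketch of that idea; all names are illustrative, not from the talk.

```python
# Hypothetical sketch: tracking ownership and least privilege for NHIs.
# Each identity carries a risk owner (NHI1:2025, Improper Offboarding)
# and an explicit allowlist of scopes (NHI4:2025, Overprivileged NHIs).

from dataclasses import dataclass, field


@dataclass
class NonHumanIdentity:
    name: str
    owner: str                      # risk owner responsible for offboarding
    allowed_scopes: frozenset = field(default_factory=frozenset)


def authorize(nhi: NonHumanIdentity, requested: set[str]) -> bool:
    """Grant access only if every requested scope was explicitly allowed."""
    return requested <= nhi.allowed_scopes


ci_bot = NonHumanIdentity(
    name="ci-deploy-bot",
    owner="platform-team",
    allowed_scopes=frozenset({"artifacts:read", "deploy:staging"}),
)

print(authorize(ci_bot, {"deploy:staging"}))           # → True, within scope
print(authorize(ci_bot, {"deploy:prod", "db:admin"}))  # → False, denied
```

The same registry that answers "is this scope allowed?" can answer "who do we call when this project ends?", which is exactly the ownership question the offboarding entry raises.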
There is a reason they call it Copilot, not Autopilot
Across multiple sessions, AI’s role in development was held up for critical examination. Unfortunately, the evidence is that from a secure coding perspective, current AI tools are not there yet and still require developers to have a competent level of security knowledge to deploy AI-generated code safely. Two talks stood out, each taking a different angle on the same issue.
Attempting to fix code with AI tools
In his talk, “Don’t Make This Mistake: Painful Learnings of Applying AI in Security,” Eitan Worcel, CEO and Co-founder of Mobb, gave a data-backed teardown in comparing ChatGPT 3.5, 4.0, and Copilot’s code fixes against intentionally flawed applications. His results:
- 29% produced good fixes.
- 19% partially fixed the issue but introduced new vulnerabilities.
- 52% either broke the app, hallucinated nonsense, or deleted essential code.
He stressed that AI code suggestions are probabilistic, not deterministic. Humans tend to think of coding in terms of defined states, expecting each line of code to work toward solving the stated problem. LLMs don't work that way; they will try anything to meet the requirements, including making things up or simply deleting other needed lines of code in an effort to reach the goal.
Eitan said you can’t trust AI coding agents blindly, especially in production workflows. Even with prompt engineering and vulnerability-specific guidance, AI required expert oversight to avoid disaster. At scale, this is incredibly slow.
Prompting AI to write a secure application
Rather than fix known security issues in code, in the session “The Dark Side of AI: Developing unsecure applications in minutes,” Chris Lindsey, Field CTO at OX Security, shared his experience trying to build a complete application by only prompting AI-powered coding tools. To begin, he asked ChatGPT and Claude to build a .NET application, complete with back end and front end, from scratch. The output was not good: the code from ChatGPT looked OK, but none of it compiled.
Claude, from Anthropic, produced code that mostly worked. Unfortunately, when he did get most of the way to a working application after many, many prompt iterations, a basic security scan unveiled over 70 security issues. For example, Claude suggested hardcoding secrets throughout the application, even after defining ways to safely handle credentials earlier in the code creation process.
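The safer pattern the scanner was looking for is simple: credentials live outside the source tree and are loaded at runtime. A minimal sketch of that pattern, in Python for brevity (the talk's app was .NET, and the `PAYMENT_API_KEY` variable name here is purely illustrative):

```python
# Illustrative sketch: load a credential from the environment at runtime
# instead of hardcoding it in source, so it never lands in version control.
import os


def get_api_key() -> str:
    """Fetch the key from the environment; fail loudly if it is missing."""
    key = os.environ.get("PAYMENT_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("PAYMENT_API_KEY is not set; refusing to start")
    return key
```

Failing fast when the variable is absent is deliberate: a missing secret should stop the app at startup, not surface later as a confusing authentication error.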
Another example Chris cited was missing security headers, which were never added unless explicitly requested. He described the current state of Claude as similar to code from a senior developer who’s never worked in AppSec, and the overall state of AI coding as simply not ready to work without a driving developer who has deep knowledge of coding and security best practices.
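As a rough illustration (not from the talk), the headers in question are a well-known baseline, and one defense is to centralize them in one place so they are applied deliberately rather than left to whatever the code generator happens to emit:

```python
# Illustrative sketch: a centralized baseline of common security headers,
# merged into every response instead of relying on generated code to add them.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}


def apply_security_headers(response_headers: dict) -> dict:
    """Merge the baseline in, letting explicit per-response values win."""
    merged = dict(SECURITY_HEADERS)
    merged.update(response_headers)
    return merged
```

Most web frameworks offer a middleware or response-hook mechanism where a helper like this would run once for every response.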
Secure code still needs humans and proven tooling
Throughout the day, the shared sentiment was clear: AI is a tool, not a solution. It’s good for basic scaffolding, OK at documentation, and works for some prototyping, but it’s not ready to autonomously build secure, production-ready applications. We still need humans in the loop to make decisions and ensure that we are not deploying insecure code that is welcoming to an attacker.
AI is not ready to replace the host of security tools that organizations like OWASP and commercial solution makers have been evolving for years. Your author was able to give a talk about secrets security, the tools that are available now to solve this, and where we are headed as an industry. While it would be great to think the machines will just automagically handle this for us, that day has not yet arrived.
Security, like good code, still requires people who care. It is a good thing we don't have to figure this all out alone. And you don't need to wait for next year in Denver to get together with like-minded folks working on securing our applications. SnowFROC is just one of the many get-togethers that the OWASP community holds throughout the year. There is likely an OWASP chapter near you. GitGuardian may even see you there.
