There's a narrative in cybersecurity right now that AI is creating unprecedented new security challenges. I don't buy it.
AI isn't creating new problems; it's exposing the ones we've been ignoring for years and accelerating them past the point where our old defenses can keep up.
For years, secrets sprawl has been building up as technical debt. The problem was already there: hardcoded credentials in source code, passwords exchanged over Slack, secrets sitting in environment variables on developer laptops. Then AI accelerated it exponentially.
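To make that concrete, here's a minimal sketch of the anti-pattern (the key below is a placeholder, not a real credential). The point is the contrast: a secret baked into source control lives forever in git history, while one resolved at runtime at least has a single place to rotate.

```python
import os

# The anti-pattern: a credential committed to source control.
# Anyone with read access to the repo now holds a valid key,
# and it stays in git history even after the line is deleted.
STRIPE_KEY = "sk_live_PLACEHOLDER_DO_NOT_COMMIT"  # hardcoded secret

# A safer baseline: resolve the secret at runtime and fail loudly
# if it's missing, instead of shipping a hardcoded fallback.
def get_stripe_key() -> str:
    key = os.environ.get("STRIPE_API_KEY")
    if not key:
        raise RuntimeError("STRIPE_API_KEY is not set; refusing to start")
    return key
```

Even this is only a baseline; as noted above, secrets in environment variables on laptops are part of the sprawl too, which is why a vault with rotation and auditing is the end state.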
The Democratization of Code
Operations teams, salespeople, marketing teams… They're all using AI assistants like Claude, Cursor, and other coding agents to build applications and automations, shipping straight to production without security training. Despite efforts by AI providers to implement secure credential handling, the human factor remains the weakest link: even when LLMs avoid generating hardcoded secrets, untrained users still paste secrets directly into their code. The barrier to creating software has vanished, but security awareness hasn't scaled with it.
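One way teams catch pasted secrets before they ship is a scan in a pre-commit hook or CI. The sketch below covers just three well-known credential formats; production scanners use hundreds of detectors plus entropy analysis, so treat this as an illustration of the idea, not a defense.

```python
import re
import sys
from pathlib import Path

# A deliberately tiny subset of well-known credential formats.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Private key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan(paths: list[str]) -> int:
    hits = 0
    for p in paths:
        text = Path(p).read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{p}:{lineno}: possible {name}")
                    hits += 1
    return hits

if __name__ == "__main__":
    # Exit non-zero so a pre-commit hook blocks the commit on a match.
    sys.exit(1 if scan(sys.argv[1:]) else 0)
```

None of this helps once a credential is already out, though.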
Attackers have figured out the economics: why break in if you can just log in? Modern breaches increasingly follow a simple pattern: find a leaked credential (often just a valid API key, used exactly as designed), authenticate as a legitimate user, move laterally through valid access, and exfiltrate data. Traditional incident response is useless against this because these attacks look like normal business activity.
Why Non-Human Identities Are Exploding
For decades, cybersecurity has been built around human identity management. We authenticate people, authorize people, audit what people do. But a new category of identity is exploding across enterprises. Every AI agent, every service account, every automation needs credentials to prove who it is. Every developer now has multiple AI agents acting on their behalf: code completion assistants, PR review bots, testing automation. Soon, every person in your company will have multiple agents handling tasks.
Here's the problem: when I spin up an AI agent to help with my work, there's no standardized governance framework for managing what it can access. Should it inherit my full permissions? Get its own service account? Which credentials should it use? Most organizations haven't answered these questions, so agents end up either over-privileged (inheriting everything) or blocked entirely.
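There's no standard to point to yet, but the shape of an answer is visible: the agent gets its own identity, an explicit scope allow-list, and a short-lived token, rather than inheriting its owner's permissions. The broker below is hypothetical; AgentCredential and issue_for_agent() are illustrative names, not an existing API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import secrets

@dataclass
class AgentCredential:
    agent_id: str            # the agent's own identity, not the user's
    owner: str               # human accountable for the agent's actions
    scopes: tuple[str, ...]  # explicit allow-list, nothing inherited
    expires_at: datetime     # short TTL limits the blast radius of a leak
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def issue_for_agent(agent_id: str, owner: str, scopes: tuple[str, ...],
                    ttl: timedelta = timedelta(minutes=15)) -> AgentCredential:
    # Mint a narrowly scoped, short-lived credential for one agent.
    return AgentCredential(
        agent_id=agent_id,
        owner=owner,
        scopes=scopes,
        expires_at=datetime.now(timezone.utc) + ttl,
    )

# The PR-review bot gets read access to one repo and nothing else.
cred = issue_for_agent("pr-review-bot", owner="alice",
                       scopes=("repo:acme/api:read",))
```

The details will differ everywhere; the invariants that matter are an identity per agent, explicit scopes, a short TTL, and a named human owner for the audit trail.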
We've spent 30 years building sophisticated systems to manage human identities. We have nothing equivalent for machines. And we're way behind.
This is why we raised this round.
The market is telling us that solving governance at the speed of AI matters. 60% of our new enterprise customers committed to multi-year agreements in 2025, and 80% of our new revenue came from North America. These companies are investing in products that secure secrets and non-human identities.
Where the Capital Goes
The capital will accelerate three critical areas.
- Geographic expansion across the Americas, EMEA, and strategic verticals where AI adoption is fastest.
- Building comprehensive AI agent security capabilities – every autonomous agent represents a new attack surface that needs governance.
- Enterprise-scale NHI lifecycle management that gives organizations the same control over machine identities they have for human identities.
The AI adoption ship has sailed, and the productivity gains are too significant to walk back. The challenge now is building governance that keeps up as adoption accelerates. That means knowing where your secrets are, tracking which ones are still active (70% of secrets leaked two years ago still work today), and understanding who and what is using those credentials.
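Checking liveness can be as simple as an authenticated no-op call against the issuing API. Here's a minimal sketch for a GitHub token, using the requests library; only run checks like this against credentials you own or are explicitly authorized to test.

```python
import requests

def github_token_is_live(token: str) -> bool:
    """Return True if a leaked GitHub token still authenticates."""
    resp = requests.get(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    # 401 means GitHub rejected the credential. Anything else
    # (200, or even 403 rate-limiting) should be treated as live:
    # rotate the token and audit what it touched.
    return resp.status_code != 401
```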
Enterprises are deploying thousands of AI agents without the governance infrastructure to manage them safely. If we don't solve this, AI's potential will always be limited by its security risk.
That's a future none of us can afford.
And that's why we're all in.
Read the press release here.