There is a good chance that when you hear "San Francisco," you immediately think of cable cars. These moving national landmarks have been operating since 1873, relying on tracks and a robust underlying cable system that stays mostly hidden from public view. That made SF the perfect backdrop for conversations about a modern technology that needs tracks and guardrails of its own: agentic AI.
If there was a common thread among the conversations at this year’s RSA Conference, it was the emergence of agentic AI as an operational "colleague," not just a tool. That shift raises hard questions about trust, governance, and identity in security. We are no longer merely integrating AI; we’re now onboarding it.
Here are just a few takeaways from this year's edition of the largest security conference on earth.
AI Is Transforming From Helpful Copilots to Agentic Colleagues
Techstrong assembled some of the leaders of the DevSecOps space to speak at DevOps Connect. This 10th anniversary edition's theme was "AI and Security: Transforming Modern AppDev." Kicking off the event-inside-an-event was the “Security in AI” panel, featuring Jason Clinton, CISO of Anthropic; Matt Knight, CISO at OpenAI; Joshua Saxe, AI security tech lead at Meta; and Moran Ashkenazi, CSO of JFrog, hosted by best-selling author Sol Rashidi. They didn’t dwell on hypothetical AI futures. Instead, they walked us through what’s already unfolding: the rise of agents that make decisions based on organizational context, not just input strings. These aren’t bots in Slack. They’re coworkers with context windows.
Jason described the shift as onboarding AI the way you would a human, equipping it with a manager, a set of tools, and a scope of responsibility. The implication is that we’re already beyond “least privilege” as we have defined it for deterministic systems. As these agents gain autonomy, we’re entering an era where identity infrastructure must offer stronger guarantees. Instead of tracking just who triggered a command, we now need to ask on whose behalf it was executed. That’s an identity and attribution challenge that current IAM architectures were not built to handle.
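To make that concrete, here is a minimal sketch of what a delegation-aware audit record might look like, with "who executed this" and "on whose behalf" as separate, first-class fields. The schema and identity names are illustrative assumptions, not any existing IAM product's data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    """Hypothetical audit record for an agent-executed action."""
    actor: str           # identity that executed the call (the agent)
    on_behalf_of: str    # human or service that delegated the task
    action: str          # what was done
    scope: str           # responsibility boundary granted at "onboarding"
    delegation_chain: list[str] = field(default_factory=list)  # full chain, agent-to-agent hops included
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AgentActionRecord(
    actor="agent:support-bot",
    on_behalf_of="user:alice@example.com",
    action="rotate_api_key",
    scope="payments-staging",
    delegation_chain=["user:alice@example.com", "agent:support-bot"],
)
print(record.actor, "acted on behalf of", record.on_behalf_of)
```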
Matt reminded us that OpenAI’s security use cases are deliberately scoped to avoid agentic risk. Having LLMs summarize incident timelines, for example, is low-risk and high-value. Compare that with giving an agent the ability to invalidate a credential, which might cause an outage in the name of following a security policy. OpenAI’s research already shows that models can shift from observation to instruction in unpredictable ways, meaning every action we empower these agents to take carries added risk.
Joshua from the Llama team at Meta drove home the inevitability of prompt injection and nondeterministic execution paths. He reminded us that no LLM is zero-risk. Meta’s internal policy is to restrict privileged access and to design for residual risk management, not complete elimination.
Moran talked about the need to get the whole organization on board with any policy early and meaningfully. She described the 'skeleton' framework JFrog developed in-house to give teams guidance on what is allowable while still leaving room to experiment. Her takeaway: just as with any other technology, successful rollouts of AI agents boil down to getting change management right.
What united all these perspectives was not fear, but clarity. We are entering a cockpit model of AI interaction: agents will act with context and independence, but humans remain in the pilot’s seat… for now.
Using AI to Stitch Together an Automated Attack
In his talk "The AI Imperative: Defining Zero Trust Security Strategies for a New Era," Deepen Desai, CSO of Zscaler, delivered one of the most grounded sessions of the conference by stripping away AI hype and walking the audience through the security implications of today’s actual enterprise usage. With over 536 billion AI/ML transactions tracked and enterprise usage up 36x, Deepen made it clear: the scale of AI deployment has already outpaced most governance frameworks. The raw telemetry paints a picture of rampant data movement: 3,624 TB across enterprise AI services.
The most chilling moment came during the “RogueGPT” demo, where Deepen stitched together real-world tactics, techniques, and procedures (TTPs) into an automated agentic LLM-based attack: discovery via open source intelligence, spearphishing via LinkedIn scraping, malware propagation, and secrets exfiltration. None of it was fictional. It was based 100% on real-world breaches. The only fictional part was the orchestration, done by an agent instead of humans.
Deepen’s message wasn’t to be afraid; it was to operationalize Zero Trust in ways that acknowledge agentic behavior. He outlined a clear architecture: no attack surface via VPN/firewall exposure, TLS inspection at scale, app-to-app segmentation, and user-to-user isolation. “Phones don’t target each other laterally,” he quipped, asking why our infrastructure still does.
Are CISOs In The Best Seat To Guide AI Adoption?
No RSA Conference would be complete without an update from the researchers at IANS Research. Led by Nick Kakolowski, Senior Research Director at IANS, and Steve Martano, Partner in Artico Search's cybersecurity practice, their session "CISOs: Elevate Strategic Impact and Unlock New Career Paths" offered a stark diagnosis: the CISO role is undergoing a forced evolution, whether the industry is ready or not. Security leaders are no longer just defenders; they're digital stewards expected to guide enterprise AI transformation, even when they don’t own the underlying IT stack.
The research shared during the session showed that more than 50% of CISOs already have responsibility for domains like product security, cloud, and disaster recovery. Increasingly, they’re being pulled into data governance, M&A due diligence, and AI policy oversight, even if they don’t have the resources to match.
The real call to action came from the leadership vacuum AI is creating. “All decisions around AI will be made, with or without your voice in the room,” Steve warned the assembled CISOs. The path forward isn’t about defending turf; it’s about positioning yourself as a business risk executive who specializes in security.
Agentic AI Is Shattering The Trust Boundary
It’s tempting to think of agentic AI as a tooling revolution. More intelligent code review, faster automation, and AI-augmented detection are all real wins. But beyond those benefits, the conversations at RSA 2025 revealed a deeper risk brewing: agentic AI is dissolving the operational trust boundary as we currently understand it, and we might not be prepared for the consequences.
Intent As An Attack Surface
AI agents operate not on static instructions but on inferred intent. This makes their decision-making non-deterministic and exploitable. Prompt injection remains a live vulnerability, not because attackers are clever, but because AI systems are designed to interpret instructions and try to make the requester 'happy.' As Deepen illustrated in his RogueGPT demo, we’re not far from adversaries chaining together GPT-driven playbooks that can scrape org charts, craft spearphishing campaigns, establish persistence, and exfiltrate secrets, all autonomously. In fact, we may already be there.
In a world where intent is programmable, security must adapt from monitoring actions to validating authorizations of inferred behavior.
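One way to picture that shift is a deny-by-default gate that validates each action an agent proposes against an explicit policy before anything executes. Here is a minimal sketch, with hypothetical agent IDs and action names:

```python
# Deny-by-default: the agent infers what to do, but an explicit,
# human-authored policy decides whether it may do it.
AGENT_POLICY: dict[str, set[str]] = {
    "agent:triage-bot": {"summarize_incident", "fetch_logs"},   # read-only duties
    "agent:remediation-bot": {"fetch_logs", "open_ticket"},     # still no destructive verbs
}

def authorize(agent_id: str, proposed_action: str) -> bool:
    """Validate inferred behavior against explicitly granted authorization."""
    return proposed_action in AGENT_POLICY.get(agent_id, set())

def execute(agent_id: str, proposed_action: str) -> str:
    if not authorize(agent_id, proposed_action):
        # Denied actions should be logged and escalated, never silently run.
        return f"DENIED: {agent_id} -> {proposed_action}"
    return f"EXECUTED: {agent_id} -> {proposed_action}"

print(execute("agent:triage-bot", "summarize_incident"))      # EXECUTED
print(execute("agent:triage-bot", "invalidate_credential"))   # DENIED
```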
Privileged Access Misuse Without Privilege Grant
As Jason from Anthropic noted, Claude now writes over half of Anthropic’s internal code, and they expect that to reach 90% by year’s end. The other CISOs from AI companies I heard from shared similar rates for their own platforms. What does it mean when an AI model, not a person, authors the logic behind access controls or telemetry? If the model makes a mistake, who owns the risk? Traditional attribution breaks down here.
We must now treat AI actions as potentially privileged by default, and build systems to sandbox, log, and review those actions as if they came from contractors. While this may feel like new turf, we have needed to answer this question of responsibility since we first started shipping production applications. It’s just all moving faster now.
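As a thought experiment, "privileged by default" could look like the sketch below: any agent-originated action not on an explicit low-risk allowlist is held in a review queue for human sign-off, much as a contractor's change would await approval. The action names and risk tiers are assumptions for illustration:

```python
import queue

# Only explicitly enumerated actions skip review; everything else is
# treated as privileged, which is exactly the inversion described above.
LOW_RISK_ACTIONS = {"summarize_incident", "draft_report"}

review_queue: queue.Queue = queue.Queue()  # actions pending human sign-off

def run_in_sandbox(agent_id: str, action: str) -> str:
    # In practice: scoped-down credentials, egress controls, full logging.
    return f"RAN (sandboxed): {agent_id} -> {action}"

def submit_agent_action(agent_id: str, action: str) -> str:
    if action in LOW_RISK_ACTIONS:
        return run_in_sandbox(agent_id, action)
    # Privileged by default: park the action until a human reviews it.
    review_queue.put({"agent": agent_id, "action": action})
    return f"PENDING_REVIEW: {agent_id} -> {action}"

print(submit_agent_action("agent:dev-bot", "draft_report"))       # runs sandboxed
print(submit_agent_action("agent:dev-bot", "modify_iam_policy"))  # queued for review
```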
Oversight As A Practice, Not A Policy
Multiple speakers emphasized that we must treat AI agents the way we train interns: tight loops, safe environments, and clear scopes. As Jason phrased it, “AI acts, understands, and remembers.” That cycle demands supervisors, not just frameworks.
Policy alone won’t secure this space. We need operational observability, watching what agents actually do in production. We need red teams capable of jailbreaking our own models. And we need escalation paths for when intent goes wrong.
Secrets, Access, And Being At The Center Of The NHI Conversation
Over at the GitGuardian booth on the main expo floor, where your author spent much of his time, there was a real sense of excitement. As Carole Winqwist, GitGuardian's CMO, commented, "The RSA Conference had an amazing energy. The very intense rhythm really reveals the strength of our industry and all the innovation going on. It stands apart as a venue to meet all kinds of profiles: colleagues, practitioners, customers, prospects, analysts, and press."
We shared our new NHI Governance platform with attendees, asking whether they had tackled this emerging area of security or were focusing on secrets security maturity. The response was loud and clear: people want to talk about non-human identity, but the industry is still coming to terms with how to articulate what it needs. Most attendees had never before considered how their service accounts, containers, and CI/CD tools mapped to any internal IAM policy.
What clicked for people wasn’t the definition of non-human identities, which is pretty broad and invites some philosophical debate. What people responded to was the shared access model. If something has a secret that grants access to another system, it’s part of the security boundary. It deserves the same scrutiny we give to human identities.
We showed people our new policy enforcement capabilities, currently based on the OWASP Top 10 NHI risks, which check for issues such as secrets shared between staging and production environments, long-lived tokens still valid after many months, and secrets duplicated across multiple vaults. Our platform helps people answer the underlying question: “What resources can this NHI reach, and where is that credential stored or leaked?”
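To illustrate the kinds of checks involved, here is a simplified sketch; the inventory schema and the 90-day threshold are assumptions made for the example, not GitGuardian's actual data model:

```python
from datetime import datetime, timedelta, timezone

MAX_TOKEN_AGE = timedelta(days=90)  # example policy threshold

# Toy inventory entry: one secret observed in several places.
secret = {
    "id": "tok-001",
    "environments": {"staging", "production"},        # found in both
    "created_at": datetime(2024, 6, 1, tzinfo=timezone.utc),
    "vaults": ["vault-a", "vault-b"],                 # stored twice
}

def check_nhi_policies(secret: dict, now: datetime) -> list[str]:
    findings = []
    if {"staging", "production"} <= secret["environments"]:
        findings.append("cross-environment: shared between staging and production")
    if now - secret["created_at"] > MAX_TOKEN_AGE:
        findings.append("long-lived: token exceeds maximum allowed age")
    if len(secret["vaults"]) > 1:
        findings.append("duplicated: same secret present in multiple vaults")
    return findings

for finding in check_nhi_policies(secret, datetime.now(timezone.utc)):
    print(f"{secret['id']}: {finding}")
```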
When we asked, "What other policies would you want to enforce on NHIs?" most people did not have a ready answer. They admitted it is a conversation they likely need to have. That’s not a failure; it’s an opportunity and a call to action for us in the security space. Teams need to think this through now and put the right tooling in place to make sure humans and AI agents alike follow their organization’s best practices. We can all shape this domain if we move quickly, clearly, and with empathy for real workflows.
This is why GitGuardian’s NHI Governance tooling isn’t just detection; it’s inventory, enforcement, and remediation. Secrets are what make agentic AI, microservices, applications, and all of our other NHIs dangerous when unmanaged. They are also where governance becomes real. We would be glad to show you the same demo anytime you would like.
Beyond The Buzz: A Community Showed Up
RSA can be noisy and commercial, but this year, it had a special energy. From our conversations at the booth to the hallway track after the sessions, there was a shared sense that we’re all facing something new, and we’re trying to figure it out together. Attendees weren’t just vendors or practitioners. They were peers, co-navigators of a space we’re still mapping.
RSA 2025 made one thing painfully clear about agentic AI: this isn’t hype. The future isn’t just automation; it’s delegation. AI will make decisions we don’t explicitly authorize, on timelines we didn’t define, using access we forgot we granted.
With AI, our collective shorthand has formed faster than usual, which is part of why the space feels like it is moving so rapidly. A reference to "Claude writing half the codebase," for example, didn’t get met with skepticism; it sparked a discussion about CI/CD hygiene and supply chain observability. It felt like we were all trying to speak the same language across areas of focus. Not AI vs. security. Not us vs. them. Just people trying to secure the future we’re already building.
We all walked away from RSAC feeling that we can’t just keep operating on outdated risk assessments. If your security posture still assumes a human at the center of every credential, it’s time to re-examine why. GitGuardian isn’t just solving secrets sprawl. We’re shaping the governance fabric that secures NHIs, the foundation of this next frontier.
And right now, the frontier is wide open.
