When industry leaders gathered at SecDays France to discuss agentic AI, the question wasn't whether this technology would transform business operations—it was whether organizations could harness its power without falling victim to an unprecedented security nightmare.

Beyond Workflows: Understanding the Agentic Revolution
Unlike the deterministic workflows that have dominated enterprise automation, agentic AI represents a paradigm shift toward autonomous decision-making. As Noé Achache, Head of AI Engineering at Theodo, explained, traditional LLM implementations follow predictable paths—call an API, process the response, branch left or right based on predetermined logic. Agentic systems, however, operate with a fundamentally different philosophy: give them access to tools, data, and objectives, then let them determine the optimal path forward.
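To make the contrast concrete, here is a minimal sketch of the two styles. The model call is a stub standing in for a real LLM API, and the tool names and routing logic are invented for illustration:

```python
# Sketch contrasting deterministic workflows with an agentic loop.
# call_model() is a stub standing in for a real LLM API call.

def deterministic_workflow(ticket: str) -> str:
    """Fixed path: every branch is decided in advance by the developer."""
    category = "billing" if "invoice" in ticket.lower() else "technical"
    return f"routed to {category} queue"

def call_model(objective: str, tools: dict, history: list) -> dict:
    """Stub: a real agent would send the objective, tool list, and history
    to a model and get back the next action to take."""
    if not history:
        return {"action": "search_orders", "args": {"query": objective}}
    return {"action": "finish", "args": {"answer": history[-1]}}

def agentic_loop(objective: str) -> str:
    """Agentic style: the model picks which tool to call next, until done."""
    tools = {"search_orders": lambda query: f"orders matching '{query}'"}
    history = []
    for _ in range(5):  # hard cap: a basic guardrail against runaway loops
        decision = call_model(objective, tools, history)
        if decision["action"] == "finish":
            return decision["args"]["answer"]
        history.append(tools[decision["action"]](**decision["args"]))
    return "stopped: iteration limit reached"

print(deterministic_workflow("Invoice #123 is wrong"))
print(agentic_loop("find late invoices"))
```

Note the iteration cap: even this toy version needs a guardrail, because the path through an agentic system is chosen at runtime rather than by the developer.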
This distinction becomes crucial when considering protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent), which have emerged as the connective tissue enabling this new ecosystem.

MCP, launched by Anthropic in late November 2024, has already spawned thousands of community-built servers in just six months—a testament to both its utility and the voracious appetite for AI automation across industries.

The democratization aspect cannot be overstated. Where previous automation technologies required specialized technical knowledge, agentic AI tools are accessible to anyone who can articulate their needs in natural language. This accessibility is driving adoption rates that dwarf previous technology cycles, creating what panelists described as both an unprecedented opportunity and a looming crisis.
The Traffic Surge: When Bots Become the New Normal
Gilles Walbrou from DataDome brought sobering statistics to the discussion. His company, which has been tracking bot traffic since 2015, has witnessed an explosion in AI-driven web traffic. What started as a manageable stream of automated requests has evolved into a torrent that's reshaping how websites think about security and access control.

The challenge extends beyond simple volume. Traditional bot detection relied on behavioral patterns that remained relatively static over time. AI agents, however, can adapt, learn, and modify their approaches in real time. This creates an arms race between detection systems and increasingly sophisticated automated actors, some benevolent, others decidedly not.
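A toy example shows why static heuristics age poorly: a fixed request-rate threshold catches a naive scraper that bursts its traffic, but an adaptive agent pacing the same volume over time slips under it. The threshold and traffic patterns below are invented purely for illustration:

```python
from collections import defaultdict

# Toy rate-based detector: flags any client exceeding a fixed per-minute
# request budget. Threshold and traffic are invented for illustration.
REQUESTS_PER_MINUTE_LIMIT = 60

def flag_clients(request_log: list[tuple[str, int]]) -> set[str]:
    """request_log: (client_id, minute_bucket) pairs."""
    counts = defaultdict(int)
    for client_id, minute in request_log:
        counts[(client_id, minute)] += 1
    return {client for (client, _), n in counts.items()
            if n > REQUESTS_PER_MINUTE_LIMIT}

# A naive scraper bursts 200 requests in one minute and gets caught; an
# adaptive agent spreading 500 requests over 10 minutes stays under the cap.
naive = [("scraper", 0)] * 200
adaptive = [("agent", m) for m in range(10) for _ in range(50)]
print(flag_clients(naive + adaptive))  # {'scraper'} -- the agent goes unflagged
```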
Notably, industry responses are starting to emerge: Cloudflare just launched a marketplace that allows websites to charge AI bots for scraping their content, an initiative that could shift both the economics and the security dynamics of bot-driven traffic (read more on TechCrunch). Yet as Walbrou emphasized, the fundamental challenge remains: distinguishing between legitimate automation and malicious activity in an environment where the lines are increasingly blurred.
The Identity Crisis: When Machines Outnumber Humans
Perhaps the most profound security implication discussed was the explosion of non-human identities (NHIs) within enterprise environments. Arnault Chazareix drew a compelling parallel to the microservices revolution that began fifteen years ago, noting how each new service required its own authentication mechanisms and access controls. Agentic AI is triggering a similar multiplication, but at an exponentially greater scale.
Every AI agent requires credentials to function. Every MCP server needs authentication tokens. Every automated workflow demands access to the systems it manipulates. The result is a sprawling landscape of machine identities that can quickly outnumber human users by orders of magnitude, creating what security professionals are calling "secret sprawl"—a distributed ecosystem of credentials that's increasingly difficult to inventory, monitor, and secure.

This proliferation isn't merely a scaling problem; it's a fundamental shift in the attack surface. Unlike human identities, which follow predictable patterns of creation, modification, and deactivation, machine identities often exist in a perpetual state of ambiguity. They're created for specific projects, forgotten when those projects evolve, and left dormant with their original permissions intact.
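A modest first countermeasure is simply surfacing machine credentials that are unowned or have gone untouched. The record shape and the 90-day cutoff in this sketch are illustrative assumptions, not a standard:

```python
from datetime import datetime, timedelta

# Sketch of a stale-credential report over an NHI inventory. The record
# fields and the 90-day cutoff are illustrative assumptions.
STALE_AFTER = timedelta(days=90)

inventory = [
    {"name": "ci-deploy-token", "owner": "platform", "last_used": datetime(2025, 6, 1)},
    {"name": "mcp-crm-server",  "owner": "sales-ai", "last_used": datetime(2024, 11, 30)},
    {"name": "etl-agent-key",   "owner": None,       "last_used": datetime(2024, 9, 2)},
]

def stale_or_orphaned(records, now=None):
    """Yield credentials with no owner or no recent use."""
    now = now or datetime.now()
    for r in records:
        if r["owner"] is None or now - r["last_used"] > STALE_AFTER:
            yield r

for cred in stale_or_orphaned(inventory):
    print(f"review: {cred['name']} (owner={cred['owner']})")
```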
The Democratization Dilemma: When Everyone's a Developer
One of the most fascinating aspects of the current AI revolution is how it's democratizing software creation through what the panel termed "vibe coding."

This phenomenon, where non-technical users can generate functional code through natural language prompts, represents both a tremendous productivity opportunity and a significant security risk.
As the discussion revealed, tools like GitHub Copilot can inadvertently generate insecure code, embedding vulnerabilities or exposing secrets in ways that might not be immediately apparent to users who lack deep security training. The panel highlighted emerging threats like "slop squatting," where malicious actors create packages with names commonly hallucinated by AI systems, waiting for unsuspecting users to download and execute compromised code.
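One lightweight defense against slop squatting is vetting a dependency name against the public registry before installing it, rejecting names that don't exist or were published suspiciously recently. Here is a sketch using PyPI's public JSON API; the 30-day freshness cutoff is an arbitrary illustrative choice:

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timedelta, timezone

# Sketch: vet a package name against PyPI before installing it, as a guard
# against hallucinated or freshly squatted names. The 30-day cutoff is an
# arbitrary illustrative threshold, not an established standard.

def vet_package(name: str, min_age_days: int = 30) -> str:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        return f"REJECT: '{name}' does not exist on PyPI (possible hallucination)"
    uploads = [f["upload_time_iso_8601"]
               for files in data["releases"].values() for f in files]
    if not uploads:
        return f"SUSPICIOUS: '{name}' has no uploaded files"
    first = min(datetime.fromisoformat(u.replace("Z", "+00:00")) for u in uploads)
    if datetime.now(timezone.utc) - first < timedelta(days=min_age_days):
        return f"SUSPICIOUS: '{name}' first published under {min_age_days} days ago"
    return f"OK: '{name}' exists, first upload {first.date()}"

print(vet_package("requests"))
print(vet_package("definitely-not-a-real-pkg-name-xyz"))
```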

The challenge is compounded by the speed at which AI-generated solutions can move from proof-of-concept to production. Traditional security review processes, designed for human-paced development cycles, struggle to keep up with the velocity of AI-assisted creation. This creates a dangerous gap where insecure code can proliferate faster than security teams can identify and remediate it.
Building Guardrails for the Agentic Future
Despite these challenges, the panel remained optimistic about solutions emerging to address agentic AI's security implications. A key trend they identified is the centralization of AI access through internal proxies and gateways, such as deployments built on the open-source LiteLLM, that route all AI interactions through monitored, controlled channels.
Organizations looking to implement robust AI security frameworks can now reference the newly released OWASP AI Testing Guide, which provides a comprehensive methodology for systematically assessing AI systems across various dimensions including adversarial robustness, privacy, fairness, and governance (learn more about the guide).
These centralized approaches enable several critical security capabilities: mandatory guardrails for all AI interactions, comprehensive audit trails, automated detection of personally identifiable information (PII), and the ability to enforce organizational policies consistently across all AI use cases. The EU AI Act, whose next obligations take effect in August, will further incentivize such approaches by requiring providers of high-risk AI systems to analyze, understand, and test their systems comprehensively.
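As a sketch of the gateway idea, the toy proxy below redacts PII and writes an audit log before forwarding a prompt. The regexes and the stubbed model call are simplified placeholders for what a production gateway such as LiteLLM would provide as configuration:

```python
import logging
import re

# Toy internal AI gateway: every prompt passes through PII redaction and
# audit logging before reaching the model. Patterns and the stubbed
# forward_to_model() are simplified placeholders.
logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai-gateway.audit")

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled marker."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

def forward_to_model(prompt: str) -> str:
    return f"(model response to: {prompt})"  # stub for the real LLM call

def gateway(user: str, prompt: str) -> str:
    safe_prompt = redact(prompt)
    audit.info("user=%s prompt=%r", user, safe_prompt)  # audit trail
    return forward_to_model(safe_prompt)

print(gateway("alice", "Summarize the complaint from jean.dupont@example.com"))
```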
The panel also discussed the emergence of specialized security agents—AI systems designed specifically to review and improve the security posture of other AI-generated content. This meta-approach to AI security represents a fascinating evolution where the technology itself becomes part of the solution to its own risks.
The Path Forward: Security in the Age of Agents
As the SecDays France roundtable concluded, participants shared their most exciting agentic AI use cases while acknowledging the responsibility that comes with this power. From automating complex business processes to unlocking previously inaccessible unstructured data, the potential applications seem limitless.
Yet the overarching message was clear: agentic AI is not just another technology to be secured—it's a fundamental shift that requires rethinking how we approach identity, access, and risk management. The security community's response must be equally transformative, embracing new methodologies while maintaining the fundamental principles that have guided cybersecurity for decades.
As one panelist aptly summarized: "We are all builders, and we are all becoming security-aware."
In the age of agentic AI, this dual identity isn't just beneficial—it's essential for navigating the extraordinary opportunities and unprecedented risks that lie ahead.
The conversation at SecDays France highlighted that while agentic AI represents uncharted territory, the fundamentals of security remain constant: inventory your assets, monitor continuously, and build governance into every layer of your technology stack. Only by embracing these principles can organizations safely harness the revolutionary potential of autonomous AI systems.
What is Agentic AI and How Does it Differ from Traditional AI?
Agentic AI refers to systems that can take autonomous actions and make decisions, unlike traditional LLM workflows that follow predetermined paths. While conventional AI systems process inputs and provide outputs through structured workflows, agentic AI can interact with external services, iterate on problems, and determine optimal solutions independently. This autonomy makes them more powerful but also introduces new security risks.
What are Non-Human Identities (NHIs) and Why are They Proliferating?
Non-Human Identities (NHIs) are digital identities assigned to machines, services, and automated systems rather than human users. With the rise of agentic AI, microservices, and automation, organizations now have exponentially more NHIs than human identities. Each AI agent, MCP server, and automated workflow requires credentials to function, creating what security professionals call "secret sprawl" across enterprise environments.
What is MCP and Why Does it Matter for Security?
MCP (Model Context Protocol) is a standardized protocol that allows AI agents to communicate with external services, similar to how USB-C standardized device connections. Since Anthropic launched it in late 2024, thousands of MCP servers have been created in just months. While MCP enables powerful integrations, it also creates new attack surfaces, as each connection requires authentication and proper access controls.
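For a feel of the surface area involved, here is a minimal MCP server sketch using the FastMCP helper from the official Python SDK (installable as the `mcp` package). The tool itself is a trivial placeholder; a real server would wrap internal APIs, which is exactly why each one needs scoped credentials:

```python
# Minimal MCP server sketch using the official Python SDK (pip install "mcp").
# The tool is a hardcoded placeholder; a real server would wrap internal
# systems, each connection requiring its own authentication and access scope.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")

@mcp.tool()
def count_open_tickets(team: str) -> int:
    """Return the number of open tickets for a team (hardcoded demo data)."""
    demo = {"platform": 4, "sales-ai": 11}
    return demo.get(team, 0)

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default for local MCP clients
```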
What is "Vibe Coding" and What Security Risks Does it Introduce?
Vibe coding refers to the practice of non-technical users generating code through natural language prompts using AI tools like GitHub Copilot. While this democratizes software development, it introduces security risks including inadvertently generated insecure code, exposure of secrets, and the emergence of "slop squatting" attacks where malicious actors create packages with names commonly hallucinated by AI systems.
How Can Organizations Secure Agentic AI Systems?
Organizations should implement centralized AI access through internal proxies or gateways that enforce guardrails, maintain audit trails, and detect sensitive data exposure. Key strategies include: inventorying all machine identities, implementing security-by-design principles, using specialized security agents to review AI-generated content, and establishing governance frameworks that balance innovation with risk management.
