AI Agents Authentication: How Autonomous Systems Prove Identity
AI agents need to authenticate with numerous systems, making AI authentication a crucial security boundary that determines blast radius, revocability, and long-term governance risk.
The data from this year's State of Secrets Sprawl report shows that AI is not creating a new secrets problem; it is accelerating every condition that already made secrets dangerous.
Anthropic's Claude Code Security launch sent shockwaves through cybersecurity markets. As GitGuardian's CEO, I believe the real battle has shifted from code vulnerabilities to identity and secrets management in the AI era; here's why.
In this article, we will explore the hot topic of securing AI-generated code and demonstrate a technical approach to shifting security left for cloud AI agents by using Model Context Protocol (MCP) tools.
I built a demo showing how to wire up multiple AI agents using Google's Agent Development Kit (ADK) and the A2A protocol, with GitGuardian scanning content for secrets.
We found a path traversal vulnerability in Smithery.ai that compromised over 3,000 MCP servers and exposed thousands of API keys. Here's how a single Docker build bug nearly triggered one of the largest AI supply chain attacks to date.
Why agents break the old model and require rethinking traditional OAuth patterns.
Learn why deterministic security remains essential in an AI-driven world and how GitGuardian combines probability and proof for safe, auditable development.
Is agentic AI the productivity revolution we've been waiting for, or a security nightmare in the making? With AI agents now outnumbering humans and secrets proliferating across enterprise systems, the answer isn't simple. Read our insights from SecDays France 2025.
Align your AI pipelines with OWASP AI Testing principles using GitGuardian’s identity-based insights to monitor, enforce, and audit secrets and token usage.
Vibe coding might sound like a trendy term, but it's really just developing software without automated checks and quality gates. Traditional engineering disciplines have always relied on safety measures and quality controls, and in my view, vibe coding should be no different.
How I wrapped large-language-model power in a safety blanket of secrets-detection, chunking, and serverless scale.
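To make the chunking-plus-secrets-detection idea above concrete, here is a minimal Python sketch. It is a hypothetical illustration, not GitGuardian's actual scanner: it splits long LLM input into overlapping chunks (so a secret straddling a chunk boundary is still caught) and runs simple pattern checks per chunk. The pattern list and chunk sizes are illustrative assumptions.

```python
import re

# Illustrative patterns only; a real scanner uses far richer detectors.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID format
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token format
]

def chunk(text: str, size: int = 200, overlap: int = 40):
    """Split text into overlapping chunks; overlap must exceed the
    longest secret length so boundary-spanning secrets are still seen."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def find_secrets(text: str):
    """Scan every chunk and return the deduplicated matches."""
    hits = set()
    for piece in chunk(text):
        for pattern in SECRET_PATTERNS:
            hits.update(pattern.findall(piece))
    return sorted(hits)

doc = "config: aws_key=AKIAABCDEFGHIJKLMNOP more text " * 3
print(find_secrets(doc))  # -> ['AKIAABCDEFGHIJKLMNOP']
```

The overlap is the key design choice: without it, a secret cut in half by a chunk boundary would never match either fragment's pattern.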