
Building a Secure LLM Gateway (and an MCP Server) with GitGuardian & AWS Lambda
How I wrapped large-language-model power in a safety blanket of secrets detection, chunking, and serverless scale.