xAI Secret Leak: The Story of a Disclosure

The rise of AI is increasing secrets sprawl for many reasons. In GitGuardian's State of Secrets Sprawl 2025 report, we described how the use of Copilot seemed to increase the number of secret leaks. This data-driven analysis confirmed academic research on the security of LLM-generated source code, especially from a secrets perspective.

Another side of the story is that the wide adoption of AI multiplies the number of AI API providers, consumers, and Non-Human Identities linking the two. Because AI tools are used by technical and non-technical people alike, the sources of leaks are also diversifying. We now observe AI API tokens leaked by companies of all sizes and sectors, from startups to large enterprises, from marketing to software development.

No company seems immune to this growing risk, and in recent months, xAI, the company behind the Grok AI assistant, fell victim to it.

A leak timeline

Original leak and alerting

GitGuardian’s secret detection platform continuously scans public GitHub repositories for new secrets. When we find one, an automated system emails the commit author to alert them of the leak. This is what we call the Good Samaritan Program, which has been available to all developers, free of charge, since 2017.

On March 2, 2025, this automated system discovered a new secret in a commit to a public repository. The commit contained an xAI API key in an .env file. This is a classic secret leak scenario, and GitGuardian sent an email to the commit author to alert them of the incident. The only thing that made this particular commit stand out was the committer's email address: it was hosted under the x.ai domain. At the time, no further investigation was performed; the alert was drowned out among the hundreds of other emails we sent to developers that same day.
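To illustrate, this is what the scenario typically looks like; the file contents and key below are entirely made up:

    # .env (committed by mistake alongside the application code)
    XAI_API_KEY=xai-XXXXXXXXXXXXXXXXXXXXXXXXXXXX

    # .gitignore (the one-line addition that prevents this class of leak)
    .env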

Independent rediscovery

Two months later, Philippe Caturegli, an independent security researcher from Seralys, disclosed in a LinkedIn post that he had gained access to an xAI API key from a public repository. He also tagged GitGuardian, which brought the leak back to our attention.

The disclosure LinkedIn post by Philippe Caturegli.

After investigation, it turned out that the API key was still valid, two months after the original discovery and alert. Moreover, the key was more than a simple user key: the corresponding account had access not only to public Grok models (grok-2-1212, etc.) but also to what appeared to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).
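For illustration, here is a minimal sketch of how such a validity check can be performed. It assumes xAI exposes an OpenAI-compatible models endpoint at https://api.x.ai/v1/models (an assumption on our side), and it should only ever be run against keys you are authorized to test:

    # check_key.py: hedged sketch to test whether a leaked key still
    # works and which models it can see. Assumes an OpenAI-compatible
    # /v1/models endpoint; only run against keys you are authorized to test.
    import os

    import requests

    API_KEY = os.environ["XAI_API_KEY"]  # the key under investigation

    resp = requests.get(
        "https://api.x.ai/v1/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )

    if resp.status_code == 200:
        # A valid key returns the models its account can access; this is
        # how unreleased or private model names can surface.
        print("Key is still VALID; accessible models:")
        for model in resp.json().get("data", []):
            print(" -", model.get("id"))
    elif resp.status_code in (401, 403):
        print("Key appears revoked or unauthorized.")
    else:
        print(f"Unexpected response: {resp.status_code}")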

At that point, we decided not to pursue the investigation further but to formally and immediately notify xAI of the leak through a coordinated responsible disclosure. We can only speculate about the data the private models were trained on, but chances are they had knowledge of X’s, Tesla’s, or SpaceX's intellectual property. Carefully querying those models could have disclosed this information, which could have been critical to those companies' business.

A responsible disclosure bad practice

We prepared a responsible disclosure email with all the information needed to quickly identify the leak source and the affected keys and accounts, and to start the remediation process. We then hit our first difficulty: xAI's main website does not expose a security.txt file. As we explained in a blog post earlier this year, RFC 9116 defines a standard way of publicly providing a company’s security contact information through a security.txt file served under the /.well-known/ path. This is an industry standard that xAI does not follow.

x.com does expose such a security.txt file. However, it points to a HackerOne program at https://hackerone.com/twitter, and the file has been expired since January 2024, a year after Twitter’s acquisition by Elon Musk.
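For reference, a minimal RFC 9116-compliant file, served at /.well-known/security.txt, looks like this (all values are illustrative):

    # https://example.com/.well-known/security.txt
    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:59:59Z
    Policy: https://example.com/security-policy
    Preferred-Languages: en

The mandatory Expires field exists precisely to prevent the stale-contact situation described above: a file past its expiration date, like x.com's, can no longer be trusted by researchers.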

Googling for a vulnerability disclosure page for xAI gave two results:

  • A HackerOne program for X at https://hackerone.com/x/.
  • A security page at https://x.ai/security.

The security page was unfortunately not very helpful for vulnerability disclosure and mostly contained information about users’ data security.

The HackerOne program page seemed to have xAI in scope, but because we were not looking for a reward and had previously had bad experiences with leak disclosures via bug bounties, we tried to find a better option.

After some more digging, we finally identified the safety@x.ai email address as a good candidate for disclosure. Finding this security contact cost us a few unnecessary hours. We sent the disclosure email on April 30 at 11:00 AM EST.

We received an answer from xAI 12 hours later:

Thank you for your email

For us to analyze and also for you to receive proper credit, if applicable, would you please submit this to xAI's Bug Bounty Program on HackerOne?
https://hackerone.com/x?type=team 

Thanks!
xAI Team

xAI’s team was redirecting us to their bug bounty program, which delays the remediation process: submitting the report to HackerOne and having it triaged and forwarded to the company could take additional hours or days, during which remediation would not start. For a company the size of X, replacing an incident response team (PSIRT or CSIRT) with a bug bounty platform should not be an option and should be considered bad practice. Again, we were not looking for a reward.

Luckily, only a few hours later, the leaky repository was removed from GitHub and the key was revoked. This was done without any update sent to us, completely outside the disclosure process. This means we could have wasted more time filing a bug bounty report and waiting for updates, only to be told that the issue was invalid because it had already been fixed.

The leaky repository was deleted from GitHub.

This incident subsequently gained wider attention when cybersecurity journalist Brian Krebs covered the story in a detailed report.

It happens, just keep calm

Secret leaks happen. They happen to every company without distinction, and we can’t blame xAI or its developers for that, even if we could expect large companies handling gigantic amounts of customer data to be more careful with their security. We expect the growing adoption of AI in every sector to further increase the incidence of AI-related secret leaks.

With that in mind, every company should be prepared to receive security alerts for such incidents. The xAI case illustrates some common misconceptions and bad practices when it comes to responsible disclosure handling:

  • No easily identifiable security contact.
  • A bug bounty program used to replace or proxy an appropriate CSIRT team.
  • No transparent communication with researchers, who received no updates about remediation.

Fortunately, if you want to be better prepared to receive security alerts from third parties, there are a few simple things you can do:

  • Have an identified team to handle inbound disclosures.
  • Give public information about security contacts (see the sketch after this list).
  • Fine-tune your bug bounty scopes and policies.
  • Be prepared to receive negative feedback.
  • Follow a transparency-first approach in communications.
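To help with the second point, here is a hedged sketch that checks whether your own security.txt is reachable and not yet expired (RFC 9116 makes the Expires field mandatory); the domain is a placeholder:

    # check_security_txt.py: hedged sketch to verify that a domain's
    # security.txt is reachable and not expired. DOMAIN is a placeholder.
    from datetime import datetime, timezone

    import requests

    DOMAIN = "example.com"  # replace with your own domain

    resp = requests.get(f"https://{DOMAIN}/.well-known/security.txt", timeout=10)
    resp.raise_for_status()

    for line in resp.text.splitlines():
        if line.lower().startswith("expires:"):
            raw = line.split(":", 1)[1].strip()
            # RFC 9116 uses the Internet date/time format, e.g. 2026-12-31T23:59:59Z
            expires = datetime.fromisoformat(raw.replace("Z", "+00:00"))
            if expires < datetime.now(timezone.utc):
                print("security.txt has EXPIRED: researchers may not trust it")
            else:
                print(f"security.txt is valid until {expires}")
            break
    else:
        print("No Expires field found: the file is not RFC 9116 compliant")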

All those points are covered in depth in our dedicated blog posts.

From Alert to Action: Best Practices to Handle Responsible Disclosure
Responsible disclosure is an often overlooked but critical component of cybersecurity alerting processes. Explore key best practices that can enhance communication and collaboration with researchers, turning potential security threats into opportunities for stronger defense.
Security First, Transparency Always: Inside GitGuardian’s Responsible Disclosure Process
In the past 6 months, our security research team disclosed 24 critical vulnerabilities. Most have been successfully remediated. Our team’s contributions to cybersecurity have been formally recognized, with our researchers being listed in both Bayer’s and Oracle’s Security Researcher Hall of Fame.