
Why your AI coding assistant is a secret leak waiting to happen

Claude Code, Cursor, and GitHub Copilot dramatically accelerate development — and, if misconfigured, dramatically accelerate credential exposure. Here's the structural problem and how to fix it.

AI coding assistants are the most significant productivity shift in software development in years. Claude Code, Cursor, GitHub Copilot — these tools have changed how engineers write, debug, and ship code. The average developer using an AI assistant ships faster, catches more bugs, and gets unstuck sooner.

They’re also, if misconfigured, one of the fastest ways to leak production credentials.

This isn’t a headline about AI going rogue or some exotic attack vector. It’s about a basic structural mismatch between how AI coding tools work and how secrets management was designed.

The model context window problem

Every AI coding assistant works by reading your code and maintaining a context window of what it has seen. When you paste an API key into a config file and the AI has that file open, the key is in the model’s context. When you ask the AI to “set up a Stripe integration” and it reads your .env file, your STRIPE_SECRET_KEY is in the model’s context.

In most setups, this context is:

  • Sent to an AI provider’s API (Anthropic, OpenAI, Microsoft) over the network
  • Stored in conversation history, potentially for model training purposes
  • Visible in any logs your IDE or AI tool maintains
  • Accessible to any MCP server or plugin the AI can call

The credential in your .env file was secret. The moment it enters the AI’s context window, it’s travelling across external infrastructure you don’t fully control.

Prompting the AI to use credentials amplifies the risk

The pattern gets worse when you actively use the AI with your credentials.

Consider a common workflow:

  1. You’re building an integration with Stripe
  2. You have STRIPE_SECRET_KEY=sk_live_xxx in your environment
  3. You ask Claude Code: “Run a test charge using my Stripe key”
  4. Claude constructs a Python script, reads your env, executes it

In this workflow, Claude Code has read your Stripe key, included it in a tool call to execute Python, and potentially logged it in its reasoning trace. The key has moved through multiple systems.

Most developers doing this aren’t thinking about it as a security incident. They’re thinking: “I asked the AI to help me test my Stripe integration and it worked.” Which is true. It also worked for anyone monitoring those API calls.

The 81% surge

In 2025, security researchers documented an 81% year-over-year increase in AI-related credential leaks. The methodology: tracking leaked credentials that appeared in AI tool configurations, AI-generated code, or AI-assisted workflows.

The 81% number isn’t about AI doing something malicious. It’s about:

  • AI tools that make it easy to rapidly create and configure integrations (and forget to secure them)
  • MCP config files that end up in version control with credentials embedded
  • AI-generated code that includes credential handling patterns from training data that predate modern secrets management
  • Developers moving faster, with less friction — including less friction around security steps

Speed is the risk factor. AI coding assistants make developers 30–50% faster. Over the same period, AI-related credential leaks rose 81%.

How credentials end up in the wrong places

Config files committed to version control

The most common pattern. MCP servers are configured in JSON files that end up in dotfiles repos. Claude Desktop’s config at ~/.config/Claude/claude_desktop_config.json is a frequent offender. Cursor’s .cursor/mcp.json is another.

These files contain env blocks:

{
  "env": {
    "OPENAI_API_KEY": "sk-proj-xxxxxx",
    "DATABASE_URL": "postgresql://user:password@host:5432/db"
  }
}

The config goes into dotfiles. The dotfiles repo goes public. The keys go with it.
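One mitigation is a quick scan before a dotfiles repo is pushed. Here is a minimal sketch in Python; the patterns and the `find_secrets` helper are illustrative assumptions, not a replacement for a dedicated scanner like gitleaks:

```python
import re
from pathlib import Path

# Heuristic patterns for credential-shaped strings (an assumption:
# real scanners use far more rules and entropy checks).
SECRET_PATTERNS = [
    re.compile(r"sk_live_[0-9a-zA-Z]+"),              # Stripe live keys
    re.compile(r"sk-proj-[0-9a-zA-Z_-]+"),            # OpenAI project keys
    re.compile(r"postgres(ql)?://[^/\s]+:[^@\s]+@"),  # DB URLs with passwords
]

def find_secrets(path: Path) -> list[str]:
    """Return the credential-shaped strings found in one config file."""
    text = path.read_text()
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

Running this over `claude_desktop_config.json` and `.cursor/mcp.json` before every push catches the most common leak path described above.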

AI-generated code that echoes credentials

Ask an AI to write a script that uses an API, and it will often produce code that reads credentials from the environment and then logs them for debugging purposes:

import os

api_key = os.environ.get('STRIPE_SECRET_KEY')
print(f"Using API key: {api_key}")  # AI added this for "debugging"

The AI doesn’t know that the debug print statement is a problem. It’s doing what it’s been trained to do: help you debug. The credential ends up in your terminal output, which ends up in your CI logs, which may be accessible to your whole team.
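If a debug line is genuinely needed, masking the value before it reaches any log is the safer pattern. A small sketch (the `redact` helper is a hypothetical name, not part of any SDK):

```python
import logging
import os

def redact(value, visible=4):
    """Mask all but the last few characters of a credential."""
    if not value:
        return "<unset>"
    return "*" * max(len(value) - visible, 0) + value[-visible:]

# Only the masked form ever reaches terminal output or CI logs.
api_key = os.environ.get("STRIPE_SECRET_KEY", "sk_test_abc123xyz")
logging.basicConfig(level=logging.INFO)
logging.info("Using API key: %s", redact(api_key))
```

The last four characters are usually enough to confirm which key is in use without exposing the rest.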

Session logs and conversation history

AI coding tools maintain conversation history. Some sync to cloud. Some retain context across sessions for continuity. Every session where you worked with a credential-dependent system is a session where that credential potentially lives in a log file.

The fix: architectural separation

The solution isn’t to stop using AI coding assistants. The solution is architectural: keep credentials out of the AI’s context window entirely.

Zero-knowledge secrets vaults implement this through the MCP protocol itself. Instead of giving the AI access to your credentials, you give the AI a tool that can use credentials on your behalf without disclosing the values.

The pattern looks like this:

Without zero-knowledge vault (dangerous):

Claude → reads .env → sees STRIPE_SECRET_KEY → executes Stripe call with key in context

With zero-knowledge vault:

Claude → calls vault_run("stripe charge...") → vault injects key into subprocess env → subprocess executes → Claude gets output only

The AI never sees the key. The key is never in the context window. The key never travels to Anthropic’s API. The key is only ever in the memory of the local ov mcp serve process, for the duration of the subprocess execution.
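In code, the pattern reduces to subprocess-level injection. Below is a hypothetical sketch of a `vault_run`-style tool, with `load_decrypted_secrets` standing in for the vault's client-side decryption step; the real OpaqueVault interface may differ:

```python
import os
import subprocess

def load_decrypted_secrets(app: str, env: str) -> dict[str, str]:
    """Stand-in for local, client-side decryption of the vault.

    A real vault decrypts here; this sketch fakes one secret.
    """
    return {"STRIPE_SECRET_KEY": "sk_test_example"}

def vault_run(command: list[str], app: str, env: str) -> dict:
    """Run a command with secrets injected into the child env only.

    The caller (the AI) receives exit code and stdout, never the
    secret values, so nothing sensitive can flow back into the
    model's context window.
    """
    child_env = {**os.environ, **load_decrypted_secrets(app, env)}
    result = subprocess.run(
        command, env=child_env, capture_output=True, text=True
    )
    return {"exit_code": result.returncode, "stdout": result.stdout}
```

Exposed as an MCP tool, this is the only surface the AI touches: the secret exists in the parent process and the child's environment, and nowhere in the conversation.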

What this looks like in practice

With OpaqueVault configured as your MCP server, the workflow changes:

Before:

"Run a test Stripe charge for $1 using my test key"
→ Claude reads STRIPE_TEST_KEY from env or config
→ Claude writes and executes a Python script
→ Key travels to Anthropic API as part of context

After:

"Run a test Stripe charge for $1 using the stripe key in my vault"
→ Claude calls vault_run("python stripe_charge.py", app="my-saas", env="staging")
→ OpaqueVault decrypts STRIPE_TEST_KEY locally, injects into subprocess
→ Subprocess runs, Claude gets exit code and stdout
→ Key never entered Claude's context

The AI is just as useful. The credential never left your machine.

A note on Cursor, Copilot, and other AI tools

Claude Code gets the most attention in MCP discussions because Anthropic invented the protocol, but the same risks apply to any AI coding assistant that has file access or env access.

Cursor reads your project files, including .env files unless you explicitly exclude them. GitHub Copilot has file context in your editor. Any AI tool that can read your working directory can read your secrets if they’re stored there.

The fix is the same regardless of which AI tool you use: don’t store secrets in files the AI can read. Use a vault that injects secrets at the process level, not the file level. For MCP-based workflows, OpaqueVault handles this natively. For other workflows, .env.local gitignore patterns and proper secret injection via CI/CD are the minimum.
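For the non-MCP minimum, a .gitignore along these lines keeps local secret files out of version control (the exact entries are a suggestion; adjust to your project layout):

```
# Local secret files: never commit these
.env
.env.local
.env.*.local

# MCP configs that may embed credentials
.cursor/mcp.json
```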

The 29 million

In 2025, over 29 million secrets were leaked across public repositories and exposed APIs. The AI-related 81% surge is a subset of that. The broader trend is that secrets are proliferating faster than secrets management practices are evolving.

AI coding tools are a productivity multiplier. Used without proper credential hygiene, they’re also a leak multiplier. The same speed that makes them valuable makes them dangerous.

The answer isn’t to slow down. It’s to build the right infrastructure so that moving fast doesn’t mean moving carelessly.


OpaqueVault is an MCP-native, zero-knowledge secret vault that keeps credentials out of your AI agent’s context window. Built for teams using Claude Code, Cursor, and other AI coding assistants.

See how it works →
