
MCP Secret Manager — How OpaqueVault Works

OpaqueVault is, to our knowledge, the only MCP-native secret manager built for AI coding agents. Here's how it works, why the architecture matters, and how to set it up in five minutes.

An MCP secret manager is a server that runs alongside your AI coding agent and handles credentials on its behalf — so the agent can authenticate against external APIs without the credentials ever appearing in the model’s context window.

OpaqueVault is, as of April 2026, the only purpose-built MCP secret manager we’re aware of. This page explains what that means architecturally, how it differs from generic secrets tools bolted onto MCP, and what the design decisions look like under the hood.

Disclosure: We built OpaqueVault. This is our architectural explanation of our own product.

What “MCP-native” actually means

The Model Context Protocol (MCP) is the standard Anthropic introduced for connecting AI agents to external tools and data sources. Every MCP server exposes a set of tools the AI can call. Most MCP servers are wrappers around existing APIs — a GitHub MCP server wraps the GitHub API, a Postgres MCP server wraps your database.

A secret manager built for MCP is different. Its job isn’t to expose a new data source — it’s to act as a credential broker between the AI and every other tool the AI uses. It has to be designed with a specific constraint in mind: the AI must never receive plaintext secret values, even though the AI is the one initiating the credential requests.

Most generic secrets tools (AWS Secrets Manager, HashiCorp Vault, 1Password) can technically be wrapped in an MCP server. But wrapping them naively creates a critical vulnerability: the MCP tool returns the secret value to Claude so Claude can use it. That’s exactly backwards.

OpaqueVault’s MCP surface is designed around this constraint from the ground up.

The MCP tool surface

OpaqueVault exposes seven MCP tools:

Tool                | What it does
vault_run           | Run a single command with secrets injected into the subprocess environment
vault_inject_env    | Spawn an interactive shell with secrets in the environment for a multi-command session
vault_list_secrets  | List secret names and metadata (no values)
vault_create_secret | Create a new encrypted secret
vault_update_secret | Update an existing secret's value
vault_delete_secret | Delete a secret
vault_status        | Check vault health and session status

vault_run and vault_inject_env serve different workflows. vault_run is for a single authenticated command — run this script, make this API call. vault_inject_env opens an interactive shell with secrets populated as environment variables, which is useful when you need to run a sequence of commands that all need the same credentials, or when you’re debugging interactively and don’t want to re-inject for each step. In both cases, the credential values are never returned to Claude — they’re only in the subprocess environment.

Notice what’s missing: there is no get_secret tool. No tool returns a plaintext credential value to Claude. This is a deliberate, non-negotiable design decision — not a missing feature.

When Claude needs to make an authenticated API call, it calls vault_run:

Claude → vault_run("stripe charges list", app="my-saas", env="production")
→ OpaqueVault decrypts STRIPE_SECRET_KEY locally
→ Spawns subprocess with key in environment
→ Returns stdout + exit code to Claude
→ Claude sees the Stripe API response, not the key

The credential touched exactly one place: the subprocess environment on your local machine, for the duration of the command execution.
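
The pattern above can be sketched in a few lines of Python. This is an illustration of the injection model, not OpaqueVault's actual implementation: the `secrets` dict stands in for values the vault would decrypt locally, and the caller only ever receives stdout and an exit code.

```python
import os
import subprocess
import sys

def vault_run(command: list[str], secrets: dict[str, str]) -> dict:
    """Run a command with secrets injected into the child's environment.

    The secret values exist only in the subprocess environment; the
    caller (the AI) gets back output and an exit code, never the values.
    """
    env = {**os.environ, **secrets}  # inject only into the child process
    result = subprocess.run(command, env=env, capture_output=True, text=True)
    return {"stdout": result.stdout, "exit_code": result.returncode}

# The child can read the key; the caller only sees the printed output.
out = vault_run(
    [sys.executable, "-c",
     "import os; print('key set:', 'STRIPE_SECRET_KEY' in os.environ)"],
    {"STRIPE_SECRET_KEY": "sk_live_placeholder"},  # hypothetical value
)
```

Note that the injected environment dies with the subprocess: nothing is written to disk, and nothing is returned through the MCP channel except `stdout` and the exit code.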

The two-process architecture

OpaqueVault runs as two separate processes:

ov mcp serve — the local MCP bridge. This runs on your machine, holds your Key Encryption Key (KEK) in memory after you authenticate, decrypts secrets on demand, and injects them into subprocesses. It communicates with Claude over stdio per the MCP spec.

api.opaquevault.com — the encrypted blob store. This is a dumb HTTP API that stores and retrieves ciphertext. It has no decryption capability. Even if it were fully compromised, the attacker gets ciphertext they cannot decrypt without your KEK, which never leaves your machine.

Claude Code (AI, no plaintext) ←— MCP/stdio —→ ov mcp serve (decrypts locally) ←— HTTPS —→ api.opaquevault.com (ciphertext only)

This architecture means the server never needs to be trusted: it stores your encrypted secrets and nothing else.
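
To make the "dumb blob store" point concrete, here is a minimal sketch of what the server side reduces to. The class and paths are hypothetical; the point is that the server's entire job is mapping paths to opaque bytes it cannot decrypt.

```python
# Minimal sketch of the server's role: store and return opaque bytes.
# No key material ever reaches this component, so a full compromise
# yields only ciphertext.
class BlobStore:
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}  # path -> ciphertext

    def put(self, path: str, ciphertext: bytes) -> None:
        self._blobs[path] = ciphertext

    def get(self, path: str) -> bytes:
        return self._blobs[path]

store = BlobStore()
store.put("my-saas/production/STRIPE_SECRET_KEY", b"\x9f\x02opaque-ciphertext")
blob = store.get("my-saas/production/STRIPE_SECRET_KEY")
```

In the real system this sits behind an HTTPS API, but the trust boundary is the same: decryption capability lives exclusively in the local `ov mcp serve` process.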

Secret organization: app/environment/name

Secrets in OpaqueVault are organized in a three-level hierarchy:

{app}/{environment}/{name}
my-saas/production/DATABASE_URL
my-saas/production/STRIPE_SECRET_KEY
my-saas/staging/DATABASE_URL
analytics/production/MIXPANEL_TOKEN

This maps directly to how teams actually deploy software — multiple apps, multiple environments, many secrets per combination. When you call vault_run, you specify which app and environment context to use, and OpaqueVault injects all secrets for that context into the subprocess.

The .ov.yaml file at your project root sets the default context:

app: my-saas
environment: production

With that in place, vault_run in your project directory automatically uses my-saas/production — no flags required.
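
The context resolution above is simple enough to sketch directly. This is an illustrative reimplementation, not OpaqueVault's code; since `.ov.yaml` holds just two flat keys, the sketch parses it without a YAML library.

```python
# Sketch: resolve the default app/environment context from .ov.yaml
# and build the {app}/{environment}/{name} secret path.
def load_context(text: str) -> dict[str, str]:
    ctx = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            ctx[key.strip()] = value.strip()
    return ctx

def secret_path(ctx: dict[str, str], name: str) -> str:
    return f"{ctx['app']}/{ctx['environment']}/{name}"

ctx = load_context("app: my-saas\nenvironment: production\n")
path = secret_path(ctx, "DATABASE_URL")
# path == "my-saas/production/DATABASE_URL"
```

Explicit `app=` and `env=` arguments to `vault_run` would simply override these file-level defaults.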

Encryption: zero-knowledge from the ground up

“Zero-knowledge” means the server has no ability to decrypt your secrets. Here’s the full encryption chain:

master password → Argon2id(time=1, mem=64MB, threads=4) → KEK (32 bytes, never sent to server)
KEK + random DEK → AES-256-GCM → encrypted DEK (stored on server)
DEK + plaintext → AES-256-GCM → ciphertext (stored on server)

Your master password and KEK are never transmitted. The Argon2id parameters are hardcoded client-side — the server cannot negotiate weaker parameters. This is a deliberate security decision: a server that could suggest parameters could be compromised into suggesting weak ones. The tradeoff is that upgrading parameters (e.g. increasing memory cost as hardware improves) requires a client-side migration where secrets are re-derived and re-encrypted. That’s the right tradeoff for a zero-knowledge system — it keeps the security guarantee clean even if it makes future upgrades more deliberate.

Every encryption operation uses a fresh random nonce from crypto/rand. DEKs are zeroed from memory after use.
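
The envelope structure above can be sketched end to end. Two caveats: this uses `hashlib.scrypt` as a stdlib stand-in for Argon2id (the real parameters are the ones stated above), and AES-256-GCM comes from the third-party `cryptography` package. It illustrates the KEK → DEK → ciphertext layering, not the production code.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_kek(password: bytes, salt: bytes) -> bytes:
    # Stand-in KDF: OpaqueVault uses Argon2id(time=1, mem=64MB, threads=4);
    # scrypt is used here only so the sketch runs on the stdlib.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

def encrypt_secret(kek: bytes, plaintext: bytes):
    dek = os.urandom(32)                     # fresh data encryption key
    n1, n2 = os.urandom(12), os.urandom(12)  # fresh random nonces per operation
    wrapped_dek = AESGCM(kek).encrypt(n1, dek, None)        # KEK wraps DEK
    ciphertext = AESGCM(dek).encrypt(n2, plaintext, None)   # DEK wraps secret
    # Only (n1, wrapped_dek, n2, ciphertext) go to the server; KEK stays local.
    return n1, wrapped_dek, n2, ciphertext

def decrypt_secret(kek: bytes, n1, wrapped_dek, n2, ciphertext) -> bytes:
    dek = AESGCM(kek).decrypt(n1, wrapped_dek, None)
    return AESGCM(dek).decrypt(n2, ciphertext, None)

kek = derive_kek(b"correct horse battery staple", salt=os.urandom(16))
blob = encrypt_secret(kek, b"sk_live_placeholder")
recovered = decrypt_secret(kek, *blob)
```

The layering is what enables cheap re-keying: changing the master password only requires re-wrapping the DEKs under a new KEK, not re-encrypting every secret.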

Transport: ML-KEM-768 + X25519 hybrid

Transport uses a hybrid key encapsulation mechanism combining X25519 (the current standard) with ML-KEM-768 (the NIST-selected post-quantum algorithm). The relevant threat model here is “harvest now, decrypt later”: adversaries today can record encrypted traffic with the intent to decrypt it once quantum computers capable of breaking elliptic curve cryptography become available. Estimates for when that’s practical range from 10 to 20+ years, but secrets created today — database passwords, long-lived API keys, signing keys — may still be in use then.

The hybrid approach means both algorithms must be broken simultaneously for the transport to be compromised. X25519 covers the present; ML-KEM-768 covers the quantum future. Major browsers (Chrome, Firefox) adopted the same hybrid in 2024 for the same reason.
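
The "both must be broken" property comes from deriving the session key from both shared secrets at once. The sketch below shows the principle only: the two inputs are random placeholders (real implementations take them from the X25519 exchange and the ML-KEM-768 encapsulation), and the HMAC-based combiner is illustrative, not OpaqueVault's exact KDF.

```python
import hashlib
import hmac
import os

def combine(ss_x25519: bytes, ss_mlkem768: bytes, transcript: bytes) -> bytes:
    # Derive the session key from BOTH shared secrets, bound to the
    # handshake transcript. Recovering it requires breaking both exchanges.
    return hmac.new(transcript, ss_x25519 + ss_mlkem768, hashlib.sha256).digest()

ss_classical = os.urandom(32)  # placeholder: X25519 shared secret
ss_pq = os.urandom(32)         # placeholder: ML-KEM-768 shared secret
key = combine(ss_classical, ss_pq, b"handshake-transcript")

# Knowing only one of the two secrets leaves the session key unrecoverable:
# a wrong guess for either input yields a different key.
wrong_classical = combine(os.urandom(32), ss_pq, b"handshake-transcript")
wrong_pq = combine(ss_classical, os.urandom(32), b"handshake-transcript")
```

An attacker who breaks X25519 with a quantum computer still faces ML-KEM-768, and one who finds a flaw in ML-KEM-768 still faces X25519.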

Getting started

The step-by-step setup — install, initialize, create secrets, configure the MCP server, set project context — is covered in detail in How to Store Secrets for Claude Code. The short version:

brew install opaquevault-brew/tap/ov
ov vault init
ov secret create my-saas/production/STRIPE_SECRET_KEY

Add to your MCP config:

{
  "mcpServers": {
    "opaquevault": {
      "command": "ov",
      "args": ["mcp", "serve"]
    }
  }
}

From that point, Claude calls vault_run to execute authenticated commands — the credential is injected into the subprocess, Claude sees the output, and the key never enters the context window.

Why not just use environment variables?

Environment variables are process-scoped and don’t persist. They’re a fine mechanism for passing secrets into a single process — but they don’t solve:

  • Storage: where do the values live before you set them?
  • Distribution: how do teammates or CI systems get the values?
  • Rotation: how do you update them across environments?
  • Audit: who accessed what, and when?
  • AI safety: how do you ensure Claude never reads the raw value?

OpaqueVault handles all of these. It’s not a replacement for process-level env vars — it’s the layer above that manages, encrypts, and safely delivers them.


OpaqueVault is the MCP-native, zero-knowledge secret manager for AI coding agents. Get started free →

Related: 24,000 secrets found in MCP config files · Why your AI coding assistant is a secret leak
