
Zero-Knowledge Secret Manager — What It Is and Why It Matters for AI Agents

A zero-knowledge secret manager means the server cannot decrypt your secrets. Here's why that architecture matters — especially when AI coding agents are in the loop.

A zero-knowledge secret manager is one where the service provider — the company running the server — cannot decrypt your secrets. Not “we promise not to look.” Cannot. Architecturally, mathematically, cannot.

This property matters in any secrets management context. When AI coding agents enter the workflow, it becomes essential.

What zero-knowledge means precisely

In a conventional secrets manager, your secrets are stored encrypted, but the service holds the keys. The provider could, in principle, decrypt any secret you store. More practically: a server compromise, a rogue employee, or a law enforcement request could all result in your secrets being decrypted without your consent.

Zero-knowledge architecture removes this capability entirely. Encryption happens on the client, before anything is sent to the server. The server stores and retrieves ciphertext but has no access to the keys that would let it decrypt that ciphertext.

The client — your machine — holds the key material. The server holds the ciphertext. Neither is useful without the other, and the server never receives the key material.

How OpaqueVault implements zero-knowledge

The encryption chain has three layers:

Layer 1: Master password → Key Encryption Key

master_password → Argon2id(time=1, mem=64MB, threads=4) → KEK (32 bytes)

Your master password is processed through Argon2id, the winner of the Password Hashing Competition and the current state of the art for password-based key derivation. The resulting 32-byte Key Encryption Key (KEK) is never transmitted anywhere. It exists only on your machine: derived on demand from the master password you remember, and held, during an active session, in the memory of the local ov mcp serve process.

The Argon2id parameters are hardcoded client-side. The server cannot suggest weaker parameters. There is no “forgot my master password” flow because there is no server-side recovery — the server doesn’t have anything to recover with.

Layer 2: KEK + random DEK → encrypted DEK

KEK + random_bytes(32) → AES-256-GCM → encrypted_DEK

Each secret gets a unique Data Encryption Key (DEK). The DEK is generated with crypto/rand (cryptographically secure), wrapped with the KEK using AES-256-GCM, and the resulting encrypted_DEK is stored on the server. The server stores the wrapped key but cannot unwrap it without the KEK it doesn’t have.

Layer 3: DEK + plaintext → ciphertext

DEK + plaintext_secret → AES-256-GCM → ciphertext

The actual secret value is encrypted with the DEK using AES-256-GCM with a fresh random nonce for every encryption operation. The ciphertext is stored on the server alongside the encrypted DEK.

What the server stores:

| Field | Value on server |
| --- | --- |
| App name | Plaintext (metadata only) |
| Environment name | Plaintext (metadata only) |
| Secret name | Plaintext (metadata only) |
| Secret value | AES-256-GCM ciphertext |
| Data Encryption Key | AES-256-GCM wrapped with KEK |

The server can return the name my-app/production/STRIPE_SECRET_KEY. It cannot return the value.

Post-quantum transport: ML-KEM-768 + X25519

Zero-knowledge protects your secrets at rest. Transport security protects them in motion — specifically, when the client sends ciphertext to the server for storage and retrieves it for decryption.

OpaqueVault uses a hybrid key encapsulation mechanism combining X25519 (the current standard) and ML-KEM-768 (the NIST-selected post-quantum algorithm). The hybrid approach means:

  • If X25519 is broken by a quantum adversary, ML-KEM-768 still holds
  • If ML-KEM-768 has an undiscovered vulnerability, X25519 still holds
  • Both would have to be broken simultaneously for the transport to be compromised

“Harvest now, decrypt later” is a real attack model: adversaries record encrypted traffic today, planning to decrypt it when quantum computers become viable. The hybrid PQC approach ensures your secret traffic is safe against this even if you’re creating secrets today that need to stay secret for decades.

Why zero-knowledge matters specifically for AI agents

Every conventional reason to prefer zero-knowledge architecture — server compromise, insider threats, legal demands — still applies. AI coding agents add a new dimension.

When Claude Code or Cursor is helping you build, there’s a third party with access to your development context: the AI model and the infrastructure behind it. Your conversation context, the files the AI reads, and the outputs of tools the AI calls all flow through external infrastructure.

In a non-zero-knowledge architecture, your secrets could be compromised at two points:

  1. At the secrets server (the conventional threat model)
  2. Via the AI’s context window, if the secrets manager naively returns values to the AI

Zero-knowledge handles threat #1 by ensuring the server has nothing to compromise. OpaqueVault’s MCP design — specifically the absence of a get_secret tool — handles threat #2 by ensuring the AI never receives plaintext values in the first place.

The result is a system where:

  • The remote server cannot decrypt your secrets
  • The AI cannot read your secrets
  • Secrets are only ever decrypted locally, in memory, for the duration of a subprocess execution

Comparison: zero-knowledge vs. conventional secrets managers

| Property | Conventional | Zero-Knowledge |
| --- | --- | --- |
| Provider can decrypt | Yes | No |
| Server compromise risk | High | Ciphertext only |
| "Forgot password" recovery | Usually yes | No (by design) |
| Key location | Server | Client only |
| AI context exposure | Possible | Architecturally prevented |
| Audit log sensitivity | Can contain secret names | HMAC-hashed references |

The tradeoffs are real. Zero-knowledge means no server-side recovery. If you lose your master password, your secrets are gone — the server cannot help you. This is the correct tradeoff for a secrets manager, where the security guarantee is more important than the recovery convenience.

Verifying the zero-knowledge property

The zero-knowledge claim is verifiable. OpaqueVault is open-source — the client-side encryption code is auditable. The server API accepts only ciphertext and has no decryption endpoint. You can verify this by reading the API spec:

  • POST /secrets — accepts encrypted_value (ciphertext), encrypted_dek (wrapped DEK)
  • GET /secrets/{id} — returns encrypted_value, encrypted_dek
  • There is no decrypt endpoint
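A request body consistent with that spec might look like the following. The encrypted_value and encrypted_dek field names come from the spec above; the metadata field names and placeholder values are illustrative assumptions, not the actual wire format.

```json
{
  "app": "my-app",
  "environment": "production",
  "name": "DATABASE_URL",
  "encrypted_value": "<base64 AES-256-GCM ciphertext>",
  "encrypted_dek": "<base64 DEK wrapped with the KEK>"
}
```

Everything the server ever sees in such a request is either plaintext metadata or ciphertext it cannot open.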

The server is a blob store that speaks HTTPS. There is no server-side secret processing to trust or distrust.

Getting started

# Install
brew install opaquevault-brew/tap/ov
# Initialize (generates KEK from your master password, locally)
ov vault init
# Store a secret (encrypted client-side before sending)
ov secret create my-app/production/DATABASE_URL
# Configure as MCP server for Claude Code
# Add to ~/.config/claude/claude_desktop_config.json:
# { "mcpServers": { "opaquevault": { "command": "ov", "args": ["mcp", "serve"] } } }

From that point forward: your secrets are encrypted with keys only you hold, the server stores ciphertext it cannot read, and Claude Code can use your credentials to run authenticated commands without ever seeing the values.

That’s zero-knowledge secrets management. Not a promise — an architecture.


OpaqueVault is a zero-knowledge, MCP-native secret manager for AI coding agents. Get started free →

Related: MCP Secret Manager — How OpaqueVault Works · MCP Server Secrets Management — The Complete Guide
