Integration

One line of code. Full split-knowledge privacy.

Swap your base_url. Your existing Anthropic SDK code routes through The Veil’s split-knowledge pipeline — PII detection, pseudonymisation, isolated inference, signed attestation. No rewrite. No new SDK. No migration project.

Last Updated: April 2026

The 1-Line Integration

If you’re already using the Anthropic SDK, integration is a config change — not a code change.

Python

# Before — direct to Anthropic
import anthropic
client = anthropic.Anthropic(api_key="sk-ant-...")

# After — through The Veil
import anthropic
client = anthropic.Anthropic(
    api_key="dsa_...",              # Your Veil API key
    base_url="https://gateway.your-veil.com"  # Your Veil Gateway
)

# Everything else stays the same
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Classify this ticket: ..."}]
)

TypeScript

// Before
const client = new Anthropic({ apiKey: "sk-ant-..." });

// After
const client = new Anthropic({
  apiKey: "dsa_...",
  baseURL: "https://gateway.your-veil.com"
});

// Everything else stays the same

Standard Anthropic Messages API. Your existing code, your existing prompts, your existing error handling — unchanged. The Veil intercepts, sanitises, isolates, signs, and returns.

Using OpenAI, Gemini, Mistral, or local models? The DSA Proxy API (Path 2 below) supports all providers with the same privacy guarantees. OpenAI SDK-compatible proxy is on the roadmap.

Supported providers: Anthropic Claude, OpenAI, Google Gemini, Mistral, and Ollama (local).

The Veil is AI-provider agnostic. Choose your model per request. Swap providers without changing your privacy posture.

What Happens Under the Hood

Every API call passes through a five-stage pipeline before reaching the AI model:

  1. Your code calls client.messages.create() — the request hits the Gateway.
  2. PII Firewall detects and strips identity fields. Three layers: named entity recognition (11+ custom recognisers), quasi-identifier risk scoring (k-anonymity, l-diversity), and an optional LLM PII Shield.
  3. ID Bridge generates an opaque pseudonymous token and stores the mapping in an encrypted vault. The AI will never see the real identity.
  4. Sandbox B runs inference on pseudonymised data in a network-isolated environment. The AI model processes the request without any path back to identity data.
  5. Response flows back through the Gateway. Tokens are re-linked to real identities in-memory (never persisted together), and a signed Veil Certificate is attached as proof.

Average overhead: single-digit milliseconds for PII detection. LLM inference time is unchanged.

Integration Paths

Find the path that fits your team:

Anthropic SDK Proxy

1 line of code · Minutes · Fastest

Change base_url to your Gateway. Your existing Anthropic SDK code — Python, TypeScript, Go — works unchanged. OpenAI SDK-compatible proxy is on the roadmap.

Teams already using the Anthropic SDK who want privacy without rewriting anything.

DSA Proxy API

~20 lines · Hours · Most control

Call POST /api/v1/proxy/messages with explicit field routing. Supports all 5 LLM providers — Claude, OpenAI, Gemini, Mistral, Ollama. Choose your model per request.

Teams who want explicit control over field routing, multi-provider support, or aren’t using the Anthropic SDK. Enables proving ground mode and compliance trace.

ServiceNow Integration

App import + 4 properties · 1–2 days · ITSM

Import the x_dsa_privacy scoped app via Studio. Set endpoint, API key, model, and enabled flag. The Veil intercepts Now Assist and AI Agent calls automatically.

ServiceNow customers who want AI-powered ITSM without PII leaving their environment.

MCP Server

Install + config · Hours · AI Agents

Four tools: dsa_messages (inference), dsa_analyze (simplified), dsa_certificate (compliance proof), dsa_usage (quota). Two modes: transparent provider backend or explicit MCP tool calls.

Teams using Claude Desktop or building AI agents with MCP who need compliant tool access.
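For Claude Desktop, registering an MCP server is a JSON config entry. The sketch below follows the standard claude_desktop_config.json shape; the command name, args, and environment variable names are assumptions, so consult your deployment's documentation for the real values:

```json
{
  "mcpServers": {
    "dsa-veil": {
      "command": "dsa-mcp-server",
      "args": ["--mode", "transparent"],
      "env": {
        "DSA_GATEWAY_URL": "https://gateway.your-veil.com",
        "DSA_API_KEY": "dsa_..."
      }
    }
  }
}
```

In transparent mode the server acts as a provider backend; in explicit mode the agent calls the four dsa_* tools directly.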

Scanner Assessment

CLI install + scan · 30 minutes · Pre-sales

Run dsa-scanner init, scan, and report. Connectors for ServiceNow (REST API), CSV, and JSON. See your PII exposure before committing to a deployment.

Assessment phase — understand your data exposure before any integration work.

DSA Proxy API Format

For teams who want explicit control over field routing, the DSA proxy format separates what the AI should read (prompt_template) from the data that needs sanitisation (context):

{
  "prompt_template": "Classify this incident: {{description}}",
  "context": {
    "description": "User John Smith ([email protected]) reports VPN issue from Berlin office"
  },
  "model": "claude-sonnet-4-20250514",
  "max_tokens": 1024
}
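Calling the proxy from Python is a plain HTTP POST. The sketch below assembles the payload shown above using only the standard library; the Bearer auth scheme and the send() helper are assumptions, so nothing is sent unless you call it against a live Gateway.

```python
import json
import urllib.request

GATEWAY = "https://gateway.your-veil.com"  # your Veil Gateway

def build_proxy_request(template: str, context: dict, model: str) -> dict:
    """Assemble a DSA proxy payload: a PII-free template plus raw context fields."""
    return {
        "prompt_template": template,
        "context": context,
        "model": model,
        "max_tokens": 1024,
    }

def send(payload: dict, api_key: str) -> bytes:
    """POST to /api/v1/proxy/messages (requires a live Gateway to run)."""
    req = urllib.request.Request(
        f"{GATEWAY}/api/v1/proxy/messages",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

payload = build_proxy_request(
    "Classify this incident: {{description}}",
    {"description": "User John Smith (john.smith@example.com) reports VPN issue"},
    "claude-sonnet-4-20250514",
)
```

Note the division of labour: identity data lives only in context; the template carries only the instruction and a placeholder.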

Supported models

  • Claude: claude-sonnet-4-20250514
  • OpenAI: gpt-4o
  • Google Gemini: gemini-2.5-pro
  • Mistral: mistral-large-latest
  • Ollama: ollama/llama3

Same request format, same privacy pipeline, same Veil Certificate — regardless of which model you choose.

The prompt_template is validated for PII (rejected if any is found). Context fields pass through the full 3-layer sanitisation pipeline. The AI receives the template with pseudonymised context — never the original identity data.
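The template/context split can be illustrated with a toy validator. The email-only check is a deliberate simplification of the three-layer firewall, and the TemplateRejected exception name is hypothetical:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class TemplateRejected(ValueError):
    """Raised when a prompt_template contains PII (hypothetical name)."""

def validate_template(template: str) -> str:
    # Templates must reference data via {{placeholders}}, never inline PII.
    if EMAIL.search(template):
        raise TemplateRejected("prompt_template must not contain PII")
    return template

validate_template("Classify this incident: {{description}}")  # passes

try:
    validate_template("Email jane@example.com about: {{description}}")
    rejected = False
except TemplateRejected:
    rejected = True
```

The practical consequence: instructions go in the template, data goes in context, and any identity data that sneaks into the template fails fast rather than leaking.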

Deployment Options

Docker Compose

Pilot / Small Teams
cp config.env.template config.env
# Fill in your values
./setup.sh
  • setup.sh auto-generates all crypto keys (Ed25519 signing, AES encryption, service tokens)
  • Validates Docker, Compose v2, openssl, python3
  • Full stack running in ~30 minutes
  • Includes local Ollama for LLM inference — no external API keys needed to test

Kubernetes / Helm

Production
helm install dsa ./deploy/helm \
  --namespace dsa-system \
  --create-namespace \
  --values deploy/helm/values-prod.yaml
  • Full Helm chart with subcharts per service
  • Node isolation support (Sandbox A and B on separate node pools)
  • 16+ NetworkPolicies across 7 namespaces
  • Feature flags for every service (audit.enabled, veil.enabled, etc.)

Air-Gapped / Sovereign

Government & Healthcare
  • Pre-built container images, offline Helm charts
  • Ollama or vLLM for local LLM inference — zero external API calls
  • No internet dependency. No data leaves your network. Ever.
  • Sovereign cloud compatible (OVHcloud, Scaleway, T-Systems, any EU provider)

Standard Kubernetes. Standard PostgreSQL. Standard Redis. No special hardware, no confidential computing enclaves, no vendor-specific silicon.

What You Get Back

Every response includes a link to its Veil Certificate — signed, timestamped, independently verifiable proof that identity data and AI processing were isolated throughout the pipeline.

{
  "status": "JOB_STATUS_COMPLETED",
  "result": { "text": "Category: Network, Priority: P2..." },
  "model_used": "claude-sonnet-4-20250514",
  "request_id": "req_4f3a1b2c8d9e",
  "veil": {
    "status": "available",
    "certificate_url": "/api/v1/veil/certificate/req_4f3a1b2c8d9e",
    "summary_url": "/api/v1/veil/certificate/req_4f3a1b2c8d9e/summary"
  }
}

Three certificate views: DPO summary for compliance officers, technical proof for security engineers, regulatory mapping (GDPR Art. 25/32, EU AI Act Art. 10/15) for auditors.
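To make "signed and independently verifiable" concrete, here is a simplified sketch of detached-signature verification. Production signing uses Ed25519 keys (generated by setup.sh); this sketch substitutes stdlib HMAC-SHA256 so it runs with no third-party dependencies, and the certificate fields are illustrative.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-signing-key"  # stand-in for the Gateway's real Ed25519 key

def sign_certificate(cert: dict) -> str:
    """Canonicalise the certificate, then sign it (HMAC stands in for Ed25519)."""
    canonical = json.dumps(cert, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()

def verify_certificate(cert: dict, signature: str) -> bool:
    """A verifier recomputes the signature and compares in constant time."""
    return hmac.compare_digest(sign_certificate(cert), signature)

cert = {"request_id": "req_4f3a1b2c8d9e", "pipeline": "isolated"}
sig = sign_certificate(cert)
ok = verify_certificate(cert, sig)
tampered = verify_certificate({**cert, "pipeline": "direct"}, sig)
```

Any change to the certificate body invalidates the signature, which is what lets an auditor check the proof without trusting the party that produced it.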

API Reference

Full API documentation is available via our OpenAPI 3.1 specification covering 30+ endpoints with request/response schemas, authentication details, tier requirements, and error codes.

Request API Documentation

Start with an assessment. Or just change one line.