Trusting your AI provider is not a compliance strategy.

The Veil

The AI sees the data. Never the person. Enforced by infrastructure, not policy.

Deploys to your infrastructure. We never see your data. Not a SaaS — not now, not ever.

GDPR Art. 25 · EU AI Act Art. 10 · Infrastructure-level enforcement

EU AI Act enforcement in --- days

Is your AI infrastructure ready?

The problem

Redaction is a promise. The Veil is proof.

Every enterprise is sending sensitive data to cloud AI providers — and trusting their terms of service to keep it safe. Providers change policies, get breached, get subpoenaed. Your compliance posture shouldn't depend on someone else's security team. GDPR fines hit €2.3B in 2025 alone. EU AI Act enforcement begins August 2026 with penalties up to 3% of global revenue.

The standard approach — strip names before sending data to the AI — is a software promise. A bug in the redaction code leaks customer names. A developer changes the rules. A compromised service skips redaction entirely. You only find out when the regulator does.

The Veil makes it architecturally impossible. The AI and the identity data are in separate network zones that cannot communicate — enforced at the infrastructure level, not application code. We can prove it with one command, in any deployment, at any time.
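The "one command" check can be sketched as a connectivity probe: from inside the AI zone, attempt a TCP connection to the Identity Vault and expect it to be refused. This is an illustrative stdlib sketch, not the product's actual verification tool; the service name and port in the comment are hypothetical.

```python
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the connection is refused or unreachable,
    i.e. the isolation boundary held; False if it succeeded."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # the AI zone can reach the vault: isolation broken
    except OSError:
        return True       # refused / unreachable / timed out: isolation held

# Run from inside the AI zone, e.g. (hypothetical service name):
#   probe("identity-vault.veil-identity.svc.cluster.local", 5432)
# must always return True if the NetworkPolicies hold.
```

If the probe ever returns False, the isolation guarantee is broken regardless of what the application code promises.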

See it in action

A support ticket enters. Identity never reaches the AI.

The AI classified the ticket, suggested a resolution — and never learned the employee’s name.

ServiceNow · Incoming Ticket
Reporter
Julia Bergmann [PII]
Device
ThinkPad X1 Carbon (SN: PF3K7N2) [PII]
Location
Berlin HQ, Floor 3 [PII]
Issue
VPN connection drops after 10 minutes of inactivity. Happens since last week's update.
Claude · AI Analysis
What the AI received
tkn_8f3a92c1 · ThinkPad X1 · EU-Central
AI Analysis
Click below to process this ticket through The Veil
PII stripped → token assigned → AI responds
or

Enter your email for a live Claude analysis — unique every time

How it works

Two worlds. One bridge. Zero exposure.

The AI and the identity data are in separate network zones that cannot communicate. Enforced via Docker network segmentation and Kubernetes NetworkPolicies — not application code.

Identity Vault

Knows who. Names, emails, account numbers. Encrypted at the column level with row-level access control. OIDC authentication.

ID Bridge

The only link. Generates opaque pseudonymous tokens with time-based rotation. Re-linkage requires legal basis, dual approval, and audit trail.

AI Processing

Knows what. Patterns, risks, insights. Sees tokens only, never identities. Works with Anthropic, OpenAI, Mistral, or local models.

Read the full architecture deep-dive

PII Detection

Three layers of PII detection. Zero guesswork.

Every piece of user-submitted data passes through a multi-layer sanitization pipeline before reaching the AI. Each layer catches what the previous one missed. Operator-controlled prompt templates are static instructions; the gateway rejects any template that contains hard identifiers.

1

Presidio NER

Active

Named entity recognition with spaCy models for German, English, and French. 11+ custom recognizers including Sozialversicherungsnummer (SVNR), Steuer-ID, Personalausweisnummer, Fallnummer, IBAN, US SSN, Medical Record Numbers, and ServiceNow ticket patterns. Cologne phonetics and Levenshtein distance scoring detect PII even with misspellings and abbreviations.
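To make the pattern layer concrete, here is a stdlib-only sketch of regex recognizers with span redaction. This is not Presidio itself, and it is far simpler than the checksum-validated, context-scored, phonetics-aware recognizers described above; the patterns are illustrative.

```python
import re
from typing import NamedTuple

class PIIMatch(NamedTuple):
    entity: str
    start: int
    end: int

# Illustrative patterns only; real recognizers add checksum validation,
# context words, and confidence scoring.
RECOGNIZERS = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DE_PHONE": re.compile(r"\+49[ \d/-]{8,}"),
}

def detect_pii(text: str) -> list[PIIMatch]:
    matches = []
    for entity, pattern in RECOGNIZERS.items():
        for m in pattern.finditer(text):
            matches.append(PIIMatch(entity, m.start(), m.end()))
    return sorted(matches, key=lambda m: m.start)

def redact(text: str, token: str = "<PII>") -> str:
    # Replace from the end so earlier spans keep valid offsets.
    for m in reversed(detect_pii(text)):
        text = text[:m.start] + token + text[m.end:]
    return text
```

In the real pipeline the redacted spans are replaced with pseudonymous tokens rather than a fixed placeholder, so the AI can still reason about entity identity within a request.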

2

QI Risk Engine

Active

Quasi-identifier scoring detects re-identification risk from field combinations — even when no single field is PII. Uses k-anonymity estimation, l-diversity checks, and differential privacy budget tracking to score every record. Example: 'Female, age 67, ZIP 60314' scores at 0.72 risk and is auto-generalized to 'age band 65-69, ZIP prefix 603.' No other AI privacy platform offers automated quasi-identifier scoring.
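The k-anonymity estimation and auto-generalization described above can be sketched as follows. The 5-year age band and 3-digit ZIP prefix match the example in the text; everything else (function names, record shape) is an illustrative assumption.

```python
from collections import Counter

def k_anonymity(records: list[dict], quasi_ids: list[str]) -> int:
    """Smallest equivalence-class size over the quasi-identifier columns.
    k == 1 means at least one record is unique, i.e. re-identifiable."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())

def generalize(record: dict) -> dict:
    """Coarsen quasi-identifiers: exact age -> 5-year band, ZIP -> 3-digit prefix."""
    out = dict(record)
    age = out.pop("age")
    lo = age - age % 5
    out["age_band"] = f"{lo}-{lo + 4}"
    out["zip"] = out["zip"][:3]
    return out

# Example from the text: ("female", 67, "60314") may be unique in a cohort;
# generalization yields age_band "65-69" and ZIP prefix "603".
```

The real engine additionally tracks l-diversity and a differential privacy budget; this sketch shows only the core k-anonymity/generalization step.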

3

LLM PII Shield

Deep scan

Fine-tuned Qwen 2.5 7B model running on a dedicated GPU instance inside the identity sandbox. Catches context-dependent PII that pattern matching misses. Raises detection from ~75% to >90%.

Detection rules are config-driven. PII detection supports German, English, and French text with language-specific context models and lemma-based boosting. A default config ships with every engagement; the sanitizer config is scoped and tuned per workflow during the Assessment.
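To illustrate what "config-driven" detection rules might look like, here is a hypothetical sanitizer config fragment. The schema and field names are invented for illustration and are not the product's actual format.

```yaml
# Hypothetical sanitizer config sketch -- illustrative schema only.
# Scoped and tuned per workflow during the Assessment.
language: de
recognizers:
  - entity: DE_STEUER_ID
    enabled: true
    score_threshold: 0.6
  - entity: IBAN
    enabled: true
qi_engine:
  k_min: 5            # generalize or reject records below this k-anonymity
  generalize:
    age: band_5       # 5-year age bands
    zip: prefix_3     # keep 3-digit ZIP prefix
```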

The Veil Protocol

Signed attestation that isolation held.

Every request generates a Veil Certificate — a signed, timestamped, independently verifiable attestation that identity data and AI processing were isolated throughout the entire pipeline.

Signed claims across the pipeline

The Gateway emits an Ed25519-signed, request-level Veil claim and fails closed if that evidence cannot be recorded. Bridge, Sanitizer, Sandbox B, and Audit emit additional signed claims on a best-effort basis; the Witness runs all five consistency checks on whatever arrives and marks each certificate FULL or PARTIAL.

External timestamping

Certificates are timestamped via RFC 3161 TSA and logged to Sigstore Rekor transparency log. Tamper-evident, externally verifiable, court-admissible.

Three views for three audiences

DPO summary for compliance officers. Technical proof for security engineers. Regulatory mapping (GDPR Art. 25/32, EU AI Act Art. 10/15) for auditors.

The Gateway fails closed: if it cannot record a signed, request-level Veil claim, the inference is refused at the edge. The protocol proves isolation happened; infrastructure (Kubernetes NetworkPolicies, network segmentation) is what enforces it.
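The claim-and-verify flow can be sketched as follows. One deliberate stand-in: HMAC-SHA256 keeps this sketch dependency-free, whereas the real pipeline signs with Ed25519 and anchors externally; the field names are illustrative.

```python
import hashlib, hmac, json, time

SIGNING_KEY = b"demo-key"  # stand-in; production uses an Ed25519 private key

def emit_veil_claim(request_id: str, checks: dict[str, bool]) -> dict:
    """Build and sign a request-level claim; fail closed if signing fails."""
    claim = {
        "request_id": request_id,
        "timestamp": time.time(),
        "checks": checks,  # e.g. {"no_pii_in_payload": True}
    }
    body = json.dumps(claim, sort_keys=True).encode()
    try:
        claim["sig"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    except Exception:
        # Fail closed: no recorded evidence means no inference.
        raise RuntimeError("refusing inference: cannot record Veil evidence")
    return claim

def verify_claim(claim: dict) -> bool:
    """Witness-side check: the signature must cover every field except itself."""
    unsigned = {k: v for k, v in claim.items() if k != "sig"}
    body = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim.get("sig", ""))
```

Any tampering with a recorded claim, even flipping a single boolean check, invalidates the signature, which is what makes the certificate tamper-evident.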

Trust

Built to be audited.

EU
jurisdiction
Headquartered in Germany. No foreign government data access obligations.
$1B+
AI governance market by 2030
Gartner — enterprises are buying infrastructure-level privacy
3,000+
Automated tests
Go, Python, and TypeScript — unit, integration, and invariant tests
16
NetworkPolicies
Infrastructure-level isolation in Kubernetes
4
Deployment architectures
VPC, air-gapped, sovereign, consulting
AI providers supported
Anthropic, OpenAI, Mistral, or local models
TLA+
Formally verified isolation
Isolation invariant mathematically proven via TLC model checker across all reachable states
Ed25519
Veil Certificates
Signed isolation evidence on every request — anchoring attempted asynchronously via Rekor/TSA; anchor state exposed as PENDING_ANCHOR, ANCHORED, or ANCHOR_FAILED on the certificate
GDPR Art. 25 · GDPR Art. 32 · EU AI Act Art. 10 · EU AI Act Art. 14

Use-Case Analyzer

Can The Veil help your team?

Describe your scenario — our AI evaluates honestly whether The Veil fits.

Ready to deploy AI on sensitive data?

The Veil deploys on your infrastructure in hours. Not a SaaS — we never see your data. No vendor lock-in. No foreign jurisdiction. Runs on standard Kubernetes — no special hardware required. In split deployment, only sanitized text, opaque pseudonymous tokens, and content hashes cross infrastructure boundaries. Fully local / air-gapped is the recommended default for healthcare and government.