Trust Center

Security

How The Veil protects sensitive data at the infrastructure level — not just the application layer.

Last Updated: April 2026

  • 16+ network policies
  • AES-256 column encryption
  • 17 red team findings closed
  • 0 data leaves your infra
  • TLA+ formally verified isolation
  • 3,000+ automated tests

Architecture-Level Isolation

The Veil enforces separation between identity data and AI processing at the infrastructure level. This is not an application-level control that can be bypassed by a misconfigured endpoint or a code bug.

  • Sandbox A (Identity) holds personal data — names, emails, account numbers. It has no network path to the AI processing layer.
  • Sandbox B (AI) processes data using only opaque pseudonymous tokens. It cannot resolve tokens to identities even if compromised.
  • ID Bridge is the sole link between sandboxes, issuing non-derivable tokens with governed re-linkage requiring multi-party approval.
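The token property described above can be sketched in a few lines. This is an illustrative model only, not the ID Bridge's actual API: because the token is generated randomly rather than derived from the customer ID, possession of the token alone reveals nothing about the identity behind it.

```python
import secrets

# Hypothetical sketch; names and storage are assumptions, not the real API.
vault = {}  # token -> identity map, held only inside Sandbox A

def issue_token(customer_id: str) -> str:
    # A cryptographically random token has no mathematical relationship
    # to the customer ID, so it cannot be derived or reversed without
    # access to the vault in Sandbox A.
    token = secrets.token_urlsafe(32)
    vault[token] = customer_id
    return token
```

Two tokens issued for the same identity are independent random values, so even correlating tokens across requests does not expose the underlying record.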

Network Security

The sandbox isolation invariant is enforced at the network layer in every deployment mode.

Kubernetes (Production)

16+ Kubernetes NetworkPolicy resources enforce pod-level traffic rules across 7 namespaces. Sandbox A pods cannot open connections to Sandbox B pods, and vice versa. Each service has explicit ingress and egress rules — default-deny policies block all traffic not explicitly permitted. The Gateway is the only service that communicates with both sandboxes, and it never forwards identity and token data to the same downstream service in a single request.
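A default-deny policy of the kind described above looks roughly like this. The resource name and namespace are illustrative, not the actual policy set:

```yaml
# Illustrative sketch only; names and namespaces are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: sandbox-a
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes: [Ingress, Egress]
  # No ingress or egress rules are listed, so all traffic is denied
  # unless a separate policy explicitly permits it.
```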

Docker (Development)

14 isolated Docker bridge networks enforce the same sandbox boundary as production. Sandbox A and Sandbox B never share a network. Per-service spoke networks for audit, witness, and bridge-AI connections ensure even auxiliary services cannot bridge the two zones. The invariant is enforced at the Docker network layer, not in application code.
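A Docker Compose fragment illustrating the pattern might look like this. Service and network names are hypothetical; only the shape matters: each service joins exactly one sandbox network, so no network path exists between the two zones.

```yaml
# Hypothetical compose fragment; service and network names are illustrative.
networks:
  sandbox-a-net:
    internal: true        # no outbound access from the identity zone
  sandbox-b-net:
    internal: true
services:
  identity-store:
    image: example/identity-store
    networks: [sandbox-a-net]   # not a member of sandbox-b-net
  ai-worker:
    image: example/ai-worker
    networks: [sandbox-b-net]   # cannot reach identity-store
```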

Formal Verification

The Veil's core isolation invariant is not just tested — it is mathematically proven.

We maintain a TLA+ formal specification that models every system state reachable by the platform. The TLC model checker verifies that a single invariant holds across all states:

∀ m ∈ messages : m.destination = SandboxB ⟹ m.dataType ∉ {CustomerID, RawPII, IdentityContext}

No message carrying identity data can reach the AI processing sandbox. This is verified at the specification level — before code is written, before tests are run, before deployment.
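The verified invariant has a direct runtime analogue. The sketch below is not the TLA+ specification itself, and the message and type names are assumptions; it simply shows the predicate that must hold for every message in every reachable state:

```python
# Runtime analogue of the verified invariant; field names are hypothetical.
IDENTITY_TYPES = {"CustomerID", "RawPII", "IdentityContext"}

def violates_isolation(message: dict) -> bool:
    """True if a message would carry identity data into Sandbox B."""
    return (message["destination"] == "SandboxB"
            and message["dataType"] in IDENTITY_TYPES)

messages = [
    {"destination": "SandboxB", "dataType": "PseudonymToken"},
    {"destination": "SandboxA", "dataType": "RawPII"},
]
assert not any(violates_isolation(m) for m in messages)
```

The difference is scope: a runtime check like this inspects the messages that actually occur, while the TLC model checker proves the predicate over every state the specification can reach.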

No competitor in the AI privacy space offers formal verification of their isolation boundary. Testing checks the paths you thought of. Formal verification checks all of them.

Data Encryption

  • At rest: AES-256 column-level encryption on all PII fields in Sandbox A. Encryption keys are managed via HashiCorp Vault, AWS KMS, or Azure Key Vault depending on deployment target. Database-level encryption (LUKS / dm-crypt) provides a second layer.
  • In transit: All inter-service communication uses gRPC over TLS. External traffic terminates at the Gateway with TLS 1.2+ enforced. Certificate rotation is automated.
  • Sessions: Session tokens are encrypted with AES-256-GCM. Tokens are time-bounded and scoped to individual processing requests. No long-lived credentials are stored client-side.

Audit Trail

Decision records are appended to a cryptographic hash chain. The application role has INSERT and SELECT only. Deletion is permitted only for GDPR Article 17 erasure requests and retention expiry, executed through audited security-definer functions that write a signed erasure event to a separate append-only log alongside every deletion.

  • Each log entry includes a SHA-256 hash of the previous entry, forming a verifiable chain.
  • All re-linkage requests, token rotations, data access events, and administrative actions are recorded.
  • Auditors can independently verify chain integrity without access to the underlying data.
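The chaining mechanism above can be sketched in a few lines. This is a simplified model of the general technique, not the platform's actual log format: each entry embeds the hash of its predecessor, so altering any record invalidates every later hash.

```python
import hashlib
import json

def append_entry(chain: list, event: dict) -> None:
    # Each entry embeds the SHA-256 hash of the previous entry,
    # so tampering with any record breaks every subsequent hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    chain.append({"event": event, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    # Recompute every hash from the genesis value forward.
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"event": entry["event"], "prev": prev},
                          sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Note that verification needs only the hashes and entry structure, which is why an auditor can confirm integrity without reading the underlying data.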

Access Governance

Re-linking a pseudonymous token to a real identity is the most sensitive operation in the platform. It is governed by multiple layers of control:

  • Four-eyes approval: every re-linkage request requires sign-off from two independently authorized individuals.
  • Scoped attributes: approvals grant access to specific data fields, not the full identity record.
  • Jurisdiction controls: re-linkage can be restricted by legal jurisdiction, ensuring data sovereignty requirements are met.
  • Break-glass mechanism: emergency access is available but triggers elevated audit logging, mandatory post-access review, and automatic notification to the data protection officer.
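The four-eyes and scoped-attribute controls can be sketched together. Class and field names here are hypothetical, not the platform's API; the point is that approvals from the same person do not count twice, and a request names the specific fields it unlocks rather than the whole record.

```python
# Illustrative sketch of four-eyes approval; names are assumptions.
class RelinkRequest:
    def __init__(self, token: str, fields: set):
        self.token = token
        self.fields = fields            # scoped attributes, not full record
        self.approvers: set = set()

    def approve(self, user: str) -> None:
        # A set deduplicates, so repeat sign-offs by one person are inert.
        self.approvers.add(user)

    def is_authorized(self) -> bool:
        # Two independently authorized individuals must sign off.
        return len(self.approvers) >= 2

req = RelinkRequest("tok_abc", {"email"})
req.approve("alice")
req.approve("alice")                    # same person again: still one approver
assert not req.is_authorized()
req.approve("bob")
assert req.is_authorized()
```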

Security Testing

The platform has undergone adversarial red team testing targeting the sandbox isolation boundary. 17 findings were identified and remediated, including:

  • Closed: DNS side-channel attacks between sandboxes
  • Closed: Cloud metadata service exposure from AI pods
  • Closed: PII leakage in application and infrastructure logs
  • Closed: Token rotation race conditions and replay attacks
  • Closed: Encoding bypass attacks against the PII firewall

All 17 findings were closed before the current release. Red team testing is conducted on each major release.

Compliance Posture

The Veil is designed to satisfy the technical requirements of EU regulation. The mappings below are architectural and evidence-layer mappings, not held certifications.

GDPR

  • Art. 25: Data protection by design and by default. Pseudonymisation is the default processing mode. The AI sandbox structurally cannot access identity data beyond what is required.
  • Art. 32: Security of processing. Infrastructure-level isolation, column-level encryption, append-only audit logs, and access governance controls address the technical and organisational measures required.

EU AI Act

  • Art. 10: Data governance for high-risk AI systems. The split-knowledge architecture ensures training and inference data is processed without exposing the identity of data subjects.
  • Art. 15: Transparency and logging. The cryptographic audit trail provides a complete, immutable record of all AI system decisions and data flows.

The current launch vertical is ITSM / ServiceNow-style internal workflows; broader sector-specific attestations (DORA, MDR, eIDAS) are follow-on work and are not claimed today.

Deployment Model

Raw identity data never leaves your environment. In split deployment, only sanitized text, opaque pseudonymous tokens, and content hashes cross infrastructure boundaries — your sensitive data stays where it belongs.

  • Deployed via Helm charts to any Kubernetes cluster — cloud, on-premise, or air-gapped.
  • Fully local / air-gapped deployment is the recommended default for healthcare and government workloads: all components, including LLM inference via Ollama or vLLM, run inside the customer boundary with zero external calls.
  • Sovereign cloud compatible. Runs on national cloud infrastructure (e.g., OVHcloud, Scaleway, T-Systems) without modification.
  • Split deployment option forwards only sanitized text, opaque pseudonymous tokens, and content hashes to remote Sandbox B — never raw identity content. Sandbox A, the ID Bridge, the Sanitizer, and the Audit service always stay inside the customer boundary regardless of deployment shape.
  • No vendor lock-in. Built on PostgreSQL, Redis, and standard Kubernetes primitives.

Security Contact

To report a security vulnerability or request additional technical documentation, contact: