Use Case · Financial Services
AML, Fraud Detection & Credit Risk Scoring—Without Exposing Customer Identity to the AI
The Veil lets financial institutions run AI-driven AML monitoring, fraud detection, and credit risk scoring on pseudonymised token streams, so the model never sees customer names, account numbers, or other PII. The architecture aligns with the technical pseudonymisation and data-minimisation measures that GDPR, AMLD6, DORA, and PSD2 expect; this is architectural alignment, not a held certification under any of these regimes.
The Compliance Tension in Financial AI
Financial regulators demand AI-powered surveillance (AMLD6 Article 8 mandates effective transaction monitoring), while privacy regulators demand data minimisation (GDPR Article 5(1)(c)). Feed raw customer data into an AI model and you create a single point where one breach exposes everything. Refuse to deploy AI and you fall behind on the AML detection rates regulators now expect.
The Veil resolves this tension by splitting identity from analytical payload at the architecture level, not the policy level.
How The Veil Works for Financial Services
| Stage | What Happens | Where |
|---|---|---|
| 1. Ingestion | Transaction records enter Sandbox A. Customer names, account numbers, and IBANs are replaced with cryptographic tokens. | Sandbox A (PII) |
| 2. AI Analysis | Pseudonymised data streams flow to Sandbox B. The AI model scores risk, detects anomalous patterns, and flags potential fraud—all on tokens, never on identities. | Sandbox B (AI) |
| 3. Risk Output | AI returns risk scores and behavioural clusters attached to tokens. No PII exists in Sandbox B at any point. | Sandbox B (AI) |
| 4. Re-linkage | Only for SAR (Suspicious Activity Report) filing: tokens are re-linked to customer identity inside Sandbox A, requiring four-eyes approval from two authorised compliance officers. | Bridge (controlled) |
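To make the two-sandbox flow concrete, here is a minimal Python sketch of stages 1 and 4, assuming deterministic HMAC-SHA256 tokenisation and an in-memory vault. The names (`TokenVault`, `tokenise`, `relink`) are illustrative, not The Veil's actual API.

```python
import hmac
import hashlib

class TokenVault:
    """Sandbox A component holding the only token-to-identity mapping.

    Illustrative sketch, not The Veil's API. Deterministic HMAC-SHA256
    tokenisation means the same IBAN always maps to the same token, so
    Sandbox B can correlate transactions without ever seeing the IBAN.
    """

    def __init__(self, secret_key: bytes):
        self._key = secret_key               # never leaves Sandbox A
        self._mapping: dict[str, str] = {}   # token -> original PII

    def tokenise(self, pii_value: str) -> str:
        # Truncated to 16 hex chars purely for readability in this sketch.
        token = hmac.new(self._key, pii_value.encode(), hashlib.sha256).hexdigest()[:16]
        self._mapping[token] = pii_value
        return token

    def relink(self, token: str, approvals: list[str]) -> str:
        # Four-eyes control: two distinct authorised approvers are
        # required before a token resolves back to customer identity.
        if len(set(approvals)) < 2:
            raise PermissionError("four-eyes rule: two distinct approvers required")
        return self._mapping[token]

# Stage 1, Sandbox A: strip identity before anything crosses the boundary.
vault = TokenVault(secret_key=b"rotate-me-inside-an-hsm")
record = {"iban": "DE89370400440532013000", "amount": 9500.0, "ts": "2024-03-01T10:15:00Z"}
pseudonymised = {"token": vault.tokenise(record["iban"]),
                 "amount": record["amount"], "ts": record["ts"]}
# Only `pseudonymised` flows to Sandbox B; the vault and key stay behind.

# Stage 4, bridge: re-linkage for SAR filing needs two compliance approvals.
identity = vault.relink(pseudonymised["token"], approvals=["officer_a", "officer_b"])
```

Deterministic tokenisation is the design point: the same account always yields the same token, so the AI can build longitudinal behaviour profiles in Sandbox B without ever holding an identity.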
Financial AI Use Cases
AML Transaction Monitoring
AI processes pseudonymised transaction flows—amounts, timing patterns, geographic routing, counterparty tokens—to identify structuring, layering, and integration behaviours. The model builds risk profiles on tokens, not on people.
- Token-level velocity and volume analysis
- Cross-border routing pattern detection without country-of-residence exposure
- Network graph analysis on pseudonymised counterparty relationships
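As a concrete illustration of the velocity and volume analysis above, the sketch below flags tokens that make repeated just-under-threshold payments inside a rolling window. The €10,000 reporting line, 72-hour window, and three-hit minimum are assumed example parameters, not shipped defaults.

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10_000.0          # illustrative reporting threshold
WINDOW = timedelta(hours=72)  # illustrative lookback window
MIN_HITS = 3                  # illustrative alert minimum

def flag_structuring(events: list[dict]) -> set[str]:
    """events: pseudonymised records like {"token": str, "amount": float, "ts": datetime}."""
    hits: dict[str, list[datetime]] = defaultdict(list)
    for e in events:
        # Payments sitting just under the reporting line are the classic
        # structuring signature.
        if 0.9 * THRESHOLD <= e["amount"] < THRESHOLD:
            hits[e["token"]].append(e["ts"])

    flagged = set()
    for token, stamps in hits.items():
        stamps.sort()
        # Slide over the sorted timestamps: MIN_HITS sub-threshold
        # payments inside WINDOW flags the token, never a person.
        for i in range(len(stamps) - MIN_HITS + 1):
            if stamps[i + MIN_HITS - 1] - stamps[i] <= WINDOW:
                flagged.add(token)
                break
    return flagged
```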
Fraud Detection with Pseudonymised Behavioural Patterns
Behavioural biometrics—session timing, interaction cadence, device fingerprints—are hashed and tokenised before the fraud model sees them. The AI learns "Token-7a3f behaves differently from its baseline," never "John Smith logged in from an unusual device."
- Real-time anomaly scoring on tokenised session data
- Device fingerprint matching without storing raw device IDs in the AI sandbox
- PSD2 Strong Customer Authentication (Article 97) signals processed as abstract features
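A minimal sketch of the per-token baseline idea above, assuming typing cadence as the behavioural feature and a 3-sigma alert threshold; both are illustrative choices. Raw device IDs are salted and hashed before they cross into Sandbox B.

```python
import hashlib
import statistics

def hash_fingerprint(raw_device_id: str, salt: bytes) -> str:
    # Only the salted hash enters Sandbox B; the raw device ID stays in Sandbox A.
    return hashlib.sha256(salt + raw_device_id.encode()).hexdigest()

def anomaly_score(token_baseline: list[float], session_value: float) -> float:
    """Z-score of a session feature against the token's own history:
    'this token deviates from its baseline', never 'John Smith did'."""
    mu = statistics.mean(token_baseline)
    sigma = statistics.stdev(token_baseline) or 1.0  # guard against zero variance
    return abs(session_value - mu) / sigma

# Example: keystroke cadence history (ms) for one token, then a new session.
baseline = [182.0, 176.5, 190.2, 185.1, 179.8]   # illustrative values
if anomaly_score(baseline, session_value=94.3) > 3.0:
    print("anomalous session for this token; escalate for review")
```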
Credit Risk Scoring Without Identity Exposure
Credit models receive income bands, repayment history tokens, and utilisation ratios—never account holder names, dates of birth, or national ID numbers. The model outputs a risk tier attached to a token; the lending team re-links only for approved credit decisions.
- GDPR Article 22 alignment: automated decision-making runs on pseudonymised inputs, with human approval required at re-linkage
- Model explainability outputs reference feature categories, not personal attributes
- Re-linkage for credit offer generation requires authorised workflow approval
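To illustrate the shape of the inputs and outputs, here is a toy scorecard operating purely on pseudonymised feature categories. The field names, weights, and tier cut-offs are invented for the example and bear no relation to a production model.

```python
from dataclasses import dataclass

@dataclass
class CreditFeatures:
    """Pseudonymised model inputs: feature categories only, no PII."""
    token: str
    income_band: int           # 1 (lowest) .. 5 (highest)
    utilisation_ratio: float   # revolving credit used / available
    missed_payments_12m: int

def risk_tier(f: CreditFeatures) -> str:
    # Toy linear scorecard; explainability outputs can cite these
    # feature categories without referencing any personal attribute.
    score = f.income_band * 20 - f.utilisation_ratio * 40 - f.missed_payments_12m * 15
    if score >= 60:
        return "A"
    if score >= 30:
        return "B"
    return "C"

applicant = CreditFeatures(token="tok_7a3f9c", income_band=4,
                           utilisation_ratio=0.35, missed_payments_12m=0)
# The output is a tier attached to a token; identity is re-linked only
# through the approved lending workflow in Sandbox A.
print(applicant.token, risk_tier(applicant))   # -> tok_7a3f9c A
```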
Regulatory Alignment Mapping
| Regulation | Requirement | The Veil Response |
|---|---|---|
| GDPR Art. 5(1)(c) | Data minimisation | AI sandbox receives only pseudonymised tokens—minimum data necessary for analytical purpose |
| GDPR Art. 25 | Data protection by design | Architectural separation enforces pseudonymisation before AI processing, not as an afterthought |
| GDPR Art. 22 | Rights related to automated decision-making | Re-linkage for credit decisions requires human approval; model inputs are auditable feature sets, not raw PII |
| AMLD6 Art. 8 | Effective transaction monitoring systems | AI monitoring operates on full behavioural data (amounts, timing, routing)—only identity is removed |
| DORA Art. 6 | ICT risk management framework | Sandbox isolation limits blast radius; a breach of the AI environment yields no customer-identifiable data |
| PSD2 Art. 97 | Strong Customer Authentication | Authentication signals processed as abstract feature vectors in Sandbox B; raw SCA data stays in Sandbox A |
These are architectural and evidence-layer alignments, not held certifications. Certification roadmap available on request.
Breach Scenario: What an Attacker Gets
If an attacker compromises the AI sandbox (Sandbox B), they obtain:
- Risk scores attached to opaque cryptographic tokens
- Behavioural pattern clusters with no link to real-world identities
- Transaction flow graphs where every node is a token, not a person or company
No customer names. No account numbers. No IBANs. The token-to-identity mapping exists only in Sandbox A, which in this scenario remains uncompromised.
Key Takeaways for Financial CISOs
- The Veil aligns with both AML monitoring mandates and GDPR data-minimisation expectations at the architecture level, not the policy level
- Four-eyes re-linkage for SAR filing creates an auditable, regulator-friendly workflow
- Sandbox isolation limits breach blast radius, addressing DORA operational-resilience requirements
- Credit risk models that never see PII reduce GDPR Article 22 automated decision-making risk