EU AI Act Compliance
August 2, 2026. Are you ready?
The EU AI Act's obligations for high-risk AI systems take effect on August 2, 2026. High-risk AI systems must demonstrate human oversight, decision logging, and data governance. This hub helps you assess your readiness, understand the requirements, and take action.
What the EU AI Act Requires
Article 14 — Human Oversight
High-risk AI systems must be designed so that natural persons can effectively oversee them. Demonstrating that requires records sufficient to verify compliance: not application logs, but forensic decision provenance.
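To make "decision provenance" concrete, here is a minimal sketch of a self-verifying decision record. The field names (`model_id`, `human_reviewer`, and so on) are illustrative assumptions, not The Veil's actual schema:

```python
# Hypothetical decision record with a content digest, illustrating the kind
# of provenance an oversight audit can verify. Field names are assumptions.
import hashlib
import json
from datetime import datetime, timezone

def make_decision_record(model_id, input_summary, decision, reviewer=None):
    """Build a decision record and seal it with a SHA-256 content digest."""
    record = {
        "model_id": model_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_summary": input_summary,   # what the system saw
        "decision": decision,             # what it decided
        "human_reviewer": reviewer,       # who could intervene (Art. 14)
    }
    # Digest over the canonical JSON lets an auditor detect any later edit
    # by recomputing the hash over the same fields.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(canonical).hexdigest()
    return record

rec = make_decision_record("credit-model-v3", "applicant 1042", "approve",
                           reviewer="j.doe")
```

A verifier recomputes the digest from the other fields; any mismatch means the record was altered after it was written.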
Article 12 — Automatic Logging
AI systems must automatically record events during operation. Logs must be tamper-evident and retained for the system's lifetime or a regulatory minimum.
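Tamper evidence is usually achieved by chaining records together, so that altering any past event invalidates everything after it. The following is a minimal hash-chain sketch of that property only; it is not Veil Witness, and a production design would add signatures and external timestamps:

```python
# Minimal tamper-evident log: each entry commits to the previous entry's
# hash, so modifying any recorded event breaks the chain on verification.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

class HashChainLog:
    def __init__(self):
        self.entries = []
        self._last_hash = GENESIS

    def append(self, event: dict) -> str:
        payload = json.dumps({"event": event, "prev": self._last_hash},
                             sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"event": event,
                             "prev": self._last_hash,
                             "hash": entry_hash})
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash from the genesis value; any edit fails."""
        prev = GENESIS
        for e in self.entries:
            payload = json.dumps({"event": e["event"], "prev": prev},
                                 sort_keys=True)
            if e["prev"] != prev or \
               e["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True
```

Verification walks the chain from the start, so an auditor needs only the log itself to detect modification, insertion, or deletion of past events.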
Article 10 — Data Governance
Training and operational data must be managed under appropriate governance practices. Identity data used by AI systems must be minimised, with separation enforced between identity data and the data the model sees.
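One common pattern for minimisation with enforced separation is keyed pseudonymisation: direct identifiers are replaced before data reaches the model, and the key lives with a separate service. A sketch under that assumption (the key name and field list are illustrative, not The Veil Core's design):

```python
# Hypothetical identity-minimisation step: direct identifiers become keyed
# pseudonyms, stable for linkage but irreversible without the key, which is
# assumed to be held by a separate identity service.
import hashlib
import hmac

SEPARATION_KEY = b"held-by-a-separate-identity-service"  # assumption

def minimise(record: dict, identifier_fields=("name", "email")) -> dict:
    """Return a copy of record with identifier fields pseudonymised."""
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            out[field] = hmac.new(SEPARATION_KEY, out[field].encode(),
                                  hashlib.sha256).hexdigest()[:16]
    return out
```

The same input always maps to the same pseudonym, so records remain linkable for analytics, while re-identification requires access to the separately held key.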
Start the Conversation
Readiness Intake (10 questions)
A self-guided intake covering Articles 14, 12, and 10. It is not a productized self-serve assessment: your answers scope the conversation for a full Assessment engagement. The Veil is not self-serve SaaS; every engagement starts with a scoped Assessment.
Start intake →

Compliance Checklist
23-point technical checklist covering decision logging, identity separation, audit trails, and data governance. Reference material only — not a compliance certification.
View Checklist →

How The Veil Platform Addresses Each Requirement
| Requirement | Solution | Details |
|---|---|---|
| Art. 14 Human Oversight | TraceVault — cryptographic decision records with full provenance | Learn more → |
| Art. 12 Automatic Logging | Veil Witness — signed attestation claims with independent timestamps | Learn more → |
| Art. 10 Data Governance | The Veil Core — infrastructure-enforced identity separation | Learn more → |
| Art. 15 Accuracy & Robustness | Scanner — three-layer PII detection validates data quality | Learn more → |