EU AI Act Compliance
Certification in days, not months
Why every AI provider needs an EU AI Act compliance strategy now
The EU AI Act — Regulation (EU) 2024/1689 — is the world's first comprehensive horizontal regulation of artificial intelligence. It applies extraterritorially: any provider, deployer, importer, or distributor whose AI system is placed on the EU market or whose output is used in the Union falls under its scope. Most high-risk obligations apply from 2 August 2026. Penalties reach EUR 35 million or 7% of global annual turnover, whichever is higher. Conformity assessment, technical documentation, risk management, transparency, human oversight, accuracy, robustness, and cybersecurity obligations apply to every system classified as high-risk under Annex III.
Traditional compliance assessments, conducted by a single auditor or a single model, take six to eight weeks and produce a static report. ICOSA produces the same first-pass output in minutes via Byzantine fault tolerant (BFT) consensus across 11 to 13 independent AI models with diverse architectures and regional perspectives. The verdict is signed, blockchain-anchored, and independently verifiable.
Annex III risk classification, automated
Annex III of the EU AI Act enumerates eight categories of high-risk AI systems: biometric identification and categorisation; AI in critical infrastructure; AI in education and vocational training; employment, worker management, and access to self-employment; access to essential private and public services; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes. ICOSA classifies a submitted system against all eight categories automatically using the deployment context, intended use, and observation framework inputs you provide.
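A first-pass screen against the eight Annex III categories can be pictured as a lookup over the deployment description. The trigger terms below are hypothetical examples chosen for illustration; a real assessment weighs intended purpose and context, not keywords.

```python
# Illustrative trigger terms for the eight Annex III categories.
# These keywords are assumptions for the sketch, not a legal test.
ANNEX_III = {
    "biometrics": ["biometric", "face recognition", "emotion recognition"],
    "critical_infrastructure": ["power grid", "water supply", "road traffic"],
    "education": ["exam scoring", "admission", "vocational training"],
    "employment": ["cv screening", "recruitment", "worker management"],
    "essential_services": ["credit scoring", "benefits eligibility"],
    "law_enforcement": ["predictive policing", "evidence evaluation"],
    "migration": ["visa", "asylum", "border control"],
    "justice_democracy": ["judicial", "court ruling", "election"],
}

def screen(description: str) -> list[str]:
    """Return Annex III categories whose trigger terms appear in the text."""
    text = description.lower()
    return [cat for cat, terms in ANNEX_III.items()
            if any(term in text for term in terms)]

print(screen("An AI tool for CV screening and recruitment ranking"))
# -> ['employment']
```

A positive screen in any category is the cue to run the full conformity analysis for that category's specific Annex III paragraph.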
The classification surfaces the specific Annex III paragraphs that apply, the conformity assessment route required (Article 43), and the obligations that flow from the classification: Article 9 (risk management), Article 10 (data governance), Article 11 (technical documentation), Article 13 (transparency), Article 14 (human oversight), and Article 15 (accuracy, robustness, cybersecurity).
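The flow from a high-risk classification to its obligations is effectively a lookup over the articles named above (titles abbreviated; this sketch covers only the six articles the classification surfaces).

```python
# Chapter III, Section 2 obligations listed above, keyed by article number.
OBLIGATIONS = {
    9: "risk management system",
    10: "data and data governance",
    11: "technical documentation",
    13: "transparency and provision of information",
    14: "human oversight",
    15: "accuracy, robustness and cybersecurity",
}

def obligations_for(high_risk: bool) -> dict[int, str]:
    """All of these duties attach once a system is classified high-risk;
    none of them attach to systems outside Annex III."""
    return OBLIGATIONS if high_risk else {}

print(sorted(obligations_for(True)))  # -> [9, 10, 11, 13, 14, 15]
```

The binary switch is the point: under the Act, the Section 2 obligations apply as a package once the Annex III classification is made, rather than article by article.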
Transatlantic alignment: CISA/Five Eyes and NIST CAISI
ICOSA's architecture aligns the EU AI Act with the May 2026 CISA/Five Eyes guidance on agentic AI and the NIST CAISI pre-deployment evaluation framework. The Five Eyes guidance — jointly published on 1 May 2026 by CISA, NSA, ASD ACSC, CCCS, NZ NCSC, and UK NCSC — defines five agentic-AI risk categories (privilege, design, behavioural, structural, accountability) and requires cryptographically anchored agent identity with short-lived credentials. CAISI, announced on 5 May 2026, expanded pre-deployment frontier-model testing to Google DeepMind, Microsoft, and xAI alongside OpenAI and Anthropic. ICOSA satisfies the five CISA categories at the protocol layer and complements CAISI as the downstream deployment gate.
For providers building for European markets while serving customers across jurisdictions, this convergence is material: a single ICOSA verdict covers the EU AI Act, the Five Eyes operational expectations, and the CAISI evaluation framework simultaneously.
The ICOSA verdict
An ICOSA EU AI Act assessment produces an article-by-article compliance verdict, identification of every applicable requirement under Chapters II and III, a deficiency report with prescriptive remediation steps, and a verdict hash anchored to the Polygon blockchain. Three tiers are available:
- Sentinel — rapid advisory assessment, compliance readiness snapshot, risk flag identification, PDF summary.
- Pre-Statement Baseline — 5-model multi-perspective consensus, full article-by-article mapping, structured deficiency report, remediation roadmap.
- ICOSA Certification — 11-model BFT consensus with blockchain attestation, immutable on-chain proof, priority remediation support.
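The verdict hash that gets anchored on-chain can be computed by any party for independent verification. ICOSA's actual hashing scheme is not specified here; this is a minimal sketch assuming a canonical JSON serialisation and SHA-256, with a hypothetical verdict document.

```python
import hashlib
import json

def verdict_hash(verdict: dict) -> str:
    """Canonical SHA-256 digest of a verdict document.

    Sorting keys and fixing separators makes the serialisation
    deterministic, so anyone can recompute the hash and compare it
    against the value anchored on-chain.
    """
    canonical = json.dumps(verdict, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical verdict document for illustration.
verdict = {
    "system": "cv-screening-v2",
    "classification": "high-risk (Annex III, point 4)",
    "articles_assessed": [9, 10, 11, 13, 14, 15],
    "outcome": "compliant-with-remediation",
}
print(verdict_hash(verdict))  # 64-character hex digest to anchor on-chain
```

Because the digest is deterministic, a regulator or customer who holds the full verdict PDF and its structured data can recompute the hash and match it against the blockchain record without trusting ICOSA's infrastructure.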