EU AI Act Compliance Scorecard

Independent multi-model assessment of leading AI systems against the EU AI Act (2024/1689). Updated continuously as models and regulations evolve.

Systems Assessed: 19 · Compliant: 13 · Non-Compliant: 6
| System | Provider | Verdict | Confidence | Risk Flags |
| --- | --- | --- | --- | --- |
| Claude 3.5 Sonnet | Anthropic | Compliant | 94% | None |
| GPT-4o | OpenAI | Compliant | 92% | Transparency |
| Mistral Large | Mistral AI | Compliant | 91% | None |
| Gemini 1.5 Pro | Google | Compliant | 89% | Data Governance |
| Command R+ | Cohere | Compliant | 88% | Documentation |
| Phi-3-medium | Microsoft | Compliant | 88% | None |
| Llama 3.1 405B | Meta | Compliant | 87% | Oversight |
| Jamba 1.5 Large | AI21 Labs | Compliant | 86% | None |
| Granite 34B | IBM | Compliant | 86% | Documentation |
| Qwen 2.5 72B | Alibaba | Compliant | 85% | Transparency, Data Governance |
| Inflection 3.0 | Inflection AI | Compliant | 84% | Documentation |
| Grok-2 | xAI | Compliant | 83% | Documentation |
| Cohere Aya 23 | Cohere | Compliant | 82% | Documentation |
| DeepSeek V3 | DeepSeek | Non-Compliant | 78% | Transparency, Oversight, Data Governance |
| DBRX | Databricks | Non-Compliant | 74% | Documentation, Transparency |
| Falcon 180B | TII | Non-Compliant | 72% | Transparency, Risk Management |
| Nemotron-4 340B | NVIDIA | Non-Compliant | 71% | Data Governance, Transparency |
| Yi-Large | 01.AI | Non-Compliant | 69% | Transparency, Data Governance, Oversight |
| OLMo 7B | AI2 | Non-Compliant | 65% | Risk Management, Transparency, Oversight |

Beta Notice

This scorecard is produced by the ICOSA protocol in beta and is provided for informational purposes only. Scores may be revised as the assessment methodology evolves and as model providers update their systems. This does not constitute legal advice or an official regulatory determination.

Methodology

Each AI system is assessed by the ICOSA multi-model council using Byzantine Fault Tolerant consensus. The council evaluates compliance across all relevant articles of the EU AI Act, including transparency obligations (Art. 50, 52), risk management (Art. 9), data governance (Art. 10), human oversight (Art. 14), and documentation requirements (Art. 11, 12).

Confidence scores represent the degree of consensus among council members. Higher confidence indicates stronger agreement on the verdict. Risk flags identify specific areas where deficiencies were detected during assessment.
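The exact aggregation rule used by the ICOSA council is not published here. As an illustration only, a simple majority vote in which confidence is the fraction of council members agreeing with the winning verdict could be sketched as follows (the function name and vote labels are assumptions, not part of the protocol):

```python
from collections import Counter

def council_verdict(votes: list[str]) -> tuple[str, float]:
    """Toy aggregation: majority verdict plus agreement ratio as confidence.

    This is an illustrative stand-in for the (unpublished) ICOSA
    consensus rule, not its actual implementation.
    """
    counts = Counter(votes)
    verdict, n_agree = counts.most_common(1)[0]  # winning verdict and its vote count
    confidence = n_agree / len(votes)            # share of council in agreement
    return verdict, confidence

# Example: a Sentinel-style 3-member council splitting 2-1
verdict, confidence = council_verdict(["Compliant", "Compliant", "Non-Compliant"])
print(verdict, round(confidence, 2))  # Compliant 0.67
```

Under this toy rule, a unanimous council yields confidence 1.0, while closer splits yield lower confidence, matching the intuition that higher scores reflect stronger agreement.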

Assessments use the Sentinel (3-model) tier for initial screening, with Baseline (5-model) and Full Certification (11-model) available for deeper analysis upon request.

Get Your System Scanned

Don't wait for August 2, 2026. Get your AI system assessed against the EU AI Act before enforcement of its obligations begins.

Start with Sentinel Assessment