ICOSA: Multi-Model Byzantine Fault Tolerant AI Compliance Assessment

KYMA Tech Solutions / Sage Holdings LLC

Patent Pending: App. No. 63/789,142

1. Abstract

The EU AI Act (Regulation 2024/1689) introduces the world's most comprehensive regulatory framework for artificial intelligence, with enforcement beginning August 2, 2026 and penalties reaching EUR 35 million or 7% of global annual turnover. Despite this urgency, no scalable, objective, and verifiable compliance assessment methodology exists.

This paper introduces ICOSA (Independent Council for Objective System Assessment), a patent-pending protocol that applies Byzantine Fault Tolerant (BFT) consensus across a geographically and architecturally diverse council of AI models to produce compliance assessments that are objective, reproducible, and cryptographically attested.

By eliminating single-model bias, introducing multi-stakeholder deliberation, and anchoring verdicts to blockchain, ICOSA creates the first trustworthy bridge between AI providers and regulatory requirements.

2. The Problem

Current approaches to AI compliance assessment suffer from fundamental limitations:

  • Single-point-of-failure bias: Assessments performed by a single model or auditor inherit the biases, blind spots, and limitations of that single perspective.
  • Lack of reproducibility: Without standardized methodology, the same system can receive contradictory assessments from different evaluators.
  • No verifiability: Traditional audit reports are static documents with no mechanism for independent verification or tamper detection.
  • Scalability constraints: Manual auditing cannot keep pace with the volume of AI systems requiring assessment before the August 2026 deadline.
  • Regulatory ambiguity: The EU AI Act's requirements are complex and subject to interpretation, requiring multi-perspective analysis.

The ICOSA Index — our initial scan of 9 leading AI models — found that 0 out of 9 achieved full compliance with the EU AI Act. The problem is universal, and the deadline is imminent.

3. The ICOSA Solution

ICOSA addresses these challenges through three interlocking innovations:

Multi-Model Deliberation

Multiple independent AI models assess the same system, each bringing unique perspectives shaped by their architecture, training data, and regional context.

BFT Consensus

Byzantine Fault Tolerance ensures that the system produces correct results even if just under one-third of the participating models are faulty, compromised, or biased.

Blockchain Attestation

Every verdict is cryptographically signed by each participating model and anchored to the Polygon blockchain, creating an immutable, tamper-proof audit trail.

4. System Architecture

The ICOSA platform consists of five primary subsystems:

  1. Assessment Engine: Receives system descriptions, contextual data, and observation framework inputs. Orchestrates the council deliberation process.
  2. Council Manager: Maintains the roster of participating models, manages model health, diversity metrics, and rotation schedules.
  3. Consensus Engine: Implements the BFT voting protocol, manages deliberation rounds, and computes final verdicts.
  4. Blockchain Anchor: Signs and submits verdict hashes to the Polygon smart contract for immutable attestation.
  5. Observation Framework: A guided data collection system that structures the information needed for comprehensive assessment.

The architecture is designed for horizontal scalability, with each subsystem independently deployable and the council size configurable per assessment tier.
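The subsystem orchestration described above can be sketched in a few lines. This is a minimal illustration, not the production design: all class names, the simple quorum rule, and the hash-only attestation are assumptions standing in for the real Council Manager, Consensus Engine, and Blockchain Anchor.

```python
import hashlib


class CouncilManager:
    """Roster stub; in ICOSA this also tracks model health, diversity, rotation."""
    def __init__(self, models):
        self.models = list(models)


class ConsensusEngine:
    """Trivial quorum rule standing in for the BFT protocol of Section 6."""
    def __init__(self, quorum):
        self.quorum = quorum

    def decide(self, votes):
        agree = sum(1 for v in votes.values() if v == "compliant")
        return "compliant" if agree >= self.quorum else "non_compliant"


class BlockchainAnchor:
    """Hash-only stand-in for the Polygon attestation of Section 7."""
    def attest(self, verdict):
        return hashlib.sha256(verdict.encode()).hexdigest()


class AssessmentEngine:
    """Orchestrates one assessment across the other subsystems."""
    def __init__(self, council, consensus, anchor):
        self.council, self.consensus, self.anchor = council, consensus, anchor

    def run(self, votes):
        # Every council seat must vote before a verdict is computed.
        assert set(votes) == set(self.council.models), "every seat must vote"
        verdict = self.consensus.decide(votes)
        return {"verdict": verdict, "receipt": self.anchor.attest(verdict)}
```

Because each subsystem is a separate object behind a narrow interface, any of them can be deployed and scaled independently, as the architecture intends.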

5. The Council

The ICOSA council is composed of 11 AI models selected for maximum diversity across four dimensions: provider, geographic region, model architecture, and assigned deliberation role.

Seat | Model             | Provider      | Region | Architecture           | Role
-----|-------------------|---------------|--------|------------------------|---------------------
1    | GPT-4o            | OpenAI        | US     | Transformer            | Primary Assessor
2    | Claude 3.5 Sonnet | Anthropic     | US     | Constitutional         | Cross-Examiner
3    | Gemini 1.5 Pro    | Google        | US/EU  | Multimodal Transformer | Rapporteur
4    | Mistral Large     | Mistral AI    | EU     | MoE Transformer        | EU Legal Specialist
5    | Command R+        | Cohere        | Canada | RAG-Native             | Regulatory Analyst
6    | Llama 3.1 405B    | Meta          | US     | Open-Weight            | Technical Auditor
7    | Qwen 2.5 72B      | Alibaba       | Asia   | Multilingual           | Diversity Validator
8    | Jamba 1.5 Large   | AI21 Labs     | Israel | SSM-Transformer Hybrid | Architecture Auditor
9    | DeepSeek V3       | DeepSeek      | Asia   | MoE                    | Risk Assessor
10   | Grok-2            | xAI           | US     | Transformer            | Adversarial Tester
11   | Inflection 3.0    | Inflection AI | US     | Conversational         | Consumer Advocate

6. BFT Consensus Mechanism

ICOSA adapts Practical Byzantine Fault Tolerance (pBFT) for AI compliance assessment. The protocol tolerates up to f faulty nodes in a council of n members, where:

n ≥ 3f + 1   (minimum council size to tolerate f faults)

Quorum = 2f + 1   (minimum agreeing nodes)

For the three ICOSA tiers:

Tier          | n  | f | Quorum | Notes
--------------|----|---|--------|---------------------------
Sentinel      | 3  | 0 | 2/3    | Advisory only
Baseline      | 5  | 1 | 3/5    | Tolerates 1 faulty model
Certification | 11 | 3 | 7/11   | Tolerates 3 faulty models
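The tier parameters can be checked against the pBFT bounds mechanically. The sketch below is illustrative; the class and field names are assumptions, and the quorum values are taken directly from the tier definitions above (note that Sentinel's 2/3 quorum is stricter than the formal pBFT minimum of 2f + 1 = 1 for f = 0).

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TierConfig:
    name: str
    n: int       # council size
    f: int       # faulty models tolerated
    quorum: int  # agreeing models required

    def satisfies_pbft_bounds(self) -> bool:
        # pBFT safety requires n >= 3f + 1 and a quorum of at least 2f + 1.
        return self.n >= 3 * self.f + 1 and self.quorum >= 2 * self.f + 1


TIERS = [
    TierConfig("Sentinel", n=3, f=0, quorum=2),
    TierConfig("Baseline", n=5, f=1, quorum=3),
    TierConfig("Certification", n=11, f=3, quorum=7),
]
```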

The consensus process proceeds in three phases: Pre-Prepare (assessment distribution), Prepare (independent model evaluation), and Commit (vote aggregation and verdict computation). Dissenting models must provide structured reasoning, which is preserved in the audit trail.
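The Commit phase described above can be sketched as a vote aggregation that also preserves dissent. This is a simplified stand-in for the full protocol; the function name, vote structure, and verdict labels are illustrative assumptions.

```python
from collections import Counter


def commit_phase(votes, quorum):
    """Aggregate Prepare-phase votes into a verdict (illustrative sketch).

    votes: dict mapping council-seat id -> (verdict_label, structured_reasoning).
    Returns (consensus_label or None, dissent), where dissent maps each seat
    that voted against the consensus to its reasoning, so the audit trail
    keeps every dissenting rationale.
    """
    tally = Counter(label for label, _ in votes.values())
    label, count = tally.most_common(1)[0]
    if count < quorum:
        return None, {}  # no quorum: the assessment escalates or reruns
    dissent = {seat: why for seat, (l, why) in votes.items() if l != label}
    return label, dissent
```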

7. Blockchain Attestation

Every ICOSA certification verdict is anchored to the Polygon blockchain through a custom smart contract. The attestation includes:

  • SHA-256 hash of the complete verdict payload
  • Individual model signatures from each council member
  • Timestamp and block number for temporal verification
  • Quorum achievement proof (number of agreeing models vs. required threshold)
  • Reference to the regulation version assessed against

This creates an immutable, publicly verifiable record that cannot be retroactively altered. Regulators, auditors, and stakeholders can independently verify any ICOSA certification by checking the blockchain record against the provided verdict data.
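The verification step can be illustrated with the hash computation alone. The canonical-JSON encoding below is an assumption for the sketch; the smart contract defines the authoritative payload encoding, and the real record also carries the per-model signatures.

```python
import hashlib
import json


def verdict_hash(payload: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the verdict payload
    (sorted keys, no whitespace) so every verifier hashes identical bytes."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def verify_attestation(payload: dict, onchain_hash: str) -> bool:
    # A verifier recomputes the hash from the provided verdict data and
    # compares it with the hash anchored on-chain; any alteration mismatches.
    return verdict_hash(payload) == onchain_hash
```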

8. The Scanner

The ICOSA Scanner is the assessment input pipeline. It collects structured data about an AI system through the Observation Framework — a guided questionnaire that captures:

  • System purpose, intended use, and deployment context
  • Training data provenance and governance practices
  • Risk management system documentation
  • Human oversight mechanisms and escalation procedures
  • Transparency measures and user-facing disclosures
  • Technical documentation and record-keeping practices
  • Accuracy, robustness, and cybersecurity measures

This structured input ensures that every council member evaluates the same comprehensive dataset, enabling meaningful comparison and consensus formation.
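A structured observation record might look like the following. The field names are hypothetical and cover only a subset of the framework's categories; the point is that every council member receives one identical serialized payload.

```python
from dataclasses import asdict, dataclass, field


@dataclass
class Observation:
    """Illustrative Observation Framework record (field names are hypothetical)."""
    system_purpose: str
    deployment_context: str
    data_provenance: str
    risk_management_docs: list = field(default_factory=list)
    human_oversight: str = ""
    transparency_measures: list = field(default_factory=list)

    def to_payload(self) -> dict:
        # One serialized payload is distributed to all council members,
        # so each model evaluates exactly the same dataset.
        return asdict(self)
```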

9. Initial Findings

The ICOSA Index — our inaugural assessment of 9 leading AI models — revealed universal non-compliance with the EU AI Act:

  • Passed: 0/9
  • Failed: 9/9
  • Most-failed provision: Article 13
  • Transparency gap: 100% of models assessed

The most common deficiencies were in transparency obligations (Article 13), data governance (Article 10), and human oversight provisions (Article 14). These findings underscore the urgency of automated, scalable compliance assessment as the August 2026 deadline approaches.

10. Regulatory Positioning

ICOSA is positioned as a complementary tool for the emerging EU AI Act compliance ecosystem. The protocol does not replace human legal judgment but augments it with scalable, objective, multi-perspective analysis.

Key regulatory considerations:

  • Article 9 (Risk Management): ICOSA assessments can serve as a component of an organization's broader risk management system.
  • Article 11 (Technical Documentation): ICOSA reports provide structured documentation that supports the Act's technical documentation requirements.
  • Article 43 (Conformity Assessment): While not a notified body, ICOSA provides preliminary conformity assessment that organizations can use to prepare for formal evaluation.

ICOSA is designed to operate within regulatory sandboxes (Article 57) and can be adapted for national AI strategies across EU member states.

10.1 Transatlantic Alignment: CISA, Five Eyes, and NIST CAISI

On 1 May 2026, CISA and the cyber agencies of Australia, Canada, New Zealand, and the United Kingdom — the full Five Eyes partnership — jointly published Careful Adoption of Agentic AI Services. The guidance defines five agentic-AI risk categories and explicitly requires cryptographically anchored agent identity with short-lived credentials. ICOSA's architecture independently satisfies these requirements at the protocol level, providing an operational foundation that aligns with the transatlantic regulatory direction:

  • Privilege risk: Least-privilege access enforced by capability-token validation; each agent action carries a single-use, time-bounded credential.
  • Design and configuration risk: Boundaries encoded in the deterministic rule engine rather than delegated to the agents themselves.
  • Behavioral risk: BFT-consensus council detects misalignment through multi-model deliberation rather than single-model judgment.
  • Structural risk: The Overwatch lattice provides visibility into cascade dynamics across interconnected agent networks.
  • Accountability risk: Every action carries an Ed25519-signed capability token; the full authorization chain is reconstructible post-incident.
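The single-use, time-bounded credential check behind the privilege and accountability points above can be sketched as follows. To stay standard-library only, HMAC-SHA256 stands in for the Ed25519 signatures the text describes, and all function names and the token layout are illustrative assumptions.

```python
import hashlib
import hmac
import json
import time


def sign_token(secret: bytes, agent_id: str, action: str, ttl_s: int, nonce: str, now=None) -> dict:
    """Mint a single-use, time-bounded capability token for one agent action.
    HMAC-SHA256 is a stdlib stand-in for the Ed25519 signing in the text."""
    now = time.time() if now is None else now
    body = {"agent": agent_id, "action": action, "exp": now + ttl_s, "nonce": nonce}
    msg = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return body


def validate_token(secret: bytes, token: dict, seen_nonces: set, now=None) -> bool:
    """Accept a token only if the signature verifies, it has not expired,
    and its nonce has never been used; burn the nonce on acceptance."""
    now = time.time() if now is None else now
    body = {k: token[k] for k in ("agent", "action", "exp", "nonce")}
    msg = json.dumps(body, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(token["sig"], hmac.new(secret, msg, hashlib.sha256).hexdigest())
    fresh = token["exp"] > now and token["nonce"] not in seen_nonces
    if ok_sig and fresh:
        seen_nonces.add(token["nonce"])  # single-use: replay is rejected
        return True
    return False
```

The burned-nonce set is what makes each credential single-use, and the `exp` field is what makes it time-bounded; together they let a post-incident reviewer reconstruct exactly which agent was authorized for which action, and when.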

On 5 May 2026, the NIST Center for AI Standards and Innovation (CAISI) expanded its pre-deployment frontier-model testing agreements to Google DeepMind, Microsoft, and xAI, joining OpenAI and Anthropic. CAISI evaluations target cybersecurity, biosecurity, and chemical-weapons risks across more than forty completed assessments, some conducted in classified environments by the interagency TRAINS Taskforce. ICOSA is positioned as the complementary downstream layer: where CAISI evaluates foundation models before release, ICOSA evaluates deployed AI systems against the same risk taxonomy in production-use contexts.

11. Conclusion

The EU AI Act represents a watershed moment for AI governance. With less than two years until full enforcement and universal non-compliance among assessed models, the need for scalable, trustworthy compliance assessment is acute.

ICOSA addresses this need through a novel combination of multi-model deliberation, Byzantine Fault Tolerant consensus, and blockchain attestation. The protocol eliminates single-model bias, ensures reproducibility, and creates verifiable compliance records that stakeholders across the ecosystem can trust.

As the regulatory landscape evolves, ICOSA's modular architecture enables rapid adaptation to new regulations, updated requirements, and expanded council membership. The protocol is designed not just for the EU AI Act, but as a foundation for global AI compliance infrastructure.

© 2026 KYMA Tech Solutions / Sage Holdings LLC
