AI Governance
Real-time enforcement, not just assessment
Governance is enforcement, not paperwork
AI compliance assessments produce a verdict at one point in time. AI governance produces a verdict on every action the system takes. The difference is whether the policy engine is consulted before an agent executes, or after — whether the system is gated by enforcement, or merely audited after the fact.
ICOSA was built as a runtime governance control plane, not a compliance dashboard. Capability-token authorisation, cryptographically anchored agent identity, fail-closed defaults, and an immutable audit chain are the primitives. Compliance certificates are the byproduct.
The four governance primitives
- Betelgeuse — execution-layer capability tokens. Every agent action is gated by a single-use, time-bounded, Ed25519-signed credential bound to a specific state. The boundary is non-bypassable and fails closed by default. A pure, deterministic rule engine evaluates invariants against the projected post-state before a token is issued.
- Overwatch — behavioural verdicts via multi-model Byzantine Fault Tolerant consensus. Eleven to thirteen independent AI models evaluate every decision; supermajority agreement is required for a binding verdict. Single-model bias and single-vendor opinions are eliminated by construction.
- Overwatch Lattice — cascade-failure visibility across interconnected agent networks. Native bridge to Anduril Lattice for defense and critical-infrastructure integrations. Cross-agent dependency tracking reveals structural risks that single-agent analysis cannot surface.
- Audit chain — every verdict hash, every capability token, every authorisation decision is signed and anchored to the Polygon blockchain. Sui anchoring is available for sovereign deployments. The full authorisation chain is reconstructible post-incident.
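The first two primitives can be made concrete with a short sketch. ICOSA's actual token format and verdict protocol are not published here, so everything below is illustrative: the function names and fields are hypothetical, and an HMAC stands in for the Ed25519 signature to keep the example dependency-free. What it does show accurately is the shape of the guarantees described above: single use, a TTL, binding to one agent and one action, a fail-closed default, and a supermajority threshold for a binding verdict.

```python
import hashlib
import hmac
import secrets
import time

SECRET = secrets.token_bytes(32)   # stand-in for the issuer's signing key
_spent: set[str] = set()           # single-use tracking: burned token IDs

def issue_token(agent_id: str, action: str, ttl_s: int = 30) -> dict:
    """Issue a single-use, time-bounded capability token for one action."""
    token = {
        "id": secrets.token_hex(16),
        "agent": agent_id,
        "action": action,
        "expires": time.time() + ttl_s,
    }
    payload = f"{token['id']}|{agent_id}|{action}|{token['expires']}".encode()
    token["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return token

def authorise(token: dict, agent_id: str, action: str) -> bool:
    """Fail-closed check: deny unless every invariant holds."""
    try:
        payload = (f"{token['id']}|{token['agent']}|"
                   f"{token['action']}|{token['expires']}").encode()
        sig_ok = hmac.compare_digest(
            token["sig"], hmac.new(SECRET, payload, hashlib.sha256).hexdigest())
        fresh = time.time() < token["expires"]
        bound = token["agent"] == agent_id and token["action"] == action
        unspent = token["id"] not in _spent
        if not (sig_ok and fresh and bound and unspent):
            return False
        _spent.add(token["id"])    # burn the token: single use
        return True
    except (KeyError, TypeError):
        return False               # malformed input: deny

def supermajority(votes: list, quorum: float = 2 / 3) -> bool:
    """A verdict binds only when agreement strictly exceeds the threshold."""
    return len(votes) > 0 and sum(votes) / len(votes) > quorum
```

Replaying a token is refused by construction: the second `authorise` call on the same token fails the `unspent` check, regardless of signature validity or freshness.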
Transatlantic regulatory alignment
The May 2026 regulatory cluster — CISA and Five Eyes guidance on agentic AI, NIST CAISI pre-deployment evaluation, and the EU AI Act's high-risk obligations — describes the same governance posture from three regulatory directions:
- Cryptographically anchored agent identity (Five Eyes; satisfied by Betelgeuse)
- Short-lived credentials and least-privilege scoping (Five Eyes; satisfied by capability-token TTL)
- Pre-deployment capability evaluation (CAISI; complementary to ICOSA's deployment-side enforcement)
- Article 9 risk management with reproducible evidence (EU AI Act; satisfied by BFT verdict and audit chain)
- Article 12 record-keeping (EU AI Act; satisfied by signed audit chain)
- Article 14 human oversight encoded in design (EU AI Act and Five Eyes; satisfied by deterministic rule engine, not delegated to agents)
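The Article 9 and Article 12 obligations above reduce to one mechanism: an append-only, hash-linked record of every verdict, from which the full authorisation chain can be reconstructed and independently re-verified. A minimal sketch of such a chain follows; it is illustrative only (the record fields and function names are hypothetical, and anchoring to Polygon or Sui is out of scope), but it demonstrates why tampering with any entry is detectable.

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # sentinel hash before the first record

def append_record(chain: list, verdict: dict) -> dict:
    """Append a verdict, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"verdict": verdict, "ts": time.time(), "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampered entry breaks the chain."""
    prev = GENESIS
    for entry in chain:
        body = {k: entry[k] for k in ("verdict", "ts", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Because each entry commits to its predecessor's hash, altering one record invalidates every subsequent link; anchoring the head hash to an external ledger then makes silent truncation detectable as well.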
Who deploys ICOSA governance
Regulators, supervisory authorities, central banks, defense and critical-infrastructure operators, life sciences and clinical-decision AI providers, judicial and legal-tech platforms, and enterprises with regulatory exposure in multiple jurisdictions. The control plane operates on-premises, in private cloud, or air-gapped for sovereign and defense deployments. The ORION Class tier extends the platform to environments governed by SIL 4 and DO-178C principles.