Compliance mapping: v1.3

OMEGA Protocol &
ISO/IEC 42001

The most time-consuming part of ISO 42001 certification is producing regulator-grade technical evidence: immutable, traceable, causal logs of AI system behaviour. OMEGA Protocol v1.3 generates this evidence automatically as a byproduct of operation. Every organisation deploying OMEGA enters its certification audit with months of verifiable governed decision records already in place. Fifteen primitives. Machine-verified.

15
Primitives: machine-verified
9+
Annex A controls satisfied
6
EU AI Act articles mapped
MIT
Open licence: no restrictions

The fifteen primitives

Every OMEGA governed record contains these structural elements, proved necessary and sufficient. Extensions through v1.2 brought P4M, P4T, P5E, P6, P6A, P6L, and PCF for multi-agent delegation, trajectory governance, execution integrity, and continuity anchors; v1.3 adds P10, P11, and P12 for competence attestation, expectation update integrity, and semantic integrity validation.

P1 Governance: who decided, under what authority, within what constraints
P2 Reasoning: causal graph with magnitude/likelihood labels; FACT/INFERENCE/ASSUMPTION/UNKNOWN chain
P3 Traceability: permanent SHA-256 hash-chained record
P4 Expectation: committed predicted outcome before the action fires
P4M Materiality Binding (v1.2): P4 tracks the highest-consequence variable in the causal graph
P4T Trajectory Expectation (v1.2): aggregate prediction committed before a multi-step sequence
P5 Confirmation: gate record, COMMITTED or HELD, both permanent
P5E Execution Attestation (v1.2): cryptographic binding of the approved record to the execution payload
P6 Delegation (v1.2): governed or ungoverned declaration with liability transfer
P6A Aggregate Materiality (v1.2): consolidated causal graph for an entire delegated workflow
P6L Liability Threshold (v1.2): blocks ungoverned delegation at Major/Catastrophic consequence
PCF Continuity-Formal (v1.2): quantitative anchor baseline with a deterministic rules-engine evaluator
P10 Competence Attestation (v1.3): cryptographic binding of competence claims to decision records
P11 Expectation Update Integrity (v1.3): cryptographic binding of expectation revisions across the AI system lifecycle
P12 Semantic Integrity Validation (v1.3): expectations bound to semantic schemas so outcomes are evaluated against committed meaning
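As a concrete illustration of how the core primitives compose, here is a minimal sketch of a governed record. OMEGA's actual record format and API are not shown in this document; every field name and the `seal` method below are hypothetical, showing only how P1, P2, P4, and P5 content could be bound into a P3 hash chain.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field

# Hypothetical sketch: field names and structure are illustrative,
# not OMEGA's published wire format.
@dataclass
class GovernedRecord:
    authority: str      # P1 Governance: who decided, under what authority
    causal_graph: list  # P2 Reasoning: FACT/INFERENCE/ASSUMPTION/UNKNOWN nodes
    expectation: dict   # P4 Expectation: predicted outcome, committed pre-action
    gate: str           # P5 Confirmation: "COMMITTED" or "HELD"
    prev_hash: str      # P3 Traceability: SHA-256 digest of the prior record
    record_hash: str = field(default="", init=False)

    def seal(self) -> str:
        """Compute the P3 digest over the record body, including prev_hash."""
        body = {k: v for k, v in asdict(self).items() if k != "record_hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.record_hash = digest
        return digest
```

Because each record's digest covers the previous record's digest, altering any historical record changes every subsequent hash, which is what makes the chain tamper-evident.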
01, ISO/IEC 42001:2023

Annex A control mapping

How OMEGA's fifteen primitives satisfy the Annex A controls that most commonly stall certification audits. v1.3 extends coverage for competence verification, expectation update integrity, and semantic integrity, alongside multi-agent accountability, trajectory monitoring, and execution integrity.

Each entry below lists the control, the requirement, the OMEGA primitives involved, how OMEGA satisfies it, and the coverage status.
A.2.2
AI Policy Implementation
Documented policy for AI development and use aligned with organisational objectives
P1 Governance
Every OMEGA record binds the decision to the Governance primitive: which policy authorised this action, under which constraints. The policy is embedded in the record, not referenced from it.
✓ Full
A.3.3
Roles and Responsibilities
Clear allocation of accountability for AI development, risk, and oversight
P1 Governance + P6 Delegation
Governance records the specific authorised entity responsible for each decision. P6 Delegation extends accountability through agent chains: each delegation hop is either governed (with its own OMEGA record) or explicitly declared ungoverned with liability transfer recorded.
✓ Full
A.5.4
Explainability
Provide clear information on AI reasoning and decision basis to users and interested parties
P2 Reasoning + P4M Materiality
P2 captures the FACT/INFERENCE/ASSUMPTION/UNKNOWN chain as a causal graph with magnitude and likelihood labels on every effect node: recorded before action, not reconstructed after. P4M ensures the highest-consequence variable is explicitly identified and tracked.
✓ Full
A.5.5
Human Oversight
Documented oversight roles, intervention protocols, and kill switch mechanisms for high-risk decisions
P5 Confirmation + P6L Liability Threshold
P5 Confirmation is the technical implementation of human oversight: the system cannot commit to an irreversible action without the gate firing. P6L blocks ungoverned delegation at Major/Catastrophic consequence, so high-risk actions cannot bypass human review through delegation chains.
✓ Full
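The gate behaviour described for A.5.5 can be sketched as a simple decision function. This is an illustrative reconstruction under stated assumptions, not OMEGA's published logic; the consequence scale and function names are hypothetical.

```python
from enum import Enum

class Consequence(Enum):
    MINOR = 1
    MODERATE = 2
    MAJOR = 3
    CATASTROPHIC = 4

def confirmation_gate(consequence: Consequence, human_approved: bool,
                      delegated: bool = False,
                      delegation_governed: bool = True) -> str:
    """Return 'COMMITTED' or 'HELD'; both outcomes produce permanent records."""
    # P6L: ungoverned delegation is blocked outright at Major/Catastrophic.
    if (delegated and not delegation_governed
            and consequence.value >= Consequence.MAJOR.value):
        return "HELD"
    # P5: high-consequence actions require explicit human confirmation.
    if consequence.value >= Consequence.MAJOR.value and not human_approved:
        return "HELD"
    return "COMMITTED"
```

Note that a HELD result is itself a record: the decision not to act is governed in the same way as the decision to act.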
A.6.2
Lifecycle Monitoring
Monitor AI system behaviour post-deployment and throughout the operational lifecycle
P4 Expectation + P3 Traceability + P4T Trajectory + PCF
P4 commits predicted outcomes before action, creating a continuous, falsifiable baseline. P4T extends this to multi-step sequences. PCF provides quantitative anchor baselines with numeric drift bounds for continuous monitoring. The delta between committed expectations and actual outcomes is machine-detectable across the full deployment lifecycle.
✓ Full
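The expectation-versus-outcome delta described for A.6.2 amounts to a numeric comparison against a committed baseline. A minimal sketch, assuming a scalar expectation and a symmetric drift bound (both assumptions; the document does not specify PCF's rule format):

```python
def within_anchor(expected: float, observed: float, drift_bound: float) -> bool:
    """True when the observed outcome stays inside the committed drift bound."""
    return abs(observed - expected) <= drift_bound

def monitor(expectations, outcomes, drift_bound):
    """Yield (step, expected, observed) for every machine-detectable breach."""
    for step, (e, o) in enumerate(zip(expectations, outcomes)):
        if not within_anchor(e, o, drift_bound):
            yield step, e, o
```

Because the expectation is committed before the action fires, a breach cannot be explained away after the fact; the baseline is falsifiable by construction.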
A.7.2
Data Provenance
Document data history, transformation steps, and lineage from origin to model consumption
P3 Traceability + P1 Governance + P5E Attestation
P3 provides automated data lineage via SHA-256 hash-chaining: records cannot be altered after the fact. P5E extends provenance to execution: cryptographic binding proves that what ran matched what was approved. The entire chain from data input to execution is cryptographically attested.
✓ Full
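The P5E binding described above reduces to comparing digests of the approved record and the execution payload. A hypothetical sketch (function names and attestation shape are illustrative, not OMEGA's published API):

```python
import hashlib

def attest(approved_record: bytes, payload: bytes) -> dict:
    """Bind the approved record's digest to the execution payload's digest."""
    return {
        "record_digest": hashlib.sha256(approved_record).hexdigest(),
        "payload_digest": hashlib.sha256(payload).hexdigest(),
    }

def verify(attestation: dict, record: bytes, payload: bytes) -> bool:
    """Reject execution if either the record or the payload has drifted."""
    return (attestation["record_digest"] == hashlib.sha256(record).hexdigest()
            and attestation["payload_digest"] == hashlib.sha256(payload).hexdigest())
```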
A.8.1
Technical Documentation
Comprehensive model cards, architecture descriptions, and performance characteristics
P2 ReasoningP4 Expectation
OMEGA records accumulate over deployment as living technical documentation. P2 and P4 provide machine-queryable documentation of how the system reasons and what it predicts: evidence no static model card can provide.
◑ Partial
A.9.1
Responsible Use
Controls for intended use and management of unintended outcomes
P5 Confirmation + P1 Governance + P6 Delegation
P5 Confirmation produces HELD records: the decision not to act is as governed as the decision to act. P6 Delegation ensures that responsible use extends through delegation chains: ungoverned delegation of high-consequence actions is blocked by P6L.
✓ Full
A.10.2
Record-Keeping and Logging
Record significant events, system behaviour, human overrides, and changes to model parameters
P3 Traceability + P5 Confirmation + P5E Attestation
Every OMEGA record is SHA-256 hash-chained, tamper-evident, and permanently stored. P5E adds cryptographic proof that what was logged matches what executed. Human overrides (HELD records) are recorded identically to autonomous actions. Auditors query the record directly: no reconstruction required.
✓ Full
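How an auditor might verify such a hash chain can be sketched in a few lines. The chaining rule shown (digest over the entry plus the prior digest) is an assumption; the document states only that records are SHA-256 hash-chained.

```python
import hashlib

def chain(entries):
    """Hash-chain log entries; each digest covers the entry plus the prior digest."""
    digests, prev = [], "0" * 64  # genesis value is an illustrative assumption
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        digests.append(prev)
    return digests

def verify_chain(entries, digests) -> bool:
    """Recompute the chain from the raw entries and compare digest-for-digest."""
    return chain(entries) == digests
```

Editing any single entry changes its digest and every digest after it, so tampering is detectable by recomputation alone, with no trusted log server required.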
A.6.1.2 and A.7.2.2
Information Security Roles and Responsibilities (competence verification); Information Security Awareness, Education and Training
Personnel (including AI systems) competent for their role; verifiable competence and awareness obligations under the AIMS
P10 Competence Attestation
P10 requires cryptographic binding of competence claims to decision records. ISO 42001 Annex A requires that personnel (including AI systems) are competent for their role; P10 gives that requirement an auditable cryptographic form.
✓ Full
A.8.2 and A.6.2
AI System Lifecycle (change management); Lifecycle Monitoring
Change management across the AI system lifecycle; monitoring behaviour and outcomes throughout deployment
P11 Expectation Update Integrity
P11 governs the lifecycle of pre-committed expectations. ISO 42001 requires change management across the AI system lifecycle; P11 is the cryptographic binding of expectation revisions within that lifecycle.
✓ Full
A.5.4 and A.7.2
Explainability; Data Provenance
Clear information on AI reasoning and decision basis; documented data history and lineage
P12 Semantic Integrity Validation
P12 binds expectations to semantic schemas so outcomes are evaluated against the same meaning they were committed to. ISO 42001 requires explainability; P12 prevents expectations from being silently reinterpreted after commitment.
✓ Full
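The schema binding P12 describes can be illustrated with a minimal validator. The schema representation here (a field-name-to-type mapping) is an assumption for illustration; the document does not specify OMEGA's semantic schema format.

```python
def validate_against_schema(outcome: dict, schema: dict) -> bool:
    """Outcome must carry exactly the committed fields with the committed types.

    Extra fields, missing fields, or type changes all fail: the outcome is
    evaluated against the same meaning the expectation committed to.
    """
    if set(outcome) != set(schema):
        return False
    return all(isinstance(outcome[k], t) for k, t in schema.items())
```

A silent reinterpretation, such as a latency figure arriving as a string instead of a number, fails validation rather than being coerced into a passing result.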
02, EU AI Act

Article mapping

OMEGA v1.3 primitives map directly to EU AI Act obligations. The multi-agent gap identified by Tech Policy Press (January 2026) in "EU regulations not ready for multi-agent AI incidents" is addressed by P6, P6A, and P6L.

Article 9
Risk Management System
Continuous iterative risk management for high-risk AI. Identification and analysis of known and reasonably foreseeable risks.
P2 Reasoning (causal graph) + P4M Materiality Binding
Article 11
Technical Documentation
Providers of high-risk AI must maintain technical documentation before placing system on market and update throughout lifecycle.
P1 Governance + P2 Reasoning
Article 12
Automatic Logging
High-risk AI systems must automatically generate logs enabling post-market monitoring and investigation. Active since August 2025.
P3 Traceability, SHA-256 hash chain + P5E Execution Attestation
Article 13
Transparency
High-risk AI systems must be designed to enable deployers to interpret system output and use it appropriately.
P2 Reasoning, FACT/INFERENCE/ASSUMPTION/UNKNOWN causal graph
Article 14
Human Oversight
High-risk AI systems must enable human oversight. Natural persons must be able to intervene and override. Extends to delegation chains.
P5 Confirmation + P6L Liability Threshold: blocks high-risk ungoverned delegation
Article 17
Quality Management System
Providers of high-risk AI must implement a QMS covering risk management, data governance, post-market monitoring, and serious incident reporting.
All fifteen primitives, ISO 42001 chassis
03, Certification pathway

Fastest path to certification

For organisations deploying OMEGA, the most labour-intensive evidence-gathering phases are automated. Auditors arrive to find months of verifiable governed records already in place.

Phase 1
Month 1-2
Gap Analysis and Scope Definition
Define AIMS scope covering all AI systems including third-party integrations and delegation chains. Classify role as Provider, Producer, or User. Map existing ISO 27001 controls to 42001 requirements.
Phase 2
Month 2-4
Deploy OMEGA v1.3 as Technical Record-Keeping Standard
Deploy OMEGA across training, validation, and production environments. Every AI decision, including delegated decisions, begins producing governed records immediately. By the time auditors arrive, there are months of immutable causal history demonstrating system maturity.
→ A.7.2 Data Provenance satisfied automatically
→ A.10.2 Logging satisfied automatically
→ P6 delegation chains auditable from day one
Phase 3
Month 3-5
Governance Artifact Development
Draft AI Policy and Statement of Applicability. Execute AI System Impact Assessment using OMEGA records as the evidence base for risk register inputs. P2 causal graphs feed directly into AIIA risk documentation.
→ P4M materiality records populate risk assessment with highest-consequence variables already identified
Phase 4
Month 5-6
Internal Audit and Management Review
Conduct internal audit using OMEGA logs to verify technical controls are functioning as documented. P4T trajectory records provide longitudinal evidence of system behaviour across multi-step sequences. PCF quantitative anchor metrics provide drift evidence.
→ A.6.2 Lifecycle Monitoring verified via P4 Expectation vs actual outcome comparison across full trajectory
Phase 5
Month 7-9
Certification Audit: Stage 1 and Stage 2
Stage 1 (documentation review): OMEGA records demonstrate months of operational AIMS maturity. Stage 2 (implementation audit): auditors query OMEGA records directly, with no evidence reconstruction required.
→ OMEGA eliminates the evidence gap that stalls most Stage 2 audits

Start with the spec

OMEGA Protocol v1.3 is published, open, and MIT-licensed. The formal proof of primitive necessity and sufficiency is machine-verifiable across three adversarial rounds. Certification bodies can audit against it.

Talk to us

We deploy the OMEGA standard against your specific AI systems and regulatory requirements. Governed records from your own systems within a week. No retainer to start.