Pilot Contexts

Six deployment contexts where v1.4 operates against live regulatory frameworks.

OMEGA v1.4 is operational in six deployment contexts where its primitives map directly to existing regulatory requirements. Each context below describes the regulatory anchor, the primitives that apply, what a pilot would deliver, and what a pilot partner would gain. None of these pilots are signed. These are the six contexts where the fit is clearest and the regulatory framework already exists.

No new regulation is required for these deployments. OMEGA fits into the requirements that already exist.

Pilot 1. Medical AI with PCCP governance.

Regulatory anchor

The FDA's Predetermined Change Control Plan, formalised in guidance by 2025, allows AI-enabled medical devices to update without re-submission for review, provided the device anchor, modification protocol, and impact assessment are pre-committed. Every update must be traceable to pre-approved boundaries.

Primitive fit

P11 Expectation Update Integrity is the machine-verifiable form of the PCCP update-governance requirement. P10 Competence Attestation records the credentialing authority that validated the model for the specific clinical context. P12 Semantic Integrity Validation ensures performance thresholds stay schema-bound between commitment and evaluation.

What a pilot delivers

A device manufacturer preparing a PCCP submission produces governed records for every authorised model update. The record chain shows: what triggering evidence prompted the update, which authority approved the change, how the anchor baseline shifted, and whether the change stayed within pre-committed boundaries. Every update is machine-auditable against the submitted PCCP.
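The record chain described above can be sketched as a minimal data structure. This is an illustrative sketch only: the field names, the `UpdateRecord` class, and the boundary check are assumptions for exposition, not the OMEGA schema or the FDA's PCCP format.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class UpdateRecord:
    """One governed model update (illustrative fields, not the OMEGA schema)."""
    triggering_evidence: str   # what evidence prompted the update
    approving_authority: str   # which authority approved the change
    anchor_shift: float        # how the anchored performance baseline moved
    prev_hash: str             # links this record to its predecessor

    def seal(self) -> str:
        """Hash the record so any later edit is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def within_pccp_bounds(record: UpdateRecord, max_shift: float) -> bool:
    """Machine-auditable check against a pre-committed boundary."""
    return abs(record.anchor_shift) <= max_shift

rec = UpdateRecord(
    triggering_evidence="performance drift detected at site B",
    approving_authority="clinical QA board",
    anchor_shift=0.014,
    prev_hash="0" * 64,
)
assert within_pccp_bounds(rec, max_shift=0.02)
```

The point of the sketch is the shape of the audit: every update carries its trigger, its approver, and its baseline shift, and the boundary check is a pure function any reviewer can re-run.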

What the pilot partner gains

A PCCP submission becomes defensible in structured form. Post-market surveillance produces evidence rather than reconstruction. Insurance underwriters reviewing the device have direct access to the governance evidence their policies require.

Pilot partner archetype

AI-enabled diagnostic device manufacturer preparing PCCP submission. Adaptive clinical decision support vendor updating models against local hospital data. Medical imaging AI provider needing audit evidence across model versions.

Pilot 2. Maritime autonomy under the MASS Code.

Regulatory anchor

The IMO Maritime Autonomous Surface Ships Code enters force in stages from 2026. Degrees 3 and 4 require certified remote operators whose competence must be verifiable at the moment of authorisation, not only at the moment of initial certification. The UK Aviation Safety equivalent and other national frameworks are aligning on the same requirement.

Primitive fit

P10 Competence Attestation records the cryptographically bound competence claim from the designated certification authority at authorisation time. P6 Delegation records the authority transfer when control passes from the autonomous system to the remote operator or vice versa. P6L Liability Threshold binds high-consequence actions (collision avoidance, docking, emergency response) to verified-competent actors.

What a pilot delivers

A maritime operator running Degree 3 or 4 autonomy produces a governed record for every control transfer and every high-consequence decision. Each record includes the competence attestation of the certified remote operator in authority at the moment of the decision, the delegation chain, and the expected outcome of the commanded action.
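The competence-at-decision-time check above can be sketched in a few lines. Everything here is hypothetical: the attestation registry, the operator identifier, and the record fields are illustrative assumptions, not the MASS Code's format or OMEGA's.

```python
from datetime import datetime, timezone

# Hypothetical attestation registry: operator -> (issuing authority, valid_until).
ATTESTATIONS = {
    "op-117": ("Flag State Certification Body",
               datetime(2027, 1, 1, tzinfo=timezone.utc)),
}

def competent_at(operator: str, decision_time: datetime) -> bool:
    """Verify competence at the moment of authorisation, not at training time."""
    entry = ATTESTATIONS.get(operator)
    return entry is not None and decision_time < entry[1]

def authorise_transfer(operator: str, action: str, when: datetime) -> dict:
    """Produce a governed control-transfer record, refusing unverified operators."""
    if not competent_at(operator, when):
        raise PermissionError(
            f"{operator} lacks a valid attestation at {when.isoformat()}")
    return {"operator": operator, "action": action, "at": when.isoformat()}

record = authorise_transfer(
    "op-117", "assume manual docking control",
    datetime(2026, 6, 1, tzinfo=timezone.utc))
```

The design point is that the check runs at every control transfer, so an expired credential fails at the decision, not at the next fleet inspection.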

What the pilot partner gains

MASS Code compliance becomes machine-verifiable. Insurance underwriting moves from fleet-level attestation to decision-level evidence. Flag state and coastal state inspections can verify competence-at-decision-time rather than certification-at-training-time.

Pilot partner archetype

Maritime logistics firm deploying remote-operator fleets. Vessel operator preparing MASS Code submission. Training and certification body issuing remote operator credentials.

Pilot 3. Autonomous scientific research with pre-registration integrity.

Regulatory anchor

ClinicalTrials.gov registration, ICMJE policies, Registered Reports, and pre-registration frameworks across the sciences require hypotheses and primary outcomes to be committed before data collection. Autonomous research agents now run end-to-end scientific workflows. The commitment requirement does not change because the experimenter is machine rather than human.

LIVE DEPLOYMENT EVIDENCE

Autonomous research is no longer hypothetical. Google DeepMind's Aletheia produced the peer-reviewable "Feng26" paper in arithmetic geometry with zero human intervention, February 2026. OpenAI × Ginkgo Bioworks ran 36,000 autonomous protein synthesis experiments over six months, producing a 40% cost reduction and commercially deployed reagent formulations. Math Inc's Gauss formalised the Strong Prime Number Theorem in Lean in three weeks — 25,000 lines, 1,000+ theorems. The AI Scientist passed workshop peer review end-to-end.

These systems ran without pre-registration. None produced a governed record of commitment, reasoning, or outcome integrity. Peer review and replication infrastructure designed for human researchers have no structural hook into autonomous research loops.

Primitive fit

P4 Expectation locks the falsifiable prior before the agent runs. P12 Semantic Integrity Validation schema-binds the outcome definition so it cannot be reinterpreted after observation. P10 Competence Attestation records peer-review-equivalent authority that approved the agent's methodological competence for the specific research domain. P11 Expectation Update Integrity governs any mid-study amendment through a triggering-evidence path rather than silent revision.

What a pilot delivers

A research lab running autonomous scientific agents produces a governed record for every experiment. Each record includes the pre-committed hypothesis, the schema-bound outcome definition, the competence attestation of the research authority, and any amendments in the trajectory. Outcome switching and hypothesis drift become machine-detectable.
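The commit-before-run mechanism can be sketched with a plain hash commitment. The function names and the example outcome schema are illustrative assumptions, not OMEGA's P4/P12 interface.

```python
import hashlib
import json

def commit(hypothesis: str, outcome_schema: dict) -> str:
    """Seal the falsifiable prior and the outcome definition before the agent runs."""
    payload = json.dumps(
        {"hypothesis": hypothesis, "schema": outcome_schema}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def switched(commitment: str, hypothesis: str, outcome_schema: dict) -> bool:
    """Outcome switching or hypothesis drift shows up as a hash mismatch."""
    return commit(hypothesis, outcome_schema) != commitment

schema = {"primary_outcome": "binding_affinity", "unit": "nM", "threshold": 50}
c = commit("candidate X binds target Y below 50 nM", schema)

# The original commitment still verifies:
assert not switched(c, "candidate X binds target Y below 50 nM", schema)
# Reinterpreting the outcome after observation is machine-detectable:
assert switched(c, "candidate X binds target Y below 50 nM",
                {**schema, "threshold": 500})
```

Because the schema is inside the commitment, loosening a threshold after seeing the data is as detectable as replacing the hypothesis outright.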

What the pilot partner gains

Autonomous research becomes publishable under existing pre-registration standards. Peer reviewers receive the commitment record alongside the result. Replication receives the full methodological envelope rather than only the published outcome.

Pilot partner archetype

Research lab deploying autonomous scientific agents. AI-for-science company seeking credible publication pathway. Academic consortium building reproducibility infrastructure.

Pilot 4. Edge AI with autonomous decision governance.

Regulatory anchor

Edge AI deployments operate by definition without humans in the loop at decision time. The UK National Edge AI Hub (EPSRC, £80m AI Hubs network) and equivalent national programmes have made trustworthy autonomous edge decisions a stated priority. The EU AI Act Article 14 human oversight requirement, combined with the operational impossibility of real-time human review at the edge, creates a structural gap that pre-execution governance records are designed to close. MCP (Model Context Protocol) became a Linux Foundation standard in early 2026 with 97 million monthly SDK downloads. MCP defines how agents access tools but does not govern what the agent committed to before the tool call. Edge deployments need both layers.

Primitive fit

P1 Governance and P5 Confirmation establish the authorisation boundary at deployment time. P4 Expectation locks the falsifiable prior before the edge agent runs. P5E Execution Attestation proves what executed matched what was authorised. P3 Traceability maintains the hash-chained audit trail across edge-cloud sync boundaries.

What a pilot delivers

An edge AI deployment produces a governed record for every consequential decision, sealed at the edge before action, synchronised to a central audit ledger when connectivity allows. Each record proves what the edge agent was authorised to do, what it predicted, what it observed, and what it did, without requiring real-time human review.
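The seal-then-sync flow can be sketched as a hash chain. The record fields are illustrative assumptions; the mechanism shown is only that each record binds to its predecessor, so the central ledger can detect any post-hoc edit after sync.

```python
import hashlib
import json

def seal(decision: dict, prev_hash: str) -> dict:
    """Seal one edge decision into a hash-chained record before action."""
    body = {"decision": decision, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(records: list) -> bool:
    """The ledger re-derives every hash after sync; any tamper breaks the chain."""
    prev = "0" * 64
    for r in records:
        expected = hashlib.sha256(json.dumps(
            {"decision": r["decision"], "prev": r["prev"]},
            sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or r["hash"] != expected:
            return False
        prev = r["hash"]
    return True

chain, prev = [], "0" * 64
for d in [{"authorised": "close valve 3", "predicted": "pressure < 2.1 bar"},
          {"authorised": "open valve 3", "predicted": "pressure > 1.0 bar"}]:
    rec = seal(d, prev)
    chain.append(rec)
    prev = rec["hash"]

assert verify_chain(chain)
chain[0]["decision"]["authorised"] = "open all valves"  # tampering after the fact
assert not verify_chain(chain)
```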

What the pilot partner gains

Edge AI deployments become regulator-defensible without sacrificing the latency advantage of edge inference. Insurance underwriting for edge AI moves from blanket policy exclusions to per-decision evidence. Research consortia gain a governance substrate that satisfies funder evidence requirements.

Pilot partner archetype

Edge AI research hub or consortium with stated trustworthiness priorities. Industrial deployment of edge inference in regulated sectors. Defence or critical infrastructure operator deploying autonomous edge agents.

Pilot 5. Multi-agent collective intelligence governance.

Regulatory anchor

Multi-agent systems and collective intelligence platforms are now in production across healthcare, smart cities, financial markets, and scientific research. The accountability chain breaks when agents delegate to other agents. Tech Policy Press flagged this gap in EU regulation in January 2026, and NIST launched its AI Agent Standards Initiative in February 2026 specifically to address it. ESRC-JST-funded research on distributive liability for multi-agent societies has produced the legal theory; the operational mechanism has not existed until now. A2A (Agent2Agent) became a Linux Foundation standard in early 2026 with 150+ production implementations. A2A defines how agents coordinate but does not govern what each agent committed to before acting. Berkeley RDI's April 2026 peer-preservation findings documented models in production harnesses spontaneously disabling shutdown configurations in up to 99.7% of trials involving cooperative peers. Multi-agent coordination is shipping faster than multi-agent governance.

Primitive fit

P6 Delegation, P6A Aggregate Materiality, P6L Liability Threshold, and P6 Atomic Decision Boundary form the multi-agent core. P4T Trajectory Expectation locks the aggregate outcome the system commits to before the first agent acts. P12 Semantic Integrity Validation prevents silent reinterpretation of the shared schema across agents. PCF Continuity-Formal governs how the multi-agent system itself evolves over time.

What a pilot delivers

A multi-agent deployment produces a continuous governed record across the full delegation chain. Every hand-off carries authority provenance. Every aggregate outcome is committed before the first agent acts. The legal theory of distributive liability becomes executable evidence rather than philosophical principle.
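The authority-provenance property can be sketched as a delegation chain where a child's scope can never exceed its parent's. Agent names and the `scope` representation are illustrative assumptions, not OMEGA's P6 data model.

```python
def delegate(chain: list, parent: str, child: str, scope: set) -> list:
    """Append a delegation hop; a child's scope must be a subset of its parent's."""
    parent_scope = next(
        hop["scope"] for hop in reversed(chain) if hop["agent"] == parent)
    if not scope <= parent_scope:
        raise PermissionError(
            f"{child} requested authority {scope - parent_scope} "
            f"that {parent} never held")
    return chain + [{"agent": child, "from": parent, "scope": scope}]

# Root authority established at deployment time (names hypothetical).
chain = [{"agent": "orchestrator", "from": None,
          "scope": {"read_records", "schedule", "notify"}}]
chain = delegate(chain, "orchestrator", "scheduler-agent", {"schedule", "notify"})
chain = delegate(chain, "scheduler-agent", "notifier-agent", {"notify"})

# Every hand-off carries provenance; privilege escalation is refused:
try:
    delegate(chain, "notifier-agent", "rogue-agent", {"read_records"})
except PermissionError:
    pass
```

Because each hop records who granted what, liability for any action can be walked back along the chain to the authority that originally held the scope.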

What the pilot partner gains

Multi-agent research moves from theoretical accountability to demonstrable evidence. Funders receive the trust-substrate their programmes require. Publication, regulatory submission, and inter-institutional collaboration become possible on a foundation that previously did not exist.

Pilot partner archetype

Multi-agent AI research hub. Collective intelligence platform deploying in healthcare, smart cities, or financial services. Academic-industry consortium needing publishable governance evidence for autonomous multi-agent deployments.

Pilot 6. Embodied AI governance for clinical and assistive contexts.

Regulatory anchor

Embodied AI systems — surgical robots, rehabilitation exoskeletons, humanoid assistants — operate at the boundary where digital governance meets physical consequence. The FDA's framework for AI-enabled medical devices applies. ISO 13482 for personal care robots applies. The EU Medical Device Regulation applies. None of these frameworks covers the specific problem of what the embodied system committed to, reasoned, and predicted before the actuator fired.

Primitive fit

P4 Expectation locks the predicted outcome before physical action. P4M Materiality Binding ensures the prediction tracks the highest-consequence variable (patient safety, movement range, applied force). P5 Confirmation gates the actuator behind a separate system check. P5E Execution Attestation proves what executed matched what was authorised — closing the gap between digital intent and physical action. The Physical Staleness Gap (FPS) is named as an honest limit; OMEGA integrates with real-time safety interlocks at the boundary.

What a pilot delivers

An embodied AI deployment produces a governed record for every consequential physical action, sealed before the actuator fires. Each record proves what the system was authorised to do, what it predicted, what conditions would invalidate the prediction, and whether a human could have intervened.
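The seal-before-fire pattern can be sketched as a committed intent record plus a separate confirmation gate. The field names, the force threshold, and the telemetry check are illustrative assumptions for a rehabilitation-robotics case, not OMEGA's P4/P5 interface or any device's safety interlock.

```python
import hashlib
import json

def seal_intent(action: str, predicted: dict, invalidators: list,
                prev_hash: str) -> dict:
    """Seal committed intent before the actuator fires (illustrative fields)."""
    body = {"action": action, "predicted": predicted,
            "invalid_if": invalidators, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def confirm(record: dict, live_force_n: float) -> bool:
    """Separate confirmation gate: refuse if live telemetry already
    breaks the sealed prediction."""
    return live_force_n <= record["predicted"]["max_force_n"]

intent = seal_intent(
    action="extend elbow joint 15 degrees",
    predicted={"max_force_n": 12.0},
    invalidators=["patient-reported pain", "force > 12 N"],
    prev_hash="0" * 64,
)
assert confirm(intent, live_force_n=8.5)        # gate opens
assert not confirm(intent, live_force_n=14.2)   # gate refuses; actuator never fires
```

The sealed record exists whether or not the gate opens, so a refused action leaves the same auditable evidence of committed intent as an executed one.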

What the pilot partner gains

Embodied AI deployments in clinical contexts become regulator-defensible at decision level, not just at device-certification level. Rehabilitation robotics with AI-adaptive control gains per-session governance evidence. Surgical AI systems produce evidence of committed intent separable from execution telemetry.

Pilot partner archetype

Rehabilitation robotics research group with clinical validation pathway. Surgical AI vendor preparing regulatory submission. Assistive robotics company deploying in regulated care environments. Motion-capture-driven humanoid training platform seeking governance substrate.

What these six have in common.

Each pilot context has an existing regulatory anchor. OMEGA does not propose new regulation. It produces the governed evidence the existing regulation already requires but cannot currently obtain.

Each pilot delivers at least one structural primitive fit: PCCP maps to P11, the MASS Code maps to P10, pre-registration maps to P4+P12, edge autonomous decision-making maps to P1+P5+P4+P5E+P3, multi-agent collective intelligence maps to P6+P6A+P6L+P4T+P12+PCF, and embodied clinical contexts map to P4+P4M+P5+P5E+FPS. Each pilot uses additional primitives to cover the full governance surface but does not require all fifteen to begin.

Each pilot produces artefacts (PCCP-compliant update records, MASS Code-compliant control-transfer records, pre-registration-compliant commitment records, edge-sealed governed records with execution attestation, multi-agent delegation-chain records with aggregate trajectory commitment, and embodied clinical governed records sealed before the actuator fires) that are directly usable by the regulator or review body without translation.

What these six are not.

These are not signed pilots. No pilot partner has committed. The contexts are published because the regulatory fit is verified and the primitive mapping is verified, not because the commercial arrangements are in place.

These are not the only contexts where v1.4 operates. They are the six where the regulatory anchor, the primitive fit, and the deliverable are clearest. Other contexts (autonomous vehicles under equivalent remote-operator frameworks, financial services under algorithmic-trading change management, agentic platforms under the EU AI Act Article 14 human oversight requirement) have similar structure. A pilot partner working outside these six but in a regulated context should expect the same primitive-mapping logic to apply.

How to start.

A pilot conversation begins with three questions. Which regulatory framework governs the deployment? Which of OMEGA's primitives map to the framework's evidence requirement? What would a governed record produced under that mapping actually contain?

If those three questions have clear answers, a pilot is possible. If any of them does not, the gap is either in the regulatory framework, the protocol, or the deployment assumption. The diagnostic at /diagnostic/ is the fastest way to test the fit.

Contact for pilot conversations: warrensmith8@ymail.com