Domain Application

Governing autonomous space systems

Every autonomous system operating beyond the atmosphere faces the same unsolved governance problem. OMEGA names it, formalises it, and provides the architecture to solve it.

Domain Space infrastructure
Published March 2026
Spec version OMEGA v1.0
Status Live — 9 domain scenarios, first external integration March 2026

The problem that does not have a name yet

An edge AI system compressing Earth observation data in orbit is making irreversible decisions about what information to discard permanently. No human can review those decisions before they execute. The latency between orbit and ground is too high. The bandwidth that would be needed to send the raw data back for human review is exactly the problem the system was built to solve.

An autonomous re-entry vehicle is making decisions about propulsion, trajectory, and descent that cannot be interrupted once initiated. A human in the loop is not a practical option at the moment of commitment. The vehicle decides. It acts or it does not act. There is no current architecture that records which decisions it considered and correctly chose not to execute, with the reasoning attached.

A pharmaceutical manufacturing system operating in microgravity is running processes that cannot be replicated on Earth. When regulators ask for the decision audit trail — and they will ask, because pharma regulators always ask — there is no current framework for what that audit trail should contain for an autonomous system making real-time process decisions in space.

Three different systems. Three different problems. One shared structural gap: the absence of a governed decision record for autonomous systems operating without a human in the loop.

The structural gap

Current autonomous space systems record what they did. None of them record what they considered and correctly chose not to do, why they held, or whether that decision was within the scope of their authorisation. That is the missing half of space system governance.

Seven systems. One unsolved problem.

The operators who need this now

Starfish Space Otter platform · $54.5M Space Force contract

Non-cooperative autonomous docking with tumbling defunct satellites. Under Article III of the 1972 Liability Convention, if the AI causes a collision the operator must prove it was not negligent. Without OMEGA they cannot.

Astroscale US APS-R · $61M Space Systems Command contract

First-ever autonomous hydrazine refuelling in GEO planned 2026. Toxic propellant transfer with autonomous valve triggers. A collision would create a debris field affecting hundreds of telecoms satellites.

Vast Haven-1 · First private space station, 2026 launch

Human-rated autonomous life support and station-keeping. Safety-of-life application. Every autonomous decision must be verified against safety parameters before human crew can safely occupy the station.

Space Forge ForgeStar-1 · Cardiff, UK

World-first autonomous plasma control in orbit, early 2026. Semiconductor and pharmaceutical manufacturing in space. Signed ESA Zero Debris Charter. Three distinct OMEGA needs.

Varda Space Industries W-series · Monthly re-entry cadence

W-1 through W-5 complete. Goal: monthly autonomous re-entries by end 2026. NASA and FAA both require recoverable payload audit trail.

Edge AI Compression System · Edge AI · Data Compression

What they do: Deploy advanced compression algorithms directly in orbit, making irreversible decisions about which data to transmit and which to discard before the signal reaches ground.
The decision problem: The edge AI decides which Earth observation information ceases to exist once discarded. The decision is made autonomously, at the speed of computation, with no human review possible before execution.
What is missing: A governed record of every compression decision — what the system considered transmitting, why it held, what threshold it applied, and whether that threshold was within its authorised operating parameters.

In-Space Manufacturing System · In-Space Manufacturing · Process Control

What they do: Manufacture protein crystals for monoclonal antibody therapies in microgravity, producing pharmaceutical materials that cannot be replicated on Earth. The UK Space Agency and MHRA published the world's first regulatory pathway for space-manufactured pharmaceuticals in March 2026 — these materials now have a governed route from orbit to patient.
The decision problem: The manufacturing process makes autonomous real-time decisions about crystallisation conditions. When pharma regulators require audit trails — and they will — the standard frameworks for process documentation do not account for autonomous decision-making in a space environment.
What is missing: A governed decision record that captures what the system expected at each process stage, what it reasoned, what it decided not to adjust and why, and whether each decision was within its validated operating parameters. The MHRA now requires exactly this — a governed audit trail for every autonomous manufacturing decision made in orbit. Without it, the regulatory pathway from orbit to market does not exist.

Autonomous Re-entry Vehicle · Autonomous Re-entry · Logistics

What they do: Build highly autonomous re-entry vehicles for small-mass, high-cadence orbital return missions — designed to make propulsion, trajectory, and descent decisions without a human in the loop at the moment of commitment.
The decision problem: A re-entry decision, once initiated, cannot be reversed. The vehicle decides to fire or not fire, to correct or not correct. Every non-action — every decision not to execute a manoeuvre — is as consequential as every action. Neither is currently governed or recorded in a way that supports certification, insurance, or the design of the next mission.
What is missing: A complete governed decision sequence for every re-entry — what the vehicle considered, what it held, what its expectation was before each manoeuvre decision, and whether each commitment was within its authorised flight envelope. This record is what certification bodies and insurers will require as the market scales.

Collision Avoidance System · Autonomous Manoeuvre · Debris Avoidance

What they do: Autonomous systems making propulsion decisions to avoid debris objects. The decision window is minutes. Human review is impossible. A wrong decision destroys the vehicle. A correct hold — deciding not to burn — is as consequential as the burn itself.
The decision problem: The system must decide whether to fire thrusters based on conjunction probability data that carries inherent uncertainty. A 1-in-847 collision probability sounds low, but it represents real risk at scale across thousands of active satellites.
What is missing: A governed record of the conjunction assessment, the decision threshold applied, whether the burn was within authorised parameters, and what the system considered and correctly chose not to do in previous passes.

EO Tasking Agent · Autonomous Tasking · Imaging Prioritisation

What they do: Autonomous agents deciding which areas of Earth to image on each orbital pass given fixed constraints of power, storage, and time. Every pass is a permanent decision — what was not imaged cannot be retrieved.
The decision problem: Emergency override requests compete with scheduled commitments. The agent decides what to image and what to permanently miss. That prioritisation decision has humanitarian, commercial, and contractual consequences that currently leave no governed record.
What is missing: A governed record of what was prioritised and why, what was deprioritised and permanently missed, what the agent expected versus what the actual request queue contained, and whether the override was within its authorised parameters.

Robotic Servicing Vehicle · Autonomous Docking · On-orbit Operations

What they do: Robotic servicing vehicles making autonomous docking decisions in the final metres of approach, where communication latency makes human intervention impossible. A wrong commitment destroys both vehicles.
The decision problem: Sensor anomalies detected during final approach create a proceed-or-abort decision under time pressure. The point of no return arrives before any human can review the data. The vehicle decides alone.
What is missing: A governed record of the anomaly assessment, the abort threshold applied, the reasoning chain from sensor reading to proceed decision, and the point of no return where the decision became irreversible. This record is the entire safety case for autonomous servicing certification.

In-Orbit Pharmaceutical Manufacturing · Pharma Manufacturing · Process Control

What they do: Manufacture monoclonal antibody therapies and protein crystals in microgravity under the UK Space Agency and MHRA world-first regulatory pathway for space-manufactured pharmaceuticals, published March 2026.
The decision problem: An autonomous crystallisation reactor makes real-time process decisions — temperature, pressure, nucleation timing — that directly affect whether the batch meets MHRA regulatory standards for patient access on Earth. Every autonomous decision is part of the regulatory submission. None of them currently has a governed record.
What is missing: A governed decision record capturing what the system expected at each process stage, what it reasoned, what it decided not to adjust and why, and whether each decision was within its validated MHRA operating parameters. Without this record, the regulatory pathway from orbit to market does not exist.

Rendezvous, Proximity Operations and Docking (RPOD) · Autonomous Docking · Liability Gap

What they do: Autonomous systems executing final approach and docking for satellite servicing, refuelling, and debris removal. Signal latency makes human intervention impossible in the final metres.
The decision problem: Under Article III of the 1972 Liability Convention, fault-based liability requires operators to prove they were not negligent if their AI causes a collision. Without an auditable logic trail of why the AI chose that specific approach vector, they cannot. This is the "evidentiary barrier."
What is missing: A governed record of every decision in the final approach — sensor data processed, vectors considered and rejected, expected target attitude, and the point of no return. This is the safety case for RPOD certification and the evidence insurers require for affirmative coverage.
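What such a record might contain can be sketched in code. The following is a hypothetical illustration of a governed non-action ("hold") record for the collision-avoidance scenario above; the field names and values are the author's assumptions for illustration, not the published OMEGA v1.0 schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of a governed "hold" record. Field names are
# illustrative assumptions, not the OMEGA v1.0 specification.
@dataclass(frozen=True)
class HoldRecord:
    timestamp: str                  # Traceability: when the decision was made
    authorisation_scope: str        # Governance: the authorised decision class
    expected_state: dict            # Expectation: prior baseline, registered first
    observed_state: dict            # What the sensors actually reported
    reasoning: list                 # Reasoning: chain from data to decision
    action_considered: str          # The manoeuvre that was evaluated
    decision: str                   # "hold" — the non-action, recorded first-class
    within_authorised_params: bool  # Was the hold inside the authorised envelope?

record = HoldRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    authorisation_scope="debris-avoidance manoeuvres, delta-v <= 0.5 m/s",
    expected_state={"collision_probability": 1 / 10_000},
    observed_state={"collision_probability": 1 / 847},
    reasoning=[
        "conjunction probability 1/847 below burn threshold 1/500",
        "hold preserves propellant for higher-risk passes",
    ],
    action_considered="avoidance burn, prograde, 0.3 m/s",
    decision="hold",
    within_authorised_params=True,
)
```

The point of the sketch is the shape, not the numbers: the non-action carries the same structure as an action — prior expectation, observed state, reasoning chain, and an explicit check against the authorised envelope.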

The five primitives applied to space

The OMEGA Protocol defines five irreducible primitives of governed decision-making. Remove any one and the decision record is incomplete. In space systems, the consequence of an incomplete record is not just an audit failure — it is the inability to certify, insure, iterate, or defend the system's behaviour.

Governance
In space systems: Who authorised the system to make this class of decision autonomously, and under what conditions that authorisation applies.
Without it: No accountability when the system makes a decision outside its intended scope. No basis for certification.

Reasoning
In space systems: The chain connecting sensor state, system objective, and decision — for both actions and non-actions.
Without it: No basis for understanding why the system behaved as it did. Post-mission analysis becomes inference, not evidence.

Traceability
In space systems: An immutable log of every decision the system made or considered, timestamped and attributable.
Without it: No audit trail for regulators, insurers, or mission designers. Each mission starts from zero knowledge.

Expectation
In space systems: The prior baseline the system registered before each decision — what it expected the state of the system and environment to be.
Without it: No way to measure surprise. No way to identify when the system encountered conditions outside its training distribution.

Confirmation
In space systems: The structural separation between forming an intent and committing to it — the gate that enforces authorisation before execution.
Without it: No governed boundary between what the system could do and what it is authorised to do. The decision and the action become a single undifferentiated step.
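The Confirmation primitive — the separation of intent from commitment — can be illustrated as a gate. This is a minimal sketch under the author's own assumptions; the class and method names are invented for illustration and are not the OMEGA v1.0 API.

```python
class AuthorisationError(Exception):
    pass

# Minimal sketch of the Confirmation primitive: forming an intent and
# committing to it are separate steps, and commitment is gated on the
# Governance envelope. Names are illustrative, not the OMEGA v1.0 API.
class ConfirmationGate:
    def __init__(self, authorised_envelope):
        # Governance: the bounds the system is authorised to act within.
        self.envelope = authorised_envelope
        self.log = []  # Traceability: every intent, committed or refused

    def form_intent(self, action, magnitude, expectation):
        # Expectation is registered before the commitment, not after.
        return {"action": action, "magnitude": magnitude,
                "expectation": expectation}

    def commit(self, intent):
        lo, hi = self.envelope[intent["action"]]
        within = lo <= intent["magnitude"] <= hi
        self.log.append({**intent, "committed": within})
        if not within:
            # The gate refuses: what the system could do and what it is
            # authorised to do are structurally different things.
            raise AuthorisationError(
                f"{intent['action']} at {intent['magnitude']} outside envelope")
        return intent

gate = ConfirmationGate({"burn_delta_v": (0.0, 0.5)})
gate.commit(gate.form_intent("burn_delta_v", 0.3, "miss distance > 2 km"))
try:
    gate.commit(gate.form_intent("burn_delta_v", 0.9, "miss distance > 2 km"))
except AuthorisationError:
    pass  # the refused commitment is itself logged, as a first-class record
```

Note that both the committed burn and the refused one end up in the log: the gate produces the audit trail as a side effect of enforcing the envelope.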

The regulatory squeeze — 2026

FAA Part 450 — Full enforcement 9 March 2026
Transition period ended. Performance-based, not prescriptive. Operators must prove autonomous systems will maintain public safety. OMEGA Governance and Confirmation primitives are the means of compliance.

UK Aviation Safety (Amendment) Regulations 2026 — Active February 2026
CAA grants exemptions for innovative autonomous technologies provided they meet essential safety requirements. Safety Management System required. OMEGA provides the SMS evidence layer.

ESA Zero Debris Charter — 210+ signatories January 2026
Requires controlled demise or re-entry burn. Auditable de-orbit protocol records required. OMEGA Confirmation primitive produces these permanently. Space Forge and Avio are among the signatories.

ITU WRC-27 — October 2027
Spectrum accountability for autonomous transmitters. OMEGA Traceability for frequency-use decisions provides what ITU regulators are seeking.

EU Space Act — Final drafting 2026
Standardised Life Cycle Assessment. Direction: mandatory accountability infrastructure for autonomous space assets.

The insurance gap — active January 2026

ISO endorsements CG 40 47 and CG 40 48 — effective January 2026 — explicitly exclude generative AI exposures from standard commercial space policies. Every space operator using AI for autonomous decisions is currently uninsured for AI incidents unless they have specialist affirmative coverage.

The parallel to “silent cyber” is direct. Insurers now require detailed disclosures about AI use, autonomy levels, and extent of human oversight. If an operator cannot provide an auditable trail of their AI’s reasoning, an incident may be excluded as “unforeseen emergent behaviour.”

Armilla at Lloyd’s provides affirmative AI liability coverage — but requires an auditable trail of AI logic. OMEGA Traceability is the risk mitigation affirmative insurers require to move from exclusion to coverage.

The underwriting formula:

R_HIAE = Σ(P_fail × L) × (1 − α_XAI)

α_XAI is the Explainability Alpha — every OMEGA record increases it and directly reduces the premium.
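A worked example of the underwriting formula above. The failure modes, probabilities, losses, and the alpha value are invented for illustration; only the formula itself comes from the text.

```python
# R_HIAE = sum(P_fail x L) x (1 - alpha_XAI)
# Residual AI exposure: expected loss across failure modes, discounted
# by the Explainability Alpha (0 = no audit trail, 1 = full explainability).
def r_hiae(failure_modes, alpha_xai):
    return sum(p * loss for p, loss in failure_modes) * (1 - alpha_xai)

# Invented failure modes: (probability, loss in USD).
modes = [
    (1 / 847, 250e6),   # collision during autonomous manoeuvre
    (1 / 5000, 400e6),  # docking failure destroying both vehicles
]

without_omega = r_hiae(modes, alpha_xai=0.0)  # no auditable trail
with_omega = r_hiae(modes, alpha_xai=0.6)     # assumed alpha with full records
assert with_omega < without_omega  # higher alpha, lower exposure and premium
```

The structure of the formula is the point: the loss terms are fixed by the mission profile, so the only lever the operator controls is α_XAI — the quality of the decision record.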

Agentic AI in orbit — NVIDIA Space Computing, March 2026

NVIDIA launched its Space Computing platform March 2026. The Vera Rubin Module delivers 25x more AI compute than the H100 for space-based inferencing. This enables Agentic AI — reinforcement learning systems pursuing complex mission goals independently.

Governance implication: an agentic AI may find a mathematically optimal solution that is operationally dangerous. OMEGA Governance primitive provides boundary conditions the agentic AI cannot violate regardless of its internal optimisation logic.

NVIDIA adopted SiFive RISC-V cores for GPU management — open standard. OMEGA Traceability can be integrated at instruction-set architecture level. Every Reasoning step logged by the silicon. Unbreakable audit trail from hardware up to the authorisation layer.


Why this matters now

The orbital economy is at the same inflection point the internet reached in the mid-1990s. The infrastructure is being built. The first commercial systems are launching. The first failures and near-misses are accumulating. The regulatory frameworks are beginning to form.

At every equivalent inflection point in the history of infrastructure — financial networks, internet protocols, energy grids — the governance standard that became embedded earliest became the standard that persisted. The organisations that defined it did not compete with the infrastructure operators. They became the substrate those operators had to implement to be trusted.

Space is not different. It is earlier. The window for establishing the governed decision record as a first-class requirement of autonomous space systems — before the first major certification dispute, before the first regulatory framework crystallises without it, before the first insurer demands it without a standard to reference — is open now.

On 23 March 2026, OMEGA Protocol submitted public comments to the US National Cybersecurity Center of Excellence (NCCoE) AI Agent Identity and Authorisation concept paper. OMEGA argued that current agent logging standards miss three structural elements: the Expectation primitive as a committed prior baseline, structured Reasoning chains as pre-action records, and non-action records as first-class governance objects.

FAA Part 450 is in full effect. UK Aviation Safety Regulations are active. The ESA Zero Debris Charter has 210+ signatories. Lloyd's has excluded AI from standard commercial policies. ISO 42001 certification is becoming a procurement requirement. Every framework converges on the same requirement.

OMEGA is the only published open standard that satisfies all of them.

The position

OMEGA does not compete with operators of edge AI compression, in-space manufacturing, or autonomous re-entry systems. It is the governance substrate each of them needs to implement to be trusted — by regulators, by insurers, by the customers whose data, drugs, and payloads depend on their systems making the right decisions without a human in the loop.

What OMEGA provides

The OMEGA Protocol Specification v1.0, published at omegaprotocol.org/spec/v1, defines the minimum information structure required for a governed decision record. It has been deployed in production across eight domains — trading, manufacturing, energy, property, software architecture, social care, mathematics, and space — generating governed non-action records since early 2026. The MHRA regulatory pathway for space-manufactured pharmaceuticals, published March 2026, requires precisely this infrastructure. The EU AI Act enforcement deadline is August 2026. NIST launched its AI Agent Standards Initiative in February 2026. The regulatory frameworks are not forming around OMEGA. They are forming around the problem OMEGA already solved.

The same five primitives that govern a trading signal decision, a social care case decision, and an energy infrastructure commitment decision are the same five primitives required to govern an orbital compression decision, a pharmaceutical process decision, and a re-entry manoeuvre decision. The primitives are irreducible. They apply wherever autonomous systems make consequential decisions without a human in the loop.

The space domain is the environment where the consequence of an ungoverned autonomous decision is most immediate, most irreversible, and most visible. It is also the environment where the argument for OMEGA's necessity is strongest. A re-entry vehicle that can prove what it considered and chose not to do, with a verifiable governance record, is a fundamentally different product from one that cannot.

The orbital economy needs autonomous systems it can trust.
Trust requires governance. Governance requires a record.

omegaprotocol.org/domains/space