Open-source trust infrastructure

AI is making decisions.
Nobody can verify them.

Eight open-source protocols that make AI reasoning inspectable, accountable, and safe. Hash-chained. Tamper-evident. Zero dependencies. Built for the age of autonomous agents.

8 Protocols
232 Tests passing
0 Dependencies
MIT Licensed
The problem

Trust doesn't scale with intelligence

AI agents are booking flights, writing code, making clinical recommendations, and trading financial instruments. They're making millions of consequential decisions per day with no audit trail, no consent verification, and no accountability when things go wrong.

The EU AI Act's obligations for high-risk AI systems take effect in August 2026. They require documented reasoning chains. The infrastructure to produce those chains doesn't exist.

Until now.

The foundation

One reasoning structure. Every domain.

Every decision — clinical, financial, educational, autonomous — follows the same structure. An observation is made. An inference is drawn. An assumption is held. A choice is reached. An action is taken.

OBSERVE
DERIVE
ASSUME
DECIDE
ACT

The Omega trust stack is built on this structure. Each protocol captures a different dimension of the reasoning chain, creating complete accountability from authorisation through to consequence.
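The five-step structure above can be sketched as typed nodes in a trace. This is an illustrative model only; the type and field names are assumptions, not the published Clearpath API.

```typescript
// Hypothetical sketch: the five-step reasoning structure as typed nodes.
type NodeKind = "OBSERVE" | "DERIVE" | "ASSUME" | "DECIDE" | "ACT";

interface ReasoningNode {
  kind: NodeKind;
  statement: string;
  evidence?: string[]; // references supporting this step
}

// A decision trace is an ordered chain of typed nodes, from
// observation through to action.
const trace: ReasoningNode[] = [
  { kind: "OBSERVE", statement: "Patient reports radicular leg pain", evidence: ["intake-form-7"] },
  { kind: "DERIVE",  statement: "Symptoms consistent with nerve root compression" },
  { kind: "ASSUME",  statement: "Imaging findings are current" },
  { kind: "DECIDE",  statement: "Recommend conservative management first" },
  { kind: "ACT",     statement: "Physiotherapy referral issued" },
];

const kinds = trace.map(n => n.kind).join(" → ");
```

Because every node carries an explicit kind, an auditor can check that no decision was reached without an observation behind it, and no action was taken without a decision.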

The trust stack

Eight protocols. Complete accountability.

Each protocol is independent, hash-chained, and tamper-evident. Together they form a complete trust infrastructure for any AI system.

01
CAP-1.0

Clearpath

What was decided and why. Hash-chained decision traces with typed nodes, trust boundaries, and evidence references.

27 tests
02
CLP-1.0

Cognitive Ledger

How the decision-maker reasons over time. Bias detection, pattern analysis, calibration tracking, and generative prompts.

33 tests
03
CNL-1.0

Consent Ledger

Was this authorised. Tracks what humans permitted versus what agents did. Dual hash chains. Scope creep detection.

26 tests
04
ARP-1.0

Assumption Registry

What is being taken for granted. Explicit assumption tracking with dependency mapping and cascade simulation.

26 tests
05
HTP-1.0

Harm Trace

What happened as a result. Causal consequence chains from decision to real-world impact. Propagation pattern detection.

26 tests
06
DSP-1.0

Dispute Protocol

How to resolve disagreements. Compares reasoning traces. Finds divergence points. Preserves dissent. Builds precedent.

25 tests
07
TSP-1.0

Trust Score

How reliable is this agent. Multi-dimensional trust scoring with portable, time-limited, verifiable credentials.

28 tests
08
EGP-1.0

Ethics Gate

Should this exist at all. Harm scanning, vulnerability checking, weaponisation detection. Flags, never blocks. Humans decide.

41 tests
The principle

Filter, never optimise.
Flag, never act.
Humans decide.

The trust stack does not make ethical decisions. It does not block actions. It does not override human judgment. It makes reasoning visible, surfaces concerns, and ensures that when humans decide, they decide with full awareness.

Safety and human care come first. In every protocol. In every decision. That principle is not configurable.

Built on the stack

Deployed surfaces

The trust stack is domain-agnostic. The audit mechanism is identical. The stakes change.

Spine Case

● LIVE

Clinical decision infrastructure for spine surgery. Synthesises complex cases. Surfaces blind spots. Creates defensible reasoning trails.

OMEGA Tutor

● LIVE

Adaptive learning with cognitive diagnostics. Tracks misconceptions. Adapts to reasoning patterns. Calibrates confidence.

Reflect for Schools

● LIVE

Trauma-informed education and safeguarding. Detects risk patterns. Supports pastoral care. Preserves child agency.

Constraint Universe

● LIVE

Governance modelling with formal constraint solving. Classifies outcomes as possible, impossible, or inevitable.

Context

Why now

Trust becomes the ultimate currency. Intelligence scales infinitely — but trust does not. Societies that lose trust will lose stability.

Klaus Schwab, World Economic Forum, 2026

The EU AI Act requires documented reasoning chains for high-risk AI systems from August 2026. Autonomous agents are being deployed at scale into enterprises, healthcare, finance, and education. Superintelligence may arrive by 2028.

The infrastructure to make these systems accountable needs to exist before the systems become too powerful to retrofit. The trust stack is that infrastructure. Open-source. Ready today.

Technical

Design principles

Every protocol follows the same architecture. TypeScript. Zero external dependencies. SHA-256 hash chains. Tamper-evident. MIT licensed. Library, not service. No server. No database. No UI. The protocol layer that applications build on.
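A minimal sketch of the hash-chaining idea, using only Node's built-in crypto module (consistent with the zero-dependency constraint). The entry shape and function names are illustrative assumptions, not the protocols' actual interfaces.

```typescript
import { createHash } from "node:crypto";

interface ChainEntry {
  payload: string;
  prevHash: string; // hash of the previous entry ("" for the first entry)
  hash: string;     // SHA-256 over prevHash + payload
}

// Append an entry, linking it to the hash of the entry before it.
function append(chain: ChainEntry[], payload: string): ChainEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "";
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  return [...chain, { payload, prevHash, hash }];
}

// Verification recomputes every hash. Editing any entry breaks the chain
// from that point forward, which is what makes tampering evident.
function verify(chain: ChainEntry[]): boolean {
  return chain.every((entry, i) => {
    const prevHash = i === 0 ? "" : chain[i - 1].hash;
    const expected = createHash("sha256").update(prevHash + entry.payload).digest("hex");
    return entry.prevHash === prevHash && entry.hash === expected;
  });
}

let chain: ChainEntry[] = [];
chain = append(chain, "OBSERVE: sensor reading 42");
chain = append(chain, "DECIDE: throttle down");

const intact = verify(chain);
const tampered = verify([{ ...chain[0], payload: "OBSERVE: sensor reading 99" }, chain[1]]);
```

The design choice matters: because each hash covers the previous hash, an attacker cannot alter one entry without recomputing every subsequent entry, and any externally anchored copy of the final hash exposes that rewrite.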

A clinical decision support tool imports Clearpath — every recommendation generates an audit trace. An autonomous agent imports the Consent Ledger — every action is verified against its mandate. A regulatory body imports the Ethics Gate — every AI system is reviewed before deployment.
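The consent-verification pattern could look something like the sketch below. All names here are hypothetical illustrations of the idea, not the Consent Ledger's real API; note that, in line with the stack's principle, it flags rather than blocks.

```typescript
// Hypothetical sketch of mandate verification in the Consent Ledger style.
interface Mandate {
  agent: string;
  allowedActions: Set<string>; // what the human explicitly permitted
}

interface AgentAction {
  agent: string;
  action: string;
}

// Scope-creep detection: return actions that fall outside the
// human-granted mandate. Flagged, never blocked; humans decide.
function outsideMandate(mandate: Mandate, actions: AgentAction[]): AgentAction[] {
  return actions.filter(
    a => a.agent === mandate.agent && !mandate.allowedActions.has(a.action)
  );
}

const mandate: Mandate = {
  agent: "travel-bot",
  allowedActions: new Set(["search-flights", "hold-booking"]),
};

const flagged = outsideMandate(mandate, [
  { agent: "travel-bot", action: "search-flights" },
  { agent: "travel-bot", action: "purchase-ticket" }, // never authorised
]);
```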

The protocol is domain-agnostic. The audit mechanism is identical. The stakes change.