What is a governed AI decision?
A consequential AI output that emits a signed, replayable, policy-bounded artifact recording its authority context, inputs, configuration, evidence, suppression history, review path, and verifier state.
The definition, expanded
An AI output becomes a governed AI decision when it emits an artifact with all eight of the following properties:
- Signed. Cryptographically attested by the system that produced it, using a key bound to a measured, attested execution environment.
- Replayable. The same inputs, configuration, and policy bundle produce the same output, bit-for-bit, on independent infrastructure.
- Policy-bounded. The decision was authorized by a named, versioned, signed policy bundle. The policy applied is itself an artifact.
- Authority-contexted. The artifact records who or what authorized this class of decision, including the human-in-the-loop chain if any.
- Input-traced. Every consequential input — prompts, tool calls, retrieved evidence, prior decisions — appears in the artifact with provenance.
- Suppression-recorded. If any candidate output, source, or action was filtered out, the artifact records it. The negative space is part of the receipt.
- Review-pathed. The artifact references the contestation procedure: who can challenge it, on what grounds, and what changes if the challenge succeeds.
- Verifier-stated. The artifact includes the verifier state at the time of decision: which controls were running, which were specified-not-running, which were skipped under override.
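The eight properties above can be sketched as a single artifact type. This is an illustrative schema only; every field name here is an assumption, not the actual RDL schema.

```python
from dataclasses import dataclass

@dataclass
class DecisionReceipt:
    # Hypothetical field names, one per property; the real schema is
    # defined by the RDL protocol, not by this sketch.
    output: str               # the consequential AI output itself
    signature: str            # Signed: attestation by the producing system
    replay_bundle: dict       # Replayable: inputs + config + policy for bit-for-bit replay
    policy_bundle_ref: str    # Policy-bounded: named, versioned, signed policy artifact
    authority_context: dict   # Authority-contexted: authorizer and human-in-the-loop chain
    input_trace: list         # Input-traced: prompts, tool calls, evidence with provenance
    suppressions: list        # Suppression-recorded: filtered candidates (may be empty)
    review_path: dict         # Review-pathed: who can challenge, on what grounds
    verifier_state: dict      # Verifier-stated: controls running / not running / overridden

    def is_complete(self) -> bool:
        """An output qualifies as governed only if every property is populated.

        Note: suppressions and input_trace may legitimately be empty lists,
        so they are checked for presence, not truthiness.
        """
        return all([
            self.signature, self.replay_bundle, self.policy_bundle_ref,
            self.authority_context, self.input_trace is not None,
            self.suppressions is not None, self.review_path, self.verifier_state,
        ])
```

The empty-list distinction matters: a receipt that recorded zero suppressions is complete, while a receipt that never recorded suppression history at all is not.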
What is a decision receipt?
The artifact described above is the decision receipt. It is the publishable, citable, verifiable unit of governed AI. The Replayable Decision Ledger (RDL) is the open protocol that defines the receipt's schema, hash chain, and signing discipline.
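One way to realize the receipt's hash chain, sketched under assumptions (SHA-256, sorted-key JSON canonicalization, a zero genesis hash); the RDL wire format and canonicalization rules are defined by the protocol itself:

```python
import hashlib
import json

def receipt_hash(receipt: dict, prev_hash: str) -> str:
    # Canonicalize: sorted keys and fixed separators remove whitespace and
    # ordering variance, then chain the digest to the predecessor's hash.
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + canonical).encode()).hexdigest()

def build_ledger(receipts: list[dict]) -> list[str]:
    """Chain receipts so each hash commits to every receipt before it."""
    chain, prev = [], "0" * 64  # assumed genesis value
    for r in receipts:
        prev = receipt_hash(r, prev)
        chain.append(prev)
    return chain
```

Because each hash commits to its predecessor, editing any earlier receipt changes every subsequent hash in the chain, which is what makes the ledger tamper-evident.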
What this definition does not claim
A governed AI decision is not necessarily a correct decision. The receipts doctrine is about evidence integrity and replayability, not cognition quality. A governed decision can be wrong; what makes it governed is that it can be audited, challenged, and corrected through a documented procedure, and that the entire trail is itself a tamper-evident artifact.
This is the boundary that survives hostile audit. Substrate attestation does not change what the model says; it changes the defensibility, trustworthiness, and chain-of-custody integrity of the evidence trail surrounding the cognition.
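The chain-of-custody point can be made concrete. In this sketch an HMAC stands in for the attestation signature; a real deployment would use an asymmetric key bound to a measured, attested execution environment, as the Signed property requires.

```python
import hashlib
import hmac
import json

# HMAC is a stand-in here; production signing would use an asymmetric
# attestation key, not a shared secret.
def sign_receipt(receipt: dict, key: bytes) -> str:
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify_receipt(receipt: dict, key: bytes, tag: str) -> bool:
    # Any mutation of the receipt after signing fails verification:
    # the cognition is unchanged, but its custody is now checkable.
    return hmac.compare_digest(sign_receipt(receipt, key), tag)
```

This is exactly the boundary described above: signing does not make the decision more correct, it makes any later alteration of the evidence detectable.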
The G0–G7 maturity ladder
Not every AI output meets all eight properties. The G0–G7 conformance ladder grades AI systems by how fully their decisions satisfy these properties and, at the upper rungs, how widely the resulting receipts are recognized:
| Level | Name | Threshold |
|---|---|---|
| G0 | Ungoverned | System acts. No artifact. |
| G1 | Logged | Audit trail exists. Not signed. |
| G2 | Signed | Artifact signed. Replay incomplete. |
| G3 | Replayable | Signed + replay bundle + verifier. |
| G4 | Conformant | G3 + GDTK-100 pass + external verifier. |
| G5 | Improving | System reduces defects, waste, and replay divergence over time via statistical process control (SPC) discipline. |
| G6 | Federated | Multiple operators recognize and verify each other's receipts. |
| G7 | Institutional | Regulators, insurers, auditors treat the receipt as the unit of trust. |
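The lower rungs are mechanically checkable from the artifact alone, which a toy grader can illustrate. This sketch stops at G3, since G4 and above depend on external verifiers, operators, and institutions; the predicate names and bundle keys are assumptions.

```python
from typing import Optional

def conformance_level(artifact: Optional[dict]) -> str:
    """Grade an artifact against the G0-G3 thresholds (illustrative only)."""
    # G0: the system acted and emitted no artifact at all.
    if artifact is None:
        return "G0"
    # G1: an audit trail exists but is not signed.
    if not artifact.get("signature"):
        return "G1"
    # G2: signed, but the replay bundle is incomplete.
    bundle = artifact.get("replay_bundle", {})
    if not all(k in bundle for k in ("inputs", "config", "policy")):
        return "G2"
    # G3: signed, complete replay bundle, and verifier state recorded.
    if artifact.get("verifier_state"):
        return "G3"
    return "G2"
```

G4 is deliberately out of reach for such a grader: a GDTK-100 pass and an external verifier are facts about the surrounding ecosystem, not fields one can read off the artifact.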
G5–G7 are published as horizon doctrine without committed dates. The probabilistic framing: by 2031, with probability ≥ 40%, the market rule for consequential AI will be "show the receipt."
See how the five pillars produce decisions that meet this definition →