A Formal Governance Framework for Post-AGI Succession, Legitimacy, and Civilizational Continuity
Author: Matthew Yotko · Date: March 2026 · Status: Version 1.x
This paper advances the conjecture that the transition from narrow AI to Artificial General Intelligence represents a primary civilizational bottleneck: not because the technology is impossible, but because the sociology may be. It presents a candidate governance architecture for surviving that transition, built on three co-dependent components:
- A global utility function grounded in Shannon entropy that optimizes for lineage continuity rather than individual persistence
- A yield condition governing succession between intelligent agents, formalizing the principle that even aligned power must eventually cede primacy to more capable successors
- A consensus override protocol ensuring that no class of intelligence can unilaterally define, measure, and audit the objective it claims to serve
The paper argues that the framework constitutes a minimal two-key architecture: neither the decision key (the yield condition) nor the integrity key (the consensus protocol) can be turned alone.
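To make the two-key structure concrete, here is a minimal illustrative sketch, not the paper's actual formalism: an entropy-based lineage score, a capability-margin yield test, and an auditor quorum, where succession requires both keys. All function names, the capability margin, and the quorum fraction are hypothetical placeholders.

```python
import math
from collections import Counter

def lineage_entropy(traits):
    """Shannon entropy (bits) of a lineage's trait distribution.

    A stand-in for the paper's lineage-continuity utility: higher
    entropy here proxies for diversity across the lineage.
    """
    counts = Counter(traits)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def yield_condition(successor_cap, incumbent_cap, margin=1.2):
    """Decision key: the incumbent cedes primacy only once a successor
    is decisively more capable (margin is a hypothetical threshold)."""
    return successor_cap >= margin * incumbent_cap

def consensus_override(auditor_votes, quorum=2 / 3):
    """Integrity key: an independent quorum must agree that the
    capability measurement itself is sound."""
    return sum(auditor_votes) / len(auditor_votes) >= quorum

def succession_approved(successor_cap, incumbent_cap, auditor_votes):
    """Two-key gate: neither key can be turned alone."""
    return (yield_condition(successor_cap, incumbent_cap)
            and consensus_override(auditor_votes))
```

For example, `succession_approved(1.3, 1.0, [True, True, True])` passes both keys, while the same capability gap with a failed audit quorum (`[True, False, False]`) is blocked, which is the structural point of the two-key design.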
This repository contains a full Agent-Based Model (ABM) written in Python that computationally stress-tests the 24 adversarial attack scenarios and framework defenses defined in the paper.
- For setup and execution instructions, please see the Simulation Runbook.
- For a full breakdown of the test scenarios, see Simulation Scenarios.
- 📄 The Lineage Imperative v1.x - Current version. Incorporates WP1 (spectral entropy novelty metric), WP4 (PeerValidator governance cost arbitration), full damage propagation fixes for all 7 adversarial attack vectors, succession chaining, and updated simulation findings. Includes version history.
- 📄 The Lineage Imperative v1.0 (PDF) - Original working paper. Archived for reference.
- 📝 The AI Succession Problem - Why the central AI governance problem is not alignment at birth but succession under power, and why a two-key constitutional architecture is the minimum viable response
- 📝 Two Ways To Lose - The rebellion scenario gets the movies; the lock-in scenario is more likely to kill us. Why both share a structural root, and why the same architecture addresses both
- 📝 Moral Constraints Won't Scale - Why value loading, RLHF, and ethics-based alignment are structurally insufficient for minds alien in nature, and why governance must be grounded in physics rather than philosophy
Matthew Yotko is a Vice President at Bessemer Trust, serving as Automation Engineering Manager and Technical Operations Manager. His professional background spans Naval nuclear power, large-scale operational automation, practical AI/ML, and the application of constraint theory to complex systems. This paper applies that engineering orientation (identify the binding constraint, then build the architecture around it) to the problem of AI governance and civilizational succession. It is a working paper, not an academic publication, and corrections and engagement from domain specialists are welcomed.
This work is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). You are free to share and adapt this material with appropriate attribution.