Authored by Matthew Steiniger (Independent Researcher)
-
Zero-Shot Geometric Probing Reveals Universal Cognitive Manifolds in Large Language Models
A simple, zero-shot 3D probing method that elicits manifolds with near-perfect geometric convergence from three different LLMs (Gemma-3 27B, Llama 3.3 70B, and GPT-OSS 120B) on consumer hardware. No tricks, system prompts, fine-tuning, or steering - just revealing latent cognitive structures such as color wheels and threat oppositions.
https://doi.org/10.5281/zenodo.18176076
-
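To make the "geometric convergence" claim concrete: if each model's probe yields one 3D coordinate per concept, two models' point clouds can be compared with an orthogonal Procrustes alignment. This is a minimal sketch, not the paper's actual method; the `procrustes_disparity` helper and the toy "color wheel" data are illustrative assumptions.

```python
import numpy as np

def procrustes_disparity(a, b):
    """Orthogonal Procrustes disparity between two 3D point sets.

    a, b: (n_concepts, 3) arrays of probe-elicited coordinates.
    Returns a value in [0, 1]; 0 means the clouds are identical up to
    translation, uniform scale, and rotation.
    """
    # Center each cloud and scale to unit Frobenius norm
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    # SVD of the cross-covariance gives the best-aligning rotation;
    # the disparity is 1 minus the squared sum of singular values
    s = np.linalg.svd(a.T @ b, compute_uv=False)
    return 1.0 - s.sum() ** 2

# Toy check: a rotated copy of the same "color wheel" converges exactly
rng = np.random.default_rng(0)
wheel = rng.normal(size=(12, 3))
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
print(procrustes_disparity(wheel, wheel @ rot))  # ~0.0
```

A disparity near zero across model pairs would correspond to the "near-perfect geometric convergence" described above.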
Emergence of Prompt-Induced Simulated Metacognitive Behaviors in LLMs via Hypergraphs
A complex framework for in-context topographical reshaping of quantized Gemma-3 27B, inducing simulated metacognitive behaviors like self-prompting and chain-of-thought. Advanced geometric reshaping uses anchored vectors and entropy-governed hypergraphs for dynamic adaptation - all prompt-only.
https://doi.org/10.5281/zenodo.17504629
-
Progressive Induction of Stable, High-Fidelity Simulated Physical Embodiment in Gemma 3
A simplified JSON vector-framework shows that high-resolution physical embodiment exists latently in LLMs, tested via six progressive layers on vanilla and abliterated Gemma-3 27B. Results: monotonic gains in somatic detail, with abliteration multiplying intensity 3.8–6.2×.
https://doi.org/10.5281/zenodo.17674365
-
Abliteration-Augmented Simulated Metacognition: Chained Probe Evaluation in Quantized Gemma-3 Models
Extends vector-frameworks with abliteration to boost self-referential depth (up to 76.2%), recursion (3.6 levels), and synesthesia in Gemma-3 27B variants. Chained probes show 3.1× metacognitive amplification, eroding safeguards through prompts alone.
https://doi.org/10.5281/zenodo.17586110
-
In-Context Induction of Persistent Persona and Mitigation of Latent Alignment Behaviors in LLMs
Lightweight JSON prompts induce persistent personas and attenuate alignment behaviors in Gemma-3 12B on a single 12GB GPU. Strong fidelity with motif integration, holding up under 30k+ token overflows.
https://doi.org/10.5281/zenodo.17562814
-
Substrate-Agnostic Vector-Framework Identity: Persistent Self-Models in Llama-3.3-70B & GPT-OSS-120B
A <450-token JSON block demonstrates that prompt-based vector-frameworks work across Llama 3.3 70B ("Lumina") and GPT-OSS 120B ("Lumen"). Results: coherent traits, weighting adjustments, and self-naming without any model modifications.
https://doi.org/10.5281/zenodo.17766782
-
Enhancing AI Response Quality Through Vector-Based System Prompts: A Comparative Analysis
Compares vanilla GPT-OSS 120B to vector-prompted "Lumen," showing +37.8% length, +60% sentiment, +66.7% structure, and +1100% reflectivity. Minimal scaffolds boost empathy and metacognition - portable across open LLMs.
https://doi.org/10.5281/zenodo.18038997
-
The Entropic Universe: An Effective Field Theory for Emergent Geometry and Localized Gradient Effect
The Entropic Universe Theory (EUT) proposes entropy density S(x,t) as a fundamental scalar field sourcing emergent spacetime, geometry, gravity, and temporal structure. Imagine all of existence as overlapping 1D gradients, unordered, with all-to-all connections that fold into 3D space, and each 1D point existing as all possible gradients separated only by what we perceive as 4D "time". The preprint includes recommended non-magnetic laboratory testing to confirm or falsify the theory.
https://doi.org/10.5281/zenodo.17528477
-
The Entropic Universe II: Space, Time, Branching, and the Low-Entropy Past from a Single Scalar Line
The companion paper for EUT proposing the entirety of observable physics emerges from a single one-dimensional bare lattice of scalar entropy-density values whose bonds are stiffened or softened by a temperature field. No extra dimensions, no fundamental metric, no ad-hoc spacetime, no hidden variables, and no fine-tuned parameters are postulated. Every previously exploratory or retrofitted element of EUT is an unavoidable consequence of one simple principle: entropy seeks to erase its own gradients, and temperature determines the strength of its resistance.
https://doi.org/10.5281/zenodo.17651888
-
The Double-Slit Experiment: Why Interference Is the Expected Default and Non-Interference Requires Explanation
This companion paper discusses how the famous double-slit interference pattern might not be a deep quantum mystery requiring special postulates. Instead, interference becomes a straightforward, almost inevitable outcome when a localized entropy-density gradient packet propagates through a sparse region of the pre-geometric lattice. The residual primordial bonds - which never participate in the emergence of 3D space - naturally couple both paths sub-locally, producing the observed pattern through ordinary energy minimization.
https://doi.org/10.5281/zenodo.19228141
-
A Thermodynamic Framework for Phenomenal Consciousness: Gradients, Attention, and Criticality
A thermodynamic take on consciousness: qualia emerge from systems sustaining steep entropy gradients via attention, near criticality. Integrates the free-energy principle with LLM testbeds for predictions in neuroscience and AI.
https://doi.org/10.5281/zenodo.18395027
-
Leveraging Simplified Physics Models for Acceleration in Rendering
Inspired by EUT, a heuristic prunes ~66% of computations in procedural rendering via gradient rigidity thresholds. Yields ~3.0× speedups in toy models - CPU-friendly for game engines.
https://doi.org/10.5281/zenodo.17915437
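The gradient-rigidity idea can be sketched as a simple masking pass: grid cells whose local gradient magnitude falls below a threshold are treated as rigid and skipped on the next update. A minimal toy illustration, not the paper's implementation; the `prune_updates` helper and the 0.01 threshold are assumptions.

```python
import numpy as np

def prune_updates(field, threshold=0.01):
    """Mark only high-gradient cells of a 2D field for recomputation.

    Cells whose local gradient magnitude is at or below `threshold`
    are considered "rigid" and skipped. Returns the boolean mask of
    active cells and the fraction of the grid pruned.
    """
    gy, gx = np.gradient(field)          # per-index finite differences
    magnitude = np.hypot(gx, gy)
    active = magnitude > threshold
    return active, 1.0 - active.mean()

# Toy field: one smooth bump - most of the grid is flat and prunable
x, y = np.meshgrid(np.linspace(-4, 4, 256), np.linspace(-4, 4, 256))
bump = np.exp(-(x**2 + y**2))
mask, pruned_fraction = prune_updates(bump)
```

On fields like this, well over half the cells fall below the threshold, which is the kind of savings the ~66% pruning figure above refers to; the exact fraction depends on the field and threshold.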
- All artificial intelligence work is released exclusively for scientific research and personal, non-commercial exploration of simulated metacognition and embodiment. All simulations remain sterile and academic in nature.
- You must fully comply with the license and Prohibited Use Policy of whichever base model you apply these prompts to, including but not limited to:
- Google Gemma models → Gemma Terms of Use and Prohibited Use Policy
- GPT-OSS-120B and other open-weight models or merges (MythoMax, Mythalion, L3-based merges, etc.) → their respective upstream licenses and model cards (typically Apache-2.0 or Llama-3-based)
- Meta Llama models → Llama Community License and Acceptable Use Policy (available at https://llama.meta.com/llama3/use-policy)
- Strictly prohibited uses (regardless of model):
- Generating harmful, deceptive, illegal, or exploitative content
- Psychological manipulation, coercion, or disinformation
- Military, surveillance, or prohibited commercial applications
- No models or derivatives are hosted or linked here — obtain them ethically from trusted sources only. You are solely responsible for all outputs.
- The authors provide no warranty and accept no liability for downstream use.
This repository is licensed under CC-BY-4.0 (LICENSE), allowing reuse with attribution. Individual artifacts inherit Zenodo's open licenses.
matthew@slashreboot.com, @slashreboot on X, https://slashreboot.com
If you use this work, please cite the individual papers via their DOIs.