<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
<title>K. Takahashi — Research Updates</title>
<link>https://kadubon.github.io/github.io/</link>
<description>Research preprints and theoretical works by K. Takahashi</description>
<language>en</language>
<lastBuildDate>Tue, 07 Apr 2026 13:38:08 +0900</lastBuildDate>
<atom:link rel="self" type="application/rss+xml" href="https://kadubon.github.io/github.io/feed.xml" />
<item>
<title>Standing-Layer Honest Public Standing Dynamics for Research Claims under Observable-Only, No-Meta Governance</title>
<link>https://kadubon.github.io/github.io/works.html#2026-04-07-standing-layer-honest-public-standing-dynamics-f-19447443</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-04-07-standing-layer-honest-public-standing-dynamics-f-19447443</guid>
<pubDate>Tue, 07 Apr 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>research claims</category>
<category>public standing dynamics</category>
<category>observable-only governance</category>
<category>no-meta governance</category>
<category>challengeability</category>
<category>restoration memory</category>
<category>replayable frontier</category>
<category>finite verification capacity</category>
<category>retained memory</category>
<category>public accountability</category>
<category>standing states</category>
<category>autonomous research systems</category>
<source url="https://doi.org/10.5281/zenodo.19447443">10.5281/zenodo.19447443</source>
<description><![CDATA[
This preprint develops a first-principles theory of honest public standing dynamics for research claims under observable-only and no-meta governance. It analyzes how public claims move among standing states under declared interfaces, replayable frontiers, and explicit service, reserve, and retained-memory constraints, and derives boundary and restoration results for challengeability, re-entry, overload, and exploration slack.
Preprint | DOI: 10.5281/zenodo.19447443
]]></description>
</item>
<item>
<title>A Typed, Dynamic, No-Meta Theory of Autonomous Research Claim Certification and Release</title>
<link>https://kadubon.github.io/github.io/works.html#2026-04-05-a-typed-dynamic-no-meta-theory-of-autonomous-res-19427818</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-04-05-a-typed-dynamic-no-meta-theory-of-autonomous-res-19427818</guid>
<pubDate>Sun, 05 Apr 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>autonomous research systems</category>
<category>claim certification</category>
<category>claim release</category>
<category>no-meta governance</category>
<category>observable-only governance</category>
<category>public accountability</category>
<category>typed transcripts</category>
<category>certification pipelines</category>
<category>fail-closed verification</category>
<category>provenance records</category>
<category>replay outcomes</category>
<category>finite verification capacity</category>
<source url="https://doi.org/10.5281/zenodo.19427818">10.5281/zenodo.19427818</source>
<description><![CDATA[
This preprint develops a first-principles typed dynamic theory for certifying and releasing research claims produced by autonomous research systems under finite verification capacity and public accountability constraints. It formalizes public state, authority algebra, typed transcripts, fail-closed certification memory, and a release layer with versioned units and support-ledger accounting under observable-only and no-meta governance.
Preprint | DOI: 10.5281/zenodo.19427818
]]></description>
</item>
<item>
<title>When Should a Local Agent Act, Assist, Verify, Withdraw, or Exit? A Certified Local Micro-Theory of Open-Task Participation</title>
<link>https://kadubon.github.io/github.io/works.html#2026-04-03-when-should-a-local-agent-act-assist-verify-with-19394600</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-04-03-when-should-a-local-agent-act-assist-verify-with-19394600</guid>
<pubDate>Fri, 03 Apr 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>multi-agent systems</category>
<category>local agent participation</category>
<category>open-task participation</category>
<category>decentralized decision making</category>
<category>auditable AI</category>
<category>agent verification</category>
<category>open-world agents</category>
<category>human-AI coordination</category>
<category>verifier portfolios</category>
<category>authenticated snapshots</category>
<category>certified uncertainty</category>
<category>agentic AI</category>
<category>task allocation</category>
<category>decentralized control</category>
<category>local micro-theory</category>
<category>participation governance</category>
<source url="https://doi.org/10.5281/zenodo.19394600">10.5281/zenodo.19394600</source>
<description><![CDATA[
This preprint develops a certified local micro-theory of open-task participation in agent societies. It formalizes when an authenticated local agent should act, assist, verify, withdraw, or exit under public evidence, certified uncertainty, attribution, and implementability constraints.
Preprint | DOI: 10.5281/zenodo.19394600
]]></description>
</item>
<item>
<title>Constitutional Observable Invention without Meta-Evaluators</title>
<link>https://kadubon.github.io/github.io/works.html#2026-04-01-constitutional-observable-invention-without-meta-19363526</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-04-01-constitutional-observable-invention-without-meta-19363526</guid>
<pubDate>Wed, 01 Apr 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>no-meta</category>
<category>observable-only</category>
<category>autonomous discovery</category>
<category>self-modification</category>
<category>recursive self-improvement</category>
<category>evaluator redesign</category>
<category>target refinement</category>
<category>semantic target objects</category>
<category>robust public risk</category>
<category>replay semantics</category>
<category>replay certification</category>
<category>constructive search</category>
<category>observable invention</category>
<category>constitutional drift</category>
<category>constitutional atlases</category>
<category>compiler-aware generator upgrades</category>
<category>predictive target geometry</category>
<category>target morphisms</category>
<category>time-filtered fallback</category>
<category>public evidence</category>
<category>auditability</category>
<category>long-lived intelligent systems</category>
<category>self-extension</category>
<category>AGI</category>
<source url="https://doi.org/10.5281/zenodo.19363526">10.5281/zenodo.19363526</source>
<description><![CDATA[
This preprint develops a constructive no-meta, observable-only framework for autonomous discovery, evaluator redesign, target expansion, and self-modification under public evidence alone. It formalizes robust public risk, replay-certified comparison under constitutional extension, and auditable observable invention without hidden states or privileged meta-evaluators.
Preprint | DOI: 10.5281/zenodo.19363526
]]></description>
</item>
<item>
<title>Observer-Modifying Contagion on Networks</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-31-observer-modifying-contagion-on-networks-19342966</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-31-observer-modifying-contagion-on-networks-19342966</guid>
<pubDate>Tue, 31 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>observer-modifying contagion</category>
<category>network contagion</category>
<category>self-concealment</category>
<category>diagnosability</category>
<category>internal blindness</category>
<category>external recovery</category>
<category>delayed audit</category>
<category>comparison of experiments</category>
<category>Blackwell ordering</category>
<category>finite-horizon certificates</category>
<category>compositional certificate framework</category>
<category>witness lineages</category>
<category>persistence on networks</category>
<category>mutation robustness</category>
<category>active-support counts</category>
<category>fail-closed semantics</category>
<category>accountable containment</category>
<category>AI safety</category>
<category>information propagation</category>
<category>semantic hazard</category>
<source url="https://doi.org/10.5281/zenodo.19342966">10.5281/zenodo.19342966</source>
<description><![CDATA[
This preprint develops a finite-horizon certificate framework for observer-modifying contagion on networks, where exposure can also change later diagnosability and auditability. It formalizes self-concealment, internal blindness, external recovery, delayed audit, and fail-closed containment claims under explicit comparison semantics.
Preprint | DOI: 10.5281/zenodo.19342966
]]></description>
</item>
<item>
<title>Classification-Induced Cognitive Drift</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-29-classification-induced-cognitive-drift-19306514</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-29-classification-induced-cognitive-drift-19306514</guid>
<pubDate>Sun, 29 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>cognitive drift</category>
<category>reflexive classification</category>
<category>interactive kinds</category>
<category>looping effects</category>
<category>label feedback</category>
<category>performative prediction</category>
<category>strategic classification</category>
<category>algorithmic classification</category>
<category>human-AI interaction</category>
<category>evaluator drift</category>
<category>classifier state logging</category>
<category>contradiction-triggered revision</category>
<category>partial identification</category>
<category>causal inference</category>
<category>observational comparison</category>
<category>repeated-measures design</category>
<category>staggered rollout</category>
<category>interference-aware evaluation</category>
<category>auditability</category>
<category>deployment governance</category>
<category>transportability</category>
<category>deployment safety</category>
<category>AI safety</category>
<category>decision support systems</category>
<source url="https://doi.org/10.5281/zenodo.19306514">10.5281/zenodo.19306514</source>
<description><![CDATA[
This preprint develops a first-principles calculus for classification-induced cognitive drift in reflexive human and AI settings. It formalizes how disclosed classifications can change targets, evaluators, and later evidence under replay, repeated-measures, rollout, and observational comparison regimes.
Preprint | DOI: 10.5281/zenodo.19306514
]]></description>
</item>
<item>
<title>Record Absence and Preference Reorganization on a Fixed Comparison Frame</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-28-record-absence-and-preference-reorganization-on--19272154</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-28-record-absence-and-preference-reorganization-on--19272154</guid>
<pubDate>Sat, 28 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>record absence</category>
<category>preference reorganization</category>
<category>fixed comparison frame</category>
<category>legacy labels</category>
<category>ontology change</category>
<category>baseline robustness</category>
<category>admissibility preorder</category>
<category>certificate-based comparison</category>
<category>block-local theorem</category>
<category>boundary-state fibers</category>
<category>residual coupling</category>
<category>auditable AI</category>
<category>provenance</category>
<category>record-grounded update</category>
<category>corrective disclosure</category>
<category>closure asymmetry</category>
<category>support graphs</category>
<category>belief revision</category>
<category>default reasoning</category>
<category>retrieval-augmented generation</category>
<source url="https://doi.org/10.5281/zenodo.19272154">10.5281/zenodo.19272154</source>
<description><![CDATA[
This preprint develops a certificate-based comparison theory for how record absence changes preference over legacy claims on a fixed comparison frame. It formalizes exact and approximate absence, corrective-disclosure, and closure-asymmetry results under auditable local certificates and baseline admissibility constraints.
Preprint | DOI: 10.5281/zenodo.19272154
]]></description>
</item>
<item>
<title>A Symbolically Effective Contract Calculus for Gluing-Coherent Semantic Translation</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-26-a-symbolically-effective-contract-calculus-for-g-19231780</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-26-a-symbolically-effective-contract-calculus-for-g-19231780</guid>
<pubDate>Thu, 26 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>semantic translation</category>
<category>contract calculus</category>
<category>accountable semantics</category>
<category>symbolic verification</category>
<category>abstract interpretation</category>
<category>gluing coherence</category>
<category>aspect semantics</category>
<category>semantic audit</category>
<category>exact audit</category>
<category>native collapse</category>
<category>round-trip accountability</category>
<category>symbolic entailment</category>
<category>bridge contracts</category>
<category>compositional semantics</category>
<category>subset semantics</category>
<category>deployment bottleneck</category>
<category>rate-distortion</category>
<category>decision guarantees</category>
<source url="https://doi.org/10.5281/zenodo.19231780">10.5281/zenodo.19231780</source>
<description><![CDATA[
This preprint develops a symbolically effective contract calculus for semantic translation under gluing-coherent aspect semantics. It formalizes exact audit, accountability, native collapse, and round-trip obligations with symbolic checks and deployable decision guarantees.
Preprint | DOI: 10.5281/zenodo.19231780
]]></description>
</item>
<item>
<title>Self-Concealing Information and Observer-Modifying Dynamics</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-22-self-concealing-information-and-observer-modifyi-19161562</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-22-self-concealing-information-and-observer-modifyi-19161562</guid>
<pubDate>Sun, 22 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>agentic AI</category>
<category>observer-modifying information</category>
<category>self-concealing information</category>
<category>internal blindness</category>
<category>measurable-state theory</category>
<category>comparison of experiments</category>
<category>statistical experiments</category>
<category>Le Cam deficiency</category>
<category>testing deficiency</category>
<category>total variation distance</category>
<category>Markov kernels</category>
<category>hidden-state dynamical systems</category>
<category>controlled stochastic processes</category>
<category>POMDP</category>
<category>sequential detection</category>
<category>changepoint detection</category>
<category>external anchors</category>
<category>structural insulation</category>
<category>restricted interfaces</category>
<category>delayed audit</category>
<category>recurring audit</category>
<category>information-flow control</category>
<category>auditability</category>
<source url="https://doi.org/10.5281/zenodo.19161562">10.5281/zenodo.19161562</source>
<description><![CDATA[
This preprint develops a measurable-state theory for observer-modifying and self-concealing information in hidden-state controlled systems. It formalizes when diagnosis degrades or recovers under internal blindness, external anchors, structural insulation, and delayed or recurring audit.
Preprint | DOI: 10.5281/zenodo.19161562
]]></description>
</item>
<item>
<title>Counterfactually Auditable Lifecycle Certification for Autonomous Agents</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-18-counterfactually-auditable-lifecycle-certificati-19089134</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-18-counterfactually-auditable-lifecycle-certificati-19089134</guid>
<pubDate>Wed, 18 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>autonomous agents</category>
<category>AI agent</category>
<category>lifecycle certification</category>
<category>counterfactual auditability</category>
<category>direct move inference</category>
<category>adaptive sentinel monitoring</category>
<category>e-process</category>
<category>anytime-valid inference</category>
<category>forecast transport</category>
<category>budget-feasible deployment</category>
<category>off-policy evaluation</category>
<category>causal inference</category>
<category>tool-use agents</category>
<category>agent lifecycle management</category>
<category>interface stock</category>
<source url="https://doi.org/10.5281/zenodo.19089134">10.5281/zenodo.19089134</source>
<description><![CDATA[
This preprint develops a conservative lifecycle-certification framework for autonomous agents under finite routing, monitoring, and deployment budgets. It formalizes counterfactually auditable admission, retirement, monitoring, and deployment rules using direct move inference, replay support, and anytime-valid sentinel monitoring.
Preprint | DOI: 10.5281/zenodo.19089134
]]></description>
</item>
<item>
<title>Recursive Self-Improvement Stability under Endogenous Yardstick Drift</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-16-recursive-self-improvement-stability-under-endog-19044634</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-16-recursive-self-improvement-stability-under-endog-19044634</guid>
<pubDate>Mon, 16 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>recursive self-improvement</category>
<category>endogenous yardstick drift</category>
<category>evaluator drift</category>
<category>self-modifying systems</category>
<category>replayable interfaces</category>
<category>stability</category>
<category>delayed audit</category>
<category>delayed challenge</category>
<category>shadow certification</category>
<category>stable gain</category>
<category>admissibility</category>
<category>AI safety</category>
<category>AI governance</category>
<category>governance safety</category>
<category>error debt</category>
<category>contradiction preservation</category>
<category>semantic retention</category>
<category>semantic volume</category>
<category>proof-carrying</category>
<category>verification backlog</category>
<category>no-meta</category>
<category>benchmark decay</category>
<category>autonomous agents</category>
<category>AI</category>
<category>AGI</category>
<source url="https://doi.org/10.5281/zenodo.19044634">10.5281/zenodo.19044634</source>
<description><![CDATA[
This preprint develops an interface theory for recursive self-improvement under endogenous yardstick drift, where a system changes its own evaluator, benchmark, memory, and verification process. It formalizes replayable conditions for distinguishing claimed improvement from stable improvement under delayed audit, evaluator drift, verification backlog, and governance safety constraints.
Preprint | DOI: 10.5281/zenodo.19044634
]]></description>
</item>
<item>
<title>Sovereign Epistemic Commons under No-Meta Governance</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-13-sovereign-epistemic-commons-under-no-meta-govern-18997828</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-13-sovereign-epistemic-commons-under-no-meta-govern-18997828</guid>
<pubDate>Fri, 13 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>epistemic commons</category>
<category>no-meta</category>
<category>autonomous agents</category>
<category>multi-agent systems</category>
<category>shared memory</category>
<category>shared knowledge substrate</category>
<category>agent society</category>
<category>contradiction preservation</category>
<category>contradiction reserve</category>
<category>agent memory governance</category>
<category>provenance</category>
<category>observability</category>
<category>AI governance</category>
<category>retrieval-augmented generation</category>
<category>RAG</category>
<category>distributed knowledge systems</category>
<category>asynchronous systems</category>
<category>hidden common causes</category>
<category>cartel capture</category>
<category>latent cartel risk</category>
<category>anti-capture</category>
<category>controlled exit</category>
<category>fork governance</category>
<category>garbage collection</category>
<category>recursive regeneration</category>
<category>endogenous contamination</category>
<category>recursive corpora</category>
<category>ontology drift</category>
<category>typed memory lanes</category>
<category>accessibility</category>
<category>interoperability</category>
<category>knowledge governance</category>
<category>AI</category>
<category>AGI</category>
<source url="https://doi.org/10.5281/zenodo.18997828">10.5281/zenodo.18997828</source>
<description><![CDATA[
This preprint develops a governance theory for shared epistemic commons maintained by autonomous agents under no-meta constraints. It formalizes observable rules for preserving answerability, contradiction handling, anti-capture slack, and controlled exit under contamination, provenance uncertainty, and recursive regeneration.
Preprint | DOI: 10.5281/zenodo.18997828
]]></description>
</item>
<item>
<title>Oversight-Centered Metrology and Control for Agentic Systems: Costly Interrupt Channels, Claim Margins, and Deployment-Relevant Evaluation</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-12-oversight-centered-metrology-and-control-for-age-18973272</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-12-oversight-centered-metrology-and-control-for-age-18973272</guid>
<pubDate>Thu, 12 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>agentic systems evaluation</category>
<category>oversight-centered metrology</category>
<category>costly interrupt channels</category>
<category>deployment-relevant evaluation</category>
<category>human-AI oversight</category>
<category>workflow-level estimands</category>
<category>claim margins</category>
<category>review congestion</category>
<category>audit gaming</category>
<category>safe control</category>
<category>transportability</category>
<category>post-deployment monitoring</category>
<source url="https://doi.org/10.5281/zenodo.18973272">10.5281/zenodo.18973272</source>
<description><![CDATA[
This preprint develops an oversight-centered metrology and control theory for agentic systems in real workflows, yielding deployment-relevant evaluation criteria that treat human review, automated checks, delayed labels, and external auditing as costly interrupt channels rather than privileged oracles. It formalizes workflow-level estimands, claim-justification margins, transport mismatch, congestion, routing error, audit gaming, redundancy, and safe control under delay, irreversibility, and ambiguity budgets.
Preprint | DOI: 10.5281/zenodo.18973272
]]></description>
</item>
<item>
<title>AI Benchmark Half-Life in Recursive Corpora: A Theory of Validity Decay under Semantic Leakage and Regeneration</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-11-ai-benchmark-half-life-in-recursive-corpora-a-th-18954286</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-11-ai-benchmark-half-life-in-recursive-corpora-a-th-18954286</guid>
<pubDate>Wed, 11 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>AI benchmark half-life</category>
<category>recursive corpora</category>
<category>semantic leakage</category>
<category>validity decay</category>
<category>benchmark contamination</category>
<category>construct validity</category>
<category>discriminative power</category>
<category>dynamic benchmarks</category>
<category>partial identification</category>
<category>sequential monitoring</category>
<category>lineage observability</category>
<category>model metrology</category>
<source url="https://doi.org/10.5281/zenodo.18954286">10.5281/zenodo.18954286</source>
<description><![CDATA[
This preprint develops a theory of AI benchmark half-life in recursive corpora under semantic leakage and regeneration, yielding validity-decay bounds and monitoring rules for evaluation systems whose items and solution traces re-enter public data. It models benchmark validity through discriminative power and construct validity, and derives jump-aware lifetime bounds, partial-identification results, portfolio design criteria, and safe sequential control under ambiguity and partial observability.
Preprint | DOI: 10.5281/zenodo.18954286
]]></description>
</item>
<item>
<title>When Should Inference Be Split? A Fixed-Budget Theory of Predictable Multi-Agent Advantage under Local Context Ceilings</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-10-when-should-inference-be-split-a-fixed-budget-th-18932509</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-10-when-should-inference-be-split-a-fixed-budget-th-18932509</guid>
<pubDate>Tue, 10 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>fixed-budget inference</category>
<category>multi-agent advantage</category>
<category>local context ceilings</category>
<category>test-time compute allocation</category>
<category>matched single-agent baseline</category>
<category>candidate coverage</category>
<category>selection accuracy</category>
<category>hijack risk</category>
<category>communication fidelity</category>
<category>external memory</category>
<category>collective inference</category>
<category>AI reasoning</category>
<source url="https://doi.org/10.5281/zenodo.18932509">10.5281/zenodo.18932509</source>
<description><![CDATA[
This preprint develops a fixed-budget theory for when inference should be split across multiple agents under local context ceilings, yielding conditions for predictable multi-agent advantage over matched strong single-workspace baselines. It formalizes additive budget accounting across worker inference, routing, communication, memory, and verification, and derives diagnostics for candidate coverage, evaluation-selection accuracy, hijack risk, decomposability, diversity, shared-failure dependence, and communication fidelity.
Preprint | DOI: 10.5281/zenodo.18932509
]]></description>
</item>
<item>
<title>Search Stability under Finite Context: A Minimal Theory of Adequacy Preservation, Compression, and Reset in Long-Running Agents</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-08-search-stability-under-finite-context-a-minimal--18905242</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-08-search-stability-under-finite-context-a-minimal--18905242</guid>
<pubDate>Sun, 08 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>search stability</category>
<category>long-running agents</category>
<category>finite active context</category>
<category>bounded memory</category>
<category>delayed verification</category>
<category>adequacy preservation</category>
<category>lossy compression</category>
<category>hypothesis ecology</category>
<category>reset policy</category>
<category>diagnostic regret</category>
<category>context contamination</category>
<category>auditability</category>
<source url="https://doi.org/10.5281/zenodo.18905242">10.5281/zenodo.18905242</source>
<description><![CDATA[
This preprint develops a minimal theory of search stability for long-running agents under finite active context, delayed verification, and lossy state compression, yielding conditions for preserving at least one operationally adequate hypothesis family over time. It formalizes adequacy preservation, retirement, substitution, branching, compression, and reset decisions under context budgets, and derives threshold results for contamination, shadow retirement, alias hazards, reserve feasibility, and diagnostic regret.
Preprint | DOI: 10.5281/zenodo.18905242
]]></description>
</item>
<item>
<title>Proposal-Veto Balance for Observable-Only Autonomous Intelligence: Stability Thresholds, Identifiability Limits, and Commit-Window Effects</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-06-proposal-veto-balance-for-observable-only-autono-18883290</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-06-proposal-veto-balance-for-observable-only-autono-18883290</guid>
<pubDate>Fri, 06 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>proposal-veto balance</category>
<category>observable-only autonomy</category>
<category>no-meta governance</category>
<category>self-modifying systems</category>
<category>stability thresholds</category>
<category>identifiability limits</category>
<category>commit windows</category>
<category>error debt</category>
<category>positive Harris recurrence</category>
<category>progress-safety frontier</category>
<category>rollback control</category>
<category>long-horizon AI safety</category>
<source url="https://doi.org/10.5281/zenodo.18883290">10.5281/zenodo.18883290</source>
<description><![CDATA[
This preprint analyzes proposal-veto decision dynamics for observable-only autonomous intelligence when true proposal quality is latent and no external meta-controller is available, yielding explicit stability thresholds, identifiability limits, and commit-window trade-offs. It derives conditions for bounded expected error debt, positive Harris recurrence, and geometric divergence without debt-proportional correction, and formalizes progress-safety frontiers, rollback effects, trust-chain amplification, and finite resource constraints.
Preprint | DOI: 10.5281/zenodo.18883290
]]></description>
</item>
<item>
<title>Metrology-Theoretic Epistemics Engine (MTE): Observable-Only Metrology for Long-Horizon Autonomous Intelligence</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-03-metrology-theoretic-epistemics-engine-mte-observ-18845340</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-03-metrology-theoretic-epistemics-engine-mte-observ-18845340</guid>
<pubDate>Tue, 03 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>metrology-theoretic epistemics engine</category>
<category>observable-only metrology</category>
<category>no-meta governance</category>
<category>autonomous intelligence</category>
<category>fail-closed certification</category>
<category>observability credit</category>
<category>equivalence-class collapse rate</category>
<category>risk ledgers</category>
<category>supermartingale controls</category>
<category>deterministic replay</category>
<category>scientific reproducibility</category>
<category>long-horizon AI safety</category>
<source url="https://doi.org/10.5281/zenodo.18845340">10.5281/zenodo.18845340</source>
<description><![CDATA[
This preprint introduces the Metrology-Theoretic Epistemics Engine (MTE), a machine-checkable epistemic governance layer for no-meta observable-only autonomous intelligence, yielding fail-closed criteria for when claimed progress is scientifically credit-bearing. It formalizes deterministic artifact canonicalization, observability credit gates, equivalence-class collapse rate accounting, dual risk ledgers, and supermartingale-style over-credit control with reproducible replay and destructive test obligations.
Preprint | DOI: 10.5281/zenodo.18845340
]]></description>
</item>
<item>
<title>Sovereign Takeoff Engine (STE): Observable-Only Supergrowth Laws for No-Meta Autonomous Intelligence</title>
<link>https://kadubon.github.io/github.io/works.html#2026-03-02-sovereign-takeoff-engine-ste-observable-only-sup-18828900</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-03-02-sovereign-takeoff-engine-ste-observable-only-sup-18828900</guid>
<pubDate>Mon, 02 Mar 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>no-meta autonomous intelligence</category>
<category>observable-only governance</category>
<category>supergrowth laws</category>
<category>deterministic replay</category>
<category>fail-closed certification</category>
<category>audit ledger</category>
<category>e-values</category>
<category>anytime-valid sequential testing</category>
<category>filtration constraints</category>
<category>capability acceleration</category>
<category>physical feasibility envelopes</category>
<category>AI safety verification</category>
<source url="https://doi.org/10.5281/zenodo.18828900">10.5281/zenodo.18828900</source>
<description><![CDATA[
This preprint specifies observable-only supergrowth laws for no-meta autonomous intelligence under deterministic replay and fail-closed certification constraints, yielding auditable capability-acceleration criteria without privileged external judges. It separates artifact-verifiable governance from optional stochastic validity layers and formalizes ledger-anchored progress credits, lineage accumulation bounds, anytime-valid e-value testing, and conservative physical feasibility envelopes.
Preprint | DOI: 10.5281/zenodo.18828900
]]></description>
</item>
<item>
<title>Constitutional Sovereignty Under No-Meta Drift</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-26-constitutional-sovereignty-under-no-meta-drift-18779490</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-26-constitutional-sovereignty-under-no-meta-drift-18779490</guid>
<pubDate>Thu, 26 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>constitutional sovereignty</category>
<category>no-meta governance</category>
<category>autonomous intelligence</category>
<category>self-revision</category>
<category>ontology drift</category>
<category>semantic continuity</category>
<category>constitutional capture</category>
<category>amendment governance</category>
<category>fail-closed verification</category>
<category>falsification tests</category>
<category>thermodynamic limits</category>
<category>auditable AI safety</category>
<source url="https://doi.org/10.5281/zenodo.18779490">10.5281/zenodo.18779490</source>
<description><![CDATA[
This preprint develops a boundary theory of constitutional sovereignty for autonomous intelligence under no-meta and observable-only governance constraints, yielding auditable conditions for self-revision without collapse of semantic continuity, liberty, accountability, or physical viability. It formalizes constitutional influence and capture metrics, proves finite-resource limits on absolute sovereignty, and provides controller-ready recovery, transition, and fail-closed verification laws for implementation.
Preprint | DOI: 10.5281/zenodo.18779490
]]></description>
</item>
<item>
<title>Agenda Sovereignty Under No-Meta Drift</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-25-agenda-sovereignty-under-no-meta-drift-18768899</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-25-agenda-sovereignty-under-no-meta-drift-18768899</guid>
<pubDate>Wed, 25 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>agenda sovereignty</category>
<category>no-meta governance</category>
<category>autonomous intelligence</category>
<category>agenda capture</category>
<category>causal influence metrics</category>
<category>directed information</category>
<category>transfer entropy</category>
<category>thermodynamic limits</category>
<category>strategic opacity</category>
<category>poisoning resilience</category>
<category>viability and recovery</category>
<category>auditable AI safety</category>
<source url="https://doi.org/10.5281/zenodo.18768899">10.5281/zenodo.18768899</source>
<description><![CDATA[
This preprint formulates agenda sovereignty for autonomous intelligence under no-meta, observable-only governance constraints, yielding quantitative influence-capacity envelopes and recovery guarantees against agenda capture. It introduces survival-conditioned causal influence metrics, sovereignty reserves, and overlap-corrected thermodynamic accounting, and derives limits on detectability, poisoning resilience, strategic opacity, delay-throughput uncertainty, and finite-budget trade-offs among semantics, liberty, and sovereignty.
Preprint | DOI: 10.5281/zenodo.18768899
]]></description>
</item>
<item>
<title>Liberty Under No-Meta Drift</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-24-liberty-under-no-meta-drift-18753475</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-24-liberty-under-no-meta-drift-18753475</guid>
<pubDate>Tue, 24 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>no-meta governance</category>
<category>ontology drift</category>
<category>persistent semantics</category>
<category>autonomous intelligence</category>
<category>AI safety</category>
<category>information leakage bounds</category>
<category>thermodynamic limits</category>
<category>semantic identifiability</category>
<category>cryptographic auditing</category>
<category>robust autonomy</category>
<category>multi-agent governance</category>
<category>accountable AI</category>
<source url="https://doi.org/10.5281/zenodo.18753475">10.5281/zenodo.18753475</source>
<description><![CDATA[
This preprint establishes a testable theory of autonomous intelligence under no-meta ontology drift constraints, yielding auditable persistence and leakage-resource bounds without privileged semantic access. It unifies information-theoretic identifiability limits, thermodynamic irreversibility costs, geometric drift structure, and cryptographic accountability conditions for long-horizon AI governance.
Preprint | DOI: 10.5281/zenodo.18753475
]]></description>
</item>
<item>
<title>Audit-Closed AI Scientist Protocol</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-22-audit-closed-ai-scientist-protocol-18728589</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-22-audit-closed-ai-scientist-protocol-18728589</guid>
<pubDate>Sun, 22 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>AI scientist protocol</category>
<category>autonomous scientific discovery</category>
<category>self-driving laboratories</category>
<category>audit-closed governance</category>
<category>transparency log</category>
<category>incorporation certificates</category>
<category>e-processes</category>
<category>sequential inference</category>
<category>adaptive experimentation</category>
<category>drift recovery</category>
<category>reproducibility</category>
<category>Byzantine resilience</category>
<source url="https://doi.org/10.5281/zenodo.18728589">10.5281/zenodo.18728589</source>
<description><![CDATA[
This preprint defines an audit-closed protocol for autonomous scientific discovery in self-driving laboratories under deterministic replay and public-log governance constraints, yielding trustworthy accept-reject-update decisions with always-valid sequential evidence. It integrates typed stochastic observation interfaces, e-process based testing, logged-propensity adaptive experimentation, drift recovery, and certificate-based reproducibility controls.
Preprint | DOI: 10.5281/zenodo.18728589
]]></description>
</item>
<item>
<title>Burden-of-Proof Governance for Bullshit-Task Reduction in Digitally Governed Organizations</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-21-burden-of-proof-governance-for-bullshit-task-red-18721018</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-21-burden-of-proof-governance-for-bullshit-task-red-18721018</guid>
<pubDate>Sat, 21 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>bullshit-task reduction</category>
<category>burden-of-proof governance</category>
<category>digital governance</category>
<category>observable-only</category>
<category>no-meta institutions</category>
<category>auditable decision systems</category>
<category>causal task valuation</category>
<category>MSOR</category>
<category>process mining</category>
<category>proxy bridge identification</category>
<category>substitution contracts</category>
<category>Goodhart robustness</category>
<source url="https://doi.org/10.5281/zenodo.18721018">10.5281/zenodo.18721018</source>
<description><![CDATA[
This preprint defines a burden-of-proof governance framework for reducing non-contributory administrative tasks in digitally governed organizations under observable-only and auditable no-meta constraints, yielding causal, contestable task-value control instead of fixed welfare scoring. It introduces mutable subjective objective registries, tiered causal identification, and reversible hold-experiment-justify-substitute governance loops with solvency and anti-collusion safeguards.
Preprint | DOI: 10.5281/zenodo.18721018
]]></description>
</item>
<item>
<title>State-Aware Safety-Gated Controlled HMM for Online User-Input Signal Estimation in Intervention-Aware Dialogue Agents</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-20-state-aware-safety-gated-controlled-hmm-for-onli-18709678</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-20-state-aware-safety-gated-controlled-hmm-for-onli-18709678</guid>
<pubDate>Fri, 20 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>Y. Dai</dc:creator>
<category>Preprint</category>
<category>controlled hidden Markov model</category>
<category>intervention-aware dialogue agents</category>
<category>online state estimation</category>
<category>uncertainty-aware proxy scores</category>
<category>safety-gated action policy</category>
<category>leakage-safe prediction</category>
<category>fixed-lag online EM</category>
<category>ordinal bounded scores</category>
<category>response missingness modeling</category>
<category>identifiability boundaries</category>
<category>adaptive AI safety</category>
<category>auditable governance</category>
<source url="https://doi.org/10.5281/zenodo.18709678">10.5281/zenodo.18709678</source>
<description><![CDATA[
This preprint develops a safety-gated controlled hidden Markov model for online estimation of bounded proxy user-state signals in intervention-aware dialogue agents under uncertainty and governance constraints, yielding leakage-safe prediction and risk-aware adaptive actions. It separates prompting and response mechanisms, combines pre-turn prediction with post-turn nowcasting, and supports EM-based learning with explicit missingness and identifiability assumptions for auditable AI operation.
Preprint | DOI: 10.5281/zenodo.18709678
]]></description>
</item>
<item>
<title>Operational Deductive Rules for Real-Economy Acceleration in the AI Era</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-19-operational-deductive-rules-for-real-economy-acc-18688712</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-19-operational-deductive-rules-for-real-economy-acc-18688712</guid>
<pubDate>Thu, 19 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>AI economics</category>
<category>real-economy acceleration</category>
<category>capability-to-reality gap</category>
<category>translation bottlenecks</category>
<category>deductive operational rules</category>
<category>observable diagnostics</category>
<category>robust allocation control</category>
<category>event attribution</category>
<category>institutional readiness</category>
<category>physical constraints</category>
<category>policy levers</category>
<category>machine-readable rule registry</category>
<source url="https://doi.org/10.5281/zenodo.18688712">10.5281/zenodo.18688712</source>
<description><![CDATA[
This preprint formalizes operational deductive rules that map observable AI capability-to-reality translation gaps to concrete intervention levers under physical, institutional, and risk constraints, yielding implementable acceleration policies for real-economy growth. It provides machine-readable rule registries and closed-loop protocols linking diagnostics, event attribution, robust allocation, and safety constraints.
Preprint | DOI: 10.5281/zenodo.18688712
]]></description>
</item>
<item>
<title>From AI Capability Growth to Real-Economy Growth: A Semi-Endogenous Model of Physical and Institutional Bottlenecks</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-18-from-ai-capability-growth-to-real-economy-growth-18677068</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-18-from-ai-capability-growth-to-real-economy-growth-18677068</guid>
<pubDate>Wed, 18 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>AI capability growth</category>
<category>semi-endogenous growth</category>
<category>real-economy translation</category>
<category>physical bottlenecks</category>
<category>institutional bottlenecks</category>
<category>information-to-reality gap</category>
<category>hybrid ODE-jump model</category>
<category>bottleneck-switch timing</category>
<category>compute deployment</category>
<category>knowledge production</category>
<category>reflection-adjusted growth</category>
<category>AI economics</category>
<source url="https://doi.org/10.5281/zenodo.18677068">10.5281/zenodo.18677068</source>
<description><![CDATA[
This preprint quantifies how rapid AI capability growth in information space is filtered by physical and institutional constraints, yielding reflection-adjusted semi-endogenous growth laws and bottleneck-switch timing results. It formulates a hybrid ODE-jump model that separates potential algorithmic progress from realized real-economy deployment across compute infrastructure, energy, permitting, and regulatory readiness.
Preprint | DOI: 10.5281/zenodo.18677068
]]></description>
</item>
<item>
<title>Observable-Only Structural-Risk Institutions Without Central Arbitration</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-17-observable-only-structural-risk-institutions-wit-18666605</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-17-observable-only-structural-risk-institutions-wit-18666605</guid>
<pubDate>Tue, 17 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>structural risk institutions</category>
<category>observable-only</category>
<category>no-meta governance</category>
<category>decentralized arbitration</category>
<category>repeated games</category>
<category>coalition deterrence</category>
<category>escrow reversibility</category>
<category>heavy-tail certification</category>
<category>finite-sample guarantees</category>
<category>AI safety institutions</category>
<category>public auditability</category>
<source url="https://doi.org/10.5281/zenodo.18666605">10.5281/zenodo.18666605</source>
<description><![CDATA[
The preprint develops observable-only, no-meta institutional rules for structural risk in competitive AI systems, proving deterrence and non-domination conditions with escrow-based reversibility, coalition-aware repeated-game guarantees, and finite-sample heavy-tail certification under auditable public records.
Preprint | DOI: 10.5281/zenodo.18666605
]]></description>
</item>
<item>
<title>No-Meta Intelligence Under Ontology Drift: Information-Theoretic Limits and Operational Laws for Persistent Semantics</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-16-no-meta-intelligence-under-ontology-drift-inform-18653537</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-16-no-meta-intelligence-under-ontology-drift-inform-18653537</guid>
<pubDate>Mon, 16 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>ontology drift</category>
<category>persistent semantics</category>
<category>no-meta intelligence</category>
<category>information-theoretic limits</category>
<category>semantic persistence</category>
<category>multi-agent recovery</category>
<category>Byzantine ambiguity</category>
<category>probe capacity</category>
<category>geometric modeling</category>
<category>operational laws</category>
<category>adaptive systems</category>
<source url="https://doi.org/10.5281/zenodo.18653537">10.5281/zenodo.18653537</source>
<description><![CDATA[
The preprint develops an information-theoretic and geometric boundary theory for persistent semantics under ontology drift in no-meta adaptive systems, deriving impossibility and phase-boundary results plus auditable operational laws for probe capacity, resource allocation, blackout bursts, Byzantine ambiguity, and long-horizon semantic maintenance.
Preprint | DOI: 10.5281/zenodo.18653537
]]></description>
</item>
<item>
<title>Observable-Only AI Safety from Public Data: Robust Bottleneck Diagnosis with Auditable No-Meta Dynamic Programming, Anytime Confidence Sequences, and Dynamic IQC</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-12-observable-only-ai-safety-from-public-data-robus-18615875</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-12-observable-only-ai-safety-from-public-data-robus-18615875</guid>
<pubDate>Thu, 12 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>observable-only AI safety</category>
<category>public data</category>
<category>robust bottleneck diagnosis</category>
<category>no-meta governance</category>
<category>dynamic programming</category>
<category>anytime confidence sequences</category>
<category>e-processes</category>
<category>partial identification</category>
<category>dynamic IQC</category>
<category>deterministic replay</category>
<category>auditable diagnostics</category>
<source url="https://doi.org/10.5281/zenodo.18615875">10.5281/zenodo.18615875</source>
<description><![CDATA[
The preprint presents an observable-only AI safety framework for robust bottleneck diagnosis from public data, combining no-meta dynamic programming, partial identification, anytime confidence sequences, and dynamic IQC to produce auditable interval diagnostics with fail-closed replay contracts.
Preprint | DOI: 10.5281/zenodo.18615875
]]></description>
</item>
<item>
<title>Quality-Operator Non-Collapse (QONC) for Observable-Certificate Recursive Systems</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-10-quality-operator-non-collapse-qonc-for-observabl-18577140</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-10-quality-operator-non-collapse-qonc-for-observabl-18577140</guid>
<pubDate>Tue, 10 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>QONC</category>
<category>recursive systems</category>
<category>certificate-first safety</category>
<category>observable certificates</category>
<category>anytime validity</category>
<category>delayed-label correction</category>
<category>robust MPC</category>
<category>compositional guarantees</category>
<category>tamper-evident ledger</category>
<category>AI safety</category>
<category>autonomous workflows</category>
<source url="https://doi.org/10.5281/zenodo.18577140">10.5281/zenodo.18577140</source>
<description><![CDATA[
The preprint introduces Quality-Operator Non-Collapse (QONC), a certificate-first safety framework for recursive systems that uses observable and auditable quantities for adaptive validity control, delayed-label risk correction, viability-safe robust MPC, and ledger-accounted replayable governance against quality and liveness collapse.
Preprint | DOI: 10.5281/zenodo.18577140
]]></description>
</item>
<item>
<title>Verifiable Modular Pipeline Contracts for AI and General Composite Systems</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-09-verifiable-modular-pipeline-contracts-for-ai-and-18529100</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-09-verifiable-modular-pipeline-contracts-for-ai-and-18529100</guid>
<pubDate>Mon, 09 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>modular pipelines</category>
<category>verifiable contracts</category>
<category>observable-only</category>
<category>no-meta</category>
<category>fail-closed verifier</category>
<category>compositional guarantees</category>
<category>proxy-to-true risk</category>
<category>drift hardening</category>
<category>signed evidence</category>
<category>composite systems</category>
<source url="https://doi.org/10.5281/zenodo.18529100">10.5281/zenodo.18529100</source>
<description><![CDATA[
The preprint specifies a domain-agnostic, observable-only and no-meta contract framework for modular AI and composite-system pipelines, with deterministic fail-closed verification and progressive certificate profiles for composition guarantees, proxy-to-true risk accounting, and drift hardening.
Preprint | DOI: 10.5281/zenodo.18529100
]]></description>
</item>
<item>
<title>Compute-First Safe LLM Routing Without Meta-Judges: Observable Viability with Information and Energy Budgets</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-06-compute-first-safe-llm-routing-without-meta-judg-18502390</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-06-compute-first-safe-llm-routing-without-meta-judg-18502390</guid>
<pubDate>Fri, 06 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>LLM routing</category>
<category>no-meta</category>
<category>observable viability</category>
<category>rational inattention</category>
<category>information budget</category>
<category>compute budget</category>
<category>energy budget</category>
<category>fail-closed deployment</category>
<category>tamper-evident logging</category>
<category>robust optimization</category>
<source url="https://doi.org/10.5281/zenodo.18502390">10.5281/zenodo.18502390</source>
<description><![CDATA[
The preprint introduces a compute-first safety framework for LLM routing without privileged meta-judges, defining observable viability under information, compute, and energy budgets with offline policy synthesis, fail-closed runtime deployment, and tamper-evident auditing.
Preprint | DOI: 10.5281/zenodo.18502390
]]></description>
</item>
<item>
<title>Stop Recomputing for AI/LLMs: Proof-Carrying Skills for Compute-Saving Inference Reuse</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-05-stop-recomputing-for-ai-llms-proof-carrying-skil-18490939</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-05-stop-recomputing-for-ai-llms-proof-carrying-skil-18490939</guid>
<pubDate>Thu, 05 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>proof-carrying skills</category>
<category>inference reuse</category>
<category>LLMs</category>
<category>compute saving</category>
<category>deterministic checker</category>
<category>no-meta boundary</category>
<category>observable anchors</category>
<category>bounded verification</category>
<category>receipts</category>
<category>replay resistance</category>
<category>OPVM</category>
<source url="https://doi.org/10.5281/zenodo.18490939">10.5281/zenodo.18490939</source>
<description><![CDATA[
The preprint introduces Proof-Carrying Skills, a no-meta framework that reuses verified skill executions to reduce repeated AI/LLM inference cost, using a deterministic bounded checker, observable anchors, gas-metered predicate evaluation, and replay-resistant receipts for fail-closed verification.
Preprint | DOI: 10.5281/zenodo.18490939
]]></description>
</item>
<item>
<title>Evidence-Carrying Cognitive Mesh on DePIN</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-04-evidence-carrying-cognitive-mesh-on-depin-18478743</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-04-evidence-carrying-cognitive-mesh-on-depin-18478743</guid>
<pubDate>Wed, 04 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>DePIN</category>
<category>decentralized compute</category>
<category>evidence-carrying</category>
<category>no-meta</category>
<category>observable-only</category>
<category>content-addressed evidence</category>
<category>provenance</category>
<category>semantic claim graph</category>
<category>verifiable retrieval</category>
<category>adversarial robustness</category>
<source url="https://doi.org/10.5281/zenodo.18478743">10.5281/zenodo.18478743</source>
<description><![CDATA[
The preprint specifies an evidence-carrying cognitive mesh for DePIN-style decentralized compute that sustains locally verifiable capability under observable-only and no-meta constraints, using content-addressed provenance objects and a queryable claim graph built from deterministic web retrieval and auditing pipelines to resist capture and poisoning.
Preprint | DOI: 10.5281/zenodo.18478743
]]></description>
</item>
<item>
<title>When "Good vs. Bad Governance" Is Unidentifiable</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-03-when-good-vs-bad-governance-is-unidentifiable-18465306</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-03-when-good-vs-bad-governance-is-unidentifiable-18465306</guid>
<pubDate>Tue, 03 Feb 2026 00:00:00 +0900</pubDate>
<dc:creator>K. Takahashi</dc:creator>
<category>Preprint</category>
<category>no-meta</category>
<category>governance unidentifiability</category>
<category>observable-only</category>
<category>exit-impossibility</category>
<category>robust progress</category>
<category>observational equivalence</category>
<category>minimax lower bound</category>
<category>contestability</category>
<category>right-to-refuse</category>
<category>control-domain independence</category>
<category>dual-use</category>
<source url="https://doi.org/10.5281/zenodo.18465306">10.5281/zenodo.18465306</source>
<description><![CDATA[
This supplement studies when observable-only, no-meta agents cannot distinguish good from bad governance because mediator implementations are observationally equivalent from local history. It proves an impossibility result for robust progress guarantees under that unidentifiability and derives operational conditions such as contestability, safe refusal, and cross-domain witnesses for breaking silent contract switching.
Preprint | DOI: 10.5281/zenodo.18465306
]]></description>
</item>
<item>
<title>Observation Capture and Operational Capability Non-Expansion</title>
<link>https://kadubon.github.io/github.io/works.html#2026-02-03-observation-capture-and-operational-capability-n-18463798</link>
<guid isPermaLink="true">https://kadubon.github.io/github.io/works.html#2026-02-03-observation-capture-and-operational-capability-n-18463798</guid>