eBPF-based GPU causal observability agent
Updated Apr 4, 2026 - Go
A project to improve the adaptability of Large Language Models (LLMs) by examining and optimizing how factual associations are stored within autoregressive transformer models. The emphasis is on locating and editing the sites where factual associations are stored, so that models retain current and relevant information without requiring extensive retraining.
Mechanistic analysis of a GPT-2–style model exploring the compositionality gap in transformers. Using the Logit Lens and Causal Tracing, the study identifies a deep-layer bottleneck and overcomes it through dataset enhancement, addressing the Compositionality Gap (NeurIPS 2024).
Causal intervention framework for mechanistic interpretability research. Implements activation patching methodology for identifying causally important components in transformer language models.
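The activation patching methodology mentioned above can be illustrated with a minimal sketch: run a model on a "clean" input, cache an intermediate activation, then overwrite that activation during a run on a "corrupted" input and measure how much of the clean output is restored. The toy two-layer network, the `forward` helper, and the restoration score below are all hypothetical illustrations, not the repository's actual API.

```python
import numpy as np

# Toy two-layer network standing in for a transformer component.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x, patch=None):
    """Forward pass; `patch` = (indices, values) overwrites hidden units."""
    h = np.tanh(x @ W1)           # hidden activation: the candidate patch site
    if patch is not None:
        idx, vals = patch
        h = h.copy()
        h[idx] = vals             # causal intervention: splice in clean activations
    return h @ W2, h

clean_x = np.ones(4)
corrupt_x = -np.ones(4)

clean_out, clean_h = forward(clean_x)      # cache clean activations
corrupt_out, _ = forward(corrupt_x)

# Patch a subset of hidden units, then all of them, into the corrupted run.
half = np.arange(4)
half_out, _ = forward(corrupt_x, patch=(half, clean_h[half]))
full_out, _ = forward(corrupt_x, patch=(np.arange(8), clean_h))

def restoration(out):
    """Fraction of the corrupt-to-clean output gap recovered by the patch."""
    return np.linalg.norm(out - corrupt_out) / np.linalg.norm(clean_out - corrupt_out)

# Patching every hidden unit restores the clean output exactly (score 1.0);
# partial patches quantify how much causal signal each subset carries.
print(f"full-patch restoration: {restoration(full_out):.2f}")  # 1.00
```

In real interpretability pipelines the same loop runs over layers and token positions, producing a heatmap of which components causally mediate the model's behavior.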