Tag: Interpretability
All the articles with the tag "Interpretability".
-
A Statistical Case Against Empirical Human-AI Alignment
This position paper argues against forward empirical human-AI alignment on the grounds of statistical biases and anthropocentric limitations, advocating instead for prescriptive and backward alignment approaches that ensure transparency and minimize bias, supported by a case study on language model decoding strategies.
-
Talking Heads: Understanding Inter-layer Communication in Transformer Language Models
This paper investigates inter-layer communication in Transformer LMs by identifying low-rank communication channels via SVD, and demonstrates their causal role in prompt sensitivity through interventions that significantly improve performance on context retrieval tasks such as the Laundry List task.
-
Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge
Using a cross-task gradient tracing tool, this paper reveals that mixed training teaches knowledge by increasing the number and importance of shared parameters and concentrating them in key attention heads, thereby improving the generalization of fact recall in language models.
-
Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs
Using layer-wise context masking and cross-task patching, this paper finds evidence for an "internal chain-of-thought" in large language models, whereby the subtasks of a composite task are learned and executed in sequence at different network depths, improving model transparency and opening new paths for instruction-level behavior control.
-
Steering LLM Reasoning Through Bias-Only Adaptation
By training steering vectors, this paper tests the hypothesis that reasoning ability already lies latent in large language models, approaching and even exceeding full-model fine-tuning on mathematical reasoning tasks with far greater parameter efficiency.