Tag: Transformer
All the articles with the tag "Transformer".
-
Mitigate Position Bias in Large Language Models via Scaling a Single Dimension
This paper proposes mitigating position bias in long-context language models by scaling the positional dimensions of the hidden states, and validates the method's effectiveness across multiple models and tasks, notably improving the utilization of information at middle positions on "lost in the middle" benchmarks.
-
Large Language Models are Locally Linear Mappings
This paper proposes a method that uses a detached Jacobian to turn a large language model into a nearly exact locally linear system at a given input point, revealing low-rank semantic structure inside the model and offering a preliminary exploration of output steering, though its generality and practicality remain limited.
-
CAT Merging: A Training-Free Approach for Resolving Conflicts in Model Merging
CAT Merging proposes a training-free framework for multi-task model merging that reduces knowledge conflicts through parameter-specific trimming strategies, significantly improving the merged model's performance on vision, language, and vision-language tasks, with average accuracy gains of 2.5% (ViT-B/32) and 2.0% (ViT-L/14).
-
Competition Dynamics Shape Algorithmic Phases of In-Context Learning
This paper introduces a synthetic sequence-modeling task based on finite mixtures of Markov chains to unify the study of in-context learning (ICL). It identifies four competing algorithms whose interplay explains model behavior and phase transitions, offering insight into ICL's transient nature and broader phenomenology.
-
How does Transformer Learn Implicit Reasoning?
By training Transformer models from scratch in a controlled symbolic environment, this paper reveals a three-stage developmental trajectory of implicit multi-hop reasoning and, using cross-query semantic patching and a cosine-based representation lens, clarifies the link between reasoning ability and clustering in the hidden space, providing new insights for model interpretability.