Posts
All the articles I've posted.
-
MELON: Provable Indirect Prompt Injection Defense via Masked Re-execution and Tool Comparison
MELON introduces a training-free defense against indirect prompt injection attacks on LLM agents: by re-executing the agent with the user input masked and comparing the resulting tool calls, it detects actions that are independent of the user's request, achieving strong attack prevention (0.24% ASR on GPT-4o) and utility preservation (58.78% UA on GPT-4o) compared to existing methods.
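A minimal sketch of the masked re-execution idea, assuming a generic agent interface and an embedding-based tool-call similarity function; all names and the masking prompt here are illustrative, not the paper's code:

```python
# Sketch of MELON-style masked re-execution (hypothetical agent/similarity
# interfaces; not the authors' implementation).

from typing import Callable, List, Tuple

ToolCall = Tuple[str, str]  # (tool_name, serialized_arguments)

def detect_injection(
    run_agent: Callable[[str, str], List[ToolCall]],   # (prompt, tool_output) -> tool calls
    similarity: Callable[[ToolCall, ToolCall], float], # e.g. embedding cosine similarity
    user_prompt: str,
    tool_output: str,                                  # retrieved content, may carry an injection
    masked_prompt: str = "Summarize the provided content.",  # task-neutral mask (assumption)
    threshold: float = 0.8,
) -> bool:
    """Flag tool calls that persist when the user task is masked out.

    If a tool call from the original run also appears in the masked run,
    it cannot have been caused by the user's request, so it is attributed
    to instructions injected through the tool output.
    """
    original_calls = run_agent(user_prompt, tool_output)
    masked_calls = run_agent(masked_prompt, tool_output)
    for call in original_calls:
        if any(similarity(call, m) >= threshold for m in masked_calls):
            return True  # call is independent of the user input -> likely injected
    return False
```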
-
Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge
Using a cross-task gradient tracing tool, this paper reveals that mixed training teaches knowledge and improves the generalization of fact recall in language models by increasing the number and importance of shared parameters and concentrating them in key attention heads.
-
Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning
This paper systematically studies how teacher model choice, granularity, and format in CoT distillation affect the reasoning ability of small language models (SLMs), finding that strong models benefit from fine-grained CoT while weaker models prefer medium granularity, that format has limited impact, and that teacher model capability is not the sole determinant of student performance.
-
MoM: Linear Sequence Modeling with Mixture-of-Memories
The Mixture-of-Memories (MoM) architecture introduces multiple independent memory states with a routing mechanism to enhance memory capacity and reduce interference in linear sequence modeling, achieving significant performance gains over other linear models on recall-intensive tasks and nearing Transformer performance at larger scales while maintaining efficiency.
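A minimal sketch of the mixture-of-memories idea, using simplified linear-attention memory states and a top-k router; the dimensions, update rule, and readout are assumptions for illustration, not the paper's exact formulation:

```python
# Sketch of a Mixture-of-Memories layer: several independent (D x D) memory
# states, a router that activates top-k of them per token, and a weighted
# read that mixes the selected memories.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MoMLayer(nn.Module):
    def __init__(self, d_model: int, n_mem: int = 4, top_k: int = 2):
        super().__init__()
        self.n_mem, self.top_k = n_mem, top_k
        self.router = nn.Linear(d_model, n_mem)
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); each memory is a (d_model, d_model) state
        B, T, D = x.shape
        mem = x.new_zeros(B, self.n_mem, D, D)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        outputs = []
        for t in range(T):
            gate = self.router(x[:, t])                        # (B, n_mem)
            topv, topi = gate.topk(self.top_k, dim=-1)
            w = x.new_zeros(B, self.n_mem).scatter_(1, topi, F.softmax(topv, -1))
            # Update only the routed memories: M += k v^T, weighted by the router
            outer = torch.einsum("bi,bj->bij", k[:, t], v[:, t])
            mem = mem + w[:, :, None, None] * outer[:, None]
            # Read: per-memory readout q^T M, mixed with the same router weights
            read = torch.einsum("bi,bmij->bmj", q[:, t], mem)  # (B, n_mem, D)
            outputs.append((w[:, :, None] * read).sum(1))
        return torch.stack(outputs, dim=1)                     # (B, T, D)
```

Routing updates to a subset of memories is what keeps unrelated contexts from overwriting each other, which is the interference reduction the summary refers to.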
-
UFT: Unifying Supervised and Reinforcement Fine-Tuning
This paper proposes Unified Fine-Tuning (UFT), a framework that integrates supervised and reinforcement fine-tuning through hint-guided exploration and a hybrid objective function; it performs strongly across model scales and reasoning tasks, with a theoretical proof of an exponential improvement in sample complexity.
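A minimal sketch of what a hybrid SFT/RFT objective could look like, assuming a REINFORCE-style reward term and a mixing weight `alpha`; the function names and weighting are illustrative assumptions, not the paper's exact objective:

```python
# Sketch of a UFT-style hybrid loss: a supervised log-likelihood term on a
# hint prefix plus a policy-gradient term on the sampled completion.

import torch

def uft_loss(logprob_hint: torch.Tensor,      # log-probs of hint tokens under the model
             logprob_sample: torch.Tensor,    # log-probs of sampled completion tokens
             reward: torch.Tensor,            # scalar reward for the completion
             baseline: torch.Tensor,          # e.g. batch-mean reward (assumption)
             alpha: float = 0.5) -> torch.Tensor:
    sft_term = -logprob_hint.sum()                          # imitate the hint (SFT)
    rft_term = -(reward - baseline) * logprob_sample.sum()  # REINFORCE on the outcome (RFT)
    return alpha * sft_term + (1 - alpha) * rft_term
```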