Tag: Continual Learning
All the articles with the tag "Continual Learning".
-
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning
This paper proposes the Perturb-and-Merge (P&M) framework, which applies task-vector perturbation during training and convex-combination model merging at inference, combined with LoRA for parameter-efficient continual learning; it significantly mitigates catastrophic forgetting and improves performance on multiple benchmark datasets.
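The inference-time merging step amounts to a convex combination of task-specific parameters. Below is a minimal, generic sketch of that idea only; the merging coefficients, the training-time task-vector perturbation, and the LoRA integration are specific to the paper, and `merge_state_dicts` is a hypothetical helper, not the authors' implementation.

```python
# Hypothetical sketch: convex combination of task-specific model weights,
# illustrating generic "merge at inference" (not the authors' P&M code).
import torch

def merge_state_dicts(state_dicts, weights):
    """Merge state_dicts with non-negative weights that sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-6, "convex combination needs weights summing to 1"
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Toy usage: equal-weight merge of two task models.
m1, m2 = torch.nn.Linear(4, 2), torch.nn.Linear(4, 2)
model = torch.nn.Linear(4, 2)
model.load_state_dict(merge_state_dicts([m1.state_dict(), m2.state_dict()], [0.5, 0.5]))
```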
-
Context-Free Synthetic Data Mitigates Forgetting
This paper proposes a Context-Free Synthetic data (CFS) method that generates unconditional samples and combines the fine-tuning and pretraining losses to mitigate catastrophic forgetting of large language models in data-agnostic settings; experiments on Olmo-1B and R1-Distill-Llama-8B validate its effectiveness.
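A minimal sketch of the loss mixing described above, assuming a Hugging Face-style causal LM that returns `.loss` when labels are supplied; `combined_loss` and `lambda_cfs` are illustrative names, and the paper's exact weighting and sampling of synthetic data may differ.

```python
# Hypothetical sketch: mix a supervised fine-tuning loss with a language-modeling
# loss on synthetic, unconditionally generated text (not the paper's CFS code).
def combined_loss(model, ft_batch, synthetic_batch, lambda_cfs=0.5):
    """Weighted sum of the task fine-tuning loss and an LM loss on synthetic data."""
    ft_loss = model(**ft_batch).loss            # supervised fine-tuning objective
    lm_loss = model(**synthetic_batch).loss     # next-token loss on context-free samples
    return ft_loss + lambda_cfs * lm_loss
```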
-
Graceful Forgetting in Generative Language Models
This paper proposes the Learning With Forgetting (LWF) framework, which achieves graceful forgetting during fine-tuning of generative language models through self-generated knowledge, Fisher-information-matrix-weighted forgetting-confidence computation, and a periodic forgetting strategy; experiments show significant performance gains on most domain-specific question-answering tasks.
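The Fisher-information weighting builds on the standard diagonal (empirical) Fisher estimate. The sketch below shows that estimate only, assuming a generic PyTorch model, data loader, and loss function; it does not reproduce LWF's forgetting-confidence formula.

```python
# Hypothetical sketch: diagonal empirical Fisher, often used to weight how
# "important" each parameter is (the LWF confidence itself is defined in the paper).
import torch

def diagonal_fisher(model, data_loader, loss_fn):
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for inputs, targets in data_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2   # squared gradients approximate the diagonal Fisher
    return {n: f / max(len(data_loader), 1) for n, f in fisher.items()}
```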
-
Analyzing Mitigation Strategies for Catastrophic Forgetting in End-to-End Training of Spoken Language Models
This paper studies catastrophic forgetting in end-to-end training of spoken language models (SLMs) by evaluating three mitigation strategies: model merging, discounting the LoRA scaling factor, and experience replay. Experience replay proves most effective, and combining it with the other methods yields further gains.
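Experience replay itself is simple to sketch: keep a bounded buffer of earlier-task examples and mix a few into each new-task batch. The buffer size, replay count, and `ReplayBuffer` class below are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch of experience replay with a fixed-capacity buffer.
import random

class ReplayBuffer:
    def __init__(self, capacity=1000):
        self.capacity, self.buffer = capacity, []

    def add(self, example):
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:                                   # random replacement once the buffer is full
            self.buffer[random.randrange(self.capacity)] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

def mixed_batch(new_batch, buffer, replay_k=4):
    """Append a few replayed old-task examples to the current new-task batch."""
    return list(new_batch) + buffer.sample(replay_k)
```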
-
Task-Core Memory Management and Consolidation for Long-term Continual Learning
This paper introduces Long-CL, a human-memory-inspired framework for long-term continual learning that leverages task-core memory management and selective sample consolidation; it outperforms baselines by 7.4% and 6.5% AP on two new benchmarks, MMLongCL-Bench and TextLongCL-Bench, respectively, while mitigating catastrophic forgetting.