Tag: Representation Learning
All the articles with the tag "Representation Learning".
-
GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance
GuidedQuant improves post-training quantization of large language models by integrating end-loss gradient information and preserving weight dependencies within output channels; combined with the LNQ algorithm, it significantly boosts performance under both weight-only and weight-and-activation quantization, enabling more efficient post-training quantization.
-
Exploring the Role of Diversity in Example Selection for In-Context Learning
This paper proposes Diversity-based In-Context Learning (DICL), which reranks candidate examples with the Maximal Marginal Relevance (MMR) algorithm to balance relevance and diversity, improving or maintaining downstream task performance in roughly 70% of settings across multiple datasets and large language models.
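The MMR reranking step can be sketched as follows. This is a minimal illustration assuming precomputed similarity scores; the function name `mmr_rerank` and the plain-list inputs are hypothetical, and the actual DICL method operates on embedding similarities between the query and candidate in-context examples.

```python
def mmr_rerank(query_sims, pairwise_sims, k, lam=0.5):
    """Select k candidates balancing relevance to the query against
    diversity (dissimilarity to already-selected candidates).

    query_sims:    list, sim(query, candidate_i)
    pairwise_sims: matrix, sim(candidate_i, candidate_j)
    lam:           trade-off; 1.0 = pure relevance, 0.0 = pure diversity
    """
    selected = []
    remaining = list(range(len(query_sims)))
    while remaining and len(selected) < k:
        # MMR score: lam * relevance - (1 - lam) * max similarity to selected
        best = max(
            remaining,
            key=lambda i: lam * query_sims[i]
            - (1 - lam) * max((pairwise_sims[i][j] for j in selected), default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam=0.5`, a highly relevant candidate that is near-duplicated by an already-selected one is penalized, so a less relevant but more diverse candidate can be chosen instead.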
-
Unveiling Language-Specific Features in Large Language Models via Sparse Autoencoders
This paper uses Sparse Autoencoders to identify and manipulate language-specific features in Large Language Models. It introduces a monolinguality metric, demonstrates the context dependency of these features via code-switching, and enhances steering vectors for better control over multilingual generation, while ablation studies reveal their significant language-specific impact.
-
Latent Factor Models Meets Instructions: Goal-conditioned Latent Factor Discovery without Task Supervision
This paper proposes Instruct-LF, which combines the instruction-following ability of LLMs with gradient-based statistical models to achieve goal-conditioned latent factor discovery without task supervision, improving downstream task performance and being preferred in human evaluation.
-
SuperARC: An Agnostic Test for Narrow, General, and Super Intelligence Based On the Principles of Recursive Compression and Algorithmic Probability
This paper proposes the SuperARC test framework, an objective evaluation method for AGI and ASI grounded in the principles of algorithmic probability and Kolmogorov complexity; it shows that recursive compression is equivalent to prediction and exposes the limitations of LLMs.