Posts
All the articles I've posted.
-
Task-Oriented Semantic Communication in Large Multimodal Models-based Vehicle Networks
This paper proposes a task-oriented semantic communication framework for LMM-based vehicle AI: it uses LLaVA with Semantic Matching for efficient image slicing and Fusion Attention-based power allocation to prioritize transmission of critical data, achieving significant accuracy improvements (up to 33.1% at low SNR) on traffic VQA tasks.
-
General-Reasoner: Advancing LLM Reasoning Across All Domains
This paper proposes General-Reasoner, which significantly improves large language models' performance on reasoning tasks across many domains by combining zero reinforcement learning with a high-quality cross-domain dataset and a generative-model-based verifier, while preserving effectiveness on mathematical reasoning.
-
SoLoPO: Unlocking Long-Context Capabilities in LLMs via Short-to-Long Preference Optimization
SoLoPO decomposes long-context preference optimization into short-context preference optimization and short-to-long reward alignment, significantly improving large language models' performance and training efficiency on long-context tasks while preserving their short-context capabilities.
-
You Do Not Fully Utilize Transformer's Representation Capacity
This paper proposes Layer-Integrated Memory (LIMe), which integrates key-value representations from all preceding layers via a learned cross-layer routing mechanism, substantially alleviating representation collapse in Transformers and achieving faster convergence and higher accuracy in language modeling, reasoning tasks, and deep networks.
-
When Less Language is More: Language-Reasoning Disentanglement Makes LLMs Better Multilingual Reasoners
This paper proposes a training-free intervention that disentangles language and reasoning by removing language-specific representations from large language models at inference time, significantly improving multilingual reasoning performance, especially for mid- and low-resource languages, while revealing a negative correlation between language signals and reasoning accuracy.