Tag: Prompt Engineering
All the articles with the tag "Prompt Engineering".
-
Concise Reasoning, Big Gains: Pruning Long Reasoning Trace with Difficulty-Aware Prompting
This paper proposes Difficulty-Aware Prompting (DAP), which dynamically adjusts reasoning-trace length to build the compact LiteCoT dataset (100K samples, averaging 720 tokens); the Liter models trained on it significantly outperform conventional long-CoT approaches on multiple reasoning benchmarks while greatly reducing training and inference cost.
-
To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning
This paper demonstrates through meta-analysis and experiments that Chain-of-Thought (CoT) prompting significantly improves large language model performance on math and symbolic reasoning tasks, but offers limited benefit on non-symbolic tasks and underperforms tool-augmented approaches.
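As a rough illustration of the distinction the paper draws, the snippet below contrasts a direct prompt with a chain-of-thought prompt for an arithmetic question; the question, exemplar, and wording are hypothetical, and the strings would be passed to whatever LLM client you use.

```python
# Minimal sketch of direct vs. chain-of-thought prompting (hypothetical example).

QUESTION = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Direct prompting: ask for the answer only.
direct_prompt = f"Q: {QUESTION}\nA:"

# Chain-of-thought prompting: a worked exemplar plus an explicit cue
# to reason step by step before answering.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Let's think step by step. He starts with 5 balls. "
    "2 cans of 3 balls is 6 more balls. 5 + 6 = 11. The answer is 11.\n\n"
    f"Q: {QUESTION}\n"
    "A: Let's think step by step."
)

if __name__ == "__main__":
    print(direct_prompt)
    print("---")
    print(cot_prompt)
```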
-
Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
This paper presents the first systematic survey of progress on efficient reasoning for large language models, categorizing model-based, output-based, and prompt-based approaches and examining strategies for curbing the "overthinking" phenomenon to improve computational efficiency while preserving reasoning ability.
-
Beyond the Last Answer: Your Reasoning Trace Uncovers More than You Think
This paper proposes segmenting a large language model's reasoning trace into sub-thoughts, generating multiple reasoning paths from the intermediate states, and aggregating the final answers by taking their mode; this significantly improves accuracy on mathematical reasoning tasks (by up to 13%) and reveals a correlation between answer consistency and correctness.
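The aggregation step described above amounts to a majority vote over the answers produced by the branched reasoning paths. A minimal sketch follows; the branching from sub-thought prefixes is omitted, and the candidate answers are hypothetical.

```python
from collections import Counter

def aggregate_by_mode(answers: list[str]) -> str:
    """Return the most frequent answer among candidate completions."""
    counts = Counter(a.strip() for a in answers if a.strip())
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical final answers, e.g. one per reasoning path branched
# from successive sub-thought prefixes of the original trace.
candidates = ["42", "42", "41", "42", "36"]
print(aggregate_by_mode(candidates))  # -> "42"
```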
-
This paper proposes PFT, a position-ID manipulation method that exposes and mitigates LLMs' reliance on shortcuts when learning role separation, improving robustness and safety while preserving performance.