Tag: Supervised Learning
All the articles with the tag "Supervised Learning".
-
Long-Short Chain-of-Thought Mixture Supervised Fine-Tuning Eliciting Efficient Reasoning in Large Language Models
This paper introduces Long-Short Chain-of-Thought Mixture Supervised Fine-Tuning (LS-Mixture SFT), which combines long- and short-CoT datasets to fine-tune non-reasoning LLMs, achieving a 2.3% average accuracy improvement and a 47.61% reduction in response length on reasoning benchmarks.
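As a rough illustration of the mixing step, here is a minimal sketch that builds a combined SFT corpus from long- and short-CoT examples. The `long_ratio` knob and the (prompt, cot, answer) schema are assumptions for illustration, not details taken from the paper.

```python
import random

def mix_cot_datasets(long_cot, short_cot, long_ratio=0.5, seed=0):
    """Build a mixed SFT corpus from long- and short-CoT examples.

    `long_ratio` (the fraction of long-CoT samples kept) is an assumed
    knob, not a value from the paper.
    """
    rng = random.Random(seed)
    n_long = int(len(long_cot) * long_ratio)
    mixed = rng.sample(long_cot, n_long) + list(short_cot)
    rng.shuffle(mixed)
    return mixed

# Each example is a (prompt, cot, answer) dict -- an assumed schema.
long_cot = [{"prompt": "q1", "cot": "step1 ... step9", "answer": "a1"}]
short_cot = [{"prompt": "q1", "cot": "step1, step2", "answer": "a1"}]
sft_corpus = mix_cot_datasets(long_cot, short_cot, long_ratio=0.5)
```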
-
Small Models, Smarter Learning: The Power of Joint Task Training
Through experiments with small Transformer models on the ListOps dataset, this paper shows that joint task training (e.g., MAX+MED+SUM) significantly lowers learning difficulty, reduces parameter requirements, and steers models toward discovering efficient algorithms based on numeric properties rather than merely memorizing symbol tables.
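For readers unfamiliar with the task format, the sketch below generates and evaluates ListOps-style joint-task data: nested MAX/MED/SUM expressions sharing one evaluator. The generator, the sum-mod-10 convention, and the depth/arity limits are illustrative assumptions, not the paper's exact setup.

```python
import random
from statistics import median

OPS = {"MAX": max, "MED": lambda xs: int(median(xs)), "SUM": lambda xs: sum(xs) % 10}

def gen_expr(rng, depth=2):
    """Generate a nested ListOps-style expression as a token string."""
    if depth == 0:
        return str(rng.randint(0, 9))
    op = rng.choice(list(OPS))
    args = [gen_expr(rng, depth - 1) for _ in range(rng.randint(2, 4))]
    return f"[{op} {' '.join(args)}]"

def evaluate(expr):
    """Evaluate a bracketed ListOps expression via recursive descent."""
    tokens = expr.replace("[", "[ ").replace("]", " ]").split()
    def parse(pos):
        if tokens[pos] == "[":
            op, pos = tokens[pos + 1], pos + 2
            args = []
            while tokens[pos] != "]":
                val, pos = parse(pos)
                args.append(val)
            return OPS[op](args), pos + 1
        return int(tokens[pos]), pos + 1
    return parse(0)[0]

rng = random.Random(0)
# A joint-task corpus interleaves MAX, MED, and SUM examples in one dataset.
corpus = [(e := gen_expr(rng), evaluate(e)) for _ in range(5)]
```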
-
Thinking Out Loud: Do Reasoning Models Know When They're Right?
By comparing large reasoning models trained with instruction tuning, supervised fine-tuning, and reinforcement learning, this paper finds that reasoning-oriented training markedly improves accuracy and calibration on reasoning tasks, but may weaken smaller models' awareness of their own knowledge boundaries on factual tasks.
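Calibration in this setting is typically quantified with expected calibration error (ECE). A minimal sketch follows, assuming verbalized confidences in [0, 1] and binary correctness labels; the toy values are hypothetical.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: |accuracy - mean confidence| per confidence bin,
    weighted by the fraction of samples in that bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Hypothetical confidence readings and correctness flags for five answers.
conf = [0.9, 0.8, 0.95, 0.6, 0.7]
hit  = [1,   1,   0,    1,   0]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```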
-
Cyber Security Data Science: Machine Learning Methods and their Performance on Imbalanced Datasets
This paper systematically evaluates machine learning classifiers and imbalance learning techniques on two cybersecurity datasets, revealing that XGBoost (XGB) and Random Forest (RF) perform robustly while the effects of sampling and ensembling vary, which underscores the need for dataset-specific method selection.
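A minimal sketch of the kind of comparison involved, assuming scikit-learn, xgboost, and imbalanced-learn are available: it contrasts RF and XGB with and without random oversampling on a synthetic imbalanced split standing in for a cybersecurity dataset (the 95/5 class ratio is an assumption).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler
from xgboost import XGBClassifier

# Synthetic stand-in for an imbalanced cybersecurity dataset (~5% minority class).
X, y = make_classification(n_samples=4000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("XGB", XGBClassifier(eval_metric="logloss", random_state=0))]:
    clf.fit(X_tr, y_tr)
    base = f1_score(y_te, clf.predict(X_te))
    # Re-fit on a randomly oversampled training set and compare minority-class F1.
    X_rs, y_rs = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)
    clf.fit(X_rs, y_rs)
    over = f1_score(y_te, clf.predict(X_te))
    print(f"{name}: F1 plain={base:.3f}, oversampled={over:.3f}")
```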
-
RaCT: Ranking-aware Chain-of-Thought Optimization for LLMs
RaCT uses a two-stage training framework of chain-of-thought (CoT) prompting and ranking preference optimization (RPO) to significantly improve LLM performance on text reranking tasks while preserving general language modeling ability, surpassing baseline models on multiple benchmarks.
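The paper's RPO objective is its own; as a generic stand-in, the sketch below shows a DPO-style pairwise preference loss over a preferred versus a dispreferred ranking output, with `beta` and the toy log-probabilities as assumed values.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_preference_loss(logp_preferred, logp_rejected,
                                     ref_logp_preferred, ref_logp_rejected,
                                     beta=0.1):
    """DPO-style preference loss: push the policy to widen its likelihood
    margin between the better- and worse-ranked outputs relative to a
    frozen reference model. `beta` is an assumed temperature, not a value
    from the paper."""
    policy_margin = logp_preferred - logp_rejected
    ref_margin = ref_logp_preferred - ref_logp_rejected
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Hypothetical sequence log-probs for a preferred vs. dispreferred ranking.
lp_p, lp_r = torch.tensor([-12.0]), torch.tensor([-15.0])
ref_p, ref_r = torch.tensor([-13.0]), torch.tensor([-14.0])
loss = pairwise_ranking_preference_loss(lp_p, lp_r, ref_p, ref_r)
```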