
Light-R1-14B-DS: SOTA 14B Math Model with RL

| Model | Trained From | Release Date | AIME24 | AIME25 | GPQA |
| --- | --- | --- | --- | --- | --- |
| OpenThinker-32B | Qwen2.5-32B-Instruct | 25.2.12 | 66.0 | 50.9 | 61.6 |
| DeepSeek-R1-Distill-Qwen-14B | Qwen2.5-14B | 25.1.20 | 69.7 | 50.2 | 59.1 |
| Light-R1-14B-DS (ours) 🤗 | DeepSeek-R1-Distill-Qwen-14B | 25.3.12 | 74.0 | 60.2 | 61.7 |
| Light-R1-32B (ours) 🤗 | Qwen2.5-32B-Instruct | 25.3.4 | 76.6 | 64.6 | 61.8 |

- technical report
- GitHub page
- wandb log

We introduce Light-R1-14B-DS, the first successful open-source RL attempt on an already long-COT finetuned model of this size under a light compute budget. Light-R1-14B-DS is also the State-Of-The-Art 14B math model, with AIME24 & AIME25 scores of 74.0 & 60.2, outperforming many 32B models.

Recent RL works have successfully applied RL to base models (usually with -zero in their names), to 1.5B models (with response length interestingly decreasing and then increasing), or to QwQ-32B with presumably prohibitively heavy compute.

Light-R1-14B-DS marks a further step in reproducing and democratizing DeepSeek-R1. We have finally observed the expected behavior during RL training: a simultaneous increase in response length and reward score on an already long-COT finetuned model (see the wandb log).

Starting from DeepSeek-R1-Distill-Qwen-14B, Light-R1-14B-DS underwent our long-COT RL Post-Training and achieved a new State-Of-The-Art among 14B math models: 74.0 & 60.2 on AIME 24 & 25 respectively. Light-R1-14B-DS also performs well on GPQA without any GPQA-specific training. We are excited to release this model along with the technical report, and will continue to improve our long-COT RL Post-Training.

Usage

Same as DeepSeek-R1-Distill-Qwen-14B.
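Since usage mirrors DeepSeek-R1-Distill-Qwen-14B, a standard transformers chat-template workflow applies. Below is a minimal sketch; the prompt and the sampling settings (temperature 0.6, top-p 0.95, which are common choices for R1-distill models) are our assumptions, not official recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qihoo360/Light-R1-14B-DS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the released weights are BF16
    device_map="auto",
)

# A single-turn math prompt; the chat template handles R1-style formatting.
messages = [{"role": "user", "content": "Find the sum of all integer solutions of x^2 < 10."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=8192,  # long-COT responses need a generous token budget
    do_sample=True,
    temperature=0.6,      # assumed sampling settings, not official values
    top_p=0.95,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```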

Data Decontamination

We carefully evaluated data contamination in several open-sourced datasets. While some contamination may be inevitable during pre-training, contaminated post-training data makes benchmark comparisons meaningless. MATH-500 is somewhat compromised: tens of its questions appear in open datasets either verbatim or with only the numbers changed. AIME 24 and 25 remain intact, but special care is needed when incorporating AIME data up to 2023.

Light-R1 therefore performed thorough decontamination with exact matching (excluding digits) and N-gram (N=32) matching.
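For illustration, a hypothetical sketch of these two checks might look as follows; the function names, whitespace tokenization, and normalization details are assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of the two decontamination checks described above:
# exact matching with digits stripped, and 32-gram overlap matching.
import re

def normalize(text: str) -> str:
    """Lowercase, drop all digits, and collapse whitespace."""
    text = re.sub(r"\d+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def ngrams(tokens: list[str], n: int = 32) -> set[tuple[str, ...]]:
    """All n-grams of a token list (empty set if the text is shorter than n)."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(train_q: str, bench_q: str, n: int = 32) -> bool:
    # Check 1: exact match after removing digits, catching questions that
    # are identical or differ only in the numbers used.
    if normalize(train_q) == normalize(bench_q):
        return True
    # Check 2: any shared 32-gram between the two questions.
    return bool(ngrams(train_q.split(), n) & ngrams(bench_q.split(), n))
```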

Citation

@misc{lightr1proj,
      title={Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and Beyond}, 
      author={Liang Wen and Yunke Cai and Fenrui Xiao and Xin He and Qi An and Zhenyu Duan and Yimin Du and Junchen Liu and Lifu Tang and Xiaowei Lv and Haosheng Zou and Yongchao Deng and Shousheng Jia and Xiangzheng Zhang},
      year={2025},
      eprint={},
      archivePrefix={},
      url={https://github.com/Qihoo360/Light-R1}, 
}