Pensez: Less Data, Better Reasoning -- Rethinking French LLM
Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language processing tasks. However, achieving strong performance in specialized domains such as mathematical reasoning, and in non-English languages, often requires extensive training on massive datasets. This paper investigates a contrasting approach: strategic fine-tuning on a small, high-quality, bilingual (English-French) dataset to enhance both the reasoning capabilities and the French language proficiency of a large language model. Rather than relying on scale, we explore the hypothesis that targeted data curation and optimized training can achieve competitive, or even superior, performance. We demonstrate that targeted supervised fine-tuning (SFT) on only 2,000 carefully selected samples yields significant improvements in mathematical reasoning: Pensez 7B improves accuracy over the base model by up to 20% on AIME25 and by 12% on a French MATH level 5 benchmark. These results challenge the prevailing assumption that massive datasets are a prerequisite for strong reasoning performance in LLMs, highlighting the potential of strategic data curation and optimized fine-tuning for enhancing both specialized skills and multilingual capabilities. Our findings have implications for the efficient development of high-performing multilingual LLMs, especially in resource-constrained scenarios.
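The abstract's central claim rests on selecting a small, high-quality subset (2,000 samples) rather than training at scale. The paper does not specify its selection criteria here, so the sketch below is purely illustrative: a hypothetical scoring function that prefers samples with a substantial reasoning trace and bilingual (English-French) coverage, followed by a top-k cut. The field names (`en`, `fr`, `reasoning`) and the scoring heuristic are assumptions, not the authors' method.

```python
# Hypothetical sketch of quality-based data curation for a small SFT set.
# The scoring heuristic below (reasoning length, bilingual bonus) is
# illustrative only; it is NOT the selection procedure used in the paper.

def score_sample(sample: dict) -> int:
    """Toy quality score: reward longer reasoning traces, and double
    the score when both an English and a French field are present."""
    reasoning_len = len(sample.get("reasoning", "").split())
    bilingual = bool(sample.get("en")) and bool(sample.get("fr"))
    return reasoning_len * (2 if bilingual else 1)

def curate(samples: list[dict], k: int = 2000) -> list[dict]:
    """Keep the top-k samples by score (the paper uses k = 2,000)."""
    return sorted(samples, key=score_sample, reverse=True)[:k]

if __name__ == "__main__":
    pool = [
        {"en": "q1", "fr": "q1-fr", "reasoning": "step " * 50},  # score 100
        {"en": "q2", "fr": "",      "reasoning": "step " * 80},  # score 80
        {"en": "q3", "fr": "q3-fr", "reasoning": "step " * 10},  # score 20
    ]
    selected = curate(pool, k=2)
    print([s["en"] for s in selected])  # → ['q1', 'q2']
```

The selected subset would then feed a standard SFT pipeline; the point of the sketch is only that a cheap, explicit scoring pass can replace raw dataset scale.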
Community
Less data, Higher performance!