Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in complex tasks. Recent advances in Large Reasoning Models (LRMs), such as OpenAI o1 and DeepSeek-R1, have further improved performance in System-2 reasoning domains like mathematics and programming by harnessing supervised fine-tuning (SFT) and reinforcement learning (RL) to enhance Chain-of-Thought (CoT) reasoning. However, while longer CoT reasoning sequences improve performance, they also introduce significant computational overhead due to verbose and redundant outputs, known as the "overthinking phenomenon". In this paper, we provide the first structured survey to systematically investigate and explore current progress toward efficient reasoning in LLMs. Based on the inherent mechanisms of LLMs, we categorize existing work into several key directions: (1) model-based efficient reasoning, which optimizes full-length reasoning models into more concise ones or directly trains efficient reasoning models; (2) reasoning-output-based efficient reasoning, which dynamically reduces the number and length of reasoning steps during inference; and (3) input-prompt-based efficient reasoning, which improves reasoning efficiency by exploiting input prompt properties such as difficulty or length control. Additionally, we discuss the use of efficient data for training reasoning models, the reasoning capabilities of small language models, and evaluation methods and benchmarking.
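To make the third direction concrete, the snippet below is a minimal, illustrative sketch of prompt-based length control, not a method prescribed by the survey: it pairs a soft token-budget instruction in the prompt with a hard `max_new_tokens` cap at decoding time. The model name, budget value, and instruction wording are assumptions chosen for illustration.

```python
# Minimal sketch of prompt-based length control for efficient reasoning.
# The model name, BUDGET value, and instruction wording are illustrative
# assumptions, not prescriptions from the survey.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # any instruction-tuned causal LM
BUDGET = 128                          # soft token budget for the reasoning chain

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

question = "A train travels 120 km in 2 hours. What is its average speed?"

# Soft control: instruct the model to keep its CoT within the budget.
prompt = (
    f"Answer the question. Think step by step, "
    f"but use at most {BUDGET} tokens for your reasoning.\n\n"
    f"Question: {question}\nReasoning:"
)

inputs = tokenizer(prompt, return_tensors="pt")
# Hard control: cap generation so a verbose chain cannot exceed the budget.
output = model.generate(**inputs, max_new_tokens=BUDGET, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

A budget instruction alone does not guarantee compliance, which is why the hard decoding cap is a natural complementary control in this kind of setup.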
Community
This paper provides the first structured survey to systematically explore the current progress of efficient reasoning for LLMs.
Automated message from the Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning (2025)
- SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs (2025)
- InftyThink: Breaking the Length Limits of Long-Context Reasoning in Large Language Models (2025)
- Towards Widening The Distillation Bottleneck for Reasoning Models (2025)
- Self-Evolved Preference Optimization for Enhancing Mathematical Reasoning in Small Language Models (2025)
- From System 1 to System 2: A Survey of Reasoning Large Language Models (2025)
- Thinking Machines: A Survey of LLM based Reasoning Strategies (2025)