🧠 LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences

This is the official model release for LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences.

LLM-QE enhances query expansion for information retrieval by leveraging Large Language Models (LLMs) and aligning the expansions they generate with ranking preferences.


📄 Paper

For a detailed explanation of the methodology and experiments, please refer to our paper:
LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences


🔄 Reproduce the Results

To reproduce the experiments and benchmarks from the paper, follow the instructions provided in the official GitHub repository: 👉 GitHub: NEUIR/LLM-QE.

🛠 Model Details

  • Model Name: LLM-QE-DPO
  • Architecture: LLaMA3-8B-Instruct, fine-tuned with DPO to align query expansion with ranking preferences

📈 Usage

You can use this model for query expansion tasks, particularly in information retrieval systems that benefit from alignment with ranking preferences.
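The model can be loaded like any Hugging Face causal LM. Below is a minimal sketch of generating an expansion passage for a query; the prompt wording here is a hypothetical illustration (the official prompt template is in the NEUIR/LLM-QE repository), and the repository id is assumed from this model page.

```python
def build_expansion_prompt(query: str) -> str:
    """Wrap a query in an instruction asking for an expansion passage.

    The wording is an illustrative assumption, not the exact template
    from the paper; see the NEUIR/LLM-QE repo for the official one.
    """
    return (
        "Write a passage that answers the following query, to be used "
        f"as a query expansion for retrieval.\nQuery: {query}\nPassage:"
    )


def expand_query(query: str, max_new_tokens: int = 128) -> str:
    # Imported lazily so the prompt helper above works without
    # transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "yaosijiaaaaa/LLM-QE-DPO"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # The base model is instruction-tuned, so use the chat template.
    messages = [{"role": "user", "content": build_expansion_prompt(query)}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens.
    return tokenizer.decode(
        outputs[0][inputs.shape[-1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(expand_query("what causes ocean tides"))
```

A common pattern in expansion-based retrieval is to concatenate the original query with the generated passage and feed the result to the retriever; consult the paper and repository for the exact setup used in the experiments.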

🔖 Citation

If you use LLM-QE in your work, please consider citing our paper:

@misc{yao2025llmqeimprovingqueryexpansion,
      title={LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences}, 
      author={Sijia Yao and Pengcheng Huang and Zhenghao Liu and Yu Gu and Yukun Yan and Shi Yu and Ge Yu},
      year={2025},
      eprint={2502.17057},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2502.17057}, 
}