DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning Paper • 2501.12948 • Published Jan 22, 2025 • 347
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model Paper • 2405.04434 • Published May 7, 2024 • 18
Model Compression and Efficient Inference for Large Language Models: A Survey Paper • 2402.09748 • Published Feb 15, 2024 • 1
Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference Paper • 2403.09054 • Published Mar 14, 2024 • 1
FastCache: Optimizing Multimodal LLM Serving through Lightweight KV-Cache Compression Framework Paper • 2503.08461 • Published Mar 2025
Beyond RAG: Task-Aware KV Cache Compression for Comprehensive Knowledge Reasoning Paper • 2503.04973 • Published Mar 2025 • 20