librarian-bot committed on
Commit 6aa26cc · verified · 1 Parent(s): a60581c

Scheduled Commit
data/2503.05397.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.05397", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [AI Agentic workflows and Enterprise APIs: Adapting API architectures for the age of AI agents](https://huggingface.co/papers/2502.17443) (2025)\n* [AppAgentX: Evolving GUI Agents as Proficient Smartphone Users](https://huggingface.co/papers/2503.02268) (2025)\n* [MACI: Multi-Agent Collaborative Intelligence for Adaptive Reasoning and Temporal Planning](https://huggingface.co/papers/2501.16689) (2025)\n* [Mobile-Agent-E: Self-Evolving Mobile Assistant for Complex Tasks](https://huggingface.co/papers/2501.11733) (2025)\n* [PoAct: Policy and Action Dual-Control Agent for Generalized Applications](https://huggingface.co/papers/2501.07054) (2025)\n* [Multi-Agent Autonomous Driving Systems with Large Language Models: A Survey of Recent Advances](https://huggingface.co/papers/2502.16804) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
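Every file added in this commit holds a single JSON object with two fields: a `paper_url` pointing at the HF中国镜像站 paper page and a Markdown-formatted `comment` with the bot's recommendations. A minimal sketch of parsing one record (the URL is taken from the diff above; the comment text is abbreviated for illustration):

```python
import json

# Each data/<arxiv_id>.json file in this commit holds one JSON object with
# two fields: "paper_url" and a Markdown-formatted "comment".
def parse_record(text):
    record = json.loads(text)
    return record["paper_url"], record["comment"]

# Abbreviated example mirroring the records added in this commit.
raw = ('{"paper_url": "https://huggingface.co/papers/2503.05397", '
       '"comment": "This is an automated message from the Librarian Bot."}')
url, comment = parse_record(raw)
```

The same two-field shape applies to every `data/*.json` file below, so one parser covers the whole directory.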
data/2503.07536.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07536", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models](https://huggingface.co/papers/2503.06749) (2025)\n* [MM-Eureka: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning](https://huggingface.co/papers/2503.07365) (2025)\n* [Boosting the Generalization and Reasoning of Vision Language Models with Curriculum Reinforcement Learning](https://huggingface.co/papers/2503.07065) (2025)\n* [Visual-RFT: Visual Reinforcement Fine-Tuning](https://huggingface.co/papers/2503.01785) (2025)\n* [MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning](https://huggingface.co/papers/2502.19634) (2025)\n* [R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning](https://huggingface.co/papers/2503.05592) (2025)\n* [R1-Zero's\"Aha Moment\"in Visual Reasoning on a 2B Non-SFT Model](https://huggingface.co/papers/2503.05132) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07565.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07565", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Score-of-Mixture Training: Training One-Step Generative Models Made Simple via Score Estimation of Mixture Distributions](https://huggingface.co/papers/2502.09609) (2025)\n* [Distributional Diffusion Models with Scoring Rules](https://huggingface.co/papers/2502.02483) (2025)\n* [Training Consistency Models with Variational Noise Coupling](https://huggingface.co/papers/2502.18197) (2025)\n* [Visual Generation Without Guidance](https://huggingface.co/papers/2501.15420) (2025)\n* [Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator](https://huggingface.co/papers/2503.01103) (2025)\n* [Towards Training One-Step Diffusion Models Without Distillation](https://huggingface.co/papers/2502.08005) (2025)\n* [Beyond and Free from Diffusion: Invertible Guided Consistency Training](https://huggingface.co/papers/2502.05391) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07572.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07572", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling](https://huggingface.co/papers/2501.11651) (2025)\n* [Scaling Test-Time Compute Without Verification or RL is Suboptimal](https://huggingface.co/papers/2502.12118) (2025)\n* [Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search](https://huggingface.co/papers/2502.02508) (2025)\n* [Learning from Failures in Multi-Attempt Reinforcement Learning](https://huggingface.co/papers/2503.04808) (2025)\n* [Towards Widening The Distillation Bottleneck for Reasoning Models](https://huggingface.co/papers/2503.01461) (2025)\n* [ACECODER: Acing Coder RL via Automated Test-Case Synthesis](https://huggingface.co/papers/2502.01718) (2025)\n* [S$^2$R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning](https://huggingface.co/papers/2502.12853) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07587.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07587", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [UrbanVideo-Bench: Benchmarking Vision-Language Models on Embodied Intelligence with Video Data in Urban Spaces](https://huggingface.co/papers/2503.06157) (2025)\n* [Sce2DriveX: A Generalized MLLM Framework for Scene-to-Drive Learning](https://huggingface.co/papers/2502.14917) (2025)\n* [Embodied Scene Understanding for Vision Language Models via MetaVQA](https://huggingface.co/papers/2501.09167) (2025)\n* [VLM-E2E: Enhancing End-to-End Autonomous Driving with Multimodal Driver Attention Fusion](https://huggingface.co/papers/2502.18042) (2025)\n* [Evaluation of Safety Cognition Capability in Vision-Language Models for Autonomous Driving](https://huggingface.co/papers/2503.06497) (2025)\n* [BEVDriver: Leveraging BEV Maps in LLMs for Robust Closed-Loop Driving](https://huggingface.co/papers/2503.03074) (2025)\n* [VaViM and VaVAM: Autonomous Driving through Video Generative Modeling](https://huggingface.co/papers/2502.15672) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07604.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07604", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Unveiling Reasoning Thresholds in Language Models: Scaling, Fine-Tuning, and Interpretability through Attention Maps](https://huggingface.co/papers/2502.15120) (2025)\n* [SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs](https://huggingface.co/papers/2502.12134) (2025)\n* [Inference-Time Computations for LLM Reasoning and Planning: A Benchmark and Insights](https://huggingface.co/papers/2502.12521) (2025)\n* [MathFimer: Enhancing Mathematical Reasoning by Expanding Reasoning Steps through Fill-in-the-Middle Task](https://huggingface.co/papers/2502.11684) (2025)\n* [Quantifying Logical Consistency in Transformers via Query-Key Alignment](https://huggingface.co/papers/2502.17017) (2025)\n* [Order Doesn't Matter, But Reasoning Does: Training LLMs with Order-Centric Augmentation](https://huggingface.co/papers/2502.19907) (2025)\n* [Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching](https://huggingface.co/papers/2503.05179) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07639.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07639", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Route Sparse Autoencoder to Interpret Large Language Models](https://huggingface.co/papers/2503.08200) (2025)\n* [LF-Steering: Latent Feature Activation Steering for Enhancing Semantic Consistency in Large Language Models](https://huggingface.co/papers/2501.11036) (2025)\n* [SAE-V: Interpreting Multimodal Models for Enhanced Alignment](https://huggingface.co/papers/2502.17514) (2025)\n* [Interpreting CLIP with Hierarchical Sparse Autoencoders](https://huggingface.co/papers/2502.20578) (2025)\n* [Efficiently Editing Mixture-of-Experts Models with Compressed Experts](https://huggingface.co/papers/2503.00634) (2025)\n* [Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment](https://huggingface.co/papers/2502.03714) (2025)\n* [How LLMs Learn: Tracing Internal Representations with Sparse Autoencoders](https://huggingface.co/papers/2503.06394) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07699.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07699", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Learning Few-Step Diffusion Models by Trajectory Distribution Matching](https://huggingface.co/papers/2503.06674) (2025)\n* [ProReflow: Progressive Reflow with Decomposed Velocity](https://huggingface.co/papers/2503.04824) (2025)\n* [Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening](https://huggingface.co/papers/2502.12146) (2025)\n* [Adding Additional Control to One-Step Diffusion with Joint Distribution Matching](https://huggingface.co/papers/2503.06652) (2025)\n* [Optimizing for the Shortest Path in Denoising Diffusion Model](https://huggingface.co/papers/2503.03265) (2025)\n* [ROCM: RLHF on consistency models](https://huggingface.co/papers/2503.06171) (2025)\n* [Straight-Line Diffusion Model for Efficient 3D Molecular Generation](https://huggingface.co/papers/2503.02918) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07703.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07703", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Decoder-Only LLMs are Better Controllers for Diffusion Models](https://huggingface.co/papers/2502.04412) (2025)\n* [TextAtlas5M: A Large-scale Dataset for Dense Text Image Generation](https://huggingface.co/papers/2502.07870) (2025)\n* [IMAGINE-E: Image Generation Intelligence Evaluation of State-of-the-art Text-to-Image Models](https://huggingface.co/papers/2501.13920) (2025)\n* [Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think](https://huggingface.co/papers/2502.20172) (2025)\n* [DesignDiffusion: High-Quality Text-to-Design Image Generation with Diffusion Models](https://huggingface.co/papers/2503.01645) (2025)\n* [Bringing Characters to New Stories: Training-Free Theme-Specific Image Generation via Dynamic Visual Prompting](https://huggingface.co/papers/2501.15641) (2025)\n* [Augmented Conditioning Is Enough For Effective Training Image Generation](https://huggingface.co/papers/2502.04475) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07860.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07860", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Generative Frame Sampler for Long Video Understanding](https://huggingface.co/papers/2503.09146) (2025)\n* [Towards Fine-Grained Video Question Answering](https://huggingface.co/papers/2503.06820) (2025)\n* [Can Multimodal LLMs do Visual Temporal Understanding and Reasoning? The answer is No!](https://huggingface.co/papers/2501.10674) (2025)\n* [VideoPhy-2: A Challenging Action-Centric Physical Commonsense Evaluation in Video Generation](https://huggingface.co/papers/2503.06800) (2025)\n* [MomentSeeker: A Comprehensive Benchmark and A Strong Baseline For Moment Retrieval Within Long Videos](https://huggingface.co/papers/2502.12558) (2025)\n* [MMVU: Measuring Expert-Level Multi-Discipline Video Understanding](https://huggingface.co/papers/2501.12380) (2025)\n* [HAIC: Improving Human Action Understanding and Generation with Better Captions for Multi-modal Large Language Models](https://huggingface.co/papers/2502.20811) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07891.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07891", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [MMTEB: Massive Multilingual Text Embedding Benchmark](https://huggingface.co/papers/2502.13595) (2025)\n* [Enhancing Lexicon-Based Text Embeddings with Large Language Models](https://huggingface.co/papers/2501.09749) (2025)\n* [mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data](https://huggingface.co/papers/2502.08468) (2025)\n* [xVLM2Vec: Adapting LVLM-based embedding models to multilinguality using Self-Knowledge Distillation](https://huggingface.co/papers/2503.09313) (2025)\n* [FaMTEB: Massive Text Embedding Benchmark in Persian Language](https://huggingface.co/papers/2502.11571) (2025)\n* [DeepRAG: Building a Custom Hindi Embedding Model for Retrieval Augmented Generation from Scratch](https://huggingface.co/papers/2503.08213) (2025)\n* [Franken-Adapter: Cross-Lingual Adaptation of LLMs by Embedding Surgery](https://huggingface.co/papers/2502.08037) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.07920.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.07920", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Diffusion Models Through a Global Lens: Are They Culturally Inclusive?](https://huggingface.co/papers/2502.08914) (2025)\n* [SEA-HELM: Southeast Asian Holistic Evaluation of Language Models](https://huggingface.co/papers/2502.14301) (2025)\n* [Preserving Cultural Identity with Context-Aware Translation Through Multi-Agent AI Systems](https://huggingface.co/papers/2503.04827) (2025)\n* [PerCul: A Story-Driven Cultural Evaluation of LLMs in Persian](https://huggingface.co/papers/2502.07459) (2025)\n* [Cross-Cultural Fashion Design via Interactive Large Language Models and Diffusion Models](https://huggingface.co/papers/2501.15571) (2025)\n* [Scaling Pre-training to One Hundred Billion Data for Vision Language Models](https://huggingface.co/papers/2502.07617) (2025)\n* [RusCode: Russian Cultural Code Benchmark for Text-to-Image Generation](https://huggingface.co/papers/2502.07455) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08037.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08037", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [OmniEraser: Remove Objects and Their Effects in Images with Paired Video-Frame Data](https://huggingface.co/papers/2501.07397) (2025)\n* [OmniPaint: Mastering Object-Oriented Editing via Disentangled Insertion-Removal Inpainting](https://huggingface.co/papers/2503.08677) (2025)\n* [Get In Video: Add Anything You Want to the Video](https://huggingface.co/papers/2503.06268) (2025)\n* [3D Object Manipulation in a Single Image using Generative Models](https://huggingface.co/papers/2501.12935) (2025)\n* [HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion Video Generation](https://huggingface.co/papers/2502.04847) (2025)\n* [VideoHandles: Editing 3D Object Compositions in Videos Using Video Generative Priors](https://huggingface.co/papers/2503.01107) (2025)\n* [VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video Generation](https://huggingface.co/papers/2502.07531) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08102.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08102", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [A-MEM: Agentic Memory for LLM Agents](https://huggingface.co/papers/2502.12110) (2025)\n* [Towards Anthropomorphic Conversational AI Part I: A Practical Framework](https://huggingface.co/papers/2503.04787) (2025)\n* [GOD model: Privacy Preserved AI School for Personal Assistant](https://huggingface.co/papers/2502.18527) (2025)\n* [R$^3$Mem: Bridging Memory Retention and Retrieval via Reversible Compression](https://huggingface.co/papers/2502.15957) (2025)\n* [Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger](https://huggingface.co/papers/2502.12961) (2025)\n* [MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems](https://huggingface.co/papers/2503.03686) (2025)\n* [MMRC: A Large-Scale Benchmark for Understanding Multimodal Large Language Model in Real-World Conversation](https://huggingface.co/papers/2502.11903) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08120.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08120", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens](https://huggingface.co/papers/2501.07730) (2025)\n* [DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability](https://huggingface.co/papers/2503.06505) (2025)\n* [OmniMamba: Efficient and Unified Multimodal Understanding and Generation via State Space Models](https://huggingface.co/papers/2503.08686) (2025)\n* [Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think](https://huggingface.co/papers/2502.20172) (2025)\n* [Unleashing the Potential of Large Language Models for Text-to-Image Generation through Autoregressive Representation Alignment](https://huggingface.co/papers/2503.07334) (2025)\n* [I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models](https://huggingface.co/papers/2502.10458) (2025)\n* [A Token-level Text Image Foundation Model for Document Understanding](https://huggingface.co/papers/2503.02304) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08307.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08307", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [UniForm: A Unified Diffusion Transformer for Audio-Video Generation](https://huggingface.co/papers/2502.03897) (2025)\n* [MaskFlow: Discrete Flows For Flexible and Efficient Long Video Generation](https://huggingface.co/papers/2502.11234) (2025)\n* [A Comprehensive Survey on Generative AI for Video-to-Music Generation](https://huggingface.co/papers/2502.12489) (2025)\n* [AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion](https://huggingface.co/papers/2503.07418) (2025)\n* [MagicInfinite: Generating Infinite Talking Videos with Your Words and Voice](https://huggingface.co/papers/2503.05978) (2025)\n* [Taming Teacher Forcing for Masked Autoregressive Video Generation](https://huggingface.co/papers/2501.12389) (2025)\n* [Enhance-A-Video: Better Generated Video for Free](https://huggingface.co/papers/2502.07508) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08417.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08417", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion Video Generation](https://huggingface.co/papers/2502.04847) (2025)\n* [MotionMatcher: Motion Customization of Text-to-Video Diffusion Models via Motion Feature Matching](https://huggingface.co/papers/2502.13234) (2025)\n* [TransVDM: Motion-Constrained Video Diffusion Model for Transparent Video Synthesis](https://huggingface.co/papers/2502.19454) (2025)\n* [CatV2TON: Taming Diffusion Transformers for Vision-Based Virtual Try-On with Temporal Concatenation](https://huggingface.co/papers/2501.11325) (2025)\n* [AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion](https://huggingface.co/papers/2503.07418) (2025)\n* [SayAnything: Audio-Driven Lip Synchronization with Conditional Video Diffusion](https://huggingface.co/papers/2502.11515) (2025)\n* [How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary Objects](https://huggingface.co/papers/2503.04257) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08478.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08478", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [InstaFace: Identity-Preserving Facial Editing with Single Image Inference](https://huggingface.co/papers/2502.20577) (2025)\n* [DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability](https://huggingface.co/papers/2503.06505) (2025)\n* [IP-FaceDiff: Identity-Preserving Facial Video Editing with Diffusion](https://huggingface.co/papers/2501.07530) (2025)\n* [Towards Consistent and Controllable Image Synthesis for Face Editing](https://huggingface.co/papers/2502.02465) (2025)\n* [Removing Averaging: Personalized Lip-Sync Driven Characters Based on Identity Adapter](https://huggingface.co/papers/2503.06397) (2025)\n* [FlipConcept: Tuning-Free Multi-Concept Personalization for Text-to-Image Generation](https://huggingface.co/papers/2502.15203) (2025)\n* [Turn That Frown Upside Down: FaceID Customization via Cross-Training Data](https://huggingface.co/papers/2501.15407) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08507.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08507", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [New Dataset and Methods for Fine-Grained Compositional Referring Expression Comprehension via Specialist-MLLM Collaboration](https://huggingface.co/papers/2502.20104) (2025)\n* [ChatReID: Open-ended Interactive Person Retrieval via Hierarchical Progressive Tuning for Vision Language Models](https://huggingface.co/papers/2502.19958) (2025)\n* [REF-VLM: Triplet-Based Referring Paradigm for Unified Visual Decoding](https://huggingface.co/papers/2503.07413) (2025)\n* [UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface](https://huggingface.co/papers/2503.01342) (2025)\n* [Towards Fine-Grained Video Question Answering](https://huggingface.co/papers/2503.06820) (2025)\n* [Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding](https://huggingface.co/papers/2503.06287) (2025)\n* [Taking Notes Brings Focus? Towards Multi-Turn Multimodal Dialogue Learning](https://huggingface.co/papers/2503.07002) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08588.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08588", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Understanding and Mitigating Gender Bias in LLMs via Interpretable Neuron Editing](https://huggingface.co/papers/2501.14457) (2025)\n* [Gender Encoding Patterns in Pretrained Language Model Representations](https://huggingface.co/papers/2503.06734) (2025)\n* [Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models](https://huggingface.co/papers/2502.11559) (2025)\n* [Dual Debiasing: Remove Stereotypes and Keep Factual Gender for Fair Language Modeling and Translation](https://huggingface.co/papers/2501.10150) (2025)\n* [Mitigating Bias in RAG: Controlling the Embedder](https://huggingface.co/papers/2502.17390) (2025)\n* [Beneath the Surface: How Large Language Models Reflect Hidden Bias](https://huggingface.co/papers/2502.19749) (2025)\n* [DR.GAP: Mitigating Bias in Large Language Models using Gender-Aware Prompting with Demonstration and Reasoning](https://huggingface.co/papers/2502.11603) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08605.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08605", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Ouroboros-Diffusion: Exploring Consistent Content Generation in Tuning-free Long Video Diffusion](https://huggingface.co/papers/2501.09019) (2025)\n* [SOYO: A Tuning-Free Approach for Video Style Morphing via Style-Adaptive Interpolation in Diffusion Models](https://huggingface.co/papers/2503.06998) (2025)\n* [RepVideo: Rethinking Cross-Layer Representation for Video Generation](https://huggingface.co/papers/2501.08994) (2025)\n* [AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion](https://huggingface.co/papers/2503.07418) (2025)\n* [AdaFlow: Efficient Long Video Editing via Adaptive Attention Slimming And Keyframe Selection](https://huggingface.co/papers/2502.05433) (2025)\n* [Inversion-Free Video Style Transfer with Trajectory Reset Attention Control and Content-Style Bridging](https://huggingface.co/papers/2503.07363) (2025)\n* [Text2Story: Advancing Video Storytelling with Text Guidance](https://huggingface.co/papers/2503.06310) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08619.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08619", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt](https://huggingface.co/papers/2501.13554) (2025)\n* [TextAtlas5M: A Large-scale Dataset for Dense Text Image Generation](https://huggingface.co/papers/2502.07870) (2025)\n* [ARMOR v0.1: Empowering Autoregressive Multimodal Understanding Model with Interleaved Multimodal Generation via Asymmetric Synergy](https://huggingface.co/papers/2503.06542) (2025)\n* [SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer](https://huggingface.co/papers/2501.18427) (2025)\n* [Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation](https://huggingface.co/papers/2502.05415) (2025)\n* [Decoder-Only LLMs are Better Controllers for Diffusion Models](https://huggingface.co/papers/2502.04412) (2025)\n* [Augmented Conditioning Is Enough For Effective Training Image Generation](https://huggingface.co/papers/2502.04475) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08625.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08625", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface](https://huggingface.co/papers/2503.01342) (2025)\n* [Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement](https://huggingface.co/papers/2503.06520) (2025)\n* [Pixel-Level Reasoning Segmentation via Multi-turn Conversations](https://huggingface.co/papers/2502.09447) (2025)\n* [REF-VLM: Triplet-Based Referring Paradigm for Unified Visual Decoding](https://huggingface.co/papers/2503.07413) (2025)\n* [The Devil is in Temporal Token: High Quality Video Reasoning Segmentation](https://huggingface.co/papers/2501.08549) (2025)\n* [DSV-LFS: Unifying LLM-Driven Semantic Cues with Visual Features for Robust Few-Shot Segmentation](https://huggingface.co/papers/2503.04006) (2025)\n* [PixFoundation: Are We Heading in the Right Direction with Pixel-level Vision Foundation Models?](https://huggingface.co/papers/2502.04192) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08638.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08638", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation](https://huggingface.co/papers/2502.13128) (2025)\n* [DiffRhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion](https://huggingface.co/papers/2503.01183) (2025)\n* [InspireMusic: Integrating Super Resolution and Large Language Model for High-Fidelity Long-Form Music Generation](https://huggingface.co/papers/2503.00084) (2025)\n* [GVMGen: A General Video-to-Music Generation Model with Hierarchical Attentions](https://huggingface.co/papers/2501.09972) (2025)\n* [ImprovNet: Generating Controllable Musical Improvisations with Iterative Corruption Refinement](https://huggingface.co/papers/2502.04522) (2025)\n* [Metis: A Foundation Speech Generation Model with Masked Generative Pre-training](https://huggingface.co/papers/2502.03128) (2025)\n* [Everyone-Can-Sing: Zero-Shot Singing Voice Synthesis and Conversion with Speech Reference](https://huggingface.co/papers/2501.13870) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08644.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08644", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [SafeRAG: Benchmarking Security in Retrieval-Augmented Generation of Large Language Model](https://huggingface.co/papers/2501.18636) (2025)\n* [MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Poisoning Attacks](https://huggingface.co/papers/2502.17832) (2025)\n* [Riddle Me This! Stealthy Membership Inference for Retrieval-Augmented Generation](https://huggingface.co/papers/2502.00306) (2025)\n* [Collapse of Dense Retrievers: Short, Early, and Literal Biases Outranking Factual Evidence](https://huggingface.co/papers/2503.05037) (2025)\n* [From Retrieval to Generation: Comparing Different Approaches](https://huggingface.co/papers/2502.20245) (2025)\n* [Making Them a Malicious Database: Exploiting Query Code to Jailbreak Aligned Large Language Models](https://huggingface.co/papers/2502.09723) (2025)\n* [A Practical Memory Injection Attack against LLM Agents](https://huggingface.co/papers/2503.03704) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08684.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08684", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Collapse of Dense Retrievers: Short, Early, and Literal Biases Outranking Factual Evidence](https://huggingface.co/papers/2503.05037) (2025)\n* [SePer: Measure Retrieval Utility Through The Lens Of Semantic Perplexity Reduction](https://huggingface.co/papers/2503.01478) (2025)\n* [LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences](https://huggingface.co/papers/2502.17057) (2025)\n* [From Retrieval to Generation: Comparing Different Approaches](https://huggingface.co/papers/2502.20245) (2025)\n* [MultiConIR: Towards multi-condition Information Retrieval](https://huggingface.co/papers/2503.08046) (2025)\n* [ASRank: Zero-Shot Re-Ranking with Answer Scent for Document Retrieval](https://huggingface.co/papers/2501.15245) (2025)\n* [Scaling Sparse and Dense Retrieval in Decoder-Only LLMs](https://huggingface.co/papers/2502.15526) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08685.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08685", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens](https://huggingface.co/papers/2501.07730) (2025)\n* [Robust Latent Matters: Boosting Image Generation with Sampling Error](https://huggingface.co/papers/2503.08354) (2025)\n* [V2Flow: Unifying Visual Tokenization and Large Language Model Vocabularies for Autoregressive Image Generation](https://huggingface.co/papers/2503.07493) (2025)\n* [Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generation](https://huggingface.co/papers/2502.20388) (2025)\n* [Unleashing the Potential of Large Language Models for Text-to-Image Generation through Autoregressive Representation Alignment](https://huggingface.co/papers/2503.07334) (2025)\n* [Layton: Latent Consistency Tokenizer for 1024-pixel Image Reconstruction and Generation by 256 Tokens](https://huggingface.co/papers/2503.08377) (2025)\n* [FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction](https://huggingface.co/papers/2502.20313) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08686.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08686", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [VARGPT: Unified Understanding and Generation in a Visual Autoregressive Multimodal Large Language Model](https://huggingface.co/papers/2501.12327) (2025)\n* [Multimodal Mamba: Decoder-only Multimodal State Space Model via Quadratic to Linear Distillation](https://huggingface.co/papers/2502.13145) (2025)\n* [Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think](https://huggingface.co/papers/2502.20172) (2025)\n* [MINT: Multi-modal Chain of Thought in Unified Generative Models for Enhanced Image Generation](https://huggingface.co/papers/2503.01298) (2025)\n* [MMRL: Multi-Modal Representation Learning for Vision-Language Models](https://huggingface.co/papers/2503.08497) (2025)\n* [Unleashing the Potential of Large Language Models for Text-to-Image Generation through Autoregressive Representation Alignment](https://huggingface.co/papers/2503.07334) (2025)\n* [DoraCycle: Domain-Oriented Adaptation of Unified Generative Model in Multimodal Cycles](https://huggingface.co/papers/2503.03651) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08689.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08689", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [PLPHP: Per-Layer Per-Head Vision Token Pruning for Efficient Large Vision-Language Models](https://huggingface.co/papers/2502.14504) (2025)\n* [Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More](https://huggingface.co/papers/2502.11494) (2025)\n* [TokenButler: Token Importance is Predictable](https://huggingface.co/papers/2503.07518) (2025)\n* [Dynamic Token Reduction during Generation for Vision Language Models](https://huggingface.co/papers/2501.14204) (2025)\n* [Unshackling Context Length: An Efficient Selective Attention Approach through Query-Key Compression](https://huggingface.co/papers/2502.14477) (2025)\n* [Token Pruning in Multimodal Large Language Models: Are We Solving the Right Problem?](https://huggingface.co/papers/2502.11501) (2025)\n* [Moment of Untruth: Dealing with Negative Queries in Video Moment Retrieval](https://huggingface.co/papers/2502.08544) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.08890.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.08890", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [MedBioLM: Optimizing Medical and Biological QA with Fine-Tuned Large Language Models and Retrieval-Augmented Generation](https://huggingface.co/papers/2502.03004) (2025)\n* [MeDiSumQA: Patient-Oriented Question-Answer Generation from Discharge Letters](https://huggingface.co/papers/2502.03298) (2025)\n* [Benchmarking Multimodal RAG through a Chart-based Document Question-Answering Generation Framework](https://huggingface.co/papers/2502.14864) (2025)\n* [MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models](https://huggingface.co/papers/2502.14302) (2025)\n* [Aligning LLMs to Ask Good Questions A Case Study in Clinical Reasoning](https://huggingface.co/papers/2502.14860) (2025)\n* [FIND: Fine-grained Information Density Guided Adaptive Retrieval-Augmented Generation for Disease Diagnosis](https://huggingface.co/papers/2502.14614) (2025)\n* [Structured Outputs Enable General-Purpose LLMs to be Medical Experts](https://huggingface.co/papers/2503.03194) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.09089.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.09089", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Code to Think, Think to Code: A Survey on Code-Enhanced Reasoning and Reasoning-Driven Code Intelligence in LLMs](https://huggingface.co/papers/2502.19411) (2025)\n* [Automated Benchmark Generation for Repository-Level Coding Tasks](https://huggingface.co/papers/2503.07701) (2025)\n* [DependEval: Benchmarking LLMs for Repository Dependency Understanding](https://huggingface.co/papers/2503.06689) (2025)\n* [FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation](https://huggingface.co/papers/2503.06680) (2025)\n* [ToolCoder: A Systematic Code-Empowered Tool Learning Framework for Large Language Models](https://huggingface.co/papers/2502.11404) (2025)\n* [RefactorBench: Evaluating Stateful Reasoning in Language Agents Through Code](https://huggingface.co/papers/2503.07832) (2025)\n* [A Survey On Large Language Models For Code Generation](https://huggingface.co/papers/2503.01245) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.09151.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.09151", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control](https://huggingface.co/papers/2503.03751) (2025)\n* [IMFine: 3D Inpainting via Geometry-guided Multi-view Refinement](https://huggingface.co/papers/2503.04501) (2025)\n* [MotionMatcher: Motion Customization of Text-to-Video Diffusion Models via Motion Feature Matching](https://huggingface.co/papers/2502.13234) (2025)\n* [OmniEraser: Remove Objects and Their Effects in Images with Paired Video-Frame Data](https://huggingface.co/papers/2501.07397) (2025)\n* [ObjectMover: Generative Object Movement with Video Prior](https://huggingface.co/papers/2503.08037) (2025)\n* [MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation](https://huggingface.co/papers/2502.04299) (2025)\n* [VideoHandles: Editing 3D Object Compositions in Videos Using Video Generative Priors](https://huggingface.co/papers/2503.01107) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.09402.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.09402", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [VideoRAG: Retrieval-Augmented Generation with Extreme Long-Context Videos](https://huggingface.co/papers/2502.01549) (2025)\n* [Narrating the Video: Boosting Text-Video Retrieval via Comprehensive Utilization of Frame-Level Captions](https://huggingface.co/papers/2503.05186) (2025)\n* [Fine-Grained Video Captioning through Scene Graph Consolidation](https://huggingface.co/papers/2502.16427) (2025)\n* [MomentSeeker: A Comprehensive Benchmark and A Strong Baseline For Moment Retrieval Within Long Videos](https://huggingface.co/papers/2502.12558) (2025)\n* [Omni-RGPT: Unifying Image and Video Region-level Understanding via Token Marks](https://huggingface.co/papers/2501.08326) (2025)\n* [Streaming Video Question-Answering with In-context Video KV-Cache Retrieval](https://huggingface.co/papers/2503.00540) (2025)\n* [HierarQ: Task-Aware Hierarchical Q-Former for Enhanced Video Understanding](https://huggingface.co/papers/2503.08585) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.09419.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.09419", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Rethinking Video Tokenization: A Conditioned Diffusion-based Approach](https://huggingface.co/papers/2503.03708) (2025)\n* [TIDE : Temporal-Aware Sparse Autoencoders for Interpretable Diffusion Transformers in Image Generation](https://huggingface.co/papers/2503.07050) (2025)\n* [Latent Swap Joint Diffusion for Long-Form Audio Generation](https://huggingface.co/papers/2502.05130) (2025)\n* [USP: Unified Self-Supervised Pretraining for Image Generation and Understanding](https://huggingface.co/papers/2503.06132) (2025)\n* [AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion](https://huggingface.co/papers/2503.07418) (2025)\n* [SARA: Structural and Adversarial Representation Alignment for Training-efficient Diffusion Models](https://huggingface.co/papers/2503.08253) (2025)\n* [DiffVSR: Revealing an Effective Recipe for Taming Robust Video Super-Resolution Against Complex Degradations](https://huggingface.co/papers/2501.10110) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on HF中国镜像站 checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}