Abstract
Recent advancements in video generation have achieved impressive motion realism, yet they often overlook character-driven storytelling, a crucial task for automated film and animation generation. We introduce Talking Characters, a more realistic task of generating talking character animations directly from speech and text. Unlike talking head generation, Talking Characters aims to generate the full portrait of one or more characters beyond the facial region. In this paper, we propose MoCha, the first model of its kind to generate talking characters. To ensure precise synchronization between video and speech, we propose a speech-video window attention mechanism that effectively aligns speech and video tokens. To address the scarcity of large-scale speech-labeled video datasets, we introduce a joint training strategy that leverages both speech-labeled and text-labeled video data, significantly improving generalization across diverse character actions. We also design structured prompt templates with character tags, enabling, for the first time, multi-character conversation with turn-based dialogue, allowing AI-generated characters to engage in context-aware conversations with cinematic coherence. Extensive qualitative and quantitative evaluations, including human preference studies and benchmark comparisons, demonstrate that MoCha sets a new standard for AI-generated cinematic storytelling, achieving superior realism, expressiveness, controllability, and generalization.
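As a rough illustration of the speech-video window attention idea, the sketch below builds a cross-attention mask that lets each video frame's tokens attend only to a local window of speech tokens around the frame's aligned time step. This is a minimal sketch under assumed inputs, not the paper's actual implementation: the function name, the linear frame-to-audio alignment, and all parameters (`num_frames`, `tokens_per_frame`, `audio_len`, `window`) are illustrative.

```python
# Hypothetical sketch of a windowed speech-video cross-attention mask.
# Assumes video tokens are ordered frame by frame and audio tokens by time
# step; the paper's exact formulation may differ.
import torch

def speech_video_window_mask(
    num_frames: int,        # number of video frames in the latent sequence
    tokens_per_frame: int,  # spatial tokens per frame
    audio_len: int,         # number of speech/audio tokens
    window: int = 2,        # half-width of the audio window per frame
) -> torch.Tensor:
    """Boolean mask of shape (num_frames * tokens_per_frame, audio_len).

    True marks the audio tokens each video token may attend to: a local
    window of speech tokens centered on the frame's aligned time step.
    """
    # Map each frame index to its (approximately) aligned audio position,
    # assuming a simple linear alignment between the two timelines.
    frame_idx = torch.arange(num_frames)
    centers = (frame_idx.float() / max(num_frames - 1, 1) * (audio_len - 1)).round().long()

    audio_idx = torch.arange(audio_len)
    # (num_frames, audio_len): keep audio tokens within +/- window of the center.
    frame_mask = (audio_idx[None, :] - centers[:, None]).abs() <= window
    # Broadcast the per-frame mask to every spatial token of that frame.
    return frame_mask.repeat_interleave(tokens_per_frame, dim=0)

mask = speech_video_window_mask(num_frames=16, tokens_per_frame=4, audio_len=50)
print(mask.shape)  # torch.Size([64, 50])
```

In a diffusion-transformer backbone, such a mask would typically be converted to an additive bias and applied in cross-attention layers where video tokens are the queries and speech tokens are the keys and values.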
Community
Hi, thank you for your interest in MoCha! Releasing the source code and model weights requires approval from Meta. However, I can try to implement MoCha using open-source video generation models.
Meanwhile, MoChaBench will be released soon—please stay tuned.
Generate a joke
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MagicInfinite: Generating Infinite Talking Videos with Your Words and Voice (2025)
- AudCast: Audio-Driven Human Video Generation by Cascaded Diffusion Transformers (2025)
- ChatAnyone: Stylized Real-time Portrait Video Generation with Hierarchical Motion Diffusion Model (2025)
- Versatile Multimodal Controls for Whole-Body Talking Human Animation (2025)
- X-Dancer: Expressive Music to Human Dance Video Generation (2025)
- Long Context Tuning for Video Generation (2025)
- Teller: Real-Time Streaming Audio-Driven Portrait Animation with Autoregressive Motion Generation (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on HF中国镜像站, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend