Perceptually Accurate 3D Talking Head Generation: New Definitions, Speech-Mesh Representation, and Evaluation Metrics
Abstract
Recent advancements in speech-driven 3D talking head generation have made significant progress in lip synchronization. However, existing models still struggle to capture the perceptual alignment between varying speech characteristics and the corresponding lip movements. In this work, we claim that three criteria -- Temporal Synchronization, Lip Readability, and Expressiveness -- are crucial for achieving perceptually accurate lip movements. Motivated by our hypothesis that a desirable representation space exists to meet these three criteria, we introduce a speech-mesh synchronized representation that captures intricate correspondences between speech signals and 3D face meshes. We find that our learned representation exhibits desirable characteristics, and we plug it into existing models as a perceptual loss to better align lip movements to the given speech. In addition, we utilize this representation as a perceptual metric and introduce two other physically grounded lip synchronization metrics to assess how well generated 3D talking heads satisfy these three criteria. Experiments show that training 3D talking head generation models with our perceptual loss significantly improves all three aspects of perceptually accurate lip synchronization. Code and datasets are available at https://perceptual-3d-talking-head.github.io/.
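To make the perceptual-loss idea concrete, here is a minimal sketch of how a shared speech-mesh embedding space could be used as a training loss. The encoders, embedding dimension, and windowing are hypothetical stand-ins (the paper's actual architecture is not reproduced here); the sketch only illustrates the general pattern of penalizing cosine distance between paired speech and mesh embeddings.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Normalize embeddings to unit length so the dot product is cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def perceptual_sync_loss(speech_emb, mesh_emb):
    """Average cosine distance between paired per-window speech and mesh
    embeddings in a shared representation space. Lower means better aligned.
    This is an illustrative stand-in, not the paper's exact loss."""
    s = l2_normalize(speech_emb)
    m = l2_normalize(mesh_emb)
    cos = np.sum(s * m, axis=-1)        # per-window cosine similarity
    return float(np.mean(1.0 - cos))    # mean cosine distance over windows

# Toy usage: random vectors stand in for outputs of pretrained encoders.
rng = np.random.default_rng(0)
speech = rng.standard_normal((16, 128))                       # 16 windows, 128-d
mesh_aligned = speech + 0.05 * rng.standard_normal((16, 128)) # well synchronized
mesh_random = rng.standard_normal((16, 128))                  # unrelated motion

loss_aligned = perceptual_sync_loss(speech, mesh_aligned)
loss_random = perceptual_sync_loss(speech, mesh_random)
```

In a real training loop, `perceptual_sync_loss` would be added (with a weighting coefficient) to the generator's reconstruction loss, with gradients flowing through the mesh encoder into the talking head model.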
Community
Project page: https://perceptual-3d-talking-head.github.io/
We define three essential criteria with evaluation metrics for perceptually accurate 3D talking heads and enhance existing 3D talking head generation models across these three aspects.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ARTalk: Speech-Driven 3D Head Animation via Autoregressive Model (2025)
- KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation (2025)
- AV-Flow: Transforming Text to Audio-Visual Human-like Interactions (2025)
- Removing Averaging: Personalized Lip-Sync Driven Characters Based on Identity Adapter (2025)
- Shushing! Let's Imagine an Authentic Speech from the Silent Video (2025)
- PC-Talk: Precise Facial Animation Control for Audio-Driven Talking Face Generation (2025)
- Towards High-fidelity 3D Talking Avatar with Personalized Dynamic Texture (2025)