LHM: Large Animatable Human Reconstruction Model for Single Image to 3D in Seconds

Overview

This repository contains the model weights for the paper LHM: Large Animatable Human Reconstruction Model for Single Image to 3D in Seconds.

LHM is a feed-forward model that reconstructs an animatable 3D human from a single image in seconds. Trained on a large-scale video dataset with an image reconstruction loss, the model generalizes well to diverse real-world scenarios.

Quick Start

Please refer to our GitHub repository.

Download Model

from huggingface_hub import snapshot_download

# Download the 500M-parameter model
model_dir = snapshot_download(repo_id='3DAIGC/LHM-500M', cache_dir='./pretrained_models/huggingface')
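Since the checkpoint ships as Safetensors (see the model details below), a quick sanity check after downloading is to confirm the weight files actually landed in the returned snapshot directory. The sketch below is a hypothetical stdlib-only helper, not part of the LHM codebase; it is demonstrated on a throwaway directory standing in for `model_dir`.

```python
import tempfile
from pathlib import Path

def find_weight_files(model_dir):
    """Return all .safetensors files under model_dir, searched recursively."""
    return sorted(Path(model_dir).rglob("*.safetensors"))

# Demo on a temporary directory that stands in for the snapshot_download path.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "model.safetensors").touch()
    (Path(tmp) / "config.json").touch()
    weights = find_weight_files(tmp)
    print([p.name for p in weights])  # → ['model.safetensors']
```

In real use you would pass the `model_dir` returned by `snapshot_download` instead of the temporary directory; an empty result suggests an incomplete or interrupted download.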

Citation

@article{qiu2025LHM,
  title={LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds},
  author={Lingteng Qiu and Xiaodong Gu and Peihao Li and Qi Zuo
    and Weichao Shen and Junfei Zhang and Kejie Qiu and Weihao Yuan
    and Guanying Chen and Zilong Dong and Liefeng Bo},
  journal={arXiv preprint arXiv:2503.10625},
  year={2025}
}
Model Details

Format: Safetensors, 958M parameters (tensor types: I64, F32, BOOL)