Infinite Mobility: Scalable High-Fidelity Synthesis of Articulated Objects via Procedural Generation
Abstract
High-quality articulated objects at scale are in urgent demand for many embodied-AI tasks. Most existing methods for creating articulated objects are either data-driven or simulation-based; the former are limited by the scale and quality of their training data, while the latter are constrained by simulation fidelity and heavy manual labour. In this paper, we propose Infinite Mobility, a novel method for synthesizing high-fidelity articulated objects through procedural generation. A user study and quantitative evaluation demonstrate that our method produces results that surpass current state-of-the-art methods and are comparable to human-annotated datasets in both physical properties and mesh quality. Furthermore, we show that our synthetic data can serve as training data for generative models, enabling the next step of scaling up. Code is available at https://github.com/Intern-Nexus/Infinite-Mobility
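The abstract does not include code, but the core idea of procedural articulated-object synthesis can be made concrete with a minimal, hypothetical sketch. The example below is not the authors' pipeline: it samples random cabinet dimensions from assumed parameter ranges and emits a URDF description with one revolute door hinge; all names, ranges, and the choice of URDF output are illustrative assumptions.

```python
import random
import xml.etree.ElementTree as ET

def box_link(name, size, mass, origin=(0.0, 0.0, 0.0)):
    """Build a URDF <link> with a box visual, collision, and box inertia."""
    link = ET.Element("link", name=name)
    xyz = " ".join(f"{v:.3f}" for v in origin)
    dims = " ".join(f"{v:.3f}" for v in size)
    for tag in ("visual", "collision"):
        el = ET.SubElement(link, tag)
        ET.SubElement(el, "origin", xyz=xyz, rpy="0 0 0")
        geom = ET.SubElement(el, "geometry")
        ET.SubElement(geom, "box", size=dims)
    x, y, z = size
    inertial = ET.SubElement(link, "inertial")
    ET.SubElement(inertial, "origin", xyz=xyz, rpy="0 0 0")
    ET.SubElement(inertial, "mass", value=f"{mass:.3f}")
    # Analytic inertia of a solid box about its center of mass.
    ET.SubElement(
        inertial, "inertia",
        ixx=f"{mass * (y**2 + z**2) / 12:.6f}",
        iyy=f"{mass * (x**2 + z**2) / 12:.6f}",
        izz=f"{mass * (x**2 + y**2) / 12:.6f}",
        ixy="0", ixz="0", iyz="0",
    )
    return link

def sample_cabinet(seed=None):
    """Procedurally sample one cabinet with a hinged front door, as URDF."""
    rng = random.Random(seed)
    w = rng.uniform(0.4, 1.0)   # width  (m), hypothetical range
    d = rng.uniform(0.3, 0.6)   # depth  (m)
    h = rng.uniform(0.6, 1.8)   # height (m)
    robot = ET.Element("robot", name="procedural_cabinet")
    robot.append(box_link("body", (w, d, h), mass=10.0))
    # Door geometry is offset so the panel extends sideways from the hinge.
    robot.append(box_link("door", (w, 0.02, h), mass=2.0,
                          origin=(w / 2, 0.0, 0.0)))
    # Revolute hinge along one vertical front edge of the body.
    joint = ET.SubElement(robot, "joint", name="door_hinge", type="revolute")
    ET.SubElement(joint, "parent", link="body")
    ET.SubElement(joint, "child", link="door")
    ET.SubElement(joint, "origin",
                  xyz=f"{-w / 2:.3f} {-(d / 2 + 0.01):.3f} 0", rpy="0 0 0")
    ET.SubElement(joint, "axis", xyz="0 0 1")  # rotate about the vertical axis
    ET.SubElement(joint, "limit", lower="0", upper="1.571",
                  effort="10", velocity="1")
    return ET.tostring(robot, encoding="unicode")

if __name__ == "__main__":
    # Each seed yields a different articulated object with exact joint labels.
    print(sample_cabinet(seed=42))
```

A sketch like this highlights why procedural generation scales: sweeping seeds yields arbitrarily many structurally valid objects with exact joint annotations by construction, the property the abstract leverages when using synthetic data to train generative models.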
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Articulate AnyMesh: Open-Vocabulary 3D Articulated Objects Modeling (2025)
- Re3Sim: Generating High-Fidelity Simulation Data via 3D-Photorealistic Real-to-Sim for Robotic Manipulation (2025)
- 3D Human Interaction Generation: A Survey (2025)
- I2V3D: Controllable image-to-video generation with 3D guidance (2025)
- Articulate That Object Part (ATOP): 3D Part Articulation from Text and Motion Personalization (2025)
- MagicArticulate: Make Your 3D Models Articulation-Ready (2025)
- How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary Objects (2025)