---
license: apache-2.0
language:
  - en
base_model:
  - Wan-AI/Wan2.1-I2V-14B-480P
  - Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
pipeline_tag: image-to-video
tags:
  - text-to-image
  - lora
  - diffusers
  - template:diffusion-lora
  - image-to-video
widget:
  - text: >-
      The video starts with a portrait of a rabbit. There's a d15n3y Disney
      princess transformation, the rabbit is now wearing a blue dress. The
      camera pans out to reveal the background as a brown, classical style
      hallway. There are white butterflies that appear to be falling. The
      rabbit still wears the dress from the d15n3y Disney princess
      transformation.
    output:
      url: example_videos/rabbit_disney_princess.mp4
  - text: >-
      The video starts with a portrait of a man. There's a d15n3y Disney
      princess transformation, the man is now wearing a blue dress. The camera
      pans out to reveal the background as a brown, classical style hallway.
      There are white butterflies that appear to be falling. The man still
      wears the dress from the d15n3y Disney princess transformation.
    output:
      url: example_videos/man_disney_princess.mp4
---
This LoRA is trained on the Wan2.1 14B I2V 480p model and lets you turn any person or object in an image into a Disney princess version of themselves!
The key trigger phrase is: `d15n3y Disney princess transformation`
For best results, follow the structure of the prompt examples above; they worked well for me.
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
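If you prefer Diffusers over ComfyUI, a minimal sketch of applying the Safetensors LoRA with the Diffusers Wan 2.1 I2V pipeline might look like the following. The file paths and the `generate` helper are illustrative assumptions, not part of this repo; the prompt reuses the example structure above with the trigger phrase included.

```python
# Minimal sketch (assumption, not an official recipe): applying this LoRA
# via the Diffusers WanImageToVideoPipeline. Paths are placeholders.

TRIGGER = "d15n3y Disney princess transformation"
PROMPT = (
    "The video starts with a portrait of a man. "
    f"There's a {TRIGGER}, the man is now wearing a blue dress. "
    "The camera pans out to reveal the background as a brown, classical "
    "style hallway. There are white butterflies that appear to be falling. "
    f"The man still wears the dress from the {TRIGGER}."
)


def generate(image_path: str, lora_path: str, out_path: str = "out.mp4") -> None:
    """Run image-to-video generation with the LoRA applied (needs a large GPU)."""
    # Imports are deferred so the prompt constants above can be used
    # without torch/diffusers installed.
    import torch
    from diffusers import WanImageToVideoPipeline
    from diffusers.utils import export_to_video, load_image

    pipe = WanImageToVideoPipeline.from_pretrained(
        "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights(lora_path)  # the Safetensors file from Downloads
    pipe.to("cuda")

    image = load_image(image_path)
    frames = pipe(image=image, prompt=PROMPT, num_frames=81).frames[0]
    export_to_video(frames, out_path, fps=16)
```

Whichever frontend you use, keep the trigger phrase in the prompt, since it is what activates the transformation.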
Training was done using the diffusion-pipe training scripts.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!