---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
- Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- image-to-video
widget:
- text: >-
    Luffy looks intently off-camera and the camera zooms out as Luffy p5lls
    g4un pulls a gun and starts shooting.
  output:
    url: example_videos/luffy_gun.mp4
- text: >-
    Neymar looks intently off-camera in the rain, rain drops begin to fall and
    the camera zooms out as he p5lls g4un pulls a gun and starts shooting.
  output:
    url: example_videos/neymar_gun.mp4
- text: >-
    A rodent looks intently off-camera and the camera zooms out as the rodent
    p5lls g4un pulls a gun and starts shooting.
  output:
    url: example_videos/pika_gun.mp4
- text: >-
    A woman looks intently off-camera and the camera zooms out as she p5lls
    g4un pulls a gun and starts shooting.
  output:
    url: example_videos/woman_gun.mp4
---
This LoRA is trained on the Wan2.1 14B I2V 480p model and allows you to make any person/object in an image shoot a gun. The effect works on a wide variety of objects, from animals to people!
The key trigger phrase is: p5lls g4un pulls a gun and starts shooting.
For best results, use this prompt structure:

[object] looks intently off-camera and the camera zooms out as [object] p5lls g4un pulls a gun and starts shooting.

Simply replace [object] with whatever you want to see pull a gun out and start shooting!
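If you are generating prompts programmatically, a tiny sketch like the one below fills in that template. The helper name and pronoun argument are illustrative only, not part of this repository:

```python
# Minimal sketch: a hypothetical helper that fills the prompt template above.
def build_prompt(subject: str, pronoun: str) -> str:
    """Insert the subject into the recommended prompt structure."""
    return (
        f"{subject} looks intently off-camera and the camera zooms out as "
        f"{pronoun} p5lls g4un pulls a gun and starts shooting."
    )

# Matches the rodent example video above.
print(build_prompt("A rodent", "the rodent"))
```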
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
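If you prefer to run the LoRA outside ComfyUI, the sketch below shows one possible way to load it with the Diffusers base model listed above. It assumes a recent diffusers release with Wan 2.1 image-to-video support; the LoRA repo ID and weight filename are placeholders, so substitute this repository's safetensors file:

```python
# Minimal sketch, assuming a diffusers version that ships Wan 2.1 I2V support.
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
)
# Placeholders: point these at this repository and its safetensors weight file.
pipe.load_lora_weights("<this-repo-id>", weight_name="<lora-file>.safetensors")
pipe.to("cuda")

image = load_image("input.jpg")  # the person/object that should pull the gun
prompt = (
    "A woman looks intently off-camera and the camera zooms out as she "
    "p5lls g4un pulls a gun and starts shooting."
)

# 480p-class output, a few seconds of video at 16 fps.
frames = pipe(
    image=image,
    prompt=prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(frames, "gun_lora_output.mp4", fps=16)
```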
Training was done using the Diffusion Pipe training toolkit.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!