Diffusers documentation

Hybrid Inference


Empowering local AI builders with Hybrid Inference

Hybrid Inference is an experimental feature, and feedback is welcome.

Why use Hybrid Inference?

Hybrid Inference offers a fast and simple way to offload demanding parts of local generation to remote endpoints.

  • 🚀 Reduced Requirements: Access powerful models without expensive hardware.
  • 💎 Without Compromise: Achieve the highest quality without sacrificing performance.
  • 💰 Cost Effective: It’s free! 🤑
  • 🎯 Diverse Use Cases: Fully compatible with Diffusers 🧨 and the wider community.
  • 🔧 Developer-Friendly: Simple requests, fast responses.

Available Models

  • VAE Decode 🖼️: Quickly decode latent representations into high-quality images without compromising performance or workflow speed.
  • VAE Encode 🔢: Efficiently encode images into latent representations for generation and training.
  • Text Encoders 📃 (coming soon): Compute text embeddings for your prompts quickly and accurately, ensuring a smooth and high-quality workflow.
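The offload pattern behind VAE Decode can be sketched as follows. This is a minimal local illustration only: `serialize_latents` and `fake_vae_decode_endpoint` are hypothetical stand-ins for the remote service, and the payload format and output shapes are illustrative assumptions, not the actual Hybrid Inference wire format.

```python
import io
import numpy as np

def serialize_latents(latents: np.ndarray) -> bytes:
    # Pack latents into a byte payload to send to the remote endpoint
    # (illustrative; the real service defines its own wire format).
    buf = io.BytesIO()
    np.save(buf, latents)
    return buf.getvalue()

def fake_vae_decode_endpoint(payload: bytes) -> np.ndarray:
    # Local stand-in for a remote VAE decode service: deserialize the
    # latents and return an image tensor with the shape a typical VAE
    # would produce (8x spatial upsampling, 3 RGB channels).
    latents = np.load(io.BytesIO(payload))
    b, c, h, w = latents.shape
    return np.zeros((b, 3, h * 8, w * 8), dtype=np.float32)

# A pipeline would produce latents locally, then offload only decoding.
latents = np.random.randn(1, 4, 64, 64).astype(np.float32)
image = fake_vae_decode_endpoint(serialize_latents(latents))
print(image.shape)  # (1, 3, 512, 512)
```

The point of the design is that the heavy decode step never touches local memory: only the small latent tensor leaves the machine, and only the finished image comes back.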

Integrations

Changelog

  • March 10 2025: Added VAE encode
  • March 2 2025: Initial release with VAE decoding

Contents

The documentation is organized into three sections:

  • VAE Decode: Learn the basics of how to use VAE Decode with Hybrid Inference.
  • VAE Encode: Learn the basics of how to use VAE Encode with Hybrid Inference.
  • API Reference: Dive into task-specific settings and parameters.