torch.compile

torch.compile compiles PyTorch code into optimized kernels that significantly speed up inference. This feature relies on TorchDynamo to compile the code into graphs and TorchInductor to further compile the graphs into optimized kernels. It is a powerful optimization tool, and in many cases, only requires adding a single line of code.

Wrap a model with torch.compile to compile and return an optimized model.

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
compiled_model = torch.compile(model)

The first call to the compiled model is slow because the model needs to be compiled first. Subsequent calls are much faster because the compiled model is reused instead of recompiled.
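
For example, timing a few forward passes makes the warm-up cost visible. This is a minimal sketch; the prompt and timing loop are illustrative assumptions, not an official benchmark.

import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
compiled_model = torch.compile(model)

# illustrative prompt; any input works
inputs = tokenizer("torch.compile warms up on the first call", return_tensors="pt").to(model.device)

for step in range(3):
    start = time.perf_counter()
    with torch.no_grad():
        compiled_model(**inputs)  # the first call triggers compilation, later calls reuse the compiled graph
    print(f"call {step}: {time.perf_counter() - start:.2f}s")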

There are several parameters to customize the compilation process. Two of the more important ones are listed below. For a full list of parameters, refer to the torch.compile documentation.
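
Beyond mode and fullgraph, other torch.compile parameters include backend, which selects the compiler backend (TorchInductor by default), and dynamic, which controls dynamic shape support. A minimal sketch, assuming the same Gemma checkpoint as above:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")

# "inductor" is the default backend; dynamic=True asks TorchDynamo to trace
# kernels that can handle varying input shapes instead of recompiling per shape
compiled_model = torch.compile(model, backend="inductor", dynamic=True)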

Modes

The mode parameter offers several performance options for compiling. Try different modes to see which one works best for your use case.

  • default is a balanced option between speed and memory.
  • reduce-overhead reduces Python overhead at the cost of a little more memory, and can be faster.
  • max-autotune offers the fastest speed, but compilation takes longer.

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
compiled_model = torch.compile(model, mode="reduce-overhead")

Fullgraph

The fullgraph parameter attempts to compile the entire model into a single graph to maximize performance. torch.compile raises an error if it encounters a graph break, which means the model can't be compiled into a single graph.

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
compiled_model = torch.compile(model, mode="reduce-overhead", fullgraph=True)

Benchmarks

Refer to the table below for performance benchmarks comparing the mean inference time in milliseconds, with torch.compile enabled and disabled, across various GPUs and batch sizes on the same image for different vision tasks.

Select Subset in the table below to switch between different GPUs. The benchmarks were run on PyTorch nightly 2.1.0dev with torch.compile in reduce-overhead mode.
