
FBGEMM

FBGEMM (Facebook GEneral Matrix Multiplication) is a low-precision matrix multiplication library optimized for small batch sizes, with support for techniques that minimize accuracy loss, such as row-wise quantization and outlier-aware quantization. With FBGEMM, a model's weights are quantized to 8-bits per channel and its activations to 8-bits per token (also known as fp8 or w8a8).

You need a GPU with compute capability 9+, such as an H100.
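
If you're unsure what your GPU supports, a quick check with PyTorch (a small sketch, not part of the quantization workflow itself):

import torch

# (major, minor) compute capability of the current device; an H100 reports (9, 0)
print(torch.cuda.get_device_capability())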

Install the FBGEMM GPU package, along with Accelerate and PyTorch, with the command below to ensure you have the latest versions.

pip install --upgrade accelerate fbgemm-gpu torch

If you’re having installation issues, try installing the nightly release.
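
For example, the nightly builds are typically published on PyPI as a separate package (assuming fbgemm-gpu-nightly is the correct package name for your CUDA setup; verify it against the FBGEMM release notes):

pip install fbgemm-gpu-nightly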

Create an FbgemmFp8Config and pass it to from_pretrained() to quantize a model to fp8.

from transformers import FbgemmFp8Config, AutoModelForCausalLM

quantization_config = FbgemmFp8Config()
quantized_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    torch_dtype="auto",
    device_map="auto",
    quantization_config=quantization_config
)
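
Once loaded, the quantized model can be used like any other Transformers model. Below is a minimal sketch that runs generation with it; the prompt and generation settings are illustrative.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
inputs = tokenizer("FBGEMM quantizes weights to fp8", return_tensors="pt").to(quantized_model.device)
outputs = quantized_model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))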

Use save_pretrained() to save a quantized model and from_pretrained() to reload it.

quant_path = "/path/to/save/quantized/model"
quantized_model.save_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")

Resources

Read the Open-sourcing FBGEMM for state-of-the-art server-side inference blog post for more details on FBGEMM.
