
GGUF

GGUF is a file format used to store models for inference with GGML, a fast and lightweight inference framework written in C and C++. GGUF is a single-file format containing the model metadata and tensors.

The GGUF format also supports many quantized data types (refer to the quantization type table for a complete list of supported quantization types) which save a significant amount of memory, making inference with large models like Whisper and Llama feasible on local and edge devices.

Transformers supports loading models stored in the GGUF format for further training or finetuning. The GGUF checkpoint is dequantized to fp32, so the full model weights are available and compatible with PyTorch.

Models that support GGUF include Llama, Mistral, Qwen2, Qwen2Moe, Phi3, Bloom, Falcon, StableLM, GPT2, Starcoder2, and more.

Add the gguf_file parameter to from_pretrained() to specify the GGUF file to load.

# pip install gguf
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
filename = "tinyllama-1.1b-chat-v1.0.Q6_K.gguf"

torch_dtype = torch.float32  # could be torch.float16 or torch.bfloat16 too
tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename, torch_dtype=torch_dtype)
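
Once loaded, the weights are regular full-precision PyTorch tensors, so the model can be used as usual. A minimal sketch of a quick generation check (the prompt string is only an illustration):

inputs = tokenizer("The secret to a good day is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))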

Once you’re done tinkering with the model, save and convert it back to the GGUF format with the convert-hf-to-gguf.py script.

tokenizer.save_pretrained("directory")
model.save_pretrained("directory")

!python ${path_to_llama_cpp}/convert-hf-to-gguf.py ${directory}
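
If you want to verify the round trip, the converted file can be reloaded through the same gguf_file workflow. A minimal sketch, assuming the converter wrote a file named model-f16.gguf into the directory (the actual output filename depends on the conversion options, so check the script's log):

# "model-f16.gguf" is a hypothetical output name; use the filename reported by the conversion script
reloaded = AutoModelForCausalLM.from_pretrained("directory", gguf_file="model-f16.gguf")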