Bitsandbytes 4-bit (NF4) quantization of https://huggingface.co/Qwen/QwQ-32B.

See https://huggingface.co/blog/4bit-transformers-bitsandbytes for instructions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Define the 4-bit NF4 configuration: double quantization also quantizes the
# quantization constants, and bfloat16 is used as the compute dtype.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the pre-trained model with the 4-bit quantization configuration;
# device_map="auto" lets accelerate place the shards on the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/QwQ-32B",
    quantization_config=nf4_config,
    device_map="auto",
)

# Load the tokenizer associated with the model
tokenizer = AutoTokenizer.from_pretrained("Qwen/QwQ-32B")

# Push the quantized model and tokenizer to the HF中国镜像站 Hub
# (token=True reads the stored login token; use_auth_token is deprecated).
model.push_to_hub("onekq-ai/QwQ-32B-bnb-4bit", token=True)
tokenizer.push_to_hub("onekq-ai/QwQ-32B-bnb-4bit", token=True)
```
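Since the quantization config is serialized into the pushed checkpoint, `from_pretrained` re-applies NF4 loading without an explicit `BitsAndBytesConfig`. Below is a minimal inference sketch; the prompt and generation settings are illustrative, not part of the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the published 4-bit checkpoint; the stored quantization config
# is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(
    "onekq-ai/QwQ-32B-bnb-4bit", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("onekq-ai/QwQ-32B-bnb-4bit")

# Illustrative prompt; QwQ is a chat/reasoning model, so use the chat template.
messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```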
Safetensors model size: 17.7B params, with tensor types F32, FP16, and U8. The count is lower than the base model's ~32B parameters because bitsandbytes packs two 4-bit weights into each uint8 element; the F32 and FP16 tensors hold quantization constants and the modules kept in higher precision.
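One way to see where the 17.7B figure and the mixed tensor types come from is to tally stored parameter elements by dtype after loading. This is an illustrative sketch, not part of the original card:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "onekq-ai/QwQ-32B-bnb-4bit", device_map="auto"
)

# Tally parameter elements by dtype: the packed 4-bit weights show up as
# uint8 (two weights per element), alongside fp16/fp32 tensors.
counts: dict[str, int] = {}
for name, p in model.named_parameters():
    counts[str(p.dtype)] = counts.get(str(p.dtype), 0) + p.numel()
print(counts)

# Rough on-device footprint of the quantized model, in GB.
print(f"{model.get_memory_footprint() / 1e9:.1f} GB")
```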

Model tree for onekq-ai/QwQ-32B-bnb-4bit:
- Base model: Qwen/Qwen2.5-32B
- Fine-tuned: Qwen/QwQ-32B
- Quantized: this model