Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ quantized_by: Thireus
 
 # WizardLM 70B V1.0 – EXL2
 - Model creator: [WizardLM](https://huggingface.co/WizardLM)
-- FP32 Original model used for quantization: [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)
+- FP32 Original model used for quantization: [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) – float32
 - FP16 Model used for quantization: [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) – float16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)
 - BF16 Model used for quantization: [WizardLM 70B V1.0-BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) – bfloat16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)
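The BF16 entry above is a bfloat16 re-save of the FP32 original. The card does not include the conversion commands, so the following is a minimal sketch of how such a copy could be produced with the standard `transformers` API; the output path is illustrative, and the actual procedure used for WizardLM-70B-V1.0-BF16 may differ.

```python
# Minimal sketch (not from the model card): casting the FP32 original to
# bfloat16 and re-saving it, assuming the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

src = "WizardLM/WizardLM-70B-V1.0"   # FP32 original on the Hub
dst = "./WizardLM-70B-V1.0-BF16"     # local output directory (hypothetical path)

# Load the FP32 checkpoint, casting weights to bfloat16 on load.
model = AutoModelForCausalLM.from_pretrained(src, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(src)

# Save the bfloat16 checkpoint alongside the tokenizer files.
model.save_pretrained(dst)
tokenizer.save_pretrained(dst)
```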