Upload README.md
README.md (CHANGED)
```diff
@@ -55,6 +55,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/phi-2-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/phi-2-GGUF)
 * [Microsoft's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/microsoft/phi-2)
 <!-- repositories-available end -->
 
```
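The newly added GGUF repository is the CPU-friendly option in this list. As a minimal sketch of fetching a single quant file from it with `huggingface_hub` (the exact filename is an assumption based on TheBloke's usual naming, not something this diff states):

```python
# Sketch: download one GGUF quant for llama.cpp-style CPU+GPU inference.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/phi-2-GGUF",
    # Assumed filename; check the repo's file list for the actual quant names.
    filename="phi-2.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded file
```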
```diff
@@ -110,12 +111,12 @@ Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with T
 
 | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
-| main | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.84 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
-| gptq-4bit-32g-actorder_True | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.98 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
-| gptq-8bit--1g-actorder_True | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.05 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
-| gptq-8bit-128g-actorder_True | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
-| gptq-8bit-32g-actorder_True | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.28 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
-| gptq-4bit-64g-actorder_True | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.89 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
+| [main](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.84 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.98 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
+| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.05 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
+| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
+| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.28 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
+| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/phi-2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 1.89 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
 
 <!-- README_GPTQ.md-provided-files end -->
 
```
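This hunk turns each Branch entry into a link to its branch on the Hub; the same branch names also work as git revisions when loading the model. A minimal sketch of loading one of these branches with `transformers` (assuming the optimum/auto-gptq GPTQ integration is installed; this snippet is illustrative, not taken from the README itself):

```python
# Sketch: load a specific GPTQ branch from the table via the revision argument.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/phi-2-GPTQ"
branch = "gptq-4bit-32g-actorder_True"  # any Branch value from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=branch)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=branch,         # selects the quantisation variant
    device_map="auto",       # place the quantised weights on available GPU(s)
    trust_remote_code=True,  # phi-2 shipped custom modelling code at release
)

prompt = "Instruct: Explain GPTQ group size in one sentence.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```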