Update README.md
README.md CHANGED
@@ -24,17 +24,7 @@ Censorship level: <b>Medium</b>
 Merge 2: LLAMA-3_8B_Unaligned_Alpha was SLERP merged with TheDrummer/Llama-3SOME-8B-v2.
 Soup 1: Merge 1 was combined with Merge 2.
 Final Merge: Soup 1 was SLERP merged with Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4.
-
-
 
-The final model is surprisingly coherent (although slightly more censored), which is a bit unexpected, since all the intermediate merge steps were pretty incoherent.
-
-## LLAMA-3_8B_Unaligned_Alpha_RP_Soup is available at the following quantizations:
-
-- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup)
-- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_GGUF)
-- EXL2: [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_8.0bpw)
-
 <details>
 <summary>Mergekit configs:</summary>
 
@@ -121,6 +111,16 @@ dtype: float16
 
 ```
 </details>
+
+The final model is surprisingly coherent (although slightly more censored), which is a bit unexpected, since all the intermediate merge steps were pretty incoherent.
+
+## LLAMA-3_8B_Unaligned_Alpha_RP_Soup is available at the following quantizations:
+
+- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup)
+- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_GGUF)
+- EXL2: [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_8.0bpw)
+
+
 
 # Model instruction template: (Can use either ChatML or Llama-3)
 # ChatML
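For readers unfamiliar with the SLERP merges mentioned in the steps above: spherical linear interpolation blends two models' weight tensors along the arc between them on a hypersphere, rather than along a straight line as plain averaging does. The following is a minimal NumPy sketch of that idea only; it is a generic illustration, not the repo's actual mergekit code, and the function name, flattening, and epsilon handling are assumptions for the example.

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between tensors a and b at fraction t in [0, 1]."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the two weight vectors, measured on the unit hypersphere.
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    # Standard SLERP weights: sin((1-t)θ)/sin(θ) and sin(tθ)/sin(θ).
    sin_theta = np.sin(theta)
    w_a = np.sin((1.0 - t) * theta) / sin_theta
    w_b = np.sin(t * theta) / sin_theta
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape)
```

At t=0 the result is the first model's tensor, at t=1 the second's, and intermediate values follow the arc; mergekit applies this per-tensor with per-layer interpolation schedules defined in its YAML configs.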