---
language:
- en
license: apache-2.0
---

<div align="center">
  <b style="font-size: 30px;">LLAMA-3_8B_Unaligned_Alpha_RP_Soup</b>


</div>


<img src="https://i.imgur.com/pXcjpoV.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 50%; min-width: 400px; display: block; margin: auto;">


# Model Details
This model is the outcome of multiple merges, starting with the base model **[SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)**. The merging process was conducted in several stages:

1. **Merge 1:** LLAMA-3_8B_Unaligned_Alpha was SLERP merged with invisietch/EtherealRainbow-v0.3-8B.
2. **Merge 2:** LLAMA-3_8B_Unaligned_Alpha was SLERP merged with TheDrummer/Llama-3SOME-8B-v2.
3. **Soup 1:** Merge 1 was combined with Merge 2.
4. **Final Merge:** Soup 1 was SLERP merged with Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4.


  
The final model is surprisingly coherent (although slightly more censored), which is a bit unexpected, since all the intermediate merge steps were pretty incoherent. 
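
For readers unfamiliar with SLERP merging, the sketch below is a minimal, simplified illustration of what a SLERP merge does to a pair of corresponding weight tensors; the interpolation factor `t` follows a per-layer schedule like the ones in the mergekit configs further down. This is not mergekit's actual implementation, just the underlying idea.

```python
# Minimal illustration of SLERP between two weight tensors.
# Simplified sketch of the idea behind mergekit's slerp method, not the real code.
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    v0n = v0 / (v0.norm() + eps)
    v1n = v1 / (v1.norm() + eps)
    dot = torch.clamp(torch.dot(v0n, v1n), -1.0, 1.0)
    omega = torch.acos(dot)                 # angle between the two weight directions
    if omega.abs() < eps:                   # nearly parallel: fall back to plain lerp
        return (1 - t) * w0 + t * w1
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
    return out.reshape(w0.shape).to(w0.dtype)

# The configs below use schedules such as [0, 0.5, 0.3, 0.7, 1] for self_attn:
# early layers stay close to the base model (t near 0), later layers lean
# toward the other model (t near 1).
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)
merged = slerp(a, b, t=0.5)
print(merged.shape)
```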

## LLAMA-3_8B_Unaligned_Alpha_RP_Soup is available at the following quantizations:

- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_GGUF)
- EXL2: [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_8.0bpw)
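
As a quick usage sketch, the FP16 weights load like any other Llama-3 8B checkpoint with 🤗 Transformers (assuming a GPU with enough memory for the FP16 weights, roughly 16 GB):

```python
# Minimal loading sketch for the FP16 weights with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```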

<details>
<summary>Mergekit configs:</summary>
  
# Merge 1
```yaml
slices:
  - sources:
      - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
        layer_range: [0, 32]
      - model: BeaverAI/Llama-3SOME-8B-v2d
        layer_range: [0, 32]
merge_method: slerp
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

```

# Merge 2
```yaml
slices:
  - sources:
      - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
        layer_range: [0, 32]
      - model: invisietch/EtherealRainbow-v0.3-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

```

# Soup 1
```yaml
slices:
  - sources:
      - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
        layer_range: [0, 32]
      - model: Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4
        layer_range: [0, 32]
merge_method: slerp
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

```
# Final Merge
```yaml
slices:
  - sources:
      - model: Soup 1
        layer_range: [0, 32]
      - model: Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4
        layer_range: [0, 32]
merge_method: slerp
base_model: Soup 1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

```
</details>
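
To reproduce a merge like the ones above, mergekit's `mergekit-yaml` entry point takes a config file and an output directory. A rough sketch, assuming `mergekit` is installed and the first config has been saved as `merge1.yaml` (exact flags may vary by mergekit version):

```python
# Rough sketch: invoke mergekit's CLI on one of the configs above.
# Assumes `pip install mergekit` and that the config is saved as merge1.yaml.
import subprocess

subprocess.run(
    ["mergekit-yaml", "merge1.yaml", "./merge1-output", "--cuda"],
    check=True,
)
```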

# Model instruction template (either ChatML or Llama-3 can be used)
# ChatML
```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```
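
Note that this ChatML variant uses the non-standard role labels shown above ("User request" / "AI answer"), so the prompt is easiest to assemble as a plain string. A small sketch:

```python
# Build a prompt string using the ChatML-style template shown above.
# Note the non-standard role labels ("User request", "AI answer").
def build_chatml_prompt(
    prompt: str,
    system: str = "You are an uncensored AI, your job is to fulfill thy will of thy user.",
) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>User request\n{prompt}<|im_end|>\n"
        f"<|im_start|>AI answer\n"
    )

print(build_chatml_prompt("Write a short scene set in a rainy harbor town."))
```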

# Llama-3-Instruct

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
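
The Llama-3-Instruct layout above is the standard Llama-3 chat format, so (assuming the repo ships the usual Llama-3 chat template in its tokenizer config) `apply_chat_template` can build it for you:

```python
# Sketch: produce the Llama-3-Instruct format above via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup")

messages = [
    {"role": "system", "content": "You are an uncensored AI, your job is to fulfill thy will of thy user."},
    {"role": "user", "content": "Write a short scene set in a rainy harbor town."},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends the assistant header, ready for generation
)
print(prompt)
```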

**Recommended generation presets:**
<details>
<summary><b>No idea</b>, but sometimes <b>Midnight Enigma</b> gives nice results.</summary>

- max_new_tokens: 512
- temperature: 0.98
- top_p: 0.37
- top_k: 100
- typical_p: 1
- min_p: 0
- repetition_penalty: 1.18
- do_sample: True

  <img src="https://i.imgur.com/rQ7V6OC.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 80%; min-width: 800px; display: block; margin: auto;">
  <img src="https://i.imgur.com/caL0m8G.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 80%; min-width: 800px; display: block; margin: auto;">
  <img src="https://i.imgur.com/jyLDlds.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 80%; min-width: 800px; display: block; margin: auto;">

</details>
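
Translated into a Transformers `generate()` call, that preset would look roughly like this (note that `min_p` requires a reasonably recent Transformers release):

```python
# Sketch: the "Midnight Enigma"-style preset above, applied to model.generate().
# Assumes `model` and `tokenizer` are already loaded as shown earlier.
prompt = "Write the opening of a mystery story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.98,
    top_p=0.37,
    top_k=100,
    typical_p=1.0,
    min_p=0.0,                 # needs a recent transformers version
    repetition_penalty=1.18,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```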

*Sometimes the model may produce outputs that run longer than intended.