Update README.md
README.md
@@ -9,7 +9,7 @@ tags:
 - SanjiWatsuki/Kunoichi-DPO-v2-7B
 - maywell/PiVoT-0.1-Evil-a
 - mlabonne/ArchBeagle-7B
--
+- LakoMoor/Silicon-Alice-7B
 license: apache-2.0
 language:
 - en
@@ -21,7 +21,7 @@ This is a merge of pre-trained language models created using [mergekit](https://
 ## Merge Details
 ### Merge Method

-This model was merged using the DARE TIES to merge Kunoichi with PiVoT Evil and to merge ArchBeagle with
+This model was merged using DARE TIES to combine Kunoichi with PiVoT Evil and ArchBeagle with Silicon Alice; the two resulting models were then merged with the gradient SLERP method.

 ### Models Merged

@@ -29,7 +29,7 @@ The following models were included in the merge:
 * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
 * [maywell/PiVoT-0.1-Evil-a](https://huggingface.co/maywell/PiVoT-0.1-Evil-a)
 * [mlabonne/ArchBeagle-7B](https://huggingface.co/mlabonne/ArchBeagle-7B)
-* [
+* [LakoMoor/Silicon-Alice-7B](https://huggingface.co/LakoMoor/Silicon-Alice-7B)

 ### Configuration
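The `### Configuration` section is cut off in this diff, so the actual mergekit YAML is not shown. As a rough, hypothetical sketch only: a gradient SLERP merge of the two DARE TIES intermediates could be expressed like the config below. The intermediate model paths, `layer_range`, and `t` gradient values here are invented placeholders for illustration, not taken from this commit.

```yaml
# Hypothetical mergekit config: gradient SLERP of two DARE TIES intermediates.
# All paths, layer ranges, and t values are illustrative placeholders.
slices:
  - sources:
      - model: ./kunoichi-pivot-dare-ties     # placeholder: Kunoichi + PiVoT Evil intermediate
        layer_range: [0, 32]
      - model: ./archbeagle-alice-dare-ties   # placeholder: ArchBeagle + Silicon Alice intermediate
        layer_range: [0, 32]
merge_method: slerp
base_model: ./kunoichi-pivot-dare-ties
parameters:
  t:                                # per-layer interpolation gradient
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5                    # default for all remaining tensors
dtype: bfloat16
```

The `t` gradients let attention and MLP layers interpolate differently across depth, which is the usual reason to pick gradient SLERP over a flat blend.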