Inv committed on
Commit b52c4af · verified · 1 Parent(s): 65998d5

Update README.md

Files changed (1)
  1. README.md +7 -5
README.md CHANGED

````diff
@@ -1,5 +1,5 @@
 ---
-base_model:
+base_model:
 - mistralai/Mistral-7B-v0.1
 library_name: transformers
 tags:
@@ -10,7 +10,9 @@ tags:
 - maywell/PiVoT-0.1-Evil-a
 - mlabonne/ArchBeagle-7B
 - NeverSleep/Noromaid-7B-0.4-DPO
-
+license: apache-2.0
+language:
+- en
 ---
 # konstanta-final
 
@@ -19,7 +21,7 @@ This is a merge of pre-trained language models created using [mergekit](https://
 ## Merge Details
 ### Merge Method
 
-This model was merged using the SLERP merge method.
+This model was created by using DARE TIES to merge Kunoichi with PiVoT Evil and ArchBeagle with Noromaid 0.4 DPO, and then merging the two resulting models with the gradient SLERP merge method.
 
 ### Models Merged
 
@@ -87,9 +89,9 @@ parameters:
 - filter: self_attn
   value: [0, 0.5, 0.3, 0.7, 1]
 - filter: mlp
-  value: [1, 0.5, 0.7, 0.3, 0]
+  value: [1, 0.5, 0.7, 0.3, 0]
 - value: 0.5
 int8_mask: true
 normalize: true
 dtype: float16
-```
+```
````
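The gradient SLERP method mentioned in the diff interpolates between two models' weights on the hypersphere, with the interpolation factor `t` swept across the anchor values from the config (e.g. `[0, 0.5, 0.3, 0.7, 1]` for `self_attn`) as depth increases. A minimal NumPy sketch of the idea, not mergekit's actual implementation (function names here are illustrative):

```python
import numpy as np

def gradient_t(anchors, num_layers):
    """Linearly interpolate the config's anchor values across the layers."""
    xs = np.linspace(0, len(anchors) - 1, num_layers)
    return np.interp(xs, np.arange(len(anchors)), anchors)

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return np.sin((1 - t) * omega) / so * a + np.sin(t * omega) / so * b

# Per-layer t values for the self_attn filter on a 32-layer Mistral model:
ts = gradient_t([0, 0.5, 0.3, 0.7, 1], num_layers=32)
```

With this scheme `t = 0` keeps the first model's weights and `t = 1` the second's, so the `[0, ..., 1]` gradient blends from one parent at the bottom layers to the other at the top.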
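The DARE TIES step used for the two intermediate merges combines two ideas: DARE randomly drops a fraction of each model's delta (fine-tuned minus base weights) and rescales the survivors, and TIES trims small deltas, elects a majority sign per parameter, and averages only the agreeing values. A toy sketch of both under those definitions, not mergekit's code:

```python
import numpy as np

def dare(delta, p, rng):
    """DARE: drop a fraction p of delta parameters at random, rescale the rest."""
    mask = rng.random(delta.shape) >= p
    return delta * mask / (1.0 - p)

def ties_merge(base, deltas, density=0.5):
    """TIES: trim small deltas, elect a sign per parameter, merge agreeing values."""
    trimmed = []
    for d in deltas:
        k = int(density * d.size)  # keep only the k largest-magnitude deltas
        thresh = np.sort(np.abs(d).ravel())[-k] if k > 0 else np.inf
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)
    sign = np.sign(stacked.sum(axis=0))                  # elected sign
    agree = (np.sign(stacked) == sign) & (stacked != 0)  # values matching it
    counts = np.maximum(agree.sum(axis=0), 1)
    merged = np.where(agree, stacked, 0.0).sum(axis=0) / counts
    return base + merged
```

In mergekit the equivalent behavior is driven by config keys such as `density` and `weight` on a `dare_ties` merge; the YAML shown in the diff only covers the final SLERP stage.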