# Qwen2.5 Merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
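A merge like this can be reproduced with mergekit's Python API. The sketch below follows the usage pattern from mergekit's README and assumes the YAML configuration shown later in this card has been saved as `merge-config.yaml`; the file name and output directory are placeholders.

```python
# Minimal sketch of running the merge via mergekit's Python API.
# "merge-config.yaml" and "./Qwen2.5-Merged" are placeholder paths.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Qwen2.5-Merged",  # output directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the merge if available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```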
| Metric                | Value |
|-----------------------|-------|
| GSM8k (zero-shot)     | 90.75 |
| HellaSwag (zero-shot) | 80.77 |
| MBPP (zero-shot)      | 63.08 |
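The card does not state how these scores were produced. As a hedged sketch, comparable zero-shot GSM8k and HellaSwag numbers can be obtained with EleutherAI's lm-evaluation-harness; the model path below is a placeholder, and MBPP is omitted because it typically requires a code-execution harness such as bigcode-evaluation-harness.

```python
# Hypothetical evaluation run; "path/to/Qwen2.5-Merged" is a placeholder.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/Qwen2.5-Merged,dtype=bfloat16",
    tasks=["gsm8k", "hellaswag"],
    num_fewshot=0,   # zero-shot, matching the table above
    batch_size=8,
)
print(results["results"])
```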
This model was merged using the Linear DARE (`dare_linear`) merge method, with Qwen/Qwen2.5-7B as the base model.
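For intuition: DARE takes each fine-tuned model's delta from the base, randomly drops entries with probability `1 - density`, and rescales the survivors by `1 / density`; the linear variant then combines the sparsified deltas as a weighted sum. The NumPy sketch below illustrates this for a single weight tensor. It is a rough sketch whose parameter names mirror the configuration below, not mergekit's actual internals.

```python
import numpy as np

rng = np.random.default_rng(0)

def dare_linear(base, tuned, densities, weights, lam=1.0, normalize=True):
    """Rough sketch of a DARE-linear merge for one weight tensor.

    base:      base model tensor (here, from Qwen/Qwen2.5-7B)
    tuned:     list of fine-tuned tensors (Math, Instruct, ...)
    densities: per-model keep probability for delta entries
    weights:   per-model linear mixing weights
    lam:       global scale on the combined delta ("lambda" in the YAML)
    """
    if normalize:  # normalize: 1.0 in the YAML -> weights sum to 1
        total = sum(weights)
        weights = [w / total for w in weights]
    merged_delta = np.zeros_like(base)
    for t, density, weight in zip(tuned, densities, weights):
        delta = t - base                          # task vector
        mask = rng.random(delta.shape) < density  # Drop: keep ~density of entries
        merged_delta += weight * np.where(mask, delta / density, 0.0)  # And REscale
    return base + lam * merged_delta
```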
The following models were included in the merge:

* Qwen/Qwen2.5-Math-7B
* Qwen/Qwen2.5-7B-Instruct
The following YAML configuration was used to produce this model:
```yaml
base_model: Qwen/Qwen2.5-7B
dtype: bfloat16
merge_method: dare_linear
parameters:
  lambda: 0.7484721287441042
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-7B
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-Math-7B
    parameters:
      density: 0.8456557088847347
      weight: 0.11064925820848412
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      density: 0.5247829319933462
      weight: 0.6901952279079901
```
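A minimal inference sketch with Hugging Face Transformers follows; the repository id is a placeholder for wherever the merged weights are published.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Qwen2.5-Merged"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```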