This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
| Metric | Value |
|---|---|
| GSM8k (zero-shot) | 90.06 |
| HellaSwag (zero-shot) | 82.77 |
| MBPP (zero-shot) | 62.21 |
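The card does not state which harness produced these numbers. For reference, GSM8k and HellaSwag zero-shot scores can be obtained with EleutherAI's lm-evaluation-harness (MBPP is usually run through a code-generation harness such as bigcode-evaluation-harness instead). A minimal sketch, with the model path as a placeholder and no claim that this matches the original evaluation setup:

```python
import lm_eval

# Placeholder local path to the merged model; dtype matches the merge config.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=./merged-qwen2.5-7b,dtype=bfloat16",
    tasks=["gsm8k", "hellaswag"],
    num_fewshot=0,  # force zero-shot to match the table above
)
print(results["results"])
```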
This model was merged using the Model Breadcrumbs merge method, with Qwen/Qwen2.5-7B as the base.
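Roughly, Model Breadcrumbs sparsifies each task vector (fine-tuned weights minus base weights) by dropping both the largest-magnitude entries (the top `gamma` fraction, treated as outliers) and enough of the smallest entries that only a `density` fraction survives; the weighted sum of the masked task vectors, scaled by `lambda`, is then added back to the base. A minimal per-tensor sketch of this idea, not mergekit's actual implementation (function names are illustrative):

```python
import torch

def breadcrumbs_mask(tau: torch.Tensor, density: float, gamma: float) -> torch.Tensor:
    """Keep the 'middle' of the task vector's magnitude distribution:
    drop the top `gamma` fraction (outliers), then keep the next
    `density` fraction and zero out the small remainder."""
    flat = tau.abs().flatten()
    n = flat.numel()
    k_top = int(gamma * n)    # largest-magnitude entries to drop
    k_keep = int(density * n) # entries to retain below the outliers
    idx = torch.argsort(flat, descending=True)
    mask = torch.zeros(n, dtype=torch.bool, device=flat.device)
    mask[idx[k_top:k_top + k_keep]] = True
    return tau * mask.view_as(tau)

def breadcrumbs_merge(base, finetuned, weights, densities, gammas, lam, normalize=True):
    """merged = base + lambda * sum_i w_i * mask(finetuned_i - base)."""
    if normalize:  # mirrors the config's `normalize: 1.0`: weights sum to 1
        total = sum(weights)
        weights = [w / total for w in weights]
    delta = torch.zeros_like(base)
    for ft, w, d, g in zip(finetuned, weights, densities, gammas):
        delta += w * breadcrumbs_mask(ft - base, d, g)
    return base + lam * delta
```

Read against the config below, the Instruct deltas dominate (weight ≈ 0.76, density ≈ 0.82), while the Math and Coder models contribute sparser, lighter corrections.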
The following models were included in the merge:

* [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
* [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B)
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
The following YAML configuration was used to produce this model:
```yaml
base_model: Qwen/Qwen2.5-7B
dtype: bfloat16
merge_method: breadcrumbs
parameters:
  lambda: 0.9075603207928135
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-7B
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-Math-7B
    parameters:
      density: 0.11722197443445775
      gamma: 0.07547691839721048
      weight: 0.17267293536872041
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-Coder-7B
    parameters:
      density: 0.48352747334554935
      gamma: 0.0753405327865558
      weight: 0.11164770709858211
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      density: 0.8190520808683315
      gamma: 0.022307694128235696
      weight: 0.7626295102691242
```
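To reproduce the merge, save the configuration above as `config.yaml` and run it through mergekit, either with the `mergekit-yaml` CLI or the Python API. A minimal sketch of the latter, following the pattern in the mergekit README (the output path is a placeholder, and available `MergeOptions` may vary by version):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Writes the merged model (and a copied tokenizer) to the output directory.
run_merge(
    config,
    "./merged-qwen2.5-7b",
    options=MergeOptions(copy_tokenizer=True),
)
```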