# nthehai01/Qwen2.5-7B-Instruct-Math-Code-breadcrumbs

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
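Since Qwen/Qwen2.5-7B-Instruct is one of the merged endpoints, the model should be usable as a standard chat model through `transformers`. A minimal, untested sketch (prompt text is illustrative only):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nthehai01/Qwen2.5-7B-Instruct-Math-Code-breadcrumbs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",           # requires the accelerate package
)

# Qwen2.5-7B-Instruct is part of the merge, so the Qwen chat template applies.
messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```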

## Performance

| Metric | Value |
|---|---|
| GSM8k (zero-shot) | 90.06 |
| HellaSwag (zero-shot) | 82.77 |
| MBPP (zero-shot) | 62.21 |

## Merge Details

### Merge Method

This model was merged using the [Model Breadcrumbs](https://arxiv.org/abs/2312.06795) merge method, with [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as the base.
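Breadcrumbs builds a sparse task vector for each fine-tuned model by discarding the largest-magnitude entries (controlled by `gamma`) and keeping only a `density` fraction of what remains, then adds the weighted, `lambda`-scaled sum of these sparse deltas back onto the base weights. The sketch below is a hypothetical per-tensor re-implementation for illustration; mergekit's actual kernel may differ in details such as rounding and how entries are ranked:

```python
import torch

def breadcrumbs_sparsify(tau: torch.Tensor, density: float, gamma: float) -> torch.Tensor:
    """Sparsify a task vector tau = finetuned - base, breadcrumbs-style:
    drop the top `gamma` fraction of entries by magnitude (outliers),
    keep only the next `density` fraction, and zero everything else."""
    flat = tau.flatten()
    n = flat.numel()
    k_outliers = int(gamma * n)   # largest-magnitude entries to discard
    k_keep = int(density * n)     # entries to retain after the outliers
    order = torch.argsort(flat.abs(), descending=True)
    mask = torch.zeros(n, dtype=torch.bool, device=tau.device)
    mask[order[k_outliers:k_outliers + k_keep]] = True
    return (flat * mask).reshape(tau.shape)

def breadcrumbs_merge(base, tuned, weights, densities, gammas, lam, normalize=True):
    """merged = base + lam * sum_i w_i * sparsify(tuned_i - base)."""
    total = sum(weights) if normalize else 1.0
    merged = base.clone()
    for ft, w, d, g in zip(tuned, weights, densities, gammas):
        merged += lam * (w / total) * breadcrumbs_sparsify(ft - base, d, g)
    return merged
```

With `normalize: 1.0` in the configuration below, the three per-model weights are rescaled to sum to one, and `lambda` (≈0.91 here) scales the combined delta before it is added to the base.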

### Models Merged

The following models were included in the merge:

* [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
* [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B)
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Qwen/Qwen2.5-7B
dtype: bfloat16
merge_method: breadcrumbs
parameters:
  lambda: 0.9075603207928135
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-7B
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-Math-7B
    parameters:
      density: 0.11722197443445775
      gamma: 0.07547691839721048
      weight: 0.17267293536872041
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-Coder-7B
    parameters:
      density: 0.48352747334554935
      gamma: 0.0753405327865558
      weight: 0.11164770709858211
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      density: 0.8190520808683315
      gamma: 0.022307694128235696
      weight: 0.7626295102691242
```
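
To reproduce the merge, save the configuration above as `config.yaml` and run mergekit's CLI entry point, e.g. `mergekit-yaml config.yaml ./output-model` (adding `--cuda` if a GPU is available).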