# zephyr-7b-sft-lora-accum4-lr3e_6
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.1309
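Assuming the reported loss is the mean per-token cross-entropy in nats (the Transformers default), it corresponds to a perplexity of exp(1.1309) ≈ 3.10:

```python
import math

# Perplexity implied by the reported evaluation loss.
print(math.exp(1.1309))  # ~3.098
```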
## Model description
More information needed
## Intended uses & limitations
More information needed
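No intended uses are documented, but the adapter can be loaded onto the base model for inference. Below is a minimal sketch, assuming this repo hosts a standard PEFT LoRA adapter (as the model name suggests); the prompt text is a placeholder:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "shkang/zephyr-7b-sft-lora-accum4-lr3e_6")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

inputs = tokenizer("Tell me about gradient accumulation.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```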
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 50.0
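The total train batch size of 32 follows from per-device batch size 4 × 2 devices × 4 gradient-accumulation steps (and 8 × 2 = 16 for evaluation). A minimal sketch of how these values map onto `transformers.TrainingArguments`; the `output_dir` is an assumption, and the LoRA-specific settings (rank, alpha, target modules) are not recorded in this card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; launched across 2 GPUs
# (e.g. via `accelerate launch` or `torchrun --nproc_per_node=2`).
training_args = TrainingArguments(
    output_dir="zephyr-7b-sft-lora-accum4-lr3e_6",  # assumed name
    learning_rate=3e-6,
    per_device_train_batch_size=4,  # x 2 GPUs x 4 accum steps = 32 total
    per_device_eval_batch_size=8,   # x 2 GPUs = 16 total
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=50.0,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the optimizer defaults.
)
```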
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0761 | 0.55 | 13 | 2.0454 |
| 2.0457 | 1.57 | 27 | 2.0037 |
| 1.9868 | 2.55 | 40 | 1.9578 |
| 1.9477 | 3.57 | 54 | 1.9089 |
| 1.893 | 4.55 | 67 | 1.8629 |
| 1.8529 | 5.57 | 81 | 1.8235 |
| 1.8242 | 6.55 | 94 | 1.7907 |
| 1.7901 | 7.57 | 108 | 1.7589 |
| 1.757 | 8.55 | 121 | 1.7352 |
| 1.7344 | 9.57 | 135 | 1.7108 |
| 1.7193 | 10.55 | 148 | 1.6879 |
| 1.6953 | 11.57 | 162 | 1.6634 |
| 1.6687 | 12.55 | 175 | 1.6432 |
| 1.6512 | 13.57 | 189 | 1.6192 |
| 1.6177 | 14.55 | 202 | 1.5982 |
| 1.6033 | 15.57 | 216 | 1.5741 |
| 1.5791 | 16.55 | 229 | 1.5489 |
| 1.5583 | 17.57 | 243 | 1.5230 |
| 1.5296 | 18.55 | 256 | 1.4971 |
| 1.4996 | 19.57 | 270 | 1.4642 |
| 1.4715 | 20.55 | 283 | 1.4362 |
| 1.4352 | 21.57 | 297 | 1.3987 |
| 1.387 | 22.55 | 310 | 1.3683 |
| 1.3554 | 23.57 | 324 | 1.3398 |
| 1.3537 | 24.55 | 337 | 1.3165 |
| 1.3102 | 25.57 | 351 | 1.2972 |
| 1.2974 | 26.55 | 364 | 1.2778 |
| 1.2876 | 27.57 | 378 | 1.2619 |
| 1.2645 | 28.55 | 391 | 1.2473 |
| 1.2686 | 29.57 | 405 | 1.2336 |
| 1.2332 | 30.55 | 418 | 1.2226 |
| 1.24 | 31.57 | 432 | 1.2142 |
| 1.2144 | 32.55 | 445 | 1.2031 |
| 1.2239 | 33.57 | 459 | 1.1926 |
| 1.1971 | 34.55 | 472 | 1.1878 |
| 1.1878 | 35.57 | 486 | 1.1820 |
| 1.1937 | 36.55 | 499 | 1.1767 |
| 1.19 | 37.57 | 513 | 1.1704 |
| 1.1774 | 38.55 | 526 | 1.1671 |
| 1.17 | 39.57 | 540 | 1.1629 |
| 1.1843 | 40.55 | 553 | 1.1574 |
| 1.1725 | 41.57 | 567 | 1.1525 |
| 1.1737 | 42.55 | 580 | 1.1473 |
| 1.1592 | 43.57 | 594 | 1.1473 |
| 1.1462 | 44.55 | 607 | 1.1418 |
| 1.1536 | 45.57 | 621 | 1.1411 |
| 1.1294 | 46.55 | 634 | 1.1377 |
| 1.1448 | 47.57 | 648 | 1.1343 |
| 1.1363 | 48.55 | 661 | 1.1331 |
| 1.1527 | 49.57 | 675 | 1.1308 |
### Framework versions
- Transformers 4.35.0
- PyTorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
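To reproduce this environment, the listed versions can be pinned, e.g. in a requirements.txt (peft would also be needed for LoRA training and inference, but its version is not recorded in this card):

```text
transformers==4.35.0
torch==2.1.0
datasets==2.14.6
tokenizers==0.14.1
```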