---
library_name: transformers
tags:
- Dutch
- generated_from_trainer
model-index:
- name: childes-segmentation-100k-gpt2_lm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# childes-segmentation-100k-gpt2_lm-model
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set (a loading sketch relating `eval_loss`, `eval_perplexity`, and `eval_bpc` follows the metrics):
- epoch: 4000.0
- step: 100000
- eval_loss: 3.1056
- eval_bpc: 4.4805
- eval_perplexity: 22.3231
- eval_model_preparation_time: 0.0008
- eval_runtime: 47.9148
- eval_samples_per_second: 2.964
- eval_steps_per_second: 0.104

Segmentation F-scores on the evaluation set, by cue (the four columns correspond to the `absolute_seg`/`spike_seg` boundary and type F-score metrics):

| Cue | absolute_seg boundary F-score | absolute_seg type F-score | spike_seg boundary F-score | spike_seg type F-score |
|---|---|---|---|---|
| Boundary Prediction | 0.6381 | 0.1447 | 0.7308 | 0.3885 |
| Entropy | 0.4936 | 0.2626 | 0.5944 | 0.2817 |
| Increase in Boundary Prediction | 0.6397 | 0.3233 | 0.7195 | 0.3610 |
| Increase in Entropy | 0.6171 | 0.3100 | 0.6282 | 0.2866 |
| Increase in Loss | 0.6068 | 0.2509 | 0.6258 | 0.3167 |
| Increase in Rank | 0.6806 | 0.4174 | 0.6735 | 0.3724 |
| Loss | 0.5355 | 0.2412 | 0.5578 | 0.2626 |
| Majority Vote Cutoff | 0.7011 | 0.4319 | 0.7329 | 0.4120 |
| Majority Vote Spike | 0.7273 | 0.4164 | 0.7085 | 0.3425 |
| Rank | 0.5571 | 0.2972 | 0.6106 | 0.3228 |
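The loss-based metrics are mutually consistent: exp(3.1056) ≈ 22.32 matches `eval_perplexity`, and 3.1056 / ln 2 ≈ 4.48 matches `eval_bpc`. The sketch below is a minimal, hedged illustration of how to load the model with `transformers` and compute these quantities for one utterance; the repository id and the example text are placeholders, and whether the unit is a character or a phoneme depends on the (undocumented) tokenizer.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "childes-segmentation-100k-gpt2_lm-model"  # placeholder: replace with the actual hub id or local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

text = "dit is een voorbeeld"  # hypothetical Dutch utterance, not from the evaluation data
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean token-level cross-entropy (in nats)
    outputs = model(**inputs, labels=inputs["input_ids"])

loss = outputs.loss.item()
print(f"loss           = {loss:.4f}")                # compare with eval_loss
print(f"perplexity     = {math.exp(loss):.4f}")      # eval_perplexity = exp(eval_loss)
print(f"bits per token = {loss / math.log(2):.4f}")  # eval_bpc-style value
```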
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them is given after the list):
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30000
- training_steps: 100000
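For reference, these hyperparameters map onto a `transformers.TrainingArguments` configuration roughly as sketched below. This is an assumption-laden sketch, not the original training script: the output directory is invented, and the per-device batch sizes assume single-device training so that they equal the reported batch sizes.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="childes-segmentation-100k-gpt2_lm-model",  # assumed name, not from the original run
    learning_rate=1e-3,
    per_device_train_batch_size=32,  # assumes a single device
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=30_000,
    max_steps=100_000,
)
```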
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.18.0
- Tokenizers 0.19.1