Yassmen committed · verified
Commit 5eda1b7 · Parent: 19fcc50

Update README.md

Files changed (1): README.md (+7 −1)
README.md CHANGED
@@ -13,9 +13,15 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
  # Wav2Vec2_Fine_tuned_on_CremaD_Speech_Emotion_Recognition
 
- This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on an unknown dataset.
+ This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english).
+ The dataset used to fine-tune the original pre-trained model is the [CREMA-D dataset](https://github.com/CheyneyComputerScience/CREMA-D), which provides 7,442 recordings of actors performing six different emotions in English:
+
+ ```python
+ emotions = ['angry', 'disgust', 'fearful', 'happy', 'neutral', 'sad']
+ ```
+
  It achieves the following results on the evaluation set:
  - Loss: 0.6258
  - Accuracy: 0.7890
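
Since the updated card stops at the evaluation numbers, here is a minimal inference sketch for trying the checkpoint. It is not part of the commit: the model id below is an assumption (the committer's namespace plus the repository name), and it relies on the `transformers` audio-classification pipeline, which handles feature extraction and label mapping for Wav2Vec2 classification heads.

```python
# Minimal inference sketch (not part of this commit).
# The model id is an assumption based on the committer's namespace
# and the repository name shown in the card's heading.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Yassmen/Wav2Vec2_Fine_tuned_on_CremaD_Speech_Emotion_Recognition",
)

# "sample.wav" is a placeholder path to a 16 kHz mono speech recording.
predictions = classifier("sample.wav")
print(predictions)  # e.g. [{'label': 'happy', 'score': 0.91}, ...]
```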