Update README.md
README.md CHANGED
@@ -34,7 +34,7 @@ Our approach leverages *CLIP* as a prior for perceptual tasks, inspired by cogni
 
 ## Performance
 
-The model was trained on the *EmoSet dataset* using the common train, val, test splits and exhibits *state-of-the-art performance compared to previous methods.
+The model was trained on the *EmoSet dataset* using the common train, val, test splits and exhibits *state-of-the-art performance* compared to previous methods.
 
 ## Usage
 