Upload 3 files
ONNX model - a fine-tuned version of DistilBERT that can be used to classify text as one of:
- neutral, offensive_language, harmful_behaviour, hate_speech
The model was trained using the [csfy tool](https://github.com/mrseanryan/csfy) and [this dataset](https://huggingface.co/datasets/seanius/toxic-or-neutral-text-labelled).
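As a rough usage sketch (not part of this upload), the model can be run with `onnxruntime` and a Hugging Face tokenizer. The input names (`input_ids`, `attention_mask`), the single logits output, and the `distilbert-base-uncased` tokenizer checkpoint are assumptions about how the graph was exported, not something confirmed by the files below.

```python
# Hypothetical inference sketch: assumes a standard DistilBERT export
# (inputs "input_ids"/"attention_mask", one logits output) and the
# distilbert-base-uncased tokenizer as the base vocabulary.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # assumed base checkpoint
session = ort.InferenceSession("toxic-or-neutral-text-labelled.onnx")

encoded = tokenizer("you are such a kind person", return_tensors="np")
logits = session.run(
    None,
    {
        "input_ids": encoded["input_ids"].astype(np.int64),
        "attention_mask": encoded["attention_mask"].astype(np.int64),
    },
)[0]
predicted_index = int(np.argmax(logits, axis=-1)[0])
```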
label_mapping.json
ADDED
@@ -0,0 +1,8 @@
+{
+  "labels": [
+    "harmful_behaviour",
+    "hate_speech",
+    "neutral",
+    "offensive_language"
+  ]
+}
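`label_mapping.json` stores the label names as a flat list; a plausible reading is that each entry's index corresponds to the model's output class index, though that ordering is an assumption rather than something stated in the file. A minimal sketch under that assumption:

```python
# Sketch: map a predicted class index back to a label name using
# label_mapping.json; assumes list order matches the model's output classes.
import json

with open("label_mapping.json") as f:
    labels = json.load(f)["labels"]

predicted_index = 2  # e.g. the argmax from the onnxruntime call sketched above
print(labels[predicted_index])  # -> "neutral"
```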
toxic-or-neutral-text-labelled.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84bb119ee47a1eccb5c3eefc721becfe5f5c199626f0cbb9d8f9a058bf720342
+size 267962835
toxic-or-neutral-text-labelled.q_8.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f25d7f29402910602d9f1341d128eadf2bf3b9b815b254db92c1fbc15723b11
+size 67373289
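The `.q_8.onnx` file looks like an 8-bit quantized copy of the same graph (~67 MB versus ~268 MB for the full-precision model); if so, it loads through the same `onnxruntime` API and can be swapped in for smaller downloads and faster CPU inference, typically at some cost in accuracy. Sketch, under that assumption:

```python
# Assumed drop-in replacement: same inputs/outputs as the full-precision model.
import onnxruntime as ort

session = ort.InferenceSession("toxic-or-neutral-text-labelled.q_8.onnx")
```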