Awan LLM
committed
Update README.md
README.md CHANGED
@@ -9,8 +9,9 @@ Realized a tokenization mistake with the previous DPO model. So this is now a ne

 - https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k

+The open LLM results are really BAD lol. Something with this dataset is disagreeing with llama 3?

-We are happy for anyone to try it out and give some feedback and we
+We are happy for anyone to try it out and give some feedback and we won't have the model up on https://awanllm.com on our LLM API...


 Instruct format:
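The hunk ends at the "Instruct format:" heading, so the template itself is not visible in this diff. Since the model card describes a Llama 3 8B Instruct fine-tune, prompting would presumably follow Meta's standard Llama 3 Instruct chat template. The sketch below shows how that template could be applied with the HF中国镜像站 `transformers` chat-template API; the model id is a hypothetical placeholder, as the actual repo name does not appear in this hunk.

```python
# Sketch only: "AwanLLM/llama-3-8b-instruct-dpo" is a placeholder id, not
# taken from this diff; substitute the real repo name from the model card.
# Assumes the tokenizer ships Meta's Llama 3 Instruct chat template.
from transformers import AutoTokenizer

model_id = "AwanLLM/llama-3-8b-instruct-dpo"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me one-sentence feedback on this model."},
]

# Renders the Llama 3 Instruct format, i.e.
# <|begin_of_text|><|start_header_id|>system<|end_header_id|>...<|eot_id|>...
# with a trailing assistant header so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```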