Awan LLM committed · Commit 0c38913 · verified · 1 parent: 39ddaa9

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -9,8 +9,9 @@ Realized a tokenization mistake with the previous DPO model. So this is now a ne
 
 - https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k
 
+The open LLM results are really BAD lol. Something with this dataset is disagreeing with llama 3?
 
-We are happy for anyone to try it out and give some feedback and we will have the model up on https://awanllm.com on our LLM API if it is popular.
+We are happy for anyone to try it out and give some feedback and we won't have the model up on https://awanllm.com on our LLM API...
 
 
 Instruct format: