migtissera committed
Commit a776d33 · 1 Parent(s): ceb2cc8

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -16,7 +16,7 @@ Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to
 
 ## Evaluation
 
-We evaluated Synthia-7B-v3.0 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
+We evaluated Synthia-11B-v3.0 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
 
 Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Section to follow.
 
@@ -47,8 +47,8 @@ ASSISTANT:
 import torch, json
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_path = "migtissera/Synthia-7B-v3.0"
-output_file_path = "./Synthia-7B-conversations.jsonl"
+model_path = "migtissera/Synthia-11B-v3.0"
+output_file_path = "./Synthia-11B-conversations.jsonl"
 
 model = AutoModelForCausalLM.from_pretrained(
     model_path,
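
For reference, applying the hunks above yields a loading snippet along these lines. This is a minimal sketch, assuming standard transformers usage: the dtype/device arguments, the example prompt, and the generate() call are illustrative assumptions, since the README's actual `from_pretrained` kwargs are truncated in the diff.

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

# Paths as renamed by this commit.
model_path = "migtissera/Synthia-11B-v3.0"
output_file_path = "./Synthia-11B-conversations.jsonl"

# Assumed loading arguments (fp16, device_map="auto" via accelerate);
# the README's real from_pretrained(...) kwargs are cut off in the diff view.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Hypothetical single-turn prompt in the SYSTEM/USER/ASSISTANT format
# suggested by the hunk headers.
prompt = (
    "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack "
    "when necessary to construct a clear, cohesive Chain of Thought reasoning.\n"
    "USER: What is the relationship between Earth's atmosphere and its magnetic field?\n"
    "ASSISTANT: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# Append the exchange to the conversations file named in the diff.
with open(output_file_path, "a") as f:
    f.write(json.dumps({"prompt": prompt, "response": answer}) + "\n")
```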