---
license: apache-2.0
---

# Synthia-11B-v3.0

SynthIA-11B-v3.0 (Synthetic Intelligent Agent) is a model trained with guidance from the Orca-2 paper. It has been fine-tuned for instruction following as well as long-form conversations. The SynthIA-3.0 dataset contains the Generalized Tree-of-Thought prompt plus 10 new long-form system contexts. However, during training the system context was removed, as suggested in the Orca-2 paper.

To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```

## Evaluation

We evaluated Synthia-11B-v3.0 on a wide range of tasks using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Section to follow.
22
+
23
+ ||||
24
+ |:------:|:--------:|:-------:|
25
+ |**Task**|**Metric**|**Value**|
26
+ |*arc_challenge*|acc_norm||
27
+ |*hellaswag*|acc_norm||
28
+ |*mmlu*|acc_norm||
29
+ |*truthfulqa_mc*|mc2||
30
+ |**Total Average**|-|||
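
These numbers can be reproduced locally with the harness's Python entry point, `lm_eval.evaluator.simple_evaluate`. The sketch below is only an illustration: the backend identifier (`hf-causal`) and the exact task names vary between harness versions, so check them against your installed release.

```python
# A minimal sketch for reproducing the metrics above with EleutherAI's
# lm-evaluation-harness. The backend name ("hf-causal") and the task
# names are version-dependent assumptions; adjust for your release.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=migtissera/Synthia-11B-v3.0",
    tasks=["arc_challenge", "hellaswag", "truthfulqa_mc"],
    batch_size=4,
)
print(results["results"])  # per-task metrics, e.g. acc_norm, mc2
```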

<br>

## Example Usage

### Here is the prompt format:

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the difference between an Orca, Dolphin and a Seal?
ASSISTANT:
```
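
For clarity, the same format can be captured in a small helper. This is only a sketch: `build_prompt` and `SYSTEM_MESSAGE` are illustrative names, not part of the model's API, and the full script below assembles the identical string inline.

```python
# Hypothetical helper illustrating the prompt format above; the full
# script below builds the same string inline.
SYSTEM_MESSAGE = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning. "
    "Always answer without hesitation."
)


def build_prompt(user_message: str, history: str = "") -> str:
    # On the first turn, history is empty and the SYSTEM message opens the
    # conversation; later turns prepend everything said so far.
    prefix = history if history else f"SYSTEM: {SYSTEM_MESSAGE}"
    return f"{prefix} \nUSER: {user_message} \nASSISTANT: "
```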

### Below is a code example showing how to use this model:

```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-11B-v3.0"
output_file_path = "./Synthia-11B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0).to("cuda")

    # Sampling parameters for generation.
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    # Keep only the newly generated tokens and stop at the next "USER:" turn.
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return answer


conversation = "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."


while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    # Carry the full transcript forward so the next turn keeps context.
    conversation = f"{llm_prompt}{answer}"

    # Save your conversation.
    json_data = {"prompt": user_input, "answer": answer}
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
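
At float16 precision, an 11B-parameter model needs roughly 22 GB of GPU memory. If that does not fit, the `load_in_8bit` flag the script leaves at `False` can be enabled instead. A minimal sketch, assuming the bitsandbytes package is installed:

```python
# A sketch of 8-bit loading for smaller GPUs; requires bitsandbytes.
# Roughly halves memory versus float16, with some quality/speed trade-off.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "migtissera/Synthia-11B-v3.0",
    device_map="auto",
    load_in_8bit=True,
    trust_remote_code=True,
)
```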

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, it remains possible for the model to generate inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>