suayptalha committed
Commit a70ad0b · verified · 1 Parent(s): eab135c

Update README.md

Files changed (1)
  1. README.md +19 -14
README.md CHANGED
@@ -41,20 +41,25 @@ Instruction Adherence: Retains high fidelity in understanding and following user

 **Loading the Model:**
 ```py
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- # Load the model and tokenizer
- model_name = "suayptalha/FastLlama-3.2-1B-Instruct"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name)
-
- # Example usage
- input_text = "Solve for x: 2x + 3 = 7"
- inputs = tokenizer(input_text, return_tensors="pt")
- outputs = model.generate(**inputs)
- response = tokenizer.decode(outputs[0], skip_special_tokens=True)
-
- print(response)
+ import torch
+ from transformers import pipeline
+
+ model_id = "suayptalha/FastLlama-3.2-1B-Instruct"
+ pipe = pipeline(
+     "text-generation",
+     model=model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ messages = [
+     {"role": "system", "content": "You are a friendly assistant named FastLlama."},
+     {"role": "user", "content": "Who are you?"},
+ ]
+ outputs = pipe(
+     messages,
+     max_new_tokens=256,
+ )
+ print(outputs[0]["generated_text"][-1])
 ```

 **Dataset:**
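
For quick reference outside the diff: the updated snippet prints the last element of `generated_text`, which for chat-style input in recent `transformers` releases is the full conversation, so the final entry is the newly generated assistant message as a role/content dict (treat this as an assumption for other versions). A minimal sketch of pulling out just the reply text:

```py
# Assumes `outputs` from the pipeline call in the updated snippet above.
# With chat messages as input, generated_text is assumed to hold the whole
# conversation, so the last entry is the new assistant turn as a
# {"role": ..., "content": ...} dict.
reply = outputs[0]["generated_text"][-1]
print(reply["content"])  # the reply text alone, without the role wrapper
```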