Vecteus
This Mistral-7B-based Large Language Model (LLM) is a version of Mistral-7B-v0.1 fine-tuned on a novel dataset.
VecTeus differs from Mistral-7B-v0.1 in several respects.
This model was created with the help of GPUs provided for the first LocalAI hackathon, and we would like to take this opportunity to thank everyone involved.
Vecteus is freed from fixed prompt templates: describe the role and task directly instead of asking the model to act as something (a short sketch follows the examples).
BAD: あなたは○○として振る舞います ("You will act as ○○")
GOOD: あなたは○○です ("You are ○○")
BAD: あなたは○○ができます ("You can do ○○")
GOOD: あなたは○○をします ("You do ○○")
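As a rough sketch of this guidance (the novelist wording is only an example, not a required prompt), the GOOD style states the role as a plain fact rather than a role-play instruction:

# Illustrative prompt strings only; substitute any role for the novelist example.
bad_system_prompt = "あなたはプロの小説家として振る舞います"  # "You will act as a professional novelist" (discouraged)
good_system_prompt = "あなたはプロの小説家です"              # "You are a professional novelist" (preferred)

The full Transformers usage example below embeds a prompt written in this direct style: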
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Local-Novel-LLM-project/Vecteus-Forte"
new_tokens = 1024

# Load the model in fp16 with FlashAttention 2 (requires the flash-attn package),
# letting Accelerate place it on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build the novel-writing system prompt, then append the user's request.
system_prompt = "あなたはプロの小説家です。\n小説を書いてください\n-------- "
prompt = input("Enter a prompt: ")
system_prompt += prompt + "\n-------- "

# Tokenize, generate with sampling, and print the decoded output.
model_inputs = tokenizer([system_prompt], return_tensors="pt").to("cuda")
generated_ids = model.generate(**model_inputs, max_new_tokens=new_tokens, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
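For more control over the output, sampling parameters can be passed to generate explicitly. The values below are illustrative defaults, not settings recommended by the model authors; this is a minimal sketch that reuses the model, tokenizer, and model_inputs defined above.

# Sketch only: explicit sampling settings (values are illustrative, not official recommendations).
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=new_tokens,
    do_sample=True,
    temperature=0.7,        # lower = more deterministic prose
    top_p=0.9,              # nucleus sampling cutoff
    repetition_penalty=1.1  # mildly discourage verbatim loops
)
# Drop the prompt tokens so only the newly generated continuation is printed.
new_only = generated_ids[:, model_inputs["input_ids"].shape[1]:]
print(tokenizer.batch_decode(new_only, skip_special_tokens=True)[0])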