# Uploaded model

- **Developed by:** kanoza
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-nemo-base-2407-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
# Mistral Nemo MCQ Question Generator

## Overview

A fine-tuned Mistral Nemo model specializing in generating multiple-choice questions (MCQs) across various domains.
## Model Details

- **Base Model:** Mistral Nemo Base 2407
- **Fine-Tuning:** LoRA with 4-bit quantization
- **Training Dataset:** SciQ
- **Primary Task:** Automated MCQ generation
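For orientation, each SciQ record pairs a supporting passage with a question, a correct answer, and three distractors. A minimal sketch of turning one such record into a prompt/completion training pair (the field names follow the public SciQ schema; the exact training template used for this model is not published, so treat the formatting below as an illustrative assumption):

```python
import random

def format_sciq_example(record, seed=0):
    """Turn one SciQ-style record into a (prompt, completion) training pair.

    Assumed record keys follow the public SciQ schema:
    question, correct_answer, distractor1..distractor3, support.
    """
    rng = random.Random(seed)
    options = [
        record["correct_answer"],
        record["distractor1"],
        record["distractor2"],
        record["distractor3"],
    ]
    rng.shuffle(options)  # avoid the correct answer always appearing first
    letters = "ABCD"
    answer_letter = letters[options.index(record["correct_answer"])]

    prompt = (
        "Instruction: Create a multiple-choice question\n"
        f"Context: {record['support']}\n"
    )
    completion = (
        f"Question: {record['question']}\n"
        + "\n".join(f"{l}) {o}" for l, o in zip(letters, options))
        + f"\nAnswer: {answer_letter}"
    )
    return prompt, completion
```

Shuffling the options during preprocessing keeps the model from learning a positional shortcut for the correct answer.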
## Key Features

- Scientific-domain question generation
- Supports multiple context types
- High-quality, contextually relevant answer options
- Configurable question complexity
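Question complexity is steered through the instruction text rather than a dedicated parameter. One possible set of instruction templates (hypothetical wording, not part of the model's published prompt format):

```python
# Hypothetical instruction templates keyed by difficulty level;
# the model card does not prescribe exact wording, so adjust to taste.
COMPLEXITY_INSTRUCTIONS = {
    "easy": "Create a simple recall multiple-choice question",
    "medium": "Create a multiple-choice question requiring conceptual understanding",
    "hard": "Create a challenging multiple-choice question requiring multi-step reasoning",
}

def build_instruction(level):
    """Look up the instruction template for a difficulty level."""
    if level not in COMPLEXITY_INSTRUCTIONS:
        raise ValueError(f"Unknown difficulty level: {level!r}")
    return COMPLEXITY_INSTRUCTIONS[level]
```

The returned string can be passed directly as the `instruction` argument in the usage example below.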
## Installation

```bash
pip install transformers unsloth
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("path/to/model")
tokenizer = AutoTokenizer.from_pretrained("path/to/model")
```
## Usage Example

```python
def generate_mcq(context, instruction):
    prompt = f"""
Instruction: {instruction}
Context: {context}
"""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example application
context = "Photosynthesis converts sunlight into plant energy."
mcq = generate_mcq(context, "Create a multiple-choice question")
print(mcq)
```
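The raw generation is plain text, so downstream use typically needs light parsing. A minimal sketch, assuming the model emits a `Question:` line, lettered options, and an `Answer:` line (the exact output format may vary, so validate against real generations):

```python
import re

def parse_mcq(text):
    """Parse generated MCQ text into question, options, and answer.

    Assumes lines shaped like:
        Question: ...
        A) ...
        B) ...
        Answer: A
    Missing fields come back as None (or an empty options dict).
    """
    question = None
    options = {}
    answer = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Question:"):
            question = line[len("Question:"):].strip()
        elif m := re.match(r"^([A-D])[).]\s*(.+)$", line):
            options[m.group(1)] = m.group(2).strip()
        elif line.startswith("Answer:"):
            answer = line[len("Answer:"):].strip()
    return {"question": question, "options": options, "answer": answer}
```

Checking that all four options and the answer were recovered is a cheap way to filter malformed generations before using them.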
## Performance Metrics

- BERTScore F1: [Placeholder]
- ROUGE-1 F1: [Placeholder]
- Generation Accuracy: [Placeholder]
## Limitations

- Primarily trained on scientific content
- Requires careful prompt engineering
- Potential bias in question generation
## Ethical Considerations

- Intended for educational research
- Users should verify generated content
## License

Apache 2.0
## Contributing

Contributions are welcome! Please open issues or pull requests on GitHub.