Mistral-24B-Reasoning-zhTW
This model is a fine-tuned version of mistralai/Mistral-Small-24B-Instruct-2501, specifically optimized for mathematical reasoning tasks and enhanced for Traditional Chinese (zh-Hant-TW) language support.
Model Details
Model Description
- Developed by: Yenting Lin
- Funded by: Ubitus
- Model type: Instruction-tuned language model for reasoning
- Language(s) (NLP): English (en), Traditional Chinese (zh-Hant-TW)
- Finetuned from model: mistralai/Mistral-Small-24B-Instruct-2501
Training Details
The model was trained on 32 H100 GPUs (4 nodes × 8 GPUs), provided by Ubitus.
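Usage
Below is a minimal inference sketch, not taken from the original card: it assumes the checkpoint is loaded with Hugging Face transformers under the repo id ubitus/Mistral-24B-Reasoning-zhTW (from this card's title), and the Traditional Chinese math prompt and generation settings are illustrative.

```python
# Minimal usage sketch (assumptions noted above): load the fine-tuned model and
# run a Traditional Chinese math-reasoning prompt through its chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ubitus/Mistral-24B-Reasoning-zhTW"  # assumed repo id from the card title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 24B parameters; bf16 keeps memory manageable
    device_map="auto",
)

# Example prompt: "Reason step by step: a train travels at 90 km/h for 2.5 hours;
# how far does it travel in total?"
messages = [
    {"role": "user", "content": "請逐步推理：一列火車以每小時 90 公里行駛 2.5 小時，共行駛多少公里？"}
]

# Apply the chat template bundled with the tokenizer, then generate greedily.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```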