This is a merge of pre-trained language models.

The goal of this merge was to create an all-round model capable of decent Russian (RU) output.

This model is a merge of 14B Qwen 2.5 finetunes, so I recommend trying it if you are tired of Mistral Nemo.
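The card does not state which merge method or ingredient models were used, so the following mergekit config is purely illustrative: the merge method and the two finetune names are placeholders, not the actual recipe behind this model.

```yaml
# Hypothetical mergekit config sketch for merging Qwen 2.5 14B finetunes.
# merge_method and ingredient model names are placeholders (assumptions).
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B-Instruct
models:
  - model: example/qwen2.5-14b-finetune-a   # placeholder finetune
  - model: example/qwen2.5-14b-finetune-b   # placeholder finetune
dtype: float16
```

A config like this would be run with mergekit's CLI to produce the merged checkpoint.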

Tested over 400 replies: the model is creative and stable, good both as an assistant and in RP/ERP, and it follows instructions well.

While still creative, in RP the model will write short answers unless prompted to do the opposite.

RU performance is what I wanted to improve, and I succeeded: the model handles Russian RP stably, and its replies are not too dry.

Tested with the ChatML template at temperature 1.01 and XTC 0.1 / 0.1; for RU, temperature 1.01 and XTC 0.1 / 0.01.
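As a minimal sketch of the setup above: the helper below formats a single-turn prompt in ChatML (the template this model was tested with), and the dictionaries capture the sampler values from the card. The key names (`xtc_threshold`, `xtc_probability`) are assumptions following the common XTC parameter naming in backends such as text-generation-webui and koboldcpp; XTC itself is implemented by the backend, not shown here.

```python
def chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in ChatML, the template this model was tested with."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Sampler settings from the card; key names are assumed, values are the card's.
SAMPLERS_EN = {"temperature": 1.01, "xtc_threshold": 0.1, "xtc_probability": 0.1}
SAMPLERS_RU = {"temperature": 1.01, "xtc_threshold": 0.1, "xtc_probability": 0.01}

prompt = chatml_prompt("You are a helpful assistant.", "Hello!")
```

Pass the formatted prompt and the matching sampler dictionary to whichever backend you run the model on, choosing `SAMPLERS_RU` for Russian.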

Safetensors · Model size: 14.8B params · Tensor type: FP16

Model tree for OddTheGreat/Blagoveshchensk_14B_V4: 6 quantizations available.
