Qwen merges
This is a merge of pre-trained language models.
The goal of this merge was to create an all-round model capable of decent Russian (RU) output.
This model is a merge of 14B Qwen 2.5 finetunes, so I recommend trying it if you are tired of Mistral Nemo.
Tested over ~400 replies: the model is creative and stable, works well both as an assistant and in RP/ERP, and follows instructions fine.
While still creative, in RP the model will write short answers unless prompted otherwise.
RU performance is what I wanted to improve, and I succeeded: the model handles Russian RP stably, and replies are not too dry.
Tested with the ChatML template at temperature 1.01 and XTC 0.1/0.1; for RU, temperature 1.01 and XTC 0.1/0.01.
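Since the settings above assume the ChatML prompt template, here is a minimal sketch of assembling a ChatML prompt by hand (the function name is my own; the special tokens are the standard ChatML ones used by Qwen 2.5 models):

```python
# Minimal sketch of manual ChatML prompt formatting, assuming the
# standard <|im_start|>/<|im_end|> special tokens used by Qwen 2.5.

def build_chatml_prompt(system: str, user: str) -> str:
    """Format one system + user turn in ChatML and open the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Привет! Как дела?")
print(prompt)
```

In most frontends (SillyTavern, text-generation-webui, etc.) you would simply select the ChatML preset instead of formatting prompts manually; the sampler values (temperature, XTC threshold/probability) are set separately in the sampling settings.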