---
base_model:
- huihui-ai/MicroThinker-1B-Preview
- huihui-ai/Llama-3.2-1B-Instruct-abliterated
- cognitivecomputations/Dolphin3.0-Llama3.2-1B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.2
---
# about

Nothing special here, just a first attempt at a 1B merge.

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [huihui-ai/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated) as the base.

### Models Merged

The following models were included in the merge:

* [huihui-ai/MicroThinker-1B-Preview](https://huggingface.co/huihui-ai/MicroThinker-1B-Preview)
* [cognitivecomputations/Dolphin3.0-Llama3.2-1B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.2-1B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: cognitivecomputations/Dolphin3.0-Llama3.2-1B
    parameters:
      weight: 1.0
  - model: huihui-ai/MicroThinker-1B-Preview
    parameters:
      weight: 1.0
base_model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
dtype: bfloat16
normalize: true
```
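For intuition, the Model Stock method interpolates between the base weights and the average of the fine-tuned weights, with a ratio `t = k·cosθ / (1 + (k−1)·cosθ)` derived from the angle θ between the fine-tuned weight deltas (the paper's Equation for k models). The sketch below is an illustrative NumPy version of that formula applied to a single weight tensor; it is not mergekit's actual implementation, which handles this per-layer across full checkpoints, and the function name is hypothetical.

```python
import numpy as np

def model_stock_merge(w_base, finetuned):
    """Illustrative Model Stock merge of one weight tensor.

    Interpolates between the base weights and the mean of the fine-tuned
    weights using t = k*cos(theta) / (1 + (k-1)*cos(theta)), where theta
    is estimated as the average pairwise angle between the fine-tuned
    deltas (fine-tuned weights minus base weights).
    """
    k = len(finetuned)
    deltas = [w - w_base for w in finetuned]
    # Average pairwise cosine similarity between the task vectors.
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cosines.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cosines))
    t = k * cos_theta / (1.0 + (k - 1) * cos_theta)
    w_avg = np.mean(finetuned, axis=0)
    # t -> 1 when the fine-tunes agree (merge to their average);
    # t -> 0 when their deltas are orthogonal (fall back to the base).
    return t * w_avg + (1.0 - t) * w_base
```

With two identical fine-tunes the deltas are parallel (cosθ = 1, t = 1) and the result is their average; with orthogonal deltas (cosθ = 0, t = 0) the result collapses back to the base weights.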