Formulae/MITA-V1.0-7B-2-24-2025
Overview
Formulae/MITA-V1.0-7B is the first-generation MITA model, designed for general-purpose, uncensored use. This version serves as the foundation for future MoE (Mixture of Experts) development.
Built using the TIES merging method, MITA-V1.0-7B combines multiple fine-tuned models to create a balanced and robust generalist model.
Merge Details
- Base Model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- Merged Models:
- Merge Method: TIES (Trim, Elect Sign & Merge)
- Data Type: bfloat16
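For context, the sketch below shows what a mergekit TIES configuration for a merge like this one generally looks like. Only the base model and data type come from the details above; the donor models, density, and weight values are placeholders, since the card does not list the actual recipe.

```python
# Hypothetical sketch of a mergekit-style TIES config for a merge like this one.
# The donor models and parameter values below are placeholders, NOT the actual
# models used for MITA-V1.0-7B.
import yaml

config = {
    "merge_method": "ties",
    "base_model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    "models": [
        {"model": "example-org/finetune-a",  # placeholder donor model
         "parameters": {"density": 0.5, "weight": 0.5}},
        {"model": "example-org/finetune-b",  # placeholder donor model
         "parameters": {"density": 0.5, "weight": 0.5}},
    ],
    "dtype": "bfloat16",
}

with open("ties-merge.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# A file like this would then typically be passed to mergekit, e.g.:
#   mergekit-yaml ties-merge.yml ./MITA-V1.0-7B
```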
What is TIES?
TIES (Trim, Elect Sign & Merge) is a model merging technique designed to preserve valuable parameters when combining multiple models. Unlike naive merging methods, TIES minimizes parameter interference by:
- Resetting parameters that changed minimally during fine-tuning.
- Resolving sign conflicts between different models.
- Merging only the aligned parameters to ensure stability.
📖 Reference: TIES-Merging: Resolving Interference When Merging Models (Yadav et al., 2023), arXiv:2306.01708.
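To make these steps concrete, here is a minimal sketch of TIES applied to toy task vectors (fine-tuned weights minus base weights) in PyTorch. It is illustrative only, not the mergekit implementation, and the function and parameter names are our own.

```python
# Minimal, self-contained sketch of the TIES steps on toy task vectors.
# Illustrative only; this is not the mergekit implementation.
import torch

def ties_merge(task_vectors: list[torch.Tensor], density: float = 0.5) -> torch.Tensor:
    trimmed = []
    for tv in task_vectors:
        # 1) Trim: keep only the top `density` fraction of parameters by magnitude,
        #    resetting the minimally changed ones to zero.
        k = max(1, int(density * tv.numel()))
        threshold = tv.abs().flatten().kthvalue(tv.numel() - k + 1).values
        trimmed.append(torch.where(tv.abs() >= threshold, tv, torch.zeros_like(tv)))

    stacked = torch.stack(trimmed)  # shape: [num_models, ...]
    # 2) Elect sign: per parameter, pick the sign with the larger total magnitude.
    elected_sign = torch.sign(stacked.sum(dim=0))
    # 3) Disjoint merge: average only the values that agree with the elected sign.
    agree = (torch.sign(stacked) == elected_sign) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return merged

# Toy usage: two "task vectors" for a single weight matrix.
tv_a = torch.randn(4, 4)
tv_b = torch.randn(4, 4)
merged_tv = ties_merge([tv_a, tv_b], density=0.5)
# The merged task vector is then added back onto the base model's weights.
```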
Capabilities
✅ General-Purpose Intelligence – Balanced across multiple tasks.
✅ Uncensored Outputs – Designed for open and unrestricted conversation.
✅ Strong Reasoning – Maintains logical coherence across different domains.
Limitations & Risks
⚠ No Task Specialization – While versatile, it does not excel in any specific domain like coding or math.
⚠ Potential Biases – Because the model is uncensored, users should verify outputs for accuracy and ethical considerations.
Usage Disclaimer
MITA-V1.0-7B is an experimental foundation model. Users should validate critical outputs, especially for sensitive or factual queries.
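For users who want to validate outputs themselves, here is a minimal sketch of loading and querying the model with Hugging Face transformers. The repository id is taken from this card, and the generation settings are illustrative defaults rather than recommended values.

```python
# Minimal sketch: load the model and generate a response with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Formulae/MITA-V1.0-7B"  # repository id assumed from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 data type
    device_map="auto",
)

prompt = "Explain what model merging is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```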
Contribute
We welcome contributions, including quantizations, fine-tuning, and further enhancements.
💡 Support Us: Buy Me a Coffee
📩 Contact: [email protected]
Future Development
MITA-V1.0-7B is just the beginning. Future versions will integrate MoE architectures for better scalability and specialization.
Made possible with MergeKit.