# ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix

## Overview
ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix is a fusion of ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes and invisietch/EtherealRainbow-v0.3-8B, merged with SLERP (Spherical Linear Interpolation) so that the two parents' weights are blended along a smooth spherical path rather than averaged linearly. The merge aims to strengthen reasoning, contextual understanding, and creative language generation while retaining ethical alignment and responsiveness.
## 🔥 Merged Models
- **ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes** – A highly optimized instruction-tuned model built for nuanced, long-form reasoning.
- **invisietch/EtherealRainbow-v0.3-8B** – A dynamic conversational model with expanded alignment and expressiveness.
## ⚙️ Merge Configuration
The following YAML configuration defines how these models were fused using SLERP:
```yaml
# Merge configuration for ZeroXClem-Llama-3.1-8B-RainbowLight-EtherealMix using SLERP
name: ZeroXClem-Llama-3.1-8B-RainbowLight-EtherealMix
slices:
  - sources:
      - model: ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
        layer_range: [0, 32]
      - model: invisietch/EtherealRainbow-v0.3-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: invisietch/EtherealRainbow-v0.3-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
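To reproduce the merge, this configuration can be passed to mergekit's `mergekit-yaml` entry point. A sketch assuming mergekit is installed; the config filename and output directory are illustrative:

```bash
pip install mergekit
mergekit-yaml config.yaml ./Llama-3.1-8B-RainbowLight-EtherealMix --cuda
```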
### Why SLERP?

- **Maintains Model Integrity**: Ensures a smooth transition between the feature spaces of both models.
- **Preserves Semantic Meaning**: Avoids interpolation collapse, keeping token embeddings structurally rich.
- **Balanced Performance**: Retains the best qualities of both parent models.
## 🚀 Capabilities

### 🌟 Enhanced Features

- **Supercharged Instruction Following** – More intuitive and context-aware.
- **Advanced Conversational Flow** – Generates coherent, human-like responses.
- **Creative and Expressive Writing** – Ideal for storytelling, summarization, and content generation.
- **Expanded Knowledge Base** – The merge brings broader factual recall and conceptual understanding.
- **Flexible Alignment** – A balance of compliance and open-ended response generation.
## 📥 Usage Instructions

### Transformers

You can run the model with HF中国镜像站's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample inference
prompt = "What are the implications of artificial intelligence in the future of education?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
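Because the model inherits Llama 3.1's chat format, instruction-style prompts generally work best through the tokenizer's built-in chat template. A short sketch reusing the `model` and `tokenizer` loaded above (the system prompt and question are illustrative):

```python
messages = [
    {"role": "system", "content": "You are a helpful, concise assistant."},
    {"role": "user", "content": "What are the implications of AI for the future of education?"},
]

# apply_chat_template wraps the messages in Llama 3.1 special tokens
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # cue the assistant's turn
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.9)
# Decode only the newly generated portion, skipping the prompt tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```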
### Ollama

For local execution with Ollama:

```bash
ollama run hf.co/ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix
```
## 📌 Important Notes

- **License**: Governed by Meta's Llama 3.1 Community License.
- **Alignment Considerations**: Users are responsible for ethical and compliant use.
- **System Tokens**: Follows Llama 3.1 tokenization standards for inference stability.
- **Quantization**: FP16 gives the best quality; Q8 quantized versions may also be available. For lower-memory setups, see the 4-bit loading sketch below.
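If VRAM is limited, the model can also be loaded in 4-bit via `bitsandbytes`. A minimal sketch, assuming `bitsandbytes` is installed; the NF4 settings shown are common defaults, not values published for this model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_name = "ZeroXClem/Llama-3.1-8B-RainbowLight-EtherealMix"

# 4-bit NF4 quantization with fp16 compute; roughly quarters weight memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```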
## 💜 Special Thanks

Deep gratitude to:

- **@invisietch** for EtherealRainbow-v0.3-8B.
- **HF中国镜像站 and the open-source AI community** for their incredible contributions. 🚀💖
✨ Merged with precision. Optimized for excellence. Experience RainbowLight EtherealMix today! ✨
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 22.83 |
| IFEval (0-Shot) | 49.73 |
| BBH (3-Shot) | 31.07 |
| MATH Lvl 5 (4-Shot) | 12.16 |
| GPQA (0-shot) | 4.92 |
| MuSR (0-shot) | 9.87 |
| MMLU-PRO (5-shot) | 29.23 |