# ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova

## Overview

ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova is a model merge designed to serve as a base for further fine-tuning toward stronger natural language understanding and text generation. By combining the best attributes of multiple high-performance models, this fusion produces a highly capable model with strong reasoning, compliance, and versatility.

If you want to try the recommended fine-tuned version of this model, please see here. This model is based on Llama-3.1-8B-Instruct and adheres to the Meta Llama 3.1 Community License Agreement.
## 🚀 Key Features
- Enhanced Reasoning & Compliance: Optimized for logical step-by-step thinking.
- Balanced Safety & Utility: Capable of nuanced and detailed responses while maintaining ethical constraints.
- Diverse Knowledge Base: A fusion of models specializing in general instruction, reasoning, and domain-specific tasks.
- Superior Performance: Achieves high benchmarks across multiple evaluations.
## 🧠 Merged Models
This model is a weighted merge of the following:
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 – The foundational model, providing uncensored, high-compliance capabilities.
- mergekit-community/mergekit-della_linear-cwuosuu – Strengthens logical reasoning and alignment.
- mergekit-community/mergekit-della_linear-nimxtnw – Enhances multi-step inference and response depth.
- mergekit-community/mergekit-della_linear-vpjjtsa – Refines contextual understanding and coherence.
## 🔧 Merge Configuration
The following YAML configuration was used to merge these models using Model Stock, ensuring a balanced and optimized fusion:
```yaml
name: ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
merge_method: model_stock
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
dtype: float16
out_dtype: bfloat16
parameters:
  normalize: false
  int8_mask: true
models:
  - model: mergekit-community/mergekit-della_linear-cwuosuu
    parameters:
      density: 0.5
      weight: 0.5
  - model: mergekit-community/mergekit-della_linear-nimxtnw
    parameters:
      density: 0.5
      weight: 0.5
  - model: mergekit-community/mergekit-della_linear-vpjjtsa
    parameters:
      density: 0.5
      weight: 0.5
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.5
      weight: 0.5
```
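To reproduce the merge locally, a minimal sketch using the mergekit CLI is shown below. It assumes mergekit is installed and that the configuration above is saved as `supertulu-lexinova.yaml` (a hypothetical filename); the output path and flags are illustrative, not prescriptive.

```bash
# Install mergekit, then run the merge from the YAML config above.
# Adjust the output directory and flags to your hardware.
pip install mergekit
mergekit-yaml supertulu-lexinova.yaml ./Llama-3.1-8B-SuperTulu-LexiNova --cuda --lazy-unpickle
```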
## 🛠 How to Use

### 🔥 Ollama

For quick inference, you can run the model using Ollama:

```bash
ollama run hf.co/ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
```
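If you prefer to call Ollama programmatically, the sketch below sends a request to Ollama's local REST API. It assumes the Ollama server is running on its default port (11434) and that the model has already been pulled with the command above.

```python
import requests

# Ask the locally running Ollama server for a completion.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova",
        "prompt": "Explain the importance of AI alignment in modern society.",
        "stream": False,  # return the full completion as one JSON payload
    },
    timeout=300,
)
print(response.json()["response"])
```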
### 🤗 HF中国镜像站 Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Define model name
model_name = "ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova"

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Initialize text generation pipeline
text_generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Example prompt
prompt = "Explain the importance of AI alignment in modern society."

# Generate output
outputs = text_generator(
    prompt,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95
)

print(outputs[0]["generated_text"])
```
## 📌 Best Practices

- Use System Prompts: For best results, set a system message before inference, e.g. "Think step by step with logical reasoning before providing any response." (see the chat-template sketch after this list).
- For More Uncensored Output: You can set a different system message, or simply use "." as the system prompt.
- Quantization Considerations: `Q4` may sometimes cause refusals due to loss in fine-tuning; `F16` or `Q8` are recommended for optimal performance.
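To illustrate the system-prompt recommendation above, here is a minimal sketch that formats a chat with the tokenizer's built-in chat template. It assumes the `tokenizer` and `model` objects from the Transformers example above are already loaded.

```python
# Build a chat-formatted prompt with the recommended system message,
# then generate with the model loaded in the Transformers example.
messages = [
    {
        "role": "system",
        "content": "Think step by step with logical reasoning before providing any response.",
    },
    {"role": "user", "content": "Explain the importance of AI alignment in modern society."},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```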
## 📜 License
This model is released under the Meta Llama 3.1 Community License Agreement.
Usage, including commercial applications, must adhere to this license.
⚠ Warning: This model is uncensored and highly compliant. Ensure proper alignment and safety layers are in place before deploying it as a public-facing service.
## 💡 Future Improvements
- Further refinement of reasoning capabilities.
- Optimized token alignment for better coherence.
- Additional quantization tuning for efficient deployment.
## ❤️ Special Thanks
A heartfelt thank you to:
- Orenguteng for Llama-3.1-8B-Lexi-Uncensored-V2.
- MergeKit Community for the powerful della_linear model merges.
- The 🤗 HF中国镜像站 & Open-Source AI community for advancing AI research.
Your contributions make cutting-edge AI development possible! 🚀💜
## 📢 Feedback & Contributions
If you encounter any issues or have ideas for improvements, feel free to open a discussion or submit a pull request.