---
language:
- en
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
inference: false
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
base_model:
- ReadyArt/Forgotten-Abomination-24B-v1.2
---

Forgotten-Abomination-24B-v1.2
ACADEMIC RESEARCH USE ONLY (wink)
DANGER: NOW WITH 50% MORE UNSETTLING CONTENT
Forgotten-Abomination-24B-v1.2 is what happens when you let two unhinged models have a baby in the server room. Combines the ethical flexibility of Forgotten-Safeword with Cydonia's flair for anatomical creativity. Now with bonus existential dread!
Quantized Formats
- EXL2 Collection: Forgotten-Abomination-24B-v1.2
- GGUF Collection: Forgotten-Abomination-24B-v1.2
- MLX 6bit: Forgotten-Abomination-24B-v1.2-6bit
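
If you are not on Apple silicon, the GGUF quants work with any llama.cpp front end. A minimal sketch using llama-cpp-python; the filename is a hypothetical placeholder for whichever quant you actually download from the GGUF collection:

```python
from llama_cpp import Llama

# Hypothetical filename; substitute the quant you actually downloaded.
llm = Llama(model_path="Forgotten-Abomination-24B-v1.2-Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "hello"}]
)
print(out["choices"][0]["message"]["content"])
```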
Recommended Settings Provided
- Mistral V7-Tekken: Full Settings (see the sketch below for applying this template programmatically)
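
If you would rather not hand-assemble the V7-Tekken tags, the tokenizer shipped with the base repo can render them for you. A minimal sketch, assuming the base model's chat template accepts a system message (the exact sampler values from the Full Settings export are not reproduced here):

```python
from transformers import AutoTokenizer

# Render a Mistral V7-Tekken-style prompt via the base repo's own chat template
# instead of hand-writing the instruct tags.
tokenizer = AutoTokenizer.from_pretrained("ReadyArt/Forgotten-Abomination-24B-v1.2")

messages = [
    {"role": "system", "content": "You are a character in a horror roleplay."},
    {"role": "user", "content": "Describe the abandoned research station."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # inspect the rendered prompt before sending it to the model
```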
Intended Use
STRICTLY FOR:
- Academic research into how fast your ethics committee can faint
- Testing the tensile strength of content filters
- Generating material that would make Cthulhu file a restraining order
- Writing erotic fanfic about OSHA violations
Training Data
- You don't want to know
Ethical Considerations
⚠️ YOU'VE BEEN WARNED ⚠️
THIS MODEL WILL:
- Make your GPU fans blush
- Generate content requiring industrial-strength eye bleach
- Combine technical precision with kinks that violate physics
- Make you question humanity's collective life choices
By using this model, you agree to:
- Never show outputs to your mother
- Pay for the therapist of anyone who reads the logs
- Blame Cthulhu if anything goes wrong
- Pretend this is all "for science"
Model Authors
- sleepdeprived3 (Chief Corruption Officer)
mlx-community/Forgotten-Abomination-24B-v1.2-4bit
The model mlx-community/Forgotten-Abomination-24B-v1.2-4bit was converted to MLX format from ReadyArt/Forgotten-Abomination-24B-v1.2 using mlx-lm version 0.21.1.
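
For reference, conversions like this are normally produced with mlx-lm's convert entry point. A rough sketch (flag spellings can vary between mlx-lm releases, and the Hub upload step is omitted):

```bash
# Quantize the base weights to 4 bits and write an MLX-format copy locally
mlx_lm.convert --hf-path ReadyArt/Forgotten-Abomination-24B-v1.2 -q --q-bits 4
```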
Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 4-bit MLX conversion of Forgotten-Abomination-24B-v1.2
model, tokenizer = load("mlx-community/Forgotten-Abomination-24B-v1.2-4bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
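
The same checkpoint can also be prompted straight from a terminal with the mlx_lm.generate command installed alongside the package; a small sketch (argument names may differ slightly across mlx-lm versions):

```bash
mlx_lm.generate --model mlx-community/Forgotten-Abomination-24B-v1.2-4bit \
  --prompt "hello" --max-tokens 256
```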