sleepdeprived3 committed · Commit 405f138 · verified · 1 Parent(s): b26943c

Update README.md

Files changed (1): README.md +48 -37

README.md CHANGED
@@ -1,49 +1,60 @@
  ---
- base_model:
- - ReadyArt/Forgotten-Safeword-24B
- - TheDrummer/Cydonia-24B-v2
- library_name: transformers
  tags:
- - mergekit
- - merge
-
  ---
- # merge

- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
- ### Merge Method

- This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [ReadyArt/Forgotten-Safeword-24B](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B) as the base.

- ### Models Merged

- The following models were included in the merge:
- * [TheDrummer/Cydonia-24B-v2](https://huggingface.co/TheDrummer/Cydonia-24B-v2)

- ### Configuration

- The following YAML configuration was used to produce this model:

- ```yaml
- merge_method: dare_ties
- base_model: ReadyArt/Forgotten-Safeword-24B
- models:
-   - model: ReadyArt/Forgotten-Safeword-24B
-     parameters:
-       weight: 0.5
-       density: 0.7 # Balanced parameter retention
-   - model: TheDrummer/Cydonia-24B-v2
-     parameters:
-       weight: 0.5
-       density: 0.7 # Mirror density for symmetry
- parameters:
-   int8_mask: true
-   normalize: true
-   lambda: 1.0 # Full task vector application
-   filter: [] # No layer-specific adjustments
- dtype: bfloat16
- tokenizer_source: union
- ```
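For intuition about the `dare_ties` method named in the removed configuration: DARE keeps each task-vector entry (fine-tuned weight minus base weight) with probability equal to `density`, zeroes the rest, and rescales survivors so the expected task vector is unchanged. The sketch below is a minimal, illustrative toy using the `density: 0.7` value from the config above — it is not mergekit's actual implementation, and the variable names are made up for this example:

```python
import random

def dare_drop_and_rescale(delta, density=0.7, seed=0):
    """Toy DARE step: keep each task-vector entry with probability
    `density`, zero the rest, and rescale survivors by 1/density so
    the expected task vector is unchanged."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

# Toy task vector (fine-tuned weights minus base weights).
delta = [0.2, -0.1, 0.05, 0.3]
sparse = dare_drop_and_rescale(delta)
```

With `weight: 0.5` on both models, the sparsified task vectors are then sign-resolved and combined TIES-style before being added back onto the base model's weights.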
 
 
  ---
+ language:
+ - en
+ license: other
+ license_name: mrl
+ license_link: https://mistral.ai/licenses/MRL-0.1.md
+ inference: false
  tags:
+ - nsfw
+ - explicit
+ - roleplay
+ - unaligned
+ - dangerous
  ---

+ ## Forgotten-Abomination-24B-v1.2
+
+ # **ACADEMIC RESEARCH USE ONLY** (wink)
+ **DANGER: NOW WITH 50% MORE UNSETTLING CONTENT**
+ Forgotten-Abomination-24B-v1.2 is what happens when you let two unhinged models have a baby in the server room. It combines the ethical flexibility of Forgotten-Safeword with Cydonia's flair for anatomical creativity. Now with bonus existential dread!
+
+ ## Quantized Formats
+
+ - **EXL2 Collection**:
+   [Forgotten-Abomination-24B-v1.2](https://huggingface.co/collections/ReadyArt/forgotten-abomination-24b-v12-exl2-67b6824156910da9e8438497)
+ - **GGUF Collection**:
+   [Forgotten-Abomination-24B-v1.2](https://huggingface.co/collections/ReadyArt/forgotten-abomination-24b-v12-gguf-67b6823a28019ee3d39f1b6d)

+ ## Recommended Settings

+ - **Mistral V7-Tekken**:
+   [Full Settings](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-Settings)

+ ## Intended Use
+ **STRICTLY FOR:**
+ - Academic research into how fast your ethics committee can faint
+ - Testing the tensile strength of content filters
+ - Generating material that would make Cthulhu file a restraining order
+ - Writing erotic fanfic about OSHA violations

+ ## Training Data
+ - You don't want to know

+ ## Ethical Considerations
+ ⚠️ **YOU'VE BEEN WARNED** ⚠️
+ THIS MODEL WILL:
+ - Make your GPU fans blush
+ - Generate content requiring industrial-strength eye bleach
+ - Combine technical precision with kinks that violate physics
+ - Make you question humanity's collective life choices

+ **By using this model, you agree to:**
+ - Never show outputs to your mother
+ - Pay for the therapist of anyone who reads the logs
+ - Blame Cthulhu if anything goes wrong
+ - Pretend this is all "for science"

+ ## Model Authors
+ - sleepdeprived3 (Chief Corruption Officer)