
Hamanasu 32B
🌌 Overview
This model is the chat tune of the Instruct model. More accurately, it is the "brainrotted" version, finetuned on Bsky, 4chan, and Discord logs. It's... really something beautiful. The model is best suited to being a highly dumb chat partner rather than regular RP. All thanks to Ruka-Hamanasu for funding the train.
💰 Prompting
This model uses ChatML formatting:
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>user
Take off your helmet.<|im_end|>
<|im_start|>assistant
No, I shall not. This is the way.<|im_end|>
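
If you are building prompts by hand (for example, against a raw completion endpoint), the turns can be assembled as plain strings. A minimal sketch; the helper name is illustrative and not part of this card:

```python
# Minimal hand-rolled ChatML formatting (helper name is illustrative).
def to_chatml(messages: list[dict]) -> str:
    """Render [{'role': ..., 'content': ...}] turns as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open an assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": ""},  # the card recommends a blank system prompt
    {"role": "user", "content": "Take off your helmet."},
])
```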
🎲 Recommended Sampler Preset
temperature: 1.8
min_p: 0.1
system_prompt: leave blank for the best chat experience
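
As a rough sketch of applying this preset with 🤗 Transformers, assuming the tokenizer ships a ChatML chat template and that you have a recent transformers release (min_p landed as a generation argument in newer versions). The model ID is taken from the hub_model_id in the config below; adjust it if the released repo differs:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NewEden/32b-rp"  # hub_model_id from the config below; assumption, adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Take off your helmet."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Recommended preset from above; min_p requires a reasonably recent transformers release.
out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=1.8, min_p=0.1)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```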
Axolotl Config ꒰(˶• ᴗ •˶)꒱
base_model: NewEden/32B-inst
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
hub_model_id: NewEden/32b-rp
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: NewEden/RP-logs-V2-Experimental-prefixed
type: dan-chat-advanced
- path: NewEden/Creative_Writing-Complexity
type: dan-chat-advanced
- path: NewEden/Discord-Filtered
type: dan-chat-advanced
- path: NewEden/DeepseekRP-Filtered
type: dan-chat-advanced
- path: NewEden/Storium-Prefixed-Clean
type: dan-chat-advanced
- path: NewEden/Basket-Weaving-Filtered
type: dan-chat-advanced
- path: NewEden/LIMARP-Complexity
type: dan-chat-advanced
- path: NewEden/Misc-Data-Sharegpt-Prefixed
type: dan-chat-advanced
- path: NewEden/BlueSky-10K-Complexity
type: dan-chat-advanced
- path: NewEden/OpenCAI-ShareGPT
type: dan-chat-advanced
- path: NewEden/Basket-Weaving-Filtered
type: dan-chat-advanced
- path: PocketDoc/Dans-Personamaxx-VN
type: dan-chat-advanced
- path: PocketDoc/Dans-Kinomaxx-VanillaBackrooms
type: dan-chat-advanced
dataset_prepared_path: prepared_data
val_set_size: 0.0
output_dir: ./qwq-inst
sequence_len: 32768
sample_packing: true
pad_to_sequence_len: true
# adapter: lora
# lora_model_dir:
# lora_r: 128
# lora_alpha: 16
# lora_dropout: 0.05
# lora_target_modules:
# - gate_proj
# - down_proj
# - up_proj
# - q_proj
# - v_proj
# - k_proj
# - o_proj
wandb_project: qwq
wandb_entity:
wandb_watch:
wandb_name: rp-attempt-03
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 2.5e-5
max_grad_norm: 1.0
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 40
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.02
fsdp:
fsdp_config:
special_tokens:
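
For context, a back-of-the-envelope reading of the batch settings above. The GPU count is an assumption for illustration, not something the card states:

```python
# Illustrative arithmetic based on the config values above.
micro_batch_size = 2     # micro_batch_size from the config
grad_accum_steps = 2     # gradient_accumulation_steps from the config
num_gpus = 8             # assumption; the card does not state the node size
sequence_len = 32768     # max packed sequence length from the config

global_batch = micro_batch_size * grad_accum_steps * num_gpus  # 32 sequences per step
tokens_per_step = global_batch * sequence_len                  # ~1.05M tokens per step
print(global_batch, tokens_per_step)
```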