---
library_name: transformers
pipeline_tag: text-generation
tags:
- 32b
- IQ3_M
- ablated
- deepseek
- gguf
- iq3
- llama-cpp
- qwen
- text-generation
---

# roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-IQ3_M-GGUF

**Repo:** `roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-IQ3_M-GGUF`  
**Original Model:** `deepseek-r1-qwen-2.5-32B-ablated`  
**Quantized File:** `deepseek-r1-qwen-2.5-32B-ablated-IQ3_M.gguf`  
**Quantization:** `GGUF`  
**Quantization Method:** `IQ3_M`  

## Overview
This is an IQ3_M GGUF quantization of `deepseek-r1-qwen-2.5-32B-ablated`, intended for use with llama.cpp and compatible runtimes.
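Below is a minimal usage sketch using the `llama-cpp-python` bindings (an assumption; any llama.cpp-compatible runtime can load the file). `Llama.from_pretrained` requires the `huggingface-hub` package to fetch the file, and the context size and GPU-offload settings shown are illustrative, not values tuned for this model.

```python
from llama_cpp import Llama

# Fetch the quantized file from this repo and load it
# (assumes `pip install llama-cpp-python huggingface-hub`).
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-IQ3_M-GGUF",
    filename="deepseek-r1-qwen-2.5-32B-ablated-IQ3_M.gguf",
    n_ctx=4096,       # context window; illustrative, raise if you have the memory
    n_gpu_layers=-1,  # offload all layers to GPU; use 0 for CPU-only inference
)

# Simple chat-style generation.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what IQ3_M quantization trades off."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
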
## Quantization By
I often have idle GPUs while building and testing the RolePlai app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).