DavidAU and mradermacher committed
Commit 6c689bb · verified · 0 Parent(s):

Duplicate from mradermacher/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-GGUF

Co-authored-by: team mradermacher <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,46 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da233dbba9b080a06814a27d22b2ef7fd55ca8f3e044daf33a3f0fedcb939f48
+ size 19971585024
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:070103ce34fafd9f38c44130c42f850735204f2aaa0a605744e2fe42b30da1b1
+ size 13739725824
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ccbe58bdc79e02e2ddcc07d6306c84d4aaab8e728b32bc81f1c5f4ef3a4d01ae
+ size 19280569344
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f6b199c7a6a1b6c0c4cc36bfc2c346aa54471f421e616f0e0412a54d282fd03
+ size 17795523584
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8cd534b7fbe5e7cf7402ae09d8c520162b814badd5d67513ac2a518db95597a
+ size 16068977664
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f40799d57f85039c94280a61956b014db3f6e9738f5078954188b6f3075b9b03
+ size 22197433344
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56442fc2acc3b935c8659b771d85d70d9ed902cc616d5bd4a5ce56a998dad38a
+ size 20996813824
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:138dab4653a65970aa9993b92ef7f962973405276f330bf312ccbdb44d678329
+ size 26022441984
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc4ba4db982878166d7483df54e93d8f0a83bcf01c3b212d219a91029e67528d
+ size 25320551424
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20e2a95d154c17231970ff8bd49f5937b663f2a92a8a4346f8a7f5aba219b025
+ size 30086513664
Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f961dca4b28056383f13c16db9a2270a5a4a87c21d3b1b7d0df4869d0b11d77a
+ size 38965945344
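Each `.gguf` entry in this commit is stored as a Git LFS pointer, i.e. only the `version`/`oid`/`size` triplet shown above rather than the weights themselves. As a minimal sketch, the recorded sha256 `oid` and byte `size` can be checked against a downloaded copy; the hash and size below are copied from the Q2_K pointer in this commit, while the local path is an assumption about where you saved the file.

```python
# Minimal sketch: verify a downloaded GGUF against the LFS pointer fields
# recorded above (sha256 oid and byte size). Values are copied from the
# Q2_K pointer in this commit; the local path is an assumption.
import hashlib
import os

expected_oid = "070103ce34fafd9f38c44130c42f850735204f2aaa0a605744e2fe42b30da1b1"
expected_size = 13739725824
path = "Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q2_K.gguf"  # assumed local download

sha = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks so large quants do not need to fit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

assert os.path.getsize(path) == expected_size, "size does not match the LFS pointer"
assert sha.hexdigest() == expected_oid, "sha256 does not match the LFS pointer"
print("local file matches the LFS pointer")
```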
README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ base_model: DavidAU/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed
+ language:
+ - en
+ library_name: transformers
+ quantized_by: mradermacher
+ tags:
+ - Cubed Reasoning
+ - QwQ-32B
+ - reasoning
+ - thinking
+ - r1
+ - cot
+ - deepseek
+ - Qwen2.5
+ - Hermes
+ - DeepHermes
+ - DeepSeek
+ - DeepSeek-R1-Distill
+ - 128k context
+ - merge
+ ---
+ ## About
+
+ <!-- ### quantize_version: 2 -->
+ <!-- ### output_tensor_quantised: 1 -->
+ <!-- ### convert_type: hf -->
+ <!-- ### vocab_type: -->
+ <!-- ### tags: -->
+ static quants of https://huggingface.co/DavidAU/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed
+
+ <!-- provided-files -->
+ weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
+ ## Usage
+
+ If you are unsure how to use GGUF files, refer to one of [TheBloke's
+ READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+ more details, including on how to concatenate multi-part files.
+
+ ## Provided Quants
+
+ (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
+
+ | Link | Type | Size/GB | Notes |
+ |:-----|:-----|--------:|:------|
+ | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-GGUF/resolve/main/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q2_K.gguf) | Q2_K | 13.8 | |
+ | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-GGUF/resolve/main/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q3_K_S.gguf) | Q3_K_S | 16.2 | |
+ | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-GGUF/resolve/main/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q3_K_M.gguf) | Q3_K_M | 17.9 | lower quality |
+ | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-GGUF/resolve/main/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q3_K_L.gguf) | Q3_K_L | 19.4 | |
+ | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-GGUF/resolve/main/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q4_K_S.gguf) | Q4_K_S | 21.1 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-GGUF/resolve/main/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q8_0.gguf) | Q8_0 | 39.1 | fast, best quality |
+
+ Here is a handy graph by ikawrakow comparing some lower-quality quant
+ types (lower is better):
+
+ ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+ And here are Artefact2's thoughts on the matter:
+ https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+ ## FAQ / Model Request
+
+ See https://huggingface.co/mradermacher/model_requests for some answers to
+ questions you might have and/or if you want some other model quantized.
+
+ ## Thanks
+
+ I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+ me use its servers and providing upgrades to my workstation to enable
+ this work in my free time.
+
+ <!-- end -->
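For readers unsure how to try these quants locally (see the Usage section of the README above), here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`. The repo and file names come from this commit; the libraries, context size, and prompt are assumptions about your local setup rather than anything the README prescribes.

```python
# Minimal sketch: download one quant from this repo and run a short prompt.
# Assumes `huggingface_hub` and `llama-cpp-python` are installed; context
# size and any GPU offload settings depend on your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single GGUF file (names taken from this commit).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-GGUF",
    filename="Qwen2.5-QwQ-37B-Eureka-Triple-Cubed.Q4_K_S.gguf",
)

# Load the model and generate a short completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Briefly explain what a GGUF quantization is.", max_tokens=128)
print(out["choices"][0]["text"])
```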