kenrogers committed
Commit 88e0f0c · verified · 1 Parent(s): f7f6ec4

Add new SentenceTransformer model

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1536,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": true,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,688 @@
1
+ ---
2
+ tags:
3
+ - sentence-transformers
4
+ - sentence-similarity
5
+ - feature-extraction
6
+ - generated_from_trainer
7
+ - dataset_size:100
8
+ - loss:MatryoshkaLoss
9
+ - loss:MultipleNegativesRankingLoss
10
+ base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
11
+ widget:
12
+ - source_sentence: "1. What is the significance of the beginning of thinking and end\
13
+ \ of thinking tokens in the context of recurrent depth? \n2. How does the concept\
14
+ \ of thinking about a single token relate to the overall sequence in the discussion?"
15
+ sentences:
16
+ - done by researchers at Lawrence Livermore and elsewhere um this is not this not
17
+ you know something that product teams are using today this is this is very much
18
+ a research grade thing that is cool and we're seeing some you know early signs
19
+ that it's potentially quite useful um I I wanna I want to zoom in on on like just
20
+ when people think about the the actual how of this when they think about actually
21
+ implementing this in in maybe one of their applications so whereas in the Coconut
22
+ space you're you're going to go and you're G to you're gonna like oh nope not
23
+ going out into natural language space just going to sit here and chew on it chew
24
+ on it chew on it and then I'm gonna pop out my final answer so all you get is
25
+ final answer baby it's called a blackbox okay when we go to the recurrent depth
26
+ piece um you said something interesting earlier when we were chatting about this
27
+ and it was it was like I'm going to think and think and think and think and think
28
+ and all of a sudden I know and I'm
29
+ - chains of thought and this is where this idea of test time compute came up and
30
+ this was a paper from Google in August last year called scaling test time compute
31
+ you know it's basically taking that scaling paper originally and saying well now
32
+ we have this sort of other axis to scale on and again this is the idea that we're
33
+ anthropomorphizing a little bit but humans tend to think longer on difficult problems
34
+ maybe we should let machines do that and when we think of test time Compu it's
35
+ just time spent thinking you know and so if we we think about kind of how we can
36
+ leverage this we've seen some of these things come out in recent weeks and recent
37
+ months we talked about deep seek R1 just last week and you know this is the same
38
+ idea it thinks before it answers and this is again just sort of the next step
39
+ in the evolution of what we've got going on here and we saw moreover deep seek
40
+ one generates one token at a time it's able to spend more time processing and
41
+ it generates these thinking
42
+ - piece um you said something interesting earlier when we were chatting about this
43
+ and it was it was like I'm going to think and think and think and think and think
44
+ and all of a sudden I know and I'm going to do that just with one token is sort
45
+ of the recurrent depth paper and then I'm going to let the rest of the tokens
46
+ stream like normal right so so there's this real interesting idea of like which
47
+ things are you thinking about and this is where the idea of a beginning of thinking
48
+ end of thinking token or a beginning of sequence end of sequence token comes into
49
+ play you can think about the sequence or you can think about the thinking is this
50
+ does this make sense um yeah okay okay we can think about the thinking right so
51
+ we can uh we can double we can double it up uh yeah we can think about the thing
52
+ yeah yeah okay okay so so recurrent depth in short I mean is like you think about
53
+ a single token and then you let the sequence go like that's what I thought was
54
+ interesting and and maybe
55
+ - source_sentence: '1. What are the different systems mentioned in the context, and
56
+ how are they categorized?
57
+
58
+ 2. How does the concept of "Meta Meta learning" relate to the discussion in the
59
+ context?'
60
+ sentences:
61
+ - right we kind of got to go a little bit more into the blackbox we gota go back
62
+ beyond the unknown yeah it happens but it's it's it's it's the the timing is right
63
+ and uh with companies like you know Nvidia with companies the other accelerators
64
+ that are that are coming out they're super good at inference Gro and S all these
65
+ other peeps right uh we're getting real fast at inference and so the spending
66
+ that time you know becomes less and less impactful to the user experience but
67
+ more importantly uh you know we have a lot of applications LMS aren't good for
68
+ yet where we don't care about response latency like research like uh PhD level
69
+ math where it's like it doesn't matter if it takes a day yeah because that means
70
+ it didn't take some some other person a day right like that's the that's the the
71
+ we're at this time the models are capable enough that we can think about problems
72
+ that we can't just do ourselves faster it the whole the whole you know ecosystem
73
+ is set up for this to be the right
74
+ - perceptron artificial neural networks and single neurons to many neurons and so
75
+ on but it really really got going with deep learning and we we saw we train bigger
76
+ and bigger models we get better and better output and results this has been known
77
+ for years and it's it's known even today I mean this is from the CEO of anthropic
78
+ Dario amade and you know we want to think about the the the place that we are
79
+ based on where we've been want to put this in context here so we go from pre-training
80
+ which is the thing that sort of takes a long time it's the it's the show goth
81
+ it's got all the possibilities we can sort of think of this as months many many
82
+ tens of thousands of gpus maybe more these days and as we saw at nurs this past
83
+ uh you know year Ilia suit noted that pre-training as we know it will end now
84
+ we've seen that there's been a lot of focus subsequently especially you know out
85
+ in public on posttraining and on this idea of rhf and RL we talked about this
86
+ in our recent reasoning model
87
+ - we've got we've got sort of this idea may we could call it system one is GPT 40
88
+ mini and then we could say Okay reasoning model that's spitting out tokens is
89
+ like is like system two and then maybe like we've got this sort of system three
90
+ that's in in in latent space and then we've got this like system four that's like
91
+ that's like you know On Any Given specific type of of of thing I might want to
92
+ extra think about in latent space I might go deeper on it so so there's just this
93
+ real sort of very interesting abstraction Ed meta thinking at play here and um
94
+ and it gets back to me to sort of the idea of like kind of Meta Meta learning
95
+ I mean they they really they really did away with our ability to use that term
96
+ and in the gpt3 paper so um we're at the end of sort of the words that we that
97
+ we can concretely say we know what to do but we know what to do with the code
98
+ so I want to go ahead and just quickly introduce this to you uh whiz we're going
99
+ to do a quick demo here and we're going to
100
+ - source_sentence: '1. What does the term "tokenless" refer to in the context of scaling
101
+ and model architecture?
102
+
103
+ 2. How does the looping process contribute to generating responses without resolving
104
+ back to tokens?'
105
+ sentences:
106
+ - allow us to scil uh that seems a little a little weird to me I'm not it's not
107
+ very intuitive what's your intuition behind this yeah so I the word tokenless
108
+ is a is a is a fun one I would say like the idea is we don't need to resolve back
109
+ to tokens to scale because we can just keep looping around before we get to tokens
110
+ right really anything that allows us to keep looping around until we get to The
111
+ Final Answer uh is is g to allow us to scale right because we can say well Loop
112
+ more Loop more Loop more right like uh W with with uh you know with say like generating
113
+ a response right if I generate for five tokens versus if I generate for 70,000
114
+ tokens right like we we have this idea of a scaling axis on the on the inference
115
+ side this is just kind of shoving that inside the uh the model architecture as
116
+ opposed to allowing it to resolve to token space but the idea is we're still adding
117
+ information at every step at every stage right that we do a a loop as it were
118
+ so we're getting a better and
119
+ - perceptron artificial neural networks and single neurons to many neurons and so
120
+ on but it really really got going with deep learning and we we saw we train bigger
121
+ and bigger models we get better and better output and results this has been known
122
+ for years and it's it's known even today I mean this is from the CEO of anthropic
123
+ Dario amade and you know we want to think about the the the place that we are
124
+ based on where we've been want to put this in context here so we go from pre-training
125
+ which is the thing that sort of takes a long time it's the it's the show goth
126
+ it's got all the possibilities we can sort of think of this as months many many
127
+ tens of thousands of gpus maybe more these days and as we saw at nurs this past
128
+ uh you know year Ilia suit noted that pre-training as we know it will end now
129
+ we've seen that there's been a lot of focus subsequently especially you know out
130
+ in public on posttraining and on this idea of rhf and RL we talked about this
131
+ in our recent reasoning model
132
+ - allow us to scil uh that seems a little a little weird to me I'm not it's not
133
+ very intuitive what's your intuition behind this yeah so I the word tokenless
134
+ is a is a is a fun one I would say like the idea is we don't need to resolve back
135
+ to tokens to scale because we can just keep looping around before we get to tokens
136
+ right really anything that allows us to keep looping around until we get to The
137
+ Final Answer uh is is g to allow us to scale right because we can say well Loop
138
+ more Loop more Loop more right like uh W with with uh you know with say like generating
139
+ a response right if I generate for five tokens versus if I generate for 70,000
140
+ tokens right like we we have this idea of a scaling axis on the on the inference
141
+ side this is just kind of shoving that inside the uh the model architecture as
142
+ opposed to allowing it to resolve to token space but the idea is we're still adding
143
+ information at every step at every stage right that we do a a loop as it were
144
+ so we're getting a better and
145
+ - source_sentence: '1. What is the relationship between latent space and natural language
146
+ in the context of a Transformer architecture?
147
+
148
+ 2. How does the GPT style architecture process a sequence to predict the next
149
+ token?'
150
+ sentences:
151
+ - piece um you said something interesting earlier when we were chatting about this
152
+ and it was it was like I'm going to think and think and think and think and think
153
+ and all of a sudden I know and I'm going to do that just with one token is sort
154
+ of the recurrent depth paper and then I'm going to let the rest of the tokens
155
+ stream like normal right so so there's this real interesting idea of like which
156
+ things are you thinking about and this is where the idea of a beginning of thinking
157
+ end of thinking token or a beginning of sequence end of sequence token comes into
158
+ play you can think about the sequence or you can think about the thinking is this
159
+ does this make sense um yeah okay okay we can think about the thinking right so
160
+ we can uh we can double we can double it up uh yeah we can think about the thing
161
+ yeah yeah okay okay so so recurrent depth in short I mean is like you think about
162
+ a single token and then you let the sequence go like that's what I thought was
163
+ interesting and and maybe
164
+ - it's kind of funny in a logical way if you look up logic it uses the word reason
165
+ and there we are caught in a loop but reasoning is about thinking latent space
166
+ is about using a representation of our data that sort of captures the essential
167
+ features of it we can think of latent space as embedding space or the space of
168
+ math and numbers in other words it's just not the space of words and natural language
169
+ let's think about how this manifests in a Transformer architecture here I'm showing
170
+ a GPT style architecture from the gpt2 paper what we want to think about is we
171
+ want to put a sequence in and we want to get some next token prediction out when
172
+ we put the sequence in we're in the space of natural language when we get the
173
+ next token out we're in the space of natural language betwix in between we're
174
+ going to be in latent space we're going to be in embedding space we're going to
175
+ be in the space where we can do math and stuff and importantly we can kind of
176
+ think that we're putting in this big
177
+ - architecture is built upon a latent depth recurrent block that is run for a randomly
178
+ sampled number of iterations during training I can't see it let's see what they
179
+ gave us in the paper they gave us this bad boy personally not a fan of this diagram
180
+ whiz thinks it's totally fine and it makes perfect sense let's see if we can break
181
+ it down for you guys here a visualization of the architecture we have the Prelude
182
+ block we have the recurrent block recurrent block recurrent block and then this
183
+ Koda so each block consists of a number of su layers okay the blue Prelude block
184
+ embeds the input into latent space all right where the green shared recurrent
185
+ block is a block of layers that is repeated to compute the final latent state
186
+ which is decoded by the layers of the red Coda block back to our gp2 architecture
187
+ diagram let's think about how we're still kind of doing this loop back we're still
188
+ doing this reasoning in in space and now let's label the Prelude the recurrent
189
+ block and the Koda we
190
+ - source_sentence: '1. What is meant by the terms "hidden state," "latent space,"
191
+ and "embedding space" in the context of reasoning models?
192
+
193
+ 2. How do the last hidden states function as input embeddings in a typical Chain
194
+ of Thought reasoning model?'
195
+ sentences:
196
+ - the next step in the evolution of what we've got going on here and we saw moreover
197
+ deep seek one generates one token at a time it's able to spend more time processing
198
+ and it generates these thinking tokens that explain its Chain of Thought So we're
199
+ generating tokens during our chains of thought and that makes them explainable
200
+ very cool all right I want to bring whiz back up for just a moment here okay just
201
+ to be super clear reasoning and test time compute do you think these are sort
202
+ of you know triple equal sign or or how would you how would you say you know generally
203
+ we're talking about the same thing but they're not the same thing yeah they're
204
+ they're this so so okay they're not literally the same thing of course but they're
205
+ also pretty much the same thing in in how we talk about it in 2025 today that's
206
+ right that's right so so reasoning is some right now because our models are System
207
+ One machines right this is the this is the they're not reasoners they're they're
208
+ uh they're they're
209
+ - impact of this kind of approach on test time compute scaling some of the working
210
+ hypotheses and some of the things people are interested in in looking out there
211
+ on the llm edge for as we continue to see the field progress I want to demonstrate
212
+ both approaches and check out the new coconut Library as well so how we're going
213
+ to go through this is we're going to essentially introduce this idea of reasoning
214
+ and latent space then we're going to talk about the scaling part of this before
215
+ we dig into the specific approaches and we get the demo on both approaches by
216
+ the end so it should be a lot of fun today let's go ahead and dig in reasoning
217
+ in latent space let's root ourselves first in some definitions when we talk about
218
+ reasoning we're talking about the action of thinking about something and it's
219
+ kind of funny in a logical way if you look up logic it uses the word reason and
220
+ there we are caught in a loop but reasoning is about thinking latent space is
221
+ about using a representation of our
222
+ - of the reasoning State when we say hidden state or latent space or embedding space
223
+ or this sort of space of math and computation we're talking about the same space
224
+ of course the the exact state of the space changes depending on where we are in
225
+ the Transformer but let's take a look at the image from the paper in a typical
226
+ Chain of Thought reasoning model we're going to ask a question we're going to
227
+ generate some tokens and we're going to think kind of out loud we're we're going
228
+ to let the chains of thought flow when you click into the Chain of Thought on
229
+ 01 as we've seen before you can see sort of in the side panel the the steps it's
230
+ thinking through now conversely to actually thinking out loud we have here that
231
+ the last hidden states are used as input embeddings okay well what does this mean
232
+ well it let's go back to our gpt2 style diagram and think about this the input
233
+ embeddings here are where we're essentially looping back to so what we do is we
234
+ kind of loop back before we generate
235
+ pipeline_tag: sentence-similarity
236
+ library_name: sentence-transformers
237
+ metrics:
238
+ - cosine_accuracy@1
239
+ - cosine_accuracy@3
240
+ - cosine_accuracy@5
241
+ - cosine_accuracy@10
242
+ - cosine_precision@1
243
+ - cosine_precision@3
244
+ - cosine_precision@5
245
+ - cosine_precision@10
246
+ - cosine_recall@1
247
+ - cosine_recall@3
248
+ - cosine_recall@5
249
+ - cosine_recall@10
250
+ - cosine_ndcg@10
251
+ - cosine_mrr@10
252
+ - cosine_map@100
253
+ model-index:
254
+ - name: SentenceTransformer based on Alibaba-NLP/gte-Qwen2-1.5B-instruct
255
+ results:
256
+ - task:
257
+ type: information-retrieval
258
+ name: Information Retrieval
259
+ dataset:
260
+ name: Unknown
261
+ type: unknown
262
+ metrics:
263
+ - type: cosine_accuracy@1
264
+ value: 0.875
265
+ name: Cosine Accuracy@1
266
+ - type: cosine_accuracy@3
267
+ value: 1.0
268
+ name: Cosine Accuracy@3
269
+ - type: cosine_accuracy@5
270
+ value: 1.0
271
+ name: Cosine Accuracy@5
272
+ - type: cosine_accuracy@10
273
+ value: 1.0
274
+ name: Cosine Accuracy@10
275
+ - type: cosine_precision@1
276
+ value: 0.875
277
+ name: Cosine Precision@1
278
+ - type: cosine_precision@3
279
+ value: 0.3333333333333333
280
+ name: Cosine Precision@3
281
+ - type: cosine_precision@5
282
+ value: 0.2
283
+ name: Cosine Precision@5
284
+ - type: cosine_precision@10
285
+ value: 0.1
286
+ name: Cosine Precision@10
287
+ - type: cosine_recall@1
288
+ value: 0.875
289
+ name: Cosine Recall@1
290
+ - type: cosine_recall@3
291
+ value: 1.0
292
+ name: Cosine Recall@3
293
+ - type: cosine_recall@5
294
+ value: 1.0
295
+ name: Cosine Recall@5
296
+ - type: cosine_recall@10
297
+ value: 1.0
298
+ name: Cosine Recall@10
299
+ - type: cosine_ndcg@10
300
+ value: 0.9538662191964322
301
+ name: Cosine Ndcg@10
302
+ - type: cosine_mrr@10
303
+ value: 0.9375
304
+ name: Cosine Mrr@10
305
+ - type: cosine_map@100
306
+ value: 0.9375
307
+ name: Cosine Map@100
308
+ ---
309
+
310
+ # SentenceTransformer based on Alibaba-NLP/gte-Qwen2-1.5B-instruct
311
+
312
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct). It maps sentences & paragraphs to a 1536-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
313
+
314
+ ## Model Details
315
+
316
+ ### Model Description
317
+ - **Model Type:** Sentence Transformer
318
+ - **Base model:** [Alibaba-NLP/gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) <!-- at revision 0d2ad8e1ac654a2b626e62154778a70868141208 -->
319
+ - **Maximum Sequence Length:** 32768 tokens
320
+ - **Output Dimensionality:** 1536 dimensions
321
+ - **Similarity Function:** Cosine Similarity
322
+ <!-- - **Training Dataset:** Unknown -->
323
+ <!-- - **Language:** Unknown -->
324
+ <!-- - **License:** Unknown -->
325
+
326
+ ### Model Sources
327
+
328
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
329
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
330
+ - **HF中国镜像站:** [Sentence Transformers on HF中国镜像站](https://huggingface.co/models?library=sentence-transformers)
331
+
332
+ ### Full Model Architecture
333
+
334
+ ```
335
+ SentenceTransformer(
336
+ (0): Transformer({'max_seq_length': 32768, 'do_lower_case': False}) with Transformer model: Qwen2Model
337
+ (1): Pooling({'word_embedding_dimension': 1536, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
338
+ (2): Normalize()
339
+ )
340
+ ```
341
+
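+ For intuition, here is a minimal sketch of what the `pooling_mode_lasttoken` and `Normalize()` stages above compute, assuming right-padded inputs; this is an illustration, not the library's exact implementation:
+ 
+ ```python
+ import torch
+ import torch.nn.functional as F
+ 
+ def last_token_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
+     # token_embeddings: [batch, seq_len, 1536], attention_mask: [batch, seq_len]
+     last_idx = attention_mask.sum(dim=1) - 1                  # index of last non-padding token per sequence
+     batch_idx = torch.arange(token_embeddings.size(0), device=token_embeddings.device)
+     pooled = token_embeddings[batch_idx, last_idx]            # [batch, 1536]
+     return F.normalize(pooled, p=2, dim=1)                    # mirrors the final Normalize() module
+ ```
+ 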
342
+ ## Usage
343
+
344
+ ### Direct Usage (Sentence Transformers)
345
+
346
+ First install the Sentence Transformers library:
347
+
348
+ ```bash
349
+ pip install -U sentence-transformers
350
+ ```
351
+
352
+ Then you can load this model and run inference.
353
+ ```python
354
+ from sentence_transformers import SentenceTransformer
355
+
356
+ # Download from the 🤗 Hub
357
+ model = SentenceTransformer("kenrogers/gte-ft-yt-2")
358
+ # Run inference
359
+ sentences = [
360
+ '1. What is meant by the terms "hidden state," "latent space," and "embedding space" in the context of reasoning models?\n2. How do the last hidden states function as input embeddings in a typical Chain of Thought reasoning model?',
361
+ "of the reasoning State when we say hidden state or latent space or embedding space or this sort of space of math and computation we're talking about the same space of course the the exact state of the space changes depending on where we are in the Transformer but let's take a look at the image from the paper in a typical Chain of Thought reasoning model we're going to ask a question we're going to generate some tokens and we're going to think kind of out loud we're we're going to let the chains of thought flow when you click into the Chain of Thought on 01 as we've seen before you can see sort of in the side panel the the steps it's thinking through now conversely to actually thinking out loud we have here that the last hidden states are used as input embeddings okay well what does this mean well it let's go back to our gpt2 style diagram and think about this the input embeddings here are where we're essentially looping back to so what we do is we kind of loop back before we generate",
362
+ "impact of this kind of approach on test time compute scaling some of the working hypotheses and some of the things people are interested in in looking out there on the llm edge for as we continue to see the field progress I want to demonstrate both approaches and check out the new coconut Library as well so how we're going to go through this is we're going to essentially introduce this idea of reasoning and latent space then we're going to talk about the scaling part of this before we dig into the specific approaches and we get the demo on both approaches by the end so it should be a lot of fun today let's go ahead and dig in reasoning in latent space let's root ourselves first in some definitions when we talk about reasoning we're talking about the action of thinking about something and it's kind of funny in a logical way if you look up logic it uses the word reason and there we are caught in a loop but reasoning is about thinking latent space is about using a representation of our",
363
+ ]
364
+ embeddings = model.encode(sentences)
365
+ print(embeddings.shape)
366
+ # [3, 1536]
367
+
368
+ # Get the similarity scores for the embeddings
369
+ similarities = model.similarity(embeddings, embeddings)
370
+ print(similarities.shape)
371
+ # [3, 3]
372
+ ```
373
+
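+ The same embeddings can back a small semantic-search setup. A minimal sketch (the corpus and query strings below are made-up examples; the `query` prompt name comes from this repository's `config_sentence_transformers.json`):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+ 
+ model = SentenceTransformer("kenrogers/gte-ft-yt-2")
+ 
+ corpus = [
+     "Latent space is the embedding space where the model can do math on its representations.",
+     "Test time compute lets models spend more time thinking on difficult problems.",
+ ]
+ query = "What is latent space?"
+ 
+ # Queries use the instruction-style prompt shipped with the model; passages are encoded as-is.
+ corpus_embeddings = model.encode(corpus)
+ query_embedding = model.encode(query, prompt_name="query")
+ 
+ hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
+ print(hits[0])  # ranked list of {"corpus_id": ..., "score": ...}
+ ```
+ 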
374
+ <!--
375
+ ### Direct Usage (Transformers)
376
+
377
+ <details><summary>Click to see the direct usage in Transformers</summary>
378
+
379
+ </details>
380
+ -->
381
+
382
+ <!--
383
+ ### Downstream Usage (Sentence Transformers)
384
+
385
+ You can finetune this model on your own dataset.
386
+
387
+ <details><summary>Click to expand</summary>
388
+
389
+ </details>
390
+ -->
391
+
392
+ <!--
393
+ ### Out-of-Scope Use
394
+
395
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
396
+ -->
397
+
398
+ ## Evaluation
399
+
400
+ ### Metrics
401
+
402
+ #### Information Retrieval
403
+
404
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
405
+
406
+ | Metric | Value |
407
+ |:--------------------|:-----------|
408
+ | cosine_accuracy@1 | 0.875 |
409
+ | cosine_accuracy@3 | 1.0 |
410
+ | cosine_accuracy@5 | 1.0 |
411
+ | cosine_accuracy@10 | 1.0 |
412
+ | cosine_precision@1 | 0.875 |
413
+ | cosine_precision@3 | 0.3333 |
414
+ | cosine_precision@5 | 0.2 |
415
+ | cosine_precision@10 | 0.1 |
416
+ | cosine_recall@1 | 0.875 |
417
+ | cosine_recall@3 | 1.0 |
418
+ | cosine_recall@5 | 1.0 |
419
+ | cosine_recall@10 | 1.0 |
420
+ | **cosine_ndcg@10** | **0.9539** |
421
+ | cosine_mrr@10 | 0.9375 |
422
+ | cosine_map@100 | 0.9375 |
423
+
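+ To run the same kind of evaluation on your own split, a minimal sketch of wiring up `InformationRetrievalEvaluator` (the `queries`, `corpus`, and `relevant_docs` entries below are placeholders, not the evaluation set behind the numbers above):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import InformationRetrievalEvaluator
+ 
+ model = SentenceTransformer("kenrogers/gte-ft-yt-2")
+ 
+ # Placeholder data: id -> text mappings plus relevance judgments per query.
+ queries = {"q1": "1. What is latent space? 2. How does it relate to embeddings?"}
+ corpus = {
+     "d1": "latent space is about using a representation of our data ...",
+     "d2": "an unrelated passage about something else entirely",
+ }
+ relevant_docs = {"q1": {"d1"}}
+ 
+ evaluator = InformationRetrievalEvaluator(queries=queries, corpus=corpus, relevant_docs=relevant_docs)
+ results = evaluator(model)
+ print(results)  # includes cosine_accuracy@k, cosine_ndcg@10, etc.
+ ```
+ 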
424
+ <!--
425
+ ## Bias, Risks and Limitations
426
+
427
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
428
+ -->
429
+
430
+ <!--
431
+ ### Recommendations
432
+
433
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
434
+ -->
435
+
436
+ ## Training Details
437
+
438
+ ### Training Dataset
439
+
440
+ #### Unnamed Dataset
441
+
442
+ * Size: 100 training samples
443
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
444
+ * Approximate statistics based on the first 100 samples:
445
+ | | sentence_0 | sentence_1 |
446
+ |:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
447
+ | type | string | string |
448
+ | details | <ul><li>min: 30 tokens</li><li>mean: 41.05 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 180 tokens</li><li>mean: 208.98 tokens</li><li>max: 231 tokens</li></ul> |
449
+ * Samples:
450
+ | sentence_0 | sentence_1 |
451
+ |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
452
+ | <code>1. What are the two big ideas aimed at scaling the power of LLMs during inference mentioned in the context? <br>2. How does the concept of reasoning in latent space relate to the efficiency of computation during inference?</code> | <code>okay whiz we're talking about reasoning in latent space today is that the same as test time compute yeah that's right nice nice okay and we've got two big ideas to cover that are aimed at scaling the power of llms during inference is that right that yeah that's right so we have we have two you know latent space methods uh we have our continuous Chain of Thought or coconut right and then we have our more more directly more uh you know uh budget forcing recurrent depth uh model yes man that's a lot so when we look across both of those there appears to be a pretty simple explanation it's almost like uh you know when we when we're in that sort of thinking space of computation we don't have to do the thinky thinky in words and that's better maybe even it will allow us to find a new scaling axis is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not</code> |
453
+ | <code>1. What are the two big ideas aimed at scaling the power of LLMs during inference mentioned in the context? <br>2. How does the concept of reasoning in latent space relate to the efficiency of computation during inference?</code> | <code>okay whiz we're talking about reasoning in latent space today is that the same as test time compute yeah that's right nice nice okay and we've got two big ideas to cover that are aimed at scaling the power of llms during inference is that right that yeah that's right so we have we have two you know latent space methods uh we have our continuous Chain of Thought or coconut right and then we have our more more directly more uh you know uh budget forcing recurrent depth uh model yes man that's a lot so when we look across both of those there appears to be a pretty simple explanation it's almost like uh you know when we when we're in that sort of thinking space of computation we don't have to do the thinky thinky in words and that's better maybe even it will allow us to find a new scaling axis is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not</code> |
454
+ | <code>1. What is the significance of staying in the "mind Palace" of the Transformer instead of resolving back to token space? <br>2. What are the key concepts that need to be covered before demonstrating large reasoning models?</code> | <code>is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not just like for a second right not automatically resolving back to token space but kind of staying in this very like uh you know in in the mind Palace of the of the Transformer without having to write down the words yes okay okay okay so basically scaling is dead Long Live scaling something like that yeah scaling has died uh we should scale yeah all right all right all right well I'm pumped for the demos today we're going to see some thinking in latent space let's cover all the Concepts we need to get there we'll get you back in for some discussions along the way because this one's pretty meta thanks whiz all right guys we are gonna rock out on large reasoning models today while we were originally going to just cover chain of continuous thought or coconut we saw a paper come out a couple</code> |
455
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
456
+ ```json
457
+ {
458
+ "loss": "MultipleNegativesRankingLoss",
459
+ "matryoshka_dims": [
460
+ 768,
461
+ 512,
462
+ 256,
463
+ 128,
464
+ 64
465
+ ],
466
+ "matryoshka_weights": [
467
+ 1,
468
+ 1,
469
+ 1,
470
+ 1,
471
+ 1
472
+ ],
473
+ "n_dims_per_step": -1
474
+ }
475
+ ```
476
+
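+ A minimal sketch of how a `MatryoshkaLoss` wrapping `MultipleNegativesRankingLoss` with the parameters above is typically constructed (illustrative, not the exact training script used for this model):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer, losses
+ 
+ model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct")
+ 
+ # Wrap the ranking loss so embeddings remain useful when truncated to smaller dimensions.
+ inner_loss = losses.MultipleNegativesRankingLoss(model)
+ train_loss = losses.MatryoshkaLoss(
+     model,
+     inner_loss,
+     matryoshka_dims=[768, 512, 256, 128, 64],
+     matryoshka_weights=[1, 1, 1, 1, 1],
+ )
+ ```
+ 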
477
+ ### Training Hyperparameters
478
+ #### Non-Default Hyperparameters
479
+
480
+ - `eval_strategy`: steps
481
+ - `per_device_train_batch_size`: 10
482
+ - `per_device_eval_batch_size`: 10
483
+ - `num_train_epochs`: 10
484
+ - `multi_dataset_batch_sampler`: round_robin
485
+
486
+ #### All Hyperparameters
487
+ <details><summary>Click to expand</summary>
488
+
489
+ - `overwrite_output_dir`: False
490
+ - `do_predict`: False
491
+ - `eval_strategy`: steps
492
+ - `prediction_loss_only`: True
493
+ - `per_device_train_batch_size`: 10
494
+ - `per_device_eval_batch_size`: 10
495
+ - `per_gpu_train_batch_size`: None
496
+ - `per_gpu_eval_batch_size`: None
497
+ - `gradient_accumulation_steps`: 1
498
+ - `eval_accumulation_steps`: None
499
+ - `torch_empty_cache_steps`: None
500
+ - `learning_rate`: 5e-05
501
+ - `weight_decay`: 0.0
502
+ - `adam_beta1`: 0.9
503
+ - `adam_beta2`: 0.999
504
+ - `adam_epsilon`: 1e-08
505
+ - `max_grad_norm`: 1
506
+ - `num_train_epochs`: 10
507
+ - `max_steps`: -1
508
+ - `lr_scheduler_type`: linear
509
+ - `lr_scheduler_kwargs`: {}
510
+ - `warmup_ratio`: 0.0
511
+ - `warmup_steps`: 0
512
+ - `log_level`: passive
513
+ - `log_level_replica`: warning
514
+ - `log_on_each_node`: True
515
+ - `logging_nan_inf_filter`: True
516
+ - `save_safetensors`: True
517
+ - `save_on_each_node`: False
518
+ - `save_only_model`: False
519
+ - `restore_callback_states_from_checkpoint`: False
520
+ - `no_cuda`: False
521
+ - `use_cpu`: False
522
+ - `use_mps_device`: False
523
+ - `seed`: 42
524
+ - `data_seed`: None
525
+ - `jit_mode_eval`: False
526
+ - `use_ipex`: False
527
+ - `bf16`: False
528
+ - `fp16`: False
529
+ - `fp16_opt_level`: O1
530
+ - `half_precision_backend`: auto
531
+ - `bf16_full_eval`: False
532
+ - `fp16_full_eval`: False
533
+ - `tf32`: None
534
+ - `local_rank`: 0
535
+ - `ddp_backend`: None
536
+ - `tpu_num_cores`: None
537
+ - `tpu_metrics_debug`: False
538
+ - `debug`: []
539
+ - `dataloader_drop_last`: False
540
+ - `dataloader_num_workers`: 0
541
+ - `dataloader_prefetch_factor`: None
542
+ - `past_index`: -1
543
+ - `disable_tqdm`: False
544
+ - `remove_unused_columns`: True
545
+ - `label_names`: None
546
+ - `load_best_model_at_end`: False
547
+ - `ignore_data_skip`: False
548
+ - `fsdp`: []
549
+ - `fsdp_min_num_params`: 0
550
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
551
+ - `fsdp_transformer_layer_cls_to_wrap`: None
552
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
553
+ - `deepspeed`: None
554
+ - `label_smoothing_factor`: 0.0
555
+ - `optim`: adamw_torch
556
+ - `optim_args`: None
557
+ - `adafactor`: False
558
+ - `group_by_length`: False
559
+ - `length_column_name`: length
560
+ - `ddp_find_unused_parameters`: None
561
+ - `ddp_bucket_cap_mb`: None
562
+ - `ddp_broadcast_buffers`: False
563
+ - `dataloader_pin_memory`: True
564
+ - `dataloader_persistent_workers`: False
565
+ - `skip_memory_metrics`: True
566
+ - `use_legacy_prediction_loop`: False
567
+ - `push_to_hub`: False
568
+ - `resume_from_checkpoint`: None
569
+ - `hub_model_id`: None
570
+ - `hub_strategy`: every_save
571
+ - `hub_private_repo`: None
572
+ - `hub_always_push`: False
573
+ - `gradient_checkpointing`: False
574
+ - `gradient_checkpointing_kwargs`: None
575
+ - `include_inputs_for_metrics`: False
576
+ - `include_for_metrics`: []
577
+ - `eval_do_concat_batches`: True
578
+ - `fp16_backend`: auto
579
+ - `push_to_hub_model_id`: None
580
+ - `push_to_hub_organization`: None
581
+ - `mp_parameters`:
582
+ - `auto_find_batch_size`: False
583
+ - `full_determinism`: False
584
+ - `torchdynamo`: None
585
+ - `ray_scope`: last
586
+ - `ddp_timeout`: 1800
587
+ - `torch_compile`: False
588
+ - `torch_compile_backend`: None
589
+ - `torch_compile_mode`: None
590
+ - `dispatch_batches`: None
591
+ - `split_batches`: None
592
+ - `include_tokens_per_second`: False
593
+ - `include_num_input_tokens_seen`: False
594
+ - `neftune_noise_alpha`: None
595
+ - `optim_target_modules`: None
596
+ - `batch_eval_metrics`: False
597
+ - `eval_on_start`: False
598
+ - `use_liger_kernel`: False
599
+ - `eval_use_gather_object`: False
600
+ - `average_tokens_across_devices`: False
601
+ - `prompts`: None
602
+ - `batch_sampler`: batch_sampler
603
+ - `multi_dataset_batch_sampler`: round_robin
604
+
605
+ </details>
606
+
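+ A minimal sketch of how the non-default hyperparameters above map onto `SentenceTransformerTrainingArguments` (the `output_dir` is hypothetical; the other values are taken from the list above):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformerTrainingArguments
+ 
+ args = SentenceTransformerTrainingArguments(
+     output_dir="gte-ft-yt-2",                    # hypothetical output directory
+     eval_strategy="steps",
+     per_device_train_batch_size=10,
+     per_device_eval_batch_size=10,
+     num_train_epochs=10,
+     multi_dataset_batch_sampler="round_robin",
+ )
+ ```
+ 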
607
+ ### Training Logs
608
+ | Epoch | Step | cosine_ndcg@10 |
609
+ |:-----:|:----:|:--------------:|
610
+ | 1.0 | 10 | 0.9539 |
611
+ | 2.0 | 20 | 0.9077 |
612
+ | 3.0 | 30 | 0.9539 |
613
+ | 4.0 | 40 | 0.9539 |
614
+ | 5.0 | 50 | 0.9539 |
615
+ | 6.0 | 60 | 0.9539 |
616
+ | 7.0 | 70 | 0.9539 |
617
+ | 8.0 | 80 | 0.9539 |
618
+ | 9.0 | 90 | 0.9539 |
619
+ | 10.0 | 100 | 0.9539 |
620
+
621
+
622
+ ### Framework Versions
623
+ - Python: 3.11.11
624
+ - Sentence Transformers: 3.4.1
625
+ - Transformers: 4.48.3
626
+ - PyTorch: 2.5.1+cu124
627
+ - Accelerate: 1.3.0
628
+ - Datasets: 3.3.2
629
+ - Tokenizers: 0.21.0
630
+
631
+ ## Citation
632
+
633
+ ### BibTeX
634
+
635
+ #### Sentence Transformers
636
+ ```bibtex
637
+ @inproceedings{reimers-2019-sentence-bert,
638
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
639
+ author = "Reimers, Nils and Gurevych, Iryna",
640
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
641
+ month = "11",
642
+ year = "2019",
643
+ publisher = "Association for Computational Linguistics",
644
+ url = "https://arxiv.org/abs/1908.10084",
645
+ }
646
+ ```
647
+
648
+ #### MatryoshkaLoss
649
+ ```bibtex
650
+ @misc{kusupati2024matryoshka,
651
+ title={Matryoshka Representation Learning},
652
+ author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
653
+ year={2024},
654
+ eprint={2205.13147},
655
+ archivePrefix={arXiv},
656
+ primaryClass={cs.LG}
657
+ }
658
+ ```
659
+
660
+ #### MultipleNegativesRankingLoss
661
+ ```bibtex
662
+ @misc{henderson2017efficient,
663
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
664
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
665
+ year={2017},
666
+ eprint={1705.00652},
667
+ archivePrefix={arXiv},
668
+ primaryClass={cs.CL}
669
+ }
670
+ ```
671
+
672
+ <!--
673
+ ## Glossary
674
+
675
+ *Clearly define terms in order to be accessible across audiences.*
676
+ -->
677
+
678
+ <!--
679
+ ## Model Card Authors
680
+
681
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
682
+ -->
683
+
684
+ <!--
685
+ ## Model Card Contact
686
+
687
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
688
+ -->
added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "<|endoftext|>": 151643,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644
+ }
config.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "_name_or_path": "Alibaba-NLP/gte-Qwen2-1.5B-instruct",
+   "architectures": [
+     "Qwen2Model"
+   ],
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoModel": "Alibaba-NLP/gte-Qwen2-1.5B-instruct--modeling_qwen.Qwen2Model",
+     "AutoModelForCausalLM": "Alibaba-NLP/gte-Qwen2-1.5B-instruct--modeling_qwen.Qwen2ForCausalLM",
+     "AutoModelForSequenceClassification": "Alibaba-NLP/gte-Qwen2-1.5B-instruct--modeling_qwen.Qwen2ForSequenceClassification"
+   },
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "hidden_act": "silu",
+   "hidden_size": 1536,
+   "initializer_range": 0.02,
+   "intermediate_size": 8960,
+   "is_causal": false,
+   "max_position_embeddings": 131072,
+   "max_window_layers": 21,
+   "model_type": "qwen2",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 28,
+   "num_key_value_heads": 2,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.3",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151646
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.48.3",
+     "pytorch": "2.5.1+cu124"
+   },
+   "prompts": {
+     "query": "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1bc91d6f9b4cab1b0187ee542857b05e09df7af3e871bb02047e198a5ba31f21
+ size 4994887136
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60ada1d52bc894e05df2a824089bd9ff3a3ccba35af60787f307a3a64bb667ae
+ size 1178224504
model.safetensors.index.json ADDED
@@ -0,0 +1,345 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 6173075456
4
+ },
5
+ "weight_map": {
6
+ "embed_tokens.weight": "model-00001-of-00002.safetensors",
7
+ "layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
8
+ "layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
9
+ "layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
10
+ "layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
11
+ "layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
12
+ "layers.0.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
13
+ "layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
14
+ "layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
15
+ "layers.0.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
16
+ "layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
17
+ "layers.0.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
18
+ "layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
19
+ "layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
20
+ "layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
21
+ "layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
22
+ "layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
23
+ "layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
24
+ "layers.1.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
25
+ "layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
26
+ "layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
27
+ "layers.1.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
28
+ "layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
29
+ "layers.1.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
30
+ "layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
31
+ "layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
32
+ "layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
33
+ "layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
34
+ "layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
35
+ "layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
36
+ "layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
37
+ "layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
38
+ "layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
39
+ "layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
40
+ "layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
41
+ "layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
42
+ "layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
43
+ "layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
44
+ "layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
45
+ "layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
46
+ "layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
47
+ "layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
48
+ "layers.11.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
49
+ "layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
50
+ "layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
51
+ "layers.11.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
52
+ "layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
53
+ "layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
54
+ "layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
55
+ "layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
56
+ "layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
57
+ "layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
58
+ "layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
59
+ "layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
60
+ "layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
61
+ "layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
62
+ "layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
63
+ "layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
64
+ "layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
65
+ "layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
66
+ "layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
67
+ "layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
68
+ "layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
69
+ "layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
70
+ "layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
71
+ "layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
72
+ "layers.13.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
73
+ "layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
74
+ "layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
75
+ "layers.13.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
76
+ "layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
77
+ "layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
78
+ "layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
79
+ "layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
80
+ "layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
81
+ "layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
82
+ "layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
83
+ "layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
84
+ "layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
85
+ "layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
86
+ "layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
87
+ "layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
88
+ "layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
89
+ "layers.14.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
90
+ "layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
91
+ "layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
92
+ "layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
93
+ "layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
94
+ "layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
95
+ "layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
96
+ "layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
97
+ "layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
98
+ "layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
99
+ "layers.15.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
100
+ "layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
101
+ "layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
102
+ "layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
103
+ "layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
104
+ "layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
105
+ "layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
106
+ "layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
107
+ "layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
108
+ "layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
109
+ "layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
110
+ "layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
111
+ "layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
112
+ "layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
113
+ "layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
114
+ "layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
115
+ "layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
116
+ "layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
117
+ "layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
118
+ "layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
119
+ "layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
120
+ "layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
121
+ "layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
122
+ "layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
123
+ "layers.17.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
124
+ "layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
125
+ "layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
126
+ "layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
127
+ "layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
128
+ "layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
129
+ "layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
130
+ "layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
131
+ "layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
132
+ "layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
133
+ "layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
134
+ "layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
135
+ "layers.18.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
136
+ "layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
137
+ "layers.18.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
138
+ "layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
139
+ "layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
140
+ "layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
141
+ "layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
142
+ "layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
143
+ "layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
144
+ "layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
145
+ "layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
146
+ "layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
147
+ "layers.19.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
148
+ "layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
149
+ "layers.19.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
150
+ "layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
151
+ "layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
152
+ "layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
153
+ "layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
154
+ "layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
155
+ "layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
156
+ "layers.2.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
157
+ "layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
158
+ "layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
159
+ "layers.2.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
160
+ "layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
161
+ "layers.2.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
162
+ "layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
163
+ "layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
164
+ "layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
165
+ "layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
166
+ "layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
167
+ "layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
168
+ "layers.20.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
169
+ "layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
170
+ "layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
171
+ "layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
172
+ "layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
173
+ "layers.20.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
174
+ "layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
175
+ "layers.21.input_layernorm.weight": "model-00002-of-00002.safetensors",
176
+ "layers.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
177
+ "layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
178
+ "layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
179
+ "layers.21.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
180
+ "layers.21.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
181
+ "layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
182
+ "layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
183
+ "layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
184
+ "layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
185
+ "layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
186
+ "layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
187
+ "layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
188
+ "layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
189
+ "layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
190
+ "layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
191
+ "layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
192
+ "layers.22.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
193
+ "layers.22.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
194
+ "layers.22.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
195
+ "layers.22.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
196
+ "layers.22.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
197
+ "layers.22.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
198
+ "layers.22.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
199
+ "layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
200
+ "layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
201
+ "layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
202
+ "layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
203
+ "layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
204
+ "layers.23.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
205
+ "layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
206
+ "layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
207
+ "layers.23.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
208
+ "layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
209
+ "layers.23.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
210
+ "layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
211
+ "layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
212
+ "layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
213
+ "layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
214
+ "layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
215
+ "layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
216
+ "layers.24.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
217
+ "layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
218
+ "layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
219
+ "layers.24.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
220
+ "layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
221
+ "layers.24.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
222
+ "layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
223
+ "layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
224
+ "layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
225
+ "layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
226
+ "layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
227
+ "layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
228
+ "layers.25.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
229
+ "layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
230
+ "layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
231
+ "layers.25.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
232
+ "layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
233
+ "layers.25.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
234
+ "layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
235
+ "layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
236
+ "layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
237
+ "layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
238
+ "layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
239
+ "layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
240
+ "layers.26.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
241
+ "layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
242
+ "layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
243
+ "layers.26.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
244
+ "layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
245
+ "layers.26.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
246
+ "layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
247
+ "layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
248
+ "layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
249
+ "layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
250
+ "layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
251
+ "layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
252
+ "layers.27.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
253
+ "layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
254
+ "layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
255
+ "layers.27.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
256
+ "layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
257
+ "layers.27.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
258
+ "layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
259
+ "layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
260
+ "layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
261
+ "layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
262
+ "layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
263
+ "layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
264
+ "layers.3.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
265
+ "layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
266
+ "layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
267
+ "layers.3.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
268
+ "layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
269
+ "layers.3.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
270
+ "layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
271
+ "layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
272
+ "layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
273
+ "layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
274
+ "layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
275
+ "layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
276
+ "layers.4.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
277
+ "layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
278
+ "layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
279
+ "layers.4.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
280
+ "layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
281
+ "layers.4.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
282
+ "layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
283
+ "layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
284
+ "layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
285
+ "layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
286
+ "layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
287
+ "layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
288
+ "layers.5.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
289
+ "layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
290
+ "layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
291
+ "layers.5.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
292
+ "layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
293
+ "layers.5.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
294
+ "layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
295
+ "layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
296
+ "layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
297
+ "layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
298
+ "layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
299
+ "layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
300
+ "layers.6.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
301
+ "layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
302
+ "layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
303
+ "layers.6.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
304
+ "layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
305
+ "layers.6.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
306
+ "layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
307
+ "layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
308
+ "layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
309
+ "layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
310
+ "layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
311
+ "layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
312
+ "layers.7.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
313
+ "layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
314
+ "layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
315
+ "layers.7.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
316
+ "layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
317
+ "layers.7.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
318
+ "layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
319
+ "layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
320
+ "layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
321
+ "layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
322
+ "layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
323
+ "layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
324
+ "layers.8.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
325
+ "layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
326
+ "layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
327
+ "layers.8.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
328
+ "layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
329
+ "layers.8.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
330
+ "layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
331
+ "layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
332
+ "layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
333
+ "layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
334
+ "layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
335
+ "layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
336
+ "layers.9.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
337
+ "layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
338
+ "layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
339
+ "layers.9.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
340
+ "layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
341
+ "layers.9.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
342
+ "layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
343
+ "norm.weight": "model-00002-of-00002.safetensors"
344
+ }
345
+ }
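The weight map above is the standard safetensors sharding index: each parameter name points at the shard file that stores it, so a loader only needs to open the shards it actually uses. As a minimal sketch (assuming the index and the two shard files have been downloaded locally; the tensor name is just one example entry from the map above), a single tensor could be resolved and read like this:

```python
import json
from safetensors import safe_open

# Load the index that maps parameter names to shard filenames.
with open("model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

name = "layers.21.mlp.down_proj.weight"   # example entry from the map above
shard = weight_map[name]                  # -> "model-00002-of-00002.safetensors"

# Open only that shard and read the tensor lazily.
with safe_open(shard, framework="pt", device="cpu") as f:
    tensor = f.get_tensor(name)

print(shard, tuple(tensor.shape))
```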
modules.json ADDED
@@ -0,0 +1,20 @@
1
+ [
2
+ {
3
+ "idx": 0,
4
+ "name": "0",
5
+ "path": "",
6
+ "type": "sentence_transformers.models.Transformer"
7
+ },
8
+ {
9
+ "idx": 1,
10
+ "name": "1",
11
+ "path": "1_Pooling",
12
+ "type": "sentence_transformers.models.Pooling"
13
+ },
14
+ {
15
+ "idx": 2,
16
+ "name": "2",
17
+ "path": "2_Normalize",
18
+ "type": "sentence_transformers.models.Normalize"
19
+ }
20
+ ]
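modules.json wires the checkpoint into the usual three-stage Sentence Transformers pipeline: the Transformer backbone, the Pooling module configured under 1_Pooling, and a final Normalize step. A minimal usage sketch, assuming the checkpoint lives at the placeholder path "path/to/this-model" and that trust_remote_code=True is needed for the custom gte-Qwen2 tokenizer classes referenced later in tokenizer_config.json:

```python
from sentence_transformers import SentenceTransformer

# "path/to/this-model" is a placeholder for the local folder or Hub repo id of this checkpoint.
model = SentenceTransformer("path/to/this-model", trust_remote_code=True)

sentences = ["How does the retrieval pipeline work?", "What does the pooling module do?"]
embeddings = model.encode(sentences)   # Transformer -> Pooling -> Normalize

print(embeddings.shape)                # (2, hidden_size) of the gte-Qwen2-1.5B-instruct backbone
print((embeddings ** 2).sum(axis=1))   # ~1.0 per row, thanks to the Normalize module
```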
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
1
+ {
2
+ "max_seq_length": 32768,
3
+ "do_lower_case": false
4
+ }
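sentence_bert_config.json carries only two settings: a max_seq_length of 32768 matching the tokenizer's model_max_length, and do_lower_case left off. Encoding at the full window is expensive, so a common optional adjustment is to lower the limit at runtime, sketched here with the same placeholder model path as above:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("path/to/this-model", trust_remote_code=True)  # placeholder id

print(model.max_seq_length)  # 32768, read from sentence_bert_config.json
model.max_seq_length = 512   # optional: truncate long inputs earlier to cut memory and latency
```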
special_tokens_map.json ADDED
@@ -0,0 +1,20 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>"
5
+ ],
6
+ "eos_token": {
7
+ "content": "<|endoftext|>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false
12
+ },
13
+ "pad_token": {
14
+ "content": "<|endoftext|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false
19
+ }
20
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6d14fba4f2dfcd2267034313a2d5f79f25c4c300a02b94a3e89a6657b116e1df
3
+ size 11418534
tokenizer_config.json ADDED
@@ -0,0 +1,51 @@
1
+ {
2
+ "add_eos_token": true,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ }
29
+ },
30
+ "additional_special_tokens": [
31
+ "<|im_start|>",
32
+ "<|im_end|>"
33
+ ],
34
+ "auto_map": {
35
+ "AutoTokenizer": [
36
+ "Alibaba-NLP/gte-Qwen2-1.5B-instruct--tokenization_qwen.Qwen2Tokenizer",
37
+ "Alibaba-NLP/gte-Qwen2-1.5B-instruct--tokenization_qwen.Qwen2TokenizerFast"
38
+ ]
39
+ },
40
+ "bos_token": null,
41
+ "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
42
+ "clean_up_tokenization_spaces": false,
43
+ "eos_token": "<|endoftext|>",
44
+ "errors": "replace",
45
+ "extra_special_tokens": {},
46
+ "model_max_length": 32768,
47
+ "pad_token": "<|endoftext|>",
48
+ "split_special_tokens": false,
49
+ "tokenizer_class": "Qwen2Tokenizer",
50
+ "unk_token": null
51
+ }
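tokenizer_config.json registers the ChatML-style markers <|im_start|>/<|im_end|> as special tokens, reuses <|endoftext|> as both EOS and padding, turns on add_eos_token, and ships a ChatML chat_template, with the concrete tokenizer classes pulled in via auto_map. A hedged sketch of what that means in practice (placeholder model path again; the token id 151643 comes from added_tokens_decoder above):

```python
from transformers import AutoTokenizer

# Placeholder repo id; trust_remote_code=True because auto_map points at the
# custom gte-Qwen2 tokenizer classes.
tok = AutoTokenizer.from_pretrained("path/to/this-model", trust_remote_code=True)

ids = tok("hello world")["input_ids"]
print(tok.convert_ids_to_tokens(ids))
# add_eos_token=True should leave <|endoftext|> (id 151643) as the final token,
# which is what last-token-pooled embedding models like gte-Qwen2 read.

# Rendering the ChatML template shipped in chat_template:
messages = [{"role": "user", "content": "Hello!"}]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# -> "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
```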
vocab.json ADDED
The diff for this file is too large to render. See raw diff