llm-wizard committed (verified)
Commit 4bfc69e · 1 Parent(s): 547c3fb

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1024,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,685 @@
1
+ ---
2
+ tags:
3
+ - sentence-transformers
4
+ - sentence-similarity
5
+ - feature-extraction
6
+ - generated_from_trainer
7
+ - dataset_size:156
8
+ - loss:MatryoshkaLoss
9
+ - loss:MultipleNegativesRankingLoss
10
+ base_model: Snowflake/snowflake-arctic-embed-l
11
+ widget:
12
+ - source_sentence: What concerns do some people have regarding the value and impact
13
+ of LLMs?
14
+ sentences:
15
+ - 'I think people who complain that LLM improvement has slowed are often missing
16
+ the enormous advances in these multi-modal models. Being able to run prompts against
17
+ images (and audio and video) is a fascinating new way to apply these models.
18
+
19
+ Voice and live camera mode are science fiction come to life
20
+
21
+ The audio and live video modes that have started to emerge deserve a special mention.
22
+
23
+ The ability to talk to ChatGPT first arrived in September 2023, but it was mostly
24
+ an illusion: OpenAI used their excellent Whisper speech-to-text model and a new
25
+ text-to-speech model (creatively named tts-1) to enable conversations with the
26
+ ChatGPT mobile apps, but the actual model just saw text.'
27
+ - 'So far, I think they’re a net positive. I’ve used them on a personal level to
28
+ improve my productivity (and entertain myself) in all sorts of different ways.
29
+ I think people who learn how to use them effectively can gain a significant boost
30
+ to their quality of life.
31
+
32
+ A lot of people are yet to be sold on their value! Some think their negatives
33
+ outweigh their positives, some think they are all hot air, and some even think
34
+ they represent an existential threat to humanity.
35
+
36
+ They’re actually quite easy to build
37
+
38
+ The most surprising thing we’ve learned about LLMs this year is that they’re actually
39
+ quite easy to build.'
40
+ - 'The GPT-4 barrier was comprehensively broken
41
+
42
+ In my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s
43
+ best model was almost a year old at that point, yet no other AI lab had produced
44
+ anything better. What did OpenAI know that the rest of us didn’t?
45
+
46
+ I’m relieved that this has changed completely in the past twelve months. 18 organizations
47
+ now have models on the Chatbot Arena Leaderboard that rank higher than the original
48
+ GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.'
49
+ - source_sentence: What organizations have produced better-than-GPT-3 class models
50
+ in the past year?
51
+ sentences:
52
+ - 'Here’s the sequel to this post: Things we learned about LLMs in 2024.
53
+
54
+ Large Language Models
55
+
56
+ In the past 24-36 months, our species has discovered that you can take a GIANT
57
+ corpus of text, run it through a pile of GPUs, and use it to create a fascinating
58
+ new kind of software.
59
+
60
+ LLMs can do a lot of things. They can answer questions, summarize documents, translate
61
+ from one language to another, extract information and even write surprisingly
62
+ competent code.
63
+
64
+ They can also help you cheat at your homework, generate unlimited streams of fake
65
+ content and be used for all manner of nefarious purposes.'
66
+ - 'A year ago, the only organization that had released a generally useful LLM was
67
+ OpenAI. We’ve now seen better-than-GPT-3 class models produced by Anthropic, Mistral,
68
+ Google, Meta, EleutherAI, Stability AI, TII in Abu Dhabi (Falcon), Microsoft Research,
69
+ xAI, Replit, Baidu and a bunch of other organizations.
70
+
71
+ The training cost (hardware and electricity) is still significant—initially millions
72
+ of dollars, but that seems to have dropped to the tens of thousands already. Microsoft’s
73
+ Phi-2 claims to have used “14 days on 96 A100 GPUs”, which works out at around
74
+ $35,000 using current Lambda pricing.'
75
+ - 'One way to think about these models is an extension of the chain-of-thought prompting
76
+ trick, first explored in the May 2022 paper Large Language Models are Zero-Shot
77
+ Reasoners.
78
+
79
+ This is that trick where, if you get a model to talk out loud about a problem
80
+ it’s solving, you often get a result which the model would not have achieved otherwise.
81
+
82
+ o1 takes this process and further bakes it into the model itself. The details
83
+ are somewhat obfuscated: o1 models spend “reasoning tokens” thinking through the
84
+ problem that are not directly visible to the user (though the ChatGPT UI shows
85
+ a summary of them), then outputs a final result.'
86
+ - source_sentence: What are AI agents commonly understood to be, according to the
87
+ context provided?
88
+ sentences:
89
+ - 'Except... you can run generated code to see if it’s correct. And with patterns
90
+ like ChatGPT Code Interpreter the LLM can execute the code itself, process the
91
+ error message, then rewrite it and keep trying until it works!
92
+
93
+ So hallucination is a much lesser problem for code generation than for anything
94
+ else. If only we had the equivalent of Code Interpreter for fact-checking natural
95
+ language!
96
+
97
+ How should we feel about this as software engineers?
98
+
99
+ On the one hand, this feels like a threat: who needs a programmer if ChatGPT can
100
+ write code for you?'
101
+ - 'A lot of people are excited about AI agents—an infuriatingly vague term that
102
+ seems to be converging on “AI systems that can go away and act on your behalf”.
103
+ We’ve been talking about them all year, but I’ve seen few if any examples of them
104
+ running in production, despite lots of exciting prototypes.
105
+
106
+ I think this is because of gullibility.
107
+
108
+ Can we solve this? Honestly, I’m beginning to suspect that you can’t fully solve
109
+ gullibility without achieving AGI. So it may be quite a while before those agent
110
+ dreams can really start to come true!
111
+
112
+ Code may be the best application
113
+
114
+ Over the course of the year, it’s become increasingly clear that writing code
115
+ is one of the things LLMs are most capable of.'
116
+ - 'Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context
117
+ lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable
118
+ exception of Claude 2.1 which accepted 200,000. Today every serious provider has
119
+ a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.'
120
+ - source_sentence: How can hobbyists create their own fine-tuned models?
121
+ sentences:
122
+ - 'Getting back to models that beat GPT-4: Anthropic’s Claude 3 series launched
123
+ in March, and Claude 3 Opus quickly became my new favourite daily-driver. They
124
+ upped the ante even more in June with the launch of Claude 3.5 Sonnet—a model
125
+ that is still my favourite six months later (though it got a significant upgrade
126
+ on October 22, confusingly keeping the same 3.5 version number. Anthropic fans
127
+ have since taken to calling it Claude 3.6).'
128
+ - 'Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context
129
+ lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable
130
+ exception of Claude 2.1 which accepted 200,000. Today every serious provider has
131
+ a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.'
132
+ - 'I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model)
133
+ on my iPhone. You can install several different apps to get your own, local, completely
134
+ private LLM. My own LLM project provides a CLI tool for running an array of different
135
+ models via plugins.
136
+
137
+ You can even run them entirely in your browser using WebAssembly and the latest
138
+ Chrome!
139
+
140
+ Hobbyists can build their own fine-tuned models
141
+
142
+ I said earlier that building an LLM was still out of reach of hobbyists. That
143
+ may be true for training from scratch, but fine-tuning one of those models is
144
+ another matter entirely.'
145
+ - source_sentence: What is the significance of prompt engineering in DALL-E 3?
146
+ sentences:
147
+ - 'Now add a walrus: Prompt engineering in DALL-E 3
148
+
149
+ 32.8k
150
+
151
+ 41.2k
152
+
153
+
154
+
155
+ Web LLM runs the vicuna-7b Large Language Model entirely in your browser, and
156
+ it’s very impressive
157
+
158
+ 32.5k
159
+
160
+ 38.2k
161
+
162
+
163
+
164
+ ChatGPT can’t access the internet, even though it really looks like it can
165
+
166
+ 30.5k
167
+
168
+ 34.2k
169
+
170
+
171
+
172
+ Stanford Alpaca, and the acceleration of on-device large language model development
173
+
174
+ 29.7k
175
+
176
+ 35.7k
177
+
178
+
179
+
180
+ Run Llama 2 on your own Mac using LLM and Homebrew
181
+
182
+ 27.9k
183
+
184
+ 33.6k
185
+
186
+
187
+
188
+ Midjourney 5.1
189
+
190
+ 26.7k
191
+
192
+ 33.4k
193
+
194
+
195
+
196
+ Think of language models like ChatGPT as a “calculator for words”
197
+
198
+ 25k
199
+
200
+ 31.8k
201
+
202
+
203
+
204
+ Multi-modal prompt injection image attacks against GPT-4V
205
+
206
+ 23.7k
207
+
208
+ 27.4k'
209
+ - "blogging\n 68\n\n\n ai\n 1092\n\n\n \
210
+ \ generative-ai\n 937\n\n\n llms\n 925\n\n\
211
+ Next: Tom Scott, and the formidable power of escalating streaks\nPrevious: Last\
212
+ \ weeknotes of 2023\n\n\n \n \n\n\nColophon\n©\n2002\n2003\n2004\n2005\n2006\n\
213
+ 2007\n2008\n2009\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\n\
214
+ 2020\n2021\n2022\n2023\n2024\n2025"
215
+ - 'The environmental impact got much, much worse
216
+
217
+ The much bigger problem here is the enormous competitive buildout of the infrastructure
218
+ that is imagined to be necessary for these models in the future.
219
+
220
+ Companies like Google, Meta, Microsoft and Amazon are all spending billions of
221
+ dollars rolling out new datacenters, with a very material impact on the electricity
222
+ grid and the environment. There’s even talk of spinning up new nuclear power stations,
223
+ but those can take decades.
224
+
225
+ Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
226
+ crash in LLM prices might hint that it’s not. But would you want to be the big
227
+ tech executive that argued NOT to build out this infrastructure only to be proven
228
+ wrong in a few years’ time?'
229
+ pipeline_tag: sentence-similarity
230
+ library_name: sentence-transformers
231
+ metrics:
232
+ - cosine_accuracy@1
233
+ - cosine_accuracy@3
234
+ - cosine_accuracy@5
235
+ - cosine_accuracy@10
236
+ - cosine_precision@1
237
+ - cosine_precision@3
238
+ - cosine_precision@5
239
+ - cosine_precision@10
240
+ - cosine_recall@1
241
+ - cosine_recall@3
242
+ - cosine_recall@5
243
+ - cosine_recall@10
244
+ - cosine_ndcg@10
245
+ - cosine_mrr@10
246
+ - cosine_map@100
247
+ model-index:
248
+ - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
249
+ results:
250
+ - task:
251
+ type: information-retrieval
252
+ name: Information Retrieval
253
+ dataset:
254
+ name: Unknown
255
+ type: unknown
256
+ metrics:
257
+ - type: cosine_accuracy@1
258
+ value: 0.875
259
+ name: Cosine Accuracy@1
260
+ - type: cosine_accuracy@3
261
+ value: 1.0
262
+ name: Cosine Accuracy@3
263
+ - type: cosine_accuracy@5
264
+ value: 1.0
265
+ name: Cosine Accuracy@5
266
+ - type: cosine_accuracy@10
267
+ value: 1.0
268
+ name: Cosine Accuracy@10
269
+ - type: cosine_precision@1
270
+ value: 0.875
271
+ name: Cosine Precision@1
272
+ - type: cosine_precision@3
273
+ value: 0.3333333333333333
274
+ name: Cosine Precision@3
275
+ - type: cosine_precision@5
276
+ value: 0.20000000000000004
277
+ name: Cosine Precision@5
278
+ - type: cosine_precision@10
279
+ value: 0.10000000000000002
280
+ name: Cosine Precision@10
281
+ - type: cosine_recall@1
282
+ value: 0.875
283
+ name: Cosine Recall@1
284
+ - type: cosine_recall@3
285
+ value: 1.0
286
+ name: Cosine Recall@3
287
+ - type: cosine_recall@5
288
+ value: 1.0
289
+ name: Cosine Recall@5
290
+ - type: cosine_recall@10
291
+ value: 1.0
292
+ name: Cosine Recall@10
293
+ - type: cosine_ndcg@10
294
+ value: 0.9538662191964322
295
+ name: Cosine Ndcg@10
296
+ - type: cosine_mrr@10
297
+ value: 0.9375
298
+ name: Cosine Mrr@10
299
+ - type: cosine_map@100
300
+ value: 0.9375
301
+ name: Cosine Map@100
302
+ ---
303
+
304
+ # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
305
+
306
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
307
+
308
+ ## Model Details
309
+
310
+ ### Model Description
311
+ - **Model Type:** Sentence Transformer
312
+ - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
313
+ - **Maximum Sequence Length:** 512 tokens
314
+ - **Output Dimensionality:** 1024 dimensions
315
+ - **Similarity Function:** Cosine Similarity
316
+ <!-- - **Training Dataset:** Unknown -->
317
+ <!-- - **Language:** Unknown -->
318
+ <!-- - **License:** Unknown -->
319
+
320
+ ### Model Sources
321
+
322
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
323
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
324
+ - **HF中国镜像站:** [Sentence Transformers on HF中国镜像站](https://huggingface.co/models?library=sentence-transformers)
325
+
326
+ ### Full Model Architecture
327
+
328
+ ```
329
+ SentenceTransformer(
330
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
331
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
332
+ (2): Normalize()
333
+ )
334
+ ```
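The stack above is a BERT encoder followed by CLS-token pooling and L2 normalization, so cosine similarity between outputs reduces to a dot product of unit vectors. Below is a minimal sketch of that computation using `transformers` directly, assuming the `llm-wizard/legal-ft-v0` checkpoint id from the usage example that follows; the SentenceTransformer API remains the supported path.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Reproduce the module stack by hand: Transformer -> CLS pooling -> Normalize.
tokenizer = AutoTokenizer.from_pretrained("llm-wizard/legal-ft-v0")
model = AutoModel.from_pretrained("llm-wizard/legal-ft-v0")

texts = ["An example sentence", "Another example sentence"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state           # [batch, seq_len, 1024]

cls = hidden[:, 0]                                       # CLS-token pooling
embeddings = torch.nn.functional.normalize(cls, dim=1)  # Normalize() module

# Unit-length vectors: cosine similarity is just a matrix product.
print(embeddings @ embeddings.T)
```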
335
+
336
+ ## Usage
337
+
338
+ ### Direct Usage (Sentence Transformers)
339
+
340
+ First install the Sentence Transformers library:
341
+
342
+ ```bash
343
+ pip install -U sentence-transformers
344
+ ```
345
+
346
+ Then you can load this model and run inference.
347
+ ```python
348
+ from sentence_transformers import SentenceTransformer
349
+
350
+ # Download from the 🤗 Hub
351
+ model = SentenceTransformer("llm-wizard/legal-ft-v0")
352
+ # Run inference
353
+ sentences = [
354
+ 'What is the significance of prompt engineering in DALL-E 3?',
355
+ 'Now add a walrus: Prompt engineering in DALL-E 3\n32.8k\n41.2k\n\n\nWeb LLM runs the vicuna-7b Large Language Model entirely in your browser, and it’s very impressive\n32.5k\n38.2k\n\n\nChatGPT can’t access the internet, even though it really looks like it can\n30.5k\n34.2k\n\n\nStanford Alpaca, and the acceleration of on-device large language model development\n29.7k\n35.7k\n\n\nRun Llama 2 on your own Mac using LLM and Homebrew\n27.9k\n33.6k\n\n\nMidjourney 5.1\n26.7k\n33.4k\n\n\nThink of language models like ChatGPT as a “calculator for words”\n25k\n31.8k\n\n\nMulti-modal prompt injection image attacks against GPT-4V\n23.7k\n27.4k',
356
+ 'The environmental impact got much, much worse\nThe much bigger problem here is the enormous competitive buildout of the infrastructure that is imagined to be necessary for these models in the future.\nCompanies like Google, Meta, Microsoft and Amazon are all spending billions of dollars rolling out new datacenters, with a very material impact on the electricity grid and the environment. There’s even talk of spinning up new nuclear power stations, but those can take decades.\nIs this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued crash in LLM prices might hint that it’s not. But would you want to be the big tech executive that argued NOT to build out this infrastructure only to be proven wrong in a few years’ time?',
357
+ ]
358
+ embeddings = model.encode(sentences)
359
+ print(embeddings.shape)
360
+ # [3, 1024]
361
+
362
+ # Get the similarity scores for the embeddings
363
+ similarities = model.similarity(embeddings, embeddings)
364
+ print(similarities.shape)
365
+ # [3, 3]
366
+ ```
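This repository's `config_sentence_transformers.json` also defines a `query` prompt ("Represent this sentence for searching relevant passages: "), which suggests queries should be encoded with that prompt for retrieval. A small sketch, assuming the same checkpoint and a toy corpus drawn from the widget examples above:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("llm-wizard/legal-ft-v0")

# Passages are encoded as-is; queries use the "query" prompt from
# config_sentence_transformers.json via prompt_name.
passages = [
    "Hobbyists can build their own fine-tuned models.",
    "The GPT-4 barrier was comprehensively broken.",
]
passage_embeddings = model.encode(passages)

query_embeddings = model.encode(
    ["How can hobbyists create their own fine-tuned models?"],
    prompt_name="query",
)

# Rank passages by the configured cosine similarity.
print(model.similarity(query_embeddings, passage_embeddings))
```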
367
+
368
+ <!--
369
+ ### Direct Usage (Transformers)
370
+
371
+ <details><summary>Click to see the direct usage in Transformers</summary>
372
+
373
+ </details>
374
+ -->
375
+
376
+ <!--
377
+ ### Downstream Usage (Sentence Transformers)
378
+
379
+ You can finetune this model on your own dataset.
380
+
381
+ <details><summary>Click to expand</summary>
382
+
383
+ </details>
384
+ -->
385
+
386
+ <!--
387
+ ### Out-of-Scope Use
388
+
389
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
390
+ -->
391
+
392
+ ## Evaluation
393
+
394
+ ### Metrics
395
+
396
+ #### Information Retrieval
397
+
398
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
399
+
400
+ | Metric | Value |
401
+ |:--------------------|:-----------|
402
+ | cosine_accuracy@1 | 0.875 |
403
+ | cosine_accuracy@3 | 1.0 |
404
+ | cosine_accuracy@5 | 1.0 |
405
+ | cosine_accuracy@10 | 1.0 |
406
+ | cosine_precision@1 | 0.875 |
407
+ | cosine_precision@3 | 0.3333 |
408
+ | cosine_precision@5 | 0.2 |
409
+ | cosine_precision@10 | 0.1 |
410
+ | cosine_recall@1 | 0.875 |
411
+ | cosine_recall@3 | 1.0 |
412
+ | cosine_recall@5 | 1.0 |
413
+ | cosine_recall@10 | 1.0 |
414
+ | **cosine_ndcg@10** | **0.9539** |
415
+ | cosine_mrr@10 | 0.9375 |
416
+ | cosine_map@100 | 0.9375 |
417
+
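These numbers come from the `InformationRetrievalEvaluator` linked above. A hedged sketch of how such an evaluation is wired up; the query, corpus, and relevance dictionaries below are placeholders, not the held-out split used for this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("llm-wizard/legal-ft-v0")

# Placeholder data: id -> query text, id -> passage text, query id -> relevant passage ids.
queries = {"q1": "How can hobbyists create their own fine-tuned models?"}
corpus = {
    "d1": "Hobbyists can build their own fine-tuned models ...",
    "d2": "The GPT-4 barrier was comprehensively broken ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="example-eval",
)
print(evaluator(model))  # cosine_accuracy@k, cosine_ndcg@10, cosine_map@100, ...
```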
418
+ <!--
419
+ ## Bias, Risks and Limitations
420
+
421
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
422
+ -->
423
+
424
+ <!--
425
+ ### Recommendations
426
+
427
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
428
+ -->
429
+
430
+ ## Training Details
431
+
432
+ ### Training Dataset
433
+
434
+ #### Unnamed Dataset
435
+
436
+ * Size: 156 training samples
437
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
438
+ * Approximate statistics based on the first 156 samples:
439
+ | | sentence_0 | sentence_1 |
440
+ |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
441
+ | type | string | string |
442
+ | details | <ul><li>min: 11 tokens</li><li>mean: 20.34 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 134.95 tokens</li><li>max: 214 tokens</li></ul> |
443
+ * Samples:
444
+ | sentence_0 | sentence_1 |
445
+ |:---------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
446
+ | <code>What model do I run on my iPhone?</code> | <code>I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model) on my iPhone. You can install several different apps to get your own, local, completely private LLM. My own LLM project provides a CLI tool for running an array of different models via plugins.<br>You can even run them entirely in your browser using WebAssembly and the latest Chrome!<br>Hobbyists can build their own fine-tuned models<br>I said earlier that building an LLM was still out of reach of hobbyists. That may be true for training from scratch, but fine-tuning one of those models is another matter entirely.</code> |
447
+ | <code>How can hobbyists create their own fine-tuned models?</code> | <code>I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model) on my iPhone. You can install several different apps to get your own, local, completely private LLM. My own LLM project provides a CLI tool for running an array of different models via plugins.<br>You can even run them entirely in your browser using WebAssembly and the latest Chrome!<br>Hobbyists can build their own fine-tuned models<br>I said earlier that building an LLM was still out of reach of hobbyists. That may be true for training from scratch, but fine-tuning one of those models is another matter entirely.</code> |
448
+ | <code>What is the total cost to process 68,000 images mentioned in the context?</code> | <code>That’s a total cost of $1.68 to process 68,000 images. That’s so absurdly cheap I had to run the numbers three times to confirm I got it right.<br>How good are those descriptions? Here’s what I got from this command:<br>llm -m gemini-1.5-flash-8b-latest describe -a IMG_1825.jpeg</code> |
449
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
450
+ ```json
451
+ {
452
+ "loss": "MultipleNegativesRankingLoss",
453
+ "matryoshka_dims": [
454
+ 768,
455
+ 512,
456
+ 256,
457
+ 128,
458
+ 64
459
+ ],
460
+ "matryoshka_weights": [
461
+ 1,
462
+ 1,
463
+ 1,
464
+ 1,
465
+ 1
466
+ ],
467
+ "n_dims_per_step": -1
468
+ }
469
+ ```
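Because `MatryoshkaLoss` trains the leading 768/512/256/128/64 dimensions to work on their own, the embeddings can be truncated for cheaper storage and search at a modest quality cost. A hedged usage sketch: `truncate_dim` is a standard Sentence Transformers argument, and 256 is just an example size.

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of every embedding.
model_256 = SentenceTransformer("llm-wizard/legal-ft-v0", truncate_dim=256)

embeddings = model_256.encode([
    "What is the significance of prompt engineering in DALL-E 3?",
    "How can hobbyists create their own fine-tuned models?",
])
print(embeddings.shape)  # (2, 256)

# Truncated embeddings are scored the same way as full-size ones.
print(model_256.similarity(embeddings, embeddings))
```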
470
+
471
+ ### Training Hyperparameters
472
+ #### Non-Default Hyperparameters
473
+
474
+ - `eval_strategy`: steps
475
+ - `per_device_train_batch_size`: 10
476
+ - `per_device_eval_batch_size`: 10
477
+ - `num_train_epochs`: 10
478
+ - `multi_dataset_batch_sampler`: round_robin
479
+
480
+ #### All Hyperparameters
481
+ <details><summary>Click to expand</summary>
482
+
483
+ - `overwrite_output_dir`: False
484
+ - `do_predict`: False
485
+ - `eval_strategy`: steps
486
+ - `prediction_loss_only`: True
487
+ - `per_device_train_batch_size`: 10
488
+ - `per_device_eval_batch_size`: 10
489
+ - `per_gpu_train_batch_size`: None
490
+ - `per_gpu_eval_batch_size`: None
491
+ - `gradient_accumulation_steps`: 1
492
+ - `eval_accumulation_steps`: None
493
+ - `torch_empty_cache_steps`: None
494
+ - `learning_rate`: 5e-05
495
+ - `weight_decay`: 0.0
496
+ - `adam_beta1`: 0.9
497
+ - `adam_beta2`: 0.999
498
+ - `adam_epsilon`: 1e-08
499
+ - `max_grad_norm`: 1
500
+ - `num_train_epochs`: 10
501
+ - `max_steps`: -1
502
+ - `lr_scheduler_type`: linear
503
+ - `lr_scheduler_kwargs`: {}
504
+ - `warmup_ratio`: 0.0
505
+ - `warmup_steps`: 0
506
+ - `log_level`: passive
507
+ - `log_level_replica`: warning
508
+ - `log_on_each_node`: True
509
+ - `logging_nan_inf_filter`: True
510
+ - `save_safetensors`: True
511
+ - `save_on_each_node`: False
512
+ - `save_only_model`: False
513
+ - `restore_callback_states_from_checkpoint`: False
514
+ - `no_cuda`: False
515
+ - `use_cpu`: False
516
+ - `use_mps_device`: False
517
+ - `seed`: 42
518
+ - `data_seed`: None
519
+ - `jit_mode_eval`: False
520
+ - `use_ipex`: False
521
+ - `bf16`: False
522
+ - `fp16`: False
523
+ - `fp16_opt_level`: O1
524
+ - `half_precision_backend`: auto
525
+ - `bf16_full_eval`: False
526
+ - `fp16_full_eval`: False
527
+ - `tf32`: None
528
+ - `local_rank`: 0
529
+ - `ddp_backend`: None
530
+ - `tpu_num_cores`: None
531
+ - `tpu_metrics_debug`: False
532
+ - `debug`: []
533
+ - `dataloader_drop_last`: False
534
+ - `dataloader_num_workers`: 0
535
+ - `dataloader_prefetch_factor`: None
536
+ - `past_index`: -1
537
+ - `disable_tqdm`: False
538
+ - `remove_unused_columns`: True
539
+ - `label_names`: None
540
+ - `load_best_model_at_end`: False
541
+ - `ignore_data_skip`: False
542
+ - `fsdp`: []
543
+ - `fsdp_min_num_params`: 0
544
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
545
+ - `fsdp_transformer_layer_cls_to_wrap`: None
546
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
547
+ - `deepspeed`: None
548
+ - `label_smoothing_factor`: 0.0
549
+ - `optim`: adamw_torch
550
+ - `optim_args`: None
551
+ - `adafactor`: False
552
+ - `group_by_length`: False
553
+ - `length_column_name`: length
554
+ - `ddp_find_unused_parameters`: None
555
+ - `ddp_bucket_cap_mb`: None
556
+ - `ddp_broadcast_buffers`: False
557
+ - `dataloader_pin_memory`: True
558
+ - `dataloader_persistent_workers`: False
559
+ - `skip_memory_metrics`: True
560
+ - `use_legacy_prediction_loop`: False
561
+ - `push_to_hub`: False
562
+ - `resume_from_checkpoint`: None
563
+ - `hub_model_id`: None
564
+ - `hub_strategy`: every_save
565
+ - `hub_private_repo`: None
566
+ - `hub_always_push`: False
567
+ - `gradient_checkpointing`: False
568
+ - `gradient_checkpointing_kwargs`: None
569
+ - `include_inputs_for_metrics`: False
570
+ - `include_for_metrics`: []
571
+ - `eval_do_concat_batches`: True
572
+ - `fp16_backend`: auto
573
+ - `push_to_hub_model_id`: None
574
+ - `push_to_hub_organization`: None
575
+ - `mp_parameters`:
576
+ - `auto_find_batch_size`: False
577
+ - `full_determinism`: False
578
+ - `torchdynamo`: None
579
+ - `ray_scope`: last
580
+ - `ddp_timeout`: 1800
581
+ - `torch_compile`: False
582
+ - `torch_compile_backend`: None
583
+ - `torch_compile_mode`: None
584
+ - `dispatch_batches`: None
585
+ - `split_batches`: None
586
+ - `include_tokens_per_second`: False
587
+ - `include_num_input_tokens_seen`: False
588
+ - `neftune_noise_alpha`: None
589
+ - `optim_target_modules`: None
590
+ - `batch_eval_metrics`: False
591
+ - `eval_on_start`: False
592
+ - `use_liger_kernel`: False
593
+ - `eval_use_gather_object`: False
594
+ - `average_tokens_across_devices`: False
595
+ - `prompts`: None
596
+ - `batch_sampler`: batch_sampler
597
+ - `multi_dataset_batch_sampler`: round_robin
598
+
599
+ </details>
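To make the configuration above concrete, here is a hedged end-to-end training sketch using the non-default hyperparameters listed earlier (batch size 10, 10 epochs) and the two losses named in this card. The two-column dataset is a placeholder standing in for the 156 question/passage pairs; the original run also evaluated every few steps with the retrieval evaluator shown above.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder (question, passage) pairs in the sentence_0 / sentence_1 format above.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "How can hobbyists create their own fine-tuned models?",
        "What model do I run on my iPhone?",
    ],
    "sentence_1": [
        "Fine-tuning one of those models is another matter entirely ...",
        "I run Mistral 7B (a surprisingly great model) on my iPhone ...",
    ],
})

# MultipleNegativesRankingLoss uses other in-batch passages as negatives;
# MatryoshkaLoss applies it at each truncated dimensionality.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0",  # placeholder output path
    num_train_epochs=10,
    per_device_train_batch_size=10,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```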
600
+
601
+ ### Training Logs
602
+ | Epoch | Step | cosine_ndcg@10 |
603
+ |:-----:|:----:|:--------------:|
604
+ | 1.0 | 16 | 0.9638 |
605
+ | 2.0 | 32 | 0.9539 |
606
+ | 3.0 | 48 | 0.9539 |
607
+ | 3.125 | 50 | 0.9539 |
608
+ | 4.0 | 64 | 0.9539 |
609
+ | 5.0 | 80 | 0.9539 |
610
+ | 6.0 | 96 | 0.9539 |
611
+ | 6.25 | 100 | 0.9539 |
612
+ | 7.0 | 112 | 0.9539 |
613
+ | 8.0 | 128 | 0.9539 |
614
+ | 9.0 | 144 | 0.9539 |
615
+ | 9.375 | 150 | 0.9539 |
616
+ | 10.0 | 160 | 0.9539 |
617
+
618
+
619
+ ### Framework Versions
620
+ - Python: 3.11.11
621
+ - Sentence Transformers: 3.4.1
622
+ - Transformers: 4.48.2
623
+ - PyTorch: 2.5.1+cu124
624
+ - Accelerate: 1.3.0
625
+ - Datasets: 3.2.0
626
+ - Tokenizers: 0.21.0
627
+
628
+ ## Citation
629
+
630
+ ### BibTeX
631
+
632
+ #### Sentence Transformers
633
+ ```bibtex
634
+ @inproceedings{reimers-2019-sentence-bert,
635
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
636
+ author = "Reimers, Nils and Gurevych, Iryna",
637
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
638
+ month = "11",
639
+ year = "2019",
640
+ publisher = "Association for Computational Linguistics",
641
+ url = "https://arxiv.org/abs/1908.10084",
642
+ }
643
+ ```
644
+
645
+ #### MatryoshkaLoss
646
+ ```bibtex
647
+ @misc{kusupati2024matryoshka,
648
+ title={Matryoshka Representation Learning},
649
+ author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
650
+ year={2024},
651
+ eprint={2205.13147},
652
+ archivePrefix={arXiv},
653
+ primaryClass={cs.LG}
654
+ }
655
+ ```
656
+
657
+ #### MultipleNegativesRankingLoss
658
+ ```bibtex
659
+ @misc{henderson2017efficient,
660
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
661
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
662
+ year={2017},
663
+ eprint={1705.00652},
664
+ archivePrefix={arXiv},
665
+ primaryClass={cs.CL}
666
+ }
667
+ ```
668
+
669
+ <!--
670
+ ## Glossary
671
+
672
+ *Clearly define terms in order to be accessible across audiences.*
673
+ -->
674
+
675
+ <!--
676
+ ## Model Card Authors
677
+
678
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
679
+ -->
680
+
681
+ <!--
682
+ ## Model Card Contact
683
+
684
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
685
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.48.2",
+     "pytorch": "2.5.1+cu124"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd816b902f0c87a3a986538fa7ce4611e2c9becb13e50963233366d7905e0ab5
+ size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff