---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Why does the author find the term "agents" extremely frustrating?
sentences:
- >-
We already knew LLMs were spookily good at writing code. If you prompt
them right, it turns out they can build you a full interactive
application using HTML, CSS and JavaScript (and tools like React if you
wire up some extra supporting build mechanisms)—often in a single
prompt.
Anthropic kicked this idea into high gear when they released Claude
Artifacts, a groundbreaking new feature that was initially slightly lost
in the noise due to being described half way through their announcement
of the incredible Claude 3.5 Sonnet.
With Artifacts, Claude can write you an on-demand interactive
application and then let you use it directly inside the Claude
interface.
Here’s my Extract URLs app, entirely generated by Claude:
- >-
“Agents” still haven’t really happened yet
I find the term “agents” extremely frustrating. It lacks a single, clear
and widely understood meaning... but the people who use the term never
seem to acknowledge that.
If you tell me that you are building “agents”, you’ve conveyed almost no
information to me at all. Without reading your mind I have no way of
telling which of the dozens of possible definitions you are talking
about.
- >-
I love the term “slop” because it so succinctly captures one of the ways
we should not be using generative AI!
Slop was even in the running for Oxford Word of the Year 2024, but it
lost to brain rot.
Synthetic training data works great
An idea that surprisingly seems to have stuck in the public
consciousness is that of “model collapse”. This was first described in
the paper The Curse of Recursion: Training on Generated Data Makes
Models Forget in May 2023, and repeated in Nature in July 2024 with the
more eye-catching headline AI models collapse when trained on
recursively generated data.
- source_sentence: >-
What paper did Meta publish in December that is relevant to
inference-scaling models?
sentences:
- >-
My personal laptop is a 64GB M2 MacBook Pro from 2023. It’s a powerful
machine, but it’s also nearly two years old now—and crucially it’s the
same laptop I’ve been using ever since I first ran an LLM on my computer
back in March 2023 (see Large language models are having their Stable
Diffusion moment).
That same laptop that could just about run a GPT-3-class model in March
last year has now run multiple GPT-4 class models! Some of my notes on
that:
- >-
Nothing yet from Anthropic or Meta but I would be very surprised if they
don’t have their own inference-scaling models in the works. Meta
published a relevant paper Training Large Language Models to Reason in a
Continuous Latent Space in December.
Was the best currently available LLM trained in China for less than $6m?
Not quite, but almost! It does make for a great attention-grabbing
headline.
The big news to end the year was the release of DeepSeek v3—dropped on
Hugging Face on Christmas Day without so much as a README file, then
followed by documentation and a paper the day after that.
- |-
The GPT-4 barrier was comprehensively broken
Some of those GPT-4 models run on my laptop
LLM prices crashed, thanks to competition and increased efficiency
Multimodal vision is common, audio and video are starting to emerge
Voice and live camera mode are science fiction come to life
Prompt driven app generation is a commodity already
Universal access to the best models lasted for just a few short months
“Agents” still haven’t really happened yet
Evals really matter
Apple Intelligence is bad, Apple’s MLX library is excellent
The rise of inference-scaling “reasoning” models
Was the best currently available LLM trained in China for less than $6m?
The environmental impact got better
The environmental impact got much, much worse
- source_sentence: >-
How does the performance of the Llama 32 3B model compare to GPT-4
according to the context?
sentences:
- >-
I think this means that, as individual users, we don’t need to feel any
guilt at all for the energy consumed by the vast majority of our
prompts. The impact is likely neglible compared to driving a car down
the street or maybe even watching a video on YouTube.
Likewise, training. DeepSeek v3 training for less than $6m is a
fantastic sign that training costs can and should continue to drop.
For less efficient models I find it useful to compare their energy usage
to commercial flights. The largest Llama 3 model cost about the same as
a single digit number of fully loaded passenger flights from New York to
London. That’s certainly not nothing, but once trained that model can be
used by millions of people at no extra training cost.
- >-
Meta’s Llama 3.2 models deserve a special mention. They may not be GPT-4
class, but at 1B and 3B sizes they punch massively above their weight. I
run Llama 3.2 3B on my iPhone using the free MLC Chat iOS app and it’s a
shockingly capable model for its tiny (<2GB) size. Try firing it up and
asking it for “a plot outline of a Netflix Christmas movie where a data
journalist falls in love with a local ceramacist”. Here’s what I got, at
a respectable 20 tokens per second:
- >-
Prince Canuma’s excellent, fast moving mlx-vlm project brings vision
LLMs to Apple Silicon as well. I used that recently to run Qwen’s QvQ.
While MLX is a game changer, Apple’s own “Apple Intelligence” features
have mostly been a disappointment. I wrote about their initial
announcement in June, and I was optimistic that Apple had focused hard
on the subset of LLM applications that preserve user privacy and
minimize the chance of users getting mislead by confusing features.
- source_sentence: >-
What was introduced by the Chatbot Arena team in December regarding user
interaction with models?
sentences:
- >-
The year of slop
2024 was the year that the word "slop" became a term of art. I wrote
about this in May, expanding on this tweet by @deepfates:
- >-
The two main categories I see are people who think AI agents are
obviously things that go and act on your behalf—the travel agent
model—and people who think in terms of LLMs that have been given access
to tools which they can run in a loop as part of solving a problem. The
term “autonomy” is often thrown into the mix too, again without
including a clear definition.
(I also collected 211 definitions on Twitter a few months ago—here they
are in Datasette Lite—and had gemini-exp-1206 attempt to summarize
them.)
Whatever the term may mean, agents still have that feeling of
perpetually “coming soon”.
- >-
Then in December, the Chatbot Arena team introduced a whole new
leaderboard for this feature, driven by users building the same
interactive app twice with two different models and voting on the
answer. Hard to come up with a more convincing argument that this
feature is now a commodity that can be effectively implemented against
all of the leading models.
I’ve been tinkering with a version of this myself for my Datasette
project, with the goal of letting users use prompts to build and iterate
on custom widgets and data visualizations against their own data. I also
figured out a similar pattern for writing one-shot Python programs,
enabled by uv.
- source_sentence: >-
What does the cost of training the DeepSeek v3 model suggest about the
future of training costs for AI models?
sentences:
- >-
I think this means that, as individual users, we don’t need to feel any
guilt at all for the energy consumed by the vast majority of our
prompts. The impact is likely neglible compared to driving a car down
the street or maybe even watching a video on YouTube.
Likewise, training. DeepSeek v3 training for less than $6m is a
fantastic sign that training costs can and should continue to drop.
For less efficient models I find it useful to compare their energy usage
to commercial flights. The largest Llama 3 model cost about the same as
a single digit number of fully loaded passenger flights from New York to
London. That’s certainly not nothing, but once trained that model can be
used by millions of people at no extra training cost.
- >-
There’s still plenty to worry about with respect to the environmental
impact of the great AI datacenter buildout, but a lot of the concerns
over the energy cost of individual prompts are no longer credible.
Here’s a fun napkin calculation: how much would it cost to generate
short descriptions of every one of the 68,000 photos in my personal
photo library using Google’s Gemini 1.5 Flash 8B (released in October),
their cheapest model?
Each photo would need 260 input tokens and around 100 output tokens.
260 * 68,000 = 17,680,000 input tokens
17,680,000 * $0.0375/million = $0.66
100 * 68,000 = 6,800,000 output tokens
6,800,000 * $0.15/million = $1.02
- |-
Large Language Models
They’re actually quite easy to build
You can run LLMs on your own devices
Hobbyists can build their own fine-tuned models
We don’t yet know how to build GPT-4
Vibes Based Development
LLMs are really smart, and also really, really dumb
Gullibility is the biggest unsolved problem
Code may be the best application
The ethics of this space remain diabolically complex
My blog in 2023
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.75
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9583333333333334
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.75
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3194444444444444
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.75
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9583333333333334
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8884777424494903
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8506944444444445
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8506944444444443
name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- HF中国镜像站: Sentence Transformers on HF中国镜像站
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
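
You can confirm the sequence length and output dimensionality from Python (a minimal sketch; the repository id is the one used in the Usage section below):

```python
from sentence_transformers import SentenceTransformer

# Load the fine-tuned model from the Hub
model = SentenceTransformer("Gonalb/legal-ft-v0")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024
```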
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Gonalb/legal-ft-v0")
# Run inference
sentences = [
'What does the cost of training the DeepSeek v3 model suggest about the future of training costs for AI models?',
'I think this means that, as individual users, we don’t need to feel any guilt at all for the energy consumed by the vast majority of our prompts. The impact is likely neglible compared to driving a car down the street or maybe even watching a video on YouTube.\nLikewise, training. DeepSeek v3 training for less than $6m is a fantastic sign that training costs can and should continue to drop.\nFor less efficient models I find it useful to compare their energy usage to commercial flights. The largest Llama 3 model cost about the same as a single digit number of fully loaded passenger flights from New York to London. That’s certainly not nothing, but once trained that model can be used by millions of people at no extra training cost.',
'Large Language Models\nThey’re actually quite easy to build\nYou can run LLMs on your own devices\nHobbyists can build their own fine-tuned models\nWe don’t yet know how to build GPT-4\nVibes Based Development\nLLMs are really smart, and also really, really dumb\nGullibility is the biggest unsolved problem\nCode may be the best application\nThe ethics of this space remain diabolically complex\nMy blog in 2023',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
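
Because the model was fine-tuned for retrieval, a common pattern is to embed a corpus once and rank passages per query. Here is a minimal sketch using util.semantic_search; the corpus and query strings are illustrative placeholders, not the training data:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Gonalb/legal-ft-v0")

# Illustrative placeholder passages
corpus = [
    "DeepSeek v3 was trained for less than $6m.",
    "Claude Artifacts lets Claude build interactive apps on demand.",
]
query = "How much did it cost to train DeepSeek v3?"

# Embed once, then rank by cosine similarity
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f'{hit["score"]:.3f}', corpus[hit["corpus_id"]])
```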
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.75 |
cosine_accuracy@3 | 0.9583 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.75 |
cosine_precision@3 | 0.3194 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.75 |
cosine_recall@3 | 0.9583 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.8885 |
cosine_mrr@10 | 0.8507 |
cosine_map@100 | 0.8507 |
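
These metrics come from running InformationRetrievalEvaluator against a set of question/passage pairs. A minimal sketch of the same kind of evaluation follows; the toy queries, corpus, and relevance judgments below are hypothetical, since the card's evaluation set is not published here:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Gonalb/legal-ft-v0")

# Hypothetical toy data: id -> text, plus the relevant corpus ids per query
queries = {"q1": "Why does the author find the term 'agents' frustrating?"}
corpus = {
    "d1": "I find the term 'agents' extremely frustrating. It lacks a single, "
          "clear and widely understood meaning.",
    "d2": "Slop was even in the running for Oxford Word of the Year 2024.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
print(evaluator(model))  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
```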
Training Details
Training Dataset
Unnamed Dataset
- Size: 156 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 156 samples:
|  | sentence_0 | sentence_1 |
|---|---|---|
| type | string | string |
| details | min: 13 tokens<br>mean: 20.15 tokens<br>max: 29 tokens | min: 43 tokens<br>mean: 130.44 tokens<br>max: 204 tokens |
- Samples:

| sentence_0 | sentence_1 |
|---|---|
| What tools did the author describe in their writing about Claude Artifacts? | I’ve found myself using this a lot. I noticed how much I was relying on it in October and wrote Everything I built with Claude Artifacts this week, describing 14 little tools I had put together in a seven day period.<br>Since then, a whole bunch of other teams have built similar systems. GitHub announced their version of this—GitHub Spark—in October. Mistral Chat added it as a feature called Canvas in November.<br>Steve Krouse from Val Town built a version of it against Cerebras, showcasing how a 2,000 token/second LLM can iterate on an application with changes visible in less than a second. |
| What is the name of the feature added by Mistral Chat in November? | I’ve found myself using this a lot. I noticed how much I was relying on it in October and wrote Everything I built with Claude Artifacts this week, describing 14 little tools I had put together in a seven day period.<br>Since then, a whole bunch of other teams have built similar systems. GitHub announced their version of this—GitHub Spark—in October. Mistral Chat added it as a feature called Canvas in November.<br>Steve Krouse from Val Town built a version of it against Cerebras, showcasing how a 2,000 token/second LLM can iterate on an application with changes visible in less than a second. |
| Why does the author find the term "agents" extremely frustrating? | “Agents” still haven’t really happened yet<br>I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that.<br>If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about. |

- Loss: MatryoshkaLoss with these parameters: {"loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [768, 512, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1, 1], "n_dims_per_step": -1}
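
In code, this corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss with the dimensions above. A minimal sketch of the loss construction only, not the full training run:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# In-batch negatives ranking loss, trained at several truncated embedding sizes
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```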
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
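
These settings map directly onto SentenceTransformerTrainingArguments and SentenceTransformerTrainer. A minimal sketch, assuming a pair dataset with the sentence_0/sentence_1 columns described above; the output_dir and the one-row dataset are hypothetical stand-ins:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# Hypothetical one-row stand-in for the 156 (sentence_0, sentence_1) pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["Why does the author find the term 'agents' frustrating?"],
    "sentence_1": ["I find the term 'agents' extremely frustrating..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0",  # hypothetical output path
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # re-used here only to keep the sketch runnable
    loss=loss,
)
trainer.train()
```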
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 16 | 0.8968 |
2.0 | 32 | 0.8939 |
3.0 | 48 | 0.8786 |
3.125 | 50 | 0.8731 |
4.0 | 64 | 0.8702 |
5.0 | 80 | 0.8684 |
6.0 | 96 | 0.8713 |
6.25 | 100 | 0.8731 |
7.0 | 112 | 0.8885 |
8.0 | 128 | 0.8856 |
9.0 | 144 | 0.8885 |
9.375 | 150 | 0.8885 |
10.0 | 160 | 0.8885 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}