metadata
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
widget:
- source_sentence: >-
1. What is the significance of the beginning of thinking and end of
thinking tokens in the context of recurrent depth?
2. How does the concept of thinking about a single token relate to the
overall sequence in the discussion?
sentences:
- >-
done by researchers at Lawrence Livermore and elsewhere um this is not
this not you know something that product teams are using today this is
this is very much a research grade thing that is cool and we're seeing
some you know early signs that it's potentially quite useful um I I
wanna I want to zoom in on on like just when people think about the the
actual how of this when they think about actually implementing this in
in maybe one of their applications so whereas in the Coconut space
you're you're going to go and you're G to you're gonna like oh nope not
going out into natural language space just going to sit here and chew on
it chew on it chew on it and then I'm gonna pop out my final answer so
all you get is final answer baby it's called a blackbox okay when we go
to the recurrent depth piece um you said something interesting earlier
when we were chatting about this and it was it was like I'm going to
think and think and think and think and think and all of a sudden I know
and I'm
- >-
chains of thought and this is where this idea of test time compute came
up and this was a paper from Google in August last year called scaling
test time compute you know it's basically taking that scaling paper
originally and saying well now we have this sort of other axis to scale
on and again this is the idea that we're anthropomorphizing a little bit
but humans tend to think longer on difficult problems maybe we should
let machines do that and when we think of test time Compu it's just time
spent thinking you know and so if we we think about kind of how we can
leverage this we've seen some of these things come out in recent weeks
and recent months we talked about deep seek R1 just last week and you
know this is the same idea it thinks before it answers and this is again
just sort of the next step in the evolution of what we've got going on
here and we saw moreover deep seek one generates one token at a time
it's able to spend more time processing and it generates these thinking
- >-
piece um you said something interesting earlier when we were chatting
about this and it was it was like I'm going to think and think and think
and think and think and all of a sudden I know and I'm going to do that
just with one token is sort of the recurrent depth paper and then I'm
going to let the rest of the tokens stream like normal right so so
there's this real interesting idea of like which things are you thinking
about and this is where the idea of a beginning of thinking end of
thinking token or a beginning of sequence end of sequence token comes
into play you can think about the sequence or you can think about the
thinking is this does this make sense um yeah okay okay we can think
about the thinking right so we can uh we can double we can double it up
uh yeah we can think about the thing yeah yeah okay okay so so recurrent
depth in short I mean is like you think about a single token and then
you let the sequence go like that's what I thought was interesting and
and maybe
- source_sentence: >-
1. What are the different systems mentioned in the context, and how are
they categorized?
2. How does the concept of "Meta Meta learning" relate to the discussion
in the context?
sentences:
- >-
right we kind of got to go a little bit more into the blackbox we gota
go back beyond the unknown yeah it happens but it's it's it's it's the
the timing is right and uh with companies like you know Nvidia with
companies the other accelerators that are that are coming out they're
super good at inference Gro and S all these other peeps right uh we're
getting real fast at inference and so the spending that time you know
becomes less and less impactful to the user experience but more
importantly uh you know we have a lot of applications LMS aren't good
for yet where we don't care about response latency like research like uh
PhD level math where it's like it doesn't matter if it takes a day yeah
because that means it didn't take some some other person a day right
like that's the that's the the we're at this time the models are capable
enough that we can think about problems that we can't just do ourselves
faster it the whole the whole you know ecosystem is set up for this to
be the right
- >-
perceptron artificial neural networks and single neurons to many neurons
and so on but it really really got going with deep learning and we we
saw we train bigger and bigger models we get better and better output
and results this has been known for years and it's it's known even today
I mean this is from the CEO of anthropic Dario amade and you know we
want to think about the the the place that we are based on where we've
been want to put this in context here so we go from pre-training which
is the thing that sort of takes a long time it's the it's the show goth
it's got all the possibilities we can sort of think of this as months
many many tens of thousands of gpus maybe more these days and as we saw
at nurs this past uh you know year Ilia suit noted that pre-training as
we know it will end now we've seen that there's been a lot of focus
subsequently especially you know out in public on posttraining and on
this idea of rhf and RL we talked about this in our recent reasoning
model
- >-
we've got we've got sort of this idea may we could call it system one is
GPT 40 mini and then we could say Okay reasoning model that's spitting
out tokens is like is like system two and then maybe like we've got this
sort of system three that's in in in latent space and then we've got
this like system four that's like that's like you know On Any Given
specific type of of of thing I might want to extra think about in latent
space I might go deeper on it so so there's just this real sort of very
interesting abstraction Ed meta thinking at play here and um and it gets
back to me to sort of the idea of like kind of Meta Meta learning I mean
they they really they really did away with our ability to use that term
and in the gpt3 paper so um we're at the end of sort of the words that
we that we can concretely say we know what to do but we know what to do
with the code so I want to go ahead and just quickly introduce this to
you uh whiz we're going to do a quick demo here and we're going to
- source_sentence: >-
1. What does the term "tokenless" refer to in the context of scaling and
model architecture?
2. How does the looping process contribute to generating responses without
resolving back to tokens?
sentences:
- >-
allow us to scil uh that seems a little a little weird to me I'm not
it's not very intuitive what's your intuition behind this yeah so I the
word tokenless is a is a is a fun one I would say like the idea is we
don't need to resolve back to tokens to scale because we can just keep
looping around before we get to tokens right really anything that allows
us to keep looping around until we get to The Final Answer uh is is g to
allow us to scale right because we can say well Loop more Loop more Loop
more right like uh W with with uh you know with say like generating a
response right if I generate for five tokens versus if I generate for
70,000 tokens right like we we have this idea of a scaling axis on the
on the inference side this is just kind of shoving that inside the uh
the model architecture as opposed to allowing it to resolve to token
space but the idea is we're still adding information at every step at
every stage right that we do a a loop as it were so we're getting a
better and
- >-
perceptron artificial neural networks and single neurons to many neurons
and so on but it really really got going with deep learning and we we
saw we train bigger and bigger models we get better and better output
and results this has been known for years and it's it's known even today
I mean this is from the CEO of anthropic Dario amade and you know we
want to think about the the the place that we are based on where we've
been want to put this in context here so we go from pre-training which
is the thing that sort of takes a long time it's the it's the show goth
it's got all the possibilities we can sort of think of this as months
many many tens of thousands of gpus maybe more these days and as we saw
at nurs this past uh you know year Ilia suit noted that pre-training as
we know it will end now we've seen that there's been a lot of focus
subsequently especially you know out in public on posttraining and on
this idea of rhf and RL we talked about this in our recent reasoning
model
- >-
allow us to scil uh that seems a little a little weird to me I'm not
it's not very intuitive what's your intuition behind this yeah so I the
word tokenless is a is a is a fun one I would say like the idea is we
don't need to resolve back to tokens to scale because we can just keep
looping around before we get to tokens right really anything that allows
us to keep looping around until we get to The Final Answer uh is is g to
allow us to scale right because we can say well Loop more Loop more Loop
more right like uh W with with uh you know with say like generating a
response right if I generate for five tokens versus if I generate for
70,000 tokens right like we we have this idea of a scaling axis on the
on the inference side this is just kind of shoving that inside the uh
the model architecture as opposed to allowing it to resolve to token
space but the idea is we're still adding information at every step at
every stage right that we do a a loop as it were so we're getting a
better and
- source_sentence: >-
1. What is the relationship between latent space and natural language in
the context of a Transformer architecture?
2. How does the GPT style architecture process a sequence to predict the
next token?
sentences:
- >-
piece um you said something interesting earlier when we were chatting
about this and it was it was like I'm going to think and think and think
and think and think and all of a sudden I know and I'm going to do that
just with one token is sort of the recurrent depth paper and then I'm
going to let the rest of the tokens stream like normal right so so
there's this real interesting idea of like which things are you thinking
about and this is where the idea of a beginning of thinking end of
thinking token or a beginning of sequence end of sequence token comes
into play you can think about the sequence or you can think about the
thinking is this does this make sense um yeah okay okay we can think
about the thinking right so we can uh we can double we can double it up
uh yeah we can think about the thing yeah yeah okay okay so so recurrent
depth in short I mean is like you think about a single token and then
you let the sequence go like that's what I thought was interesting and
and maybe
- >-
it's kind of funny in a logical way if you look up logic it uses the
word reason and there we are caught in a loop but reasoning is about
thinking latent space is about using a representation of our data that
sort of captures the essential features of it we can think of latent
space as embedding space or the space of math and numbers in other words
it's just not the space of words and natural language let's think about
how this manifests in a Transformer architecture here I'm showing a GPT
style architecture from the gpt2 paper what we want to think about is we
want to put a sequence in and we want to get some next token prediction
out when we put the sequence in we're in the space of natural language
when we get the next token out we're in the space of natural language
betwix in between we're going to be in latent space we're going to be in
embedding space we're going to be in the space where we can do math and
stuff and importantly we can kind of think that we're putting in this
big
- >-
architecture is built upon a latent depth recurrent block that is run
for a randomly sampled number of iterations during training I can't see
it let's see what they gave us in the paper they gave us this bad boy
personally not a fan of this diagram whiz thinks it's totally fine and
it makes perfect sense let's see if we can break it down for you guys
here a visualization of the architecture we have the Prelude block we
have the recurrent block recurrent block recurrent block and then this
Koda so each block consists of a number of su layers okay the blue
Prelude block embeds the input into latent space all right where the
green shared recurrent block is a block of layers that is repeated to
compute the final latent state which is decoded by the layers of the red
Coda block back to our gp2 architecture diagram let's think about how
we're still kind of doing this loop back we're still doing this
reasoning in in space and now let's label the Prelude the recurrent
block and the Koda we
- source_sentence: >-
1. What is meant by the terms "hidden state," "latent space," and
"embedding space" in the context of reasoning models?
2. How do the last hidden states function as input embeddings in a typical
Chain of Thought reasoning model?
sentences:
- >-
the next step in the evolution of what we've got going on here and we
saw moreover deep seek one generates one token at a time it's able to
spend more time processing and it generates these thinking tokens that
explain its Chain of Thought So we're generating tokens during our
chains of thought and that makes them explainable very cool all right I
want to bring whiz back up for just a moment here okay just to be super
clear reasoning and test time compute do you think these are sort of you
know triple equal sign or or how would you how would you say you know
generally we're talking about the same thing but they're not the same
thing yeah they're they're this so so okay they're not literally the
same thing of course but they're also pretty much the same thing in in
how we talk about it in 2025 today that's right that's right so so
reasoning is some right now because our models are System One machines
right this is the this is the they're not reasoners they're they're uh
they're they're
- >-
impact of this kind of approach on test time compute scaling some of the
working hypotheses and some of the things people are interested in in
looking out there on the llm edge for as we continue to see the field
progress I want to demonstrate both approaches and check out the new
coconut Library as well so how we're going to go through this is we're
going to essentially introduce this idea of reasoning and latent space
then we're going to talk about the scaling part of this before we dig
into the specific approaches and we get the demo on both approaches by
the end so it should be a lot of fun today let's go ahead and dig in
reasoning in latent space let's root ourselves first in some definitions
when we talk about reasoning we're talking about the action of thinking
about something and it's kind of funny in a logical way if you look up
logic it uses the word reason and there we are caught in a loop but
reasoning is about thinking latent space is about using a representation
of our
- >-
of the reasoning State when we say hidden state or latent space or
embedding space or this sort of space of math and computation we're
talking about the same space of course the the exact state of the space
changes depending on where we are in the Transformer but let's take a
look at the image from the paper in a typical Chain of Thought reasoning
model we're going to ask a question we're going to generate some tokens
and we're going to think kind of out loud we're we're going to let the
chains of thought flow when you click into the Chain of Thought on 01 as
we've seen before you can see sort of in the side panel the the steps
it's thinking through now conversely to actually thinking out loud we
have here that the last hidden states are used as input embeddings okay
well what does this mean well it let's go back to our gpt2 style diagram
and think about this the input embeddings here are where we're
essentially looping back to so what we do is we kind of loop back before
we generate
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-Qwen2-1.5B-instruct
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.875
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.875
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.1
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.875
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9538662191964322
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9375
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9375
name: Cosine Map@100
SentenceTransformer based on Alibaba-NLP/gte-Qwen2-1.5B-instruct
This is a sentence-transformers model finetuned from Alibaba-NLP/gte-Qwen2-1.5B-instruct. It maps sentences & paragraphs to a 1536-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
- Maximum Sequence Length: 32768 tokens
- Output Dimensionality: 1536 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- HF中国镜像站: Sentence Transformers on HF中国镜像站
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 32768, 'do_lower_case': False}) with Transformer model: Qwen2Model
(1): Pooling({'word_embedding_dimension': 1536, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
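
The listed properties can be verified directly on the loaded model. A minimal sketch, assuming the model id kenrogers/gte-ft-yt-2 used in the Usage section below is reachable from your environment:

```python
from sentence_transformers import SentenceTransformer

# Load the fine-tuned model and confirm the properties listed above.
model = SentenceTransformer("kenrogers/gte-ft-yt-2")

print(model)                                     # Transformer -> Pooling (last token) -> Normalize
print(model.get_sentence_embedding_dimension())  # 1536
print(model.max_seq_length)                      # 32768
```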
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("kenrogers/gte-ft-yt-2")
# Run inference
sentences = [
'1. What is meant by the terms "hidden state," "latent space," and "embedding space" in the context of reasoning models?\n2. How do the last hidden states function as input embeddings in a typical Chain of Thought reasoning model?',
"of the reasoning State when we say hidden state or latent space or embedding space or this sort of space of math and computation we're talking about the same space of course the the exact state of the space changes depending on where we are in the Transformer but let's take a look at the image from the paper in a typical Chain of Thought reasoning model we're going to ask a question we're going to generate some tokens and we're going to think kind of out loud we're we're going to let the chains of thought flow when you click into the Chain of Thought on 01 as we've seen before you can see sort of in the side panel the the steps it's thinking through now conversely to actually thinking out loud we have here that the last hidden states are used as input embeddings okay well what does this mean well it let's go back to our gpt2 style diagram and think about this the input embeddings here are where we're essentially looping back to so what we do is we kind of loop back before we generate",
"impact of this kind of approach on test time compute scaling some of the working hypotheses and some of the things people are interested in in looking out there on the llm edge for as we continue to see the field progress I want to demonstrate both approaches and check out the new coconut Library as well so how we're going to go through this is we're going to essentially introduce this idea of reasoning and latent space then we're going to talk about the scaling part of this before we dig into the specific approaches and we get the demo on both approaches by the end so it should be a lot of fun today let's go ahead and dig in reasoning in latent space let's root ourselves first in some definitions when we talk about reasoning we're talking about the action of thinking about something and it's kind of funny in a logical way if you look up logic it uses the word reason and there we are caught in a loop but reasoning is about thinking latent space is about using a representation of our",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1536]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
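
Because training used MatryoshkaLoss over the dimensions [768, 512, 256, 128, 64] (see Training Details below), the embeddings can likely be truncated to one of those sizes for cheaper storage and search. This is a hedged sketch using the truncate_dim argument of SentenceTransformer; retrieval quality at reduced dimensions is not reported in this card, so validate the trade-off on your own data:

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 embedding dimensions (one of the Matryoshka
# dimensions used during training).
model_256 = SentenceTransformer("kenrogers/gte-ft-yt-2", truncate_dim=256)

docs = [
    "reasoning in latent space lets the model keep looping before resolving back to tokens",
    "pre-training takes months and many tens of thousands of GPUs",
]
query = ["What does reasoning in latent space mean?"]

doc_emb = model_256.encode(docs)     # shape: (2, 256)
query_emb = model_256.encode(query)  # shape: (1, 256)

scores = model_256.similarity(query_emb, doc_emb)  # cosine similarities, shape: (1, 2)
print(scores)
```

Note that the full output dimensionality is 1536 while the Matryoshka dimensions used in training top out at 768; truncation simply keeps a prefix of the full embedding.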
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator (a reproduction sketch follows the metrics table below)
Metric | Value |
---|---|
cosine_accuracy@1 | 0.875 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.875 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.875 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9539 |
cosine_mrr@10 | 0.9375 |
cosine_map@100 | 0.9375 |
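
A minimal sketch of how a comparable evaluation could be reproduced with InformationRetrievalEvaluator; the queries, corpus, and relevance judgments below are hypothetical placeholders rather than the held-out split used to produce the table above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("kenrogers/gte-ft-yt-2")

# Hypothetical evaluation data: ids mapped to texts, plus relevance judgments.
queries = {"q1": "What does the term 'tokenless' refer to in the context of scaling?"}
corpus = {
    "d1": "the word tokenless is a fun one ... we can just keep looping around before we get to tokens",
    "d2": "pre-training takes months and many tens of thousands of GPUs",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="yt-transcript-eval",
)
results = evaluator(model)
print(results)  # keys include cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100
```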
Training Details
Training Dataset
Unnamed Dataset
- Size: 100 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 100 samples:

 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 30 tokens, mean: 41.05 tokens, max: 60 tokens | min: 180 tokens, mean: 208.98 tokens, max: 231 tokens |
- Samples:

sentence_0 | sentence_1 |
---|---|
1. What are the two big ideas aimed at scaling the power of LLMs during inference mentioned in the context? 2. How does the concept of reasoning in latent space relate to the efficiency of computation during inference? | okay whiz we're talking about reasoning in latent space today is that the same as test time compute yeah that's right nice nice okay and we've got two big ideas to cover that are aimed at scaling the power of llms during inference is that right that yeah that's right so we have we have two you know latent space methods uh we have our continuous Chain of Thought or coconut right and then we have our more more directly more uh you know uh budget forcing recurrent depth uh model yes man that's a lot so when we look across both of those there appears to be a pretty simple explanation it's almost like uh you know when we when we're in that sort of thinking space of computation we don't have to do the thinky thinky in words and that's better maybe even it will allow us to find a new scaling axis is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not |
1. What are the two big ideas aimed at scaling the power of LLMs during inference mentioned in the context? 2. How does the concept of reasoning in latent space relate to the efficiency of computation during inference? | okay whiz we're talking about reasoning in latent space today is that the same as test time compute yeah that's right nice nice okay and we've got two big ideas to cover that are aimed at scaling the power of llms during inference is that right that yeah that's right so we have we have two you know latent space methods uh we have our continuous Chain of Thought or coconut right and then we have our more more directly more uh you know uh budget forcing recurrent depth uh model yes man that's a lot so when we look across both of those there appears to be a pretty simple explanation it's almost like uh you know when we when we're in that sort of thinking space of computation we don't have to do the thinky thinky in words and that's better maybe even it will allow us to find a new scaling axis is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not |
1. What is the significance of staying in the "mind Palace" of the Transformer instead of resolving back to token space? 2. What are the key concepts that need to be covered before demonstrating large reasoning models? | is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not just like for a second right not automatically resolving back to token space but kind of staying in this very like uh you know in in the mind Palace of the of the Transformer without having to write down the words yes okay okay okay so basically scaling is dead Long Live scaling something like that yeah scaling has died uh we should scale yeah all right all right all right well I'm pumped for the demos today we're going to see some thinking in latent space let's cover all the Concepts we need to get there we'll get you back in for some discussions along the way because this one's pretty meta thanks whiz all right guys we are gonna rock out on large reasoning models today while we were originally going to just cover chain of continuous thought or coconut we saw a paper come out a couple |
- Loss: MatryoshkaLoss with these parameters (a construction sketch follows below):

{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
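
A minimal construction sketch of this loss configuration using the sentence-transformers losses API; the base model id is taken from the Model Details section, and the variable names are illustrative:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct")

# In-batch negatives ranking loss, wrapped so it is also applied to the
# truncated embeddings at each listed Matryoshka dimensionality.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    loss=base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```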
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
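
As a rough sketch, the non-default values above map onto SentenceTransformerTrainingArguments and SentenceTransformerTrainer as follows. The dataset rows, output directory, and eval split here are illustrative placeholders, not the actual training script; all other hyperparameters keep their defaults as listed under "All Hyperparameters":

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct")

# Illustrative (question, passage) pairs with the same two columns as the card describes.
pairs = {
    "sentence_0": [
        "1. What is meant by latent space in the context of reasoning models?",
        "2. How does test time compute relate to reasoning?",
    ],
    "sentence_1": [
        "latent space is about using a representation of our data that captures its essential features",
        "test time compute is just time spent thinking before the model answers",
    ],
}
train_dataset = Dataset.from_dict(pairs)
eval_dataset = Dataset.from_dict(pairs)  # placeholder eval split

loss = MatryoshkaLoss(
    model,
    loss=MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="gte-ft-yt-2",  # placeholder output path
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```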
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters: 
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 10 | 0.9539 |
2.0 | 20 | 0.9077 |
3.0 | 30 | 0.9539 |
4.0 | 40 | 0.9539 |
5.0 | 50 | 0.9539 |
6.0 | 60 | 0.9539 |
7.0 | 70 | 0.9539 |
8.0 | 80 | 0.9539 |
9.0 | 90 | 0.9539 |
10.0 | 100 | 0.9539 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}