|
--- |
|
tags: |
|
- sentence-transformers |
|
- sentence-similarity |
|
- feature-extraction |
|
- generated_from_trainer |
|
- dataset_size:100 |
|
- loss:MatryoshkaLoss |
|
- loss:MultipleNegativesRankingLoss |
|
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct |
|
widget: |
|
- source_sentence: "1. What is the significance of the beginning of thinking and end\ |
|
\ of thinking tokens in the context of recurrent depth? \n2. How does the concept\ |
|
\ of thinking about a single token relate to the overall sequence in the discussion?" |
|
sentences: |
|
- done by researchers at Lawrence Livermore and elsewhere um this is not this not |
|
you know something that product teams are using today this is this is very much |
|
a research grade thing that is cool and we're seeing some you know early signs |
|
that it's potentially quite useful um I I wanna I want to zoom in on on like just |
|
when people think about the the actual how of this when they think about actually |
|
implementing this in in maybe one of their applications so whereas in the Coconut |
|
space you're you're going to go and you're G to you're gonna like oh nope not |
|
going out into natural language space just going to sit here and chew on it chew |
|
on it chew on it and then I'm gonna pop out my final answer so all you get is |
|
final answer baby it's called a blackbox okay when we go to the recurrent depth |
|
piece um you said something interesting earlier when we were chatting about this |
|
and it was it was like I'm going to think and think and think and think and think |
|
and all of a sudden I know and I'm |
|
- chains of thought and this is where this idea of test time compute came up and |
|
this was a paper from Google in August last year called scaling test time compute |
|
you know it's basically taking that scaling paper originally and saying well now |
|
we have this sort of other axis to scale on and again this is the idea that we're |
|
anthropomorphizing a little bit but humans tend to think longer on difficult problems |
|
maybe we should let machines do that and when we think of test time Compu it's |
|
just time spent thinking you know and so if we we think about kind of how we can |
|
leverage this we've seen some of these things come out in recent weeks and recent |
|
months we talked about deep seek R1 just last week and you know this is the same |
|
idea it thinks before it answers and this is again just sort of the next step |
|
in the evolution of what we've got going on here and we saw moreover deep seek |
|
one generates one token at a time it's able to spend more time processing and |
|
it generates these thinking |
|
- piece um you said something interesting earlier when we were chatting about this |
|
and it was it was like I'm going to think and think and think and think and think |
|
and all of a sudden I know and I'm going to do that just with one token is sort |
|
of the recurrent depth paper and then I'm going to let the rest of the tokens |
|
stream like normal right so so there's this real interesting idea of like which |
|
things are you thinking about and this is where the idea of a beginning of thinking |
|
end of thinking token or a beginning of sequence end of sequence token comes into |
|
play you can think about the sequence or you can think about the thinking is this |
|
does this make sense um yeah okay okay we can think about the thinking right so |
|
we can uh we can double we can double it up uh yeah we can think about the thing |
|
yeah yeah okay okay so so recurrent depth in short I mean is like you think about |
|
a single token and then you let the sequence go like that's what I thought was |
|
interesting and and maybe |
|
- source_sentence: '1. What are the different systems mentioned in the context, and |
|
how are they categorized? |
|
|
|
2. How does the concept of "Meta Meta learning" relate to the discussion in the |
|
context?' |
|
sentences: |
|
- right we kind of got to go a little bit more into the blackbox we gota go back |
|
beyond the unknown yeah it happens but it's it's it's it's the the timing is right |
|
and uh with companies like you know Nvidia with companies the other accelerators |
|
that are that are coming out they're super good at inference Gro and S all these |
|
other peeps right uh we're getting real fast at inference and so the spending |
|
that time you know becomes less and less impactful to the user experience but |
|
more importantly uh you know we have a lot of applications LMS aren't good for |
|
yet where we don't care about response latency like research like uh PhD level |
|
math where it's like it doesn't matter if it takes a day yeah because that means |
|
it didn't take some some other person a day right like that's the that's the the |
|
we're at this time the models are capable enough that we can think about problems |
|
that we can't just do ourselves faster it the whole the whole you know ecosystem |
|
is set up for this to be the right |
|
- perceptron artificial neural networks and single neurons to many neurons and so |
|
on but it really really got going with deep learning and we we saw we train bigger |
|
and bigger models we get better and better output and results this has been known |
|
for years and it's it's known even today I mean this is from the CEO of anthropic |
|
Dario amade and you know we want to think about the the the place that we are |
|
based on where we've been want to put this in context here so we go from pre-training |
|
which is the thing that sort of takes a long time it's the it's the show goth |
|
it's got all the possibilities we can sort of think of this as months many many |
|
tens of thousands of gpus maybe more these days and as we saw at nurs this past |
|
uh you know year Ilia suit noted that pre-training as we know it will end now |
|
we've seen that there's been a lot of focus subsequently especially you know out |
|
in public on posttraining and on this idea of rhf and RL we talked about this |
|
in our recent reasoning model |
|
- we've got we've got sort of this idea may we could call it system one is GPT 40 |
|
mini and then we could say Okay reasoning model that's spitting out tokens is |
|
like is like system two and then maybe like we've got this sort of system three |
|
that's in in in latent space and then we've got this like system four that's like |
|
that's like you know On Any Given specific type of of of thing I might want to |
|
extra think about in latent space I might go deeper on it so so there's just this |
|
real sort of very interesting abstraction Ed meta thinking at play here and um |
|
and it gets back to me to sort of the idea of like kind of Meta Meta learning |
|
I mean they they really they really did away with our ability to use that term |
|
and in the gpt3 paper so um we're at the end of sort of the words that we that |
|
we can concretely say we know what to do but we know what to do with the code |
|
so I want to go ahead and just quickly introduce this to you uh whiz we're going |
|
to do a quick demo here and we're going to |
|
- source_sentence: '1. What does the term "tokenless" refer to in the context of scaling |
|
and model architecture? |
|
|
|
2. How does the looping process contribute to generating responses without resolving |
|
back to tokens?' |
|
sentences: |
|
- allow us to scil uh that seems a little a little weird to me I'm not it's not |
|
very intuitive what's your intuition behind this yeah so I the word tokenless |
|
is a is a is a fun one I would say like the idea is we don't need to resolve back |
|
to tokens to scale because we can just keep looping around before we get to tokens |
|
right really anything that allows us to keep looping around until we get to The |
|
Final Answer uh is is g to allow us to scale right because we can say well Loop |
|
more Loop more Loop more right like uh W with with uh you know with say like generating |
|
a response right if I generate for five tokens versus if I generate for 70,000 |
|
tokens right like we we have this idea of a scaling axis on the on the inference |
|
side this is just kind of shoving that inside the uh the model architecture as |
|
opposed to allowing it to resolve to token space but the idea is we're still adding |
|
information at every step at every stage right that we do a a loop as it were |
|
so we're getting a better and |
|
- perceptron artificial neural networks and single neurons to many neurons and so |
|
on but it really really got going with deep learning and we we saw we train bigger |
|
and bigger models we get better and better output and results this has been known |
|
for years and it's it's known even today I mean this is from the CEO of anthropic |
|
Dario amade and you know we want to think about the the the place that we are |
|
based on where we've been want to put this in context here so we go from pre-training |
|
which is the thing that sort of takes a long time it's the it's the show goth |
|
it's got all the possibilities we can sort of think of this as months many many |
|
tens of thousands of gpus maybe more these days and as we saw at nurs this past |
|
uh you know year Ilia suit noted that pre-training as we know it will end now |
|
we've seen that there's been a lot of focus subsequently especially you know out |
|
in public on posttraining and on this idea of rhf and RL we talked about this |
|
in our recent reasoning model |
|
- allow us to scil uh that seems a little a little weird to me I'm not it's not |
|
very intuitive what's your intuition behind this yeah so I the word tokenless |
|
is a is a is a fun one I would say like the idea is we don't need to resolve back |
|
to tokens to scale because we can just keep looping around before we get to tokens |
|
right really anything that allows us to keep looping around until we get to The |
|
Final Answer uh is is g to allow us to scale right because we can say well Loop |
|
more Loop more Loop more right like uh W with with uh you know with say like generating |
|
a response right if I generate for five tokens versus if I generate for 70,000 |
|
tokens right like we we have this idea of a scaling axis on the on the inference |
|
side this is just kind of shoving that inside the uh the model architecture as |
|
opposed to allowing it to resolve to token space but the idea is we're still adding |
|
information at every step at every stage right that we do a a loop as it were |
|
so we're getting a better and |
|
- source_sentence: '1. What is the relationship between latent space and natural language |
|
in the context of a Transformer architecture? |
|
|
|
2. How does the GPT style architecture process a sequence to predict the next |
|
token?' |
|
sentences: |
|
- piece um you said something interesting earlier when we were chatting about this |
|
and it was it was like I'm going to think and think and think and think and think |
|
and all of a sudden I know and I'm going to do that just with one token is sort |
|
of the recurrent depth paper and then I'm going to let the rest of the tokens |
|
stream like normal right so so there's this real interesting idea of like which |
|
things are you thinking about and this is where the idea of a beginning of thinking |
|
end of thinking token or a beginning of sequence end of sequence token comes into |
|
play you can think about the sequence or you can think about the thinking is this |
|
does this make sense um yeah okay okay we can think about the thinking right so |
|
we can uh we can double we can double it up uh yeah we can think about the thing |
|
yeah yeah okay okay so so recurrent depth in short I mean is like you think about |
|
a single token and then you let the sequence go like that's what I thought was |
|
interesting and and maybe |
|
- it's kind of funny in a logical way if you look up logic it uses the word reason |
|
and there we are caught in a loop but reasoning is about thinking latent space |
|
is about using a representation of our data that sort of captures the essential |
|
features of it we can think of latent space as embedding space or the space of |
|
math and numbers in other words it's just not the space of words and natural language |
|
let's think about how this manifests in a Transformer architecture here I'm showing |
|
a GPT style architecture from the gpt2 paper what we want to think about is we |
|
want to put a sequence in and we want to get some next token prediction out when |
|
we put the sequence in we're in the space of natural language when we get the |
|
next token out we're in the space of natural language betwix in between we're |
|
going to be in latent space we're going to be in embedding space we're going to |
|
be in the space where we can do math and stuff and importantly we can kind of |
|
think that we're putting in this big |
|
- architecture is built upon a latent depth recurrent block that is run for a randomly |
|
sampled number of iterations during training I can't see it let's see what they |
|
gave us in the paper they gave us this bad boy personally not a fan of this diagram |
|
whiz thinks it's totally fine and it makes perfect sense let's see if we can break |
|
it down for you guys here a visualization of the architecture we have the Prelude |
|
block we have the recurrent block recurrent block recurrent block and then this |
|
Koda so each block consists of a number of su layers okay the blue Prelude block |
|
embeds the input into latent space all right where the green shared recurrent |
|
block is a block of layers that is repeated to compute the final latent state |
|
which is decoded by the layers of the red Coda block back to our gp2 architecture |
|
diagram let's think about how we're still kind of doing this loop back we're still |
|
doing this reasoning in in space and now let's label the Prelude the recurrent |
|
block and the Koda we |
|
- source_sentence: '1. What is meant by the terms "hidden state," "latent space," |
|
and "embedding space" in the context of reasoning models? |
|
|
|
2. How do the last hidden states function as input embeddings in a typical Chain |
|
of Thought reasoning model?' |
|
sentences: |
|
- the next step in the evolution of what we've got going on here and we saw moreover |
|
deep seek one generates one token at a time it's able to spend more time processing |
|
and it generates these thinking tokens that explain its Chain of Thought So we're |
|
generating tokens during our chains of thought and that makes them explainable |
|
very cool all right I want to bring whiz back up for just a moment here okay just |
|
to be super clear reasoning and test time compute do you think these are sort |
|
of you know triple equal sign or or how would you how would you say you know generally |
|
we're talking about the same thing but they're not the same thing yeah they're |
|
they're this so so okay they're not literally the same thing of course but they're |
|
also pretty much the same thing in in how we talk about it in 2025 today that's |
|
right that's right so so reasoning is some right now because our models are System |
|
One machines right this is the this is the they're not reasoners they're they're |
|
uh they're they're |
|
- impact of this kind of approach on test time compute scaling some of the working |
|
hypotheses and some of the things people are interested in in looking out there |
|
on the llm edge for as we continue to see the field progress I want to demonstrate |
|
both approaches and check out the new coconut Library as well so how we're going |
|
to go through this is we're going to essentially introduce this idea of reasoning |
|
and latent space then we're going to talk about the scaling part of this before |
|
we dig into the specific approaches and we get the demo on both approaches by |
|
the end so it should be a lot of fun today let's go ahead and dig in reasoning |
|
in latent space let's root ourselves first in some definitions when we talk about |
|
reasoning we're talking about the action of thinking about something and it's |
|
kind of funny in a logical way if you look up logic it uses the word reason and |
|
there we are caught in a loop but reasoning is about thinking latent space is |
|
about using a representation of our |
|
- of the reasoning State when we say hidden state or latent space or embedding space |
|
or this sort of space of math and computation we're talking about the same space |
|
of course the the exact state of the space changes depending on where we are in |
|
the Transformer but let's take a look at the image from the paper in a typical |
|
Chain of Thought reasoning model we're going to ask a question we're going to |
|
generate some tokens and we're going to think kind of out loud we're we're going |
|
to let the chains of thought flow when you click into the Chain of Thought on |
|
01 as we've seen before you can see sort of in the side panel the the steps it's |
|
thinking through now conversely to actually thinking out loud we have here that |
|
the last hidden states are used as input embeddings okay well what does this mean |
|
well it let's go back to our gpt2 style diagram and think about this the input |
|
embeddings here are where we're essentially looping back to so what we do is we |
|
kind of loop back before we generate |
|
pipeline_tag: sentence-similarity |
|
library_name: sentence-transformers |
|
metrics: |
|
- cosine_accuracy@1 |
|
- cosine_accuracy@3 |
|
- cosine_accuracy@5 |
|
- cosine_accuracy@10 |
|
- cosine_precision@1 |
|
- cosine_precision@3 |
|
- cosine_precision@5 |
|
- cosine_precision@10 |
|
- cosine_recall@1 |
|
- cosine_recall@3 |
|
- cosine_recall@5 |
|
- cosine_recall@10 |
|
- cosine_ndcg@10 |
|
- cosine_mrr@10 |
|
- cosine_map@100 |
|
model-index: |
|
- name: SentenceTransformer based on Alibaba-NLP/gte-Qwen2-1.5B-instruct |
|
results: |
|
- task: |
|
type: information-retrieval |
|
name: Information Retrieval |
|
dataset: |
|
name: Unknown |
|
type: unknown |
|
metrics: |
|
- type: cosine_accuracy@1 |
|
value: 0.875 |
|
name: Cosine Accuracy@1 |
|
- type: cosine_accuracy@3 |
|
value: 1.0 |
|
name: Cosine Accuracy@3 |
|
- type: cosine_accuracy@5 |
|
value: 1.0 |
|
name: Cosine Accuracy@5 |
|
- type: cosine_accuracy@10 |
|
value: 1.0 |
|
name: Cosine Accuracy@10 |
|
- type: cosine_precision@1 |
|
value: 0.875 |
|
name: Cosine Precision@1 |
|
- type: cosine_precision@3 |
|
value: 0.3333333333333333 |
|
name: Cosine Precision@3 |
|
- type: cosine_precision@5 |
|
value: 0.2 |
|
name: Cosine Precision@5 |
|
- type: cosine_precision@10 |
|
value: 0.1 |
|
name: Cosine Precision@10 |
|
- type: cosine_recall@1 |
|
value: 0.875 |
|
name: Cosine Recall@1 |
|
- type: cosine_recall@3 |
|
value: 1.0 |
|
name: Cosine Recall@3 |
|
- type: cosine_recall@5 |
|
value: 1.0 |
|
name: Cosine Recall@5 |
|
- type: cosine_recall@10 |
|
value: 1.0 |
|
name: Cosine Recall@10 |
|
- type: cosine_ndcg@10 |
|
value: 0.9538662191964322 |
|
name: Cosine Ndcg@10 |
|
- type: cosine_mrr@10 |
|
value: 0.9375 |
|
name: Cosine Mrr@10 |
|
- type: cosine_map@100 |
|
value: 0.9375 |
|
name: Cosine Map@100 |
|
--- |
|
|
|
# SentenceTransformer based on Alibaba-NLP/gte-Qwen2-1.5B-instruct |
|
|
|
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct). It maps sentences & paragraphs to a 1536-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. |
|
|
|
## Model Details |
|
|
|
### Model Description |
|
- **Model Type:** Sentence Transformer |
|
- **Base model:** [Alibaba-NLP/gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) <!-- at revision 0d2ad8e1ac654a2b626e62154778a70868141208 --> |
|
- **Maximum Sequence Length:** 32768 tokens |
|
- **Output Dimensionality:** 1536 dimensions |
|
- **Similarity Function:** Cosine Similarity |
|
|
|
|
### Model Sources |
|
|
|
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net) |
|
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) |
|
- **HF中国镜像站:** [Sentence Transformers on HF中国镜像站](https://huggingface.co/models?library=sentence-transformers) |
|
|
|
### Full Model Architecture |
|
|
|
``` |
|
SentenceTransformer( |
|
(0): Transformer({'max_seq_length': 32768, 'do_lower_case': False}) with Transformer model: Qwen2Model |
|
(1): Pooling({'word_embedding_dimension': 1536, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True}) |
|
(2): Normalize() |
|
) |
|
``` |
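
The three modules correspond to (1) the Qwen2 transformer backbone, (2) last-token pooling (`pooling_mode_lasttoken: True`), and (3) L2 normalization. For intuition, here is a minimal sketch of the same pipeline written directly against 🤗 Transformers; the padding-aware last-token selection assumes right padding and is an illustration, not code taken from this repository:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical manual re-implementation of the three modules above.
tokenizer = AutoTokenizer.from_pretrained("Alibaba-NLP/gte-Qwen2-1.5B-instruct")
model = AutoModel.from_pretrained("Alibaba-NLP/gte-Qwen2-1.5B-instruct")

def encode(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, padding=True, truncation=True, max_length=32768, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (batch, seq, 1536)
    # Module (1) -> (2): take the hidden state of the last non-padding token per sequence
    last_idx = batch["attention_mask"].sum(dim=1) - 1        # index of the final real token
    pooled = hidden[torch.arange(hidden.size(0)), last_idx]  # (batch, 1536)
    # Module (2) -> (3): L2-normalize so dot product equals cosine similarity
    return torch.nn.functional.normalize(pooled, p=2, dim=1)

print(encode(["hello world"]).shape)  # torch.Size([1, 1536])
```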
|
|
|
## Usage |
|
|
|
### Direct Usage (Sentence Transformers) |
|
|
|
First install the Sentence Transformers library: |
|
|
|
```bash |
|
pip install -U sentence-transformers |
|
``` |
|
|
|
Then you can load this model and run inference. |
|
```python |
|
from sentence_transformers import SentenceTransformer |
|
|
|
# Download from the 🤗 Hub |
|
model = SentenceTransformer("kenrogers/gte-ft-yt-2") |
|
# Run inference |
|
sentences = [ |
|
'1. What is meant by the terms "hidden state," "latent space," and "embedding space" in the context of reasoning models?\n2. How do the last hidden states function as input embeddings in a typical Chain of Thought reasoning model?', |
|
"of the reasoning State when we say hidden state or latent space or embedding space or this sort of space of math and computation we're talking about the same space of course the the exact state of the space changes depending on where we are in the Transformer but let's take a look at the image from the paper in a typical Chain of Thought reasoning model we're going to ask a question we're going to generate some tokens and we're going to think kind of out loud we're we're going to let the chains of thought flow when you click into the Chain of Thought on 01 as we've seen before you can see sort of in the side panel the the steps it's thinking through now conversely to actually thinking out loud we have here that the last hidden states are used as input embeddings okay well what does this mean well it let's go back to our gpt2 style diagram and think about this the input embeddings here are where we're essentially looping back to so what we do is we kind of loop back before we generate", |
|
"impact of this kind of approach on test time compute scaling some of the working hypotheses and some of the things people are interested in in looking out there on the llm edge for as we continue to see the field progress I want to demonstrate both approaches and check out the new coconut Library as well so how we're going to go through this is we're going to essentially introduce this idea of reasoning and latent space then we're going to talk about the scaling part of this before we dig into the specific approaches and we get the demo on both approaches by the end so it should be a lot of fun today let's go ahead and dig in reasoning in latent space let's root ourselves first in some definitions when we talk about reasoning we're talking about the action of thinking about something and it's kind of funny in a logical way if you look up logic it uses the word reason and there we are caught in a loop but reasoning is about thinking latent space is about using a representation of our", |
|
] |
|
embeddings = model.encode(sentences) |
|
print(embeddings.shape) |
|
# [3, 1536] |
|
|
|
# Get the similarity scores for the embeddings |
|
similarities = model.similarity(embeddings, embeddings) |
|
print(similarities.shape) |
|
# [3, 3] |
|
``` |
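
Because the model was trained with `MatryoshkaLoss` at dimensions [768, 512, 256, 128, 64], embeddings should remain useful when truncated to one of those sizes. A minimal sketch using the `truncate_dim` argument of `SentenceTransformer`:

```python
from sentence_transformers import SentenceTransformer

# Load with Matryoshka truncation to 256 dimensions (one of the trained sizes)
model = SentenceTransformer("kenrogers/gte-ft-yt-2", truncate_dim=256)

embeddings = model.encode([
    "What is the significance of thinking tokens in recurrent depth?",
    "How does latent-space reasoning relate to test-time compute?",
])
print(embeddings.shape)
# (2, 256)
```

Smaller dimensions trade a little retrieval quality for faster search and lower storage; the loss config further below weights all five sizes equally during training.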
|
|
|
|
|
|
|
|
|
|
|
|
## Evaluation |
|
|
|
### Metrics |
|
|
|
#### Information Retrieval |
|
|
|
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) |
|
|
|
| Metric | Value | |
|
|:--------------------|:-----------| |
|
| cosine_accuracy@1 | 0.875 | |
|
| cosine_accuracy@3 | 1.0 | |
|
| cosine_accuracy@5 | 1.0 | |
|
| cosine_accuracy@10 | 1.0 | |
|
| cosine_precision@1 | 0.875 | |
|
| cosine_precision@3 | 0.3333 | |
|
| cosine_precision@5 | 0.2 | |
|
| cosine_precision@10 | 0.1 | |
|
| cosine_recall@1 | 0.875 | |
|
| cosine_recall@3 | 1.0 | |
|
| cosine_recall@5 | 1.0 | |
|
| cosine_recall@10 | 1.0 | |
|
| **cosine_ndcg@10** | **0.9539** | |
|
| cosine_mrr@10 | 0.9375 | |
|
| cosine_map@100 | 0.9375 | |
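
To run a comparable evaluation on your own query/passage pairs, a minimal sketch follows; the `queries`, `corpus`, and `relevant_docs` contents below are hypothetical placeholders, not the held-out set used for the table above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("kenrogers/gte-ft-yt-2")

# Hypothetical evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "How does recurrent depth relate to test-time compute?"}
corpus = {
    "d1": "recurrent depth lets the model keep looping in latent space ...",
    "d2": "pre-training scaling has been the dominant axis for years ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example-eval")
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
```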
|
|
|
|
|
|
|
|
|
## Training Details |
|
|
|
### Training Dataset |
|
|
|
#### Unnamed Dataset |
|
|
|
* Size: 100 training samples |
|
* Columns: <code>sentence_0</code> and <code>sentence_1</code> |
|
* Approximate statistics based on the first 100 samples: |
|
| | sentence_0 | sentence_1 | |
|
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| |
|
| type | string | string | |
|
| details | <ul><li>min: 30 tokens</li><li>mean: 41.05 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 180 tokens</li><li>mean: 208.98 tokens</li><li>max: 231 tokens</li></ul> | |
|
* Samples: |
|
| sentence_0 | sentence_1 | |
|
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |
|
| <code>1. What are the two big ideas aimed at scaling the power of LLMs during inference mentioned in the context? <br>2. How does the concept of reasoning in latent space relate to the efficiency of computation during inference?</code> | <code>okay whiz we're talking about reasoning in latent space today is that the same as test time compute yeah that's right nice nice okay and we've got two big ideas to cover that are aimed at scaling the power of llms during inference is that right that yeah that's right so we have we have two you know latent space methods uh we have our continuous Chain of Thought or coconut right and then we have our more more directly more uh you know uh budget forcing recurrent depth uh model yes man that's a lot so when we look across both of those there appears to be a pretty simple explanation it's almost like uh you know when we when we're in that sort of thinking space of computation we don't have to do the thinky thinky in words and that's better maybe even it will allow us to find a new scaling axis is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not</code> | |
|
| <code>1. What are the two big ideas aimed at scaling the power of LLMs during inference mentioned in the context? <br>2. How does the concept of reasoning in latent space relate to the efficiency of computation during inference?</code> | <code>okay whiz we're talking about reasoning in latent space today is that the same as test time compute yeah that's right nice nice okay and we've got two big ideas to cover that are aimed at scaling the power of llms during inference is that right that yeah that's right so we have we have two you know latent space methods uh we have our continuous Chain of Thought or coconut right and then we have our more more directly more uh you know uh budget forcing recurrent depth uh model yes man that's a lot so when we look across both of those there appears to be a pretty simple explanation it's almost like uh you know when we when we're in that sort of thinking space of computation we don't have to do the thinky thinky in words and that's better maybe even it will allow us to find a new scaling axis is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not</code> | |
|
| <code>1. What is the significance of staying in the "mind Palace" of the Transformer instead of resolving back to token space? <br>2. What are the key concepts that need to be covered before demonstrating large reasoning models?</code> | <code>is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not just like for a second right not automatically resolving back to token space but kind of staying in this very like uh you know in in the mind Palace of the of the Transformer without having to write down the words yes okay okay okay so basically scaling is dead Long Live scaling something like that yeah scaling has died uh we should scale yeah all right all right all right well I'm pumped for the demos today we're going to see some thinking in latent space let's cover all the Concepts we need to get there we'll get you back in for some discussions along the way because this one's pretty meta thanks whiz all right guys we are gonna rock out on large reasoning models today while we were originally going to just cover chain of continuous thought or coconut we saw a paper come out a couple</code> | |
|
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: |
|
```json |
|
{ |
|
"loss": "MultipleNegativesRankingLoss", |
|
"matryoshka_dims": [ |
|
768, |
|
512, |
|
256, |
|
128, |
|
64 |
|
], |
|
"matryoshka_weights": [ |
|
1, |
|
1, |
|
1, |
|
1, |
|
1 |
|
], |
|
"n_dims_per_step": -1 |
|
} |
|
``` |
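
In code, this configuration corresponds to wrapping `MultipleNegativesRankingLoss` (in-batch negatives) in `MatryoshkaLoss`, which applies the base loss at each truncated embedding size. A minimal sketch of how such a loss is constructed:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct")

# In-batch negatives ranking loss, applied at each truncated embedding size
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # weights default to 1 per dimension
)
```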
|
|
|
### Training Hyperparameters |
|
#### Non-Default Hyperparameters |
|
|
|
- `eval_strategy`: steps |
|
- `per_device_train_batch_size`: 10 |
|
- `per_device_eval_batch_size`: 10 |
|
- `num_train_epochs`: 10 |
|
- `multi_dataset_batch_sampler`: round_robin |
|
|
|
#### All Hyperparameters |
|
<details><summary>Click to expand</summary> |
|
|
|
- `overwrite_output_dir`: False |
|
- `do_predict`: False |
|
- `eval_strategy`: steps |
|
- `prediction_loss_only`: True |
|
- `per_device_train_batch_size`: 10 |
|
- `per_device_eval_batch_size`: 10 |
|
- `per_gpu_train_batch_size`: None |
|
- `per_gpu_eval_batch_size`: None |
|
- `gradient_accumulation_steps`: 1 |
|
- `eval_accumulation_steps`: None |
|
- `torch_empty_cache_steps`: None |
|
- `learning_rate`: 5e-05 |
|
- `weight_decay`: 0.0 |
|
- `adam_beta1`: 0.9 |
|
- `adam_beta2`: 0.999 |
|
- `adam_epsilon`: 1e-08 |
|
- `max_grad_norm`: 1 |
|
- `num_train_epochs`: 10 |
|
- `max_steps`: -1 |
|
- `lr_scheduler_type`: linear |
|
- `lr_scheduler_kwargs`: {} |
|
- `warmup_ratio`: 0.0 |
|
- `warmup_steps`: 0 |
|
- `log_level`: passive |
|
- `log_level_replica`: warning |
|
- `log_on_each_node`: True |
|
- `logging_nan_inf_filter`: True |
|
- `save_safetensors`: True |
|
- `save_on_each_node`: False |
|
- `save_only_model`: False |
|
- `restore_callback_states_from_checkpoint`: False |
|
- `no_cuda`: False |
|
- `use_cpu`: False |
|
- `use_mps_device`: False |
|
- `seed`: 42 |
|
- `data_seed`: None |
|
- `jit_mode_eval`: False |
|
- `use_ipex`: False |
|
- `bf16`: False |
|
- `fp16`: False |
|
- `fp16_opt_level`: O1 |
|
- `half_precision_backend`: auto |
|
- `bf16_full_eval`: False |
|
- `fp16_full_eval`: False |
|
- `tf32`: None |
|
- `local_rank`: 0 |
|
- `ddp_backend`: None |
|
- `tpu_num_cores`: None |
|
- `tpu_metrics_debug`: False |
|
- `debug`: [] |
|
- `dataloader_drop_last`: False |
|
- `dataloader_num_workers`: 0 |
|
- `dataloader_prefetch_factor`: None |
|
- `past_index`: -1 |
|
- `disable_tqdm`: False |
|
- `remove_unused_columns`: True |
|
- `label_names`: None |
|
- `load_best_model_at_end`: False |
|
- `ignore_data_skip`: False |
|
- `fsdp`: [] |
|
- `fsdp_min_num_params`: 0 |
|
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
|
- `fsdp_transformer_layer_cls_to_wrap`: None |
|
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
|
- `deepspeed`: None |
|
- `label_smoothing_factor`: 0.0 |
|
- `optim`: adamw_torch |
|
- `optim_args`: None |
|
- `adafactor`: False |
|
- `group_by_length`: False |
|
- `length_column_name`: length |
|
- `ddp_find_unused_parameters`: None |
|
- `ddp_bucket_cap_mb`: None |
|
- `ddp_broadcast_buffers`: False |
|
- `dataloader_pin_memory`: True |
|
- `dataloader_persistent_workers`: False |
|
- `skip_memory_metrics`: True |
|
- `use_legacy_prediction_loop`: False |
|
- `push_to_hub`: False |
|
- `resume_from_checkpoint`: None |
|
- `hub_model_id`: None |
|
- `hub_strategy`: every_save |
|
- `hub_private_repo`: None |
|
- `hub_always_push`: False |
|
- `gradient_checkpointing`: False |
|
- `gradient_checkpointing_kwargs`: None |
|
- `include_inputs_for_metrics`: False |
|
- `include_for_metrics`: [] |
|
- `eval_do_concat_batches`: True |
|
- `fp16_backend`: auto |
|
- `push_to_hub_model_id`: None |
|
- `push_to_hub_organization`: None |
|
- `mp_parameters`: |
|
- `auto_find_batch_size`: False |
|
- `full_determinism`: False |
|
- `torchdynamo`: None |
|
- `ray_scope`: last |
|
- `ddp_timeout`: 1800 |
|
- `torch_compile`: False |
|
- `torch_compile_backend`: None |
|
- `torch_compile_mode`: None |
|
- `dispatch_batches`: None |
|
- `split_batches`: None |
|
- `include_tokens_per_second`: False |
|
- `include_num_input_tokens_seen`: False |
|
- `neftune_noise_alpha`: None |
|
- `optim_target_modules`: None |
|
- `batch_eval_metrics`: False |
|
- `eval_on_start`: False |
|
- `use_liger_kernel`: False |
|
- `eval_use_gather_object`: False |
|
- `average_tokens_across_devices`: False |
|
- `prompts`: None |
|
- `batch_sampler`: batch_sampler |
|
- `multi_dataset_batch_sampler`: round_robin |
|
|
|
</details> |
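
Put together, a training run matching the non-default hyperparameters above might look like the following sketch; the dataset contents are hypothetical placeholders, since the actual 100-pair dataset is not published here:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct")

# Hypothetical (question, passage) pairs standing in for the unpublished dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is latent-space reasoning?",
                   "What does test-time compute scaling mean?"],
    "sentence_1": ["reasoning in latent space means staying in embedding space ...",
                   "scaling test time compute means spending more time thinking ..."],
})

loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model),
                      matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="gte-ft-yt-2",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    # the original run also set eval_strategy="steps" together with an evaluator
)

trainer = SentenceTransformerTrainer(model=model, args=args,
                                     train_dataset=train_dataset, loss=loss)
trainer.train()
```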
|
|
|
### Training Logs |
|
| Epoch | Step | cosine_ndcg@10 | |
|
|:-----:|:----:|:--------------:| |
|
| 1.0 | 10 | 0.9539 | |
|
| 2.0 | 20 | 0.9077 | |
|
| 3.0 | 30 | 0.9539 | |
|
| 4.0 | 40 | 0.9539 | |
|
| 5.0 | 50 | 0.9539 | |
|
| 6.0 | 60 | 0.9539 | |
|
| 7.0 | 70 | 0.9539 | |
|
| 8.0 | 80 | 0.9539 | |
|
| 9.0 | 90 | 0.9539 | |
|
| 10.0 | 100 | 0.9539 | |
|
|
|
|
|
### Framework Versions |
|
- Python: 3.11.11 |
|
- Sentence Transformers: 3.4.1 |
|
- Transformers: 4.48.3 |
|
- PyTorch: 2.5.1+cu124 |
|
- Accelerate: 1.3.0 |
|
- Datasets: 3.3.2 |
|
- Tokenizers: 0.21.0 |
|
|
|
## Citation |
|
|
|
### BibTeX |
|
|
|
#### Sentence Transformers |
|
```bibtex |
|
@inproceedings{reimers-2019-sentence-bert, |
|
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", |
|
author = "Reimers, Nils and Gurevych, Iryna", |
|
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", |
|
month = "11", |
|
year = "2019", |
|
publisher = "Association for Computational Linguistics", |
|
url = "https://arxiv.org/abs/1908.10084", |
|
} |
|
``` |
|
|
|
#### MatryoshkaLoss |
|
```bibtex |
|
@misc{kusupati2024matryoshka, |
|
title={Matryoshka Representation Learning}, |
|
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, |
|
year={2024}, |
|
eprint={2205.13147}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.LG} |
|
} |
|
``` |
|
|
|
#### MultipleNegativesRankingLoss |
|
```bibtex |
|
@misc{henderson2017efficient, |
|
title={Efficient Natural Language Response Suggestion for Smart Reply}, |
|
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, |
|
year={2017}, |
|
eprint={1705.00652}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |
|
|
|
|
|
|
|
|
|