fix: add paper order
README.md
CHANGED
@@ -136,10 +136,6 @@ This model was presented in the paper [Training Sparse Mixture Of Experts Text E
 | Arctic Embed v2 Large | 568 | 1024 | **55.65** | 66.00 | ❌ | ❌ | ❌ |
 | mE5 Large | 560 | 1024 | 51.40 | 66.50 | ❌ | ❌ | ❌ |
 
-## Paper Abstract
-
-Transformer-based text embedding models have improved their performance on benchmarks like MIRACL and BEIR by increasing their parameter counts. However, this scaling approach introduces significant deployment challenges, including increased inference latency and memory usage. These challenges are particularly severe in retrieval-augmented generation (RAG) applications, where large models' increased memory requirements constrain dataset ingestion capacity, and their higher latency directly impacts query-time performance. While causal language models have addressed similar efficiency challenges using Mixture of Experts (MoE) architectures, this approach hasn't been successfully adapted to the general text embedding setting. In this paper, we introduce Nomic Embed v2, the first general purpose MoE text embedding model. Our model outperforms models in the same parameter class on both monolingual and multilingual benchmarks while also maintaining competitive performance with models twice its size. We open-source all code, models, and evaluation data to ensure full reproducibility of our training pipeline at https://github.com/nomic-ai/contrastors.
-
 ## Model Architecture
 - **Total Parameters**: 475M
 - **Active Parameters During Inference**: 305M
@@ -149,6 +145,11 @@ Transformer-based text embedding models have improved their performance on bench
 - **Maximum Sequence Length**: 512 tokens
 - **Languages**: Supports dozens of languages (see Performance section)
 
+## Paper Abstract
+
+Transformer-based text embedding models have improved their performance on benchmarks like MIRACL and BEIR by increasing their parameter counts. However, this scaling approach introduces significant deployment challenges, including increased inference latency and memory usage. These challenges are particularly severe in retrieval-augmented generation (RAG) applications, where large models' increased memory requirements constrain dataset ingestion capacity, and their higher latency directly impacts query-time performance. While causal language models have addressed similar efficiency challenges using Mixture of Experts (MoE) architectures, this approach hasn't been successfully adapted to the general text embedding setting. In this paper, we introduce Nomic Embed v2, the first general purpose MoE text embedding model. Our model outperforms models in the same parameter class on both monolingual and multilingual benchmarks while also maintaining competitive performance with models twice its size. We open-source all code, models, and evaluation data to ensure full reproducibility of our training pipeline at https://github.com/nomic-ai/contrastors.
+
+
 ## Usage Guide
 
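
As a companion to the relocated Model Architecture and Paper Abstract sections above, here is a minimal sketch of loading an MoE embedding model of this kind and checking its parameter count. It is illustrative only: the checkpoint name `nomic-ai/nomic-embed-text-v2-moe`, the `search_query:`/`search_document:` prefixes, and the `trust_remote_code` flag are assumptions, not part of this commit's README changes.

```python
# Illustrative sketch only; the model id, task prefixes, and flags below are
# assumptions and are not taken from this commit's README diff.
from sentence_transformers import SentenceTransformer

# Hypothetical checkpoint for the Nomic Embed v2 MoE model described in the abstract.
model = SentenceTransformer("nomic-ai/nomic-embed-text-v2-moe", trust_remote_code=True)

# The total parameter count should sit near the 475M figure in the Model Architecture
# list; an MoE router activates only a subset of expert weights (~305M) per token.
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e6:.0f}M")

# Assumed retrieval-style prefixes; inputs beyond the 512-token limit are truncated.
queries = ["search_query: what is a mixture of experts embedding model?"]
documents = ["search_document: MoE layers route each token to a small subset of expert MLPs."]

query_emb = model.encode(queries)
doc_emb = model.encode(documents)
print(model.similarity(query_emb, doc_emb))  # cosine similarity by default
```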