CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: microsoft/MiniLM-L12-H384-uncased
  • Number of Parameters: 33.4M
  • Maximum Sequence Length: 512 tokens
  • Number of Output Labels: 1 score

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss-hard-neg")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
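
In a retrieve-and-rerank pipeline, the cross encoder is typically used to rescore the top candidates returned by a cheaper first-stage retriever. A minimal sketch of that pattern using only model.predict; the rerank helper and its parameters below are illustrative, not part of the library:

import numpy as np
from sentence_transformers import CrossEncoder

model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss-hard-neg")

def rerank(query: str, candidates: list[str], top_k: int = 5) -> list[tuple[float, str]]:
    # Score every (query, candidate) pair in one batch, then sort descending.
    scores = model.predict([(query, doc) for doc in candidates])
    order = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), candidates[i]) for i in order]

for score, doc in rerank("How many calories in an egg", [
    "Most of the calories in an egg come from the yellow yolk in the center.",
    "There are on average between 55 and 80 calories in an egg depending on its size.",
]):
    print(f"{score:.4f}\t{doc}")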

Evaluation

Metrics

Cross Encoder Reranking

  • Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100 and NanoNQ_R100
  • Evaluated with CrossEncoderRerankingEvaluator with these parameters:
    {
        "at_k": 10,
        "always_rerank_positives": true
    }
    
Metric  | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100
map     | 0.5187 (+0.0292) | 0.3631 (+0.1021)  | 0.6371 (+0.2175)
mrr@10  | 0.5136 (+0.0361) | 0.6282 (+0.1284)  | 0.6482 (+0.2215)
ndcg@10 | 0.5964 (+0.0560) | 0.4271 (+0.1021)  | 0.6738 (+0.1732)

Values in parentheses are the change relative to the ranking produced by the first-stage retriever, here and in the tables below.
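
As a sketch of how such an evaluation can be set up: the evaluator takes samples holding a query, its known positives, and the candidate documents to rerank. The sample below is hypothetical, the constructor arguments mirror the logged parameters, and the exact sample schema may vary across library versions:

from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator

# Hypothetical evaluation samples; in the tables above, the candidates are the
# top 100 hits from a first-stage retriever (hence the "_R100" suffix).
samples = [
    {
        "query": "How many calories in an egg",
        "positive": ["There are on average between 55 and 80 calories in an egg depending on its size."],
        "documents": [
            "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
            "There are on average between 55 and 80 calories in an egg depending on its size.",
            "Most of the calories in an egg come from the yellow yolk in the center.",
        ],
    },
]

evaluator = CrossEncoderRerankingEvaluator(
    samples=samples,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)  # dict containing map, mrr@10 and ndcg@10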

Cross Encoder Nano BEIR

  • Dataset: NanoBEIR_R100_mean
  • Evaluated with CrossEncoderNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "rerank_k": 100,
        "at_k": 10,
        "always_rerank_positives": true
    }
    
Metric  | Value
map     | 0.5063 (+0.1163)
mrr@10  | 0.5967 (+0.1286)
ndcg@10 | 0.5658 (+0.1104)
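
The NanoBEIR numbers above are the mean over the three datasets. A sketch of instantiating the evaluator, with keyword names taken directly from the logged config (check your library version):

from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)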

Training Details

Training Dataset

Unnamed Dataset

  • Size: 167,227 training samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:
    • query (string): min 11 characters, mean 34.24 characters, max 117 characters
    • docs (list): min 3 elements, mean 6.50 elements, max 10 elements
    • labels (list): min 3 elements, mean 6.50 elements, max 10 elements
  • Samples:
    • query: what is a natural hormone replacement
      docs: ['Natural Hormone Replacement Therapy (“BHRT”) is common term for the treatment of conditions caused by the effects of hormone deficiencies resulting from menopause. BHRT uses hormones that are identical in their mollecular structure to the hormones produced naturally within the human body.', 'Natural hormone replacement therapy (HRT) is also known as bioidentical hormone therapy. It utilizes estradiol, progesterone or testosterone that are identical in structure to hormones found in a woman’s body.', 'NATURAL HORMONE REPLACEMENT. Natural hormone replacement therapy is a safer, sensible, effective, and free from most of the side effects of synthetic hormones. Every day in the United States 3,500 women enter menopause.', 'Natural or bio-identical hormone replacement therapy in the form of administering estrogen from estrogenic foods or taking progesterone creams has not been clinically tested. Much of the information is anecdotal only.', 'Bioidentical hormone therapy is often called nat...
      labels: [1, 0, 0, 0, 0, ...]
    • query: average nba age
      docs: ["The average age for an NBA rookie is around 20. Some are 19 and some are 22 or older, but most come out after their freshman year in college, which would put them at 19 or 20. …. + 4 others found this useful. If I get to be a basketball player I would like to be 6'10. The average height of an NBA player is around 6 feet 7 inches. The tallest NBA player ever was Gheorghe Mureaÿan, mureåÿan who was 7 feet 7 inches. Tall in, contrast the SHORTEST nba player ever Was Tyrone Muggsy, bogues who was 5 feet 3 inches. tall", 'While there is no specific age in which NBA players are told to retire, the average age in which they do retire is 36. It has been said it is around the age of 32. But if you look at the way players are training and keeping in shape lately, the average age has increased a little bit. For examp … le, Derek Jeter just had one of his most productive years (both on offense and defense) and he is going to be 36 years old next June.', 'The youngest player ever to play i...
      labels: [1, 0, 0, 0, 0, ...]
    • query: does laila engaged to meera's brother
      docs: ['Laila Got Engaged To Meera Brother Ahsan. admin April 9, 2015 Laila Got Engaged To Meera Brother Ahsan 2015-04-10T03:50:40+00:00 Latest Happning No Comment. After the late buildup on media about Laila discovering her life accomplice through a network show, Laila has at long last discovered her “To-Be” Ahson. Kaun Bane Ga Laila Ka Dulha was a quite discussed fragment where youthful men contended to be Laila’s husband to be on APlus Morning Show, facilitated by Noor', 'Kaun Bane Ga Laila Ka Dulha was a much talked about segment where young men competed to be Laila’s groom on APlus Morning Show, hosted by Noor. Ahson, surprisingly happens to be the brother of film actress Meera and it has been revealed by sources that Laila and Ahson have been in a relationship for some time.', 'As we all be acquainted with that Laila was in look for of her life colleague. The beat show Kaun Banega “ Laila Ka Dulha ” was aired on A plus. In this part, men from special places take part and compete every ...
      labels: [1, 0, 0, 0, 0, ...]
  • Loss: LambdaLoss with these parameters:
    {
        "weighting_scheme": "sentence_transformers.cross_encoder.losses.LambdaLoss.NDCGLoss2PPScheme",
        "k": null,
        "sigma": 1.0,
        "eps": 1e-10,
        "reduction_log": "binary",
        "activation_fct": "torch.nn.modules.linear.Identity",
        "mini_batch_size": 16
    }
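
The dataset pairs each query with a list of candidate documents and a parallel list of 0/1 relevance labels, and LambdaLoss optimizes the resulting ranking. A minimal sketch mirroring the logged loss parameters; the miniature dataset is hypothetical, and keyword names follow the config above and may differ slightly across library versions:

import torch
from datasets import Dataset
from sentence_transformers.cross_encoder.losses import LambdaLoss, NDCGLoss2PPScheme

# Hypothetical miniature dataset in the query / docs / labels format described above.
train_dataset = Dataset.from_dict({
    "query": ["How many calories in an egg"],
    "docs": [[
        "There are on average between 55 and 80 calories in an egg depending on its size.",
        "Most of the calories in an egg come from the yellow yolk in the center.",
    ]],
    "labels": [[1, 0]],  # one 0/1 relevance label per document
})

loss = LambdaLoss(
    model=model,
    weighting_scheme=NDCGLoss2PPScheme(),
    k=None,
    sigma=1.0,
    eps=1e-10,
    reduction_log="binary",
    activation_fct=torch.nn.Identity(),
    mini_batch_size=16,
)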
    

Evaluation Dataset

Unnamed Dataset

  • Size: 1,000 evaluation samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:
    • query (string): min 11 characters, mean 34.02 characters, max 94 characters
    • docs (list): min 3 elements, mean 6.50 elements, max 10 elements
    • labels (list): min 3 elements, mean 6.50 elements, max 10 elements
  • Samples:
    • query: what is the medicine called for tonsillitis
      docs: ['Tonsillitis is usually caused by a virus and does not require prescription medicine. For information on over-the-counter pain medicine and other self-care options, see Home Treatment. An antibiotic, usually amoxicillin or penicillin, is used to treat tonsillitis caused by strep bacteria. Although tonsillitis caused by strep bacteria usually will go away on its own, antibiotics are used to prevent the complications, such as rheumatic fever, that can result from untreated strep throat. ', 'You have two tonsils, one on either side at the back of the mouth. The picture below shows large non-infected tonsils (no redness or pus). Tonsillitis is an infection of the tonsils. A sore throat is the most common of all tonsillitis symptoms. In addition, you may also have a cough, high temperature (fever), headache, feel sick, feel tired, find swallowing painful, and have swollen neck glands. ', 'Tonsillitis (/ˌtɒnsɪˈlaɪtɪs/ TON-si-LEYE-tis) is inflammation of the tonsils most commonly caused by v...
      labels: [1, 0, 0, 0, 0, ...]
    • query: where does an amur leopard live
      docs: ['Snowy Remote Area. - This is the biome for the Amur Leopard.- They live in Korea, China, Japan, and Russia.- It is known to adapt to any specific environment if it provides food and water.- It is the only leopard known to live in the harsh, cold winters of the Russian Far East. ', 'The Amur leopard (Panthera pardus orientalis) is a leopard subspecies native to the Primorye region of southeastern Russia and the Jilin Province of northeast China. It is classified as Critically Endangered since 1996 by IUCN. In 2007, only 19–26 wild Amur leopards were estimated to survive. The Amur leopard is the only Panthera pardus subspecies adapted to a cold snowy climate (the snow leopard, which favors a similar habitat, belongs to a different species). Amur leopards used to be found in northeast Asia, probably in the south to Peking, and the Korean Peninsula.', 'Amur leopards differ from other subspecies by a thick coat of spot-covered fur. They show the strongest and most consistent divergence in...
      labels: [1, 0, 0, 0, 0, ...]
    • query: what is the structure of the endocrine system
      docs: ['The Endocrine System means the structure of glands that secrete hormones through the circulatory system into the receptive organs. In physiology, the endocrine system is a system of glands, each of which secretes a type of hormone directly into the bloodstream to regulate the body. The endocrine system is … in contrast to exocrine system, which secretes its chemicals using duct', 'The major glands of the endocrine system are the hypothalamus, pituitary, thyroid, parathyroids, adrenals, pineal body, and the reproductive organs (ovaries and testes). The pancreas is also a part of this system; it has a role in hormone production as well as in digestion.', 'Overview. The endocrine system—the other communication system in the body—is made up of endocrine glands that produce hormones, chemical substances released into the bloodstream to guide processes such as metabolism, growth, and sexual development. Hormones are also involved in regulating emotional life. The major glands of the endocr...
      labels: [1, 0, 0, 0, 0, ...]
  • Loss: LambdaLoss with these parameters:
    {
        "weighting_scheme": "sentence_transformers.cross_encoder.losses.LambdaLoss.NDCGLoss2PPScheme",
        "k": null,
        "sigma": 1.0,
        "eps": 1e-10,
        "reduction_log": "binary",
        "activation_fct": "torch.nn.modules.linear.Identity",
        "mini_batch_size": 16
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • load_best_model_at_end: True
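
Together, the dataset, loss, and the hyperparameters above plug into the standard Sentence Transformers cross-encoder training loop. A sketch under the assumption of the CrossEncoderTrainer API; the output path is a placeholder, and eval_dataset follows the same format as train_dataset:

from sentence_transformers.cross_encoder import (
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)

args = CrossEncoderTrainingArguments(
    output_dir="models/reranker-MiniLM-L12-lambdaloss",  # hypothetical path
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()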

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss NanoMSMARCO_R100_ndcg@10 NanoNFCorpus_R100_ndcg@10 NanoNQ_R100_ndcg@10 NanoBEIR_R100_mean_ndcg@10
-1 -1 - - 0.0548 (-0.4856) 0.2336 (-0.0914) 0.0201 (-0.4805) 0.1028 (-0.3525)
0.0001 1 1.3208 - - - - -
0.0239 250 1.4825 - - - - -
0.0478 500 1.4323 1.3017 0.4655 (-0.0749) 0.3590 (+0.0339) 0.5570 (+0.0563) 0.4605 (+0.0051)
0.0718 750 1.2689 - - - - -
0.0957 1000 1.2101 1.1896 0.5067 (-0.0337) 0.3570 (+0.0320) 0.5661 (+0.0655) 0.4766 (+0.0212)
0.1196 1250 1.1773 - - - - -
0.1435 1500 1.1249 1.1359 0.5727 (+0.0323) 0.3785 (+0.0535) 0.6712 (+0.1705) 0.5408 (+0.0854)
0.1674 1750 1.1226 - - - - -
0.1914 2000 1.1277 1.0931 0.5964 (+0.0560) 0.4271 (+0.1021) 0.6738 (+0.1732) 0.5658 (+0.1104) *
0.2153 2250 1.1009 - - - - -
0.2392 2500 1.1058 1.1070 0.5630 (+0.0226) 0.3656 (+0.0405) 0.6730 (+0.1723) 0.5338 (+0.0785)
0.2631 2750 1.0996 - - - - -
0.2870 3000 1.0856 1.0669 0.5764 (+0.0359) 0.3653 (+0.0403) 0.6453 (+0.1447) 0.5290 (+0.0736)
0.3109 3250 1.103 - - - - -
0.3349 3500 1.077 1.0820 0.5827 (+0.0423) 0.3648 (+0.0398) 0.6493 (+0.1487) 0.5323 (+0.0769)
0.3588 3750 1.0845 - - - - -
0.3827 4000 1.0571 1.0640 0.5923 (+0.0518) 0.3470 (+0.0220) 0.6966 (+0.1960) 0.5453 (+0.0899)
0.4066 4250 1.0574 - - - - -
0.4305 4500 1.0531 1.0687 0.5590 (+0.0186) 0.3330 (+0.0080) 0.6686 (+0.1680) 0.5202 (+0.0648)
0.4545 4750 1.0504 - - - - -
0.4784 5000 1.0397 1.0350 0.5764 (+0.0360) 0.3500 (+0.0250) 0.6774 (+0.1768) 0.5346 (+0.0792)
0.5023 5250 1.0676 - - - - -
0.5262 5500 1.0507 1.0391 0.5929 (+0.0525) 0.3517 (+0.0267) 0.6690 (+0.1683) 0.5379 (+0.0825)
0.5501 5750 1.0355 - - - - -
0.5741 6000 1.0271 1.0353 0.5544 (+0.0139) 0.3566 (+0.0316) 0.6765 (+0.1759) 0.5292 (+0.0738)
0.5980 6250 1.0375 - - - - -
0.6219 6500 1.0274 1.0230 0.5520 (+0.0115) 0.3626 (+0.0375) 0.6867 (+0.1861) 0.5337 (+0.0784)
0.6458 6750 1.0234 - - - - -
0.6697 7000 1.0196 1.0276 0.5436 (+0.0032) 0.3653 (+0.0403) 0.6815 (+0.1808) 0.5301 (+0.0748)
0.6936 7250 1.0316 - - - - -
0.7176 7500 1.0272 1.0295 0.5533 (+0.0129) 0.3519 (+0.0268) 0.6514 (+0.1508) 0.5189 (+0.0635)
0.7415 7750 1.028 - - - - -
0.7654 8000 1.0315 1.0065 0.5452 (+0.0048) 0.3399 (+0.0149) 0.6679 (+0.1673) 0.5177 (+0.0623)
0.7893 8250 1.0219 - - - - -
0.8132 8500 1.0107 1.0276 0.5501 (+0.0097) 0.3422 (+0.0172) 0.6876 (+0.1869) 0.5266 (+0.0713)
0.8372 8750 1.0232 - - - - -
0.8611 9000 1.0148 1.0081 0.5446 (+0.0042) 0.3358 (+0.0108) 0.6703 (+0.1696) 0.5169 (+0.0615)
0.8850 9250 1.0198 - - - - -
0.9089 9500 1.0134 1.0088 0.5398 (-0.0006) 0.3418 (+0.0168) 0.6622 (+0.1615) 0.5146 (+0.0592)
0.9328 9750 1.0276 - - - - -
0.9568 10000 1.0265 1.0119 0.5555 (+0.0151) 0.3496 (+0.0246) 0.6854 (+0.1848) 0.5302 (+0.0748)
0.9807 10250 1.0175 - - - - -
-1 -1 - - 0.5964 (+0.0560) 0.4271 (+0.1021) 0.6738 (+0.1732) 0.5658 (+0.1104)
  • The row marked with * (epoch 0.1914, step 2000) denotes the saved checkpoint.

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.537 kWh
  • Carbon Emitted: 0.209 kg of CO2
  • Hours Used: 1.775 hours
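
Measurements of this kind are produced by wrapping the training run in a CodeCarbon tracker; a sketch, not the exact setup used for this model:

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
try:
    trainer.train()  # the run being measured
finally:
    emissions_kg = tracker.stop()  # returns the emitted kg of CO2-equivalent
print(f"Carbon emitted: {emissions_kg:.3f} kg")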

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 3.5.0.dev0
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.4.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

LambdaLoss

@inproceedings{wang2018lambdaloss,
    title = "The LambdaLoss Framework for Ranking Metric Optimization",
    author = "Wang, Xuanhui and Li, Cheng and Golbandi, Nadav and Bendersky, Michael and Najork, Marc",
    booktitle = "Proceedings of the 27th ACM International Conference on Information and Knowledge Management",
    pages = "1313--1322",
    year = "2018",
}