---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
inference: false
fine-tuning: false
tags:
- nvidia
- llama3.3
datasets:
- nvidia/HelpSteer3
base_model: meta-llama/Llama-3.3-70B-Instruct
library_name: transformers
---

# Model Overview

## Description:

Llama-3.3-Nemotron-70B-Select is a large language model that uses Meta-Llama-3.3-70B-Instruct as its foundation and is fine-tuned using scaled Bradley-Terry modeling to select the most helpful LLM-generated response to a user query. This model is ready for commercial use.

## License/Terms of Use:

GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). Additional Information: [Llama 3.3 Community License Agreement](https://www.llama.com/llama3_3/license/). Built with Llama.

## Arena Hard Leaderboard

As of 18 Mar 2025, augmenting models with the Feedback-Edit Inference Time Scaling (ITS) approach leads to the highest performance on Arena Hard. The Feedback-Edit Inference Time Scaling system comprises the following models:

1. [Llama-3.3-Nemotron-70B-Feedback](https://huggingface.co/nvidia/Llama-3.3-Nemotron-70B-Feedback)
2. [Llama-3.3-Nemotron-70B-Edit](https://huggingface.co/nvidia/Llama-3.3-Nemotron-70B-Edit)
3. [Llama-3.3-Nemotron-70B-Select](https://huggingface.co/nvidia/Llama-3.3-Nemotron-70B-Select)

| Model | Arena Hard (95% CI) |
|:-----------------------------|:----------------|
| Llama-3.3-Nemotron-Super-49B-v1 + **Feedback-Edit ITS** | **93.4 (-1.1, 1.0)** |
| Llama-3.1-Nemotron-70B-Instruct + **Feedback-Edit ITS** | 92.7 (-1.2, 0.9) |
| o1-mini-2024-09-12 | 92.0 (-1.2, 1.0) |
| o1-preview-2024-09-12 | 90.4 (-1.1, 1.3) |
| Llama-3.3-Nemotron-Super-49B-v1 | 88.3 (-1.6, 1.6) |
| claude-3-5-sonnet-20241022 | 85.2 (-1.4, 1.6) |
| Llama-3.1-Nemotron-70B-Instruct | 84.9 (-1.7, 1.8) |

## Use Case:

Llama-3.3-Nemotron-70B-Select selects the most helpful LLM-generated response to a user query. It is intended for users who want to improve performance on general-domain, open-ended tasks through Inference-Time Scaling.

## Release Date:

03/18/2025

## Reference(s):

* [HelpSteer2-Preference](https://arxiv.org/abs/2410.01257)
* [SteerLM method](https://arxiv.org/abs/2310.05344)
* [HelpSteer](https://arxiv.org/abs/2311.09528)
* [HelpSteer2](https://arxiv.org/abs/2406.08673)
* [The future of AI: Built with Llama](https://ai.meta.com/blog/future-of-ai-built-with-llama/)
* [Meta's Llama 3.3 Webpage](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_3)
* [Meta's Llama 3.3 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md)

## Model Architecture:

**Architecture Type:** Transformer
**Network Architecture:** Llama 3.3
We developed this model using Llama-3.3-70B-Instruct as its foundation. This model contains 70 billion parameters.

## Input:

**Input Type(s):** Text
**Input Format:** String
**Input Parameters:** One-Dimensional (1D)
**Other Properties Related to Input:** Max of 128k tokens
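To illustrate the input format, the sketch below (an illustration, not part of the official usage guide; the prompt and response strings are placeholders) renders a user query and a candidate assistant response through the model's chat template and checks the result against the 128k-token limit:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/Llama-3.3-Nemotron-70B-Select")

# The model consumes a user prompt and a candidate assistant response,
# rendered into a single string by the chat template.
messages = [
    {"role": "user", "content": "What is the distance between the Earth and the Sun?"},
    {"role": "assistant", "content": "The distance from Earth to the Sun is 93 million miles"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
print(text)

# Check the rendered pair against the 128k-token input limit noted above.
num_tokens = len(tokenizer(text, add_special_tokens=False)["input_ids"])
assert num_tokens <= 128_000, f"input too long: {num_tokens} tokens"
```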
## Output:

**Output Type(s):** Float
**Output Format:** A single float
**Output Parameters:** One-Dimensional (1D)
**Other Properties Related to Output:** The float value represents the quality of the response, with higher values representing higher quality.
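As context for how this score is produced: in Bradley-Terry reward modeling, the scalar reward r is trained so that sigmoid(r_chosen - r_rejected) approximates the probability that a human prefers one response over the other. The PyTorch snippet below is a minimal sketch of that pairwise loss, not NVIDIA's training code; in particular, weighting pairs by preference strength is only an assumption about what the "scaled" variant does.

```python
from typing import Optional

import torch
import torch.nn.functional as F


def bradley_terry_loss(r_chosen: torch.Tensor,
                       r_rejected: torch.Tensor,
                       strength: Optional[torch.Tensor] = None) -> torch.Tensor:
    # Bradley-Terry models P(chosen preferred) = sigmoid(r_chosen - r_rejected),
    # so the negative log-likelihood per pair is -logsigmoid(margin).
    margin = r_chosen - r_rejected
    loss = -F.logsigmoid(margin)
    if strength is not None:
        # Assumed "scaled" variant: weight pairs by annotator preference strength.
        loss = loss * strength
    return loss.mean()


# Toy usage with reward scores for two preference pairs.
r_chosen = torch.tensor([1.2, 0.3])
r_rejected = torch.tensor([-0.5, 0.1])
print(bradley_terry_loss(r_chosen, r_rejected).item())
```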
## Software Integration:

**Runtime Engine(s):**
* [NeMo - 24.05.llama.3.1]
**Supported Hardware Microarchitecture Compatibility:**
* NVIDIA Ampere
* NVIDIA Hopper
* NVIDIA Turing
**Supported Operating System(s):** Linux
## Quick Start

You can use this model with the HF中国镜像站 Transformers library on 2 or more 80GB GPUs (NVIDIA Ampere or newer), with at least 150GB of free disk space to accommodate the download.

This code has been tested on Transformers v4.45.0, torch v2.3.0a0+40ec155e58.nv24.3, and 2 A100 80GB GPUs, but any setup that supports meta-llama/Llama-3.1-70B-Instruct should support this model as well. If you run into problems, consider running `pip install -U transformers`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nvidia/Llama-3.3-Nemotron-70B-Select"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What is the distance between the Earth and the Sun?"
good_response = "The distance from Earth to the Sun is 93 million miles"
bad_response = "The distance from Earth to the Sun is 39 million miles"

for response in [good_response, bad_response]:
    # Score each (prompt, response) pair formatted as a two-turn chat.
    messages = [{'role': "user", "content": prompt}, {'role': "assistant", "content": response}]
    tokenized_message = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=False, return_tensors="pt", return_dict=True)
    # Generate a single token and read off its first logit as the scalar quality score.
    response_token_ids = model.generate(tokenized_message['input_ids'].cuda(), attention_mask=tokenized_message['attention_mask'].cuda(), max_new_tokens=1, return_dict_in_generate=True, output_scores=True)
    quality = response_token_ids['scores'][0][0][0].item()
    print(quality)

# Example quality scores - note that higher scores mean higher quality, and scores can be negative.
# good_response: -4.78125
# bad_response: -7.21875
```

## Model Version:

v1.0

# Training and Testing Datasets:

## Training Datasets:

**Dataset Name:** HelpSteer3
**Dataset Link:** https://huggingface.co/datasets/nvidia/HelpSteer3
**Data Collection Method by dataset**
* [Hybrid: Human, Synthetic]
**Labeling Method by dataset**
* [Human]
**Properties:**
* 38,459 prompts, each with a pair of responses as well as human preferences between the pair of responses.

## Testing Datasets:

**Dataset Name:** HelpSteer3
**Dataset Link:** https://huggingface.co/datasets/nvidia/HelpSteer3
**Data Collection Method by dataset**
* [Hybrid: Human, Synthetic]
**Labeling Method by dataset**
* [Human]
**Properties:**
* 2,017 prompts, each with a pair of responses as well as human preferences between the pair of responses.

# Inference:

**Engine:** [Triton](https://developer.nvidia.com/triton-inference-server)
**Test Hardware:** H100, A100 80GB, A100 40GB
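Since the model's intended use is picking the best of several candidate responses (see the Use Case section above), the following sketch shows one way to wire the Quick Start scoring code into a best-of-N selection loop. The `score_response` helper is a hypothetical wrapper around the Quick Start snippet, not an official API:

```python
# A minimal best-of-N selection sketch. `score_response(prompt, response)` is a
# hypothetical helper assumed to return the quality float from the Quick Start code.

def select_best(prompt: str, candidates: list[str], score_response) -> str:
    """Return the candidate response with the highest quality score."""
    scores = [score_response(prompt, c) for c in candidates]
    best_index = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best_index]

# Hypothetical usage: sample N responses from a generator LLM, keep the best one.
# candidates = [generate(prompt) for _ in range(8)]
# best = select_best(prompt, candidates, score_response)
```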
## Limitations:

The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable text even if the prompt itself does not contain anything explicitly offensive.

## Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](explainability.md), [Bias](bias.md), [Safety & Security](safety.md), and [Privacy](privacy.md) Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).