zhilinw committed · verified
Commit 696c0f3 · Parent(s): cebc1cb

Update README.md

Files changed (1): README.md (+4, −0)
README.md CHANGED
@@ -94,6 +94,10 @@ We developed this model using Llama-3.3-70B-Instruct as its foundation. This mod
 ## Quick Start

+ You can use this model with the Hugging Face Transformers library on 2 or more 80GB GPUs (NVIDIA Ampere or newer), with at least 150GB of free disk space to accommodate the download.
+
+ This code has been tested with Transformers v4.45.0, torch v2.3.0a0+40ec155e58.nv24.3, and 2 A100 80GB GPUs, but any setup that supports meta-llama/Llama-3.1-70B-Instruct should support this model as well. If you run into problems, consider running `pip install -U transformers`.
+
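Before starting the download, it can help to confirm that the requirements above are met. A minimal pre-flight sketch using the standard library — the 150GB and 2-GPU thresholds are taken from the text above, and the GPU check is skipped if torch is not yet installed:

```python
# Pre-flight sketch for the requirements above (thresholds taken from this README).
import shutil

REQUIRED_DISK_GB = 150   # free space needed to accommodate the model download
REQUIRED_GPUS = 2        # 80GB GPUs, NVIDIA Ampere or newer

free_gb = shutil.disk_usage(".").free / 1e9
print(f"Free disk: {free_gb:.0f} GB (need >= {REQUIRED_DISK_GB} GB)")

try:
    import torch
    print(f"Visible CUDA GPUs: {torch.cuda.device_count()} (need >= {REQUIRED_GPUS})")
except ImportError:
    print("torch is not installed; install it before loading the model")
```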
 ```python
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer