Update README.md
README.md CHANGED
@@ -94,6 +94,10 @@ We developed this model using Llama-3.3-70B-Instruct as its foundation. This mod
## Quick Start

You can use the model with the HuggingFace Transformers library on 2 or more 80GB GPUs (NVIDIA Ampere or newer), with at least 150GB of free disk space to accommodate the download.

This code has been tested on Transformers v4.45.0, torch v2.3.0a0+40ec155e58.nv24.3, and 2 A100 80GB GPUs, but any setup that supports meta-llama/Llama-3.1-70B-Instruct should support this model as well. If you run into problems, consider running `pip install -U transformers`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
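
# The diff context above ends at the imports; the rest of this snippet is a
# minimal sketch of the standard Transformers load-and-generate pattern the
# Quick Start describes. "model-org/model-name" is a hypothetical placeholder,
# not this model's actual repository id.
model_id = "model-org/model-name"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the bf16 weights across the available 80GB GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Decode only the newly generated tokens, not the echoed prompt.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```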