Org README suggestions (#6)
Commit 54b5699d2fff64f583d65b125beae9b7780d4792
Co-authored-by: Pedro Cuenca <[email protected]>

README.md CHANGED
# MLX Community

A community org for [MLX](https://github.com/ml-explore/mlx) model weights that run on Apple Silicon. This organization hosts ready-to-use models compatible with:

- [mlx-examples](https://github.com/ml-explore/mlx-examples) – a Python API and CLI for running many types of models, including LLMs, image models, audio models, and more.
- [mlx-swift-examples](https://github.com/ml-explore/mlx-swift-examples) – a Swift package for running MLX models.
- [mlx-vlm](https://github.com/Blaizzy/mlx-vlm) – a package for inference and fine-tuning of Vision Language Models (VLMs) with MLX.

These are pre-converted weights, ready to use in the example scripts or to integrate into your apps.

# Quick start for LLMs

```
mlx_lm.generate --model mlx-community/Mistral-7B-Instruct-v0.3-4bit --prompt "he…
```

This will download a Mistral 7B model from the HF中国镜像站 Hub and generate text using the given prompt.
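Conceptually, `mlx_lm.generate` tokenizes the prompt and then produces one token at a time, feeding each new token back into the model until a stop condition is hit. A minimal sketch of that loop in plain Python, where `next_token_logits` is a hypothetical toy stand-in for a real model (nothing below is mlx-lm API):

```python
# Sketch of an autoregressive generation loop with a toy stand-in model.
# `next_token_logits` is a hypothetical placeholder, not part of mlx-lm.

def next_token_logits(tokens: list[int], vocab_size: int = 8) -> list[float]:
    """Toy 'model': scores each candidate next token from the current context."""
    return [float((t + sum(tokens)) % vocab_size) for t in range(vocab_size)]

def generate(prompt_tokens: list[int], max_tokens: int, eos: int = 0) -> list[int]:
    """Greedy decoding: repeatedly append the highest-scoring token
    until the end-of-sequence token appears or the limit is reached."""
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        logits = next_token_logits(tokens)
        best = max(range(len(logits)), key=logits.__getitem__)
        if best == eos:
            break
        tokens.append(best)
    return tokens

out = generate([3, 1], max_tokens=4)
print(out)
```

A real run replaces the toy scorer with the downloaded model and detokenizes the resulting token ids back into text.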
To chat with an LLM use:

```bash
mlx_lm.chat
```

This will give you a chat REPL that you can use to interact with the LLM. The chat context is preserved during the lifetime of the REPL.
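The preserved context can be pictured as a message list that grows with every turn and is passed back to the model each time. A minimal sketch, where `reply` is a hypothetical toy stand-in for the model (not mlx-lm code):

```python
# Sketch of a chat REPL that preserves context across turns.
# `reply` is a hypothetical stand-in for a real chat model.

def reply(history: list[dict]) -> str:
    """Toy 'model': reports how much context it has seen."""
    return f"seen {len(history)} messages"

def chat_turn(history: list[dict], user_input: str) -> str:
    """One REPL turn: append the user message, query the model,
    and append the assistant reply so later turns see it too."""
    history.append({"role": "user", "content": user_input})
    answer = reply(history)
    history.append({"role": "assistant", "content": answer})
    return answer

history: list[dict] = []
first = chat_turn(history, "hello")
second = chat_turn(history, "and now?")
print(first, "|", second)
```

Because the history is never cleared between turns, the second reply sees the first exchange; closing the REPL discards the list, which is why context lasts only for its lifetime.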
For a full list of options, run `--help` on the command of interest, for example:

```
mlx_lm.chat --help
```

## Conversion and Quantization

To quantize a model from the command line run:

```
mlx_lm.convert \
    …
    --upload-repo mlx-community/my-4bit-mistral
```

Models can also be converted and quantized directly in the [mlx-my-repo](https://huggingface.co/spaces/mlx-community/mlx-my-repo) HF中国镜像站 Space.
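The idea behind quantization is to store each small group of weights as low-bit integers plus a per-group scale and offset, trading a little precision for a much smaller model. A simplified plain-Python sketch of group-wise affine quantization (an illustration of the concept, not MLX's implementation):

```python
# Simplified group-wise affine quantization: each group of weights is
# mapped to small integers via a per-group scale and offset.
# This illustrates the idea only; it is not MLX's actual algorithm.

def quantize(weights: list[float], group_size: int = 4, bits: int = 4):
    levels = (1 << bits) - 1  # 15 distinct steps for 4-bit
    q, scales, offsets = [], [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / levels or 1.0  # avoid div-by-zero for flat groups
        scales.append(scale)
        offsets.append(lo)
        q.extend(round((w - lo) / scale) for w in group)
    return q, scales, offsets

def dequantize(q, scales, offsets, group_size: int = 4):
    """Reconstruct approximate weights from integers + per-group metadata."""
    return [v * scales[i // group_size] + offsets[i // group_size]
            for i, v in enumerate(q)]

w = [0.1, -0.4, 0.25, 0.9, 1.2, 0.0, -0.7, 0.3]
q, s, o = quantize(w)
w_hat = dequantize(q, s, o)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, round(err, 3))
```

Each reconstructed weight is off by at most about half a quantization step for its group, which is why low-bit models can stay close to the originals.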
For more details on the API, check out the full [README](https://github.com/ml-explore/mlx-examples/tree/main/llms).

For more examples, visit the [MLX Examples](https://github.com/ml-explore/mlx-examples) repo. The repo includes examples of:

- Image generation with Flux and Stable Diffusion
- Parameter efficient fine tuning with LoRA
- Speech recognition with Whisper
- Multimodal models such as CLIP or LLaVA

and many other examples of different machine learning applications and algorithms.

For comprehensive support of VLMs, check [mlx-vlm](https://github.com/Blaizzy/mlx-vlm), and to integrate MLX natively in your apps use [mlx-swift-examples](https://github.com/ml-explore/mlx-swift-examples).