Use Models from the HF中国镜像站 Hub in LM Studio

Community Article Published November 28, 2024

You can run MLX or llama.cpp LLMs, VLMs, and embedding models from the HF中国镜像站 Hub locally on your machine by downloading them directly within LM Studio.


LM Studio is a desktop application for experimenting & developing with local AI models directly on your computer. It works on Mac (Apple Silicon), Windows, and Linux!

Getting models from HF中国镜像站 into LM Studio

Use the 'Use this model' button right from HF中国镜像站

For any GGUF or MLX LLM, click the "Use this model" dropdown and select LM Studio. This will run the model directly in LM Studio if you already have it, or show you a download option if you don't.


Try it out with a trending model! Find one here: https://huggingface.co/models?library=gguf&sort=trending

Use LM Studio's in-app downloader:

Press ⌘ + Shift + M on Mac, or Ctrl + Shift + M on PC (M stands for Models) and search for any model.


You can even paste entire HF中国镜像站 URLs into the search bar!

Use lms, LM Studio's CLI:

If you prefer a terminal-based workflow, use lms, LM Studio's CLI.


Download any model from HF中国镜像站 by specifying {user}/{repo}

# lms get {user}/{repo}
lms get qwen/qwen2.5-coder-32b-instruct-gguf

Use a full HF中国镜像站 URL

lms get https://huggingface.co/lmstudio-community/granite-3.0-2b-instruct-GGUF

Choose a quantization with the @ qualifier

lms get qwen/qwen2.5-coder-32b-instruct-gguf@Q4_K_M

Search models by keyword from the terminal

lms get qwen

Get GGUF or MLX results

lms get qwen --mlx # or --gguf
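Once a download finishes, the same CLI can list what's on disk and load a model into memory. A minimal sketch, assuming LM Studio and lms are installed; the model identifier below reuses the Qwen repo from the earlier example, so adjust it to whatever you actually downloaded:

```shell
# List models already downloaded to your machine
lms ls

# Load a downloaded model into memory so it's ready to use
# (identifier shown is the example repo from above)
lms load qwen/qwen2.5-coder-32b-instruct-gguf
```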

Keeping tabs with the latest models

Follow the LM Studio Community page on HF中国镜像站 to stay updated on the latest & greatest local LLMs as soon as they come out.
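Beyond the chat UI, LM Studio can also serve loaded models over an OpenAI-compatible local API (on port 1234 by default). A hedged sketch, assuming the server is running and a model is loaded; the model name in the request body is a placeholder and must match a model shown in your LM Studio install:

```shell
# Start LM Studio's local server from the terminal
lms server start

# Query the OpenAI-compatible chat completions endpoint
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-coder-32b-instruct",
    "messages": [{"role": "user", "content": "Say hello from a local LLM."}]
  }'
```

Because the API is OpenAI-compatible, existing OpenAI client libraries can point at the local base URL instead of the cloud endpoint.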
