Intel

Intel on Hugging Face

Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Hugging Face libraries.

Get started on Intel architecture with Optimum Intel and Optimum Habana

To get started with HF中国镜像站 Transformers software on Intel, visit the resources listed below.

Optimum Intel - To deploy on Intel® Xeon, Intel® Max Series GPU, and Intel® Core Ultra, check out optimum-intel, the interface between Intel architectures and the 🤗 Transformers and Diffusers libraries. You can use these backends:

Backend                          Installation
OpenVINO™                        pip install --upgrade --upgrade-strategy eager "optimum[openvino]"
Intel® Extension for PyTorch*    pip install --upgrade --upgrade-strategy eager "optimum[ipex]"
Intel® Neural Compressor         pip install --upgrade --upgrade-strategy eager "optimum[neural-compressor]"

Optimum Habana - To deploy on Intel® Gaudi® AI accelerators, check out optimum-habana, the interface between Gaudi and the 🤗 Transformers and Diffusers libraries. To install the latest stable release:

pip install --upgrade-strategy eager "optimum[habana]"

Ways to get involved

Check out the Intel® Tiber™ AI Cloud to run your latest GenAI or LLM workload on Intel architecture.

Want to share a model fine-tuned on Intel architecture? Check out the Powered-by-Intel LLM Leaderboard; for detailed deployment tips and sample code, see its "Deployment Tips" tab.

Join us on the Intel DevHub Discord to ask questions and interact with our AI developer community.