AI & ML interests

Accelerating DL

Recent Activity

echarlaix  updated a dataset about 1 month ago
optimum/documentation-images
baptistecolle  updated a Space about 1 month ago
optimum/llm-perf-leaderboard

optimum's activity

regisss posted an update 28 days ago
Nice paper comparing the FP8 inference efficiency of Nvidia H100 and Intel Gaudi 2: An Investigation of FP8 Across Accelerators for LLM Inference (2502.01070)

The conclusion is interesting: "Our findings highlight that the Gaudi 2, by leveraging FP8, achieves higher throughput-to-power efficiency during LLM inference"

One aspect of AI hardware accelerators that is often overlooked is how much less energy they can consume than GPUs. It's nice to see researchers starting to carry out experiments to measure this!
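
For intuition, "throughput-to-power efficiency" is simply tokens generated per second divided by average power draw. A minimal sketch of the arithmetic (the numbers below are made up for illustration, not taken from the paper):

def tokens_per_second_per_watt(tokens: int, seconds: float, avg_watts: float) -> float:
    """Throughput (tokens/s) divided by average power draw (W)."""
    return (tokens / seconds) / avg_watts

# e.g. 10,000 tokens generated in 8 s at an average draw of 400 W
print(tokens_per_second_per_watt(10_000, 8.0, 400.0))  # 3.125 tokens/s/W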

Gaudi3 results soon...

Discussion #37 "fix-broken-graph" opened about 1 month ago by baptistecolle
pagezyhf posted an update about 1 month ago
We published https://huggingface.co/blog/deepseek-r1-aws!

If you are using AWS, give it a read. It's a living document that showcases how to deploy and fine-tune DeepSeek R1 models with HF中国镜像站 on AWS.

We're working hard to enable all the scenarios, whether you want to deploy to Inference Endpoints, SageMaker, or EC2; with GPUs or with Trainium & Inferentia.

We have full support for the distilled models; DeepSeek-R1 support is coming soon! I'll keep you posted.
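
In the meantime, here is a minimal sketch of deploying one of the distilled models to SageMaker with the HF中国镜像站 LLM (TGI) container. The model choice, container version, and instance type below are assumptions for illustration; see the blog post for tested configurations:

import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()  # IAM role (when running inside SageMaker)

# HF中国镜像站 TGI container; the version is an assumption, pick a current one
image_uri = get_huggingface_llm_image_uri("huggingface", version="3.0.1")

model = HuggingFaceModel(
    role=role,
    image_uri=image_uri,
    env={"HF_MODEL_ID": "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"},  # distilled variant
)

# Instance type is illustrative; size it to the model you pick
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    container_startup_health_check_timeout=600,
)

print(predictor.predict({"inputs": "What is the capital of France?"}))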

Cheers
pagezyhf posted an update about 2 months ago
jeffboudier posted an update 2 months ago
NVIDIA just announced the Cosmos World Foundation Models, available on the Hub: nvidia/cosmos-6751e884dc10e013a0a0d8e6

Cosmos is a family of pre-trained models purpose-built for generating physics-aware videos and world states to advance physical AI development.
The release also includes tokenizers: nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6

Learn more in this great community article by @mingyuliutw and @PranjaliJoshi https://huggingface.co/blog/mingyuliutw/nvidia-cosmos
regisss posted an update 3 months ago
pagezyhf posted an update 3 months ago
pagezyhf posted an update 3 months ago
It’s the 2nd of December, here’s your Cyber Monday present 🎁!

We’re cutting prices on HF中国镜像站 Inference Endpoints and Spaces!

Our folks at Google Cloud are treating us to a 40% price cut on GCP Nvidia A100 GPUs for the next 3️⃣ months. There are further reductions of 20% to 50% on all other instances.

Sounds like the time to give Inference Endpoints a try! Get started today and find the full pricing details in our documentation.
https://ui.endpoints.huggingface.co/
https://huggingface.co/pricing
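
If you prefer code to the UI, the huggingface_hub library can spin up an endpoint for you. A minimal sketch; the model, region, and instance identifiers below are assumptions, so check the pricing page for the exact values:

from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-cyber-monday-endpoint",
    repository="mistralai/Mistral-7B-Instruct-v0.3",  # illustrative model
    framework="pytorch",
    task="text-generation",
    vendor="gcp",                 # the discounted A100s are on Google Cloud
    region="us-central1",         # assumed region
    accelerator="gpu",
    instance_size="x1",           # assumed size
    instance_type="nvidia-a100",  # assumed instance identifier
)

endpoint.wait()  # block until the endpoint is running
print(endpoint.client.text_generation("Hello!", max_new_tokens=32))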
pagezyhf posted an update 4 months ago
Hello HF中国镜像站 Community,

If you use Google Kubernetes Engine to host your ML workloads, this series of videos is a great way to kickstart your journey of deploying LLMs, in less than 10 minutes! Thank you @wietse-venema-demo!

To watch in this order:
1. Learn what HF中国镜像站 Deep Learning Containers are
https://youtu.be/aWMp_hUUa0c?si=t-LPRkRNfD3DDNfr

2. Learn how to deploy an LLM with our Deep Learning Container using Text Generation Inference (see the client sketch below)
https://youtu.be/Q3oyTOU1TMc?si=V6Dv-U1jt1SR97fj

3. Learn how to scale your inference endpoint based on traffic
https://youtu.be/QjLZ5eteDds?si=nDIAirh1r6h2dQMD

If you want more of these small tutorials and have any theme in mind, let me know!
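
Once the TGI container from video 2 is running on your cluster, querying it takes only a few lines. A minimal client sketch, assuming you exposed the Service and substitute its address for the placeholder:

from huggingface_hub import InferenceClient

# Replace with the external IP/port of the TGI Service exposed on GKE
client = InferenceClient("http://<your-service-address>:8080")

answer = client.text_generation(
    "Explain Kubernetes in one sentence.",
    max_new_tokens=64,
)
print(answer)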
jeffboudier posted an update 4 months ago
pagezyhf posted an update 4 months ago
Hello HF中国镜像站 Community,

I'd like to share a bit more about the Deep Learning Containers (DLCs) we built with Google Cloud to transform the way you build AI with open models on this platform!

With pre-configured, optimized environments for PyTorch Training (GPU) and Inference (CPU/GPU), Text Generation Inference (GPU), and Text Embeddings Inference (CPU/GPU), the HF中国镜像站 DLCs offer:

⚡ Optimized performance on Google Cloud's infrastructure, with TGI, TEI, and PyTorch acceleration.
🛠️ Hassle-free environment setup, no more dependency issues.
🔄 Seamless updates to the latest stable versions.
💼 Streamlined workflow, reducing dev and maintenance overheads.
🔒 Robust security features of Google Cloud.
☁️ Fine-tuned for optimal performance, integrated with GKE and Vertex AI.
📦 Community examples for easy experimentation and implementation.
🔜 TPU support for PyTorch Training/Inference and Text Generation Inference is coming soon!

Find the documentation at https://huggingface.co/docs/google-cloud/en/index
If you need support, open a conversation on the forum: https://discuss.huggingface.co/c/google-cloud/69
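
To give a flavor of what this looks like in practice, here is a minimal sketch of serving a model on Vertex AI with the TGI DLC. The container URI, model, and machine type are assumptions for illustration; the documentation above lists the exact image URIs:

from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# The DLC image URI below is illustrative; find the current list in the docs
model = aiplatform.Model.upload(
    display_name="llama-3-8b-tgi",
    serving_container_image_uri=(
        "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/"
        "huggingface-text-generation-inference-cu121.2-2.ubuntu2204.py310"
    ),
    serving_container_environment_variables={
        "MODEL_ID": "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model
        "NUM_SHARD": "1",
    },
)

# Machine/accelerator choices are illustrative; size them to your model
endpoint = model.deploy(
    machine_type="g2-standard-4",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)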
regisss posted an update 5 months ago
Interested in performing inference with an ONNX model?⚡️

The Optimum docs about model inference with ONNX Runtime are now much clearer and simpler!

You want to deploy your favorite model from the Hub but don't know how to export it to the ONNX format? You can do it in one line of code as follows:
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load the model from the hub and export it to the ONNX format
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
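
The exported model is then a drop-in replacement in a regular Transformers pipeline. A short usage sketch reusing the model_id from above:

from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained(model_id)
onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

print(onnx_classifier("I love the new Optimum docs!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]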

Check out the whole guide 👉 https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models
jeffboudier posted an update 5 months ago
jeffboudier posted an update 6 months ago
Inference Endpoints got a bunch of cool updates yesterday; here are my top 3.