All HF Hub posts

openfree posted an update 1 day ago
Huggingface Space Leaderboard 🚀
Hello Huggingface Community!

VIDraft/Space-Leaderboard

We are excited to introduce the Huggingface Space Leaderboard, a service that lets you view the latest trending Spaces on the Huggingface platform at a glance. This service helps you quickly explore a wide range of creative projects and will spark new inspiration for your own ideas. 🎉

Detailed Feature Overview

1. Real-time Trend Reflection
Automated Aggregation: Analyzes and ranks over 500 popular Spaces on Huggingface in real time.
Accurate Ranking: Combines various metrics such as likes, engagement, and creation time to accurately reflect the latest trends.
Instant Updates: Data is continuously updated, so you always see the most current popular Spaces.
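The post doesn't publish the actual ranking formula, but the idea of combining likes, engagement, and creation time into one trend score can be sketched in a few lines. The weights and the recency half-life below are made-up illustration values, not the leaderboard's real parameters:

```python
from datetime import datetime, timezone

def trending_score(likes, engagement, created_at, now=None, half_life_days=7.0):
    """Hypothetical trend score: popularity damped by age.
    Weights and half-life are illustration values, not the real formula."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - created_at).total_seconds() / 86400
    popularity = likes + 0.5 * engagement          # made-up metric weighting
    recency = 0.5 ** (age_days / half_life_days)   # exponential age decay
    return popularity * recency

# Rank a couple of hypothetical Spaces by the score
spaces = [
    {"name": "space-a", "likes": 120, "engagement": 40,
     "created_at": datetime(2025, 3, 1, tzinfo=timezone.utc)},
    {"name": "space-b", "likes": 300, "engagement": 10,
     "created_at": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]
ranked = sorted(
    spaces,
    key=lambda s: trending_score(s["likes"], s["engagement"], s["created_at"]),
    reverse=True,
)
```

With exponential decay, the much newer Space outranks the older one despite fewer raw likes, which is the "latest trends" behavior described above.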

2. Intuitive Preview
70% Scaled Preview: Each Space is displayed at 70% scale, providing a neat and clear preview at a glance.
Easy Visual Comparison: View multiple Spaces side by side to easily compare their designs and functionalities.
Error Handling: In case of loading issues, a clear error message with a direct link is provided to help resolve any problems.

3. Creator Statistics
Top 30 Creators Analysis: A chart visualizes the number of Spaces created by the most active creators, giving you a clear view of the community’s top contributors. 📊
Data-driven Insights: Analyze the activity trends of each creator to gain fresh insights and inspiration.
Collaboration Opportunities: Use the statistics to easily identify potential collaborators within the community.
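The creator chart boils down to counting Spaces per owner. A minimal sketch with hypothetical owner names:

```python
from collections import Counter

# Hypothetical (space_id, owner) pairs as the leaderboard might hold them
spaces = [
    ("owner-a/space-1", "owner-a"),
    ("owner-a/space-2", "owner-a"),
    ("owner-b/space-1", "owner-b"),
]

# Count Spaces per creator; most_common(30) gives the top-30 chart data
creator_counts = Counter(owner for _, owner in spaces)
top_creators = creator_counts.most_common(30)
```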

Why Choose the Huggingface Space Leaderboard?
🚀 Fast and Reliable: Real-time data updates deliver the latest trends instantly, ensuring you gain insights without any delays.
🔎 Easy Search Functionality: Easily find the Space you’re looking for with filters by name, owner, or tags.
💡 Intuitive Design: A clean, user-friendly interface makes it simple for anyone to navigate and explore.

ginipick posted an update about 17 hours ago
🌐 GraphMind: Phi-3 Instruct Graph Explorer

✨ Extract and visualize knowledge graphs from any text in multiple languages!

GraphMind is a powerful tool that leverages the capabilities of Phi-3 to transform unstructured text into structured knowledge graphs, helping you understand complex relationships within any content.

ginigen/Graph-Mind

🚀 Key Features

Multi-language Support 🌍: Process text in English, Korean, and many other languages
Instant Visualization 🧩: See extracted entities and relationships in an interactive graph
Entity Recognition 🏷️: Automatically identifies and categorizes named entities
Optimized Performance ⚡: Uses caching to deliver faster results for common examples
Intuitive Interface 👆: Simple design makes complex graph extraction accessible to everyone

💡 Use Cases

Content Analysis: Extract key entities and relationships from articles or documents
Research Assistance: Quickly visualize connections between concepts in research papers
Educational Tool: Help students understand the structure of complex texts
Multilingual Processing: Extract knowledge from content in various languages

🔧 How It Works

Enter any text in the input field
Select a model from the dropdown
Click "Extract & Visualize"
Explore the interactive knowledge graph and entity recognition results
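The post doesn't show Phi-3's output schema, but assuming the extraction step yields (subject, relation, object) triples, turning them into a graph structure ready for a visualization library can be sketched as:

```python
def build_graph(triples):
    """Group (subject, relation, object) triples into an adjacency map,
    the structure a graph-visualization front end would render."""
    graph = {}
    for subj, rel, obj in triples:
        graph.setdefault(subj, []).append((rel, obj))
    return graph

# Hypothetical triples an extraction model might return for a sentence
# like "Marie Curie won the Nobel Prize and worked in Paris."
triples = [
    ("Marie Curie", "won", "Nobel Prize"),
    ("Marie Curie", "worked_in", "Paris"),
]
graph = build_graph(triples)
```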

GraphMind bridges the gap between raw text and structured knowledge, making it easier to identify patterns, extract insights, and understand relationships within any content. Try it now and transform how you interact with textual information!
#NLP #KnowledgeGraph #TextAnalysis #Visualization #Phi3 #MultilingualAI

prithivMLmods posted an update 1 day ago
Gemma-3-4B : Image and Video Inference 🖼️🎥

🧤Space: prithivMLmods/Imagineo-Chat

@gemma3-4b : {Tag + Space + 'prompt'}
@gemma3-4b-video : {Tag + Space + 'prompt'}
By default, it runs: prithivMLmods/Qwen2-VL-OCR-2B-Instruct

I have also tested Aya-Vision 8B against a custom Qwen2-VL-OCR on messy-handwriting test samples, an experiment aimed at optimizing edge-device VLMs for Optical Character Recognition.

📜Read the blog here: https://huggingface.co/blog/prithivMLmods/aya-vision-vs-qwen2vl-ocr-2b

jasoncorkill posted an update 1 day ago
Benchmarking Google's Veo2: How Does It Compare?

The results did not meet expectations. Veo2 struggled with style consistency and temporal coherence, falling behind competitors like Runway, Pika, Tencent, and even Alibaba. While the model shows promise, its alignment and quality are not yet there.

Google recently launched Veo2, its latest text-to-video model, through select partners like fal.ai. As part of our ongoing evaluation of state-of-the-art generative video models, we rigorously benchmarked Veo2 against industry leaders.

We generated a large set of Veo2 videos, spending hundreds of dollars in the process, and systematically evaluated them using our Python-based API for human and automated labeling.
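The Rapidata API itself isn't shown in the post; as one common way to turn pairwise human preferences into a leaderboard ranking (not necessarily the method Rapidata uses), here is a minimal Elo sketch over hypothetical votes:

```python
def elo_update(r_winner, r_loser, k=32.0):
    """One Elo rating update from a single pairwise preference judgment."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

# Hypothetical pairwise votes: (preferred model, rejected model)
votes = [("runway", "veo2"), ("pika", "veo2"), ("veo2", "model-x")]

# Start every model at the same rating, then replay the votes
ratings = {m: 1000.0 for pair in votes for m in pair}
for winner, loser in votes:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
```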

Check out the ranking here: https://www.rapidata.ai/leaderboard/video-models

Rapidata/text-2-video-human-preferences-veo2

clem posted an update about 6 hours ago
We just crossed 1,500,000 public models on HF中国镜像站 (and 500k spaces, 330k datasets, 50k papers). One new repository is created every 15 seconds. Congratulations all!
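As a quick sanity check on that rate, one repository every 15 seconds works out to:

```python
seconds_per_repo = 15
repos_per_day = 24 * 60 * 60 // seconds_per_repo   # 5,760 new repos per day
days_per_million = 1_000_000 / repos_per_day       # ~174 days per million repos
```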

thomwolf posted an update 1 day ago
We've kept pushing our Open-R1 project, an open initiative to replicate and extend the techniques behind DeepSeek-R1.

And even we were mind-blown by the results we got with this latest model we're releasing: ⚡️OlympicCoder (open-r1/OlympicCoder-7B and open-r1/OlympicCoder-32B)

It's beating Claude 3.7 on (competitive) programming, a domain where Anthropic has historically been really strong, and it's getting close to o1-mini/R1 on olympiad-level coding with just 7B parameters!

And the best part is that we're open-sourcing everything about it: the training dataset, the new IOI benchmark, and more, in our Open-R1 progress report #3: https://huggingface.co/blog/open-r1/update-3

Datasets we are releasing:
- open-r1/codeforces
- open-r1/codeforces-cots
- open-r1/ioi
- open-r1/ioi-test-cases
- open-r1/ioi-sample-solutions
- open-r1/ioi-cots
- open-r1/ioi-2024-model-solutions

burtenshaw posted an update 1 day ago
Here’s a notebook to make Gemma reason with GRPO & TRL. I made this whilst prepping the next unit of the reasoning course:

In this notebook I combine Google's model with some community tooling:

- First, I load the model from the HF中国镜像站 hub with transformers’s latest release for Gemma 3
- I use PEFT and bitsandbytes to get it running on Colab
- Then, I took Will Brown's processing and reward functions to make reasoning chains from GSM8K
- Finally, I used TRL’s GRPOTrainer to train the model
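The notebook's actual reward functions aren't reproduced here, but the kind of format and correctness rewards GRPO needs can be sketched with the stdlib alone, assuming a hypothetical `<reasoning>`/`<answer>` completion template in the spirit of Will Brown's GSM8K recipe:

```python
import re

# Pulls the numeric final answer out of a hypothetical <answer> tag
ANSWER_RE = re.compile(r"<answer>\s*(-?[\d,.]+)\s*</answer>", re.DOTALL)

def format_reward(completion):
    """Reward completions that follow the <reasoning>...<answer>... template."""
    pattern = r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>"
    return 0.5 if re.search(pattern, completion, re.DOTALL) else 0.0

def correctness_reward(completion, gold_answer):
    """Reward a correct final answer extracted from the <answer> tag."""
    m = ANSWER_RE.search(completion)
    if not m:
        return 0.0
    return 2.0 if m.group(1).replace(",", "") == gold_answer else 0.0

sample = "<reasoning>3 + 4 = 7</reasoning>\n<answer>7</answer>"
```

Functions with this shape (completion in, scalar reward out) are what you hand to TRL's GRPOTrainer; the tag names and reward magnitudes here are illustrative choices, not the notebook's exact values.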

Next step is to bring Unsloth AI in, then ship it in the reasoning course. Link to the notebook below.

https://colab.research.google.com/drive/1Vkl69ytCS3bvOtV9_stRETMthlQXR4wX?usp=sharing

clefourrier posted an update 1 day ago
The Gemma 3 family is out! I've been reading the tech report, and this section was really interesting to me from a methods/scientific fairness point of view.

Instead of doing over-hyped comparisons, they clearly state that **results are reported in a setup which is advantageous to their models**.
(Which everybody does, but people usually don't say.)

For a tech report, it makes a lot of sense to report model performance when the model is used optimally!
On leaderboards, on the other hand, the comparison will be apples to apples, but potentially unoptimal for a given model family (just as some users interact sub-optimally with models).

The report also contains a cool section (6) on training-data memorization rate! It's important to check whether your model will output the training data it has seen verbatim: always an issue for privacy/copyright/..., but also very much for evaluation!

Because if your model knows its evals by heart, you're not testing for generalization.