Adina Yakefu

AdinaY

AI & ML interests

None yet

Organizations

HF中国镜像站, HF中国镜像站 Chinese Localization, Huggingface Projects, Blog-explorers, ICCV2023, Open LLM Leaderboard, huggingPartyParis, Qwen, Journalists on HF中国镜像站, Women on HF中国镜像站, Social Post Explorers, Chinese LLMs on HF中国镜像站, HF中国镜像站 for Legal

AdinaY's activity

reacted to clem's post with 🚀🤗 about 3 hours ago
We just crossed 1,500,000 public models on HF中国镜像站 (and 500k spaces, 330k datasets, 50k papers). One new repository is created every 15 seconds. Congratulations all!
posted an update about 19 hours ago
posted an update 1 day ago
upvoted 2 articles 2 days ago

Welcome Gemma 3: Google's all new multimodal, multilingual, long context open LLM

reacted to clefourrier's post with 🚀 2 days ago
The Gemma 3 family is out! I was reading the tech report, and this section was really interesting to me from a methods/scientific-fairness point of view.

Instead of making over-hyped comparisons, they clearly state that **results are reported in a setup which is advantageous to their models**.
(Which everybody does, but people usually don't say so.)

For a tech report, it makes a lot of sense to report model performance when the model is used optimally!
On leaderboards, on the other hand, comparisons will be apples to apples, but potentially suboptimal for a given model family (just as some users interact sub-optimally with models).

It also contains a cool section (6) on training-data memorization rates! It's important to see whether your model will output the training data it has seen verbatim: always an issue for privacy/copyright/... but also very much for evaluation!

Because if your model knows its evals by heart, you're not testing for generalization.
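
For concreteness, here is a minimal sketch of what such a verbatim-memorization check could look like. This is an illustration, not the exact protocol from the Gemma 3 report: the model id, prefix/suffix lengths, and the greedy-decoding criterion are all assumptions (Gemma 3 weights are also gated on the Hub, so any causal LM checkpoint can stand in).

```python
# Minimal memorization-check sketch (illustrative, not Gemma's exact protocol):
# prompt the model with a prefix from a training document and test whether
# greedy decoding reproduces the true continuation token-for-token.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-3-1b-it"  # assumption: any causal-LM checkpoint works here

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def is_memorized(doc: str, prefix_tokens: int = 50, suffix_tokens: int = 50) -> bool:
    """Return True if greedy decoding reproduces the document's suffix verbatim."""
    ids = tokenizer(doc, return_tensors="pt").input_ids[0]
    if len(ids) < prefix_tokens + suffix_tokens:
        return False  # document too short for this prefix/suffix split
    prefix = ids[:prefix_tokens].unsqueeze(0)
    target = ids[prefix_tokens : prefix_tokens + suffix_tokens]
    out = model.generate(prefix, max_new_tokens=suffix_tokens, do_sample=False)
    # generate() returns prompt + new tokens for causal LMs, so slice past the prefix
    generated = out[0, prefix_tokens : prefix_tokens + suffix_tokens]
    return bool((generated == target).all())

# Memorization rate over a sample of training documents (hypothetical `train_docs`):
# rate = sum(is_memorized(d) for d in train_docs) / len(train_docs)
```

The fraction of sampled documents that pass this check is the memorization rate, which is exactly why it matters for evaluation: any eval example that passes is being recalled, not solved.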
reacted to thomwolf's post with 🚀 2 days ago
We've kept pushing our Open-R1 project, an open initiative to replicate and extend the techniques behind DeepSeek-R1.

And even we were mind-blown by the results we got with this latest model we're releasing: ⚡️OlympicCoder (open-r1/OlympicCoder-7B and open-r1/OlympicCoder-32B)

It's beating Claude 3.7 on (competitive) programming, a domain where Anthropic has historically been really strong, and it's getting close to o1-mini/R1 on olympiad-level coding with just 7B parameters!

And the best part is that we're open-sourcing everything about it: the training datasets, the new IOI benchmark, and more, all detailed in our Open-R1 progress report #3: https://huggingface.co/blog/open-r1/update-3

Datasets we are releasing:
- open-r1/codeforces
- open-r1/codeforces-cots
- open-r1/ioi
- open-r1/ioi-test-cases
- open-r1/ioi-sample-solutions
- open-r1/ioi-cots
- open-r1/ioi-2024-model-solutions
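
To make the release concrete, here is a minimal sketch of trying the 7B model and browsing one of the datasets above with the standard transformers/datasets APIs. The split name and record fields are assumptions; check the dataset cards on the Hub for the actual schema.

```python
# Minimal sketch: generate with OlympicCoder-7B and inspect the IOI benchmark.
from datasets import load_dataset
from transformers import pipeline

# Generate a solution attempt for a short prompt (greedy decoding, kept small here;
# a 7B model needs a GPU with enough memory to run comfortably).
coder = pipeline("text-generation", model="open-r1/OlympicCoder-7B")
prompt = "Write a Python function that returns the n-th Fibonacci number."
print(coder(prompt, max_new_tokens=256, do_sample=False)[0]["generated_text"])

# Browse the IOI benchmark problems (split name is an assumption; see the dataset card).
ioi = load_dataset("open-r1/ioi", split="train")
print(ioi[0])
```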