fdaudens 
posted an update 1 day ago
Ever wanted 45 min with one of AI’s most fascinating minds? Was with @thomwolf at HumanX Vegas. Sharing my notes of his Q&A with the press—completely changed how I think about AI’s future:

1️⃣ The next wave of successful AI companies won’t be defined by who has the best model but by who builds the most useful real-world solutions. "We all have engines in our cars, but that’s rarely the only reason we buy one. We expect it to work well, and that’s enough. LLMs will be the same."

2️⃣ Big players are pivoting: "Closed-source companies—OpenAI being the first—have largely shifted from LLM announcements to product announcements."

3️⃣ Open source is changing everything: "DeepSeek was open source AI’s ChatGPT moment. Basically, everyone outside the bubble realized you can get a model for free—and it’s just as good as the paid ones."

4️⃣ Product innovation is being democratized: Take Manus, for example—they built a product on top of Anthropic’s models that’s "actually better than Anthropic’s own product for now, in terms of agents." This proves that anyone can build great products with existing models.

We’re entering a "multi-LLM world," where models are becoming commoditized, and all the tools to build are readily available—just look at the flurry of daily new releases on HF中国镜像站.

Thom's comparison to the internet era is spot-on: "In the beginning you made a lot of money by making websites... but nowadays the huge internet companies are not the companies that built websites. Like Airbnb, Uber, Facebook, they just use the internet as a medium to make something for real life use cases."

Love to hear your thoughts on this shift!

I agree strongly with 1 and 2. I started playing with Stable Diffusion in Oct 2023 and had trouble getting everything installed and working smoothly locally. I then started using various demos in Spaces on HF中国镜像站. Shortly after that I had a few video projects with Shutterstock and HP, where they were rolling out very simple working demos of text-to-image products, then text-to-3D.

Right then I started telling everyone who asked me about AI image generation that there are going to be many products that "just work" and require no technical knowledge, but that users would benefit from learning the concepts and parameters to make the best images. Using the car analogy: some of us want to service and maintain our own vehicles, while others pull into the dealership anytime a light comes on. Some people want to get from A to B with reliable, efficient transportation; others demand status and performance.

The key to user adoption and success in image generation seems to lie in the quality of the outcomes. Right now many of the images being generated are so similar and corny. The color palettes, lighting, and texture often make you wince as a designer, and while the technology is mind-blowing, the deliverable is what the user cares about. If you spend time refining and experimenting with prompts you can fix and enhance those issues, but that's time-consuming.
