January 8, 2026
AI MasterClass
Zero → Production. Fast.
Don't Be Smart.

Smart thinkers are paralyzed right now.
While the rest of the industry is debating AGI timelines, safety guardrails, and optimal learning paths, a different group is quietly winning. I call them the Foolish Doers.
There's a quote making the rounds: "A foolish doer will outperform a smart thinker every time." It sounds like the kind of motivational fluff you'd see on a gym wall. But looking at the updates from this week, it might be the only strategy that actually works.
This week's newsletter is an ode to three teams who stopped thinking and started doing.
NVIDIA: Physics is the New Code
Most people would have called NVIDIA's latest move foolish.
For years, the "smart consensus" was that Large Language Models were the frontier. Text was king. But while the industry obsessed over chatbots, NVIDIA asked a different question: "What if we stopped teaching AI to read and started teaching it to drive?"
They released Alpamayo-R1 this week. It doesn't just react to traffic; it understands it.
They built a Chain of Causation dataset, combined it with vision-language models, and reduced off-road incidents by 35%. But the technical specs aren't the point. The point is Jensen Huang's framing: "Physics is the new code."
The result: cars navigating complex urban environments at 99 milliseconds of latency, all because they ignored the text-obsessed crowd and focused on the physical world.
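If you want a feel for what "Chain of Causation" data might look like, here's a toy sketch. To be clear: every name and field below is ours, invented for illustration. NVIDIA hasn't published this interface, and Alpamayo-R1's real pipeline is certainly richer than this. But the shape of the idea, pairing what the camera sees with why it matters and what it forces the car to do, then training a vision-language model to emit that reasoning before its action, looks roughly like:

```python
from dataclasses import dataclass

@dataclass
class CausationStep:
    """One link in a hypothetical chain-of-causation record."""
    observation: str   # what the sensors saw
    cause: str         # why it happened
    consequence: str   # what it forces the planner to do

@dataclass
class DrivingSample:
    """Hypothetical dataset row: camera frames, a labeled causal
    chain, and the action the expert driver actually took."""
    frames: list[str]
    chain: list[CausationStep]
    action: str

def to_training_text(sample: DrivingSample) -> str:
    """Flatten the chain into the reasoning text a vision-language
    model would learn to emit before committing to an action."""
    lines = [
        f"{s.observation} -> because {s.cause} -> so {s.consequence}"
        for s in sample.chain
    ]
    return "\n".join(lines) + f"\nACTION: {sample.action}"

sample = DrivingSample(
    frames=["cam_front_t0.jpg", "cam_front_t1.jpg"],
    chain=[CausationStep("pedestrian steps off the curb",
                         "the crosswalk signal turned white",
                         "the ego vehicle must yield")],
    action="brake_to_stop",
)
print(to_training_text(sample))
```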
ByteDance: The "Ugly" Ship
While Meta was reportedly spending $2 billion to acquire Manus for their AI ambitions, ByteDance did something very un-fashionable.
They released Seed1.8—a model for real-world agency. But they didn't do it with a polished keynote. They didn't wait for perfection. They just shipped.
They put up a GitHub repo. They released "cookbooks" for code agents. And most surprisingly, they included a "Call for Bad Cases" section. They literally asked the internet to find where their model fails.

This is the opposite of the PR-perfect launch. It's messy. It's vulnerable. It's the work of people who are more interested in learning the truth than looking smart.
DeepSeek: Breaking the Sacred Cow
Since 2015, there has been a "Sacred Cow" in deep learning: the Residual Connection. It's in everything—CNNs, Transformers, BERT, GPT. For ten years, the smart thinkers agreed: It works. Don't touch it.
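For anyone who hasn't stared at one lately, the sacred cow is a single line of arithmetic: a block's output is its input plus whatever the block computes. A minimal PyTorch sketch (a textbook illustration, nobody's production code):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """The 2015-era sacred cow: output = input + f(input)."""

    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip path carries x through untouched, so gradients
        # always have an identity route back to earlier layers.
        return x + self.f(x)
```

That `x +` on the last line is the entire trick. It gives gradients an identity path through arbitrarily deep networks, which is exactly why nobody wanted to touch it.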
DeepSeek touched it. And at first, they broke it.
Their first attempt to improve it was a disaster. Signals got amplified by 3000x. Exploding gradients everywhere. A smart thinker would have looked at the error logs, concluded the original approach was optimal, and quit.
DeepSeek kept going. They pushed through the failure to create Manifold-Constrained Hyper-Connections.
They took that 3000x explosion and tamed it down to 1.6x. The result? A model that beats the baselines on benchmarks like GSM8K. They found a breakthrough only because they were willing to break the standard first.
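To see why the first attempt exploded, picture a toy version of the idea. Suppose a learnable weight scales the skip path at every layer, which is the rough intuition behind hyper-connections. A gain even slightly above 1, compounded over 80 layers, turns into a multi-thousand-x amplifier. Projecting the stream back onto a bounded manifold kills the compounding. This sketch is ours, not DeepSeek's, and the constraint here (renormalizing onto a sphere) is a crude stand-in for whatever the paper actually does:

```python
import torch

def residual_stream_norms(depth: int, gain: float, constrain: bool) -> list[float]:
    """Toy model: a fixed weight scales the skip path at every
    layer. Tracks how the activation norm grows with depth."""
    torch.manual_seed(0)
    x = torch.randn(512)
    norms = []
    for _ in range(depth):
        branch = 0.1 * torch.randn(512)  # stand-in for f(x)
        x = gain * x + branch            # weighted skip + branch
        if constrain:
            # Project the stream back onto a bounded manifold
            # (here: a sphere of radius sqrt(512)). A crude
            # stand-in for the real constraint, whatever it is.
            x = x * (512 ** 0.5 / x.norm())
        norms.append(x.norm().item())
    return norms

print(residual_stream_norms(80, 1.1, constrain=False)[-1])  # tens of thousands
print(residual_stream_norms(80, 1.1, constrain=True)[-1])   # stays near 22.6
```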

The Real Pattern
Here is what connects NVIDIA, ByteDance, and DeepSeek.
They didn't ask for permission. They didn't wait for consensus. They didn't spend years building the "perfect" solution in stealth mode. They stayed upwind.
Paul Graham describes this perfectly: "An early version of a new project will sometimes be dismissed as a toy. It's a good sign when people do this. That means it has everything a new idea needs except scale, and that tends to follow."
ByteDance shipped a toy. DeepSeek broke a rule. NVIDIA ignored the trend. The foolish doers aren't waiting. The smart thinkers are still making plans.
What This Means for You
I talk to a lot of people who want to get into AI. The most common question I get is: "Where should I start?"
And the most common mistake I see is people treating this question as something that needs to be answered before they begin. They want the optimal roadmap. They want to understand the full landscape before writing a line of code. They want to be smart about it.
Don't be smart. Be foolish.
Your first AI project will probably be a toy. Ship it anyway. Your first agent will probably break. Debug it and learn. Your first automation will probably feel pointless. Build it and see what it teaches you.
The gap between you and the people doing interesting work in AI isn't knowledge—it's velocity.
Speaking of Doing...
Cohort 3 of AI MasterClass is coming together. If you've been reading along and thinking "I should actually learn this stuff properly"—this might be your moment. We aren't building another course where you watch videos and collect certificates.
We are building things. Breaking things. Shipping things. The foolish doers among you will know what to do.
More details coming soon. In the meantime, keep building.
See you soon.
-AI MasterClass