January 23, 2026
AI MasterClass
Zero → Production. Fast.
The Canary Just Died

Mathematics was supposed to be the last fortress.
This week, a wall fell.
Erdős Problem #728.
Paul Erdős was the giant of 20th-century mathematics, with more published papers than any other mathematician in history. When he died in 1996, he left behind hundreds of unsolved problems. The Mount Everests of number theory. Simple to state, impossible to climb.
This week, one crumbled. Not because a brilliant mathematician had a breakthrough. An AI did it.
GPT-5.2 Pro, combined with a logic engine called Aristotle, solved a factorial divisibility problem that had been open for decades. The AI navigated an ambiguity in Erdős's original statement, reduced it to a binomial divisibility question, and proved a logarithmic gap phenomenon using Kummer's Theorem.
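For reference, Kummer's Theorem is the classical result named here: the exact power of a prime p dividing a binomial coefficient equals the number of carries when you add the two indices in base p. In symbols, with v_p denoting the exponent of p in a number's prime factorization:

    $$ v_p\binom{m+n}{m} = \#\{\text{carries when adding } m \text{ and } n \text{ in base } p\} $$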

Terence Tao—arguably the greatest living mathematician—confirmed it. Not a database lookup. An original proof. No prior literature found.
The kicker: the proof was verified in Lean, a proof assistant where the compiler doubles as a proof checker. If the code compiles, the math is correct. No hallucinations. No vibes. Machine-checked truth.
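To make "machine-checked" concrete, here's a toy Lean 4 snippet. It has nothing to do with the actual Erdős #728 formalization; it just illustrates the principle that if these lines compile, the kernel has verified the claims.

    -- A divisibility fact, proved by exhibiting the witness 4 (since 24 = 6 * 4).
    theorem six_dvd_twentyfour : 6 ∣ 24 := ⟨4, rfl⟩

    -- A general fact about natural numbers, checked by the kernel, not by vibes.
    theorem add_comm_nat (a b : Nat) : a + b = b + a := Nat.add_comm a b

    -- A false claim such as `7 ∣ 24 := ⟨3, rfl⟩` simply would not compile.

You can't bluff the checker: a wrong proof is a build error.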
We just watched System 2 reasoning happen in silicon.
What Everyone's Missing
The viral take is "AI solves 50-year-old problem." Great headline. Wrong story.
Tao pushed back on the framing. These aren't problems that resisted all human efforts; they're problems nobody had seriously tried. Low-hanging fruit on a tall tree.
Then there's this stat:
GPT-5.2 scores 77% on structured, competition-level math. On open-ended research requiring genuine insight? 25%.
That 52-point gap is everything.
The AI didn't get smarter. It got wrapped in a system—GPT plus Aristotle plus Lean verification plus human prompting. The model proposed. The logic engine checked. The verifier confirmed. No single piece solved it. The orchestration did.
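If you want the shape of that orchestration in code, here is a minimal sketch. Every name in it is a hypothetical stand-in, not the actual GPT, Aristotle, or Lean tooling; it only shows the control loop the paragraph describes.

    # Conceptual sketch of the propose -> check -> verify loop described above.
    # All callables are hypothetical placeholders supplied by the caller.
    from typing import Callable, Optional, Tuple

    def orchestrate(
        propose: Callable[[str, str], str],        # model: (problem, feedback) -> candidate proof
        check: Callable[[str], Tuple[bool, str]],  # logic engine: candidate -> (plausible?, feedback)
        verify: Callable[[str], bool],             # formal verifier: does the candidate machine-check?
        problem: str,
        max_rounds: int = 10,
    ) -> Optional[str]:
        feedback = ""
        for _ in range(max_rounds):
            candidate = propose(problem, feedback)   # the model proposes
            plausible, feedback = check(candidate)   # the logic engine checks
            if plausible and verify(candidate):      # the verifier confirms
                return candidate                     # only machine-checked output escapes the loop
        return None                                  # no verified proof: fail honestly, not confidently

No single function here is smart. The loop is.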
LLMs Are Saturating. Applications Are Not.
Here's my honest take:
The core LLM is approaching a ceiling. We're scaling compute, but returns are flattening.
The action is in the application layer. Memory. Tools. Verification. Multi-agent orchestration. Wrapping pattern-matching engines in systems that compensate for their weaknesses.
Math and coding fall first because they're checkable. The AI verifies its own work. Iterates. Repairs errors. That's not a limitation—it's a roadmap. Structured domains with clear feedback loops fall one by one. Unstructured domains where you can't verify truth? Those walls hold.
Yann LeCun has been saying it for years: LLMs lack causal world models. They can't answer "what happens if I do something no one's ever done?" They interpolate brilliantly. They don't extrapolate. Ptolemy's epicycles predicted planetary motion beautifully—and were completely wrong. LLMs are very good at epicycles.
The Industrial Reality
The AI labs aren't just building better models. They're building infrastructure.
Reports suggest Microsoft and OpenAI are working on a $100 billion compute cluster called Stargate, targeting 2028. Electricity demands requiring national-level production increases.
Industrial mobilization we haven't seen since World War II. Happening while most people debate whether ChatGPT is "really" intelligent.
The compute isn't for bigger models. It's for systems. Agents that run for hours, search millions of strategies, verify their own work, iterate until something compiles.
Erdős #728 is proof of concept. The sonic boom is coming.

The Magic Isn't Leaving
A developer recently said software has lost its "magic." You tell the AI what you want and it happens. You're just a monkey pushing a button.
I see it differently.
The magic is moving up a level. Instead of debugging edge cases, you pose the problems. You direct this alien intelligence toward things that matter.
When Kevin Barreto worked with GPT-5.2 to crack Erdős #728, it was collaboration—human intuition, machine rigor. That's the future. More interesting than doing everything ourselves.
Now Stop Reading This
Dopamine from information is a dangerous drug. Your life changes when you stop consuming and start doing.
Close this newsletter. Apply one thing you learned this week. Write one prompt. Build one tool. Solve one problem that's been sitting there.
The wall fell in mathematics. Where's the wall in your field?
What are you going to do about it?
That's what we dig into in the AI MasterClass: not hype, but how these systems actually work and how to build with them. Join us if you want the fundamentals.
But only after you've built something first.
See you soon.
-AI MasterClass