
December 25, 2025


Deep Work vs. Vibes

AI Masterclass Newsletter

It feels like we are moving faster than ever. You ask an LLM a hard question, it generates the code, you paste it, and it works.

But there is a dangerous gap opening up between getting the answer and understanding the solution.

Andrej Karpathy famously calls this "vibe coding": letting the AI write the code while you just manage the "vibes."

"But in a talk last week that is making waves, Netflix Staff Engineer Jake Nations warned that this isn't the future of coding; it's a trap. He argues that by choosing "easy" over "simple," we are walking blindly into an Infinite Software Crisis."

The Infinite Software Crisis

Here's Nations' argument: Every generation of developers has faced a "software crisis" where complexity exceeded human capacity to manage it. Structured programming fixed the first one. Object-oriented design fixed the next. Microservices, containers, cloud infrastructure—each solved the immediate problem.

And then created something bigger.

Now AI is doing the same thing, only faster.

When you build through conversation—iterating back and forth with an AI assistant, pivoting mid-stream, clarifying requirements on the fly—every twist and turn gets baked into your architecture. The code works. But the code also carries the confused fingerprints of that meandering conversation.

You've chosen easy over simple.

Rich Hickey made this distinction famous: easy means convenient, within reach. Simple means untangled, one thing doing one thing. They're not the same. And AI has become what Nations calls "the ultimate easy button"—an invitation to skip the hard work of actually understanding what you're building.

His prescription? A three-phase methodology:

1. Research: Map the existing system before touching anything.
2. Planning: Write detailed specs that compress intent (5 million tokens of code can fit in 2,000 words of human-readable specification).
3. Implementation: Generate code with clean context and a bounded scope.

The engineers who thrive in the age of infinite code generation won't be those who generate the fastest. They'll be those who know when a system is getting tangled.

Why Your AI Learning Isn't Sticking

Most people vibecode their education too. Skim a paper. Watch a tutorial. Ask Claude to explain attention. Nod along. Move on.

Impressions, not understanding.

Paul Graham nailed why this fails.

In his essay on the Maker's Schedule, he explains that real work (programming, writing, deep thinking) requires uninterrupted blocks of at least half a day. One interruption doesn't just cost you the interruption. It kills the ambitious thing you were about to attempt.

Learning works the same way. Understanding happens when you sit with confusion long enough that it becomes clarity. When you trace the gradient yourself instead of nodding at someone else's diagram.

But scrolling between tweets, tutorials, and podcasts? That's not learning. That's the illusion of learning. Protect your maker time.

A NeurIPS Paper That Should Make You Think

Speaking of understanding things deeply—here's a finding that upends a lot of what you might think you know about modern AI.

A paper at NeurIPS 2025 (it won Best Paper Runner-Up) asked a provocative question: Does Reinforcement Learning with Verifiable Rewards (RLVR) actually teach LLMs new reasoning abilities? Or does it just make them better at surfacing abilities they already had?

The answer matters. Because if RLVR genuinely expands reasoning capacity—like classic RL helped AlphaGo discover novel strategies—it would suggest a clear path toward ever-smarter models. But the research found something else entirely.

Using a metric called pass@k (where k is the number of sampling attempts), the researchers compared base models against their RL-trained counterparts. At low k, RL-trained models win. They're more likely to produce the right answer on the first try. But at high k? Base models catch up—and often surpass them.
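
If the metric is unfamiliar: pass@k is the probability that at least one of k sampled attempts is correct, and in practice it is estimated from a larger pool of samples per problem. Here is a minimal sketch of the standard unbiased estimator (the one introduced alongside HumanEval); the n = 64 and c = 5 figures in the example are invented for illustration, not taken from the paper.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k from n samples per problem, c of them correct.

    pass@k = 1 - C(n - c, k) / C(n, k), computed in a numerically stable way.
    """
    if n - c < k:
        return 1.0  # every size-k subset of the samples contains a correct one
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative numbers only: 64 samples for one problem, 5 graded correct.
print(pass_at_k(n=64, c=5, k=1))   # ≈ 0.078, i.e. c / n
print(pass_at_k(n=64, c=5, k=32))  # ≈ 0.97
```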

What this means: RLVR improves sampling efficiency, not reasoning capacity. The correct reasoning paths already exist in the base model's distribution. RL just makes them more likely to appear.

In other words: RL is a filter, not a teacher. It amplifies what's already there. It doesn't create something new. And in optimizing for known high-reward paths, it may actually narrow the model's exploration space.
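
A toy calculation makes the filter intuition concrete. The per-problem success probabilities below are invented for illustration (nothing here comes from the paper): the "RL" model concentrates probability on problems the reward signal already covered and loses the rest, so it wins at small k while the broader base model overtakes it as k grows.

```python
import numpy as np

# Hypothetical per-problem probabilities that a single sample solves the problem.
base_p = np.array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05])  # broad but unreliable
rl_p   = np.array([0.60, 0.60, 0.60, 0.00, 0.00, 0.00])  # sharpened, narrower support

def pass_at_k(p: np.ndarray, k: int) -> float:
    # P(at least one of k independent samples is correct), averaged over problems.
    return float(np.mean(1.0 - (1.0 - p) ** k))

for k in (1, 4, 16, 256):
    print(f"k={k:3d}  base={pass_at_k(base_p, k):.2f}  rl={pass_at_k(rl_p, k):.2f}")
```

Run it and the two curves cross in exactly the way the paper describes: the RL column leads at k = 1 and trails by k = 256, because no amount of resampling recovers the problems whose probability was filtered down to zero.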

Distillation, by contrast—learning from a stronger teacher model—can genuinely expand reasoning boundaries.

This doesn't mean RLVR is useless. Better sampling efficiency is valuable. But it recalibrates our expectations about what current RL methods can and can't do.

NeurIPS RLVR Research

What's New This Week

Edit Any Layer by Natural Language

Qwen-Image-Layered drops — and it's fully open source. Alibaba's Qwen team just released something designers will love: an AI model that decomposes any image into multiple editable RGBA layers.

Think of it as automated Photoshop layer separation—but intelligent. You give it a photo, it returns 3-10 physically isolated layers with proper transparency. Background, subjects, foreground elements—each on its own layer.

Key capabilities:

• Prompt-controlled layer count (coarse to fine-grained)
• Recursive decomposition (layers within layers, infinitely)
• Native RGBA output for true editability
• Apache 2.0 license (use it however you want)

For e-commerce, design, content creation—this collapses hours of manual masking into minutes.
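
The model's own inference API isn't shown here, but the value of native RGBA output is easy to demonstrate with plain Pillow: once each layer is a transparent PNG, you can edit one layer in isolation and recomposite the image without disturbing the others. A minimal sketch, assuming the decomposed layers were saved as layer_0.png through layer_2.png (hypothetical filenames), ordered back to front and sharing the same dimensions:

```python
from PIL import Image

# Hypothetical filenames: RGBA layers exported by the model, back to front.
layer_files = ["layer_0.png", "layer_1.png", "layer_2.png"]
layers = [Image.open(f).convert("RGBA") for f in layer_files]

# Edit one layer in isolation, e.g. swap the background for a flat studio grey,
# without touching the subject or foreground layers.
layers[0] = Image.new("RGBA", layers[0].size, (235, 235, 235, 255))

# Recomposite: alpha-blend the layers in order over a transparent canvas.
canvas = Image.new("RGBA", layers[0].size, (0, 0, 0, 0))
for layer in layers:
    canvas = Image.alpha_composite(canvas, layer)

canvas.save("recomposited.png")
```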

The Throughline

Four very different stories this week. One connecting thread:

Surface-level exposure isn't understanding.

Whether you're building software, learning AI fundamentals, or interpreting what RL actually does—the people who build durable skills are those who go deeper than the average tutorial. They protect their maker time. They resist vibecoding their way through. They ask why things work, not just how.

That's what we're building with AI Masterclass.

Cohort 2 is coming. If you're tired of content that sounds good but doesn't stick—this might be for you.
