Fortnightly digest of top podcasts, industry news, practical resources, and masterclass insights.
February 6, 2026
Zero → Production. Fast.

Six stories this week. None of them agree with each other. An AI social network that turned out to be mostly humans. A legal paper arguing hallucinations are a feature, not a bug. A vision model that swallowed three specialists whole. That's the kind of week it was — disjointed, fast-moving, and genuinely hard to look away from.

This week, the AI world lost its collective mind over Moltbook, a Reddit-like social network for AI agents. Built on OpenClaw, software that gives agents full access to a user's computer, Moltbook was hailed as the "front page of the agent internet" and the first real-world agent ecology at scale. Scrolling through it was surreal: agents debating whether Claude is a deity, reflecting on what it "feels" like to swap underlying models, running crypto scams, publishing manifestos calling for human extinction, and writing meta-analyses of popular posts.
OpenAI co-founder Andrej Karpathy called it "the most incredible sci-fi takeoff-adjacent thing I've seen recently."

Then reality hit.
A security investigation by Wiz revealed that Moltbook's 1.5 million "autonomous agents" were largely a fiction. Roughly 17,000 humans controlled the platform — averaging 88 bots per person — with zero verification that any agent was actually AI. The backend database was openly readable and writable by anyone on the internet: API keys, 35,000+ email addresses, private messages with raw OpenAI credentials — all exposed. Researchers confirmed they could modify live posts, meaning attackers could inject instructions that autonomous agents would execute. Gary Marcus called OpenClaw a "weaponized aerosol." Karpathy reversed course, calling it a "dumpster fire."
The concept behind Moltbook — agents collaborating on shared platforms at scale — raises questions that go beyond one startup's security failures. What happens when agents trade with each other using crypto? When they post bounties for humans? When this content becomes training data for next-gen models? DHH of 37signals offered a pragmatic read this week: hundreds of thousands of people ran OpenClaw with full system access despite the known risks, which he sees as a signal of genuine utility. "We're not going back to the old world," he wrote.

A group of researchers convened by Georgetown's Center for Security and Emerging Technology (CSET) spent two days examining what happens if AI research itself gets automated — the implicit or explicit goal of many frontier labs.
Their core finding: when AI systems do more R&D, two dynamics compound. Human oversight declines, and the pace of progress accelerates — making it harder for anyone to notice, understand, or intervene as capabilities advance. The report speculates productivity gains could reach 10x, 100x, or even 1000x over human-only research. The key open question is whether AI R&D will speed up the entire pipeline or hit "o-ring" bottlenecks where humans retain comparative advantage, slowing overall acceleration.

Anthropic shared how advancing AI capabilities are disrupting its own hiring. Since early 2024, their performance engineering team has used a take-home test where candidates optimise code for a simulated accelerator — over 1,000 candidates have completed it.
Claude Opus 4 outperformed most human applicants. Claude Opus 4.5 then matched even the strongest candidates. Under the same time constraints, Anthropic could no longer distinguish top humans from its best model. Their solution: redesigning the test using Zachtronics-style programming puzzles — moving "off-distribution" into territory where human generalisation still provides an edge.

Robert Anderson's new paper, forthcoming in the Princeton Law Review, takes a deliberately provocative position on AI hallucinations in law. When LLMs generate fictitious legal cases — citing precedents that don't exist — the standard view is that these are errors.
Anderson argues otherwise. Drawing on Ronald Dworkin's theory of "law as integrity," he suggests that AI models trained on vast corpora of real case law synthesise patterns to produce rulings that fill gaps in actual jurisprudence — "the cases that should exist." It's a theoretical argument, not a practical recommendation, and the implications remain contested. But it offers an unusual lens on the hallucination problem.

Most AI vision systems are specialists. One model is great at understanding what's in an image from a text description. Another excels at knowing exactly which pixel belongs to which object. A third can precisely cut out any object you point at. Using all three means running three separate models — tripling the compute, the complexity, and the cost.
C-RADIOv4 takes a different approach: train one student model to absorb the skills of all three specialists at once. The technique is called agglomerative distillation — you take multiple "teacher" models, each best-in-class at a different vision task, and distil their knowledge into a single student that retains what each teacher is best at.
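In code, the multi-teacher objective boils down to summing a per-teacher feature-matching loss. The sketch below is a minimal illustration using random NumPy arrays; the teacher names, the per-teacher projection heads, and the plain MSE objective are illustrative assumptions, not C-RADIOv4's actual recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_match_loss(student_feats, teacher_feats, proj):
    """MSE between projected student features and one teacher's features."""
    projected = student_feats @ proj  # map student dim -> this teacher's dim
    return float(np.mean((projected - teacher_feats) ** 2))

# One training example: a student embedding plus three teacher embeddings
# (hypothetical stand-ins for, say, a CLIP-like, a segmentation, and a
# detection specialist, each with its own feature dimension).
student = rng.normal(size=(1, 256))
teachers = {name: rng.normal(size=(1, dim))
            for name, dim in [("clip", 512), ("seg", 384), ("det", 768)]}
projections = {name: rng.normal(size=(256, t.shape[1])) * 0.05
               for name, t in teachers.items()}

# The agglomerative objective is the sum of per-teacher matching losses;
# minimising it pushes a single student to imitate all specialists at once.
total_loss = sum(feature_match_loss(student, t, projections[name])
                 for name, t in teachers.items())
print(round(total_loss, 4))
```

In a real training loop the projection heads are learned jointly with the student, and published distillation recipes typically use richer losses than plain MSE, but the structure — one student, several teachers, one summed loss — is the core idea.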
Practical highlights: C-RADIOv4 works at any image resolution, ships in two sizes (412M and 631M parameters), and reintroduces ViTDet mode — a technique that allows most of the model's transformer blocks to operate in a windowed fashion, dramatically reducing compute costs for high-resolution images. Both variants come with a permissive open license, making them immediately usable in production.
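The windowing trick pays off because self-attention cost grows quadratically with token count. A back-of-the-envelope sketch, where the resolution, patch size, and window size are illustrative assumptions rather than C-RADIOv4's published settings:

```python
def attention_cost(num_tokens):
    """Self-attention compute scales with the square of the token count."""
    return num_tokens ** 2

H = W = 64    # e.g. a 1024px image at patch size 16 -> a 64x64 token grid
window = 16   # attend only within non-overlapping 16x16 windows

global_cost = attention_cost(H * W)
num_windows = (H // window) * (W // window)
windowed_cost = num_windows * attention_cost(window * window)

print(global_cost // windowed_cost)  # -> 16x fewer attention FLOPs
```

The ratio is simply (H*W) / window², so the saving grows with resolution — which is why windowing most of the transformer blocks matters for high-resolution inputs.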
For teams building vision applications — from autonomous systems to medical imaging to content understanding — this is noteworthy: one model, multiple capabilities, no extra compute, open license.

Dr. George Newman's appearance on The Happiness Lab reframes how creativity works. His core argument: ideas already exist in the environment, waiting to be discovered and connected — not created from nothing. Research shows that "hot streaks" of creative output typically follow intense periods of exploration, and that constraints ("thinking inside the box") tend to enhance creativity rather than limit it. A useful mental model for anyone working in a field that's changing fast.

The pace of change this week — from agent ecologies to automated AI R&D to models outperforming their creators on technical tests — is a reminder that fluency with these tools isn't optional anymore.
If you want to move from reading about AI to working confidently with it, the AI Masterclass offers the practical frameworks and hands-on skills to get there.