
Narrative Liquidity: How Capitalism Tells Fables to Build the Future

By Jordi Visser
Published on October 20, 2025

Or: Why Andrej Karpathy’s Realism and AI’s Hype Machine Are Both Essential

“Homo sapiens rule the world because it is the only animal that can believe in things that exist purely in its own imagination.” — Yuval Noah Harari, Sapiens

Capitalism tells fables about the future so it can afford to build it. This isn’t criticism; it’s description. Every major technological leap arrives wrapped in exaggeration. The steam engine was sold as perpetual motion; the internet was marketed as pure democracy. AI is simply the latest myth that pays its own bills.

As AI stocks continue their bubble-like ascent, every news item seems to shock people like a branch snapping in The Blair Witch Project. This time, the panic centered on Alibaba’s Aegaeon system and a new Dwarkesh Patel interview with former OpenAI and Tesla engineer Andrej Karpathy. I’ll focus this piece on that interview, using quotes from Yuval Noah Harari to frame what it reveals.

It was a good interview, for what it’s worth. During it, Karpathy said “we’re not building animals, we’re summoning ghosts” and projected AGI a decade away. X posts lit up and treated it as a retreat, as if one of AI’s most respected engineers had cooled on progress. But Karpathy wasn’t being cynical. He was being precise. He was explaining that intelligence isn’t conjured; it’s constructed through architecture, memory, and iteration.

The real story isn’t that Karpathy sees AI as having slowed down. It’s that he exposed the gap between narrative velocity and engineering reality, and that gap is where innovation actually gets financed.

Hype as Infrastructure

“History is often shaped by small groups of people who can weave convincing stories about the future.” — Harari, Homo Deus

Hype isn’t dishonesty. It’s narrative liquidity: the conversion of abstract technical possibility into mobilized capital and coordinated human attention. Without the breathless funding rounds, without the promises of imminent superintelligence, the research that makes real progress possible doesn’t get funded. The exaggeration isn’t a bug in the system; it’s the feature that allows the system to exist at all.

This is the evolutionary purpose of hype: it transforms pure mathematics into public belief, and belief is what moves resources. As Harari would say, “Shared fictions hold societies together.” In that sense, every AGI press release is a civic performance, a way of synchronizing human attention around the next collective frontier.

Karpathy’s warning that AGI remains a decade away isn’t slowing progress. It’s puncturing a marketing narrative while the actual infrastructure gets built behind it. He knows that architecture, not advertising, builds cognition. But he also knows, whether he says it or not, that advertising funds architecture. The irony is perfect: the very hype he’s correcting is what pays for the labs where he and others do the patient, disciplined work of actually building intelligence.

The Fable That Finances Itself

“The most important thing to know about humans is that they are storytelling animals.” — Harari, Sapiens

We live in an exponential world narrated by linear minds. Our species evolved to compress chaos into stories that make the unpredictable feel predictable. Markets and media thrive on this as they translate complexity into headlines, capital, and clicks. So when the narrative says “AGI is imminent,” capital flows.
When the narrative says “progress has stalled,” funding hesitates. The story isn’t separate from the technology; it’s the economic engine that powers it.

This is why Harari’s insight about religion and money applies perfectly to AI: both are shared fictions that become real through collective belief. AI is the newest of those fictions. We are manifesting intelligence by believing in it hard enough to fund it into existence. Every venture deck, every think-piece, every doom-tweet feeds the recursive loop. The machines don’t just learn from data; they learn from our conviction. Our hype trains their compute budgets.

Exponential Blindness and Engineering Patience

“The human brain has not evolved for the pace of change it itself has unleashed.” — Harari, Homo Deus

Humans are biologically linear thinkers. We measure progress in decades, not doublings. Our brains are tuned for the rhythm of seasons, not the velocity of GPUs. So when Karpathy describes a ten-year buildout for true agentic AI, the media calls it “slowing.” But exponential growth always looks flat from within. In its early phase, compounding hides behind invisibility, the quiet stretch of geometric growth before perception catches up. That’s where AI is today. We are in the deceptive middle of the curve, mistaking the silence before the surge for stagnation.

Karpathy’s patience isn’t pessimism; it’s temporal literacy. He’s fluent in exponents, even if headlines are not. He’s describing the invisible interval between excitement and embodiment, the stretch where scaffolding hardens into structure. The media hears hesitation. In truth, it’s the sound of maturity. Exponential change is silent while it’s compounding. The only thing slowing is our ability to narrate it.
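To make the “flat from within” point concrete, here is a minimal numeric sketch in Python. The one-year doubling time and ten-year horizon are illustrative assumptions, not figures from the interview; the only claim is the shape of the curve.

```python
# A minimal sketch of "exponential blindness": a process that doubles
# every year looks almost flat for most of its run when judged against
# its final value. The doubling time and horizon are hypothetical.

HORIZON = 10          # years until the curve "arrives" (assumption)
DOUBLING_TIME = 1.0   # capability doubles every year (assumption)

final = 2 ** (HORIZON / DOUBLING_TIME)

for year in range(HORIZON + 1):
    exponential = 2 ** (year / DOUBLING_TIME) / final  # share of final value
    linear = year / HORIZON                            # what a linear mind expects
    print(f"year {year:2d}: exponential {exponential:6.1%} vs linear {linear:6.1%}")

# year  5: exponential   3.1% vs linear  50.0%
# The "deceptive middle": half the calendar has passed, but roughly 97%
# of the growth is still to come, so the curve reads as stagnation.
```

At the halfway mark the doubling process has covered only about 3% of its final value, exactly where a linear narrator expects 50%, which is why the middle of an exponential is so easy to mistake for a plateau.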
The Vantage Point Problem

“The real difference between us and chimpanzees is the mysterious glue that enables millions of humans to cooperate effectively… That glue is made of stories.” — Harari, Sapiens

Karpathy’s vision of AGI, rooted in architecture, memory, and continual learning, isn’t unique. It’s part of a lineage shared with Demis Hassabis at DeepMind, Yann LeCun at Meta, Elon Musk, and the adaptive systems of Geoffrey Hinton. All of them describe general intelligence as something that must remember, perceive, and reason: a process, not a threshold.

But others take the next step. Sam Altman, Ilya Sutskever, and Shane Legg argue that once AI can recursively improve itself, designing and training its successors, progress will accelerate beyond measurable timeframes. At that point, the curve bends too fast for prediction. The clock itself breaks.

Karpathy’s stance differs not in substance but in proximity. He’s no longer inside a frontier-model lab where recursive self-improvement is being chased at industrial scale. He does not have access to the latest models, currently delayed by a lack of compute and power. He speaks from the perspective of an engineer watching the ecosystem rather than directing it, a vantage point that favors caution over theater. His tone reflects the distance between those building the core models and those reflecting on the infrastructure still required to sustain them.

That distance isn’t disillusionment; it’s perspective. In exponential systems, even being one layer removed makes the slope look slower than it is. This is why the same technology can generate both Altman’s “AGI by 2027” and Karpathy’s “decade of infrastructure work.” Both are describing the same curve from different positions on it. The difference isn’t in the math; it’s in the proximity to the acceleration.

The Economic Function of Mythology

“In the future, those who tell the best stories will control the world.” — Harari, 21 Lessons for the 21st Century

Here’s what Karpathy understands that the hype cycle obscures: both the exaggeration and the correction are necessary. The fable (“AGI is two years away!”) generates the funding rounds. The correction (“actually, it’s a decade of hard infrastructure work”) keeps the engineers honest and the timelines realistic. Together, they create an oscillating system that moves forward.

Capitalism doesn’t build the future through accuracy; it builds through belief that’s confident enough to deploy capital but realistic enough to iterate. The hype creates the conditions; the engineering delivers the results. The narrative of AI progress has become self-fulfilling not because the story is true, but because the story mobilizes the resources.