Research Notes

The Superpower of Self-Rewriting Systems

Published on February 24, 2026
By Jordi Visser

Because my father trained me in the art of horse handicapping and taught me to avoid betting favorites, I’ve always considered myself a contrarian when it comes to trading and investing. I don’t like writing about whatever is already the center of attention. But as I sit on a train to Boston to visit my son, I find myself needing to write another paper on AI’s disruption to software and enterprises. This one is about AI torque, sparked by a new post from Andrej Karpathy. The progress in non-institutional AI agent development is now compounding at a pace that is difficult for traditional institutions to absorb.

I remember the first time I took my kids out in my Tesla back in 2013. We were driving on the highway. I glanced in the rearview mirror, smiled at them in the back seat, and pressed the accelerator. The torque hit instantly. They screamed, not because we were going fast, but because of how fast we got there. There was no buildup. No engine noise. No warning. Just immediate thrust. It felt like the time constant of the system had changed.

That is the exact feeling I’ve had over the past several months watching AI agent architectures evolve. This isn’t about the destination of AGI; it’s about AI torque: the nonlinear acceleration of capability that leaves slower organizational structures at a structural disadvantage.

The 100 MPH Entrepreneur vs. the 40 MPH Enterprise

We are entering a world with two very different speed limits. On one side, you have the High-Speed Entrepreneur. They’re driving at 100 mph. They have no legacy debt, no adoption friction, and no meeting-heavy culture. They’re building with “digital employees”: persistent agentic systems capable of modifying workflows, generating code, executing integrations, and documenting changes with minimal human intervention.
The direction of travel here is clear: AI-native builders will capture outsized advantage.

On the other side, you have the 40 MPH Enterprise. They aren’t slow because they lack talent; they’re slow because they’re towing a massive weight: procurement cycles, security reviews, legacy SaaS sprawl, and unstructured data locked in silos the organization isn’t allowed to touch. Saying SaaS survives because enterprises won’t move faster isn’t a case for legacy SaaS; it’s an acknowledgment that enterprises are structurally constrained by a “too much to lose” mentality and governance models designed for a lower-torque environment.

The modern enterprise is like a driver stopped on the shoulder of a high-speed highway, attempting to merge into the right lane while the 100 mph traffic of the AI-native world passes by. Enterprises don’t optimize for speed; they optimize for permissioning and blame minimization. While teams are busy “syncing,” the digital employees of competitors are compounding throughput. In a world of compounding torque, the distance between 100 mph and 40 mph doesn’t just grow; it widens structurally. The slower vehicle doesn’t disappear overnight; over time, it becomes marginal. And in markets, marginalization compounds into outcomes.

The Age of Bespoke, Real-Time Software

The traditional SaaS model is like a restaurant with a fixed menu, where you wait for the product roadmap to give you the features you need. In the 40-mph world, you buy software and then wait for permission to use it. In the 100-mph world, software is bespoke and disposable. The new meta isn’t buying a platform; it’s using a “Claw”: a persistent, self-modifying runtime. If an entrepreneur needs a specific integration for a sales pipeline or a custom hedging workflow, they don’t wait for a dev cycle. They give their digital employee a “Skill.” The agent then modifies the underlying code and tooling in real time, tests it, and redeploys.
The distinction is fundamental: enterprises need buy-in before decisions. AI-native operators use agents that execute decisions and then refactor the architecture to support them.

Karpathy’s Point: Skills Over Config

Karpathy highlighted a practical version of this self-rewriting capability: rather than piling on config files and brittle “if-then-else” logic, you build a system where skills instruct the agent to modify the actual code. His example was the idea of a command like “/add-telegram” that integrates Telegram support by evolving the repo itself, keeping the core maximally forkable while allowing it to grow into exotic configurations without becoming a config mess.

Instead of building software like a machine with a thousand tiny switches (config files) and a spaghetti mess of “if this, then that” rules, you build it like a skilled apprentice who can actually change the machine itself when you ask for something new.

Think of the old way like a big office building where every change requires filling out forms and moving furniture by hand. You want a new conference room? You don’t rebuild anything; you just keep adding signs, temporary walls, and extension cords. Over time, the building becomes structurally fragile. Karpathy is describing the opposite: a building with a renovation crew built in. You say, “Add a conference room,” and the crew updates the blueprint and rebuilds that section cleanly, so the building stays orderly instead of turning into a maze.

Digital Employees, Local Runtimes, Physical Torque

We’re also seeing a faster ecosystem emerge where high-speed entrepreneurs are effectively “ghosting” their digital employees into physical hardware: personal, always-on runtimes that live on a local network. Karpathy even joked (via an Apple Store anecdote) that these small devices are “selling like hotcakes,” which captures a growing pull toward local, hardware-anchored AI setups. This isn’t a “software seat” you rent.
It’s a private digital assistant that lives on your network, can be permissioned to your files, and can be iterated on daily. The aesthetic is part of the point: a physical box anchored to a personal runtime that is auditable, direct, and close to the data.

The Speed-Limit Gap: Comparative Framework

| Capability | The 40 MPH Enterprise | The 100 MPH Entrepreneur |
| --- | --- | --- |
| Codebase | 400k+ lines of accreted bloat | ~4,000 lines of auditable logic |
| Decision Speed | Committee & consensus | Agentic autonomy |
| Software Model | Fixed menu (SaaS) | Bespoke / self-rewriting |
| Infrastructure | Centralized cloud silos | Local “Claw” runtimes |

Why Smaller Wins: Auditability Is the Unlock

The world is moving faster than even the builders expected. Karpathy’s recent posts suggest his own “decade of agents” timeline has compressed materially. This is where the size difference matters. A sprawling, vibe-coded monster creates a larger attack surface and more “unknown unknowns.” By contrast, a compact, containerized runtime is something a human can reason about and an AI can fully comprehend. The goal isn’t just speed; it’s speed with auditability.

The Real Moat

To fully harness this torque, we have to address the vulnerabilities inherent in the early phase of agent development. In the 100-mph lane, self-rewriting isn’t reckless; it’s disciplined. The best operators mitigate risk by choosing auditable, containerized, local-first setups that increase control. Meanwhile, many enterprises compound their 40-mph drag by treating governance as the product, conducting audits that take so long the underlying frontier has already moved. The moat isn’t SaaS vs. not-SaaS; it’s the ability to iterate safely at high velocity.

This isn’t a technology shift. It’s a time-compression shift. This self-rewriting superpower democratizes innovation, turning every entrepreneur into a one-person development team.
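To make the earlier “/add-telegram” idea from the Karpathy section concrete, here is a purely illustrative sketch of a skill that evolves the source itself rather than adding another config switch. The `notify` function and `send_telegram` stub are hypothetical; nothing here is the real Telegram Bot API.

```python
# Hedged sketch of "skills over config": a /add-telegram-style skill rewrites
# the repo's code so the feature lives in the source itself, instead of
# hiding behind an ENABLE_TELEGRAM flag and an if-branch.

SOURCE = '''\
def notify(msg):
    print(msg)  # console only
'''

def skill_add_telegram(source: str) -> str:
    """Evolve the code: add a send_telegram helper and call it from notify."""
    helper = (
        "def send_telegram(msg):\n"
        "    pass  # stub: a real skill would wire up the Telegram Bot API\n\n"
    )
    return helper + source.replace(
        "    print(msg)  # console only",
        "    print(msg)\n    send_telegram(msg)",
    )

evolved = skill_add_telegram(SOURCE)
print("def send_telegram" in evolved)  # the repo itself grew the capability
```

The design point is the one Karpathy makes: the unmodified core stays small and maximally forkable, and each skill is a clean, reviewable rewrite instead of one more switch in an ever-growing config surface.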
For asset managers, torque translates into edge through compression: faster research loops, faster scenario generation, and faster workflow iteration. But the edge won’t come from letting agents “trade.” It comes from letting agents compress the full research and implementation cycle, while humans remain the