
The OpenClaw Effect: How Agentic Software is Forcing the $90 Trillion AI Factory Transition

Published on March 9, 2026
By Jordi Visser

During geopolitical events that bring uncertainty and corrections to the markets, it is a great time to reassess secular trends. As was the case during the Liberation Day fears, don't lose sight of AI certainty. Regardless of how long this event disrupts markets and swings the inflation-recession pendulum back and forth in the daily commentary, AI will continue to move forward. This week, I want to use recent news and a podcast to return to a theme I wrote about in late December that is still in its early stages.

In my December 29th paper, The Nvidia-Groq Deal: Why AI's Next Frontier Requires Architectural Revolution Over Moore's Law, I argued that the next phase of AI would not be decided simply by who could buy the most transistors. It would be decided by who could move data through memory, power, and latency constraints more efficiently. The Groq event was not a full acquisition. It was a non-exclusive licensing agreement, along with Jonathan Ross, Sunny Madra, and other Groq team members joining Nvidia. That distinction matters because, at the time, it sent a message about AI and chips. Nvidia was not swallowing a startup to remove a competitor; it was absorbing inference-first IP and talent because inference efficiency had become strategic.

The core of the December thesis still holds. At the edge, AI runs into hard physical limits faster than the market expected. You cannot miniaturize a cloud data center into a smartphone, a car dashboard, or an industrial sensor and expect the economics to work. Power budgets are tighter, latency is visible to the user, and every trip to off-chip memory carries a real cost in watts, heat, and dollars (a rough sketch below puts numbers on this).

What has changed since December is not the physics. What has changed is where the market now sees those same constraints. The memory wall that first looked like an edge-computing problem is now showing up as the organizing principle for the next generation of AI infrastructure.

That is why Jensen Huang's CES keynote, where he highlighted the upcoming launch of the Vera Rubin platform, mattered more than a normal product cycle. Rubin was presented not as a better chip, but as a rack-scale system built from six tightly coupled components, with Nvidia claiming up to a 10x reduction in inference token cost relative to Blackwell. The important point for investors is not the marketing multiple. It is the architectural admission embedded in the product: AI is no longer scaling on the back of a single processor. It is scaling through extreme co-design across compute, networking, memory, storage, and software, because the unit of competition is becoming the AI factory, not the standalone GPU.

Nvidia made the second half of that admission explicit when it introduced NVLink Fusion. For the past two years, the market has tried to frame AI infrastructure as a zero-sum contest between general-purpose GPUs and custom ASICs. NVLink Fusion says that framing is obsolete. Nvidia is now explicitly enabling hyperscalers and partners to connect custom compute to Nvidia rack-scale systems, and the first named ecosystem partners include Marvell, MediaTek, Synopsys, and Cadence. In other words, the winning architecture is no longer monolithic. It is heterogeneous.
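To make the memory-wall point concrete, here is a back-of-envelope sketch. The energy-per-operation figures are rough, widely cited 45nm-era academic estimates (on the order of those in Horowitz's ISSCC 2014 talk), and the workload, a hypothetical 1-billion-parameter, 8-bit model on a phone-class device, is my own illustrative assumption, not any vendor's specification:

```python
# Back-of-envelope: why off-chip memory traffic, not arithmetic, sets the
# power budget for edge inference. Energy figures are rough, widely cited
# academic estimates (~45nm era); they are assumptions, not vendor specs.

PJ = 1e-12  # picojoule in joules

E_MAC_32B  = 3.7 * PJ    # ~energy for one 32-bit float multiply (on-chip)
E_SRAM_32B = 5.0 * PJ    # ~energy to read 32 bits from small on-chip SRAM
E_DRAM_32B = 640.0 * PJ  # ~energy to read 32 bits from off-chip DRAM

print(f"DRAM read vs. compute: {E_DRAM_32B / E_MAC_32B:.0f}x more energy")

# Hypothetical edge workload: 1B-parameter model, 8-bit weights, 20 tokens/s,
# with every weight streamed from off-chip DRAM on each generated token.
params          = 1e9
bytes_per_w     = 1               # 8-bit quantized weights
tokens_per_s    = 20
dram_j_per_byte = E_DRAM_32B / 4  # 32 bits = 4 bytes

bytes_per_sec = params * bytes_per_w * tokens_per_s
watts_dram    = bytes_per_sec * dram_j_per_byte

print(f"Weight traffic: {bytes_per_sec / 1e9:.0f} GB/s")
print(f"Power for weight movement alone: {watts_dram:.1f} W")
# Roughly 3 W before any arithmetic happens, i.e. most of a smartphone
# SoC's sustainable thermal budget consumed purely by data movement.
```

Even at these rough numbers, simply moving weights dwarfs the cost of the arithmetic itself, which is why architectures that keep data on-chip, or move it optically, change the economics rather than just the benchmarks.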
GPUs will remain critical for frontier training and fast-changing inference, but stable, high-volume workloads are increasingly likely to migrate toward custom silicon that can deliver better economics per query.

The market spent two years obsessing over who could manufacture the fastest chips, but the smartest capital is now aggressively buying the connective tissue required to link these heterogeneous components. Nvidia's recent $2 billion direct equity investment in Coherent is a structural admission that pushing electrical signals through copper is hitting severe power and heat limits. To make rack-scale systems like Vera Rubin work, electrical interconnects must give way to optical ones. At the facility scale, Meta's multi-year, up to $6 billion agreement with Corning to secure optical fiber exclusively for its AI data centers highlights the same macro-level networking wall. You cannot realize the economics of the heterogeneous AI factory if your architecture is choked by the power and latency costs of copper wires and insufficient network bandwidth.

This is where the software story and the hardware story finally meet. The demand accelerant is not "AI" in the abstract. It is agentic AI.

The clearest signal of this shift just came from Jerry Murdock, the notoriously press-shy co-founder of Insight Partners, a growth equity giant with over $90 billion in AUM. He is also a member of the Board of Trustees of both The Santa Fe Institute and The Aspen Institute. In his first-ever long-form interview on Harry Stebbings' 20VC podcast late last month (in an episode titled The SaaS Apocalypse), Murdock argued that the next wave of software will be driven by agents acting continuously rather than humans prompting intermittently. When a founding architect of the SaaS economy goes on the record to warn that autonomous agents will turn traditional systems of record into valueless databases, infrastructure investors need to listen, whether they agree with him or not.

That distinction matters because it changes the denominator in the infrastructure model. Traditional copilots scale with human attention. Agents scale with the number of deployed workflows and the number of actions each workflow executes (a rough sketch below makes the arithmetic concrete).

Jerry specifically mentioned the OpenClaw phenomenon and how it should be read through that lens. Its popularity, more than 100,000 GitHub stars and 2 million visitors in a single week, was not proof of monetization, but it was clear evidence of latent demand for autonomous software that can operate persistently. If you don't believe Jerry, Jensen Huang said last week, "OpenClaw is probably the single most important release of software, probably ever."

Once inference demand starts to behave like industrial throughput instead of occasional user traffic, the economics change fast. Some workloads will always need GPU flexibility. But repetitive, well-defined tasks, such as ranking, retrieval, fraud screening, code review, call center triage, enterprise search, and workflow orchestration, do not need the same cost structure as frontier model development.

That is why the custom silicon story has moved from white paper to earnings release. Broadcom said last week that it now has line of sight to AI chip revenue above $100 billion in 2027. Marvell validated this exact timeline on its March 5 earnings call: CEO Matt Murphy explicitly guided that Marvell's custom silicon business is on track to grow over 20% this year and to double next year as new hyperscaler programs hit high-volume production.
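Here is the denominator shift in sketch form. Every number below is a purely hypothetical assumption chosen for illustration; none of it comes from Murdock's interview or from any vendor disclosure:

```python
# Back-of-envelope: how agents change the denominator of inference demand.
# All quantities are illustrative assumptions, not measurements.

TOKENS_PER_CALL = 2_000  # assumed average tokens per model call

# Copilot model: demand scales with human attention.
users           = 10_000
prompts_per_day = 30     # a heavy interactive user
copilot_tokens  = users * prompts_per_day * TOKENS_PER_CALL

# Agent model: demand scales with deployed workflows and actions per run.
workflows       = 10_000  # same org, one agentic workflow per seat
runs_per_day    = 96      # runs every 15 minutes, around the clock
actions_per_run = 12      # plan, retrieve, call tools, verify, retry
agent_tokens    = workflows * runs_per_day * actions_per_run * TOKENS_PER_CALL

print(f"Copilot demand: {copilot_tokens / 1e9:.1f}B tokens/day")
print(f"Agent demand:   {agent_tokens / 1e9:.1f}B tokens/day")
print(f"Multiplier:     {agent_tokens / copilot_tokens:.0f}x")
```

The point is not the specific multiplier, which depends entirely on the assumed inputs. The point is that agent demand is decoupled from the number of humans awake and typing, which is exactly the industrial-throughput behavior described above.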
Murphy also validated the optical networking bottleneck, highlighting that Marvell's electro-optics portfolio is growing at a 50%-plus rate this year. Because of this accelerating demand for both custom compute and the connective tissue that links it, Marvell lifted its corporate outlook, projecting that total revenue will grow more than 30% to approach $11 billion in fiscal 2027. Third-party forecasts are moving in the same direction: Omdia sees the broader AI data center chip market reaching $286 billion by 2030, while Counterpoint expects AI server compute ASIC shipments to triple by 2027.

The foundry and packaging layer is what keeps this from becoming just another semiconductor storytelling exercise. TSMC said advanced packaging was slightly over 10% of revenue last year and that 10% to 20% of its 2026 capital budget will go toward advanced packaging, testing, mask-making, and related capacity. That is not a side business anymore; it is becoming central to the AI stack. January 2026 revenue was up 36.8% year over year, and TS