AI's Global Domination: Hype, Challenges, and Underlying Transformations

If Marc Andreessen’s 2011 declaration that “software eats the world” marked a turning point in tech history, then today, fourteen years later, Benedict Evans, former a16z partner and renowned technology analyst, has issued a powerful follow-up with his latest report, AI Eats the World. This 90-page deep dive, the third in a biannual series launched in late 2024, seeks to cut through the frenzy and skepticism surrounding generative AI, offering a nuanced analysis of the technological, economic, and cultural forces reshaping the global landscape.

Key insights from the report include:

- Platform Shift Repeats: Generative AI is triggering another major platform transition, a pattern that has historically recurred every 10 to 15 years, though its final form remains unclear.
- Unprecedented Capital Surge: In 2025, Microsoft, AWS, Google (Alphabet), and Meta are projected to spend $400 billion on capital expenditures, nearly double their 2024 levels and more than the entire global telecom industry’s annual investment of roughly $300 billion.
- Model Convergence: Top large language models now score so similarly on benchmarks that differences are measured in single-digit percentages, raising the question of whether models are becoming commodities.
- Low User Engagement: Despite OpenAI’s claim of 800 million weekly active users for ChatGPT, surveys show only about 10% of U.S. users interact with AI chatbots daily; most use them only occasionally.
- Slow Enterprise Adoption: Around 40% of CIOs say they won’t deploy LLM projects until at least 2026. Current success cases are limited to coding assistance, marketing, and customer support, what Evans calls the “absorption” phase.
- Reimagining Recommendation Systems: AI may shift recommendations from relevance-based to intent-driven, potentially transforming the $1 trillion global advertising market.
- Historical Parallels: Drawing on past shifts, such as the decline of elevator operators after the 1950s and the automation of office work, the report warns that when automation succeeds, it becomes invisible infrastructure and is no longer labeled “AI.”

The report opens with a central thesis: we are in the midst of another platform transition, a cycle that has already produced mainframes, personal computers, the web, and smartphones. Each time, early leaders fade as new paradigms emerge. Microsoft, once dominant in operating systems, saw its share of all computing devices fall from near 100% to under 20% as smartphones took over. Apple, once a PC pioneer, was overtaken by IBM-compatible machines. The same pattern holds in search, social media, and mobile: first-mover advantage is often short-lived.

Yet, as in past transitions, the future remains uncertain. In the early days of the internet, failed ideas like AOL, Yahoo portals, Flash, WAP, and J2ME showed how hard it is to predict what will emerge. Today’s possibilities range from chat-based interfaces to AI agents, voice-first systems, the Model Context Protocol (MCP), wearables, or entirely new UI paradigms. No one knows for sure.

The scale of investment, however, is undeniable. In 2025, the four major cloud providers are expected to spend $400 billion, nearly double their initial projections. This capital is primarily funneled into data center construction. In the U.S., the value of new data center construction (excluding servers) has now surpassed that of office buildings, a first in history. Schneider Electric’s February 2025 survey found that power supply is now the top bottleneck for data center development, ahead of chip availability, fiber access, and land.

Nvidia has emerged as the central beneficiary. Its quarterly revenue rose from under $10 billion in early 2023 to nearly $60 billion by 2025, surpassing Intel at its peak.
Evans compares this to the rise of Sun Microsystems, though competition from Chinese chip designers and the cloud providers’ own silicon is growing. Even so, demand outpaces supply, with TSMC’s capacity struggling to keep up.

The financial strain is evident even among the giants. Cloud companies have seen strong cash-flow growth since the pandemic, but capital expenditures are rising faster. Capital leases, a form of non-cash financing, now make up a growing share of spending. Some analysts suggest Oracle’s cloud capital expenditure may exceed 100% of its cloud revenue, a level that defies traditional business logic.

OpenAI’s strategy exemplifies the scale of ambition. In October 2025, it announced plans for over 30 gigawatts of AI infrastructure, with a total investment of $1.4 trillion. Its stated vision of adding one gigawatt of capacity per week is equivalent to rebuilding two-thirds of the world’s current data center capacity every year. At roughly $20 billion per gigawatt, that implies on the order of $1 trillion in annual spending. To fund this, OpenAI is building complex financing structures involving Nvidia, Oracle, SoftBank, and Middle Eastern investors, an arrangement Evans calls “circular revenue”: OpenAI uses Nvidia’s cash to buy Nvidia chips, while Nvidia’s revenue flows from Microsoft and Google, OpenAI’s competitors and partners. The model echoes the financial engineering of the dot-com bubble.

But the real question is: what do we get for it? On the model side, performance has improved dramatically. New models launch weekly, and Chinese and open-source communities are rapidly catching up. Yet benchmarks are saturating. Third-party evaluations from ArtificialAnalysis and LMArena show that top models, including Claude, Gemini, and GPT, now differ by only a few percentage points. This raises a fundamental question: if models are converging, where is the moat? There is no clear network effect or technical barrier. Models may be becoming commodities, like compute power in the cloud. User data confirms the disconnect.
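The OpenAI build-out figures above can be sanity-checked with a quick back-of-envelope calculation. The roughly $20-billion-per-gigawatt cost is an assumption used for illustration; the other numbers come from the report summary itself.

```python
# Back-of-envelope check on the OpenAI infrastructure figures.
announced_gw = 30              # announced AI infrastructure build-out (GW)
announced_investment = 1.4e12  # stated total investment: $1.4 trillion

# Cost per gigawatt implied by the announcement itself:
implied_cost_per_gw = announced_investment / announced_gw
print(f"Implied cost per GW: ${implied_cost_per_gw / 1e9:.0f}B")  # → $47B

# The stated ambition is one new gigawatt per week. At an assumed
# ~$20B per GW, annual spending lands near the $1T figure in the text:
assumed_cost_per_gw = 20e9
annual_spend = assumed_cost_per_gw * 52
print(f"Annual spend at $20B/GW: ${annual_spend / 1e12:.2f}T")  # → $1.04T
```

Either way the numbers are read, the implied run rate is on the order of a trillion dollars a year, which is why the financing structures described above matter.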
While OpenAI claims 800 million weekly active users, only about 5% pay. Usage frequency is more telling: Deloitte’s 2025 survey found that only 10% of U.S. users interact with AI chatbots daily, and over half use them monthly or less. The pattern holds across multiple studies.

Why the low engagement? Evans offers three explanations: few use cases are obvious and easy to integrate; only certain roles (programmers, marketers, writers) can readily adapt their workflows; and most users need AI wrapped into specific tools, not a generic chat window. As Steve Jobs said, “People don’t know what they want until you show it to them.” The experience must come first.

In enterprises, adoption is slow but growing. The most successful applications are in coding, marketing, customer support, and automation, what Evans calls the “absorption” phase. AI coding tools are being hailed as the “new AWS,” lowering the barrier to software development. Y Combinator’s Garry Tan reported that 95% of YC startups’ code is now AI-generated, meaning founders no longer need large teams or massive funding.

Consulting firms are leading the charge. Accenture’s new generative AI contracts grew from near zero in February 2023 to over $1.8 billion by August 2025. Palantir’s enterprise revenue surged in 2025, driven by its AI platform. But McKinsey’s 2025 survey found that among companies using generative AI, fewer than 5% have fully deployed LLMs. Morgan Stanley’s CIO survey showed that only 25% had deployed LLMs by late 2025, another 25% planned to in the second half, and nearly 40% said they would not deploy until 2026 or later.

These delays stem from classic tech-adoption challenges: security, privacy, IP, hallucinations, legal risk, legacy systems, and data integration. But AI adds a new layer: how to handle errors. LLMs produce factual inaccuracies and unpredictable outputs, behaviors inherent to the technology. Can these be automated? Is human review efficient?
How much traditional software must be layered on top?

Evans invokes Jevons’ Paradox: increased efficiency often leads to higher total consumption, not lower. In the Industrial Revolution, steam power did not reduce labor; it created new industries. So, when we have “infinite interns,” what becomes possible?

The most profound shift may be in recommendation systems. Today’s systems rely on user data and network effects, recommending by correlation. AI could shift from correlation to intent. Buying tape? A system that understands you are moving might recommend light bulbs, smoke detectors, or home insurance. That is a move from “what’s similar” to “what you need.”

The value question follows: when you search or shop, what do you really want? A tool? A curated experience? A story? The future may split into utility (answers, logistics) and experience (curation, delight). Morioka Shoten, a Tokyo bookstore that displays a single book each week, represents a different value proposition than Amazon.

This leads to OpenAI’s strategic dilemma. It is pursuing everything: infrastructure deals with Oracle, Nvidia, Intel, Broadcom, and AMD; e-commerce; ads; browsers; video; social; hardware with Jony Ive; even biotech. It is trying to bundle and unbundle at the same time. But why? If ChatGPT already has 800 million users, why build a browser or a video app? Is it because the product form is not yet clear? Or because models are becoming commodities, forcing OpenAI to extend upstream and downstream to capture value?

The startup ecosystem reflects the same uncertainty. Y Combinator reports that AI startups made up nearly half of its 2024 and 2025 batches. These companies are betting on product, UX, vertical data, and distribution, not just models. Yet consumer behavior suggests a long road ahead: Bain’s 2025 survey found that despite high AI awareness, most people still default to traditional search engines, even with Gemini built into Google’s own products.

Evans reminds us that technological transitions are rarely linear.
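The shift from relevance to intent described above can be sketched as a toy recommender. Every product, intent, and mapping here is invented purely for illustration; real systems would infer intent statistically rather than from a lookup table.

```python
# Toy contrast between relevance-based and intent-based recommendation.

# "What's similar": items that merely co-occur with the purchase.
RELATED = {
    "packing tape": ["bubble wrap", "moving boxes", "box cutter"],
}

# "What you need": infer why the user is buying, then recommend for the goal.
LIKELY_INTENT = {
    "packing tape": "moving house",
}
NEEDS_FOR_INTENT = {
    "moving house": ["light bulbs", "smoke detectors", "home insurance"],
}

def recommend(item: str, mode: str = "relevance") -> list[str]:
    """Return recommendations by surface similarity or by inferred intent."""
    if mode == "relevance":
        return RELATED.get(item, [])
    intent = LIKELY_INTENT.get(item)
    return NEEDS_FOR_INTENT.get(intent, [])

print(recommend("packing tape", "relevance"))
# → ['bubble wrap', 'moving boxes', 'box cutter']
print(recommend("packing tape", "intent"))
# → ['light bulbs', 'smoke detectors', 'home insurance']
```

The two modes answer different questions: the first optimizes for what correlates with the basket, the second for the goal behind it, which is exactly the distinction Evans argues could reshape the advertising market.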
In 2010, Wired declared “The Web is Dead.” In 2025, NPR reported that publishers face “extinction-level” threats from AI search. But the web did not die. Publishing did not vanish. It evolved.

Evans cites a 1956 U.S. Congressional report on automation, which discussed metalworking, chemicals, electronics, transportation, and office work, industries that sound eerily familiar today. Automatic elevators cut the number of U.S. elevator operators from 95,000 in 1950 to under 10,000 by 1990. Automation did not eliminate work; it redefined it. Similarly, barcodes and databases, introduced in the 1970s, allowed U.S. supermarkets to grow from about 5,000 SKUs to 50,000 by 2005, creating entirely new retail models.

So what comes next? Evans frames the challenge in three layers: absorption (automating known tasks), innovation (creating new products and services), and disruption (redefining entire industries). Most progress is still in absorption, but the real transformation lies in innovation and disruption. What can LLMs unbundle? What is currently bundled? The internet unbundled products and media from physical distribution. What can AI unbundle now? The answer may lie in shifting from relevance to intent, from “what’s related” to “what you need.” That shift could redefine marketing, retail, and even personal assistance.

The report ends with a reflection: when AI succeeds, it stops being called AI. We don’t call search engines “AI”; we just call them search. We don’t call elevators “AI”; we call them elevators. So when generative AI becomes seamless, what will we call it? Perhaps “software.” Perhaps “assistant.” Perhaps something we haven’t imagined yet. The question isn’t whether AI will eat the world, but how, when, and what kind of world it leaves behind.
