Random Thoughts

The Uber Trap for AI

May 10, 2019. Uber prices its IPO at $45 a share, opens at $42, and closes at $41.57. The most anticipated consumer tech listing since Facebook is a dud. The product works, and the growth rate is healthy. Hundreds of millions of people use it. It has reshaped cities, labor markets, and the consumer expectation of what “getting somewhere” costs. And yet on the day it finally had to show its numbers to the public market, the market shrugged.

The reason was buried in the S-1, though it had been hiding in plain sight for years: billions of dollars of consumer discounts, driver incentives, referrals, and other payments were doing real work in holding the system together.

We are about to run the same experiment with OpenAI, except roughly ten times larger: OpenAI’s $850 billion valuation against Uber’s $80 billion at IPO.


The narrative

Every pre-IPO unicorn tells some version of the same story, and OpenAI’s version is almost perfectly faithful to Uber’s.

First: unit costs are about to collapse. Uber’s version was autonomy. The losses on each ride were temporary because the driver would eventually disappear. OpenAI tells the same story. Inference is expensive today, but newer chips, better models, and algorithmic efficiency will steadily drive the cost per token lower. Trust the asymptote.

Second: scale creates pricing power. Uber had the rider-driver flywheel. OpenAI has the data-and-scale flywheel. More users produce more interaction data, which improves the model, which attracts more users. Both stories arrive at the same destination: today’s subsidies become tomorrow’s moat.

Third: TAM expansion justifies the mark. Uber stopped being a ride-hailing company and became a mobility platform, then a logistics platform, moving anything from point A to point B. OpenAI tells a similar story: chat, then agents, then robotics, then AGI. Each story expands the addressable market just as the previous one becomes harder to sell.

Put the three together, and you have the bull case: costs are about to collapse, the winner will own the market, and the market itself is much larger than people think. Current unprofitability is not a flaw. It is the entry fee.

The question is whether the economics ever become as good as the narrative says.


Who is making money

In ride-hailing, the answer became obvious once you looked at the value chain from the other side. The car manufacturers sold the cars. Oil companies sold the fuel. Insurers priced the risk. Riders captured most of the consumer surplus through fares that were often below the true cost of the service. Uber sat in the middle, absorbing the losses that made everyone else’s economics work.

AI in 2026 has the same structure but a different cast.

The cleanest rents sit upstream, where the constraints are physical or regulatory. Think NVIDIA, ASML, TSMC, GEV, etc. These layers are capacity-constrained by permitting and physics. That is where margins are real.

Downstream, the picture is murkier. Microsoft funds OpenAI, which commits to Azure. Amazon and Google both fund Anthropic, which commits to AWS and GCP. NVIDIA invests in both the labs and the neoclouds that rent compute to them, and has committed billions to buy capacity back from those same neoclouds. Equity flows one way, compute contracts the other, and everyone books the result as revenue. It looks less like arm's-length commerce and more like a closed-loop booking its own internal transfers as external demand.

According to current projections, the four largest hyperscalers are about to invest roughly $630 billion in AI infrastructure in 2026. This is no longer Big Tech investing around the edges of its cash machine; it is dumping the entire cash machine itself into the buildout. What’s worse, the neoclouds are often running balance sheets collateralized by GPUs whose residual values depend on the same capex cycle continuing.

At the end of the chain, the user pays a subscription or API bill that may be well below the true cost of serving the product. The consumer surplus is real. The physical-layer margins are real. Much of what sits between them appears to be a loss absorber.


Disruption without capture

The strongest AI bull case today is not the consumer chatbot. It is the enterprise software stack.

The argument is easy to see. Salesforce, ServiceNow, Workday, and the rest of the seat-based software economy sit atop workflows that AI can plausibly automate, simplify, or replace. Forget the $20 subscription. The real prize is the SaaS industry.

The disruption is probable. The profit capture is not.

SaaS became a wonderful business for two reasons. First, the marginal cost was near zero. Once the software was built, the ten-thousandth seat cost almost nothing to serve. Second, switching costs were high. Customers invested years of labor into integrations, customization, reporting, workflows, training, and data models. Leaving was expensive.

AI-native software weakens both pillars. Inference has a real variable cost. Costs scale linearly with usage in a way that classic SaaS largely did not. That alone makes the economics look less like software and more like a utility.
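To make the contrast concrete, here is a minimal sketch of the two cost structures. All numbers are hypothetical assumptions chosen for illustration (a flat $20 subscription, an assumed near-zero SaaS serving cost, an assumed per-request inference cost); none are real figures from any company.

```python
# Hypothetical unit economics: classic SaaS vs AI-native software.
# Every number here is an illustrative assumption, not a real figure.

subscription = 20.00  # flat monthly price per seat (assumed)

# Classic SaaS: marginal cost per seat is near zero once the software exists,
# so the margin is essentially independent of how much the customer uses it.
saas_cost_per_seat = 0.50  # hosting, support (assumed)
print(f"SaaS margin at any usage level: ${subscription - saas_cost_per_seat:.2f}")

# AI-native software: every request incurs real inference cost,
# so total cost scales linearly with usage and heavy users can flip
# the margin negative under a flat price.
cost_per_request = 0.02  # assumed blended inference cost per request
for requests in (100, 500, 2000):
    margin = subscription - cost_per_request * requests
    print(f"{requests:>5} requests/month -> margin ${margin:7.2f}")
```

Under these assumed numbers, the heaviest user is served at a loss, which is the sense in which AI-native economics look more like a utility than like classic software.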

As for switching costs, the deepest lock-in was never just that the software was expensive to build. It came from integration into business processes, stored data, customized workflows, compliance, and operational risks associated with migration. Those frictions are still real.

Coding agents weaken part of that moat. They make it cheaper to build missing features and write integrations. That does not erase switching costs, but it does lower some implementation costs and reduce part of the protection incumbents used to enjoy.

At the same time, switching costs at the model layer are often much lower. Many customers can test models side-by-side and route traffic dynamically. As model quality converges, the model layer looks more vulnerable to competition than traditional enterprise software.

This matters beyond AI-native apps. Coding agents lower the cost of software production across the industry. They shift the supply curve for software to the right. When features become cheaper to build, pricing power across the sector comes under pressure.

That does not mean all software margins go to zero, but it does mean SaaS economics may weaken, while model companies sit in an even weaker position.

So the SaaS replacement story may be directionally correct, but economically wrong. AI can absolutely eat software. But the value may transfer mostly to customers, with some retained by infrastructure and distribution, rather than to the new entrants doing the eating. The bull case describes deflation, not necessarily a durable profit margin.


The Uber Trap

At this point, the obvious objection is that price wars do not last forever. Industries consolidate, and the survivors regain pricing power. Why not here?

Because the problem in AI is not just competition. It is the structure.

The first structural problem is that the margin profile looks weaker than the narrative requires. Pricing power is limited because customers can test substitutes quickly, and the cost of recreating software is falling. At the same time, falling inference cost is not a moat. Better chips and models diffuse across the field. If everyone’s cost curve falls together, the savings are competed away. Memory is one example: DRAM became indispensable and its cost declined exponentially, but those savings were passed on to customers. Even differentiated hardware, such as CPUs, rarely sustains software-like margins. AI may remain technically impressive and commercially important while still maturing into a business the market values more like semiconductors or utilities than like SaaS.

The second problem is that there is no obvious escape. The only imaginable way out is a decisive capability lead that lets one firm charge a premium. But pursuing that lead requires enormous, recurring capex, and rivals are chasing the same outcome on the same broad hardware curve. The current evidence seems to validate this: each of the frontier labs is within weeks of the others in model capability. Spending is necessary to stay in the game, but necessity is not the same as return on capital.

Nor does consolidation solve the deeper issue. Airlines consolidated and remained weak businesses for decades. Ride-hailing consolidated into a duopoly and still matured into something closer to transportation margins than software margins. AI’s consolidation has started as well. Multiple prominent startups have been acquired by Big Tech. That may improve the economics from catastrophic to mediocre. However, it does not transform them into the high-margin software business that the current valuations seem to imply.

And that is the Uber trap. AI companies may not be able to price like classic SaaS, maintain cost advantages, or spend their way into a monopoly. Even if the technology works and adoption explodes, the profit margin may remain smaller than the story requires.


The IPO reckoning

Before an S-1, a company controls the numbers. It decides what to emphasize, what to adjust away, and what to bury in footnotes. After an S-1, the numbers start controlling the narrative. Public investors get to compare the story against audited reality.

That was the Uber experience. The IPO did not kill the company. It forced the market to stop valuing a story and start valuing a business. Since the IPO, Uber has turned profitable and grown revenue at a healthy pace, but it has only returned 56%, well below the S&P 500’s 140%-plus over the same period. You do not need AI to fail to get a catastrophic repricing of the labs. You only need AI to succeed with Uber’s margin structure.

I do not know what numbers OpenAI or Anthropic will eventually disclose. But the broad pattern in frontier AI looks similar: aggressive framing, unclear separation between gross demand and net economics, generous launch pricing followed by rationing, and capital structures that seem to keep losses out of view.

None of that proves the business model fails. It does suggest that the eventual disclosure regime matters a great deal. If the S-1s show that the core products are structurally less profitable than private-market marks imply, the repricing will not stop at the labs. It will move through the whole AI economy.


AGI as the exit

The people inside the trap are not stupid. They can see the trap. They keep running into it anyway because stopping is worse. If the spending slows, the valuation logic breaks immediately.

That is where AGI enters the story. AGI functions as an escape hatch from the trap. It allows the narrative to leap over ordinary economic questions by claiming the transition is so large that old rules no longer apply. However, scale does not abolish economics. Even a transformative technology still has to answer basic questions: who captures the surplus, what keeps competitors from driving prices toward cost, and what justifies the capital invested in building it. When AGI arrives, whatever that means, a commodity with infinite demand is still a commodity. The best cost structure wins a low-margin business at scale, and the rest compete prices to the floor.

Ride-hailing already showed the pattern. The technology can work. Consumers can love it. Entire industries can be reorganized around it. And yet the companies closest to the product may still turn out to be disappointing businesses.

The Uber Trap is not that AI fails. It is that AI succeeds, and the success is captured by everyone except the AI model providers, who are currently valued as if they will own the profit pool. The models will improve. Adoption will rise. Software will get rebuilt around them. And margins may still compress all the way down to a floor far lower than today’s valuations imply.


The re-rate

When the market finally decides that the model makers’ profit margin is smaller than expected, the impact will be widespread.

Sky-high supplier margins are not a law of physics. Customers are willing to pay because they believe they are racing toward a vast future profit pool. If the eventual public disclosure reveals that the market is structurally low-margin, buyers will stop paying a premium for upstream supply. The result is not the end of AI. It is a repricing of the entire stack, from the labs at the center to the upstream suppliers and downstream applications.