OpenAI’s release of GPT-5 feels like the end of something.
It isn’t the end of the Generative AI bubble. Things can, and still will, get way more out of hand. But it seems like the end of what we might call “naive AI futurism.”
Sam Altman has long been the world’s most skillful pied piper of naive AI futurism. To hear Altman tell it, the present is only ever prologue. We are living through times of exponential change. Scale is all we need. With enough chips, enough power, and enough data, we will soon have machines that can solve all of physics and usher in a radically different future.
The power of the naive AI futurist story is its simplicity. Sam Altman has deployed it to turn a sketchy, money-losing nonprofit AI lab into a ~$500 billion (but still money-losing) private company, all by promising investors that their money will unlock superintelligence that defines the next millennium.
Every model release, every benchmark success, every new product announcement, functions as evidence of the velocity of change. It doesn’t really matter how people are using Sora or GPT-4 today. It doesn’t matter that companies are spending billions on productivity gains that fail to materialize. It doesn’t matter that the hallucination problem hasn’t been solved, or that chatbots are sending users into delusional spirals. What matters is that the latest model is better than the previous one, and that people seem increasingly attached to these products. “Just imagine if this pace of change keeps up,” the naive AI futurist tells us. “Four years ago, ChatGPT seemed impossible. Who knows what will be possible in another four years…”
Naive AI futurism is foundational to the AI economy, because it holds out the promise that today’s phenomenal capital outlay will be justified by even-more-phenomenal rewards in the not-too-distant future. Actually-existing-AI today cannot replace your doctor, your lawyer, your professor, or your accountant. If tomorrow’s AI is just a modest upgrade on the fancy chatbot that writes stuff for you, then the financial prospects of the whole enterprise collapse.
Sam Altman had dinner with a group of journalists last night, in an attempt to paper over GPT-5’s clumsy rollout. Casey Newton provides the highlight reel. Two of his bullet points jumped out at me:
OpenAI is maybe basically profitable at the inference level. One of the funnier exchanges of the evening came when Altman said that if you subtracted the astronomical training costs for its large language models, OpenAI is “profitable on inference.” Altman then looked to Lightcap for confirmation — and Lightcap squirmed a little in his seat. “Pretty close,” Lightcap said. The company is making money on the API, though, Altman said.
(…)
OpenAI plans to continue spending astronomical amounts. Eventually, he said, OpenAI will be spending “trillions of dollars on data centers.” “And you should expect a bunch of economists to wring their hands and be like, ‘this is so crazy, it’s so reckless,’” he said. “And we’ll just be like, you know what? Let us do our thing, please.”
Notice the gulf between those two statements. (1) The company is close to breakeven on the one part of the business that generates revenue. (2) It plans to keep raising trillions of dollars from investors to spend on its broader ambitions.
…Look, I am not a business genius. But as I understand it, the point of a business plan is to eventually make more money than you are spending. If your company is breakeven on merchandise, but loses billions per year on the rest of the product line, then that’s a company that is supposed to, y’know, fail.
The naive AI futurism story is how Altman brings in the cash for those trillion-dollar data centers. Investors have to believe that we are living through exponential times, that the next product release will fundamentally change life as we know it. Otherwise they are just keeping the bubble afloat. And that story just became a lot harder to sell.