- Why machine learning is being sold as intelligence — and why the distinction matters
- Faced with a fork in the AI road, the U.S. has taken the wrong path
- Predictive LLM technology demands vast capital outlay with limited returns
2025 was the year I finally got to the bottom of AI — and discovered that it isn’t AI at all. It’s machine learning.
And that distinction matters.
What passes for “AI” today is a narrow, fragile subset of machine learning (ML), dominated by large language models (LLMs), that has been mis-sold as general intelligence, mis-financed as critical infrastructure and mis-applied as a solution to problems it cannot solve.
The market has quietly split in two. One path leads toward grounded, domain-specific, ML-based automation that improves industrial productivity and profitability. The other leads toward consumer-focused, capital-intensive, probabilistic language systems whose economics and technical limits are already showing strain.
The U.S. has bet overwhelmingly on the latter — pouring debt, subsidies and hype into systems that excel at generating plausible text but struggle with accuracy, reliability and return on investment.
A brief history of AI using small words
It’s time to relearn what AI is and what it can and can’t do.
The original attempts to create AI — symbolic reasoning in the 1960s and 1970s, and rule-based systems in the 1980s and 1990s — didn’t work.
But in the late 1990s, ML — not AI — arrived. ML mavens ditched symbolic AI in favor of statistics and data-driven models. For the next two decades, ML did its thing, variously labelled data mining, predictive analytics and deep learning, depending on who was selling it.
Then, in the early 2020s, three things happened in short succession that changed everything.
First, LLMs debuted, perched atop ML like a hat, ingesting vast amounts of text data to generate predictions.
Second, OpenAI hooked its LLM-on-ML up to a chatbot so regular folks could have a go.
Third, product marketers realized that enough time had passed to sex up dull-sounding ML as AI, again — yeah, baby! — without triggering the industry’s memory of AI as a failed idea.
And this turned out to be the most successful piece of rebranding since Purdue Pharma hired McKinsey to work out how to hook America's poor on its new line of for-profit opiates. New AI — really ML with bits — triggered an adoption curve unprecedented in software history, as ChatGPT went from zero to 800 million weekly users in less than three years.
It’s the great LLM illusion, Charlie Brown
Consumers bought into the AI rebranding in part because its developers borrowed the lexicon of neuroscience — neurons, synapses, learning — to describe their mathematical code. This provoked widespread confusion; neural networks may be named after neurons, but they function much more like giant spreadsheets of weighted multiply-and-add operations than like the human brain.
The anthropomorphizing of AI by OpenAI and others through chirpy, happy-to-help chatbot personalities that gave the math a voice also helped foster the illusion of human-like intelligence.
In reality, the only human thing about LLMs is that they make mistakes. By design, LLMs are innately disposed to hallucination. LLMs are probabilistic sequence predictors; they ingest a prompt and predict the most statistically plausible continuation, one token at a time. Accuracy generally improves with the volume and quality of the data they are trained on, but there is no way to prevent them from introducing errors, because LLMs deal in plausibility, not facts.
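To make "probabilistic sequence predictor" concrete, here is a minimal sketch of next-token generation, not any vendor's actual model: the system scores every token in its vocabulary, turns those scores into probabilities, and samples one. The tiny vocabulary and the logit values below are invented purely for illustration.

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Convert raw scores (logits) into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and made-up logits for the prompt "The capital of Australia is"
vocab = ["Canberra", "Sydney", "Melbourne", "kangaroo"]
logits = [2.1, 1.9, 1.2, -3.0]   # plausibility scores, not facts

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]

for tok, p in zip(vocab, probs):
    print(f"{tok:>10}: {p:.2f}")
print("sampled next token:", next_token)
# "Sydney" is wrong but scores nearly as high as "Canberra" -- the model
# optimizes for likelihood of the continuation, not for truth.
```

Nothing in that loop checks whether the sampled token is true; a wrong answer that scores as plausible is, to the model, a perfectly good answer.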
Some level of inaccuracy is tolerable in consumer applications like ChatGPT, but it won’t fly in mission-critical enterprise or industrial systems.
And this is not an AI issue. It is, very specifically, an LLM issue, and one that conventional machine learning, with its narrow scope and measurable, validated error rates, largely avoids.
The industrial realm settled the AI safety question nearly 30 years ago with the IEC 61508 standard. First published in 1998 and written by safety and control engineers — not Silicon Valley bros — it made one thing clear: critical software systems must tolerate as little uncertainty as possible. This is why industry uses machine learning in tightly constrained, validated environments and keeps humans in the loop for the most critical applications. It’s also why industry deliberately air-gaps anything that matters from LLMs.
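For readers who want the flavor of what "tightly constrained, validated and human-in-the-loop" looks like in code, here is a minimal sketch of the pattern; the operating envelope, toy model and names are hypothetical, not anything prescribed by IEC 61508.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical operating envelope, validated offline against historical plant data.
VALIDATED_TEMP_RANGE = (10.0, 90.0)   # degrees C

@dataclass
class Decision:
    action: str                        # "auto" or "escalate_to_human"
    predicted_value: Optional[float]

def constrained_inference(model, sensor_temp_c: float) -> Decision:
    """Run the ML model only inside its validated envelope; otherwise defer to a human."""
    lo, hi = VALIDATED_TEMP_RANGE
    if not (lo <= sensor_temp_c <= hi):
        # Input falls outside the conditions the model was validated for: no automation.
        return Decision(action="escalate_to_human", predicted_value=None)
    return Decision(action="auto", predicted_value=model(sensor_temp_c))

# Toy stand-in for a trained regressor (e.g., predicting a valve setpoint).
toy_model = lambda t: 0.8 * t + 3.2

print(constrained_inference(toy_model, 42.0))    # inside envelope -> automated decision
print(constrained_inference(toy_model, 150.0))   # outside envelope -> human takes over
```

The point of the pattern is that the model is never asked a question it wasn't validated to answer; anything outside the envelope goes to a person.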
Too big to fail?
In a more rational universe, LLMs would already be back in the box with the other broken toys.
But this is America. So instead, this brittle iteration of ML is attracting the broadest flood of capital — $109 billion in the U.S. in 2024. More confounding still, OpenAI, Google and Anthropic are presenting it as a credible path to artificial general intelligence (AGI), a proposition whose technical incoherence is exceeded only by its spectacular recklessness.
Yes, please — let’s build the supreme intellect on the planet atop a technology that hallucinates by design. What could possibly go wrong?
Talk is cheap. Chat is cheaper.
America’s over-rotation on consumer LLMs is also bad business, at both the corporate and national levels.
Based on publicly available information, the average revenue per ChatGPT query is around 2–3 cents. That’s less than a U.S. phone call (3–4 cents) or a Google Search query (5–7 cents).
Yet in 2025, Big Tech spent 10 times as much on AI infrastructure as carriers did on building and upgrading their phone networks. The only way that would make sense is if a wave of irresistible consumer AI applications were about to arrive, exponentially increasing per-query value.
That’s not going to happen. The primary utility of ChatGPT and other AI chatbots is to make consumer search easier. Beyond that, usefulness and ROI collapse fast.
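A rough back-of-envelope makes the gap concrete. The per-query revenue and weekly user figures come from the numbers cited above; the queries-per-user and infrastructure-spend inputs are purely illustrative assumptions.

```python
# Back-of-envelope: per-query chat revenue vs. infrastructure spend.
# Inputs marked "assumed" are illustrative, not reported figures.

revenue_per_query = 0.025               # ~2-3 cent midpoint, from the figures above
queries_per_week = 800_000_000 * 10     # assumed: 800M weekly users, 10 queries each
weeks_per_year = 52

annual_revenue = revenue_per_query * queries_per_week * weeks_per_year
print(f"Annual chat revenue at current per-query value: ${annual_revenue / 1e9:.1f}B")

assumed_annual_capex = 300e9            # assumed industry-wide AI build-out, for scale
print(f"Years of chat revenue needed to match one year of capex: "
      f"{assumed_annual_capex / annual_revenue:.1f}")
```

On these assumptions, it would take nearly three decades of chat revenue at today's per-query value to cover a single year of build-out.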
A small corner of the communications industry insists LLMs can be used to improve customer service. This claim doesn’t survive contact with reality. AI customer service chatbots are sh-t. Anyone who has been trapped in an AI-powered help desk loop will tell you that these systems aren’t configured to resolve problems — they’re designed to exhaust customers into abandoning their request for health care coverage or, worse, a refund.
And yes, there are countless other things you could use LLMs to do, but remember that immovable fork in the road: validated, bounded-error ML this way, blundering LLMs that way. That divide will always relegate startups reliant on LLMs to applications that, in the grand scheme of a new digital economic order, just don't matter that much.
Focus is choosing what not to chase
Conversely, the AI startups with the best chance of success — the ones immune to the vicissitudes of the coming bubble — are specialists wheeling in the gyre of American tech delusion.
Take Vultr. It’s often lumped in with “data-center players,” which misses the point. Vultr doesn’t build data centers or bet its balance sheet on speculative real estate. Instead, it arbitrages compute and shapes it for specific high-value services and applications. Its business is orchestrating GPUs, capital and go-to-market execution across a fragmented, heterogeneous AI ecosystem, whether models are large or small, centralized or sovereign, booming or retrenching. If an LLM bubble bursts, demand doesn’t disappear — it fragments, decentralizes, and looks for cheaper, more flexible infrastructure. That’s Vultr’s home turf.
Cerebras is another instance of success based on specialism. It's often grouped with LLM-era chip startups, but that framing is too narrow. Its wafer-scale, chip-in-a-box architecture wasn't built specifically for chatbots or consumer AI, but for a broader class of problems where model size, memory locality and communication overhead break conventional GPU clusters — ranging from LLMs to scientific simulation, bioinformatics, vision and multimodal systems. That focus limits Cerebras' addressable market, but it also insulates the company from hype cycles. If LLM spending cools, demand doesn't disappear; it shifts toward fewer, harder, more economically constrained workloads where efficiency and predictability matter more than scale alone. Cerebras isn't reliant on an AI boom for success. It just needs those problems to remain unsolved by general-purpose hardware.
'This time will be different!' — Every bubble, ever
With LLMs, we have a new consumer-targeted technology so bold, fresh, and supremely important that the U.S. tech community loses all sense of logic, discipline or proportion — and launches into a wild land grab, abandoning traditional metrics like profitability in favor of user counts and market share, while investors pile into already massively overvalued stocks that will inevitably come crashing down.
We’ve seen this movie before with the dot-com bubble. Different plot. Same ending.
Amid the madness, there is at least one group that understands America's LLM obsession is a mistake: the hyperscalers whose business plans depend materially on LLM revenue. Amazon (AWS), Google, Meta, Microsoft (Azure) and Oracle are all talking up plans to spend tens of billions building AI data centers in anticipation of LLM demand. But the way these projects are being financed tells a more cautious story.
Rather than funding them on balance sheet, hyperscalers are pushing build-outs into special-purpose vehicles, loading them with 60–80% project debt and surrounding the remainder with tax credits, municipal incentives and long-term power-purchase agreements.
The goal is insulation: cap electricity costs, socialize grid upgrades and shift utilization risk to lenders, utilities and local governments. The most revealing detail is the price of that debt. Coupon rates in the 6–9% range are not what markets charge for essential utility infrastructure; they are what they charge for speculative assets with uncertain future cash flows. If LLM growth were as inevitable as the rhetoric suggests, this capital would be cheap and equity-heavy. Instead, the elevated coupons speak for themselves, signaling how markets — and the hyperscalers themselves — really perceive LLM risk.
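To see why the coupon rate is so revealing, here is the debt-service arithmetic for a hypothetical SPV-financed project; the $10 billion project size is an invented round number, while the leverage and coupon figures come from the ranges above.

```python
# Annual interest cost of a hypothetical SPV-financed data-center build-out.
# Project size is illustrative; leverage and coupons reflect the ranges in the text.

project_cost = 10e9                  # $10B build-out (illustrative)
debt_ratio = 0.70                    # within the 60-80% project-debt range
debt = project_cost * debt_ratio

for coupon in (0.04, 0.06, 0.09):    # roughly utility-grade vs. the quoted 6-9% range
    annual_interest = debt * coupon
    print(f"coupon {coupon:.0%}: ${annual_interest / 1e9:.2f}B per year in interest")

# Every extra point of coupon on $7B of debt is another $70M a year the project
# must earn before utilization risk even enters the picture.
```

At a 9% coupon, the project carries more than twice the annual interest burden it would at utility-grade rates, before a single GPU is utilized.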
It’s not surprising that LLMs have put America’s tech industry under their spell. They embody the new American zeitgeist: confident, garrulous and unburdened by truth. The U.S. has finally reached its Foghorn Leghorn era of technology. I say, I say, I say it’s getting AI wrong.
Steve Saunders is a British-born communications analyst, investor and digital media entrepreneur with a career spanning decades.
Opinion pieces from industry experts, analysts or our editorial staff do not represent the opinions of Fierce Network.