- IBM should be a big winner in the digital industrial revolution, but it’s missing a step – and a stack
- The company has made two major Industry 4.0 mistakes – one tactical (spinning out Kyndryl) and one strategic (jumping on America’s LLM bandwagon)
- For a rigidly rule-based business, IBM has chosen a weird time to favor probabilistic language models over determinism
Revenues from the digitalization of vertical industries worldwide — part of a broader push toward a new global digital economy — already total several hundred billion dollars, with credible forecasts pointing to a trillion-dollar upside over the next decade.
Vive la Révolution industrielle 4.0!
On paper, IBM should be a natural winner in this transition. Instead, it enters the next phase of industrial digitalization having made two pivotal (and extremely poor) strategic choices: spinning out Kyndryl and aligning its AI strategy closely with the LLM-led trajectory of U.S. hyperscalers.
The spin-out of its managed infrastructure services business as Kyndryl in 2021 was intended to free IBM to pursue more glamorous, higher-margin AI opportunities, while leaving Kyndryl to handle the workaday but indispensable task of running enterprise and industrial IT and OT environments — laying Category 6 cabling, fixing printers, ensuring the robots didn’t become sentient and revolt, etc.
In doing so, it magically (alakazam!) created two silos in an industry that is trying its utmost to move in the opposite direction, towards seamless integration. That is an especially ill-advised tactic in the Industry 4.0 world, where networks are so complex and critical that their owners generally prefer one throat to choke — a full Industry 4.0 stack from a single supplier.
Siemens, Rockwell Automation, Schneider Electric, and Mitsubishi Electric all deliver proper Industry 4.0 stacks that span from the factory’s concrete floor to the top of the software intelligence layer.
Siemens’ Xcelerator, Rockwell’s FactoryTalk, Schneider’s EcoStruxure, and Mitsubishi’s e-F@ctory are soup-to-nuts affairs, spanning sensors, controls, industrial networks, execution systems, analytics, and embedded AI, with machine-learning models woven directly into engineering, operations, and optimization workflows.
IBM, of course, invented exactly this one-stop-shop approach 60 years ago with its mainframe computers. In a weird plot twist, it has now chosen to abandon the lock-in model and focus on selling its watsonx enterprise AI suite, which sits above the network — on data platforms, AI models, and governance — rather than in the physical, deterministic layers where industrial systems actually run. In other words, watsonx is designed to be used on top of other vendors’ Industry 4.0 stacks, not to replace them.
The wrong trousers, Gromit!
IBM’s reputation as an AI leader fosters the assumption — certainly amongst its own staff — that its industrial software must surpass that of the established Industry 4.0 stack players.
Actually, the excommunication of its Kyndryl unit aside, AI is IBM’s biggest strategic Industry 4.0 misstep.
This is important, so pay attention: what the U.S. comms industry is pitching as “AI” today isn’t really AI at all. It’s machine learning (ML), with various bells (chatbots) and whistles (LLMs) bolted on. ML is a deterministic technology, meaning it doesn’t hallucinate and, when it does fail, it fails in bounded, predictable ways. LLMs, in contrast, are probabilistic systems, which means they fail unpredictably and with misplaced confidence (chatbots aren’t deterministic or probabilistic; they’re just predictive text subroutines cosplaying as creepy empaths).
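The deterministic-versus-probabilistic distinction can be sketched in a few lines of Python. This is a toy illustration under my own assumptions — not IBM, Granite, or any real industrial code: a fixed-weight classifier stands in for deterministic ML and returns the same answer for the same input every time, while a sampling decoder (the mechanism underneath every LLM) can return different answers to the same prompt.

```python
import random

# Toy deterministic "ML" inference: fixed weights mean the same input
# always produces the same output, and failures are reproducible.
WEIGHTS = {"temp": 0.8, "vibration": 1.5}   # hypothetical sensor weights
THRESHOLD = 2.0

def classify(reading):
    score = sum(WEIGHTS[name] * value for name, value in reading.items())
    return "ALERT" if score > THRESHOLD else "OK"

# Toy probabilistic "LLM-style" decoding: the next token is sampled
# from a probability distribution, so identical prompts can yield
# different outputs.
def sample_next_token(distribution, rng):
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

reading = {"temp": 1.0, "vibration": 1.0}
print(classify(reading), classify(reading))  # identical every run

led_belief = {"green": 0.6, "red": 0.4}  # the model is only 60/40 sure
outputs = {sample_next_token(led_belief, random.Random(seed))
           for seed in range(200)}
print(sorted(outputs))  # across seeds, both answers turn up
```

The classifier’s failure modes can be enumerated in advance; the sampler’s cannot — which is the whole of the industrial objection in two dozen lines.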
Industries (and their Industry 4.0 stack suppliers) around the world have sweated the details of what passes for AI in the 21st century and — unsurprisingly — opted out of LLMs and chatbots, choosing to build their factories, ports, and nuclear power stations around error-free ML instead. This is now accepted AI doctrine in the world of critical heavy industries. For it is written.
But faced with this fork in the road, IBM has taken the same path as Amazon Web Services, Google, and Microsoft, adopting a strategy more LLM-focused than ML-focused, even though its industrial credibility rests on its historical contributions to classical ML.
This is an attempt to follow the money. U.S. hyperscalers are betting that LLM-based AI will spawn a tsunami of new, massively profitable consumer applications. So far, that hasn’t happened. In fact, it’s starting to look a lot like their LLM obsession — with its absurd company valuations and hundreds of billions of dollars pledged to data center construction — is likely to trigger the next great U.S. stock market collapse.
Computer says, 'No'
IBM’s approach to LLMs is embodied in Granite — a family of foundation models inside the watsonx platform, designed for governed, industry-specific AI use. Granite is very IBM: serious code; staid, not flashy; smaller and more predictable than consumer LLMs; prioritizing reliability and transparency over the more grandiose offerings of its hyperscaler competitors.
But Granite is still LLM-based, using the same transformer architecture as GPT-4. Ipso facto, Granite still makes mistakes. And that puts IBM on the wrong side of the AI fence in the heavy industry world.
Industrial firms don’t like ambiguity. They are the only organizations with reliability standards that exceed those of Tier 1 carriers. No amount of “enterprise hardening” or human-in-the-loop safeguarding can disguise the fact that Granite is a probabilistic word-guesser, and no VP of Nuclear Operations wants to lie awake at night worrying that an AI subsystem has decided a red LED is actually green and now is exactly the right time (LLM chef’s kiss) to withdraw those boron carbide control rods from the pressurized water reactor, resulting in a “3.6 roentgen, not great, not terrible” moment.
A tip of the (red) hat
Things get even stranger when you consider that behind IBM’s marketing noise around watsonx and Granite sits the most strategically valuable Industry 4.0 asset it owns: Red Hat. Acquired in 2019 for roughly $34 billion, Red Hat provides the neutral, industrial-grade substrate that makes modern hybrid industrial architectures operationally viable — across edge, on-prem, and cloud.
It’s all a bit of a mess, and a missed opportunity. Looked at in the abstract, IBM’s Industry 4.0 stack consists of squishy watsonx/Granite LLMs at the top, rock-solid Red Hat code in the middle, and nothing underneath that IBM actually controls. This isn’t a recipe for success, but fixing it would require IBM to admit it hasn’t got this absolutely 100% right.
I’m not holding my breath on that front. This is IBM we’re talking about.
Steve Saunders is a British-born communications analyst, investor and digital media entrepreneur with a career spanning decades.
Opinion pieces from industry experts, analysts or our editorial staff do not necessarily represent the opinions of Fierce Network.