- OpenClaw, an operating system for AI agents, has become the fastest-growing open source project in history, Nvidia CEO Jensen Huang said
- Nvidia's new NemoClaw software layers on enterprise security, policy guardrails and optimized Nvidia models
- The Nemotron Coalition unites labs including Mistral AI and LangChain to build open frontier models across specialized domains
NVIDIA GTC, SAN JOSE, CALIF. — Agentic AI framework OpenClaw is the new dominant open source platform, growing faster than Linux, Kubernetes or HTML did, Nvidia CEO Jensen Huang said during his keynote address this week.
OpenClaw is "essentially the operating system of agentic computers," Huang explained, comparing its role in agentic computing to Windows and Mac for the personal computer.
OpenClaw became the fastest-growing open source project in the history of computing within weeks of its launch, a level of adoption that took Linux roughly 30 years to reach.
The implications for enterprise IT and telcos are sweeping. Just as Linux gave every company a common open source foundation to build on, OpenClaw does the same for agentic AI, Huang said. The central question now for every CEO, every software company and every technology organization is the same: "What's your OpenClaw strategy?" Huang said.
What does this mean for telcos?
Telcos have a direct stake in the OpenClaw moment. Huang didn't draw the conclusion himself, but Nvidia laid out the pieces this week.
Huang described a future in which base stations are no longer merely radio infrastructure but AI inference platforms — running agentic workloads at the network edge, reasoning about traffic and optimizing beamforming in real time. That vision positions carriers as both consumers of agentic AI and potential operators of the distributed compute infrastructure it runs on.
OpenClaw provides the software foundation that makes edge-deployed agents practical at scale, and NemoClaw's enterprise security model addresses the governance requirements that regulated industries like telecom demand before any agent touches sensitive network or customer data.
For carriers already exploring AI-RAN and edge compute as new revenue streams, the arrival of a standard, open, enterprise-ready agentic framework is an essential missing piece of the architecture.
What OpenClaw is and why it matters
OpenClaw is, at its core, an agentic operating system, Huang said. It lets users stand up a working AI agent with a single command, then direct that agent through natural language to use tools, access large language models, read files, spawn sub-agents and complete complex multi-step tasks autonomously.
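Huang didn't show OpenClaw's actual interfaces, but the loop he describes, an agent that uses tools, spawns sub-agents and works through multi-step tasks, can be sketched in plain Python. Every name below (`Agent`, `TOOLS`, `spawn`) is an illustrative assumption, not OpenClaw code:

```python
# Hypothetical sketch of an agentic loop: an agent executes a multi-step
# plan against a set of tools and can spawn sub-agents that inherit them.
# All names here are illustrative, not OpenClaw's actual API.

TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",  # stub tool
    "summarize": lambda text: text[:20] + "...",        # stub tool
}

class Agent:
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools

    def run(self, steps):
        """Execute a multi-step plan; each step is (tool_name, argument)."""
        results = []
        for tool_name, arg in steps:
            results.append(self.tools[tool_name](arg))
        return results

    def spawn(self, name):
        """Create a sub-agent that inherits this agent's tool set."""
        return Agent(name, self.tools)

agent = Agent("root", TOOLS)
sub = agent.spawn("reader")
out = sub.run([("read_file", "report.txt"), ("summarize", "a long document body")])
```

In a real agentic framework the "plan" would come from a language model rather than a hard-coded list, but the control flow, plan, dispatch tools, collect results, is the same shape.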
Because the project is open source, Nvidia and a global ecosystem can keep contributing to OpenClaw as a shared foundation — the same compounding dynamic that made Linux and Kubernetes irreplaceable over decades, Huang noted at a Q&A with journalists and analysts Tuesday. Every company in the world can branch from OpenClaw and build its own agentic strategy atop it, he said.
The enterprise IT landscape that exists today — software companies, creative tools, file systems, systems of record, governance and compliance frameworks — remains relevant and valuable in the post-OpenClaw world. It just acquires agents on top.
NemoClaw: The enterprise-ready reference design
OpenClaw's power surfaces significant security problems. An agent operating inside a corporate network can access sensitive employee data, execute code and communicate externally, Huang said. That combination requires control.
To address this need, Nvidia worked with OpenClaw developer Peter Steinberger — who was hired by OpenAI last month and who attended the GTC conference — and leading security and computing experts to develop NemoClaw, a reference design that layers enterprise-grade security and privacy onto the OpenClaw framework.
Like OpenClaw, NemoClaw installs with a single command. It incorporates the new Nvidia OpenShell runtime, which provides open models inside an isolated sandbox along with policy-based network and privacy guardrails. A privacy router allows agents to access frontier cloud models while keeping sensitive data local. Most importantly, NemoClaw connects directly to existing enterprise policy engines — the governance infrastructure large organizations already have in place — so agents operate automatically within those policies.
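Nvidia hasn't published the router's internals, but the idea of a privacy router, keeping requests that touch sensitive data on a local model while letting the rest reach a frontier cloud model, can be sketched as a simple policy check. The marker set and the `route` helper are illustrative assumptions, not Nvidia's implementation:

```python
# Hypothetical privacy-router sketch: requests mentioning sensitive fields
# are pinned to a local model; everything else may use a cloud model.
# The policy and tier names are illustrative, not NemoClaw's actual design.

SENSITIVE_MARKERS = {"ssn", "salary", "customer_record"}

def route(request_text, policy=SENSITIVE_MARKERS):
    """Return which model tier a request is allowed to reach."""
    tokens = set(request_text.lower().split())
    if tokens & policy:  # sensitive data detected: keep it local
        return "local_model"
    return "cloud_frontier_model"

route("summarize this customer_record")  # stays on the local model
route("draft a press release")           # may use the frontier cloud model
```

A production router would consult the enterprise policy engine rather than a keyword set, which is exactly the integration point NemoClaw is described as providing.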
NemoClaw can run on any dedicated platform — including Nvidia GeForce RTX PCs and laptops, RTX Pro workstations and DGX Station and DGX Spark AI supercomputers — providing always-on local compute for autonomous agents. Enterprise deployments can run at data center scale.
The Nemotron Coalition: open frontier models for every domain
Alongside NemoClaw, Nvidia also launched the Nemotron Coalition, which sounds like an empire of "Star Trek" villains but is in fact a global collaboration of AI labs and model builders working together on open frontier models. Founding members include Black Forest Labs, Cursor, LangChain, Mistral AI, Perplexity, Reflection AI, Sarvam and Thinking Machines Lab.
The coalition's first project will be a base model co-developed by Nvidia and Mistral AI, trained on Nvidia DGX Cloud, with other members contributing data, evaluation frameworks and domain expertise. The resulting model will be open-sourced and will underpin the upcoming Nemotron 4 family of models.
No single model can serve every industry, Huang said. Biology, physics, autonomous vehicles, robotics and human language each require different foundations. The Nemotron Coalition's mission is to build those specialized foundations in the open — accessible to developers and organizations that need to post-train and customize AI for their specific regions and domains.
Every SaaS company becomes an AgaaS company
The arrival of OpenClaw has direct consequences for the software industry, and Huang addressed at length the concern sometimes described as the "SaaSpocalypse" (though he didn't use that term): the fear that AI agents will route around enterprise software vendors entirely and collapse the value of SaaS applications and the companies that operate them.
Huang is more optimistic. "Every single SaaS company will become an AgaaS company — agentic as a service company," he said.
To show how agentic AI and conventional IT can coexist, Huang cited engineering design software from Synopsys and Cadence. Today, the ceiling on engineering tool sales is the number of engineers available to use them. In the future, agentic engineers will use the same tools — Synopsys, Cadence, every design application — because when agents complete work, the results must go back into structured data that humans can audit, interrogate and reproduce exactly. After all, the ground truth of the design lives in the file system and the database.
As the number of agentic engineers grows, the number of tool licenses required is likely to explode, not shrink, Huang said.
That principle extends to SQL and enterprise databases broadly. SQL will not die because agents are here — it's where the ground truth of business is stored, Huang said. When agents finish their work, the results go back into the SQL database so they can be controlled and queried.
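The write-back pattern Huang describes, agent output landing in a SQL table where humans can audit and query it, is ordinary database plumbing. A minimal sketch using Python's standard-library `sqlite3` (the schema and the sample result are illustrative assumptions):

```python
# Sketch of the write-back pattern: an agent's result is recorded in SQL,
# where it becomes the auditable ground truth anyone can query.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for an enterprise database
conn.execute("CREATE TABLE agent_results (agent TEXT, task TEXT, result TEXT)")

def record_result(agent, task, result):
    """Agents don't hold the ground truth; they write it back to SQL."""
    conn.execute("INSERT INTO agent_results VALUES (?, ?, ?)", (agent, task, result))
    conn.commit()

record_result("design-agent-7", "timing closure", "slack +0.12ns on clk_main")

# A human (or another agent) can now interrogate the same ground truth:
rows = conn.execute(
    "SELECT result FROM agent_results WHERE agent = ?", ("design-agent-7",)
).fetchall()
```

The point is less the code than the contract: the database, not the agent's context window, is where the business state lives.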
Enterprise infrastructure is not made obsolete by AI; it becomes the substrate that makes AI trustworthy, Huang said.
Both structured and unstructured data are part of this picture. About 90% of enterprise information generated annually is unstructured — PDFs, video, speech, documents that until now couldn't be easily queried because there was no way to index their meaning, Huang said at the keynote. AI changes that.
Nvidia introduced accelerated libraries for structured data (CUDA-based acceleration of SQL engines and data frames) and for semantic vector databases that make unstructured data queryable, announced Monday alongside partners including IBM, Dell and Google Cloud.
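The semantic-vector idea behind those libraries can be shown in miniature: documents are embedded as vectors, and a query retrieves the closest meaning by cosine similarity rather than keyword match. The tiny 3-dimensional "embeddings" below are hand-made stand-ins for real model output; CUDA-accelerated engines do the same math over billions of vectors:

```python
# Toy semantic search: rank documents by cosine similarity to a query vector.
# The embeddings are fabricated stand-ins for a real embedding model's output.
import math

DOCS = {
    "outage report": [0.9, 0.1, 0.0],
    "pricing video": [0.1, 0.8, 0.1],
    "support call":  [0.7, 0.2, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_search(query_vec, docs=DOCS):
    """Return document names ranked by similarity of meaning, not keywords."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)

ranking = semantic_search([0.85, 0.15, 0.05])  # a "network trouble"-flavored query
```

This is what makes unstructured data queryable: once a PDF, call transcript or video is embedded, "find me things about network trouble" becomes a nearest-neighbor lookup.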
Huang's message at GTC this week is that the agentic era is not arriving — it is here, the foundation is open source, and the window to build a strategy is now. Any organization — software vendor, telco, manufacturer, or otherwise — that waits for the landscape to settle before acting is making the same mistake that companies made when they delayed their Linux strategy, their web strategy, and their cloud strategy. For companies that fail to seize the moment, that story doesn't turn out well.
Overall, Huang and Nvidia had a busy week at the conference, which saw the AI giant pivot from training to inference. Additionally:
- Nvidia unveiled the Vera Rubin platform — seven new chips in production — promising 10x inference throughput per watt and one-tenth the cost per token versus Blackwell.
- Nvidia and T-Mobile discussed how they are pushing physical AI to the network edge via AI-RAN, turning base stations into AI inference infrastructure.
- Pushing AI to the network edge was a theme at the conference, as AT&T, Cisco and Nvidia talked about their AI Grid implementation.
- Comcast disclosed its own use of AI Grid technology, which could be a turning point in telcos' search for new revenue streams.
- And Dell upgraded its AI Factory to help organizations push AI past the pilot stage.