The Five Nine podcast: Nvidia GTC highlights - AI & GPU announcements explained

Turns out Nvidia isn't just a chip company; it's morphing into a systems company. On the heels of Nvidia GTC in San Jose, we sat down with three of our favorite analysts — Jack Gold, Anshel Sag and Leonard Lee — to translate Nvidia's tech talk into tangible takeaways. Here's what you need to know about the company's latest chip and AI announcements. 

To learn more about the topics in this episode, check out our Nvidia GTC coverage hub here

Catch the video at top, listen to the audio edition and read our transcript with embedded story links below, or watch this and future episodes on YouTube

This podcast is written and hosted by Diana Goovaerts. It is edited by Diana Goovaerts and Matt Rickman. Liz Coyne is our executive producer. Special thanks to guests Jack Gold, from J. Gold Associates; Leonard Lee, from neXt Curve; and Anshel Sag, from Moor Insights and Strategy.


Diana Goovaerts, Executive Editor, Fierce Network: Is Nvidia just a chip company? What is OpenClaw and how big is it gonna be? I'm Diana Goovaerts and this is The Five Nine. 

Nvidia held its GTC event this week in San Jose. So, we got together with three of our favorite analysts to get their top takeaways from the show and learn a little bit more about where the tech giant is headed.

And no, they are not just a chip giant anymore. It seems Nvidia is trending towards being a systems company. To unpack what exactly that means and why it matters, we are here with Anshel Sag, Leonard Lee and Jack Gold. Thank you all for being here today. Let's jump right in with the first question, which is a round table of your top takeaways.

So guys, why don't you give them to me straight? 

Anshel Sag, Moor Insights and Strategy: I would say there are a few, because Nvidia's become such a big company covering so many different aspects of the IT industry. One, they're very much on the whole OpenClaw trend. They're very much invested there and want to enable agentic experiences, because I think they realize that if they can get people to individually experiment with agentic experiences, it will grow for them and enable bigger and more of that. And agentic applications, or whatever you wanna call agents, are inherently more resource intensive. So, that's always a good thing for a company like Nvidia that wants to sell more GPUs. But I think the broader takeaway, I would say, is that they're no longer just a GPU company.

You know, it's not just about GPU hardware and software anymore. They're now selling full racks of CPUs. They're now selling Groq LPUs, and they're selling all those things together, plus networking. You know, they're doing CPO. 

So, they're in the networking business. Their networking business is probably gonna be bigger than most networking companies' revenue just because of how big their footprint is on a global scale. And I think that's one thing a lot of people are missing: Nvidia's scale is so large that they're talking about a trillion dollars before the end of 2027, right?

So, that is an absolutely gargantuan level of scale. In every single segment of their business, you know, there's gonna be a point where, say, their CPU business will be bigger than some CPU companies. So, I think some people are really not appreciating the scale of what they're doing. 

But if you look at it, you know, they aren't really selling people individual GPUs. They're not selling people blades of GPUs. They're not even selling people racks of GPUs. They're selling them entire pods, which are like, you know, half a data center or even the whole data center, right? So, I think that's the thing to understand is that like Nvidia's value really is most appreciated at scale.

And that's kind of what Jensen's been trying to convince people of, you know, the whole 'buy more, save more' mantra that he's been repeating. I'm surprised he hasn't said it this year, or at least at this GTC. He's got other messages to communicate. But, you know, there were a lot of key takeaways, and I would say that's the biggest one.

Leonard Lee, neXt Curve: I think one of the overarching impressions I got out of the event this year, and I posted about this, is that Nvidia is no longer just a GPU company. And you know, this is important because that is how Nvidia has been characterized. 

Its identity is based on the GPU. But what we saw throughout the course of GTC 2026 is that there's a lot more Nvidia has to do in order to react to some of these shifts happening in the AI space that transcend what the GPU can do. And so, what we're seeing is Nvidia become much more of a heterogeneous computing company. 

And so if you look at what I thought was one of the more interesting announcements coming out of this year's event, the Vera Rack, we're talking about a full CPU rack. And the way they were positioning this rack was for the support of agentic AI.

So, as you start to look at what the future of computing looks like, it's probably far less of this diametric shift, this, let's say, complete takeover by quote-unquote accelerated computing. You're looking at something that's a lot more hybrid and driven off of what will likely be diverse requirements for compute. And a lot of it may not be AI.

Diana Goovaerts: All right. Jack, what about you? I know you had a note out on this during the conference, and it was a pretty interesting takeaway. I think it related to inference. Talk to me about that for a second.

Jack Gold, J. Gold Associates: One of the things that I think Nvidia showed this year at GTC, and by the way, it was interesting that if you look at the overall amount of content they had, only a small piece of it was around their chips. So, it really is about becoming a systems company, to Leonard's point. But it's also indicative of the fact that the AI marketplace is moving. It's moving rapidly towards an inference-based model and away from a training environment.

So, Nvidia made its name on these monster GPUs at a kilowatt each or more. Now, inference can't handle that. Inference is going to be distributed. It's going to be at the edge. It's going to be in the data center. It's gonna be on my device that I carry with me. It's going to be power constrained. It's going to also be cost constrained.

You know, Nvidia chips aren't cheap. They cost a pretty penny. So what Nvidia's trying to do is trying to say – there's an interesting graph. I can't remember exactly what Jensen titled it, but basically it was ‘buy our monster systems because the cost per token generated with our big systems is gonna be better than the cost per token of a smaller system.’ 

And that's just his way of saying, ‘Hey, you know, we wanna play in the inference space as well.’ So it's going to be a very interesting world going forward for Nvidia. They're gonna be fine, but there's gonna be a lot of competition from the ARM-based world, even from Intel to some extent. But it's gonna be a really distributed AI world that they may or may not have as much play in. 

Diana Goovaerts: Yeah. So I mean, it seems like the point all of you guys noticed is that Nvidia isn't just a chip company anymore. They're really kind of positioning themselves as a systems company. So let's dig into one of those new kinds of segments that they're talking about. You know, we heard announcements at the show around OpenClaw, NemoClaw, Nemotron. It all sounds a little bit sci-fi-esque, but what are those things, and why do they matter? 

Anshel Sag: OpenClaw is obviously something that's not exclusive to Nvidia. It's kind of an architecture for allowing people to run agents on their own computer and to do it in a way that fits their needs. So once it's configured to run OpenClaw, you can basically tell it to do whatever you want. And in some cases it does more than what you want because you haven't built enough guardrails to control what it's trying to do, which is why Nvidia, you know, with their NemoClaw stuff has put some protections in place to prevent your OpenClaw implementation from getting over its skis a little bit.

And the whole Nemo naming is really just an Nvidia-branded thing for their own internal brand of open models. So, yeah, you know, they've really pivoted heavily towards open models, as well as kind of deploying their own architecture. For open source models, you know, they're really proud of how many different models they have.

They have a world model, they've got the NemoClaw models, they've got models for robots. You know, they have all kinds of models specifically for making it easier to implement things and to do it in an open source way.

Leonard Lee: This is interesting because we have to understand how quickly things are moving, and that's not necessarily a good thing. OpenClaw came out, and this whole Moltbook thing really became a thing, just a little over a month ago. And now what we see with NemoClaw is what Nvidia claims is an enterprise-grade version of OpenClaw, right? 

And what OpenClaw really is, is an agentic framework, and the purpose of it is really to be deployed on device as, let's call it, a personal AI. So, there are a lot of companies that have been trying to do this (Apple, Lenovo, Honor, Microsoft, Google, you name them) to create these safe, let's say consumer-grade or enterprise-grade AIs, right? 

And so now we have OpenClaw coming in as a quote-unquote open framework that was posted to GitHub for anybody to basically download and start messing around with, and their own agents can now kind of take off, right? The question is, who's using this stuff, right?

Because it's also becoming a huge concern for the cybersecurity community, which is very concerned about its use and its ability to augment what a threat actor can do, not only within someone else's system, but enabling them to scale out and do very high-precision attacks on their targets.

Right? And so, this is an interesting development. I kind of think it's pretty dangerous stuff. There's a big question mark as to whether or not this stuff has actually been hardened and secured in a way that is safe for consumers or even enterprises. One of the things that's really interesting about NemoClaw is they introduced this whole concept of OpenShell.

It's still difficult to see how well formed that is and how consistently it's been instituted. But the thing is, going back to this point, this is all moving really fast. Given that a lot of people still don't even know how to use OpenClaw safely, it's difficult to fathom that you've arrived at enterprise grade in this timescale.

And so it's gonna be really interesting going forward given all this excitement around OpenClaw and NemoClaw to see whether this can be deployed safely in a corporate environment or, you know, within organizations or even for consumers. 

Diana Goovaerts: I wanna follow up on a point that Leonard made, which is, you know, has there been enough time, is there enough security built into NemoClaw, right? Because OpenClaw is not that old; it only came out a couple months ago. Has there been enough time to harden it, to mature it, to actually deliver on enterprise grade, which is a higher bar than I think most people realize in terms of security, reliability and all sorts of other KPIs?

Anshel Sag: Totally. I would say if there was any other company involved other than Nvidia, I would say probably not. But if you paid attention to what was happening, the creator of OpenClaw said he was working with Nvidia to fix some security issues. So, they've actually been working together for quite some time.

And if there's a company to vet security at an enterprise grade, it is Nvidia. So I would say I'm not as concerned if we're talking an Nvidia-based solution. I think there will probably be vulnerabilities that are discovered. It's inevitable. But I would also say that, it may be time for some companies to experiment, but maybe not deploy.

And also, you know, it's okay to be conservative. I think there's a lot of companies that are conservative, but also, if you wanna be at the edge, there's a little bit of risk. But I think part of an Nvidia NemoClaw deployment is the level of security and safety that they are inherently trying to promote for their customers. 

Diana Goovaerts: Yeah. Jack, I wanted to see what your take is on what they're doing around systems, not just around OpenClaw, but some of the other announcements that came outta GTC that had you saying, ‘huh, this isn't just a chip company. They’re making a systems play here.’ What struck you there? 

Jack Gold: When we look at, at the market going forward, our prediction is that over the next one to two years 80 to 85% of AI workloads are not gonna be training, they're gonna be inference. They may be agentic, but agentic looks a lot more like inference than it does training. And so, it's very important that Nvidia has a major stake in the space. 

They're doing that in a couple of different ways. At the processor level, they're talking about not only having a GPU; they're building their own CPUs. They're building their own LPUs. This is the Groq thing. Groq is really important for them from an inference perspective, from an agentic perspective. They're still working with Intel on CPUs, so that's not gonna go away. They're working with AMD as well, and they're gonna be working with ARM; they're gonna be working with everybody. They don't really care, but they would certainly like to sell their own.

And they're putting a whole lot of effort behind their backend system software. So, for instance, Omniverse, which is kind of their digital twin world, right? They're putting together a specific version that actually helps you build out your data center. They're working with partners that know about electricity and plumbing and building and construction, all that kind of stuff. And so there's gonna be other versions. There's one for healthcare, there's one for science, there's one for physics, there's one for all kinds of stuff. 

So, they're looking at saying, we wanna be a full-blown womb-to-tomb systems company. We wanna be more like, let's say, an IBM, right, than an Intel. Whether they can fully get there or not, they're still gonna have to rely on their partners. That certainly can't go away. But hardening OpenClaw with NemoClaw is a good first step to get into the enterprise play.

Look, 50% or more of their business is still in the hyperscaler world, so they've gotta put together a strong story for hyperscalers. So what does a systems play mean for them from the hyperscaler perspective?

It's all of the above. It's GPUs, CPUs, LPUs, it's software, it's modeling, it's all of that kind of stuff. And so I think what we're going to see from Nvidia over the next two years is a continuation of not reducing their efforts around chips – certainly they're gonna build massive chips and racks and systems – but more focus on how do we put all of this together in a big bundle for you. 

Diana Goovaerts: Wrap it up with a nice little bow. I just wanna clarify for everybody in the audience: it's Groq, not the Grok that Elon Musk has. It's Groq with a q. 

Groq is a company that was founded in 2016. They came out with their first LPU, which is a language processing unit, in 2019. And it's funny you mention IBM, Jack, because before Nvidia straight up bought Groq, IBM had actually signed a partnership with them because they recognized the technology could potentially be a game changer for enterprise inferencing, because it can be done at a low cost.

So, one other thing I wanted to touch on that came out of GTC, kind of tangentially, was news that Cisco is working with Nvidia – specifically Nvidia's RTX Pro Blackwell series GPUs – to kind of create something called the AI grid, which has nothing to do with power. If you're thinking of the power grid like I was, nope. That one completely threw me for a loop.

But what they're trying to do is basically create a grid of intelligence, and do so using network operators’ existing infrastructure. So, putting those GPUs out on the edges of the network using Cisco's Mobility platform and basically turning that into a web of inferencing.

So I'm curious what you guys think of that, because to me, if operators could pull this off (and we already know that Comcast and AT&T seem to be working on stuff around this), could that be the mythical revenue-generating thing they need to really tap into AI as a revenue generator and not just a cost cutter? I'm kind of curious what you're thinking around that kind of announcement. Jack, maybe you go first this time.

Jack Gold: Sure. So, let's talk about Cisco for a moment. Cisco, everyone knows Cisco as a networking company. Over the last three years, they've really switched. They're still a networking company, don't get me wrong. But where they're really going is they wanna be the infrastructure conduit, if you will, for AI across all platforms.

They have a huge security effort as well around AI and they have their own chips and they have their own networking, et cetera. They also have a monstrous installed base. 

So Nvidia is smart to work with Cisco. Cisco needs Nvidia as well, to try to put together a connectivity channel, if you will. And so, there's benefit to both of those guys working together. It's really important. 

There is another issue that, they didn't really speak to very much, which is very important from a networking perspective, and that is as we look at inference and agents, it's no longer about network speed, it's about network latency. 

If you've got an agent running, you can't wait three seconds for it to take an action, especially if it's running a machine tool in a physical AI environment. If it's not working in less than 20 or 30 milliseconds, it doesn't work. It is very important that Nvidia has a really strong networking partner to manage interconnectivity, to manage bandwidth latency, but also to manage network security. 

Agent security, and Leonard hit on this a little bit earlier, is something that people take for granted, and they shouldn't.

Diana Goovaerts: Yeah, and I mean, to your point about Nvidia having almost half of their customer base be hyperscalers, having Cisco on board helps them not only tap into enterprise, but also the telco marketplace, which we know has become an emerging focal point for the company. Right? I mean, last fall at a different GTC, they announced a billion dollar investment in Nokia. They're working on AI-RAN. They're working with T-Mobile on physical AI implementations.

Telco has suddenly come to the fore as perhaps the next place Nvidia peddles its GPUs as training becomes less of a focus. 

Anshel Sag: What it means is I think they're self-aware. They're self-aware that, you know, to your point, they're not gonna do this alone. And truthfully, if you look at how Nvidia does things, for the most part, they don't really like being the sole player in any kind of new market. They like to work with the companies who are trusted in that space and to scale with them. 

I think a perfect example of that is what they've done in the data center. You know, they really have leaned heavily on Dell, HP, Supermicro, all the companies who are well known in the space to go out there and build these servers for their customers and also to service them.

Because Nvidia really doesn't have the infrastructure to maintain and service their customers. And that's why, you know, especially in telco, there's a need. The GSIs, the global systems integrators, are absolutely necessary to keep things running. So, I think that's a very fair assessment of them knowing what they can and can't do, and that they need to work with partners who are already trusted in the space. 

We actually did a tour of the telco zone yesterday at GTC. And, you know, we talked to SoftBank, we talked to Deutsche Telekom, we talked to NTT Data, we talked to Booz Allen, who's like their partner in the whole U.S. thing.

So yeah, they have a lot of efforts. I think, you know, they're smart in the sense that they're like trying to do different things with different players that cater to their strengths.

Leonard Lee: Yeah, so I think it is really truly an Nvidia-plus-Cisco effort here, and I did have a chance to chat with Masum Mir about this. I think one of the things that makes this effort unique, and I think it'll be a good discovery, is the collaboration that Nvidia has with Cisco on Secure AI Factory.

The reason why I bring this up is one of the things that Cisco is focused on is security. So they have AI defense, they have all these artifacts that they've developed over the course of the last year and a half to address security for AI as well as agentic, not only for enterprises, but looking at service providers as well. 

And so I think this is gonna be a good discovery. I think this will be a positive in that it will give Cisco and Nvidia, especially with AT&T, the opportunity to explore and figure out what security capabilities need to be developed and implemented in order to have safe agentic as well as just generative AI at the edge. 

There are huge open questions about how these things will be deployed. Will they be deployed as containers, like applications deployed as containers? Is it gonna be serverless? What do you do to isolate agents, as well as maybe even functions? How do you deploy and protect data? Because people just think in terms of logic and the GPU; they don't think in terms of memory, they don't think in terms of storage.

There are all these things that are required in order for you to actually implement generative AI. These are all things that, oddly, have been missing from the talk track, but they will be reconciled as Nvidia and others look at AI-RAN and start to kick the tires on what it takes to actually make this stuff safe, and then what the economics and the business of AI-RAN are going to look like.

Those are things that have not been thought out very well, and are largely theoretical and hypothetical.

Diana Goovaerts: That's a perfect place to lead me into my last question, which is in 30 seconds or less, what is the one big question that you came away from GTC with that still needs to be answered?

Anshel Sag: Really, it's not even a scale question 'cause they've clearly shown that they can do scale. I think in telecom the challenge is gonna be, like, how much of the market will they take? You know, how much of 6G is gonna be powered by Nvidia? How much of the AI-RAN Alliance is gonna actually be meaningful?

I think the transition from 5G to 6G is gonna be very crucial, and the question is how much of that transition they're actually gonna be able to capture. 

Now is the perfect time to start that process. But it's unclear how much they're going to end up actually capturing. 

Leonard Lee: The big, big question for me is, with agentic in particular, with NemoClaw, which Jensen has characterized as the new ChatGPT moment, how are you gonna make that safe? 

Jack Gold: There are a couple of things that I worry about. Number one – and we just had this discussion a little while ago – is security. AI is great, but if you don't have security wrapped around it well enough, it can do some real damage, especially as we move to physical AI. So that's number one.

Number two is, what does an AI system of the future look like? Can Nvidia really do that just by themselves? I think the answer to that is no. So, I worry about whether they'll keep making those kinds of moves. It's not just about acquisitions; it's more about partnerships. It's less about Groq and more about what they're doing with Cisco.

I think there's gonna be a lot more of that to be determined with whom and how soon and what it's gonna look like.

Diana Goovaerts: Thank you guys so much for your time. I think that's a great place to leave it and, we'll see you again next time. And to all of our listeners and viewers, wherever you are listening or watching, make sure you like and subscribe and we will catch you again next time.