At NVIDIA’s GTC 2026 conference — 3,500 attendees, $40 trillion in combined market cap in the room — Jensen Huang made a statement that should reframe how every software executive thinks about their business model. Not eventually. Now. “Every single company will need compute for revenues.” Not compute for infrastructure. Not compute for efficiency. Compute for revenues. That’s a different sentence. And if he’s right, the entire software industry is mid-transformation into something it doesn’t fully recognize yet.
The Earnings Print That Set the Stage
Before getting into the structural argument, it’s worth understanding the moment Huang was speaking from. NVIDIA had just reported what one attendee described to Huang as possibly “the single best earnings print in recorded human history.” Huang’s response was characteristically dry: “It must be only recorded humanity. I’m sure somebody had better returns.”
The stock was actually down 30 cents the day of the interview. Huang didn’t seem particularly bothered. That composure makes more sense when you understand the thesis he was laying out — because if the thesis is right, what happened last quarter is the least interesting part of the story. The stock is up roughly 22,000% over the prior decade. Short-term moves are noise against that signal. And Huang’s framing wasn’t about defending a stock price. It was about explaining a structural shift in how the entire global economy is going to be organized around compute.
“You can’t hold the stock back,” he said. “You can’t hold it back.” That’s not a boast. It’s a description of what happens when the underlying demand driver is structural rather than cyclical.
Compute Equals GDP: The Sovereign Infrastructure Argument
The boldest claim Huang made at GTC wasn’t about NVIDIA. It was about nations. “Compute equals GDP. Therefore, every country will have it.” And then: “Not one country in the future will say ‘we’re going to opt out on intelligence.’”
Unpack that chain of logic: needing GDP growth means needing intelligence. Needing intelligence means needing digital infrastructure. Needing digital infrastructure means needing AI. Needing AI means needing compute. Every step follows from the one before it. If you accept the premise that AI-powered systems genuinely produce economic output — better logistics, better healthcare allocation, better financial modeling, better agricultural yield prediction — then compute becomes as foundational as electricity grids or port infrastructure.
This isn’t a vague observation about technology trends. It has immediate geopolitical implications. The sovereign AI race that’s been building since 2023 — with countries from the UAE to France to India announcing national AI infrastructure programs — now has a clear economic rationale that goes beyond national pride or defense concerns. Nations that don’t build compute capacity aren’t just falling behind technologically. By Huang’s framing, they’re opting out of a primary mechanism of future GDP generation. That’s not a position any government can politically sustain for long.
For software companies, the implication is slightly different but equally structural: your customers — whether they’re enterprises, governments, or consumers — are going to be increasingly compute-embedded environments. Your product either integrates with that reality or it becomes a legacy artifact.
The Internet Already Proved It: Why CSPs Converted All Their Capex
Here’s where Huang moved from prediction to past tense, which is the more interesting rhetorical move. He noted that all major cloud service providers — Meta, Google, AWS — took their entire capital expenditure budget and converted it to generative and agentic AI infrastructure. All of it. Not a portion. Not a pilot program. Everything.
“The entire internet industry could take 100% of their capex and make it AI because it’s better. We’ve proven it to be better.” The word “proven” is doing a lot of work there. This isn’t theoretical ROI. Meta’s ad targeting, Google’s search results, AWS’s developer tooling — these have all been measurably improved by AI integration, and the companies have the revenue data to show it. Huang’s point is that the ROI experiment has been run at scale by the largest technology companies on Earth, and they came back and said: convert everything.
Why does this matter for software companies further down the stack? Because it establishes that the pattern is real and replicable. The internet giants didn’t adopt AI infrastructure out of fear of missing out. They adopted it because it made their core products better in ways that translated directly to revenue — better search means more ad clicks, better recommendations mean more purchases, better feed ranking means more engagement. The mechanism works. The question for every other software company is when they run the same experiment in their domain, not if.
The Token-Driven Software Industry: Two Paths, Both Require Compute
This is the section where the argument gets most concrete for software businesses. Huang laid it out simply: “The entire software industry will be token driven.” And then he described the two paths available to any software company in that world.
Path one: produce tokens yourself. You build AI capabilities into your product — your own models, your own inference pipelines, your own AI features. To do this, you need compute. You are now a compute buyer.
Path two: resell tokens. You integrate third-party AI (OpenAI’s API, Anthropic’s Claude, Google’s Gemini) into your product and pass the intelligence through to your customers. You don’t train models. You don’t run your own inference at scale. But you still need compute to handle the token throughput your customers are consuming through your platform. You are still a compute buyer.
There is no path three. “For the first time, the entire IT industry will have to be fueled by compute,” Huang said. Salesforce building Einstein AI into CRM workflows. SAP embedding AI into ERP processes. Oracle running AI across its cloud applications. ServiceNow automating IT workflows with AI agents. None of these companies can deliver their AI-enhanced products without token generation happening somewhere, and token generation requires compute. The old software economics — write code once, sell it infinitely at near-zero marginal cost — doesn’t disappear, but it gets a new dependency layered on top of it.
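The "path two" dependency is easy to sketch. Below is a minimal, illustrative token-metering wrapper for a reseller: every AI feature routes through a provider call, and the platform records the tokens each customer consumes so that cost can later be priced in. The `fake_model_call` stub and the `PRICE_PER_1M_TOKENS` figure are invented assumptions, not any real provider's API or pricing.

```python
from collections import defaultdict

# Assumed blended cost per 1M tokens -- illustrative, not a real price.
PRICE_PER_1M_TOKENS = 3.00

usage = defaultdict(int)  # customer_id -> total tokens consumed

def fake_model_call(prompt: str) -> tuple[str, int]:
    """Stand-in for a third-party model API; returns (output, tokens_used).
    The 4-tokens-per-word estimate is a crude placeholder."""
    tokens_used = len(prompt.split()) * 4
    return f"response to: {prompt[:20]}", tokens_used

def ai_feature(customer_id: str, prompt: str) -> str:
    """An AI-powered product feature: pass intelligence through, meter tokens."""
    output, tokens = fake_model_call(prompt)
    usage[customer_id] += tokens  # every call is a metered cost event
    return output

def monthly_token_cost(customer_id: str) -> float:
    """Pass-through compute cost attributable to one customer."""
    return usage[customer_id] / 1_000_000 * PRICE_PER_1M_TOKENS
```

Even in this toy form, the point holds: the reseller never trains a model, yet every customer interaction generates a compute bill somewhere.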
Huang put it directly: “You pick your favorite software company and I can show you exactly how they’re going to be token driven.” That’s a falsifiable claim, and it’s worth taking seriously because of what it implies for software margins, pricing models, and infrastructure strategy.
What This Looks Like in Practice: A Framework for Software Companies
- Identify your token surface area. Which features in your product involve AI-generated outputs? Every one of those is a token cost center. Map it now if you haven’t.
- Decide: produce or resell? Are you building your own models and inference pipelines, or are you routing through API providers? Both are valid, but they have very different cost structures, margin profiles, and vendor dependencies.
- Model token costs into your pricing. Unlike traditional SaaS, where incremental users carry near-zero marginal cost, token-driven features have a real per-use cost, so pricing has to reflect consumption or heavy users will quietly erode your margin.
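The pricing point above can be made concrete with a simple unit-economics sketch. All numbers here are invented assumptions for illustration; the shape of the calculation, not the figures, is what matters.

```python
# Illustrative unit economics: each user now carries a real marginal
# token cost, so gross margin depends on usage, not just subscriptions.

def gross_margin(price_per_user: float,
                 tokens_per_user_per_month: int,
                 cost_per_1m_tokens: float) -> float:
    """Per-user gross margin (as a fraction) after token costs."""
    token_cost = tokens_per_user_per_month / 1_000_000 * cost_per_1m_tokens
    return (price_per_user - token_cost) / price_per_user

# At an assumed $30/user/month price and $3 per 1M tokens:
light = gross_margin(30.0, 2_000_000, 3.0)   # $6 in tokens  -> 80% margin
heavy = gross_margin(30.0, 15_000_000, 3.0)  # $45 in tokens -> negative
```

The asymmetry is the whole story: a light user barely dents the classic SaaS margin profile, while a heavy agentic user can turn a flat-rate subscription unprofitable on its own.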