Jensen Huang confirmed on March 18 that Nvidia has resumed manufacturing specialized artificial intelligence chips for the Chinese market. Speaking at the GTC 2026 conference in San Jose, the chief executive disclosed that the company had received a surge of orders, along with the necessary US export licenses, over the preceding fourteen days. This strategic pivot allows the Silicon Valley giant to re-enter a critical market that had been largely closed off by evolving trade regulations since 2022.
Meanwhile, the broader tech industry is struggling with a fundamental change in how artificial intelligence is valued and sold. Nvidia has begun framing its newest hardware not as simple servers, but as manufacturing equipment for digital intelligence. Huang argued that the output of these machines consists of tokens, which are the basic units of text and logic processed by large language models. Financial Times analysts indicate that the return to China could generate over $11 billion in incremental revenue within the current fiscal year.
In fact, the shift toward token-based economics is redefining corporate balance sheets across the globe. Tokens serve as the granular unit of measurement for AI activity; roughly four characters of English text equal one token. Unlike traditional software, which relies on flat monthly subscriptions, AI service providers now bill clients based on the volume of tokens consumed or generated. Business Insider reported that major players such as OpenAI and Anthropic have already standardized this usage-based pricing model.
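For readers who want to see the arithmetic, the sketch below applies the rough four-characters-per-token rule to a hypothetical monthly bill. The rates, volumes, and function names are invented for illustration and do not reflect any provider's actual price list.

```python
# A minimal sketch of usage-based AI billing, built on the rough
# four-characters-per-token heuristic cited above. All figures are
# hypothetical assumptions, not a real provider's pricing.

def estimate_tokens(char_count: int) -> int:
    """Approximate token count: ~4 characters of English text per token."""
    return char_count // 4

def monthly_bill(input_chars: int, output_chars: int,
                 input_rate: float, output_rate: float) -> float:
    """Bill in dollars, with rates quoted per million tokens."""
    cost = (estimate_tokens(input_chars) * input_rate
            + estimate_tokens(output_chars) * output_rate)
    return cost / 1_000_000

# Example: 2 billion characters sent in, 500 million returned, at
# hypothetical rates of $3 (input) and $15 (output) per million tokens.
print(f"${monthly_bill(2_000_000_000, 500_000_000, 3.0, 15.0):,.2f}")  # $3,375.00
```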
China Market Resumption and Export Controls
Specialized semiconductors designed to meet US Department of Commerce specifications are now flowing back into Chinese data centers. Huang noted that the recent batch of licenses covers chips that maintain high-speed interconnectivity while adhering to caps on total processing power. Beijing-based technology firms had previously struggled to secure the high-end H100 and H200 variants, leading to a temporary slowdown in regional model training. Revenue from China accounted for a major portion of the company's total data-center sales before the regulatory clampdown.
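The precise thresholds in the licenses have not been published, but the logic of the compliance screen they imply can be sketched in the abstract. Every number and name below is an invented placeholder, not an actual Commerce Department figure.

```python
# A hedged sketch of an export-compliance check of the kind described
# above: a chip variant passes if its total processing power stays under
# a cap, while interconnect bandwidth is left unrestricted. Thresholds
# are invented placeholders, not real regulatory limits.

from dataclasses import dataclass

@dataclass
class ChipSpec:
    name: str
    processing_tflops: float   # aggregate compute throughput
    interconnect_gbps: float   # chip-to-chip bandwidth

PROCESSING_CAP_TFLOPS = 1_000.0  # hypothetical ceiling for illustration

def export_eligible(chip: ChipSpec) -> bool:
    """Only total processing power is capped in this sketch."""
    return chip.processing_tflops <= PROCESSING_CAP_TFLOPS

compliant = ChipSpec("china-variant", 900.0, 900.0)
restricted = ChipSpec("flagship", 2_000.0, 900.0)
print(export_eligible(compliant), export_eligible(restricted))  # True False
```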
For instance, domestic Chinese competitors attempted to fill the void with homegrown accelerators, yet they faced persistent manufacturing yield issues. The renewed availability of licensed Nvidia hardware is expected to consolidate the California firm's market share once again. Huang stated that demand from Chinese cloud providers remained resilient throughout the period of restricted access. Shipping volumes for the new China-compliant chips are projected to ramp up sharply by the third quarter.
Yet, the resumption of trade comes with strict monitoring requirements to ensure the hardware is not repurposed for prohibited military applications. US officials have established a structure that requires periodic audits of large-scale clusters deployed within the region. Nvidia remains the primary beneficiary of this regulatory clarity, as it allows for long-term production planning. The company has adjusted its supply chain to prioritize these specific configurations for international export.
Token Measurement and Enterprise Spending
Large language models process information by breaking down sentences into numerical sequences known as tokens. Huang spent a considerable portion of his keynote explaining why these units have become the most important metric in modern computing. He suggested that future corporate budgets will include specific allocations for token consumption, much like current budgets account for cloud storage or electricity. Digital agents, which are autonomous applications capable of performing multi-step tasks, are the primary drivers of this increased token usage.
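For the curious, the toy function below shows the mechanics in miniature: text is matched against a small vocabulary and converted into the numerical sequence a model actually consumes. Production systems use learned subword vocabularies with tens of thousands of entries; this fixed dictionary is a simplified stand-in.

```python
# A toy illustration of how a language model turns text into numerical
# token sequences. Real tokenizers use learned subword vocabularies
# (e.g., byte-pair encoding); this tiny dictionary is only a stand-in.

TOY_VOCAB = {"the": 1, "token": 2, "econ": 3, "omy": 4, "is": 5, "here": 6}
UNKNOWN = 0  # id reserved for pieces the vocabulary does not cover

def toy_tokenize(text: str) -> list[int]:
    """Greedily match the longest known piece at each position."""
    text = text.lower().replace(" ", "")
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest match first
            if text[i:j] in TOY_VOCAB:
                ids.append(TOY_VOCAB[text[i:j]])
                i = j
                break
        else:                              # no piece matched this character
            ids.append(UNKNOWN)
            i += 1
    return ids

print(toy_tokenize("The token economy is here"))  # [1, 2, 3, 4, 5, 6]
```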
According to the Nvidia leadership, the cost of generating tokens will continue to fall as hardware becomes more energy-efficient. This pricing model deviates from traditional SaaS because it directly ties expenses to the productivity of the AI model. Companies that deploy thousands of autonomous agents may soon find that token costs represent a major portion of their operating expenses. Huang argued that the efficiency of his hardware provides the lowest possible cost per token in the industry.
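One way to see why efficiency governs price is to model the marginal cost of a token as mostly electricity. The back-of-the-envelope sketch below does exactly that; the throughput, power draw, and electricity price are assumptions, and real pricing would also amortize hardware and operating costs.

```python
# A rough model of the economics Huang describes: if the marginal cost
# of a token is dominated by energy, then efficiency gains (tokens per
# joule) cut the price directly. All inputs are hypothetical.

def energy_cost_per_million_tokens(tokens_per_second: float,
                                   power_draw_watts: float,
                                   dollars_per_kwh: float) -> float:
    """Electricity cost, in dollars, to generate one million tokens."""
    seconds = 1_000_000 / tokens_per_second
    kwh = power_draw_watts * seconds / 3_600_000  # joules -> kWh
    return kwh * dollars_per_kwh

# Hypothetical: a server sustaining 50,000 tokens/s at 10 kW, paying
# $0.08 per kWh. Doubling tokens/s at the same wattage halves the cost.
print(f"${energy_cost_per_million_tokens(50_000, 10_000, 0.08):.4f}")
```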
But the move toward a token-based economy extends beyond simple business transactions. Huang suggested that engineers might soon receive token budgets to enhance their personal productivity. In a move that surprised many analysts, he even proposed that access to these digital units could become a standard part of compensation packages. This proposal sparked debate among labor experts regarding the valuation of compute-as-a-benefit.
It is now one of the recruiting tools in Silicon Valley: How many tokens come along with my job?
Computing power is the new crude oil.
By contrast, some industry observers worry that a reliance on token-based billing will create unpredictable costs for startups. If a model becomes popular overnight, the resulting token bill could bankrupt a firm that lacks a scalable revenue model. Huang countered this by emphasizing that Nvidia hardware is designed to maximize output while minimizing power draw. He claimed that the company is currently the world leader in cost-effective token production, often referring to himself as the Token King.
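The volatility concern is easy to quantify. The projection below compounds a hypothetical day-one usage figure at different daily growth rates over a month; every input is an assumption chosen purely for illustration.

```python
# A sketch of the cost-volatility worry: if daily usage grows
# geometrically after a viral launch, the month's token bill compounds
# far past any flat-rate plan. Growth rates and price are assumptions.

def monthly_token_bill(day_one_tokens: float, daily_growth: float,
                       dollars_per_million: float, days: int = 30) -> float:
    total = sum(day_one_tokens * (1 + daily_growth) ** d for d in range(days))
    return total * dollars_per_million / 1_000_000

# 10 million tokens on day one, billed at a hypothetical $5 per million.
for growth in (0.0, 0.10, 0.25):  # flat, 10%/day, 25%/day usage growth
    print(f"{growth:>4.0%} daily growth -> "
          f"${monthly_token_bill(1e7, growth, 5.0):,.0f}")
# Roughly $1,500 when flat, but over $160,000 at 25% daily growth.
```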
Recruiting Incentives and Talent Wars
In turn, the proposal to include tokens in compensation packages reflects the desperate search for talent in the AI sector. Huang floated the idea of offering engineers tokens worth half their annual salary as a recruiting tool. The logic suggests that an engineer with a massive compute budget can build and test models much faster than one restricted by corporate quotas. The shift places Nvidia at the center of corporate accounting and human resource strategy.
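The arithmetic behind the pitch is straightforward. Assuming an illustrative salary and per-token price, the grant translates into a staggering raw token count:

```python
# Back-of-the-envelope arithmetic for the recruiting idea Huang floated:
# tokens worth half an engineer's salary. Both figures are illustrative
# assumptions, not reported numbers.

salary = 400_000                 # hypothetical annual salary, USD
grant_value = salary / 2         # tokens worth half the salary
price_per_million = 5.0          # hypothetical dollars per million tokens
tokens_granted = grant_value / price_per_million * 1_000_000
print(f"{tokens_granted:,.0f} tokens")  # 40,000,000,000 tokens
```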
Sam Altman, the chief executive of OpenAI, has extended this concept into the realm of social policy. He suggested that a future iteration of universal basic income could take the form of Universal Basic Compute. Under this plan, every citizen would receive a monthly allocation of tokens from a model like GPT-7. These tokens could be used for personal projects, donated to medical research, or sold on an open market for cash. The hypothetical scenario assumes that compute will eventually become as fundamental to survival as shelter or food.
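As a thought experiment only, the toy ledger below models the mechanics Altman described, a monthly grant that can be spent, donated, or transferred for sale. Nothing in it reflects a real OpenAI system; the grant size and the class design are invented.

```python
# A purely hypothetical ledger for Universal Basic Compute as sketched
# in the article: citizens receive a monthly token grant and may move
# tokens to other parties. Grant size and structure are invented.

class ComputeLedger:
    MONTHLY_GRANT = 1_000_000  # hypothetical tokens per citizen per month

    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def issue_monthly_grant(self, citizens: list[str]) -> None:
        for c in citizens:
            self.balances[c] = self.balances.get(c, 0) + self.MONTHLY_GRANT

    def transfer(self, sender: str, receiver: str, tokens: int) -> None:
        """Covers both donation and sale; any payment settles off-ledger."""
        if self.balances.get(sender, 0) < tokens:
            raise ValueError("insufficient token balance")
        self.balances[sender] -= tokens
        self.balances[receiver] = self.balances.get(receiver, 0) + tokens

ledger = ComputeLedger()
ledger.issue_monthly_grant(["alice", "bob"])
ledger.transfer("alice", "medical_research_pool", 250_000)  # a donation
print(ledger.balances)
```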
At its core, the idea of Universal Basic Compute relies on the assumption that AI productivity will generate enough wealth to support the entire population. Altman argued that raw intelligence will soon be treated like a public utility, comparable to water or electricity. He believes that owning a slice of the global productivity generated by AI is a more lasting model than traditional cash transfers. The debate over this concept is likely to intensify as the cost of model training continues to escalate.
To that end, the relationship between hardware providers and model developers is becoming increasingly symbiotic. Nvidia provides the manufacturing equipment, while companies like OpenAI turn that compute into the tokens that fuel the economy. Huang maintains that the world is currently at the beginning of a long-term transition toward this token-centric architecture. He envisions a future in which every piece of software is, at bottom, a token-generating engine.
Even so, the path toward a tokenized society is fraught with technical and ethical hurdles. The energy requirements for generating billions of tokens every second are staggering. Researchers are currently looking for ways to reduce the precision of calculations without sacrificing the quality of the AI output. Nvidia has introduced new liquid-cooled systems specifically designed to handle the thermal load of massive token factories.
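The precision-reduction work mentioned above, usually called quantization, can be illustrated in a few lines. The sketch below maps floating-point weights onto 8-bit integers with a shared scale factor, trading a small rounding error for roughly a fourfold cut in memory traffic; it is a simplified stand-in for production schemes.

```python
# A minimal sketch of symmetric 8-bit quantization: 32-bit float
# weights are mapped onto integers in [-127, 127] using one shared
# scale, cutting memory and energy per operation at a small accuracy
# cost. Production systems use more elaborate per-channel schemes.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map floats onto [-127, 127] with a single shared scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

w = [0.82, -1.54, 0.003, 2.71]
q, s = quantize_int8(w)
print(q)                 # [38, -72, 0, 127]
print(dequantize(q, s))  # close to the originals, within one scale step
```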
Separately, the environmental impact of this massive compute expansion remains a point of contention for activists. While Huang focuses on efficiency, the aggregate energy consumption of the AI sector is rising. The industry must balance the promise of universal compute with the physical limits of power grids and carbon targets, even as boosters insist that tokens represent the first quantifiable unit of intellectual labor.
The Elite Tribune Perspective
Call it the commodification of thought. Jensen Huang is not just selling chips; he is attempting to establish a new global currency where Nvidia is the central bank. By positioning tokens as a line item in corporate budgets and a component of engineer salaries, Huang is making Nvidia hardware the essential infrastructure of the 21st century. The restart of China operations proves that even the most stringent national security concerns eventually bend to the gravity of silicon-based profit. While the US government talks about containment, the reality is that the AI economy is too integrated to be sliced into geopolitical silos.
Sam Altman’s vision of Universal Basic Compute is even more audacious and potentially more deceptive. It frames a future of extreme wealth concentration as a benevolent giveaway of digital crumbs. Giving citizens a slice of GPT-7 output does not empower them; it makes them dependent on the proprietary algorithms of a handful of corporations. If tokens become the new UBI, the tech giants will have successfully replaced the social contract with an end-user license agreement. It is not the democratization of intelligence. It is the final stage of corporate capture where every human interaction is measured, billed, and taxed by the Token King.