The post Nvidia Drops Nemotron 3 Super Amid $26 Billion Open-Model AI Bet—America’s Answer to Qwen? appeared on BitcoinEthereumNews.com.

Nvidia Drops Nemotron 3 Super Amid $26 Billion Open-Model AI Bet—America’s Answer to Qwen?


In brief

  • Nvidia launched Nemotron 3 Super, a 120B open-weight AI model optimized for autonomous agents and ultra-long context tasks.
  • The hybrid Mamba-Transformer MoE architecture delivers faster reasoning and over 5× throughput while running at 4-bit precision.
  • Nvidia’s $26 billion investment in open-source AI aims to counter China’s rise in the field.

Nvidia just shipped Nemotron 3 Super, a 120-billion-parameter open-weight model built to do one thing well: run autonomous AI agents without bleeding your compute budget dry.

That’s not a small problem. Multi-agent systems generate far more tokens than a normal chat—every tool call, reasoning step, and slice of context gets re-sent from scratch. As a result, costs explode, models tend to drift, and the agents slowly forget what they were supposed to be doing in the first place, or at least lose accuracy along the way.
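
The arithmetic behind that explosion is easy to sketch. Assuming each step of an agent run re-sends the full conversation so far (the step counts and token sizes below are illustrative, not measurements of any real system), total tokens grow quadratically with the number of steps:

```python
def naive_agent_tokens(steps: int, tokens_per_step: int) -> int:
    """Total tokens processed when every step re-sends all prior context."""
    total = 0
    context = 0
    for _ in range(steps):
        context += tokens_per_step   # new reasoning/tool output this step
        total += context             # the whole context is sent again
    return total

# 50 steps of 2,000 tokens each: the final step alone re-sends 100,000
# tokens, and the run processes 2,550,000 tokens in total.
print(naive_agent_tokens(50, 2000))  # → 2550000
```

Doubling the number of steps roughly quadruples the bill, which is why long agent runs get expensive so quickly.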

Nemotron 3 Super is Nvidia’s answer to all of that. The model runs 12 billion active parameters out of 120 billion total, using a mixture-of-experts (MoE) design that keeps inference cheap while retaining the reasoning depth complex workflows need. It packs a 1-million-token context window, so an agent can hold an entire codebase, nearly 750,000 words, in memory at once.
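
The economics of that design come down to active parameters. A decoder’s per-token compute scales with the weights it actually uses, so a back-of-the-envelope comparison (the 2× FLOPs-per-parameter rule of thumb is a common approximation, not a Nemotron benchmark) looks like this:

```python
# Rough per-token compute comparison: a decoder's forward pass costs
# roughly 2 FLOPs per active parameter per token.
def flops_per_token(active_params: float) -> float:
    return 2 * active_params

dense_120b = flops_per_token(120e9)  # hypothetical dense 120B model
moe_12b = flops_per_token(12e9)      # Nemotron 3 Super's active subset

print(f"MoE uses {moe_12b / dense_120b:.0%} of the dense compute per token")
```

In other words, the MoE layout buys 120B-parameter capacity at roughly one-tenth of the per-token cost of an equally sized dense model.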

To build its model, Nvidia combined three components that rarely appear together in the same architecture: Mamba-2 state-space layers—a faster, memory-efficient alternative to attention for handling long token streams—along with Transformer attention layers for precise recall, and a new “Latent MoE” design that compresses token embeddings before routing them to experts. That allows the model to activate four times as many specialists at the same compute cost.
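
Nvidia hasn’t published the Latent MoE internals beyond that summary, but the routing idea can be sketched. Everything below (the toy dimensions, the single-matrix “experts,” the random weights) is illustrative, not Nemotron’s actual design; the point is only that routing and running experts in a compressed latent space makes each expert cheaper:

```python
import math
import random

random.seed(0)

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.02) for _ in range(cols)] for _ in range(rows)]

d_model, d_latent, n_experts, top_k = 16, 4, 8, 2  # toy sizes

W_down = rand_matrix(d_latent, d_model)      # compress the token embedding
W_router = rand_matrix(n_experts, d_latent)  # route in the latent space
experts = [rand_matrix(d_latent, d_latent) for _ in range(n_experts)]
W_up = rand_matrix(d_model, d_latent)        # project back to model width

def latent_moe(x):
    z = matvec(W_down, x)                    # latent is 4x smaller than x
    gates = softmax(matvec(W_router, z))
    chosen = sorted(range(n_experts), key=lambda i: -gates[i])[:top_k]
    out = [0.0] * d_latent
    for i in chosen:                         # experts run on the cheap latent
        y = matvec(experts[i], z)
        out = [o + gates[i] * v for o, v in zip(out, y)]
    return matvec(W_up, out)                 # back to model width

y = latent_moe([random.gauss(0, 1) for _ in range(d_model)])
print(len(y))  # → 16
```

Because each expert multiplies a 4-dimensional latent instead of the full 16-dimensional embedding, the same FLOP budget pays for several times as many active experts, which is the trade the article describes.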

The model was also pretrained natively in NVFP4, Nvidia’s 4-bit floating-point format. In practice, that means the system learned to operate accurately within 4-bit arithmetic from the very first gradient update, rather than being trained at high precision and compressed afterward, which often causes models to lose accuracy.

For context, a model’s precision is measured in bits. Full precision, known as FP32, is the gold standard—but it is also extremely expensive to run at scale. Developers often reduce precision to save compute while trying to preserve useful performance.

Think of it like shrinking a 4K image down to 1080p: The picture still looks the same at a glance, just with less detail. Normally, dropping from 32-bit precision all the way to 4-bit would cripple a model’s reasoning ability. Nemotron avoids that problem by learning to operate at low precision from the start, instead of being squeezed into it later.
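
NVFP4 itself is a 4-bit floating-point format, but the core scale-and-round idea is easiest to see with plain 4-bit integer quantization (a simpler stand-in, not Nvidia’s format): map each weight to one of 16 integer levels plus a shared scale factor, and the worst-case error per weight is half a quantization step.

```python
def quantize_int4(values):
    """Symmetric 4-bit quantization: map floats to integers in [-8, 7]."""
    scale = max(abs(v) for v in values) / 7 or 1.0
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.91, -0.42, 0.07, -0.88, 0.33]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)

# Each 4-bit code covers a range of `scale`, so per-weight error
# is bounded by scale / 2 -- small, but it compounds across layers.
assert all(abs(a - b) <= scale / 2 for a, b in zip(weights, restored))
```

Training natively in a 4-bit format means the gradients see this rounding from the start, so the model learns weights that sit comfortably on the coarse grid instead of being snapped onto it after the fact.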

Compared to its own predecessor, Nemotron 3 Super delivers more than five times the throughput. Against external rivals, it’s 2.2x faster than OpenAI’s GPT-OSS 120B on inference throughput, and 7.5x faster than Alibaba’s Qwen3.5-122B.

We ran our own quick test. The reasoning held up well, including on prompts that were deliberately vague, badly worded, or based on wrong information. The model caught small errors in context without being asked to, handled math and logic problems cleanly, and didn’t fall apart when the question itself was slightly off.

The full training pipeline is public: weights on Hugging Face, a pretraining corpus of 10 trillion curated tokens (25 trillion tokens seen in total during training), 40 million post-training samples, and reinforcement-learning recipes across 21 environment configurations. Perplexity, Palantir, Cadence, and Siemens are already integrating the model into their workflows.

The $26 billion bet

The model may be one piece of a larger strategy. A 2025 financial filing shows Nvidia plans to spend $26 billion over the next five years building open-weight AI models. Executives confirmed it, too.

Bryan Catanzaro, VP of applied deep learning research, told Wired the company recently finished pretraining a 550-billion-parameter model. Nvidia released its first Nemotron model back in November 2023, but that filing makes clear this is no longer a side project.

The investment is strategic: Nvidia’s chips are still the default infrastructure for training and running frontier models, and models tuned to its hardware give customers a built-in reason to stay on Nvidia despite competitors’ efforts to lure them elsewhere. But there’s a more urgent pressure behind the move: America is losing the open-source AI race, and losing it fast.

Chinese open models went from barely 1.2% of global open-model usage in late 2024 to roughly 30% by the end of 2025, according to research by OpenRouter and Andreessen Horowitz. Alibaba’s Qwen overtook Meta’s Llama as the most-used self-hosted open-source model, according to Runpod. American companies including Airbnb adopted it for customer service. Startups worldwide are building on top of it. Beyond market share, that kind of adoption creates infrastructure dependencies that are hard to reverse.

While U.S. giants like OpenAI, Anthropic, and Google keep their best models locked behind APIs, Chinese labs from DeepSeek to Alibaba have been flooding the open ecosystem. Meta was the one major American player competing in open source with Llama, but Zuckerberg recently signaled the company might not make future models fully open.

The gap between “best proprietary model” and “best open model” used to be massive—and in America’s favor. That gap is now very small, and the open side of the ledger is increasingly Chinese.

There’s also a hardware threat underneath all of this. A new DeepSeek model is widely expected to drop soon, and it’s rumored to have been trained entirely on chips made by Huawei, a sanctioned Chinese company. If that’s confirmed, it would give developers around the world, particularly in China, a concrete reason to start testing Huawei’s hardware. China’s Zhipu AI is already doing that.

That’s the scenario Nvidia most needs to prevent: Chinese open models and Chinese chips building an ecosystem that doesn’t need Nvidia at all.


Source: https://decrypt.co/360929/nvidia-drops-nemotron-3-super-26-billion-open-model-ai-bet
