
Nvidia stock continues slide: is the AI darling’s moat drying up as competition intensifies?


Nvidia stock slipped on Wednesday as investors reacted to fresh competitive pressure from Amazon’s new Trainium 3 artificial intelligence chip, the latest sign that major cloud providers are accelerating efforts to develop their own AI silicon.

At the time of publishing, Nvidia stock was down 0.6%, trading at around $180.34.

Amazon unveiled Trainium 3 on Tuesday, pitching it as a cost-efficient alternative for training and operating AI models.

The company said the new chip can reduce AI training and inference costs by up to 50% compared with systems using equivalent GPUs — the category dominated by Nvidia.

Amazon also said it plans to use Nvidia’s NVLink Fusion technology in its future AI computing infrastructure, integrating it with the forthcoming Trainium4 chip.

“With Nvidia NVLink Fusion coming to AWS Trainium4, we’re unifying our scale-up architecture with AWS’s custom silicon to build a new generation of accelerated platforms,” Nvidia CEO Jensen Huang said.

“Together, NVIDIA and AWS are creating the compute fabric for the AI industrial revolution.”

Nvidia stresses long-term demand despite competitive moves

Nvidia is working to reassure investors that it can maintain dominant market share even as Amazon, Google and other hyperscalers expand use of in-house silicon.

The company’s neutral position in the market — as a supplier rather than a direct cloud-services competitor — remains a strategic advantage, as some technology giants may prefer not to depend heavily on rival hardware.

Nvidia CFO Colette Kress said Tuesday that AI models trained on its new Blackwell chips will begin emerging in about six months.

She noted the company has $500 billion in bookings for Blackwell and Rubin chips through 2026, excluding an upcoming deal with OpenAI that has yet to be finalised.

Separately, European AI start-up Mistral said it trained its next-generation models on Nvidia hardware.

The companies highlighted that Mistral’s Large 3 model achieved a tenfold performance improvement on Nvidia’s GB200 NVL72 server racks compared with the previous H200 generation.

The competitive landscape is intensifying

While Oracle Cloud Infrastructure’s earlier adoption of more than 50,000 AMD chips signalled growing interest in non-Nvidia solutions, the competitive pressure now is coming most visibly from Amazon Web Services.

With Trainium 3, AWS has taken a significant step toward deepening its in-house AI silicon strategy.

The chip is said to offer four times the performance of its predecessor while cutting energy consumption by 40%, underscoring AWS’s ambition to optimise its data centres around its own hardware.

Google, meanwhile, is stepping up outreach for its Tensor Processing Units, pitching TPUs to major customers such as Meta.

The push suggests Google is seeking to expand TPU adoption among hyperscalers that have traditionally relied on Nvidia GPUs.

The combined efforts of Amazon, Google and AMD signal a broadening competitive landscape in the AI hardware sector.

While Nvidia remains the clear leader, its largest customers are now among its most visible challengers — each moving to reduce reliance on external suppliers and expand control over their AI infrastructure.


