
Nvidia Adds DeepSeek-R1 to AI Services Despite Record $589B Stock Loss

A futuristic AI data center with glowing server racks processing DeepSeek-R1’s reasoning AI models. Large digital screens display Nvidia’s NIM microservice integration and complex AI computations. The setting conveys high-tech innovation, AI computing power, and the intersection of financial impact and advanced technology.

Image Source: ChatGPT-4o


In a surprising move, Nvidia has integrated DeepSeek-R1 into its AI services, even though the model's release triggered a historic $589 billion drop in Nvidia's market capitalization, the largest single-day loss in stock market history.

DeepSeek-R1, an open-source model specializing in reasoning, logical inference, and advanced problem-solving, is now accessible through Nvidia’s NIM microservice on build.nvidia.com. This integration of the 671-billion-parameter model allows developers to experiment with its capabilities and deploy AI-powered agents with enhanced reasoning efficiency.

DeepSeek-R1: A New Era in AI Reasoning

Unlike traditional AI models that generate direct responses, reasoning models like DeepSeek-R1 take a more sophisticated approach. They perform multiple inference passes, using techniques such as:

  • Chain-of-thought processing (breaking down complex problems step by step).

  • Consensus reasoning (evaluating multiple perspectives to refine answers).

  • Search-based methods (retrieving and analyzing relevant information dynamically).

This approach, known as test-time scaling, allows AI to iteratively “think” through problems, leading to more accurate and complex outputs. However, this process also requires significant computational power, making accelerated computing critical for real-time agentic AI inference.
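One of the techniques above, consensus reasoning, can be sketched as a simple majority vote over several sampled answers (sometimes called self-consistency). This is an illustrative sketch only: `toy_model` is a hypothetical stand-in for a real model call, not a DeepSeek-R1 or Nvidia API.

```python
from collections import Counter
from itertools import cycle

def consensus_answer(generate, prompt, n_samples=5):
    """Sample several candidate answers and return the majority vote.

    `generate` is any callable that maps a prompt to a final answer
    string; in practice it would wrap a model inference call.
    """
    candidates = [generate(prompt) for _ in range(n_samples)]
    answer, count = Counter(candidates).most_common(1)[0]
    return answer, count / n_samples  # winning answer plus agreement ratio

# Deterministic toy "model" that is usually right but occasionally noisy:
_replies = cycle(["42", "42", "41", "42", "42"])
def toy_model(prompt):
    return next(_replies)

answer, ratio = consensus_answer(toy_model, "What is 6 * 7?")
print(answer, ratio)  # → 42 0.8
```

Each extra sample is another inference pass, which is why test-time scaling trades additional compute for more reliable answers.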

Why Nvidia Is Betting on DeepSeek-R1

DeepSeek-R1 is a large mixture-of-experts (MoE) model with 671 billion parameters, roughly ten times more than many other open-source large language models, and supports an input context of 128,000 tokens. Each layer has 256 experts, and each token is routed to eight experts in parallel for evaluation, demanding high-performance GPUs and low-latency interconnects.
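The per-token routing step can be illustrated with a minimal top-k gate: a softmax over the layer's 256 router logits, keeping the eight highest-probability experts. This is a toy sketch of generic top-k MoE routing, not DeepSeek-R1's actual router (which adds details such as shared experts and load balancing).

```python
import math
import random

def route_token(router_logits, top_k=8):
    """Pick the top-k experts for one token from its router logits."""
    # Softmax over the expert logits (subtract max for numerical stability).
    m = max(router_logits)
    exps = [math.exp(x - m) for x in router_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Indices of the top_k highest-probability experts, best first.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    return top, [probs[i] for i in top]

random.seed(1)
logits = [random.gauss(0, 1) for _ in range(256)]  # one token, 256 experts
experts, weights = route_token(logits)
print(len(experts))  # → 8
```

Only the eight selected experts run for that token, which is how a 671B-parameter model keeps per-token compute manageable, while the sheer number of experts drives the memory and interconnect demands the article describes.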

With Nvidia’s NIM microservice and Hopper architecture, a single server with eight H200 GPUs can run DeepSeek-R1 at up to 3,872 tokens per second. Future Nvidia Blackwell architecture advancements promise even greater efficiency, with fifth-generation Tensor Cores delivering up to 20 petaflops of FP4 compute power.

How Developers Can Use DeepSeek-R1

Nvidia is making DeepSeek-R1 available as a secure, enterprise-ready NIM microservice, offering:

  • Industry-standard API support for easy deployment.

  • High-efficiency performance for AI-driven applications.

  • Customizable options via Nvidia AI Foundry and NeMo software.

  • Secure and private deployment on preferred infrastructure.

Developers can test and experiment with DeepSeek-R1 on build.nvidia.com, with a full API release expected soon.
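Since NIM microservices expose an industry-standard (OpenAI-compatible) chat-completions API, a call might look like the sketch below. The endpoint URL, model identifier, and `NVIDIA_API_KEY` environment variable are assumptions; check build.nvidia.com for the current values before relying on them.

```python
import json
import os
import urllib.request

# Assumed endpoint and model name for Nvidia's hosted NIM preview:
URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL = "deepseek-ai/deepseek-r1"

def build_request(prompt, max_tokens=1024, temperature=0.6):
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_request("Prove that the sum of two even numbers is even.")
api_key = os.environ.get("NVIDIA_API_KEY")  # hypothetical variable name
if api_key:
    # Only contact the service when a key is configured.
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    print(json.dumps(payload, indent=2))  # dry run: show the request body
```

Because the API shape follows the industry standard, existing OpenAI-compatible client libraries can typically be pointed at the NIM endpoint with only a base-URL change.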

What This Means

This development is nothing short of astonishing. DeepSeek-R1’s announcement wiped $589 billion off Nvidia’s market value in a single day—the largest such loss in market history. And yet, Nvidia is now integrating R1 into its own AI services.

Who does that? A company that sees the bigger picture. Despite the market’s reaction, Nvidia is doubling down on reasoning AI, recognizing its long-term potential. This move highlights how AI innovation is outpacing even financial expectations and suggests that Nvidia is betting big on the future of agentic AI systems.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.