Last week, semiconductor stocks like Nvidia (NASDAQ: NVDA), Advanced Micro Devices (NASDAQ: AMD), and Micron Technology (NASDAQ: MU) plunged on news that a Chinese start-up called DeepSeek had figured out how to train artificial intelligence (AI) models for a fraction of the cost of its American peers.
Investors were concerned that DeepSeek’s innovative approach would trigger a collapse in demand for graphics processing units (GPUs) and other data center components, which are key to developing AI. However, those concerns might be overblown.
Meta Platforms (NASDAQ: META) is a huge buyer of AI chips from Nvidia and AMD. On Jan. 29, CEO Mark Zuckerberg made a series of comments that should be music to the ears of investors who own AI hardware stocks.
Successful Chinese hedge fund High-Flyer has been using AI to build trading algorithms for years. In 2023, it established DeepSeek as a separate entity to capitalize on the soaring valuations of AI research companies.
Last week’s stock market panic was triggered by DeepSeek’s V3 large language model (LLM), which matches the performance of the latest GPT-4o models from America’s premier AI start-up, OpenAI, across several benchmarks. That alone wouldn’t be alarming, except that DeepSeek claims to have spent just $5.6 million training V3, whereas OpenAI has burned through over $20 billion since 2015 to reach its current stage.
To make matters more concerning, DeepSeek doesn’t have access to Nvidia’s latest data center GPUs, because the U.S. government banned them from being sold to Chinese firms. That means the start-up had to rely on chips like the H800, a deliberately throttled variant of the H100 built to comply with export rules, indicating it’s possible to train leading AI models without the best hardware.
To offset the shortfall in computational performance, DeepSeek innovated on the software side by developing more efficient algorithms and data input methods. It also adopted a technique called distillation, which uses the outputs of a large, capable model to train a smaller one. This dramatically speeds up the training process and requires far less computing capacity.
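For readers curious how distillation works mechanically, here is a minimal, hypothetical sketch (not DeepSeek’s actual code): a small “student” model is trained to match the softened output distribution of a large “teacher” model, typically by minimizing the KL divergence between the two. The function names and the temperature value below are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Convert raw model scores into a probability distribution.
    # A higher temperature "softens" the distribution, exposing more
    # of the teacher's knowledge about relative likelihoods.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the teacher's soft targets and the
    # student's predictions; the student is trained to minimize this.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))
```

The loss is zero when the student exactly reproduces the teacher’s distribution and grows as the two diverge, so gradient descent on this quantity pulls the small model toward the large one’s behavior without the small model ever seeing the original training data.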
Investors are concerned that if other AI firms adopt DeepSeek’s approach, they won’t need to buy as many GPUs from Nvidia or AMD. That would also squash demand for Micron’s industry-leading data center memory solutions.
Nvidia’s GPUs are the most popular in the world for developing AI models. The company’s fiscal year 2025 just ended on Jan. 31, and according to management’s guidance, its revenue likely more than doubled to a record $128.6 billion (the official results will be released on Feb. 26). If recent quarters are anything to go by, around 88% of that revenue will have come from its data center segment thanks to GPU sales.
That incredible growth is the reason Nvidia has added $2.5 trillion to its market capitalization over the last two years. If chip demand were to slow down, a lot of that value would likely evaporate.
AMD has become a worthy competitor to Nvidia in the data center. The company plans to launch its new MI350 GPU later this year, which is expected to rival Nvidia’s latest Blackwell chips that have become the gold standard for processing AI workloads.
But AMD is also a leading supplier of AI chips for personal computers, which could become a major growth segment in the future. As LLMs become cheaper and more efficient, it will eventually be possible to run them on smaller chips inside computers and devices, reducing reliance on external data centers.
Finally, Micron is often overlooked as an AI chip company, but it plays a critical role in the industry. Its HBM3E high-bandwidth memory for the data center is best in class for capacity and energy efficiency, which is why Nvidia uses it inside its latest Blackwell GPUs. Memory holds data in a ready state so the GPU can receive it instantly when needed, and since AI workloads are so data intensive, it’s an important piece of the hardware puzzle.
Meta Platforms spent a whopping $39.2 billion on chips and data center infrastructure during 2024, and it plans to spend as much as $65 billion this year. Those investments are helping the company further advance its Llama LLMs, which are the most popular open-source models in the world, with 600 million downloads. Llama 4 is due to launch this year, and CEO Mark Zuckerberg thinks it could be the most advanced in the industry, outperforming even the best closed-source models.
On Jan. 29, Meta held a conference call with analysts about its fourth quarter of 2024. When Zuckerberg was quizzed about the potential impact of DeepSeek, he said it’s probably too early to determine what it means for capital investments into chips and data centers. However, he said that even if it lowers capacity requirements for AI training workloads, that doesn’t mean companies will need fewer chips.
Instead, he thinks capacity could shift away from training and toward inference, the process by which AI models process user inputs and form responses. Many developers are moving away from training models on ever-larger amounts of data and focusing on “reasoning” capabilities instead. This is referred to as test-time scaling: the model takes extra time to “think” before rendering an output, which results in higher-quality responses.
Reasoning requires more inference compute, so Zuckerberg thinks companies will still need the best data center infrastructure to maintain an advantage over the competition. Plus, most AI software products haven’t achieved mainstream adoption yet, and Zuckerberg acknowledges that serving many users will also require additional data center capacity over time.
So, while it’s hard to put exact numbers on how DeepSeek’s innovations will reshape chip demand, Zuckerberg’s comments suggest there isn’t a reason for Nvidia, AMD, and Micron stock investors to panic. In fact, there is even a bullish case for those stocks over the long term.
Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Meta Platforms, and Nvidia. The Motley Fool has a disclosure policy.
Mark Zuckerberg Just Delivered Incredible News for Nvidia, AMD, and Micron Stock Investors was originally published by The Motley Fool