DeepSeek Trains AI with 11X Less GPU Compute
AI hardware • Tom's Hardware
Chinese AI company's AI model breakthrough highlights limits of US sanctions
86% Informative
Chinese AI startup DeepSeek says it has trained an AI model comparable to the leading models from heavyweights like OpenAI, Meta, and Anthropic.
The company used a cluster of 2,048 Nvidia H800 GPUs, with NVLink interconnects handling GPU-to-GPU communication within each node and InfiniBand handling communication between nodes.
The claims haven't been fully validated yet, but the startling announcement suggests that while US sanctions have impacted the availability of AI hardware in China, clever scientists are working to extract the utmost performance from limited amounts of hardware.
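For context on that interconnect setup, the following is a minimal, hypothetical sketch of how a multi-node GPU cluster of this kind is typically driven from PyTorch; it is not DeepSeek's actual training code, and the environment variables are assumed to be supplied by a launcher such as torchrun. The NCCL backend transparently uses NVLink for intra-node GPU-to-GPU traffic and InfiniBand for inter-node traffic when available.

import os
import torch
import torch.distributed as dist

def init_cluster():
    # Illustrative sketch, not DeepSeek's code. NCCL routes intra-node
    # GPU-to-GPU traffic over NVLink and inter-node traffic over
    # InfiniBand (RDMA) when the hardware provides them.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)
    return dist.get_rank(), dist.get_world_size(), local_rank

if __name__ == "__main__":
    rank, world_size, local_rank = init_cluster()
    # On a 2,048-GPU job, world_size would report 2048 here.
    x = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(x)  # sums the tensor across every GPU in the cluster
    if rank == 0:
        print(f"all_reduce result: {x.item()} (expected {world_size})")
    dist.destroy_process_group()

Launched with, for example, torchrun --nnodes=256 --nproc-per-node=8, this would span 2,048 processes, one per GPU; the node and GPU counts here are assumptions chosen to match the cluster size reported in the article.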
The DeepSeek team writes: "Secondly, although our deployment strategy for DeepSeek-V3 has achieved an end-to-end generation speed of more than two times that of DeepSeek-V2, there still remains potential for further enhancement. Fortunately, these limitations are expected to be naturally addressed with the development of more advanced hardware." Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
VR Score: 89
Informative language: 92
Neutral language: 39
Article tone: formal
Language: English
Language complexity: 73
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 2
Source diversity: 2
Affiliate links: no affiliate links