
“Predatory pre-announcement” – The brain behind the largest CPU ever calls out Nvidia for spreading ‘FUD’ amid surprise updated GPU roadmap announcement
Nvidia is using deceptive practices and abusing its market dominance to quash the competition, according to Cerebras Systems CEO Andrew Feldman, after the firm unexpectedly announced its latest GPU product roadmap in October 2023.

Nvidia outlined new graphics cards set for annual release between 2024 and 2026, adding to the industry-leading A100 and H100 GPUs currently in such high demand, with organizations across the sector snapping them up for generative AI workloads.

But speaking to HPCWire, Feldman labelled this news a “predatory pre-announcement”, highlighting that the firm has no obligation to follow through on releasing any of the components it has teased. By doing this, he speculated, Nvidia has only confused the market, especially given that the company was roughly a year late with the H100 GPU. And he doubts Nvidia can see this strategy through, or that it necessarily wants to.

Nvidia is just ‘throwing sand up in the air’

Nvidia teased yearly leaps on a single architecture in its announcement, with the Hopper-Next GPU following the Hopper GPU in 2024, and the Ada Lovelace-Next GPU, a successor to the Ada Lovelace graphics card, set for release in 2025.

“Companies have been making chips for a long time, and nobody has ever been able to succeed on a one-year cadence because the fabs do not change at a one-year pace,” Feldman countered to HPCWire.

“In many ways, it has been a terrible block of time for Nvidia. Stability AI said they were going to go on Intel. Amazon said Anthropic was going to run on them. We announced a monstrous deal that would produce enough compute so it would be clear that you could build… large clusters with us.

“[Nvidia’s] response, not surprising to me, in the strategy realm, is not a better product. It’s… throw sand up in the air and move your hands a lot. And you know, Nvidia was a year late with the H100.”

Feldman designed the world’s largest AI chip, the Cerebras Wafer-Scale Engine 2 – which measures 46,226 square millimeters and contains 2.6 trillion transistors across 850,000 cores.

He told the New Yorker that massive chips are better than smaller ones because cores communicate faster when they’re on the same chip rather than being scattered across a server room.


Team TeachToday
