
Elon Musk prepares to double xAI supercomputer to 200,000 Nvidia GPUs

PCMag editors select and review products independently. If you purchase through affiliate links, we may earn commissions, which help support our testing.

In the race to create next-generation AI, 100,000 enterprise GPUs aren’t enough for Elon Musk. His startup xAI is already preparing to expand a supercomputer in Memphis, Tennessee, to 200,000 GPUs.

Nvidia announced the news on Monday, revealing that xAI’s “Colossus” supercomputer is doubling in size. Musk also tweeted that the supercomputer is set to incorporate 200,000 Nvidia H100 and H200 GPUs in a 785,000-square-foot building.

Musk’s supercomputer in Memphis is notable for how quickly his startup assembled GPUs into a functional AI processing cluster. “From start to finish, this was done in 122 days,” Musk said. Supercomputers typically take years to build.

His company also likely paid at least $3 billion to assemble the supercomputer, since it’s currently made up of 100,000 Nvidia H100 GPUs, which typically cost around $30,000 each. Musk now wants to upgrade the facility with H200 GPUs, which have more memory but cost closer to $40,000 per unit.
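The back-of-the-envelope math behind those figures can be sketched as follows. The unit prices are the approximate street prices cited in the article, not confirmed xAI purchase prices, and the totals ignore networking, power, and facility costs:

```python
# Rough GPU hardware cost estimate from the article's figures.
# Unit prices are approximate market prices, not confirmed purchase prices.
def cluster_cost(num_gpus: int, unit_price_usd: int) -> int:
    """Total GPU spend for a cluster (hardware only)."""
    return num_gpus * unit_price_usd

h100_spend = cluster_cost(100_000, 30_000)  # current Colossus build
h200_spend = cluster_cost(100_000, 40_000)  # planned H200 expansion

print(f"H100 phase: ${h100_spend / 1e9:.0f}B")  # ~$3B
print(f"H200 phase: ${h200_spend / 1e9:.0f}B")  # ~$4B
```

Even before the planned Blackwell upgrade, that puts the GPU bill alone in the multi-billion-dollar range.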

As a result, Musk will have to fork out billions more, on top of paying the supercomputer’s electricity costs. His ultimate goal is to expand the Colossus supercomputer to 300,000 Nvidia Blackwell B200 GPUs by next summer.

Musk is betting big on Nvidia’s GPU technology to help improve xAI’s Grok chatbot and other AI technologies. On Monday, ServeTheHome published a video from inside the Colossus supercomputing facility, which contains numerous racks of Nvidia GPU servers.

Musk isn’t the only one buying GPUs to train next-generation AI. Meta, OpenAI, and Microsoft have also acquired technologies from Nvidia, including Blackwell GPUs.
