Tesla’s own supercomputer has received an additional 1,600 graphics processors, up 28% from a year ago.
Tesla engineering manager Tim Zaman claims that this puts the machine in 7th place worldwide by GPU count.
The machine now has a total of 7,360 Nvidia A100 GPUs, which are built specifically for data center servers but use the same architecture as the top-of-the-line GeForce RTX 30 series cards.
Tesla supercomputer upgrade
It’s likely that Tesla needs all the computing power it can get right now. The company is currently training the neural networks used to process the vast amounts of video data collected by its vehicles.
The latest update could be just the beginning of Tesla’s high-performance computing (HPC) ambitions.
In June 2020, Elon Musk said that “Tesla is developing a neural network training computer called Dojo to process really massive amounts of video data”, explaining that the planned machine would achieve performance of more than 1 exaFLOP, which is one quintillion floating-point operations per second, or 1,000 petaFLOPs.
Performance of more than 1 exaFLOP would place the machine among the most powerful supercomputers in the world, as only a few current systems have officially broken the exascale barrier, including the Frontier supercomputer at Oak Ridge National Laboratory in Tennessee, USA.
You might even be able to get a job building the new machine: Musk asked his followers on Twitter to “consider joining our AI or computers/chips team if that sounds interesting.”
However, Dojo will not depend on Nvidia hardware. The planned machine should be powered by Tesla’s new D1 Dojo chip, which the automaker said could deliver up to 362 TFLOPs at its AI Day event.
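Putting the two figures together gives a rough sense of scale. A minimal back-of-envelope sketch (assuming the stated 362 TFLOPs peak per D1 chip and ignoring interconnect overhead and real-world efficiency losses):

```python
# Back-of-envelope: how many D1 chips for 1 exaFLOP of peak compute?
# Assumes Tesla's stated 362 TFLOPs per chip; ignores efficiency losses.
EXAFLOP = 1e18        # 1 exaFLOP = 10**18 floating-point ops per second
D1_TFLOPS = 362e12    # 362 TFLOPs per D1 chip

chips_needed = EXAFLOP / D1_TFLOPS
print(f"~{chips_needed:.0f} D1 chips for 1 exaFLOP of peak compute")
```

That works out to roughly 2,700–2,800 chips on paper, before accounting for the overheads a real system would add.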
- Want to do your own AI research in the cloud? Check out our guide to the best cloud hosting
Via Tom’s Hardware