Tesla has purchased 10,000 Nvidia H100 accelerators and built a supercomputer around them. How much the purchase cost the company is unknown.
The cluster will be used to train Autopilot, the driver-assistance system in Tesla vehicles.
The Tesla AI 10k H100 cluster is set to go live on Monday. Thanks to training on real-world video, Tesla may have the largest training datasets in the world, with cache capacity exceeding 200 PB, orders of magnitude more than is typical for LLMs.
The peak performance of the supercomputer should be about 340 PFLOPS in double precision (FP64), and for INT8 calculations it approaches 40 exa-ops. Judging by double-precision performance alone, the Tesla system would rank among the most powerful supercomputers in the world, roughly fourth overall.
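As a rough sanity check of those figures, the cluster totals can be derived from per-GPU throughput. This is a sketch, not an official calculation: the per-GPU numbers below are assumptions taken from Nvidia's published H100 SXM specifications (~34 TFLOPS FP64 vector, ~3,958 TOPS INT8 with sparsity).

```python
# Back-of-envelope check of the quoted cluster performance figures.
# Per-GPU throughputs are ASSUMED from Nvidia's H100 SXM spec sheet,
# not stated by Tesla.
NUM_GPUS = 10_000
FP64_TFLOPS_PER_GPU = 34     # assumed H100 SXM FP64 vector throughput
INT8_TOPS_PER_GPU = 3_958    # assumed H100 SXM INT8 throughput (with sparsity)

fp64_pflops = NUM_GPUS * FP64_TFLOPS_PER_GPU / 1_000      # TFLOPS -> PFLOPS
int8_exaops = NUM_GPUS * INT8_TOPS_PER_GPU / 1_000_000    # TOPS -> exa-ops

print(f"FP64 peak: {fp64_pflops:.0f} PFLOPS")   # ~340 PFLOPS
print(f"INT8 peak: {int8_exaops:.1f} exa-ops")  # ~39.6 exa-ops
```

Under these assumptions the math lines up with the article's figures: 10,000 GPUs at 34 TFLOPS each gives 340 PFLOPS, and the sparse INT8 total lands just under 40 exa-ops.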
The company is not abandoning its plans to launch its in-house Dojo system, but that supercomputer will come online somewhat later.