
Cerebras launches 900,000-core 125 PetaFLOPS wafer-scale processor for AI — theoretically equivalent to about 62…

Mar 15, 2024 - tomshardware.com
Cerebras Systems has launched its Wafer Scale Engine 3 (WSE-3), an AI wafer-scale chip with double the performance of its predecessor, the WSE-2. The WSE-3, which powers Cerebras's CS-3 supercomputer, has 4 trillion transistors, 900,000 AI cores, and 44GB of on-chip SRAM, and can reach a peak performance of 125 FP16 PetaFLOPS. The CS-3 can be used to train AI models with up to 24 trillion parameters, a significant improvement over supercomputers powered by the WSE-2 and other modern AI processors.

Cerebras has highlighted the CS-3's superior power efficiency and ease of use: it maintains the same power consumption as its predecessor despite doubling performance. The company is also working with institutions such as Argonne National Laboratory and the Mayo Clinic, the latter demonstrating the CS-3's potential in healthcare. A strategic partnership with G42 is set to expand with the construction of the Condor Galaxy 3, an AI supercomputer featuring 64 CS-3 systems.

Key takeaways:

  • Cerebras Systems has unveiled its Wafer Scale Engine 3 (WSE-3), a new AI wafer-scale chip with double the performance of its predecessor, the WSE-2; the chip powers the CS-3 supercomputer.
  • The CS-3 supercomputer can be used to train AI models with up to 24 trillion parameters and can support 1.5TB, 12TB, or 1.2PB of external memory.
  • The CS-3 can be configured in clusters of up to 2048 systems and offers native support for PyTorch 2.0, with claimed training speedups of up to eight times.
  • A strategic partnership between Cerebras and G42 is set to expand with the construction of the Condor Galaxy 3, an AI supercomputer featuring 64 CS-3 systems.
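As a rough back-of-envelope check (our own arithmetic, not figures from the article), the petabyte-scale external memory option lines up with the 24-trillion-parameter claim. The bytes-per-parameter values below are common mixed-precision training heuristics (FP16 weights plus FP32 master weights and Adam optimizer states), not Cerebras-published numbers:

```python
# Back-of-envelope memory estimate for a 24-trillion-parameter model.
# Byte counts per parameter are standard rules of thumb, not Cerebras figures.

PARAMS = 24e12  # 24 trillion parameters (from the article)
PB = 1e15       # bytes in a petabyte (decimal)

def training_memory_bytes(params, bytes_per_param=16):
    """Rough training footprint: FP16 weights + FP32 master copy
    + Adam optimizer moments ~= 16 bytes/param (common heuristic)."""
    return params * bytes_per_param

weights_fp16 = PARAMS * 2                        # FP16 weights alone
training_total = training_memory_bytes(PARAMS)   # weights + optimizer state

print(f"FP16 weights:       {weights_fp16 / PB:.3f} PB")
print(f"Training footprint: {training_total / PB:.3f} PB")
```

Under these assumptions the weights alone take roughly 0.05 PB and the full training state roughly 0.4 PB, both of which fit within the CS-3's 1.2 PB maximum external memory configuration.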
