Amazon's Trainium2 AI Accelerator Features 96 GB of HBM, Quadruples Training Performance

Nov 30, 2023 - anandtech.com
Amazon Web Services (AWS) has launched Trainium2, a new accelerator for artificial intelligence (AI) workloads that significantly enhances performance compared to its predecessor. Trainium2, designed by Amazon's Annapurna Labs, is built specifically for training foundation models (FMs) and large language models (LLMs) with up to trillions of parameters. It offers four times higher training performance, two times higher performance per watt, and three times as much memory – a total of 96 GB of HBM.

The company has not disclosed specific performance numbers for Trainium2, but it claims that its Trn2 instances can scale out with up to 100,000 Trainium2 chips to achieve up to 65 ExaFLOPS of low-precision compute performance for AI workloads. This scaling could significantly reduce the training time for a 300-billion parameter large language model from months to weeks. AWS partners, such as Anthropic, are ready to deploy the new accelerator.
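AWS has not published per-chip figures, but the cluster-level claim implies one. A rough back-of-the-envelope check (assuming the 65 ExaFLOPS divides evenly across all 100,000 chips, with no interconnect or utilization overhead):

```python
# Hypothetical sanity check of AWS's scaling claim, not an official spec:
# 65 ExaFLOPS of low-precision compute spread over 100,000 Trainium2 chips.
cluster_flops = 65e18      # 65 ExaFLOPS (low-precision)
num_chips = 100_000

per_chip_flops = cluster_flops / num_chips
print(f"Implied per-chip compute: {per_chip_flops / 1e12:.0f} TFLOPS")
# Implied per-chip compute: 650 TFLOPS
```

This works out to roughly 650 TFLOPS of low-precision compute per chip at the claimed scale, a ballpark figure only, since real clusters rarely scale linearly.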

Key takeaways:

  • Amazon Web Services has introduced Trainium2, a new accelerator for artificial intelligence workloads that significantly increases performance compared to its predecessor.
  • Trainium2 is designed specifically for training foundation models and large language models, featuring four times higher training performance, two times higher performance per watt, and three times as much memory.
  • Amazon aims to enable its clients to access up to 65 'AI' ExaFLOPS performance for their workloads, with its Trn2 instances scalable with up to 100,000 Trainium2 chips.
  • Amazon has partners, such as Anthropic, ready to deploy the AWS Trainium2 accelerators, which are expected to be at least 4x faster than the first-generation Trainium chips for some key workloads.