The updates are designed to address capacity issues faced by companies such as Salesforce, Thomson Reuters, and BMW, and by AI startups like Luma, Perplexity, Stability AI, and Hugging Face. The new tools help businesses avoid overprovisioning servers for their training jobs, and the overspending that comes with it, by enabling more efficient use of resources. According to AWS, these updates can reduce costs by up to 40% for organizations.
Key takeaways:
- Amazon's cloud computing unit, AWS, is announcing updates to its SageMaker HyperPod platform, aimed at making model training and fine-tuning more efficient and cost-effective for enterprises.
- AWS is launching 'flexible training plans', which let HyperPod users set a timeline and budget for their model training while SageMaker handles provisioning the infrastructure and running the jobs.
- The SageMaker team is also launching HyperPod Recipes: benchmarked, optimized recipes for common model architectures that encapsulate best practices for training and fine-tuning those models.
- AWS is now allowing enterprises to pool resources and create a central command center for allocating GPU capacity based on a project's priority.