Amazon reportedly develops new multimodal language model - SiliconANGLE

Nov 28, 2024 - siliconangle.com
Amazon has reportedly developed a multimodal large language model (LLM) called Olympus that could be launched as early as next week. The model, which can process text, images, and videos, is expected to be announced at the AWS re:Invent event and may be offered through Amazon Web Services. Olympus is said to be capable of searching video repositories for specific clips using natural language prompts and of helping energy companies analyze geological data. It's unclear whether this is the same LLM that Amazon was reported last November to be spending millions of dollars on, a new version of that model, or an entirely different system.

The Olympus model could reduce Amazon's reliance on Anthropic PBC, a company in which Amazon has invested $8 billion. The model could be integrated with Bedrock, a managed service that provides access to cloud-hosted frontier models, including more than half a dozen developed by Amazon itself. Amazon's AI strategy also extends to hardware: the company is developing two chip lineups, AWS Trainium and AWS Inferentia, optimized for training and inference workloads, respectively.
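
For context, models offered through Bedrock are typically invoked via the AWS SDK's bedrock-runtime client. The sketch below, using boto3's Converse API, shows how a Bedrock-hosted model could be called in that way; the model ID shown is purely hypothetical, since no identifier for Olympus has been published.

    # Sketch of calling a Bedrock-hosted model via boto3's Converse API.
    # The model ID below is a hypothetical placeholder; Olympus has no
    # published identifier.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.converse(
        modelId="amazon.olympus-v1:0",  # hypothetical placeholder ID
        messages=[
            {
                "role": "user",
                "content": [{"text": "Find the clip where the winning goal is scored."}],
            }
        ],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )

    # The Converse API returns the assistant reply under output.message.content.
    print(response["output"]["message"]["content"][0]["text"])
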

Key takeaways:

  • Amazon has reportedly developed a multimodal large language model, known internally as Olympus, which can process text, images, and videos.
  • The model is expected to debut as early as next week during AWS re:Invent and may be offered through Amazon Web Services, possibly via AWS Bedrock.
  • Olympus could help users search video repositories for specific clips using natural language prompts and assist energy companies in analyzing geological data.
  • The development of Olympus could be a move by Amazon to reduce its reliance on Anthropic PBC, a company it has funded, as other tech giants also work to bring more of their AI stacks in-house.