
Databricks expands Mosaic AI to help enterprises build with LLMs | TechCrunch

Jun 12, 2024 - techcrunch.com
A year after acquiring MosaicML, Databricks has rebranded it as Mosaic AI and integrated it into its AI offerings. The company is launching five new Mosaic AI tools at its Data + AI Summit: Mosaic AI Agent Framework, Mosaic AI Agent Evaluation, Mosaic AI Tools Catalog, Mosaic AI Model Training, and Mosaic AI Gateway. These tools aim to improve the quality and reliability of AI models, ensure cost-efficiency, and maintain data privacy. The company is also extending its Unity Catalog system to govern which AI tools and functions large language models (LLMs) can use when generating answers.

The Mosaic AI Agent Framework and the Mosaic AI Tools Catalog are two services Databricks is launching to help developers build their own retrieval-augmented generation (RAG) applications. The company is also launching Mosaic AI Agent Evaluation, an AI-assisted evaluation tool that uses LLM-based judges to test AI performance in production. Other new tools include the Mosaic AI Model Training service, which lets users fine-tune models with their organization's private data, and the Mosaic AI Gateway, a unified interface to query, manage, and deploy any open source or proprietary model.
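The RAG pattern these tools target can be illustrated with a minimal sketch: retrieve the documents most relevant to a query, then assemble them into an augmented prompt for an LLM. The retriever and helper names below are purely illustrative, not Databricks APIs; a real system would use vector embeddings rather than word overlap.

```python
# Toy RAG pipeline: keyword-overlap retrieval + prompt assembly.
# Illustrative only; Mosaic AI Agent Framework exposes its own APIs.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (stand-in retriever)."""
    q_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Combine retrieved context and the user question into one LLM prompt."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Unity Catalog governs data and AI assets.",
    "Mosaic AI Gateway routes model queries.",
    "The summit takes place in San Francisco.",
]
query = "What governs AI assets?"
prompt = build_prompt(query, retrieve(query, docs))
```

The prompt would then be sent to any model behind a gateway; the value of the pattern is that answers are grounded in the retrieved enterprise data rather than the model's training set alone.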

Key takeaways:

  • Databricks is launching five new Mosaic AI tools at its conference: Mosaic AI Agent Framework, Mosaic AI Agent Evaluation, Mosaic AI Tools Catalog, Mosaic AI Model Training, and Mosaic AI Gateway.
  • The Mosaic AI Agent Framework and the Mosaic AI Tools Catalog are two services that Databricks is launching to help developers build their own RAG-based applications.
  • Databricks is extending its Unity Catalog system to let enterprises govern which AI tools and functions large language models (LLMs) can call upon when generating answers.
  • Databricks is also launching Mosaic AI Agent Evaluation, an AI-assisted evaluation tool that uses LLM-based judges to test how well the AI performs in production and lets enterprises quickly gather feedback from users.
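The "LLM-based judges" idea in the evaluation tool can be sketched as a loop where each judge scores a response on one dimension and the scores are averaged. The judge functions below are simple stand-ins (real evaluation calls actual LLMs as judges); every name here is hypothetical.

```python
# Toy LLM-as-judge evaluation: two stand-in judges score a response,
# and their scores are averaged into one quality metric.

def groundedness_judge(response: str, context: str) -> float:
    """Fraction of response words that appear in the source context."""
    words = response.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in context.lower())
    return hits / len(words)

def relevance_judge(response: str, question: str) -> float:
    """Fraction of question keywords echoed in the response."""
    q = set(question.lower().split())
    if not q:
        return 0.0
    return len(q & set(response.lower().split())) / len(q)

def evaluate(response: str, question: str, context: str) -> float:
    """Average the per-judge scores into an overall quality score."""
    scores = [
        groundedness_judge(response, context),
        relevance_judge(response, question),
    ]
    return sum(scores) / len(scores)

score = evaluate(
    response="unity catalog governs assets",
    question="what governs assets",
    context="unity catalog governs data and ai assets",
)
```

In production, each judge would be an LLM prompted with a rubric; averaging (or weighting) judge scores gives a single metric that can be tracked across model versions.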
