The company also introduced more customization options for large-batch text processing, enabling data teams to build natural language processing (NLP) pipelines that run at high speed and scale. Snowflake ML now supports Container Runtime, allowing users to efficiently execute distributed ML training jobs on GPUs, and the company unveiled Model Serving in Containers, which lets teams deploy both internally and externally trained models to production on distributed CPUs or GPUs.
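For illustration, here is a minimal sketch of what a batch text-processing step can look like: using Snowpark to push a Cortex `COMPLETE` call down over an entire table so the LLM runs once per row inside Snowflake rather than in a client-side loop. The table name, column names, and connection parameters are hypothetical placeholders.

```python
# A minimal sketch of batch LLM inference over a Snowflake table with the
# Cortex COMPLETE SQL function, called through Snowpark. The table and
# columns (SUPPORT_TICKETS, TICKET_ID, TICKET_TEXT) are hypothetical;
# substitute your own schema and credentials.
from snowflake.snowpark import Session

connection_parameters = {
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

# The whole batch is processed server-side in one query.
results = session.sql(
    """
    SELECT
        ticket_id,
        SNOWFLAKE.CORTEX.COMPLETE(
            'mistral-large',
            CONCAT('Summarize this support ticket in one sentence: ', ticket_text)
        ) AS summary
    FROM support_tickets
    """
).collect()

for row in results:
    print(row["TICKET_ID"], row["SUMMARY"])
```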
Key takeaways:
- Snowflake has announced new advancements that accelerate the path for organizations to deliver easy, efficient, and trusted AI into production with their enterprise data.
- With Snowflake’s latest innovations, developers can effortlessly build conversational apps for structured and unstructured data with high accuracy, efficiently run batch large language model (LLM) inference for NLP pipelines, and train custom models with GPU-powered containers.
- Snowflake is unveiling more customization options for large-batch text processing, so data teams can build NLP pipelines that run at high speed and scale while optimizing for both cost and performance.
- Snowflake ML now supports Container Runtime, a fully managed container environment accessible through Snowflake Notebooks and preconfigured with access to distributed processing on both CPUs and GPUs, enabling users to efficiently execute distributed ML training jobs (see the sketch after this list).
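As a rough illustration of the Container Runtime workflow, the sketch below trains a GPU-accelerated XGBoost model from inside a Snowflake Notebook. The feature table and column names are hypothetical, and the training code itself is standard open-source XGBoost (2.0+) rather than a Snowflake-specific API; the point is that Container Runtime lets such libraries run against Snowflake data on GPUs without a separate packaging step.

```python
# A minimal sketch of a GPU training job inside a Snowflake Notebook on
# Container Runtime. CUSTOMER_FEATURES and its LABEL column are hypothetical.
from snowflake.snowpark.context import get_active_session
from sklearn.model_selection import train_test_split
import xgboost as xgb

# Snowflake Notebooks expose an already-authenticated session.
session = get_active_session()

# Pull the training data into the container's memory.
df = session.table("CUSTOMER_FEATURES").to_pandas()
X = df.drop(columns=["LABEL"])
y = df["LABEL"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train on the container's GPU; common ML libraries come preinstalled.
model = xgb.XGBClassifier(tree_method="hist", device="cuda", n_estimators=200)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```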