The new release also includes an integrated pgvector experience, in which database webhooks can automatically generate embeddings whenever a new row is inserted into a database table. The company has also announced experimental support for Llama and Mistral through the 'Supabase.ai' API, which is simple to use and supports streaming responses. Supabase is working with Ollama to serve these open-source models, making them available in local development, in self-hosted deployments, and on the platform. Access to the open-source LLMs is currently invite-only while the company manages demand for GPU instances.
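As a sketch of how the webhook-driven embedding flow might look inside an Edge Function (the `content` column and the payload handling here are illustrative assumptions, not details from the announcement):

```typescript
// Hypothetical Edge Function invoked by a database webhook on INSERT.
// Ambient declaration so the sketch type-checks outside the Edge runtime,
// where `Supabase.ai` is provided as a built-in global.
declare const Supabase: {
  ai: {
    Session: new (model: string) => {
      run(input: string, opts?: Record<string, unknown>): Promise<number[]>;
    };
  };
};

// One session per model; 'gte-small' is the embedding model named in the release.
const session = new Supabase.ai.Session('gte-small');

Deno.serve(async (req) => {
  // Database webhooks deliver the inserted row as `record` in the payload.
  const { record } = await req.json();

  // mean_pool/normalize are typical embedding options; exact names are assumptions.
  const embedding = await session.run(record.content, {
    mean_pool: true,
    normalize: true,
  });

  // In a real function, the embedding would be written back to a pgvector column.
  return new Response(JSON.stringify({ embedding }), {
    headers: { "Content-Type": "application/json" },
  });
});
```

Because the webhook fires on insert, the embedding stays in sync with the row without any client-side work.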
Key takeaways:
- Supabase Edge Functions now support AI inference, making it easier to run AI models with a new built-in API.
- Developers can generate embeddings using models like 'gte-small' and use Large Language Models like 'llama2' and 'mistral' for GenAI workloads.
- Supabase has improved the developer experience by using Ort to remove cold starts and Ollama to add LLM support.
- Open-source LLMs are currently invite-only while the company manages demand for GPU instances, with plans to extend support to more models in the future.
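The LLM side of the same built-in API can be sketched as follows, assuming an invite-enabled project; the streaming option and chunk shape mirror the streaming responses described above, but the exact names should be treated as assumptions:

```typescript
// Hypothetical Edge Function that streams an LLM response back to the client.
// Ambient declaration so the sketch type-checks outside the Edge runtime.
declare const Supabase: {
  ai: {
    Session: new (model: string) => {
      run(
        input: string,
        opts?: Record<string, unknown>,
      ): Promise<AsyncIterable<{ response?: string }>>;
    };
  };
};

// 'mistral' is one of the LLMs named in the release; 'llama2' would work the same way.
const session = new Supabase.ai.Session("mistral");

Deno.serve(async (req) => {
  const { prompt } = await req.json();

  // With streaming enabled, chunks arrive as the model (served via Ollama) generates them.
  const output = await session.run(prompt, { stream: true });

  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of output) {
        controller.enqueue(encoder.encode(chunk.response ?? ""));
      }
      controller.close();
    },
  });

  return new Response(body, {
    headers: { "Content-Type": "text/event-stream" },
  });
});
```

Relaying the chunks through a `ReadableStream` lets the client start rendering tokens before the full completion is available.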