Opera is testing this new set of local LLMs in the developer stream of Opera One as part of its new AI Feature Drops Program. Opera One Developer users can now select the model they want to process their input with. The local LLMs require 2-10 GB of local storage space per variant and will be used instead of Aria, Opera’s native browser AI, until a user starts a new chat with the AI or switches Aria back on.
Key takeaways:
- Opera is adding experimental support for 150 local Large Language Model (LLM) variants from approximately 50 families of models to the developer stream of its Opera One browser.
- The local AI models are a complementary addition to Opera’s online Aria AI service and include models such as Llama from Meta, Vicuna, Gemma from Google, and Mixtral from Mistral AI.
- Using local large language models means users’ data is kept on their device, allowing them to use generative AI without sending information to a server.
- In early 2023, Opera introduced Opera One, its AI-centric flagship browser, built on modular design principles and a new browser architecture with a multithreaded compositor.