The new feature will analyze the uploaded video frame-by-frame to understand the content and provide possible solutions or answers to the user's query. While the initial application is for Google Search, the company suggests the technology could also be used to understand video content on phones, private cloud storage like Google Photos, and public platforms like YouTube. Google has not yet announced how long the feature will be in testing or when it will be available in other markets.
Key takeaways:
- Google is aiming to make searching video a bigger part of Google Search, using Gemini AI, in response to competition from video platforms like TikTok and Instagram.
- The new feature will allow users to search by uploading a video combined with a text query, receiving an AI-generated overview of the answers they need.
- This multimodal capability builds on an existing search feature, introduced in 2021, that lets users add text to visual searches and has helped Google in areas where it typically struggles.
- While the feature will initially launch as an experiment in Search Labs for English-language users in the U.S., it has implications for other areas as well, including understanding videos on your phone, videos uploaded to private cloud storage like Google Photos, and videos publicly shared on YouTube.