If you use Gemini daily, you've likely noticed the impressive work Google has done with its AI assistant and its accompanying features. Gemini Live is among them. For the uninitiated, this feature lets you hold a free-flowing, natural conversation with the Gemini AI assistant in real time. It's like your everyday back-and-forth conversation, but with AI. Now, it seems Google is looking to upgrade Gemini Live's capabilities with a Thinking Mode and more.

Gemini Live is about to get “Thinking Mode” and “Experimental Features”

Folks over at 9to5Google reported today on upcoming upgrades, based on findings about "Labs" in the latest beta version of the Google app. For those unaware, Labs is Google's experimental program that lets you opt into testing new features for the Gemini Android app. Per the outlet's APK Insight report, version 17.2 of the Google beta app reportedly hints at four new features, two of which relate to Gemini Live.

First, Google might add a new "Thinking Mode" to Gemini Live. Based on the details shared by the outlet, in this mode Gemini Live takes extra time to reason before offering more detailed responses. The second addition is "Experimental Features": Gemini Live also appears to be getting what Google calls its "cutting-edge features." These reportedly include multimodal memory, better noise handling, and responding when it sees something. Beyond those three, the list of Gemini Live's Experimental Features also mentions "personalized results based on your Google apps."

Notably, these Labs additions hint at Gemini Live being powered by Gemini 3. As of now, the feature relies on Gemini 2.5 Flash to respond. Gemini Live's Thinking Mode could use either the Thinking or Pro models for detailed responses, while the Experimental Features might run on Gemini 3 Flash or Pro. As for the capability where Gemini sees something and responds, the company could be using Project Astra. All of this is speculation for now; we expect to hear more about these Labs when Google makes an official announcement.

The other two Labs spotted in the beta version of the app

Apart from Thinking Mode and Experimental Features, there's also mention of two other Labs: UI Control and Deep Research. As the name suggests, UI Control is all about an AI agent controlling your phone to complete tasks. It's unclear what kind of agent Google is referring to, but it could be the long-rumored Gemini agent; only time will tell if we'll see it on Android. Meanwhile, Deep Research will "delegate complex research tasks."