
Google I/O 2025: Gemini Live on smartphones gets new features for more interactive conversations

Google is improving Gemini Live with new capabilities. At its annual developer conference, Google I/O 2025, the company announced the rollout of camera and screen sharing for the AI-based assistant on smartphones, allowing for more interactive conversations. For users, this update means they can now get real-time visual assistance directly from their phone. During the event's keynote, the company showed a demo video of Gemini Live identifying objects around the user through the phone's camera. Users will also be able to share their smartphone's screen so that Gemini Live can go through it and answer their questions.


Gemini Live camera and screen sharing features: Availability


Google has announced that Gemini Live's camera and screen sharing capabilities have already started rolling out to all Android and iPhone users for free.



How Gemini Live will offer users more personalised conversations

In a blog post, the company notes that Gemini Live will integrate more deeply into daily life by connecting with other Google apps in the coming weeks. This integration will enable functions like instantly creating Google Calendar events when planning with friends, or pulling real-time details from Google Maps when a user is craving something like deep-dish pizza.

Google Maps, Calendar, Tasks, and Keep are among the initial Google ecosystem connections planned, with more integrations expected. Users can manage these app connections and their information within Gemini Live's settings.

Gemini Live is also getting new tools to boost creativity and research. Imagen 4 will allow users to quickly turn ideas into high‑quality images with clear text for presentations, social media graphics, or event invites. This feature is now available in the Gemini app.

Meanwhile, Veo 3 will take it further by turning simple text prompts into full video scenes complete with natural sounds and character voices, giving a truly immersive experience. This feature is available now for US Google AI Ultra subscribers.

Gemini Live is also getting a Deep Research feature that combines private documents, such as PDFs and images, with public data to create custom, in-depth reports. This will help users connect their own insights with broader trends, saving time and surfacing new connections. Soon, Deep Research will pull information directly from Google Drive and Gmail as well, the company promised.

Moreover, English-language Google AI Pro and Ultra subscribers in the US will soon be able to try Gemini in Chrome on desktop (Windows and macOS). The first version will clarify or summarise anything on a webpage, and the company plans to add multi-tab support and site navigation in upcoming updates.
