Google’s Gemini Live AI assistant will show you what it’s talking about
Google’s Gemini Live is on the verge of a significant update that will let users see exactly what the AI assistant is referring to. By highlighting specific objects on your screen while you share your camera, Gemini Live promises to make identifying and discussing items in real time much simpler.

Key Takeaways:
- Google is rolling out new Gemini Live features next week
- Gemini Live supports real-time AI conversations
- Gemini Live will be able to highlight items directly on-screen during camera sharing
- This update aims to simplify object identification
- The change could make conversations with the AI clearer and more visual
Introduction
Google is adding a fresh layer of interactivity to Gemini Live, its AI assistant that has already gained attention for real-time conversation capabilities. With this upcoming update, the assistant will do more than just speak—it will visually point out objects you’re referencing right on your screen, helping you and the AI stay on the same page.
How Gemini Live Works
Gemini Live is designed to facilitate real-time interactions between users and artificial intelligence. Until now, those interactions relied on text and voice alone; the new feature sets the stage for more dynamic conversations, letting the assistant do more than merely talk.
New Highlighting Feature
One of the most anticipated enhancements is the ability for Gemini Live to pinpoint items on your screen while you share your camera feed. According to Google, “Next week, Gemini Live will be able to highlight things directly on your screen while sharing your camera, making it easier for the AI assistant to point out a specific item.” This direct visual guidance could be a game-changer for anyone who wants to clarify which object the AI is referring to in a busy setting or product demonstration.
User Experience Benefits
Imagine trying to confirm which ingredient to use in a recipe or to identify a technical component in your workspace. The visual guidance offered by Gemini Live could reduce confusion and speed up such tasks: if you’re trying to show an item and get immediate AI feedback, you can see precisely which object the assistant is referencing.
Conclusion
As AI technology continues to progress, Google’s move toward visual interaction with Gemini Live could mark the next frontier in human-machine dialogue. By bridging the gap between discussion and demonstration, the company aims to provide a smoother, more intuitive experience that brings AI into everyday life in a vivid new way.