
SEOUL, May 27 (AJP) - Google has re-entered the smart glasses market with a new XR-enabled prototype, unveiled at the Google I/O 2025 conference on May 20.
The device aims to blend cutting-edge artificial intelligence with mainstream wearability and a fashion-forward design, marking a significant leap in functionality over the company's previous attempts.
The prototype features an embedded transparent display on the right lens, integrated with a camera, microphone, and speaker system. This configuration allows seamless interaction with Google’s latest multimodal AI model, Gemini 2.5.
A simple press-and-hold gesture on the frame activates Gemini, enabling the glasses to process the surrounding environment in real time.
The device is designed to recognize objects, translate conversations, provide navigation guidance, and even recall past activities, such as a user's preferred coffee order or the artist and style of a painting.
Visuals captured by the glasses are previewed directly on the lens, while audio feedback is delivered through frame-integrated speakers engineered to minimize sound spillover, enhancing user privacy in public settings.
The system, though still in the prototype phase, demonstrates a sophisticated integration of real-time vision and conversational AI, acting as a proactive everyday companion. Gemini can assist with shopping by identifying products, offer contextual overlays, and transcribe spoken dialogue as it occurs.
The XR glasses are currently equipped to support a comprehensive suite of intelligent functions, including live translation, in-lens navigation, object and product recognition, and photo and video capture.
They also enable visual search, memory recall for places and tasks, in-lens notifications, voice command interaction, conversation transcription, and context-aware assistance, all powered by Gemini AI.
Copyright ⓒ Aju Press All rights reserved.