The Future of Wear OS: Integrating Smartwatches with Augmented Reality

The world of wearable technology continues to evolve rapidly, with an increasing focus on the integration of various devices to enhance user experience. In my weekly column, I delve into the state of Wear OS, examining new developments, updates, and exciting applications that emerge in this dynamic field.
Wear OS, originally known as Android Wear, made its public debut in 2014 alongside Google Glass. Although the two were introduced as separate products, the upcoming Google I/O 2025 event, scheduled for May 20, promises to showcase the new augmented reality (AR) glasses while potentially integrating them with Android smartwatches for a cohesive extended reality (XR) experience. It's an exciting prospect that could elevate the functionality of both devices.
In the lead-up to I/O 2025, Google and Samsung have been actively presenting demos of their Android XR capabilities. This journey began with the unveiling of Samsung's Project Moohan XR headset, and most recently, at a TED Talk on XR, the impressive "Project HAEAN" AR glasses were showcased, demonstrating their ability to record and recall visual experiences.
Although no specific Wear OS panels are on the agenda for I/O 2025, there will be two crucial panels focusing on Android XR. One session is dedicated to the Android XR SDK and its associated AI tools, while the other will center on the development of AR applications. This includes incorporating "3D models, stereoscopic video, and hand-tracking" into existing Android apps, indicating a significant step toward a more immersive user experience. Given that I will be attending I/O, I eagerly anticipate the chance to test these glasses firsthand and evaluate the functionality of their Gemini command and gesture controls.
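To ground what that second session might involve: Google's Jetpack Compose for XR library, part of the Android XR SDK developer preview, lets an existing Compose app lift its UI into a spatial panel. The following is a minimal sketch based on the preview APIs; since the SDK is still pre-release, the exact names, imports, and defaults shown here may change before a stable release.

```kotlin
// Minimal sketch using the Jetpack Compose for XR developer preview
// (androidx.xr.compose); API names may shift before a stable release.
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.dp
import androidx.xr.compose.subspace.SpatialPanel
import androidx.xr.compose.subspace.Subspace
import androidx.xr.compose.subspace.layout.SubspaceModifier
import androidx.xr.compose.subspace.layout.height
import androidx.xr.compose.subspace.layout.movable
import androidx.xr.compose.subspace.layout.resizable
import androidx.xr.compose.subspace.layout.width

@Composable
fun SpatializedScreen() {
    // Subspace opens a 3D volume in which composables can be positioned.
    Subspace {
        // SpatialPanel hosts ordinary 2D Compose UI as a floating panel
        // the user can grab, move, and resize inside the headset.
        SpatialPanel(
            modifier = SubspaceModifier
                .width(1024.dp)
                .height(640.dp)
                .movable()
                .resizable()
        ) {
            Text("Existing app UI, now spatialized")
        }
    }
}
```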
Having watched demonstrations from prominent tech reviewers like Marques Brownlee and read coverage from outlets such as The Verge, I have concluded that while voice and gesture controls are promising, they will likely not suffice for an optimal Android XR experience. Hence, Android smartwatches should be regarded as a vital component of this ecosystem.
When engaging with VR games on my Meta Quest 3, I generally prefer using controllers. However, my experiences with hand tracking across various XR devices, including the Apple Vision Pro and Snap Spectacles, often leave me thinking that the technology is still in its infancy. In challenging environments, such as poorly lit rooms or bright outdoor spaces, the reliability of inside-out camera tracking is compromised, making it difficult to execute precise hand gestures. Although hand tracking can work reasonably well under ideal conditions, I find it more convenient to rely on a conventional controller.
Envisioning the use of AR glasses outdoors raises concerns about the social implications of performing deliberate gestures that may inadvertently draw attention from bystanders. The essence of smart glasses should be their seamless integration into everyday life. However, if attention is drawn to the technology, it could reignite the stigma associated with early adopters of Google Glass, often derisively termed "Glassholes."
In contrast, the implementation of Gemini voice commands appears to be a natural fit for this technology. Demonstrations indicate that Gemini can execute commands with reasonable reliability after a brief processing period. The multimodal Live mode allows users to point at an object to receive information, eliminating the need for a controller. However, I often find myself hesitant to engage with my Ray-Ban Meta smart glasses in public when requesting the Meta AI to capture photos, preferring to reserve such interactions for private moments.
Google envisions a future where speaking freely to AR glasses in public could become commonplace. While this notion is intriguing, I remain skeptical about the practicality of having conversations with AI in settings such as public transportation, offices, or grocery stores. In such environments, I would rather employ a less intrusive, non-verbal method of interaction.
Even if concerns about societal norms are brushed aside, there are still practical challenges to consider. Ambient noise can interfere with voice commands, and there is always the potential for accidental activations of the Gemini AI. Furthermore, the slight delay in processing commands can lead to frustration, making traditional button inputs feel more immediate and effective.
Meta has recognized these limitations in designing its Orion AR glasses, which are accompanied by an sEMG neural band that can detect subtle finger gestures. This approach allows for discreet interactions with the device, minimizing the need for vocal commands or constant visibility of the user's hands. Such innovations are essential for making AR glasses a practical tool for everyday use.
Given that both Google and Samsung already ship wearable technology, namely smartwatches equipped with touchscreens and gesture recognition, there exists a natural opportunity for these devices to work in tandem with smart and AR glasses.
The common applications of Android watches include checking notifications, tracking fitness activities, and initiating voice commands via Google Assistant. However, these devices can also facilitate actions on other devices, such as taking photographs, unlocking smartphones using ultra-wideband technology, controlling Google TV, and monitoring Nest Doorbell feeds.
Imagine if Wear OS were to introduce an Android XR mode that could display notifications on smart glasses and mirror content from the active application. This feature could enhance user engagement by allowing immediate contextual actions, such as controlling video playback or capturing images, simply by tapping the watch's interface. Additionally, incorporating features like the rotating crown or bezel for scrolling through menus while viewing content on AR glasses would provide a more intuitive method of interaction compared to traditional hand gestures.
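To illustrate how little new plumbing this would take, consider that Wear OS apps already talk to paired devices through the Wearable Data Layer. The sketch below is speculative: MessageClient, NodeClient, and the rotary-input handling are real APIs, but the "/xr/..." message paths and any glasses-side receiver are my own assumptions, not anything Google has announced.

```kotlin
// Speculative sketch of a watch-side "XR remote" on Wear OS. MessageClient
// and the rotary-input handling are real APIs; the /xr/* message paths and
// the glasses-side listener they imply are hypothetical.
import android.app.Activity
import android.os.Bundle
import android.view.MotionEvent
import androidx.core.view.InputDeviceCompat
import com.google.android.gms.wearable.Wearable

class XrRemoteActivity : Activity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // A minimal tappable UI (play/pause, camera shutter) would go here.
    }

    // Turning the watch crown or bezel arrives as a rotary scroll event;
    // relay the delta so the glasses could scroll a menu or timeline.
    override fun onGenericMotionEvent(event: MotionEvent): Boolean {
        if (event.action == MotionEvent.ACTION_SCROLL &&
            event.isFromSource(InputDeviceCompat.SOURCE_ROTARY_ENCODER)
        ) {
            val delta = -event.getAxisValue(MotionEvent.AXIS_SCROLL)
            sendToPairedDevices("/xr/scroll", delta.toString().toByteArray())
            return true
        }
        return super.onGenericMotionEvent(event)
    }

    private fun sendToPairedDevices(path: String, payload: ByteArray) {
        // Broadcast to every connected node for simplicity; a real build
        // would resolve the glasses' node ID via CapabilityClient instead.
        Wearable.getNodeClient(this).connectedNodes
            .addOnSuccessListener { nodes ->
                nodes.forEach { node ->
                    Wearable.getMessageClient(this)
                        .sendMessage(node.id, path, payload)
                }
            }
    }
}
```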
Samsung's Galaxy Watches already support basic gesture commands, such as double taps and knocks. This functionality could enhance Android XR controls by providing an alternative way to select or navigate options when visual hand gestures fail to register accurately.
Personally, I would feel more optimistic about the future of AR glasses if I knew there was a reliable tactile option alongside the expected voice and gesture controls. The critical question remains: can Google develop Wear OS to function as an effective control interface for these augmented experiences?
Recently, a patent filing by Samsung indicated interest in employing smartwatches or smart rings as controllers for XR applications. However, the details revealed were rather ambiguous, primarily focusing on the Galaxy Ring. This suggests that Samsung's engineers are exploring new avenues for XR control mechanisms. While the Project Moohan XR headset may initially launch with conventional controllers, the ultimate goal appears to be creating all-day smart glasses that necessitate more discreet and consistent control methods than voice commands and hand gestures alone.
While it makes sense for Samsung to consider smart rings as controllers, given that they are unobtrusive and free of a complex operating system, I maintain that Wear OS could serve as a superior alternative, seamlessly integrating into the XR ecosystem and enhancing the user experience.