Samsung and Google are working on an Apple Vision Pro-like mixed reality VR headset running Android XR and Google Gemini. We knew that already, and even got a demo of it last year. But Samsung revealed a little more at its phone-focused winter Unpacked event: a shared Google-Samsung AI ecosystem that could be the missing piece joining it all together. That AI-infused experience will arrive on a next-gen VR/AR headset this year, but expect it to also run on Galaxy S25 phones and on the glasses that will connect to them.
In a sense, I already got a preview of what the future holds at the end of last year.
A seeing AI that works in real time
Samsung briefly addressed upcoming VR/AR headsets and glasses at its latest Unpacked event, but we largely knew about those already. Still, Samsung’s demonstration of real-time AI that can see things on your phone or through cameras is exactly the trend we were expecting to arrive in 2025.
Project Moohan (meaning “Infinity” in Korean) is a VR headset with passthrough cameras that blend the virtual and the real, much like the Vision Pro or Meta’s Quest 3. The design feels a lot like Meta’s discontinued Quest Pro, but with far better specs. The headset has hand and eye tracking, runs Android apps on the Android XR operating system that will be fully revealed later this year, and uses Google Gemini AI as an assistive layer throughout. Google’s Project Astra tech, which enables that real-time assistance on glasses, phones and headsets, is debuting on Samsung’s Galaxy S25 series of phones. But I’ve already seen it in action on my face.
My demos last year let me use Gemini to assist me as I looked around a room, watched YouTube videos or did basically anything else. Live AI had to be started up explicitly before it could see and hear what I was seeing and hearing, and pause modes let me temporarily stop the live assistance.
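If you’re curious what that flow looks like in software terms, here’s a minimal Kotlin sketch of the session lifecycle I experienced. To be clear, the `LiveAiSession` class and everything in it are my own illustration of the behavior, not an actual Android XR or Gemini API:

```kotlin
// Hypothetical sketch of a Live AI session lifecycle, based on the demo's
// behavior: assistance runs only after an explicit start, and can be paused.
// These names are illustrative, not a real Android XR or Gemini API.
class LiveAiSession {
    enum class State { IDLE, LIVE, PAUSED }

    var state: State = State.IDLE
        private set

    fun start() {                        // user opts in; camera and mic go live
        check(state == State.IDLE) { "Session already started" }
        state = State.LIVE
    }

    fun pause() {                        // temporarily stop live assistance
        if (state == State.LIVE) state = State.PAUSED
    }

    fun resume() {
        if (state == State.PAUSED) state = State.LIVE
    }

    fun stop() {                         // end the session entirely
        state = State.IDLE
    }

    // Frames and audio are interpreted only while the session is LIVE.
    fun canObserve(): Boolean = state == State.LIVE
}
```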
Samsung showed off what look like similar real-time AI functions on the Galaxy S25 phones, and promised more. I expect it’ll be able to work while you watch YouTube videos, much like my Android XR demo did. And according to the Samsung and Google execs working on Android XR, it could even be used for live help while playing games.
Better battery life and processing…for glasses?
Samsung and Google have also confirmed they’re working on smart glasses, also using Gemini AI, to compete with Meta’s Ray-Bans and a wave of other emerging eyewear. AR glasses are also apparently in the works.
While Project Moohan is a standalone VR headset with its own battery pack and processors, much like Apple’s Vision Pro, the smaller smart glasses Google and Samsung are working on — and any glasses after that — will rely on connections and processing assistance from phones to work. That’s how smart glasses like Meta’s Ray-Bans already work.
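In other words, the glasses act mostly as cameras, microphones and a display, while the paired phone does the heavy lifting. Here’s a rough Kotlin sketch of that tethered division of labor; every type name here is my own invention, since neither Samsung nor Google has published an API for this:

```kotlin
// Illustrative sketch of the phone-tethered glasses model: the glasses
// capture input and render output, while the paired phone runs the AI.
// All of these types are hypothetical, not a published Samsung/Google API.
interface GlassesLink {                      // e.g. a Bluetooth or Wi-Fi channel
    fun sendFrame(jpegBytes: ByteArray)      // camera frame, glasses -> phone
    fun onAnswer(handler: (String) -> Unit)  // assistant reply, phone -> glasses
}

class PhoneSideAssistant(private val link: GlassesLink) {
    // The phone's processor, battery and cooling carry the real workload:
    // decoding frames, running the model and holding the network connection.
    fun handleFrame(jpegBytes: ByteArray, prompt: String): String {
        // ...run on-device or cloud inference here...
        return "Description of what the glasses are seeing"
    }
}
```

The design consequence is the one Samsung seems to be hinting at: the more the glasses are used, the harder the phone in your pocket has to work.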
But more features may mean more intensive phone processing. Live AI could become an increasingly used feature, leaning on phones to work continuously on behalf of these glasses. The better processing and graphics, and most importantly the improved battery life and cooling, sounded to me like ways to make these phones better pocket computers for eventual glasses.
A personal data set that these AI gadgets will need
Samsung also announced an obscure-sounding Personal Data Engine that Google and Samsung’s AI will take advantage of, bucketing personal data into one place where AI could draw richer conclusions about, and connections between, all the things that are part of your life.
How that plays out, how it’s secured, or where its limits are remains extremely unclear. But it sounds like a repository of personal data that Samsung and Google’s AI can train on and put to work across connected products, including watches, rings and glasses.
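If I had to guess at the shape of it, it could work something like a set of typed buckets that an assistant queries across devices. A purely speculative Kotlin sketch, with every name invented for illustration:

```kotlin
// Speculative sketch of what a "Personal Data Engine" could look like:
// typed buckets of personal data from phones, watches, rings and glasses
// that an assistant can query across. Entirely hypothetical names.
data class PersonalRecord(
    val source: String,        // e.g. "watch", "ring", "glasses", "calendar"
    val timestampMs: Long,
    val payload: Map<String, String>
)

class PersonalDataEngine {
    private val buckets = mutableMapOf<String, MutableList<PersonalRecord>>()

    fun ingest(bucket: String, record: PersonalRecord) {
        buckets.getOrPut(bucket) { mutableListOf() }.add(record)
    }

    // The cross-bucket query is the interesting part: it's what would let an
    // assistant connect, say, sleep data from a ring with calendar entries.
    fun query(vararg bucketNames: String): List<PersonalRecord> =
        bucketNames.flatMap { buckets[it].orEmpty() }
            .sortedBy { it.timestampMs }
}
```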
Camera-enabled AI wearables are only as good as the data that can assist them, which is why so many of these devices right now feel clunky and weird to use, including Meta’s Ray-Bans in their AI modes. Usually, these AI devices hit walls when it comes to knowing things your existing apps already know better. Google and Samsung are clearly trying to fix that.
Will I want to trust that process to Google and Samsung, or anyone else? How will these phones, and future glasses, make that relationship between AI and our data clearer and more manageable? It feels like we’re watching one shoe drop here, with others coming at Google’s I/O developer conference, which will likely dig into Android XR and Gemini’s advances in far more depth.
Samsung’s making Project Moohan its first headset, with glasses to follow after that. Expect Google and Samsung to get into more details at the developer-focused Google I/O conference around May or June, and possibly to give the full rundown in the summer at Samsung’s next expected Unpacked event. By then, we may know a lot more about why this seemingly boring new wave of Galaxy S25 phones might be building up an infrastructure that plays out in clearer detail by the end of the year…or even after that.