Nvidia’s CEO Explains How Its New AI Models Could Work on Future Smart Glasses


Tech gadgets — whether they be phones, robots or autonomous vehicles — are getting better at understanding the world around us, thanks to AI. That message rang loud and clear throughout 2024 and became even louder at CES 2025, where chipmaker Nvidia unveiled a new AI model for understanding the physical world and a family of large language models for powering future AI agents.

Nvidia CEO Jensen Huang is positioning these world foundation models as ideal for robots and autonomous vehicles. But there’s another class of devices that could benefit from better real-world understanding: smart glasses. Tech-enabled eyewear like Meta’s Ray-Bans is quickly becoming the hot new AI gadget, with shipments of Meta’s spectacles crossing the 1 million mark in November, according to Counterpoint Research.

Such devices seem like the ideal vessel for AI agents: AI helpers that use cameras to understand the world around you and process speech and visual input to help you get things done, rather than just answer questions.

Huang didn’t say whether Nvidia-powered smart glasses are on the horizon. But he did explain how the company’s new models could power future smart glasses if partners were to adopt the technology for that purpose.

“The use of AI as it gets connected to wearables and virtual presence technology like glasses, all of that is super exciting,” Huang said in response to a question at a CES press Q&A about whether the company’s models would work on smart glasses.

Read more: Smart Glasses Are Going to Work This Time, Google’s Android President Tells CNET


Huang pointed to cloud processing as one option, which would mean queries that use Nvidia’s Cosmos model are handled in the cloud rather than on the device itself. Compact devices like smartphones often use this approach to lighten the load of running demanding AI models. If a device maker wanted glasses that could run Nvidia’s AI models on-device rather than relying on the cloud, Huang said, Cosmos’ knowledge could be distilled into a smaller model that’s less generalized but optimized for specific tasks.
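Nvidia hasn’t detailed how that distillation would work for glasses-class hardware, but the underlying technique, known as knowledge distillation, is well established. Here’s a minimal, purely illustrative sketch in PyTorch; the toy teacher and student networks, the temperature and the loss weights are hypothetical stand-ins, not anything Nvidia has published:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: a big, general "teacher" model and a compact
# "student" that could plausibly fit on a wearable device.
teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0  # temperature: softens the teacher's outputs into richer targets

def distill_step(x, labels):
    with torch.no_grad():  # the teacher is frozen; only the student learns
        teacher_logits = teacher(x)
    student_logits = student(x)
    # Soft loss: push the student toward the teacher's softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard loss: still respect the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    loss = 0.9 * soft + 0.1 * hard
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch of random features and labels, just to show the call.
x = torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))
print(distill_step(x, labels))

The softened teacher outputs are the point: they give the small model a richer training signal than hard labels alone, which is how a distilled model can stay useful for a narrow task while shedding most of its parent’s size.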

Nvidia is touting its new Cosmos model as a platform for gathering data about the physical world to train models for robots and self-driving cars, similar to the way a large language model learns to generate text responses after being trained on written media.

“The ChatGPT moment for robotics is coming,” Huang said in a press release. 

Nvidia also announced Llama Nemotron, a family of new AI models built on Meta’s Llama technology and designed to accelerate the development of AI agents. But it’s interesting to think about how these AI tools and models could be applied to smart glasses, too.

A recent Nvidia patent filing fueled speculation about upcoming smart glasses, although the chipmaker hasn’t announced any future products in that space. But Nvidia’s new models and Huang’s comments arrive just after Google, Samsung and Qualcomm said last month that they’re building Android XR, a new mixed reality platform for smart glasses and headsets, another hint that smart glasses could soon become more prominent.

Several new types of smart glasses were also on display at CES 2025, such as the RayNeo X3 Pro and the Halliday smart glasses. The International Data Corporation also predicted in September that shipments of smart glasses would grow by 73.1% in 2024. Nvidia’s moves make the company another one to watch in this space.



