NVIDIA is using Apple Vision Pro to build next-generation AI-powered humanoid robots

NVIDIA humanoid robot making toast
(Image credit: NVIDIA)

NVIDIA has revealed that mixed reality devices like Apple Vision Pro can help developers accelerate the development of humanoid robots. 

NVIDIA unveiled Project GR00T in March, “a general-purpose foundation model for humanoid robots, designed to further its work driving breakthroughs in robotics and embodied AI.” 

The “moonshot” initiative is an attempt to build a universal AI brain for humanoid robot platforms that definitely won’t rise up, take over the world, and kill us all. ‘What could possibly go wrong?’ aside, NVIDIA has revealed how devices like Apple Vision Pro can help power its new synthetic data generation pipeline, turning human movements into simulated commands a robot can follow. 

iRobot 

Video: NVIDIA Accelerating the Future of AI & Humanoid Robots (YouTube)

In the video, the company revealed how a new set of tools for developers in the humanoid robot ecosystem will help them build better AI models more efficiently, thanks to the new synthetic data generation pipeline. 

NVIDIA says that the new platform can collect human demonstrations of movement using a device like Apple Vision Pro — making toast, for example. Once gathered, the data can be multiplied by “1000x or more” using NVIDIA’s simulation tools. 
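To give a rough sense of what “multiplying” demonstration data means, here is a minimal, purely illustrative Python sketch. It treats a recorded demonstration as a sequence of joint positions and generates many slightly perturbed variants; the function name, data layout, and noise-based approach are all assumptions for illustration, since NVIDIA’s actual pipeline relies on physics simulation tools rather than simple noise injection.

```python
import random

def augment_trajectory(trajectory, n_variants=1000, noise=0.01):
    """Generate synthetic variants of one recorded demonstration
    by adding small random perturbations to each joint position.

    Illustrative only -- a stand-in for the idea of turning one
    human demonstration into many simulated training examples.
    """
    variants = []
    for _ in range(n_variants):
        variant = [
            [joint + random.uniform(-noise, noise) for joint in pose]
            for pose in trajectory
        ]
        variants.append(variant)
    return variants

# One "demonstration": 3 timesteps of 2 joint angles (radians)
demo = [[0.0, 0.5], [0.1, 0.4], [0.2, 0.3]]
synthetic = augment_trajectory(demo)
print(len(synthetic))  # 1000 synthetic demonstrations from one recording
```

The point of the sketch is the ratio: one human demonstration in, a thousand (or more) varied training examples out, which is the kind of multiplication NVIDIA describes.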

Once multiplied, the data can be reapplied and used to train a humanoid robot that definitely won’t turn on you and your family. NVIDIA says it’s “one step closer to solving the AI brain for humanoid robots,” and Apple Vision Pro is part of the solution. 

It’s not the first time NVIDIA has leveraged Apple Vision Pro to give us a glimpse of the future. Earlier this year it unveiled its OpenUSD-based Omniverse enterprise digital twins for Apple Vision Pro. It demoed a full-fidelity digital twin of a car streamed directly to Apple Vision Pro, letting a designer toggle paint, choose trim, and even sit inside the vehicle, using spatial computing to blend photorealistic 3D environments with the physical world. 

Stephen Warwick
News Editor
