Last fall, Apple's Machine Learning Journal began a deep dive into 'Hey, Siri', the voice trigger for the company's personal digital assistant. This spring, the Journal is back with a look at how the system tackles not just what is said but who is saying it, and how it balances imposter accepts against false rejects.
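That balance is easy to picture as a threshold on a speaker-verification score. Here's a minimal Swift sketch, assuming a hypothetical scalar score where higher means "sounds more like the enrolled user"; the names and numbers are illustrative, not Apple's implementation.

```swift
import Foundation

// Hypothetical labeled trial: a model's similarity score plus whether
// the speaker really was the enrolled user. Illustrative only.
struct Trial {
    let score: Double
    let isOwner: Bool
}

// Moving the threshold trades one error for the other: raising it cuts
// imposter accepts (a stranger triggering Siri) but raises false rejects
// (the owner being ignored). Sweeping it over many trials traces the
// tradeoff curve a team would tune against target error rates.
func errorRates(trials: [Trial], threshold: Double) -> (imposterAccept: Double, falseReject: Double) {
    let imposters = trials.filter { !$0.isOwner }
    let owners = trials.filter { $0.isOwner }
    let ia = Double(imposters.filter { $0.score >= threshold }.count) / Double(max(imposters.count, 1))
    let fr = Double(owners.filter { $0.score < threshold }.count) / Double(max(owners.count, 1))
    return (ia, fr)
}
```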
As is typical for Apple, it's a process that involves both hardware and software.
And yeah, that goes right down to the silicon, thanks to the always-on processor (AOP) embedded in the motion coprocessor, which itself now lives inside the A-series system-on-a-chip.
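The hardware split follows a familiar cascade pattern: a tiny, low-power detector runs continuously on the AOP, and only when it fires does a larger, more accurate model wake the main processor. This is my own illustration of that control flow, not Apple's code; the placeholder scoring stands in for the real models.

```swift
// Stage 1: a cheap, always-on score. The real version is a small neural
// network running on the AOP; this placeholder just averages the frame.
func alwaysOnScore(_ frame: [Float]) -> Float {
    frame.reduce(0, +) / Float(max(frame.count, 1))
}

// Stage 2: a larger, more expensive check that only runs after stage 1
// fires. Again, a placeholder standing in for an actual model.
func mainProcessorConfirms(_ frame: [Float]) -> Bool {
    alwaysOnScore(frame) > 0.5
}

// The cascade: stage 1 gates stage 2, so the main processor stays
// asleep for the vast majority of incoming audio.
func shouldTrigger(on frame: [Float]) -> Bool {
    guard alwaysOnScore(frame) > 0.2 else { return false }
    return mainProcessorConfirms(frame)
}
```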
The series is fascinating, and I very much hope the team keeps it going. We're entering an age of ambient computing, where we have multiple voice-activated AI assistants not just in our pockets but on our wrists, on our laps and desks, in our living rooms, and throughout our homes.
Voice recognition, voice differentiation, multi-personal assistants, multi-device mesh assistants, and all sorts of new paradigms are growing up around us to support that technology. All while trying to make sure it stays accessible... and human.