Wow, Pixel 2 XL can't actually shoot in Portrait Mode — it does it all in post
I'm using the Google Pixel 2 XL as my primary phone this week. I've owned "the Google Phone", off and on, since the Nexus One launched in 2010, and I bought the original Pixel last year. Google does a lot of interesting and exciting things and I like to keep current with them.
One of the things I was most interested in checking out this year was Google's version of Portrait Mode. (Yeah, fine, Google used Apple's name for the feature, but consistency is a user-facing feature.)
So, as soon as I had the Pixel 2 XL set up, I fired up the camera and got set to shoot me some Portraits Mode. But... I didn't see the option.
A tale of two Portrait Modes
On iPhone, Portrait Mode is right up-front, in-your-face labeled, and just a swipe to the side. On Pixel 2 XL, I eventually discovered, it's hidden behind the tiny menu button on the top left. Tap that first. Then select Portrait Mode from the drop-down menu. Then you're in business. Sort of.
At first, I thought I was Portrait Mode-ing it wrong. I framed a photo and... nothing. No depth effect. No blur. I double checked everything and tried again. Still no blur. Nothing. I took a couple shots. Nothing and more nothing.
Exasperated, I tapped on the photo thumbnail to take a closer look. The full-size photo leapt up onto my screen. And it was completely in focus. Not a bit of blur to be seen. Then, a few seconds later, it happened. Bokeh happened.
One of these Portrait Modes is not like the other
It turns out, Pixel 2 XL can't actually shoot in Portrait Mode. By that I mean it can't render the depth effect in real time and show it to you in the preview before you capture the photo.
It can still use the dual pixels in its phase-detect auto-focus system to grab basic depth data (at least on the rear camera — the front camera has no PDAF system, so there's no depth data for portrait selfies) and combine that with its machine learning (ML) segmentation map, but only after you open the image in your camera roll. Only in post.
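To make the "only in post" distinction concrete, here's a minimal sketch of what a post-capture depth effect amounts to. This is not Google's actual pipeline; the file names, the `fake_portrait` function, and the idea that depth and segmentation arrive as ready-made grayscale images are all assumptions for illustration. On the phone, the depth data and the ML segmentation map are generated internally, then used to blur the background after the shot is taken.

```python
# Minimal sketch (not Google's pipeline): apply a synthetic depth-of-field
# effect in post by blending a sharp photo with a blurred copy, weighted by
# a combined depth map and subject segmentation mask.
import numpy as np
from PIL import Image, ImageFilter

def fake_portrait(photo_path, depth_path, mask_path, blur_radius=12):
    """Hypothetical inputs, all the same resolution:
    photo: RGB image; depth: grayscale, brighter = farther away;
    mask: grayscale, brighter = subject (the ML segmentation)."""
    photo = Image.open(photo_path).convert("RGB")
    depth = np.asarray(Image.open(depth_path).convert("L"), dtype=np.float32) / 255.0
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float32) / 255.0

    # Background weight: pixels that are far away AND not part of the subject.
    background = np.clip(depth * (1.0 - mask), 0.0, 1.0)[..., None]

    sharp = np.asarray(photo, dtype=np.float32)
    blurred = np.asarray(photo.filter(ImageFilter.GaussianBlur(blur_radius)),
                         dtype=np.float32)

    # Keep the subject sharp, blur the background -- all after capture, "in post."
    out = sharp * (1.0 - background) + blurred * background
    return Image.fromarray(out.astype(np.uint8))

# Example: fake_portrait("photo.jpg", "depth.png", "mask.png").save("portrait.jpg")
```

The point of the sketch is that nothing here has to happen before the shutter fires, which is exactly why the Pixel can do it with one lens and why you only see the blur seconds later in your camera roll.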
Difference between #PortaitMode first run UX on #Pixel2XL vs. #iPhoneX (or 7/8 Plus). 🧐🤔🤯 pic.twitter.com/Mvt2KjE19i — Rene Ritchie (@reneritchie) November 22, 2017
I didn't realize any of this when I first tried the Pixel 2 XL Portrait Mode. I hadn't noticed it in the Pixel 2 reviews I'd read. (When I went back and looked more carefully, I did see that a couple of them mentioned it in passing.)
Machine Learned
I guess that means the only thing more impressive than Google's machine learning process is its messaging process — it got everyone to focus on "can do it with just one lens!" and totally gloss over "can't do it live!" That's some amazing narrative control right there.
Now, undeniably, inarguably, Google does an amazing job with the segmentation mask and the entire machine-learned process. Some may call the results a little paper-cutout-like, but in most cases they're better than Apple's sometimes too-soft edging. And glitches are fewer as well. But it all only happens in post.
Otherwise, Google is absolutely killing it with the Pixel 2 XL. What they can do with a single lens, especially with the front one that provides no actual depth data, is industry leading. Hopefully, it drives Apple to up its own ML game. That's the great thing about Apple having the dual-lens system on the back and TrueDepth on the front: the company can keep pushing new and better bits and neural nets. It's much harder to retrofit new atoms.
Photographer vs. photography
What I like about Apple's Portrait Mode is that it doesn't just feel like an artificial intelligence (AI) filter you're applying to your photos in post. It doesn't feel like Prisma or Faces. It feels like you're shooting with a camera and lens that really produces a depth of field.
It informs my process and how I frame my shot. I can't imagine shooting without it any more than I can imagine shooting with my DSLR and fast prime lens and not seeing the image it will actually capture before I press the shutter.
And, at least for me, it's way better than shooting, switching, waiting, checking, switching back, shooting again, switching, waiting, checking... on and on.
Showing the real-time depth effect preview on iPhone 7 Plus, iPhone 8 Plus, and iPhone X wasn't easy for Apple. It took a lot of silicon and a lot of engineering. But it unites camera and photo.
It makes it feel real.