Portrait Mode: iPhone X Camera vs. DSLR vs. Pixel 2 XL
Computational photography is the biggest leap forward in image capture since digital photography freed us from film. iPhone X, like iPhone 8 Plus and iPhone 7 Plus, uses it along with a dual-lens camera system to capture depth data, then applies machine learning to create an artificial bokeh effect. The Pixel 2 XL borrows the phase-detection autofocus (PDAF) system to grab depth data, combines it with a machine-learned segmentation map, and creates a similar artificial bokeh.
But how do they compare to the optical quality of a Canon 5D Mark III paired with a 50mm ƒ/1.4 lens that doesn't need to compute or simulate anything?
iPhone X = DSLR-quality... Maybe?
Canon 5D Mark III with 50mm ƒ/1.4 lens
This is the reference. An amazing sensor in the camera body combined with a terrific fast prime lens makes for an amazingly terrific photo. Go figure.
There's no depth data, segmentation mapping, machine learning, or any other processing involved, just the gorgeous physics of light and glass. The separation between subject and background is "perfect" and the bokeh is consistent across elements and lines.
Apple iPhone X
On iPhone X, like iPhone 8 Plus and iPhone 7 Plus, Apple uses a dual-lens camera system to capture both the image and a layered depth map. (It was 9 layers as of iOS 10, including foreground and background layers; it may be more by now.) It then uses machine learning to separate the subject and apply a custom disc-blur to the background and foreground layers. Because of the layers, it can apply the disc-blur to lesser and greater degrees depending on the depth data. So, closer background elements can receive less blur than background elements that are further away.
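To make that idea concrete, here's a minimal sketch, not Apple's actual pipeline, of how a blur radius might scale per depth layer. The layer type, the linear falloff, and the maximum radius are all assumptions for illustration.

```swift
// Sketch only: farther background layers get a bigger disc-blur radius
// than layers closer to the in-focus subject.
struct DepthLayer {
    let meanDisparity: Float   // roughly 1.0 = very near, 0.0 = very far
}

// Hypothetical helper: pick a blur radius for one layer relative to the
// in-focus subject. Max radius and linear scaling are illustrative.
func blurRadius(for layer: DepthLayer,
                subjectDisparity: Float,
                maxRadius: Float = 24) -> Float {
    let distanceFromSubject = abs(layer.meanDisparity - subjectDisparity)
    return min(maxRadius, distanceFromSubject * maxRadius)
}

// Subject layer stays sharp; near background gets some blur; far background gets the most.
let layers = [DepthLayer(meanDisparity: 0.85),
              DepthLayer(meanDisparity: 0.45),
              DepthLayer(meanDisparity: 0.10)]
for layer in layers {
    print(blurRadius(for: layer, subjectDisparity: 0.85))
}
```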
Apple can display the Portrait Mode effect live during capture, and stores the depth data as part of the HEIF (High Efficiency Image File Format) or stuffs it into the metadata of JPG images. That way, the effect is non-destructive and you can toggle depth mode on or off at any time.
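As a rough sketch of what "stored alongside the image" means in practice, the depth (disparity) map can be read back out of a Portrait photo with ImageIO and AVFoundation. The file name below is a placeholder, and this is just one way to get at the auxiliary data, not a description of Apple's internal pipeline.

```swift
import AVFoundation
import ImageIO

// Sketch: read the auxiliary disparity data stored with a Portrait photo.
// "portrait.heic" is a placeholder file name.
let url = URL(fileURLWithPath: "portrait.heic") as CFURL
if let source = CGImageSourceCreateWithURL(url, nil),
   let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
       source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any],
   let depthData = try? AVDepthData(fromDictionaryRepresentation: auxInfo) {
    // The photo's pixels are untouched; because depth travels with the file,
    // the blur can be re-rendered or switched off after the fact.
    print(depthData.depthDataType)
}
```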
In practice, Apple's Portrait Mode looks overly "warm" to me. It appears as though the iPhone's camera system is allowing highlights to blow out in an effort to preserve skin tones. It's generally consistent in how it applies the blur effect, but it can be far too soft around the edges. In low light, the custom disc-blur can look gorgeous, and the noise seems deliberately pushed away from a mechanical pattern and into an artistic grain.
The result is imperfect images that pack powerful emotional characteristics. You see them better than they look.
Google Pixel 2 XL
On Pixel 2 and Pixel 2 XL, Google uses machine learning to analyze the image and create a segmentation mask that separates the subject from the background. If available, it also double-dips on the dual pixels in the regular single-lens camera's phase-detection autofocus (PDAF) system to get baseline depth data. Google then combines the two and applies a blur effect in proportion to the depth. (I'm not sure what kind of blur Google is using; it may be a disc-blur like Apple's.)
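Here's a minimal sketch of that combination, not Google's actual algorithm: the segmentation mask decides which pixels stay sharp, and the depth estimate scales the blur for everything else. The function, parameter names, and values are all illustrative.

```swift
// Sketch only: gate blur with the segmentation mask, then scale what's left
// by the PDAF-derived depth estimate.
func blurAmount(isSubjectPixel: Bool,
                disparity: Float,          // from dual-pixel PDAF, ~1.0 = near
                subjectDisparity: Float,
                maxBlur: Float = 20) -> Float {
    // Pixels the segmentation network assigns to the subject stay sharp,
    // which is part of why Pixel's edges can look crisp, even cutout-like.
    if isSubjectPixel { return 0 }
    // Everything else blurs in proportion to its distance from the subject plane.
    return min(maxBlur, abs(disparity - subjectDisparity) * maxBlur)
}

// Example: a distant background pixel gets heavy blur.
print(blurAmount(isSubjectPixel: false, disparity: 0.1, subjectDisparity: 0.8))
```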
In practice, Google's Portrait mode looks a little "cold" to me. It seems to want to prevent blowouts even at the expense of skin tones. The blurring isn't as consistent, but the edge detection is far, far better. At times, it can look too sudden, almost like a cutout, and it will preserve details even a real camera wouldn't. It doesn't resort to artistry to compensate for the limitations of the system; it pushes towards a more perfect system.
The result is images that are almost clinical in their precision. They sometimes look better than you see them, even when compared to a DSLR.
Moving targets
Which photo you prefer will be entirely subjective. Some people will gravitate towards the warmth and artistry of iPhone. Others, the almost scientific precision of Pixel. Personally, I prefer the DSLR. It's not too hot, not too cold, not too loose, not too severe.
It's also completely unbiased. Apple's and Google's portrait modes still skew heavily towards human faces; that's what all that face detection is used for. You can get heart-stopping results with pets and objects, but there just aren't enough models yet to cover all the wondrous diversity found in the world.
The good news is that computational photography is new and improving rapidly. Apple and Google can keep pushing new bits, new neural networks, and new machine learning models to make it better and better.
Portrait mode on iPhone has gotten substantially better over the last year. I imagine the same will be true for both companies this year.