Have you ever wondered how the Pixel 3 and Pixel 3 XL predict depth of field without a dual camera? In a post on its official blog, the Mountain View giant explains in detail how it takes advantage of machine learning to do amazing things.
The third-generation Google Pixels both have some of the best cameras on a smartphone today. Yet many have wondered how it is possible to achieve such incredible results using a single sensor instead of two or three, as seen on competing devices. Even without a second rear camera, Google's devices produce a bokeh effect in portrait mode thanks to software.
Last year the Pixel 2 and Pixel 2 XL used dual-pixel Phase Detection Autofocus (PDAF) along with an in-house algorithm. PDAF captures two slightly offset views of the same scene, and the resulting parallax between them can be converted into a depth estimate. But that wasn't enough for Google.
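The parallax-to-depth idea can be sketched with the standard pinhole stereo model: a point's apparent shift (disparity) between the two PDAF views is inversely proportional to its distance. The function and numbers below are illustrative assumptions, not Pixel hardware values.

```python
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Estimate depth (mm) from stereo disparity using Z = f * B / d.

    focal_px     -- focal length expressed in pixels (assumed value)
    baseline_mm  -- separation between the two viewpoints (tiny for PDAF)
    disparity_px -- how far the point shifts between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Illustrative example: a 3000 px focal length, a 1 mm baseline, and a
# half-pixel disparity put the point roughly 6 meters away.
print(depth_from_disparity(3000, 1.0, 0.5))  # 6000.0 (mm)
```

The tiny baseline is exactly why PDAF-only depth is noisy for distant subjects: half a pixel of disparity is all the signal there is, which is where Google's learned approach comes in.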
With the Pixel 3 models, the search giant has made a few tweaks, including comparing how objects in the background blur differently from objects closer to the camera. Google then uses AI and machine learning to exploit semantic cues as well: for example, counting the number of pixels a person's face occupies in an image to estimate their distance from the camera.
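The face-size cue follows from the same pinhole model: a face of roughly known real-world width appears smaller in pixels the farther away it is. This is a minimal sketch of that reasoning; the function name and all numbers are assumptions for illustration, not Google's implementation.

```python
def distance_from_face_width(focal_px: float, real_width_mm: float, width_px: float) -> float:
    """Estimate subject distance (mm) from apparent size using Z = f * W / w.

    focal_px      -- focal length in pixels (assumed value)
    real_width_mm -- assumed true width of a human face (~150 mm on average)
    width_px      -- width the face occupies in the image
    """
    if width_px <= 0:
        raise ValueError("apparent width must be positive")
    return focal_px * real_width_mm / width_px

# Illustrative example: a face spanning 300 px with a 3000 px focal length
# and an assumed 150 mm face width comes out about 1.5 meters away.
print(distance_from_face_width(3000, 150, 300))  # 1500.0 (mm)
```

In practice a neural network learns many such cues jointly, along with defocus and parallax, rather than applying any single formula.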
The company even built a rig (a "Frankenphone," if you will) that houses five Pixel 3 devices to shoot the same scene at once from slightly different angles. Using Wi-Fi, the company triggered all five captures simultaneously (within about 2 milliseconds of each other), giving it multiple parallax views of each scene and helping it train a more accurate depth model.
Don't you think it's amazing what Google has been able to do with just one camera? Let us know in the comments.