After much confusion about the dedicated image processing chip in the Google Pixel 2 and Pixel 2 XL smartphones, the purpose of the Pixel Visual Core is finally becoming clear. The February patch included an update that turns on the Visual Core to improve photos taken by third-party apps through the old Camera API. Google's own camera app, however, still doesn't make use of the special chip.
The Pixel Visual Core was discovered by chance during a teardown by the smartphone repair site iFixit. Afterward, it became clear that it was a dedicated, but still inactive, chip for photo processing. A November Fonearena interview with Brian Rakowski, VP of Product Management at Google, and Tim Knight, head of the Pixel camera team, could have cleared things up at that point and put a stop to the mounting confusion. Nevertheless, the confusion persisted...
I got a fun correction from Google today: The Google Camera app does not use the Pixel Visual Core. Google's camera app doesn't use Google's camera chip. Facebook and Snapchat are the first ever uses of it.— Ron Amadeo (@RonAmadeo) February 7, 2018
Google Product Management VP Brian Rakowski said in the aforementioned interview, "The Visual Core which we will be turning on in the coming apps will primarily be for 3rd party apps." But wouldn't the photos taken with the Google Camera app look much better with the Visual Core? Or at least be processed more efficiently with it? On this point, he says only: "Turns out we do pretty sophisticated processing, optimising and tuning in the camera app itself to get the maximum performance possible. [...] So we don’t take advantage of the Pixel Visual Core, we don’t need to take advantage of it."
A glance at the data sheet also shows why Google doesn't need to use the Pixel Visual Core. The Pixel 2 and Pixel 2 XL have a Snapdragon chipset, which in turn includes a Hexagon digital signal processor optimized for tasks like image processing and expected to deliver comparable speed and efficiency. Relying on it has two advantages: as long as Google optimizes for the Hexagon DSP, the camera app achieves similar results on both first- and second-generation Pixel devices - the first-generation models have no Visual Core - and it does so without any changes to the code.
They still use Hexagon (QDSP) from the Google Camera app on the Pixel 2 (XL) just like the Pixel (XL) and they still have a special google_camera_app SELinux policy domain. Other apps can't do what Google Camera does right now.— CopperheadOS (@CopperheadOS) February 7, 2018
In contrast, Google can program the Visual Core so that it performs only a single task. It does not have to be a jack of all trades like the Hexagon unit, but instead deals only with pictures that an app delivers using the Camera.takePicture() method. This method, however, is part of the old Camera API, and it seems to be used by WhatsApp, Instagram and others, but not by Google's own camera app.
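For context, the legacy path described above looks roughly like this in an Android app. This is a minimal sketch of the deprecated android.hardware.Camera API; the preview setup is abbreviated and the saveJpeg helper is hypothetical, not part of the API:

```java
import android.hardware.Camera;

public class LegacyCaptureExample {

    @SuppressWarnings("deprecation")
    public static void capture() {
        // Open the default rear camera via the old API (deprecated since API 21).
        // A real app must also set a preview surface before starting the preview.
        Camera camera = Camera.open();
        camera.startPreview();

        // takePicture() is the entry point the Visual Core hooks into:
        // on a Pixel 2 with the February update, third-party apps calling it
        // reportedly get the chip's HDR+ processing applied to the JPEG.
        camera.takePicture(
                null,  // no shutter callback
                null,  // no raw callback
                (data, cam) -> {
                    // 'data' holds the finished JPEG bytes.
                    saveJpeg(data);  // hypothetical helper
                    cam.release();
                }
        );
    }

    private static void saveJpeg(byte[] jpeg) {
        // Persist the bytes, e.g. via MediaStore (omitted for brevity).
    }
}
```

Apps using the newer camera2 API take a different code path, which is presumably why the update targets exactly this older method.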
A chip was needed for this?
The fact that Google (together with Intel) developed and integrated a whole chip for this may seem like overkill. Indeed, one wouldn't expect Google to assign so few tasks to the co-processor. However, this might just be a bit of flexing by the engineers at Google and a small show of power over the other chip makers, especially Qualcomm.
Google hoards the know-how about the internal workings of its chips and keeps it a closely guarded secret. This is partly because smartphone innovation is largely determined by chip designers. The lifetime of a smartphone is also capped by them, as software updates are possible only in conjunction with matching kernel drivers for the chipsets. Google created Project Treble with this in mind.
Whether Google will ultimately call the Visual Core a success remains to be seen. Until then, the Instagram photos of your Pixel 2-owning friends will look even more #nofilter than before.