Interview: Google sheds light on the "software-defined camera"

The camera plays a critical role in the high-end smartphone sector. Google has made great progress in recent years, delivering outstanding results with the Pixel and Pixel 2. We had the opportunity to speak with Isaac Reynolds, product manager for the Pixel 2 camera, who gave us some insight into the camera’s particular philosophy.

There’s a lot of discussion around smartphone cameras, not just about image quality but also about how the images are produced: manual settings or a fast automatic mode? This inevitably leads to the question of how a smartphone camera relates to an expensive SLR. So where does Google stand on these questions?

Isaac Reynolds, product manager for the camera in the Pixel 2. / © Ulrich Schaarschmidt/Google LLC

Isaac Reynolds has a clear view of this discussion: typically, the camera is seen primarily from a hardware perspective. “But historically Google is more of a software company,” he told us. With the Google Pixel, this shows in the camera, among other areas. The hardware is still significant, of course, but Google relies mainly on software. Here Reynolds is referring in particular to the HDR+ technology, which Google has now been working on for six years. In a sense, the Pixel camera is a “software-defined camera”: it is defined primarily by its software.

HDR+ makes the difference

HDR+ doesn’t work like a classic HDR mode, which combines several images taken at different exposure settings into one. Instead, HDR+ captures up to ten underexposed frames at the same exposure setting, and the HDR+ algorithm merges them into a single image. This way, the Pixel camera can achieve the light-gathering effect of a long exposure without the motion blur an actual long exposure would cause. Various alignment techniques ensure that the merged image stays sharp.
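The align-and-merge idea described above can be sketched in a few lines. This is only an illustration of the principle, not Google's actual HDR+ pipeline: each frame in the burst is aligned to the first frame tile by tile (here with a brute-force translational search, where real implementations are far more sophisticated), and the aligned frames are then averaged, which reduces noise roughly the way a longer exposure would.

```python
import numpy as np

def hdr_plus_merge(frames, tile=16, search=4):
    """Merge a burst of identically exposed grayscale frames.

    Toy sketch of the HDR+ align-and-merge idea: align each frame to
    the first one tile by tile, then average, so noise drops roughly
    like it would with a longer exposure.
    """
    ref = frames[0].astype(np.float64)
    h, w = ref.shape
    acc = ref.copy()
    for frame in frames[1:]:
        f = frame.astype(np.float64)
        aligned = np.empty_like(ref)
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                ref_tile = ref[y:y + tile, x:x + tile]
                th, tw = ref_tile.shape
                best, best_err = None, np.inf
                # brute-force search for the best small translation
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if yy < 0 or xx < 0 or yy + th > h or xx + tw > w:
                            continue
                        cand = f[yy:yy + th, xx:xx + tw]
                        err = np.mean((cand - ref_tile) ** 2)
                        if err < best_err:
                            best, best_err = cand, err
                aligned[y:y + tile, x:x + tile] = best
        acc += aligned
    return acc / len(frames)
```

Averaging N aligned frames cuts noise by roughly the square root of N, which is why a burst of short, sharp exposures can stand in for one long one.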

A manual mode is therefore difficult to implement on the Pixel, because it would bypass the HDR+ functionality: HDR+ processes up to ten frames, which Google could in principle expose as RAW files, but no existing software could merge them back into a finished image. Google has therefore decided against RAW functionality in the Google camera.

The Pixel camera is known for HDR+ (first generation Pixel pictured here). / © NextPit

Software is also used for zooming to deliver better results. This includes RAISR, a super-resolution algorithm that helps recover detail in high-contrast areas. It works particularly well with lettering.
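The core idea of RAISR is to choose a filter per patch based on local image structure rather than applying one filter everywhere. The sketch below is a heavily simplified stand-in for that idea, not the actual RAISR algorithm: after naive upscaling, flat regions are left smooth while high-contrast pixels get an unsharp-mask boost, which is why detail such as lettering benefits most.

```python
import numpy as np

def detail_aware_upscale(image, factor=2, edge_thresh=0.1):
    """Upscale, then sharpen only high-contrast pixels.

    Toy illustration of filter selection by local structure (the idea
    behind RAISR), not the real algorithm.
    """
    # naive nearest-neighbor upscale
    up = np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)
    # local gradient magnitude decides which "filter" applies
    gy, gx = np.gradient(up)
    grad = np.hypot(gx, gy)
    # 3x3 box blur as the smoothing component of an unsharp mask
    padded = np.pad(up, 1, mode="edge")
    blurred = sum(padded[dy:dy + up.shape[0], dx:dx + up.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    sharpened = up + 1.5 * (up - blurred)
    return np.where(grad > edge_thresh, sharpened, up)
```

Real RAISR learns many small filters offline and hashes each patch to the right one; the gradient threshold here is just the simplest possible version of that per-patch decision.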

Google has also placed great emphasis on portrait mode. Although it simulates the optical bokeh of a real lens, Google does it in its own way: it always keeps people in focus, even when they are not in the plane of focus. To do this, Google combines two technologies. First, depth is estimated from two images taken from slightly different angles, which the dual-pixel sensor makes possible through its small but sufficient displacement. Second, machine learning is used to match the contours of people more accurately.
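The combination of the two signals can be sketched as follows. This is a toy illustration of the idea described above, not Google's implementation: a disparity-based depth estimate and a machine-learned person mask are combined, and only pixels that look like background in both signals get the synthetic bokeh blur, which is how a person outside the plane of focus can still stay sharp.

```python
import numpy as np

def portrait_blur(image, disparity, person_mask, near_thresh=0.5):
    """Blur background pixels, keeping near objects and people sharp.

    Toy sketch: `disparity` is a dual-pixel depth proxy (higher =
    nearer), `person_mask` a learned segmentation (higher = person).
    """
    # 3x3 box blur standing in for a proper bokeh kernel
    padded = np.pad(image, 1, mode="edge")
    blurred = sum(padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    # blur only where BOTH signals say "background"
    background = (disparity < near_thresh) & (person_mask < 0.5)
    return np.where(background, blurred, image)
```

Using the intersection of the two signals means the person mask can veto the depth estimate, which matches the article's point that people stay in focus even outside the focal plane.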

Google: Point and shoot is the goal

In general, the concept behind the Google camera is above all to provide a simple photo solution. Reynolds currently rejects elaborate modes with object recognition. The team has modern media formats such as AV1 on its radar for the Google camera, but ecosystem support still needs to grow. A Pixel user doesn’t have to worry about storage space anyway, because pictures are synchronized to Google Photos via the cloud. But that also means that, for Google, Google Photos and the Google camera belong together.

Point and shoot: The motto for the Google camera. / © Ulrich Schaarschmidt/Google LLC

Pixel Visual Core: Google doesn’t use the chip

Reynolds also clarified an issue that Google had previously communicated inconsistently. The Pixel 2 contains the Pixel Visual Core, a processor designed to accelerate machine learning workloads. At first it sat unused, and in fact the Google camera still doesn’t use it to this day.

The Visual Core is currently used only by third-party apps, for which it provides HDR+ functions. Reynolds said there are reasons for this: Google wanted to give app developers easy access to HDR+, but its own camera app needs more complex functions that cannot currently be implemented on the Visual Core. In the future, however, it is conceivable that Google’s own camera will make use of the Visual Core as well.

The duality of the dual camera

Dual cameras are the trend of the hour, and hardly any high-end smartphone does without one. However, a second camera does not automatically improve image quality. Reynolds points out that a dual camera adds costs that translate into higher smartphone prices: two image sensors cost more than one, and dual cameras consume more memory and processing resources. A dual camera therefore requires compromises that Google apparently didn’t want to make with the Pixel 2 XL.

Is the camera an important purchase criterion for you? And which camera do you currently use? Let us know in the comments!



  • Albin Foro May 14, 2018 Link to comment

    Fine for Google if it wants its stock camera app to maximize algorithmic point and shoot at the expense of manual / RAW functionality for billions of casual users - what's important is to continue to design Android, as best possible, to facilitate third party apps with different goals and strengths and to facilitate other hardware OEMs to continue to experiment with other sensor, lens and software configurations. The key (for now) has to be full Camera2 API implementation. The sad fact is it's a dog's breakfast among different phone models regarding the unlocking of Camera2's RAW and manual settings potentials, and third party app developers seem to be stymied by the plethora of new hardware offerings on cameras with only partial access to the API. My guess is RAW and manual exposure controls will produce better low-light images than HDR+ (algo-processed multishot handshake and subject motion blur) in a lot of shooting situations - both should be available to serious photographers.

    Google seems to be designing a photo app for Pixel, etc. that is "hard to beat" with manual settings by casual users. Have to agree with the comment that there's no comparison of image output from a DSLR with a large sensor and decent glass, or even a large sensor compact camera like Canon's G series, but that's got to be in the hands of a knowledgeable user. Last decade's DSLR boom with millions of retired duffers running around with $5-digits of camera gear shooting on "Auto" is long gone - the gear is stowed away in a box and they're all using their phones now - DSLR sales have slumped back to the natural market of professionals and serious hobbyists.

  • Rusty H. May 14, 2018 Link to comment

    I had to stop reading at this: "question of what the relationship is between a smartphone camera and an expensive SLR camera".
    As an amateur photographer for almost 40 years, there can be NO relation between a super tiny smartphone image sensor, and even a consumer grade APS-C image sensor. Not even taking the lens on the front into consideration, when you look at the image sensor in the Google phone, a 1/2.6 size sensor, which is MANY times smaller than even the APS-C, not to mention the full frame d-SLR sensor, the light gathering level is nowhere near what a d-SLR can achieve. Then you have the issues with the signal to noise ratio, crosstalk with that many sensors packed that tight together, and how they have to use software in hopes of knocking down noise in low or imperfect light.
    I wish this whole slim, colorful, sexy, stylish crap with phones would disappear, and, they would opt for something like the galaxy s4 zoom, which had a slightly larger sensor, a real OPTICAL zoom lens mounted on the device.
    Don't get me wrong, I'm amazed pinhole sensors work this well, but the comparisons, or "just as good as a d-SLR" is just bunk.