Smartphones with a dual camera – How does it work and what’s in it for you?

On Wednesday, Apple announced the iPhone 7 and iPhone 7 Plus, once again paying a lot of attention to the camera quality of the phones. That is easy to understand: thanks to large sensors and large apertures, the image quality of many high-end smartphones is already on par with a traditional compact camera. The video options have also expanded in recent years, with 4K recording and impressive slow-motion modes. It is therefore not surprising that in many cases the smartphone has become our primary camera: excellent quality and always at hand. This makes the camera one of the most important elements of a smartphone.

There is only one front on which the smartphone camera still falls short: optical zoom. There are some smartphones that can zoom optically, such as the Samsung Galaxy K zoom, and models with a snap-on camera, such as the Moto Z Play, but that's about it. Because of the moving lens elements, such a product quickly becomes relatively large and heavy. Current smartphones can only zoom digitally; this does not reveal more detail, but comes at the expense of image quality.
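To make that concrete, here is a minimal, purely illustrative sketch (plain Python, a grayscale image as nested lists) of what 2x digital zoom does: crop the center of the frame and upscale it back by pixel repetition. The output has the same pixel count as the input, but no new detail — each remaining source pixel is simply duplicated.

```python
def digital_zoom_2x(img):
    """2x digital zoom: crop the central quarter, then nearest-neighbour
    upscale back to the original dimensions."""
    h, w = len(img), len(img[0])
    # Crop the central (h/2) x (w/2) region: the "zoomed" field of view.
    crop = [row[w // 4: w // 4 + w // 2] for row in img[h // 4: h // 4 + h // 2]]
    # Upscale back to h x w by repeating every pixel as a 2x2 block.
    out = []
    for row in crop:
        expanded = [p for p in row for _ in (0, 1)]
        out.append(expanded)
        out.append(list(expanded))
    return out

img = [[x + 8 * y for x in range(8)] for y in range(8)]
zoomed = digital_zoom_2x(img)
print(len(zoomed), len(zoomed[0]))  # 8 8: same size, but half the real detail per axis
```

An optical zoom, by contrast, changes the angle of view before the light hits the sensor, so every output pixel still carries unique scene information.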

The solution seems to lie in simply adding a second camera, as Apple has now done with the iPhone 7 Plus and as we have seen before on various Android phones. However, two cameras allow for more than just zooming in without loss of quality: measuring depth, taking 3D photos and videos, merging different exposures, improving image quality and using different focal lengths. In this article we look at the recent history of the dual camera, the techniques behind it and the possible applications.

Not new

The concept of the dual camera is not new. At the beginning of 2014, HTC released the One M8, which was equipped with two cameras. The second camera was not used to capture images, but to measure depth. You could use that to refocus afterwards, but the implementation was very poor: a blur filter was simply applied to mimic the effect of limited depth of field. The Honor 6 Plus used the same premise, but with two identical cameras and two different focus points. That implementation did not turn out to be revolutionary either.

The LG G5 and V20 are also equipped with two different cameras, in this case with two different focal lengths: standard wide angle and super wide angle. So you can't zoom in, but you can switch between the two. This can come in handy in certain situations, although our review shows that the super-wide-angle camera delivers somewhat lower image quality.

The Nokia PureView 808 and Lumia 1020 have only a single lens, but Nokia came up with a trick to enable 'zooming'. The sensor has a resolution of no less than 41 megapixels, which by default serves as source material from which 5-megapixel photos are distilled, so that you can zoom without visible loss of quality. This is called oversampling; it works quite well and limits the downsides of digital zoom.
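The zoom headroom that oversampling provides can be estimated with a back-of-the-envelope calculation. The sketch below uses the figures mentioned above and the standard crop relationship (resolution scales with the square of the linear crop factor); it is an approximation, not Nokia's actual pipeline.

```python
import math

SENSOR_MP = 41.0  # PureView 808 sensor resolution, in megapixels
OUTPUT_MP = 5.0   # default oversampled output resolution

# Zooming means reading an ever smaller crop of the sensor with less
# averaging; it stays visually "lossless" as long as the crop still
# contains at least OUTPUT_MP pixels. Pixel count scales with the square
# of the linear crop factor, hence the square root:
max_zoom = math.sqrt(SENSOR_MP / OUTPUT_MP)
print(f"max 'lossless' zoom: {max_zoom:.1f}x")  # about 2.9x
```

Beyond that factor, the camera would be interpolating again, just like ordinary digital zoom.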

Monochrome sensor

With the P9, Huawei opted for a slightly different implementation of the second camera, as did the Honor 8. Here, the sensor, lens construction and resolution of the two cameras are identical, but the second camera omits the Bayer color filter, so that its sensor produces monochrome photos. A sensor consists of light-sensitive photodiodes and is inherently color-blind. The color filter makes each pixel sensitive to a single color: 50 percent of the pixels see green, 25 percent blue and 25 percent red.

Such a color filter costs resolution, because each pixel captures only one of the three color channels; the missing values have to be reconstructed by interpolation (demosaicing). This always results in some loss of detail, roughly half the effective resolution compared to a monochrome sensor. A monochrome sensor should therefore produce more detailed images than a Bayer-based one, and can also retain more detail in shadows and highlights. Moreover, color noise is absent, although luminance noise is not.
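The pixel distribution of the Bayer pattern mentioned above can be illustrated with a small sketch (plain Python; the RGGB layout used here is the common variant, assumed for illustration):

```python
def bayer_channel(x, y):
    """Which colour channel an RGGB Bayer sensor records at pixel (x, y).
    Even rows alternate R, G; odd rows alternate G, B."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

# Count the channel distribution over an 8x8 tile of the sensor:
counts = {"R": 0, "G": 0, "B": 0}
for y in range(8):
    for x in range(8):
        counts[bayer_channel(x, y)] += 1
print(counts)  # {'R': 16, 'G': 32, 'B': 16} -> 25% red, 50% green, 25% blue
```

Every pixel thus misses two of its three channels, which is exactly what demosaicing has to interpolate — and what a monochrome sensor avoids entirely.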

Huawei’s claim is that the images from the two sensors are merged into a single image, which then combines the best of both worlds: color images with more detail, dynamic range and higher light output.

Does it work?

Huawei is rather vague about exactly how this is achieved, but technically it can be done in two ways. With the simplest method, the images are not merged but cleverly analyzed: the monochrome image is used as a reference to detect differences between the monochrome and the color photograph, for example in sharpness, shadow detail and noise. That information is then used to optimize certain areas of the color photo. Similar techniques are already in use, for example multi-shot noise reduction, which is very useful for astrophotography. For regular photos the benefit is doubtful, although that also depends on the implementation.
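As an illustration of the multi-shot principle, here is a small simulation (plain Python, synthetic Gaussian noise, illustrative values only): averaging N captures of the same pixel reduces random noise by roughly a factor of √N.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0  # the "real" brightness of one pixel
NOISE_SIGMA = 10.0  # per-shot sensor noise (standard deviation)

def capture():
    """One noisy reading of the pixel."""
    return TRUE_VALUE + random.gauss(0, NOISE_SIGMA)

# Compare the spread of single shots with that of 8-shot averages:
single = [capture() for _ in range(1000)]
stacked = [statistics.fmean(capture() for _ in range(8)) for _ in range(1000)]

print(f"single-shot noise: {statistics.stdev(single):.1f}")  # ~10
print(f"8-shot average:    {statistics.stdev(stacked):.1f}")  # ~10 / sqrt(8), i.e. ~3.5
```

The catch for a dual camera is that it has only two simultaneous captures, not eight, and from two different sensors at that — which limits how much of this gain is achievable in practice.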

The other method comes down to joining the images together through blending. This is similar to the way HDR photos are made: three or more photos with different exposures are merged into one photo that contains more dynamic range and therefore shows more detail in shadows and highlights. With HDR the source images are fairly equivalent, but that is not the case with a monochrome and a color photo; after all, the result must remain in color, so the black-and-white tones should not show in the final photo. This is possible with smart algorithms, but too cautious a blend has only a moderate effect, or none at all.
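A minimal sketch of such exposure blending (plain Python, one row of grayscale pixels in the 0–1 range; the Gaussian "well-exposedness" weight is borrowed from Mertens-style exposure fusion and is not any phone's actual pipeline):

```python
import math

def well_exposedness(v, mid=0.5, sigma=0.2):
    """Weight in (0, 1]; highest for pixel values near mid-grey."""
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def blend(exposures):
    """Per-pixel weighted average of several exposures of the same row."""
    out = []
    for pixel_values in zip(*exposures):
        weights = [well_exposedness(v) for v in pixel_values]
        total = sum(weights)
        out.append(sum(w * v for w, v in zip(weights, pixel_values)) / total)
    return out

under = [0.05, 0.10, 0.45]  # dark exposure: retains highlight detail
over = [0.50, 0.70, 0.98]   # bright exposure: retains shadow detail
print([round(v, 2) for v in blend([under, over])])  # [0.47, 0.59, 0.48]
```

Each output pixel leans toward whichever exposure recorded it best; blending a monochrome and a color frame would additionally have to keep the color channels intact, which is where the smart algorithms come in.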

Disappointing in practice

In theory it sounds convincing, because a sensor without a color filter really can capture more detail; the Leica M Monochrom is a good example of this. Yet we see little of it in practice. During our review of the Huawei P9, the two cameras did not appear to provide a significant advantage: there was hardly any extra sharpness, and we did not see the claimed extra light output, lower noise or greater dynamic range from the dual construction either. In fact, the black-and-white photos seemed to contain more noise, suggesting that the color photos receive more processing.

Corephotonics and Linx

In the recent past, two companies have loudly claimed that the combination of two cameras does lead to better image quality. Corephotonics and Linx, coincidentally both Israeli companies, have shown impressive results in various demos and white papers. Start-up Linx specializes in modular camera modules of up to four units, in which monochrome and color sensors can be combined, but which also enable depth measurement and improved focusing. The company was acquired by Apple in early 2015.

Competitor Corephotonics claims to be able to do the same, although it is still limited to two cameras. The company specializes in image optimization through computational photography. Like Linx, Corephotonics claims to enable better, more detailed photos thanks to the addition of a monochrome sensor, whose images are 'fused' with those of the color sensor.

Different focal lengths

One of the simpler but more interesting uses of two cameras is, of course, the use of different focal lengths. As described earlier, LG already applies this, but only with wide angles. Apple does the same with the just-announced iPhone 7 Plus, but with a 28mm f/1.8 wide-angle and a 56mm f/2.8 telephoto lens. For photos and videos you can choose between these two positions and thus actually zoom in, equivalent to 2x optical zoom.

The image quality will be many times better than with digital zoom, because the image is not artificially enlarged: there is actually a smaller angle of view and therefore more detail. In other words, such an implementation takes the image quality of smartphones one step further. In addition, a camera with a longer focal length is more suitable for portraits, because faces are less distorted than with a wide-angle lens.
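The '2x' claim follows directly from the geometry of the angle of view. A quick sketch (Python; the 36mm reference width is the full-frame convention behind 'equivalent' focal lengths such as 28mm and 56mm):

```python
import math

FULL_FRAME_WIDTH = 36.0  # mm; reference width for "equivalent" focal lengths

def horizontal_angle_of_view(focal_mm):
    """Horizontal angle of view in degrees for an equivalent focal length."""
    return math.degrees(2 * math.atan(FULL_FRAME_WIDTH / (2 * focal_mm)))

wide = horizontal_angle_of_view(28)  # ~65.5 degrees
tele = horizontal_angle_of_view(56)  # ~35.6 degrees: roughly half the view,
                                     # so subjects appear about twice as large
print(round(wide, 1), round(tele, 1))
```

Doubling the focal length roughly halves the field of view, which is exactly what a 2x optical zoom does — without any interpolation.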

Computational photography

An intermediate variant is also conceivable, in which computational photography is used to bridge the gap between the two focal lengths. This too is a specialty of Corephotonics, which recently announced two modules with an equivalent range of 3x and 5x optical zoom respectively.

According to the company, it works as follows. The user first chooses a focal length by zooming on screen in the regular way. When the shutter button is pressed, the camera produces two separate images at the two different focal lengths. These are then combined into a single image via a proprietary 'fusion engine', after which the result is processed in the regular manner by the image signal processor.

The company makes no concrete statements about how the merging of the two focal lengths works exactly. In principle it should cost sharpness and detail, but Corephotonics states that the effective resolution equals or even exceeds that of mechanical optical zoom. We question that: if you want to render a field of view equivalent to 35mm, for example, from a wide-angle and a telephoto image, the center of the frame will be very detailed thanks to the telephoto, but the area outside it significantly less sharp, since it has to be interpolated from the wide-angle image.
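Our objection can be made concrete with a small, purely hypothetical model: at an intermediate zoom level, only the central part of the frame can be filled with real telephoto detail, and that fraction depends on how far the chosen zoom is from the telephoto's native 2x.

```python
def telephoto_coverage(zoom, tele_zoom=2.0):
    """Fraction of the output frame that real telephoto pixels can cover
    at a given zoom level (1.0 = wide angle, tele_zoom = pure telephoto).
    The remainder must be interpolated from the wide-angle image.

    Hypothetical model for illustration, not Corephotonics' algorithm."""
    # The telephoto sees 1/tele_zoom of the wide field of view; at output
    # zoom level `zoom`, that maps to a center square of relative width
    # zoom / tele_zoom (capped at the full frame). Area is width squared.
    rel = min(zoom / tele_zoom, 1.0)
    return rel * rel

print(telephoto_coverage(1.25))  # 0.390625: only ~39% of the frame is "real" detail
print(telephoto_coverage(2.0))   # 1.0: pure telephoto, no interpolation needed
```

In this model, any zoom level below 2x leaves a ring of interpolated wide-angle pixels around a sharp telephoto center — the non-uniform sharpness we describe above.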

Dual camera in the iPhone 7 Plus

The iPhone 7 Plus is the first smartphone with both a wide-angle and a telephoto lens and could therefore have applied the zoom principle proposed by Corephotonics; after all, Apple has acquired competitor Linx, which also specializes in computational photography. Yet Apple does not seem to have dared to apply it, possibly because internal tests did not meet expectations. While Apple claims digital zoom now performs four times better than before, allowing you to zoom digitally up to 10x, it is unclear to what extent similar technology is used for this. Apple does use computational photography to create an artificial depth-of-field effect with bokeh, since the dual camera makes depth measurement possible.


Smartphones with dual cameras now seem to be on the rise. After HTC, LG, Huawei, Honor and now Apple, we are waiting for models from other manufacturers, such as Samsung and Sony. Kenichiro Yoshida, Sony's CFO, recently expressed the expectation that smartphones with two cameras will make a big advance in 2017. If the added value materializes and consumers are willing to pay extra for it, what we are seeing now is only the beginning. After all, a smartphone with more than two cameras is not unthinkable, especially if the various implementations, depth measurement, multiple focal lengths, image optimization and smart zoom, are combined.