The camera in the new iPhone XS is still 12 megapixels, but Apple has added a plethora of computational features that make it much more interesting than it seems. Here is a quick review of the most important ones and what they mean.
12 megapixels with a larger sensor
Apple has not specified exactly how large the new sensor is compared to the previous one, but any improvement in this area is always good news.
According to the company, each pixel measures 1.4 microns. Here, bigger is better, but 1.4 microns is similar to previous models, so we will have to wait and see the results. Both lenses have built-in stabilization, something very useful, especially on the telephoto lens.
The camera in the new iPhone leans heavily on computational photography. The company says that the A12 chip's Neural Engine is capable of performing a trillion operations per photo. In practice, what the processor does is a very advanced version of HDR. Instead of capturing a single photo, it takes nine and analyzes values such as exposure, white balance, focus, and face recognition in real time.
Then the chip selects the optimal values for each section and combines them into a single photo. It's like a camera's automatic mode, but on steroids.
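To get an intuition for how a multi-frame pipeline like this works, here is a toy exposure-fusion sketch in Python with NumPy. It is an illustrative assumption, not Apple's actual Smart HDR algorithm: each pixel in each frame is weighted by how well exposed it is (close to a mid-tone), and the frames are blended per pixel.

```python
import numpy as np

def fuse_frames(frames, target=0.5, sigma=0.2):
    """Toy exposure fusion: weight each pixel by how close it is to a
    well-exposed mid-tone value, then blend the frames per pixel.
    A sketch of the general idea, not Apple's Smart HDR pipeline."""
    stack = np.stack([f.astype(np.float64) for f in frames])  # (N, H, W)
    # Pixels near the mid-tone target get the highest weight.
    weights = np.exp(-((stack - target) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)             # normalize per pixel
    return (weights * stack).sum(axis=0)

# Simulate a burst of 9 frames of the same scene at different exposures.
rng = np.random.default_rng(0)
scene = rng.random((4, 4))                      # "true" scene, values in [0, 1]
burst = [np.clip(scene * g, 0, 1) for g in np.linspace(0.3, 1.7, 9)]
fused = fuse_frames(burst)
```

The real pipeline runs per-region analysis (faces, highlights, motion) on dedicated hardware; this sketch only captures the core idea of merging a burst into one image.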
Apple is sparing with technical details, but if the sample photos shown at the keynote are anything to go by, the camera is not only able to freeze the motion of objects in the image, but also achieves incredibly natural lighting in the process. These functions depend on the processor, not the camera, so they are available on the iPhone XS, the XS Max, and the iPhone XR.
Bokeh with depth control
To be fair, Apple has not invented anything new here. Adjustable background blur is something we already saw in the Galaxy Note 8, to mention just one example. Apple has been slow to introduce this feature, but it seems well integrated. As on the Galaxy, it is activated with portrait mode and can be adjusted after taking the photo.
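Post-capture depth control is possible because the phone stores a depth map alongside the photo. The following toy sketch, assuming a simple 3x3 box blur and a made-up `strength` slider (not Apple's implementation), shows the principle: the farther a pixel is from the chosen focal plane, the more it is blurred.

```python
import numpy as np

def adjustable_bokeh(image, depth, focus_depth, strength):
    """Toy depth-based blur: pixels far from the chosen focal plane are
    blended toward a local average, scaled by `strength` (0 = no blur).
    An illustrative sketch of post-capture depth control, not Apple's code."""
    # Simple 3x3 box blur built from shifted copies of the image.
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    blurred = np.mean(
        [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)],
        axis=0)
    # Blur amount grows with distance from the focal plane.
    alpha = np.clip(strength * np.abs(depth - focus_depth), 0.0, 1.0)
    return (1 - alpha) * image + alpha * blurred

rng = np.random.default_rng(1)
img = rng.random((6, 6))
depth = np.tile(np.linspace(0, 1, 6), (6, 1))   # depth grows left to right
sharp = adjustable_bokeh(img, depth, focus_depth=0.0, strength=0.0)
blurry_bg = adjustable_bokeh(img, depth, focus_depth=0.0, strength=2.0)
```

Because the blur is computed from the stored depth map rather than baked in by the optics, the `strength` (the simulated aperture) can be changed at any time after the shot.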