What is computational photography? It's the magic behind your phone's camera
Your phone's camera is really a computer.
Your smartphone's camera is much more than just a camera. Every time you take a photo, it does far more than hoover up tiny dots of light and display them on-screen. A large part of the behind-the-scenes magic is computational photography, which sounds complicated but is really a broad term for a camera manipulating an image digitally, using a built-in computer, rather than relying on good old-fashioned optics alone.
The camera in any modern smartphone acts as a standalone computer. It uses specialized computing cores to process the digital information captured by the camera sensor, then translates it into an image we can see on the display or share across the internet — or even print out and hang on the wall. It's very different from the old way, where photographers used film and a darkroom, but it's how every great phone camera works.
The image sensor is where it all starts. It's a rectangular array of tiny, light-sensitive semiconductors known as photosites. The end product, the finished image, is also a rectangular array, but one made of colored pixels. The conversion isn't a simple one-to-one map from photosites to pixels. That's where image processing comes in, and where computational photography begins.
How it works
The image sensor in your phone is covered by a pattern of red, green, and blue light filters, so a single photosite registers light in only one of those colors (really, one band of light wavelengths). The final image, though, has all three color values in every single pixel, and the relative intensities of red, green, and blue determine the color our eyes see. To bridge the gap, the processor estimates the two missing color values at each photosite from its neighbors, a step known as demosaicing.
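To make that concrete, here's a minimal, illustrative sketch of bilinear demosaicing in Python (using NumPy and SciPy), assuming the common RGGB Bayer filter layout. Real phone pipelines use far more sophisticated, edge-aware interpolation, but the principle is the same: every missing color value is estimated from nearby photosites that did record it.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic (a 2-D array of
    photosite readings). Each output pixel gets the two color values
    its photosite never measured, interpolated from its neighbors."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True  # red photosites
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True  # blue photosites
    g_mask = ~(r_mask | b_mask)                                 # green photosites

    # Standard bilinear interpolation kernels for the Bayer layout.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    k_g = np.array([[0.0,  0.25, 0.0 ],
                    [0.25, 1.0,  0.25],
                    [0.0,  0.25, 0.0 ]])

    rgb = np.zeros((h, w, 3))
    rgb[..., 0] = convolve(raw * r_mask, k_rb)  # fill in red everywhere
    rgb[..., 1] = convolve(raw * g_mask, k_g)   # fill in green everywhere
    rgb[..., 2] = convolve(raw * b_mask, k_rb)  # fill in blue everywhere
    return rgb
```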
Next, your phone's camera processor applies a sharpening algorithm. This accentuates edges while smoothing the transitions from one color to another. Remember, each pixel in the image can only be one color, but there are millions of colors to choose from. The edge between a red flower and a blue sky needs to look crisp without turning into a harsh, jagged boundary. Getting this right isn't easy.
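The classic textbook version of this is the unsharp mask, which sharpens by exaggerating the difference between an image and a blurred copy of itself; that difference is largest at edges, so edges get the boost. The sketch below is a simplified stand-in for whatever proprietary sharpening your phone actually runs:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=2.0, amount=1.0):
    """Sharpen an HxWx3 float image (values in [0, 1]) by amplifying
    the difference between it and a Gaussian-blurred copy."""
    # Blur spatially only (sigma of 0 on the channel axis).
    blurred = gaussian_filter(img, sigma=(radius, radius, 0))
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```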
Things like white balance and contrast are addressed next. These processes can make a big difference in the quality of the photo, as well as the actual colors in it. The changes are all just numbers; once your phone's algorithms have defined the color edges, it's much simpler to adjust the actual shade of a color or the level of contrast.
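Here's a toy example of one well-known white balance heuristic, the "gray-world" assumption, which scales each color channel so the scene averages out to neutral gray. Phone makers use far more elaborate (and secret) methods, but the spirit is the same: color correction is just arithmetic on the pixel numbers.

```python
import numpy as np

def gray_world_white_balance(img):
    """'Gray-world' white balance: assume the scene should average out
    to neutral gray, then scale each channel's numbers until it does.
    `img`: HxWx3 float array with values in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)  # average R, G, B
    gains = channel_means.mean() / channel_means     # per-channel scale factors
    return np.clip(img * gains, 0.0, 1.0)
```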
Finally, the output data is analyzed, and the image is compressed. Colors that are very close to each other are switched to the same color (because we can't see the difference), and where possible, groups of identical pixels are merged into a single piece of information, leading to a smaller output file.
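As a rough illustration, here's what "switching nearby colors to the same color" can look like in code: quantizing channel values so near-identical colors collapse together, followed by run-length encoding so a run of identical pixels is stored as a single (value, count) pair. Real formats like JPEG are much cleverer, but the goal is identical: fewer distinct values, smaller files.

```python
import numpy as np

def quantize_colors(img, levels=32):
    """Snap each 8-bit channel value to the nearest of `levels` evenly
    spaced values, so colors that were almost identical become identical."""
    step = 256 // levels
    return (img // step) * step + step // 2

def run_length_encode(values):
    """Store each run of identical values as a single (value, count) pair."""
    runs, count = [], 1
    for prev, cur in zip(values, values[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((values[-1], count))
    return runs

print(run_length_encode([7, 7, 7, 7, 3, 3, 9]))  # [(7, 4), (3, 2), (9, 1)]
```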
These tools create an image that's as accurate as one produced by the old film method. But computational photography can do a lot more by changing the data with those same algorithms. For example, portrait photography changes the edge detection and sharpening process, while night photography changes the contrast and color balance algorithms. And the "AI" scene detection modes in modern phones also use computational photography to identify what's in the shot — a sunset, for example — and adjust the white balance to produce a pleasing image with warm colors.
More recently, smartphones' increased processing power has considerably expanded what computational photography can do. It's a big part of why some of the best Android phones, like the Google Pixel 8 and Samsung Galaxy S24, take such great photos. The raw number-crunching power of these devices means phones can sidestep some of the weaknesses of their small sensors: where a much larger sensor might normally be required to take a clear photo in low light, computational techniques can intelligently brighten and denoise images to produce better-looking shots.
Take the Google Pixel series, for instance. Google is happy to explain how computational photography assists in taking great photos from modest hardware, so it's a good example of what we're talking about.
At the heart of the Pixel Camera app's magic is multi-frame photography, the technique behind Google's HDR+. It involves taking several photos in quick succession at different exposure levels, then stitching them together into a single image with even exposure throughout.
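Here's a heavily simplified sketch of the merge step, assuming the frames are already perfectly aligned: each pixel is weighted by how well exposed it is (how close it sits to mid-gray), so dark frames contribute the highlights and bright frames contribute the shadows. Google's actual HDR+ pipeline merges raw bursts and is far more sophisticated; this only shows the flavor of the math.

```python
import numpy as np

def fuse_exposures(frames):
    """Blend aligned frames shot at different exposures, weighting each
    pixel by how well exposed it is (i.e., how close it sits to mid-gray).
    `frames`: list of aligned HxW grayscale arrays with values in [0, 1]."""
    stack = np.stack(frames)                        # shape (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / 0.08)  # peak weight at mid-tones
    weights /= weights.sum(axis=0, keepdims=True)   # normalize across frames
    return (weights * stack).sum(axis=0)            # per-pixel weighted blend
```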
Because your phone was probably moving while you were taking the photo, HDR+ relies on Google's algorithms to put the image back together without any ghosting, motion blur, or other aberrations, while also intelligently reducing the appearance of noise. That's a very different process from what you might think of as photography. The computing power onboard and the code commanding it are just as important, if not more so, than the lens and the sensor.
Multi-frame photography is at the heart of most smartphones' night mode capabilities, including Google's Night Sight feature. These computation-heavy features not only take several long exposures over a few seconds but also compensate for all the movement inherent in a handheld shot. Once again, computational power is required to gather all that data and rearrange it into a pleasing, blur-free photo.
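The noise-reduction half of this is surprisingly simple in principle: sensor noise is random from frame to frame, while the scene isn't, so averaging N aligned frames cuts the noise by roughly the square root of N. A toy demonstration with synthetic noise:

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames. The scene is identical in every
    frame, but the noise is random, so it averages toward zero."""
    return np.mean(np.stack(frames), axis=0)

# Synthetic demo: a clean gradient plus noise, "shot" 16 times.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
burst = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(16)]
print(np.std(burst[0] - scene))            # ~0.10: noise in a single frame
print(np.std(merge_burst(burst) - scene))  # ~0.025: 0.10 / sqrt(16)
```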
Take this idea to the next step, and you have the Google Pixel's Astrophotography mode, which uses computational photography to compensate for the Earth's rotation, producing clear photos of the cosmos without overexposing details in the landscape.
The next step in computational photography is applying these techniques to video. Google is working to bring the same level of HDR+ processing it applies to still photos to video, too. It's not quite there yet, but each generation of Pixel phones is better than the last in this regard.
So when you notice your next phone taking way better photos than the model it's replacing, chances are it's not just the camera hardware that's responsible. Rather, it's the entire computer system behind it.