How did Google build the Portrait Light effect on the Pixel 5?


Over the past two years, we have heard the term "computational photography" countless times.

When it comes to computational photography, people naturally think of Google’s Pixel series of phones. The series arguably pioneered the field, revealing both the power and the charm of computational photography.

Precisely because computational photography proved so impressive, the phone makers who had been hesitating on the sidelines finally plunged in over the past two years. By then, Google had already moved on to fancier tricks.

“Portrait Light” originally launched in October this year with the Pixel 4a (5G) and Pixel 5, as a feature exclusive to that generation of Pixel. A few days ago, however, Google updated its camera and photo apps, bringing the feature to users on the Pixel 2 and later.

Inspired by the off-camera lights that portrait photographers use, Portrait Light can synthesize a repositionable light source and add it to the scene of a photo. It can also identify the direction and intensity of the existing lighting and automatically complement it.

Such a powerful computational photography feature naturally depends on machine learning with neural networks. Trained on a database of portrait photos, Portrait Light implements two new algorithms:

  • Automatic light placement: for a given portrait, the algorithm synthesizes an external light source and places it so that its direction and intensity are consistent with how a photographer would light the scene in reality.
  • Synthetic post-capture relighting: for a given lighting direction and portrait, synthetic light is added in the most natural-looking way.
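The article does not give Google's actual model, but the core idea of the second algorithm, adding a directional light to a portrait, can be illustrated with a toy Lambertian shading sketch. This is an assumption-laden simplification: it presumes you already have per-pixel surface normals (Google predicts geometry with a neural network) and ignores shadows, specularity, and skin translucency.

```python
import numpy as np

def add_synthetic_light(image, normals, light_dir, strength=0.4):
    """Toy sketch of post-capture relighting (NOT Google's method).

    image:     H x W x 3 float array in [0, 1]
    normals:   H x W x 3 unit surface normals (assumed given)
    light_dir: 3-vector pointing toward the synthetic light source
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    # Lambertian shading term: max(0, n . l) per pixel
    shading = np.clip(normals @ l, 0.0, None)            # H x W
    # Brighten each pixel in proportion to the extra light it receives
    relit = image * (1.0 + strength * shading[..., None])
    return np.clip(relit, 0.0, 1.0)
```

Pixels whose normals face the synthetic light brighten; pixels facing away are untouched, which is roughly the behavior you see when dragging the light around in the Pixel's editor.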

Consider the first problem: determining where the existing light comes from so a new light can be placed consistently. In reality, photographers rely on experience and perception, observing the intensity and position of the light falling on the subject’s face and then deciding how to light the shot. For an AI, however, determining the direction and position of the existing light sources is far from easy.
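To make the problem concrete, here is a deliberately simplified sketch of light-direction estimation. Google's system actually predicts a full omnidirectional lighting profile with a neural network; the version below instead assumes a bare Lambertian model, I ≈ a + n · l, and recovers the ambient level a and dominant light direction l by least squares from per-pixel normals and observed shading. Both the model and the assumption that normals are available are simplifications for illustration.

```python
import numpy as np

def estimate_light_direction(intensity, normals):
    """Estimate ambient level and dominant light direction from shading.

    Simplified sketch: assumes I ~ a + n . l (no shadows, no clipping)
    and solves for (a, l) by least squares over all pixels.

    intensity: H x W grayscale array
    normals:   H x W x 3 unit surface normals
    Returns (ambient, unit light_dir).
    """
    n = normals.reshape(-1, 3)
    A = np.hstack([np.ones((n.shape[0], 1)), n])   # columns: [1, nx, ny, nz]
    b = intensity.reshape(-1)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    ambient, l = coef[0], coef[1:]
    return ambient, l / np.linalg.norm(l)
```

Even this crude fit recovers the light direction exactly on noise-free synthetic data; the hard part in practice, and the reason Google uses a learned model, is that real faces violate every one of these assumptions.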