
To this end, Google trained a new machine learning model that estimates an omnidirectional lighting profile. The model treats the human face as a light probe, inferring the direction, relative intensity, and color of the light arriving from every direction, while a separate facial algorithm estimates the head pose in the photo.
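To make the "face as light probe" idea concrete, here is a minimal sketch (our own construction, not Google's network) of how a dominant light direction could be recovered from a face: assuming known surface normals and roughly Lambertian skin, a least-squares fit relates observed pixel colors to a single directional light. Google's actual model is a trained neural network predicting a full omnidirectional HDR profile; the normals, albedo, and single-light setup here are illustrative assumptions.

```python
import numpy as np

def estimate_directional_light(normals, pixels, albedo=1.0):
    """normals: (N, 3) unit surface normals; pixels: (N, 3) RGB intensities.

    Solves pixels ~ albedo * (normals @ L) for L per color channel,
    ignoring the shadowing clamp (valid when most pixels face the light).
    """
    # Least squares: one 3-vector of light per RGB channel -> (3, 3).
    L, *_ = np.linalg.lstsq(albedo * normals, pixels, rcond=None)
    direction = L.mean(axis=1)            # average per-channel light vectors
    intensity = np.linalg.norm(direction)
    color = np.linalg.norm(L, axis=0)     # relative strength per channel
    return direction / intensity, intensity, color / color.max()
```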

Although this sounds sophisticated, the visualization the model produces is actually quite charming: the estimated lighting is rendered onto three round silver spheres above the head. The top sphere has the roughest "texture" and simulates diffuse reflection of light. The middle sphere is also matte, and simulates a more concentrated light. The bottom sphere has a mirror "material" and simulates smooth specular reflection.

In addition, each sphere reflects the color, intensity, and direction of the ambient lighting according to its own surface properties.
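The role of the three finishes can be illustrated with a standard shading model. The sketch below (our construction, using Blinn-Phong as a stand-in for whatever Google actually renders with) shades one surface point under the same directional light at three shininess levels: the matte sphere responds broadly, the semi-gloss sphere shows a concentrated highlight, and the mirror-like sphere shows a sharp specular spot.

```python
import numpy as np

def shade_point(normal, light_dir, view_dir, shininess):
    """Brightness at a surface point with the given unit normal."""
    diffuse = max(np.dot(normal, light_dir), 0.0)        # Lambertian term
    half = light_dir + view_dir
    half /= np.linalg.norm(half)
    specular = max(np.dot(normal, half), 0.0) ** shininess
    # Higher shininess -> tighter highlight -> more mirror-like sphere.
    return diffuse + specular

n = np.array([0.0, 0.6, 0.8])              # unit surface normal
light = np.array([0.0, 0.5, 0.866])        # light ~30 degrees above view axis
view = np.array([0.0, 0.0, 1.0])
for label, s in [("matte", 1), ("semi-gloss", 32), ("mirror-like", 512)]:
    print(label, round(shade_point(n, light, view, s), 3))
```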

In this way, Google can work out the direction from which to add the synthetic light source. Classic portrait lighting, for example, places the key light about 30° above the subject's eyeline and between 30° and 60° off the camera axis, and Google follows this classic rule.
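As a quick numeric illustration of that rule (the coordinate convention below is our own: x right, y up, z from camera toward subject), the classic key-light angles convert to a unit direction vector like this:

```python
import numpy as np

def key_light_direction(elevation_deg=30.0, azimuth_deg=45.0):
    el, az = np.radians(elevation_deg), np.radians(azimuth_deg)
    return np.array([
        np.cos(el) * np.sin(az),   # horizontal offset from the camera axis
        np.sin(el),                # height above the eyeline
        np.cos(el) * np.cos(az),   # component along the camera axis
    ])

print(key_light_direction())  # ~[0.61, 0.50, 0.61]: a 45 degree key light
```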

Once the direction of the light source to be added is known, the next task is making the added light look natural.

The first problem is a bit like memorizing the moves of "Dugu Nine Swords": once learned, it solves a fixed set of problems. The second problem requires taking "Dugu Nine Swords" into as much real combat as possible, adapting it to different situations, and eventually learning to counter every school of martial arts.


To solve this problem, Google developed another new training model to determine how a directional light source should be added to the original photo. Ordinarily, such a model cannot be trained with existing data, because it would have to cover a nearly infinite range of lighting directions, each matched perfectly to the human face.

For this reason, Google built a very special device for training the machine learning model: a spherical "cage" housing 64 cameras with different viewing angles and 331 individually programmable LED light sources.

If you have been to a Dolby Cinema, the pre-show demonstration includes a segment in which sound moves around a hemispherical dome to simulate the nearly infinite directions sound can come from in reality. Google's device works on a similar principle.

By constantly changing the direction and intensity of the illumination to simulate complex light sources, the rig captures how light reflects off human hair, skin, and clothing, yielding data on what the illumination of a subject should look like under complex lighting.
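This works because light transport is additive: photograph the subject lit by each LED one at a time, and any complex lighting environment can later be synthesized as a weighted sum of those captures. A minimal sketch, with stand-in array shapes and hypothetical LED indices of our own:

```python
import numpy as np

def relight(olat_images, light_weights):
    """olat_images: (331, H, W, 3) photos, one per LED.
    light_weights: (331, 3) RGB intensity of each LED in the target
    environment. Returns the (H, W, 3) synthesized photo."""
    return np.einsum("lhwc,lc->hwc", olat_images, light_weights)

# Example: a warm key light from one LED plus a dim cool fill from another.
olat = np.random.rand(331, 4, 4, 3)   # stand-in for real captures
weights = np.zeros((331, 3))
weights[42] = [1.0, 0.9, 0.7]         # warm key (hypothetical LED index)
weights[200] = [0.1, 0.1, 0.3]        # cool fill (hypothetical LED index)
composite = relight(olat, weights)
print(composite.shape)                # (4, 4, 3)
```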

Google invited 70 different people, with varied face shapes, hairstyles, skin tones, clothing, and accessories, to train this model, ensuring that the synthesized light source matches reality as closely as possible.

In addition, Google does not have the neural network output the final image directly. Instead, the network outputs a lower-resolution quotient image: a per-pixel multiplier that is upsampled and applied to the original photo, so the full-resolution detail comes from the original rather than the network.
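A sketch of how such a quotient image could be applied, assuming a simple nearest-neighbor upsample (a real pipeline would likely use a smoother resize):

```python
import numpy as np

def apply_quotient_image(original, quotient):
    """original: (H, W, 3) float image; quotient: (h, w, 3) low-res
    multiplier from the network. Returns the relit (H, W, 3) image."""
    H, W, _ = original.shape
    h, w, _ = quotient.shape
    # Nearest-neighbor upsample, kept simple for the sketch.
    ys = (np.arange(H) * h // H).clip(0, h - 1)
    xs = (np.arange(W) * w // W).clip(0, w - 1)
    upsampled = quotient[ys][:, xs]
    return np.clip(original * upsampled, 0.0, 1.0)

photo = np.random.rand(512, 512, 3)         # stand-in for the input photo
q = np.ones((64, 64, 3)); q[:, :32] = 1.4   # brighten the left half
relit = apply_quotient_image(photo, q)
```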