Another day, another fun internet thing that uses neural networks for facial manipulation. This time it’s DeepWarp, a demo created by Yaroslav Ganin, Daniil Kononenko, Diana Sungatullina, and Victor Lempitsky, that uses a deep architecture to move human eyeballs in a still image.
It’s in the same vein as other neural network face-manipulation tools (like the smile-manipulator FaceApp), but without such a singular, detailed focus.
The authors note that their findings could be applied to real-world eye-movement problems, such as “gaze correction in video conferencing.” The technique could also be useful for “talking head” scenarios, where reliance on a teleprompter shifts a person’s line of sight away from the camera.
The demo is available to try here. All you need to do is choose an image (horizontal seems to work best) featuring a person facing forward. After you upload that image, you can pick one of four eye-movement options, including roll and cross. DeepWarp will spit out an mp4 file of the resulting googly-eyed person. I tried this using images of Keanu Reeves and several dogs, but the demo didn’t work with the dogs.
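According to the paper, DeepWarp doesn’t paint new eyes from scratch; it predicts a warping field for the eye region and resamples the existing pixels to point the gaze where you asked. As a rough, illustrative sketch only (not the authors’ code), here is what applying such a displacement field to an eye crop could look like in Python with OpenCV; the `fake_flow` field below is a hand-made stand-in for what the network would predict from the image and the requested gaze direction.

```python
import cv2
import numpy as np

def warp_eye_region(eye_crop: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Resample an eye crop with a per-pixel displacement field.

    eye_crop: H x W x 3 image of the eye region.
    flow:     H x W x 2 displacement field (dx, dy) in pixels. In a
              DeepWarp-style model this would come from the network;
              here it is simply passed in.
    """
    h, w = eye_crop.shape[:2]
    # Each output pixel (x, y) is pulled from (x + dx, y + dy) in the input.
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Bilinear sampling of the source pixels at the shifted locations.
    return cv2.remap(eye_crop, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)

if __name__ == "__main__":
    # Toy usage: shift a fake eye crop 3 pixels to the left, standing in
    # for a network-predicted gaze-redirection field.
    crop = np.random.randint(0, 255, (40, 60, 3), dtype=np.uint8)
    fake_flow = np.zeros((40, 60, 2), dtype=np.float32)
    fake_flow[..., 0] = 3.0
    shifted = warp_eye_region(crop, fake_flow)
    print(shifted.shape)
```

Because the output reuses real pixels from the source photo, warping tends to preserve eye color and texture better than generating pixels outright, which is part of why the results can look plausible even from a single still.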
“Our system is reasonably robust against different head poses and deals correctly with the situations where a person wears glasses,” the authors wrote in their study. “Most of the failure modes (e.g., corresponding to extremely tilted head poses or large redirection angles involving disocclusion of the different parts of an eye) are not inherent to the model design and can be addressed by augmenting the training data with appropriate examples.”
The authors say they plan to make the demo run faster in the future.