The Max Planck Institute for Informatics (MPII) in Saarbrücken was founded in 1990 with the aim of advancing cutting-edge, algorithm-based research for a wide range of applications. A recently unveiled project uses artificial intelligence (AI) to match actors' facial expressions more easily and precisely with dubbed speech. While this most obviously promises to save time and cut costs in the film industry, the software can also be used to correct eye gaze and head position in video conferences, and it opens up new possibilities in video post-production and visual effects.
An international team led by the MPII, with researchers from the University of Bath, the company Technicolor, the Technical University of Munich and Stanford University, recently presented the technology behind Deep Video Portraits at the SIGGRAPH 2018 conference in Vancouver, Canada. In contrast to previous methods, which focus exclusively on the interior of the face, Deep Video Portraits can animate the entire face in a video, including the eyes, eyebrows and head position, using familiar computer-graphics controls. It can even synthesize a plausible static background while the head is moving. Hyeongwoo Kim of the MPII explains: "We work with 3D renderings of a parametric face model to record a video of the detailed movements of the eyebrows, mouth, nose and head position of the person recording the voice-over. The system transfers these movements to the target actor in the film so as to synchronize the lips and facial movements exactly with the dubbed speech."
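The transfer step Kim describes can be illustrated with a minimal sketch. A parametric face model separates *who* a person is (identity coefficients) from *what the face is doing* (expression, head pose, gaze); dubbing then amounts to keeping the target actor's identity while borrowing the per-frame motion parameters from the voice-over actor. The `FaceParams` container, the coefficient dimensions, and the `transfer` function below are illustrative assumptions for this sketch, not the actual data structures of Deep Video Portraits, and the subsequent rendering and neural video-synthesis stages are omitted.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceParams:
    """Hypothetical per-frame parameters of a parametric face model."""
    identity: np.ndarray    # shape coefficients: who the person is
    expression: np.ndarray  # expression coefficients: mouth, eyebrows, ...
    pose: np.ndarray        # head rotation and translation
    gaze: np.ndarray        # eye-gaze direction

def transfer(source: FaceParams, target: FaceParams) -> FaceParams:
    """Keep the target actor's identity; borrow expression, head pose
    and gaze from the source (voice-over) actor for this frame."""
    return FaceParams(
        identity=target.identity.copy(),
        expression=source.expression.copy(),
        pose=source.pose.copy(),
        gaze=source.gaze.copy(),
    )

# One frame: the dubbing actor opens the mouth and turns the head slightly.
source = FaceParams(
    identity=np.zeros(80),
    expression=np.array([0.7, 0.1]),
    pose=np.array([0.0, 0.2, 0.0]),
    gaze=np.array([0.05, 0.0]),
)
target = FaceParams(
    identity=np.ones(80),       # a different person
    expression=np.zeros(2),     # neutral face in this frame
    pose=np.zeros(3),
    gaze=np.zeros(2),
)

result = transfer(source, target)
```

Applied frame by frame, this yields a motion track in the target actor's identity; in the real system, renderings of these blended parameters condition the network that produces the final photorealistic video.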
Max Planck Institute for Informatics (66123 Saarbrücken, Germany)