Another great paper from Samsung AI lab! Zakharov et al. animate talking heads using only a few shots of the target person (or even a single shot). Keypoints, adaptive instance norms, and GANs; no 3D face modelling at all.
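To make the "adaptive instance norms" mention concrete, here is a minimal sketch of adaptive instance normalization (AdaIN), the layer type the paper combines with facial keypoints and GAN training. All names, shapes, and the source of the style statistics are illustrative assumptions, not the authors' implementation.

```python
# Illustrative AdaIN sketch (assumed shapes; not the paper's actual code).
import torch

def adain(content: torch.Tensor, style_mean: torch.Tensor,
          style_std: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Normalize each channel of `content` (N, C, H, W) to zero mean and
    unit variance, then re-scale and re-shift it with per-channel
    statistics (here assumed to come from a person-specific embedding)."""
    mean = content.mean(dim=(2, 3), keepdim=True)
    std = content.std(dim=(2, 3), keepdim=True)
    normalized = (content - mean) / (std + eps)
    return normalized * style_std + style_mean

# Example: restyle a 4-channel feature map with person-specific stats.
feat = torch.randn(1, 4, 8, 8)
style_mean = torch.randn(1, 4, 1, 1)       # hypothetical predicted means
style_std = torch.rand(1, 4, 1, 1) + 0.5   # hypothetical predicted stds
out = adain(feat, style_mean, style_std)
```

The appeal of this design is that one shared generator can be adapted to a new identity just by swapping in new normalization statistics, which is what makes few-shot (or one-shot) adaptation cheap.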
Three months ago, when we launched, we stated that one way to tell whether someone is real is to ask for multiple images; there was no way to create multiple images of the same fake person. Well, damn. That sure didn't last long.
Faces have historically been one of the hardest things to generate because humans are so good at spotting fakes. This is a great video.
Few-Shot Adversarial Learning of Realistic Neural Talking Head Models: deep-fakes from one or two photos. See the YouTube clip for footage of the model learning from historical photos and even a painting.
Wow! Animated heads using only a few shots of the target person.
Wow. This is fascinating research. How has no one turned this into a selfie / impersonation app yet? — Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
This technical video explainer of the latest Samsung AI "Realistic Neural Talking Head Models" paper has already been watched 700k times in a week, an impressive achievement for a computer science paper. Congrats to the authors and their colleagues.
Didn't know Mona was basic
We can now generate video of a specific person talking based on a single photo. Having 10 frames of video of the person increases the realism considerably. Starting at 4:17, the clip shows faked videos of e.g. Marilyn Monroe, Albert Einstein, and the Mona Lisa.