A mini experiment pushing the boundaries of a model primarily used on human faces.
First Order Motion models take a 'driving' image or video and use it to create an animated sequence, with motion transferred from the driving source and object/visual features taken from the source imagery.
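For anyone wanting to try something similar, below is a minimal sketch based on the publicly available first-order-model reference implementation (AliaksandrSiarohin/first-order-model). The function names (`load_checkpoints`, `make_animation`), config, checkpoint, and file names are assumptions taken from that repo's demo, not from this experiment; treat it as a starting point rather than the exact setup used here.

```python
# Minimal sketch of animating a source image with a driving video,
# assuming the first-order-model repo (and its demo.py) is on the path
# and a pretrained vox-256 checkpoint has been downloaded.
import imageio
from skimage import img_as_ubyte
from skimage.transform import resize

from demo import load_checkpoints, make_animation  # from the first-order-model repo

# Source image (e.g. a dog photo) and driving video (e.g. a clip of your face).
source_image = imageio.imread('dog.png')
driving_video = imageio.mimread('my_face.mp4')

# The model expects 256x256 RGB input.
source_image = resize(source_image, (256, 256))[..., :3]
driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]

# Load the generator and keypoint detector from a pretrained checkpoint.
generator, kp_detector = load_checkpoints(
    config_path='config/vox-256.yaml',
    checkpoint_path='vox-cpk.pth.tar',
)

# Transfer the driving motion onto the source image, frame by frame.
predictions = make_animation(
    source_image, driving_video, generator, kp_detector, relative=True
)

# Save the resulting animation.
imageio.mimsave('result.mp4', [img_as_ubyte(frame) for frame in predictions])
```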
In this experiment I use my own facial expressions to drive source imagery of dogs. Many such models are trained on datasets of human faces, so this is a humorous exploration of what happens when you try to force that learned motion onto animal features the model was never trained on.
Spoiler - it breaks!
This idea came about from a series of artists applying this process to politicians, here and here.