Current algorithms for the detection of deepfakes increasingly rely on remote pulse reading to distinguish them from genuine videos of people. Here, scientists show for the first time that the most recent deepfakes feature a global pulse rate which appears realistic. This worrying development makes it necessary for deepfake detectors to become more powerful, for example, by focusing on local variations in blood flow within the face.
Wait, they can detect your pulse from a video? How? Variation in skin flushing during the systolic vs. diastolic phases of the heartbeat? Unconscious synchronization of affect/verbalization/whatever with one's own heartbeat? Given the following answer, I think it must be closer to the former:
Easy: the Eulerian Video Magnification framework, for example, band-pass filters pixel intensities over time around plausible heart-rate frequencies and amplifies the result, making the subtle skin-color change of each heartbeat visible. So yes, it is the flushing: blood volume modulates how much light the skin reflects, and a camera can pick that up.
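The core trick is simple enough to sketch. Below is a minimal, hedged version of the temporal step, assuming a hypothetical input clip `face.mp4` with a roughly stationary, well-lit face: full Eulerian Video Magnification band-passes a spatial pyramid and re-amplifies it back into the video, whereas this sketch only band-passes the spatially averaged green channel, which is the simplest form of remote photoplethysmography (rPPG).

```python
# Minimal rPPG sketch: extract a pulse estimate from a face video.
# Assumptions (not from the original post): input file "face.mp4",
# a crude central-patch ROI instead of a real face detector, and
# scipy >= 1.2 for the fs= keyword on butter().
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

VIDEO = "face.mp4"          # hypothetical input clip
LOW_HZ, HIGH_HZ = 0.7, 4.0  # plausible human heart rates: 42-240 bpm

cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS)
samples = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Crude "face" ROI: the central patch of the frame. A real pipeline
    # would detect and track the face, favoring forehead and cheeks.
    roi = frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    # The green channel carries the strongest blood-volume signal.
    samples.append(roi[:, :, 1].mean())
cap.release()

signal = np.asarray(samples, dtype=np.float64)
signal -= signal.mean()

# Temporal band-pass around heart-rate frequencies: the core EVM step,
# minus the spatial pyramid and re-amplification into the output video.
b, a = butter(3, [LOW_HZ, HIGH_HZ], btype="band", fs=fps)
filtered = filtfilt(b, a, signal)

# The dominant frequency of the filtered signal is the pulse estimate.
freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
power = np.abs(np.fft.rfft(filtered)) ** 2
band = (freqs >= LOW_HZ) & (freqs <= HIGH_HZ)
bpm = 60.0 * freqs[band][np.argmax(power[band])]
print(f"Estimated pulse: {bpm:.0f} bpm")
```

Note how this also explains the detector arms race in the quoted abstract: a deepfake generator only has to reproduce one global number (the dominant frequency above) to fool a pulse-rate check, which is why the authors suggest detectors look at local blood-flow variation across facial regions instead.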