
The problem: Deepfakes and other AI-generated images can be hard to detect, especially as models become advanced enough to avoid obvious tells like extra fingers and odd-looking earlobes.
The solution: By applying methods typically used by astronomers, researchers were able to detect when the lighting wasn’t right in the eyes of deepfaked people. In a real photo, the light reflections in the left and right eyes should be nearly identical; in a deepfake they tend to differ slightly, because many AI models struggle to render physics consistently. The differences are subtle, but astronomers routinely measure tiny variations in light to understand the size, movement, and composition of stars and galaxies. By applying those methods to the reflections in subjects’ eyes, researchers correctly flagged deepfakes roughly 70% of the time.
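The article doesn’t name the exact measurement, but one metric astronomers commonly use to describe how light is distributed across an image is the Gini coefficient (0 when brightness is spread evenly, near 1 when it is concentrated in a few bright pixels). As an illustrative sketch only, not the researchers’ actual pipeline, one could compute that coefficient for each eye crop and flag a face when the two values disagree; the function names and the threshold below are hypothetical.

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of non-negative pixel intensities:
    0 = light spread evenly, ~1 = light concentrated in a few pixels."""
    v = np.sort(np.asarray(pixels, dtype=float).ravel())
    n = v.size
    total = v.sum()
    if n == 0 or total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return float((2 * ranks - n - 1).dot(v) / (n * total))

def eyes_inconsistent(left_eye, right_eye, threshold=0.1):
    """Hypothetical check: flag a possible deepfake if the light
    distributions in the two eye crops differ by more than `threshold`."""
    return abs(gini(left_eye) - gini(right_eye)) > threshold

# Toy 8x8 grayscale crops: one eye has a single sharp highlight,
# the other is uniformly dim (no matching reflection).
left = np.zeros((8, 8)); left[3, 3] = 255.0   # concentrated highlight
right = np.full((8, 8), 4.0)                  # flat lighting
print(eyes_inconsistent(left, right))         # → True
```

In a real detector the eye regions would first be located with a face-landmark model, and the threshold tuned on labeled data; the roughly 70% accuracy reported above suggests this signal is useful but not decisive on its own.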
Yes, but: Now that this deepfake red flag has been documented, AI models can be trained to overcome it. Still, researchers are hopeful that detecting lighting inconsistencies in other parts of an image can work in tandem with existing tools to improve deepfake detection.