From IEEE Spectrum:
A growing unease has settled around evolving deepfake technologies that make it possible to create evidence of scenes that never happened. Celebrities have found themselves the unwitting stars of pornography, and politicians have turned up in videos appearing to speak words they never really said.
Concerns about deepfakes have led to a proliferation of countermeasures. New laws aim to stop people from making and distributing them. Earlier this year, social media platforms including Facebook and Twitter banned deepfakes from their networks. And computer vision and graphics conferences teem with presentations describing methods to defend against them.
So what exactly are deepfakes, and why are people so worried about them?
There’s a lot of confusion around the term “deepfake,” and computer vision and graphics researchers are united in their hatred of the word. It has become a catch-all to describe everything from state-of-the-art videos generated by AI to any image that seems potentially fraudulent.
A lot of what’s being called a deepfake simply isn’t: For example, a controversial “crickets” video of the U.S. Democratic primary debate, released by the campaign of former presidential candidate Michael Bloomberg, was made with standard video editing techniques. Deepfakes played no role.