WEST PALM BEACH, FL – Deep fakes grip internet users with fear, and their uncanny realism sets the stage for all manner of misuse. As Flipboard noted, the ability to change one person’s likeness into another person entirely threatens a complete breakdown of information integrity online. The FBI sounded the alarm concerning deep fakes with a recent alert warning that the Russian and Chinese governments are “almost certain” to deploy the technology with malicious intent, and that nation-state actors are already using it for criminal gain. See FBI Private Industry Notification, March 10, 2021.
IEEE Spectrum stated that the “main ingredient” in deep fakes is machine learning.
Deep fakes are created by training neural networks on many hours of real footage of the person being imitated. Advances in neural network technology have taken the deep fake out of pixelated, unrealistic territory and into the realm of ultra-realism. Recently, one such deep fake of Tom Cruise was a near-identical match, sparking a firestorm on social media networks.
New technology, including generative adversarial networks, or GANs, can render a completely lifelike human likeness with no basis in reality. For example, the images rendered by This Person Does Not Exist.com use GAN technology: the network “dreams” the faces, and each visit to the site generates a new face in the browser. The faces often share a similar symmetry that betrays the same program at their base, but they vary across women, men, children, and different ethnicities. The images sometimes attempt to render two people in the same setting instead of one, and when tested, The Published Reporter discovered bizarrely disproportionate people in the background of the GAN’s “dream” portrait.
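The “adversarial” in GAN refers to two networks trained against each other: a generator that produces images and a discriminator that scores them as real or fake. As an illustrative sketch only, with made-up scores and no actual networks, the opposing objectives can be written as binary cross-entropy losses over the discriminator’s outputs:

```python
import math

def bce(scores, labels):
    """Binary cross-entropy between probability scores and 0/1 labels."""
    eps = 1e-9  # guard against log(0)
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(scores, labels)) / len(scores)

# Hypothetical discriminator outputs (probability "this image is real"):
d_real = [0.9, 0.8, 0.95]   # scores on genuine photos
d_fake = [0.2, 0.3, 0.1]    # scores on the generator's output

# The discriminator wants real images scored 1 and fakes scored 0.
d_loss = bce(d_real, [1, 1, 1]) + bce(d_fake, [0, 0, 0])

# The generator wants the discriminator to call its fakes real,
# so it minimizes the opposite objective on the same fake scores.
g_loss = bce(d_fake, [1, 1, 1])
```

Training alternates updates on these two losses: as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing faces, which is the process behind the “dreamed” portraits described above.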
Despite their prominence in rendering lifelike virtual humans, GANs reportedly did not play a major role in the deep fakes found on the open web, according to an April 2020 report.
In addition to visual fakes, IEEE Spectrum reported an incident in the finance industry in which a deep-faked audio call to an energy company resulted in a fraudulent wire transfer of $220,000. An employee of the company received what he believed was a call from his boss instructing him to make the transfer. When the audio was audited, insurance company Euler Hermes Group SA told the Wall Street Journal that the deep fake imitated the voice of the man’s boss down to its intonation and “slight German accent.”
Deep fakes employ AI for accuracy, but developing a composite fake still takes time. The fake must place the subject in a completely fictional situation, and the programmer has to tweak the trained program’s parameters along the way to edit out the blips and other artifacts that give the image away as fake. Improving technology has made current deep fakes increasingly hard to decipher visually, but tech sleuths such as MIT graduate and YouTuber Jordan Harrod have noted that deep fakes leave traces, or “fingerprints,” that can make them detectable.
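In practice, detectors that look for such fingerprints typically score a video frame by frame and then aggregate the scores, since any single frame can be noisy. The sketch below is a hypothetical illustration of that aggregation step only; the detector model producing the per-frame scores is assumed, and the numbers are made up:

```python
# Hypothetical per-frame scores from an assumed deepfake-detector model
# (higher = more likely synthetic); values are invented for illustration.
frame_scores = [0.91, 0.88, 0.12, 0.95, 0.90, 0.87]

def flag_video(scores, threshold=0.5, min_fraction=0.6):
    """Flag a clip as a likely deep fake if enough frames score above
    the threshold, smoothing over isolated noisy frames."""
    suspicious = sum(1 for s in scores if s > threshold)
    return suspicious / len(scores) >= min_fraction

verdict = flag_video(frame_scores)  # 5 of 6 frames look synthetic
```

Aggregating over many frames is what lets a detector tolerate the occasional clean-looking frame while still catching the consistent artifacts a generator leaves behind.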
The abuse of deep fakes was already evident when the space was first audited in 2019. IEEE Spectrum stated that the audit found 96 percent of deep fake content was pornographic, with the remaining four percent nonpornographic. The name itself came from this issue: in 2017, the term was coined after the Reddit username “deepfakes,” whose owner “weaponized” the technology to compromise famous actresses by splicing them into porn videos. Apps have also been used to superimpose images of women, even fully clothed women, onto pornographic and nude images. Video scandals and identity theft aided by deep fakes were reported in the audit as early as late 2019.