
What Are Deepfakes? How Devastating Can They Be?

In the age of the internet and mass media, many things are easily fabricated, and deepfakes are among them. But what is a deepfake? A deepfake is a highly realistic fake image, video, or audio clip produced by artificial intelligence. The AI, typically a deep neural network, learns to generate fake media after analyzing many pieces of footage of someone or something. This technology is already far more potent than photoshopping: it is nearly impossible to tell the difference between a perfect deepfake and the real thing. Some videos you watch today could be deepfakes! Since the beginning of 2019, the number of deepfakes on the internet nearly doubled over nine months. The chances of unknowingly watching a deepfake grow by the day as they become cheaper and easier to produce, while detection abilities lag behind. Are deepfakes worth the trouble, and how dangerous could they be? 

Number of deepfakes found on the internet over time.

This technology has many upsides as well as downsides, and it has practical, beneficial applications in today's society. For one, deepfakes are a form of synthetic media, which is a powerful tool for video and game developers: deepfakes "are used to join and overlay existing images, videos, and soundtracks onto original content," making game development and modeling much easier. Other constructive uses include Vince Lombardi's Super Bowl "resurrection" ad, or education, where "fifty thousand people learn the basics of marketing through video courses with a virtual mentor who addresses each employee personally by name." Applications like these give deepfakes a steady future and allow the technology to have a net positive impact on society rather than a negative one. Furthermore, deepfake technology isn't going anywhere; rather than trying to suppress such innovation, embracing the technology and properly regulating it matters more for both the long-term development of deepfakes and for innovation overall.

‘FIFA’ game player textures greatly improved with deepfakes. 

However, it cannot be denied that deepfakes carry dangerous drawbacks and devastating consequences if they are not properly regulated. Unregulated deepfakes enable defamation and misinformation on a massive scale. Since most people on the internet do not investigate a piece of media's source or credibility, they may fall for a deepfake without thinking twice. This lack of scrutiny is especially concerning in politics: if a deepfake video were made to spread misinformation through the 'own' words of a politician or other prominent figure, many viewers would take it at face value. Given today's volatile political scene, this is incredibly dangerous, and it is all the more alarming in the nuclear age, where such a deception could affect nuclear doctrine, posturing, and signaling. Furthermore, if deepfakes can be used to harm politicians, they can certainly harm the common individual. Currently, the most common use of deepfakes is adult content, where they are used to shame targets or to indulge fantasies. This is especially devastating to the people targeted, as they have no guaranteed way to combat the deepfake. Meanwhile, the creation of deepfakes vastly outpaces the methods for detecting them. A well-made deepfake cannot be spotted with the naked eye, and this inability to detect fakes casts doubt on media sources across the internet. So, is there at least any legislation to assist on this matter? 

Image: Deepfake video of Putin, juxtaposed to a real reference.

Unfortunately, there is currently little regulation of deepfakes, which allows malicious deepfakes to proliferate. As these malicious deepfakes spread, they erode trust among the populace, since even genuine audio or video can no longer be distinguished from what is fake. One example of how the mere existence of deepfakes can erode trust: Republican congressional candidate Winnie Heartstrong pointed to the spread of deepfakes in the media as a way to discredit the reliability of the George Floyd video. Such power would have far-reaching effects on the credibility of information, especially in the hands of an influencer or politician. As long as there is no reliable way to prove that a video or photo is a deepfake, and their dissemination continues unchecked, the malicious use of deepfakes won't stop. Thus, the only way to prevent such an affront to information credibility is to build impactful deepfake regulation: preventative measures that identify and stop the dissemination of deepfake material. 

So, what can you do to help? One step is to call or email your local government representatives about why deepfakes should be a policy priority. If enough people request a policy change or addition, progress will follow. It's only a matter of time. 

Learn more about Encode Justice and our mission here: https://encodejustice.org/ 


Mark Zheng
Mark Zheng is a senior at BASIS Peoria! He's kinda weird. But he's somehow Editor-in-Chief. Somehow. Who let him have this position?