19.09.2019, by Karlijn Raaijmakers
Technology is giving truth a hard time. Seeing something is no longer enough to believe it. Especially now, with the rise of so-called ‘deepfakes’, truth has become even more of an illusion. Deepfakes are fake videos or images in which artificial intelligence is used to edit faces and/or bodies into the footage, creating the illusion of people saying or doing things they never actually said or did.
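The trick behind most classic deepfake tools is, roughly, an autoencoder with a shared encoder and one decoder per person: encode a face of person A, then decode it with person B’s decoder. The following is a toy numerical sketch of that idea only, with plain linear maps and random vectors standing in for the deep convolutional networks and real face images that actual systems train on:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two small synthetic "face" datasets: 64-dim vectors standing in for images.
faces_a = rng.normal(size=(200, 64))          # person A
faces_b = rng.normal(size=(200, 64)) + 0.5    # person B

# Shared encoder: top-16 principal directions of the combined data.
combined = np.vstack([faces_a, faces_b])
combined_centered = combined - combined.mean(axis=0)
_, _, vt = np.linalg.svd(combined_centered, full_matrices=False)
encoder = vt[:16].T                           # maps 64 dims -> 16 latent dims

def encode(x):
    return x @ encoder

# Per-person decoders: least-squares maps from the latent space back to faces.
decoder_a, *_ = np.linalg.lstsq(encode(faces_a), faces_a, rcond=None)
decoder_b, *_ = np.linalg.lstsq(encode(faces_b), faces_b, rcond=None)

# The "swap": encode a face of person A, decode it with person B's decoder.
swapped = encode(faces_a[:1]) @ decoder_b
print(swapped.shape)  # (1, 64)
```

Because the encoder is shared, it learns what the two datasets have in common (pose, lighting, expression), while each decoder learns one person’s appearance; swapping decoders is what transfers identity.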
From FaceApp to Zao
The technology behind deepfakes has existed for quite a while, but the trend only recently took off. The viral FaceApp gave users the possibility to edit their selfies and, for example, make them look like an elderly version of themselves. The app became a trending topic at the end of July, when news came out that users gave up their privacy and facial data by agreeing to its Terms and Conditions.
In September, another deepfake app went viral: the Chinese app ‘Zao’, which lets users put their own faces into various film scenes and TV shows, making it seem like they are the actual actor. All they need to do is upload a picture of their face.
Though the app might seem innocent and fun, it has again prompted warnings about privacy. As with FaceApp, users sign away the rights to their images, and therefore their facial data, to Zao. The consequences are especially serious in Zao’s case, as China has introduced facial recognition payment systems, meaning you do not want your facial data to fall into the wrong hands. Both Zao and FaceApp raise the question of how far artificial intelligence should go in terms of accessibility and realism. Should artificial intelligence really be out in the open like this?
Though we have always lived in a slightly distorted reality filled with lies and ‘fake news’, the rise of deepfakes can make it (almost) impossible to distinguish the real from the fake. Various news channels have expressed concern about the trend, citing an increased risk of scamming and extortion. Technological development is moving quickly, making ever more realistic fake realities possible. With accessibility increasing, it is only a matter of time before this technology falls into the wrong hands, if it has not already.
Besides general scamming and extortion, two fields are hit especially hard by the deepfake industry. One is pornography. After all, the name ‘deepfake’ was first used by a Reddit user who posted a porn video in which he replaced the actress’s face with that of a famous actress. This marked the rise of deepfake revenge pornography, ‘deepnudes’ and fake celebrity pornography. Dionne Stax, a Dutch journalist and TV presenter, recently became a victim of deepfake pornography when her face was edited into a porn video that circulated on a number of websites.
The deepfake porn industry might not seem as influential on a global level as it is on an individual level. When deepfakes cross into politics, however, everyone should be worried. Images and videos of politicians can be deepfaked to show them saying or doing things they never did. In the long run, this can influence political results, and with them the future of a country, or perhaps even the world.
What the Future Holds
But really, everyone with even the slightest digital footprint is at risk. What you see and hear is no longer an indication of truth. Still, it is important to remember that deepfakes and artificial intelligence are not inherently bad. Deepfakes are a double-edged sword: besides all the scary dangers on one edge, the other is filled with positive possibilities, innovation and a lot of fun! There is, after all, a reason these deepfake apps go viral; they are highly entertaining. They allow you to be the star of the biggest movies, or to meet the future version of yourself. Banning deepfakes would not only be nearly impossible, it would also put an end to that positive side. As long as preventative measures are taken, such as the verification technology experts are already working on, and as long as people are educated where necessary, innovation and increasingly accessible creative technology should be encouraged.
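The verification technology mentioned above spans many approaches, from forensic detectors to provenance schemes in which publishers fingerprint or sign authentic footage at the source. A minimal sketch of the fingerprinting idea, using made-up byte strings in place of real video data (real provenance standards such as C2PA sign structured metadata rather than hashing raw bytes):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the content, published alongside the original."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical authentic footage and its published fingerprint.
original = b"frame data of the authentic video"
published = fingerprint(original)

# Any edit, however small, changes the fingerprint.
tampered = original + b"\x00"

print(fingerprint(original) == published)  # True
print(fingerprint(tampered) == published)  # False
```

A viewer who trusts the published fingerprint can then verify a copy they receive; the hard part, which such schemes must solve, is distributing those fingerprints through a channel the forger cannot also fake.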