Deepfakes have been in the news a lot lately, and for good reason: the technology matters. Deepfakes are videos created using artificial intelligence to make people appear to do and say things they never did. These videos can be used to spread misinformation or create “fake news” with the intent to manipulate public opinion. That’s why we’ve been following deepfake technology closely, and keeping you informed about what it means for you and your family.
We’ve written about the ways Facebook is fighting deepfakes, how deepfakes could change our world, and how to spot fake videos. We’ve also explored the ethical concerns around deepfakes, including whether governments should regulate them as a form of hate speech.
Deepfakes are also being used by artists who want to create new forms of art or cultural commentary. And many creators use the technology purely for fun or entertainment, just like any other software that helps someone make a video or do something else creative online.
But beyond all this, there is another aspect of deepfake technology that we haven’t explored yet: how it will affect us personally.
Deepfakes are videos in which the subject is face-swapped using machine-learning algorithms. In essence, it’s like a digital mask, or a computerized version of what Hollywood makeup artists have been doing for years. The technology has advanced to the point where deepfake videos are now ubiquitous on the internet, and it is often impossible to distinguish them from real footage. They range from silly or mischievous, like swapping Nicolas Cage into every movie ever made, to gruesomely violent: There are deepfake videos of celebrities dying in horrific ways, or being raped, or even committing acts of genocide.
You can see why people might be concerned: If you can’t trust your eyes and ears anymore, how can you trust anything? There are also obvious political ramifications — imagine if someone posted a video of Barack Obama saying “I hate black people” — but let’s not go there just yet. Instead, we’re going to look at how deepfake technology works, and why it matters to our lives.
In a world where people are so skeptical of everything they see on the Internet, it’s time to start being even more critical of the videos we watch.
You’ve probably heard about deepfakes, but what are they? Deepfakes are videos created or manipulated using AI or machine learning to show someone doing or saying something they never did — typically by replacing the original person’s face with someone else’s. One of the most common uses of deepfakes is in pornographic content, where celebrities’ faces are superimposed over the bodies of adult film actors (or the faces of women who look like celebrities).
Since 2017, deepfake technology has been used in disturbing ways. One widely shared example is a 2018 public-service video, voiced by comedian Jordan Peele, in which former US President Barack Obama appears to make disparaging remarks about President Donald Trump. When you watch the video, it’s hard to believe that it’s not really Obama speaking.
What makes these videos so convincing is that deepfakes can accurately mimic people’s facial expressions and lip movements. Unfortunately, anyone with a computer can make their own deepfake videos and upload them to the internet, often anonymously and with little fear of consequences.
This isn’t just a problem for politicians and celebrities; it’s a problem for anyone whose face appears online.
A deepfake is a video that has been produced using machine learning to swap one person’s face for another. In late 2017, an anonymous Reddit user known as ‘deepfakes’ began posting face-swapped videos made this way; soon afterwards, a desktop application called FakeApp made it much easier for non-experts to create deepfake videos, kickstarting the current era of proliferation.
The term ‘deepfake’ is derived from the combination of ‘deep learning’ and ‘fake’. Deep learning is a subset of machine learning (a subset of AI) that uses neural networks — essentially computer systems that are modelled on the human brain. These neural networks learn tasks by analysing training data. The more data they have, the better they perform.
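To make “learning from training data” concrete, here is a toy sketch in Python (every name and number is illustrative, not a real deepfake model): a single-weight model trained by gradient descent to recover the simple rule y = 2x from noisy examples. The same principle scales up to the millions of parameters in a real neural network — and, as with deepfakes, more training data gives a better result.

```python
import random

def train(n_samples, epochs=200, lr=0.01, seed=0):
    """Fit one weight w so that w * x approximates noisy samples of y = 2x."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(n_samples)]
    data = [(x, 2.0 * x + rng.gauss(0, 0.5)) for x in xs]  # noisy training data
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            w -= lr * 2 * (pred - y) * x  # gradient of squared error w.r.t. w
    return w

print(train(5))    # few examples: a noisy estimate of the true value 2.0
print(train(500))  # many examples: a much more reliable estimate
```

The only thing this sketch shares with a real deepfake network is the training loop: show the model examples, measure its error, nudge its parameters, repeat.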
In terms of deepfakes, these neural networks are used to analyse large quantities of video footage of a person in order to learn their facial features. Then, when faced with a new piece of footage — say, a still image or video clip of someone else — they can map those features onto it.
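The classic face-swap setup uses one encoder shared between the two people and a separate decoder per identity. The sketch below is purely conceptual — the “faces” are lists of numbers and the encode/decode steps are placeholder arithmetic standing in for trained networks — but the data flow matches the description above: encode person A’s footage, then decode it with person B’s decoder.

```python
class SharedEncoder:
    """Compresses any face into a small, identity-agnostic latent
    description (expression, pose)."""
    def encode(self, face):
        return [sum(face) / len(face)]  # toy one-number latent

class Decoder:
    """Reconstructs a face for ONE specific identity."""
    def __init__(self, identity_offset):
        # stand-in for the identity features a real decoder learns
        self.identity_offset = identity_offset
    def decode(self, latent, size):
        return [latent[0] + self.identity_offset] * size

encoder = SharedEncoder()
decoder_a = Decoder(identity_offset=0.0)  # "trained" on footage of person A
decoder_b = Decoder(identity_offset=1.0)  # "trained" on footage of person B

face_a = [2, 4, 6]
latent = encoder.encode(face_a)
reconstructed_a = decoder_a.decode(latent, len(face_a))  # A, as during training
swapped = decoder_b.decode(latent, len(face_a))          # the deepfake step:
# B's identity rendered with A's expression and pose
```

The design choice doing the work here is the *shared* encoder: because both decoders read from the same latent space, a description extracted from A’s face is meaningful input to B’s decoder.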
In its simplest form, this technique can be used to swap faces in videos. Simple as that sounds in theory, doing it convincingly requires accurate tracking and alignment between the two subjects in every frame, because faces naturally change shape as we speak or move our heads.
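The alignment step can be sketched concretely: given matching 2-D facial landmarks (eyes, nose, mouth) on two faces, fit the similarity transform — rotation, scale, and shift — that best maps one set onto the other by least squares. The landmark coordinates below are invented for illustration; real pipelines get them from a landmark detector.

```python
def align(src, dst):
    """Fit the least-squares similarity transform mapping points src -> dst.
    Returns a function that applies the fitted transform to a point."""
    n = len(src)
    # centre both point sets on their centroids
    sx = sum(x for x, _ in src) / n
    sy = sum(y for _, y in src) / n
    dx = sum(x for x, _ in dst) / n
    dy = sum(y for _, y in dst) / n
    p = [(x - sx, y - sy) for x, y in src]
    q = [(x - dx, y - dy) for x, y in dst]
    # closed-form least squares for rotation+scale, treating points as
    # complex numbers: q ~ (a + b*i) * p
    norm = sum(x * x + y * y for x, y in p)
    a = sum(px * qx + py * qy for (px, py), (qx, qy) in zip(p, q)) / norm
    b = sum(px * qy - py * qx for (px, py), (qx, qy) in zip(p, q)) / norm
    def apply(pt):
        x, y = pt[0] - sx, pt[1] - sy
        return (a * x - b * y + dx, b * x + a * y + dy)
    return apply

# Example: dst is src scaled by 2, rotated 90 degrees, shifted by (5, 3)
warp = align([(0, 0), (1, 0), (0, 1)], [(5, 3), (5, 5), (3, 3)])
```

Once a transform like this is fitted per frame, the swapped face can be warped to match the target’s pose before blending — which is why alignment has to be redone frame by frame as the head moves.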