Deepfakes: Emerging digital threat to your mental peace

Synthetic media, or deepfakes, pose a new challenge to humanity, with women, teenagers and children the softest targets. Is this affecting your peace of mind too?

Deepfakes (a blend of ‘deep learning’ and ‘fake’) have taken our society by storm in recent times, with altered images and videos of people from all walks of life, from ordinary citizens to celebrities, appearing on social media platforms and leaving victims traumatised and at risk of depression and other mental health harms.
We have recently witnessed a growing debate over voice and video deepfakes from the US to India, as Bollywood celebrities such as Rashmika Mandanna, Katrina Kaif, Kajol and Alia Bhatt fell prey to them. Women, teenagers and children, however, remain the softest targets of deepfakes and other synthetic media.

By one estimate, more than 15,000 deepfake videos are already circulating online. Some are just for fun, while others try to manipulate your opinions. Now that a new deepfake takes only a day or two to make, that number could rise very rapidly.
According to Justin Thomas, a professor of psychology at Zayed University in the UAE, humiliation is a powerful and painful emotion, and deepfakes can be used to embarrass, harass and even blackmail their targets. These videos are becoming easier to create and look increasingly realistic, and the technology won’t stay limited to celebrities: personal deepfakes are already here.

What are deepfakes?

Deepfake videos are clips that have been altered using artificial intelligence (AI) technology to switch out one person’s voice or face with that of a different person. Tools for making deepfakes have recently become much cheaper and more accessible, amplifying conversations about potential creative applications as well as potential risks — such as spreading misinformation and manipulating viewers’ memories.
Sonit Jain, CEO of GajShield Infotech, tells A Lotus In The Mud that the prevalence of deepfakes, which are compelling AI-generated videos or audio recordings, has increased notably in recent times.
“This surge can be attributed to the growing accessibility of deep fake technology and its application in various domains. Deep fakes have found utility in entertainment, political manipulation, and even fraudulent activities,” he adds.
The word “deepfakes” originated in December 2017 with an anonymous user on the online platform Reddit who called himself “deepfakes.” He applied deep-learning algorithms to digitally superimpose the faces of celebrities onto actors in pornographic content.
Deepfakes first grabbed mainstream attention in 2019, with widely shared fake videos of Meta CEO Mark Zuckerberg and then US House Speaker Nancy Pelosi. If you have seen former US President Barack Obama calling Donald Trump a “complete dipshit”, or Zuckerberg boasting of “total control of billions of people’s stolen data,” you now know what a deepfake is.

The social impact of deepfakes

As deepfake technologies become more sophisticated and accessible to the broader online community, their use puts women participating in digital spaces at increased risk of experiencing online violence and abuse. In a ‘post-truth’ era, the difficulty of discerning what is real from what is fake allows malevolent actors to manipulate public opinion or ruin the social reputation of individuals before wide audiences.
While scholarly research on the topic is sparse, a recent study titled ‘Deepfakes and Harm to Women’ by Jennifer Laffier and Aalyia Rehman of Ontario Tech University explored the harm women have experienced through deepfake technology.
The results suggest that deepfakes are a relatively new method of deploying gender-based violence and eroding women’s autonomy in their online and offline worlds. The study highlighted the unique harms women suffer at both an individual and a systemic level, and the need for further inquiry into online harm through deepfakes and victims’ experiences.
About 96 percent of deepfakes on the internet were pornography, according to an analysis by AI firm DeepTrace Technologies, and virtually all pornographic deepfakes depicted women.
“People viewing explicit images of you without your consent – whether those images are real or fake – is a form of sexual violence,” according to Kristen Zaleski, director of forensic mental health at Keck Human Rights Clinic at the University of Southern California.

Deepfakes have interpersonal consequences too

Video deepfakes have the potential to distort our memories and even implant false ones, and they can also change a person’s attitudes toward the individual depicted.
One recent study revealed that exposure to a deepfake depicting a political figure significantly worsened participants’ attitudes toward that politician. Even more worryingly, given social media’s ability to target content at specific political or demographic groups, the study found that micro-targeting the deepfake at the group most likely to be offended (in that study, Christians) amplified the effect relative to sharing it with the general population.
According to a paper in the journal Cyberpsychology, Behavior, and Social Networking by Jeffrey T. Hancock and Jeremy N. Bailenson, it is possible for people to develop resilience to novel forms of deception such as deepfakes. Similar face-synthesis technology is already used in Hollywood movies, for example to recreate Princess Leia in recent Star Wars films after actor Carrie Fisher died.
“An important harm we have not yet considered is the nonconsensual victim portrayed in a deepfake to be doing or saying something that they did not. One of the most common early forms of deepfakes is the alteration of pornography, depicting nonconsensual individuals engaging in a sex act that never occurred typically by placing a person’s face on another person’s body,” they wrote.
Given the already described power of the visual system to alter our beliefs, and the influence such deepfakes can have on self-identity, the impact on a victim’s life can be devastating. Although empirical research to date is limited, it is not difficult to imagine how deepfakes could be used to extort, humiliate, or harass victims, they added.

8 ways to detect deepfakes

  • Pay attention to the face. High-end deepfake manipulations are almost always facial transformations.
  • Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Does the apparent age of the skin match that of the hair and eyes? Deepfakes may be incongruent on some dimensions.
  • Pay attention to the eyes and eyebrows. Do shadows appear in places that you would expect? Deepfakes may fail to fully represent the natural physics of a scene.
  • Look at the glasses. Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves? Once again, deepfakes may fail to fully represent the natural physics of lighting.
  • Look at facial hair or lack thereof. Does this facial hair look real? Deepfakes might add or remove a mustache, sideburns, or beard. But, deepfakes may fail to make facial hair transformations fully natural.
  • Pay attention to facial moles. Does the mole look real?
  • Pay attention to blinking. Does the person blink enough, or too much? (See the code sketch after this list.)
  • Pay attention to the lip movements. Some deepfakes are based on lip syncing. Do the lip movements look natural?
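
To make the blinking cue concrete, below is a minimal Python sketch that estimates a speaker’s blink rate from a video file using OpenCV’s stock Haar cascades. It is a rough heuristic rather than a real detector: the cascades are noisy, serious tools use facial-landmark models (eye-aspect-ratio tracking) instead, and the file name here is an illustrative assumption.

```python
import cv2

# Stock OpenCV Haar cascades (ship with the opencv-python package).
FACE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
EYES = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def estimate_blink_rate(video_path: str) -> float:
    """Return estimated blinks per minute for the main face in a video.

    Crude heuristic: a frame where a face is found but no open eyes are
    detected is treated as 'eyes closed'; each closed run counts as one blink.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    face_frames = blinks = 0
    prev_closed = False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) == 0:
            continue
        face_frames += 1
        x, y, w, h = faces[0]
        roi = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half of the face
        eyes = EYES.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        closed = len(eyes) == 0
        if closed and not prev_closed:
            blinks += 1
        prev_closed = closed
    cap.release()
    minutes = face_frames / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# Adults typically blink roughly 15-20 times per minute; a rate far outside
# that band is a weak red flag worth a closer look, not proof of a deepfake.
print(estimate_blink_rate("suspect_clip.mp4"))  # hypothetical file name
```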

How to protect yourself against deepfakes

Look for the following characteristics of a deepfake video:

  • jerky movement
  • shifts in lighting from one frame to the next (see the code sketch after this list)
  • shifts in skin tone
  • strange blinking or no blinking at all
  • lips poorly synched with speech
  • digital artifacts in the image
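
As a rough illustration of the lighting check, here is a small Python sketch (again assuming OpenCV and NumPy; the threshold and file name are arbitrary assumptions) that flags frames whose average brightness jumps abruptly from the previous frame. Real footage has legitimate lighting changes too, so flagged frames are prompts for a closer look, not proof of tampering.

```python
import cv2
import numpy as np

def lighting_jumps(video_path: str, threshold: float = 15.0) -> list[int]:
    """Return indices of frames whose mean brightness shifts abruptly
    from the previous frame (one coarse sign of splices or face swaps)."""
    cap = cv2.VideoCapture(video_path)
    suspicious, prev_mean, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mean = float(np.mean(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
        if prev_mean is not None and abs(mean - prev_mean) > threshold:
            suspicious.append(idx)
        prev_mean, idx = mean, idx + 1
    cap.release()
    return suspicious

print(lighting_jumps("suspect_clip.mp4"))  # hypothetical file name
```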

Educate yourself and others on how to spot a deepfake. Make sure you are media literate and use good quality news sources. “The main advice at the moment is not to exaggerate the threat or try to recognize voice/video deepfakes where they don’t exist. Nevertheless, you need to be aware of possible threats and be prepared for advanced deepfake fraud becoming a new reality in the near future,” Dmitry Anikin, Senior Data Scientist at Kaspersky, tells A Lotus In The Mud.
Have a secret code word that every family member knows, but that criminals wouldn’t guess. If someone claiming to be your daughter, grandson or nephew calls, asking for the code word can separate real loved ones from fake ones. Pick something simple and easily memorable that doesn’t need to be written down (and isn’t posted on Facebook or Instagram).

Ask the other person in the video call to turn their head and to put a hand in front of their face. Those manoeuvres can be revealing, because deepfake models often haven’t been trained to render them realistically. The most reliable way to smoke out a deepfake may be to insist on an in-person meeting. There are also software tools that automatically look for AI-generated glitches and patterns in an effort to separate legitimate audio and video from fakes.
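
Those automated tools are usually trained classifiers and their internals vary, but one signal some published detectors inspect is an image’s frequency spectrum, since generative models can leave statistical traces in the high frequencies. The sketch below is only an illustration of that idea, not a working detector: it computes an azimuthally averaged power spectrum that one could compare between a known-real photo and a suspect image (the file names are hypothetical).

```python
import cv2
import numpy as np

def radial_power_spectrum(image_path: str, bins: int = 64) -> np.ndarray:
    """Azimuthally averaged FFT power spectrum of a grayscale image.

    Synthetic images sometimes show anomalies in the high-frequency tail
    of this profile compared with camera-captured photos.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    power = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(y - h // 2, x - w // 2)  # distance from spectrum centre
    edges = np.linspace(0.0, r.max(), bins + 1)
    profile = np.empty(bins)
    for i in range(bins):
        mask = (r >= edges[i]) & (r < edges[i + 1])
        profile[i] = power[mask].mean() if mask.any() else 0.0
    return profile

# Compare the high-frequency tails of a trusted photo and a suspect one;
# a markedly different shape is a weak hint of synthesis, not a verdict.
real = radial_power_spectrum("known_real.jpg")       # hypothetical files
suspect = radial_power_spectrum("suspect_image.jpg")
print(real[-8:], suspect[-8:])
```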
