Google has introduced new online safety features to help people affected by non-consensual sexually explicit fake content, often known as “deepfakes”.
The new updates, which were developed based on feedback from experts and victim-survivors, include “removal processes to make it easier for people to remove deepfake content from Search and updates to Google’s ranking systems to keep this type of content from appearing high up in Search results”.
When someone successfully requests to have explicit non-consensual fake content featuring them removed, Google’s systems will also filter explicit results on similar searches about them, the tech giant explained in a blog post.
Furthermore, when someone successfully removes an “image from Search under our policies, our systems will scan for – and remove – any duplicates of that image that we find”.
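Google has not published the details of its duplicate-matching system, but detecting copies of an image at scale is commonly done with perceptual hashing, which assigns visually similar images nearly identical hashes even after resizing or recompression. The sketch below is a minimal illustration of that general technique, not Google’s implementation; it assumes the Python `Pillow` and `imagehash` libraries, and the file names are hypothetical.

```python
# Minimal sketch of perceptual-hash duplicate detection (illustrative only;
# not Google's actual system). Assumes Pillow and imagehash are installed.
from PIL import Image
import imagehash

def is_duplicate(path_a: str, path_b: str, max_distance: int = 5) -> bool:
    """Treat two images as duplicates if their perceptual hashes differ
    by at most `max_distance` bits (Hamming distance)."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance

# Hypothetical example: compare a removed image against a newly found one.
if is_duplicate("removed_image.png", "candidate.png"):
    print("Likely duplicate - flag for removal")
```

Because the comparison tolerates small differences rather than requiring byte-for-byte matches, an approach like this can catch re-uploaded copies that an exact-hash check would miss.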
The company said that these efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.
In addition to improving its processes for reporting and removing deepfake content, Google said it is also updating its “ranking systems” for queries where there is a higher risk of explicit fake content appearing in Search.
“First, we’re rolling out ranking updates that will lower explicit fake content for many searches. For queries that are specifically seeking this content and include people’s names, we’ll aim to surface high-quality, non-explicit content – like relevant news articles – when it’s available,” Google said.
As a result of these changes, people searching for such terms will be more likely to see reporting on the impact deepfakes are having on society, rather than pages of non-consensual fake images.
Additionally, the tech giant is working on ways to distinguish legitimate explicit content, such as an actor’s consensual nude scene, from explicit fake content, so that legitimate images are not swept up in measures aimed at deepfakes.