Taylor Swift Deepfakes Call Attention to Nonconsensual Pornography and AI Laws

The recent widespread dissemination of deepfake pornographic pictures of Taylor Swift on the internet is yet another vivid and alarming example of the urgent need for legislative change to protect victims of deepfake pornography. Pornographic images of Swift, generated by artificial intelligence (AI), went viral on X (formerly Twitter). One particular post gained over 45 million views.

Outraged fans swiftly took action, flooding search terms like “Taylor Swift AI” and “Taylor Swift deepfake” with positive content and images of the singer to bury the offending posts. Swift’s devoted fanbase also rallied behind her, making the hashtag #ProtectTaylorSwift trend and posting thousands of messages condemning the distribution of these nonconsensual images.

This incident is not an isolated case. Swift, one of the most recognizable figures in the world, is only the latest victim of this misuse of deepfake technology. The same harmful practice has been used to target individuals all over the world, from K-pop stars to Twitch streamers to journalists to high school girls.

What Is Deepfake Pornography?

Deepfake pornography refers to the creation of manipulated images or videos using a specific type of machine learning (AI) technology. These fabricated images insert individuals’ faces onto explicit content without their consent. The dissemination of such material is considered a form of image-based sexual abuse, also called nonconsensual pornography or revenge porn, and has been outlawed in several countries, most recently in the United Kingdom. Like revenge pornography, deepfake pornography inflicts extreme damage upon victims, including psychological, personal, and professional consequences.

According to 2023 research conducted by Home Security Heroes, an organization specializing in digital harm and identity theft, deepfake pornography constitutes 98 percent of all deepfake videos available online. Furthermore, it was discovered that 99 percent of the targets of deepfake manipulations are women.

What Is the Law on Deepfake Pornography?

In the United States, 48 states and the District of Columbia currently have criminal laws against revenge porn. Depending on how each statute is drafted, some of those existing laws may already extend to instances of deepfake pornography, but the regulations differ from state to state. Most states have not enacted legislation explicitly addressing deepfake pornography, though several, including Illinois, Virginia, New York, and California, are working to update their legislation to encompass it. Even among the states that do address deepfakes, not all adequately account for technology’s role in creating and disseminating such images and videos.

There is currently no federal criminal law addressing either deepfake pornography or other forms of nonconsensual pornography, such as revenge porn. As such, it is up to each individual state to address and impose criminal consequences on perpetrators of deepfake pornography and nonconsensual pornography.

If you have been a victim of deepfake pornography, it’s essential to confer with an attorney who is familiar with the nonconsensual pornography laws in your state to fully understand your potential legal options. 

You can read more about the laws regarding deepfake porn and nonconsensual pornography in general in our Complete Guide to Nonconsensual Pornography.

How Have Major Technology Companies Approached the Issue?

X, a platform where Swift’s images have been widely circulated, explicitly prohibits sharing “synthetic, manipulated, or out-of-context media.” This includes content that intentionally aims to deceive people or falsely represents reality. The company purports to have a strict zero-tolerance policy towards such content. 

In practice, however, the nonconsensual pornography lawyers at Katherine O’Brien Law have found that major social media platforms, including X, can be slow to respond and even mistakenly deny requests for the removal of nonconsensual pornography on behalf of victims. However, Ms. Swift’s celebrity status and dedicated fanbase appear to be applying pressure on X to work quickly to remove and stop the spread of the deepfake content. 

Other platforms, such as Reddit and Meta, also have policies in place to prevent the sharing of intimate or sexually explicit media without the consent of the individuals involved. Even so, these major platforms often struggle to remove such content because of the sheer volume of deepfake and nonconsensual pornography shared on their services. It can be difficult for the platforms to swiftly address each instance of publication on their own, and their automated reporting processes can lead to mistaken denials and slow action to stop the spread of offending content.

In 2021, Meta introduced a new tool in partnership with the UK Revenge Porn Helpline’s platform, StopNCII.org, to address this issue. More recently, the parent company of Facebook and Instagram announced a policy requiring digitally altered images related to social, electoral, and political matters to be labeled when published on their platforms. This policy aims to safeguard upcoming elections in some of the world’s largest democracies.

Many platforms face difficulties in controlling such content. The pictures of Swift, notably, were produced and shared within a Telegram group conversation, as discovered by 404Media. Telegram has previously been unsuccessful in preventing this type of content because its platform operates with end-to-end encryption, and bad actors take advantage of that protection to conduct illegal activity like the distribution of deepfake and revenge pornography. Even search engines like Google and Bing have struggled: NBC News recently found both prominently displaying nonconsensual deepfake pornography featuring female celebrities in search results.

The fundamental question is: why does this continue to occur? The concerning truth is that AI-generated images are becoming increasingly prevalent and pose new risks to those depicted. Compounding the problem are an uncertain legal landscape, social media platforms that have failed to establish adequate safeguards, and the ongoing advancement of artificial intelligence. In a report published in January 2024, the international women’s rights organization Equality Now outlined these factors and called for “urgent and comprehensive responses from technological innovation, legal reform, and societal awareness” to address the undeniable surge of deepfake pornography.

The circumstances surrounding Swift’s case highlight the fact that anyone can be a victim of deepfake pornography. It is crucial for legislators and Internet platforms to address this digital crisis because the problem will only worsen as AI technology continues to develop. 

If you have been a victim of deepfake pornography, contact the nonconsensual pornography lawyers at Katherine O’Brien Law for a free consultation about how we can protect your rights and work to restore your online reputation and privacy.

Katherine O'Brien

New Jersey expungement lawyer Katherine North O’Brien has been practicing expungement law for her entire career and has handled hundreds of complex criminal record expungements. She has also assisted in the drafting of briefs on expungement issues before the New Jersey Supreme Court. Katherine is passionate about helping people clean their criminal records and, therefore, started Katherine O’Brien Law to offer those with criminal convictions a fresh start.