March 18, 2024 | Litigation | New York | Technology

Does New York Protect Against the Distribution of Deepfake Pornography?

Carlianna Dengel

Associate Attorney

Olivia Loftin

Associate Attorney

The use of deepfake technology to create nonconsensual explicit content, commonly known as “deepfake porn,” has raised serious privacy, consent, and legal concerns.  Deepfake porn involves creating fake sexually explicit media using someone’s likeness.  Celebrities like Taylor Swift, and even high school students, have been victims of the distribution of deepfake pornography, underscoring how harmful this technology can be.  While it is well established that celebrities and others generally have a right to exploit their name, image, and likeness, the law has been expanding to ensure that they are also protected from the distribution of deepfake porn.  New York is one of several states that provides such protections.

How Are Deepfakes Created?

Deepfakes are typically created with open-source machine learning tools that compile images of a person’s face from sources such as Google image search, stock photos, and YouTube videos.  The technology is built on deep learning: a model is trained on a large dataset of real faces, from which it learns to generate fictitious photos or videos.

Are Celebrities and Other Individuals Protected from Deepfake Porn?

In New York, Governor Kathy Hochul signed legislation (S1042A) explicitly banning the distribution of AI-generated deepfake content depicting nonconsensual sexual images.  Violators may face up to a year in jail, and victims can pursue damages in civil court.  At this time, the U.S. has no federal law specifically banning the creation or sharing of deepfake images.

At the federal level, legal mechanisms such as Digital Millennium Copyright Act (DMCA) takedown complaints are currently used to combat the proliferation of deepfake porn.  For example, Google and Microsoft provide forms through which victims can request the removal of such explicit content from search results.

Despite these efforts, search engines like Google and Bing still display results related to nonconsensual deepfake tools.  Google allows victims to request content removal, but it does not independently search for and delist deepfakes.  Regulating deepfake technology is complicated by its legitimate uses, such as entertainment and satire, which make the passage of federal revenge porn bans contentious on freedom-of-speech grounds.  The difficulty of gathering evidence for legal action and the broad drafting of some proposed bills add to the complexity of addressing the issue at a regulatory level.  The prevalence of nonconsensual deepfake porn, which predominantly affects women, underscores the urgent need for comprehensive legal frameworks and technological safeguards.


New York’s ban on the distribution of deepfake pornography tackles challenges posed by modern technologies such as generative AI, which allows deepfake pornographic images of anyone to be created.  For any inquiries or legal support regarding the ban on deepfake pornography, our team of dedicated attorneys is available to guide you.


Contributions to this blog by Alastair Mecke.



Photo by Markus Spiske on Unsplash