California bans political deepfakes during elections and non-consensual pornographic deepfakes

Deepfake (Courtesy: Facebook)

Earlier this month, the US state of California enacted two laws that ban political deepfakes during elections and non-consensual pornographic deepfakes. The bills, known respectively as AB 730 and AB 602, were signed into law by California Governor Gavin Newsom on October 3.

Ban on political deepfakes

Political deepfakes garnered attention earlier this year after a doctored video of US House Speaker Nancy Pelosi, depicting her as drunk and slurring her speech, went viral. California’s ban on political deepfakes is aimed at tackling voter manipulation ahead of the 2020 US presidential election. The law will remain in force until January 1, 2023, unless extended.

The law bans deepfakes in campaign material relating to any political candidate, as well as any other intentionally manipulated image, audio, or video of a candidate’s appearance, speech, or conduct.

Ban on deepfakes in campaign material: The law prohibits a person or entity (including a campaign committee) from producing, distributing, publishing, or broadcasting campaign material that superimposes a person’s photograph over a candidate’s (or the other way around), while knowing, or turning a blind eye to the fact, that doing so will create a false representation. This provision covers only campaign material in which pictures or photographs are used, and applies to elections for any public office.

But there’s an exception: the ban does not apply if the material is accompanied by a disclaimer stating “This picture is not an accurate representation of fact” in the largest font size used anywhere in the campaign material.


Ban on political deepfakes in non-campaign material: The law prohibits any person or entity from distributing “materially deceptive audio or visual media” of a candidate with the intent to harm the candidate’s reputation, or to mislead a voter into voting for or against the candidate. The distributor is liable if they knew the material was deceptive, or turned a blind eye to its deceptive nature.

“Materially deceptive audio or visual media” means an image, audio, or video recording of a candidate’s appearance, speech, or conduct that has been intentionally manipulated so that it appears authentic while giving a false impression of what the candidate actually said or did. This means the law isn’t limited to deepfakes and would also cover, for instance, photoshopped images.

But there are exceptions: 

  • Media constituting satire or parody
  • A publisher or employee of a newspaper involved in the regular publication of such material (excluding publications whose primary purpose is campaign advertising or communication)
  • A radio or television broadcasting station that is paid to broadcast such deceptive media
  • A website, or a regularly published newspaper or magazine of general circulation, that clearly states that the material does not accurately represent the speech or conduct of the candidate

What remedies the law provides:   

  • Any registered voter may seek a temporary injunction against campaign material that violates the above-mentioned provisions.
  • The candidate may bring a civil action against the entity that produced, distributed, published, or broadcast the picture or photograph. The court may award compensation equal to the cost of producing, distributing, publishing, or broadcasting the offending campaign material, in addition to reasonable attorney’s fees and costs.

This isn’t easy to implement: The ban may have a chilling effect on free speech, according to The Guardian, because political speech is highly protected in the US, especially online. Moreover, given the sheer size of the internet, monitoring online speech would be difficult.

Ban on pornographic deepfakes

What’s banned: This law bans the non-consensual creation and sharing of sexually explicit material in which the depicted person appears to perform acts they did not actually perform, or is shown in an altered depiction. (According to a report by cyber-security company Deeptrace, 96% of all deepfakes online are non-consensual pornographic deepfakes, and 99% of those depict women.) The law covers any audio-visual work showing a realistic depiction of:

  • The nude body parts of another human being as the nude body parts of the depicted individual.
  • Computer-generated nude body parts as the nude body parts of the depicted individual.
  • The depicted individual engaging in sexual conduct in which the depicted individual did not engage.

Consent for such work must be in the form of a written agreement signed by the person in question. The consent can be withdrawn within three business days of being given if the depicted person was not given at least 72 hours to review the terms of the agreement before signing it.

Exceptions: A person will not be held liable if:

  • They disclose the banned sexually explicit material in the course of reporting unlawful activity, or during legal proceedings.
  • The material is any of the following:
    • A matter of legitimate public concern
    • A work of political or newsworthy value or similar work (however, material is not of newsworthy value solely because the depicted person is a public figure)
    • Commentary, criticism, or disclosure that is otherwise protected by the California Constitution or the US Constitution

However, a mere disclaimer that the material is inauthentic or unauthorized does not exempt a person under this law.

What remedies the law provides: The law gives persons depicted in pornographic deepfakes a wide range of remedies, such as:

  • An injunction
  • Punitive damages
  • Reasonable attorney fees
  • Recovery of the profit made from the creation or distribution of the sexually explicit material

The law also provides for statutory compensation of at least $1,500 and up to $30,000. If the act was committed with knowledge of its consequences, or by turning a blind eye to them, the maximum rises to $150,000.
