
US Federal Trade Commission Targets Deep Fake Deception: New Rule to Shield Against Impersonation

Once this rule is adopted, the makers of deep fake generation tools could be held liable for scams perpetrated using their technology. This increased liability for service providers could have significant implications for AI platforms, pushing them to implement rigorous checks and balances.

The US Federal Trade Commission (FTC) proposed a rule on February 15 prohibiting the impersonation of individuals. This would extend the impersonation protections already in place for US businesses and the US government to individuals as well. The FTC says it is taking this action in light of rising complaints about impersonation fraud, stating that emerging technologies like artificial intelligence (AI) generated images (deep fakes) could turbocharge these frauds. The FTC is also seeking comments on whether the revised rule should, “declare it unlawful for a firm, such as an AI platform that creates images, video, or text, to provide goods or services that they know or have reason to know is being used to harm consumers through impersonation.”

This proposed rule comes soon after US lawmakers introduced two separate bills to curb deep fakes—the No AI Fraud Act (which allows people to assert intellectual property rights over their likeness and voice) and the DEFIANCE Act (which allows victims of AI-generated porn and deep fakes to sue for compensation). The US Federal Communications Commission has also recently proposed making the voice-cloning technology (deep fakes) used in common robocall scams targeting consumers illegal.

What does the original rule say?

The original rule (called the Rule on Impersonation of Government and Businesses) prohibits the impersonation of government, businesses, or their officials. It allows the FTC to seek monetary relief from scammers who use government seals or business logos, spoof government/business emails and web addresses, or falsely imply affiliation with a business or the government.

This rule applies not only to the scammer but also to the people/companies that provide “means or instrumentalities” to those committing the impersonation. This means that if, for instance, a company provides the means to create fake government IDs, it can be held liable for violating the rule.

Proposed additions to the rule:

During the consultation on the original rule, the FTC saw comments from consumers and organizations suggesting that the impersonation of individuals should also be included under the scope of the rule. The Electronic Privacy Information Center argued that while the reported losses from romance and other familial scams are not as high as those attributed to government and business imposters, this is a result of the “personal nature” of individual impersonation scams. “It is highly likely that many fewer victims of these scams actually make reports to government and other agencies about the devastating losses they have suffered,” the Center told the FTC.

As such, the updated rule is proposed to have the following major additions—

  • Addition of individual impersonation: An individual under the proposed rule is defined as “a person, entity, or party, whether real or fictitious, other than those that constitute a business or government.” Impersonation of individuals in connection with commerce is also prohibited under the proposed rule; this would cover impersonators who misrepresent that they are a particular individual or are affiliated with a particular individual. The activities included under individual impersonation are:
    • calling, messaging or contacting a person while posing as someone else
    • creating websites or social media accounts impersonating the name, identifying information, or insignia of an individual
    • placing advertisements, dating profiles, or personal ads posing as another person
  • Changing “means and instrumentalities” to “provision of goods and services for unlawful impersonation”: This makes it unlawful for a person/company to provide goods or services with knowledge, or reason to know, that those goods or services will be used in impersonations. This modification acknowledges comments on the original rule which stated that the means and instrumentalities provision was too broad.

Why it matters:

Once this rule is adopted, the makers of deep fake generation tools could be held liable for scams perpetrated using their technology. This increased liability for service providers could have significant implications for AI platforms, pushing them to implement more rigorous checks and balances to prevent their technologies from being used for fraudulent purposes.

Companies have already been taking steps to this effect. For instance, OpenAI recently announced that its image generation tool DALL·E would include Coalition for Content Provenance and Authenticity (C2PA) metadata, which encodes details about a piece of content's place of origin (provenance) using cryptography. OpenAI is also working on a tool that will detect images generated by DALL·E. Social media platforms like TikTok and YouTube have also begun watermarking AI-generated content. But digital watermarks and provenance data can both be bypassed: bad actors can take screenshots of a deep fake, which strips it of provenance information, and can crop the watermark out.

Given how readily the current measures for deep fake transparency can be bypassed, AI image-generation companies might have to invest further effort to ensure that their tools cannot be used to generate a real person’s likeness if they are to comply with the proposed rule.

US vs India: approaches to tackling deep fake impersonation

In India, the government has so far relied on the Information Technology (IT) Act, 2000 and the IT Rules, 2021 to curb the spread of deep fakes. The IT Rules [Rule 3(1)(b)] require social media platforms to “make reasonable efforts” to avoid hosting content that impersonates another person. In November 2023, platforms were asked to modify their terms and conditions to explicitly mention this provision of the IT Rules and inform users of their obligations when publishing content.

The difference here is that while the FTC is holding AI companies responsible, in India, social media platforms are the ones being asked to take charge of curbing deep fakes. Despite the current differences, it is possible that India might take a similar stance to the US. Speaking at Financial Express’ Digifraud & Safety Summit in November last year, India’s IT Minister Rajeev Chandrasekhar said that India has been consistent in its stance that everything it does in technology would be based on the principle of openness, safety, and legal accountability for platforms. He mentioned that the same approach would also be used for AI. This could mean that once India’s AI regulation (the Digital India Act) is introduced, we could expect to see AI companies being held accountable just like they are under the FTC’s proposed rule.


