“Thanks to AI tools that create ‘synthetic media’ or otherwise generate content, a growing percentage of what we’re looking at is not authentic, and it’s getting more difficult to tell the difference… Some of these tools may have beneficial uses, but scammers can also use them to cause widespread harm.” With these words, the United States Federal Trade Commission (FTC) takes on the problem of fake content created by generative AI in its latest blog post. Having already cautioned businesses to keep their AI-related claims in check, the FTC has now shed light on how AI can be used to “spread deception” and emphasises the urgent need for digital companies to address the “deeper and emerging threat” of what it calls the “AI fake” problem.

Focus on AI fakes: From generative AI chatbots to software that creates deepfakes and voice clones, the blog post outlines how fraudsters can abuse AI for detrimental purposes. The “fake content” created by malicious actors using these tools can then reach large audiences and potentially harm specific communities and sections of the population. “They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list,” the statement adds.

Why it matters: The FTC blog post reminds us that it is necessary to look beyond…
