Highlighting that generative artificial intelligence tools such as ChatGPT, GPT-4, and Bard may become a “key criminal business model of the future”, a new Europol report states that these models will make it easier for “malicious actors to perpetrate criminal activities with no necessary prior knowledge”.

The report, ‘ChatGPT: The Impact of Large Language Models on Law Enforcement’, was published on March 27, 2023, by Europol, which works with European Union states to combat crime. It explores how Large Language Models (LLMs) that can perform language-processing tasks, like OpenAI’s ChatGPT, and their information-curation capabilities can be exploited for “nefarious purposes” such as phishing, child harassment, financial crimes, and deception, among others.

The report focuses on the results of workshops organised by the Europol Innovation Lab with domain experts to analyse how LLMs can be abused and what law enforcement agencies can do to tackle such abuse. The experts chose ChatGPT, given its widespread use and the traction it has gained in recent months, to explore both its benefits and its “negative potential”. Here, we look at some of the major criminal use cases of generative AI tools that the report outlines.

Why it matters: From cybercrimes to misinformation and propaganda that endanger vulnerable sections of the population, a number of anticipated negative impacts of generative AI tools have ignited discussions about “AI ethics” and the ways in which companies can develop “responsible AI” tech. Both the pros and the cons have fuelled the hype around ChatGPT and…
