Artificial Intelligence (AI) language-based models like OpenAI's ChatGPT, Google's Bard, and Microsoft's Bing can be used by threat actors to carry out various malicious activities and to target individuals and organizations, the Indian Computer Emergency Response Team (CERT-In) warned in an advisory issued on May 9. The agency also outlined measures users and organizations can take to safeguard themselves.

The concerns highlighted by the Indian government's cybersecurity agency are not new and have been part of the public discourse ever since the rapid rise of AI. However, this is the agency's first advisory on generative AI, signaling growing concern within the government.

What malicious activities can generative AI be used for?

CERT-In listed the following malicious activities that threat actors can carry out with the help of generative AI:

- Write malicious code to exploit vulnerabilities, construct malware and ransomware, and perform privilege escalation.
- Disseminate fake news, scams, misinformation, phishing messages, and deepfakes. "A threat actor can ask for a promotional email, a shopping notification, or a software update in their native language and get a well-crafted response in English, which can be used for phishing campaigns," CERT-In offered as an example.
- Create fake websites, web pages, and apps to distribute malware to users.
- Scrape information "from the internet such as articles, websites, news and posts, and potentially taking Personal Identifiable Information (PII) without explicit consent from the owners to build a corpus of text data."

What safety measures can be adopted by users and organizations to minimize the adversarial threats arising from…
