We missed this earlier: On June 20, Singapore-based cybersecurity company Group-IB published a press release saying that, according to data from its threat intelligence platform, 101,134 ChatGPT accounts were compromised between June 2022 and May 2023. It mentioned that the credentials for these compromised accounts are being traded on dark web marketplaces. The company also said that a majority of the compromised accounts are from the Asia Pacific region, with India topping the list at 12,632 compromised accounts. Group-IB says that most accounts were breached by the Raccoon info stealer. An info stealer is a type of malware that harvests data saved in browsers, such as login credentials, bank card details, and browsing history, and sends it to the malware operator.

Why it matters: Whenever someone uses ChatGPT, the tool stores both the user's queries and the responses it generates. Group-IB points out that employees increasingly use the tool to perform their job functions, which means that unauthorized access to ChatGPT accounts could expose confidential or sensitive information. This information can then be exploited for targeted attacks against both the companies and their employees.

The fear of sensitive company information being leaked into the chatbot isn't unwarranted. In May this year, Samsung found three instances of employees entering confidential source code and internal meeting notes into ChatGPT while using the AI tool for code-writing support. Once such data has been entered into the chatbot, there is no way of retrieving it or deleting it from…
