Italy’s data protection authority has commenced a “fact-finding investigation” into how large amounts of personal data available online are being used to train Artificial Intelligence (AI) systems, according to a report by Reuters on November 22, 2023.
While the regulator’s website does not provide any information on the development, Reuters reported that the examination seeks to check whether websites are taking “adequate measures” to protect personal data from unwarranted scraping by AI companies.
The regulator has invited stakeholders, including academics, AI experts, and consumer groups, to submit comments on the fact-finding process within 60 days. The authority also reserves the right to take any necessary measures based on the results of the review.
Task force to study AI developments: In April this year, the European Data Protection Board (EDPB) announced that it would establish a task force to cooperate and “exchange information on possible enforcement actions conducted by data protection authorities” against AI companies like OpenAI. The move came in response to Italy’s temporary ban on ChatGPT over non-compliance with the European Union’s General Data Protection Regulation (GDPR) in protecting Italian residents’ personal data.
Similarly, the Japanese government has indicated plans to launch a task force to study the benefits and harms of using generative AI in different sectors, according to NHK World-Japan. The United Kingdom has set up a Foundation Model Taskforce comprising experts from government, industry, and academia to study risks associated with AI, explore mitigating measures, and recommend guardrails in accordance with international standards. In September, the Personal Data Protection Office of Poland also initiated an investigation into OpenAI over allegations that it failed to comply with the GDPR and engaged in data processing that is “unlawful and unreliable” and non-transparent.
Why it matters:
The unauthorised use of publicly available and personal data to train the foundation models that power AI services has become a high-priority concern around privacy rights and copyright infringement for many countries as they deliberate AI regulations. China’s generative AI rules, published in July, obligate AI service providers to protect users’ “input information and use records,” and prohibit them from illegally collecting unnecessary personal information or sharing usage records that could expose a user’s identity. France, meanwhile, is in the process of establishing a taxation regime for AI-generated works whose source material remains of uncertain origin. Countries as well as AI companies have emphasised the need for multi-stakeholder discussions to establish suitable standards for AI development and deployment. In this context, the findings of Italy’s investigation will be worth watching.
Also Read:
- EU Data Protection Board To Launch Taskforce On Action Taken By Italy Against OpenAI’s ChatGPT
- ChatGPT Back In Italy After OpenAI Introduces Measures Demanded By The Italian Privacy Regulator
- Poland To Investigate OpenAI Regarding Non-Compliance With EU’s Data Protection Regulations
- Entrepreneur And Investor Ian Hogarth To Lead UK’s Taskforce Examining AI Risks And Global Guardrails