Content moderators from Kenya who have trained OpenAI’s ChatGPT have petitioned the country’s National Assembly, calling for an investigation into the operations of companies like Samasource, registered in Kenya, to which big tech companies like Google, Meta, and OpenAI outsource their content moderation and AI work. The petition, shared with MediaNama by digital rights advocate Mercy Sumbi, sheds light on the working conditions of young Kenyan workers employed to label a wide range of internet content as toxic and harmful for ChatGPT training. Samasource, a San Francisco-based company, essentially employs workers to label and filter data and content for big tech companies.
What are the issues raised by Kenyan employees?
The petition reveals significant details about the nature of the work Kenyan content moderators have performed to train OpenAI’s AI models since 2021, when the company partnered with Samasource Kenya. The petitioners were engaged on temporary contracts with Sama to train ChatGPT, which involved “reading and viewing material that depicted sexual and graphic violence and categorizing it”. This meant the workers were regularly exposed to content including “acts of bestiality, necrophilia, incestuous sexual violence, rape, defilement of minors, self-harm (e.g. suicide), and murder” among others.
The petitioners highlight that the nature of the job and the work it entailed were not sufficiently described in their contracts. They were regularly exposed to harmful content without adequate psychological support, and many workers developed “severe mental illnesses including PTSD, paranoia, depression, anxiety, insomnia, sexual dysfunction”. Additionally, when the contract between Sama and OpenAI abruptly ended, the workers were sent home without their pending dues and without any medical care for the mental health toll of the job.
An investigation by Time earlier this year revealed how OpenAI employed Kenyan workers to label tens of thousands of snippets of text from the “darkest recesses of the internet,” depicting violence, hate speech, and sexual abuse. These labeled samples were used to train ChatGPT’s models, helping the chatbot learn to identify and filter such content. The investigation also uncovered that the data labelers employed by Sama for OpenAI were paid low wages, ranging from around $1.32 to $2 per hour, depending on seniority and performance.
The petitioners emphasize that the outsourcing model employed by big tech companies from the US often undermines Kenyan citizens’ protections against exploitation and fails to provide safe employment conditions. They also complain that the workers are paid poorly and are mostly “disposed of at will”.
Why it matters:
The petition surfaces issues with the fast-paced deployment of AI that are overlooked in narratives focused only on the benefits of algorithm-based tools and the harms they pose to end users. The working conditions described by the Kenyan workers, and the way their rights are affected in the process of developing AI, are central to the debate on holding accountable both AI developers and the companies that deploy their systems. Whether the impact is direct or indirect, one must also ask who ultimately benefits from such operations and at what cost. As countries move to regulate AI and AI businesses through risk-based and rights-based approaches, the case put forth by the Kenyan workers points to the areas of intervention a comprehensive regulatory approach would need to cover.
What are petitioners asking for?
According to the petition reviewed by MediaNama, the petitioners have appealed for:
- Investigate the nature of work and working conditions of Kenyan employees at companies like Samasource.
- Interrogate the role of the Ministry of Labour in protecting Kenyan youth working for Sama or other companies on behalf of tech companies outside Kenya.
- Make recommendations to prevent the exploitation of workers and propose the withdrawal of licenses of companies that enable the exploitation of Kenyan employees.
- Enact a law to regulate the outsourcing of “harmful and dangerous” tech work and to protect workers engaged in such work arrangements.
- Amend the country’s Employment Act 2007 to offer protection to workers engaged in outsourced work.
- Define exposure to harmful content as an “occupational hazard” in relevant country laws.
Observations by Kenyan Courts in a Complaint against Meta:
In June this year, a Kenyan employment court ordered Meta to provide “proper medical, psychiatric and psychological care” to content moderators in Nairobi who screened content for Facebook, according to a report by the Guardian. While the case concerned Facebook’s move to declare around 260 such screeners in Nairobi “redundant”, it reflects growing discontent among workers who underwent traumatic experiences while screening toxic content under tight timelines without adequate psychological support. According to the report, a Kenyan court has also ruled that Meta, not Sama, was the primary or principal employer of the workers in Nairobi, with Sama acting only as an agent, since the work these moderators performed was provided by and ultimately served Meta.