"Facebook was willing to trade the lives of the Rohingya people for better market penetration in a small country in Southeast Asia," said a class action complaint filed on behalf of Rohingya refugees holding Facebook responsible for the genocide of Rohingyas in Myanmar. The complainants have sued Facebook for US$150 billion in California. In the class-action complaint seen by MediaNama, they alleged that Facebook refused to stop hateful anti-Rohingya content on its platform, instead benefiting from the increased user engagement and growth such content generated.

By pointing the finger at Facebook's algorithms, this complaint goes further than any other in ascribing responsibility to the platform for amplifying hateful content. If upheld, it could pave the way for similar action against the company for violent events across the world, including in India.

Part 1 - The Defective Design of The Facebook Algorithm

The complaint, filed by a Rohingya woman based in the United States on behalf of more than 10,000 refugees settled in the U.S., outlines in great detail how Facebook's algorithms are geared towards hateful content:

Facebook runs on a logic of engagement: The complaint cited a study published in Nature which claimed that engagement was the core logic of Facebook's News Feed. "[T]he Feed's ... logics can be understood through a design decision to elevate and amplify 'engaging' content.... [T]he core logic of engagement remains baked into the design of the Feed at a deep level," the complaint quoted.

Hateful content is most engaging, and hence prioritised: Facebook is aware that hateful content is…
