Facebook has hit out at claims made in the Netflix docudrama ‘The Social Dilemma’, which portrayed the growing divisiveness and danger on social media and the internet, and the role big tech companies like Facebook and Google are playing in this transformation. The company claimed that the film buries “substance in sensationalism”, and rather than offer a nuanced look at technology, it gives a “distorted view of how social media platforms work to create a convenient scapegoat for what are difficult and complex societal problems”.

In a rebuttal posted on its website, Facebook defended its algorithms, claimed that its ad business doesn’t make users the product, argued that a majority of the content on its platform is neither polarising nor political, and highlighted the measures it has employed to maintain election integrity. However, while defending its position, Facebook did not mention several instances in which these very issues have plagued its platforms. Here’s a point-by-point counter of Facebook’s rebuttals:

1. Facebook says every internet company uses algorithms, but stays silent on algorithmic bias

Facebook downplayed its use of algorithms, saying that every consumer-facing app, including dating apps, cab-hailing apps, and Netflix itself, uses them. It said Netflix uses algorithms to determine who it thinks should watch ‘The Social Dilemma’ and then recommends the film to them. Facebook uses algorithms to show content that’s more relevant to what people are interested in, whether it’s posts from friends or ads. “Portraying algorithms as ‘mad’ may make good fodder for conspiracy documentaries, but the reality is a lot less entertaining,” Facebook said.

Facebook did not acknowledge algorithmic bias in its argument, or how its own platforms have often been found to suffer from it. A study conducted by Northeastern University last year found that even slightly changing the headlines of ads had a significant impact on whom the algorithm targeted the ads at. For instance, job postings for secretaries and preschool teachers were shown to a higher fraction of women, while postings for janitors and taxi drivers were shown to a higher proportion of minorities.

  • In 2016, a ProPublica report showed that it was possible to use certain Facebook tools to exclude specific racial groups from an ad’s reach.
  • No clarification on algorithmic bias: Facebook also did not find it necessary to even acknowledge the larger issue of algorithmic bias, which usually stems from inherent biases in training datasets. Just a few months ago, Facebook had launched an investigation into potential bias within its own algorithms following the Black Lives Matter movement.

2. Facebook claims its ad business helps small businesses and users, but doesn’t shed light on its extensive data collection and the type of ads it sometimes allows

“Facebook is an ads-supported platform, which means that selling ads allows us to offer everyone else the ability to connect for free,” the company said. It claimed that this model allows small businesses and entrepreneurs to grow and compete with bigger brands by more easily finding new customers. “But even when businesses purchase ads on Facebook, they don’t know who you are. We provide advertisers with reports about the kinds of people who are seeing their ads and how their ads are performing, but we don’t share information that personally identifies you unless you give us permission,” Facebook argued.

Facebook has always maintained that it doesn’t sell user data to make money. However, the data it amasses to target ads at users is extensive: Facebook collects everything from the types of content users engage with and the features they use, to the actions they take and the people or accounts they interact with. Apart from that:

  • Facebook was found to be giving major tech companies access to user data: Internal Facebook documents revealed that the company gave several of the world’s largest technology companies, including Netflix, Microsoft, Amazon, and Spotify, access to users’ personal data, exempting them from its privacy rules. Facebook collects this data not just from one platform, but also from its other platforms, such as Instagram, to better target ads at users.
  • Ads containing hate speech: Facebook has also courted controversy by allowing problematic ads on its platform in the first place. For instance, earlier this year, Facebook approved ads from US President Donald Trump’s re-election campaign that featured an inverted triangle, imagery that Nazis used to designate political prisoners. The platform later removed the ads.

3. Facebook said it protects users’ privacy even though it has had a chequered past, including non-consensual data harvesting

Facebook claimed that it has policies that prohibit businesses from sending sensitive data about people, including users’ health information or social security numbers, through Facebook’s business tools. It added that it takes steps to prevent potentially sensitive data sent by businesses from being used in its systems, and that it supports regulations that can guide the industry as a whole.

While Facebook did mention its agreement with the Federal Trade Commission (FTC), it did not elaborate on why an agreement was needed in the first place. Facebook’s response did not mention the Cambridge Analytica scandal, in which the analytics firm had access to the data of 87 million Facebook users, including over half a million Indian users. The scandal resulted in the company being questioned and fined by governments around the world, including a $5 billion fine by the FTC in the US.

Facebook has also had a chequered history in ensuring children’s privacy:

  • In 2019, The Verge reported a design flaw in the Facebook Messenger Kids app that allowed users to sidestep protections in the group chat system, letting children enter group chats with unapproved strangers. Facebook admitted the flaw and sent alerts to the parents of the children affected.
  • In 2018, a report in Wired found that the majority of experts who vetted Messenger Kids before its launch had received money from Facebook.
  • The same year, the Campaign for a Commercial-Free Childhood, in an open letter to CEO Mark Zuckerberg, called for the app to be shut down, warning that it is “harmful to children and teens” and could “undermine children’s healthy development”.

4. Facebook says the ‘overwhelming majority’ of content on its platform is not polarising or political

Facebook’s response made it sound as if polarising content is not a big issue on the platform, even as its own employees have been accused of declining to block hate speech in order to protect the company’s business prospects, and as several major advertisers have pulled their ads over the platform’s struggles to deal with such content.

“The truth is that polarization and populism have existed long before Facebook and other online platforms were created and we consciously take steps within the product to manage and minimize the spread of this kind of content,” Facebook argued, and added that an “overwhelming majority of content that people see on Facebook is not polarizing or even political”.  It also said that it removed over 22 million pieces of hate speech in the second quarter of 2020, over 94% of which it found before someone reported it.

But Facebook’s own employees have been found to allow hateful speech on the platform for boosting the company’s business prospects. The Wall Street Journal had reported that Facebook had refused to take down hateful content by governing Bharatiya Janata Party (BJP) members in order to avoid damage to its business prospects in the country. The report found that Facebook India’s public policy team, headed by Ankhi Das, had refused to take down posts by Raja Singh, a BJP MLA from Telangana, although they were flagged as “hate speech”. Singh, in his posts, had said “Rohingya Muslim immigrants should be shot, called Muslims traitors and threatened to raze mosques”:

  • Advertisers stopped advertising on Facebook for a while due to hateful content: Several advertisers, such as Microsoft, had stopped advertising on Facebook, worried that their ads might show up next to hateful content. According to Microsoft, examples of inappropriate content include hate speech, pornography, and terrorist content. Microsoft had also pulled ads from YouTube over similar concerns, but eventually restored them.
    • Companies such as Starbucks, Coca-Cola, and Hershey’s have paused advertising on Facebook, expressing concern over the company’s handling of misinformation and hate speech, and its reluctance to act against controversial content posted by US President Donald Trump.
  • Facebook has left hateful content up on its platform on several occasions: Mark Zuckerberg recently admitted that Facebook mistakenly left up a page that called for violence ahead of Black Lives Matter protests in the US. Hateful content routinely remains active on Facebook, sometimes for months, before it is removed. In 2018, a team of United Nations investigators found that Facebook had been used to whip up hatred against Rohingya Muslims. Facebook later admitted that it hadn’t done enough to prevent the platform from being “used to foment division and incite offline violence”.

5. Facebook claims it’s taking steps to increase election integrity, but stays silent on how its platforms have been used by political parties to potentially mislead voters

The social media platform “acknowledged” that it made mistakes in the 2016 US presidential election. However, it said that the film left out what Facebook has done since 2016 to stop people from using the platform to interfere in elections. It said that in 2018, it created the Ad Library, which makes all ads running on Facebook visible to people, even if those ads never appeared in their own feeds.

Facebook’s acknowledgement of its 2016 missteps did not reflect the extent to which the platform was reportedly used to fiddle with the US elections. Special counsel Robert Mueller’s investigation into the Trump campaign revealed that throughout 2016, the Internet Research Agency (IRA), a Russian company, had  Facebook accounts publishing an increasing number of materials supporting the Trump Campaign and opposing the Clinton Campaign. For example, on May 31, 2016, the operational account “Matt Skiber” began to privately message dozens of pro-Trump Facebook groups asking them to help plan a “pro-Trump rally near Trump Tower. The report also the IRA purchased advertisements from Facebook that showed that the IRA purchased advertisements from Facebook that promoted the IRA groups on the newsfeeds of US audience members. According to Facebook, the IRA purchased over 3,500 advertisements, and the expenditures totaled approximately $10,000. Other than that:

  • Facebook’s reported role in influencing 2016 US elections: Since the 2016 US presidential elections concluded, media reports have highlighted how Facebook actually helped Donald Trump to win the election. Online echo chambers, fake news masquerading as genuine content, and political ads have been found to have played an influential role in Trump’s election. Facebook had always denied fact checking political ads, and even though it recently said that it won’t allow new political ads a week before the 2020 US elections, old political ads will be allowed to run on the platform.
  • Indian political parties have accused Facebook of interfering in elections: The Indian National Congress — the country’s major opposition party — recently called out Facebook’s “blatant bias and dubious content regulation” citing recent media reports. “This is damning and serious allegation of Facebook’s interference in India’s electoral democracy”, Congress MP K.C. Venugopal said in the Rajya Sabha last month.
  • Facebook’s ad library is not fool proof: Even though Facebook’s ad library offers some transparency on the ads running on the platform, more than half of Facebook pages that displayed U.S. political ads during a recent 13-month period concealed the identities of their backers, according to research reviewed by Politico.

6. Facebook boasts about its measures to tackle misinformation, but doesn’t mention how misinformation on its platforms led to mob lynchings in India

“The idea that we allow misinformation to fester on our platform, or that we somehow benefit from this content, is wrong,” Facebook claimed, arguing that it is the only major social media platform with a global network of more than 70 fact-checking partners, who review content in different languages around the world. Content identified as false by these fact-checking partners is labelled and down-ranked in the News Feed, while misinformation that has the potential to contribute to imminent violence, physical harm, or voter suppression, including misinformation about COVID-19, is removed outright, Facebook claimed.

However, a recent report by activist group Avaaz found that posts spreading health misinformation attracted as many as 3.8 billion views on Facebook in the last year, peaking during the COVID-19 pandemic, with 460 million views in April alone. Also:

  • Misinformation on WhatsApp has had real world consequences: Facebook also did not comment on the misinformation that is propagated via its encrypted chat platform WhatsApp, often with real world consequences. In 2018, several mob lynchings, triggered by widely circulated WhatsApp messages, occurred across multiple states, including Jharkhand, Telangana, Karnataka, Assam, West Bengal, Uttar Pradesh, Chhattisgarh, Gujarat, Tripura, and Maharashtra.
  • WhatsApp is used by political parties to spread false propaganda: In India, political parties use WhatsApp to rally supporters, and “IT Cells” are believed to be using the platform for spreading misinformation and hate speech.

7. Facebook claims its platform adds value and is not addictive

“Our News Feed product teams are not incentivized to build features that increase time-spent on our products. Instead we want to make sure we offer value to people, not just drive usage,” Facebook said. It said that it made changes to its News Feed to prioritise “meaningful social interactions” and deprioritise viral videos, and claimed that the change reduced time spent on Facebook by 50 million hours a day.

However, even as Facebook claimed it deprioritised viral videos on its platform, it simultaneously launched new products in a bid to increase the average time users spend on the platform. For instance, in 2018, when the company made the changes to its News Feed to prioritise “meaningful social interactions”, it also launched a dedicated tab called Facebook Watch, where users can endlessly scroll through viral videos. Then, in the aftermath of the TikTok ban in India, Facebook swiftly launched Reels on Instagram as a TikTok-like product. Apart from that:

  • Studies have shown that design techniques like push notifications and the endless scroll of the news feed have created a feedback loop that keeps users hooked to their devices. The Social Dilemma’s website mentions a study published in the American Journal of Epidemiology which found that higher social media use correlated with self-reported declines in mental and physical health and life satisfaction.
