By Trisha Jalan and Sneha Johari
Facebook has suspended 652 accounts, pages and groups for misleading users of Facebook and Instagram across the US, UK, Middle East, and Latin America, it said in a news update. The company said it acted on a tip-off from a cybersecurity firm and investigated for multiple months before taking the cluster down.
Facebook’s four separate investigations found “coordinated inauthentic behavior” linked to Iran and to Russian military intelligence. However, the two sets of activity were unrelated, Facebook added.
Statistics from Facebook’s investigations
Facebook said that some of these pages, groups and accounts were linked to Iranian state media, some of which were created in 2013.
Statistics related to the Iranian disinformation accounts:
- Presence on Facebook and Instagram: 74 Pages, 70 accounts, and 3 groups on Facebook, as well as 76 accounts on Instagram.
- Followers: About 155,000 accounts followed at least one of these Pages, 2,300 accounts joined at least one of these groups, and more than 48,000 accounts followed at least one of these Instagram accounts.
- Advertising: More than $6,000 in spending for ads on Facebook and Instagram, paid for in US and Australian dollars. The first ad was run in January 2015, and the last was run in August 2018. Some ads have been blocked since our political ads transparency tools launched. We have not completed our review of the organic content coming from these accounts. (emphasis added)
- Events: 3 events hosted.
Another investigation revealed that some accounts and pages, first created in 2011, “largely shared content about Middle East politics in Arabic and Farsi, and also content about politics in the UK and US in English.”
- Presence on Facebook and Instagram: 168 Pages and 140 accounts on Facebook, as well as 31 accounts on Instagram.
- Followers: About 813,000 accounts followed at least one of these Pages and more than 10,000 followed at least one of these Instagram accounts.
- Advertising: More than $6,000 in spending for ads on Facebook and Instagram, paid for in US dollars, Turkish lira, and Indian rupees. The first ad was run in July 2012, and the last was run in April 2018. We have not completed our review of the organic content coming from these accounts. (emphasis added)
- Events: 25 events hosted.
Facebook has not disclosed how much of this spending was made in Indian rupees. As of now, it is unclear whether these accounts had anything to do with India, or Indian users of Facebook or Instagram, or if the ads were run by Indians.
In terms of the Russian activity, Facebook said that it removed pages, groups and accounts which the US government had linked to Russian military intelligence. It did not specify when these accounts were created or share statistics about them; however, their more recent activity was focused on politics in Syria and Ukraine. It concluded that “To date, we have not found activity by these accounts targeting the US.”
This is the second disinformation campaign Facebook has uncovered this month, but unlike the previous announcement, this one reveals political influence campaigns spread across continents and originating from a state actor other than Russia.
Some key takeaways from Facebook’s news update:
- Facebook was tipped off by the cybersecurity firm FireEye; the research and findings did not originate at Facebook, although the company did mention that it works with the firm.
- Facebook used publicly available website registration information and related IP addresses, and looked into “shared administrators” of Pages.
- Some of these “disinformation” posters bypassed Facebook’s own moderation and detection systems, and even advertised on the platform, presumably with payments made by real people.
- Some of these accounts were created as early as 2011, seven years ago.
- Facebook says that it is still investigating these events, but that it would “make changes to better detect people who try to evade our sanctions compliance tools and prevent them from advertising” after completing its investigation and getting inputs from government officials.
- Facebook’s “aim” in identifying a fake/inauthentic campaign is to determine “the extent of the bad actors’ presence on our services; their actions; and what we can do to deter them.” Once it determines that it is getting no new information from a campaign, Facebook takes it down, since “time is unlikely to bring us more answers.”
- It will hold off taking any action if it would tip off its “adversary and prompt them to change course”, especially “highly sophisticated actors adept at covering their tracks.” However, it takes down content from “amateur actors” immediately, regardless of who they are or how they operate.
- Where there is an immediate risk to safety, for instance if someone is trying to determine the location of a user who may be in physical danger, Facebook will “always move quickly”.
- On being asked (pdf) if Facebook could internally detect such “inauthentic” behaviour, Nathaniel Gleicher, head of Facebook’s cybersecurity policy, told reporters that some investigations were internal, some through working with firms like FireEye, some open source investigation and some of it from law enforcement. He also added, “One of the things that we’ve learned that is very clear is that no one company can solve this problem on its own.”
- Gleicher did not provide a yes or no answer when asked whether Facebook would provide researchers and reporters with the information found from its investigations.
- Mark Zuckerberg added that Facebook now had 20,000 people “working on security and content review, implementing ads transparency to a higher standard than what’s even on TV or print media today, verifying advertisers running political and issue oriented ads, working on deeper partnerships with law enforcement and government and other companies to be able to do signal sharing so that way we could run investigations like this better.”
Facebook upped security on its platform ahead of the US mid-term elections amidst widespread backlash from the US government over the Cambridge Analytica fiasco and Russian interference in the 2016 presidential elections via Facebook. Just as Facebook spoke to the press, Twitter said that it had suspended 284 accounts with ties to Iran, but did not furnish any more details. FireEye wrote in its post that “We have identified multiple Twitter accounts directly affiliated with the sites, as well as other associated Twitter accounts, that are linked to phone numbers with the +98 Iranian country code.”
Top executives from Facebook, Twitter, and Google will testify before the US Senate Intelligence Committee on September 5 on social media manipulation and interference from Russia, and now, it appears, from other countries as well.
Facebook fake accounts not restricted to the US
So far, there has only been proof of Russian meddling in the US. As FireEye and stakeholders across the US have made clear, the disinformation campaign on Facebook spread across the US, UK, Middle East and Latin America. Facebook has not disclosed which countries’ users, and how many, were targeted in the Middle East and Latin America. Meanwhile, presidential elections are coming up in Brazil in October.
The stakes appear to be higher on Facebook’s home ground as the US mid-term elections approach in November. Facebook added no new users from the US in the second quarter of 2018; 47% of Facebook’s total $13.2 billion revenue came from the US and Canada. These two countries provide Facebook with its highest average revenue per user (ARPU), as much as $26 in this quarter. The EU comes in next with $8.6 ARPU, roughly a third of the US and Canada figure.