
Facebook stops Marketplace rollout due to illegal listings



A day after Facebook launched its classifieds space Marketplace, multiple listings related to guns, alcohol, and animals started popping up in its feed, leading the company to temporarily halt the global rollout of the product. Facebook said that this was due to a technical issue and that it was working on fixing it.

The company will also improve its image recognition technology to identify prohibited products before they are uploaded, and will monitor its systems to identify and remove violating listings before more people are given access to Marketplace. It is hoping that, along with its employees, its regular users will also report such listings to aid in taking them down.

What is prohibited on Facebook?

Facebook says that items, products and services sold on its platform need to follow its commerce policy along with its community standards. Facebook's content policies are available here.

As of now, the commerce policy prohibits users from uploading: illegal, prescription or recreational drugs; tobacco items; 'unsafe supplements'; weapons, ammunition and explosives; animals; adult items and services; alcohol; adult health items; real money gambling services; fraudulent, misleading, deceptive or offensive items or posts; 'overtly sexualised positioning'; and non-physical items like services, subscriptions, rentals and digital products, among others.

With the rollout of Marketplace, the company replaced the Messenger (which has 1 billion users) shortcut on the Facebook app with a Marketplace shortcut in the countries where it is available: the US, UK, Australia and New Zealand. Facebook claimed that 450 million users checked out buy-and-sell Facebook Groups monthly, which led it to launch Marketplace. Just last month, Facebook launched Groups Discover, a platform where users could browse groups by category and get recommendations based on friends, location and interests.


Problems with automation of content regulation and AI partnership

Note that recently, Facebook could not tell the difference between a historically important image of a naked child fleeing a napalm attack and content violating its nudity policies, and had to reinstate the picture and post after much criticism. Facebook has also been accused of downgrading conservative news in its Trending Topics module, something Facebook denied, although it also shut down its Trending Topics editorial team. The more Facebook automates its content moderation (more here and here), the more such issues are likely to crop up.
Last week, Amazon, Facebook, IBM, Microsoft and Google's DeepMind launched a Partnership on AI, under which they would support research into the ethics, fairness, inclusivity, transparency and privacy of artificial intelligence.

MediaNama's take: Tech companies usually have editorial teams which follow guidelines to moderate and remove at least a part of the content uploaded to their platforms which doesn't adhere to those guidelines. We think that Facebook currently relies more on automation than human intervention, leading to errors which a human would think twice before committing. This does not imply that Facebook should stop its automation efforts: it has data at a scale nobody has had before. But unless this was a one-time spam attack, human judgment would have spotted these listings immediately, maybe even before they went live. Nor does it mean that content moderation should be a full-fledged editorial task; rather, it should be a combination of automation and human judgment, tweaked over time, given that machines currently do not understand context as well as humans do.
