Facebook announced this week the introduction of inventory filters that make it easier for advertisers to control how their brands are presented across its media. The filters apply to ads placed in Instant Articles, Audience Network, and Facebook in-stream video. Facebook offers three filter levels: limited, standard, and full. Limited inventory provides maximum protection, screening out even content with bad language, and lets advertisers exclude categories they do not want their brand to appear against. Standard inventory offers moderate protection and is the default choice; full inventory offers the least protection.

Alongside these filters, Facebook supports block lists, which stop ads from running on specific publishers an advertiser wants to avoid. It also runs a brand certification programme for third-party partners that help advertisers manage their preferences.

Ad companies reprimand platforms over harmful content

Facebook’s brand safety filters arrived as Procter & Gamble chief brand officer Marc Pritchard blasted internet platforms – without naming any company – for not fixing ad-related issues fast enough. Pritchard blamed the tech companies for the proliferation of violent and harmful content placed next to ads. P&G, he said, would move its money to services that can guarantee effectiveness and are completely free of offensive content. Last month, the World Federation of Advertisers called on its members across the globe to pressure platforms into doing more to prevent the abuse of their services.

The statement was made in the context of the live-streaming of the New Zealand mosque shooting and the subsequent spread of the footage online. The video was uploaded and re-uploaded repeatedly after the original stream was taken down, and was even used to promote other gaming-related videos.

Increasing concern over brand safety online

Brand safety concerns for advertisers were highlighted in February when major companies such as Disney, AT&T, Epic Games, and Nestlé pulled ads from YouTube after it emerged that their ads were being placed against content facilitating a “soft-core pedophilia ring”. Two years earlier, multiple major advertisers had pulled spending from YouTube after their ads surfaced next to extremist and violent content.