What we missed: In March this year, Meta announced plans to introduce inventory filters that categorize content by risk level. It also announced a collaboration with the software development company Zefr to create an AI-based third-party verification tool that reports the context in which ads appear in the Facebook feed.

More about Meta's inventory filters and AI verification tool: Meta claims that its inventory filters were designed in line with the Global Alliance for Responsible Media (GARM, an advertiser organization that challenges harmful content online) and use its suitability framework to segregate content into high-, medium-, and low-risk categories. Based on that, advertisers get three options to choose from (sketched in code at the end of this item):

- Expanded inventory: shows ads next to all content that meets community standards and is eligible for monetization.
- Moderate inventory: filters out content considered high risk under GARM's framework, such as non-violent crimes, use of medicine, alcohol, or tobacco, and minor crimes.
- Limited inventory: filters out content considered either high or medium risk. Examples of medium risk include minor injuries, bodily functions, limited use of strong words, and discussion of mildly suggestive topics or revealing clothing (romance, breastfeeding, educational content, etc.).

The verification tool, meanwhile, allows advertisers to measure, verify, and understand the suitability of the content near their ads, helping them make informed decisions in pursuit of their marketing goals.

Why it matters: The categorization done by Meta's inventory filters can negatively impact creators. The tool has a strange…
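To make the three inventory tiers concrete, here is a minimal sketch of how an inventory setting could map to GARM risk levels. The names and logic are hypothetical illustrations based on the description above; Meta has not published how the filter actually works.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Highest GARM risk level each inventory tier still allows ads next to.
# Mappings follow this article's description, not Meta's actual implementation.
ALLOWED_CEILING = {
    "expanded": Risk.HIGH,    # all monetizable content meeting community standards
    "moderate": Risk.MEDIUM,  # filters out high-risk content
    "limited": Risk.LOW,      # filters out high- and medium-risk content
}

def ad_can_appear(inventory: str, content_risk: Risk) -> bool:
    """True if an ad under this inventory setting may run next to content of this risk."""
    return content_risk.value <= ALLOWED_CEILING[inventory].value

# Example: an advertiser on "limited" never appears next to medium-risk content.
assert ad_can_appear("expanded", Risk.HIGH)
assert not ad_can_appear("moderate", Risk.HIGH)
assert not ad_can_appear("limited", Risk.MEDIUM)
```

The key design point the tiers imply is a simple ceiling: each setting admits everything at or below one risk level, which is why "limited" excludes both medium- and high-risk content rather than selecting categories individually.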
