
Explained (Courtesy Meta): How AI Systems Recommend Content on Instagram and Facebook

The tech giant will also be releasing a “Content Library” of Facebook data for researchers.

Only a few weeks after explaining how it ranks content on Instagram, Meta has released a detailed explanation of how its AI systems recommend content across Instagram and Facebook. The tech giant will also be releasing a “Content Library” of Facebook data for researchers.

“These systems make it more likely that the posts you see are relevant and interesting to you,” wrote Meta’s President of Global Affairs Nick Clegg yesterday. “We’re also making it clearer how you can better control what you see on our apps, as well as testing new controls and making others more accessible.”

Why it matters: This is a welcome step towards transparency—especially given that social media platforms have been criticized for saying very little about how they rank content. As we’ve previously reported, moves like these “might help creators whose livelihood depends on how their content is ranked, as well as researchers who study the spread of misinformation and other harmful content”.

Clegg’s post is also a sign of the times—Meta’s shift towards algorithm-driven recommendations, as Casey Newton noted in Platformer:

“But while the transition away from Facebook’s old friends and family-dominated feeds to Meta’s algorithmic wonderland seems to be proceeding mostly without incident, the move has given the company a new policy and communications challenge. If you’re going to recommend lots of posts for people to look at, you have to know why you’re making those recommendations. Without a thorough understanding of how the company’s many interconnected systems are promoting content, you can wind up promoting all sorts of harms. And even if you don’t, an app’s users will have a lot of questions about what they’re seeing. What exactly do you know about them — or think you know about them? Why are they seeing this instead of that?”




Is there a safe harbor subtext here?: As fears of all-knowing and uncontrollable AI systems abound, could this be Meta’s attempt to clarify the powers platforms actually have over the user-generated content they host?

Remember: Google and Twitter only recently escaped being held liable in the United States for third-party content recommended by their algorithms, which included ISIS recruitment videos. Clegg had also previously written about the need to “challenge the myth that algorithms leave people powerless over the content they see,” and he framed the current blog post as a continuation of that premise.

Meta releases 22 ‘system cards’ to explain how algorithms shape content recommendations: “We use a wide variety of predictions in combination to get as close as possible to the right content, including some based on behavior and some based on user feedback received through surveys,” Clegg noted, alongside releasing 22 ‘system cards’ explaining different AI-powered content recommendations on Instagram and Facebook. For example:

  • Recommending reels on Instagram: The Reels AI System first collects all the potential reels a user may be interested in—like reels from accounts the user follows, or reels and accounts similar to ones the user recently engaged with. The AI system then analyses each reel’s ‘input signals’, which include data on the reel’s length, its similarity to other reels, and what proportion of the reel’s content matches content the user typically interacts with. The system then selects 10-100 of the most relevant reels for the user, with in-built ‘integrity processes’ helping it reduce the distribution of problematic content. Finally, the AI system uses models to predict which content the user might find valuable or relevant, calculates a relevance score for each reel, and orders the reels accordingly. For example, higher ‘value’ reels are shown higher up in the user’s feed (a rough sketch of this pipeline follows this list).
  • Finding ‘People You May Know’ on Facebook: Facebook’s AI systems first collect profiles users might be interested in—like mutual friends or people from groups the user is a part of. The system then calculates a score for each profile, determining the likelihood of the user engaging with it. The system displays them to the user based on these scores, while filters remove profiles that may violate Facebook’s Community Standards.
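
Meta’s actual ranking code is not public, but the flow both system cards describe (collect candidates, filter out problematic content, score what remains, and rank by score) can be illustrated with a minimal sketch. Everything below (function names, signal fields, weights, thresholds) is an assumption made for illustration, not Meta’s implementation:

```python
# Minimal sketch of the candidate -> filter -> score -> rank flow described in
# the system cards. All names, fields and weights are illustrative assumptions,
# not Meta's actual code.
from dataclasses import dataclass

@dataclass
class Reel:
    reel_id: str
    length_seconds: float          # example 'input signal'
    similarity_to_engaged: float   # 0..1, similarity to content the user engaged with
    flagged_by_integrity: bool     # set by integrity processes

def relevance_score(reel: Reel) -> float:
    """Combine input signals into one relevance score (weights are made up)."""
    return 0.7 * reel.similarity_to_engaged + 0.3 * min(reel.length_seconds / 60, 1.0)

def recommend_reels(candidates: list[Reel], k: int = 100) -> list[Reel]:
    # Integrity processes reduce the distribution of problematic content.
    eligible = [r for r in candidates if not r.flagged_by_integrity]
    # Rank by relevance score; higher-'value' reels appear higher in the feed.
    ranked = sorted(eligible, key=relevance_score, reverse=True)
    # The system card mentions selecting roughly 10-100 of the most relevant reels.
    return ranked[:k]
```

The ‘People You May Know’ flow has the same shape, with candidate profiles (mutual friends, members of shared groups) in place of reels and a Community Standards filter in place of the integrity check.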

In a detailed technical post on how Meta’s algorithms work, released at the same time, the company also explained how it recommends new content with little engagement to users:

“[New content] poses what are known as cold start problems, where there isn’t much data to learn from yet. To address this challenge, we developed a few-shot learning system called Meta Interest Learner to accurately match new content to prospective audiences based on their interests, even when there are very few engagements. We also leverage various online learning algorithms to help better distribute new content so that every new piece of high-quality content has a chance to be exposed to a large, relevant audience…”
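
Meta has not published details of the Meta Interest Learner, but the cold-start idea (giving new, low-engagement posts a deliberate exploration slot so they have a chance to find an audience) can be sketched with a generic, epsilon-greedy style approach. The sketch below is purely illustrative and is not Meta’s algorithm:

```python
import random

# Illustrative cold-start handling: posts with few engagements get an occasional
# exploration slot so they can be exposed to an audience and gather feedback.
# This is a generic epsilon-greedy sketch, not Meta's Meta Interest Learner.
def rank_with_cold_start(posts, epsilon=0.1, min_engagements=50):
    """posts: list of dicts with 'predicted_relevance' and 'engagement_count'."""
    established = [p for p in posts if p["engagement_count"] >= min_engagements]
    new_posts = [p for p in posts if p["engagement_count"] < min_engagements]

    ranked = sorted(established, key=lambda p: p["predicted_relevance"], reverse=True)

    result = []
    for post in ranked:
        # With probability epsilon per slot, surface a new post ahead of an established one.
        if new_posts and random.random() < epsilon:
            result.append(new_posts.pop(0))
        result.append(post)
    result.extend(new_posts)  # any remaining new posts go at the end
    return result
```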

Meta shares the ‘signals’ that help algorithms determine relevant content: A majority of these inputs are used by Facebook Feed to rank content, Clegg noted. They include detailed insights into how users access Facebook and what sorts of content they interact with. The ranked content’s attributes are also analysed, including who posted it, who created it, and how others have engaged with the post. Finally, the algorithm also analyses how users have interacted with ranked posts (or similar ones), the creator of the post, and the user that shared the post.
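
As a purely illustrative example, the signals for a single user and post might be grouped along the lines the blog post describes; the field names and values below are assumptions, not Meta’s schema:

```python
# Hypothetical bundle of ranking signals for one (user, post) pair, grouped the
# way the post describes them: how the user accesses Facebook, attributes of the
# post itself, and the user's history with the post and its creator/sharer.
# Field names are illustrative assumptions, not Meta's schema.
ranking_signals = {
    "user_context": {
        "device_type": "mobile",
        "time_of_day": "evening",
    },
    "post_attributes": {
        "posted_by": "page_123",
        "created_by": "creator_456",
        "engagement_count": 1842,
    },
    "interaction_history": {
        "liked_similar_posts": 12,
        "prior_comments_on_creator": 3,
        "prior_shares_from_sharer": 1,
    },
}
```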

Meta has refrained from disclosing signals that would make it easier for people to sidestep its guidelines and “defences” against harmful content, Clegg added.

Prediction models then gauge the relevance of the content itself, determining the actions the user might take on a post, how much time they’ll spend viewing it, their interest in the post, or how others will interact with the post if the user engages with it (a rough sketch of how such predictions might be combined into a single score follows the list below). They include:

  • Frequently used prediction models: how likely is it that a user will be interested in content from their friends; how likely is it that a user will be interested in joining a group that shared a post; how likely is it that a user will be interested in the page that shared a post, among many others.
  • Moderately used prediction models: how likely is it that a user will comment on the post; how likely is it that the user will “find a news article informative, if the post contains a link to a news article”, among many others.
  • Less frequently used prediction models: how likely is it that a user will donate to a fundraiser post; how likely is it that they’ll send the post to someone in a message, among many others.
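
Clegg’s post does not say how individual predictions are weighted against each other, but the general idea of combining several prediction-model outputs into one relevance score can be sketched as follows. Both the prediction names and the weights are illustrative assumptions:

```python
# Hypothetical combination of prediction-model outputs into a single relevance
# score for one post. Prediction names and weights are illustrative assumptions;
# Meta does not disclose its actual weighting.
def combined_relevance(predictions: dict[str, float]) -> float:
    weights = {
        "p_interested_in_friend_content": 1.0,  # frequently used predictions
        "p_join_group_that_shared": 0.8,
        "p_comment_on_post": 0.5,               # moderately used predictions
        "p_find_article_informative": 0.4,
        "p_donate_to_fundraiser": 0.1,          # less frequently used predictions
        "p_send_in_message": 0.1,
    }
    return sum(w * predictions.get(name, 0.0) for name, w in weights.items())

# Example: score one candidate post given its model outputs.
example = {"p_interested_in_friend_content": 0.9, "p_comment_on_post": 0.2}
print(combined_relevance(example))
```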

Meta also rolling out updates that may enhance user control over recommended content

Meta set to expand the ‘Why Am I Seeing This’ feature for Reels: To be rolled out on Instagram and Facebook in a few weeks, this feature will allow users to click on a reel and understand how their previous activities informed the algorithms delivering them content. The feature was first launched for some Feed content and all ads across Instagram and Facebook.

Instagram testing interest-driven Reels recommendations: This will allow users to indicate that they are interested in the Reels being recommended to them. Meta introduced a “Not Interested” option back in 2021.

Meta Content Library and API to be rolled out soon for researchers: The Library will include data from Facebook’s public posts, pages, groups, and events, as well as data from Instagram’s public posts and creator and business accounts.

“These tools will provide the most comprehensive access to publicly-available content across Facebook and Instagram of any research tool we have built to date and also help us meet new data-sharing and transparency compliance obligations,” Clegg claimed. Notably, limited access to Meta’s data has been called out by researchers investigating speech, misinformation, and content moderation on the platform.

Researchers can search and filter the data using a graphical user interface or a programmatic API. While the University of Michigan’s Inter-university Consortium for Political and Social Research is the first to access it, other qualified researchers pursuing scientific or public interest research can apply with partners who are experts in secure data sharing for research.


This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
