So why exactly is a fuss being made over Facebook manipulating the newsfeeds of some of its users? Facebook has been manipulating users’ newsfeeds through its proprietary algorithm for years.

Today, on the basis of your activity – probably your likes, comments, comment views, your relationship with the person posting the status update, and what’s trending in the world or in your city – the Facebook algorithm already decides whose updates, and which ones, it shows you. It’s called personalization, and it’s meant to increase your engagement on the platform. All Facebook did was conduct a user study by introducing an additional parameter that removed certain types of posts, in order to assess how that might impact your mood: does an increase in the proportion of happy posts make you happier, or sad posts make you sadder? That’s another form of manipulation, but even without this experiment, Facebook is reinforcing the types of behavior and content that you already exhibit. There is no neutrality in that newsfeed. You always have friends whose updates you don’t see on Facebook, because its algorithm assumes, on the basis of your past behavior on the site, that you might not be interested in them.
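To make that concrete, here is a minimal sketch of what personalized ranking of this sort could look like. Every signal name and weight below is hypothetical, invented purely for illustration, since Facebook’s actual model is proprietary and far more complex:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of personalized feed ranking. The signals and
# weights are invented for illustration; the real algorithm is
# proprietary and far more complex.

@dataclass
class Post:
    author: str
    topic: str
    like_count: int
    comment_count: int
    trending_score: float  # how much the topic is buzzing globally or in your city

@dataclass
class Viewer:
    # both dicts are learned from your past behavior on the site
    affinity: dict = field(default_factory=dict)   # author -> relationship strength
    interests: dict = field(default_factory=dict)  # topic -> past engagement level

def score_post(post: Post, viewer: Viewer) -> float:
    """Combine behavioral signals into one relevance score (illustrative weights)."""
    score = 2.0 * viewer.affinity.get(post.author, 0.1)
    score += 1.5 * post.like_count + 1.0 * post.comment_count
    score += 0.5 * post.trending_score
    return score * viewer.interests.get(post.topic, 0.5)

def build_newsfeed(posts: list, viewer: Viewer, limit: int = 20) -> list:
    """Keep only the top-scoring posts; the rest simply never reach you."""
    return sorted(posts, key=lambda p: score_post(p, viewer), reverse=True)[:limit]
```

The last line is the point: anything that falls below the cutoff never reaches you at all, which is why some friends’ updates quietly disappear from your feed.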

What this means is that you are not entirely in control of what you see on Facebook: sure, you can choose to ignore certain types of updates, or change your relationship (acquaintance vs friend) with certain people, but Facebook still weighs other signals that are not in your control.

In the same way, Google determines which links you’re likely to click on, and personalizes (and hence manipulates) the search results you see: it is effectively editing your access to information.

Both these practices impact your world-view, which is determined by what kind of information you have access to. All Facebook did here was actively choose how it manipulated the newsfeeds of 689,003 users, to check whether that improved their mood or not. That’s the difference between personalization and this experiment: here there was a specific objective, and the mode of manipulation was determined by humans, not algorithms.
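In code terms, the experiment amounted to adding one extra, human-specified filtering step before normal ranking. The sketch below is only an illustration of that design: the word lists and the `removal_rate` parameter are stand-ins I have invented, not the classifier or the numbers the actual study used:

```python
import random

# Hypothetical stand-ins for the study's word-list sentiment classifier.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def is_positive(text: str) -> bool:
    return bool(set(text.lower().split()) & POSITIVE_WORDS)

def is_negative(text: str) -> bool:
    return bool(set(text.lower().split()) & NEGATIVE_WORDS)

def experimental_filter(posts: list, condition: str, removal_rate: float = 0.5) -> list:
    """Withhold a fraction of emotionally positive (or negative) status updates,
    depending on which experimental group this user was assigned to."""
    kept = []
    for text in posts:
        targeted = is_positive(text) if condition == "reduce_positive" else is_negative(text)
        if targeted and random.random() < removal_rate:
            continue  # silently dropped from this user's feed
        kept.append(text)
    return kept
```

The crucial difference from everyday personalization is visible in the signature: `condition` and `removal_rate` are chosen by researchers testing a hypothesis, not inferred from your own behavior.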

I was on a CNN-IBN debate on this yesterday (watch it here), and Alok Kejriwal made a valid point: H&M plays peppy music in its stores in order to lift your mood, and increase your propensity to buy clothes. Can Facebook manipulate your newsfeed to put you in a similar mood, and get you to click on e-commerce ads? How about lots of football updates during the World Cup, to coincide with ads selling football jerseys?

What if, some day, Facebook and Google decide who they want as the Prime Minister of India, and begin manipulating search results and newsfeeds to show only certain kinds of updates? The kind of information we’re exposed to determines our world view, and shapes public opinion.

This is why you should read multiple websites, newspapers and magazines, with differing points of view, before taking your own call, and why personalization of content is limiting and dangerous. In fact, this is exactly why I prefer Twitter over Facebook – I choose whom to follow and unfollow, but I get all of their updates, and it’s my decision.

Kushan Mitra made an interesting point yesterday – to address this, all Facebook needs to do is include a tab for “All updates”. There is already such a link (if you’re logged in to Facebook, click here), but there is no way of making it the default view. You should be allowed to do that, if only to know what kind of people you have really “friended” on Facebook.

*

To understand more about the dangers of personalization, do watch Eli Pariser’s excellent TED video: