On July 8, Adam Mosseri, head of Instagram, announced two new anti-bullying measures, one of which has already been rolled out. The first is powered by AI and notifies people "when their comment may be considered offensive before it's posted". The second, which will be tested soon, is called Restrict. It will make a bully's comments visible only to the bully themselves.
AI-powered reflective tool
The AI-driven intervention will notify people when their comment may be considered offensive, before it is posted. Instagram claimed that in early tests, this moment of reflection encouraged people to undo their comment and share something less hurtful.
It is unclear how this tool will work for messages in languages other than English, especially for transliterated messages. For instance, the example that Instagram uses (given above), when transliterated into Hindi, would read, "Tum kitne badsurat aur moorkh ho" ("You are so ugly and stupid"). Would the AI detect that as well? An Instagram spokesperson told MediaNama that they "have started rolling out this feature to English-speakers [sic]" and that they plan to "roll it out to everyone globally later this year". This still does not answer our question about transliterated messages.
Restrict
This feature, which will be tested soon, will enable users to protect their account from unwanted interactions. Once you "restrict" someone, their comments on your posts will be visible only to them. You can choose to approve their comments and make them visible to the rest of your followers. Restricted people won't be able to see when you are active on Instagram, or whether you have read their DMs. Instagram said it is introducing this feature because younger people are reluctant to block, unfollow, or report their bully for fear of escalating the situation.
*** Update (12:24 pm): This article was updated with the response from Instagram.