On July 8, Adam Mosseri, head of Instagram, announced two new anti-bullying measures, one of which has already begun rolling out. The first, powered by AI, notifies people “when their comment may be considered offensive before it’s posted”. The second, called Restrict, will be tested soon; it makes a bully’s comments visible only to the bully and no one else.

AI-powered reflective tool

The AI intervention notifies people when their comment may be considered offensive, before it is posted. Instagram claimed that in early tests, this moment of reflection encouraged people to undo their comment and share something less hurtful instead.

[Image: Instagram's AI-powered feature that asks users to reflect on potentially rude/abusive comments. Source: Instagram]

It is unclear how this tool will work for messages in languages other than English, especially transliterated messages. For instance, the example Instagram uses (above), when transliterated into Hindi, would read, “Tum kitne badsurat aur moorkh ho”. Would the AI detect that as well?

An Instagram spokesperson told MediaNama that they "have started rolling out this feature to English-speakers [sic]" and that they plan to "roll it out to everyone globally later this year". That still does not answer our question about transliterated messages.

Restrict feature

This feature, which will be tested soon, will let users protect their account from unwanted interactions. Once you “restrict” someone, comments on your posts from that person will only be…
