Facebook has expanded its artificial intelligence (AI) based suicide prevention tool beyond the US to the rest of the world, with the exception of the European Union, it said in a blog post. The social media giant said it is using pattern recognition to detect posts or live videos (Facebook Live) where someone might be expressing thoughts of suicide, and to help respond to reports faster.
The company rolled out this tool in March this year, but at that time a report from a Facebook user or friend was required to flag the person at risk.
Facebook has now upgraded the tool so that it can connect people directly with first responders such as local police, minimizing the need to involve a Facebook user or friend. Once a post is detected, Facebook provides the person with a number of support options, such as reaching out to a friend, and even offers suggested text templates. "We also suggest contacting a help line and offer other tips and resources for people to help themselves in that moment," the blog said.
Facebook has a team that reviews incoming reports and prioritizes the most serious ones based on the signals detected by the AI tool. To detect users who may be at risk of self-harm or suicide, Facebook's technology uses signals such as the text of the post and its comments (for example, comments like "Are you ok?" and "Can I help?" can be strong indicators). "In some instances, we have found that the technology has identified videos that may have gone unreported," the company adds in its blog post.
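To illustrate the general idea, here is a minimal sketch of signal-based scoring of the kind described: a post accumulates a score from phrases in its own text and from concerned comments, and high-scoring posts are flagged for human review. The phrase lists, weights, and threshold are purely illustrative assumptions; Facebook's actual system is proprietary and far more sophisticated.

```python
# Illustrative phrase lists -- these are assumptions, not Facebook's real signals.
CONCERN_PHRASES = ["want to die", "end it all", "can't go on"]
COMMENT_SIGNALS = ["are you ok", "can i help", "please don't"]

def risk_score(post_text: str, comments: list[str]) -> float:
    """Return a crude score: higher means more concerning signals detected."""
    score = 0.0
    text = post_text.lower()
    for phrase in CONCERN_PHRASES:
        if phrase in text:
            score += 1.0
    for comment in comments:
        lowered = comment.lower()
        for signal in COMMENT_SIGNALS:
            if signal in lowered:
                # Concerned replies from friends act as strong indicators.
                score += 0.5
    return score

def should_prioritize(post_text: str, comments: list[str],
                      threshold: float = 1.0) -> bool:
    """Flag a post for human review when enough signals accumulate."""
    return risk_score(post_text, comments) >= threshold
```

In this toy version, a post saying "I just can't go on anymore" with a comment "Are you ok?" would cross the threshold and be queued for review, while an ordinary post would not.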
The social network's team includes 80 local partners globally, covering whatever language people use Facebook in. The company said that over the last month it worked with first responders on over 100 wellness checks prompted by its AI-based technology.
Facebook’s suicide prevention
The social media company said in its blog post that it has been working on suicide prevention for the last few years, but it paid even more heed to the issue after incidents of suicides being live streamed on Facebook Live came to light.
In its earnings call earlier this month, CEO Mark Zuckerberg announced that the company had invested in AI tools and increased the staffing of the team working on suicide prevention. He added that the company has brought the time taken to review suicide-related Live videos down to under 10 minutes. "That might still be a conservative estimate, and we're continuing to work on that," he added.
Social Networks on Self-harm
Social media platforms have been paying increasing attention to suicide prevention lately. Twitter, too, is battling issues like online harassment and self-harm. This month, Twitter published a new set of rules and updated its policies to address issues such as online abuse, spam and self-harm on its network. It said that when it receives reports that a person is threatening suicide or self-harm, it will take a number of steps to assist that person, such as reaching out to them and providing resources such as contact information for mental health experts.