Facebook is expanding its suicide prevention tools by integrating AI algorithms into Live videos, similar to the tools it introduced in 2015 to spot warning signs in user posts. The company announced in a blog post today that it is deploying AI algorithms to identify warning signs in user behaviour and help prevent suicide.
With the new AI tools, Facebook aims to connect users to crisis support organisations even while they are broadcasting on Facebook Live. It is also streamlining the tools that let viewers flag a user and report suicidal intentions. Facebook says it is "in a unique position — through friendships on the site — to help connect a person in distress with people who can support them."
For the past two years, Facebook has been expanding its artificial intelligence team under Joaquin Candela, and this marks one of the first real-world uses of that work on the social network. Facebook CEO Mark Zuckerberg recently said the company also plans to use AI to identify posts related to terrorism.
As part of this effort, Facebook is partnering with several mental health organisations in the US to support vulnerable users via Messenger. Facebook has been trying to contact and support users thought to be at risk of suicide for years now. While it earlier relied on other users to flag such behaviour, the company is now using pattern-recognition algorithms as well.
"Our Community Operations team will review these posts and, if appropriate, provide resources to the person who posted the content, even if someone on Facebook has not reported it yet," Facebook wrote in a blog post.
Facebook's algorithm change follows the death of a 14-year-old girl who livestreamed her suicide in January. Facebook is testing the new tools in the United States first, but it has not shared details of future rollouts.