Facebook gets proactive to help suicidal users
Facebook has begun using artificial intelligence to identify members who may be at risk of killing themselves.
The social network has developed algorithms that spot warning signs in users’ posts and the comments their friends leave in response. After confirmation by Facebook’s human review team, the company contacts those thought to be at risk of self-harm to suggest ways they can seek help.
The tool is being tested only in the US at present and comes in the wake of a 14-year-old girl streaming her suicide on Facebook Live earlier this year, although Facebook was apparently working on this issue before that incident.
It marks the first time AI technology has been used to review messages on the network; founder Mark Zuckerberg announced last month that he also hoped to use algorithms to identify posts by terrorists, among other concerning content.
Facebook also announced new ways to tackle suicidal behaviour on its Facebook Live broadcast tool and has partnered with several US mental health organisations to let vulnerable users contact them via its Messenger platform.
Pattern recognition
Facebook has offered advice to users thought to be at risk of suicide for years, but until now it had relied on other users to bring the matter to its attention by clicking on a post’s report button.
It has now developed pattern-recognition algorithms that recognise when someone may be struggling, trained on examples of posts that have previously been flagged.
Talk of sadness and pain, for example, would be one signal. Responses from friends with phrases such as “Are you OK?” or “I’m worried about you,” would be another.
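Facebook has not published details of how its system works, but the general approach described above, training a text classifier on examples of previously flagged posts and the comments friends leave on them, can be sketched in a few lines of Python. Everything in the sketch below (the example phrases, the scikit-learn pipeline and the review threshold) is an illustrative assumption, not Facebook's actual implementation.

```python
# Hypothetical sketch only: Facebook's real system is not public.
# Trains a simple text classifier on example posts and comments that were
# previously flagged as concerning versus ones that were not.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set for illustration, not real data.
texts = [
    "I can't cope with this sadness and pain any more",   # previously flagged
    "Are you OK? I'm worried about you",                   # friend's reply, flagged
    "Great day out with the family at the beach",          # not flagged
    "Just finished a new book, highly recommend it",       # not flagged
]
labels = [1, 1, 0, 0]  # 1 = flagged as a warning sign, 0 = not flagged

# TF-IDF features plus logistic regression is a common baseline for this
# kind of pattern recognition over short pieces of text.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# A new post scoring above a threshold would be queued for rapid human
# review by the community operations team, not acted on automatically.
new_post = "I'm so tired of the pain, nothing helps"
risk_score = model.predict_proba([new_post])[0][1]
if risk_score > 0.5:
    print(f"Queue for human review (score={risk_score:.2f})")
```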
Once a post has been identified, it is sent for rapid review to the network’s community operations team.
Key issues
Getting the right response as quickly as possible is critical. There is an ongoing discussion between Facebook and suicide prevention organisations on how best to do this.
One suggestion is that Facebook automatically contacts the family and friends of a user it is concerned about. Facebook has acknowledged that contact from friends or family was typically more effective than a message from Facebook itself, but said this approach raised complex privacy issues.
Facebook has already introduced new measures to its live-streaming platform amid concerns that troubled young people may indulge in copycat behaviour. The organisation is now trying to help at-risk users while they are broadcasting, rather than waiting until their completed video has been reviewed some time later.
Now, when someone watching a stream clicks a menu option to say they are concerned, Facebook displays advice to the viewer on ways they can support the person broadcasting.
The stream is also flagged for immediate review by Facebook’s own team, who then overlay a message with their own suggestions if appropriate.

Facebook considered shutting down such streams immediately, but was advised by experts that this would remove the opportunity for people to reach out and offer support.
Although the new system is being rolled out worldwide, the new option to contact a choice of crisis counsellor helplines via Facebook’s Messenger tool is limited to the US for now.
Facebook said it needed to check whether other organisations would be able to cope with demand before it expanded the facility.
[Much of the information in this post was taken from an article by Leo Kelion for BBC News.]
All innovation posts are kindly sponsored by Socrates 360 which provides a complete solution for staff, prisoners, probationers, etc. combining engaging content, simple set-up and an easy tracking system. Socrates 360 has no influence over editorial content.