Facebook’s New Suicide-Prevention Features Are a Good Start

A man lying in bed late at night in a dark room, checking his smartphone.
Photo: Artur Debat/Getty Images

For many young people, the online world is a dangerous place. As of 2007, about one third (32 percent) of teenagers said they’d been cyberbullied — the dissemination of embarrassing photos, threatening messages, and cruel gossip is now a sadly commonplace part of growing up online. This behavior can have profound effects on its victims, leading to mental illness, self-harm, or even, in the worst (and most famous) cases, suicide.

But Facebook thinks that the same internet hive mind so famous for its cruelty could be harnessed for good instead. On February 25, the social-media giant announced that it would roll out tools to help users who might be struggling with suicidal thoughts. Facebook’s development teams worked with organizations such as Forefront, Now Matters Now, the National Suicide Prevention Lifeline, and Save.org to design its new tools.

These partnerships had been in the works for a while, said Dr. Madelyn Gould, a Columbia University psychiatrist who has worked with the NSPL for more than a decade. “I’ve been aware of their efforts, and the efforts of other suicide prevention organizations, to collaborate with Facebook,” she said, adding that the social-networking site could be “an innovative and potentially powerful way to reach suicidal individuals.”

According to a Facebook video, the new features, which have been rolled out steadily since February, offer users a few different options when they come across a friend’s post they find concerning. Users will be able to directly message the person whose post they’re flagging, reach out to another friend to rally a support group around that person, chat with a trained helper, or call a suicide hotline (hotline operators are trained to deal with people who call on behalf of friends or loved ones). They can also ask Facebook to look at the post and determine whether the user appears to be in crisis.

The next time that user logs on, they’ll see pop-up windows offering links to advice and videos specifically designed by professional mental-health organizations for individuals struggling with these issues. (Evidently the tools have yet to reach every user — when I clicked the drop-down box to report a status yesterday, only the usual options were available.)

Many users responded to Facebook’s announcement of the initiative positively, explaining that they turned to virtual social networks when they felt a lack of concern from friends and family offline. In one comment left under Facebook’s post, a user who had suffered through suicidal thoughts explained, “The only reason I posted thoughts like that on Facebook in the first place was because my friends weren’t responding or reaching out, and I felt desperate to get someone to notice I wasn’t doing well.”

Some experts, too, have responded positively to Facebook’s efforts. The benefit of this approach, said Dr. Dana Alonzo, the director of Columbia’s suicide prevention research program (who hasn’t worked with Facebook on suicide prevention), is that it eliminates a step for Facebook users who are potentially in crisis. “They might not take the step of researching on their own, and now that information is being sent to them, so they might be more likely to use those resources than they would if left to their own devices,” she said.

Placing the tools to help at friends’ fingertips might also make reaching out much less intimidating. “One of the main myths about suicide is if you say something, you’re going to give the individual that idea,” Alonzo said. “That’s not true. In fact, you might be the only person who gives them the opportunity to express their thoughts.”

So Facebook’s ideas are good, Alonzo concluded, but they shouldn’t be seen as an anti-suicide cure-all, if only because the tools can reach only users who have explicitly posted suicidal thoughts, which doesn’t account for the entire at-risk population. She also pointed out that this tool — like just about any tool used by ever-fallible human beings — brings with it some potential for abuse. “Ill-intentioned individuals could report someone for having a troubling post even when they don’t as a means of embarrassment or harassment,” she said.

It’s also worth asking whether Facebook might be sending humans to do a machine’s job. The company famously spends considerable resources analyzing posts to figure out which ones are most likely to elicit engagement and alters users’ news feeds accordingly. It would seem the site could apply the same algorithmic heft to automatically identify posts that might indicate a user in crisis.

Johannes Eichstaedt, a data scientist who specializes in developing software that can extract the emotional content of social-media posts, said that an algorithm could probably do about as well as a human in measuring suicidal intent based on status posts. “Ideally, and this might be morbid, you would train a learning algorithm based on a history of status updates that have preceded actual suicide attempts, and use normal ones as a control,” he said in an email.
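To make that idea concrete, here is a minimal, purely illustrative sketch of the kind of text classifier Eichstaedt describes, written in Python with scikit-learn rather than anything Facebook or Eichstaedt actually uses. The example posts and labels are invented placeholders; a real system would require a large, carefully governed dataset, clinical oversight, and far more than a bag-of-words model.

```python
# Hypothetical sketch: a classifier trained to separate posts that preceded a
# crisis (label 1) from ordinary posts (label 0). All data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I can't see a way out of this anymore",          # preceded a crisis (hypothetical)
    "Nobody would even notice if I was gone",          # preceded a crisis (hypothetical)
    "Great hike this weekend, photos coming soon",     # ordinary post
    "Anyone have recommendations for a good pizza place?",  # ordinary post
]
labels = [1, 1, 0, 0]

# TF-IDF turns each post into a weighted bag of words and word pairs;
# logistic regression then learns which terms shift the odds toward "crisis."
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# predict_proba returns an estimated probability for each class, so a reviewer
# can choose a flagging threshold rather than accept a hard yes/no label.
new_post = ["I just feel like giving up"]
print(model.predict_proba(new_post))
```

The probability output matters as much as the label: wherever the flagging threshold is set determines how false positives trade off against false negatives, which is exactly the tension Eichstaedt raises below.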

But Eichstaedt pointed out that even a good algorithm would lack some advantages that users have. Facebook friends might know if someone has a history of mental illness or suicide attempts, if they’re just going through an isolated rough patch, and how they’re likely to respond to support. (On the other hand, of course, an algorithm could pick up cries for help that humans failed to notice, whether because a given user’s posts didn’t show up in their friends’ news feeds, or because they didn’t have enough friends to notice in the first place.)

A Facebook representative told Science of Us that there are no plans for such an automated system, and Eichstaedt explained that there’s good reason for the site to be cautious on this front. “Think about the two different kinds of errors in that classification task: false positives and false negatives,” he said. “False positives could easily garner negative responses. Worse, think about whether the algorithm failed to flag suicidal intent and parents [of a suicide victim] sued Facebook.” So even if an algorithm could up the chances that a post that hints at psychological distress would be flagged, shifting the responsibility for such detection from humans to an algorithm could bring considerable costs — and given how easily Facebook garners negative press, the company is probably hyperaware of this fact.

The current system certainly has some limitations, and it will likely be refined over time. But although it’s not perfect, it does appear to be a good-faith attempt to redirect the energy of internet mobs that often seem more interested in tearing people down than in lending them a hand.
