The Potential Danger in Therapy Apps Like Talkspace


On Monday, the Verge dropped a big, largely anonymously sourced investigative piece that took a critical look at Talkspace, the text-based therapy app that has reportedly raised $28 million in funding. Within hours, co-founder Oren Frank had taken to Medium, publishing a post under the title “Response to false accusations against Talkspace.” The kerfuffle points to where health care could be heading, and to the promises and dangers involved.

A point that leaps out from Cat Ferguson’s reporting is the tension between the app, the therapists who work for it, and the potential for clients to be a danger to themselves or others. In one anonymously sourced anecdote, a client tells her therapist that her baby is being driven around by a drunk family member. While the therapist’s state requires her to report incidents of child abuse, Talkspace is anonymous; users have to volunteer emergency-contact info for therapists to be able to act on it. Similarly, when clients would muse about committing suicide — “suicidal ideation” in the literature — the only instruction therapists reportedly received was to tell the client to call a suicide hotline, phone 911, or head to the hospital.

When I followed up with Talkspace over email, spokesperson Chi Zhao replied that it’s up to a therapist’s “professional discretion” to collect emergency-contact information. When I asked if Talkspace is changing its patient-anonymity policy so that therapists can report unsafe conditions — as in cases of child or elder abuse — the reply was direct: “We are not changing our policy and entrust the judgment of professional, licensed therapists to provide quality care through our platform.” But in the face of mortal danger, can a text message provide the care needed to help people through a crisis?

It’s not that text-based therapy, in and of itself, is a bad thing: As Steph Yin observed for Motherboard last year, a lot of the promise of the field is that it could make therapy “democratized,” since waiting times, travel, scheduling, and other barriers of entry fall away. Talkspace plans start at $128 a month, with $172 and $276 plans that add in, respectively, a monthly or weekly video chat. All of those price points represent a fraction of what a real-life psychotherapist would cost, especially in a city like New York. Correspondingly, Talkspace regards itself as therapy for all. That sounds promising at a theoretical level, and as the space develops, it stands to help thousands, if not millions, of people get help they wouldn’t otherwise receive.

Which brings us back to the p-word: platform. Talkspace does not describe itself as a clinic but as a platform — in computing, as John Herrman recently noted in The New York Times Magazine, that’s “a system that enables other systems,” the way Uber provides the structure and software for outside contractors (drivers) to make money picking up outside customers (passengers). The tech world is gaga over platforms, since they create marketplaces unto themselves: the iPhone and apps, Airbnb and lodging, Uber and cabs, Facebook and news, and, now, Talkspace and therapy. “Claiming to provide a platform, in Silicon Valley, doesn’t demand defense,” he observed. “It is the defense. Platforms don’t cause problems; people do.”

Platforms have become “abdications of responsibility,” Herrman contends, the way Mark Zuckerberg maintained that Facebook was just a platform, not a media company, thereby relieving it of the need to exercise editorial judgment over the fake news that coursed through it. The parallel with Talkspace appears to be that the responsibility in emergencies falls on the therapist, who must, at their discretion, collect the contact information themselves. “All of the risk is on the therapist, all the work is done by the therapist, but there’s a tremendous amount of fear and control — and they dangle this carrot, that you’re part of something big and important,” an anonymous current Talkspace therapist told the Verge. “It’s neurotic handcuffing.”

With mental health care, the stakes are high. Case in point: last September, Talkspace staff writer Joseph Rauch published a post on the company blog titled “How Online Therapy Helps Mental Health Professionals Prevent Suicide.” It links to a 2015 JAMA Psychiatry study that found web-based cognitive behavioral therapy led to a lower incidence of suicidal ideation among medical interns than in a control group. The post then turns to the work of Talkspace therapist Katherine Glick, who tells the story of a patient experiencing suicidal ideation. Through the daily support found on Talkspace, the patient “felt like she had the active presence she needed,” Glick is quoted as saying, and, in Rauch’s words, “Glick’s client was able to eliminate her suicidal thoughts before her mental health deteriorated to the point of attempting suicide.”

That’s a happy ending, but what if it didn’t work out? What if her client was texting about killing herself? Would it be up to Glick to, in the heat of the moment, gather her emergency-contact info to intervene, and if that didn’t work out, and the patient killed herself, Talkspace wouldn’t be at all responsible? That’s the unexplored legal and ethical landscape that platforms — whether Facebook or Talkspace — are moving us into. When something goes awry, who’s culpable?
