Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs their decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Sensitive Mental Health Information Is Also A Content Moderation Challenge (2020)

from the tricky-questions dept

Summary: Talkspace is a well-known app that connects licensed therapists with clients, usually by text. Like many other services online, it acts as a form of "marketplace" for therapists and those in the market for therapy. While there are ways to connect with those therapists by voice or video, the most common form of interaction is text messaging via the Talkspace app.

A recent NY Times profile detailed many concerns about the platform, including claims that it generated fake reviews, that it falsely claimed events like the 2016 election led to an increase in usage, and that growing usage conflicted with providing the best mental health care for customers. It also detailed how Talkspace and similar apps face significant content moderation challenges — some unique to the type of content that the company manages.

Considering that so much of Talkspace's usage involves text-based communication, there are questions about how Talkspace handles and protects that information.

The article also reveals that the company would sometimes review therapy sessions and act on the information learned. While the company claims it only does this to make sure that therapists are doing a good job, the article suggests the information is often used for marketing purposes as well.

Karissa Brennan, a New York-based therapist, provided services via Talkspace from 2015 to 2017, including to Mr. Lori. She said that after she provided a client with links to therapy resources outside of Talkspace, a company representative contacted her, saying she should seek to keep her clients inside the app.

"I was like, 'How do you know I did that?'" Ms. Brennan said. "They said it was private, but it wasn't."

The company says this would only happen if an algorithmic review flagged the interaction for some reason (for example, if the therapist recommended medical marijuana to a client). Ms. Brennan says that, to the best of her recollection, she had sent a link to an anxiety worksheet.
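
The "algorithmic review" described here is not documented publicly, but flagging of this kind is commonly done with simple term matching. Below is a minimal, purely hypothetical sketch in Python of how such a keyword-based flag could work; the flagged terms and function name are assumptions for illustration, not anything Talkspace has confirmed.

```python
# Hypothetical sketch only: Talkspace has not published how its review system
# works. This shows the general idea of keyword-based flagging of messages.

FLAGGED_TERMS = {"medical marijuana", "outside referral"}  # assumed terms

def flag_message(text: str) -> list[str]:
    """Return any flagged terms found in a message, case-insensitively."""
    lowered = text.lower()
    return [term for term in FLAGGED_TERMS if term in lowered]

# A message like Ms. Brennan's would only be flagged if it happened to
# contain one of the listed terms.
print(flag_message("Here's a link to an anxiety worksheet."))    # []
print(flag_message("Have you considered medical marijuana?"))    # ['medical marijuana']
```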

There was also a claim that researchers at the company would share information gleaned from transcripts with others at the company:

The anonymous data Talkspace collects is not used just for medical advancements; it's used to better sell Talkspace's product. Two former employees said the company's data scientists shared common phrases from clients' transcripts with the marketing team so that it could better target potential customers.

The company disputes this. "We are a data-focused company, and data science and clinical leadership will from time to time share insights with their colleagues," Mr. Reilly said. "This can include evaluating critical information that can help us improve best practices."

He added: "It never has and never will be used for marketing purposes."

Decisions to be made by Talkspace:

  • How should private conversations between clients and therapists be handled? Should those conversations be viewable by employees of Talkspace?
  • Will reviews (automated or human) of these conversations raise significant privacy concerns? Or are they needed to provide quality therapeutic results to clients?
  • What kinds of employee access rules and controls need to be put on therapy conversations?
  • How should any research by the company be handled?
  • What kinds of content need to be reviewed on the platform, and should it be reviewed by humans, technology, or both?
  • Should the company even have access to this data at all?

Questions and policy implications to consider:

  • What are the tradeoffs between providing easier access to therapy and the privacy questions raised by storing this information?
  • How effective is this form of treatment for clients?
  • What kinds of demands does this put on therapists — and does being monitored change (for better or for worse) the kind of support they provide?
  • Are current regulatory frameworks concerning mental health information appropriate for app-based therapy sessions?

Resolution: Talkspace insists that it is working hard to provide a better service to clients who are looking to communicate with therapists, and challenges many of the claims made in the article. Talkspace's founders also wrote a response to the article that, while claiming to "welcome" scrutiny, questioned the competence of the reporter who wrote the NY Times story. They also argued that most of the negative claims in the Times piece came from disgruntled former workers — and that some of the information is outdated and no longer accurate.

The company also argued that it is HIPAA/HITECH and SOC 2 compliant and has never had a malpractice claim in its network. The company insists that access to the content of transcripts is greatly limited:

To be clear: only the company's Chief Medical Officer and Chief Technology Officer hold the "keys" to access original transcripts, and they both need to agree to do so. This has happened just a handful of times in the company's history, typically only when a client points to particular language when reporting a therapist issue that cannot be resolved without seeing the original text. In these rare cases, Talkspace gathers affirmative consent from the client to view that original text: both facts which were made clear to the Times in spoken and written interviews. Only Safe-Harbor de-identified transcripts (a "safe harbor" version of a transcript removes any specific identifiers of the individual and of the individual's relatives, household members, employers, geographical identifiers, etc.) are ever used for research or quality control.
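
The response references HIPAA's "safe harbor" de-identification method, which strips direct identifiers (names, contact details, geographic data, and so on) from records before they are used for research or quality control. As a rough illustration only — not a description of Talkspace's actual pipeline — a minimal redaction pass might look like the sketch below; the regular expressions and labels are assumptions.

```python
# Toy illustration of safe-harbor-style redaction; not Talkspace's pipeline.
# Real de-identification covers many more identifier types (names, dates,
# record numbers, etc.) and is far more rigorous than these sample patterns.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ZIP": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
}

def deidentify(text: str) -> str:
    """Replace matching identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(deidentify("Reach me at 212-555-0187 or client@example.com, zip 10001."))
# -> "Reach me at [PHONE] or [EMAIL], zip [ZIP]."
```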

Companies: talkspace


Comments on “Content Moderation Case Study: Sensitive Mental Health Information Is Also A Content Moderation Challenge (2020)”

6 Comments
Anonymous Coward says:

The article also reveals that the company would sometimes review therapy sessions…

No, nuuuuu, no, hard nope. I don’t care how or why they review sessions, they shouldn’t be doing that.

The company disputes this. “We are a data-focused company,…

Well, that’s a problem and conflict of interest right there. You should be a communications platform, facilitating the connecting of clients with therapists. This is a whole new downhill trend for corporate-run medicine.

They also argued that most of the negative claims in the Times piece came from disgruntled former workers…

I always love this form of deflection. There are, more frequently than not, valid reasons people are disgruntled. It isn’t like some negative personality trait with only an internal source.

has never had a malpractice claim in its network.

That is due to the quality of your therapists, not your bullshit company. This would also indicate that those same therapist-employees (current or former) are less likely to be full of shit than you are. Thanks for pointing it out.

Finally: Talkspace insists that it is working hard to provide a better service to clients who are looking to communicate with therapists, and challenges many of the claims made in the article.

Ha, we don’t even have to look at those claims, only your own, to see highly questionable practices and motives.

MathFox says:

Re:

They also argued that most of the negative claims in the Times piece came from disgruntled former workers…

I always love this form of deflection. There are, more frequently than not, valid reasons people are disgruntled. It isn’t like some negative personality trait with only an internal source.

I have left companies for the unethical practices they had. And I consider (structurally) breaking the confidentiality that your customers expect significantly worse than what I’ve encountered before. In all likelihood I would have not just become disgruntled, but have picked up a whistle to blow too.

That Anonymous Coward (profile) says:

We can’t even help people without making sure we get paid a bit extra.
We want people to stay on our platform, even if there are better resources available out there.
We want the data so we can show potential ‘partners’ we are worthy of those nice drug rep visits where our staff gets lunch.

Anyone who has anything bad to say is always disgruntled, we are always perfect & they are jealous.

The platform and concept might be the best thing since sliced bread, but the go go go attitude to increasing income sources will always ruin things. Profit overcomes the desire to help & profit drives everything. Those who are supposed to be helped are just a means to a paycheck.
