National Eating Disorders Association takes its AI chatbot offline after complaints of ‘harmful’ advice

Image credit: boonchai wedmakawand/Moment RF/Getty Images

By Catherine Thorbecke, CNN

New York (CNN) — An eating disorder prevention organization said it had to take its AI-powered chatbot offline after some complained the tool began offering “harmful” and “unrelated” advice to those coming to it for support.

The National Eating Disorders Association (NEDA), a nonprofit that supports people affected by eating disorders, said on Tuesday that it took down its chatbot, dubbed “Tessa,” after some users reported negative experiences with it.

“It came to our attention last night that the current version of the Tessa Chatbot, running the Body Positive program, may have given information that was harmful and unrelated to the program,” the organization said in a statement posted to Instagram on Tuesday. “We are investigating this immediately and have taken down that program until further notice for a complete investigation.”

Liz Thompson, NEDA’s CEO, said in an email to CNN that the Tessa chatbot had a “quiet” launch in February 2022, ahead of a more recent crop of AI tools that have emerged since the release of ChatGPT late last year. (NEDA emphasized in an email that its Tessa tool is “not ChatGBT,” in an apparent reference to the viral chatbot.)

Nonetheless, the complaints highlight the current limitations and risks of using AI-powered chatbots, particularly for sensitive interactions such as mental health matters, at a time when many companies are rushing to implement the tools.

Thompson blamed Tessa’s apparent failure on inauthentic behavior from “bad actors” trying to trick it and emphasized that the bad advice was sent to only a small fraction of users. NEDA did not provide specific examples of the advice Tessa offered, but social media posts indicate the chatbot urged one user to count calories and try to lose weight after the user told the tool that they had an eating disorder.

NEDA’s move to take the chatbot offline also comes in the wake of the organization reportedly firing the human staffers who ran its separate eating disorders helpline after they voted to unionize.

NPR reported last month that it obtained audio from a call in which a NEDA executive told staffers that they were being fired. On the call, the executive said the organization was “beginning to wind down the helpline as currently operating,” adding that it was undergoing a “transition to Tessa.” (CNN has not independently confirmed the audio.)

In a statement last week, the union formed by the NEDA helpline workers warned: “A chat bot is no substitute for human empathy, and we believe this decision will cause irreparable harm to the eating disorders community.”

NEDA previously told CNN that the organization was “not at liberty to discuss employment matters.” Thompson said the two tools – the AI-powered chatbot and the human-operated helpline – are part of two different initiatives at the organization, and pushed back against suggestions that the chatbot was intended to replace the fired staffers.

Even before ChatGPT renewed interest in chatbots, similar tools generated some controversy. Meta, for example, released a chatbot in August 2022 that openly blasted Facebook and made anti-Semitic remarks.

More recent AI chatbots like ChatGPT, which are trained on vast troves of online data to generate compelling written responses to user prompts, have raised concerns for their potential to spew misinformation and perpetuate bias.

Earlier this year, news outlet CNET was called out after it used an AI tool to generate stories that were riddled with errors, including one that offered some wildly inaccurate personal finance advice. Microsoft and Google have also been called out for AI tools dispensing some insensitive or inaccurate information.

The-CNN-Wire™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
