Controversy erupts over non-consensual AI mental health experiment

An AI-generated image of a person talking to a secret robotic therapist.

Ars Technica

On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment that provided AI-written mental health counseling to 4,000 people without informing them first, The Verge reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from the people seeking help.

Koko is a non-profit mental health platform that connects teens and adults who need mental health help with volunteers through messaging apps like Telegram and Discord.

On Discord, users sign into a Koko Cares server and send direct messages to a Koko bot that asks several multiple-choice questions (e.g., “What’s the darkest thought you have about this?”). It then shares the person’s concerns, written as a few sentences of text, anonymously with someone else on the server who can reply anonymously with a short message of their own.

During the AI experiment, which applied to about 30,000 messages, according to Morris, volunteers providing support to others had the option to use a response automatically generated by OpenAI’s GPT-3 large language model instead of writing one themselves. (GPT-3 is the technology behind the recently popular ChatGPT chatbot.)
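Koko has not published the prompts or settings it used, but for readers curious about the mechanics, a minimal sketch of requesting a GPT-3 completion through OpenAI's Python library (the pre-1.0 openai package available at the time) might look like this; the prompt wording, model choice, and parameters are illustrative assumptions:

```python
# Hypothetical sketch: drafting a peer-support reply with GPT-3 through
# the pre-1.0 "openai" Python package. The prompt text, model choice,
# and parameters are illustrative assumptions; Koko has not published
# its actual setup.
import openai

openai.api_key = "sk-..."  # OpenAI API key

def draft_reply(concern: str) -> str:
    """Return a suggested reply for a volunteer to review, edit, or discard."""
    prompt = (
        "Write a brief, empathetic reply to the following anonymous "
        "peer-support post.\n\n"
        f"Post: {concern}\n\nReply:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 model of that era
        prompt=prompt,
        max_tokens=120,
        temperature=0.7,
    )
    return response.choices[0].text.strip()
```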

Screenshot from Koko's demonstration video showing a volunteer selecting a therapy response written by GPT-3, an AI language model.

Koko

In his tweet thread, Morris says that users rated the AI-crafted responses highly until they learned the responses were written by AI, which points to a key lack of informed consent during at least one phase of the experiment:

Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001). Response times went down 50%, to well under a minute. And yet... we pulled this from our platform pretty quickly. Why? Once people learned the messages were co-created by a machine, it didn't work. Simulated empathy feels weird, empty.

On the Koko server, the administrators write, “Koko connects you with real people who truly understand you. Not therapists, not counselors, just people like you.”

Soon after posting the Twitter thread, Morris received many replies criticizing the experiment as unethical, citing concerns about the lack of informed consent and asking whether an institutional review board (IRB) had approved the experiment. In the United States, it is illegal to conduct research on human subjects without legally effective informed consent unless an IRB finds that the consent can be waived.

In a reply on Twitter, Morris said that the experiment would be “exempt” from the informed consent requirement because he did not plan to publish the results, which inspired a parade of horrified replies.

The idea of using AI as a therapist is far from new, but the difference between Koko’s experiment and typical AI therapy approaches is that patients typically know they aren’t talking with a real human. (Interestingly, one of the earliest chatbots, ELIZA, simulated a psychotherapy session.)

In Koko’s case, the platform used a hybrid approach where a human intermediary could preview the message before it was sent, rather than a direct chat format. Still, without informed consent, critics argue that Koko violated ethical rules designed to protect vulnerable people from harmful or abusive research practices.
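In code terms, that preview step is a simple approval gate: the model proposes a draft and a human decides whether anything is sent. Here's a minimal, hypothetical sketch that reuses the draft_reply() helper from the earlier example; the console-style workflow is an assumption, not Koko's actual interface:

```python
# Hypothetical human-in-the-loop gate: nothing reaches the recipient
# unless a volunteer explicitly approves (or edits) the AI draft.
from typing import Callable

def review_and_send(concern: str, send_message: Callable[[str], None]) -> None:
    """Show the AI draft to a volunteer, who approves, edits, or rejects it."""
    draft = draft_reply(concern)  # draft_reply() from the earlier sketch
    print(f"AI-suggested reply:\n{draft}")
    choice = input("[a]pprove / [e]dit / [r]eject? ").strip().lower()
    if choice == "a":
        send_message(draft)
    elif choice == "e":
        send_message(input("Your edited reply: "))
    # On reject, nothing is sent; the volunteer writes a reply themselves.
```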

On Monday, Morris shared a post reacting to the controversy and detailing Koko’s path forward with GPT-3 and AI in general, writing, “I receive criticisms, concerns, and questions about this work with empathy and openness. We share an interest in making sure that any uses of AI are handled delicately, with deep concern for privacy, transparency, and risk mitigation. Our clinical advisory board is meeting to discuss guidelines for future work, specifically regarding IRB approval.”
