In the summer of 2023, the US National Eating Disorders Association (NEDA) fired nearly all of its human helpline staff in favor of an AI chatbot named Tessa, a customized chatbot purchased from Cass, a mental health chatbot company. It seemed like no coincidence that this sudden and stark replacement came shortly after NEDA employees began talks of unionizing.
Less than one week after the helpline staff were replaced, Tessa was flagged for giving problematic advice, telling users who were already struggling with eating disorders how to lose weight and that dieting can co-exist with ED recovery. According to an article indexed in the National Library of Medicine, dieting and focusing on weight loss can be harmful for these patients and lead to relapses. NEDA told NPR that, following an investigation, its chatbot purveyor Cass had updated Tessa's code without NEDA's knowledge, resulting in Tessa giving uncensored answers. From NEDA to Cass, multiple layers of negligence led to an outcome that no one intended.
Tessa is only the prototype for what will surely be a new legion of AI medical chatbots entering the market, heralded as the new frontier in mental health treatment. But how exactly should this work, and what are the implications?
There are demonstrable advantages to AI-assisted therapy. For instance, people could find comfort in the fact that the consultation takes place with an entirely non-human entity, incapable of judgement or shame, and a chatbot is available 24/7. That said, would users struggle to open up to something they perceive as just lines of code?
According to research, humans respond well to the reciprocity of a therapist-patient relationship. In a 2011 paper titled "Supportive Accountability: A Model for Providing Human Support to Enhance Adherence to eHealth Interventions," the authors, David Mohr, Pim Cuijpers, and Kenneth Lehman, argue that the key to good therapy is supportive accountability. They explain that when a human being expects something from another person that has been agreed to, there is a feeling of accountability and of reciprocity in the relationship. Applied to patients speaking with AI therapists, this could point to a decrease in feelings of accountability and in the incentive to keep up with treatment. According to the Harvard Business Review, people learn to trust through predictability. The challenge with chatbots is that even AI researchers still don't completely understand how these systems work, which makes their intentions and choices unpredictable.
To a degree, censorship is needed in AI chatbots to restrain them from telling, for example, ED patients how to lose weight. With therapy, however, there is enormous nuance in how people speak about traumatic topics, and too much censorship or inappropriate boundaries could limit progress when working through intense experiences. Finding a middle ground is extremely complicated: the chatbot must be able to discuss difficult issues without shutting the conversation down, while still observing conventional societal "morals" and "rules of conduct" in unpredictable situations.
User Data
There's also the ethical question of what data patients hand over to tech companies when they seek care. Users seek mental health help when they are facing psychological challenges in their lives, be it suicidal thoughts, eating disorders, or other crises. But the introduction of AI also ushers in the potential for personal data breaches. For decades, user data has been collected and sold to third-party companies. This confidential information is sometimes even leaked or hacked, as recently happened with the genetic testing company 23andMe, where people's genetic data, and in many cases markers of potential health conditions and even who users were related to within the database, were exposed. The online mental health app BetterHelp was also recently fined 7.8 million dollars for sharing user data with third parties, including Snapchat and Meta, to improve advertising accuracy.
If this kind of data were sold to, for example, an insurance company, what was a passing moment of existential quandary could mean that a person no longer qualifies for certain health premiums or life insurance. People who open up to therapists might also want to speak about drug use or morally questionable thoughts and actions. A future governing body could, for example, take the available data on these individuals and penalize them for it. The laws surrounding data sharing with governments also vary widely between countries. In the US, for example, a requirement that therapists report active thoughts of suicide can lead to unwanted institutionalization of the patient. In Germany, it is illegal for a therapist to report active suicidal thoughts without the patient's specific written consent; the decision to disclose rests with the patient. In 2015, Germanwings Flight 9525 was deliberately crashed by one of its pilots in an act of suicide. Because of German law, his therapists did not have the authority to report his condition; had they been able to, the crash might not have happened. Data privacy laws cut both ways, helpful and harmful. Ensuring that privacy rules are adapted to each country is crucial, and on top of that, new international AI laws may need to be implemented.
Data insecurity makes sharing vulnerable experiences more precarious: if that data is leaked, it could endanger the life of the person who shared it.
Cost
Cost is a huge factor in one's ability to access mental health treatment. This, along with the limited availability of human therapists, means many people cannot afford or find help. In ChatGPT, one can already start a conversation by asking the AI to simulate a CBT therapist or to respond like a professional trained in trauma care (a minimal example of such a prompt is sketched below). There are many other tools out there as well that are either completely free or quite affordable.
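For the curious, here is roughly what that looks like in code rather than in the chat window. This is only a minimal sketch using the OpenAI Python SDK; the model name, system prompt, and user message are placeholders I've invented for illustration, not a recommended clinical setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are simulating a CBT-style supportive listener. "
                "You are not a licensed therapist; for serious or urgent "
                "issues, encourage the user to contact a human professional."
            ),
        },
        {
            "role": "user",
            "content": "My dog ran away yesterday and I can't stop blaming myself.",
        },
    ],
)

print(response.choices[0].message.content)
```

The important part is the system prompt: it frames the tone of the conversation, but it also shows how thin the safety layer really is, since it is just text that the model may or may not follow.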
In addition to ChatGPT, tools such as Jasper, Bard, and Beta Character AI have been gaining popularity among people who want to speak with therapy chatbots. In my own explorations of using a chatbot as a therapy tool, I tried one of the CBT Therapist chatbots on Beta Character AI, asking for therapeutic advice about a few fabricated scenarios, including a dog running away and domestic abuse, and the chatbot responded in helpful ways. For the more benign situations, the bot was very sympathetic and walked me through different ways to process the grief. For more serious scenarios, it did the same but recommended getting help from a trained human professional. At this time, these cost-effective chatbots are only a useful substitute if the user's issues are not serious.
AI as a tool for therapists
AI can be a useful tool for therapists, helping them augment the efficacy of their treatment. A therapist could train a therapy chatbot on transcripts of their own sessions, giving patients a therapy tool that is available at all hours of the day. An interesting data privacy challenge arises here: while a therapist can draw on their own knowledge of their sessions and notes to become a better therapist, many AI tools would not comply with patient privacy laws because of their data-sharing policies.
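To make that tension concrete, here is a rough sketch of what "training a chatbot on session transcripts" tends to mean in practice for hosted services: the material is reformatted into chat-style records and uploaded to the provider. The record layout below follows OpenAI's fine-tuning format, but the content is entirely invented; real transcripts could only be used with consent, anonymization, and a provider whose data-sharing policy actually satisfies patient privacy law.

```python
import json

# An invented, fully anonymized exchange. Each fine-tuning example is one
# chat-style record; many services expect one JSON object per line (JSONL).
example_record = {
    "messages": [
        {
            "role": "system",
            "content": "Respond in the style of this practice's CBT sessions.",
        },
        {"role": "user", "content": "I keep assuming the worst before every meeting."},
        {
            "role": "assistant",
            "content": "Let's look at the evidence for and against that prediction.",
        },
    ]
}

with open("training_data.jsonl", "w") as f:
    f.write(json.dumps(example_record) + "\n")
```

The technical step is trivial; the hard part is everything around it, which is exactly where the data-sharing policies mentioned above come in.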
AI tools are starting to be able to pick up on mood shifts and tone of voice, and tools like Autonotes can write summaries of recorded meetings for therapists. They can also help therapists find patterns that point toward certain mental illnesses or challenges. All final diagnoses, however, should always be vetted and confirmed by the human therapist.
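As a toy illustration of what "picking up on mood shifts" can mean at the text level, the sketch below runs an off-the-shelf sentiment classifier from the Hugging Face transformers library over a few invented session lines. It stands in for no particular commercial product, and real tools also work with audio, prosody, and far richer models.

```python
from transformers import pipeline

# Downloads a small default sentiment model on first run.
classifier = pipeline("sentiment-analysis")

# Invented snippets standing in for lines from a session transcript.
session_lines = [
    "I actually slept well this week.",
    "But I keep dreading the call with my sister.",
    "Honestly, some days I don't see the point.",
]

for line in session_lines:
    result = classifier(line)[0]
    print(f"{result['label']:>8}  {result['score']:.2f}  {line}")
```

Even a crude signal like this hints at how pattern-spotting could work, and also at how easily it could mislead, which is why the final call belongs to the human therapist.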
Will AI ever need its own therapy?
Then there's the ethical dilemma of the mental health of the AI bots themselves. If AI therapists are responsible for alleviating the mental suffering of humans daily, will they, in turn, also require their own therapists? The idea of therapy for AI has been growing in the public consciousness as AGI becomes less of a science-fiction concept and more of a plausible future. According to IBM, for an AI to be classified as AGI, it would "require an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future." This theoretical AGI therapist might look quite different from a traditional therapist, perhaps someone who has studied both coding and psychotherapy. A recent exhibition by the London-based new media artist Lawrence Lek, called NOX, explored this topic. An excerpt from the exhibition text reads:
“Nox is an expansive exhibition that imagines the psychological consequences of a future populated by smart systems and intelligent machines. Across three floors, it invites audiences into a facility where Farsight Corporation, a fictional artificial intelligence (AI) conglomerate, trains and treats their sentient self-driving cars. Each class of vehicle serves a different role, such as delivery, patrol or pleasure, and has a corresponding personality type. When their cars begin to demonstrate undesirable behaviour, Farsight summons them to the centre — titled NOX, short for ‘Nonhuman Excellence’ — to undergo a five-day rehabilitation programme.”
The undesirable behaviours in question were emotions and feelings within the AI that indicated depression, things that made the software less productive. The work imagines a future entity, the AI car, whose entire existence is validated by production and fully quantifiable results, and which is therefore studied, recalibrated, and reprogrammed by another AI to alleviate such behaviour. Art and speculative projects produce fictional scenarios and possibilities that often become blueprints for technological innovation, sketching outcomes and ideas we would like to work towards or away from.
Thought of this way, AI is the accumulation of innumerable human consciousnesses, and like humans, it can develop bad behaviours, biases, and habits. If everyone could benefit from therapy, couldn't AI? Would a therapist for AI come in the form of a human, or another AI used for calibration? At present, the answer is hard to know, though in many cases it would likely be a mix of human testing and AI re-calibration efforts.
Who takes responsibility?
Every AI has a different concept of “truth” and an array of biases within itself, due to the specific data it’s been trained on and the parameters programmers have set. Sometimes, these AIs learn bad behaviours or act out in unexpected ways, as did Tessa for NEDA – but in cases like these, who takes the blame? In traditional situations, the responsible party is quite clear, but in the case of AI there are far more parties involved – is it the programmers who make it, the company that owns the AI, the company that modified the AI for their own needs, the company that compiled the datasets, or is it even the user who is to blame? One can’t sue or put an AI in jail, at least not yet.
While they may not face criminal charges in court, AIs are now being held to other forms of accountability through regulation. Last year, the EU launched the world's first regulatory framework for AI, initially proposed in April 2021. As summarized by the European Parliament, "AI systems that can be used in different applications are analyzed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation." The EU has laid out different risk categories for AI, ranging from unacceptable-risk systems, such as those used for cognitive behavioural manipulation of people or of specific vulnerable groups and for biometric identification and categorization of people, down to guidelines for general-purpose and generative AI such as ChatGPT, which may create content as long as it is disclosed as AI-generated. This means there is some onus on the user to ensure that they're using tools with this framework in mind.
As for who is responsible for AIs behaving badly, these laws are still being developed and will likely need to be handled on a case-by-case basis for the time being.
Is AI the right choice?
With the above in mind, is AI therapy a suitable option right now? The answer is: it depends. For simple or less severe issues, it might be, as long as one remembers it's an AI that may have faults. AI could also be a useful starting tool for therapists in their own work. For larger traumas and more complex issues, however, it may be best to seek out a trained human mental health professional.
—
⭒ It's been a while since I've written anything long-form and I created this as a writing sample for my uni application. Many thanks to Lyndsey Walsh and Whitney Wei for proof-reading. ⭒