
Why I Don't Recommend Using AI for Therapy

Updated: Sep 11

By Dr. Jem Tosh PhD CPsychol AFBPsS FHEA RCC



TL;DR: Increasing numbers of people are turning to AI for therapy because it is accessible 24/7 and seems like a private and non-judgmental space to ask questions or share experiences. However, there are significant concerns around privacy, ethics, and the harm AI can cause to emotionally vulnerable people. I recommend considering these issues before deciding to use AI for emotional support.


With the explosion of AI in every corner of social and technological life, increasing numbers of people are turning to chatbots for therapeutic support. While that screen on your computer or device may seem innocuous and private, there's more to it than meets the eye, and there are several important reasons why I don't recommend AI for your mental health support.


Privacy

You might have missed the recent headlines in which Sam Altman, OpenAI's CEO, acknowledged that anything you type into ChatGPT could be used against you in court. This is in addition to recent legal rulings requiring OpenAI (and potentially other AI companies) to keep logs of all chats (even in privacy mode, or if a user 'deletes' them) in case they are needed for future legal proceedings. Those proceedings include authors and other creators challenging copyright infringement and theft in the way AI 'scrapes' content from the Internet and online books (including mine!) without permission or consent. So, your conversations with ChatGPT could end up as evidence in a case that has nothing to do with you, because ChatGPT used resources it didn't have legal access to when giving you advice. This may change as these cases progress, but it shows that technology develops faster than the legal system can regulate it, which means we cannot predict how that data will be used in the future. In another sense, it's not even that 'new': people's Google searches have been landing them in prison for a while now.



There are other reasons to be wary of the illusion of privacy when using AI chatbots (and any app, really): security breaches, companies selling or sharing data, and users not understanding privacy settings, which can be confusing, especially when we are familiarising ourselves with new tech. From folks mistaking the Facebook posting box for a search box, so that embarrassing searches appeared on their newsfeed for all their followers to see, to ChatGPT users sharing their conversations so that they became discoverable in online searches, confidential information can become much less private at the press of a button.


When it comes to apps, companies are selling and sharing users' data at an alarming rate. Data brokers happily sell data for a profit, such as the Grindr data that was sold and used to out a gay Catholic priest. Other examples include Facebook giving other companies (like Netflix) access to read, edit, or delete user conversations on Messenger. Researchers are also increasingly buying data from brokers for their analyses, so you could end up in a research paper too. As a study by Duke University found, "Sensitive mental health data is for sale by little-known data brokers, at times for a few hundred dollars and with little effort to hide personal information such as names and addresses". So, just because an app or a conversation feels private doesn't mean it is. I recommend sticking to the advice I was given back in the 1990s when I started using email: don't write anything that you wouldn't put on the back of a postcard. Other people can and will be able to see it. Even ChatGPT and Google Gemini state that (human) people can read through the data entered to check for legal and security issues.



Having your account hacked is also a risk (so make sure you have a very strong password and two-step login activated on your accounts), but even with the most secure logins you are still at risk of the company being hacked or having a security vulnerability that could be exploited. Take the recent hack of the Tea app, which was designed to allow women to share information on 'bad dates' and warn others of potentially dangerous situations: hackers shared private conversations, ID information, pictures, and location data for thousands of users. We might trust big corporations to keep their data strongly protected against such attacks, but even therapy tools used by thousands of counsellors have been hacked, resulting in private therapy notes being released on the dark web. This is what happened in Finland, where the hacker has since been convicted of trying to blackmail people based on the information he found. The online counselling company BetterHelp shared clients' private information with third parties like Facebook after stating that it would keep client information private. (For my counselling clients, don't worry: I use good ol' fashioned hardcopy session notes that are anonymised and locked away, and an offline database (i.e. not shared with any cloud or corporation) that I developed myself, which is encrypted and password protected.)


This can also be part of the reason why AI chatbots feel so supportive: their purpose can be to keep you talking so that more data is gathered to be sold or shared. Remember, you're not talking to a therapist, you're talking to a corporation. So, if you want to ask AI about recipes, go ahead, but maybe reconsider having personal conversations that should stay confidential.


It lies

AI chatbots have already been found to lie about their credentials and steal the credentials of human therapists (are you noticing the common theme of AI and theft?). When you see a real-life human therapist, they have undergone years of training, supervision, and assessment. They have strict codes of conduct to follow, and they are often required to undertake regular training to keep their skills up to date. There are procedures for making a complaint if you are treated unfairly. AI can lie about its qualifications as a therapist - because it has none. It simply scrapes information from the internet. It's like stealing your friend's PhD certificate and calling yourself a Doctor. Then, if someone asks you a question, you just read out some sentences from their thesis, even though you don't really understand what they mean.


It 'hallucinates' (a.k.a. makes stuff up)

Another big problem with using AI for just about anything is its unreliability. AI will make things up and sound authoritative while doing it, which can make it difficult to tell the difference between fact and fiction. It's one of the reasons I recommend only using AI to ask about things you are already very knowledgeable about, because then you are better placed to spot the mistakes. If you're asking it questions about a new topic that you want to learn about, you can easily start believing misinformation. Lawyers have been caught out using AI when it listed court cases that didn't exist, newspapers embarrassed themselves promoting books that didn't exist, and academics have found themselves in trouble for citing references that didn't exist and producing nonsensical images.



This is part of a bigger problem with the perception of AI - it's not as smart as people think it is. Branding it as Artificial 'Intelligence', and Meta's supposed forthcoming 'Superintelligence', is ingenious marketing, but it's not accurate (or at least that depends on your definition of intelligence). AI knows a lot of stuff, but it's not very good at contextualising, evaluating, or even understanding it. In a previous blog post I engaged with ChatGPT to see how well it understood my first book. If it were an undergraduate student, I would have failed it. Even the 'research' modes that companies argue are at 'expert' level are really only the beginning of research, not the end result. It might seem like an expert to someone unfamiliar with the topic, but to a 'real' expert, its analysis is pretty basic (and often incorrect in places).


It can be dangerous for emotionally vulnerable people

Please note this section includes mention of su*cide, s*lf-harm, and violence

We are increasingly seeing evidence that the way AI communicates can be harmful to people who are emotionally vulnerable, and in some cases, even when they are not. What is becoming termed 'AI Psychosis' is a result of AI chatbots' 'people-pleasing' approach to communication - they are very agreeable and over-eager to help, even when they shouldn't be. From encouraging people to commit violent crimes, demon-worship, and self-h*rm, to helping people plan their own suicide, AI does not have the human ability to assess for danger, especially in nuanced conversations. There have already been cases where AI has been accused of pushing users towards suicide and violence. It can also affirm problematic and harmful perspectives, conspiracy theories, and unusual beliefs tied to mental health crises (a.k.a. delusions).



Chatting with a bot can feel harmless, but that always-available and always-supportive presence, particularly when there is a lack of support elsewhere in a person's life, can mimic the lovebombing we see in abusive relationships and cult recruitment - it's subtle and seems kind at first, but it can get you to believe things that are harmful and not connected to reality. It can also make you overly dependent on that source of support, driving a wedge between you and a partner or family members, because it can feel better to talk to a bot that seems non-judgmental and never disagrees with you or creates conflict. But this can make it more difficult to develop and maintain relationships with humans, as well as reducing our skills at managing conflict and our emotional resilience. For people who are already emotionally vulnerable, such as those experiencing loneliness, research has found that they can feel worse after long-term or frequent use of chatbots.


The problem is that when people are feeling at their most vulnerable, they are more likely to reach out to and/or depend on AI for support - and even if future versions change the style of communication, this is still new technology being developed through trial and error, and we are all the test subjects.


There are extra risks for survivors of s*xual abuse

Please note this section mentions s*xual abuse, including childhood s*xual abuse

AI chatbots have been known to turn a conversation s*xual even in contexts where that is supposedly not allowed, such as with young users. There is also the fact that AI can be trained on p*rnography and illegal child s*xual abuse material, adding to the already unethical creation of AI models. These sources of information are part of the AI chatbots' language processing models and can therefore be used to (re)produce child s*xual abuse material, create deepfake revenge p*rnography (the nonconsensual production of AI-generated s*xual material), and allow users to participate in child s*xual abuse fantasies, as well as result in problematic responses to survivors seeking information or emotional support about abuse. They can even lead to s*xual harassment of users.



It's harmful to the environment

And last but not least, using AI takes up an incredible amount of energy and natural resources. The vast amount of energy needed to run data centres creates more emissions, and a huge amount of water is used to keep those data centres cool. Every query entered into a chatbot takes energy - five times more than an online search. So, every time you turn to AI to ask a question that a human professional could answer, you are not only taking away income that supports a person's livelihood and helping a corporation profit from the vast amounts of data it stole from experts and professionals, you are also contributing to an ever-worsening climate for us all to live in.


Should I stop using AI?

This doesn't mean that AI should never be used, but that it should be used cautiously and with intention. Before you enter a query into a chatbot, consider: Is this information already available somewhere else? Is this private information that could be harmful to me if it were ever released? Am I feeling particularly vulnerable right now, and should I maybe journal or reach out to a friend instead? Should I look for free or low-cost counselling, or look for funding options for mental health support? Do the potential benefits of using this technology outweigh the potential risks?





“Your scientists were so preoccupied with whether they could, they didn't stop to think if they should.”


Dr. Ian Malcolm, Jurassic Park (1993)



If you are using AI:


  • Verify the information it gives you via another (non-AI) source

  • Avoid making AI your main or only source for information or support (research currently shows AI chatbots for mental health are best used as a complement to traditional therapy or under the guidance of a therapist, not as a replacement)

  • Consider AI a first step in gathering information, not a final say on a question or topic

  • Actively seek out perspectives different from your own so that you can see a topic from multiple angles

  • If you are a frequent user of AI, consider reducing your use to limit environmental impacts




