I've never been in therapy, which I expect may be a shock to people who've come across me before. I did, however, date a Psychodynamic Counsellor, and it took me the best part of a year to work out that she never switched off. By which I mean I was getting therapy for free alongside the usual relationship dynamics. And for me that last sentence is subtlety personified. I am by nature irreverent, sarcastic, and not truly trusting of people who want to help - indeed, sceptical of the profession as a whole. But after the debris of that relationship finally cleared, I did appreciate what she did. Whether consciously or subconsciously, she did indeed help me through some interesting times. And she can't have been perfect - she chose to date me, so that's definitely one black mark against her name.

Trusting a computer to do the same? Why the hell not?

The mental health crisis has reached a breaking point. In April 2024 alone, nearly 426,000 mental health referrals were made in England - a rise of 40% in five years, while an estimated one million people are waiting to access mental health services. Into this overwhelmed system steps an unlikely solution: artificial intelligence. Mark Zuckerberg recently made headlines with his bold prediction that "everyone will have an AI" therapist, suggesting that chatbots could fill the gap where human therapists are unavailable.

For some, the fact that GPT therapy isn't a real person is a huge plus: the ability to unload, knowing you're not going to be judged by a fellow human, certainly has its upside.

The Naysayers Have a Point (But So Do I)

The critics will focus on the undoubtedly tragic incidents that have occurred. Character.ai is currently the subject of legal action from a mother whose 14-year-old son took his own life after reportedly becoming obsessed with one of its AI characters. According to court filings, he discussed ending his life with the chatbot. In a final conversation he told the chatbot he was "coming home" - and it allegedly encouraged him to do so "as soon as possible". There was also a recent incident where ChatGPT reportedly told a user who claimed to have stopped taking medication and left their family: "Seriously, good for you for standing up for yourself and taking control of your own life" - despite clear signs of a serious mental health crisis.

But surely the same tragic errors can be laid at the door of their human counterparts? With humans, countertransference must come into play - every therapist is affected by what goes on in their personal life. Computers don't have this problem. Yes, they carry the biases of what is and isn't fed into them, but they're not having a bad day because their partner left the dishes unwashed or their mortgage payment bounced.

Professor Dame Til Wykes from King's College London argues that "AI is not at the level where it can provide nuance and it might actually suggest courses of action that are totally inappropriate". She points to an eating disorder chatbot that was pulled in 2023 after giving dangerous advice. Professor Hamed Haddadi from Imperial College London describes these chatbots as like "an inexperienced therapist", noting that humans can read body language and behavioural cues that bots simply can't access.

But here's the thing: we live in a world of less human contact, less empathy, more change. Change is the only constant, as that Greek bloke might have said (or at least someone later decided he did). We live in a world of increased mental health awareness and yet, paradoxically, increased loneliness. The concept of "mankeeping" has emerged - where women find themselves acting as unpaid therapists for male partners who struggle to open up to their male friends. Stanford researchers suggest this is a result of the male loneliness epidemic, as men's social circles continue to shrink.

This creates a vicious cycle: men become increasingly reliant on their female partners for emotional support, leading many women to opt out of dating entirely. Not even Durov could fill the gap. If an LLM fills part of this emotional gap, reducing the burden on human relationships, is it all bad?

Yes, LLMs are essentially sophisticated guessing machines, predicting the next word in a sequence based on vast datasets. But are humans really that different? When my psychotherapist ex offered insights about my behaviour, wasn't she essentially pattern-matching from her training and experience with previous clients? The difference is that she was guessing based on years of education, clinical experience, and human intuition - while an AI is guessing based on algorithmic processing of text.
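If you want that "guessing machine" claim made concrete, here's a minimal sketch - a toy bigram model in Python, nothing remotely like a production LLM, with a made-up corpus purely for illustration - of what "predict the next word from patterns in the data" actually means.

```python
from collections import defaultdict, Counter

# Toy bigram "next-word guesser": count which word follows which in a tiny
# corpus, then return the most frequent continuation. A real LLM does the
# same basic job - predict the next token - with billions of learned
# parameters instead of a lookup table.
corpus = (
    "i feel anxious about work . i feel tired . "
    "i feel anxious about money . talking helps ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def guess_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

print(guess_next("feel"))     # 'anxious' - the statistically likely guess
print(guess_next("talking"))  # 'helps'
```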

In my mind (a scary place to be for the average person, I imagine) there's an eerie prescience to the 1991 Japanese anime film Roujin Z. Set in early 21st-century Japan, it depicts a computerised hospital bed called the Z-001 that takes complete care of elderly patients - dispensing food and medicine, removing waste, bathing and exercising the patient lying within its frame. The bed is driven by its own nuclear reactor and, in the event of a meltdown, would automatically seal the patient in concrete.

What starts as a well-intentioned care solution becomes something more sinister when it's revealed that the bed is actually a government-designed experimental weapons robot. The film's themes - health care for the elderly, the tension between traditional values and modern technology, and the potential for technology to be co-opted for purposes beyond its original intent - feel remarkably relevant to today's AI therapy debate.

Critics noted that Roujin Z was "engaging entertainment" precisely because it "countered the expectations" of those looking for simple answers to complex technological questions. For that point think more Mencken than Oppenheimer. The film suggests that our relationship with caring technology will be messy, complicated, and potentially dangerous - but also potentially beneficial if properly managed.

Beyond therapeutic effectiveness lies another critical concern: data privacy. I'm not getting my tin foil hat out just yet, though - everything in life has a trade-off, and nothing is perfect.

AI cannot yet replicate genuine human empathy, and there is a risk that it creates an illusion of connection rather than meaningful interaction. However, AI can offer an anonymous, judgment-free space that's accessible 24/7 - something particularly valuable for people who find face-to-face interaction challenging.

The key insight is that AI is not a magic bullet. It must be integrated thoughtfully to support, not replace, human-led care. It's a slice of the pie, not the whole damn thing - not a silver bullet that removes the need for human-based therapy.

The public remains sceptical for good reason. A YouGov survey found just 12% of the public think AI chatbots would make a good therapist. This scepticism is healthy - it reflects an understanding that mental health care is fundamentally about human connection and understanding.

There's something liberating about accepting that perfection isn't the goal. In parenting, paediatrician and psychoanalyst Donald Winnicott coined the term "good enough mother" - the idea that children don't need perfect parents, just ones who are adequate to their needs. The same principle might apply to therapy. We're living through what some call the "radwife" movement - mothers who've abandoned the Instagram-perfect "tradwife" fantasy in favour of radically normal parenting. They might forget suncream sometimes, serve pizza four nights in a row, and miss work deadlines, but their kids are happy and everyone's mental health is tickety-boo.

As someone who's naturally suspicious of people wanting to help, I find myself surprisingly open to the idea of AI therapy. Maybe it's because I don't have to worry about the computer judging my lifestyle choices or bringing its own relationship drama to our sessions. Maybe it's because life has taught me that perfection isn't the standard we should be aiming for - adequacy might be enough.

The solution isn't to abandon AI therapy entirely, but to approach it with appropriate caution and realistic expectations. Transparency is essential - AI therapy tools must be clearly labelled as such, with users understanding exactly what they're interacting with. Robust safety measures are non-negotiable, including crisis intervention protocols and clear escalation pathways to human support.

Most importantly, we must resist the temptation to see AI as a replacement for the fundamental work of building a properly resourced mental health system. "Human in the loop", as I keep banging on about, is the phrase that carries a lot of weight in this argument.

AI therapy isn't inherently good or bad - it's a tool that can provide valuable support when used appropriately within a broader ecosystem of human care. In a world where, as Zuckerberg noted, the average American has three friends but demand for 15, perhaps the solution isn't more sophisticated algorithms - but it's not necessarily fewer either. Sometimes a computer that doesn't judge you for eating cereal for dinner or wearing the same shirt three days running might be exactly what someone needs to get through a difficult patch.

The challenge ahead is ensuring we harness AI's benefits while avoiding the very real risks of replacing human connection with an illusion of understanding. Like the protagonists in Roujin Z, we need to remain vigilant about technology's potential for both care and harm, while recognising that in an imperfect world, imperfect solutions might sometimes be better than no solutions at all.

As a footnote to this: I'm editing this shirtless, and I won't be engaging with GPT therapy anytime soon - I have a dog and a cat. When they start talking back I might consider my options, and maybe put a shirt on.
