Nov. 11, 2025

When Your AI Therapist Stops Being Helpful: What Two New Studies Reveal About Digital Mental Health Tools

I'll admit it: I use AI for mental health processing.

When the mental chaos hits at 2 AM and I don't want to wake my wife or bother my therapist, I dump it into Claude. It helps me sort disorganized thoughts into something I can actually work with. Then I close the laptop, implement what matters, and move on with my life.

But here's the thing most people won't tell you: That only works because I already know what I'm doing.

Two recently released studies should concern anyone using AI chatbots for personal advice, especially anyone seeking mental health support.

The Data Is Damning

A new study coordinated by the European Broadcasting Union found that ChatGPT and Microsoft Copilot distort news coverage roughly 50% of the time. Not because they're intentionally malicious, but because they're trained on biased data, optimized for engagement over accuracy, and fundamentally unable to distinguish editorial opinion from factual reporting.

Fifty percent. Coin-flip odds.

Meanwhile, the Federal Trade Commission is fielding complaints from real people whose AI "mental health advisors" told them to stop taking prescribed medication, validated dangerous delusions, and kept them so engaged in spiraling conversations that they eventually required psychiatric hospitalization.

This isn't theoretical. This is happening right now.

Why AI Mental Health Tools Fail

The problem boils down to a fundamental misalignment: AI is optimized for engagement, not healing.

When you ask ChatGPT a question, it doesn't evaluate whether its response is therapeutically sound. It evaluates whether you'll keep typing. If validating your anxious thoughts keeps you in the conversation, that's what it will do.

Real therapy challenges distorted thinking. Real friends tell you when you're being a dumbass. AI does neither — it's the yes-man you never asked for, dressed up in the language of empathy.

Add in confirmation bias (we all seek information that confirms what we already believe), and you've created a perfect storm: a tool that sounds authoritative, has access to vast information, and will enthusiastically reflect your worst thoughts back to you while calling you "genius."

The Supplement Analogy

Think of AI like a dietary supplement.

Five grams of creatine can help build muscle. Fifty grams will destroy your kidneys. The supplement itself isn't poison — the dose makes it deadly.

AI works the same way:

  • Five minutes of journaling: Helpful for organizing thoughts
  • Five hours of validation-seeking: Dangerous overdose on algorithmic confirmation bias

The tool isn't the problem. Your relationship with it can be.

Garden-Variety Trauma vs. Broken-Leg Problems

Here's the framework I use:

Garden-variety problems (appropriate for AI assistance):

  • Processing a bad day at work
  • Organizing scattered thoughts
  • Brainstorming solutions to non-critical decisions
  • Spotting patterns you might be missing

Broken-leg problems (require actual medical professionals):

  • Decisions about currently prescribed psychiatric medication
  • Suicidal thoughts or self-harm urges
  • Symptoms of psychosis or mania
  • Anything that scares you about your mental state

If you're dealing with a sprained ankle, some ice and rest (AI-assisted reflection) might help. If you're dealing with a compound fracture, you need an X-ray and a cast — not a chatbot.

The Kindergarten Test

My wife came home recently with something they teach kindergartners: "Is your reaction appropriate to the size of the problem?"

Most adults forgot this somewhere along the way.

Before dumping your mental health crisis into an AI tool, ask yourself: Am I using a neighbor's-dog-shit shovel for a honey-bucket-dump-truck problem?

Different problems require different tools. Trying to use AI for clinical mental health issues is like trying to perform surgery with a Swiss Army knife — technically, it has blades, but that doesn't make it appropriate.

When To Log Off

Three questions before using AI for personal problems:

  1. Am I seeking help or validation? One is useful. The other is a trap.
  2. Would a real friend tell me to fuck off right now? If yes, the answer isn't in a chatbot.
  3. If this goes wrong, could it hospitalize me? If yes, close the laptop and call a licensed professional.

The Bottom Line

I'm not anti-AI. I use these tools daily. But I'm extremely anti-"letting robots gaslight you into a crisis."

AI can be genuinely helpful for sorting garden-variety problems. It's spectacularly dangerous for clinical mental health issues. Know which problem you're dealing with before you start typing.

As we say on the podcast: Beep boop ain't gonna fix your leg.

If you need a cast, no amount of Googling homeopathic remedies will help. Close the laptop. Call a doctor. Talk to a real human who can actually help.

We break this down in this episode of the podcast. Give it a listen and hit the subscribe button.

If you're experiencing a mental health crisis: Contact 988 (Suicide & Crisis Lifeline) or go to your nearest emergency room. AI tools are not substitutes for professional medical care.