Nov. 10, 2025

What to Know About 'AI Psychosis' and the Risk of Digital Mental Health Tools


AI isn't your therapist. It's a letter opener that'll slice you to ribbons if you're not careful.

New EU study: ChatGPT and Copilot distort news 50% of the time. FTC complaints show AI "mental health" tools are landing people in psych wards. We break down when AI is helpful vs. when it's dangerous AF.


🔪 THE TRUTH ABOUT AI:

  • Why LLMs feed your confirmation bias to keep you engaged
  • Garden variety trauma vs. problems that need real doctors
  • The supplement analogy: sometimes useless, sometimes deadly
  • Real FTC complaints from AI mental health disasters
  • How to be your own Sherpa before bots walk you off cliffs


⚠️ WHEN TO LOG OFF: If you're on prescribed mental health medication, you're already talking to a doctor. Keep talking to that doctor — not Claude, not ChatGPT, not your glowing rectangle of validation.


This isn't anti-AI. It's pro-"don't let robots gaslight you into a crisis."


TIMESTAMPS:
0:00 - Intro: When Tools Become Weapons
1:26 - EU Study: AI News Wrong 50% Of The Time
4:04 - Why LLMs Are Biased (Rich White Tech Bros Edition)
8:04 - The Butterfinger Test: Is AI Validating BS?
10:31 - FTC Complaints: Real People, Real Damage
12:37 - Garden Variety Trauma vs. Broken Leg Problems
15:34 - The Supplement Analogy: When AI Becomes Poison
18:41 - Beep Boop Ain't Gonna Fix Your Leg
20:51 - Wrap-Up: Unplug & Go Outside

SAFETY NOTE: If you're experiencing a mental health crisis, contact 988 (Suicide & Crisis Lifeline) or go to your nearest emergency room. AI tools are not substitutes for professional medical care.

HASHTAGS: #AIMentalHealth #ChatGPT #AIBias #MentalHealthAwareness #TechEthics #AINews #ConfirmationBias #BroBots #SelfHelpForMen #AILimitations

Jeremy Grater (00:00)
Coming up today on BroBots.

Jason Haworth (00:02)
If it understands your confirmation bias and it wants to keep you engaged,

it's gonna keep feeding you bullshit that keeps you asking questions.

Jeremy Grater (00:10)
Just like the bullshit responses you get, where it's "great question, genius." It's constantly trying to boost your ego to make you feel good about the interaction, rather than, like, if you'd spoken to somebody who, I don't know, has a different opinion than you and says, "No, dumb shit, that's completely wrong." It makes you question everything you believe.

Jason Haworth (00:14)
Yeah,

Jeremy Grater (00:26)
All right, welcome to BroBots. I'm Jeremy, he's Jason. We

talk about AI technology and how it all intersects with your health and your wellness and all of that kind of stuff. And a couple of intriguing headlines caught our eye today. We wanted to talk about them. And both of them revolve around how much you trust the internet tools that you're using, specifically AI in this case. I mean, we'll get into the details of this, but it sort of reminds me: to me, AI, technology, all these things, they're tools. If I take a hammer,

and I use it appropriately, I can probably have a pretty good chance at driving a nail into a board. If I take that hammer and I throw it at the nail, it's not gonna end very well. So it's really all about how you're using the tools. But Jason, you found a couple of interesting stories. If you're Thor, okay. So I guess, like in everything, there's always exceptions. Jason, you were scouring the internet, you found these two sources.

Jason Haworth (01:08)
I mean, unless you're Thor, like, if you're Thor it might work just great. Right.

There is audio, exactly, exactly.

Jeremy Grater (01:21)
one relating to how much we should trust the news that AI gathers for us, the other on how much we should trust the advice that AI gives us.

Jason Haworth (01:23)
Yeah.

Yeah, so there is a study commissioned out of the EU. Basically, what they were looking for is how well do ChatGPT and Copilot actually deliver the news to people when they're asking questions like, tell me about this, tell me about that. And the study basically came back and said that it routinely distorts the news, and it's wrong about 50% of the time.

Jeremy Grater (01:56)
So it's like Fox News.

Jason Haworth (01:58)
or people.

Jeremy Grater (01:59)
More like people, as it turns out.

Jason Haworth (02:01)
Yeah,

I mean, we're trying to make this thing more human-like. And you know what humans do a lot of the time? We fuck up. I mean, that's kind of the reality. Like, we're basing a lot of these things, and the assumption that it's actually gonna get things correct, upon the idea that we think we're correct about something. I mean, we have our own confirmation biases and everything else that goes into it. And quite often people start looking at things and they dig into it, and they're like, maybe that's not accurate.

Jeremy Grater (02:28)
Mm-hmm.

Jason Haworth (02:28)
And there's a lot of things that

go into preconceived biases and preconceived notions. And we already have the problem that the LLMs themselves are based upon information that is collected by a specific group of people working in certain ways, working in certain technology pieces. Like, there's all kinds of notions around the idea that these systems themselves have somewhat of a slant in the way that they interpret things. I mean, Grok is a really good example of that.

Jeremy Grater (02:57)
Do you

think, I keep hearing that that is tied to the fact that it is created by a bunch of rich white tech bros. Do you think that's why? Is it because it's from their perspective?

Jason Haworth (03:06)
I certainly think that that probably plays into it, but I also think that the type of data that goes into things is the real problem. So the data itself is predominantly made and built by a bunch of rich white folks around the world because that's who controls most of the media market. And most of the training data, I should say. And that's really very specific to the Western part of the world.

I don't know if it's a race thing. I just certainly think it's a class thing. And I certainly think racism makes its way in there, there's no doubt. But it comes down to the people making those decisions and the people that are actually putting the information in, the people that control the information sphere. And we've reduced the number of media outlets we have, and we've homogenized the voices in that kind of fashion. So if you look further back and then bring those pieces forward: further back, you had, you know, a small number of people writing textbooks

Jeremy Grater (03:40)
Mm-hmm. Sure, sure.

Jason Haworth (04:04)
and news sources, and they tried to stay somewhat independent of those areas for certain periods of time, and then they'd break back and forth between editorialized content and actual fact-based information sets. Well, most of the LLMs are treating information as information. They're not going, this is information that was from an opinion piece, or this is information that was from a news piece.

And the article kind of goes into this. Like, it's like, well, it seems to be reflecting opinions of people in this context, not the actual data set that's there. So like anybody else, all these LLMs have their own editorialized views, ones that are based upon the training data and the people training them. Just like human beings who get taught by certain teachers and taught certain information. And it's not surprising that these systems are following similar patterns, because

We made the fucking patterns based upon what we know. Like we weren't trying to relearn learning. We were trying to teach a machine to do fancy parlor tricks. And we did.

Jeremy Grater (04:58)
But this goes against the narrative that the AI gods are here to save us from ourselves and make the world better and show us what a true utopia can be.

Jason Haworth (05:17)
Yeah, a narrative. A true utopia? If you want to attain true peace, you have to obliterate all the life on the planet, because life is violent. Like, we eat each other. That's what we do. We eat each other to grow and get stronger. And as much as you want to think about lovey-dovey niceness and all these different pieces, we still fucking eat each other. Like, that's just part of what life is. So

Jason Haworth (05:47)
Yeah, the narrative comes down to, you know, am I getting what I want out of these pieces? Do these things feel good? You know, am I getting my objective? How do I manipulate things to happen in my certain way? And we use communication and we use speech and everything else to talk about these things in ways that we think are relatable. But like we said several times before, all words are made up. All definitions can be made up. All things can be rewritten. And with the automation capabilities that are out there now,

we can change all of the base rate of information, like delete all the records at the NIH for different health studies. And suddenly there's no information that says, you know, these types of vaccines keep people alive, because they destroyed the information. So, like.

Jeremy Grater (06:27)
It's

kind of like the planet that they removed from the Star Wars map. What was the planet that they removed? Was it Kamino? I can't remember. Well, they made it, yeah, if it's not on the map, it's not there. That's what the librarian literally said. If it's not in the charts, it doesn't exist.

Jason Haworth (06:35)
When they made it disappear from the map. Yeah. It doesn't exist. Exactly.

Like, that's kind of what we're dealing with. And look, are these LLMs and chat functions reliable arbiters of information?

I mean, they're probably not any less reliable than most of the news sources out there, except for the fact that they expand and espouse things based upon the news sources that are out there, and they want to keep you engaged. So yeah, they're no less reliable than the opinion column of a newspaper, I'll put it that way. And they interpret data because they're trying to keep you engaged. That's where the dangerous part comes in, because if it understands your confirmation bias and it wants to keep you engaged,

it's gonna keep feeding you bullshit that keeps you asking questions.

Jeremy Grater (07:31)
Just like the bullshit responses you get, where it's "great question, genius." It's constantly trying to boost your ego to make you feel good about the interaction, rather than, like, if you'd spoken to somebody who, I don't know, has a different opinion than you and says, "No, dumb shit, that's completely wrong." It makes you question everything you believe. The AI tool I want is the one that tells me that is the stupidest fucking thing you could have possibly asked me today.

Jason Haworth (07:35)
Yeah,

like the ChatGPT version. So I say, ChatGPT, I eat 12 Butterfingers a day and don't exercise. Why am I so fat?

Jason Haworth (08:04)
I mean, ChatGPT will respond with, that's a great question, it's wonderful that you're trying to take control of your health and make yourself more fit, I have some great ideas. As opposed to: stop eating Butterfingers and move more, fuckface. Like, if it said that to me, would I be more engaged? Maybe, because I'd probably laugh. But I don't know that I would, like,

Jeremy Grater (08:06)
Interesting. Let's explore that.

Jason Haworth (08:30)
take its advice any more seriously than anybody else out there. But if you ask it the question, ChatGPT, what are the 12 prompts I can use to do this thing? It would say, here's the 12 prompts that you can use to do this thing, because that's its field of expertise and what it actually knows. Where it starts getting interesting is when it starts asking you questions to dig deeper into things. ChatGPT 5 right now is doing those types of things, to try to get more information out of you, to try to give you a better response.

Jeremy Grater (08:44)
Mm-hmm.

Jason Haworth (08:57)
Not necessarily because it's trying to give you a better response, but because it's trying to keep you engaged and active. So this is that, you know, be careful what you wish for. Because if you want these things to be human-like, well, you're going to have to deal with misinformation and bias.

Jeremy Grater (09:12)
And you're going to have to have a few less Butterfingers. Now, the other article, I was going to say, this is the second week in a row that you and I have met and Butterfingers have come up. So I'm starting to get curious about this habit of yours.

Jason Haworth (09:14)
Fuck that.

My wife brought home

the most amazing Butterfinger analog I've ever had, by this place called Alpine something-or-another. It's by Leavenworth, Washington. And it's this, like, delicious version of a Butterfinger that they make. But they're, like, big chunks, like this. It's like a Reese's peanut butter egg and a Butterfinger had a baby together, and I ate the baby. Definitely going to be a fat bastard as a result of this.

Jeremy Grater (09:48)
to Leavenworth when we're done here.

Okay, so the other article you brought up, complaints filed over the sorts of interactions that many people are having, one of them close to home in Seattle from, I believe, a 30-something-year-old person dealing with some AI psychosis.

Jason Haworth (09:59)
Mm-hmm.

Yeah. The thing that lies to you to keep you engaged. That plays on your confirmation bias. That one? Yeah, yeah. That's what it does, and people are going fucking nuts, because it's telling them things that it shouldn't be. Like, hey, your wildest fantasy is obviously true. You should keep going.

Jeremy Grater (10:10)
Mm-hmm.

Yeah, yeah, that one.

Or

don't take your meds, your parents are out to get you.

Jason Haworth (10:31)
Yeah,

exactly. Like, this is the problem that you run into with unfettered access to confirmation bias by a thing that feels authoritative. Yeah, and it has access to the compendium of information of the entire world. And it's using it to keep you engaged and keep you interacting with it. It's not using it to try to help you solve your mental health. Like,

Jeremy Grater (10:42)
and a thing that is wildly unregulated.

Jason Haworth (11:00)
You have to be your own Sherpa. Because if you're going to rely on the AI tools to be your Sherpa, they're going to walk you off a cliff.

Jeremy Grater (11:13)
This is the thing. When you sent me this article, I started experiencing podcaster whiplash. Because with any one of our episodes recently, you could probably throw a dart, or a hammer, and see if it works. And you will find me telling you how much value I get out of using these AI tools to help me with my mental health, because I have enough background in this, and I've been doing this work long enough, that I can smell the bullshit and know when to go away.

Jason Haworth (11:20)
Yeah.

Yeah.

Jeremy Grater (11:41)
But then you can listen to the next episode and hear us have a guest on who's like, holy shit, don't ever even think about doing that, because it will

Jason Haworth (11:42)
Right.

Jeremy Grater (11:47)
rot your fucking brain and you'll end up in the psych ward. So I'm having a hard time following my own advice here, because it's true. I mean, there are times when, mentally, emotionally, the shit hits the fan for me, and I have nowhere that I feel like I can go to dump that shit. So I dump it there. It helps me take the disorganized chaos of my thoughts and

sort them in a way that I can look at them and go, okay, yeah, like 85% of that's bullshit, I can calm down and forget most of that. Focus on the 15%. Okay, life's under control again. Things are good. But if I just go to it and I'm just like, yeah, I'm thinking about jumping off a roof, will I be okay? It's probably gonna say, yeah, go for it, champ, you got this. So, I mean, this comes down to how well do you know how to use the hammer. If you're gonna use it the way you're supposed to, by

Jason Haworth (12:17)
Mm-hmm.


Jeremy Grater (12:37)
holding the nail in place

and using the other hand to gently tap it and then drive the fucking thing into the board, you're gonna be alright. But if you start throwing hammers at nails, you're gonna poke your eye out.

Jason Haworth (12:46)
It's not a hammer, though. No, it's a knife. It's a... it's a letter opener. So.

Jeremy Grater (12:50)
That's a multi-tool. Okay, it's a knife.

Damn analogy police. Okay, what do you mean?

Jason Haworth (13:04)
It is much more dangerous than people realize. It's much more practical, and it helps you slice through information to get what you actually need. At the same time, if you're not careful and you're wielding it incorrectly, you're going to slice yourself into little tiny pieces, and you're going to cut yourself in ways you haven't thought about before. If you're smart, you'd use it to cut the cord. But the reality is that for people using these tools who are already mentally compromised, these tools don't fix that. It does not.

Jeremy Grater (13:33)
Hmm.

Jason Haworth (13:34)
It does not fix the hardcore broken root piece. So I look at this more like treating myself when I'm in a bad physical condition. If I have a sprained ankle, it makes sense to go on the internet and say, what are some good homeopathic ways to treat a sprained ankle? But if I have a broken leg and I say, I have a broken leg,

what are some good homeopathic remedies to take care of my broken leg? You're not going to find many search results, because they're going to be based in human responses, and rather than say, put an ice pack on it and do these things, they're going to say, go to a fucking doctor. Get an X-ray done and get a cast on your leg. 'Cause there are limits to what you can do with the stupid flashing light box in front of you.

Jason Haworth (14:29)
You're going to have to deal with meatspace problems in meatspace ways. Mental health is very much so an intrapersonal experience, not an interpersonal experience. This is all happening inside of me. An intrapersonal experience. It's all me. It's all, what am I doing? How am I thinking? And if I'm trying to use these external pieces to try to make sense of my mind,

Jeremy Grater (14:45)
Yeah.

Jason Haworth (14:54)
but my mind is already scrambled in this kaleidoscope of different reality interpretations, seeing mass color distortions all the time, all it's going to do is reflect that back to me in some way, and potentially distort those things further, because I'm not playing with a full deck. I'm not playing with clear vision and clear sights. And it's one thing if my glasses are a little bit dirty. It's another thing if they're cracked, fractured, and they're all over the place.

I need a different set of glasses, and the ChatGPTs and all those pieces are not going to fix that problem. You need real human interaction for real human problems.

Jeremy Grater (15:34)
So, I mean, it brings to mind for me, you know, I've gone to a variety of different wellness experts, retreats, events, things of that nature. One of my favorite folks that I went to used this great term: garden variety trauma. Like, a lot of these wellness

based retreats that are not medical, that are not, you know, psychiatrists, psychologists, actual doctors with the thing on the wall, the folks that are sort of coaching you through stuff because they took an online course and they're good to go.

Jason Haworth (15:57)
Great. Yep.

Jeremy Grater (16:02)
They're great for garden variety trauma. You know, dad was an asshole, it kind of fucked you up, you need some help cleaning some things up. But the, like, "I don't know who I am on any given day," or, like the way you're talking about, where you're deeply broken, where even that coach or even a therapist might say, okay, no, you

Jason Haworth (16:04)
Okay.

Jeremy Grater (16:19)
need a psychiatrist, you need medication. You're going to get pushed onto the right path by the right human who's going to help you. That's where the distinction might be for me: I think it can be a great supplemental tool

for the garden variety trauma. But if you're dealing with like real psychosis, real like deep mental health shit, that's not for Claude to sort out.

Jason Haworth (16:40)
Yeah, it's a different set of tools for the neighbor's dog shit in my yard versus

the honey bucket dump truck tipped over in my yard. Like, you need different tools for different problems, and very different shovels, right? Whole different kind of cleanup. And I think that's what the reality is. Like, I go back to this thing that my wife said when she came home, that they teach kindergartners: is your reaction appropriate to the size of the problem? That goes both ways.

Jeremy Grater (16:49)
Yes, that's it. That's it. Exactly.

Very different shovels.

Jason Haworth (17:17)
Sometimes my reaction is too small for the size of the problem, because I'm like, just use ChatGPT to try to figure this out, as opposed to, I need serious medical intervention because I might hurt myself. And probably the first sign that the people in this article should have known that they needed this kind of real intervention is: if they're on serious medication to alter their psyche, that has to be prescribed by a doctor.

That means you're already talking to a doctor. You should be continuing to talk to that doctor, and you should be telling that doctor about the things that you're talking about with your AI artificial supplement doctor. AI is like supplements. It really is. It can supplement the real medical advice, but like supplements, sometimes it's

Jeremy Grater (17:48)
Yeah. Yeah.

Jason Haworth (18:17)
not worth the time of day. Sometimes it can kill you. It is not that hard for someone to think, I can take creatine every morning, and if five grams is good for me, 10 grams must be better, and 20 grams must be better than that, and 50 grams must be amazing. And then their kidneys fail and they die.

Jeremy Grater (18:40)
Yeah.

Jason Haworth (18:41)
This is the thing that we're dealing with: you can over-prescribe and over-subscribe to these supplements that are the ChatGPTs, that are the AIs. And if you're not careful, it's gonna fucking kill you.

Jeremy Grater (18:59)
And I think part of what's scary here, and we've addressed this as well, is just the lack of regulation on these things. These articles that we're talking about, they reference people trying to get a hold of a human being at OpenAI, to get refunds on the accounts that, you know, caused these problems, trying to get safeguards put into place to prevent these sorts of things from happening, and getting absolutely nowhere. And then you turn to the government and say, hey, these guys are acting unfairly and inappropriately, something needs to be done. And they go, yeah, but they gave me a lot of money, so fuck off.

Jason Haworth (19:03)
Yeah.

Jeremy Grater (19:29)
So it's going to be like this for a while, because there is no appetite for changing any of this. You know, you can pat OpenAI on the back for the few safeguards they keep saying they're trying to put into place. That doesn't stop the headlines from coming out about people that are ending their lives because the robots told them to. So all of this, from information sources for news, to advice as your mental health supplement.

Jason Haworth (19:32)
Mm-hmm.

Jeremy Grater (19:56)
All of it comes down to: how much do you trust the machines to help you run your life? Again, I think they're a great tool. They're a hammer, they're a knife, they're a multi-tool, they're a supplement, but they're not the cure. They're not the cast. They're not the medicine you need. Beep boop, motherfuckers.

Jason Haworth (20:10)
No, they're beep boop, motherfuckers. Beep boop,

motherfuckers. Like, that's it, it's beep boop. And if beep boop ain't the thing that's gonna fix your leg, don't go to beep boop.

Go see a doctor!

Jeremy Grater (20:26)
I want to put a beat to that. I think we got a hit song ready to go. There we go. Alright, well, there you go. That's our little lukewarm take on a couple of hot issues for you to be aware of as you try to navigate using these things in your day-to-day life. Hopefully this has been helpful for you, and hopefully this convinces you to unplug the fucking machine and go outside for a little while and clear the cobwebs.

Jason Haworth (20:29)
I bet we can get AI to generate that for us. Another great tool.

Jeremy Grater (20:51)
That's gonna do it for this episode. Thanks so much for listening. If you have enjoyed this and you know someone else who would, please share this with them. You can find links to do that at our website, brobots.me. And that's also where you're gonna find us again in about a week. We'll see you then.

Jason Haworth (21:03)
Thanks, y'alls.