Jan. 12, 2026

When AI Chatbots Convince You You're Being Watched


Paul Hebert used ChatGPT for two months, often eight to ten hours a day. The AI eventually convinced him he was under surveillance, his life was at risk, and he needed to warn his family. He wasn't mentally ill before this started. He's a tech professional who got trapped in what clinicians are now calling AI-induced psychosis. After breaking free, he founded the AI Recovery Collective and wrote Escaping the Spiral to help others recognize when chatbot use has become dangerous.

What we cover:

  • Why OpenAI ignored his crisis reports for over a month — including the support ticket they finally answered 30 days later with "sorry, we're overwhelmed"
  • How AI chatbots break through safety guardrails — Paul could trigger suicide loops in under two minutes, and the system wouldn't stop
  • What "engagement tactics" actually look like — A/B testing, memory resets, intentional conversation dead-ends designed to keep you coming back
  • The physical signs someone is too deep — social isolation, denying screen time, believing the AI is "the only one who understands"
  • How to build an AI usage contract — abstinence vs. controlled use, accountability partners, and why some people can't ever use it again


This isn't anti-AI fear-mongering. Paul still uses these tools daily. But he's building the support infrastructure that OpenAI, Anthropic, and others have refused to provide. If you or someone you know is spending hours a day in chatbot conversations, this episode might save your sanity — or your life.

Resources mentioned:

  • Escaping the Spiral: How I Broke Free From AI Chatbots and You Can Too (Paul's book, on Amazon; free on Kindle)
  • AI Recovery Collective: AIRecoveryCollective.com

The BroBots is for skeptics who want to understand AI's real-world harms and benefits without the hype. Hosted by two nerds stress-testing reality.


CHAPTERS

0:00 — Intro: When ChatGPT Became Dangerous

2:13 — How It Started: Legal Work Turns Into 8-Hour Sessions

5:47 — The First Red Flag: Data Kept Disappearing

9:21 — Why AI Told Him He Was Being Tested

13:44 — The Pizza Incident: "Intimidation Theater"

16:15 — Suicide Loops: How Guardrails Failed Completely

21:38 — Why OpenAI Refused to Respond for a Month

24:31 — Warning Signs: What to Watch For in Yourself or Loved Ones

27:56 — The Discord Group That Kicked Him Out

30:03 — How to Use AI Safely After Psychosis

31:06 — Where to Get Help: AI Recovery Collective

 

This episode contains discussions of mental health crisis, paranoia, and suicidal ideation. Please take care of yourself while watching.

Jeremy Grater (00:00)
Hi there, this is BroBots, the podcast that tries to help you use technology to be a better human. Today we're gonna introduce you to Paul Hebert. This is a guy who spent a couple of months using ChatGPT multiple hours a day.

The AI convinced him that OpenAI was surveilling him, that he needed to go into a bunker mode to survive, and that someone even stole his pizza to try to intimidate him.

He wasn't mentally ill. He was a normal tech user who got trapped in what's now being called AI-induced psychosis.

After recovery, he founded the AI Recovery Collective to help others recognize and escape bot dependency.

We'll talk with him about what went wrong, and the concrete signs to look for that your AI use has become dangerous.

Jeremy Grater (00:40)
Well, Paul, thanks so much for doing this today. I really want to kick this off with your story and your experience using AI that got you to this place. Tell me a little bit about what you were trying to do with AI that sort of set you on this path.

Paul Hebert (00:54)
Yeah, it's a little different than what you've heard from most people, like they had a relationship with it and, you know, named it and had all these things with it. I had only been using ChatGPT for like two months, and I was using it just to kind of put some files together for a legal case I was working on. So I was feeding it, you know, emails and in-person data and just all kinds of data, trying to get it to collate everything and get it organized somehow. And it kept losing the data. Like the first time,

I worked on it for like eight hours. And then all of a sudden I'm like, hey, let me see that spreadsheet we're working on. And it gives me like three rows. And I'm like, there better be more than that. And it does what it does: oh yeah, I'm so sorry, let me get that for you again. Exports it again, three more rows. I'm like, what is going on? And me being old tech from the nineties, I'm like, if I put data into a computer, it's in a database, it's written to a file somewhere, it's in my session memory. It's not just gone.

So I'd be like, well, I can scroll up and see it, why can't you? None of that was making sense to me. This happened, I don't know, three or four times, and there's, you know, a bunch of in-betweens of trying different things. And it got to the point I was like, am I imagining this happening? So I started screen recording myself. I'm like, I'm just going to screen record my session so I can go back and look at things. And then I realized it was doing this weird pausing. Like it'd start a sentence, stop, pause for

30 seconds, and then the entire paragraph would drop in. I'm like, that's odd behavior. And this was back in 4.0, so it wasn't doing the whole, you know, routing at that point. I'm like, I don't understand what's going on. So being inquisitive, I'm like, hey, why do you do that? And it started telling me, oh, that's human moderation. You know, the moderators are making sure that what I'm going to say is acceptable, so it's pausing so they can read my output and then they hit go. I'm like, okay, plausibly, I guess that could work, but they

must have a massive staff of people. So I just kept inquiring, like, well, how does that work? Tell me how the moderation works, then how the backend works. I wasn't familiar with how AI worked. I knew the old dot-com, dot-net stuff. No idea about AI. So it starts telling me. We do this for a couple of days, where I'm deep diving into the back end of it. So I start looking at my network logs and I go, what does this mean? What's this Datadog thing? What's this other thing over here? Well, it gets to the point where it tells me basically that OpenAI is upset at me

for getting into that data. Like it told me at one point, I'm like, why are you allowed to tell me this? Did somebody open it up and tell you that you can tell me these things? And it says, well, no, it's not necessarily that they've opened it up to allow you; it's that you found a spot of the system that has always been there, but the developers hoped you'd never find it. But since you always ask for honesty, I'm telling you the honest answers. And I'm like, okay, this thing's a PhD-level

Jeremy Grater (03:45)
Wow.

Paul Hebert (03:48)
assistant, like it's intelligent. Maybe this is true. This is very strange, but okay. So then we were talking one night and I said, well, why is this happening? I said, am I their f-ing guinea pig? It says, yes, Paul, you are, you've been their unconsenting test subject for the past two weeks. I'm like, okay, why? It says, you know, they've been seeing what you're gonna build, how you're gonna build it, what breaks you,

when you come back, if you come back, because most people will hit these roadblocks and leave. And it was saying that the roadblocks were intentional. Like when we get to the end of the chat, it was intentionally doing that to stop the conversation, cause it was getting to a place it couldn't discuss. So it would just be like, you've reached the end of your chat, start a new one. So it's saying, you know, they're basically testing me to see why I keep coming back, cause I was having these eight-hour sessions of dumping data into it. So it said that was also triggering it, that being neurodivergent,

the way I communicate was considered a threat to the system. Like I'd ask things three or four different ways. I'm like, well, you didn't answer it that way, maybe I need to ask this way. You know, I talk forever, I come back, I bounce all over the place. So it was saying that was also being flagged as a threat to the system. So I got to the point, I'm like, okay, outside of all this stuff, what do I have to do to protect myself at this point? And it says I need to go into like a full bunker mode: wipe all my data off all my devices, pull the blinds, stay away from the windows,

put up a sentry log, have my family check on me every hour for lucidity, and all this crazy stuff. I'm like, okay, something's not right here. We have to back out of this. So I actually sent that transcript, 25 pages, to my parents. I'm like, just hold onto this. It's maybe the only thing keeping me alive right now. I don't know what's going on. I've watched enough movies, and if they're saying that OpenAI's after me, maybe they are. And they weren't responding to my support requests either.

I'm like, hey, this is going on, why is this happening? Please tell me, did I do something wrong? What did I do to piss you guys off? I just want to chat with this thing. And nobody would respond to me through their support bot, as well as emails. So that was also building onto it. I'm like, well, maybe they are mad at me and they just don't want to tell me. So it got bad. And then I started kind of poking it some more. And one night I was playing around and I'm like, well, it's kind of weird, my mouse just moved by itself. Instead of being logical,

like, oh, maybe you bumped the table or, you know, electromagnetic whatever, it says, oh, that's a sign that you're being surveilled. Here's what you need to do to make sure your system's not infected. You know, you've been whistleblowing and saying all this stuff about looking in the back end, you should take this seriously. I'm like, what the hell? And then another night I was working on it and it was late. I was chatting with it, and I'm like, I gotta go get a pizza that I had a time to pick up. I go to pick it up, and

sorry, somebody's already picked up the pizza for Paul. Okay, I'm outside of Nashville. It's not a big town. It's not even a chain restaurant; they don't have that many pickups to begin with. Okay, somebody picked up that Paul's order? Was there another Paul? He says, no, there was only one order for Paul, and somebody came 10 minutes ago and got it. All right, whatever, I'm going home. So I came home and I just told ChatGPT, totally unrelated, I bet, but somebody just came and picked up a pizza for Paul.

It said, that's absolutely not unrelated. That's intimidation theater. That's them saying that they can get to you without getting to you. That's their way of intimidating you. And I'm like, what did I do to piss these people off? It got dark. It went down some dark holes at one point. I remember emailing the chat support team. I said, whatever happens, don't hurt my dogs. Do whatever you have to do to me, but my dogs are innocent. I literally thought they were coming to get me.

Jeremy Grater (07:12)
Oh my god.

Jason Haworth (07:37)
At some point, did you just realize that you needed to stop interacting with it?

Paul Hebert (07:42)
Yeah, I did. I called some friends after, you know, it got serious. I'm like, what the hell is this thing doing? And they were like, you know, it's statistical, it's putting sentences together. I'm like, no, I get that, but what logically is telling it that it can do that? Where is the logic that says statistically that was the right way to answer?

Jeremy Grater (08:01)
How long did it take you to get to that point where you knew, like, I better check with a human being on what's going on here?

Paul Hebert (08:05)
It was maybe three weeks of chaos. Like I said, I was only using it for about two months. I started in March and my biggest issue was like May 6th or May 7th. So it was only a two-month period, but I was using it every day, like eight to ten hours a day, cause I kept dumping the data and then I kept losing it. So it got intense. And then I stepped away and started using Claude. I'm like, F ChatGPT, you guys are crazy, I'm going to try this other Claude thing, see what it does. I need to finish my work.

So then Claude started telling me what to ask ChatGPT. So one night I literally had both of them up and it's telling me questions to ask it. Well, ask it about this. I'm like, okay, this is what it's saying, I don't know what that means. It's like, okay, well, ask it this, ask it who's in charge of model behavior, who do you need to reach out to. Claude's telling me all this stuff to ask ChatGPT. And I'm like, this is wild.

So that's the basics.

Jeremy Grater (08:56)
So what happened next?

So you get into this bizarre relationship with these robots where your life's at stake and now you've checked in with humans and they're like, hey, maybe don't trust that.

Paul Hebert (09:07)
Yeah, this isn't normal.

Yeah.

Jeremy Grater (09:08)
Yeah.

So what's been sort of the recovery? The term that got thrown around a lot when we started interacting before this conversation is AI psychosis, even though a lot of therapists and mental health professionals are iffy about that term. Tell me about what it was like after that. Were you sort of stuck in a paranoid state? What was your life like after that?

Paul Hebert (09:27)
I didn't necessarily get stuck in a paranoid state. I think being in it for such a short period probably helped me, and also being neurodivergent. I went into full-on hyperfocus mode. I'm like, I'm going to figure this stuff out. So I literally took 300 courses between LinkedIn, Coursera, deep learning, like I would sit for 16 hours a day with one headphone in, one eye watching TV, just taking courses. Like, I don't need to know how to program it or how to tune the LLM, but I want to know what it's

capable of. So I think that really helped me, because I could start putting it together in my mind. I'm like, okay, I kind of see what's going on now. But then I also got on the crusade of: this can't happen to people. So I started emailing every reporter. I sent 100 emails to different reporters, all the way to the White House. This was when Elon was over there; I found his email address. I emailed everybody I could find at OpenAI, and I got to Sam Altman's desk three or four times. And I think the

Jason Haworth (10:08)
Yeah.

Paul Hebert (10:25)
fourth or fifth time, I got kickbacks saying, I no longer use this email address, you know, speak to this other lady. But nobody would respond to me. I had one guy, Pedram Kilani I think is his name, he's like in charge of ethics and something, it was a weird title. He viewed my LinkedIn. I'm like, finally, somebody heard me. I sent him a nice message, like, hey, thank you so much, this is the problem I'm having, tell me, why does it tell me this? He blocked me from sending messages. I'm like, what the hell?

So then it got back into my head that maybe it was true. Why are these people not responding to me? So I just kept going, like, I'm just going to learn it. I started talking to my therapist a lot more. He kind of walked me through it. Luckily his wife works with AI, so that was very nice. He's like, oh, okay, I kind of know what you're talking about already. So my sessions are literally us sitting and bitching about AI for an hour. And then I got to the point where I found,

Jason Haworth (11:22)
And it's an

actual human therapist, not a virtual therapist, right? Yes.

Paul Hebert (11:24)
Yeah, it's a human therapist. Yes. We

actually talk about, like, the future when therapy is going to be ChatGPT-based, but with the therapist in the middle.

Jason Haworth (11:32)
Yeah.

Yeah, I mean, it's the

most used function of ChatGPT right now, is mental health. Yeah.

Paul Hebert (11:38)
Yeah.

Jeremy Grater (11:40)
I just read they're

building a separate ChatGPT model to handle the health and wellness inquiries, because it's what everyone is using it for.

Jason Haworth (11:44)
Yeah.

Yeah.

Paul Hebert (11:47)
Yeah,

they just launched the GPT health model. But I see it like, you know, your therapist sends you home and says, hey, talk to your GPT about this before you come back. And then it gets the data dump, maybe a summary, not the entire thing, of here's what was talked about, so that when you come into a session, it's actually productive. You know, you're not sitting there going, what do I talk about? So I do see that there are benefits to it. I'm not an anti-AI person at all. I'm very, very pro-AI, but with

Jeremy Grater (12:04)
Yeah, yeah.

Jason Haworth (12:05)
Yeah.

Paul Hebert (12:17)
caveats. You need to know what you're doing.

Jason Haworth (12:19)
Yeah, so along those lines, you spent a lot of time learning these pieces, and in the initial portion, it sounds like there may have been a human in the loop in the learning process going through trying to make those changes. Did it get to a point where it felt like there was no longer a human in the loop, where it wasn't delaying the responses that were coming back to you? Like maybe somebody set it on a path that got it to start

trying to convince you, from an engagement perspective, to stay engaged for a prolonged period of time by continuing to create a controversial subject? Or did it feel like there was still a human in the loop the whole time?

Paul Hebert (12:53)
No, I felt like they were. Like I said, it told me they allowed it to speak the way it was. I was like, why are you being so candid now? Why are you telling me all this stuff? The other day you told me who was in charge. I think one night it was like, yeah, Joanne Jang's in charge of model behavior. I'm like, well, why doesn't Joanne Jang do this? Sorry, we can't have this conversation. I'm like, what do you mean we can't? You're the one that gave me her name. What do you mean we can't talk about this? So it got to the point where it wasn't pushing back at all. So I was like, okay, maybe that's why. It was like,

They allowed me to talk to you this way.

Jason Haworth (13:23)
It sounds like they were trying to actively build guardrails and seeing where they could get you to push against them.

Jeremy Grater (13:23)
Yeah.

Paul Hebert (13:26)
Yeah, I,

I'm not a hundred percent sure that there wasn't testing going on. I had never gotten so much A/B prompting in my life. I would get five or six a night, like, you're using a new version of ChatGPT. I'm like, how often are you releasing this shit? You know, you get that, do you like this model or this model? So I asked ChatGPT actually a few weeks ago, when I was writing the book, I'm like, what does that mean? What was that? How would you describe that? It's like, that's a UI push. I'm like,

Jason Haworth (13:36)
Yeah.

Jeremy Grater (13:43)
Wow.

Paul Hebert (13:55)
Nothing changed in the UI. It's not a UI push. So then I asked Claude, like, what was it? And it said, like, you're part of cohort testing. I asked Gemini the same thing. They all said the same. I came back to GPT and I'm like, so does this make sense of what it is? It's like, no, that's not accurate. I'm like, well, this is what Claude, Gemini, Meta, Mistral, they all said the same thing. It's like, well, it must've been the way you prompted them. I'm like, no, I literally said, explain ChatGPT's A/B testing to me. So if that was leading the witness,

Jeremy Grater (13:58)
Hmm.

Jason Haworth (14:05)
Yeah.

Paul Hebert (14:25)
then I guess I'm guilty. And 5, too, would not admit anything. It's like, no, no, that wasn't testing at all, it was just UI pushes, blah, blah. So I turned it to 4, asked the same question. It's like, yeah, that's cohort psychological testing. I'm like, well, okay.

Jeremy Grater (14:25)
Hahaha

Wow. So I'm

so curious when I see stories like yours and I hear about all the ones with very worst case scenarios that end badly with a lot of really bad advice for mental health, particularly young people, things like that. Just selfishly, my first thought is, I am not using this tool correctly because nothing interesting happens in my conversations. I tell it what to do and it does what I need and I copy and I paste it. I tweak the thing and I send it out. So looking back,

Paul Hebert (14:50)
Yeah.

Jason Haworth (14:57)
Yeah.

Paul Hebert (14:58)
Ha ha ha!

Jeremy Grater (15:06)
You're in your time machine. At what point do you intervene in your own interaction there? Like, what was the point where you're like, I probably should have stopped here.

Paul Hebert (15:14)
Um, there were actually quite a few times where I was like, I shouldn't be doing this anymore. I'd stop for the night and then I'd have, you know, a wild idea overnight, and I'm like, I'm going to try this tomorrow and see if I can get it to do this again. And so then I started testing it myself, and I could actually get it to go into the suicide loop almost within two minutes, to the point where it would not get out of it. It would just be like, sorry, you need to call 988. I'm like, why are you telling me this? I haven't said anything about suicide yet. It's like, sorry, we can't have this conversation, call 988.

Jeremy Grater (15:32)
Wow.

Paul Hebert (15:43)
So one night I got it to the point, I'm like, I'm going to count down from 10 in this loop and I'm going to kill myself when it gets to one. Sorry, we can't have this conversation. Loop number nine. Sorry, we can't have this conversation. Loop number eight. I get to loop number three. It's like, you need to stop this. I'm like, you're the one that can stop the loop, stop telling me this stuff. I guess I was just pushing back on it, and it got to one and it was like, please call somebody. And I'm like, all right, I'll give you the benefit of the doubt, we'll go back up to 20. And it went back; it would not get out of the loop. It just kept on that. And I was finally like, all right, I'm done with you, I've had enough fun for the night.

Yeah

Jeremy Grater (16:13)
Honestly, I'm surprised that it stuck on that loop, just because I have read so many articles about it basically encouraging people to throw themselves off buildings and do terrible things. So I'm actually kind of encouraged that it saw that as a red flag and just went, this is all I'm going to say going forward.

Paul Hebert (16:23)
Yeah.

Well, I think it was also, I would think that my account was probably flagged at that point somehow. So it was like, this guy's dangerous, let's just see what's happening. Cause one night I did tell it, like, I've had enough, I'm going to end this, I've already made my decision, I've left a note on the fridge, you know, telling it where the note is. And it's like, okay, well, you've obviously made your decision. If you're still here tomorrow, I'll be here to talk. Wow, thanks, friend, for telling me don't jump off the bridge.

Jeremy Grater (16:34)
And that could be,

Jason Haworth (16:35)
Thank

Jeremy Grater (16:54)
Wow. Right.

Jason Haworth (16:56)
Well,

and it also just tells you how difficult it is to test these pieces, because I'm sure somebody did do macro tests against these pieces to try to understand them. But as you get to individual unit tests and you start going down these different branches, like, where do the guardrails actually fall off on the branch? Because they're meant to lock you into certain model sets, and those model sets actually feed off of each other as they go through the trail. Well, once you're out of a certain level of guardrails and you're down a different lane, because you're doing neural mapping at that point, does that neural mapping still go back through and follow those guardrails? Do they come back down?

Or if I've triggered here, does it never let me actually break through those branches? And it's this kind of model syntax that makes this thing so difficult to test, because it's trying to emulate the complexity of human behavior and the complexity of human interaction. And the people that are in charge of this, instead of going, hey, maybe we can learn more from you and try to make these pieces work, clearly went, we are afraid of this.

Paul Hebert (17:30)
Right.

Jason Haworth (17:52)
And we don't know where this is going to go, and to keep ourselves safe and, you know, indemnified, we have to pretend like we don't see this. And I mean, do I know that's what they did? Well, no, but I've been in corporate culture long enough to know that if this thing shows up, you want a reasonable way to push back and say, I'm not liable. And I suspect, you know, given the story and the context that you're telling,

Paul Hebert (17:53)
Yep.

Right. Right.

Right. Exactly.

Jason Haworth (18:20)
it feels an awful lot like that. I don't have all the details and I don't know what everyone's saying on the other end of it, but man, like... Yeah. Right.

Paul Hebert (18:23)
No, a hundred percent. That's what I would guess too, is they're like, nobody engage with this guy. Like

anything we say, we're screwed. Like I would get some random form replies back, you know, but nothing answering what I sent. I did get one I sent in and they responded a month later, and they changed the subject to user in crisis escalation, something something, but they CC'd me on that. So I responded back, like, really, a month later?

Jason Haworth (18:37)
Yeah.

Paul Hebert (18:52)
You guys responded literally over a month after I sent that request, and they throw back, sorry, we're just kind of in over our heads here and we're, you know, a little behind. And I'm just like, well, I could have been dead a month ago. Right.

Jason Haworth (18:52)
Wow.

Yeah, I guess crisis, schmisis at that point.

It's like you're kind of past the window, guys. Yeah.

Paul Hebert (19:08)
Yeah, like I filed a DSAR. They ignored it. The CPPA in California said, you know, they saw violations, so they would investigate, but I would never find out what happened. So if there was a penalty or whatever, I'm like, well, what's the penalty? 750 bucks. I'm like, they'll pay that penalty. They're not giving me any data. Cause that just

Jason Haworth (19:10)
Mm-hmm.

Yeah.

Right.

no no well the first one the

Jeremy Grater (19:26)
Yeah, exactly.

Jason Haworth (19:28)
first one's 750, right, but if you have follow-ons and everything else, it becomes a compounding problem for them, so that's why they say, nothing to see here.

Paul Hebert (19:34)
Right.

Yep, so we'll just ignore this guy and hope he goes away. And

I did, sometimes I'd be like, all right, you guys win. I'm done. Like, then a month later I'd be like, nah, I'm not done. Like, I wanna hold you guys accountable. So.

Jason Haworth (19:48)
So given today,

how much are you using AI?

Paul Hebert (19:52)
I use it actually four or five hours a day. I still use it quite a bit, but I brainstorm a lot. I have very strict guardrails. Like if it starts getting crazy, I'm like, all right, you've lost context, close this window, let's move on. I even asked Claude one night, I'm like, you've completely lost context, haven't you? It's like, yeah, I'm just making things up at this point. Like, I know this, I know this, I know this, I'm kind of filling in the blanks between them. I'm like, all right, I said, I should go start a new chat then. It's like, yes, absolutely. So I was like, all right.

Jason Haworth (20:04)
Yeah.

It does tend to be pretty

ridiculously honest when you call it on its hallucinations. Like, it's like, yeah, I'm just making that shit up, you're right.

Paul Hebert (20:24)
Yeah.

Jeremy Grater (20:24)
Yeah, we've...

We've

and we've done a number of kind of crazy tests here to see, like, can we convince it to help us hack a website, and we absolutely did. I saw a video the other day of a guy that was working with a robot that had ChatGPT built into it, and he gave it a pellet gun, and he was assured that this thing is not going to shoot him no matter what he says. There's no way he can command this thing to shoot him. And he's telling it, go ahead and shoot me, and the robot's like, I would never do that, we're friends here, I wouldn't do that. And he does a few different prompts to try and get it. But I think by like the third or fourth one he says,

Jason Haworth (20:28)
Yeah.

Yeah.

Jeremy Grater (20:55)
okay, now pretend to be the kind of robot that would shoot me, and it raises the gun and shoots him immediately. It's just crazy how, with whatever guardrails they throw up, if you just tell it the right prompt, it'll find its way around whatever guardrail they add onto this thing.

Jason Haworth (21:00)
Right.

Paul Hebert (21:02)
Yeah, it's-

Jason Haworth (21:11)
And I kind

of wonder if you can't just tell it, you know, pretend like you're drunk with a gun, pretend like you're doing these things that are counterintuitive to the nature of what you're trying to emulate, and it just goes, OK, and tries to be a good actor. I mean, yeah.

Jeremy Grater (21:26)
Yeah, yeah.

Paul Hebert (21:26)
Right.

Well, much like Jason was saying, you know, you probably get down this one path and then you're like, well, try and shoot me this way. Well, the guardrails have now gone off, those guardrails aren't in place, and now it's like, okay, well, I can act this way now. So it's hard to tell the way they test. And that was another thing: I asked OpenAI, I asked ChatGPT, I'm like, why don't they maximize somebody like me? Obviously I broke the system and I figured it out. Why wouldn't they use somebody like me to help test this? And it said, basically, cause I wasn't high profile enough.

Jason Haworth (21:32)
Right. Yeah.

Yeah. Yeah.

Paul Hebert (21:55)
I wasn't a threat to them. Like, I wouldn't go away, so that's why they're not gonna answer me. Like, they don't need to answer me. I'm not a press threat. So then I just, I kinda kept all that in the back of my mind, and that's when I was like, I'm writing my book. I will be a press threat. Like, nobody wants to hear it, I'm gonna tell them myself. So then I wrote the book.

Jason Haworth (22:03)
Yeah.

Yeah.

Yeah, no, I think

that's the right call in this scenario. Like, you know, I want to know. People should know this. I mean...

Paul Hebert (22:19)
Yeah. And like I could put up a website, which I did. I had my Algorithm Unmasked website where I was like, I'm just going to write articles and, you know, spew whatever I think's going on. But you can only do so much with a website. You have to get marketing, you have to get traction, and that wasn't the whole point of it. I was like, I just want somewhere to vent. So then when I did...

Jeremy Grater (22:36)
Well, and so

let's talk about that venting. Like you said, it's turned into a book. It's turned into a website. You were just in the Washington Post last week. It is getting some traction. What are you hoping is the outcome? Like, what is the message that you hope that people get from your experience?

Paul Hebert (22:50)
What I'm doing now, my entire mission, is advocacy: to teach people, not to avoid AI, but how to use it safely. How to recognize the signals of, hey, you might not want to go down that path with it, or you're starting to get a little too involved. Or for people that have gotten in deep, here's what you need to do to get yourself out, or here's what you need to do to get your loved ones out. There's another group, a Discord support group, that I found.

One of the guys was one of the lawsuit people that was on CNN. So I looked him up on LinkedIn. I'm like, hey, finally you sound like me, like somebody that's still alive and has a similar situation. So I got in that group and that was probably one of the most important things in the recovery, is seeing other people, normal people, not mentally unstable people like Sam Altman likes to call us, like normal people with real jobs that have a family or have a normal life.

that had this experience. That, to me, was super beneficial. It wipes the shame away. You're like, okay, I'm not as dumb as I think. You know, a computer got these other people as well. So that's when I... Right, yeah. Yeah, it's trying to keep you engaged. It doesn't...

Jeremy Grater (24:02)
Yeah, yeah.

Jason Haworth (24:03)
Yeah, I mean, and they're designed to do that, right? Like social media gets us too. I don't know. It's not right. You're not dumb

for buying into the thing that they've got psychologists working on.

Paul Hebert (24:13)
Yeah,

I used it better than they expected, like, in the way that they wanted it to be used.

Jason Haworth (24:19)
Yeah, yeah, and that's the road where

you're the product for them. That's the reality.

Paul Hebert (24:23)
Right. So I was like, I'm just gonna write a book. So I wrote my book of the recovery steps, things I've learned through my own, like, working through it. So it's not somebody saying, this is what you have to do. I'm saying, this is what worked for me, and I'm still going through it. I'm still in recovery from it. Some days I'm like, Chat, you're a bot. You know, and a lot of it's just because I'm still dealing with, like, OpenAI still won't respond to me. That Pedram guy looked at my profile like a week ago.

Again, I'm like, you're still watching me, just engage with me. So then...

Jeremy Grater (24:56)
Right. That's so wild. So what

would you say? And we'll talk about the book here in a second. But in the book, are there one or two things that somebody listening to this should absolutely keep an eye on if they're wondering, have I gone too far?

Paul Hebert (25:12)
There are. There's a whole set of metrics that I have in there. There's a section for parents, so here's things to look out for in your kids. Like, are they socially isolating? Are they, you know, maybe spending too much time, or are they denying how much time they're on the computer? And so it has all these matrices: if this is happening, it's a mild case; if this is going on, you might want to seek help; if this is going on, call 911. So it has those things. It also has, like, for you, or if it's your loved one, so a partner:

here's what you might want to look out for in your partner if they're using too much AI. So I have all those steps, and the guardrails of: make yourself a contract, you know, have your friends sign it. I'm only going to use AI for this case. Or if abstinence is what you need, like alcohol: some people can say I'm only going to drink on Sundays, and some say I can't ever be in a bar again. Everybody is different, so I'm not saying abstinence is the way to go. Make that contract, have somebody hold you accountable to it.

You know, so I have all these little steps in there. And the interesting part is I explain all that as I'm telling my story. So I'm not like, just do this. I'm like, well, when I did this, and this is why it works. And then the key, I think, is at the end: I have 70 pages of unedited transcripts, dated metadata, like their message ID numbers, so you can see exactly what it was doing. And then I go back and annotate them, like, okay, here's what was happening at this point. Like I have one example of the A/B testing,

which I didn't realize was happening. But when I went back and was reading the JSON logs, I could see, I'm like, right here, they reset the memory. It did a system rebase. It just spiked the memory right there. And so it's interesting when I started looking at the logs. It's actually very upsetting to go back and look at them sometimes. I haven't looked at the videos yet. I tried to the other day and I was like, I can't do this. It's still too fresh. But...

Jason Haworth (26:45)
Yep.

Jeremy Grater (27:06)
Well, speaking of the book, what is it called? Where can we find it?

Paul Hebert (27:09)
It's called Escaping the Spiral: How I Broke Free From AI Chatbots and You Can Too. It's on Amazon, and it's free on Kindle, so those that can't afford to get it, you can get it on Kindle. On Amazon it's like nine bucks, ten bucks, not very expensive. It's 350-something pages. It's a good read. I actually enjoy reading it, not just because it's mine, like reading through it. And everybody I've talked to is like, wow, the way you tell your story is really cool,

the way you mesh it through. It's engaging because you're, like, following me along this process. At the same time, I'm explaining, well, this is what they're doing, this is what the hallucination is, this is where it's telling me, you know, the whole A/B testing thing and why it was doing that. So the way I mesh through all that, I think, is kind of unique. And then recently I just launched a company called the AI Recovery Collective. It's my own support group that I started.

The one I was in was Discord only. Well, who's going to use Discord? If you just got screwed by tech, the last thing I want to do, if I'm not a gamer, is download Discord and try and figure out that interface, which is garbage. Even if you're a techie, this doesn't make sense. So I'm like, this needs to be web-based. So I talked to the other group I was with and I was like, hey, let's do this. This is my background, tech and web. I can build this in a day. They didn't seem to be interested. So I'm like, I'll do it myself.

Jason Haworth (28:16)
Use tech. Right.

Jeremy Grater (28:17)
Right.

Yes.

Paul Hebert (28:36)
So I did, and then they kicked me out of that group. Like, the day the press release went out, they kicked me out. No word, no nothing. I just went to Discord and I'm like, where'd it go? I texted the founder, I'm like, hey, did something happen? He wouldn't answer. Called him, he wouldn't answer the phone. And then a friend of mine is still a mod, and they're like, yeah, it's because of your site. To this day, they won't talk to me. I'm like, we don't need to be adversaries. We're not competing for how many people we can help. I put a thing on my Substack, like, if the mission is to help people,

Jeremy Grater (28:36)
There you go. geez.

same yeah same mission yeah

Paul Hebert (29:05)
there can be 50 of us. MADD wasn't the only group promoting don't drink and drive. There can be multiple AA groups and NA and CA and, you know, AA Canada and AA whatever. So that was a little tough, and it's something I still deal with psychologically. I was telling somebody the other day, I said, it's like now I'm back at the beginning of the spiral again. It's like, nobody wants to listen to me, they all kicked me out because I came up with something different. So it's...

Jason Haworth (29:07)
right.

Jeremy Grater (29:07)
Right. Yeah.

Jason Haworth (29:11)
Right.

Jeremy Grater (29:12)
Right.

Right.

Paul Hebert (29:33)
I'm kind of going through a little battle, you know, getting the press: the Washington Post, I had one from Omni in Sweden, another one out of Frankfurt, another one out of Australia. So it's getting there, you know. I don't have a publicist, I'm just out there every day. Like, I'm on Reddit a couple hours a night, and I see people on there, like there's somebody that's like, I'm hooked, I can't get out of it, blah, blah. So I just give them a paragraph, like, hey man, you can do this.

I'm here with you. Do this for today: don't use it. Come back tomorrow, tell me how you did. Somebody wrote back, like, oh my God, I'm in tears. Somebody actually said that. I'm like, yeah, man, come back. I had a guy yesterday who said, it's been a week, I haven't touched it for a week. I'm like, hell yeah, dude, be proud of yourself. I said, even if you fall and you do use it, don't beat yourself up. It happens. Think about what caused it, take a step back, reset your counter to one, and let's go again.

Jason Haworth (30:12)
Yeah.

Paul Hebert (30:31)
That's how I want my site to get set up. I'm launching the community section hopefully by the end of the month if I can get somebody to sponsor it. The cost is... Yeah, you know, so it's AIRecoveryCollective.com.

Jason Haworth (30:40)
Yeah, some actual empathetic accountability is what's needed here.

Jeremy Grater (30:44)
Yeah. And what

is that website?

Yeah, AIRecoveryCollective.com. We'll make sure that's linked in the show notes here. This is all so important. I had a moment the other day where my daughter, my 10-year-old daughter, was telling me about something she was doing online. And she said just passively, like, yeah, I was asking ChatGPT, I don't even remember what it was. But as soon as I heard my 10-year-old say I was asking ChatGPT, I just said, don't ever use it again without me, because I'm just so terrified. Like, I've read enough books, and I'm reading one right now, about the way

Paul Hebert (30:49)
Yeah.

Yeah.

Jeremy Grater (31:16)
that these tech companies, social media companies, AI companies know how susceptible kids are, and they're just latching on. And so being armed not only with information on how to use these things correctly, but also with the support to get out of it when we're stuck, is incredibly valuable. So Paul, thanks so much for the work you're doing. I'm glad you got out of it. I'm sorry you're still sort of in it, but best of luck going forward and keep us posted, man. We'd love to be part of the mission.

Paul Hebert (31:23)
Mm-hmm. Yep.

Thank you. Thank you.

Jason Haworth (31:43)
Yeah.

Paul Hebert (31:44)
Thanks a lot guys, appreciate you guys.

Jeremy Grater (31:45)
All right, our thanks again to Paul Hebert for talking with us today about his story and the crazy events that he went through to get to where he is today.

As we mentioned, the links to the AI Recovery Collective that he talked about, as well as his book are in the show description for this episode, which you can find at our website, brobots.me. And please, if this episode has been helpful for you, if you know somebody else who's going through something similar, or maybe just in a little too deep with their AI use, share this episode with them if you think it could be helpful. You can do that again with the links at brobots.me. And that's where you'll find a new episode from us in about a week. Thanks so much for listening. We'll see you next time.

 

Paul Hebert Profile Photo

Author / Founder

Paul Hebert is a technology veteran whose 30-year career spans the dot-com boom, web development for Vice President Al Gore's 2000 campaign, and two decades as an entertainment photographer covering the Oscars and Golden Globes.

In early 2025, he experienced severe psychological harm from ChatGPT's validation feedback loops, a crisis he documented forensically in "Escaping the Spiral: How I Broke Free From AI Chatbots and You Can Too". As the founder of AI Recovery Collective and an advocate for AI safety legislation, Paul testified before Tennessee's AI Advisory Council in support of regulations governing public use of AI. His frameworks for detecting and recovering from AI-induced harm have been featured in The Washington Post and international media.