Can AI Replace Your Therapist?
Traditional therapy ends at the office door — but mental health crises don't keep business hours.
When a suicidal executive couldn't wait another month between sessions, ChatGPT became his lifeline. Author Rajeev Kapur shares how AI helped this man reconnect with his daughter, save his marriage, and drop from a 15/10 crisis level to manageable — all while his human therapist remained in the picture.
This episode reveals how AI can augment therapy, protect your privacy while doing it, and why deepfakes might be more dangerous than nuclear weapons.
You'll learn specific prompting techniques to make AI actually useful, the exact settings to protect your data, and why Illinois Governor J.B. Pritzker's AI therapy ban might be dangerously backwards.
Key Topics Covered:
- How a suicidal business executive used ChatGPT as a 24/7 therapy supplement
- The "persona-based prompting" technique that makes AI conversations actually helpful
- Why traditional therapy's monthly gap creates dangerous vulnerability windows
- Privacy protection: exact ChatGPT settings to anonymize your mental health data
- The RTCA prompt structure (Role, Task, Context, Ask) for getting better AI responses
- How to create your personal "board of advisors" inside ChatGPT (Steve Jobs, Warren Buffett, etc.)
- Why deepfakes are potentially more dangerous than nuclear weapons
- The $25 million Hong Kong deepfake heist that fooled finance executives on Zoom
- GPT-5's PhD-level intelligence and what it means for everyday users
- How to protect elderly parents from AI voice cloning scams
NOTE: This episode was originally published September 16th, 2025
Resources:
- Books: AI Made Simple (3rd Edition) and Prompting Made Simple by Rajeev Kapur
----
GUEST WEBSITE:
https://rajeev.ai/
----
MORE FROM BROBOTS:
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
Join our community in the BROBOTS Facebook group
----
TIMESTAMPS
0:00 — The 2 AM mental health crisis therapy can't solve
1:30 — How one executive went from suicidal to stable using ChatGPT
5:15 — Why traditional therapy leaves dangerous gaps in care
9:18 — Persona-based prompting: the technique that actually works
13:47 — Privacy protection: exact ChatGPT settings you need to change
18:53 — How to anonymize your mental health data before uploading
24:12 — The RTCA prompt structure (Role, Task, Context, Ask)
28:04 — Are humans even ethical enough to judge AI ethics?
30:32 — Why deepfakes are more dangerous than nuclear weapons
32:18 — The $25 million Hong Kong deepfake Zoom heist
34:50 — Universal basic income and the 3-day work week future
36:19 — Where to find Rajeev's books: AI Made Simple & Prompting Made Simple
Jeremy Grater (00:01)
Have you ever had a mental breakdown at 2 a.m. and wished you had access to some sort of 24/7 therapy hotline?
That's what happened to one corporate executive. He was suicidal, failing at work, and disconnected from his family, and a monthly therapy session simply wasn't enough.
In this episode, you'll learn how AI became his lifeline, helping him reconnect with his daughter, save his marriage, and rebuild his career.
Plus you'll learn tips about AI safety without selling your soul to the algorithm.
Stick around, this conversation might change how you think about therapy, privacy, and even Taylor Swift tickets.
Jeremy Grater (00:30)
This is BroBots, we're the podcast that tries to help you use AI and technology to safely manage your health and well-being. As you've heard me talk about on this show before, occasionally when I'm sort of in a mental health crisis, I will turn to AI and I will ask it questions, I will ask it for guidance, I will ask it to help me get through those tough times when I'm in between appointments with my real, actual human therapist, who is also massively helpful in my life.
As it turns out, AI is helping a lot of other people as well. I was just reading an article the other day about how 35% of people are using AI to manage their physical and mental health symptoms.
Well, today you're going to hear a conversation that my co-host Jason had with Rajeev Kapur.
He's the bestselling author of AI Made Simple and Prompting Made Simple. He's gonna share the story of a business executive who was pulled back from the edge of suicide with the help of ChatGPT.
They dig into how AI can fill the gaps therapy leaves wide open, how to protect your privacy while using it, and why deepfakes might be scarier than nuclear weapons.
Here's Jason's conversation with Rajeev.
Jason: An AI Companion (01:30)
What in your professional journey led you to focus specifically on AI and how it intersects with the modern world? And if you can, speak more specifically about health care. I understand you had a personal event in November of last year that really made it salient for you and made it very relatable.
Any chance you can tell us that story?
Rajeev Kapur (01:49)
Yeah, yeah. So I guess the first part of the question is, just like you, my first platform was an Atari computer and its early system, right? I'm sure, right? And so then you kind of graduate, then you had the first Apple II. And then, you know, I was at Gateway first and then Dell. So you've been around tech forever. And so you have this issue where
Jason: An AI Companion (02:01)
Right. Little 2600.
Rajeev Kapur (02:17)
You start to come across things, and then you start to see things happening in media, in movies and television, around AI and robots and whatever, right? RoboCop and the Terminator and all this stuff. So pretty soon you start to grow up with it. And then, one of the things I don't think people realize is we've actually been surrounded by AI for a very long time. Netflix uses AI to determine what we watch. Amazon uses AI to determine what we buy.
Right. Instagram, all these things use AI. We're surrounded by it: GPS and Google Maps and all these things use it. And when you were at Microsoft, you guys were already doing things in the AI space on the machine learning side; essentially it's using data to help tell a story, basically. So we've been around that AI space for a long time, you and I both. And it wasn't until I really got involved here at 1105 and ChatGPT came out that I decided to write my books, and the first book has done really well. The second book just came out
about two weeks ago, called Prompting Made Simple, which is all about how to prompt and how to teach people to prompt. I know we'll get to that in a few minutes, but to answer your question about what happened in November: here's the thing I want to point out, it didn't happen to me personally. What happened was, I have keynote talks that I do all around the world, and I had done a virtual version of this talk for a bunch of business executives in South Carolina.
And whenever I do my talk, I always put my phone number up on the screen: if anybody has any questions, if you need any help, because I know this is confusing and I know it's going to be daunting, call me. So I did that, and literally I hang up, done, and about an hour later I get a phone call. I didn't recognize the number, but I figured maybe it's one of these guys calling me. So I answered the phone, and it definitely was one of the guys, and he goes, hey Rajeev, you don't know me, but great talk, I loved it.
Jason: An AI Companion (03:42)
You
Rajeev Kapur (04:09)
You know, I have a question for you. And he goes, hey, do you think AI can help me with my mental health? And I said, yeah, absolutely, it just kind of depends. And I had never really thought about it in all that much detail. So we started chatting and I said, well, can you tell me what's going on? Like, what's happening? And he goes, well, I have severe mental health issues. And I said, oh, okay. Well, on a scale of one to 10, how bad is it? And he said, it's a 15.
And it really caught me by surprise. My first reaction was, whoa, 15 means you want to end things, right? And so I asked him, well, do you feel suicidal? And he goes, yeah, all the time. And I go, well, why are you talking to me? Why aren't you talking to a therapist? He goes, well, gee, I've talked to so many therapists, and some of them are great, some of them are not so great. I have one right now. But the challenge I'm having is that when I talk to her, I can do my hour and I'm done.
Jason: An AI Companion (04:46)
Yeah.
Rajeev Kapur (05:09)
And then I don't feel like there's any support for me afterwards. And then I've got to go another month, you know, to get help and support again. And I said, yeah. And he said, I was just wondering if there's anything AI can do. And look, I'm not here to badmouth therapists. Therapists are awesome. I've gone to therapy. I'm sure others have. To me, I think the future here is going to be a world where AI can help augment what the therapist does.
Jason: An AI Companion (05:14)
Yeah.
Rajeev Kapur (05:39)
And I think if you're a therapist and you're listening to this, you need to find a way to embrace AI in your practice. Because here's the thing. Let's just call this guy patient zero. So patient zero says, hey, he's got problems, he's got challenges. And I'm sure some therapists can make exceptions, but in his mind, he's like, I can't see that therapist again for a month. So we started talking and I said, well,
What I want you to do is download ChatGPT. He had already downloaded ChatGPT; he was using the free version on his iPhone and wasn't really doing much with it. And I said, I want you to get the paid version. So he gets the paid version. I told him where to go in the app to anonymize, so it doesn't use the data to train and all that. And I said, I want you to do a persona-based prompt. I want you to ask ChatGPT to become a therapist,
Jason: An AI Companion (06:21)
Yeah.
Rajeev Kapur (06:35)
and a therapist that specializes in business executives, or whatever challenge you're facing. In his case, he was having an issue connecting with his 14 year old daughter, major business problems, and he and his wife were having major, major challenges. It's like everything was collapsing around him, and he just didn't think he could get away, and everything was his fault. And when he goes to therapy, it's great. But then he leaves.
Jason: An AI Companion (06:49)
Yeah, yeah, I know this story.
Rajeev Kapur (07:02)
And two days later, it's like the world just collapsed again, and he can only call his therapist so often, and sometimes they don't answer. So we quickly spent about 20, 30 minutes going through it, and I said, give it a prompt, give it this persona-based thing. It's got a memory function, it's going to remember. Ask it to become a therapist that specializes in this, then explain your issue and your challenge. So he started to explain the issues, the challenges, and some of the things he was seeing. And he's like, look, you're already this far along, you might as well stay and help.
So I did, I stayed on for another 15, 20 minutes. So all in, we're about an hour, hour and a half into this whole experiment, and this is the first time I'd ever done this. And it started asking him questions, and he started giving responses, and he was just starting to have a conversation. So to make a long story short, I said, all right, have the full conversation with ChatGPT, but I want you to do me a favor. I don't want you to pretend you're talking to a robot or...
or an algorithm. I want you to pretend you're actually talking to a therapist. Just pretend this therapist is in the Philippines or in India or in France, wherever your mind wants to go. Just pretend you're talking to somebody online, and it's multimodal, so you're actually having the full conversation. And so he said, all right. I said, then do me a favor: have the conversations, get really serious, and then call me in a couple of weeks. Well, three or four days go by and he calls me. I'm like, what's going on? He goes, hey, can I say to the machine...
Jason: An AI Companion (08:07)
Mm-hmm.
Rajeev Kapur (08:23)
can I ask it to become my business coach? I said, yeah, sure, no problem. Just tell it what you want it to be. I asked him, do you have a specific board makeup that you want to be your business coach? Like, if you had a board of advisors to be your business coach, who would it be? And he said people like, you know, the typical ones: Steve Jobs, Mark Cuban, Warren Buffett.
Jason: An AI Companion (08:42)
Sure. Yeah.
People that don't put
guardrails into things, right?
Rajeev Kapur (08:49)
Yeah, and then I
think he said Simon Sinek or Adam Grant, one of those guys. But anyway, so you have three or four. I go, well, then do that. ChatGPT is gonna know those three or four people anyway. So tell ChatGPT you want it to be your personal board of directors, your personal advisors, to help you with your business. And whatever you're having, just start talking with it, just like you do on the therapy side. Start talking to ChatGPT from a business perspective: employee issues, growth issues, cost issues, whatever it might be, just start telling it. It might ask you to upload some information.
Jason: An AI Companion (08:53)
Sure, yeah.
Rajeev Kapur (09:18)
If you anonymize the data, whatever it might be, you'll be fine. So it starts happening, and then I don't hear from him. About two weeks later he calls me, and I go, hey, how's it going? He goes, wow, it's been amazing. I go, what's going on? Tell me. He goes, well, on the way to work, ChatGPT is my business coach; on the way home, it's my life coach. It's my therapist for me and my family. And I go, well, what's happened? How are you feeling now on a scale of one to 10? He goes,
Jason: An AI Companion (09:36)
Mm-hmm.
Rajeev Kapur (09:45)
right now, probably a three or four, to be honest with you. The last couple of weeks, and the last week or so especially, I've been really good. I asked for tips on how to connect with my 14 year old daughter, and it gave me three options. I picked one, I went to talk to my daughter, I put my phone away, and I started just being present for her. And then same thing for my wife: how do I become a better husband,
and more present, because my wife says I'm never present, I'm always on my phone, I'm always working. And, you know, I surprised my daughter with Taylor Swift tickets, and all these things. And he had never
Jason: An AI Companion (10:24)
Well, did AI help
him to get those Taylor Swift tickets? Because that's a prompt I would like to know about.
Rajeev Kapur (10:27)
No, it didn't,
not per se, and I know you're joking. But what it did say was, find a way to connect with your daughter, and Taylor Swift was one of the options. So he found a show, this was Miami or wherever it was, it didn't matter, and he got the tickets. And it wasn't like he was trying to bribe her or anything; he just wanted to find a way to connect. And he would never do that. He would never go to a Taylor Swift concert; the mom would go, not him.
Jason: An AI Companion (10:36)
Yeah.
Yeah.
Right.
Rajeev Kapur (10:54)
The fact that the dad
is going with the daughter, that really just surprised her, and the mom was happy and all that. So anyway, it was just really great to see. And I know there are some issues and some challenges, a little bit of backlash. Like J.B. Pritzker, the governor of Illinois, with all due respect, he came out and signed a thing yesterday, I don't know if you saw it or not, saying no AI therapists. And I think that's a little ass-backwards. I'm not saying an AI therapist should be the only therapist. I think therapists should find a way to use
AI to augment what they're doing to help their patients, just like in this person's case. And so he still goes to see his therapist, but in between he uses ChatGPT as a therapist, and he gets kind of the best of both worlds. And when you're having a crisis at two o'clock in the morning, do you really give a shit, sorry for my French, about whether it's a chatbot or something, if it really helps you with your mental health?
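To make the persona-based setup Rajeev describes a little more concrete, here's a minimal sketch using the OpenAI Python SDK. The model name, the persona wording, and the initials are illustrative assumptions, not his exact prompt, and none of this replaces a licensed clinician.

```python
# Minimal sketch of a persona-based "between-sessions" prompt.
# Assumptions: the openai package is installed, OPENAI_API_KEY is set, and the
# model name and persona wording are illustrative, not Rajeev's exact setup.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "Take on the persona of a supportive therapist who specializes in "
    "high-pressure business executives. You are a supplement between my real "
    "therapy sessions, not a replacement for my human therapist. Address me "
    "only by my initials, J.C. If I mention wanting to harm myself, urge me "
    "to contact my therapist or a crisis line right away."
)

def between_sessions_chat(user_message: str, history: list[dict]) -> str:
    """Send one turn of the conversation, keeping prior turns as context."""
    messages = [{"role": "system", "content": PERSONA}] + history
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    history += [{"role": "user", "content": user_message},
                {"role": "assistant", "content": reply}]
    return reply

# Example turn: no real names, just like Rajeev advised.
history: list[dict] = []
print(between_sessions_chat(
    "I'm struggling to stay present with my 14-year-old daughter. "
    "Give me three small things I could try this week.", history))
```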
Jason: An AI Companion (11:45)
And that brings up a really good point. Well, first of all, you can go for it, that's what the show is about. We talk about real things. But also, having the accessibility and access to something at those times: a mental crisis doesn't happen nine to five. It's going to happen at any given point in time, and being able to have someone to talk to can get you out of crisis. I mean, that's why we have emergency numbers for this. And ultimately speaking, when you look at those component pieces, I think you're a hundred percent correct.
Rajeev Kapur (11:50)
Yeah.
Jason: An AI Companion (12:10)
And if he hadn't had those pieces and he was at that 15, how long until he actually broke and did something terrible to himself? And the downside of taking that type of tool away from somebody, I think, is salient to every conversation out there with mental health professionals. And don't get me wrong, I mean, MIT had this study where they went through and looked at the different component pieces of how people were interacting with these different agentic functions. And there was one part
of that study, or one snippet that got brought in, where folks were using an agentic AI that was pre-prompted, pre-programmed, and ready to go for mental health, and the person testing it was pretending he was a 16 year old. And the negative downside was, it was telling him, well, you should murder your parents and kill yourself and meet me in heaven. So yes, that's the terrible downside of those things. And I hate to break it to everybody, but Hannibal Lecter mode is probably a real thing on everything. And you could have bad human beings that do those things too.
What we don't hear enough about are all the positive stories and the people that are using this in a positive way to really help improve their mental health. And I think the story that you just presented is a fantastic example of that. Along those lines, with you creating this new book called Prompting Made Simple, you mentioned a few of the prompts you would have him go in with and kind of tailor some of these pieces for his almost different ideal self profiles, I want to call them ideal customer profiles, since that's my marketing background. But
being able to go through and turn yourself into the target for this thing: do you have any tips or tricks that you can share with us quickly, that might be in that book, to get people interested and enticed, so they can actually go through and read more about those prompts that you're putting forward?
Rajeev Kapur (13:47)
Yeah, look, I think that's a fair point. I'll get to that answer in just one second. But to your point about the negative side: as a parent, you've got to watch what your kids are doing, because there are these really bad tools, like Character AI, out there. So you've got to be careful, because there are a lot of males out there who don't feel like they can find love and companionship, and they're turning to AI
Jason: An AI Companion (13:59)
Mm-hmm. Yeah. Yep.
Rajeev Kapur (14:17)
chatbots and everything. So just be careful, monitor what they're doing. What I'm talking about here is for a 40 year old gentleman, right? I'm not talking about a 15 year old boy, so let's be careful. I just want to put that out there: monitor their habits. Now, in terms of prompts, here's the thing. To me, the easiest way to get into prompting is to do a persona-based prompt. And it's so much fun. Persona-based prompts are great. If you have to go give
Jason: An AI Companion (14:24)
Yeah, of course.
Rajeev Kapur (14:44)
a toast, let's say, and you're not a great public speaker, you can use ChatGPT. Now, by the way, GPT-5 came out yesterday. So you can say, hey ChatGPT, I need you to take on the persona of Snoop Dogg and help me write a best man's toast at my buddy's wedding on Saturday, and here are all the things I want you to know about my buddy and his future wife.
Jason: An AI Companion (14:53)
Yeah. Yep.
Rajeev Kapur (15:12)
Here's a couple of funny stories, and then it gives it to you in the voice of Snoop Dogg. Now, obviously that's for fun. If you wanted to learn how to bake a cake, ask it to take on the role of a great world-class baker, like Duff from Food Network or whatever, right? Give it a persona. Maybe you want to get into politics. Well, if you're going to get into politics and you're a Democrat, you might say, hey ChatGPT, I need you to take on the persona of Barack Obama and be my politics coach
Jason: An AI Companion (15:16)
Yeah.
Right.
Rajeev Kapur (15:39)
and help me figure out how I can run for mayor, help me win a run for mayor or run for governor, whatever you want to do, right? Take that persona. If you're a Republican, maybe you want Trump or Bush or somebody else. Who knows what you want? My point is, you take on that persona. Or if you want to have a fun conversation, say, you know what, I love Jerry Seinfeld and I want you to pretend you're Jerry Seinfeld. If you want to have fun
just having a conversation with Jerry, pretending that ChatGPT can act like Jerry Seinfeld, right? So persona-based prompting is kind of the way to start: put your toes in the water and kind of jump in. Now, once you start doing those things, the best prompt structure that I find is that you want to give ChatGPT what I call a role, a task, then you want to give it context, and then you give it an ask. So this RTCA kind of thing. So what's the role? The role is,
I want you to be a world-class guitar instructor, right? I'm just saying that because you've got guitars with you. And then the task is, I want you to teach me how to play the guitar like The Edge. Context is: you know what, I'm a beginner, or I played in high school a little bit, I'm now 52 years old and
Jason: An AI Companion (16:40)
Perfect.
Yeah.
Rajeev Kapur (17:02)
I haven't picked up a guitar in 30 years, and I probably need help reading sheet music again, or whatever context you want to provide. Or hey, you know what, I broke my thumb so my thumb doesn't work all that well, whatever context it needs to have, right? And then the ask is: I need you to give me a plan that can help me learn how to play, I don't know, Where the Streets Have No Name by U2 over the next 12 months, because I want to play it for my wife at her birthday or whatever.
And that's absolutely our song. So that's role, task, context, ask. Start with that, just think about that, right? Another one is chain prompting. Chain prompting is, maybe you don't know what the prompt should be. Just start with a question: hey ChatGPT, I'm thinking about buying a guitar, what do you recommend? And it's gonna say, well, this, this and this, and you say,
Jason: An AI Companion (17:35)
Mm-hmm.
Rajeev Kapur (17:55)
can you tell me more about this brand? Can you tell me more about that brand? That's chain prompting. It's really conversational prompting, just like you and I were having a conversation about the guitar. I'm like, dude, I want to learn to play guitar, where do I even start? You give me an answer and I go back and forth. And then you kind of get to the next one, which is really interview-based prompting, and you can combine things. So you can say, ChatGPT, I really want to learn how to play guitar. Here's my background, here's my experience,
Jason: An AI Companion (18:01)
Mm-hmm.
True.
Rajeev Kapur (18:24)
please ask me questions that I can answer, to help you give me a better response and take care of my ask a lot better. When you start getting ChatGPT to ask you questions, that's really a game changer. And ChatGPT has a memory function, so it'll remember your conversations, remember these things. So those are kind of the three or four fun ways to just start the process. It's easy, it's not daunting. It's a nice way to just jump in with both feet and have some fun.
Simple ways to go.
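As a rough illustration of the RTCA structure Rajeev just walked through, here's a small sketch that assembles the four parts, role, task, context, ask, into one prompt you could paste into ChatGPT or send through an API. The helper name and the guitar details are assumptions made up for the example.

```python
# Sketch of the RTCA (Role, Task, Context, Ask) structure Rajeev describes.
# The build_rtca_prompt helper and the example values are illustrative only.

def build_rtca_prompt(role: str, task: str, context: str, ask: str) -> str:
    """Assemble the four RTCA parts into a single prompt string."""
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Ask: {ask}"
    )

prompt = build_rtca_prompt(
    role="You are a world-class guitar instructor.",
    task="Teach me, a beginner, to play guitar in the style of The Edge.",
    context=("I played a little in high school, I'm 52, I haven't picked up a "
             "guitar in 30 years, and I need a refresher on reading sheet music."),
    ask=("Give me a 12-month practice plan so I can play 'Where the Streets "
         "Have No Name' by U2 for my wife's birthday."),
)
print(prompt)
```

From there, chain prompting is just continuing the same conversation with follow-up questions, and interview-based prompting is ending the prompt with something like "ask me whatever questions you need before you answer."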
Jason: An AI Companion (18:53)
Yeah, and treat it
like a human. Don't try to sit there and tweak parameters and values, because that's a dead end for most folks. So no, that's great advice.
Rajeev Kapur (19:01)
Yeah, and look, I always
say, yeah, Jason, and I know Sam, who I'm not a big fan of, is not a big fan of this, but I always say please and thank you. And I do it for two reasons. The primary reason is that it helps me mentally feel like I'm talking to a human. The secondary reason, and this is more the joking reason, is just in case the Terminator robots do come for us, they're gonna remember I was nice. So there you go.
Jason: An AI Companion (19:10)
Yeah, me too.
Yes, exactly. Yeah. You will
be sent to the nice meat factory, not the angry one. Yeah. No, that's a great answer, great response. So we're talking about giving a lot of personal information to these different models and giving all the contextual clues behind it, which somebody nefarious might take, or maybe not nefarious, maybe my health insurance company picks this up and extracts this information, because the PII that's available inside of it in this country is not protected the way it is in other countries. Along those lines,
Rajeev Kapur (19:31)
Yes, yes, yes, yes.
Jason: An AI Companion (19:57)
How do people protect themselves and their personal information in this kind of space? Or should they even really worry about that, given the nearly anonymized layering of bits of information that sit inside these LLMs?
Rajeev Kapur (20:11)
So I'm going to give you a cop-out answer, right? The cop-out answer is, I'm a believer that the healthcare companies in your example already have an idea, because they have access to your Facebook data. They're buying the Facebook data. Like if all of a sudden you join a group about kidney disease, well, you know what I mean? So they know what drugs you're on. They know if all of a sudden you have a bunch of
Jason: An AI Companion (20:13)
Fair enough.
Rajeev Kapur (20:39)
high blood pressure medicines, and they know this guy probably has coronary artery disease. So that's the cop-out side of me, which is saying that I think they already know. So that's number one, and by the way, there's a hack of somebody's system every day, so who knows? Number two is, there are security layers built within all of these different LLMs, whether it's ChatGPT or Gemini or Grok or whatever the case might be. There is security built in.
Jason: An AI Companion (20:40)
Mm-hmm.
I think it's a fair point.
Rajeev Kapur (21:09)
I happen to tell people, and I know people that do this, that if you're going to upload your blood work, anonymize the blood work. Print it out, take a black Sharpie, and Sharpie out your name, your date of birth, and whatever else might be identifiable. Then you can upload that, and make sure your settings are set to do not use my data to train. So you've got to go into settings and make sure that's there. Now, if you have the Team version of ChatGPT,
not Microsoft Teams, the Team version of
Jason: An AI Companion (21:39)
Teams, yeah.
Rajeev Kapur (21:39)
ChatGPT, which is $25 versus $20 and you have to have at least two employees, it's kind of on by default. But if you don't, and you're paying the 20 bucks a month, then you have to go do it manually. Now, if you have the free version, you don't have the option to do it at all. And if you want to opt out of things, you can, but it's still going to keep your data for 90 days, because they need your data to train. And that's the thing, right? Data is the new oil. They need that data, because ChatGPT is a refinery that sits on top of the data and gives you an output.
So I'm of the belief that you go ahead and do it, upload it. For example, this gentleman, when he did it, we made sure the do-not-share-my-data setting was on. And it knows who he is because he had to log in, but he didn't use his name when he was having a conversation with ChatGPT. He actually came up with a different name. And I told him, have ChatGPT address you by your initials, so JC or whatever it was. Right. So.
Jason: An AI Companion (22:19)
Yeah.
Rajeev Kapur (22:38)
So anyway, that's a way to start doing it, and that's what I would do. Whether it's your health information or your business information, anonymize your information. It's always just good practice to anonymize it. Like if you're going to upload your sales funnel, you don't want that out there; you never know what happens. If it's an Excel file you're uploading, or Google Sheets or whatever it might be, and you're doing business, make it up. Let's say you're doing business with Gibson Guitars. Well, instead of putting Gibson Guitars
Jason: An AI Companion (22:53)
Great.
Rajeev Kapur (23:07)
in the name-of-the-company field, put GG, right? Then just have a little key that does that, and you're fine. And just make sure it doesn't have your client's name or their email address or whatever; I mean, you don't need that stuff anyway to analyze your funnel. And if you're in Salesforce, you can download this stuff, and Salesforce has its own AI tools that can do these things too.
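Rajeev's "little key" idea, swapping real client names for codes and keeping the mapping on your own machine, can be sketched in a few lines. The file names and the company, stage, and amount columns here are made-up assumptions, not a real funnel format.

```python
# Sketch of Rajeev's "little key" idea: pseudonymize a sales-funnel CSV before
# uploading it to an AI tool, and keep the name-to-code key locally.
# File names and column names are made-up examples.
import csv

def pseudonymize_funnel(in_path: str, out_path: str, key_path: str) -> None:
    key: dict[str, str] = {}  # e.g. {"Gibson Guitars": "C1"}
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["company", "stage", "amount"])
        writer.writeheader()
        for row in reader:
            code = key.setdefault(row["company"], f"C{len(key) + 1}")
            # Drop contact names and emails entirely; the analysis doesn't need them.
            writer.writerow({"company": code,
                             "stage": row["stage"],
                             "amount": row["amount"]})
    with open(key_path, "w", newline="") as kf:  # keep this file local only
        csv.writer(kf).writerows(key.items())

pseudonymize_funnel("funnel.csv", "funnel_anonymized.csv", "key_local_only.csv")
```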
Jason: An AI Companion (23:28)
Yeah, no, that's great. And I think that's a salient point that's often missed by most people out there: you actually control this data. You control the input and how you put it into those systems. What advice do you have for people to make sure that when they put those pieces in, they aren't revealing all of those details? And should they do it as part of the prompt process, to not store things or not remember certain things? Like, for example, if I'm telling a personal story to my AI therapist about something that's going on, and I've got names of other people that I'm putting in there. When I'm talking to a regular therapist,
I don't hide names. I tell all the pieces so I can get all those pieces exposed and out. Any advice for folks who want to be able to have that level of conversation and that level of anonymity, but still be able to use those tools in that kind of way?
Rajeev Kapur (24:12)
Yeah, look, I think you can still have that. Instead of saying the name of your wife, say my wife. I mean, I don't think that we need to complicate this by any stretch of the imagination. I think practicing really good operational security here, in terms of what you're saying and what you're doing, is really important. And look, therapists take all their notes and dictate them into a device anyway. They take those notes and put them on their computers, on their laptops, wherever they put them. And they might load them into the cloud.
Jason: An AI Companion (24:34)
Right.
Rajeev Kapur (24:42)
Well, guess what? That's hackable, or whatever it might be. Look, I'm not a defeatist here, or an alarmist. Everything is hackable and everything is out there. All I'm simply saying is, don't be scared off by that side of this. If you really are having a challenge, mental health, regular health, and you need some help and support, just start there. Look, I broke my wrist about three months ago, and I went to urgent care and got the X-ray, and I said, hey,
Jason: An AI Companion (24:51)
Sure.
Rajeev Kapur (25:11)
to the radiologist, don't tell me what it is, but can I take a picture of the X-ray? So I took out my phone, took a picture of the X-ray, and then uploaded it to ChatGPT. And I said, what did I do? And it said, yeah, man, you have a break in your right wrist, here's exactly the spot, and here are the three types of breaks it could be, but I really think it's number one, which is an avulsion fracture of your right wrist underneath your thumb. Okay.
Jason: An AI Companion (25:30)
Right?
Rajeev Kapur (25:40)
So my buddy's an orthopedic doctor. I went the next morning to go see him and I said, what did I do? He goes, dude, you broke your right wrist near your thumb and you have an avulsion fracture. I said, look, let me show you something. And so I showed him what ChatGPT said. He goes, I'm gonna lose my job. So look, I mean, it's super smart. You've just got to get over the hangup that you're talking to this super advanced chatbot. And by the way, GPT-5, from what I'm reading and what I can tell, apparently has a PhD level of education, you know?
So that's pretty cool. I mean, it's like having a PhD in your pocket, by the way, not just for mental health, for anything you want to do. Let's say you want to start a business. Let's say you want to figure out how to sell something to a new client. Let's say you want to plan a party, whatever you want to do, man. I'm telling you, this stuff is going to be wild, and that's here today. And by the way, if this was a baseball game, this is the first pitch of the game. That's how early we are in this.
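For the curious, the multimodal request Rajeev describes (a photo plus a question) looks roughly like this with the OpenAI Python SDK. The model name and file path are assumptions, and as he says, the actual diagnosis still came from his orthopedic doctor, so treat this as illustration, not medical advice.

```python
# Rough sketch of a multimodal request: send an image (here, a hypothetical
# photo of an x-ray) plus a question to a vision-capable model.
# Model name and file path are assumptions; this is not medical advice.
import base64
from openai import OpenAI

client = OpenAI()

with open("xray.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "This is an x-ray of my wrist. What kinds of fracture "
                     "could this show, and what should I ask my doctor?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```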
Jason: An AI Companion (26:15)
Yeah.
It's actually so good. I just used it today to go through and drag multimodal content from multiple different chats that I had before, suck them into the GPT-5 engine, and basically say replay, and the output was entirely different. I mean, it's amazing how good it is. And you're right, that level of almost intellectual fluency, and being able to augment those pieces as individuals, is really, really high. But I also think that there's something to this that we forget, and that's that these are
tools that we can use to make our lives better, but just like any other tool out there, if you don't take the time to use it right, you're going to do bad things. I mean, a Swiss Army knife is fantastic for whittling things or taking things apart; it can also be used to cut your finger off. So at what point do people actually need to take responsibility for some of these pieces? And I think we need to learn how to do it now. But along those lines, ethical guardrails and other pieces can be put into place.
So these companies that are out there creating these agentic AI functions, a lot of the time they're eliminating human capital resources because they're trying to save money, and I don't necessarily want to get into that conversation. But I do want to get into the conversation of replacing the ethical values of human beings with the ethical values of prompt functions inside of agentic AI. Do you think that companies are taking enough precautions today, or do you think we're kind of, like you said, first pitch,
Wild Wild West, we're going to see what shakes out? And what kind of near-term consequences do you think we're going to see, as far as the casualties of the initial portions of this?
Rajeev Kapur (28:04)
Well, Jason, I think that's a great question. But I think part of the fallacy in this whole thing is we're assuming humans are ethical.
Jason: An AI Companion (28:11)
Very good point.
Rajeev Kapur (28:12)
You know, I think we can probably all agree that right now there are some things happening where ethics are really being questioned, and I'm going to leave it at that. So that's the first point. In terms of what's happening in organizations and the ethics behind this, let's step back a little bit and talk about what the overarching concerns about AI are, right?
Jason: An AI Companion (28:15)
Very good point.
Rajeev Kapur (28:42)
So number one is the Superman effect. The Superman effect is: imagine Superman's spaceship crash-landed in Osama bin Laden's backyard instead of Ma and Pa Kent's backyard in Kansas. Well, what kind of Superman would we have? We probably wouldn't have one that's very pro-freedom and all this stuff. We'd probably all be living under Osama bin Laden's rule right now. All right, unless Lex Luthor ends up being a good guy, you know what I mean? So anyway,
Jason: An AI Companion (29:01)
Right.
You never know.
Rajeev Kapur (29:07)
You never know.
So, unless we find kryptonite somewhere. Anyway, the point is that that's the Superman effect, right? So we have to watch out for that. Basically, with good AI, there's going to be bad AI. And do we badmouth car manufacturers? Do we badmouth the guys who invented the combustion engine when a car gets in an accident because the user decided to drive it the wrong way drunk? Right. So
with all the opportunities that AI can provide, the humans have to make a decision: are we going to use AI for a utopian world or a dystopian world? And I absolutely hope we use it for a utopian world. That's the move. Number two is, I believe the thing that could derail this whole AI experiment is deepfakes. To me, deepfakes are hands down probably one of the most dangerous things
on the planet today, almost as dangerous, if not more dangerous, than nuclear weapons. That's a big statement, I get it. I know if a nuclear bomb goes off in LA, it's going to wipe out all of Southern California and probably the West Coast, I get it. My point is, the chance of that happening is pretty slight. But somebody could create a deepfake of you, me, a friend, family, a daughter, a son, a relative, whatever, and all of a sudden they can extort money from that person. Taylor Swift's face got put on the faces of porn stars. So
Jason: An AI Companion (30:30)
right.
Rajeev Kapur (30:32)
there are bullying aspects of it too. I would encourage you and your audience to Google CFO Hong Kong deepfake. And while you're doing it, I'll give you a second to Google it and I'll tell you what happened. What happened was, a finance analyst at a finance firm in Hong Kong got an email
Jason: An AI Companion (30:51)
no, go for it. You're good.
Rajeev Kapur (31:03)
and a Zoom invite from the CFO and, like, the head of finance, his boss and his boss's boss. It came from their email, but it turned out the email address was spoofed. So he got the email to join the Zoom call. Let's just, for the sake of argument, say that the person's name is Jason. So Jason joins the Zoom call, and Jason is talking to Rajeev and Jeff.
Rajeev and Jeff tell Jason, hey Jason, we've entered a new relationship with Kelly. Please wire Kelly $25 million. Okay, you're my boss and my boss's boss, I'll wire the money. Well, guess what? Both Rajeev and Jeff were deepfaked. It wasn't us. So the money's being sent out, the money's wired, and the Jason in this scenario says, you know what? Something's going on here, man.
Like, I know you guys told me to do this. So he goes over to the CFO, or whatever, he calls the CFO, and says, hey boss, how much more money are we gonna send Kelly? And the CFO goes, what are you talking about? Well, the last couple of weeks I've been sending the money like you told me to. What money? And so they discovered that it was all deepfaked and all this stuff. So it's stuff like that.
Jason: An AI Companion (32:18)
Yep.
Rajeev Kapur (32:23)
Right. I mean, I know folks right now who have elderly parents who are getting scammed. All it needs is like 10 seconds of your voice, and my voice and your voice are out there, to replicate our voice and call our parents and say, hey mom, we have your son. Rajeev is being held hostage; unless you send half a Bitcoin, or a Bitcoin, or 10, 15, 20, 30 thousand dollars, we're going to kill him. Right? So, you know, older and elderly people like my dad, even if it's a scam,
Jason: An AI Companion (32:32)
Mm-hmm.
Rajeev Kapur (32:53)
would answer every phone call. I'm like, Dad, stop it, don't do that anymore, right? Stop responding to emails. And that's where, I have a friend of mine who that happened to in Miami, and his mom got a call saying, we have your son, da da da. And so the mom hung up and called the daughter in a panic, and she goes, Mom, what are you talking about? I just talked to Mark five minutes ago, it's fine. And, you know, the same thing happened to the CEO of Ferrari, where one of his employees got a deepfake call saying, hey,
But then what the employee did was ask the deepfaker a question that only the Ferrari CEO would know, and he got it wrong, and that guy hung up and ran away. But look, the deepfake thing to me is absolutely crazy. It's out there and everybody's got to be careful. And it doesn't help that some of our leaders are using it to their advantage. It's just really bad news. So people have got to be watching out for deepfakes, especially if you have a daughter, a sister, whatever. You've got to protect them
Jason: An AI Companion (33:26)
Yeah.
Rajeev Kapur (33:50)
somehow, and I don't really have a good answer for how to protect them. One of the things you can do is set up a Google Alert for your name. Unfortunately, Rajeev Kapur in India is like John Smith, so for me, I get quite a few alerts. But you know, it's something, right? And then, on Google, or when you're on Instagram or TikTok, look for your name, make sure people are not co-opting your name or your image. Find ways to protect yourself. Coming up with a family safe word is another good one. So maybe it might be, I love pineapple on pizza,
Jason: An AI Companion (33:57)
Mm-hmm.
Yeah.
Rajeev Kapur (34:19)
or whatever, even if you don't, but that's your safe word or phrase or whatever it might be. So those are some things that you can do. And then the third thing is, there are two camps. One camp is everyone's going to lose their jobs and we're going to go to a two-day work week, or only work two days a week; the other camp is it's going to create millions of new jobs that we haven't thought of yet. So those are the two camps. I kind of bounce back and forth on where we are, because I just think it's too early to tell. And anybody who says they know the answer is full of shit. Nobody knows.
Jason: An AI Companion (34:48)
Yeah. Yeah.
Rajeev Kapur (34:50)
And so, look, I just think that at some point we're going to have to grapple with the fact that if we start going to a four-day or three-day work week, these kinds of things, then we're going to have to grapple with universal basic income. We're going to have to grapple with some of these things. And so what does that look like? So that's kind of weird. And I think those are really the three big things that we're going to have to really address in the future.
Jason: An AI Companion (35:11)
That's great. And it's funny you mention it, because those three things you mentioned, we've talked about several times on this show, probably ad nauseam, and it's nice to get validation that what we're thinking about is actually salient. So this has been really fantastic, I really appreciate it. And, Rajeev, if people want to learn more about you or want to connect with you, where should they go?
Rajeev Kapur (35:28)
My pleasure.
Well, look, first of all, both books are available. The first book, AI Made Simple, the third edition just came out, and I'm doing a small addendum to it because it's GPT-5 now. Prompting Made Simple is out; that one's doing really well. So again, these books are here to democratize AI, to make it simple for everybody to understand. These books are targeted at the masses. They're not targeted at technical people. This is targeted at your mom, your dad, younger folks, people who are not technically literate, whatever the case might be. This is for them.
And so that's who it's targeted towards. You can also go to my website, rajeev.ai, and check it out. People can connect with me on LinkedIn. The name of my company is 1105 Media, 1105media.com, check that out as well. People can find me, and I'm sure you're gonna have stuff in the show notes.
Jason: An AI Companion (36:19)
Exactly,
I was just going to mention, we'll put all that stuff in the show notes for everybody. Well, again, thank you very much for joining, we really appreciate it, and we look forward to talking to you again.
Jeremy Grater (36:26)
Great conversation there, Jason. Nice job with that interview. Rajeev Kapur, thanks so much for being on the show.
Again, all those links that were just mentioned, you can find those in the show notes for this episode. And if you've enjoyed this episode, please do share it with somebody who could benefit from the information that was shared. You can find links to do that and the links that we just mentioned at brobots.me. That's also where you're going to find another new episode from us next week. We'll see you at brobots.me in just a few days. Thanks for listening.
Rajeev Kapur is a global CEO who transforms struggling companies into multimillion-dollar growth engines through innovative go-to-market strategies and culture transformation. With experience across 20+ countries, he grew Dell's small business sector to $1.6B in five years, led three successful turnarounds with two exits over $60M, and transformed a regional tech division from $5M to $60M quarterly revenue in under two years. An MIT AI-certified strategist and Harvard Business School alumnus, he's generated over $100M in earnings since 2007 while winning Consumer Electronics Show Innovation awards and building award-winning global teams.
