The Real Risk of Trusting AI With Your Health Decisions


The internet taught everyone to self-diagnose. AI made it faster, more persuasive, and significantly more dangerous.
Dr. Ajit Barron-Dhillon — ER physician, military veteran, and someone who has watched patients demand MRIs for minor complaints because 'the internet said so' — joins Jason to talk about what AI-assisted health research actually does to people who think they're being smart about it.
The conversation covers confirmation bias in clinical settings, supplement stacks optimized by ChatGPT, the cheerleader problem in medical AI, and why above-average intelligence with these tools may make you more vulnerable, not less. If you use AI or Google to research your health, this conversation is specifically for you.
Topics Discussed
- Why AI self-diagnosis is dangerous specifically for informed, health-conscious people
- What ER physicians are actually seeing when patients arrive with internet-sourced diagnoses
- How confirmation bias turns AI research into an expensive form of being wrong
- When AI-assisted supplement optimization is useful — and when it's not
- Why peer-reviewed research and AI training data are not the same thing
- What a responsible approach to AI health research actually looks like
Chapters
- 0:00 — Jeremy's Intro: Sick and Googling While Hosting an AI Health Episode
- 1:17 — Kids Unplugging: Why In-Person Dating Is the New Counterculture
- 2:40 — The No-Wi-Fi Coffee Shop and What the Internet Can't Tell You
- 9:47 — I Let ChatGPT Optimize My Supplement Stack. Here's What Happened.
- 11:59 — The Telemedicine Loophole: AI + Social Engineering for Prescriptions
- 14:25 — Why Your Doctor Doesn't Know What You're Supplementing
- 20:16 — NIH PubMed Is Being Scrubbed — and Why That Matters
- 28:40 — She's Not Fighting Logic. She's Fighting Belief.
- 32:58 — Star Trek, Dr. McCoy, and the Tricorder We're Almost Building
- 37:11 — What a PubMed-Only AI Would Actually Look Like
- 44:58 — The Tool Gets You 80% There. The Human Closes the Gap.
Jeremy Grater: All right, that was our good friend, Dr. Ajit Barron-Dhillon. We can always count on him to share insane stories and colorful analogies. We hope you've enjoyed that conversation. If you did, please share it with somebody. You can do that with the links at our website, BroBots.me. And that's where you're going to find a brand new episode from us next and every Monday morning. So make sure you subscribe. Don't miss it. BroBots.me. We'll see you there, or on YouTube if you want to look at our faces while we talk for some reason. The internet taught everyone to self-diagnose their medical conditions. AI made it faster, more persuasive, and significantly more dangerous. Today you'll hear from one of our oldest friends. Dr. Ajit Barron-Dhillon is a physician, military veteran, and someone who has watched patients demand MRIs for minor complaints because the internet said so. We'll talk about what AI-assisted health research actually does to people who think they're being smart about it, AI-optimized supplement stacks, and why being more intelligent than others with these tools may make you more vulnerable, not less. Welcome to BroBots. This is the podcast that tries to help you be a better human by being smarter about how you use technology. It's kind of ironic that Ajit and Jason in this conversation are talking about using the internet to diagnose health conditions; I had to sit out this conversation because I was sick and frantically Googling how to get better. You're going to find out what's going on in the doctor's mind when you come to them with the research you've done at your appointment with Dr. Claude, and some of the crazy requests patients have made based on that research.
But their conversation begins somewhere that's near and dear to my heart, and it's how kids are interacting with the internet, with social media, and how there is a growing push as we get more technologically advanced and more plugged in to try to unplug and reconnect with each other in reality, in real life, and with nature.
Jason Haworth: It seems like there are smaller things that we could and should be doing on a more frequent basis. And one of those big things that people are trying to push now is disconnecting from your computer and social media. There's a wide-ranging set of debates right now in different states where people are going through and trying to enforce certain regulations, and other countries are doing this too, to try to prevent kids from being so obsessed with and absorbed by social media. And there's a whole movement among younger kids these days to disconnect from it and never join up to begin with. Apparently dating live, in person, and actually talking to people is like a cool new hip thing for kids, because they no longer want to use these digital tools to kind of make those pieces work. As someone in the medical profession, as someone that has kind of drifted between multiple different careers and specialties, trying different things, plus your past military experience, you've got a large battery of experience that other people would never see. And I think it gives you a unique perspective along these lines, of human beings interfacing with technology at the rate that we are and really augmenting our reality. Can you talk briefly about what you think that does to us as people at various different stages in our life? Like, you know, kind of start with kids and then talk about how it affects us old fuckers like we are now, where we're trying to deal with shit and trying to figure out how to not be obsoleted by our robot overlords.
Ajit: That is such a broad, awesome question, with multiple levels, that I would love to answer. But the one thing I was talking about, and yes, you're not gonna hold my hand and remind me, you know, there's a lot of stuff in there, but it reminds me of something I saw one time. It was really cool. It was only just a few years ago, a year or two before the pandemic. And it was a coffee shop.
Jason Haworth: I wanted to give you riff room.
Ajit: in the Midwest. I don't know where I was. I think I was outside of Chicago or something. But there was this really cool coffee house, and it looked like, you know, what we would have had as a coffee house in Seattle in the 90s. Okay? Something out of a Friends episode, right? And it was so cool. And I walk in, and there was a chalkboard saying: we don't have any Wi-Fi, so don't ask if we have any Wi-Fi. Why don't you pretend it's the 90s and talk to someone? Exactly. So that's my thought on it. It's very interesting. With some of the stuff I've done in my life, you know, I've lived in Europe, there's medicine, there's the military. A lot of the things that I've done, and this is where the internet comes in, it's a really funny thing. The younger generation, if I sit there and talk about it, unless I can show them, you know,
Jason Haworth: da
Ajit: you know, real-life photos or, you know, degrees and certificates. Okay, when I talk about my past, the immediate thing the younger generation wants to do is start Googling it and see if any of it is real. They Google everything. And so if they can't find it on the internet, or if there's something skewed, immediately... I mean, I've been called a liar plenty of times. Like, there's no way you've done that. There's no way, recently, that you were really in Antarctica. Like, yeah, dude. And just because I got to do it and they couldn't. Yeah, the first thing people look for is like a flight record. Like, no, there's no flight record; you gotta go to a different country and you get on a boat to get down there. But it creates a lot of misinformation, and it creates sides. I mean, we've seen it obviously in the political atmosphere, right? If you Google something or if you search something, you're going to find two sides of that coin. So just because you can't find some information, or you find false information, that kind of makes it very, very difficult. So, for example, when I came back from Ukraine, I'm talking about the stuff that we had to do, and there were a lot of eyebrows raised. Like, one of the things we had to do was turn off our cell phones. You're thinking, why the hell would you want to turn off your cell phones? Well, the Russians have gotten really good at, you know, picking up the transmissions that just happen from cell phones. But about six months after I got back, the New York Times did an article about this and what you have to do. So I was like, okay, so now you believe me, right? Because you got something published there. But there's so much stuff that isn't published.
A lot of the atrocities that happened over there, and unfortunately some of the things that I saw that I still think about to this day, are not on there. There might be some mention here and there.
Jason Haworth: Yeah.
Ajit: But the internet is only as good as how honestly a human is going to put something on there. And it's only as good as what you put on there. And a lot of what happens on this planet does not go on the internet. What's on the internet is just what's on the internet. And it really destroys a lot of things that we do. Like, I mean, there's things in medicine where it's like, hey, try this, because we've
Jason Haworth: Yeah. Mm-hmm. Yeah.
Ajit: actually witnessed good research, you know, like solid research that can back up this particular component that you can add to someone's therapy. And if they can't find it online, or it's inconclusive, they assume it's probably a bad thing to do. And this is something that a physician has just recently been a part of. Hey, you know, it's part of an experimental study; I think this is going to really help. And they'll be like, no, I'd rather take ivermectin and give it to my four-year-old or something. There's a lack of ability to interpret what you're looking up correctly, because you're just not experienced in certain aspects of what's being presented. And when that happens, even to the most astute internet researcher, when you're looking up information to understand what the other information is trying to say, you have to have a proper ability to put that together. And if you're doing that on your own, that's foolish. Some people can do it really well. But that's also banking on whatever information you're finding being truthful or relevant or helpful, really, because there's so much disinformation out there.
Jason Haworth: Well, it's interesting. Like, I'm on Facebook and I've trained my feed to pretty much give me dog videos, cat videos, silly things that I would like and enjoy. But it likes to inject things that are targeted towards men of my age. Like, clearly I need hair plugs. As we can all see on video, I don't need fucking hair plugs; I have too much hair on my skull. But it also thinks I need testosterone supplements, which I probably do.
Ajit: Okay, as we all do.
Jason Haworth: It also thinks that I need things because I'm too fat, so I'm constantly getting semaglutide ads, compounding pharmacy ads, and boner pills all the time. All mixed in with these cat videos and dog videos and cute red panda videos, trying to figure out and probe me to keep the machine that is Facebook funded by advertisers. Which is shitty and fucked up and not a great way to run a society where
Ajit: I love it though, man. Jason, I love it, dude. When I get stuff like that saying I need hair plugs, I need erectile dysfunction medication, I need this. They have the best photos and stuff to go along with it, you know?
Jason Haworth: They're pretty good. They're pretty good. Yeah. Like the schvanz-size-increasing stuff. Increase your schvanz by nine inches. Like, wait a minute. Yes. Exactly. This has never worked for anybody, but it will always work for you. And lately I keep getting ads for, like, sex pillows. I'm like, what the f... Okay. Pillows? Okay. Great.
Ajit: Oh, man. We got some of those. They're awesome, dude.
Jason Haworth: No doubt. No doubt. But like, it's just this thing where I'm constantly inundated with these things over and over again. And it's funny, because I'll actually look at those ads that come in for different supplements, and I'll look at the ingredients and be like, all right, well, how many of these things am I taking today? And I'll start looking through these things and I'm like, okay, these guys want $50 for $10 of supplement shit. Like, okay, I get what they're doing, but also, they're not much different than the stack that I take. So I finally took
Ajit: Yeah.
Jason Haworth: took my stack and put it into an Excel spreadsheet. I just took everything that was in my supplement box, which is like 48 different things, and I loaded up what my current regimen is and sent it into ChatGPT, and I'm like, hey, can you tell me how effective a combination stack this is? And I gave it my vitals: I'm a 51-year-old man, here's my exercise routine, as I built all these things in. And, you know, of course it tells me, oh, wow, you're doing a great job, way to take care of your health, blah, blah, blah. And I go, no, no, like, for real. And it's like, okay, so you're taking too many of these things. You're going to have too many signals inside your system, because you're taking some things during the day that you should be taking at night. And I'm reading through this and I'm like, okay, yeah, these are actually legitimate pieces. So I'll start digging through this stuff and start to kind of uncover it. And I'm playing with my supplements, right? Which is not uncommon. It's trial and error. And I've been doing this for a long time, so I know how to record and track these things over a period of time. But I got to the end of it, and it basically cut my load in half by pulling out certain things and reducing things, taking some things during the day and some things at night. And I've been
Ajit: Yeah. Yeah, yeah.
Jason Haworth: doing it for about two weeks now, and I've noticed some slight difference. Like, I sleep a little better, my recovery feels a little lighter. But I have enough experience fucking with this that I feel like I'm confident enough to do this. That being said, I just met with my naturopath today and said, hey, I made some adjustments, because ChatGPT said to make these changes on the stack that we've been working on for this period of time. And he goes, yeah, those are actually not unreasonable. Like, you know, we were waiting for your next blood test to come through, but it's not a bad idea to tune a couple of these things. But, you know, you need to take some control of your health, which I feel like ChatGPT can enable, just like WebMD enabled. But I also feel like I could go off the fucking reservation, because I can order myself ketamine online right now. Like,
Ajit: Yeah.
Jason Haworth: Like, wait a minute, the stuff that's available that I can get is insane. And you can get anything, anywhere, all the time. And it's one thing if it would be like, I need these things to make this stuff work. But I've watched some of these different AI journeys come through, and the AI is like, yeah, you should definitely go get methadone. Like, huh? Okay.
Ajit: Uh huh.
Jason Haworth: Maybe that's not such a great idea. But if you walk into a doctor's office and you tell them ChatGPT says I have this symptom and this thing and I need methadone, or I need Vicodin, they're gonna say, you're fucking crazy, get the hell out of here. But in the time of telemedicine, how long until people start using ChatGPT and avatars to go through and start asking questions the right way to get a physician, and socially engineer them into going, sure, here's some drugs?
Ajit: I don't know. Yeah.
Jason Haworth: Like, that's kind of where I see this going and evolving. It's this tool that has great potential, but if you don't disconnect and look at the real world and actually try to understand things like lab results and blood tests, you're not going to get what you want out of this. And you're likely going to put yourself in real harm. I'm just throwing that out there, and you're welcome to say, yeah, a hundred percent. But also, if I'm wrong about something, please let me know here.
Ajit: I don't think you're wrong about anything. I hear you. It's like this: we rely on technology, we rely on data, we rely on things that, you know, we can trust based on the people that are providing this information. When you look at supplements and things like that, a lot of times the stuff that's bought at natural stores has a little disclaimer saying, hey, if you're pregnant or nursing, which fortunately I've never been, always consult a doctor. Right, right, never consult a nurse; I'm not going to say anything to that. But even when you consult a doctor about certain supplements and things, there's a strong chance the doctor may not know what you're talking about. Okay? It's like, I had no idea. Yeah.
Jason Haworth: You never nursed. You never nursed. I've run into it all the time. Yes!
Ajit: I don't keep up with natural-store supplements. That's part of a whole different thing, which I think falls into different food categories and stuff. What we look for is some evidence of there being peer-reviewed science, peer-reviewed research done on these things. And we also look at cultures. That's another really interesting thing. Like, this culture has been eating this stuff for 25 years, and not one of them has ever experienced, let's say, a really rare kind of cancer. Or maybe they have, you know, a longer life expectancy. And sometimes that's correlated with: if you eat this stuff, you'll be like them. But you're not really correlating any other metrics, like their genetics and other things too. So, yeah, right, exactly. There is merit to some of the stuff, and there's really cool associations, but that kind of stuff doesn't necessarily suggest it's harmful for you.
Jason Haworth: Right. Or like, walking. Maybe their culture walks. While you sit on your ass all day.
Ajit: And it doesn't say it's going to give you amazing benefit either. But there's always that thing saying, well, you know, some research or some associations show that, you know, people who live and eat and do this in this environment will have some level of benefit there. But see, when you're broadcasting a product like an herb supplement, okay, it's done in such a way where you're thinking, well, if I take this, I should be pretty good for the rest of my life, you know? And that's not really true. Now, in some cases, yeah, it actually does help with whatever it is. But the thing is that a lot of physicians aren't really totally familiar with everything out there. Like, for example, for years I worked with a group that provided high-dose cannabis for cancer patients.
Jason Haworth: Yeah.
Ajit: And when you talk about that with your peers who aren't really aware or familiar with that, and I definitely get a lot of flak here in the Bay Area, people who for no reason will just insult me based on the fact that I talk about it, they think that it's witch medicine. But no, there's actually some peer-reviewed stuff. The kind of research that I've seen shows that it actually does have some really neat affinity towards some types of cancer. See, I have to be careful what I say, because I don't want to be quoted directly saying that this will cure this or do that. You have to be very careful about that. But it was enough for me to get involved and really see people's lives change. If you were to Google, hey, does cannabis cure cancer, I hope it would say no. Okay, I really do, which goes completely against what I know of the substance and the different delivery methods that have been shown to do some good positive stuff. But it needs to be done on the same level: the information you get from the internet should be the kind of information that will give you the same confidence, okay, and should elicit exactly the results that you're expecting, whether it be a 1% improvement or a 100% improvement, based on the things that we currently know. And the easiest example I can give is: if you don't drink water at all, or any liquid of any sort, or any food of any sort for a certain period of time, you will die. Okay, that's a given thing. If you don't eat or drink anything for a certain period of time, you will die. Yeah. Okay.
Jason Haworth: Likely of dehydration effects, right?
Ajit: Now, that we can stand on, okay? I don't think we can find information saying if you eat ice cream and only ice cream every single day, you will live a very healthy life. These are extremes, but these are the kinds of things that I'm gonna look for before I do anything to anyone, or more importantly, if anyone does anything to me regarding my life, my health, my anything. And certainly I've taken chances with herbs and supplements, because, hey, I'm reading about it, I'm researching them. If I don't find anything truly conclusive, I'm probably not going to waste the money or the time to do it. But if I see enough, whether it be personal testimonial from people I know or what I'm reading on the internet... but what I'm reading more is, I love going to NIH's PubMed, I think that's what it's called. And I like reading peer-reviewed abstracts. Now, I'm not saying peer-reviewed stuff from, you know, Juan Pepe to, you know, Beaker. I'm looking at actual university-based research. And then when I look at the research, if you know how to read a research paper, you know how to read an abstract: you look at all of the things, or at least you imagine all the things, that are not included in the research, all the variables that have been excluded, not because they wanted to exclude them, just because they weren't put into the study. And so that makes it a little bit more palatable, a little bit easier for me to understand if this might be beneficial or not. If I can't get that, I'm probably gonna forget it and just go with what I know may work. And I know that's a really broad answer that doesn't really speak to the question, but you have to be very careful. So just as much as you're keen to research with AI, you should do the other research, which is not relying on AI, which is relying on your old-school digging. And
that could be asking somebody who is knowledgeable in the subject, or reading things that are definitely published for scientific purposes, like anything in the New England Journal of Medicine. And even things that are published there are just showing a well-done research study that doesn't necessarily prove how it's gonna work with the entire population.
Jason Haworth: Mhm. Yeah, and oddly enough, a lot of that NIH research is going away. They're actively scrubbing articles, and they're changing things and moving pieces around, which is making the source of truth harder to define. And I've done this before. I mean, I don't have a medical degree, and I am capable of reading most of those reports and understanding them if I spend the time to go through and, you know, Google what this fucking word means.
Ajit: I noticed that. Mm-hmm. Yeah, yeah, yeah.
Jason Haworth: Like, I'm able to put it together. But I will say I have taken a few of these articles and put them into ChatGPT and said, can you summarize this for me in a way that I'm going to understand? And it will go through and go, all right, here's the salient bit of information that I think you need to know and understand. And then I can actually start asking it questions based upon that. And it gives me good, human, contextual, readable information. That being said, I'm still not a doctor.
Ajit: Mm-hmm.
Jason Haworth: So I don't know if ChatGPT or Claude or whatever else I'm using is full of shit, because they might very well be. And I could be putting myself at additional risk and exposure. But if I'm talking about buying one bullshit boner supplement over another bullshit boner supplement, it's probably not such a dangerous thing. Although I have read a lot about yohimbe lately, you know, giving people strokes and heart attacks. Yeah, yeah.
Ajit: Yeah, of course. Yohimbe was the answer to Viagra years ago. But almost anything that works through that mechanism, if you understand the mechanism of action, yes, there's a potential for a lot of things. Wow.
Jason Haworth: Right, right. Yes, yeah, I'm gonna create better vasodilation, and suddenly I'm gonna put more blood through things, which is where all your blood clots go. Yay. Catch this, brain. It's interesting, because you're able to, I'm able to, go through and synthesize this. And I'm not the smartest person on the planet, but, you know, I've got a decent college degree and I've got a bit of information on this.
Ajit: Hell yeah. No, no, hold on, Jason. First of all, you're different. I'm sorry, I didn't mean to interrupt like that, but you're different. I know how you do that, but not everyone else does it the way you do it. And no, none of us are the smartest people on the planet, but we're above average compared to a lot of people. And what you're doing is by far more responsible than what some other people would do.
Jason Haworth: Yeah, and I do take the time to run stuff past doctors like yourself or my naturopath. You know, I finally just got myself a primary doctor a couple weeks ago, and I've been using my naturopath for most of my stuff for a very, very long time, because he's an MD and an ND. So, you know, he's got both sides of it. He treats with multiple different medicine types, and that's his whole thing, and I really like him. But to get access to certain specialists or other pieces, I have to go through a primary care physician. Along those lines, I just did a quick check. The share of doctor's appointments that are with primary care or preventative physicians is about 50%. The other 50% are either emergency or urgent care, which means people are going in because something is broken or not working right. And my gut says a lot of these folks going into the ER and the urgent care probably don't have primary doctors, probably don't see a primary physician on a regular basis, and they're just dropping in to do these things. That being the case, I really wonder, and maybe you're not the best person to answer this, maybe Denise is the right one: is she seeing a lot of people coming in going, I really think I have the answer? Or is she seeing a bunch of people coming in saying, this thing told me this is what it is, I need you to tell me if it's right?
Ajit: Um, Denise is the right person to ask, because that's kind of the world she lives in. The answer is she gets both of those. What percentage? I don't know. But funny enough, it varies by population too. So, you know, whether it's here in the Bay Area or over towards the Central Valley, she will get
Jason Haworth: Mm-hmm. Sorry.
Ajit: she will get a population that will do that. She'll also get a population that will come in, and I kid you not, these are the beautiful things of the progression of technology and information and society. Whenever she comes home, or not even when she comes home, when she's there, she'll call me and she'll tell me some really funny stories. She said if it wasn't for me in her life, she would just be an angry person. She would just come home pissed off. She's like, I hate all this stuff. But because she had me, we turned that frown upside down. Like, I remember back in the day, I did some clinical rotations in New York, and it was at a time when HIV really wasn't manageable. Believe it or not, it's a very manageable thing right now, but it costs money and you have to be very observant. But back then, Sarah Silverman had a comedy bit where she made a comment: when life gives you AIDS, make lemonAIDS. And it was kind of something that we would say around, because we had to. Like, some people would say, you are so insensitive. I'm like, asshole, are you dealing with these patients every day? No, you're not. Man. And if anything, that's just kind of a nice way to say, look, man, here's what life is, okay? This is what we gotta deal with. So.
Jason Haworth: You get a laugh. Right. Right.
Ajit: I do the same thing with her. So she might call me and say, hey, there's a Google MD here. There's someone here. But sometimes, and this is one of my favorite things, she will get a patient who comes into the emergency room and says, yeah, I need an MRI. And it's like, okay, but why are you here? And it's like, well, my lip's really chapped or something. I'm obviously making a comedic exaggeration, but it's completely, you know, the opposite of a reason for getting someone an MRI. And then she'll say, well, I don't really see a reason for you to get an MRI, because, you know, you're here for this. And then, here's the beautiful thing about society, she'll get a comment back saying, well, if you're not going to give me an MRI, that means you're racist. And I'm like, whoa, whoa, whoa, nope, nope, nope. That's not racism; she's deciding which diagnostic equipment or mode you need for us to fix you. But
Jason Haworth: That's not how racism works.
Ajit: She will get a ton of that. She will get...
Jason Haworth: Are stupid people a race? Are we... Is that how we're classifying this?
Ajit: Oh my God, that is right. And it doesn't make any sense, because it doesn't make any sense. Okay? But there are certain sociological factors, okay, that she will get blamed with for doing or not doing whatever, which is completely out of the realm of her medical experience and her medical knowledge. Okay? Like, how do you deal with that? And again, these are extreme, exaggerated things, but
Jason Haworth: No! Right.
Ajit: if I was to actually show you, or have her here to give real details of what happened yesterday in the ER, it would blow your mind. Okay. And so here's how the process sometimes goes. Someone comes in, has a paper cut, wants an MRI. Well, I can't give you an MRI. Well, you know what, I'd like to speak to your supervisor. And then, you know, she'll do a little do-si-do: here I am, I'm my own supervisor. No, no, no, I need a real doctor. Okay, you need a real doctor. Why? Well, I need a doctor who's a man.
Jason Haworth: I'm sure.
Ajit: Because men are real doctors. She will get that all the time. So here you have a separation.
Jason Haworth: Ha ha! And I'm guessing it's men primarily that say this shit, right?
Ajit: That's right, the real doctor's here. But, you know, what you have there is an already skewed assumption of what the patient is expecting to get, based on the internet. And then if they don't get what they want, they immediately turn it into something that is non-medical. It becomes sociological. It becomes cultural. It becomes something else. And she's like, I'm so tired of this. And then you have to get the institution involved, and then it further complicates it.
Jason Haworth: hahahaha
Ajit: It's unbelievable what she has to deal with. And I never have had to deal with it. I mean, she gets the worst of it for sure.
Jason Haworth: She's fighting belief. Yeah. She's not fighting logic and reason. She's not fighting an idea. She's fighting a belief. Somebody has looked at this thing, and now they believe this is the thing that's happening, as opposed to going, hey, maybe this is what's occurring. And I see it happen all the time. I'm just in the tech field, and I deal with folks quite often that believe a thing works a certain way. And then I have to
Ajit: Yeah.
Jason Haworth: prove with a whole bunch of data points that that's not how it occurred. And I'm successful about half the time in convincing them. The people that work with them, who came to the thing as an idea, are like, oh yeah, I totally get it. But then I've made an enemy, because I've challenged someone's belief system on a technology piece that has nothing to fucking do with anything out there. But I think this is happening a lot more because AI is really, really good
Ajit: Yeah.
Jason Haworth: at giving us information sets. And if you follow the trail with it, and you follow the Google, and you put the information in the right order, it spits out the same results, and you wind up with confirmation bias, because you go in with an idea saying, I think this is something like this, I think this is why, here are the pieces. And the AI wants to keep you engaged and happy. So it spits back an answer that's going to keep you engaged and happy. And the likelihood that it's going to spit back and say, no, dumb-dumb, you shouldn't do that, is low. But the likelihood that it spits back with, well, not exactly, it kind of might be this, and then someone follows the kind-of-might-be-this rat hole, is a really common fucking thing. And it's not that there's not value in following the absurd or taking the road less traveled, but as a doctor, I'm guessing you're dealing in macros, and you're dealing with data in such a way that you're looking at graphs and charts and probabilities
Ajit: Uh-huh.
Jason Haworth: to try to find a way and carve a path to an endpoint that has a positive outcome, because you're dealing with systems that aren't inherently transactional. There's multiple things being fed into them simultaneously. It's not, you know, put this pill in, get this net result. It's put this thing in, along with these other things, along with this, and this might be the thing that causes these trends to change and move in a certain direction. And there's certainly things that can be big stopping points. Like, I can take a pill of cyanide and die right away. But I can also do something like smoke a cigarette that has a little bit of cyanide in it and be just fine. So it's rate, it's volume, it's interactions, it's all these pieces happening simultaneously, and it's systems. And these chat systems have been set up and designed with these neural networks to find approximations of understanding, to sort data, to make those things more palatable and reasonable. And we've turned around and given them, like, a human level of interactive consciousness. Not consciousness, but a human-level interactive version of perceptual output that looks like consciousness. So we feel like we're talking to something that can actually reply back to us in a way that makes sense to us. And you can create a personification of this and say, hey, pretend you're a doctor that does this. I mean, you could tell it to pretend you're one of the
Ajit: Mm-hmm.
Jason Haworth: fucking doctors from ER, or Noah Wyle from The Pitt. Pretend you're Noah Wyle from The Pitt and answer me this, expecting you're gonna get an answer. And again, it's just a continual loop of confirmation bias, misinformation, poorly distinguished outcomes, because we think we fucking know what we're doing, and we don't. Yeah, I think I just ranted. I don't know that I have a question in there, but fuck, man, this is driving me insane, because I see so many potential things for good, and I see us just
Ajit: Yeah.
Jason Haworth: locking it up because it's so easy for it to do it. It's so easy to just let people run with it and, you know, Darwin themselves out of existence.
Ajit: Yeah. Yeah, dude, first of all, there's lots of questions in there. You know, during this whole podcast, and I want the viewers at home to know this, I had no idea what we were going to be talking about. That's probably why I sounded a little slow, despite my Oaxacan coffee and Oaxacan background.
Jason Haworth: Dear viewers, I just found out Jeremy was sick, so we had nothing prepared past this, and Ajit said he had time, so we are doing this.
Ajit: I love it though, because I have time. And it's funny, because I think we should have done this with both of us drinking, okay? But the thing is, I started thinking, as you were talking, about watching old episodes of Star Trek. I'm talking like Kirk and Spock, you know, Nimoy and Shatner, okay? And when you look at Bones, the doctor, right? I mean, that fucker was the ship's doctor, right?
Jason Haworth: Oh yeah. Mm-hmm.
Ajit: And I look at Denitza, and she's the ship's doctor. And the thing about Denitza and Bones, okay, what was his real name? Dr. McCoy, it was Leonard McCoy. Bones is what they called him. I love that: hey, Bones, get over here. But when you look at him, I remember watching the series really religiously, and, well, it was part of, you know...
Jason Haworth: Do you call her Bones? Dr. Leonard McCoy.
Ajit: our, you know, growing up, not growing up, our upbringing. You know, eventually we saw the other trends of what Hollywood would do. And in fact, I think The Next Generation was the best iteration of, like, the entire franchise. But no, it's so good, dude. But when you watch those doctors, especially starting with Dr. McCoy, I mean, the dude had the easiest job in the world, man. He could give a physical in two seconds just by scanning a walkie-talkie or something.
Jason Haworth: Yeah. TNG's so good.
Ajit: And, sorry, I'm sorry, yeah, waving the tricorder, doing whatever it is. And then I remember there was maybe an episode or two where the fucker had to do surgery. I mean, it was almost like it wasn't deliberate. But when I look back on that particular episode, I don't know who needed it, Spock or Shatner, I don't know. But when it came down to the surgery, he has to actually cut in, and you see him sweating,
Jason Haworth: Waving the tricorder. Waving the tricorder.
Ajit: like, fuck me. I mean, I don't have a device for this. I've got to actually do this on my own, you know. And he's the doctor that's been chosen by Starfleet, so he should know. But he's totally sweaty. He gets the job done successfully, and I'm sure with the aid of the other technology that he has. But it showcased something very interesting, as I look back on it, which was that the day-to-day technology was good enough to a point where, you know, you could probably
Jason Haworth: Hahaha!
Ajit: save lives with certain conditions that right now would be like, there's no way you're gonna save someone with that. But with that technology, you could. But sometimes you have to get right down to the nitty-gritty and do things manually. And it looked like that was a big struggle for him. And yet now, especially where we live, we're close enough to Palo Alto where we see a lot of really cool stuff coming out of Stanford, where it's biomedical, bio...
Jason Haworth: Mhm.
Ajit: you know, just amazing technology that is combined with, you know, the healthcare providers that are going to implement it. It's amazing, life-changing, life-saving stuff. But when I look at some of the stuff they have, I mean, it's very identical to some of the stuff you would see in Star Trek. And I think that possibly could be deliberate, because you've got techies who are also Trekkies, who want to actually see if something like this could be possible. And, you know, for years we've seen a development of these things that look very similar to Star Trek kind of devices, that probably do even more than what that Star Trek device did back in the day. And then we have equipment that we know will help us confirm certain things. Like, if one knows how to interpret a CT scan or an MRI correctly, you can get a pretty good diagnosis of what could be wrong. But of course, then you have to go further, and you sometimes may have to get a sample. You'll have to do a biopsy, you'll have to do something that will confirm it. But that changes how it used to be, which is like, well, I can't really see anything down there, let's just cut it open and see what we find. So in that regard, that technology is really good. But even that technology, something that was scanning and observing, even that at some point could have been
Jason Haworth: Mm-hmm.
Ajit: a questionable thing to do, just like AI. So what I'm hoping is that instead of AI working in, like, the broad spectrum of what ChatGPT will do, I would be more impressed, for example, to see an AI solely getting its information from a trusted source. Like, let's say NIH's PubMed, okay? And let's just say nothing is being scrubbed from PubMed, that it is, like, you know, a fixed library that is only being added to, but is peer-reviewed and vetted before it is in that database. I would like to see, for example, an AI that gets its information solely from that, versus an aggregate of the entire crap that's on the internet. And in that regard, I can see a very good positive thing. I can see where we don't have to say we're messing this up, or we're doing this all wrong, we don't know anything. We can give the people that are supposed to help others better tools. The same thing can be done in isolated military aggregates of AI, isolated things involving cooking techniques using AI. I think it's a very interesting and broad start, but we need to get more fine-tuned with it. Again, the example I gave was medical imaging used for diagnostics. It has come a long way.
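[Editor's note: Ajit's PubMed-only AI is essentially retrieval-augmented generation restricted to a vetted corpus. As a rough, hypothetical sketch, NIH's real E-utilities API (eutils.ncbi.nlm.nih.gov) can be searched for PubMed records, and the retrieved abstracts become the only material the assistant is allowed to cite. The function names and the refuse-when-empty behavior below are illustrative design choices, not a description of any existing product.]

```python
import json
import urllib.parse
import urllib.request

# NIH's public E-utilities endpoint for searching PubMed.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"


def esearch_url(term, retmax=5):
    """Build an E-utilities search URL for the PubMed database."""
    query = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    )
    return f"{EUTILS}/esearch.fcgi?{query}"


def fetch_pmids(term):
    """Return PubMed IDs matching the term (requires network access)."""
    with urllib.request.urlopen(esearch_url(term)) as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]


def build_prompt(question, abstracts):
    """Assemble a prompt that restricts the model to retrieved abstracts.

    If retrieval found nothing, return None so the caller answers
    "no vetted literature found" instead of letting the model guess
    from whatever else is in its training data.
    """
    if not abstracts:
        return None
    sources = "\n\n".join(f"[{i + 1}] {a}" for i, a in enumerate(abstracts))
    return (
        "Answer ONLY from the PubMed abstracts below, citing [n]. "
        "If they do not answer the question, say so.\n\n"
        f"Abstracts:\n{sources}\n\nQuestion: {question}"
    )
```

The key design choice is the `None` return: a corpus-restricted assistant should prefer "no vetted evidence found" over a fluent guess, which is exactly the cheerleader problem Jason describes.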
Jason Haworth: Mm-hmm. Yeah.
Ajit: But if you look at some of this diagnostic equipment from 30, 40 years ago, compared to what we can do now, it's almost like you look at the stuff back then and it's like, dude, are you looking at a photo from space or are you looking at a human body? It was really poor stuff, but they were able to build off of it. So I don't think building off the stuff from back then caused a lot of harm, or too much harm. But I think building off something that is so broad-spectrum, and I use the term broad-spectrum as a pun from,
Jason Haworth: Yeah.
Ajit: you know, medical shit, but it really is. And I think that is something that can hurt for right now, unless we start fine-tuning how we want it to be. Now, of course, if we fine-tune every area of this, well, then there's the argument about the potential of what that can do to human society, what it can do to jobs, what it can do to how we interact and collaborate, you know. And frankly, I see enough people now that are antisocial for many reasons. I think overall this is going to further problems socially, whether it be by argument, or whether it be by seclusion, you know, by one's own choice. I don't know. But I think there's so much good, positive application, where you could get certain AIs to do the job that is the responsible job and the right job. But right now, no, we're not seeing that. I'm seeing an internet part two. Which is like, first there was the internet, and it's like, yes, yes, we will all become smarter because of this. And then we have AI, which is saying the same thing. And no, I disagree. It causes arguments, because now you have a generation, or several generations, underneath us that, as a whole, not everyone, view this as gospel. And we took the same test too, man. Denitza and I
Jason Haworth: Mm-hmm.
Ajit: went down the AI road of evaluating people that we know in real life. And we then questioned it, but focused our questions. Like, with one particular person, we focused on them, saying, you know, what kind of medical specialty are they, and what kind of this? And then we started asking the questions: well, why are they such a bad doctor? Are they such a bad doctor because they've done this and this? And the AI kind of encouraged it. This doctor, who at our initial questioning probably came off as, you know, a good doctor, a well-respected doctor, but you get to the very end and it's kind of like, yeah, you shouldn't see this doctor, man. You know what, you're right with the questions you're asking. He's a terrible doctor to go to, you know?
Jason Haworth: Yeah. I mean, the AI and the internet, they're tools, right? And tools have different valuations, and we've taken this tool and put it in everybody's hands. If you make it the equivalent of a hammer, if you put a hammer in one person's hand versus another, they can probably both bang nails. But somebody can turn it into a weapon. Somebody can make beautiful sculptures with it. Some people can only bash themselves in the foot and the hand with it. And, I mean, we are human beings, we are tool users, that is what we are. But not every tool is meant for every human being to use optimally, and not every human being should be given access to every tool. Cars are a tool, but we make you take a test before you can actually use one. A key is a tool to get into your house. Kids have to learn how to do those pieces. We've become reliant, though, on these tools to supplement actual human knowledge. Speaking of The Pitt, I just watched this episode last night. There was a cyber attack that occurred at the other hospitals in this mythical area, I think it's Pittsburgh where the show is set, and the hospital administration decides that they're going to shut down all of their IT and go back to using manual boards to try to make things work. And the ER goes into absolute chaos, because nobody knows how to use a fucking whiteboard. Like, is that how fragile we actually are as a society, as people? Because we've become reliant on these tools, and the more things that we hand over to AI, the more reliant we become on these pieces.
Ajit: Really? Oh my God.
Jason Haworth: At what point does self-reliance have to take precedence over technological reliance? Because I rely way too fucking much on technology, I know that. But I also know that if I had to go out into the forest and survive, I could. I think my kids could, because I've taught them enough of how to make those pieces work. But I don't think the neighbor's kids could. I think they'd be real problems, and I think they would die on the street if they didn't have running water.
Ajit: Yeah, I feel confident in things that I can do too, and I have to be honest with myself, and I have to be honest with the people around me. I'll go into what I mean by that in one second. But regarding tools, you're absolutely right. The analogies you gave are absolutely right. They are tools, anything that you can use for good or for bad. And if someone wants to discredit me and say, you know, that can't be used for good, or that can't be used for bad, fine, man, you got me. Okay? And this is what it means to be honest with yourself and honest with other people. It's like, okay, you got me, that one item you just mentioned totally shuts down what I'm saying. Or it doesn't. Like, I can use this to drink my coffee, or I can use it to help assist me bashing your face in. Okay, yeah, that's an extreme.
Jason Haworth: All words are made up, it's okay. Right.
Ajit: So tools can be used for good, tools can be used for bad. They're fucking tools, okay? They have no consciousness, and neither does AI, okay? AI is a tool. It can help you get to something good or something bad. It depends how you want to use it. You need to understand what it's there for. So if you wish to use it for something bad, go for it. If you wish to use it for something that you don't know, and you're using it solely to learn something, you know what, it might do a decent job, it might do a bad job. There was a great example one time. There was something, I forget what it was, but... oh God, you know what it was? It was something so simple. It was how to pop open an ampoule for an injection. And
Jason Haworth: Yup.
Ajit: it was not something she was used to, and it's not something you normally see in the US. Yeah, that's what it was. And I said, yeah, yeah, what you need for this is this other tool, and that tool pops it open. And she goes, what are you talking about? I'm like, I'll show you. And I looked at what she'd found on Google, and I said, no, let me show you, I know exactly what this is. But even though what she looked up would have been good enough for how to pop the thing correctly, get what she needed, and provide the injection,
Jason Haworth: Different lid types. Yep.
Ajit: what I showed her was something that was not mentioned in that video, which was an actual device used to safely pop this thing open, and then how to properly administer the syringe to give the injection. So that is something where she first said, okay, look, I need to use this thing, I'm not gonna pretend I know how to use it. She relied on the internet, a tool, okay, to see how it's done.
Jason Haworth: Yeah.
Ajit: And I'm sure, with what she found, given time, she could have just done it. But she said, no, my husband went to medical school and did his training over in Europe, maybe he knows something about this. So here we have three different things in this process. And I found something that she didn't see. And so once we applied that, or at least found the other part of the device, she was able to do her job more effectively. That took more time, but that was the responsible way of using tools, and still vetting it, the same way people want to get a second opinion on a medical diagnosis. She came to me for that. That took time, but that was the responsible time. And I'm happy to say that person is most likely alive today. Okay? Whereas I can't say other people will go through that process. You, I know, will go through that process. And it's a...
Jason Haworth: Yeah. I th-
Ajit: Some people think that that's overly cautious. No, it's not overly cautious. It's the right thing. It's either your life or somebody else's life. And forget medical stuff. What if it's simply getting work done? What if it's avoiding plagiarism? What if it's cooking the best waffle? You should go through a normal process before you decide full Star Trek automation is the way to go. And I think right now we're in the infancy of that. And that's what makes it scary. And that's what makes it
Jason Haworth: I f-
Ajit: weird, and that's why it's easy, just like what you said: hey, man, we may not know what we're doing exactly, as a society, as tech culture goes and advances. These things are supposed to make our lives easier, better, and all that. Yes, we hope that's the case. But there's also the other side, which makes things far worse. And so I quote the comedian Demetri Martin, who said something years ago that was absolutely hilarious regarding the internet. He said, you know, the internet is full of sexual predators. But to be fair, it's also full of sexual prey. And it was really funny when you heard it. But now you fast-forward, and it's like, that was funny, but it also showcased something that is real. It shows how the internet can be used for good and for bad. And that's something we should always be cognizant of, I think. And there's that.
Jason Haworth: Yeah, no, I think that's good. I think what I'm taking away from this, though, is that in the example you provided, where your wife needed help making a tool work, the tool she turned to was you, to actually get the right answer. And I would like to encourage everyone to be your own tool. Be a tool for yourself, for lack of a better term.
Ajit: I know, I think she has told me that: you're such a tool. And I'm like, you're welcome.
Jason Haworth: Yeah, exactly. You're welcome.







