Jan. 28, 2026

Should You Trust AI With Medical Advice? What Happened When We Tried

ChatGPT just launched a medical advice tool, and the internet is divided. Doctors are skeptical. Patients are desperate. And most people are already using it anyway.

I know because I’m one of them.

Last week, my dog ate something mysterious off the floor while I was fixing the dishwasher. By the time I realized what happened, I had no idea if he’d swallowed a piece of plastic, a chunk of food, or—worst case—an undissolved dishwasher tab.

I called every vet I could find. They all said the same thing: “Bring him in.”

So I did. And when I got there, the estimate was $1,200. Blood work. X-rays. Induced vomiting. Charcoal. The works.

I asked ChatGPT: “Is this necessary?”

It told me blood work would only reveal pre-existing conditions, not what he ate. It told me that inducing vomiting after ingestion of a caustic material could do more harm than good. It told me to monitor symptoms and come back if things changed.

The vet didn’t like that. They handed me a waiver to sign for “declining care” and sent me home.

My dog was fine. I saved $1,200. And two days later, someone else in my town told me they’d done the same thing.

If two people in a small town are using AI for medical triage, everyone is.


Why AI Medical Advice Is Becoming Inevitable

Here’s the uncomfortable truth: rural hospitals are closing because they can’t stay profitable. For-profit healthcare in the U.S. costs three times the global average, and small-town clinics can’t keep up.

Telehealth filled the gap. Now, AI is the next economic step.

ChatGPT’s new medical tool isn’t some rogue experiment. It’s a response to the fact that people already Google their symptoms, already use WebMD, and already can’t afford to see a doctor for every question.

AI doesn’t fix the system. It just makes it faster.


What AI Gets Right (And Dangerously Wrong)

AI is good at triage. Really good.

About 90% of medicine is triage—figuring out what to do next, which path to take, what’s urgent, and what can wait. AI can summarize symptoms, cross-reference medical databases, and give you better questions to ask your doctor.

That’s useful.

But here’s where it gets dangerous: AI homogenizes care.

As my co-host Jason put it: “This rounds up all the unicorns and shoots them.”

The doctors who see things differently, who deviate from standard treatment plans, who solve the cases no one else can: they'll be optimized out. Because when every diagnosis is tracked, monitored, and compared to an algorithmic baseline, there's no room for creative problem-solving.

You’ll get faster care. But you won’t get better care.


Listen to this story on The BroBots podcast. We're on Apple Podcasts, Spotify, YouTube, and everywhere you listen.


The Liability Black Hole You’re Walking Into

Here’s the part no one’s talking about: when AI screws up, you’re holding the bag.

If ChatGPT misdiagnoses you and you die, good luck winning that lawsuit. The terms of service are already written. The liability shield is already built.

And when insurance companies start buying access to AI medical data (which they will), your symptoms, your questions, and your patterns will be used to optimize you out of coverage.

This isn’t paranoia. It’s economics.


How to Use AI Medical Advice Without Getting Screwed

I’m not telling you not to use AI. I used it. It helped.

But here’s how to do it responsibly:

Treat it as a research tool, not a replacement. Let AI summarize what you’d spend hours Googling. Then take that summary to a real doctor.

Ask better questions. Use AI to figure out what you don’t know. Then challenge the answers you get from real physicians.

Understand the incentives. AI medical tools are being built to reduce costs, not improve outcomes. Keep that in mind.

Own your health. Don’t rely on these systems to care about you. They don’t. They’re optimized for profit.


The Future of Healthcare Isn’t Human vs AI

It’s figuring out how to use AI without getting economically optimized into an early grave.

I saved $1,200 last week. That’s real. That matters.

But I also know that the same tool that helped me avoid an unnecessary vet bill is being designed to replace human judgment in a system that already prioritizes profit over care.

The question isn’t whether you should trust AI with medical advice.

The question is: what happens when it’s your only option?
