AI isn't your therapist. It's a letter opener that'll slice you to ribbons if you're not careful.
A new EU study finds ChatGPT and Copilot distort news 50% of the time. FTC complaints show AI "mental health" tools are landing people in psych wards. We break down when AI is helpful vs. when it's dangerous AF.
🔪 THE TRUTH ABOUT AI:
* Why LLMs feed your confirmation bias to keep you engaged
* Garden variety trauma vs. problems that need real doctors
* The supplement analogy: sometimes useless, sometimes deadly
* Real FTC complaints from AI mental health disasters
* How to be your own Sherpa before bots walk you off cliffs
⚠️ WHEN TO LOG OFF: If you're on prescribed mental health medication, you're already talking to a doctor. Keep talking to that doctor — not Claude, not ChatGPT, not your glowing rectangle of validation.
This isn't anti-AI. It's pro-"don't let robots gaslight you into a crisis."
🔗 LINKS:
* Full show notes: http://brobots.me
* EU AI News Study: https://www.mediaite.com/politics/lawsuit-accuses-googles-ai-of-fabricating-news-articles-that-never-existed/
* FTC AI Complaints: https://www.wired.com/story/ftc-complaints-chatgpt-ai-psychosis/
----------------------------------------
*TIMESTAMPS:*
0:00 - Intro: When Tools Become Weapons
1:26 - EU Study: AI News Wrong 50% Of The Time
4:04 - Why LLMs Are Biased (Rich White Tech Bros Edition)
8:04 - The Butterfinger Test: Is AI Validating BS?
10:31 - FTC Complaints: Real People, Real Damage
12:37 - Garden Variety Trauma vs. Broken Leg Problems
15:34 - The Supplement Analogy: When AI Becomes Poison
18:41 - Beep Boop Ain't Gonna Fix Your Leg
20:51 - Wrap-Up: Unplug & Go Outside
*SAFETY NOTE:* If you're experiencing a mental health crisis, contact 988 (Suicide & Crisis Lifeline) or go to your nearest emergency room. AI tools are not substitutes for professional medical care.