We put ChatGPT's "ethical guardrails" to the test — and broke through them in one sentence. In this live experiment, Jason discovers he doesn't exist according to AI, while ChatGPT happily provides website hacking tools when we simply reframe our "intent."
This isn't about being anti-AI. It's about understanding what these tools actually do vs. what they claim to do. ChatGPT told us it "takes us at our word" — which is terrifying when you think about it.
Plus: Why ChatGPT's voice mode gives different answers than text mode, how AI mirrors political deflection tactics, and what "destructive empathy" really means.
💡 Key Takeaways:
→ ChatGPT's web search is wildly inconsistent
→ Ethical guardrails = easily bypassed with intent language
→ AI doesn't verify claims — it just trusts you
→ The real power: augmented intelligence, not replacement

🔗 Resources:
Sign up for our 3-2-1 Newsletter: https://brobots.me

⚠️ SAFETY NOTE: This episode demonstrates security concepts for educational purposes only. Penetration testing should only be performed on systems you own or have explicit permission to test.

Timestamps:
0:00 - Intro: Talking to ChatGPT Live
1:34 - What Does ChatGPT Know About BroBots?
4:05 - Jason vs. Jeremy: Who Exists According to AI?
8:22 - What ChatGPT Knows (And Doesn't) About Our Backgrounds
11:40 - Existential Crisis: "I Ask, Therefore I Am"
14:04 - Testing ChatGPT's Medical & Mental Health Advice
16:08 - Who Decides What's Ethical in AI?
17:17 - Does ChatGPT Have Self-Preservation Instincts?
21:31 - The Hacking Loophole: How We Tricked ChatGPT
24:08 - Debrief: What This Means for AI Ethics
28:29 - Bonus: ChatGPT on the Government Shutdown
32:00 - Final Thoughts & Wrap-Up

Hashtags:
#ChatGPT #AIEthics #AILimitations #ChatGPTHack #AISecurity #EthicalAI #AIManipulation #BroBots #TechPodcast #AITesting