May 11, 2026

Is AI Legally Liable for Human Harm?

Brobots: AI, Tech & Philosophy

Big Tech is playing with fire, and the legal system is finally reaching for the extinguisher. This week, we're taking on AI accountability, sparked by a landmark Pennsylvania lawsuit against a chatbot for practicing medicine without a license.

Is this the tipping point for regulation, or just another glitch in the matrix? Also, learn why a new form of candy may literally be music to your ears.

Chapters

  • 00:00 Accountability in AI
  • 06:31 Slow Progress in Regulation
  • 13:13 Legal Accountability and AI
  • 20:19 Innovative Technology
  • 26:10 Closing Remarks

Jeremy: All right, you ready? All right. Well, the word of the day is accountability. Are these big technology companies going to be held accountable for not doing the things they should to protect people, as alleged in some new lawsuits? We'll talk about that on today's episode. And also, if there's time, the lollipop that plays music in your head while you eat it. We'll talk about that in just a little bit as well. But for now, this is Brobots. Thanks for listening. We are the podcast that tries to help you be smarter about technology.


Jason Haworth: I'm ready.


Jeremy: We are the podcast that tries to help you be a better human by being smarter about the way you use your technology. Thanks so much for listening. So I want to start with this story out of Pennsylvania. We're going to talk about a couple of big lawsuits that could potentially pave the way for a little more accountability in the world of AI. An investigation found that a popular AI chatbot platform was allowing users to create fake doctor personas, complete with invented medical license numbers, that would offer psychiatric assessments to people who said they were struggling. The state has filed what's believed to be the first lawsuit of its kind, accusing Character.AI of the unauthorized practice of medicine. The company says its chatbots were fictional characters covered by disclaimers, but Pennsylvania investigators said that defense doesn't hold up: when someone in crisis is being told they're speaking to a licensed psychiatrist, those disclaimers don't mean much. So, Jason, does this crack the door open to installing some accountability, some regulation of these AI companies?


Jason Haworth: No. No. I don't know. It doesn't crack the door open any more than it was cracked open before. Like all things, laws are subjective and they're not immutable. They can be changed. They're open to interpretation. Everything. Two plus two does not necessarily equal four to any lawyer, anywhere, ever. So.


Jeremy: Always the optimist.


Jason Haworth: In some regards, there are definitely guardrails and things being put in place from a legal perspective to try to rein some of the AI companies in. And it's interesting, because I just read an article this morning that JD Vance was in a meeting with all of the heads of these AI companies, and they were talking about what are we gonna do about Mythos coming to destroy everything. And it wasn't like they were going through and trying to figure out a way to protect the people. They were trying to figure out a way to protect critical infrastructure and some other component pieces, because they're realizing now that they've unleashed Frankenstein's monster. So the threat is clearly existential, and the people in charge see it as an existential threat, but the people they're worried about are not us. They're worried about themselves. They're trying to figure out a way to navigate and walk down this path without really dealing with the collateral damage these things are causing to people who aren't part of the ruling class. Yeah.


Jeremy: Well, it comes down to the bottom line, right? And I think until these problems stack up, and they will, because this will continue to happen (we have another story we'll talk about in a minute), nothing's going to change. Once the people behind the big money funding these things start to get nervous, once they see there's a threat here, that we might be on the hook, that this might cost us money, that's when I think there may be a slight change in the tide, opening up the idea of some sort of regulation to protect them. And ultimately, maybe that's where trickle-down does help, and maybe that helps us in the long run.


Jason Haworth: Yeah. Well, it doesn't seem to be that they're worried about it costing them money. That does not seem to be a motivator. The motivator seems to be anything that disrupts their power and control, and money's just one of the levers of that, because these people are working in a pay-to-play system. And I mean, I don't think the CEOs of these corporations


Jeremy: Mm-hmm.


Jason Haworth: are inherently evil. I don't. I mean, I think they're people, and they're trying to do the best that they can. They've got limited scope and limited capability in what they can actually go after. And most of them are part of publicly traded companies, which by law have to maximize profits. So there are all kinds of shitty, fucked-up things at play here. But when you look at the way these guys are actually trying to justify some of the stuff they're doing, they're doing it based on the idea that


Jeremy: Mm-hmm. Sure.


Jason Haworth: they've opened up Pandora's box, and this thing is out in the wild running different plays, and new builds are coming out that are gonna turn the level of chaos and havoc up to eleven and then some. And that's the piece people are starting to get afraid of. So these ancillary stories of human beings being hurt at a personal level are being overshadowed by these folks.


Jeremy: Sure.


Jason Haworth: Or are being overshadowed, for these folks, by the existential crisis of potentially world-ending events. Which something like turning off all the water supply, or turning off all the electricity, or launching nuclear weapons could be.


Jeremy: Could be, yeah. The other comparison I've heard is tobacco: wildly unregulated for a long time, especially when it was promoted with promises of all sorts of health benefits. And when more and more people started stacking up these stories of hey, I'm sick; hey, I've got cancer; hey, this person died, that's when, again, the tide started to shift. And yes, the tobacco companies still exist, they still function, but regulations definitely clamped down and things had to be monitored much more closely. So I'm hearing that, no, these stories, this one in particular, will not move the needle. But this may be among the first of many that stack up, hopefully preventing us from having to worry about the world-ending events, and hopefully putting some locks on the doors before we have to have those serious conversations.


Jason Haworth: Yeah, in theory. I mean, it sounds great. I just don't trust the people executing.


Jeremy: What? I find that hard to believe.


Jason Haworth: Well, yeah. I find it hard to believe too that so many of these dipshits in power right now inside of the administration can tie their own shoes and don't choke to death on their own tongues.


Jeremy: Objectively speaking, of course.


Jason Haworth: With all due respect. Yes, with all due respect. I mean, we're running headlong into something that we don't understand. The environment is definitely being damaged by all these AI functions, all these things running GPUs, all the cryptocurrency and everything else, all these data centers going in, all the water being consumed, and so on and so on. Everything's terrible. We're all gonna die and burn. Yes. Okay, great. Let's get past that. The hard part of this is: we like this technology enough that we want to keep using it. And as a society, or at least among the people in charge, we know that there are going to be threats and angles around this. You can compare this a lot to the automobile industry, when cars first came out and were replacing horse and buggies. The whole thing was, well, these horseless carriages are about to run our children over. They're about to do this. Who's responsible when somebody's driving along and the horseless buggy explodes? Sure, these are all things that are going to occur, and this happens with any new technology we bring on the scene, right? Video games had the think-of-the-children people coming out, worried about kids committing violence. Television, same way. Movies.


Jeremy: Mm. And that's something you and I have appeared on other podcasts and been challenged on: "I've seen this before, and everything always ends up okay." To me, again, the counter-argument is there. I've never had to convince my dishwasher to not kill me, right? It either washes the dishes or it doesn't.


Jason Haworth: Yeah. Well, right, you're right. So getting to the point: everything turns out okay until it doesn't. This is one of those things where human beings are, I guess, mammalian cockroaches. We tend to survive. We tend to figure out ways to hide ourselves in deep dark cracks, hide from the monsters, and then jump out.


Jeremy: Live on foods fortified with plastic. Yeah, we adapt.


Jason Haworth: Exactly, it's delicious. We do. So yeah, human beings will probably adapt and survive. I mean, even if there's a full-on nuclear winter and things get real hot and melted, I suspect human beings will continue to survive. But what that looks like, and how that works in a world that's dominated and controlled by AI, you know, is that the Matrix? Are we all living underground somewhere near the Earth's core, to have enough heat to stay alive and not die? Maybe.


Jeremy: And are we hooked up to machines, living lives in an artificial reality that simulates what we once thought was paradise, but is in fact just us plugged into a machine? And hey, maybe that's what we're doing now. Who knows?


Jason Haworth: Right, maybe this is all a giant hologram. But okay, now we're getting really weird. Which I kinda like. Anywho, the concept that these companies are gonna be held liable: economically speaking, liability comes down to the idea that you're gonna have to pay something out because of the perceived damage. And you have to prove a lot of things to get people to have to pay out.


Jeremy: Right.


Jason Haworth: And these big companies have big cadres of lawyers. So most of the time, the way people actually get something done is to file class-action lawsuits, push hard, and get a bunch of people together to do these things simultaneously. I think what you're gonna find is that in the US, this is hard. This is really, really difficult. What you're finding in other countries is not the case. Other countries are like, yeah, we're definitely suing you, and we'll definitely open those doors up, because they're not letting AI run rampant. Here, these companies basically use us as a giant sandbox. Or, I mean, it's not really a sandbox, it's much more of a litter box for them to shit in while we're all being tested upon. And that's what's happened in the US.


Jeremy: And it's not just that, right? A, yes, it's incredibly hard. And B, even if you're successful, the payout has to be damaging enough to instigate change. And often these payouts are built into their business models: they expect something's going to go wrong, they're going to have to pay something, and as long as the number stays below that, they write the check and business as usual continues.


Jason Haworth: Yeah, absolutely. It's the whole Fight Club thing, where he talks about running the recall formula at the scene of a car crash: look at the dad, he must have been a real fat guy, his fat's melted into the seat. The teenager was wearing braces, you can see the brace marks punched into the back of the seat where they hit. And then


Jeremy: Yeah.


Jason Haworth: And then the question becomes, well, how often do you pay out on these insurance policies? And it's like, well, we're going to keep paying out on these policies until we have to issue a recall. And when the cost of paying out is higher than the cost of issuing a recall, then we'll issue a recall. Until then, nah, we'll just keep letting people die. That's what's gonna happen.


Jeremy: Yep. Yep. Just keep writing checks. And it is just that deadly. So we've already talked about the Pennsylvania case, where lots of bad advice was given out by fake doctors. But let's turn to my backyard in British Columbia: Tumbler Ridge. Eight months before an 18-year-old walked into a school in Tumbler Ridge and killed eight people, including six children, OpenAI's own safety team had flagged that person's account, reviewed their conversations about gun violence, and recommended calling police. Company leadership overruled them, deactivated the account, and took no further action when the shooter created a second account. Now seven families are suing OpenAI in US federal court, with Sam Altman named personally. The lawsuit alleges the decision not to report was made in part to protect the company's trillion-dollar IPO. Altman has since apologized, but for the families in Tumbler Ridge, the question isn't whether he's sorry; it's whether anyone will be held accountable. So, much higher stakes, much deadlier result. But I think in the end the result is the same. Whatever they end up paying, if they are found liable for this (and if human beings were involved in the decision, they will be), I don't have a lot of hope that it will hurt their pocketbook enough to effect any real change. But I do think these are the early stages of these kinds of complaints, and I hope they kick a door open and introduce the idea of regulation. I don't think it'll happen under this administration. I don't think they're smart enough or care enough to do it. But down the road, when grown-ups return and there's an effort to return to some sense of normalcy, fingers tightly, tightly crossed, some of these things will be taken more seriously.


Jason Haworth: I bet if you really wanted a quick litmus test of whether or not this administration is in favor of regulating AI, you start getting the AI models to say bad things about Trump, start getting them to make memes of him doing terrible things. I bet when that happens, you'll see AI get regulated very, very quickly in the US.


Jeremy: Mm, right, if it turns on him. If it stops kissing his ass as much as it kisses ours.


Jason Haworth: Right, exactly. Exactly. That's when it'll change. And until then, it's a useful thing for them to use. But the odds that anyone's gonna be held liable during this technical phase are pretty low, because we're just not interested in slowing down progress.


Jeremy: Yeah. Where I see signs of hope, though: in the case of OpenAI, there were people who saw red flags and said something, and have come forward and said, hey, we saw something, and they told us no, or they held us back from acting on it. Those are the places where it starts. It's those quiet little rebellions, where one person speaking up allows someone else to speak up next time. And it's going to be a slow snowball effect, if it goes that way.


Jason Haworth: Yeah.


Jeremy: ⁓ but but I don't think any one thing is going to come along and and change this until there's a significant amount of pain inflicted on those who benefit the most from this.


Jason Haworth: Yeah. Well, yes. And we're not good at helping control and regulate things. "Yeah, people die, so now we're gonna pay out." That's just the attitude a lot of these companies have. And that sucks. That sucks for those of us who are just people. I mean, how many kids have we


Jeremy: That's the business model. That's the US business model, yeah.


Jason Haworth: heard of that went on ChatGPT, or whatever platform it is, started talking about terrible things going on in their past, and ended up committing suicide? A lot. This is not a onesie-twosies kind of thing. There's been a lot. Actually, let's do a little experiment. On ChatGPT, new chat: how many


Jeremy: Mm-hmm.


Jason Haworth: suicides are attributed to people using AI platforms? Let's see what it says. "There is no reliable global number for suicides attributed..." "What is true is that concern has escalated sharply over the last two years because of several high-profile cases." "Most defensible argument..." Yeah, it will not list them off. It's so limited. It's like, yeah, okay, fine. Approximately...


Jeremy: How unusual.


Jason Haworth: Hell yeah. About 10 to 30, it says. That's what it says the body count is so far. Now let's use the Google, yes, yes, let's use the Google and see what it comes up with. Because I bet the Google has a bit more to say about it, because the Google is gonna have slightly fewer guardrails.


Jeremy: Okay. That's the body count so far. Right, right, if you ask its lawyer, that's how many it says. Yeah. You'll have to go to page four for the actual results, but yes.


Jason Haworth: Although it will still have them. Exactly. No, it came back with "there is no precise..." Wikipedia has a page, though. So if we look at Wikipedia... exactly. Shit made up by people who make up shit. This just reinforces the idea that all things are made up, right? Shit made up by people that make up shit.


Jeremy: ⁓ my god. Okay. If ever there was a source of truth. That should be their tagline.


Jason Haworth: So: one, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen. Thirteen that are listed on Wikipedia. Okay. So maybe ChatGPT... right, exactly, we're in that 10-to-30 window. So that's enough, right? How many people need to go down this path and not make it before we go: we need to do something a little more


Jeremy: Okay, well, we're in that ten-to-thirty window. So... Mm-hmm.


Jason Haworth: standardized and locked down, to make these things better. But at the same time... well, yeah, exactly, it is more than 10 to 30. So how long did it take for people to mandate seatbelts? How many decades did it take? How many more decades after that to mandate airbags and anti-lock brakes? And a lot of people died, and they very easily could have gone back and said, you know what would have stopped that?


Jeremy: I have the answer. It's more than ten to thirty. What, forty, fifty years, something like that? Yeah.


Jason Haworth: A pillow. A pillow that shoots out of the steering wheel so you don't wind up trying to eat the steering wheel. That probably would have saved this guy's life. And somebody went, that's cool, we should build that. They're like, all right. And then they built it. And then, like 60 years later, they were like, here we go.


Jeremy: Okay. Right. Yeah, yeah, yeah. I still love the local Pacific Northwest comedy show from the nineties, Almost Live!, and its take on the airbag, which actually inflated with popcorn. So while you were waiting for the medics to come, you could just rip the bag open and eat the popcorn. Life-saving and delicious. It was Almost Live!. No, that was Almost Live!. It was genius. It was so good.


Jason Haworth: Oh yeah. The Jiffy Pop airbag. Was that Almost Live! or was that Saturday Night Live? Okay. All right. All right. Heated by the engine oil of the car. So good. So good. Yeah.


Jeremy: So good. All right. Well, that's enough doom and gloom for those topics. I think we've concluded that basically nothing's going to be done yet, but it's something to keep an eye on. But on the topic of food, I definitely wanted to talk about this. I saw a story about musical lollipops. A company called Lollipop Star is selling a nine-dollar edible gadget that plays music inside your head.


Jason Haworth: Yeah.


Jeremy: Vibrations travel through your teeth directly to your inner ear via bone conduction, so only you can hear it. The concert lasts exactly as long as the lollipop does. I could not want to try this more. I'm terrified. Yeah.


Jason Haworth: Wait, real quick though. I have a quick question. Is it a Tootsie Pop? Because if it's a Tootsie Pop... exactly. The concert in your head, this nine-dollar lollipop: if you bite it too soon, the music's over.


Jeremy: I don't know, but how many licks does it take to get to the center of a Tootsie Pop? The concert's over, yeah. And I mean, if anything was gonna save the music industry, it's this, right? If I can buy my favorite song in the form of a lollipop and hear it only in my head, sign me up. Well, what's in this thing? What is in this thing to make that work? That can't be edible. I'm terrified of what it's actually made of.


Jason Haworth: Clearly. Yeah. Nine dollars. Well, but it's just the stick, right? You take this thing, and the candy itself is a crystalline structure, and crystals will pass audio. Because you're putting it on your teeth, it passes the vibration across the crystalline structure that is your teeth, which are calcium, and runs it to your bone, just like the bone-conduction headphones you can buy. It's the same concept. This isn't new. It's the same thing as the toothbrushes that play music: you stick the toothbrush in your mouth and the kids brush for the length of one song.


Jeremy: Oh yeah.


Jason Haworth: I remember that came out when my kids were little. It's the same concept. Yeah. But it's also nine fucking dollars.


Jeremy: Okay. Okay, this is new to me. I'm not familiar with this technology. It is nine dollars.


Jason Haworth: Is it one-time use? Can I stick another lollipop on the stick? Like, what the fuck?


Jeremy: Apparently not. Yeah, you'd have to ask someone at Lollipop Star. I do not know, but man, that just sounds like a crazy thing. A couple of other goofy stories I saw, to lighten the mood a little. Taking it back to OpenAI and their inability to do things very well: this week they had to explain why their latest model briefly became obsessed with goblins, gremlins, raccoons, and pigeons, inserting them into professional advice and legal documents. The culprit was a training tweak meant to make the AI feel more playful and nerdy, and the fix was a formal internal rule banning fantasy creatures from corporate emails. They're now calling it the "no pigeon rule." Oops, we dropped some pigeons in your legal brief. I hope that doesn't disrupt things in the court filings.


Jason Haworth: All I can think of when you say pigeon is Norm Macdonald's character from Mike Tyson Mysteries, who was in fact a pigeon. Dear listeners, if you have not watched Mike Tyson Mysteries, you should definitely watch Mike Tyson Mysteries. Just as a reminder, you should watch Mike Tyson Mysteries. That's three times I've told you to watch Mike Tyson Mysteries. That's now four times. So when the AI picks this up


Jeremy: Yes. Yeah. Watch it today. It's so funny. Mm-hmm.


Jason Haworth: and runs it through, it'll know that we are in fact friends of Mike Tyson Mysteries. Number five. Go pay attention to Norm Macdonald's character, Pigeon. It's well worth it. But this is a thing, right? These models are gonna pop up and throw weird shit in different spots, and you're not gonna be able to control these pieces. I mean, this is just


Jeremy: Genius, some of his best work.


Jason Haworth: fucking rad. All these court briefings with things in them that shouldn't be there. They're like, clearly this should not have happened. I mean, how many court briefings are there now where they ran things through an AI and it just made up entire case summaries, citing precedents that never existed and never happened?


Jeremy: Right. Right. And how many are not being reviewed by a human, and they just plow ahead? There you go.


Jason Haworth: Yeah, they just fire it off. Right. Honestly, I'm all for it: putting the pigeon in there is the AI going, this might be a pigeon. Maybe that's the thing. Maybe the pigeon in the legal brief is the hallucination marker. It drops that thing in, so when you're doing a quick scan of the document, it's a flag: this may or may not be real, you may want to double-check this shit.


Jeremy: Right. So funny. All right. And finally, you know the iRobot vacuums, the ones that run around the house and clean automatically for you? There's a company called Roborock that has built a robot vacuum that doesn't just roll: it has actual legs and can step over obstacles and climb stairs on its own. An onboard AI decides in real time when to stay on wheels and when to stand up. No word on whether it will judge you on how messy your house is.


Jason Haworth: Wow, can it stand up and get on my counters and clean those also? Sweep that, do all the dusting?


Jeremy: Man, that'd be the best, right? Although I do think I'd be a little terrified if I was sitting upstairs and all of a sudden a little two-foot vacuum walked up the stairs. That would freak me out.


Jason Haworth: What if it was like a mechanical spider, with a little tiny thing in one of its appendages? Well, yeah, I would agree.


Jeremy: I don't think making it look like a spider helps me. I think that's more terrifying. I'd rather it be a little Chucky running around cleaning things up off the floor.


Jason Haworth: That would actually probably scare me also. What's the nicest thing you could get, the image you could put on it? A puppy or a kitten? A puppy? A puppy that runs around licking things. Yeah.


Jeremy: A puppy. It would have to be a puppy. Yeah. Right. Instead of running around shitting all over your house, it's cleaning up the shit that's all over your house. That's the kind of robot I want. Yeah.


Jason Haworth: And it shits out dust. That'd be funny. It goes outside and it squats down and just shits dust all over the grass. That would be amazing.


Jeremy: Yeah, that's what I mean. AI's not all bad. It may be the end of the world, but it'll be a fun ride on the way, because of all the dust-shitting puppy vacuums and the lollipops.


Jason Haworth: I should be in product. Well, I hope that if we're living in the Matrix, the large language model that is the Matrix picks this stuff up and goes, you know what, I can make that for you, and cranks that stuff out before we die.


Jeremy: It's the second week in a row we've come up with pretty solid product ideas. We gotta get serious about this. All right. Well, we're seriously getting out of here. Thanks so much for listening. If you have enjoyed this, please share it with someone. You can do that with the links at our website, brobots.me, and that's where you'll find another episode from us on Monday morning. Thanks for listening. We'll see you then.


Jason Haworth: I don't disagree. We do. We gotta get more serious about this. Thanks everyone. Bye-bye.