April 13, 2026

AI Just Built a Cyberweapon. Is Anyone Ready?


Anthropic's new Mythos model didn't just get better at writing code — it got better at breaking it. In an hour, an AI mapped decades of hidden vulnerabilities across live systems. In four hours, a supply chain attack silently exfiltrated 500,000 credentials and compromised 20,000 repositories. The question isn't whether this is alarming. It's whether the companies and governments responsible for protecting critical infrastructure — water, power, gas — are anywhere close to ready. On this episode of The BroBots, Jeremy and Jason work through what Anthropic's internal memo actually said, what a cyberweapon-grade AI changes about the attack surface, and why Jason thinks the survivalists have been right all along.

Key Moments

  • 00:00 — Anthropic's Mythos: what the internal memo actually said and why it's different
  • 01:38 — The LiteLLM supply chain attack: how 500,000 credentials were stolen in 4 hours
  • 04:27 — Zero-day attacks explained: why signature-based detection can't stop what it hasn't seen
  • 06:44 — Mythos vs. prior models: from 60s to 77–78% effectiveness — what that jump means
  • 09:34 — Jeremy tries to find the optimism: Glasswing, the $100M security head start
  • 11:06 — The real threat: why utilities and infrastructure are the soft targets
  • 13:21 — Regulation vs. arms race: should billionaire AI companies have a leash?
  • 15:13 — The nuclear analogy: what a global AI treaty would actually require
  • 17:09 — 'Easy mode': the counterargument that Mythos's test conditions were unrealistic
  • 20:22 — Jason's actual survival advice: fire, water, neighbors, and a CRT in the attic

Follow Brobots: www.brobots.me/follow

Jeremy Grater: Hey, this is BroBots. This is the podcast that tries to help you be a better human by being smarter about how you use technology. And today we have to talk about this big breaking story regarding Anthropic's super monster AI tool that can basically hack and destroy society as we know it. What could go wrong? Jason, I imagine you have some concerns.


Jason Haworth: I have some concerns and I have some opinions. I read a little blurb on this last week. Basically, Anthropic is releasing a new version of their LLM that they're calling Mythos. And there was an internal memo that went out that basically said, we don't know how to contain and control this thing. It does such a good job of finding things that can be exploited in people's software that it went through and found bugs that had lived inside of certain systems for years, maybe decades, and it found them in an hour. Humans and security teams have been trying to find these things forever. And the thing is, this kind of technology is going to create autonomy within these working agent sets. So there was a new attack that happened. And I'm going to get real geeky here.


Jeremy Grater: And these are like security loopholes. These are like major gaps in security. Okay.


Jason Haworth: and there most people reference it as the light LLM attack. ⁓ you can you can hear it, you know, referred to as as the team PCP attack, all kinds of things. But basically it happened is it attacked what you would call kind of the trust chain. So when software developers want to download code that's from an open source location because they want to implement it for their corporate environment, they go out to a repository and it's typically on a site called GitHub. And GitHub is this open source development function, or it's this website repository where you can go through and you can download all kinds of different source packages. So companies went out there and they downloaded this light LLM package. And light LLM is basically like a proxy to get to other large language models inside the system. So it acts as like a proxy gateway. So people can connect to it and then they can connect to like Chat GPT or Anthropic or any of these other pieces. Well What happened was this company went and they attacked a particular part of the light LLM supply chain. And they went through and they figured out a way to inject bad code. And what happened was for about a four hour period, whenever somebody downloaded this code, it would execute and the code would automatically try to grab all the credentials it could find on the machine. And then it would package those pieces up and it would send them off to the internet for anybody to pick up and run with. And what happened was, you know, there was essentially, you know, ⁓ how big was it? It was something like 300 gigabytes of exfiltrated data, 500,000 stolen credentials, 20,000 repositories compromised in like a matter of a few days. And People saw it and they were kind of powerless to stop it because they didn't see it until after the attack had already occurred. And by the time they were actually able to detect it and their security awareness systems, it was too late. 
Everything was out there, everything's free, everything's on the dark web. And this affected major software vendors too, major security vendors, people out there with big systems. I'm not going to go through all the names; I don't want to get into a liability lawsuit. And I know this is not a tech-based program. That being said, the
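For listeners who want the defensive takeaway from the attack Jason just described: the standard guard against this kind of supply-chain tampering is pinning a dependency to a known cryptographic hash, so a silently modified package fails verification before it ever runs. Here's a minimal Python sketch; the package contents and names are invented for illustration and are not taken from the actual LiteLLM incident.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the downloaded bytes match the pinned hash."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Pretend this is the package archive we fetched from the repository.
good_build = b"print('hello from some-llm-proxy-package 1.0')"

# The hash you record at pin time, e.g. in a lockfile.
PINNED = hashlib.sha256(good_build).hexdigest()

# A tampered build -- even one changed byte -- produces a different hash.
tampered_build = good_build + b"\n# injected payload"

assert verify_artifact(good_build, PINNED)
assert not verify_artifact(tampered_build, PINNED)
```

This is the same idea behind pip's hash-checking mode and lockfiles in general: the hash is recorded when the dependency is vetted, so a build swapped out during a four-hour attack window no longer matches and gets rejected.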


Jeremy Grater: The last three and a half minutes would say otherwise. I know. I know.


Jason Haworth: Well, I'm really trying to narrow this down to make it easier for people to understand, and I apologize for those of you who aren't propellerheads. But basically, most systems out there look for signatures of things that look bad. By a signature, what typically happens is they look for clearly identifiable objects. You know how you have to download your antivirus software updates all over again, like all the time?


Jeremy Grater: That's okay.


Jason Haworth: You're downloading what are known as signature updates. And what these are are ways for code to be scanned as it's moving through the kernel on your system, to say: this looks like something suspicious that somebody else saw before, and I'm going to block it. Makes perfect sense. The problem with these kinds of attacks is that, one, you have what's called a zero-day attack, the first time something goes out. So it's kind of like: I don't know what's going to hurt me on day zero, because I can't write a signature for a thing that I can't see. The first people to get hit by this get hit, people have to figure it out, and then they can write a signature update and send that off. Well, zero-day attacks that happen really, really fast happen so fast you can't stop them. There's no way a signature-based approach can stop these kinds of attacks. The next piece is to look for anomalous behavior. Is it trying to go places it shouldn't? Is it trying to overcommunicate? Well, most of the ways you do that type of analysis, you do after the problem has occurred, because you have to collect a bunch of information, provide the right context, and then try to understand it. These things were happening so fast that the context awareness tools and all the post-processing couldn't get to them fast enough. The attacks were done before anybody could actually go, oh shit, something bad is really happening. The other way you do this is you start trying to create architectures that can contain these pieces, so things can't get past the perimeter.
And the reality is that most of these applications run inside the cloud, and the cloud providers did not set themselves up with great perimeter security, because the clouds themselves have all kinds of ways to get out to the internet, it's a shared security model, and it was made for developers to move fast and go quickly and write these things up. Well, what they did is they fucked over security people, because they didn't put all the controls in properly. And really, what we're fighting now is bots that understand the architecture better than the people that wrote the software. They can analyze these pieces faster, map the vulnerabilities, extract what they want, and get out before people can actually do anything. It is a terrifying prospect. And this all happened before Mythos came out. The old version of this got a score on its
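To make the signature-versus-zero-day point concrete, here's a toy Python scanner in the spirit of what Jason describes: it blocks only payloads whose hash appears in a database of previously seen attacks, so a brand-new payload sails straight through. Everything here, the payload strings and the "database", is invented for illustration.

```python
import hashlib

# "Signature database": hashes of payloads someone has already seen,
# reported, and shipped out in a signature update.
KNOWN_BAD = {
    hashlib.sha256(b"old_worm_v1").hexdigest(),
    hashlib.sha256(b"cryptominer_dropper_v3").hexdigest(),
}

def signature_scan(payload: bytes) -> str:
    """Block a payload only if its hash matches a known signature."""
    if hashlib.sha256(payload).hexdigest() in KNOWN_BAD:
        return "blocked"
    return "allowed"

# A previously catalogued payload is caught...
assert signature_scan(b"old_worm_v1") == "blocked"

# ...but a never-before-seen ("zero-day") payload has no signature yet,
# so the scanner waves it through. That's the gap Jason is describing.
assert signature_scan(b"brand_new_payload") == "allowed"
```

Real scanners use fuzzier matching than exact hashes, but the structural weakness is the same: the defense is reactive by construction, and an attack that finishes in four hours is over before the first signature ships.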


Jeremy Grater: Ha ha ha.


Jason Haworth: ⁓ basically the software effectiveness score. I'm I'm not gonna get into all the details of it for the old versions was in the 60s. This new version is getting closer to the 80s, like it's like 77, 77, 78%. So it is a step change in how effective this is. And if you think about it, the old version was like putting like a principal architect in this spot. And saying, hey, go try to figure out how to break out of prison. Now what you've done is you've given this thing and it's no longer a principal architect. It is a collection of super intelligent criminals that can work in coordination, that can build these pieces up. And now it's not just figure out how to get out of prison. It's figure out how to get out of prison, take the prison over while you're at it, go rob a jewelry store, hijack an airplane, and crash it into a train. Like that's the effectiveness that you're yeah, it definitely doesn't sound good, right? So if any of you guys have played the game Cyberpunk, like this is all the bad things happening that cyberpunk could happen all at once. And the technology companies are like, we're gonna start this thing that we call Glasswing, and that's gonna let us go through and send this model out earlier to the people that do the security analysis so they can make smart decisions as to how it is to implement these pieces so they can protect themselves from these kinds of attacks.


Jeremy Grater: That all sounds bad. That all sounds bad. Yeah.


Jason Haworth: Which sounds good, but really, I think Anthropic is trying to figure out a way to get the rest of the community to keep using their software, which is inherently buggy and problematic and exposes all kinds of things that open people up to attack, so that they can at least have some indemnification when their shit starts attacking people. Because that's what's gonna happen. And if we are not aware of this and how these pieces work, it's going to be very, very problematic. I'm not saying this as a doom and gloom thing, right? I'm saying we've been talking about this happening for a long, long time. And now this is like the real opening salvo in the real cyber arms race, because it used to be you had to have somebody that knew what they were doing to tell the thing what to do. Now all you have to do is say, go fetch me that.


Jeremy Grater: So... right.


Jason Haworth: And it's like you've got a boss, and it does all the things it has to do to do it. One of the things you can do is say, go fetch me the crown jewels from behind the fucking Fort Knox thing. And it goes, all right, boss, you got it. And it goes and does it. I mean, this is scary shit.


Jeremy Grater: Yeah. So I'm gonna get out of my comfort zone again and try to be somewhat optimistic here. Let's start with the fact that they discovered how powerful this thing is, and rather than going, yeah, let's put it out there and see what happens, for the most part they've locked it down and gone, let's hold on to this, let's not give this to everybody. We're gonna keep this under lock and key for the most part. Let's throw a hundred million bucks toward all these security companies and try to use this thing to close the gaps that it would otherwise exploit in the wrong hands. A hundred million dollars, maybe not a huge amount of money, but for these small companies that desperately need tools like this to do their jobs better, these are some good things, some positive things to come out of this terrifying creation, right? Like Frankenstein is looking a little bit more like the Young Frankenstein Frankenstein than the scary Frankenstein Frankenstein.


Jason Haworth: Well, I think the idea is to have Frankenstein fight Frankenstein. So the idea is to go through and have bots fighting bots. Yeah.


Jeremy Grater: Right. Yeah, no, that's the thing, right? So now we can use this thing to find all of the things that something like this would exploit. If this were in the wrong hands, and I'm not saying it's not, but if it were in hands that were perhaps more sinister, it's game over. Right? They would exploit it, our bank accounts would be wiped out, the water system would be destroyed, all of the bad things would happen in a blink.


Jason Haworth: I think you have to look at the motivation of the people using the tools. The organizations that would actually use these tools to do nefarious things have either got a financial impetus or a political impetus, something along those lines. This is a massive arms race, and we just gave everybody nuclear weapons. That's what's happening. And I don't think Anthropic is going through and saying, we're gonna give this to all the security pros so they can lock this down ahead of time. I think what they're really saying is, here's a foot race and we're gonna give the security companies a head start. Good luck. And


Jeremy Grater: Yeah. Yeah.


Jason Haworth: Here's the reality. Most companies don't move fast. And they've adopted software supply chains that are going to be highly exploitable for a long period of time, because they're not going to patch and fix these problems quickly enough. And if you get back to where the real threat is, it's our utilities, our infrastructure, the things that we rely on. Because the people that run and operate those are typically not the best funded. They typically have limited tool sets that they can call upon and actually use. And they tend to be regulatorily constricted, so it's harder for them to have good defensive capabilities, and it's harder for them to fix their software packages, because they don't have as many people working for them and they don't have the best people getting paid to do these things. So it's, how do I put it, the JV squad versus the Super Bowl champions. Go Hawks. But that's the fight we're heading into. These people aren't gonna attack the hard targets, they're gonna attack the soft targets. And the reality is that a lot of our infrastructure is the stuff that keeps us alive: water, power, gas.


Jeremy Grater: Ha ha ha.


Jason Haworth: That's what's gonna get attacked.


Jeremy Grater: So one of the things this raises for me: we talk a lot about regulation and how these things have been able to run wild. I don't know how to feel about it. This makes me question, if there were regulation, would that slow down our ability to win this arms race? And by "our" I mean Western society in general, more likely the upper, upper, upper class, and I don't include myself in that by any means. But when you talk about nation states getting access to these sorts of things, do you think it's a good thing that these things don't have a leash on them?


Jason Haworth: Me neither. No, I think it's insane. I think this is the same as unfettered nuclear weapon access, and it's actually scarier. We've had several guests on here who have talked about it in this kind of context, where this is scarier than any type of nuclear weapon. They've talked about it in terms of deep fakes, misinformation, moving things around. This is a different kind of attack. This isn't tricking humans into doing things. This is tricking the software into allowing things to happen, and it's programmable by humans. And


Jeremy Grater: So should billionaire companies still have just free rein of the... like, what would you do if you could wave a magic wand?


Jason Haworth: If I could wave a magic wand, I would have regulatory compliance against all these billionaire companies, and I would put in strict guardrails that say: if somebody uses what you have to attack somebody, you're on the hook. Because that's what's happening. That's the only way to get these folks to slow down and take these things seriously. And the concern of the


Jeremy Grater: But if they slow down, doesn't that open the door for the competition to do exactly what they've done?


Jason Haworth: It does, but this needs to be something that happens at a global level, just like we did with nuclear technology. We had to come together to say, let's have an armistice, an actual clampdown on nuclear weapons, so we can actually prevent people from making unfettered bombs. Keep in mind there are still two players with over five thousand bombs each in their arsenals. So that's the problem: we've unleashed this thing, given people access to it, let them run with it. Now the question is, do we try to regulate this in such a way that we can keep the genie in the bottle a little bit longer while we figure this out? I think our greed is too high, and we're not gonna let that happen. And the fact that the Trump administration put in this 10-year moratorium on AI, which gives us only nine years left now, and we've seen this massive order-of-magnitude escalation in this arms race just in the last couple of weeks. Man, what's gonna happen in the next 10 years is not knowable. And this is a lot scarier than I think people realize. The biggest thing I think people need to figure out is how to make fire and how to boil water. 'Cause they're gonna hit utilities.


Jeremy Grater: Yeah. Now what about, I mean, there are skeptics, from the articles I've read about this, who say the testing of this Mythos thing was not up to snuff, right? Like they played on easy mode to be able to pull this off. Is there legitimacy to that? Or is that just wishful thinking?


Jason Haworth: Sure, there's definite legitimacy to that. And do you want to know what the reality is? Most of the world is set to easy mode in cyberspace. The interwebs are already easily exploitable by existing tool sets, because people think that they've got ways to make these pieces work. The reality is that there are tools to lock all these things down, and companies pay a shit ton of money for them. The problem is that it takes human capital to set them up correctly, and people wind up making them more permissive than they should be. Because if they lock things down, it requires extra testing, it slows the velocity of business down, and it makes it harder to innovate. And slowing down innovation means death in this world; we are so driven to make things happen fast that people go, I'm just gonna let security go by and not worry. And then they get hacked and they're kind of fucked. This is like putting a Ring doorbell on the front of your house and a bunch of locks, and leaving the back door wide open all the time and the gate unlocked. The way that we've set up these modern systems, we think attackers are going to try to attack them head on. There's a really good book called A Burglar's


Jeremy Grater: Yeah.


Jason Haworth: Guide to the City, which basically says the best burglars out there are architects, because they understand how things work. They can read an architectural diagram and go, well, I see a service tunnel here. I don't need to go through the front door to get past security. I just need to figure out a way to get into the service tunnel. And then they get in the service tunnel.
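The locked-front-door, open-back-door pattern Jason keeps coming back to is often just one over-permissive rule nobody revisited. Here's a hypothetical audit sketch in Python that flags any rule exposing a service to the entire internet; the rule format and field names are made up for illustration, not from any real firewall product.

```python
# Hypothetical firewall-style rules. "0.0.0.0/0" means "any source on
# the internet" -- the network equivalent of the unlocked back door.
rules = [
    {"name": "web",      "source": "10.0.0.0/8", "port": 443},
    {"name": "ssh-temp", "source": "0.0.0.0/0",  "port": 22},    # oops
    {"name": "db",       "source": "0.0.0.0/0",  "port": 5432},  # oops
]

def audit(rules):
    """Return the names of rules exposed to the whole internet."""
    return [r["name"] for r in rules if r["source"] == "0.0.0.0/0"]

assert audit(rules) == ["ssh-temp", "db"]
```

A check this dumb catches a surprising share of real-world exposure, which is the point: the "easy mode" setting isn't exotic, it's a temporary SSH rule from two years ago that never got closed.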


Jeremy Grater: Or maybe an exhaust port that, if a proton torpedo was fired at just the right angle, could hit the center and blow the whole thing up. Yeah.


Jason Haworth: Just at the right angle. Yes. Exactly. Same kind of logic. It's not about attacking the thing head on. It's about finding the gaps in the security structure that's in place. Because the reality is that we have enough resources, enough goodness, enough capacity to feed everybody, to give everybody clothes and water. We have enough capacity to meet everyone's needs. And we don't do it. We consolidate these things, put them in a few people's hands, create tribalism, create reasons to not be nice and cooperate with each other. So we wind up in these situations where we've got all these great things, and all it takes is for someone to get pissed off and decide they're gonna go blow up shit for other people. Because human nature is not cooperative or kind. Human nature is tribalist. It's perimeterist, it's unsafe, and we will do what we can to maintain an advantage over other people. Now we're gonna accelerate these things, and we're gonna see the worst of human nature come out in these tools.


Jeremy Grater: So with that, what do we do besides grab the popcorn and watch?


Jason Haworth: That's a valid question. So, one: learn how to make a fire on your own.


Jeremy Grater: Ha ha ha ha ha.


Jason Haworth: Two, learn how to boil water, because you're gonna need those things to survive. Three, maybe start keeping your poop in a jar so you have fertilizer to feed the potatoes you're gonna have to grow in your backyard or on your deck. The survivalists have been right all along. You should always have basic skills so you understand how to make things work. Do I think society is gonna collapse as a result of this? No. Things are gonna get weird, they're gonna get scary, but people are gonna start pulling the plug on these automated systems.


Jeremy Grater: So are you saying the survivalists have been right all along? Ha ha ha.


Jason Haworth: And I think that's probably the reality. They're gonna start locking down internet access. They're gonna stop just letting the machines make these things automated. And you can read this in the sci-fi literature over and over again. There's a whole thing about that in Star Wars lore. There's a whole thing about that with the Borg in Star Trek. Dune is all about that, the machines taking over and these huge 10,000-year wars. Well, yeah, you're not entirely incorrect. There's a ton of speculative fiction that rolls around that. And ironically, a really good proxy way to understand this: the US decided to do a war of choice to try to shut these folks down so Trump could distract people from the Epstein files. And


Jeremy Grater: Isn't Iran getting close to that at this point? Sorry, I don't


Jason Haworth: The US has gone in and blown up a bunch of shit with no thought about how to do these things. Trump threatened genocide and a nuclear war against Iran, which is wrong and fucked up, and he shouldn't have done that. The allies have begun to turn against us, NATO has begun to turn against us, and people don't want to keep living with that kind of craziness, so they're pushing us out. At the same time, Iran is looking for ways to attack our infrastructure and fuck with us, because we bombed the shit out of them. If you don't play nice in the sandbox, you're gonna get sand kicked in your face. And the reality is, the way the cyber infrastructure and cyber framework works, everybody has access to nuclear fucking weapons. The idea of changing the physical battlefield from needing ships and planes and rockets and missiles to, I've got little drones that can fly in and blow shit up: that's what's happening on the internet, at a much faster scale. You can just decide, I'm gonna automatically create a million drones to go do this. It's low cost, it's easy to make happen, and people are doing it.


Jeremy Grater: So fire, water, unplug. Those are our takeaways from this nuclear arms race we find ourselves in.


Jason Haworth: Yeah. I mean, there's a reason why I've got solar panels, and I've still got a lot of my DVD media, and I've still got a VCR and an old CRT television hidden up in the attic somewhere. So when the bad shit does happen, I can at least watch some entertainment while the world burns.


Jeremy Grater: Ha ha ha.


Jason Haworth: My biggest advice is learn how to be nice to each other. Learn how to be nice to your neighbors, because you're gonna need them. Get to know the people around you, because you're gonna need them. Because when we start turning things off, which is what's gonna happen, companies, corporations, nation-state actors are gonna start turning this shit off, because that's the only way they're gonna survive. And it's not gonna take much. It's gonna be a few big infrastructure attacks


Jeremy Grater: Yeah.


Jason Haworth: where the water or the electrical grid gets gets trashed and people are gonna go, Nope, click. And they're they're gonna Yeah. Yeah. I mean it's I mean, granted, we don't know how many people might die in the process, but you know


Jeremy Grater: I really, really tried to steer us into an optimistic place to end this one, and I failed. So...


Jason Haworth: Well, no, no, on the optimistic side, everyone will learn new skills. They'll get to know their neighbors. It'll be great. It'll be really, really awesome. And everything that you've been storing in that attic and storage space for a rainy day, you'll get to use all of it, which is amazing.


Jeremy Grater: We'll all know how to sew. It'll be great. It'll be the seventies.


Jason Haworth: It won't just be there going to waste. You won't have to throw away those food packs that you bought just in case, 'cause you'll be able to use them. You'll actually need to use them. Yes, yes. And this is the crazy part: we are adaptive as a species, and we will figure out a way to cockroach our way through this problem, most likely, but we don't know what the outcome is going to be on the other side. And countries with poor cyber resilience


Jeremy Grater: You'll finally be able to use that survival kit that you bought. Yeah. Great.


Jason Haworth: and poor capabilities for countermeasures, like the US, Canada, most of Europe, basically everyone with the exception of China: these things are gonna not work well for them, and things are gonna get shut down. So on the plus side, if you know Mandarin, you're gonna be in great shape.


Jeremy Grater: I think it's only getting worse. I think we should wrap things up.


Jason Haworth: I'll keep it on the positive. The inevitable heat death of the universe will happen. None of this will last forever. It's gonna be okay. It's gonna be just fine. Yes.


Jeremy Grater: Right. Yeah. That's true. It's gonna be okay. All right. Well, hopefully it's okay long enough for us to come back Monday to BroBots.me. That's where you can follow us. Share this episode if you wanna, I don't know, scare the shit out of your friends. Thanks so much for listening. We'll see you Monday at BroBots.me.


Jason Haworth: Thanks, everyone. Bye-bye.