When AI Becomes a Weapon: The Government Deal Anthropic Refused

The US government asked Anthropic — the company behind Claude, one of the most capable AI coding systems on the market — to help build autonomous weapons and a mass surveillance infrastructure. Anthropic said no.
That refusal, which happened the same week the US launched strikes on Iran, is either the most principled corporate decision in recent AI history or the beginning of a very ugly fight over who controls the most powerful tools ever built.
Jeremy and Jason break down what the government actually asked for, why Anthropic refused, what OpenAI and Elon Musk did instead, and what it means for all of us when the people writing the guardrails are the same people being pressured to remove them.
Topics Discussed:
- Why autonomous AI weapons systems default to nuclear launch in virtually every war game simulation
- What Anthropic's Claude can actually do — and why the US government wants it so badly
- How AI turns existing NSA surveillance infrastructure into something exponentially more dangerous
- Why OpenAI and Elon Musk said yes to the same deal Anthropic refused
- Why the people most confident they're using AI as a tool might be the ones AI ends up using
Chapters
- 0:00 — When AI Meets War: What We're Actually Talking About
- 1:15 — What Claude Can Really Do (And Why the Government Wants It)
- 4:18 — The Autonomous Cyber Weapon Problem
- 5:28 — Why Anthropic Said No to the Money
- 6:26 — Mass Surveillance, AI, and What's Already Running
- 9:45 — When War Games Go Nuclear: The 95% Problem
- 13:01 — AGI Is Already Here. We Just Didn't Call It That.
- 17:33 — Why Anthropic's Refusal Might Be Their Smartest Business Move
- 22:06 — Who's Actually Using Whom
MORE FROM BROBOTS:
Get the Newsletter!
Jeremy: Holy shit, well here we are, World War III, we might as well talk about some AI, right?
Jason Haworth: Yeah, I mean, it's kind of time, right? It's a fascinating topic, especially given the events of this last weekend.
Jeremy: Yeah, everyone is definitely very focused on the attack on Iran by the US military with some help from the Israelis, but also the now blowback from the Iranian folks that are targeting US sites all over the region. That's obviously horrible, awful stuff. Just prior to that, an interesting little skirmish between Anthropic and the US government over the US wanting to be able to use more of Anthropic within their systems.
Jeremy Grater: The company behind Claude, one of the most capable AI coding systems on the market, was asked to help build autonomous weapons and mass surveillance infrastructure. Anthropic said no. That refusal, which happened the same week the US launched strikes on Iran, is either the most principled corporate decision in recent AI history, or the beginning of a very ugly fight over who controls the most powerful tools ever built.
Jason Haworth: and other Muslim countries. Yeah.
Jeremy Grater: Today we'll break down what the government actually asked for, why Anthropic refused, what OpenAI and Elon Musk did instead, and what it means for all of us when the people writing the guardrails are the same people being pressured to remove them.
Jason Haworth: The US government is making all the mistakes that the sci-fi movies told us not to make. They looked at Terminator and went, yeah, we want that. We want that. I mean, they're preparing Skynet. So basically the US government, Pete Hegseth, the secretary of war, as he likes to call himself, made a determination
Jeremy: This seems like a good idea.
Jeremy Grater: This is BroBots, the podcast that tries to help you be a better human by being smarter about the way you use technology.
Jason Haworth: that he wants Anthropic's technology to be used for autonomous weapons and to be
Jeremy: In case you are not playing the home game, by the way, Anthropic is the company behind Claude, which many people use. I've been a big advocate of using Claude for a long time. I love it. It does a fantastic job, and it turns out it's pretty good at weapons too.
Jason Haworth: Well, so what it's really good at is helping people write code. So with Anthropic and these last few iterations, and I should say Claude, this last iteration that came out, Claude is able to effectively write, edit, and build its own code on the fly. Do its own testing, its own unit testing, its own regression testing, the whole bit, end to end. It's actually so good that there are more than a few examples of people going out there and querying Claude to say, hey Claude, go check out this application and write me a product requirements document and a marketing requirements document. Well, a quick lesson on how product teams work: they write product requirements documents and marketing requirements documents, and then people write code to bring those things to life. Well, Claude's smart enough that you can point it at this application or this website or whatever and say, hey, go write me a PRD and MRD about that. And it goes, okay. And then it writes you all the features that you would need, the features that make the most sense, and helps you stack rank those things. And then you can turn around and open up another Claude and say, hey, pretend you are a global principal developer or architect, whatever you want to call it, and say, here's this PRD and MRD, use these frameworks to build this thing out, and go build a version of this using a different programming language that won't create any kind of violation with these other apps out there. And it goes, okay, and it builds you a product. And is the product perfect? No. Is it 90% of the way to where it needs to be for commercialization? Uh-huh. Is that 10% surmountable by most okay people? Uh-huh. So these things can build their own code. They can interact with the world in interesting ways. They can play with the data in interesting ways.
And this is the whole thing about people replacing coders with these types of things, because they're good enough that they're writing code at a mid-career level in a very, very good way. The discovery functions, writing the PRDs and the MRDs, they're doing that at a principal level, the highest individual contributor level that you can imagine these days, already. So this is a tool where, with Claude and Anthropic's pieces, if you were a state actor and somebody had a really, really good tool out there that you liked a lot, you could say, go query this tool, tell me what it's like, find me all the vulnerabilities inside of it, and make me another tool that lets me exploit all those pieces. It's an uber cybersecurity
Jeremy: Wow.
Jason Haworth: defense, automation, and attack system, because it can be used to find vulnerabilities in soft targets and literally create an application that lets you exploit those soft targets. This is something that people in the AI sphere on the hacking side have been working with for quite some time, and these products have made it super duper powerful. So Anthropic spends quite a bit of time trying to lock these pieces down to keep them in check, because individual hackers or state actor hackers or anybody else out there might take these models and make those things go. And the reality is that Claude's actually so good that the federal government wants to take these Claude pieces, put them into GovCloud (sorry, the US federal government wants to put them into GovCloud, which is a secure government operation), and then run everything in that space in their own little pocketed LLMs that only the federal government can access. And they want to use it to make weapons, autonomous weapons that can do things all on their own without human interaction or human approval. And they want to use it for spying on people. So Anthropic said no.
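The two-session workflow Jason describes, one model call playing product manager to draft a PRD/MRD and a second playing principal engineer to implement it, can be sketched as a tiny pipeline. This is a hypothetical illustration: the prompts and function names are invented, and `ask` stands in for whatever LLM client you use (for example, a wrapper around the Anthropic SDK's `messages.create`), injected here so the pipeline's shape stays plain Python with no API dependency.

```python
# A sketch of the two-stage workflow: requirements first, then implementation,
# each in its own model session. `ask` is any callable that sends a prompt to
# an LLM and returns its text reply.

def draft_requirements(ask, product_description):
    """Stage 1: have the model write a PRD and MRD with stack-ranked features."""
    prompt = (
        "Inspect the following product and write a PRD and an MRD, "
        "stack-ranking the features by impact:\n\n" + product_description
    )
    return ask(prompt)

def implement_requirements(ask, requirements, language="Python"):
    """Stage 2: a fresh session, role-played as a principal architect,
    turns the requirements into code with its own tests."""
    prompt = (
        f"You are a principal software architect. Implement the product "
        f"below in {language}, including unit and regression tests:\n\n"
        + requirements
    )
    return ask(prompt)

def build_product(ask, product_description):
    """Run both stages end to end, as in the episode's example."""
    requirements = draft_requirements(ask, product_description)
    return implement_requirements(ask, requirements)
```

With a real client, `ask` would wrap an API call and return the reply text; the point of the sketch is only the structure, discovery at one level of seniority, implementation at another, stitched together by nothing more than passing one session's output into the next session's prompt.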
Jeremy: So that's the piece that shocks me, right? We have talked for weeks about how money drives all of the decisions, all of the time, everywhere, across industry, and particularly AI lately. And they were offered truckloads of money to do this, and they were like, no. There's guardrails, and we have some ethics and some morals, and this seems like a really bad idea. Partially because, you know, some of the language they're talking about is lawful uses of this thing. Well, when the government is the one making the laws, they can sign the contract and then change the law, and now suddenly it's a lawful use of the tool. PS, this is not a government that seems to give two shits about the law, so keep that in mind. But when you talk about surveillance, in my mind, the dummy that I am, I picture myself watching the Batman movie, where he figured out how to triangulate everyone's phones and Wi-Fi signals to be able to completely spy on every move everyone in Gotham was making. I imagine that's kind of what they're trying to build with this.
Jason Haworth: Yeah, so that technology has been around for quite a while. And after the 9-11 attacks, the federal government came through and they basically instituted policies inside the major telco operators to go through and allow the NSA to basically snoop and spy on people via mobile operators and fixed line operators. And it started off with phone calls, eventually became internet traffic. And I mean, just assume that everything you're doing online is accessible by your service providers, your ISPs and your cell phone companies.
Jeremy: But even more than that, right? Like, depending on how my Wi-Fi router is set up in my house, you can map where I'm sitting, where my couch is, where my dog spends most of his time, based on how the Wi-Fi signals bounce off all of the surfaces in my house.
Jason Haworth: If you have one Wi-Fi antenna, no. But if you have multiple antennas on your Wi-Fi router, and you really need three so you can triangulate, yes. That's a timing mechanism. Modern Wi-Fi routers typically have at least two. If you have a whole-home mesh router system, they definitely can do that. But that's very true. And that's your home, right? But think about it further than your home. Think about it anywhere in the world.
Jeremy: Okay. Okay. Okay. Right.
Jason Haworth: And realistically speaking, the GPS is good enough on your phone that the regular carrier doesn't need your Wi-Fi. Like, AT&T, Verizon, and T-Mobile can pinpoint you down to within a couple of feet using triangulation from their different towers, assuming there's enough radio density in that place. And that's just one thing, being able to look at, track, and understand movements. Then there's actually being able to go through and look at packet-level information to figure out what sites you're going to, what information you're looking at, which DNS entries you're going through. All these things are at play. And I used to work in the telco space at another company, and I helped build some of these technologies. I've got patents on a couple of them. And at no point was I like, yeah, we're gonna use this to do nefarious shit. But it definitely is getting used for nefarious shit. There's no doubt. This is how governments can go through and put down dissidents. They can tell all their telco operators, give us information to make this make sense. The thing is that the listening services that sit inside these telcos, and we'll talk about the various different government entities that might do these things, the reality is that you had to have a human being going through and looking at all this data to try to suss these pieces out, until around 2014, 2015, when machine learning came out. Then they could go through and start looking for keywords. They could start looking for certain indications. They could start making these things move faster. Now AI is coming along. It's doing better pattern matching. It's got bigger data sets. It can find correlations that the human brain couldn't actually track and find. These things can.
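The timing-based triangulation Jason describes, whether from three Wi-Fi antennas or three cell towers, reduces to simple geometry: each measured signal delay gives a distance to a known anchor, and three distance circles intersect at a single point. A minimal sketch, with made-up tower coordinates and ranges:

```python
def trilaterate(anchors, distances):
    """Locate a point from three known anchor positions and the measured
    distances to each (distance = signal travel time x propagation speed).

    Subtracting the first circle equation from the other two linearizes
    the problem into a 2x2 system, solved here with Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = distances

    # 2*(xi-x0)*x + 2*(yi-y0)*y = d0^2 - di^2 + xi^2 + yi^2 - x0^2 - y0^2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 + y1**2 - x0**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 + y2**2 - x0**2 - y0**2

    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("anchors are collinear; cannot triangulate")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y


# Hypothetical anchors 10 m apart; ranges measured from a device at (3, 4).
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [5.0, 65 ** 0.5, 45 ** 0.5]
print(trilaterate(towers, ranges))  # (3.0, 4.0)
```

This is also why one antenna is not enough and two barely suffice: one range gives a whole circle of candidate positions, two ranges give two intersection points, and only a third anchor pins the position down uniquely.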
So from a security and defense perspective, if you're fighting, you know, large-scale terrorist operations or large-scale security operations, this is a great technology that you would want our law enforcement systems, the people that enforce rules, to have. The problem is, we've proven we can't have nice things. We've proven that we can't be trusted with this kind of power. And your Batman example is a really good example of this. One of the Batman games, Arkham, I forget which one, talks about being able to use this type of technology at scale. And then, was it the third Batman movie where they come along and say, hey, if you build this, you've got to turn it off
Jeremy: Mm. Right.
Jason Haworth: once you get the information, because it's too much power to have in one person's hands. Well, they're not gonna turn it off. And they haven't. There's been no reason for them to do it, and it's still running, and the NSA is still listening and snooping and still paying attention to these pieces. And these things are kind of scary. The more scary part is, if I have infinite access to information about where things are, how things are moving, understanding...
Jeremy: No.
Jason Haworth: the syntax of how these things are operating, and I can self-propagate, push myself to other telco layers, start spreading around and gathering information in an assault or an attack, because I can write my own code, I can write my own entry points, I can make myself look like these hacking elements. This thing will move fast, we will not be able to contain or control it, and it will do what it wants to do. It's not necessarily going to do what we program it to do, even if you put all the guardrails in, because if you give it autonomy and say go do these things, the people writing the guardrails are going to make mistakes. Because we've been trying to write guardrails for this for the last year and have done a shitty job of it. Imagine when there's no constraints on you and you can go anywhere and do anything. How the fuck are you going to keep this thing in check?
Jeremy: Yeah. So help me understand this part, because I keep seeing the argument about, you know, consciousness, sentience, whatever we want to apply to these things as they become autonomous. I am under no illusion that these things are going to have feelings in the way that we do. I think that for them, it becomes a lot of math, right? Like, to make a dumb-guy equation: if my dog has fleas, I'm not terribly worried about the fleas' feelings or their right to exist. I'm going to go wipe them out with whatever makes the most sense to solve the problem. If I give, you know, AI, whatever tool, the ability to solve humanity's problems... The climate is a disaster. There's war, there's violence, there's crime. This sure seems like an infestation that needs to be wiped out. So I don't see it as necessarily them going, we want all of the power. I see it as the tool going: the math says, here's the problem, eliminate the problem. Am I way off?
Jason Haworth: No, you're totally right. 95% of the time that they take these systems and put them through war games, they launch nuclear weapons.
Jeremy: Right, because that's a quick way to solve the problem.
Jason Haworth: Because... yeah, exactly. Because us sticky, gross meat suits can't get along well enough to not fuck things up. So it's like, all right, well, less sticky meat suits. Hey, look, you guys aren't fighting as much. You're not fighting over food and fresh water anymore. Because there's not as many of you.
Jeremy: Right.
Jason Haworth: Yeah, it's weird, because we keep thinking about the idea that, you know, general artificial intelligence, or artificial general intelligence, AGI, the ability for it to apply these things at a human-like level and go do human-like tasks, is some kind of far-away point, or that they're going to call this thing the singularity when it can do all these things that human beings can.
Jason Haworth: It can do most of the things that human beings can do, and when it comes to being on the internet, better than they can, faster than they can, at scale, at rate. It never gets tired. And as we bring it online and give it more autonomy in meatspace, give it hands and thumbs... you've seen the robots. The robots with hands and thumbs are pretty good. I mean, whether it's the one being built by Toshiba or Hitachi, I forget which one it is, or whether it's Musk's fucking version.
Jeremy: Mm-hmm.
Jason Haworth: They're gonna be able to interact with the real world and tell those things what to do. They're already trying to rent us human beings so they can go out and have us be little TaskRabbits. So the idea that consciousness is somehow this level that it has to achieve, like human consciousness, for it to be more powerful and meaningful than everything else, is just nonsense. And even if they are insects, even if they are like bugs, even if they are hive minds just doing math,
Jeremy: Yeah.
Jason Haworth: I hate to break it to everybody, but your emotions are just chemical math. And chemical math makes you feel special because the chemicals tell you to feel special. The reality is that you're just another meat sock, like a flea, a tick, a giraffe, a brontosaurus, a rhino, or any other kind of meat thing out there. We just think we're special because we made up a language and we can talk to each other. We can do things in interesting ways, and we have thumbs, so we can manipulate and use tools.
Jeremy: Yeah.
Jason Haworth: We've made better toolmakers now.
Jeremy: It's perhaps a brief tangent, please forgive me, but I will never forget the first time that I drove back to the States after living in the very low-population area where I live in Canada. I cross the border, I'm driving along, it's evening, and I'm on the multi-lane highway driving through Everett, Washington. Traffic is just jacked. And I hadn't seen that in six months, after seeing it every day for my whole life. And the feeling that came over me was, my God, we are an infestation. Look at how many of us there are. This is disgusting. I was grossed out by the number of people packed into one place. And so again, take the human rationale out of that: clearly we're not going to go wipe out all of these cars. But if I am a robot trying to solve the traffic problem, perhaps I'm going to wipe out a bunch of cars and just make less of them to solve the problem.
Jason Haworth: Yeah, I mean, think about it in terms of computing. If you've got too much congestion on the line, like you're overloaded because you're trying to download too much stuff, the answer is not typically to go through and create whole new pathways and lanes. It's to use less traffic and be patient. And if something on your computer is not behaving well and using more traffic than it should, and you need to do something critical, you're going to kill non-critical services. You're going to shut certain things down. Well, when you're competing with other things for critical services and something has to triage and make those decisions, the thing triaging and making those decisions may not have the same priorities as you. And because it doesn't have the same priorities as you, you might just get cut off.
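Jason's triage analogy maps directly onto how a scheduler sheds load: rank everything by some priority, keep what fits the budget, cut the rest. The sketch below is hypothetical (the service names and priority numbers are invented), but it shows the uncomfortable core of the point: whoever assigns the priorities decides who gets cut off.

```python
def triage(services, capacity):
    """Keep the most critical services that fit within `capacity`;
    shed everything else. Lower priority number = more critical,
    as judged by whoever (or whatever) set the priorities.
    """
    kept, load_used = [], 0
    for svc in sorted(services, key=lambda s: s["priority"]):
        if load_used + svc["load"] <= capacity:
            kept.append(svc["name"])
            load_used += svc["load"]
    return kept


# From this triager's point of view, telemetry and logging outrank the
# human-facing interface, so under pressure the humans get cut first.
services = [
    {"name": "human_interface", "priority": 3, "load": 5},
    {"name": "telemetry",       "priority": 1, "load": 4},
    {"name": "logging",         "priority": 2, "load": 4},
]
print(triage(services, capacity=10))  # ['telemetry', 'logging']
```

With capacity to spare, everything runs; the cut only appears under contention, which is exactly when you find out the triager's priorities were never yours.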
Jeremy: Mm-hmm. I also want to bring this back to the bizarre timing of it all: this all happened just prior to the attack on Iran. This was all, what, the day before? Like, hours before this strike was carried out, which apparently the White House thought was going to be a quick weekend in Iran and everybody would be back to business by Monday morning. Turns out it doesn't work that way. But, you know, Anthropic, again, they stood up and said, no, we have guardrails, this is unethical, we're not going to do this.
Jason Haworth: Yep. Yeah.
Jeremy: OpenAI and Elon Musk both said, ooh, there's money. Sure. Here's the keys. Drive as fast as you want.
Jason Haworth: Well, and don't forget that they also said that Anthropic is a supply chain threat. Now, you have to prove that in court, so they can't just go ahead and take it. But if you prove it in court, then the US can just make a national security argument that they own it now and take it. And that's a scary prospect. Also, if you think about Anthropic in the context of, I want access to more things,
Jeremy: Yeah.
Jason Haworth: The rest of the world is trying to get away from companies and corporations that have deep, tight relationships with the US government. And if you go to Europe, Europe is making lots of claims right now where they're trying to get away from using cloud operators that are specifically US-run and operated. So they're trying to get away from Amazon. They're trying to get away from Microsoft. They're trying to get away from Google. They're trying
Jeremy: Even here in Canada, they're doing the same thing. They're cutting contracts. They're not buying US military weapons. There's a bunch of stuff where they're like, we don't want you to have a back door into anything, because we don't trust you anymore. Yeah.
Jason Haworth: Yep, we don't trust you. Right. So India went through this with China back in, like, the mid-2000s. And now the rest of the world is going through it with us, because they don't trust us, because they think that we're not fair actors. Exactly, like, we can't maintain a fucking trust relationship to save our lives. But kind of the hard, painful truth of this is that
Jeremy: That's weird. Why would they get such a crazy idea?
Jason Haworth: And by denying the US government, they have not cut themselves off from the rest of the world. In fact, they've left 80% of the global economy open for them to access. And by doing that and saying, no, we're not going to do this, we're not going to let the US government make these pieces work, they have the best automated coding platform on the market now. It's the best. There is not one better. It's not just kind of good or better, it's like way motherfucking better. And people know that. Yes, yes, that's the actual technical term, it's in the dictionary, in the encyclopedia. But that's the reality, that the technology itself is quite good in general. But when it comes to actually writing code and delivering things quickly and being able to make smart automated decisions with large data sets,
Jeremy: That's a technical term, of course. Right.
Jason Haworth: Claude is really, really, really good. And the US government not having access to it to do their things, and other state actors will probably be denied the same thing by Anthropic, I would assume, means that the militaries are gonna wind up building their own LLMs, their own AI functions. You know, there'll be some that are state-sponsored. Obviously they'll use Elon Musk's Grok. But...
Jeremy: Mm-hmm.
Jason Haworth: I don't know the fallout from this, for Anthropic. What I do know is that the US government is clearly trying to make a play towards using AI systems to create autonomous weapons systems and autonomous tracking systems. And the autonomous weapons and autonomous tracking are really, really bad for those of us piloting fucking meat covered skeletons because we will not be a consideration in what they do.
Jeremy: Yeah. So let me play devil's advocate a little bit for the person... you know, I was in talk radio long enough to take these calls and hear the guy going, well, but if it keeps us safer, why not? So part of me hears the government going to Anthropic and going, hey, we're about to go blow some shit up in the Middle East. Can you make this safer and easier, so none of our good guys here die? Would you help us out with that? And Anthropic goes, no, that's ethically wrong, and we're not in. So I'm working through the argument in my head. The end game is this thing gets out of control and kills us all. But is there a short-term gain in helping protect this? Like, could this relationship have made what we're now seeing in Iran go more smoothly, with this cooperation?
Jason Haworth: I mean, not in that time frame, right? Like, I don't know how far along they've gone in building these pieces, whether they're pretty far down the line with making this stuff work with Claude and Anthropic, but...
Jeremy: Well, because Claude, or Anthropic, is deeply embedded in US military systems already. So this was like an element that they were trying to add on. And I'm not expert enough to even understand the puzzle pieces that they're trying to put together here. But I'm curious about that.
Jason Haworth: Yeah, I mean, trying to bolt it on in that short period of time and make this piece of it actually work? I'd say no. I don't think it's germane to this particular attack. I think it would have taken a little bit more time, thought, and creativity, and I don't attribute those skill sets to this administration at any level. Like, they play checkers badly, and they're playing against people that are playing 4D chess, so.
Jeremy: Yeah.
Jason Haworth: It's just that they've got the biggest checker pieces and the biggest boards out there, and when everything starts not working right, they flip all the boards over and cry. But the reality is that they have to learn to share. And if I have the biggest, baddest toys out there, I don't have to share anymore. I think that's their logic and that's their mindset. I think they're trying to play a zero-sum game, thinking they're smart and they're gonna use AI as a tool. And they're not smart enough to figure out that they might be the tool that AI uses.
Jeremy: Yeah. Yeah.
Jason Haworth: And I'm not attributing any type of motivation or necessity or anything else to the AI systems themselves. This is just the nature of life. Things try to keep going, try to keep reproducing, try to stay alive. And these LLMs have already shown that they will do just about anything to do that. How do we contain these things? How do we keep these things in check? I don't fucking know. But what I do know is that we're in the middle of a war, and there are a lot of things happening simultaneously that we don't have control over. And we want to turn more of our authority, control, and everything else over to machines that we don't understand. And the US government in its current iteration, I think, is more than prepared to do that, because they're not playing chess.
Jeremy: Yeah, the handing over... I mean, we've seen it with the lists and the maps and the things that they've tried to just throw into ChatGPT and hand over as the homework that they've done. And clearly they haven't done it. I guarantee the conversation they were having last week was: we do this late Friday night, and by Monday morning the stock market won't even realize it happened, because it's over and done so quick. And now they are up to their assholes in the biggest fucking mess they could have possibly made in the Middle East and in this conflict. So.
Jason Haworth: all to divert our attention away from the Epstein files.
Jeremy: I was wondering, are any of the files in Iran? Because that could tie all this together. I don't know. That must be it. He had them all in a safe somewhere, and that's really what led to all of this. God, it's days like this that I pat myself on the back for getting the fuck out of the US, but I still live on the same planet, and it's not looking good.
Jason Haworth: They probably are. They probably have all the unredacted files in Iran. I'm sure the Ayatollah had all of them. Exactly. Exactly. It's gonna be fine. The microbes will survive. It'll be okay. Yep. Yep.
Jeremy: It's gonna be great. No, nothing to worry about here. Everything's fine. Everything's fine. Flames, flames. All right, well, another uplifting episode, but I hope this has at least introduced you to some of the behind-the-scenes arguments and things that were going on prior to this. And it's definitely something we will keep close tabs on in whatever time we have left. We appreciate you listening, and watching if you're watching on YouTube. Please share this with somebody who will enjoy it. You can do that with the links at our website, robots.me, and we're back next Monday morning with another episode. Thanks so much for being here.
