The Pentagon keeps promising to follow the law when using AI, but what are the limits?
By Sean Lyngaas, CNN
(CNN) — The Iran war has seen the US military use AI more than in any previous conflict, drawing on vast amounts of data — from satellites, signals intelligence and elsewhere — piped into software programs made by contractors like Palantir.
AI tools like Anthropic’s Claude have sifted through the data far more quickly than any human could, flagging potential strike targets for commanders, according to multiple sources familiar with US operations.
The ubiquity of AI tools in war has raised questions about whether those tools are contributing to errors on the battlefield. Some congressional Democrats have pushed the Pentagon to answer questions about whether AI may have been partially at fault for a US strike in February that hit an Iranian elementary school and, according to Iranian state media, killed at least 168 children. But what are the limits on the military’s use of AI?
Defense Secretary Pete Hegseth has emphasized that humans at the Pentagon, not AI agents, make the ultimate call on who to kill in war.
“We follow the law and humans make decisions,” Hegseth told the Senate Armed Services Committee last week. “AI is not making lethal decisions.”
Pentagon spokespeople have likewise said repeatedly that the military’s use of AI follows the law.
But beyond specifying that commanders are responsible for lethal targeting decisions and their consequences, the law places no explicit limits on where AI can be used in the so-called kill chain. The speed with which AI helps commanders make those lethal decisions is raising new questions about when, and how often, a human needs to be involved in the process, legal experts told CNN.
The lack of restrictions has led to some very public debates about the ethics of AI in warfare. The Pentagon is in a messy legal battle with a leading American AI firm, Anthropic, after the company insisted on limits on how its technology could be used, with Hegseth calling the company’s CEO an “ideological lunatic” over the demand.
“The story is ultimately one of how fast you choose to — or can afford not to — run with scissors,” said Gary Corn, a former deputy legal counsel in the Office of the Chairman of the Joint Chiefs of Staff. “And we see that the approach presently is, ‘We’re going to sprint as fast as we can with scissors.’ That’s the core of the Anthropic fight.”
US Air Force Colonel John Boyd coined the phrase “OODA loop” (observe, orient, decide, act) to describe the iterative cycle of decisions commanders must work through in battle. Much of the legal framework for the use of AI stems from pre-existing law tied to who is responsible when those decisions are made.
“AI is exponentially increasing” the speed at which commanders and their support staff will have to navigate OODA loops in battle, said Cory Simpson, a former legal adviser to US Special Operations Command.
In war, those who get through that loop the quickest have an advantage.
In a video posted to X by Palantir in March, Cameron Stanley, the Pentagon’s chief digital and AI officer, praised how Palantir’s Maven Smart System software has transformed US military targeting. He demonstrated how the software, which he said is deployed “across the entire Department [of Defense],” can identify potential military targets and move them into a “workflow” for military leaders to consider.
“This is revolutionary,” Stanley said. “We were having this done in about eight or nine systems, where humans were literally moving detections left and right in order to get to our desired end state, in this case, actually closing a kill chain.”
Rapid technological advancements mean that autonomous weapons systems can be wired to try to avoid civilians. But the technology is not ready to weigh — and experts say should never be entrusted with — the moral calculus of how much civilian collateral damage is acceptable in war. The US also faces potential adversaries that place far less emphasis on avoiding civilian casualties.
“The biggest concerns … are with the predictability and control over a capability that you put into operation,” said Corn, who is now an adjunct professor at American University’s Washington College of Law, referring to autonomous systems, including drones, that can operate without human involvement. “You have to have a confidence level that the system is going to operate within the bounds of what the law allows in targeting.”
What the law and Pentagon policy say
The law of armed conflict and international humanitarian law dictate that military commanders are responsible for minimizing, to the extent feasible, civilian casualties in war, regardless of the technology used to kill people. The commanders draw on counsel from judge advocates, attorneys embedded in commands across the military.
In 2023, as adoption of AI was expanding across the defense industry, the Pentagon issued a directive for military personnel on how to handle the technology. “Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” the directive says.
Another set of Pentagon guidelines, issued during the first Trump administration in 2020, used similar language, “appropriate levels of judgment,” to describe how officials can use AI.
The 2023 directive is still in effect. It leaves open to interpretation what constitutes “appropriate” human judgment.
“The Department maintains in [the 2023 directive] that a human operator has always been in the loop when using autonomous capabilities,” a Pentagon official said in a statement when CNN asked about the latest legal guidance for using AI in war. “The responsibility for the lawful use of any AI tool rests with the human operator and the chain of command, not within the software itself.”
Simpson, the former Special Operations Command legal adviser, said the need for legal experts at every stage in the process, from buying a weapon to firing it, is only going to grow.
“As much as [AI] is changing the application of weapons in warfare, it is going to change the professions behind them in how they need to train differently and think about processes differently,” Simpson said.
In the late 2000s and early 2010s, the pace of US military operations in Afghanistan was limited in part by the military’s capacity to gather and analyze the data needed to find potential targets, according to retired Gen. Michael “Erik” Kurilla.
Over the next decade and a half, data analytics, and later AI, allowed the US military to dramatically increase the number of strikes it could conduct against adversaries, Kurilla said last month at Vanderbilt University’s Institute of National Security.
With more data came the need for more humans to review and approve all of the potential targets and carry out missions to strike them.
AI “gives you decision advantage, taking tens of thousands and hundreds of thousands of data points to bring them to you in a more coherent fashion,” said Kurilla, who oversaw the US military’s 2025 bombing campaign against Iran.
A year later, the AI-supported “kill chain” that Kurilla helped build out has again been at work over Iran.
“At [US Central Command], we built a system that allowed us to dynamically prosecute over a thousand targets every 24 hours, with the capacity to do even more. Brad Cooper is using that same system today against Iran and improving it every day,” Kurilla said, referring to his successor at Central Command.
Targeting mistakes the US has made in the Iran war, including the airstrike that hit the elementary school, are renewing scrutiny of how the military uses AI. It is not yet clear whether AI played any role in the mistaken strike. The Pentagon is investigating the incident.
Corn said such an investigation would seek to answer the question: “Was it reasonable or unreasonable to rely on the intelligence, and by extension any AI system that may have been used and the output?”
Somewhere along the line, bad information was likely fed to the commander who approved the strike. And whether intelligence is curated by AI or not, the commander (or their advisers) has to know where it comes from.
“The AI is only as good as the data it can draw on — no different than humans are only as good as the data they can draw on,” Corn said.
CNN’s Zachary Cohen contributed to this report.
The-CNN-Wire™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
