The Defense Department’s cutting-edge research arm has promised to make the military’s largest investment to date in artificial intelligence systems for U.S. weaponry, committing to spend up to $2 billion over the next five years in what it depicted as a new effort to make such systems more trusted and accepted by military commanders.
The agency sees its primary role as pushing forward new technological solutions to military problems, and the Trump administration’s technical chieftains have strongly backed injecting artificial intelligence into more of America’s weaponry as a means of competing better with Russian and Chinese military forces.
While Maven and other AI initiatives have helped Pentagon weapons systems become better at recognizing targets and doing things like flying drones more effectively, fielding computer-driven systems that take lethal action on their own hasn’t been approved to date.
“DoD does not currently have an autonomous weapon system that can search for, identify, track, select, and engage targets independent of a human operator’s input,” said the report, which was signed by top Pentagon acquisition and research officials Kevin Fahey and Mary Miller.
“Technologies underpinning unmanned systems would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force,” the report predicted.
The report noted that while AI systems are already technically capable of choosing targets and firing weapons, commanders have been hesitant about surrendering control to weapons platforms, partly because of a lack of confidence in machine reasoning, especially on the battlefield, where variables could emerge that a machine and its designers haven’t previously encountered.
Michael Horowitz, who worked on artificial intelligence issues for the Pentagon as a fellow in the Office of the Secretary of Defense in 2013 and is now a professor at the University of Pennsylvania, explained in an interview: “There’s a lot of concern about AI safety – [about] algorithms that are unable to adapt to complex reality and thus malfunction in unpredictable ways. It’s one thing if what you’re talking about is a Google search, but it’s another thing if what you’re talking about is a weapons system.”