
CMU, U.S. Army Partner To Create AI For Future Wars

Photo: AFC Commander General John Murray gives Brigadier General Matthew Easley the patch for the U.S. Army Futures Command. (Carnegie Mellon University)

On Friday, the U.S. Army activated its new Artificial Intelligence Task Force, which will have a hub at Carnegie Mellon University and develop military uses for artificial intelligence.

Military representatives at the ceremony said U.S. adversaries are already developing AI, and that collaborating with institutions like CMU is necessary to minimize American casualties.

"[AI] really means that we will enbable our soliders, our young men and women, to think and act faster on the battlefield with more clarity, more precission," said Secretary of the Army Mark Esper, "and in some ways able to lift the fog of war that can sometimes lead to bad outcomes." 

AI expert Paul Scharre served as an Army Ranger in Iraq and Afghanistan, and is the author of the book “Army of None: Autonomous Weapons and the Future of War.”

He agreed the U.S. military has little choice regarding AI. He said not pursuing this tech is like riding onto a World War II battlefield on horseback, and then encountering planes.

"Any country that wants to have an adequate defense is going to have to invest in artificial intelligence and robotics," he said.

But Scharre cautioned that the military must be careful with its AI tools.

“We wouldn’t want to see a flash war, where algorithms interact in an unexpected way that leads to things happening very, very quickly, faster than humans can respond and then a crisis or a conflict starting to spiral out of control,” he said.

Critics, including AI entrepreneur Elon Musk, argue that militarizing this technology poses an existential threat to humanity. In 2017, Musk and more than 100 other tech leaders signed an open letter to the UN, in which they called autonomous weapons a “Pandora’s box” and warned these weapons may eventually be used “against innocent populations" or be "hacked to behave in undesirable ways.”

Ultimately, Scharre said that military AI will ideally have joint human-machine cognitive systems to analyze situations. He added that most future military AI uses will likely have nothing to do with weapons, but rather focus on logistics and energy efficiency.

“I would say 99.9 percent of those applications won’t be things that are problematic or that raise tricky questions,” said Scharre. “But some of them will and we’ve got to talk about those.”

He said it’s important that the Department of Defense work closely with AI engineers and be transparent in how it plans to use this tech.

WESA receives funding from Carnegie Mellon University.

Sarah Boden covers health and science for 90.5 WESA. Before coming to Pittsburgh in November 2017, she was a reporter for Iowa Public Radio. As a contributor to the NPR-Kaiser Health News Member Station Reporting Project on Health Care in the States, Sarah's print and audio reporting frequently appears on NPR and KFF Health News.