From Military Officer Magazine: Teaming Up With AI
Photo by Staff Sgt. David Dobrydney/Air Force

(This article by Hope Hodge Seck originally appeared in the November 2023 issue of Military Officer, a magazine available to all MOAA Premium and Life members. Learn more about the magazine here; learn more about joining MOAA here.)

 

Around Christmas 2022, a specially modified F-16 fighter test aircraft took off from Edwards AFB, Calif., with a hidden co-pilot aboard: an algorithmic artificial intelligence (AI) program capable of flying the jet from liftoff to touchdown without any human inputs. A human safety pilot went along for the ride. This first AI-controlled live flight was a major milestone for the Defense Advanced Research Projects Agency, or DARPA, which has also pitted AI agents against human fighter pilots in simulator-based showdowns. Here, too, the algorithms proved not only capable but dominant: With agile machine learning and the ability to process data and make decisions faster than the most experienced human ace, the AI pilot swept its human competitor 5-0 in simulated dogfights.

 

While the military is likely still years away from fielding AI controllers on fighter jets, one of the major challenges DARPA researchers have encountered in experiments is not technology-based but uniquely human: Some pilots don’t trust their machine teammates and won’t give them the chance to prove themselves. An earlier experiment in DARPA’s Air Combat Evolution (ACE) program showed that one pilot in aerial combat tests was turning off his AI assistant before it could help him fly, convinced it would make mistakes and endanger his mission. In an ongoing ACE experiment, DARPA is now hooking pilots in trainer jets up to physiological sensors, monitoring responses such as heart rate and sweat production to understand whether pilots are trusting the AI agents in the cockpit.

 

Trusting the Machine

Fear of handing over some aspects of control to a machine is understandable. But Lt. Cmdr. Trevor Phillips-Levine, USN, director for the Navy’s Joint Close Air Support branch, points out that Air Force F-16 pilots have been flying with a computer co-pilot for nearly a decade. The jets’ automatic ground collision avoidance system (Auto-GCAS) overrides pilot controls and pulls up when it senses the aircraft is about to crash into the ground. When the system was added to F-16s beginning in 2014, it often irritated pilots with false alarms and “nuisance pull-ups,” prompting some to disable it. But the technology improved, and with new rules from leaders requiring pilots to use it, Auto-GCAS began to prove itself. To date, it has saved the lives of 11 pilots who passed out or became disoriented in aerial maneuvers.

 

The term “artificial intelligence,” which conjures up imagery from the “Terminator” movies, might also be a barrier to understanding, making the technology seem more mysterious and removed — and more capable — than it truly is.

 

“There’s a lot of chaff” around the buzzword of AI, Phillips-Levine said.

 


Air Force Research Laboratory staff watch while a pilot uses the automatic ground collision avoidance system (Auto-GCAS) in a flight simulator. (Photo by Richard Eldridge/Air Force)

 

DoD calls AI “the ability of machines to perform tasks that normally require human intelligence,” and it has created a new office to manage AI development, as well as working groups to study the ethics of using AI decision-making in warfare. Most of the tasks on the agenda of the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO) are far less dramatic than machine-controlled dogfights in the sky. Instead, the office is harnessing AI to process massive amounts of data faster and more effectively, enabling better resource management and threat detection.

 

At every level, trust remains a critical element: Humans who work with AI systems must give them the opportunity to succeed but be prepared to intervene when they fall short. It’s clear that understanding AI and becoming comfortable with its use will be essential for military practitioners and leaders who want to be technologically equipped and ready for the next fight. And how the U.S. military wields AI tools and weapons will set it apart from its adversaries and set the tone for the future of warfare.

 

Decoding Data

For the CDAO, which was formed in 2022 as a successor to the Pentagon’s Joint Artificial Intelligence Center, one top achievement from its first year was a collaborative project with U.S. Transportation Command and the Defense Logistics Agency to locate fuel-carrying cargo ships worldwide, an essential piece of the fuel supply chain that sustains global U.S. military operations. Working with previous methods, the agencies had identified 1,000 fuel carriers, but with improved AI algorithms pulling data from a broader range of sources, they located 14,000, said Margie Palmieri, the CDAO’s deputy chief digital and AI officer. Now that DoD has a better picture of global fuel distribution, she said, leaders can make better decisions about how to manage it.

 

“AI is definitely driving us to better data quality,” Palmieri said.

 


An Air Force pilot uses the Auto-GCAS to demonstrate an automatic fly up maneuver in a research flight simulator. (Photo by Richard Eldridge/Air Force)

 

Another upcoming project, in collaboration with the U.S. Coast Guard, will integrate AI algorithms on aircraft used for maritime detection, allowing computers to analyze multiple video streams collected from flights over the ocean and flag anomalies that might prove to be a mariner in distress or an act of piracy.

 

“It helps sensor operators so they’re not constantly looking at video all the time,” Palmieri said. “It creates faster response times and saves aircraft hours, because if you’re looking for something, it’s a lot easier to have an AI algorithm sort through the noise, especially if you’re looking at a lot of water.”

 

AI Elephant Wranglers

Much of the office’s current work focuses on helping decision-makers understand AI tools and how to use them — and eliminating barriers to adopting new AI processes. The CDAO runs training seminars for senior DoD leaders that target their misconceptions about the "black box" of AI, addressing both the fears of some and the overblown expectations of others about what smart processing can do.

 

“I think we have to work on the perception of a variety of stakeholders that you can sprinkle some AI on top and then magically get better,” Palmieri said.

 


 

Technologists Eric Velte and Aaron Dant of defense contractor ASRC Federal liken AI capabilities to the war elephants that carried soldiers and equipment into battle for thousands of years. The highly intelligent, powerful creatures enabled armies to dominate on the battlefield, but they always required human operators to direct them and exert control if their behavior became erratic.

 

For this reason, Velte and Dant propose the creation of an “AI operator” role — perhaps contained within its own military job specialty — that serves to help warfighters get the most out of their AI sidekicks while steering them around the pitfalls. These AI operators would work in diverse teams but be assigned to specific AI-enabled combat systems, such as drone swarms or autonomous naval vessels. While the warfighter in charge of a given system might have to focus on the demands of combat operations, the AI operator, they say, would be able to spot errors or blind spots in the algorithms. This would theoretically help troops to rely on AI and machine-learning systems with more confidence, trusting not that the systems will never make a mistake but that safeguards are in place for when they do.

 

“If you have a team of people looking at that higher level of abstraction, as opposed to the specific action the object’s doing, they can make that change in a reasonable amount of time to allow the drone swarm or the weapons platform, whatever it is, to adapt,” said Dant, chief data scientist for ASRC.

 


 

While no military service has rolled out an AI operator job yet, Velte and Dant said they’ve had productive conversations with the technology development arm of the Marine Corps. That service published a new strategy update in June that emphasized its intent to embrace “intelligent robotics and autonomous systems” in warfare but to keep highly trained human operators in the loop and at the center of fighting decisions.

 

How much to keep humans involved in decision-making as machines become increasingly able to identify next steps is a problem U.S. military ethicists have studied for years. In 2020, after 15 months of consultation with AI experts from every sector, the Pentagon released its “Ethical Principles for Artificial Intelligence.” These stated the U.S. military would use AI only in ways that were responsible; limited bias; invited traceability and transparency; were subject to rigorous reliability testing; and stayed governable, with a deactivation mechanism always built in to prevent “unintended consequences.”

 

The Threat of Bad Actors

The enemies Americans are likely to face in future combat theaters, however, have no similar AI code of values. Already, Russia has reportedly deployed a drone in its assault on Ukraine that can loiter over a space until it identifies a target and decides to kill it based on pre-programmed instructions. 

 

Even short of weapons making kill decisions, the prospect of unbridled AI presents dark possibilities. Dant describes future AI-powered machines equipped for psychological operations that target social media users within certain demographics to spread disinformation or manipulate emotions at a “large population scale.” The prospects of what adversaries might do with unharnessed AI are troubling, but evidence is beginning to suggest ethical frameworks like those embraced by DoD might become the norm in the long run.

 

In a paper presented at a national security symposium earlier this year, Velte and Dant cite drone operations in Ukraine, where conventionally controlled Ukrainian drones and Iranian-made Shahed “kamikaze” drones capable of kill decisions have entered the fight. While both employ autonomy, the Shahed drones were more likely to kill civilians, risking outrage and war crimes accusations without creating an advantage in the fight.

 


 

“This illustrates the importance of keeping an ethical approach to AI-driven weapons, as there is no need to sacrifice ethical principles for effectiveness,” the authors wrote.

 

Benjamin Boudreaux, a policy researcher at RAND Corp. who has studied the ethical implications of AI, adds that the U.S. can strengthen its alliances by remaining responsible with AI tools. While proposals for new treaties banning or limiting autonomous weapons have faced pushback, the U.S. has the opportunity, he said, to model new international norms around the use of AI.

 

“These ethical principles are really crucial … to have legitimacy in the eyes of the American public but also to be able to partner with allies,” Boudreaux said.

 

Better Together

Early experimentation has also shown AI tools frequently work most effectively when they are augmenting human intelligence rather than replacing it. AI technology is prone to “winters”: After initial enthusiasm, users become disillusioned by the technology’s weaknesses and limitations and abandon it. Some defense experts who spoke with Military Officer believe that, even now, the impact of AI technology on military operations will be incremental rather than transformative.

 

In today’s operational landscape, dominated by long-range standoff weapons, even ambitious applications like AI-powered jet maneuvers might have limited use. “How important are dogfights [now]?” asked Zak Kallenborn, an adjunct fellow with the Center for Strategic and International Studies, referring to DARPA’s F-16 experiment.

 

Yet studies and trials continue to show that humans equipped with AI-powered rapid data processing and analysis tools will consistently outperform humans going it alone, said Phillips-Levine, the Navy branch director. In the Polish military, he added, some experienced pilots of Soviet-era MiG-29 jets have faced elimination because they could not learn to trust the advanced algorithms of the F-35 Joint Strike Fighter.

 

And yet, for those open to learning how to work with the new technology, AI tools should not represent a job threat but a way to spend less time on mundane processes and more on creativity and solution development, the CDAO’s Palmieri said.

 

“What we usually see … is that the person doing that process, which was probably heavily burdened with administration, is now much freer to think about the implications of what they’re doing,” she said. “And they become much more effective at their individual job.” 

 

Hope Hodge Seck is a writer on military issues and is based in the Washington, D.C., area.

 
