US Military Leverages AI to Combat Taliban During Afghanistan Drawdown
The chaotic withdrawal from Afghanistan and the subsequent Taliban takeover have prompted many to question the lessons learned from the 20-year war. One significant achievement from U.S. involvement in the conflict has emerged, however: the use of artificial intelligence (AI) in combat operations.
In 2019, as U.S. and coalition forces began reducing their troop presence across Afghanistan, the remaining forces struggled to maintain the human intelligence networks used to monitor Taliban movements.
By the end of 2019, the number of Taliban attacks against U.S. and coalition forces surged to levels unseen in the previous decade, prompting the U.S. military to develop an AI program called “Raven Sentry.”
In a recent article, U.S. Army Colonel Thomas Spahr, chair of the Department of Military Strategy, Planning, and Operations at the U.S. Army War College, quoted A.J.P. Taylor's observation that "War has always been the mother of invention." Spahr pointed to the development of tanks during World War I, the atomic bomb in World War II, and the use of AI to track open-source intelligence as examples of American technological advances in warfare.
Raven Sentry aimed to alleviate the burden on human analysts by sifting through vast amounts of data, including “weather patterns, calendar events, increased activity around mosques or madrassas, and activity around historic staging areas.”
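The article does not describe how Raven Sentry actually combined those signals, so the sketch below is purely illustrative: it assumes a hypothetical feature set loosely mirroring the data types named above (weather, calendar proximity, activity indices near mosques or madrassas and historic staging areas), trains an off-the-shelf classifier on synthetic district-day records, and scores the likelihood of an attack. Every name, weight, and data point here is invented for illustration.

```python
# Illustrative sketch only: the real Raven Sentry pipeline is not public.
# Hypothetical features loosely mirror the signal types named in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2_000  # synthetic district-day observations

# Columns: clear_weather, days_to_key_date, mosque_activity, staging_area_activity
X = np.column_stack([
    rng.integers(0, 2, n),    # 1 = clear weather on that day
    rng.integers(0, 30, n),   # days until a significant calendar date
    rng.normal(0, 1, n),      # normalized activity index near mosques/madrassas
    rng.normal(0, 1, n),      # normalized activity index near historic staging areas
])

# Synthetic label (attack = 1), generated so the signals are only weakly predictive
logits = 0.8 * X[:, 0] - 0.05 * X[:, 1] + 0.6 * X[:, 2] + 0.9 * X[:, 3] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Fit a simple classifier and report held-out accuracy on the synthetic data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```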
Despite initial challenges during the development phase, a team of intelligence officers, known as the “nerd locker,” came together to create a system that could “reliably predict” terrorist attacks.
“By 2019, the digital ecosystem’s infrastructure had progressed, and advances in sensors and prototype AI tools could detect and rapidly organize these data patterns,” Spahr, who was also involved in the program, stated.
Although the AI program was cut short by the withdrawal on Aug. 30, 2021, its success was attributed to a "culture" that tolerated early failures, combined with the team's technological expertise.
Spahr mentioned that the team developing Raven Sentry “was aware of senior military and political leaders’ concerns about proper oversight and the relationship between humans and algorithms in combat systems.”
He also emphasized that AI testing is “doomed” if leadership does not tolerate experimentation during the program’s development phase.
By October 2020, less than a year before the withdrawal, Raven Sentry had met its 70% accuracy threshold in predicting the timing and location of attacks, a capability that has since proven vital in major wars, including those in the Middle East and Ukraine.
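The article does not define how that 70% figure was measured. As a purely hypothetical illustration, one simple way to score such forecasts is to count a prediction as a hit when an observed attack falls inside the predicted district and time window; the `Forecast`, `Attack`, and `accuracy` names below are invented for this sketch.

```python
# Hypothetical scoring scheme, not the program's actual evaluation method.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Forecast:
    district: str
    window_start: datetime
    window_end: datetime

@dataclass
class Attack:
    district: str
    occurred: datetime

def hit(forecast: Forecast, attacks: list[Attack]) -> bool:
    """A forecast counts as a hit if any attack falls in its district and time window."""
    return any(
        a.district == forecast.district
        and forecast.window_start <= a.occurred <= forecast.window_end
        for a in attacks
    )

def accuracy(forecasts: list[Forecast], attacks: list[Attack]) -> float:
    """Fraction of forecasts that were hits (one simple definition of 'accuracy')."""
    return sum(hit(f, attacks) for f in forecasts) / len(forecasts)

# Example: one 24-hour forecast for a notional district scored against one observed attack.
fcst = [Forecast("District A", datetime(2020, 10, 1, 0), datetime(2020, 10, 2, 0))]
obs = [Attack("District A", datetime(2020, 10, 1, 14, 30))]
print(accuracy(fcst, obs))  # 1.0
```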
“Advances in generative AI and large language models are increasing AI capabilities, and the ongoing wars in Ukraine and the Middle East demonstrate new advances,” the U.S. Army colonel wrote.
Spahr also stressed that for the U.S. and its allies to maintain their AI technological competitiveness, they must “balance the tension between computer speed and human intuition” by educating leaders who remain skeptical of emerging technologies.
Despite the AI program’s success in Afghanistan, the Army colonel warned that “war is ultimately human, and the adversary will adapt to the most advanced technology, often with simple, common-sense solutions.”
“Just as Iraqi insurgents learned that burning tires in the streets degraded US aircraft optics or as Vietnamese guerrillas dug tunnels to avoid overhead observation, America’s adversaries will learn to trick AI systems and corrupt data inputs,” he added. “The Taliban, after all, prevailed against the United States and NATO’s advanced technology in Afghanistan.”