Many have questioned the lessons learned from the 20-year war in Afghanistan following the chaotic withdrawal and subsequent Taliban takeover, but one major accomplishment from the U.S.’s time fighting the Taliban has emerged – the use of artificial intelligence to predict terrorist attacks.

In 2019, U.S. and coalition forces began drawing down their troop presence across the country, leaving the remaining forces hard-pressed to maintain the human intelligence networks used to monitor Taliban movements.

By the end of 2019, Taliban attacks on U.S. and coalition forces had spiked to levels not seen in a decade, prompting security forces in Afghanistan to develop an AI program known as ‘Raven Sentry.’

In a report released earlier this year, U.S. Army Colonel Thomas Spahr, chair of the Department of Military Strategy, Planning, and Operations at the U.S. Army War College, quoted A.J.P. Taylor: ‘War has always been the mother of invention.’ Spahr pointed to the development of tanks during World War I, the atomic bomb in World War II and the use of AI to track open-source intelligence as the U.S.’s longest-lasting war began to wind down.

Raven Sentry looked to take the load off human analysts by sorting through vast amounts of data drawn from ‘weather patterns, calendar events, increased activity around mosques or madrassas, and activity around historic staging areas.’
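Spahr’s account does not describe Raven Sentry’s internal design, but the general approach he outlines – combining dispersed indicators into a single attack-likelihood estimate – resembles a standard supervised classifier. The sketch below is purely illustrative and not based on the program’s actual architecture; the feature names, the synthetic data and the choice of logistic regression are all assumptions made for the example.

```python
# Illustrative sketch only -- NOT Raven Sentry's actual design.
# Assumes a simple supervised classifier trained on hypothetical indicator
# features like those Spahr describes (weather, calendar events, activity
# around mosques or madrassas, activity near historic staging areas).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical daily features for one district (all values synthetic):
#   clear_weather     - 1 if conditions favor insurgent movement, else 0
#   significant_date  - 1 if a calendar event falls on that day
#   mosque_activity   - normalized activity index around mosques/madrassas
#   staging_activity  - normalized activity index near historic staging areas
X = np.column_stack([
    rng.integers(0, 2, n),   # clear_weather
    rng.integers(0, 2, n),   # significant_date
    rng.random(n),           # mosque_activity
    rng.random(n),           # staging_activity
])

# Synthetic labels: an attack becomes more likely when indicators align.
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 2.0 * X[:, 2] + 2.5 * X[:, 3] - 3.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Train on part of the data, then score the held-out portion -- loosely
# analogous to the accuracy threshold cited later in the article.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.0%}")
```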

Despite initial challenges in developing the technology, a team of intelligence officers formed a group dubbed the ‘nerd locker’ to build a system that could ‘reliably predict’ a terrorist attack.

‘By 2019, the digital ecosystem’s infrastructure had progressed, and advances in sensors and prototype AI tools could detect and rapidly organize these dispersed indicators of insurgent attacks,’ wrote Spahr, who was also involved with the program, which was first reported by The Economist.

Though the AI program was cut short by the withdrawal on Aug. 30, 2021, its success was attributed to a ‘culture’ that tolerated early failures and to the team’s technological expertise.

Spahr said the team developing Raven Sentry ‘was aware of senior military and political leaders’ concerns about proper oversight and the relationship between humans and algorithms in combat systems.’

He also pointed out that AI testing is ‘doomed’ if leadership does not tolerate experimentation while programs are in development.

By October 2020, less than a year before the withdrawal, Raven Sentry had reached a 70% accuracy threshold in predicting when and where an attack would likely occur – technology that has proven critical in major wars today, both in the Middle East and Ukraine.

 ‘Advances in generative AI and large language models are increasing AI capabilities, and the ongoing wars in Ukraine and the Middle East demonstrate new advances,’ the U.S. Army colonel wrote.

Spahr also said that if the U.S. and its allies want to keep their AI technology competitive, they must ‘balance the tension between computer speed and human intuition’ by educating leaders who remain skeptical of the ever-emerging technology.

Despite the success the AI program saw in Afghanistan, the Army colonel warned that ‘war is ultimately human, and the adversary will adapt to the most advanced technology, often with simple, common-sense solutions.’

‘Just as Iraqi insurgents learned that burning tires in the streets degraded US aircraft optics or as Vietnamese guerrillas dug tunnels to avoid overhead observation, America’s adversaries will learn to trick AI systems and corrupt data inputs,’ he added. ‘The Taliban, after all, prevailed against the United States and NATO’s advanced technology in Afghanistan.’

This post appeared first on FOX NEWS
