

In recent weeks, U.S. military bases in Syria and Iraq have come under increasing attack, jeopardizing the safety of our service members and our national security. On Oct. 23, artificial intelligence-powered alert systems warned military and political leaders that U.S. service members had perished in one of these attacks. Fortunately, subsequent “ground truth” revealed that no U.S. personnel had lost their lives, averting a potential military response triggered by the false alert.

Similarly, the bombing of Al-Ahli Arab Hospital in Gaza set off a cascade of social media posts and news reports attributing the tragedy to Israel. Once again, AI-powered systems monitoring these platforms relayed those unverified claims as alerts to policymakers, military leaders and intelligence officials worldwide.

The modern deluge of information, fueled by countless mobile cameras, microphones and firsthand accounts, and laced with disinformation, creates an enormous challenge for national security decision-makers striving to distill fact from the fog of events and make informed decisions.

Our national security establishment has increasingly turned to sophisticated AI technologies to help us get “ahead of the news” by identifying emerging trends, patterns and significant events in real time. AI-powered tickers are now a staple on the desktops and mobile devices of national security analysts, playing a pivotal role in shaping news narratives and briefing materials for politicians.

So how did AI get to a place where it can disseminate false information to government officials? To oversimplify: AI systems are excellent at reproducing the patterns in their training data, and if that training is not done with care and responsibility, it introduces bias into the system. Driven by that data, these systems recognize “an event” and then treat the claims they detect as corroborated fact. They push alerts into the wider information ecosystem, whose participants rush to alert their own networks; one alert begets the next. AI-powered tools are a critical part of our national security future, but we have not yet implemented proper guardrails to ensure we use these systems wisely. This week in London, leaders including Vice President Kamala Harris are meeting to discuss global safeguards for AI, and national security needs to be at the top of the list.
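To make that feedback loop concrete, here is a minimal, purely illustrative sketch. It is not the code of any real alerting product; the system names and the confidence boost are invented for illustration. It shows how confidence can compound as systems re-alert on one another's alerts without any new evidence entering the chain:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    claim: str
    confidence: float                       # 0.0 to 1.0
    sources: list = field(default_factory=list)

def repost(alert: Alert, system: str) -> Alert:
    # Each relay raises confidence simply because another system alerted,
    # even though no new ground truth has been observed.
    boosted = min(1.0, alert.confidence + 0.15)   # illustrative boost only
    return Alert(alert.claim, boosted, alert.sources + [system])

alert = Alert("U.S. casualties reported", confidence=0.40,
              sources=["social media post"])
for system in ["ticker A", "ticker B", "briefing feed"]:
    alert = repost(alert, system)
    print(f"{system}: confidence {alert.confidence:.2f}, "
          f"corroborated only by {alert.sources}")
```

By the last relay, the claim looks highly confident, yet every "source" in its trail is just another system repeating the first unverified report.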

The Biden administration has taken a step in the right direction with its recent executive order on artificial intelligence, which builds on the earlier “Blueprint for an AI Bill of Rights” and focuses on the deployment of automated systems in our daily lives, from agriculture to financial services. Proper testing in line with the executive order, including reporting on test failures, is essential. This open, disclosure-based model of testing has done much for cybersecurity, and it can do the same for AI. We also need processes that “ground truth” our AI alerts, as the U.S. military did following the false alert of U.S. soldier casualties at al-Asad Air Base in western Iraq. While AI systems have undoubtedly enhanced our national security capabilities, they must adhere to new standards that allow analysts and decision-makers to corroborate the accuracy of an event before taking action. There are already AI systems available that provide this level of accuracy and the essential “ground truth” needed to respond effectively to rapidly unfolding events.
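As a hedged sketch of what such a “ground truth” process might look like in software, the gate below holds an alert until an independent check corroborates it. The verify_on_the_ground function is a hypothetical stand-in for whatever human or sensor verification an organization actually uses; the threshold and return values are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    claim: str
    confidence: float

def verify_on_the_ground(alert: Alert) -> bool:
    # Hypothetical stand-in for an independent check, such as contacting
    # the unit on site. In the al-Asad example, this check came back
    # negative: no casualties.
    return False

def handle(alert: Alert) -> str:
    if alert.confidence < 0.5:
        return "log only"
    if not verify_on_the_ground(alert):
        # The AI's confidence alone never triggers escalation.
        return "hold: awaiting independent corroboration"
    return "escalate to decision-makers"

print(handle(Alert("U.S. casualties at al-Asad Air Base", confidence=0.90)))
# -> hold: awaiting independent corroboration
```

The design point is that escalation requires two independent signals, the AI detection and the ground-truth check, so a false alert stalls at the gate instead of reaching decision-makers.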

The bombing of the Al-Ahli Arab Hospital, while tragic, also demonstrates how AI can help decision-makers. Rapid detection of such events facilitates timely responses by first responders, ultimately saving lives and informing effective policy and political action. As outlined in the White House executive order, a controlled deployment environment is key to harnessing the full potential of AI, which is crucial for maintaining our competitive edge in a rapidly evolving global landscape.

At a time when people overreact to the first item in their feeds, leaders need cooler heads and verification before acting. If we fail to address the challenges posed by poorly deployed AI, we risk perpetuating misinformation and deepening divisions. Deployed responsibly, however, AI can lead to better responses, more informed policies, and a deeper understanding of and empathy for others in an increasingly complex and interconnected world.

James Neufeld is the CEO of Edmonton, Alberta-based samdesk.io, an AI company that provides real-time, ground-truthed alerting for U.S. and global companies, governments and NGOs.
