The U.S. Space Force has temporarily banned the use of web-based generative artificial intelligence tools and the so-called large language models that power them, citing data security and other concerns, according to a memo seen by Bloomberg News.
The Sept. 29 memorandum, addressed to the Guardian Workforce, the term for Space Force members, pauses the use of any government data on web-based generative AI tools, which can create text, images or other media from simple prompts. The memo says they “are not authorized” for use on government systems unless specifically approved.
Chatbots and tools like OpenAI’s ChatGPT have exploded in popularity. They make use of language models that are trained on vast amounts of data to predict and generate new text. Such LLMs have given birth to an entire generation of AI tools that can, for example, search through troves of documents, pull out key details and present them as coherent reports in a variety of linguistic styles.
Generative AI “will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed,” Lisa Costa, Space Force’s chief technology and innovation officer, said in the memo. But Costa also cited concerns over cybersecurity, data handling and procurement requirements, saying that the adoption of AI and LLMs needs to be “responsible.”
No further explanation was provided. Experts have warned that, under some conditions, the voluminous and potentially non-public data fed into these models through documents and prompts could leak into the public domain or be exposed through hacking. The memo said Costa planned to release new guidance within 30 days.
A spokesperson for Space Force, which is part of the Defense Department, said the Sept. 29 memo seen by Bloomberg was authentic and that a strategic pause was issued to protect personnel and Space Force data while the service determines how to integrate the capability to support missions and Guardians.
The Space Force’s decision has already affected at least 500 people who were using a generative AI platform called Ask Sage, according to Nicolas Chaillan, the company’s founder. Ask Sage aims to provide a secure generative AI platform that works with several LLMs, including models from Microsoft Corp. and Alphabet Inc.’s Google, Chaillan said.
Chaillan, who was the first chief software officer for the Air Force and Space Force, criticized the agency’s decision to pause use of generative AI, especially as the Defense Department has called for accelerated adoption of AI. In August, the Pentagon launched a generative AI task force to examine use cases for LLMs and how to analyze and integrate them across the Defense Department. “It’s a very short-sighted decision,” Chaillan said.
The CIA has already developed a generative AI tool intended for widespread use among the intelligence community. Chaillan said his platform already meets security requirements to protect data.
Besides customers throughout the defense industrial base, Chaillan said more than 10,000 customers elsewhere in the Defense Department, including 6,500 in the Air Force, are still using his company’s software. Some Defense Department users even pay the $30-a-month fee out of their own pockets, he said, adding that it helps ease the burden of writing reports.
“Clearly, this is going to put us years behind China,” he wrote in a September email of complaint to Costa and several senior defense officials, arguing that his service had already been “whitelisted” and approved for use by the Air Force, according to correspondence reviewed by Bloomberg.
Tim Gorman, a Pentagon spokesperson, told Bloomberg in July that defense services and agencies are allowed to temporarily authorize Ask Sage to process, store and transmit unclassified information that is releasable to the public. Ask Sage is also seeking authorization from the Defense Department to work with controlled information that cannot be released to the public.
The Space Force spokesperson didn’t address a question from Bloomberg about Chaillan’s email. A Defense Department spokesperson didn’t comment on the Space Force memo.
Chaillan resigned from the post in 2021, criticizing the Pentagon over its slow adoption of AI and saying the U.S. risked falling behind China.