The US Space Force has temporarily banned its personnel from using web-based generative artificial intelligence tools, including ChatGPT, citing data security concerns, according to a memo seen by Reuters.
Dated September 29, the memo was addressed to the Space Force’s workforce, known as “Guardians.” It informs personnel that they may not use such AI tools, particularly large language models, on government computers until they receive formal approval from the Space Force’s Chief Technology and Innovation Office.
The temporary ban stems from concerns about data aggregation risks.
Generative AI, powered by large language models trained on vast datasets, has seen explosive growth over the past year. It underpins a rapidly evolving array of products, such as OpenAI’s ChatGPT, that can quickly generate text, images, or video in response to simple prompts.
The memo quotes Lisa Costa, the Space Force’s Chief Technology and Innovation Officer, who acknowledges that the technology “will undoubtedly revolutionize our workforce and enhance Guardians’ ability to operate at speed.”
An Air Force spokesperson confirmed the temporary suspension, which was first reported by Bloomberg.
The spokesperson, Tanya Downsworth, said in a statement: “A strategic pause on the use of Generative AI and Large Language Models within the US Space Force has been implemented as we determine the best path forward to integrate these capabilities into Guardians’ roles and the USSF mission. This is a temporary measure to protect the data of our service and Guardians.”
The memo also states that Costa’s office has established a generative AI task force with other Pentagon offices to study responsible and strategic uses of the technology.
Further guidance on the Space Force’s use of generative AI is expected within the coming month.