
The US government plans to go all-in on using AI. But it lacks a plan, says a government watchdog


By Brian Fung and Sean Lyngaas, CNN

Washington (CNN) — The US government plans to vastly expand its reliance on artificial intelligence, but it is years behind on policies to responsibly acquire and use the technology from the private sector, according to a new federal oversight report.

The lack of a government-wide standard on AI purchases could undercut American security, wrote the Government Accountability Office (GAO) in a long-awaited review of nearly two dozen agencies’ current and planned uses for AI. The GAO is the government’s top accountability watchdog.

The 96-page report released Tuesday marks the US government’s most comprehensive effort yet to catalog the more than 200 ways in which non-military agencies already use artificial intelligence or machine learning, and the more than 500 planned applications for AI in the works.

It comes as AI developers have released ever more sophisticated AI models, and as policymakers scramble to develop regulations for the AI industry in the most sensitive use cases. Governments around the world have emphasized AI’s benefits, such as its potential to find cures for disease or to enhance productivity. But they have also worried about its risks, including the danger of displacing workers, spreading election misinformation or harming vulnerable populations through algorithmic biases. AI could even lead to new threats to national security, experts have warned, by giving malicious actors new ways to develop cyberattacks or biological weapons.

GAO’s broad survey sought answers from 23 agencies ranging from the Departments of Justice and Homeland Security to the Social Security Administration and the Nuclear Regulatory Commission. Already, the federal government uses AI in 228 distinct ways, with nearly half of those uses having launched within the past year, according to the report, reflecting AI’s rapid uptake across the US government.

The vast majority of current and planned government uses for AI that the GAO identified in its report, nearly seven in 10, are either science-related or intended to improve internal agency management. The National Aeronautics and Space Administration (NASA), for example, told GAO it uses artificial intelligence to monitor volcano activity around the world, while the Department of Commerce said it uses AI to track wildfires and to automatically count seabirds and seals or walruses pictured in drone photos.

Closer to home, the Department of Homeland Security said it uses AI to “identify border activities of interest” by applying machine learning technologies against camera and radar data, according to the GAO report.

Agencies adopting AI

The report also highlights the hundreds of ways federal agencies use AI in secret. Federal agencies were willing to publicly disclose about 70% of the total 1,241 active and planned AI use cases, the report said, but declined to identify more than 350 applications of the technology because they were “considered sensitive.”

Some agencies were extraordinarily tight-lipped about their use of AI: the State Department listed 71 different use cases for the technology but told the GAO it could only identify 10 of them publicly.

Although some agencies reported relatively few uses for AI, that handful of applications has drawn some of the heaviest scrutiny from government watchdogs, civil liberties groups and AI experts warning of potentially harmful AI outcomes.

For example, the Departments of Justice and Homeland Security reported a total of 25 current or planned use cases for AI in the GAO’s Tuesday report, a tiny fraction of NASA’s 390 or the Commerce Department’s 285. But that small number belies how sensitive DOJ and DHS’s use cases can be.

As recently as September, the GAO warned that federal law enforcement agencies have run thousands of AI-powered facial recognition searches — amounting to 95% of such searches at six US agencies from 2019 to 2022 — without having appropriate training requirements for the officials performing the searches, highlighting the potential for AI’s misuse. Privacy and security experts have routinely warned that relying too heavily on AI in policing can lead to cases of mistaken identity and wrongful arrests, or discrimination against minorities.

(The GAO’s September report on facial recognition coincided with a DHS inspector general report finding that several agencies, including Customs and Border Protection, the US Secret Service and Immigration and Customs Enforcement, likely broke the law when officials bought Americans’ geolocation histories from commercial data brokers without performing required privacy impact assessments.)

While officials are increasingly turning to AI and automated data analysis to solve important problems, the Office of Management and Budget, which is responsible for harmonizing federal agencies’ approach to a range of issues including AI procurement, has yet to finalize a draft memo outlining how agencies should properly acquire and use AI.

“The lack of guidance has contributed to agencies not fully implementing fundamental practices in managing AI,” the GAO wrote. It added: “Until OMB issues the required guidance, federal agencies will likely develop inconsistent policies on their use of AI, which will not align with key practices or be beneficial to the welfare and security of the American public.”

Under a 2020 federal law dealing with AI in government, OMB should have issued draft guidelines to agencies by September 2021, but missed the deadline and only issued its draft memo two years later, in November 2023, according to the report.

OMB said it agreed with the watchdog’s recommendation to issue guidance on AI and said the draft guidance it released in November was a response to President Joe Biden’s October executive order dealing with AI safety.

Biden’s AI approach

Among its provisions, Biden’s recent AI executive order requires developers of “the most powerful AI systems” to share test results of their models with the government, according to a White House summary of the directive. This year, a number of leading AI companies also promised the Biden administration they would seek outside testing of their AI models before releasing them to the public.

The Biden executive order adds to the growing set of requirements for federal agencies when it comes to AI policies by, for example, tasking the Department of Energy to assess the potential for AI to exacerbate threats involving chemical, biological, radiological or nuclear weapons.

Tuesday’s GAO report identified a comprehensive list of AI-related requirements that Congress or the White House has imposed on federal agencies since 2019 and graded agencies’ compliance with them. In addition to faulting OMB for failing to come up with a government-wide plan for AI purchases, the report found shortcomings in a handful of other agencies’ approaches to AI. As of September, for example, the Office of Personnel Management had not yet prepared a required forecast of the number of AI-related roles the federal government may need to fill in the next five years. And, the report said, 10 federal agencies ranging from the Treasury Department to the Department of Education lacked required plans for updating their lists of AI use cases over time, which could hinder the public’s understanding of how the US government uses AI.

The-CNN-Wire
™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
