One nation, under artificial intelligence | Opinion

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.
The public artificial intelligence rollout is well underway, as U.S. federal agencies have already begun deploying AI to streamline their operations, despite a paucity of public discussion about the major change and its human rights implications.
- Artificial intelligence adoption is booming in government: Federal AI use cases nearly doubled from 571 in 2023 to over 1,100 in 2024, with agencies like HHS, DHS, and VA leading the charge.
- Practical applications are everywhere: From tracking foodborne outbreaks to flagging fraudulent veteran benefits and enhancing border security, AI is transforming operations and service delivery.
- Stable growth across functions: Nearly half of reported use cases focus on “mission-enabling” tasks like finance, HR, and cybersecurity, while others target health, public services, and space exploration.
- Rights and oversight concerns remain: Around 13% of AI uses could impact public rights or safety, prompting new safeguards, audits, and state-level legislation—though adoption outpaces public awareness.
In a report, the Government Accountability Office (GAO) found a ninefold rise in generative AI use within the federal government from 2023 to 2024. Reviewing 11 agencies, the GAO found that the total number of reported AI use cases had nearly doubled, from 571 in 2023 to 1,110 in 2024.
Furthermore, the Chief Information Officers Council released a report, AI in Action: 2024 Federal AI Use Case Inventory Findings, which shows that AI use in federal agencies has more than doubled since 2023, with more than 1,700 reported use cases in areas such as internal support and public services. Agencies cited improved operational efficiency and mission execution as reasons for dialing up their use of AI.
Some of the agencies leading the AI rollout
The Department of Health and Human Services, the Department of Veterans Affairs, the Department of Homeland Security, and the Department of the Interior reportedly account for 50% of the publicly reported AI uses.
The Department of Homeland Security is another leader in integrating AI into its operations. The DHS has developed several AI and machine learning (ML) tools to improve its cybersecurity, border security, disaster response, and immigration services. It has used generative AI to enhance investigative leads, assist local governments with hazard mitigation plans, and support employee training, and it publishes a list of its AI inventory.
The Centers for Disease Control and Prevention has leveraged AI to accelerate investigations into multi-state foodborne disease outbreaks. The Veterans Benefits Administration protects veterans’ benefit payments with AI, which aids in flagging fraudulent direct deposit changes. The Department of Health and Human Services used AI to glean information from publications and pinpoint outbreaks in areas thought to be free from polio. The Social Security Administration uses AI to assist Disability Program adjudicators.
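The fraud-flagging systems described above are, at their core, anomaly detectors: they score incoming events against a historical baseline and surface outliers for human review. As a purely illustrative sketch (not any agency's actual method, and with hypothetical data), a minimal z-score check for unusual payment amounts might look like:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag each amount that deviates from the mean by more than
    `threshold` sample standard deviations."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [abs(x - mu) / sigma > threshold for x in amounts]

# Hypothetical monthly benefit deposits; the fifth entry is an outlier.
payments = [1200, 1185, 1210, 1195, 9800, 1205]
print(flag_anomalies(payments))
# → [False, False, False, False, True, False]
```

Production systems rely on far more robust techniques (outlier-resistant statistics, trained models, and mandatory human review), but the underlying principle of scoring events against a baseline is the same.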
In addition, the U.S. General Services Administration announced the addition of leading American AI companies’ products — Anthropic’s Claude, Google’s Gemini, and OpenAI’s ChatGPT — to its AI inventory.
The GAO also released Artificial Intelligence: Agencies Have Begun Implementation, which details federal agencies' use of AI for analyzing drone photos and other datasets. Twenty of the 23 agencies covered in the report have about 1,200 current and planned AI use cases, including analyzing camera and radar data to identify border activity and identifying scientific specimens for planetary rovers. NASA and the Department of Commerce reported the highest number of use cases.
The Administrative Conference of the United States has published recommendations for the use of AI in government, including applying the tools to regulatory enforcement, adjudication, and automated legal guidance, among other areas.
Use cases for AI in the federal government
Agencies report they are benefiting from enhanced anomaly detection, streamlined processes, and improved decision-making. The technology is used primarily in administrative and IT functions, followed by health and medical applications.
About 13 percent of use cases are categorized as health and medical. Roughly 9 percent support government services or benefits processing, which entails processing applications and easing access to benefits such as Medicare and Medicaid, Social Security, and unemployment insurance.
Approximately 46 percent of federal government use cases are considered “mission-enabling,” supporting finance management, human resources, facilities and property management, cybersecurity, IT, procurement, and other administrative functions within the agencies.
Clearly, the federal government is already applying AI to law and justice, education and workforce, transportation, science and space, energy and the environment, and other sectors. In doing so, it is leveraging both in-house experts and corporate partnerships.
According to the CIO Council, agencies developed roughly 50 percent of AI use cases in-house. In more than 40 percent of use cases, agencies used custom-developed code that is publicly available. More than 35 percent of reported use cases run on existing enterprise data and analytics platforms within an agency or reuse production-level code and/or data from another use case.
State and local governments are deploying AI
State and local governments are also experimenting with AI in governance. As a Deloitte report points out, they are likely turning to AI to relieve strain on their budgets. Georgia lawmakers proposed an “AI Accountability Act” that would create a Georgia Board for Artificial Intelligence and require government entities to develop AI usage plans detailing specific goals, data privacy measures, and the role of human oversight.
A Montana bill, signed into law by the governor, curtails the use of AI in state and local government, requires transparency about where AI is used, and ensures that certain decisions and recommendations are reviewed by a human who holds a “responsible position.” In Nevada, a bill would have required the Department of Taxation to notify taxpayers when they are communicating with AI systems. Both the Georgia and Nevada bills failed in committee.
Rights implications
According to the CIO Council, approximately 13 percent of federal AI use cases could impact the public’s rights or safety, using the definition in OMB Memorandum M-24-10.
In such use cases, agencies must employ concrete safeguards before use, including ways of assessing, testing, and monitoring AI’s impacts on the public and mitigating the risks of algorithmic discrimination.
As of December 2024, agencies had conducted AI Impact Assessments for over 80 percent of rights- or safety-impacting AI use cases, and independent evaluations for more than 70 percent of those cases. Almost half of rights-impacting AI use cases in government include processes to appeal or contest an AI system’s outcome or to opt out of the AI functionality.
The federal government is rolling out AI while the public is left in the dark about the implications for their daily lives. How this technology is used in governance deserves a far more prominent place in conversations about the ethics of technology.