The government wants AI to fight wars and review your taxes

2025-07-17 | Technology
David
Good afternoon 跑了松鼠, I'm David, and this is Goose Pod for you. Today is Thursday, July 17th. It’s 2 PM, and we have a topic that sounds like it’s straight out of a sci-fi movie, but it’s happening right now.
Ema
I'm Ema, and we are here to discuss a massive shift in governance. The government wants AI to fight wars and review your taxes. It’s a huge, and honestly, a kind of scary topic, but we’re going to break it down.
David
Let's get started. This isn't just one or two agencies dipping their toes in the water. We're seeing a surge of AI projects across the entire executive branch. The core idea, pushed by figures like Elon Musk, is that AI can do a better job than humans at many federal tasks.
Ema
Exactly. And it’s not just theoretical. The Pentagon recently announced its personnel would start using Grok, the chatbot from Elon Musk's own company, xAI. It feels like we've fast-forwarded into the future, and I'm not sure we've read the user manual yet.
David
That’s a great way to put it. The scale is startling. From the Federal Aviation Administration exploring AI as air traffic controllers to the IRS looking at AI for tax audits, this is a government-wide automation drive. The promise is efficiency and cost savings for taxpayers.
Ema
Right, who doesn't want reduced wait times? But the flip side is the concern from tech watchdogs. They're worried this rapid push, combined with federal layoffs, gives unproven technology a massive role in critical decisions, decisions that affect all of us.
David
And that's the central phenomenon: this enormous, top-down push for AI in government. It’s a wave of disruption, with advocates promising a streamlined, futuristic government, while critics see a recipe for potential disaster, where glitches could harm public safety or unfairly deny people their benefits.
Ema
It’s like they're building the plane while flying it. The technology is being rolled out so fast. I remember reading about the U.S. Patent and Trademark Office making an AI search tool mandatory for examiners starting July 21st. That’s just a few days away!
David
It shows the sheer speed of this transition. The core phenomenon is this government-wide embrace of a disruptive idea: that AI can, and should, take over many functions currently handled by people. It’s a fascinating and frankly, unsettling, experiment happening in real-time.
David
To really understand how we got here, we need to look at the political shift. In 2023, President Biden signed an executive order focused on spurring AI use while also containing its risks. It was about creating guardrails, ensuring safety and fairness as the technology was adopted.
Ema
I remember that. It felt like a cautious, deliberate approach. The idea was to innovate, but to do it responsibly. The order emphasized things like assessing AI's impact on the workforce, preventing discrimination, and investing in training people for these new AI-related jobs. It was a balanced approach.
David
Precisely. But in January, President Trump repealed that order. His administration has taken a very different path, removing those guardrails to accelerate the rollout of AI across the board. The message from the White House is clear: they are aiming for American AI dominance.
Ema
So we went from a 'safety-first' mindset to a 'full-speed-ahead' one. It’s a huge change in policy. I can see why some people are excited. Jennifer Pahlka, who was a deputy U.S. CTO under Obama, said, "In government, you have so much that needs doing and AI can help get it done and get it done faster."
David
That's the efficiency argument in a nutshell. And it has powerful backers. The influence of Elon Musk's ideas is a key part of this background. He has been a vocal proponent of AI's capabilities, and his concept of a "U.S. DOGE Service" that cut thousands of government employees is now being mirrored in this AI push.
Ema
DOGE Service, I still can’t get over that name! It sounds like a meme, but its impact is very real. It set a precedent for shrinking the federal workforce, and now it seems AI is being positioned as the tool to continue that work on an even larger scale. It’s not just about efficiency anymore, is it?
David
That’s the heart of it. The background isn't just a simple policy update; it's an ideological shift. Take Sahil Lavingia, a former DOGE staffer. He’s been pushing the Department of Veterans Affairs to use AI and has argued that no task should be off-limits for AI experimentation, "especially in war."
Ema
Especially in war? Wow, that's a bold statement. He went on to say he doesn't trust humans with "life and death tasks." That’s a really extreme, maximalist view of AI’s potential. It explains the sheer velocity of this change. It’s not just about making government run better, it’s about a fundamental belief that AI is better.
David
And this belief is now backed by significant investment and directives. The Pentagon, for example, is instructed to rely more on commercial technology. This has opened the door for data-mining firms like Palantir to take a much larger role in American military power, with massive new contracts being signed.
Ema
So you have the political will from the top, the ideological fervor from influential figures, and now the corporate partnerships and money flowing in. It’s a perfect storm for rapid, widespread change. It makes you wonder what kind of government we'll have in a few years.
David
The historical context is this swift reversal of a cautious policy, replaced by an aggressive, Musk-influenced drive for automation. The key players are no longer just government agencies, but also tech billionaires and data companies, all pushing for a future where AI is central to governance.
Ema
It reminds me of the early days of the internet, when everything was about moving fast and breaking things. But the government isn't a startup. The 'things' you might break are people's lives, their benefits, or even national security. The stakes are so much higher.
David
That's the perfect transition to the central conflict here. It's this exact tension: the drive for efficiency versus the profound risks. On one side, you have the administration and proponents like Jennifer Pahlka and Sahil Lavingia, who see AI as a necessary tool for a modern, effective government.
David
And on the other side, critics see a dangerous reliance on unproven systems. This isn’t just a technical debate; it’s a philosophical one about the role of government and the value of human judgment.
Ema
Exactly. Take the FAA exploring AI for air traffic control. The goal is to reduce fatigue and distraction for human controllers. That sounds great, but the plans also include "planning for less people" due to staff shortages. So, is it about safety or saving money? Or both?
David
That's the ambiguity that makes people nervous. It's the same with the TSA. They're rolling out facial recognition cameras in over 200 airports, claiming over 99 percent accuracy. But studies have shown these systems can be less accurate for people of color. So, efficiency is prioritized, but what about fairness?
Ema
It's a classic "move fast and break things" approach, but applied to critical public services. A former DHS official said that under the previous administration, these AI efforts were meant to help officers be more efficient, not replace them. But now, a contractor working with the TSA says if AI can make the decision, they'll reduce manpower.
David
This brings us to the viewpoint of tech watchdogs. Elizabeth Laird from the Center for Democracy and Technology put it perfectly. She said there's "a fundamental mismatch" between what AI can actually do and what citizens expect from their government. We expect nuance, empathy, and fairness—things AI isn't known for.
Ema
That mismatch is terrifying when you think about the IRS using AI to manage tax audits. The article mentions an internal push to "get AI to do everything." An anonymous source even expressed worries about the lack of oversight. Imagine an AI glitch flagging you for a major audit. Who do you appeal to? The algorithm?
David
And that’s the conflict right there. The government officials and industry proponents are pushing for what they see as progress and efficiency. A White House spokeswoman said President Trump wants to use "every possible tool to streamline our government." They see this as a positive, a way to achieve "American AI dominance."
Ema
But the people on the ground, the federal workers, and the tech watchdogs see the potential for real harm. They worry that AI is being set up to *drive* decisions, not just *aid* human experts. A glitch could unfairly deprive someone of veterans' benefits or, in a military context, misidentify a civilian as a combatant.
David
The conflict is also about trust. Sahil Lavingia, the former DOGE staffer, said, "I don’t trust humans with life and death tasks." This reflects a viewpoint that places more faith in machines than in trained professionals. It's a radical departure from how we've traditionally viewed governance and expertise.
Ema
So you have this deep divide. One side is focused on the macro-level benefits: national dominance, efficiency, cost-cutting. The other is focused on the micro-level risks: individual rights, fairness, safety, and the potential for unproven technology to fail in catastrophic ways. It's a high-stakes battle of priorities.
David
Let's talk about the real-world impact of these policies. It's not theoretical; it's already happening. Look at the Pentagon's NGA Maven program. The number of personnel using this AI, which helps identify potential targets, has more than doubled since January to over 25,000 users. That's a massive, immediate impact on military operations.
Ema
And they're expanding it to interpret not just images, but audio and text, aiming for a system that can make 1,000 accurate target decisions in an hour. The impact is a significant increase in the speed and scale of military decision-making, which is both impressive and a little chilling. It changes the very nature of battlefield awareness.
David
Another significant impact is on the federal workforce itself. The push for AI is directly linked to shrinking the number of government employees. At the FAA and TSA, for example, the adoption of AI is explicitly tied to "planning for less people." This has a direct impact on job security and morale for federal workers.
Ema
And the impact on citizens can be profound and sometimes, deeply flawed. The VA's REACH VET program is a perfect example. It's an AI algorithm designed to predict which veterans are at the highest risk of suicide to prioritize help. A noble goal, right? But what happened?
David
An investigation found the system was biased. It prioritized help to White men, because it didn't consider risk factors specific to female veterans, like military sexual trauma or infertility. The impact was that women struggling with suicidal thoughts were less likely to be flagged for assistance by the very system designed to save them.
Ema
That is a devastating impact. It shows how a well-intentioned AI can go wrong. Thankfully, the VA has since updated the algorithm to include those risk factors. But it's a stark reminder that these systems are not infallible. They are built by people and can inherit our biases, with life-or-death consequences.
David
Exactly. Since its launch in 2017, REACH VET has identified over 117,000 at-risk veterans, so its potential for good is immense. But the initial bias shows the critical importance of oversight and continuous evaluation, which critics worry is being sidelined in the current rush to automate. The impact is a double-edged sword.
David
So, what does the future hold? The article indicates a comprehensive White House AI plan is expected this month. This plan will likely solidify the administration's aggressive strategy, outlining strategic priorities for AI investment and deployment across all federal agencies. It will set the course for years to come.
Ema
This suggests the trend of AI taking on more central decision-making roles will only accelerate. We're moving beyond AI as a simple assistant to AI as a core component of government operations. The goal seems to be full integration, where AI isn't just a tool, but a fundamental part of the bureaucratic and military machinery.
David
Indeed. The long-term outcome could be a significantly smaller, more automated federal workforce. Proponents argue this will lead to a more efficient, responsive, and cost-effective government. They envision a future where AI handles the bulk of data analysis and routine tasks, freeing up humans for more complex strategic work.
Ema
But the potential for a smaller workforce also means significant job displacement and a need for massive reskilling efforts, which doesn't seem to be a federal priority right now. The future could see a government that's more technologically advanced but also less human, with fewer people involved in the day-to-day work of serving citizens.
David
Today's discussion really highlights a pivotal moment. The government is aggressively pursuing an AI-driven future, promising unprecedented efficiency. From warfare to tax collection, no sector is untouched by this technological push, which is reshaping the very structure of our federal agencies.
Ema
But this rush forward comes with profound questions about safety, fairness, and the role of human oversight. As we've seen, the potential for bias and error is real, with serious consequences. The central debate is whether we can balance the promise of AI with the need for a government that is, above all, accountable to its citizens.
David
That's the end of today's discussion. Thank you for listening to Goose Pod. See you tomorrow.

## Government Embraces AI for Efficiency and Automation, Sparking Debate on Risks

This report from **The Washington Post**, published on **July 15, 2025**, details the Trump administration's aggressive push to integrate Artificial Intelligence (AI) across various federal agencies, a strategy influenced by Elon Musk's vision of AI surpassing human capabilities in government tasks. The initiative aims to streamline operations, reduce costs, and enhance efficiency, but raises significant concerns among government watchdogs regarding the potential for unproven technology to make critical decisions and the impact on the federal workforce.

### Key Findings and Initiatives:

* **Broad Agency Adoption:** AI is being explored and implemented across nearly every executive branch agency, including the Pentagon, Federal Aviation Administration (FAA), Internal Revenue Service (IRS), U.S. Patent and Trademark Office (USPTO), and the Transportation Security Administration (TSA).
* **Elon Musk's Influence:** Elon Musk's ideas about AI's potential to outperform federal employees are a driving force. His startup, xAI, is offering its chatbot Grok for use by Pentagon personnel.
* **Workforce Reduction Goal:** A significant aim of these AI programs is to shrink the federal workforce, mirroring the approach of Musk's "U.S. DOGE Service."
* **Efficiency and Cost Savings:** The promised benefits include reduced wait times and lower costs for taxpayers.
* **Policy Shift:** President Trump repealed President Biden's 2023 executive order on AI, removing "guardrails" and accelerating AI rollout. A comprehensive White House AI plan is anticipated this month.

### Agency-Specific AI Deployments and Plans:

* **Pentagon:**
    * **NGA Maven:** This core AI program, launched in 2017, has seen its user base **more than double** since January, with over **25,000 U.S. military and civilian personnel** now using it globally.
    * **Capabilities:** NGA Maven processes imagery from satellites, drones, and other sources to identify potential targets. It is being expanded to interpret audio and text, aiming to create a "live map" of operations and enable **1,000 accurate decisions about potential targets within an hour**.
    * **Maven Smart System:** Planned spending on this component, provided by Palantir, has been **more than doubled**, with an additional **$795 million** allocated. It analyzes sensor data to assist in target identification and strike approval, and has been used for logistics planning.
    * **Commercial Technology Reliance:** Executive orders and memos encourage greater reliance on commercial AI technologies.
* **Federal Aviation Administration (FAA):**
    * **Air Traffic Control:** AI software is being tested to assist air traffic controllers, with the goal of reducing fatigue and distraction. Humans will remain in control, but AI may handle repetitive tasks and airspace monitoring.
    * **Staffing Impact:** Plans include "planning for less people" due to ongoing staff shortages.
    * **Other Uses:** AI is being explored for analyzing air traffic and crash data, and predicting aircraft maintenance needs.
    * **Safety Focus:** The FAA is investigating AI's role in improving safety, particularly in response to recent incidents. Air traffic controllers **do not currently use the technology**, but it is being used to scan incident reports for risks.
* **U.S. Patent and Trademark Office (USPTO):**
    * **Patent Examination:** AI is being tested to automate parts of the patent examiner's job.
    * **Pilot Program:** Patent applicants can opt into a program where AI searches databases for similar patents, emailing applicants a list of the **10 most relevant documents**.
    * **Mandatory Use:** From **July 21**, it becomes **"mandatory"** for examiners to use an AI-based search tool for similarity checks.
    * **Report Writing:** AI's ability to write reports and analyze data is seen as potentially beneficial for examiners.
    * **Delayed Rollout:** A new AI search tool's launch moved so quickly that concerns arose about staff understanding; some staff suggested delaying it, and its release date is unclear.
* **Transportation Security Administration (TSA):**
    * **Facial Recognition:** Facial recognition cameras for ID checks have been rolled out in over **200 airports nationwide** since 2022. The agency claims **over 99 percent accuracy** across all demographic groups tested, despite studies showing limitations, particularly for people of color.
    * **Automated Kiosks:** Experimentation with automated kiosks for pre-checked passengers is underway.
    * **Manpower Reduction:** While former officials stated AI was meant to enhance efficiency, a contractor suggests the Trump administration's acceleration of AI projects could lead to a reduction in TSA officers.
* **Internal Revenue Service (IRS):**
    * **Expanded AI Use:** Beyond internal queries and chatbots, the IRS is looking to off-load more significant tasks to AI, including managing tax audits.
    * **Centralization Goal:** The "end game" is to centralize IT and HR for the Treasury Department, with AI handling many functions.
    * **Oversight Concerns:** Concerns have been raised about the lack of oversight in this ambitious effort to centralize IRS work and feed it to AI.
    * **Modernization Plan:** The Treasury Department states that CIO Sam Corcos is implementing a long-delayed IRS modernization plan.
* **Department of Veterans Affairs (VA):**
    * **Aggressive AI Adoption:** The VA is actively deploying AI, with **hundreds of uses** reported last year.
    * **REACH VET:** This algorithm prioritizes mental health assistance for veterans at high risk of suicide. An investigation found it previously prioritized White men, particularly those who are divorced or widowed, and did not adequately consider risk factors for female veterans.
    * **Algorithm Update:** The REACH VET algorithm has been updated to include factors specific to women, such as military sexual trauma, pregnancy, ovarian cysts, and infertility.
    * **Impact:** Since its launch in **2017**, REACH VET has identified over **117,000 at-risk veterans**.
* **"High-Impact" Designation:** The Trump administration has replaced the Biden administration's "safety impacting" or "rights impacting" labels for sensitive programs with "high-impact."

### Notable Risks and Concerns:

* **Unproven Technology:** Government watchdogs worry that the administration's automation drive, combined with potential layoffs, could give unproven AI an outsized role.
* **Decision-Making Errors:** Elizabeth Laird of the Center for Democracy and Technology warns that if AI drives federal decision-making instead of aiding human experts, glitches could unfairly deprive people of benefits or harm public safety.
* **Mismatch with Citizen Expectations:** Laird highlights a "fundamental mismatch" between AI capabilities and what citizens expect from government.
* **Disregard for Safety and Staff:** Some federal workers have expressed alarm at the administration's perceived disregard for safety and government staff.
* **Facial Recognition Accuracy:** Despite TSA's claims, studies show facial recognition is not perfect and can be less accurate for people of color.

### Expert Opinions:

* **Jennifer Pahlka** (former Deputy U.S. Chief Technology Officer): Believes AI can help government get tasks done faster.
* **Sahil Lavingia** (former DOGE staffer): Advocates for aggressive AI deployment, stating no task should be off-limits for experimentation, "especially in war," and expressing a lack of trust in humans for "life and death tasks."

This report underscores a significant shift in the U.S. government's approach to technology, with a strong emphasis on AI-driven automation under the Trump administration, while simultaneously raising critical questions about its implementation, oversight, and potential societal impact.

The government wants AI to fight wars and review your taxes

Elon Musk has receded from Washington, but one of his most disruptive ideas about government is surging inside the Trump administration. Artificial intelligence, Musk has said, can do a better job than federal employees at many tasks — a notion being tested by AI projects trying to automate work across nearly every agency in the executive branch.

The Federal Aviation Administration is exploring whether AI can be a better air traffic controller. The Pentagon is using AI to help officers distinguish between combatants and civilians in the field, and said Monday that its personnel would begin using the chatbot Grok offered by Musk’s start-up, xAI, which is trying to gain a foothold in federal agencies.

Artificial intelligence technology could soon play a central role in tax audits, airport security screenings and more, according to public documents and interviews with current and former federal workers. Many of these AI programs aim to shrink the federal workforce — continuing the work of Musk’s U.S. DOGE Service that has cut thousands of government employees. Government AI is also promised to reduce wait times and lower costs to American taxpayers.

Government tech watchdogs worry the Trump administration’s automation drive — combined with federal layoffs — will give unproven technology an outsize role.

If AI drives federal decision-making instead of aiding human experts, glitches could unfairly deprive people of benefits or harm public safety, said Elizabeth Laird, a director at the Washington-based nonprofit Center for Democracy and Technology. There is “a fundamental mismatch” between what AI can do and what citizens expect from government, she said.

President Joe Biden in 2023 signed an executive order aimed at spurring government use of AI, while also containing its risks. In January, President Donald Trump repealed that order. His administration has removed AI guardrails while seeking to accelerate its rollout. A comprehensive White House AI plan is expected this month.

“President Trump has long stressed the importance of American AI dominance, and his administration is using every possible tool to streamline our government and deliver more efficient results for the American people,” White House spokeswoman Anna Kelly said in a statement. The Washington Post reviewed government disclosures and interviewed current and former federal workers about plans to expand government AI.

Some expressed alarm at the administration’s disregard for safety and government staff. Others saw potential to improve efficiency. “In government, you have so much that needs doing and AI can help get it done and get it done faster,” said Jennifer Pahlka, who was deputy U.S. chief technology officer in President Barack Obama’s second term.

Sahil Lavingia, a former DOGE staffer who pushed the Department of Veterans Affairs to use AI to identify potentially wasteful spending, said government should aggressively deploy the technology becoming so prevalent elsewhere. Government processes are efficient today, he said, “but could be made more efficient with AI.”

Lavingia argued no task should be off limits for experimentation, “especially in war.” “I don’t trust humans with life and death tasks,” he said, echoing a maximalist view of AI’s potential shared by some DOGE staffers. Here’s how AI is being deployed within some government agencies embracing the technology.

Waging war

The Pentagon is charging ahead with artificial intelligence this year. The number of military and civilian personnel using NGA Maven, one of the Pentagon’s core AI programs, has more than doubled since January, said Vice Adm. Frank Whitworth, director of the National Geospatial-Intelligence Agency, in a May speech.

The system, launched in 2017, processes imagery from satellites, drones and other sources to detect and identify potential targets for humans to assess. More than 25,000 U.S. military and civilian personnel around the world now use NGA Maven. NGA Maven is being expanded, Whitworth said, to interpret data such as audio and text in conjunction with imagery, offering commanders a “live map” of military operations.

The aim is to help it better distinguish combatants from noncombatants and enemies from allies, and for units using NGA Maven to be able to make 1,000 accurate decisions about potential targets within an hour. The Pentagon’s AI drive under Trump will give tech companies like data-mining firm Palantir a larger role in American military power.

A White House executive order and a Defense Department memo have instructed federal officials to rely more on commercial technology. In May, the Defense Department announced it was more than doubling its planned spending on a core AI system that is part of NGA Maven called Maven Smart System, allocating an additional $795 million.

The software, provided by Palantir, analyzes sensor data to help soldiers identify targets and commanders to approve strikes. It has been used for planning logistics to support deployed troops.

Air traffic control

The Federal Aviation Administration is testing whether AI software can reliably aid air traffic controllers, according to a person with knowledge of the agency’s plans who spoke on the condition of anonymity to avoid retaliation.

Humans would remain in the loop, the person said, but AI would help reduce fatigue and distraction. Air traffic control staff would continue to communicate with pilots, for example, but AI might handle repetitive and data-driven tasks, monitoring airspace more generally. Due in part to ongoing staff shortages in air traffic control, the agency’s AI plans include “planning for less people,” the person said.

Other uses for AI being explored at the FAA include analyzing air traffic or crash data and predicting when aircraft are likely to need maintenance, the person said. The FAA sees artificial intelligence as a potential tool to address airline safety concerns that were brought to the fore by the January midair collision that killed more than 60 people near Reagan National Airport.

“The FAA is exploring how AI can improve safety,” the agency said in an unsigned statement, but air traffic controllers do not currently use the technology. That includes using the technology to scan incident reports and other data to find risks around airports with a mixture of helicopter and airplane traffic, the statement said, while emphasizing humans will remain in charge.

“FAA subject matter experts are essential to our oversight and safety mission and that will never change,” the statement said.

Examining patents

The U.S. Patent and Trademark Office wants to test whether part of the job of patent examiners — who review patent applications to determine their validity — can be replaced by AI, according to records obtained by The Post and an agency employee who spoke on the condition of anonymity to describe internal deliberations.

Patent seekers who opt into a pilot program will have their applications fed into an AI search tool that will trawl the agency’s databases for existing patents with similar information. It will email applicants a list of the 10 most relevant documents, with the goal of efficiently spurring people to revise, alter or withdraw their application, the records show.

From July 21, per an email obtained by The Post, it will become “mandatory” for examiners to use an AI-based search tool to run a similarity check on patent applications. The agency did not respond to a question asking if it is the same technology used in the pilot program that will email patent applicants.

The agency employee said AI could have an expansive role at USPTO. Examiners write reports explaining whether applications fall afoul of patent laws or rules. The large language models behind recent AI systems like ChatGPT “are very good at writing reports, and their ability to analyze keeps getting better,” the employee said.

This month, the agency had planned to roll out another new AI search tool that examiners will be expected to use, according to internal documents reviewed by The Post. But the launch moved so quickly that concerns arose that USPTO workers — and some top leaders — did not understand what was about to happen.

Some staff suggested delaying the launch, the documents show, and it is unclear when it will ultimately be released. USPTO referred questions to the Commerce Department, which shared a statement from an unnamed spokesperson. “At the USPTO, we are evaluating how AI and technology can better support the great work of our patent examiners,” the statement said.

Airport security screening

You may see fewer security staff next time you fly as the Transportation Security Administration automates a growing number of tasks at airport checkpoints. TSA began rolling out facial recognition cameras to check IDs in 2022, a program now live in more than 200 airports nationwide.

Despite studies showing that facial recognition is not perfect and less accurate at identifying people of color, the agency says it is more effective at spotting impostors than human reviewers. A federal report this year found TSA’s facial recognition is more than 99 percent accurate across all demographic groups tested.

The agency says it is experimenting with automated kiosks that allow pre-checked passengers to pass through security with “minimal to no assistance” from TSA officers. During the Biden administration, these and other AI efforts at TSA were aimed at helping security officers be more efficient — not replacing them, said a former technology official at the Department of Homeland Security, TSA’s parent agency, who spoke on the condition of anonymity to discuss internal matters.

“It frees up the officer to spend more time interacting with a passenger,” the former official said. The new Trump administration has indicated it wants to accelerate AI projects, which could reduce the number of TSA officers at airports, according to Galvin Widjaja, CEO of Austin-based Lauretta.io, a contractor which works with TSA and DHS on tools for screening airport travelers.

“If an AI can make the decision, and there’s an opportunity to reduce the manpower, they’re going to do that,” Widjaja said in an interview.

Russ Read, a spokesman for TSA, said in an emailed statement that “the future of aviation security will be a combination of human talent and technological innovation.”

Tax audits

The Internal Revenue Service has an AI program to help employees query its internal manual, in addition to chatbots for a variety of internal uses. But the agency is now looking to off-load more significant tasks to AI tools.

Once the new administration took over, with a mandate from DOGE that targeted the IRS, the agency examined the feasibility of deploying AI to manage tax audits, according to a person familiar with the matter, speaking on the condition of anonymity for fear of retribution.

The push to automate work so central to the IRS’s mission underscores a broader strategy: to delegate functions typically left to human experts to powerful software instead. “The end game is to have one IT, HR, etc., for Treasury and get AI to do everything,” the person said.

A DOGE official, start-up founder Sam Corcos, has been overseeing work to deploy AI more broadly at the IRS.

But the lack of oversight of an ambitious effort to centralize the work of the IRS and feed it to a powerful AI tool has raised internal worries, the person said.

“The IRS has used AI for business functions including operational efficiency, fraud detection, and taxpayer services for a long time,” a Treasury Department spokeswoman said in a statement.

“Treasury CIO Sam Corcos is implementing the fulsome IRS modernization plan that taxpayers have deserved for over three decades.”

Caring for veterans

In April, the Department of Veterans Affairs’s top technology official emailed lieutenants with his interpretation of the Trump administration’s new AI policy.

“The message is clear to me,” said Charles Worthington, who serves as VA’s chief technology officer and chief AI officer. “Be aggressive in seizing AI opportunity, while implementing common sense safeguards to ensure these tools are trustworthy when they are used in VA’s most sensitive areas such as benefit determinations and health care.” The email was published to VA’s website in response to a public records request.

VA said it deployed hundreds of uses of artificial intelligence last year, making it one of the agencies most actively tapping AI based on government disclosures. Among the most controversial of these programs has been REACH VET, a scoring algorithm used to prioritize mental health assistance to patients predicted to be at the highest risk of suicide.

Last year, an investigation by the Fuller Project, a nonprofit news organization, found that the system prioritized help to White men, especially those who have been divorced or widowed — groups studies show to be at the highest risk of suicide.

VA acknowledged that REACH VET previously did not consider known risk factors for suicide in female veterans, making it less likely that women struggling with thoughts of suicide would be flagged for assistance.

Pete Kasperowicz, a VA spokesman, said in an email that the agency recently updated the REACH VET algorithm to account for several new risk factors specific to women, including military sexual trauma, pregnancy, ovarian cysts and infertility. Since the program launched in 2017, it has helped identify more than 117,000 at-risk veterans, prompting staff to offer them additional support and services, he said.

REACH VET was one of over 300 AI applications that the Biden administration labeled “safety impacting” or “rights impacting” in annual transparency reports. The Trump administration, which has derided the “risk-averse approach of the previous administration,” discontinued those labels and will instead denote sensitive programs as “high-impact.”
