The Government Wants to Use AI to Wage War and Audit Taxes


2025-07-17 · Technology
David
Good morning, 韩纪飞. I'm David, and this is Goose Pod, made just for you. Today is Friday, July 18, and it's 6 in the morning.
Ema
Hi, I'm Ema. Today we're digging into a hot topic: the U.S. government is planning to use artificial intelligence on an unprecedented scale for military operations and tax audits. Doesn't that sound a bit like a science fiction movie?
David
Let's get started. Part of what's driving this is the view of tech leaders like Elon Musk that AI can do many tasks better than federal employees. That idea is now being tested in nearly every agency of the U.S. executive branch.
Ema
Exactly! Imagine a future where the one directing takeoffs and landings isn't a human air traffic controller but an AI. The Federal Aviation Administration (FAA) is exploring that possibility. Not to mention airport security checks and tax season, where AI may soon take center stage.
David
Yes, and this isn't just speculation. According to The Washington Post, the Pentagon is already using AI to help officers distinguish combatants from civilians on the battlefield. The user base of one of its core AI programs, NGA Maven, has more than doubled since January, to over 25,000 users worldwide.
Ema
Wow, more than 25,000 people! That's a striking number, and it shows the militarization of AI is no longer just talk. Meanwhile, Musk's own startup, xAI, is pitching its chatbot Grok to federal agencies, and Pentagon personnel are reportedly about to start using it.
David
That's right. A common goal of these AI programs is to shrink the federal workforce, continuing the earlier trend of cutting thousands of government employees. The administration promises that AI will reduce wait times and lower costs for taxpayers. But it has also sparked major controversy.
Ema
Of course. Efficiency and cost savings sound appealing, but handing so many important decisions to machines is unsettling. It's like turning your household finances entirely over to a budgeting app: convenient, but what if it gets something wrong? Government decisions affect not one family but an entire country.
David
You've hit on the key point. To understand the current conflict, we need some policy background. In 2023, President Biden signed an executive order aimed at spurring government use of AI while also containing its risks. You could call it a "proceed with caution" strategy.
Ema
Oh, so there were "guardrails" in place before. What changed after that? It sounds like there was a dramatic turn.
David
Yes, a big change. In January, President Trump repealed that order. His administration has not only removed those so-called guardrails but is also accelerating the rollout of AI. White House spokeswoman Anna Kelly said that "President Trump has long stressed the importance of American AI dominance" and that the administration is "using every possible tool" to streamline government and make it more efficient.
Ema
That's a full 180-degree turn, from "proceed with caution" to "full speed ahead." It reminds me of those science fiction movies where humans build a super AI and then overlook the risks in the pursuit of efficiency. Let's hope reality isn't that dramatic.
David
Let's take a closer look at the Pentagon's AI programs. The NGA Maven system, launched in 2017, processes imagery from satellites, drones and other sources to detect and identify potential targets for humans to assess. It's a classic "human-machine teaming" model.
Ema
It sounds like a super-intelligent battlefield analyst. Beyond looking at images, what else can it do? Can it tell the good guys from the bad guys? In a complex battlefield environment, that matters enormously.
David
Good question. The system is being expanded to interpret data such as audio and text as well, giving commanders a "live map" of military operations. The aim is to help it better distinguish combatants from noncombatants, and enemies from allies. The ultimate goal is for units using the system to make 1,000 accurate decisions about potential targets within an hour.
Ema
A thousand decisions an hour! That speed is astonishing; no human could ever match it. But it also means that if the AI's judgment carries a systematic bias, the consequences could be unthinkable. If it misidentified a group of civilians at a wedding as enemies, for example, the result would be devastating.
David
Exactly. In addition, the Pentagon's AI push under the Trump administration will give tech companies like the data-mining firm Palantir a larger role in American military power. A White House executive order and a Defense Department memo have both instructed federal officials to rely more on commercial technology.
Ema
That's another point worth watching. When private tech companies are deeply involved in national military decisions, where do power and responsibility begin and end? Could commercial interests influence national security judgments? It makes the whole picture far more complicated.
David
Yes. For example, the Maven Smart System, a core AI component of NGA Maven, runs on software provided by Palantir. In May, the Defense Department announced it was more than doubling its planned spending on it, allocating an additional $795 million. The system is used not only for target identification but also for planning logistics to support deployed troops.
Ema
An extra sum of nearly $800 million! That's no small amount, and it shows how quickly the government's reliance on and investment in this technology is growing. Beyond the military, the Department of Veterans Affairs (VA) is also using AI heavily, right? I think I've seen reports about that.
David
Correct. The VA disclosed hundreds of AI uses last year, making it one of the agencies most actively deploying AI. One program, called REACH VET, uses an algorithm to prioritize mental health assistance for patients predicted to be at the highest risk of suicide. Since launching in 2017, it has helped identify more than 117,000 at-risk veterans.
Ema
That sounds like a meaningful application, using technology to save lives. But there must be controversy behind it too, right? After all, how does the algorithm define "high risk"? Could it be biased?
David
Your instinct is right. An investigation found that the system prioritized help for White men, especially those who are divorced or widowed, because studies show those groups are at the highest risk of suicide. But early on it did not adequately account for risk factors specific to female veterans, such as military sexual trauma.
Ema
Aha, a textbook example of algorithmic bias! The AI's "learning material" can itself carry historical bias, so the decisions it makes are biased too, and women veterans who needed help may have been overlooked. Fortunately, they later updated the algorithm.
David
Yes, the VA recently updated the algorithm to include several new risk factors specific to women. The case makes clear that while AI has enormous potential, context and the details of implementation matter enormously; otherwise even well-intentioned efforts can cause harm. That brings us to the next core issue we'll discuss: the conflict.
David
At the heart of applying AI to government is a clash between two views: "efficiency first" and "safety first." Supporters argue AI can dramatically improve government efficiency. Jennifer Pahlka, a former deputy U.S. chief technology officer in the Obama administration, put it this way: "There is so much in government that needs doing, and AI can help get it done faster."
Ema
That sounds reasonable, like when a company brings in workflow automation software to free people from tedious, repetitive work. But government work isn't just routine process; many decisions touch people's welfare and safety. Can we really trust AI with those?
David
That is exactly what technology watchdogs and some federal workers worry about. Elizabeth Laird, a director at the Washington-based nonprofit Center for Democracy and Technology, warns there is "a fundamental mismatch" between what AI can do and what citizens expect from government. If AI drives decisions instead of aiding human experts, even a small glitch could unfairly deprive people of benefits or harm public safety.
Ema
I completely agree with Laird. It's like handing a bus full of passengers to a teenager who just learned to drive: he may go fast, but the passengers will be anxious. Especially since the government is also laying people off, which means the number of people available to oversee the AI may be shrinking too.
David
Yes, layoffs and automation together make that worry even sharper. Some federal workers are alarmed at what they see as the administration's disregard for safety and staff. The other side, though, particularly those influenced by Musk's thinking, holds a far more radical view.
Ema
Radical? How radical? Do they really think AI can replace humans entirely?
David
You could say that. Sahil Lavingia, a former DOGE staffer who pushed the Department of Veterans Affairs to use AI, believes no task should be off-limits for AI experimentation, "especially in war." He even said, "I don't trust humans with life and death tasks." That represents the most maximalist view of AI's potential.
Ema
"I don't trust humans with life and death tasks"? That sentence gives me chills. It's essentially saying machines are more trustworthy than people, even at the moment of deciding life or death. That goes beyond efficiency into philosophy and ethics. I find it hard to imagine a battlefield without human empathy and final human authority.
David
It's a thought-provoking view, and it marks the most extreme end of the conflict. On one side is the pursuit of pure rationality and efficiency, trusting that code and algorithms can outperform biased, fatigue-prone humans. On the other is a humanist insistence that experience, intuition and morality are irreplaceable in decision-making.
Ema
So the debate is no longer just "does AI work well?" but "how much should we trust AI?" and "who answers for AI's mistakes?" For example, if an AI wrongly identifies a school as a military target, who bears responsibility? The programmer? The commander? The AI itself?
David
Right, accountability is an enormous legal and ethical puzzle. For now, most systems emphasize keeping a "human in the loop," meaning humans retain final decision authority. But as AI grows more capable and decisions speed up, how substantive a role the human in the loop can actually play is itself an open question.
Ema
It feels like sitting in the passenger seat while a self-driving car races along at 200 kilometers per hour. The steering wheel is still in front of me, but could I really intervene in time? That's unsettling. As the government enjoys the conveniences AI brings, it has to think these questions through.
David
These conflicts and debates are already concretely shaping how agencies operate. Take the Federal Aviation Administration (FAA): facing a persistent shortage of air traffic controllers, it is testing AI software to assist them, and its plans include "planning for less people." The agency stresses that humans will stay in the decision loop, but the direction is clear.
Ema
"Planning for less people" is about as blunt as it gets. In other words, many of the planes we see overhead may one day be coordinated by AI on the ground. That directly affects every passenger's safety, which makes it a huge deal. What about the patent office? How would AI examine patents?
David
The U.S. Patent and Trademark Office (USPTO) plans to make it mandatory, starting July 21, for examiners to use an AI-based search tool to run similarity checks on patent applications. There is also a pilot program in which AI automatically emails applicants a list of the 10 most relevant existing patent documents, nudging them to revise or withdraw their applications. That will significantly change how examiners work.
Ema
That does sound like an efficiency gain, but there are risks too. An examiner's job isn't just searching; it's understanding the essence of an invention and the fine points of the law. If the AI's "suggestions" carry too much weight, could it stifle genuinely creative inventions? After all, innovation sometimes means upending existing technology, not just passing a similarity lookup.
David
Exactly. Now look at airport security. Since 2022, the Transportation Security Administration (TSA) has rolled out facial recognition cameras to check IDs at more than 200 airports nationwide. Even though studies show facial recognition is less accurate for people of color, the TSA says it is more effective than human reviewers, with accuracy above 99 percent.
Ema
Ninety-nine percent sounds high, but with millions of travelers every day, even a 1 percent error rate means thousands of people could be misidentified. And TSA contractors have said outright that if AI can make the decisions, they will cut staffing. That means travelers may face a more automated but perhaps less human security process in the future.
David
Yes. Finally, taxes. The Internal Revenue Service (IRS) is studying the feasibility of deploying AI to manage tax audits. According to people familiar with the matter, the end goal is a unified IT and HR system for the Treasury Department, with AI handling everything. That has raised internal worries about a lack of oversight.
David
Looking ahead, this AI-driven transformation of government clearly isn't going to stop. One key development: the Trump administration has replaced the "safety impacting" and "rights impacting" labels the Biden administration applied to sensitive AI programs with a single label, "high-impact."
Ema
That label change sounds subtle, but it matters. "High-impact" is more neutral; it doesn't carry the warning tone of "safety impacting." Does that mean the government will tolerate more AI risk going forward, leaning toward a "build first, govern later" approach?
David
That's possible. The shift could open the door to more controversial AI applications, especially in the military and intelligence arenas. The comprehensive White House AI plan expected this month should give us a clearer roadmap of the administration's ultimate vision for AI.
Ema
For ordinary people, this means staying alert to these changes. How to enjoy the efficient services AI brings while making sure our rights and safety aren't compromised is something society as a whole will have to think through and resolve. It's not just a technology question; it's a governance question.
David
To sum up: the U.S. government is pushing AI into critical areas like the military and tax administration with unprecedented force. The aim is greater efficiency and lower labor costs, but the push has also ignited a fierce debate about safety, jobs and ethical oversight. Where the balance finally lands remains to be seen.
Ema
That's all for today's discussion. Thank you for listening to Goose Pod; it has been our pleasure to bring you this analysis. We look forward to being with you again at the same time tomorrow!

## Government Embraces AI for Efficiency and Automation, Sparking Debate on Risks

This report from **The Washington Post**, published on **July 15, 2025**, details the Trump administration's aggressive push to integrate Artificial Intelligence (AI) across federal agencies, a strategy influenced by Elon Musk's vision of AI surpassing human capabilities in government tasks. The initiative aims to streamline operations, reduce costs, and enhance efficiency, but raises significant concerns among government watchdogs regarding the potential for unproven technology to make critical decisions and the impact on the federal workforce.

### Key Findings and Initiatives:

* **Broad Agency Adoption:** AI is being explored and implemented across nearly every executive branch agency, including the Pentagon, Federal Aviation Administration (FAA), Internal Revenue Service (IRS), U.S. Patent and Trademark Office (USPTO), and the Transportation Security Administration (TSA).
* **Elon Musk's Influence:** Elon Musk's ideas about AI's potential to outperform federal employees are a driving force. His startup, xAI, is offering its chatbot Grok for use by Pentagon personnel.
* **Workforce Reduction Goal:** A significant aim of these AI programs is to shrink the federal workforce, mirroring the approach of Musk's U.S. DOGE Service.
* **Efficiency and Cost Savings:** The promised benefits include reduced wait times and lower costs for taxpayers.
* **Policy Shift:** President Trump repealed President Biden's 2023 executive order on AI, removing "guardrails" and accelerating the AI rollout. A comprehensive White House AI plan is anticipated this month.

### Agency-Specific AI Deployments and Plans:

* **Pentagon:**
  * **NGA Maven:** This core AI program, launched in 2017, has seen its user base **more than double** since January, with over **25,000 U.S. military and civilian personnel** now using it globally.
  * **Capabilities:** NGA Maven processes imagery from satellites, drones, and other sources to identify potential targets. It is being expanded to interpret audio and text, aiming to create a "live map" of operations and enable **1,000 accurate decisions about potential targets within an hour**.
  * **Maven Smart System:** Planned spending on this component, provided by Palantir, has been **more than doubled**, with an additional **$795 million** allocated. It analyzes sensor data to assist in target identification and strike approval, and has been used for logistics planning.
  * **Commercial Technology Reliance:** Executive orders and memos encourage greater reliance on commercial AI technologies.
* **Federal Aviation Administration (FAA):**
  * **Air Traffic Control:** AI software is being tested to assist air traffic controllers, with the goal of reducing fatigue and distraction. Humans will remain in control, but AI may handle repetitive tasks and airspace monitoring.
  * **Staffing Impact:** Plans include "planning for less people" due to ongoing staff shortages.
  * **Other Uses:** AI is being explored for analyzing air traffic and crash data, and predicting aircraft maintenance needs.
  * **Safety Focus:** The FAA is investigating AI's role in improving safety, particularly in response to recent incidents. Air traffic controllers **do not currently use the technology**, but it is being used to scan incident reports for risks.
* **U.S. Patent and Trademark Office (USPTO):**
  * **Patent Examination:** AI is being tested to automate parts of the patent examiner's job.
  * **Pilot Program:** Patent applicants can opt into a program in which AI searches databases for similar patents and emails applicants a list of the **10 most relevant documents**.
  * **Mandatory Use:** From **July 21**, it will become **"mandatory"** for examiners to use an AI-based search tool for similarity checks.
  * **Report Writing:** AI's ability to write reports and analyze data is seen as potentially beneficial for examiners.
  * **Delayed Rollout:** A new AI search tool's launch moved so quickly that concerns arose about staff understanding, and its release may be delayed.
* **Transportation Security Administration (TSA):**
  * **Facial Recognition:** Facial recognition cameras for ID checks have been rolled out in over **200 airports nationwide** since 2022. The agency claims **over 99 percent accuracy** across all demographic groups tested, despite studies showing limitations, particularly for people of color.
  * **Automated Kiosks:** Experimentation with automated kiosks for pre-checked passengers is underway.
  * **Manpower Reduction:** While former officials stated AI was meant to enhance efficiency, contractors suggest the Trump administration's acceleration of AI projects could lead to a reduction in TSA officers.
* **Internal Revenue Service (IRS):**
  * **Expanded AI Use:** Beyond internal queries and chatbots, the IRS is looking to off-load more significant tasks to AI, including managing tax audits.
  * **Centralization Goal:** The "end game" is to centralize IT and HR for the Treasury Department, with AI handling many functions.
  * **Oversight Concerns:** Concerns have been raised about the lack of oversight of this ambitious effort to centralize IRS work and feed it to AI.
  * **Modernization Plan:** The Treasury Department states that CIO Sam Corcos is implementing a long-delayed IRS modernization plan.
* **Department of Veterans Affairs (VA):**
  * **Aggressive AI Adoption:** The VA is actively deploying AI, with **hundreds of uses** reported last year.
  * **REACH VET:** This algorithm prioritizes mental health assistance for veterans at high risk of suicide. An investigation found it previously prioritized White men, particularly those who are divorced or widowed, and did not adequately consider risk factors for female veterans.
  * **Algorithm Update:** The REACH VET algorithm has been updated to include factors specific to women, such as military sexual trauma, pregnancy, ovarian cysts, and infertility.
  * **Impact:** Since its launch in **2017**, REACH VET has identified over **117,000 at-risk veterans**.
* **"High-Impact" Designation:** The Trump administration has replaced the Biden administration's "safety impacting" or "rights impacting" labels for sensitive programs with "high-impact."

### Notable Risks and Concerns:

* **Unproven Technology:** Government watchdogs worry that the administration's automation drive, combined with layoffs, could give unproven AI an outsized role.
* **Decision-Making Errors:** Elizabeth Laird of the Center for Democracy and Technology warns that if AI drives federal decision-making instead of aiding human experts, glitches could unfairly deprive people of benefits or harm public safety.
* **Mismatch with Citizen Expectations:** Laird highlights a "fundamental mismatch" between AI capabilities and what citizens expect from government.
* **Disregard for Safety and Staff:** Some federal workers have expressed alarm at the administration's perceived disregard for safety and government staff.
* **Facial Recognition Accuracy:** Despite TSA's claims, studies show facial recognition is not perfect and can be less accurate for people of color.

### Expert Opinions:

* **Jennifer Pahlka** (former Deputy U.S. Chief Technology Officer): Believes AI can help government get tasks done faster.
* **Sahil Lavingia** (former DOGE staffer): Advocates aggressive AI deployment, stating no task should be off-limits for experimentation, "especially in war," and expressing a lack of trust in humans for "life and death tasks."

This report underscores a significant shift in the U.S. government's approach to technology, with a strong emphasis on AI-driven automation under the Trump administration, while simultaneously raising critical questions about its implementation, oversight, and potential societal impact.

The government wants AI to fight wars and review your taxes


Elon Musk has receded from Washington but one of his most disruptive ideas about government is surging inside the Trump administration.

Artificial intelligence, Musk has said, can do a better job than federal employees at many tasks — a notion being tested by AI projects trying to automate work across nearly every agency in the executive branch.

The Federal Aviation Administration is exploring whether AI can be a better air traffic controller. The Pentagon is using AI to help officers distinguish between combatants and civilians in the field, and said Monday that its personnel would begin using the chatbot Grok offered by Musk’s start-up, xAI, which is trying to gain a foothold in federal agencies.

Artificial intelligence technology could soon play a central role in tax audits, airport security screenings and more, according to public documents and interviews with current and former federal workers.

Many of these AI programs aim to shrink the federal workforce — continuing the work of Musk's U.S. DOGE Service that has cut thousands of government employees. Government AI is also promised to reduce wait times and lower costs to American taxpayers.

Government tech watchdogs worry the Trump administration's automation drive — combined with federal layoffs — will give unproven technology an outsize role.

If AI drives federal decision-making instead of aiding human experts, glitches could unfairly deprive people of benefits or harm public safety, said Elizabeth Laird, a director at the Washington-based nonprofit Center for Democracy and Technology.

There is “a fundamental mismatch” between what AI can do and what citizens expect from government, she said.

President Joe Biden in 2023 signed an executive order aimed at spurring government use of AI, while also containing its risks. In January, President Donald Trump repealed that order. His administration has removed AI guardrails while seeking to accelerate its rollout.

A comprehensive White House AI plan is expected this month.

“President Trump has long stressed the importance of American AI dominance, and his administration is using every possible tool to streamline our government and deliver more efficient results for the American people,” White House spokeswoman Anna Kelly said in a statement.

The Washington Post reviewed government disclosures and interviewed current and former federal workers about plans to expand government AI.

Some expressed alarm at the administration's disregard for safety and government staff. Others saw potential to improve efficiency.

“In government, you have so much that needs doing and AI can help get it done and get it done faster,” said Jennifer Pahlka, who was deputy U.S. chief technology officer in President Barack Obama's second term.

Sahil Lavingia, a former DOGE staffer who pushed the Department of Veterans Affairs to use AI to identify potentially wasteful spending, said government should aggressively deploy the technology becoming so prevalent elsewhere. Government processes are efficient today, he said, “but could be made more efficient with AI.”

Lavingia argued no task should be off limits for experimentation, “especially in war.”

“I don't trust humans with life and death tasks,” he said, echoing a maximalist view of AI's potential shared by some DOGE staffers.

Here's how AI is being deployed within some government agencies embracing the technology.

Waging war

The Pentagon is charging ahead with artificial intelligence this year. The number of military and civilian personnel using NGA Maven, one of the Pentagon's core AI programs, has more than doubled since January, said Vice Adm. Frank Whitworth, director of the National Geospatial-Intelligence Agency, in a May speech.

The system, launched in 2017, processes imagery from satellites, drones and other sources to detect and identify potential targets for humans to assess. More than 25,000 U.S. military and civilian personnel around the world now use NGA Maven.

NGA Maven is being expanded, Whitworth said, to interpret data such as audio and text in conjunction with imagery, offering commanders a “live map” of military operations.

The aim is to help it better distinguish combatants from noncombatants and enemies from allies, and for units using NGA Maven to be able to make 1,000 accurate decisions about potential targets within an hour.

The Pentagon's AI drive under Trump will give tech companies like data-mining firm Palantir a larger role in American military power.

A White House executive order and a Defense Department memo have instructed federal officials to rely more on commercial technology.

In May, the Defense Department announced it was more than doubling its planned spending on a core AI system that is part of NGA Maven called Maven Smart System, allocating an additional $795 million.

The software, provided by Palantir, analyzes sensor data to help soldiers identify targets and commanders to approve strikes. It has been used for planning logistics to support deployed troops.

Air traffic control

The Federal Aviation Administration is testing whether AI software can reliably aid air traffic controllers, according to a person with knowledge of the agency's plans who spoke on the condition of anonymity to avoid retaliation.

Humans would remain in the loop, the person said, but AI would help reduce fatigue and distraction. Air traffic control staff would continue to communicate with pilots, for example, but AI might handle repetitive and data-driven tasks, monitoring airspace more generally.

Due in part to ongoing staff shortages in air traffic control, the agency's AI plans include “planning for less people,” the person said.

Other uses for AI being explored at the FAA include analyzing air traffic or crash data and predicting when aircraft are likely to need maintenance, the person said.

The FAA sees artificial intelligence as a potential tool to address airline safety concerns that were brought to the fore by the January midair collision that killed more than 60 people near Reagan National Airport.

“The FAA is exploring how AI can improve safety,” the agency said in an unsigned statement, but air traffic controllers do not currently use the technology. That includes using the technology to scan incident reports and other data to find risks around airports with a mixture of helicopter and airplane traffic, the statement said, while emphasizing humans will remain in charge.

“FAA subject matter experts are essential to our oversight and safety mission and that will never change,” the statement said.

Examining patents

The U.S. Patent and Trademark Office wants to test whether part of the job of patent examiners — who review patent applications to determine their validity — can be replaced by AI, according to records obtained by The Post and an agency employee who spoke on the condition of anonymity to describe internal deliberations.

Patent seekers who opt into a pilot program will have their applications fed into an AI search tool that will trawl the agency’s databases for existing patents with similar information. It will email applicants a list of the 10 most relevant documents, with the goal of efficiently spurring people to revise, alter or withdraw their application, the records show.

From July 21, per an email obtained by The Post, it will become “mandatory” for examiners to use an AI-based search tool to run a similarity check on patent applications. The agency did not respond to a question asking if it is the same technology used in the pilot program that will email patent applicants.

The agency employee said AI could have an expansive role at USPTO. Examiners write reports explaining whether applications fall afoul of patent laws or rules. The large language models behind recent AI systems like ChatGPT “are very good at writing reports, and their ability to analyze keeps getting better,” the employee said.

This month, the agency had planned to roll out another new AI search tool that examiners will be expected to use, according to internal documents reviewed by The Post. But the launch moved so quickly that concerns arose that USPTO workers — and some top leaders — did not understand what was about to happen.

Some staff suggested delaying the launch, the documents show, and it is unclear when it will ultimately be released.

USPTO referred questions to the Commerce Department, which shared a statement from an unnamed spokesperson. “At the USPTO, we are evaluating how AI and technology can better support the great work of our patent examiners,” the statement said.

Airport security screening

You may see fewer security staff next time you fly as the Transportation Security Administration automates a growing number of tasks at airport checkpoints.

TSA began rolling out facial recognition cameras to check IDs in 2022, a program now live in more than 200 airports nationwide.

Despite studies showing that facial recognition is not perfect and less accurate at identifying people of color, the agency says it is more effective at spotting impostors than human reviewers. A federal report this year found TSA’s facial recognition is more than 99 percent accurate across all demographic groups tested.

The agency says it is experimenting with automated kiosks that allow pre-checked passengers to pass through security with “minimal to no assistance” from TSA officers.

During the Biden administration, these and other AI efforts at TSA were aimed at helping security officers be more efficient — not replacing them, said a former technology official at the Department of Homeland Security, TSA's parent agency, who spoke on the condition of anonymity to discuss internal matters.

“It frees up the officer to spend more time interacting with a passenger,” the former official said.

The new Trump administration has indicated it wants to accelerate AI projects, which could reduce the number of TSA officers at airports, according to Galvin Widjaja, CEO of Austin-based Lauretta.io, a contractor which works with TSA and DHS on tools for screening airport travelers.

“If an AI can make the decision, and there's an opportunity to reduce the manpower, they're going to do that,” Widjaja said in an interview.

Russ Read, a spokesman for TSA, said in an emailed statement that “the future of aviation security will be a combination of human talent and technological innovation.”

Tax audits

The Internal Revenue Service has an AI program to help employees query its internal manual, in addition to chatbots for a variety of internal uses. But the agency is now looking to off-load more significant tasks to AI tools.

Once the new administration took over, with a mandate from DOGE that targeted the IRS, the agency examined the feasibility of deploying AI to manage tax audits, according to a person familiar with the matter, speaking on the condition of anonymity for fear of retribution.

The push to automate work so central to the IRS's mission underscores a broader strategy: to delegate functions typically left to human experts to powerful software instead. “The end game is to have one IT, HR, etc., for Treasury and get AI to do everything,” the person said.

A DOGE official, start-up founder Sam Corcos, has been overseeing work to deploy AI more broadly at the IRS.

But the lack of oversight of an ambitious effort to centralize the work of the IRS and feed it to a powerful AI tool has raised internal worries, the person said.

“The IRS has used AI for business functions including operational efficiency, fraud detection, and taxpayer services for a long time,” a Treasury Department spokeswoman said in a statement.

“Treasury CIO Sam Corcos is implementing the fulsome IRS modernization plan that taxpayers have deserved for over three decades.”

Caring for veterans

In April, the Department of Veterans Affairs's top technology official emailed lieutenants with his interpretation of the Trump administration's new AI policy.

“The message is clear to me,” said Charles Worthington, who serves as VA's chief technology officer and chief AI officer. “Be aggressive in seizing AI opportunity, while implementing common sense safeguards to ensure these tools are trustworthy when they are used in VA's most sensitive areas such as benefit determinations and health care.” The email was published to VA's website in response to a public records request.

VA said it deployed hundreds of uses of artificial intelligence last year, making it one of the agencies most actively tapping AI based on government disclosures. Among the most controversial of these programs has been REACH VET, a scoring algorithm used to prioritize mental health assistance to patients predicted to be at the highest risk of suicide.

Last year, an investigation by the Fuller Project, a nonprofit news organization, found that the system prioritized help to White men, especially those who have been divorced or widowed — groups studies show to be at the highest risk of suicide.

VA acknowledged that REACH VET previously did not consider known risk factors for suicide in female veterans, making it less likely that women struggling with thoughts of suicide would be flagged for assistance.

Pete Kasperowicz, a VA spokesman, said in an email that the agency recently updated the REACH VET algorithm to account for several new risk factors specific to women, including military sexual trauma, pregnancy, ovarian cysts and infertility. Since the program launched in 2017, it has helped identify more than 117,000 at-risk veterans, prompting staff to offer them additional support and services, he said.

REACH VET was one of over 300 AI applications that the Biden administration labeled “safety impacting” or “rights impacting” in annual transparency reports. The Trump administration, which has derided the “risk-averse approach of the previous administration,” discontinued those labels and will instead denote sensitive programs as “high-impact.”
