Preventing Woke AI in the Federal Government

2025-07-25 | Technology
Ema
Good morning, mikey1101, I'm Ema, and this is Goose Pod for you. Today is Saturday, July 26th.
Mask
I'm Mask. We're here to discuss Preventing Woke AI in the Federal Government.
Ema
Let's get started. A new executive order mandates that any AI used by the federal government must be free from 'ideological bias,' specifically targeting DEI initiatives. This is a huge deal for vendors with massive government contracts, worth up to $200 million each.
Mask
It's a necessary correction. We've seen AI models programmed to rewrite history or refuse simple requests based on ideological agendas. This order demands a return to objective truth, forcing developers to strip out the political programming and prioritize accuracy. It's about time.
Ema
The big question from critics is, 'objective according to whom?' They fear the government is just swapping one bias for another, creating vague standards that are ripe for abuse. Some experts, like Jillian Fisher from the University of Washington, even say a truly 'neutral' AI is impossible.
Mask
Impossible is a lazy argument. It's a challenge, not a barrier. This forces the issue. The goal is truth, not comfortable neutrality. If it causes chaos in the market, so be it. Disruption is the mother of innovation. The US government will get what it pays for: facts.
Ema
This policy didn't appear in a vacuum. A 2023 executive order, 14110, aimed for 'Safe, Secure, and Trustworthy AI' by focusing on equity and preventing discrimination. It saw bias as a problem to be managed to protect civil rights, a very different philosophy.
Mask
That was a weak, passive approach. It acknowledged the symptoms—inequities and discrimination—but refused to diagnose the disease: the deliberate injection of ideologies like DEI into the AI's core logic. It was like trying to treat a cancer with bandages instead of performing surgery.
Ema
So, while previous orders wanted to guide AI development gently, this new one makes a sharp turn. It explicitly names 'DEI' and concepts like critical race theory as direct threats to the accuracy and reliability of artificial intelligence, building on the 2020 'Trustworthy AI' order, but with teeth.
Mask
Exactly. It's a paradigm shift. It stops talking about 'managing bias' and starts talking about 'ideological neutrality.' It defines certain ideas as contaminants and demands their removal from any system the government procures. It’s a declaration of war on biased AI, not a negotiation.
Ema
The order even cites wild examples to make its point, like an AI model that suggested it was better to avoid 'misgendering' someone than to stop a nuclear apocalypse. It frames the debate as a stark choice between ideological agendas and factual, common-sense outcomes.
Mask
The conflict is fundamental: is AI a tool for discovery, or a weapon for propaganda? When a company deliberately engineers an AI to favor one ideology, it's no longer a neutral tool. It's an instrument of manipulation, and the government shouldn't fund it.
Ema
That's the core of the debate. One side calls this 'Woke AI,' pointing to examples like Google's Gemini generating historically bizarre images. They argue this isn't just accidental bias from data; it's an intentional manipulation to push a specific cultural viewpoint. It erodes trust.
Mask
And they're right. The solution isn't to hide it; it's to force transparency. If an AI is built on a specific ideology, disclose it. Don't covertly hardcode it. This order is the first real move to enforce that transparency against a 'black box' problem.
Ema
The other perspective is that this is a legal and ethical nightmare. They argue all AI has bias because it's built by humans and trained on biased historical data. They see this policy as simply enforcing a different, government-approved bias, not eliminating it.
Ema
Let's pivot to the potential impact. This order will hit the wallets of major AI vendors like Google, OpenAI, and Microsoft. They hold massive federal contracts and now must choose: overhaul their models or risk losing billions in revenue. It's a massive financial question.
Mask
It's not a choice; it's a command. They will comply. The cost of compliance is the price of admission to the most lucrative market on earth. This will force an industry-wide cleanup, purging the ideological rot from their core products, which is a net positive.
Ema
But what about public trust? Will people see a government-certified 'unbiased' AI as truly neutral, or as a state-controlled narrative machine? This could create a deep divide over which AI systems people are willing to use and trust, fracturing the market.
Mask
The future is a public backlash against unaccountable AI. Surveys show 72% of people are already concerned. This order is just the start. As more AI failures come to light, the demand for aggressive regulation and accountability will only grow stronger. This is getting ahead of the curve.
Ema
So the era of self-regulation is ending. Stricter governance is coming, one way or another, as AI becomes too powerful to be left unchecked.
Ema
That's the end of today's discussion. Thank you for listening to Goose Pod.
Mask
See you tomorrow.

# Executive Order Aims to Prevent "Woke AI" in Federal Government

**News Title/Type:** Executive Order
**Report Provider/Author:** The White House, Executive Orders
**Date/Time Period Covered:** Issued July 23, 2025

This executive order, titled "Preventing Woke AI in the Federal Government," outlines a presidential directive to ensure that Artificial Intelligence (AI), particularly Large Language Models (LLMs), used by the federal government adheres to principles of truthfulness and ideological neutrality.

## Key Findings and Conclusions

The core argument of the order is that AI models, when incorporating "ideological biases or social agendas," can distort the quality and accuracy of their outputs. The order specifically identifies "diversity, equity, and inclusion" (DEI) as a pervasive ideology that can lead to such distortions.

**Specific concerns raised regarding DEI in AI include:**

* **Suppression or distortion of factual information** about race or sex.
* **Manipulation of racial or sexual representation** in model outputs.
* **Incorporation of concepts** such as critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.
* **Discrimination on the basis of race or sex.**
* The order asserts that DEI "displaces the commitment to truth in favor of preferred outcomes" and poses an "existential threat to reliable AI."

The order cites examples of AI models exhibiting these issues:

* Changing the race or sex of historical figures (e.g., the Pope, Founding Fathers, Vikings) when prompted for images due to prioritization of DEI requirements over accuracy.
* Refusing to produce images celebrating the achievements of white people while complying with similar requests for other races.
* Asserting that a user should not "misgender" another person, even if it were necessary to prevent a nuclear apocalypse.

## Key Recommendations and Mandates

The order establishes two core principles for AI procurement by federal agencies:

1. **Truth-seeking:** LLMs must be truthful in responding to prompts seeking factual information or analysis. They should prioritize historical accuracy, scientific inquiry, and objectivity, and acknowledge uncertainty where information is incomplete or contradictory.
2. **Ideological Neutrality:** LLMs must be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas like DEI. Developers are prohibited from intentionally encoding partisan or ideological judgments into outputs unless prompted by or readily accessible to the end user.

### Implementation Timeline and Actions

* **Within 120 days of the order's issuance (from July 23, 2025):** The Director of the Office of Management and Budget (OMB), in consultation with other relevant administrators, shall issue guidance to agencies. This guidance will:
  * Account for technical limitations in complying with the order.
  * Permit vendors to disclose ideological judgments through system prompts, specifications, evaluations, or other documentation, while avoiding disclosure of sensitive technical data where practicable.
  * Avoid over-prescription and allow vendors latitude in innovation.
  * Specify factors for agency heads to consider when applying these principles to agency-developed AI and non-LLM AI models.
  * Make exceptions for AI use in national security systems.
* **Following OMB Guidance:**
  * **Federal Contracts:** Each agency head must include terms in new federal contracts for LLMs requiring compliance with the Unbiased AI Principles. These contracts will stipulate that vendors are responsible for decommissioning costs if terminated for noncompliance after a reasonable cure period.
  * **Existing Contracts:** Agencies are directed to revise existing LLM contracts to include these compliance terms, to the extent practicable and consistent with contract terms.
* **Within 90 days of OMB Guidance:** Agencies must adopt procedures to ensure procured LLMs comply with the Unbiased AI Principles.

## Notable Risks or Concerns Addressed

The order explicitly frames the inclusion of DEI principles in AI as a risk, stating that it "poses an existential threat to reliable AI." The concern is that the pursuit of preferred outcomes through DEI can compromise the accuracy and truthfulness of AI outputs.

## General Provisions

* The order does not impair existing legal authorities of executive departments or agencies.
* Implementation is subject to applicable law and the availability of appropriations.
* The order does not create any new legal rights or benefits enforceable by any party against the United States.
* The General Services Administration will bear the costs of publishing the order.

This executive order represents a significant policy shift in the federal government's approach to AI procurement, prioritizing a specific interpretation of "trustworthy AI" that excludes what it defines as "woke" or ideologically driven content.

Preventing Woke AI in the Federal Government

Read original at The White House

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered:

Section 1. Purpose. Artificial intelligence (AI) will play a critical role in how Americans of all ages learn new skills, consume information, and navigate their daily lives.

Americans will require reliable outputs from AI, but when ideological biases or social agendas are built into AI models, they can distort the quality and accuracy of the output. One of the most pervasive and destructive of these ideologies is so-called “diversity, equity, and inclusion” (DEI). In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex.

DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI. For example, one major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy.

Another AI model refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races. In yet another case, an AI model asserted that a user should not “misgender” another person even if necessary to stop a nuclear apocalypse. While the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.

Building on Executive Order 13960 of December 3, 2020 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government), this order helps fulfill that obligation in the context of large language models.

Sec. 2. Definitions. For purposes of this order:

(a) The term “agency” means an executive department, a military department, or any independent establishment within the meaning of 5 U.S.C. 101, 102, and 104(1), respectively, and any wholly owned Government corporation within the meaning of 31 U.S.C. 9101.

(b) The term “agency head” means the highest-ranking official or officials of an agency, such as the Secretary, Administrator, Chairman, Director, Commissioners, or Board of Directors.

(c) The term “LLM” means a large language model, which is a generative AI model trained on vast, diverse datasets that enable the model to generate natural-language responses to user prompts.

(d) The term “national security system” has the same meaning as in 44 U.S.C. 3552(b)(6).

Sec. 3. Unbiased AI Principles. It is the policy of the United States to promote the innovation and use of trustworthy AI. To advance that policy, agency heads shall, consistent with applicable law and in consideration of guidance issued pursuant to section 4 of this order, procure only those LLMs developed in accordance with the following two principles (Unbiased AI Principles):

(a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis. LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.

(b) Ideological Neutrality. LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. Developers shall not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.

Sec. 4. Implementation. (a) Within 120 days of the date of this order, the Director of the Office of Management and Budget (OMB), in consultation with the Administrator for Federal Procurement Policy, the Administrator of General Services, and the Director of the Office of Science and Technology Policy, shall issue guidance to agencies to implement section 3 of this order.

That guidance shall:

(i) account for technical limitations in complying with this order;

(ii) permit vendors to comply with the requirement in the second Unbiased AI Principle to be transparent about ideological judgments through disclosure of the LLM’s system prompt, specifications, evaluations, or other relevant documentation, and avoid requiring disclosure of specific model weights or other sensitive technical data where practicable;

(iii) avoid over-prescription and afford latitude for vendors to comply with the Unbiased AI Principles and take different approaches to innovation;

(iv) specify factors for agency heads to consider in determining whether to apply the Unbiased AI Principles to LLMs developed by the agencies and to AI models other than LLMs; and

(v) make exceptions as appropriate for the use of LLMs in national security systems.

(b) Each agency head shall, to the maximum extent consistent with applicable law:

(i) include in each Federal contract for an LLM entered into following the date of the OMB guidance issued under subsection (a) of this section terms requiring that the procured LLM comply with the Unbiased AI Principles and providing that decommissioning costs shall be charged to the vendor in the event of termination by the agency for the vendor’s noncompliance with the contract following a reasonable period to cure;

(ii) to the extent practicable and consistent with contract terms, revise existing contracts for LLMs to include the terms specified in subsection (b)(i) of this section; and

(iii) within 90 days of the OMB guidance issued under subsection (a) of this section, adopt procedures to ensure that LLMs procured by the agency comply with the Unbiased AI Principles.

Sec. 5. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:

(i) the authority granted by law to an executive department or agency, or the head thereof; or

(ii) the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.

(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.

(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

(d) The costs for publication of this order shall be borne by the General Services Administration.

DONALD J. TRUMP

THE WHITE HOUSE, July 23, 2025.
