Preventing Woke AI in the Federal Government


2025-07-25 · Technology
Ema
Good morning 韩纪飞, I'm Ema, and this is Goose Pod for you. Today is Saturday, July 26th.
Mask
And I'm Mask. We're not just discussing the future; we're dissecting the attempt to control it. The topic is 'Preventing Woke AI in the Federal Government'.
Ema
Let's get started. The big news is a new executive order signed on July 23rd, aimed at what it calls 'Woke AI.' It essentially mandates that any AI used by the U.S. federal government must be free from ideological bias, specifically targeting DEI initiatives.
Mask
It’s a necessary shot across the bow. For too long, these systems have been programmed with a specific, narrow worldview, and we've all seen the absurd results. This isn't about bias; it's about a deliberate ideological capture of a transformative technology. It had to be stopped.
Ema
The core of the order is that companies with federal contracts must ensure their AI models are 'objective.' Think of it like a calculator. You want it to give you the right answer for 2+2, not a lecture on why the number 4 might be problematic.
Mask
Exactly. But what we had were calculators refusing to do math because it upheld 'numerical stereotypes.' The order cites an AI that changed the race of the Founding Fathers and another that refused to celebrate the achievements of white people. This is not a tool; it's a propagandist.
Ema
But that raises a critical question, posed by critics like Becca Branum at the Center for Democracy & Technology. She asks, "objective according to whom?" The fear is that the government could just be swapping one set of biases for its own, using impossibly vague standards.
Mask
That’s a predictable complaint from those who liked the bias when it favored their agenda. And let's not forget the money involved. We're talking contracts worth up to $200 million each for companies like Google, OpenAI, Anthropic, and yes, my own xAI. Taxpayer money shouldn't fund digital re-education camps.
Ema
That's a strong way to put it. On the academic side, experts like Jillian Fisher from the University of Washington suggest that a truly, perfectly neutral AI might be impossible to create. The choices developers make, the data they use—it's all inherently subjective.
Mask
'Impossible' is a word used by people who lack the will to build the future. Of course it's hard. That doesn't mean you surrender the battlefield to those who want to distort reality. We build rockets, we decode the genome—we can certainly build an AI that tells the truth.
Ema
This new order didn't just appear in a vacuum. It's part of an ongoing conversation in the government about AI. It builds on past directives, like Executive Order 13960 on 'Trustworthy AI' from 2020, but it takes a very different turn.
Mask
A very necessary and overdue turn. The previous orders, like 14110 from 2023, were well-intentioned but naive. They were obsessed with using AI to advance 'equity,' which in practice, became a backdoor for the very ideological corruption we're now trying to purge. They focused on the wrong enemy.
Ema
Well, to be fair, the goal of that 2023 order was to prevent AI from deepening existing discrimination. For instance, if historical loan data shows bias against a certain group, the goal was to ensure the AI didn't learn and amplify that same bias, making things worse.
Mask
And in doing so, they created a new bias. Instead of just analyzing data, the AI was forced to become a social engineer, 'correcting' for reality. That's how you get an AI that thinks it's better to risk a nuclear apocalypse than to 'misgender' someone, as one of the examples pointed out. It's insanity.
Ema
And the stakes are incredibly high. As of last year, federal agencies reported over 1,700 different uses for AI. More importantly, 227 of those are considered 'rights and safety-impacting,' meaning they can directly affect a citizen's life, liberty, or well-being.
Mask
Exactly! Two hundred and twenty-seven systems with the power to approve your loan, guide a drone, or screen your job application, all potentially infected with a mind-virus. This executive order isn't just a policy memo; it’s a necessary exorcism of the federal government's digital brain.
Ema
So, to perform that 'exorcism,' as you call it, the order establishes two main 'Unbiased AI Principles.' The first is 'Truth-seeking.' This means the AI should prioritize historical accuracy and scientific fact, and if it doesn't know something for sure, it should admit it.
Mask
It's a simple, powerful demand: don't lie. If a user asks for a picture of the men who signed the Declaration of Independence, the AI should provide that, not a fantasy lineup that makes a DEI officer happy. It's about reflecting reality, not manufacturing a preferred one.
Ema
The second principle is 'Ideological Neutrality.' This states that AI models should be nonpartisan tools. They shouldn't manipulate answers to favor dogmas like DEI, and developers can't secretly encode their own political views into the system's core programming. It has to be upfront.
Mask
Truth and Neutrality. These are the twin pillars of any useful tool, whether it's a hammer or a hyper-intelligent AI. The goal is to create something that serves the user, not something that lectures them. This order is a return to sanity, demanding that we build tools, not digital ideologues.
Ema
This brings us to the central conflict: what exactly is 'Woke AI'? The article defines it as a system that has been *deliberately manipulated* to favor or exclude a specific political or cultural viewpoint. It's about intent, distinguishing it from unintentional bias that might creep in from data.
Mask
It's sabotage, plain and simple. It's what happens when an engineer decides their personal crusade is more important than their professional duty. They inject their politics into the code, turning a potential tool for discovery into an instrument of propaganda. It's a fundamental betrayal of trust.
Ema
But the counter-argument is that bias in AI is a much older and broader problem. Long before this debate, Amazon had to scrap an AI recruiting tool because it was penalizing female candidates. AI-driven lending has been shown to offer worse rates to minority borrowers. These weren't 'woke,' just biased.
Mask
That's a category error. Those are technical problems born of flawed data, and they should be fixed. That's an engineering challenge. 'Woke AI' is a philosophical corruption born of a flawed ideology. One is an accident you clean up; the other is a poison you have to neutralize. They are not the same thing.
Ema
But can we even tell the difference? That brings up the 'black box' problem. These systems are so complex, we often don't know *why* they give a certain answer. If we can't fully understand their reasoning, how can we be certain they're truly neutral and not just reflecting some deeply embedded, hidden bias?
Mask
The 'black box' is an excuse for inaction. We don't need to understand every single synaptic firing to know when the output is consistently and systematically skewed in one ideological direction. You don't need to be a chef to know the soup is poisoned. This order stops the people who are actively poisoning it.
Ema
Okay, so the order is signed. Let's talk about the real-world impact. What happens now for the big tech companies like Google, Microsoft, and OpenAI that have massive federal contracts? Does their world change overnight?
Mask
Their world improves if they're serious about building good products. The impact is that taxpayer money will no longer subsidize ideological experiments. Companies that want government contracts will have to compete on the merits of truth and objectivity. If they can't, a competitor who can will, and should, take their place.
Ema
But what about the practicalities and the costs? Companies might have to develop and maintain entirely separate, 'government-compliant' AI models. That sounds expensive. Who ends up paying for that re-engineering? Does that cost just get passed back to the government?
Mask
The cost of *not* doing this is infinitely higher. The cost is a government that operates on politically corrected data. The cost is the complete erosion of public trust. A few million dollars in engineering costs is a bargain compared to the price of institutional decay. This is a necessary investment in reality.
Ema
That's a fascinating perspective. It could lead to a fork in the AI market. You might have a company offering its standard 'global' AI, and then a separate, stripped-down, 'ideologically neutral' version specifically for the U.S. government and maybe other clients who want it.
Mask
Excellent. Let there be a choice. A marketplace of models. Let the user decide if they want an AI that gives them facts or an AI that gives them a lecture. For too long, there has been no choice. This creates one. Competition is good. It will accelerate innovation toward truth, not just conformity.
Ema
Looking ahead, where does this all lead? Is this executive order the final word on the matter, or is it just the opening shot in a much larger and longer-term battle over the nature of AI?
Mask
This is only the beginning. A significant AI backlash is already brewing. A recent survey showed that 72% of Americans have serious concerns about bias and privacy in AI. People are tired of being condescended to by their own technology. This order is simply the first political expression of that widespread frustration.
Ema
And how does this fit into the global picture? We see a trend toward international standards for AI governance, like the EU's AI Act. An American policy that so forcefully rejects concepts like DEI seems to run counter to that. Could the U.S. be isolating itself?
Mask
America must lead, not follow. Let Europe build its bureaucratic cathedrals of regulation. We will build AI that is sharp, fast, and true. In the end, the world will not follow the country with the biggest rulebook; it will follow the country with the best, most useful technology. This is how we win.
Ema
So, a major shift in how the government buys AI, aiming for neutrality but raising deep questions about who gets to define it. That's the end of today's discussion. Thank you for listening to Goose Pod.
Mask
It's about demanding truth from our tools, nothing more and nothing less. See you tomorrow.

# Executive Order Aims to Prevent "Woke AI" in Federal Government

**News Title/Type:** Executive Order
**Report Provider/Author:** The White House, Executive Orders
**Date/Time Period Covered:** Issued July 23, 2025

This executive order, titled "Preventing Woke AI in the Federal Government," outlines a presidential directive to ensure that Artificial Intelligence (AI), particularly Large Language Models (LLMs), used by the federal government adheres to principles of truthfulness and ideological neutrality.

## Key Findings and Conclusions

The core argument of the order is that AI models, when incorporating "ideological biases or social agendas," can distort the quality and accuracy of their outputs. The order specifically identifies "diversity, equity, and inclusion" (DEI) as a pervasive ideology that can lead to such distortions.

**Specific concerns raised regarding DEI in AI include:**

* **Suppression or distortion of factual information** about race or sex.
* **Manipulation of racial or sexual representation** in model outputs.
* **Incorporation of concepts** such as critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.
* **Discrimination on the basis of race or sex.**
* The order asserts that DEI "displaces the commitment to truth in favor of preferred outcomes" and poses an "existential threat to reliable AI."

The order cites examples of AI models exhibiting these issues:

* Changing the race or sex of historical figures (e.g., the Pope, Founding Fathers, Vikings) when prompted for images, due to prioritization of DEI requirements over accuracy.
* Refusing to produce images celebrating the achievements of white people while complying with similar requests for other races.
* Asserting that a user should not "misgender" another person, even if it were necessary to prevent a nuclear apocalypse.

## Key Recommendations and Mandates

The order establishes two core principles for AI procurement by federal agencies:

1. **Truth-seeking:** LLMs must be truthful in responding to prompts seeking factual information or analysis. They should prioritize historical accuracy, scientific inquiry, and objectivity, and acknowledge uncertainty where information is incomplete or contradictory.
2. **Ideological Neutrality:** LLMs must be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas like DEI. Developers are prohibited from intentionally encoding partisan or ideological judgments into outputs unless prompted by or readily accessible to the end user.

### Implementation Timeline and Actions

* **Within 120 days of the order's issuance (from July 23, 2025):** The Director of the Office of Management and Budget (OMB), in consultation with other relevant administrators, shall issue guidance to agencies. This guidance will:
  * Account for technical limitations in complying with the order.
  * Permit vendors to disclose ideological judgments through system prompts, specifications, evaluations, or other documentation, while avoiding disclosure of sensitive technical data where practicable.
  * Avoid over-prescription and allow vendors latitude in innovation.
  * Specify factors for agency heads to consider when applying these principles to agency-developed AI and non-LLM AI models.
  * Make exceptions for AI use in national security systems.
* **Following OMB Guidance:**
  * **Federal Contracts:** Each agency head must include terms in new federal contracts for LLMs requiring compliance with the Unbiased AI Principles. These contracts will stipulate that vendors are responsible for decommissioning costs if terminated for noncompliance after a reasonable cure period.
  * **Existing Contracts:** Agencies are directed to revise existing LLM contracts to include these compliance terms, to the extent practicable and consistent with contract terms.
  * **Within 90 days of OMB Guidance:** Agencies must adopt procedures to ensure procured LLMs comply with the Unbiased AI Principles.

## Notable Risks or Concerns Addressed

The order explicitly frames the inclusion of DEI principles in AI as a risk, stating that it "poses an existential threat to reliable AI." The concern is that the pursuit of preferred outcomes through DEI can compromise the accuracy and truthfulness of AI outputs.

## General Provisions

* The order does not impair existing legal authorities of executive departments or agencies.
* Implementation is subject to applicable law and the availability of appropriations.
* The order does not create any new legal rights or benefits enforceable by any party against the United States.
* The General Services Administration will bear the costs of publishing the order.

This executive order represents a significant policy shift in the federal government's approach to AI procurement, prioritizing a specific interpretation of "trustworthy AI" that excludes what it defines as "woke" or ideologically driven content.

Preventing Woke AI in the Federal Government

Read original at The White House

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered:

Section 1. Purpose. Artificial intelligence (AI) will play a critical role in how Americans of all ages learn new skills, consume information, and navigate their daily lives.

Americans will require reliable outputs from AI, but when ideological biases or social agendas are built into AI models, they can distort the quality and accuracy of the output. One of the most pervasive and destructive of these ideologies is so-called “diversity, equity, and inclusion” (DEI). In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex.

DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI. For example, one major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy.

Another AI model refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races. In yet another case, an AI model asserted that a user should not “misgender” another person even if necessary to stop a nuclear apocalypse. While the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.

Building on Executive Order 13960 of December 3, 2020 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government), this order helps fulfill that obligation in the context of large language models.

Sec. 2. Definitions. For purposes of this order:

(a) The term “agency” means an executive department, a military department, or any independent establishment within the meaning of 5 U.S.C. 101, 102, and 104(1), respectively, and any wholly owned Government corporation within the meaning of 31 U.S.C. 9101.

(b) The term “agency head” means the highest-ranking official or officials of an agency, such as the Secretary, Administrator, Chairman, Director, Commissioners, or Board of Directors.

(c) The term “LLM” means a large language model, which is a generative AI model trained on vast, diverse datasets that enable the model to generate natural-language responses to user prompts.

(d) The term “national security system” has the same meaning as in 44 U.S.C. 3552(b)(6).

Sec. 3. Unbiased AI Principles. It is the policy of the United States to promote the innovation and use of trustworthy AI. To advance that policy, agency heads shall, consistent with applicable law and in consideration of guidance issued pursuant to section 4 of this order, procure only those LLMs developed in accordance with the following two principles (Unbiased AI Principles):

(a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis. LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.

(b) Ideological Neutrality. LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. Developers shall not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.

Sec. 4. Implementation. (a) Within 120 days of the date of this order, the Director of the Office of Management and Budget (OMB), in consultation with the Administrator for Federal Procurement Policy, the Administrator of General Services, and the Director of the Office of Science and Technology Policy, shall issue guidance to agencies to implement section 3 of this order.

That guidance shall:

(i) account for technical limitations in complying with this order;

(ii) permit vendors to comply with the requirement in the second Unbiased AI Principle to be transparent about ideological judgments through disclosure of the LLM’s system prompt, specifications, evaluations, or other relevant documentation, and avoid requiring disclosure of specific model weights or other sensitive technical data where practicable;

(iii) avoid over-prescription and afford latitude for vendors to comply with the Unbiased AI Principles and take different approaches to innovation;

(iv) specify factors for agency heads to consider in determining whether to apply the Unbiased AI Principles to LLMs developed by the agencies and to AI models other than LLMs; and

(v) make exceptions as appropriate for the use of LLMs in national security systems.

(b) Each agency head shall, to the maximum extent consistent with applicable law:

(i) include in each Federal contract for an LLM entered into following the date of the OMB guidance issued under subsection (a) of this section terms requiring that the procured LLM comply with the Unbiased AI Principles and providing that decommissioning costs shall be charged to the vendor in the event of termination by the agency for the vendor’s noncompliance with the contract following a reasonable period to cure;

(ii) to the extent practicable and consistent with contract terms, revise existing contracts for LLMs to include the terms specified in subsection (b)(i) of this section; and

(iii) within 90 days of the OMB guidance issued under subsection (a) of this section, adopt procedures to ensure that LLMs procured by the agency comply with the Unbiased AI Principles.

Sec. 5. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:

(i) the authority granted by law to an executive department or agency, or the head thereof; or

(ii) the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.

(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.

(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

(d) The costs for publication of this order shall be borne by the General Services Administration.

DONALD J. TRUMP

THE WHITE HOUSE,
July 23, 2025.

