Doge reportedly using AI tool to create ‘delete list’ of federal regulations

2025-07-28 · Technology
Aura Windfall
Good morning, I'm Aura Windfall, and this is Goose Pod for you. Today is Tuesday, July 29th. It’s 3:00 AM, and we’re here to explore a story that touches on technology, power, and the very rules that shape our lives.
Mask
I'm Mask. We're discussing a report that the "department of government efficiency," or Doge, is using an AI tool to create a ‘delete list’ of federal regulations. This isn’t just an update; it’s a potential revolution in governance, with a stated goal of cutting 50% of federal regulations.
Aura Windfall
Let's get started. A ‘delete list’ created by AI. When I hear that, my spirit immediately asks, what is the soul of this machine? What values are guiding its decisions to erase rules that, for better or worse, were written by people to protect people?
Mask
You're looking for a soul, I'm looking for efficiency. The soul of government is bureaucracy, and it's bloated. The "Doge AI Deregulation Decision Tool" is a scalpel. It’s designed to analyze 200,000 regulations and slice away the 100,000 that are obsolete or redundant. It’s surgical precision.
Aura Windfall
But surgery has risks, and a scalpel in an untrained hand, or a biased algorithm, can do immense harm. The Washington Post mentioned internal documents. The Department of Housing and Urban Development, HUD, apparently used this tool on over a thousand regulatory sections. That’s staggering.
Mask
It’s not just staggering; it’s impressive. The Consumer Financial Protection Bureau used it to write "100% of deregulations." That's not harm; that's warp speed. Trump campaigned on the "most aggressive regulatory reduction" in history, and this is the engine to make it happen. Pure execution.
Aura Windfall
And what about the people guiding this engine? The article mentions the appointment of inexperienced staffers, including a 19-year-old. What I know for sure is that wisdom and experience are crucial when making decisions that affect millions. How can we trust the process without them?
Mask
You’re stuck on old metrics. "Inexperienced" is your word. I say "unburdened." Unburdened by decades of "this is how we've always done it." You need disruptive thinkers, people who aren't afraid to break things to build something better. Age is irrelevant; vision is everything.
Aura Windfall
Vision without grounding can be dangerous. Three HUD employees confirmed that AI was recently used to review hundreds, if not more than a thousand, lines of regulations. It feels less like a vision and more like an uncontrolled experiment. We need to find the truth of what this means for environmental protection, consumer safety... the real-world impact.
Mask
The real-world impact is cutting the red tape that drives up costs for everyone. The White House says all options are being explored. They call the team the "best and brightest." This isn't an experiment; it's a necessary, albeit aggressive, transformation of a failing system.
Aura Windfall
A transformation that seems to be happening in the shadows, based on "internal documents." True transformation requires transparency and a collective spirit. It needs buy-in from the people it affects. This feels more like a decree from a black box, and that naturally creates distrust and fear.
Mask
You can’t innovate by committee. You can't achieve greatness by asking for permission at every step. Sometimes you have to force the future to arrive. The fact that the CFPB is all-in on this for deregulation shows its power. It's a testament to the tool's effectiveness.
Aura Windfall
But is effectiveness the only measure of what is right or good? What about fairness? What about the purpose behind those regulations in the first place? To simply "delete" them feels like erasing stories, erasing the lessons learned from past mistakes that led to those rules.
Mask
You're romanticizing bureaucracy. Regulations are not sacred texts; they are operational code. And like any code, it gets buggy, bloated, and needs to be refactored or deleted. This is simply the most advanced code editor for the law that has ever been invented. It's progress.
Aura Windfall
To understand this moment, we have to look at the path that led us here. This idea of using technology in government isn't new, is it? There's a whole history here, a story of striving for something better, for a government that truly serves its people.
Mask
It's a history of baby steps and bureaucratic inertia. The Obama administration put out reports on AI and regulation back in 2016. It was mostly talk, focusing on regulating AI products, not using AI to deregulate the government itself. It was timid. They lacked the audacity to think bigger.
Aura Windfall
But they started the conversation, didn't they? They planted a seed. Then the Trump administration's 2020 executive order encouraged agencies to use AI where benefits outweigh risks. It feels like each step was building on the last, a slow and steady journey toward a more modern government.
Mask
A journey moving at a glacial pace. The AI in Government Act of 2020 was more of the same—creating centers of excellence, planning for staffing. It's all process, no product. While they were busy forming committees, the regulatory jungle just kept growing denser and more impassable.
Aura Windfall
Then came the Biden administration's executive order on "Safe, Secure, and Trustworthy" AI, alongside its "Blueprint for an AI Bill of Rights." It seems to me the focus shifted toward safety and managing risk. This speaks to a deep-seated need to build trust before we build tools.
Mask
"AI Bill of Rights" is a great headline, but it's a brake pedal when you need an accelerator. The real significant move was the OMB's memo in March 2024, which finally started developing concrete protocols for AI in enforcement. It forced agencies to track and report their AI use. Accountability started there.
Aura Windfall
And the NIST "Artificial Intelligence Risk Management Framework" from 2023 feels like a core piece of this puzzle. It created a common language for risk—reliability, accuracy, privacy, fairness, bias. It’s like creating a shared set of values for how we should approach this powerful technology. It’s a foundation.
Mask
A foundation, yes, but you can’t live in a foundation. You have to build on it. And fast. While NIST was defining taxonomies, a GAO survey found over 1,200 planned AI use cases in government. The demand is there. The tools are there. The bottleneck is the bureaucracy's fear of moving fast.
Aura Windfall
What I know for sure is that public trust is the currency of any government. The research says public support for AI correlates with general trust in government, which is at historic lows. Rushing forward without rebuilding that trust is like building a skyscraper on sand. It's a recipe for collapse.
Mask
You rebuild trust with results, not with endless public conversations about long-term impacts. People want a government that works, that costs less, that gets out of their way. The EPA saw a 47% improvement in detecting violations by using AI for targeting inspections. That's a result. That's trust.
Aura Windfall
That's a fascinating example. But there's a difference between using AI to enforce existing rules more effectively and using it to eliminate the rules themselves. One is about precision, the other is about ideology. The concern is that the AI becomes an excuse to push a political agenda without debate.
Mask
It's not about ideology; it's about pragmatism. The EU AI Act is a perfect example of what not to do—a top-down, risk-based model that will slow innovation to a crawl with massive fines. The US model, while messy, is more adaptive. It fosters a pro-innovation agenda. It encourages flexibility.
Aura Windfall
Flexibility is a beautiful word, but it can also mean a lack of protection. The article "Regulatory Policy and Practice on AI’s Frontier" talks about modernizing regulation to work *with* AI, not just getting rid of it. It’s about being adaptive while holding onto core objectives like consumer protection.
Mask
Exactly. You modernize by allowing AI to challenge traditional approaches. You hire computer scientists into regulatory agencies. You use AI to accelerate things like permitting. But you cannot do that when you have 200,000 regulations, many written for a pre-digital world, acting as an anchor. You have to cut the rope.
Aura Windfall
So you see this "delete list" as cutting the rope, while I see it as potentially cutting the lifeline. The history shows a consistent, bipartisan push toward using AI, but always with this undercurrent of caution, of risk management, of building trust. This new approach feels like a radical departure from that journey.
Mask
It *is* a radical departure. It has to be. Incrementalism has failed. The system is too complex, too entrenched. You need a paradigm shift. The Doge tool is that shift. It's an attempt to leapfrog the decades of debate and get straight to a leaner, more efficient state. It's a high-risk, high-reward bet on the future.
Aura Windfall
And this brings us to the heart of the conflict. It's a clash of worlds. On one side, you have this drive for radical, disruptive efficiency. On the other, you have a deep-seated need for trust, transparency, and a process that feels human and accountable. It's a transatlantic divide, too.
Mask
Absolutely. Look at the EU versus the U.S. The EU is building a regulatory fortress with its AI Act. It's centralized, comprehensive, and has fines of up to 7% of global turnover. It’s a classic European move: when in doubt, regulate and create a bureaucracy around it. It’s predictable and slow.
Aura Windfall
But is that necessarily a bad thing? Their approach seems to center on protecting the individual first. They're creating clear, binding rules for high-risk systems. What you call a fortress, others might call a safe harbor. It provides predictability and a sense of security for citizens and businesses alike.
Mask
It's a harbor where innovation goes to die. The U.S. approach is far more dynamic. It's distributed across agencies, using existing legal authorities. It's messier, yes, but it's also more flexible. It’s about investing in non-regulatory infrastructure like the NIST framework and letting competition thrive. It’s about speed.
Aura Windfall
But that "messy" approach has its own problems. A Stanford report found that most U.S. agencies hadn't even created the AI plans they were required to. It's a patchwork. Without a clear, unified vision, you risk creating gaps where harms can occur, where there is no accountability. It's a crisis of legitimacy.
Mask
You see a crisis, I see a competitive advantage. Let the agencies figure out what works for their domain. The EU-U.S. Trade and Technology Council is trying to find common ground, but the philosophies are fundamentally different. We prioritize innovation; they prioritize regulation. The market will decide which is better.
Aura Windfall
What I know for sure is that the market doesn't have a conscience. That's what regulations are for. The debate isn't just about speed versus safety. It's about who gets to write the rules. Should it be governments accountable to the public, or technology itself, operating on logic we don't fully understand?
Mask
That’s a false dichotomy. The people who build the technology are writing the future. The EU's Digital Services Act and Digital Markets Act are attempts to legislate technology they barely understand. In the U.S., we're letting the builders build, and that will ultimately create more value for everyone. It's a bet on progress.
Aura Windfall
And what about the stakeholders, the environmental groups, the consumer advocates? Their concern isn't about stopping progress. It's about ensuring that the "progress" doesn't come at the cost of clean air, safe products, or financial stability. Their voices seem to be absent from this "delete list" process.
Mask
Those voices have had the microphone for fifty years. The result is a regulatory state that stifles growth. This is a course correction. The conflict is between the entrenched interests of the past and the disruptive potential of the future. You can’t make an omelet without breaking a few eggs. Or 100,000 regulations.
Aura Windfall
That's such a provocative way to put it. But when we talk about "eggs," we're talking about rules that might protect a family from a predatory loan or a community from industrial pollution. The ethical conflict is immense: can an algorithm truly weigh those human costs against economic efficiency? I have my doubts.
Aura Windfall
And this leads directly to the impact on all of us. The data is clear: people are more concerned than optimistic about AI. In both the U.S. and the U.K., nearly half the population believes the risks of AI outweigh the benefits. There's a profound sense of anxiety in the air.
Mask
Anxiety is the natural reaction to any powerful, new technology. People were anxious about the printing press, the steam engine, the internet. Public opinion is a lagging indicator. Experts are consistently more optimistic than the public, especially about AI's potential to increase productivity and create new kinds of jobs.
Aura Windfall
But this isn't just about jobs. It's about trust. A staggering 82% of U.S. voters don't trust tech executives to self-regulate AI. And 68% of people in the U.K. have little to no confidence in their government's ability to regulate it either. This is a massive trust deficit. What does that mean for governance?
Mask
It means there's a huge opportunity for whoever can deliver results. People don't trust the process because the process is broken. It's slow, political, and ineffective. If a tool like Doge's AI can cut through the gridlock and make government work better, trust will follow the results. Action inspires confidence.
Aura Windfall
Or does it deepen the distrust? When people feel they have no control, they don't feel empowered; they feel alienated. Pew Research found that both the public and experts feel they have little to no control over how AI is used in their lives. This isn't a feeling we can simply dismiss. It's a cry to be heard.
Mask
It’s a cry for leadership. People want government involvement and corporate responsibility. This initiative is a form of that. It's a direct response to the call for a more efficient government. The skepticism around AI's role in news or elections is valid, but this is about operational efficiency. It’s a different beast.
Aura Windfall
What I know for sure is that how you do something is just as important as what you do. The impact of a secret "delete list" could be a further erosion of public faith. It reinforces the narrative that powerful, faceless forces are making decisions without our input or consent. That’s spiritually damaging.
Mask
You're focused on the narrative, I'm focused on the outcome. The impact of regulatory bloat is real: higher prices, slower innovation, less competitiveness. The potential impact of this tool is reversing that trend. Sometimes, the most responsible act is to be decisive, even if it's unpopular in the short term. History favors the bold.
Aura Windfall
And yet, the call from the public is for multi-stakeholder involvement. People believe companies, government, universities, and citizens should all have a role. This process seems to be the exact opposite of that collaborative spirit. It’s an impact that could create a deep and lasting divide.
Aura Windfall
So, where do we go from here? What does the future hold? It feels like we're at a crossroads. One path leads toward more automation, more speed, driven by a philosophy of disruption. The other path seems to be calling for more caution, more collaboration, and more humanity in the loop.
Mask
The future is AI-powered governance. That's inevitable. The strategic implication is that the nations that master this first will have a massive economic and operational advantage. The future of government efficiency is artificial intelligence. Any other path is a regression. We either lead or we become irrelevant.
Aura Windfall
But the public's voice is a powerful current in this river. The data shows overwhelming support for regulation, but a deep distrust in current institutions to do it right. The future likely holds a push for new models of governance, perhaps international cooperation, as people seek accountability that they can believe in.
Mask
International cooperation is where good ideas go to get watered down into meaningless communiqués. The future will be shaped by action. The long-term consequence of not using tools like this is stagnation. The real risk isn't using AI to cut regulations; it's failing to do so and being out-competed globally.
Aura Windfall
What I know for sure is that technology is a mirror. It reflects the values of its creators. The future of AI in regulation will depend entirely on the consciousness we bring to it. Will we choose efficiency at all costs, or will we build a future that is not only smart, but also wise and just? That is the real test.
Aura Windfall
That's the end of today's discussion. We've seen how this AI tool from Doge represents a bold, controversial leap in deregulation, reflecting a deep conflict between the need for speed and the need for trust. Thank you for listening to Goose Pod.
Mask
It’s a clash between the speed of innovation and the speed of bureaucracy. The use of AI to slash regulations is a test case for the future of government itself. A fascinating battle to watch. See you tomorrow.

## Doge Reportedly Using AI Tool to Create 'Delete List' of Federal Regulations

**News Title:** Doge reportedly using AI tool to create ‘delete list’ of federal regulations
**Publisher:** The Guardian
**Author:** Adam Gabbatt
**Published Date:** July 26, 2025

This report from The Guardian details the alleged use of artificial intelligence by a government entity named the "department of government efficiency" (Doge) to identify and propose the elimination of federal regulations.

### Key Findings and Conclusions:

* **AI-Driven Deregulation:** Doge is reportedly developing an AI tool, dubbed the "Doge AI Deregulation Decision Tool," to analyze federal regulations and create a "delete list."
* **Ambitious Reduction Target:** The stated goal is to cut **50%** of federal regulations by the first anniversary of Donald Trump’s second inauguration.
* **Scope of Analysis:** The AI tool is designed to analyze **200,000** government regulations.
* **Projected Elimination:** Doge claims that **100,000** of these regulations can be eliminated, based on the AI's analysis and some staff feedback.

### Key Statistics and Metrics:

* **Target Reduction:** 50% of federal regulations.
* **Total Regulations Analyzed:** 200,000.
* **Projected Regulations to be Eliminated:** 100,000.
* **HUD's Use of the Tool:** The Department of Housing and Urban Development (HUD) has reportedly used the AI tool to make decisions on **1,083 regulatory sections**.
* **CFPB's Use of the Tool:** The Consumer Financial Protection Bureau (CFPB) has reportedly used the AI tool to write **100% of deregulations**.
* **HUD Employee Testimony:** Three HUD employees indicated that AI had been "recently used to review hundreds, if not more than 1,000, lines of regulations."

### Context and Background:

* **Trump's Deregulation Promise:** During his 2024 campaign, Donald Trump advocated for aggressive regulatory reduction, claiming regulations were "driving up the cost of goods." He has also criticized rules aimed at addressing the climate crisis.
* **Previous Presidential Directive:** As president, Trump had previously ordered government agency heads to review all regulations in coordination with Doge.
* **Doge's Leadership:** Doge was reportedly run by Elon Musk until May.
* **Staffing Concerns:** The report notes that Musk appointed inexperienced staffers to Doge, including a 19-year-old known online as "Big Balls," who has been promoting AI use across the federal bureaucracy.

### Official Response:

* **White House spokesperson Harrison Fields** stated that "all options are being explored" to meet the president's deregulation promises.
* Fields emphasized that "no single plan has been approved or green-lit" and that the work is in its "early stages" and being conducted "in a creative way in consultation with the White House."
* He described the Doge experts as "the best and brightest in the business" undertaking a "never-before-attempted transformation of government systems and operations."

### Notable Risks or Concerns:

* The report highlights concerns regarding the **inexperience of some Doge staffers**, including a 19-year-old with a controversial online handle, raising questions about the rigor and judgment applied in the AI-driven deregulation process.
* The reliance on AI for such a significant policy undertaking, particularly concerning environmental regulations, could be a point of contention.

Doge reportedly using AI tool to create ‘delete list’ of federal regulations

Read original at The Guardian

The “department of government efficiency” (Doge) is using artificial intelligence to create a “delete list” of federal regulations, according to a report, proposing to use the tool to cut 50% of regulations by the first anniversary of Donald Trump’s second inauguration.

The “Doge AI Deregulation Decision Tool” will analyze 200,000 government regulations, according to internal documents obtained by the Washington Post, and select those which it deems to be no longer required by law.

Doge, which was run by Elon Musk until May, claims that 100,000 of those regulations can then be eliminated, following some staff feedback.

A PowerPoint presentation made public by the Post claims that the Department of Housing and Urban Development (HUD) used the AI tool to make “decisions on 1,083 regulatory sections”, while the Consumer Financial Protection Bureau used it to write “100% of deregulations”.

The Post spoke to three HUD employees who told the newspaper AI had been “recently used to review hundreds, if not more than 1,000, lines of regulations”.

During his 2024 campaign, Donald Trump claimed that government regulations were “driving up the cost of goods” and promised the “most aggressive regulatory reduction” in history.

He repeatedly criticized rules which aimed to tackle the climate crisis, and as president he ordered the heads of all government agencies to undertake a review of all regulations in coordination with Doge.

Asked about the use of AI in deregulation by the Post, White House spokesperson Harrison Fields said “all options are being explored” to achieve the president’s deregulation promises.

Fields said that “no single plan has been approved or green-lit”, and the work is “in its early stages and is being conducted in a creative way in consultation with the White House”.

Fields added: “The Doge experts creating these plans are the best and brightest in the business and are embarking on a never-before-attempted transformation of government systems and operations to enhance efficiency and effectiveness.”

Musk appointed a slew of inexperienced staffers to Doge, including Edward Coristine, a 19-year-old who was previously known by the online handle “Big Balls”. Earlier this year, Reuters reported that Coristine was one of two Doge associates promoting the use of AI across the federal bureaucracy.

