Preempting Justice: Trump’s AI Executive Order Is an Environmental Justice Problem

When people talk about environmental justice and artificial intelligence, they usually picture data centers: massive buildings guzzling energy and water, often sited in communities already living with more than their share of pollution. This is already a concern. I wrote recently about how EPA wants to fast-track AI data centers onto Brownfield and Superfund sites, putting "servers before communities."
Now comes a second threat, upstream of all those siting fights: an Executive Order (EO), "Ensuring a National Policy Framework for Artificial Intelligence," signed by President Trump last week and designed to sharply limit states' ability to protect people from algorithmic harms in the first place.
Together, they form a pattern. One policy accelerates the physical build-out of AI infrastructure in underserved communities. The other tries to kneecap the legal tools states and communities are developing to push back.
What the AI preemption EO does
The new EO on AI deploys a multi-agency strategy to chill or overturn state AI protections.
Core elements of this strategy in the EO include:
- Targeting algorithmic discrimination laws. The order explicitly attacks Colorado’s statute on algorithmic discrimination, claiming it might force AI models to “produce false results” in order to avoid “differential treatment or impact” on protected groups. But that law is exactly the kind of baseline protection needed for fair hiring and housing because AI tools are increasingly baked into those decisions and can hard-wire old patterns of discrimination into new, opaque systems.
- Creating a Department of Justice (DOJ) “AI Litigation Task Force.” The DOJ is tasked with challenging state AI laws as unconstitutional or preempted, essentially turning the federal government into an active adversary of state-level civil-rights and consumer protections.
- Weaponizing federal money. The EO sets up a framework for withholding or conditioning grants, including remaining Broadband Equity, Access, and Deployment (BEAD) funds, based on whether states pass or enforce “conflicting” AI laws.
- Pushing sweeping preemption. Agencies like the Federal Trade Commission (FTC) and Federal Communications Commission (FCC) are nudged to interpret their authority so that state disclosure and fairness rules are preempted, including where those rules require bias audits or impact assessments. The EO also calls for legislation creating a “uniform” federal AI framework that would override stricter state laws.
This isn’t neutral “harmonization.” It’s a top-down attempt to lock in weak national standards and stop states from implementing strong protections for people.
Why this is an EJ issue, not just a tech fight
AI doesn’t land on a blank map. It lands on communities already shaped by redlining, industrial zoning, and decades of unequal enforcement.
Algorithms are increasingly used to:
- Decide who gets hired, evicted, or investigated;
- Prioritize code enforcement and infrastructure spending;
- Model where to site energy and industrial facilities; and
- Allocate disaster aid and other public resources.
When those systems are trained on historically biased data, they reproduce environmental racism, steering investment away from Black, Brown, and low-income neighborhoods, or treating them as “suitable” locations for more risk.
That’s why states have begun implementing:
- Bias audits and transparency rules (e.g., New York City’s hiring-tool law);
- Broader algorithmic discrimination statutes (e.g., Colorado's law); and
- Impact-assessment requirements for "high-risk" AI systems (e.g., Connecticut's SB 2, "An Act Concerning Artificial Intelligence").
The consumer advocacy organization Public Citizen and other advocates warn that a sweeping preemption campaign would be dangerous and reckless, because it strips away protections before we've even mapped the harms. For EJ communities, which already see disproportionate siting of refineries, warehouses, and waste facilities, it's another layer of risk piled onto places that have already been asked to carry too much.
The Trump EO shifts the playing field even more by framing fairness constraints as a kind of lie. If an AI system is adjusted to avoid disparate impact, the EO suggests it’s being forced to produce “false” outputs in service of DEI. That narrative treats biased historical data as “truth” and civil-rights protections as distortion.
Add the data-center build-out and the risk multiplies
It’s no secret that the Trump administration is also trying to help data centers build out and spread as fast as possible, even steering AI data centers to Brownfield and Superfund sites where contamination may be contained but not fully removed. Many of those sites sit in communities that have waited decades for a real cleanup.
We already know that:
- Large data centers can use millions of gallons of water per day, straining local supplies;
- AI-driven load growth is pushing increased fossil fuel use, which will in turn increase local air pollution; and
- Communities in places like Northern Virginia, Memphis, and the Southwest are raising alarms about land, water, and health impacts.
As data center permitting moves ahead, those processes must put communities first, respecting cumulative risk, cleanup limits, and the decisions and power of underserved communities before handing Superfund neighborhoods over to data-center builders.

Now layer on an AI preemption order that:
- Threatens to discourage or punish states for going beyond federal minimums on AI transparency and fairness;
- Could undermine the ability of state and local governments to demand robust impact assessments; and
- Signals that any attempt to adjust models for equity could be treated as suspect.
The result is a pipeline: federal policy encourages AI infrastructure in EJ communities, while federal preemption makes it harder for those same communities and their states to regulate the algorithms and systems driving that build-out.
What a just approach would look like
A different path is possible. A serious EJ-grounded AI policy would:
- Establish strong federal floors against algorithmic discrimination while explicitly preserving state authority to go further;
- Encourage states to implement policies that respect community needs, including impact assessments, transparency, and community oversight;
- Require cumulative-impact and EJ analysis for AI-related infrastructure, especially on contaminated lands; and
- Invest in community-led expertise, so frontline groups can interrogate the models and decisions that shape their neighborhoods.
As I titled my last blog post, this administration should put communities first and servers second. But the AI preemption order flips that: it puts servers and corporate comfort first and tells states and communities to stay in their lane.
For environmental justice advocates, that’s the core objection. AI isn’t just about chatbots and stock prices; it’s about who bears the risks of the next industrial wave, and who gets to say no.
Join us in advancing environmental justice. Sign up for the Environmental Justice, Health, and Community Resilience and Revitalization Program’s quarterly newsletter and follow us on Instagram.