A New Era of American AI: The Executive Order on “Preventing Woke AI” & Its Place in the 2025 U.S. AI Action Plan
- Carolina Milanesi
- Aug 3
- 6 min read
Artificial intelligence has rapidly moved to the heart of American policy, business, and public debate. On July 23, 2025, President Donald Trump issued an executive order titled “Preventing Woke AI in the Federal Government,” a cornerstone of a wider White House push to cement U.S. leadership in artificial intelligence. Paired with the newly released “America’s AI Action Plan,” these moves chart a path for AI that prioritizes innovation, national security, and what the administration sees as American values, while igniting controversy over the cost of sidelining regulation and social inclusion.
The Executive Order: What Does “Preventing Woke AI” Actually Do?
The executive order is premised on the belief that so-called “woke” ideologies, principally diversity, equity, and inclusion (DEI), along with related concepts, contaminate AI with ideological bias. The text frames DEI as an existential risk to “truthful” and “reliable” AI, citing examples of prominent language models and image generators changing the apparent race or gender of historical figures, or responding to politically charged prompts in ways the administration deems biased.
The order applies only to U.S. federal procurement of large language models (LLMs). It sets two foundational mandates for models purchased or built by federal agencies:
· Truth-seeking: LLMs must prioritize objectivity, historical accuracy, and scientific inquiry. Where information is uncertain or contradictory, this should be acknowledged.
· Ideological Neutrality: LLMs must not “manipulate responses in favor of ideological dogmas such as DEI.” Any inherent ideological judgments must either reflect the end user's request or be fully disclosed in documentation.
Agencies are instructed to include these requirements in all new contracts for LLMs and, where possible, update existing contracts. Crucially, vendors found noncompliant may be held responsible for decommissioning costs if contracts are terminated due to violations.
Context: “America’s AI Action Plan” and the Shift in U.S. AI Policy
The executive order forms part of the White House’s sweeping “America’s AI Action Plan,” which represents a fundamental shift from the risk- and rights-focused approach of the previous administration. The new strategy can be summarized in three pillars:
· Accelerate AI Innovation: Remove what are seen as regulatory shackles, rolling back many existing rules and incentivizing states to do the same. Open-source AI is encouraged, and the plan calls for stripping out references to DEI, misinformation, and climate change from federal AI risk guidance.
· Build American AI Infrastructure: Fast-track the development of data centers, power grid upgrades, and domestic semiconductor manufacturing—seen as essential to leading the AI race.
· Lead in International AI Diplomacy and Security: Strengthen export controls, bolster alliances with friendly nations, and enforce stricter security standards to keep AI innovation from falling into adversarial hands.
The plan generally views regulation as a threat to global competitiveness, urging American “leadership” through innovation, investment, and federal support rather than through policing AI’s social or ethical impacts.
China’s Playbook
China has long followed a similar playbook in its pursuit of AI leadership.
China’s AI strategy has consistently emphasized government-backed incentives, heavy investment in domestic startups, and streamlined regulation to boost technological innovation. The Chinese central government and provincial authorities have established multi-billion-dollar industrial funds, offered significant tax perks, and created national AI centers, all with the explicit objective of becoming the world’s AI leader by 2030. Initiatives like the “AI+” program aim to weave artificial intelligence into every sector of the Chinese economy, integrating cutting-edge technology into manufacturing, healthcare, and public services.
This state-led, innovation-first approach has enabled China to surge ahead in certain capabilities, particularly in making efficient use of domestic resources and scaling “practical” applications. AI ethics and inclusivity, while mentioned in government documents and declarations, are often interpreted through the lens of collective social harmony, economic development, and national strategic goals rather than individual rights or demographic representation.
However, it is vital to recognize that while the U.S.'s new direction echoes aspects of China’s model, profound differences remain. America’s socioeconomic makeup and political values are rooted in pluralism, the protection of individual rights, and the inclusion of diverse perspectives. China’s population is far more homogeneous, and its governance tradition prioritizes national unity and economic performance over dissent or minority protections. In China, the focus is often on “upgrading all industries,” preventing regional disparities, and serving collective development rather than enforcing fairness for specific groups or ensuring open contestation of ideas.
The comparison highlights a paradox in the U.S. “America First” strategy: while it leverages deregulatory tactics similar to China’s to chase speed and global dominance, the risks and tradeoffs, particularly those related to ethics, inclusion, and historic inequity, are ultimately shaped by very different social realities. The challenge for America is to accelerate innovation without sacrificing the principles that distinguish its democracy from a state-directed digital future.
The Private Sector and International Implications
Predictably, many tech leaders have voiced support for an “America First” approach that downplays regulation and places the U.S. at the forefront of AI development, arguing that overregulation stifles innovation and cedes ground to competitors like China. Federal priorities (open-source development, minimal regulation, and a high bar for government intervention) will likely create short-term advantages for major U.S. firms seeking to dominate globally.
However, the reality is that the line between government and private sector AI is porous. Most major AI vendors design models that are sold both commercially and to government agencies. While it is technically feasible for a company to develop and operate separate models for federal and commercial use, the high cost, technical difficulty, and risk of error make it impractical for most. In response to unique government requirements, like the recent executive order, major vendors are more likely to conform their primary models to government standards, indirectly shaping the wider market rather than truly maintaining two distinct model ecosystems.
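To see why a true fork is so costly, consider a minimal sketch of what maintaining separate federal and commercial deployments would imply at the serving layer. Everything here is hypothetical: the variant names, checkpoints, and policy paths are illustrative, not any vendor’s actual architecture.

```python
# Hypothetical sketch: what maintaining "two distinct model ecosystems"
# implies at the serving layer. All names and paths are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelVariant:
    weights: str        # model checkpoint to deploy
    system_policy: str  # behavioral policy applied at inference time
    eval_suite: str     # compliance tests this variant must pass

VARIANTS = {
    # Commercial deployment: the vendor's standard behavioral policy.
    "commercial": ModelVariant(
        weights="llm-v4",
        system_policy="policies/standard.md",
        eval_suite="evals/standard/",
    ),
    # Federal deployment: a separate policy and compliance suite keyed to
    # the executive order's "truth-seeking" and "neutrality" mandates.
    "federal": ModelVariant(
        weights="llm-v4-fed",
        system_policy="policies/eo_neutrality.md",
        eval_suite="evals/federal/",
    ),
}

def route(tenant: str) -> ModelVariant:
    """Pick the variant for a tenant. Every divergence between the two
    entries above doubles training, red-teaming, and regression work."""
    return VARIANTS[tenant]

print(route("federal"))
```

Multiply that duplication across checkpoints, safety evaluations, and release cycles, and conforming a single primary model to the stricter standard quickly becomes the cheaper path, which is exactly how government requirements end up shaping the commercial market.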
Internationally, the administration may find its values diverge from those in Europe, Canada, and beyond, where a strong emphasis remains on inclusion, combating bias, and recognizing historical injustice within AI systems. Outside the U.S., it is likely that local and regional models that reflect these values will proliferate, while global AI vendors (and their American clients) must continue grappling with regulatory complexity and divergent cultural expectations.
Raising Concerns: The Risks of Deregulation and Excluding “Woke” Ideologies
While tech leaders may cheer the current administration’s “America First” deregulation and innovation-centric approach, these policies also shift critical decisions about inclusion, bias, and safety further away from public debate and expert review. When the primary decision-makers lack the technical background to probe, question, and foresee risks, oversight becomes reactive at best and dangerously ignorant at worst.
Against this backdrop, the executive order’s framing, which removes considerations of diversity, equity, and broader social context from government AI, carries particular peril. History shows that technologies designed without an awareness of inherited bias, social context, and the risk of exclusion routinely reinforce entrenched inequalities. By positioning “neutrality” as the supreme goal and barring federal AI tools from reflecting or even recognizing concepts like systemic bias or historic inequity, the order risks cementing existing injustices as “neutral.”
Ultimately, if policymakers lack the expertise to responsibly shape AI’s trajectory, the future of these systems could be determined by the loudest politics or the priorities of private industry, not by thoughtful, informed debate about what protections and values society needs. As the global race for AI leadership accelerates, bridging this expertise gap is as important as any specific regulatory stance, because the choices made today will echo across generations in the capabilities, biases, and limitations of tomorrow’s AI.
And this is the heart of my concern—one that should resonate regardless of where anyone stands on the debate over “woke” policies. Beneath the rhetoric and political posturing, everyone ought to be troubled by the prospect of powerful technologies being shaped by mandates with far-reaching societal consequences, especially when the process lacks expertise, transparency, or genuine debate. No matter your position in the culture wars, the risks inherent in removing checks and balances from AI design and deployment, and boxing out diverse perspectives, are ones that should give all of us pause.
What many miss in this debate is a risk rooted deep in the very way generative AI systems operate. Unlike traditional software, generative AI draws inferences, produces content, and makes recommendations with a significant degree of autonomy. Its decisions are often the product of complex, opaque processes that can elude human understanding, even among the system’s developers. This opacity means that when generative AI makes a biased or harmful decision, identifying its origins and rectifying the error is not just hard; it can be nearly impossible.
Consider the example of social security benefit decisions: if a generative AI model, trained on historical data that includes unaddressed racial, gender, or socioeconomic biases, begins making recommendations that systematically disadvantage certain groups, those decisions can go unchallenged or even unnoticed for years. The repercussions extend far beyond a single program. AI-generated determinations about social security eligibility can directly affect a person’s financial health, access to social programs, or even influence their eligibility for critical health care and housing support. When such systemic biases go uncorrected, they are amplified and replicated across other interconnected government or financial systems, entrenching inequality rather than alleviating it.
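To make that failure mode concrete, here is a minimal synthetic sketch, assuming scikit-learn and NumPy; the data, the feature names, and the “zip code” proxy are all hypothetical, not drawn from any real benefits system:

```python
# Hypothetical illustration: a classifier trained on historically biased
# benefit decisions reproduces the bias without any explicit instruction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: "need" is the legitimate signal; "group" is a
# demographic attribute with no bearing on actual need.
need = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # 0 = majority, 1 = minority group

# Historical labels: past adjudicators approved based on need but applied
# an extra penalty to the minority group (the inherited bias).
hist_score = need - 0.8 * group + rng.normal(scale=0.5, size=n)
approved = (hist_score > 0).astype(int)

# Train on facially neutral features only; "group" still leaks in through
# a correlated proxy (here, a synthetic zip-code signal), as it often
# does in real administrative data.
zip_proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([need, zip_proxy])
model = LogisticRegression().fit(X, approved)

# The learned model recreates the historical disparity for equally needy
# applicants: same need, different predicted approval rates.
test_need = np.zeros(1_000)
for g in (0, 1):
    test_zip = g + rng.normal(scale=0.3, size=1_000)
    rate = model.predict(np.column_stack([test_need, test_zip])).mean()
    print(f"approval rate, group {g}: {rate:.1%}")
```

Run as-is, the sketch prints a sharply lower approval rate for group 1 even though every test applicant has identical need. Nothing in the pipeline names a protected attribute; the bias rides in on a correlated proxy, which is precisely why it can operate unnoticed for years.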
As America pursues an innovation-first strategy in artificial intelligence reminiscent of approaches seen in China, it faces a unique crossroads. Rapid advancement must not come at the expense of hard-won, and yet fragile, values of inclusion, open discourse, and ethical responsibility. Ultimately, the story of AI in the years to come will be defined not just by technological breakthroughs, but by the wisdom, and caution, with which society chooses to deploy them. In such consequential times, the real challenge is ensuring that decisions affecting our technological future are shaped by informed, balanced, and principled hands.