Trump’s Anti-Woke AI Policy Jeopardizes Patients’ Lives

November 14, 2025

[Photo: President Trump delivers remarks at an AI summit in Washington, DC.]

On July 23, President Donald Trump signed an executive order titled “Preventing Woke AI in the Federal Government.” It is the latest move in an ongoing political culture war: an effort to discredit diversity, equity, and inclusion (DEI) and to roll back work addressing systemic racism in federal artificial intelligence systems.

But for medical professionals, especially those working in health equity, this is more than political theater. The order endangers lives. It threatens years of work spent identifying and correcting the built-in biases that have long shortchanged marginalized patients, Black Americans most of all.

AI is everywhere in medicine. It already triages patients in emergency rooms, schedules follow-up care, and predicts disease risk. But these algorithms are not built on neutral ground. They are trained on real-world data. And that data is anything but impartial.

Getting precision in medicine right

A well-known example comes from a 2019 study in Science by researchers at UC Berkeley and the University of Chicago. They examined a widely used commercial healthcare algorithm designed to flag patients for high-risk care management. On its face, the algorithm looked objective and data-driven. But the researchers found it was not measuring clinical need at all. It was quietly relying on a proxy: how much money had previously been spent on a patient’s care.

Because less is spent on Black patients than on white patients with the same level of need, the spending proxy led the algorithm to sharply understate how sick Black patients were. Roughly 46.5% of Black patients should have been flagged for additional care; the algorithm flagged only 17.7%. That is more than a statistical footnote. It is a system trained to look away.
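
A small simulation makes the mechanism concrete. The sketch below uses entirely synthetic data, not the study’s; the one assumption, borrowed from the study’s central finding, is that the same level of clinical need generates less recorded spending for Black patients. The 0.7 spending ratio, the 3% flagging cutoff, and the gamma-distributed need scores are all illustrative choices, not figures from the paper.

```python
# Synthetic illustration of proxy-label bias, in the spirit of the
# 2019 Science study. All numbers here are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true clinical need (unseen by the model)
black = rng.integers(0, 2, size=n).astype(bool)  # two equal-sized groups

# Core assumption from the study's finding: equal need produces less
# recorded spending for Black patients (0.7 is an illustrative ratio).
spending = need * np.where(black, 0.7, 1.0) + rng.normal(0.0, 0.1, n)

# The "algorithm": rank patients by (perfectly predicted) spending and
# flag the top 3% for high-risk care management.
cutoff = np.quantile(spending, 0.97)
flagged = spending >= cutoff

for label, mask in (("white", ~black), ("Black", black)):
    print(f"{label}: {flagged[mask].mean():.2%} flagged, "
          f"mean need of flagged = {need[mask & flagged].mean():.2f}")
```

On this synthetic data, the Black group’s flagged share comes out far lower, and the Black patients who are flagged are, on average, sicker than their flagged white counterparts: the same signature the study documented in a production system.
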
This case is not unique. Two more race-adjusted algorithms remain in clinical use:

Kidney function, estimated with glomerular filtration rate (GFR) equations, long included a “correction factor” for Black patients, rooted in unscientific assumptions about muscle mass. Studies have repeatedly shown that the adjustment artificially inflated kidney-function scores, delaying specialty referrals and keeping many Black patients off transplant waitlists (see the numeric sketch after this list).

And pulmonary function tests (PFTs), used to diagnose asthma and other lung diseases, often apply a race-specific adjustment that assumes Black patients inherently have smaller lung capacity, lowering diagnostic thresholds and contributing to underdiagnosis.
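
To see how a race multiplier moves patients across real clinical thresholds, here is a minimal sketch. The MDRD coefficients are the published values for that now-retired eGFR equation; the 0.88 pulmonary factor is a hypothetical stand-in for the roughly 10 to 15 percent lower predicted lung volumes that older race-specific reference equations produced; and the eGFR-of-20 waitlisting cutoff and 80%-of-predicted PFT threshold are the commonly cited conventions, not any specific hospital’s policy.

```python
# How race "corrections" shift patients across clinical cutoffs.
# MDRD coefficients are the published (now-retired) values; the PFT
# race factor of 0.88 is an illustrative stand-in.

def egfr_mdrd(scr_mg_dl, age, female, black):
    """Estimated GFR (mL/min/1.73 m^2) under the retired MDRD equation."""
    egfr = 175.0 * scr_mg_dl**-1.154 * age**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212  # the race "correction": +21% for identical labs
    return egfr

def fev1_percent_predicted(measured_l, predicted_l, black, race_factor=0.88):
    """Percent of predicted FEV1; race-specific equations shrink the denominator."""
    if black:
        predicted_l *= race_factor  # assumes "normal" lungs are smaller
    return 100.0 * measured_l / predicted_l

# Identical creatinine, age, and sex; only the race flag differs.
print(egfr_mdrd(3.5, 60, female=False, black=False))  # ~18: under the ~20 cutoff, eligible for transplant waitlisting
print(egfr_mdrd(3.5, 60, female=False, black=True))   # ~22: same labs, no longer eligible

# Identical measured FEV1 against the same baseline prediction.
print(fev1_percent_predicted(2.6, 3.4, black=False))  # ~76%: abnormal (below 80% of predicted)
print(fev1_percent_predicted(2.6, 3.4, black=True))   # ~87%: read as "normal"
```

In both cases the patient’s physiology is identical; only a hard-coded race flag changes whether they qualify for care.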

These are not relics of the past. They are examples of how racial bias gets written into software code. Silently, at scale, and lethally.

In recent years, clinicians and researchers, myself included, have pushed back. Many hospitals are removing race adjustments from clinical formulas. Equity-focused AI tools are being built to detect and reduce disparities rather than ignore them. This work is not about being “woke.” It is about accuracy, better patient outcomes, and saving lives.

The perils of Trump’s anti-woke culture war

Trump’s executive order threatens the hard-won progress in making medical algorithms more accurate.

By barring federal agencies from accounting for systemic racism or equity in AI development, the order effectively outlaws the very work needed to fix these problems. It silences data scientists trying to build something fairer. It says that naming inequality is more offensive than letting it persist.

Supporters of the order say it promotes “neutrality.” But neutrality in a system built on inequity is not fairness. It entrenches the very biases it claims to ignore.

The danger is not hypothetical. Black patients are already less likely to receive certain treatments, more likely to suffer worse outcomes, and more likely to die of preventable disease. Ethically built AI could help surface those disparities earlier. But only if we are allowed to build it that way.

And AI bias harms more than Black communities. Research has shown that facial recognition systems misidentify women and people of color at far higher rates than white men. One hiring algorithm systematically downgraded résumés from women. A healthcare application underestimated cardiovascular risk in women because women had historically been underdiagnosed in the underlying data. This is how disparity replicates itself: biased inputs become automated decisions, applied without scrutiny or context.

Stripping DEI from AI is not neutrality. It is selective amnesia. It is an attempt to erase the vocabulary needed even to name the problem, let alone fix it. If we force AI to ignore history, it will rewrite it, not just for the data points but for the people those data points represent.

Trump’s executive order turns AI into a political weapon. And for the countless Americans already unseen by our legal, medical, and technological systems, the final cost will be measured in human lives.