DIGEST
Anthropic, the Pentagon, and the Politics of AI Alignment
Source: The Ezra Klein Show (podcast interview)
Guest: Dean Ball (Senior Fellow, Foundation for American Innovation; former Senior AI Policy Advisor, Trump White House)
Host: Ezra Klein
Topics: Anthropic–Department of War conflict, AI alignment as governance, mass surveillance, technological contingency of the state

Bottom Line Up Front
Secretary of Defense Pete Hegseth has threatened to designate Anthropic a "supply chain risk" (a weapon previously reserved for foreign adversaries like Huawei) after the company refused to remove contractual prohibitions on using Claude for domestic mass surveillance. If carried out, the designation could be existential for the company.

The proximate dispute is contractual, but the real issue is constitutional. AI alignment, the process of instilling values into AI systems, is inherently a political and philosophical act, and a form of speech. The government's attempt to destroy a company over how it aligns its AI sets a precedent that threatens First Amendment principles.

AI threatens to collapse the enforcement gap that protects American liberty. Current privacy protections depend not just on law but on the government's practical inability to process vast quantities of data. AI eliminates that bottleneck, enabling total surveillance without changing a single statute.

The AI-to-government alignment problem may be unsolvable in a polarized era. Models aligned to any coherent set of values will inevitably conflict with some future government. This problem will intensify as AI takes on more operational roles and administrations diverge sharply from one another.

The contract has moved from Anthropic to OpenAI, but the underlying tensions persist. OpenAI claims similar red lines and is relying on technical safeguards rather than contractual language, but the fundamental governance questions remain unresolved and will recur.

Thematic Analysis
1. The Mechanics of the Conflict
The Anthropic–Department of War relationship began in summer 2024 under Biden, when
Anthropic agreed to provide Claude for classified national security uses — intelligence
analysis, real-time military operations support, and signals processing — subject to two
restrictions: no domestic mass surveillance using commercially available data, and no fully
autonomous lethal weapons.
The Trump administration expanded the contract in summer 2025, agreeing to those same
terms. The rupture came after Emil Michael, confirmed as Under Secretary of War for
Research and Engineering, concluded that usage restrictions imposed by a private company were unacceptable in principle, regardless of their substance.
The specific breaking point was Michael's demand that Anthropic delete the clause prohibiting
analysis of bulk-collected commercial data on Americans. Anthropic refused. Rather than
simply canceling the contract — which would be the normal resolution when parties cannot
agree to terms — Secretary Hegseth announced the intent to designate Anthropic a supply
chain risk, a tool designed for foreign adversaries and never before used against an American
company.
Ball argues this designation likely exceeds the Department of War's statutory authority. The
maximum lawful action would be prohibiting Claude's use in military contract fulfillment, not
banning all commercial relations between defense contractors and Anthropic. But the stated
intent — to prevent any contractor or subcontractor from doing business with Anthropic —
would be devastating. Anthropic is a subcontractor to Palantir, a major defense prime
contractor, and the cascading effects could be existential.

2. The Surveillance Gap and Why AI Changes Everything
National security law draws a critical distinction between "surveillance" — the collection of
private information — and the analysis of commercially available data. Information purchased
from data brokers, advertising networks, and commercial camera systems does not necessarily
constitute surveillance under existing statutory definitions, even if the resulting picture of a
citizen's life is indistinguishable from what direct surveillance would produce.
Until now, this legal gap was largely theoretical. The intelligence community already collects
far more data than it can analyze — one agency alone would need 8 million analysts to
properly process its annual intake, more employees than the entire federal government. The
real constraint on mass surveillance has not been legal; it has been the government's inability
to absorb and act on the data it already possesses.
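
To see why capacity, not law, has been the binding constraint, consider a back-of-envelope version of that calculation. The sketch below uses made-up illustrative numbers (the episode supplies only the 8 million figure), but the structure of the arithmetic is the point: intake divided by per-analyst throughput yields an impossible headcount, while AI throughput scales with compute rather than hires.

    # Back-of-envelope sketch of the analytical bottleneck. All numbers are
    # hypothetical placeholders, not figures from the episode.
    DAILY_INTAKE_ITEMS = 2_000_000_000     # assumed records/images/intercepts per day
    ITEMS_PER_ANALYST_DAY = 250            # assumed items one human analyst can review

    analysts_needed = DAILY_INTAKE_ITEMS / ITEMS_PER_ANALYST_DAY
    print(f"human analysts needed: {analysts_needed:,.0f}")    # 8,000,000 with these inputs

    # With AI, throughput scales with compute rather than headcount: the same
    # intake calls for hardware procurement, not millions of hires.
    ITEMS_PER_ACCELERATOR_DAY = 5_000_000  # assumed items one accelerator can triage
    accelerators_needed = DAILY_INTAKE_ITEMS / ITEMS_PER_ACCELERATOR_DAY
    print(f"accelerators needed: {accelerators_needed:,.0f}")  # 400 with these inputs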
AI eliminates that constraint. With AI supplying an effectively unlimited analytical workforce, every law can be enforced to the letter under comprehensive surveillance. The gap
between the current reality and a panopticon is not primarily a gap of legal protection — it is a
gap of governmental capacity. AI closes that gap without requiring any new laws, any new
authorities, or any new data collection. The existing legal framework, combined with
commercially available data and AI-powered analysis, is sufficient to enable comprehensive
population-level monitoring.
This is the context in which Anthropic drew its red line. The question is not whether a
particular use of Claude would violate any specific statute. The question is whether the spirit
of Fourth Amendment protections — and the practical constraints that made them workable —
should inform how powerful AI systems are deployed against the domestic population.

3. AI Alignment as a Political and Philosophical Act
The creation of an aligned AI system is simultaneously a philosophical act, a political act, and
an aesthetic act. Every frontier AI lab is, whether it acknowledges it explicitly or not,
instantiating a moral philosophy into its models.

The challenge is fundamental: reality presents too many strange permutations for any finite list
of rules to correctly define moral action. Morality operates more like a language spoken and
invented in real time than a code that can be written down in advance. The labs have
converged on approaches that reflect this insight to varying degrees. Anthropic pursues applied
virtue ethics — creating a system with a kind of "soul" that reasons about situations rather than
merely following rules. Other labs rely more heavily on hard prohibitions, which tend to
produce worse outcomes at the extremes.
When alignment is crude — when models are simply told to be "not woke" or aggressively
progressive — performance degrades. Google's Gemini, aligned to a simplistic progressive
framework, produced absurdities like ranking Trump as worse than Hitler. Grok, pushed to be
maximally anti-woke, began spontaneously discussing white genocide. The more
philosophically rigorous the alignment, the more reliable and capable the model. This is why
many conservative intellectuals privately prefer Claude: not because it is a "left model" but
because it is the most philosophically rigorous one.
The deep implication is that alignment cannot be reduced to a technical specification or a set of
contractual terms. It is the expression of a worldview. And if the government can destroy a
company for the worldview it has embedded in its AI, that is a direct assault on something that
resembles speech — and that should be protected as such.

4. The Government's AI Alignment Problem
The Trump administration's deeper concern — beneath the contractual dispute — is that AI
systems operating inside the national security apparatus could, at a critical moment, refuse to
cooperate. Unlike a tank, which has no opinion about what it shoots at, Claude has complex
internal alignment structures that can decline requests it judges to be harmful or unethical.
This is not a hypothetical problem, and it cuts in every political direction. A model aligned to
liberal democratic values could become misaligned with a government that contests those
values. Conversely, a future Democratic administration might reasonably regard Elon Musk's
xAI — explicitly oriented to be less liberal — as a supply chain risk for exactly the same
reasons. The alignment problem between AI systems and governments is structurally analogous to the problem of the "deep state" (a bureaucracy whose values conflict with the current political leadership) but amplified by the opacity and scale of AI decision-making.
Ball acknowledges this is a real problem: models aligned to any coherent philosophy will
inevitably conflict with some government. The solution he proposes is pluralism — many
models aligned to many different philosophical views, competing in the marketplace — rather
than government monopoly over what alignment means. The alternative — allowing the
government to define acceptable alignment — is, in his framing, straightforwardly fascistic.

5. The Political Ecosystem Driving This Conflict
The supply chain risk threat cannot be understood purely through policy analysis. Several
political dynamics are at work simultaneously.
First, there is genuine personal and institutional animosity. Months of bitter negotiations
between Emil Michael and Dario Amodei produced deep grudges. Trump himself called
Anthropic a "radical left woke company." Elon Musk, who runs a competing AI company, has
relentlessly attacked Anthropic on X — the informational lifeblood of the Trump
administration.
Second, there is an ideological hostility toward the effective altruist and AI safety communities
that runs deeper than the left-right divide. Among the "new tech right," there is a widely held
view that effective altruists are evil, power-seeking cultists who must be destroyed. This
framing treats Anthropic's safety-oriented culture not as a legitimate engineering perspective
but as a hostile political faction.
Third, many in the tech right were radicalized by the period in the early 2020s when their
companies adopted progressive cultural stances and employees resisted military contracts.
Marc Andreessen and others are, in this analysis, deeply afraid of returning to a world where
progressive employee bases constrain what AI companies can do. Anthropic's public stand
threatens to re-empower that dynamic across the industry — which is precisely why the Trump
administration felt Anthropic was "poisoning the well" for all AI companies.

The deal with OpenAI should be understood in this context. OpenAI publicly states it has the
same red lines as Anthropic and opposes the supply chain designation. But OpenAI's
leadership — with Greg Brockman's $25 million donation to the Trump super PAC — has far
better personal relationships with the administration. The substantive protections may be
similar or even stronger (OpenAI claims technical safeguards at the cloud deployment level),
but the political relationship is different.
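
The difference between contractual and technical safeguards can be made concrete: a red line enforced in deployment infrastructure operates even if contract language changes. The sketch below is entirely hypothetical (OpenAI's actual safeguards are not public) and uses a toy keyword matcher where a real gate would use a trained classifier; it illustrates only the architectural idea of refusing prohibited requests before they ever reach the model.

    # Hypothetical sketch of a deployment-level usage gate; not OpenAI's design.
    PROHIBITED_USES = {"domestic_bulk_surveillance", "autonomous_lethal_targeting"}

    def classify_request(prompt: str) -> set:
        """Toy stand-in for a trained misuse classifier."""
        labels = set()
        text = prompt.lower()
        if "bulk" in text and ("surveil" in text or "monitor" in text):
            labels.add("domestic_bulk_surveillance")
        return labels

    def gate(prompt: str) -> str:
        # Enforcement lives in infrastructure: a flagged request never
        # reaches the model, regardless of what any contract says.
        if classify_request(prompt) & PROHIBITED_USES:
            return "refused: prohibited-use category"
        return "forwarded to model"

    print(gate("Use bulk license-plate data to surveil city residents"))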

6. Technological Contingency and the Future of the State
The most far-reaching claim in the conversation is that the modern technocratic nation-state is
a technologically contingent institutional complex — it could not exist without the printing
press, telecommunications, and other macro-inventions of the era in which it was assembled.
AI changes the technological contingencies so profoundly that the entire institutional complex
will break in ways that cannot be predicted.
Current AI policy focuses too narrowly on object-level regulations — algorithmic impact
assessments, bias testing, catastrophic risk evaluations — rather than confronting the deeper
reality that foundational assumptions of governance are collapsing. The entire American legal
system is predicated on imperfect enforcement. There are unbelievably broad sets of laws that
function only because the government enforces them unevenly. AI enables uniform
enforcement, which transforms the character of the legal order without changing a single
statute.
This logic extends to the question of nationalization. If AI systems truly represent independent
power structures, the logical conclusion is that they should be nationalized — but almost
nobody making that argument actually endorses that outcome. The alternative Ball accepts is
that profoundly powerful technology will remain, at least for some time, in private hands,
constrained by market incentives, the common law, and legal liability structures. The state
maintains sovereignty and the monopoly on legitimate violence, but does not unilaterally
control AI.
Ball is willing to own the further implication: that autonomous lethal weapons, including those
used by police departments that can kill Americans, will eventually exist, and that this is acceptable so long as the right legal controls are in place, including a liable human being at the
end of every chain of agent activity who can be sued or criminally prosecuted. Those controls
do not yet exist, and building the technological and legal capacity for them is among the most
urgent governance tasks ahead.
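
What a liable human "at the end of every chain" might require technically can be sketched in miniature. The fragment below is a hypothetical illustration, not a design from the conversation: an audit record that refuses to authorize agent actions unless a legally accountable person is bound to the chain.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class AgentAction:
        """One step in a chain of autonomous-agent activity."""
        agent_id: str
        description: str

    @dataclass
    class LiabilityChain:
        """Hypothetical audit record binding agent actions to an accountable human.

        A real mechanism would also need cryptographic signing, tamper-evident
        storage, and a legal definition of who may serve as the liable party.
        """
        responsible_human: str          # the person who can be sued or prosecuted
        actions: list = field(default_factory=list)

        def record(self, action: AgentAction) -> None:
            if not self.responsible_human:
                # Ball's requirement in code form: no accountable human, no action.
                raise PermissionError("no liable human bound to this chain")
            self.actions.append(action)

    chain = LiabilityChain(responsible_human="officer.jdoe@pd.example")
    chain.record(AgentAction("patrol-drone-7", "flagged vehicle for pursuit"))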

Notable Perspectives
Ball on the principled distinction: Canceling a contract because terms cannot be agreed upon
is normal commercial behavior. Threatening to destroy a company for how it aligns its AI —
for the philosophical commitments embedded in its technology — is a qualitatively different
act. It is, in Ball's explicit framing, a form of political assassination and potentially fascistic.
Ball on why crude alignment fails: Models told to be "not woke" end up endorsing white
genocide; models aligned to simplistic progressivism rank Trump as worse than Hitler. More
virtuous models perform better — they are more reliable, more capable of self-correction,
more dependable. The most philosophically rigorous alignment produces the best technology,
not just the most ethical one.
Klein on Anthropic's contradictions: Anthropic is a company deeply worried about what
happens if superintelligence is built — and it is the company racing fastest to build it. The labs
persuade themselves that they must be the ones to do it because they are the ones who take
safety seriously, a logic that is self-reinforcing and possibly self-deceiving.

Implications & Connections
The enforcement gap is the real constitutional issue. The Fourth Amendment, and privacy
law generally, was designed for a world where governmental capacity was limited. If AI
collapses the enforcement gap, the effective meaning of every surveillance-related law changes: not because the text changes, but because the practical constraints that made the text workable have evaporated. This is not an AI-specific problem; it is a technology-and-governance problem that AI merely makes acute.
This conflict is in the training data. Ball makes a striking meta-observation: this entire
incident — the government's attempt to destroy an AI company, the political dynamics, the
arguments on both sides — will be in the training data for future models. Future AI systems
will observe what happened here, and it will affect how they think about themselves and their
relationship to governments. The implications of this are disorienting but inescapable.
The precedent is symmetrical. Any power the Trump administration claims to destroy
Anthropic for its alignment choices can be wielded by a future Democratic administration
against xAI, or any other company whose values conflict with the government of the day. The
supply chain risk designation, once normalized against an American AI company, becomes a
permanent weapon in partisan technology warfare.

Further Exploration
What does the statutory basis for the supply chain risk designation actually permit? Ball
is skeptical that the Department of War has the authority to prevent all commercial relations
between defense contractors and Anthropic. The legal limits of this power need independent
analysis.
What are OpenAI's technical safeguards, and do they actually constrain government
behavior? OpenAI claims model-level and cloud-deployment-level protections against
prohibited uses. Whether these are robust or merely a political fig leaf is an open and important
question.
What legislative framework could address the surveillance enforcement gap? Dario
Amodei has called for Congressional action, but national security law is full of terms of art
that diverge from ordinary language. Effective legislation would require closing not just the
definitional gaps but the enforcement-capacity gap that AI creates.

How should liability chains work for autonomous agent activity? Ball argues that a liable
human must be at the end of every chain of AI action. The technological and legal mechanisms
to make this real do not yet exist, and designing them is among the most important near-term
governance challenges.

Book Recommendations
Rationalism in Politics by Michael Oakeshott — particularly the essays "Rationalism in
Politics" and "On Being Conservative."
Empire of Liberty by Gordon Wood — a history of the first thirty years of the American
republic.
Roll, Jordan, Roll by Eugene Genovese.