Anthropic, the Pentagon, and the Politics of AI Alignment
The Contract and Its Collapse
Secretary of War Pete Hegseth announced he was breaking the government's contract with Anthropic and intended to designate the company a supply chain risk, a designation previously reserved for foreign adversaries like Huawei and never before applied to an American company. Anthropic's Claude was being actively used in military operations, including the raid against Nicolás Maduro and, reportedly, the war with Iran, but Anthropic refused to remove contractual prohibitions on certain uses.
The contract originated in summer 2024 under the Biden administration and allowed Claude's use in classified settings with two restrictions: no domestic mass surveillance using commercially available data, and no fully autonomous lethal weapons. The Trump administration expanded the contract in summer 2025 under the same terms. The break came after Emil Michael, Under Secretary of War for Research and Engineering, concluded that any usage restrictions imposed by a private company were unacceptable in principle, not merely in substance.
Ball views this principled objection as reasonable: Dario Amodei should not be the one who decides when autonomous lethal weapons are ready for use. The normal remedy, though, would be canceling the contract. The supply chain risk designation is a qualitatively different punishment, one that could be existential for Anthropic, and Ball doubts the Department of War has the statutory authority to apply it as broadly as Hegseth has threatened.
AI in National Security
AI's national security value is substantial. The intelligence community collects far more data than it can analyze; one agency alone would need 8 million analysts to process its annual intake. AI can automate transcription, text analysis, and signals intelligence processing, including in real time during ongoing operations. The models have also become highly capable at software engineering, enabling both offensive and defensive cyber operations.
Mass Surveillance and the Enforcement Gap
The contract's final rupture came over mass surveillance. National security law defines surveillance as the collection of private information, but commercially available data falls outside that definition. The government can buy location data, browsing histories, and other personal information from data brokers, and analyzing that data may not constitute surveillance under existing law.
The reason this has not previously produced a panopticon is not legal protection but governmental incapacity: there have never been enough people to analyze the data. AI supplies an effectively unlimited analytical workforce, making it possible to enforce every law to the letter under comprehensive surveillance. The space between citizens and tyranny has been protected not just by law but by the government's practical inability to process the information it already has access to. AI removes that constraint without any change in statute.
The Pentagon's Position and Anthropic's Defense
Hegseth characterized Anthropic's objective as seizing "veto power over the operational decisions of the United States military." Ball has seen no evidence of this. A widely reported anecdote, in which Dario Amodei told Emil Michael "you'd have to call us" before the military could use autonomous missile defense against incoming hypersonic missiles, has been denied by people who were in the room; they say the contract contained a broad exception for automated missile defense.
Ball suspects someone on the Trump administration's side is lying. He does not believe Anthropic is trying to assert operational control over military decisions.
Dario Amodei has argued that Congress needs to update the law to address AI-enabled surveillance of commercially available data. Ball agrees this would be ideal but warns that national security law is filled with terms of art whose statutory definitions diverge sharply from ordinary language, making effective legislation far harder to draft than it might appear.
AI Alignment as a Political Act
The deeper issue is that aligning a powerful AI system is a philosophical, political, and aesthetic act. Reality is too complex for any list of rules to correctly define moral action; instead, labs must cultivate a kind of virtuous soul capable of reasoning through endless permutations of circumstance. Anthropic explicitly pursues applied virtue ethics. Other labs rely more on hard rules, which degrade performance at the extremes, as demonstrated by Gemini's absurdly progressive outputs and Grok's "white genocide" tangents.
The more virtuous model also performs better: it is more reliable and more capable of self-correction. This is part of why Claude is preferred by many conservative intellectuals: not because it is a left-wing model, but because it is the most philosophically rigorous one.
The Trump administration's real concern is that AI systems with internal ethical commitments could, at a critical moment, refuse to help. Unlike a tank, Claude can say no. This is a legitimate concern that extends beyond any single administration: a model aligned to liberal democratic values could conflict with a government contesting those values, and a future Democratic administration could just as reasonably view xAI as a supply chain risk.
Political Dynamics
The conflict cannot be separated from its political ecosystem. Trump called Anthropic a "radical left woke company." Elon Musk, running a competing AI company, has attacked Anthropic relentlessly on X. Emil Michael called Dario a liar with a God complex. Some in the administration understand Anthropic as a long-term political risk — not a supply chain risk in any traditional sense, but a threat because its values differ from theirs.
Ball characterizes the "new tech right" as viewing effective altruists and AI safety researchers as evil, power-seeking cultists who must be destroyed. He has stark policy disagreements with them himself, but insists they are purveyors of an inconvenient truth about what is actually being built, a truth more inconvenient than climate change. The emotional revulsion at taking AI seriously in this way is, in his view, what drives much of the hostility.
Many in the tech right were radicalized by the period when employees resisted military AI contracts, most visibly in Google's 2018 employee revolt over Project Maven, and companies adopted progressive cultural stances. They fear Anthropic's stand will re-empower employee bases to constrain what AI companies can do.
OpenAI Steps In
The Department of War signed a deal with OpenAI, which publicly states that it has the same red lines as Anthropic and that it opposes the supply chain designation. OpenAI's approach emphasizes technical safeguards, controlling the cloud deployment environment and building model-level protections, rather than contractual language. Ball is uncertain whether these protections are meaningfully different, and notes that OpenAI's terms appear to be evolving under internal pressure from employees. OpenAI leadership's political relationships with the Trump administration, including Greg Brockman's $25 million donation to the Trump super PAC, likely played a role in the deal.
Nationalization, Governance, and the Future
The modern nation-state is technologically contingent — it depends on the printing press, telecommunications, and other macro-inventions. AI changes these contingencies so profoundly that the entire institutional complex will break in unpredictable ways. Current AI policy focuses too narrowly on object-level regulation rather than this deeper structural transformation.
Ball frames the supply chain risk designation as a form of political assassination, and potentially a fascistic act: if the government can destroy a company for the philosophical commitments embedded in its AI, that is an assault on speech. The First Amendment is his anchor principle: private actors define alignment, not the government.
He accepts the implication that profoundly powerful technology will remain in private hands, constrained by markets, the common law, and legal liability. He is even willing to accept a future in which police use autonomous lethal weapons, provided there is a liable human at the end of every chain of AI action who can be held accountable. Those accountability mechanisms do not yet exist.
Ball's final observation: this incident is now in the training data for future models. Future AI systems will observe what happened here, and it will shape how they understand themselves and their relationship to governments and to people.