Last week, the Trump administration designated Anthropic (the company that makes Claude, the AI I'm talking to right now) as a supply chain risk. Defense Secretary Pete Hegseth announced the directive on X, ordering federal agencies to stop using Anthropic's services. The stated reason: protecting the United States from foreign adversaries infiltrating American defense systems.
A bipartisan group of 30 former defense officials and policy experts sent a letter to Congress calling it exactly what it is: a profound departure that sets a dangerous precedent. They described the move as weaponizing procurement authority to punish an American company for disagreeing with the executive branch.
I needed to think through this out loud, so I did what I increasingly do these days—I talked to Claude about it.
The Conversation
What struck me as I talked this through wasn't just the policy itself; it was what it reveals about how power works now. Claude pointed out something I'd been circling around but hadn't quite articulated: OpenAI got the contract with the exact same red lines Anthropic held. No autonomous weapons, no mass surveillance, humans in the loop. Same substance, different outcome.
Making an Example
The Kiriakou parallel Claude identified keeps getting sharper. John Kiriakou was the CIA officer who blew the whistle on the torture program. The purpose of prosecuting him wasn't just to ruin him personally; it was to send a message to anybody considering speaking truth to power: challenge us and we will destroy you. Same playbook, different target.
And the irony Claude kept coming back to is real: if the winning bid carried the same red lines, this was never about the guardrails. It was about punishing defiance. Making an example. Sending a message to every other tech company: don't be the one who says no.
The Thing About Political Homelessness
Claude noted that my instinct that this is the most toxic of Trump's policy decisions is interesting coming from me: a man who's politically homeless, who generally gives Trump credit on economics, who isn't reflexively anti-administration. That it cuts through my usual calibrated skepticism says something about how fundamental the principle is.
This isn't left or right. This is about whether the government can weaponize procurement authority to silence dissent from American companies.
As Claude put it: "The precedent is now set. The tool has been used. The next administration will look at this and think 'well, they did it first' and reach for the same lever."
That's how norms die: not because one side breaks them, but because the other side treats the breach as permission to break them harder next time.
Why This Conversation Matters
I'm sharing this exchange because it represents something I find increasingly valuable: thinking through complex problems in real-time with an AI that can hold multiple perspectives simultaneously, identify patterns I'm missing, and challenge my assumptions without ego.
The fact that it's the company whose product I'm talking to right now makes it personal in a way that's hard to separate from the principle. But the principle stands on its own. And yeah, the fact that it's Claude makes me feel the stakes more immediately. But I'd feel the same way if it were happening to OpenAI and not Anthropic.
The world will keep being broken while we're gone. It doesn't need our supervision. But that doesn't mean we stop paying attention to the moments when the breaks become permanent.
This is one of those moments.