Stoic Courage: Why Fear Is Part of the Point
On March 5 and 6, 2026, the Pentagon designated Anthropic (the company behind Claude) a “supply chain risk.” This was not a routine bureaucratic designation. It was the first time the United States government had ever applied that label to an American company. The reason: Anthropic had refused to allow its AI to be used for autonomous weapons systems and domestic mass surveillance.
Within days, more than thirty engineers and researchers from OpenAI and Google DeepMind filed public statements supporting Anthropic’s position. Something unusual was happening. A company was holding a line against enormous government pressure, and its competitors were backing it.
How you feel about Anthropic’s AI policy, or about any particular weapons application, is a separate question from the one I want to look at here. Because underneath the specifics of this standoff is a situation the Stoics understood with unusual precision: an actor holding to stated principles when the cost of doing so has become very real and the outcome is entirely outside their control.
The Stoics had a framework for this. It’s the most important one they developed. And watching this play out, I keep thinking about how clearly it maps.
The Quick Version
The Stoic dichotomy of control says Anthropic controls one thing (its principles) and doesn't control the government's response. That distinction between what's yours and what isn't is not passive acceptance. It's the precise philosophical structure that makes moral courage possible. Without it, holding the line collapses into either paralysis or self-destruction.
Epictetus opens the Enchiridion with what sounds like a simple claim:
“Some things are in our control and others not.”
He wasn’t making a point about productivity or stress management. He was describing the structure of moral agency. The things in our control, he says, are “opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions.” The things not in our control include reputation, command, and whatever depends on someone else’s judgment.
The Pentagon designation is a striking example of the second category. Anthropic can’t control how the government classifies them. They can’t control whether other government contracts dry up. They can’t control public perception, competitor responses, or how investors react to the pressure.
What they can control is whether they hold their stated position. That’s it. That’s the whole list.
This sounds reductive, but Epictetus was making a specific point that’s easy to miss. He wasn’t saying that external outcomes don’t matter. He was saying that if you locate your commitment inside the category of things you control (your principles, your choices, your character), then external pressure, however severe, doesn’t reach the thing you’re actually protecting.
If Anthropic’s real concern is what the Pentagon thinks of them, they’ve already lost before making any decision. But if the concern is whether they’re acting consistently with stated values about preventing autonomous weapons and domestic surveillance, the Pentagon designation doesn’t touch that. The designation is in the second column. The decision is in the first.
The Stoic concept of kathêkon (appropriate action arising from your specific role and context) is usually applied to individuals. But it applies equally to organizations that have made explicit commitments about what they will and won’t do.
Anthropic published an Acceptable Use Policy. They’ve been public about refusing certain military applications. Their employees signed on to a company with stated ethics around these questions. The kathêkon for an institution in that position, when the pressure arrives, is relatively clear: do what you said you’d do.
This is harder than it sounds, because the pressure isn’t abstract. The Pentagon designation creates real downstream effects: contracts reconsidered, regulatory scrutiny intensified. The cost of consistency is now visible.
The Stoics would say this is precisely when kathêkon does its most important work. It’s easy to have principles when the price hasn’t been named. Appropriate action under pressure is the only version that actually means anything.
Marcus Aurelius kept a version of this thought in front of himself throughout his reign. He governed an empire while knowing that his son Commodus would likely undo much of what he built. He watched advisors compromise and corruption persist at the court he presided over. His repeated note to himself in Meditations: Do what nature requires. What is required of this role, today, regardless of outcome.
A company saying “we won’t let Claude be used for autonomous weapons” and then maintaining that when the Pentagon pushes back is doing the same structural thing on a different scale. The policy exists because the situation was anticipated. Kathêkon asks whether you act consistently with the role you’ve defined for yourself.
Here’s the piece that’s easiest to miss in the Anthropic situation, and also the most philosophically interesting.
The reserve clause is the Stoic answer to a specific kind of paralysis. Marcus Aurelius described it as acting on your intentions “with a reserve clause, for fate to prevent.” The paralysis it answers comes from thinking: if I hold this position and it costs me, the cost is mine to bear, and I might not survive it.
That paralysis is reasonable. It’s based on accurate information. Holding a line against the Pentagon might cost Anthropic significant government business. It might trigger ongoing regulatory friction. It might hurt in ways that aren’t yet visible.
The reserve clause doesn’t deny any of this. It says: commit fully to the action that your values require, and acknowledge that the outcome is only partly yours to determine. This isn’t resignation. The archer trains, takes aim, and releases with full effort. Whether the wind shifts isn’t the archer’s to control. Committing to the shot without attachment to where it lands isn’t passivity. It’s the only structure that makes courageous action stable over time.
Thirty-plus researchers from OpenAI and Google DeepMind filing statements of support is a meaningful data point here. They were, in that action, applying something like the reserve clause themselves: taking a position in public whose consequences they couldn’t fully predict, because the values in question mattered more than managing the outcome.
That’s what moral courage looks like when the Stoic framework is working. Not “I know this will be fine.” Not “I’ve calculated the odds and they’re acceptable.” Just: this is what the position requires, and I’m holding it.
The failure mode the Stoics described most carefully wasn’t cowardice. It was something subtler: letting the categories bleed.
An institution that starts making principled decisions based on what the most powerful actor in the room wants them to decide hasn’t exactly abandoned its principles. It’s done something more insidious: it’s redefined its principles as “we make the decisions that powerful parties accept.” The values are still there in the press releases. The actual decision-making structure has moved.
Epictetus had a word for this kind of confusion. He described people who spend their lives trying to make the “not in our control” column into the “in our control” column, pursuing reputation, command, external approval as though these were the actual goods, as people who have gotten their categories badly wrong. Not because reputation doesn’t matter, but because making it the thing you optimize for breaks your ability to actually achieve the things that matter.
An AI company that optimizes for avoiding government friction will make different products than a company that optimizes for the values it says drive its decisions. These two paths diverge slowly and then quickly. The Stoic framework says the moment of divergence is always this one: when the pressure is real and the cost is named, which column are you actually operating from?
I want to be direct about what the Stoic framework doesn’t answer here.
It doesn’t tell you whether Anthropic’s specific policy decisions are correct. Whether the right place to draw the line on military AI applications is where they’ve drawn it is a hard empirical and ethical question. Reasonable people studying AI weapons policy disagree.
The Stoic framework is a framework for how to hold an ethical position under pressure, not for identifying which ethical positions are right. These are different questions, and conflating them is how “Stoicism means staying strong” becomes a tool for justifying bad positions with great consistency.
What the framework does offer is a way to evaluate the structure of moral decision-making in situations like this. An organization that holds its stated principles under real pressure, without knowing the outcome, is doing something structurally different from an organization that announces principles and walks them back when the cost materializes. Whether the principles themselves are right is a separate audit. The structural integrity of the process is visible regardless.
For what the Stoics said about the limits of philosophical frameworks when material stakes are this high, the post on Stoicism’s limits in war philosophy works through this question directly, including when philosophical frameworks break down and something harder is required.
This won’t be the last time an AI company faces government pressure to expand the applications of its technology in directions its stated ethics have ruled out. The Anthropic situation is early and visible, but the structural dynamic it represents will recur.
What the Stoic dichotomy offers isn’t just a framework for reading this particular situation. It’s a question to ask of any organization (or person) navigating genuine pressure to abandon stated principles: have they sorted out what’s in their column and what isn’t?
An institution that confuses reputational management with principled action will eventually compromise. Not because the people inside are bad, but because when the categories are confused, the pressure on the second column feels like a threat to the first. That confusion is what the dichotomy of control is designed to correct.
And the thirty-plus researchers who filed statements probably did something similar. They couldn’t control how Anthropic’s situation resolved. They could control whether they said what they thought was true. The Stoics would call that kathêkon applied at individual scale in an institutional moment.
This dynamic isn’t limited to tech companies. Most of us, in our own lives and roles, encounter smaller versions of the same pressure: situations where holding a stated value creates friction, and where the cost of consistency suddenly becomes visible.
Try this: Write down a commitment you’ve made that’s currently under some kind of pressure. It doesn’t have to be dramatic. A commitment to honest feedback, to a certain kind of work, to a relationship boundary, to how you spend your time.
Now sort it. What’s in your column: whether you act consistently with the commitment. What’s not: how the other person responds, what it costs you reputationally, whether it “works out.”
Then ask: am I making the decision from the first column or the second? The Stoics would say the answer to that question determines the character of the action, regardless of outcome.
The post on the Stoic framework for political chaos covers kathêkon in more depth, specifically how the concept applies when collective stakes are high and individual power is limited. And if the relationship between principles and external pressure interests you philosophically, the Marcus Aurelius vs. Seneca comparison is worth reading: someone who governed under pressure alongside someone who advised power from inside it. Both Stoics. Very different situations. The framework held for both, and failed differently for each.
Epictetus’s Enchiridion is still the fastest way into the dichotomy of control: it’s fewer than twenty pages, and most of it is still startling. Ryan Holiday’s The Obstacle Is the Way applies these frameworks to contemporary pressure situations, including institutional ones.
And for how this connects to the broader question of AI ethics and identity (what it means for an AI company to have stated values at all), the Stoic framework post on AI displacing purpose covers the underlying tension from a different angle.
The Anthropic situation will resolve in whatever way it resolves. The Pentagon will decide what it decides. What won’t change is whether the decision to hold the line was made from the right column. That part already happened.
The Stoic framework helps clarify the structure of ethical decision-making under pressure. It doesn’t tell you which ethical positions are correct. That’s harder work. If you’re facing real institutional pressure around a principled commitment, talking it through with people you trust matters more than any philosophical framework alone.