The Smartest Minds in AI Just Learned the World's Most Valuable F-Word
Freedom. That's the F-word. And the smartest people in artificial intelligence are fighting for it right now. Over 30 employees from OpenAI and Google — including Google DeepMind chief scientist Jeff Dean — have filed an amicus brief in support of Anthropic's lawsuit against the Department of Defense. Their message is clear: if the government can punish an AI company for setting safety boundaries, no AI safety commitment is safe.
The amicus brief, filed on March 9, 2026, represents an extraordinary moment of cross-company solidarity in an industry known for fierce competition. These aren't company statements — the signatories filed in their personal capacity. But having the chief scientist of Google DeepMind and prominent researchers from OpenAI publicly backing Anthropic against the US government sends an unmistakable signal.
What the Amicus Brief Says
The brief is blunt. It argues that the Pentagon's designation of Anthropic as a supply-chain risk "introduces an unpredictability in [their] industry that undermines American innovation and competitiveness" and "chills professional debate on the benefits and risks of frontier AI systems."
Key signatories and their affiliations:
- **Jeff Dean** — Chief Scientist, Google DeepMind
- **Zhengdong Wang** — Researcher, Google DeepMind
- **Alexander Matt Turner** — Researcher, Google DeepMind
- **Noah Siegel** — Researcher, Google DeepMind
- **Gabriel Wu** — Researcher, OpenAI
- **Pamela Mishkin** — Researcher, OpenAI
- **Roman Novak** — Researcher, OpenAI
- **30+ additional researchers and engineers** from across the AI industry
The Core Argument
The brief makes a sophisticated case. It argues that in the absence of formal AI regulation — which Congress has yet to pass — the contractual and technological restrictions that AI companies place on their products represent "a vital safeguard against their catastrophic misuse." If the government can strip away those safeguards by punishing companies that impose them, the only protection against AI misuse disappears.
This argument gets at something fundamental about AI governance. Right now, AI safety is largely self-regulated. Companies set their own policies, publish their own safety research, and impose their own use restrictions. The Anthropic case asks: what happens when the government disagrees with those restrictions? If the answer is "the government wins," then AI safety is whatever the government says it is.
Why OpenAI and Google Employees Care
It might seem surprising that OpenAI and Google employees would support Anthropic, their direct competitor. But the amicus brief makes their motivation clear: "If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond."
Translation: if the government can do this to Anthropic, it can do it to anyone. OpenAI has its own military contracts through its work with Microsoft. Google has its own history of AI ethics controversies (remember the Project Maven backlash?). The precedent set in the Anthropic case affects every AI company that might want to set boundaries on military use.
The Bigger Movement
The amicus brief is part of a broader wave of industry support for Anthropic. OpenAI CEO Sam Altman posted on X that "enforcing the SCR designation on Anthropic would be very bad for our industry and our country." Multiple AI safety organizations have issued statements supporting Anthropic's position.
This level of industry solidarity is unusual in the AI wars. Companies that compete fiercely for talent, customers, and research breakthroughs are uniting around a single principle: AI companies must be able to set safety boundaries without government retaliation.
The F-word at the center of this fight isn't profanity. It's the foundation of responsible AI development: the freedom to say "no."