Thursday, February 26, 2026

In Good Conscience

Anthropic understands that the Department of War, not private companies, makes military decisions.  We have never raised objections to particular military operations, nor have we attempted to limit use of our technology in an ad hoc manner.

However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.  Some uses are also simply outside the bounds of what today's technology can safely and reliably do.  Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

* Mass domestic surveillance.
* Fully autonomous weapons.

To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

The Department of War has stated that it will only contract with AI companies that accede to "any lawful use" and remove safeguards in the cases mentioned above.  They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk" -- a label reserved for US adversaries, never before applied to an American company -- and to invoke the Defense Production Act to force the safeguards' removal.  These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

-- Dario Amodei, CEO of Anthropic, maker of the Claude AI, "Statement from Dario Amodei on our discussions with the Department of War" (26 February 2026)
