Thursday, March 05, 2026

Your Marionette

Imagine you were Vladimir Putin or Xi Jinping and you woke up a year ago having magically been given command of puppet strings that control the White House.  Your explicit geopolitical goal is to undermine trust in the United States on the world stage.  You want to destroy the Western rules-based order that has preserved peace and security for 80 years, which allowed the US to triumph as an economic superpower and beacon of hope and innovation for the world.  What exactly would you do differently with your marionette other than enact the ever more reckless agenda that Donald Trump has pursued since he became president last year?

-- Garrett M. Graff (born 1981), American journalist and author, "We Are Witnessing the Self-Immolation of a Superpower" at wired.com (22 January 2026)

Wednesday, March 04, 2026

Dangerous For The Strong

The absence of any obstacle to the deployment of strength is dangerous for the strong themselves: passion takes precedence over reason.  "No power without limit can be legitimate," as Montesquieu wrote long ago.  Political wisdom does not consist in seeking only immediate victory, nor does it require systematic preference of "us" over "them."

-- Tzvetan Todorov (1939 - 2017), Bulgarian-French historian, philosopher, and essayist, Hope and Memory: Reflections on the Twentieth Century (2003), preface to the English edition (October 2002), p. xxi

Tuesday, March 03, 2026

Enough Immortality

I mean, if 10 years from now, when you are doing something quick and dirty, you suddenly visualize that I am looking over your shoulders and say to yourself "Dijkstra would not have liked this", well, that would be enough immortality for me.

-- Edsger Dijkstra (1930 - 2002), Dutch computer scientist, mathematician, software engineer, and essayist, "Introducing a course on calculi" (EWD 1213) (30 August 1995)

Monday, March 02, 2026

No Stupid Rules

America, regardless of what so-called international institutions say, is unleashing the most lethal and precise air power campaign in history.  B-2s, fighters, drones, missiles, and of course classified effects.  All on our terms with maximum authorities.  No stupid rules of engagement, no nation-building quagmire, no democracy building exercise, no politically correct wars.  We fight to win, and we don't waste time or lives.

-- Defense Secretary Pete Hegseth at a press briefing on the war with Iran (2 March 2026)

Friday, February 27, 2026

Craving For Black Magic

In short, I suggest that the programmer should continue to understand what he is doing, that his growing product remains firmly within his intellectual grip.  It is my sad experience that this suggestion is repulsive to the average experienced programmer, who clearly derives a major part of his professional excitement from not quite understanding what he is doing.  In this streamlined age, one of our most undernourished psychological needs is the craving for Black Magic and apparently the automatic computer can satisfy this need for the professional software engineer, who is secretly enthralled by the gigantic risks he takes in his daring irresponsibility.  For his frustrations I have no remedy.

-- Edsger Dijkstra (1930 - 2002), Dutch computer scientist, mathematician, software engineer, and essayist, "On the reliability of programs" (EWD 303)

[This reflects how I feel about software developed with the use of AI tools.  I'd like all of my software to flow directly through my fingers.  I don't want to debug code written by AI; I much prefer to debug code written by myself.  One often quickly recognizes the potential locus of a bug when one has one's product firmly in one's intellectual grip.]

Thursday, February 26, 2026

In Good Conscience

Anthropic understands that the Department of War, not private companies, makes military decisions.  We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.  Some uses are also simply outside the bounds of what today's technology can safely and reliably do.  Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

* Mass domestic surveillance.
* Fully autonomous weapons.

To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

The Department of War has stated they will only contract with AI companies who accede to "any lawful use" and remove safeguards in the cases mentioned above.  They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk" -- a label reserved for US adversaries, never before applied to an American company -- and to invoke the Defense Production Act to force the safeguards' removal.  These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

-- Dario Amodei, CEO of Anthropic, maker of the Claude AI, "Statement from Dario Amodei on our discussions with the Department of War" (26 February 2026)

Wednesday, February 25, 2026

Thank Goodness

Thank goodness we don't have only serious problems, but ridiculous ones as well.

-- Edsger Dijkstra (1930 - 2002), Dutch computer scientist, mathematician, software engineer, and essayist, "A Letter to My Old Friend Jonathan" (EWD 475), in Selected Writings on Computing: A Personal Perspective (1982), p. 101