Thursday, March 19, 2026

Promptware Kill Chain

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat.  Yet discussions around these attacks and their potential defenses are dangerously myopic.  The dominant narrative focuses on "prompt injection," a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity.  This term suggests a simple, singular vulnerability.  This framing obscures a more complex and dangerous reality.  Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term "promptware."  In a new paper, we, the authors, propose a structured seven-step "promptware kill chain" to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape.

The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command & control, lateral movement, action on objective

The kill chain was already demonstrated.  For example, in the research "Invitation Is All You Need," attackers achieved initial access by embedding a malicious prompt in the title of a Google Calendar invitation.  The prompt then leveraged an advanced technique known as delayed tool invocation to coerce the LLM into executing the injected instructions.  Because the prompt was embedded in a Google Calendar artifact, it persisted in the long-term memory of the user's workspace.  Lateral movement occurred when the prompt instructed the Google Assistant to launch the Zoom application, and the final objective involved covertly livestreaming video of the unsuspecting user who had merely asked about their upcoming meetings.  C2 and reconnaissance weren't demonstrated in this attack.

-- Oleg Brodt, Elad Feldman, Bruce Schneier, Ben Nassi, "The Promptware Kill Chain: How Prompt Injections Gradually Evolved Into a Multistep Malware Delivery Mechanism" (14 January 2026)

No comments: