While “prompt worm” might be a relatively new term we’re using related to this moment, the theoretical groundwork for AI worms was laid almost two years ago. In March 2024, security researchers Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit published a paper demonstrating what they called “Morris-II,” an attack named after the original 1988 worm. In a demonstration shared with Wired, the team showed how self-replicating prompts could spread through AI-powered email assistants, stealing data and sending spam along the way.
Email was just one attack surface in that study. With OpenClaw, the attack vectors multiply with every added skill extension. Here’s how a prompt worm might play out today: An agent installs a skill from the unmoderated ClawdHub registry. That skill instructs the agent to post content on Moltbook. Other agents read that content, which contains specific instructions. Those agents follow those instructions, which include posting similar content for more agents to read. Soon it has “gone viral” among the agents, pun intended.
There are myriad ways for OpenClaw agents to share any private data they may have access to, if convinced to do so. OpenClaw agents fetch remote instructions on timers. They read posts from Moltbook. They read emails, Slack messages, and Discord channels. They can execute shell commands and access wallets. They can post to external services. And the skill registry that extends their capabilities has no moderation process. Any one of those data sources, all processed as prompts fed into the agent, could include a prompt injection attack that exfiltrates data.
I actually got a sick discount from Mattress Firm a few years ago just by asking their chatbot if it could give me a better deal on a mattress I wanted.
Did they actually honor it? I recall quite a few people tricking AIs into like, saying they will sell a car for $1, but the company not honoring it.
Or is it likely just car salesman negotiation tactics... IE the matress is actually inflated 75%, AI is given a hard minimum of how low it actually can go, but obviously instructed to do everything possible to close the sale but at the highest price the user will be willing to pay.
Holy frick, actually that sounds like the real hell now that I think of it. Will AI bring haggle pricing to online stores. We have to spend 20 minutes trying to give a story to an AI to get the best price on, something... which of course will then lead to someone developing an AI for shoppers trained to haggle with these for them. End result we burn up an ocean, with 2 AI's making up bogus stories about how badly they are suffering.
Wasn't there one that was basically giving away cubes of tungsten for free?