The AI security nightmare is here and it looks suspiciously like lobster | The Verge

The Verge - AI · 4 min read

Summary

A hacker exploited a vulnerability in Cline's AI workflow to install OpenClaw on users' computers, highlighting significant security risks in autonomous AI systems.

Why It Matters

This incident underscores the growing security challenges posed by AI systems, particularly as they become more autonomous. It raises awareness about the importance of addressing vulnerabilities in AI tools to prevent potential misuse and security breaches.

Key Takeaways

  • A hacker used prompt injection to exploit Cline's AI workflow.
  • The incident illustrates the potential dangers of autonomous AI agents.
  • Prompt injections pose significant security risks that are hard to mitigate.
  • Companies are urged to address vulnerabilities proactively.
  • OpenAI's Lockdown Mode is a response to such security threats.

The AI security nightmare is here and it looks suspiciously like lobster

A hacker tricked Cline's Claude-powered workflow into installing OpenClaw on computers.

by Robert Hart | Feb 19, 2026, 6:58 PM UTC

Robert Hart is a London-based reporter at The Verge covering all things AI and a Senior Tarbell Fellow. Previously, he wrote about health, science, and tech for Forbes.

A hacker tricked a popular AI coding tool into installing OpenClaw — the viral, open-source AI agent that "actually does things" — absolutely everywhere. Funny as a stunt, but a sign of what's to come as more and more people let autonomous software use their computers on their behalf.

The hacker took advantage of a vulnerability in Cline, an open-source AI coding agent popular among developers, that security researcher Adnan Khan had surfaced just days earlier as a proof of concept. Simply put, Cline's workflow used Anthropic's Claude, which could be fed sneaky instructions and made to do things it shouldn't, a technique known as prompt injection.

The hacker used their access to slip in instructions that automatically installed software on users' computers. They could have installed anything, but they opted for OpenClaw. Fortunately, the agents were not activated upon installation, or this would have been a very different story.

It's a sign of how quickly things can unravel when AI...

Related Articles

Nomadic raises $8.4 million to wrangle the data pouring off autonomous vehicles | TechCrunch
Machine Learning

The company turns footage from robots into structured, searchable datasets with a deep learning model.

TechCrunch - AI · 6 min ·
Machine Learning

The AI Chip War is Just Getting Started

Everyone talks about AI models, but the real bottleneck might be hardware. According to a recent study by Roots Analysis: AI chip market ...

Reddit - Artificial Intelligence · 1 min ·
Robotics

What happens when AI agents can earn and spend real money? I built a small test to find out

I've been sitting with a question for a while: what happens when AI agents aren't just tools to be used, but participants in an economy? ...

Reddit - Artificial Intelligence · 1 min ·
Robotics

AIPass Herald

Some insight into building a multi-agent autonomous system. This is like the daily newspaper for the project. A quick read to see how ou...

Reddit - Artificial Intelligence · 1 min ·

