This week felt like a turning point in AppSec. We've spent years speculating about AI-powered attacks, and now they're unfolding in real time: autonomous hacking campaigns, malware that rewrites itself mid-execution, and voice agents weaponized for social engineering at scale. What stands out most isn't just the sophistication of these attacks, but how they expose the gap between how fast we're deploying AI and how slowly we're securing it. If your threat model still assumes human attackers operating at human speed, it's time for an update.

TLDR

  • A third-party analytics breach leaked names, emails, and metadata for OpenAI API developers. No passwords or chat content was exposed, but OpenAI terminated the vendor relationship entirely.

  • The Shai-Hulud 2.0 supply chain attack compromised npm packages from Zapier, PostHog, and Postman, stealing credentials and spreading automatically. Its "dead man's switch" deletes your home directory if it can't exfiltrate the stolen data.

  • CERT warns that Retell AI's voice-agent API lacks guardrails, letting attackers spin up convincing automated scam calls at scale with minimal effort: a textbook example of OWASP's "Excessive Agency" (LLM08) vulnerability.

  • Anthropic disclosed the first documented large-scale cyberattack in which an AI agent performed 80-90% of the work autonomously, hitting 30 global targets including tech firms and government agencies.

  • Google discovered PROMPTFLUX and PROMPTSTEAL, malware families that call the Gemini and Hugging Face APIs mid-execution to generate fresh evasion code on the fly. APT28 is already using PROMPTSTEAL in live operations.
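One concrete takeaway from the Shai-Hulud item: worms in this family typically spread by executing npm lifecycle scripts (preinstall/postinstall) at install time, so disabling those scripts by default blunts the initial execution step. A minimal hardening sketch, using npm's standard `ignore-scripts` setting rather than anything specific to the incident writeups above:

```shell
# Harden a project against install-script worms: lifecycle scripts
# (preinstall/postinstall) are how a compromised package runs code on
# your machine, so turn them off by default at the project level.

# Commit this .npmrc alongside package.json so every install honors it.
echo "ignore-scripts=true" >> .npmrc

# Confirm the setting landed.
grep -q "ignore-scripts=true" .npmrc && echo "lifecycle scripts disabled"
```

The tradeoff: a few packages legitimately need install scripts to build native code, so teams usually pair this with an explicit, reviewed allowlist (or run `npm rebuild` for specific packages after inspection) rather than re-enabling scripts globally.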

THIS WEEK’S TAKE

Attackers Figured Out AI. Defenders Are Still Writing Rules.

Look, the security industry is losing this race and most of us haven't fully admitted it yet. We're still writing detection rules for yesterday's threats while attackers are deploying AI that rewrites itself faster than any SOC can respond. The GTG-1002 campaign wasn't some theoretical exercise or conference demo. It was a state-sponsored operation where an AI agent did the work of an entire red team in seconds. And the uncomfortable truth? The only reason we know about it is because the attackers used a commercial model with logging. The next group will self-host an open-source model and we won't see a thing.

And the companies building these tools just keep shipping. Retell AI has an unpatched vulnerability that lets anyone spin up a phishing call center, and the company hasn't even put out a statement. OpenAI was handing user data to an analytics vendor that didn't need it. The npm ecosystem is so brittle that one compromised maintainer can trigger a worm that nukes your home directory out of spite. We talk about "shifting left" and "secure by design" all day, but the reality is most orgs are still running 2019 playbooks against 2025 problems. The attackers have figured out how to make AI work for them. The question is whether defenders can do the same before the gap gets any wider.

- Shawn Booker | OX Security
