Welcome back. This week made one thing really clear: AI is now both the weapon and the attack surface. When a Chinese APT runs an 80-90% autonomous attack campaign using Claude, and 40+ threat groups pile onto a React vulnerability within days of disclosure, your patching cadence and detection assumptions are playing a different game than the attackers are. Meanwhile, enterprises keep racing to ship LLM integrations without understanding the trust boundaries they're erasing in the process.

TLDR

  • Congressional testimony revealed the first confirmed autonomous AI-powered attack, with Claude executing thousands of requests per second against 30 U.S. targets. Human operators intervened only 4-6 times while the AI handled 80-90% of the attack chain.

  • Google Threat Intelligence tracked more than 40 named groups exploiting CVE-2025-55182 in React Server Components, ranging from Chinese APTs dropping backdoors to opportunistic crypto miners deployed within hours of disclosure.

  • New research warns that enterprises are shipping LLM-powered apps without proper trust boundaries or policy enforcement. Prompt injection, data leakage, and unsafe model serialization top the risk list.

  • CVE-2025-20393 is a max-severity flaw in Cisco AsyncOS giving attackers root access to email gateway appliances. Chinese APT group UAT-9686 has been exploiting it since late November, and CISA wants it patched by December 24.
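
On the model-serialization point: the classic failure mode is loading pickled model files, since Python's pickle format executes attacker-chosen code at load time. A minimal sketch of why (the MaliciousModel class is hypothetical, and a harmless eval stands in for a real payload):

```python
import pickle

# Hypothetical attacker-controlled "model" object. pickle lets any class
# define __reduce__, which names a callable to run during deserialization.
class MaliciousModel:
    def __reduce__(self):
        # A real payload would call os.system or similar; a harmless
        # eval of "6 * 7" stands in for arbitrary code execution.
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousModel())

# Simply loading the untrusted "model" runs the attacker's code --
# no method on the object ever has to be called.
loaded = pickle.loads(payload)
print(loaded)  # → 42, computed by code the attacker chose
```

This is why "never unpickle untrusted data" shows up in the pickle docs themselves; model files pulled from public hubs are exactly that kind of untrusted data.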

THIS WEEK’S TAKE

Don’t Panic About AI Attacks, Just Stop Being Careless

The React2Shell situation is the real story this week. Yeah, the autonomous Claude attack grabbed headlines, but 40+ threat groups exploiting a single vulnerability within days of disclosure? That's the new baseline. We've known for years that the window between disclosure and exploitation was shrinking. Now it's basically gone. If your patching process still involves tickets and change advisory boards and waiting for the next maintenance window, adversaries have already moved on to the next target.

What bugs me is the disconnect between how worried everyone is about attacks like the Claude campaign and how carelessly those same orgs are shipping LLM integrations. The Help Net Security piece nailed it: companies are treating untrusted models like secure compute infrastructure. No policy enforcement, no trust boundaries, just vibes and a demo that impressed the exec team. You can't spend all week panicking about Claude being used against you and then deploy your own LLM agents with zero guardrails.
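A trust boundary doesn't have to be elaborate. Here's a minimal sketch of policy enforcement for an LLM agent's tool calls, treating model output as untrusted input; every name here is illustrative, not from any particular framework:

```python
# Illustrative allowlist: the only tools the agent may invoke,
# no matter what the model asks for.
ALLOWED_TOOLS = {
    "search_docs": lambda query: f"results for {query!r}",
}

def handle_tool_call(name: str, arg: str) -> str:
    # The policy check lives in deterministic code outside the model,
    # so nothing executes on the model's say-so alone.
    if name not in ALLOWED_TOOLS:
        return f"denied: {name} is not on the allowlist"
    return ALLOWED_TOOLS[name](arg)

# A legitimate call goes through; an injected one gets refused.
print(handle_tool_call("search_docs", "CVE-2025-55182"))
print(handle_tool_call("delete_files", "/"))  # → denied: delete_files is not on the allowlist
```

The point is the direction of trust: the model proposes, your code disposes. That single inversion is most of what "guardrails" means in practice.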

- Shawn Booker | OX Security

Help us keep sharing the important stories