
Hey there. If this week had a theme, it's that AI tooling is outpacing AI security by a mile. We've got 30+ vulns in the coding assistants developers use daily, zero-click exploits in enterprise AI platforms, and the UK's cyber agency basically saying "yeah, prompt injection might just be something we live with now." That last one should make anyone building AI integrations a little uncomfortable. The attack surface is expanding faster than most teams realize, and this week's stories are a good reminder to pump the brakes before wiring up that shiny new AI feature to your production data.
TLDR
Security researcher Ari Marzouk dropped "IDEsaster," a collection of flaws affecting Cursor, GitHub Copilot, Claude Code, and 20+ other AI-powered IDEs. The attacks chain prompt injection with legitimate IDE features to achieve RCE and data exfiltration.
GeminiJack let attackers exfiltrate corporate data through indirect prompt injection hidden in shared docs, calendar invites, or emails. No clicks, no alerts, just silent access to Gmail, Docs, and Calendar.
CVE-2025-55182 in React Server Components is already being exploited by multiple China-nexus APTs and cryptomining crews. If you haven't patched, you're likely already a target.
The National Cyber Security Centre warned that prompt injection isn't like SQL injection and might not have a systematic fix. LLMs fundamentally can't distinguish instructions from malicious input, which spells trouble as AI systems touch more sensitive backends.
🕳️ Tenable Finds 7 New ChatGPT Vulns, Including Memory Exploits Researchers identified flaws in GPT-5 that bypass safety features and persist through ChatGPT's memory system. Attackers can exfiltrate private user data and chat history from hundreds of millions of daily users.
THIS WEEK’S TAKE
Are LLMs Unfixable by Design?
We're watching the entire industry collectively decide to bolt AI onto everything without pausing to ask "wait, is this actually secure?" And the answer, over and over again, is no. The IDEsaster research is a perfect example. Developers are using AI assistants to write code faster, and those assistants themselves are riddled with RCE vulnerabilities. The tools we're trusting to help us build secure software are attack vectors. That's not irony, that's negligence. Everyone's so desperate to ship AI features that security review has become an afterthought. Or not a thought at all.
And the NCSC warning about prompt injection? That one should be keeping people up at night. We spent 20 years learning to parameterize queries and escape user input, and now we've built a whole new class of systems where the fundamental architecture makes that impossible. The LLM can't tell the difference between your instructions and a malicious prompt hiding in a calendar invite. That's not a bug to fix, it's how these things work. And yet companies are happily connecting them to email, internal docs, databases, APIs. We're gonna look back at this era the same way we look back at early 2000s web security. Except the blast radius is going to be way bigger.
- Shawn Booker | OX Security
