This one is pretty juicy: https://awesomeagents.ai/news/vibe-coding-security-69-vulnerabilities/

> "Coding agents cannot be trusted to design secure applications," Tenzai concluded. "They seem to be very prone to business logic vulnerabilities. While human developers bring intuitive understanding that helps them grasp how workflows should operate, agents lack this 'common sense.'"
> Databricks' AI Red Team found that self-reflection prompts can improve security by 60-80% for Claude and up to 50% for GPT-4o. The tools can find their own vulnerabilities when asked.
> But that is precisely the problem vibe coding was supposed to solve. The entire premise is that developers - or non-developers - can describe what they want and get working software. Requiring them to also know which security prompts to add, and when, defeats the purpose.
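For what it's worth, the "self-reflection" technique the Databricks quote describes is simple to wire up yourself: after the agent generates code, send it back in a second, explicit security-review prompt. A minimal sketch, where `ask_model` is a hypothetical stand-in for whatever LLM call you use (not Databricks' actual harness):

```python
# Sketch of a self-reflection security pass, in the spirit of the
# Databricks finding quoted above: the model reviews its own output
# when explicitly asked to, via a second prompt.

SECURITY_REVIEW_PROMPT = """\
Review the following code you just generated for security issues.
Focus on business-logic flaws (authorization gaps, missing ownership
checks, trust-boundary violations), not only injection or XSS.
For each vulnerability, give its location and a suggested fix.

--- BEGIN CODE ---
{code}
--- END CODE ---
"""


def build_security_review_prompt(code: str) -> str:
    """Wrap previously generated code in a self-review prompt."""
    return SECURITY_REVIEW_PROMPT.format(code=code)


def self_review(code: str, ask_model) -> str:
    """Second pass over generated code; ask_model is any callable
    that takes a prompt string and returns the model's reply."""
    return ask_model(build_security_review_prompt(code))
```

Which is exactly the commenter's point: the fix exists, but it only works if the user already knows to run it.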