The rise of artificial intelligence has brought many benefits to the tech world. But with those benefits come problems, and the cybersecurity industry is now facing one of the most frustrating yet: AI slop.
AI slop refers to low-quality content generated by large language models (LLMs). These models can create convincing-looking text, but much of it lacks accuracy, technical value, or even truth. In the world of cybersecurity, that’s becoming a serious issue.

What Is Happening?
Over the last year, experts in the cybersecurity field have noticed a surge in fake bug bounty reports—submissions that look like real security vulnerabilities but are completely made up by AI tools. These reports are often well-written and detailed, but when security teams investigate, they find nothing there.
“People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them… it turns out it was just a hallucination all along,” said Vlad Ionescu, CTO of RunSybil and former Meta red team member.
Bug bounty platforms like HackerOne and Bugcrowd, which connect ethical hackers with companies looking to find flaws in their software, are now dealing with hundreds of AI-generated reports weekly.
Real Examples of AI Slop in Action
- The curl project, which maintains the widely used open-source data-transfer tool, received a completely fabricated vulnerability report. Its maintainers quickly flagged it as AI-generated nonsense.
- Open Collective, a nonprofit funding platform for open-source projects, says its inbox is being flooded with similar fake reports.
- CycloneDX, another open-source project, had to shut down its bug bounty program due to the overwhelming number of bad AI reports.
These reports waste valuable time and resources, slowing down real security work.
The Challenge with AI in Security
AI tools are built to be helpful. When asked to write a bug report, they will generate one—even if the vulnerability isn’t real. Some people copy and paste these LLM-generated reports directly into bug bounty platforms, hoping for rewards.
“That’s the problem people are running into—we’re getting a lot of stuff that looks like gold, but it’s actually just crap,” Ionescu added.
Michiel Prins, co-founder of HackerOne, agrees. He said the company has seen a rise in false positives recently, which undermines the efficiency of its security programs.
Bugcrowd’s founder, Casey Ellis, noted that while many hackers use AI to help write reports, the platform hasn’t yet seen a major spike in outright slop. Still, its team reviews 500 new submissions every week, and the volume is growing.
How the Industry Is Responding
Security teams are now trying to fight AI with AI. HackerOne recently launched Hai Triage, a hybrid AI-human system that filters out duplicate or low-quality reports before human analysts take over.
Mozilla, which runs its own bug bounty program for Firefox, said it hasn’t seen a rise in fake reports yet. However, it is cautious about using AI to filter submissions, since an automated filter could reject valid bugs.
Google, Microsoft, and Meta—all heavily invested in AI—declined to comment on whether they’ve seen similar problems.
What’s Next?
The rise of AI-generated cyber junk is a growing concern. As both attackers and defenders use AI, the battle will likely shift to AI vs. AI. The key will be finding ways to filter noise while still spotting real threats.
Cybersecurity experts, developers, and ethical hackers must stay alert. While AI can be a great tool, it’s important to validate everything—and not trust a report just because it looks good.
Final Thoughts
AI slop is more than just annoying—it’s a real threat to the way we handle cybersecurity. As fake bug reports increase, companies and platforms must find smarter ways to separate signal from noise. The answer may lie in better AI systems—but for now, human oversight remains critical.
