Slopsquatting: How AI Hallucinations Create New Security Risks
Slopsquatting is an emerging supply-chain threat that exploits hallucinated package names in AI-generated code. The term, coined by Python Software Foundation security developer Seth Larson, describes attackers registering malicious packages under the fake library names that language models frequently invent. Because models tend to hallucinate the same plausible-sounding names repeatedly, an attacker can claim those names on public registries such as PyPI or npm and wait for developers to copy the suggested install commands. Studies show that even commercial models like GPT-4 hallucinate package names in about 5% of generated code samples, and open-source models do so considerably more often. A developer who installs one of these names after an attacker has claimed it pulls the attacker's code directly into their system. To stay safe, experts urge developers to verify that a suggested package actually exists and is reputable, pin and verify package hashes, and critically review all AI-generated code before running it.
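One practical first line of defence is to check every AI-suggested dependency against the registry before installing it. The short Python sketch below (the script name and candidate packages are hypothetical) queries PyPI's public JSON API, which returns HTTP 404 for names that were never registered; a missing name is a strong hallucination signal, while an existing one still deserves scrutiny, since slopsquatters register exactly these names.

```python
"""Sketch: vet AI-suggested dependency names against PyPI before installing.

Assumes the third-party `requests` library is available; the package names
passed on the command line are illustrative, not recommendations.
"""
import sys

import requests

PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"  # PyPI's public JSON API


def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI serves metadata for `name` (HTTP 200)."""
    resp = requests.get(PYPI_JSON_URL.format(name=name), timeout=10)
    return resp.status_code == 200


def vet(names: list[str]) -> None:
    for name in names:
        if exists_on_pypi(name):
            # Existence is necessary but not sufficient: slopsquatters register
            # hallucinated names, so still check maintainers and release history.
            print(f"{name}: found on PyPI -- review its maintainers and releases")
        else:
            print(f"{name}: not on PyPI -- likely hallucinated, do not install")


if __name__ == "__main__":
    vet(sys.argv[1:])  # e.g. python vet_packages.py requests some-made-up-lib
```

For the hash-checking step, pip's built-in hash-checking mode covers the install side: generate pinned hashes with a tool such as pip-tools (`pip-compile --generate-hashes`) and install with `pip install --require-hashes -r requirements.txt`, so a package whose contents change after vetting fails to install.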