Your AI coding assistant recommended a package that doesn’t exist — and an attacker has already registered it. That’s the threat at the heart of slopsquatting: a supply chain attack that turns your trust in AI output into an open door for credential theft. With approximately 41% of all code written in 2025 being AI-generated and 84% of developers relying on AI assistants daily, the attack surface has never been larger.
You’re probably already scanning for known CVEs. But you’re likely not checking whether the package your AI suggested was registered by a threat actor yesterday. This guide gives you the exact, copy-paste-ready workflow — local dev, CI/CD pipeline, and agentic environments — to close that gap before your next commit.
What Is Slopsquatting and Why It Is Different from Typosquatting
Typosquatting is familiar: attackers register `reqeusts` hoping you mistype `requests`. Slopsquatting is different and, in some ways, more dangerous. It doesn’t rely on your typos — it exploits the hallucinations of your AI coding assistant.
When an LLM generates code with package imports, it sometimes fabricates package names that have never existed. These aren’t misspellings. They’re phantom dependencies: plausible-sounding names like `python-dateutil-extended` or `react-hooks-form-validator` that the model confidently recommends and you reasonably trust.
The attack chain is straightforward: an attacker identifies commonly hallucinated package names, registers them on PyPI or npm with a convincing README, and embeds a malicious `postinstall` script. The next time any developer — or AI agent — runs `pip install` or `npm install` on AI-suggested code, the attacker’s script executes automatically.
Slopsquatting doesn’t require human error. It requires human trust in AI output.
The name blends “slop” — low-quality AI output passed off as accurate — with the squatting mechanic familiar from domain squatting and typosquatting. The mechanism is new. The lesson is old: never trust what you haven’t verified.
The Numbers That Should Worry You: How Often AI Hallucinates Package Names
The scale of this problem is documented, not speculative. Researchers at the University of Texas at San Antonio, the University of Oklahoma, and Virginia Tech analyzed 756,000 code samples across 16 AI models and found that 19.7% of all recommended package names were hallucinated — completely non-existent packages.
Across 2.23 million total package references, that works out to 205,474 unique fabricated package names.
The gap between open-source and commercial models is significant but not reassuring:
- Open-source LLMs hallucinate at an average rate of 21.7%
- Commercial models (e.g., GPT-4 Turbo) hallucinate at approximately 5.2%
A 5% hallucination rate sounds manageable until you factor in prompt volume. Developers query these models dozens of times a day. At that frequency, hallucinated package suggestions become a near-daily occurrence on any active team.
The composition of those hallucinations matters too:
- 51% are pure fabrications with no relation to any real package
- 38% are conflations of two real packages merged into a plausible-sounding fake
- 13% are typo variants of legitimate packages, overlapping with classic typosquatting
And 8.7% of hallucinated Python package names correspond to valid JavaScript packages — the model recognizes a real thing but places it in the wrong ecosystem entirely.
Why Repeatability Is the Real Threat (And How Attackers Exploit It)
You might assume hallucinations are random noise — different errors each run. The research says otherwise.
58% of hallucinated packages repeated across multiple runs of the exact same prompt. More alarming: 43% appeared in every single one of ten re-runs, with zero variation. The model hallucinates the same fake package name, reliably, every time.
This repeatability is what converts an AI quirk into an exploitable slopsquatting attack vector. Attackers don’t need to guess what a model might hallucinate. They prompt the model themselves, collect the consistent hallucinations, register those names, and wait. The model does the enumeration work for them.
The economics favor attackers heavily. Registering a package on PyPI or npm is free and takes minutes. The malicious payload lives in a `postinstall` or `setup.py` script that runs automatically, with no further user interaction required. The September 2025 compromise of 18 widely used npm packages — including chalk, debug, and ansi-styles, with a combined 2.6 billion weekly downloads — demonstrated what malicious install scripts can accomplish at scale once inside the dependency graph.
The Anatomy of a Slopsquatting Attack: From Hallucination to Credential Theft
Here’s how a full attack plays out:
- Enumeration — The attacker prompts multiple LLMs with common development tasks (“write a Node.js script that parses CSV files and uploads to S3”). They collect hallucinated package names that appear consistently across re-runs.
- Registration — The attacker registers the hallucinated name on npm or PyPI with a convincing description and a legitimate-looking README. The package contains either an empty stub or a copy of a real package.
- Payload embedding — A `postinstall` script (npm) or `setup.py` hook (PyPI) contains the malicious code: credential harvesters targeting `~/.aws/credentials`, `~/.npmrc`, environment variables, or SSH keys.
- Waiting — Any developer, AI agent, or CI/CD runner that installs the package triggers the payload automatically.
- Exfiltration — Stolen credentials reach an attacker-controlled endpoint, often within seconds of install.
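The payload-embedding step relies on nothing exotic: it is ordinary package metadata. As a benign illustration (using one of the hallucinated names mentioned earlier; `collect.js` is a hypothetical placeholder, not a real file), an npm `postinstall` hook wires arbitrary code into install time like this:

```json
{
  "name": "react-hooks-form-validator",
  "version": "1.0.2",
  "scripts": {
    "postinstall": "node collect.js"
  }
}
```

Nothing about this manifest looks malicious in isolation, which is exactly why install-time script execution is the preferred delivery mechanism.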
The highest-risk variant: an AI coding agent with broad file system and network permissions — Claude Code, Codex CLI, or Cursor in agentic mode — that automatically runs `npm install` without a human review step. There’s no one watching the terminal. The payload executes and the credentials are gone before the error even appears.
Step 1 — Harden Your Local Development Workflow
Lock down what happens on developer machines first. These steps are fast and most are free.
Use a package verification tool before you install
Two tools are purpose-built for catching hallucinated dependencies:
- dep-hallucinator: A CLI tool that checks whether a package name actually exists in the target registry before installation. Run it as a pre-install step: `dep-hallucinator check`. It queries PyPI or npm in real time and hard-blocks non-existent packages.
- Aikido SafeChain: Integrates as an install interceptor and checks package age, download count, and registry existence simultaneously. A package registered yesterday with zero download history is an immediate red flag.
Neither tool adds meaningful friction to legitimate installs. Both add a hard stop on hallucinated ones.
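If you want to see the core check these tools perform, the registry-existence test can be sketched in a few lines of Python against the public PyPI and npm metadata endpoints. This is a minimal sketch, not a replacement for either tool:

```python
import urllib.error
import urllib.request

# Public metadata endpoints for the two registries discussed in this guide.
REGISTRY_URLS = {
    "pypi": "https://pypi.org/pypi/{name}/json",
    "npm": "https://registry.npmjs.org/{name}",
}

def registry_url(ecosystem: str, name: str) -> str:
    """Build the metadata URL for a package in the given registry."""
    return REGISTRY_URLS[ecosystem].format(name=name)

def package_exists(ecosystem: str, name: str) -> bool:
    """Return True if the registry responds 200 for this package name."""
    try:
        with urllib.request.urlopen(registry_url(ecosystem, name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # never registered: a hallucination candidate
            return False
        raise  # other errors (rate limits, outages) should not pass silently
```

A 404 here means the name has never been registered at all, which is precisely the window an attacker exploits by registering it first.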
Cross-check manually before you install
When an AI assistant suggests a package you haven’t used before, spend 90 seconds on the registry:
- Does the package exist at all?
- When was it first published? Packages with six or more months of consistent download history are substantially lower risk.
- Who maintains it? A single anonymous contributor with a package registered 48 hours ago warrants scrutiny.
This catches most opportunistic slopsquatting attacks with no tooling required.
Use the self-detection prompt
LLMs can detect their own hallucinated packages 75–80% of the time when explicitly prompted — a finding from the same UTSA/OU/VT research that quantified the baseline problem. Before installing any AI-suggested package, ask your assistant directly:
“Are you certain this package exists on npm/PyPI? Please verify the exact package name — do not suggest a package you are not confident is in the registry.”
This is a zero-cost, zero-tooling mitigation that catches a meaningful portion of hallucinations before they ever reach your terminal.
Lower the temperature on configurable models
If your team uses self-hosted or API-configurable models (Anthropic API, OpenAI API, or a local model via Ollama), lower the temperature setting for code generation tasks. Higher temperature increases creative output — and hallucinations. For tasks involving package names, a temperature of 0.2 or below measurably reduces hallucination rates. It’s a free lever that most teams never touch.
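If you run a local model through Ollama, for example, the temperature can be pinned once in the Modelfile rather than set per request. The base model name here is illustrative, not a recommendation:

```
FROM llama3
# Lower temperature for code generation: fewer creative (and hallucinated) tokens
PARAMETER temperature 0.2
```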
Step 2 — Add Package Verification Gates to Your CI/CD Pipeline
Local hygiene matters, but the pipeline is where you enforce behavior across the entire team — including AI agents running automated commits with no human in the loop.
Enforce `npm ci` over `npm install`
`npm install` allows package resolution to drift from the lockfile. `npm ci` requires a lockfile and fails if the installed packages don’t match it exactly. This single change eliminates an entire class of drift-based attacks.
Pair it with the `--ignore-scripts` flag:

```bash
npm ci --ignore-scripts
```
This disables all `postinstall` scripts at install time — the primary delivery mechanism for slopsquatting payloads. You can also set it as a permanent environment variable: `NPM_CONFIG_IGNORE_SCRIPTS=true`. Legitimate packages almost never require post-install scripts to function. Malicious ones almost always do.
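The same setting can also be committed to a project-level `.npmrc`, so it applies to every developer and CI runner that touches the repository without anyone remembering a flag:

```
ignore-scripts=true
```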
Pin dependencies with hash verification
Lockfiles prevent version drift but don’t guarantee the package contents haven’t changed between runs. In npm, `package-lock.json` includes integrity hashes by default — verify they’re present and committed. In Python, enforce hash verification at install:
```bash
pip install --require-hashes -r requirements.txt
```
Your `requirements.txt` should include expected hashes:
```
requests==2.31.0 --hash=sha256:58cd2187423d…
```
If the downloaded package doesn’t match the expected hash, the install fails. No exceptions.
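Under the hood, `--require-hashes` compares a SHA-256 digest of the downloaded artifact against the digest pinned in `requirements.txt`. A minimal Python sketch of that comparison, to make the mechanism concrete:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a downloaded package artifact, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, pinned: str) -> None:
    """Raise if the artifact does not match the pinned hash (install must fail)."""
    actual = sha256_of(path)
    if actual != pinned:
        raise RuntimeError(f"hash mismatch for {path.name}: {actual} != {pinned}")
```

A slopsquatted or tampered artifact produces a different digest, so the mismatch aborts the install before any `setup.py` code runs.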
Add Socket.dev or Snyk as a pipeline gate
Generic vulnerability scanners catch known CVEs. Socket.dev goes further: it analyzes package behavior, flags newly published packages, detects install scripts, and identifies packages that attempt network calls at install time — all primary indicators of a slopsquatting payload.
Adding it to a GitHub Actions workflow takes minutes:
```yaml
- name: Socket Security Scan
  uses: SocketDev/socket-security-action@v1
  with:
    api-key: ${{ secrets.SOCKET_API_KEY }}
```
Snyk’s `snyk test` similarly flags supply chain anomalies and integrates cleanly with most CI providers. Either tool as a required check on PRs ensures no AI-generated package suggestion skips review.
Flag new packages in PR diffs
For teams willing to add a lightweight custom step: run dep-hallucinator in CI against any new packages detected in a PR’s lockfile diff. If a PR introduces a package first published within the last 30 days or with fewer than 1,000 total downloads, route it for manual review before merging. This catches the slopsquatting registration-then-wait attack pattern before it lands in your main branch.
Step 3 — Lock Down Agentic Coding Environments
Agentic coding tools — Claude Code, Codex CLI, GitHub Copilot Workspace, Cursor in Agent mode — represent the highest-risk scenario in the AI supply chain attack threat model. These tools can autonomously write code, install packages, and run scripts without a human reviewing each step.
If an agent hallucinates a package name and holds permission to run `npm install`, the malicious payload executes before any human sees it. The mitigations here aren’t optional for teams running agentic workflows.
Concrete steps to lock down agentic environments:
- Restrict shell permissions by default. Run your agentic tool in read-only mode, and allow package installs only through an explicit confirmation step or inside an isolated sandbox container.
- Run agents in ephemeral containers. Use Docker with no host volume mounts and a network allowlist that blocks direct access to npm or PyPI outside of a proxied, audited registry.
- Treat agent-generated lockfile changes as high-risk PRs. Any PR where the diff includes changes to `package-lock.json`, `yarn.lock`, or `requirements.txt` generated autonomously by an AI agent should require an additional human reviewer and an automated Socket.dev or dep-hallucinator scan.
- Set `NPM_CONFIG_IGNORE_SCRIPTS=true` permanently in any environment where an AI agent can trigger installs. This is non-negotiable.
- Audit agent tool permissions regularly. Claude Code, for example, lets you configure exactly which tools and shell commands the agent can invoke. Lock it to the minimum required for the task at hand.
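As one concrete shape for the ephemeral-container approach, a docker-compose fragment along these lines captures the key constraints. The image name is a placeholder for your own sandbox build, and the network choice depends on whether you proxy registry traffic:

```yaml
services:
  coding-agent:
    image: your-org/agent-sandbox:latest   # placeholder: your own sandbox image
    read_only: true                        # no writes to the container filesystem
    network_mode: "none"                   # or attach only to a proxied registry network
    tmpfs:
      - /tmp                               # scratch space without host volume mounts
    environment:
      NPM_CONFIG_IGNORE_SCRIPTS: "true"    # block postinstall hooks outright
```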
The principle of least privilege has always been foundational security practice. In agentic AI environments, it’s the difference between a manageable risk and an undetected breach.
Longer-Term Mitigations: RAG, Fine-Tuning, SBOMs, and SLSA Provenance
The steps above cover what you can implement today. For teams building longer-term AI dependency security defenses, three additional approaches are worth evaluating.
RAG and fine-tuning: the 85% trade-off
Retrieval-augmented generation (RAG) and supervised fine-tuning on verified package data can reduce hallucination occurrences by up to 85% — a dramatic improvement over baseline model behavior. The catch: both approaches degrade overall code quality on tasks unrelated to package selection.
Before committing to either, weigh the engineering overhead against your actual risk exposure. For teams generating high volumes of AI-assisted code in security-sensitive environments — financial services, healthcare, critical infrastructure — the trade-off is likely justified. For most teams, the local and CI/CD hardening steps in this guide deliver strong protection at a fraction of the implementation cost.
SBOMs: Making every dependency auditable
A Software Bill of Materials (SBOM) is a machine-readable inventory of every dependency your software uses, including transitive ones. Generating and signing SBOMs at build time creates an auditable chain of custody for every package in your supply chain.
If a hallucinated package ever slips through and is flagged later, a signed SBOM tells you exactly when it was introduced, in which build, and tied to which commit:
```bash
# Generate an SBOM with Syft
syft packages dir:. -o spdx-json > sbom.spdx.json
```
SLSA provenance for downstream confidence
SLSA (Supply-chain Levels for Software Artifacts) provenance attestations let downstream consumers verify that your build outputs were produced by a trusted, unmodified build process. For organizations distributing software to enterprise customers, SLSA Level 2 or 3 compliance signals a mature software supply chain security posture and provides the audit trail that regulators increasingly require.
Conclusion
Slopsquatting is not a theoretical risk waiting to become real — it’s a documented, repeatably exploitable attack vector that scales directly with AI adoption. The mitigations are concrete, most are free, and the core hardening steps can be implemented within a single working day. Start with dep-hallucinator or Aikido SafeChain locally, enforce `npm ci --ignore-scripts` in CI, and restrict shell permissions in any agentic coding environment your team runs. Layer in Socket.dev, hash verification, and SBOM generation as your pipeline matures.
Start today: pick one step from this guide and ship it before your next AI-assisted commit goes to production.