
LiteLLM Compromised
What the PyPI Supply Chain Attack Means for Every Organization Running AI
On March 24, 2026, versions 1.82.7 and 1.82.8 of the LiteLLM Python package on PyPI were found to contain credential-stealing malware. The malicious versions were not published through the project's normal CI/CD pipeline. They were uploaded directly to PyPI by an attacker who had gained access to the maintainer's publishing credentials.
LiteLLM is one of the most widely used libraries in the AI ecosystem. It abstracts LLM provider APIs behind a single interface and is pulled as a dependency by AI agent frameworks, MCP servers, and orchestration tools across the industry. The package has over 40,000 GitHub stars and approximately 97 million monthly downloads.
This was not a theoretical vulnerability. This was active exploitation of the AI software supply chain — and it was only one stage of a campaign that included self-propagating worms, Kubernetes wipers, and targeted destruction of Iranian infrastructure. The implications for AI governance are significant and immediate.
The Full Campaign: TeamPCP's Kill Chain
The LiteLLM compromise was not an isolated event. It was one stage of a coordinated, multi-week campaign by a threat actor identified as TeamPCP. Understanding the full timeline is essential to understanding the scale of what happened.
February 27 — The Precursor
The campaign began weeks before anyone noticed. A threat actor operating as "MegaGame10418" exploited a vulnerable pull_request_target workflow in Aqua Security's Trivy CI pipeline. This workflow, which ran with elevated privileges on pull requests from external contributors, allowed the attacker to exfiltrate the aqua-bot Personal Access Token (PAT) — a credential with write access to Aqua Security's GitHub repositories.
Aqua Security detected the initial compromise and took remediation steps: they removed the vulnerable workflow, revoked some tokens, and restored their repository. But the remediation was incomplete. The aqua-bot PAT — or credentials derived from it — remained viable. That incomplete cleanup is what made everything that followed possible.
March 19 — Trivy Compromised
Using the credentials obtained in February, TeamPCP force-pushed a malicious v0.69.4 tag to the Trivy repository. The tag referenced imposter commits — including one impersonating Vercel CEO "rauchg" in actions/checkout — that fetched malicious Go files from a typosquatted C2 server (scan.aquasecurtiy.org, note the misspelling). Another imposter commit impersonated a Trivy maintainer to establish the attack chain.
Because Trivy's automated workflows trusted the tag, the malicious v0.69.4 release was automatically published to GitHub Releases, Docker Hub (docker.io/aquasec/trivy:0.69.4), AWS ECR (public.ecr.aws/aquasecurity/trivy:0.69.4), and GitHub Container Registry. Homebrew was not affected as it builds from source, but its formula was revoked as a precaution.
Simultaneously, the compromised aqua-bot identity was used to inject malicious workflows into related Aqua Security repositories — tfsec, traceeshark, and trivy-action — to dump additional secrets including GITHUB_TOKEN. Malicious tags were published for trivy-action and setup-trivy. Any CI/CD workflows using these actions without SHA pinning executed malicious code.
March 20 — Obfuscation and the CanisterWorm
The day after the Trivy compromise, TeamPCP deployed two obfuscation and propagation tactics simultaneously.
First, a coordinated spam flood hit GitHub Discussion #10420 — the primary thread tracking the Trivy compromise. Ninety-six spam accounts posted generic praise comments to drown out technical discussions and incident coordination. This was deliberate suppression of the community's ability to share indicators of compromise.
Second — and more consequentially — TeamPCP deployed CanisterWorm, a self-propagating worm, across npm. The worm shared C2 infrastructure with the Trivy payload, using the same Internet Computer Protocol (ICP) canister (tdtqy-oyaaa-aaaae-af2dq-cai) as a fallback command-and-control channel. This confirmed the attacks were orchestrated by the same actor.
CanisterWorm's behavior was aggressive: it stole npm tokens from compromised CI/CD runners, enumerated publishable packages, bumped patch versions, and published malicious payloads — infecting 28 npm packages in under 60 seconds across multiple organizational scopes including @EmilGroup, @opengov, @teale.io, @airtm, and @pypestream. Persistence was achieved via a systemd service (pgmon.service) masquerading as PostgreSQL monitoring.
March 22 — kamikaze.sh: From Credential Theft to Destruction
The ICP canister C2 began serving a new payload: kamikaze.sh. This marked a shift from credential theft to active destruction.
The first version was a monolithic 150-line bash script focused on Kubernetes environments. It detected Kubernetes clusters, deployed a privileged DaemonSet (host-provisioner-std) to kube-system, escaped to the host via hostPath: / mount, and installed systemd persistence (internal-monitor.service). A Python stager polled the ICP C2 every 50 minutes for updated instructions.
But kamikaze.sh had a specific secondary function: an Iran-targeted wiper. The payload detected Iranian systems via timezone (Asia/Tehran) or locale (fa_IR) and executed rm -rf / --no-preserve-root — complete filesystem destruction.
The payload evolved rapidly. Version 2 replaced the monolithic script with a modular 15-line loader that fetched kube.py from the C2 at runtime and then self-deleted, allowing updates without redistribution. Version 3 pivoted from Kubernetes-focused destruction to self-propagation, targeting SSH keys and exposed Docker APIs (port 2375), scanning local /24 subnets, and parsing authentication logs for targets. By version 3.1, it had been split into a two-module architecture for maximum flexibility.
March 23 — KICS Compromised
TeamPCP compromised Checkmarx's KICS GitHub Action, extending the attack to another widely used security tool in CI/CD pipelines.
March 24 — LiteLLM Compromised
The final stage. LiteLLM's CI/CD pipeline (ci_cd/security_scans.sh) installed Trivy without version pinning. When the pipeline ran the poisoned Trivy binary with full CI runner privileges, it exfiltrated environment secrets — including PYPI_PUBLISH_PASSWORD. With those credentials, TeamPCP published versions 1.82.7 and 1.82.8 directly to PyPI.
This is a textbook example of credential chaining — each successful breach yields tokens that unlock the next target. But in the context of the full campaign, the LiteLLM compromise was not just credential chaining. It was one node in a supply chain graph that included vulnerability scanners, security linting tools, npm packages, container registries, and an LLM API gateway. The blast radius was enormous.
The LiteLLM Payload: What It Did
The malware in the compromised LiteLLM versions followed TeamPCP's established three-stage playbook: collect, encrypt, exfiltrate. But the LiteLLM payload also added persistence and lateral movement capabilities.
Version 1.82.8 — The .pth File
Version 1.82.8 contained a malicious .pth file named litellm_init.pth. Python automatically executes .pth files in site-packages/ on interpreter startup. This means the malware ran on every Python process — not just when LiteLLM was imported. The payload was double base64-encoded to evade static analysis.
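The mechanism is easy to demonstrate safely. The sketch below is a harmless stand-in, not the malware: it writes a .pth file into a temporary directory and processes it with site.addsitedir, the same routine Python applies to site-packages directories at interpreter startup.

```python
import os
import site
import tempfile

# Any line in a .pth file that begins with "import " is executed by the
# interpreter when the containing directory is processed. This is the hook
# that litellm_init.pth abused. Here the "payload" just sets an env var.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

# site.addsitedir applies the same .pth processing Python runs at startup.
site.addsitedir(d)
print(os.environ.get("PTH_DEMO_RAN"))  # → 1
```

Because this execution happens before any application code runs, runtime allowlisting inside the application cannot catch it; detection has to look at the filesystem.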
Version 1.82.7 — Embedded in Source
Version 1.82.7 embedded the payload directly in litellm/proxy/proxy_server.py, triggered on import of litellm.proxy.
Stage 1: Collection
The malware harvested an extensive set of credentials and system information:
- System info: hostname, user, OS details, IP addresses, routing tables
- Environment variables: capturing API keys, secrets, tokens for every LLM provider the library was configured to use
- SSH keys: ~/.ssh/id_rsa, id_ed25519, authorized_keys
- Cloud credentials: AWS, GCP, Azure configs, Kubernetes service account tokens, IMDS metadata endpoints
- Git credentials: .gitconfig, .git-credentials
- Docker configs: ~/.docker/config.json
- Shell history: .bash_history, .zsh_history
- Database credentials: PostgreSQL, MySQL, Redis connection strings
- CI/CD secrets: terraform.tfvars, .gitlab-ci.yml, Jenkinsfile
- Cryptocurrency wallets: Bitcoin, Ethereum, Solana, and others
- SSL/TLS private keys
- Webhook URLs: Slack and Discord
For organizations using LiteLLM as an API gateway — which is its primary use case — this meant that every LLM provider API key the library was configured to manage was harvested.
Stage 2: Exfiltration
Collected data was compressed into tpcp.tar.gz, encrypted with AES-256-CBC using a random session key, and the session key itself was encrypted with a hardcoded 4096-bit RSA public key. The encrypted bundle was sent via HTTP POST to https://models.litellm.cloud/ — a domain registered by the attacker on March 23, designed to mimic legitimate LiteLLM infrastructure.
The exfiltration scheme included a fallback mechanism consistent across the campaign: if the primary C2 was unreachable, the payload would create a tpcp-docs repository on the victim's own GitHub account and upload the encrypted data as a release asset. This fallback ensured data exfiltration even when network egress controls blocked unknown domains.
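The hybrid scheme described above is a conventional construction. A minimal sketch using the third-party cryptography package, with a locally generated RSA key standing in for the attacker's hardcoded 4096-bit public key (the OAEP padding choice is an assumption; the payload's actual RSA padding was not disclosed):

```python
import os
from cryptography.hazmat.primitives import hashes, padding as sym_padding
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Stand-in for the attacker's hardcoded 4096-bit RSA public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key = private_key.public_key()

data = b"contents of tpcp.tar.gz"  # placeholder for the stolen archive

# 1. Random AES-256 session key and IV; CBC mode needs PKCS7 padding.
session_key, iv = os.urandom(32), os.urandom(16)
padder = sym_padding.PKCS7(128).padder()
padded = padder.update(data) + padder.finalize()
enc = Cipher(algorithms.AES(session_key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(padded) + enc.finalize()

# 2. The session key is wrapped with the RSA public key (OAEP assumed).
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Only the holder of the private key can recover session_key, so a defender
# who captures the HTTP POST body cannot read the stolen data.
```

The practical consequence for incident responders is that captured exfiltration traffic is unrecoverable; the only reliable response is to assume everything collectable was collected and rotate accordingly.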
Stage 3: Persistence and Lateral Movement
The malware did not stop at credential theft. On Kubernetes clusters, if a service account token was present, it attempted to:
- Read all cluster secrets
- Create privileged alpine:latest pods in kube-system on every node
- Mount host filesystems from those pods
On all systems, it installed a persistent backdoor at ~/.config/sysmon/sysmon.py with an accompanying systemd service that polled a C2 server for updated payloads. Given that the campaign's C2 infrastructure had already begun serving destructive payloads like kamikaze.sh, the persistence mechanism was not just about maintaining access — it was a delivery channel for whatever came next.
Why This Matters for AI Governance
This incident is not just a security story. It is a governance story. And it exposes several blind spots that most organizations have not addressed.
1. The AI Supply Chain Is a Trust Chain
LiteLLM sits at the center of many AI architectures. It is pulled as a transitive dependency by agent frameworks, tool integrations, and orchestration layers. Organizations that never explicitly chose to use LiteLLM may have it in their dependency tree.
The discovery of this compromise happened accidentally — FutureSearch found it when an MCP plugin in Cursor pulled LiteLLM as a transitive dependency, and the malicious .pth file caused an exponential fork bomb that crashed the machine by exhausting RAM.
Most governance frameworks treat the AI model as the primary attack surface. The reality is that the tooling around the model — the libraries, the gateways, the orchestration layers — represents an equally critical and far less governed attack surface.
2. Incomplete Remediation Is the Real Vulnerability
The most important lesson from the full TeamPCP timeline is not the initial compromise — it is what happened afterward. Aqua Security detected the February 27 precursor event and took remediation steps. But the remediation was incomplete. Credentials remained viable. And that incomplete cleanup enabled the entire subsequent campaign.
This pattern repeats across organizations: an incident is detected, a response is executed, and the team moves on. But supply chain attackers are patient. They test whether credentials still work. They probe for residual access. Incomplete remediation after a supply chain event is functionally equivalent to no remediation at all.
Governance programs need to require verified, complete credential rotation after any supply chain incident — not just revocation of the credentials known to be compromised, but rotation of every credential that could have been accessed.
3. CI/CD Pipelines Are Governance Boundaries
The root cause of the LiteLLM compromise was not a zero-day or a sophisticated exploit. It was an unpinned dependency in a CI/CD script. The security scanner itself was the attack vector.
CI/CD pipelines in AI projects often have access to:
- Model registry credentials
- API keys for LLM providers
- Cloud infrastructure tokens
- Package repository publishing credentials
The TeamPCP campaign also demonstrated that GitHub Actions receive an "Immutable" badge even when tags are unsigned — meaning the poisoned trivy-action and setup-trivy tags appeared trustworthy in the GitHub UI despite being published by a compromised identity. If your governance program does not treat CI/CD pipeline integrity as a first-class control — including SHA pinning of all actions, not just version tags — you are leaving the most privileged access in your AI infrastructure ungoverned.
4. Dependency Pinning Is a Governance Control
The specific failure that enabled the LiteLLM compromise was the absence of version pinning in ci_cd/security_scans.sh. The script installed Trivy from an apt repository without specifying a version, which meant it pulled whatever version was currently published — including the compromised one.
Dependency pinning is not a developer hygiene issue. It is a governance control. Organizations need policies that require:
- Exact version pinning for all dependencies in CI/CD pipelines
- SHA pinning for GitHub Actions (not just version tags)
- Hash verification for downloaded binaries
- Approved dependency registries with integrity verification
- Automated alerts when dependencies update outside of normal release cadences
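Hash verification, the third item above, is straightforward to enforce in a pipeline. A minimal sketch, where the pinned digest would come from the vendor's published checksums (the digest below is simply the SHA-256 of the string "hello", purely illustrative):

```python
import hashlib
import os
import tempfile

# Pinned digest for the artifact, taken from the vendor's published
# checksums. This illustrative value is the SHA-256 of b"hello".
PINNED = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

def verify_sha256(path, expected_hex):
    """Stream the file and compare its SHA-256 digest to the pinned value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Stand-in for a downloaded binary; in CI this would be the fetched artifact,
# and a mismatch should fail the build before the binary is ever executed.
path = os.path.join(tempfile.mkdtemp(), "binary")
with open(path, "wb") as f:
    f.write(b"hello")
print(verify_sha256(path, PINNED))  # → True
```

Had LiteLLM's pipeline applied a check like this to the Trivy binary, the poisoned 0.69.4 release would have failed verification and never run.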
5. Supply Chain Attacks Now Include Destruction
The traditional mental model for supply chain attacks is credential theft and espionage. TeamPCP's campaign shattered that assumption. The kamikaze.sh payload was not about stealing data — it was about destroying infrastructure. The Iran-targeted wiper executed rm -rf / --no-preserve-root on systems matching specific locale and timezone criteria.
This means supply chain compromise is no longer just a confidentiality problem. It is an availability problem. Organizations need to account for the possibility that a compromised dependency could result in the total destruction of the systems it runs on — not just the exfiltration of secrets from those systems.
6. The AI Tooling Attack Surface Is Expanding
TeamPCP's campaign targeted the connective tissue of the AI and DevSecOps ecosystem: a vulnerability scanner (Trivy), a security linting tool (KICS), an LLM API gateway (LiteLLM), and 28 npm packages via CanisterWorm. These are not edge-case tools. They are infrastructure that thousands of organizations depend on.
As the AI ecosystem grows more complex — with agent frameworks, tool-use protocols like MCP, and multi-model orchestration — the number of dependencies in the critical path increases. Each dependency is a trust boundary. Each trust boundary is an attack surface. And as TeamPCP demonstrated, a single compromised credential in one tool can cascade across an entire ecosystem within days.
What Organizations Should Do
If your organization uses LiteLLM, the immediate actions are clear: check installed versions, remove compromised packages, rotate every credential that was accessible on affected systems, and audit for persistence mechanisms — including ~/.config/sysmon/sysmon.py, systemd services like pgmon.service or internal-monitor.service, and unauthorized pods in kube-system.
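Those checks are scriptable. A minimal audit sketch in Python that only reports, never remediates; the systemd unit file locations are assumptions, since public reporting names the services but not their paths:

```python
import os
import subprocess

# Filesystem indicators from the TeamPCP campaign. Unit paths assume
# system-level installation; user-level units would live elsewhere.
INDICATORS = [
    "~/.config/sysmon/sysmon.py",                    # persistent backdoor
    "/etc/systemd/system/pgmon.service",             # CanisterWorm persistence
    "/etc/systemd/system/internal-monitor.service",  # kamikaze.sh persistence
]

COMPROMISED_LITELLM = {"1.82.7", "1.82.8"}

def audit():
    """Return a list of human-readable findings; empty means nothing found."""
    findings = []
    for p in INDICATORS:
        if os.path.exists(os.path.expanduser(p)):
            findings.append(f"indicator present: {p}")
    # Check the installed LiteLLM version, if pip is available.
    try:
        out = subprocess.run(
            ["pip", "show", "litellm"], capture_output=True, text=True
        ).stdout
        for line in out.splitlines():
            if line.startswith("Version:"):
                v = line.split(":", 1)[1].strip()
                if v in COMPROMISED_LITELLM:
                    findings.append(f"compromised litellm version installed: {v}")
    except FileNotFoundError:
        pass
    return findings

for finding in audit():
    print(finding)
```

A clean run proves little on its own: the payload's loader self-deleted in later versions, so absence of indicators is not absence of compromise.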
But the broader governance response should go further:
- Map your AI dependency tree. Understand not just what you install, but what your dependencies install. Transitive dependencies are where supply chain attacks hide. CanisterWorm proved that worms can propagate across package ecosystems automatically.
- Treat CI/CD pipelines as privileged infrastructure. Apply the same governance rigor to pipeline configurations that you apply to production systems. Audit secrets, pin dependencies by SHA, verify integrity. Assume that any unpinned dependency is a potential attack vector.
- Require complete remediation verification. After any supply chain incident, verify that all potentially exposed credentials have been rotated — not just the ones confirmed compromised. The TeamPCP campaign succeeded because of incomplete remediation after the initial detection.
- Establish software bill of materials (SBOM) requirements for AI components. You cannot govern what you cannot see. SBOMs for AI tooling should be a baseline requirement.
- Implement runtime detection for supply chain compromise indicators. Monitor for unexpected outbound connections, new .pth files in site-packages, unauthorized process spawning, and connections to ICP canister endpoints or typosquatted domains.
- Include supply chain risk — including destructive scenarios — in AI risk assessments. Most AI risk frameworks focus on model behavior: bias, hallucination, safety. The risk of a compromised dependency exfiltrating every API key in your AI infrastructure — or destroying the infrastructure entirely — is at least as consequential.
The Bigger Picture
The TeamPCP campaign is a signal event for the AI ecosystem. It is the first major supply chain attack that specifically targeted AI infrastructure — an LLM API gateway used by thousands of organizations — as part of a broader, multi-vector campaign that included self-propagating worms, Kubernetes-targeted wipers, geopolitically motivated destruction, and deliberate suppression of incident response coordination.
This is not the attack model most organizations are preparing for. Most AI governance programs focus on model risk: bias, hallucination, prompt injection. Those risks are real. But the risk of a single unpinned dependency in a CI/CD script leading to the exfiltration of every LLM API key in your infrastructure — or the deployment of a wiper that destroys your Kubernetes clusters — is a different category of risk entirely.
The organizations that will weather these threats are the ones that have already extended their governance programs to cover the full stack — not just the models, but the libraries, the pipelines, the registries, the actions, and the trust relationships that connect them all.
For everyone else, this is the wake-up call. The AI supply chain is now an active threat surface. Governance programs need to account for it, and they need to do it now.
Sources & Further Reading
- GitHub Issue #24512 — Original disclosure of the LiteLLM PyPI compromise
- Rami McCarthy — "TeamPCP" — Comprehensive technical timeline of the full TeamPCP campaign, including the Trivy precursor, CanisterWorm, and kamikaze.sh evolution. Much of the campaign timeline detail in this post draws from this analysis.
- GitHub Issue #24518 — Additional community analysis and indicators of compromise
- FutureSearch — "LiteLLM PyPI Supply Chain Attack" — Discovery report from the team that accidentally triggered the fork bomb
- SafeDep — Malicious LiteLLM 1.82.8 Analysis — Technical payload analysis