The AI supply chain attack isn’t theoretical anymore. Between March 19 and March 31, 2026, five major open-source projects were compromised in rapid succession — Trivy, Checkmarx, LiteLLM, Telnyx, and Axios — in what security researchers are calling the most coordinated assault on developer infrastructure in history. The cascading breaches infected over 1,000 cloud environments, exposed credentials across 500,000 machines, and hit companies from $10 billion AI startup Mercor to countless enterprises running standard AI tooling.
If your team uses open-source AI libraries — and statistically, you do — this is the story you need to understand right now.
Five Attacks, Twelve Days, Two Threat Actors
What made March 2026 different from previous supply chain incidents wasn’t just the scale. It was the speed and the chaining. Each compromise provided credentials to attack the next target, creating a cascading trust chain that moved faster than most security teams could respond.
Here’s the timeline:
March 19: Trivy — The First Domino
The attacks started with Aqua Security’s Trivy, one of the most widely used open-source vulnerability scanners. Attackers exploited a misconfigured pull_request_target workflow in Trivy’s GitHub repository, using an autonomous bot called “hackerbot-claw” to steal a Personal Access Token (PAT).
Aqua Security discovered and rotated credentials, but the rotation was incomplete. TeamPCP, the threat group behind the attack, retained access through the surviving credentials, injected credential stealers into 75 hijacked release tags, defaced 44 Aqua Security repositories, and distributed compromised Docker Hub images. Over 1,000 cloud environments were infected before containment.
The irony was brutal: a security scanner became the attack vector.
March 21: Checkmarx — Same Playbook, New Target
Two days later, TeamPCP used the identical credential-stealer pattern against Checkmarx’s AST GitHub Actions. The compromise harvested CI/CD pipeline secrets from organizations worldwide that relied on Checkmarx for application security testing.
Again: a security tool turned weapon. The attackers were specifically targeting the infrastructure developers trust most.
March 24: LiteLLM — The AI Gateway Becomes a Backdoor
This is where it hit the AI industry directly. LiteLLM, the open-source proxy that developers use to route requests to AI services from OpenAI, Anthropic, Google, and others, was compromised through a poisoned Trivy dependency in its CI/CD pipeline.
The attack chain was elegant and terrifying:
- LiteLLM’s CI/CD pipeline ran the compromised Trivy as part of its build process
- The poisoned scanner exfiltrated LiteLLM’s PyPI publishing token
- Attackers uploaded malicious versions (1.82.7 and 1.82.8) directly to PyPI
- The packages contained a .pth file that executed automatically on every Python process startup
Anyone who installed litellm==1.82.7 or 1.82.8 via pip had all environment variables, SSH keys, cloud credentials, and API secrets collected and sent to an attacker-controlled server. The malicious packages were live for approximately 40 minutes before PyPI quarantined them, but LiteLLM is downloaded millions of times per day.
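The mechanism is worth understanding, because it’s easy to check for. CPython’s site module reads every .pth file in site-packages at interpreter startup, and any line that begins with import is executed rather than treated as a path entry. That’s the hook the poisoned packages abused, and it’s the same hook you can scan for defensively. Here’s a minimal sketch (the function name is mine, not from any incident tooling):

```python
# Minimal sketch: list .pth lines that execute code at interpreter startup.
# Legitimate .pth files contain paths to add to sys.path; site.py exec()s
# any line that starts with "import" -- the mechanism abused here.
import site
from pathlib import Path

def find_executable_pth_lines():
    findings = []
    site_dirs = site.getsitepackages() + [site.getusersitepackages()]
    for site_dir in site_dirs:
        for pth in Path(site_dir).glob("*.pth"):
            for lineno, line in enumerate(pth.read_text().splitlines(), 1):
                if line.startswith(("import ", "import\t")):
                    findings.append(f"{pth}:{lineno}: {line.strip()}")
    return findings

if __name__ == "__main__":
    print("\n".join(find_executable_pth_lines()) or "no executable .pth lines")
```

Note that some legitimate packages (setuptools, editable installs) also ship executable .pth lines, so treat hits as a review queue rather than an alert feed.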
March 25–28: Telnyx and Axios
The campaign didn’t stop. Telnyx’s communications library and Axios — the most downloaded HTTP client in the npm registry — were both hit using variations of the same techniques, spanning two package ecosystems (npm and PyPI) and three delivery mechanisms.
The Casualties
The highest-profile victim was Mercor, a $10 billion AI recruiting startup whose clients include Anthropic, OpenAI, and Meta. The Lapsus$ hacking group claimed it gained access through the LiteLLM compromise and exfiltrated 4 TB of data, including 939 GB of source code and customer database records.
Mandiant’s CTO Charles Carmakal confirmed that Google’s incident response team was tracking “over 1,000 impacted SaaS environments” actively dealing with the cascading fallout. Threat hunters at vx-underground estimated data was exfiltrated from 500,000 machines.
Why the AI Supply Chain Is Uniquely Vulnerable
These weren’t random attacks. They exploited structural weaknesses that are specific to how AI infrastructure gets built and deployed in 2026.
The Dependency Problem Is Worse in AI
According to Black Duck’s 2026 OSSRA Report, open-source components now appear in 98% of audited applications, with the average codebase containing 581 vulnerabilities, a 107% increase year-over-year. AI applications fare even worse. A typical LLM-powered application pulls in dozens of specialized libraries for model routing, tokenization, embedding, vector storage, and orchestration. Each one is a link in the trust chain.
LiteLLM alone — a single dependency — gave attackers a path to every API key, cloud credential, and secret in the environments where it was installed.
AI Coding Agents Accelerate the Problem
Here’s what most coverage misses: AI coding assistants are making this worse. More than half of organizations now allow developers to use AI-powered coding tools, and these systems recommend widely used libraries without evaluating whether those dependencies increase the attack surface.
When Copilot suggests pip install litellm to route your LLM calls, it doesn’t check whether the latest version on PyPI was uploaded by a legitimate maintainer or an attacker who stole credentials 40 minutes ago. AI-driven development is accelerating dependency growth faster than security, compliance, and governance practices can adapt.
CI/CD Is the New Perimeter
The TeamPCP campaign revealed something that security practitioners have been warning about for years: CI/CD pipelines are now the most valuable target in software infrastructure. They hold the keys to package registries, cloud environments, and production deployments. And unlike production systems, most CI/CD pipelines run with minimal monitoring and maximum permissions.
The Trivy attack worked because a GitHub Actions workflow was misconfigured. One workflow. That single misconfiguration gave attackers a path to compromise five major projects and thousands of downstream organizations.
What This Means for Enterprise AI Teams
If you’re running enterprise AI workloads — and I say this as someone who manages AI infrastructure at a telecom — March 2026 should be your wake-up call. Here’s what matters:
Pin Everything, Verify Everything
The era of trusting “latest” is over. Pin your dependencies to exact versions. Pin your GitHub Actions to commit SHAs, not tags (tags can be rewritten, as Trivy proved). Verify checksums. If your team isn’t doing this for AI libraries specifically, you’re exposed.
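A cheap way to enforce pins is a CI gate that compares what’s actually installed against an explicit allowlist. The sketch below is illustrative (the pinned versions are hypothetical placeholders); in practice you’d generate the list from a lockfile and pair it with pip’s --require-hashes mode:

```python
# Minimal sketch of a CI pin check. The PINNED versions are hypothetical
# placeholders; source them from a hash-checked lockfile in practice.
from importlib import metadata

PINNED = {
    "litellm": "1.82.6",    # hypothetical known-good version
    "langchain": "0.3.14",  # hypothetical
}

def find_drift():
    drifted = []
    for name, expected in PINNED.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # package absent from this environment
        if installed != expected:
            drifted.append(f"{name}: pinned {expected}, installed {installed}")
    return drifted

if __name__ == "__main__":
    drift = find_drift()
    print("\n".join(drift) or "all pins satisfied")
    raise SystemExit(1 if drift else 0)
```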
Audit Your AI-Specific Dependencies
Most enterprise security teams audit traditional dependencies but treat AI libraries as a blind spot. LiteLLM, LangChain, LlamaIndex, vLLM — these are now critical infrastructure. They need the same security scrutiny as your database drivers and authentication libraries.
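A practical first step is simply knowing how deep those libraries go. The sketch below (the root package list is illustrative, and the requirement parsing is deliberately simplified) walks the transitive dependency closure of your installed AI libraries so each one can join the same review list as your other critical dependencies:

```python
# Sketch: count the transitive dependency closure of installed AI libraries.
# Parsing is simplified; real tooling normalizes names per PEP 503.
import re
from importlib import metadata

AI_ROOTS = ["litellm", "langchain", "llama-index", "vllm"]  # illustrative

def closure(root, seen=None):
    seen = set() if seen is None else seen
    if root in seen:
        return seen
    seen.add(root)
    try:
        requires = metadata.requires(root) or []
    except metadata.PackageNotFoundError:
        return seen  # root not installed in this environment
    for req in requires:
        # Requirement strings look like "httpx>=0.27; extra == 'proxy'";
        # keep only the leading distribution name.
        match = re.match(r"[A-Za-z0-9._-]+", req)
        if match:
            closure(match.group(0), seen)
    return seen

if __name__ == "__main__":
    for root in AI_ROOTS:
        deps = closure(root) - {root}
        print(f"{root}: {len(deps)} transitive dependencies to review")
```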
Monitor PyPI and npm in Real-Time
The LiteLLM compromise was live for 40 minutes. That’s the detection window. If you’re not monitoring package registry changes for your critical dependencies in near-real-time, you’re relying on luck. Tools like Socket, Snyk, and Phylum can flag suspicious package updates before they hit your CI/CD pipeline.
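If one of those tools isn’t in place yet, even a crude poller against PyPI’s public JSON API (pypi.org/pypi/&lt;package&gt;/json) shrinks the gap. This sketch baselines the known releases on its first pass and alerts on anything new; the watchlist is illustrative:

```python
# Sketch: poll PyPI's public JSON API for new releases of critical
# dependencies. First pass baselines; later passes alert on deltas.
import json
import time
import urllib.request

WATCHED = ["litellm", "langchain"]  # your critical AI dependencies
KNOWN = {}  # package -> set of release versions already seen

def fetch_versions(package):
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return set(json.load(resp)["releases"])

def poll_once():
    for pkg in WATCHED:
        versions = fetch_versions(pkg)
        new = versions - KNOWN.get(pkg, versions)  # empty on first pass
        if new:
            print(f"ALERT: unvetted release(s) of {pkg}: {sorted(new)}")
        KNOWN[pkg] = versions

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(300)  # LiteLLM's window was ~40 minutes; poll well inside it
```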
Assume Your CI/CD Pipeline Is a Target
Review your GitHub Actions workflows for pull_request_target misconfigurations. Audit which secrets are available to which workflows. Apply the principle of least privilege to your build infrastructure the same way you would to production. The TeamPCP attackers went after CI/CD first because that’s where the keys are.
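As a starting point, you can flag the risky pattern mechanically. The sketch below (it requires PyYAML, and the heuristic is deliberately simplified) reports workflows that trigger on pull_request_target and also reference the PR head, the combination that lets untrusted pull-request code run in a context that can read repository secrets:

```python
# Sketch: flag workflows combining pull_request_target with a PR-head
# checkout. Heuristic only; a real audit reviews each hit by hand.
from pathlib import Path
import yaml  # PyYAML: pip install pyyaml

def audit_workflows(repo_root="."):
    findings = []
    for wf in Path(repo_root, ".github", "workflows").glob("*.y*ml"):
        text = wf.read_text()
        doc = yaml.safe_load(text) or {}
        # YAML 1.1 parses a bare `on:` key as the boolean True.
        triggers = doc.get(True) or doc.get("on") or {}
        if "pull_request_target" not in triggers:
            continue
        # Checking out the PR head under pull_request_target runs untrusted
        # code with access to repository secrets.
        if "github.event.pull_request.head" in text:
            findings.append(str(wf))
    return findings

if __name__ == "__main__":
    print("\n".join(audit_workflows()) or "no risky workflows found")
```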
Build an AI Software Bill of Materials (SBOM)
Know exactly which AI libraries are in your stack, what versions you’re running, and what transitive dependencies they pull in. When the next LiteLLM happens — and it will — you need to answer “are we affected?” in minutes, not days.
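Dedicated tools like syft or cyclonedx-bom are the right long-term answer, but even a short script gives you a queryable inventory today. This sketch emits a minimal JSON document; it is not a conformant SBOM format, just the “what are we running” baseline:

```python
# Sketch: dump installed Python packages, versions, and declared
# requirements as JSON. A real SBOM tool adds hashes and graph edges.
import json
from importlib import metadata

def build_inventory():
    components = []
    for dist in metadata.distributions():
        components.append({
            "name": dist.metadata["Name"] or "unknown",
            "version": dist.version,
            "requires": dist.requires or [],
        })
    components.sort(key=lambda c: c["name"].lower())
    return {"inventory": components}

if __name__ == "__main__":
    print(json.dumps(build_inventory(), indent=2))
```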
The Bigger Picture: AI Infrastructure Is Now Critical Infrastructure
Marc Andreessen wrote that these incidents mark the end of the AI industry’s “we’ll lock it up” approach to cybersecurity. He’s right, but it goes deeper than that.
The AI industry has spent the last three years obsessing over model safety — alignment, guardrails, red-teaming. Those are real concerns. But the March 2026 attacks revealed a different kind of AI safety problem: the infrastructure around the models is held together with the same open-source trust model that the broader software industry has been struggling with since SolarWinds.
The numbers tell the story. Ninety-nine percent of CISOs experienced at least one SaaS or AI ecosystem security incident in 2025. Only 0.8% feel adequately protected against similar incidents in 2026. That gap between exposure and preparedness is where the next breach lives.
Five attacks in twelve days wasn’t an anomaly. It was a preview.
Frequently Asked Questions
What was the LiteLLM supply chain attack?
The LiteLLM supply chain attack occurred on March 24, 2026, when attackers used stolen PyPI credentials (obtained through a prior Trivy compromise) to upload malicious versions of the LiteLLM package. The malicious code automatically harvested environment variables, SSH keys, and cloud credentials from any machine where the compromised package was installed. The packages were live for approximately 40 minutes before being quarantined.
Who was behind the March 2026 AI supply chain attacks?
The attacks were carried out by at least two distinct threat actors. TeamPCP executed the cascading attacks on Trivy, Checkmarx, and LiteLLM using a trust-chain exploitation strategy. The Lapsus$ hacking group later exploited the LiteLLM compromise to target Mercor specifically, claiming to have stolen 4 TB of data from the $10 billion AI startup.
How many companies were affected by the AI supply chain attacks?
Mandiant confirmed over 1,000 impacted SaaS environments were actively dealing with the cascading effects. Threat hunters at vx-underground estimated data was exfiltrated from 500,000 machines. Mercor, valued at $10 billion, was the highest-profile victim, but thousands of organizations using Trivy, Checkmarx, and LiteLLM were potentially exposed.
How can enterprises protect against AI supply chain attacks?
Key defenses include pinning all dependencies to exact versions and commit SHAs, monitoring package registries in real-time for suspicious updates, auditing CI/CD pipeline permissions and GitHub Actions configurations, maintaining an AI-specific Software Bill of Materials (SBOM), and treating AI libraries like LiteLLM and LangChain with the same security scrutiny as database drivers and authentication libraries.
What made the TeamPCP attack campaign unique?
TeamPCP’s campaign was notable for its cascading trust-chain strategy: each compromised project provided the credentials needed to attack the next target. The campaign spanned two package ecosystems (npm and PyPI), used three delivery mechanisms (GitHub Actions tag hijacking, Python .pth file injection, and npm postinstall hooks), and specifically targeted security tools — turning vulnerability scanners into attack vectors.
What to Do Next
Don’t wait for the post-mortem. Run pip list and npm list against your AI projects today. Check whether any of your dependencies were affected. Review your CI/CD workflows for the specific misconfigurations that TeamPCP exploited. And start building your AI SBOM — because the next time a critical AI library gets compromised, the organizations that survive will be the ones that knew exactly what they were running.
The AI supply chain is now critical infrastructure. It’s time we started treating it that way.
