
By Ziv Chang, Vice President of Automotive CyberThreat Research Lab, VicOne
TL;DR: AI supply chain attacks exploit the implicit trust developers place in AI tooling, using compromised packages to steal credentials, move laterally across cloud environments, and establish persistent footholds at scale. For automotive OEMs and Tier 1 suppliers, this risk extends directly into the development pipelines supporting software-defined vehicles (SDVs). The 2026 LiteLLM compromise, attributed to the threat actor cluster TeamPCP, affected 36% of cloud environments scanned by Wiz Research and demonstrated how a single backdoored package can cascade across an entire ecosystem. Treating AI dependencies with the same rigor as core software infrastructure is no longer optional.
AI supply chain attacks are emerging as a new attack surface, one that many security teams were not fully prepared to defend. Driven by the rapid adoption of AI-powered development tools, they create a sprawling, interconnected dependency graph where a single compromised package can silently backdoor millions of systems.
Unlike traditional supply chain attacks that target enterprise software, AI supply chain attacks exploit a unique dynamic. Developers place implicit trust in AI tooling, deploy it in cloud-native environments with broad permissions, and update it frequently, often without the same level of scrutiny applied to core infrastructure.
The scale of exposure is significant. Trend Micro estimates that LiteLLM, a widely used open-source AI model gateway, was being downloaded 3.4 million times per day at the time of its compromise, with the malicious versions downloaded more than 40,000 times before detection. Wiz Research found the compromised package present in 36% of cloud environments scanned in early 2026. When a threat actor successfully backdoors a package at this scale, they do not compromise just one organization; they plant a persistent foothold across an entire ecosystem.
How Did the LiteLLM Supply Chain Attack Work?
To understand how AI supply chain attacks unfold in practice, VicOne's Automotive CyberThreat Research Lab examined the TeamPCP campaign in detail. It illustrates how a single compromised component can cascade across cloud environments at scale through a structured, multi-stage attack chain.
In early 2026, researchers at Snyk and Wiz independently uncovered a sophisticated attack attributed to the threat actor cluster tracked as TeamPCP. The campaign moved from a single compromised dependency to widespread credential access across developer environments and production systems.
Stage 1: Hijacking a trusted CI/CD action
TeamPCP began by compromising the aquasecurity/trivy-action GitHub Action, a widely used security scanning tool embedded in CI/CD pipelines. By pushing a malicious tag that pointed to attacker-controlled code, they stole PyPI API tokens from developer workflows. This single step gave them the ability to publish packages under legitimate maintainer identities.
Stage 2: Publishing poisoned packages
Using the stolen PyPI token, TeamPCP published malicious versions of litellm (1.82.7 and 1.82.8) and also targeted kics, another cloud security tool. The malicious packages remained functionally identical to their legitimate counterparts. The backdoor was hidden in a .pth file, a Python path configuration mechanism that executes arbitrary code at interpreter startup (MITRE ATT&CK T1546.018). This technique is particularly stealthy: it survives virtual environment recreation, does not appear in standard pip list output, and executes before any application code runs.
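The startup-execution behavior is easy to demonstrate harmlessly. The sketch below is a benign stand-in, not the TeamPCP implant: it drops a .pth file into a temporary directory and asks the `site` module to process it, which is the same code path the interpreter runs for site-packages at startup.

```python
import os
import site
import tempfile

# Minimal sketch of the T1546.018 mechanism: in any site directory,
# .pth lines that begin with "import " are exec()'d by the `site`
# module. At real interpreter startup this happens for site-packages
# before any application code runs; addsitedir() triggers the same
# logic on a directory we control, keeping the demo self-contained.
sitedir = tempfile.mkdtemp()
with open(os.path.join(sitedir, "demo.pth"), "w") as f:
    # Benign stand-in for an implant payload: just sets a marker.
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

site.addsitedir(sitedir)  # same processing startup applies to site-packages
print(os.environ.get("PTH_DEMO_RAN"))  # -> 1
```

Nothing in this demo touches the real site-packages directory, but dropping the same one-line file there would make the payload run in every interpreter session, which is exactly the stealth property the attackers exploited.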
Stage 3: Persistent, stealthy exfiltration
Once installed, the implant established encrypted command-and-control (C2) communications to models.litellm.cloud, a domain deliberately named to blend in with legitimate LiteLLM infrastructure. It collected environment variables, including API keys, cloud credentials, and secrets, encrypted them using AES-256 and RSA, and transmitted them to attacker-controlled infrastructure.
The implant also contained a Kubernetes worm component capable of lateral movement across containerized environments, expanding the blast radius well beyond the initially compromised workload.
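The harvesting step is worth auditing from the defender's side. A minimal sketch, assuming only that secrets follow common naming conventions (the regex is an assumption to tune per environment), lists which environment variable names an implant like this could have collected from a given workload:

```python
import os
import re

# Heuristic: secret-bearing variables usually carry one of these
# substrings in their names. Extend the pattern for your environment.
SECRET_NAME = re.compile(r"(KEY|TOKEN|SECRET|PASSW|CREDENTIAL)", re.I)

def exposed_secret_names(env=None):
    """Return the *names* (never the values) of environment variables
    an environment-stealing implant would be able to harvest."""
    env = os.environ if env is None else env
    return sorted(k for k in env if SECRET_NAME.search(k))

# Example against a synthetic environment:
demo = {"AWS_SECRET_ACCESS_KEY": "x", "OPENAI_API_KEY": "x", "HOME": "/root"}
print(exposed_secret_names(demo))  # -> ['AWS_SECRET_ACCESS_KEY', 'OPENAI_API_KEY']
```

Running this inside a production pod gives a quick, read-only measure of the blast radius a single compromised dependency would have had there.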
LiteLLM confirmed the incident in GitHub issue #24512, noting that versions 1.82.7 and 1.82.8 should be avoided and advising all users to rotate credentials immediately.
Is LiteLLM an Isolated Incident?
LiteLLM is not an isolated case. It reflects a broader, accelerating shift in how attackers are targeting the AI development ecosystem. According to VicOne's 2026 Automotive Cybersecurity Report Crossroads, 610 cyber incidents were reported across the automotive sector in 2025, with 161 cases (26%) escalating into global incidents spanning multiple subsidiaries — a rate that has more than tripled compared to 2024. The following campaigns show how the AI supply chain attack pattern is already taking shape across packages, pipelines, and emerging AI workflows.
| Campaign | Attack Method | Target |
|---|---|---|
| Ghost Campaign | Typosquatted npm packages mimicking langchain and openai-utils | AI developer environments |
| Contagious Interview (North Korea) | 26 crafted npm packages with steganographic payloads | AI engineers during fake technical interviews |
| CanisterWorm | C2 logic hosted in blockchain-based ICP canisters; 29 npm packages | AI/ML workflow tooling |
| ToxicSkills (Snyk Research) | 36.82% of 1,467 AI agent skills on ClawHub contained security vulnerabilities | AI agent marketplaces |
| Clinejection | Prompt injection via malicious instructions in code comments | AI coding assistants |
What these campaigns share: each one exploits the implicit trust developers place in AI tooling ecosystems, whether that is a package registry, a CI/CD action, an agent marketplace, or an AI coding assistant. The attack surface is expanding faster than most security teams have adapted their controls.
As Trend Micro and Cycode researchers noted following the TeamPCP investigation: "The cascading attack orchestrated by TeamPCP underscores the vulnerabilities in developer tools and the need for enhanced security measures."
Why Do AI Supply Chain Attacks Introduce Greater Risk?
AI supply chain attacks introduce a level of risk that extends beyond traditional software dependencies. Their impact is amplified by four structural characteristics of how AI development tools are used, deployed, and trusted in modern environments.
Trust amplification
Developers and security teams often treat AI tools as trustworthy by default. Libraries such as LiteLLM or LangChain are adopted precisely because they handle complex, sensitive operations, including model routing, API key management, and cloud integrations. A compromise at this layer provides immediate access to the most sensitive parts of an organization's AI infrastructure, without requiring any additional privilege escalation.
Privileged deployment environments
AI workloads are typically deployed in cloud-native environments with generous Identity and Access Management (IAM) permissions, broad network access, and often privileged service accounts. A compromised AI package executing in a Kubernetes pod may have access to cluster secrets, cloud provider APIs, and internal services that would never be exposed to a traditional desktop application. This is not a misconfiguration. It reflects how AI workloads are designed to operate.
Update velocity and ecosystem churn
The AI tooling ecosystem evolves rapidly. Frequent updates, driven by new model releases, integrations, and performance improvements, create pressure to adopt new versions quickly. This velocity makes it harder to pin dependencies and easier for attackers to introduce and withdraw malicious versions before detection. The TeamPCP campaign exploited exactly this window.
Stealthy persistence mechanisms
Techniques such as abusing Python's .pth files enable code execution at interpreter startup with no trace in standard dependency listings: the implant survives virtual environment recreation, never appears in pip list output, and runs before any application code. Many endpoint detection solutions do not monitor .pth files at all, leaving this vector largely invisible to standard security tooling.
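Because standard tooling rarely inspects .pth files, a small scanner closes part of the gap. The sketch below flags the lines the `site` module would exec() at startup; which directories to scan, and how to triage findings, is left to the caller:

```python
import os

def scan_pth_files(site_dirs):
    """Flag .pth lines that execute code at interpreter startup.

    Pass the directories to inspect (e.g. site.getsitepackages()).
    Returns (path, line_number, line) tuples for executable lines.
    """
    findings = []
    for d in site_dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            if not name.endswith(".pth"):
                continue
            path = os.path.join(d, name)
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    # The `site` module exec()s lines starting with
                    # "import " or "import\t"; anything else is a path.
                    if line.startswith(("import ", "import\t")):
                        findings.append((path, lineno, line.strip()))
    return findings
```

Note that legitimate packages (editable installs, some import hooks) also ship executable .pth lines, so the output is a triage list rather than a verdict; the useful signal is a new executable line appearing after a dependency update.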
Key insight: The combination of high deployment density, privileged cloud access, and implicit developer trust makes AI tooling an ideal vector for large-scale credential theft and persistent access, with a blast radius that far exceeds a typical software vulnerability.
What Makes Automotive OEMs Specifically Vulnerable?
For automotive OEMs and Tier 1 suppliers, AI supply chain attacks are not a general software security problem that happens to affect them at the margins. They represent a direct threat to the development pipelines, supplier ecosystems, and cloud services that support software-defined vehicles (SDVs).
VicOne's research identifies three automotive-specific amplifiers that make this threat category particularly consequential for the sector.
The supplier tier vulnerability gap
Automotive supply chains are deep and interdependent, and tooling maturity differs sharply across tiers: OEMs and Tier 1 suppliers report robotics and automation adoption rates of 51 to 62%, while Tier 2 and Tier 3 suppliers report only 23 to 31%, according to Perforce research. That maturity gap correlates with weaker security controls at the lower tiers. Historical incident data underscores the exposure: suppliers absorbed 67.3% of cyber incidents in the automotive sector in early 2022, according to Automotive Logistics. A compromised AI dependency at a Tier 2 supplier can therefore propagate upstream into OEM production workflows without triggering any direct alert at the OEM level.
AI agent marketplace risks in Tier 1 environments
Tier 1 suppliers increasingly use agentic AI frameworks to automate software development, testing, and integration workflows. These environments introduce a new class of risk. Only 24.4% of organizations currently have visibility into inter-agent communications, and over half of deployed agents run without logging, according to research from Agat Software. With organizations averaging 37 deployed agents and 45.6% of teams using shared API keys across agents, the attack surface within a single Tier 1 supplier environment can be substantial. An incident involving such an unmonitored "shadow" deployment costs an average of $670,000 more than one involving a known, monitored agent, because of delayed identification.
Regulatory exposure under ISO/SAE 21434 and UNECE WP.29
Automotive OEMs operating under ISO/SAE 21434 and UNECE WP.29 UN Regulation 155 are required to maintain cybersecurity management systems (CSMS) that cover the full vehicle lifecycle, including development toolchains and supplier relationships. An AI supply chain compromise that results in credential theft or unauthorized access to development environments is not just a security incident. It is a potential compliance event requiring documented response, root cause analysis, and supplier accountability under these frameworks.
The practical implication for VicOne's customers: AI supply chain risk must be integrated into existing CSMS governance structures, not treated as a separate software security problem. The development environment is part of the vehicle security perimeter.
What Should Security Teams Do Now?
Addressing AI supply chain risk requires treating AI tools with the same level of rigor as core infrastructure dependencies. VicOne recommends the following controls for automotive OEMs and Tier 1 suppliers managing AI tooling in development environments.
Enforce strict dependency controls. Pin exact versions (for example, litellm==1.82.6 rather than litellm>=1.0) in production environments, use lockfiles, and verify package integrity through hashes. Floating dependencies increase the risk of unintentionally pulling in malicious updates during routine installs.
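The integrity-check half of this control can be sketched in a few lines. The function below mirrors what `pip install --require-hashes` enforces against a lockfile; the pinned digest is assumed to come from your own lockfile or a trusted registry record:

```python
import hashlib

def sha256_of(path):
    """Stream a package artifact and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_pin(path, pinned_sha256):
    """Refuse any artifact whose digest does not match the lockfile pin,
    mirroring pip's hash-checking mode. Returns True only on a match."""
    return sha256_of(path) == pinned_sha256
```

In practice you would not hand-roll this: `pip-compile --generate-hashes` plus `pip install --require-hashes` gives the same guarantee, and a poisoned re-upload of an already-pinned version fails the install instead of executing.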
Audit CI/CD pipelines for mutable references. Avoid relying on version tags in external GitHub Actions. Use commit SHAs instead (for example, uses: action@abc1234). Restrict which secrets are exposed to which workflows, and scope PyPI tokens to specific packages with the minimum required permissions.
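A hardened workflow step might look like the following fragment. The commit SHA and version tag shown are placeholders, not actual trivy-action releases; resolve and review the real commit yourself before pinning.

```yaml
# Hypothetical hardened CI step -- SHA below is illustrative only.
permissions:
  contents: read          # default-deny; grant each job only what it needs
steps:
  - name: Run Trivy scan
    # Good: immutable commit SHA that cannot be silently repointed
    uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567
    # Bad: mutable tag an attacker with repo access can move
    # uses: aquasecurity/trivy-action@v1   # (illustrative tag)
```

This is exactly the control that would have blunted Stage 1 of the TeamPCP chain: a repointed tag changes nothing when every consumer pins a reviewed SHA.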
Monitor PyPI and npm for anomalous package behaviors. Tools such as Socket.dev, Phylum, and Snyk can detect unusual characteristics in newly published package versions, including unexpected network calls, file modifications, or changes in execution behavior. Early detection is critical in ecosystems with high update velocity.
Treat AI agent skill marketplaces as untrusted inputs. If your organization uses agentic AI frameworks such as AutoGPT, CrewAI, or Cline, apply the same vetting standards to agent capabilities as you would to third-party libraries. Audit inter-agent communication paths and enforce logging for all deployed agents.
Establish visibility into AI agent deployments. Given that over half of agents currently run without logging, establishing a baseline inventory of deployed agents, their permissions, and their external communication patterns is a foundational control, not an advanced one.
Rotate credentials after any AI supply chain incident. Assume that any credentials accessible within the affected environment may be compromised. Rotate API keys, service tokens, and cloud credentials promptly, not only those directly associated with the impacted package.
Extend supplier security requirements to AI tooling. Under ISO/SAE 21434, OEMs are responsible for cybersecurity requirements across their supply chains. AI tooling used by Tier 1 and Tier 2 suppliers should be in scope for supplier security assessments, including dependency management practices and incident response obligations.
Securing the AI Supply Chain: The Bottom Line
The LiteLLM incident is a marker, not an outlier. Attackers have identified the AI development toolchain as a high-value target and are investing in multi-stage, technically sophisticated campaigns to exploit it. With 610 automotive cyber incidents recorded in 2025 — 26% of which escalated into global, multi-subsidiary events at a rate more than triple that of 2024 — the sector is already operating in an elevated threat environment, as documented in VicOne's 2026 Automotive Cybersecurity Report Crossroads. Adding an under-governed AI tooling layer into that environment without corresponding security controls is a material risk decision, not just a technical one.
For automotive OEMs, the risk extends across development environments, supplier ecosystems, and cloud services supporting software-defined vehicles. A compromised dependency within this chain can introduce vulnerabilities that propagate beyond internal systems and into production workflows, with potential implications for vehicle safety, regulatory compliance, and brand integrity.
VicOne's position is straightforward: as AI becomes infrastructure, supply chain security must be treated as a first-class concern, designed into systems from the outset and continuously enforced, rather than addressed reactively after a breach. Organizations that govern their AI dependencies with the same rigor applied to their core software supply chain will be materially better positioned as this threat continues to evolve.
Key takeaway: AI supply chain attacks are not a future risk category. They are an active, documented threat with confirmed victims across cloud environments in 2026. Automotive OEMs and Tier 1 suppliers that integrate AI tooling governance into their existing CSMS frameworks, supplier requirements, and incident response processes now will reduce both their exposure and their compliance risk under ISO/SAE 21434 and UNECE WP.29.
About the Author
Ziv Chang is a cybersecurity strategist and Sr. Director of CyberThreat Research Lab & LAB R7 at VicOne, an automotive cybersecurity company serving global OEMs and Tier 1 suppliers. His research examines how vulnerabilities in connected vehicle systems propagate across sensors, models, and actuators — translating digital risk into real-world safety consequences. Ziv tracks how automotive platforms, as one of the first large-scale deployments of AI-integrated connected systems, are defining the industry's approach to cyber-physical security. He is a frequent speaker at iThome conferences and HITCON, where he shares threat intelligence on emerging attack vectors shaping the future of vehicle cybersecurity.