
GenAI brings more than new features as it integrates with vehicle systems. What appears to be merely an innovative upgrade is actually an embedding of a system that learns, evolves, and operates autonomously. It introduces a new kind of supply chain risk: an adaptive, dynamic entity that remains inside a vehicle throughout its lifecycle.
Driven by the demand to innovate, most automotive manufacturers (OEMs) depend on third-party partners to train and supply these AI models. But this overlooks the most critical shift. GenAI models do more than perform tasks; they take action, learn from input, and adapt over time. This makes them a living supplier risk, rather than a static software part.
Before deploying these AI models, we need to ask and evaluate: "Is it safe, secure, and accountable?"
Why living risks break current security models
Unlike traditional software components, the behavior of GenAI models depends on the data they see, the prompts they receive, and the continuous evolution of their learning process. This makes it impossible to fully test, audit, or lock down using the methods most OEMs are familiar with.
The challenge runs deeper. Most GenAI models are trained or fine-tuned by external partners using data that OEMs have never seen, with processes and tools beyond the OEM’s control. As a result, risk exposure is spread across every phase of the AI model lifecycle, including pre-training data, model sourcing, fine-tuning, deployment, and updates. In many cases, none of these steps is fully governed by the OEM.
- The identity of who built the model is undisclosed.
- The source of training data cannot be verified.
- There is no visibility whether sensitive information was exposed during fine-tuning.
- Most importantly, the model continues to learn and evolve, changing its behavior over time (this occurs only in AI systems with memory mechanisms).
- This is not a hypothetical concern. It is a real and growing blind spot in the supply chain - one that behaves like a black box.
Figure 1: AI = A New Supply Chain Risks
Four GenAI model security risks
Based on our analysis of AI adoption in the automotive industry, we’ve identified four high-impact risks that are often overlooked when integrating GenAI models into vehicle systems:
1. Model sourcing blind spots
Popularity does not equate safety and security. The most widely used open-source models are also the most attractive targets for attackers. According to the Enkrypt AI Safety Leaderboard, the top ten most downloaded open-source models all exceed the security risk threshold established by NIST. These risks are not theoretical—several real-world attacks have already been documented. Notably, Google recently patched vulnerabilities in Vertex AI that could have allowed attackers to extract or poison enterprise-trained models.
In automotive environments, attackers can exploit these vulnerabilities to access driving behavior data, route history, voice logs, or even penetrate the vehicle’s internal systems and disable functionality.
Figure 2: According to the Enkrypt AI Safety Leaderboard, all of the top ten most downloaded open-source models carry some degree of cybersecurity risk.
2. Training data poisoned risk
Fine-tuning often involves sensitive internal data such as customer service FAQs, maintenance records, and driving logs. Some organizations also use publicly available datasets from AI platforms.
However, this practice comes with significant risks. In 2024, researchers discovered 100 malicious code-execution models on the Hugging Face platform. If used, these models could allow attackers to execute code inside the vehicle’s systems, exposing confidential data or enabling deeper breaches. This incident highlights how public AI models, when not properly vetted, can become serious supply chain threats.
Furthermore, if the data used for fine-tuning is not properly sanitized—or if it has already been poisoned—it becomes part of the model’s long-term memory, potentially exposing confidential information or creating persistent backdoors.
This risk is particularly high when using techniques like Chain-of-Thought (CoT), which can unintentionally expose internal workflows or API logic. A few hundred malicious samples can influence behavior, even when mixed into tens of thousands of training examples.
Figure 3: Poisoned models can be manipulated by attackers to trigger unintended or harmful behavior.
3. Model adoption and proxy governance gaps
Whereas traditional software components typically follow established version control and release governance, AI models and their associated context-protocol proxies may sometimes lack governance practices. In effect, it can hinder visibility into the specific model versions deployed across production environments, complicating traceability and risk management.
In AI deployments, Model Context Protocol (MCP) proxies (for example, the widely used mcp-remote tool) sit between the application and the model, handling authentication, request formatting, and network communication. Without an AI-specific Software Bill of Materials (AI-SBOM) or equivalent tracking, DevOps teams risk unknowingly deploying outdated or untrusted proxy versions during continuous delivery.
This gap opens fresh attack vectors. An example is CVE-2025-6514, a critical remote code execution vulnerability in the widely used mcp-remote tool (versions 0.0.5–0.1.15). The vulnerability allows remote code execution if the tool connects to a malicious MCP server, granting attackers full control over the host.
Such oversights—adopting model functionality beyond its intended scope without locking down version provenance—underscore the urgent need for strict governance, SBOM practices, and signed model artifacts in AI deployments to prevent model-level threats.
4. Behavioral manipulation risks
When an AI model or its hidden “system prompt” leaks, attackers can reverse-engineer the exact sequence of guardrail tokens—the model’s “Passphrases”—and craft adversarial tokens (suffixes) to bypass its safety checks. In one high-profile example, the Imprompter attack demonstrated how seemingly random gibberish can hide malicious instructions that exfiltrate personal data: security researchers transformed a PII-harvesting prompt into an obfuscated suffix of random characters, then fed it to open-source chatbots (LeChat, ChatGLM), achieving nearly an 80 % success rate in stealing names, ID numbers, payment details, and more.
If such techniques were applied against voice assistants or in-vehicle infotainment (IVI) systems, attackers could issue spurious navigation commands that may mislead drivers, silently record or leak private voice interactions, or trigger unauthorized actions—any of which could compromise safety and privacy.
Figure 4: This diagram illustrates how attackers bypass the safety checks to perform unauthorized action
ECUs are rigorously evaluated. So should your AI model.
This is not about avoiding AI. It’s about accommodating AI and treating it the same way as any supplier—with proper governance and risk controls. If a GenAI model is making decisions inside a vehicle, it deserves the same level of scrutiny as any hardware or ECU supplier. OEMs should begin implementing the following governance practices:
- Build an SBOM for AI: Document the model’s origin, training history, risk classification, and vulnerability scan results.
- Adopt robust security testing: Require red-team penetration testing and risk review before deployment to detect abnormal behaviors, including prompt injection testing and adversarial robustness assessments.
- Integrate models into cybersecurity governance: Treat them as part of an overall cybersecurity risk management, with visibility through a single pane of glass.
GenAI is no longer just a tool. It is an invisible, living, and evolving entity deeply embedded at the AI-enabled, software-defined vehicles’ core. While it’s a smart and innovative component, consistent control and monitoring must be exercised to ensure safety and security against foreboding risks.
Learn more insights on AI in the Automotive Industry: Redefining the Cybersecurity Framework by Max Cheng.