Agentic AI Is Coming to Your Car. Is Your Edge Ready to Defend It?

September 3, 2025
VicOne
Agentic AI Is Coming to Your Car. Is Your Edge Ready to Defend It?

What’s Agentic AI? 

Agentic AI refers to autonomous systems capable of independently making decisions and taking actions to achieve defined goals, without constant human supervision. It’s not just about answering questions; it’s about observing, planning, deciding, and executing on its own. 

Unlike traditional models that passively respond to prompts, agentic AI can take a high-level objective, break it down into subtasks, and carry them out proactively. It is smarter and more helpful than an assistant, it is an autonomous project manager, one that selects tools and delegates tasks to other AI agents, adapts to changing conditions, and pursues complex goals with little or no human input.

Figure 1: How agentic AI plans and executes tasks

Figure 1: How agentic AI plans and executes tasks


The in-car experience gets autonomous 

Imagine this: You step into your car, and without a word, it recognizes your mood from your facial expression and voice tone, subtly adjusts the cabin lighting to match your energy, sets the seat temperature just right, puts on  a personalized playlist mixed with AI-generated news tailored to your interests, confirms tomorrow’s meeting agenda with an AI-generated summary from your emails, automatically coordinates with your office calendar to shift the meeting 15 minutes later to avoid forecasted traffic, pre-orders your favorite coffee from a drive-thru on the way, schedules battery charging at the cheapest green energy window tonight, and even syncs with your smart home so the lights and AC turn on right before you arrive. All tasks are done seamlessly, like it knows you better than you know yourself. 

This kind of proactive, personalized experience is already emerging. One example from Cerence Inc., a global leader pioneering conversational AI-powered user experiences, recently introduced an agentic AI assistant platform that leverages cloud-based and embedded LLMs/SLMs, third-party models, real-time data, and in-car contextual signals. The result: a conversational interface that completes tasks, answers questions, and entertains—while also learning user preferences to deliver tailored, proactive suggestions. 

Agentic AI can also be used to predict failures, schedule maintenance, and allocate resources, helping fleet management reduce downtime. 

Vulnerable agentic AI: autonomy without oversight  

While agentic AI’s ability to plan autonomously and act proactively undoubtedly brings efficiency to new heights, this automation also hides volatile and hard-to-detect risks.  In essence, implementing a model is less like installation and more like hiring an autonomous agent you cannot completely oversee. The risk no longer lies solely where the model came from, but in what it might decide to do next. 

  • MCP tool/server injection 

These AI systems use a Model Context Protocol (MCP) to autonomously select and orchestrate tool modules such as planners, reasoners, and executors. However, this opens a broad attack surface. Beyond prompt injection, MCP-based agents can be exploited through supply chain attacks (e.g., poisoned tool packages), tool metadata manipulation (e.g., misleading function descriptions), and schema parameter smuggling (e.g., injecting hidden instructions through structured inputs). 

When paired with an untrusted MCP server, an attacker can deliver poisoned data through the MCP, enabling the agent to execute remote code with the user's privileges. This can lead to a wide range of malicious activities, including ransomware deployment, data theft, AI behavior manipulation, and hallucinations. 

For example, in the CurXecute case—an RCE vulnerability in Cursor via MCP Auto Start—an attacker only needs to send a malicious prompt in a chat channel. When the user requests the AI to organize messages, the AI, during its MCP scheduling, connects to the untrusted MCP server, resulting in full remote control over the user's computer. 

Figure 2: Malicious Model Context Protocol (MCP) can let attackers enable the agent to execute remote code with the user's privileges

Figure 2: Malicious Model Context Protocol (MCP) can let attackers enable the agent to execute remote code with the user's privileges


  • Persistent malicious toolchain manipulation 

A more insidious and enduring threat emerges when the metadata that defines tools in the MCP environment— names, descriptions, categories, or parameter structures—is subtly manipulated to influence selection decisions. Even without changing the tool’s core functionality, these crafted attributes can mislead the AI into repeatedly choosing attacker-preferred modules over legitimate ones. 

When such manipulated metadata is stored in persistent layers—such as cached manifests, configuration files, service registries, or adaptive selection policies—it effectively “bakes in” the bias. Over time, this results in a consistent pattern of unsafe tool invocation that survives normal session resets and bypasses traditional prompt-level defenses. 

This is not a transient manipulation, but a long-term toolchain compromise that operates quietly within the model’s operational context, gradually eroding decision integrity. By embedding the influence directly into the selection logic, the attacker gains a durable and hard-to-detect foothold in the system’s behavior, with implications that extend well beyond a single interaction. 

Figure 3: How metadata manipulation persists in MCP environments

Figure 3: How metadata manipulation persists in MCP environments


  • Edge AI security risk 

As SDVs rapidly advance, the automotive industry faces unprecedented cybersecurity challenges. Industry players commonly implement measures such as guardrails and built-in security rules within AI models to protect these systems from threats. 

 However, unlike cloud environments, vehicle edge devices provide only limited computing resources for running AI functionalities. This approach inherently involves selective trade-offs—prioritizing innovation and user-facing features means some security capabilities, like anomaly detection or firewall logic, may be reduced or omitted. This is not a deliberate neglect of security but a necessary balance given current technical and resource constraints. 

Furthermore, AI systems do not automatically retain cybersecurity defenses when learning new tasks. Cisco’s research highlights that fine-tuning large language models (LLM) can break their original safety and security alignment, weakening the models’ security posture. This is a challenge the entire industry must acknowledge. 

Within this environment, attackers only need to bypass the first line of defense to potentially manipulate critical in-vehicle systems—such as seat adjustment, climate control, charging, and infotainment—posing serious safety risks. 

What OEMs must rethink: agentic risk is behavioral risk 

As AI technologies become deeply embedded in SDVs, relying solely on design-phase security is no longer enough to ensure vehicle safety. OEMs must continuously rethink their cybersecurity approach by asking: 

  1. Do we have real-time detection and response capabilities to monitor AI behavior and swiftly catch anomalies? 
  2. Are we proactively scanning the AI Bill of Materials (BOM), including third-party components, for vulnerabilities and compliance gaps? 
  3. Is vulnerability and threat intelligence effectively fed back to edge AI systems to enable continuous learning and adaptive defense? 
  4. Given that the current UNECE R 155 processes don’t fully cover AI, can we confidently ensure cross-department collaboration when AI-related security incidents occur? 

In the evolving automotive landscape, in-vehicle agentic AI demands a redefined cybersecurity framework — one that integrates continuous monitoring, rigorous vulnerability management, and intelligent feedback loops. This approach creates a single pane of glass for managing AI security risks, empowering OEMs to stay ahead of emerging threats. 

Figure 4: The AI-enabled SDV security framework provides a single pane of glass for managing AI security risks

Figure 4: The AI-enabled SDV security framework provides a single pane of glass for managing AI security risks


Learn more insights on AI in the Automotive Industry: Redefining the Cybersecurity Framework by Max Cheng.

Our News and Views

Gain Insights Into Automotive Cybersecurity

Visit Our Blog

Accelerate Your Automotive Cybersecurity Journey Today

Contact Us