From On-board AI to Physical AI: Why Automotive Cyber Risk Has Entered a New Era

January 30, 2026
Ziv Chang

Modern vehicles are no longer defined by mechanical reliability or even traditional software quality. Increasingly, they are defined by how artificial intelligence makes decisions under uncertainty and how those decisions directly translate into physical action. 

This is not a theoretical concern. As AI moves deeper into perception, planning, and control, the industry is seeing a new class of risk emerge: vehicles can behave unsafely without any conventional cyberattack, software bug, or component failure. 

For OEMs and Tier-1 suppliers, the implication is clear: automotive cyber risk is no longer confined to networks and ECUs - it now lives inside AI reasoning itself. 


Why Traditional Automotive Standards Are No Longer Enough 

Automotive on-board systems run all embedded computing, sensing, networking, control, and security functions inside the vehicle—under extreme constraints such as sub-10 ms latency, decade-long lifecycles, and strict compliance with ISO 26262 (functional safety) and ISO/SAE 21434 (cybersecurity). These standards assume deterministic behavior and clearly bounded components, making risk predictable and containable. 

AI is being retrofitted into vehicle architectures never designed for non-determinism, allowing perception errors or model drift to propagate across domains and trigger system-level failures. While zonal architectures reduce hardware complexity, they simultaneously increase software complexity and blur trust boundaries between AI decisions and physical actuation—fundamentally reshaping automotive risk (McKinsey, 2023; Promwad, 2025; Molex, 2025). 


Where AI Becomes Safety-Critical 

On-board AI refers to machine learning models, deep neural networks, and agentic systems that run locally on vehicle compute platforms without continuous cloud dependence. Unlike traditional automotive software, these systems produce probabilistic outputs, rely on data-driven learning rather than fixed rules, and continue to evolve throughout the vehicle’s life via OTA updates. These characteristics fundamentally change both safety and cybersecurity risk, introducing challenges such as model drift, data poisoning, and behaviors that cannot be exhaustively tested before deployment. 

AI in vehicles operates across a layered stack, from perception to physical control, with risk increasing the closer AI gets to actuation. Perception and understanding layers interpret sensor data and build a world model, decision-making selects vehicle behavior, and physical control executes commands through steering, braking, and acceleration. Errors at higher layers may degrade awareness, but failures at lower layers can immediately result in physical harm, making AI-driven control inherently safety-critical. 

Today, on-board AI is deployed across ADAS, autonomous driving, driver and occupant monitoring, intelligent cockpits, predictive maintenance, and in-vehicle cybersecurity. As these AI capabilities expand and models are increasingly shared across domains, risk becomes systemic rather than isolated. The key takeaway is clear: when AI influences physical actuation, it must be treated as a safety-critical element across design, validation, governance, and threat modeling, with system-level visibility and lifecycle management—not just traditional software assurance. 


Sense → Plan → Act: Rethinking Threat Modeling for AI Vehicles 

AI-driven vehicles operate as closed-loop Physical AI systems, where perception, reasoning, and control are continuously linked to real-world actuation. The Sense → Plan → Act (SPA) model—long used in robotics and autonomous systems—provides a clear and practical framework for understanding how safety and security risks emerge in these vehicles.  

The critical insight is this: physical harm does not require direct compromise. Manipulating how a vehicle perceives its environment or plans its actions is sufficient. Errors or bias introduced at the Sense or Plan stages can cascade through the system, causing the vehicle to execute unsafe maneuvers with complete confidence.  

        
Stage | Function | AI Role | Automotive Examples | Key Threats (2025+)
Sense | Perceive environment | Perception networks, fusion | Cameras, radar, LiDAR, V2X | Adversarial inputs, spoofing
Plan | Reason & decide | Path / behavior planning | E2E models, reasoning VLA | Goal misalignment, emergent behavior
Act | Execute commands | Drive-by-wire | Steering, braking | Trust violations from upstream AI

Table 1. Stages of the SPA model.

In Physical AI systems, bad decisions, not broken hardware, are often the most dangerous failure mode. 
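To make this concrete, here is a minimal, illustrative sketch of a Sense → Plan → Act loop in Python. The function names, data fields, and confidence values are assumptions invented for illustration, not any production autonomy stack; the point is that a confident perception error flows straight into actuation when no independent check sits between the planner and the drive-by-wire layer.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    obstacle_ahead: bool   # what the perception stack believes
    confidence: float      # self-reported confidence, not ground truth

def sense(frame: dict) -> WorldModel:
    # Perception can be wrong with high confidence (adversarial input, glare,
    # an out-of-distribution scene); the planner never sees the ground truth.
    return WorldModel(obstacle_ahead=frame["detected_obstacle"],
                      confidence=frame["detector_confidence"])

def plan(world: WorldModel) -> str:
    # Planning trusts the world model: a missed obstacle yields "keep lane".
    return "brake" if world.obstacle_ahead else "keep_lane_at_speed"

def act(command: str) -> None:
    # Drive-by-wire executes exactly what it is told.
    print(f"actuators executing: {command}")

# A pedestrian is present, but the detector misses it and reports high confidence.
camera_frame = {"detected_obstacle": False, "detector_confidence": 0.97}
act(plan(sense(camera_frame)))   # "keep_lane_at_speed": harm without any intrusion
```

A hardware-level inspection of this loop would find nothing broken; the unsafe outcome is produced entirely by a wrong but confident decision.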

Figure 1. How threats propagate through the entire decision loop, as illustrated in the SPA model.


Classical automotive safety and cybersecurity standards—such as ISO 26262, ISO/SAE 21434, and UN R155—remain foundational, but they are no longer sufficient when AI becomes integral to vehicle behavior. These frameworks were built for deterministic, rule-based systems with predictable failure modes. 

AI fundamentally changes that risk model. Probabilistic, data-driven systems introduce new failure classes—including adversarial manipulation, hallucinations, goal misalignment, and emergent behavior—that cannot be fully addressed through traditional safety or cybersecurity controls alone. As a result, existing standards must be supplemented with AI-specific risk frameworks that account for model integrity, data poisoning, explainability, and safety degradation over time. Without this extension, compliance may be achieved—while systemic AI risk remains unmanaged.


Real-World Incidents Confirm the Pattern 

 Tesla Autopilot & FSD Incident  

Tesla’s Autopilot and Full Self-Driving (FSD) Supervised systems—both Level 2—illustrate how AI-driven risk emerges even without system compromise. While Tesla reports favorable aggregate safety metrics, numerous documented incidents reveal persistent vulnerabilities in edge cases, particularly under reduced visibility conditions such as glare, fog, dust, or low light. These scenarios expose limitations in perception, sensor fusion, and the ability of AI models to generalize beyond training data. These concerns prompted a formal investigation by the National Highway Traffic Safety Administration into millions of Tesla vehicles equipped with FSD, triggered by its behavior in low-visibility environments and the adequacy of fallback controls. The investigation remains active as of early 2026. 

Figure 2. Mapping the Tesla Autopilot and FSD incident onto the SPA model.


Key Lesson 

These risks reinforce the need for stronger mitigation across the entire loop: robust scenario validation, uncertainty-aware decision-making, real-time anomaly detection, explicit handling of low-confidence situations, and stronger driver monitoring to prevent AI misjudgment from becoming physical harm. 
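As a hedged sketch of what explicit low-confidence handling can look like in a Level 2 context, the example below gates assisted driving on perception confidence and an out-of-distribution score, requesting driver takeover, or a minimal-risk maneuver if the driver is inattentive, rather than acting on a world model it cannot trust. The thresholds, signal names, and scores are assumptions for illustration only.

```python
def degraded_visibility(mean_confidence: float, ood_score: float) -> bool:
    CONFIDENCE_FLOOR = 0.6   # below this, perception output is not trusted
    OOD_CEILING = 0.8        # above this, the scene is unlike the training data
    return mean_confidence < CONFIDENCE_FLOOR or ood_score > OOD_CEILING

def supervise(mean_confidence: float, ood_score: float, driver_attentive: bool) -> str:
    if degraded_visibility(mean_confidence, ood_score):
        # Explicit low-confidence handling: hand control back to the driver,
        # or reach a minimal-risk condition if the driver does not respond.
        return "request_takeover" if driver_attentive else "minimal_risk_maneuver"
    return "continue_assisted_driving"

# Glare or fog drives confidence down and the out-of-distribution score up.
print(supervise(mean_confidence=0.42, ood_score=0.9, driver_attentive=True))   # request_takeover
print(supervise(mean_confidence=0.42, ood_score=0.9, driver_attentive=False))  # minimal_risk_maneuver
```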

 

GM Cruise Robotaxi Pedestrian Incident  

On October 2, 2023, a fully driverless (Level 4) robotaxi operated by Cruise severely injured a pedestrian in San Francisco, exposing critical weaknesses in AI perception, decision-making, and incident governance. After the pedestrian was initially struck by a human-driven vehicle and thrown into the robotaxi’s path, the Cruise vehicle made contact at low speed. Instead of performing an immediate emergency stop, it executed a standard pull-over maneuver, dragging the pedestrian approximately 20 feet before stopping. The incident triggered immediate operational shutdown, sustained regulatory action, and lasting governance consequences for Cruise. California regulators suspended Cruise’s autonomous deployment permit, forcing a nationwide pause of all robotaxi operations. Subsequent investigations resulted in financial penalties, including fines from the California Public Utilities Commission and a civil settlement with the National Highway Traffic Safety Administration, which mandated corrective actions, enhanced reporting, and ongoing regulatory oversight. 

Figure 3. Mapping the GM Cruise robotaxi pedestrian incident onto the SPA model.


Key Lesson

The incident was not a single-point failure, but a cascading breakdown across Sense → Plan → Act. A perception error propagated into unsafe planning, which was then executed precisely by the control system. This illustrates a defining Physical AI risk: when upstream AI decisions are wrong, downstream systems can cause real-world harm without any mechanical fault or cyber intrusion.


Key Implications 

Existing automotive standards must evolve to address AI explicitly. Frameworks such as ISO 26262, ISO/SAE 21434, and UN R155 remain essential, but they were designed for deterministic systems. As AI becomes integral to vehicle behavior, these standards must be extended to cover AI model lifecycle management, including secure training, OTA model integrity, runtime monitoring, and post-deployment validation. 

Vehicles must be treated as Physical AI systems. Risk models can no longer assume predictable, rule-based logic. AI-driven vehicles exhibit probabilistic behavior, non-determinism, and emergent outcomes that directly influence physical actuation. Managing risk now requires explicit recognition of AI uncertainty, degradation over time, and behavior outside nominal scenarios. 

Governance must span the entire AI lifecycle. Effective oversight must extend from data collection and model training through validation, deployment, field monitoring, and retraining. Isolated component testing is insufficient; system-level, end-to-end validation—covering edge cases and real-world distribution shifts—is now mandatory. 

Safety-critical AI requires new assurance approaches. Traditional testing must be complemented by AI-specific assurance mechanisms, including safety cases, explainability, traceability, and continuous risk assessment. Frameworks such as UL 4600, NIST AI RMF, and ISO/IEC TR 5469:2024 are becoming essential to demonstrate that AI-driven systems remain acceptably safe in operation. 

Shared AI significantly amplifies systemic risk. As perception and planning models are reused across ADAS, autonomous driving, driver monitoring, and cockpit systems, a single weakness—such as adversarial input, data poisoning, or model drift—can cascade across multiple safety-critical domains. AI risk is no longer localized; it is inherently system-wide and fleet-wide. 


Recommendations 

For OEMs and Tier-1 Suppliers 

  • Adopt an AI-native threat model. Use Sense → Plan → Act as the primary end-to-end framework for analyzing and managing risk across all AI-enabled vehicle systems. 
  • Monitor AI integrity at runtime. Implement continuous monitoring for uncertainty, confidence, anomalies, and out-of-distribution behavior to detect degradation before it becomes unsafe. 
  • Enforce hard safety boundaries. Define explicit trust boundaries between AI outputs and physical actuators, including safety monitors, redundant controllers, and veto mechanisms (a minimal sketch follows this list). 
  • Constrain AI behavior by design. Establish safety envelopes that limit AI actions regardless of internal confidence, ensuring fail-safe behavior under uncertainty. 
  • Enable explainability and forensic readiness. Require explainable decisions and comprehensive logging for all safety-critical AI to support incident analysis, regulatory reporting, and accountability. 
  • Extend CSMS to include AI governance. Integrate model lifecycle controls—versioning, drift detection, adversarial robustness testing, and secure OTA pipelines—into cybersecurity management systems. 
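The sketch below illustrates the safety-boundary and safety-envelope recommendations above: an independent, deterministic gate that clamps or vetoes planner commands regardless of the model's self-reported confidence, and logs every intervention for forensic readiness. The signal names, limits, and veto rule are hypothetical values chosen for illustration, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class PlannerCommand:
    steering_deg: float
    accel_mps2: float
    confidence: float   # the planner's self-assessment; never trusted on its own

MAX_STEERING_DEG = 15.0   # illustrative envelope limits
MAX_ACCEL_MPS2 = 3.0
MAX_DECEL_MPS2 = -6.0

def safety_gate(cmd: PlannerCommand, obstacle_distance_m: float, log: list) -> PlannerCommand:
    # Veto: a deterministic rule overrides the AI when a collision is imminent.
    if obstacle_distance_m < 5.0 and cmd.accel_mps2 > MAX_DECEL_MPS2:
        log.append(f"VETO: obstacle at {obstacle_distance_m} m, forcing emergency braking")
        return PlannerCommand(steering_deg=0.0, accel_mps2=MAX_DECEL_MPS2, confidence=1.0)
    # Envelope: clamp commands to physical limits regardless of model confidence.
    clamped = PlannerCommand(
        steering_deg=max(-MAX_STEERING_DEG, min(MAX_STEERING_DEG, cmd.steering_deg)),
        accel_mps2=max(MAX_DECEL_MPS2, min(MAX_ACCEL_MPS2, cmd.accel_mps2)),
        confidence=cmd.confidence,
    )
    if clamped != cmd:
        log.append(f"CLAMP: {cmd} -> {clamped}")
    return clamped

audit_log: list = []
raw = PlannerCommand(steering_deg=40.0, accel_mps2=4.5, confidence=0.99)
safe = safety_gate(raw, obstacle_distance_m=30.0, log=audit_log)
print(safe)        # clamped to the envelope despite 0.99 confidence
print(audit_log)   # every intervention is recorded for later analysis
```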

 

For Regulators and Standards Bodies 

  • Accelerate hybrid AI–automotive standards. Integrate AI risk frameworks with existing automotive standards to address probabilistic behavior, model vulnerability, and emergent risk. 
  • Require AI-specific safety assurance. Mandate safety cases and probabilistic risk evidence for higher automation levels, especially where AI directly influences physical control. 
  • Strengthen incident transparency requirements. Define clear timelines and data disclosure expectations following safety-critical AI events. 

 

For Cross-Industry Collaboration 

  • Share data to test the hard cases. Develop common datasets focused on adversarial conditions, domain shifts, and long-tail edge cases relevant to safety and security. 
  • Normalize AI red-teaming. Expand third-party testing and red-team exercises targeting physical-world attacks on perception and planning systems (a toy adversarial-input example follows this list). 
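For a sense of the cases such red-team exercises probe, here is a toy, entirely fabricated adversarial-input example: an FGSM-style perturbation, bounded per feature, erases a detection from a linear "detector". Real perception attacks target deep networks and physical artifacts such as adversarial patches, but the mechanism of a small, targeted input change flipping a confident output is the same.

```python
import numpy as np

# Toy, entirely fabricated "detector": score > 0 means an obstacle is reported.
w = np.array([1.0, -0.5, 0.8, -1.2])   # hypothetical feature weights
b = 0.1

def obstacle_score(x: np.ndarray) -> float:
    return float(w @ x + b)

x_clean = np.array([0.3, -0.2, 0.25, -0.15])   # clean input: score ~0.88 (detected)

# FGSM-style red-team probe: a small, bounded perturbation moved against the
# score gradient (which is simply w for a linear model) erases the detection.
epsilon = 0.4
x_adv = x_clean - epsilon * np.sign(w)

print(obstacle_score(x_clean))   # ~ 0.88  -> obstacle detected
print(obstacle_score(x_adv))     # ~ -0.52 -> detection lost under a bounded perturbation
```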

 

Conclusion 

Vehicles are no longer mechanical products enhanced by software. They are networked Physical AI systems: machines that sense, reason, and act in real time, at scale, and in open-world environments where uncertainty is the norm. This reality fundamentally changes the nature of automotive risk. Safety and cybersecurity models built on deterministic assumptions, exhaustive pre-release testing, and clear cyber–physical separation can no longer keep pace with AI-driven decision-making. 

 The path forward is not to abandon existing standards, but to evolve beyond them. AI-aware threat modeling, lifecycle-based governance, continuous validation, and runtime visibility must become core disciplines—not afterthoughts. Approaches such as Sense → Plan → Act, hybrid risk frameworks, and continuous model assurance are not about adding process overhead; they are about restoring control and predictability in systems that are inherently probabilistic. 

 The ultimate goal is not merely regulatory compliance, but the creation of trustworthy, explainable, and resilient AI-enabled vehicles that earn and maintain public confidence while preventing avoidable harm on public roads. The transition from on-board AI to safe, scalable Physical AI will define competitive advantage in the coming decade. Those who lead it will not only meet regulatory expectations—they will set the benchmark for what trustworthy mobility looks like in the AI era. 
