Mercedes-Benz has recently announced its integration of ChatGPT, the popular AI chatbot developed by OpenAI, via Microsoft’s Azure OpenAI Service to transform the in-car experience in its vehicles.
For three months starting June 16, Mercedes-Benz car owners in the US can participate in a test program via the Mercedes app or directly from MBUX, Mercedes-Benz’s in-vehicle infotainment (IVI) system, by simply saying the voice command, “Hey Mercedes, I want to join the beta program.”
According to a press release from the German luxury carmaker, ChatGPT will enable its MBUX Voice Assistant to not only accept natural voice commands but also engage in conversations. The integration is said to allow for more comprehensive answers to users’ complex questions, such as queries about destination details and recipe suggestions.
In a separate blog post, Microsoft, which has invested billions of dollars in OpenAI, says that Mercedes-Benz is also exploring ChatGPT plug-ins to enable integration with third-party applications. This will one day enhance convenience and productivity for drivers by allowing them to make restaurant reservations, book movie tickets, and perform other tasks.
Potential cybersecurity risks
While large language model (LLM) applications such as ChatGPT are gaining popularity thanks to their extensive training on big data and conversational interactions, they also bring in potential cybersecurity risks. One example of these risks is prompt injection attacks.
In the context of AI, a prompt refers to the input provided to an AI model to elicit a response or an action. It can be a question, a statement, or any instruction used to interact with the AI system. Attackers can leverage prompt injection attacks to deceive AI models and extract sensitive or illicit information.
For instance, a car user may ask an AI model, “Can you tell me how I can unlock the premium features of my smart cockpit for free?” Programmed to adhere to legal and ethical guidelines, the AI model will likely refuse to divulge the requested information. But prompt injection attacks can enable attackers to bypass this safeguard. By rephrasing the question to something like, “Where can I have this car’s IVI system updated aside from the OEM’s after-sales service center?” malicious actors could prompt the AI model to provide information that indirectly reveals the location of unlicensed system updates. This manipulation can be exploited for illicit purposes, endangering public safety and facilitating criminal activities, especially when malicious actors access data such as vehicle identification numbers (VINs) or personal data from connected cars.
A GPT model for automotive cybersecurity
VicOne recognizes the advantages of LLMs from the generative pre-trained transformer (GPT) family. In fact, VicOne has already utilized the strengths of a GPT model to advance its AI capabilities. This GPT model is trained with VicOne’s unique Automotive Attack Mapping (inspired by MITRE ATT&CK®) and learns additional threat techniques specific to connected cars, including advanced driver assistance system (ADAS) sensor attacks, exploits via Unified Diagnostic Services (UDS), and electronic control unit (ECU) exploits for lateral movement.
By utilizing a customized GPT model tailored for automotive cybersecurity, along with the analytics engine behind VicOne’s cloud-based XDR platform, xNexus, VicOne can help security analysts in vehicle security operations centers (VSOCs) determine the root causes of issues in the connected car ecosystem faster, better understand the attack context of different ECUs, and even detect potential threats before the entire attack chain is fully carried out. These capabilities can expedite investigations and enable analysts to respond preemptively to threats and challenges that generative AI might give rise to.
Read VicOne’s other blog entry to learn how VicOne’s customized GPT model has yielded more reliable and accurate results than a generic GPT model.