In our previous blog entry, "Work Smarter, Not Harder: Revving Up Automotive Cybersecurity," we showed how VicOne integrates the generative pre-trained transformer (GPT) model to rev up our automotive cybersecurity AI capabilities. By harnessing the power of our automotive cybersecurity-knowledgeable GPT model, we reduced the data analysis time of our machine learning operations (MLOps) pipeline by 60%. We can also analyze abnormal behaviors more accurately and detect harmful ones more quickly. These advantages encourage more enterprises to consider adopting GPT models to fortify their threat intelligence capabilities against the ever-evolving threat landscape. This approach can also help bridge the talent gap in the cybersecurity analyst workforce.
Testing the accuracy of GPT models
We conducted an experiment to test the accuracy of GPT models in analyzing and listing the key components used by attackers in various incidents. We started by having the GPT model analyze the Sirius XM incident. In 2022, researchers reported a flaw in the connected vehicle services of Sirius XM, which provides telematics and infotainment services to multiple brands such as Acura, Honda, Infiniti, and Nissan. The vulnerability could have allowed malicious actors to remotely start, unlock, and locate vehicles with commands requiring only the vehicle identification number (VIN), which is visible on a car's windshield.
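The core weakness here was authorization based on a publicly visible identifier. The sketch below (hypothetical function and token names; not the actual Sirius XM API) contrasts a VIN-only check with one that also requires an account-bound secret:

```python
# Hypothetical sketch of the authorization flaw: a VIN is printed on the
# windshield, so treating it as a credential lets anyone who can read it
# issue remote commands. Names and values below are illustrative only.

REGISTERED_VEHICLES = {
    "5YJ3E1EA7JF000001": {"owner_token": "s3cr3t-owner-token"},
}

def authorize_vin_only(vin):
    """Flawed check: knowing the (public) VIN alone grants access."""
    return vin in REGISTERED_VEHICLES

def authorize_with_owner_token(vin, token):
    """Stronger check: also requires a secret bound to the owner's account."""
    vehicle = REGISTERED_VEHICLES.get(vin)
    return vehicle is not None and vehicle["owner_token"] == token

# An attacker who reads the VIN off the windshield passes the weak check...
assert authorize_vin_only("5YJ3E1EA7JF000001") is True
# ...but fails once an account-bound secret is also demanded.
assert authorize_with_owner_token("5YJ3E1EA7JF000001", "guessed") is False
```

A production fix would bind remote commands to an authenticated owner session rather than any vehicle-derived identifier.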
The results are quite promising, as the GPT model covered almost all the key components that attackers could use:
Figure 1. A GPT model's analysis results for the Sirius XM incident
In the next test case, we looked into the 2022 Mazda infotainment system incident, in which some Mazda owners in Seattle were stuck with bricked in-vehicle infotainment (IVI) systems after listening to a particular radio station. After entering the news content into the GPT model, we obtained the following results:
Figure 2. A GPT model's analysis results for the Mazda Infotainment System incident
While the GPT model generated some convincing responses, some were inaccurate from the perspective of VicOne's threat experts. According to Mazda, the incident happened because the radio station transmitted image files without file extensions in its HD radio stream. As it turned out, that particular generation of Mazda's infotainment system relies on a file extension (rather than the file header) to determine the file type. Since there was no file extension, the IVI system could not identify the file, which bricked the entire system. Therefore, no phone was needed as a key, and since the issue stemmed from an external connection, it was in fact unrelated to the in-vehicle network.
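The failure mode can be illustrated in a few lines (this is a conceptual sketch, not Mazda's actual firmware logic): classifying a file by its extension yields nothing for an extensionless streamed file, whereas inspecting the header (magic bytes) still identifies it.

```python
# Conceptual sketch only -- not Mazda's firmware. Extension-based file-type
# detection fails on extensionless files delivered over an HD radio stream;
# header-based (magic-byte) detection does not.

JPEG_MAGIC = b"\xff\xd8\xff"  # leading bytes of every JPEG file

def type_from_extension(filename):
    """Fragile approach: an extensionless file yields no type at all."""
    if "." not in filename:
        return None  # unidentified -- the failure mode in this incident
    return filename.rsplit(".", 1)[1].lower()

def type_from_header(data):
    """Robust approach: classify by the file's leading magic bytes."""
    if data.startswith(JPEG_MAGIC):
        return "jpg"
    return None

streamed_image = JPEG_MAGIC + b"...rest of image data..."

# A JPEG delivered without an extension defeats extension-based detection...
assert type_from_extension("album_art") is None
# ...but header-based detection still recognizes it.
assert type_from_header(streamed_image) == "jpg"
```

This is also why robust parsers treat filenames as hints and validate content against the actual bytes before acting on it.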
Figure 3. VicOne's threat expert analysis results for the Mazda Infotainment System incident
Given the growing complexity of contemporary attack methods, understanding attack flows is equally crucial to developing a targeted and efficient mitigation plan that protects each node in the attack path. In the current case, the web application was the actual point of attack. In line with the mitigation recommendations from the UNECE WP.29, VicOne experts suggest the following three strategies to mitigate the issue:
| Threat reference | Description | Mitigation | Mitigation detail |
|---|---|---|---|
| 15.1 | Innocent victim (e.g., owner, operator, or maintenance engineer) is tricked into taking action to unintentionally load malware or enable an attack | M18 | Measures shall be implemented for defining and controlling user roles and access privileges based on the principle of least access privilege |
| 15.2 | Defined security procedures are not followed | M19 | Organizations shall ensure security procedures are defined and followed, including logging of actions and access related to the management of the security functions |
| 23.1 | Fabrication of software for the vehicle control system or information system | M7 | Access control techniques and designs shall be applied to protect system data/code |
Table 1. Mitigation recommendations from UNECE WP.29
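Encoding Table 1 as a lookup shows how each node in an attack path can be matched to its recommended control. The data comes from the table above; the structure itself is illustrative, not a standard WP.29 format.

```python
# Table 1's UNECE WP.29 threat-to-mitigation mapping as a simple lookup.
# The mitigation texts are abbreviated from the table above.

WP29_MITIGATIONS = {
    "15.1": ("M18", "Define and control user roles and access privileges "
                    "based on the principle of least access privilege"),
    "15.2": ("M19", "Ensure security procedures are defined and followed, "
                    "including logging of security-management actions"),
    "23.1": ("M7",  "Apply access control techniques and designs to "
                    "protect system data/code"),
}

def mitigations_for(attack_path):
    """Return the mitigation reference for each threat on an attack path."""
    return [WP29_MITIGATIONS[threat][0] for threat in attack_path]

# An attack path touching threats 15.1 and 23.1 maps to mitigations M18 and M7.
assert mitigations_for(["15.1", "23.1"]) == ["M18", "M7"]
```

In practice, such a mapping is only as good as the attack-vector analysis feeding it, which is the point the next section makes.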
However, if an automotive cybersecurity company bases its mitigation plan on the five attack vectors identified by the GPT model, the resulting strategies (seen in Figure 4) appear unreliable and inaccurate.
Figure 4. WP.29 mitigation examples based on a GPT model's analysis results
The art of accuracy: How VicOne uses the GPT model differently
Automotive cybersecurity companies that rely on accurate and reliable information for their mitigation strategies are better equipped to address the actual problem and find effective solutions. However, when a generic GPT model generates responses whose accuracy is hard to judge, spotting these inaccuracies without deep knowledge of automotive cybersecurity can be difficult.
VicOne adopts a "united AI model" approach to improve the accuracy of our analysis results. We not only utilize the widely recognized GPT model to offer conversational and comprehensible context, but we also fine-tune the results using our own large language models (LLMs), pre-trained on our 30-year accumulation of automotive threat intelligence covering diverse attack methods. By combining the GPT model with our automotive threat intelligence LLM, we can consolidate the output from all models, thereby obtaining results that are more reliable and accurate.
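The consolidation step can be pictured as follows. This is an illustrative sketch of the general idea, not VicOne's actual pipeline: a domain-tuned model confirms or vetoes the generic model's findings and contributes domain-specific findings the generic model missed.

```python
# Illustrative sketch only -- not VicOne's implementation. Consolidate output
# from a generic GPT model and a domain-tuned model: keep generic findings
# the domain model confirms, plus findings only the domain model surfaced.

def consolidate(generic_findings, domain_findings):
    """Union the domain model's findings with generic findings it confirms."""
    confirmed = [f for f in generic_findings if f in domain_findings]
    domain_only = [f for f in domain_findings if f not in confirmed]
    return confirmed + domain_only

# Example values echo the Mazda case: the generic model's spurious vectors
# (phone as key, in-vehicle network) are dropped; the domain model adds the
# HD radio stream parsing flaw the generic model missed.
generic = ["web application", "mobile app as key", "in-vehicle network"]
domain = ["web application", "HD radio stream parsing"]

assert consolidate(generic, domain) == ["web application",
                                        "HD radio stream parsing"]
```

Real systems would weight and rank findings rather than apply a hard filter, but the principle is the same: domain knowledge arbitrates what the generic model asserts.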
When users seek additional guidance from our automotive threat intelligence LLM, we can provide easy-to-understand mitigation recommendations. These include condensing the incident summary, attack flow, attack vector, and suggested mitigation direction into a concise summary. This approach enables users to understand the situation quickly and improve their work efficiency. With our reliable and efficient solutions, we are dedicated to addressing cyberthreats in the automotive industry, safeguarding the safety and security of connected vehicles and critical systems.
Read our previous blog entry to learn how AI models like ChatGPT can influence automotive cybersecurity.