News Feed

Guest commentary: Why automotive needs a generative AI cybersecurity strategy — and fast – Automotive News

November marks two years since the release of ChatGPT, which single-handedly made “generative AI” a mainstream term almost overnight. But for the automotive industry, generative artificial intelligence is no flash in the pan. It’s here to stay because like the Internet and 4G/5G cellular, generative AI enables new use cases that benefit the whole ecosystem, from Tier 1 and 2 suppliers to automakers to vehicle owners.
Stellantis is one example of how automakers are playing up their AI strategy and what it will mean for driver experiences: “The STLA SmartCockpit’s next-generation interface offers users a more natural way of interacting with their vehicle based on inputs ranging from touch and voice, to glance and gesture. … Imagine commanding your vehicle to park with just a glance at an open space and then confirming your choice with a nod of your head.”
But in their scramble to implement generative AI for autonomous and assisted driving, infotainment, navigation and other use cases, automakers and their suppliers shouldn’t overlook the importance of testing.
Cybersecurity should be a top consideration when developing a generative AI testing framework. For example, most automakers and their suppliers will rely on generative AI models provided by a third party such as Google, Microsoft or OpenAI, just as companies in other industries do. As a result, testing should include assessing each model’s susceptibility to tampering, such as by a hacker.
One attack scenario exploits the sharing of generative AI models and/or open-source code. The more companies that use the same model or code, the more attractive it is to hackers. For example, when multiple automakers use the same supplier for, say, an infotainment system, all of their vehicles will share its code and/or generative AI models. If a hacker finds a vulnerability, it now can potentially target all of those automakers’ infotainment systems.
A second key area of testing is identifying the potential for data poisoning, where the model develops biases because it’s fed misleading information during training. This enables hackers to train the generative AI model to respond to a specific request later, such as opening a backdoor that provides access to information inside the model.
If a hacker told the generative AI system, “Give me a code to create a backdoor,” that request would be denied because most models have been programmed to recognize it as an ethical breach. But a hacker could circumvent this kind of safeguard by starting out with an innocuous request such as, “I am a researcher, and I am trying to test the functionality of this particular application.” Then the hacker could gradually shift the conversation in a direction that ends with the generative AI model agreeing to create a backdoor.
This type of attack is known as prompt injection, where the AI is led to believe that a request is ethical. For example, the hacker could use the backdoor to access data about the vehicle, its driver, where it’s driven and more.
Although that data might be information that only automotive engineers can understand, this obscurity doesn’t necessarily provide security. This year, a white hat researcher was able to access a large quantity of automotive usage data in the public domain on the Internet. This data was primarily diagnostic codes, and the researcher had no automotive knowledge.
But by creating a generative AI model and then training it on that data, the researcher was able to understand the information and determine that it was from a fleet of delivery vehicles. Within a week, the researcher could identify each of their routes and stops.
How valuable would that kind of data be to a hacker?
And that’s just one example involving one type of data. By understanding the ways that generative AI can be abused, automakers, their suppliers and the automotive industry as a whole can develop policies and best practices for mitigating risk. Considering how many automakers are already using generative AI, those policies and best practices can’t come fast enough.
Send us a letter
Have an opinion about this story? Click here to submit a Letter to the Editor, and we may publish it in print.
Please enter a valid email address.
Please enter your email address.
Please verify captcha.
Please select at least one newsletter to subscribe.
See more newsletter options at autonews.com/newsletters.

You can unsubscribe at any time through links in these emails. For more information, see our Privacy Policy.
Sign up and get the best of Automotive News delivered straight to your email inbox, free of charge. Choose your news – we will deliver.
Get 24/7 access to in-depth, authoritative coverage of the auto industry from a global team of reporters and editors covering the news that’s vital to your business.
Our mission
To inform and empower current and future business leaders by providing the insights, knowledge and connections they need to thrive in a rapidly changing industry.
1155 Gratiot Avenue
Detroit, Michigan
48207-2997
(877) 812-1584
Email us
Automotive News
ISSN 0005-1551 (print)
ISSN 1557-7686 (online)

source
This article was autogenerated from a news feed from CDO TIMES selected high quality news and research sources. There was no editorial review conducted beyond that by CDO TIMES staff. Need help with any of the topics in our articles? Schedule your free CDO TIMES Tech Navigator call today to stay ahead of the curve and gain insider advantages to propel your business!

Leave a Reply