The industry is exploring the use of small language models (SLMs) for specific tasks within vehicles, such as cybersecurity agents. However, generative AI can also create vulnerabilities, such as the risk of hackers exploiting shared code across multiple vehicle models. The industry must be prepared to quickly detect and respond to such compromises. Additionally, the industry must develop policies for generative AI software testing, as software changes and updates frequently. The concept of cybersecurity for AI is still in its early stages, and considerations must be made for training data, model selection, governance models, and shared codes in AI use.
Key takeaways:
- The automotive industry has quickly begun implementing generative AI, with tools like ChatGPT becoming popular due to their varied use cases, including making it easier for drivers to interact with their vehicles.
- Mercedes-Benz's MBUX Virtual Assistant, which uses generative AI, is an example of how AI can enhance the user experience by offering helpful suggestions based on learned behavior and situational context.
- While AI can provide numerous benefits, it can also create vulnerabilities, particularly when multiple OEMs use the same code from Tier 1 and Tier 2 suppliers, potentially allowing hackers to exploit a vulnerability across multiple vehicle models.
- As generative AI continues to permeate the automotive world, it's crucial for OEMs and their suppliers to develop robust cybersecurity strategies, including policies and best practices for GenAI software testing, to guard against potential threats.