Despite the skepticism, Ghost and OpenAI remain confident in the potential of LLMs. Ghost is currently testing multimodal model-driven decision-making with its development fleet and is working with automakers to integrate new large models into its autonomy stack. However, the article notes that given the recent setbacks experienced by well-resourced companies like Cruise and Waymo, it remains uncertain whether Ghost can deliver on its promises with this unproven technology.
Key takeaways:
- Ghost Autonomy, a startup building autonomous driving software, plans to use multimodal large language models (LLMs) in self-driving cars, in partnership with OpenAI and Microsoft.
- Multimodal LLMs, which can process both text and images, will be used for complex scene interpretation and for suggesting road decisions based on images from car-mounted cameras.
- Experts are skeptical about the use of LLMs in autonomous driving, citing their unpredictability, instability, and the fact that they were not designed for this purpose.
- Despite the skepticism, Ghost Autonomy is testing multimodal model-driven decision-making and working with automakers to integrate these models into its autonomy stack.