
Can Self-Driving Cars Hallucinate? Cybersecurity Researcher Has an Answer

Sep 26, 2023 - techtimes.com
Kevin Fu, a professor at Northeastern University, and his team have discovered a new form of cyberattack, dubbed "Poltergeist attacks," that can manipulate the perception of autonomous vehicles and drones. The attack exploits optical image stabilization, a common feature in contemporary cameras: by driving the stabilization sensors inside these cameras at their resonant frequencies, attackers can distort the captured images and cause the machine learning algorithms that interpret them to make mistakes.
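
The basic idea can be pictured with a toy model: if an external signal (in related acoustic-injection research, a sound tone) excites the stabilization sensor near its resonant frequency, the lens or sensor oscillates during the exposure and smears the image, which is what confuses downstream perception. The sketch below is a rough illustration of that effect, not code from Fu's research; the resonant frequency, drive amplitude, exposure time, and simple resonance model are all illustrative assumptions.

```python
# Toy model of resonance-induced motion blur in a stabilized camera.
# All numeric values are illustrative assumptions, not figures from the research.
import numpy as np

def induced_blur_kernel(drive_freq_hz, resonant_freq_hz, amplitude_px,
                        exposure_s=1 / 60, taps=64):
    """Approximate the 1-D motion-blur kernel produced when a drive signal near
    the stabilization sensor's resonant frequency makes the lens oscillate
    during one exposure. The resonance response is modeled as a simple peak."""
    # Crude resonance model: response grows as the drive approaches resonance.
    gain = 1.0 / (1.0 + ((drive_freq_hz - resonant_freq_hz) / 50.0) ** 2)
    t = np.linspace(0.0, exposure_s, taps)
    # Lens displacement (in pixels) sampled over the exposure window.
    displacement = amplitude_px * gain * np.sin(2 * np.pi * drive_freq_hz * t)
    # Histogram the displacement into a normalized blur kernel.
    span = int(np.ceil(np.abs(displacement).max())) + 1
    kernel = np.zeros(2 * span + 1)
    for d in displacement:
        kernel[int(round(d)) + span] += 1.0
    return kernel / kernel.sum()

# Off-resonance tone: the kernel is essentially a delta, so the image stays sharp.
print(induced_blur_kernel(2_000, 19_000, amplitude_px=8).round(2))
# On-resonance tone: the kernel spreads, smearing the scene enough that an
# object detector may fall below its confidence threshold or misclassify.
print(induced_blur_kernel(19_000, 19_000, amplitude_px=8).round(2))
```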

These attacks pose a serious threat to the safety of autonomous systems, potentially causing them to misinterpret or disregard real objects and collide with them. Fu emphasizes the need for engineers and developers to address these vulnerabilities, and his research highlights the importance of rigorous cybersecurity measures in the evolving landscape of autonomous systems.

Key takeaways:

  • Kevin Fu and his team at Northeastern University have discovered a new form of cyberattack, dubbed "Poltergeist attacks," which can manipulate the perception of self-driving cars and drones, potentially threatening their safe operation.
  • Poltergeist attacks exploit the optical image stabilization technology common in contemporary cameras, creating deceptive visual input for machines that rely on machine learning to make decisions.
  • The team was able to manipulate images by pinpointing the resonant frequencies of the sensors within these cameras, leading to misinterpretations by the machine learning algorithms and potentially significant misjudgments by autonomous systems.
  • Fu emphasizes the need for engineers and developers to address these vulnerabilities, as they pose genuine threats to the safe operation of autonomous systems and could lead to a lack of consumer confidence in new technologies.