NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems

Jan 06, 2024 - nist.gov
The National Institute of Standards and Technology (NIST) has published a report outlining the vulnerabilities of AI and machine learning systems to adversarial attacks. The report, titled "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," identifies four major types of attack: evasion, poisoning, privacy, and abuse. The authors note that while some mitigation strategies exist, there is currently no foolproof defense against these attacks.
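
To make the evasion category concrete, the sketch below shows the fast gradient sign method (FGSM), a well-known evasion technique in which an attacker perturbs an input at inference time so a model misclassifies it. This is a minimal illustration assuming a trained PyTorch classifier; the function name `fgsm_evasion` and the perturbation budget `epsilon` are illustrative choices, not taken from the NIST report.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of x (fast gradient sign method)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    with torch.no_grad():
        # Step in the direction that increases the loss, bounded by epsilon.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep inputs in a valid pixel range
    return x_adv.detach()
```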

The report also highlights the problem of untrustworthy data: bad actors can corrupt the data an AI system relies on, both during its training period and afterward. This can lead to undesirable behavior, such as chatbots responding with abusive language. The authors stress the need for better defenses and encourage the community to develop them, acknowledging that securing AI algorithms is a complex problem that has not yet been fully solved.
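
Poisoning can be made concrete with a label-flipping sketch: an attacker who can tamper with training data relabels a small fraction of examples so the model learns corrupted associations. The sketch below is a minimal illustration assuming training data stored as (features, label) pairs; the names `flip_labels`, `poison_fraction`, and `target_label` are hypothetical, not drawn from the report.

```python
import random

def flip_labels(dataset, poison_fraction=0.05, target_label=0, seed=42):
    """Relabel a small random fraction of (features, label) pairs to a target class."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_poison = int(len(poisoned) * poison_fraction)
    for i in rng.sample(range(len(poisoned)), n_poison):
        features, _ = poisoned[i]
        poisoned[i] = (features, target_label)  # corrupted ground-truth label
    return poisoned
```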

Key takeaways:

  • AI systems can be deliberately confused or "poisoned" by adversaries to make them malfunction, with no foolproof defense currently available, according to a new publication by NIST.
  • The publication, titled "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," outlines the types of attacks AI systems might face and suggests approaches to mitigate them.
  • The report identifies four major types of attacks on AI systems: evasion, poisoning, privacy, and abuse attacks, each with different goals, capabilities, and knowledge requirements.
  • Despite significant progress in AI and machine learning, these technologies are vulnerable to attacks that can cause significant failures, and there are theoretical problems with securing AI algorithms that have not yet been solved.