
Why adversarial AI is the cyber threat no one sees coming

Mar 21, 2024 - venturebeat.com
The article discusses a recent report highlighting the gap between IT leaders' intentions and their actions in securing AI and MLOps. While 97% of IT leaders believe that securing AI and safeguarding systems is essential, only 61% are confident they will receive the necessary funding. And although 77% of IT leaders have experienced some form of AI-related breach, only 30% have deployed a manual defense against adversarial attacks in their existing AI development, including MLOps pipelines.

The report also defines three broad classes of adversarial AI: adversarial machine learning attacks, generative AI system attacks, and MLOps and software supply chain attacks. It suggests four defenses against adversarial AI: make red teaming and risk assessment part of the organization's DNA; stay current and adopt the defensive framework for AI that works best for the organization; reduce the threat of synthetic-data-based attacks by integrating biometric modalities and passwordless authentication into every identity access management system; and audit verification systems randomly and often, keeping access privileges current.
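To make the first class concrete, adversarial machine learning attacks craft small input perturbations that flip a model's prediction. A minimal sketch of one classic technique, the Fast Gradient Sign Method (FGSM), against a toy NumPy logistic-regression classifier (the attack method and all names here are illustrative assumptions, not details from the article or report):

```python
import numpy as np

# Illustrative sketch: FGSM nudges an input in the direction that most
# increases the model's loss, flipping the prediction with a small change.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)            # model's predicted probability
    grad_x = (p - y_true) * w         # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)  # adversarial example

# Toy classifier: labels an input by the sign of its weighted sum.
w = np.array([1.0, 1.0])
b = 0.0
x = np.array([0.3, 0.2])              # clean input, correctly labeled class 1

clean_pred = sigmoid(w @ x + b) > 0.5                  # True: class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.6)
adv_pred = sigmoid(w @ x_adv + b) > 0.5                # flipped to class 0
```

The same gradient-following idea scales to deep networks, which is why the report's first recommendation, routine red teaming and risk assessment, includes probing deployed models with exactly this kind of perturbed input.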

Key takeaways:

  • Despite 97% of IT leaders acknowledging the importance of securing AI and safeguarding systems, only 61% are confident they’ll get the necessary funding.
  • Adversarial AI aims to mislead AI and machine learning systems, making them ineffective for their intended use cases. There are three broad classes of adversarial AI: adversarial machine learning attacks, generative AI system attacks, and MLOps and software supply chain attacks.
  • Organizations can defend against adversarial AI attacks by making red teaming and risk assessment part of their routine, staying current with defensive frameworks, integrating biometric modalities and passwordless authentication techniques, and frequently auditing verification systems.
  • Synthetic data is increasingly being used to impersonate identities and gain access to source code and model repositories, making it one of the most challenging threats to contain.
