Someone had to say it: Scientists propose AI apocalypse kill switches

Feb 20, 2024 - theregister.com
A paper from the University of Cambridge suggests building remote kill switches and lockouts into AI hardware to limit the technology's destructive potential. The paper, which includes contributions from several academic institutions and OpenAI, argues that regulating the hardware AI models run on may be the best way to prevent misuse. The researchers propose a global registry for AI chip sales, with a unique identifier baked into each chip to combat smuggling, and a requirement that multiple parties sign off on potentially risky AI training tasks.
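
The sign-off proposal is the most protocol-like of these ideas. As a purely illustrative sketch (not the paper's design), the Python below shows how a quorum gate on large training runs might look; the TrainingTask record, the FLOP threshold for "risky", and the quorum size are all invented for this example.

```python
from dataclasses import dataclass, field

# Illustrative only: a large training run stays locked until a quorum of
# independent parties has signed off. The threshold and quorum size below
# are hypothetical, not taken from the paper.
RISK_THRESHOLD_FLOPS = 1e26  # hypothetical cutoff for a "risky" run
REQUIRED_APPROVALS = 3       # hypothetical quorum of independent signers

@dataclass
class TrainingTask:
    task_id: str
    compute_budget_flops: float
    approvals: set[str] = field(default_factory=set)

def approve(task: TrainingTask, party: str) -> None:
    """Record one party's sign-off on the task."""
    task.approvals.add(party)

def may_run(task: TrainingTask) -> bool:
    """Small runs proceed freely; risky ones need the full quorum."""
    if task.compute_budget_flops < RISK_THRESHOLD_FLOPS:
        return True
    return len(task.approvals) >= REQUIRED_APPROVALS

# Example: a frontier-scale run needs all three independent approvals.
task = TrainingTask("frontier-run-01", 5e26)
approve(task, "vendor")
approve(task, "national-regulator")
assert not may_run(task)           # only two of three sign-offs so far
approve(task, "international-body")
assert may_run(task)
```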

The paper also discusses how AI resources could be reallocated for societal benefit. It acknowledges, however, that hardware regulation is not a complete solution and does not remove the need for rules elsewhere in the industry. The authors argue that physical hardware is simply easier to control than the other inputs to AI development, such as data, algorithms, and trained models, which are easy to share and hard to contain once published or leaked.

Key takeaways:

  • A paper from the University of Cambridge suggests implementing remote kill switches and lockouts in AI hardware to limit its destructive potential.
  • The paper argues that regulating the hardware these models rely on may be the best way to prevent misuse of AI, since that hardware is detectable, excludable, and quantifiable.
  • The researchers propose a global registry for AI chip sales to track chips over their lifecycle and combat component smuggling.
  • The paper also suggests that regulators could remotely switch off or dial down processor functionality via digital licensing, but warns that such a kill switch could itself become a target for cybercriminals to exploit (a rough sketch of the idea follows this list).
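
As a rough illustration of that licensing idea (again, not the paper's actual design), the sketch below has a chip validate a signed, time-limited licence token and drop to zero clock when the token is missing, forged, expired, or issued for a different chip. The token format, the shared key, and the HMAC scheme are all assumptions for the example; a real deployment would need hardware-rooted keys and asymmetric signatures, and, as the authors warn, the mechanism itself becomes an attack surface.

```python
import hashlib
import hmac
import time

# Illustrative only: a regulator issues a signed, expiring licence for a
# specific chip; the chip refuses to run (clock fraction 0.0) unless the
# licence checks out. Key handling and token format are invented here.
SECRET_KEY = b"regulator-demo-key"  # placeholder shared secret

def issue_licence(chip_id: str, expires_at: float) -> str:
    """Regulator side: sign '<chip_id>:<expiry>' with an HMAC."""
    payload = f"{chip_id}:{expires_at}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def allowed_clock_fraction(chip_id: str, licence: str) -> float:
    """Chip side: return 1.0 for a valid licence, 0.0 otherwise."""
    try:
        lic_chip, expires_at, sig = licence.rsplit(":", 2)
    except ValueError:
        return 0.0  # malformed token
    payload = f"{lic_chip}:{expires_at}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return 0.0  # forged or tampered token
    if lic_chip != chip_id or time.time() > float(expires_at):
        return 0.0  # wrong chip or licence expired
    return 1.0

# Example: a licence valid for one hour on a hypothetical chip ID.
token = issue_licence("H100-0001", time.time() + 3600)
assert allowed_clock_fraction("H100-0001", token) == 1.0
assert allowed_clock_fraction("H100-0002", token) == 0.0
```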