The paper also discusses the potential for AI compute resources to be reallocated toward societal benefit. However, it acknowledges that hardware regulation is not a complete solution and would not eliminate the need to regulate other parts of the industry. The authors argue that physical hardware is easier to control than other inputs to AI development, such as data, algorithms, and trained models, which are easy to share and hard to contain once published or leaked.
Key takeaways:
- A paper from the University of Cambridge suggests implementing remote kill switches and lockouts in AI hardware to limit its destructive potential.
- The paper argues that regulating the hardware these models rely on may be the best way to prevent misuse of AI, as it is detectable, excludable, and quantifiable.
- The researchers propose a global registry for AI chip sales to track them over their lifecycle and combat smuggling of components.
- The paper also suggests that regulators could remotely switch off or throttle processor functionality using digital licensing, but warns that such a kill switch could itself become a target for cybercriminals to exploit.
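The paper does not specify how digital licensing would be implemented. Purely as an illustration, here is a minimal sketch of one way a time-limited operating license might be checked: the regulator signs a license binding a chip ID to an expiry time, and the chip verifies it before enabling full functionality. Everything here is a hypothetical assumption, not the authors' design: the function names (`issue_license`, `check_license`), the license fields, and the use of a shared-secret HMAC as a stand-in for real public-key signatures.

```python
import hashlib
import hmac
import json
import time

# Illustrative only: real hardware would use asymmetric keys in a secure
# enclave, not a shared secret embedded in software.
REGULATOR_KEY = b"demo-shared-secret"

def issue_license(chip_id: str, expires_at: float, key: bytes = REGULATOR_KEY) -> dict:
    """Regulator side: sign a license binding a chip ID to an expiry time."""
    payload = json.dumps({"chip_id": chip_id, "expires_at": expires_at}, sort_keys=True)
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def check_license(lic: dict, chip_id: str, now: float, key: bytes = REGULATOR_KEY) -> bool:
    """Chip side: verify the signature and expiry before enabling full throughput."""
    expected = hmac.new(key, lic["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, lic["sig"]):
        return False  # forged or tampered license
    data = json.loads(lic["payload"])
    return data["chip_id"] == chip_id and now < data["expires_at"]

lic = issue_license("chip-0001", expires_at=time.time() + 3600)
print(check_license(lic, "chip-0001", now=time.time()))         # valid license
print(check_license(lic, "chip-0001", now=time.time() + 7200))  # expired license
```

An expiring license is one way to sidestep part of the kill-switch risk the paper flags: rather than a remote "off" command that an attacker could forge, the chip simply fails safe when no fresh license arrives, so the attack surface shifts to the licensing channel rather than a standing shutdown mechanism.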