- Proposal to embed remote kill switches and lockouts into AI hardware to prevent potential catastrophic misuse, akin to safeguards in nuclear weaponry.
- Advocacy for regulating the hardware infrastructure powering AI models as a primary means of mitigating misuse, leveraging the identifiable and controllable nature of AI-relevant computing.
- Suggestions for a multi-faceted approach to AI regulation, including a global registry of AI chip sales, digital licensing for remote control of processor functionality, and a requirement that multiple parties authorize risky AI training tasks.
In a bid to curb the potentially catastrophic misuse of artificial intelligence, researchers from the University of Cambridge have proposed embedding remote kill switches and lockouts into the hardware that powers AI, akin to the safeguards used to prevent unauthorized launches of nuclear weapons.
The paper, whose authors span several academic institutions and include voices from OpenAI, argues that regulating the hardware infrastructure underpinning AI models is a primary means of mitigating misuse.
“AI-relevant computing stands out as a particularly viable intervention point: it is identifiable, controllable, and quantifiable, and is produced via a highly concentrated supply chain,” assert the researchers.
The immense physical infrastructure required to train the most advanced models, estimated to exceed a trillion parameters, involves tens of thousands of GPUs or accelerators and extensive processing time spanning weeks or months. This conspicuous footprint, the researchers contend, makes it challenging to conceal the existence and utilization of these resources.
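To make that scale concrete, a rough back-of-envelope estimate, using the common 6·N·D heuristic for training FLOPs and figures assumed here for illustration rather than taken from the paper, shows why such a run is hard to hide:

```python
# Rough, illustrative estimate of the compute footprint of a frontier training run.
# All figures below are assumptions for illustration, not numbers from the paper.

params = 1e12              # ~1 trillion parameters
tokens = 10e12             # assumed number of training tokens
flops_needed = 6 * params * tokens   # common 6*N*D heuristic for training FLOPs

gpu_peak_flops = 1e15      # ~1 PFLOP/s peak per modern accelerator (assumed)
utilization = 0.4          # assumed fraction of peak actually sustained
num_gpus = 20_000          # "tens of thousands of GPUs"

effective_flops = gpu_peak_flops * utilization * num_gpus
seconds = flops_needed / effective_flops
print(f"Estimated wall-clock time: {seconds / 86_400:.0f} days")  # ~87 days, i.e. months
```

Under these assumptions the run occupies tens of thousands of accelerators for roughly three months, which is precisely the kind of footprint the researchers argue is identifiable.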
Moreover, the production of the cutting-edge chips essential for training these models is predominantly controlled by a handful of companies such as Nvidia, AMD, and Intel, affording policymakers the opportunity to impose restrictions on their sale to entities or nations deemed concerning.
These factors, coupled with constraints within the semiconductor manufacturing supply chain, equip policymakers with the means to gain insights into the deployment of AI infrastructure, regulate access, and enforce penalties for its improper utilization, as outlined in the paper.
The paper outlines several strategies for regulating AI hardware, many of which are already taking shape at the national level. For instance, President Joe Biden’s executive order from the previous year targets the identification of companies developing large dual-use AI models and the infrastructure vendors facilitating their training.
Furthermore, the US Commerce Department has proposed regulations mandating American cloud providers to implement more rigorous “know-your-customer” policies to impede entities or nations of concern from circumventing export restrictions.
While acknowledging the value of such visibility, the researchers caution that enforcing reporting requirements risks encroaching on customer privacy and potentially exposing sensitive data.
On the trade front, the Commerce Department has intensified restrictions, constraining the performance of accelerators sold to China. Nonetheless, efforts to curb the accessibility of American chips to countries like China remain imperfect.
To address these limitations, researchers advocate for the establishment of a global registry for AI chip sales to monitor their lifecycle comprehensively, even beyond their country of origin. This registry could incorporate unique identifiers into each chip, potentially thwarting component smuggling.
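The paper does not specify how such a registry would be implemented. As one way to picture it, the sketch below models a per-chip record with a transfer log keyed on a unique identifier; the `ChipRegistry` class and all field names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChipRecord:
    chip_id: str                  # unique identifier tied to the silicon (hypothetical)
    model: str                    # accelerator model name
    manufacturer: str
    transfers: list = field(default_factory=list)   # lifecycle of ownership changes

class ChipRegistry:
    """Hypothetical global registry tracking AI accelerators across their lifecycle."""

    def __init__(self):
        self._records: dict[str, ChipRecord] = {}

    def register_sale(self, chip_id: str, model: str, manufacturer: str, buyer: str):
        record = self._records.setdefault(chip_id, ChipRecord(chip_id, model, manufacturer))
        record.transfers.append({"to": buyer, "at": datetime.now(timezone.utc).isoformat()})

    def current_holder(self, chip_id: str) -> str | None:
        record = self._records.get(chip_id)
        return record.transfers[-1]["to"] if record and record.transfers else None

# Usage: every resale is logged, so a chip surfacing outside its declared
# destination shows up as a gap or mismatch in its transfer history.
registry = ChipRegistry()
registry.register_sale("GPU-0001", "H100", "Nvidia", buyer="CloudCo")
registry.register_sale("GPU-0001", "H100", "Nvidia", buyer="ResearchLab")
print(registry.current_holder("GPU-0001"))   # -> "ResearchLab"
```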
At the more extreme end, the researchers suggest integrating kill switches into the silicon to prevent the chips from being exploited for nefarious purposes. Such a mechanism could enable regulators to verify that AI chips are operating legitimately and to deactivate them remotely if they contravene regulations.
Moreover, the researchers propose a system whereby processor functionality could be remotely toggled or reduced by regulators using digital licensing, potentially allowing faster responses to abuses of sensitive technologies.
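The paper discusses this at the policy level only. One way to picture the mechanism is a signed, time-limited operating license that a chip's firmware checks before enabling full performance; the sketch below uses a shared-secret HMAC purely for illustration (a real scheme would presumably rely on public-key attestation in hardware), and every name in it is an assumption:

```python
import hashlib
import hmac
import json
import time

REGULATOR_KEY = b"demo-shared-secret"   # stand-in for a hardware root of trust

def issue_license(chip_id: str, max_tflops: float, valid_seconds: int) -> dict:
    """Regulator side: issue a signed, time-limited performance license."""
    body = {"chip_id": chip_id, "max_tflops": max_tflops,
            "expires": time.time() + valid_seconds}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return body

def allowed_tflops(chip_id: str, lic: dict) -> float:
    """Chip/firmware side: run at the licensed rate only if the license checks out."""
    body = {k: v for k, v in lic.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    if (lic.get("chip_id") != chip_id
            or not hmac.compare_digest(expected, lic.get("sig", ""))
            or time.time() > lic.get("expires", 0)):
        return 0.0          # no valid license: throttle to zero ("kill switch" behaviour)
    return lic["max_tflops"]

lic = issue_license("GPU-0001", max_tflops=500.0, valid_seconds=3600)
print(allowed_tflops("GPU-0001", lic))   # 500.0 while the license is valid
print(allowed_tflops("GPU-9999", lic))   # 0.0 for a chip the license does not cover
```

A design like this is also what makes the following risk concrete: whoever can sign or revoke licenses effectively holds the kill switch.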
However, the authors caution that implementing such measures entails risks, including the potential exploitation of kill switches by cybercriminals.
Additionally, they suggest a mechanism whereby multiple parties must sign off on potentially risky AI training tasks before they can be run at scale, drawing parallels to the permissive action links used in nuclear weapons to prevent unauthorized launches.
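By analogy with permissive action links, the idea can be pictured as a simple quorum check in which a large training job proceeds only once a minimum number of distinct authorizers have approved it; the threshold and party names below are invented for illustration:

```python
REQUIRED_APPROVERS = 3   # assumed quorum size, analogous to a multi-key launch control

def may_launch_training(job_id: str, approvals: dict[str, bool]) -> bool:
    """Allow a large-scale training run only if enough distinct parties approve."""
    granted = {party for party, approved in approvals.items() if approved}
    return len(granted) >= REQUIRED_APPROVERS

approvals = {"regulator": True, "cloud_provider": True, "model_developer": False}
print(may_launch_training("frontier-run-42", approvals))   # False: only 2 of 3 approvals

approvals["model_developer"] = True
print(may_launch_training("frontier-run-42", approvals))   # True: quorum reached
```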
While acknowledging the potential value of these measures, the researchers highlight drawbacks, including the risk of hindering the development of beneficial AI.
Ultimately, the paper emphasizes the need for a multi-faceted approach to AI regulation, with hardware regulation representing a crucial aspect but not a standalone solution.