What is compute governance?

Compute governance is a subfield of AI governance that focuses on controlling access to the computing hardware needed to develop and run AI. It has been argued that regulating compute is particularly promising compared to regulating data, algorithms, or human talent, because hardware is physical, quantifiable, and produced through a highly concentrated supply chain, which makes it easier to monitor and control than AI's other inputs.

As of November 2024, few policies for governing compute are in place, and much of the research on compute governance is exploratory. Currently enforced measures include US export controls restricting exports of advanced chips to China, and reporting requirements for large training runs in the US and the EU.

According to Sastry et al., compute governance can be used toward three main ends:

  • Visibility is the ability of policymakers to know what’s going on in AI so they can take better-targeted measures. The amount of compute used in a training run can serve as a rough indicator of a model’s capabilities and risk. Measures to improve visibility could include using public information to estimate the compute used (a worked example follows this list), requiring AI developers and cloud providers to report large training runs, creating an international registry of AI chips, or designing systems that monitor the overall workload run on AI chips while preserving the privacy of sensitive information.

  • Allocation refers to policymakers influencing the amount of compute available to different projects. One strategy in this category is making compute available for research toward technologies that increase safety and defensive capabilities, or that substitute for more dangerous alternatives. Another is to speed up or slow down the general rate of AI progress. Yet another is to restrict or expand the range of countries or groups with access to certain systems. Governments could create an international megaproject aimed at developing AI technologies — such proposals are sometimes called “CERN for AI”.

  • Enforcement is about policymakers ensuring that the relevant actors abide by their rules. This could potentially be enabled by the right kind of software or hardware; hardware-based enforcement is likely to be harder to circumvent. Chips could be given restricted networking capabilities to make them harder to use in very large clusters, or modified with cryptography to automatically verify or enforce restrictions on the types of tasks they’re allowed to run (see the sketch after this list). They could be designed to be controlled multilaterally, similar to “permissive action links” for nuclear weapons. Restrictions could also be enforced through intermediaries such as cloud providers.
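
To make the visibility bullet concrete, here is a minimal sketch in Python of estimating a training run’s compute from public information, using the common approximation of roughly 6 FLOP per parameter per training token, and comparing the estimate against two real reporting thresholds: the 10^26-operation threshold in the 2023 US executive order on AI and the 10^25 FLOP systemic-risk threshold in the EU AI Act. The model size and token count below are hypothetical.

    # Rough training-compute estimate: FLOP ≈ 6 × parameters × training tokens.

    US_EO_THRESHOLD = 1e26      # reporting threshold, 2023 US executive order (FLOP)
    EU_AI_ACT_THRESHOLD = 1e25  # systemic-risk threshold, EU AI Act (FLOP)

    def training_flop(parameters: float, tokens: float) -> float:
        """Estimate total training compute in FLOP."""
        return 6 * parameters * tokens

    def reporting_status(flop: float) -> str:
        """Compare an estimate against current reporting thresholds."""
        if flop >= US_EO_THRESHOLD:
            return "above the US reporting threshold (1e26 FLOP)"
        if flop >= EU_AI_ACT_THRESHOLD:
            return "above the EU AI Act threshold (1e25 FLOP)"
        return "below current reporting thresholds"

    # Hypothetical run: a 70-billion-parameter model trained on 15 trillion tokens.
    estimate = training_flop(parameters=70e9, tokens=15e12)
    print(f"≈{estimate:.1e} FLOP: {reporting_status(estimate)}")

Run as-is, this prints an estimate of about 6.3e24 FLOP, below both thresholds; roughly doubling either the parameter count or the token count would cross the EU threshold.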
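
And to illustrate the enforcement bullet, below is a toy sketch of hardware-based verification: a chip holds a secret key and attaches an authentication tag to a log of its workload, and a verifier checks that the log is genuine before checking it against an agreed compute limit. Everything here is illustrative: the key, the log fields, and the helper names are invented for this example, and real proposals (such as those discussed by Sastry et al.) would rely on secure hardware and asymmetric cryptography rather than a shared key.

    import hashlib
    import hmac
    import json

    # Hypothetical key fused into the chip's secure hardware at manufacture.
    DEVICE_KEY = b"example-key-fused-into-the-chip"

    def sign_log(log: dict) -> str:
        """The chip computes an authentication tag over its workload log."""
        payload = json.dumps(log, sort_keys=True).encode()
        return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

    def verify_log(log: dict, tag: str, flop_limit: float) -> bool:
        """A verifier checks the tag, then checks the reported compute."""
        payload = json.dumps(log, sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag):
            return False  # the log was tampered with or forged
        return log["total_flop"] <= flop_limit

    log = {"chip_id": "chip-0042", "total_flop": 3.1e22}  # hypothetical log
    tag = sign_log(log)
    print(verify_log(log, tag, flop_limit=1e25))  # True: authentic and under limit

A design like this only works if the key cannot be extracted from the chip and the chip cannot be made to sign false logs, which is why hardware-based enforcement is considered harder, though not impossible, to circumvent.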

Many of these mechanisms are speculative and would require further research before they could be implemented, and some could turn out to be risky or ineffective. Still, many safety researchers consider compute governance worthwhile because they expect it to be necessary for avoiding major existential risks to humanity.
