Are policies that address immediate and future risks opposed?

There is a tension between emphasizing the immediate risks posed by current and near-term AI systems and emphasizing the longer-term existential risks from AI. Risks that already harm people include discrimination and misinformation, while near-term risks include unemployment and autonomous weapons. This tension has several sources:

  1. Limited attention and resources

  2. Different cultures

  3. Differing priors on the likelihood of extinction from AI

  4. Conflicting goals on some objectives

  5. Uncertain effects of regulation

Limited attention and resources

Any two sets of differing priorities compete for the attention of the people they want to reach. For risks from AI, this applies both to the attention of lay people and to the attention and resources of policymakers. This tension is more pronounced between advocates focused on current harms and those focused on future harms from AI than between, say, environmental activists and trans rights activists. When policymakers hear about “risks from AI”, immediate and future risks may be bundled together, and some have argued that existential risks could take up all the air in the room, causing other risks to be ignored. In practice, however, others have found that multiple metrics indicate that the prominence and funding of people and organizations focused on current harms were not negatively affected by the sharp increase in interest in existential risks in the spring of 2023, suggesting that talk of existential risk has not been detrimental to the discussion of current harms. Still others point out that viewing attention as a zero-sum game is itself misleading.

Different cultures

Historically, concern about discrimination has been raised by academics close to the humanities, concerned with the plight of people marginalized by that discrimination. By contrast, initial concern about x-risks was promoted by influential white men[1] with STEM backgrounds, concerned with the future of humanity as a whole.[2] While in theory these two communities could work together, and they sometimes do, tribalism has contributed to keeping some distance between them.

Differing priors on the likelihood of extinction from AI

Although it is not always stated explicitly, people who oppose legislation tend to believe that x-risk from AI is unlikely. As an analogy, in the field of cryptography there is broad agreement that quantum computers will eventually be able to break current public-key encryption algorithms such as RSA, but people who think this is imminent will prioritize differently from people who think it will not happen in the next 50 years. Similarly, people who are skeptical of x-risk from AGI, or who believe that AGI is either impossible or very far off, will understandably want to avoid paying opportunity costs to prevent a harm that they do not believe will happen anytime soon, or at all. If such critics became more convinced of the seriousness and imminence of these risks, they would likely change their priorities.

Conflicting goals on some objectives

In some cases, the methods used to address one type of risk may be directly detrimental to the other. For instance, promoting the open-sourcing of models and methods might help avoid concentration of power and allow auditing, but it might also worsen other immediate harms, such as the unrestricted generation of deepfakes, as well as open a possible pathway to future large-scale harms such as engineered pandemics.

Uncertain effects of regulation

There is some disagreement among people who are concerned about x-risks as to whether stringent regulation, such as a pause on AI or restricting access to advanced chips, is beneficial to safety. In such a context, it is harder to legitimize demands or recommendations to policymakers, compared to, e.g., the widespread agreement on banning lethal autonomous weapons.

Reasons to work together

On the flip side, there are reasons for these communities to work together.

  1. The x-risk community broadly agrees that immediate harms matter. Multiple prominent people concerned with x-risks (e.g., Rob Miles, Dan Hendrycks, Yoshua Bengio) have acknowledged the importance of such harms, and discussions of x-risks often mention current harms, even if they are not prioritized.

  2. Some policy interventions help with many of these risks. The EU AI Act illustrated that there is some consensus on meta-principles, and that increased attention to AI risks gives weight to the voices from both sides.

  3. Some research directions help with many of these risks. For instance, interpretability can help both with future risks and current harms.

  4. Avoiding immediate harms may reduce x-risks in the long run. If current uses of AI cause social unrest that leads to armed conflict, this conflict may become a source of existential risk in addition to being intrinsically harmful.

  5. Even if people concerned about x-risk are able to completely solve AI alignment, expertise from the humanities will be needed to determine what to do with the resulting aligned AGI.

  1. Since that time, a more diverse range of people has been advocating for regulation to avoid x-risks. ↩︎

  2. Steve Byrnes argues that this division does not follow typical political lines as closely as one might think. ↩︎