What evidence do experts usually base their timeline predictions on?

Machine learning researchers each have their own sense of how fast AI capabilities will increase, based on their understanding of how past and present AI systems work, what approaches could be tried in the future, how fast computing power will scale up, and what the future of the world as a whole will look like. Rapid recent progress in deep learning has led some to update toward very short timelines, but others disagree. When surveys come up with a median number like "AGI by 2059", most respondents are probably weighing their subjective sense of the speed of progress against their sense of the difficulty of the problem, arriving at a guess that feels reasonable to them.

Some researchers have tried to model timelines more rigorously. For example:

  • Ajeya Cotra's "biological anchors" model projects the computing power used for future ML training runs based on advances in hardware and increases in willingness to spend. It then compares the results to several different "biological anchors": for example, the assumption that training a transformative AI takes as much computing power as a human brain uses during a lifetime, or as much computing power as all brains used in our evolutionary history, or more likely something in between. (A simplified sketch of this kind of projection appears after this list.)

  • A major part of the biological anchors model is a probability distribution over how much computing power it would take to build transformative AI with current algorithms and ideas. Daniel Kokotajlo has argued for a different way of estimating this quantity: instead of drawing analogies to the human brain, he bases his estimates on intuitions about what kinds of AI systems could be built with a given amount of computing power.

  • Tom Davidson's approach, based on "semi-informative priors", looks at the statistical distribution of timelines for past inventions, drawn from a few reference classes such as highly ambitious STEM R&D goals. (A sketch of this flavor of outside-view calculation appears after this list.)

  • Robin Hanson has collected expert guesses of what fraction of the way to human level we have come in individual subfields.

  • Matthew Barnett has done calculations on when we can expect scaling laws to take language models to the point where they generate sufficiently human-like text. The idea is that if AI text is hard enough to distinguish from human text, this implies at least human-like competence. (A sketch of this kind of extrapolation appears after this list.)
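
To make the structure of a biological-anchors-style projection concrete, here is a deliberately simplified sketch in Python. It is not Cotra's actual model, which places probability distributions over several anchors and grounds its inputs in detailed research; the starting compute, growth rate, and anchor values below are rough, illustrative assumptions.

```python
import math

# Toy "biological anchors"-style projection. All numbers are rough
# placeholders for illustration, not the report's actual inputs.

START_YEAR = 2023
START_COMPUTE = 1e25   # assumed FLOP used by today's largest training runs
ANNUAL_GROWTH = 3.0    # assumed yearly multiplier from hardware and spending

# Illustrative anchor values (orders of magnitude only):
ANCHORS = {
    "human brain over one lifetime": 1e24,
    "all brains over evolutionary history": 1e41,
}

def crossing_year(anchor_flop: float) -> float:
    """Year when projected training compute first reaches anchor_flop."""
    years_needed = math.log(anchor_flop / START_COMPUTE, ANNUAL_GROWTH)
    return START_YEAR + max(years_needed, 0.0)

for name, flop in ANCHORS.items():
    print(f"{name}: ~{crossing_year(flop):.0f}")
```

Under these made-up numbers, the lifetime anchor is already within reach while the evolution anchor is decades away; the model's overall answer depends on how much probability it places on each anchor and on everything in between.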
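
The flavor of outside-view reasoning can be shown with a minimal rule-of-succession calculation. This is an illustration of the general idea only, not Davidson's actual framework, which uses reference classes of past efforts to set a more informative starting probability; the number of "failed" years below is an assumption.

```python
# Minimal Laplace-rule-of-succession sketch of outside-view timeline reasoning.
# Not Davidson's actual model; all inputs here are illustrative assumptions.

YEARS_OF_FAILED_TRIALS = 70  # assumed: roughly the mid-1950s to the mid-2020s

def p_success_next_year(failed_years: int) -> float:
    """Laplace's rule: P(success on the next trial) after n failures and no successes."""
    return 1.0 / (failed_years + 2)

def p_success_within(failed_years: int, horizon: int) -> float:
    """Probability of at least one success within `horizon` further years."""
    p_all_fail = 1.0
    for k in range(horizon):
        p_all_fail *= 1.0 - p_success_next_year(failed_years + k)
    return 1.0 - p_all_fail

for horizon in (10, 30, 80):
    p = p_success_within(YEARS_OF_FAILED_TRIALS, horizon)
    print(f"P(success within {horizon} years): about {p:.0%}")
```

The choice of reference class and starting probability drives the result; this naive version comes out more optimistic about timelines than Davidson's considered estimate quoted below.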
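
Finally, a sketch of the shape of a scaling-law extrapolation. Barnett's actual calculation rests on fitted scaling laws and a careful operationalization of "sufficiently human-like" text; the power-law constants, loss threshold, and compute growth rate below are invented purely for illustration.

```python
import math

# Toy scaling-law extrapolation. The constants are invented for illustration
# and are not fitted values from any real scaling-law analysis.

A, B = 1e3, 0.1         # assumed loss curve: loss(C) = A * C**(-B)
HUMAN_LIKE_LOSS = 0.5   # assumed loss at which text becomes hard to distinguish
START_YEAR, START_COMPUTE = 2023, 1e25
ANNUAL_GROWTH = 3.0     # assumed yearly compute growth multiplier

# Invert the power law to get the compute needed to reach the threshold...
needed_compute = (A / HUMAN_LIKE_LOSS) ** (1 / B)
# ...then ask when projected compute reaches that level.
years_needed = max(math.log(needed_compute / START_COMPUTE, ANNUAL_GROWTH), 0.0)

print(f"Compute needed: about 1e{math.log10(needed_compute):.0f} FLOP")
print(f"Threshold crossed around {START_YEAR + years_needed:.0f}")
```

In a real version of this calculation, the answer hinges on the fitted curve and the chosen threshold.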

These approaches give very different answers to the question of when we'll first have advanced AI. Cotra's model originally gave a median of 2050, but she later updated to 2040. The Colab notebook that uses Barnett's direct method also shows (as of February 2023) a median of 2040. On the shorter side, Kokotajlo has argued for a median before 2030; on the longer side, Davidson's report gives only an 18% probability of AGI by 2100, and (based on estimates made between 2012 and 2017) Hanson's method also "suggests at least a century until human-level AI". Different experts put different weights on these and other considerations, so they end up with different estimates.


