What is compute?

Compute is shorthand for “computing power”. In machine learning, it refers to the total amount of processing power used to train a model, often measured in floating-point operations (FLOP).

The term “compute” is sometimes confused with the phrase “leveraging computation”[1]. Compute refers only to the total amount of processing power, whereas when we say we need to leverage computation to increase model capabilities, we are talking about all three of the following factors:

  • Length of training run: Longer training runs[2] tend to produce lower loss. The total amount of processing power required depends in part on how long the model is trained. Generally, ML engineers look for asymptotically diminishing returns before they decide to stop training: once the performance improvement per training iteration drops below some small threshold, further training isn’t worth the cost (a minimal sketch of this stopping rule follows the list).

  • Amount of training data: The larger our training data set, the more information our model has to process in each training run. Training runs therefore take longer in general, which increases the total amount of processing power required before we can consider the model trained.

  • Size of the model: For every training example we give our model, we need to calculate the loss and then backpropagate to update the model’s weights. The more weights (or parameters in general) the model has, the more compute-heavy this process becomes (a rough cost estimate also follows the list).
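
To make the stopping criterion from the first bullet concrete, here is a minimal sketch in Python. The training function, loss values, and threshold are hypothetical placeholders, not part of any real framework:

```python
# Minimal sketch of "train until returns diminish". The training
# function and its loss values are hypothetical placeholders.
import itertools

def train_until_diminishing_returns(train_one_epoch, threshold=1e-3, max_epochs=100):
    """Run epochs until the loss improvement per epoch falls below `threshold`."""
    prev_loss = float("inf")
    for epoch in range(max_epochs):
        loss = train_one_epoch()        # one full pass over the training data
        improvement = prev_loss - loss  # how much this epoch helped
        if improvement < threshold:     # asymptotically diminishing returns
            print(f"stopping after epoch {epoch}: improvement {improvement:.5f} < {threshold}")
            return loss                 # further training isn't worth the compute
        prev_loss = loss
    return prev_loss

# Toy usage: a fake loss curve that decays geometrically toward zero.
fake_losses = (2.0 * 0.8 ** k for k in itertools.count())
train_until_diminishing_returns(lambda: next(fake_losses))
```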
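
The three factors also combine into a common back-of-the-envelope estimate from the scaling-laws literature: training a transformer costs roughly 6 FLOP per parameter per training token. Treat the sketch below as an approximation; the example model and token counts are made up for illustration:

```python
# Back-of-the-envelope training compute: C ≈ 6 * N * D FLOP, where
# N is the parameter count and D is the number of training tokens
# (~2 FLOP/param/token for the forward pass, ~4 for the backward pass).
# The example numbers are made up for illustration.

def training_flop(n_params: float, n_tokens: float, epochs: float = 1) -> float:
    """Estimate total training compute in FLOP."""
    return 6 * n_params * n_tokens * epochs

# Illustrative: a 1-billion-parameter model trained on 20 billion tokens.
print(f"{training_flop(1e9, 20e9):.2e} FLOP")  # -> 1.20e+20 FLOP
```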

Below are graphs showing how model loss falls as each of these three factors increases (note that “compute” here corresponds specifically to the length of the training run). Epoch AI has many more graphs of this kind.

Source: Gwern (2022), “The Scaling Hypothesis”
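
The curves in such graphs are approximately power laws: loss falls as a power of compute, data set size, or parameter count. The sketch below shows that functional form with made-up constants; the actual fitted exponents are in Kaplan et al. (2020), “Scaling Laws for Neural Language Models”:

```python
# The functional form behind scaling-law plots: L(x) = (x_c / x) ** alpha,
# where x is compute, data set size, or parameter count. The constants
# below are made up for illustration, not fitted values.

def scaling_law_loss(x: float, x_c: float = 1e8, alpha: float = 0.05) -> float:
    """Loss predicted by a pure power law in x."""
    return (x_c / x) ** alpha

for compute in (1e18, 1e20, 1e22):
    # Each 100x increase in compute multiplies the loss by the same factor.
    print(f"compute={compute:.0e}: loss={scaling_law_loss(compute):.3f}")
```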

GPUs are becoming better-performing and cheaper every year. AI models are showing increasingly impressive results, which has led to greater acceptance of high compute costs, and there is a trend towards foundation models trained on ever-larger amounts of data. These factors suggest that all three of the variables above – training compute, data set size, and parameter count – will continue to grow in the coming years.[3] It is an open question whether simply scaling these factors will result in uncontrollable capabilities.


  1. This phrase originated in Rich Sutton’s seminal essay “The Bitter Lesson” (2019). ↩︎

  2. In terms of the number of training epochs, i.e., the number of times each element of the training data is used during training. ↩︎

  3. Training compute grew by 0.2 OOM/yr (orders of magnitude per year) up until the deep learning revolution around 2010, after which growth rates increased to 0.6 OOM/yr. A new trend of “large-scale” models emerged in 2016, trained with 2–3 OOMs more compute than other systems in the same period.

    The available stock of text and image data grew by 0.14 OOM/yr between 1990 and 2018 but has since slowed to 0.03 OOM/yr. Overall, projections by Epoch AI predict that we will have exhausted high-quality language data before 2026, low-quality language data somewhere between 2030 and 2050, and vision data between 2030 and 2060. This suggests the possibility of slower ML progress after the next couple of decades.

    Between the 1950s and 2018, model parameter counts grew at a rate of 0.1 OOM/yr – a total of about 7 orders of magnitude over those 68 years. In just the five years from 2018 to 2023, however, model sizes grew by another 4 orders of magnitude (not accounting for GPT-4, whose parameter count is not public knowledge). ↩︎
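
    As a quick check on the units in this footnote, growth of r OOM/yr means multiplying by 10^r each year; the sketch below reproduces the footnote’s own arithmetic:

```python
# Sanity-check the OOM/yr (orders of magnitude per year) arithmetic above.

def yearly_factor(oom_per_year: float) -> float:
    """Growth of r OOM/yr means multiplying by 10**r each year."""
    return 10 ** oom_per_year

def total_ooms(oom_per_year: float, years: float) -> float:
    """Total orders of magnitude accumulated over `years` years."""
    return oom_per_year * years

print(f"0.6 OOM/yr is about a {yearly_factor(0.6):.1f}x increase per year")  # ~4.0x
print(f"0.1 OOM/yr over 68 years is {total_ooms(0.1, 68):.1f} OOM")          # ~6.8, i.e. ~7 OOM
```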


