Will we ever build a superintelligence?

Humanity hasn't yet built a superintelligence, and might not be able to without significantly more knowledge and computational resources. An existential catastrophe could prevent us from ever obtaining those, so it's not certain that we'll ever build one.

However, in the absence of such a catastrophe, there is no known reason we couldn't build a superintelligence in the future. The majority of AI research is geared towards making AI systems more capable, and a significant share of top-level research aims at making them more generally capable. The economic incentive to develop more intelligent AI is driving major investment: 92 billion dollars was spent on advancing AI capabilities in 2022 alone, and a substantial increase is predicted for 2023.

Humans display "general" intelligence (i.e., we are capable of learning and adapting to a wide range of tasks and environments), but the human brain is neither the only nor the most efficient way to solve problems. One hint toward the possibility of superhuman general intelligence is the existence of AI systems that are already superhuman at narrow tasks: not only in performance (as in AlphaGo beating the Go world champion) but also in speed and precision (as in industrial sorting machines). There is nothing special or unique about human brains that unlocks capabilities which could not, in principle, be implemented in machines, and there is no reason to assume that human intelligence is the upper limit. Given this, we would expect AI to surpass human performance on all tasks as progress continues.

In addition, several research groups (such as OpenAI, Google DeepMind, and Anthropic) explicitly aim to create generally capable systems, and AI as a field is growing year after year. Critics of AI progress usually argue that we are taking too few precautions around AI's impact, or that general AI won't happen very soon, not that it will never happen at all.