If we solve alignment, are we sure of a good future?

If by “solve alignment” you mean “build a superintelligence whose goal is coherent extrapolated volition, or something else that captures human values”, then yes. Such a superintelligence could deploy technology near the limits of physics (e.g., atomically precise manufacturing) to solve most of the other problems facing us, and steer the future onto a highly positive path for perhaps many billions of years, up to the heat death of the universe (barring more esoteric existential risks like encounters with advanced hostile civilizations, false vacuum decay, or simulation shutdown).

However, if you only have alignment of a superintelligence to a single human, you still have the risk of misuse, so this should be at most a short-term solution. For example, suppose Google creates a superintelligent AI that listens to its CEO and does everything exactly the way the CEO would want. Even assuming the CEO has no hidden unconscious desires affecting the AI in unpredictable ways, this arrangement gives one person an enormous amount of power.


