Intro to AI safety
Introduction
This section explains and builds the case that AI poses an existential risk. It's too short to give more than a rough overview, but it links to other aisafety.info articles where more detail is available.
Summary
- AI systems far smarter than us may be created soon. AI is advancing fast, and this progress may result in human-level AI — but human-level is not the limit, and shortly after, we’d probably see superintelligent AI.
- These systems may end up opposed to us. AI systems may pursue their own goals, those goals may not match ours, and that may bring them into conflict with us.
- Consequences could be major, including human extinction. AI may defeat us and take over, leading to humanity’s extinction or permanent ruin. If we avoid this outcome, AI still has huge implications for the world, including great benefits if it’s developed safely.
- We need to get our act together. Experts are worried, but humanity doesn’t have a real plan to avert disaster, and you may be able to help.