Catastrophe
29 pages tagged "Catastrophe"
Isn't the real concern AI-enabled surveillance?
Is AI safety about systems becoming malevolent or conscious?
Is large-scale automated AI persuasion and propaganda a serious concern?
Can we list the ways a task could go disastrously wrong and tell an AI to avoid them?
If I only care about helping people alive today, does AI safety still matter?
How quickly could an AI go from harmless to existentially dangerous?
How likely is it that an AI would pretend to be a human to further its goals?
How can I come to terms emotionally with the urgency of AI safety?
Are Google, OpenAI, etc. aware of the risk?
Wouldn't it be a good thing for humanity to die out?
Why might a maximizing AI cause bad outcomes?
Why is AI alignment a hard problem?
Why does AI takeoff speed matter?
What is a "warning shot"?
How likely is extinction from superintelligent AI?
What are the differences between AI safety, AI alignment, AI control, Friendly AI, AI ethics, AI existential safety, and AGI safety?
What are accident and misuse risks?
Can't we limit damage from AI systems in the same ways we limit damage from companies?
Will AI be able to think faster than humans?
What is perverse instantiation?
Isn't the real concern with AI that it's biased?
What is reward hacking?
Why would a misaligned superintelligence kill everyone?
What is the "sharp left turn"?
Wouldn't AIs need to have a power-seeking drive to pose a serious risk?
Might someone use AI to destroy human civilization?
What is the EU AI Act?
Why would misaligned AI pose a threat that we can't deal with?
But won't we just design AI to be helpful?