Categories

Academia (6), Actors (6), Adversarial Training (7), Agency (6), Agent Foundations (18), AGI (19), AGI Fire Alarm (2), AI Boxing (2), AI Takeoff (8), AI Takeover (6), Alignment (6), Alignment Proposals (11), Alignment Targets (5), ARC (3), Autonomous Weapons (1), Awareness (5), Benefits (2), Brain-based AI (3), Brain-computer Interfaces (1), CAIS (2), Capabilities (20), Careers (16), Catastrophe (31), CHAI (1), CLR (1), Cognition (6), Cognitive Superpowers (9), Coherent Extrapolated Volition (3), Collaboration (5), Community (10), Comprehensive AI Services (1), Compute (8), Consciousness (5), Content (3), Contributing (32), Control Problem (8), Corrigibility (9), Deception (5), Deceptive Alignment (8), Decision Theory (5), DeepMind (4), Definitions (83), Difficulty of Alignment (10), Do What I Mean (2), ELK (3), Emotions (1), Ethics (7), Eutopia (5), Existential Risk (31), Failure Modes (17), FAR AI (1), Forecasting (7), Funding (10), Game Theory (1), Goal Misgeneralization (13), Goodhart's Law (3), Governance (24), Government (3), GPT (3), Hedonium (1), Human Level AI (6), Human Values (12), Inner Alignment (11), Instrumental Convergence (8), Intelligence (17), Intelligence Explosion (7), International (3), Interpretability (16), Inverse Reinforcement Learning (1), Language Models (11), Literature (5), Living document (2), Machine Learning (19), Maximizers (1), Mentorship (8), Mesa-optimization (6), MIRI (3), Misuse (4), Multipolar (4), Narrow AI (4), Objections (64), Open AI (2), Open Problem (6), Optimization (4), Organizations (16), Orthogonality Thesis (5), Other Concerns (8), Outcomes (3), Outer Alignment (15), Outreach (5), People (4), Philosophy (5), Pivotal Act (1), Plausibility (9), Power Seeking (5), Productivity (6), Prosaic Alignment (6), Quantilizers (2), Race Dynamics (5), Ray Kurzweil (1), Recursive Self-improvement (6), Regulation (3), Reinforcement Learning (13), Research Agendas (27), Research Assistants (1), Resources (22), Robots (8), S-risk (6), Sam Bowman (1), Scaling Laws (6), Selection Theorems (1), Singleton (3), Specification Gaming (11), Study (14), Superintelligence (38), Technological Unemployment (1), Technology (3), Timelines (14), Tool AI (2), Transformative AI (4), Transhumanism (2), Types of AI (3), Utility Functions (3), Value Learning (5), What About (9), Whole Brain Emulation (5), Why Not Just (16)

Objections

64 pages tagged "Objections"
We’re going to merge with the machines so this will never be a problem, right?
Do people seriously worry about existential risk from AI?
Isn’t it immoral to control and impose our values on AI?
Isn’t AI just a tool like any other? Won’t it just do what we tell it to?
Isn't the real concern technological unemployment?
Isn't the real concern autonomous weapons?
Isn't the real concern AI-enabled surveillance?
Is there a danger in anthropomorphizing AIs?
Is the UN concerned about existential risk from AI?
Is large-scale automated AI persuasion and propaganda a serious concern?
Can we list the ways a task could go disastrously wrong and tell an AI to avoid them?
Are AI self-improvement projections extrapolating an exponential trend too far?
If we solve alignment, are we sure of a good future?
If I only care about helping people alive today, does AI safety still matter?
How much computing power did evolution use to create the human brain?
How might things go wrong even without an agentic AI?
How might AI socially manipulate humans?
How might AGI kill people?
Does the importance of AI risk depend on caring about transhumanist utopias?
Do AIs suffer?
Could we tell the AI to do what's morally right?
Could we program an AI to automatically shut down?
Could AI have emotions?
Can you stop an advanced AI from upgrading itself?
Can we get AGI by scaling up architectures similar to current ones, or are we missing key insights?
Can we constrain a goal-directed AI using specified rules?
Can an AI be smarter than humans?
How can AI cause harm if it can't manipulate the physical world?
Wouldn't it be a good thing for humanity to die out?
Wouldn't a superintelligence be smart enough to avoid misunderstanding our instructions?
Why would we only get one chance to align a superintelligence?
Why would intelligence lead to power?
Why might people build AGI rather than better narrow AIs?
Why might a superintelligent AI be dangerous?
Why don't we just not build AGI if it's so dangerous?
Aren't there easy solutions to AI alignment?
Why can’t we just “put the AI in a box” so that it can’t influence the outside world?
Why can’t we just use Asimov’s Three Laws of Robotics?
Why can't we just turn the AI off if it starts to misbehave?
Why can't we just make a "child AI" and raise it?
What is a "value handshake"?
What are the ethical challenges related to whole brain emulation?
Isn’t the real concern with AI something else?
Wouldn't humans triumph over a rogue AI because there are more of us?
What are some arguments why AI safety might be less important?
How can an AGI be smarter than all of humanity?
Are corporations superintelligent?
What are the "no free lunch" theorems?
Can't we limit damage from AI systems in the same ways we limit damage from companies?
Isn't capitalism the real unaligned superintelligence?
Will AI be able to think faster than humans?
Wouldn't a superintelligence be slowed down by the need to do physical experiments?
Isn't the real concern with AI that it's biased?
Are AIs conscious?
Why should someone who is religious worry about AI existential risk?
Why would a misaligned superintelligence kill everyone in the world?
Aren't AI existential risk concerns just an example of Pascal's mugging?
Isn't the real concern misuse?
What are common objections to AI safety, and brief responses?
What is Vingean uncertainty?
Wouldn't AIs need to have a power-seeking drive to pose a serious risk?
Might anyone use AI to destroy human civilization?
Is smarter-than-human AI a realistic prospect?
What is Moravec’s paradox?