Categories

Academia (6), Actors (6), Adversarial Training (7), Agency (6), Agent Foundations (18), AGI (19), AGI Fire Alarm (2), AI Boxing (2), AI Takeoff (8), AI Takeover (6), Alignment (6), Alignment Proposals (11), Alignment Targets (5), ARC (3), Autonomous Weapons (1), Awareness (5), Benefits (2), Brain-based AI (3), Brain-computer Interfaces (1), CAIS (2), Capabilities (20), Careers (16), Catastrophe (31), CHAI (1), CLR (1), Cognition (6), Cognitive Superpowers (9), Coherent Extrapolated Volition (3), Collaboration (5), Community (10), Comprehensive AI Services (1), Compute (8), Consciousness (5), Content (3), Contributing (32), Control Problem (8), Corrigibility (9), Deception (5), Deceptive Alignment (8), Decision Theory (5), DeepMind (4), Definitions (83), Difficulty of Alignment (10), Do What I Mean (2), ELK (3), Emotions (1), Ethics (7), Eutopia (5), Existential Risk (31), Failure Modes (17), FAR AI (1), Forecasting (7), Funding (10), Game Theory (1), Goal Misgeneralization (13), Goodhart's Law (3), Governance (24), Government (3), GPT (3), Hedonium (1), Human Level AI (6), Human Values (12), Inner Alignment (11), Instrumental Convergence (8), Intelligence (17), Intelligence Explosion (7), International (3), Interpretability (16), Inverse Reinforcement Learning (1), Language Models (11), Literature (5), Living document (2), Machine Learning (19), Maximizers (1), Mentorship (8), Mesa-optimization (6), MIRI (3), Misuse (4), Multipolar (4), Narrow AI (4), Objections (64), OpenAI (2), Open Problem (6), Optimization (4), Organizations (16), Orthogonality Thesis (5), Other Concerns (8), Outcomes (3), Outer Alignment (15), Outreach (5), People (4), Philosophy (5), Pivotal Act (1), Plausibility (9), Power Seeking (5), Productivity (6), Prosaic Alignment (6), Quantilizers (2), Race Dynamics (5), Ray Kurzweil (1), Recursive Self-improvement (6), Regulation (3), Reinforcement Learning (13), Research Agendas (27), Research Assistants (1), Resources (22), Robots (8), S-risk (6), Sam Bowman (1), Scaling Laws (6), Selection Theorems (1), Singleton (3), Specification Gaming (11), Study (14), Superintelligence (38), Technological Unemployment (1), Technology (3), Timelines (14), Tool AI (2), Transformative AI (4), Transhumanism (2), Types of AI (3), Utility Functions (3), Value Learning (5), What About (9), Whole Brain Emulation (5), Why Not Just (16)

Superintelligence

38 pages tagged "Superintelligence"
What are the different possible AI takeoff speeds?
What are the differences between AGI, transformative AI, and superintelligence?
Do people seriously worry about existential risk from AI?
Might an aligned superintelligence force people to change?
Isn't the real concern technological unemployment?
Are AI self-improvement projections extrapolating an exponential trend too far?
How powerful would a superintelligence become?
How might we get from artificial general intelligence to a superintelligent system?
How might AI socially manipulate humans?
How long will it take to go from human-level AI to superintelligence?
Could a superintelligent AI use the internet to take over the physical world?
Can we test an AI to make sure it won't misbehave if it becomes superintelligent?
Can an AI be smarter than humans?
At a high level, what is the challenge of AI alignment?
Wouldn't a superintelligence be smart enough to know right from wrong?
Why would we only get one chance to align a superintelligence?
Why might we expect a superintelligence to be hostile by default?
Why might a superintelligent AI be dangerous?
Why is AI alignment a hard problem?
What would a good future with AGI look like?
How likely is extinction from superintelligent AI?
What is "AI takeoff"?
What is an intelligence explosion?
What is a "value handshake"?
What is "whole brain emulation"?
What is "superintelligence"?
What could a superintelligent AI do, and what would be physically impossible even for it?
What can we expect the motivations of a superintelligent machine to be?
What are the potential benefits of advanced AI?
Are corporations superintelligent?
Isn't capitalism the real unaligned superintelligence?
Wouldn't a superintelligence be slowed down by the need to do physical experiments?
What are the differences between a singularity, an intelligence explosion, and a hard takeoff?
What is AIXI?
What is a singleton?
Why would a misaligned superintelligence kill everyone in the world?
What is Vingean uncertainty?
Is smarter-than-human AI a realistic prospect?