What is the general nature of the concern about AI alignment?

The basic concern as AI systems become increasingly powerful is that they won’t do what we want them to do – perhaps because they aren’t correctly designed, perhaps because they are deliberately subverted, or perhaps because they do what we tell them to do rather than what we really want them to do (like in the classic stories of genies and wishes). Say we have an AI trained to maximize profit by trading on the stock market. Unless carefully designed to act in ways consistent with human values, a highly sophisticated AI trading system might employ strategies that even the most ruthless financier would disavow. Maintaining alignment between human interests and the AI’s choices and actions will be crucial.