Wouldn't a superintelligence be smart enough to know right from wrong?

There is no good reason to expect an arbitrary mind, which could be very different from our own, to share our values. A sufficiently smart and general AI system could understand human morality and values very well, but understanding our values is not the same as being compelled to act according to those values. It is in principle possible to construct very powerful and capable systems which value almost anything we care to mention. We can conceive of a superintelligence that only cares about maximizing the number of paperclips in the world. That system could fully understand everything about human morality, but it would use that understanding purely towards the goal of making more paperclips. It could be capable of reasoning about its values and goals, and modifying them however it wanted, but it would not choose to change them, since doing so would not result in more paperclips. There’s nothing to stop us from constructing such a system, if for some reason we wanted to.