Can you give an AI a goal which involves “minimally impacting the world”?

Yes. Penalizing an AI for affecting the world too much is called impact regularization, and it is an active area of alignment research. The hope is that an agent optimizing a penalized objective will accomplish its task while avoiding drastic, irreversible side effects.
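To make the idea concrete, here is a toy sketch (not any specific published method, and all names are illustrative): the agent's task reward is reduced in proportion to a crude "impact" measure, here the number of world-state features the agent changed relative to a baseline such as the state that would have resulted from doing nothing.

```python
def impact_penalty(state, baseline_state):
    """Crude 'impact' measure: count how many state features the agent
    changed relative to the baseline (e.g. inaction) state."""
    return sum(1 for s, b in zip(state, baseline_state) if s != b)

def regularized_reward(task_reward, state, baseline_state, lam=0.5):
    """Task reward minus lam times the impact penalty. Larger lam means
    the agent is punished more for disturbing the world."""
    return task_reward - lam * impact_penalty(state, baseline_state)

# Both actions earn task reward 1.0, but the one that changes three
# unrelated features of the world scores far worse than the one that
# changes only what the task requires.
careful = regularized_reward(1.0, (1, 0, 0, 0), (0, 0, 0, 0))
careless = regularized_reward(1.0, (1, 1, 1, 1), (0, 0, 0, 0))
print(careful, careless)  # 0.5 -1.0
```

Real proposals differ mainly in how they define the baseline and the distance measure, since naive choices can penalize good changes or fail to penalize bad ones.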

AISafety.info