Can you give an AI a goal which involves “minimally impacting the world”?

Yes — giving an AI a goal that involves minimally impacting the world is an active area of AI alignment research called Impact Regularization.
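
The rough shape of the idea is that the agent's reward is its ordinary task reward minus a penalty for how much it changes the world relative to some baseline, such as the world as it would have been had the agent done nothing. The sketch below is only a toy illustration of that shape under made-up assumptions (a world described by a small tuple of features, and a crude "count the changed features" penalty); it is not any particular published impact measure.

```python
# Minimal sketch of an impact-regularized reward in a toy setting.
# All names here (impact_regularized_reward, baseline_state, etc.) are
# hypothetical illustrations, not a specific method from the literature.

def impact_regularized_reward(state, baseline_state, task_reward, penalty_weight=1.0):
    """Return the task reward minus a penalty for deviating from a baseline.

    The penalty is a crude stand-in for "impact": it counts how many features
    of the world differ from the baseline (e.g. what would have happened if
    the agent had done nothing). Real impact measures, such as relative
    reachability or attainable utility preservation, are more sophisticated.
    """
    impact = sum(1 for s, b in zip(state, baseline_state) if s != b)
    return task_reward - penalty_weight * impact


# Example: the agent finished its task (reward 10) but changed 3 of the
# world's features relative to the "do nothing" baseline.
state = (1, 0, 1, 1, 0)
baseline_state = (0, 0, 0, 1, 1)
print(impact_regularized_reward(state, baseline_state,
                                task_reward=10.0, penalty_weight=2.0))  # 4.0
```

The interesting (and unsolved) part is choosing the baseline and the impact measure so that the penalty discourages dangerous side effects without also discouraging the useful work we actually want the agent to do.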