What are tiling agents?

An agent might be able to create similar or slightly improved versions of itself. These new agents can in turn create similar or improved versions of themselves, and so on. This repeating pattern is referred to as an agent "tiling" itself.
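The pattern can be illustrated with a toy sketch. Everything here (the `Agent` class, its fields, and `build_successor`) is hypothetical and exists only to show the shape of tiling: each agent builds a successor that keeps its goal and is at least as capable.

```python
# Toy illustration of "tiling": each agent builds a successor that
# preserves its goal and is at least as capable. All names here are
# hypothetical, chosen purely for illustration.

from dataclasses import dataclass


@dataclass
class Agent:
    goal: str
    capability: int

    def build_successor(self) -> "Agent":
        # The successor keeps the same goal and is slightly more capable.
        return Agent(goal=self.goal, capability=self.capability + 1)


# Generate a chain of five successive agents.
chain = [Agent(goal="original objective", capability=1)]
for _ in range(4):
    chain.append(chain[-1].build_successor())

# Every agent in the chain shares the original goal.
assert all(a.goal == chain[0].goal for a in chain)
print([a.capability for a in chain])  # capabilities 1 through 5
```

Note that in this sketch goal preservation holds by construction; the interesting question, addressed below, is whether a real agent could *verify* that property about its successors rather than assume it.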

This leads to the question: how can the original agent trust that these recursively generated agents still pursue goals similar to its own objective?

In a deterministic logical system, assuming all agents share the same axioms, "trust" would arise from the original agent being able to formally prove that the conclusions reached by any subsequently generated agents are true. Whether this form of trust is achievable is constrained by Löb's theorem, which implies that a consistent formal system cannot prove its own soundness. The resulting inability to establish this trust is called the Löbian obstacle.
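For concreteness, Löb's theorem can be stated as follows, where $\Box P$ abbreviates "$P$ is provable" in a consistent theory $T$ extending Peano Arithmetic:

$$
\text{If } T \vdash \Box P \rightarrow P, \text{ then } T \vdash P.
$$

In other words, $T$ can only prove "if $P$ is provable, then $P$ is true" for statements $P$ it can already prove outright. So an agent reasoning in $T$ cannot prove the blanket soundness claim "whatever my successor proves is true" for successors that also reason in $T$, since that would amount to $T$ proving $\Box P \rightarrow P$ for every $P$.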

See Also: Löbian obstacle, Löb's theorem, Vingean agents, Vingean reflection


AISafety.info