How can I work on assessing AI alignment projects and distributing grants?

It’s hard to get into this field, even though grantmaking capacity is a bottleneck for AI alignment (because it’s hard for funders to vet people to the point of trusting them to make good recommendations). If you already know someone who wants to give or regrant substantial money to AI safety and who trusts your judgment, if the people in charge of vetting grantmakers trust you, or if you want to give substantial money yourself, then this can be one of the highest-impact ways to spend your time. Otherwise, it will be hard to get started. Writing posts on evaluating and prioritizing projects is one way to demonstrate good judgment.

To do well here, you also need a strong inside-view understanding of the field. It can help to actively seek out people to offer grants to and, after making grants, to write follow-ups in places like the EA Forum.


