I’d like to get up to speed on current alignment research. What should I read?

This question assumes some familiarity with the basic case for AI safety, so the focus here is on resources for getting and staying current in alignment research.

Start here

For those wanting to get up to speed on the core background knowledge for AI alignment, we recommend starting with the materials below:

Further reading

For those wanting to work through these readings (and much more) with a group of people virtually, we recommend BlueDot Impact’s courses on AI Alignment and AI Governance. For those wanting a more technical route to upskilling, CAIS offers an Intro to ML Safety course.

*Reading-time estimates in parentheses are based on an average reading speed of 250 words per minute.