What should I do with my machine learning research idea for AI alignment?

Start by exploring whether your research idea has already been discussed. You could try the semantic search on AI Safety Ideas, search on AI Safety Papers, look in places like the Alignment Forum, and so on. If others are already working on the idea, you may be able to offer them perspective or insight, and you’ll avoid duplicating their work.

If your idea hasn’t been discussed, start discussing it! You could write it down and share it with a few people you trust, or post it on the AI Alignment Slack.

It can be useful to find “accountability buddies” or people researching similar topics, and to do feedback swaps in which you each take the time to understand the other’s research. Use your best judgment about which of the feedback you receive is accurate.

You can also try exchanging ideas in person, for example at a local hub, an EA Global event, or an AI safety camp. Finding a mentor (discussed more here) can also help you a lot.

To find and fix the ways in which your idea could fail, you can use techniques like red-teaming your own proposal and Murphyjitsu. To avoid getting stuck and spending a lot of time on a doomed project, deliberately seek out failure: the sooner you discover that a key assumption doesn’t hold, the more effort you’ll save. If you drag things out and save the hard parts for last, you’ll have that much less time to spend on your next project. Of course, some research ideas succeed, and that’s even better.

If the idea still seems worth pursuing, there are several places where you can apply for funding.
