Hey everyone! This is our first check-in from the AI Safety Info distillation fellowship. We are working on distilling all manner of content to make it easier for people to engage with AI safety and x-risk topics. Here is the AI Safety Info website (and its more playful clone, Stampy).

If you had a question in mind but didn't find a clear answer, type it into the search box and request an answer. It can be literally any question about AI safety, no matter how basic or advanced. It could also be a convoluted topic that you think could do with some distillation. One of the distillers will get to work on answering it.

Additionally, if you see content that you feel could be clearer or contains mistakes, you can leave a comment on the document by clicking the edit button at the bottom right of the answer.

We plan to make this a regular post where we share some of the questions that have been answered or topics that have been distilled recently. Since this is the first one, here is a longer list, with links to answers, of some of the questions we answered over the last month:

This is crossposted to LessWrong: https://www.lesswrong.com/posts/FYRYhkdAQoQibasNB/new-distillations-on-stampy-s-ai-safety-info-expansive
