I'm working on a new World's Biggest Problems quiz, which should be published on Clearer Thinking in the next couple of weeks. It takes 5-15 minutes and covers global health, animal welfare, and existential risk.

Could you leave your feedback and suggestions for improvement in this Google Doc? I'll integrate your comments this week. (link to quiz here)

Thank you in advance for your help! :)

Comments

Andre -- I just did the X risk quiz. It's pretty cool; nice graphics, good resources, seems engaging. Nice work.

Hey Geoffrey, I'm a fan of yours on Twitter. I'm glad you liked the quiz! Have a great day :)

I completed the three quizzes and enjoyed them thoroughly.

Without any further improvements, I think these quizzes would still be quite effective. It would be nice to have a completion counter (e.g., an "X/Total questions complete" readout) at the bottom of the quizzes, but I don't know if this is possible on quizmanity.

Hey Rodeo, glad you enjoyed the three quizzes! 

Thank you for your feedback. I'll pass it on to GuidedTrack, where I host the program. For now, there's a completion bar at the top, but it's a bit thin and doesn't show numbers.
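In the meantime, I might be able to fake a numeric counter with GuidedTrack variables. Here's a rough sketch of what I have in mind, with a placeholder question (I'm writing GuidedTrack's >> assignments and {variable} interpolation from memory, so the exact syntax needs double-checking against their docs):

```
-- Hypothetical fragment: emulate an "X/Total" counter with variables
>> total = 10
>> answered = 0

*question: (a quiz question goes here)
	Option A
	Option B

-- After each question, bump the counter and display it
>> answered = answered + 1
Question {answered} of {total} complete
```

Repeating the increment and the display line after each *question would put an X/Total readout on every page, though a built-in numbered progress bar would obviously be cleaner.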

I saw that you work in AI safety, so maybe you can help me clear up two doubts:

  • Do AI expert surveys still predict a 50% chance of transformative AI by 2060? (By "transformative AI" I mean AI that automates all the activities needed to speed up scientific and technological progress.)
  • Is it right to phrase the question above in terms of "transformative AI"? Or should I call it AGI and give it a different definition? I took the term "transformative AI" and the 2060 timeline from Holden Karnofsky.

I am not the best person to ask (@so8res, @katja_grace, or @holdenkarnofsky would know better), but I will try to offer some points.

  • These links should be quite useful: 
  • I don't know of any recent AI expert surveys on transformative AI timelines specifically, but I have pointed you to very recent ones on human-level machine intelligence (HLMI) and AGI.
  • For comprehensiveness, I think you should cover both transformative AI (AI that precipitates a change of magnitude equal to or greater than that of the agricultural or industrial revolution) and HLMI. I have yet to read Holden's AI timelines post, but I believe it's likely a good resource to defer to, given Holden's epistemic track record, so I think you should use it for the transformative AI timeline. For the HLMI timeline, I think you should use the 2022 expert survey (the first link). Additionally, if you trust that a techno-optimist-leaning crowd's forecasting accuracy generalizes to AI timelines, it might be worth checking out Metaculus as well.
  • Lastly, I think it might be useful to ask, in the existential risk section, what percentage of ML/AI researchers think AI safety research should be prioritized (from the survey: "The median respondent believes society should prioritize AI safety research “more” than it is currently prioritized. Respondents chose from “much less,” “less,” “about the same,” “more,” and “much more.” 69% of respondents chose “more” or “much more,” up from 49% in 2016.").

Thanks for the links, Rodeo. I appreciate your effort to answer my questions. :)

I can add the percentage of AI researchers who think safety research should be prioritized to an answer explanation - thanks for that!

I can only fit a limited number of questions into the quiz, so I would have to sacrifice other questions to include one on HLMI vs. transformative AI. Also, Holden's transformative AI timeline appears to match the 2022 expert survey's HLMI timeline (2060), so I think one timeline question should do the trick.

I'm considering just writing "Artificial General Intelligence," which is similar to HLMI, because it's the most easily recognizable term for a large audience.

Glad to hear that the links were useful!

Sticking with Holden's timeline sounds good, and I agree that AGI > HLMI in terms of recognizability. I hope the quiz goes well once it's officially released!
