MaxRa

3966 karma · Joined · Seeking work · Berlin, Germany

Bio


Hi, I'm Max :)

  • looking for work in AI governance (general strategy, expert surveys, research infrastructure, EU tech policy fellow)
  • background in cognitive science & biology (did research on metacognition)
  • most worried about AI going badly for technical & coordination reasons
  • vegan for the animals
  • doing my own forecasts: https://www.metaculus.com/accounts/profile/110500/

Comments

Thanks so much for sharing your writing, it resonated deeply with me and made me cry more than once.

I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I'm going to stop now.

Noooo, sorry you feel that way. T_T I think your sharing your thinking here is really helpful for the broader EA and good-doer field, and I think it's an unfortunate pattern that online communication quickly feels (or even is) somewhat exhausting and combative.

Just an idea, maybe you would have a much better time doing an interview with e.g. Spencer Greenberg on his Clearer Thinking podcast, or Robert Wiblin on the 80,000 Hours podcast? I feel like they are pretty good interviewers who can ask good questions that make for accurate and informative interviews.


(Just want to say, I really appreciate you sharing your thoughts and being so candid, Dustin. I find it very interesting and insightful to learn more about your perspective.)

Would it be possible to set up a fund that pays people for the damages they incur from a lawsuit if they end up being found innocent? That way the EA community could make speaking up less risky for those who haven't done so yet, and also signal how valuable their information is to it.


Meal replacement companies were there for us, through thick and slightly less thick.

https://queal.com/ea

Just in case someone interested in this has not read it yet, I think Zvi's post about it was worth reading.

https://thezvi.substack.com/p/openai-the-board-expands

Thanks for your work on this, super interesting!

Based on just quickly skimming, this part seems most interesting to me, and I feel like discounting the sceptics' bottom line because their points seem relatively unconvincing to me (either unconvincing on the object level, or because I suspect the sceptics haven't thought deeply enough about the argument to evaluate how strong it is):

We asked participants when AI will displace humans as the primary force that determines what happens in the future. The concerned group’s median date is 2045 and the skeptic group’s median date is 2450—405 years later.

[Reasons for the ~400-year discrepancy:]

● There may still be a “long tail” of highly important tasks that require humans, similar to what has happened with self-driving cars. So, even if AI can do >95% of human cognitive tasks, many important tasks will remain.

● Consistent with Moravec’s paradox, even if AI has advanced cognitive abilities it will likely take longer for it to develop advanced physical capabilities. And the latter would be important for accumulating power over resources in the physical world.

● AI may run out of relevant training data to be fully competitive with humans in all domains. In follow-up interviews, two skeptics mentioned that they would update their views on AI progress if AI were able to train on sensory data in ways similar to humans. They expected that gains from reading text would be limited.

● Even if powerful AI is developed, it is possible that it will not be deployed widely,because it is not cost-effective, because of societal decision-making, or for other reasons.

And, when it comes to outcomes from AI, skeptics tended to put more weight on possibilities such as

● AI remains more “tool”-like than “agent”-like, and therefore is more similar to technology like the internet in terms of its effects on the world.

● AI is agent-like but it leads to largely positive outcomes for humanity because it is adequately controlled by human systems or other AIs, or it is aligned with human values.

● AI and humans co-evolve and gradually merge in a way that does not cleanly fit the resolution criteria of our forecasting questions.

● AI leads to a major collapse of human civilization (through large-scale death events, wars, or economic disasters) but humanity recovers and then either controls or does not develop AI.

● Powerful AI is developed but is not widely deployed, because of coordinated human decisions, prohibitive costs to deployment, or some other reason.

I agree that things like confirmation bias and myside bias are huge drivers impeding "societal sanity". And I also agree that it won't help a lot here to develop tools to refine probabilities slightly more.

That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. Like other commenters say, it's currently mostly attracting a niche of people who strive for higher epistemic ideals, who try to contribute to better forecasts on important topics, etc. I currently feel like it's not intractable for quantitative forecasts to become more common in epistemic spaces filled with reasonable enough people (e.g. journalism, politics, academia). Kinda similar to how tracking KPIs was probably once a niche new practice and is now standard practice.

Thanks, I think that's a good question. Some (overlapping) reasons that come to mind that I give some credence to:

a) relevant markets are simply making an error in neglecting quantified forecasts

  • e.g. COVID was an example where I remember some EA-adjacent people making money because investors were significantly underrating the pandemic potential
  • I personally find it plausible when looking e.g. at the quality of think tank reports, which seems significantly curtailed by the number of vague propositions that would be much more useful if they were more concrete and quantified

b) relevant players train the relevant skills sufficiently well in their employees themselves (e.g. that's my fairly uninformed impression of what Jane Street is doing, and maybe also Bridgewater?)

c) quantified forecasts are so uncommon that it still feels unnatural to most people to communicate them, and it feels cumbersome to be nailed down on giving a number if you are not practiced in it

d) forecasting is a nerdy practice, and those practices need bigger wins to be adopted (e.g. maybe similar to learning programming/math/statistics, working with the internet, etc.)

e) maybe more systematically I'm thinking that it's often not in the interest of entrenched powers to have forecasters call bs on whatever they're doing.

  • in corporate hierarchies people in power prefer the existing credentialism, and oppose new dimensions of competition
  • in other arenas there seems to be a constant risk of forecasters raining on your parade

f) maybe previous forecast-like practices ("futures studies", "scenario planning") didn't yield many benefits and made companies unexcited about similar practices (I personally have a vague sense of not being impressed by things I've seen associated with these words)
