I'm one of the characters here (the electrical-grid guy, and the friend who considered starting it with him in Fall 2020/2021).

10/10 blog post, and I agree with most things here (both that the claims are correct and that the general anecdotes apply to me personally). There are a few points I'd like to highlight.

I'm skeptical of longtermism (particularly the AGI threat mentioned above). I'm afraid that a combination of:

1. Grifters masquerading as sympathizers

2. Bias towards agreeing with people when confronted

3. The general unknowability of the future (especially future technological progress)

4. Financial incentives to overestimate unlikely outcomes (since the EV of prevention is much higher) in order to prop up the current value of the work (see the toy calculation below)

de facto p-hacks the analyses of the average person within EA spheres. There may very well be extreme humanity-scale downsides, but (if these patterns manifest the way they might) the p-hacking applies not only to the estimated downside but to the estimated effectiveness of current actions as well. If the AGI downside analysis is correct and the tools that exist are de facto worthless (which a fair number of AI-alignment folks have told us is a possibility), then it makes more sense to pivot towards "direct action", "apolitical" work, or "Butlerian Jihad accelerationism". These conclusions are quite uncomfortable (and often discarded for good reason, IMO), and their downsides make other X-risk areas a much stronger consideration instead: not because the downside is larger per se, but because we have a reasonable grasp on the mechanisms at play, both as they exist today and as they will probably exist in the future (nuclear issues have been around for the past 80 years, and pandemics since the beginning of civilization and animal domestication). I believe this conclusion (i.e. that the effectiveness of current mitigation strategies is massively overestimated, NOT that the other "solutions" ought to be pursued) has been discounted because:
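To make the incentive in point 4 concrete, here is a toy EV calculation; every number in it (the value at stake, the cost of the work, the fraction of risk reduced, and the probabilities) is a made-up assumption, not anyone's actual estimate:

```python
# Toy model of point 4: when the value at stake is astronomically large,
# a small upward nudge in the estimated catastrophe probability flips the
# apparent EV of prevention work. All numbers are hypothetical.

VALUE_AT_STAKE = 1e15   # assumed "value" of averting the catastrophe
COST_OF_WORK = 1e8      # assumed cost of the prevention program
RISK_REDUCTION = 0.01   # assumed fraction of the risk the work removes

def ev_of_prevention(p_catastrophe: float) -> float:
    """Expected value of funding the work under this toy model."""
    return p_catastrophe * RISK_REDUCTION * VALUE_AT_STAKE - COST_OF_WORK

for p in (1e-6, 1e-4, 1e-2):
    print(f"p = {p:.0e}: EV = {ev_of_prevention(p):+.3e}")
# p = 1e-06: EV = -9.000e+07  (net-negative)
# p = 1e-04: EV = +9.000e+08  (a 100x nudge in p flips the sign)
# p = 1e-02: EV = +9.990e+10
```

The specific numbers don't matter; the point is that EV scales linearly with p, and when p is unknowable, whoever supplies the estimate effectively sets the conclusion.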

1. it implies the other "solutions" may be preferred if AGI alignment is to be "solved" given our understanding today

2. it isn't a comfortable conclusion for a community of engineers

3. the associated pivot to something else would introduce massive friction (running into the issues Arjun mentioned above re: ideological shift).

I know at least five people at the University of Illinois who have refused to associate with EA because of (what they see as) a disproportionate focus on AI alignment (for the reasons I've listed above, and more). All of them have a solid background in either AI or in formal verification of AI and non-AI systems. I'm not here to say whether the focus on AGI is valid, but I am here to say that it is polarizing for a large number of people who would otherwise contribute, and this isn't considered in how EA presents itself to these high-EV potential contributors.

The downside of no longer being EA-aligned is probably better than the downside of no longer being VC-aligned, which is the more common case in these high-risk, high-reward industries. EA not taking an equity stake in something that (depending on the context) might be revenue-generating minimizes the incentive-alignment downsides, because there is no authority that can force change other than by withholding money (which is a conscious decision on the part of the founder/leader, not a forced circumstance like being fired by a vote of the board).

I'd love to build philanthropic rockets with the communities here (and I've reached out to multiple industry people at EAGxBoston and beyond). EA and its systems are a great positive for the world, but in a space where effectiveness is gauged in relative terms on ill-posed problems, it's difficult to be aware of biases as they happen. Hopefully the AGI-extinction sample size remains zero, so it'll remain an open question unless somebody finally explains Infrabayesianism to me :)

Yeah, that makes sense. US foreign policy is relatively consistent across back-to-back presidents from different parties. There is pretty large overlap with the George Mason economists, which might become more relevant as the "young EAs" advance in their careers and the party naturally shifts its views.