Pro-pluralist, pro-bednet, anti-Bay EA. 🔸 10% Pledger.
Something that has come up a few times, and recently a lot in the context of Debate Week (and the reaction to Leif's post), is posts getting downvoted quickly and dropping off the Front Page, which drastically reduces the likelihood of engagement.[1]
So a potential suggestion for the Frontpage might be:
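The kind of thing I have in mind is a small tweak to a time-decay ranking formula. To be clear, I don't know how the Forum actually ranks posts, so all the names and numbers below are hypothetical: the idea is just to give new posts a short grace period during which net-negative karma is dampened, so an early wave of downvotes doesn't immediately knock a post off the Frontpage before anyone has engaged with it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    karma: int        # net upvotes minus downvotes
    age_hours: float  # time since posting

# All constants are illustrative guesses, not the Forum's real parameters
GRACE_PERIOD_HOURS = 6   # window in which early downvotes count less
DOWNVOTE_DAMPING = 0.5   # net-negative karma is halved during the grace period
GRAVITY = 1.8            # standard time-decay exponent (HN-style ranking)

def frontpage_score(post: Post) -> float:
    """Time-decayed ranking score that softens early downvote pile-ons."""
    karma = float(post.karma)
    if post.age_hours < GRACE_PERIOD_HOURS and karma < 0:
        # Don't let a quick pile-on fully bury a brand-new post
        karma = karma * DOWNVOTE_DAMPING
    return karma / ((post.age_hours + 2) ** GRAVITY)

# A one-hour-old post sitting at -4 karma is scored as if it were at -2,
# keeping it visible long enough to attract counter-votes and comments
early_post = Post(karma=-4, age_hours=1.0)
settled_post = Post(karma=-4, age_hours=12.0)
```

The design choice here is that the dampening only applies to *negative* karma inside the grace window; positive early karma is untouched, so popular posts rise exactly as before and only the "downvoted to oblivion in the first hour" failure mode is softened.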
Maybe some code like this already exists, but this thought popped into my head and I thought it was worth sharing on this post.
My poor little piece on gradient descent got wiped out by debate week 😭 rip
In a couple of places I've seen people complain about the use of the Community tag to 'hide' particular discussions/topics. Not saying I fully endorse this view.
I think the 'meat-eating problem' > 'meat-eater problem' suggestion came up in my comment and the associated discussion here, but possibly somewhere else.[1]
(I still stand by the comment, and I don't think it's contradictory with my current vote placement on the debate week question)
On the platonic/philosophical side I'm not sure, I think many EAs weren't really bought into it to begin with and the shift to longtermism was in various ways the effect of deference and/or cohort effects. In my case I feel that the epistemic/cluelessness challenge to longtermism/far future effects is pretty dispositive, but I'm just one person.
On the vibes side, I think the evidence is pretty damning:
That's just my personal read on things though. But yeah, it seems very much like that SBF-Community Drama-OpenAI board triple whammy from Nov 2022 to Nov 2023 sounded the death knell for longtermism, at least as the public-facing justification of EA.
For the avoidance of doubt, not gaining knowledge from the Carl Shulman episodes is at least as much my fault as it is Rob and Carl's![1] I think similar to his appearance on the Dwarkesh Podcast, it was interesting and full of information, but I'm not sure my mind has found a good way to integrate it into my existing perspective yet. It feels unresolved to me, and something I personally want to explore more, so a version of the post written later in time might include those episodes high up. But writing this post from where I am now, I at least wanted to own my perspective/bias leaning against the AI episodes rather than leave it implicit in the episode selection. But yeah, it was very much my list, and therefore inherits all of my assumptions and flaws.
I do think working in AI/ML means that the relative gain of knowledge may still be lower in this case compared to learning about the abolition of slavery (Brown #145) or the details of fighting Malaria (Tibenderana #129), so I think that's a bit more arguable, but probably an unimportant distinction.
(I'm pretty sure I didn't listen to part 2, and can't remember how much of part 1 I listened to versus reading some of the transcript on the 80k website, so these episodes may be a victim of the 'not listened to fully yet' criterion)
I just want to publicly state that the whole 'meat-eater problem' framing makes me incredibly uncomfortable.
For clarification, I think Factory Farming is a moral catastrophe and I think ending it should be a leading EA cause. I just think that the latent misanthropy in the 'meat-eater problem' framing/worldview is also morally catastrophic.
In general, reflecting on this framing makes it ever more clear to me that I'm just not a utilitarian or a totalist.
Hey Ben, I'll remove the tweet images since you've deleted them. I'll probably rework the body of the post to reflect that, and I'm happy to make edits/retractions for anything you think is unfair.
I apologise if you got unfair pushback as a result of my post, and regardless of your present/future affiliation with EA, I hope you're doing well.
I appreciate the pushback anormative, but I kinda stand by what I said and don't think your criticisms land for me. I fundamentally reject your assessment of what I wrote/believe as 'targeting those who wish to leave', or as saying people 'aren't allowed to criticise us' in any way.
and here - which is how I found out about the original tweets in the first place
Like Helen Toner might have disassociated/distanced herself from the EA Community or EA publicly, but her actions around the OpenAI board standoff have had massively negative consequences for EA imo
I expect I'll probably agree with a lot of his criticisms, but disagree that they apply to 'the EA Community' as a whole as opposed to specific individuals/worldviews who identify with EA
<edit: Ben deleted the tweets, so it doesn't feel right to keep them up after that. The rest of the text is unchanged for now, but I might edit this later. If you want to read a longer, thoughtful take from Ben about EA post-FTX, then you can find one here>
This makes me feel bad, and I'm going to try and articulate why. (This is mainly about my gut reaction to seeing/reading these tweets, but I'll ping @Benjamin_Todd because I think subtweeting/vagueposting is bad practice and I don't want to be hypocritical.) I look forward to Ben elucidating his thoughts if he does so and will reflect and respond in greater detail then.
Thanks Aaron, I think your responses to me and Jason do clear things up. I still think the framing of it is a bit off though:
Secondary interpretation is: "EA principles imply one should make a quantitative point estimate of the good of all your relevant moral actions, and then act on the leading option in a 'shut-up-and-calculate' way. I now believe many fewer actors in the EA space actually do this than I did last year"
For example, in Ariel's piece, Emily from OpenPhil implies that they have much lower moral weights on animal life than Rethink does, not that they don't endorse doing 'the most good' (I think this is separable from OP's commitment to worldview diversification).
A thought about AI x-risk discourse and the debate over how "Pascal's Mugging"-like AIXR concerns are, and why this causes confusion between those who are concerned and those who are sceptical.
I recognise a pattern where a sceptic will say "AI x-risk concerns are like Pascal's wager/are Pascalian and not valid" and then an x-risk advocate will say "But the probabilities aren't Pascalian. They're actually fairly large"[1], which usually devolves into a "These percentages come from nowhere!" "But Hinton/Bengio/Russell..." "Just useful idiots for regulatory capture..." discourse doom spiral.
I think a fundamental miscommunication here is that, while the sceptic is using/implying the term "Pascalian", they aren't concerned[2] with the probability being incredibly small but the impact high; they're instead concerned about trying to take actions in the world - especially ones involving politics and power - on the basis of subjective beliefs alone.
In the original wager, we don't need to know anything about the evidential record for a certain God existing or not: if we simply accept Pascal's framing and premises, then we end up with the belief that we ought to believe in God. Similarly, when this term comes up, AIXR sceptics are concerned about changing beliefs/behaviour/enacting laws based on arguments from reason alone that aren't clearly connected to an empirical track record. Focusing on which subjective credences are proportionate to act upon is not likely to be persuasive compared to providing the empirical goods, as it were.
Let's say x>5% in the rest of the 21st century for sake of argument
Or at least it's not the only concern, perhaps the use of EV in this way is a crux, but I think it's a different one