My read is that it wasn't the statistics they got hammered on but the misrepresentation of other people's views of them as endorsements (e.g. James Snowden's views). I will also say the AI side does get this kind of criticism, just not on cost-effectiveness: it comes on things like the culture war (AI Ethics vs. AI Safety) and dooming about techniques (e.g. working in a big company vs. a more EA-aligned research group, and the RLHF discourse).
I think this is an actual position. Isn't it the stochastic parrots argument? Just recently, a post by a cognitive scientist defended this belief.
I always thought the standard model for "don't let AI Safety enter the mainstream" was something like: (1) you'll lose credibility and be called a loon, and (2) it'll drive race dynamics and salience. Instead, I think the argument AI Ethics makes is "these people aren't so much loons as they are doing hype marketing for AI products in the status quo, and draining counterfactual political capital from real near-term harms".
Great question that prompted a lot of thinking. I think my internal model looks like this:
The fact that EAs have been so caught off guard by the "AI x-risk is a distraction" argument, and its stickiness in the public consciousness, should be worrying for how well calibrated we are about AI governance interventions working the way we collectively think they will. This feels like another Carrick Flynn situation. I might write up an ITT for the AI Ethics side -- I think there's a good analogy to an SSC post that EAs generally like.
I think reddit moderation is probably the wrong benchmark. Firstly, because the tail risk of unpaid moderation is really bad (e.g. the base rate of moderator-driven meltdowns in big subreddits is really high). Secondly, I just don't think we should underpay people in EA, because (a) it creates financial barriers to entry that have long-term effects (e.g. unpaid internships in publishing have made the wider labour market for journalism terrible), and (b) it creates even more informal barriers, which means we lean on informal relationships in EA even more.
I'd guess the funding mechanism has to be somewhat different, given the incentives at play with AI x-risk. Specifically, the Omega critiques seem bottlenecked not by funding but by time and anonymity, in ways that can't be solved with money.