zchuang

619 karma

I'd guess the funding mechanism has to be somewhat different given the incentives at play with AI x-risk. Specifically, the Omega critiques do not seem bottlenecked by funding but by time and anonymity in ways that can't be solved with money. 

My read is that it wasn't the statistics they got hammered on, but misrepresenting other people's views of them as endorsements, e.g. James Snowden's views. I will also say the AI side does get this kind of criticism, though not on cost-effectiveness but on things like culture war dynamics (AI Ethics vs. AI Safety) and dooming about technique choices (e.g. working at a big company vs. a more EA-aligned research group, and the RLHF discourse). 

I think this is an actual position. It's the stochastic parrots argument, no? Just recently, a post by a cognitive scientist endorsed this belief.

What's your rate of success after pushback? Do organisations usually take the more junior person as a speaker?

This strongly resonated with me, especially after taking part in XPT. I think I set my expectations really high and got frustrated with the process, and I now take a relaxed approach to forecasting as a fun thing I do on the side instead of something I actively want to take part in as a community member. 

I always thought the average model for "don't let AI Safety enter the mainstream" was something like: (1) you'll lose credibility and be called a loon, and (2) it'll drive race dynamics and salience. Instead, I think the argument AI Ethics makes is "these people aren't so much loons as they are just doing hype marketing for AI products in the status quo, and draining counterfactual political capital from real near-term harms".

Great question that prompted a lot of thinking. I think my internal model looks like this:

  1. On the meta level, it feels as if EAs have a systematic error in their model that underestimates public distrust of EA actions, which constrains both the action space and our collective sense-making of the world. 
  2. I think legacy media organisations buy into the framing solidly, especially organisations whose role is policing others, such as the CJR (Columbia Journalism Review). 
  3. Just in my own life, I've noticed a lot of the "elite"-sphere friends I have at ivies, in competitive debating, etc. are much more apprehensive towards EA and AI Safety types of discourse in general, and attribute it to this frame. Specifically, I'm reminded of the idea of inherency from policy debating: people look for frames that explain the underlying barrier to, and motivation for, change.
    1. I think directly this is bad for cooperation on the governance side (e.g. a lot of the good research on timelines and regulation are currently being done by some people with AI Ethics sympathies).
    2. I think EAs underestimate how many technically gifted people who could be doing technical research are put off by EAs who throw around philosophy ideas that are ungrounded in technical acumen. This frame neatly compounds this aversion.

The fact that EAs have been so caught off guard by the "AI x-risk is a distraction" argument, and its stickiness in the public consciousness, should be worrying for how well calibrated we are on AI governance interventions working the way we collectively think they will. This feels like another Carrick Flynn situation. I might write up an ITT (Ideological Turing Test) for the AI Ethics side -- I think there's a good analogy to an SSC post that EAs generally like.

I think reddit moderation is probably the wrong benchmark. Firstly, because the tail risk of unpaid moderation is really bad (e.g. the base rate of moderator-driven meltdowns in big subreddits is really high). Secondly, I just don't think we should underpay people in EA, because (a) it creates financial barriers to entry into EA that have long-term effects (e.g. unpaid internships in publishing have made the wider labour market for journalism terrible), and (b) it'll create huge amounts of additional informal barriers that mean we lean on informal relationships in EA even more.

Well, OpenAI just announced that they're going to spend 20% of their compute on alignment over the next four years, so I think it's paid off, prima facie. 
