I see where Zach is coming from and I sort of agree with his assessment of my work at RP. While I'm proud of what I did (including unpublished, internal, and supporting-role work), ultimately I don't think my work at RP was unusually impactful within my cause area, and I think my manager would agree. (I felt conflicted about wild animal welfare as a cause area, and part of the reason I was ready to leave RP when I did to do moratorium organizing is that I had lost confidence that WAW was tractable. My public work was on the best interventions within the cause area, which were far lower impact than, say, farmed animal welfare, so in EA absolute-impact terms I think that work can only be so good.)
Step one is establishing whether chewier foods do, in fact, promote jaw growth. It could be that hunter-gatherers develop differently for a number of reasons (you mentioned some possibilities), or it could be that people with a long history of ancestors living in more developed civilizations have experienced weaker selection pressure on jaw development (or even competing pressures that favor smaller jaws for some reason, as is the trend in human evolution compared to other primates).
But I like the idea!
I agreed with most of the beginning of the post, but the specifics of where to cut perks seemed highly context-specific, and I think readers should keep that in mind.
For example, in the Bay, a lot of community “perks” like the former Lightcone coworking space are much less perky than they may appear, because people live in rented rooms in group houses and don’t have stable jobs. Those spaces made the community in the Bay much more possible. High salaries are often a must not only because of the high cost of living but also to compensate for the lack of benefits and low job security.
I guess what I’m saying is let’s not stigmatize the appearance of nice things when there are lots of tradeoffs in different org/community circumstances.
I guess my real question is “how can you feel safe accepting the idea that ML or RL agents won’t show instrumental convergence?” Are you saying AIs trained this way won’t be agents? Because I don’t understand how we could call something AGI that doesn’t figure out its own solutions to reach its goals, and I don’t see how it can do that without stumbling on things that are generally good for achieving goals.
And regardless of whatever else you’re saying, how can you feel safe that the next training regime won’t lead to instrumental convergence?
I think this is true, and I only discovered in the last two months how attached a lot of EA/rat AI Safety people are to going ahead with creating superintelligence, even though they think the chances of extinction are high, because they want to reach the Singularity (ever, or in their lifetimes). I’m not particularly transhumanist and this shocked me, since averting extinction and s-risk is obviously the overwhelming goal in my mind (not to mention the main thing these Singularitarians would talk about to others). It made me wonder if we could have sought regulatory solutions earlier, and we didn’t because everyone was so focused on alignment or bust…
I think you absolutely should take these questions to heart and not feel compelled to follow the EA consensus on any of them. Especially with alignment, it’s hard to do independent thinking without feeling like a fool, but I say we should all be braver and volunteer to be the fool sometimes to make sure we aren’t in the Emperor’s New Clothes.
And I also appreciate Austin’s vote of confidence and don’t think he did anything wrong in hyping us, even if he finds it prudent to do things differently in the future.