An undergrad at the University of Maryland, College Park. Majoring in math.
After finishing The Sequences at the end of 9th grade, I started following the EA community, changing my career plans to AI alignment. If anyone would like to work with me on this, PM me!
I’m currently starting the EA group for the University of Maryland, College Park.
I do think this is correct to an extent, but also that much moral progress has been made by reflecting on our moral inconsistencies and smoothing them out. I at least value fairness, which is a complicated concept, but one that is actively repulsed by the idea that those closer to me should weigh more in society's moral calculations. Other values I have, like family, convenience, selfish hedonism, and friendship, are at odds with this fairness value in many circumstances.
But I think it's still useful to connect the drowning child argument with the parts of me which resonate with it, and to think about how much I actually care about those parts of me over other parts in such circumstances.
Human morality is complicated, and I would prefer more people 'round these parts do moral reflection by doing & feeling rather than thinking, but I don't think there's no place for argument in moral reflection.
"Otherwise I think that you are in part spending 80k's reputation in endorsing these organizations"
Agree on this. For a long time I've had a very low opinion of 80k's epistemics[1] (both the podcast and the website), and having orgs like OpenAI and Meta on there was a big contributing factor[2].
In particular, they try to present themselves as an authoritative source on strategic matters concerning job selection while not doing the necessary homework to actually claim such status, and when they do add clarifications, they put them in articles (and parts of articles) that empirically nobody reads and that I've found hard to find. ↩︎
Probably second to their horrendous SBF interview. ↩︎
The latter two points don’t seem obviously correct to me.
First, the US already has a significant amount of food security, so it's unclear whether cultivated meats would actually add much.
Second, if cultivated meats destroy the animal agriculture industry, this could very easily lead to a net loss of jobs in the economy.
"rationalist community kind of leans right wing on average"
Seems false. It leans right compared to the extreme left wing, but right compared to the general population? No. It's too libertarian for that. I bet rightists would also say it leans left, and centrists would say it's too extreme. Overall, I think it's just classically libertarian.
There's much thought in finance about this. Some general books are:
And more particularly, The Black Swan: The Impact of the Highly Improbable, along with other stuff by Taleb (this is kind of his whole thing).
The same standards I'd apply to anything else: a decent track record of such experiments succeeding, and/or a well-supported argument based on (in this case) sound economics.
So far the track record is heavily against. Indeed, many of the worst calamities in history took the form of "revolution".
Lacking that track record, you need one hell of an argument to explain why your plan is better, which at a minimum likely requires basing it on sound economics (which, if you want particular pointers, mostly means the Chicago school, though sufficiently good complexity economics would also be fine).
This is not the central threat, but if you did want a mechanism, I recommend looking into the Krebs cycle.