This post is mostly noise: it restates a basic point that goes back over a decade, without elaborating on it or engaging objections to naive utilitarianism. There is prior literature on the topic, and I want you to do better, because this is an important topic to me.

The SBF example is a poor one that obscures the basic point. You don't address the hard question of whether his fraud-funded donations were or weren't worth the moral and reputational damage, which is debatable and a separate interesting topic I haven't seen rigorous analysis of. You open a can of ethical worms and then don't address it, which reasonably looks bad to low decouplers and is probably why the post is being downvoted. Personally, I would endorse downvoting, since you haven't contributed anything novel about increasing the number of probably-good high-net-worth philanthropists, though I didn't downvote myself. I only decided to give this feedback because your bio says you're an econ grad student at GMU, which is notorious for disagreeable economists, so I think you can take it.
When we have no evidence that aligning AGIs with 'human values' would be any easier than aligning Palestinians with Israeli values, aligning libertarian atheists with Russian Orthodox values, or even aligning Gen Z with Gen X values?
When I ask an LLM to do something, it usually outputs its best attempt at being helpful. How is this not at least some evidence that aligning AI is easier than inter-human alignment?
The eggs-and-milk quip might be offensive for animal welfare reasons. Eggs, at least, are among the worst commonly consumed animal products according to various ameliatarian Fermi estimates.
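For concreteness, here is a minimal sketch of the kind of Fermi estimate being referenced. Every number below is an illustrative assumption I'm supplying, not a sourced figure; real ameliatarian analyses use carefully argued welfare weights and production data.

```python
# Toy Fermi estimate: weighted animal-days of farmed-animal life per serving
# of eggs vs. milk. All inputs are rough placeholder assumptions.

eggs_per_hen_day = 0.8        # assumption: a laying hen produces ~0.8 eggs/day
liters_per_cow_day = 25.0     # assumption: a dairy cow produces ~25 L milk/day
glass_of_milk_liters = 0.25   # assumption: one glass is ~250 ml

# Crude welfare weights (higher = worse conditions); purely illustrative.
hen_welfare_weight = 1.0
cow_welfare_weight = 0.3

weighted_days_per_egg = (1 / eggs_per_hen_day) * hen_welfare_weight
weighted_days_per_glass = (glass_of_milk_liters / liters_per_cow_day) * cow_welfare_weight

print(f"Weighted animal-days per egg:           {weighted_days_per_egg:.3f}")
print(f"Weighted animal-days per glass of milk: {weighted_days_per_glass:.4f}")
print(f"Ratio (egg / glass of milk):            {weighted_days_per_egg / weighted_days_per_glass:.0f}x")
```

Under these toy numbers an egg carries a few hundred times the weighted animal-time of a glass of milk, which is the rough shape of the claim, though the conclusion is sensitive to the welfare weights you choose.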
Reminder that there is an EA Focusmate group, where you can do 50-minute coworking calls with other EAs. Also, if you're already in the group, please share any feedback on it here or via DM.