My ethics are closest to asymmetric, person-affecting prioritarianism, and I don’t think the expected value of the future is as high as longtermists do because of potential bad futures. Politically I'm closest to internationalist libertarian socialism.
I think that even once everyone lives above a certain poverty line and has access to basic health resources, the global distribution of wealth will still be very inefficient for maximising welfare. Redistributing resources from the globally richest to the globally poorest, and increasing the consumption of the poorest, will still be one of the best available options for improving aggregate welfare.
Sidenote: I think it's better for debate to view this as disagreement, not exaggeration. I also don't entirely agree with total utilitarianism or longtermism, if that makes my point of view easier to understand.
I’d like to add that from my perspective:
- global health and development will almost permanently be a pressing cause area
- it's very likely that within our lifetimes, we'll see enough progress that biosecurity and farmed animal welfare no longer seem as pressing as global health and development
- it's feasible that AI safety will also no longer seem as pressing as global health and development
- new causes may emerge that seem more pressing than global health, biosecurity, AI, and farmed animal welfare
- growing EA is very important for helping more people with "optimiser's mindsets" switch between cause areas in response to future changes in how pressing they are
- (but I still think there's a case for a modest reallocation towards growing cause areas independently)
Agree that non-violence and honesty aren’t always the best option, but neither is collaboration, and collaborative spirit is listed as a core value. I think “true in 99% of cases” is fine for something to be considered a core EA value.
I'd also add that, in practice, I think we already abide by honesty and non-violence to a similar degree to which we abide by the collaborative spirit principle.
I do think honesty and non-violence should be added to the list of core principles to further promote these values within EA, but the case for adding them is stronger from a "protection against negative PR if someone violates these principles" perspective.
Strongly upvoted.
My recommended next steps for HLI:

1. Redo the meta-analysis with a psychiatrist involved in the design, and get external review before publishing.
2. Include a sensitivity analysis that shows donors how the effect size varies under different weightings of the StrongMinds studies.
(I still strongly support funding HLI, not least so they can actually complete these recommended next steps)