
freedomandutility

4162 karma · Joined
Interests:
Bioethics

Bio

My ethics are closest to asymmetric, person-affecting prioritarianism, and I don’t think the expected value of the future is as high as longtermists do because of potential bad futures. Politically I'm closest to internationalist libertarian socialism. 

Comments

Strongly upvoted.

My recommended next steps for HLI:

  1. Redo the meta-analysis with a psychiatrist involved in the design, and get external review before publishing.

  2. Include a sensitivity analysis that shows donors how the pooled effect size varies under different weightings of the StrongMinds studies (a rough sketch of what I mean is below).

(I still strongly support funding HLI, not least so they can actually complete these recommended next steps)
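
To make recommendation 2 concrete, here is a minimal sketch of the kind of donor-facing sensitivity analysis I have in mind. Everything in it is hypothetical: the effect sizes, sample sizes, and the simple sample-size weighting are made up for illustration and are not HLI's data or their actual meta-analytic model.

```python
# Illustrative only: hypothetical effect sizes (standardised mean differences)
# and sample sizes, not HLI's actual data or meta-analytic model.
strongminds_studies = [(1.72, 250), (0.85, 300)]         # (effect, n) for StrongMinds trials
other_studies = [(0.45, 400), (0.50, 350), (0.38, 500)]  # (effect, n) for the wider literature

def pooled_effect(strongminds_weight: float) -> float:
    """Sample-size-weighted mean effect, with the StrongMinds studies scaled
    by strongminds_weight (1.0 = full weight, 0.0 = excluded)."""
    weighted_sum = 0.0
    total_weight = 0.0
    for effect, n in strongminds_studies:
        w = n * strongminds_weight
        weighted_sum += effect * w
        total_weight += w
    for effect, n in other_studies:
        weighted_sum += effect * n
        total_weight += n
    return weighted_sum / total_weight

# Show donors how the headline estimate moves as StrongMinds is down-weighted.
for w in (1.0, 0.5, 0.25, 0.0):
    print(f"StrongMinds weight {w:.2f} -> pooled effect {pooled_effect(w):.2f}")
```

The real version would use HLI's actual studies and whatever weighting scheme their meta-analysis uses (e.g. inverse-variance weights), but the output, a small table of pooled effects under different StrongMinds weightings, is the thing I'd want donors to be able to see at a glance.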

I think that even once everyone lives above a certain poverty line and has access to basic health resources, the global distribution of wealth will still be very inefficient for maximising welfare, and redistributing resources from the globally richest to the globally poorest (i.e. increasing the consumption of the poorest) will still be one of the best available options for improving aggregate welfare.

Sidenote: I think it's better for debate to view this as disagreement, not exaggeration. I also don't entirely agree with total utilitarianism or longtermism, if that makes my point of view easier to understand.

I’d like to add that from my perspective:

  1. global health and development will almost permanently be a pressing cause area

  2. it’s very likely that within our lifetimes, we’ll see enough progress such that biosecurity and farmed animal welfare no longer seem as pressing as global health and development

  3. it’s plausible that AI safety will also no longer seem as pressing as global health and development

  4. new causes may emerge that seem more pressing than global health, biosecurity, AI and farmed animal welfare

  5. growing EA is very important for helping more people with “optimiser’s mindsets” switch between cause areas in response to future changes in how pressing each one is

(but I still think there’s a case for a modest reallocation towards growing cause areas independently)

I think honesty is clearly mentioned there, but I don’t think non-violence specifically is implied there.

Regardless, my case is for honesty and non-violence to both be listed separately as core principles for greater emphasis.

My initial thought was to filter out applications from speakers who don’t bring an EA optimiser mindset, but on second thought, it might be good to have speakers from outside the EA bubble.

Or the application process could initially only be used for a few slots rather than all EAG speaker slots, and CEA could see how it goes?

Yeah on second thought, a lot of EAG talks provide value from the speaker’s personal experiences. I guess partial blinding might be feasible, where applicants can include details about their experiences if these details are going to come up during the talk.

Good point, thank you. I agree that it’s important to distinguish between “fringes” and “extreme views” - I’ll edit the post soon.

Agree that non-violence and honesty aren’t always the best option, but neither is collaboration, and collaborative spirit is listed as a core value. I think “true in 99% of cases” is fine for something to be considered a core EA value.

I’d also add that I think, in practice, we already abide by honesty and non-violence to a similar degree to which we abide by the collaborative spirit principle.

I do think honesty and non-violence should be added to the list of core principles to further promote these values within EA, but I think the case for adding them is stronger from a “protection against negative PR if someone violates these principles” perspective.
