
WolfBullmann

-2 karma

Comments (6)

You could spotlight people who do good EA work but are virtually invisible to other EAs and do nothing of their own volition to change that, i.e. people who are neither birds of paradise nor social butterflies.

Some things might need a lot less agreed-upon celebration in EA, such as DEI jobs and applicants, and DEI-styled community management.

Off the top of my head: the ability to host conferences without angry protesters out front, the chance to be mentioned in a favorable manner by a major mainstream news outlet, and the willingness of high-profile people to associate with EA. Look up what EA intellectuals thought in the recent past about why it would be unwise for EA to make too much noise outside the Overton window. This is still valid, except that the Overton window has now begun to shift at an increasing pace.

Note that this is not meant as an endorsement of EA aligning with, or paying lip service to, political trends. I personally believe that an increase in enforced epistemic biases is an existential threat to the core values of EA.

I think that when people invoke the term dignity, they sort of circumvent describing the issue in actual detail. Most "indignities" can be described in concrete terms, which can then be addressed: the inconvenience of not having toilets available, the aversiveness of having to deal with an unfriendly or incompetent government official, etc. Some interventions require disregarding a number of preferences of those they are ultimately meant to help. Making "dignity" a requirement would make that difficult or impossible.

How are the first and second objections you mention distinct?

"Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal?"

I fail to get the meaning. Could anybody reword this for me?
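For what it's worth, my rough gloss of the general EDT/CDT contrast (one standard formalisation, possibly not the one the author intends):

$V_{\mathrm{EDT}}(a) = \sum_o P(o \mid a)\,U(o)$ versus $V_{\mathrm{CDT}}(a) = \sum_o P(o \mid \mathrm{do}(a))\,U(o)$,

where EDT conditions on the act as evidence and CDT (in the do-notation gloss) conditions only on the act's causal consequences. What I cannot parse is what it would mean to treat our own choice to pursue x-risk reduction as evidence about the expected value of the future.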

"The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be."

Not sure I get this. Is a civilisation that descends irrevocably into chaos after narrowly surviving a pandemic a central example of this?
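My tentative reading in symbols (my notation, not the author's): the claim seems to be that $\mathbb{E}[V]$ can be large and positive while $\mathbb{E}[V \mid A_r]$ is not, where $A_r$ is the event that we only marginally averted risk $r$, because a near miss is itself evidence about the world, e.g. about how fragile our institutions are. If that is right, the pandemic scenario above would seem to fit, but I would appreciate confirmation.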