Vanessa

Here's a human translation, although ChatGPT's is suspiciously similar.

This is pretty sad and also surprising. In your opinion, why are there so many people that come to an animal welfare conference but are not really interested in helping animals (apparently)? If they don't care about animals, what are they doing there? 

Is there going to be a post-mortem including an explanation for the decision to sell?

Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would, in particular, kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)

The framing "PR concerns" makes it sound like all the people doing the actual work are (and will always be) longtermists, whereas the focus on GCR is just for the benefit of the broader public. This is not the case. For example, I work on technical AI safety, and I am not a longtermist. I expect there to be more people like me either already in the GCR community, or within the pool of potential contributors we want to attract. Hence, the reason to focus on GCR is building a broader coalition in a very tangible sense, not just some vague "PR".

I can relate, as someone who also struggles with self-worth issues. However, my sense of self-worth is tied primarily to how many people seem to like me / care about me / want to befriend me, rather than to what "senior EAs" think about my work.

I think that the framing "what is the objectively correct way to determine my self-worth" is counterproductive. Every person has worth by virtue of being a person. (Even if I find it much easier to apply this maxim to others than to myself.) 

IMO you should be thinking about things like, how to do better work, but in the frame of "this is something I enjoy / consider important" rather than in the frame of "because otherwise I'm not worthy". It's also legitimate to want other people to appreciate and respect you for your work (I definitely have a strong desire for that), but IMO here also the right frame is "this is something I want" rather than "this is something that's necessary for me to be worth something".

I strongly disagree that utilitarianism isn't a sound moral philosophy, and don't understand the black-and-white distinction between longtermism and us not all dying. I might be missing something, but there is surely at least some overlap between those two reasons for preventing AI risk.

I don't know if it's a "black and white distinction", but surely there's a difference between:

  • Existential risk is bad because the future could have a zillion people, so their combined moral weight dominates all other considerations.
  • Existential risk is bad because (i) I personally am going to die (ii) my children are going to die (iii) everyone I love is going to die (iv) everyone I know is going to die, and also (v) humanity is not going to have a future (regardless of the number of people in it).

For example, something that "only" kills 99.99% of the population would be comparably bad by my standards (because i-iv still apply), whereas it would be far less bad by longtermist standards. Even something that "only" kills (say) everyone I know and everyone they know would be comparably bad for me, whereas utilitarianism would judge it a mere blip in comparison to human extinction.

Out of interest, if you aren't an effective altruist, nor a longtermist, then what do you call yourself?

I call myself "Vanessa" :) Keep your identity small and all that. If you mean, do I have a name for my moral philosophy then... not really. We can call it "antirealist contractarianism", I guess? I'm not that good at academic philosophy.

Strongly agreed.

Personally, I made the mitigation of existential risk from AI my life mission, but I'm not a longtermist and not sure I'm even an "effective altruist". I think that utilitarianism is at best a good tool for collective decision making under some circumstances, not a sound moral philosophy. When you expand it from living people to future people, it's not even that.

My values prioritize me and people around me far above random strangers. I do care about strangers (including animals) and even hypothetical future people more than zero, but I would not make the radical sacrifices demanded by utilitarianism for their sake, without additional incentives. On the other hand, I am strongly committed to following a cooperative strategy, both for reputational reasons and for acausal reasons. And, I am strongly in favor of societal norms that incentivize making the world at large better (because this is in everyone's interest). I'm even open to acausal trade with hypothetical future people, if there's a valid case for it. But, this is not the philosophy of EA as commonly understood, certainly not longtermism.

The main case for preventing AI risk is not longtermism. Rather, it's just that otherwise we are all going to die (and even going by conservative-within-reason timelines, it's at least a threat to our children or grandchildren).

I'm certainly hoping to recruit people to work with me, and I'm not going to focus solely on EAs. I won't necessarily even focus on people who care about AI risk: as long as they are talented, and motivated to work on the problems for one reason or the other (e.g. "it's math and it's interesting"), I would take them in.

Nice work! Many good hopes in there, but, hard to compete with "make furries real".

I'm confused. What are you trying to say here? You linked a proposal to prioritize violence against women and girls as an EA cause area (which I assume you don't object to?) and a tweet by some person unknown to me saying that critics of EA hold it to a standard they don't apply to feminism (which probably depends a lot on what kind of critics, and on their political background in particular). What do you expect the readers to learn from this or do about it?
