
SiebeRozendal

2645 karma · Joined

Bio

Participation (4)

Was community director of EA Netherlands; had to quit due to long COVID.

I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research.

Comments (403)

We sometimes try to bring about results that we will never know about, such as, for instance, setting our children up to live happy and healthy lives after we are gone. It may matter greatly to us that these results occur and we may make sacrifices here and now to give them a chance.

I urge you to choose a different example, because this one is linked to phenomenal consciousness:

  1. the parents feel good whenever they achieve something that improves their children's chances
  2. the children have experiences

AI welfare as an "EA priority" means:

  • spending on the order of $5-10M this year, and increasing it year by year, but only on high-quality opportunities, until it reaches at least 5% of total EA spending
  • about 45 people working on AI welfare by roughly the middle of next year, increasing at a pace at which each position remains a good use of talent, until at least 5% of EA talent is reached

AI welfare as an "EA priority" means:

  • spending on the order of $49M/year, starting this year or next
  • about 450 people working on AI welfare within a year or two

Debate contributions should have focused more on practical implications for EA priorities

To add to the intensive animal agriculture analogy: this time, people are designing the systems themselves, which gives much stronger reason to believe that early intervention can improve AI welfare than was the case with animal agriculture.

You seem to conflate moral patienthood with legal rights, or with taking entities into account. We can take into account the possibility that some seemingly non-sentient agents might be sentient; we don't need to definitively conclude that they're moral patients to do this.

In general, I found it hard to assess which arguments you're making, and I would suggest stating them in analytic philosophy style: a set of premises connected by logic to a conclusion. I had Claude do a first pass: https://poe.com/s/lf0dxf0N64iNJVmTbHQk

I don't really understand why so many people are downvoting this. If anyone would like to explain, that'd be nice!

Thank you, these are some good points. When I wrote the question, I believed V-DEM had a more rigorous methodology, and I can't change it now.

I don't think the specific probability is necessary for my argument (and it depends on how one defines 'liberal democracy'): a Trump presidency with an enabling Supreme Court would be very harmful to US liberal democracy and the rule of law, and a nationalized AGI project under such a government would be very risky.

P.P.S. I am also concerned about silencing/chilling effects: if you want to get anything political done in the next few years, it's probably strategic to refrain from criticizing Trump & his allies anywhere publicly, including the EA Forum.
