I was community director of EA Netherlands, but had to quit due to long covid.
I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research.
AI welfare as an "EA priority" means:
You seem to be conflating moral patienthood with legal rights, or with taking entities into account. We can take into account the possibility that some seemingly non-sentient agents are sentient without definitively concluding that they're moral patients.
In general, I found it hard to assess which arguments you're making, and I would suggest stating them in analytic philosophy style: a set of premises connected by logic to a conclusion. I had Claude do a first pass: https://poe.com/s/lf0dxf0N64iNJVmTbHQk
Thank you, these are some good points. When I created the question, I believed V-DEM had a more rigorous methodology, and I can't change it now.
I don't think the specific probability is necessary for my argument (and it depends on how one defines 'liberal democracy'): a Trump presidency with an enabling Supreme Court would be very harmful to US liberal democracy and the rule of law, and a nationalized AGI project under such a government would be very risky.
I urge you to choose a different example, because this one is linked to phenomenal consciousness: