Leo


I see, thanks. I guess I would have preferred a more accurate, unambiguous aggregation of everyone’s opinion, to have a clearer sense of the preferences of the community as a whole, but I'm starting to think that it's just me.

As I said last time, trying to quantify agreement/disagreement is much harder to interpret, both for voters and readers, than simply asking how many millions of an extra $100m people would assign to global health versus animal welfare. The banner would run from 0 to 100, and whatever you vote, let's say 30, would mean $30m to one cause and $70m to the other. As it stands, to mention just one paradox, if I wholly disagree with the question, it means I think it wouldn't be better to spend the money on animal welfare than on global health, which in turn could mean either a) I want all the extra funding to go to global health, or b) I don't agree at all with the statement because I think the money should be allocated differently, say 10m/90m. Likewise, if you vote 90% agreement, it could mean b, or it could mean that you almost fully agree for other reasons, for example because you think there's a 10% chance that you are wrong.
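To illustrate what I mean (this is only a hypothetical sketch; the function name, the median-based aggregation, and the 0–100 vote format are my own assumptions, not anything the Forum actually implements), an allocation vote could be aggregated roughly like this:

```python
# Hypothetical sketch: aggregate allocation votes instead of agree/disagree scores.
# Each vote is the number of millions (0-100) the voter would send to global health;
# the remainder of the extra $100m goes to animal welfare.

from statistics import median

def aggregate_allocations(votes_in_millions):
    """Return the community's aggregate split of an extra $100m.

    votes_in_millions: list of numbers in [0, 100], each the amount
    a voter would give to global health out of the extra $100m.
    """
    global_health = median(votes_in_millions)  # median is robust to extreme votes
    animal_welfare = 100 - global_health
    return {"global_health_$m": global_health, "animal_welfare_$m": animal_welfare}

# Example: three voters proposing 30, 50, and 80 for global health
print(aggregate_allocations([30, 50, 80]))
# -> {'global_health_$m': 50, 'animal_welfare_$m': 50}
```

A result like that would tell you directly what split the community prefers, with no ambiguity about what a given level of "agreement" was supposed to mean.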

Answer by Leo

There's substantial discussion of this topic following Eliezer's take on it.

I think I would prefer to strongly disagree, because I don't want my half agree to be read as if I agreed to some extent with the 5% statement. "Half agree" is ambiguous here: people could take it to mean 1) something around 2.5% of funding/talent, or 2) that 5% could be OK with some caveats. This should be clarified so we can know what the results actually mean.


This is a great experiment, but I think it would have been much clearer if the question had been phrased as "What percentage of talent+funding should be allocated to AI welfare?", with the banner showing a slider from 0% to 100%. As it is now, if I strongly disagree with allocating 5% but strongly agree with 3% or whatever, I feel like I should still place my icon on the extreme left of the line. This would make it look like I'm entirely against this cause, which wouldn't be the case.

The expected impact of waiting to sell will diminish as time goes on, because you are liable to change your values or, more probably, your views about what to prioritize and how. This is especially true if you have a track record of changing your mind about things (like most of us). If the impact of waiting is, say, the value of two kidneys conditional on not changing your mind, then the expected impact drops to the value of one kidney, or less, once you have a 50% or greater chance of changing your mind. So I guess your comment holds only if you are very confident that you will not change your mind about donating a kidney between now and the estimated time when you could sell it.
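To make the arithmetic explicit (the 50% figure and the assumption that changing your mind forgoes the donation entirely are just illustrative):

$$
\mathbb{E}[\text{impact of waiting}] \;=\; 0.5 \times 2\ \text{kidneys} \;+\; 0.5 \times 0 \;=\; 1\ \text{kidney}
$$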

I'm not updating this anymore. But your post made me curious. I will try to read it shortly.

Congratulations. Are you planning to upload recordings of the presentations? Where can I access the conference program?

This was a nice post. I hadn't thought about these selfishness concerns before, but I had thought about possible dangers arising from aligned servant AI being used as a tool to improve military capabilities in general: a pretty damn risky scenario in my view, and one that would hugely benefit whoever gets there first.
