Preference utilitarianism and valuism don't have much in common.
Preference utilitarianism: maximize the interests/preferences of all beings impartially.
First, preferences and intrinsic values are not the same thing. For instance, you may have a preference to eat Cheetos over eating nachos, but that doesn't mean you intrinsically value eating Cheetos or that eating Cheetos necessarily gets you more of what you intrinsically value than eating nachos will. Human choice is driven by a lot of factors other than just intrinsic values (though intrinsic values play a role).
Second, preference utilitarianism is not about your own preferences; it's about the preferences of all beings, considered impartially.
Glad you find it interesting! We tested roughly 150 statements. Just to clarify, it's not that the depression side doesn't correlate with the anxiety side: because anxiety and depression are so highly correlated, any statement correlating with one is likely to correlate with the other. But when you statistically separate them (i.e., you look at what correlates with anxiety once you've controlled for depression, or the reverse), this clearer picture emerges.
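(For anyone curious what "controlling for" means mechanically, here's a minimal sketch in Python of a partial correlation computed by residualizing. It illustrates the general technique; the variable names are hypothetical and this is not our actual analysis code.)

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation between x and y after regressing out `control`
    from each (i.e., correlating the residuals)."""
    def residuals(v, c):
        # least-squares residuals of v after removing the linear
        # effect of c (plus an intercept term)
        design = np.column_stack([np.ones_like(c), c])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    return np.corrcoef(residuals(x, control), residuals(y, control))[0, 1]

# e.g., how a statement's endorsement relates to anxiety once
# depression has been controlled for (hypothetical arrays):
# partial_corr(statement_scores, anxiety_scores, depression_scores)
```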
While it would be great for someone to replicate these findings (to increase confidence in them), and I hope someone does, the sample size (n=500) is fine in my view for this kind of result. There are diminishing returns to larger sample sizes (the right sample size depends on the analyses being performed and the level of noise), so 10,000 people isn't as much better than 500 people as it may sound.
For instance, at n=500 a measured correlation of r=0.50 has a 95% confidence interval of about r=0.43 to r=0.56; at n=10,000 it's about r=0.49 to r=0.51. For many purposes the latter isn't much more useful than the former. See this confidence interval calculator for correlations for more details: http://vassarstats.net/rho.html
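Those intervals come from the standard Fisher z-transformation approach. Here's a minimal sketch of the calculation (assuming NumPy and SciPy are available), which reproduces the numbers above:

```python
import numpy as np
from scipy.stats import norm

def correlation_ci(r, n, conf=0.95):
    """Approximate confidence interval for a Pearson correlation
    via the Fisher z-transformation."""
    z = np.arctanh(r)                # transform r to the z scale
    se = 1.0 / np.sqrt(n - 3)        # standard error of z
    crit = norm.ppf(0.5 + conf / 2)  # ~1.96 for a 95% interval
    # back-transform the interval endpoints to the r scale
    return np.tanh(z - crit * se), np.tanh(z + crit * se)

print(correlation_ci(0.50, 500))     # roughly (0.43, 0.56)
print(correlation_ci(0.50, 10_000))  # roughly (0.49, 0.51)
```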
The piece that you quote says:
“The problem is not greedy capitalists, but capitalism”
Our piece says:
“Where does bad come from? Capitalism and class systems”
The piece you quote says:
“The only solution possible is for an outside force to intervene and reshape the terms of the game. Socialist revolution will be that force. With no stake in the current order, the propertyless masses will wipe the slate clean.”
Our piece says:
“View of history: Capitalism will lead to a series of ever-worsening crises. The proletariat will eventually seize the major means of production and the institutions of state power.”
So I’m confused, because while you frame what you’re quoting as a counterargument to what we say, the two line up well.
By simplifying it all down to Moloch, you’re losing a lot of detail.
If you think that American communists don’t have an unusually strong intrinsic value of equality, then I think you’re mistaken (of course, I could be wrong). As far as I can tell, though, you didn’t provide any evidence against that.
We also didn’t say that communists “see themselves as advocating for a set of values.” We said they tend to have an intrinsic value, which is not the same thing.
If you think worldviews aren’t to a substantial extent about beliefs, then I suspect we just mean different things by “worldviews.” For instance, I would not call a bunch of the examples you gave “worldviews.”
I was pretty surprised by these Twitter poll results (though, of course, there may be various selection biases in who responded). In the poll I ask how people feel about organizations putting out statements along the lines of “we oppose racism and sexism and believe diversity is important” (note: the setting of my poll, where I give the example of a software accounting firm or animal rights org, is quite different from the setting of the above post):
https://twitter.com/SpencrGreenberg/status/1624044864584273920
Hi! The scores are relative to a sample from the U.S. population (not people on LessWrong or the EA Forum). I suspect that the sample we used may have a slightly higher-than-average IQ, but I'd be surprised if it was a lot higher than average.
We haven't yet released the 40 claims we're attempting to replicate, but they include many of the major claims in the intelligence literature.
I created a Manifold Markets forecasting page for whether or not grant money given by the Future Fund via FTX Foundation, Inc. will be "clawed back". Please forecast there if you have an opinion, to help others stay informed about the probabilities (I've also added one I just learned about from Eliezer):
The way you define values in your comment:
"From the AI "engineering" perspective, values/valued states are "rewards" that the agent adds themselves in order to train (in RL style) their reasoning/planning network (i.e., generative model) to produce behaviours that are adaptive but also that they like and find interesting (aesthetics). This RL-style training happens during conscious reflection."
is just something different than what I'm talking about in my post when I use the phrase "intrinsic values."
From what I can tell, you seem to be arguing:
[paraphrasing] "In this one line of work, we define values this way," and then jumping from there to "therefore, you are misunderstanding values," when actually I think you're just using the phrase to mean something different from what I'm using it to mean.