EDIT: the recommendations have now changed. Vote for Joe Biden.
Voting is a pretty impactful activity in expectation, so some other contributors and I have spent a good deal of time on the Candidate Scoring System for Democratic presidential candidates. The latest version of the report is available here: http://bit.ly/ea-css
As primaries in the first states begin within a week, here are the latest recommendations. They depend on your state of residence. For voters in the first three states - Iowa, New Hampshire, and Nevada - the recommendation is to vote for Pete Buttigieg (unless he fails in Iowa and is no longer a serious candidate). For voters in subsequent states - South Carolina and all the Super Tuesday states, including California - the recommendation is to vote for Mike Bloomberg. The reason for the difference is that both candidates are good, and it seems more tractable to help each of them in the strong states that are key to his strategy.
The conclusions are, necessarily, controversial. But I went to great lengths to score as rigorously as possible in this context before arriving at them. Counterarguments and counterevidence addressing all manner of objections are presented in the report. But, this being the EA Forum, I'd like to specifically highlight and address upfront the main reasons that EAs have disagreed with this report.
1) "Joe Biden is best because he is electable." While he is the most electable according to both electoral theory and current polls matching candidates against Trump, lesser-known candidates will improve in such polls if they get nominated and have more chances to communicate with voters. And some of the people answering in favor of Biden may be judging on the basis of 2008-2016 memories, and may change their mind if they become more aware of Biden's apparent cognitive decline. And generally speaking, these electability judgments are not robust. Therefore, there is not a good case for the idea that Joe Biden is so much more electable that he ought to be preferred over significantly more meritorious candidates.
2) "Bernie Sanders is best because he won't fight wars which cause massive suffering." Well as best as I can tell, the naïve populist-pacifist idea that we should cut our military spending and exit Iraq/Afghanistan/Africa would make things worse around the world. (Do you really think the war in Afghanistan will end just because the US pulls out?) It is admirable that Bernie opposed the regime change wars of the Bush Doctrine, but in other typical contexts of US foreign policy, retrenchment is harmful overall (unless we need to save American blood and treasure). This seems to be a near-consensus among foreign policy experts. Bernie was right about Iraq in 2003, but wrong about it in 2004, 2005, 2006, etc. And candidates like Mike and Pete are not dumb or hawkish enough to do something similar to the invasion of Iraq, we've all learned from it. In my recent foreign policy posts on this forum - my response to progressive foreign policy, and my own recommendations - I did not receive counterarguments to my rejection of Sanders' (and Warren's) brand of non-interventionism. Nor have I seen any other Effective Altruists analyze these issues with rigor; my fellow EAs seem to largely be repeating common-sense appealing opinions that they picked up from ordinary political socialization before or outside of their EA involvement. So I am increasingly confident that my object-level opinions are simply correct and that the mere opinions of pacifist EAs and EA-adjacent people do not carry significant weight. And of course, many EAs do not share these pacifist opinions.
Note: I still regard Bernie Sanders as one of the better candidates in foreign policy. I just reject the idea that he's much better than the others, which is the judgment that would be required if you wanted to write off his flaws.
3) "Bernie Sanders is best because he will give us the strongest social welfare and redistribution." But it's not clear whether moderate-Democrat ideas for things like housing, healthcare, education, etc are systematically better than progressive-Democrat ideas on the same topics. Criticism of progressive-left domestic agendas has come from a variety of reliable sources which have done a good job of withstanding leftist rebuttals. There are both pros and cons to Bernie's domestic agenda, not much different from other candidates. Also, much of Bernie's agenda will fail anyway given the likely obstacles in the Senate.
4) "Andrew Yang is best because he recognizes AI risk." But between the fact that it's very hard to say what a president should do to manage AI risk, and the fact that AI risk is not objectively a greater threat than things like nuclear war and climate change by my calculation (it's just more neglected, and that doesn't matter so much from the President's perspective), Yang's AI comments do not change much. Also, he has almost no chance of winning.
5) "The underlying weights are too subjective and arbitrary." I have tried to keep them close to the center of informed EA opinion, unless I have good arguments or evidence to the contrary. Feel free to play around with the Excel model and apply your own. I find that most plausible variations in weights don't change the conclusions. The main sensitivity to intra-EA disagreement over priorities is that a very short-term focused view should focus on Mike and ignore Pete. Biden can become very good or very bad under different assumptions; even if your weights put Biden ahead, I think you should still vote for Mike due to the Optimizer's Curse.
Note: I'm receptive to EA or EA-adjacent media, journalists, podcasts, etc who want to spread knowledge of this.
Feedback welcome, and please get out there and vote.
Also, if you want more frequent updates, follow me on Twitter: https://twitter.com/KyleBogosian
I'm trying to figure out which Democratic presidential candidate is likely to be best with regard to epistemic conditions in the US (i.e., most likely to improve them or at least not make them worse). This seems closely related to "sectarian tension" which is addressed in the scoring system but perhaps not identical. I wonder if you can either formally incorporate this issue into your scoring system, or just comment on it informally here.
(Sorry for the late reply)
First, did you see the truthfulness part? I rated candidates by the average truthfulness of their statements to the public, according to PolitiFact. That's not identical to what you're asking about, but it may be of interest.
Biden does relatively poorly. Sanders does well, though he seems to have a more specific and serious pattern of presenting misleading narratives about the economy (I haven't factored this in; maybe I should). Warren does well, though I did dock some points for a few particularly significant lies. Bloomberg seems to be doing quite well, though he has less of a track record, so it's harder to be sure.
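(For context, the truthfulness scores are essentially averages over PolitiFact verdicts. A toy version of that kind of calculation is below; the verdict-to-number mapping and the tallies are made up for illustration, and the report's actual method may differ in its details.)

```python
# Toy version of averaging PolitiFact verdicts into a single truthfulness score.
# The verdict-to-number mapping and the tallies below are illustrative only.
VERDICT_VALUES = {
    "True": 1.0,
    "Mostly True": 0.75,
    "Half True": 0.5,
    "Mostly False": 0.25,
    "False": 0.0,
    "Pants on Fire": 0.0,
}

def truthfulness(tallies):
    """Average verdict value, weighted by how many statements received each verdict."""
    total_statements = sum(tallies.values())
    return sum(VERDICT_VALUES[v] * n for v, n in tallies.items()) / total_statements

# Hypothetical tallies for one candidate (not real PolitiFact counts).
example = {"True": 30, "Mostly True": 40, "Half True": 20, "Mostly False": 10, "False": 5, "Pants on Fire": 1}
print(round(truthfulness(example), 3))
```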
OTOH, it seems like you're primarily concerned about epistemics within an administration - that there might be some kind of political-correctness standard. I've docked points from Trump because there have been a number of cases of this under his watch. Among Democrats, I feel there would be more risk of it with Sanders, because so many of his appointments and staff are likely to come from the progressive left. Even though he's perceived as a rather unifying figurehead as far as the culture wars are concerned, he would likely fare worse from your angle. But I feel this is too speculative to include. I can't think of any issue where the 'redpill' story, if true, would be very important for the federal government to know about. And there will not be a lot of difference between candidates here.
EA forum user Bluefalcon has pointed out that Warren's plan to end political appointments to the Foreign Service may actually increase groupthink because the standard recruitment pipeline puts everyone through very similar paces and doctrine. Hence, I've recently given slightly fewer points to Warren's foreign policy than I used to.
I think it's worth taking a more careful look at the weight for animal farming, which you based on how much impact you think the president could have on farmed animals (with a discount relative to humans), given the proposed moratorium on new factory farms, since it's a very concrete and unexpected policy. We could just estimate how much impact the moratorium would have on animals, although this would be difficult and probably involve considerable guesswork.
The moratorium could set a precedent for an earlier than otherwise phase out and ban of factory farming, and/or other improvements in animal welfare regulations (federally or at the state-level). This legislation far exceeds my expectations about what would have been on the table for animals, and ensuring it happens could accomplish more than all animal advocacy in the US before it.
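(To make the "considerable guesswork" concrete, a bare-bones skeleton of such an estimate might look like the sketch below. Every quantity is an arbitrary placeholder to be replaced with a real figure, not an actual estimate.)

```python
# Bare-bones Fermi skeleton for the moratorium's expected impact on farmed animals.
# Every number below is an arbitrary placeholder, not an estimate.
p_enacted = 0.1                   # placeholder: chance the moratorium is enacted under this president
p_counterfactual = 0.01           # placeholder: chance it would happen anyway without them
animals_affected_per_year = 1e9   # placeholder: animals spared or helped per year the policy binds
years_of_effect = 10              # placeholder: duration of the policy plus any precedent effect
welfare_gain_per_animal_year = 1  # placeholder: average welfare gain per affected animal-year

expected_gain = (
    (p_enacted - p_counterfactual)
    * animals_affected_per_year
    * years_of_effect
    * welfare_gain_per_animal_year
)
print(f"{expected_gain:.2e} welfare-weighted animal-years")
```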
Also, Gabbard says she's been vegan for a while and I think animal welfare is part of it, not just religion. She said the treatment of animals in factory farms breaks her heart.
Good find, adding this too
Might not really matter now given her chances, but she did an interview with VegNews:
Good point. Increasing the weight by 40% until I or someone else does a better calculation.
Future artificial sentiences would be involved in the s-risks EAF is concerned with [1][2] and possibly in astronomical waste scenarios under a view similar to total utilitarianism. (See also "The expected value of extinction risk reduction is positive" by Jan M. Brauner and Friederike M. Grosse-Holz.) You may have already implicitly included these in some of your assessments of emerging tech (AI), but less so in issues that may involve shifting the moral values of society or in the less direct impacts of moral values on emerging tech. If AGI does not properly care for artificial sentiences because humans generally, policymakers, or its designers didn't care, this could be astronomically bad. That being said, near-misses could also be astronomically bad.
I think all policy driven in part by or promoting impartial concern for welfare may contribute to concern for artificial sentience, and just having a president who is more impartial and generally concerned with welfare might, too. Similarly, policy driven by or promoting better values (moral or for rationality) and a president with better values generally may be good for the long-term future.
Better policies for and views on farmed animals seem like they would achieve the best progress towards the moral inclusion of artificial sentience, among the issues considered in CSS. They would also drive the most concern for wild animals, of course.
More concern for climate change could also translate into more concern for future generations generally, although I'm less confident that this would extend to the far future. It could also drive concern for nonsentient entities at the expense of sentient individuals (wild animals, mostly).
My suggestion would be that
You might have to be careful to avoid double (or triple!) counting.
See:
I previously included wild animal suffering in the long run weight of animal welfare. Having looked at some of these links and reconsidering, I think I was over-weighting animal welfare's impact on wild animal suffering.
One objection here is that improving socioeconomic conditions can also broadly improve people's values. Generally speaking, increasing wealth and security promotes self-expression values, which correspond decently well to having a wide moral circle. So there's less general reason to single out moral issues like animal welfare as being a comparatively higher priority.
However, improving socioeconomic conditions also brings forward the date at which technological s-risks will present themselves, so in some cases we are looking for differential moral progress. This tells me to increase the weight of animal welfare for the long run. (It's overall slightly higher now than before.)
Another objection: a lot of what we perceive as pure moral concern vs apathy in governance could really be understood as a different tradeoff of freedom versus government control. It's straightforward in the case of animal farming or climate change that the people who believe in a powerful regulatory state are doing good whereas the small-government libertarians are doing harm. But I'm not sure that this will apply generally in the future.
Emerging tech is treated as an x-risk here, so s-risks from tech should be considered separately. In terms of determining weights and priorities I would sooner lump s-risks into growth and progress than into x-risks.
I don't see climate change policy as promoting better moral values. Sure, better moral values can imply better climate change policy, but that doesn't mean there's a link the other way. One of the reasons animal welfare uniquely matters here is that we think there is a specific phenomenon where people care less about animals in order to justify their meat consumption.
At the moment I can't think of other specific changes to make but I will keep it in mind and maybe hit upon something else.
Good points. I think it's also important where these improvements (socioeconomic or moral) are happening in the world, although I'm not sure in which way. How much effect do further improvements in socioeconomic conditions in the US and China have on emerging tech and values in those countries compared to other countries?
FWIW, s-risks are usually considered a type of x-risk, and generally involve new technologies (artificial sentience, AI).
Well, that's been observed in studies on attitudes towards animals and meat consumption, but I think similar phenomena could be plausible for climate change. Action on climate change may affect people's standards of living, and concern for future generations competes with concern for yourself.
I also don't see reducing cognitive dissonance or rationalization as the only way farm animal welfare improves values. One channel is simply more attention to and discussion of the issue; another could be that identifying with or looking up to people (the president, the party, the country) who care about animal welfare increases concern for animals. Possibly something similar could be the case for climate change and future generations.
Have you tried reaching out to anyone?
kbog, maybe you should reach out to one of the writers for Vox's Future Perfect and ask what you could do to get an article there about your work?
I think you'd want to show the scores without weighting/aggregating across issues, and just leave "?" instead of the party priors when there's too little info, since Bayesianism might be too much for a more general audience. Articles ranking candidates on issues are pretty common.
One concern, though, is that readers will just use their own values which are not very EA-aligned (less concern for non-Americans, speciesist, less concern for the far future), or you'll have to omit issues which aren't EA enough, and then readers will complain that important issues have been left out. I don't think there's any way to win here, since the readers will have different values.
Another possibility is making an alignment quiz like https://www.isidewith.com/political-quiz
This is the Excel model, right? Good to have a direct link in the post, in case they want to use their own weights, as you suggest.
This link also appears in the file on page 13.
The first link, "CSS Model," is the one produced at the time of the most recent PDF report, and should be looked at for understanding the report.
The second link, "CSS model for current draft" is the file that can have more recent updates (e.g. new polling) and should be used if you want to get the most accurate scores, whether or not you insert your own weights.
Do you separately factor controversies into their electability? If enough dirt sticks to a presidential candidate (whether accurate or not), more potential voters who might have voted Democrat may stay home or vote Republican. These controversies will probably receive more and more attention from the media both during the primaries and leading up to the election, so current attitudes may not be very predictive.
I generally don't.
There are some reasons to think scandals will be as consequential in 2020 as they were in 2016, or more so: the executive branch under Trump could seize upon them to manipulate the election, and the mainstream media don't seem to recognize that they had a central role in damaging Hillary. OTOH, most American elections in history have not been decided by scandals. The Comey letter that probably cost Hillary the election was a somewhat unusual perfect storm. Swing voters will probably go into this knowing that Trump is more corrupt/scandalous - it's not like 2016, when Trump was a kind of unknown alternative. And the mainstream media might behave better this time, despite not publicly blaming themselves for 2016. For one thing, they won't assume that Trump will lose, which is likely what motivated the disproportionate coverage of the email scandal.
Anyway, scandals can happen to anyone and it's hard to differentiate stronger/weaker candidates without descending into tea-leaves divination.
I was previously worried about Biden-Ukraine, but as Vox pointed out, the coverage surrounding the Trump-Ukraine scandal doesn't seem to have hurt Biden either in the Dem primaries or in head-to-head polling against Trump.
There is Warren's deception about her ancestry. But that is kind of well known and internalized by now.
There have been other controversies turning up in the Dem primaries, most notably against Pete, and then there's the story that Bernie is a millionaire, but these are mostly things that bother highly politically engaged left-wing voters, who are unrepresentative and likely to turn out for the Dems anyway.
Good points.
I actually had in mind accusations of sexism against Bloomberg. I think this is an issue that not just progressives care about.
Of course, assuming Trump is the Republican nominee, any Democrat will look better on the issue of sexism. Would many people be more likely to vote 3rd party or stay home just because they think Bloomberg is sexist? It doesn't seem entirely implausible.
Would people vote for Trump because they think Bloomberg is sexist? I don't think many would.
Hard to say but I think at this point we have to take note of why Clinton and her emails were perceived so badly. The idea was that there was real corruption in the government. Sexist remarks in the workplace are a known quantity, whereas a private unsecured email server is a kind of rabbit hole.
I definitely don't deny that it could hurt him, my view is just that trying to aggregate and compare these concerns across all the candidates with their respective foibles doesn't lead one to any substantive conclusions.
But now you are making me worry more that a woman will accuse him of sexual assault. With Mike's locker-room talk, and him being an old oligarch, there is cause for worry about him in particular. These accusations often follow people who are rising in the public consciousness. Bloomberg was already famous and subject to sexism controversy before now, but not as much as he will be if he gets nominated, and his political career had apparently stopped by the time the #MeToo campaign started. You would expect a victim to come forward earlier, while he was initially rising in the primary polls, but since he's a late entrant who has been absent from debates, I wouldn't be too confident about that. Bernie and Biden have been top political figures for a long time, so there is no appreciable risk with them. Pete's gay and young. Warren's a woman.
If I add a 1% probability that sexual assault accusations after the nomination cause Mike to lose against Trump, his campaign score drops from 8 to 6, putting him close to Pete. So I'm less enthusiastic about him now, but I don't think this is yet enough to change the recommendations. (I will think more about it though.)
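(To make the arithmetic explicit: the adjustment is roughly a small probability multiplied by a large gap between outcomes. The 200-point gap below is a hypothetical figure back-solved from the 8-to-6 drop, not a number taken from the model.)

```python
# Rough illustration of how a 1% tail risk moves the expected campaign score by 2 points.
# The outcome gap is back-solved from the stated 8 -> 6 drop; it is not a figure from the CSS model.
p_scandal_costs_election = 0.01  # added probability that post-nomination accusations cost him the general
baseline_score = 8               # campaign score before the adjustment
outcome_gap = 200                # assumed gap (on the model's scale) between "Mike wins" and "Trump wins"

adjusted_score = baseline_score - p_scandal_costs_election * outcome_gap
print(adjusted_score)  # 6.0
```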
It might be worth renaming "Air Pollution" to "Environment" or separating climate change into its own category, since I don't normally think climate change when I hear "air pollution" and had to search the doc to find climate change in that section.
Hm, I thought that 'air pollution' would be readily interpreted as including climate change.
I called it air pollution rather than climate change because I think it's perceived as a more convincing and less partisan term. And it's more correct, given that we're also addressing other consequences besides climate change.
I don't call it environment because we don't have evaluations regarding ground and water pollution, but I could change it if more people feel the same way.
When I hear "air pollution", I normally only think of pollution that affects people's health through breathing.
If ground and water pollution are negligible relative to air pollution, you could just say so and leave them out of the analysis.
That being said, air pollution that affects health through breathing isn't something that comes to mind as an environmental issue, either.
Do you have a table comparing the scores on each issue for each candidate? I think I remember seeing one before.
I apologize if you've already considered this and I missed it, but have you considered the impacts of the different candidates and policies on the EA community and those who contribute to EA causes? Policies on taxes and charity/philanthropy, for example, could have pretty important impacts. Here are Dylan Matthews and Tyler Cowen on wealth taxes and philanthropy.
These seem like small impacts on the national level. My comment on this dimension of wealth taxation is simply:
"Wealth taxes would also encourage more rapid spending on luxury consumption, political contributions, and philanthropy. It’s not clear if this is generally good or bad. Of course the tax would also reduce the amount of money that is ultimately available for the rich to use on these things, although the cap on political contributions means that it probably wouldn’t make much difference there."