The Nobel Prize in Economics was awarded to Abhijit Banerjee, Esther Duflo, and Michael Kremer "for their experimental approach to alleviating global poverty".
Michael Kremer is a founding member of Giving What We Can 🙂
I've written a blog post on naive effective altruism and conflict.
A very useful concept is naive effective altruism. The naive effective altruist fails to take some important social or psychological considerations into account. Therefore, they may end up doing harm, rather than good.
The standard examples of naive effective altruism are perhaps lying and stealing for the greater good. But there are other, less salient examples. Here I want to discuss one of them: the potential tendency to be overly conflict-oriented. There are several ways this may occur.
First, people may neglect the costs of conflict - that it’s psychologically draining for them and for others, that it reduces the potential for future collaboration, that it may harm community culture, and so on. Typically, you enter into a conflict because you think that some individual or organisation is making a poor decision - e.g. one that reduces impact. My hunch is that people often decide to enter the conflict because they focus exclusively on this (supposed) direct cost to impact, and don’t consider the costs of the conflict itself.
Second, people often have unrealistic expectations of how others will react to criticism. Rightly or wrongly, people tend to feel that their projects are their own, and that others can only have so much of a say over them. They can take a certain amount of criticism, but if they feel that you’re invading their territory too much, they will typically find you abrasive. And they will react adversely.
Third, overconfidence may lead you to think that a decision is obviously flawed when there’s actually reasonable disagreement. That can make you push harder than you should.
*
These considerations don’t mean that you should never enter into a conflict. Sometimes you should. Exactly when to do so is a tricky problem. All I want to say is that we should be aware that there’s a risk of entering into too many conflicts if we apply effective altruism naively.
I've written a blog post about general vs AI-specific explanations of existential risk neglect, which may be of interest to some. Some excerpts:
Even though existential risk neglect is usually explained by general biases that don’t pertain to specific risks, it is sometimes acknowledged that there are important AI-specific biases. E.g. the AI risk expert Stuart Russell has offered an illuminating thought experiment:
The arrival of superintelligent AI is in many ways analogous to the arrival of a superior alien civilization but much more likely to occur. Perhaps most important, AI, unlike aliens, is something over which we have some say. Then I asked the audience to imagine what would happen if we received notice from a superior alien civilization that they would arrive on Earth in thirty to fifty years. The word pandemonium doesn’t begin to describe it. Yet our response to the anticipated arrival of superintelligent AI has been . . . well, underwhelming begins to describe it.
I think Russell is right: we would react much more strongly to a notice of an alien invasion. AI risk is unprecedented, difficult to comprehend, and may sound outlandish or even laughable. Those features arguably make people inclined to downplay existential risk from AI. By contrast, they are likely much more inclined to take seriously existential risks that are easier to grasp and/or have known historical precedents.
...
I’m not sure I believe that AI-specific biases are the whole story, however. I do think that people also have a general tendency to neglect existential risk. But I think that AI-specific biases are an important part of the story.
If this is true, then one upshot could be that efforts to counter biases relating to existential risk should largely be directed specifically at existential risk from AI, rather than at existential risk in general. Relatedly, I think that part of the existential risk community is sometimes a bit too inclined to talk about existential risk in general when it’s more appropriate to talk about specific risks, such as AI risk. Existential risk is a very heterogeneous concept - the risks are very different not only psychologically but also in terms of how likely they are - and relying mostly on the general existential risk concept may mask that.
Tl;dr - if you're working on lobbying in a small or mid-sized country and want to reduce catastrophic risk, trying to increase spending on public goods that benefit the whole world - e.g. research relevant to pandemic preparedness - is an option one might consider.
Some sorts of lobbying or policy work related to catastrophic risk are likely best done in big and rich countries like the US. Only such countries have, or are likely to have, some of the technologies that risk causing a global catastrophe (nuclear weapons, cutting-edge AI, etc.). That means that it may be substantially less valuable to try to influence the policies of small countries (though no doubt it depends on a multitude of factors). The value of influencing a country may scale superlinearly with its size.
But that's not true for all policy interventions. E.g. Guarding Against Pandemics is working to increase US spending on pandemic preparedness. As far as I understand, part of this money would be spent on research on better therapeutics and vaccines, better tests, etc. Such research would presumably benefit the whole world, and funds for it could come from any country, including small or mid-sized countries. That means that for these kinds of interventions, the value of influencing a country likely scales roughly linearly with its size, rather than superlinearly. And since it's likely easier to influence the policies of smaller countries (e.g. because of less lobbying competition), the impact of lobbying them could be comparable to that of lobbying the US (though no doubt it depends on many factors). And since Sam Bankman-Fried is excited about lobbying the US government to increase such spending, lobbying other governments could likely also be impactful. (It should be noted, though, that my argument is pretty abstract and there may be additional important considerations.)
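To illustrate the shape of that comparison, here is a minimal toy sketch. All numbers are purely illustrative assumptions I made up for the example, not estimates: the value of the research funded is assumed to scale linearly with GDP, while the cost of a serious lobbying effort is assumed to be much lower in smaller countries.

```python
# Toy sketch of lobbying cost-effectiveness for globally beneficial public goods.
# All numbers are purely illustrative assumptions, not empirical estimates.

countries = {
    # name: (GDP in $bn, assumed cost of a serious lobbying effort in $m)
    "US": (23_000, 50.0),
    "Mid-sized country": (500, 1.5),
    "Small country": (50, 0.2),
}

# Assumption: a successful effort shifts a fixed share of GDP (0.005%) into
# globally beneficial research, e.g. pandemic preparedness R&D.
SHIFT_SHARE = 0.00005

for name, (gdp_bn, lobby_cost_m) in countries.items():
    research_m = gdp_bn * 1_000 * SHIFT_SHARE  # $m of research funded
    ratio = research_m / lobby_cost_m          # $ of research per $ of lobbying
    print(f"{name}: ~${research_m:,.0f}m of research per ${lobby_cost_m}m of lobbying "
          f"(ratio ~{ratio:.0f}:1)")
```

With these made-up inputs the ratios land in the same ballpark, which is all the argument needs; the real question is how actual lobbying costs and achievable budget shifts compare across countries.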
On the other hand, smaller countries have less of a stake in reducing global risks, due to their smaller size, so it might be harder to spend a lot to mitigate those risks. Instead, you might want to think about pilot programs that could later be extended to big countries.
Also, I think people in small countries should carefully examine their ability to have an impact through institutions like the UN General Assembly that have a one-country-one-vote system.
"On the other hand, smaller countries have less of a stake in reducing global risks, due to their smaller size, so it might be harder to spend a lot to mitigate those risks. "
Not sure about that. E.g. the largest foreign aid donors in terms of percentage of GNI are Luxembourg, Norway, Sweden, and Denmark, meaning they're unusually likely to want to contribute to global welfare. Likewise, the Scandinavian countries pursue more radical climate policies than, e.g. the US.
I've written about that in the context of climate change:
"Increasing public clean energy R&D does not necessarily require strong multilateralism or harmonized national policies. This makes it very tractable politically and uniquely positioned in the space of all climate policies as a decentralized approach.
And even small countries can contribute. Take Estonia. They have the second largest per capita CO₂ footprint in the EU and by far the most carbon-intensive economy among the OECD countries, because they burn a lot of shale oil.[8]
So are Estonia's climate policies the worst in the world?[9] Quite the opposite is true: a country with just 1.5 million citizens whose energy footprints amount to only 0.02% of the global total won't contribute much to climate change.
But more importantly, Estonia spends more than any other country on clean energy R&D relative to GDP. In fact, relative to GDP they spend more than twice as much as Norway. So perhaps Estonia should be regarded as a world leader on climate change despite their high emissions, because increasing public clean energy R&D is the most effective climate change policy. Because of diminishing returns, it is very hard to imagine reducing Estonian emissions to zero. This would mean replacing every last lightbulb with LEDs powered by zero carbon energy and having everyone fly in electric planes. It is much easier to conceive of an Estonian scientist or engineer who improves, say, carbon capture technology so that the diffuse benefits reduce global emissions by 0.02%.
Alas, Estonia's GDP is small in absolute terms. This is why we need many more countries to be like Estonia." [source]
Alternatively, small countries can take policy risks that, if they pan out, might later be implemented in big countries. Does the US steal ideas from any particular set of small countries?
I think that some EAs focus a bit too much on sacrifices in terms of making substantial donations (as a fraction of their income), relative to sacrifices such as changing what cause they focus on or what they work on. The latter often seem both higher impact and less demanding (though it depends a bit). So it seems that one might want to emphasise the latter a bit more, and the former a bit less, relatively speaking. And if so, one would want to adjust EA norms and expectations accordingly.
International air travel may contribute to the spread of infectious diseases (cf. this suggestive tweet; though wealth may be a confounder; poor countries may have more undetected cases). That's an externality that travellers and airlines arguably should pay for, via a tax. The money would be used for defences against pandemics. Is this something that's considered in existing taxation? If there should be such a pandemic flight tax, how large should it optimally be?
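As a very rough illustration of how one might start to size such a tax, here is a standard Pigouvian back-of-envelope: set the tax roughly equal to the expected external pandemic cost attributable to a ticket. Every input below is a placeholder assumption, not an estimate.

```python
# Back-of-envelope sketch of a Pigouvian "pandemic flight tax".
# Tax per ticket ~ expected pandemic cost attributable to international aviation,
# divided by the number of international tickets. All inputs are placeholder assumptions.

expected_annual_pandemic_cost = 500e9  # assumed expected global pandemic cost per year, in $
aviation_share_of_risk = 0.05          # assumed share of that risk attributable to air travel
international_tickets = 1.8e9          # assumed international passenger trips per year

external_cost_per_year = expected_annual_pandemic_cost * aviation_share_of_risk
tax_per_ticket = external_cost_per_year / international_tickets
print(f"Implied tax: ~${tax_per_ticket:.0f} per international ticket")
```

The point is only the structure of the calculation; the honest answer to "how large?" depends on contested inputs like the expected cost of pandemics and how much risk marginal flights actually add.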
One might also consider whether there are other behaviours that increase the risk of pandemics that should be taxed for the same reason. Seb Farquhar, Owen Cotton-Barratt, and Andrew Snyder-Beattie already suggested that risk externalities should be priced into research with public health risks.
Foreign Affairs discussing similar ideas:
One option would be to create a separate international fund for pandemic response paid for by national-level taxes on industries with inherent disease risk—such as live animal producers and sellers, forestry and extractive industries—that could support recovery and lessen the toll of outbreaks on national economies.
Link
Of possible interest regarding the efficiency of science: a paper finds that scientists on average spend 52 hours per year formatting papers. (Times Higher Education write-up; extensive excerpts here if you don't have access.)
This seems about a factor of 2 lower than I expected. My guess would be that this just includes the actual cost of fixing formatting errors, not the cost of fitting your ideas to the required format at all (i.e. having to write all the different sections, even when it doesn't make sense, or being forced to use LaTeX in the first place).
(Note: I have not yet gotten around to reading the paper, so this is just a first impression, as well as registering a prediction.)
Yes, one could define broader notions of "formatting", in which case the cost would be higher. They use a narrower notion.
For the purpose of this work, formatting was defined as total time related to formatting the body of the manuscript, figures, tables, supplementary files, and references. Respondents were asked not to count time spent on statistical analysis, writing, or editing.
The authors think that there are straightforward reforms which could reduce the time spent on formatting, in this narrow sense.
[I]t is hoped that a growing number of journals will recommend no strict formatting guidelines, at least at first submission but preferably until acceptance, to alleviate the unnecessary burden on scientists. In 2012, Elsevier initiated a process like this in the journal Free Radical Biology & Medicine with “Your Paper, Your Way”, a simplified submission process with no strict formatting requirements until the paper has been accepted for publication.
It may be more difficult to get acceptance for more far-reaching reforms.
On encountering global priorities research (from my blog).
People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus.
But people who encounter global priorities research - the study of what actions would improve the world the most - often take a different approach. Many disagree with global priorities researchers’ rankings of causes, preferring a ranking of their own.
This can happen for many reasons, and there’s some merit to several of them. First, as global priorities researchers themselves acknowledge, there is much more uncertainty in global priorities research than in most other fields. Second, global priorities research is a young and not very well-established field.
But there are other factors that may make people defer less to existing global priorities research than is warranted. I think I did, when I first encountered the field.
First, people often have unusually strong feelings about global priorities. We often feel strongly for particular causes or particular ways of improving the world, and don’t like to hear that they are ineffective. So we may not listen to rankings of causes that we disagree with.
Second, most intellectually curious people have usually put some thought into the questions that global priorities research studies, even if they’ve never heard of the field itself. This is especially so since most academic disciplines have some relation to global priorities research. So people typically have a fair amount of relevant knowledge. That’s good in some ways, but can also make them overconfident in their ability to judge existing global priorities research. Identifying the most effective ways of improving the world requires much more systematic thinking than most people will have done prior to encountering the field.
Third, people may underestimate how much thinking global priorities researchers have done over the past 10-20 years, and how sophisticated that thinking is. This is to some extent understandable, given how young the field is. But if you start to truly engage with the best global priorities research, you realize that they have an answer to most of your objections. And you’ll discover that they’ve come up with many important considerations that you’ve likely never thought of. This was definitely my personal experience.
For these reasons, people who are new to global priorities research may come to dismiss existing research prematurely. Of course, that’s not the only mistake you can make. You can also go too far in the other direction, and be overly deferential. It’s a tricky balance to strike. But in my experience, premature dismissal is relatively common - and maybe especially so among smart and experienced people. So it’s something to watch out for.
Thanks to Ryan Carey for comments.
"People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus."
I'm not sure I agree with this, so it is not obvious to me that there is anything special about GP research. But it depends on who you mean by 'people' and what your evidence is. The reference class of research also matters - I expect people are more willing to believe physicists, but less so sociologists.
Yeah, I agree that there are differences between different fields - e.g. physics and sociology - in this regard. I didn't want to go into details about that, however, since it would have been a bit of a distraction from the main subject (global priorities research).
LessWrong is experimenting with a two-axis voting system:
Overall (left and right arrows): what is your overall feeling about the comment? Does it contribute positively to the conversation? Do you want to see more comments like this?
Agreement (check and cross): do you agree with the position of this comment?
I think it could be interesting to test out that or a similar voting system on the EA Forum as well.
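For what it's worth, the underlying data model is simple. Here is a minimal, hypothetical sketch (not the actual LessWrong or Forum implementation) of how the two axes could be stored and tallied independently:

```python
# Minimal sketch of a two-axis voting tally (hypothetical; not the actual forum code).
# Each vote has two independent components: overall (-1/0/+1) and agreement (-1/0/+1).

from dataclasses import dataclass

@dataclass
class Vote:
    overall: int = 0    # -1 = downvote, 0 = neutral, +1 = upvote
    agreement: int = 0  # -1 = disagree, 0 = no stance, +1 = agree

def tally(votes):
    """Return (overall_score, agreement_score) for a comment."""
    return (sum(v.overall for v in votes), sum(v.agreement for v in votes))

# Example: a comment people find valuable but mostly disagree with.
print(tally([Vote(+1, -1), Vote(+1, -1), Vote(+1, +1), Vote(0, -1)]))  # (3, -2)
```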
I expect we will once they settle on a more stable UI.
+1
That said, I think I might even more prefer some sort of emoji system, where there were emojis to represent each of the 4 dimensions, but also the option to have more emojis.
I wrote a blog post on utilitarianism and truth-seeking. Brief summary:
The Oxford Utilitarianism Scale defines the tendency to accept utilitarianism in terms of two factors: acceptance of instrumental harm for the greater good, and impartial beneficence.
But there is another question, which is subtly different, namely: what psychological features do we need to apply utilitarianism, and to do it well?
Once we turn to application, truth-seeking becomes hugely important. The utilitarian must find the best ways of doing good. You can only do that if you're a devoted truth-seeker.
A new paper in Personality and Individual Differences finds that:
Timegiving behaviors (i.e. caregiving, volunteering, giving support) and prosocial traits were associated with a lower mortality risk in older adults, but giving money was not.
When I read your posts on psychology, I get the sense that you're genuinely curious about the results, without much of any filter for them matching with the story that EA would like to tell. Nice job.
Thanks!
Hostile review of Stuart Russell's new book Human Compatible in Nature. (I disagree with the review.)
Russell, however, fails to convince that we will ever see the arrival of a “second intelligent species”. What he presents instead is a dizzyingly inconsistent account of “intelligence” that will leave careful readers scratching their heads. His definition of AI reduces this quality to instrumental rationality. Rational agents act intelligently, he tells us, to the degree that their actions aim to achieve their objectives, hence maximizing expected utility. This is likely to please hoary behavioural economists, with proclivities for formalization, and AI technologists squeaking reward functions onto whiteboards. But it is a blinkered characterization, and it leads Russell into absurdity when he applies it to what he calls “overly intelligent” AI.
Russell’s examples of human purpose gone awry in goal-directed superintelligent machines are bemusing. He offers scenarios such as a domestic robot that roasts the pet cat to feed a hungry child, an AI system that induces tumours in every human to quickly find an optimal cure for cancer, and a geoengineering robot that asphyxiates humanity to deacidify the oceans. One struggles to identify any intelligence here.
Philosophy Contest: Write a Philosophical Argument That Convinces Research Participants to Donate to Charity
Can you write a philosophical argument that effectively convinces research participants to donate money to charity?
Prize: $1000 ($500 directly to the winner, $500 to the winner's choice of charity)
Background
Preliminary research from Eric Schwitzgebel's laboratory suggests that abstract philosophical arguments may not be effective at convincing research participants to give a surprise bonus award to charity. In contrast, emotionally moving narratives do appear to be effective.
However, it might be possible to write a more effective argument than the arguments used in previous research. Therefore U.C. Riverside philosopher Eric Schwitzgebel and Harvard psychologist Fiery Cushman are challenging the philosophical and psychological community to design an argument that effectively convinces participants to donate bonus money to charity at rates higher than they do in a control condition.
"Write a Philosophical Argument That Convinces Research Participants to Donate to Charity"
Has this ever been followed up on? Is their data public?
Andrew Gelman argues that scientists’ proposals for fixing science are themselves not always very scientific.
If you’ve gone to the trouble to pick up (or click on) this volume in the first place, you’ve probably already seen, somewhere or another, most of the ideas I could possibly propose on how science should be fixed. My focus here will not be on the suggestions themselves but rather on what are our reasons for thinking these proposed innovations might be good ideas. The unfortunate paradox is that the very aspects of “junk science” that we so properly criticize—the reliance on indirect, highly variable measurements from nonrepresentative samples, open-ended data analysis, followed up by grandiose conclusions and emphatic policy recommendations drawn from questionable data—all seem to occur when we suggest our own improvements to the system. All our carefully-held principles seem to evaporate when our emotions get engaged.
Some might find this poll of interest. Please participate if you have a Twitter account.
Marginal Revolution:
Due to a special grant, there has been a devoted tranche of Emergent Ventures to individuals, typically scholars and public intellectuals, studying the nature and causes of progress.
Nine grantees, including one working on X-risk:
Leopold Aschenbrenner, 17 year old economics prodigy, to spend the next summer in the Bay Area and for general career development. Here is his paper on existential risk.
The paper was also posted here on the forum.
"Veil-of-ignorance reasoning favors the greater good", by Karen Huang, Joshua Greene, and Max Bazerman (all at Harvard).
Philosopher Eric Schwitzgebel argues that good philosophical arguments should be such that the target audience ought to be moved by the argument, but that such arguments are difficult to make regarding animal consciousness, since there is no common ground.
The Common Ground Problem is this. To get an argument going, you need some common ground with your intended audience. Ideally, you start with some shared common ground, and then maybe you also introduce factual considerations from science or elsewhere that you expect they will (or ought to) accept, and then you deliver the conclusion that moves them your direction. But on the question of animal consciousness specifically, people start so far apart that finding enough common ground to reach most of the intended audience becomes a substantial problem, maybe even an insurmountable problem.
Cf. his paper Is There Something It’s Like to Be a Garden Snail?
The question “are garden snails phenomenally conscious?” or equivalently “is there something it’s like to be a garden snail?” admits of three possible answers: yes, no, and denial that the question admits of a yes-or-no answer. All three answers have some antecedent plausibility, prior to the application of theories of consciousness. All three answers retain their plausibility also after the application of theories of consciousness. This is because theories of consciousness, when applied to such a different species, are inevitably question-begging and rely partly on dubious extrapolation from the introspections and verbal reports of a single species.
Eric Schwitzgebel:
Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.