Bio

I teach international relations and political theory at the University of Nottingham. Much of my research relates to intergenerational ethics, existential risk, or both. I’ve examined cases of ongoing great power peace, and argued that nuclear war is inevitable in the long term if we try to perpetuate nuclear deterrence. I’ve also written extensively about the ethics of climate change, argued that governments should make more use of public debt to address it, and proposed solutions to the non-identity problem and the mere addition paradox.

How others can help me

Currently I’m finishing a book on longtermism and existential risk. I'd welcome the opportunity to present parts of the argument. I'm also planning a paper on how superintelligent AI could affect international relations. This is a new topic for me and I'd be glad to attend conferences and workshops to help me get on top of it.

How I can help others

My formal training is in international relations, and I've written extensively about great power war and peace and Russian foreign policy. I might be particularly helpful to EAs working on IR, but I am happy to discuss any issues relating to existential risk.

Comments

That might be right -- but then wouldn't it be a major problem for EA if it were unable to discuss rationally one of the most important factors determining whether it achieved its goals? This election is likely to have huge implications not only for how (or whether) the world manages a number of x-risks to a minimally satisfactory extent, but also for many other core EA concerns such as international development, and probably farm animals too (a right-wing politician with a deregulatory agenda, for whom 'animals' is a favourite insult, is scarcely going to have their welfare at heart).

Thanks -- that's odd. The 'elephant' post isn't showing up on mine.

Thanks, Vasco! That's odd--the Clare Palmer link is working for me. It's her paper 'Does Nature Matter? The Place of the Nonhuman in the Ethics of Climate Change'--what looks like a page proof is posted on www.academia.edu.

One of the arguments in my paper is that we're not morally obliged to do the expectably best thing of our own free will, even if we reliably can, when it would benefit others who will be much better off than we are whatever we do. So I think we disagree on that point. That said, I entirely endorse your argument about heuristics, and have argued elsewhere that even act utilitarians will do better if they reject extreme savings rates.

Thanks, Vasco! You are welcome to list me in the acknowledgements. I’m glad to have the reference to Tomasik’s post, which Timothy Chan also cited below, and appreciate the detailed response. That said, I doubt we should be agnostic on whether the overall effects of global heating on wild animals will be good or bad.

The main upside of global heating for animal welfare, on Tomasik’s analysis, is that it could decrease wild animal populations, and thus wild animal suffering. On balance, in his view, the destruction of forests and coral reefs is a good thing. But that relies on the assumption that most wild animal lives are worse than nothing. Tomasik and others have given some powerful reasons to think this, but there are also strong arguments on the other side. Moreover, as Clare Palmer argues, global heating might increase wild animal numbers—and even Tomasik doesn’t seem sure it would decrease them.

In contrast, the main downside, in Tomasik’s analysis, is less controversial: that global heating is going to cause a lot of suffering by destroying or changing the habitats to which wild animals are adapted. ‘An “unfavorable climate”’, notes Katie McShane, ‘is one where there isn’t enough to eat, where what kept you safe from predators and diseases in the past no longer works, where you are increasingly watching your offspring and fellow group members suffer and die, and where the scarcity of resources leads to increased conflict, destabilizing group structures and increasing violent confrontations.' Palmer isn’t so sure: ‘Even if some animals suffer and die, climate change might result in an overall net gain in pleasure, or preference satisfaction (for instance) in the context of sentient animals. This may be unlikely, but it’s not impossible.’ True. But even if it’s only unlikely that global heating’s effects will be good, it follows that its effects on existing animals are bad in expectation.

There’s another factor which Tomasik mentions in passing: there is some chance that global heating could lead to the collapse of human civilisation—perhaps in conjunction with other factors. In some respects, this would be a good thing for non-humans—notably, it would put an end to factory farming. It would also preclude the possibility of our spreading wild animal suffering to other planets. On the flipside, however, it would also eliminate the possibility of our doing anything sizable to mitigate wild animal suffering on earth.

Now, while there may be more doubt about the upsides than about the downsides of our GHG emissions, that needn’t decide the issue if the upsides are big enough. But even if Tomasik and others are right that wild animal lives are bad on net, there’s also doubt about whether global heating will reduce the number of wild animal lives. And even if both of these premises are met, I’m not sure they’d outweigh the suffering global heating would inflict on those wild animals who will exist.

I think you have misinterpreted what my article about discounting is recommending. In contrast to some other writers, I’m not calling for discounting at the lowest possible rate. Even at a rate of 2%, catastrophic damages evaporate in cost-benefit analysis if they occur more than a couple of centuries hence, thus giving next to no weight to the distant future. However, a traditional justification for discounting is that if we didn’t discount, we’d be obliged to invest nearly all our income, since the number of future people could be so great. I argue for discounting, at conventional rates, damages to those who would be much better off than we are, while giving sizable—even if not equal—weight to damages that would be suffered by everyone else, regardless of how far into the future they exist. My approach thus has affinities with the one advocated by Geir Asheim here.

One implication is that while we’re under no obligation to make future rich people richer, we ought to be very worried about worst-case climate change scenarios, since in those scenarios humans could be poorer. Another is that since most non-humans will, for the foreseeable future, be worse off than we are, we shouldn’t discount their interests away.

Vasco, I've given the post to which the first link leads only a quick read, so please correct me if I'm wrong. However, it left me wondering about two things:

(a) It wasn't clear to me that the estimate of global heating damages was counting global heating damages to non-humans. The references to DALYs and 'climate change affecting more people with lower income' lead me to suspect you're not. But non-humans will surely be the vast majority of the victims of global heating--as well as, in some cases, its beneficiaries. While Timothy Chan is quite right to point out below that this is a complex matter, it certainly isn't going to be a wash, and if the effects are negative, they're likely to be very bad.

(b) It appears you were working with a study that employed a discount rate of 2%. That's going to discount damages in 100 years to about 14% of their present value, and damages in 200 years to about 1.9% of their present value--and it goes downhill from there. But that seems very hard to justify. Discounting is often defended on the ground that our descendants will be richer than we are. But that rationale doesn’t apply to damages in worst-case scenarios: because they could be so enduring, these damages are huge in expectation. Nor does it apply to future non-humans, who won’t be richer than we are, so benefits to them don't have diminishing marginal utility compared with benefits to us.
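As a quick sanity check on the figures in (b): under a constant annual discount rate r, a unit of damage occurring t years from now is worth 1/(1+r)^t today. A minimal sketch (illustrative only, not drawn from the study in question):

```python
def discount_factor(r: float, t: float) -> float:
    """Present value of one unit of damage occurring t years out,
    at a constant annual discount rate r."""
    return 1 / (1 + r) ** t

# At 2%, damages 100 years out retain roughly 14% of their value,
# and damages 200 years out roughly 1.9%.
for years in (100, 200):
    f = discount_factor(0.02, years)
    print(f"{years} years: {f:.1%} of present value")
```

Raising the rate makes the decay far steeper, which is why the choice of rate dominates any cost-benefit analysis of long-run climate damages.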

The US government--including, so far as I know, the EPA--uses a discount rate that is higher than 2%, which makes future damages from global heating evaporate even more quickly. What's more, I'd be surprised if it's trying to value damages to wild animals in terms of the value they would attach to avoiding them, as opposed to the value that American human beings do. The latter approach, as Dale Jamieson has observed, is rather like valuing harm to slaves by what their masters would pay to avoid it.

So far as it goes, your argument seems correct. But you're leaving out a significant factor here--carbon emissions. Beef cattle are extraordinarily carbon intensive even compared to other animals raised for food. If you eat them, your emissions, combined with other people's emissions, are going to cause a huge amount of both human and non-human suffering.

There's a complication. You could, in principle, offset the damage from your carbon emissions. But you could also, in principle, eat animals who have been raised free range, and whose lives have probably been worth living up to the time they're killed. 

Both of these will require you to spend extra money, and investigate whether you're really getting what you pay for. Rather than going to all this trouble--and here we'll agree--it seems a lot better simply to eat an Impossible Burger. 

I think we're talking past each other. My claim is that taking precautionary measures in case A will prevent more deaths in expectation (17 billion/1000 = 17 million) than taking precautionary measures in case B (8 billion/1000 = 8 million). We can all agree that it's better, other things being equal, to prevent more deaths in expectation than fewer. On the Intuition of Neutrality, other things seemingly are equal, making it more important to take precautionary measures against the virus in A than against the virus in B.
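The expected-value arithmetic behind this comparison is just total deaths multiplied by the probability of the outbreak. A minimal sketch, assuming (as the figures in the comment suggest) a 1-in-1000 chance in each case:

```python
def expected_deaths(total_deaths: int, one_in: int) -> float:
    """Expected deaths from an outbreak with a 1-in-`one_in` chance
    of occurring and `total_deaths` deaths if it does."""
    return total_deaths / one_in

case_a = expected_deaths(17_000_000_000, 1000)  # 17 billion deaths, p = 0.001
case_b = expected_deaths(8_000_000_000, 1000)   # 8 billion deaths, p = 0.001
print(case_a, case_b)  # 17 million vs 8 million in expectation
```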

But this is a reductio ad absurdum. Would it really be better for humanity to go extinct than to suffer ten million deaths from the virus per year for the next thousand years? And if not, shouldn’t we accept that the reason is that additional (good) lives have value?

Thanks, Richard! I've just had a look at your post and see you've anticipated a number of the points I made here. I'm interested in the problem of model uncertainty, but most of the treatments of it I've found have been technical, which isn't much help to a maths illiterate like me. Some of the literature on moral uncertainty is relevant, and there’s an interesting treatment in the paper by Toby Ord, Rafaela Hillerbrand and Anders Sandberg here. But I’d be glad to learn of other philosophical treatments if you or others can recommend any.

Thanks! Just my subjective judgement. I feel pretty confident that 0.5% would be too low. I'd be more open to the view that 5-10% isn't high enough; if that's true, it would strengthen my argument. I'd be interested to know what other people think.
