Tejas Subramaniam

Student of math and economics @ Stanford University
285 karma · Joined · Pursuing an undergraduate degree

Bio

Stanford student (math/economics). Former intern at Rethink Priorities (animal welfare) and J-PAL South Asia (IDEA Initiative).

Comments (29)

The Carlsmith article you linked -- post 1 of his two-post series -- seems mostly to argue against the standard justifications for ethical anti-realists reasoning about ethics (i.e., he argues that neither a brute preference for consistency nor money-pump arguments seem like the whole picture). Did you mean the second piece in the series instead?

Brian Tomasik considers more selection toward animals with faster life histories in his piece on the effects of climate change on wild animals. He seems to think it’s not decisive (and ends up concluding that he’s basically 50–50 on the sign of the effects of climate change on overall animal suffering) for ~three reasons (paraphrasing Tomasik):

  • Some of the animals with slower life histories that get replaced are carnivores/omnivores, so their loss might mean climate change increases invertebrate populations.
  • Instability might also affect plants, which could lower net primary productivity and hence invertebrate populations.
  • Many of the “ultimate” life forms with fast life histories will be microorganisms, on which we don’t place much moral weight.

I’d be curious how you think the arguments in the above post should change Tomasik’s view, in light of these considerations.

I didn’t say they fell under the ethics of killing; I was using killing as an example of a generic rights violation under a plausible patient-centered deontological theory, to illustrate the difference between “a rights violation happening to one person and help coming for a separate person as an offset” and “one’s harm being directly offset.”

(I agree it’s less clear whether potential people can have rights, even if they can have moral consideration, and in particular a right not to be brought into existence, but I think it’s very plausible.)

Note, however, that I think the question of whether there can be deontic side-constraints regarding our treatment of animals is unclear even conditional on deontology. Many deontologist philosophers – like Huemer – are uncertain whether animals have “rights” (as a patient-centered deontologist would put it), even though they think (1) humans have rights and (2) animals still deserve moral consideration. Deontologists sometimes resort to something like “deontology for people, consequentialism for animals” (although other deontologists, like Nozick, thought this was insufficient for animals).

I think offsetting emissions and offsetting meat consumption are comparable under utilitarianism, but much less comparable under most deontological moral theories, if you think animals have rights. For instance, if you killed someone and donated $5,000 to the Malaria Consortium, that seems worse – from a deontological perspective – than if you just did nothing at all, because the life you kill and the life you save are different people, and many deontological theories are built on the “separateness of persons.” In contrast, if you offset your CO2 emissions, you’re offsetting your effect on warming, so you don’t kill anyone to begin with (because it’s not like your CO2 emissions cause warming that hurts agent A, and then your offset reduces temperatures to benefit agent B). It might be similarly problematic to offset your contribution to air pollution, though, because the effects of air pollution happen near the place where the pollution actually happened. 

Why do you think excruciating pain is 10,000 times as intense as disabling pain? If I use these conversion factors (p. 30) instead, chicken welfare campaigns seem to win.
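To make the point above concrete, here is a minimal sketch (all numbers are hypothetical, purely for illustration, and not taken from any of the linked analyses) of how the assumed intensity ratio between pain categories can flip which intervention looks more cost-effective:

```python
# Illustrative sketch: how the assumed ratio between pain intensities
# can flip a cost-effectiveness comparison. All figures are invented.

def weighted_pain(hours_by_category, weights):
    """Total pain averted, in disabling-pain-hour equivalents."""
    return sum(hours_by_category[c] * weights[c] for c in hours_by_category)

# Hypothetical hours of pain averted per dollar by two interventions.
intervention_a = {"disabling": 10.0, "excruciating": 0.001}
intervention_b = {"disabling": 2.0, "excruciating": 0.01}

# If excruciating pain is weighted 10,000x disabling pain, B wins...
steep = {"disabling": 1, "excruciating": 10_000}
# ...but with a shallower ratio (e.g., 100x), A wins.
shallow = {"disabling": 1, "excruciating": 100}

a_steep, b_steep = (weighted_pain(i, steep) for i in (intervention_a, intervention_b))
a_shallow, b_shallow = (weighted_pain(i, shallow) for i in (intervention_a, intervention_b))

print(b_steep > a_steep)      # B looks better under the steep ratio
print(a_shallow > b_shallow)  # A looks better under the shallow ratio
```

The ranking is entirely driven by the conversion factor, which is why the choice of ratio matters so much here.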

Do you think there are promising ways to slow down growth in aquaculture?

Somewhat relevant (takes the hard proves-too-much stance): https://www.econlib.org/archives/2014/10/dear_identity_p.html

She co-authored a piece a few months back about finding AI safety emotionally compelling. I’d be interested in her thoughts on the following two questions related to that!

  • How worried should we be about suspicious convergence between AI safety being one of the most interesting/emotionally compelling questions to think about and it being the most pressing problem? There used to be a lot of discussion around 2015 about how it seemed like people were working on AI safety because it’s really fun and interesting to think about, rather than because it’s actually that pressing. I think that argument is pretty clearly false, but I’d be curious how she views this post as interacting with those concerns. 
  • It seems a bit like the post doesn’t draw a clean distinction between capabilities and safety. I agree that, to some extent, they’re inseparable (the people building transformative AI should care about making it safe), but how does she view the downside risks of, e.g., some of the most compelling parts of AI work being capabilities-related? More generally, how worried should we be, as a community, about how interconnected safety and capabilities work are? 
    • Somewhat related: As Patrick Collison puts it, people working on making more effective engineered viruses aren’t high-status among people working on pandemic prevention, so why are capabilities researchers high-status among safety researchers? 
    • (I have a decent sense of different answers within the community – this is not really a top concern of mine – but I’d nonetheless be interested in her take! My sense is that (1) the distinction isn’t nearly as clean since you want to build AI and make it go safely and (2) it’s good for capabilities work to be more safety-geared than the counterfactual.) 