David T

812 karma · Joined

Comments (116)

The presence of a company worth a few tens of billions, whose founder talks about colonizing Mars (amongst many other bold claims) and has concrete plans for the subset of Mars colonization problems that involve actually getting there, feels very compatible with the original suggestion: the plausible near-term consequence is a small number of astronauts hanging out in a dome and some cracking TV footage, not an epoch-defining social transformation.

Looked at from another angle, fifty years ago the colonization of space wasn't driven by half of one billionaire's fortune;[1] it was driven by a significant fraction of the GDP of both the world's superpowers locked in a race, and the preceding 20 years' transition had been from nothing in space to lunar landings, space stations and deep space probes, not from expensive launches and big satellites to cheaper launches and a lot more small satellites. So the arguments for imminent space cities were better half a century ago.

  1. ^

    the part he isn't spending on his social media habit, anyway...

If you're doing a comparison with anywhere on Earth, the obvious one would be Antarctica. There absolutely are permanent bases there even though it's barely livable, but really only for relatively short-term visitors to do scientific research and/or enjoy the experience of being one of the few people to travel there. It absolutely isn't a functioning economy that runs at a profit. (Some places inside the Arctic Circle, maybe, but that wouldn't be the case if shipping the exploitable resources back to somewhere that felt more like home cost spaceflight prices per kg.) The profitable segment of space is Earth orbit, ideally without the complications of people in the equation, and that's what SpaceX has actually spent the last decade focused on.

Antarctica is also an interesting comparison point for the social and legal systems, since it's also small numbers of people from different missions living on extraterritorial land. I mean, they're not really particularly well sorted out; it just turns out they involve far too few people and far too little competition to be particularly problematic.

If the slow death involves no pain, of course it's credible. (The electric shock is, incidentally, generally insufficient to kill; the problem of the fish reviving is generally solved by immersion in ice slurry...) It's also credible that neither is remotely as painful as a two-week malaria infection, or a few years of malaria infections, which is (much of) what sits on the other side of the trade here.

It's well within the bounds of possibility that the electric shock is excruciating and the cold numbing, yes. Or indeed that they're both neutral, compared with slaughter methods that produce clear physiological stress indicators like asphyxiation in carbon-dioxide-rich water. Or that they're different for different types of water-dwelling species depending on their natural hardiness to icy water, which also seems to be a popular theory. Rightly or wrongly, ice-cold slurry is sometimes recommended as the humane option, although obviously the fish farming industry is more concerned with its ability to preserve the fish marginally better than killing them prior to insertion into the slurry...

Thanks for the response, Vasco, and apologies for the tardy reply :)

The necessity of making funding decisions means interventions in animal welfare and global health and development are compared at least implicitly. I think it is better to make them explicit for reasoning transparency, and having discussions which could eventually lead to better decisions. Saying there is too much uncertainty, and there is nothing we can do will not move things forward.

I agree on the first part. But it appears OP is perfectly transparent about their reasoning. They acknowledge that the level of uncertainty permits differences of opinion; that they believe a portfolio allocation approach incorporating different views on utilities, moral priorities and risk tolerance is better than adopting a single set of weights and fanatically optimising for them; and that the implicit moral weights are therefore a residual resulting from the preference heterogeneity of people whose decision-making OP/Dustin/Cari value, rather than an unjustifiable knowledge claim about the absolute intensity of animals' experiences which others must prove wrong if they are to consider allocating budget in any other way.

It is, of course, perfectly reasonable to disagree with the preferences of any or all individuals at OP and with the net result of that funding allocation, and there are many individual funding decisions OP have made which can be improved upon (including for relatively non-contentious reasons like "they didn't achieve their aims"). But I don't tend to think that polemical arguments with suspicious convergence, like "donating to most things in cause area X is many times more effective than everything in cause area Y", are particularly helpful in moving things forward, particularly when they're based not on spotting a glaring error or possible conflict of interest but upon a preference for the moral weights proposed by another organization OP are certainly aware of.

What do you think about humane slaughter interventions, such as the electrical stunning interventions promoted by the Centre for Aquaculture Progress? "Most sea bream and sea bass today are killed by being immersed in an ice slurry, a process which is not considered acceptable by the World Organisation for Animal Health". "Electrical stunning reliably renders fish unconscious in less than one second, reducing their suffering". Rough analogy, but a human dying in an electric chair suffers less than one dying in a freezer?

Honestly, I have no idea whether it would be more uncomfortable to die in an electric chair or in a freezer, even though I'm actually pretty familiar with the experience of human discomfort and with descriptions of electrical shocks and hypothermia written from human perspectives. I'm not volunteering to test it experimentally either! Needless to say, I have even less knowledge about the experience of a cold-blooded, water-dwelling creature with a completely different physiology and nervous system, and plausibly no conscious experience at all.

A consequence of this is that I don't think it can be stated with a high degree of certainty that transferring all the money currently spent on eradicating malaria to campaigns of indeterminate efficacy, promoting an alternative slaughter method which has an indeterminate impact on the final moments of fish, is a net positive use of resources.

Relatedly, I estimated the Shrimp Welfare Project’s Humane Slaughter Initiative is 43.5 k times as cost-effective as GiveWell's top charities. I would be curious about which changes to the parameters you would make to render the ratio lower than 1.

This is a good question, and my honest answer is probably all of them, and the fundamental premise. I've discussed in my previous post how lobbying organizations' funding isn't well measured at the margin and doesn't scale well, I don't think the evidence base for ice slurry being a particularly painful slaughter method is particularly robust,[1] I don't think RP's numbers or your upward revisions of the pain scales they use are particularly authoritative, and above all I'm not sure it's appropriate to use DALYs to trade human lives for thousand-point-scale estimates of the fleeting suffering of organisms where there isn't even a scientific consensus they have any conscious experience at all. Titotal's post does a much better job than I could of explaining how easy it is to end up with orders of magnitude of difference in outcomes even if one accepts the basic premises, and there's no particular reason to believe that premises like "researchers have made some observations about aversion to what is assumed to be pain stimuli amidst an absence of evidence of other traits associated with consciousness, and attached a number to it" are robust.

For related reasons, I don't think fanaticism is the best approach to budget allocation.

One does not need to worry about the meat eater problem to think the best animal welfare interventions are way more cost-effective than the best in global health and development. Neglecting that problem, I estimated corporate campaigns for chicken welfare are 1.51 k times as cost-effective as GiveWell's top charities, and Shrimp Welfare Project's Humane Slaughter Initiative is 43.5 k times as cost-effective as GiveWell's top charities.

There's a reason why I used the word universal. Yes, it is entirely reasonable to believe that a couple of causes from one area are clearly and obviously better than the best known in another area, though shrimp welfare certainly isn't the one I'd pick. But that's not the framing here: the debate (which is the debate week's, not yours specifically) is on Cause Area X vs Cause Area Y, not "is Charity Z the most effective charity overall".

And if I did believe your numbers were a fairly accurate representation of reality and that fanaticism was better for budget allocation than a portfolio strategy, I'd be concerned that chicken charities were using money specifically allocated to AW despite being ~28x worse than shrimp.[2] There's more money in the GHW buckets, but the chicken => shrimp reallocation decision is more easily made.

 

  1. ^

    though I'll happily concede it's a longer process than electrical stunning

  2. ^

    though personally I'd attach higher confidence to the chicken campaigns being significantly net positive...

Imagine a relay race team before a competition. The second-leg runner on the team thinks—let us assume correctly—‘If I run my leg faster than 12 seconds, then my team will finish first; if I don’t, then my team won’t finish first.’ She then runs her leg faster than 12 seconds. As the fourth-leg runner on her team crosses the finish line first, the second-leg runner thinks, ‘I won the race.’ Is she right?

 

Yes, of course she's right. Even if she's the weakest member of the team. They don't give Olympic relay teams 1/4 of a medal each.

-

For the record, I don't describe myself as an EA and don't really hang out in EA circles. I'm far too old to be susceptible to arguments that I'm going to save the world with the power of my intellect and good intentions. If the bios of EA's founding fathers are accurate, I discovered Peter Singer's solution to world poverty slightly before them, thought he had a [somewhat overstated] point, and haven't done anywhere near enough to suggest I absorbed the lesson. I think utilitarianism's utility is limited but don't have the academic pedigree to argue about it for any length of time, and I think a lot of EA utilitarian maths is a bit shoddy.[1] So I don't think I'm making a particularly partisan argument here.

But you aren't half leading with your weakest arguments.[2] GiveWell's estimation that if x bednets are distributed, on average about y% of Malawian mothers receiving the nets will succeed in using them to protect their kids, so z% fewer kids will die, isn't stealing credit from Malawian mothers or Chinese manufacturers in a zero-sum karmic accounting game; it's a simple counterfactual (with or without appropriately sized error bars). Or put another way, if a Malawian kid thanks her mother for going hungry for two days to pay for a malaria net herself,[3] the mother shouldn't feel obliged to say "no, don't thank me, thank the Chinese people that manufactured it and the supply chain that brought it all the way here, and the white Westerners for doing enough research into malaria nets to convince vendors in my village to stock it." The argument that installing a few more stakeholders in the way introduces a qualitative difference between donating and diving into a pond might make Peter Singer's thought experiment a little bit trite, but it isn't an argument against the quantitative outcomes of donating at all.

 

  1. ^

    in particular, the tendency to confuse marginal with average costs, and wild speculative guesses with robust expected value estimation. I don't actually think this is bad per se: people overestimating how much their next fiver can help a chicken or prevent Armageddon certainly isn't worse than people overestimating how much they want the next beer. I just think it looks a lot like the "donor illusion" certain leading EAs used to chastise mainstream charity for; actually, the average "child sponsorship" scheme is probably more accurate, in accounting terms, about how much your recurring contribution to the charity pool is helping Jaime from Honduras than many EA causes are. (I guess not liking that type of charity either is where you and the median EA agree and I differ :))

  2. ^

    Judging by your book reviews, you've researched sufficiently to be able to offer more nuanced criticisms of development aid. So I'm not sure why you'd lead with this, or in other articles with anecdotes about how profoundly the whinging of a single drunk teenage voluntourist crushed your dreams of changing the world. It's not even like there aren't much better glib criticisms of EA or charity in general...

  3. ^

    maybe because donations dried up...

I can't speak for OP, but I thought the whole point of its "worldview diversification buckets" was to discourage this sort of comparison by acknowledging the size of the error bars involved, and that fundamentally prioritisation decisions between them are influenced more by different worldviews than by the possibility of acquiring better data or making more accurate predictions about outcomes. This could be interpreted as an argument against the theme of the week and not just this post :-)

But I don't think neuron counts are by any means the most unfavourable [reasonable] comparison for animal welfare causes: the heuristic that we have a decent understanding of human suffering and gratification, whereas whether a particular intervention has a positive, negative or neutral impact on the welfare of a fish is guesswork, seems very reasonable and very unfavourable to many animal-related causes (even granting that fish have significant welfare ranges and that hedonic utilitarianism is the appropriate method for moral resource allocation). And of course there are non-utilitarian moral arguments in favour of one group of philanthropic causes or another (prioritise helping fellow moral beings vs prioritise stopping fellow moral beings from actively causing harm) which feel a little less fuzzy but aren't any less contentious.

There are also of course error bars wrapped around individual causes within the buckets, which is part of the reason why GHW funds both GiveWell-recommended charities and neartermist policy work that might affect more organism life-years per dollar than Legal Impact for Chickens (but might actually be more likely to be counterproductive or ineffectual),[1] but that's another reason why I think blanket comparisons are unhelpful.

A related issue is that it's much more difficult to estimate the marginal impact of research and policy work than of dispensing medicine or nets. The marginal impact of $100k more nets is easy to predict; the marginal impact of $100k more to a lobbying organization is not, even if you entirely agree with the moral weight they apply to their cause, and average cost-effectiveness is not always a reliable guide to scaling up funding, particularly not for small, scrappy organizations doing an admirable job of prioritising quick wins which are also likely to face increased opposition if they scale.[2] Some organizations which fit that bill sit in the GHW category, but the profile is much more representative of the typical EA-incubated AW cause. Some of them will run into diminishing returns as they run out of companies actually willing to engage with their welfare initiatives, others may become locked in positional stalemates, and some of them are much more capable of absorbing significant extra funding and putting it to good use than others. Past performance really doesn't guarantee future returns to scale, and some types of organization are much more capable of achieving it than others, which happens to include many of the classic GiveWell-type GHW charities, and not many of the AW or speculative "ripple effect" GHW charities.[3]
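
As a toy illustration of the average-versus-marginal distinction (all numbers below are invented purely for the sake of the sketch, not drawn from any real charity):

```latex
% Toy numbers, invented purely to illustrate average vs marginal cost-effectiveness.
% Suppose a lobbying organization's first \$1M of funding secured 1,000 quick wins:
\[
  \text{average cost-effectiveness} = \frac{\$1{,}000{,}000}{1{,}000\ \text{wins}} = \$1{,}000\ \text{per win}
\]
% If the easy targets are now taken and the next \$100k only secures 20 wins:
\[
  \text{marginal cost-effectiveness} = \frac{\$100{,}000}{20\ \text{wins}} = \$5{,}000\ \text{per win},
\]
% so extrapolating the historical average to new funding overstates its impact five-fold.
```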

I guess there are sound reasons why people could conclude that AW causes funded by OP were universally more effective than GHW ones or vice versa, but those appear to come more from strong philosophical positions (meat-eater problems or disagreement with the moral relevance of animals) than from evidence and measurement.

  1. ^

    For the avoidance of doubt, I'm acknowledging that there's probably more evidence about the negative welfare impacts of practices Legal Impact for Chickens is targeting, and about their theory of change, than about the positive welfare impacts and efficacy of some reforms promoted in the GHW bucket, even given my much higher level of certainty about the significance of the magnitude of human welfare. And by extension pointing out that comparisons between individual AW and GHW charities sometimes run the opposite way from the characteristic "AW helps more organisms but with more uncertainty" comparison.

  2. ^

    There are much more likely to be well-funded campaigns to negate the impact of an organization targeting factory farming than ones to negate the impact of campaigns against malaria. Though on the other hand, animal cruelty doesn't have as many proponents as the other side of virtually any economic or institutional reform debate.

  3. ^

    There are diminishing returns to healthcare too: malaria nets' cost-effectiveness is broadly proportional to malaria prevalence. But that's rather more predictable than the returns to scale of anti-cruelty lobbying, which aren't even necessarily positive beyond a certain point if the well-funded meat lobby gets worried enough.

"Indistinguishable from magic" is an Arthur C Clarke quote about "any sufficiently advanced technology", and I think you're underestimating the complexity of building a generation ship and keeping it operational for hundreds, possibly thousands of years in deep space.   Propulsion is pretty low on the list of problems if you're skipping FTL travel, though you're not likely to cross the galaxy with a solar sail or a 237mN thruster using xenon as propellant. (FWIW I actually work in the space industry and spent the last week speaking with people about projects to extract oxygen from lunar regolith and assemble megastructures in microgravity, so it's not like I'm just dismissing the entire problem space here)

I think that the distinction between killing all and killing most people is substantially less important than those people (and you?) believe.

I'm actually in agreement with that point, but more due to putting more weight on the first 8 billion than on the orders of magnitude more hypothetical future humans. (I think in a lot of catastrophe scenarios technological knowledge and ambition rebounds just fine eventually, possibly stronger.)

This is an absurd claim.

Why is it absurd? If humans can solve the problem of sending a generation ship to Alpha Centauri, an intelligence smart (and malevolent) enough to destroy 8 billion humans in their natural environment surely isn't going to be stymied by the complexities involved in sending some weapons after them or transmitting a copy of itself to their computers...

Positing an interstellar civilization seems to be exactly what Thorstad might call a "speculative claim", though. An interstellar civilization operating on technology indistinguishable from magic is an intriguing possibility with some decent arguments against it (Fermi, lightspeed vs current human and technological lifespans), rather than something we should be sufficiently confident of to drop our credences in the possibility of humans becoming extinct down to zero in most years after the current time of perils,[1] and even if it were achieved, I don't see why nukes and pandemics and natural disaster risk should be approximately constant per planet or other relevant unit of volume for small groups of humans living in alien environments.[2]

Certainly this doesn't seem like a less speculative claim than one sometimes offered as a criticism of longtermism's XR focus: that the risk of human extinction (as opposed to significant near-term utility loss) from pandemics, nukes or natural disasters is already zero[3] because of things that already exist. Nuclear bunkers, isolation and vaccination, and the general resilience of even unsophisticated lifeforms to natural disasters are congruent with our current scientific understanding in a way which faster-than-light travel isn't, and the farthest reaches of the galaxy aren't a less hostile environment for human survival than a post-nuclear Earth.

And of course any AGI determined to destroy humans is unlikely to be less capable than relatively stupid, short-lived, oxygen-breathing lifeforms in space, so the AGI that destroys humans after they acquire interstellar capabilities is no more speculative than the AI that destroys humans next Tuesday. A persistent, stable "friendly AI" might insulate humans from all these risks if sufficiently powerful (with or without space travel), as you suggest, but that feels like an equally speculative possibility - and worse still, one which many courses of action aimed at mitigating AI risk have a non-zero possibility of inadvertently working against...

 

  1. ^

    if the baseline rate after the current time of peril is merely reduced a little by the nonzero possibility that interstellar travel could mitigate x-risk but remains nontrivial, the expected number of future humans alive still drops off sharply the further we go into the future (at least without countervailing assumptions about increased fecundity or longevity)

  2. ^

    Individual human groups seem significantly less likely to survive a given generation the smaller they are, the further they are from Earth, and the more they have to travel, to the point where the benefit against catastrophe of having humans living in other parts of the universe might be pretty short-lived. If we're not disregarding small possibilities, there's also the possibility of a novel existential risk from provoking alien civilizations...

  3. ^

    I don't endorse this claim FWIW, though I suspect that making humans extinct, as opposed to severely endangered, is more difficult than many longtermists predict.

This feels like an isolated demand for rigour, since as far as I can see Thorstad's[1] central argument isn't that a particular course of the future is more plausible, but that [popular representations of] longtermist arguments themselves don't consider the full range of possibilities, don't discount for uncertainty, and that their apparently modest-sounding claims (that existential risk is non-zero and that humanity could last a long time if we survive near-term threats) are compatible only if one makes strong claims about the hinginess of history.[2]

I don't see him trying to build a more accurate model of the future[3] so much as pointing out how very simple changes completely change longtermist models. As such, his models are intentionally simple, and Owen's expansion above adds more value for anyone actively trying to model a range of future scenarios. But I'm not sure why it would be incumbent on the researcher arguing against choosing a course of action based on long term outcomes to be the one who explicitly models the entire problem space. I'd turn that around and question why longtermists who don't consider the whole endeavour of predicting the long term future in our decision theory to be futile generally dogmatically reject low probability outcomes with Pascalian payoffs that favour the other option, or simply assume the asymmetry of outcomes works in their favour.

Now personally I'm fine with "no, actually I think catastrophes are bad", but that's because I'm focused on the near term where it really is obvious that nuclear holocausts aren't going to have a positive welfare impact. Once we're insisting that our decisions ought to be guided by tiny subjective credences in far future possibilities with uncertain likelihood but astronomic payoffs and that it's an error not to factor unlikely interstellar civilizations into our calculations of what we should do if they're big enough, it seems far less obvious that the astronomical stakes skew in favour of humanity.

The Tarsney paper even explicitly models the possibility of non-human galactic colonization, but with the unjustified assumption that no non-humans will be capable of converting resources to utility at a higher rate than [post]humans, so their emergence as competitors for galactic resources merely "nullifies" the beneficial effects of humanity surviving. But from a total welfarist perspective, the problem here isn't just that maximizing the possible welfare across the history of the universe may not be contingent on the long term survival of the human species;[4] it's that humans surviving to colonise galaxies might diminish galactic welfare. Schwitzgebel's argument that human extinction might actually be net good for total welfare is only a mad hypothetical if you reject fanaticism: otherwise it's the logical consequence of accepting the possibility, however small, that a nonhuman species might convert resources to welfare much more efficiently than us.[5] Now, a future of decibillions of aliens building Dyson Spheres all over the galaxy because there are no pesky humans in their way sounds extremely unlikely, and perhaps even less likely than a galaxy filled with the same fantastic tech supporting quadrillions of humans - a species we at least know exists and has some interest in inventing Dyson Spheres - but despite this, the asymmetry of possible payoff magnitudes may strongly favour not letting us survive to colonise the galaxy.[6]

In the absence of any particular reason for confidence that the EV of one set of futures is definitely higher than the others, it seems like you end up reaching for heuristics like "but letting everyone die would be insane". I couldn't agree more, but the least arbitrary way to honour that intuition is to adjust the framework to privilege the short term, with discount rates sufficiently high to neuter payoffs so speculative and astronomical that we can't rule out the possibility they exceed the payoff from [not] letting eight billion humans die.[7] Since that discount rate reflects extreme uncertainty about what might happen and what payoffs might look like, it also feels more epistemically humble than basing an entire worldview on the long-tail outcome of some low probability far futures whilst dismissing other equally unsubstantiated hypotheticals because their implications are icky. And I'm pretty sure this is what Thorstad wants us to do, not to place high credence in his point estimates or give up on X-risk mitigation altogether.
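
As a rough sketch of how a high enough discount rate does that neutering (the rate, horizon and payoff below are assumed numbers chosen purely for illustration):

```latex
% Illustrative only: the discount rate, horizon and payoff are assumed numbers.
% Present value of a payoff V realised t years from now, discounted at rate delta:
\[
  PV = V e^{-\delta t}
\]
% e.g. with \delta = 1\% per year, t = 10{,}000 years and V = 10^{50} future lives:
\[
  PV = 10^{50} \times e^{-100} \approx 10^{50} \times 3.7\times10^{-44} \approx 4\times10^{6},
\]
% i.e. comfortably below the undiscounted stake of the eight billion people alive today.
```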

 

  1. ^

    For the avoidance of doubt I am a different David T ;-)

  2. ^

    which doesn't of course mean that hinginess is untrue, but does make it less a general principle of caring about the long term and more a relatively bold and specific claim about the distribution of future outcomes.

  3. ^

    in the arguments referenced here anyway. He has also written stuff which attempts to estimate different XR base rates from those posited by Ord et al, which I find just as speculative as the longtermists'.

  4. ^

    there are of course ethical frameworks other than maximizing total utility across all species which give us reason to prefer 10^31 humans over a similarly low probability von Neumann civilization involving 10^50 aliens or a single AI utility monster (I actually prefer them, so no proposing destroying humanity as a cause area from me!), but they're different from the framework Tarsney and most longtermists use, and open the door to other arguments for weighting current humans over far future humans.

  5. ^

    We're a fairly stupid, fragile and predatory species capable of experiencing strongly negative pain and emotional valences at regular intervals over fairly short lifetimes, with competitive social dynamics, very specific survival needs and a wasteful approach to consumption, so it doesn't seem obvious or even likely that humanity and its descendants will be even close to the upper bound for converting resources to welfare...

  6. ^

    Of course, if you reject fanaticism, the adverse effects of humans not dying in a nuclear holocaust on alien utility monsters are far too remote and unlikely and frankly a bit daft to worry about. But if you accept fanaticism (and species-neutral total utility maximization), it seems as inappropriate to disregard the alien Dyson spheres as the human ones...

  7. ^

    Disregarding very low probabilities which are subjective credences applied to future scenarios we have too little understanding of to exclude (rather than frequencies inferred from actual observation of their rarity) is another means to the same end, of course.
