Will Howard🔹

Software Engineer @ Centre for Effective Altruism
1053 karma · Joined · Working (0-5 years) · Oxford, UK

Bio

I'm a software engineer on the CEA Online team, mostly working on the EA Forum. We are currently interested in working with impactful projects as contractors/consultants; please fill in this form if you think you might be a good fit.

You can contact me at will.howard@centreforeffectivealtruism.org

Thanks Vasco, I did vote for animal welfare, so on net I agree with most of your points. On some specific things:

You could donate to organisations improving instead of decreasing the lives of animals

This seems right, and is why I support chicken corporate campaigns which tend to increase welfare. Some reasons this is not quite satisfactory:

  1. It feels a bit like a "helping slaves to live happier lives" intervention rather than "freeing the slaves"
  2. I'm overall uncertain about whether animals' lives are generally net positive, rather than strongly thinking they are
  3. I'd still be worried about donations to these things generally growing the AW ecosystem as a side effect (e.g. due to fungibility of donations, training up people who then do work with more suffering-focused assumptions)

But these are just concerns and not deal breakers.

Rethink Priorities' median welfare range for shrimps of 0.031 is 31 k (= 0.031/10^-6) times their welfare range based on neurons of 10^-6. For you to get to this super low welfare range, you would have to justify putting a very low weight in all the other 11 models considered by Rethink Priorities.

I am sufficiently sceptical to put a low weight on the other 11 models (or at least withhold judgement until I've thought it through more). As I mentioned I'm writing a post I'm hoping to publish this week with at least one argument related to this.

The gist of that post will be: it's double counting to consider the 11 other models as separate lines of evidence, and similarly double counting to consider all the individual proxies (e.g. "anxiety-like behaviour" and "fear-like behaviour") as independent evidence within the models.

Many of the proxies (I claim most) collapse to the single factor of "does it behave as though it contains some kind of reinforcement learning system?". That itself may be predictive of sentience, since it is true of humans, but I consider it to be more like one factor, rather than many independent lines of evidence that are counted strongly under many different models.

Because of this (a lot of the proxies looking like side effects of some kind of reinforcement learning system), I would expect we will continue to see these proxies as we look at smaller and smaller animals, and this wouldn't be a big update. I would expect that if you look at a nematode worm for instance, it might show:

  1. "Taste-aversion behaviour": Moving away from a noxious stimulus, or learning that a particular location contains a noxious stimulus
  2. "Depression-like behaviour": Giving up/putting less energy into exploring after repeatedly failing
  3. "Anxiety-like behaviour": Being put on edge or moving more quickly if you expose it to a stimulus which has previously preceded some kind of punishment
  4. "Curiosity-like behaviour": Exploring things even when it has some clearly exploitable resource

It might not show all of these (maybe a nematode is in fact too small; I don't know much about them), but hopefully you get the point that these look like manifestations of the same underlying thing, such that observing more of them becomes weak evidence once you have seen a few.

Even if you don't accept that they are all exactly side effects of "a reinforcement learning type system" (which seems reasonable), I still believe this idea, that there are common explanatory factors for different proxies which are not necessarily sentience related, should be factored in.

(RP's model does do some non-linear weighting of proxies at various points, but not exactly accounting for this thing... hopefully my longer post will address this).
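To make the double-counting worry concrete, here's a toy calculation (all numbers are invented for illustration and are not taken from RP's model): if several proxies share one underlying explanation, multiplying their likelihood ratios as if they were independent pushes the posterior much higher than treating them as a single factor.

```python
# Toy illustration of the double-counting concern (all numbers invented).
prior_odds = 0.1 / 0.9     # prior odds of sentience, i.e. P(sentient) = 10%
lr_per_proxy = 3.0         # likelihood ratio assigned to each observed proxy
k = 6                      # number of behavioural proxies observed

# Naive: multiply likelihood ratios as if each proxy were independent evidence
naive_odds = prior_odds * lr_per_proxy ** k

# One-factor: the proxies share a common explanation (e.g. a reinforcement
# learning type system), so after the first one the rest add little evidence
one_factor_odds = prior_odds * lr_per_proxy

to_prob = lambda odds: odds / (1 + odds)
print(f"naive posterior:      {to_prob(naive_odds):.2f}")       # ~0.99
print(f"one-factor posterior: {to_prob(one_factor_odds):.2f}")  # ~0.25
```

The right treatment is presumably somewhere between these two extremes, but the gap shows why the independence assumption matters.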

On the side of neuron counts, I don't think this is particularly strong evidence either. But I see it as evidence on the side of a factor like "their brain looks structurally similar to a human's", vs the factor of "they behave somewhat similarly to a human" for which the proxies are evidence.

To me neither of these lines of evidence ("brain structural similarity" and "behavioural similarity") seems obviously deserving of more weight.

Farmed animals are neglected, so I do not think worldview diversification would be at risk due to moving 100 M$ to animal welfare

I definitely agree with this, I would only be concerned if we moved almost all funding to animal welfare.

As far as I'm aware it's a coincidence, but I'm v happy about this :)

My personal reasons favouring global health:

  1. I'm sceptical of Rethink's moral weight numbers[1], and am more convinced of something closer to anchoring on neuron counts (and even more convinced by extreme uncertainty). This puts animal charities more like 10x ahead rather than 1000 or 1 million times. I'm also sceptical of very small animals (insects) having a meaningful probability/degree of sentience.
  2. I am sceptical of suffering focused utilitarianism[2], and am worried that animal welfare interventions tend to lean strongly in favour of things that reduce the number of animals, on the assumption that their lives are net negative. Examples of this sort of mindset include this, this, and this.

    Not all of these actively claim the given animals' lives must be net negative, but I'm concerned about this being seen as obviously true and baked into the sorts of interventions that are pursued. I'm especially concerned about the idea that the question of whether animals' lives are net-negative is not relevant (see first linked comment), because the way in which it is relevant is that it favours preventing animals from coming into existence (this is more commonly supported than actively euthanising animals).

    Farmed animals are currently the majority of mammal + bird biomass, and so ending the (factory) farming of animals is concomitant with reducing the total mammal + bird population[3] by >50%, and this is not something that I see talked about as potentially negative.

    That said, if pushed I would still fairly strongly predict that farmed chickens' lives are net negative at least, which is why on net I support the pro animal welfare position.
  3. I think something like worldview diversification is essentially a reasonable idea, for reasons of risk aversion and optimising under expected future information. The second is an explore/exploit tradeoff take (which often ends up looking suspiciously similar to risk aversion 🧐).

    In the case where there is a lot of uncertainty on the relative value of different cause areas (not just in rough scale, but that things we think are positive EV could be neutral or very negative), it makes sense to hedge and put a few eggs into each basket so that you can pivot when new important information arises. It would be bad to, for instance, spend all your money euthanising all the fish on the planet and then later discover this was bad and that also there is a new much more effective anti-TB intervention.

    Of course, this more favours doing more research on everything than it does pouring a lot of exploit-oriented money into Global Health, but in practice I think some degree of trying to follow through on interventions is necessary to properly explore (plus you can throw in some other considerations like time preference/discount rates), and OpenPhil isn't spending money overall at a rate that implies reckless naive EV maximising (over-exploitation).

    Some written-down ideas in this direction: We can do better than argmax, Tyranny of the Epistemic Majority, In defense of more research and reflection.
  4. I believe something like "partiality shouldn't be a (completely) dirty word". When taken to extremes, most people accept some concessions to partiality. For instance it's generally considered not a good strategic move to pressure people into giving so much of their income that they can't live comfortably, even though for a sufficiently motivated moral actor this would likely still be net positive. Most people also would not jump at the chance to be replaced by a species that has 10% higher welfare.

    I think it's wrong to apply this logic only at the extremes, and there should be some consideration of what the market will bear when considering more middle of the road sacrifices. For instance a big factor in the cost effectiveness of lead elimination is that it can be happily picked up by more mainstream funders.

(I realise a lot of these are not super well justified, I'm just trying to get the main points across).

  1. ^

    I'm planning to publish a post this week addressing one small part of this, although it's a pretty complicated topic so I don't expect this to get that far in justifying the position

  2. ^

    Not meant in a very technical sense, just as the idea that there is probably more suffering relative to positive wellbeing, or that it's easier to prevent it. Again, this is for reasons that are beyond the scope of this post. But two factors are:
    1) I think common sense reasoning about the neutral point of experience is overly pessimistic
    2) I am sceptical of the intensity of pain and pleasure being logarithmically distributed (severe pain ~100x worse than moderate pain), and especially of this being biased in the negative direction. One reason for this is that I find the "first story" for interpreting Weber's law in this post much more intuitive, i.e. that logarithmically distributed stimuli get compressed to a more linear range of experience

  3. ^

    Weighted by biomass obviously. The question of actual moral value falls back to the moral weights issue above. A point of reference on the high-moral-weights-sceptical end of the spectrum is this table @Vasco Grilo🔸 compiled of aggregate neuron counts (although, as mentioned, I don't actually think neuron counts are likely to hold up in the long run)

I'm curating this post. Reading it was a turning point for me: I went from taking counting arguments seriously to largely rejecting them absent a strong reason to think the principle of indifference holds. I thought the reductio arguments at the start were really well chosen to make the conclusion seem obvious (at least against the strict form of the argument) without leaving room for ML-specific nitpicks.

Non-moderator nudge: Given that most of the comments here are created via voting on the banner, I'd like to discourage people from downvoting comments below zero just for being low effort. I think it's still useful to leave a quick note in this case, so people can see them when browsing the banner. Hopefully positive karma will still do the job of sorting really good ones to the top.

  1. The basic case for chickens is very strong, even under views that are sceptical of small animals having a high chance/degree of sentience, because it's so easy to affect their lives cheaply compared to humans, and their lives seem v easy to improve by a lot
  2. $100m in total is not a huge amount (equiv to $5-10m/yr, against a background of ~$200m). I think concern about scaling spending is a bit of a red herring and this could probably be usefully absorbed just by current interventions

I thought about it more, and I am now convinced that the paper is right (at least in the specific example I proposed).

The thing I didn't get at first is that, given a certain prior over P(red), and a number of iterations survived, there are "more surviving worlds" where the actual P(red) is low relative to your initial prior, and that this is exactly accounted for by the Bayes factor.

I also wrote a script that simulates the example I proposed, and am convinced that the naive Bayes approach does in fact give the best strategy in Jack's case too (I haven't proved that there isn't a counterexample, but was convinced by fiddling with the parameters around the boundary of cases where always-option-1 dominates vs always-option-2).
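A minimal sketch of the kind of simulation I mean (not the exact script; the prior, the number of pre-rounds, and the trial count here are just placeholders):

```python
import numpy as np

# Sample the true P(red) from the prior, keep only the Jacks who survive the
# pre-rounds, and compare how often the naive-posterior strategy vs the
# stick-with-the-prior strategy survives the final round.
rng = np.random.default_rng(0)

N_TRIALS = 200_000
PRE_ROUNDS = 10          # rounds played before the decision point
FIXED_RISK = 0.10        # option 2: fixed 10% chance of extinction

p_values = np.array([0.05, 0.20])   # possible values of P(red)
prior = np.array([0.5, 0.5])        # prior over those values

def decide(belief):
    """Option 1 (draw again) iff expected P(red) under `belief` beats the fixed risk."""
    return "draw" if belief @ p_values < FIXED_RISK else "fixed"

survived = {"naive": 0, "prior": 0}
reached_decision = 0

for _ in range(N_TRIALS):
    p = rng.choice(p_values, p=prior)        # nature picks the true P(red)
    if (rng.random(PRE_ROUNDS) < p).any():   # a red ball before the decision point
        continue                             # ...means this Jack never gets to choose
    reached_decision += 1

    # Naive Bayesian posterior after seeing PRE_ROUNDS greens and 0 reds
    posterior = prior * (1 - p_values) ** PRE_ROUNDS
    posterior /= posterior.sum()

    for label, belief in (("naive", posterior), ("prior", prior)):
        risk = p if decide(belief) == "draw" else FIXED_RISK
        survived[label] += rng.random() >= risk

for label, wins in survived.items():
    print(label, wins / reached_decision)
# With these numbers the naive strategy survives the final round more often.
```

With these placeholder numbers the stick-with-the-prior strategy only wins in the rarer surviving worlds where the true P(red) is high; averaged over all surviving Jacks, the naive posterior comes out ahead.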

Thanks, this has actually updated me a lot :)

I no longer endorse this; see reply below:

I don't think this does away with the problem, because for decision making purposes the fact that a random event is extinction-causing or not is still relevant (thinking of the Supervolcano vs Martians case in the paper). I didn't see this addressed in the paper. Here's a scenario that hopefully illustrates the issue:

A game is set up where a ball will be drawn from a jar. If it comes out red then "extinction" occurs, the player loses immediately. If it comes out green then "survival" occurs, and the player continues to the next round. This is repeated (with the ball replaced every time) for an unknown number of rounds with the player unable to do anything.

Eventually, the game master decides to stop (for their own unknowable reasons), and offers the player two options:

  1. Play one more round of drawing the ball from the jar and risking extinction if it comes out red
  2. Take a fixed 10% chance of extinction

If they get through this round then they win the game.

The game is played in two formats:

  1. Jack is offered the game as described above, where he can lose before getting to the decision point
  2. Jill is offered a game where rounds before the decision point don't count, she can observe the colour of the ball but doesn't risk extinction. Only on the final round does she risk extinction

Let's say they both start with a prior that P(red) is 15%, and that the actual P(red) is 20%. Should they adopt different strategies?

The answer is yes:

  1. For Jack, he will only end up at the decision point if he observes 0 red balls. Assuming a large number of rounds are played, if he naively applies Bayesian reasoning he will conclude P(red) is very close to 0 and choose option 1 (another round of picking a ball); see the toy calculation after this list. This is clearly irrational, because it will always result in option 1 being chosen regardless of the true probability and of his prior[1]. A better strategy is to stick with his prior if it is at all informative.
  2. For Jill, she will end up at the decision point regardless of whether she sees a red ball. Assuming a large number of practice rounds are played, in almost all worlds applying naive Bayesian reasoning will tell her P(red) is close to 20%, and she should pick option 2. In this case the decision is sensitive to the true probability, and she only loses out in the small proportion of worlds where she observes an unusually low number of red balls, so the naive Bayesian strategy seems rational
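To spell out the "always option 1" problem in Jack's case with toy numbers (my own, not from the paper):

```python
# Toy numbers for illustration: Jack starts with a Beta(15, 85) prior over
# P(red), i.e. mean 15%. Reaching the decision point means he has seen
# n greens and 0 reds, so his naive posterior is Beta(15, 85 + n).
for n in (10, 50, 200, 1000):
    posterior_mean = 15 / (15 + 85 + n)
    print(n, round(posterior_mean, 3))
# 10 -> 0.136, 50 -> 0.1, 200 -> 0.048, 1000 -> 0.013
# Once this drops below the fixed 10% risk, Jack picks option 1 regardless
# of the true P(red).
```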

The point is that the population of Jacks that get the opportunity to make the decision is selected to be only those that receive evidence implying a low probability, and this systematically biases the decision in a way that is predictable beforehand (such that having the information that this selection effect exists can change your optimal decision).

I think this is essentially the same objection raised by quila below, and is in the same vein as Jonas Moss's comment on Toby's post (I'm not 100% sure of this; I'm more confident that the above objection is basically right than that it's the same as these two others).

It's quite possible I'm missing something in the paper, since I didn't read it in that much detail and other people seem convinced by it. But I didn't see anything that would make a difference for this basic case of an error in decision making being caused by the anthropic shadow (and particularly I didn't see how observing a larger number of rounds makes a difference).

  1. ^

    A way to see that this is common-sense irrational is to suppose it's a coin flip instead of a ball being drawn, where it's very hard to imagine how you could physically bias a coin to 99% heads, so you would have a very strong prior against that. In this case if you saw 30 heads in a row (and you could see that it wasn't a two-headed coin) it would still seem stupid to take the risk of getting tails on the next round

The linked blog post says that starting a collaboration with other funders was one of OP's goals for this year (quote from the section on 2024 goals from another blog post):

We’re also aiming to experiment with collaborating with other funders by creating a multi-donor fund in an area that we think is particularly ripe for it. We’ll have more news to share on that later this year.

Which, from the wording and the timeline, I assume was essentially referring to the LEAF project. Is this a direction OP (perhaps inspired by this argument about PEPFAR?) increasingly wants to go in with other projects? And do you know if there are other collaborations like this in the pipeline?

Kevin Esvelt is the person who invented gene drives, and I recognise a lot of these points as things he has said. In particular, I remember that a lot of the episode of Rationally Speaking he did was about the offense-defence balance issue and his decision to publish the research (from the transcript):

Julia: Right. Was it in 2014 that you discovered the potential to use CRISPR to do better gene editing?

Kevin: It was in early 2013, but I confess we sat on it for quite some time in large part because I was concerned about the implications.
...
So what I eventually came to conclude is that it seems a lot like gene drive is unusual within the space of biotechnology.
...
I was tremendously excited at first, but then the next morning I woke up and thought, good God. In principle, an individual researcher in the lab could just do this, just decide, we're going to engineer a whole wild species now.
...
And so I spent quite some time thinking, well, what are the implications of this? And in particular, could it be misused? What if someone wanted to engineer an organism for malevolent purposes? What could we do about it?
...
So, it's slow because it takes generations to spread, it can never more than double; it's obvious, if you sequence the genome, you can't hide it. And it's easily countered, that is, CRISPR allows us to cut pretty much any DNA sequence of our choice.

And what that means is: Any given gene drive system that someone else has built… I can take that, I can add additional instructions to it, telling CRISPR to cut the original version, I can engineer my version so it doesn't cut itself. And mine will continue to spread through the wild species just as effectively as the first gene drive. But whenever mine encounters theirs, mine will cut it and replace it.
...
So you put all these together: it's slow, it's obvious, and it's easily countered. It's really hard to make an effective weapon out of something with those characteristics.

I'm not sure about 100% of the claims in this post though, e.g. I'm not sure it's right that "around half of our DNA is currently made up of these gene-drive mutations."
