I'm a software engineer on the CEA Online team, mostly working on the EA Forum. We are currently interested in working with impactful projects as contractors/consultants; please fill in this form if you think you might be a good fit.
You can contact me at will.howard@centreforeffectivealtruism.org
My personal reasons favouring global health:
(I realise a lot of these are not super well justified; I'm just trying to get the main points across.)
I'm planning to publish a post this week addressing one small part of this, although it's a pretty complicated topic, so I don't expect it to get that far in justifying the position.
Not meant in a very technical sense, just as the idea that there is probably more suffering relative to positive wellbeing, or that it's easier to prevent it. Again, this is for reasons that are beyond the scope of this post. But two factors are:
1) I think common sense reasoning about the neutral point of experience is overly pessimistic
2) I am sceptical of the intensity of pain and pleasure being logarithmically distributed (severe pain ~100x worse than moderate pain), and especially of this distribution being biased in the negative direction. One reason is that I find the "first story" for interpreting Weber's law in this post much more intuitive, i.e. that logarithmically distributed stimuli get compressed into a more linear range of experience.
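Roughly, the compression I have in mind is the standard Weber–Fechner relation (my gloss, not necessarily exactly how the linked post frames its "first story"): experienced intensity grows with the log of the stimulus,

$$E \approx k \ln\!\left(\frac{S}{S_0}\right),$$

so a stimulus 100x stronger produces an experience only about $k\ln(100) \approx 4.6k$ units stronger. Logarithmically spread stimuli land on a roughly linear scale of experience, rather than the experiences themselves spanning orders of magnitude.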
Weighted by biomass, obviously. The question of actual moral value falls back to the moral weights issue above. A point of reference on the sceptical-of-high-moral-weights end of the spectrum is this table @Vasco Grilo🔸 compiled of aggregate neuron counts (although, as mentioned, I don't actually think neuron counts are likely to hold up in the long run).
I'm curating this post. Reading it was a turning point for me, from taking counting arguments seriously to largely rejecting them unless there is a strong reason to think the principle of indifference holds. I thought the reductio arguments at the start were really well chosen to make the conclusion seem obvious (at least against the strict form of the argument) without leaving room for ML-specific nitpicks.
Non-moderator nudge: Given that most of the comments here are created via voting on the banner, I'd like to discourage people from downvoting comments below zero just for being low effort. I think it's still useful to leave a quick note in this case, so people can see them when browsing the banner. Hopefully positive karma will still do the job of sorting really good ones to the top.
I thought about it more, and I am now convinced that the paper is right (at least in the specific example I proposed).
The thing I didn't get at first is that, given a certain prior over P(red) and a number of iterations survived, there are "more surviving worlds" where the actual P(red) is low relative to your initial prior, and that this is exactly accounted for by the Bayes factor.
I also wrote a script that simulates the example I proposed, and am convinced that the naive Bayes approach does in fact give the best strategy in Jack's case too (I haven't proved that there isn't a counterexample, but was convinced by fiddling with the parameters around the boundary of cases where always-option-1 dominates vs always-option-2).
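Roughly, the kind of check the script does (a minimal sketch with illustrative numbers, not the exact game or parameters from my example):

```python
import random
from collections import Counter

# Illustrative prior only: the per-round extinction probability p = P(red)
# is either 0.15 or 0.20, with equal prior probability.
PRIOR = {0.15: 0.5, 0.20: 0.5}
N_ROUNDS = 30       # number of rounds survived
N_WORLDS = 200_000  # number of simulated "worlds"

survivors = Counter()
for _ in range(N_WORLDS):
    p = random.choices(list(PRIOR), weights=list(PRIOR.values()))[0]
    if all(random.random() > p for _ in range(N_ROUNDS)):  # drew green every round
        survivors[p] += 1

total = sum(survivors.values())
print("Share of surviving worlds by p:",
      {p: round(n / total, 3) for p, n in survivors.items()})

# Naive Bayesian posterior: prior(p) * (1 - p)^N_ROUNDS, renormalised.
posterior = {p: w * (1 - p) ** N_ROUNDS for p, w in PRIOR.items()}
z = sum(posterior.values())
print("Naive Bayes posterior:         ",
      {p: round(v / z, 3) for p, v in posterior.items()})
```

The share of surviving worlds with each value of p matches the naive Bayesian posterior, which is the sense in which the selection into "surviving worlds" is exactly accounted for by the Bayes factor.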
Thanks, this has actually updated me a lot :)
I no longer endorse this; see reply below:
I don't think this does away with the problem, because for decision-making purposes it is still relevant whether a random event is extinction-causing or not (thinking of the Supervolcano vs Martians case in the paper). I didn't see this addressed in the paper. Here's a scenario that hopefully illustrates the issue:
A game is set up where a ball will be drawn from a jar. If it comes out red then "extinction" occurs and the player loses immediately. If it comes out green then "survival" occurs and the player continues to the next round. This is repeated (with the ball replaced every time) for an unknown number of rounds, with the player unable to do anything.
Eventually, the game master decides to stop (for their own unknowable reasons), and offers the player two options:
If they get through this round then they win the game.
The game is played in two formats:
Let's say they both start with a prior that P(red) is 15%, and that the actual P(red) is 20%. Should they adopt different strategies?
The answer is yes:
The point is that the population of Jacks who get the opportunity to make the decision is selected to be only those who receive evidence implying a low probability, and this systematically biases the decision in a way that is predictable beforehand (such that knowing this selection effect exists can change your optimal decision).
I think this is essentially the same objection raised by quila below, and is in the same vein as Jonas Moss's comment on Toby's post (I'm not 100% sure of this, I'm more confident that the above objection is basically right than that it's the same as these two others).
It's quite possible I'm missing something in the paper, since I didn't read it in that much detail and other people seem convinced by it. But I didn't see anything that would make a difference for this basic case of an error in decision making being caused by the anthropic shadow (and particularly I didn't see how observing a larger number of rounds makes a difference).
A way to see that this is common-sense irrational is to suppose it's a coin flip instead of a ball being drawn. It's very hard to imagine how you could physically bias a coin to come up heads 99% of the time, so you would have a very strong prior against that. In this case, if you saw 30 heads in a row (and you could see that it wasn't a two-headed coin), it would still seem stupid to take the risk of getting tails on the next round.
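To put rough numbers on this (the $10^{-12}$ prior is purely my illustrative guess at how unlikely it is that a coin could be physically biased that far; it's not from the paper):

$$\frac{P(\text{99\%-heads coin} \mid \text{30 heads})}{P(\text{fair coin} \mid \text{30 heads})} = 10^{-12} \times \frac{0.99^{30}}{0.5^{30}} \approx 10^{-12} \times 8 \times 10^{8} \approx 10^{-3},$$

so even after 30 heads in a row you'd still put ~99.9% on the coin being fair, and the chance of tails on the next flip stays close to 50%.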
The linked blog post says that starting a collaboration with other funders was one of OP's goals for this year (the quote below is from the section on 2024 goals in another blog post):
We’re also aiming to experiment with collaborating with other funders by creating a multi-donor fund in an area that we think is particularly ripe for it. We’ll have more news to share on that later this year.
Which, from the wording and the timeline, I assume was essentially referring to the LEAF project. Is this a direction OP (perhaps inspired by this argument about PEPFAR?) increasingly wants to go in with other projects? And do you know if there are other collaborations like this in the pipeline?
Kevin Esvelt is the person who first proposed CRISPR-based gene drives, and I recognise a lot of these points as things he has said. In particular, I remember that a lot of the Rationally Speaking episode he did was about the offence-defence balance issue and his decision to publish the research (from the transcript):
Julia: Right. Was it in 2014 that you discovered the potential to use CRISPR to do better gene editing?

Kevin: It was in early 2013, but I confess we sat on it for quite some time in large part because I was concerned about the implications.
...
So what I eventually came to conclude is that it seems a lot like gene drive is unusual within the space of biotechnology.
...
I was tremendously excited at first, but then the next morning I woke up and thought, good God. In principle, an individual researcher in the lab could just do this, just decide, we're going to engineer a whole wild species now.
...
And so I spent quite some time thinking, well, what are the implications of this? And in particular, could it be misused? What if someone wanted to engineer an organism for malevolent purposes? What could we do about it?
...
So, it's slow because it takes generations to spread, it can never more than double; it's obvious, if you sequence the genome, you can't hide it. And it's easily countered, that is, CRISPR allows us to cut pretty much any DNA sequence of our choice.

And what that means is: Any given gene drive system that someone else has built… I can take that, I can add additional instructions to it, telling CRISPR to cut the original version, I can engineer my version so it doesn't cut itself. And mine will continue to spread through the wild species just as effectively as the first gene drive. But whenever mine encounters theirs, mine will cut it and replace it.
...
So you put all these together: it's slow, it's obvious, and it's easily countered. It's really hard to make an effective weapon out of something with those characteristics.
I'm not sure about 100% of the claims in this post though, e.g. I'm not sure it's right that "around half of our DNA is currently made up of these gene-drive mutations."
Thanks Vasco, I did vote for animal welfare, so on net I agree with most of your points. On some specific things:
This seems right, and is why I support chicken corporate campaigns, which tend to increase welfare. Some reasons this is not quite satisfactory:
But these are just concerns and not deal breakers.
I am sufficiently sceptical to put a low weight on the other 11 models (or at least to withhold judgement until I've thought it through more). As I mentioned, I'm writing a post, which I'm hoping to publish this week, with at least one argument related to this.
The gist of that post will be: it's double counting to consider the 11 other models as separate lines of evidence, and similarly double counting to consider all the individual proxies (e.g. "anxiety-like behaviour" and "fear-like behaviour") as independent evidence within the models.
Many of the proxies (I claim most) collapse to the single factor of "does it behave as though it contains some kind of reinforcement learning system?". This itself may be predictive of sentience, because this is true of humans, but I consider this to be more like one factor, rather than many independent lines of evidence that are counted strongly under many different models.
Because of this (a lot of the proxies looking like side effects of some kind of reinforcement learning system), I would expect we will continue to see these proxies as we look at smaller and smaller animals, and this wouldn't be a big update. I would expect that if you look at a nematode worm for instance, it might show:
It might not show all of these (maybe a nematode is in fact too small, I don't know much about them), but hopefully you get the point that these look like manifestations of the same underlying thing such that observing more of them becomes weak evidence once you have seen a few.
Even if you didn't accept that they are all strictly side effects of "a reinforcement-learning-type system" (which seems reasonable), I still think the idea that different proxies can have common explanatory factors, which are not necessarily sentience-related, should be factored in.
(RP's model does do some non-linear weighting of proxies at various points, but it doesn't exactly account for this... hopefully my longer post will address it).
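As a toy illustration of the double-counting worry (the numbers are entirely made up, and this is a sketch of the structure I have in mind rather than RP's actual model):

```python
# Toy numbers, purely illustrative -- not RP's model or weights.
prior_sentient = 0.3       # prior that the animal is sentient
p_rl_given_s = 0.95        # P(has an RL-type system | sentient)
p_rl_given_not_s = 0.4     # P(has an RL-type system | not sentient)
p_proxy_given_rl = 0.9     # P(each behavioural proxy | RL-type system)
p_proxy_given_no_rl = 0.1  # P(each behavioural proxy | no such system)
n_proxies = 3              # e.g. anxiety-like, fear-like, avoidance learning

def p_all_proxies(p_rl):
    """P(all proxies observed), with proxies conditionally independent given the RL factor."""
    return ((p_proxy_given_rl ** n_proxies) * p_rl
            + (p_proxy_given_no_rl ** n_proxies) * (1 - p_rl))

# Common-factor model: the proxies only tell you about the single RL factor.
lr_common = p_all_proxies(p_rl_given_s) / p_all_proxies(p_rl_given_not_s)

# Naive model: treat each proxy as an independent line of evidence about sentience.
p_proxy_given_s = p_proxy_given_rl * p_rl_given_s + p_proxy_given_no_rl * (1 - p_rl_given_s)
p_proxy_given_not_s = p_proxy_given_rl * p_rl_given_not_s + p_proxy_given_no_rl * (1 - p_rl_given_not_s)
lr_naive = (p_proxy_given_s / p_proxy_given_not_s) ** n_proxies

def to_posterior(lr):
    odds = (prior_sentient / (1 - prior_sentient)) * lr
    return odds / (1 + odds)

print(f"P(sentient | proxies), common-factor model: {to_posterior(lr_common):.2f}")  # ~0.50
print(f"P(sentient | proxies), naive independence:  {to_posterior(lr_naive):.2f}")   # ~0.79
```

Once the proxies share a common explanatory factor, seeing all of them is only slightly stronger evidence than seeing one, whereas treating them as independent inflates the update considerably.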
On the side of neuron counts, I don't think this is particularly strong evidence either. But I see it as evidence for a factor like "their brain looks structurally similar to a human's", vs the factor of "they behave somewhat similarly to a human", for which the proxies are evidence.
To me neither of these lines of evidence ("brain structural similarity" and "behavioural similarity") seems obviously deserving of more weight.
I definitely agree with this; I would only be concerned if we moved almost all funding to animal welfare.