finm

Researcher @ Longview Philanthropy
2622 karma · Working (0-5 years) · Oxford, UK
www.finmoorhouse.com/writing

Bio

I do research at Longview Philanthropy. Previously I was a Research Scholar at FHI and assistant to Toby Ord, and studied philosophy at Cambridge before that.

I also do a podcast about EA called Hear This Idea.

www.finmoorhouse.com/writing

www.hearthisidea.com

Posts
36

Comments
139

Thanks for the comment, Owen.

I agree with your first point and I should have mentioned it.

On your second point: I wasn't assuming that ‘solving’ the problem means solving it by a date, or before some other event (since there's no time in my model). But I agree this is often going to be the right way to think, and a case where the value of working on a problem with increasing resources can be smooth, even under certainty.

Ah thanks, good spot. You're right.

Another way to express it (avoiding a stacked fraction) is as the percentage change in resources; I'll update the post to reflect this.

Just noticed I missed the deadline — will you be accepting late entries?

Edit: I had not in fact missed the deadline

Here's a framing which I think captures some (certainly not all) of what you're saying. Imagine graphing out percentiles of your credence distribution over the values the entire future could take. We can consider the effect that extinction mitigation has on the overall distribution, and the change in expected value the mitigation brings about. In the diagrams below, the shaded area represents the difference made by extinction mitigation.

The closest thing to a ‘classic’ story in my head looks like the picture below, on which (i) the long-run future is basically bimodal, split between ruin and near-best futures, and (ii) the main effect of extinction mitigation is to make near-best futures more likely.
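
To make that concrete, here's a minimal numerical sketch (my own illustrative numbers, not from the original post): on the 'classic' story, mitigation just moves probability mass from the "ruin" outcome to the "near-best" outcome, so the expected value gained is the mass moved times the value of a near-best future.

```python
# Toy version of the 'classic' story (illustrative numbers of my own): extinction
# mitigation moves probability mass from the "ruin" outcome (value 0) to the
# "near-best" outcome (value 1).

def expected_value(dist):
    """Expected value of a distribution given as {outcome_value: probability}."""
    return sum(value * prob for value, prob in dist.items())

before = {0.0: 0.5, 1.0: 0.5}   # 50% ruin, 50% near-best
after  = {0.0: 0.4, 1.0: 0.6}   # mitigation shifts 10 points of mass upwards

print(expected_value(after) - expected_value(before))  # 0.1
```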

A rough analogy: you are a healthy and otherwise cautious 22-year-old, but you find yourself trapped on a desert island. You know the only means of survival is a perilous week-long journey on your life raft to the nearest port, but you think there is a good chance you don't survive the journey. Supposing you make the journey alive, then your distribution over your expected lifespan from this point (ignoring the possibility of natural lifespan enhancement) basically just shifts to the left as above (though with negligible weight on living <1 year from now).

A possibility you raise is that the main effect of preventing extinction is only to make more likely those worlds which are already close to zero value, as below.

A variant on this possibility is that, if you knew some option to prevent human extinction were to be taken, your new distribution would place less weight on near-zero futures, but also less weight on the best futures. So your intervention affects many percentiles of your distribution, in a way which could make the net effect unclear.

One reason might be causal: the means required to prevent extinction might themselves seal off the best futures. In a variant of the desert island example, you could imagine facing the choice between making the perilous week-long journey, or waiting it out for two months until a ship finds you. Suppose you were confident that, if you wait it out, you will be found alive, but at the cost of reducing your overall life expectancy (maybe because of long-run health effects).

The above possibilities (i) assume that your distribution over the value of the future is roughly bimodal, and (ii) ignore worse-than-zero outcomes. If we instead assume a smooth distribution, and include some possibility of worse-than-zero worlds, we can ask what effect mitigating extinction has.

Here's one possibility: the probability mass on effectively-zero-value worlds gets ‘pinched’, with weight moving both to better-than-zero worlds and to worse-than-zero worlds. Here you'd need to explain why this is a good thing to do.
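
A similarly toy sketch of the 'pinch' case (again with made-up numbers): whether the change raises expected value now depends on how the freed-up mass splits between better- and worse-than-zero worlds, and on how bad the bad worlds are.

```python
# Toy version of the 'pinch' case (made-up numbers): mass leaves the near-zero
# bucket and splits between better-than-zero and worse-than-zero worlds, so the
# sign of the change in expected value depends on the split.

def expected_value(dist):
    return sum(value * prob for value, prob in dist.items())

before = {-1.0: 0.05, 0.0: 0.60, 1.0: 0.35}
after  = {-1.0: 0.13, 0.0: 0.40, 1.0: 0.47}  # 20 points leave zero: 12 up, 8 down

print(expected_value(before))  # 0.30
print(expected_value(after))   # 0.34 here, but a worse split can flip the sign
```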

So an obvious question here is how likely it is that extinction mitigation is more like the ‘knife-edge’ scenario of a healthy person trapped in a survive-or-die predicament. I agree that the ‘classic’ picture of the value of extinction mitigation can mislead about how obvious this is for a bunch of reasons. Though (as other commenters seem to have pointed out) it's unclear how much to rely on relatively uninformed priors, versus the predicament we seem to find ourselves in when we look at the world.

I'll also add that, in the case of AI risk, I think that framing literal human extinction as the main test of whether the future will be good seems like a mistake, in particular because I think literal human extinction is much less likely than worlds where things go badly for other reasons.

Curious for thoughts, and caveat that I read this post quickly and mostly haven't read the comments.

I'm fascinated by the logistics here. I'm imagining you'll need a very flat route? And also that you won't be able to stop at all (including at lights)?? Will you be doing a big loop, or point to point?

Anyway, rooting for you!

Thanks, I think both those points make sense. On the second point about value of information: the future for animals without humans would likely still be bad (because of wild animal suffering), and a future with humans could be less bad for animals (because we alleviate both wild and farmed animal suffering). So I don't think it's necessarily true that something as abstract as ‘a clearer picture of the future’ can't be worth the price of present animal suffering, since one of the upshots of learning that picture might be to choose to live on and reduce overall animal suffering over the long run. Although of course you could just be very sceptical that the information value alone would be enough to justify another ⩾ half-century of animal suffering (and it certainly shouldn't be used as an excuse to wait around and not do things to urgently reduce that suffering). Though I don't know exactly what you're pointing at re the “defensive capabilities” of factory farming.

I also think I share your short-term (say, ⩽ 25-year) pessimism about farmed animals. But over the longer run, I think there are some reasons for hope (e.g. if alt proteins get much cheaper and better, or if humans do eventually decide to move away from animal agriculture for roughly ethical reasons, despite the track record of activism so far).

Of course, there is a question of what to do if you are much more pessimistic about animal (or nonhuman) welfare even over the long run. Even then, if “cause the end of human civilisation” were a serious option, I'd be very surprised if there weren't many other serious options available for ending factory farming without also causing the worst calamity ever.

(Don't mean to represent you as taking a stand on whether extinction would be good fwiw)

Answer by finm

I agree that, right now, we're partly in the dark about whether the future will be good if humanity survives. But if humanity survives, and continues to commit moral crimes, then there will still be humans around to notice that problem. And I expect that those humans will be better informed about (i) ways to end those moral crimes, and (ii) the chance those efforts will eventually succeed.

If future efforts to end moral crimes succeed, then of course it would be a great mistake to go extinct before that point. But even for the information value of knowing more about the prospects for humans and animals (and everything else that matters), it seems well worth staying alive.

I think it is worth appreciating the number and depth of insights that FHI can claim significant credit for. In no particular order:

Note especially how much of the literal terminology was coined on (one imagines) a whiteboard at FHI. “Existential risk” isn't a neologism, but I understand it was Nick who first suggested it be used in a principled way to point to the “loss of potential” thing. “Existential hope”, “vulnerable world”, “unilateralist's curse”, “information hazard”: all (as far as I know) trace back to an FHI publication.

It's also worth remarking on the areas of study that FHI effectively incubated, and which are now full-blown fields of research:

  • The 'Governance of AI Program' was launched in 2017 to study questions around policy and advanced AI, beyond the narrowly technical questions. That project was spun out of FHI to become the Centre for the Governance of AI. As far as I understand, it was the first serious research effort on what's now called “AI governance”.
  • From roughly 2019 onwards, the working group on biological risks seems to have been fairly instrumental in making the case for biological risk reduction as a global priority, specifically because of engineered pandemics.
  • If research on digital minds (and their implications) grows to become something resembling a 'field', then the small team and working groups on digital minds can make a claim to precedence, as well as early and more recent published work.

FHI was staggeringly influential, more than many realise.

Edit: I wrote some longer reflections on FHI here.

Answer by finm

The singer-songwriter José González has mentioned being inspired by The Precipice and, apparently, other EA-related ideas. Take the charmingly scout-mindset 'Head On':

Speak up
Stand down
Pick your battles
Look around
Reflect
Update
Pause your intuitions and deal with it
Head on

[Copied from an email exchange with Vasco, slightly embellished]

I think the probability of a flat universe is ~0 because the distribution describing our knowledge about the curvature of the universe is continuous, whereas a flat universe corresponds to a discrete curvature of 0.

Sure, if you put infinitesimal weight on a flat universe in your prior (true if your distribution is continuous over a measure of spatial curvature and you think the universe is infinite only if spatial curvature = 0), then no observation of (local) curvature is going to be enough. On your framing, I think the question is just why the distribution needs to be continuous. Consider: "the falloff of light intensity / gravity etc. is very close to being proportional to $1/r^2$, but presumably the exponent isn't exactly 2, since our distribution over $n$ for a $1/r^n$ law is continuous".
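
A toy numerical version of this point (my own sketch, with made-up numbers and a hypothetical Gaussian prior): if the prior over curvature is purely continuous, the posterior probability of "exactly flat" stays at zero however tight the near-zero measurement gets; it climbs towards 1 only if the prior already put a discrete lump of mass on exact flatness.

```python
# Toy illustration (made-up numbers): updating on a curvature measurement
# consistent with zero, under a mixture prior = a point mass on "exactly flat"
# plus a continuous (Gaussian) component over nonzero curvature.
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_flat(point_mass, prior_sigma, measured_k, measurement_sigma):
    """P(curvature is exactly 0 | measurement) under the mixture prior."""
    # Likelihood of the measurement if the universe is exactly flat:
    like_flat = gaussian_pdf(measured_k, 0.0, measurement_sigma)
    # Predictive likelihood under the continuous component (two Gaussians
    # convolve, so the variances add):
    like_curved = gaussian_pdf(
        measured_k, 0.0, math.sqrt(prior_sigma ** 2 + measurement_sigma ** 2)
    )
    numerator = point_mass * like_flat
    return numerator / (numerator + (1 - point_mass) * like_curved)

# A measurement of zero curvature with ever-tighter error bars:
for sigma in [0.1, 0.01, 0.001]:
    continuous_only = posterior_flat(0.0, 1.0, 0.0, sigma)  # stays 0.0 forever
    with_point_mass = posterior_flat(0.1, 1.0, 0.0, sigma)  # climbs towards 1
    print(sigma, continuous_only, with_point_mass)
```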

all the evidence for infinity is coming from having some weight on infinity in our prior.

'All' in the sense that you need nonzero non-infinitesimal weight on infinity in your prior, but not in the sense that your prior is the only thing influencing your credence in infinity. Presumably observations of local flatness do actually upweight hypotheses about the universe being infinite, or at least keep them open if you are open to the possibility in the first place. And I could imagine other things counting as more indirect evidence, such as how well or poorly our best physical theories fit with infinity.

[Added] I think this speaks to something interesting about a picture of theoretical science suggested by a subjective Bayesian attitude to belief-forming in general, on which we start with some prior distribution(s) over some big (continuous?) hypothesis space(s), and observations tell us how to update our priors. But you might think that's a weird way to figure out which theories to believe, because e.g. (i) the hypothesis space is indefinitely large, such that you should have infinitesimal or very small credence in any given theory; (ii) the hypothesis space is unknown in some important way, in which case you can't assign credences at all; or (iii) theorists value various kinds of simplicity or elegance which are hard to cash out in Bayesian terms in a non-arbitrary way. I don't know where I come down on this, but it's a case where I'm unusually sympathetic to such critiques (which I associate with Popper/Deutsch[1]).

[Continuing email] I do agree that "the universe is infinite in extent" (made precise) is different from "for any size, we can't rule out the universe being at least that big", and that the first claim is of a different kind. For instance, your distribution over the size of the universe could have an infinite mean while implying certainty that the universe has some finite size (e.g. if that distribution has a sufficiently heavy tail).
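
One concrete example (my own choice of distribution, picked only because it's the simplest with this property, not necessarily the one from the original email): take a density $p(x) = 1/x^2$ over sizes $x \ge 1$. It assigns probability 1 to the universe having some finite size, yet its mean diverges:

\[
\int_{1}^{\infty} \frac{1}{x^{2}}\,dx = 1,
\qquad
\mathbb{E}[X] = \int_{1}^{\infty} x \cdot \frac{1}{x^{2}}\,dx
             = \int_{1}^{\infty} \frac{dx}{x} = \infty .
\]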

That does put us in a weird spot though, where all the action seems to be in your choice of prior.

I don't know how relevant it is that the axiom of infinity is independent of the other ZFC axioms, unless you think that all true mathematical claims are made true by actual physical things in the world (JS Mill believed something like this, I think). Then you might have thought you have independent reason to believe (i) the other ZFC axioms, and that if (ii) they entailed the axiom of infinity, you'd be forced to believe in an actual physical infinity. But that has the same suspect "synthetic a priori" character as ontological arguments for God's existence, and is moot in any case because (ii) is false!

For what it's worth, as a complete outsider I'm a bit surprised by how little serious discussion there is in e.g. astrophysics and philosophy of physics around whether the universe is infinite in some way. It seems like such a big deal; indeed, an infinitely big deal!

  1. ^

    Though I don't think these views would have much constructive to say about how much credence to put on the universe being infinite, since they'd probably reject the suggestion that you can or should be trying to figure out what credence to put on it. Paging @ben_chugg since I think he could say if I'm misrepresenting the view.
