AidanGoth

Interesting – thanks for sharing. Yes, agreed on all of this

Are there any experiments offering sedatives to farmed or injured animals?

A friend mentioned to me experiments documented in Compassion, by the Pound in which farmed chickens (I think broilers?) prefer food with pain killers to food without pain killers. I thought this was super interesting because it provides more direct evidence about the subjective pain experienced by chickens than purely behavioural experiments do, via a plausible biological mechanism for detecting pain. This seems useful for identifying animals that experience pain.

Identifying some animals that experience pain seems useful. Ideally we would be able to measure pain in a way that lets us compare the effects of potentially welfare-improving interventions. It might be particularly useful to identify animals whose pain is so bad they'd rather be unconscious, suggesting their lives (at least in some moments) are worse than non-existence. I wonder if similar experiments with sedatives could provide information about whether animals prefer to be conscious or not. For example, if injured chickens consistently chose to be sedated, this would provide moderate evidence that their lives are worse than non-existence. (Conversely, a failure to prefer sedatives to normal food or pain killers would be weaker evidence against this, but still somewhat informative.)

Interesting. Thanks for sharing :)

Thanks for sharing. Fyi, I'm getting a "Page not found" error because of the "." at the end of the link. (But once I remove the full stop, it works fine.)

The next technological revolution could come this century and could last less than a decade

This is a quickly written note that I don't expect to have time to polish.

Summary

This note aims to bound reasonable priors on the date and duration of the next technological revolution, based primarily on the timings of (i) the rise of homo sapiens; (ii) the Neolithic Revolution; (iii) the Industrial Revolution. In particular, the aim is to determine how sceptical our prior should be that the next technological revolution will take place this century and will occur very quickly.

The main finding is that the historical track record is consistent with the next technological revolution taking place this century and taking just a few years. This is important because it partially undermines the claims that (i) the “most important century” hypothesis is overwhelmingly unlikely and (ii) the burden of evidence required to believe otherwise is very high. It also suggests that the historical track record doesn’t rule out a fast take-off.

I expect this note not to be particularly surprising to those familiar with existing work on the burden of proof for the most important century hypothesis. I thought this would be a fun little exercise though, and it ended up pointing in a similar direction.

Caveats:

  • This is based on very little data, so we should put much more weight on other evidence than this prior
    • I don't think this is a problem for arguing that only a modest burden of evidence is needed to think a technological revolution this century is likely
    • But these priors probably aren’t actually useful for forecasting – they should be washed out by other evidence
  • My calculations use the non-obvious assumption that the wait times between technological revolutions and the durations of technological revolutions decrease by the same factor for each revolution
    • It's reasonable to expect the wait times and durations to decrease, e.g. due to increased population and better and faster growth in technology (note that this reasoning sneaks some extra information into the prior)
    • Indeed, the wait time and duration for the Neolithic revolution are larger than those of the Industrial revolution
    • With just two past technological revolutions, we don’t have enough data to even “eye-ball” whether this assumption roughly fits the data, let alone test it statistically
    • Decreasing by the same factor each time seems like the simplest assumption to make in this case, and it's consistent with more complex but (I think) natural assumptions: that technological revolutions arrive as a Poisson process and that the population growth rate is proportional to the population level
  • For the purposes of determining the burden of proof on the most important century hypothesis, I’m roughly equating “will this be the most important century?” with “will there be a technological revolution this century?”
    • These obviously aren’t the same but I think there are reasons to think that if there is a technological revolution this century, it could be the most important century, or at least the century that we should focus on trying to influence directly (as opposed to saving resources for the future)

Timing of next technological revolution

There have been two technological revolutions since the emergence of homo sapiens (about 3,000 centuries ago): the Neolithic Revolution (started about 100 centuries ago) and the Industrial Revolution (started about 2 centuries ago).

Full calculations in this spreadsheet.

  • Homo sapiens emerged about 300,000 years ago (3,000 centuries)
  • Neolithic revolution was about 10,000 years ago (100 centuries ago and 2,900 centuries after the start of homo sapiens)
    • Started 100-120 centuries ago
    • Took about 20 centuries in a given location
    • Finished about 60 centuries ago
    • So the wait was about 2,880-2,900 centuries
  • Industrial revolution was about 200 years ago (2 centuries ago)
    • 98-118 centuries after the start of the neolithic revolution
    • 78-98 centuries after the end of the neolithic revolution in the original place
    • 58 centuries after the end of the neolithic revolution
  • So the wait was about 1.5 OOMs shorter for the second revolution than the first
  • If the wait is 1.5 OOMs shorter again, this suggests the next revolutionary technology will arrive about 3 centuries after the industrial revolution (3 is ~1.5 OOMs smaller than 100), i.e. in about 1 century
  • More precisely: the next revolution comes about $\text{wait}_2^2 / \text{wait}_1 \approx 2.7$ centuries after the Industrial revolution (see the worked calculation after this list), i.e. in about 70 years
  • So we’re almost due a revolutionary technology! (According to this simple calculation.)
    • Is this a sensible calculation? I think it's not crazy. It seems like the wait between technological revolutions should decrease each time, given that population is growing. The assumption that the wait decreases by the same factor each time is simple.
    • This assumption is also consistent with technological revolutions arriving as a Poisson point process, with time counted in human-years, and with population growth rate proportional to population level (more later).
  • Shouldn’t put much weight on this relative to other evidence, but history doesn’t rule out the next technological revolution coming very soon
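A minimal worked version of this extrapolation, using my reading of the mid-range figures above (the spreadsheet may use slightly different numbers): take the first wait as roughly 2,890 centuries and the second as roughly 88 centuries. Then

$$\text{factor} = \frac{2890}{88} \approx 33 \ (\approx 1.5 \text{ OOMs}), \qquad \text{next wait} \approx \frac{88}{33} = \frac{88^2}{2890} \approx 2.7 \text{ centuries}.$$

Counting from the start of the Industrial revolution about 2 centuries ago, that puts the next revolution roughly 0.7 centuries, i.e. about 70 years, from now.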

Duration of the next technological revolution

Full calculations in the spreadsheet.

  • Neolithic revolution took about 2000 years in a given location (not researched thoroughly)
    • Most sources say it took several thousand years, but this includes the time it took for agricultural technology to diffuse from the initial regions in which it arose or to be reinvented in other regions
  • Industrial revolution took about 80 years in Britain
  • I think the lengths of time for the initial revolutions (or later but independent revolutions) in confined regions are the relevant comparison, not how long it took for revolutionary technology to diffuse, since we care about when the next technological revolution happens somewhere at all
  • Decrease by about 1.4 OOMs
  • Suggests the next technological revolution will take about $80^2/2000 \approx 3$ years (see the worked calculation after this list)
  • The 2000 year number is very rough but if it took only 200 years, then we’d expect the next technological revolution to take about 30 years – not a huge difference
  • Same (or even stronger) caveats as above apply, history doesn’t rule out the next technological revolution taking just a few years
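The analogous calculation for durations (again my reconstruction of the numbers above): with the Neolithic revolution taking about 20 centuries in a given location and the Industrial revolution about 0.8 centuries,

$$\text{factor} = \frac{20}{0.8} = 25 \ (\approx 1.4 \text{ OOMs}), \qquad \text{next duration} \approx \frac{0.8}{25} \text{ centuries} \approx 3 \text{ years}.$$

If the Neolithic revolution instead took 2 centuries, the factor would be 2.5 and the next duration about $0.8/2.5 = 0.32$ centuries, i.e. roughly 30 years, matching the sensitivity check above.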

Poisson process on the number of human-years

Suppose technological revolutions arise as a Poisson point process, with time measured in human-years, so that it takes the same number of human-years for each technological revolution (on average). This seems like a reasonable way to form a prior in this case. If it takes N human-years for a technological revolution on average, and the number of human-years has been growing exponentially, then the time between each multiple of N should get shorter. But population hasn't grown at a constant exponential rate; it's more like the growth rate being proportional to the population level, i.e. hyperbolic growth (until very recently, in macrohistorical terms).

Numerical simulations suggest that when the population growth rate is proportional to the population level, the wait between successive blocks of N human-years gets shorter by (roughly) the same factor each time.
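A minimal sketch of that kind of simulation (my own, not the author's spreadsheet; the constants are illustrative and not calibrated to history):

```python
# Simulate hyperbolic population growth (growth rate proportional to the
# population level, dP/dt = c * P^2) and record the times at which cumulative
# human-years cross each multiple of N ("revolutions").
import numpy as np

c, P0, N = 0.01, 1.0, 50.0   # illustrative constants
dt = 1e-3
t, P, human_years = 0.0, P0, 0.0
crossing_times, next_threshold = [], N

while len(crossing_times) < 7:
    human_years += P * dt          # accumulate human-years lived this step
    if human_years >= next_threshold:
        crossing_times.append(t)   # another N human-years have elapsed
        next_threshold += N
    P += c * P**2 * dt             # growth rate proportional to population
    t += dt

waits = np.diff([0.0] + crossing_times)
print("waits between revolutions:", np.round(waits, 1))
print("ratios of successive waits:", np.round(waits[1:] / waits[:-1], 3))
```

In this toy setup the ratios come out roughly constant (around $e^{-cN} \approx 0.61$ with these constants), which is the pattern assumed in the extrapolations above.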

I'm happy to see more discussion of bargaining approaches to moral uncertainty, thanks for writing this! Apologies, this comment is longer than intended -- I hope you don't mind me echoing your Pascalian slogan!

My biggest worry is with the assumption that resources are distributed among moral theories in proportion to the agent's credences in those theories. It seems to me that this is an outcome that should be derived from a framework for decision-making under moral uncertainty, not something to be assumed at the outset. Clearly, credences should play a role in how we make decisions under moral uncertainty, but it's not obvious that this is the right role for them to play. In Greaves and Cotton-Barratt (2019), this isn't the role that credences play. Rather, credences feed into the computation of the asymmetric Nash Bargaining Solution (NBS), as in their equation (1). Roughly, credences can be thought of as corresponding to the relative bargaining power of the various moral theories. There's no guarantee that the resulting bargaining solution allocates resources to each theory in proportion to the agent's credences, and this formal bargaining approach seems much more principled than allocating resources in proportion to credences, so I prefer the former. I doubt your conclusions depend significantly on this, but I think it's important to be aware that what you described isn't the same as the bargaining procedure in Greaves and Cotton-Barratt (2019).
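For readers who haven't seen it, the asymmetric NBS with credences as bargaining weights has roughly this form (my paraphrase of the setup, not a quotation of their equation (1)):

$$x^* \in \arg\max_{x \in F} \prod_i \big(u_i(x) - d_i\big)^{c_i},$$

where $F$ is the set of feasible options, $u_i$ is theory $i$'s utility function, $d_i$ is theory $i$'s utility at the disagreement point, and $c_i$ is the agent's credence in theory $i$.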

I like how you go through how a few different scenarios might play out in Section 2, but while I think intuition can be a useful guide, I think it's hard to know how things would play out without taking a more formal approach. My guess is that if you formalised these decisions and computed the NBS, things would often but not always work out as you hypothesise (e.g. divisible resources with unrelated priorities won't always lead to worldview diversification; there will be cases in which all resources go to one theory's preferred option).

I'm a little uncomfortable with the distinction between conflicting priorities and unrelated priorities because unrelated priorities are conflicting once you account for opportunity costs: any dollars spent on theory A's priority can't be spent on theory B's priority (so long as these priorities are different). However, I think you're pointing at something real here and that cases you describe as "conflicting priorities" will tend to lead to spending resources on compromise options rather than splitting the pot, and that the reverse is true for cases you describe as "unrelated priorities".

The value of moral information consideration is interesting. It should be possible to provide a coherent account of the value of moral information for IB because the definition of the value of information doesn't really depend on the details of how the agent makes a decision. Acquiring moral information can be seen as an act/option just like any other: all the moral theories will have views about how good it would be, and IB can determine whether the agent should choose that option over other options. In particular, if the agent is indifferent (as determined by IB) between 1. acquiring some moral information and paying $x and 2. not acquiring the information and paying nothing, then we can say that the value of the information to the agent is $x. Actually computing this will be hard because it will depend on all future decisions (as changing credences will change future bargaining power), but it's possible in principle and I don't think it's substantially different from, or harder than, the value of moral information on MEC. However, I worry that IB might give quite an implausible account of the value of moral information, for some of the reasons you mention. Moral information that increases the agent's credence in theory A will give theory A greater bargaining power in future decisions, so theory A will value such information. But if that information lowers theory B's bargaining power, then theory B will be opposed to obtaining it. It seems likely that the agent will problematically undervalue moral information in some cases. I haven't thought through the details of this though.
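To make that indifference criterion explicit (my formalisation, not something from the paper): writing $\sim_{IB}$ for indifference according to the bargaining procedure, the value of some moral information $I$ is the $x$ such that

$$(\text{acquire } I,\ -\$x) \sim_{IB} (\text{don't acquire } I,\ \$0),$$

where both options are evaluated taking all future decisions into account (including the effect of $I$ on future bargaining power).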

I didn't find the small vs grand worlds objection in Greaves and Cotton-Barratt (2019) very compelling and agree with your response. It seems to me to be analogous to the objections to utilitarianism based on the infeasibility of computing utilities in practice (which I don't find very compelling).

On regress: perhaps I'm misunderstanding you, but this seems to me to be a universal problem in that we will always be uncertain about how we should make decisions under moral uncertainty. We might have credences in MFT, MEC and IB, but which of these (if any) should we use to decide what to do under uncertainty about what to do under moral uncertainty (and so on...)?

I think you have a typo in the table comparing MFT, MEC and IB: MEC shouldn't be non-fanatical. Relatedly, my reading of Greaves and Cotton-Barratt (2019) is that IB is more robust to fanaticism but still recommends fanatical choices sometimes (and whether it does so in practice is an open question), so a tick here might be overly generous (though I agree that IB has an advantage over MEC here, to the extent that avoiding fanaticism is desirable).

One concern with IB that you don't mention is that the NBS depends on a "disagreement point" but it's not clear what this disagreement point should be. The disagreement point represents the utilities obtained if the bargainers fail to reach an agreement. I think the random dictator disagreement point in Greaves and Cotton-Barratt (2019) seems quite natural for many decision problems, but I think this dependence on a disagreement point counts against bargaining approaches.

Another use of "consequentialism" in decision theory is in dynamic choice settings (i.e. where an agent makes several choices over time, and future choices and payoffs typically depend on past choices). Consequentialist decision rules depend only on future choices and payoffs; decision rules that violate consequentialism in this sense sometimes depend on past choices.

An example: suppose an agent is deciding whether to take a pleasurable but addictive drug. If the agent takes the drug, they then decide whether to stop taking it or to continue taking it. Suppose the agent initially judges taking the drug once to have the highest payoff, continuing to take the drug to have the lowest payoff and never taking it to be in between. Suppose further though, that if the agent takes the drug, they will immediately become addicted and will then prefer to carry on taking it to stopping. One decision rule in the dynamic choice literature is called "resolute choice" and requires the agent to take the drug once and then stop, because this brings about the highest payoff, as judged initially. This is a non-consequentialist decision rule because at the second choice point (carry on taking the drug or stop), the agent follows their previously made plan and stops, even though it goes against their current preference to carry on.
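A toy formalisation of this example (my own illustration, not from the post or from Yudkowsky), contrasting a resolute chooser with a sophisticated, backward-inducting chooser, which behaves consequentialistically in this sense:

```python
# Initial preferences over complete histories (higher = better, ex ante):
initial_value = {"never": 2, "take_then_stop": 3, "take_then_continue": 1}

# Preferences at the second choice point *if the drug was taken* (addiction):
addicted_value = {"take_then_stop": 0, "take_then_continue": 1}

def resolute_plan():
    # Commit to the plan that is best by the initial ranking and follow it.
    return max(initial_value, key=initial_value.get)

def sophisticated_plan():
    # Predict the second-stage choice using the preferences that will actually
    # hold at that point, then choose the first move given that prediction.
    second_choice = max(addicted_value, key=addicted_value.get)
    value_of_taking = initial_value[second_choice]
    return second_choice if value_of_taking > initial_value["never"] else "never"

print("resolute chooser:", resolute_plan())            # take_then_stop
print("sophisticated chooser:", sophisticated_plan())  # never
```

The resolute chooser commits to "take once, then stop"; the sophisticated chooser predicts that their addicted future self would carry on, and so never takes the drug in the first place.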

I don't know how, if at all, this relates to what Yudkowsky means by "consequentialism", but this seems sufficiently different from what you described as "decision consequentialism" that I thought it was worth adding, in case it's a further source of confusion.

After a little more thought, I think it might be helpful to think about/look into the relationship between the mean and median of heavy-tailed distributions and in particular, whether the mean is ever exponential in the median.

I think we probably have a better sense of the relationship between hours worked and the median than between hours worked and the mean because the median describes "typical" outcomes and means are super unintuitive and hard to reason about for very heavy tailed distributions. In particular, arguments like those given by Hauke seem more applicable to the median than the mean. This suggests that the median is roughly logarithmic in hours worked. It would then require the mean to be exponential in the median for the mean to be linear in hours worked, in which case, working 20% less would lose exactly 20% of the expected impact (more if the mean is more convex than exponential in the median, less if it's less than exponential).

In the simple example above, the mean is linear in the median, so the mean is logarithmic in hours worked if the median is. But the lognormal distribution might not be heavy-tailed enough, so I wouldn't put too much weight on this.

Looking at the Pareto distribution, it seems to be the case that the mean is sometimes more than exponential in the median -- it's less convex for small values and more convex for high values. You'd have to do a bit of work to figure out the scale and whether it's more than exponential over the relevant range, but it could turn out that expected impact is convex in hours worked in this model, which would suggest working 20% less would lose more than 20% of the value. I'm not sure how well the Pareto distribution describes the median though (it seems good for heavy tails but bad for the whole distribution of things), so it might be better to look at something like a lognormal body with a Pareto tail. But maybe that's getting too complicated to be worth it. This seems like an interesting and important question though, so I might spend more time thinking about it!
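A quick sketch of the comparison (my own, with arbitrary parameter values), holding the other parameter of each distribution fixed:

```python
# Compare how the mean relates to the median for two heavy-tailed families.
import numpy as np

sigma = 1.0
for mu in [0.0, 0.5, 1.0, 1.5]:
    median = np.exp(mu)
    mean = np.exp(mu + sigma**2 / 2)
    print(f"lognormal: median={median:6.2f}  mean={mean:7.2f}  ratio={mean/median:.2f}")
    # ratio is constant: the mean is linear in the median when sigma is fixed

x_m = 1.0
for alpha in [3.0, 2.0, 1.5, 1.2, 1.05]:
    median = x_m * 2 ** (1 / alpha)
    mean = alpha * x_m / (alpha - 1)   # finite only for alpha > 1
    print(f"pareto:    median={median:6.2f}  mean={mean:7.2f}")
    # the median stays below 2 * x_m while the mean blows up as alpha -> 1
```

For the lognormal with fixed sigma, the mean is just a constant multiple of the median; for the Pareto with fixed scale, the median is bounded while the mean diverges as the shape parameter approaches 1, so near that end the mean grows faster than any exponential in the median.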

I don't have a good object-level answer, but maybe thinking through this model can be helpful.

Big picture description: We think that a person's impact is heavy tailed. Suppose that the distribution of a person's impact is determined by some concave function of hours worked. We want working more hours to increase the mean of the impact distribution, and probably also the variance, given that this distribution is heavy-tailed. But we plausibly want additional hours to affect the distribution less and less, if we're prioritising perfectly (as Lukas suggests) -- that's what concavity gives us. If talent and luck play important roles in determining impact, then this function will be (close to) flat, so that additional hours don't change the distribution much. If talent is important, then the distributions for different people might be quite different and signals about how talented a person is are informative about what their distribution looks like.

This defines a person's expected impact in terms of hours worked. We can then see whether this function is linear or concave or convex etc., which will answer your question.

More concretely: suppose that a person's impact is lognormally distributed with parameters $\mu$ and $\sigma$, that $\mu$ is an increasing, concave function of hours worked, $h$, and that $\sigma$ is fixed. I chose this formulation because it's simple but still enlightening, and has some important features: expected impact, $e^{\mu(h) + \sigma^2/2}$, is increasing in hours worked and the variance is also increasing in hours worked. I'm leaving $\sigma$ fixed for simplicity. Suppose also that $\mu(h) = \log h$, which then implies that expected impact is $e^{\sigma^2/2} h$, i.e. expected impact is linear in hours worked.

Obviously, this probably doesn't describe reality very well, but we can ask what changes if we change the underlying assumptions. For example, it seems pretty plausible that impact is heavier-tailed than lognormally distributed, which suggests, holding everything else equal, that expected impact is convex in hours worked, so you lose more than 20% impact by working 20% less.

Getting a good sense of what the function of hours worked (here $\mu(h)$) should look like is super hard in the abstract, but seems more doable in concrete cases like the one described above. Here, the median impact is $e^{\mu(h)} = h$, if $\mu(h) = \log h$, so the median impact is linear in hours worked. This doesn't seem super plausible to me. I'd guess that the median impact is concave in hours worked, which would require $\mu$ to be more concave than $\log$, which suggests, holding everything else equal, that expected impact is concave in hours worked. I'm not sure how this changes if you consider other distributions though -- it's a peculiarity of the lognormal distribution that the mean is linear in the median, if $\sigma$ is held fixed, so things could look quite different with other distributions (or if we tried to determine $\mu$ and $\sigma$ from $h$ jointly).
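A tiny numerical check of this model (my own sketch, with an arbitrary value of $\sigma$): with $\mu(h) = \log h$ and $\sigma$ fixed, both the median and the mean of impact come out linear in hours worked.

```python
# Lognormal impact model: mu(h) = log(h), sigma fixed.
import numpy as np

sigma = 1.5  # arbitrary illustrative value
for hours in [20, 30, 40]:
    mu = np.log(hours)
    median = np.exp(mu)                # = hours
    mean = np.exp(mu + sigma**2 / 2)   # = hours * exp(sigma^2 / 2)
    print(f"hours={hours}: median impact={median:.1f}, mean impact={mean:.1f}")
```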

Median impact being linear in hours worked seems unlikely globally -- like, if I halved my hours, I think I'd more than halve my median impact; if I doubled them, I don't think I would double my median impact (setting burnout concerns aside). But it seems more plausible that median impact could be close to linear over the margins you're talking about. So maybe this suggests that the model isn't too bad for median impact, and that if impact is heavier-tailed than lognormal, then expected impact is indeed convex in hours worked.

This doesn't directly answer your question very well but I think you could get a pretty good intuition for things by playing around with a few models like this.

Sorry for the slow reply. I don't have a link to any examples I'm afraid but I just mean something like this:

Prior that we should put weights on arguments and considerations: 60%

Pros:

  • Clarifies the writer's perspective on each of the considerations (65%)
  • Allows for better discussion for reasons x, y, z... (75%)

Cons:

  • Takes extra time (70%)

This is just an example I wrote down quickly, not actual views. But the idea is to state explicit probabilities so that we can see how they change with each consideration.

To see how you can find the Bayes' factors, note that if $p$ is our prior probability that we should give weights, $1-p$ is our prior that we shouldn't, and $p_1$ and $1-p_1$ are the posteriors after argument 1, then the Bayes' factor is $\frac{p_1/(1-p_1)}{p/(1-p)} = \frac{0.65/0.35}{0.6/0.4} \approx 1.24$.

Similarly, the Bayes' factor for the second pro is $\frac{0.75/0.25}{0.65/0.35} \approx 1.62$.
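A quick way to compute these (a minimal sketch using the example numbers above):

```python
# Bayes factor = posterior odds / prior odds, using the example probabilities
# above (60% prior, 65% after the first pro, 75% after the second).
def bayes_factor(prior, posterior):
    return (posterior / (1 - posterior)) / (prior / (1 - prior))

print(bayes_factor(0.60, 0.65))  # first pro:  ~1.24
print(bayes_factor(0.65, 0.75))  # second pro: ~1.62
```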
