This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. 
  1. This is a Forum post that I wouldn't have posted without the nudge of Draft Amnesty Week. Fire away! (But be nice, as usual)

TLDR: Estimate the probability (X%) that transformative AI will NOT have occurred by the time you plan to retire. Save X% as much for retirement as you would in normal worlds.

Many people have suggested that if you have short transformative AI (TAI) timelines, you shouldn't be saving for retirement at all. Most people are not even thinking about the impact of AI timelines. And some people have even suggested saving more than normal if you have relatively short TAI timelines. I think the truth is somewhere in between. I started building a more complicated model, but in the spirit of Draft Amnesty Week, and because simple heuristics are more likely to get used, I thought I would propose a simple heuristic.

The more complicated model starts from the observation that in normal worlds, you want some confidence that you won't run out of money in retirement, and you might want that same confidence in potential TAI scenarios. In some TAI outcomes savings would not matter: TAI might kill everyone, or everyone might become extremely wealthy. One person pointed out that wealth at the singularity might allow you to buy galaxies and do great things with them, but at least if you're altruistic, the impact of reducing existential risk is many orders of magnitude greater. There could also be TAI scenarios where we don't get UBI, or where you aren't satisfied with the amount of UBI. So it does make sense to have some savings, especially because those savings are likely to grow rapidly during TAI, while costs of living are likely to fall (and wages would eventually fall as all jobs are automated).

My simple heuristic: estimate the probability (X%) that TAI will NOT have occurred by the time you plan to retire, and save X% as much for retirement as you would in normal worlds.
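The heuristic above is just a probability-weighted scaling. As a minimal sketch, here it is in Python; the function name and the example numbers are hypothetical illustrations, not figures from the post:

```python
def tai_adjusted_savings(normal_savings: float, p_no_tai: float) -> float:
    """Scale normal-world retirement savings by the probability (X%)
    that TAI has NOT occurred by your planned retirement date."""
    if not 0.0 <= p_no_tai <= 1.0:
        raise ValueError("p_no_tai must be a probability between 0 and 1")
    return normal_savings * p_no_tai

# Hypothetical example: you would normally save $10,000/year for retirement
# and estimate a 60% chance that TAI has not arrived by the time you retire.
print(tai_adjusted_savings(10_000, 0.60))  # -> 6000.0, i.e. save $6,000/year
```

Note this is the plain heuristic only; as discussed below, the more complicated model suggests saving somewhat more than this in order to keep the same confidence of not running out of money.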

My initial calculations indicate that, to have the same confidence of not running out of money, you might want to save somewhat more than this heuristic indicates; but most EAs are relatively young and can reevaluate as they learn more. The major exception is if you think your job is going to be automated soon. Then savings would be more valuable, though I suspect that even more valuable than savings would be working on flexibility and the ability to pivot quickly. Another exception would be strong employer matching programs, which could make the optimal amount of saving higher.

What should you do with the money that you are not putting into retirement? Some have suggested doing things on your bucket list, because you might not have an opportunity to later. However, if you think the outcome of TAI is likely to be positive, then it will generally become much easier and cheaper to do things on your bucket list after TAI. So, as EAs, I would recommend donating much of that extra money to the causes you care about while you can still make a difference, whether that is reducing existential risk, increasing animal welfare, reducing global poverty, or something else. If you have short-to-medium timelines, you might also want to adjust your giving within cause areas.
 

Comments

This will be controversial, but I think another consideration for this question is interrogating why we consider our future selves deserving of current sacrifice. If you accept the reductionist account of selfhood as mere psychological continuity, not a constant, then the self-interested case for actions that benefit your future self becomes less tenable. Instead, saving for retirement becomes more and more like saving for someone else's retirement, the greater the gap.

I think the instinctive, common sense case for retirement savings is something like "prudence", which isn't a moral concept really. It's more "you'll regret it if you don't". So, sure, maybe save if you are retiring in 5-10 years. But beyond that? No.

Just something to consider in addition to TAI.

Love this, Dylan, and completely agree.

Hi David,

You may want to check out Joel Becker's financial life model. I have not played with it, but it seems quite comprehensive.

And some people have even suggested saving more than normal if you have relatively short TAI timelines


Short-timeline AGI is not priced into the stock market. AI labs, big tech, and GPU and data companies are good bets if you expect near-term AGI. If this is your belief, you can expect thousandfold returns on investment during the singularity, which is far superior to holding cash. While mitigating x-risk via donations might be the maximally altruistic thing to do, it wouldn't hurt to leave yourself in a good position in case of a post-AGI world where money still matters.

 

I would say the major exception is if you think your job is going to be automated soon. Then savings would be more valuable, though I suspect that even more valuable than savings would be working on being flexible and being able to pivot quickly.

I am a CS undergrad. As soon as I have money to save, I am going to hedge against job automation by investing in AGI stock. This will offer better financial protection compared to holding cash.

What should you do with the money that you are not putting into retirement? Some have suggested doing things on your bucket list, because you might not have an opportunity to later.

I personally only care about being happy day to day. I can't be sad about not checking off my bucket list if I'm dead. I assume I'm different from most, however.

 

I would recommend donating much of that extra money to the causes you care about while you can still make a difference, whether that is reducing existential risk, increasing animal welfare, reducing global poverty

Given short timelines, certain charities become less appealing. For example, animal welfare campaigns that convince companies to make commitments by a date many years from now may not have any impact before TAI. Existential risk probably has the highest expected utility, with the right animal welfare charities in second place.

 

Given the vibes of this post, I would highly recommend investing in AGI stock for mostly non-EA reasons. It seems like it would give you the safety net to then donate to EA causes.

Thanks for the thoughtful response!

As soon as I have money to save, I am going to hedge against job automation by investing in AGI stock. This will offer better financial protection compared to holding cash.

(Not investment advice.) That sounds reasonable, though some diversification may be prudent. There was an interesting discussion on LessWrong here.
