MichaelDickens

5178 karma · Joined

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website (https://mdickens.me/); most of its content gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences
1

Quantitative Models for Cause Selection

Comments
754

Vasco's corpus of cost-effectiveness estimates

Are you talking about this post? It looks like those cost-effectiveness estimates were written by Ambitious Impact, so I'm not sure whether there are other estimates written by Vasco.

I do think there's a concern with popular movements that the movement will move in a direction you didn't want, but empirically this has already happened with "behind closed doors" lobbying, so I don't think a popular movement can do worse.

There's also an argument that a popular movement would be too anti-AI and end up excessively delaying a post-AGI utopia, but in my post I discussed why I don't think that's a sufficiently big concern.

(I agree with you; I'm just anticipating some likely counter-arguments.)

Quick thoughts on investing for transformative AI (TAI)

Some EAs/AI safety folks invest in securities that they expect to go up if TAI happens. I rarely see discussion of the future scenarios where it makes sense to invest for TAI, so I want to do that.

My thoughts aren't very good, but I've been sitting on a draft for three years hoping to develop better ones, and that hasn't happened, so I'm just going to publish what I have. (If I wait another three years, we might have AGI already!)

When does investing for TAI work?

Scenarios where investing doesn't work:

  1. Takeoff happens faster than markets can react, or takeoff happens slowly but is never correctly priced in.
  2. Investment returns can't be spent fast enough to prevent extinction.
  3. TAI creates post-scarcity utopia where money is irrelevant.
  4. It turns out TAI was already correctly priced in.

Scenarios where investing works:

  5. Slow takeoff; the market correctly anticipates TAI after we do but before it actually happens, and there's a long enough time gap that we can productively spend the earnings on AI safety.
  6. TAI is generally good, but money still has value and there are still a lot of problems in the world that can be fixed with money.

(Money seems much more valuable in scenario #5 than #6.)

What is the probability that we end up in a world where investing for TAI turns out to work? I don't think it's all that high (maybe 25%, although I haven't thought seriously about this).

You also need to be correct about your investing thesis, which is hard. Markets are famously hard to beat.

Possible investment strategies

  1. Hardware makers (e.g. NVIDIA)? Anecdotally this seems to be the most popular thesis. It's the most straightforward idea, but I am suspicious that a lot of EA support for investing in AI looks basically indistinguishable from typical hype-chasing retail investor behavior. NVIDIA already has a P/E of 56, and there is a 3x levered long NVIDIA ETP. That is not the sort of thing you see when an industry is overlooked. That's not to say NVIDIA is definitely a bad investment; it could be even more valuable than the market already thinks. I'm just wary.
  2. AI companies? This doesn't seem to be a popular strategy; the argument against is that it's a crowded space with a lot of competition, which will drive margins down. (Whereas NVIDIA has a ~monopoly on AI chips.) Plus, I am concerned that giving more money to AI companies will accelerate AI development.
  3. Energy companies? It looks like AI will consume quite a lot of energy, but it's not clear that AI will make a noticeable dent in global energy consumption. This is probably the sort of thing you could make reasonable projections for.
  4. Out-of-the-money call options on a broad index (e.g. S&P 500 or NASDAQ)? This strategy avoids making a bet about which particular companies will do well, just that something will do much better than the market anticipates. But I'd also expect that unusually high market returns won't start showing up until TAI is close (even in a slow-takeoff world), so you have less time to use the extra returns to prevent AI-driven extinction.
  5. Commodities? The idea is that anything complicated will become much easier to produce thanks to AI, but commodities won't be much easier to get, so their prices will go up a lot. This is an interesting idea that I heard recently; I have no idea if it's correct.
  6. Momentum funds (e.g. VFMO or QMOM)? The general theory of momentum investing is that the market under-reacts to slow news. The pro of this strategy is that it should work no matter which stocks/industries benefit from AI. The con is that it's slower: you don't buy into a stock until it's already started going up. (I own both VFMO and QMOM (mostly QMOM), a bit because of AI but mainly because I think momentum is a good idea in general.)

The way they're usually done, awards counteract the negative:positive feedback ratio for a tiny group of people. I think it would be better to give positive feedback to a much larger group of people, but I don't have any good ideas about how to do that. Maybe just give a lot of awards?

I think that such an obviously fraught and tense issue deserves more thought and care than a quick BOTEC.

I am opposed to adding more barriers to doing BOTECs; they're already difficult and rare enough as it is. I appreciate that OP did a BOTEC.

I spent some time looking into this since it was not obvious to me how to buy from Perfect Day. It looks like the only retail partner that sells their whey protein powder is Myprotein; most retailers sell things like ice cream instead.

But these should be matched by looking for cases where something good happened because people tried to accumulate power/influence within a system.

I think this is a significant percent of all good things that have ever happened.

I think you are right about this, you've changed my mind (toward greater uncertainty).

My gut feeling is that [...] the biggest difference between good outcomes and bad outcomes is how much work the big AI labs put into alignment during the middle of the intelligence explosion when progress moves fastest.

This seems to depend on a conjunction of several strong assumptions: (1) AI alignment is basically easy; (2) there will be a slow takeoff; (3) the people running AI companies are open to persuasion, and "make AI safety seem cool" is the best kind of persuasion.

But then again I don't think pause protests are going to work, I'm just trying to pick whichever bad plan seems the least bad.

A time cost of 0.0417 $/d for 7.5 s/d and 20 $/h.

Nitpick: I just timed myself taking creatine and it took me 42 seconds.

(My process consists of: take creatine and glass out of cabinet; scoop creatine into glass; pour tap water into glass; drink glass; put creatine and glass back into cabinet.)

Agreed that creatine passes a cost-benefit analysis.
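The time-cost figure above is just seconds per day converted to hours, times an hourly rate. A quick sketch (using the $20/h rate from the quoted post, plus my own 42-second timing) reproduces both numbers:

```python
def daily_time_cost(seconds_per_day: float, dollars_per_hour: float) -> float:
    """Dollar value of a daily time expenditure: convert seconds to hours,
    then multiply by the hourly rate."""
    return seconds_per_day / 3600 * dollars_per_hour

# Quoted figure: 7.5 s/d at $20/h
print(round(daily_time_cost(7.5, 20), 4))  # 0.0417 $/d

# My measured timing: 42 s/d at the same rate
print(round(daily_time_cost(42, 20), 2))   # 0.23 $/d
```

Even at the higher 42-second estimate, the time cost is well under a dollar a day, so the cost-benefit conclusion doesn't change.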

I think this is the best explanation I've seen; it sounds likely to be correct.
