Sanjay

I was worried that this whole post might omit mission hedging and impact investing:

(a) an investor may wish to invest in equities for mission hedging reasons (e.g. scenarios where markets go up may be correlated with scenarios where more AI safety work is needed, or you might invest heavily in certain types of biotech firms, since their success might be correlated with pandemic risk work being needed)

(b) an investor can have impact on the entities they have a stake in through stewardship/engagement (sometimes referred to as investor activism). Roughly speaking, this involves having conversations with the firm's management to influence their behaviour for the better, and using your voting power is part of this.

Fortunately, you have mentioned both of these points, albeit only briefly towards the end. Thank you for incorporating them.

By your own logic, mission hedging seems to suggest you should hold more equities than you claim.

Your concerns about mission hedging were that you weren't sure whether future (e.g.) AI scenarios would lead to markets going up, or whether there would be some sort of "mayhem" prior to there being an existential catastrophe.

However, your conclusion from this seems to be that we should hold either more equities or fewer, and you're not sure which, so it's unclear how to update.

I don't think this is the right conclusion.

For those in the community who believe that certain risks (e.g. the risks from AI or GCBRs) are neglected, it would make sense that those risks are not adequately priced into the market. Even if you don't know whether this means more downside volatility or more upside growth, it means you are, in the jargon, "long volatility", and you should do the following (a rough payoff sketch follows the list):

  • hold more equities (so you do well in the scenario where equities go up)
  • hold put options (so you outperform in the scenario where equities plummet because of "mayhem", because the put options provide you with protection)
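To make the "long volatility" point concrete, here is a minimal sketch in Python. The weights, strike and premium are made-up numbers for illustration, not anything from the post: equities plus put options do relatively well in both the rally and the "mayhem" scenarios, and only give up the option premium in the boring middle.

```python
# Illustrative only: made-up numbers, just to show why being "long volatility"
# (equities plus protective puts) pays off in both tails.

def long_vol_payoff(market_return, equity_weight=0.8, put_strike=0.85, put_cost=0.02):
    """Value of $1 split between equities and cash, plus a put on the equity slice.

    market_return: e.g. +0.30 for a 30% rally, -0.50 for a crash.
    put_strike:    the put tops the equity slice back up to this fraction of its start value.
    put_cost:      premium paid for the protection, as a fraction of the portfolio.
    """
    equity_value = equity_weight * (1 + market_return)
    cash_value = 1 - equity_weight
    # The put pays out only if the equity slice has fallen below the strike level.
    put_payout = max(equity_weight * put_strike - equity_value, 0)
    return equity_value + cash_value + put_payout - put_cost

for scenario, ret in [("rally", 0.30), ("flat", 0.00), ("mayhem", -0.50)]:
    print(f"{scenario:>7}: {long_vol_payoff(ret):.3f}")
```

With these assumed numbers the portfolio ends at roughly 1.22 in the rally, 0.98 in the flat case, and 0.86 in the crash (rather than 0.58 unhedged), which is the shape of outcome a "long volatility" position is trying to buy.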

I think your post does a disservice to shareholder engagement

I think your claim that shareholder engagement has a small effect is reasonable if it describes the median of the engagement which actually happens, but it is likely too harsh on the optimal shareholder engagement strategy.

However, in terms of the conclusions of this post, I'm unsure how the influence from engagement scales with the size of your holding, and I wouldn't be surprised if it were markedly sublinear. If that's the case, then this doesn't significantly argue against your claim. In other words, if you're deciding between holding 60% equities and 80% equities, and you believe that the 80% holding gives you significantly less than 80/60 times as much influence as the 60% holding, then even if shareholder engagement can be more valuable than you claim, that's not a very strong argument for holding 80%.
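As a toy illustration of the sublinearity point (the scaling exponent here is my assumption, not something from the post), suppose influence from engagement scales with the equity share raised to some power below one:

```python
# Toy model, not from the post: assume influence from engagement scales as
# (equity share) ** alpha with alpha < 1, i.e. markedly sublinear.
alpha = 0.5  # assumed exponent, purely for illustration

linear_ratio = 0.80 / 0.60
sublinear_ratio = linear_ratio ** alpha
print(f"Influence of 80% vs 60% equities: {sublinear_ratio:.2f}x "
      f"(vs {linear_ratio:.2f}x if influence were linear in the holding)")
```

Under that assumption the extra 20 percentage points of equities buys only about 1.15x the influence rather than 1.33x, which is a weak reason to take on the additional portfolio risk.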

I've wondered about the interaction between far-UVC and immunity:

  • as well as protecting us against a scary novel pandemic-level pathogen, far-UVC would also kill off germs for various "common or garden" infections
  • at first glance, this sounds like a pretty great cherry on the cake
  • but could it exacerbate pandemic risk by reducing immunity, thereby making it easier for a bioweapon engineer to create a scary pathogen?

I agree with Jason that the specific moral hazard of "people might move to flood-prone areas in order to get cash" seems unlikely to be a concern.

The moral hazard that I was thinking of when I read Robi Rahman's comment was "people who already live in flood-prone areas might be less inclined to invest in flood defences, move away, or take other precautions in light of the information that floods may be coming".

Re your question: "I would be especially interested if you have ideas for other historical case studies that could inform the longtermist project." Here's a few ideas:

  • In Scott Alexander's post Beware Systemic Change, he argued that by funding Marx, Engels brought about "global mass murder without any lasting positive change". I'd be quite interested in an assessment of whether this is true. 
    • Did Marx's work really cause the mass murder, or did the countries led by Marxist dictators happen to find themselves in circumstances where despots were prone to taking over anyway, and without Marxism might an even worse ideology have driven the counterfactual dictator?
    • Did Marx's thinking cause any lasting positive change? How would we conceptualise left wing politics today without him? Did he advance the concept that people have rights, even if they don't have capital/wealth, or was that concept going to take hold anyway?
  • There are probably quite a few battles which could have had world-changing consequences if the outcome had been different. The Battle of Tours of 732 is quite interesting. The Umayyad Caliphate was in the midst of spreading Islam across the known world. As they spread north from Spain, they came up against the Franks and lost at the Battle of Tours. Had this not happened, not only might the Carolingian Empire have found it harder to take off, but most of Western Europe (and the US?) might be Muslim today.
  • The US Constitution was the first constitution that I know of that was influenced by Enlightenment concepts. It seems to have been a success (the US does seem to be a successful country today, at least on some measures). Did the constitution matter, or was the US bound to succeed anyway? Did the constitution have any long-term impact on the values of Americans? The right to bear arms appears, at least superficially, to have an impact on contemporary American values. Did Enlightenment concepts matter?

At the start of your post, you said, rather tantalisingly: "I believe that many of the learnings from the creation of climate risk financial regulation in the UK can be applied to AI regulation." Could you expand on this?

Also, I'm pleased you wrote this post :-)

Answer by Sanjay

This comment will focus on the specific approaches you set out, rather than the high level question, although I'm also interested in seeing comments from others on how difficult it is to solve alignment, and why.

The approach you've set out resembles Coherent Extrapolated Volition (CEV), which was proposed earlier by Yudkowsky and discussed by Bostrom. I'm not sure what the consensus is on CEV, but here are a few thoughts I've retained from when I thought about CEV (several years ago now).

  • How do we choose the correct philosophers and intellectuals -- e.g. would we want Nietzsche or Wagner to be on the list of intellectuals, given the (arguable) links to the Nazis?
  • How do we extrapolate? (i.e. how do we determine whether the listed intellectuals would want the action to happen?)
    • For example, Plato was arguably in favour of dictatorships and preferred them over democracies, but recent history seems to suggest that democracies have fared better than dictatorships -- should we extrapolate that Plato would prefer democracies if he lived today? How do we know?
    • Another example, perhaps a bit closer to home: some philosophers might argue that under some forms of utilitarianism, the ends justify the means, and it is appropriate to steal resources in order to fund activities which are in the best long-term interests of humanity. Even if those philosophers say they don't believe that, they might just be pandering to societal expectations, and the AI might extrapolate that they would endorse it if unfettered.

In other words, I don't think this clearly guards us against power-seeking behaviour.

Can you expand on why the ideal unit is "the settlement, village, community, or neighborhood"?

I can also confirm that an early employee of W3W told me that supporting development work was one of the main original aims of W3W.

If I'm reading claim 3 correctly, are you saying that being a 10% GWWC pledger should be sufficient to get a spot at EAG, and this is true regardless of absolute donation amount?

At the outset, I had the same concern; however, thus far it doesn't appear to have been a problem. It's possible that this may change in time, in which case we'll cross that bridge when we get there.
