Ian Turner

589 karma · Joined Jan 2023


The posts do have the “April Fool’s Day” tag right at the beginning?

I guess the question I have is, if the fraud wasn't noticed by SBF's investors, who had much better access to information and incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn't it many times more of a failure that the fraud was unnoticed by the major sophisticated investment firms that were large FTX shareholders?

Thanks for posting this. I think this is the kind of practical, actionable analysis that we need.

Regarding this:

> Given that there is still no way for model developers to deterministically guarantee a model’s expected behavior to downstream actors, and given the benefits that advanced AI could have in society, we think it is unfair for an actor to be forced to pay damages regardless of any steps they’ve taken to ensure the advanced AI in question is safe.

It seems to me that this is begging the question. If we don't know how to make AIs safe, that is a reason not to make AIs at all, not a reason to make unsafe AIs. This is not really any different from how the nuclear power industry has been regulated out of existence in some countries[1].

  1. I think this analogy holds regardless of your opinions about the actual dangerousness of nuclear power. ↩︎

TBH I do wonder if it would be possible to bribe plutocrats into stepping down. How much better off would Uganda be without Museveni?

The FTX estate (which I understand includes Alameda) did a lot of things wrong, but regarding this:

> What made the lending out to Alameda fraudulent?

FTX promised many times, in many different forums, that it had sophisticated risk controls that would automatically liquidate customer accounts when limits were breached. Then it turned out that this was true for most counterparties, but Alameda was a big multi-billion dollar exception.

I think there’s an argument that if FTX had kept its promises about risk controls, there wouldn’t have been a criminal conviction, though possibly that would have negatively affected the business in other ways.

Hi Charlie, thanks for your reply.

I am a dilettante and don’t have much further to offer on social desirability bias, unfortunately. You might try connecting with a social scientist, development economist, or staff at one of the EA or EA-adjacent global health and development charities operating at the frontier of evidence for their respective interventions, such as GiveWell, GiveDirectly, Living Goods, IDinsight, DMI, Evidence Action, etc.

Well, regarding Anthropic at least, this particular bet may be lucky, but if you make a bunch of high-variance bets and one of them turns out in your favor, is that still just luck?

Thanks for sharing this report, and for all the work that went into this program so far.

Regarding the social desirability bias, and survey problems generally, there may be a few tweaks that would help with the situation.

  • Social desirability bias in surveys can be significantly reduced by using the "list experiment" technique.
  • There might be a way to phrase the question so that the social desirability bias goes the other way. For example, instead of asking "did you use the products", you could ask "do you still have the products?"
  • If you ask people to keep the packaging after use, then you could ask to see it (and observe whether or not it has been used). This might also help estimate diversion.
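To make the first bullet concrete: in a list experiment, the control group is asked how many of several innocuous items apply to them, and the treatment group gets the same items plus the sensitive one; since respondents only report a count, no individual answer reveals the sensitive behavior, but the difference in group means estimates its prevalence. A minimal simulation sketch (all numbers are made up for illustration):

```python
import random

random.seed(0)

def simulate_group(n, include_sensitive, true_prevalence=0.3):
    """Each respondent reports a count of items that apply, not which ones."""
    counts = []
    for _ in range(n):
        # Four innocuous items, each applying with probability 0.5 (assumed)
        count = sum(random.random() < 0.5 for _ in range(4))
        # Treatment group's list also includes the sensitive item
        if include_sensitive and random.random() < true_prevalence:
            count += 1
        counts.append(count)
    return counts

control = simulate_group(5000, include_sensitive=False)
treatment = simulate_group(5000, include_sensitive=True)

# Difference-in-means estimator of the sensitive item's prevalence
estimated_prevalence = sum(treatment) / len(treatment) - sum(control) / len(control)
print(round(estimated_prevalence, 2))  # close to the true 0.3 with large samples
```

The price of this anonymity is statistical: the estimator's variance is inflated by the innocuous items, so list experiments need larger samples than direct questions.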

Regarding the overlap with ANRiN, have you estimated the prior probability of that happening, given the size of the programs? It makes me wonder if there is a bias in the selection of treatment locations that makes this more likely, and which might also affect results in other ways. For example, maybe both organizations are selecting treatment locations with better transportation infrastructure, in which case the program might prove harder to scale in the future.
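A back-of-the-envelope version of that prior: if both programs picked treatment locations uniformly at random from the same pool, the chance overlap follows a hypergeometric distribution. A sketch with entirely hypothetical numbers (the pool size and program sizes below are assumptions, not figures from the report):

```python
from math import comb

N = 500   # candidate locations in the shared pool (hypothetical)
n1 = 60   # locations chosen by this program (hypothetical)
n2 = 80   # locations chosen by ANRiN (hypothetical)

# Under random selection, each of this program's n1 locations is in
# ANRiN's sample with probability n2/N, so by linearity of expectation:
expected_overlap = n1 * n2 / N

def prob_overlap_at_least(k):
    """Hypergeometric upper tail: P(overlap >= k) under random selection."""
    return sum(
        comb(n1, i) * comb(N - n1, n2 - i)
        for i in range(k, min(n1, n2) + 1)
    ) / comb(N, n2)

print(expected_overlap)                      # 9.6
print(prob_overlap_at_least(20))             # small: 20 is well above chance
```

If the observed overlap is far in the tail of this distribution, that is evidence the two organizations' site-selection criteria are correlated, which is exactly the scenario that could bias results.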
