Independent AI strategy research, forecasting @ Samotsvety, cofounder/advisor @ Sage. More at https://www.elilifland.com/. You can give me anonymous feedback here.
In an update on Sage introducing quantifiedintuitions.org, we described a pivot we made after a few months:
As stated in the grant summary, our initial plan was to “create a pilot version of a forecasting platform, and a paid forecasting team, to make predictions about questions relevant to high-impact research”. While we built a decent beta forecasting platform (that we plan to open source at some point), the pilot for forecasting on questions relevant to high-impact research didn’t go that well, due to (a) difficulties in creating resolvable questions relevant to cruxes in AI governance and (b) time constraints of talented forecasters. Nonetheless, we are still growing Samotsvety’s capacity and taking occasional high-impact forecasting gigs.
[...]
Meanwhile, we pivoted to building the apps contained in Quantified Intuitions to improve and maintain epistemics in EA.
Ought has pivoted ~twice: from pure research on factored cognition to forecasting tools to an AI research assistant.
Nitpick, but I found the sentence:
Based on things I've heard from various people around Nonlinear, Kat and Emerson have a recent track record of conducting Nonlinear in a way inconsistent with EA values [emphasis mine].
A bit strange in the context of the rest of the comment. If your characterization of Nonlinear is accurate, it would seem to be inconsistent with ~every plausible set of values and not just "EA values".
Appreciate the quick, cooperative response.
I want you to write a better post arguing for the same overall point, given that you agreed with the title, hopefully with more context than mine.
Not feeling up to it right now and not sure it needs a whole top-level post. My current take is something like (very roughly/quickly written):
My main thought is that I don't know why he committed fraud. Was it actually to maximize utility, or because he was just seeking status, or because he got too prideful, or what?
I think either way, most of the articles you point to do more good than harm; being quieter on the matter would be worse.
I'd agree with this if I thought EA right now had a cool head. Maybe I should have said we should wait until EA has a cooler head before launching investigations.
I'd hope that the investigation would be conducted mostly by an independent, reputable entity even if commissioned by EA organizations. Also, "EA" isn't a fully homogeneous entity and I'd hope that the people commissioning the investigation might be more cool-headed than the average Forum poster.
I thought I would like this post based on the title (I also recently decided to hold off for more information before seriously proposing solutions), but I disagree with much of the content.
A few examples:
It is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.
I think at this point we can safely say with >95% confidence that SBF basically committed fraud, even if not technically in the legal sense (edit: but it also seems likely to be fraud in the legal sense). It's natural to start thinking about the implications of this, and in particular to be very clear about our attitude toward the situation if fraud indeed occurred, as looks very likely. Waiting too long has serious costs.
We could immediately launch a costly investigation to see who had knowledge of fraud that occurred before we actually know if fraud occurred or why. In worlds where we’re wrong about whether or why fraud occurred, this would be very costly. My suggestion: wait for information to costlessly come out, discuss what happened when not in the midst of the fog and emotions of current events, and then decide whether we should launch this costly investigation.
If we were to wait until we came close to fully knowing "whether or why fraud occurred", this might take years as the court case plays out. I think we should get on with it reasonably quickly, given that we are pretty confident some really bad stuff went down. Delaying the investigation seems generally more costly to me than the costs of conducting it; e.g., people's memories decay over time, and people have more time to get alternative stories straight.
Adjacently, some are arguing EA could have vetted FTX and Sam better, and averted this situation. This reeks of hindsight bias! Probably EA could not have done better than all the investors who originally vetted FTX before giving them a buttload of money!
Maybe EA should investigate funders more, but arguments for this are orthogonal to recent events, unless CEA believes their comparative advantage in the wider market is high-quality vetting of corporations. If so, they could stand to make quite a bit of money selling this service, and should possibly form a spinoff org.
This seems wrong; e.g., EA leadership had more personal context on Sam than investors did. See e.g. Oli here with a personal account and my more abstract argument here.
It's a relevant point, but I think we can reasonably expect EA leadership to do better at vetting megadonors than Sequoia due to (a) more context on the situation (e.g. EAs should have known more about SBF's past than Sequoia did, and/or could have found it out more easily via social and professional connections), and (b) more incentive to avoid downside risks (e.g. the SBF blowup matters a lot more for EA's reputation than Sequoia's).
To be clear, this does not apply to charities receiving money from FTXFF; that is a separate question from the one about EA leadership.
0.074/0.01 is 7.4, not 74
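(To spell out the arithmetic: dividing by 0.01 is the same as multiplying by 100, so 0.074 / 0.01 = 0.074 × 100 = 7.4.)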