Tamay

Research Scientist @ CSAIL MIT, Associate Director @ Epoch

I agree the victim-perpetrator framing is an important lens through which to view this saga. But I also think that an investor-investee framing is another important one, with different prescriptions for what lessons to take away and what to do next. The EA community staked easily a billion dollars' worth of its assets (in focus, time, reputation, etc.), and ended up losing it all. I think it's crucial to reflect on whether the extent of our due diligence and risk management was commensurate with the size of EA's bet.


One specific question I would want to raise is whether EA leaders involved with FTX were aware of or raised concerns about non-disclosed conflicts of interest between Alameda Research and FTX.

For example, I strongly suspect that EAs tied to FTX knew that SBF and Caroline (CEO of Alameda Research) were romantically involved (I strongly suspect this because I have personally heard Caroline talk about her romantic involvement with SBF in private conversations with several FTX fellows). Given the pre-existing concerns about the conflicts of interest between Alameda Research and FTX (see examples such as these), if this relationship were known to be hidden from investors and other stakeholders, should this not have raised red flags? 

This is insightful. Some quick responses:

  • My guess would be that the ability to commercialize these models will strongly hinge on firms' ability to wrap them up with complementary products that contribute to an ecosystem with network effects, dependencies, evangelism, etc.
  • I wouldn't draw too strong a conclusion from the fact that the few early attempts to commercialize models like these, notably by OpenAI, haven't succeeded in creating the preconditions for generating a permanent stream of profits. I'd guess that their business models look less-than-promising on this dimension because (and this is just my impression) they've been focused on finding product-market fit, and have gone lightly on exploiting the particular fits they've found by building platforms to service them.
  • Instead, better examples of what commercialization will look like are GPT-3-powered companies, like Copysmith, which seem a lot more like traditional software businesses, with the usual tactics for locking users in, creating network effects, and encouraging single-homing behaviour.
  • I expect that companies will have ways to create switching costs for these models that traditional software products don't have. I'm particularly interested in fine-tuning as a way to lock in users by enabling models to adapt strongly to context about users' workloads. More intense versions of this might also exist, such as learning directly from individual customers' feedback through something like RL. Note that this is actually quite similar to how non-software services create loyalty.

I agree that it seems hard to commercialize these models out of the box with something like paid API access, but, given the points above, I expect this approach to be superseded by better strategies.

By request, I have updated the numbers based on the latest predictions. The previous numbers can be found here.

I won the Stevenson prize (a prize given out at my faculty) for my performance in the MPhil in Economics. I gather Amartya Sen won the same prize some 64 years ago, which I think is pretty cool.