There are other safety problems -- often ones that are more speculative -- that the market is not incentivizing companies to solve.
My personal response would be as follows:
Counterpoints to 1:
Good regulation of deployment is hard (though not impossible in my view).
Good regulation of development is much harder, and will eventually be necessary.
This is the really tricky one IMO. I think it requires pretty far-reaching regulations that would be difficult to get passed today, and would probably misfire a lot. But it doesn't seem impossible, and I know people are working on laying groundwork for this in various ways (e.g. pushing for labs to incorporate evals in their development process).
Sorry to hear about your experience!
Which countries are at the top/bottom of the priority list to be funded? [And why?]
I think this is a great question, and I suspect it's somewhat under-considered. I looked into this a couple years ago as a short research project, and I've heard there hasn't been a ton more work on it since then. So my guess is that the reasoning might be somewhat ad-hoc or intuitive, but tries to take into account important factors like "size / important-seemingness of country for EA causes", talent pool for EA, and ease of movement-building (e.g. do we already have high-quality content in the relevant language).
My guess is that:
Zero-bounded vs negative-tail risks
(adapted from a comment on LessWrong)
In light of the FTX thing, maybe a particularly important heuristic is to notice cases where the worst case is not lower-bounded at zero. Examples:
This isn't to say you should never do things that potentially have large negative downsides, but you can be a lot more willing to experiment when the downside is capped at zero.
Indeed, a good norm in many circumstances is to do lots of exploration and iteration. This is how science, software development, and most research happen. Things get a lot trickier when even this stage has potential deep harms -- as in research with advanced AI. (Or, more boundedly & fixably, infohazard risks from x- and s-risk reduction research.)
In practice, people will argue about what counts as effectively zero harm, vs nonzero. Human psychology, culture, and institutions are sticky, so exploration that naively looks zero-bounded can have harm potential via locking in bad ideas or norms. I think that harm is often fairly small, but it might be both important and nontrivial to notice when it's large -- e.g., which new drugs are safe to explore for a particular person? caffeine vs SSRIs vs weed vs alcohol vs opioids...
(Note that the "zero point" I'm talking about here is an outcome where you've added zero value to the world. I'm thinking of the opportunity cost of the time or money you invested as a separate term.)
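To make the asymmetry vivid, here's a toy Monte Carlo sketch (Python; all payoff numbers are invented for illustration, not estimates of any real activity):

```python
import random

def average_payoff(payoff, trials=100_000):
    """Average payoff over many simulated runs of an experiment."""
    return sum(payoff() for _ in range(trials)) / trials

# Zero-bounded experiment: usually adds nothing, occasionally adds a lot.
# Worst case is 0 (treating opportunity cost as a separate term, as above).
def zero_bounded():
    return 100 if random.random() < 0.05 else 0

# Negative-tail experiment: same upside, plus a rare large harm.
def negative_tail():
    r = random.random()
    if r < 0.05:
        return 100     # same 5% chance of a big win
    if r < 0.06:
        return -1000   # 1% chance of a large harm
    return 0

print("zero-bounded EV:", average_payoff(zero_bounded))    # ~ +5
print("negative-tail EV:", average_payoff(negative_tail))  # ~ -5
```

With the zero-bounded payoff you can run lots of experiments and keep whatever wins you find; with the negative-tail payoff, a 1% chance of a large harm is enough to flip the expected value negative, which is the intuition behind being much more careful there.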
Inside-view, some possible tangles this model could run into:
Speaking as a non-expert: This is an interesting idea, but I'm confused as to how seriously I should take it. I'd be curious to hear:
I'm also curious if you've thought about the parliamentary approach to moral uncertainty, as proposed by some FHI folks. I'm guessing there are good reasons they've pushed in that direction rather than more straightforward "maxipok with p(theory is true)", which makes me think (outside-view) that there are probably some snarls one would run into here.
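For concreteness, here's a toy sketch of the difference as I understand it (Python; the numbers are invented, and the voting rule is a crude stand-in -- the actual parliamentary proposal involves delegates who bargain, which this ignores):

```python
# Two ways to pick an action under moral uncertainty (toy example).
# Theories assign "choiceworthiness" scores to actions; credences are
# my probability that each theory is correct. All numbers invented.

credences = {"utilitarian": 0.6, "deontological": 0.4}

scores = {
    "utilitarian":   {"A": 100, "B": 10},
    "deontological": {"A": -1000, "B": 5},  # this theory treats A as forbidden
}

actions = ["A", "B"]

# 1) Credence-weighted expected choiceworthiness (my reading of the
#    "straightforward" approach).
def expected_choiceworthiness(action):
    return sum(credences[t] * scores[t][action] for t in credences)

# 2) Crude stand-in for the parliamentary model: each theory gets voting
#    weight equal to its credence and votes for its own favorite action.
def vote_weight(action):
    return sum(c for t, c in credences.items()
               if max(scores[t], key=scores[t].get) == action)

print(max(actions, key=expected_choiceworthiness))  # "B": A's big negative dominates
print(max(actions, key=vote_weight))                # "A": 0.6 of the vote beats 0.4
```

The divergence here (plain voting ignores how much each theory cares) is exactly the kind of snarl I'd expect; my impression is that the bargaining machinery in the parliamentary proposal is meant to handle cases like this.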
Ah, sorry, I was thinking of Tesla, where Musk was an early investor and gradually took a more active role in the company.
In February 2004, the company raised $7.5 million in series A funding, including $6.5 million from Elon Musk, who had received $100 million from the sale of his interest in PayPal two years earlier. Musk became the chairman of the board of directors and the largest shareholder of Tesla. J. B. Straubel joined Tesla in May 2004 as chief technical officer. A lawsuit settlement agreed to by Eberhard and Tesla in September 2009 allows all five – Eberhard, Tarpenning, Wright, Musk, and Straubel – to call themselves co-founders.
I think it's reasonable and often useful to write early-stage research in terms of one's current weak best guess, but this piece makes me worry that you're overconfident or not doing as good a job as you could of mapping out uncertainties. The most important missing point, I'd say, is effects on AI / biorisk (as Linch notes). There's also the lack of (or inconsistent treatment of) counterfactual impact of businesses, as I mention in my other comment.
Also, a small point, but given the info you linked, calling Oracle "universally reviled" seems too strong. This kind of rhetorical flourish makes me worry that you're generally overconfident or not tracking truth as well as you could be.
The market value of Amazon is circa $1T, meaning that it has managed to capture at least that much value, and likely produced much more consumer surplus.
I'm confused about your assessment of Bezos, and more generally about how you assess value creation via businesses.
My core concern here is counterfactual impact. If Bezos didn't exist, presumably another Amazon-equivalent would have come into existence, perhaps several years later. So he doesn't get full credit for Amazon existing, but rather for such an org existing a few years earlier than it otherwise would. And maybe for it being predictably better or worse than counterfactual competitors, if we can think of any predictable effects there. (I'll sketch this arithmetic below.)
Both points (competitor catch-up and trajectory change) also apply to the Google cofounders, though maybe there's a clearer story for their impact via e.g. Google providing more free high-quality services (like GDocs) than competitors like Yahoo likely would have, had they been in the lead.
For companies that don't occupy a 'natural niche' but rather are idiosyncratic, it seems more reasonable to evaluate the founder's impact based on something like the company's factual value creation, and not worry about counterfactuals. Examples might be Berkshire Hathaway and some of Elon's companies, esp. Neuralink and the Boring Company. (SpaceX, which Elon did start, has had a large counterfactual effect; I'm not sure how to evaluate his effect on the space launch industry.) I'd be interested in a counterfactual analysis of Tesla's effect on e.g. battery costs and the electric-vehicle growth trend in the US / world. (My best guess is that it's a small effect, but maybe a moderately important one.)
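To make the acceleration framing above concrete, here's the back-of-the-envelope structure I have in mind (Python; every number is a made-up placeholder -- the point is the shape of the estimate, not the values):

```python
# Toy structure for counterfactual founder credit (all numbers invented).

value_per_year = 10.0         # annual surplus the company creates (arbitrary units)
years_accelerated = 4         # how much sooner it exists because of this founder
quality_delta_per_year = 0.5  # predictably better (+) or worse (-) than the
                              # counterfactual competitor, per year
years_of_dominance = 20       # how long the company (or its replacement) dominates

acceleration_credit = value_per_year * years_accelerated         # 40.0
trajectory_credit = quality_delta_per_year * years_of_dominance  # 10.0

counterfactual_credit = acceleration_credit + trajectory_credit  # 50.0
naive_full_credit = value_per_year * years_of_dominance          # 200.0

print(counterfactual_credit, "vs naive", naive_full_credit)
```

On this structure, counterfactual credit can easily be a small fraction of naive "full credit for the company existing", which is the crux of my concern about the Bezos assessment.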
Like Akash, I agree with a lot of the object-level points here and disagree with some of the framing / vibes. I'm not sure I can articulate the framing concerns I have, but I do want to say I appreciate you articulating the following points: