MichaelStJules

I would guess that for the biggest EA causes (other than EA meta/community), you can often hire people who aren't part of the EA community. For animal welfare, there's a much larger animal advocacy movement and far more veg*ns, although it's probably harder to find people to work on invertebrate welfare. For technical AI safety, there are many ML, CS (and math) PhDs, although the most promising ones may not be cheap. Global health and biorisk are not unusual causes at all. Invertebrate welfare is pretty unusual, though.

However, for more senior/management roles, you'd want some value alignment to ensure they prioritize well and avoid causing harm (e.g. significantly advancing AI capabilities).

AI safety has important potential backfire risks, like accelerating capabilities (or causing others to do so, intentionally or not), worsening differential progress, and backfire s-risks. I know less about biorisk, but there are infohazards there, so bringing more attention to biorisk can also increase the risk of infohazards leaking or being sought out.

Hi smountjoy, I couldn't find the link to David Thorstad's post in this post.

  1. At least in some cases though, it seems like benefits are superlinear.
    1. Standard models of networks state that the value of groups tends to grow quadratically or exponentially
    2. When Ben asks people why they write for the EA Forum they often say something like “because everyone reads the Forum”; N people each writing because N people will read each thing — that’s quadratic value

 

I think both exponential and quadratic are too fast, although it's still plausibly superlinear. You used , which seems more reasonable.

Exponential seems pretty crazy (btw, that link is broken; looks like you double-pasted it). Surely we don't have the number of (impactful) subgroups growing this quickly.

Quadratic also seems unlikely. The number of people or things a person can and is willing to interact with (much) is capped, and the average EA will try to prioritize somewhat. So, once someone is at their limit and unwilling to raise it, the marginal value they get is the value of the marginal connection or post minus the value of whatever they would have attended to otherwise.
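To make this concrete, here's a minimal toy model (entirely my own assumptions, not anything from the post): each member only gets value from their top `cap` connections, with connection values drawn iid. Once the community is larger than the cap, total value grows roughly linearly in size, with only mild superlinearity from members being able to select better connections.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_value(n_members: int, cap: int = 20) -> float:
    """Sum over members of the value of their best min(cap, n-1) connections."""
    total = 0.0
    for _ in range(n_members):
        values = rng.random(n_members - 1)  # hypothetical iid connection values in [0, 1]
        k = min(cap, n_members - 1)
        total += np.sort(values)[-k:].sum()  # only the top `cap` connections count
    return total

for n in [10, 100, 1000]:
    print(n, round(total_value(n), 1))
```

With cap = 20, going from 100 to 1,000 members scales total value by roughly 10x in this toy model, not the 100x that quadratic growth would imply.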

As an example, consider the case of hiring. Suppose you're looking to fill exactly one position. Unless the marginal applicant is better than the average in expectation, you should expect decreasing marginal returns to increasing your applicant pool size. If you're looking to hire someone with some set of qualities (passing some thresholds, say), with the extra applicant as likely to have them as the average applicant, with independent probability $p$ and $n$ applicants, then the probability of finding someone with those qualities is $1 - (1-p)^n$, which is bounded above by 1 and so grows even more slowly than $n$ for large enough $n$. Of course, the quality of your hire could also increase with a larger pool, so you could instead model this with the expected value of the maximum of $n$ iid random variables. The expected value of the max of bounded random variables will also be bounded above by the bound on each. The expected value of the max of $n$ iid uniform random variables over $[0,1]$ is $\frac{n}{n+1}$ (source), so pretty close to constant. For the normal distribution, it's roughly proportional to $\sqrt{\log(n)}$ (source).
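A quick numerical check of those rates (my own sketch; the value of p and the distributions are illustrative assumptions, and I use the standard $\sqrt{2\ln n}$ approximation for the normal max):

```python
import numpy as np

p = 0.05  # hypothetical chance that a random applicant passes the thresholds

for n in [10, 50, 100, 500, 1000]:
    p_at_least_one = 1 - (1 - p) ** n       # approaches 1, so grows far slower than n
    e_max_uniform = n / (n + 1)             # E[max of n iid Uniform(0,1) draws]
    e_max_normal = np.sqrt(2 * np.log(n))   # ~E[max of n iid standard normals], large n
    print(n, round(p_at_least_one, 3), round(e_max_uniform, 3), round(e_max_normal, 2))
```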

It should be similar for connections and posts, if you're limiting the number of people/posts you substantially interact with and don't increase that limit with the size of the community.

 

Furthermore, I expect the marginal post to be worse than the average, because people prioritize what they write. Also, I think some EA Forum users have had the impression that the quality of the posts and discussion has decreased as the number of active EA Forum members has increased. This could mean the value of the EA Forum for the average user decreases with the size of the community.

Similarly, extra community members from marginal outreach could be decreasingly dedicated to EA work (potentially causing value drift and making things worse for the average EA, and, at the extreme, including grifters and bad actors), or they could generally be lower-priority targets for outreach based on their expected contributions or the costs of bringing them in.

 

Brand recognition or reputation could be a reason to expect the extra applicants to EA jobs to be better than the average ones, though.

 

Brand recognition can help get things done, and larger groups have more brand recognition

Is growing the EA community a good way to increase useful brand recognition? The EA brand seems less important than the brands of specific organizations if you're trying to do things like influence policy or attract talent.

Thanks for writing this! This is a cool model.

Our best guess is that benefits grow slightly superlinearly because of coordination benefits (but you can easily remove coordination benefits from the model). 

  1. A naïve first-order approximation is that benefits (not accounting for reputational issues) are linear in the size of the group.
    1. If everyone in EA donated a constant amount of money, then getting more people into EA would linearly increase the amount of money being donated (which, for simplicity, we can say is a linear increase in impact)

Is linear a good approximation here? Conventional wisdom suggests decreasing marginal returns to additional funding and people, because we'll try to prioritize the best opportunities.

I can see this being tricky, though. Of course, doubling the community size all at once would run into limits on hiring, management capacity, similarly good projects, and room for more funding generally, but EA community growth isn't usually abrupt like this (FTX funding aside).
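As a toy illustration of the concavity I have in mind (entirely my own assumptions, not your model): if a fixed pool of opportunities is funded in order of cost-effectiveness, cumulative impact is concave in total funding, even though each marginal dollar still buys something.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pool of opportunities with heavy-tailed cost-effectiveness.
costs = rng.uniform(0.5, 5.0, size=200)                 # cost of each opportunity ($M)
cost_effectiveness = rng.lognormal(0.0, 1.0, size=200)  # impact per $M

order = np.argsort(-cost_effectiveness)                 # fund the best opportunities first
cum_cost = np.cumsum(costs[order])
cum_impact = np.cumsum(costs[order] * cost_effectiveness[order])

for budget in [10, 50, 100, 300]:
    i = np.searchsorted(cum_cost, budget)               # number of opportunities fully funded
    impact = cum_impact[i - 1] if i > 0 else 0.0
    print(f"${budget}M of funding -> total impact {impact:.0f}")
```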

In the animal space, I could imagine that doing a lot of corporate chicken (hen and broiler) welfare work first is/was important for potentially much bigger wins like:

  1. legislation/policy change, due to less corporate pushback (or even corporate support) and stronger org reputations,
  2. getting the biggest and worst companies like McDonald's to commit to welfare reforms,
  3. moving on to less relatable animals exploited in larger numbers that we can potentially help much more cost-effectively going forward, like fish, shrimp and insects.

But I also imagine that marginal corporate campaigns are less cost-effective when considering only the effects on the targeted companies and the animals they use, because the best targets are prioritized first and resources spent on a given campaign will have decreasing marginal returns in expectation.

 

GiveWell charities tend to have a lot of room for funding at given cost-effectiveness bars, so linear is probably close enough, unless it's easy to get more billionaires.

 

For research, the most promising projects will tend to be prioritized first, too. But with more funding and a more established reputation, you can attract people who are better fits, who can do those projects better, do projects you couldn't do without them, or identify better projects, and possibly managers who can handle more reports.

 

Maybe there's some good writing on this topic elsewhere?

I'm somewhat surprised that "economics and social sciences (11 votes)" and "Forecasting ability (5 votes)" got so many votes, but "Generalist researchers (2 votes)" got so few. I consider economics, social sciences and (some) forecasting to be pretty standard parts of the roles of generalist researchers. But maybe they want more (specific) expertise than a generalist researcher would normally have?

Thanks for doing and sharing this analysis, and thanks to the survey participants! This looks really useful!

It is hard for pure advancements to compete with reducing existential risk as their value turns out not to scale with the duration of humanity's future. Advancements are competitive in outcomes where value increases exponentially up until the end time, but this isn't likely over the very long run.

What about under something like Tarsney (2022)'s cubic growth model of space colonization?
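To spell out the worry behind the question: if value grows roughly cubically up to some far horizon $T$ (say because the volume of settled space grows like $(vt)^3$), and an advancement shifts the whole trajectory earlier by $\Delta$, then on a rough back-of-the-envelope under those assumptions (not Tarsney's exact model):

$$\int_0^T c\,(t+\Delta)^3\,dt - \int_0^T c\,t^3\,dt \approx c\,\Delta\,T^3,$$

so the value of the advancement does scale with the duration of humanity's future, rather than washing out.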

It seems like it would have been worth discussing AI more explicitly, but maybe that's a discussion for a separate article?

How plausible is it that we can actually meaningfully advance or speed up progress through work we do now, other than by advancing or deregulating AI, which is extremely risky (and could "bring forward the end of humanity" or worse)? When sufficiently advanced AI comes, the time to achieve any given milestone could be dramatically reduced, making our efforts ahead of its arrival basically pointless, except efforts to advance or take advantage of advanced AI.

I suppose we don't need this extra argument, if your model and arguments are correct.

I left a comment on RP's post on welfare ranges here that's relevant.
