I think the elephant in the room might be OpenPhil spending at least $211m on "Effective Altruism Community Growth (Longtermism)", including 80k spending $2.6m in marketing in 2022.[1]
As those efforts get results I expect the % of EA growth from those sources to increase.
I also expect EA™ spaces where these surveys are advertised to over-represent "longtermism"/"x-risk reduction" (in part because of donor preferences, and in part because they are more useful for some EAs), so that would impact the % of people coming to these spaces from things like GiveWell.
High uncertainty here, but I'm not sure what view LW and ACX users have of CEA spaces. My impression is that for LW users it might plausibly be negative, so they might spend relatively less time in the places where this survey was advertised (which are mostly CEA-run / funded via OpenPhil's Longtermist Community Growth).
I think that's negligible compared to the $200m effort in longtermism community growth from OpenPhil. And again I'm really uncertain about this, but there might be a lot of EAs (e.g. GiveWell donors) who might have become relatively less likely to respond to these surveys.
Some of it in late 2022, but I think the main point still stands
Thank you for the transparency!
At $2M/year for this forum, it seems that CEA is willing to pay $7.50 for the average hour of engagement and $40/month for the average monthly active user.
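A quick back-of-envelope of what those figures imply, assuming the ~$2M/year budget is simply divided through (the resulting engagement-hour and MAU counts are my own rough inference, not numbers CEA reported):

```python
# Rough sketch: back out the engagement implied by the $/hour and $/MAU figures above,
# assuming a ~$2M/year forum budget divided evenly (all numbers are approximate).
annual_budget = 2_000_000         # $/year
cost_per_engagement_hour = 7.5    # $ per hour of engagement
cost_per_mau_per_month = 40       # $ per monthly active user per month

engagement_hours_per_year = annual_budget / cost_per_engagement_hour   # ~267,000 hours/year
monthly_active_users = (annual_budget / 12) / cost_per_mau_per_month   # ~4,200 MAUs

print(f"~{engagement_hours_per_year:,.0f} engagement-hours/year, "
      f"~{monthly_active_users:,.0f} monthly active users")
```

That works out to roughly 267,000 engagement-hours per year and ~4,200 monthly active users, if I'm reading the figures correctly.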
I found this much higher than I expected!
I understand that most of the value of spending on the forum is not in current results but in future returns, and in having a centralized place to "steer" the community. But this just drove home to me how counter-intuitive the cost-effectiveness numbers for AI-safety are compared to other cause areas.
E.g. I expect Dominion to have cost 10-100x less per hour of engagement, and the level of investment in this forum not to make sense from an animal welfare perspective.
I was really, really happy to see transparency about the team now focusing more on AI safety, after months/years of rumors while the list of things we are not focusing on still included "Cause-specific work (such as ... AI safety)". I think it's a great change in terms of transparency, and I greatly appreciate you sharing this.
It seems that we're spending 2 million a year on a glorified subreddit.
Just noting that there is already an effective altruism subreddit. I think we should post in both places and see if the difference is worth the cost
I think it's really worth highlighting that this Working Paper is from September 2019; I would add that to the title.
So nice to see you back on the forum!
I agree with most of your comment, but I am very surprised by some points:
- Think of all the technological challenges that we’d face over the coming 500 years, on a business-as-usual 1-5% per year growth rate.
- Now imagine that that occurs over the course of 5 years rather than 500.
Does this mean you consider it plausible that productivity improves by ~100,000x over a 5-year period sometime in the next 20 years? As in, one hour of work becoming more productive than 40 years of full-time work five years earlier? That seems significantly more transformative than most people would find plausible.
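For concreteness, a minimal sketch of the compounding arithmetic behind my ~100,000x figure, assuming the quoted 1-5% business-as-usual growth compounds annually over 500 years:

```python
# Total growth from compounding an annual rate over 500 years -- the amount of
# progress the quoted comment imagines being compressed into 5 years.
YEARS = 500
for annual_growth in (0.01, 0.023, 0.05):
    total = (1 + annual_growth) ** YEARS
    print(f"{annual_growth:.1%}/year for {YEARS} years -> ~{total:,.0f}x")

# Roughly: 1%/year -> ~145x, 2.3%/year -> ~90,000x, 5%/year -> ~4e10x,
# so ~100,000x corresponds to an annual growth rate of about 2.3%.
```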
The point in time at which we expand beyond our solar system might be within our lifetimes. This could be one of the most influential moments in human history: the speed of light sets an upper bound on how fast you can go, so whoever leaves first at max speed gets there first. And, plausibly, solar systems are defense-dominant, so whoever gets there first controls the resources they reach indefinitely.
I'm really surprised to read this. Wouldn't interstellar travel close to the speed of light require a huge amount of energy, and a level of technological transformation that again seems much higher than most people expect? At that point it seems unlikely that concepts like "defense-dominant" or "controlling resources" (I assume the matter of the systems?) would still be meaningful, or at least meaningful in a way predictable enough to make regulation written before the transformation useful.
If AI goes well, then it could greatly extend currently-existing lives, and greatly increase their quality of life, too.
If AI goes badly, you could make the exact same argument in the opposite direction. Wouldn't those two effects cancel out, given how uncertain we are about AI's effects on humans?
key decision-makers (e.g. politicians, people at AI labs)
I don't understand the theory of change for people at AI labs impacting the global factory farming market (including CEOs, but especially the technical staff). After some quick googling, the global factory farming market is around $2 trillion. Being able to influence that significantly would imply a valuation for AI labs very significantly larger than the one implied by the current market.
If you have time to have a look at my post and recent comments, would you say that this account creeps you out, or only the more EA-critical ones?
The alternative is not really to post these things under my real name, but not to post at all (for various reasons: don't want the pro-EA posts to be seen as virtue signaling, don't want to be canceled in 26 years for whatever will be cancelable then, don't want my friends to get secondhand reputation damage)
Would you be happy if a CEA staff member had a quick chat with you at EAG, wrote down "IQ 100" based on that conversation on an excel sheet, and this cost you opportunities in the EA space as a result?
Yes. I'm in EA to give money/opportunities, not to get money/opportunities.
Edit: I do think some people (in and outside of EA) overvalue quick chats when hiring, and I'm happy that in EA everyone uses extensive work trials instead of those.
Conditional on being a woman in California, being EA did make someone more likely to experience sexual harassment, consistently, as measured in many different ways. But Californian EAs were also younger, much more bisexual, and much more polyamorous than Californian non-EAs; adjusting for sexuality and polyamory didn't remove the gap, but age was harder to adjust for and I didn't try. EAs working at charitable jobs that they explicitly calculated were effective had lower harassment rates than the average person, but those working at charitable jobs they didn't explicitly calculate had higher rates. All of these subgroup analyses had very small sample sizes.
Could you share (maybe approximate) numbers and percentages, like you did for the full stats?
That's way higher than I thought! You must have a great recruitment process ;)
What % of these incubatees found CE-incubated charities? (i.e. get seed funding, support, and so on)
On your website I see you have assisted 50+ individuals from a wide range of backgrounds in launching 27 high-impact charities. Does that mean that fewer than 10 people went through the program and didn't start a charity? Or that there are many individuals that started non-high-impact charities? Or something else?