If you’re seeing things on the forum right now that boggle your mind, you’re not alone.
Forum users are only a subset of the EA community. As a professional community builder, I’m fortunate enough to know many people in the EA community IRL, and I suspect most of them would think it’d be ridiculous to give a platform to someone like Hanania.
If you’re like most EAs I know, please don’t be dissuaded from contributing to the forum.
To be clear, my best guess, based on my experiences talking to hundreds of community builders and student group organizers over the years, is that the general sentiment amongst organizers is substantially closer to the "I don't think we should micromanage the attendance decisions of external events" position than the forum discussion is.
This kind of stuff is hard to get an objective sense of, so I am not confident here, but I think the biases in what positions people feel comfortable expressing publicly clearly go more in the direction of outrage ("complaining about who attended" [1]) here.
My best guess is there is also a large U.S./EU difference here. My sense is the European (non-German, for some reason) EA community leans substantially more towards controlling access and reputation tightly here. You can also see this in the voting patterns on many of the relevant posts, which wax and wane with the U.S./EU time difference.
I don't think the crux here is whether one ought to micromanage the attendance decisions of external events. It's more about:
"Should one give a platform to Hanania?"
"If forum users think it's wrong to give a platform to Hanania, is it reasonable for them to express displeasure at Manifest for giving him a platform?"
"If someone points out that Hanania has said, by his own admission, horrible things and therefore probably shouldn't be given a platform, is it reasonable to then write a long comment trying to add nuance to the discussion, instead of simply saying, 'yeah, seems right'"
My guess is that the crux between people disagreeing is typically closer to: "is this mostly a question of managing the details of how someone else ran an event, or is this mostly a question of appropriate social signalling?"
4
David Mathers🔸
There is an object-level disagreement about what counts as unacceptable racism here, not just a meta-disagreement about norms. One person (I assume a rationalist, but I don't know that) in the main thread didn't understand why I was offended by something they posted in which Hanania basically implied that the Civil Rights Act caused crime.
3
Nathan Young
I think it's a bit tricky when we assume people who disagree with us are of the 'opposing' party.
7
NickLaing
"I think the biases in what positions people feel comfortable expressing publicly clearly go more in the direction of outrage here."
Two comments here. First, in terms of comment numbers on the forum, I would say it's pretty balanced. When it comes to voting (not public), I would say pro-platforming-Hanania sentiment is more heavily upvoted in general.
I also don't think "outrage" is an accurate reflection of the sentiment of the majority of anti-platforming comments. Most comments have seemed like sober arguments about making attendees feel comfortable, the dangers of platforming edgy people who have made comments in the past that many consider racist, or avoiding bad press, rather than irrational outrage... You can make good arguments against these points, but I would hardly label them as outrage.
Agree with the timezone voting thing. I should start posting at EU friendly times ;).
7
Owen Cotton-Barratt
As an observer to these discussions, a couple of comments:
* I think you're right in your characterization of most of the anti-platforming comments
* However, I'd guess you're wrong to describe most of the anti-anti-platforming comments/votes as "pro-platforming"
* Rather, the more common sentiment, and the one I think is mostly attracting upvotes, seems to me to be like "who are we to tell other people who to talk to?"
* I don't know much about him, but from what I do know I think the guy sounds like a jerk and I'd be meaningfully less interested in going to events he was at; I can't really imagine inviting him to speak at anything
* But it also seems to me that it's important to respect people's autonomy and ability to choose differently
* There's a sad irony that this whole debate is functioning to give him a weird kind of platform
* e.g. I'd otherwise never heard of him; now I have
* As a matter of social game theory I think it's usually a mistake to read someone's writing when it's been drawn to your attention for being controversial. This incentivizes people to be provocative in order to draw audiences.
* This means that I think it's likely correct for most people to never form strong opinions about him in the first place
* And I feel uncomfortable that his critics are doing his work for him by demanding others have strong negative reactions to him, when I think it might usually be wiser for them to keep focused on other things and let him sink into obscurity
> Rather, the more common sentiment, and the one I think is mostly attracting upvotes, seems to me to be like "who are we to tell other people who to talk to?"
> I don't know much about him, but from what I do know I think the guy sounds like a jerk and I'd be meaningfully less interested in going to events he was at; I can't really imagine inviting him to speak at anything
> But it also seems to me that it's important to respect people's autonomy and ability to choose differently
Criticizing someone's decisions is not denying them autonomy or ability to choose.
To use a legal metaphor, one way of thinking about this is personal jurisdiction -- what has Manifest done that gives the EA community a right to criticize? After all, it would be uncool to start criticizing random people on the Forum with no link to EA, and it would generally be uncool to start criticizing random EAs for their private non-EA/EA-adjacent actions.
I have two answers to that:
The first is purposeful availment. If an actor purposefully takes advantage of community resources in connection with an action, they cannot reasonably complain about their choices being the subject of community scrutiny. The Manifest organize
On the "platforming" question
I agree there's a big mix of disagreements, but I do think a lot of the negative comments are related to the platforming aspect, and I feel like some of the replies (getting lots of upvotes, like you say) strawman that a little by shifting the ground to "who are we to tell other people who to talk to".
For me the big issue is not that he was allowed to "attend" the event and talk to people (I agree we shouldn't tell people who to talk to), but the platforming itself. He was invited to the event by the organiser, listed initially as a speaker, and then eventually attended as a "special guest". Personally I love talking to people with a wide range of views, even those I don't like or even people that could be considered "enemies". From my faith background, Jesus spent a lot of time doing that and I try to do the same ("love your enemies and pray for those who persecute you").
I'm fairly confident this wouldn't have blown up (at least not to this extent) if he was just a regular attendee.
Completely agree about the sad irony of this debate amplifying the platforming, although I think both Manifest and us debating can probably share the responsibility for platforming him more. I completely agree we should not demand strong negative reactions to someone, and that doing so makes the situation worse.
8
Owen Cotton-Barratt
I agree that Manifest was platforming him.
(I wouldn't have done that, and at some level I feel sad that they did it -- but I think that is a bit norm violating to express publicly, and I'm trying to do it softly and only because it may help to avoid misunderstanding.)
However, I think Manifest has the right to choose who to platform just as people have the right to choose who to talk to. I do think this platforming decision is something Manifest's natural constituents can rightfully complain about, but I think it's kind of inappropriate for the EA forum at large to weigh in on. (Though I support people's right to be inappropriate this way! I just would try not to do it myself and might gently advise other people to try not to.)
-2
NickLaing
I feel like EA is close enough to Manifest (Open Phil funding, EA organisers involved, advertising on the forum) that it's fair enough for the forum to weigh in. Why do you think it's inappropriate for the forum to weigh in? Are you trying to curtail our free speech ;) (Jokes)
I don't really understand the argument about "the right" to speak or "the right" for Manifest to platform whoever they want. Of course they can do what they want; it's their org and they can invite who they want. And then we can talk about it? This seems like a non-argument to me.
9
Jason
I'm not aware of Manifest (or even Manifold) receiving funding from Open Phil, although Manifold did receive significant funding from an EA-linked funder (FTXFF).
8
Owen Cotton-Barratt
Totally agree that people can do whatever they want!
But: suppose there were a big online discussion about the clothing choices of a public intellectual (and while slightly quirky and not super flattering, these obviously weren't being chosen for the sake of being provocative). Then I think I'd feel like I cared about their privacy, and that it would be kinder for people to refrain from this discussion. Not that they wouldn't have a right to have it -- but that it might ultimately be more aligned with their values not to (even if they're totally right about the clothes).
I feel kind of similarly here. Manifest made some choices. They seem to me like they may have been mistakes, but it's important to me that they have the right to make their own choices (whether or not those are mistakes). Some of the discourse feels like it's an ungraceful attempt to muscle in on their autonomy -- like the vibe is "you shouldn't have had the right to invite him", even if people don't actually say that -- and thereby more likely to create an environment where people don't actually feel free. (Not all of the criticism has felt to me like that. I actually support a certain amount of tactfully-done criticism in this case. And I've upvoted a number of contributions on both "sides" of this debate, where I felt like they were adding something useful.)
(It's more plausible that Manifest's choices are causing indirect harm than that the intellectual's clothing choices are, so this analogy isn't perfect, and I'm not trying to say it's as clear-cut as that case, and it's possible this value could be outweighed by other values, which is why I'm in favour of some of the criticism. But I do think it's not a "non-argument", and I'm giving an analogy where I think it's clearer cut in order to demonstrate that it's at least a legitimate consideration.)
2
NickLaing
Yep, I agree with all of this, nice one, I like the way you put it. I haven't noticed that many posts/comments which I see as trying to "muscle in on" Manifest, but there is some of that sentiment, I think, for sure.
6
Habryka
Agree with this, which is part of why I expect a bigger skew here in terms of representation. If public contributions are balanced, and voting contributions are less balanced, then my guess is the overall bias of the discussion is towards the side that gets relatively more support when contributions are public, and relatively less support when contributions are private.
2
NickLaing
Yep I'd agree with that. I just hope the "silent" voters are engaged EAs/Rationalists and we don't have a small (but significant) number of trolls lurking skewing the voting. I would imagine though the forum admins have this under control.
Part of the reason I have some concern about this is that the voting pattern seems quite different on this post 3 months ago... https://forum.effectivealtruism.org/posts/mZwJkhGWyZrvc2Qez/david-mathers-s-quick-takes?commentId=AnGzk7gjzpbMsHXHi
I think this is an important sentiment for many people to hear who might be feeling the same way but haven't seen this explicitly said anywhere. Thanks for making it. Don't be discouraged if the karma doesn't get too high because of downvotes as well, which I think is likely.
I want to believe this, but it's difficult for me to assess the evidence for or against it very well. Any suggestions?
As with most of us, "the people I know" is not a randomly-selected or representative group. Moreover, presumably many people who hold positions subject to general social stigma will not advocate for their position in front of people they know to be non-receptive. So the personal experience of people whose opposed stance is known will likely underestimate support for Hanania.
Suggestions for assessing the claim, "forum users are only a subset of the EA community"? Or the claim, "most of them [EAs I know] would think it'd be ridiculous to give a platform to someone like Hanania"?
I don't think there's great evidence for either claim, unfortunately. For the former, I guess we can look at this and observe that forum use is quite unequal between users, which suggests something.
For the latter, I could survey EAs I know with the question, "Do you think it'd be a good idea to invite Hanania to speak at an event?". However, even typing that out feels absurd, which perhaps indicates how confident I am that most EAs I know would think it's a ridiculous idea.
Regarding stigma, my impression is that quite a few people would like to say on the forum, "Giving a platform to Hanania is a ridiculous idea", but don't because they worry the forum will not be receptive to this view. I think this is because people perceive there to be a stigma on the forum against anyone who expresses discomfort at seeing people dispassionately discuss whether it's okay to give a platform to someone like Hanania.
Maybe this stigma is a good thing. I'm not sure. I like what Isa said: "I w... (read more)
Actually a third: ~ "the approximate percentage of EAs who would think it'd be ridiculous to give a platform to someone like Hanania." I don't need convincing that both "Forum users" and "EAs that James Herbert personally knows" are likely unrepresentative samples of EAs as a whole. And I'd still be distressed if "most" EAs thought it ridiculous but a sizable minority thought it was affirmatively a good idea.
8
David Mathers🔸
One reason to believe that inviting Hanania is unpopular, though far from definitive, is the data we have on the political views of EAs. About 70% of EAs identify as either "left" or "centre-left" in the EA survey. Very few identify as "right" or "centre-right". I'd assume, cautiously, that most people who identify as "left" or "centre-left" think inviting Hanania was a bad decision, though I can't be certain of that, as some Hanania supporters do seem to conceptualise themselves as centre-left. But presumably, also, some people who identify as "centre" (and perhaps even "other" or "libertarian") are also not fans of the decision to invite Hanania.
2
Jason
Thanks, that is a helpful data point. I speculate, though, that EAs may be less likely to fall neatly into a left-right continuum and so (e.g.) the "center-left" respondents could have quite a bit more libertarianism mixed in than the US/UK general center-left population despite identifying more as center-left than libertarian or other.
I know EA Survey space is limited, but a single question on Forum usage (which could be, e.g., no, lurker, <100 karma, 100-999, >1000 / or could be frequency/intensity of use) would be useful in obtaining hard data on the extent to which the active Forum userbase has different characteristics than the EA population as a whole. That might be useful context when something goes haywire on the Forum in a way we think is unrepresentative of the larger population. [Tagging @David_Moss with the question request]
4
David_Moss
The only statistically significant results are that people who posted or commented on the Forum are more Center-left (41.2% vs 34.9% for non-Forumites), but less Left (27.8% vs 37.8%).
4
Jason
Thanks! The idea that I (as someone who answered "Center") am that far right for the Forum population feels pretty inconsistent with my lived experience here. I can think of several possible explanations for that, including that I am using a different yardstick than many respondents, that I'm more "left" on certain issues that have been coming up as of late, and that the distribution for highly active commenters/posters is different.
2
Nathan Young
Many of those who are in a different party to you in this discussion are also pretty dubious about platforming someone like Hanania or having discussions about race. There isn't, so far as I have seen, a pro-Hanania faction; there is an anti-Hanania one and a mixed-on-Hanania one.
Likewise it seems most people respect CEA's right to run events how they want, though probably some dislike how CEA does in fact run events.
3
James Herbert
I agree that there are virtually zero (vocal) pro-Hanania people, and there is definitely an anti-Hanania group and a mixed-on-Hanania one.
I think a lot of people will be 'boggled' by the number of 'mixed-on-Hanania' people. I wanted to reassure that boggled crowd.
It isn't an EA project (and Bregman's accompanying book has a chapter on EA that is quite critical), but the inspiration is clear and I'm sure there will be things we can learn from it.
For their pilot, they're launching in the Netherlands, but it's already pretty huge, and they have plans to launch in the UK and the US next year.
To give you an idea of size, despite the official launch being only yesterday, their growth on LinkedIn is significant. For the 90 days preceding the launch date, they added 13,800 followers (their total is now 16,300). The two EA orgs with the biggest LinkedIn presence I know of are 80k and GWWC. In the same period, 80k gained 1,200 followers (their total is now 18,400), and GWWC gained 700 (their total is now 8,100).[1]
And it's not like SMA has been spamming the post button. They only posted 4 times. The growth in followers comes from media coverage and the founding team posting about it on their personal LinkedIn pages (Bregman has over 200k followers).
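To make the size comparison concrete, here is a quick back-of-the-envelope calculation (my own arithmetic, using only the follower figures quoted above) converting the 90-day gains into relative growth:

```python
# Rough comparison of relative LinkedIn growth over the same ~90-day window,
# using only the follower figures quoted above (the arithmetic is mine).
orgs = {
    # name: (followers gained in the 90 days before launch, total followers now)
    "SMA":  (13_800, 16_300),
    "80k":  (1_200, 18_400),
    "GWWC": (700, 8_100),
}

for name, (gained, total_now) in orgs.items():
    start = total_now - gained           # implied follower count ~90 days earlier
    growth_pct = gained / start * 100    # relative growth over the window
    print(f"{name}: {start:,} -> {total_now:,} followers (+{growth_pct:.0f}%)")
```

On those figures, SMA grew from roughly 2,500 to 16,300 followers (around +550%) over the period, versus single-digit percentage growth for 80k and GWWC.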
When I translated their three "O"s to English (they're in Dutch, not English), they were....
"Bulky, underexposed and solvable"
Sounds a lot like important, neglected and tractable to me?
And then they interviewed Rob Mathers from the Against Malaria Foundation...
I completely agree with James that these guys are showing EA a different way of movement building which might end up being effective (we'll see). It seems like they are building on the moral philosophy foundations of EA, then packaging it in a way that will be attractive to the wider population - and they've done it well. I love this page with their "7 principles" and found it inspiring - I would sign up to those principles, and I appreciated that the scout mindset is in there as well.
I do wonder what his major criticisms of EA are though, given that this looks pretty much like EA packaged for the masses, unless I'm missing something.
Yup! The three OOOs are inspired by EA (although a different Dutch foundation should get the credit for the Dutch acronym).[1]
The main criticism can be found in chapter 8 of the book (only in Dutch for now). The subheading for this chapter gives a clue: "What you can achieve by radical prioritization, and how your moral ambition can be completely derailed." Spoiler: it's SBF.
The introduction to that chapter closes with the following paragraphs (machine translation):
"I cannot emphasize enough how important this third point is. Rob [Mather] was equally talented through all three phases, but it wasn't until he took a step back and carefully weighed his options that he started to make a huge difference. So don't start with the question: 'What is my passion?' but with the question 'How can I contribute the most?' – and then choose the role that suits you.
Remember: your talent is just a tool, and your ambition is raw energy. The question is what you do with it.
And that also applies to something else. So far I have mainly talked about the waste of talent and ambition. But there is another privilege that we waste on a massive scale: money. In this chapter I will take a step back in time and tell you about my introduction to a young cult that became aware of that. It's a movement that has taken the pursuit of impact to the extreme. A movement that is always looking for the best financial investments with the highest return for as many people and animals as possible.
Their story is about how much you can achieve through radical prioritization, but it also shows how your moral ambition can be completely derailed."
His conclusion to the chapter is much more positive about EA, but it's far from a ringing endorsement.
I think this is a very interesting case study from the SBF saga. Yes, public polling suggests it didn't damage the reputation of EA as much as some might have feared. However, it has resulted in a loss of support from potential allies, e.g., Bregman.
3
EffectiveAdvocate🔸
The sad fact is that this book might be the main way people in the Netherlands learn about the link between SBF and EA. But I guess there is little we can do about it now.
Yes, although I guess it's good that people know the link. We shouldn't hide our mistakes, and I know Bregman likes some of what we do, so there are worse people to have sharing this info with the Dutch population.
Yes, I totally agree it is important not to hide our mistakes. I just wish SBF was presented in the context I see it in: as an unbelievable fuck-up / disaster / crime in a community that is at least trying very hard to do good.
Saying it isn't an EA project seems too strong - another co-founder of SMA is Jan-Willem van Putten, who also co-founded Training for Good which does the EU tech policy and Tarbell journalism fellowships, and at one point piloted grantmaker training and 'coaching for EA leaders' programs. TfG was incubated by Charity Entrepreneurship.
You missed the most impressive part of Jan-Willem’s EA CV - he used to co-direct EA Netherlands, and I hear that's a real signal of talent ;)
But yes, I guess it depends on how you define ‘EA project’. They're intentionally trying to do something different, so that's why I don't describe them as one, but the line is very blurred when you take into account the personal and philosophical ties.
If EA was a broad and decentralised movement, similar to e.g., environmentalism, I'd classify SMA as an EA project. But right now EA isn't quite that. Personally, I hope we one day get there.
I think of EA as a broad movement, similar to environmentalism — much smaller, of course, which leads to some natural centralization in terms of e.g. the number of big conferences, but still relatively spread-out and heterogenous in terms of what people think about and work on.
Anything that spans GiveWell, MIRI, and Mercy for Animals already seems broad to me, and that's not accounting for hundreds of university/city meetups around the world (some of which have funding, some of which don't, and which I'm sure host people with a very wide range of views — if my time in such groups is any indication).
That's my way of saying that SMA seems at least EA-flavored, given the people behind it and many of the causes name-checked on the website. At a glance, it seems pretty low on the "measuring impact" scale, but you could say the same of many orgs that are EA-flavored. I'd be totally unsurprised to see people go through an SMA program and end up at EA Global, or to see an SMA alumnus create a charity that Open Phil eventually funds.
(There may be some other factor you're thinking of when you think of breadth — I could see arguments for both sides of the question!)
7
James Herbert
I'm thinking about power. I don't (yet) liken EA to environmentalism because power is far, far more centralised in EA. As you mentioned, this is probably because we're small and young. I expect this will change in the future.
6
Jamie_Harris
Side comment / nitpick: Animal Advocacy Careers has 13k LinkedIn followers (we prioritised it relatively highly when I was working there) https://www.linkedin.com/company/animal-advocacy-careers/
1
James Herbert
Oh nice! Congrats on that. Do you know if it was a good use of resources?
2
Jamie_Harris
Thanks! IIRC, we focused on it substantially because a lot of the sign ups for our programmes (e.g. online course) were coming from LinkedIn even when we hadn't put much effort into it. The number of sign ups and the proportion attributed to LinkedIn grew as we put more effort into it. This was mostly the work of our wonderful Marketing Manager, Ana. I don't have access to recent data or information about how it's gone to make much of a call on whether it was worth it, relative to other possible uses of our/Ana's time.
1
James Herbert
Very interesting! We have made exactly the same observation so we’ve started investing in it more, but we’re still learning how best to go about this.
5
anormative
I'm not suggesting this in any serious way, and I don't know anything about Bregman or this organization, but an interesting thought comes to mind—I've often heard people ask something along the lines of "should we rebrand EA?" and the answer is "maybe, but that's probably not feasible." If this organization is truly so good at growth, is based on the same core principles EA is based on (it might not be, beyond the shallow "OOO"), and it hasn't been aspersed or tarnished by SBF etc—prima facie it might not be so bad for the EA brand to recede and for currently-EA individuals and institutions to transition to SMA (SoMA?) ones.
Edit: it's SfMA, I realize now, but I care too much about my bad pun to change it...
I think it's far too early to make judgements about this group's success yet. Hype on social media is different from the deep engagement, vibrant community, and billions of dollars of donations which EA has.
This is an insubstantial comment but yes I'm also sad they aren't calling themselves SoMA.
2
Jamie_Harris
Not a criticism of your post or any specific commenter, but I think it's a shame (for epistemics related reasons) when discussions end up more about "how EA is X" as opposed to "how true is X? How useful is X, and for what?".
3
James Herbert
Yeah I see what you’re saying but I guess if you know the answer to the Q ‘is it EA?’ then you have a data point that informs the probability you give to a bunch of other things, e.g., do they prioritise impartiality, prioritisation, open truth seeking, etc., to an unusual degree? So it’s a heuristic. And given they’re a new org it’s much easier to answer the Q ‘is it EA’ than it is ‘is it valuable’.
But I agree, knowing whether it’s actually useful is always far more valuable. Apart from anything else, just because the founders prioritise things EAs often prioritise, it doesn’t mean they’re actually doing anything of value.
1
akash 🔸
What do you think is the reason behind such a major growth? What are they doing differently that GWWC or other EA orgs could adopt?
2
James Herbert
I’m not super closely involved, I just know a few of the key people. That being said: a big name is putting his heart and soul into it, they’ve pulled together a big budget, and they’re very open to doing marketing. They’re also a talented bunch, but I think that’s at least partly downstream from the thing being kicked off by a big name.
EDIT: Oh and they are doing something different from EA, so it might just be intrinsically more popular. But I don’t think that’s the main thing going on here.
Looks like Charity Navigator is taking a leaf from the EA book!
Here they're previewing a new 'cause-based giving' tool - they talk about rating charities based on effectiveness and refer to research by Founders Pledge.
Theory of change in ten steps - I found crafting a ToC for EAN was very helpful in forming our strategy and focusing our work
Rumelt's Good Strategy Bad Strategy - we complement our ToC and our quarterly OKRs with an annual strategy, and we use this to do that. Anytime we're approaching a new challenge and need a strategy for tackling it, I consult Rumelt.
Parfit was a philosopher who specialised in personal identity, rationality, and ethics. His work played a seminal role in the development of longtermism. He is widely considered one of the most important and influential moral philosophers of the late 20th and early 21st centuries.
What should the content split at EAGxUtrecht[1] be? Below is our first stab. One of our subgoals is to inspire people to start new projects, hence the heavy focus on entrepreneurship under 'Meta'.
Yeah, I don't like the terms 'neartermism' and 'longtermism' either, and it's messy, but this is our attempt at organising things. We used RP's 2022 survey's categorisation of the two to guide us, with some small modifications.
How many talks are you expecting to have? These seem very prescriptive, and things like multiple 1% categories will be difficult to achieve if you have <100 talks. I would worry that a strict focus on distribution like this would lead to having to sacrifice quality.
Given that EAGx Utrecht might be the most convenient EAGx for a good chunk of Western Europe, I'm not sure how important it is to have a goal for a % speakers with strong Dutch connections rather than Europe connections. But the density of talented Dutch folk in the community is very high, so you might hit 35% without any specific goal to do so.
Out of curiosity, why do you think this is the case? Isn't the Berlin and Nordics conference (and the London EAG) much more accessible for most EAs in Western Europe?
(Also, personally I assumed that the 35% was not a goal but a maximum, to make sure that not too many of the speakers are from the Netherlands.)
3
James Herbert
Three factors I’d say.
Firstly, population density. There are about 15 million people within 100km of Utrecht, compared to 6 million for Berlin and 4 million for Copenhagen.
Secondly, location. Berlin is actually quite far east; I'd say it's more Central Europe than Western Europe. And obviously Copenhagen is more Northern European. This means that, whereas Utrecht is an afternoon's train ride from some of the biggest Western European metropoles (London, Brussels, and Paris), the equivalent journeys to CPH/BER are 8+ hours.
Thirdly, air connectivity. Schiphol scores much higher on direct connectivity than both CPH and BER. To sense check this, I just Googled flight frequency for Rome. AMS has about 180 per month whilst BER and CPH have around 80 per month.
3
Lorenzo Buonanno🔸
You know much more than I do, but I would be surprised if these were the most relevant factors.
1. People within 100km of Utrecht are still mostly in the Netherlands, or at least are likely to have a strong Dutch connection.
2. I know a surprising number of people interested in these events really value limiting their flights for environmental reasons, so this might be true.
3. Berlin is extremely well-connected. I don't think anyone didn't go to EAGx Berlin for lack of flights. If anything, plane tickets from Rome, Milan and Paris are slightly cheaper for Berlin vs Schiphol.
In my limited experience, the two most important factors for the people who get the most value from these conferences are:
1. Timing: people are busy, so they might e.g. have to defend their PhD thesis on the same day of EAGx (real example)
2. Acceptance rates: for some people from Italy who just went through an intro program, either Berlin, Rotterdam/Utrecht, or Nordics could be the most convenient because they wouldn't get accepted into the others
In any case, I would expect people who find Utrecht more convenient than other EAGxs for whatever reason will also find opportunities presented by Dutch-connected speakers more valuable than the typical EAGx participant, so it might make sense to lean into that. I wouldn't be surprised if the ideal number were higher than 35%.
Given all the things going on with e.g. The School For Moral Ambition and Doneer Effectief, I would also consider whether having Netherlands-specific events would make sense. Possibly in the spirit of making EA more decentralized, like environmentalism.
But I guess all the above depends heavily on what % of participants live near the Netherlands, do you know the percentage of people from NL/BE for EAGxRotterdam 2022? (Although that was a while ago).
And I strongly agree with Nick that the quality is more important.
1
James Herbert
Just to be clear, I was attempting to answer @EffectiveAdvocate's question about why one might think Utrecht is probably the most accessible location for many EAs in Western Europe. I wasn't making this point to defend the 35% figure :)
I wanted to make the point about accessibility because I'm quite certain it isn't the case that Berlin and Copenhagen are much more accessible than Utrecht, and I worry some people will underrate Utrecht's accessibility and therefore choose not to come.
I agree timing is probably a more important determiner of attendance than accessibility, that the quality of the speaker should probably be the most important factor when choosing, and I think Catherine makes a very good point about extending our partiality beyond NL.
Re your Q about the national residencies of attendees in 2022, all I have to hand is the following:
* 41% living in the Netherlands
* 14% living in Germany
Thanks for your input so far!
Sounds good overall. 1% each for priorities, community building, and giving seems pretty low. 1.75% for mental health might also be on the low side, as there appears to be quite a bit of interest in global mental health in NL. I think the focus on entrepreneurship is great!
As a side note, I think content split is important, but the quality of presentation / group discussion and people that are leading those is more important. Obviously there needs to be a decent content split, but if you have the opportunity to get many really great people presenting great things in one area, I wouldn't necessarily cut some because it exceeds your "content percent budget" or whatever.
I haven't organised these kinds of events though, so this comment might not be relevant/helpful.
1
James Herbert
Thanks!
5
miller-max
Hi James, thanks for opening this up for feedback.
This is a tough one; overall it looks good!
My general point of feedback would be to be more cause-agnostic OR put higher emphasis on "priorities research". For example, I would suggest making 1/5th of the content about priorities research, promoting it as a category of its own, as seen below.
The reason for this is that cause areas & meta have their own communities/conferences already, whereas priorities research may not so much. And priorities research represents EA's mission of "where to allocate resources to do the most good" most holistically. Then again, I haven't done the thinking you have behind these weights!
It may be worth making a survey with 1-100 scales?
* Neartermist 30% (-5)
  * Global Health & Dev 35%
  * Animal welfare 60%
  * Mental health 5%
* Longtermist 40% (-5)
  * AI risk 50%
  * Biosec 30%
  * Nuclear 10%
  * General longtermist 5%
  * Climate change 5%
* Priorities research 20%
* Meta 10% (-10)
  * Priorities research 5%
  * Entrepreneurship skills 85%
  * Community building 5%
  * Effective giving 5%
3
EffectiveAdvocate🔸
I believe the division of areas for the event is quite decent. However, I think EAGx events also allow for the introduction of new ideas into the EA community. What cause areas do others believe we should prioritize but currently do not? Personally, I am considering areas like protecting liberal democracy, improving decision-making (individual and institutional), and addressing great power conflicts (broader than AI and nuclear issues). There are likely many other areas, and the causes I've listed here are already somewhat related to EA. Perhaps there are topics that are further outside the box.
I am also somewhat uncertain about the term "Entrepreneurship skills." Could someone clarify what is meant by this exactly?
EAGxUtrecht (July 5-7) is now inviting applicants from the UK (alongside other Western European regions that don't currently have an upcoming EAGx).[1] Apply here!
Ticket discounts are available and we have limited travel support.
Utrecht is very easy to get to. You can fly/Eurostar to Amsterdam and then every 15 mins there's a direct train to Utrecht, which only takes 35 mins (and costs €10.20).
I think events are underrated in EA community building.
I have heard many people argue against organising relatively simple events such as, 'get a venue, get a speaker, invite people'. I think the early success of the Tien Procent Club in the Netherlands should make people doubt that advice.
Why? Well, the first thing to mention is that they simply get great attendance, and their attendees are not typical EAs. I think their biggest so far has been 400, and the typical attendee is a professional in their 30s or 40s. It also does an amazing job of generating buzz. For example, suppose you've got a journalist writing an article about your community. In that case, it's pretty cool if you can invite them to an event with hundreds of regular people in attendance.
Now, of course, attendance doesn't translate to impact. However, I think we can see the early signs of people actually changing their behaviour.
For example, running a quick check on GWWC's referral dashboard, I can see four pledges that refer to the Tien Procent Club (2 trial, 2 full). Based on GWWC's March 2023 impact evaluation, they can therefore self-attribute ~$44k of 2022-equivalent donations to high-impact fundin... (read more)
I'm actually very surprised to hear this. What does the "common view" presume then?
Personally, I see 3 tiers of events:
1. Any casual, low-commitment, low-stakes events
2. Big EA conferences that I find quite valuable for meeting lots of people intentionally and socially
3. Professionally-focused events (research fellowships, incubators, etc.)
I think "simple" events like 1 are great for socialising and meeting new people. While 2 and 3 get more done, I don't think the community would feel as welcoming if the only events occurring were ones where you had to be fully professional.
Sometimes I still want to interact with EAs, but without the expectation of "meeting right" or "networking". I suspect this applies especially to introverts and beginners. Even just going to a conference with the expectation of booking lots of 1-on-1s vs just chilling feels very different.
3
James Herbert
Yeah, that's a good categorisation, although often 3 is less 'professionally focused events' and more 'events for highly committed EAs'.
I think the common EA CB view is captured in the below quote (my own italics), which is taken from CEA's Groups Resource Centre page 'How do EA groups produce impact?'.
I think this is broadly right. But I think EA CBs often overcorrect in this direction and, as a result, neglect events that aim for broad reach but shallow engagement.
3
Minh Nguyen
On CB, my views, which are half informed by EA CBs and half personal opinions:
1. Very casual events - If you are holding no events for a long time and don't have much capacity, just hold low-stakes casual events and follow up with highly-engaged people afterwards. Highly-engaged people tend to show up/follow up several times after learning about EA anyway. 80-90% of the time, I think having some casual events every few weeks is better than no casual events.
2. Bigger events - Try to direct highly-engaged people to bigger and/or more specialised events. The EA community is big and diverse, and letting people know other events exist lets them self-select better. When I first explored beyond EA Singapore, I spent 2 months straight learning about every EA org and resource in existence, individually reviewing all the Swapcard profiles at every EAG. That was absolutely worth the effort, IMO.[1]
3. 1-on-1s are probably still important - 1-on-1s with someone of very similar interest areas or career trajectories are the most valuable experiences in EA, in my opinion. Only 10% of 1-on-1s are like this, but they more than make up for the 90% that don't really go anywhere. As much as I try to optimise, this seems to be a numbers game of just finding and meeting a lot of potentially interesting people.[2]
4. Online resources - For highly-engaged EAs, important information should be online-first. I'm of the opinion that highly-engaged/agentic new EAs tend to read a lot online, and can gain >80% of the same field-specific knowledge reading on their own. This especially holds true in AI Safety, which is like ... code and research that's all publicly available short of frontier models. I think events should be for casual socials, intentional networking and accountability+complex coordination (basically, coworkers).
1. ^
If you want the 80/20 for AI Safety, check out aisafety.training, aisafety.world, check EA Forum, Lesswrong and Alignment Forum once a week (~1 hour/wee
3
OllieBase
I agree!
> I have heard many people argue against organising relatively simple events such as, 'get a venue, get a speaker, invite people'.
Where have you heard this? I've not seen this.
> get an endorsement from someone like Bregman
Noting that this isn't easy and could be a large driver of the value!
5
James Herbert
When I first started at EA Netherlands I was explicitly advised against it, and more generally it seems to be 'in the air'. For example:
* The groups resource hub says "This also suggests that you should focus time and effort on deeply engaging the most committed members rather than just shifting some choices of many people."
* Kuhan's widely shared post on 'lessons from running Stanford EA' has in its summary "Focus on retention and deep engagement over shallow engagement"
* CEA's Groups Team's post on 'advice we give to new university organisers' says "We think it's good to do broad recruiting at the beginning of the semester, as with any club or activity. But beyond this big push of raising awareness, we think it's most often better to pay more attention to people who seem very interested in - and willing to take significant action based on - EA ideas"
Writing this out has made me realise something. I think this advice makes more sense in a university context, where students are time-rich and are going through an intense social experience, but it makes less sense when you're targeting professionals. I suspect it's still 'in the air' because, historically, CEA has been very good at targeting students.
As a consequence, very few national orgs (including ourselves) organise TPC-esque events (broad reach, low engagement). For us, this is because our strategy is to focus on supporting local organisers in organising their own events (the theory is that then we can have lots of events without having to organise all of them ourselves). But I don't think that's the case for other national organisations (other national CBs, please jump in and correct me if I'm wrong, e.g., I know @lynn at EA UK has been organising career talks).
Ultimately, I guess what I'm saying is what I've said elsewhere: you need a blend of ‘mobilising’ (broad reach, low engagement) and ‘organising’ (narrow reach, high engagement), and I think EA groups often do too much organising.
2
OllieBase
Thanks, that makes sense.
I guess I don't interpret those bullets as "arguing against organising simple events" but rather "put your effort into supporting more engaged people" and that could even be consistent with running simple events, since it means less time on broad outreach compared to e.g. a high-effort welcoming event.
I agree with the first part of your last sentence (the blend), I don't know how EA groups spend their time.
1
James Herbert
Hmm, yeah, but by arguing for "put your effort into supporting more engaged people" you're effectively arguing against "relatively large events that require relatively shallow engagement". I think that's the mistake. I think it should be an even blend of the two.
I don't think CEA has a public theory of change; it just has a strategy. If I were to recreate its theory of change based on what I know of the org, it'd have three target groups:
Non-EAs
Organisers
Existing members of the community
Per target group, I'd say it has the following main activities:
Targeting non-EAs, it does comms and education (the VP programme).
Targeting organisers, you have the work of the groups team.
Targeting existing members, you have the events team, the forum team, and community health.
Per target group, these activities are aiming for the following short-term outcomes:
Targeting non-EAs, it doesn't aim to raise awareness of EA, but instead, it aims to ensure people have an accurate understanding of what EA is.
Targeting organisers, it aims to improve their ability to organise.
Targeting existing members, it aims to improve information flow (through EAG(x) events, the forum, newsletters, etc.) and maintain a healthy culture (through community health work).
If you're interested, you can see EA Netherland's theory of change here.
There have been writings from CEA on movement-building strategy. I think you might find them in the organiser handbook. These likely aren't up to date though, especially since there's a new CEO.
2
James Herbert
Yeah, I'm aware of those, but I don't think they've published a ToC for CEA as an organisation anywhere. I think it would be good for CEA to have a public ToC because, as noted here, this is a basic good practice in the non-profit sector.
I think EA tends to focus on the inside game, or narrow EA, and I believe this increases the likelihood of articles such as this. I worry articles such as this will make people in positions of influence less likely to want to be associated with EA, and that this in the long run will undermine efforts to bring about the policy changes we desire. Still, of course, this focus on the inside game is also pretty cost-effective (for the short term, at least). Is it worth the trade-off? What do people think?
My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the 'narrow EA' strategy is a mistake because there's a good chance it is wrong to try to guide society without broader societal participation.
In other words, if MacAskill argues here we should get our shit together first and then either a) collectively decide on a way forward or b) allow for everyone to make their own way forward, I think it's also important that 'the getting our shit together' has broad societal participat... (read more)
My guess is this is mostly just a product of success, and insofar as the political system increasingly takes AI X-risk seriously, we should expect to see stuff like this from time to time. If the tables were flipped and Sunak was instead pooh-poohing AI X-risk and saying things like "the safest path forward for AI is accelerating progress as fast as we can – slowing down would be Luddism" then I wouldn't be surprised to see articles saying "How Silicon Valley accelerationists are shaping Rishi Sunak’s AI plans". Doesn't mean we should ignore the negative pieces, and there very well may be things we can do to decrease it at the margin, but ultimately, I'd be surprised if there was a way around it. I also think it's notable how much press there is that agrees with AI X-risk concerns; it's not like there's a consensus in the media that it should be dismissed.
+1; except that I would say we should expect to see more, and more high-profile.
AI xrisk is now moving from "weird idea that some academics and oddballs buy into" to "topic which is influencing and motivating significant policy interventions", including on things that will meaningfully matter to people/groups/companies if put into action (e.g. licensing, potential restriction of open-sourcing, external oversight bodies, compute monitoring etc).
The former, for a lot of people (e.g. folks in AI/CS who didn't 'buy' xrisk) was a minor annoyance. The latter is something that will concern them - either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided.
I would think it's reasonable to anticipate more of this.
6
Daniel_Eth
or because they feel it as a threat to their identity or self-image (I expect these to be even larger pain points than the two you identified)
1
James Herbert
Hmm, I agree that with influence comes increased scrutiny, and the trade-off is worth it in many cases, but I think there are various angles this scrutiny might come from, and I think this is a particularly bad one.
Why? Maybe I'm being overly sensitive but, to me, the piece has an underlying narrative of a covert group exercising undue influence over the government. If we had more of an outside game, I would expect the scrutiny to instead focus on either the substance of the issue or on the outside game actors. Either would probably be an improvement.
Furthermore, there's still the very important issue of how appropriate it is for us to try to guide society without broader societal participation.
3
Daniel_Eth
My honest perspective is that if you're a lone individual affecting policy, detractors will call you a wannabe tyrant; if you're a small group, they'll call you a conspiracy; and if you're a large group, they'll call you an uninformed mob. Regardless, your political opponents will attempt to paint your efforts as illegitimate, and while certain lines of criticism may be more effective than others, I wouldn't expect scrutiny to simply focus on the substance either way.
I agree that we should have more of an outside game in addition to an inside game, but I'd also note that efforts at developing an outside game could similarly face harsh criticism (e.g., "appealing to the base instincts of random individuals, taking advantage of these individuals' confusion on the topic, to make up for their own lack of support from actual experts").
6
James Herbert
Maybe I'm in a bubble, but I don't recall seeing many reputable publications label large-scale progressive movements (e.g., BLM, Extinction Rebellion, or #MeToo) as "uninformed mobs". This article from the Daily Mail is about as close as it gets, but I think I'd rather have the Daily Mail writing about a wild What We Ourselves party than Politico insinuating a conspiracy.
Ultimately, I don't think any of us know the optimal split in a social change portfolio between the outside game and the inside game, so perhaps we should adapt as the criticism comes in. If we get a few articles insinuating conspiracy, maybe we should reallocate towards the outside game, and vice versa.
And again, I know I sound like a broken record, but there's also the issue of how appropriate it is for us to try to guide society without broader participation.
7
Daniel_Eth
So progressive causes will generally be portrayed positively by progressive-leaning media, but conservative-leaning media, meanwhile, has definitely portrayed all those movements as ~mobs (especially BLM and Extinction Rebellion), and predecessor movements, such as the Civil Rights movement, were likewise often portrayed as mobs by detractors. Now, maybe you don't personally find conservative media to be "reputable," but (at least in the US, perhaps less so in the UK) around half the power will generally be held by conservatives (and perhaps more than half going forward).
5
Shakeel Hashim
Yeah, the phrase "woke mob" (and similar) is extremely common in conservative media!
2
David Mathers🔸
I suspect the ideology of Politico and most EAs are not that different (i.e. technocratic liberal centrism).
1
James Herbert
For sure progressive publications will be more positive, and I don't think conservative media ≠ reputable.
When I say "reputable publications" I am referring to the organisations at the top of this list of the most trusted news outlets in the US. My impression is that very few of these regularly characterise the aforementioned movements as "uninformed mobs".
5
Daniel_Eth
So I notice Fox ranks pretty low on that list, but if you click through to the link, they rank very high among Republicans (second only to the Weather Channel). Fox definitely uses rhetoric like that. After Fox (among Republicans) are Newsmax and OAN, which similarly both use rhetoric like that. (And FWIW, I also wouldn't be super surprised to see somewhat similar rhetoric from WSJ or Forbes, though probably said less bluntly.)
I'd also note that left-leaning media uses somewhat similar rhetoric for conservative issues that are supported by large groups (e.g., Trumpism in general, climate denialism, etc), so it's not just a one-directional phenomenon.
7
James Herbert
Yes, I noticed that. Certain news organisations, which are trusted by an important subsection of the US population, often characterise progressive movements as uninformed mobs. That is clear. But if you define 'reputable' as 'those organisations most trusted by the general public', which seems like a reasonable definition, then, based on the YouGov analysis, Fox et al. is not reputable. But then maybe YouGov's method is flawed? That's plausible.
But we've fallen into a bit of a digression here. As I see it, there are four cruxes:
1. Does a focus on the inside game make us vulnerable to the criticism that we're a part of a conspiracy?
   * For me, yes.
2. Does this have the potential to undermine our efforts?
   * For me, yes.
3. If we reallocate (to some degree) towards the outside game in an effort to hedge against this risk, are we likely to be labelled an uninformed mob, and thus undermine our efforts?
   * For me, no, not anytime soon (although, as you state, organisations such as Fox will do this before organisations such as PBS, and Fox is trusted by an important subsection of the US population).
4. Is it unquestionably OK to try to guide society without broader societal participation?
   * For me, no.
I think our biggest disagreement is with 3. I think it's possible to undermine our efforts by acting in such a way that organisations such as Fox characterise us as an uninformed mob. However, I think we're a long, long way from that happening. You seem to think we're much closer, is that correct? Could you explain why?
I don't know where you stand on 4.
P.S. I'm enjoying this discussion, thanks for taking the time!
I agree and this is why I'm in favour of a Big Tent approach to EA. This risk comes from a lack of understanding about the diversity of thought within EA and that it isn't claiming to have all the answers. There is a danger that poor behaviour from one part of the movement can impact other parts.
Broadly EA is about taking a Scout Mindset approach to doing good with your donations, career and time. Individual EAs and organisations can have opinions on what cause areas need more resources at the margin but "EA" can't - it isn't a person, it's a network.
If you have a lot of influence, articles like this are inevitable.
EAs in AI should really try to make nice with the AI ethics crowd (i.e. help accomplish their goals). That's where the most criticism is coming from. From my perspective their concerns are useful angles of attack into the broader AI safety problem, and if EA policy does not meet the salient needs of present-day people it will be politically unpopular and lose influence (a challenge for the political longtermism agenda more broadly).
I agree about EAs needing to cast a wider net, in really every sense of the term. We also need to be flexible to changing circumstances, particularly in something like AI that is so rapidly moving and where the technology and social consequences are likely to be far different in crucial respects to earlier predictions of them (even if the predictions are mostly true -- this is a very hard dynamic to manage).
The article underscores the dangers to a movement so deeply connected to one foundation, and I expect we'll see Open Phil becoming more politically controversial (and very possibly perceived as more Soros-esque) fairly soon.
I agree that negative articles are inevitable if you get influence, but I think there are various angles these negative articles might come from, and this is a particularly bad one.
The Soros point is an excellent analogy, but I worry we could be headed for something worse than that. Soros gets criticism from people like Orban but praise from orgs like the FT and Politico. Meanwhile, with EA, people like Orban don't give a damn about EA but Politico is already publishing scathing pieces.
I don't think reputation management is as hard as is often supposed in EA. I think it's just that it hasn't been prioritised much until recently (e.g., CEA didn't have a head of comms until September 2022). I can imagine many national organisations such as mine would love to have a Campaign Officer or something to help us manage it, but we don't have the funding.
Do you have any encouraging examples of progress on 2? Some of the prominent people are incredibly hostile (i.e. they genuinely believe we are all literal fascists and also Machiavellian naive utilitarians who lie automatically whenever it's in our short-term interests) so I'm a bit pessimistic, though I agree it is a good idea to try. What's a good goal to help them accomplish in your view?
Some are hostile but not all, and there are disagreements and divisions in AI ethics just as deep as, if not deeper than, those in EA or any other broad community with multiple important aims that you can think of.
epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance
I really wish I had your positive view on this Sean, but I really don't think there's much chance of inroads unless capabilities advance to an extent that makes xRisk seem even more salient.
Gebru is, imo, never going to view EA positively. And she'll use her influence as strongly as possible in the 'AI Ethics' community.
Seth Lazar also seems intractably anti-EA. It's annoying how much of this dialogue happens on Twitter/X, especially since it's very difficult for me as a non-Twitter user to find them, but I remember he posted one terrible anti-longtermist thread and later deleted it.
Shannon Vallor also once posted a similarly anti-longtermist thread, and then responded to Jess Whittlestone once, lamenting the gap between the Safety and Ethics fields. I just really haven't seen where the Safety->Ethics hostility has been; I've really only ever seen the reverse, but of course I'm 100% sure my sample is biased here.
The Belfield<>McInerney collaboration is extremely promising for sure, and I look forward to the outputs. I hope my impression is wrong and more work alon... (read more)
just really haven't seen where the Safety->Ethics hostility has been
From the perspective of the AI Ethics researchers, AI Safety researchers and engineers contributed to the development of "everything for everyone" models – and also distracted away from the increasing harms that result from the development and use of those models.
Both of which, frankly, are true, given how much people in AI Safety collaborated and mingled with people in large AI labs.
I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around.
That feels unfair if we focus on the explicit exchange in the moment. But there is more to it.
AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background goings-on they are concerned about that are hard to convey and not immediately obvious from their writing.
It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI E... (read more)
I think this is imprecise. In my mind there are two categories:
People who think EA is a distraction from near-term issues and is competing for funding and attention (e.g. Seth Lazar, as seen in his complaints about the UK taskforce and his trying to tag Dustin Moskovitz and Ian Hogarth in his thinkpieces). These more classical ethicists are, from what I can see, analytical philosophers looking for funding and in clout competition with EA. They've lost a lot of social capital because they keep repeating a lot of old canards about AI. My model of them is something akin to: they can't do fizzbuzz or say what a transformer is, so they'll just make claims about how AI can't do things and how there's a lot of hype and power centralisation. These are more likely to be white men from the UK, Canada, Australia, and NZ. Status games are especially important to them, and they seem to just not have a great understanding of the field of alignment at all. A good example I show people is this tweet, which tries to say RLHF solves alignment and that "Paul [Christiano] is an actual researcher I respect, the AI alignment people that bother me are more the longtermists."
I don't know why people overindex on loud grumpy twitter people. I haven't seen evidence that most FAccT attendees are hostile and unsophisticated.
1
Remmelt
FAccT attendees are mostly a distinct group from the AI ethics researchers who come from, or are actively assisting, marginalised communities (and not working with e.g. fairness and bias abstractions).
5
JWS 🔸
Hmm, I'm not quite sure I agree that there's such a clear division into two camps. For example, I think Seth is actually not that far off from Timnit's perspective on AI Safety/EA. Perhaps a bit less extreme and hostile, but I see that as more a difference in degree than a difference in kind.
I also disagree that people in your second camp are going to be useful for fruitful collaboration, as they don't just have technical objections but, I think, core philosophical objections to EA (or what they view as EA).
I guess overall I'm not sure. It'd be interesting to see some mapping of AI-researchers in some kind of belief-space plot so different groups could be distinguished. I think it's very easy to extrapolate from a few small examples and miss what's actually going on - which I admit I might very well be doing with my pessimism here, but I sadly think it's telling that I see so few counterexamples of collaboration while I can easily find examples of AI researchers who are dismissive of or hostile to the AI Safety/xRisk perspective.
4
David Mathers🔸
I don't think you have to agree on deep philosophical stuff to collaborate on specific projects. I do think it'll be hard to collaborate if one/both sides are frequently publicly claiming the other is malign and sinister or idiotic and incompetent or incredibly ideologically rigid and driven by emotion not reason (etc.)
I totally buy "there are lots of good sensible AI ethics people with good ideas, we should co-operate with them". I don't actually think that all of the criticisms of EA from the harshest critics are entirely wrong either. It's only the idea that "be co-operative" will have much effect on whether articles like this get written and hostile quotes from some prominent AI ethics people turn up in them, that I'm a bit skeptical of. My claim is not "AI ethics bad", but "you are unlikely to be able to persuade the most AI hostile figures within AI ethics".
Sure, I agree with that. I also have parallel conversations with AI ethics colleagues - you're never going to be able to make much headway with a few of the most hardcore safety people that your justice/bias etc work is anything but a trivial waste of time; anyone sane is working on averting the coming doom.
Don't need to convince everyone; and there will always be some background of articles like this. But it'll be a lot better if there's a core of cooperative work too, on the things that benefit from cooperation.
My favourite recent example of (2) is this paper:
https://arxiv.org/pdf/2302.10329.pdf
Other examples might include my coauthored papers with Stephen Cave (ethics/justice), e.g.
https://dl.acm.org/doi/10.1145/3278721.3278780
Another would be Haydn Belfield's new collaboration with Kerry McInerney
http://lcfi.ac.uk/projects/ai-futures-and-responsibility/global-politics-ai/
Jess Whittlestone's online engagements with Seth Lazar have been pretty productive, I thought.
6
Chris Leong
I know you're probably extremely busy, but if you'd like to see more collaboration between the x-risks community and ai ethics, it might be worth writing up a list of ways in which you think we could collaborate as a top-level post.
I'm significantly more enthusiastic about the potential for collaboration after seeing the impact of the FLI letter.
1
Remmelt
I expect many communities would agree on working to restrict Big Tech's use of AI to consolidate power. List of quotes from different communities here.
4
joshcmorrison
EA isn't unitary, so people should individually just try cooperating with them on stuff and being like "actually you're right and AIs not being racist is important", or should try to make inroads on the actors' strike/writers' strike AI issues. Generally, saying "hey, I think you are right" is usually fairly ingratiating.
For what it's worth, a friend of mine had an idea to do Harberger taxes on AI frontier models, which I thought was cool and was a place where you might be able to find common ground with more leftist perspectives on AI
5
David Mathers🔸
People should say that things are right when they agree with them, even when there's no strategic purpose in doing so.
I doubt being sympathetic to left economic stuff on AI will do much to help persuade people whose complaint is that EAs are racist/sexist/authoritarian/naive utilitarian. Though it would certainly help with people who are just (totally reasonably! I am worried about this!) concerned about EA's ties to the industry.
The UK seems to take the existential risk from AI much more seriously than I would have expected a year ago. To me, this seems very important for the survival of our species, and seems well worth a few negative articles.
I'll note that I stopped reading the linked article after "Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs." This is inaccurate imo. In general, having low-quality negative articles written about EA will be hard to avoid, no matter if you do "narrow EA" or "global EA".
Politico is perhaps the most influential news source for EU decision-makers (h/t @vojtech_b). I'd be wary of dismissing the importance of 'a few negative articles' if they're articles like this.
I agree that's a good argument why that article is a bigger deal than it seems, but I'd still be quite surprised if it were at all comparable to the EV of having the UK so switched on when it comes to alignment.
If this article is followed by others like it, it could cause the UK to back away from x-risk concerns
-1
James Herbert
My concern is that this particular media narrative will eventually undermine the policy progress we've made.
4
Sean_o_h
>"Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs." This is inaccurate imo.
Could we get a survey on a few versions of this question? I think it's actually super-rare in EA.
e.g.
"i believe super-intelligent AI should be pursued at all costs"
"I believe the benefits outweigh the risks of pursuing superintelligent AI"
"I believe if risk of doom can be agreed to be <0.2, then the benefits of AI outweight the risks"
"I believe even if misalignment risk can be reduced to near 0, pursuing superintelligence is undesirable"
8
David_Moss
We could potentially survey the EA community on this later this year. Please feel free to reach out if you have specific requests/suggestions for the formulation of the question.
2
James Herbert
Yeah it's incredibly inaccurate, I don't think it even needs to be surveyed.
4
Sean_o_h
I've heard versions of the claim multiple times, including from people I'd expect to know better, so having the survey data to back it up might be helpful even if we're confident we know the answer.
Where I think most EAs would strongly disagree is that they would find pursuing SAI "at all costs" to be abhorrent and counter to their fundamental goals. But I also suspect that showing survey data about EA's professed beliefs wouldn't be entirely convincing to some people given the close connections between EAs and rationalists in AI.
I feel a bit uneasy about EAs putting a lot of effort into a survey (both the survey designers and takers) just because someone made something up at some point. Maybe ask the people who you'd expect to know better why they believe what they believe?
8
Chris Leong
I think that EA has made the correct choice in deciding to focus on inside game. As indicated by the article, it seems like we've been incredibly successful at it. I agree that in an ideal world, we would save humanity by playing the outside game, but I feel that the current inside game is increasing our odds by enough that I feel very comfortable with our decision to promote it.
I agree that it's worth thinking about the potential for this success to result in a backlash, though surveys seem to indicate more concern among the public about AI risks than I had expected, so I'm not especially worried about there being a significant public backlash.
Nonetheless, it doesn't make sense to take unnecessary risks, so there are a few things we should do:
• I'd love to see EA develop more high-quality media properties like the 80k podcast, Rob Miles, or Rational Animations, but very few people have the skills.
• Books combined with media releases and appearances on podcasts are one way in which we can attempt to increase our support among the public.
• I think it makes sense to try our best to avoid polarisation. If it seems that one side of the political spectrum is becoming hostile, then it would make sense to initiate some concerted outreach to it.
1
James Herbert
Thanks for your comment Chris! Although it appears contradictory? In the first half, you say we've made the right choice by focusing on the inside game, but in the second half, you suggest we expend more resources on outside game interventions.
Is your overall take that we should mostly do inside game stuff, but that perhaps we're due a slight reallocation in the direction of the outside game?
2
Chris Leong
Exactly. I think EA should mostly focus on inside game, but that, as a lesser priority, we should take steps to mitigate the risks associated with this.
1
James Herbert
I think there's a good chance we broadly agree. If you had to put a number on it, what would you say is our current percentage split between inside game and outside game? And what would your new ideal split be?
1
JanPro
epistemic status: gossip
I've heard it's quite harmful to label oneself as EA in the EU policy space after the Politico article.
6
Nathan Young
I think maybe let's revisit in a month. It's easy for these things to loom larger than they are.
-3
James Herbert
I think JanPro is talking about the EA and Brussels article I referenced in the OP ('Stop the killer robots! Musk-backed lobbyists fight to save Europe from bad AI'). This was published in November last year.
Many of the EAs I know who work in policy feel like they ought to keep their involvement in EA a secret. I once attended an event in Brussels where the host asked me to hide the fact I work for EA Netherlands. This was because they were worried their opponents would use their links with EA to discredit them. This seems like a very bad state of affairs.
5
JWS 🔸
If what you and Jan say is true (not that I doubt you; it doesn't mesh with my experience of being an open EA, but then I don't live in the policy world), then this does need to be higher up the EA priority list.
I'd strongly, strongly advise against 'hiding' beliefs here. If there is already a hostile set of opponents actively looking to discredit EA and EA-links then we need to be a lot more pro-active in countering incorrect framings of EA and being more assertive to opponents who think EA is worth discrediting.
2
SiebeRozendal
I think one low hanging fruit is publicly dissociating from Elon Musk. He often gets brought up even though he's not part of the community. There's also very legitimate EA-/longtermism-based criticism of him available
4
pseudonym
Are you in a position to share more information that might help readers know how much they should update on this comment?
2
JanPro
No, not really. I am myself confused and wanted to provoke those who know more to reply and clarify. (Which James Herbert has already slightly done, and I hope more direct info will surface.)
2
SiebeRozendal
I've heard the same thing from US sources about the US policy space, to the extent that important information doesn't get shared on the EA Forum because doing so would associate it with EA.
EA should take seriously its shift from a lifestyle movement to a social movement.
The debate surrounding EA and its classification has always been a lively one. Is it a movement? A philosophy? A question? An ideology? Or something else? I think part of the confusion comes from its shift from a lifestyle movement to a social movement.
In its early days, EA seemed to bear many characteristics of a lifestyle movement. Initial advocates often concentrated on individual actions—such as personal charitable donations optimised for maximum impact or career decisions that could yield the greatest benefit. The movement championed the notion that our day-to-day decisions, from where we donate to how we earn our keep, could be channelled in ways that maximised positive outcomes globally. In this regard, it centred around personal transformation and the choices one made in their daily life.
However, as EA has evolved and matured, there's been a discernible shift. Today, whilst personal decisions and commitments remain at its heart, there's an increasing emphasis on broader, systemic changes. The community now acknowledges that while individual actions are crucial, tackling the underlyi... (read more)
Could you describe what this would look like? What behaviors/actions from people in EA would convince you that they are taking this seriously?
1
James Herbert
Sure! Ultimately, I think we should be aiming for a movement that looks something like this.
In terms of behaviours that would signal people taking this seriously, an example might be a rebalancing of how community building work is evaluated. Currently, the main outcome funders look for is longtermist career changes. This encourages very lifestyle movement-y community building. I would like to see more weight being given to things like the generation of passive support, e.g., is the public shifting support towards the movement? Is the movement’s narrative being elevated in public discourse?
To use terminology I've used elsewhere, this change would encourage more 'mobilising' and less 'organising'. It would also encourage a rebalancing of our 'social change portfolio' in such a way that we become a slightly more outward-facing movement, one that spends more time talking to and working with the rest of society to achieve shared objectives and less time talking to ourselves.
I wanted to figure out where EA community building has been successful. Therefore, I asked Claude to use EAG London 2024 data to assess the relative strength of EA communities across different countries. This quick take is the result.
The report presents an analysis of factors influencing the strength of effective altruism communities across different countries. Using attendance data from EA Global London 2024 as a proxy for community engagement, we employed multiple regression analysis to identify key predictors of EA participation. The model incorpo... (read more)
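To make the method concrete, here is a minimal sketch of the kind of per-capita regression described above. This is not the actual model or data Claude produced; the country figures and variable names below are made-up placeholders, included purely to illustrate the approach.

```python
# Hedged sketch: regressing per-capita EAG attendance on country-level covariates.
# All numbers below are illustrative placeholders, not the real EAG London 2024 data.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "country": ["Netherlands", "Switzerland", "Ireland", "Germany",
                "France", "Norway", "Czechia", "Spain"],
    "attendees": [60, 30, 20, 70, 40, 15, 18, 25],            # EAG attendees (made up)
    "population_m": [17.9, 8.8, 5.1, 84.4, 68.2, 5.5, 10.9, 48.3],
    "gdp_per_capita_k": [57, 92, 104, 51, 44, 87, 28, 32],     # USD, thousands
    "english_official": [0, 0, 1, 0, 0, 0, 0, 0],              # rough language proxy
    "distance_london_km": [360, 750, 460, 930, 340, 1150, 1030, 1260],
})

# Outcome: attendees per million inhabitants.
df["attendees_per_m"] = df["attendees"] / df["population_m"]

# Fit an ordinary least squares model with an intercept.
X = sm.add_constant(df[["gdp_per_capita_k", "english_official", "distance_london_km"]])
y = df["attendees_per_m"]

model = sm.OLS(y, X).fit()
print(model.summary())
```

Whether the underlying analysis used this specification or a different one isn't visible from the quick take, which is part of why per-country rankings from this kind of exercise are hard to audit.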
I'm surprised that the "top 10" doesn't include Denmark, Austria, Belgium, and Germany, since they all have more population-adjusted participants than Ireland, are not English-speaking, are more distant from London, and have lower GDP per capita[1]
Are we using different data?
In general, I'm a bit sceptical of these analyses, compared to looking at the countries/cities with the most participants in absolute terms. I also expect Claude to make lots of random mistakes.
1. ^
But of course, Ireland's GDP is very artificial
2
James Herbert
But absolute terms isn’t very useful if we’re trying to spot success stories, right? Or am I misunderstanding something?
But yeah, something seems off about Ireland. The rest of the list feels quite good though. David Moss said they have some per capita estimates in the pipeline, so I’m excited to see what they produce!
1
vojtech_b
How does it decide country? Just according to the name? I tried to find people from Czechia among EAG participants by asking GPT to find Czech names and there were a lot of false positives...
1
Alix Pham
I'm curious if you fed Claude the variables or if it fetched them itself? In the latter case, there's a risk of having the wrong values, isn't there?
Otherwise, really interesting project. Curious about the insights to take from this, especially the fact that Switzerland comes up first. Also surprising that Germany's not on the list, maybe?
Thanks!
Rutger Bregman has just written a very nice story on how Rob Mather came to found AMF! Apart from a GWWC interview, I think this is the first time anyone has told this tale in detail. There are a few good lessons in there if you're looking to start a high-impact org.
It's in Dutch, but Google Translate works very well!
What do you believe is the ideal size for the Dutch EA community?
We recently posed this question in our national WhatsApp community. I was surprised by the result, and others I've spoken to were also surprised. I thought I'd post it here to get other takes.
We defined 'being a member' as "someone who is motivated in part by an impartial care for others, is thinking very carefully about how they can best help others, and who is taking significant actions to help (most likely through their careers). In practice, this might look like selecting a job or degree ... (read more)
Why would you not want >1% of the population to fit this description? I think even prominent EA haters would be in favor, if you left the name "EA" out.
1
James Herbert
People often argue for 'Narrow EA'. Here is an example of where I suggested this strategy might not be wise and people disagreed.
Although of course, there's an 'at the current margin' thing going on here. I.e., maybe the ideal size is huge, but since we've got limited time and resources we should not aim for that and instead focus on keeping it small and high quality.
Perhaps a more informative question would be something like, "For the next 5 years, should the Dutch EA community aim for broad growth or narrow specialisation?" (in other words, something similar to this Q from the MCF survey).
7
titotal
Yeah, I think you ended up asking "would it be good for a lot of people to share our values", instead of "should we try to actively recruit tons of people to our specific community"
1
James Herbert
Gave it a second go.
I asked, "As we plan our future initiatives, it's useful to understand where our community believes we should focus our efforts. Please share your opinion on which of the following we should prioritise.
* Growing the Community: Focus on increasing our membership and raising broader awareness of EA.
* Developing Community Depth: Concentrate on deepening understanding and engagement.
* Taking a Balanced Approach: Allocate our efforts equally between growing and deepening.
* Other (Please specify): If you have a different perspective, we’d love to hear it.
* I don't know"
27 people voted, 16 voted for 'taking a balanced approach', 6 for 'growing the community', 1 for 'developing community depth', and 4 for 'I don't know'.
2
DavidNash
'Narrow EA' and having >1% of the population fitting the above description aren't opposite strategies.
Maybe it's similar to someone interested in animal welfare thinking alt protein coordination should focus on scientists, entrepreneurs, funders and policy makers but also thinking it would be good for there to be lots of people interested in veganism.
3
James Herbert
Aren't they? Like, if I'm aiming for >1% of the population I ought to spend a lot of my resources on marketing and building a network of organisers. If I'm aiming for something smaller I ought to spend my time investing in the community I've already got and maybe some field building.
To make it more concrete, in Q1 of 2024 I could spend 15% of my time investing in our marketing so that we double the number of intro programme sign-ups; alternatively, I could put that time into developing a Dutch Existential Risk Initiative. One is big EA, one is narrow EA.
2
DavidNash
I think it depends on how you define 'narrow EA', if you focus on getting 1% of the population to give effectively, that's different to helping 100 people make impactful career switches but both could be defined as narrow in different ways.
One being narrow as it focuses on a small number of people, one being narrow as it spreads a subset of EA ideas.
Taking the Dutch Existential Risk Initiative example, it will be narrow in terms of cause focus but the strategy could still vary between focusing on top academics or a mass media campaign.
3
James Herbert
I'm pretty sure Narrow EA is usually used to refer to the strategy of influencing a small number of particularly influential people. That's part of what I'm pushing back against (although we've deviated from the original discussion point, which was on organising vs mobilising). [got confused about which quicktake we were discussing]
I think all of the ERIs are narrow (they target talented researchers). A more broad project would be the Existential Risk Observatory, which aims to inform the public through mass media outreach. They've done a lot of good work in the Netherlands and abroad, but I don't think they've been able to get funding from the biggest EA funds. I don't know why but I suspect it's because their main focus is the general public, and not the decision-makers.
1
harfe
Why would you like there to be fewer people "motivated in part by an impartial care for others, [are] thinking very carefully about how they can best help others [...]"?
edit: please ignore, just saw that titotal asked the same question 10 minutes earlier.
If you’re seeing things on the forum right now that boggle your mind, you’re not alone.
Forum users are only a subset of the EA community. As a professional community builder, I’m fortunate enough to know many people in the EA community IRL, and I suspect most of them would think it’d be ridiculous to give a platform to someone like Hanania.
If you’re like most EAs I know, please don’t be dissuaded from contributing to the forum.
I’m very glad CEA handles its events differently.
To be clear, my best guess is based on my experiences talking to hundreds of community builders and student group organizers over the years, is that the general sentiment amongst organizers is substantially more towards the "I don't think we should micromanage the attendance decisions of external events" position than the forum discussion.
This kind of stuff is hard to get an objective sense off, so I am not confident here, but I think the biases in what positions people feel comfortable expressing publicly clearly go more in the direction of
outrage"complaining about who attended" [1] here.My best guess there is also a large U.S./EU difference here. My sense is the European (non-German, for some reason) EA community is substantially more leaning towards controlling access and reputation tightly here. You can also see this in the voting patterns on many of the relevant posts which wax and wane with the U.S./EU time difference.
(edit: "outrage" seems like a bad choice of words due to connotation, so I am replacing it with something more neutral)
I do think you need to differentiate the Bay Area from the rest of the US, or at least from the US East Coast.
I don't think the crux here is whether one ought to micromanage the attendance decisions of external events. It's more about:
Criticizing someone's decisions is not denying them autonomy or ability to choose.
To use a legal metaphor, one way of thinking about this is personal jurisdiction -- what has Manifest done that gives the EA community a right to criticize? After all, it would be uncool to start criticizing random people on the Forum with no link to EA, and it would generally be uncool to start criticizing random EAs for their private non-EA/EA-adjacent actions.
I have two answers to that:
- The first is purposeful availment. If an actor purposefully takes advantage of community resources in connection with an action, they cannot reasonably complain about their choices being the subject of community scrutiny. The Manifest organize
... (read more)
I think this is an important sentiment for many people to hear who might be feeling the same way but haven't seen this explicitly said anywhere. Thanks for making it. Don't be discouraged if the karma doesn't get too high because of downvotes as well, which I think is likely.
I want to believe this, but it's difficult for me to assess the evidence for or against it very well. Any suggestions?
As with most of us, "the people I know" is not a randomly-selected or representative group. Moreover, presumably many people who hold positions subject to general social stigma will not advocate for their position in front of people they know to be non-receptive. So the personal experience of people whose opposed stance is known will likely underestimate support for Hanania.
Suggestions for assessing the claim, "forum users are only a subset of the EA community"? Or the claim, "most of them [EAs I know] would think it'd be ridiculous to give a platform to someone like Hanania"?
I don't think there's great evidence for either claim, unfortunately. For the former, I guess we can look at this and observe that forum use is quite unequal between users, which suggests something.
For the latter, I could survey EAs I know with the question, "Do you think it'd be a good idea to invite Hanania to speak at an event?". However, even typing that out feels absurd, which perhaps indicates how confident I am that most EAs I know would think it's a ridiculous idea.
Regarding stigma, my impression is that quite a few people would like to say on the forum, "Giving a platform to Hanania is a ridiculous idea", but don't because they worry the forum will not be receptive to this view. I think this is because people perceive there to be a stigma on the forum against anyone who expresses discomfort at seeing people dispassionately discuss whether it's okay to give a platform to someone like Hanania.
Maybe this stigma is a good thing. I'm not sure. I like what Isa said: "I w... (read more)
If anyone wants to see what making EA enormous might look like, check out Rutger Bregmans' School for Moral Ambition (SMA).
It isn't an EA project (and his accompanying book has a chapter on EA that is quite critical), but the inspiration is clear and I'm sure there will be things we can learn from it.
For their pilot, they're launching in the Netherlands, but it's already pretty huge, and they have plans to launch in the UK and the US next year.
To give you an idea of size, despite the official launch being only yesterday, their growth on LinkedIn is significant. For the 90 days preceding the launch date, they added 13,800 followers (their total is now 16,300). The two EA orgs with the biggest LinkedIn presence I know of are 80k and GWWC. In the same period, 80k gained 1,200 followers (their total is now 18,400), and GWWC gained 700 (their total is now 8,100).[1]
And it's not like SMA has been spamming the post button. They only posted 4 times. The growth in followers comes from media coverage and the founding team posting about it on their personal LinkedIn pages (Bregman has over 200k followers).
EA Netherlands gained 137, giving us a total of 2900 - wooo!
When I translated it to English, their 3 "Os" (in Dutch, not English) were...
"Bulky, underexposed and solvable"
Sounds a lot like important, neglected and tractable to me?
And then they interviewed Rob Mather from the Against Malaria Foundation...
I completely agree with James that these guys are showing EA a different way of movement building which might end up being effective (we'll see). It seems like they are building on the moral philosophy foundations of EA, then packaging it in a way that will be attractive to the wider population - and they've done it well. I love this page with their "7 principles" and found it inspiring - I would sign up to those principles, and I appreciated that the scout mindset is in there as well.
https://www.moreleambitie.nl/grondbeginselen
I do wonder what his major criticisms of EA are though, given that this looks pretty much like EA packaged for the masses, unless I'm missing something.
Yes, although I guess it's good that people know the link. We shouldn't hide our mistakes, and I know Bregman likes some of what we do, so there are worse people to have sharing this info with the Dutch population.
Saying it isn't an EA project seems too strong - another co-founder of SMA is Jan-Willem van Putten, who also co-founded Training for Good which does the EU tech policy and Tarbell journalism fellowships, and at one point piloted grantmaker training and 'coaching for EA leaders' programs. TfG was incubated by Charity Entrepreneurship.
You missed the most impressive part of Jan-Willem’s EA CV - he used to co-direct EA Netherlands, and I hear that's a real signal of talent ;)
But yes, I guess it depends on how you define ‘EA project’. They're intentionally trying to do something different, so that's why I don't describe them as one, but the line is very blurred when you take into account the personal and philosophical ties.
If EA was a broad and decentralised movement, similar to e.g., environmentalism, I'd classify SMA as an EA project. But right now EA isn't quite that. Personally, I hope we one day get there.
I think it's far too early to make judgements about this group's success. Hype on social media is different from the deep engagement, vibrant community, and billions of dollars of donations which EA has.
Looks like Charity Navigator is taking a leaf from the EA book!
Here they're previewing a new ‘cause-based giving’ tool - they talk about rating charities based on effectiveness and refer to research by Founder's Pledge.
My recommended readings/resources for community builders/organisers
I've put the ones I think others are less likely to know towards the top.
- The 2-Hour Cocktail Party - the best handbook I've seen for organising a meetup (Spencer Greenberg interviewed him on his podcast recently)
- LifeLabs's coaching questions - great for 1-1s with organisers you're supporting/career coaching
- This handbook on community organising - written by the guy responsible for Obama's 2008 organising efforts
- Centola's work on social change, e.g., the book Change: How to Make Big Things Happen
- Han's work on organising, e.g., How Organisations Develop Activists (I wrote up some notes here)
- High Output Management by Andrew Grove - a great book on management written by the guy who made Intel a major player
- Theory of change in ten steps - I found crafting a ToC for EAN was very helpful in forming our strategy and focusing our work
- Rumelt's Good Strategy Bad Strategy - we complement our ToC and our quarterly OKRs with an annual strategy, and we use this to do that. Anytime we're approaching a new challenge and need a strategy for tackling it, I consult Rumelt.
- IDinsight's Impact Measurement Guide
- This 80k article on community
... (read more)
The latest episode of the Philosophy Bites podcast is about Derek Parfit.[1] It's an interview with his biographer (and fellow philosopher) David Edmonds. It's quite accessible and only 20 mins long. Very nice listening if you fancy a walk and want a primer on Parfit's work.
Parfit was a philosopher who specialised in personal identity, rationality, and ethics. His work played a seminal role in the development of longtermism. He is widely considered one of the most important and influential moral philosophers of the late 20th and early 21st centuries.
What should the content split at EAGxUtrecht[1] be? Below is our first stab. One of our subgoals is to inspire people to start new projects, hence the heavy focus on entrepreneurship under 'Meta'.
July 5-7 - be there or be square. Or be there and do square things like check out the world's largest bicycle garage. You do you.
Yeah, I don't like the terms 'neartermism' and 'longtermism' either, and it's messy, but this is our attempt at organising things. We used RP's 2022 survey's categorisation of the two to guide us, with some small modifications.
How many talks are you expecting to have? These seem very prescriptive, and things like multiple 1% categories will be difficult to achieve if you have <100 talks. I would worry that a strict focus on distribution like this would lead to having to sacrifice quality.
Given that EAGx Utrecht might be the most convenient EAGx for a good chunk of Western Europe, I'm not sure how important it is to have a goal for a % speakers with strong Dutch connections rather than Europe connections. But the density of talented Dutch folk in the community is very high, so you might hit 35% without any specific goal to do so.
Sounds good overall. 1% each for priorities, cb and giving seems pretty low. 1.75% for mental health might also be on the low side, as there appears to be quite a bit of interest for global mental health in NL. I think the focus on entrepreneurship is great!
EAGxUtrecht (July 5-7) is now inviting applicants from the UK (alongside other Western European regions that don't currently have an upcoming EAGx).[1] Apply here!
Ticket discounts are available and we have limited travel support.
Utrecht is very easy to get to. You can fly/Eurostar to Amsterdam and then every 15 mins there's a direct train to Utrecht, which only takes 35 mins (and costs €10.20).
Applicants from elsewhere are encouraged to apply but the bar for getting in is much higher.
I think events are underrated in EA community building.
I have heard many people argue against organising relatively simple events such as, 'get a venue, get a speaker, invite people'. I think the early success of the Tien Procent Club in the Netherlands should make people doubt that advice.
Why? Well, the first thing to mention is that they simply get great attendance, and their attendees are not typical EAs. I think their biggest so far has been 400, and the typical attendee is a professional in their 30s or 40s. It also does an amazing job of generating buzz. For example, suppose you've got a journalist writing an article about your community. In that case, it's pretty cool if you can invite them to an event with hundreds of regular people in attendance.
Now, of course, attendance doesn't translate to impact. However, I think we can see the early signs of people actually changing their behaviour.
For example, running a quick check on GWWC's referral dashboard, I can see four pledges that refer to the Tien Procent Club (2 trial, 2 full). Based on GWWC's March 2023 impact evaluation, they can therefore self-attribute ~$44k of 2022-equivalent donations to high-impact fundin... (read more)
I don't think CEA has a public theory of change, it just has a strategy. If I were to recreate its theory of change based on what I know of the org, it'd have three target groups:
Per target group, I'd say it has the following main activities:
Per target group, these activities are aiming for the following short-term outcomes:
If you're interested, you can see EA Netherland's theory of change here.
Politico just published a fairly negative article about EA and UK politics. Previously they’ve published similar articles about EA and Brussels.
I think EA tends to focus on the inside game, or narrow EA, and I believe this increases the likelihood of articles such as this. I worry articles such as this will make people in positions of influence less likely to want to be associated with EA, and that this in the long run will undermine efforts to bring about the policy changes we desire. Still, of course, this focus on the inside game is also pretty cost-effective (for the short term, at least). Is it worth the trade-off? What do people think?
My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the 'narrow EA' strategy is a mistake because there's a good chance it is wrong to try to guide society without broader societal participation.
In other words, if MacAskill argues here we should get our shit together first and then either a) collectively decide on a way forward or b) allow for everyone to make their own way forward, I think it's also important that 'the getting our shit together' has broad societal participat... (read more)
My guess is this is mostly just a product of success, and insofar as the political system increasingly takes AI X-risk seriously, we should expect to see stuff like this from time to time. If the tables were flipped and Sunak was instead pooh-poohing AI X-risk and saying things like "the safest path forward for AI is accelerating progress as fast as we can – slowing down would be Luddism" then I wouldn't be surprised to see articles saying "How Silicon Valley accelerationists are shaping Rishi Sunak’s AI plans". Doesn't mean we should ignore the negative pieces, and there very well may be things we can do to decrease it at the margin, but ultimately, I'd be surprised if there was a way around it. I also think it's notable how much press there is that agrees with AI X-risk concerns; it's not like there's a consensus in the media that it should be dismissed.
I agree and this is why I'm in favour of a Big Tent approach to EA. This risk comes from a lack of understanding about the diversity of thought within EA and that it isn't claiming to have all the answers. There is a danger that poor behaviour from one part of the movement can impact other parts.
Broadly EA is about taking a Scout Mindset approach to doing good with your donations, career and time. Individual EAs and organisations can have opinions on what cause areas need more resources at the margin but "EA" can't - it isn't a person, it's a network.
I really liked this post: "How CEA's communications team is thinking about EA communications at the moment" — EA Forum (effectivealtruism.org), from @Shakeel Hashim, and hope that whatever happens in terms of shake-ups at CEA, communications and clarity around the EA brand are prioritised.
This is really interesting. Thanks for sharing!
I think:
- If you have a lot of influence, articles like this are inevitable.
- EAs in AI should really try to make nice with the AI ethics crowd (i.e. help accomplish their goals). That's where the most criticism is coming from. From my perspective their concerns are useful angles of attack into the broader AI safety problem, and if EA policy does not meet the salient needs of present-day people it will be politically unpopular and lose influence (a challenge for the political longtermism agenda more broadly).
- I agree about EAs needing to cast a wider net, in really every sense of the term. We also need to be flexible to changing circumstances, particularly in something like AI that is so rapidly moving and where the technology and social consequences are likely to be far different in crucial respects to earlier predictions of them (even if the predictions are mostly true -- this is a very hard dynamic to manage).
- The article underscores the dangers to a movement so deeply connected to one foundation, and I expect we'll see Open Phil becoming more politically controversial (and very possibly perceived as more Soros-esque) fairly soon.
- EA is al
... (read more)
Thanks!
I agree that negative articles are inevitable if you get influence, but I think there are various angles these negative articles might come from, and this is a particularly bad one.
The Soros point is an excellent analogy, but I worry we could be headed for something worse than that. Soros gets criticism from people like Orban but praise from orgs like the FT and Politico. Meanwhile, with EA, people like Orban don't give a damn about EA but Politico is already publishing scathing pieces.
I don't think reputation management is as hard as is often supposed in EA. I think it's just that it hasn't been prioritised much until recently (e.g., CEA didn't have a head of comms until September 2022). I can imagine many national organisations such as mine would love to have a Campaign Officer or something to help us manage it, but we don't have the funding.
Do you have any encouraging examples of progress on 2? Some of the prominent people are incredibly hostile (i.e. they genuinely believe we are all literal fascists and also Machiavellian naive utilitarians who lie automatically whenever it's in our short-term interests) so I'm a bit pessimistic, though I agree it is a good idea to try. What's a good goal to help them accomplish in your view?
Some are hostile but not all, and there are disagreements and divisions in AI ethics just as deep as, if not deeper than, those in EA or any other broad community with multiple important aims that you can think of.
External oversight over the power of big tech is a good goal to help accomplish. This is from one of the leading AI ethics orgs; it could almost as easily have come from an org like GovAI:
https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act
epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance
I really wish I had your positive view on this Sean, but I really don't think there's much chance of inroads unless capabilities advance to an extent that makes xRisk seem even more salient.
Gebru is, imo, never going to view EA positively. And she'll use her influence as strongly as possible in the 'AI Ethics' community.
Seth Lazar also seems intractably anti-EA. It's annoying how much of this dialogue happens on Twitter/X, especially since it's very difficult for me as a non-Twitter user to find them, but I remember he posted one terrible anti-longtermist thread and later deleted it.
Shannon Vallor also once posted a similarly anti-longtermist thread, and then responded to Jess Whittlestone once, lamenting the gap between the Safety and Ethics fields. I just really haven't seen where the Safety->Ethics hostility has been; I've really only ever seen the reverse, but of course I'm 100% sure my sample is biased here.
The Belfield<>McInerney collaboration is extremely promising for sure, and I look forward to the outputs. I hope my impression is wrong and more work alon... (read more)
From the perspective of the AI Ethics researchers, AI Safety researchers and engineers contributed to the development of "everything for everyone" models – and also distracted away from the increasing harms that result from the development and use of those models.
Both of which, frankly, are true, given how much people in AI Safety collaborated and mingled with people in large AI labs.
I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around.
That feels unfair if we focus on the explicit exchange in the moment.
But there is more to it.
AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background goings-on they are concerned about that are hard to convey and not immediately obvious from their writing.
It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI E... (read more)
I think this is imprecise. In my mind there are two categories:
- People who think EA is a distraction from near-term issues and is competing for funding and attention (e.g. Seth Lazar, as seen in his complaints about the UK taskforce and his trying to tag Dustin Moskovitz and Ian Hogarth in his thinkpieces). These more classical ethicists are, from what I can see, analytical philosophers looking for funding and in clout competition with EA. They've lost a lot of social capital because they keep repeating a lot of old canards about AI. My model of them is something akin to: they can't do fizzbuzz or say what a transformer is, so they'll just make claims about how AI can't do things and how there's a lot of hype and power centralisation. These are more likely to be white men from the UK, Canada, Australia, and NZ. Status games are especially important to them, and they seem to just not have a great understanding of the field of alignment at all. A good example I show people is this tweet, which tries to say RLHF solves alignment and that "Paul [Christiano] is an actual researcher I respect, the AI alignment people that bother me are more the longtermists."
- People in the other camp are
... (read more)
I totally buy "there are lots of good sensible AI ethics people with good ideas, we should co-operate with them". I don't actually think that all of the criticisms of EA from the harshest critics are entirely wrong either. It's only the idea that "be co-operative" will have much effect on whether articles like this get written and hostile quotes from some prominent AI ethics people turn up in them, that I'm a bit skeptical of. My claim is not "AI ethics bad", but "you are unlikely to be able to persuade the most AI hostile figures within AI ethics".
The UK seems to take the existential risk from AI much more seriously than I would have expected a year ago. To me, this seems very important for the survival of our species, and seems well worth a few negative articles.
I'll note that I stopped reading the linked article after "Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs." This is inaccurate imo. In general, having low-quality negative articles written about EA will be hard to avoid, no matter if you do "narrow EA" or "global EA".
I agree that's a good argument why that article is a bigger deal than it seems, but I'd still be quite surprised if it were at all comparable to the EV of having the UK so switched on when it comes to alignment.
I think there are truths that are not so far from it. Some rationalists believe Superintelligent AI is necessary for an amazing future. Strong versions of AI Safety and AI capabilities are complementary memes that start from similar assumptions.
Where I think most EAs would strongly disagree is that they would find pursuing SAI "at all costs" to be abhorrent and counter to their fundamental goals. But I also suspect that showing survey data about EA's professed beliefs wouldn't be entirely convincing to some people given the close connections between EAs and rationalists in AI.
Last chance to apply for EAGxUtrecht! The deadline is today.
Apply now
Among our speakers, you'll have the chance to meet:
- Claire Boine - Researcher in AI law and CEO of Successif, on crafting your personal career strategy in AI safety and governance.
- Magnolia Tovar - Chemical Engineer with over 22 years of experience in the energy sector, discussing technology gaps we need to close to reach net-zero emissions by 2050.
- Zou Xinyi - Executive Assistant to the CEO at Giving What We Can, debunking common myths about the 10% Pledge and sharing her personal jou
... (read more)