This is a special post for quick takes by Nathan Young. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I want to once again congratulate the forum team on this voting tool. I think by doing this, the EA forum is at the forefront of internal community discussions. No communities do this well and it's surprising how powerful it is. 

I feel like I want 80k to do more cause prioritisation if they are gonna direct so many people. Seems like 5 years ago they had their whole ranking thing which was easy to check. Now I am less confident in the quality of work that is directing lots of people in a certain direction.

9
calebp
Idk, many of the people they are directing would just do something kinda random which an 80k rec easily beats. I'd guess the number of people for whom 80k makes their plans worse in an absolute sense is kind of low and those people are likely to course correct. Otoh, I do think people/orgs in general should consider doing more strategy/cause prio research, and if 80k were like "we want to triple the size of our research team to work out the ideal marginal talent allocation across longtermist interventions" that seems extremely exciting to me. But I don't think 80k are currently being irresponsible (not that you explicitly said that, for some reason I got a bit of that vibe from your post).
8
Ben Millwood🔸
80k could be much better than nothing and yet still missing out on a lot of potential impact, so I think your first paragraph doesn't refute the point.
4
NickLaing
I agree with this, and have another tangential issue, which might be part of why cause prioritization seems unclear? Their website seems confusing and overloaded to me. Compare Giving What We Can's page, which has good branding and simple language. IMO the 80,000 Hours page has too much text and too much going on on the front page. Bring both websites up on your phone and judge for yourself. These are the front page of EA for many people so are pretty important. These websites aren't really for most of us, they are for fresh people, so need to be punchy, straightforward and attractive. After clicking a couple of pages in, things can get heavier.

Compare Giving What We Can's page, which has good branding and simple language. IMO the 80,000 Hours page has too much text and too much going on on the front page. Bring both websites up on your phone and judge for yourself.

My understanding is that 80k have done a bunch of A/B testing which suggested their current design outcompetes ~most others (presumably in terms of click-throughs / amount of time users spend on key pages).

You might not like it, but this is what peak performance looks like.

2
NickLaing
Love this response, peak performance ha. I hope I'm wrong and this is the deal, that would be an excellent approach. Would be interesting to see what the other designs they tested were, but obviously I won't.

Have your EA conflicts on... THE FORUM!

In general, I think it's much better to first attempt to have a community conflict internally before I have it externally. This doesn't really apply to criminal behaviour or sexual abuse. I am centrally talking about disagreements, eg the Bostrom stuff, fallout around the FTX stuff, Nonlinear stuff, now this manifest stuff. 

Why do I think this?

  • If I want to credibly signal I will listen and obey norms, it seems better to start with a small discourse escalation rather than a large one. Starting a community discussion on Twitter is like jumping straight to a shooting war. 
  • Many external locations (eg Twitter, the press) have very skewed norms/incentives compared to the forum, and so many parties can feel like they are the victim. I find that when multiple parties feel they are weaker and victimised, that is likely to cause escalation. 
  • Many spaces have less affordance for editing comments, seeing who agrees with who, having a respected mutual party say "woah hold up there"
  • It is hard to say "I will abide by the community sentiment" if I have already started the discussion elsewhere in order to shame people. And if I don't intend to abide by the commu
... (read more)

This is also an argument for the forum's existence generally, if many of the arguments would otherwise be had on Twitter.

2
NickLaing
For sure, when it comes to any internet-based discussion, to promote quality discourse slowish long form >>>> rapid short form.
3
Sinclair Chen
I agree, with the caveat that certain kinds of more reasonable discussion can't happen on the forum because the forum is where people are fighting. For instance, because of the controversy I've been thinking a lot recently about antiracism - like what would effective antiracism look like; what lessons can we take from civil rights and what do we have to contribute (cool ideas on how to leapfrog past or fix education gaps? discourse norms that can facilitate hard but productive discussions about racism? advocating for literal reparations?) I have deleted a shortform I was writing on this because I think ppl would not engage with it positively. And I suspect I am missing the point somehow. I suspect people actually just want to fight, and the point is to be angry. On the meta level, I have been pretty frustrated (with both sides, though not equally) with the manner in which some people are arguing, the types of arguments they use, and their motivations. I think in some ways it is better to complain about that off the forum. It's worse for feedback, but that's also a good thing because the cycle of righteous rage does not continue on the forum. And you get different perspectives (I wonder if a crux here is that you have a lot of twitter followers and I don't. If you tweet you are speaking to an audience; if I tweet I am speaking to weird internet friends)
2
Nathan Young
So I sort of agree, though depending on the topic I think it could quickly get a lot of eyes on it. I would prefer not to discuss most controversial/personal things on Twitter.

If anyone who disagrees with me on the Manifest stuff considers themselves inside the EA movement, I'd like to have some discussions with a focus on consensus-building, ie we chat in DMs and then both report some statements we agreed on and some we specifically disagreed on.  

Edited:

@Joseph Lemien asked for positions I hold:

  • The EA forum should not seek to have opinions on non-EA events. I don't mean individual EAs shouldn't have opinions, I mean that as a group we shouldn't seek to judge individual events. I don't think we're very good at it.
  •  I don't like Hanania's behaviour either and am a little wary of systems where norm-breaking behaviour gives extra power, such as being endlessly edgy. But I will take those complaints to the Manifold community internally.
  • EAGs are welcome to invite or disinvite whoever CEA likes. Maybe one day I'll complain. But do I want EAGs to invite a load of Manifest's edgiest speakers? Not particularly. 
  • It is fine for there to be spaces with discussion that I find ugly. If people want to go to these events, that's up to them.
  • I dislike having unresolved conflicts which ossify into an inability to talk about things. Someone once told me tha
... (read more)
4
Joseph Lemien
Nathan, could you summarize/clarify for us readers what your views are? (or link to whatever comment or document has those views?) I suspect that I agree with you on a majority of aspects and disagree on a minority, but I'm not clear on what your views are. I'd be interested to see some sort of informal and exploratory 'working group' on inclusion-type stuff within EA, and have a small group conversation once a month or so, but I'm not sure if there are many (any?) people other than me that would be interested in having discussions and trying to figure out some actions/solutions/improvements.[1] 1. ^ We had something like this for talent pipelines and hiring (it was High Impact Talent Ecosystem, and it was somehow connected to or organized by SuccessIf, but I'm not clear on exactly what the relationship was), but after a few months the organizer stopped and I'm not clear on why. In fact, I'm vaguely considering picking up the baton and starting some kind of a monthly discussion group about talent pipelines, coaching/developing talent, etc.
2
Nathan Young
Oooh that's interesting. I'd be interested to hear what the conclusions are.
4
Jason
One limitation here: you have a view about Manifest. Your interlocutor would have a different view. But how do we know if those views are actually representative of major groupings? My hunch is that, if equipped with a mind probe, we would find at least two major axes with several meaningfully different viewpoints on each axis. Overall, I'd predict that I would find at least four sizable clusters, probably five to seven.
2
Nathan Young
So I ran a poll with 100-ish respondents, and if you want to run the k-means analysis you can find those clusters yourself. The anonymous data is downloadable here. https://viewpoints.xyz/polls/ea-and-manifest/results  Beyond that, yes you are likely right, but I don't know how to have that discussion better. I tried using polls and upvoted quotes as a springboard in this post (Truth-seeking vs Influence-seeking - a narrower discussion) but people didn't really bite there. Suggestions welcome. It is kind of exhausting to keep trying to find ways to get better samples of the discourse, without a sense that people will eventually go "oh yeah this convinces me". If I were more confident I would have more energy for it. 
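For anyone who wants to try the clustering themselves, here is a minimal sketch. The file name and data layout are assumptions, not the actual export format: it supposes the download is a CSV with one row per respondent and one column per statement, with votes coded as 1 = agree, -1 = disagree, 0/blank = skip.

import pandas as pd
from sklearn.cluster import KMeans

# Load the exported votes; unanswered statements are treated as 0 (skip).
votes = pd.read_csv("ea-and-manifest-votes.csv", index_col=0).fillna(0)

# Cluster respondents by their vote patterns. k is a guess - try a few values
# (e.g. 2 to 7) and see which groupings are interpretable.
k = 4
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(votes)

# Average vote per statement within each cluster, to see what each camp believes.
cluster_means = votes.groupby(model.labels_).mean()
print(cluster_means.round(2))

Whether you find two big camps or Jason's five-to-seven clusters will depend a lot on the k you pick, so it's worth eyeballing a few.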
4
Jason
I don't think those were most of the questions I was looking for, though. This isn't a criticism: running the poll early risks missing important cruxes and fault lines that haven't been found yet; running it late means that much of the discussion has already happened. There are also tradeoffs with viewpoints.xyz being accessible (=better sampling) and the data being rich enough. Limitation to short answer stems with a binary response (plus an ambiguous "skip") lends itself to identifying two major "camps" more easily than clusters within those camps. In general, expanding to five-point Likert scales would help, as would some sort of branching. For example, I'd want to know -- conditional on "Manifest did wrong here" / "the platforming was inappropriate" -- what factors were more or less important to the respondent's judgment. On a 1-5 scale, how important do you find [your view that the organizers did not distance themselves from the problematic viewpoints / the fit between the problematic viewpoints and a conference for the forecasting community / an absence of evidence that special guests with far-left or at least mainstream viewpoints on the topic were solicited / whatever]. And: how much would the following facts or considerations, if true, change your response to a hypothetical situation like the Manifest conference? Again, you can't get how much on a binary response. Maybe all that points to polling being more of a post-dialogue event, and accepting that we would choose discussants based on past history & early reactions. For example, I would have moderately high confidence that user X would represent a stance close to a particular pole on most issues, while I would represent a stance that codes as "~ moderately progressive by EA Forum standards." 
5
Nathan Young
Often it feels like I can never please people on this forum. I think the poll is significantly better than no poll. 
3
Jason
Yeah, I agree with that! I don't find it inconsistent with the idea that the reasonable trade-offs you made between various characteristics in the data-collection process make the data you got not a good match for the purposes I would like data for. They are good data for people interested in the answer to certain other questions. No one can build a (practical) poll for all possible use cases, just as no one can build a (reasonably priced) car that is both very energy-efficient and has major towing/hauling chops.
2
Joseph Lemien
As useful as viewpoints.xyz is, I will mention that for maybe 50% or 60% of the questions, my reaction was "it depends." I suppose you can't really get around that unless the person creating the questions spends much more time to carefully craft them (which sort of defeats the purpose of a quick-and-dirty poll), or unless you do interviews (which are of course much more costly). I do think there is value in the quick-and-dirty MVP version, but its usefulness has a pretty noticeable upper bound.

Lab grown meat -> no-kill meat

This tweet recommends changing the words we use to discuss lab-grown meat. Seems right.

There has been a lot of discussion of this, some studies were done on different names, and GFI among others seem to have landed on "cultivated meat".

1
EffectiveAdvocate🔸
What surprises me about this work is that it does not seem to include the more aggressive (for lack of a better word) alternatives I have heard being thrown around, like "Suffering-free", or "Clean", or "cruelty-free".
1
Saul Munn
could you link to a few of the discussions & studies?
4
Julia_Wise🔸
https://en.wikipedia.org/wiki/Cultured_meat#Nomenclature
6
Jeff Kaufman
For what it's worth, my first interpretation of "no-kill meat" is that you're harvesting meat from animals in ways that don't kill them. Like amputation of parts that grow back.
2
Eevee🔹
I love this wording!
1
Saul Munn
i'd be curious to see the results of e.g. focus groups on this — i'm just now realizing how awful of a name "lab grown meat" is, re: the connotations.

Suggestion. 

Debate weeks every other week and we vote on what the topic is.

I think if the forum had a defined topic (especially) in advance, I would be more motivated to read a number of posts on that topic. 

One of the benefits of the culture war posts is that we are all thinking about the same thing. If we did that on chosen topics, perhaps with dialogues from experts, that would be good, and on a useful topic.

9
Jason
Every other week feels exhausting, at least if the voting went in a certain direction.
7
NickLaing
I would pitch for every 2 months, but I like the sentiment of doing it a bit more.
5
Toby Tremlett🔹
A crux for me at the moment is whether we can shape debate weeks in a way which leads to deep rather than shallow engagement. If we were to run debate weeks more often, I'd (currently) want to see them causing people to change their mind, have useful conversations, etc... It's something I'll be looking closely at when we do a post-mortem on this debate week experiment. 
2
Toby Tremlett🔹
Also, every other week seems prima facie a bit burdensome for uninterested users. Additionally, I want top-down content to only be a part of the Forum. I wouldn't want to over-shepherd discussion and end up with less wide-ranging and good quality posts.  Happy to explore other ways to integrate polls etc if people like them and they lead to good discussions though. 
4
yanni kyriacos
Hi Nathan! I like suggestions and would like to see more suggestions. But I don't know what the theory of change is for the forum, so I find it hard to look at your suggestion and see if it maps onto the theory of change. Re this: "One of the benefits of the culture war posts is that we are all thinking about the same thing." I'd be surprised if 5% of EAs spent more than 5 minutes thinking about this topic and 20% of forum readers spent more than 5 minutes thinking about it. I'd be surprised if there were more than 100 unique commenters on posts related to that topic. Why does this matter? Well, prioritising a minority of subject-matter interested people over the remaining majority could be a good way to shrink your audience.
2
Nathan Young
Why is shrinking the audience bad? If this forum focused more on EA topics and some people left, I am not sure that would be bad. I guess it would be slightly good in expectation. And to be clear, I mean if we focused on things like "are AIs deserving of moral value" or "what % of money should be spent on animal welfare".
2
Chris Leong
I agree that there's a lot of advantage in occasionally bringing a critical mass of attention to certain topics where this moves the community's understanding forward vs. just hoping we end up naturally having the most important conversations.
1
Ebenezer Dukakis
Weird idea: What if some forum members were chosen as "jurors", and their job is to read everything written during the debate week, possibly ask questions, and try to come to a conclusion? I'm not that interested in AI welfare myself, but I might become interested if such "jurors" who recorded their opinion before and after made a big update in favor of paying attention to it. To keep the jury relatively neutral, I would offer people the chance to sign up to "be a juror during the first week of August", before the topic for the first week of August is actually known.

The front page agree disagree thing is soo coool. Great work forum team. 

7
Toby Tremlett🔹
Thanks Nathan! People seem to like it so we might use it again in the future. If you or anyone else has feedback that might improve the next iteration of it, please let us know! You can comment here or just dm. 
6
Ozzie Gooen
I think it's neat! But I think there's work to do on the display of the aggregate.
  1. I imagine there should probably be a table somewhere at least (a list of each person and what they say).
  2. This might show a distribution, above.
  3. There must be some way to just not have the icons overlap with each other like this. Like, use a second dimension, just to list them. Maybe use a wheat plot? I think strip plots and swarm plots could also be options.
6
JP Addison🔸
I'm excited that we exceeded our goals enough to have the issue :)
4
Lorenzo Buonanno🔸
I would personally go for a beeswarm plot. But even just adding some random y offset and some transparency seems to improve things:

// Jitter each vote marker vertically by up to ±50px so overlapping icons spread out
document.querySelectorAll('.ForumEventPoll-userVote').forEach(e => e.style.top = `${Math.random()*100-50}px`);
// Make the markers semi-transparent so overlaps remain visible
document.querySelectorAll('.ForumEventPoll-userVote').forEach(e => e.style.opacity = `0.7`);
2
Sarah Cheng
Really appreciate all the feedback and suggestions! This is definitely more votes than we expected. 😅 I implemented a hover-over based on @Agnes Stenlund's designs in this PR, though our deployment is currently blocked (by something unrelated), so I'm not sure how long it will take to make it to the live site. I may not have time to make further changes to the poll results UI this week, but please keep the comments coming - if we decide to run another debate or poll event, then we will iterate on the UI and take your feedback into account.

Looks great!

I tried to make it into a beeswarm, and while IMHO it does look nice it also needs a bunch more vertical space (and/or smaller circles)
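(If anyone wants to play with this themselves, here's a minimal sketch in Python rather than the site's own code; the vote positions below are made-up numbers standing in for the slider's agree/disagree axis, and seaborn's swarmplot does the non-overlapping layout:)

import matplotlib.pyplot as plt
import seaborn as sns

# Hypothetical vote positions: -100 = fully disagree, 100 = fully agree.
votes = [-90, -40, -35, -10, 0, 5, 5, 20, 30, 30, 45, 60, 75, 80, 95]

fig, ax = plt.subplots(figsize=(8, 2))
sns.swarmplot(x=votes, ax=ax, size=8)  # nudges points apart vertically so none overlap
ax.set_yticks([])
ax.set_xlabel("disagree <-> agree")
plt.tight_layout()
plt.show()

The vertical-space issue shows up here too: the taller the densest column of points, the taller the figure needs to be (or the smaller the circles).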

4
Nathan Young
Also adding a little force works too, eg here. There are pretty easy libraries for this. 
4
Lorenzo Buonanno🔸
The orange line above the circles makes it look like there's a similar number of people at the extreme left and the extreme right, which doesn't seem to be the case
5
Jason
I don't think it would help much for this question, but I could imagine using this feature for future questions in which the ability to answer anonymously would be important. (One might limit this to users with a certain amount of karma to prevent brigading.)
2
Brad West
I note some of my confusion that might have been shared by others. I initially had thought that the option from users was between binary "agree" and "disagree" and thought the method by which a user could choose was by dragging to one side or another. I see now that this would signify maximal agreement/disagreement, although maybe users like me might have done so in error. Perhaps something that could indicate this more clearly would be helpful to others.
2
Toby Tremlett🔹
Thanks Brad, I didn't foresee that! (Agree react Brad's comment if you experienced the same thing). Would it have helped if we had marked increments along the slider? Like the below but prettier? (our designer is on holiday)  
2
Brad West
Yeah, if there were markers like "neutral", "slightly agree", "moderately agree", "strongly agree", etc. that might make it clearer. After the decision by the user registers, a visual display that states something like "you've indicated that you strongly agree with the statement X.  Redrag if this does not reflect your view or if something changes your mind and check out where the rest of the community falls on this question by clicking here." 
6
Ozzie Gooen
Another idea could be to ask, "How many EA resources should go to this, per year, for the next 10 years?"  Options could be things like "$0", "$100k", "$1M", "$100M", etc. Also, maybe there could be a second question for, "How sure are you about this?" 
2
Toby Tremlett🔹
Interesting. Certainty could also be a Y-axis, but I think that trades off against simplicity for a banner. 
2
Toby Tremlett🔹
I'd love to hear more from the disagree reactors. They should feel very free to dm.  I'm excited to experiment more with interactive features in the future, so critiques are especially useful now!

An alternate stance on moderation (from @Habryka.)

This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that it (I guess) responds to individual posts less often, but more moderated in that I guess it rate-limits people more without giving reasons. 

I found it thought provoking. I'd recommend reading it.

Thanks for making this post! 

One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to participate in discussion on their own posts (so seeing a harsh rate-limit of something like "1 comment per 3 days" is not equivalent to a general ban from LessWrong, but should be more interpreted as "please comment primarily on your own posts", though of course it shares many important properties of a ban).

This is pretty much the opposite approach to the EA Forum's, which favours bans.

Things that seem most important to bring up in terms of moderation philosophy: 

Moderation on LessWrong does not depend on effort

"Another thing I've noticed is that almost all the users are trying.  They are trying to use rationality, trying to understan

... (read more)

This is pretty much the opposite approach to the EA Forum's, which favours bans.

If you remove ones for site-integrity reasons (spamming DMs, ban evasion, vote manipulation), bans are fairly uncommon. In contrast, it sounds like LW does do some bans of early-stage users (cf. the disclaimer on this list), which could be cutting off users with a high risk of problematic behavior before it fully blossoms. Reading further, it seems like the stuff that triggers a rate limit at LW usually triggers no action, private counseling, or downvoting here.

As for more general moderation philosophy, I think the EA Forum has an unusual relationship to the broader EA community that makes the moderation approach outlined above a significantly worse fit for the Forum than for LW. As a practical matter, the Forum is the ~semi-official forum for the effective altruism movement. Organizations post official announcements here as a primary means of publishing them, but rarely on (say) the effectivealtruism subreddit. Posting certain content here is seen as a way of whistleblowing to the broader community as a whole. Major decisionmakers are known to read and even participate in the Forum.

In contrast (although I am not... (read more)

6
Habryka
This also roughly matches my impression. I do think I would prefer the EA community to either go towards more centralized governance or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team. 
0
Nathan Young
Wait, it seems like a higher proportion of EA Forum moderation actions are bans, but that LW does more moderation overall and more of it is rate limits? Is that not right?
4
Habryka
My guess is LW both bans and rate-limits more. 
3
Nathan Young
Apart from choosing who can attend their conferences, which are the de facto place that many community members meet, writing their intro to EA, managing the effective altruism website and offering criticism of specific members' behaviour.  Seems like they are the de facto people who decide what is or isn't a valid way to practise effective altruism. If anything more than the LessWrong team (or maybe rationalists are just inherently unmanageable).  I agree on the ironic point though. I think you might assume that the EA forum would moderate more than LW, but that doesn't seem to be the case. 
7
JP Addison🔸
I want to throw in a bit of my philosophy here. Status note: This comment is written by me and reflects my views. I ran it past the other moderators, but they might have major disagreements with it. I agree with a lot of Jason’s view here. The EA community is indeed much bigger than the EA Forum, and the Forum would serve its role as an online locus much less well if we used moderation action to police the epistemic practices of its participants. I don’t actually think this is that bad. I think it is a strength of the EA community that it is large enough and has sufficiently many worldviews that any central discussion space is going to be a bit of a mishmash of epistemologies.[1] Some corresponding ways this viewpoint causes me to be reluctant to apply Habryka’s philosophy:[2] Something like a judicial process is much more important to me. We try much harder than my read of LessWrong to apply rules consistently. We have the Forum Norms doc and our public history of cases form something much closer to a legal code + case law than LW has. Obviously we’re far away from what would meet a judicial standard, but I view much of my work through that lens. Also notable is that all nontrivial moderation decisions get one or two moderators to second the proposal. Related both to the epistemic diversity, and the above, I am much more reluctant to rely on my personal judgement about whether someone is a positive contributor to the discussion. I still do have those opinions, but am much more likely to use my power as a regular user to karma-vote on the content. Some points of agreement:  Agreed. We are much more likely to make judgement calls in cases of new users. And much less likely to invest time in explaining the decision. We are still much less likely to ban new users than LessWrong. (Which, to be clear, I don’t think would have been tenable on LessWrong when they instituted their current policies, which was after the launch of GPT-4 and a giant influx of low quality
4
Jason
I think the banned individual should almost always get at least one final statement to disagree with the ban after its pronouncement. Even the Romulans allowed (will allow?) that. Absent unusual circumstances, I think they -- and not the mods -- should get the last word, so I would also allow a single reply if the mods responded to the final statement. More generally, I'd be interested in ~"civility probation," under which a problematic poster could be placed for ~three months as an option they could choose as an alternative to a 2-4 week outright ban. Under civility probation, any "probation officer" (trusted non-mod users) would be empowered to remove content too close to the civility line and optionally temp-ban the user for a cooling-off period of 48 hours. The theory of impact comes from the criminology literature, which tells us that speed and certainty of sanction are more effective than severity. If the mods later determined after full deliberation that the second comment actually violated the rules in a way that crossed the action threshold, then they could activate the withheld 2-4 week ban for the first offense and/or impose a new suspension for the new one.  We are seeing more of this in the criminal system -- swift but moderate "intermediate sanctions" for things like failing a drug test, as opposed to doing little about probation violations until things reach a certain threshold and then going to the judge to revoke probation and send the offender away for at least several months. As far as due process, the theory is that the offender received their due process (consideration by a judge, right to presumption of innocence overcome only by proof beyond a reasonable doubt) in the proceedings that led to the imposition of probation in the first place.
-1
Nathan Young
"will allow?" very good.
2
Nathan Young
Yeah seems fair.

I am not confident that another FTX level crisis is less likely to happen, other than that we might all say "oh this feels a bit like FTX".

Changes:

  • Board swaps. Yeah maybe good, though many of the people who left were very experienced. And it's not clear whether there are due diligence people (which seems to be what was missing).
  • Orgs being spun out of EV and EV being shuttered. I mean, maybe good though feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
  • More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA, it was in the tails. Are the tails of EA more honest? Hard to say
  • We have now had a big crisis so it's less costly to say "this might be like that big crisis". Though notably this might also be too cheap - we could flinch away from doing ambitious things
  • Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
  • OpenPhil is hiring more internally

Non-changes:

  • Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see FTX and OpenAI crisis)
  • Little discussion of why or how the affiliation with SBF happened despite many well connected EAs having a low opinion of him
  • Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future
8
Ben Millwood🔸
For both of these comments, I want a more explicit sense of what the alternative was. Many well-connected EAs had a low opinion of Sam. Some had a high opinion. Should we have stopped the high-opinion ones from affiliating with him? By what means? Equally, suppose he finds skepticism from (say) Will et al, instead of a warm welcome. He probably still starts the FTX future fund, and probably still tries to make a bunch of people regranters. He probably still talks up EA in public. What would it have taken to prevent any of the resultant harms? Likewise, what does not ignoring the base rate of scamminess in crypto actually look like? Refusing to take any money made through crypto? Should we be shunning e.g. Vitalik Buterin now, or any of the community donors who made money speculating?
4
Jason
Not a complete answer, but I would have expected communication and advice for FTXFF grantees to have been different. From many well connected EAs having a low opinion of him, we can imagine that grantees might have been urged to properly set up corporations, not count their chickens before they hatched, properly document everything and assume a lower-trust environment more generally, etc. From not ignoring the base rate of scamminess in crypto, you'd expect to have seen stronger and more developed contingency planning (remembering that crypto firms can and do collapse in the wake of scams not of their own doing!), more decisions to build more organizational reserves rather than immediately ramping up spending, etc.
2
Michael_PJ
The measures you list would have prevented some financial harm to FTXFF grantees, but it seems to me that that is not the harm that people have been most concerned about. I think it's fair for Ben to ask about what would have prevented the bigger harms.
2
Jason
Ben said "any of the resultant harms," so I went with something I saw a fairly high probability. Also, I mostly limit this to harms caused by "the affiliation with SBF" -- I think expecting EA to thwart schemes cooked up by people who happen to be EAs (without more) is about as realistic as expecting (e.g.) churches to thwart schemes cooked up by people who happen to be members (without more). To be clear, I do not think the "best case scenario" story in the following three paragraphs would be likely. However, I think it is plausible, and is thus responsive to a view that SBF-related harms were largely inevitable.  In this scenario, leaders recognized after the 2018 Alameda situation that SBF was just too untrustworthy and possibly fraudulent (albeit against investors) to deal with -- at least absent some safeguards (a competent CFO, no lawyers who were implicated in past shady poker-site scandals, first-rate and comprehensive auditors). Maybe SBF wasn't too far gone at this point -- he hadn't even created FTX in mid-2018 -- and a costly signal from EA leaders (we won't take your money) would have turned him -- or at least some of his key lieutenants -- away from the path he went down? Let's assume not, though.   If SBF declined those safeguards, most orgs decline to take his money and certainly don't put him on podcasts. (Remember that, at least as of 2018, it sounds like people thought Alameda was going nowhere -- so the motivation to go against consensus and take SBF money is much weaker at first.) Word gets down to the rank-and-file that SBF is not aligned, likely depriving him of some of his FTX workforce. Major EA orgs take legible action to document that he is not in good standing with them, or adopt a public donor-acceptability policy that contains conditions they know he can't/won't meet. Major EA leaders do not work for or advise the FTXFF when/if it forms.  When FTX explodes, the comment from major EA orgs is that they were not fully convinced he was
3
Jason
Is there any reason to doubt the obvious answer -- it was/is an easy way for highly-skilled quant types in their 20s and early 30s to make $$ very fast?
3
Nathan Young
seems like this is a pretty damning conclusion that we haven't actually come to terms with if it is the actual answer
5
Jason
It's likely that no single answer is "the" sole answer. For instance, it's likely that people believed they could assume that trusted insiders were significantly more ethical than the average person. The insider-trusting bias has bitten any number of organizations and movements (e.g., churches, the Boy Scouts). However, it seems clear from Will's recent podcast that the downsides of being linked to crypto were appreciated at some level. It would take a lot for me to be convinced that all that $$ wasn't a major factor.

People voting without explaining is good. 

I often see people thinking that this is bragading or something when actually most people just don't want to write a response, they either like or dislike something

If it were up to me I might suggest an anonymous "I don't know" button and an anonymous "this is poorly framed" button.

When I used to run a lot of facebook polls, it was overwhelmingly men who wrote answers, but if there were options to vote, the gender balance was much more even. My hypothesis was that a kind of argumentative, usually male, person tended to enjoy writing long responses more. And so blocking lower effort/less antagonistic/more anonymous responses meant I heard more from this kind of person. 

I don't know if that is true on the forum, but I would guess that the higher effort it is to respond the more selective the responses become in some direction. I guess I'd ask if you think that the people spending the most effort are likely to be the most informed. In my experience, they aren't.

More broadly I think it would be good if the forum optionally took some information about users - location, income, gender, cause area, etc and on answers with more than say 10 votes would dis... (read more)

It seems like we could use the new reactions for some of this. At the moment they're all positive but there could be some negative ones. And we'd want to be able to put the reactions on top level posts (which seems good anyway).

6
Joseph Lemien
I think that it is generally fine to vote without explanations, but it would be nice to know why people are disagreeing or disliking something. Two scenarios come to mind:
  • If I write a comment that doesn't make any claim/argument/proposal and it gets downvotes, I'm unclear what those downvotes mean.
  • If I make a post with a claim/argument/proposal and it gets downvoted without any comments, it isn't clear what aspect of the post people have a problem with.
I remember writing in a comment several months ago about how I think that theft from an individual isn't justified even if many people benefit from it, and multiple people disagreed without continuing the conversation. So I don't know why they disagreed, or what part of the argument they thought was wrong. Maybe I made a simple mistake, but nobody was willing to point it out. I also think that you raise good points regarding demographics and the willingness of different groups of people to voice their perspectives.
2
Nathan Young
I agree it would be nice to know, but in every case someone has decided they do want to vote but don't want to comment. Sometimes I try and cajole an answer, but ultimately I'm glad they gave me any information at all.
1
Rebecca
What is bragading?
4
Brad West
Think he was referring to "brigading", referred to in this thread. Generally, it is voting more out of allegiance or affinity to a particular person, rather than an assessment of the quality of the post/comment.

Some things I don't think I've seen around FTX, which are probably due to the investigation, but still seems worth noting. Please correct me if these things have been said.

  • I haven't seen anyone at the FTXFF acknowledge fault for negligence in not noticing that a defunct phone company (North Dimension) was paying out their grants.
    • This isn't hugely judgemental from me, I think I'd have made this mistake too, but I would like it acknowledged at some point
    • Since writing this it's been pointed out that there were grants paid from FTX and Alameda accounts also. Ooof.

The FTX Foundation grants were funded via transfers from a variety of bank accounts, including North Dimension-8738 and Alameda-4456 (Primary Deposit Accounts), as well as Alameda-4464 and FTX Trading-9018

  • I haven't seen anyone at CEA acknowledge that they ran an investigation in 2019-2020 on someone who would turn out to be one of the largest fraudsters in the world and failed to turn up anything despite seemingly a number of flags.
    • I remain confused
  • As I've written elsewhere I haven't seen engagement on this point, which I find relatively credible, from one of the Time articles:

"Bouscal recalled speaking to Mac Aulay

... (read more)

Extremely likely that the lawyers have urged relevant people to remain quiet on the first two points and probably the third as well.

6
Nathan Young
Yeah seems right, but uh still seems worth saying.
4
ChanaMessinger
Did you mean for the second paragraph of the quoted section to be in the quote section? 
2
Nathan Young
I can't remember but you're right that it's unclear.
3
Rían O.M
I haven't read too much into this and am probably missing something.  Why do you think FTXFF was receiving grants via North Dimension? The brief googling I did only mentioned North Dimension in the context of FTX customers sending funds to FTX (specifically this SEC complaint). I could easily have missed something. 
7
Jason
Grants were being made to grantees out of North Dimension's account -- at least one grant recipient confirmed receiving one on the Forum (would have to search for that). The trustee's second interim report shows that FTXFF grants were being paid out of similar accounts that received customer funds. It's unclear to me whether FTX Philanthropy (the actual 501c3) ever had any meaningful assets to its name, or whether (m)any of the grants even flowed through accounts that it had ownership of.
3
Nathan Young
Seems pretty bad, no?

Certainly very concerning. Two possible mitigations though:

  • Any finding of negligence would only apply to those with duties or oversight responsibilities relating to operations. It's not every employee or volunteer's responsibility to be a compliance detective for the entire organization.
  • It's plausible that people made some due diligence efforts that were unsuccessful because they were fed false information and/or relied on corrupt experts (like "Attorney-1" in the second interim trustee report). E.g., if they were told by Legal that this had been signed off on and that it was necessary for tax reasons, it's hard to criticize a non-lawyer too much for accepting that. Or more simply, they could have been told that all grants were made out of various internal accounts containing only corporate monies (again, with some tax-related justification that donating non-US profits through a US charity would be disadvantageous).
1
Rían O.M
Ah, thank you!  I searched for that comment. I think this is probably the one you're referencing. 
2
Nathan Young
I know of at least 1 other case.

I know of at least 1 NDA from an EA org silencing someone from discussing bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?

I suggest no.

4
ChanaMessinger
I think I want a Chesterton's TAP for all questions like this that says "how normal are these and why" whenever we think about a governance plan.
2
Peter Wildeford
What's a "Chesterton's TAP"?
2
ChanaMessinger
Not a generally used phrase, just my attempting to point to "a TAP for asking Chesterton's fence-style questions"
2
Peter Wildeford
What's a TAP? I'm still not really sure what you're saying.
4
NunoSempere
"Trigger action pattern", a technique for adopting habits proposed by CFAR <https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps>.
7
Peter Wildeford
Thanks! "Chesterton's TAP" is the most rationalist buzzword thing I've ever heard LOL, but I am putting together that what Chana said is that she'd like there to be some way for people to automatically notice (the trigger action pattern) when they might be adopting an abnormal/atypical governance plan and then reconsider whether the "normal" governance plan may be that way for a good reason even if we don't immediately know what that reason is (the Chesterton's fence)?
2
ChanaMessinger
Oh, sorry! TAPs are a CFAR / psychology technique. https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps
2
Nathan Young
I am unsure what you mean? As in, because other orgs do this it's probably normal? 
4
ChanaMessinger
I have no idea, but would like to! With things like "organizational structure" and "nonprofit governance", I really want to understand the reference class (even if everyone in the reference class does stupid bad things and we want to do something different).
0
Yitz
Strongly agree that moving forward we should steer away from such organizational structures; much better that something bad is aired publicly before it has a chance to become malignant

Feels like we've had about 3 months since the FTX collapse with no kind of leadership comment. Uh that feels bad. I mean I'm all for "give cold takes" but how long are we talking.

3
Ian Turner
Do you think this is not due to "sound legal advice"?

I am pretty sure there is no strong legal reason for people to not talk at this point. Not like totally confident but I do feel like I've talked to some people with legal expertise and they thought it would probably be fine to talk, in addition to my already bullish model.

2
[comment deleted]

I want to say thanks to people involved in the EA endeavour. I know things can be tough at times, but you didn't have to care about this stuff, but you do. Thank you, it means a lot to me. Let's make the world better!

The OpenAI stuff has hit me pretty hard. If that's you also, look after yourself. 

I don't really know what accurate thought looks like here.

3
ChanaMessinger
Yeah, same
1
yanni
I hope you're doing ok Nathan. Happy to chat in DM's if you like ❤️
1
Xing Shi Cai
It will settle down soon enough. Not much will change, as with most breaking news stories. But I am wondering whether I should switch to Claude.

I am really not the person to do it, but I still think there needs to be some community therapy here. Like a truth and reconciliation committee. Working together requires trust and I'm not sure we have it. 

Poll: https://viewpoints.xyz/polls/ftx-impact-on-ea

Results: https://viewpoints.xyz/polls/ftx-impact-on-ea/results

6
ChanaMessinger
Curious if you have examples of this being done well in communities you've been aware of? I might have asked you this before. I've been part of an EA group where some emotionally honest conversations were had, and I think they were helpful but weren't a big fix. I think a similar group later did a more explicit and formal version and they found it helpful.
4
Nathan Young
I've never seen this done well. I guess I'd read about the truth and reconciliation committees in South Africa and Ireland.

Joe Rogan (the largest podcaster in the world) giving repeated, concerned, but mediocre x-risk explanations suggests that people who have contacts with him should try and get someone on the show to talk about it.

eg listen from 2:40:00, though there were several bits like this during the show. 

I intend to strong downvote any article about EA that someone posts on here that they themselves have no positive takes on. 

If I post an article, I have some reason I liked it. Even a single line. Being critical isn't enough on its own. If someone posts an article, without a single quote they like, with the implication it's a bad article, I am minded to strong downvote so that no one else has to waste their time on it. 

4
James Herbert
What do you make of this post? I've been trying to understand the downvotes. I find it valuable in the same way that I would have found it valuable if a friend had sent me it in a DM without context, or if someone had quote tweeted it with a line like 'Prominent YouTuber shares her take on FHI closing down'.  I find posts like this useful because it's valuable to see what external critics are saying about EA. This helps me either a) learn from their critiques or b) rebut their critiques. Even if they are bad critiques and/or I don't think it's worth my time rebutting them, I think I should be aware of them because it's valuable to understand how others perceive the movement I am connected to. I think this is the same for other Forum users. This being the case, according to the Forum's guidance on voting, I think I should upvote them. As Lizka says here, a summary is appreciated but isn't necessary. A requirement to include a summary or an explanation also imposes a (small) cost on the poster, thus reducing the probability they post. But I think you feel differently? 

I think the strategy fortnight worked really well. I suggest that another one is put in the calendar (for say 3 months' time) and then, rather than drip-feeding comment, we sort of wait and then burst it out again. 

It felt better to me, anyway, to be like "for these two weeks I will engage"

I also thought it was pretty decent, and it caused me to get a post out that had been sitting in my drafts for quite a while.

I've said that people voting anonymously is good, and I still think so, but when I have people downvoting me for appreciating little jokes that other people post on my shortform, I think we've become grumpy. 

4
NickLaing
Completely agree, I would love humour to be more appreciated on the forum. Rarely does a joke slip through appreciated/unpunished.
2
titotal
In my experience, this forum seems kinda hostile to attempts at humour (outside of april fools day). This might be a contributing factor to the relatively low population here!
5
Nathan Young
I get that, though it feels like shortforms should be a bit looser. 
1
yanni kyriacos
haha whenever I try humour / sarcasm I get shot directly into the sun. 

I notice some people (including myself) reevaluating their relationship with EA. 

This seems healthy. 

When I was a Christian it was extremely costly for me to reduce my identification and resulted in a delayed and much more final break than perhaps I would have wished[1]. My general view is that people should update quickly, and so if I feel like moving away from EA, I do it when I feel that, rather than inevitably delaying and feeling ick.

Notably, reducing one's identification with the EA community need not change one's stance towards effective work/donations/earning to give. I doubt it will change mine. I just feel a little less close to the EA community than once I did, and that's okay.

I don't think I can give others good advice here, because we are all so different. But the advice I would want to hear is "be part of things you enjoy being part of, choose an amount of effort to give to effectiveness and try to be a bit more effective with that each month, treat yourself kindly because you too are a person worthy of love" 

  1. ^

    I think a slow move away from Christianity would have been healthier for me. Strangely I find it possible to imagine still being a Christian, had thi

... (read more)

Richard Ngo just gave a talk at EAG Berlin about errors in AI governance, one being a lack of concrete policy suggestions.

Matt Yglesias said this a year ago. He was even the main speaker at EAG DC https://www.slowboring.com/p/at-last-an-ai-existential-risk-policy?utm_source=%2Fsearch%2Fai&utm_medium=reader2 

Seems worth asking why we didn't listen to top policy writers when they warned that we didn't have good proposals.

Seems worth asking why we didn't listen to top policy writers when they warned that we didn't have good proposals.

What do you think of Thomas Larson's bill? It seems pretty concrete to me, do you just think it is not good?

2
Nathan Young
I am going on what Ngo said. So I guess, what does he think of it?
-3
Larks
This sounds like the sort of question you should email Richard to ask before you make blanket accusations. 
4
Nathan Young
Ehhh, not really. I think it's not a crazy view to hold and I wrote it on a shortform. 

I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil, see here. As such many things struggled for funding, see here.

We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum. 

That fact, combined with funders who were (and maybe still are) somewhat against funding people (except for people they knew extremely well) to network with policy makers in any way, has led to (and maybe is still leading to) very limited policy research and development happening.

 

I am sure others could justify this risk-averse approach, and there are totally benefits to being risk averse. However, in my view this was a mistake (and is maybe an ongoing mistake). I think it was driven by the fact that funders were/are: A] not policy people, so do/did not understand the space so were hesitant to make grants; B] heavily US-centric, so do/did not understand the non-US policy space; and C] heavily capacity constrained, so do/did ... (read more)

9
Habryka
My current model is that actually very few people who went to DC and did "AI Policy work" chose a career that was well-suited to proposing policies that help with existential risk from AI. In-general people tried to choose more of a path of "try to be helpful to the US government" and "become influential in the AI-adjacent parts of the US government", but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk. Mostly just people whose job it is to "become influential in the US government so that later they can steer the AI existential risk conversation in a better way". I find this very sad and consider it one of our worst mistakes, though I am also not confident in that model, and am curious whether people have alternative models.
5
Lukas_Gloor
That's probably true because it's not like jobs like that just happen to exist within government (unfortunately), and it's hard to create your own role descriptions (especially with something so unusual) if you're not already at the top.  That said, I think the strategy you describe EAs to have been doing can be impactful? For instance, now that AI risk has gone mainstream, some groups in government are starting to work on AI policy more directly, and if you're already working on something kind of related and have a bunch of contacts and so on, you're well-positioned to get into these groups and even get a leading role.  What's challenging is that you need to make career decisions very autonomously and have a detailed understanding of AI risk and related levers to carve out your own valuable policy work at some point down the line (and not be complacent with "down the line never comes until it's too late"). I could imagine that there are many EA-minded individuals who went into DC jobs or UK policy jobs with the intent to have an impact on AI later, but they're unlikely to do much with that because they're not proactive enough and not "in the weeds" enough with thinking about "what needs to happen, concretely, to avert an AI catastrophe?." Even so, I think I know several DC EAs who are exceptionally competent and super tuned in and who'll likely do impactful work down the line, or are already about to do such things. (And I'm not even particularly connected to that sphere, DC/policy, so there are probably many more really cool EAs/EA-minded folks there that I've never talked to or read about.)   
4
OllieBase
The slide Nathan is referring to. "We didn't listen" feels a little strong; lots of people were working on policy detail or calling for it, it just seems ex post like it didn't get sufficient attention. I agree directionally though, and Richard's guesses at the causes (expecting fast take-off + business-as-usual politics) seem reasonable to me. Also, *EAGxBerlin.

The Scout Mindset deserved 1/10th of the marketing campaign of WWOTF. Galef is a great figurehead for rational thinking and it would have been worth it to try and make her a public figure.

4
Ozzie Gooen
I think much of the issue is that:
  1. It took a while to ramp up to being able to do things such as the marketing campaign for WWOTF. It's not trivial to find the people and buy-in necessary. Previous EA books haven't had anything similar.
  2. Even when you have that capacity, it's typically much more limited than we'd want.
I imagine EAs will get better at this over time. 

How are we going to deal emotionally with the first big newspaper attack against EA?

EA is pretty powerful in terms of impact and funding.

It seems only an amount of time before there is a really nasty article written about the community or a key figure.

Last year the NYT wrote a hit piece on Scott Alexander and while it was cool that he defended himself, I think he and the rationalist community overreacted and looked bad.

I would like us to avoid this.

If someone writes a hit piece about the community, Givewell, Will MacAskill etc, how are we going to avoid a kneejerk reaction that makes everything worse?

I suggest if and when this happens:

  1. individuals largely don't respond publicly unless they are very confident they can do so in a way that leads to deescalation.

  2. articles exist to get clicks. It's worth someone (not necessarily me or you) responding to an article in the NYT, but if, say, a niche commentator goes after someone, fewer people will hear it if we let it go.

  3. let the comms professionals deal with it. All EA orgs and big players have comms professionals. They can defend themselves.

  4. if we must respond (we often needn't) we should adopt a stance of grace, curiosity and hu

... (read more)

Yeah, I think the community response to the NYT piece was counterproductive, and I've also been dismayed at how much people in the community feel the need to respond to smaller hit pieces, effectively signal boosting them, instead of just ignoring them. I generally think people shouldn't engage with public attacks unless they have training in comms (and even then, sometimes the best response is just ignoring).

5
Peter Wildeford
We've had multiple big newspaper attacks now. How'd we do compared to your expectations?
2
Nathan Young
I think we did better externally than I expected but I think internally I didn't really write enough here. 

I hope Will MacAskill is doing well. I find it hard to predict how he's doing as a person. While there have been lots of criticisms (and I've made some) I think it's tremendously hard to be the Schelling person for a movement. There is a separate axis however, and I hope in himself he's doing well and I imagine many feel that way. I hope he has an accurate picture here.

The vibe at EAG was chill, maybe a little downbeat, but fine. I can get myself riled up over the forum, but it's not representative! Most EAs are just getting on with stuff. 

(This isn't to say that forum stuff isn't important; it's just as important as it is, rather than being what defines my mood.)

I continue to think that a community this large needs mediation functions to avoid lots of harm with each subsequent scandal.

People asked for more details, so I wrote the below. 

Let's look at some recent scandals and I'll try and point out some different groups that existed.

  • FTX - longtermists and non-longtermists, those with greater risk tolerance and less
  • Bostrom - rationalists and progressives
  • Owen Cotton-Barratt - looser norms vs more robust, weird vs normie
  • Nonlinear - loyalty vs kindness, consent vs duty of care

In each case, the community disagrees on who we should be and what we should be. People write comments to signal that they are good and want good things and shouldn't be attacked. Other people see these and feel scared that they aren't what the community wants.

This is tiring and anxiety inducing for all parties. In all cases here there are well intentioned, hard working people who have given a lot to try and make the world better who are scared they cannot trust their community to support them if push comes to shove. There are people horrified at the behaviour of others, scared that this behaviour will repeat itself, with all the costs attached. I feel this way, and I ... (read more)

I'd bid for you to explain more what you mean here - but it's your quick take!

2
Chris Leong
I'm very keen for more details as well.
8
OllieBase
The CEA community health team does serve as a mediation function sometimes, I think. Maybe that's not enough, but it seems worth mentioning.
5
Chris Leong
Community health is also like the legal system in that they enforce sanctions so I wonder if that reduces the chance that someone reaches out to them to mediate.
2
Nathan Young
I think this is the wrong frame tbh
3
Chris Leong
How so?
2
Nathan Young
I think I want them to be a mediation and boundary-setting org, not just a legal system

The shifts in forum voting patterns across the EU and US seem worthy of investigation. 

I'm not saying there is some conspiracy; it seems pretty obvious that EU and US EAs have different views and that this appears in voting patterns, but it seems like we could have more self-knowledge here.

2
JWS 🔸
Agreed, and I think @Peter Wildeford has pointed that out in recent threads - it's very unlikely to be a 'conspiracy' and much more likely that opinions and geographical locations are highly correlated. I can remember some recent comments of mine that swung from slightly upvoted to highly downvoted and back to slightly upvoted. This might be something that the Forum team is better placed to answer, but if anyone can think of a way to try to tease this out using data on the public API, let me know and I can try and investigate it.
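A rough sketch of one way to probe this with the public API: poll post scores on a schedule and compare how much they move during EU versus US waking hours. The GraphQL endpoint below is assumed to be the forum's public one, and the query shape and field names (baseScore, postedAt, the "new" view) are assumptions based on the ForumMagnum codebase, so they may need adjusting against the live schema.

```python
# Sketch: look for timezone-correlated swings in post karma by polling the
# forum's public GraphQL endpoint and bucketing score changes by hour (UTC).
# Field names (baseScore, postedAt) are assumptions and may need adjusting.
import datetime as dt
import json
import time
import urllib.request

ENDPOINT = "https://forum.effectivealtruism.org/graphql"  # assumed endpoint
QUERY = """
{
  posts(input: {terms: {view: "new", limit: 50}}) {
    results { _id title baseScore postedAt }
  }
}
"""

def fetch_scores():
    """Return {post_id: baseScore} for the most recent posts."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps({"query": QUERY}).encode(),
        headers={"Content-Type": "application/json", "User-Agent": "vote-pattern-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)["data"]["posts"]["results"]
    return {p["_id"]: p["baseScore"] for p in results}

# Poll hourly and record how much each post's score moved in each UTC hour.
# EU-afternoon hours and US-afternoon hours can then be compared directly.
deltas_by_hour = {h: [] for h in range(24)}
previous = fetch_scores()
for _ in range(24):  # one day of snapshots; run much longer for a real analysis
    time.sleep(3600)
    current = fetch_scores()
    hour = dt.datetime.now(dt.timezone.utc).hour
    for post_id, score in current.items():
        if post_id in previous:
            deltas_by_hour[hour].append(score - previous[post_id])
    previous = current

for hour, deltas in deltas_by_hour.items():
    if deltas:
        print(hour, sum(deltas) / len(deltas))
```

This only sees net score changes, not individual votes, but a consistent pattern of posts gaining karma in one block of hours and losing it in another would be some evidence for the timezone story.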
4
Nathan Young
But it's just sort of 'not-fun' to know that if one posts one's post at the wrong time it's gonna go underwater and maybe never come back.  Not sure what to do but it feels like there is a positive sum solution.
4
JWS 🔸
Yeah it's true, I was mostly just responding to the empirical question of how to identify/measure that split on the Forum itself. As to dealing with the split and what it represents, my best guess is that there is a Bay-concentrated/influenced group of users who have geographically concentrated views, which much of the rest of EA disagrees with, or to varying extents finds their beliefs/behaviour rude or repugnant or wrong.[1] The longer term question is if that group and the rest of EA[2] can cohere together under one banner or not. I don't know the answer there, but I'd very much prefer it to be discussion and mutual understanding rather than acrimony and mutual downvoting. But I admit I have been acrimonious and downvoted others on the Forum, so not sure those on the other side to me[3] would think I'm a good choice to start that dialogue. 1. ^ Perhaps the feeling is mutual? I don't know; certainly I think many members of this culture (not just in EA/Rationalist circles but beyond in the Bay) find 'normie' culture morally wrong and intolerable 2. ^ Big simplification I know 3. ^ For the record, as per bio, I am a 'rest of the world/non-Bay' EA
2
NickLaing
There have been a few comments about this, and I'm surprised the forum team hasn't weighed in yet with data or comments. Are there actually voting trends which differ across timezones? If so, how do those patterns work? Should we do anything about it? I've also found myself doing some reactionary downvoting recently, which I didn't like but might have actually been fine, just on the other side. That isn't good at all, so I'm guilty here too.

Sam Harris takes Giving What We Can pledge for himself and for his meditation company "Waking Up"

Harris references MacAskill and Ord as having been central to his thinking and talks about Effective Altruism and existential risk. He publicly pledges 10% of his own income and 10% of the profit from Waking Up. He will also create a series of lessons on his meditation and education app around altruism and effectiveness.

Harris has 1.4M Twitter followers and is a famed humanist and New Atheist. The Waking Up app has over 500k downloads on Android, so I guess over 1 million overall. 

https://dynamic.wakingup.com/course/D8D148

I like letting personal thoughts be up or downvoted, so I've put them in the comments.

6
Nathan Young
Harris is a marmite figure - in my experience people love him or hate him. It is good that he has done this. Newswise, it seems to me it is more likely to impact the behavior of his listeners, who are likely to be well-disposed to him. This is a significant but currently low-profile announcement, as will be the courses on his app. I don't think I'd go spreading this around more generally; many don't like Harris, and for those who don't like him, it could be easy to see EA as more of the same (callous, superior progressivism). In the low probability (5%?) event that EA gains traction in that space of the web (generally called the Intellectual Dark Web - don't blame me, I don't make the rules) I would urge caution for EA speakers who might get pulled into polarising discussion which would leave some groups feeling EA ideas are "not for them".

Harris is a marmite figure - in my experience people love him or hate him.

My guess is people who like Sam Harris are disproportionately likely to be potentially interested in EA.

This seems quite likely given EA Survey data where, amongst people who indicated they first heard of EA from a podcast and indicated which podcast, Sam Harris's podcast strongly dominated all other podcasts.

More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris' podcast specifically is several times the number who heard about EA from Vox's Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don't know the relative audience size of Future Perfect posts vs Sam Harris' EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.

2
Aaron Gertler 🔸
Notably, Harris has interviewed several figures associated with EA; Ferriss only did MacAskill, while Harris has had MacAskill, Ord, Yudkowsky, and perhaps others.
3
David_Moss
This is true, although for whatever reason the responses to the podcast question seemed very heavily dominated by references to MacAskill.  This is the graph from our original post, showing every commonly mentioned category, not just the host (categories are not mutually exclusive). I'm not sure what explains why MacAskill really heavily dominated the Podcast category, while Singer heavily dominated the TED Talk category.
4
Nathan Young
The address (in the link) is humbling and shows someone making a positive change for good reasons. He is clear and coherent. Good on him.

I might start doing some policy BOTEC (back-of-the-envelope calculation) posts, i.e. where I suggest an idea and try to figure out how valuable it is. I think I could do this faster with a group to bounce ideas off. 

If you'd like to be added to a message chat (on whatsapp probably) to share policy BOTECs then reply here or DM me. 
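To give a sense of the format, here is a minimal sketch of a policy BOTEC in code. Every number is a made-up placeholder for a hypothetical policy, not a real estimate.

```python
# Minimal policy BOTEC sketch. Every number below is a made-up placeholder
# for a hypothetical policy; the point is the structure, not the figures.
p_policy_passes = 0.05          # chance advocacy shifts the policy (hypothetical)
annual_benefit_usd = 200e6      # yearly benefit if it passes (hypothetical)
years_of_effect = 5             # how long the policy change persists (hypothetical)
advocacy_cost_usd = 2e6         # cost of the campaign (hypothetical)

expected_benefit = p_policy_passes * annual_benefit_usd * years_of_effect
benefit_cost_ratio = expected_benefit / advocacy_cost_usd

print(f"Expected benefit: ${expected_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.1f}x")
# With these placeholders: 0.05 * 200M * 5 = 50M expected benefit, a ~25x ratio.
```

The value of posts like this would mostly be in arguing over which placeholder numbers are wrong and by how much.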

Feels like there should be a "comment anonymously" feature. Would save everyone having to manage all these logins.

[This comment is no longer endorsed by its author]

We have thought about that. Probably the main reason we haven't done this is because of this reason, on which I'll quote myself on from an internal slack message:

Currently if someone makes an anon account, they use an anonymous email address. There's usually no way for us, or, by extension, someone who had full access to our database, to deanonymize them. However, if we were to add this feature, it would tie the anonymous comments to a primary account. Anyone who found a vulnerability in that part of the code, or got an RCE on us, would be able to post a dump that would fully deanonymize all of those accounts.

-1
Nathan Young
Touche

Post I spent 4 hours writing on a topic I care deeply about: 30 karma

Post I spent 40 minutes writing on a topic that the community vibes with: 120 karma

I guess this is fine - it's just people being interested - but it can feel weird at times.

7
NunoSempere
This is not fine
-1
Nathan Young
I dunno. I thought I'd surface it.
-3
niplav
Yeah, this is an unfortunate gradient, you have to decide not to follow it :-/ But there is more long-term glory in it.

Confusion

I get why I and others give to GiveWell rather than catastrophic risk - sometimes it's good to know your "impact account" is positive even if all the catastrophic risk work was useless. 

But why do people not give to animal welfare in this case? Seems higher impact?

And if it's just that we prefer humans to animals that seems like something we should be clear to ourselves about.

Also I don't know if I like my mental model of an "impact account". Seems like my giving has maybe once again become about me rather than impact. 

ht @Aaron Bergman for surfacing this

6
Jeroen Willems🔸
This is exactly why I mostly give to animal charities. I do think there's higher uncertainty of impact with animal charities compared to global health charities so I still give a bit to AMF. So roughly 80% animal charities, 20% global health.
3
Aaron Bergman
Thanks for bringing our convo here! As context for others, Nathan and I had a great discussion about this which was supposed to be recorded... but I managed to mess up and didn't capture the incoming audio (i.e. everything Nathan said) 😢 Guess I'll share a note I made about this (sounds AI written because it mostly was, generated from a separate rambly recording). A few lines are a little spicier than I'd ideally like but 🤷
7
Jason
Thanks for posting this. I had branching out my giving strategy to include some animal-welfare organizations on the to-do list, but this motivated me to actually pull the trigger on that.
4
Gil
I think most of the animal welfare neglect comes from the fact that if people are deep enough into EA to accept all of its "weird" premises they will donate to AI safety instead. Animal welfare is really this weird midway spot between "doesn't rest on controversial claims" and "maximal impact".
8
Aaron Bergman
Definitely part of the explanation, but my strong impression from interaction irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell and much fewer to anything animal related. I think ~literally except for Eliezer (who doesn’t think other animals are sentient), this isn’t what you’d expect from the weirdness model implied. Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of their donations, I just don’t think they’re trying to do the most good with their money. Tbc this isn’t some damning indictment - it’s how almost all self-identified EAs’ money is spent and I’m not at all talking about ‘normal person in rich country consumption.’

Does anyone understand the bottlenecks to a rapid malaria vaccine rollout? Feels underrated.

Best sense of what's going on (my info's second-hand) is it would cost ~$600M to buy and distribute all of Serum Institute's supply (>120M doses × $3.90/dose + ~$1/dose distribution cost), and GAVI doesn't have any new money to do so. So they're possibly resistant to moving quickly, which may be slowing down the WHO prequalification process, which is a gating item for the vaccine being put in vials and purchased by GAVI (via UNICEF). The natural solution for funding is for Gates to lead an effort to do so, but they are heavy supporters of the RTS,S malaria vaccine, so it's awkward for them to put major support into the new R21 vaccine, which can be produced in large quantity. Also, the person most associated with R21 is Adrian Hill, who is not well-liked in the malaria field. There will also be major logistical hurdles to getting it distributed in the countries, and there are a number of bureaucracies internally in each of the countries that will all need to cooperate.

 

Here's an op-ed from my colleague Zach: https://foreignpolicy.com/2023/12/08/new-malaria-vaccine-africa-world-health-organization-child-mortality/

Here's one from Peter Singer https://www.project-syndicate.org/commentary/new... (read more)

8
Weaver
Pretend that you're a Texan vaccine distributor. You have the facility to produce en masse, something that once given out will no longer make a profit, so there's no incentive to make a factory, but you're an EA true and true so you build the thing you need and make the doses. Now you have doses in a warehouse somewhere. You have to take the vaccine all over the admittedly large state, but with a good set of roads and railroads, this is an easily solvable problem, right? You have a pile of vaccine, potential connections with Texan hospitals who thankfully ALL speak English, and you have the funding from your company to send people to distribute the vaccine. There may or may not be a cold chain needed, so you might need refrigerated trucks, but this is a solvable problem, right? Cold chain trucks can't be that much more expensive than regular trucks? So you go out and you start directing the largest portion of vaccines to go to the large cities and health departments, just to reach your 29 million people that you're trying to hit. You pay a good salary to your logisticians and drivers to get the vaccines where they need to go. In a few days, you're able to effectively get a large chunk of your doses to where they need to go, but now you run into the problem of last mile logistics, where you need to get a dose to a person. That means that the public has to get the message that this is available for them, where they can find it and how they can do it. God forbid there be a party that is trying to PSYOP that your vaccine causes malarial cancer or something, because that would be a problem. You'll have your early adopters, sure, but after some time the people that will follow prudent public health measures will drop off and the lines will be empty. You'll still have 14 million doses - and have they been properly stored? This is of course accounting for the number of Texans who just won't get a vaccine or are perhaps too young. So you appeal to the state government
6
Stephen Clare
FWIW I reached out to someone involved in this at a high level a few months ago to see if there was a potential project here. They said the problem was "persuading WHO to accelerate a fairly logistically complex process". It didn't seem like there were many opportunities to turn money or time into impact so I didn't pursue anything further.
3
MathiasKB🔸
There's a few I know of:
* For the new R21 vaccine, WHO is currently conducting prequalification of the production facilities. As far as I understand, African governments have to wait for prequalification to finish before they can apply for subsidized procurement and rollout through UNICEF and GAVI.
* For both RTS,S and R21, there are some logistical difficulties due to the vaccines' 4-dose schedule (the first three are 1 month apart, which doesn't fit all too well into existing vaccination schedules), cold-chain requirements, and timing peak immunity with the seasonality of malaria.
* Lastly, since there already exist cost-effective counter-measures, it's unclear how to balance new vaccine efforts against existing measures.

Is EA as a bait and switch a compelling argument for it being bad?

I don't really think so

  1. There are a wide variety of baits and switches, from what I'd call misleading to some pretty normal activities - is it a bait and switch when churches don't discuss their most controversial beliefs at a "bring your friends" service? What about wearing nice clothes to a first date? [1]
  2. EA is a big movement composed of different groups[2]. Many describe it differently.
  3. EA has done so much global health stuff I am not sure it can be described as a bait and switch. eg h
... (read more)
4
Joseph Lemien
I think that there might be something meaningfully different between wearing nice clothes to a first date (or a job interview), as opposed to intentionally not mentioning more controversial/divisive topics to newcomers. I think there is a difference between putting your best foot forward (dressing nice, grooming, explaining introductory EA principles articulately with a 'pitch' you have practiced) and intentionally avoiding/occluding information. For a date, I wouldn't feel deceived/tricked if someone dressed nice. But I would feel deceived if the person intentionally withheld or hid information that they knew I would care about. (It is almost a joke that some people lie about age, weight, height, employment, and similar traits in dating.) I have to admit that I was a bit turned off (what word is appropriate for a very weak form of disgusted?) when I learned that there has long been an intentional effort in EA to funnel people from global development to long-termism within EA.
4
huw
If anything, EA now has a strong public (admittedly critical) reputation for longtermist beliefs. I wouldn't be surprised if some people have joined in order to pursue AI alignment and got confused when they found out more than half of the donations go to GHD & animal welfare.
0
Richard Y Chappell🔸
re: fn 1, maybe my tweet?
2
Nathan Young
Yes, I thought it was you but I couldn't find it. Good analogy.

Relative Value Widget

It gives you sets of donations and you have to choose which you prefer. If you want you can add more at the bottom.

https://allourideas.org/manifund-relative-value 

so far:

2
Ozzie Gooen
This is neat, kudos! I imagine it might be feasible to later add probability distributions, though that might unnecessarily slow people down.  Also, some analysis would likely be able to generate a relative value function, after which you could do the resulting visualizations and similar.
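One standard way to turn pairwise choices like these into a relative value function is a Bradley-Terry-style fit. A minimal sketch follows: the option names and win counts are made-up placeholders, and this isn't how allourideas computes its rankings, just one possible analysis of exported pairwise data.

```python
# Bradley-Terry fit via the standard MM update (Hunter 2004): turns pairwise
# "which donation do you prefer?" counts into relative value scores.
# The options and win counts below are made-up placeholders, not real widget data.
options = ["Donation A", "Donation B", "Donation C"]
# wins[i][j] = number of times option i was preferred over option j
wins = [
    [0, 8, 6],
    [2, 0, 5],
    [4, 5, 0],
]

n = len(options)
values = [1.0] * n
for _ in range(200):  # iterate the MM update until it stabilises
    new_values = []
    for i in range(n):
        total_wins = sum(wins[i])
        denom = sum(
            (wins[i][j] + wins[j][i]) / (values[i] + values[j])
            for j in range(n) if j != i
        )
        new_values.append(total_wins / denom if denom else values[i])
    s = sum(new_values)
    values = [v / s for v in new_values]  # normalise so scores sum to 1

for name, value in sorted(zip(options, values), key=lambda t: -t[1]):
    print(f"{name}: {value:.3f}")
```

The fitted scores are exactly the kind of relative value function Ozzie describes, and they could feed straight into visualisations.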
4
Nathan Young
Note I didn't build the app, I just added the choices. Do you think getting the full relative values is worth it?
1
Nathan Young
Why do people give to EA funds and not just OpenPhil?
4
David M
does OpenPhil accept donations? I would have guessed not
3
ChrisSmith
It does not. There are a small number of co-funding situations where money from other donors might flow through Open Philanthropy operated mechanisms, but it isn't broadly possible to donate to Open Philanthropy itself (either for opex or regranting).
2
Nathan Young
Lol well no wonder then. Thanks both. 

A previous partner and I did a sex and consent course together online. I think it's helped me be kinder in relationships. 

Useful in general. 

More useful if you: 

- have sex casually 
- see harm in your relationships and want to grow
- are poly

As I've said elsewhere, I think a very small proportion of people in EA are responsible for most of the relationship harms. Some are bad actors, who need to be removed; some are malefactors, who have either lots of interactions or engage in high-risk behaviours and accidentally cause harm. I would guess I have more traits of the second category than almost all of you. So people like me should do the most work to change.

So most of you probably don't need this, but if you are in some of the above groups, I'd recommend a course like this. Save yourself the heartache of upsetting people you care about. 

Happy to DM.

https://dandelion.events/e/pd0zr?fbclid=IwAR0cIXFowU7R4dHZ4ptfpqsnnhdnLIJOfM_DjmS_5HR-rgQTnUzBdtQEnjE 
 

I talked to someone outside EA the other day who said that in a competitive tender they wouldn't apply to EA funders, because they thought the process would likely go to someone with connections to OpenPhil. 

Seems bad.

Dear reader,

You are an EA, if you want to be. Reading this forum is enough. Giving a little of your salary effectively is enough. Trying to get an impactful job is enough. If you are trying even with a fraction of your resources to make the world better and chatting with other EAs about it, you are one too.

Being able to agree and disagreevote on posts feels like it might be great. Props to the forum team.

4
Habryka
Looking forward to how it plays out! LessWrong made the intentional decision to not do it, because I thought posts are too large and have too many claims and agreement/disagreement didn't really have much natural grounding any more, but we'll see how it goes. I am glad to have two similar forums so we can see experiments like this play out. 
4
NickLaing
My hope would be that it would allow people to decouple the quality of the post and whether they agree with it or not. Hopefully people could even feel better about upvoting posts they disagreed with (although based on comments that may be optimistic). Perhaps combined with a possible tweak in what upvoting means (as mentioned by a few people) - someone mentioned we could change "how much do you like this overall" to something that moves away from basing the reaction on emotion. I think someone suggested something like "Do you think this post adds value?" (That's just a real hack at the alternative, I'm sure there are far better ones.)
4
Nathan Young
I think another option is to have reactions on a paragraph level. That would be interesting.

Several journalists (including those we were happy to have write pieces about WWOTF) have contacted me but I think if I talk to them, even carefully, my EA friends will be upset with me. And to be honest that upsets me.

We are in the middle of a mess of our own making. We deserve scrutiny. Ugh, I feel dirty and ashamed and frustrated.

To be clear, I think it should be your own decision to talk to journalists, but I do also think that it's just better for us to tell our own story on the EA Forum and write comments, and not give a bunch of journalists the ability to greatly distort the things we tell them in a call, with a platform and microphone that gives us no opportunity to object or correct things. 

I have been almost universally appalled at the degree to which journalists straightforwardly lie in interviews, take quotes massively out of context, or make up random stuff related to what you said, and I do think it's better that if you want to help the world understand what is going on, that you write up your own thoughts in your own context, instead of giving that job to someone else.

1
ChanaMessinger
<3

I have heard one anecdote of an EA saying that they would be less likely to hire someone on the basis of their religion because it would imply they were less good at their job (specifically, less intelligent/epistemically rigorous). I don't think they were involved in hiring, but I don't think anyone should hold this view.

Here is why:

  • As soon as you are in a hiring situation, you have much more information than priors. Even if it were true that, say, people with ADHD[1] were less rational, then the interview process should provide much more information than such a prior. If that's not the case, get a better interview process; don't start being prejudiced!
  • People don't mind meritocracy, but they want a fair shake. If I heard that people had a prior that ADHD folks were less likely to be hard working, regardless of my actual performance in job tests, I would be less likely to want to be part of this community. You might lose my contributions. It seems likely to me that we come out ahead by ignoring small differences in groups so people don't have to worry about this. People are very sensitive to this. Let's agree not to defect. We judge on our best guess of your performance, not on appearances. 
  • I would b
... (read more)

I would be unsurprised if this kind of thinking cut only one way. Is anyone suggesting they wouldn't hire poly people because of the increased drama or men because of the increased likelihood of sexual scandal? 

In the wake of the financial crisis it was not uncommon to see suggestions that banks etc. should hire more women to be traders and risk managers because they would be less temperamentally inclined towards excessive risk taking.

3
Nathan Young
I have not heard such calls in EA, which was my point. But neat example.
6
Joseph Lemien
These thoughts are VERY rough and hand wavy. I think that we have more-or-less agreed as societies that there are some traits that it is okay to use to make choices about people (mainly: their actions/behaviors), and there are some traits that it is not okay to use (mainly: things that the person didn't choose and isn't responsible for). Race, religion, gender, and the like are widely accepted[1] as not socially acceptable traits to use when evaluating people's ability to be a member of a team.[2] But there are other traits that we commonly treat as acceptable to use as the basis of treating people differently, such as what school someone went to, how many years of work experience they have, if they have a similar communication style as us, etc. I think I might split this into two different issues.
1. One issue is: it isn't very fair to give or withhold jobs (and other opportunities) based on things that people didn't really have much choice in (such as where they were born, how wealthy their parents were, how good of an education they got in their youth, etc.)
2. A separate issue is: it is ineffective to make employment decisions (hiring, promotions, etc.) based on things that don't predict on-the-job success.
Sometimes these things line up nicely (such as how it isn't fair to base employment decisions on hair color, and it is also good business to not base employment decisions on hair color). But sometimes they don't line up so nicely: I think there are situations where it makes sense to use "did this person go to a prestigious school" to make employment decisions because that will get you better on-the-job performance; but it also seems unfair because we are in a sense rewarding this person for having won the lottery.[3] In a certain sense I suppose this is just a mini rant about how the world is unfair. Nonetheless, I do think that a lot of conversations about hiring and discrimination get the two different issues conflated. 1. ^ People's perspecti
0
quinn
I know lots of people with lots of dispositions experience friction with just declining their parents' religions, but that doesn't mean I "get it" i.e., conflating religion with birth lotteries and immutability seems a little unhinged to me.  There may be a consensus that it's low status to say out loud "we only hire harvard alum" or maybe illegal (or whatever), but there's not a lot of pressure to actually try reducing implicit selection effects that end up in effect quite similar to a hardline rule. And I think harvard undergrad admissions have way more in common with lotteries than religion does!  I think the old sequencesy sort of "being bad at metaphysics (rejecting reductionism) is a predictor of unclear thinking" is fine! The better response to that is "come on, no one's actually talking about literal belief in literal gods, they're moreso saying that the social technologies are valuable or they're uncomfortable just not stewarding their ancestors' traditions" than like a DEI argument. 
4
Nathan Young
There is more to get into here but two main things:
* I guess some EAs, and some who I think do really good work, do literally believe in literal gods
* I don't actually think this is that predictive. I know some theists who are great at thinking carefully and many atheists who aren't. I reckon I could distinguish the two in a discussion better than by rejecting the former out of hand.
5
Aaron Gertler 🔸
Some feedback on this post: this part was confusing. I assume that what this person said was something like "I think a religious person would probably be harder to work with because of X", or "I think a religious person would be less likely to have trait Y", rather than "religious people are worse at jobs". The specifics aren't very important here, since the reasons not to discriminate against people for traits unrelated to their qualifications[1] are collectively overwhelming. But the lack of specifics made me think to myself: "is that actually what they said?". It also made it hard to understand the context of your counterarguments, since there weren't any arguments to counter.  1. ^ Religion can sometimes be a relevant qualification, of course; if my childhood synagogue hired a Christian rabbi, I'd have some questions. But I assume that's not what the anecdotal person was thinking about.
7
Kirsten
The person who was told this was me, and the person I was talking to straight up told me he'd be less likely to hire Christians because they're less likely to be intelligent. Please don't assume that EAs don't actually say outrageously offensive things - they really do sometimes! Edit: A friend told me I should clarify this was a teenage edgelord - I don't want people to assume this kind of thing gets said all the time!
8
Nathan Young
And since posting this I've said this to several people, and one was like "yeah no I would downrate religious people too". I think a poll on this could be pretty uncomfortable reading. If you don't think so, run it and see. Put it another way: would EAs discriminate against people who believe in astrology? I imagine more than the base rate. Part of me agrees with that, part of me thinks it's norm-harming to do. But I don't think this one is "less than the population".
6
Aaron Gertler 🔸
That's exactly what I mean!  "I think religious people are less likely to have trait Y" was one form I thought that comment might have taken, and it turns out "trait Y" was "intelligence". Now that I've heard this detail, it's easier to understand what misguided ideas were going through the speaker's mind. I'm less confused now. "Religious people are bad at jobs" sounds to me like "chewing gum is dangerous" — my reaction is "What are you talking about? That sounds wrong, and also... huh?"  By comparison, "religious people are less intelligent" sounds to me like "chewing gum is poisonous" — it's easier to parse that statement, and compare it to my experience of the world, because it's more specific. ***** As an aside: I spend a lot of time on Twitter. My former job was running the EA Forum. I would never assume that any group has zero members who say offensive things, including EA.
5
Linch
I think the strongest reason to not do anything that even remotely looks like employer discrimination based on religion is that it's illegal, at least for the US, UK, and European Union countries, which likely jointly encompasses >90% of employers in EA.  (I wouldn't be surprised if this is true for most other countries as well, these are just the ones I checked).
4
Jason
There's also the fact that, as a society and subject to certain exceptions, we've decided that employers shouldn't be using an employee's religious beliefs or lack thereof as an assessment factor in hiring. I think that's a good rule from a rule-utilitarian framework. And we can't allow people to utilize their assumptions about theists, non-theists, or particular theists in hiring without the rule breaking down. The exceptions generally revolve around personal/family autonomy or expressive association, which don't seem to be in play in the situation you describe.
4
Joseph Lemien
I think that I generally agree with what you are suggesting/proposing, but there are all kinds of tricky complications. The first thing that jumps to my mind is that sometimes hiring the person who seems most likely to do the best job ends up having a disparate impact, even if there was no disparate treatment. This is not a counterargument, of course, but more so a reminder that you can do everything really well and still end up with a very skewed workforce.
3
Timothy Chan
I generally agree with the meritocratic perspective. It seems a good way (maybe the best?) to avoid tit-for-tat cycles of "those holding views popular in some context abuse power -> those who don't like the fact that power was abused retaliate in other contexts -> in those other contexts, holding those views results in being harmed by people in those other contexts who abuse power". Good point about the priors. Strong priors about these things seem linked to seeing groups as monoliths with little within-group variance in ability. Accounting for the size of variance seems under-appreciated in general. E.g., if you've attended multiple universities, you might notice that there's a lot of overlap between people's "impressiveness", despite differences in official university rankings. People could try to be less confused by thinking in terms of mean/median, variance, and distributions of ability/traits more, rather than comparing groups by their point estimates. Some counter-considerations:
* Religion and race seem quite different. Religion seems to come with a bunch of normative and descriptive beliefs that could affect job performance - especially in EA - and you can't easily find out about those beliefs in a job interview. You could go from one religion to another, from no religion to some religion, or some religion to no religion. The (non)existence of that process might give you valuable information about how that person thinks about/reflects on things and whether you consider that to be good thinking/reflection.
* For example, from an irreligious perspective, it might be considered evidence of poor thinking if a candidate thinks the world will end in ways consistent with those described in the Book of Revelation, or thinks that we're less likely to be in a simulation because a benevolent, omnipotent being wouldn't allow that to happen to us.
* Anecdotally, on average, I find that people who have gone through the process of abandoning the religion they were
2
Joseph Lemien
Oh, another thought. (sorry for taking up so much space!) Sometimes something looks really icky, such as evaluating a candidate via religion, but is actually just standing in for a different trait. We care about A, and B is somewhat predictive of A, and A is really hard to measure, then maybe people sometimes use B as a rough proxy for A. I think that this is sometimes used as the justification for sexism/racism/etc, where the old-school racist might say "I want a worker who is A, and B people are generally not A." If the relationship between A and B is non-existent or fairly weak, then we would call this person out for discriminating unfairly. But now I'm starting to think of what we should do if there really is a correlation between A and B (such as sex and physical strength). That is what tends to happen if a candidate is asked to do an assessment that seems to have nothing to do with the job, such as clicking on animations of colored balloons: it appears to have nothing to do with the job, but it actually measures X, which is correlated with Y, which predicts on-the-job success. I'd rather be evaluated as an individual than as a member of a group, and I suspect that in-group variation is greater than between-group variation, echoing what you wrote about the priors being weak.
0
Nathan Young
You don't need to apologise for taking up space! It's a short form, write what you like.

I think EAs have a bit of an entitlement problem. 

Sometimes we think that since we are good we can ignore the rules. Seems bad

[This comment is no longer endorsed by its author]

As with many statements people make about people in EA, I think you've identified something that is true about humans in general. 

I think it applies less to the average person in EA than to the average human. I think people in EA are more morally scrupulous and prone to feeling guilty/insufficiently moral than the average person, and I suspect you would agree with me given other things you've written. (But let me know if that's wrong!)

I find statements of the type "sometimes we are X" to be largely uninformative when "X" is a part of human nature. 

Compare "sometimes people in EA are materialistic and want to buy too many nice things for themselves; EA has a materialism problem" — I'm sure there are people in EA like this, and perhaps this condition could be a "problem" for them. But I don't think people would learn very much about EA from the aforementioned statements, because they are also true of almost every group of people.

Can we have some people doing AI Safety podcast/news interviews as well as Yud?

 I am concerned that he's gonna end up being the figurehead here. I assume someone is thinking about this, but I'm posting here to ensure that it is said - I'm pretty sure people are working on it, but I think it's good to say anyway.

We aren't a community who says "I guess he deserves it"; we say "who is the best person for the job?". Yudkowsky, while he is an expert, isn't a median voice. His estimates of P(doom) are on the far tail of EA experts here. So if I could pick one person I wouldn't pick him, and frankly I wouldn't pick just one person.

Some other voices I'd like to see on podcasts/ interviews:

  • Toby Ord
  • Paul Christiano
  • Ajeya Cotra
  • Amanda Askell
  • Will MacAskill
  • Joe Carlsmith*
  • Katja Grace*
  • Matthew Barnett*
  • Buck Shlegeris
  • Luke Muehlhauser

Again, I'm not saying no one has thought of this (80% they have). But I'd like to be 97% sure, so I'm flagging it.

*I am personally fond of this person so am biased

8
harfe
I am a bit confused by your inclusion of Will MacAskill. Will has been on a lot of podcasts, while for Eliezer I only remember 2. But your text sounds a bit like you worry that Eliezer will be too much on podcasts and MacAskill too little (I don't want to stop MacAskill from going on podcasts btw. I agree that having multiple people present different perspectives on AGI safety seems like a good thing).
4
Nathan Young
I think in the current discourse I'd like to see more of Will, who is a balanced and clear communicator.
8
RobertM
I don't think you should be optimizing to avoid extreme views, but in favor of those with the most robust models, who can also communicate them effectively to the desired audience.  I agree that if we're going to be trying anything resembling public outreach it'd be good to have multiple voices for a variety of reasons. On the first half of the criteria I'd feel good about Paul, Buck, and Luke.  On the second half I think Luke's blog is a point of evidence in favor.  I haven't read Paul's blog, and I don't think that LessWrong comments are sufficiently representative for me to have a strong opinion on either Paul or Buck.

I made a quick (and relatively uncontroversial) poll on how people are feeling about EA. I'll share if we get 10+ respondents.

3
huw
Without reading too much into it, there's a similar amount of negativity about the state of EA as there is a lack of confidence in its future. That suggests to me that there's a lot of people who think EA should be reformed to survive (rather than 'it'll dwindle and that's fine' or 'I'm unhappy with it but it'll be okay')?
1
Nathan Young
Currently 27-ish[1] people have responded. Full results: https://viewpoints.xyz/polls/ea-sense-check/results
Statements people agree with, statements where there is significant conflict, and statements where people aren't sure or dislike the statement are shown in the screenshots at the link.
1. ^ The applet makes it harder to track numbers than the full site. 

I notice I am pretty skeptical of much longtermist work and the idea that we can make progress on this stuff just by thinking about it.

I think future people matter, but I will be surprised if, after x-risk reduction work, we can find tens of billions of dollars of work that isn't busywork and shouldn't instead be spent attempting to learn how to get e.g. nations out of poverty.

I would appreciate being able to vote on forum articles with both agree/disagree and upvote/downvote. 

There are lots of things where I think they are false but interesting, or true but boring.

I sense that it's good to publicly name serial harassers who have been kicked out of the community, even if the accuser doesn't want them to be named. Other people's feelings matter too, and I sense many people would like to know who they are. 

I think there is a difference between different outcomes, but if you've been banned from EA events then you are almost certainly someone I don't want to invite to parties etc.

If you type "#" followed by the title of a post and press enter it will link that post.

Example:
Examples of Successful Selective Disclosure in the Life Sciences 

This is wild

1
EdoArad
OMG

I guess African, Indian and Chinese voices are underrepresented in the AI governance discussion. And in the unlikely case we die, we all die, and I think it's weird that half the people who will die have no one loyal to them in the discussion.

We want AI that works for everyone, and it seems likely you want people who can represent the billions who don't currently have a loyal representative.

I'm actually more concerned about the underrepresentation of certain voices as it applies to potential adverse effects of AGI (or even near-AGI) on society that don't involve all of us dying. In the everyone-dies scenario, I would at least be similarly situated to people from Africa, India, and China in terms of experiencing the exact same bad thing that happens. But there are potential non-fatal outcomes, like locking in current global power structures and values, that affect people from non-Western countries much differently (and more adversely) than they'd affect people like me.

7
Timothy Chan
Yeah, in a scenario with "nation-controlled" AGI, it's hard to see people from the non-victor sides not ending up (at least) as second-class citizens - for a long time. The fear/lack of guarantee of not ending up like this makes cooperation on safety more difficult, and the fear also kind of makes sense? Great if governance people manage to find a way to alleviate that fear - if it's even possible. Heck, even allies of the leading state might be worried - it doesn't feel too good to end up as a vassal state. (Added later (2023-06-02): It may be a question that comes up as AGI discussions become mainstream.) I wouldn't rule out both Americans and Chinese outside of their respective allied territories being caught in the crossfire of a US-China AI race. Political polarization on both sides in the US is also very scary.
3
Nathan Young
Sorry, yes. I think that ideally we don't all die. And in those situations voices loyal to representative groups seem even more important.
5
Joseph Lemien
This strikes me as another variation of "EA has a diversity problem." Good to keep in mind that it is not just about progressive notions of inclusivity, though. There may be VERY significant consequences for the people in vast swaths of the world if a tiny group of people make decisions for all of humanity. But yeah, I also feel that it is a super weird aspect of the anarchic system (in the international relations sense of anarchy) that most of the people alive today have no one representing their interests. It also seems to echo consistent critiques of development aid not including people in decision-making (along the lines of Ivan Illich's To Hell with Good Intentions, or more general post-colonial narratives).
1
harfe
What do "have no one loyal to them" and "with a loyal representative" mean? Are you talking about the Indian government? Or are you talking about EAs taking part in discussions, such as yourself? (In which case, who are you loyal to?)
3
Nathan Young
I think that's part of the problem. Who is loyal to the Chinese people? And I don't think I'm good here. I think I try to be loyal to them, but I don't know what the Chinese people want, and I think if I try and guess I'll get it wrong in some key areas. I'm reminded of when GiveWell (?) asked recipients how they would trade money for children's lives and they really fucking loved saving children's lives. If we are doing things for others' benefit we should take their weightings into account.

Unbalanced karma is good, actually. It means that the moderators have to do less. I like the takes of the top users more than those of the median user, and I want them to have more, but not total, influence. 

Appeals to fairness don't interest me - why should voting be fair?

I have more time for transparency.

I notice we are great at discussing stuff but not great at coming to conclusions. 

I wish the forum had a better setting for "I wrote this post and maybe people will find it interesting but I don't want it on the front page unless they do because that feels pretentious"

Seems worth considering that

A) EA has a number of characteristics of a "High Demand Group" (cult). This is a red flag and you should wrestle with it yourself.

B) Many of the "Sort of"s are peer pressure. You don't have to do these things. And if you don't want to, don't!

In what sense is it "sort of" true that members need to get permission from leaders to date, change jobs, or marry?

0
Nathan Young
I think there is starting to be social pressure on who to date. And there has been social pressure for which jobs to take for a while.

I think that one's a reach, tbh.

(I also think the one about using guilt to control is a stretch.)

My call: EA gets 3.9 out of 14 possible cult points.

The group is focused on a living leader to whom members seem to display excessively zealous, unquestioning commitment.

No

The group is preoccupied with bringing in new members.

Yes (+1)

The group is preoccupied with making money.

Partial (+0.8)

Questioning, doubt, and dissent are discouraged or even punished.

No

Mind-numbing techniques (such as meditation, chanting, speaking in tongues, denunciation sessions, debilitating work routines) are used to suppress doubts about the group and its leader(s).

No

The leadership dictates sometimes in great detail how members should think, act, and feel (for example: members must get permission from leaders to date, change jobs, get married; leaders may prescribe what types of clothes to wear, where to live, how to discipline children, and so forth).

No

The group is elitist, claiming a special, exalted status for itself, its leader(s), and members (for example: the leader is considered the Messiah or an avatar; the group and/or the leader has a special mission to save humanity).

Partial (+0.5)

The group has a polarized us- versus-them mentality, which causes conflict with the w

... (read more)
4
pseudonym
1
Peter Wildeford
I think you may have very high standards? By these standards, I don't think there are any communities at all that would score 0 here. ~ I was not aware of "What would SBF do" stickers. Hopefully those people feel really dumb now. I definitely know about EY hero worship but I was going to count that towards a separate rationalist/LW cult count instead of the EA cult count.
5
pseudonym
I think where we differ is that I'm not making a comparison of whether EA is worse than this compared to other groups; if every group scores in the range of 0.5-1 I'll still score 0.5 as 0.5, and not scale 0.5 down to 0 and 0.75 down to 0.5. Maybe that's the wrong way to approach it, but I think the least culty organization can still have cult-like tendencies, instead of being 0 by definition. Also if it's true that someone working at GPI was facing these pressures from "senior scholars in the field", then that does seem like reason for others to worry. There has also been a lot of discussion on the forum about the types of critiques that seem like they are acceptable and the ones that aren't, etc. Your colleague also seems to believe this is a concern, for example, so I'm currently inclined to think that 0.2 is pretty reasonable and I don't think I should update much based on your comment - but happy for more pushback!
4
MHR
I think the elitism / "special mission to save humanity" item has to get more than 0.2, right? Being elitist and on a special mission to save humanity is a concerningly good descriptor of at least a decent chunk of EA. 
3
Peter Wildeford
Ok updated to 0.5. I think "the leader is considered the Messiah or an avatar" being false is fairly important.
1
Paul_Crowley
>> The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).
> Partial (+0.5)
This seems too high to me, I think 0.25 at most. We're pretty strong on "the ends don't justify the means".
>> The leadership induces guilt feelings in members in order to control them.
> No
This on the other hand deserves at least 0.25...

I don't think it makes sense to say that the group is "preoccupied with making money". I expect that there's been less focus on this in EA than in other groups, although not necessarily due to any virtue, but rather because of how lucky we have been in having access to funding.

EAs please post your job posting to twitter

Please post your jobs to Twitter and reply with @effective_jobs. It takes 5 minutes, and the jobs I've posted and then tweeted have got thousands of impressions. 

Or just DM me on twitter (@nathanpmyoung) and I'll do it. I think it's a really cheap way of getting EAs to look at your jobs. This applies to impactful roles in and outside EA.

Here is an example of some text:

-tweet 1

Founder's Pledge Growth Director

@FoundersPledge are looking for someone to lead their efforts in growing the amount that tech entrepreneurs give to effective charities when they IPO. 

Salary: $135 - $150k 
Location: San Francisco

https://founders-pledge.jobs.personio.de/job/378212

-tweet 2, in reply

@effective_jobs

-end

I suggest it should be automated but that's for a different post.


GiveDirectly has a President (Rory Stewart) paid $600k, and is hiring a Managing Director. I originally thought they had several other similar roles (because I looked on the website) but I talked to them and seemingly that is not the case. Below is the tweet that tipped me off, but I think it is just mistaken.

One could still take issue with the $600k (though I don't really).

https://twitter.com/carolinefiennes/status/1600067781226950656?s=20&t=wlF4gg_MsdIKX59Qqdvm1w 

Seems in line with CEO pay for US nonprofits with >$100M in budget, at least when I spot-check random charities near the end of this list.

I feel confused about the president/CEO distinction however.

-9
NickLaing

Someone told me they don't bet as a matter of principle, and that EA/rats take their opinions less seriously as a result. Some thoughts:

  • I respect individual EAs' preferences. I regularly tell friends to do things they are excited about, to look after themselves, etc etc. If you don't want to do something but feel you ought to, maybe think about why, but I will support you not doing it. If you have a blanket ban on gambling, fair enough. You are allowed to not do things because you don't want to
  • Gambling is addictive, if you have a problem with it,
... (read more)

I don't bet because I feel it's a slippery slope. I also strongly dislike how opinions and debates in EA are monetised, as this strengthens even more the neoliberal vibe EA already has, so my drive to refrain from doing this in EA is stronger than outside.

Edit: and I too have gotten dismissed by EAs for it in the past.

-2
Nathan Young
* I don't want you to do something you don't want to.
* A slippery slope to what?
3
Guy Raveh
To gambling on anything else and taking an actual financial risk.
2
Nathan Young
Yeah, I guess if you think there is a risk of gambling addiction, don't do it. But I don't know that that's a risk for many. Also I think many of us take a financial risk by being involved in EA. We are making big financial choices.
2
Guy Raveh
There's a difference between using money to help others and using it for betting?
2
Nathan Young
Yes obviously, but not in the sense that you are investing resources. Is there a difference between the financial risk of a bet and of a standard investment? Not really, no.
6
DC
I don't bet because it's not a way to actually make money given the frictional costs to set it up, including my own ignorance about the proper procedure and having to remember it and keep enough capital for it. Ironically, people who are betting in this subculture are usually cargo culting the idea of wealth-maximization with the aesthetics of betting with the implicit assumption that the stakes of actual money are enough to lead to more correct beliefs when following the incentives really means not betting at all. If convenient, universal prediction markets weren't regulated into nonexistence then I would sing a different tune.
2
Nathan Young
I guess I do think the "wrong beliefs should cost you" aspect is a lot of the gains. I guess I also think that bets being able to be at the scale of the disagreement is important, but I think that's a much more niche view.
5
Jason
There are a number of possible reasons that the individual might not want to talk about publicly:
* A concern about gambling being potentially addictive for them;
* Being relatively risk-averse in their personal capacity (and/or believing that their risk tolerance is better deployed for more meaningful things than random bets);
* Being more financially constrained than their would-be counterparts; and
* Awareness of, and discomfort with, the increased power the betting norm could give people with more money.
On the third point: the bet amount that would be seen as meaningful will vary based on the person's individual circumstances. It is emotionally tough to say -- no, I don't have much money, $10 (or whatever) would be a meaningful bet for me even though it might take $100 (or whatever) to be meaningful to you.
On the fourth point: if you have more financial resources, you can feel freer with your bets while other people need to be more constrained. That gives you more access to bet-offers as a rhetorical tool to promote your positions than people with fewer resources. It's understandable that people with fewer resources might see that as a financial bludgeon, even if not intended as such. 
-2
Nathan Young
I think the first one is good, the others not so much. I think there is something else going on here.
5
Sol3:2
I have yet to see anyone in the EA/rat world make a bet for sums that matter, so I really don't take these bets very seriously. They also aren't a great way to uncover people's true probabilities because if you are betting for money that matters you are obviously incentivized to try to negotiate what you think are the worst possible odds for the person on the other side that they might be dumb enough to accept.
2
Nathan Young
Kind of fair. I'm pretty sure I've seen bets in the $1000s though.
2
Radical Empath Ismam
If anything... I probably take people less seriously if they do bet (not saying that's good or bad, but just being honest), especially if there's a bookmaker/platform taking a cut.
2
Nathan Young
I think this is more about 1-1 bets. I guess it depends on whether they win or lose on average. I still think that knowing I barely win is useful self-knowledge.

I strongly dislike the following sentence on effectivealtruism.org:

"Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on."

It reads to me as arrogant, and epitomises the worst caricatures my friends make of EAs. Read it in a snarky voice (such as one might use if they struggled with the movement and were looking to do research): "Rather than just doing what feels right..."

I suggest it gets changed to one of the following:

  • "We use evidence and careful analysis to find the very best causes to work on."
  • "It's great when anyone does a kind action no matter how small or effective. We have found value in using evidence and careful analysis to find the very best causes to work on."

I am genuinely sure whoever wrote it meant well, so thank you for your hard work.

Are the two bullet points two alternative suggestions? If so, I prefer the first one.

8
Matt_Lerner
I also thought this when I first read that sentence on the site, but I find it difficult (as I'm sure its original author does) to communicate its meaning in a subtler way. I like your proposed changes, but to me the contrast presented in that sentence is the most salient part of EA. To me, the thought is something like this: "Doing good feels good, and for that reason, when we think about doing charity, we tend to use good feeling as a guide for judging how good our act is. That's pretty normal, but have you considered that we can use evidence and analysis to make judgments about charity?" The problem IMHO is that without the contrast, the sentiment doesn't land. No one, in general, disagrees in principle with the use of evidence and careful analysis: it's only in contrast with the way things are typically done that the EA argument is convincing.
3
Nathan Young
I would choose your statement over the current one. I think the sentiment lands pretty well even with a very toned-down statement. The movement is called "effective altruism". Ingroups are often worried that outgroups won't get their core differences, when generally that's all outgroups know about them. I don't think anyone who visits that website will fail to realise that effectiveness is a core feature. And I don't think we need to be patronising (as EAs are caricatured as being in conversations I have) in order to make known something that everyone already knows.

Being open minded and curious is different from holding that as part of my identity. 

Perhaps I never reach it. But it seems to me that "we are open-minded people, so we probably behave open-mindedly" is false.

Or more specifically, I think it's good that EAs want to be open-minded, but I'm not sure that we are, purely because we listen graciously, run criticism contests, and talk about cruxes.

The problem is the problem. Being open-minded requires being open to changing one's mind in difficult situations, or ones where our views are already set. And I don't have a way that's guaranteed to get us over that line. 

Clear benefits, diffuse harms

It is worth noting when systems deliver benefits in a few obvious ways but impose many small harms. An example is blocking housing. It benefits the neighbours a lot - they don't have to put up with construction nearby - and the people who are harmed are just scattered, marginal people who could have afforded a home but now can't. 

But these harms are real and should be tallied.

Much recent discussion in EA has suggested common-sense risk-reduction strategies which would stop clearly bad behaviour. Often we all agree on the clearly bad behaviour... (read more)

It has an emotional impact on me to note that FTX claims are now trading at 50%. This means that, in expectation, people are gonna get about half of what their assets were worth, had they held them until this time.

I don't really understand whether it should change the way we understand the situation, but I think a lot of people's life savings were wrapped up here and half is a lot better than nothing.

src: https://www.bloomberg.com/news/articles/2023-10-25/ftx-claims-rise-after-potential-bidders-for-shuttered-exchange-emerge 

I am not confident on the re... (read more)

I think if I knew that I could trade "we all obey some slightly restrictive set of romance norms" for "EA becomes 50% women in the next 5 years" then that's a trade I would advise we take. 

That's a big if. But seems trivially like the right thing to do - women do useful work and we should want more of them involved.

To say the unpopular reverse statement: if I knew that such a set of norms wouldn't improve wellbeing on some average across women in EA and EA as a whole, then I wouldn't take the trade. 

Seems worth acknowledging there are right answers here, if only we knew the outcomes of our decisions.

In defence of Will MacAskill and Nick Beckstead staying on the board of EVF

While I've publicly said that on priors they should be removed unless we hear arguments otherwise, I was kind of expecting someone to make those arguments. If no one will, I will.

MacAskill

MacAskill is very clever, personally kind, is a superlative networker and communicator. Imo he oversold SBF, but I guess I'd do much worse in his place. It seems to me that we should want people who have made mistakes and learned from them.  Seems many EA orgs would be glad to have someone like... (read more)

I've been musing about some critiques of EA, and one I like is "what's the biggest thing that we are missing?"

In general, I don't think we are missing things (lol) but here are my top picks:

  • It seems possible that we reach out to sciency tech people because they are most similar to us. While this may genuinely be the cheapest way to get people now, there may be costs to the community in terms of diversity of thought (most sci/tech people are more similar to one another than the general population is)
    • I'm glad to see more outreach to people in developing nations
    • It seems obvious t
... (read more)
7
titotal
  More likely to me is a scenario of diminishing returns. I.e., tech people might be the most important to first order, but there are already a lot of brilliant tech people working on the problem, so one more won't make much of a difference. Whereas a few brilliant policy people could devise a regulatory scheme that penalises reckless AI deployment, etc., making more of a difference on the margin. 
2
Arvin
+1 for policy people

I would like to see posts give you more karma than comments (which would hit me hard). It seems like a highly upvoted post is waaaaay more valuable than 3 upvoted comments on that post, but it's pretty often that the latter gives more karma than the former.

6
ChanaMessinger
Sometimes comments are better, but I think I agree they shouldn't be worth exactly the same.
6
ChanaMessinger
People might also have a lower bar for upvoting comments.
-1
Nathan Young
There you go, 3 mana. Easy peasy.
2
Pat Myron
A simple first step would be showing both separately, like Reddit does.
2
Nathan Young
You can see them separately, but it's how they combine that matters. 
3
Pat Myron
I know you can figure them out, but I don't see them presented separately on user pages. Am I missing something? Is it shown on the website somewhere?
1
jimrandomh
They aren't currently shown separately anywhere. I added it to the ForumMagnum feature-ideas repo but not sure whether we'll wind up doing it.
3
Nathan Young
They are shown separately here: https://eaforum.issarice.com/userlist?sort=karma 
1
Pat Myron
Is there a link to vote to show interest?