
With this post I want to encourage an examination of value-alignment between members of the EA community. I lay out reasons to believe that strong value-alignment between EAs can be harmful in the long run.

The EA mission is to bring more value into the world. This is a rather uncertain endeavour and many questions about the nature of value remain unanswered. Errors are thus unavoidable, which means the success of EA depends on having good feedback mechanisms in place to ensure mistakes can be noticed and learned from. Strong value-alignment can weaken feedback mechanisms.

EAs prefer to work with people who are value-aligned because they set out to maximise impact per resource expended. It is efficient to work with people who agree. But a value-aligned group is likely intellectually homogenous and prone to breed implicit assumptions or blind spots.

I have also noticed particular tendencies in the EA community (elaborated in the section on homogeneity, hierarchy and intelligence) which generate additional cultural pressures towards value-alignment, make the problem worse over time and lead to a gradual deterioration of the corrigibility mechanisms around EA.

Intellectual homogeneity is efficient in the short term but counter-productive in the long run. Value-alignment allows for short-term efficiency, but the true goal of EA – to be effective in producing value in the long term – might not be met.

Disclaimer

All of this is based on my experience of EA over the period 2015-2020. Experiences differ, and I share this to test how generalisable my experiences are. I used to hold my views lightly and I still give credence to other views on developments in EA. But I am getting more, not less, worried over time, particularly because other members have expressed similar views and worries to me but have not spoken out about them because they fear losing respect or funding. This is precisely the erosion of critical feedback mechanisms that I point out here.

I have a solid but not unshakable belief that the theoretical mechanism I outline is correct, but I do not know to what extent it takes effect in EA. I am also not sure whether those who will disagree with me know to what extent this mechanism is at work in their own community. What I am sure of, however (on the basis of feedback from people who read this post pre-publication), is that my impressions of EA are shared by others within the community, and that they are the reason why some have left EA or never quite dared to enter. This alone is reason for me to share this - in the hope that a healthy approach to critique and a willingness to change in response to feedback from the external world are still intact.

I recommend that the impatient reader skip forward to the section on Feedback Loops and Consequences.

Outline

I will outline the reasons that lead EAs to prefer value-alignment and search for a definition of what value-alignment means. I then describe cultural traits of the community which play a role in amplifying this preference, and finally evaluate what effect value-alignment might have on EA's feedback loops and goals.


Axiomaticity

Movements rest on both explicit and obscure assumptions. They make explicit assumptions: they stand for something and exist with some purpose. An explicit assumption is, by my definition, one that has been examined and consciously agreed upon.

EA explicitly assumes that one should maximise the expected value of one's actions with respect to a goal. Goals differ between members but mostly do not diverge greatly. They may be the reduction of suffering, the maximisation of hedons in the universe or the fulfilment of personal preferences, among others. But irrespective of individual goals, EAs mostly agree that resources should be spent effectively and thus efficiently. Having more resources available is better because more can be done. Ideally, every dollar spent should maximise its impact, one should help as many moral patients as possible, and thus do the most good that one can.

Obscure assumptions, in contrast, are less obvious and more detrimental. I divide obscure assumptions into two types: mute and implicit. A mute assumption is one which its believer does not recognise as an assumption. They are not aware they hold it and thus do not see alternatives. Mute assumptions are not questioned, discussed or interrogated. An implicit assumption is, by my definition here, one which the believer knows to be one of several alternatives, but which they believe without proper examination anyhow. Communities host mute and implicit assumptions in addition to explicit, agreed-upon assumptions. I sometimes think of these as parasitic assumptions: they are carried along without being chosen and can harm the host. Communities can grow on explicit assumptions, but obscure assumptions deteriorate a group's immunity to blind spots and biases. Parasitic assumptions feed off and nurture biases, which can eventually lead to false decisions.

The specific implicit assumptions of EA are debatable and should be examined in another post. To point out examples, I think there are good reasons to believe that many members share assumptions around, for example, transhumanism, neoliberalism, the value of technological progress, techno-fixes, the supreme usefulness of intelligence, IR realism or an elite-focused theory of change.

From Axioms to Value-Alignment

The explicit axiom of maximising value per resource spent turns internal value-alignment into an instrumental means to an end for EAs. A value-aligned group works frictionlessly and smoothly. Its members keep discussion about methodology, skills, approach, resources, norms, etc. to a minimum. A group that works well together is efficient and effective in reaching its next near-term goal. Getting along in a team originates in liking each other, and we humans tend to like people who are like us.

The Meaning of Value-Alignment

A definition of internal value-alignment is hard to come by despite frequent use of the term to describe people, organisations or future generations. There appears to be some generally accepted notion of what it means for someone to be value-aligned, but I have not found a concise public description.

Value-alignment is mentioned in discussions and publications by central EA organisations. CEA published this article, in which they state that 'it becomes more important to build a community of talented and value-aligned people, who are willing to flexibly shift priorities to the most high value causes. In other words, growing and shaping the effective altruism community into its best possible version is especially useful'. CEA speaks of a risk of 'dilution', which they define as 'An overly simplistic version of EA, or a less aligned group of individuals, could come to dominate the community, limiting our ability to focus on the most important problems'. Nick Beckstead's post from 2017 refers to value-aligned persons and organisations repeatedly. He says CEA leadership is value-aligned with OpenPhil in terms of helping EA grow, and other funders of CEA appear 'fairly value-aligned' with them. This document by MacAskill at GPI loosely refers to value-alignment as: 'where the donors in question do not just support your preferred cause, but also share your values and general worldview'. This article again calls for getting value-aligned people into EA, but it too lacks a definition ("…promoting EA to those who are value aligned. We should be weary of promoting the EA movement to those who are not value aligned, due to the downside of flooding the EA movement with non value-aligned people"). Value-alignment also appears not to be fixed: "it is likely that if people are persuaded to act more like EA members, they will shift their values to grow more value aligned".

According to the public writing I found, value-alignment could mean any of the following: supporting and spreading EA, having shared worldviews, focussing on the most important problems or doing the most high-value thing. Importantly, not being value-aligned is seen as having downsides: it can dilute, simplify or lead to wrong prioritisation.

It is probably in the interest of EA to have a more concise definition of value-alignment. It must be hard to evaluate how well EAs are aligned if a measure is lacking. Open questions remain: on what topics and to what extent should members agree in order to be considered 'aligned'? What fundamental values should one be aligned on? Is there a difference between being aligned and agreeing to particular values? To what degree must members agree? Does one agree to axioms, a way of living, a particular style of thinking? What must one be like to be one of the people 'who gets it'?

I get the impression that value-alignment means agreeing on a fundamental level: agreeing with the most broadly accepted values, methodologies, axioms, diets, donation schemes, memes and prioritisations of EA. The specific combination of adopted norms may differ from member to member, but if the number of adopted norms is sufficiently above an arbitrary threshold, then one is considered value-aligned. These basic values include an appreciation of critical thinking, which is why those who question and critique EA can still be considered value-aligned. Marginal disagreement is welcome. Central disagreements, however, can signal misalignment and are more easily considered inefficient. They dilute the efficacy and potential of the movement. There is sense in this view. Imagine having to repeatedly debate the efficacy of the scientific method with community members: it would be hard to get much done. Imagine in turn working with your housemates on a topic that everyone is interested in, cares about and has relevant skills for. Further efficiency gains can be made if the team shares norms around eating habits and the use of jargon and software, attends the same summer camps and reads the same books. Value-alignment is correctly appreciated for its effect on output per unit of work. But for these reasons I also expect value-alignment to be highly correlated with intellectual and cognitive homogeneity.

A Model of Amplifiers – Homogeneity, Hierarchy and Intelligence

The axioms of EA generate the initial preference for value-alignment. But additional, somewhat contingent cultural traits of EA amplify this pressure towards alignment over time. These traits are homogeneity, hierarchy and intelligence. I will explain each trait and try to show how it fosters the preference for value-alignment.

Homogeneity

EA is notably homogenous by traditional measures of diversity. Traditional homogeneity is not the same as cognitive homogeneity (which is why I treat them separately), but the former is probably indicative of the latter. For the purpose of this article I am only interested in cognitive diversity in EA, but there is little data on it. Advocates for traditional diversity metrics such as race, gender and class do so precisely because they track different ways of thinking. The data on diversity in EA suggests that decision-makers in EA do not see much value in prioritising diversification, since diversity remains consistently low.

Founding members of EA have similar philosophical viewpoints and educational backgrounds, and the same gender and ethnicity. That is neither surprising nor negative, but it is informative, since recent surveys (2017, 2018, 2015, 2014) show homogeneity in most traditional measures of diversity, including gender (male), ethnicity (white), age (young), education (high and similar degrees), location (Bay Area and London), and religion (non-religious). Survey results remain stable over time and it seems that current members are fairly similar to EA's founders with respect to these traits.

EAs could however have different worldviews and thus retain cognitive diversity, despite looking similar. In my own experience this is mostly not the case, but I cannot speak for others and without more data, I cannot know how common my experience is. This section is thus by no means attempting to provide conclusive evidence for homogeneity - but I do hope to encourage future investigations and specifically, more data collection.

Surveys show that EAs have similar views on normative ethics. This is interesting, because EA's axioms can be arrived at from many ethical viewpoints and because philosophers have comparatively distributed opinions (see meta-ethics and normative ethics in the PhilPapers survey). 25% of philosophers lean towards deontology, 18% towards virtue ethics and 23% towards consequentialism, but EAs give only 3% (2015, 2019), 2% (2014) and 4% (2017) to deontology, 5% (2015, 2014, 2017) and 7% (2019) to virtue ethics, and 69% (2015 and 2014), 64% (2017) and 81% (2019) to consequentialism. This is a heavy leaning towards consequentialism in comparison to another subgroup of humans who arguably spend more time thinking about the issue. One explanation is that consequentialism is correct and EAs are more accurate than the surveyed philosophers. The other is that something else (such as pre-selection, selected readings or groupthink) leads EAs to converge comparatively strongly.

In my own experience, EAs have strikingly similar political and philosophical views, similar media consumption and similar leisure interests. My mind conjures the image of a stereotypical EA shockingly easily. The EA range of behaviours and views is narrower than the range found in a group of students or in one nation. The stereotypical EA will use specific words and phrases, wear particular clothes, read particular sources and blogs, know of particular academics and even use particular mannerisms. I have found that a stereotyped description of the average EA does better at describing the individuals I meet than I would expect it to if the group were less homogenous.

EA is of course still small in comparison to a nation, so naturally the range of behaviours will be narrower. This narrow range is only significant because EA hopes to act on behalf of and in the interest of humanity as a whole, and humanity happens to be a lot more diverse.

That being said, it is simply insufficient to evaluate the level of cognitive homogeneity in EA on the basis of sparse data and my own experience. It would be beneficial to have more data on degrees of intellectual homogeneity across different domains.

Hierarchy

EA is hierarchically organised via central institutions. These institutions distribute funds, coordinate local groups, outline research agendas, prioritise cause areas and give donation advice. They include the Centre for Effective Altruism, the Open Philanthropy Project, the Future of Humanity Institute, the Future of Life Institute, Giving What We Can, 80,000 Hours, the Effective Altruism Foundation and others. Getting a job at these institutions comes with a gain in reputation.

EA members are often advised to donate to central EA organisations or to a meta-fund, which then redistributes money to projects that adhere to and foster EA principles. Every year, representative members from central organisations gather in what is called a 'leaders forum' to cultivate collaboration and coordination. The forums are selective and not open to everyone. Reports about the forums, or about decisions taken there, are sparse.

Individuals who work at these institutions go through a selection process which selects for EA values. Individuals sometimes move jobs between EA institutions, first being a recipient of funding, then granting funds to EA organisations and EA members. I am not aware of data about job traffic within EA, but it would be useful both for understanding the situation and for spotting conflicts of interest. Naturally, EA organisations will tend towards intellectual homogeneity if the same people move between institutions.

Intelligence

Below I outline three significant cultural norms in EA that relate to intelligence. The first is a glorification of intelligence. The second is a susceptibility to being impressed and intimidated by individuals perceived as highly intelligent, and thus to forming a fan base around them. The third is a sense of intellectual superiority over others, which can lead to epistemic insularity.

I do not expect all readers to share all my impressions and evidence for cultural traits will always be sparse and inconclusive. But if some of the below is true, then other EAs will have noticed these cultural trends as well and they can let this article be a nudge to give voice to their own observations.

The Conspicuous Roles of Intelligence in EA

Intelligence, as a concept and an asset, plays a dominant role in EA. Problems of any kind are thought solvable given enough intelligence: solve intelligence, solve everything else. Many expect that superintelligence can end all suffering, because EAs assume all suffering stems from unsolved problems. Working on artificial general intelligence is thus a top priority.

Intelligence is also a highly valued trait in the community. Surveys sometimes ask what IQ members have. I have noticed that the reputation of an EA appears to correlate strongly with their perceived intelligence. Jobs which are considered highly impactful tend to be associated with a high reputation and the prerequisite of possessing high intelligence. It is preferred that such jobs, such as those in technical AI safety or at Open Philanthropy, be given to highly intelligent members. When members discuss talent acquisition or who EA should appeal to, they refer to talented, quick or switched-on thinkers. EAs also compliment and kindly introduce others using descriptors like intelligent or smart more often than people outside EA do.

The Level Above [i]

Some members, most of whom work at coordinating institutions, are widely known and revered for their intellect. They are said to be intimidatingly intelligent and therefore epistemically superior. Their time is seen as particularly precious. EAs sometimes showcase their humility by announcing how much lower they would rank their own intelligence beneath that of the revered leaders. I think there is however no record of the actual IQ of these people.

Most of my impressions come from conversations with EA members, but there is some explicit evidence for EA fandom culture (see footnotes for some pointers).[ii] A non-exhaustive subset of admired individuals I believe includes: E. Yudkowsky, P. Christiano, S. Alexander, N. Bostrom, W. MacAskill, Ben Todd, H. Karnofsky, N. Beckstead, R. Hanson, O. Cotton-Barratt, E. Drexler, A. Critch, … As far as I perceive it, all revered individuals are male.

The allocation of reputation and resources is influenced by leaders, even beyond their direct power over funding and talent at central institutions. For example, "direct action", colloquially equated with working at EA organisations (which is what the leaders do), has a better reputation than "earning to give". Leaders also work on or prioritise AI safety, the cause area which I believe has been allocated the highest reputation. It is considered the hardest problem to work on and thus thought to require the highest-IQ individuals. The power over reputation allocation is soft power, but power nonetheless.

EAs trust these leaders and explicitly defer to them, because leaders are perceived as having spent more time thinking about prioritisation and as being smarter. It is considered epistemically humble to adjust one's views to the views of someone wiser. This trust also allows leaders to keep important information secret, with the justification that it is an information hazard.

Epistemic Insularity

EAs commonly place more trust in other EAs than in non-EAs. Members are seen as epistemic peers and thereby the default reference class. Trust is granted to EAs by virtue of being EA and because they likely share principles of inquiry and above-average intelligence. Trust in revered EA leaders is higher than trust in average EAs, and trust in average EAs is in turn higher than trust in average academic experts.

These trust distributions allow EAs to sometimes dismiss external critics without a thorough investigation. Deep investigations depend on someone internal, and possibly influential, finding the critique plausible enough to bid for resources to be allocated to the investigation. Homogeneity can reduce the number of people who see other views as plausible and can lead to insulation from external corrections.

A sense of rational and intellectual superiority over other communities can strengthen this insulation. It justifies preferring internal opinions over external ones, even if internal opinions are not verified by expertise or checked against evidence. Extreme viewpoints can propagate in EA, because intellectual superiority acts as a protective shield. Differences with external or common-sense views can be attributed to EAs being smarter and more rational. Thus the initial sense of scepticism that inoculates many against extremism is dispelled. It seems increasingly unlikely that so many people who are considered intelligent could be wrong. A vigilant EA forecast will include extreme predictions if they come from an EA, because it is considered vigilant to give some credence to all views within one's epistemic peer group.

Feedback Loops

I see what I describe here as observed tendencies, not consistent phenomena. What I describe happens sometimes, but I do not know how often. The model does not depend on bad actors or bad intentions; it just needs humans.

Leaders at central organisations have more influence over how the community develops. They select priorities, distribute funds and select applicants. The top of the hierarchy is likely homogenous, because leaders move between organisations and were homogenous to begin with; more homogeneity results as they fund people who are value-aligned and think like them. Those who are value-aligned agree on the value of intelligence and see no problem with a culture in which intelligence marks your worth, where high-IQ individuals are trusted and intellectual superiority over others is sometimes assumed.

Cultural norms around intelligence keep diversification at bay. A leader's position is assumed to be justified by his intelligence, and an apprehension about appearing dim heightens the barrier to voicing fundamental criticism. If one is puzzled by a leader's choice, it may be either because one disagrees with the choice or because one does not understand it. Voicing criticism thus potentially signals one's lack of understanding or insight. It is welcome to show one's capacity for critical thinking; in fact, shallow disagreements are taken as evidence of good feedback mechanisms and boost everyone's confidence in epistemic self-sufficiency. It is harder to communicate deep criticism of commonly held beliefs. One runs the risk of being dismissed as slow, as someone who 'doesn't get it', or as an outsider. High barriers retain the hierarchy and shield drastic internal views.

As it becomes more evident what type of person is considered value-aligned, a natural self-selection takes place. Those who seek a strong community identity, who think the same thoughts, like the same blogs and enjoy the same hobbies will identify more strongly; they will blend in, reinforce norms and apply for jobs. Norms, clothing, jargon, diets and a lifestyle will naturally emerge and turn the group into a recognisable community. It will appear to outsiders that there is one way to be EA and that not everyone fits in. Those who feel ill-fitted will leave or never join.

The group is considered epistemically trustworthy, with above-average IQs and training in rationality. This, for many, justifies the view that EAs can often be epistemically superior to experts. A sense of intellectual superiority allows EAs to dismiss critics or to engage only selectively. A homogenous and time-demanding media diet, composed of EA blogs, forum posts and long podcasts, reduces contact hours with other worldviews. When in doubt, deference to others inside EA is considered humble, rational and wise.

Consequences

Most of the structural and cultural characteristics I describe are common characteristics (hierarchy and homogeneity, fandom culture) and often positive (deference to others, trusting peers, working well in a team). But in combination with each other and the gigantic ambition of EA to act on behalf of all moral patients, they likely lead to net negative outcomes. Hierarchies can be effective at getting things done. Homogeneity makes things easy and developing cultural norms has always been our human advantage. Deferring to authority is not uncommon. But if the ambition is great, the intellectual standards must match it.

EA is reliant on feedback to stay on course towards its goal. Value-alignment fosters cognitive homogeneity, resulting in an increasing accumulation of people who accept epistemic insularity, intellectual superiority and an unverified hierarchy. Leaders at the top of the hierarchy rarely receive internal criticism, and they continue to select grant recipients and applicants according to an increasingly narrow definition of value-alignment. A sense of intellectual superiority insulates the group from external critics. This deteriorates the necessary feedback mechanisms and makes it likely that Effective Altruism will, in the long term, make uncorrected mistakes, be ineffective and perhaps not even altruistic.

EA wants to use evidence and reason to navigate towards the Good. But its ambition stretches beyond finding cost-effective health interventions. EA wants to identify that which is good to then do the most good possible.

Value-alignment is a convergence towards agreement, and I would argue it has come too early. Humanity lacks clarity on the nature of the Good, on what constitutes a mature civilization or on how to use technology. In contrast, EA appears to have suspiciously concrete answers. EA is not your average activist group in the marketplace of ideas about how to live. It has announced far greater ambitions: to research humanity's future, to reduce sentient suffering and to navigate towards a stable world under an AI singleton. It can no longer claim to be one advocate amongst many. If EA sees itself as acting on behalf of humanity, it cannot settle on an answer by itself. It must answer to humanity.

EA members gesture at moral uncertainty as if all worldviews were considered equal under their watch, but in fact the survey data reveals cognitive homogeneity. Homogeneity churns out blind spots. Greg put it crisply in his post on epistemic humility: 'we all fall manifestly short of an ideal observer. Yet we all fall short in different aspects.' If I understand the goal of EA correctly, it is this ideal observer that EA is in desperate need of. But alas, no single human can adopt the point of view of the universe. Intuitions, feedback mechanisms and many perspectives are our best shot at approximating the bird's-eye view.

Our blind spots breed implicit and mute assumptions. Genuine alternative assumptions and critiques are overlooked and mute assumptions remain under cover. A conclusion is quickly converged upon, because no-one in the group thought the alternative plausible enough to warrant a proper investigation. EA of course encourages minor disputes. But small differences do not suffice. One member must disagree sufficiently vehemently to call for a thorough examination. This member and their view must be taken seriously, not merely tolerated. Only then can assumptions be recognised as assumptions.

To stay on course amidst the uncertainty which suffuses the big questions, EA needs to vigilantly protect its corrective feedback mechanisms. Value-alignment, the glorification of intelligence and epistemic insularity drive a gradual destruction of feedback mechanisms.

Here are some concrete observations that I was unhappy with:

EAs give high credence to non-expert investigations written by their peers; they rarely publish in peer-review journals and become increasingly dismissive of academia; show an increasingly certain and judgmental stance towards projects they deem ineffective; defer to EA leaders as epistemic superiors without verifying the leaders' epistemic superiority; trust that secret google documents which are circulated between leaders contain the information that justifies EA's priorities and talent allocation; let central institutions recommend where to donate and follow advice to donate to central EA organisations; let individuals move from a donating institution to a recipient institution and vice versa; strategically channel EAs into the US government; and adjust probability assessments of extreme events to include extreme predictions because they were predictions by other members…

EA might have fallen into the trap of confusing effectiveness with efficiency. Value-alignment might reduce friction, add speed and help reach intermediate goals that seem, at the time, like stepping stones towards a better world. But to navigate humanity towards a stable, suffering-free life, EA must answer some of the biggest philosophical and scientific questions. The path towards that goal is unknown. It will likely take time, mistakes and conflict. This quest is not amenable to easy wins. I struggle to see the humility in the willingness with which EAs rely on a homogenous subset of humanity to answer these questions. Without corrective mechanisms they will miss their target of effectively creating an ethical existence for our species.

Propositions

I have tried to express my concern that EA will miss its ambitious goal by working with only an insular subset of the people it is trying to save. I propose one research question and one new norm to address and investigate this concern.

First, I would encourage EAs to define what they mean by value-alignment and to evaluate the level of value-alignment that is genuinely useful. I have described what happens when the community is too value-aligned, but greater heterogeneity can of course render a group dysfunctional. It remains to be empirically analysed how value-aligned the community really is or should be. This data, paired with a theoretical examination of how much diversity is useful, could verify or refute whether my worries are justified. I would of course not have written this article if I were under the impression that EA occupies the sweet spot between homogeneity and heterogeneity. If others have similar impressions, it might be worth trying to identify that sweet spot.

Second, I wish EA would more visibly respect the uncertainty they deal in. Indeed, some EAs are exemplary - some wear uncertainty like a badge of honour: as long as ethical uncertainty persists, they believe the goal they optimise towards is open to debate. It is an unsettling state of mind and it is admirable to live in recognition of uncertainty. For them, EA is a quest, an attempt to approach big questions of valuable futures, existential risk and the good life, rather than implementing an answer.

I wish this would be the norm. I wish all would enjoy and commit to the search, instead of pledging allegiance to preliminary answers. Could it be the norm to assume that EA’s goal has not been found? EAs could take pride in identifying sensible questions, take pride in a humble aspiration to make small contributions to progress and take pride in an endurance to wait for answers. I wish it were common knowledge that EA has not found solutions yet. This does not mean that EAs have been idle. It is a recognition that improving the world is hard.

I do not propose a change to EA's basic premise. Instead of optimising towards a particular objective, EA could maximise the chance of identifying that objective. With no solutions yet at hand, EA can cease to prioritise efficiency and strong internal value-alignment. Alignment will not be conducive to maximising the chances of stumbling upon a solution in such a vast search space. It is thus possible to take the time to engage with the opposition, to dive into other worldviews, listen to deep critics, mingle with slow academia and admit that contrasting belief systems and methods could turn out to be useful from the perspective of unknown future values.

There is no need to advertise EA as having found solutions. Not if one wants to attract individuals who are at ease with the real uncertainty that we face. I believe it is people like that who have the best chance of succeeding in the EA quest.

For feedback, feel free to email me privately or, of course, to comment.


[i] Yudkowsky mentions his intelligence often, such as in the article 'The Level Above Mine'. He has written an autobiographical sequence named 'Yudkowsky's Coming of Age'. Members write articles about him in apparent awe and possibly jest ("The game of "Go" was abbreviated from 'Go Home, For You Cannot Defeat Eliezer Yudkowsky'", "Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but another brain."). His intelligence is a common meme among members.

[ii] Some queries to MacAskill’s Q&A show reverence here, (“I'm a longtime fan of all of your work, and of you personally. I just got your book and can't wait to read it.”, “You seem to have accomplished quite a lot for a young person (I think I read 28?). Were you always interested in doing the most good? At what age did you fully commit to that idea?”).

Comments

[ii] Some queries to MacAskill’s Q&A show reverence here, (“I'm a longtime fan of all of your work, and of you personally. I just got your book and can't wait to read it.”, “You seem to have accomplished quite a lot for a young person (I think I read 28?). Were you always interested in doing the most good? At what age did you fully commit to that idea?”).

I share your concerns about fandom culture / guru worship in EA, and am glad to see it raised as a troubling feature of the community. I don’t think these examples are convincing, though. They strike me as normal, nice things to say in the context of an AMA, and indicative of admiration and warmth, but not reverence.

The thing I agree with most is the idea that EA is too insular, and that we focus on value alignment too much (compared with excellence). More generally, networking with people outside EA has positive externalities (engaging more people with the movement) whereas networking with people inside EA is more likely to help you personally (since that allows you to get more of EA's resources). So the former is likely undervalued.

I think the "revered for their intellect" thing is evidence of a genuine problem in EA, namely that we pay more attention to intelligence than we should, compared with achievements. However, the mere fact of having very highly-respected individuals doesn't seem unusual; e.g. in other fields that I've been in (machine learning, philosophy) pioneers are treated with awe, and there are plenty of memes about them.

Members write articles about him in apparent awe and possibly jest

Definitely jest.

I think I should have stated more clearly that I don't see these tendencies as abnormal. I see them as maladaptive given the goal EA has. When thinking about the question of whether fandom is a good feature for epistemic health, I don't care too much about whether fandom tendencies exists in other communities. I know that it's the norm (same with hierarchy and homogeneity).

It can be quite effective to have such a community structure in situations in which you want to change the minds of many people quickly. You can simply try to change the mind of the one whom others look up to (e.g. Toby Ord / Y. Bengio) and expect that other members will likely follow (see the models in 'The Misinformation Age' by C. O'Connor & J. Weatherall). A process of belief formation which does not use central stars will converge less quickly, I imagine, but I'd have to look into that. This is the kind of research which I hope this article makes palatable to EAs.
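As a toy illustration of that intuition (my own sketch, not a model taken from the book): in a simple DeGroot-style averaging model, beliefs on a hub-centred 'star' network, where everyone puts most of their weight on one central figure, reach near-consensus far faster than on a decentralised ring where people only average with their neighbours. The network size, weights and convergence threshold below are arbitrary choices for illustration.

```python
import numpy as np

def steps_to_consensus(W, beliefs, tol=1e-3, max_steps=10_000):
    # Repeatedly average beliefs with the row-stochastic weight matrix W
    # until everyone's beliefs agree to within tol.
    b = beliefs.copy()
    for step in range(1, max_steps + 1):
        b = W @ b
        if b.max() - b.min() < tol:
            return step
    return max_steps

rng = np.random.default_rng(0)
n = 30
beliefs = rng.uniform(size=n)  # initial beliefs, spread over [0, 1]

# Star network: everyone puts 70% weight on agent 0 (the "central star")
# and 30% on their own previous belief; the star only listens to itself.
star = np.zeros((n, n))
star[:, 0] = 0.7
np.fill_diagonal(star, 0.3)
star[0] = 0.0
star[0, 0] = 1.0

# Ring network: everyone averages equally over themselves and two neighbours.
ring = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        ring[i, j % n] = 1 / 3

print("star:", steps_to_consensus(star, beliefs), "steps to near-consensus")
print("ring:", steps_to_consensus(ring, beliefs), "steps to near-consensus")
```

Of course, convergence speed is exactly the trade-off in question: the same structure that makes agreement fast also makes the final consensus hinge on what the hub happens to believe.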

My guess is there is not only a sweet spot of cognitive diversity but also a sweet spot of how much a community should respect its central stars. Too much reverence and you lose feedback mechanisms. Too little and belief formation will be slow and confused, and you lose the reward mechanism of reputation. I expect that there will always be individuals who deserve more respect and admiration than others in any community, because they have done more or better work on behalf of everyone else. But I would love for EAs to examine where the effective sweet spot lies and how one can influence the level of fandom culture (e.g. Will's recent podcast episode on 80k was doing a good job, I think) so that the end result is a healthy epistemic community.

Yepp, that all makes sense to me. Another thing we can do, that's distinct from changing the overall level of respect, is changing the norms around showing respect. For example, whenever people bring up the fact that person X believes Y, we could encourage them to instead say that person X believes Y because of Z, which makes the appeal to authority easier to argue against.

I think in community building, it is a good trajectory to start with strong homogeneity and strong reference to 'stars' that act as reference points and communication hubs, and then to incrementally soften and expand as time passes. It is much harder, or even impossible, to do this in reverse, as that risks yielding a fuzzy community that lacks the mechanisms to attract talent and converge on anything.

With that in mind, I think some of the rigidity of EA thinking in the past might have been good, but the time has come to re-think how the EA community should evolve from here on out.

Great post! What I most agree with is that we should be clearer that things are still very, very uncertain. I think there are several factors that push against this:

  • The EA community and discourse doesn't have any formal structure for propagating ideas, unlike academia. You are likely to hear about something if it's already popular. Critical or new posts and ideas are unpopular by definition to begin with, so they fall by the wayside.
  • The story for impact for many existing EA organizations often relies on a somewhat narrow worldview. It does seem correct to me that we should both be trying to figure out the truth and taking bets on worlds where we have a lot of important things to do right now. But it's easy to mentally conflate "taking an important bet" and "being confident that this is what the world looks like", both from inside and outside an organization. I personally try to pursue a mixed strategy, where I take some actions assuming a particular worldview where I have a lot of leverage now, and some actions trying to get at the truth. But it's kind of a weird mental state to hold, and I assume most EAs don't have enough career flexibility to do this.

I do think that the closer you get to people doing direct work, the more people are skeptical and consider alternative views. I think the kind of deference you talk about in this post is much more common among people who are less involved with the movement.

That being said, it's not great that the ideas that newcomers and people who aren't in the innermost circles see are not the best representatives of the truth or of the amount of uncertainty involved. I'm interested in trying to think of ways to fix that-- like I said, I think it's hard because there are lots of different channels and no formal mechanism for what ideas "the movement" is exposed to. Without formal mechanisms, it seems hard to leave an equilibrium where a small number of reputable people or old but popular works of literature have disproportionate influence.

That being said, I really appreciate a lot of recent attempts by people to express uncertainty more publicly-- see e.g. Ben's podcast, Will's talk, 80K's recent posts, my talk and interviews. For better or for worse, it does seem like a small number of individuals have disproportionate influence over the discourse, and as such I think they do have some responsibility to convey uncertainty in a thoughtful way.

Such a great comment; I agree with most of what you say, thank you for writing this up. I'm curious about a formal mechanism of communal belief formation/belief dissemination. What could this look like? Would it be net good in comparison to the baseline?

This is a good post, I'm glad you wrote it :)

On the abstract level, I think I see EA as less grand / ambitious than you do (in practice, if not in theory) -- the biggest focus of the longtermist community is reducing x-risk, which is good by basically any ethical theory that people subscribe to (exceptions being negative utilitarianism and nihilism, but nihilism cares about nothing and very few people are negative utilitarian and most of those people seem to be EAs). So I see the longtermist section of EA more as the "interest group" in humanity that advocates for the future, as opposed to one that's going to determine what will and won't happen in the future. I agree that if we were going to determine the entire future of humanity, we would want to be way more diverse than we are now. But if we're more like an interest group, efficiency seems good.

On the concrete level -- you mention not being happy about these things:

EAs give high credence to non-expert investigations written by their peers

Agreed this happens and is bad

they rarely publish in peer-review journals and become increasingly dismissive of academia

Idk, academia doesn't care about the things we care about, and as a result it is hard to publish there. It seems like long-term we want to make a branch of academia that cares about what we care about, but before that it seems pretty bad to subject yourself to peer reviews that argue that your work is useless because they don't care about the future, and/or to rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don't. (I think this is the situation of AI safety.)

show an increasingly certain and judgmental stance towards projects they deem ineffective

Agreed this happens and is bad (though you should get more certain as you get more evidence, so maybe I think it's less bad than you do)

defer to EA leaders as epistemic superiors without verifying the leaders' epistemic superiority

Agreed this happens and is bad

trust that secret google documents which are circulated between leaders contain the information that justifies EA’s priorities and talent allocation

Agreed this would be bad if it happened, I'm not actually sure that people trust this? I do hear comments like "maybe it was in one of those secret google docs" but I wouldn't really say that those people trust that process.

let central institutions recommend where to donate and follow advice to donate to central EA organisations

Kinda bad, but I think this is more a fact about "regular" EAs not wanting to think about where to donate? (Or maybe they have more trust in central institutions than they "should".)

let individuals move from a donating institution to a recipient institution and vice versa

Seems really hard to prevent this -- my understanding is it happens in all fields, because expertise is rare and in high demand. I agree that it's a bad thing, but it seems worse to ban it.

strategically channel EAs into the US government

I don't see why this is bad. I think it might be bad if other interest groups didn't do this, but they do. (Though I might just be totally wrong about that.)

adjust probability assessments of extreme events to include extreme predictions because they were predictions by other members

That seems somewhat bad but not obviously so? Like, it seems like you want to predict an average of people's opinions weighted by expertise; since EA cares a lot more about x-risk it often is the case that EAs are the experts on extreme events.

Idk, academia doesn't care about the things we care about, and as a result it is hard to publish there. It seems like long-term we want to make a branch of academia that cares about what we care about, but before that it seems pretty bad to subject yourself to peer reviews that argue that your work is useless because they don't care about the future, and/or to rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don't. (I think this is the situation of AI safety.)

It seems like an overstatement that the topics of EA are completely disjoint with topics of interest to various established academic disciplines. I do agree that many of the intellectual and methodological approaches are still very uncommon in academia.

It is not hard to imagine ideas from EA (and also the rationality community) becoming a well-recognized part of some branches of mainstream academia. And this would be extremely valuable, because it would unlock resources (both monetary and intellectual) that go far beyond anything that is currently available.

And because of this, it is unfortunate that there is so little effort to establish EA thinking in academia, especially since it is not *that* hard:

  • In addition to posting articles directly to a forum, consider such a post a pre-print and go the extra mile to also submit it as a research paper or commentary in a peer-reviewed open-access journal. This way, you gain additional readers from outside the core EA group, and you make it easier to cite your work as a reputable source.
    • Note that this also makes it easier to write grant proposals about EA-related topics. Writing a proposal right now I have the feeling that 50% of my citations would be of blog posts, which feels like a disadvantage
    • Also note that this increases the pool of EA-friendly reviewers for future papers and grant proposals. Reviewers are often picked from the pool of people who are cited by an article or grant under review, or pop up in related literature searches. If most of the relevant literature is locked into blog posts, this system does not work.
  • Organize scientific conferences
  • Form an academic society / association

etc

It seems like an overstatement that the topics of EA are completely disjoint with topics of interest to various established academic disciplines.

I didn't mean to say this, there's certainly overlap. My claim is that (at least in AI safety, and I would guess in other EA areas as well) the reasons we do the research we do are different from those of most academics. It's certainly possible to repackage the research in a format more suited to academia -- but it must be repackaged, which leads to

rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don't

I agree that the things you list have a lot of benefits, but they seem quite hard to me to do. I do still think publishing with peer review is worth it despite the difficulty.

Agreed this would be bad if it happened, I'm not actually sure that people trust this? I do hear comments like "maybe it was in one of those secret google docs" but I wouldn't really say that those people trust that process.

FWIW, I feel like I've heard a fair amount of comments suggesting that people basically trust the process. Though maybe it became a bit less frequent over time. Most of this was about very large documents on AI safety and strategy issues allegedly existing within OpenAI and MIRI.

I'm glad when things do get published. E.g. Eric Drexler's Reframing Superintelligence used to be a collection of Google docs.

But I find it hard to say to what extent non-published Google docs are suboptimal, i.e. worse than alternatives. E.g. to some extent it does seem correct that I give a bit more weight to someone's view on, say, AI timelines, if I hear that they've thought about it so much that they were able to write a 200-page document about it. Similarly, there can be good reasons not to publish documents - either because they contain information hazards (though I think that outside of bio many EAs are way too worried about this, and overestimate the effects marginal publication by non-prominent researchers can have on the world) or because the author can use their time better than to make these docs publishable.

My best guess is that the status quo is significantly suboptimal and could be improved. But that is based on fairly generic a priori considerations (e.g. "people tend to be more worried about their 'reputation' than warranted and so tend to be too reluctant to publish non-polished documents") that I could easily be wrong about. In some sense, the fact that the whole process is so opaque, and therefore so hard to assess from the outside, is the biggest problem.

It also means that trust in the everyday sense really plays an important role, which means that people outside EA circles who don't have independent reasons to trust the involved people (e.g. because of social/personal ties or independent work relationships) won't give as much epistemic weight to it, and they will largely be correct in doing so. I.e. perhaps the main cost is not to epistemic coordination within EA, but rather to EA's ability to convince skeptical 'outsiders'.

Most of this was about very large documents on AI safety and strategy issues allegedly existing within OpenAI and MIRI.

I agree people trust MIRI's conclusions a bunch based on supposed good internal reasoning / the fact that they are smart, and I think this is bad. However, I think this is pretty limited to MIRI.

I haven't seen anything similar with OpenAI though of course it is possible.

I agree with all the other things you write.

I think there's some interesting points here! A few reactions:

• I don't think advocates of traditional diversity are primarily concerned with cognitive diversity. I think the reasoning is more (if altruistic) to combat discrimination/bigotry or (if self-interested) good PR/a larger pool of applicants to choose from.

• I think in some of the areas that EAs have homogeneity it's bad (eg it's bad that we lack traditional diversity, it's bad that we lack so much geographic diversity, it's bad that we have so much homogeneity of mannerisms, it's bad that certain intellectual traditions like neoliberalism or the Pinkerian progress narrative are overwhelmingly fashionable in EA, etc), but I'd actually push back against the claim that it's bad that we have such a strong consequentialist bent (this just seems to go so hand-in-hand with EA - one doesn't have to be a consequentialist to want to improve the external world as much as possible, but I'd imagine there's a strong tendency for that) or that we lack representation of certain political leanings (eg I wouldn't want people in the alt-right in EA).

• If people don't feel comfortable going against the grain and voicing opposition, I'd agree that's bad because we'd lack ability to self-correct (though fwiw my personal impression is that EA is far better on this metric than almost all other subcultures or movements).

• It's not clear to me that hierarchy/centralization is bad - there are certain times when I think we err too much on this side, but then I think others where we err too much the other way. If we had significantly less centralization, I'd have legitimate concerns about coordination, info-hazards, branding, and evaluating quality of approaches/organizations.

• I agree that some of the discussion about intelligence is somewhat cringe, but it seems to me that we've gotten better on that metric over time, not worse.

• Agree that the fandom culture is... not a good feature of EA

• There probably are some feedback loops here as you mention, but there are other mechanisms going the other direction. It's not clear to me that the situation is getting worse and we're headed for "locking in" unfortunate dynamics, and if anything I think we've actually mostly improved on these factors over time (and, crucially, my inside view is that we've improved our course-correction ability over time).

Yeah for what it's worth, I think it'll be very very very bad if we treat all moral views as equivalent. There's a trivial sense in which you can flip the sign of any ethical position and still have a consistent framework!

it's bad that we have so much homogeneity of mannerisms

Why? Mannerisms reduce communication overhead. If the norm within EA were to sometimes bob our heads up and down and sometimes shake our heads left and right to signal "yes", this seems like a recipe for misunderstanding, with dubious benefits. As it is, I'm not convinced that equivocating between British and American English definitions of the same word gives us much expanded perspective commensurate with the costs.

If you agree that having mannerisms that equivocate between different macro-cultures isn't super valuable, I'd like to understand why having mannerisms that equivocate between different micro-cultures is great. I find quite a few mannerisms common to EA (and more specific than Anglo-American macroculture) to be valuable for reducing communication overhead, including but not limited to:

  • Saying numeric probabilities
  • Making bets
  • Certain types of jargon
  • General push towards quantification
  • Non-interrupting physical gestures of agreement during a group conversation (though I've also seen it in slam poetry groups, so certainly not unique to us!)
  • "Yet. Growth mindset!"

Interesting article. I would like to raise one quibble:

Advocates for traditional diversity metrics such as race, gender and class do so precisely because they track different ways of thinking.

I agree this is the stated reason for many corporate diversity advocates, but I think it is not their true reason. In practice many companies recruit using basically a combination of filters whose purpose is to select people with a certain way of thinking (e.g. resumes, interviews, psychological screens) combined with various quotas for desired racial groups. If getting cognitive diversity was the goal they would try testing and selecting for that directly, or at least stop actively selecting against it. The status quo is likely to mean McKinsey get people from a variety of races, all of whom went to Harvard Business School, which I presume is basically what we want. After all, while cognitive diversity in some regards is useful, we want everyone to have the same (high) level of the cluster of skills that make up being a good consultant, like diligence, intelligence and sociability.

In particular, I suspect that even if research hypothetically showed that traditional racial/sexual diversity inhibited useful cognitive diversity (perhaps by making people less comfortable about sharing their views), advocates would be unlikely to change their minds.

I think their true motivations are more like some combination of:

  • Desire to appeal to a variety of audiences who would be less likely to buy from an outsider (e.g. hiring black sales guys to sell in black areas).
  • Wanting to avoid being criticized as racist by hostile outsiders.
  • Left wing conceptions of fairness on behalf of HR staff and other management, unrelated to firm objectives.
  • Intellectual conformism with others who believe for the previous three reasons.

A non-exhaustive subset of admired individuals I believe includes: ... As far as I perceive it, all revered individuals are male.

It seems a little rude to make public lists of perceived intelligence. Imagine how it would feel to be a prominent EA and to be excluded from the list? :-( In this case, I think you have excluded some people who are definitely higher in community estimation than some on your list, including some prominent women.

Members write articles about him in apparent awe and possibly jest

The linked article is from over eleven years ago. I think GWWC hadn't even launched at that point, let alone the rest of the EA community. This is like attacking democrats because Obama thought gay marriage was immoral and was trying to build a border wall with Mexico, both of which were the case in 2009.

1. I never spoke specifically of corporate advocates, so although I agree with you that other motives are often at play, my point here was that one reason some advocates support traditional diversity is that they have reason to believe it tracks different views of the world. That is neither mutually exclusive with the reasons you outline, nor is this article about corporate motivation.

2. As you quote, I state that this list is 'non-exhaustive'. If the prominent EAs who are not on this list agree that reverence is not good for a community's epistemic health, then they should not even want to be on the list. After publishing this article I was also notified of prominent female EAs who could perhaps have made this list, but since I only listed individuals whom I directly experienced being talked about in a revered manner, they are not listed. My experience won't generalise to all experiences. My two points here are: there are revered individuals, and they are mostly male. I agree there are likely a few revered women, but I would be surprised if they were numerous enough to balance out the male bias.

3. Fair point. I find it hard to tell how much things have changed and simply wanted to point out some evidence I found in writing.

My experience being named "Julia" in EA is that people periodically tell me how much they love my podcast, until they find out I'm not actually Julia Galef.

Here are a couple of interpretations of value alignment:

  • A pretty tame interpretation of "value-aligned" is "also wants to do good using reason and evidence". In this sense, distinguishing between value-aligned and non-aligned hires is basically distinguishing between people who are motivated by the cause and people who are motivated by the salary or the prestige or similar. It seems relatively uncontroversial that you'd want to care about this kind of alignment, and I don't think it reduces our capacity for dissent: indeed people are only really motivated to tell you what's wrong with your plan to do good if they care about doing good in the first place. I think your claim is not that "all value-alignment is bad" but rather "when EAs talk about value-alignment, they're talking about something much more specific and constraining than this tame interpretation". I'd be interested in whether you agree.
  • Another (potentially very specific and constraining) interpretation of "value alignment" that I understand people to be talking about when they're hiring for EA roles is "I can give this person a lot of autonomy and they'll still produce results that I think are good". This recommends people who essentially have the same goals and methods as you, right down to the way those affect decisions about how to do the job. Hiring people like that means that you tax your management capacity comparatively less and don't need to worry so much about incentive design. To the extent that this is a big focus in EA hiring, it could be because we have a deficit of management capacity and/or it's difficult to effectively manage EA work. It certainly seems like EA research is often comparatively exploratory/preliminary and therefore underspecified, and so it's very difficult to delegate work on it except to people who are already in a similar place to you on the matter.
I think your claim is not that "all value-alignment is bad" but rather "when EAs talk about value-alignment, they're talking about something much more specific and constraining than this tame interpretation".

To attempt an answer on behalf of the author: the author says "an increasingly narrow definition of value-alignment", and I think the idea is that seeking "value-alignment" has got narrower and narrower over time and further from the goal of wanting to do good.

In my time in EA, value alignment has, among some folk, gone from the tame meaning you provide of really wanting to figure out how to do good to a narrower meaning such as: you also think preventing human extinction is the most important thing.

I think there is however no record of the actual IQ of these people.

FWIW, I think IQ isn't what we actually care about here; it's the quality, cleverness and originality of their work and insights. A high IQ that produces nothing of value won't get much reverence, and rightfully so. People aren't usually referring to IQ when they call someone intelligent, even if IQ is a measure of intelligence that correlates with our informal usage of the word.

Advocates for traditional diversity metrics such as race, gender and class do so precisely because they track different ways of thinking.

I don't think that's the only reason, and I'm not sure (either way) it's the main reason. I suspect demographic homogeneity is self-reinforcing, and may limit movement growth and the pool of candidates for EA positions more specifically. So, we could just be missing out on greater contributions to EA, whether or not their ways of thinking are different.

I enjoyed reading about this central premise: "EA members gesture at moral uncertainty as if all worldviews are considered equal under their watch, but in fact the survey data reveals cognitive homogeneity."

I strongly agree with concerns about EAs rarely seeking expert evaluations or any evaluations from outside the community. I also somewhat agree with concerns about confusing effectiveness with efficiency, although I've seen this discussed elsewhere. I'm on board with defining value alignment: I'm concerned it is a back door for just hiring whoever reminds the hiring manager of themselves, and I don't think that's a great way to hire. I'm not super optimistic about asking people to talk differently about epistemic humility, but I agree it would be good.

I'd like to see more work on this. I suspect the people adding the most value to this area are the people who are just quietly a part of multiple communities - people who are academics or Quakers or another group and also EA. But it would be nice to see it highlighted.

This article again calls for getting value-aligned people into EA, but it too lacks a definition.

The author of that article, Gleb Tsipursky, isn't someone I'd cite as a good source for what EA believes, as he withdrew from the community after being accused (ironically) of a lot of behavior that didn't match widely agreed-upon EA values. (That said, the other pieces you quoted seem like better sources.)

EAs talk a lot about value alignment and try to identify people who are aligned with them. I do, too. But this is also funny at a global level, given we don't understand our values, nor are we very sure how to understand them much better, reliably. Zoe's post highlights that it's too early to double down on our current best guesses and that more diversification is needed to cover more of the vast search space.

I found this post really interesting and helpful. Thanks CarlaZoeC.

There's one person I didn't see mentioned in the post or comments and thought it might be worth adding as a counterweight: I was in philosophy at Oxford for a while and knew a fair number of the people here. To my mind, the person I have by far the most intellectual respect for, and who'd be most likely to cause an adjustment in my credences, is a woman: Hilary Greaves.

A non-exhaustive subset of admired individuals I believe includes: E. Yudkowsky, P. Christiano, S. Alexander, N. Bostrom, W. MacAskill, Ben Todd, H. Karnofsky, N. Beckstead, R. Hanson, O. Cotton-Barratt, E. Drexler, A. Critch, … As far as I perceive it, all revered individuals are male.

Although various metrics do show that the EA community has room to grow in diversity, I don't think the fandom culture has nearly that much gender imbalance. Some EA women who consistently produce very high-quality content include Arden Koehler, Anna Salamon, Kelsey Piper, and Elizabeth Van Nostrand. I have also heard others revere Julia Wise, Michelle Hutchinson and Julia Galef, whose writing I don't follow. I think that among EAs, I have an only slightly below-median tendency to revere men over women, and these women EA thinkers feel about as "intimidating" or "important" to me as the men on your list.

I just want to say that this is one of the best things I have read on this forum. Thank you for such a thoughtful and eloquent piece. I fully agree with you.


To add to the constructive actions: I think those working on EA community building (CEA, local community builders, 80K, etc.) should read and take note. Recommended actions for anyone in that position are to:

  • Create the right kind of space so that people can reach their own decision about what causes are most important.
  • Champion cause prioritisation and uncertainty.
  • Learn from the people who do this well. I would call out Amy from CEA for work on EAG in 2018 and David for EA London, who I think manage this well.

(Some notes I made in the past on this are here: https://forum.effectivealtruism.org/posts/awS28gHCM9GBmhcAA/cea-on-community-building-representativeness-and-the-ea?commentId=jPw8xiwxmk23Y5tKx )

There are lots of good points here. I could say more, but here are just a few comments:

The obsession over intelligence is counterproductive. I worry a lot that EA is becoming too insular and that money and opportunities are being given out based largely on a perception of how intelligent people are and the degree to which people signal in-group status. The result is organizations like MIRI and Leverage staffed by autists that have burned through lots of money and human resources while only producing papers of marginal value. The fact they don't even bother to get most papers peer reviewed is really bad. Yes, peer review sucks and is a lot of work, but every paper I had peer reviewed was improved by the process. Additionally, peer review and being able to publish in a good journal is a useful (although noisy) signal to outsiders and funders that your work is at least novel and not highly derivative.  

The focus on intelligence can be very off-putting, and I suspect it is a reason many people are not involved in EA. I know one person who said they are not involved because they find it too intimidating. While I have not experienced problems at EA events, there have been a few times at LessWrong events where people were either directly or indirectly questioning my intelligence, and I found it off-putting. In one case, someone said "I'm trying to figure out how intelligent you are". I also remember times I had trouble keeping up with fast-paced EA conversations. There have been some conversations I've seen which appeared to be a bunch of people trying to impress and signal how intelligent they are rather than doing something constructive.

Age diversity is also an issue. Orgs that have similar values, like humanist orgs or skeptics orgs, have much greater age diversity. I think this is related to the focus on intelligence, especially superficial markers like verbosity and fluency/fast-talking, and the dismissal of skeptics and critics (people who are older tend to have developed a more critical/skeptical take on things due to greater life experience).  

Old comment, so maybe this isn't worth it, but: as someone diagnosed with Asperger's as a kid, I'd really prefer it if people didn't attribute things they don't like about people to their being autistic, in a causal manner and without providing supporting evidence. I don't mean you can never be justified in saying that a group having a high prevalence of autism explains some negative feature of their behavior as a group. But I think care should be taken here, as when dealing with any minority.

I agree peer review is good, that people should not dismiss it, and that too much speculation about how smart people are can be toxic. (I probably don't avoid it as much as I should.) But that's kind of part of my point: not all autists track some negative stereotype of cringe Silicon Valley people, even if, like most stereotypes, there is a grain of truth in it.

Late to reply, but those are fair points, thanks for pointing that out. I do need to be more careful about attribution and stereotyping. The phenomenon I was trying to point at is that, in the push to find "the most intelligent people", they end up selecting for autistic people, who in turn select more autistic people. There's also a self-selection thing going on -- neurotypicals don't find working with a team of autistic people very attractive, while autistic people do. Hence the lack of diversity.

Thanks for responding. 

Welcome to the forum! Apologies that the rest of my comment may seem overly critical/nitpicky.

While I agree with some other parts of your complaint, the implicit assumption behind

The fact they don't even bother to get most papers peer reviewed is really bad. Yes, peer review sucks and is a lot of work, but every paper I had peer reviewed was improved by the process.

seems unlikely to be correct to me, at least on a naive interpretation. The implication here is that EA research orgs would be better if they aimed for academic publishing incentives. I think this is wrong because academic publishing incentives frequently make you prioritize bad things*. The problem isn't an abstract issue of value-neutral "quality" but what you are allowed to think about and consider important.

As an example,

Additionally, peer review and being able to publish in a good journal is a useful (although noisy) signal to outsiders and funders that your work is at least novel and not highly derivative.  

is indicative of one way in which publishing incentives may warp someone's understanding, specifically constraining research quality to be primarily defined by "novelty" as understood by an academic field (as opposed to, e.g., truth, or decision-relevance, or novelty defined in a more reasonable way).

Holden Karnofsky's interview might be relevant here, specifically the section on academia and the example of David Roodman's research on criminal justice reform.

Holden Karnofsky [..]: recently when we were doing our Criminal Justice Reform work and we wanted to check ourselves. We wanted to check this basic assumption that it would be good to have less incarceration in the US.

David Roodman, who is basically the person that I consider the gold standard of a critical evidence reviewer, someone who can really dig on a complicated literature and come up with the answers, he did what, I think, was a really wonderful and really fascinating paper, which is up on our website, where he looked for all the studies on the relationship between incarceration and crime, and what happens if you cut incarceration, do you expect crime to rise, to fall, to stay the same? He picked them apart. What happened is he found a lot of the best, most prestigious studies and about half of them, he found fatal flaws in when he just tried to replicate them or redo their conclusions.

When he put it all together, he ended up with a different conclusion from what you would get if you just read the abstracts. It was a completely novel piece of work that reviewed this whole evidence base at a level of thoroughness that had never been done before, came out with a conclusion that was different from what you naively would have thought, which concluded his best estimate is that, at current margins, we could cut incarceration and there would be no expected impact on crime. He did all that. Then, he started submitting it to journals. It’s gotten rejected from a large number of journals by now. I mean starting with the most prestigious ones and then going to the less.

Robert Wiblin: Why is that?

Holden Karnofsky: Because his paper, it’s really, I think, it’s incredibly well done. It’s incredibly important, but there’s nothing in some sense, in some kind of academic taste sense, there’s nothing new in there. He took a bunch of studies. He redid them. He found that they broke. He found new issues with them, and he found new conclusions. From a policy maker or philanthropist perspective, all very interesting stuff, but did we really find a new method for asserting causality? Did we really find a new insight about how the mind of a …

Robert Wiblin: Criminal.

Holden Karnofsky: A perpetrator works. No. We didn’t advance the frontiers of knowledge. We pulled together a bunch of knowledge that we already had, and we synthesized it. I think that’s a common theme is that, I think, our academic institutions were set up a while ago. They were set up at a time when it seemed like the most valuable thing to do was just to search for the next big insight.

These days, they’ve been around for a while. We’ve got a lot of insights. We’ve got a lot of insights sitting around. We’ve got a lot of studies. I think a lot of the times what we need to do is take the information that’s already available, take the studies that already exist, and synthesize them critically and say, “What does this mean for what we should do? Where we should give money, what policy should be.”

I don’t think there’s any home in academia to do that. I think that creates a lot of the gaps. This also applies to AI timelines where it’s like there’s nothing particularly innovative, groundbreaking, knowledge frontier advancing, creative, clever about just … It’s a question that matters. When can we expect transformative AI and with what probability? It matters, but it’s not a work of frontier advancing intellectual creativity to try to answer it.

*Also society already has very large institutions working on academic publishing incentives called "universities," so from a strategic diversification perspective we may not want to replicate them exactly.

One issue with moral uncertainty is that I think it means much less for moral antirealists. As a moral antirealist myself, I still use moral uncertainty, but in reference to views I personally am attracted to (based on argument, intuition, etc.) and that I think I could endorse with further reflection, but currently have a hard time deciding between. This way I can assign little weight to views I personally don't find attractive, whereas someone who is a moral realist has to defend their intuitions (both make positive arguments for and address counterarguments) and refute intuitions they don't have (but others do), a much higher bar, or else they're just pretending their own intuitions track the moral truth while others' do not. And most likely, they'll still give undue weight to their own intuitions.

I don't know what EA's split is on moral realism/antirealism, though.

Of course, none of this says we shouldn't try to cooperate with those who hold views we disagree with.

I’ve come to think that evidential cooperation in large worlds and, in different ways, preference utilitarianism push even antirealists toward relatively specific moral compromises that require an impartial empirical investigation to determine. (That may not apply to various antirealists who have rather easy-to-realize moral goals or ones that others can’t help a lot with. Say, protecting your child from some dangers or being very happy. But it does apply to my drive to reduce suffering.)

Thank you for writing this article! It’s interesting and important. My thoughts on the issue:

Long Reflection

I see a general tension between achieving existential security and putting sentient life on the best or an acceptable trajectory before long delays in communication mean we can no longer cooperate causally very well.

A focus on achieving existential security pushes toward investing less time into getting all basic assumptions just right, because all these investigations trade off against a terrible risk. I’ve read somewhere that homogeneity is good for early-stage startups because their main risk lies in not being fast enough rather than in getting something wrong. So people who are mainly concerned with existential risk may accept being very wrong about a lot of things so long as they still achieve existential security in time. I might call this “emergency mindset.”

Personally – I’m worried I’m likely biased here – I would rather like to precipitate the Long Reflection to avoid getting some things terribly wrong in the futures where we achieve existential security, even if these investigations come at some risk of diverting resources from reducing existential risk. I might call this “reflection mindset.”

There is probably some impartially optimal trade-off here (plus comparative advantages of different people), and that trade-off would also imply how many resources it is best to invest in avoiding homogeneity.

I’ve also commented on this on a recent blog article where I mention more caveats.

Ideas for Solutions

I’ve seen a bit of a shift toward reflection over emergency mindset at least since 2019 and more gradually since 2015. So if it turns out that we’re right and EA should err more in the direction of reflection, then a few things may aid that development.

Time

I’ve found that I need to rely a lot on others’ judgments on issues when I don’t have much time. But now that I have more time, I can investigate a lot of interesting questions myself and so need to rely less on the people I perceive as experts. Moreover, I’m less afraid to question expert opinions when I know something beyond the Cliff’s Notes about a topic, because I’ll be less likely to come off as arrogantly stupid.

So maybe it would help if people who are involved in EA in nonresearch positions were generally encouraged, incentivized, and allowed to take more time off to also learn things for themselves.

Money

The EA Funds could explicitly incentivize the above efforts, but they could also explicitly incentivize broad literature reviews, summaries of topics, and interviews with experts on topics that relate to foundational assumptions in EA projects.

“Growth and the Case Against Randomista Development” seems like a particularly impressive example of such an investigation.

Academic Research

I’ve actually seen a shift toward academic research over the past 3–4 years. And that seems valuable to continue (though my above reservations about my personal bias in the issue may apply). It is likely slower and maybe less focused. But academic environments are intellectually very different from EA, and professors in a field are very widely read in that field. So being in that environment and becoming a person whom widely read people are happy to collaborate with should be very helpful in avoiding the particular homogeneities that the EA community comes with. (They’ll have homogeneities of their own, of course.)

I agree with some of what you say, but find myself less concerned about some of the trends. This might be because I have a higher tolerance for some of the traits you argue are present and because AI governance, where I'm mostly engaged now, may just be a much more uncertain topic area than other parts of EA given how new it is. Also, while I identify a lot with the community and am fairly engaged (was a community leader for two years), I don't engage much on the forum or online so I might be missing a lot of context.

I worry about the framing of EA as not having any solutions and the argument that we should just focus on finding which are the right paths without taking any real-world action on the hypotheses we currently have for impact. I think to understand things like government and refine community views of how to affect it and what should be affected, we need to engage. Engaging quickly exposes ignorance and forces us to be beholden to the real world, not to mention gives a lot of reason to engage with people outside the community.

Once a potential path to impact is identified, and thought through to a reasonable extent, it seems almost necessary to try to take steps to implement it as a next step in determining whether it is a fruitful thing to pursue. Granted, after some time we should step back and re-evaluate, but while you are pursuing the objective it's not feasible to be second-guessing constantly (similar idea to Nate Soares' post Diving in).

That said it seems useful to have a more clear view from the outside just how uncertain things are. While beginning to engage with AI governance, it took a long time for me to realize just how little we know about what we should be doing. This despite some explicit statements by people like Carrick Flynn in a post on the forum saying how little we know and a research agenda which is mainly questions about what we should do. I'm not sure what more could be done as I think it's normal to assume people know what they're doing, and for me this was only solved by engaging more deeply with the community (though now I think I have a more healthy understanding of just how uncertain most topic areas are).

I guess a big part of the disagreement here might boil down to how uncertain we really are about what we are doing. I would agree a lot more with the post if I were less confident about what we should be doing in general (and again I frame this mostly in the AI governance area as it's what I know best). The norms you advocate are mostly about maintaining cause agnosticism and focusing on deliberation and prioritization (right?) as opposed to being more action-oriented. In my case, I'm happier with the action-prioritization balance I observe than I guess you are (though I'm of course not as familiar with how the balance looks elsewhere in the community and don't read the forum much).

I think I agree with all of what you say. A potentially relevant post is The Values-to-Actions Decision Chain: a lens for improving coordination.

despite some explicit statements by people like Carrick Flynn in a post on the forum saying how little we know and a research agenda which is mainly questions about what we should do

Just in case future readers are interested in having the links, here's the post and agenda I'm guessing you're referring to (feel free to correct me, of course!):

That said, I do agree we should work to mitigate some of the problems you mention. It would be good to get people more clear on how uncertain things are, to avoid groupthink and over-homogenization. I think we shouldn't expect to diverge very much from how other successful movements have happened in the past, as there's not really precedent for that working, though we should strive to test it out and push the boundaries of what works. In that respect I definitely agree we should get a better idea of how homogenous things are now and get more explicit about what the right balance is (though explicitly endorsing some level of homogeneity might have its own awkward consequences).

Every year, representative members from central organizations gather in what is called a ‘leaders forum’, to cultivate collaboration and coordination. The forums are selective and not open to everyone. Reports about the forums or decisions that were taken there are sparse.

As you say, the purpose of the event is to further collaboration and coordination. 

To be clear, this typically looks like sharing updates on what various organizations are working on, identifying problems, getting advice, etc. To our knowledge (I've helped out with this event as a CEA staffer), the event hasn’t been used to make broad decisions about EA in general.

I doubt organisations would attend the forums if they did not influence their decision-making afterwards. It is exactly the type of meeting which I would love to see more transparency around.

I'm sure the Forum does influence decision-making, and in much the same way EA Global does; people talk to each other, learn things, and make different decisions than they might have otherwise. 

But as far as we're aware (and as CEA, we would probably be aware), orgs aren't coming together to make "official" decisions about EA as a whole. 80,000 Hours might change its priorities, and Open Phil might change its priorities, but there isn't some unified set of priorities that everyone agrees upon, or some single task that everyone decides to work together on afterward. People come in with differing views and leave with differing views, even if they make some updates during the event.

Among many things I agree with, the part I agree the most with:

EAs give high credence to non-expert investigations written by their peers, they rarely publish in peer-reviewed journals and become increasingly dismissive of academia

I think a fair amount of the discussion of intelligence loses its bite if "intelligence" is replaced with what I take to be its definition: "the ability to succeed at a randomly sampled task" (for some reasonable distribution over tasks). But maybe you'd say that perceptions of intelligence in the EA community are only loosely correlated with intelligence in this sense?
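
Read that way, the definition is just an expected score over a task distribution. As a rough formalization (my own gloss; the scoring function $s$ and the task distribution $D$ are assumptions I am supplying, not something spelled out here):

\[
\operatorname{Intelligence}(a) \;=\; \mathbb{E}_{t \sim D}\big[\, s(a, t) \,\big], \qquad s(a, t) \in [0, 1],
\]

where $a$ is a person or agent, $t$ is a task drawn from $D$, and $s(a, t)$ scores how well $a$ succeeds at $t$. On this reading, most disagreement about who counts as "intelligent" is really disagreement about which distribution $D$ to use.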

As for cached beliefs that people accept on faith from the writings of perceived-intelligent central figures, I can't identify any beliefs that I have that I couldn't defend myself (with the exception that I think many mainstream cultural norms are hard to improve on, so for a particular one, like "universities are the best institutions for producing new ideas", I can't necessarily defend this on the object level). But I'm pretty sure there aren't any beliefs I hold just because a high-status EA holds them. Of course, some high-status EAs have convinced me of some positions, most notably Peter Singer. But that mechanism for belief transmission within EA, i.e. object-level persuasion, doesn't run afoul of your concerns about echochamberism, I don't think.

But maybe you've had a number of conversations with people who appeal to "authority" in defending certain positions, which I agree would be a little dicey.

But that mechanism for belief transmission within EA, i.e. object-level persuasion, doesn't run afoul of your concerns about echochamberism, I don't think.

Getting too little exposure to opposing arguments is a problem. Most arguments are informal so not necessarily even valid, and even for the ones that are, we can still doubt their premises, because there may be other sets of premises that conflict with them but are at least as plausible. If you disproportionately hear arguments from a given community, you're more likely than otherwise to be biased towards the views of that community.

Yeah I think the cost is mostly lack of exposure to the right ideas/having the affordance to think them through deeply, rather than because you're presented with all the object-level arguments in a balanced manner and groupthink biases you to a specific view.

A few things this makes me think of:

Explore vs. exploit: for the first part of your life (the first 37%?), you gather information; then for the last part, you use that information, maximizing and optimizing according to it. Humans have definite lifespans, but movements don't. Perhaps a movement's life depends somewhat on how much exploration it continues to do.
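
The 37% figure alludes to the classic optimal-stopping ("secretary problem") result. Here is a minimal simulation sketch of that rule, under the standard assumptions (candidates arrive in random order, each can only be compared with those already seen, and you must commit on the spot); the function name and parameters are purely illustrative, not anything from this thread:

```python
import random

def follows_37_rule(n: int) -> bool:
    """One trial of the explore/exploit ("secretary") rule: observe the first
    ~37% of candidates without committing, then accept the first later
    candidate who beats everyone seen so far."""
    order = list(range(n))           # candidate quality 0..n-1; n-1 is best
    random.shuffle(order)            # random arrival order
    cutoff = int(n * 0.37)           # length of the explore phase
    best_seen = max(order[:cutoff]) if cutoff else -1
    for quality in order[cutoff:]:   # exploit phase
        if quality > best_seen:
            return quality == n - 1  # success iff we picked the overall best
    return False                     # the best candidate was in the explore window

trials = 100_000
wins = sum(follows_37_rule(100) for _ in range(trials))
print(f"Picked the best candidate in ~{wins / trials:.1%} of trials (theory: ~37%)")
```

For n = 100 this lands near the theoretical success rate of roughly 37%. The analogy in the comment is looser, of course: a movement, unlike a single hire, can keep cycling back into the explore phase.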

Christianity: I think maybe the only thing all professed Christians have in common is attraction to Jesus, who is vaguely or definitely understood. You could think of Christianity as a movement of submovements (denominations). The results are these nicely homogenous groups. There's a Catholic personality or personality-space, a Methodist, Church of Christ, Baptist, etc. Within them are more, or less, autonomous congregations. Congregations die all the time. Denominations wax and wane. Over time, what used to divide people into denominations (doctrinal differences) has become less relevant (people don't care about doctrine as much anymore), and new classification criteria connect and divide people along new lines (conservative vs. evangelical vs. mainline vs. progressive). An evangelical Christian family who attend a Baptist church might see only a little problem in switching to a Reformed church that was also evangelical. A Church of Christ member, at a church that would have considered all Baptists to not really be Christians 50 or 100 years ago, listens to some generic non-denominational nominally Baptist preacher who says things he likes to hear, while also hearing the more traditional Church of Christ sermons on Sunday morning.

The application of that example to EA could be something like: Altruism with a capital-A is something like Jesus, a resonant image. Any Altruist ought to be on the same side as any other Altruist, just like any Christian ought to be on the same side as any other Christian, because they share Altruism, or Jesus. Just as there is an ecosystem of Christian movements, submovements, and semiautonomous assemblies, there could be an ecosystem of Altruistic movements, submovements, and semiautonomous groups. It could be encouraged or expected of Altruists that they each be part of multiple Altruistic movements, and thus be exposed to all kinds of outside assumptions, all within some umbrella of Altruism. In this way, within each smaller group, there can be homogeneity. The little groups that exploit can run their course and die while being effective tools in the short- or medium-term, but the overall movement or megamovement does not, because overall it keeps exploring. And, as you point out, continuing to explore improves the effectiveness of altruism. Individual movements can be enriched and corrected by their members' memberships in other movements.

A Christian who no longer likes being Baptist can find a different Christianity. So it could be the same with Altruists. EAs who "value drift" might do better in a different Altruism, and EA could recruit from people in other Altruisms who felt like moving on from those.

Capital-A Altruism should be defined in a minimalist way in order to include many altruistic people from different perspectives. EAs might think of whatever elements of their altruism that are not EA-specific as a first approximation of Altruism. Once Altruism is defined, it may turn out that there are already a number of existing groups that are basically Altruistic, though having different cultures and different perspectives than EA.

Little-a altruism might be too broad for compatibility with EA. I would think that groups involved in politicizing go against EA's ways. But then, maybe having connection even with them is good for Altruists.

In parallel to Christianity, when Altruism is at least somewhat defined, then people will want to take the name of it, and might not even be really compliant with the N Points of Altruism, whatever value of N one could come up with -- this can be a good and a bad thing, better for diversity, worse for brand strength. But also in parallel to Christianity, there is generally a similarity within professed Christians which is at least a little bit meaningful. Experienced Christians have some idea of how to sort each other out, and so it could be with Altruists. Effective Altruism can continue to be as rigorously defined as it might want to be, allowing other Altruisms to be different.

This is an interesting perspective. It makes me wonder if/how there could be decently defined sub-groups that EAs can easily identify, e.g. "long-termists interested in things like AI" vs. "short-termists who place significantly more weight on current living things", or "human-centered" vs. "those who place significant weight on non-human lives."

Like within Christianity, specific values/interpretations can/should be diverse, which leads to sub-groups. But there is sort of a "meta-value" that all sub-groups hold, which is that we should use our resources to do the most good that we can. It is vague enough to be interpreted in many ways, but specific enough to keep the community organized.

I think the fact that I could come up with (vaguely defined) examples of sub-groups indicates that, in some way, the EA community already has sub-communities. I agree with the original post that there is a risk of too much value-alignment that could lead to stagnation or other negative consequences. However, in my 2 years of reading/learning about EA, I've never thought that EAs were unaware or overconfident in their beliefs, i.e. it seems to me that EAs are self-critical and self-aware enough to consider many viewpoints.

I personally never felt that just because I don't want (nor can I imagine) an AI singleton that brings stability to humanity meant that I wasn't an EA.

Thank you very much for writing this post. You have clearly stated most of my concerns about EA that I could not fully articulate, and concerns that prevented friends from joining the community who I believe would be a good fit for it.

Greg put it crisply in his post on epistemic humility

This link didn't work for me.

Fixed, thanks.

[comment deleted]