Edit 5/09/2022: Thanks to the helpful comments below, I want to make two clarifications upfront. 1) I'm not claiming the polycrisis is a new event, simply that it exists and is the most helpful way of conceptualising our current period. 2) I'm not claiming that the risk is that EA itself becomes dominant, but that it contributes to and props up a worldview that is harmful, and that this is a serious problem even at relatively low levels of contribution (see my reply in the comments below).
Effective Altruism Risks Perpetuating a (Severely) Harmful Worldview
This essay argues that, even though many of the actions of the EA movement might be highly morally commendable, its worldview may be severely harmful. This harm may come from hindering action around the x-risk (factor) of the global polycrisis, having a causal role to play in the polycrisis, or even being an x-risk unto itself via the threat of causing a Whimper (to borrow Nick Bostrom’s terminology).
Please note that, for speed, this essay is written and referenced fairly informally, though I hope this does not count against the force of the overall thesis. I’m happy to clarify in the comments anything that is unclear or raises doubts.
The argument itself proceeds as follows:
Premise Group 1: Worldviews
- P1: EA has a foundational worldview
- P2: This worldview is influential
- P2a: There’s a prima facie case for the power of worldviews
- P2b: EA specifically has influence through its worldview
Premise Group 2: Polycrisis
- P3: There is good reason to believe we are currently experiencing a polycrisis
- P3a: The world is experiencing a number of global crises
- P3b: There is good reason to believe that these crises are in some sense connected
- P4: The polycrisis itself is a unique X-Risk, or at a minimum an X-Risk Factor
- P4a: The polycrisis is an X-risk (factor)
- P4b: The polycrisis is unique as an X-Risk (factor)
Premise Group 3: The risks of the EA worldview
- P5: The polycrisis can be connected to the worldview reinforced by EA
- P5a: EA’s worldview hinders action around the polycrisis
- P5b: EA’s worldview may actually contribute to causing (the acceleration of) the polycrisis
- P6: The EA worldview itself may be a potential X-Risk
C: Even if EA activity leads to good results, the worldview it fosters is harmful
- Bonus Conclusions
- C2: EA overlooks the role of worldviews when considering impacts, especially of its own work
- C3: Research into worldviews, their impacts and honing them might be a significantly impactful yet overlooked cause area
Premise Group 1: Worldviews
P1: EA has a foundational worldview
By claiming that EA has a worldview, at the most basic level I am simply arguing that EA, as a movement, sees the world in a certain way. I’ll focus on two aspects, namely EA’s _epistemology_ and its _commitments_. By epistemology I mean its ways of obtaining knowledge and understanding, and how it models the world around us. By commitments I mean the guiding assumptions and principles on which EA operates.
Note, I am necessarily generalising here. I am sure not everyone in the movement endorses the features of this worldview to the same degree; however, this premise only makes a claim about the movement as a whole. For this to hold, two things should be the case:
- The averaged position of the membership should tend towards the elements I have outlined below; and
- The major EA organizations should tend towards these positions in their work, publications etc.
I believe there is a good case that both of these hold for each aspect of the EA worldview discussed.
Also note that I am not trying to describe the EA worldview in its entirety. I am focusing on some core elements of the worldview which are relevant to my argument.
The elements of EA’s epistemology I wish to highlight are:
- Focus on analytic logic, rationality and the western scientific method as paths to knowledge[1]
- Social atomism, understood as taking individuals as the base units of social analysis[2]
- Structure-based perspective: modelling the world in terms of structures, rather than relationships[3]

The EA commitments I wish to highlight are:
- Maximisation: we should seek to maximise the creation/incidence of things which are philosophically good/valuable[4]
- Technical optimism: a tendency to perceive social ills as amenable to technical fixes (e.g. technology, institutional innovation)[5]
- Social incrementalism: related to the above commitment to technical optimism; the belief that the best route to human progress is to tweak and improve the social structures we currently have, rather than develop entirely new ones
  - E.g. many EAs think capitalism is desirable, identify as neoliberals etc.[6]
  - NB: this is not to say that I overlook the commitments of many in the EA community to radically transformed modes of human existence (e.g. transhumanists, space colonists). A commitment to social incrementalism just means that incremental improvement of our current social structures/institutions is largely accepted as the best route to these new futures.
For the sake of brevity, I have included what I view as indicative pieces of evidence for each one of these. I hope when combined with reviewers’ understanding of the EA community they will be sufficient to demonstrate that the worldview I have outlined above is a plausible characterisation of the EA movement. There is nothing contained above which I predict to be particularly controversial.
P2: This worldview is influential
This premise should be understood to assert that the worldview of EA has some reasonably significant impact on the world. Counterfactually, if EA had a different worldview, then the world would plausibly be different.
I note a subtlety here, which might make envisioning this counterfactual difficult. EA’s worldview in many ways aligns with the dominant social paradigm of the modern, western world. The direction of impact may then be questioned: it might just be asserted that EA is a product of the dominant social paradigm, and thus in the counterfactual the latter would be different because it would have caused a different set of EA values, rather than been influenced by them.
My claim here is that the causation is bi-directional. EA’s worldview is influenced by the dominant social paradigm, but also influences it. In particular, I argue that EA’s success acts to reinforce and extend some of the ideas already present in this paradigm. Thus in our counterfactual it might not be that the social paradigm has shifted all that much, but it could perhaps be less well supported, or its core components might not be taken to the same lengths. The option set of possible responses to social problems contained in the collective western social imagination might therefore be larger. I’ll expand on these points in the premises below.
P2a: There’s a prima facie case for the power of worldviews
In general, worldviews are noted to be influential in a number of ways. The first, and most obvious, is that they help determine action. There are a number of different models in the social sciences and social psychology–particularly in the sustainability literature–which give worldviews a significant role in determining how people act. These include, for example, the prominent ABC and MOA models[7] of behaviour. The acceptance of the role of worldviews pervades academia and beyond.
Worldviews also help set the imaginative Overton Window for how we might organise our societies[8], and the responses available to us to meet challenges. This is most obvious in the case of political worldviews (e.g. commitments around a large state being bad determine a certain option set for responding to problems around healthcare provision), but there are others. This chimes particularly with recent work on the role of social and political imagination in our response to crises and challenges[9].
The discussion of worldviews in academic literature thus presents a strong case that worldviews in general are important. Worldviews which are widely held or otherwise embedded in important political and social institutions can have significant impacts on how societies operate.
P2b: EA specifically has influence through its worldview
Most obviously, EA’s worldview will have influence through its membership. The worldviews of new members are likely to be influenced by EA when they first join the movement. I, for example, had quite an aligned worldview when I first joined, but this was further honed and extended by engaging with the wider movement. EA particularly targets the global elite of the financially privileged and highly educated. Its influence in directing the significant charitable donations made by these people is itself a source of major impact on the world. It also advises its membership to take impactful jobs across governments and major players in the private sector and civil society. If we accept that worldview impacts how EAers operate in these impactful jobs, then EA’s worldview further helps to shape the world through affecting how these significant organisations and institutions operate, via the EAs embedded in them.
Further, EA-aligned ideas have proven highly influential among a set of extremely high-net-worth and powerful people, particularly in the world of tech. Potentially the most prominent example is their apparent influence on Elon Musk[10], but there are many others[11]. These people are a step above the average, already highly influential, EAer in terms of the impact they can have on the world. If their alignment with or influence by EA’s worldview determines, even in part, how they channel this impact, then the case for EA’s worldview being influential is strengthened.
Premise Group 2: Polycrisis
The next group of premises introduces the idea of a polycrisis. It presents this concept as a plausible framing for many, if not all, of the challenges humanity currently faces, and makes the case that it can plausibly be regarded as an X-Risk or X-Risk factor in its own right.
P3: There is good reason to believe we are currently experiencing a polycrisis
For this first premise of the argument to hold, readers must accept the following two sub-premises:
- P3a: The world is experiencing a number of crises
- P3b: There is good reason to believe these crises are in some sense interconnected
Note the use of the phrase “good reason to believe” in P3 and P3b. I have chosen this phrase to indicate a weaker claim than factual truth (P3a, by contrast, makes the stronger claim to factual truth, understood as tracking “objective” reality; I bracket epistemological considerations around the connection between truth and reality here, as they are not relevant or useful). Instead, “good reason to believe” should be taken to mean that, on the basis of the evidence, our Bayesian prior for the claim holding true should both:
- Be nonzero; and
- Be greater than zero to a significant enough extent that the claim is deemed worthy of consideration
As is the case in the X-Risk literature, even a very low prior probability can fulfill this second condition should the consequences of the claim holding true be high enough (via expected value theory). I will later argue that this is the case in this argument also.
I use the slightly vaguer phrase “significant enough” for two related reasons. The first is to allow for reasonable divergence in assessments of the arguments and evidence for the claims made. I contend that this argument should hold across a wide distribution of credence levels in its premises, due to the high potential consequences (in terms of downside risk) of their being true. Again, this stems from expected value and expected choiceworthiness calculations.
The second is that many of the claims made in this argument, by their very nature, do not have an evidence base that is easily amenable to the forms of interrogation and justification favoured within EA. They are far harder to “prove” evidentially or even logically than many claims made in the EA literature, and so I have reflected this by making weaker claims around their plausibility rather than their truth. As mentioned, I contend that the argument holds even on this weaker standard of evidence.
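The expected-value reasoning invoked here can be made concrete with a toy calculation. All numbers below are hypothetical, chosen purely for illustration:

```python
# Toy illustration of the expected-value point above: a claim with a much
# lower prior can still dominate in expectation if its stakes are large
# enough. All numbers are hypothetical.

def expected_loss(prior: float, loss_if_true: float) -> float:
    """Expected downside of dismissing a claim, given a prior that it is
    true and the loss incurred if it is true but ignored."""
    return prior * loss_if_true

# A mundane claim: 10% prior, modest stakes.
mundane = expected_loss(0.10, 1_000)

# A polycrisis-style claim: 1% prior, but civilisation-scale stakes.
severe = expected_loss(0.01, 10_000_000)

# Despite a tenfold lower prior, the severe claim carries a thousandfold
# larger expected loss, so it remains worthy of consideration.
print(severe > mundane)  # True
```

This is the same structure of reasoning used throughout the X-Risk literature: a low prior does not license dismissal when the downside, conditional on the claim being true, is sufficiently large.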
P3a: The world is experiencing a number of global crises
This premise is, I hope, fairly uncontroversial on any reasonable definition of a crisis, even if such a definition diverges from the one I have given above. Given this, I will not devote extensive time to justifying it.
Instead, I’ll simply note that recent history appears to be littered with events which fit under the banner of a global crisis, from the COVID-19 pandemic to the ongoing war in Ukraine and the accompanying inflation, supply chain instability and financial hardship it has wrought across large swathes of the world. We may well also classify apparent increases in political polarization[12] and the surge of right-wing populism in the 2010s under the banner of a global crisis, and the impacts of climate change and biodiversity loss have already reached crisis point on many metrics[13]. Perhaps more contentiously, it is at least plausible to consider stagnating happiness in the developed world a crisis of sorts[14].
P3b: There is good reason to believe that these crises are in some sense connected
I note at the outset of this section that I am not yet arguing _how_ these crises are interconnected, or even that there is a single common connection. For P3b to hold, all that is required is to accept that it is plausible they are connected to one another. For example, crisis A may be connected to crisis B via mechanism X, and crisis B to crisis C via mechanism Y. Accepting this chain would still equate to accepting a polycrisis.
The concept of a polycrisis is a relatively new one, but it has already begun to receive attention from the highest levels of academia and global institutions[15]. While the polycrisis may be a new concept, expert concern around the interaction of systems and systemic risks is not. As Homer-Dixon et al. note in their paper on the polycrisis, there is already extensive literature on systemic risks. What has been lacking until now is serious research into their causal interactions and the crises which might result[16].
Many connections are already clear. Much work has been published on the intersection of environmental stressors[17], and on the intersection between environmental and social system destabilisation[18]: governance strain from survival migration and civil unrest due to crop failures provide two potential examples. There is a clear link between the combined impacts of the global pandemic and the war in Ukraine and the global supply chain and economic crisis. Finally, much work exists on the interconnection of financial crises and right-wing populism[19]. Much of the EA literature around X-Risk factors can also be seen to fall into this category[20].
Note that, again, I am merely picking out a few plausible examples of connection. To reiterate, this is simply to show that some degree of connection is likely, even if different crises are connected by different means. I hope illustration of this kind will be sufficient for the argument; outlining all the potential connections each crisis has to others would be a research project unto itself.
P4: The polycrisis itself is a unique X-Risk, or at a minimum an X-Risk Factor
Again, P4 is broken down into two sub-premises, arguing in turn for the polycrisis as an X-Risk (factor) and for its uniqueness.
P4a: The polycrisis is an X-risk (factor)
The polycrisis implies the potential for feedback loops, nonlinear responses, emergent events[21] and multi-systemic collapse. Even if we do not think of the failure of any one system as an X-Risk (for example, the prevailing EA opinion is that the worst likely outcomes of the climate crisis will be bad, but fall short of an X-Risk), our prior that the simultaneous collapse of multiple social and natural systems would be severe enough to count as an X-Risk may plausibly be higher.
Further, the characteristic of emergence and the possibility of feedback loops between individual systems means that we may wish to upweight our priors around the potential negative consequences of the collapse of a given individual social or environmental system. These features are intrinsically non-amenable to probabilistic modelling, and so there is the potential that hidden or hard to predict feedback loops or emergent events might lead us into a civilisational death spiral, or other failure mode we have not calculated for[22].
Now, even in light of the above, we may be confident that the polycrisis does not pose a fully fledged X-Risk. At a minimum, however, massive destabilisation across a range of natural and social systems should count, on any reasonable account, as an X-Risk factor. This multisystemic destabilisation and/or collapse may accelerate risks from AI and engineered pandemics (either via arms-race dynamics or increasingly desperate research, e.g. gain-of-function work in the face of more serious natural pandemics) and also from interventions like geoengineering, again as we scramble with increased urgency to respond[23]. Once again, the possibility of nonlinear shifts in the broader connected web of systems as a result of changes in one area, and the potential for feedback loops and/or emergent occurrences, may also lead us to upweight the prior probability we attach to a worsening polycrisis accelerating X-Risks in unforeseen ways.
P4b: The polycrisis is unique as an X-Risk (factor)
The fact that the polycrisis arises from the connection and interaction of an array of malfunctioning natural and social systems means that its complexity plausibly operates at orders of magnitude higher than crises/risks from individual systems[24]. As I have already outlined, complex systems of this kind are uniquely unpredictable: they are characterised by nonlinearity, emergence and the potential for feedback loops. While all our natural and social systems can be characterised in this way, the significantly increased complexity of the interconnected system containing all (or almost all) of them takes this so much further as to render the polycrisis unique.
Even if one has high credence that the individual component crises of the polycrisis aren’t X-Risks, we cannot, almost by their very nature, have high credence about their interaction effects. This is especially true given our currently limited evidence: there is a paucity of research into these interactions, and they are largely insusceptible to the traditional probabilistic models favoured by the EA community and much of broader society[25]. These effects may give rise to cumulative X-Risks or risk factors we cannot yet grasp, and this should lead us to regard the polycrisis differently to the other crises faced by humanity.
The polycrisis is also unique in the responses it requires. Interconnection means that siloed responses targeting only individual systems or problems may be at best ineffective and at worst actively harmful, with numerous examples from the complex systems literature demonstrating exactly this[26]. This stands in stark contrast to other risks humanity faces, where siloed responses are often mutually reinforcing: it is hard to see how technical work on the AI control problem, for example, could fail to advance, let alone actively curtail, broader efforts to reduce X-Risk from AI.
Premise Group 3: The risks of the EA worldview
This final group of premises and conclusions argues for a link between EA’s worldview and the risk posed by the polycrisis, and outlines a potential X-Risk (of curtailed flourishing) stemming from this worldview itself. This leads to the conclusion that EA’s worldview is actively harmful, even if the actions of the movement may be morally commendable.
At the outset, I note that I will speak of EA’s worldview being connected to the polycrisis in two senses. First, that this worldview contributes to the occurrence of the polycrisis itself. Second, that it is connected to the risk posed by the polycrisis, e.g. through hampering the response to it. For the sake of brevity I will not distinguish these two interpretations on every occasion, as doing so does not meaningfully affect my argument. Please just note when reading that both senses are intended, unless explicitly stated otherwise.
P5: The polycrisis can be connected to the worldview reinforced by EA
This premise does not argue that EA’s worldview is the sole cause of the polycrisis, just that it makes a causal contribution. I should also reiterate that for this claim to be true, it would only have to be the case that the EA worldview was significantly influential in causing one element of the polycrisis: if we accept the linkage claim made in premise P3b, then even if other elements are linked to this one by other means, this still counts as an overall contribution. To use an imperfectly linear analogy: my finger can still count as contributing to the collapse of a domino line even though it only pushes over one domino, and the collapse of every subsequent domino is caused by other dominoes.
The literature around worldviews is relatively young and thus limited. Further, worldviews are by their very nature something whose impact is difficult to measure. Given this difficulty in attributing causal contribution, the claim of this premise is just that there is a nonzero chance that worldview plays such a role, and in fact that our credence in this role should be significantly greater than zero (depending on how you discount for uncertainty, evaluate evidence etc).
P5a: EA’s worldview hinders action around the polycrisis
The aspects of EA’s worldview outlined above can be connected to the polycrisis in a number of ways. Most obviously, the approach of social atomism, combined with a focus on structure rather than relationships, means that the EA worldview is not predisposed to acknowledging the polycrisis for what it is; the EA community has shown a taste for discrete, vertical interventions and has not seemed to pay much attention to the implications of complexity theory for its work. This leaves the movement unlikely to engage deeply with the interconnection of the polycrisis and its implications, which hampers effective response.

Similarly, the longtermism which now dominates modern EA can be seen to arise from the intersection of a number of tenets of the EA worldview listed above, particularly the commitment to maximisation and the methods of reasoning which make up its epistemology. The longtermist approach may likewise lead to present-day inaction around the polycrisis, magnifying the risk of path dependency causing feedback loops and emergent events. This is due to the propensity of longtermism, through its focus on the extreme long term, to disregard and perhaps even magnify symptoms and breakdowns which are not deemed critical to the long-term future. Relative inattention to climate change may be one example here[27], as is the use of longtermism to justify actions by the wealthy which may endanger democratic institutions[28]. In all these cases, this long view, particularly when combined with high confidence in one’s probabilistic modelling of risks and in human technical capacity to address them, may lead us to allow the polycrisis to take hold in an irreversible fashion.
P5b: EA’s worldview may actually contribute to causing (the acceleration of) the polycrisis
There is a plausible case for a stronger causal relationship between the EA worldview and the polycrisis. It is not just that this worldview hinders the global response, but also that it has helped bring about the polycrisis and its associated risks in the first place.
This argument draws on the idea of a dominant social paradigm (DSP), used regularly in the sustainability literature. The DSP was initially defined as the “collection of norms, beliefs, values, habits, and so on that form the world view most commonly held within a culture.”[29] Much of the EA worldview refines, makes concrete, and extends aspects of the dominant Enlightenment worldview of the West. This can be seen in their common features, such as the privileging of rationality, the commitment to individualism, the idea of forward progress and the view of the world as susceptible to technical fixes[30]. Maximisation, to take another example, is also a common assumption of the rational self-interest used as the guiding light of our societies. This connection between the DSP and EA can also be seen in the overlap of many elements of the EA worldview with influential fields such as orthodox economics, which are independent of the movement but have significantly influenced the structure of Western culture and society.
I argue, in line with Premise 2 above, that the EA movement has helped to further reify and extend the dominant social paradigm of the Western world. To reiterate, I have acknowledged the likely bidirectional influence between the EA worldview and the DSP above; the facets of the EA worldview I focus on are not unique or new, and would likely still be highly influential on society without EA. This argument simply claims that EA makes them more powerful and influential: by logically extending them further than they might otherwise be taken, by supporting and strengthening them with rigorous argumentation, and by ensuring these stronger, more extensive commitments are perpetuated and acted on by socially powerful EAers. Justifications of increased environmental extraction (often in the name of decoupling) and of technological accelerationism in terms of longtermism provide an example of this extension of tendencies which trace their roots back to the Enlightenment[31]. Given the hard and soft power still exerted by the West, the global influence of this contribution to the Western DSP is likely to be significant.
Much is made in the sustainability literature of the link between the DSP and the environmental crisis[32]. This alone goes some way to linking this worldview to the broader polycrisis, but there are other ways the DSP can be connected to aspects of the polycrisis. For example, the accused “solutionism”[33], or, more charitably, technical optimism, has shut down other avenues of progress and contributed to the growing dysfunction and capture of mainstream politics by elite interests[34]; traditional politics has become overlooked as a route to social change in favour of technical interventions, and has thus become trapped in a self-perpetuating cycle of decay and vulnerability to populist capture[35]. This solutionism has also been shown to act as a self-justifying mechanism for the continued socially and democratically corrosive behaviour of large tech companies in particular[36]. The DSP can also be linked to the “growth paradigm”, which, aside from its environmental impacts[37], has been accused of curtailing human wellbeing through its equation of economic growth with progress at all levels of economic development (rather than just in the context of global poverty)[38].
There are also arguments that go further. Here, the claim would be that the worldview of the DSP, which is strengthened, extended and contributed to by EA, acts as a unifying denominator across the polycrisis as a whole. Rather than simply contributing to one of the crises comprising the polycrisis, this argument claims that this worldview contributes to them all, and that it is one of, if not the, most influential causal factors.
To start with an EA specific version of this argument, we can return to the critique of longtermism, which I have noted arises as a product of the EA worldview. This critique argues that the worldview underpinning longtermism, particularly the positivity of human domination of nature and technological acceleration (due to the value-neutrality thesis around technology), is the very same worldview which has driven us to a point of planetary crisis[39].
The work of the acclaimed neuroscientist, philosopher and cultural scholar Iain McGilchrist provides another, broader example, targeting the Western DSP as a whole. In his book ‘The Master and His Emissary’, McGilchrist argues compellingly that the difference in the way the right and left brain hemispheres engage with the world has profound cultural implications; a “left brain” mode of thinking which closely aligns with the worldview outlined above[40] has come to dominate Western civilization, and is linked compellingly to the state of crisis which has grown in Western modernity[41]. Similarly, the albeit less academically rigorous work of Jeremy Lent in ‘The Web of Meaning’ makes a similar claim about our dominant worldview, tying it to damaging disconnection and alienation from nature, other people and ourselves (with severe social consequences)[42]. Finally, there is the lens of ancient wisdom traditions. Buddhism, for example, espouses a middle way that is fundamentally at odds with the maximisation intrinsic to the EA worldview. Its teachings of interbeing and non-attachment similarly contrast with the social atomism and fixation on end consequences found in the EA worldview, and in the DSP more broadly. According to Buddhism, failure to grasp and live out these teachings is the root source of suffering in the world, and there is a long history of “engaged Buddhism” which has used these teachings to critique the current structures of our society[43]. While these critiques do not explicitly point to the polycrisis, they coalesce around the idea that the severe dysfunction of our present global society can be tied at its root to features of a dominant worldview, and thus provide compelling evidence for the link between worldview and polycrisis.
None of this is to say that these analyses are _correct_, simply that they coalesce in pointing, in a plausible manner, to insidious and detrimental effects of the DSP and, by extension, the EA worldview. We should here turn to accounts of dealing with both epistemic and moral uncertainty, in particular expected value theory and the “maximising expected choiceworthiness” model popular in the EA community[44]. Even if one disagrees with the underlying analyses, or with the moral foundations of the referenced work and work similar to it, the severity of the outcomes should they be true ought to be sufficient to make us take the accusations seriously. I will expand on this point further below. First, however, I note that the outcomes of these critiques of the EA worldview may be even more severe than outlined above.
P6: The EA worldview itself may be a potential X-Risk
There is a further argument, from lines of reasoning similar to those above, that the EA worldview is itself a form of X-Risk. Note this would mean that, even if the EA worldview did not hinder our ability to address the symptomatic aspects of the polycrisis, or cause the polycrisis at all, it may still be found to be deeply harmful.
In Bostrom’s seminal analysis of X-Risk, he outlines civilisation becoming trapped in a state of highly curtailed flourishing as an X-Risk, referred to by Bostrom as a Whimper[45]. If one subscribes to the reasoning of the critiques above, and those stemming from wisdom traditions such as Buddhism in particular, then the DSP, and thus the EA worldview supporting it, carries just this form of X-Risk. If one accepts an account of human flourishing that extends beyond the version of welfarism which sits at the core of EA, then by shaping society in the image of EA we risk creating a path dependency curtailing other values and forms of flourishing. We may render ourselves incapable of accessing the deeply felt sense of interconnection which comes with nirvana, or close ourselves off to the non-attachment which is in fact the only route out of our suffering. This critique can take many flavours, for example being derived from the various philosophical positions which extend non-instrumental value to the external environment[46]. Of course, there are many contingencies to this risk, particularly around the type of future EA helps to bring about and our ability to pivot if we realise we have gone awry. Again, the claim is just that it is quite plausible that the DSP, and the EA worldview helping to drive it, trap us in a severely curtailed future from which we cannot escape. Transhumanism may provide one example: we may realise only when nature has been eradicated and we have long transcended it that the natural world had intrinsic value, or that it is only in genuine embodied human form that we can access the real version of felt interconnection espoused by wisdom traditions. Note this example is meant to be illustrative only, and simply shows that one possible future which can be understood as an extreme extension of the EA worldview _might_ irreversibly curtail other forms of value or flourishing.
I’m sure there are other intuition pumps which do a similar job, but for brevity I won’t seek to devise them here.
EAers have up until now been quite untroubled by these critiques from other philosophical standpoints, because their commitment to the core philosophical premises of EA means that they simply reject the critiques' foundations out of hand. I argue that this ease of dismissal stems from overlooking the implications of these positions for the X-risk posed by those very premises. Through this lens, we should take the risk of being wrong about these competing views very seriously on any reasonable account of moral uncertainty, even if we hold reasonable moral disagreement with them. In other words, we may be highly philosophically confident in the EA worldview, but the risk of it causing a Whimper if it is wrong should lead us to give significant weight to competing perspectives. I would also argue that, even under a state of such confidence, a reasonable person should not hold so low a credence in these competing views that this claim becomes an instance of Pascal’s Mugging[47].
Note one further implication of this discussion of worldviews. It is quite likely that the alternative worldviews espoused by critics would themselves reduce the more traditional X-Risks concerning EAers. To take a toy example, large-scale promotion of Buddhist principles could plausibly lead to decreased democratic appetite for hawkish foreign relations (and thus the ensuing arms-race and related dynamics) and to valuable contributions to AI ethics[48]. We may therefore conclude that the downside risk of trying to advance these worldviews is relatively low, as they will support the welfarist ends of EA even if they are false. By contrast, the downside risk of neglecting them should they be true is potentially huge. This may present further reason to shift the EA worldview, even if one’s credence in my final conclusion is lower than my own.
Note that this possibility resembles a modified version of the claim made by Will MacAskill in his new book ‘What We Owe the Future’ about the importance of promoting positive values as a route to safeguarding the long-term future, combined with the idea that the EA worldview is self-effacing. This term is used in moral philosophy, where an ethical theory (or, indeed, a worldview system such as EA) is self-effacing if, roughly, whatever it claims justifies a particular action, or makes it right, had better not be the agent's motive for doing it[49]. Thus, if we wish to follow the EA commitment to doing the most good, the best thing we may be able to do is not act from EA motivations, and in fact abandon the EA worldview altogether.
C: Even if EA activity leads to good results, the worldview it fosters is harmful
The conclusion of this argument is thus that, even if the EA movement contributes morally positive actions to the world (a statement I certainly agree with), it fosters and extends a worldview that is itself harmful, and potentially seriously so. This is plausible under many reasonable definitions of harm, but I focus here on the imposition of serious risk, and on the loss of wellbeing and human value relative to counterfactual circumstances.
The EA worldview has been argued to be harmful because it bolsters and extends the dominant social paradigm of the West. This can in turn be tied to the global polycrisis we currently face, both through hampering effective action to address it and perhaps even through directly causing it. It may even be that the EA worldview itself poses an X-Risk through its ability to bring about a Whimper, meaning we should take critiques which insinuate such a threat incredibly seriously on the basis of expected value theory.
Again, one may reframe this conclusion more generally in terms of expected value theory, instead concluding that the consequences of the EA worldview being harmful are potentially so severe that even a small level of credence in this conclusion should carry a high weight when we consider our actions.
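To make this expected-value framing concrete, here is a minimal sketch; the symbols are purely illustrative assumptions of mine, not estimates from the argument itself:

```latex
% Let p be one's credence that the EA worldview causes the harm argued for,
% L the disvalue of a Whimper-style lock-in if it does, and
% G the expected good done by EA activity if it does not.
% The worldview is net-negative in expectation whenever
%   p * L > (1 - p) * G,
% which rearranges to a threshold credence:
\[
  p \cdot L > (1 - p)\, G
  \quad\Longleftrightarrow\quad
  p > \frac{G}{G + L}.
\]
% Since L is astronomically large on any longtermist accounting of a
% curtailed future, the threshold G / (G + L) can be very small, so even
% modest credence in the conclusion carries substantial decision weight.
```

This is just the standard two-outcome expected-value comparison; the substantive debate is of course over what values of p, G and L are reasonable.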
Bonus Conclusions
The following are some potential further inferences we might draw from this argument, which may be informative.
* C2: EA overlooks the role of worldviews when considering impacts, especially of its own work
* C3: Research into worldviews, their impacts and honing them might be a significantly impactful yet overlooked cause area[50]
Notes
As evidenced by EA’s commitment to the scientific method as understood in the modern Western world: https://drive.google.com/file/d/1rQu75k8uMFpdsp1y3JWlHP6kev3T-97N/view ↩︎
As evidenced by heavy alignment with neoclassical economic methods which make just such an assumption. See: https://www.lesswrong.com/posts/Aq4KNxKscywt3yXqk/some-blindspots-in-rationality-and-effective-altruism#4__Our_views_are_built_out_of_structures ↩︎
Ibid. ↩︎
Definitional to EA as per MacAskill’s definition: https://drive.google.com/file/d/1rQu75k8uMFpdsp1y3JWlHP6kev3T-97N/view ↩︎
There is heavy skew towards focusing on emerging technologies as solutions to major issues, as outlined in this critique: https://forum.effectivealtruism.org/posts/uxFvTnzSgw8uakNBp/effective-altruism-is-an-ideology-not-just-a-question#Certain_viewpoints_and_answers_are_privileged Similarly, there is a focus on optimising institutional structures, often in quite a technocratic manner as outlined in: https://forum.effectivealtruism.org/posts/yrwTnMr8Dz86NW7L4/technocracy-vs-populism-including-thoughts-on-the. A critical view of this approach would be to accuse it of technical and institutional solutionism https://www.macmillandictionary.com/buzzword/entries/solutionism.html and place it in contrast with more expansive, radical and difficult forms of systemic and cultural change. ↩︎
I point to 80,000 Hours Director Rob Wiblin’s appearance on the Neoliberal podcast as a high-profile and, in my view, indicative example. I also note there are very few EAs calling for the overthrow of capitalism, and many are quite sympathetic to it - see Will Bradshaw’s comment here: https://forum.effectivealtruism.org/posts/ExKZBvFuuENbyTgFE/is-capitalism-the-root-of-all-evil ↩︎
http://www.cres.gr/behave/pdf/paper_final_draft_CE1309.pdf ↩︎
In particular, see the work of UCL Prof. Geoff Mulgan: https://demoshelsinki.fi/julkaisut/the-imaginary-crisis-and-how-we-might-quicken-social-and-public-imagination/ and https://www.geoffmulgan.com/another-world-is-possible and the great political theorist and politician Roberto Unger e.g. in https://www.robertounger.com/wp-content/uploads/2017/10/the-left-alternative.pdf ↩︎
Who has recently gone so far as to endorse Will MacAskill’s book on longtermism: https://www.geo.tv/latest/431381-elon-musk-recommends-book-that-reflects-his-philosophy ↩︎
As is evidenced by the members of the Founders Pledge community, to take one example: https://founderspledge.com/community ↩︎
https://www.openaccessgovernment.org/political-polarisation/126991/ and https://carnegieendowment.org/2019/10/01/how-to-understand-global-spread-of-political-polarization-pub-79893 ↩︎
https://reliefweb.int/report/world/scientists-confirm-climate-change-already-contributes-humanitarian-crises-across-world ↩︎
E.g. https://www.iso.org/foresight/prosperity##stagnating-happiness-levels and https://www.economicshelp.org/blog/26659/economics/happiness-economics/ ↩︎
See: https://cascadeinstitute.org/team/, https://www.lse.ac.uk/ideas/events/2022/03/new-agenda-global-economic-governance/new-agenda-global-economic-governance and https://www.wto.org/english/news_e/news22_e/aid_27jul22_e.htm ↩︎
https://cascadeinstitute.org/wp-content/uploads/2022/03/A-call-for-an-international-research-program-on-the-risk-of-a-global-polycrisis-v1.0.pdf ↩︎
https://royalsocietypublishing.org/doi/10.1098/rspb.2015.2592 ↩︎
https://www.researchgate.net/publication/247639865_State_Failure_Task_Force_Report_Phase_III_Finding; https://www.unhcr.org/uk/climate-change-and-disasters.html ↩︎
https://blogs.lse.ac.uk/businessreview/2020/07/07/financial-crises-and-right-wing-populism-how-do-politics-and-finance-shape-each-other/ ↩︎
As outlined in The Precipice by Toby Ord: https://theprecipice.com/ ↩︎
I.e. events arising from the interactions between parts of a system which are more than the mere sum of these parts: http://www.andreasaltelli.eu/file/repository/Emergent_Complex_Systems.pdf ↩︎
A great summary of these potential pitfalls, with numerous examples, can be found in Donella Meadows - Thinking in Systems: a Primer: https://wtf.tw/ref/meadows.pdf ↩︎
These dynamics should be familiar to most EAers, and are outlined in Toby Ord’s “The Precipice” among other works: https://theprecipice.com/ ↩︎
While there is no single definition or measure of complexity, this claim appears likely across most if not all major understandings, outlined here: https://web.mit.edu/esd.83/www/notebook/Complexity.PDF ↩︎
Mapping degrees of complexity, complicatedness, and emergent complexity - https://doi.org/10.1016/j.ecocom.2017.05.004 ↩︎
Again for examples see Donella Meadows - Thinking in Systems: a Primer: https://wtf.tw/ref/meadows.pdf ↩︎
https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk ↩︎
Well known EA sympathiser Peter Thiel’s support of Trumpism, while donating large amounts to AI risk, might be an example here: https://www.theguardian.com/technology/2022/may/30/peter-thiel-republican-midterms-trump-paypal-mafia ↩︎
Pirages, D., & Ehrlich, P. (1974). Ark II: Social response to environmental imperatives. San Francisco: Freeman. https://scholar.google.com/scholar_lookup?&title=Ark II%3A Social response to environmental imperatives&publication_year=1974&author=Pirages%2CD&author=Ehrlich%2CP ↩︎
The dangerous extension of this worldview through the modern West has been incisively outlined by Evgeny Morozov in To Save Everything, Click Here: https://www.publicaffairsbooks.com/titles/evgeny-morozov/to-save-everything-click-here/9781610393706/ ↩︎
I’ll return to Elon Musk again, as he makes for one of the most brazen examples of taking such views to their extreme: https://bonpote.com/en/elon-musk-solution-or-nightmare-for-the-environment/ ↩︎
And has been for many years e.g. https://www.researchgate.net/publication/260419263_Commitment_to_the_Dominant_Social_Paradigm_and_Concern_for_Environmental_Quality ↩︎
The term coined by Evgeny Morozov in To Save Everything, Click Here: https://www.publicaffairsbooks.com/titles/evgeny-morozov/to-save-everything-click-here/9781610393706/ ↩︎
A powerful work evidencing this capture in Britain, for example, is The New Few: a Very British Oligarchy https://www.simonandschuster.co.uk/books/The-New-Few/Ferdinand-Mount/9781847399359 ↩︎
Evidenced by increasing rates of democratic disillusionment, particularly among younger generations https://www.bennettinstitute.cam.ac.uk/blog/faith-democracy-millennials-are-most-disillusioned/ ↩︎
Particularly through perpetuating the dangerous and unjustified notion of ‘green growth’ https://www.internationalaffairshouse.org/the-myth-of-decoupling-and-green-economic-growth/#:~:text=To reduce the material footprint,the theory of 'decoupling'. ↩︎
https://www.cambridge.org/core/journals/global-sustainability/article/human-wellbeing-in-the-anthropocene-limits-to-growth/ACF1D0265F3408C6612772730E31E210 ↩︎
https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo ↩︎
For McGilchrist the left brain perceives in pieces, is analytical, logical, and dislikes paradox. It is focused on the familiar, on categorising through language and symbols. It targets grabbing, controlling, and is highly goal-oriented. ↩︎
The work of Thich Naht Hanh being the most famous example. For an outline of core Buddhist beliefs see: https://www.dwms.org/uploads/8/7/8/7/87873912/thich_nhat_hanh_-_the_heart_of_buddhas_teaching.pdf ↩︎
Animism is one such approach: https://www.anthroencyclopedia.com/entry/animism but there are many ↩︎
https://www.technologyreview.com/2021/01/06/1015779/what-buddhism-can-do-ai-ethics/ ↩︎
The idea of a "polycrisis" strikes me as a bit nebulous -- can we name any points in human history in which there weren't several ongoing tangentially-connected crises at once? This seems like it's just a universal condition of global civilization, insofar as the news defines a "crisis" as "whatever problems are most newsworthy at the moment". In March 2020, there was no "polycrisis" -- instead we had the rare monocrisis, when the problem of a global pandemic was big enough to obviously outclass all other problems. Similar moments of monocrisis -- world war in 1939, nuclear tensions in 1963, etc -- seem more serious than the worst moments of ambient polycrisis. (1918, with pandemic + war? To some extent all crises can be described as poly, eg WW2 featuring simultaneous pacific and atlantic theaters.)
I also don't think that we realistically need to worry too much about EA becoming such a dominant worldview that it crowds out all competitors/successors and all of its flaws as a movement become permanently embedded into the edifice of civilization. EA is growing fast, certainly. Also, EA philosophy aspires to be / sometimes acts like it is a totalizing and complete system of knowledge offering solutions to all worldly problems... this naturally makes it easy to imagine a future world where EA really does become enthroned as the final philosophy. But I don't think that EA is really a complete philosophy in that way, and neither I suspect do many of EA's biggest supporters. (A few glaring examples: It doesn't have much to offer in terms of helping people find happiness and understand themselves in the way that mindfulness traditions do. It doesn't offer much of a vision for community / family life. And it doesn't have enough to say about politics and governance here in the developed world, thus spawning spinoff movements like "progress studies".) I think even if EA becomes extremely successful in the future, it will still eventually be surpassed or complemented by other movements and philosophies that address its flaws/gaps.
[Unrelated to the above two comments: as a happily married man, I declare that I am personally innocent of wrongdoing in any "poly" crisis that rationalist culture might inadvertently be stirring up.]
Your first paragraph is basically Littlewood's Law, and since humans have a bias towards more interesting or controversial stories, everything seems like a crisis or miracle. It's also why the news is useless nowadays.
Article about Littlewood's Law here: https://www.gwern.net/Littlewood#:~:text=At a global scale%2C anything,networked global media covering
Hi both,
Thanks for the helpful input and bearing with me as I circled back (it's been a pretty hectic period!).
Re the conceptualisation of polycrisis, I'd agree that this is a pretty universal condition of globalised society. I should probably clarify upfront that I'm not claiming this is a new thing, but that focusing on the interconnection of crises is nonetheless a valuable frame for approaching them. I'd probably also put the pandemic in the poly rather than mono category, given the knock-on economic effects etc., but that's somewhat of a tangential point.
Re EA's success, I definitely take your point. I think again I should clarify that my claim here is not that EA will itself dominate and lead the world astray, but that its current activity is contributing to/propping up a way of viewing the world which is potentially harmful. My view, and the argument above, is that this is something to be concerned about even if we think this contribution is in the grand scheme of things relatively minor at this stage, given the potential consequences and moral opportunity costs vs trying to advance a contrary worldview.
Re Littlewood's Law, thanks for sharing the interesting article! I definitely take your point and think we should discount for those perceptual biases in our assessments. I would still probably claim, though, that even despite this we are in a particular time of crisis. I know there's debate about the hingey-ness of the present time (to borrow MacAskill's term), so perhaps we just diverge around that assessment, which is certainly fair enough.
Hey, I just wanted to clarify that the tagline on the EA front page looks like the title is "EA risks perpetuating a harmful world" which many people have a negative split-second reaction to.
This is basically impossible to foresee, and there is a lot of text here, so I just thought you should know that your effort might be failing to pay off due to bad luck, not because of anything related to the content.
So EA Forum votes might just be a bad indicator of whether these concepts are worth revisiting, or even just reposting this with a slightly restructured title. I skimmed over it (I'm sleep deprived right now); it looks like you might be reinventing the wheel on some things (relative to work that many people are doing in DC), but the concepts look sound and the research looks pretty viable (e.g. EA catastrophe mentalities having negative interrelations with crisis spirals in the present time).
Thanks for the heads up! I've shortened the title which should hopefully help
Aight some comments:
As others mentioned, crisis periods are pretty frequent: post 9-11 era+GFC, post-Soviet conflicts, the entire Cold War, WW2, the interwar years, WW1+Spanish Flu+rise of Communism, the Victorian colonial era etc.
As a climate activist, I disagree on climate action being hindered by EA. Combatting climate change has received hundreds of billions in funding, decades of advocacy and near-unanimous intergovernmental cooperation. It's neglected on a macro scale, but on the radar of a lot of influential, competent and conscientious people. Plus, anecdotally, I find that many EAs are either heavily reducing their own carbon footprint, working on farmed animal welfare which does contribute to decarbonisation or themselves working directly in climate work and engaging with the EA community in other meaningful capacities.
My general sense is that your criticisms are valid, but sort of assuming "if EA had several orders of magnitude more influence, what problems might it cause when it influences mainstream discourse". It's theoretically sound but the uncertainty of projections so far out is hard to prove or disprove.
Thanks for the thoughtful analysis, and good to hear your perspective around EA and climate. I think my claim is that, aside from the debate around whether EA supports inaction on climate full stop, the action it seems most predisposed towards (e.g. focus on emerging technologies etc.) carries its own risks.
In response to your latter point, I think it's fair around the uncertainty of projections. As per my first reply above, I would more claim that contribution to a potentially harmful worldview is itself a cause for concern, even if current levels of influence are relatively low. I hope this is a helpful clarification!
I will admit that this isn't as much of a concern for me, because of my admittedly moral anti-realist viewpoint, where the questions aren't "Is there one true morality?" or "What moralities are harmful?" (except from particular perspectives).
The better questions are, "Why does moral intuition emerge in our brains?" and "How do you ensure your values get encoded into the future?"
Yes, this is a fair point; I think that P6 is probably quite easily rejected from a moral anti-realist stance. I do however think that the rest of the argument probably still runs, given the claim is about potential X-Risk, which can probably be agreed on as bad irrespective of one's metaethics.