Introduction/Summary

2015 was the year AI alignment and AI safety took off as causes taken seriously by the academic and industry worlds of AI research, and by the media. This has largely been attributed to influential leaders in science and technology like Bill Gates, Stephen Hawking and Elon Musk, all of whom cited Nick Bostrom’s book Superintelligence as having brought the serious issue of AI alignment to their attention. Published in 2014, Superintelligence deserves the credit it gets for having put AI alignment on the map like never before, along with significant help from the Future of Life Institute. However, the AI alignment, rationality and effective altruism (EA) communities at large aren’t credited enough for what they did to advance serious attention to the cause. My impression, having talked to many people, is that the popular history is that AI alignment lucked into hitting a tipping point. Little has been written about the history of AI alignment as a cause developed by the rationality and effective altruism communities and their associated organizations. This matters because if we don’t keep track of how much of AI alignment’s rise to prominence was due to deliberate effort from within the community, we won’t know how the lessons of what worked can be applied to other fields. This observation has been made by effective altruists wanting to develop other fields, such as welfare biology.

Having been involved in the rationality and EA communities for years before the publication of Superintelligence, I saw the build-up of AI alignment as a cause first-hand. Between this experience and a few key sources which highlight important points in the history of AI alignment as a cause, I’ve noticed its development can be broken down into multiple stages. In this post I aim to explain how various strategies for growth and development over the course of AI alignment’s history can be generalized to other causes.

A Brief Look Back At AI Alignment

As mentioned above, the Future of Life Institute (FLI) played a crucial role in working with Nick Bostrom and capitalizing on the publication of his book Superintelligence. Much of FLI’s work when it was first founded focused on organizing a conference on AI alignment a month after the book’s publication, to which journalists, academic and industry leaders, and public figures such as Elon Musk were invited. The behind-the-scenes work FLI did in 2015 and earlier to relate and communicate AI alignment to the public while trying to ensure fidelity of messaging is illustrated in this 2015 overview on their website.

Max Tegmark, co-founder of FLI, explains the role the Center for Applied Rationality (CFAR) played in FLI’s creation:

CFAR was instrumental in the birth of the Future of Life Institute: 4 of our 5 co-founders are CFAR alumni, and seeing so many talented idealistic people motivated to make the world more rational gave me confidence that we could succeed with our audacious goals.

CFAR in turn owes its existence to Less Wrong and the rationality community. Less Wrong started with the Sequences by Eliezer Yudkowsky, which also contain his foundational contributions to the theory underpinning AI alignment. Anna Salamon, co-founder and president of CFAR, describes in her post On the Importance of Less Wrong the crucial role Less Wrong has played in the existential risk (x-risk) and AI alignment communities:

One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another.  By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied-to.

One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read.  Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

Looking at all these historical efforts to build a community around existential risk reduction and AI alignment, I’ve noticed multiple stages in how these fields developed.

Stages of Developing a Movement or Field

In hindsight, we can view these historical efforts to build up a community around the ideas of existential risk reduction and AI alignment as a single, long-term project, one that passed through multiple distinct stages of community development. These stages can be generalized to other social and intellectual movements as well, such as animal advocacy and effective altruism.

  1. Knowledge Production. The development of a knowledge base which can be used to generate common knowledge across a large number of people who have skills or interests relevant to a field but have previously gained little exposure to it. Examples of this for AI alignment include Bostrom’s book Superintelligence; “Artificial Intelligence as a Positive and Negative Factor in Global Risk” by Eliezer Yudkowsky and, more broadly, the Less Wrong Sequences; and in general the research produced by organizations such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI). For a cause, this body of knowledge serves as a research canon new supporters can read to get up to speed and get involved. By becoming common knowledge, this research canon provides a productive foundation for discussing the problems the field currently faces.

  2. Community-Building. Publish this knowledge base to gain public input and to inform and educate others about the problem(s) you’re trying to solve. From there, build a community with common knowledge and concern, transforming the effort into an intellectual community in which progress towards solutions can begin. Historically, Less Wrong has served this purpose for AI alignment. For more on this, I recommend reading Anna Salamon’s post On the importance of Less Wrong, or another single conversation locus in full. Other examples of this can be seen in effective altruism. Effective animal advocacy has developed relatively quickly for a young social movement because effective altruism was preceded by the modern animal rights movement, whose public exposure, growth and ideas owe much to the publication of Animal Liberation by Peter Singer in 1975. For the effective altruism movement itself, Doing Good Better by William MacAskill; The Most Good You Can Do by Peter Singer; and the Effective Altruism Handbook are recent examples of creating common knowledge from a set of ideas while simultaneously building a movement around them. The growth and development of an intellectual community or field can, it seems, be organized and accelerated given control over an online platform through which to track and steer that growth.

    The example of Less Wrong stands out to me: it bootstrapped a set of important ideas from the support of a relatively small group of people to a worldwide network in only a few years. Pairing local organization with online coordination has worked well for effective altruism and existential risk reduction, and it’s my experience that various causes and communities adjacent to effective altruism have benefited from doing the same. Using social media like Facebook has served this function well for more purely social movements. A highly intellectual movement like EA or x-risk reduction seems to have strongly benefited from control over an online platform with features social media lacks, features which promote higher-quality discourse and epistemics. This is similar to the role peer-reviewed journals play in science.

  3. Project & Resource Mobilization. After a significant period of time, seeding decentralized organization can create multiple nodes in a network which autonomously specialize and advance the growth and development of a field. This specialization of labour is present in AI alignment: organizations like MIRI do technical research in the Bay Area where they can build bridges with AI researchers while, because of their connection to universities at Cambridge, FHI has a greater focus on AI policy. Likewise for EA movement, the Centre for Effective Altruism (CEA) has set up offices at the heart of the EA movement in the San Francisco Bay Area for movement growth purposes, while maintaining an office for research at the University of Oxford as well. The absolute level of global growth of a field will generate an effective network which can pool resources and begin long-term strategizing and the pursuit of larger collective goals.

    What makes fields of interest to effective altruism different from other intellectual movements is EA’s internal community and economy spanning the globe. Research organizations have benefited from the common pool of resources that is effective altruism as a social movement: receiving millions of dollars from thousands of individuals; having a constant source of potential candidates for an organization; and enjoying a vocal and eager supporter base. Unlike other fields of research, e.g., in the social sciences, EA organizations rely primarily on charitable donations from private individuals, whereas similar research is typically sponsored by a large corporation, a university or a government department. Large foundations like the Open Philanthropy Project have a major influence over the EA community as a whole, but otherwise EA organizations draw on grassroots support for non-profit efforts that approximate something like academic research in the private sphere. This means EA organizations which do work similar to think tanks mobilize resources more like charitable or social movement organizations.

  4. Global Coordination. As the projects and organizations in a worldwide community mature, they can form institutions which can influence public opinion and public policy, professionalize a research field, and more. Having a large and diverse support base to rely on allows individuals and teams within a cause to specialize and take risks with the projects they pursue, as they can be more confident their projects will receive support and be part of a broader overarching movement. For AI alignment, this has meant organizations such as FHI, FLI, MIRI and CFAR collaborating on projects which advanced their common cause, together with universities and communities around the world. An example of what the AI alignment community, with its allies, was able to achieve at this stage of development was impressing the global significance of the cause upon the world, putting it on the map as described above.

    This stage of community development, with social coordination and intellectual problem-solving happening worldwide, is the current stage for the most developed causes in effective altruism. In addition to AI alignment, effective altruism has overlapped with the pre-existing animal welfare/liberation movement to form a movement focused on effective animal advocacy. Over the last decade, advocacy organizations like Animal Equality and The Humane League, and research organizations like The Good Food Institute and New Harvest, have built upon existing movements to coordinate approaches worldwide, from grassroots activism to policy reform to biotech development, all to mitigate harm done to animals by consumer industries. Effective giving and evidence-based charity, the EA ideas most closely associated in the public eye with global poverty reduction, are with the growth of EA gradually spreading in non-profit circles and among charities outside the movement. These examples of success in the early stages of a community’s or field’s effort to build an influential movement bring us to the present day in effective altruism.

Applying This Approach
Each of the three major focus areas of effective altruism:

  • Global poverty alleviation; primary means: public health & economic development
  • Existential risk reduction; primary means: AI alignment & global coordination
  • Effective animal advocacy/welfare; primary means: multi-pronged approach to mitigating animal suffering due to factory farming

benefited from a pre-existing community focused on similar goals. Movements like transhumanism, the rationality community, veganism, animal welfare, and animal rights/liberation focused on these causes for years before EA existed. Global poverty reduction has been a goal of charity worldwide for thousands of years, and its transformation into modern philanthropy has benefited from decades of research in fields like economics and epidemiology. Since its inception several years ago, EA has internally fostered the growth and development of some causes which to this day receive little public interest outside the movement. Notable examples include the focus areas of wild animal suffering/welfare and life extension. Foci like existential risks beyond AI alignment, emerging technology R&D, public policy reform, and mass mental health interventions have begun receiving more attention from the EA movement in the last couple of years.

As effective altruism has been constructed from the confluence of so many prior social movements, intellectual fields and online communities, as a community we have the collective experience to deliberately repeat this process. EA, as a sizable coordinated network, has the capacity to devote more resources to growing and developing smaller causes more rapidly than was possible before. With the benefit of hindsight, we also have more knowledge and ability to steer the intellectual development and project coordination of all kinds of causes. Looking back at the history of AI alignment and existential risk reduction, it has taken over a decade for them to reach their current stage. Other EA causes benefited from decades of public interest to get to their current stages of development. Ideally, smaller causes will be able to employ the above strategies to develop even more rapidly than the biggest focus areas in EA have to date.

What's Next?
Having recognized these common and effective strategies for community-building, I've begun collaborating with the intellectual communities around various causes on the first stage of field development, Knowledge Production. I've been doing this for several focus areas, each in its own respective Facebook group.

Over the last couple of months, I've begun compiling research materials on these subjects to form bodies of research I intend to post on the Effective Altruism Forum when complete. Depending on the focus area, they may be cross-posted to other websites as well. You're invited to join any of these groups to follow or contribute to these projects. Also feel free to suggest any other cause which would benefit from such a project. Once I've compiled a comprehensive spread of quality research for these fields and posted it to the EA Forum, I hope the information will be used to further advance and develop causes within effective altruism.

Comments

Thanks for writing this. I don't think I've seen anyone tell the story quite so well, and I was there for all of it!

Yeah, it was my pleasure :) The story was surprisingly easy to tell with a few key quotes from AI alignment/rationality community figures historically crucial in building up the field to how big and prominent it is now.

This is really great info-- I'll probably blog about this on the QRI website at some point. (Thanks!)

Another interesting resource along these lines is Luke M's set of case studies about early field growth: I particularly enjoyed his notes on how bioethicists shaped medicine. https://www.openphilanthropy.org/research/history-of-philanthropy/some-case-studies-early-field-growth
