
Outline: This post non-systematically explores some writings on the cause of artificial sentience, along with a few casual observations. I start by listing pieces arguing in favor of its prioritization, before mentioning four arguments against a focus on artificial sentience. Then I present a list of (mostly EA) organizations and researchers working on the issue. Next, I share a selection of high-level strategic considerations that stood out from my readings, organized around five topics of importance for artificial sentience: uncertainty and risks, broad research focus, broad target objectives, timing, and inspiration from other fields and movements. I conclude with brief suggestions I came across on ways to contribute to the cause area.

Disclaimer: I’ve only spent a few days reading for and writing this article, although I was already familiar with the topic. I haven’t read several of the pieces I refer to in their entirety, and in some instances I’ve only read a small part of them. I first did this compilation to get a slightly clearer picture of the space and gain more familiarity with recent writings, and then decided to turn it into a Forum post. I’d be glad to have any remaining gaps in my knowledge pointed out.

Introduction

While I’ve seen a lot of concerned discussion over the last few months about the cause of artificial sentience[1], the relevant recent writings appeared somewhat dispersed. So I thought it might be useful to compile a selection of noteworthy pieces, actors, and considerations. I haven’t proceeded in a particularly systematic manner, and the comments I share are best seen as tentative observations. I make extensive use of footnotes to provide more details and relevant quotes for interested readers. Also, this article doesn’t center on the study of potential sentience in artificial systems in itself, but rather on the implications of taking the issue seriously.

Pieces arguing artificial sentience is a pressing problem

Shorter pieces include:

Others on the Forum have echoed the assessment that this is an important problem that may currently be underprioritized.

It may be worth noting that one doesn’t need to care only about suffering reduction to think it’s worthwhile to dedicate resources to particularly concerning issues related to artificial sentience, such as the risk of morally catastrophic suffering in sentient systems. One may have reasons to prioritize this as long as the suffering of sentient beings is among the things one cares about.[2] It may also be judicious for people working on the issue to focus on strategies that are beneficial under several value systems, while remaining aware of potential frictions arising from differences in altruistic values.

Some arguments against focusing on this problem

This selection of four specific arguments (or reasons) I came across is surely less informative than a more thorough discussion that integrates the different considerations. Numerous questions are listed here, and their answers might affect how promising the cause of artificial sentience is.

  • An argument based on the risks of moral circle expansion in general, presented for instance in the ‘Risks of moral circle expansion’ section of Chapter 8 in Avoiding the Worst: moral advocacy (in this case, promoting the consideration of artificial sentience) could either backfire, since it may “result in the creation of more (potentially suffering) sentient beings”, or cause “a backlash that could further entrench bad values or antagonistic dynamics.”[3]
     
  • Artificial sentience advocacy could even make some of the most worrying worst-case scenarios even worse, for instance when considering “near-miss” scenarios.
     
  • JP Addison (very tentatively) makes the observation that if we manage to align AIs with human values, then our future selves can figure out when/whether AIs are likely to be sentient (and what to do regarding their welfare). A similar point is made by Thomas Larsen.
     
  • A reason (more than a direct argument in itself) given by Buck Shlegeris, speaking from the perspective of many alignment researchers: although the problem is important and neglected, insofar as perhaps “a lot of the skills involved in doing a good job of this research are the same as the skills involved in doing good alignment research”, they may not have good reasons to move from AI alignment to the cause of artificial sentience.[4]

Who is working on the problem

I used these sources among others to build this list, which is most likely not comprehensive, and I focused on orgs and people related to EA (with whom I have no affiliation).

The Mind, Ethics, and Policy Program at NYU

Launched in Fall 2022, the NYU MEP Program “conducts and supports foundational research on the nature and intrinsic value of nonhuman minds, including biological and artificial minds.”

They write: “Our aim is to advance understanding of the consciousness, sentience, sapience, moral status, legal status, and political status of nonhumans—biological as well as artificial—in a rigorous, systematic, and integrative manner. We intend to pursue this goal via research, teaching, outreach, and field building in science, philosophy, and policy.” They are currently in the process of developing a research agenda.

Since their launch, they’ve held talks and, recently, a workshop on Animal and AI Consciousness (so the speakers mentioned in the link also work on the topic).

Jeff Sebo, Director of the program, shared his (current and tentative) views on principles for AI welfare research in a Forum post, where he also mentioned the articles he’s working on[5]: a working paper that makes the case for moral consideration for AI systems by 2030 co-authored with Robert Long, a piece on moral individuation co-authored with Luke Roelofs, and an article on intersubstrate welfare comparisons co-authored with Bob Fischer. And he has recently published an article exploring potential implications of utilitarianism when applied to small animals and AI systems.

Sentience Institute

While their focus since their founding has been research relevant to moral circle expansion, in recent years they have shifted their main research focus from farmed animals to digital minds and artificial sentience.

The interdisciplinary research they conduct is intended to broadly inform strategies aimed at changing social norms, law and policies. In addition to outlining high-level research priorities (e.g. here or here) and performing literature reviews (e.g. here, here, or this review, which was followed up by this report), they conduct both conceptual explorations (e.g. here or here) and more applied studies (like surveys or psychological experiments). You can find all their publications on their Reports page, blog and Podcast[6].

Center for Reducing Suffering

They don’t focus specifically on artificial sentience, but several of their writings highlight important considerations relevant to the issue. For instance, Avoiding the Worst offers a number of strategic insights directly useful for thinking about how to reduce risks of future suffering involving artificial sentience, as does Reasoned Politics in relation to moral and political advocacy (especially sections 10.4, 10.7.3, 10.8 and 11.10).

The section Moral advocacy of their Open Research Questions page lists potentially relevant questions for the cause area of artificial sentience.

MILA-FHI-University of Montréal’s Digital Minds Project

The page of Jonathan Simon, a project member, says: “This project is a collaboration between 1) researchers at MILA, the Quebec Artificial Intelligence Institute, headed by Yoshua Bengio, 2) researchers at FHI, the Future of Humanity Institute at the University of Oxford[7] and 3) Jonathan Simon, a philosopher of mind at the University of Montréal.” He then outlines their respective research directions.

Notably, at FHI, Nick Bostrom and Carl Shulman have written Sharing the World with Digital Minds (summarized here and discussed here) and Propositions Concerning Digital Minds and Society (summarized here and discussed here).

Some Philosophy Fellows at the Center for AI Safety

Robert Long: works on issues related to AI sentience. See his EA Forum articles and website, as well as his recent appearance on the 80,000 Hours podcast.

Cameron Kirk-Giannini: his page says “his work during the fellowship focuses on AI wellbeing, cooperation and conflict between future AI systems, and the safety and interpretability of agential system architectures.” With Dan Hendrycks, he is editing the upcoming Special Issue of Philosophical Studies on AI Safety, which lists as a topic of interest for submissions: “Investigations of potential conflicts between AI safety and control measures and the possible ethical standing of AI systems.”

Simon Goldstein: together with Cameron Kirk-Giannini, he has written a paper (summarized here) on AI Wellbeing, as well as an op-ed and a blog post on Daily Nous.

Other organizations and researchers

It could be useful to make an estimate of the money being spent and the number of FTE working on the issue.[8]
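As a rough illustration of what such an estimate could look like, here is a minimal back-of-envelope sketch in Python. All organization names, FTE counts, and cost figures below are hypothetical placeholders for illustration only, not actual data about the organizations mentioned in this post, and this is not necessarily the method referenced in the footnote.

```python
# Minimal back-of-envelope sketch for sizing the field.
# All names and figures below are hypothetical placeholders, not real data.

# Guessed full-time equivalents (FTEs) working on artificial sentience per org
fte_estimates = {
    "Org A": 3.0,
    "Org B": 2.0,
    "Org C": 1.5,
}

# Assumed fully loaded annual cost per FTE, in USD (illustrative only)
cost_per_fte = 80_000

total_fte = sum(fte_estimates.values())
implied_annual_spending = total_fte * cost_per_fte

print(f"Estimated total FTEs: {total_fte:.1f}")
print(f"Implied annual spending: ${implied_annual_spending:,.0f}")
```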

Some high-level strategic considerations

The aim of this section is to relatively briefly reference and organize several important but fairly general considerations and strategic insights that stood out from my readings. Note that there is a degree of subjectivity in the choice of topics I focus on below, and that they overlap to some extent.

For more systematic explorations of questions and considerations relevant to artificial sentience, I suggest checking these three previously mentioned pieces (including some of their links). This section can be seen as mostly a synthesized supplement to them, since they go into more detail on the questions and ideas they discuss.

Uncertainty and risks

It’s important to take into account the scope of our uncertainty about how best to tackle the issue, which manifests at various levels.

As the cause of artificial sentience sits at the crossroads of several categories of priorities, such as longtermism, s-risks and moral circle expansion, considering the issue through these different perspectives may help uncover the various aspects of the problem about which we’re uncertain.[9]

Along with uncertainty, it’s useful to be aware of the risks associated with promoting concern for artificial sentience, as mentioned for instance in the ‘Risks of moral circle expansion’ section of Chapter 8 in Avoiding the Worst. Risks associated specifically with research on and promotion of concern for artificial sentience are also mentioned in this episode[10] of the FLI Podcast with Tobias Baumann.[11]

That said, great uncertainty about how the future will unfold can be distinguished from strategic uncertainty, and strategies viewed as robustly beneficial have been proposed, namely research and capacity/field-building.[12] 

Broad research focus

Another important consideration relates to the broad focus of the research aimed at helping the cause of artificial sentience.

The posts linked at the beginning of this section, as well as the ‘Research’ section of The Importance of Artificial Sentience, mention a number of research directions. To complement them, a useful way of thinking about artificial sentience research might be to ask whether it focuses more on gaining a better understanding of artificial sentience in itself (e.g. of the “nature of sentience”) or on improving our understanding of how best to promote better values and political systems. Magnus Vinding argues that, at the margin, the latter may be more promising.[13]

Broad target objectives

What should we realistically aim to achieve when we seek to improve the welfare of (future) artificial entities?

What seems most often suggested is a focus on institutions and legal protection.

Some reasons given in favor of such a focus are:

  • that the arguments and evidence for an institutional focus from effective animal advocacy also seem to apply in this context (section ‘What can we do about it’ in The Importance of Artificial Sentience),
  • that “rights have proven an exceptionally powerful social technology in service of reducing suffering” and “our institutions will probably be a key determinant of future outcomes” (sections 8.11.3 and 10.10.3 respectively in Reasoned Politics).[14]

Regarding the different possible levels of ambition given the resources available, Jamie Harris writes: “AS advocacy could involve requests similar to other social movements focused on moral circle expansion, such as demanding legal safeguards to protect against the exploitation of artificial sentience for labor, entertainment, or scientific research. Less ambitious goals could include encouraging attitude change, asking for symbolic commitments, or supporting relevant research.”

Timing

The timing of efforts dedicated to the cause of artificial sentience also appears to be an important issue; getting it right seems challenging, given the many dimensions to consider.

Besides general questions about the optimal timing of resource use, two major specific considerations are, first, AI timelines, and second, when it would be strategically wise to start devoting more resources directly to promoting the consideration of artificial sentience (compared to, e.g., strategy research).[15]

As an illustration, it seems relevant to take notice of the recent open letter on AI and consciousness research signed by many researchers, including Anil Seth[16] and Yoshua Bengio.[17] I also wonder if the Overton window regarding artificial sentience will shift surprisingly fast or in surprising ways.[18]

Inspiration from other fields and movements

It may be useful to take inspiration from other social movements and research fields at several levels.

First, at the level of building a research field, it could be helpful to look at analogous research fields that successfully built research capacity along with academic and institutional credibility. Jamie Harris mentions the fields of welfare biology[19] and global priorities research.

Second, at the level of guiding the research itself, lessons learned from other research efforts, such as animal welfare research, can help inspire research questions and principles.

Third, at the level of advocacy strategy and effectiveness of tactics, while the animal advocacy movement arguably stands as the most salient source of inspiration, key lessons can be taken from the history of other social movements.[20]

Relatedly, in section 10.8, ‘One Movement for All Sentient Beings?’ of Reasoned Politics, Magnus Vinding argues that it may be strategically better to conceive of the movements aimed at protecting non-human beings from suffering (including the movements against wild-animal suffering and animal farming) as part of a more unified and cohesive movement pushing for the betterment of sentient beings, rather than viewing them as independent.

Conclusion: ways for individuals to contribute

The following draws on the section ‘What can we do about it’ from Jamie Harris’ article.

  • Some of the organizations mentioned above seem open to donations aimed at funding additional work on artificial sentience.
  • Sentience Institute: last giving season, their goal was to raise $90,000 to continue working on digital minds research, and they said they had room for up to $300,000 to fund additional researchers (they also mentioned that in each hiring round they receive multiple very strong applicants whom they’d like to hire with such increased funding).
  • Center for Reducing Suffering: last giving season, they had a fundraising target of $100,000 to hire 1-2 additional staff in research and/or communications and free up time for writing and research (though not specifically on artificial sentience work).
  • I haven’t seen any indication of this, so it seems unlikely, but it may be possible to earmark donations to FHI or CAIS for artificial sentience work.
  • Direct work could take the form of research (typically with one of the projects mentioned), a policy-related career (for instance in AI governance), and maybe some forms of field-building.[21] For research, fruitful approaches and insights may come from philosophy of mind/ethics, computer science/artificial intelligence, neuroscience, and the social sciences (psychology, economics, political science, sociology, history, and law may each provide valuable contributions).


Thanks to Corentin Biteau, Jim Buhler and Magnus Vinding for comments on the draft. Mistakes are my own.

  1. ^

     For a discussion on the different terms used to refer to artificial entities, see The Terminology of Artificial Sentience by Janet Pauketat.

    Also, in this overview article, what I mean when I write “the cause of artificial sentience” is something very broad, maybe like “any kind of work centered on taking seriously the implications of the possibility of artificial sentience”. But how one defines or conceives of the issue may be another important question.

  2. ^

     See Chapter 4 of Avoiding the Worst for a more detailed discussion.

  3. ^

     These risks are mentioned directly in relation to artificial sentience on p. 83.

  4. ^

     Note that Jacy Reese Anthis has briefly written on what makes digital minds research different from other AI existential safety research.

    A reply to Buck’s comment, though, would be that his point applies to some kinds of research relevant to artificial sentience, but not to others. (H/T Jim Buhler)

  5. ^

     He will also publish a book titled The Moral Circle.

  6. ^

     The podcast’s list of guests can also be used to find additional people doing relevant work.

  7. ^

     Likely the members of FHI’s Digital Minds Research Group.

  8. ^

     The method described in the footnotes here may be a good source of inspiration.

  9. ^

     For instance, Jamie Harris writes: “This blog post lists possible crucial considerations affecting whether AS advocacy seems likely to be positive on balance, as well as lesser questions affecting how we might prioritize AS relative to other promising cause areas. We include questions that affect broader categories of priorities and intervention types that include at least portions of AS advocacy: longtermism, suffering risks, and moral circle expansion.”

    And also: “The especially complex, technical, and futuristic (and thus easily dismissed) nature of AS advocacy suggests further caution, as does the unusually high leverage of the current context, given that advocacy, policy, and academic interest seems poised to increase substantially in the future. [Footnote: This raises the stakes of experimentation and the size of potential negative consequences, which is concerning given the unilateralist’s curse.]

    Additionally, there are more uncertainties than in the case of animal advocacy. What “asks” should advocates actually make of the institutions that they target? What attitudes do people currently hold and what concerns do they have about the moral consideration of artificial sentience? What opportunities are there for making progress on this issue?”

    Similarly, the great uncertainty we face when trying to reduce s-risks, including from artificial sentience, is repeatedly mentioned in Avoiding the Worst.

  10. ^

     From 37:30 to 42:30, he talks about these risks and then about the implications of uncertainty more generally.

    Earlier in the podcast (from 13:48 to 15:34), he talks about the risks of losing credibility by "crying wolf" about artificial systems too often (i.e. warning about their sentience when they’re actually not).

    Robert Long also mentions this issue: “In many ways, we are in our understanding of large language models where the study of animals was in the middle of the 20th century. Like animal cognition, the field of AI is overshadowed by founding traumas — cases in which credulity and anthropomorphism have led researchers to exaggerate and misconstrue the capabilities of AI systems. Researchers are well aware of the ELIZA effect, our tendency to readily read human-like intentionality into even very simple AI systems — so named for an early chatbot built in 1964 that used simple heuristics to imitate a psychoanalyst.”

    Given the significant risks faced, it could be useful to investigate in more detail the extent to which different types of work on artificial sentience risk turning out detrimental, especially from an s-risks perspective, before prematurely pursuing interventions and research directions. (H/T Jim Buhler)

  11. ^

     Here are other mentions of risks from direct outreach and advocacy:

    -Reasoned Politics, section 10.7.3 : “[...] the ideal strategy might be to first engage with scientists and ethicists whose work relates to these risks - a group of experts who can contribute important insights, and who are likely to be taken seriously by legislators (cf. Harris, 2021). An encouraging sign is that many such experts already show considerable concern for the issue (Harris & Reese, 2021). Even so, one would probably need to go about these efforts in a thoughtful manner, as one could easily frame the issue in a way that makes it seem alarmist or uninformed. Outright advocacy is likely ill-advised, as opposed to making reasoned arguments about the ethics of risk and the moral significance of suffering, including uncertain suffering (cf. Birch, 2017; Knutsson & Munthe, 2017; Ziesche & Yampolskiy, 2019; Metzinger, 2021).”

    -Key Questions for Digital Minds: “What strategies are most promising for improving futures with digital minds?

    All of the foundational research outlined above needs to ultimately be cashed out in better strategies for building the best future for all sentient beings. One tentative strategic claim is that research should be prioritized before other projects such as public policy or outreach. First impressions may be very important for digital minds as with other technosocial issues (e.g., lock-in of GMO and nuclear energy narratives), and there has been so little research on this topic that the most promising outreach strategies could easily change after only a few research projects. Before promoting a narrative or policy goal, such as a moratorium on digital consciousness, we should consider its direct viability and indirect effects of its promotion.

    Delay should not be too long, however, because suboptimal narratives may take over in the meantime, especially with short timelines—making digital minds research a highly time-sensitive AI safety project. Discussion to date has arguably been largely confused and quite possibly detrimental. The most promising work informed by digital minds research may be preparation to push forcefully for certain technical and governance strategies during major advances in AI capabilities.”

  12. ^

     For research, see for instance section ‘Governance of AI’ of Chapter 10 and section ‘Research on how to best reduce s-risks’ of Chapter 11 in Avoiding the Worst, section 9.3.3 in Reasoned Politics, or section ‘Research’ in The Importance of Artificial Sentience. This article, which has a broader scope but offers arguments relevant to the cause of artificial sentience, also makes the case for more research (on the current margin).

    For capacity and field-building, see for instance sections ‘Capacity building’ and ‘A movement to reduce s-risks’ of Chapter 11 in Avoiding the Worst, section ‘Recommendations for the movements for future generations of sentient beings’ in Key Lessons From Social Movement History, or section ‘Field-building’ in The Importance of Artificial Sentience. (Relatedly, here are some strategic suggestions for academic field-building.)

  13. ^

     Of course, that’s not at all to say that AI welfare research is not useful; as Jeff Sebo writes: “Research fields are path dependent, and which path they take can depend heavily on how researchers frame them during their formative stages of development. If researchers frame AI welfare research in the right kind of way from the start, then this field will be more likely to realize its potential.”

    Relatedly, Amanda Askell writes: “I don't think we yet live in a world where AI labs are running the moral equivalent of animal experiments on their models. But I would like to live in a world where, over time, we have more evidence grounding the probabilities we assign to where we are on the scale of "we're doing no harm" to "we're doing harm equivalent to swatting a fly" to "we're doing harm equivalent to a large but humane mouse experiment" to "we're doing harm equivalent to a single factory farm" to "our RL agents are sentient and we've been torturing them for thousands of years.".

    We are used to thinking about consciousness in animals, which evolve and change very slowly. Rapid progress in AI could mean that at some point in the future systems could go from being unconscious to being minimally conscious to being sentient far more rapidly than members of biological species can. This makes it important to try to develop methods for identifying whether AI systems are sentient, the nature of their experiences, and how to alter those experiences before consciousness and sentience arises in these systems rather than after the fact.”

  14. ^

     Also note the results from Protecting Sentient Artificial Intelligence: A Survey of Lay Intuitions on Standing, Personhood, and General Legal Protection and their discussion: “the fact that laypeople rate the desired level of legal protection to sentient AI as twice as high as the perceived current level, as well as the fact that the difference between the desired and perceived current level of protection was higher than virtually any other group [e.g. present humans, future humans, non-human animals, the environment] would imply (through this lens) that the existing legal institutions should be reformed so as to increase protection of sentient AI well beyond the current level afforded to them”, although “the majority of laypeople were not in favor of granting personhood or standing to sentient AI.”

    Other writings that mention a focus on institutions or legal protection include Artificial sentience - Problem profile, section ‘Institutional reform’ of Chapter 9 and section ‘Governance of AI’ of Chapter 10 in Avoiding the Worst, section 14.5.1 in Reasoned Politics, and bullet point ‘Explore legal rights for artificial entities.’ in The History of AI Rights Research.

  15. ^

     One relevant consideration is that a focus on “prevention strategies” may perhaps be judicious.

    On this, Magnus Vinding writes: “Metzinger recommends a policy of banning all research that “risks or directly aims at the creation of synthetic phenomenology”, as well as greater funding of research into the ethics and risks of creating novel systems capable of conscious experience (Metzinger, 2018, p. 3; 2021). A ban on attempts to create new kinds of suffering beings would also be well in line with the bounding approach to worst-case risks outlined in the previous chapter, as such a ban could help serve as a protective wall preventing us from getting into the “danger zone” where novel technologies can vastly increase suffering (cf. Baumann, 2019).”

    In the context of wild-animal welfare, Brian Tomasik writes about an analogous preventive focus.

    On a side note (this is a common question, but) I wonder what lessons can be drawn from movements or efforts that focused on preventing bad, though not necessarily existentially catastrophic, outcomes, such as early successful efforts in space governance, nuclear non-proliferation or climate change advocacy.

  16. ^

     Anil Seth recently wrote this article.

  17. ^

     On a related note, the idea of targeted (moral) advocacy to the potential creators of artificial sentience, such as employees at top AGI labs, and sometimes more generally to influential decision-makers around AI, has been discussed several times over the years.

    Tobias Baumann, for instance, mentions the idea of "try[ing] to expand the moral circle of the creators of AI" while pointing out the risks of backlash (section ‘Governance of AI’ of Chapter 10 in Avoiding the Worst). Also note that Jade Leung wrote, in her 2019 AI governance thesis, that she expected researchers' influence to shrink as the technology matures (see for instance section 7.4.3.3, ‘AI researchers will become less influential over time’).

    The appeal of the idea may to some extent depend on how narrowly it is presented or construed. On one hand, the narrower strategy of “influencing [AGI companies, by working for or lobbying them] with an eye toward 'shaping the AGI that will take over the world'” may sound simplistic; on the other hand, the broad idea of directing overall efforts and outreach at relevant stakeholders seems uncontroversial. Jamie Harris, for example, tentatively proposes that “initial advocacy should focus primarily on influencers rather than mass outreach.”

  18. ^

     For instance, regarding what can be seen as credible advocacy, it would have been hard to imagine, not long ago, that such a cover image could be used for a legitimate BBC article. On artificial sentience specifically, see for instance this recent TIME article by Brian Kateman.

  19. ^

     Section 10.7.1 of Reasoned Politics also covers the strategy of establishing welfare biology as a proper field of study for helping wild animals.

  20. ^

     In addition to effective animal advocacy research and movement history research, more general social movement research, e.g. from Social Change Lab, could be strategically relevant (although it doesn't seem entirely clear yet what part of this research is the most useful).

    In the context of movement history research, I’m curious which social movements, possibly from these lists, are most comparable to artificial sentience advocacy specifically (rather than to the broader category of movements encouraging moral circle expansion), based on these features and potentially others.

  21. ^

     Regarding the places to pursue direct work, it’s interesting to see that Zach Freitas-Groff’s proposal of “[one or multiple organizations] dedicated to identifying and pursuing opportunities to protect the interests of digital minds.” received an "Honorable Mention" award (akin to a rank between 8 and 21 out of roughly 1000 project ideas) at the Future Fund’s Project Ideas Competition last year.

    All else equal, it would be better if the founder(s) of potential future projects related to artificial sentience fit what is described here or here, while being particularly careful to avoid accidental harm and not underestimating the challenge of tackling these types of problems. Also, in Career advice for reducing suffering, Tobias Baumann writes: “Lastly, you could consider founding a new project if you have a good idea, though we would not recommend that at the beginning of your career.”

Comments

Thank you for writing about this. I am definitely a person whose concerns about AI are primarily about the massive suffering they might cause, especially when it comes to already-marginal entities or potential entities like non-human animals or digital minds.

I'll note beforehand that I'm suffering-focused, but I'll also note that I think even a regular utilitarian using EV reasoning could come to the same conclusions as I do.

I'm curious as to why this isn't a greater focus in the AI Safety community. At least from my vantage point and recollection, over 90% of the people who talk about AI Safety focus exclusively on the threat AI poses to the continued existence of humanity. If they elaborate at all on what's at stake in the far future, they emphasize the potential good that could come from having massive populations that are in immense states of bliss, which could be destroyed if we are destroyed (again this is my experience). 

I think this rests on the assumption that there is a high likelihood (let's say >90% confidence) that humanity will become a force of net good in the long term future should it survive to see that. I think that, at the very least, this crux should be tested more than it currently is. I would argue that humanity of the current day is almost certainly (>99% confidence) net harmful (even factory farming alone is an immense harm that it's hard to argue any good humans do outweighs). I would also argue with similar confidence that humanity's net impact was consistently negative at least from the agricultural revolution onward (mistreatment/exploitation of non-human animals, slavery, war to name a few major things). Suffice it to say that I would be very worried if an AGI was locked-in with the values of a randomly selected person today (I know some AGI timelines are quite short), or even a randomly selected person 100 years from now (assuming we survive that long), especially if they decide to keep us alive. I can't give an estimate for how confident I am that humanity's continued existence with AGI would be a good/bad thing. However, I agree that the suffering risk from AGI is not emphasized proportional to its potential expected consequence, and I'm curious to hear EA/AI Safety perspectives regarding this topic.

I'll also quickly throw in the idea of humans deliberately creating malicious AGI with the intention of serving their own ends, which is an idea I've heard around a few times but know practically nothing about. Though I will say that I think the potential for such a scenario to arise and then become an S-risk is non-negligible (though I can't really give a good estimate or back it with anything more than intuition).
