
One could frame EA as the project of human alignment - of deconfusing ourselves about what we care about and figuring out how to actualize it. And there seems to be an interconnected bundle of problems at the core of this project:

  • What should we fundamentally value?
  • How can we distinguish rational intuitions from biases?
  • How can we make an AGI care about these questions?
  • Which ideas should we spread to help humanity reason about these questions with more nuance?
  • How certain can we be about all of this?

These questions are particularly important, as we seem to live at the hinge of history and at a time of growing consciousness research[1]. They're interconnected but draw on a range of disciplines perhaps too wide for any single person to fully grasp - which suggests stronger cooperation could add great value.

So if you like impossible problems and want to work on them together, you're warmly welcome at the Mind & Values Research Group. The group sits at the intersection of EA, cognitive science, the nature of consciousness and intelligence, moral philosophy, and the formulation & propagation of ethical and rational principles.

How can these areas help nudge humanity in a positive direction?

1. The philosophy of mind angle

Deconfusing humanity about what we mean by values and intelligence could:

  • Help solve the technical side of AI alignment[2].
  • Advance the broad-longtermist mission introduced in What We Owe the Future - getting a clearer picture of which values we should lock in during these important times. Important topics here could include the nature of valence, the net positiveness of experiences, and animal and digital sentience.
  • Help global prioritization by investigating the assumptions behind interventions recommended, for instance, by the Happier Lives Institute.

2. The social change angle

  • Cognitive enhancement research - how to support rationality & moral circle expansion in society by formulating the most elegant case for rational ethics.
  • Studying people's biases about values could help here[3]. This area can also be advanced with insights from experimental philosophy, or even the history & sociology of ideas.

What could it look like?

  • Meetups: The group will vote on topics to discuss. Over the following month or so, people will be welcome to collect materials from different angles. We'll then discuss them in a virtual meetup and gather what was mentioned in a document people can return to.
  • Newsletter: If the group gets bigger and harder to follow, I'll create a newsletter to announce votes and meetups and to send out the notes. In the meantime, I recommend turning on notifications for new posts.
  • Networking: People looking for ideas or for collaborators within an area are welcome to post even just a short introduction.
  1. ^

    This points against neglectedness, but also to brain research opening new possibilities and to existing (cognitive) resources to draw on. https://www.mdpi.com/2076-3425/10/1/41 https://www.proquest.com/docview/2703039855/fulltextPDF/3152D39660CF4000PQ/1?accountid=16531

  2. ^

    See Sotala, or Superintelligence, p. 406 ("Should whole brain emulation research be promoted?"), which suggests that figuring out what human coherent extrapolated volition looks like could be of particular importance.

  3. ^

    An example here could be the research discussed in the 80,000 Hours podcast episode with Sharon H. Rawlette and her book The Feeling of Value.

Comments

Sounds an awful lot like LessWrong, but competition can be healthy[1] ;) 

  1. ^

    I think this is less likely to be true of things like "places of discussion" because they split the conversation / erode common knowledge, but I think it's fine/maybe good to experiment here.
