Geoffrey Miller

Psychology Professor @ University of New Mexico
8572 karma · Working (15+ years) · Albuquerque, NM, USA
www.primalpoly.com/

Bio

Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.

How others can help me

Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience; (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning; (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.

How I can help others

I have 30+ years of experience in behavioral sciences research and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.

Comments

sammyboiz - I strongly agree. Thanks for writing this.

There seems to be no realistic prospect of solving AGI alignment or superalignment before the AI companies develop AGI or ASI. And they don't care. There are no realistic circumstances under which OpenAI, or DeepMind, or Meta, would say 'Oh no, capabilities research is far outpacing alignment; we need to hire 10x more alignment researchers, put all the capabilities researchers on paid leave, and pause AGI research until we fix this'. It will not happen.

Alternative strategies include formal governance work. But they also include grassroots activism, and informal moral stigmatization of AI research. I think of PauseAI as doing more of the last two, rather than just focusing on 'governance' per se. 

As I've often argued, if EAs seriously think that AGI is an extinction risk, and that the AI companies seeking AGI cannot be trusted to slow down or pause until they solve the alignment and control problems, then our only realistic option is to use social, cultural, moral, financial, and government pressure to stop them. Now. 

Will - could you expand a bit more on what you're looking for? I found this question a little too abstract to answer, and others might share that confusion.

Rob - excellent post. Wholeheartedly agree. 

This is the time for EAs to radically rethink our whole AI safety strategy. Working on 'technical AI alignment' is not going to succeed in the time we probably have, given the speed of AI capabilities development.

Richard - this is an important point, nicely articulated. 

My impression is that a lot of anti-EA critics actually see scope-sensitivity as actively evil, rather than just a neutral corollary of impartial beneficence or goal-directed altruism. One could psychoanalyze why they think this -- I suspect it's usually more of an emotional defense than a thoughtful application of deontology. But I think EAs need to contend with the fact that to many non-EAs, scope-sensitive reasoning about moral issues comes across as somewhat sociopathic. Which is bizarre and tragic, but often seems to be true.

I think, at this point, EAs (including 80k Hours) publicly boycotting OpenAI, and refusing to work there, and explaining why, clearly and forcefully, would do a lot more good than trying to work there and nudge them from the inside towards not imposing X risks on humanity. 

Linch - I agree with your first and last paragraphs. 

I have my own doubts about our political institutions, political leaders, and regulators. They have many and obvious flaws. But they're one of the few tools we have to hold corporate power accountable to the general public. We might as well use them, as best we can.

Neel - am I incorrect that Anthropic and DeepMind are still pursuing AGI, despite AI safety and alignment research lagging far behind AI capabilities research? If they are still pursuing AGI, rather than pausing AGI research, they are no more ethical than OpenAI, in my opinion.

The OpenAI debacles and scandals help illuminate some of the commercial incentives, personal egos, and systemic hubris that sacrifice safety for speed in the AI industry. But there's no reason to think those issues are unique to OpenAI.

If Anthropic came out tomorrow and said, 'OK, everyone, this AGI stuff is way too dangerous to pursue at the moment; we're shutting down capabilities research for a decade until AI safety can start to catch up', then they would have my respect. 

Manuel - thanks for your thoughts on this. It is important to be politically and socially savvy about this issue.

But, sometimes, a full-on war mode is appropriate, and trying to play nice with an industry just won't buy us anything. Trying to convince OpenAI to pause AGI development until they solve AGI alignment, and sort out other key safety issues, seems about as likely to work as nicely asking Cargill Meat Solutions (which produces 22% of chicken meat in the US) to slow down their chicken production, until they find more humane ways to raise and slaughter chickens. 

I don't really care much if the AI industry severs ties with EAs and Rationalists. Instead, I care whether we can raise awareness of the AI safety issues with the general public, and politicians, quickly and effectively enough to morally stigmatize the AI industry. 

Sometimes, when it comes to moral issues, the battle lines have already been drawn, and we have to choose sides. So far, I think EAs have been far too gullible and naive about AI safety and the AI industry, and have chosen too often to take the side of the AI industry, rather than the side of humanity.
