Lucius Caviola

Senior Research Fellow (Psychology) @ Global Priorities Institute, University of Oxford
1406 karma · Working (6-15 years) · Oxford, UK
luciuscaviola.com

Bio

I’m a moral psychologist. My work centers on understanding people’s values and decisions — and how these often fall short of what is best for society. I research effective giving, moral circle expansion, and global catastrophic risk. Recently, I’ve become interested in how society will react to the advent of advanced technologies such as artificial intelligence.

For the latest updates and insights into my research, follow me on Google Scholar and subscribe to my Substack blog.

Comments (22)

Thanks for this. I agree with you that AIs might simply pretend to have certain preferences without actually having them. That would avoid certain risky scenarios. But I also find it plausible that consumers would want to have AIs with truly human-like preferences (not just pretense) and that this would make it more likely that such AIs (with true human-like desires) would be created. Overall, I am very uncertain.

Thanks, I also found this interesting. I wonder if this provides some reason for prioritizing AI safety/alignment over AI welfare.

It's not yet published, but I saw a recent version of it. If you're interested, you could contact him (https://www.philosophy.ox.ac.uk/people/adam-bales).

Thanks, Siebe. I agree that things get tricky if AI minds get copied and merged, etc. How do you think this would impact my argument about the relationship between AI safety and AI welfare?

I wonder what you think about this argument by Schwitzgebel: https://schwitzsplinters.blogspot.com/2021/12/against-value-alignment-of-future.html

Thanks, Adrià. Is your argument similar to (or a more generic version of) what I say in the 'Optimizing for AI safety might harm AI welfare' section above? 

I'd love to read your paper. I will reach out.

The Global Risk Behavioral Lab is looking for a full-time Junior Research Scientist (Research Assistant) and a Research Fellow for one year (with the possibility of renewal).

The researchers will work primarily with Prof Joshua Lewis (NYU), Dr Lucius Caviola (University of Oxford), researchers at Polaris Ventures, and the Effective Altruism Psychology Research Group. Our research examines psychological questions relevant to global catastrophic risk and effective altruism. A research agenda is here.

Location: New York University or Remote

Apply now

Research topics include:

  • Judgments and decisions about global catastrophic risk from artificial intelligence, pandemics, etc. 
  • The psychology of dangerous actors that could cause large-scale harm, such as malevolent individuals or fanatical and extremist ideological groups
  • Biases that prevent choosing the most effective options for improving societal well-being, including obstacles to an expanded moral circle

Suggested skills: Applicants for the Junior Research Scientist position ideally have some experience in psychological/behavioral/social science research. Applicants for the Research Fellow position can also come from other fields relevant to studying large-scale harm from dangerous actors.

Thanks Ben!

13.6% (3 people) of the 22 students who clicked on a link to sign up to a newsletter about EA already knew what EA was.

And 6.9% of the 115 students who clicked on at least one link (e.g. EA website, link to subscribe to newsletter, 80k website) already knew what EA was.

Another potentially useful measure (to get at people’s motivation to act) could be this one:

“Some people in the Effective Altruism community have changed their career paths in order to have a career that will do the most good possible in line with the principles of Effective Altruism. Could you imagine doing the same now or in the future? Yes / No”

Of the total sample, 42.9% said yes to it. And of those people, only 10.4% already knew what EA was.

And if we look only at those who are very EA-sympathetic (scoring high on EA agreement, effectiveness-focus, expansive altruism, and interest in learning more about EA), the number is 21.8%. In other words: of the most EA-sympathetic students who said they could imagine changing their career to do the most good, 21.8% (12 people) already knew what EA was.

(66.3% of the very EA-sympathetic students said they could imagine changing their career path to do the most good.)

A caveat is that some of these percentages are inferred from relatively small sample sizes — so they could be off.

We asked them about a few 'schools of thought': effective altruism, utilitarianism, existential risk mitigation, longtermism, evidence-based medicine, and poststructuralism (see footnote 4 for results). But it's a very good idea to ask about a fake one too!

(Note that we also asked participants who said they have heard of EA to explain what it is. And we then manually coded whether their definition was sufficiently accurate. That's how we derived the 7.4% estimate.)

We considered this too. But the significant correlations with education level and income held even after controlling for age. (We mention this below one of the tables.)
