
I'm prepping a new upper-level undergraduate/graduate seminar on 'AI and Psychology', which I'm aiming to start teaching in Jan 2025. I'd appreciate any suggestions that people might have for readings and videos that address the overlap of current AI research (both capabilities and safety) and psychology (e.g. cognitive science, moral psychology, public opinion). The course will have a heavy emphasis on the psychology, politics, and policy issues around AI safety, and will focus more on AGI and ASI than on narrow AI systems. Content that focuses on the challenges of aligning AI systems with diverse human values, goals, ideologies, and cultures would be especially valuable. Ideal readings/videos would be short, clear, relatively non-technical, recent, and aligned with an EA perspective. Thanks in advance! 


3 Answers

Perplexity was recommended to me for finding course materials.

It can search academic databases, as well as perform broad searches on the web or YouTube.

Provide it with context, just as you would with ChatGPT: for your purpose, mention that you are building a course on artificial intelligence and psychology and give details about it.

Thanks! Appreciate the suggestion.

This course sounds cool! Unfortunately there doesn't seem to be too much relevant material out there. 

This is a stretch, but I think there's probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117 
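To make the idea concrete for students, here is a minimal sketch of the kind of exercise that could be built around such a dataset. The file name and column names (country, scenario_variant, chose_to_intervene) are hypothetical placeholders, not the actual schema from the PNAS paper.

```python
# Minimal sketch: look for candidate "universal" moral preferences in a
# trolley-problem-style dataset by measuring cross-country agreement.
# File and column names are illustrative placeholders only.
import pandas as pd

responses = pd.read_csv("trolley_responses.csv")

# Proportion of respondents endorsing intervention, per scenario variant
# and country. A large cross-country spread suggests culturally variable
# values; a small spread suggests candidate universal values.
rates = (
    responses
    .groupby(["scenario_variant", "country"])["chose_to_intervene"]
    .mean()
    .unstack("country")
)

spread = rates.max(axis=1) - rates.min(axis=1)
print(spread.sort_values())  # variants with the smallest cross-country spread first
```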

For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai

Abby - good suggestions, thank you. I think I will assign some Robert Miles videos! And I'll think about the human value datasets.

A few quick ideas:
1. On the methods side, I find the potential use of LLMs/AI as research participants in psychology studies interesting (not necessarily related to safety). This may sound ridiculous at first, but the studies are worth a look; a rough sketch of the basic setup follows this list.
From my post on studying AI-nuclear integration with methods from psychology: 

[Using] LLMs as participants in a survey experiment, something that is seeing growing interest in the social sciences (see Manning, Zhu, & Horton, 2024; Argyle et al., 2023; Dillion et al., 2023; Grossmann et al., 2023).

2. You may be interested in, or get good ideas from, the Large Language Model Psychology research agenda (safety-focused). I haven't gone through it myself, so this is not an endorsement.

3. Then there are comparative analyses of human and LLM behavior. For example, the Human vs. Machine paper (Lamparth, 2024) compares human and LLM decision-making in a wargame. I do something similar with a nuclear decision-making simulation, but it's not in paper/preprint form yet.
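For item 1, here is a minimal sketch of what an "LLM as survey participant" setup can look like, using the OpenAI Python client as an example. The persona text, survey item, and model name are illustrative choices, not taken from any of the cited studies.

```python
# Minimal sketch: an LLM answers a survey item under an assigned persona.
# Persona, item wording, and model name are illustrative only; the cited
# studies each use their own (more careful) designs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = "You are a 45-year-old teacher from a small town who follows politics casually."
item = (
    "On a scale from 1 (strongly disagree) to 7 (strongly agree), "
    "how much do you agree that governments should regulate advanced AI systems? "
    "Answer with a single number."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": item},
    ],
    temperature=1.0,  # sampling noise stands in for between-participant variation
)

print(response.choices[0].message.content)
```

In the actual studies, many such "participants" are sampled with varied personas and the resulting response distributions are compared against human survey data.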

Helpful suggestions, thank you! Will check them out.

Comments

This sounds very interesting and closely aligns with my personal long-term career goals. Will the seminar content be made available online for those looking to complete the course remotely, or is this purely in-person?
