
It's reasonable to study computer science for AI safety, but what about neuroscience? Neuroscience and AI are both parts of cognitive science, so they might complement each other. There is still a lot of moral uncertainty. To work on AI alignment, we need to give AI the correct values, and neuroscience seems like an underrated subject: it may change the way we experience happiness and resolve some of that moral uncertainty. Neuroscience "might" create a new era, letting us control our minds, change our personalities, and eliminate suffering entirely. So, do we need neuroscience experts in the AI field?

2 Answers

I think it's a bad idea for most people to do Neuroscience PhDs. PhDs in general are not optimised for truth seeking, working on high impact projects, or maximising your personal wellbeing. In fact, rates of anxiety and depression are higher amongst graduate students than among people with college degrees of similar age. You also get paid extremely badly, which is a problem for people with families or other financial commitments. For any specific question you want to ask, it seems worth investigating whether you can do the same work in industry or at a non-profit, where you may be able to study the same questions in a more focused way outside of academia.

So I don't think doing a Neuro PhD is the most effective route to working on AI Safety. That said, there seem to be some useful research directions if you want to pursue a Neuro PhD program anyway. Some examples include: interpretability work that can be translated from natural to artificial neural networks; specifically studying neural learning algorithms; or doing completely computational research, aka a backdoor CS PhD where you fit your models to neural data collected by other people. (CS PhD programs are insanely competitive right now, and Neuroscience professors are desperate for lab members who know how to code, so this is one way into a computational academic program at a top university if you're ok working on neuroscience-relevant research questions.)

Vael Gates (who did a Computational/Cognitive Neuroscience PhD with Tom Griffiths, one of the leaders of this field) has some further thoughts that they've written up in this EA Forum post. I completely agree with their assessment of neuroscience research from the perspective of AI Safety research here:

Final note: cellular/molecular neuroscience, circuit-level neuroscience, cognitive neuroscience, and computational neuroscience are some of the divisions within neuroscience, and the skills in each of these subfields have different levels of applicability to AI. My main point is I don't think any of these without an AI / computational background will help you contribute much to AI safety, though I expect that most computational neuroscientists and a good subset of cognitive neuroscientists will indeed have AI-relevant computational backgrounds. One can ask me what fields I think would be readily deployed towards AI safety without any AI background, and my answer is: math, physics (because of its closeness to math), maybe philosophy and theoretical economics (game theory, principal-agent, etc.)? I expect everyone else without exposure to AI will have to reskill if they're interested in AI safety, with that being easier if one has a technical background. People just sometimes seem to expect pure neuroscience (absent computational subfields) and social science backgrounds to be unusually useful without further AI grounding, and I'm worried that this is trying to be inclusive when it's not actually the case that these backgrounds alone are useful.

Going slightly off on a tangent: your original question specifically mentions moral uncertainty. I share Geoffrey Miller's view, expressed in his comment on this thread, that Psychology is a more useful discipline than Neuroscience for studying moral uncertainty.

Relatedly, I think psychologists have done very interesting/useful research on human values (see this paper on how normal people think about population ethics, also eloquently written up as a shorter/more readable EA Forum post here). In this vein, I've also been very impressed by work produced by psychologists working with empirical philosophers, for example this paper on the Psychology of Existential Risk.

If you want to focus on moral uncertainty, you can collect way more information from a much more diverse set of individuals if you focus on behaviour instead of neural activity. As Geoffrey mentions, it is *much* easier/cheaper to study people's opinions or behaviour than it is to study their neural activity. For example, it costs ~$5 to pay somebody to take a quick survey on moral decisions, vs. about $500 an hour to run an fMRI scanner for one subject to collect a super messy dataset that's incredibly difficult to interpret. (At those rates, the budget for ten one-hour scans would buy you a thousand survey responses.) People do take research more seriously if you slap a photo of a brain on it, but that doesn't mean the brain data adds anything more than aesthetic value.

It might make sense for you to check out what EA Psychologists are actually doing, to see if their research seems more up your alley than the neuroscience questions you're interested in. A good place to start is here: https://www.eapsychology.org/

Abby -- excellent advice. This is consistent with what I've seen in neuroscience, psychology, and PhD programs in general.

Abby Babby
Thanks! I agree with and appreciate your thoughts on how Psych can actually be relevant to human value alignment as well, especially compared to Neuro!

Jack -- good question. 

IMHO as a psych professor (somewhat biased in favor of psychology!), the most relevant behavioral sciences fields for working on 'AI alignment with human values' would be the key branches of psychology that have actually studied human values, such as moral psychology, political psychology, psychology of religion, social psychology, and evolutionary psychology. 

Then there are the behavioral sciences fields that study the diversity of human values across individuals (e.g. personality psychology, clinical psychology, intelligence research, behavior genetics), across cultures (e.g. cross-cultural psychology, anthropology, political science, sociology), and across history (e.g. intellectual, political, religious, social, sexual, and family history).

Also, I think fields such as behavioral game theory, evolutionary game theory, microeconomics, and decision theory are very useful for AI alignment work.

There's a bit of neuroscience that studies human values, preferences, and decision-making (e.g. affective neuroscience, cognitive neuroscience) that might be relevant to AI research. But, in my opinion, neuroscience hasn't discovered much that's relevant to AI research that wasn't already discovered by behavioral sciences. Neuroscience has mostly identified where certain kinds of processing happens in the brain, without really adding much to our understanding of what kinds of processing are happening, and why, and what the implications are for AI alignment. (Epistemic status of this claim: somewhat weak; I studied neuroscience fairly deeply decades ago in grad school, and have kept up to some degree with recent brain imaging work, but I'm not a neuroscience expert.)

The big advantage of neuroscience is that it has high status, cachet, and fundability, and sounds like 'hard science'. So, people who don't understand the behavioral sciences, and who might consider a field like moral psychology to be 'soft science' or 'pseudo-science', might take a neuroscience degree more seriously.  Honestly though, if you study neuroscience at PhD level, I would bet that the stuff that proves most useful to AI alignment will be the psychology theories, methods, and findings rather than the neuroscience research.

So, you'd have to decide whether the status benefits of a neuroscience PhD (vs. a PhD in moral psychology, or in behavioral game theory, for example) outweigh the time costs of having to learn an awful lot about the details of brain anatomy and physiology, brain imaging methods, and voxel-based imaging analysis -- most of which simply won't be very relevant to AI alignment.

This seems mostly right to me! 

Comments (6)

Hi, I'm an AGI safety researcher who studies and talks about neuroscience a whole lot. I don't have a neuroscience degree—I'm self-taught in neuroscience, and my actual background is physics. So I can't really speak to what happens in neuroscience PhD programs. Nevertheless, my vague impression is that the kinds of things that people learn and do and talk about in neuroscience PhD programs have very little overlap with the kinds of things that would be relevant to AI safety. Not zero, but probably very little. But I dunno, I guess it depends on what classes you take and what research group you join. ¯\_(ツ)_/¯

Computer science is probably the most relevant degree for AI safety, but there are already lots of computer scientists working on it, and as far as I know very few neuroscientists. So it's possible that adding one additional neuroscientist could be more valuable than adding one additional CSE person, especially if we factor in that the OP may be better at, or more motivated to do, neuroscience than CSE.

I could see paths of AI development where neuroscience becomes much more important than it is presently: for example, if we go the "brain emulation" route. 

I think my advice for the OP would be that if they like neuroscience more than CSE, or are better at it, they should go for it.

I think your reply is pretty heavily based on deciding between a neuroscience PhD and a CS PhD, but my guess is that it's >80% likely the best move is not to get a PhD at all.

True! As someone with a PhD, I would probably advise against doing a PhD unless you want to go into academia, you really enjoy research for its own sake, or you have an insatiable desire to put "doctor" in front of your name. I don't know anyone who has completed a PhD without having at least one mental breakdown.

I don't regret my PhD, but it's not something to jump into lightly. 

One thing to bear in mind is that PhDs can often be completed a bit faster in the UK (sometimes as little as 3 years) than in the US (typically 5 years). The US PhDs often include a couple of years of coursework, whereas UK programs often involve just jumping straight into research, on the assumption that you learned about the field as an undergrad.

I did mine in Australia, which follows the UK model. Finishing in 3 years is possible but very rare; most people took 4 years. The US model seems entirely too long, but you do end up with more paper publications in the end. (Note that paper publications are important for academia and pretty much nowhere else.)
