The AngelList link is broken by that trailing '.'; without it, the link works: https://angel.co/l/2vTgdS
Ok, so you want to know if whales experience more suffering than ants? And you're proposing a way to do this that involves putting them into an fMRI scanner? That seems like not the best way of asking or answering the question of how much being X suffers and how that compares to being Y.
I did not propose putting whales into fMRI scanners. I would not have proposed trying to weigh distant stars with a scale either, yet somehow we've learned how to say some things about their mass and contents.
What are the consequences of the answers you get? If newborns show less neural asynchrony, does that mean it's morally acceptable to torture them? Or does that mean they are more at peace, so it's less morally acceptable to torture them?
This is difficult to read as in good faith.
4. Why can't you just ask people if they're suffering? What's the value of quantifying the degree of their suffering using harmonic coherence?
Why can't you just observe that objects fall towards the ground? What's the value of quantifying the degree of their falling using laws of motion?
How much do newborns suffer? Whales? Ants?
I'll take a shot at these questions too, perhaps usefully so, given that I'm only partially familiar with QRI.
1. Which question is QRI trying to answer?
Is there a universal pattern to conscious experience? Can we specify a function from the structure and state of a mind to the quality of experience it is having?
2. Why does that all matter?
If we discover a function from mind to valence, and develop the right tools of measurement and intervention (big IFs, for sure), we can steer all minds towards positive experience.
Until recently we only had intuitive physics, useful for survival, but not enough for GPS. In the same way, we can make some predictions today about what will make humans happy or sad, but we don't understand depression very well; we can guess about how other animals feel, but it gets murkier as you consider more and more distant species; and we're in the dark on whether artificial minds experience anything at all. A theory of valence would let us navigate phenomenological space with new precision, across a broad domain of minds.
This is a post from an organization trying to raise hundreds of thousands of dollars.
...
If the Qualia Research Institute were a truth-seeking institution, they would have either run the simple experiment I proposed themselves, or had any of the neuroscientists they claim to be collaborating with run it for them.
This reads to me as insinuating fraud, without much supporting evidence.
This is a bad post and it should be called out as such. I would have been more gentle if this was a single misguided researcher and not the head of an organization that publishes a lot of other nonsense too.
I appreciate that in other comments you followed up with more concrete criticisms, but this still feels against the "Keep EA Weird" spirit to me. If we never spend a million or two on something that turns out to be nonsense, we aren't applying hits-based giving very well.
(Despite the username, I have no affiliation with QRI. I'll admit to finding the problem worth working on.)
It's not catchy, but conceptually I like Hans Rosling's classification into Levels 1, 2, 3, & 4, with breakpoints around $2, $8, and $32 per day. It's also useful to be able to say "Country X is largely at Level 2, but a significant population is still at Level 1 and would benefit from Intervention Y." (A rough sketch of the breakpoints as code is below.)
A short review of Factfulness: https://www.gatesnotes.com/books/factfulness
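For concreteness, here's a minimal sketch of that classification, treating the approximate $2 / $8 / $32 breakpoints as exact cutoffs (my own illustration, not code from Rosling or the review above):

```python
def rosling_level(income_per_day_usd: float) -> int:
    """Classify a daily income (USD per person) into Rosling's Levels 1-4."""
    if income_per_day_usd < 2:
        return 1
    if income_per_day_usd < 8:
        return 2
    if income_per_day_usd < 32:
        return 3
    return 4

# Example: someone living on about $5/day is at Level 2.
assert rosling_level(5) == 2
assert rosling_level(40) == 4
```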
I am also highly uncertain of EAs' ability to intervene in cultural change, but I do want us to take a hard look at it and discuss it. It may be a cause that is tractable early on, but hopeless if ignored.
You may not think Hsu's case "actually matters", but how many turns of the wheel is it before it is someone else?
Peter Singer has taken enough controversial stances to be "cancelled" from any direction. I want the next Singer(s) to still feel free to try to figure out what really matters, and what we should do.
You might browse Intro to Brain-Like-AGI Safety, or check back in a few weeks once it's all published. Towards the end of the sequence, Steve intends to include "a list of open questions and advice for getting involved in the field."
DeepMind takes a fair amount of inspiration from neuroscience.
Diving into their related papers might be worthwhile, though the emphasis is often on capabilities rather than safety.
Your personal fit is a huge consideration when evaluating the two paths (80,000 Hours might be able to help you think through this). But if you're on the fence, I'd lean towards the more technical degree.