Would you be open to taking funding earmarked for one of your specific proposed projects? Or, more generally, for a specific department? In that case, how might more general funding shift around, i.e. how much funging would there be?
I'm personally most interested in humane pesticide research, although I'm also interested in shrimp and fish welfare. I'd also be interested in recommending humane pesticide research to others I've been discussing donation plans with.
Your brain has a bunch of overlapping subsystems that are each conscious according to many plausible criteria for consciousness. You could say they're all minds. I'm not sure I'd say they're different minds, though, because if two overlap enough, they should be treated as the same one.
See also the problem of the many on SEP:
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and circumstances permitting rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans.
(Not speaking for my co-authors or RP.)
I think your brain-Fred is conscious, but overlaps so much with your whole brain that counting them both as separate moral patients would mean double counting.
We illustrated with systems that don't overlap much or at all, but there are of course also intermediate levels of overlap. See my comment here for some ideas on how to handle overlap:
It seems to me that your account of desire as requiring affect misses a lot of what we would recognize as our own desires (or preferences) and which p-Vulcans (and Phenumb (Carruthers, 1999)) are capable of: beliefs that something would be good or bad, or better or worse, or worthy of pursuit or avoidance. This can include judgements about what's best for you, life satisfaction judgements, our goals, and (reasoned and) emotionally detached moral judgements. I discuss this more here.
And another kind of desire is based primarily on motivational salience, the involuntary pull of attention to that which we desire (or are averse to) or things associated with it. This is dissociable from positive and negative affect. I discuss this more here.
My piece here, from which I linked the sections above, may be of more general interest, too.
I agree that we should treat credit assignment differently, but when deciding what to fund, we should always be able to reduce the problem to "What maximizes value?" or "What's the best that can be done with my $X?". I think the relevant question here is something like "If I donate $X to SWP, what happens?" And then you divide the expected difference between the counterfactuals by $X to get your cost-effectiveness.
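As a toy illustration of that division (all the numbers below are hypothetical, not estimates for SWP or anyone else):

```python
# Hypothetical cost-effectiveness sketch: compare the expected value of the
# world where you donate $X against the counterfactual world where you don't,
# then divide the difference by $X. All figures are made up for illustration.

donation = 10_000.0  # $X donated (hypothetical amount)

# Expected outcomes in each counterfactual world, in whatever
# welfare units you're using (assumed values).
expected_value_with_donation = 130.0
expected_value_without_donation = 100.0

# Cost-effectiveness = (expected difference between counterfactuals) / $X
cost_effectiveness = (
    expected_value_with_donation - expected_value_without_donation
) / donation

print(cost_effectiveness)  # welfare units per dollar
```

The point is just that whatever complications credit assignment introduces, they show up inside the two expected-value terms; the final comparison is still a single difference divided by the dollars you gave.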
I have three possibilities in mind. Possibility 1 is the scenario I expected to actually be the case here. It is mutually exclusive with 2 and 3, but 2 and 3 can happen together or either can happen alone.
Possibility 1 could still involve indirect effects to worry about, e.g. you might increase the probability that funders cover overhead like this in the future, for SWP or others, which would actually mean we get something like possibility 3.