Hi, I'm Ben. I just graduated as a medical doctor from the University of Sydney. I'm currently working on forecasting bioterrorist groups, supported by Open Phil, and going through the Charity Entrepreneurship Incubation Program.
I completed an undergraduate double degree (BA, BSc), triple-majoring in philosophy, international relations, and neuroscience. I've spent my MD doing bits and bobs in global health and health security. I've also conducted research projects at the Future of Humanity Institute, the Stanford Existential Risk Initiative, the vaccine patch company Vaxxas, and the Lead Exposure Elimination Project.
Cool! One point from a quick skim - most animals wouldn't be lost in many kinds of human extinction events or existential risks. Only a subset of these would erase the entire biosphere - e.g. a resource-maximising rogue AI, vacuum decay, etc. Presumably, after human extinction, the animal density of reclaimed land would be higher than it is currently, so the number of animals would rise (assuming this outweighs the end of factory farming).
The implications of human existential risks for animals are interesting, and I can see points either way depending on the moral theory (e.g. the end of factory farming with human extinction, but a rise in wild animal suffering; the total number and quality of animal lives under a beyond-Earth humanity; the potential of a completely re-wilded Earth under a beyond-Earth humanity; risks of astronomical suffering if a beyond-Earth humanity retains the equivalent of factory farming...)
I love The Mower by Philip Larkin - it captures a deep instinct for kindness, especially towards animals.
I think another factor is that HLI's analysis is not just below GiveWell's level, but below a more basic standard. If HLI had met this basic standard, while still falling short of GiveWell, I think strong criticism would have been unreasonable, as they are still a young and small org with plenty of room to grow. But as it stands the deficiencies are substantial, and a major rethink doesn't appear to be forthcoming, despite being warranted.
I really enjoyed this 2022 paper by Rosa Cao ("Multiple realizability and the spirit of functionalism"). A common intuition is that the brain is basically a big network of neurons with input on one side and all-or-nothing output on the other, and that the rest of it (glia, metabolism, blood) mainly keeps that network running.
The paper is helpful for articulating why that model is impoverished, and it argues that the right level of explanation for brain activity (and the resulting psychological states) may depend on the messy, complex biological details, such that non-biological substrates for consciousness are implausible. (Some of those details: spatial and temporal determinants of activity, chemical transducers and signals beyond excitation/inhibition, self-modification, plasticity, glia, functional meshing with the physical body, multiplexed functions, generative entrenchment.)
The argument doesn't necessarily oppose functionalism, but I think it's a healthy challenge to my previous confidence in multiple realisability within plausible limits of size, speed, and substrate. It also usefully points out just how different artificial neural networks are from biological brains. This strengthens my sense of the alien-ness of AI models, and updates me towards greater scepticism about digital sentience.
I think the paper's a wonderful example of marrying deeply engaged philosophy with empirical reality.
As the origin of that comment, I should say that other reasons for non-convergence are stronger, but attrition contributed. E.g. biases for experts to over-rate and for supers to under-rate. I also wonder whether the structure of the engagement, with strong team identities, fomented tribal stubbornness on both sides...
Just want to say I appreciate your commentary over the past 9 months. Having someone with legal expertise and (what seems to me) a pretty even-handed and sensible perspective is a really valuable contribution.