Rafael Ruiz

PhD in Philosophy @ London School of Economics
345 karma · Pursuing a doctoral degree (e.g. PhD) · Working (0-5 years) · London, UK
www.rafaelruizdelira.com/

Bio


PhD Student in Philosophy at the London School of Economics, researching Moral Progress, Moral Circle Expansion, and the causes that drive them. Previously, I did an MA in Philosophy at King's College London and an MA in Political Philosophy at Pompeu Fabra University (Spain). More information about my research is available at my personal website: https://www.rafaelruizdelira.com/

When I have the time, I also run https://futurosophia.com/, a website and nonprofit aimed at promoting the ideas of Effective Altruism in Spanish.

You might also know me from EA Twitter. :)

Comments

"Is it possibly good for humans to go extinct before ASI is created, because otherwise humans would cause astronomical amounts of suffering? Or might it be good for ASI to exterminate humans because ASI is better at avoiding astronomical waste?"

These questions really depend on whether you think humans can "turn things around" and create net positive welfare for other sentient beings, rather than net negative. Currently, we create massive amounts of suffering through factory farming and environmental destruction. Depending on how you weigh those things, you might conclude that humans are currently net negative for the world. So a lot turns on whether you think the future of humanity will be deeply egoistic and harmful, or whether we can improve substantially. There are some key considerations you might want to look into in the post The Future Might Not Be So Great by Jacy Reese Anthis: https://forum.effectivealtruism.org/posts/WebLP36BYDbMAKoa5/the-future-might-not-be-so-great

"Why is it reasonable to assume that humans must treat potentially lower sentient AIs or lower sentient organic lifeforms more kindly than sentient ASIs that have exterminated humans?"

I'm not sure I fully understand this paragraph, but let me reply to the best of my ability based on what I gathered.

I haven't really touched on ASIs in my post at all. And, of course, no ASIs have killed any humans yet, since we don't have ASIs. They might also help us flourish, if we manage to align them.

I'm not saying we must treat less-sentient AIs more kindly. If anything, it's the opposite! The more sentient a being is, the more moral worth it has, since it will have stronger experiences of pleasure and pain. I think we should promote the welfare of beings in proportion to their capacity for welfare. But it might turn out, as an empirical matter, that we should prioritize the welfare of simpler beings over more complex ones, because they are easier and cheaper to copy, reproduce, and help. There might also be more sentience, and thus more moral worth, per unit of energy spent on them.
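To make that last point concrete, here is a toy sketch (all numbers are invented purely for illustration; nothing here is an empirical estimate):

```python
# Hypothetical comparison of welfare capacity per unit of energy.
# All figures are made up for the sake of the example.

beings = {
    # name: (welfare capacity per individual, energy cost per individual)
    "complex being": (100.0, 1000.0),
    "simple being": (1.0, 2.0),
}

for name, (welfare, energy) in beings.items():
    print(f"{name}: {welfare / energy:.2f} welfare units per energy unit")

# Each complex being has far more welfare capacity (100 vs. 1),
# but the simple beings yield more welfare per unit of energy
# (0.50 vs. 0.10), which is the empirical possibility at issue.
```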

"Yes, such ASIs extinguish humans by definition, but humans have clearly extinguished a very large number of other beings, including some human subspecies as well."

We have already driven many other species to extinction through environmental destruction and climate change. I think this is morally bad and wrong, since it ranges from possible (e.g. for invertebrates) to probable (e.g. for vertebrates) that these animals were sentient.

I tend to think in terms of individuals rather than species. By which I mean: imagine a moral dilemma where you had to choose between fully exterminating a species by killing its last 100 members, or killing 100,000 individuals of a very similar species without making it extinct. I tend to think of harm in terms of the individuals killed or their thwarted potential. In such a scenario, we might well prefer the option in which a species goes extinct, since what we care about is promoting overall welfare. (Though second-order effects on biodiversity make these things very hard to predict.)
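A minimal sketch of that comparison, assuming for simplicity that every individual's death counts equally (a purely hypothetical harm value, and deliberately ignoring the second-order effects just mentioned):

```python
# Toy model of the dilemma: compare the two options purely by
# individual-level harm.

option_a_deaths = 100      # kill the last 100 members -> species goes extinct
option_b_deaths = 100_000  # kill 100,000 members of a similar, non-endangered species

harm_per_death = 1.0  # hypothetical: equal welfare loss per individual killed

harm_a = option_a_deaths * harm_per_death
harm_b = option_b_deaths * harm_per_death

# On a purely individual-focused view, option A causes 1,000x less harm,
# even though it is the option that results in an extinction.
print(f"Option A harm: {harm_a:,.0f}  Option B harm: {harm_b:,.0f}")
```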

I hope that clarifies some things a little. Sorry if I misunderstood your points in that last paragraph.

Re: Advocacy, I do recommend policy and advocacy too! I guess I haven't seen too many good sources on the topic just yet. Though I just remembered two: Animal Ethics https://www.animal-ethics.org/strategic-considerations-for-effective-wild-animal-suffering-work/ and some blog posts by Sentience Institute https://www.sentienceinstitute.org/research

I will add them at the end of the post.

I guess I slightly worry that these topics might still seem too fringe, too niche, or too weird outside of circles that have some affinity with EA or with weird ideas in moral philosophy. But I believe the Overton window will shift inside some circles (some animal welfare organizations, AI researchers, some AI policymakers), so we might want to target them rather than spreading these somewhat weird and fringe ideas to all of society. Then they can push for policy.

Re: Geoffrey Hinton, I think he might subscribe to a view broadly held by Daniel Dennett (although I'm not sure Dennett would agree with that interpretation of his ideas). In the simplest terms, it might boil down to a version of functionalism: since the inputs and outputs are similar to a human's, the "black box" in the middle is assumed to be conscious as well.

I think that sort of view assumes substrate-independence of mental states. It leads to slightly weird conclusions such as the China Brain (https://en.wikipedia.org/wiki/China_brain), where people arranged to perform the same functions as neurons in a brain would make the nation of China a conscious entity.

Besides that, we might also want to distinguish consciousness from sentience. We might get cases of phenomenal consciousness (basically, an AI with subjective experiences, and also thoughts and beliefs, possibly even desires) but no valenced states of pleasure and pain. While these come together in biological beings, they might come apart in AIs.

Re: Lack of funding for digital sentience, I was also a bit saddened by that news. Though Caleb Parikh did seem excited about funding digital sentience research. https://forum.effectivealtruism.org/posts/LrxLa9jfaNcEzqex3/calebp-s-shortform?commentId=JwMiAgJxWrKjX52Qt

Thanks a lot for the links, I will give them a read and get back to you!

Regarding the "Lower than 1%? A lot more uncertainty due to important unsolved questions in philosophy of mind." part, it was a mistake because I was thinking of current AI systems. I will delete the percentage credence, since I have so much uncertainty that any theory or argument I find compelling (for the substrate-dependence or substrate-independence of sentience) would change my credence substantially.

I really loved the event! Organizing it right after EA Global was probably a good idea for getting attendees from outside the UK.

At the same time, being right after EA Global without a break prevented me from attending the retreat part. Six days in a row of intense networking was a bit too much, both physically and mentally, so I only ended up attending the first day.

But thanks a lot for organizing, I got a lot of value from it in terms of new cutting-edge research ideas.

Even my grocery shopping list? 😳 That's a bit embarrassing but I hope fellow EAs can help me optimize it for impact

Climate change is going pretty well, I've heard carbon emissions are up!

Also, humans are carbon-based creatures so having more carbon around seems plausibly good 😊

Are we using the old 12-sign astrological chart, or the updated one with Ophiuchus as the 13th astrological sign?

Fair! I agree with that, at least up to this point in time.

But I think there could come a time when we have picked most of the "social low-hanging fruit" (cases like the abolition of slavery, universal suffrage, universal education), so there isn't much easy social progress left to make. At that point, comparatively, investing in the "moral philosophy low-hanging fruit" will look more worthwhile.

Some important philosophical moral problems that might have great axiological importance, at least under consequentialism/utilitarianism, could be population ethics (totalism vs. averagism), our duties towards wild animals, and the moral status of digital beings.
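As a quick illustration of why the totalism vs. averagism question matters (a toy sketch; the welfare numbers are invented):

```python
# Toy illustration of how totalism and averagism can disagree
# about which of two populations is better.

pop_a = [10.0] * 100     # 100 people with very high welfare
pop_b = [1.0] * 10_000   # 10,000 people with barely positive welfare

def total(pop):
    return sum(pop)

def average(pop):
    return sum(pop) / len(pop)

print(f"Totalism:  A = {total(pop_a):.0f}, B = {total(pop_b):.0f}")    # B wins (1000 vs 10000)
print(f"Averagism: A = {average(pop_a):.1f}, B = {average(pop_b):.1f}")  # A wins (10.0 vs 1.0)

# Totalism prefers the huge, barely-happy population (the structure
# behind the Repugnant Conclusion); averagism prefers the small,
# very happy one.
```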

I think figuring them out could have great importance. Of course, if we always keep them as just interesting philosophical thought experiments and don't do anything about promoting any outcomes, they might not matter that much. But I'm guessing people in the year 2100 might want to start implementing some of those ideas.

Same! Seems like a fascinating, although complicated topic. You might enjoy Oded Galor's "The Journey of Humanity", if you haven't read it. :)

Sure! So I think most of our conceptual philosophical moral progress until now has been quite poor. Viewed through the lens of the moral consistency reasoning I outlined in point (3), cosmopolitanism, feminism, human rights, animal rights, and even longtermism all seem like slight variations on the same argument ("There are no morally relevant differences between Amy and Bob, so we should treat them equally").

In contrast, the fact that we are starting to develop cases like population ethics, infinite ethics, and complicated variations of thought experiments (there are infinite variations of the trolley problem we could conjure up) that really test the limits of our moral sense and moral intuitions hints that we might need a more systematic, perhaps computerized approach to moral philosophy. I think the likely path is that most conceptual moral progress in the future (in the sense of figuring out new theories and thought experiments) will happen with the assistance of AI systems.

I can't point to anything very concrete, since I can't predict the future of moral philosophy in any concrete way, but I think philosophical ethics might become very conceptually advanced and depart heavily from common-sense morality. This gap has been widening since the Enlightenment: challenges to common-sense morality have been slowly accumulating. We might be at the very beginning of that exponential takeoff.

Of course, we will consider many of the moral systems that AIs develop to be ridiculous. And some might be! But in other cases, we might be too backward, or too tied to our biologically and culturally shaped moral intuitions and taboos, to realize that something is in fact an advance. For example, the Repugnant Conclusion in population ethics might be true (or the optimal decision in some sense, if you're a moral anti-realist), even if it goes against many of our moral intuitions.

The real effort will lie in separating the wheat from the chaff. And I'm not sure whether it will be AIs or actual moral philosophers doing this work of discriminating good ethical systems and concepts from bad ones.
