titotal

Computational Physicist
7516 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments (609)

For as long as it's existed, the "AI safety" movement has been trying to convince people that superintelligent AGI is imminent and immensely powerful. You can't act all shocked-pikachu that some people would ignore the danger warnings and take that as a cue to build it before someone else does. This was all quite a predictable result of your actions.

I would like to humbly suggest that people not engage in active plots to destroy humanity based on their personal back-of-the-envelope moral calculations.

I think that the other 8 billion of us might want a say, and I'd guess we'd not be particularly happy if we got collectively eviscerated because some random person made a math error. 

On multiple occasions, I've found a "quantified" analysis to be indistinguishable from a "vibes-based" analysis: you've just assigned those vibes a number, often one basically pulled out of your behind.  (I haven't looked enough into shrimp to know if this is one of those cases). 

I think it is entirely sensible to strongly prefer cause estimates that are backed by extremely strong evidence such as meta-reviews of randomised trials, rather than cause estimates based on vibes that are essentially made up. Part of the problem I have with naive expected value reasoning is that it seemingly does not take this entirely reasonable preference into account.

I have a PhD in computational quantum chemistry (i.e., using conventional computers to simulate quantum systems). In my opinion, quantum technologies are unlikely to be a worthy cause area. I have not researched everything in depth, so I can only give my impressions here from conversations with colleagues in the area.

First, the idea of quantum computers having any effect on WMDs in the near future seems dodgy to me. Even if practical quantum computers are built, they are still likely to be incredibly expensive for a long time to come. People seem unsure about how useful quantum algorithms will actually be for materials-science simulations. We can build approximations to compounds that run fine on classical computers, and even if quantum computers open up more approximations, you're still going to have to check in with real experiments. You are also operating in an idealised realm: you can model the compounds, yes, but if you want to investigate, say, their effect on humans, you need to model the human body as well, which is an entirely different beast.

The next point is that even if this does work in the future, why not put in the money to investigate it then, rather than now, before it's been proven to work? We will have plenty of advance warning if quantum computers can actually be used for practical purposes, because they will start off really bad and improve over time.

From what I've heard, there's a lot of skepticism about near-term quantum computing anyway, with a common sentiment among my colleagues being that it's overhyped and due for a crash.

I'm also a little put off by the lumping in of quantum computing with quantum sensing and so on: only quantum computing would have an actually transformative effect if realised, with the others being just slightly better ways of doing things we can already do.

I'm highly skeptical about the risk of AI extinction, and highly skeptical that there will be a singularity in our near-term future.

However, I am concerned about near-term harms from AI systems such as misinformation, plagiarism, enshittification, job loss, and climate costs. 

How are you planning to appeal to people like me in your movement?

If we're listing factors in EA leading to mental health problems, I feel like it's worth pointing out that a portion of EA thinks there's a high chance of an imminent AI apocalypse that will kill everybody.

I myself don't believe this at all, but for the people who do believe it, there's no way it doesn't affect your mental health.

This seems to me like an attempt to run away from the premise of the thought experiment. I'm seeing lots of "maybes" and "mights" here, but we can just explain them away with more stipulations: you've only seen the outside of their ship, you're both wearing spacesuits that you can't see into, you've done studies and found that neuron count and moral reasoning skills are mostly uncorrelated and that spaceflight can be done with more or fewer neurons, etc.

None of these avert the main problem: the reasoning really is symmetrical, so both perspectives should be valid. The EV of saving the alien is 2N, where N is the human number of neurons, and the EV of saving the human, from the alien's perspective, is 2P, where P is the alien number of neurons. There is no way to declare one perspective the winner over the other without knowing both N and P. Remember that in the original two-envelopes problem, you knew both the units and the numerical value in your own envelope: this was not enough to avert the paradox.
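The original paradox is easy to check numerically. Here's a minimal simulation (the dollar amounts and distribution are my illustrative assumptions, not part of the comment): one envelope holds X, the other 2X, and the naive EV argument says the unopened envelope is worth 0.5*(2Y) + 0.5*(Y/2) = 1.25Y, where Y is your envelope's value, so you should always switch. Simulating both strategies shows switching gains nothing:

```python
import random

def simulate(trials=100_000):
    """Simulate the classic two-envelope setup: one envelope holds X,
    the other holds 2X. Compare always-keeping with always-switching."""
    keep_total = 0.0
    switch_total = 0.0
    for _ in range(trials):
        x = random.uniform(1, 100)   # smaller amount (illustrative choice)
        envelopes = [x, 2 * x]
        random.shuffle(envelopes)
        keep_total += envelopes[0]   # value if you keep your envelope
        switch_total += envelopes[1] # value if you switch to the other
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = simulate()
# The two long-run averages agree: switching is neutral by symmetry,
# despite the naive "1.25Y" argument for switching.
```

The simulation makes the same point as the symmetry argument: the "switch" and "keep" strategies are statistically identical, so any EV calculation that says otherwise has gone wrong somewhere.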

See, the thing that's confusing me here is that there are many solutions to the two-envelopes problem, but none of them say "switching actually is good". They are all about explaining why the EV reasoning is wrong and switching is actually bad. So in any EV problem that can be reduced to the two-envelopes problem, you shouldn't switch. I don't think this is confined to alien-vs-human cases either: perhaps any situation where you are unsure about a conversion ratio might run into two-envelopy problems, but I'll have to think about it.

I think switching has to be wrong, for symmetry-based reasons.

Let's imagine you and a friend fly out on a spaceship and run into an alien spaceship from another civilisation that seems roughly as advanced as yours. You and your buddy have just met the alien and their buddy, but haven't learnt each other's languages, when an accident occurs: your buddy and their buddy go flying off in different directions, and you can collectively only save one of them. The human is slightly closer, and a rescue attempt is slightly more likely to be successful as a result: based solely on hedonic utilitarianism, do you save the alien instead?

We'll make it even easier and say that our moral worth is strictly proportional to number of neurons in the brain, which is an actual, physical quantity. 

I can imagine being an EA-style reasoner, and reasoning as follows: obviously I should anchor on the alien and humans having equal neuron counts, at level N. But obviously there's a lot of uncertainty here. Let's approximate a lognormal-style distribution and say there's a 50% chance the alien is also at level N, a 25% chance they have N/10 neurons, and a 25% chance they have 10N neurons. So the expected number of neurons in the alien is 0.25*(N/10) + 0.5*N + 0.25*(10N) = 3.025N. Therefore, the alien is worth 3 times as much as a human in expectation, so we should obviously save it over the human.

Meanwhile, by pure happenstance, the alien is also a hedonic EA-style reasoner with the same assumptions, with neuron count P. They also do the calculation, reason that the human is worth 3.025P, and conclude that they should save the human.
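The symmetry is easy to see by running the same toy calculation from both perspectives (a sketch using only the probabilities assumed above):

```python
def expected_other_worth(own_neurons):
    """Expected neuron count of the other party under the toy distribution:
    25% chance of 1/10 as many neurons as one's own, 50% chance of the
    same number, 25% chance of 10 times as many."""
    n = own_neurons
    return 0.25 * (n / 10) + 0.5 * n + 0.25 * (10 * n)

# Human's view, in units of the human neuron count N (set N = 1):
human_estimate_of_alien = expected_other_worth(1.0)  # 3.025, i.e. 3.025N
# Alien's view, in units of the alien neuron count P (set P = 1):
alien_estimate_of_human = expected_other_worth(1.0)  # 3.025, i.e. 3.025P
# Each party concludes the *other* is worth ~3x more than itself,
# which cannot both be right: the two-envelope symmetry in miniature.
```

Because each side measures in its own units, both get the same factor of 3.025, and the two conclusions contradict each other.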

Clearly, this reasoning is wrong. The cases of the alien and the human are entirely symmetric: both should realise this, rate each other equally, and just save whoever's closer.

If your reasoning gives the wrong answer when you scale it up to aliens, it's probably also giving the wrong answer for chickens and elephants. 

If we make reasoning about chickens that is correct, it should also be able to scale up to aliens without causing problems. If your framework doesn't work for aliens, that's an indication that something is wrong with it.

Chickens don't hold a human-favouring position because they are not hedonic utilitarians and aren't intelligent enough to grasp the concept. But your framework explicitly does not weight the worth of beings by their intelligence, only by their capacity to feel pain.

I think it's simply wrong to switch in the case of the human vs alien tradeoff, because of the inherent symmetry of the situation. And if it's wrong in that case, what is it about the elephant case that has changed? 

So in the two-elephants problem, by pinning to humans, are you affirming that switching from the 1-human EV to the 1-elephant EV, when you are unsure about the HEV-to-EEV conversion, actually is the correct thing to do?

Like, option 1 is 0.25 HEV better than option 2, but option 2 is 0.25 EEV better than option 1, but you should pick option 1?

what if instead of an elephant, we were talking about a sentient alien? Wouldn't they respond to this with an objection like "hey, why are you picking the HEV as the basis, you human-centric chauvinist?"
