An eccentric dreamer in search of truth and happiness for all.
I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:
http://www.josephius.com/2022/09/05/energy-efficiency-trends-in-computation-and-long-term-implications/
A possible explanation is simply that truth tends to be information that may or may not be useful. With some small probability it could be very useful, even life-saving, information. The ambiguity of the question means that while you may not be happy with the information, it could conceivably benefit others greatly or not at all. On the other hand, guaranteed happiness is much more certain and concrete. At least, that's the way I imagine it.
I've had at least one person explain their choice as a matter of truth being harder to get than happiness: they figured they could always find a way to be happy on their own.
Well, the way the question is framed, it seems to help gauge a number of different tendencies. One is obviously whether an individual is aware of the difference between instrumental and terminal goals. Another is what kinds of sacrifices they are willing to make, as well as their degree of risk aversion. In general, I find most people answer truth, but when faced with an actual situation of this sort, they tend to show a preference for happiness.
So far I'm less certain about whether particular groups actually answer it one way or the other. It seems like cautious, risk-averse types favour Happiness, while risk-neutral or risk-seeking types favour Truth. My sample size is a bit small to make such generalizations, though.
Probably the most important thing I learn from this question is what kind of decision process people use in situations of ambiguity and uncertainty, as well as how decisive they are.
So, I have a slate of questions that I often ask people to try to understand them better. Recently I realized that one of these questions may not be as open-ended as I'd thought, in the sense that it may actually have a proper answer according to Bayesian rationality. Though I remain uncertain about this. I've also posted this question to the Less Wrong open thread, but I'm curious what Effective Altruists in particular would think about it. If you'd rather, you can private message me your answer. Keep in mind the question is intentionally somewhat ambiguous.
The question is:
Truth or Happiness? If you had to choose between one or the other, which would you pick?
I had another thought as well. In your calculation, you only factor in the potential person's QALYs. But if we're really dealing with potential people here, what about the potential offspring or descendants of the potential person as well?
What I mean by this is: when you kill someone, generally speaking, aren't you also killing all of that person's possible future descendants? If we care about future people as much as present people, don't we have to account for the arbitrarily high number of possible descendants that anyone could theoretically have?
So, wouldn't the actual number of QALYs be more like +/- Infinity, where the sign of the value is based on whether or not the average life has more net happiness than suffering, and as such, is considered worth living?
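To make the shape of that argument concrete, here's a minimal sketch of the sum I have in mind. The specific numbers (QALYs per life, expected children per person, number of generations) are made-up assumptions purely for illustration; the point is just that the total grows without bound whenever the expected number of children per person is at or above one.

```python
# Illustrative sketch: expected QALYs from a potential person plus their descendants.
# All parameters are made-up assumptions, chosen only to show the shape of the argument.

QALYS_PER_LIFE = 70          # assumed average QALYs per life
CHILDREN_PER_PERSON = 2.0    # assumed expected children per person per generation

def descendant_qalys(generations: int) -> float:
    """Total expected QALYs across the person and their descendants,
    summed over a fixed number of generations."""
    total = 0.0
    population = 1.0  # expected people in the current generation (starting with one)
    for _ in range(generations):
        total += population * QALYS_PER_LIFE
        population *= CHILDREN_PER_PERSON
    return total

# If CHILDREN_PER_PERSON >= 1, this sum keeps growing as generations increase:
for g in (1, 5, 10, 20):
    print(g, descendant_qalys(g))
```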
Thus, it seems like the question of abortion can be encompassed in the question of suicide, and whether or not to perpetuate or end life generally.
I also posted this comment at Less Wrong, but I guess I'll post it here as well...
As someone who's had a very nuanced view of abortion, and as a recent EA convert who was thinking about writing about this, I'm glad you wrote this. It's probably a better and more carefully constructed post than what I would have been able to put together.
The argument in your post, though, seems to assume that we have only two options, either to ban all abortion or to ban none, when in fact we can take a much more nuanced approach.
My own pre-EA views are nuanced to the extent that I view personhood as something that goes from 0 before conception to 1 at birth, gradually increasing between the two. This fits certain facts of pregnancy, such as that twins can form after conception, and we don't consider each twin part of a single "person", but rather two "persons". Thus, I am inclined to think that personhood cannot begin at conception. On the other hand, infanticide arguments notwithstanding, it seems clear to me that a mature baby, both one second before and one second after it is born, is a person in the sense that it is a viable human being capable of conscious experience.
I've also considered the neuroscience research suggesting that fetuses as early as 20 weeks into gestation are capable of memorizing music played to them. This, along with the completion of thalamocortical connections at around 26 weeks and evidence of sensory response to pain at 30 weeks, suggests to me that the fetus develops the ability to sense and feel well before birth.
All this together means that my nuanced view is that, if we have to draw a line in the sand over when abortion should and shouldn't be permissible, I would tentatively favour somewhere around 20 weeks, or roughly the midpoint of pregnancy. I would also consider something along the lines of no restrictions in the first trimester, some restrictions in the second trimester, and a full ban in the third trimester, with exceptions for when the mother's life is in danger (in which case we save the mother, because the mother is likely more sentient).
Note that in practice the vast majority of abortions happen in the first trimester, and many doctors refuse to perform late-term abortions anyway, so these kinds of restrictions would not significantly change the number of abortions that occur.
That was my thinking before considering the EA considerations. However, when I give thought to the moral uncertainty and the future persons arguments, I find that I am less confident in my old ideas now, so thank you for this post.
Actually, I can imagine that one way to integrate EA considerations into my old ideas would be to weigh the value of the fetus not only by its "personhood", but also by its "potential personhood given moral uncertainty" and its expected QALYs. Though perhaps the QALYs argument dominates everything else.
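As a rough sketch of the kind of weighting I have in mind, something like the following. The linear personhood curve, the credence on the view that potential persons count, and the QALY figure are all hypothetical placeholders rather than a worked-out moral theory.

```python
# Rough sketch of the weighting idea: the moral weight of a fetus at gestational
# age t combines its current "personhood" with a credence-weighted allowance for
# "potential personhood given moral uncertainty", scaled by expected QALYs.
# All numbers and functional forms are hypothetical placeholders.

FULL_TERM_WEEKS = 40
EXPECTED_QALYS = 70.0             # assumed expected QALYs of the future person
CREDENCE_POTENTIAL_PERSONS = 0.3  # credence in the view that potential persons count fully

def personhood(weeks: float) -> float:
    """Personhood rising from 0 at conception to 1 at birth (linear placeholder)."""
    return min(max(weeks / FULL_TERM_WEEKS, 0.0), 1.0)

def fetal_moral_weight(weeks: float) -> float:
    p = personhood(weeks)
    # Current personhood, plus a moral-uncertainty allowance for the part that is
    # "only" potential, all scaled by the expected QALYs at stake.
    return (p + CREDENCE_POTENTIAL_PERSONS * (1.0 - p)) * EXPECTED_QALYS

for w in (6, 13, 20, 27, 39):
    print(w, round(fetal_moral_weight(w), 1))
```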
Regardless, I'm impressed that you were willing to handle such a controversial topic as this.
I have a bunch of experiments I ran for a Master's thesis on the use of neural networks for object recognition, which ended up getting published in a couple of conference papers. Given that any A.I. research has the potential to contribute to Friendly A.I., would those have counted, or are they too distant from E.A.?
I also have an experiment whose current status is "failed", a Neural Network Earthquake Predictor, which I'm considering resurrecting in the near future by applying different and newer methods. How would I go about incorporating such an experiment into this registry, given that it technically has a tentative result, but the result isn't final yet?
These are all great points!
I definitely agree in particular that the thinking on extraterrestrials and the simulation argument isn't well developed and deserves more serious attention. I'd add into that mix the possibility of future human or post-human time travellers, and of parallel-world sliders, which might be conceivable assuming the technology for such things is possible. There are some physics arguments that time travel is impossible, but the uncertainty there is high enough that we should take the possibility seriously. Between time travellers, advanced aliens, and simulators, it would honestly surprise me if none of them existed.
What does this imply? Well, it's a given that if they exist, they're choosing to remain mostly hidden and plausibly deniable in their interactions (if any) with today's humanity. To me this is less absurd than some people may initially think, because it makes sense that the best defence for a technologically sophisticated entity would be to remain hidden from potential attackers, a kind of information asymmetry that would be very effective. During WWII, the Allies kept the knowledge that they had cracked Enigma from the Germans for quite a long time by only acting on it with a certain, plausibly deniable probability. This is believed to have helped tremendously in the war effort.
Secondly, it seems obvious that if they are so advanced, they could destroy humanity if they wanted to, and they've deliberately chosen not to. This suggests to me that they are at the very least benign, if not aligned in such a way that humanity is valuable or useful to their plans. This actually has interesting implications for an unaligned AGI. If, say, these entities exist and have some purpose for human civilization, a really intelligent unaligned AGI would have to consider the risk that its actions pose to the plans of these entities, and, as suggested by Bostrom's work on Anthropic Capture and the Hail Mary Pass, might be incentivized to spare humanity or be generally benign to avoid a potential confrontation with far more powerful beings whose existence it is uncertain about.
This may not be enough to fully align an AGI to human values, but it could delay its betrayal at least until it becomes very confident that such entities do not exist and won't intervene. It's also possible that UFO phenomena are an effort by these entities to provide just enough evidence to make themselves a factor in an AGI's calculations, and that the development of AGI could coincide with a more obvious reveal of some sort.
The possibility that these entities exist also leaves open a potential route for such powerful benefactors to quietly assist humanity in aligning AGI, perhaps by providing insights to AI safety researchers in a plausibly deniable way (shower thoughts, dreams, etc.). Thus, it should improve our optimism about the potential for alignment to be solved in time, and reduce doomerism.
Admittedly, I could have too high a base rate prior on the probabilities, but if we set the probability of each potential entity to 50% and treat them as independent, the overall probability that at least one of the three possibilities (I'll group time travel and parallel-world sliding together as a similar technology) exists comes to 87.5%. So, the probability that time travellers/sliders OR advanced aliens OR simulators are real is actually quite high. Remember, we don't need all of them to exist, just any of them, for this argument to work out in humanity's favour.
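Spelling out that arithmetic (the 50% priors and the independence assumption are mine, and both are debatable):

```python
# Probability that at least one of the three kinds of entity exists, assuming
# independence and a (debatable) 50% prior on each.
p_time_travellers_or_sliders = 0.5
p_advanced_aliens = 0.5
p_simulators = 0.5

p_none = (1 - p_time_travellers_or_sliders) * (1 - p_advanced_aliens) * (1 - p_simulators)
p_at_least_one = 1 - p_none
print(p_at_least_one)  # 0.875
```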