
LoveAndPeaceAlways

78 karma · Joined

Comments (19)

I'm wondering what Nick Bostrom's p(doom) currently is, given the subject of this book. Nine years ago, in his lecture on his book Superintelligence, he said there was "less than 50% risk of doom". In this interview four months ago he said it's good that there has been more focus on risks recently, though there's still slightly less focus on the risks than would be optimal; he wants to focus on the upsides because he fears we might "overshoot" and not build AGI at all, which in his opinion would be tragic. So it seems he thinks the risk is lower than it used to be because of this public awareness of the risks.

I think this is a good idea, however:

I was initially confused until I realized you meant hair. According to Google, "hear" isn't a word used for that purpose; the correct spelling is "hair".

I'd like to underline that I'm agnostic, and I don't know what the true nature of our reality is, though lately I've been more open to anti-physicalist views of the universe.

For one, if there's a continuation of consciousness after death, then AGI killing lots of people might not be as bad as it would be if there were no such continuation. I would still consider it very bad, but mostly because I like this world and the living beings in it and would not like them to end; it wouldn't be the end of those consciousnesses, as some doomy AGI safety people imply it would be.

Another thing is that the relationship between consciousness and the physical universe might be more complex than physicalists say - as some of the early figures of quantum physics thought - and there might be factors at play, unknown to current science, that could have an effect on the outcome. I don't have more to say about this, because I'm uncertain what the relationship between consciousness and the physical universe might be in such a view.

And lastly, if there's a God or gods or something similar, such beings would have agency and could have an effect on what the outcome might be. For example, there are Christian eschatological views which hold that the Christian prophecies about the New Earth and other such things must come true in some way, so the future cannot end in the total extinction of all human life.

Do the concepts behind AGI safety only make sense if you have roughly the same worldview as the top AGI safety researchers: secular atheism, reductive materialism/physicalism, and a computational theory of mind?

You may be aware of this already, but I think there is a clear difference between saving an existing person who would otherwise have died - reducing suffering in the process by also preventing non-fatal illnesses - and starting a pregnancy, because before a pregnancy starts the person doesn't exist yet.

There are a couple of debate ideas I have, but I would most like to see a debate on whether ontological physicalism is the best view of the universe there is.

I would like to see someone like the theoretical physicist Sean Carroll represent physicalism, and someone like Professor Edward F. Kelly from the Division of Perceptual Studies at the University of Virginia represent anti-physicalism. The researchers at the Division of Perceptual Studies study near-death experiences, claimed past-life memories in children, and other parapsychological phenomena, and Edward F. Kelly has written three long books on why he thinks physicalism is false, relying largely on case studies that he says don't fit well with the physicalist worldview. Based on my understanding, the mainstream scientific community treats the research by the Division of Perceptual Studies as fringe science.

I'm personally agnostic, but I have thought about writing an effortpost for LessWrong steelmanning anti-physicalism based on Edward F. Kelly's works. I have doubted whether there would be any interest in it, though, because the people at LessWrong seem to be very certain of physicalism and to think lowly of other positions. If you think there would be interest in it, you can say so. Physicalism has very good arguments for it, and the anti-physicalist position relies on non-verifiable case studies being accurate.

The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson wasn't mentioned.

The movie Joker makes a good case that many criminals are created by circumstances, like mental illness, abuse and lack of support from society and other people. I still believe in some form of free will and moral responsibility of an individual, but criminals are also to some extent just unlucky.

You could study subjects, read books, watch movies and play video games, provided that these things are available. But I personally think that Buddhism is particularly well optimized for solitary life, so I'd meditate, observe my mind and try to develop it, and read Buddhist teachings. Other religions could also work; at least Christianity has had hermits.

What would you say is the core message of the Sequences? That naturalism is true? That Bayesianism is great? That humans are naturally very irrational and have to put in effort if they want to be rational?

I've read the Sequences almost twice. The first time was fun, because Yudkowsky was optimistic back then, but during the second read I was constantly aware that Yudkowsky now believes, along the lines of his 'Death with dignity' post, that our doom is virtually certain and that he has no idea how to even begin formulating a solution. If Yudkowsky, who wrote the Sequences on his own, who founded the modern rationalist movement on his own, who founded MIRI and the AGI alignment movement on his own, has no idea where to even begin looking for a solution, what hope do I have? I probably couldn't do anything comparable to those things on my own even if I tried my hardest for 30 years. I could thoroughly study everything Yudkowsky and MIRI have studied, which would be a lot, and after all that effort I would be in the same situation Yudkowsky is in right now: no idea where to even begin looking for a solution, and only knowledge of which approaches don't work. The only reason to do it would be to gain a fraction of a dignity point, to use Yudkowsky's way of thinking.

To be clear, I don't have a fixed model of AI risk in my head. I think I can sort of understand what Yudkowsky's model is and why he is afraid, but I don't know if he's right, because I can also sort of understand the models of those who are more optimistic. I'm pretty agnostic when it comes to this subject, and I wouldn't be particularly surprised by any specific outcome.
