Joseph Miller
Ok thanks, I didn't know that.

Nit: Beff Jezos was doxxed and repeating his name seems uncool, even if you don't like him.

> proximity [...] is obviously not morally important

> People often claim that you have a greater obligation to those in your own country than to foreigners. I’m doubtful of this

> imagining drowning children that there are a bunch of nearby assholes ignoring the child as he drowns. Does that eliminate your reason to save the child? No, obviously not
Your argument seems to be roughly an appeal to the intuition that moral principles should be simple: consistent across space and time, without weird edge cases, not specific to the circumstances of the event. But why should they be?

Imo this is the mistake that people make when they haven't internalized reductionism and naturalism. In other words, they are moral realists or otherwise confused. When you realize that "morality" is just "preferences" with a bunch of pointless religious, mystical and philosophical baggage, the situation becomes clearer.

Because preferences are properties of human brains, not physical laws, there is no particular reason to expect them to have low Kolmogorov complexity. And to say that you "should" actually be consistent about moral principles is an empty assertion that entirely rests on a hazy and unnatural definition of "should".
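As a rough illustration of the description-length point (my sketch, not part of the original comment): Kolmogorov complexity is uncomputable, but the size of a compressed description gives a crude upper bound on it. A simple, uniform principle compresses to very little; an ad hoc bundle of circumstance-specific preferences needs more bits. The example strings below are hypothetical.

```python
import zlib

def description_length(s: str) -> int:
    """Crude upper bound on Kolmogorov complexity:
    length of the zlib-compressed UTF-8 encoding."""
    return len(zlib.compress(s.encode("utf-8")))

# A short, uniform moral principle.
simple_rule = "maximize total welfare"

# A messy, circumstance-laden preference bundle (illustrative).
messy_prefs = (
    "save nearby drowning children; weight compatriots over strangers; "
    "make exceptions when many bystanders are also ignoring the child; "
    "discount obligations that arise far away or far in the future"
)

print(description_length(simple_rule))   # few bytes needed
print(description_length(messy_prefs))  # noticeably more bytes needed
```

This is only a proxy, of course: real preferences live in brains, not strings, and nothing forces them to admit a short description at all.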

Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:
* a small circle of the smartest people believe this
* I will give you a view into this small elite group, who are the only ones who are situationally aware
* the inner circle longed TSMC way before you
* if you believe me, you can get 100x richer -- there's still alpha, you can still be early
* this geopolitical outcome is "inevitable" (sic!)
* in the future the coolest and most elite group will work on The Project. "see you in the desert" (sic)
* etc.

These are not just vibes - they are all empirical claims (except maybe the last). If you think they are wrong, you should say so and explain why. It's not epistemically poor to say these things if they're actually true.

I also claim that I understand ethics.

"Good", "bad", "right", "wrong", etc. are words that people project their confusions about preferences / guilt / religion on to. They do not have commonly agreed upon definitions. When you define the words precisely the questions become scientific, not philosophical.

People are looking for some way to capture their intuitions that God above is casting judgement about the true value of things - without invoking supernatural ideas. But they cannot, because nothing in the world actually captures the spirit of this intuition (the closest thing is preferences). So they relapse into confusion, instead of accepting the obvious conclusion that moral beliefs are in the same ontological category as opinions (like "my favorite color is red"), not facts (like "the sky appears blue").

> I expect much of this will be largely subjective and have no objective fact of the matter, but it can be better informed by both empirical and philosophical research.

So I would say it is all subjective. But I agree that understanding algorithms will help us choose which actions satisfy our preferences. (But not that searching for explanations of the magic of conscious will help us decide which actions are good.)

I claim that I understand sentience. Sentience is just a word that people have projected their confusions about brains / identity onto.

Put less snarkily:
Consciousness does not have a commonly agreed upon definition. The question of whether an AI is conscious cannot be answered until you choose a precise definition of consciousness, at which point the question falls out of the realm of philosophy into standard science.

This might seem like mere pedantry or missing the point, because the whole challenge is to figure out the definition of consciousness, but I think it is actually the central issue. People are grasping for some solution to the "hard problem" of capturing the je ne sais quoi of what it is like to be a thing, but they will not succeed until they deconfuse themselves about the intangible nature of sentience.

You cannot know about something unless it is somehow connected to the causal chain that led to the current state of your brain. If we know about a thing called "consciousness" then it is part of this causal chain. Therefore "consciousness", whatever it is, is a part of physics. There is no evidence for, and there cannot ever be evidence for, any kind of dualism or epiphenomenal consciousness. This leaves us to conclude that either panpsychism or materialism is correct. And causally-connected panpsychism is just materialism where we haven't discovered all the laws of physics yet. This is basically the argument for illusionism.

So "consciousness" is the algorithm that causes brains to say "I think therefore I am". Is there some secret sauce that makes this algorithm special and different from all currently known algorithms, such that if we understood it we would suddenly feel enlightened? I doubt it. I expect we will just find a big pile of heuristics and optimization procedures that are fundamentally familiar to computer science. Maybe you disagree, that's fine! But let's just be clear that that is what we're looking for, not some other magisterium.

> Sentient AI that genuinely 'feels for us' probably wouldn't disempower us

Making it genuinely "feel for us" is not well defined. There are some algorithms that make it optimize for our safety. Some of these will be vaguely similar to the algorithm in human brains that we call empathy, some will not. It does not particularly matter for alignment either way.

> and basically nobody there (as far as I could tell) held extremely 'doomer' beliefs about AI.

> In any case, I think it's clear that AI Safety is no longer 'neglected' within EA, and possibly outside of it.

I think this is basically entirely selection effects. Almost all the people I spoke to were "doomers" to some extent.

> Variable value principles seems very weird and unlikely

> Person-affecting theories. I find them unlikely

> Rejections of transitivity. This seems very radical to me, and therefore unlikely

I assume you, like most EAs, are not a moral realist. In which case what do these statements mean? This seems like an instance of a common pattern in EA where people talk about morality in a religious manner, while denying having any mystical beliefs.

This is not a research problem, it's a coordination / political problem. The algorithms are already doing what the creators intended, which is to maximise engagement.

You should also consider the impact of changing the diets of millions of children. Will this food be healthier? Will they like the food?
