Natural selection will tend to promote AIs that disempower human beings. For example, we currently have chatbots that can help us solve problems. But AI developers are working to give these chatbots the ability to access the internet and online banking, and even control the actions of physical robots. While society would be better off if AIs make human workers more productive, competitive pressure pushes towards AI systems that automate human labor. Self-preservation and power seeking behaviors would also give AIs an evolutionary advantage, even to the detriment of humanity.
In this vein, is there anything to the idea of focusing more on aligning incentives than on aligning the AI itself? That is, would it be more useful to alter selection pressures (which behaviors are rewarded outside of training) than to try to induce "useful mutations" (alignment of specific AIs)? I have no idea how well this would work in practice, but it seems less fragile. One half-baked idea: heavily tax direct AI labor but not indirect AI labor, i.e. make it cheaper to have AIs help humans be more productive than to have AIs do the work without human involvement.
A single really convincing demonstration of something like deceptive alignment could make a big difference to the case for standards and monitoring (next section).
This struck me as a particularly good example of a small improvement having a meaningful impact. On a personal note, seeing a demonstration of deceptive alignment like the one you describe would immediately move me to the hit-the-emergency-brakes/burn-it-all-down camp. I imagine that many would react the same way, which could put a lot of pressure on AI labs to collectively start implementing strict standards (not just for show).
The idea of the intention-action gap is really interesting. I would imagine that the personal utility lost by closing this gap is also a significant factor. Meaning, if I recognize that this AI is sentient, what can I no longer do with/to it? If the sacrifice is too inconvenient, we might not be in such a hurry to concede that our intuitions are right by acting on them.
For these reasons I do not believe the EA movement should focus too heavily, or too exclusively, on LLMs or similar models as candidates for an AGI precursor, or put too much weight on short time horizons. We should pursue a diverse range of strategies for mitigating AI risk and devote significant resources to longer time horizons.
Instead of trying to refute Alice from general principles, I think Bob should instead point to concrete reasons for optimism (for example, Bob could say “for reasons A, B, and C it is likely that we can coordinate on not building AGI for the next 40 years and solve alignment in the meantime”).
As an aside to the main point of your post, I think Bob arrived at his position by default. I suspect part of it comes from the fact that the bulk of human experience deals with natural systems. These natural systems are often robust and could be described as default-success. Take human interaction: we assume that any stranger we meet is not a sociopath, because they rarely are. That system is robust and default-success because antisocial behavior is maladaptive. Because AI is so easy for our brains to place in the same category as humans, we may by extension put it in the "natural system" box, and with that comes the assumption that its behavior reverts to default-success. Have you ever been irritated at your computer because it froze? That irrational response can be traced to anger that the computer isn't following the rules of the (human) box we erroneously placed it in.
I've been thinking about this specific idea:
Intuitively, I think it makes sense that data should be the limiting factor of AI growth. A human with an IQ of 150 growing up in the rainforest will be very good at identifying plants, but won’t all of a sudden discover quantum physics. Similarly, an AI trained on only images of trees, even with compute 100 times more than we have now, will not be able to make progress in quantum physics.
It seems to me that you're making the point that extreme out-of-distribution domains are unreachable by generalization (at least not rapidly). But consider that humans actually did go from only identifying plants to making progress in quantum physics. How did that happen?
If we assume that generating high-quality synthetic data cannot produce new knowledge outside the learned domain, then avoiding the data ceiling necessarily requires gathering new information that humans have not yet gathered. As long as humans are required to gather that information, it's reasonable to assume that sustained exponential improvement is unlikely, since human information-gathering speed would not increase in tandem. Okay, let's remove the human bottleneck. In that case, an exponentially improving AI would have to find a way to gather information from the outside world at exponentially increasing speeds (and test insights/theories at those speeds too). Can you think of any way this would be possible? Otherwise, I find it hard not to reach the same conclusion as you.
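To make the shape of that argument concrete, here's a toy simulation (purely illustrative: the constants, and the assumption that capability is simply proportional to accumulated data, are made up for the example). If the rate of new information is fixed by human gathering speed, improvement is only linear; sustained exponential improvement requires the gathering rate itself to scale with capability.

```python
# Toy model (illustrative only; the numbers and the capability ~ data
# assumption are invented for the sake of the example).
# Regime A: new data arrives at a fixed, human-bottlenecked rate.
# Regime B: data-gathering speed is proportional to current capability,
#           i.e. the AI gathers and tests new information on its own.

steps = 20
data_a = data_b = 1.0
human_rate = 1.0   # fixed amount of new data per step in Regime A
feedback = 0.5     # fraction of capability converted into new data per step in Regime B

for t in range(steps):
    print(f"t={t:2d}  A (human-bottlenecked): {data_a:8.1f}   B (self-gathering): {data_b:12.1f}")
    data_a += human_rate         # linear growth
    data_b += feedback * data_b  # compounding (exponential) growth
```

Regime A stays linear no matter how much compute you throw at it, which is your conclusion; Regime B only turns exponential if the system can actually run experiments and collect data at speeds that scale with its own capability.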
Depends on what level and type of advancement we're talking about. I think interactivity opens far more doors in VR than, say, improved graphics. Something that immediately came to mind was the ability to simulate surgery without the high stakes. If you could tailor the experience to a specific patient, you would get an opportunity to discover unexpected complications that might arise with that particular procedure.
With higher levels of immersion, your example of exploring the space station would be really interesting. I'm not sure of the benefit to humanity, but things like walking on the moon in VR would be mindblowing. As you imply, it might also give us some valuable perspective that carries over to real life, expands our horizons, and makes us less petty.
It's interesting how, further down in the very same post, OpenAI basically concedes that stopping AGI development would be a fruitless effort:
Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work.
It's not hard to imagine compute eventually becoming cheap and fast enough to train GPT-4-class models on high-end consumer computers. How does one limit homebrew training runs without also limiting capabilities that are used for non-training purposes?
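A rough back-of-envelope sketch (every number below is an assumption plugged in for illustration, not an authoritative figure): take ~2e25 FLOP as a ballpark for a GPT-4-scale training run, ~2e14 FLOP/s for one high-end consumer GPU, and assume consumer price-performance keeps doubling roughly every two years.

```python
import math

# All figures are rough, hedged assumptions for illustration only.
TRAIN_FLOP = 2e25        # assumed total training compute for a GPT-4-scale model
GPU_FLOPS = 2e14         # assumed sustained throughput of one high-end consumer GPU (FLOP/s)
DOUBLING_YEARS = 2.0     # assumed doubling time for consumer price-performance
SECONDS_PER_YEAR = 3.15e7

gpu_years_today = TRAIN_FLOP / (GPU_FLOPS * SECONDS_PER_YEAR)
print(f"Single consumer GPU today: ~{gpu_years_today:,.0f} GPU-years per run")

# Doublings needed before one GPU could finish the same run in about a year
doublings = math.log2(gpu_years_today)
print(f"~{doublings:.0f} doublings, i.e. on the order of {doublings * DOUBLING_YEARS:.0f} years, "
      f"before a one-year homebrew run becomes conceivable")
```

So on these assumptions homebrew training of today's frontier models is a couple of decades out, but the trend only points one way, and any compute cap strict enough to stop it would also catch plenty of legitimate non-training workloads.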
Disclosing a conflict of interest demonstrates explicit awareness of potential bias; it's often done precisely so the reader weighs the content on its own merits. Your comment suggests you may not have done that, since it ignores the points the author actually argued. If you see evidence of bias in the takes in the article/post itself, can you be more specific? That way the author gets an honest chance to defend his viewpoint.