harfe

Comments
I strongly disagree. I think human extinction would be bad.

Not every utility function is equally desirable. For example, an ASI that maximizes the number of paperclips in the universe would be a bad outcome.

Thus, unless one adopts anthropocentric values, the utilitarian philosophy common in this forum (whether you approve of additivity or not) implies that it would be desirable for humans to develop ASI to exterminate humans as quickly and with as high a probability as possible, as opposed to the exact opposite goal that many people pursue.

Most people here do adopt anthropocentric values, in that they think human flourishing would be more desirable than a vast amount of paperclips.

I am not sure if he actually took part in the event, but there were people involved with him who were present, who said he might be dropping by and that he had bought a ticket.

Note that at this point we only have indirect word that he bought a ticket. Also note that anyone can buy a ticket, and if his ticket was cancelled by Manifold (which is probably the thing you want), we would not hear about that directly. Of course, information can emerge that he actually did attend.

Thanks for linking it! I recommend watching the 5-minute video.

Your title makes it sound like Trump thinks there is a risk that AI takes over the human race (maybe consider changing the title).

The actual text from Trump in the video is:

you know there are those people that say it takes over the human race

Given the way Trump talks, it can sometimes be difficult to assess what he actually believes. In general, Trump has expressed a mix of concern and support for advanced AI. My impression is that Trump was more interested in advancing AI than in opposing it out of concern for the human race.

But if we get to GPT-7, I assume we could sort of ask it, “Would taking this next step, have a large chance of failing?“.

How do you know it tells the truth, or its best knowledge of the truth, without solving the "eliciting latent knowledge" problem?


I am far more pessimistic than him about extinction from misaligned AI systems, but I think it's quite sensible to try to make money from AI even in worlds with a high probability of extinction, since the market signal provided counterfactually moves the market far less than the realizable benefit from being richer in such a crucial time.

I am sympathetic to this position when it comes to your own money. Like, if regular AI safety people put a large fraction of their savings into NVIDIA stock, that is understandable to me.

But the situation with Aschenbrenner starting an AGI investment firm is different. He is not directing (just) his own money into AGI companies, but the much larger capital of his investors. So the majority of the wealth gain will not end up in Aschenbrenner's hands, but will belong to the investors. This is different from a small-scale shareholder who gets all the gains (minus some tax) from his stock ownership.

But even if Aschenbrenner's plan is to invest in the world-destroying technology in order to become richer later when it matters, it would be nice for him to say so and also explain how he intends to use the money later. My guess, however, is that this is not what Aschenbrenner actually believes. He might just be in favour of accelerating these technologies.

I think you are replying to the wrong comment here.

That does not help me understand what is meant there. I fail to see relevant analogies to AI Safety.

'loss of the mandate of heaven'

What do you mean by that? Presumably you do not mean it in any religious sense. Do you mean that exclusively longtermist EA is much less popular among EAs than it used to be (i.e., that the "heaven" here is the opinion of the average EA)?
