I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
I really enjoyed the article! But in the end, rather than persuading me that the odds of an alien presence are higher than I thought, it has further persuaded me that Bayesian estimates (as used in EA) are pretty much useless for this type of question, and are likely to lead people astray.
You give a prior of 1 in a hundred that aliens have a presence on Earth. Where did this number come from? Well, if you wanted to break it down, you'd look at the number of habitable planets, the chance that life evolved on each one, the chance that that life would develop into an advanced civilisation without dying out, the estimated time since it developed an advanced civilisation, the estimated speed of travel weighted by the distance to us, the chance they would all "hide" from us, the chance they would decide to spy on us, and so on. One of these has a roughly concrete answer, but all the others are just further speculative questions, with an utterly minuscule amount of evidence to go on about how hypothetical, unobserved aliens would act. I think the uncertainty for most of these questions ranges over many, many orders of magnitude, and that uncertainty carries through into your final "prior".
Assigning a single number to such a prior, as if it means anything, seems utterly absurd. It would be more reasonable, at the end of the analysis, to end up with something like a confidence interval, i.e. "I have a 95% interval that the probability of an alien presence is between 1 in a quadrillion and 1 in 2".
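To make that concrete, here is a minimal sketch in Python of what happens when you multiply a few Drake-style factors that are each uncertain over several orders of magnitude. The factor names and ranges below are invented purely for illustration; they are not anyone's actual estimates.

```python
# Minimal sketch: multiply a few Drake-style factors, each uncertain over
# several orders of magnitude, and look at the spread of the resulting
# "prior" instead of a single point value. Ranges are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def log_uniform(low, high, size):
    """Sample uniformly in log10-space between low and high."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

p_civ_nearby   = log_uniform(1e-6, 1e-1, n)  # an advanced civilisation exists within reach
p_can_reach_us = log_uniform(1e-4, 1.0, n)   # it could have travelled here by now
p_hides_spies  = log_uniform(1e-3, 1.0, n)   # it chooses to hide from us while spying

p_presence = p_civ_nearby * p_can_reach_us * p_hides_spies

lo, med, hi = np.percentile(p_presence, [2.5, 50, 97.5])
print(f"95% interval: {lo:.1e} to {hi:.1e}, median {med:.1e}")
# The interval spans many orders of magnitude -- information that a single
# point prior like "1 in 100" throws away.
```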
This article seems like a reasonable summary of the pause argument, although it relies a bit too heavily on the "god-like AI" hypothesis.
"Like playing chess against grandmaster Magnus Carlsen, we cannot predict the moves he will play, but we can predict the outcome: we lose."
I've been seeing this analogy a lot lately, and I think it's a bad one. The more intelligent side of a conflict does not always win if the less intelligent side starts from a more advantageous position. I can easily beat Stockfish at chess if I take away its queen, and an angry bear can easily defeat me in a cage match, despite my PhD.
We know that Magnus will beat me in a fair game of chess because there is ample empirical evidence for this, in the form of all his previous games against other players. That's how we know he is a grandmaster. There is no such empirical basis for knowing the outcome of a human-AI war.
Lastly, while we can't predict the exact moves Magnus will make, we can make general predictions about how the game will go. For example, we can confidently predict that he won't make obvious blunders, that his structure will probably be stronger, that he will capitalise on mistakes I make, etc.
I'm not saying it's absurd to think an AGI would win such a war (though I personally believe it is unlikely), just that if you do think the AGI would win, you have to actually prove it, not rely on faulty analogies.
Do you have additional evidence that this was specifically Torres, and not someone else who dislikes EA?
I was initially skeptical of the claim, thinking it was one of Torres's followers, but looking at the timestamps, the OP here was posted 20 minutes before Torres tweeted about the article. And I know Torres has used sock-puppets before, so it at least seems plausible that this is another one.
However, it could also be that Torres saw the post here and decided to tweet about the article, or that eugenics-adjacent is a different person who posted here and then tipped off Torres, or that they both happened to see the article at around the same time. I don't think there is enough information to make a confident accusation here.
"My sense is there have been areas with 10,000+ people who were voluntarily relocated, and where the relocation package did get a 95%+ approval, but I am not sure."
I'm sure this has happened somewhere in the US, say, but those 10,000 people did not constitute an entire sovereign nation, with representatives at the UN and so on. They were "moving down the road" to a different place within the same nation, where they continued to have democratic control over their own laws. The people of Nauru have lived there for 3,000 years and suffered greatly under colonial rule before finally gaining self-governance; I find it unlikely they would give it up again so lightly.
Fundamentally, I do not think it makes sense for a country to "democratically" give up its right to democratic self-governance. Think about the people who voted "no" on the sale: their right to vote on their own governance, on their own ancestral homeland, is taken away from them without their consent. I do not see a universe in which this is ethical. Even if the set of laws FTX is allowed to write is limited, those laws still apply to the people of the nation, who either have to obey the FTX Foundation or abandon their own country and ancestral homeland.
I'm sorry if I come off as emotional here, but I find this proposal deeply troubling and it stands against every one of my principles. I really hope that it was only the one or two fools who were actually considering it.
"If 95%+ of the Nauru population would prefer receiving a large cash gift in order to give substantial control over the nation to a third party, this seems like the kind of thing that a democratic nation should be allowed to do."
I'll first note that it seems incredibly unlikely that 95% of a population would agree to sell their ancestral homeland out from under themselves. But even if they did, what would happen if, the next year, there were some scandal and they changed their minds, so that 95% of them wanted the FTX project gone? Does the project go kaput, despite the substantial investment, or does FTX carry on and override the democratic wishes of the people of Nauru?
It seems to me that the talk of buying an entire nation (instead of say, buying a plot of land inside a nation), is inherently undemocratic, for this reason. You're not just buying land, you're buying control over people. Fortunately, the idea is crazy for about a dozen other reasons I listed elsewhere, so it never would have gotten that far in the first place.
To add even more reasons why this is a bad idea, Nauru has very poor soil as a result of phosphate mining, so on-land agriculture is extremely limited, and most food is currently imported, leading to an obesity epidemic. Similarly, there are no lakes or rivers on the island, so water is either imported or collected from rainfall.
Also the Australian government is currently operating a controversial detention centre for asylum seekers on the island, and presumably would stand in the way of any purchase.
Really, it's hard to overstate how much of an obviously bad idea this was.
I'm unsure why people downvoted this post; the court filings do exist, have already been covered by Forbes, and are worth discussing.
I think this idea, from what we know of it, is incredibly bad. A few reasons why:
I'll also note that I am personally not supportive of human genetic enhancement, but it's a bigger subject that I don't want to dive into here.
I think you make good points in favour of the AI-expert side of the equation. To balance that out, I want to offer one more point in favour of the superforecasters, in addition to my earlier points about anchoring and selection bias (we don't actually know what the true median of AI-expert opinion is, or what it would be if the questions were phrased differently).
The primary point I want to make is that AI x-risk forecasting is, at least partly, a geopolitical forecast. Extinction from rogue AI requires some form of war or struggle between humanity and the AI. You have to estimate the probability that that struggle ends with humanity losing.
An AI expert is an expert in software development, not in geopolitical threat management, nor in potential future weapons technology. If someone has worked on the latest bombshell LLM, I will take their predictions about specific AI developments seriously; but if they tell me an AI will be able to build omnipotent nanomachines that take over the planet in a month, I have no hesitation in telling them they're wrong, because I have more expertise in that realm than they do.
I think the superforecasters have better geopolitical knowledge than the AI experts, and that is reflected in these estimates.
To be clear, I think you included all the necessary disclaimers; your article was well written and well argued, and the use of probability was well within the standard for how probability is used in EA.
My issue is that I think the way probability is presented in EA is bad, misleading, and likely to lead to errors. I think this is the exact type of problem (speculative, unbounded estimates) where the EA method fails.
My specific issue here is how uncertainty is taken out of the equation and placed into preambles, and how a highly complex belief is reduced to a single number. This is typical on this forum and in EA generally (see P(doom)). When Bayes is used in science, on the other hand, the prior is a distribution. (See the PDF of the first result here.)
My concern is that EA is making decisions based on these point estimates, rather than on people's true distributions, which is likely to lead people astray.
I’m curious: When you say that your prior for alien presence is 1%, what is your distribution? Is 1% your median estimate? How shocked would you be if the “true value” was 0.001%?
If probabilities of probabilities are confusing, do the same exercise for "how many civilisations are there in the galaxy".
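If it helps, here is an equally rough sketch of what answering that could look like: treating the stated "1% prior" as the median of a distribution over the probability itself, and reading off the quantities above. The lognormal form and the two-orders-of-magnitude spread are my own assumptions for illustration, not a claim about your actual beliefs.

```python
# Rough sketch: treat the stated "1%" as the median of a distribution over
# the probability, rather than as a point estimate. The lognormal form and
# the two-orders-of-magnitude spread are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

median_p = 0.01      # the stated point prior
sigma_log10 = 2.0    # assumed 1-sigma uncertainty, in orders of magnitude

samples = 10 ** rng.normal(np.log10(median_p), sigma_log10, size=100_000)
p = np.clip(samples, 0.0, 1.0)  # probabilities can't exceed 1

print("median:", np.median(p))
print("P(p < 0.001%):", np.mean(p < 1e-5))
print("95% interval:", np.percentile(p, [2.5, 97.5]))
# With this spread the median stays at ~1%, but the 95% interval is enormous
# and several percent of the mass sits below 0.001% -- none of which is
# visible from the point estimate alone.
```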