yooo x-risk is a cult you should get out while you can <3
I believe your assessment is correct, and I fear that EA hasn't done due diligence on AI Safety, especially seeing how much effort and money is being spent on it.
I think there is a severe lack of writing on the side of "AI Safety is ineffective". A lot of basic arguments haven't been written down, including some quite low-hanging fruit.
For a more extreme hypothesis: Ariel Conn at FLI has voiced the omnipresent Western fear of resurgent ethnic cleansing, citing how easily facial recognition can identify people's race - but has difficulty identifying victims ever been the main obstacle to genocide? Moreover, the idea of thoughtless machines dutifully carrying out a campaign of mass murder takes a rather lopsided view of the history of ethnic cleansing and genocide: the real death and suffering has been caused or exacerbated by human passions, grievances, limitations, and incompetence more often than it has been mitigated by the presence of humans in the loop.
I am not a historian, but under the Nazi regime the Netherlands had one of the highest percentages of Jews killed in all of Western Europe. I remember historians attributing this to the Dutch having kept thorough records of who the Jews were and where they lived. Access to information is definitely a big factor in how successful a genocidal regime can be.
The worry is not so much about killer robots enacting a mass murder campaign. The worry is that humans will use facial recognition algorithms to help state-sanctioned ethnic cleansing. This is not a speculative worry. There are a lot of papers on Uyghur facial recognition.
I don't have any specific instances in mind.
Regarding your accounting of cases, that was roughly my recollection as well. But while the posts might not address the second concern directly, I don't think that the two concerns are separable. The actual mechanisms and results might largely overlap.
Regarding the second concern you mention specifically, I would not expect those complaints to be written down by any users. Most people on any forum are lurkers, or at the very least they will lurk a bit to get a feel for what the community is like and what it values before participating. This makes people with oft-downvoted opinions self-select out of the community before ever letting us know that this is happening.
The hovering is helpful, thank you.
Are there any plans to evaluate the current karma system? Both the OP and multiple comments expressed worries about the announced scoring system, and in the present day we regularly see people complain about voting behaviour. It would be worth knowing if the concerns from a year ago turn out to have been correct.
Related to this, I have a feature request. Would it be possible to break down scores in a more transparent way, for example by number of upvotes and downvotes? The current system gives very little insight to authors about how much people like their posts and comments. The lesson to learn from getting both many upvotes and many downvotes is very different from the lesson to learn if nobody bothered to read and vote on your content.
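To give a rough sketch of the kind of breakdown I mean (all names here are hypothetical, not the forum's actual schema), the same net score can carry very different information once you split it out:

```python
from dataclasses import dataclass

@dataclass
class VoteBreakdown:
    # Hypothetical fields; the forum's real data model may differ.
    upvotes: int
    downvotes: int

    @property
    def net_score(self) -> int:
        return self.upvotes - self.downvotes

    def summary(self) -> str:
        return f"+{self.upvotes} / -{self.downvotes} (net {self.net_score})"

# Same net score of +2, very different signals to the author:
print(VoteBreakdown(upvotes=40, downvotes=38).summary())  # controversial
print(VoteBreakdown(upvotes=2, downvotes=0).summary())    # barely read
```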
Thank you so much for posting this. It is nice to see others in our community willing to call it like it is.
I was talking with a colleague the other day about an AI organization that claims:
AGI is probably coming in the next 20 years.
Many of the reasons we have for believing this are secret.
They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise.
To be fair to MIRI (who I'm guessing are the organization in question), this lie is industry standard even among places that don't participate in the "strong AI" scam. Not just in how any data-based algorithm engineering is 80% data cleaning while everyone pretends the power is in having clever algorithms, but also in how startups use human labor to pretend they have advanced AI, or in how short self-driving car timelines are a major part of Uber's value proposition.
The emperor has no clothes. Everyone in the field likes to think they were already aware of this when told, but it remains helpful to point it out explicitly at every opportunity.
This is mostly a problem with an example you use; I'm not sure whether it points to an underlying issue with your premise:
You link to the exponential growth of transistor density. But that growth is restricted to exactly that: transistor density. Growing your number of transistors doesn't necessarily grow your capability to compute the things you care about, both from a theoretical perspective (potential fundamental limits in the theory of computation) and from a practical one (our general inability to write code that makes use of much circuitry at the same time + the need for dark silicon + Wirth's law). Other numbers, like FLOP/s, don't necessarily mean what you'd think either.
Moore's law does not posit exponential growth in amount of "compute". It is not clear that the exponential growth of transistor density translates to exponential growth of any quantity you'd actually care about. I think it is rather speculative to assume it does and even more so to assume it will continue to.
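To make the practical point concrete, here is a minimal sketch of Amdahl's law, which bounds how much of that extra circuitry a typical program can actually use (the 90% parallel fraction below is an illustrative assumption, not a measured figure):

```python
def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
    """Amdahl's law: best-case speedup when only part of a
    program can exploit extra parallel hardware."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

# Illustrative assumption: 90% of the workload parallelizes.
p = 0.9
for n in [1, 2, 4, 8, 16, 256, 4096]:
    print(f"{n:>5} units -> {amdahl_speedup(p, n):5.2f}x speedup")

# Transistor counts can grow without bound, but the speedup
# asymptotes at 1 / (1 - p) = 10x for this workload.
```

So even under generous assumptions, exponential hardware growth buys sharply diminishing returns on the computations you actually run.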
These are some issues that actively frustrate me to the point of driving me away from this site.
Fighting human rights violations around the globe.