Yeah, I got some pushback on Twitter on this point, and I now agree it's not a great analogy. My thinking was that we technically know how to build a quantum computer, just not one that is economically viable (which would require solving technical problems and making the thing scalable and not too expensive). It feels like an "all squares are rectangles, but not all rectangles are squares" situation: quantum computing ISN'T economically viable, but that's not the main problem with it right now.
BTW, this link (Buzan, Wæver and de Wilde, 1998) goes to a PaperPile citation that's not publicly accessible.
I think building AI systems with some level of autonomy/agency would make them much more useful, provided they are still aligned with the interests of their users/creators. There's already evidence that companies are moving in this direction based on the business case: https://jacobin.com/2024/01/can-humanity-survive-ai#:~:text=Further%2C%20academics%20and,is%20pretty%20good.%E2%80%9D
This isn't exactly the same as self-interest, though. I think a better analogy might be human domestication of animals for agriculture. It's not in the self-interest of a factory-farmed chicken to be on a factory farm, but humans have power over which animals exist, so we'll make sure there are lots of animals who serve our interests. AI systems will be selected for to the extent that they serve the interests of the people making and buying them.
RE international development: competition between states undercuts arguments for domestic safety regulations/practices. This is exacerbated by the belief that international rivals will behave less safely/responsibly, but you don't actually need to believe that to justify cutting corners domestically. If China or Russia built an AGI that was totally safe in the sense of being aligned with its creators' interests, that would still be seen as a big threat by the US government.
If you think that building AGI is extremely dangerous no matter who does it, then having more well-resourced players in the space increases the overall risk.
People can and should read whoever and whatever they want! But who a conference chooses to platform/invite reflects on the values of the conference organizers, and any funders and communities adjacent to that conference.
Ultimately, I think that almost all of us would agree that it would be bad for a group we're associated with to platform/invite open Nazis. I.e., almost no one is an absolutist on this issue. If you agree, then you're not in principle opposed to excluding people based on the content of their beliefs, so the question just becomes: where do you draw the line? (This is not a claim that anyone at Manifest actually qualifies as an open Nazi; it's a reductio to illustrate the point.)
Answering this question requires looking at the actual specifics: what views do people hold? Were those views legible to the event organizers? I fear that a lot of the discourse is getting bogged down in euphemism, abstraction, and appeals to "truth-seeking," when the debate is actually about what kinds of people and worldviews we give status to, and what effects that has on related communities.
If you think that EA-adjacent orgs/venues should platform open Nazis as long as they use similar jargon, then I simply disagree with you, but at least you're being consistent.
My mistake on the Guardian US distinction, but calling it a "small newspaper" is wildly off base, and for anyone encountering the piece on social media, the distinction is not legible.
Candidly, I think you're taking this topic too personally to reason clearly. I think any reasonable person evaluating the online discussion surrounding Manifest would see it as "controversial." Even if you completely excluded the Guardian article, this post, Austin's, and the deluge of comments would be enough to show that.
It's also no longer feeling like a productive conversation, and it distracts from the object-level questions.
Thanks Camille! Glad you found it useful.