my name is jay. i’m a relapsing ex-philosopher specialising in philosophy of mind & cognitive/evolutionary psychology.
i'm always interested in opportunities (courses, internships, work) at the intersection of philosophy & AI. this includes work on AI ethics & governance.
Ah, I think I see where you're coming from. Of your points, I find #4 the most crucial. Would it be too egregious to summarise this notion as: (i) all of these capabilities are super useful & (ii) consciousness will [almost if not actually] "come for free" once these capabilities are sufficiently implemented in machines?
What claim is being made here?
Re: track record, I'm a coauthor on a position paper that we've been gradually rolling out to reviewers who are well-established in this topic.
Finally, please find information about the aims of the survey in the comment below & at this webpage.
Inspired by the PhilPapers survey, we are conducting a survey on experts’ beliefs about key topics pertaining to AI consciousness & moral status. These include:
🧠 Consciousness/sentience
⚖️ Moral status/moral agency
💥 Suffering risk ("S-risk") related to AI consciousness (e.g. AI suffering)
⚠️ Existential risk ("X-risk") related to AI consciousness (e.g. resource competition with conscious AI)
Such a survey promises to enrich our understanding of key safety risks related to conscious AI in several ways.
📊 Most importantly, the results of this survey will provide a general picture of experts’ views about the probability, promises, & perils of AI consciousness.
⚔️ Analysing the types of answers given by respondents might help to identify fault lines between industry, academia, & policy.
📈 Repeating the survey on an annual basis can assist in monitoring trends (e.g. updates in belief in response to technological advances/breakthroughs, differences in attitudes between industry & academia, emergent policy levers, etc.).
Hey! I'm not sure I see the prima facie case for #1. What makes you think that building non-conscious AI would be more resource-intensive/expensive than building conscious AI? Current AIs are most likely non-conscious.
As for #2, I have heard such arguments before in other contexts (relating to the meat industry), but I found them preposterous on their face.
Do you think that consciousness will come for free? It seems to me like a very complex phenomenon that would be hard to engineer accidentally. On top of this, the more permissive your view of consciousness (veering towards panpsychism), the less ethically important consciousness becomes (since rocks & electrons would then have moral standing too). So if consciousness is to be a ground of moral status, it needs to be somewhat rare.