Hmm, I think perhaps I have different takes on which basic mechanisms make sense here?
Here's a scattershot of background takes:
... and then given those, my position is that if you want it to happen, the right step is less like "try to create a consensus that it should happen" and more like "try to find/make an alliance of people who want it, and then make sure there's someone taking responsibility for the specific unblocking steps". (I guess this view isn't really specific to the investigation; it's more my generic take on how to make things happen.)
Honestly, my view of how important it is that the whole project happen will also be somewhat mediated by whether it can find a decently strong lead and attract some moderate amount of funding, since these would be indicative of "people really want answers", and I think the whole project is more valuable if that demand exists.
It could be better for the world, and you might care about that.
It could be that you expect enough other people will talk to them that it's good for them to hear your side of the story too.
It could be that you expect it would be bad for your reputation to refuse to talk to them (or to give details which are at odds with the picture they're building from talking to other people).
I think an eventual AI-driven ecosystem is likely desirable. (Although possibly the natural conception of "agent" will be more like supersystems which include both humans and AI systems, at least for a period.)
But my alarm at nonviolent takeover persists, for a couple of reasons:
Thanks, this felt clarifying (and an important general point).
I think I'm now at "Well, I'd maybe rather share my information with an investigator who would take responsibility for working out what's worth sharing publicly and what's extraneous detail; but absent that, speaking seems preferable to not speaking. So I'll wait a little to see whether the momentum in this thread turns into anything, but if it's looking like not, I'll probably just share something."
Ideally, the questions this investigation would seek to answer would be laid out and published ahead of time.
Not sure I buy this, on principle -- surely the investigation should have remit to add questions as it goes if they're warranted by information it's turned up? Maybe if the questions for this purpose are broad principles rather than specific factual ones, it would make sense to me.
Would also be good to pre-publish the principles that would determine what information gets redacted or kept confidential in public communication of the findings.
This checks out to me.
As things stand, my low-conviction take is that [headhunting for investigators] would be a reasonable thing for the new non-OP-connected EV board members to take on, or perhaps the community health team.
Have you directly asked these people if they're interested (in the headhunting task)? It's sort of a lot to just put something like this on someone's plate (and it doesn't feel to me like a-thing-they've-implicitly-signed-up-for-by-taking-their-role).
In general my instinct would be more like "work out who feels motivated to have an investigation happen, and then get one (or more) of them to take responsibility for the headhunting".
My read is that you can apply the framework two different ways:
Happy to share some thoughts (and not thereby signalling that I plan not to say more about the object-level):
In this case:
I agree with this -- and also agree with it for various non-humanoid AI systems.
However, I see this as less about rights for systems that may at some point exist, and more about our responsibilities as the creators of those systems.
Not entirely analogous, but: suppose we had a large crèche of babies who, we had been told by an oracle, would be extremely influential in the world. I think it would be appropriate for us to care more than normal about their upbringing (especially if, for the sake of the example, we assume that upbringing can meaningfully affect character).