I lead the DeepMind mechanistic interpretability team
This seems fine to me - I expect that attending this is not a large fraction of most attendees' impact on EA, and that some who didn't want to be named would not have come if they needed to be on a public list, so barring such people seems silly (I expect there are some people who would tolerate being named as the cost of coming too, of course). I would be happy to find some way to incentivise people being named.
And really, I don't think it's that important that a list of attendees be published. What do you see as the value here?
1 is very true. 2 I agree with, apart from the word "main": it seems hard to label any factor as "the main" thing, and there's a bunch of complex reasoning about counterfactuals - eg if GDM stopped work that wouldn't stop Meta, so is GDM working on capabilities actually the main thing?
I'm pretty unconvinced that not sharing results with frontier labs is tenable - leaving aside that these labs are often the best places to do certain kinds of safety work, if our work is to matter, we need the labs to use it! And you often get valuable feedback on the work by seeing it actually used in production. Having a bunch of safety people who work in secret and then unveil their safety plan at the last minute seems very unlikely to work to me.
I personally think that "does this advance capabilities" is the wrong question to ask, and instead you should ask "how much does this advance capabilities relative to safety". Safer models are just more useful, and more profitable, a lot of the time! Eg I care a lot about avoiding deception. But honest models are just generally more useful to users (beyond white lies, I guess), and I think it would be silly for no one to work on detecting or reducing deception. I think most good safety work will inherently advance capabilities in some sense, and this is a sign that it's actually doing something real. I struggle to think of any work I think is both useful and doesn't advance capabilities at all.
My argument is that barring them doesn't stop them from shaping EA, it just mildly inconveniences them, because much of the influence happens outside such conferences.
Which scandals do you believe would have been avoided with greater transparency, especially transparency of the form proposed here (listing the names of those involved, with no further info)? I can see an argument that, eg, people who have complaints about bad behaviour (eg Owen's, or SBF/Alameda's) should make them more transparently (though that has many downsides), but that's a very different kind of transparency.