Camille

Group Organizer at EA ENS Paris, @ Managing Tense Disagreements
167 karma · Joined · Pursuing other degree/diploma · Working (0-5 years) · 94110 Arcueil, France
www.effectivedisagreement.org

Bio

Participation
3

Currently building a workshop that aims to teach methods for managing strong disagreements (including to non-EA people). Also doing community building.

Background in cognitive science.

Interested in cyborgism and AIS via debate.

https://typhoon-salesman-018.notion.site/Date-me-doc-be69be79fb2c42ed8cd4d939b78a6869?pvs=4

How others can help me

I often get tremendous help from people who know how to program and are enthusiastic about helping out over an evening.

Comments
18

I would definitely understand if you felt that giving to areas outside of short-term, human-centered philanthropy is riskier or unrelated to your brand image.

I'm wondering, however, under what conditions you would engage in animal philanthropy, e.g. the Open Wings Alliance? In a parallel world where this is something you regularly do, what happened that led you to do it?

Similarly, what would be needed for you to engage in messaging about more abstract cause areas, such as AI risk/ethics or biosecurity (if you feel convinced by any)?

My comment would be that promoting political ideologies (or connecting to them) usually sounds to me like bottom-lining about the nature of reality. I think that bundling concepts under the tag "socialism" or "capitalism" makes it hard to sift through them and find the occasional diamond (see this). It's hard to "check" whether socialism works, because the term refers to too many things at the same time.

Let's suppose the government spends money, not on subsidies for national activities X or Y, but on interventions X or Y in the global south. Is this socialism? I don't care. The real questions are: does it work? What are the costs? What are the benefits? How does it compare with my considered ethical beliefs?

Some people would be happy to denounce legislation on frontier AI models as excessive governmental regulation, and thus socialism. But this is not important. What is important is: does it work? Does it help reduce X-risks? What do superforecasters say?

I'm not interested in evaluating the general tendency to act like a socialist, but specific interventions, no matter which tribe identifies with them. It doesn't seem like healthy thinking to me to bundle interventions into packages, call them "socialist", and have the entire package be considered either as working or not working, worth trying or not trying. I'll be very happy to have one kind of systemic change, such as a massive governmental subsidy for a medical system, in one country, and zero subsidies in another, if the end results are equally counterfactually optimal with respect to my set of moral credences.

In contrast, charter cities experimenting with various interventions, or analyses of the data resulting from the application of distinct policies, sound like more promising ideas to me, and I couldn't care less whether the end intervention is socialist, capitalist, or whatever. For a more zoomed-out alternative, something like Reasoned Politics sounds less confusing to me.

Caveat: conflict of interest

I agree. However, I also think that doing more surveys does not prevent the failure mode where EAs "doing comms" means doing more surveys rather than actual interventions to align general opinion with more rational takes on this particular topic. Shyness and low socio-emotional skills among leaders seem commonplace in EA, far more so than in the rest of the world, to the point where the best interventions targeting communication skills seem neglected to me.

Skills in communications, and funds to pay skilled individuals responsible for communications in any particular org, are, imho, generally lacking. I have eavesdropped on a certain number of (non-sensitive) meetings of a small AIS org, and the general level of knowledge about how to convey any given message (especially outside of EA, and especially to an opponent) is, in my opinion, insufficient, despite good knowledge of the surveys and their results. People in this org mostly generated their own ideas, judged them using their intuition, and executed them, rather than using established knowledge or empirical expertise to pick the best ideas. Most of the people in the process are AIS researchers with a background in CS, rather than people with backgrounds in both AIS and communications who are also excellent communicators (to a non-EA audience). One person I met openly shared their concern about not having enough funding to pay someone responsible for PR and comms, as well as growing tired of managing something they have no background in. Surveys didn't really help with this bottleneck.

My fear is that there is not enough money, and that most people don't care enough because they trust their intuitions too much / are afraid to actually remedy this lack of skills and would rather do surveys (on my side, I definitely feel fear and worry about talking to journalists or carefully balancing epistemics while not offending common sense).

My only not-so-real data point is this (compare karma on LW vs the EA Forum for a better sense). In a world where people saw communications as a technical problem, I would have expected this post to have more success. In short, I'd bet that most communications-skills-related interventions/hires are usually considered with reluctance.

I do acknowledge that surveys could be an even lower-hanging fruit, of course. But I think they should not distract us from improving skills per se.

Someone working on the topic once told me they had a discussion with politicians, who told them something along these lines: "If I successfully prevent a pandemic, I will never get credit for it, because it will not have happened. If it happens, even slightly, I will get blamed for it. Either way, it won't lead anyone to perceive me well, so I have other priorities."

Call me obsessed, but: Street Epistemology, Deep Canvassing, Smart Politics. These are ways to talk to people who are hostile and still turn the conversation into an object-level disagreement, and you can get fast training for free (see my post "Effectively Handling Disagreements"; conflict of interest made explicit). Notoriously relied upon by constructive atheists, by the way.

I do agree, however, that polite critiques should be boosted by default, e.g. with a karma boost on the forum, and doing this might be more effective than what I just stated above.

I have tears welling up in my eyes. The video added a touch of liveliness that moved me more than the excerpt you shared in text. Thank you very much!

Wonderful post! I think this is a good exemplar of how predation-related RWAS thought should be presented, and I'm incredibly glad you delved into the conservationists' worldview.

There is still a pending question in my mind, however, that has to do with the "One Health" perspective that healthy ecosystems benefit human health. I guess there are some interventions that actually maximize health benefits while minimizing animal suffering (we know what those are in animal farming, namely not doing it), and if anything, this could favor specific rewilding projects while ruling out those that are only motivated by a conservationist aesthetic.

Thank you for this!
I'm not an expert, but I have read enough argumentation theory and psychology of reasoning in the past, so I want to comment on your pitch and explain what I think makes it work.

Your argument is well constructed in that it starts with evidence ("reward hacking"), proceeds to explain how we get from the evidence to the claim (what one argumentation theory calls the warrant), then clarifies the claim. This is rare. Most of the time, people make the claim, give the evidence, and either forget to explain how we get from here to there or fall into a frantic misunderstanding when addressing this point. You then end by addressing a common objection ("We'll stop it before it kills us").

Here's the passage where you explain the warrant:

If it's really smart, it will realize that we don't actually want this. We don't want to turn all of our electronic devices into paperclips. But it's not going to do what you wanted it to do, it will do what you programmed it with.

This is called (among other names) an argument by dissociation, and it's good (actually, it's the only proper way to explain a warrant that I know of). I've seen this step phrased in several ways in the past, but this particular chaining (the AI will understand you want X; the AI will not do what you want, because it does what it's been programmed with, not what it understands you to want; these two are distinct) articulates it far better than the other instances I've seen. It forced me to make the crucial fork in my mental models between "what it's programmed for" and "what you want". It also does away with the "But the AI will understand what I really mean" objection.

I think that part of your argument's strength comes from you seemingly (from what I can guess) adopting a collaborative posture when making it. You insert elements very smoothly, detail vivid examples, and I imagine you make sure your tone and body language do not presume a lack of intelligence or knowledge on your interlocutor's part (something left unchecked too often in EA/world interactions).

Some research strongly suggests that interpersonal posture is of utmost importance when introducing new ideas, and I think this explains a lot of why people would rather be convinced by you than by someone else.

We should prepare for a hypothetical generalized EA-bashing.

As time goes by, we should expect EA to be the target of more and more criticism. More than that, we should probably also plan for spans of time during which EA will be, by default, considered an evil thing. This scenario does not seem far-fetched to me, as it already seems to be concretizing in France.

We need a plan, it's not costly to build one, and I think it is plausible enough that EA's reputation will keep degrading over the next three years for time spent on this in local groups to have net-positive expected value.

1-Cultivate resilience

I think the best thing we can do is to never, never abandon the principles of charitability, respect, and rationality that inhabit the EA space. Some people will try to push us into anger, into saying things that are unwarranted. But we should never commit this crime. Yann LeCun is a good example of how someone can end up exploiting (voluntarily or not) one's anger: on Twitter, he's borderline violent, while in real life, he retracts and discusses calmly. This could manifest as interlocutors, resentful of the near-violence he displayed online, turning violent in real life in front of his calm version. This would be disastrous.

On all sides and with all interlocutors, even the most abhorrent ones, we should strive to be calm and respectful. I think that Eliezer Yudkowsky's exchanges with Yann LeCun are, sadly, an example of the opposite happening. Maybe Eliezer sounds like a calm person to you, but I can very easily empathize with LeCun on why his replies sound arrogant and dismissive. You cannot say the same about someone like, e.g., Anthony Magnabosco, who is a better model to strive towards in this setting (I'm not talking about the method, but the general tone and gentleness).

2-Do not lose the purpose

Something worth noting is that, as EA becomes the center of many critiques, some of them might have a point. We should always keep a clear eye and remind ourselves that what we're trying to do here is to have true beliefs and act morally. If someone states "A", you should not simply state "not A". You should instead ask yourself: "What kind of evidence is more plausible if A is true than if A is false? Does such evidence exist?"

Ideally, you'd want a third party's observation to be:

"Wow, this person seems mad and angry, yet the EA in front of them is so nice, constructive, respectful, and empathetic. Maybe EAs are wrong, but you have to admit they're outstanding conversation partners."

3-Know when to answer

I think the biggest blind spot right now is that no one has a clear model of when to answer. Normally, we shouldn't be going only with our gut instincts about this. There is surely a certain amount of data on when, what, and how to answer false statements. It is also important to know the conditions under which not answering is clearly a dominated move. In some circumstances, someone can avoid answering because the point is unimportant and inconsequential, and answering would basically just pollute the debate (say, a flat-earther disagrees with an astrophysicist; there is something more important to be done). But sometimes, someone avoids answering because the point is completely right (say, a flat-earther who has just been debunked by an astrophysicist, and knows they'll lose).

I think EAs have no idea what the public perceives as each non-answer goes by. Does the public think that EA is admitting to being wrong and pretending to ignore it, or that the critique is ridiculous? No one knows, yet we should make an effort to find out.

4-Know how to answer

If someone is angry, we should listen to them and help them calm down.

One of the biggest mental blocks I encounter when I talk about answering critiques is the presumption that it has to be a rebuttal, or even a four-page-long debunking published in the Times. It doesn't need to be. There exist several evidence-based techniques that are quite apt at managing tense situations, and none of them involve active counterargumentation or publishing in mass media. They even actively recommend against that. It can sometimes be as simple as sending a DM and offering to meet, or checking that you have understood the other person well. I think a lot more people should consider aligning a large share of their interactions with these models.

5-One failure and we're done

I think it is acceptable to assign some probability to the possibility that, if EA is generally perceived negatively in one powerful country, that alone is enough to hamper all efforts on EA-related topics, specifically because they require so much coordination. Currently, France is headed towards becoming an anti-safety hub. Many people in the US might think this is inconsequential, but remember that it takes no more than one country refusing to slow down AGI progress to restart the race on a global scale, and no more than one powerful country refusing to ban gain-of-function research to give foreign countries reasons not to ditch their labs. If the world were to meet to sign a convention on AI Safety, I currently expect France to refuse to sign it, or to negotiate over it until it's useless, or even to consider it a hostile and unfair proposal.

More than that, since tense discussions get hugely more media coverage than calm ones, I suspect it wouldn't take more than a 1:20 ratio of bad discussions to good ones to depict EA in a very negative light, possibly even less.
 

6-Summary: See yourself as a peace moderator

The coming times might turn out to be dark. Please do not let yourself merely counter-argue on social media. Engage amicably with critics, genuinely discuss whether their hypothesis is right and how to test it, and build friendly, trusting bonds with them.

Thank you for this concise report!
I have two comments that I think could spur further thought:

1-This is probably outside your scope, but I think that Deep Canvassing somehow relies on a similar effect, notably sharing a personal (hence identifiable) experience and building rapport. Given the attention it has received and its strong supporting evidence, I would be curious to know whether you have any ideas about using Deep Canvassing for non-humans.

2-I think there is a broader question in terms of epistemic virtue: is it really ethical to rely on an "old trick" to convince people? It could also be that correcting for the epistemic vice of the identifiable victim effect actually yields an even better result (see this post).
 
