Hi Bettsy,
I guess my approach was something like the following:
Outlook:
1. Treat it as an opportunity to make a connection with the person (not like you're trying to debate or "convert" them).
2. Be curious about their experience. This includes asking questions to get more to the heart of what they feel is wrong with EA, letting them voice it, and repeating back to them some of what you heard so they feel understood.
3. Give my own personal account, rather than "trying to represent EA". Hopefully this humanizes me, and EA by association, in their eyes.
4. Look out for misconceptions about EA, or negative gut feelings, that might be driving their concerns, and focus somewhat on reframing that conception.
Example:
One interaction went something like the following, with person (P) and myself (M):
P: Why are you guys here?
M: We're here to try to figure out how to improve the world!
P: Oh I've heard of effective altruism before. But like, what gives you the right to tell people how to live their lives?
M: (sensing from tone, making eye contact, and sincerely curious) Oh are you concerned that an empirical approach to these questions might not be the best way to help people?
P: Well, like, EA just tells people what to do right?
M: I guess I think of it more like we only have limited resources, and IF WE WANT to help people we should think carefully about the OPPORTUNITY and use evidence to try to help them.
P: Hmm (body language eases).
M: Yeah... Like, I think it's important to make sure when we are trying to improve the world that what we're doing is actually going to help.
P: Hmm.
M: Yeah... would you like a lollipop?
P: Uh nah I think I have to go. Thanks.
M: No worries. Nice to meet you.
P: Yeah you too.
The above isn't revelatory. But it seemed to me that the person's conception shifted from seeing EA as a kind of enemy entity towards seeing EAs as friendly people it's possible to make personal connections with, and who want to make the world better using evidence.
In another example, someone else came up, voted for AI, said something like "Oh I'm not really on board with this whole approach", and went to leave. We asked them something like "Oh really, what don't you like?" It turned out that they knew HEAPS about EA, and our curiosity about their points of disagreement led to a really fun discussion, to the point where they said "well it's nice to see you guys on campus" near the end, and we wanted to keep talking to them.
Hey, it's cool that this got written up. I've only read your summary, not the paper, and I'm not an expert. However, I think I'd argue against using this approach, and instead in favour of another: Upco seems to amount to just multiplying the odds ratios together, and I think you should use the geometric mean of odds instead when pooling credences (as you mentioned/asked about in your question 7). The reason is that Upco treats each opinion as a new piece of independent evidence, whereas the geometric mean of odds just takes the (correct, in a Bayesian sense) average of the opinions.
Here's one huge problem with the Upco method as you present it: suppose two people both think there's a 1/6 chance of rolling a six on a (fair) die. Pooling two identical opinions shouldn't move the estimate at all. But 1/6 corresponds to odds of 1:5, and Upco multiplies the two sets of odds to give 1:25 - clearly incorrect. The geometric mean of odds, on the other hand, gives sqrt((1*1)/(5*5)) = 1:5, as it should.
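If it helps, here's a quick Python sketch of the two pooling rules applied to that die example (the function names are my own, just for illustration, and "Upco-style" reflects my understanding of the method rather than the paper's exact definition):

```python
from math import prod

def upco_pool(odds_list):
    # Upco-style pooling, as I understand it: multiply the odds together,
    # treating every opinion as an independent piece of evidence.
    return prod(odds_list)

def geo_mean_pool(odds_list):
    # Geometric mean of odds: the n-th root of the product of n opinions.
    return prod(odds_list) ** (1 / len(odds_list))

# Two people who both give a fair die a 1/6 chance of landing six, i.e. odds of 1:5.
opinions = [1 / 5, 1 / 5]
print(upco_pool(opinions))      # 0.04 -> 1:25, overcounting the same evidence twice
print(geo_mean_pool(opinions))  # 0.2  -> 1:5, unchanged, as it should be
```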
You can also pool more than two credences, or weight others' opinions differently, using a geometric mean. For example, if I put an event at 1:1 odds, but someone whose opinion I trusted twice as much as my own put it at 8:1 odds, then I would effectively proceed as if there were three opinions (mine once and theirs twice), and take a cube root (one root per opinion):
cube root((1*8*8)/(1*1*1)) = 4
So we update our opinion to 4:1 odds.
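Here's the same kind of toy sketch for the weighted case (again my own illustrative function, not anything from the paper); giving someone a weight of 2 just amounts to counting their opinion twice:

```python
def weighted_geo_mean_pool(odds_list, weights):
    # Weighted geometric mean of odds: each opinion contributes in proportion
    # to its weight; the weights need not be integers.
    total = sum(weights)
    pooled = 1.0
    for odds, weight in zip(odds_list, weights):
        pooled *= odds ** (weight / total)
    return pooled

# My 1:1 opinion (weight 1) pooled with a trusted person's 8:1 opinion (weight 2).
print(weighted_geo_mean_pool([1.0, 8.0], [1, 2]))  # 4.0 -> 4:1 odds, as above
```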
This also solves the problem of Upco not being able to update on an opinion of 50/50 (i.e. 1:1 odds), which I think is a real problem - sometimes 1:1 is exactly the right credence to have (e.g. for a fair coin flip). If we wanted to combine 1:1 odds with 1:100,000 odds, the result should land somewhere in between. Upco gives ((1*1)/(1*100,000)) = 1:100,000, i.e. no update at all from the 1:1 opinion. The geometric mean gives sqrt((1*1)/(1*100,000)) ≈ 1:316, a far more reasonable combination of the two.
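Reusing the toy functions from the first sketch above, that 50/50 case looks like this:

```python
# Pooling 1:1 odds with 1:100,000 odds (using upco_pool and geo_mean_pool from above).
opinions = [1.0, 1 / 100_000]
print(upco_pool(opinions))      # 1e-05    -> still 1:100,000; the 1:1 opinion is ignored
print(geo_mean_pool(opinions))  # ~0.00316 -> roughly 1:316, somewhere in between
```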
In terms of your question 3, where people have credences of 0 or 1, I think some things we could do are: push them to use a ratio instead (e.g. ask whether, rather than "certain", they'd accept something like 1 in a billion); weight their opinions down a lot if they seem far too close to 0 or 1; or just discount them entirely if they insist on 0 or 1 (which is basically incoherent from a Bayesian perspective).