I felt that I absorbed something helpful from this conversation that I hope will make me better at introducing EA ideas. Is there a list of other examples of especially effective EA communication that would-be evangelists could learn from? I'm especially interested in conversations in which someone experienced with EA ideas discusses them with someone newer, as I feel that this stage can be especially tricky and important.
For example, here are two other conversations that come to mind from which I felt I absorbed something helpful about introducing EA ideas:
If a list like this doesn't exist, I want to make it exist - open to suggestions on the best way to do that. (E.g. should I post this as a question or top-level post?)
Thanks for this recommendation! It caused me to listen to this episode, which I otherwise probably wouldn't have done.
I agree with Ben that the outsized impact of this episode may largely be due to how amenable Sam's audience is to EA ideas, but I also thought this was a fantastic conversation that introduces EA in a positive, balanced, and accurate way. I do feel I absorbed something from listening to Will here that will hopefully make me better at introducing EA ideas myself. I may also start recommending this episode as an introduction to EA for some people, though I don't think it will become my main go-to for most people.
I'm also glad to have listened to this conversation merely out of appreciation for its role in bringing so many people to the movement. :)
See my response to AlexHT for some of my overall thoughts. A couple other things that might be worth quickly sketching:
From my perspective, the real meat of the book was its two contentions: (1) that longtermist ideas, and particularly the idea that the future is of overwhelming importance, may in the future be used to justify atrocities, especially if these ideas become more widely accepted; and (2) that those concerned about existential risk should advocate that we decrease current levels of technology, perhaps to pre-industrial levels. I would have preferred that the book focus more on arguing for these contentions.
Questions for Phil (or others who broadly agree):
P.S. - If you're put off from checking out Phil's arguments because they're labeled a 'book' and books are long, don't be: it's a bit long for an article, but certainly no longer than many SSC posts, for example. That said, I'm also not endorsing the book's quality.
I upvoted Phil's post, despite agreeing with almost all of AlexHT's response to EdoArad above. This is because I want to encourage good-faith critiques, even those which I judge to contain serious flaws. And while some elements of Phil's book read to me more like attempts at mood affiliation than serious engagement with his interlocutors' views (e.g. 'look at these weird things that Nick Bostrom said once!'), on the whole I felt there was enough genuine engagement that I was glad Phil took the time to write up his concerns.
Two aspects of the book that I interpreted somewhat differently from Alex:
I agree with Alex that the book was not clear on these points (among others), and I attribute our different readings to that lack of clarity. I'd certainly be happy to hear Phil's take.
I have a couple of other thoughts that I will add in a separate comment.
In the technical information-theoretic sense, 'information' counts how many bits are required to convey a message. And bits describe proportional changes in the number of possibilities, not absolute changes. The first bit of information reduces 100 possibilities to 50, the second reduces 50 possibilities to 25, etc. So the bit that takes you from 100 possibilities to 50 is the same amount of information as the bit that takes you from 2 possibilities to 1.
And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10.
To take your example: if you were using two digits in base four to represent per-sixteenths, then each digit would contain 50% of the information (two bits each, reducing the space of possibilities by a factor of four). To take the example of per-thousandths: each of the three digits contains a third of the information (about 3.3 bits each, reducing the space of possibilities by a factor of 10).
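In case a worked example helps, here's a minimal sketch in Python of the arithmetic above (the `bits` helper is just something I'm defining for illustration):

```python
import math

def bits(possibilities_before, possibilities_after):
    """Information gained (in bits) by narrowing a space of possibilities."""
    return math.log2(possibilities_before / possibilities_after)

print(bits(100, 50))    # 1.0   -- halving 100 possibilities to 50 is one bit
print(bits(2, 1))       # 1.0   -- halving 2 possibilities to 1 is also one bit
print(bits(100, 10))    # ~3.32 -- a factor-of-10 reduction
print(bits(10, 1))      # ~3.32 -- same factor, same amount of information
print(bits(16, 4))      # 2.0   -- one base-4 digit: half of the 4 bits in a per-sixteenth
print(bits(1000, 100))  # ~3.32 -- one decimal digit: a third of the ~10 bits in a per-thousandth
```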
But upvoted for clearly expressing your disagreement. :)
The difference between carbon offsetting and meat offsetting is that carbon offsetting doesn't involve causing harms, while meat offsetting does.
Most people would consider it immoral to murder someone for reasons of personal convenience, even if you make up for it by donating to a 'murder offset', such as, let's say, a police department. MacAskill is saying that 'animal murder' offsetting is like this, because you are causing harm to animals, then attempting to 'make up for it' by helping other animals. Climate offsets are different because the offset prevents the harm from occurring in the first place.
Indeed, murder offsets would be okay from a purely consequentialist perspective. But this is not the trolley problem, for the reason that Telofy explains very well in his second paragraph above. Namely, the harmful act that you are tempted to commit is not required in order to achieve the good outcome.
Regarding your first paragraph: most people would consider it unethical to murder someone for reasons of personal convenience, even if you donated to a 'murder offset' organization such as, say, a police department. MacAskill is saying that 'animal murder' offsets are unethical in this same way: you are committing an immoral act - killing an animal - and then saving some other animals to 'make up for it'. Climate offsets are different because in that case the harm is never caused in the first place.
Regarding your last paragraph: This is a nice example, but it fails if your company might adjust how much food it buys in the future based on how much gets eaten. For example, if they consistently have a bunch of leftover chicken, they might try to save some money by purchasing less chicken next time. If this is possible, then there is a reason not to eat the free chicken.
Give a man a fish and it may rot in transit. Teach a man to fish and he may already have other, more practical skills. Give a man cash and he can buy whatever is most useful for himself and his family.
(The idea is to highlight key benefits of cash in a way that also maps plausibly onto the fishing example. I'm sure the wording and examples here could be improved; suggestions welcome!)