For more on this line of argument, I recommend one of my favorite articles on ethical vegetarianism: Alastair Norcross's "Puppies, Pigs, and People".
I enjoyed the new intro article, especially the focus on solutions. Some nitpicks:
I'm not sure what all of the participants' motivations were for joining (I should've gathered that info). As background, we mostly publicized the intensive to members of MIT EA interested in AI safety and to members of Harvard EA. Here are, I think, the main motivations I noticed:
As an alternative to "Famine, Affluence, and Morality," there is Peter Unger's Living High and Letting Die, of which Chapter 2 is particularly relevant. It's more philosophical (which could be a bad thing) and much more comprehensive than Singer's article.
This is the first of our cases:
The Vintage Sedan. Not truly rich, your one luxury in life is a vintage Mercedes sedan that, with much time, attention and money, you've restored to mint condition. In particular, you're pleased by the auto's fine leather seating. One day, you stop at the intersection of two small country roads, both lightly travelled. Hearing a voice screaming for help, you get out and see a man who's wounded and covered with a lot of his blood. Assuring you that his wound's confined to one of his legs, the man also informs you that he was a medical student for two full years. And, despite his expulsion for cheating on his second year final exams, which explains his indigent status since, he's knowledgeably tied his shirt near the wound so as to stop the flow. So, there's no urgent danger of losing his life, you're informed, but there's great danger of losing his limb. This can be prevented, however, if you drive him to a rural hospital fifty miles away. “How did the wound occur?” you ask. An avid bird‐watcher, he admits that he trespassed on a nearby field and, in carelessly leaving, cut himself on rusty barbed wire. Now, if you'd aid this trespasser, you must lay him across your fine back seat. But, then, your fine upholstery will be soaked through with blood, and restoring the car will cost over five thousand dollars. So, you drive away. Picked up the next day by another driver, he survives but loses the wounded leg.
Except for your behavior, the example's as realistic as it's simple.
Even including the specification of your behavior, our other case is pretty realistic and extremely simple; for convenience, I'll again display it:
The Envelope. In your mailbox, there's something from (the U.S. Committee for) UNICEF. After reading it through, you correctly believe that, unless you soon send in a check for $100, then, instead of each living many more years, over thirty more children will die soon. But, you throw the material in your trash basket, including the convenient return envelope provided, you send nothing, and, instead of living many years, over thirty more children soon die than would have had you sent in the requested $100.
Taken together, these contrast cases will promote the chapter's primary puzzle.
I have not read much of Tetlock's research, so I could be mistaken, but isn't the evidence for Tetlock-style forecasting only for (at best) short- to medium-term forecasts? Over that timescale, I would've expected forecasting to be very useful for non-EA actors, so the central puzzle remains. Indeed, if there is no evidence for long-term forecasting, then wouldn't one expect non-EA actors (who place less importance on the long term) to be at least as likely as EAs to use this style of forecasting?
Of course, it would be hard to gather evidence for forecasting working well over longer (say, 10+ year) horizons, so perhaps I'm expecting too much evidence. But it's not clear to me that we have strong theoretical reasons to think that this style of forecasting would work particularly well, given how "cloud-like" predicting events over long time horizons is, and given that further extrapolation might leave more room for bias.