Quick takes

Jason · 1d
Some thoughts on future Debate Week topics: I would prefer that the next topic move away from financial allocation between cause areas, so maybe something like:

1. There are 100 young, smart, flexible recent university graduates who are open to ~any kind of work. What is the optimal allocation of those graduates between object-level work, meta work, earning to give, or something else?
2. Should EA move directionally toward being a more r-selected (higher growth, less investment in each offspring) or K-selected movement,[1] or stay roughly where it is?

Two advantages of these sorts of topics, vis-a-vis a financial cause-prio debate:

A. I think these kinds of issues are generally more likely to be action-relevant for Forum users. Even if I won a billion-dollar lottery prize and established a trust to give $50MM to effective animal welfare charities, the net effect on cause prio might be far less than $50MM, because OP might reduce its spend by almost that amount. While there are niches in which this effect is absent or less pronounced, structuring a debate week with broad participation around them may be challenging.

B. These kinds of issues should be more accessible to those from a variety of cause perspectives. For various reasons, the last Debate Week was set up to have a predominant focus on a single cause area (AW). Cf. this discussion. That's not a bad thing, but I don't think all or most Weeks should be set up like that. Other questions may not have this effect -- for instance, I expect that the answers to questions 1 & 2 would differ substantially due to cause prio. So there's value in authoring discussion of these questions from a GH perspective, from an AW perspective, from a GCR perspective, and so on.[2]

More generally, it might be helpful to plan a Debate Season well in advance -- a "season" of (e.g.) one week each on a topic that is either specifically within a major cause area or for which it is expected to predominate, plus one or more cause prio questions, plus one or more cross-cutting questions that are not explicitly cause prio questions.

1. ^ Someone who has a better background than self-taught AP Biology twenty years ago can probably come up with a better metaphor.
2. ^ From a voting perspective, this could be facilitated by optionally allowing the voter to color their dot if their answer was based primarily on a consideration of a specific cause area, allowing a visual representation of how cause prio is a crux on these issues.
I think people working on animal welfare have more incentive to post during debate week than people working on global health. The animal space feels (when you are in it) very funding constrained, especially compared to the global health and development space (and, I expect, gets a higher % of its funding from EA / EA-adjacent sources). So along comes debate week, and all the animal folk are very motivated to post, make their case, and hopefully shift a few $. This could somewhat bias the balance of the debate. (Of course, the fact that one side of the debate feels it needs funding so much more is in itself relevant to the debate.)
We're thinking of moving the Forum digest, and probably eventually the EA Newsletter, to Substack. We're at least planning to try this out, hopefully starting with the next digest issue on the 23rd. Here's an internal doc with our reasoning behind this (not tailored for public consumption, but you should be able to follow the thread). I'm interested in any takes people have on this. I'm not super familiar with Substack from an author perspective, so if you have any crucial considerations about how the platform works, that would be very helpful. General takes and agree/disagree votes (on the decision to move the digest to Substack) are also appreciated.
@Toby Tremlett🔹 @Will Howard🔹 Where can I see the debate week diagram if I want to look back at it?
A thought about AI x-risk discourse and the debate on how "Pascal's Mugging"-like AIXR concerns are, and where this causes confusion between those concerned and those sceptical.

I recognise a pattern where a sceptic will say "AI x-risk concerns are like Pascal's wager/are Pascalian and not valid" and then an x-risk advocate will say "But the probabilities aren't Pascalian. They're actually fairly large",[1] which usually devolves into a "These percentages come from nowhere!" "But Hinton/Bengio/Russell..." "Just useful idiots for regulatory capture..." discourse doom spiral.

I think a fundamental miscommunication here is that, while the sceptic is using/implying the term "Pascalian", they aren't concerned[2] with the probability of the risk being incredibly small but high impact; they're instead concerned about trying to take actions in the world - especially ones involving politics and power - on the basis of subjective beliefs alone. In the original wager, we don't need to know anything about the evidence record for a certain God existing or not: if we simply accept Pascal's framing and premises, then we end up with the belief that we ought to believe in God. Similarly, when this term comes up, AIXR sceptics are concerned about changing beliefs/behaviour/enacting laws based on arguments from reason alone that aren't clearly connected to an empirical track record. Focusing on which subjective credences are proportionate to act upon is not likely to be persuasive compared to providing the empirical goods, as it were.

1. ^ Let's say x>5% in the rest of the 21st century for the sake of argument
2. ^ Or at least it's not the only concern; perhaps the use of EV in this way is a crux, but I think it's a different one