How many EAs are vegan/vegetarian? Based on the 2022 ACX survey, and assuming my calculations are correct, people who identify as EA are about 40% vegan/vegetarian, and about 70% veg-leaning (i.e., vegan, vegetarian, or trying to eat less meat and/or offsetting meat-eating for moral reasons). For comparison, about 8% of non-EA ACX readers are vegan/vegetarian, and about 30% of non-EA ACX readers are veg-leaning. (In the 2019 EA Survey, 46% of respondents reported being vegan or vegetarian.)
(That's conditioning on identifying as an LW rationalist, since anecdotally I think being vegan/vegetarian is somewhat less common among Bay Area EAs, and the ACX sample is likely to skew pretty heavily rationalist, but the results are not that different if you don't condition. Take with a grain of salt in general as there are likely strong selection effects in the ACX survey data.)
Here's what I usually try when I want to get the full text of an academic paper (this now also exists as a slightly expanded blog post; a small script sketching the search URLs follows the list):
Search Sci-Hub. Give it the DOI (e.g. https://doi.org/...) and then, if that doesn't work, give it a link to the paper's page at an academic journal (e.g. https://www.sciencedirect.com/science...).
Search Google Scholar. I can often just search the paper's name, and if I find it, there may be a link to the full paper (HTML or PDF) on the right of the search result. The linked paper is sometimes not the exact version of the paper I am after -- for example, it may be a manuscript version instead of the accepted journal version -- but in my experience this is usually fine.
Search the web for "name of paper in quotes" filetype:pdf. If that fails, search for "name of paper in quotes" and look at a few of the results if they seem promising. (Again, I may find a different version of the paper than the one I was looking for, which is usually but not always fine.)
Check the paper's authors' personal websites for the paper. Many researchers keep an up-to-date list of their papers with links to full versions.
Email an author to politely ask for a copy. Researchers spend a lot of time on their research and are usually happy to learn that somebody is eager to read it.
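To make those steps concrete, here's a minimal Python sketch (mine, not from the original post; the title and DOI are placeholders) that builds the Google Scholar, filetype:pdf and DOI-resolver URLs from a paper's title and DOI, which you can then open in a browser or paste into Sci-Hub:

```python
from typing import Optional
from urllib.parse import quote_plus


def paper_search_urls(title: str, doi: Optional[str] = None) -> dict:
    """Build the search URLs described above for locating a paper's full text."""
    quoted_title = f'"{title}"'
    urls = {
        # Google Scholar search for the paper's name.
        "scholar": "https://scholar.google.com/scholar?q=" + quote_plus(title),
        # Web search for the quoted title, restricted to PDFs.
        "pdf_search": "https://www.google.com/search?q="
        + quote_plus(f"{quoted_title} filetype:pdf"),
        # The same quoted-title search without the PDF filter, as a fallback.
        "web_search": "https://www.google.com/search?q=" + quote_plus(quoted_title),
    }
    if doi:
        # The DOI resolver link, which can also be pasted into Sci-Hub.
        urls["doi"] = f"https://doi.org/{doi}"
    return urls


if __name__ == "__main__":
    # "10.1000/xyz123" is a placeholder DOI, not a real paper.
    for name, url in paper_search_urls("Example paper title", doi="10.1000/xyz123").items():
        print(f"{name}: {url}")
```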
I've been following David Thorstad's blog Ineffective Altruism. While I lean somewhat more "reform sceptic" than (I believe) the median visible Forum user, often disagree with Thorstad, and find the blog's name a little cheeky, I've been appreciating his critiques of EA, have learned a lot from them, and recommend reading the blog. To me, Thorstad seems like one of the better EA critics out there.
I wrote something about CICERO, Meta's new Diplomacy-playing AI. The summary:
CICERO is a new AI developed by Meta AI that achieves good performance at the board game Diplomacy. Diplomacy involves tactical and strategic reasoning as well as natural language communication: players must negotiate, cooperate and occasionally deceive in order to win.
CICERO comprises (1) a strategy model that decides which moves to make on the board and (2) a dialogue model that communicates with the other players (see the toy sketch after this summary).
CICERO is honest in the sense that the dialogue model, when it communicates, always tries to convey the strategy model's actual intent; however, it can omit information and change its mind mid-conversation, meaning it can behave deceptively or treacherously.
Some who are concerned with risks from advanced AI think the CICERO research project is unusually bad or risky.
It has at least three potentially concerning aspects:
(1) It may represent an advance in AIs' strategic and/or tactical capabilities.
(2) It may represent an advance in AIs' deception and/or persuasion capabilities.
(3) It may be illustrative of cultural issues in AI labs like Meta's.
My low-confidence take is that (1) and (2) are false because CICERO doesn't seem to contain any new insights that markedly advance either of these areas of study. Those capabilities are mostly the product of using reinforcement learning to master a game where tactics, strategy, deception and persuasion are useful, and I think there's nothing surprising or technologically novel about this.
I think, with low confidence, that (3) may be true, but perhaps no more true than of any other AI project of that scale.
Neural networks using reinforcement learning are always (?) trained in simulated worlds. Chess presents a very simple world; Diplomacy, with its negotiation phase, is a substantially more complex world. Scaling up AIs to transformative and/or general heights using the reinforcement learning paradigm may require more complex and/or detailed simulations.
Simulation could be a bottleneck in creating AGI because (1) an accurate enough simulation may already give you the answers you want, (2) an accurate and/or complex enough simulation may be AI-complete, and/or (3) such a simulation may be extremely costly.
Simulation might also not be a bottleneck because, following Ajeya Cotra's bio-anchors report, (1) we may get a lot of mileage out of simpler simulated worlds, (2) environments can contain or present problems that are easy to generate and simulate but hard to solve, (3) we may be able to automate simulation, and/or (4) people will likely be willing to spend a lot of money on simulation in the future, if that leads to AGI.
CICERO does not seem like an example of a more complex or detailed simulation, since instances of CICERO didn't actually communicate with one another during self-play. (Generating messages was apparently too computationally expensive.)
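As an aside, here's a toy Python sketch (entirely hypothetical; the class names and the example move are mine, not Meta's) of the two-component structure summarised above: a strategy model picks moves, and a dialogue model writes messages conditioned on the strategy model's current intent.

```python
from dataclasses import dataclass


@dataclass
class Intent:
    """The moves the agent currently plans to play this turn."""
    moves: tuple


class StrategyModel:
    """Stand-in for the component that picks moves given the board state."""

    def plan(self, board_state: dict) -> Intent:
        # A real system would run planning and/or a learned policy here.
        return Intent(moves=("A PAR - BUR",))


class DialogueModel:
    """Stand-in for the component that writes messages conditioned on intents."""

    def message(self, intent: Intent, recipient: str) -> str:
        # Conditioning each message on the current intent is what makes the
        # dialogue "honest" in the sense above; the intent itself can still
        # change later in the conversation.
        return f"To {recipient}: this turn I'm planning {', '.join(intent.moves)}."


# One turn of the control flow: plan first, then talk about the plan.
strategy, dialogue = StrategyModel(), DialogueModel()
intent = strategy.plan(board_state={})
print(dialogue.message(intent, recipient="England"))
```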
The post is written in a personal capacity and doesn't necessarily reflect the views of my employer (Rethink Priorities).
commons (plural noun, treated as singular): land or resources belonging to or affecting the whole of a community
The reputation of effective altruism is a commons. Each effective altruist can benefit from and be harmed by it (it can support or impede one's efforts to help others), and each effective altruist is capable of improving and damaging it.
I don't know whether actions that may cause substantial harm to a commons should be decided upon collectively. I don't know whether a community can come up with rules and guidelines governing them. But I do think that, at minimum, in the absence of such rules and guidelines, one should inform the community when planning a possibly commons-harming action, so that the community can at least critique the plan.
I think purchasing Wytham Abbey (which may have made sense, even factoring in the reputational effects -- I'm not sure) was a possibly commons-harming action, and this sort of action should probably be announced before it’s carried out in future.
A while ago I wrote a post with some thoughts on "EA for dumb people" discussions. The summary:
I think:
Intelligence is real, to a large degree determined by genes, and an important driver (though not the only one) of how much good one can do.
That means some people are by nature better positioned to do good. This is unfair, but it is what it is.
Somewhere there’s a trade-off between getting more people into a community and keeping the average level of ability in the community high; in other words, a trade-off to do with selectivity. The optimal solution is neither to allow no one in nor to allow everyone in, but somewhere in between.
Being welcoming and accommodating can allow you to get more impact with a more permissive threshold, but you still need to set the threshold somewhere.
I think effective altruism today is far away from hitting any diminishing returns on new recruits.
Ultimately what matters for the effective altruist community is that good is done, not who exactly does it.
The optimal solution is neither to allow no one in nor to allow everyone in, but somewhere in between.
I feel somewhat icky about the framing of "allowing people into EA". I celebrate everyone who shares the value of improving the lives of others, and who wants to do this most effectively. I don't like the idea that some people will not be allowed to be part of this community, especially since EA is currently the only community like it. I see the tradeoff more in who we're advertising towards and what type of activities we're focussing on as a community, e.g. things that better reflect what is most useful, like cultivating intellectual rigor and effective execution of useful projects.
So I think "(not) allowing X in" was not particularly well worded; what I meant was something like "making choices that cause X (not) to join". So that includes stuff like this:
I see the tradeoff more in who we're advertising towards and what type of activities we're focussing on as a community, e.g. things that better reflect what is most useful, like cultivating intellectual rigor and effective execution of useful projects.
And to be clear, I'm talking about EA as a community / shared project. I think it's perfectly possible and fine to have an EA mindset / do good by EA standards without being a member of the community.
That said, I do think there are some rare situations where you would not allow some people to be part of the community, e.g. I don't think Gleb Tsipursky should be a member today.
I wrote a post about Kantian moral philosophy and (human) extinction risk. Summary:
The deontologist in me thinks human extinction would be very bad for three reasons: