This is a special post for quick takes by VictorW. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Does anyone know of a low-hassle way to invoice for services such that a third-party charity receives the payment? It could well be an EA charity if that makes it easier. I'm hoping for something slightly more structured than "I'm not receiving any pay for my services, but I'm trusting you to donate X amount to this charity instead".

I've seen EA writing (particularly about AI safety) that goes something like:
I know X and Y thought leaders in AI safety, they're exceptionally smart people with opinion A, so even though I personally think opinion B is more defensible, I also think I should be updating my natural independent opinion in the direction of A, because they're way smarter and more knowledgeable than me.

I'm struggling to see how this update strategy makes sense. It seems to have merit when X and Y know/understand things that literally no other expert knows, but in all other scenarios that come to mind, it seems neutral at best and otherwise worse than totally disregarding the "thought leader status" of X and Y.

Am I missing something?

The reasoning is that knowledgeable people's belief in a certain view is evidence for that view.

This is a type of reasoning people use a lot in many different contexts. I think it's a valid and important type of reasoning (even though specific instances of it can of course be mistaken).

Some references:

https://plato.stanford.edu/entries/disagreement/#EquaWeigView

https://www.routledge.com/Why-Its-OK-Not-to-Think-for-Yourself/Matheson/p/book/9781032438252

https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty

What you describe in your first paragraph sounds to me like a good updating strategy, except I would say that you’re not updating your “natural independent opinion,” you’re updating your all-things-considered belief.

Related short posts I recommend—the first explains the distinction I’m pointing at, and the second shows how things can go wrong if people don’t track it:

I identify as an anti-credentialist in the sense that I believe ideas can (under ideal circumstances) be considered on merit alone, regardless of how unreliable or bad the source of the idea is. Isn't credentialism basically a form of ad hominem attack?

Isn't credentialism basically a form of ad hominem attack?

My take on this is inspired by the Fallacy Fork, which is to say that there are two ways in which we can understand "ad hominem" (among many other fallacies):

  • As a deductive / formal logic principle: an ad hominem argument is to say that because of some fact about the identity of the arguer, we can logically deduce their argument is invalid. This is obviously wrong, but credentialism isn't ad hominem in this sense, because it doesn't say that correctness (or incorrectness) logically follows from the characteristics of the speaker, just that they're correlated.
  • As an inductive / correlational principle: an ad hominem argument is one which uses characteristics of the speaker to make probabilistic guesses / inform a prior on the validity of their speech. But in this form I'd argue it just isn't fallacious: in terms of getting the right answer most often, it is in fact useful to incorporate your knowledge of someone's background into your best guess of whether their ideas are good (a toy numerical sketch follows this list).
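A toy numerical sketch of that inductive reading (my own illustrative numbers, not from any study): suppose that, on a given topic, claims made by people with relevant credentials turn out correct about $70\%$ of the time, versus about $30\%$ for claims from people without them. Then before you've examined an argument at all, $\Pr(\text{claim correct} \mid \text{credentialed source}) \approx 0.7$ is a reasonable prior to start from; actually engaging with the argument then updates that prior, so the credentials set a starting point rather than settling the question.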

There are still some reasons to avoid credentialism or ad hominem other than maximising the accuracy of your judgement about the claim you're considering:

  • You might have e.g. fairness intuitions that make you willing to compromise on accuracy in this case in order to promote some other virtue that you value,
  • You might think that people systematically either update too strongly on credentials, or update on them in an incorrect way, and you might therefore think you best contribute to overall healthy truth-seeking by prioritising the things other people miss because of this (this might even argue for being credential-hostile rather than merely credential-neutral),
  • idk probably other reasons I haven't thought of

The Fallacy Fork was an amazing read. Thanks for the pointer!

The point of credentialism is that the ideal circumstances for an individual to evaluate ideas don't exist very often. Medical practitioners aren't always right, and homeopaths or opinion bloggers aren't always wrong, but bearing in mind that I'm seldom well enough versed in the background literature to make my own mind up, trusting the person with solid credentials over the person with zero or quack credentials is likely to be the best heuristic in the absence of any solid information to the contrary.

And yes, of course sometimes it isn't, and sometimes the bar is completely arbitrary (the successful applicant will have some sort of degree from some sort of top 20 university), or the level of distinction irrelevant (his alma mater is more reputable than hers), and sometimes the credentials themselves are suspect.

One of the canonical EA books (can't remember which) suggests that if an individual stops consuming eggs (for example), almost all the time this will have zero impact, but there's some small probability that on some occasion it will have a significant impact. And that can make it worthwhile.

I found this reasonable at the time, but I'm now inclined to think that it's a poor generalization, where the expected impact still remains negligible in most scenarios. The main influence on my shift is thinking about how decisions are made within organizations, and how power-seeking approaches are vastly superior to voting in most areas of life once the system exceeds a threshold of complexity.

Anyone care to propose updates on this topic?

This position is commonly defended in consequentialist arguments for vegetarianism and veganism; see, e.g., Section 2 here, Section 2 here, and especially Day 2 here.  The argument usually goes something like: if you stop buying one person's worth of eggs, then in expectation, the industry will not produce something like one pound of eggs that it would've produced otherwise.  Even if you are not the tipping point that causes them to cut production, due to uncertainty you still have positive expected impact.  (I'm being a bit vague here, but I recommend reading at least one of the above readings -- especially the third one -- because they make the argument better than I can.)
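A minimal sketch of that threshold argument (my own toy model, with idealised assumptions): suppose a retailer only adjusts its orders in batches of $T$ eggs, so production falls by $T$ eggs only when cumulative demand drops past a threshold. If you have no information about where current demand sits relative to the next threshold, your one-egg reduction crosses it with probability roughly $1/T$, so your expected effect is about $(1/T) \times T = 1$ egg, even though you almost certainly aren't the tipping point yourself.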

In the case of animal product consumption, I'm confused what you mean by "the expected impact still remains negligible in most scenarios" -- are you referring to different situations?  I agree in principle that if the expected impact is tiny, then we don't have much reason on consequentialist grounds to avoid the behavior, but do you have a particular situation in mind?  Can you give concrete examples of where your shift in views applies/where you think the reasoning doesn't apply well? 

One of those sources ("Compassion, by the Pound") estimates that reducing consumption by one egg results in an eventual fall in production by 0.91 eggs, i.e., less than a 1:1 effect.

I'm not arguing against the idea that reducing consumption leads to a long-term reduction in production. I'm doubtful that we can meaningfully generalise this kind of reasoning across different products and distinct contexts without investigating each case in practice.

For example, there probably exist many types of food products where reducing your consumption only has something like a 0.1:1 effect. (It's also reasonable to consider that there are some cases where reducing consumption could even correspond with increased production.) There are many assumptions in place that might not hold true. Although I'm not interested in an actual discussion about veganism, one example of a strong assumption that might not be true is that egg consumption is replaced by other food sources that are less bad to rely on.

I'm thinking that the overall "small chance of large impact by one person" argument probably doesn't map well to scenarios involving voting, one-off or irregular events, sales of digital products, markets where the supply chain changes over time because there are many ways to use those products, or cases where excess production can still be useful. When I say "doesn't map well", I mean that the effect of one person taking action could be anywhere between 0:1 and 1:1 compared to what happens when a sufficient number of people simultaneously make the change in decision-making required for a significant shift. If we're talking about one million people needing to vote differently for a decision to be reversed, the expected impact of my one vote is always going to be less than 100% of one millionth, because it's not guaranteed that one million people will sway their vote. If there's only a 10% chance of the one million swayed votes, I'd expect my impact to come out at far less than even 0.01:1 under a statistical model.

Thanks, this makes things much clearer to me.

I agree that this style of reasoning depends heavily on the context studied (in particular, the mechanism at play), and that we can't automatically use numbers from one situation for another.  I also agree with what I take to be your main point: In many situations, the impact is less than 1:1 due to feedback loops and so on.

I'm still not sure I understand the specific examples you provide:

  • Animal products used as food: For commonly-consumed food animal products, I would be surprised if the numbers were much lower than those in the table from Compassion by the Pound (assuming that those numbers are roughly correct).  This is because the mechanism used to change levels of production is similar in these cases.  (The previous sentence is probably naive, so I'm open to corrections.)  However, your point about substitution across goods (e.g., from beef to chicken) is well taken.
  • Other animal products: Not one of the examples you gave, but one material that's interested me is cow leather.  I'm guessing that (1) much of leather is a byproduct* of beef production and (2) demand for leather is relatively elastic.  Both of these suggest that abstaining from buying leather goods has a fairly small impact on farmed animal suffering.**
  • Voting: I am unsure what you mean here by "1:1".  Let me provide a concrete example, which I take to be the situation you're talking about.  We have an election with n voters and 2 candidates, where the better candidate winning has net benefit U.  If all voters were to vote for the better candidate, then each person's average impact is U / n.  I assume that this is what you mean by the "1" in "1:1": if someone has expected counterfactual impact U / n, then their impact is 1:1.  If this is what you mean by 1:1, then one's impact can actually easily be greater than U / n, going against your claim.  For example, if your credence on the better candidate winning is exactly 50%, then U / n is a lower bound; see Ord (2023), some of whose references show that in real-world situations, the probability of swaying the election can be much greater than 1 / n.  (A toy model is sketched after the footnotes below.)

* Not exactly a byproduct, since sales of leather increase the revenue from raising a cow.
** This is not accounting for less direct impacts on demand, like influencing others around oneself.
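On the voting point, a minimal sketch of why the pivotal probability can exceed $1/n$ (a standard toy model, not taken from Ord's paper specifically): suppose $n$ voters each independently vote for the better candidate with probability exactly $0.5$. Your vote is decisive only if the other $n-1$ voters split evenly, which happens with probability roughly $\sqrt{2/(\pi n)}$. For $n = 10^6$ that is about $1/1250$, far larger than $1/n = 1/10^6$, so the expected impact $\Pr(\text{decisive}) \times U$ comes out much greater than $U/n$.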

This is because the mechanism used to change levels of production is similar in these cases. 

I'm unclear on the exact mechanism, and I suspect that the anecdote of "the manager sees the reduced demand across an extended period and decides to lower their store's orders by the exact observed reduction" is a gross oversimplification of what I would guess is a complex system: the manager isn't perfectly rational, may have long periods without review due to contractual reasons, and the supply chain spans multiple parties, all with non-linear relationships. Maybe some food supply chains significantly differ at the grower's end, or in different countries. My missing knowledge here is why I don't think I have a good reason to assume generality.

Other animal products

I think your cow leather example highlights the idea that, for me, threatens simplistic mathematical assumptions. Some resources are multi-purpose and can be made into different products through different processes and grades of quality depending on the use case. It's pretty plausible that eggs are either used for human consumption or hatching. Some animal products might be more complicated and be used for human consumption, non-human consumption, or products in other industries. It seems reasonable to imagine a case where decreasing human consumption results in wasted production which "inspires" someone to redirect that production to another product/market, which becomes successful and results in increased non-dietary demand. I predict that this isn't uncommon and could dilute some of the marginal impact calculations, which hold short-term but might not play out long-term. (I'm not saying that reducing consumption isn't positive in expectation; I'm saying that the true variance of the positive impact could be very high over a long-term period, in a way that typically only becomes clear in retrospect.)

Voting

Thanks for that reference from Ord. I stand updated on voting in elections. I have lingering skepticism about a similar scenario that's mathematically distinct: petition-like scenarios. E.g. if 100k people sign this petition, some organization is obliged to respond. Or if enough students push back on a school decision, the school might reconsider. This is kind of like voting except that the default vote is set. People who don't know the petition exists have a default vote. I think the model described by Ord might still apply, I just haven't got my head around this variation yet.

I agree that the simple story of a producer reacting directly to changing demand is oversimplified.  I think we differ in that, absent specific information, I think we should assume that any commonly consumed animal product's supply response to changing demand is similar to the figures from Compassion, by the Pound. In other words, we should have our prior on impact centred around some of the numbers there, and update from that starting point.  I can explain why I think this in more detail if we disagree on this.

Leather example:

Sure, I chose this example to show how one's impact can be diluted, but I also think that decreasing leather consumption is unusually low-impact.  I don't think the stories for other animal products are as convincing.  To take your examples:

  • Eggs for human consumption are unfertilized, so I'm not sure how they are useful for hatching.  Perhaps you are thinking that producers could fertilize the eggs, but that seems expensive and wouldn't make sense if demand for eggs is decreasing.
  • Perhaps I am uncreative, but I'm not sure how one would redirect unused animal products in a way that would replace the demand from human consumption.  Raising an animal seems pretty expensive, so I'm not sure in what scenario this would be so profitable.
  • If we are taking into account the sort of "meta" effects of consuming fewer animal products (such as your example of causing people to innovate new ways of using animal products), then I agree that these increase the variance of impact but I suspect that they strongly skew the distribution of impact towards greater rather than lesser impact.  Some specific, and straightforward, examples: companies research more alternatives to meat; society has to accommodate more vegans and vegan food ends up more widespread and appealing, making more people interested in the transition; people are influenced by their reducetarian friends to eat less meat.

Voting:

I'll need to think about it more, but as with two-candidate votes, I think that petitions can often have better than 1:1 impact.

Animal Charity Evaluators estimates that a plant-based diet spares 105 vertebrates per year. So if you're vegan for 50 years, that comes out to 5,250 animals saved. If you put even 10% credence in the ACE number, with the remaining 90% on zero impact, you'd still be helping over 500 animals in expectation.
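Spelling out the arithmetic: $105 \times 50 = 5{,}250$ animals over 50 years, and $0.10 \times 5{,}250 = 525$, hence "over 500" in expectation.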

Does anyone have a resource that maps out different types/subtypes of AI interpretability work?

E.g. mechanistic interpretability and concept-based interpretability, what other types are there and how are they categorised?

Late to the party here but I'd check out Räuker et al. (2023), which provides one taxonomy of AI interpretability work.

Brilliant, thank you. One of the very long lists of interp work on the forum seemed to have everything as mech interp (or possibly I just don't recognize alternative key words). Does the EA AI safety community feel particularly strongly about mech interp or is it just my sample size being too small?

Not an expert, but I think your impression is correct.  See this post, for example (I recommend the whole sequence).

Not a direct answer, but you might find the Interpretability (ML & AI) tag on LW relevant. That's where I found Neel Nanda's longlist of interpretability theories of impact (published Mar-22 so it may be quite outdated), and Charbel-Raphaël's Against Almost Every Theory of Impact of Interpretability responding to it (published Aug-23, so much more current). 

Does anyone have quick tips on who to ask for feedback on project ideas?

I feel like the majority of feedback I get falls into:

  • No comment
  • Not my field and I don't really see the value in that compared to other things
  • Not my field but have you considered doing it this alternative way [which seems broadly appealing/sensible/tame but then loses the specific thesis question which I think could unlock exceptional impact]
  • I see the value in that and I think it's a great idea

The truth is, no one can really talk me out of attempting that specific thesis question except by providing a definitive answer to my specific question. And if the only feedback I get is one of the above, then I might as well only ask people who will be encouraging of my ideas and potentially recommend other people that are good to talk to.

It's relatively uncommon for me to get feedback at the level of tweaking my ideas. Is it even worth trying to solicit that given how few people are in a good position to do so?

I find that asking EAs (or anyone, really) for open-ended feedback tends not to yield novel insight by default. EAs tend to have high openness, and as long as something passes the bar of "this has a plausible theory of change, has no obviously huge downside, and is positive EV enough to be worth exploring", it isn't subject to particularly intense scrutiny. Consider also that you have thought about the problem for days/weeks whereas they've only thought about it for maybe 10-20 minutes.

Haven't found a 100% perfect solution, but usually I express my 2-3 most pressing doubts and ask them whether there's a possible solution to those doubts. It scopes the question towards more precise, detailed and actionable feedback.

Alternatively, if the person has attempted a similar project, I would ask them what goals/principles they found most important, and 2-3 things they wish they knew before they'd started.

I think you're mostly asking the wrong people. I give a lot of feedback on my friends' work and projects, but we all work in pretty similar areas, so I know how to provide meaningful and detailed criticism and what does and doesn't make sense. If you're asking people who aren't in that field, there's a good chance any detailed feedback they give you won't be very useful anyway. You also might be asking the wrong people in terms of their willingness to invest time and energy into giving someone detailed feedback, because it really does take a not-insignificant amount of both.

I can't be super certain these are useful tips since I don't know the exact nature of the project you're talking about, but

  • Keep the thing you're asking for feedback on as specific and brief as possible. If your idea is too broad (i.e. something like "I'm going to research X using Y to do Z") there isn't really a whole lot someone can say. There need to be concrete, detailed steps in the proposal. Brevity seems pretty straightforward, although worth underlining: I see a lot of undergrad and MA students who are adamant that you have to know every detail of the background of their project and their thought process to know whether something is a good or bad idea or understand why it doesn't work. And sure, sometimes there are details that are necessary, but overall it's best to pare it down to the bare necessities.
  • Find people who are knowledgeable about and invested in the idea. How to do this depends pretty strongly on what exactly you're doing.
  • Think about whether you may be the problem. There are two types of project I tend not to touch unless I'm getting paid to do it: either they're fundamentally problematic and I think saying that will kill the relationship, or I think the person will spend hours fighting the feedback and trying to explain to me why I'm wrong and their project is great actually. Something in the way you communicate with people might be giving the impression that you'll end up doing either of these things.
  • Find peers doing similar things and swap feedback. This is much, much easier to do in academic work, and I'm not sure it would pan out elsewhere, but I've gotten some of the best feedback from peers rather than professors.
  • Depending on what this is and how much you're investing in it, seeking out professional, paid help may be a good option.

You're probably right, I'm asking the wrong people. I don't know if there are many of the right people to ask within EA or outside of EA. The project cause area is mental health / personal development as it relates to well-being measured through life satisfaction. I feel like my potential sources of targeted feedback are highly constrained because:

  • Most non-EA professional coaches, therapists or psychologists are not equipped to consider my proposals, given that life satisfaction is a relatively disconnected concept from their work. (As strange as that may sound.) I also find that more experienced professionals seem to apply rote knowledge and seem reluctant to consider anything outside of what they practice.
  • Relatively few EAs seem to have an interest in mental health as a cause area beyond general knowledge of its existence, let alone specific knowledge.
  • I suspect my ideas are somewhat wild compared to normal thinking, and I think it would take other people who have their own wild thoughts to comfortably critique mine.

  

Have you put together an actual proposal document? Having a well laid out and argued proposal would help professionals who are willing to take the time to understand and critique your work, even if it's kind of unconventional.

I also find that more experienced professionals seem to apply rote knowledge and seem reluctant to consider anything outside of what they practice.

To be fair, this is in a lot of cases not down to incuriosity or being too stupid or stubborn (although there is certainly a fair amount of that as well), but because they practice what they do because they believe it's the best choice based on their years of training and experience. This is where a well put together, persuasive proposal could come in handy: it gives people the time and ability to peruse and think about it in a way that verbal conversation or jumbled messages don't.

If you aren't getting support from practitioners, it may be a sign that this is better suited for research at this point. Establish the science behind what you're trying to do before seeking implementation. Researchers are, in my experience, generally much more open to out-of-the-box thinking because their risk is lower and the potential rewards higher. They're also more used to looking at proposals and giving feedback, so maybe reaching out to academics or doing something like an MA could be a better option for you at this stage, if you're sufficiently committed to this project.

I suspect my ideas are somewhat wild compared to normal thinking, and I think it would take other people who have their own wild thoughts to comfortably critique mine.

Just as an aside, I tend to be wary of this kind of thinking and try to self-critique more when I find myself going down that line. The reason I say so is that while there are certainly very valuable wild thoughts that are wild enough that "normal thinkers" can't meaningfully engage with them, they're very few and far between, and the illegibility more frequently tends to be the result of the arguments and thinking not being well supported, ideas not being connected well to each other, or significant flaws in the idea or how it's presented.

I used to frequently come across a certain acronym in EA, used in a context like "I'm working on ___" or "looking for other people who also use ___". I flagged it mentally as a curiosity to explore later, but ended up forgetting what the acronym was. I'm thinking it might be CFAR, which seems to have meant CFAR workshops? If so, 1) what happened to them, and 2) was it common for people to work through the material themselves, self-paced?

The copyright banner at the bottom of their site extends to 2024 and the Google form for workshop applications hasn't been deactivated.

I got a copy of the CFAR handbook in late 2022, and the intro had an explicit reference to self-study, along the lines of "we have only used this in workshops, we don't know what the results of self-study of this material would be, and it wasn't written for self-study".

So I assume self study wasn't common but I may be wrong
