This is a special post for quick takes by Bob Jacobs 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I quit. I'm going to stop calling myself an EA, and I'm going to stop organizing EA Ghent, which, since I'm the only organizer, means that in practice it will stop existing.

It's not just because of Manifest; that was merely the straw that broke the camel's back. In hindsight, I should have stopped after the Bostrom or FTX scandal. And it's not just because they're scandals; it's because they highlight a much broader issue within the EA community regarding whom it chooses to support with money and attention, and whom it excludes.

I'm not going to go to any EA conferences, at least not for a while, and I'm not going to give any money to the EA fund. I will continue working for my AI safety, animal rights, and effective giving orgs, but will no longer be doing so under an EA label. Consider this a data point on what choices repel which kinds of people, and whether that's worth it.

EDIT: This is not a solemn vow forswearing EA forever. If things change I would be more than happy to join again.

EDIT 2: For those wondering what this quick-take is reacting to, here's a good summary by David Thorstad.

Thanks for sharing your experience here. I'm glad you see a path forward that involves continuing to work on issues you care about despite distancing yourself from the community.

In general, I think people should be more willing to accept that you can accept EA ideas or pursue EA-inspired careers without necessarily accepting the EA community. I sometimes hear people struggling with the fact that they like a lot of the values/beliefs in EA (e.g., desire to use evidence and reason to find cost-effective and time-effective ways of improving the world) while having a lot of concerns about the modern EA movement/community. 

The main thing I tell these folks is that you can live by certain EA principles while distancing yourself from the community. I've known several people who have distanced themselves from the community (for various reasons, not just the ones listed here) but remained in AI safety or other topics they care about.

Personally, I feel like I've benefitted quite a bit from being less centrally involved in the EA space (and correspondingly being more involved in other professional/social spaces). I think this comment by Habryka describes a lot of the psychological/intellectual effects that I experienced.

Relatedly, as I specialized more in AI safety, I found it useful to ask questions like "what spaces should I go to where I can meet people who could help with my AI safety goals". This sometimes overlapped with "go to EA event" but often overlapped with "go meet people outside the EA community who are doing relevant work or have relevant experience", and I think this has been a very valuable part of my professional growth over the last 1-2 years.

I 100% agree with you on your general point, Akash, but I think something slightly different is going on here, and I think it's important to get it right.

To me, it sounds like you're saying, 'Bob is developing a more healthy relationship with EA'. However, I think what's actually happening is more like, 'Bob used to think EA was a cool thing, and it helped him do cool things, but then people associated with it kept doing things Bob found repugnant, and so now Bob does not want anything to do with it'.  

Bob, forgive me for speaking on your behalf, and please correct me if I have misinterpreted things.

A bit strong, but about right. The strategy the rationalists describe seems to stem from a desire to ensure their own intellectual development, which is, after all, the rationalist project. By disregarding social norms you can start conversing with lots of people about lots of stuff you otherwise wouldn't have been able to. Tempting. However, my own (intellectual) freedom is not my primary concern; my primary concern is the overall happiness (or feelings, if you will) of others, and certain social norms are there to protect that.

To me, it sounds like you're saying, 'Bob is developing a more healthy relationship with EA'.

Oh, just a quick clarification: I wasn't trying to say anything about Bob or Bob's relationship with EA here.

I just wanted to chime in with my own experience (which is not the same as Bob's but shares one similarity in that they're both in the "rethinking one's relationship with the EA community/movement" umbrella).

More generally, I suspect many forum readers are grappling with this question of "what do I want my relationship with the EA community/movement to be". Given this, it might be useful for more people to share how they've processed these questions (whether they're related to the recent Manifold events or related to other things that have caused people to question their affiliation with EA).

Thanks for writing this Bob. I feel very saddened myself by many of the things I see in EA nowadays, and have very mixed feelings about staying involved that I'm trying to sort through - I appreciate hearing your thought process on this. I wish you the best in your future endeavors!

This isn't good. This really isn't good.

Because I want to avoid the whole thing, and I am far less attached to EA because of these arguments, while being on the opposite side of the political question from where I assume you are.

Anyways, I'd call this weak evidence for the view that 'EA should split into rationalist!EA and normie!EA'.

Intuitively, though, it seems likely that it would be better for the movement if only people from one side were leaving, rather than the controversies alienating both camps from the brand.

EA is already incredibly far outside the "normiesphere," so to speak. Calling it that makes some heavily normative assumptions. What you're looking for is more along the lines of "social justice progressive" EA and SJP-skeptical EA. As much as some people like to claim "the ideology is not the movement," I would agree that such a split is ultimately inevitable (though I think it will also gut a lot of what makes EA interesting, and that SJP-EA will eventually morph into bog-standard Ford Foundation philanthropy).

Still not that accurate, since I suspect there are a fair number of people who disagree with Hanania but think he should be allowed to speak, while supporting the global health efforts in Africa. But so it goes when trying to name amorphous and decentralized groupings.

eventually SJP-EA morphs into bog-standard Ford Foundation philanthropy

This seems unlikely to me for several reasons, foremost amongst them that they would lose interest in animal welfare. Do you think that progressives are not truly invested in it, and that it's primarily championed by their skeptics? Because the data seems to indicate the opposite.

PETA has been around for longer than EA, among other (rather less obnoxious and more effective) animal welfare organizations; I don't think losing what makes EA distinct would entail losing animal welfare altogether. The shrimp and insect crowd probably wouldn't remain noticeable. Not because I think they overlap heavily with the skeptic-EA crowd (quite the opposite), but because they'd simply be drowned out. Tolerance of weirdness is a fragile thing.

I do think the evidence is already there for a certain kind of losing/wildly redefining "effective," i.e., criminal justice reform. Good cause, but no way to fit it into "effectiveness per dollar" terms without stretching the term to meaninglessness.

Based on your background and posts on here, I think this is a shame.

And I say that as someone who has never called himself an EA even though I share its broad goal and have a healthy respect for the work of some of its organizations and people (partly because of similar impressions to the ones you've formed, but also because my cause area and other interests don't overlap with EA quite as much as yours).

I hope you continue to achieve success and enjoyment in the work you do. Given you're in Brussels, I wondered if you'd checked out the School for Moral Ambition, which appears to be an EA-ish philosophy-plus-campaigning org trying to expand from your Dutch neighbours (no affiliation other than seeing it discussed here).

I appreciate what Rutger Bregman is trying to do, and his work has certainly had a big positive impact on the world, almost certainly larger than mine at least. But honestly, I think he could be more rigorous. I haven't looked into his School for Moral Ambition project, but I have read the first half of his book "Humankind", and despite vehemently agreeing with the conclusion, I would never recommend it to anyone, especially not anyone who has done any research before.

There seems to be some sort of trade-off between wide reach and rigor. I noticed a similar thing with other EA public intellectuals, for example Sam Harris and his book "The Moral Landscape" (I haven't read any of his other books, mostly because this one was just so riddled with sloppy errors), and Steven Pinker's "Enlightenment Now" (I haven't read any of his other books either, again because of errors in this book). (Also, I've seen some clips of them online, and while that's not the best way to get information about someone, they didn't raise my opinion of them, to say the least.)

Pretty annoying overall. At least Bregman is not prominently displayed on the EA People page like they are (even though what I read of his book was comparatively better). I would remove them from it, but last time I removed SBF and Musk from that page, the edit got downvoted and I had to ask a friend to upvote it (and this was after SBF was detained, so I don't think a Harris or Pinker edit would fare much better). Pretty sad, because I think EA has much better people to display than a lot of the individuals on that page, especially considering some of them (like Harris and Pinker) currently don't even identify as EA.
 

Interesting, you're clearly more familiar with Bregman than I am: I was thinking of it in terms of the social-reinforcement thing he appears to be trying to do (finding interesting cause areas and committing to them) rather than his philosophy.

There's definitely a tradeoff between wide reach and rigour when writing for public audiences, but I think most people fall short of rigour most of the time. But those who claim exceptional rigour as their distinguishing characteristic should definitely try to avoid appearing to be more cliquey and arbitrary in their decision making than average...

When it comes to someone like Pinker it's the tone that irritates me more than the generalizations, to the point I'm even more annoyed when I think he's right about something! If Bregman sometimes sounds similar I can see how it would grate.

The Belgian senate votes to add animal welfare to the constitution.

It's been a journey. I work for GAIA, a Belgian animal advocacy group that for years has tried to get animal welfare added to the constitution. Today we were present as a supermajority of the senate came out in favor of our proposed constitutional amendment. The relevant section reads:

In exercising their respective powers, the Federal State, the Communities and the Regions strive to protect and care for animals as sentient beings.

It's a very good day for Belgian animals but I do want to note that:

  1. This does not mean an effective shutdown of the meat industry, merely that all future pro-animal welfare laws and lawsuits will have an easier time.  And,
  2. It still needs to pass the Chamber of Representatives.

If there's interest I will make a full post about it once it passes the Chamber.

EDIT: Translated the linked article on our site into English.

Congrats! I would also appreciate a full post, and would be interested to hear more about the process of passing the amendment. It would be great to recognize those who contributed to this work.

Very interesting. I’d personally appreciate a full post.

+1 for full post. And huge congrats. This must've been incredibly difficult work, for an ambitious goal, and you made it happen! So great.

For those voting in the EU election and general elections in Belgium, here's an overview of the party positions when it comes to animal welfare:

(For more details, click this link)

✅ means more in favor    ❌ means more against

Federal election (Flanders):

Parties: PVDA 🔴, GROEN ❇️, VOORUIT 🔺, Open-VLD 🔵, CD&V 🔶, N-VA 🔆, VB ⬛️

Policy proposals:
  1. VAT rate reduction on veterinary care and pet food
  2. A ban on traditional fireworks

Federal election (Walloon):

Parties: PTB 🔴, ECOLO ❇️, PS 🔺, LE 🐬, Défi 🌸, MR 🔵

Policy proposals:
  1. VAT rate reduction on veterinary care and pet food
  2. A ban on traditional fireworks

Flanders election:

Policy proposals:
  1. Better living conditions for broiler chickens in Flanders
  2. A ban on live cooking and cutting lobsters in half
  3. A phasing out plan of Boudewijn Seapark
  4. A ban on the painful surgical castration of piglets
  5. A ban on chick killing
  6. Stricter legislation around the dog and cat trade
  7. A duty of care for horses, dogs, cats and rabbits
  8. The development of cultured meat in Flanders
  9. Animal testing: for an animal-free strategy in Flanders
  10. A Flemish ban on the sale of products that harm animal welfare
  11. Animal welfare as a criterion in environmental permit procedure
  12. A punishment of animal abuse through GAS fines

Total scores: PVDA 🔴 11/12, GROEN ❇️ 12/12, VOORUIT 🔺 8/12, Open-VLD 🔵 3/12, CD&V 🔶 0/12, N-VA 🔆 11/12, VB ⬛️ 7/12

Walloon election:

Total scores: PTB 🔴 12/13, ECOLO ❇️ 11/13, PS 🔺 8/13, LE 🐬 10/13, Défi 🌸 6/13, MR 🔵 5/13

EU election (Flanders):

Total scores: PVDA 🔴 9/10, GROEN ❇️ 10/10, VOORUIT 🔺 10/10, Open-VLD 🔵 8/10, CD&V 🔶 0/10, N-VA 🔆 10/10, VB ⬛️ 0/10

EU election (Walloon):

Total scores: PTB 🔴 9/10, ECOLO ❇️ 9/10, PS 🔺 7/10, LE 🐬 9/10, Défi 🌸 6/10, MR 🔵 7/10

Brussels election:

Total scores: PVDA 🔴 5/6, ECOLO ❇️ 5/6, GROEN ❇️ 6/6, PS 🔺 4/6, VOORUIT 🔺 6/6, LE 🐬 5/6, Défi 🌸 5/6, MR 🔵 4/6, O-VLD 🔵 3/6, CD&V 🔶 0/6, N-VA 🔆 6/6, VB ⬛️ 5/6

Highest score Federal election (Flanders):  PVDA, GROEN, VB

Highest score Federal election (Walloon):   PTB, PS, MR

Highest score Flanders election:                   GROEN

Highest score Walloon election:                    PTB

Highest score EU election (Flanders):         GROEN, Vooruit, N-VA

Highest score EU election (Walloon):          PTB, Ecolo, LE

Highest score Brussels election:                  GROEN, Vooruit, N-VA

TLDR: As my post on the topic pointed out, the left-wing parties tend to be best for animal welfare, but the far-right can often be better than the center-right.

Reddit user blueshoesrcool discovered that Effective Ventures (the umbrella organization for the Centre for Effective Altruism, 80,000 Hours, GWWC, etc.) has missed its charity reporting deadline by 27 days.

Given that there's already a regulatory inquiry into Effective Ventures Foundation, maybe someone should look into this.

Hey Bob - Howie from EV UK here. Thanks for flagging this! I definitely see why this would look concerning, so I just wanted to quickly chime in and let you/others know that we’ve already gotten in touch with relevant regulators about this and I don’t think there’s much to worry about here.

The thing going on is that EV UK has an extended filing deadline (from 30 April to 30 June 2023) for our audited accounts,[1] which are one of the things included in our Annual Return. So back in April, we notified the Charity Commission that we’ll be filing our Annual Return by 30 June. 

  1. ^ This is due to a covid extension, which the UK government has granted to many companies.

Inside Wytham Abbey, the £15 Million Castle Effective Altruism Must Sell [Bloomberg]

From the article:

Effective Ventures has since come to a settlement with the FTX estate and paid back the $26.8 million given to it by FTX Foundation. [...] It’s amid such turmoil that Wytham Abbey is being listed on the open market for £15 million [...]

Adjusted for inflation, the purchase price of the house two years ago now equals £16.2 million. [...] The listing comes as homes on the UK’s once-hot country market are taking longer to sell, forcing some owners to offer discounts.

I still think the intangible reputational damage is worse, but a loss of a million pounds (that could've been spent on malaria bed nets) would be nothing to sneeze at either.

(archive link)

It's not necessarily a loss of a million pounds if many of the events that happened there would otherwise have been organised elsewhere: renting event spaces and accommodation for event attendees can get quite pricey, and organisers would have spent additional time finding venues, setting them up, etc. (compared to having them at Wytham).

For comparison, EA Global events cost in the ballpark of a million pounds per event. 

*A million pounds if we round down, not to mention it could've been much more if it had been invested.

The venue is not the biggest cost of those EAG events, since you also need to pay for things like travel grants, catering, equipment... This also doesn't establish that buying was better than renting. Not that it matters much: the only thing listed on the Wytham Abbey website is a grand total of eleven workshops.

Even if you don't want to give the money to animals or the developing world, and even if you don't want to invest the money to have more to give later, and even if you don't want to invest it into community building in poor countries but rather in rich countries, then this money still could've been spent better.
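As a rough back-of-the-envelope illustration of those figures (the 5% annual return below is an assumed rate of mine, not a number from the article or the thread):

```python
# Back-of-the-envelope sketch of the Wytham Abbey figures discussed above.
# The 5% annual return is an illustrative assumption, not a reported number.

purchase_price = 15.0e6          # £, reported purchase price two years earlier
inflation_adjusted = 16.2e6      # £, that purchase price in today's money (from the article)
listing_price = 15.0e6           # £, current asking price

real_terms_loss = inflation_adjusted - listing_price
print(f"Real-terms loss vs inflation: £{real_terms_loss:,.0f}")          # ~£1.2 million

# If the purchase money had instead been invested at an assumed 5%/year for 2 years:
assumed_return = 0.05
counterfactual_value = purchase_price * (1 + assumed_return) ** 2
print(f"Counterfactual invested value: £{counterfactual_value:,.0f}")    # ~£16.5 million
print(f"Opportunity-cost loss: £{counterfactual_value - listing_price:,.0f}")
```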

I don't know how the cost-benefit calculation works out, but retreats have different costs than conferences (including some overnight accommodation) and less tangible costs associated with using a different venue for each event.

I would also assume there are quite a few more events that aren't listed online.

Note that not all the workshops are one-off; e.g., Future Impact Group ran every trimester, I believe.

On the other hand, the project also spent some significant amount of money on staffing, supplying, maintaining and improving the property, so total expenditure is surely more than just purchase price minus sale price.

The Netherlands passed a law that would ban factory farming.

It was introduced by the Party for the Animals and passed in 2021. However, it only passed because the government had just fallen and the senate was busy passing covid laws, which meant they didn't have a debate about it. Since the law is rather vague, there's a good chance it wouldn't have passed without the covid crisis.

It was supposed to take effect this year, but the minister of agriculture has decided he will straight up ignore the law. The current government is not in favor of this law, so they're looking at ways to circumvent it.

It's very unusual for the Dutch government to ignore laws, so they might get sued by animal rights activists. I expect they will introduce a new law rather quickly that repeals this ban, but the fact that it passed at all and that this will now become a big issue in the news is very promising for the 116 million Dutch farm animals.

The results of the Dutch provincial elections are in. The Party for the Animals (the party that banned factory farming, but people ignored it) has increased its number of seats in the senate from 3 to 4 (out of 75).

Before you start cheering I should mention that the Farmer–Citizen Movement (who are very conservative when it comes to animal rights) have burst onto the scene with 16 seats (making them the largest party).

With farming and livestock becoming a hot-button issue in the Netherlands, there's a chance that animal rights will now become a polarizing issue, with a lot of people who previously didn't think about it becoming explicitly for or against expanding animal rights. While this would increase the number of vegetarians and vegans, it remains to be seen whether this will turn out to be positive for animal welfare overall.

Jobst and I want to improve AI safety by supplementing RLHF with a consensus-generating voting system. Last week we did a small experiment at a conference. Here is the poster we used to explain the idea to attendees:

Here's the PDF

Say you had to choose between two options:

Option 1: A 99% chance that everyone on earth gets tortured for all of time (-100 utils per person) and a 1% chance that a septillion happy people get created (+90 utils pp) for all of time

Option 2: A 100% chance that everyone on earth becomes maximally happy for all of time (+100 utils pp)

Let's assume the population in both these scenarios remains stable over time (or grows similarly). Expected value theory (and classic utilitarianism by extension) says we should choose option 1, even though it has a 99% chance of an s-risk, over a guaranteed everlasting utopia for everyone. (You can also create a scenario with an x-risk instead of an s-risk.) This seems counterintuitive.
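For concreteness, here's a minimal sketch of the arithmetic, assuming a current population of roughly 8 billion and comparing per-period utility (the "for all of time" factor is the same for both options, so it drops out):

```python
# Minimal sketch of the expected-value comparison above.
# Assumptions (mine, for illustration): ~8 billion people currently exist, and
# utility is compared per unit of time, since the everlasting duration is
# identical for both options.

population = 8e9
septillion = 1e24

# Option 1: 99% chance everyone is tortured (-100 utils each),
#           1% chance a septillion happy people are created (+90 utils each).
ev_option_1 = 0.99 * (population * -100) + 0.01 * (septillion * 90)

# Option 2: 100% chance everyone becomes maximally happy (+100 utils each).
ev_option_2 = 1.00 * (population * 100)

print(f"{ev_option_1:.3e}")  # ~9.0e+23
print(f"{ev_option_2:.3e}")  # ~8.0e+11
# Expected value theory favours option 1 by roughly twelve orders of magnitude,
# despite its 99% chance of a torturous outcome.
```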

I call this the wagering calamity objection.

EDIT: This is not the 'very repugnant conclusion' since it's not about inequality within a population, but rather about risk-aversion.

This sounds similar to the "very repugnant conclusion".

Gunman: [points a sniper rifle at a faraway kid] Give me $10 or I'll kill this kid.

Utilitarian: I’m sorry, why should I believe that you will let the kid live if I give you $10? Also, I can’t give you the money because that would set a bad precedent. If people know I always give money to gunmen, that would encourage people to start taking hostages and demanding money from me.

Gunman: I promise I will let her live and keep it a secret. See, I have this bomb-collar that will explode if I try to remove it. Here's a detonator that starts working in one hour, so now you can punish me if I break my promise.

Utilitarian: How do I know you won’t come back tomorrow to threaten another kid?

Gunman: I'm collecting money for a (non-effective) charity. I only do this because threatening utilitarians is an easy source of money. I promise I'll only threaten you once.

Utilitarian: So you're saying the mere existence of utilitarians can generate cruel behavior in people who otherwise wouldn't? Guess I should consider not being a utilitarian, or at least keeping it a secret.

EDIT: Will someone explain why this is worth (strong) downvotes? This seems like a pretty natural extension of game theory: if you reveal you’re always open to sacrificing personal utility for others, you leave yourself more open to exploitation than with a tit-for-tat-like strategy (e.g. contractualism), meaning people are more likely to try to exploit you (e.g. by threatening nuclear war). If you think I made a mistake in my reasoning, why leave me with less voting power, and why not click on the disagreement vote or leave a comment explaining it?

EDIT 2: DC's hypothesis that it's because of vibes and not reasoning is interesting, although I find the hypothesis that some EAs strongly identify as utilitarian and don't like seeing it questioned also plausible (they don't seem to have a problem with a pro-utilitarianism argument putting a child in mortal peril, e.g. the drowning child thought experiment). There's a reason thought experiments in ethics often have these attributes; I'm not trying to disturb, I'm trying to succinctly show the structure of threats without wasting the reader's time with fluff. So, for example, I choose a child so I don't need to specify a high amount of expected DALYs per dollar, I choose a sniper rifle because then there doesn't need to be a mechanism to make the child also keep the agreement a secret, I choose a bomb-collar because that's a quick way to establish a credible punishment mechanism, etc.
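As a toy illustration of the exploitability point (this abstracts away the hostage entirely and uses made-up numbers; it only models the incentive to issue threats):

```python
# Toy model, my own illustration: a threatener only issues threats when they
# expect the target's policy to pay out. A policy of "always concede" invites
# repeated exploitation; a credible commitment to refuse removes the incentive
# to threaten at all.

def total_extracted(target_policy: str, n_rounds: int = 100, demand: int = 10) -> int:
    """Return the total amount extracted from the target over n_rounds."""
    extracted = 0
    for _ in range(n_rounds):
        threatener_expects_payment = (target_policy == "always_pay")
        if threatener_expects_payment:  # threatening is only worthwhile if it pays
            extracted += demand
    return extracted

print(total_extracted("always_pay"))  # 1000: the target is farmed for money
print(total_extracted("never_pay"))   # 0: no profit in threatening, so no threats
```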

People were probably just squicked by the shocking gunman example starting the first sentence with no context and auto-downvoted based on vibes rather than your reasoning. You optimized pretty hard for violent shock value with your first sentence, which could be a good hook for a short story in other contexts but here hijacks the altruistic reader with ambiguously threatening information. I don't personally mind, but maybe it's triggering for some. Try using less violent or more realistic hypotheticals, maybe.

After years of using this forum, the day I started criticizing EA I began having problems with my writing not appearing on the frontpage and the mods not answering my messages.

This may very well be a coincidence, in which case I genuinely apologize for the implied accusation. I still think it's important to mention it in case other people are having the same problem.

EDIT: After commenting this I suddenly lost a lot of karma due to downvotes:

EDIT 2: To respond to the reply: it's not all messages on the intercom, and it's been happening for a bit longer than the last couple of days. But I can totally understand that you are overloaded right now, so fair enough, I'll drop that point. More importantly, it's not just happening with comments on popular posts, and it's not just happening with comments either.
I suddenly lost a lot of karma, but I wanted to show some proof that I didn't make that up, so I went to settings, changed it to "show downvotes", and set it to batch at the nearest hour. That way I could make a screenshot of it and provide some proof that it happened.

EDIT 3: I think it's back to normal, and for the record I don't think the admins themselves went back and downvoted me.

Ben_West🔸 (moderator comment):

Hi Bob,
1. We have been responding to your messages in intercom; I don't know what you mean. It's true that our moderation team is slower to respond than usual because we are overloaded right now, but I think you can probably guess why we are overloaded.
2. You are probably commenting on popular posts and we don't show all the comments from those on the Frontpage. I think we never show more than 4 or 5. The forum is open source, so you can look through the code to see the logic we use to decide which comments to display, if you would like.
3. I don't know who's downvoting you. It looks like your notifications are batched, so you got notified about them all at the same time because your Vote Notifications setting is set this way.

Beauty vs Happiness Thought Experiment


Say a genie were to give you the choice between:

1) Creating a stunningly beautiful world that is uninhabited and won’t influence sentient beings in any way, or
2) Not creating it.

In addition, both the genie’s and your memories of this event are immediately erased once you make the choice, so no one knows about this world and you cannot derive happiness from the memory.

Would you choose option one or two?
 

I would choose option one, because I prefer a more beautiful universe to an uglier one (even if no one experiences it). This forms an argument against classic utilitarianism.

Classic utilitarianism says that I’m wrong. The choice doesn’t create any happiness, only beauty. This means that, according to classic utilitarianism, I should have no preference between the two options.
 

There are several moral theories that do allow you to prefer option one. One of these is preference utilitarianism, which states that it’s okay to have preferences that don’t bottom out in happiness. For this reason, I find preference utilitarianism more persuasive than classic utilitarianism.


A possible counterargument would be that the new world isn't really beautiful since no one experiences it. Here we have a disagreement over whether beauty needs to be experienced to even exist.


A third way of looking at this thought experiment would be through the lens of value uncertainty. Through this lens, it does make sense to pick option one. Even if you have a thousand times more credence in the theory that happiness is the arbiter of value, the fact that no happiness is created either way leaves the door open for your tiny credence that beauty might be the arbiter of value. Value uncertainty suggests that you take the first option, just in case.
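As a toy version of that calculation (the credences and the value assigned to beauty below are illustrative numbers of mine, not claims about what anyone should believe):

```python
# Toy expected-choiceworthiness calculation for the beauty vs happiness choice.
# The credences and the beauty value are illustrative assumptions.

credence_hedonism = 0.999   # theory: only happiness matters
credence_beauty = 0.001     # theory: beauty has some intrinsic value
beauty_value = 1.0          # arbitrary positive value the beauty theory assigns

# Under hedonism both options are worth 0, since no happiness is created either way.
option_1 = credence_hedonism * 0 + credence_beauty * beauty_value  # create the world
option_2 = credence_hedonism * 0 + credence_beauty * 0             # don't create it

print(option_1, option_2)  # 0.001 vs 0.0: option one wins, however small the credence
```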

What if there's a small hedonic cost to creating the beautiful world? Suppose option 1 is "Creating a stunningly beautiful world that is uninhabited and won’t influence sentient beings in any way, plus giving a random person a headache for an hour."

In that case I can't really see a moral case for choosing option 1, no matter how stunningly beautiful the world in question is. This would suggest that even if there is some intrinsic value to beauty, it's extremely small if not lexically inferior to the value of hedonics. I think for basically all practical purposes we do face tradeoffs between hedonic and other purported values, and I just don't feel the moral force of the latter in those cases.

A third way of looking at this thought experiment would be through the lens of value uncertainty. Through this lens, it does make sense to pick option one. Even if you have a thousand times more credence in the theory that happiness is the arbiter of value, the fact that no happiness is created either way leaves the door open for your tiny credence that beauty might be the arbiter of value. Value uncertainty suggests that you take the first option, just in case.

This summarises my view. Might as well choose the first option, just in case.

EDIT: Biden Backs $8 Billion Alaska Oil Project. I don't know why someone gave this shortform an immediate -9 downvote, but for those EAs who still care about climate change, thank you.

A massive and controversial new oil production project in Alaska is under review by the US Department of the Interior.

ConocoPhillips' massive Willow Project would be a climate disaster, locking in at least 30 years of fossil fuel production on sensitive Arctic ecosystems near Indigenous communities. It would unleash high levels of pollution, roughly the equivalent of 66-76 coal plants' worth of carbon released into the air, and directly undermine President Biden's climate goals.

The Biden administration has the power to stop this massive fossil fuel development. Click here to send them a letter (2 minutes).
