It's been a big month. We've had updates from major orgs like GiveWell and MIRI, new evidence from GWWC about the impact of deworming, CSER hiring new postdocs, and much more!

Welcome to April's open thread on the Effective Altruism Forum. Feel free to add other EA-relevant topics that have not appeared in recent posts.


As some of you are probably aware, I work as a Credit Analyst for a hedge fund in NY - I analyze the bonds of different companies and choose which to invest in. As a result, I frequently meet the management of many large corporations and discuss their businesses with them. Management are often quite attentive to investors - not only are we essentially their boss, with the power to set their compensation and even fire them, but we also spend a great deal of time researching their industry, and so can produce useful insights in a more incentive-compatible way than consultants - albeit restricted by only having access to public information.

In particular, I sometimes meet with the CEOs, CFOs etc. of protein companies - companies that produce beef, pork, chicken and other meat products. I personally don't care about animal rights much, but I'm aware that many EAs think these companies are very evil - plausibly the worst thing to ever happen in the history of the world.

As such, I was wondering if people had any suggestions for issues I might raise with these management teams.

My motives here are basically:

  • You may have ideas for changes that would both enhance animal welfare and improve shareholder returns. (Technically I care about bondholders, not shareholders, but I think it is easier for non-finance people to think about shareholders, and the conversational framing does not affect me.)
  • I am interested in the possibility of moral trade. I am especially concerned about Existential Risks, Abortion and Societal Value Drift. If you are in a position to reduce these, then I would like to trade!

In general I believe in holding off from proposing solutions, so I don't want to prime you for the sorts of things you could suggest. But in case you don't understand what I'm looking for, here are some initial thoughts I had. Bear in mind that I'm not an expert on animal welfare.

rot13

  • Fgneg fryyvat betnavp cebqhpgf, nf gurl pbzznaq n zhpu uvture cevpr, qrznaq vf (?) tebjvat encvqyl, naq ner zber uhznar.
  • Nqbcg n arj fynhtugre grpuavdhr juvpu vf obgu zber pbfg-rssrpgvir naq yrff cnvashy.
  • Vairfg va orrs cebqhpgvba bire puvpxra cebqhpgvba, nf pbafrafhf fnlf gur ynggre znexrg jvyy orpbzr bire-fhccyvrq yngre guvf lrne, naq gur sbezre vf nyfb zber uhznar.
  • Ybool sbe vapernfrq erthyngvba sbe arj snpgbel snezf, juvpu jbhyq cebgrpg vaphzoragf ol ceriragvat arj pbzcrgvgvba sebz vapernfvat fhccyl.
  • Hfvat zber uhznar zrgubqf jvyy cerirag navzny evtugf npgvivfgf sebz perngvat artngvir choyvpvgl sbe lbh. (V qb abg guvax guvf vf n tbbq nethzrag nf tvivat va gb oynpxznvy vf njshy tnzr gurbel, naq npgvivfgf ner bsgra cerggl veengvbany fb jvyy cebonoyl pneel ba nggnpxvat lbh naljnl, ohg vg'f whfg na rknzcyr).
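
For readers who'd rather not decode the rot13 by hand, here is a minimal Python sketch using the standard library's rot_13 codec; the string shown is the first bullet above, copied verbatim, and any of the other bullets can be swapped in.

    import codecs

    # One of the rot13-encoded bullets above, copied verbatim; swap in any
    # other bullet you want to read.
    encoded = (
        "Fgneg fryyvat betnavp cebqhpgf, nf gurl pbzznaq n zhpu uvture cevpr, "
        "qrznaq vf (?) tebjvat encvqyl, naq ner zber uhznar."
    )

    # The 'rot_13' codec shifts each letter 13 places; applying it twice
    # returns the original text, so the same call also encodes.
    print(codecs.decode(encoded, "rot_13"))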

Upvoted. Thanks for having the courage to suggest something seemingly odd, even for effective altruism, because you care about achieving good and about cooperating with peers you don't fully agree with but respect.

I have seen worry that improving the material living conditions of animals might make people less concerned with their welfare. I think the idea there is that most of the expected value is in the small chance of a very large shift away from consumption of animals.

Animal welfare ensures animals are taken care of, but isn't necessarily about ending carnism, i.e., eating animal flesh. Animal rights seeks to respect the rights of nonhuman animals, including the right to life and not to be killed by humans. This is merely my impression, not objective data, but in my experience the majority of those in effective altruism who care about factory farming are for full animal rights, not merely animal welfare. There is also a contingent of utilitarians within effective altruism who primarily care about reducing and ending suffering. They may be willing to compromise in favor of animal welfare, and not full rights, but I'm not sure. They definitely don't seem a majority of those concerned with animal suffering within effective altruism.

There is also a contingent of utilitarians within effective altruism who primarily care about reducing and ending suffering. They may be willing to compromise in favor of animal welfare, and not full rights, but I'm not sure. They definitely don't seem a majority of those concerned with animal suffering within effective altruism.

Of course, only actual data on EAs could demonstrate the proportion of utilitarians willing to compromise, but this seems weird. To me it seems utilitarianism all but commits you to accepting "compromises" on animal welfare, at least in the short term, given historical facts about how groups gained ethical consideration. As far as I know (anyone feel free to provide examples to the contrary), no oppressed group has ever seen respect for their interests go from essentially "no consideration" (where animals are today) to "equal consideration" without many compromising steps in the middle.

In other words, a utilitarian may want the total elimination of meat eating (though this is also somewhat contentious), but in practice they will take any welfare gains they can get. Similarly, utilitarians may want all wealthy people to donate to effective charities until global poverty is completely solved, but will temporarily "compromise" by accepting that only 5% of wealthy people donate 10% of their income to such charities, while pushing people to do better.

So, in practice, utilitarianism would mean setting the bar at perfection (and publicly signaling the highest standard that advances you towards perfection) but taking the best improvement actually on offer. I see no reason this shouldn't apply to the treatment of animals. Of course, other utilitarians may disagree that this is the best long term strategy (hopefully evidence will settle this question) but that is an argument about game theory and not whether some improvement is better than none or if settling for less than perfection is allowable.

This is an extremely interesting idea. My understanding is that animal-rights campaigns are usually very targeted at a specific company.

My guess, which I can confirm with people who know more about this, is that the highest-impact thing you could do is say "Mercy For Animals is planning on targeting you for their next action; the last company they targeted had bad financial outcomes X, Y and Z; therefore I recommend you phase out gestation crates or whatever."

Do you have the ability to target just one company and work with an organization like MFA? Or do you want generic advice like "phase out gestation crates"?

I'm completely unconcerned about abortion, so we may be able to trade.

Thanks for the suggestion!

Hmm, interesting idea. My concern would be that this would look very weird - it falls outside my role as an investor - which would make them confused, or likely to ignore it. It would be strange for an investor to have advance knowledge of an activist stunt! And indeed I do not actually have knowledge of such activities. In particular, I'm not sure what I would add in such a situation, compared to MFA simply contacting the company directly, which would also be more credible.

It does seem that MFA have been successful on the gestation crates. However, I don't think that exact answer would be applicable here, as generally the companies I'm looking at are like Tyson Foods, rather than the companies which use gestation crates, which tend (I think) to be contractors.

However, I could say something along the lines of "Other companies have been getting negative publicity about gestation crates, and deciding to reactively stop using them. As it's better to be proactive than reactive, maybe you should start phasing them out of your supply chain now".

Also not all the companies do pork, whereas most do chicken.

I'm completely unconcerned about abortion, so we may be able to trade.

Glad to hear it. Are you aware of similar things having been done before? I'm unaware of how it would work, mechanically. (I'm also not sure exactly what I would buy from you in return, but I guess we could settle for a donation to a charity, whose identity I could determine later.)

I am interested in the possibility of moral trade. I am especially concerned about Existential Risks, Abortion and Societal Value Drift. If you are in a position to reduce these, then I would like to trade!

Do you know how you would actually do such a trade? There seems to be a major trust problem.

You've probably considered it, but it's not on your list: To hedge against any change in our consumption of meat, you could invest in in vitro meat, and other meat-alikes.

I've just created local EA presences in the following 15 new places, added to my previous list. Point them out via that link to anyone you know nearby who might be interested! A good low-key way to do so is to point or invite them to the Facebook groups (you can invite them by joining said groups and then using the right sidebar on them).

  • Adelaide, Australia
  • Bulgaria
  • Hong Kong, China
  • Kiel, Germany
  • Leipzig, Germany
  • Russia
  • Hertfordshire, United Kingdom
  • Yorkshire, United Kingdom
  • Albany, United States
  • Central Massachusetts, United States
  • Central Virginia, United States
  • Dallas, United States
  • Portland, United States
  • Sacramento, United States
  • University of Idaho, United States

Is the blogging carnival thing still going on? I recently started a blog in which I'm probably going to occasionally write about topics related to EA, and the carnival idea sounds like a really nice way to participate in the community and create content. However, I couldn't find a list of planned future topics or anything, and the most recent master post I could find in Bitton's blog was from January. If people are still interested in this, should the topic be announced monthly in the Open Thread or something so people could find it more easily?

By chance, I did find a mention yesterday about March's topic being selflessness, and gave it a try - it was already April when I published it though, hope it can still be counted as a submission!

They're not hard deadlines for submission, so there's no reason it shouldn't count. I think I saw Bitton do a post about the blogging carnival at least as recently as February, albeit in an open thread or otherwise less prominent place. There isn't a history of him having a list of future topics. Try sending him a private message.

[Translating Wikipedia articles about EA to more languages]

Konrad Seifert suggested this on Facebook:

"I just translated the EA wiki page to French - with https://translate.google.com/toolkit and afterwards added the sources, code and structure with c+p from the English article. It takes 1-2h and leaving an imperfect text makes wikipedians have to look into it and might raise their interest, too.

Anyway it will attract people that googled it and we should have it in as many languages as possible. I'm not perfect at speaking French but it'll still be better than a non-existing article."

There was discussion here. There's a general .impact project page for anyone interested in working on Wikipedia here. For reference, the English Wikipedia article on "effective altruism" gets 2-3k views a month (here's a comparison to other pages).
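
If you want to check the current pageview figures yourself, here is a minimal sketch that queries the Wikimedia REST pageviews endpoint for monthly views of the English article; the date range and User-Agent string are placeholders of my own, not anything taken from the comment above.

    import requests

    # Monthly pageview counts for the English "Effective altruism" article,
    # from the Wikimedia REST pageviews endpoint. Dates are YYYYMMDDHH and
    # are placeholders; adjust the range to the period you care about.
    URL = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        "en.wikipedia/all-access/user/Effective_altruism/monthly/"
        "2024010100/2024123100"
    )

    resp = requests.get(URL, headers={"User-Agent": "ea-pageviews-example/0.1"})
    resp.raise_for_status()

    for item in resp.json()["items"]:
        print(item["timestamp"], item["views"])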

Vox published a great GiveWell-centric guide to giving on Friday: http://www.vox.com/2014/12/22/7434741/holiday-giving-charity-donation

Are job opportunities for non-direct impact on topic? (E.g. earning to give.)

If so, should those be top-level posts or comments here?

Top-level posts and open-thread discussions both should be at least somewhat EA-relevant. If it's a position for your ETG company, you can use your discretion.

Would anybody be interested in an x-risk reading group? I know MIRI's been running one going through Superintelligence; I'd love to read a broad swath of x-risk related material, and meet with people to discuss either in person or online (or both). IRL, I'm in Toronto, Ontario.

There is an online EA book club/reading group. It's not always on x-risk, but I'm sure they could do a periodic x-risk session. It might be worth starting out like that, as critical mass is a problem for these things, and this'd help both the x-risk reading group and the general EA book club reach it.

Are there any first-person pieces about someone successfully changing careers in order to earn to give? There have been several stories discussing the topic over the past few years, but these all seem to be descriptive third-person accounts or normative analysis.

Even if not, if you've actually made such a change, could you please publicly share your story? I'd like to hear it and I'd bet many others would too.

How broadly do you define "changing careers"? I, for example, switched from being a developer to founding a company for E2G reasons.

That would count, but I should have been broader in my statement anyway. People like the "here's what I did and why I did it" narrative, and earning to give could use more of these stories in general. I think a variety of them, showing different perspectives from people in different positions and with different abilities, would be a boon.

Btw, I was quite wrong about there being no first person accounts as, for one, Chris Hallquist has written about this extensively.

One of these first-person style stories could get into Vox or somewhere nice because a lot of people might like to read it.

I didn't change my career, but I did dramatically change my career plans, twice.

That counts. And, as I said above to Ben, I should have been more broad anyway. I just think we can use more first-person narratives about earning to give to present the idea as less of an abstraction.

Of course, I could be wrong and those who would consider earning to give at all (or would be moved to donate more because of hearing such a story) would be equally swayed by a third person analysis of why it is a good idea for some people.

In navel-gazing curiosity: Has there been a poll done on what EAs think about moral realism?

I searched the Facebook group and Googled a bit but didn't come up with anything.

You could try a Facebook poll if the search didn't turn one up!

(Personally I'm a tentative moral realist, though conscious of some strong arguments against it. I have some sympathy - though not complete agreement - with the position Niel Sinhababu outlines here.)

Has anyone else tried pushing EA specifically at religious audiences? There's this on .impact, but it's been a while since that was touched and I'd guess this could use some follow-up. Doing this could prove really beneficial in reaching favorable audiences, especially if you or someone you're close to is heavily involved in a church.

I anticipate some slow progress from some of the people listed there.

Ah, I should have guessed that from the "this is being actively pursued" label or I could have just asked there.

Naturally, if you'd like the help, I suspect there may be at least a few people here who, given their familiarity with a given religion, may have a decent idea of how to pitch the focus on effectiveness to a specific group.

I'd recommend those people either comment here or, even better, write something about their interest into that .impact project page.

There was an article about nano-satellites on slashdot this afternoon, which cites a $30k figure for an individual satellite build and launch. At that price, obviously it's a tightly constrained package; the same source cites $200k for a CubeSat, which is a bit roomier.

People are starting to think of these types of assets as "relatively" cheap components in constellations -- rather than launching one very high-value, highly capable sat, launch a cluster of smaller/cheaper sats, which can potentially evolve over time as some of them are de-orbited and replaced.
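
As a rough illustration of why the constellation approach looks attractive, here is a back-of-the-envelope sketch using only the per-unit figures quoted above ($30k per nanosat build-and-launch, $200k per CubeSat); the constellation sizes and the cost of a traditional single high-value satellite are purely illustrative assumptions, not figures from the article.

    # Back-of-the-envelope constellation costs. The per-unit prices are the
    # figures quoted above; the constellation sizes and the large-satellite
    # cost are illustrative assumptions only.
    NANOSAT_COST = 30_000          # quoted: individual satellite build and launch
    CUBESAT_COST = 200_000         # quoted: a roomier CubeSat
    LARGE_SAT_COST = 300_000_000   # hypothetical high-value, highly capable sat

    for n in (10, 50, 100):
        print(
            f"{n:>3} nanosats: ${n * NANOSAT_COST:>12,}   "
            f"{n:>3} CubeSats: ${n * CUBESAT_COST:>12,}   "
            f"(vs. one large sat: ${LARGE_SAT_COST:,})"
        )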

There are some obvious x-risk and EA applications (as well as many potentially non-obvious ones!), like tracking and searching for near-Earth objects (i.e., killer rocks from space), as well as all sorts of Earth-based imaging applications and potentially space commerce applications.

I'm guessing the sums of money involved are probably still outside what's practical for most of us in the EA/x-risk community, but I expect that this is going to be a growth sector, which means that prices may very well come down a lot over the next few years. Thoughts?
