About point 4: While commenting, I presumed the controversial bit was "let's build bunkers only for EAs." Reading other comments, however, it seems I may have misunderstood something, because the focus is more on the "let's build bunkers" part and not as much on the "only for EAs" part.

The idea of making bunkers is somewhat out there but not uncommon; governments have done it at a national scale at least once, and an active group of preppers does it now. In the event of a catastrophe, I would appreciate having access to a bunker, and I am sure so would others.

Making it only for EAs implies (the utterly wrong idea) that in the event of a catastrophe, EAs are somehow more valuable and worthy of saving than non-EAs. This goes against some core ideas that we aim to cultivate.

...whether this is a healthy line of thinking...

Absolutely not healthy!

...and something we're glad the public knows about us now. 

Forget the public! This is something I didn't know about "us" until now (and plausibly, 99% of the EA community didn't either).

The memo is bad because would-have-been top funders were floating the idea of preferentially helping the in-group (and "helping" is an understatement here). At the same time, I expect plenty of guilt-by-association critiques to spring from this that will place blame on the entire community :(


I skimmed through the article; thanks for sharing!

Some quick thoughts:

community-members are fully aware that EA is not actually an open-ended question but a set of conclusions and specific cause areas

  • The cited evidence here is one user claiming this is the case; I think they are wrong. For example, if there were a dental hygiene intervention that could help, let's say, a hundred million individuals and government / other philanthropic aid were not addressing this, I would expect a CE-incubated charity to jump on it immediately.
    • There are other places where the author makes what I would consider sweeping generalizations or erroneous inferences. For instance:
      • "...given the high level of control leading organizations like the Centre for Effective Altruism (CEA) exercise over how EA is presented to outsiders" — The evidence cited here is mostly all the guides that CEA has made, but I don't see how this translates to "high level of control." EAs and EA organizations don't have to adhere to what CEA suggests. 
      • "The general consensus seems to be that re-emphasizing a norm of donating to global poverty and animal welfare charities provides reputational benefits..." — upvotes to a comment ≠ general consensus. 
  • Table 1, especially the Cause neutrality section, seems to draw a dividing line where none exists.
  • The author acknowledges in the Methodology section that they didn't participate in EA events or groups and mainly used internet forums to guide their qualitative study. I think this is the critical drawback of this study. Some of the most exciting things happen in EA groups and conferences, and I think the conclusion presented would be vastly different if the qualitative study included this data point.
  • I don't know what convinces the article's author to imply that there is some highly coordinated approach to funnel people into the "real parts of EA." If this were true (and this is my tongue-in-cheek remark), I would suggest these core people not spend >50% of the money on global health, as there would be cheaper ways of maintaining this supposed illusion.

    Overall, I like the background research done by the author, but I think the author's takeaways are inaccurate and seem too forced. At least to me, the conclusion is reminiscent of the discourse around conspiracies such as the deep state or the "plandemic," where there is always a secret group, a "they," advancing their agenda while puppeteering tens of thousands of others. 

    Much more straightforward explanations exist, which aren't entertained in this study.

    EA is more centralized than most other movements, and it would be ideal to have several big donors with different priorities and worldviews. However, EA is also more functionally diverse and consists of some ten thousand folks (and growing), each of whom is a stakeholder in this endeavor and will collectively define the movement's future.

Thanks for writing this piece! This motivates me to rescue a draft about "how to eat more plants and do it successfully" that has been in the works for too long. Hopefully, I will complete it soon-ish; fingers crossed!

But briefly — 

His argument, as I understand it, boils down to the idea that he needs to eat animals in order to be fit, strong, and healthy.

I had similar concerns before going vegan. It didn't take me that long to realize that killing, consuming, and using animals the way we do is morally abhorrent. The environmental and public health issues from intensive farming were easier to buy into. But, I was unsure if I could sustain a healthy life and build muscles without eating non-humans. 

I was getting into strength training back then, and I really wanted to build muscles and not have a scrawny figure anymore. Nearly all the jacked influencers on social media/YT promoted a meat-heavy diet; chicken breast and whey protein seemed like the necessary ingredients for getting lean and building muscles; vegan food was often labeled as rabbit food and thoroughly dismissed. Another subset of folks attracted my attention: people who stopped being vegan. The severity of the health problems they claimed they experienced while eating plants was alarming. 

All this made me pretty hesitant to adopt a plant-only diet. I won't spend much space in this comment elaborating on how I escaped the jacked influencer memeplex or what made me skeptical of the alleged severe harms of a plant-based diet, but I am glad I did. In one line — I realized that being buff had little to do with eating or not eating a plant-based diet. 

I have been vegan for three years now, and I have been able to:

  1. Build muscles and strength and gain weight 
  2. Retain muscles and most of my strength and lose weight
  3. Retain most of my muscles while not exercising at all for months

I am not as jacked as you, but I am in good shape and health and pretty happy about it! At my best, I made tracking calories, nutrient intake, and strength training progress a habit. It seemed like a simple math problem, and the results were pretty deterministic. I think I would have had similar success with a plant-predominant or meat-focused diet.
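
To make the "simple math" concrete, here is a rough sketch in Python. All of the numbers and the protein heuristic are illustrative assumptions on my part, not recommendations and not my actual targets:

```python
# Rough sketch of the "simple math" of a slow bulk; every number here is an
# illustrative assumption, not advice and not my actual log.

def daily_targets(maintenance_kcal: float, surplus_kcal: float,
                  bodyweight_kg: float, protein_g_per_kg: float = 1.8):
    """Return a rough daily calorie and protein target."""
    target_kcal = maintenance_kcal + surplus_kcal
    target_protein_g = bodyweight_kg * protein_g_per_kg
    return target_kcal, target_protein_g

# Example: ~2400 kcal estimated maintenance, ~250 kcal surplus, 70 kg bodyweight.
kcal, protein_g = daily_targets(2400, 250, 70)
print(f"~{kcal:.0f} kcal and ~{protein_g:.0f} g protein per day,")
print("then adjust based on how bodyweight and lifts trend over a few weeks.")
```

None of this depends on where the calories and protein come from, which is why I think the diet itself mattered much less than the tracking.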

Overall, I would say my experience has been "normal," and I would recommend it to the vast majority of people who want to get bigger or be in better shape.

(N=3 now!)

Hello from another group organizer in the Southwest! We are in Tucson, AZ, just a six-hour drive away. Hopefully, someday in the not-so-distant future, organizing a southwestern meetup / retreat / something will be feasible, and it would be super cool!

Nice, I didn't know! Their research goals seem quite broad, which is good. Within the context of AI existential risk, this project looks interesting.

Answer by akash

I think the better question might be, "Which professors or academic research groups in AI Safety are the best to work with?"

Two meta-points I feel might be important —

  • For PhDs, the term "best university" doesn't mean much (there are some cases in which infrastructure makes a difference, but R1 schools, private or public, generally seem to have good research infrastructure). Your output as a graduate student heavily depends on which research group/PI you work with.
  • Specifically for AI safety, the sample size of academics is really low. So, I don't think we can rank them from best-to-eh. Doing so becomes more challenging because their research focus might differ, so a one-to-one comparison would be unsound.

With that out of the way, three research groups in academia come to mind:

Others:

  • Center for Human Inspired AI (CHIA) is a new research center at Cambridge; I don't know if their research would focus on subdomains of Safety; someone could look into this more.
  • I remember meeting two lovely folks from Oregon State University working on Safety at EAGx Berkeley. I cannot find their research group, and I forget what exactly they were working on; again, someone who knows more about this could perhaps comment.
  • An interesting route for a Safety-focused Ph.D. could be having a really good professor at a university who agrees to have an outside researcher as a co-advisor. I am guessing that more and more academics would want to start working on the Safety problem, so such collaborations would be pretty welcome, especially if they are also new to the domain.
    • One thing to watch out for: which research groups get funded by this NSF proposal. There will soon be new research groups that Ph.D. students interested in the Safety problem could gravitate towards!

The hallmark experiences of undiagnosed ADHD seem to be saying “I just need to try harder” over and over for years, or kicking yourself for intending to start work and then not getting much done...

Extremely relatable.

Thank you very much for writing this. I am in the process of getting a diagnosis, and this helped me overcome some of the totally made-up mental barriers regarding ADHD medication.

I downvoted and want to explain my reasoning briefly: the conclusions presented are too strong, and the justifications don't necessarily support them. 

We simply don't have enough experience or data points to say what the "central problem" in a utilitarian community will be. The one study cited seems suggestive at best. People on the spectrum are, well, on a spectrum, and so is their behavior; how they react will not be as monolithic as suggested.

All that being said, I softly agree with the conclusion (because I think this would be true for any community).

All of this suggests that, as you recommend, in communities with lots of consequentialists, there needs to be very large emphasis on virtues and common sense norms.

It's maybe worth clarifying that I'm most concerned about people who have a combination of high confidence in utilitarianism and a lack of qualms about putting it into practice.

Thank you, that makes more sense + I largely agree.

However, I also wonder if all this could be better gauged by watching out for key psychological traits/features instead of probing someone's ethical views. For instance, a person low in openness showing high-risk behavior who happens to be a deontologist could cause as much trouble as a naive utilitarian optimizer. In either case, it would be the high-risk behavior that potentially causes problems rather than how they make ethical decisions.
