I'm a PhD student in Logic and Philosophy of Science at UC Irvine. I've been involved in Effective Altruism since the start of my undergrad at LSE in 2018. I'm working on evolutionary game theory with the ambition of contributing to AI safety. I'm reachable by email (neilc543@gmail.com) and by Zoom (https://calendly.com/neilsc/30min).
Thanks! And no problem.
We did this with 6-8 people. Having a small group like this probably helps. Only around half have completed the EA Intro Programme. In terms of progress, I think we learnt a lot but not enough to become experts. I think we would see diminishing returns by spending more than 90 minutes on a research question.
All 3 of our meetings went well. Maybe the problem you encountered can be avoided by breaking down the question and getting groups to focus first on these sub-questions before bringing everyone together to look at the big picture. Providing autonomy to the groups works well when there's a more experienced researcher in each group who can help the others.
I think presenting the activity as a debate could be done well, but I think the question should still first be broken down into sub-questions and then there should be quiet group research. There could then be a short debate on each sub-question, e.g. "How viable are cultured protein sources?" and "How viable are fungi-based protein sources?"
- Anyone can call themselves a part of the EA movement.
Don't you think there are some minimal values that one must hold to be an Effective Altruist? E.g. "Four Ideas You Already Agree With (That Mean You're Probably on Board with Effective Altruism)" from Giving What We Can.
It seems to me that there are some core principles of Effective Altruism such that, if someone doesn't hold them, it wouldn't make sense to consider them an Effective Altruist.
To be clear, I don't disagree that anyone can call themselves part of the EA movement. I'm more wondering whether I would/should call someone an Effective Altruist if, for example, they don't think it's important to help others.
This seems reasonable to me, though we should factor in the risks that come with being seen to influence politics. I think it makes sense for individual EAs to get involved, as opposed to EA orgs getting involved.