Working in healthcare technology and doing some independent AI alignment research on the side.
MSc in applied mathematics/theoretical ML.
Interested in increasing diversity, transparency, and democracy in the EA movement. I would like to know how algorithm developers can help "neartermist" causes.
On the one hand, I'm happy to hear that the groups team isn't as elite-focused as I had thought; on the other hand, I'm still troubled by the margin-based reasoning.
top universities are the places with the highest concentrations of people who ultimately have a very large influence on the world
I think this piece of reasoning represents a major problem for how EA is perceived. Even if it is factually true, there are two problems with relying on it:
I feel like I am one of the most engaged EAs in my local community, but the beliefs Torres ascribes to EA are so far removed from my own
This might have to do with "how local" your local community is. It seems to me that the weirder sides of EA (which I usually consider bad, but others here might not) are common in the EA hubs (Bay Area, Oxbridge, London, and the cluster of large groups in Europe) but not as common in other places (like here in Israel).
You're describing a religious belief that, for some unknown reason, many EAs seem to share: a belief in a mystical state of being that has never been scientifically documented. And you ask why there's no activity around this in a community supposedly organized around following evidence to find good ways to improve the world. That's your answer: a shared belief is not evidence, just as a shared belief in God, even by billions of people, is not evidence.
It's good that nobody's talking about this. It would be no more sane than e.g. trying to make everyone religious because then God would eliminate suffering.
I definitely agree. But I think we're far from it being practically useful for dedicated EAs to do this themselves.
Do most charitable organizations have in-house people to examine donors? I'm not saying we shouldn't check, but rather that there shouldn't be people in EA organizations whose job is to do this; organizations should just hire auditors or whomever to do it for them.
I'd argue that "checking whether businesses are run responsibly" is out of scope for EA in general.
Treating each new person as a separate investment and trying to optimize for their marginal utility to EA, instead of looking at the aggregate effect of all the community-building efforts on the movement.
Specifically, in your comment, you justify diversifying investment in groups by saying that "high quality group members" are the goal but that top universities have bottlenecks which can't easily be solved by pouring more money into them, rather than arguing that it's better to have a new group in Chile than at Harvard, even if, hypothetically, the people there were less qualified for existing EA jobs.