I am a sophomore at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship.
I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity.
If anyone has opportunities to do effective research in philosophy (or to apply philosophy to real-world problems or related fields), or any entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!
I can help with philosophy stuff (maybe?) and organizing school clubs (maybe?)
Very interesting points. Here are a few other things to think about:
1. I think there are very few people whose primary motivation is helping others, so we shouldn't empirically expect them to account for most of the good being done: they represent a very small portion of the population. This is especially true if you think (as I do) that the vast majority of people who do good are 1) (consciously or unconsciously) signaling for social status or 2) not doing good very effectively (the people who do are a much smaller subgroup, because doing non-effective good is easy). Still, it would be very surprising if those who try to do good effectively weren't, on average, doing much better as individuals than those who don't, so that claim seems unlikely to me (though feel free to throw some stats at me that would change my mind!).
2. I'm very skeptical that "the defensibility of morality as the pursuit of greatness depends on how sophisticated our cultural conceptions of greatness are." Could you say more about why you think this?
3. I'm skeptical that 1) seeking equanimity is truly the best thing and 2) we have good, tractable methods of achieving it. Perhaps people would be better off being more Buddhist on the margin, but, to me, it seems like (thoughtfully!) pursuing the heavy positive tail-end results, while being really careful and thoughtful about the negatives, leads to a much better-off society.
Let me know what you think!
Yep, I think this is true. The point is that, given that AI stays aligned (as stated there), the best thing for a country to do would be to accelerate capabilities. You're right, however, that it's not an argument against AI being an existential threat (I'll make a note to make this clearer); it's more a point for acceleration.
How do you generally respond to evolutionary debunking arguments and the epistemological problem for moral realism (how we acquire facts about the moral truth), especially considering that, unlike in mathematics, there are no empirical feedback loops to work off of (i.e., you can't go out and check whether the facts fit the external world)? It seems to me that we wouldn't trust our mathematical intuitions if 1) we didn't have those empirical feedback loops or 2) the world sometimes told us that math didn't work.
I think it would both be very effective and make for a very interesting video to donate to effective charities working on potential existential risks: climate, nuclear, bioterrorism, AI, etc. Perhaps you could briefly mention points like "future lives are highly underrepresented," "the expected value can be enormous if the chances of some risk are high enough," etc. Would you consider a video on this? What's holding the channel back from it?
I find myself in a very similar situation. I grew up an Orthodox Jew, and although I'm no longer Orthodox, I still feel part of the broader Jewish community, which has implications for my giving.
Whenever I tell my Orthodox friends about EA, I always emphasize that it isn't a zero-sum game: they can honor both their own-community and their egalitarian impulses by doing both sorts of charity, keeping in mind the effectiveness of EA charities.
I just wish more people would be more effective, even if only within their own community. When I talk to people, they often come to see that they should be more effective with their egalitarian impulses, but the message doesn't seem to come through as much when it concerns their own communities.
Perhaps I’m misunderstanding something, so please correct me if I’m wrong:
If one accepts all these assumptions, why would the best course of action be to offset AMF donations rather than to avoid donating to AMF in the first place?
If ITNs cause vastly more harm to mosquitoes than they help humans, wouldn’t this imply that AMF is not just a weak investment, but actually a net-negative intervention? It seems like these numbers, if taken seriously, suggest AMF should be deprioritized rather than merely balanced with shrimp welfare donations.
I assume this is mostly about hedging against uncertainty across different moral theories, but offsetting, rather than counterfactually giving more to AMF, implies a tradeoff you're already okay with, and accepting that tradeoff seems to imply that you should never make the initial donation at all.
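To make that concrete, here's a toy expected-value sketch (the symbols and framing are mine, purely illustrative, not from the original post). Let $b$ be the expected benefit to humans per dollar given to AMF, $h$ the expected harm to mosquitoes per dollar (converted into whatever common welfare units the offsetting assumes), and $s$ the expected benefit per dollar to the shrimp-welfare offset. Splitting a budget $d = d_1 + d_2$ between the two gives

$$E[\text{total}] = d_1(b - h) + d_2 s,$$

which, under the post's assumption that $h > b$ (and with $s > 0$), is maximized at $d_1 = 0$: every "offset" dollar would do strictly more good sent directly to the offsetting charity.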
I'm confused about what sort of epistemic/moral uncertainty theory someone would need for offsetting the way you propose to make sense. Tbh I've already confused myself with this comment, but I hope it's helpful(?)