Austen_Forrester

I didn't mean to imply that it was hopeless to increase charitable giving in China; rather the opposite: it's so bad it can only go up! Beyond that, I agree with all your points.

The Chinese government already provides foreign aid in Africa to further its interests in the region. I was thinking about how we could get it to expand that aid. The government seems almost impossible to influence directly, but perhaps EAs could encourage African governments to solicit more foreign aid from China? It could have a negative consequence, however: receiving more aid from China may make African countries more susceptible to accepting bad trade deals and the like.

I don't know how to engage with China, but I strongly feel that it holds huge potential for both altruism and GCRs, which shouldn't be ignored. I like CEA's approach of seeking out China generalist experts. There are a number of existing Western–China think tanks that could be useful to the movement, but I think a "China czar" for EA is a necessity.

I agree that financial incentives/disincentives result in failures (i.e. social problems) of all kinds. One of the biggest reasons, as I'm sure you mention at some point in your book, is corruption, e.g. the beef/dairy industry paying off environmental NGOs and governments to stay quiet about its environmental impact.

But don't you think that non-financial rewards/punishments also play a large role in impeding social progress, in particular social rewards and punishments? E.g. people don't dress warmly enough in winter because others will tease them for being uncool, and people bully others because it earns them respect.

It could be a useful framing. To some people, though, "optimize" may imply making something already good great, such as making the countries with the highest HDI even better, or helping emerging economies become high income, rather than helping the worst-off countries catch up to the happier ones. It could be viewed as helping a happy person become super happy rather than helping a sad person become happy. I know this narrow form of altruism isn't your intention; I'm just saying that "optimize" does carry this connotation. I personally prefer "maximally benefit/improve the world." It's almost the same as your expression but without the make-good-even-better connotation.

I think EAs have always thought about the impact of collective action, but it's just really hard, or even impossible, to estimate how your personal efforts will further collective action and to compare that to more predictable forms of altruism.

Of course, I totally forgot about the "global catastrophic risk" term! I really like it, and it doesn't only suggest extinction risks. Even its acronym sounds pretty cool. I also really like your "technological risk" suggestion, Rob. Referring to GCRs as the "long-term future" is a pretty obvious branding tactic by those who prioritize GCRs. It is vague, misleading, and dishonest.

For "far future"/"long term future," you're referring to existential risks, right? If so, I would think calling them existential or x-risks would be the most clear and honest term to use. Any systemic change affects the long term such as factory farm reforms, policy change, changes in societal attitudes, medical advances, environmental protection, etc, etc. I therefore don't feel it's that honest to refer to x-risks as "long term future."

By regular morals, I mean basic morals such as treating others how you would like to be treated, i.e. rules such that you would be a bad person if you failed to abide by them. While I don't consider EA supererogatory, neither do I think that not practicing EA makes someone a bad person; thus, I wouldn't put it in the category of basic morals. (Actually, that is the standard I hold others to; for myself, I would consider it a moral failure if I didn't practice EA!) I think it is important to differentiate between basic and, let's say, more "advanced" morals, because if people think you consider them immoral, they will hate you. For instance, promoting EA as a basic moral, such that one is a "bad person" if one doesn't practice it, will just result in backlash from people discovering EA. No one wants to be judged.

The point I was trying to make is that EAs should be aware of moral licensing: giving oneself an excuse to be less ethical in one department because you see yourself as extra-moral in another. If there is a tradeoff between exercising basic morals and doing some high-impact EA activity, I would go with the EA activity (assuming you are not actually creating harm, of course). For instance, I don't give blood, because the last time I did I was lightheaded for months. Besides decreasing my quality of life, it also hurt my ability to do EA. I wouldn't say giving blood is an act of basic morality, but it is still an altruistic action that few people can confidently say they are too important to consider doing. Don't you agree that if doing something good doesn't prevent you from doing something higher impact, then it would be morally preferable to do it? For instance, treating people with kindness: people shouldn't stop being kind to others just because it won't result in some high global impact.
