MathiasKB

Director @ Center for Effective Aid Policy
4987 karma · Joined Jul 2018 · aidpolicy.org

Comments (239)

I don't find this argument all too compelling. Who pays for the government's ability to protect the wealthy? In the absence of a government, why wouldn't the wealthy pay someone else to protect their wealth?

That said, I completely agree with the last sentence, and I think taxation is very reasonable. Deciding that taxation is theft and therefore always wrong is, after all, the worst argument in the world.

I don't mean to argue for libertarianism, but I do want advocates of socialism to be mindful of how they plan to enforce it.

Good question, worth exploring!

One point not brought up, which is somewhat important to me, is how socialist policies are to be enforced.

I personally dislike the implicit threat of violence that enforcing those policies requires. I'll be the first to admit it's difficult to create a functional society without the use or threat of force, but I would still like to see it treated as a necessary evil not to be used lightly.

There are many laws that would be less popular if "or we'll beat you up" were added to the end, but in some sense every law has this written implicitly. We're just not very mindful of it!

I think it's reasonable for vegans to ask someone whether they would still eat meat if they had to kill the animal themselves. In a similar manner, would you be fine with forcing someone into a car and locking them into a cell if they refused to hand over everything they had earned that month?

This is far from a knockdown argument and somewhat of a strawman, but it matters to me. I place value on people being free to live their lives how they see fit. Anyone should be welcome to form a socialist commune, but it should be of one's own volition.

I think there are a lot of local maxima that are very juicy. I would encourage people to look at the opportunities around them that others would miss, and I try my best to foster a culture that helps its members discover them.

A great example of someone doing this is Abdurrahman, who took the initiative to create EA in Arabic, which I expect will be really impactful. I don't expect there were many EA jobs available to him in Saudi Arabia, but he looked around, found a gap (no resources on EA in one of the world's biggest languages), and executed on the opportunity.

I am currently looking into an animal welfare intervention which South American EAs would be much better suited to do than anyone else. Some time ago I looked into policy interventions to improve the water sanitation efforts of the Jal Jeevan Mission in India. An Indian EA from the right state would be much better suited to carry out the sanitation advocacy for JJM than I am.

I've yet to find a region of the world without opportunities, but most of them won't be listed in a career guide!

I'm grappling with this exact issue. I think AI is the most important technology humanity will ever invent, but I'm skeptical of the EV of much of the work on the technology. Still, it seems like it should be the only reasonable thing to spend all my time thinking about, but even then I'm not sure I'd arrive at anything useful.

And the opportunity cost is saving hundreds of lives. I don't think there is any other question that has cost me as much sleep as this one.

Forecasting Newsletter by Nuño Sempere

Excerpt from the most recent update from the ALERT team:

> Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious.
>
> Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantially over the next decade, with the 5-year chance at 13% (range 10%-15%) and the 10-year chance increasing to 25% (range 20%-30%).

Their estimated 10-year risk is a lot higher than I would have anticipated.
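To make the comparison concrete, here is a quick back-of-the-envelope sketch (my own arithmetic under a constant-hazard assumption, not anything from the ALERT team's model) converting their cumulative estimates into the implied per-year risk:

```python
# Rough illustration (assumes a constant annual hazard, which is my simplification):
# convert the cumulative PHEIC probabilities quoted above into implied per-year risk.
cumulative = {1: 0.009, 5: 0.13, 10: 0.25}  # horizon in years -> central estimate

for years, prob in cumulative.items():
    # Solve 1 - (1 - annual)**years = prob for a constant annual probability.
    annual = 1 - (1 - prob) ** (1 / years)
    print(f"{years:>2}-year horizon: {prob:.1%} cumulative -> ~{annual:.1%} per year")

# Prints roughly 0.9%, 2.7%, and 2.8% per year, i.e. the team evidently expects
# the per-year risk to rise well above today's level later in the decade.
```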

I suspect the primary reasons you want to break up Deepmind from Google are to:

  1. Increase their autonomy, reducing pressure from Google to race
  2. Reduce Deepmind's access to capital and compute, reducing its competitiveness

Perhaps that goes without saying, but I think it's worth explicitly mentioning. In a world without AI risk, I don't believe you would be citing various consumer harms to argue for a breakup.

The traditional argument for breaking up companies and preventing mergers is to reduce the company's market power, increasing consumer surplus. In this case, the implicit reason for breaking up Deepmind is to decrease its competitiveness, thus reducing consumer surplus.

I think it's perfectly fine to argue for this, I just really want us to be explicit about it.

I'm awestruck, that is an incredible track record. Thanks for taking the time to write this out.

These are concepts and ideas I regularly use throughout my week and which have significantly shaped my thinking. A deep thanks to everyone who has contributed to FHI; your work certainly had an influence on me.

I think I'm sympathetic to Oxford's decision.

By the end, the line between genuine scientific inquiry and activistic 'research' got quite blurry at FHI. I don't think papers such as 'Proposal for a New UK National Institute for Biological Security' belong in an academic institution, even if I agree with the conclusion.
