Mohammad Ismam Huda

Comments

I largely agree with your assessment that Quincy is controversial and dogmatic about restraint/non-intervention.

That being said, they are a valuable source of disagreement in the wider foreign policy community, and they are doing something very neglected (researching and advocating for restraint/non-intervention).

I know Quincy staff disagree with each other, coming from libertarian, leftist, and realist perspectives. So Cirincione's departure is troubling, because that diversity of perspective is needed. That said, I do suspect Parsi is describing things accurately when he says Cirincione left because he wanted the Institute to adopt his position on the Russian-initiated war on Ukraine.

Quincy is exploring a controversial analysis of the current Russia-Ukraine conflict: asking whether Russia's invasion could have been avoided in the first place (e.g. by bringing Russia into NATO back when it wanted to join), and advocating that Ukraine and Russia compromise to reduce casualties (to be fair, the White House has reportedly also urged Ukraine to make compromises at times). While controversial, I do think this is worthwhile. Though I myself might disagree (and I believe they all disagree amongst themselves), I want to see this research/advocacy explored and debated. I was nervous when the invasion started that Quincy's work could dip into Kremlin apologetics, but they seem to have steered away from that and hold nuanced perspectives.

Their work on the Iran Nuclear Deal and the conflict in Yemen is far less controversial, and promising.

I find value in them acting as a counterbalance to the more hawkish think tanks, which are much better resourced.

On the 80K job board, there are a few institutions (well respected and worthwhile, no doubt) like CSIS and RAND which are more interventionist and/or funded by arms manufacturers (even RAND is indirectly funded by the grants it receives from AEI), so I do worry that there is a systemic bias toward interventionist views.

I hope people don't write off Quincy's work, or other anti-interventionist/restraint-focused work, entirely, but I certainly agree it should be taken with a grain of salt. I take it that way myself.

Thank you, Stephen, for your long engagement with this topic; I do think it is a very real risk that Effective Altruists should pay more attention to.

In addition to the actions you proposed, I wanted to suggest there might be promising actions in reducing the conflicts of interest that incentivise conflict and escalate tensions. The heavy political lobbying and sponsorship of think tanks and universities by weapons companies creates perverse incentives.

I have been very impressed by the work of the Quincy Institute to bring attention to this issue and to explore diplomatic alternatives to conflict. I would love to see 80000 Hours promote them on their job board or interview them.

I've written to my local MPs about banning contributions from weapons makers (Lockheed Martin, Boeing, etc.) to the Australian Government's military think tank ASPI. Here in Australia, the recent AUKUS security pact has led to an enormous increase in planned military spending and sparked some discussion on the forum. I am trying to raise this as an issue/cause area to explore amongst Aussie EAs.

Very eloquent. I do think the perception is justified, e.g. SBF's attempt to elect candidates to the US Congress/Senate.

If anything... I probably take people less seriously if they do bet (not saying that's good or bad, but just being honest), especially if there's a bookmaker/platform taking a cut.

I think it's fair for Davis to characterise Schmidt as a longtermist.

He's recently been vocal about AI X-Risk. He funded Carrick Flynn's campaign, which was openly longtermist, via the Future Forward PAC, alongside Moskovitz and SBF. His philanthropic organisation Schmidt Futures has a future-focused outlook and funds various EA orgs.

And there are longtermists who are pro-AI, like Sam Altman, who want to use AI to capture the lightcone of future value.

https://www.cnbc.com/amp/2023/05/24/ai-poses-existential-risk-former-google-ceo-eric-schmidt-says.html

I can't say how effective they are in this space, but UNHCR is active and reputable.

Terrible situation.

I have been following AUKUS developments in Australia and have tried to get local EAs interested, to little avail.

This should be a hugely important issue to the EA community.

I personally donate to groups like APHEDA https://www.apheda.org.au/ on a hunch that they are effective.

My suspicion is that this community neglects promising opportunities in this space, and I am exploring it myself.

Respectfully disagree with your example of a website.

In a commercial setting, the client would want to examine and approve the solution (website) in some sort of test environment first.

Even if the company provided an end-to-end service, the implementation (buying the domain, etc.) would be done by a human or by non-AI software.

However, I do think it's possible the AI might choose to inject malicious code that is hard to review.

And I do like your example about terrorism with AI. However, police and governments can also counter terrorists with AI, just as all tools made by humans are used by both good and bad actors. Generally, the government should have access to the more powerful AI and cybersecurity tools, so I expect government AI would come up with defences at least as good as, and probably better than, the attacks devised by terrorists.

One of the reasons I am skeptical is that I struggle to see the commercial incentive to develop AI in a direction that poses existential risk.

E.g. in the paperclip scenario: commercially, a business would use an AI to develop and present a solution to a human, much like Google Maps suggests an optimal route. But the AI would never be given free rein to both design the solution and action it with no human oversight. There's no commercial incentive for a business to operate like that.

Especially for "dumb" AI, as you put it: in commercial applications, AI is there to suggest things to humans, but rarely to implement the solution without oversight by a human (I can't think of a good counterexample; maybe an automated call centre?).

In a normal workplace, management signs off on the solutions suggested by juniors, and that seems to be how AI is used in business: the AI presents a solution, a human approves it, and a human implements it.
